Article

Coarse-Fine Tracker: A Robust MOT Framework for Satellite Videos via Tracking Any Point

by Hanru Shi 1,2, Xiaoxuan Liu 1,*, Xiyu Qi 1,2, Enze Zhu 1,2, Jie Jia 1 and Lei Wang 1,2
1 Key Laboratory of Target Cognition and Application Technology (TCAT), Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
2 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(13), 2167; https://doi.org/10.3390/rs17132167
Submission received: 21 April 2025 / Revised: 12 June 2025 / Accepted: 18 June 2025 / Published: 24 June 2025

Abstract

Traditional Multiple Object Tracking (MOT) methods in satellite videos mostly follow the Detection-Based Tracking (DBT) framework. However, the DBT framework assumes that all objects are correctly recognized and localized by the detector. In practice, the low resolution of satellite videos, small objects, and complex backgrounds inevitably lead to a decline in detector performance. To alleviate the impact of detector degradation on tracking, we propose Coarse-Fine Tracker, a framework that, for the first time, integrates the MOT framework with the Tracking Any Point (TAP) method CoTracker, leveraging TAP’s persistent point correspondence modeling to compensate for detector failures. In our Coarse-Fine Tracker, we divide the satellite video into sub-videos. For each sub-video, we first use ByteTrack to track the outputs of the detector, referred to as coarse tracking, which involves the Kalman filter and box-level motion features. Given the small size of objects in satellite videos, we treat each object as a point to be tracked. We then use CoTracker to track the center point of each object, referred to as fine tracking, by calculating the appearance feature similarity between each point and its neighboring points. Finally, the Consensus Fusion Strategy eliminates mismatched detections in the coarse tracking results by checking their geometric consistency against the fine tracking results and recovers missed objects via linear interpolation or linear fitting. This method is validated on the VISO and SAT-MTB datasets. Experimental results on VISO show that the tracker achieves a multi-object tracking accuracy (MOTA) of 66.9, a multi-object tracking precision (MOTP) of 64.1, and an IDF1 score of 77.8, surpassing the detector-only baseline by 11.1% in MOTA while reducing ID switches by 139. Comparative experiments with ByteTrack demonstrate the robustness of our tracking method when the performance of the detector deteriorates.

1. Introduction

With the rapid advancement of satellite remote-sensing technology, object tracking in satellite videos is playing an increasingly important role in fields such as urban management, ocean monitoring, and disaster response. This progress is significantly driven by the development of deep learning and the emergence of video satellite tracking datasets such as VISO [1,2] and AIR-MOT [3].
Based on the number of tracked objects, object tracking tasks are divided into Single Object Tracking (SOT) and Multiple Object Tracking (MOT). SOT is given the location of a specific object in one frame and requires the tracker to continuously locate that object in subsequent frames. Unlike SOT, MOT inherently requires simultaneous detection of all targets and continuous maintenance of their identity associations across video frames, a dual-task paradigm that amplifies its computational complexity. In satellite videos, the most commonly used MOT methods rely on Detection-Based Tracking (DBT) [4,5,6,7,8,9,10,11,12], where detection and tracking are handled independently by a detector and a tracker. The detector locates objects in each individual frame, and the tracker then associates these positions across frames to generate trajectories. This approach leverages the strengths of specialized detectors and trackers but may suffer from the compounded errors of the two separate stages. Other popular methods based on Joint Detection and Tracking (JDT) [13,14,15] use one model for end-to-end tracking, simultaneously performing detection and tracking tasks. While JDT methods benefit from unified optimization, they require highly sophisticated models capable of handling both tasks simultaneously.
While MOT methods achieve robust performance in natural scenes through reliable detection, their effectiveness in satellite videos is substantially constrained by the dependence on detection accuracy. The inherent limitations of satellite videos, namely low-resolution tiny objects and complex backgrounds, compromise detection reliability, thereby propagating errors through tracking modules.
  • Low-resolution tiny objects: As shown in Figure 1, satellite videos are captured from significant heights, resulting in lower-resolution imagery where objects appear much smaller, often losing detailed information. The blurring of appearance features makes it more difficult for detectors to recognize and distinguish tiny objects, potentially leading to misidentification or missed detection.
  • Complex backgrounds: Satellite video scenes such as urban roads and natural terrain pose significant challenges for object detection. As shown in Figure 1, objects can become indistinct due to their surroundings, such as a dark object blending with the road surface or occlusion caused by trees along the roadside. These environmental factors make it difficult to accurately detect and track objects, especially when they are small and of low resolution.
To mitigate the decline in tracking accuracy caused by suboptimal detector outputs, some methods directly improve the detector. For example, GMFTracker [16] uses a tiny-object task-correction module to apply feature correction, compensating for the offset between the classification and localization tasks and thereby improving detection accuracy for tiny objects. Using the latest object detection algorithms, such as the YOLO series [17,18,19,20,21], as detectors is also common in DBT. Meanwhile, some methods enhance detection results with spatio-temporal information. SMTNet [22] regresses a virtual position for missed or occluded objects from their historical trajectories. CFTracker [23] uses a cross-frame feature update module to enhance object recognition and reduce the response to background noise using rich temporal semantic information. MTT-STPTR [24] utilizes a spatial-temporal relationship sparse attention module to enhance small target features and a joint feature matching module to reduce association errors. Some methods have achieved notable results by using SOT trackers to partially replace the detector. In BMTC [25], the SOT tracker locates objects in the satellite video, while the detector is used only for detecting and distinguishing new objects. Consequently, the framework is less reliant on detector efficacy, but it is restricted to offline applications and requires specialized strategies for multi-object scenarios. LocaLock [26] further advances this idea by embedding SOT-style local matching into an online MOT network: its Local Cost Volume supplies appearance-based priors to an anchor-free detector, while a Motion-Flow module aggregates short-term dynamics, together yielding detector-agnostic yet real-time tracking without the offline fusion required in BMTC.
The emergence of Tracking Any Point (TAP) techniques [27,28,29,30,31] represents a significant breakthrough in the field of multi-object tracking (MOT) due to their unprecedented capability for spatiotemporal tracking of arbitrary points in dynamic environments. TAP techniques take an RGB video and a pixel coordinate on the first frame as input, producing per-timestep coordinates for tracking a target across time, along with visibility or occlusion estimates for each timestep. This unique approach is especially valuable in contexts where traditional detection-based tracking methods face challenges, such as in satellite videos, where detection performance is often degraded due to various environmental factors, such as low resolution, occlusions, and dynamic background changes. TAP effectively mitigates these issues by relying on first-frame detections to initialize object tracking and then using spatiotemporal modeling to track the objects without the need for continuous detections, thus circumventing detection degradation in satellite videos. This key feature ensures that tracking precision is maintained even in the absence of reliable detections during certain timeframes, a common limitation in satellite video analysis.
As demonstrated by recent works, such as CoTracker [31], TAP excels in its ability to simultaneously track points initialized across asynchronous frames, making it ideally suited for handling dynamically occurring objects in complex video scenes. Inspired by this, we have seamlessly integrated the powerful TAP method (CoTracker) into the established DBT method (ByteTrack [9]), resulting in a new and robust multi-object tracking framework termed the Coarse-Fine Tracker. In our framework, satellite videos are partitioned into multiple sub-videos to facilitate the timely handling of newly appearing objects, while ByteTrack is used for coarse tracking of the outputs from a moderately performing detector. ByteTrack employs a Kalman filter and the Hungarian algorithm, yielding motion-based coarse tracking results and priors for the subsequent fine-tracking process. Given the challenges posed by small objects in satellite videos, we treat each object as a point to be tracked. CoTracker performs fine tracking based on the priors provided by the coarse tracking step, calculating the appearance-based similarity between each point and its surrounding region to produce fine-grained tracking results. Finally, our Consensus Fusion Strategy synergistically combines the coarse and fine-tracking results by using geometric consistency checks to eliminate erroneous detections in the coarse-tracking phase while recovering missed objects through linear interpolation or fitting techniques. This motion-appearance consensus fusion greatly enhances the robustness of the tracking process, ensuring precise spatio-temporal consistency even in challenging satellite video scenarios.
Our method stands out by addressing the inherent shortcomings of detection-based tracking in satellite videos, particularly the degradation of detector performance due to environmental complexities. By incorporating the TAP method, we effectively decouple tracking accuracy from the reliance on continuous detections, offering a more reliable and accurate tracking solution. The core contributions of our approach are as follows:
  • We introduce Coarse-Fine Tracker, an innovative online MOT framework designed for scenarios with tiny objects. This framework is the first to integrate the TAP method into a traditional DBT framework, combining motion-based coarse tracking and appearance-based fine tracking to achieve robust tracking performance beyond the capabilities of the detector alone.
  • We propose a novel Consensus Fusion Strategy that uses both coarse and fine-tracking results: geometric consistency checks eliminate erroneous coarse detections, while missed objects are recovered via linear interpolation or fitting. This fusion of motion and appearance information significantly enhances the robustness and spatio-temporal accuracy of the tracker, making it particularly suitable for challenging satellite video environments.

2. Materials and Methods

2.1. Notation

Generally, the DBT paradigm first processes each frame with a detector to obtain the detection results $\mathcal{D} = \{D_t\}_{t=1}^{T}$ from a video $V = \{I_t\}_{t=1}^{T}$, which is a sequence of $T$ RGB frames $I_t \in \mathbb{R}^{3 \times H \times W}$. The detections in frame $t$ are denoted as $D_t = \{d_i^t\}_{i=1}^{N}$, containing $N$ detections. A detection $d_i^t$ is represented as $(x, y, w, h)$, where $(x, y)$ is the center of the bounding box and $w$ and $h$ indicate its width and height, respectively. The set of $M$ tracklets is denoted by $\mathcal{T} = \{T_j\}_{j=1}^{M}$. Each $T_j$ represents a tracklet with identity $j$ and is defined as $T_j = \{l_j^{t_j}, l_j^{t_j+1}, \ldots, l_j^{t}\}$, where $l_j^t$ is the bounding box represented as $(x, y, w, h)$ if the object is present, or None otherwise, and $t_j$ denotes the initialization moment.
As shown in Figure 2, our Coarse-Fine Tracker follows the DBT paradigm, with coarse tracking performed by ByteTrack and fine tracking performed by CoTracker. A satellite video contains multiple objects, each appearing at a random time. To promptly exploit CoTracker for simultaneous tracking after these objects are detected, we divide the video into $K$ sub-videos with an interval of $S$ frames, denoted as $\{V_k\}_{k=1}^{K}$, where $V_k = \{I_t\}_{t=t_k}^{t_k+S-1}$ and $t_k$ is the starting time of the sub-video. Correspondingly, we obtain the sub-detections $\{D_k\}_{k=1}^{K}$, where $D_k = \{D_t\}_{t=t_k}^{t_k+S-1}$. Our Coarse-Fine Tracker mainly executes two steps for each sub-video:
  • Coarse-to-Fine
    This step aims to provide the necessary information to fine tracking through coarse tracking. For a sub-video $V_k$ and the corresponding sub-detections $D_k$, we first use ByteTrack for coarse tracking and obtain the predicted state vectors $\mathbf{x}$ and the observed state vectors $\mathbf{z}$ for each object in every frame. The observed state vectors $\mathbf{z}$ are used to generate the $M_k$ coarse sub-tracklets $\mathcal{T}_k^o = \{T_j^o\}_{j=1}^{M_k}$, which provide initial matching trajectories and the time and location of each object's initial appearance. The predicted state vectors $\mathbf{x}$ and the observed state vectors $\mathbf{z}$ are also used to generate the prior $\mathcal{T}_k^p = \{T_j^p\}_{j=1}^{M_k}$ to reduce errors in fine tracking.
  • Fine-to-Coarse
    This step aims to improve tracking accuracy by supplementing the coarse sub-tracklets with fine sub-tracklets. CoTracker, a transformer-based model, can determine the subsequent positions of an object through feature-similarity calculations, including positions that the detector missed. CoTracker therefore takes as input the sub-video $V_k$ and the prior $\mathcal{T}_k^p$, and outputs the fine estimate of the $M_k$ track locations $\mathcal{T}_k^f = \{T_j^f\}_{j=1}^{M_k}$ and visibility flags $v_k = \{v_j\}_{j=1}^{M_k}$. Finally, $\mathcal{T}_k^f$ and $v_k$ are used in the Consensus Fusion Strategy to form the final sub-tracklets $\mathcal{T}_k$.
All the sub-tracklets $\mathcal{T}_k$ are combined to form the overall tracking result $\mathcal{T}$. Our goal is to obtain the tracklets $\mathcal{T}$ for the entire duration of the video.
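To make the notation concrete, the following Python sketch shows how per-frame detections can be grouped into the stride-$S$ sub-detection sets $D_k$. The function and variable names are our own; the paper only specifies the stride-$S$ partitioning.

```python
from collections import defaultdict

def split_into_subvideos(detections_per_frame, S=8):
    """Group per-frame detections D_t into sub-detection sets D_k of stride S.

    detections_per_frame: dict {t: [(x, y, w, h), ...]} for t = 1..T.
    Returns {k: {t: [...]}} where sub-video k covers frames t_k .. t_k + S - 1.
    """
    subvideos = defaultdict(dict)
    for t, dets in sorted(detections_per_frame.items()):
        k = (t - 1) // S                 # 0-based sub-video index, frames are 1-based
        subvideos[k][t] = dets
    return dict(subvideos)
```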
Figure 2. The overview of our Coarse-Fine Tracker. The framework takes a sub-video as input and produces tracking results through two main steps: Coarse-to-Fine and Fine-to-Coarse. In the Coarse-to-Fine process, Coarse Tracking first generates coarse tracking trajectories and provides positional priors for Fine Tracking. In the Fine-to-Coarse process, Fine Tracking produces finer tracking results, which are then further integrated with the coarse-grained trajectories through the Consensus Fusion Strategy to yield the final tracking results.

2.2. Coarse-Fine Tracker

2.2.1. Coarse-to-Fine

  • Coarse Tracking
In our framework, ByteTrack performs coarse tracking across sub-videos using imperfect detection results. This provides a dual capability: preserving the spatial-temporal information critical for fine tracking and maintaining cross-sub-video identity consistency via the linear Kalman filter. ByteTrack uses the linear Kalman filter to predict the tracklets of the objects and then employs the Hungarian algorithm to associate the tracklets sequentially with high-confidence and then low-confidence detection boxes. Through iterative prediction-update cycles, the Kalman filter generates the predicted state vectors $\mathbf{x}$ as motion priors while dynamically calibrating them against the observed state vectors $\mathbf{z}$ derived from the detection bounding boxes.
As shown in Algorithm 1, ByteTrack takes as input $D_k = \{D_t\}_{t=t_k}^{t_k+S-1}$ and outputs motion-based coarse sub-tracklets $\mathcal{T}_k^o = \{T_j^o\}_{j=1}^{M_k}$, where $T_j^o = \{l_j^{t_k}, l_j^{t_k+1}, \ldots, l_j^{t_k+S-1}\}$. Despite potential matching errors in $\mathcal{T}_k^o$ due to inaccuracies in $D_k$, these tracklets at least include the position and time of each object's initial appearance, which are used in fine tracking. In practice, besides the initial appearance time and position, providing approximate prior information for subsequent positions slightly reduces errors in CoTracker. So, for each sub-tracklet, we save the observed state vectors $\mathbf{z}_j^t$ and the predicted state vectors $\mathbf{x}_j^t$.
Algorithm 1 Pseudo-code for Coarse Tracking
Input: Sub-detections $D_k = \{D_t\}_{t=t_k}^{t_k+S-1}$, previous coarse sub-tracklets $\mathcal{T}_{k-1}^o$
Output: Coarse sub-tracklets $\mathcal{T}_k^o = \{T_j^o\}_{j=1}^{M_k}$, predicted states $X_k = \{\mathbf{x}_j^t\}$, observed states $Z_k = \{\mathbf{z}_j^t\}$
 1: Initialize: $\mathcal{T}_k^o \leftarrow \mathcal{T}_{k-1}^o$, $X_k \leftarrow \emptyset$, $Z_k \leftarrow \emptyset$
 2: for $t \leftarrow t_k$ to $t_k + S - 1$ do
 3:     $D^t \leftarrow D_k[t]$
 4:     $D_{high}^t, D_{low}^t \leftarrow \mathrm{Split}(D^t, \tau)$
 5:     for $T_j \in \mathcal{T}_k^o$ do
 6:         $\mathbf{x}_j^t \leftarrow \mathrm{KalmanPredict}(\mathbf{x}_j^{t-1})$ ▹ Prediction step of the Kalman filter
 7:         $X_k[j][t] \leftarrow \mathbf{x}_j^t$ ▹ Save the predicted state vector $\mathbf{x}_j^t$
 8:         $Z_k[j][t] \leftarrow$ None ▹ The observed state vector $\mathbf{z}_j^t$ is unknown until matching
 9:     end for
10:     $M_{high} \leftarrow \mathrm{Hungarian}(\mathcal{T}_k^o, D_{high}^t)$
11:     $M_{low} \leftarrow \mathrm{Hungarian}(\mathcal{T}_k^o \setminus M_{high}, D_{low}^t)$
12:     for $T_j \in \mathcal{T}_k^o$ do ▹ Update the tracked tracks
13:         if $T_j$ is matched in $M_{high} \cup M_{low}$ then
14:             $\mathbf{z}_j^t \leftarrow d \in D_{high}^t \cup D_{low}^t$ that matches $T_j$
15:             $\mathbf{x}_j^t \leftarrow \mathrm{KalmanUpdate}(\mathbf{x}_j^t, \mathbf{z}_j^t)$ ▹ Update step of the Kalman filter
16:             $Z_k[j][t] \leftarrow \mathbf{z}_j^t$ ▹ Save the observed state vector $\mathbf{z}_j^t$
17:             $T_j^o \leftarrow T_j^o \cup \{l_j^t = \mathbf{z}_j^t\}$
18:         else
19:             $T_j^o \leftarrow T_j^o \cup \{l_j^t = \mathrm{None}\}$
20:         end if
21:     end for
22:     Initialize new tracks from $D_{unmatched}^t$
23:     for $T_{new}$ in new tracks do
24:         $\mathcal{T}_k^o \leftarrow \mathcal{T}_k^o \cup \{T_{new}\}$ ▹ Add the new tracks
25:     end for
26:     Remove lost tracks from $\mathcal{T}_k^o$ ▹ Remove the lost tracks
27: end for
28: return $\mathcal{T}_k^o$, $X_k$, $Z_k$
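For readers unfamiliar with the KalmanPredict/KalmanUpdate steps in Algorithm 1, the following is a minimal constant-velocity Kalman filter sketch over a box state. It is illustrative only: ByteTrack's actual filter parameterizes the box differently (center, aspect ratio, height) and uses tuned, scale-dependent noise terms, so the state layout and noise values here are our own assumptions.

```python
import numpy as np

class BoxKalman:
    """Minimal constant-velocity Kalman filter over the state [x, y, w, h, vx, vy, vw, vh]."""

    def __init__(self, box, dt=1.0):
        self.x = np.zeros(8)
        self.x[:4] = box                                   # initialize position/size from the box
        self.P = np.eye(8) * 10.0                          # state covariance
        self.F = np.eye(8)                                 # constant-velocity transition matrix
        self.F[:4, 4:] = np.eye(4) * dt
        self.H = np.hstack([np.eye(4), np.zeros((4, 4))])  # we observe (x, y, w, h) only
        self.Q = np.eye(8) * 1e-2                          # process noise (assumed)
        self.R = np.eye(4) * 1e-1                          # measurement noise (assumed)

    def predict(self):
        """Prediction step: propagate the state with the motion model (x_j^t in the paper)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x.copy()

    def update(self, z):
        """Update step: correct the prediction with an observed box z = (x, y, w, h) (z_j^t)."""
        z = np.asarray(z, dtype=float)
        innovation = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ innovation
        self.P = (np.eye(8) - K @ self.H) @ self.P
        return self.x.copy()
```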
  • Provide Prior for Better Fine Tracking
Due to the nature of satellite video scenes, where the objects to be tracked are tiny and move along approximately linear trajectories, the object bounding boxes do not undergo significant deformation. Therefore, we can treat each tiny object as a point by using the bounding box center $(x, y)$ and leverage state-of-the-art TAP techniques to handle them. CoTracker is a transformer-based point tracker that tracks several points jointly, making it particularly well-suited for this task.
CoTracker initializes the subsequent coordinates of a trajectory with its initial coordinates and then updates them iteratively. This approach, however, may extract spatial correlation features from wrong coordinates, which can propagate localization errors and gradually degrade tracking robustness due to error accumulation. In our method, we enhance CoTracker with motion-aware positional priors, where the trajectory initialization is governed by the predicted state vector $\mathbf{x}_j^t$ instead of copying the initial coordinates. This replacement is critical because the observation vector $\mathbf{z}_j^t$ from the detector fails to reliably locate tiny objects in consecutive frames, whereas $\mathbf{x}_j^t$ encodes motion continuity to suppress error accumulation. We thus obtain $\mathcal{T}_k^p = \{T_j^p\}_{j=1}^{M_k}$, where $T_j^p = \{l_j^{t_j}, l_j^{t_j+1}, \ldots, l_j^{t_k+S-1}\}$. Here, $t_j$ denotes the initial timestamp of object $j$ in the sub-video, and its starting position $l_j^{t_j}$ is derived directly from the observed state vector $\mathbf{z}_j^{t_j}$. For subsequent frames, the positional prior $l_j^t$ is inferred from the predicted state vector $\mathbf{x}_j^t$.
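The prior construction can be summarized by a small helper, sketched below under our own naming assumptions: the first appearance uses the observed box, and every later frame in the sub-video uses the Kalman-predicted box.

```python
def build_prior(z_obs, x_pred, t_start, t_end):
    """Build the positional prior T_j^p for one object over a sub-video.

    z_obs: dict {t: (x, y, w, h) or None} of observed (detected) boxes z_j^t.
    x_pred: dict {t: (x, y, w, h)} of Kalman-predicted boxes x_j^t.
    """
    detected = [t for t in range(t_start, t_end + 1) if z_obs.get(t) is not None]
    if not detected:
        return {}                           # object never observed in this sub-video
    t_init = detected[0]
    prior = {t_init: z_obs[t_init]}         # first appearance: observed position
    for t in range(t_init + 1, t_end + 1):
        prior[t] = x_pred[t]                # subsequent frames: Kalman prediction
    return prior
```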

2.2.2. Fine-to-Coarse

  • Cropping Sub-videos
When using CoTracker for fine tracking, the satellite video frames are too large, making tracking difficult. Training a model with an input size of 1024 × 1024 is impractical, and the model’s processing time also needs to be considered. Therefore, we crop the video frames to avoid this problem.
We first calculate the maximum movement area for each object in all sub-videos (the area covered by the movement of its center point plus its width and height) and find that it does not exceed 50 × 50 with S = 8 . Therefore, as long as the overlapping area between adjacent cropped sub-videos exceeds two 50 × 50 regions, we can ensure that every object will appear completely in at least one of the cropped sub-videos.
As shown in Figure 3, we crop the sub-video $V_k = \{I_t\}_{t=t_k}^{t_k+S-1}$ into 12 cropped sub-videos $V_k^n = \{I_t^n\}_{t=t_k}^{t_k+S-1}$, $n = 1, 2, \ldots, 12$, with the overlapping area between adjacent crops much larger than 50 × 50. Each prior trajectory in $\mathcal{T}_k^p$ is then assigned to one of the 12 cropped sub-videos based on the location of the midpoint of the line connecting its start and end center points. Cropped sub-videos that contain no trajectory are discarded. Finally, fine tracking is performed in parallel with the CoTracker model, one instance per retained cropped sub-video.
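As a concrete illustration of the cropping step, the sketch below tiles a 1024 × 1024 frame with an overlapping grid of crops and assigns a trajectory to a crop by the midpoint of its start/end centers. The 3 × 4 grid of 512 × 384 crops is our assumption (it matches the CoTracker input size used later and yields overlaps well above 50 pixels); the paper only specifies the count of 12 crops and the overlap requirement.

```python
import numpy as np

def crop_grid(frame_size=(1024, 1024), crop_size=(512, 384), grid=(3, 4)):
    """Return (x0, y0, x1, y1) boxes for an evenly spaced, overlapping grid of crops."""
    W, H = frame_size
    cw, ch = crop_size
    nx, ny = grid
    xs = np.linspace(0, W - cw, nx).astype(int)   # column origins
    ys = np.linspace(0, H - ch, ny).astype(int)   # row origins
    return [(x, y, x + cw, y + ch) for y in ys for x in xs]

def assign_to_crop(traj_centers, crops):
    """Assign a trajectory to the first crop containing the midpoint of its start/end centers."""
    (x0, y0), (x1, y1) = traj_centers[0], traj_centers[-1]
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    for n, (cx0, cy0, cx1, cy1) in enumerate(crops):
        if cx0 <= mx < cx1 and cy0 <= my < cy1:
            return n
    return None
```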
  • Fine Tracking
Fine tracking begins from the center position of each object's first detection and returns the estimated position of the bounding-box center in subsequent frames. Using the prior information $\mathcal{T}_k^p$, CoTracker is given as input the $n$-th cropped sub-video $V_k^n$ together with the starting and subsequent coordinates $\{(x_j^{t_j}, y_j^{t_j}), \ldots, (x_j^{t_k+S-1}, y_j^{t_k+S-1})\}_{j=1}^{M_k^n}$, where $(x_j^t, y_j^t)$ is contained in $l_j^t$. It outputs the estimated center positions $\{(x_j^{t_j}, y_j^{t_j}), (\dot{x}_j^{t_j+1}, \dot{y}_j^{t_j+1}), \ldots, (\dot{x}_j^{t_k+S-1}, \dot{y}_j^{t_k+S-1})\}_{j=1}^{M_k^n}$ and visibility flags $v_k^n = \{v_j\}_{j=1}^{M_k^n}$, where $v_j = \{v_j^{t_j}, v_j^{t_j+1}, \ldots, v_j^{t_k+S-1}\}$. The estimated centers $(\dot{x}_j^t, \dot{y}_j^t)$ and the sizes $(w_j^t, h_j^t)$ from $l_j^t$ are combined to form $\dot{l}_j^t$. By merging the 12 cropped sub-videos, we obtain the fine sub-tracklets $\mathcal{T}_k^f = \{T_j^f\}_{j=1}^{M_k}$ and $v_k = \{v_j\}_{j=1}^{M_k}$, where $T_j^f = \{\dot{l}_j^{t_j}, \dot{l}_j^{t_j+1}, \ldots, \dot{l}_j^{t_k+S-1}\}$.
Compared with motion-based coarse tracking, which associates bounding boxes, fine tracking operates at the point level, calculating similarity after extracting appearance features from objects and sub-videos. Therefore, in situations where the detector does not work well, fine tracking can still detect the object and return the approximate position of the bounding-box center. It is important to note that if an object is missed during the initialization phase, particularly in the frames prior to its first detection, our method is currently unable to recover it, as it relies on the detection initialized within the stride. Next, we use the fine sub-tracklets $\mathcal{T}_k^f$ to identify and correct erroneous trajectories in the coarse sub-tracklets $\mathcal{T}_k^o$, as detailed in Section 2.3.
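For orientation, a hedged sketch of one fine-tracking call on a cropped sub-video is given below. It assumes the public CoTracker torch.hub interface, where queries are $(t, x, y)$ triples and the model returns per-frame tracks and visibility; the exact entry-point name may differ by version, and the prior injection described above is internal to our retrained model rather than part of this public interface.

```python
import torch

def fine_track(video, start_points, device="cuda"):
    """video: (T, 3, H, W) uint8 tensor of one cropped sub-video.
    start_points: list of (t, x, y) giving each object's first detected center."""
    # Assumed hub entry point; see the CoTracker repository for the current model names.
    model = torch.hub.load("facebookresearch/co-tracker", "cotracker2").to(device)
    video = video.float().unsqueeze(0).to(device)                  # (1, T, 3, H, W)
    queries = torch.tensor(start_points, dtype=torch.float32,
                           device=device).unsqueeze(0)             # (1, N, 3)
    pred_tracks, pred_visibility = model(video, queries=queries)   # (1, T, N, 2), (1, T, N)
    return pred_tracks[0], pred_visibility[0]
```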
Figure 3. (a) Cropping regions of a 1024 × 1024 image with overlapping areas between adjacent regions, where green bounding boxes indicate objects to be tracked. (b) The 12 cropped sub-videos obtained after cropping (a). (c) Cropped sub-videos numbered 1, 2, 3, and 4 are selected for fine tracking from (b).

2.3. Consensus Fusion Strategy

Our framework produces two outputs: (1) motion-based coarse sub-tracklets $\mathcal{T}_k^o$, generated by associating the detection outputs $D_k$ through ByteTrack's Kalman filter; and (2) appearance-based fine sub-tracklets $\mathcal{T}_k^f$, constructed by correlating cross-frame features via CoTracker. While $\mathcal{T}_k^o$ provides temporally coherent object proposals by associating detections across frames, it inevitably inherits detection errors from the moderately performing detector. $\mathcal{T}_k^f$ recovers detector-missed objects through appearance similarity, but convolutional operations in CoTracker (e.g., downsampling) cause its predicted centers to deviate from the actual box centers. Our Consensus Fusion Strategy integrates the motion-based trajectories $\mathcal{T}_k^o$ and the appearance-based predictions $\mathcal{T}_k^f$ through two mechanisms: geometric consistency validation purges mismatched trajectories in $\mathcal{T}_k^o$, while appearance-guided interpolation recovers objects missing from $\mathcal{T}_k^o$, thus achieving temporally stable and spatially precise tracking.

2.3.1. Filter Erroneous Detections

This stage eliminates detection errors by spatio-temporal cross-validation between $\mathcal{T}_k^o$ and $\mathcal{T}_k^f$. Temporal validation first locates missed detections where ByteTrack fails (i.e., $l_j^t = \mathrm{None}$) while $v_j^t = 1$ signals appearance-based visibility, as in frames 5 and 8 of Figure 4. Spatial coherence then flags false-positive detections through geometric consistency, which requires that $\mathcal{T}_k^f$'s center $(\dot{x}_j^t, \dot{y}_j^t)$ does not fall outside $\mathcal{T}_k^o$'s bounding box:
$$
(\dot{x}_j^t, \dot{y}_j^t) \in \left[ x_j^t - \frac{w_j^t}{2},\ x_j^t + \frac{w_j^t}{2} \right] \times \left[ y_j^t - \frac{h_j^t}{2},\ y_j^t + \frac{h_j^t}{2} \right]. \tag{1}
$$
Violations of this spatial consensus trigger false-positive invalidation, as in frame 3 of Figure 4. These geometrically inconsistent detections are reclassified as missed detections. Based on the outcome of this process, sub-tracklets are categorized into three classes: valid detections with confirmed localization (satisfying geometric consistency), non-detection frames indicating object absence ($l_j^t = \mathrm{None}$ and $v_j^t = 0$), and missed detections ($l_j^t = \mathrm{None}$ and $v_j^t = 1$, or failing geometric consistency).
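The per-frame classification described above can be written compactly; the sketch below uses our own function and label names.

```python
def classify_frame(coarse_box, fine_center, visible):
    """Classify one frame of a sub-tracklet for the Consensus Fusion Strategy.

    coarse_box: (x, y, w, h) from the coarse tracklet, or None if ByteTrack had no match.
    fine_center: (x_dot, y_dot) predicted by fine tracking.
    visible: visibility flag v_j^t from fine tracking (0 or 1).
    Returns "valid", "absent", or "missed".
    """
    if coarse_box is None:
        return "missed" if visible else "absent"
    x, y, w, h = coarse_box
    xd, yd = fine_center
    # Eq. (1): the fine-tracking center must lie inside the coarse bounding box.
    inside = (x - w / 2 <= xd <= x + w / 2) and (y - h / 2 <= yd <= y + h / 2)
    return "valid" if inside else "missed"
```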
Figure 4. Consensus Fusion Strategy workflow for an object tracked from frame 1 to 8. Given coarse tracklet $T_j^o$ and fine tracklet $T_j^f$: (1) Filter Erroneous Detections: spatio-temporal cross-validation discards $l_j^t$ if $\mathcal{T}_k^f$'s center $(\dot{x}_j^t, \dot{y}_j^t)$ violates the bounding-box constraint (Equation (1)). (2) Recover Missed Detections: linear interpolation or fitting estimates the missed detection centers from the valid detections (≥4 or <4 per segment), inheriting $\dot{w}_j^t$/$\dot{h}_j^t$ from $T_j^f$. (3) Adjust Recovered Detections: enforce geometric consistency (Equation (1)) by aligning recovered boxes with $T_j^f$'s centers.

2.3.2. Recover Missed Detections

Our restoration strategy processes missed frames according to the number of valid detections within each sub-video segment, using the center coordinates of $l_j^t$ and the width and height from $\dot{l}_j^t$ of the valid detections.
When the number of valid detections is at least four, linear interpolation reconstructs the object trajectory between adjacent valid detections. When the number of valid detections is less than four, we perform a linear fit on the centers of the sub-tracklet in the previous sub-video together with the centers of the valid detections in the current sub-video, and then sample this fitted model at the timestamps of the missed detections to estimate and recover their center points. The recovered boxes inherit their width ($\dot{w}_j^t$) and height ($\dot{h}_j^t$) from $\mathcal{T}_k^f$ to maintain scale consistency. As demonstrated in Figure 4, the three missed detections in frames 3, 5, and 8 are recovered through interpolation between the five valid detections.
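A minimal sketch of the recovery rule is given below (NumPy); the thresholds follow the description above, while the argument names and the assumption that timestamps are sorted are our own.

```python
import numpy as np

def recover_centers(t_valid, centers_valid, t_missed, t_prev=None, centers_prev=None):
    """Estimate centers for missed frames of one object in a sub-video.

    >= 4 valid detections: per-axis linear interpolation between valid centers.
    <  4 valid detections: linear fit over the previous sub-video's centers plus
    the current valid centers, sampled at the missed timestamps.
    """
    c_valid = np.asarray(centers_valid, dtype=float)          # (N, 2), sorted by time
    t_valid = np.asarray(t_valid, dtype=float)
    t_missed = np.asarray(t_missed, dtype=float)

    if len(t_valid) >= 4:
        xs = np.interp(t_missed, t_valid, c_valid[:, 0])
        ys = np.interp(t_missed, t_valid, c_valid[:, 1])
    else:
        t_fit = np.concatenate([np.asarray(t_prev, float), t_valid])
        c_fit = np.vstack([np.asarray(centers_prev, float), c_valid])
        kx, bx = np.polyfit(t_fit, c_fit[:, 0], 1)            # linear fit per axis
        ky, by = np.polyfit(t_fit, c_fit[:, 1], 1)
        xs, ys = kx * t_missed + bx, ky * t_missed + by
    return np.stack([xs, ys], axis=1)                          # recovered (x, y) centers
```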

2.3.3. Adjust Recovered Detections

We enforce the geometric consistency of Equation (1) on recovered detections to maintain spatial coherence. For each recovered box, we verify that $\mathcal{T}_k^f$'s center $(\dot{x}_j^t, \dot{y}_j^t)$ is contained within its boundaries. As shown in frame 8 of Figure 4, the recovered box is adjusted to ensure full compliance with the geometric consistency.
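The adjustment can be realized as a minimal shift of the recovered box; the exact shift rule is not spelled out in the text, so the clamping below is our assumption.

```python
def adjust_box(box, fine_center):
    """Shift a recovered box just enough that the fine-tracking center lies inside it."""
    x, y, w, h = box
    xd, yd = fine_center
    # Clamp the fine center into the box extent; the residual tells how far to shift the box.
    dx = min(max(xd, x - w / 2), x + w / 2) - xd
    dy = min(max(yd, y - h / 2), y + h / 2) - yd
    return (x - dx, y - dy, w, h)
```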

3. Results

3.1. Experiment Setting

3.1.1. Datasets

We validate our Coarse-Fine Tracker using the VISO [2] and SAT-MTB [32] datasets, which are captured by the Jilin-1 satellite.
The VISO and SAT-MTB datasets present a variety of traffic scenes, including traffic jams, urban roads, and highways, and therefore cover common challenges of MOT in satellite videos, such as tiny objects and complex backgrounds. We follow the official split provided by the VISO dataset authors: the seven 1024 × 1024 test videos are used for evaluation, while the remaining seventy videos at 512 × 512 resolution constitute the official training split and are used to train the detector. Likewise, we adopt the SAT-MTB test split, which contains two 1024 × 1024 sequences and twenty-eight 512 × 512 sequences. The remaining sixty-two 512 × 512 sequences in the training split are employed for detector training.

3.1.2. Evaluation Metrics

In our experiments, we employ the CLEAR metrics [33] from MOTChallenge [34] and KITTI Tracking [35] to quantitatively assess the accuracy of our method, including multi-object tracking accuracy (MOTA), multi-object tracking precision (MOTP), IDF1 score (IDF1), ID precision (IDP), ID recall (IDR), the percentage of mostly tracked trajectories (MT) and mostly lost trajectories (ML), and the numbers of false positives (FP), false negatives (FN), and ID switches (IDS).
In particular, IDF1 focuses more on association performance, MOTA quantifies the accumulated tracking errors with values in $(-\infty, 100]$, and IDS is the total number of identity switches. Note that a negative MOTA indicates that the tracker makes more errors than the total number of objects in the video. Higher IDF1 and MOTA, or lower IDS, indicate better tracking performance. Their calculation formulas are as follows:
$$
\mathrm{IDF1} = \frac{2 \times \mathrm{IDP} \times \mathrm{IDR}}{\mathrm{IDP} + \mathrm{IDR}},
$$
$$
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDS}_t\right)}{\sum_t \mathrm{GT}_t},
$$
where $t$ is the frame index and $\mathrm{GT}_t$ is the number of ground-truth objects in frame $t$. The official evaluation code is available at TrackEval [36].
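For concreteness, the two headline metrics can be computed from raw counts as sketched below; in practice we rely on the official TrackEval implementation.

```python
def idf1(idtp, idfp, idfn):
    """IDF1 as the harmonic mean of ID precision and ID recall."""
    idp = idtp / (idtp + idfp)
    idr = idtp / (idtp + idfn)
    return 2 * idp * idr / (idp + idr)

def mota(fn_per_frame, fp_per_frame, ids_per_frame, gt_per_frame):
    """MOTA accumulated over frames; it can be negative when errors exceed the GT count."""
    errors = sum(fn + fp + ids for fn, fp, ids in
                 zip(fn_per_frame, fp_per_frame, ids_per_frame))
    return 1.0 - errors / sum(gt_per_frame)
```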

3.1.3. Implementation Details

For our implementation, we first segment the video into portions with S = 8 frames and process each segment sequentially. Initially, we perform Coarse Tracking on these video segments with a track threshold of 0.4. After this, we crop the video and proceed with Fine Tracking using the CoTracker model.
The CoTracker model used in Fine Tracking is retrained on the VISO dataset. Since the open-source weights of CoTracker accept an input size of 512 × 384, we preprocess the VISO dataset accordingly, using the VISO training set to construct the CoTracker training set and the VISO test set to construct the CoTracker test set. The training set of VISO, originally in 512 × 512 resolution, is split into multiple 512 × 384 video clips of 24 frames each. These clips are used to generate npy files in the same format as TAP-Vid-Kubric [29], according to the annotations. For the test set of VISO, which is in 1024 × 1024 resolution, it is similarly split into 512 × 384 video clips of 24 frames. These clips are used to generate a pkl file in the same format as TAP-Vid-DAVIS [29], according to the annotations and videos.
During the training of the CoTracker model, we set the batch size to 1, the model stride to 2, and the number of virtual trajectories to 4, while keeping all other parameters at their default values. We train the model for 50,000 iterations with a learning rate of $5 \times 10^{-5}$ using the AdamW [37] optimizer. All experiments, including tracking experiments, are conducted on a single NVIDIA RTX 4090 GPU.
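The optimization setup described above amounts to the following loop; `model`, `train_loader`, and `compute_loss` are placeholders, since the actual training code follows the CoTracker codebase.

```python
from itertools import cycle
import torch

def train(model, train_loader, compute_loss, iters=50_000, lr=5e-5, device="cuda"):
    """AdamW at lr 5e-5 for 50,000 iterations with batch size 1, as in our setup."""
    model = model.to(device).train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    data = cycle(train_loader)                    # batch size 1, cycled over the training clips
    for _ in range(iters):
        batch = next(data)
        loss = compute_loss(model, batch)         # placeholder loss computation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```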

3.2. Quantitative Results

We test our Coarse-Fine Tracker on the VISO test dataset [2] using DSFNet [2] as the detector and compare it with other multi-object tracking methods, including methods designed for ground-based videos, such as FairMOT [14] and ByteTrack [9], as well as DBT methods designed specifically for satellite videos, namely TGraM [3], CFTracker [23], and LocaLock [26]. We adopt the settings of LocaLock [26] on VISO: a predicted detection is considered a TP if the predicted bounding box overlaps with the ground-truth (GT) bounding box at all; therefore, we set the IoU threshold to $1 \times 10^{-7}$. The experimental results are shown in Table 1, with the best results highlighted in bold.
Our proposed Coarse-Fine Tracker outperforms the other methods and achieves the highest MOTA and IDF1 scores, indicating that it surpasses the current leading MOT models in overall performance. Compared with ByteTrack, which also uses DSFNet as the detector, our method yields a lower FN and a lower FN + FP sum, indicating that CoTracker and the Consensus Fusion Strategy provide more accurate detections beyond the capabilities of the detector alone. This results in improvements in MOTA, MOTP, IDF1, IDS, and other metrics at the track threshold of 0.4.
There are some papers that consider a prediction to be a TP if the IoU between the predicted bounding box and the ground-truth bounding box exceeds 0.4. Therefore, we also report Coarse-Fine Tracker’s quantitative results for the VISO test dataset in Table 2 when the IoU threshold is 0.4. Our Coarse-Fine Tracker achieves state-of-the-art performance in MOTA, IDF1, FP, FN, and IDS. Compared with OC-SORT and Bot-YOLOv7 in natural scenes, our method reduces IDS by 50.3% and 90%, respectively, confirming its excellent stability. Compared with CFTracker and GMFTracker, specialized trackers for remote sensing videos, our method surpasses them in MOTA, IDF1, IDS, and other metrics, demonstrating the competitiveness and robustness of our method.
Additionally, we conduct tests on the SAT-MTB vehicle dataset [32] using YOLOX [19] as the detector and compare our method with methods designed for ground-based and satellite videos. As shown in Table 3, the proposed Coarse-Fine Tracker achieves the best performance on these metrics. It achieves the highest MOTA of 28.4 and an IDF1 of 53.1 among the evaluated methods, surpassing ByteTrack by 1.7 and 1.2, respectively. Our framework maintains competitive efficiency while reducing IDS relative to CFTracker, demonstrating robust handling of complex tracking scenarios.

3.3. Visualization Results

To visually demonstrate the performance of our framework, we provide visualization results from the VISO test videos in Figure 5.
Overall, compared with CFTracker and ByteTrack, our Coarse-Fine Tracker produces more complete trajectories, indicating better tracking ability. At the same time, the trajectory colors in Figure 5 change less frequently, indicating fewer IDS. In addition, the trajectories are more continuous, which is particularly evident in videos 003, 006, and 008. This indicates that our method successfully recovers previously undetected objects through Fine Tracking and the Consensus Fusion Strategy, resulting in an increase in MT and a decrease in FN.

4. Discussion

4.1. Effect of the Detector

To rigorously evaluate the generalization capability of our framework across varying detector architectures, we conduct ablation studies on the VISO test dataset using three different detectors: YOLOX (a widely adopted natural-scene detector), DSFNet (a detector designed for satellite videos), and CFTracker (a JDT tracker in remote sensing). This selection spans the conventional detection paradigm, the specialized remote sensing detection model, and the JDT framework, thereby systematically validating method robustness. As evidenced in Table 4, although detector selection significantly impacts tracking performance, our approach consistently improves MOTA, IDF1, and continuity metrics under all detector configurations, demonstrating reduced sensitivity to detection quality fluctuations compared with baseline ByteTrack. Under the YOLOX detector, our framework increases MT by 10.2% while maintaining higher MOTA. With CFTracker’s tracking results, our method elevates IDF1 from 73.0 to 75.9 and decreases IDS by 28.3%, despite comparable MOTP scores. Based on DSFNet detections, our approach achieves a MOTA of 66.9, surpassing ByteTrack by 6.7 while simultaneously improving MOTP from 60.5 to 64.1 and reducing FN by 12.3%. These improvements highlight our method’s robustness in decoupling detection dependency.
Considering that CFTracker uses a cross-frame feature update module, its tracking results mainly consist of segmented trajectories, allowing simple associations to achieve good scores in MOTP and other metrics. Given the challenges in satellite videos, where targets are small and backgrounds are complex, conventional detectors often struggle to achieve such results, either failing to detect objects or producing low-confidence detections. To evaluate the performance of Coarse-Fine Tracker when the detector's performance is insufficient, we systematically removed CFTracker detection results at deletion probabilities of 2%, 4%, 6%, 8%, and 10%. Concurrently, we varied the tracking threshold over 0.3, 0.4, 0.5, and 0.6 across the experiments. The results of these studies are detailed in Table 5.
The results, as shown in Table 5, indicate that our method consistently outperforms ByteTrack across all conditions. Specifically, our method achieves a higher MOTA and IDF1 scores, indicating better tracking accuracy and identity preservation. In addition, it results in fewer IDS, demonstrating more stable tracking.
As the deletion probability increases, both ours and ByteTrack exhibit a marked decline in performance, particularly at higher tracking thresholds of 0.5 and 0.6, underscoring the dependency of DBT methods on the underlying detector’s performance, especially in scenarios where high-confidence detections are scarce.
Focusing on the tracking thresholds of 0.3, 0.4, and 0.5, our method consistently outperforms ByteTrack in terms of MOTA degradation. Specifically, at a tracking threshold of 0.3, ours experiences a MOTA degradation of 0.7 compared with ByteTrack’s 5.7. Similarly, at thresholds of 0.4 and 0.5, our MOTA degradation remains significantly lower than that of ByteTrack, demonstrating enhanced robustness. When examining fixed deletion ratios, ours maintains smaller MOTA degradations across all tested probabilities, with degradations ranging from 6.4% to 8.9%, whereas ByteTrack experiences much larger degradations between 21.6% and 23.4%. These comparisons clearly demonstrate our superior robustness, particularly under conditions of increased detection result degradation. Our ability to maintain higher tracking accuracy despite reduced detection inputs highlights its effectiveness and reliability in satellite video scenarios where detector performance may fluctuate.

4.2. Component Analysis

To assess the contributions of individual components in Coarse-Fine Tracker, we systematically add the Fine Tracking, Providing Prior, and Consensus Fusion Strategy to evaluate their impact on performance. The experiments are carried out using the test dataset, and the results are detailed in Table 6 and Table 7. When evaluating Fine Tracking alone or the combined results of Fine Tracking and Providing Prior, if Fine Tracking identifies any missed detections, these are directly added to the coarse-grained tracking results as supplementary detections.
As shown in Table 6, we test the three components based on CFTracker's tracking results. The inclusion of Fine Tracking results in a slight decrease of 0.2 in MOTA but an improvement of 0.7 in IDF1, significantly reducing IDS by 34. However, MOTP exhibits an unexpected decline. As MOTP specifically reflects the localization accuracy of tracking boxes, the original CFTracker has already generated continuous trajectory segments, resulting in a naturally higher MOTP baseline. While Fine Tracking successfully detects previously missed objects, the convolutional operations in its design (as mentioned in Section 2.3) introduce localization inaccuracies, ultimately degrading MOTP. Adding Providing Prior to Fine Tracking yields further gains, with MOTA increasing by 0.6 and IDF1 by 0.1, and MOTP improves. This combination also reduces IDS by 186 and 37, respectively. When all three parts are integrated, the model achieves the highest improvements, with MOTA increasing to 59.1, IDF1 to 75.9, and MOTP returning to 59.2. This configuration also yields significant reductions in IDS, demonstrating the effectiveness of each module.
As shown in Table 7, we also test three components based on the DSFNet detections. The cumulative integration of components progressively enhances MOTA, MOTP, and IDF1 while effectively suppressing IDS, thereby validating the effectiveness of each individual module. Overall, the combination of Fine Tracking, Providing Prior, and Consensus Fusion Strategy greatly enhances tracking performance, as evidenced by the improved MOTA and IDF1 scores and the reduction in IDS compared with the baseline ByteTrack.

4.3. Effect of Frame Interval Parameter S

To address the issue of large satellite video frames making Fine Tracking difficult, we crop the video frames. We found that when $S = 8$, the maximum movement area of each object does not exceed 50 × 50, ensuring that objects appear completely in at least one cropped sub-video. Considering the practical limitations and the need to cover object movements, we experimented with frame intervals $S$ of 4, 6, 8, 10, and 12. Given that the movement of objects between adjacent frames typically does not exceed 5 pixels in both the x and y directions, and the overlap between adjacent sub-videos is at least 169 pixels, choosing $S = 10$ or $S = 12$ is also feasible.
As shown in Table 8, the performance of our method varies with the different Frame Interval S values. When S = 8 , the method achieves the highest MOTA of 66.9 and IDF1 of 77.8, reflecting the best balance between accuracy and tracking consistency. The MOTP reaches its lowest value of 64.1 at S = 8 , suggesting a slight decrease in precision due to the larger range of object movement. However, this trade-off between MOTA and MOTP is offset by the fact that S = 8 minimizes false negatives and false positives more effectively than smaller or larger stride values. As S increases further (to S = 10 and S = 12 ), the IDS decreases, with S = 12 achieving the lowest IDS of 293, indicating fewer identity switches. However, the improvements in MOTA and IDF1 are less pronounced, and there is a noticeable increase in false positives with S = 12 , suggesting that larger strides may introduce more fragmentation and less accurate matching.
In conclusion, S = 8 provides the best overall performance, with an optimal balance between tracking accuracy, consistency, and minimal false detections, making it the most suitable for tracking.

5. Conclusions

In this study, we present a novel Coarse-Fine Tracker for MOT in satellite videos, leveraging a two-step tracking approach that combines ByteTrack for coarse tracking and CoTracker for fine tracking. The framework integrates the complementary strengths of the two tracking methods to deliver a robust solution for demanding satellite-video applications. Coarse tracking, driven by a conventional detector, provides motion-based coarse tracking results and the prior for fine tracking. Fine tracking provides appearance-based fine tracking results. The final results are further enhanced by a Consensus Fusion Strategy, which eliminates erroneous matches in the coarse tracking results while incorporating objects that are detected by CoTracker but not by the detector. Our experiments show that this method achieves good performance and robustness against variations in detection quality, maintaining high tracking accuracy beyond the capabilities of the detector alone.
While the Coarse-Fine Tracker demonstrates notable performance when the detector’s performance is suboptimal, there is still room for further enhancement. Specifically, a TAP model designed for remote sensing video scenes could be developed to ensure accuracy while accelerating processing speed. In future work, we intend to combine advanced MOT methods to design a lightweight online TAP model that supports higher-resolution video input and faster processing speed.

Author Contributions

Conceptualization, H.S., X.L., X.Q., E.Z., J.J. and L.W.; methodology, H.S. and X.L.; software, H.S., X.Q. and E.Z.; validation, H.S., X.L. and E.Z.; formal analysis, H.S. and J.J.; investigation, H.S. and L.W.; data curation, H.S. and J.J.; writing—original draft preparation, H.S. and X.Q.; writing—review and editing, H.S., E.Z. and J.J.; visualization, X.Q. and E.Z.; project administration, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Key Laboratory of Target Cognition and Application Technology under Grant 2023-CXPT-LC-005, the Science and Disruptive Technology Program under Grant AIRCAS2024-AIRCAS-SDTP-03, and the Key Program of the Chinese Academy of Sciences under Grants RCJJ-145-24-13 and KGFZD-145-25-38.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yin, Q.; Hu, Q.; Liu, H.; Zhang, F.; Wang, Y.; Lin, Z.; An, W.; Guo, Y. Detecting and Tracking Small and Dense Moving Objects in Satellite Videos: A Benchmark. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18. [Google Scholar] [CrossRef]
  2. Xiao, C.; Yin, Q.; Ying, X.; Li, R.; Wu, S.; Li, M.; Liu, L.; An, W.; Chen, Z. DSFNet: Dynamic and Static Fusion Network for Moving Object Detection in Satellite Videos. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  3. He, Q.; Sun, X.; Yan, Z.; Li, B.; Fu, K. Multi-Object Tracking in Satellite Videos with Graph-Based Multitask Modeling. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
  4. Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; Upcroft, B. Simple online and realtime tracking. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3464–3468. [Google Scholar] [CrossRef]
  5. Du, Y.; Wan, J.; Zhao, Y.; Zhang, B.; Tong, Z.; Dong, J. GIAOTracker: A Comprehensive Framework for MCMOT with Global Information and Optimizing Strategies in VisDrone 2021. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Montreal, BC, Canada, 11–17 October 2021; pp. 2809–2819. [Google Scholar]
  6. Cao, J.; Pang, J.; Weng, X.; Khirodkar, R.; Kitani, K. Observation-centric sort: Rethinking sort for robust multi-object tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 9686–9696. [Google Scholar]
  7. Shuai, B.; Berneshawi, A.; Li, X.; Modolo, D.; Tighe, J. SiamMOT: Siamese Multi-Object Tracking. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 12367–12377. [Google Scholar] [CrossRef]
  8. Qin, Z.; Zhou, S.; Wang, L.; Duan, J.; Hua, G.; Tang, W. MotionTrack: Learning Robust Short-Term and Long-Term Motions for Multi-Object Tracking. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 17939–17948. [Google Scholar] [CrossRef]
  9. Zhang, Y.; Sun, P.; Jiang, Y.; Yu, D.; Weng, F.; Yuan, Z.; Luo, P.; Liu, W.; Wang, X. ByteTrack: Multi-object Tracking by Associating Every Detection Box. In Proceedings of the Computer Vision—ECCV 2022, Tel Aviv, Israel, 23–27 October 2022; Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T., Eds.; Springer: Cham, Switzerland, 2022; pp. 1–21. [Google Scholar]
  10. Aharon, N.; Orfaig, R.; Bobrovsky, B.Z. BoT-SORT: Robust Associations Multi-Pedestrian Tracking. arXiv 2022, arXiv:2206.14651. [Google Scholar]
  11. Wojke, N.; Bewley, A.; Paulus, D. Simple online and realtime tracking with a deep association metric. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3645–3649. [Google Scholar]
  12. Du, Y.; Zhao, Z.; Song, Y.; Zhao, Y.; Su, F.; Gong, T.; Meng, H. StrongSORT: Make DeepSORT Great Again. IEEE Trans. Multimed. 2022, 25, 8725–8737. [Google Scholar] [CrossRef]
  13. Wang, Z.; Zheng, L.; Liu, Y.; Li, Y.; Wang, S. Towards Real-Time Multi-Object Tracking. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer: Cham, Switzerland, 2020; pp. 107–122. [Google Scholar]
  14. Zhang, Y.; Wang, C.; Wang, X.; Zeng, W.; Liu, W. FairMOT: On the Fairness of Detection and Re-identification in Multiple Object Tracking. Int. J. Comput. Vis. 2020, 129, 3069–3087. [Google Scholar] [CrossRef]
  15. Zhou, X.; Koltun, V.; Krähenbühl, P. Tracking Objects as Points. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer: Cham, Switzerland, 2020; pp. 474–490. [Google Scholar]
  16. Chen, H.; Li, N.; Li, D.; Lv, J.; Zhao, W.; Zhang, R.; Xu, J. Multiple Object Tracking in Satellite Video with Graph-Based Multiclue Fusion Tracker. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14. [Google Scholar] [CrossRef]
  17. Redmon, J.; Divvala, S.K.; Girshick, R.B.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  18. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  19. Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
  20. Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
  21. Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
  22. Feng, J.; Zeng, D.; Jia, X.; Zhang, X.; Li, J.; Liang, Y.; Jiao, L. Cross-frame keypoint-based and spatial motion information-guided networks for moving vehicle detection and tracking in satellite videos. ISPRS J. Photogramm. Remote Sens. 2021, 177, 116–130. [Google Scholar] [CrossRef]
  23. Kong, L.; Yan, Z.; Zhang, Y.; Diao, W.; Zhu, Z.; Wang, L. CFTracker: Multi-Object Tracking with Cross-Frame Connections in Satellite Videos. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14. [Google Scholar] [CrossRef]
  24. Hong, J.; Wang, T.; Han, Y.; Wei, T. Multi-Target Tracking for Satellite Videos Guided by Spatial-Temporal Proximity and Topological Relationships. IEEE Trans. Geosci. Remote Sens. 2025, 63, 1–20. [Google Scholar] [CrossRef]
  25. Zhang, J.; Zhang, X.; Huang, Z.; Cheng, X.; Feng, J.; Jiao, L. Bidirectional Multiple Object Tracking Based on Trajectory Criteria in Satellite Videos. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14. [Google Scholar] [CrossRef]
  26. Kong, L.; Yan, Z.; Shi, H.; Zhang, T.; Wang, L. LocaLock: Enhancing Multi-Object Tracking in Satellite Videos via Local Feature Matching. Remote Sens. 2025, 17, 371. [Google Scholar] [CrossRef]
  27. Sand, P.; Teller, S. Particle Video: Long-Range Motion Estimation using Point Trajectories. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 2, pp. 2195–2202. [Google Scholar] [CrossRef]
  28. Harley, A.W.; Fang, Z.; Fragkiadaki, K. Particle Video Revisited: Tracking Through Occlusions Using Point Trajectories. In Proceedings of the Computer Vision—ECCV 2022, Tel Aviv, Israel, 23–27 October 2022; Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T., Eds.; Springer: Cham, Switzerland, 2022; pp. 59–75. [Google Scholar]
  29. Doersch, C.; Gupta, A.; Markeeva, L.; Recasens, A.; Smaira, L.; Aytar, Y.; Carreira, J.; Zisserman, A.; Yang, Y. TAP-Vid: A Benchmark for Tracking Any Point in a Video. arXiv 2022, arXiv:2211.03726. [Google Scholar]
  30. Doersch, C.; Yang, Y.; Vecerík, M.; Gokay, D.; Gupta, A.; Aytar, Y.; Carreira, J.; Zisserman, A. TAPIR: Tracking Any Point with per-frame Initialization and temporal Refinement. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–3 October 2023; pp. 10027–10038. [Google Scholar]
  31. Karaev, N.; Rocco, I.; Graham, B.; Neverova, N.; Vedaldi, A.; Rupprecht, C. CoTracker: It Is Better to Track Together. In Proceedings of the Computer Vision—ECCV 2024, Milan, Italy, 29 September–4 October 2024; Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G., Eds.; Springer: Cham, Switzerland, 2025; pp. 18–35. [Google Scholar]
  32. Liao, X.; Li, Y.; He, J.; Jin, X.; Liu, Y.; Yuan, Q. Advancing Multiobject Tracking for Small Vehicles in Satellite Videos: A More Focused and Continuous Approach. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–19. [Google Scholar] [CrossRef]
  33. Bernardin, K.; Stiefelhagen, R. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. EURASIP J. Image Video Process. 2008, 2008, 1–10. [Google Scholar] [CrossRef]
  34. Milan, A.; Leal-Taixé, L.; Reid, I.D.; Roth, S.; Schindler, K. MOT16: A Benchmark for Multi-Object Tracking. arXiv 2016, arXiv:1603.00831. [Google Scholar]
  35. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar] [CrossRef]
  36. Luiten, J.; Hoffhues, A. TrackEval. 2020. Available online: https://github.com/JonathonLuiten/TrackEval (accessed on 17 June 2025).
  37. Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017. [Google Scholar]
  38. Yi, K.; Luo, K.; Luo, X.; Huang, J.; Wu, H.; Hu, R.; Hao, W. Ucmctrack: Multi-object tracking with uniform camera motion compensation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 6702–6710. [Google Scholar]
  39. Wang, Y.; Kitani, K.; Weng, X. Joint object detection and multi-object tracking with graph neural networks. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–2 June 2021; pp. 13708–13715. [Google Scholar]
  40. Stanojevic, V.D.; Todorovic, B.T. BoostTrack: Boosting the similarity measure and detection confidence for improved multiple object tracking. Mach. Vis. Appl. 2024, 35, 53. [Google Scholar] [CrossRef]
  41. Morsali, M.M.; Sharifi, Z.; Fallah, F.; Hashembeiki, S.; Mohammadzade, H.; Shouraki, S.B. SFSORT: Scene features-based simple online real-time tracker. arXiv 2024, arXiv:2404.07553. [Google Scholar]
  42. Cetintas, O.; Brasó, G.; Leal-Taixé, L. Unifying short and long-term tracking with graph hierarchies. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 22877–22887. [Google Scholar]
Figure 1. Some challenges in satellite videos. A remote-sensing image is shown in the middle. On the left, two randomly selected white objects are marked in green. On the right, an image along a road is shown, where white and dark objects are marked in green. Trees on both sides of the road cause slight occlusions.
Figure 5. The leftmost column shows the zoomed-out tracking results of Coarse-Fine Tracker. The right side displays the zoomed-in areas corresponding to the red boxes in the first column, with trajectories shown from left to right for (a) GT, (b) our Coarse-Fine Tracker, (c) CFTracker, and (d) ByteTrack, respectively.
Table 1. Quantitative results on the VISO test set. IoU threshold is 1 × 10⁻⁷. ↑ indicates that higher is better, ↓ indicates that lower is better. The best results are shown in bold.

| Method | MOTA ↑ | MOTP ↑ | IDF1 ↑ | MT ↑ | ML ↓ | FP ↓ | FN ↓ | IDS ↓ | FPS ↑ |
|---|---|---|---|---|---|---|---|---|---|
| FairMOT [14] | 2.3 | 28.0 | - | 21 | 623 | 2073 | 83,258 | 5 | 27.9 |
| ByteTrack [9] | 60.2 | 60.5 | 73.4 | 357 | 147 | 14,745 | 16,647 | 463 | 2.1 |
| TGraM [3] | 12.1 | - | 27.9 | 53 | 430 | 24,359 | 69,313 | 1306 | - |
| CFTracker [23] | 57.6 | 58.9 | 64.8 | 519 | 47 | 27,423 | 11,327 | 576 | 7.8 |
| LocaLock [26] | 62.6 | 67.5 | 75.9 | 377 | 178 | 11,254 | 23,190 | 218 | 6.8 |
| Coarse-Fine Tracker (Ours) | 66.9 | 64.1 | 77.8 | 457 | 70 | 15,760 | 14,598 | 324 | 2.5 |
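For reference when reading Tables 1–8, MOTA and MOTP follow the CLEAR MOT definitions [33], and IDF1 is the standard identity-F1 score; a compact restatement of these definitions (paraphrased here for convenience, not quoted from the paper) is

\[
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDS}_t\right)}{\sum_t \mathrm{GT}_t}, \qquad
\mathrm{MOTP} = \frac{\sum_{t,i} s_{t,i}}{\sum_t c_t}, \qquad
\mathrm{IDF1} = \frac{2\,\mathrm{IDTP}}{2\,\mathrm{IDTP} + \mathrm{IDFP} + \mathrm{IDFN}},
\]

where GT_t is the number of ground-truth objects in frame t, s_{t,i} is the localization similarity of the i-th matched pair in frame t (reported as box overlap here, which is why a higher MOTP is better), c_t is the number of matches in frame t, and IDTP, IDFP, and IDFN count identity-level true positives, false positives, and false negatives. MT (mostly tracked) and ML (mostly lost) count ground-truth trajectories covered for at least 80% and at most 20% of their lifespan, respectively.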
Table 2. Quantitative results on the VISO test set. IoU threshold is 0.4. ↑ indicates that higher is better, ↓ indicates that lower is better. The best results are shown in bold.

| Method | Detector | MOTA ↑ | IDF1 ↑ | MT ↑ | ML ↓ | FP ↓ | FN ↓ | IDS ↓ |
|---|---|---|---|---|---|---|---|---|
| Bot-YOLOv7 [10] | YOLOv7-X | 46.1 | 48.3 | 275 | 235 | 26,457 | 35,225 | 2971 |
| UCMCTrack [38] | YOLOv7-X | 47.1 | 51.0 | 288 | 396 | 24,947 | 34,988 | 3519 |
| OC-SORT [6] | YOLOv7-X | 48.8 | 58.7 | 466 | 129 | 25,620 | 35,316 | 578 |
| GSDT [39] | DSFNet | 48.1 | 47.9 | 291 | 313 | 24,145 | 34,981 | 3128 |
| BoostTrack [40] | Swin-b + Dino | 48.7 | 53.6 | 377 | 334 | 24,158 | 35,680 | 1696 |
| StrongSORT [12] | Swin-b + Dino | 48.9 | 57.2 | 398 | 93 | 24,955 | 35,578 | 761 |
| SFSORT [41] | Swin-b + Dino | 49.1 | 56.3 | 347 | 178 | 24,750 | 35,203 | 1101 |
| SUSHI [42] | Cascade-RCNN | 50.2 | 55.6 | 489 | 98 | 24,032 | 35,108 | 593 |
| CFTracker [23] | - | 50.9 | 57.7 | 392 | 100 | 23,515 | 34,657 | 641 |
| GMFTracker [16] | - | 52.3 | 61.7 | 499 | 84 | 23,466 | 33,231 | 517 |
| Coarse-Fine Tracker (Ours) | DSFNet | 56.4 | 72.9 | 386 | 93 | 20,734 | 19,572 | 287 |
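The IoU thresholds quoted in the captions (1 × 10⁻⁷ in Tables 1 and 3, 0.4 in Table 2) control how predicted and ground-truth boxes are matched before the metrics are computed: 1 × 10⁻⁷ effectively accepts any positive overlap, a deliberately lenient criterion, while 0.4 demands substantial overlap. As a reminder of the matching criterion only, here is a minimal IoU routine for axis-aligned boxes in an assumed (x1, y1, x2, y2) format:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of the two areas minus the intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# Two 10 x 10 boxes offset by 5 px overlap with IoU = 25 / 175 ~= 0.143:
# enough to match under the 1e-7 threshold, but not under 0.4.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```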
Table 3. Quantitative results on the SAT-MTB test set. IoU threshold is 1 × 10⁻⁷. ↑ indicates that higher is better, ↓ indicates that lower is better. The best results are shown in bold.

| Method | MOTA ↑ | MOTP ↑ | IDF1 ↑ | MT ↑ | ML ↓ | FP ↓ | FN ↓ | IDS ↓ | FPS ↑ |
|---|---|---|---|---|---|---|---|---|---|
| FairMOT [14] | 8.5 | - | 18.5 | 102 | 2417 | 9529 | 291,156 | 13,895 | - |
| ByteTrack [9] | 26.7 | 52.6 | 51.9 | 1077 | 1312 | 72,587 | 177,464 | 1990 | 200 |
| TGraM [3] | −0.8 | - | 0.5 | 3 | 3514 | 342,897 | 342,897 | 59 | - |
| CFTracker [23] | 21.0 | - | 19.5 | 1350 | 725 | 59,694 | 139,850 | 58,377 | 7.8 |
| Coarse-Fine Tracker (Ours) | 28.4 | 53.6 | 53.1 | 1119 | 127 | 71,093 | 173,014 | 2084 | 60 |
Table 4. Comparison of the performance of Coarse-Fine Tracker using different detectors on the VISO test dataset. ↑ indicates that higher is better, ↓ indicates that lower is better.

| Detector | Method | MOTA ↑ | MOTP ↑ | IDF1 ↑ | MT ↑ | ML ↓ | FP ↓ | FN ↓ | IDS ↓ |
|---|---|---|---|---|---|---|---|---|---|
| YOLOX | ByteTrack | 46.9 | 59.2 | 65.3 | 293 | 149 | 18,980 | 29,860 | 370 |
| YOLOX | Ours | 49.0 | 60.8 | 66.7 | 323 | 133 | 19,373 | 27,507 | 358 |
| CFTracker | ByteTrack | 56.5 | 59.2 | 73.0 | 444 | 56 | 21,984 | 18,056 | 382 |
| CFTracker | Ours | 59.1 | 59.2 | 75.9 | 498 | 54 | 24,250 | 12,963 | 274 |
| DSFNet | ByteTrack | 60.2 | 60.5 | 73.4 | 357 | 147 | 14,745 | 16,647 | 463 |
| DSFNet | Ours | 66.9 | 64.1 | 77.8 | 457 | 70 | 15,760 | 14,598 | 324 |
Table 5. Comparison of the performance of Coarse-Fine Tracker and ByteTrack under detector performance degradation on the VISO test dataset.

| Per | Method | MOTA (0.3) | IDF1 (0.3) | IDS (0.3) | MOTA (0.4) | IDF1 (0.4) | IDS (0.4) | MOTA (0.5) | IDF1 (0.5) | IDS (0.5) | MOTA (0.6) | IDF1 (0.6) | IDS (0.6) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0% | ByteTrack | 57.7 | 74.3 | 243 | 56.5 | 73.0 | 382 | 34.3 | 55.4 | 1200 | 10.1 | 15.0 | 1738 |
| 0% | Ours | 57.3 | 75.7 | 213 | 59.1 | 75.9 | 274 | 50.9 | 67.1 | 1097 | 14.4 | 22.7 | 1798 |
| 2% | ByteTrack | 56.5 | 75.1 | 270 | 55.3 | 73.3 | 403 | 33.4 | 51.4 | 1242 | 9.8 | 15.2 | 1757 |
| 2% | Ours | 57.2 | 75.6 | 220 | 58.8 | 75.8 | 273 | 50.4 | 66.6 | 1150 | 13.9 | 22.0 | 1818 |
| 4% | ByteTrack | 55.4 | 74.4 | 303 | 54.3 | 72.6 | 420 | 32.7 | 50.7 | 1270 | 9.3 | 14.4 | 1812 |
| 4% | Ours | 57.1 | 75.5 | 235 | 58.6 | 75.6 | 296 | 49.9 | 66.3 | 1178 | 13.4 | 20.9 | 1865 |
| 6% | ByteTrack | 54.3 | 73.7 | 317 | 53.1 | 71.7 | 436 | 31.8 | 49.7 | 1354 | 8.8 | 13.9 | 1845 |
| 6% | Ours | 57.0 | 75.5 | 247 | 58.5 | 75.4 | 312 | 49.1 | 65.6 | 1269 | 12.8 | 20.3 | 1901 |
| 8% | ByteTrack | 53.0 | 72.9 | 337 | 51.8 | 70.7 | 459 | 31.3 | 48.8 | 1338 | 8.5 | 13.3 | 1820 |
| 8% | Ours | 56.7 | 75.3 | 261 | 58.1 | 75.0 | 312 | 48.6 | 64.9 | 1276 | 12.4 | 19.6 | 2421 |
| 10% | ByteTrack | 52.0 | 72.2 | 376 | 50.8 | 70.2 | 465 | 30.4 | 48.0 | 1438 | 7.9 | 12.7 | 1933 |
| 10% | Ours | 56.6 | 75.2 | 268 | 57.9 | 75.1 | 326 | 47.7 | 64.4 | 1357 | 11.6 | 19.0 | 1992 |
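Table 5 varies detector quality along two axes. The column groups 0.3–0.6 and the Per column are read here as detection confidence thresholds and as the fraction of remaining detections randomly discarded before association, respectively; both readings are assumptions inferred from the caption, not definitions restated from the paper. Under those assumptions, a minimal sketch of how such degradation can be simulated upstream of any tracker:

```python
import random


def degrade_detections(frame_dets, drop_rate, score_thresh, seed=0):
    """Simulate detector degradation (assumed protocol, illustration only):
    keep detections whose confidence reaches `score_thresh`, then randomly
    drop a fraction `drop_rate` of the survivors.

    `frame_dets` is a list of (x1, y1, x2, y2, score) tuples for one frame.
    """
    rng = random.Random(seed)
    confident = [d for d in frame_dets if d[4] >= score_thresh]
    return [d for d in confident if rng.random() >= drop_rate]


# Example: score threshold 0.4 with a 10% random drop
# (cf. the 10% rows of Table 5, under the assumed reading above).
dets = [(10, 12, 18, 20, 0.91), (40, 41, 47, 48, 0.35), (60, 60, 66, 67, 0.55)]
print(degrade_detections(dets, drop_rate=0.10, score_thresh=0.4))
```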
Table 6. Ablation study for different structures using CFTracker's tracking results on the VISO test dataset. ↑ indicates that higher is better, ↓ indicates that lower is better. ✔ indicates the use of the module.

| Fine Tracking | Providing Prior | Consensus Fusion Strategy | MOTA ↑ | MOTP ↑ | IDF1 ↑ | IDS ↓ |
|---|---|---|---|---|---|---|
| ✔ | | | 56.3 | 55.8 | 74.8 | 348 |
| ✔ | ✔ | | 56.9 | 56.7 | 74.9 | 311 |
| ✔ | ✔ | ✔ | 59.1 | 59.2 | 75.9 | 274 |
| ByteTrack (baseline) | | | 56.5 | 59.2 | 74.1 | 382 |
Table 7. Ablation study for different structures using DSFNet as the detector on the VISO dataset. ↑ indicates that higher is better, ↓ indicates that lower is better. ✔ indicates the use of the module.

| Fine Tracking | Providing Prior | Consensus Fusion Strategy | MOTA ↑ | MOTP ↑ | IDF1 ↑ | IDS ↓ |
|---|---|---|---|---|---|---|
| ✔ | | | 60.8 | 61.7 | 74.9 | 401 |
| ✔ | ✔ | | 62.9 | 62.9 | 75.5 | 373 |
| ✔ | ✔ | ✔ | 66.9 | 64.1 | 77.8 | 324 |
| ByteTrack (baseline) | | | 60.2 | 60.5 | 73.4 | 463 |
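Tables 6 and 7 ablate the same three components on two sets of detections. Trajectory gap filling by linear interpolation is one building block commonly used in this kind of post-processing; the sketch below shows only that generic computation, with a track representation and function name that are illustrative assumptions rather than the paper's implementation.

```python
def interpolate_track(track):
    """Fill missing frames of a track by linear interpolation between the
    nearest observed boxes. `track` maps frame index -> (x1, y1, x2, y2);
    frames without a box are simply absent. Generic illustration only.
    """
    frames = sorted(track)
    filled = dict(track)
    for a, b in zip(frames[:-1], frames[1:]):
        if b - a <= 1:
            continue  # consecutive observations, nothing to fill
        box_a, box_b = track[a], track[b]
        for f in range(a + 1, b):
            w = (f - a) / (b - a)  # interpolation weight in [0, 1]
            filled[f] = tuple((1 - w) * ca + w * cb for ca, cb in zip(box_a, box_b))
    return filled


# A track observed at frames 3 and 6 gets boxes filled in for frames 4 and 5.
print(interpolate_track({3: (10, 10, 20, 20), 6: (16, 13, 26, 23)}))
```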
Table 8. Performance for different values of S using DSFNet as the detector on the VISO test dataset. ↑ indicates that higher is better, ↓ indicates that lower is better. The best results are shown in bold.

| S | MOTA ↑ | MOTP ↑ | IDF1 ↑ | FP ↓ | FN ↓ | IDS ↓ |
|---|---|---|---|---|---|---|
| 4 | 61.1 | 64.7 | 75.6 | 17,982 | 16,820 | 364 |
| 6 | 64.3 | 65.3 | 76.6 | 16,944 | 15,782 | 332 |
| 8 | 66.9 | 64.1 | 77.8 | 15,760 | 14,598 | 324 |
| 10 | 65.2 | 65.4 | 77.0 | 16,519 | 14,375 | 298 |
| 12 | 62.7 | 64.9 | 75.9 | 17,716 | 14,354 | 293 |
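Table 8 sweeps the hyperparameter S, with the best MOTA at S = 8. Assuming S denotes the number of frames per sub-video processed at a time (an assumption based on the table alone), splitting a frame sequence into such chunks is straightforward:

```python
def split_into_subvideos(frames, s):
    """Split a frame sequence into consecutive chunks of length `s`
    (the final chunk may be shorter). Illustrative only; whether S in
    Table 8 denotes this length is an assumption.
    """
    return [frames[i:i + s] for i in range(0, len(frames), s)]


# Example: 20 frames with S = 8 yield chunks of 8, 8, and 4 frames.
print([len(chunk) for chunk in split_into_subvideos(list(range(20)), s=8)])
```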