Masked and Clustered Pre-Training for Geosynchronous Satellite Maneuver Detection
Abstract
1. Introduction
- We propose a masked prediction pre-training strategy for geosynchronous satellite maneuver detection. This strategy performs unsupervised pre-training by predicting missing values in the time and frequency domains from unmasked observations, thereby modeling temporal dependencies in the sequence data.
- We introduce a cluster-based pre-training strategy for maneuver detection. This strategy employs unsupervised clustering to separate clusters of different maneuver modes, effectively mitigating the impact of complex scenarios on training.
- Experimental results on simulated and real-world datasets show that the proposed MC-MD model consistently outperforms baseline methods in maneuver detection. An analysis of the reconstructed trajectories further confirms the model’s effectiveness.
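The dual-view idea behind the first contribution can be illustrated with a small sketch: hide a fraction of time steps, then score a reconstruction against targets in both the time and frequency domains. This is an assumption-laden simplification (zero-fill masking, plain MSE in each domain); the paper's exact masking scheme and loss weighting may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_pretrain_targets(x, mask_ratio=0.3):
    """Illustrative dual-view masking for a 1-D series `x` of length T:
    hide a fraction of time steps and form reconstruction targets in both
    the time and frequency domains (a sketch of the idea only)."""
    T = x.shape[0]
    mask = rng.random(T) < mask_ratio      # True = hidden from the encoder
    x_masked = np.where(mask, 0.0, x)      # zero-fill the masked steps
    time_target = x                        # reconstruct the full series
    freq_target = np.fft.rfft(x)           # and its spectrum (T//2+1 bins)
    return x_masked, mask, time_target, freq_target

def dual_domain_loss(pred_time, time_target, freq_target, mask):
    """MSE on the masked time steps plus MSE on the predicted spectrum."""
    time_loss = np.mean((pred_time[mask] - time_target[mask]) ** 2)
    pred_freq = np.fft.rfft(pred_time)
    freq_loss = np.mean(np.abs(pred_freq - freq_target) ** 2)
    return time_loss + freq_loss
```

A perfect reconstruction drives both terms to zero, so the combined loss rewards a model that captures temporal structure and spectral structure simultaneously.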
2. Materials and Methods
2.1. Background on Geosynchronous Satellite Maneuvers
2.2. Datasets
2.2.1. The Simulation Dataset
2.2.2. The Real-World Dataset
2.3. Architecture Overview
2.4. Mask Prediction Pre-Training
2.5. Similarity-Based Cluster Pre-Training
2.6. Overall Training
Algorithm 1: Our proposed MC-MD model
3. Results
3.1. Experimental Settings
3.1.1. Baselines
- Interactive Multiple Model (IMM) [45]: It maintains multiple Kalman filters, each representing a different motion model. It updates their probabilities using Bayesian inference and combines predictions. For maneuver detection, sharp changes in mode probabilities or filter residuals are treated as indicators of abnormal motion.
- Kalman Filter [46]: It estimates system states by predicting and updating with new observations. Maneuvers are identified by detecting significant or persistent residuals that deviate from expected motion.
- LSTM [47]: It models sequence dependencies through gated recurrent units. We use it to reconstruct satellite time series and compute the mean squared error between predicted and true values. High errors typically correspond to maneuver points.
- TS2Vec [48]: It learns timestamp-level representations through hierarchical contrastive learning on augmented context views, producing robust contextual embeddings. Sub-sequence representations are obtained by aggregating the embeddings of corresponding timestamps.
- TimesNet [49]: It captures multi-periodic patterns using 2D convolutions across time series. It reconstructs regular orbital behavior, and anomalies are identified based on large deviations from expected patterns.
- GPT4TS [50]: It fine-tunes a selected subset of parameters of a pre-trained Transformer for time series prediction. It learns to complete masked sequences; prediction errors signal unexpected behavior such as satellite maneuvers.
- ModernTCN [51]: It applies temporal convolutions with large kernels to model dependencies. Maneuver points are identified via high reconstruction or prediction error.
- iTransformer [52]: It applies attention and feed-forward layers on inverted dimensions, where time points are represented as variate tokens. This design enables the model to capture cross-variable dependencies and learn nonlinear token representations, yielding strong performance in time series forecasting.
- TimeMixer [53]: It uses multiscale mixing blocks to separate local and global temporal patterns. It forecasts future values, and discrepancies between predicted and observed data indicate potential maneuvers.
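Several of the baselines above (IMM, Kalman Filter, LSTM, TimesNet, ModernTCN, TimeMixer) flag maneuvers from filter or reconstruction residuals. A minimal sketch of that residual-based logic, using a toy 1-D constant-velocity Kalman filter with illustrative noise parameters (not the cited implementations):

```python
import numpy as np

def residual_maneuver_flags(z, q=1e-4, r=1e-2, thresh=3.0):
    """Toy constant-velocity Kalman filter over a scalar measurement series.
    Flags time steps whose normalized innovation exceeds `thresh` sigma,
    in the spirit of the residual-based baselines (parameters illustrative)."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # observe position only
    Q = q * np.eye(2)                        # process noise
    R = np.array([[r]])                      # measurement noise
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    flags = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # innovation (residual) and its variance
        y = zk - (H @ x)[0]
        S = (H @ P @ H.T + R)[0, 0]
        flags.append(abs(y) / np.sqrt(S) > thresh)
        # update
        K = (P @ H.T / S).ravel()
        x = x + K * y
        P = (np.eye(2) - np.outer(K, H[0])) @ P
    return np.array(flags)
```

On a smooth trajectory the innovations stay near zero; an impulsive change (here, a position jump standing in for a maneuver) produces a residual many sigma above the predicted spread and is flagged.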
3.1.2. Implementation Details
- Accuracy measures the proportion of correct predictions over all predictions: Accuracy = (TP + TN) / (TP + TN + FP + FN).
- Precision evaluates the correctness of positive predictions: Precision = TP / (TP + FP).
- Recall measures the ability to capture true positive instances: Recall = TP / (TP + FN).
- F1-Score is the harmonic mean of Precision and Recall: F1 = 2 × Precision × Recall / (Precision + Recall).
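The four metric definitions above translate directly into a small helper over confusion-matrix counts (TP, FP, FN, TN), with zero-division guarded:

```python
def detection_metrics(tp, fp, fn, tn):
    """Accuracy, Precision, Recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

For example, 8 true positives, 2 false positives, 2 false negatives, and 88 true negatives give Accuracy 0.96 and Precision = Recall = F1 = 0.8.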
3.2. Main Results in Maneuver Detection
3.3. Ablation Study
- w/o pre-training: This variant removes the entire pre-training stage and trains the model directly on the downstream maneuver detection task. This setting assesses the overall importance of our self-supervised pre-training strategy.
- w/o frequency mask loss (freq loss): We remove the frequency-domain loss component from the masked reconstruction pre-training. Only time-domain masking is applied. This allows us to evaluate the contribution of frequency masking.
- w/o cluster loss: This strategy eliminates the similarity-based clustering loss used during pre-training. The model is still trained with dual-view masked reconstruction, but without unsupervised cluster-based pre-training.
- w/o frequency mask and cluster loss (freq and cluster loss): Both the frequency-domain masking loss and the clustering loss are removed. The model is trained only with time-domain masked reconstruction, aiming to assess the joint impact of both components.
3.4. Reconstructed Trajectories Analysis
3.5. Frequency Modeling and Clustering Analysis
3.6. Hyperparameter Analysis
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- Roberts, T.G.; Linares, R. A survey of longitudinal-shift maneuvers performed by geosynchronous satellites from 2010 to 2021. In Proceedings of the 73rd International Astronautical Congress, Paris, France, 18–22 September 2022. [Google Scholar]
- Peng, H.; Bai, X. Improving orbit prediction accuracy through supervised machine learning. Adv. Space Res. 2018, 61, 2628–2646. [Google Scholar] [CrossRef]
- Peng, H.; Bai, X. Machine learning approach to improve satellite orbit prediction accuracy using publicly available data. J. Astronaut. Sci. 2020, 67, 762–793. [Google Scholar] [CrossRef]
- Han, H.; Dang, Z. Game-theoretic maneuvering strategies for orbital inspection of non-cooperative spacecraft in cislunar space. Chin. J. Aeronaut. 2025, 103574. [Google Scholar] [CrossRef]
- Roberts, T.G.; Solera, H.E.; Linares, R. Geosynchronous satellite behavior classification via unsupervised machine learning. In Proceedings of the 9th Space Traffic Management Conference, Austin, TX, USA, 1–2 March 2023; Volume 3. [Google Scholar]
- Zhou, H.; Wang, X.; Zhong, S. A satellite orbit maneuver detection and robust multipath mitigation method for GPS coordinate time series. Adv. Space Res. 2024, 74, 2784–2800. [Google Scholar] [CrossRef]
- Wang, S.A.; Zhang, H.; Cai, L.; Wang, Z.; An, Y. Research on Mass Center Identification for Gravitational Wave Detection Spacecraft with Guaranteed Laser Link Pointing Accuracy. Remote Sens. 2025, 17, 296. [Google Scholar] [CrossRef]
- Dai, X.; Lou, Y.; Dai, Z.; Hu, C.; Peng, Y.; Qiao, J.; Shi, C. Precise orbit determination for GNSS maneuvering satellite with the constraint of a predicted clock. Remote Sens. 2019, 11, 1949. [Google Scholar] [CrossRef]
- Sun, C.; Sun, Y.; Yu, X.; Fang, Q. Rapid Detection and Orbital Parameters’ Determination for Fast-Approaching Non-Cooperative Target to the Space Station Based on Fly-around Nano-Satellite. Remote Sens. 2023, 15, 1213. [Google Scholar] [CrossRef]
- Wang, L.; Sun, Z.; Wang, Y.; Wang, J.; Yan, C. Virtual-Integrated Admittance Control Method of Continuum Robot for Capturing Non-Cooperative Space Targets. Biomimetics 2025, 10, 281. [Google Scholar] [CrossRef]
- Maestrini, M.; Di Lizia, P. Guidance strategy for autonomous inspection of unknown non-cooperative resident space objects. J. Guid. Control. Dyn. 2022, 45, 1126–1136. [Google Scholar] [CrossRef]
- Qin, Z.; Zhang, Q.; Huang, G.; Tang, L.; Wang, J.; Wang, X. BDS Orbit Maneuver Detection Based on Epoch-Updated Orbits Estimated by SRIF. Remote Sens. 2023, 15, 2558. [Google Scholar] [CrossRef]
- Yihan, L.; Dong, S.; Han, Y.; Mu, Q.; Liu, X.; Qi, P. Research on Intent Recognition Method for Non-Cooperative Satellite Based on LSTM Network. In Proceedings of the 2023 China Automation Congress (CAC), Chongqing, China, 17–19 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 3854–3859. [Google Scholar]
- Wang, S.; Zhao, D.; Hong, H.; Sun, K. A Review of Space Target Recognition Based on Ensemble Learning. Aerospace 2025, 12, 278. [Google Scholar] [CrossRef]
- Roberts, T.G.; Rodriguez-Fernandez, V.; Siew, P.M.; Solera, H.E.; Linares, R. End-to-end behavioral mode clustering for geosynchronous satellites. In Proceedings of the 2023 Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 19–22 September 2023. [Google Scholar]
- Dehadraya, A.; Paliwal, V.; Malhotra, Y.; Vatsal, V. Geosynchronous Satellite Pattern-of-Life Characterization Through Machine Learning-based Mode Change Detection and Classification. In Proceedings of the 2024 IEEE Space, Aerospace and Defence Conference (SPACE), Bangalore, India, 22–23 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 235–239. [Google Scholar]
- Kelecy, T.; Hall, D.; Hamada, K.; Stocker, D. Satellite maneuver detection using Two-line Element (TLE) data. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference; Maui Economic Development Board (MEDB): Maui, HI, USA, 2007; pp. 1–10. [Google Scholar]
- Gong, B.; Jin, X.; Jiang, L.; Ren, M. Space-based passive orbital manoeuvre detection algorithm via a new characterization. In Proceedings of the Aerospace Europe Conference 2023—10th EUCASS—9th CEAS, Ecublens, Switzerland, 9–13 July 2023; pp. 1–16. [Google Scholar]
- Mukundan, A.; Wang, H.C. Simplified approach to detect satellite maneuvers using TLE data and simplified perturbation model utilizing orbital element variation. Appl. Sci. 2021, 11, 10181. [Google Scholar] [CrossRef]
- Solera, H.E.; Roberts, T.G.; Linares, R. Geosynchronous satellite pattern of life node detection and classification. In Proceedings of the 9th Space Traffic Management Conference, Austin, TX, USA, 1–2 March 2023. [Google Scholar]
- Austin Beer, K.S. Geosynchronous Satellite Maneuver Identification and Characterization using Passive RF Ranging. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), Maui, HI, USA, 14–17 September 2021; pp. 1–12. [Google Scholar]
- Pastor, A.; Escribano, G.; Sanjurjo-Rivo, M.; Escobar, D. Satellite maneuver detection and estimation with optical survey observations. J. Astronaut. Sci. 2022, 69, 879–917. [Google Scholar] [CrossRef]
- Cipollone, R.; Leonzio, I.; Calabrò, G.; Di Lizia, P. An LSTM-based Maneuver Detection Algorithm from Satellites Pattern of Life. In Proceedings of the 2023 IEEE 10th International Workshop on Metrology for AeroSpace (MetroAeroSpace), Milan, Italy, 19–21 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 78–83. [Google Scholar]
- Huo, Y.; Li, Z.; Fang, Y.; Zhang, F. Classification for geosynchronous satellites with deep learning and multiple kernel learning. Appl. Opt. 2019, 58, 5830–5838. [Google Scholar] [CrossRef]
- Mello, C.; Mendoza, M.; Camacho, L.; Eberhardt, D. Advancing Geosynchronous Satellite Classification Utilizing Spectral Data via Fine-Tuned Pretrained Deep Learning Models. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference, Maui, HI, USA, 17–20 September 2024; p. 32. [Google Scholar]
- Wang, Z.; Han, Y.; Zhang, Y.; Hao, J.; Zhang, Y. Classification and recognition method of non-cooperative objects based on deep learning. Sensors 2024, 24, 583. [Google Scholar] [CrossRef]
- Liu, Z.; Ma, P.; Chen, D.; Pei, W.; Ma, Q. Scale-teaching: Robust multi-scale training for time series classification with noisy labels. Adv. Neural Inf. Process. Syst. 2023, 36, 33726–33757. [Google Scholar]
- DiBona, P.; Foster, J.; Falcone, A.; Czajkowski, M. Machine learning for RSO maneuver classification and orbital pattern prediction. In Proceedings of the 2019 Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), Maui, HI, USA, 17–20 September 2019. [Google Scholar]
- Roberts, T.G.; Linares, R. Geosynchronous satellite maneuver classification via supervised machine learning. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 14–17 September 2021. [Google Scholar]
- Kelecy, T.; Abernethy, S.; Jones, F.; Gerber, E.; Wurzel, H. Predicted intent inferred from real-time rendezvous and proximity behavior. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), Maui, HI, USA, 27–30 September 2022. [Google Scholar]
- Li, F.; Zhao, Y.; Zhang, J.; Zhang, Z.; Wu, D. A station-keeping maneuver detection method of non-cooperative geosynchronous satellites. Adv. Space Res. 2024, 73, 160–169. [Google Scholar] [CrossRef]
- Tang, C.; Chen, Y.; Chen, G.; Du, L.; Liu, H. A Dynamic and Collaborative Spectrum Sharing Strategy Based on Multi-Agent DRL in Satellite-Terrestrial Converged Networks. IEEE Trans. Veh. Technol. 2024, 74, 7969–7984. [Google Scholar] [CrossRef]
- Ma, Q.; Liu, Z.; Zheng, Z.; Huang, Z.; Zhu, S.; Yu, Z.; Kwok, J.T. A Survey on Time-Series Pre-Trained Models. IEEE Trans. Knowl. Data Eng. 2024, 36, 7536–7555. [Google Scholar] [CrossRef]
- He, K.; Chen, X.; Xie, S.; Li, Y.; Dollár, P.; Girshick, R. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 16000–16009. [Google Scholar]
- Li, Y.; Fan, H.; Hu, R.; Feichtenhofer, C.; He, K. Scaling language-image pre-training via masking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 23390–23400. [Google Scholar]
- Jawahar, G.; Sagot, B.; Seddah, D. What Does BERT Learn about the Structure of Language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; pp. 3651–3657. [Google Scholar]
- Zhou, C.; Li, Q.; Li, C.; Yu, J.; Liu, Y.; Wang, G.; Zhang, K.; Ji, C.; Yan, Q.; He, L.; et al. A comprehensive survey on pretrained foundation models: A history from bert to chatgpt. Int. J. Mach. Learn. Cybern. 2024, 1–65. [Google Scholar] [CrossRef]
- Araguz, C.; Bou-Balust, E.; Alarcón, E. Applying autonomy to distributed satellite systems: Trends, challenges, and future prospects. Syst. Eng. 2018, 21, 401–416. [Google Scholar] [CrossRef]
- Dong, J.; Liu, P.; Wang, B.; Jin, Y. Detection of Flight Target via Multistatic Radar Based on Geosynchronous Orbit Satellite Irradiation. Remote Sens. 2024, 16, 4582. [Google Scholar] [CrossRef]
- MIT ARCLab Prize for AI Innovation in Space 2024. 2024. Available online: https://eval.ai/web/challenges/challenge-page/2164/overview (accessed on 10 December 2023).
- Liu, Z.; Ma, Q.; Ma, P.; Wang, L. Temporal-frequency co-training for time series semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 8923–8931. [Google Scholar]
- Wu, X.; Qiu, X.; Li, Z.; Wang, Y.; Hu, J.; Guo, C.; Xiong, H.; Yang, B. CATCH: Channel-Aware multivariate Time Series Anomaly Detection via Frequency Patching. In Proceedings of the International Conference on Learning Representations, Singapore, 24–28 April 2025. [Google Scholar]
- Wang, Z. Fast algorithms for the discrete W transform and for the discrete Fourier transform. IEEE Trans. Acoust. Speech, Signal Process. 1984, 32, 803–816. [Google Scholar] [CrossRef]
- Ahmed, M.; Seraj, R.; Islam, S.M.S. The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 2020, 9, 1295. [Google Scholar] [CrossRef]
- Johnston, L.A.; Krishnamurthy, V. An improvement to the interacting multiple model (IMM) algorithm. IEEE Trans. Signal Process. 2001, 49, 2909–2923. [Google Scholar] [CrossRef] [PubMed]
- Yang, Y.; Yue, X.; Dempster, A.G. GPS-based onboard real-time orbit determination for LEO satellites using consider Kalman filter. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 769–777. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- Yue, Z.; Wang, Y.; Duan, J.; Yang, T.; Huang, C.; Tong, Y.; Xu, B. Ts2vec: Towards universal representation of time series. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 8980–8987. [Google Scholar]
- Wu, H.; Hu, T.; Liu, Y.; Zhou, H.; Wang, J.; Long, M. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. In Proceedings of The Eleventh International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
- Zhou, T.; Niu, P.; Wang, X.; Sun, L.; Jin, R. One fits all: Power general time series analysis by pretrained lm. Adv. Neural Inf. Process. Syst. 2023, 36, 43322–43355. [Google Scholar]
- Luo, D.; Wang, X. ModernTCN: A modern pure convolution structure for general time series analysis. In Proceedings of The Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024; pp. 1–43. [Google Scholar]
- Liu, Y.; Hu, T.; Zhang, H.; Wu, H.; Wang, S.; Ma, L.; Long, M. iTransformer: Inverted Transformers Are Effective for Time Series Forecasting. In Proceedings of The Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
- Wang, S.; Wu, H.; Shi, X.; Hu, T.; Luo, H.; Ma, L.; Zhang, J.Y.; Zhou, J. TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting. In Proceedings of The Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
ID | Channel Name | Data Point Example |
---|---|---|
1 | Timestamp | 2022-09-01 00:00:00.000000Z |
2 | Eccentricity | 0.000201808 |
3 | Semimajor Axis (m) | 42,165,366.78 |
4 | Inclination (deg) | 0.139821561 |
5 | RAAN (deg) | 94.42733645 |
6 | Argument of Periapsis (deg) | 52.73309187 |
7 | True Anomaly (deg) | 277.8109802 |
8 | Latitude (deg) | −0.014460426 |
9 | Longitude (deg) | 85.11950612 |
10 | Altitude (m) | 35,786,071.64 |
11 | Position X (m) | 17,838,370.02 |
12 | Position Y (m) | 38,204,848.97 |
13 | Position Z (m) | −50,599.11012 |
14 | Velocity Vx (m/s) | −2786.228658 |
15 | Velocity Vy (m/s) | 1300.258746 |
16 | Velocity Vz (m/s) | 6.534142357 |
Datasets | Trajectories | Total Samples | Training | Validation | Test | Channels |
---|---|---|---|---|---|---|
Simulation | 21,138 | 83,300 | 58,310 | 8330 | 16,660 | 16 |
Real-world | 1945 | 10,000 | 7000 | 1000 | 2000 | 16 |
Module | Layer/Component | Input | Output |
---|---|---|---|
Forward Module | RevIN Normalization | B × T × N (Raw Series) | B × T × N (Normalized Series) |
 | FFT Transform | B × T × N | B × (T/2+1) × N (Complex Spectrum) |
 | Frequency Separation | B × (T/2+1) × N (Complex) | B × (T/2+1) × N (Real), B × (T/2+1) × N (Imag) |
 | Patching | B × (T/2+1) × N | B × N × P × patch_size |
 | Projection Layer | B × N × P × patch_size | B × N × P × d_model |
Channel Fusion Module | Mask Generator | B × N × P × d_model | B × N × N (Channel Mask) |
 | Channel-Masked Transformer | B × N × P × d_model | B × N × P × d_model |
Time-Frequency Reconstruction Module | Flatten Layer | B × N × P × d_model | B × N × (P × d_model) |
 | Dual Linear Heads | B × N × (P × d_model) | B × N × (T/2+1) (Real), B × N × (T/2+1) (Imag) |
 | Complex Fusion | B × N × (T/2+1) (Real + Imag) | B × (T/2+1) × N (Complex Spectrum) |
 | Inverse DFT | B × (T/2+1) × N (Complex) | B × T × N (Time Domain) |
 | RevIN Denormalization | B × T × N | B × T × N (Final Output) |
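The shape bookkeeping in the forward module can be checked with a small numpy sketch. The concrete values of B, T, N, and patch_size below are illustrative, as is the zero-padding used to make the T/2+1 frequency bins divisible into patches; only the tensor shapes mirror the table above.

```python
import numpy as np

B, T, N = 4, 128, 16            # batch, series length, channels (illustrative)
patch_size = 8

x = np.random.randn(B, T, N)                  # raw series: B × T × N
spec = np.fft.rfft(x, axis=1)                 # B × (T/2+1) × N, complex
real, imag = spec.real, spec.imag             # two real-valued views

Fbins = T // 2 + 1                            # 65 frequency bins
# pad the frequency axis to a multiple of patch_size, then split into
# P patches of length patch_size per channel (illustrative patching)
pad = (-Fbins) % patch_size
real_padded = np.pad(real, ((0, 0), (0, pad), (0, 0)))
P = real_padded.shape[1] // patch_size
patches = real_padded.transpose(0, 2, 1).reshape(B, N, P, patch_size)

assert spec.shape == (B, Fbins, N)
assert patches.shape == (B, N, P, patch_size)

# round trip: the inverse DFT recovers the time-domain series (B × T × N)
x_rec = np.fft.irfft(spec, n=T, axis=1)
assert np.allclose(x_rec, x)
```

The round trip confirms why the reconstruction module can emit real and imaginary heads of size T/2+1 per channel and still recover a full-length time-domain output after the inverse DFT.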
Method | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
IMM | 0.9906 ± 0.0003 | 0.5571 ± 0.0001 | 0.5731 ± 0.0000 | 0.5650 ± 0.0000 |
Kalman Filter | 0.9898 ± 0.0001 | 0.5202 ± 0.0000 | 0.5351 ± 0.0000 | 0.5275 ± 0.0000 |
LSTM | 0.9957 ± 0.0006 | 0.7889 ± 0.0011 | 0.8115 ± 0.0015 | 0.8001 ± 0.0012 |
TS2Vec | 0.9963 ± 0.0007 | 0.8918 ± 0.0004 | 0.8169 ± 0.0007 | 0.8527 ± 0.0006 |
TimesNet | 0.9963 ± 0.0003 | 0.8173 ± 0.0008 | 0.8407 ± 0.0004 | 0.8289 ± 0.0005 |
GPT4TS | 0.9957 ± 0.0011 | 0.7918 ± 0.0003 | 0.8144 ± 0.0008 | 0.8029 ± 0.0004 |
iTransformer | 0.9961 ± 0.0004 | 0.8138 ± 0.0002 | 0.8070 ± 0.0003 | 0.8104 ± 0.0003 |
ModernTCN | 0.9961 ± 0.0002 | 0.8105 ± 0.0001 | 0.8337 ± 0.0002 | 0.8220 ± 0.0001 |
TimeMixer | 0.9964 ± 0.0003 | 0.8205 ± 0.0004 | 0.8440 ± 0.0008 | 0.8320 ± 0.0005 |
MC-MD (Ours) | 0.9978 ± 0.0003 | 0.9444 ± 0.0003 | 0.8390 ± 0.0003 | 0.8886 ± 0.0003 |
Method | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
IMM | 0.9906 ± 0.0004 | 1.0000 ± 0.0002 | 0.5909 ± 0.0001 | 0.7428 ± 0.0001 |
Kalman Filter | 0.9898 ± 0.0001 | 0.9177 ± 0.0000 | 0.7458 ± 0.0000 | 0.8228 ± 0.0000 |
LSTM | 0.9956 ± 0.0006 | 0.9643 ± 0.0012 | 0.8136 ± 0.0004 | 0.8825 ± 0.0008 |
TS2Vec | 0.9960 ± 0.0003 | 0.8934 ± 0.0010 | 0.8127 ± 0.0003 | 0.8511 ± 0.0007 |
TimesNet | 0.9963 ± 0.0012 | 0.8656 ± 0.0009 | 0.8316 ± 0.0003 | 0.8535 ± 0.0006 |
GPT4TS | 0.9957 ± 0.0005 | 0.9219 ± 0.0004 | 0.8141 ± 0.0002 | 0.8647 ± 0.0003 |
iTransformer | 0.9964 ± 0.0003 | 0.8939 ± 0.0004 | 0.8289 ± 0.0008 | 0.8602 ± 0.0006 |
ModernTCN | 0.9961 ± 0.0005 | 0.9121 ± 0.0003 | 0.8340 ± 0.0001 | 0.8713 ± 0.0002 |
TimeMixer | 0.9964 ± 0.0007 | 1.0000 ± 0.0005 | 0.8057 ± 0.0009 | 0.8924 ± 0.0006 |
MC-MD (Ours) | 0.9978 ± 0.0004 | 0.9564 ± 0.0003 | 0.8399 ± 0.0007 | 0.9003 ± 0.0004 |
Method | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
IMM | 0.9906 ± 0.0003 | 0.9324 ± 0.0001 | 0.5918 ± 0.0000 | 0.7453 ± 0.0001 |
Kalman Filter | 0.9898 ± 0.0001 | 0.7458 ± 0.0000 | 0.8287 ± 0.0001 | 0.7851 ± 0.0000 |
LSTM | 0.9957 ± 0.0010 | 0.9553 ± 0.0012 | 0.8177 ± 0.0002 | 0.8811 ± 0.0008 |
TS2Vec | 0.9960 ± 0.0006 | 0.9134 ± 0.0004 | 0.8263 ± 0.0007 | 0.8677 ± 0.0005 |
TimesNet | 0.9963 ± 0.0007 | 0.8997 ± 0.0003 | 0.8416 ± 0.0002 | 0.8697 ± 0.0003 |
GPT4TS | 0.9957 ± 0.0004 | 0.9591 ± 0.0006 | 0.8144 ± 0.0006 | 0.8808 ± 0.0006 |
iTransformer | 0.9973 ± 0.0011 | 0.9208 ± 0.0006 | 0.8356 ± 0.0007 | 0.8761 ± 0.0006 |
ModernTCN | 0.9961 ± 0.0004 | 0.9828 ± 0.0006 | 0.8343 ± 0.0012 | 0.9025 ± 0.0009 |
TimeMixer | 0.9964 ± 0.0004 | 1.0000 ± 0.0010 | 0.8408 ± 0.0005 | 0.9109 ± 0.0007 |
MC-MD (Ours) | 0.9978 ± 0.0007 | 1.0000 ± 0.0004 | 0.8431 ± 0.0006 | 0.9149 ± 0.0004 |
Method | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
w/o pre-training | 0.9974 ± 0.0003 | 0.9260 ± 0.0005 | 0.8226 ± 0.0007 | 0.8712 ± 0.0006 |
w/o freq loss | 0.9976 ± 0.0001 | 0.9372 ± 0.0004 | 0.8326 ± 0.0004 | 0.8818 ± 0.0004 |
w/o cluster loss | 0.9966 ± 0.0003 | 0.8812 ± 0.0004 | 0.7829 ± 0.0004 | 0.8292 ± 0.0004 |
w/o freq and cluster loss | 0.9965 ± 0.0002 | 0.8809 ± 0.0007 | 0.7826 ± 0.0002 | 0.8288 ± 0.0004 |
MC-MD (Ours) | 0.9978 ± 0.0003 | 0.9444 ± 0.0003 | 0.8390 ± 0.0003 | 0.8886 ± 0.0003 |
Method | Accuracy | Precision | Recall | F1-Score |
---|---|---|---|---|
HHT | 0.9928 ± 0.0003 | 0.8907 ± 0.0008 | 0.8117 ± 0.0006 | 0.8494 ± 0.0006 |
STFT | 0.9934 ± 0.0005 | 0.8973 ± 0.0007 | 0.7263 ± 0.0011 | 0.8028 ± 0.0008 |
WPT | 0.9971 ± 0.0004 | 0.9001 ± 0.0006 | 0.8055 ± 0.0002 | 0.8502 ± 0.0004 |
STWPT | 0.9951 ± 0.0008 | 0.8831 ± 0.0005 | 0.8190 ± 0.0004 | 0.8498 ± 0.0004 |
DFT(Ours) | 0.9978 ± 0.0003 | 0.9444 ± 0.0003 | 0.8390 ± 0.0003 | 0.8886 ± 0.0003 |
Method | Accuracy | Precision | Recall | F1-Score | Training Time | Inference Time |
---|---|---|---|---|---|---|
K-means + Att | 0.9978 ± 0.0003 | 0.9444 ± 0.0003 | 0.8390 ± 0.0003 | 0.8886 ± 0.0003 | 0.5 ± 0.05 | 0.19 ± 0.02 |
K-means + Raw | 0.9972 ± 0.0001 | 0.9158 ± 0.0006 | 0.8136 ± 0.0004 | 0.8617 ± 0.0004 | 72 ± 3 | 0.19 ± 0.02 |
Random | 0.9970 ± 0.0004 | 0.8342 ± 0.0008 | 0.7757 ± 0.0003 | 0.8039 ± 0.0006 | 50 ± 3 | 0.19 ± 0.02 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tian, S.-H.; Fang, Y.-Q.; Diao, H.-F.; Luo, D.; Zhang, Y.-S. Masked and Clustered Pre-Training for Geosynchronous Satellite Maneuver Detection. Remote Sens. 2025, 17, 2994. https://doi.org/10.3390/rs17172994