Review

Light-Curve Classification of Resident Space Objects for Space Situational Awareness: A Scoping Review

Department of Earth and Space Science and Engineering, York University, Toronto, ON M3J 1P3, Canada
* Author to whom correspondence should be addressed.
Aerospace 2026, 13(3), 287; https://doi.org/10.3390/aerospace13030287
Submission received: 9 February 2026 / Revised: 11 March 2026 / Accepted: 16 March 2026 / Published: 18 March 2026
(This article belongs to the Special Issue Advances in Space Surveillance and Tracking)

Abstract

The proliferation of Resident Space Objects (RSOs), including satellites, rocket bodies, and debris, poses escalating challenges for Space Situational Awareness (SSA). Optical light curves capture temporal brightness variations influenced by factors such as attitude variation, viewing geometry, and surface properties. When appropriately processed and analyzed, these data can support RSO characterization and classification. This paper presents a scoping review of machine learning (ML) and deep learning (DL) methods for RSO classification using light-curve data. From 297 peer-reviewed studies published between 2014 and 2025, a screened subset of 29 works is selected for detailed methodological comparison. We trace the methodological evolution from handcrafted feature engineering toward convolutional, recurrent, and self-supervised models that learn representations directly from photometric time series. An analysis of three publicly accessible databases, Mini-Mega TORTORA, Space Debris Light-Curve Database, and Ukrainian Database, reveals pronounced class imbalance, with payloads comprising over 80% of observations. While models trained on simulated data routinely achieve 95–99% accuracy, performance on measured light curves degrades to 75–92%, exposing a persistent gap between simulation and observation. We further identify data scarcity, repeated observations of the same objects, and inconsistent evaluation protocols as key barriers to reproducible benchmarking. Future progress will require benchmark-ready, sensor-aware datasets spanning diverse orbital regimes and viewing geometries, alongside physics-informed and transfer-learning approaches that improve robustness across sensors and between synthetic and observational domains.

1. Introduction

As of June 2025, more than 14,600 active and inactive satellites and approximately 15,000 cataloged debris fragments orbit Earth [1]. The rapid spread of space objects, driven by mega-constellation deployments and increased commercial activity, has heightened the probability of on-orbit collisions [2]. Each collision can generate thousands of secondary fragments, potentially initiating a cascading sequence of impacts known as the Kessler Syndrome [3]. These developments underscore the critical importance of Space Situational Awareness (SSA), which involves monitoring, characterizing, and predicting the behavior of Resident Space Objects (RSOs) to ensure the safety and sustainability of the orbital environment. SSA relies on multiple sensing modalities. Radar systems provide robust detection and tracking capability in Low Earth Orbit, but their sensitivity decays with the fourth power of distance, limiting their usefulness at higher altitudes and restricting their ability to infer attitude states or surface properties. Laser-ranging systems offer high-precision distance measurements but require active illumination, strict safety protocols, and precise pointing control [4].
In contrast, optical systems, both ground- and space-based, provide passive, scalable monitoring of RSOs across Low Earth Orbit (LEO), Medium Earth Orbit (MEO), Geostationary Orbit (GEO), and Highly Elliptical Orbit (HEO). Optical sensors measure reflected solar radiation and can reveal dynamical and structural information without requiring cooperative instrumentation [5]. After calibration, optical measurements are converted into photometric time series known as light curves. The light curve of an RSO reflects a complex interplay of physical and observational factors, including geometry [6,7], material composition [8], surface reflectivity [9,10], size [11], solar-panel configuration [8], attitude motion [12,13], phase angle [14], and atmospheric conditions [9]. Periodic signatures can reveal whether an RSO is three-axis stabilized, spin-stabilized, or tumbling, while irregular patterns can indicate structural anomalies or operational events [15,16]. For these reasons, light-curve analysis has long been recognized as a key non-cooperative technique for inferring the physical state and dynamic behavior of RSOs within large orbital catalogs [17]. An overview of the end-to-end optical photometry workflow for producing machine-learning-ready light curves is shown in Figure 1. Beyond sensing, recent SSA work also advances AI-enabled mission planning and trajectory optimization [18,19].
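The attitude cues mentioned above often manifest as periodicity in the photometric signal. As a minimal, illustrative sketch (not a method drawn from the reviewed studies), the dominant flash period of an evenly sampled light curve can be recovered from the first peak of its autocorrelation function; the function name and the synthetic 12 s signal below are hypothetical:

```python
import math

def autocorr_period(mags, dt):
    """Estimate the dominant brightness period of an evenly sampled
    light curve via the first local peak of its autocorrelation.

    mags: magnitudes sampled at a fixed cadence dt (seconds).
    Returns the estimated period in seconds, or None if no clear
    periodicity is found. Real survey data are irregularly sampled,
    so operational pipelines use periodogram methods instead.
    """
    n = len(mags)
    mean = sum(mags) / n
    x = [m - mean for m in mags]
    var = sum(v * v for v in x)
    if var == 0:
        return None
    # Normalized autocorrelation at each integer lag up to n/2
    acf = [sum(x[i] * x[i + lag] for i in range(n - lag)) / var
           for lag in range(n // 2)]
    # First local maximum after the zero-lag peak, above a noise floor
    for lag in range(1, len(acf) - 1):
        if acf[lag] > acf[lag - 1] and acf[lag] >= acf[lag + 1] and acf[lag] > 0.2:
            return lag * dt
    return None

# Synthetic tumbling-like signature: 12 s flash period sampled at 0.5 s
dt = 0.5
mags = [math.sin(2 * math.pi * t * dt / 12.0) for t in range(400)]
period = autocorr_period(mags, dt)  # close to 12 s
```

A stable, three-axis-stabilized object would instead yield a slowly varying curve with no autocorrelation peak, which is the intuition behind many of the stable-versus-tumbling classifiers reviewed later.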
In recent years, the rapid expansion of global optical surveillance networks has produced large volumes of photometric data, rendering manual interpretation increasingly impractical. This has driven the adoption of machine learning (ML) and deep learning (DL) techniques for automated light-curve analysis [20]. ML/DL methods enable scalable inference and can learn temporal structure that is difficult to capture with handcrafted rules or simplified analytical models [20]. A simplified taxonomy of existing light-curve-based RSO studies is illustrated in Figure 2, comprising three broad methodological categories: classification methods, physical characterization frameworks, and hybrid pipelines that integrate characterization results into ML classifiers. A more detailed discussion of light-curve physics, problem formulations, and classification objectives is deferred to Section 2.
While this taxonomy highlights diverse research directions, most existing studies focus on narrow and visually well-separated categories such as distinguishing stable from tumbling objects or classifying object types. A key challenge, not systematically addressed in prior surveys, is the semantic gap between the multi-attribute information required for operational SSA and the single-task label spaces supported by existing light-curve datasets. In practice, SSA decisions benefit from several semantic axes, such as object type (payload, rocket body, and debris), coarse shape proxies (box-like, cylindrical, plate-like, and irregular), attitude regimes (three-axis stabilized, spin-stabilized, slow/fast tumbling, chaotic rotation, and nutation), and operational state (active, inactive, maneuvering, and malfunctioning), each of which can influence orbit prediction, anomaly detection, and conjunction assessment [21].
However, most publicly available light-curve datasets expose only a small number of classes (typically 3–8) for supervised learning, often coarse and visually separable (e.g., stable vs. tumbling, or object type). Moreover, no public dataset provides unified, track-level, multi-label annotations that jointly encode object type, attitude, shape, and operational state in a standardized ML-ready format [22]. This mismatch limits the operational relevance of current ML-based classification frameworks and underscores the need for richer datasets, improved labeling practices, and more robust learning paradigms.
This disconnect between operational SSA requirements and available dataset semantics motivates a structured reassessment of light-curve-based RSO classification. Accordingly, this paper presents a systematic scoping review of ML/DL approaches for light-curve-based RSO classification, with particular emphasis on the semantic gap between operational SSA needs and existing dataset label spaces. Specifically, we (1) survey existing photometric datasets and analyze their labeling schemes and limitations; (2) synthesize state-of-the-art ML/DL approaches for classification, characterization, and hybrid modeling; and (3) propose a three-tier semantic roadmap describing what current light-curve methods can achieve, what researchers should target in the near term, and what SSA ultimately requires for operational capability. The insights presented here aim to guide the development of next-generation benchmarks, datasets, and learning frameworks capable of supporting scalable, reliable, and mission-relevant SSA systems.
The remainder of this article is organized as follows. Section 2 provides background on light-curve physics and classification objectives. Section 3 outlines the study selection methodology. Section 4 reviews publicly available light-curve datasets and evaluates their annotation practices. Section 5 synthesizes ML and DL approaches to RSO classification. Section 6 discusses the current challenges, including semantic limitations and dataset biases. Section 7 outlines future research directions for advancing light-curve-based SSA toward operational capability. Finally, Section 8 concludes the paper by summarizing key findings and their implications for scalable and reliable SSA systems.

2. Context and Taxonomy of Light-Curve-Based RSO Classification

Before reviewing specific datasets and machine learning methods, this section establishes the background context and terminology for light-curve-based RSO classification and introduces a taxonomy used to organize prior work.
Light-curve analysis has played a foundational role in astronomy for more than a century, where variations in observed brightness have been used to infer the physical and dynamical properties of celestial objects. Examples range from determining the periods of variable stars [23] to estimating the rotation and amplitude of asteroids [24] and detecting exoplanets through periodic transit signatures. As Earth’s orbital environment has become increasingly congested, these established photometric techniques have been adapted to address the needs of SSA, providing the conceptual foundation for modern light-curve-based classification studies. Within this context, light curves offer a non-cooperative mechanism for assessing the physical condition, operational status, and attitude behavior of RSOs, thereby providing essential information for monitoring an environment characterized by rapid growth in the number and diversity of orbiting objects.
To provide a coherent conceptual structure for interpreting the diverse body of research that uses light curves for RSO analysis, Figure 2 presents a high-level taxonomy of methodological directions in the literature. Although studies often span multiple categories, three recurring approaches can be identified. One category focuses on classification, in which RSOs are assigned discrete labels such as attitude state, object type, spacecraft family, or platform. A second category seeks to characterize physical or optical properties—including spin rate, reflectivity, spectral features, and approximate size—directly from photometric signatures. A third category combines these two aims, using characterization outputs as features within machine learning classification pipelines. While these distinctions are not always rigid, the taxonomy in Figure 2 provides a structured lens for interpreting the diverse analytical strategies reported in the literature. Figure 3 complements this taxonomy by visualizing the distribution and co-occurrence of classification labels across prior studies. The labels span multiple semantic levels, including attitude states (e.g., stable or tumbling), spacecraft platforms (e.g., Starlink, Iridium, OneWeb, and Globalstar), program names (e.g., Nimbus), and standardized bus families (e.g., A2100, HS-601, HS-702, DFH-3, DS-2000, and LS-400). Node size reflects the frequency of each label, while link density indicates how often labels co-occur within the same study. This visualization reveals a highly uneven distribution of ground-truth labels, with most studies emphasizing attitude-state discrimination and comparatively few addressing higher-level semantic categories such as program lineage, bus architecture, or structural surrogates. Together, these visualizations motivate the need for consistent semantic definitions and inform the methodological comparisons presented in later sections.
Attitude classification remains one of the most recurrent themes in the literature. A stable RSO maintains a controlled and predictable orientation either through spin stabilization, where the rotation is about a principal axis at nearly constant angular velocity, or through three-axis stabilization, where the spacecraft maintains a fixed orientation relative to an inertial frame or to Earth. Conversely, a tumbling object exhibits uncontrolled, multi-axis rotation without a consistent principal axis [25]. Some works refine this definition into subcategories such as nadir-pointing, sun-pointing, velocity-aligned, anti-velocity, and zenith orientations. These hierarchical relationships reveal how the stable and tumbling categories branch into more specific operational states. Beyond attitude state, the taxonomy also captures program-level, platform-level, and bus-level designations. Program names such as Nimbus refer to families of spacecraft sharing a common mission lineage. In contrast, satellite bus labels such as DFH-3, HS-601, DS-2000, and LS-400 denote standardized spacecraft architectures upon which different payloads can be mounted. Recurring platform types—such as Starlink, Iridium, OneWeb, Globalstar, NOAA JPSS, Yaogan-31, Navstar, Haiyang-2, Fengyun-3, Shijian-3-4, DMSP 5B/5C, and Yunhai-2—appear frequently as classification labels and are treated consistently following the hierarchy defined by [26]. Classification labels were cross-validated against the satellite catalog maintained by [27] to maintain consistency in distinguishing between program names, satellite buses, operational platforms, and individual spacecraft. Although the orbital regime (e.g., LEO, MEO, GEO, and HEO) is not typically used as a direct classification label in machine learning models, it appears in many studies as contextual information that helps interpret the physical and functional distribution of RSOs. 
Including orbital regime information therefore provides additional context for interpreting how classification labels relate to orbital behavior. The growing heterogeneity of RSO classification labels in the literature underscores the need for a standardized taxonomy. Without such a framework, comparing machine learning and deep learning approaches becomes difficult, particularly given the wide variability in label granularity and annotation practices across existing datasets. Together, Figure 2 and Figure 3 establish a unified conceptual foundation for comparing classification objectives, interpreting reported results, and assessing the semantic limitations that shape current machine learning evaluations as examined in subsequent sections.

3. Methodology

This review follows a systematic literature survey workflow guided by the PRISMA Extension for Scoping Reviews (PRISMA-ScR) framework [28] to identify, screen, and synthesize studies applying machine learning or deep learning methods to the classification of RSOs using optical light curves. The resulting corpus forms the basis for the analyses presented in Section 5. The methodology comprises database retrieval, citation-based expansion, multi-stage screening using explicit eligibility criteria, and structured data extraction to enable cross-study comparison.

3.1. Search Strategy and Database Coverage

A primary literature search was conducted using Google Scholar and Elsevier’s Engineering Village, selected for their broad coverage of aerospace engineering, astronomy, and machine learning venues. Engineering Village was used to access indexed databases including Compendex and Inspec. The primary database search was conducted in March 2025. To reduce the likelihood of missed studies due to inconsistent indexing or disciplinary fragmentation, the search was supplemented with Connected Papers for co-citation analysis as well as backward and forward citation chaining from highly cited seed papers. A final targeted update search was performed on 1 February 2026 to identify newly published studies meeting the same inclusion criteria. Search queries were constructed using four keyword families summarized in Table 1: object terms, signal terms, task terms, and method terms. The exact database-specific search strings and the dates on which each search was executed are provided in Appendix A.
These families were combined using Boolean AND operators to construct database-specific query strings of the form
(object terms) AND (signal terms) AND (task terms) AND (method terms)
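This query construction can be sketched programmatically. The keyword families below are hypothetical stand-ins for the actual terms in Table 1 (the exact database-specific strings are given in Appendix A):

```python
# Hypothetical keyword families echoing the structure of Table 1
families = {
    "object": ["resident space object", "space debris", "satellite"],
    "signal": ["light curve", "photometric time series"],
    "task":   ["classification", "characterization"],
    "method": ["machine learning", "deep learning", "neural network"],
}

def build_query(families):
    """Join terms within each family with OR, then combine the
    parenthesized families with AND, yielding a Boolean search string."""
    groups = ['(' + ' OR '.join(f'"{t}"' for t in terms) + ')'
              for terms in families.values()]
    return ' AND '.join(groups)

query = build_query(families)
```

Each database interface (Google Scholar, Engineering Village) accepts a slightly different syntax, which is why the executed strings are reported separately in Appendix A.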

3.2. Screening Workflow and Study Selection

All retrieved records were exported into a structured spreadsheet and deduplicated using title, author, year, and publication venue metadata. Screening proceeded in two stages. First, titles and abstracts were reviewed to remove studies that were clearly out of scope. Second, the remaining articles were assessed through full-text review against the inclusion and exclusion criteria described below. A PRISMA-style flow diagram summarizing the identification, screening, exclusion reasons, and final inclusion counts is provided in Figure 4. A total of 297 records were identified through database searching and citation chaining. After the removal of 152 duplicate records arising from overlapping database coverage and inconsistent bibliographic metadata, 145 unique records remained for title and abstract screening. Of these, 38 studies met the inclusion criteria and were selected for full-text assessment. Only 29 studies satisfied the eligibility requirements and constitute the final corpus of works systematically reviewed in this paper. The included studies are [15,16,20,22,26,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52].
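The metadata-based deduplication step described above can be illustrated with a minimal sketch; the record fields and helper name below are hypothetical, not the review's actual tooling:

```python
def dedupe(records):
    """Remove duplicate bibliographic records using a normalized
    (title, first author, year) key, keeping the first occurrence.
    Normalization (strip + lowercase) absorbs the inconsistent
    bibliographic metadata produced by overlapping databases."""
    seen, unique = set(), []
    for rec in records:
        key = (rec["title"].strip().lower(),
               rec["authors"][0].strip().lower(),
               rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Toy records: the first two differ only in casing/whitespace
records = [
    {"title": "Light-Curve Classification of RSOs", "authors": ["Smith"], "year": 2021, "venue": "A"},
    {"title": "light-curve classification of rsos ", "authors": ["Smith"], "year": 2021, "venue": "B"},
    {"title": "Another Study", "authors": ["Lee"], "year": 2020, "venue": "C"},
]
unique = dedupe(records)  # 2 records remain
```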
Although a substantial body of literature addresses photometric characterization or attitude estimation of RSOs, only a limited subset explicitly formulates light-curve analysis as a supervised or representation-learning classification problem, which explains the relatively small number of studies retained in the final review corpus. This review follows a systematic survey workflow inspired by PRISMA but does not perform a quantitative meta-analysis, as the included studies employ heterogeneous datasets, label definitions, and evaluation protocols that preclude statistically normalized performance aggregation. Title/abstract screening and full-text eligibility assessment were conducted by the primary reviewer (M.H.), with inclusion decisions discussed and validated with co-authors when ambiguity arose. Consistent with PRISMA-ScR guidance for scoping reviews, no formal quality appraisal of individual studies was performed.

3.3. Inclusion and Exclusion Criteria

Studies were included if they (i) were peer-reviewed journal or conference publications from 2014 to 2025 (with preprints excluded unless a peer-reviewed version was available), (ii) used optical light curves or photometric time series as the primary input signal, and (iii) applied machine learning or deep learning methods to assign discrete labels to RSOs, such as attitude state (e.g., stable vs. tumbling), object type, platform or bus family, rocket-body subtype, or related categorical taxonomies.
Studies were excluded if they focused exclusively on physical parameter estimation without categorical labeling (e.g., reflectivity or Bidirectional Reflectance Distribution Function (BRDF) modeling), addressed only image-level preprocessing or light-curve extraction without ML/DL-based classification, investigated astronomical targets outside the SSA context, or applied light curves solely for anomaly detection without a classification objective. A limited number of doctoral theses were included when they introduced foundational datasets or methodological pipelines frequently cited but not fully documented in peer-reviewed venues; in this review, this included the doctoral thesis by [52]. These sources were treated as limited supporting gray literature rather than as substitutes for the peer-reviewed core corpus.

3.4. Data Extraction and Coding Framework

For each included study, a structured set of attributes was extracted to enable consistent comparison across heterogeneous tasks and datasets. These included publication metadata (authors, year, and venue), classification objective and label granularity (including number of classes), dataset source (simulated, measured, or mixed) and sensor details when reported, preprocessing and representation strategy (raw sequences, phase-folded curves, handcrafted features), model family (feature-based ML, CNN/RNN/Transformer architectures, self-supervised learning, and domain adaptation), evaluation protocol (train/test split strategy and cross-validation), and performance metrics (accuracy, F1 score, precision, and per-class recall when available). This coding framework supported both quantitative summaries and qualitative synthesis of methodological trends. Following study inclusion, data extraction and charting were conducted by the primary reviewer (M.H.) using a predefined coding framework developed for this review. Inclusion decisions were finalized prior to extraction, and no additional eligibility judgments were made during the data extraction stage. Because the objective of this scoping review was exploratory and descriptive, calibrated extraction forms and inter-reviewer reliability procedures were not employed. No formal review protocol was prospectively registered prior to conducting the study. Review materials and methodological details were later documented on the Open Science Framework [53].

3.5. Limitations of the Methodology

Several limitations should be acknowledged. First, restricting the search to English-language publications may omit relevant studies reported in other languages. Second, the review is limited to the publicly available literature and does not include proprietary or restricted studies, which may report additional operational results not accessible in the open domain. Third, reported performance metrics vary across studies and are often evaluated under non-standardized protocols, limiting direct comparability and precluding quantitative meta-analysis. These limitations should be considered when interpreting reported performance trends and cross-study comparisons.

4. Light-Curve Databases

Publicly available RSO light-curve databases were examined to assess the distribution, coverage, and observational biases of photometric measurements used in machine learning and deep learning studies. Particular attention was given to the orbital regimes, object classes, and sensor characteristics represented in each dataset, as these factors directly influence the feasibility and generalizability of data-driven classification approaches. Representative light curves from the examined databases are shown in Figure 5. A summary of the sensor and observational characteristics of each database is provided in Table 2.
The Mini-Mega TORTORA (MMT) system currently forms the foundation of most open research on machine learning and deep learning classification of RSO light curves [54]. Its large-scale coverage, long operational history, and public accessibility have made it the primary source of labeled photometric data used in published studies. In contrast, the Space Debris Light-Curve Database (SDLCD) [55], though providing high-quality photometric measurements, has not yet been adopted as a primary training dataset in ML/DL classification research. A smaller number of works rely on light curves from the Ukrainian Database and Atlas [56], particularly for LEO-focused classification experiments [22]. Commercial optical databases operated by organizations such as Electro Optic Systems (EOS) [57] and the ExoAnalytic Global Telescope Network (EGTN) [58] were excluded, as access requires paid subscriptions or restrictive licensing, preventing independent benchmarking and reproducibility.

4.1. Mini-Mega TORTORA (MMT)

MMT is a wide-field, ground-based optical monitoring facility operated at the Special Astrophysical Observatory of the Russian Academy of Sciences [54]. The system consists of nine co-mounted optical channels (MMT-9), each equipped with an Andor Neo sCMOS detector, enabling high-cadence, wide-area sky monitoring. The system operates in a wide-field survey mode (∼900 deg²) and a narrow-field follow-up mode (∼100 deg²). Observations are typically acquired using Johnson–Cousins B, V, and R filters, as well as unfiltered white-light and polarimetric configurations. The publicly accessible MMT archive contains light curves for 12,932 unique RSOs, with object identities cross-referenced to NORAD Two-Line Elements (TLEs) and the McCants classified satellite catalog [59]. Notably, observations of objects associated with Commonwealth of Independent States (CIS) country codes are not publicly released, introducing a geopolitical bias into the dataset.

4.2. Space Debris Light-Curve Database (SDLCD)

SDLCD is operated by the Astronomical and Geophysical Observatory in Modra, Slovakia, using the AGO70 telescope [55]. The system employs an FLI ProLine PL1001 Grade 1 CCD camera and is optimized for targeted observations of objects in GEO, Geostationary Transfer Orbit (GTO), and Molniya orbits. Observations are conducted through Johnson–Cousins BVRI filters with typical exposure times ranging from 1.0 s to 5.0 s under terrestrial tracking. The archive contains 2224 light curves corresponding to 791 unique RSOs. Compared to MMT, SDLCD provides a smaller, curated collection intended primarily for photometric analysis and periodicity studies rather than large-scale ML/DL classification tasks.

4.3. Ukrainian Database and Atlas

The Ukrainian Database and Atlas was established by the Astronomical Observatory of Odessa I.I. Mechnikov National University to archive photometric observations of RSOs collected between 2012 and 2020 [56]. Observations were conducted using the KT-50 telescope equipped with a Watec-902H2 CCD camera. The database contains light curves for approximately 340 LEO objects [60]. Although the dataset was explicitly designed to support RSO classification tasks, the public archive has been offline since 2022, limiting its accessibility and long-term utility for reproducible benchmarking.

4.4. Actionable Datasets Versus Benchmark-Ready Datasets

Although the three databases examined are publicly accessible, none constitute benchmark-ready datasets in the ML sense. Here, we distinguish between an actionable dataset and a benchmark-ready dataset. An actionable dataset contains sufficient photometric signal, object identity information, and auxiliary metadata to support ML/DL experimentation after study-specific preprocessing, metadata reconciliation, and label curation. A benchmark-ready dataset goes further by enabling reproducible comparison across studies through standardized data preparation and evaluation design.
In the context of RSO light-curve classification, a benchmark-ready dataset should satisfy, at minimum, the following requirements: (i) stable public access and versioned data release; (ii) standardized train/validation/test splits, ideally defined at both the object and track levels; (iii) a consistent and documented label taxonomy; (iv) transparent preprocessing and inclusion/exclusion rules; (v) track-level observational metadata sufficient to interpret acquisition context, such as cadence, filter, phase angle, and observing geometry; and (vi) a recommended metric suite and evaluation protocol for reproducible model comparison.
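Requirement (ii) reflects a leakage risk specific to RSO photometry: repeated tracks of the same object can otherwise land on both sides of a random split, inflating test accuracy. A minimal sketch of an object-level split, using hypothetical track records keyed by NORAD ID, might look as follows:

```python
import random

def object_level_split(tracks, test_frac=0.2, seed=42):
    """Split light-curve tracks so that all tracks of a given object
    (NORAD ID) fall entirely in either the train or the test set,
    preventing leakage from repeated observations of the same object."""
    ids = sorted({t["norad_id"] for t in tracks})
    rng = random.Random(seed)
    rng.shuffle(ids)
    n_test = max(1, int(len(ids) * test_frac))
    test_ids = set(ids[:n_test])
    train = [t for t in tracks if t["norad_id"] not in test_ids]
    test = [t for t in tracks if t["norad_id"] in test_ids]
    return train, test

# Toy tracks: three objects, with repeated observations of object 101
tracks = [{"norad_id": i, "mags": []} for i in (101, 101, 101, 205, 205, 333)]
train, test = object_level_split(tracks, test_frac=0.34)
```

A benchmark-ready release would publish such splits once, with fixed seeds, rather than leaving each study to re-derive them.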
By this definition, current public archives such as MMT, SDLCD, and the Ukrainian Database should be viewed as valuable public source archives rather than benchmark-ready community datasets. MMT provides the largest publicly available archive and underpins much of the published ML/DL literature, but it exhibits strong class and orbital imbalance and still requires additional curation before standardized use. SDLCD offers high-quality targeted observations, but its scale and design are better suited to photometric analysis than to community-wide ML benchmarking. The Ukrainian Database is particularly relevant for LEO classification studies, but its long-term reproducibility is limited because the public archive has been offline since 2022.
As a result, most published ML and DL studies still rely on study-specific preprocessing, metadata reconciliation, label mapping, and split design, which limits direct comparability across reported results. This gap between data availability and benchmark readiness remains a major obstacle to systematic progress in light-curve-based RSO classification.

4.5. Database Characterization

To characterize the distribution of RSOs represented in the examined light-curve databases, each object was cross-referenced by NORAD ID with metadata obtained from DISCOS (Database and Information System Characterising Objects in Space) and Space-Track.org. Orbital regimes were categorized as LEO, MEO, GEO, and HEO following standard definitions. Objects lacking valid orbital parameters were assigned to an “Unknown” category. Object class labels (payload, rocket body, and debris) were obtained from Space-Track.org [61]. Figure 5 summarizes the aggregated databases by object class and orbital regime.
Across all three archives, payload objects dominate the available observations (Figure 5A), with payloads comprising over 80% of MMT observations. This pronounced class imbalance motivates the use of class-balanced subsets or cost-sensitive learning strategies in prior work. Rocket bodies are comparatively underrepresented in MMT but constitute the majority of SDLCD targets, reflecting that database’s emphasis on high-altitude debris and disposal orbits.
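One common cost-sensitive strategy is to weight each class inversely to its frequency in the training set. The sketch below uses a toy label distribution mimicking the payload-dominated archives; the helper name is illustrative, not drawn from the reviewed studies:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Compute per-class weights inversely proportional to class
    frequency, normalized so the most frequent class has weight 1.0.
    Such weights can be passed to a cost-sensitive loss to counteract
    a payload-dominated class distribution."""
    counts = Counter(labels)
    max_count = max(counts.values())
    return {cls: max_count / n for cls, n in counts.items()}

# Toy label distribution mimicking payload dominance (80/15/5)
labels = ["payload"] * 80 + ["rocket_body"] * 15 + ["debris"] * 5
weights = inverse_frequency_weights(labels)
```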
Orbital-regime coverage also differs substantially across datasets (Figure 5B). MMT is heavily concentrated in LEO (89.2%), with smaller fractions in MEO (6.4%) and limited representation of higher orbits. In contrast, SDLCD provides broader coverage of higher-altitude regimes (43.8% MEO, 12.7% GEO, 26.8% HEO), while the Ukrainian Database and Atlas is almost exclusively LEO focused (94.7%), reflecting its design for short-duration, high-cadence observations of low-altitude objects. These differences highlight the strong observational biases introduced by sensor capability, survey strategy, and target selection, which directly affect the generalizability of ML/DL classification models trained on any single archive. These dataset-level biases likely contribute to the variability in reported model performance across studies, particularly when models trained on one archive are evaluated under different orbital, sensor, or label distributions.
In addition to orbital and class distributions, object size information was examined using geometric cross-sectional area estimates from ESA’s DISCOS database [62]. Geometric cross-sectional area represents the projected surface area of an RSO and serves as a natural size-based discriminator that can improve classification performance. However, DISCOS does not provide cross-sectional information for most debris fragments. Consequently, the size analysis in Figure 6 is restricted primarily to payloads and rocket bodies, potentially underrepresenting the smaller end of the size distribution. MMT primarily observes larger objects (median ∼10 m²), while incorporating SDLCD and the Ukrainian Database broadens the observed size range to approximately 1–100 m², yielding a more diverse training distribution that can reduce size-related biases in learned decision boundaries.

5. Methods for Light-Curve-Based RSO Classification

Prior research on light-curve-based classification of RSOs has progressed through three methodological paradigms: (i) physics- and estimation-driven inference pipelines that recover attitude-related parameters and apply thresholding or rule-based labeling; (ii) supervised machine learning using handcrafted features extracted from photometric time series; and (iii) DL approaches that learn task-relevant representations directly from raw or minimally processed light curves. These paradigms differ in the degree of prior knowledge they assume, the amount of manual feature design required, and their robustness to real-world effects such as irregular sampling, atmospheric turbulence, and sensor-dependent calibration.
This section synthesizes classification-oriented ML/DL methods used in the reviewed studies and organizes them around two recurring design choices: (1) modeling approach and input representation, i.e., whether the pipeline relies on handcrafted features with conventional ML classifiers or learns representations end-to-end from raw or transformed light curves; and (2) task definition (label space), i.e., which semantic categories are being inferred (e.g., attitude state, coarse object type, platform/bus family, or hierarchical label structures).
Table 3, Table 4, Table 5 and Table 6 consolidate datasets, algorithm families, target label spaces, and reported accuracies. The reviewed studies are grouped into simulated studies and real-data studies. This organization helps distinguish results obtained under synthetic conditions from those evaluated on observational datasets. When multiple datasets are evaluated within a single study, accuracies are reported per dataset; when multiple algorithms are evaluated on the same dataset, only the best-performing result is shown. Across Table 3, Table 4, Table 5 and Table 6, classification tasks are described using a consistent taxonomy: attitude state (e.g., stable vs. tumbling), object type (e.g., payload, rocket body and debris), platform/family (e.g., Starlink, Iridium, and standardized bus families), shape/size proxies, rotation state/dynamics, and hierarchical or hybrid tasks combining multiple semantic levels. The performance values presented in the tables correspond to the best-reported result from each study, with the best-performing model highlighted in bold in the algorithm column. Wherever available, mean accuracy or test accuracy is reported; otherwise, alternative evaluation metrics provided by the original study are included.
Table 3 summarizes traditional ML approaches for RSO classification using simulated light curves reported in the literature. The simulated data are generated using several photometric and rendering models, including the Ashikhmin–Shirley [63], Cook–Torrance BRDF [64], Blender-based simulations [65], and other synthetic observational scenarios. Across these studies, a broad range of algorithms has been explored, including bagged trees [66], Random Forest (RF) [67], Logistic Regression (LR) [68], Naive Bayes (NB) [69], k-Nearest Neighbors (k-NN) [70], Neural Networks (NNs) [71], CN2 [72], Decision Trees (DT) [73], Support Vector Machines (SVMs) [74] and XGBoost [75] combined with Wavelet Scattering Transform (WST) [76]. Among these methods, RF and SVMs are the most commonly used and consistently among the strongest performers. The corresponding classification tasks include object type, attitude state, and shape/configuration, with most simulated-data studies reporting high performance, often exceeding 90% accuracy.
Table 4 tabulates traditional machine learning approaches applied to real observational light curves. The studies draw on several public databases, including MMT, EOS, EGTN, IWF SPARC, and the Ukrainian database, while some also use internally curated private light-curve datasets. In addition to the methods listed in Table 3, these studies explore a broader set of algorithms, including Stochastic Gradient Descent (SGD) [77], Cost-Sensitive Random Forest (CSRF) [78], Linear Discriminant Analysis (LDA) [79], subspace K-NN [80], Feedforward Neural Networks (FFNNs) [81], Hidden Markov Model–Random Forest (HMM-RF) [82], 1-NN with Euclidean Distance (ED) [83], and 1-NN with Dynamic Time Warping (DTW) [84]. Across the observational studies, RF and SVMs are the most commonly used and consistently among the strongest performers in the real light curves as well. The classification tasks are broadly similar to those considered for simulated light curves, including object type, attitude state, and shape/configuration, although several real-data studies also place greater emphasis on platform- and family-level discrimination. Overall, the reported performance on real light curves is generally lower than that obtained on simulated datasets, reflecting the greater noise, variability, and labeling challenges present in observational data.
Table 5 provides an overview of deep learning approaches for simulated light-curve-based RSO classification. In addition to the simulation and modeling frameworks summarized in Table 3, these studies also employ Phong [63] and Beard–Maxwell [85] reflectance models. The simulated light curves are generated predominantly for GEO scenarios. The reviewed papers investigate a broad range of DL architectures, including convolutional neural networks (CNNs) [86], fully connected neural networks (FCNNs) [87], deep neural networks (DNNs) [88], ENDE/ENCLA variants [89,90,91], long short-term memory networks (LSTMs) [92], multi-scale convolutional neural networks (MCNNs) [93], LSTM-FCNs [94], and convolutional autoencoder (CAE)-CNN variants [95]. The associated classification tasks are broadly consistent with those in the traditional ML literature, spanning object type, attitude state, and shape/configuration, with several DL studies placing greater emphasis on explicit shape-class prediction. Reported accuracies are generally high, often exceeding 95%, although these values are not directly comparable because of differences in datasets, preprocessing, and evaluation settings.
Table 6 summarizes the deep learning approaches developed for RSO classification using observational light curves. Beyond the datasets introduced earlier, the reviewed studies also incorporate IWF SPARC and a private dataset transformed into short-time Fourier transform (STFT) representations. Transfer learning has additionally been explored, particularly through the adaptation of models pretrained on Blender-simulated light curves to the EOS dataset. Recent studies have introduced a range of advanced architectures, including model-agnostic meta-learning (MAML) [96], ConvLSTM-CNN models [97], CoAtNet [98], HRCNN [99], Transformer [100], 1D-ResNet [101], LC-VAE [102], and Barlow Twins [103]. Across the reviewed studies, MMT is the most frequently used observational dataset, and CNN-based variants remain the most common and often the best-performing approaches. Nevertheless, classification performance on observational light curves is generally lower than that reported for simulated data, as expected given the greater measurement noise, class overlap, and variability present in real observations.
Taken together, Table 3, Table 4, Table 5 and Table 6 illustrate a clear progression in light-curve-based RSO classification, from feature-engineered pipelines with conventional learners to end-to-end deep learning models that operate on raw or transformed photometric sequences. While both paradigms report strong performance under controlled conditions, their assumptions, data requirements, and failure modes differ substantially. Notably, approaches explicitly targeting domain shift (e.g., transfer learning and meta-learning) report improved performance under synthetic-to-real transfer relative to naïve training on limited real data, indicating that robustness—not only architecture choice—drives practical gains under operational conditions. The following subsections examine these approaches in detail, beginning with supervised ML methods based on handcrafted features.

5.1. Supervised ML with Handcrafted Features

Early and classical ML pipelines convert a light curve into a fixed-length feature vector and classify it using conventional supervised learners such as DT, RF, SVM, and k-NN. This design is attractive in SSA settings because it supports smaller datasets, yields compact representations, and enables partial interpretability by linking features to physically meaningful signal properties (e.g., periodicity strength, amplitude statistics, or regression coefficients). Accordingly, feature-based ML remains common for low-cardinality label spaces such as binary attitude-state discrimination (stable vs. tumbling) or coarse object-type classification (payload vs. rocket body vs. debris) [16,22,29,30,31,35].
  • Feature families: Handcrafted features span multiple domains: (i) spectral and periodic descriptors, including dominant frequency components, peak structure, and cepstral representations to capture rotational signatures [29]; (ii) regression and parametric fits, where coefficients from polynomial, spline, or Fourier-series fits become discriminative attributes [16,30,33]; (iii) summary statistics (variance, skewness, and robust dispersion measures), sometimes computed via time-series feature toolkits [31] or TSFresh [51]; and (iv) multi-resolution representations such as wavelet coefficients, wavelet scattering transforms, or empirical mode decomposition to capture nonstationary behavior [22,42,47].
  • Strengths and limitations: Across studies, feature-based ML can achieve high accuracy on simulated or carefully curated datasets [15,29,34]. However, performance is sensitive to preprocessing choices (normalization, detrending, and phase folding), feature definitions, and class imbalance. Overall accuracy can be inflated when majority classes dominate, motivating cost-sensitive learning, resampling (e.g., SMOTE), or class-balanced subsets [16,31]. These patterns align with the database-level imbalance described in Section 4.5.
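To make the feature families above concrete, the following sketch computes a small fixed-length feature vector (summary statistics plus a dominant periodogram frequency) from a photometric time series. The function name, feature choices, and signal parameters are illustrative, not taken from any reviewed study:

```python
import numpy as np

def extract_features(mag, dt=0.1):
    """Compute a small handcrafted feature vector from a light curve.

    mag : 1-D array of magnitudes sampled at interval dt (seconds).
    Returns summary statistics plus the dominant periodogram frequency,
    echoing the statistical and spectral feature families in the text.
    """
    mag = np.asarray(mag, dtype=float)
    idx = np.arange(mag.size)
    # Remove a linear trend so slow drift does not mask periodicity
    detrended = mag - np.polyval(np.polyfit(idx, mag, 1), idx)
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(mag.size, d=dt)
    dominant = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
    return np.array([
        mag.mean(),          # mean brightness (magnitudes)
        mag.std(),           # amplitude spread
        float(np.ptp(mag)),  # peak-to-peak amplitude
        dominant,            # strongest periodic component (Hz)
    ])

# Example: a synthetic 2 Hz "tumbling" signature sampled at 10 Hz
t = np.arange(0, 10, 0.1)
lc = 8.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)
feats = extract_features(lc, dt=0.1)
```

A vector like `feats` would then be passed to a conventional learner such as an RF or SVM, as in the feature-based pipelines surveyed above.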

5.2. Deep Learning for End-to-End Representation Learning

Deep learning methods reduce reliance on manual feature design by learning hierarchical representations from raw sequences or lightly processed inputs (e.g., normalized magnitude sequences or time–frequency transforms). CNNs are widely adopted because convolutional filters capture local motifs in brightness variation and can be applied in one dimension to time series or in two dimensions to time–frequency images such as short-time Fourier transform (STFT) spectrograms [20,36,37,40]. Recurrent models, particularly LSTM, explicitly model temporal dependencies and have been used for classification from short observation windows, including LEO stable/tumbling discrimination [22]. More recent work explores attention-based and hybrid architectures that combine convolutional encoders with temporal modeling components [38,47,49].
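As an illustration of how a 1D CNN consumes a photometric sequence, the following NumPy sketch implements a forward pass through two convolutional layers, global average pooling, and a softmax head over three hypothetical classes (payload, rocket body, debris). The weights are randomly initialized; this mirrors the architectural pattern only, not any specific reviewed model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=1):
    """Valid-mode 1-D convolution: x (C_in, T) -> (C_out, T_out)."""
    c_out, c_in, k = kernels.shape
    t_out = (x.shape[1] - k) // stride + 1
    out = np.empty((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            out[o, t] = np.sum(kernels[o] * x[:, t * stride:t * stride + k])
    return out

def forward(lc, w1, w2, w_fc):
    """Tiny 1D-CNN classifier: conv -> ReLU -> conv -> ReLU ->
    global average pool -> linear head -> softmax over classes."""
    h = np.maximum(conv1d(lc[None, :], w1), 0.0)  # (8, T-4)
    h = np.maximum(conv1d(h, w2), 0.0)            # (16, T-8)
    pooled = h.mean(axis=1)                       # (16,) global average pool
    logits = w_fc @ pooled                        # (3,) class scores
    e = np.exp(logits - logits.max())             # numerically stable softmax
    return e / e.sum()

# Randomly initialized weights; real pipelines train these end-to-end.
w1 = rng.normal(0, 0.1, (8, 1, 5))    # 8 filters over 1 input channel, width 5
w2 = rng.normal(0, 0.1, (16, 8, 5))   # 16 filters over 8 channels, width 5
w_fc = rng.normal(0, 0.1, (3, 16))    # linear head to 3 classes
probs = forward(rng.normal(0, 1, 120), w1, w2, w_fc)
```

The global average pooling step is what lets such architectures accept variable-length tracks, a practical advantage given the irregular observation windows discussed later.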

5.3. Sim-to-Real Generalization and Domain Transfer

A persistent theme across DL studies is the performance gap between models trained on simulated light curves and deployment on measured observations. This sim-to-real gap refers to the challenge of transferring models learned in simulation to real-world environments under distribution shift, where discrepancies between simulated and measured data lead to degraded target-domain performance [104]. In the context of light curves, synthetic data are generated under simplified assumptions about shape, material, attitude dynamics, and noise, whereas real measurements reflect complex illumination geometry, atmospheric effects, calibration differences, and sensor-specific artifacts. Empirically, multiple studies report near-ceiling accuracy on simulated datasets but substantial degradation on real data (Table 5 and Table 6) [20,39]. Two families of strategies recur in the reviewed literature.
  • Transfer learning (supervised domain adaptation): Transfer learning typically pretrains a neural network on a large source domain (often synthetic light curves) and then fine-tunes the model on a smaller labeled target domain (observed light curves). More broadly, this setting can be viewed as supervised domain adaptation, which seeks to minimize performance degradation under distribution shift by aligning source and target domains [105]. This strategy leverages representation reuse and can reduce labeled-data requirements in the observational domain [26,41]. A representative example is provided by [26], who applied transfer learning to a 1D convolutional neural network by pretraining on simulated Blender light curves and fine-tuning on real observational data. When transferring from simulation to the EOS dataset, the approach achieved 78.3% classification accuracy, improving performance by 3% relative to a baseline CNN trained only on EOS. This result provides empirical evidence that simulation-to-real transfer can mitigate domain shift and reduce labeled-data requirements in real light-curve classification.
  • Few-shot and self-supervised learning: Few-shot learning targets regimes where only a handful of labeled examples exist for new objects or classes and aims to generalize new tasks from limited supervision by exploiting prior knowledge learned across related tasks [106]. MAML learns an initialization enabling rapid adaptation to new tasks from limited labeled samples [39]. Self-supervised learning reduces dependence on labels by training representations using pretext objectives on unlabeled data. Barlow Twins is a redundancy-reduction method that learns invariant representations from augmented views of the same input and has been applied to RSO light curves to improve classification under limited labeled data and varying track durations [52,103].

5.4. Model Performance and Evaluation

Table 3, Table 4, Table 5 and Table 6 report accuracies as stated in the reviewed subset of literature. Accuracy is defined as
Accuracy = (TP + TN) / (TP + TN + FP + FN)
where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. While accuracy is widely reported, it is not sufficient for imbalanced label spaces typical of public RSO datasets (Section 4.5). A classifier that predicts the majority class can attain deceptively high accuracy without meaningful discriminative capability. Accordingly, when studies reported additional metrics (e.g., precision, recall, F1-score, or per-class performance), these results were considered qualitatively during synthesis, even when the tables focus on accuracy for consistency. This issue is compounded when repeated observations of the same RSO are split at the track level rather than at the object level. In such cases, the model may encounter highly similar light curves from the same object in both training and test sets, inflating reported accuracy by rewarding recognition of previously observed targets rather than true generalization to unseen RSOs. For this reason, object-disjoint splits and class-sensitive metrics such as macro-F1, balanced accuracy, and per-class recall provide a more informative assessment of operationally relevant performance than overall accuracy alone. The lack of standardized evaluation protocols and reporting conventions across studies remains a barrier to direct comparability, reinforcing the need for benchmark-ready datasets with defined splits and metric suites.
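An object-disjoint split and the class-sensitive metrics recommended above can be set up as follows, assuming scikit-learn; the object IDs, labels, and predictions are synthetic placeholders:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import f1_score, balanced_accuracy_score

rng = np.random.default_rng(7)

# One row per track; object_id marks repeated observations of the same RSO.
n_tracks = 200
object_id = rng.integers(0, 40, n_tracks)  # 40 distinct objects
y_true = object_id % 3                     # toy labels: 3 object types

# Object-disjoint split: all tracks of a given object land on one side
# only, so the test set contains genuinely unseen RSOs (unlike a naive
# track-level split, which leaks near-duplicate curves across sets).
gss = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(
    gss.split(np.zeros((n_tracks, 1)), y_true, groups=object_id)
)

# Class-sensitive metrics on (hypothetical) test-set predictions
y_pred = rng.integers(0, 3, test_idx.size)
macro_f1 = f1_score(y_true[test_idx], y_pred, average="macro")
bal_acc = balanced_accuracy_score(y_true[test_idx], y_pred)
```

Because every object's tracks stay together, a high score here reflects generalization to new objects rather than recognition of previously seen ones.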

5.5. Summary of Method–Task Alignment

Across the reviewed corpus, model choice is closely tied to label-space complexity. Feature-based ML methods are most effective when categories are coarse and decision boundaries are separable using engineered descriptors (e.g., stable vs. tumbling, or coarse object type) [16,22,29,31]. Deep learning approaches dominate tasks with higher intra-class variability and nonlinear temporal behavior, including platform- or family-level labeling and higher-granularity classification where discriminative cues are distributed across time and frequency scales [41,47,49]. However, even sophisticated architectures remain constrained by (i) the limited availability of labeled observational data, (ii) repeated observations of the same objects inflating apparent performance, and (iii) heterogeneous preprocessing and evaluation protocols. These constraints motivate the challenges and future directions discussed in Section 6 and Section 7.

6. Current Challenges

Based on the reviewed literature, several persistent challenges continue to limit the reliability, generalizability, and operational deployment of light-curve-based RSO classification within SSA.
  • Class imbalance and multiplicity bias. Most publicly available light-curve datasets exhibit pronounced class imbalance. For example, the MMT database contains a disproportionate number of payload observations relative to rocket bodies and debris, biasing learned models toward majority classes and inflating accuracy metrics through dominant-class predictions [20,26,31,34]. Furthermore, repeated observations of the same RSOs across multiple nights or channels can yield multiple light curves per object; if train/test splits are performed at the track level rather than the object level, the same object may appear in both sets, leading to overly optimistic estimates of generalization performance.
  • Sim-to-real discrepancy. To mitigate class imbalance and data scarcity, many studies rely on synthetic light curves generated using BRDF-based models, including Phong, Cook–Torrance, and Ashikhmin–Shirley formulations. While such simulations provide controlled training environments, empirical evidence consistently demonstrates substantial performance degradation when models trained on synthetic data are evaluated on measured light curves [20,39]. This sim-to-real gap arises from the limited ability of simplified rendering models to capture complex observational effects, including atmospheric turbulence, background illumination, sensor noise, and unmodeled attitude dynamics.
  • Short observation durations and irregular sampling. Many LEO RSOs are observable only over brief time windows (typically a few minutes), which limits the capture of long-period rotational signatures and constrains robust feature extraction [22,52]. Moreover, real-world photometric measurements frequently exhibit irregular sampling, missing observations, and signal degradation arising from atmospheric and instrumental effects, thereby complicating both feature-based approaches and end-to-end temporal modeling.
  • Orbit and illumination dependency. Apparent RSO brightness is strongly dependent on Sun–object–observer geometry, phase angle, and orbital regime. Consequently, even identical objects can exhibit markedly different photometric signatures under varying illumination and viewing conditions [22]. This dependence poses a fundamental challenge for models expected to generalize across orbits, sensors, and observation geometries.
  • Attitude-state ambiguity. Classification accuracy for tumbling or transitional attitude states is consistently lower than for stable configurations [15,22,45]. Tumbling RSOs generate irregular, nonperiodic brightness variations that are highly sensitive to geometry, making it difficult to robustly distinguish between tumbling satellites, tumbling rocket bodies, and slowly rotating stable platforms.
  • Inconsistent evaluation and benchmarking. Evaluation protocols and reporting practices vary widely across studies (e.g., object-level vs. track-level splits, differing class definitions, and inconsistent metric suites), limiting direct comparability and sometimes yielding optimistic results under favorable split assumptions. The absence of standardized benchmarks and recommended metric suites (beyond overall accuracy) remains a barrier to reproducible model comparison and community-wide baselining.
  • Data accessibility and reproducibility. The continued reliance on private or proprietary datasets, such as those from EOS and ExoAnalytic networks, limits reproducibility and impedes cross-study comparison. More broadly, limited availability of openly accessible datasets with unified label taxonomies and track-level metadata prevents consistent benchmarking and hinders the establishment of community-wide performance baselines [107].
Collectively, these challenges are reflected in the heterogeneous performance and evaluation protocols reported in Table 3, Table 4, Table 5 and Table 6. Addressing these limitations requires standardized datasets, balanced sampling strategies, and modeling approaches that explicitly account for observational variability and dataset bias. These improvements are essential for transitioning light-curve-based RSO classification from proof-of-concept demonstrations to operationally reliable SSA capabilities.

7. Future Directions

Addressing the challenges outlined in Section 6 requires coordinated advances across six critical research areas before light-curve-based RSO classification can transition from isolated demonstrations to operationally reliable SSA capabilities.
  • Benchmark-ready, sensor-aware datasets. A primary limitation is the absence of an actionable benchmark dataset with standardized train/test splits, unified class taxonomies, and sufficient sensor metadata. Although archives such as MMT and SDLCD provide valuable measurements, performance comparisons remain unreliable due to heterogeneous preprocessing and evaluation protocols [107]. Future work should prioritize the construction of community benchmarks with fixed splits across orbital regimes and sensors, accompanied by track-level metadata (cadence, filter, and phase angle) and long-tail class coverage to address imbalance issues identified in Section 4.5. Recent work has begun to move in this direction through benchmark-oriented derived datasets; for example, [50] introduced SOLID-50, a benchmark constructed from MMT observations with 104,034 samples across 50 fine-grained subcategories. However, the paper does not provide a public release link for the curated dataset, so we treat it here as an important benchmark effort rather than as a publicly accessible community resource.
  • Robust sim-to-real and cross-sensor generalization. The sim-to-real performance gap remains one of the dominant barriers to deployment, with models often degrading substantially when transferred from synthetic to measured observations [20,26,41]. Beyond fine-tuning, future research should explore domain-generalization methods that explicitly learn sensor-invariant representations, as well as adaptation strategies conditioned on telescope-specific parameters such as exposure time, noise statistics, and sampling cadence.
  • Learning under limited-duration and sparse labeling constraints. Operational SSA frequently requires classification from short observation windows (tens of seconds to several minutes) and limited labels for rare object types. Few-shot and self-supervised paradigms, including MAML [39] and redundancy-reduction approaches such as Barlow Twins [52], provide promising foundations, but systematic evaluation under realistic early-classification settings remains limited. Future work should quantify how accuracy and uncertainty evolve as a function of track duration and label availability.
  • Richer semantic label spaces for SSA relevance. Most existing studies focus on coarse object categories (payload, rocket body, and debris), which do not fully capture the semantic attributes required for SSA decision-making [35,47]. Next-generation datasets and models should move toward multi-label frameworks that jointly encode object type, attitude regime, shape proxies, and operational state, enabling outputs that are directly actionable (e.g., distinguishing a stable payload from a tumbling object to support conjunction risk assessment and anomaly triage).
  • Physics-informed and interpretable architectures. Current deep models remain largely black-box, limiting trust and adoption in operational pipelines. Incorporating physically meaningful constraints—such as phase-function priors, rotational dynamics, or cross-sectional scaling—could improve both interpretability and robustness. Hybrid architectures that combine temporal modeling with attention mechanisms (e.g., HRCNN [49] and CoAtNet [47]) represent an initial step, but further work is needed on explainable and uncertainty-aware inference.
  • Multi-platform observational diversity. Expanding beyond a small number of ground-based archives is essential for reducing observational bias. Integrating measurements from stratospheric platforms [108,109] and space-based sensors such as CASSIOPE [110,111] and NEOSSat [112] would introduce complementary illumination geometries, reduced atmospheric effects, and distinct noise regimes. Where available, fusing photometric light curves with other SSA modalities (e.g., radar-derived features or orbital context) could further improve robustness and operational utility.
Overall, progress in RSO light-curve classification will depend on benchmark construction as the enabling foundation, followed by domain-robust learning, richer semantic annotation, and physics-informed, uncertainty-aware modeling to support reproducible, interpretable, and operationally scalable SSA systems.

8. Conclusions

Light-curve-based classification is a promising data-driven approach for characterizing RSOs from passive optical observations. Since 2014, the field has progressed from handcrafted feature extraction and rule-based inference toward deep learning models that learn discriminative representations directly from photometric sequences. This systematic scoping review synthesizes 29 studies (Table 3, Table 4, Table 5 and Table 6) and consolidates the past decade of ML/DL efforts into a unified reference for current capabilities and remaining gaps. Collectively, the reviewed methods demonstrate potential for distinguishing object type, platform/family, and attitude state, particularly when training data are sufficiently diverse and evaluation is conducted under realistic split assumptions.
Reported success often does not generalize beyond specific datasets and observing conditions due to persistent challenges in data scarcity, class imbalance, multiplicity bias, and limited reproducibility (Section 6). At present, only three publicly documented repositories are widely referenced in the open literature (MMT, SDLCD, and the Ukrainian Database and Atlas), and the Ukrainian archive has been offline since 2022. Consequently, many reported accuracies primarily reflect dataset-specific properties rather than robust performance across sensors, illumination geometries, and orbital regimes.
This review clarifies the current capabilities and limitations of light-curve-based RSO classification by consolidating evidence from both simulation-based and observational studies. Progress in this field will be driven not primarily by increasingly complex architectures, but by the availability of representative open datasets and consistent benchmarks (Section 7). Benchmark-ready resources incorporating measured light curves across multiple observation platforms, including ground-based, stratospheric, and space-based sensors, would enable reproducible comparison and provide a common testbed for domain adaptation, few-shot learning, and physics-informed modeling.
As space activity increases and orbital congestion intensifies, reliable, interpretable, and scalable classification pipelines will become increasingly important for space-domain awareness and orbital safety. By addressing the gaps identified here, the community can unlock the full potential of data and AI-driven light-curve classification and accelerate its transition from proof-of-concept demonstrations to operational capability within next-generation SSA frameworks.

Author Contributions

Conceptualization, methodology, software, validation, formal analysis, investigation, data curation, and visualization: M.H. and V.S.; resources and supervision: R.S.K.L. and G.S.; writing—original draft preparation: M.H., V.S., R.S.K.L., and G.S.; writing—review and editing: M.H., V.S., R.S.K.L., and R.Q.; project administration and funding acquisition: R.S.K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant (grant number: RGPIN-2025-06284); by the Department of National Defence (Canada) and NSERC through the DND/NSERC Discovery Grant Supplement (application ID: DGDND-2025-06284); and by the Canadian Space Agency (CSA) through the Flights and Fieldwork for the Advancement of Science and Technology (FAST) program (grant number: 23FAYORA06).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
BRDF	Bidirectional Reflectance Distribution Function
CAE	Convolutional Autoencoder
CCD	Charge-Coupled Device
CIS	Commonwealth of Independent States
CNN	Convolutional Neural Network
CoAtNet	Convolution and Attention Network
CoBo-LSTM	Convolution-Boosted Long Short-Term Memory
ConvLSTM	Convolutional Long Short-Term Memory
CSRF	Cost-Sensitive Random Forest
DISCOS	Database and Information System Characterizing Objects in Space
DL	Deep Learning
DNN	Deep Neural Network
DT	Decision Tree
DTW	Dynamic Time Warping
ED	Euclidean Distance
EGTN	ExoAnalytic Global Telescope Network
ENCLA	Encoder-classifier variant with decoder removed
ENDE	Encoder–Decoder
ENDECLA-CR	Encoder with CNN decoder and RNN classifier
EOS	Electro Optic Systems
ESA	European Space Agency
FCNN	Fully Connected Neural Network
FP	False Positive
FN	False Negative
FFNN	Feedforward Neural Network
GEO	Geostationary Orbit
GTO	Geostationary Transfer Orbit
HEO	Highly Elliptical Orbit
HMM	Hidden Markov Model
HRCNN	Hybrid Recurrent Convolutional Neural Network
IWF SPARC	Single Photon Light Curve And Laser Ranging Catalogue
k-NN	k-Nearest Neighbors
LDA	Linear Discriminant Analysis
LEO	Low Earth Orbit
LPA	Longitudinal Phase Angle
LR	Logistic Regression
LSTM	Long Short-Term Memory
MAML	Model-Agnostic Meta-Learning
MCC	Matthews Correlation Coefficient
MCNN	Multi-scale Convolutional Neural Network
MEO	Medium Earth Orbit
ML	Machine Learning
MMT	Mini-Mega TORTORA
NB	Naive Bayes
NN	Neural Network
PSF	Point Spread Function
RF	Random Forest
RB	Rocket Body
RNN	Recurrent Neural Network
RSO	Resident Space Object
SDLCD	Space Debris Light-Curve Database
sCMOS	scientific Complementary Metal-Oxide Semiconductor
SGD	Stochastic Gradient Descent
SMOTE	Synthetic Minority Over-sampling Technique
SSA	Space Situational Awareness
STFT	Short-Time Fourier Transform
SVM	Support Vector Machine
TAG	Three-Axis Stabilized
TLE	Two-Line Element
TP	True Positive
TN	True Negative
WST	Wavelet Scattering Transform
XGBoost	Extreme Gradient Boosting

Appendix A. Database-Specific Search Strings and Search Dates

Table A1 reports the exact search strings used for each database and the dates on which the searches were conducted. Additional methodological details are available on the Open Science Framework [53].
Table A1. Database-specific search strings and search dates.
Database | Search Date | Search String
Google Scholar | 1 March 2025 | (“resident space object” OR “space object” OR satellite OR “space debris”) AND (“light curve” OR photometric OR “optical photometry”) AND (classification OR categorization OR “attitude classification” OR characterization) AND (“machine learning” OR “deep learning” OR “CNN” OR “RNN” OR “LSTM” OR “transformer” OR “self-supervised”)
Engineering Village | 1 March 2025 | (“resident space object” OR “space object” OR satellite OR “space debris”) AND (“light curve” OR photometric OR “optical photometry”) AND (classification OR categorization OR “attitude classification” OR characterization) AND (“machine learning” OR “deep learning” OR “CNN” OR “RNN” OR “LSTM” OR “transformer” OR “self-supervised”)
Targeted update search | 1 February 2026 | (“resident space object” OR “space object” OR satellite OR “space debris”) AND (“light curve” OR photometric OR “optical photometry”) AND (classification OR categorization OR “attitude classification” OR characterization) AND (“machine learning” OR “deep learning” OR “CNN” OR “RNN” OR “LSTM” OR “transformer” OR “self-supervised”)
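The three search strings in Table A1 share the same structure: terms are OR-ed within each concept group and the groups are AND-ed together. As an illustrative (hypothetical, not part of the review's published tooling) sketch of how such a query can be assembled programmatically, with multi-word phrases quoted:

```python
def build_query(or_groups):
    """AND together OR-groups of search terms, quoting multi-word phrases.

    Note: the published Table A1 strings also quote some single-word
    terms (e.g., "CNN"); this helper quotes only multi-word phrases.
    """
    def quote(term):
        return f'"{term}"' if " " in term else term
    return " AND ".join(
        "(" + " OR ".join(quote(t) for t in group) + ")" for group in or_groups
    )

query = build_query([
    ["resident space object", "space object", "satellite", "space debris"],
    ["light curve", "photometric", "optical photometry"],
    ["classification", "categorization", "attitude classification", "characterization"],
    ["machine learning", "deep learning", "CNN", "RNN", "LSTM", "transformer", "self-supervised"],
])
```

Keeping the groups in one place makes it straightforward to rerun an identical query across databases, as was done for Google Scholar and Engineering Village.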

References

  1. NASA Orbital Debris Program Office; NASA Johnson Space Center. Orbital Debris Quarterly News, 2025, Volume 29, Issue 3. Available online: https://www.orbitaldebris.jsc.nasa.gov/quarterly-news/pdfs/ODQNv29i3.pdf (accessed on 4 February 2026).
  2. Boley, A.C.; Byers, M. Satellite mega-constellations create risks in Low Earth Orbit, the atmosphere and on Earth. Sci. Rep. 2021, 11, 10642. [Google Scholar] [CrossRef]
  3. Kessler, D.J.; Cour-Palais, B.G. Collision frequency of artificial satellites: The creation of a debris belt. J. Geophys. Res. Space Phys. 1978, 83, 2637–2646. [Google Scholar] [CrossRef]
  4. Shohdy, J.; Karl, R.; Short, B.; Delgadillo, R.; Anderson, B.; Mathamba, A.; Dahlin, M. General Purpose, Software Configurable, Intelligent LiDAR Sensor for Space-Based Non-Cooperative Resident Space Object Relative Navigation and Tracking Applications. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference; Maui Economic Development Board, Inc.: Kihei, HI, USA, 2023; p. 33. [Google Scholar]
  5. Hall, D.; Kervin, P. Optical characterization of deep-space object rotation states. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference; Maui Economic Development Board, Inc.: Kihei, HI, USA, 2014. [Google Scholar]
  6. Linares, R.; Jah, M.K.; Crassidis, J.L.; Nebelecky, C.K. Space object shape characterization and tracking using light curve and angles data. J. Guid. Control. Dyn. 2014, 37, 13–25. [Google Scholar] [CrossRef]
  7. Hall, D.; Calef, B.; Knox, K.; Bolden, M.; Kervin, P. Separating attitude and shape effects for non-resolved objects. In Proceedings of the 2007 AMOS Technical Conference Proceedings; Maui Economic Development Board, Inc.: Kihei, Maui, HI, USA, 2007; pp. 464–475. [Google Scholar]
  8. Endo, T.; Tsuchikawa, T.; Anada, T.; Ono, H.; Tsuji, H. Simulating the Photometric Light Curve of Artificial Satellites in GEO used with a Ray-Tracing. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference; Maui Economic Development Board, Inc.: Kihei, HI, USA, 2023; p. 94. [Google Scholar]
  9. Linares, R.; Shoemaker, M.A.; Walker, A.C.; Mehta, P.M.; Palmer, D.; Thompson, D.C.; Koller, J.; Crassidis, J.L. Photometric Data from Non-Resolved Objects for Space Object Characterization and Improved Atmospheric Modeling; Technical Report; Los Alamos National Laboratory (LANL): Los Alamos, NM, USA, 2013.
  10. Cardona, T.; Seitzer, P.; Rossi, A.; Piergentili, F.; Santoni, F. BVRI photometric observations and light-curve analysis of GEO objects. Adv. Space Res. 2016, 58, 514–527. [Google Scholar] [CrossRef]
  11. Hejduk, M.; Cowardin, H.; Stansbery, E.G. Satellite material type and phase function determination in support of orbital debris size estimation. In Proceedings of the American Geophysical Union Fall Meeting, San Francisco, CA, USA, 3–7 December 2012. [Google Scholar]
  12. Jah, M.; Madler, R.A. Satellite characterization: Angles and light curve data fusion for spacecraft state and parameter estimation. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference; Maui Economic Development Board, Inc.: Kihei, HI, USA, 2007; Volume 49. [Google Scholar]
  13. Holzinger, M.J.; Alfriend, K.T.; Wetterer, C.J.; Luu, K.K.; Sabol, C.; Hamada, K. Photometric attitude estimation for agile space objects with shape uncertainty. J. Guid. Control. Dyn. 2014, 37, 921–932. [Google Scholar] [CrossRef]
  14. Africano, J.; Kervin, P.; Hall, D.; Sydney, P.; Ross, J.; Payne, T.; Gregory, S.; Jorgensen, K.; Jarvis, K.; Parr-Thumm, T.; et al. Understanding photometric phase angle corrections. In Proceedings of the 4th European Conference on Space Debris, Darmstadt, Germany, 18–20 April 2005; European Space Agency (ESA): Paris, France, 2005; Volume 587, p. 141. [Google Scholar]
  15. Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J. Using machine learning for advanced anomaly detection and classification. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference, Maui, HI, USA, 20–23 September 2016. [Google Scholar]
  16. Bennette, W.D.; Zeliff, K.; Raquepas, J. Classification of objects in geosynchronous Earth orbit via light curve analysis. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI); IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  17. McNally, K.; Ramirez, D.; Anton, A.M.; Smith, D.; Dick, J. Artificial intelligence for space resident objects characterisation with lightcurves. In Proceedings of the 8th European Conference on Space Debris, Darmstadt, Germany, 20 April 2021; European Space Agency (ESA): Paris, France, 2021; Volume 8. [Google Scholar]
  18. Gu, Y.; Li, Z.; Liu, H.; Luo, Q.; Liu, H.; Wu, G. Multi-objective early warning mission planning by multiple satellites using a critical task aggregation-based NSGA-II algorithm. Adv. Space Res. 2025; in press. [CrossRef]
  19. Bai, S.; Wang, Y.; Cheng, M.; Sun, X.; Xu, M. Analytical Sensitivity Matrix for Near-Optimal Solution to Elliptical Orbit Transfer. IEEE Trans. Aerosp. Electron. Syst. 2025, 62, 461–478. [Google Scholar] [CrossRef]
  20. Furfaro, R.; Linares, R.; Reddy, V. Space objects classification via light-curve measurements: Deep convolutional neural networks and model-based transfer learning. In Proceedings of the AMOS Technologies Conference, Maui Economic Development Board, Maui, HI, USA, 11–14 September 2018; pp. 1–17. [Google Scholar]
  21. Krage, F.J. Nasa Spacecraft Conjunction Assessment and Collision Avoidance Best Practices Handbook; Technical Report; NASA: Washington, DC, USA, 2023.
  22. Qashoa, R.; Lee, R. Classification of low earth orbit (LEO) resident space objects’(RSO) light curves using a support vector machine (SVM) and long short-term memory (LSTM). Sensors 2023, 23, 6539. [Google Scholar] [CrossRef]
  23. Leavitt, H.S.; Pickering, E.C. Periods of 25 Variable Stars in the Small Magellanic Cloud. Harv. Coll. Obs. Circ. 1912, 173, 1–3. [Google Scholar]
  24. Magnusson, P.; Dahlgren, M.; Barucci, M.A.; Jorda, L.; Binzel, R.P.; Slivan, S.M.; Blanco, C.; Riccioli, D.; Buratti, B.J.; Colas, F.; et al. Photometric observations and modeling of asteroid 1620 Geographos. Icarus 1996, 123, 227–244. [Google Scholar] [CrossRef]
  25. Isoletta, G.; Opromolla, R.; Fasano, G. Attitude motion classification of resident space objects using light curve spectral analysis. Adv. Space Res. 2025, 75, 1077–1095. [Google Scholar] [CrossRef]
  26. Allworth, J.; Windrim, L.; Bennett, J.; Bryson, M. A transfer learning approach to space debris classification using observational light curve data. Acta Astronaut. 2021, 181, 301–315. [Google Scholar] [CrossRef]
  27. Krebs, G. Gunter’s Space Page. 2025. Available online: https://space.skyrocket.de/index.html (accessed on 26 September 2025).
  28. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.; Horsley, T.; Weeks, L.; et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
  29. Howard, M.; Klem, B.; Gorman, J. RSO characterization with photometric data using machine learning. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 15–18 September 2015. [Google Scholar]
  30. Dao, P.; Weasenforth, K.; Hollon, J.; Payne, T.; Kinateder, K.; Kruchten, A. Machine learning-based stability assessment and change detection for geosynchronous satellites. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 11–14 September 2018; p. 39. [Google Scholar]
  31. Khalil, M.; Fantino, E.; Liatsis, P. Classification of space objects using machine learning methods. In Proceedings of the 2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI); IEEE: Piscataway, NJ, USA, 2019; pp. 93–96. [Google Scholar]
  32. Mital, R.; Cates, K.; Coughlin, J.; Ganji, G. A machine learning approach to modeling satellite behavior. In Proceedings of the 2019 IEEE International Conference on Space Mission Challenges for Information Technology (SMC-IT); IEEE: Piscataway, NJ, USA, 2019; pp. 62–69. [Google Scholar]
  33. Dao, P.; Haynes, K.; Gregory, S.; Hollon, J.; Payne, T.; Kinateder, K. Machine classification and sub-classification pipeline for geo light curves. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference, Maui, HI, USA, 17–20 September 2019; p. 53. [Google Scholar]
  34. Lu, Y.; Zhao, C. The basic shape classification of space debris with light curves. Chin. Astron. Astrophys. 2021, 45, 190–208. [Google Scholar] [CrossRef]
  35. Shrive, B.; Pollacco, D.; Chote, P.; Blake, J.A.; Cooke, B.F.; McCormac, J.; West, R.; Airey, R.; MacManus, A.; Allen, P. Classifying LEO satellite platforms with boosted decision trees. RAS Tech. Instrum. 2024, 3, 247–256. [Google Scholar] [CrossRef]
  36. Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R. Resident space object characterization and behavior understanding via machine learning and ontology-based bayesian networks. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 20–23 September 2016; p. 35. [Google Scholar]
  37. Linares, R.; Furfaro, R. Space object classification using deep convolutional neural networks. In Proceedings of the 2016 19th International Conference on Information Fusion (FUSION); IEEE: Piscataway, NJ, USA, 2016; pp. 1140–1146. [Google Scholar]
  38. Huo, Y.; Li, Z.; Fang, Y.; Zhang, F. Classification for geosynchronous satellites with deep learning and multiple kernel learning. Appl. Opt. 2019, 58, 5830–5838. [Google Scholar] [CrossRef]
  39. Furfaro, R.; Campbell, T.; Linares, R.; Reddy, V. Space debris identification and characterization via deep meta-learning. In Proceedings of the First International Orbital Debris Conference, Sugar Land, TX, USA, 9–12 December 2019; Universities Space Research Association: Washington, DC, USA, 2019; Volume 2109, p. 6123. [Google Scholar]
  40. Zhong, W.; Liu, H.; Gong, Y.; Geng, Y.; Yang, Z.; Zhao, C. Space objects attitude discrimination via light-curve measurements and deep convolutional neural networks. In Proceedings of the MIPPR 2019: Pattern Recognition and Computer Vision; SPIE: Bellingham, WA, USA, 2020; Volume 11430, pp. 70–77. [Google Scholar]
  41. Kerr, E.; Falco, G.; Maric, N.; Petit, D.; Talon, P.; Petersen, E.G.; Dorn, C.; Eves, S.; Sánchez-Ortiz, N.; Gonzalez, R.D.; et al. Light curves for geo object characterisation. In Proceedings of the 8th European Conference on Space Debris; ESA Space Debris Office: Darmstadt, Germany, 2021; Volume 5. [Google Scholar]
  42. Balachandran, K.; Subbarao, D. Classification of Resident Space Objects by shape and spin motion using neural networks and photometric light curves. In Proceedings of the 8th European Conference on Space Debris, Darmstadt, Germany, 20–23 April 2021; pp. 20–23. [Google Scholar]
  43. Liu, T.; Schreiber, K.U. Photometric space object classification via deep learning algorithms. Acta Astronaut. 2021, 185, 161–169. [Google Scholar] [CrossRef]
  44. Badura, G.; Valenta, C.R.; Gunter, B.C.; Shoffeitt, B. Multi-scale convolutional neural networks for inference of space object attitude status from detrended geostationary light curves. In Proceedings of the 31st AAS/AIAA Space Flight Mechanics Meeting, Charlotte, NC, USA, 31 January–4 February 2021. [Google Scholar]
  45. Badura, G.P.; Valenta, C.R.; Gunter, B. Convolutional neural networks for inference of space object attitude status. J. Astronaut. Sci. 2022, 69, 593–626. [Google Scholar] [CrossRef]
  46. Badura, G.P.; Valenta, C.R.; Churchill, L.; Hope, D.A. Recurrent neural network autoencoders for spin stability classification of irregularly sampled light curves. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 27–30 September 2022. [Google Scholar]
  47. Simon, E.; Bonizzi, P.; Ntagiou, E.; Siminski, J.; Cordelli, E. Resident Space Object Classification from Light Curves with Deep Learning. In Proceedings of the International Astronautical Congress, IAC; International Astronautical Federation, IAF: Paris, France, 2023; Volume 2023. [Google Scholar]
  48. Adriano, A.; Scott, K.; Lashgarian Azad, N. Extreme Gradient Boosting and Deep Learning Models for the Classification of Synthetic Space Debris Light Curves. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference; Maui Economic Development Board, Inc.: Kihei, HI, USA, 2024; p. 68. [Google Scholar]
  49. Bencivenga, P.; Matacera, M.A.; Isoletta, G.; Opromolla, R.; Fasano, G. A Neural Network analysis of single-channel Light Curves for the characterization of Resident Space Objects. In Proceedings of the 2025 IEEE 12th International Workshop on Metrology for AeroSpace (MetroAeroSpace); IEEE: Piscataway, NJ, USA, 2025; pp. 471–476. [Google Scholar]
  50. Li, W.; Zhang, Y.; Chen, G.; Yin, J. Fine-grained space object classification with Convolution-Boosted LSTM using light curves: A new method and a large scale dataset. Acta Astronaut. 2025, 240, 530–543. [Google Scholar] [CrossRef]
  51. Trummer, N.M.; Reza, A.; Steindorfer, M.A.; Helling, C. Machine learning-based classification for single photon space debris light curves. Acta Astronaut. 2025, 226, 542–554. [Google Scholar] [CrossRef]
  52. Qashoa, R. Resident Space Object Light Curve Classification & Space Situational Awareness Sensitivity and Simulation Studies. 2024. Available online: https://yorkspace.library.yorku.ca/items/5d116faa-87d1-424b-8783-de77a8580cde (accessed on 10 November 2025).
  53. Hwang, M.; Suthakar, V. Light-Curve Classification of Resident Space Objects for Space Situational Awareness: A Scoping Review. 2026. [CrossRef]
  54. Beskin, G.M.; Karpov, S.V.; Biryukov, A.V.; Bondar, S.F.; Ivanov, E.A.; Katkova, E.V.; Orekhova, N.V.; Perkov, A.V.; Sasyuk, V.V. Wide-field optical monitoring with Mini-MegaTORTORA (MMT-9) multichannel high temporal resolution telescope. Astrophys. Bull. 2017, 72, 81–92. [Google Scholar] [CrossRef]
  55. Šilha, J.; Krajčovič, S.; Zigo, M.; Tóth, J.; Žilková, D.; Zigo, P.; Kornoš, L.; Šimon, J.; Schildknecht, T.; Cordelli, E.; et al. Space debris observations with the Slovak AGO70 telescope: Astrometry and light curves. Adv. Space Res. 2020, 65, 2018–2035. [Google Scholar] [CrossRef]
  56. Koshkin, N.I.; Savanevich, V.; Pohorelov, A.; Shakun, L.S.; Zhukov, V.; Korobeynikova, E.; Strakhova, S.; Moskalenko, S.; Kashuba, V.; Krasnoshchokov, A. Ukrainian Database and Atlas of Light Curves of Artificial Space Objects. Odessa Astron. Publ. 2017, 30, 226–229. [Google Scholar] [CrossRef]
  57. Electro Optic Systems. Space Technology. Available online: https://eos-aus.com/space/ (accessed on 4 March 2026).
  58. ExoAnalytic Solutions, Inc. Space Intelligence. Available online: https://exoanalytic.com/space-intelligence/ (accessed on 8 March 2026).
  59. McCants, M. McCants Classified Satellite Catalog. n.d. Available online: https://mmccants.org/ (accessed on 12 April 2025).
  60. Shakun, L.; Korobeynikova, E.; Koshkin, N.; Melikyants, S.; Strakhova, S.; Terpan, S.; Burlak, N.; Golubovskaya, T.; Dragomiretsky, V.; Ryabov, A. The Observations of Artificial Satellites and Space Debris Using KT-50 Telescope in the Odessa University; Odessa Astronomical Publications: Odesa, Ukraine, 2016; pp. 217–220. [Google Scholar]
  61. U.S. Space Command. Space-Track.org Satellite Catalog (SATCAT). 2025. Available online: https://www.space-track.org (accessed on 3 June 2025).
  62. Flohrer, T.; Lemmens, S.; Virgili, B.B.; Krag, H.; Klinkrad, H.; Parrilla, E.; Sanchez, N.; Oliveira, J.; Pina, F. DISCOS-current status and future developments. In Proceedings of the 6th European Conference on Space Debris, Darmstadt, Germany, 22–25 April 2013; Volume 723, pp. 38–44. [Google Scholar]
  63. Ashikhmin, M.; Shirley, P. An anisotropic phong brdf model. J. Graph. Tools 2000, 5, 25–32. [Google Scholar] [CrossRef]
  64. Cook, R.L.; Torrance, K.E. A reflectance model for computer graphics. ACM Trans. Graph. 1982, 1, 7–24. [Google Scholar] [CrossRef]
  65. Blain, J.M. The Complete Guide to Blender Graphics: Computer Modeling & Animation; AK Peters/CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  66. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
  67. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  68. Hosmer, D.W., Jr.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  69. Rish, I. An empirical study of the naive Bayes classifier. In Proceedings of the IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence, Seattle, WA, USA, 4–6 August 2001; Georgia Institute of Technology: Atlanta, GA, USA, 2001; Volume 3, pp. 41–46. [Google Scholar]
  70. Peterson, L.E. K-nearest neighbor. Scholarpedia 2009, 4, 1883. [Google Scholar] [CrossRef]
  71. Gurney, K. An Introduction to Neural Networks; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  72. Clark, P.; Niblett, T. The CN2 induction algorithm. Mach. Learn. 1989, 3, 261–283. [Google Scholar] [CrossRef]
  73. Kotsiantis, S.B. Decision trees: A recent overview. Artif. Intell. Rev. 2013, 39, 261–283. [Google Scholar] [CrossRef]
  74. Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28. [Google Scholar] [CrossRef]
  75. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; Association for Computing Machinery (ACM): New York, NY, USA, 2016. [Google Scholar]
  76. Mallat, S. Group invariant scattering. Commun. Pure Appl. Math. 2012, 65, 1331–1398. [Google Scholar] [CrossRef]
  77. Bottou, L. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 421–436. [Google Scholar]
  78. Zhou, Q.; Zhou, H.; Li, T. Cost-sensitive feature selection using random forest: Selecting low-cost subsets of informative features. Knowl.-Based Syst. 2016, 95, 1–11. [Google Scholar] [CrossRef]
  79. Balakrishnama, S.; Ganapathiraju, A. Linear discriminant analysis-a brief tutorial. Inst. Signal Inf. Process. 1998, 18, 1–8. [Google Scholar]
  80. Ho, T.K. Nearest neighbors in random subspaces. In Proceedings of the Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR); Springer: Berlin/Heidelberg, Germany, 1998; pp. 640–648. [Google Scholar]
  81. Svozil, D.; Kvasnicka, V.; Pospichal, J. Introduction to multi-layer feed-forward neural networks. Chemom. Intell. Lab. Syst. 1997, 39, 43–62. [Google Scholar] [CrossRef]
  82. Eddy, S.R. Hidden markov models. Curr. Opin. Struct. Biol. 1996, 6, 361–365. [Google Scholar] [CrossRef] [PubMed]
  83. Danielsson, P.E. Euclidean distance mapping. Comput. Graph. Image Process. 1980, 14, 227–248. [Google Scholar] [CrossRef]
  84. Senin, P. Dynamic time warping algorithm review. Inf. Comput. Sci. Dep. Univ. Hawaii Manoa Honol. USA 2008, 855, 40. [Google Scholar]
  85. Valdez, P.; Donohoe, G. Utility of BRDF Models for Estimating Optimal View Angles in Classification of Remotely Sensed Images. In NASA University Research Centers Technical Advances in Education, Aeronautics, Space, Autonomy, Earth and Environment; NASA: Washington, DC, USA, 1997; Volume 1. [Google Scholar]
  86. O’shea, K.; Nash, R. An introduction to convolutional neural networks. arXiv 2015, arXiv:1511.08458. [Google Scholar] [CrossRef]
  87. Sainath, T.N.; Vinyals, O.; Senior, A.; Sak, H. Convolutional, long short-term memory, fully connected deep neural networks. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); IEEE: Piscataway, NJ, USA, 2015; pp. 4580–4584. [Google Scholar]
  88. Yi, H.; Shiyu, S.; Xiusheng, D.; Zhigang, C. A study on deep neural networks framework. In Proceedings of the 2016 IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC); IEEE: Piscataway, NJ, USA, 2016; pp. 1519–1522. [Google Scholar]
  89. Fu, X.; Ch’Ng, E.; Aickelin, U.; See, S. CRNN: A joint neural network for redundancy detection. In Proceedings of the 2017 IEEE International Conference on Smart Computing (SMARTCOMP); IEEE: Piscataway, NJ, USA, 2017; pp. 1–8. [Google Scholar]
  90. Naul, B.; Bloom, J.S.; Pérez, F.; Van Der Walt, S. A recurrent neural network for classification of unevenly sampled variable stars. Nat. Astron. 2018, 2, 151–155. [Google Scholar] [CrossRef]
  91. Amato, D.; Furfaro, R.; Rosengren, A.J.; Maadani, M. Attitude propagation of resident space objects with recurrent neural networks. In Proceedings of the 2018 Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), Maui, HI, USA, 11–14 September 2018. [Google Scholar]
  92. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  93. Zhang, L.M. Multi-function Convolutional Neural Networks for Improving Image Classification Performance. arXiv 2018, arXiv:1805.11788. [Google Scholar] [CrossRef]
  94. Karim, F.; Majumdar, S.; Darabi, H.; Harford, S. Multivariate LSTM-FCNs for time series classification. Neural Netw. 2019, 116, 237–245. [Google Scholar] [CrossRef]
  95. Cheng, Z.; Sun, H.; Takeuchi, M.; Katto, J. Deep convolutional autoencoder-based lossy image compression. In Proceedings of the 2018 Picture Coding Symposium (PCS); IEEE: Piscataway, NJ, USA, 2018; pp. 253–257. [Google Scholar]
  96. Finn, C.; Abbeel, P.; Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning; PMLR: New York, NY, USA, 2017; pp. 1126–1135. [Google Scholar]
  97. Lin, Z.; Li, M.; Zheng, Z.; Cheng, Y.; Yuan, C. Self-attention convlstm for spatiotemporal prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; AAAI: Washington, DC, USA, 2020; Volume 34, pp. 11531–11538. [Google Scholar]
  98. Dai, Z.; Liu, H.; Le, Q.V.; Tan, M. Coatnet: Marrying convolution and attention for all data sizes. Adv. Neural Inf. Process. Syst. 2021, 34, 3965–3977. [Google Scholar]
  99. Zhang, Y.; Yuan, Y.; Feng, Y.; Lu, X. Hierarchical and robust convolutional neural network for very high-resolution remote sensing object detection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5535–5548. [Google Scholar] [CrossRef]
  100. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  101. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2016; pp. 770–778. [Google Scholar]
  102. Chen, X.; Gong, C.; He, Q.; Hou, X.; Liu, Y. LDC-VAE: A latent distribution consistency approach to variational autoencoders. arXiv 2021, arXiv:2109.10640. [Google Scholar]
  103. Zbontar, J.; Jing, L.; Misra, I.; LeCun, Y.; Deny, S. Barlow twins: Self-supervised learning via redundancy reduction. In Proceedings of the International Conference on Machine Learning; PMLR: New York, NY, USA, 2021; pp. 12310–12320. [Google Scholar]
  104. Zhao, W.; Queralta, J.P.; Westerlund, T. Sim-to-real transfer in deep reinforcement learning for robotics: A survey. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI); IEEE: Piscataway, NJ, USA, 2020; pp. 737–744. [Google Scholar]
  105. Courty, N.; Flamary, R.; Tuia, D.; Rakotomamonjy, A. Optimal transport for domain adaptation. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1853–1865. [Google Scholar] [CrossRef]
  106. Wang, Y.; Yao, Q.; Kwok, J.T.; Ni, L.M. Generalizing from a few examples: A survey on few-shot learning. ACM Comput. Surv. (CSUR) 2020, 53, 1–34. [Google Scholar] [CrossRef]
  107. Suthakar, V.; Sanvido, A.A.; Qashoa, R.; Lee, R.S.K. Comparative analysis of resident space object (RSO) detection methods. Sensors 2023, 23, 9668. [Google Scholar] [CrossRef]
  108. Suthakar, V.; Porto, I.; Myhre, M.; Sanvido, A.A.; Clark, R.; Lee, R.S.K. RSONAR: Data-Driven Evaluation of Dual-Use Star Tracker for Stratospheric Space Situational Awareness (SSA). Sensors 2025, 26, 179. [Google Scholar] [CrossRef] [PubMed]
  109. Qashoa, R.; Suthakar, V.; Chianelli, G.; Kunalakantha, P.; Lee, R.S.K. Technology Demonstration of Space Situational Awareness (SSA) Mission on Stratospheric Balloon Platform. Remote Sens. 2024, 16, 749. [Google Scholar] [CrossRef]
  110. Kunalakantha, P.; Suthakar, V.; Harrison, P.; Driedger, M.; Qashoa, R.; Chianelli, G.; Lee, R.S.K. Resident Space Object (RSO) Tracking in Space-Based, Low Resolution, Non-Constant-Attitude Imagery. Remote Sens. 2026, 18, 755. [Google Scholar] [CrossRef]
  111. Jeong, Y.; Suthakar, V.; Qashoa, R.; Sohn, G.; Lee, R.S. OrbitTrack: Advanced RSO detection and tracking from wide field-of-view on-orbit images. Adv. Space Res. 2025, 76, 4387–4400. [Google Scholar] [CrossRef]
  112. Stewart, M.; Lee, R.; Ryall, S. Image Processing Techniques for Space Situational Awareness-Performing Photometry on James Webb Space Telescope Imagery from NEOSSat. In Proceedings of the Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference; Maui Economic Development Board, Inc.: Kihei, Maui, HI, USA, 2023; p. 182. [Google Scholar]
Figure 1. This workflow summarizes the end-to-end pipeline used to derive machine-learning-ready light curves from optical observations. Raw images collected from ground- or space-based telescopes undergo preprocessing, including dark-frame subtraction, flat-field correction, background modeling, and astrometric calibration, followed by photometric extraction using aperture photometry or Point Spread Function (PSF)-based methods. The resulting calibrated light curves are normalized, converted to magnitudes, and time-aligned before entering machine learning and deep learning models. These models may use handcrafted temporal features, sequence-based neural architectures (e.g., recurrent neural networks (RNNs), convolutional neural networks (CNNs), and Transformers), data-augmentation strategies (e.g., noise injection and temporal warping), and sim-to-real adaptation via domain transfer or synthetic light curves. Final outputs support RSO classification and characterization tasks such as object type, attitude state, and shape-related properties.
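The magnitude-conversion and normalization steps in this pipeline can be made concrete with a minimal NumPy sketch. The zero point and the synthetic light curve below are illustrative assumptions, not values from the reviewed studies:

```python
import numpy as np

def flux_to_magnitude(flux, zero_point=25.0):
    """Convert background-subtracted flux counts to instrumental
    magnitudes via m = zp - 2.5 * log10(flux). The zero point here
    is a placeholder; in practice it comes from photometric calibration."""
    flux = np.asarray(flux, dtype=float)
    return zero_point - 2.5 * np.log10(flux)

def normalize_light_curve(mags):
    """Zero-mean, unit-variance normalization of a magnitude time series,
    a common preprocessing step before ML/DL classifiers."""
    mags = np.asarray(mags, dtype=float)
    return (mags - mags.mean()) / mags.std()

# Synthetic periodic light curve standing in for a tumbling object
t = np.linspace(0.0, 60.0, 240)                       # seconds
flux = 1000.0 + 400.0 * np.sin(2 * np.pi * t / 12.0)  # counts
norm = normalize_light_curve(flux_to_magnitude(flux))
```

Normalizing in magnitude space removes per-pass brightness offsets caused by range and sensor differences, so that a classifier sees shape rather than absolute scale.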
Figure 2. Taxonomy of RSO studies using light-curve or spectral measurements. The flowchart separates prior work into three broad methodological categories according to the study objective: (a) classification, in which objects are assigned discrete labels such as attitude state or object class; (b) characterization, in which physical or optical properties such as spin rate, spectral features, or approximate size are estimated; and (c) hybrid approaches, in which characterization-derived attributes are subsequently used as features for machine learning classifiers. The YES/NO branches indicate the decision logic used to group studies within this taxonomy.
Figure 3. Visualization of classification labels used in prior light-curve-based RSO studies. Node size reflects the frequency with which each label appears across the reviewed literature, with larger nodes denoting more commonly used categories such as stable and tumbling attitude states. Link density and shading represent the strength of co-occurrence between labels within the same study; darker and thicker links indicate more frequent pairing. The diagram illustrates the uneven distribution of available ground-truth labels, the dominance of attitude-state classification, and the fragmented use of program names, platform types, bus families, and object classes across existing datasets. For readability within the radial network visualization, selected label names were abbreviated (e.g., Velocity Vector pointing → Velocity vector, Nadir pointing → Nadir, and Three-Axis Stabilized → TAG). Abbreviations preserve the original semantic meaning.
Figure 4. PRISMA-style flow diagram illustrating the study selection process. A total of 297 records were identified through database searching and citation chaining. After the removal of 152 duplicate records, 145 unique studies underwent title and abstract screening. Following full-text assessment, 29 studies met the eligibility criteria and were included in the final systematic scoping review of ML/DL-based classification of RSO light curves.
Figure 5. Distribution of objects represented in the publicly available light-curve databases MMT, SDLCD, and the Ukrainian Database and Atlas. Panel (A) shows the object-class distribution (payload, rocket body, debris, and unknown), while panel (B) shows the orbital-regime distribution (LEO, MEO, HEO, GEO, and unknown). Bars represent the percentage of samples contributed by each database within each category.
Figure 6. Kernel density estimate of average geometric cross-sectional area for objects represented in the publicly available light-curve databases, plotted on a logarithmic area scale (m²). The comparison is based primarily on payloads and rocket bodies, because ESA DISCOS does not provide cross-sectional area values for many debris fragments; therefore, the smaller end of the debris population is underrepresented in this visualization.
Table 1. Keyword families used to construct literature search queries.
| Keyword Family | Representative Terms |
|---|---|
| Object terms | “resident space object”, “space object”, “satellite”, “space debris” |
| Signal terms | “light curve”, “photometric”, “optical photometry” |
| Task terms | “classification”, “categorization”, “attitude classification”, “characterization” |
| Method terms | “machine learning”, “deep learning”, “CNN”, “RNN”, “LSTM”, “transformer”, “self-supervised” |
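The keyword families in Table 1 are combined into search queries by taking one term from each family and joining the terms with AND; terms within a family act as interchangeable alternatives. A minimal illustrative sketch (the family subsets below are truncated from Table 1 for brevity; the exact query syntax used against each database is not specified in this section):

```python
from itertools import product

# Truncated keyword families from Table 1.
families = {
    "object": ['"resident space object"', '"space debris"'],
    "signal": ['"light curve"', '"photometric"'],
    "task":   ['"classification"', '"characterization"'],
    "method": ['"machine learning"', '"deep learning"'],
}

# One boolean query per cross-family combination: terms across
# families are ANDed; terms within a family are alternative runs.
queries = [" AND ".join(combo) for combo in product(*families.values())]

print(len(queries))   # 2 * 2 * 2 * 2 = 16 candidate query strings
print(queries[0])
```

In practice the within-family alternatives can instead be ORed into a single long query; enumerating the cross product, as above, is useful when a database limits query length.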
Table 2. Sensor and observational characteristics of publicly available RSO light-curve databases. The reported values reflect typical configurations described in the source publications. Parameters not explicitly documented in the public literature are indicated as “–”. Database sizes reflect counts as of May 2025.
| Parameter | MMT | SDLCD | Ukrainian Database |
|---|---|---|---|
| Observatory location | Karachay-Cherkessia, Russia | Modra, Slovakia | Odessa, Ukraine |
| Telescope system | MMT-9 | AGO70 | KT-50 |
| Detector type | CMOS | CCD | CCD |
| Optical filters | B, V, R, white, polarimetric | B, V, R, I | – |
| Exposure time (s) | 0.10 | 1.0–5.0 | 0.02 |
| Field of view (arcmin) | 1800 × 1800 (survey); 600 × 600 (follow-up) | 28.5 × 28.5 | 12 × 9 |
| Image scale (arcsec/pixel) | ∼15 | 1.67 | ∼1.0 |
| Limiting magnitude (V band) | ∼11 (at 0.1 s) | – | ∼11 |
| Readout/frame rate (frames s⁻¹) | ∼10 | – | 25 |
| Number of unique RSOs | 12,932 | 791 | 340 |
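The frame rates in Table 2 bound the fastest tumble each sensor can resolve: the photometric oscillation frequency must stay below half the sampling rate (the Nyquist limit). A minimal sketch of that bound; the two-peaks-per-rotation assumption (typical of a glinting cylinder such as a rocket body) is illustrative, not taken from the source:

```python
def min_resolvable_period(frame_rate_hz: float, peaks_per_rotation: int = 2) -> float:
    """Shortest rotation period (s) whose brightness oscillation stays
    below the Nyquist frequency of a sensor sampling at frame_rate_hz.

    A tumbling cylinder typically produces `peaks_per_rotation` brightness
    maxima per revolution, so the photometric frequency is
    peaks_per_rotation / period, which must not exceed frame_rate_hz / 2.
    """
    return 2.0 * peaks_per_rotation / frame_rate_hz

# MMT at ~10 frames/s vs. the Ukrainian KT-50 at 25 frames/s (Table 2):
print(min_resolvable_period(10.0))   # 0.4 s
print(min_resolvable_period(25.0))   # 0.16 s
```

This is one reason the fast-readout survey systems in Table 2 are attractive for attitude-state work: slowly sampled curves alias rapid tumbles into apparently stable signatures.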
Table 3. Traditional machine learning approaches for simulated light-curve RSO classification. Reported accuracies are not directly comparable across studies because of differences in datasets, preprocessing, label spaces, and evaluation protocols. When multiple models were evaluated, only the best-reported result is shown, and the best-performing algorithm is indicated in bold in the Algorithms column.
| Data Source | Orbit | Algorithms | Classification Task | Reported Accuracy |
|---|---|---|---|---|
| Ashikhmin–Shirley | – | SVM, Bagged Trees | Object type | [20]: 95.3% |
| Blender | – | SVM | Shape/configuration | [26]: 51.62% |
| – | GEO | RF, LR, k-NN, NB, NN, SVM, CN2 | Object type and attitude state | [29]: 99.56% |
| Observational (6 sensors) | near-GEO | SVM, DT, RF, LR | Attitude state | [32]: 89% |
| Cook–Torrance | – | RF, k-NN, DT, SVM | Shape/configuration | [34]: 95% |
| Blender | LEO | XGBoost + WST | Attitude state | [48]: 90% |
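The traditional pipelines in Table 3 reduce each light curve to a small handcrafted feature vector before fitting a classical model. A stdlib-only sketch of representative features (mean level, scatter, amplitude, and a lag-1 autocorrelation as a crude smoothness proxy); the exact feature sets differ from study to study:

```python
import math
from statistics import mean, pstdev

def lightcurve_features(mags):
    """Handcrafted summary features for one light curve (magnitudes)."""
    mu = mean(mags)
    sigma = pstdev(mags)
    amplitude = max(mags) - min(mags)
    # Lag-1 autocorrelation: near 1 for smooth curves, lower for
    # rapidly oscillating (e.g., tumbling) signatures.
    centered = [m - mu for m in mags]
    denom = sum(c * c for c in centered) or 1.0
    ac1 = sum(a * b for a, b in zip(centered, centered[1:])) / denom
    return [mu, sigma, amplitude, ac1]

# A slowly drifting (stable) vs. oscillating (tumbling) toy curve:
stable = [5.0 + 0.01 * i for i in range(100)]
tumbling = [5.0 + math.sin(0.8 * i) for i in range(100)]
f_stable = lightcurve_features(stable)
f_tumbling = lightcurve_features(tumbling)
print(f_stable[2] < f_tumbling[2])   # tumbling has the larger amplitude
```

Feature vectors of this kind are then passed to the SVM, tree-ensemble, or boosting classifiers listed in the Algorithms column.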
Table 4. Traditional machine learning approaches for light-curve-based RSO classification using measured (observational) data. Reported accuracies are not directly comparable across studies; the best-reported result is shown when multiple models were tested, and the best-performing algorithm is indicated in bold in the Algorithms column.
| Data Source | Orbit | Algorithms | Classification Task | Reported Accuracy |
|---|---|---|---|---|
| EGTN | GEO | RF, k-NN, SVM, SGD | Attitude state | [15]: 99%/62% (stable/tumbling) |
| Private | GEO | CSRF, k-NN | Object type | [16]: 97.8% |
| MMT | – | Bagged Trees, SVM | Object type | [20]: 63.5% |
| Ukrainian DB | LEO | SVM | Attitude state and object type | [22]: 87% |
| EOS | – | SVM | Shape/configuration | [26]: 44.07% |
| MMT | – | SVM | Shape/configuration | [26]: 70.95%; 51.41% (balanced) |
| Private | GEO | RF | Attitude state | [30]: 92% |
| MMT | – | SVM, DT, LDA, NB, k-NN, Bagged Trees, Subspace k-NN, FFNN | Object type | [31]: 88.3% |
| Private | GEO | HMM–RF, RF | Hierarchical attitude–subtype | [33]: 96% |
| MMT | – | RF | Object type | [34]: 85.32%/93.18% (rocket body/satellite) |
| MMT | – | Boosted DT | Platform/family (pairwise) | [35]: 86.13% |
| IWF SPARC | – | 1-NN + ED, 1-NN + DTW, RF, XGBoost, Features + RF, Features + XGBoost | Object (family/type/identity) | [51]: 90.70%/86.67%/88.17% |
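Among the baselines in Table 4, [51] pairs a 1-NN classifier with dynamic time warping (DTW), which tolerates the phase shifts that arise when the same object is observed at different epochs or geometries. A minimal stdlib implementation of the classic DTW dynamic-programming recurrence (unconstrained, purely illustrative; production use would add a warping-window constraint for speed):

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = cost of the best alignment of a[:i] with b[:j].
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

def classify_1nn(query, labelled):
    """labelled: list of (sequence, label) pairs; nearest label under DTW."""
    return min(labelled, key=lambda sl: dtw_distance(query, sl[0]))[1]

# A phase-shifted copy of a curve stays close under DTW:
train = [([0, 1, 2, 1, 0], "tumbling"), ([1, 1, 1, 1, 1], "stable")]
print(classify_1nn([0, 0, 1, 2, 1, 0], train))
```

Euclidean-distance 1-NN (the "1-NN + ED" baseline) would instead require equal-length, phase-aligned curves, which is exactly the limitation DTW relaxes.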
Table 5. Deep learning approaches for simulated light-curve-based RSO classification. Accuracies are not directly comparable due to differences in datasets, preprocessing, and evaluation protocols. The best-reported results are shown when multiple models are tested, and the best-performing algorithm is indicated in bold in the Algorithms column.
| Data Source | Orbit | Algorithms | Classification Task | Reported Accuracy |
|---|---|---|---|---|
| Ashikhmin–Shirley | GEO | CNN | Object type and attitude state | [20]: 97.83% |
| Blender | – | CNN, FCNN | Shape/configuration | [26]: 84.4% |
| Cook–Torrance | – | CNN/DNN | Shape/configuration | [34]: 95.5% |
| – | – | CNN | Object type and attitude state | [36]: 99.6% |
| AS | GEO | CNN | Object type and attitude state | [37]: 99.6% |
| Phong | GEO | ENDE/ENCLA variants | Shape type and attitude state | [38]: 99% (k = 5) |
| Phong | near-GEO | CNN | Object and shape type | [39]: 97.8% |
| – | GEO | CAE + CNN variants, LSTM | Shape class | [41]: >90% |
| – | – | LSTM–HMM | Shape class and attitude state | [42]: 91.7% |
| Beard–Maxwell | GEO | MCNN, CNN | Attitude state | [44]: 0.729 * |
| Beard–Maxwell | LEO | CNN | Attitude state | [45]: 86.2% |
| Blender | LEO | LSTM-FCN, CNN, LSTM | Shape class | [48]: 94.6% |
* This study reported performance as the Matthews correlation coefficient (MCC) for the full-night, LPA-removed configuration, rather than as accuracy.
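For reference, the MCC reported by [44] is computed, for a binary confusion matrix, as MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)). A stdlib sketch of the binary case (the illustrative counts below are arbitrary, not from the study):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from a binary confusion matrix.

    Ranges from -1 (total disagreement) to +1 (perfect prediction);
    0 corresponds to chance-level performance. Conventionally 0 is
    returned when any marginal sum is zero.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc(tp=90, tn=85, fp=15, fn=10))   # ~0.751 on this toy matrix
```

Unlike raw accuracy, MCC accounts for all four cells of the confusion matrix, which makes it a more honest summary under the class imbalance that pervades these datasets.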
Table 6. Deep learning approaches for light-curve-based RSO classification using measured (observational) data. Accuracies are not directly comparable due to differences in datasets, preprocessing, and evaluation protocols. Best-reported results are shown when multiple models are tested, and the best-performing algorithm is indicated in bold in the Algorithms column.
| Data Source | Orbit | Algorithms | Classification Task | Reported Accuracy |
|---|---|---|---|---|
| MMT | – | CNN | Object type and attitude state | [20]: 75.4% |
| Ukrainian DB | LEO | LSTM | Attitude state and object type | [22]: 92% |
| EOS | – | CNN, FCNN | Shape/configuration | [26]: 75.3% |
| MMT | – | CNN, FCNN | Shape/configuration | [26]: 90.71%; 80.07% (balanced) |
| Blender to EOS | – | Transfer CNN | Shape/configuration | [26]: 78.3% |
| MMT | – | DNN | Object type | [34]: 99.31% |
| MMT | – | CNN | Object type | [39]: 77% |
| MMT | – | MAML | Object type | [39]: 85% (20-shot) |
| Private (STFT) | GEO | CNN | Object type and attitude state | [40]: 98.5% (discrete STFT) |
| MMT | – | ConvLSTM + CNN | Object type and attitude state | [43]: 86.07% |
| EGTN | GEO | RNN autoencoder + balanced RF | Attitude state | [46]: 90.4% |
| MMT | – | CoAtNet | Platform/family | [47]: 92.5% |
| MMT | – | HRCNN, FCNN | Operational status | [49]: 92% (80–100 s) |
| MMT | – | CoBo-LSTM, Transformer, LSTM, 1D-ResNet, RLNet, LC-VAE | Object type (coarse and fine-grained) | [50]: 75.99%/42.46% (coarse/fine) * |
| IWF SPARC | – | CNN (raw/downsampled/TSFresh features) | Family | [51]: 88% |
| Ukrainian DB | LEO | Barlow Twins, LSTM | Attitude state and object type vs. duration | [52]: 97% (5 min) |
* This study reported performance using macro F1 rather than accuracy.
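Similarly, the macro F1 reported by [50] is the unweighted mean of per-class F1 scores, which penalizes neglect of rare classes under the payload-dominated imbalance these databases exhibit. A stdlib sketch with a toy imbalanced example (labels are illustrative):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over the classes present in y_true."""
    classes = sorted(set(y_true))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # Per-class F1; defined as 0 when the class is never predicted or present.
        scores.append(2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0)
    return sum(scores) / len(scores)

# Predicting the majority class everywhere gives 80% accuracy here,
# but the minority class contributes an F1 of 0, dragging macro F1 down:
y_true = ["payload"] * 8 + ["debris"] * 2
y_pred = ["payload"] * 10
print(macro_f1(y_true, y_pred))   # ~0.444
```

This is why accuracy figures from Tables 3–6 are hard to compare when the underlying label distributions differ: a macro-averaged metric and plain accuracy can rank the same models differently.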
Hwang, M.; Suthakar, V.; Qashoa, R.; Lee, R.S.K.; Sohn, G. Light-Curve Classification of Resident Space Objects for Space Situational Awareness: A Scoping Review. Aerospace 2026, 13, 287. https://doi.org/10.3390/aerospace13030287