Article

Implementation of the Stack-CNN Algorithm for Space Debris Detection on FPGA Board

by Matteo Abrate 1,2, Federico Reynaud 1,2, Mario Edoardo Bertaina 1,2,*, Antonio Giulio Coretti 1,2, Andrea Frasson 2, Antonio Montanaro 2, Raffaella Bonino 1,2 and Roberta Sirovich 3

1 INFN Section of Turin, Via P. Giuria 1, 10125 Turin, Italy
2 Department of Physics, University of Turin, Via P. Giuria 1, 10126 Turin, Italy
3 Department of Mathematics G. Peano, University of Turin, Via Carlo Alberto 10, 10123 Turin, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9268; https://doi.org/10.3390/app15179268
Submission received: 18 July 2025 / Revised: 20 August 2025 / Accepted: 21 August 2025 / Published: 23 August 2025
(This article belongs to the Special Issue Application of Machine Learning in Space Engineering)

Abstract

Featured Application

This work enables real-time, onboard detection of space debris using AI algorithms implemented on FPGA hardware, making it suitable for integration into CubeSat platforms and other resource-constrained space systems for enhanced space situational awareness.

The detection of faint, fast-moving objects, such as space debris, in optical data is a major challenge due to their low signal-to-background ratio and short visibility time. This work addresses this issue by implementing the Stack-CNN algorithm, originally designed for offline analysis, on an FPGA-based platform to enable real-time triggering capabilities in constrained space hardware environments. The Stack-CNN combines a stacking method, which enhances the signal-to-noise ratio of moving objects across multiple frames, with a lightweight convolutional neural network optimized for embedded inference. The FPGA implementation was developed on a Xilinx Zynq Ultrascale+ platform and achieves low-latency, power-efficient inference compatible with CubeSat systems. Performance was evaluated using both a physics-based simulation framework and data acquired during outdoor experimental campaigns. The trigger maintains high detection efficiency for 10 cm-class targets up to 30–40 km distance and reliably detects real satellite tracks with signal levels as low as 1% above background. These results validate the feasibility of onboard real-time debris detection using embedded AI and demonstrate the robustness of the algorithm under realistic operational conditions. The study was conducted in the context of a broader technology demonstration project, called DISCARD, aimed at increasing space situational awareness capabilities on small platforms.

1. Introduction

The proliferation of space debris, especially in low Earth orbit (LEO), poses a growing threat to operational satellites and space missions. While large debris fragments are routinely tracked by ground-based surveillance networks operated by agencies such as ESA and NASA [1,2], smaller objects in the 1–10 cm size range remain undetectable by conventional systems. These objects are too small to be catalogued yet large enough to cause catastrophic damage in the event of a collision. Developing onboard detection systems capable of recognizing such debris in real-time is a critical step toward improving space situational awareness.
Machine learning techniques, and in particular convolutional neural networks (CNNs), have recently shown great potential in detecting faint signals hidden by noise in astrophysical and orbital observation contexts [3,4]. However, deploying such models onboard small satellites or CubeSats introduces hardware constraints that limit the use of traditional deep learning architectures. Recent work in deep learning–based optical debris detection has explored the use of spatial and temporal information for identifying faint moving objects in video sequences, as in the SDebrisNet approach [5]. For this reason, the combination of a lightweight CNN with a stacking procedure, known as the Stack-CNN algorithm, offers a promising solution for real-time detection of faint, fast-moving objects in noisy optical data [6].
The stacking method, originally proposed by Yanagisawa for GEO debris detection [7], enhances the signal-to-background ratio (SBR) by coherently integrating frames along hypothetical motion vectors. This technique was later adapted as a second-level trigger in the JEM-EUSO program [8]. The CNN component, optimized for low parameter count and shallow architecture, enables efficient classification of stacked frames while remaining compatible with field-programmable gate array (FPGA) implementation [9].
Compared with previous studies, the present work introduces several novel contributions. This is the first FPGA implementation of the patented Stack-CNN algorithm, originally developed within the JEM-EUSO collaboration for the detection of meteors. Here, we demonstrate its adaptation to the space debris domain, turning a method previously confined to fundamental astrophysics into a functional detector concept for orbital safety. Paired with state-of-the-art optical sensors, the algorithm is capable of recognizing debris with sub-10 cm diameters directly from space, a population that is still only characterized through statistical models from agencies such as ESA and NASA [1]. This capability has the potential to deliver the first experimental dataset of small debris objects, contributing with direct measurements to the existing statistical estimates. Several recent efforts have highlighted the importance of establishing direct space-based optical observations of small debris, ranging from multi-CubeSat mission concepts [10,11] to dedicated instruments such as ESA’s planned Space-Based Optical Component (SBOC) [12]. These developments underline the growing recognition that in-situ optical measurements are essential to complement statistical environment models. In this context, the FPGA-based design presented here demonstrates that real-time analysis speeds can be reached under restricted resource constraints. Such capability could be integrated in different operational scenarios, for example by providing continuous information to the onboard control system of a satellite about the local debris environment measured by its sensors. This strengthens the perspective of deploying compact, autonomous debris monitors on CubeSat-class platforms, in line with recent advances in quantization-aware FPGA neural network accelerators [13]. 
The study therefore carries clear academic significance, by providing a new research tool for orbital debris population studies based on direct observations. It also has industrial relevance, paving the way for collision-avoidance services and sustainable orbital traffic management.
In this work, the FPGA implementation of the Stack-CNN algorithm using a Xilinx Zynq Ultrascale+ platform is presented. Its performance is evaluated through extensive simulations and experimental campaigns. The capability to detect objects with low SBR is evaluated using both synthetic and real-world data, including satellite passages observed during outdoor campaigns. The implementation demonstrates real-time processing capability, robustness to background noise, and high detection efficiency, confirming its suitability for embedded systems with constrained resources.

2. Materials and Methods

2.1. Overview of the Stack-CNN Algorithm

The Stack-CNN algorithm is designed to detect faint, fast-moving objects in optical image sequences, with a particular focus on space debris and meteors. It combines two core stages: a stacking operation that enhances the visibility of linear motion signals across frames, and a convolutional neural network (CNN) that performs binary classification on the resulting stacked images. This hybrid approach allows for effective detection even in low signal-to-noise conditions, while maintaining a computational footprint compatible with embedded hardware platforms. The following subsections describe the two main components of the algorithm: the Stacking Method and the CNN-based classification.

2.1.1. Stacking Method

The Stacking Method is applied to objects, such as space debris or meteors, that move linearly across the field of view (FoV) of an optical instrument with fixed apparent velocity v and direction θ. These motion parameters are typically a function of the satellite's orbit and pointing direction. The method is built upon two core operations: shifting and summation [6,14]. Given a sequence of n frames, each defined as I(x, y, t) with t ∈ {0, 1, …, n − 1} and pixel coordinates (x, y), the shifting process is designed to align the signal of a moving object across time. This is done by reversing the object's expected motion, effectively "freezing" its position in the stack. The shifting vector (d_x, d_y) is derived from the object's velocity and direction and is defined as:

d_x = v · cos(θ) · t,   d_y = v · sin(θ) · t

Since the image grid is discrete, d_x and d_y are rounded to the nearest integer via the int(·) operation. This yields a shifted image I_shift(x, y; t) as:

I_shift(x, y; t) = I(int(x − d_x), int(y − d_y); t)

After shifting all n frames according to the hypothesized trajectory, they are summed to obtain the stacked image:

I_stack(x, y) = Σ_{t=0}^{n−1} I_shift(x, y; t)

This operation coherently integrates signal contributions from a moving object, while background noise, assumed to be temporally uncorrelated, tends to average out. The improvement is quantified through the signal-to-noise ratio (SNR), defined as:

SNR = Signal / σ_bkg = Signal / √μ_bkg

Here, μ_bkg is the average background level and σ_bkg its standard deviation. Assuming a Poissonian noise model, σ_bkg = √μ_bkg. In this work, a nominal background of 1 photon/GTU (Gate Time Unit) is considered, aligned with the Mini-EUSO calibration [15,16]. For stacked images, the signal contribution grows linearly with the number of frames n, while the noise grows as √n, resulting in an overall enhancement:

SNR_stack = (Signal · n) / √(μ_bkg · n) = √n · SNR
It is critical to choose n such that it matches the effective duration of the object’s visibility within the frame sequence. A mismatch could lead to a reduced improvement in SNR. In our experimental context, typical values of n range from 6 (e.g., for short-lived meteor events, fast debris) up to 40 (e.g., for slower, longer debris crossings). More details can be found in “Stack-CNN algorithm: A new approach for the detection of space objects” by Montanaro et al. [6].
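The shift-and-sum procedure above can be sketched in a few lines of NumPy. The frame size, background level, and motion parameters below are illustrative placeholders, not taken from the flight configuration; the sketch only demonstrates the ~√n gain in SNR.

```python
import numpy as np

rng = np.random.default_rng(0)

n, H, W = 16, 16, 16          # number of frames and sensor size (16x16 here, illustrative)
mu_bkg = 4.0                  # mean Poissonian background (counts/pixel/frame)
signal = 6.0                  # counts added by the moving object in each frame
v, theta = 1.0, 0.0           # apparent speed (pixels/frame) and direction

# Build a sequence with a linearly moving point source on Poisson noise.
frames = rng.poisson(mu_bkg, size=(n, H, W)).astype(float)
x0, y0 = 0, 8
for t in range(n):
    x = int(x0 + v * np.cos(theta) * t)
    y = int(y0 + v * np.sin(theta) * t)
    frames[t, y, x] += signal

# Shift each frame back along the hypothesized motion vector, then sum.
stacked = np.zeros((H, W))
for t in range(n):
    dx = int(v * np.cos(theta) * t)
    dy = int(v * np.sin(theta) * t)
    stacked += np.roll(np.roll(frames[t], -dx, axis=1), -dy, axis=0)

# The object's pixel now holds ~n*signal over an n*mu_bkg background,
# so the SNR improves by ~sqrt(n) relative to a single frame.
snr_single = signal / np.sqrt(mu_bkg)
snr_stack = (stacked[y0, x0] - n * mu_bkg) / np.sqrt(n * mu_bkg)
```

With a correct motion hypothesis the track collapses onto a single pixel of the stacked image; a wrong (v, θ) spreads the signal out, which is what makes the subsequent CNN classification meaningful.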

2.1.2. Quantized CNN Architecture

The convolutional neural network used in the Stack-CNN pipeline performs binary classification on stacked images to determine whether a valid object track is present. While the original model proposed by Montanaro et al. [6] was designed for offline analysis with relatively high-resolution input, the version proposed here is optimized for real-time inference on resource-constrained FPGA hardware. The two architectures share a similar logic and structure, but differ in image resolution, layer sizes, and quantization.
The original Stack-CNN model was designed to process stacked images of size 48 × 48 pixels. It consists of three convolutional layers with ReLU activations and 3 × 3 kernels, followed by 2 × 2 max-pooling layers to reduce spatial dimensions. After feature extraction, the output is flattened and passed through three fully connected layers, with the final layer using a sigmoid activation to yield a binary classification output. The network contains approximately 16,825 trainable parameters and was trained using binary cross-entropy loss with the Adadelta optimizer [17]. The shallow depth and relatively low parameter count make it suitable for deployment in embedded systems.
To meet the requirements of real-time, low-power deployment on FPGA hardware, a compact convolutional neural network using the Brevitas quantization-aware training framework [18] was designed. The architecture is specifically tailored to process input images of size 16 × 16 pixels, matching the resolution of the sensor used in the test system. All weights, activations, and biases are quantized to 8-bit fixed-point integers, ensuring full compatibility with hardware accelerators and enabling highly efficient inference on the Xilinx Zynq Ultrascale+ FPGA platform. Similar FPGA-based quantization-aware approaches have been explored in other contexts, such as the integer-arithmetic-oriented architecture proposed in [13], which demonstrated the efficiency of tailored quantized layers for high-throughput edge AI applications. While their case study focused on peak detection in industrial vision, the underlying concepts of bit-level optimization and streaming architectures are closely aligned with the strategy adopted in this work. The Convolutional Neural Network structure of the Stack-CNN, shown in Figure 1, consists of two convolutional blocks, each comprising a convolutional layer with ReLU activation and 2 × 2 max-pooling, followed by a flattening operation and two fully connected layers. The final layer applies a sigmoid activation function for binary classification. A detailed breakdown of the number of parameters in each layer is provided in Table 1, confirming the suitability of the network for low-resource embedded systems. In total, the model comprises 37,697 parameters, allowing for fast and lightweight inference.
The first fully connected layer receives 512 input features (32 channels × 4 × 4 spatial positions = 512). All layers use 8-bit quantized weights and activations, and biases are quantized using the Int8Bias strategy. The model is exported to ONNX format [20] and compiled for hardware deployment using the Vitis AI toolchain [21], which integrates quantized inference engines with FPGA-accelerated pipelines. While the total number of parameters in our quantized CNN architecture exceeds that of the original Stack-CNN presented in [6], this increase is the result of deliberate architectural choices tailored for efficient FPGA deployment. Our model was designed and trained using quantization-aware training (QAT) via the Brevitas framework [18], ensuring that all weights, activations, and biases are represented as 8-bit fixed-point integers. This format is ideal for low-power inference on FPGA devices, as it minimizes memory bandwidth and computation overhead compared to floating-point implementations.
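As a consistency check, the per-layer parameter count can be reproduced in a few lines. The 3 × 3 kernels, the 32 second-layer channels, and the 512→64 dense layer are stated in the text; the 16 first-layer channels are an assumption chosen so that the totals match the reported 37,697 parameters.

```python
# Per-layer parameter count for the quantized Stack-CNN described above.
# NOTE: the 16 first-layer filters are an assumption consistent with the
# stated total of 37,697; the other dimensions come from the text.

def conv_params(c_in, c_out, k=3):
    return c_in * c_out * k * k + c_out      # weights + biases

def fc_params(n_in, n_out):
    return n_in * n_out + n_out              # weights + biases

layers = {
    "conv1 (1->16, 3x3)":  conv_params(1, 16),
    "conv2 (16->32, 3x3)": conv_params(16, 32),
    "fc1 (512->64)":       fc_params(512, 64),   # 32 ch x 4 x 4 = 512 inputs
    "fc2 (64->1)":         fc_params(64, 1),
}

total = sum(layers.values())
print(total)  # 37697
```

As expected, the two fully connected layers dominate the budget (32,897 of 37,697 parameters), which motivates the pruning and global-average-pooling refinements discussed below.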
The original Stack-CNN model, although smaller in total parameter count, includes multiple dense layers and was trained using standard 32-bit precision. Its design does not incorporate quantization, which limits its direct applicability to resource-constrained environments such as embedded FPGAs, where floating-point arithmetic can significantly degrade throughput and increase power consumption.
In contrast, our architecture emphasizes structural regularity, with consistent convolutional blocks and minimal fully connected layers. This organization allows for better pipelining and parallelization on the FPGA fabric. Furthermore, by using smaller input dimensions ( 16 × 16 ) and keeping kernel sizes uniform ( 3 × 3 ), our model facilitates efficient reuse of computation blocks and memory. Despite the presence of a relatively large fully connected layer (512→64), the quantized representation ensures that its resource impact remains acceptable. Empirical evaluations confirmed that our model not only meets real-time inference requirements but also achieves superior classification accuracy over the original Stack-CNN, particularly in high-noise and low-SNR conditions. This demonstrates that, when properly quantized and structured, even moderately larger networks can outperform smaller ones in hardware efficiency, as shown in Table 2.
The result is a compact, efficient network that maintains competitive detection performance while staying within the strict resource and latency constraints required by spaceborne CubeSat applications. While the current architecture is sufficient to meet our real-time performance requirements, we note that the fully connected layers still account for the majority of the model’s parameters. This design choice was made to preserve classification accuracy while keeping the network shallow and FPGA-friendly. However, further architectural refinements may be pursued in future work to improve resource efficiency. In particular, replacing the final dense layers with a global average pooling layer or applying parameter pruning techniques could reduce the model size and latency with minimal impact on performance.

2.2. FPGA Implementation

The deployment of the quantized CNN model on the Xilinx Ultrascale+ ZCU104 FPGA board was carried out through a streamlined workflow built entirely within the Vitis AI framework. This section provides an overview of the design flow, toolchain components, and the quantization strategy adopted to optimize the model for efficient inference on low-power hardware.

2.2.1. Design Flow and Toolchain

The complete FPGA implementation of the Stack-CNN model was developed using the Vitis AI framework [21], which provides a comprehensive toolchain for deploying deep learning models on Xilinx devices. The process begins with training the quantized CNN in PyTorch using the Brevitas library [18], which supports quantization-aware training (QAT) and fixed-point simulation. Once training is complete, the model is exported to the ONNX format [20] and then passed through the Vitis AI Quantizer and Compiler. The quantizer finalizes the 8-bit fixed-point representation, ensuring compatibility with the Deep Processing Unit (DPU) available on the Zynq Ultrascale+ platform. The compiler maps the network to the target DPU configuration and generates an optimized model for inference. The bitstream and AI artifacts are then integrated into the overall firmware project using PetaLinux tools and deployed to the ZCU104 evaluation board. Runtime inference is executed via the Vitis AI Runtime API (VART), enabling tight coupling between software and FPGA hardware acceleration.

2.2.2. Model Optimization and Quantization

To ensure optimal performance on embedded hardware, the model was trained using quantization-aware training (QAT), which simulates reduced precision during both forward and backward passes. This was accomplished using Brevitas, which provides drop-in quantized modules for PyTorch while preserving compatibility with the Xilinx Vitis AI stack. All weights, activations, and biases were quantized to 8-bit integers, significantly reducing memory usage and allowing efficient mapping to FPGA resources such as LUTs, DSP slices, and BRAMs. Notably, the quantization-aware training approach preserves model accuracy, avoiding the degradation typically observed in post-training quantization workflows. The quantized model was exported in ONNX format and processed by the Vitis AI Quantizer to apply further optimizations. These include weight folding, batch normalization absorption, and removal of unused graph operations. The resulting model was compiled into a hardware-friendly format suitable for deployment on the ZCU104’s DPU core, ensuring real-time inference with minimal latency and power consumption. The structural and export details of the quantized Stack-CNN are summarized in Table 3.
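The core operation that QAT simulates during the forward pass is a quantize–dequantize round trip on weights and activations. The sketch below shows a generic symmetric 8-bit fake quantization in NumPy; it illustrates the principle only and is not the Brevitas implementation.

```python
import numpy as np

def fake_quant_int8(w):
    """Symmetric 8-bit fake quantization: quantize to int8, then dequantize.

    QAT inserts this round trip in the forward pass so the network learns
    weights that survive conversion to fixed point.  (Generic sketch, not
    the Brevitas internals.)
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q.astype(float) * scale, q

rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, size=(16, 3, 3))       # e.g. a bank of 3x3 conv kernels
w_dq, w_int8 = fake_quant_int8(w)

# The round-trip error is bounded by half a quantization step,
# which is what keeps post-deployment accuracy close to training accuracy.
step = np.max(np.abs(w)) / 127.0
```

Because the training loss already "sees" this rounding error, the optimizer compensates for it, which is why QAT avoids the accuracy drop typical of post-training quantization.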

2.3. Simulation Framework for Evaluating Detection Efficiency of the Quantized Stack-CNN Algorithm

To support the development and evaluation of the implemented Stack-CNN architecture for space debris (SD) detection, we developed a dedicated simulation framework designed to reproduce realistic observation scenarios under controlled conditions. The framework generates synthetic datasets that emulate the optical detection of small orbital debris fragments as they traverse a 16 × 16 pixel sensor, matching the resolution of the deployed hardware platform. Each simulated sequence consists of 128 frames, each lasting 50 ms, with fixed spatial resolution (16 × 16 pixels). The background signal is modeled as a strong Poissonian noise source centered around 20,000 photons per pixel per frame, reflecting typical conditions encountered on space-based or high-altitude platforms. A single moving object, representing a debris fragment, is injected into each sequence with randomized direction, speed, and intensity. The object's trajectory, size, and brightness are parameterized based on its physical dimensions (1–20 cm in diameter) and distance from the observer (10–100 km). Motion starts randomly between frame 6 and frame 20, entering from one of the four edges, and proceeds with velocities that match the 3D positioning of the object in the simulated field of view of the detector. To simulate realistic optical signatures, we modeled the interaction between solar UV radiation and the surface of the debris. A typical fragment with a radius of 0.1 m and an albedo of 0.1, when illuminated by solar photons in the 300–400 nm band, receives an incident flux of approximately 10^20 photons·m^−2·s^−1. Assuming Lambertian reflection over a hemisphere, the reflected flux becomes approximately 3 × 10^17 photons·m^−2·s^−1. Only a fraction of these photons reaches the detector, depending on the focal surface area, the distance to the object, and the phase angle between the incident sunlight and the observer.
The latter can be properly chosen, according to the orbital configuration, in order to enhance the amount of reflected light reaching the focal plane. The resulting photon flux at the sensor (PhFS) scales with the ratio A_FS / (½ · A_R), linking the physical parameters of the system to the signal level observed on the focal surface. The spatial distribution of the signal on the sensor is shaped using a custom point spread function (PSF) that mimics the blurring effects of optical systems. Approximately 40% of the signal energy is assigned to the central pixel, while the remainder is distributed to neighboring pixels according to a physically inspired kernel. This configuration allows the simulation to account for realistic signal shapes and sub-pixel movements. Figure 2 shows an example of frames generated by this simulation framework.
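A single simulated sequence along these lines can be sketched as follows. Only the 16 × 16 × 128 geometry, the 20,000-photon background, and the 40% central-pixel PSF fraction are taken from the text; the signal level, entry point, velocity, and the neighbor weights of the PSF kernel are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)

H = W = 16
N_FRAMES = 128
MU_BKG = 20_000            # photons / pixel / frame (Poissonian background)
SIGNAL = 2_000             # total photons/frame from the object (placeholder)

# Illustrative PSF: ~40% of the energy in the central pixel, the rest
# spread uniformly over the 8 neighbours (the real kernel is physics-inspired).
psf = np.full((3, 3), 0.6 / 8.0)
psf[1, 1] = 0.4

frames = rng.poisson(MU_BKG, size=(N_FRAMES, H, W)).astype(float)

t0 = 10                    # object enters at frame 10 (text: frames 6-20)
vx, vy = 0.1, 0.05         # apparent velocity in pixels/frame (placeholder)
x0, y0 = 1, 8              # entry point near the left edge
for t in range(t0, N_FRAMES):
    x = int(round(x0 + vx * (t - t0)))
    y = int(round(y0 + vy * (t - t0)))
    if 1 <= x < W - 1 and 1 <= y < H - 1:
        frames[t, y - 1:y + 2, x - 1:x + 2] += SIGNAL * psf
```

Sequences like this one, swept over size, distance, and velocity, form the datasets described in the campaign below.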
In total, the simulation campaign spans 15 discrete object sizes, 10 different distances, and 10 unique velocity profiles per distance, with 100 randomized realizations for each configuration. This results in a comprehensive dataset suitable for training, validating, and benchmarking detection algorithms, particularly under low signal-to-noise ratio and transient conditions that challenge traditional techniques. While the classification model was trained and validated on a dataset of 10,000 labeled sequences with a balanced 70/30 split between training and test sets, a significantly larger simulation campaign was conducted to evaluate the detection efficiency of the final quantized model. This extended dataset covers more than 15,000 unique physical configurations, generated by varying object dimensions (1–20 cm), distances (10–100 km), and velocity profiles. For each configuration, 100 randomized realizations were produced by altering entry direction, photon signal levels, and noise seeds. In total, over 1.5 million simulated sequences were processed by the trained model to characterize its detection performance across the parameter space. The resulting signal-to-background ratios ranged from 1% to 30%, and the trigger performance was recorded in terms of successful event identification as a function of object size and distance. To support reproducibility and future benchmarking, all simulation data are stored in structured files (representing the full time sequence of photon-normalized frames) with metadata embedded in filenames. Aggregated detection statistics, physical parameters, and trigger responses are stored in tabular format for traceability and post-processing. This two-step approach allows both robust training and comprehensive evaluation of detection sensitivity under realistic and varied observation scenarios.

2.4. Prototype Detector Description

This section outlines the experimental setup employed to validate the Stack-CNN algorithm, with a focus on the structure and functionality of the detector prototype. The design and components are derived from the JEM-EUSO collaboration and include a Photon Detection Module (PDM) based on an Elementary Cell (EC) unit. The detection system consists of a 16 × 16 pixel photomultiplier array coupled with a Fresnel lens, forming the core of the optical assembly. Each part is integrated into a mechanical structure developed in collaboration with INFN Turin (Figure 3).
This iteration of the prototype detector, shown in Figure 3, comprises the following sections:
(a)
Front section: This contains the optical system, which includes a 25 cm diameter Fresnel lens.
(b)
Back section: This part houses all the connectors used to communicate with the electronics inside.
(c)
Inner section: This is the most relevant part; it houses the EC with four photomultiplier tubes arranged in a 16 × 16 pixel matrix. Located behind the photomultipliers are four custom ASICs developed by the JEM-EUSO program [22], which are responsible for converting the analog signals from the photomultipliers into digital signals. Behind the PDM are the electronic components of the data acquisition system, including two FPGA boards (Xilinx Zynq-7000 and Xilinx Artix-7), as well as the high-voltage power supply board that provides the necessary voltage for photomultiplier operation.

Observation Conditions and Data Collection

To validate the overall acquisition system and evaluate the performance of the Stack-CNN algorithm, several outdoor observation campaigns were conducted. These campaigns were carried out during sunset to optimize conditions for detecting satellite trajectories, taking advantage of the low background light and the period during which satellites are still illuminated by the Sun. A 10 Micron LX2000 telescopic mount was used to ensure precise alignment with known satellite trajectories, thereby enabling accurate calibration of the detector system. This configuration facilitated the reliable association of detected signals with known orbital objects. The primary objective of these campaigns was to identify the specific satellites under observation, enabling the determination of their physical dimensions and apparent magnitudes, which are crucial for assessing the detection capabilities of the system under realistic observational conditions. Figure 4 shows an example of three frames acquired during one of these campaigns, where the trajectory of a satellite is visible crossing the sensor’s field of view.

3. Results

This section reports the main results of this work.

3.1. Algorithm Profiling Results

A critical aspect of validating the implemented Stack-CNN algorithm was the assessment of its execution performance under realistic data streams. This profiling analysis aimed to confirm that the model could reliably operate within the real-time constraints required for onboard triggering and acquisition. The following results illustrate the temporal behavior of the processing pipeline across different input conditions. Figure 5 reports one of the profiling studies conducted during this validation phase. In this experiment, two configurations were evaluated: processing sequences containing only background noise (blue curve) and sequences containing both background and simulated debris traces (purple curve). For each package of input frames, the total execution time was measured and plotted as a function of the package ID. The results show that the inference time remained stable and well below the imposed processing time limit of 6.4 s per package (indicated by the red line), which corresponds to processing 128 frames at 50 ms per frame. When only background data was present, the execution time fluctuated around 4 s per package, reflecting the baseline computational load of the stacking and CNN processing stages. When debris traces were included, execution time exhibited a slight decrease and higher variability, with values typically between 2.5 and 3.5 s. This variability is attributed to the dynamic activation patterns in the CNN layers when debris-like features were detected. Overall, the profiling confirmed that the implemented pipeline met the real-time constraints required for onboard operation, leaving sufficient margin for additional pre-processing or communication overhead if necessary.
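The per-package timing check behind Figure 5 can be sketched as below. The 6.4 s budget follows from 128 frames at 50 ms each, as stated in the text; the processing function here is a trivial stand-in for the actual stacking and CNN inference pipeline.

```python
import time

FRAME_PERIOD_S = 0.050          # 50 ms per frame
FRAMES_PER_PACKAGE = 128
TIME_BUDGET_S = FRAME_PERIOD_S * FRAMES_PER_PACKAGE   # 6.4 s real-time limit

def process_package(frames):
    """Stand-in for the stacking + CNN inference stages (placeholder work)."""
    return sum(frames) % 2      # dummy computation returning a fake trigger flag

def profile(packages):
    """Measure wall-clock execution time for each input package."""
    timings = []
    for frames in packages:
        t_start = time.perf_counter()
        process_package(frames)
        timings.append(time.perf_counter() - t_start)
    return timings

timings = profile([list(range(FRAMES_PER_PACKAGE)) for _ in range(10)])
real_time_ok = all(t < TIME_BUDGET_S for t in timings)
```

A pipeline passes the real-time requirement when every package finishes inside the budget, leaving headroom for pre-processing and communication overhead.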

3.2. Stack-CNN Performance: Simulation Framework

To quantify the impact of quantization on model performance, we evaluated both the original 32-bit floating-point Stack-CNN model [6] and its quantized 8-bit version implemented on FPGA.
As reported in Table 4, the quantized model achieves classification performance comparable to the original floating-point implementation. The reductions in accuracy, precision, and recall are all below 0.1%, demonstrating the effectiveness and stability of the quantization-aware training approach. Notably, the false positive rate remains under 0.07%, aligning with the stringent requirements for reliable real-time space debris detection in embedded systems. These results demonstrate that the quantized Stack-CNN is suitable for deployment in resource-constrained environments, with minimal compromise in detection efficiency. Preliminary results on detection efficiency for 10 cm diameter debris are shown in Figure 6. In this test, the Stack-CNN algorithm was executed directly on the FPGA, using the quantized model deployed on the DPU of the ZCU104 board. Simulated input sequences were streamed to the board via a custom SSH-based interface, designed to emulate real-time telemetry from an onboard optical sensor. These results were obtained by leveraging the simulation framework described in Section 2.3 to generate large datasets representing debris objects of fixed 10 cm diameter at varying distances. For each distance bin, a set of 100 sequences was produced, with randomized brightness, entry direction, and motion parameters. The full set of generated sequences was streamed to the deployed model running on hardware, and the number of correctly triggered events was recorded. This allowed us to compute the detection efficiency as the fraction of input sequences that generated a positive trigger response on the FPGA, mimicking the behavior of an actual onboard trigger system.
Figure 6 reports the trigger efficiency as a function of binned distance intervals from 0 to 100 km. The Stack-CNN trigger achieves 80% efficiency in the 0–10 km bin, with a gradual decline to 65% and 62% in the 10–20 km and 20–30 km bins, respectively. The efficiency drops to 49% between 30–40 km, and further to 29% between 40–50 km. Beyond this range, performance sharply degrades, with negligible detection above 50 km. These results highlight the distance-dependent sensitivity of the algorithm, driven by the decreasing signal-to-background ratio and the limitations imposed by the current optics and background levels. This binned analysis provides a consolidated overview of the trigger’s operating range, guiding further optimization of the algorithm and its training procedures to enhance detection capability at extended distances.
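The binned efficiency reported in Figure 6 is simply the triggered fraction per distance bin. The sketch below computes it from illustrative, randomly generated trigger outcomes; the distance-dependent trigger probability used here is a placeholder, not the measured response.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative outcomes: distances and trigger flags for 1000 sequences.
# The decreasing trigger probability with distance is a placeholder model.
distances = rng.uniform(0, 100, size=1000)                  # km
triggered = rng.random(1000) < np.clip(1.0 - distances / 60.0, 0, 1)

# Efficiency per 10 km bin = triggered fraction within that bin.
bins = np.arange(0, 110, 10)
bin_idx = np.digitize(distances, bins) - 1
efficiency = np.array([
    triggered[bin_idx == i].mean() if np.any(bin_idx == i) else 0.0
    for i in range(len(bins) - 1)
])
```

Plotting `efficiency` against the bin centers reproduces the shape of a Figure 6-style curve: high efficiency at short range, falling off as the signal-to-background ratio drops with distance.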

3.3. Stack-CNN Performance: Experimental Campaigns

To assess the performance of the implemented Stack-CNN algorithm under realistic observational conditions, experimental campaigns were conducted using the prototype detector. A dataset comprising seventeen satellite passages recorded during twilight was assembled, a condition that offers favorable contrast between reflected sunlight and the diffuse sky background. Figure 7 presents the continuous light curve recorded during one of the observational campaigns conducted near Turin at sunset. The gradual decline in background signal across all pixels clearly reflects the natural reduction in ambient illumination typical of twilight. Several localized peaks, simultaneously visible across multiple adjacent channels, correspond to the transit of bright astronomical sources, most notably the star Capella. Additionally, multiple satellite passages were identified throughout the acquisition, including during the high-background phase at the beginning of the session. These events confirm the capability of the system to detect moving objects under varying light conditions. To further enhance robustness against varying background levels, the detection pipeline includes a preprocessing stage based on a moving median filter and pixel-wise normalization. This step effectively compensates for both temporal and spatial background variations, including non-uniformities between different sensor channels. Specifically, each frame is normalized by the median computed over a 9-frame temporal window centered on the current frame. This approach reduces low-frequency drifts and suppresses fixed-pattern noise, thereby emphasizing transient features such as moving objects. Despite inherent differences in baseline intensity across channels, often reaching a factor of 2–3 under optimal conditions, the Stack-CNN remains capable of successfully identifying weak signals. This highlights the robustness of the system to inhomogeneous observational conditions, enabled by the combined effect of preprocessing and learned spatio-temporal filters in the CNN.
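The preprocessing step described above can be sketched as follows. This is one plausible reading of the 9-frame moving-median normalization, namely a per-pixel temporal median with the window truncated at the sequence edges; the flight implementation may differ in detail (e.g., the paper's wording could also denote a per-frame scalar median).

```python
import numpy as np

def median_normalize(frames, window=9):
    """Pixel-wise moving-median normalization: each frame is divided by
    the per-pixel median of a `window`-frame temporal window centered
    on it (window truncated at the sequence edges).

    frames: array of shape (T, H, W)."""
    frames = np.asarray(frames, dtype=float)
    half = window // 2
    out = np.empty_like(frames)
    for t in range(frames.shape[0]):
        lo, hi = max(0, t - half), min(frames.shape[0], t + half + 1)
        med = np.median(frames[lo:hi], axis=0)      # per-pixel temporal median
        out[t] = frames[t] / np.maximum(med, 1e-9)  # guard against zero background
    return out
```

On a slowly fading twilight background, the per-pixel medians track the baseline, so the normalized output stays near unity everywhere except on transient features such as moving objects, which survive as localized excesses.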
Figure 8 shows the measured signal over background (SOB) for each satellite event recorded. The percentage SOB varied between 1.0% and 8.8%, with a mean of approximately 4.0%. A horizontal red line indicates this value, while a blue line marks the effective experimental detection threshold, set at 1.5%. Notably, several passages exceed 4% over background, confirming the capability of the Stack-CNN trigger to confidently detect faint moving sources under realistic conditions.
The variability observed across events is attributable to differences in apparent magnitude, distance, and the reflective properties of the satellites. Events below the 1.5% threshold represent the lowest detectable contrast achievable with the current optical configuration and background suppression. To identify the observed satellites, acquisition times and pointing coordinates were cross-referenced with orbital data from the Heavens-Above satellite tracking service [23]. This process allowed the trajectories of most targets to be matched with a high degree of confidence, while in some cases, minor discrepancies between predicted and observed positions introduced residual uncertainty. One of the brighter passages was tentatively attributed to Starlink-5172, based on close agreement in timing and motion patterns, although an exact identification could not be fully confirmed. Overall, the satellites in this group exhibited similar apparent trajectories and brightness profiles, supporting the hypothesis that they belonged to the Starlink constellation, whose units share nearly identical physical dimensions and reflective characteristics [24]. For this candidate passage, the reference target has nominal dimensions of approximately 2.8 × 1.4 × 0.2 m and a deployed solar array length near 8 m, resulting in an effective reflective area of about 8 m². While uncertainties remain regarding the precise orientation and illumination geometry relative to the observer, the signal intensity was averaged over the entire passage to mitigate these effects. Based on this measurement, the experimental detection limit of the system was estimated to be approximately equivalent to an object of 0.35 m diameter observed at 100 km, providing a practical, though approximate, benchmark for the minimum detectable debris size under comparable conditions.
These results validate the feasibility of operating the Stack-CNN algorithm in real-time onboard scenarios, offering robust detection performance for targets with low contrast over the background. Ongoing analysis of additional satellite passes, including fainter or less well-characterized objects, will further refine the estimation of the system’s detection limits and improve the characterization of sensitivity under varying observation conditions.
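The size extrapolation above rests on the assumption that the received signal scales with the object's reflective area and inversely with the square of its distance. A minimal sketch of that scaling is given below; all reference values are passed in as parameters, and the numbers used in the usage comment are self-consistency checks, not the paper's measured quantities.

```python
import math

def equivalent_diameter_m(area_ref_m2, dist_ref_km, sob_ref,
                          dist_target_km, sob_threshold):
    """Diameter of a circular, equally reflective object at
    dist_target_km that would produce sob_threshold, assuming the
    received signal scales as reflective_area / distance**2."""
    area_eq = (area_ref_m2 * (sob_threshold / sob_ref)
               * (dist_target_km / dist_ref_km) ** 2)
    return 2.0 * math.sqrt(area_eq / math.pi)

# Sanity check: at the reference range and reference SOB, the
# equivalent diameter is that of a disk with the reference area.
d = equivalent_diameter_m(area_ref_m2=8.0, dist_ref_km=900.0, sob_ref=0.04,
                          dist_target_km=900.0, sob_threshold=0.04)
assert abs(d - 2.0 * math.sqrt(8.0 / math.pi)) < 1e-12
```

With the measured reference passage and the 1.5% experimental threshold plugged in, this kind of scaling yields the order-of-magnitude estimate quoted in the text (roughly a 0.35 m object at 100 km), subject to the stated uncertainties in orientation and illumination geometry.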

4. Discussion

The results presented in this work demonstrate that the Stack-CNN algorithm, when deployed on an FPGA-based hardware platform, meets the real-time processing requirements essential for onboard detection of small orbital debris. The profiling results confirmed that the inference pipeline maintained stable execution times well below the operational limit of 6.4 s per data package, even when processing sequences containing debris-like traces. This finding supports the initial working hypothesis that a quantized CNN, optimized for resource-constrained devices, could be integrated into an autonomous trigger system without exceeding the temporal constraints typically encountered in space-based observation platforms.
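The profiling described above can be reproduced in outline with a simple wall-clock timing loop against the 6.4 s budget. The callable and the toy packages below are stand-ins for the actual DPU inference call and data stream, which are not reproduced here.

```python
import time

DEADLINE_S = 6.4  # maximum allowable processing time per data package

def profile_inference(run_inference, packages):
    """Measure per-package wall-clock latency of an inference callable
    and report whether every package met the deadline."""
    latencies = []
    for pkg in packages:
        t0 = time.perf_counter()
        run_inference(pkg)
        latencies.append(time.perf_counter() - t0)
    worst = max(latencies)
    return latencies, worst, worst <= DEADLINE_S

# Toy stand-in for the FPGA inference call
lat, worst, ok = profile_inference(lambda pkg: sum(pkg), [[1, 2, 3]] * 5)
print(ok)  # True for this trivial workload
```

Plotting the per-package latencies separately for background-only and debris-containing sequences would reproduce the two execution-time curves of Figure 5 against the red deadline line.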
The detection efficiency analysis, based on synthetic datasets generated with a physics-based simulation framework, revealed that the Stack-CNN algorithm maintains high detection rates for 10 cm debris up to approximately 30 km, with efficiencies exceeding 60%. A progressive decline is observed beyond this range, with efficiency dropping below 50% in the 30–40 km bin and falling to 29% in the 40–50 km bin. Detection capability becomes negligible above 50 km. This behavior reflects the expected decrease in signal-to-background ratio with increasing distance, consistent with observation geometry and the optical limitations of the system. These results align with prior findings from space-based missions such as Mini-EUSO [15], which emphasized the importance of photon statistics and contrast levels when detecting faint objects against diffuse background illumination.
To further assess the robustness of the classification, the quantized Stack-CNN model was evaluated on a balanced dataset of 3000 simulated stacked images. The resulting confusion matrix, shown in Table 5, confirms excellent discriminative performance, with only one false positive and two false negatives. The corresponding false positive rate (0.07%), precision (99.93%), and F1 score (0.9990) demonstrate that the trigger remains both sensitive and highly selective, a critical requirement for onboard autonomous operation in space-based platforms.
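The reported figures follow directly from the confusion-matrix counts in Table 5; a minimal check:

```python
def binary_metrics(tn, fp, fn, tp):
    """Precision, recall, F1 score and false-positive rate from
    binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)
    return precision, recall, f1, fpr

# Counts from Table 5 (quantized model, 3000 balanced test samples)
p, r, f1, fpr = binary_metrics(tn=1499, fp=1, fn=2, tp=1498)
# precision ≈ 99.93%, recall ≈ 99.87%, F1 ≈ 0.9990, FPR ≈ 0.07%
print(f"precision={p:.4%} recall={r:.4%} F1={f1:.4f} FPR={fpr:.4%}")
```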
Importantly, the results of both the simulation campaigns and the experimental observations consistently support the robustness of the detection system under varying background conditions. The simulation framework explicitly models a wide range of signal-to-background ratios (from 1% to 30%) by adjusting object brightness, distances, and trajectory geometry, enabling us to quantify the degradation of detection efficiency as a function of SNR. These results show that performance degrades gracefully, in accordance with expectations from photon statistics and optical system limitations. This trend is also confirmed by real-world observational data. A set of seventeen satellite passages recorded under twilight conditions revealed signal-over-background values ranging from 1% to 9%, with a mean of approximately 4%, in close agreement with simulated thresholds. Although precise identification of targets was constrained by uncertainties in orbital data, analysis of trajectory and light profiles suggested that most of the detections corresponded to Starlink satellites [24]. Notably, one candidate event tentatively attributed to Starlink-5172 enabled extrapolation to a minimum detectable size of approximately 0.35 m at 100 km in typical atmospheric-background conditions. This convergence between simulation and experimental performance highlights the ability of the system to operate effectively across a wide range of observational scenarios and reinforces the importance of modeling illumination and observation geometry in future optimization efforts. From a technical perspective, the use of FPGA acceleration to execute quantized convolutional neural networks in real time builds on recent advances in edge computing for space applications [25]. 
This approach demonstrates that even within the strict power and resource constraints typical of small satellite platforms, it is possible to implement sophisticated detection algorithms capable of operating autonomously without ground intervention.
Nevertheless, some limitations were identified. The sensitivity threshold observed in the 40–50 km range, as well as the variability in measured signal levels across different passes, suggests that further improvements in model training, quantization strategies, and pre-processing techniques will be necessary to extend the detection envelope. In particular, refining the simulation framework to include more complex background variability, reflectivity profiles, and orbital data uncertainties could improve generalization under operational conditions. Additionally, the implementation of adaptive thresholding or uncertainty estimation within the inference pipeline may help mitigate the observed decline in detection efficiency for low-contrast scenarios. Future research should also explore the integration of multi-modal sensor data, such as combining optical and radar measurements, to improve robustness and reduce false positive rates. Moreover, advances in high-efficiency photodetectors and low-noise optical designs may contribute to increasing the signal-to-background ratio, thereby enabling detection of even smaller debris fragments at greater distances.

5. Conclusions

From a broader perspective, this work underlines the feasibility and importance of deploying embedded, autonomous detection systems to improve space situational awareness capabilities. At present, a large portion of the debris population below 20 cm is not systematically measured but instead modeled using fragmentation scenarios and propagation tools such as ESA’s MASTER model [26] and NASA’s ORDEM [27]. The lack of direct observational data for this size regime significantly constrains the validation and refinement of these models, introducing uncertainty into collision risk assessment and debris mitigation planning [28].
The results of the experimental campaigns presented here demonstrate that the Stack-CNN algorithm, implemented on FPGA hardware, can detect and characterize low-contrast moving targets in realistic observation conditions. The measured signal levels across seventeen satellite passages confirm that the system can reliably identify objects whose reflected signals exceed approximately 1.5% over background, with an estimated detection limit equivalent to a debris fragment of around 0.35 m diameter observed at 100 km. This constitutes a practical benchmark for the sensitivity achievable with current optics and embedded processing, bridging the gap between laboratory validation and operational feasibility. The simulation-based performance study further reinforces these findings, showing that the algorithm maintains above 60% trigger efficiency for 10 cm debris up to 30 km, with detection capability rapidly declining beyond 40 km due to geometric and photometric constraints. These synthetic results complement the experimental data, providing a robust framework for sensitivity estimation under controlled conditions and enabling optimization of the system design.
By demonstrating the capability to process data streams in real time on resource-limited platforms, the approach presented here will contribute to closing the critical gap between modeled and observed debris populations. The combination of FPGA-based acceleration, simulation-driven training, and field validation using actual satellite observations provides a concrete pathway toward operational systems capable of monitoring debris populations that have so far remained below the detection threshold of conventional ground-based radars.
In a rapidly evolving orbital environment characterized by the proliferation of mega-constellations and increasing traffic in low Earth orbit, the ability to autonomously detect and respond to small debris represents a strategic capability for ensuring the long-term sustainability of space activities. Continued development in this direction should focus on enhancing detection sensitivity, improving the robustness of satellite identification, integrating complementary sensing modalities, and validating performance in extended observation campaigns. This work thus lays the foundation for future research and practical deployments that can help transform our understanding of the near-Earth debris environment from model-dominated predictions to direct measurement-based monitoring, ultimately supporting safer and more reliable operations in space.

6. Patents

The work presented in this manuscript did not result in the filing of any new patents. However, it constitutes a continuation and technical advancement of a methodology previously protected under the Italian industrial invention patent no. 102021000009845, titled “Metodo e relativo sistema per rilevare oggetti nel campo visivo di un dispositivo di rilevamento ottico” (“Method and related system for detecting objects in the field of view of an optical detection device”), filed on 19 April 2021, and granted on 8 May 2023. This patent was registered prior to the beginning of the activities reported in this article and forms the technological basis upon which the current study was developed.

Author Contributions

Conceptualization, M.E.B.; methodology, M.A., F.R., A.M. and A.G.C.; software, M.A., A.F. and A.G.C.; validation, M.A., F.R. and A.F.; formal analysis, M.A.; investigation, M.A. and F.R.; resources, M.A. and F.R.; data curation, M.A.; writing—original draft preparation, M.A.; writing—review and editing, M.A., F.R. and M.E.B.; visualization, M.A.; supervision, M.E.B.; project administration, M.E.B.; funding acquisition, M.E.B., R.S. and R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the NODES Programme (DISCARD Project, Grant agreement no. ECS00000036), supported by the Italian Ministry of University and Research (MUR) under Mission 4, Component 2, Investment 1.5 of the Italian National Recovery and Resilience Plan (PNRR), funded by the European Union–NextGenerationEU. The APC was funded by the same program.

Data Availability Statement

The data supporting the findings of this study are not publicly available due to confidentiality agreements within ongoing research programs. Access to the datasets may be granted upon reasonable request and with permission of the involved institutions.

Acknowledgments

The authors acknowledge the JEM-EUSO collaboration for providing key components of the prototype detector, including parts of the acquisition electronics and access to tested firmware and software tools. The collaboration also offered technical expertise and support during the development and validation phases. Additional components, laboratory facilities, and engineering support were provided by INFN (Istituto Nazionale di Fisica Nucleare), whose contribution was essential for the integration and testing of the detector prototype. The authors also thank the Department of Electronics and Telecommunications (DET) at Politecnico di Torino for the support provided during Matteo Abrate’s master thesis work, which laid the groundwork for the present study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DISCARD: Stack-CNN Demonstrator: AI Algorithm for Space Debris Detection
FPGA: Field Programmable Gate Array
CNN: Convolutional Neural Network
AI: Artificial Intelligence
LEO: Low Earth Orbit
NASA: National Aeronautics and Space Administration
ESA: European Space Agency
GTU: Gate Time Unit
JEM-EUSO: Joint Exploratory Missions for Extreme Universe Space Observatory
SBR: Signal-to-Background Ratio
GEO: Geostationary Earth Orbit
FoV: Field of View
Mini-EUSO: Multiwavelength Imaging New Instrument for the Extreme Universe Space Observatory
ReLU: Rectified Linear Unit
SNR: Signal-to-Noise Ratio
VART: Vitis AI Runtime
API: Application Programming Interface
QAT: Quantization-Aware Training
BRAM: Block Random Access Memory
DSP: Digital Signal Processor
LUT: Look-Up Table
ONNX: Open Neural Network Exchange
SD: Space Debris
PhFS: Photon rate at Focal Surface
FS: Focal Surface
EC: Elementary Cell
ASIC: Application-Specific Integrated Circuit
PDM: Photon Detection Module
SSH: Secure Shell
DPU: Deep Learning Processing Unit

References

1. European Space Agency (ESA). Space Debris by the Numbers. 2023. Available online: https://www.esa.int/Safety_Security/Space_Debris/Space_debris_by_the_numbers (accessed on 1 June 2025).
2. NASA Orbital Debris Program Office. Orbital Debris FAQs. 2022. Available online: https://orbitaldebris.jsc.nasa.gov/faq/ (accessed on 1 June 2025).
3. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
4. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
5. Tao, J.; Cao, Y.; Ding, M. SDebrisNet: A Spatial–Temporal Saliency Network for Space Debris Detection. Appl. Sci. 2023, 13, 4955.
6. Montanaro, A.; Ebisuzaki, T.; Bertaina, M. Stack-CNN algorithm: A new approach for the detection of space objects. J. Space Saf. Eng. 2022, 9, 72–82.
7. Yanagisawa, T.; Nakajima, A.; Kimura, T.; Isobe, T.; Futami, H.; Suzuki, M. Detection of small GEO debris by use of the stacking method. Trans. Jpn. Soc. Aeronaut. Space Sci. 2003, 51, 61–70.
8. Bertaina, M.; Ebisuzaki, T.; Hamada, T.; Ikeda, H.; Kawasaki, Y.; Sawabe, T.; Takahashi, Y. The trigger system of the JEM-EUSO project. In Proceedings of the 30th International Cosmic Ray Conference, Merida, Mexico, 3–11 July 2007.
9. Ghaffari, A.; Benabdenbi, M.; El Ghazi, H.; El Oualkadi, A. CNN2Gate: An implementation of convolutional neural networks inference on FPGAs with automated design space exploration. Electronics 2020, 9, 2200.
10. Pineau, D.; Felicetti, L. Design of an optical system for a Multi-CubeSats debris surveillance mission. Acta Astronaut. 2023, 210, 535–546.
11. Liu, L. Design of optical system for space-based space debris detection. In Proceedings of SPIE 13278, Seventh Global Intelligent Industry Conference (GIIC 2024), Shenzhen, China, 30 March–1 April 2024; p. 132781H.
12. Michel, M.; Ceeh, H.; Cirillo, G.; Utzmann, J.; Kraft, S. The Space-Based Optical Component (SBOC) instrument for passive optical in-situ detection of small space debris. In Proceedings of the 2nd International Orbital Debris Conference (IOC II), Sugar Land, TX, USA, 4–7 December 2023; p. 6130.
13. Pistellato, M.; Bergamasco, F.; Bigaglia, G.; Gasparetto, A.; Albarelli, A.; Boschetti, M.; Passerone, R. Quantization-Aware NN Layers with High-throughput FPGA Implementation for Edge AI. Sensors 2023, 23, 4667.
14. Olivi, L.; Montanaro, A.; Barbieri, C.; Maris, M.F.; Bertaina, M.; Ebisuzaki, T. Refined STACK-CNN for Meteor and Space Debris Detection in Highly Variable Backgrounds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 10432–10453.
15. Casolino, M.; Battisti, M.; Belov, A.; Bertaina, M.; Bisconti, F.; Blin-Bondil, S.; Cafagna, F.; Cambiè, G.; Capel, F.; Churilo, I.; et al. Mini-EUSO experiment to study UV emission of terrestrial and astrophysical origin onboard of the International Space Station. Proc. Sci. 2019, ICRC2019, 212.
16. Battisti, M.; Bertaina, M.; Parizot, E.; Abrate, M.; Barghini, D.; Belov, A.; Bisconti, F.; Blaksley, C.; Blin, S.; Capel, F.; et al. An end-to-end calibration of the Mini-EUSO detector in space. Astropart. Phys. 2025, 165, 103057.
17. Zeiler, M.D. ADADELTA: An Adaptive Learning Rate Method. arXiv 2012, arXiv:1212.5701.
18. Xilinx. Brevitas: Quantization-Aware Training in PyTorch. 2021. Available online: https://github.com/Xilinx/brevitas (accessed on 1 July 2025).
19. Lenail, A. NN-SVG: Publication-Ready Neural Network Architecture Schematics. 2019. Available online: https://github.com/alexlenail/NN-SVG (accessed on 1 July 2025).
20. ONNX Community. Open Neural Network Exchange (ONNX). 2019. Available online: https://onnx.ai (accessed on 1 July 2025).
21. AMD/Xilinx. Vitis AI: Development Environment for AI Inference on Xilinx Platforms. 2023. Available online: https://github.com/Xilinx/Vitis-AI (accessed on 1 July 2025).
22. Bacholle, S.; Barrillon, P.; Battisti, M.; Belov, A.; Bertaina, M.; Bisconti, F.; Blaksley, C.; Blin-Bondil, S.; Cafagna, F.; Cambiè, G.; et al. Mini-EUSO mission to study Earth UV emissions on board the ISS. Astrophys. J. Suppl. Ser. 2021, 253, 36.
23. Peat, C. Heavens-Above: Satellite Tracking. 2024. Available online: https://www.heavens-above.com (accessed on 4 July 2024).
24. SpaceX. Application for Fixed Satellite Service. FCC Filing SAT-LOA-20161115-00118. 2016. Available online: https://fcc.report/IBFS/SAT-LOA-20161115-00118 (accessed on 1 July 2025).
25. Reggiani, E.; Rabozzi, M.; Nestorov, A.M.; Scolari, A.; Stornaiuolo, L.; Santambrogio, M. Pareto optimal design space exploration for accelerated CNN on FPGA. In Proceedings of the 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Rio de Janeiro, Brazil, 20–24 May 2019; pp. 107–114.
26. Horstmann, A.; Wiedemann, C.; Lemmens, S.; Braun, V.; Manis, A.P.; Matney, M.; Gates, D.P.; Seago, J.; Vavrin, A.; Anz-Meador, P. Flux comparison of MASTER-8 and ORDEM 3.1 modelled space debris population. In Proceedings of the 8th European Conference on Space Debris, ESA Space Debris Office, Darmstadt, Germany, 20–23 April 2021; Volume 8, Issue 1. Available online: https://conference.sdo.esoc.esa.int/proceedings/sdc8/paper/11 (accessed on 1 July 2025).
27. Liou, J.C.; Johnson, N.L. Instability of the present LEO satellite populations. Adv. Space Res. 2008, 41, 1046–1053.
28. Braun, V.; Horstmann, A.; Lemmens, S.; Wiedemann, C.; Böttcher, L. Recent developments in space debris environment modelling, verification and validation with MASTER. In Proceedings of the 8th European Conference on Space Debris, ESA Space Debris Office, Darmstadt, Germany, 20–23 April 2021; Volume 8, Issue 1. Available online: https://conference.sdo.esoc.esa.int/proceedings/sdc8/paper/72 (accessed on 1 July 2025).
Figure 1. Schematic representation of the quantized CNN architecture implemented on FPGA. The model processes 16 × 16 stacked images and consists of two convolutional blocks (Conv + ReLU + MaxPool), followed by a flattening layer and two fully connected layers. A final sigmoid activation performs binary classification. This illustration was generated using the NN-SVG API [19].
Figure 2. Three frames showing a simulated 10 cm diameter debris object moving across a 16 × 16 pixel sensor at a distance of 30 km. The signal appears over a realistic background and shifts from frame to frame, mimicking motion.
Figure 3. Prototype Detector: (a) Front panel with Fresnel lens. (b) Back panel with external connectors. (c) Inner part showing the PDM, Zynq boards, and support electronics.
Figure 4. Three frames acquired during an experimental observation campaign near Turin, showing a satellite pass detected by the prototype system. The object’s signal is visible above the background and moves consistently across the sensor’s pixels from frame to frame.
Figure 5. Profiling study of the Stack-CNN algorithm implemented in FPGA. The blue curve shows execution time when processing background-only sequences, while the purple curve corresponds to sequences containing debris traces. The red line indicates the maximum allowable processing time per data package.
Figure 6. Detection efficiency of the Stack-CNN trigger for 10 cm diameter debris as a function of binned distance intervals from 0 to 100 km. The system maintains over 60% efficiency up to 30 km, dropping below 50% beyond 40 km, and reaching negligible values after 50 km. The result reflects the decreasing signal-to-background ratio with distance, and offers a practical benchmark for the trigger’s effective range.
Figure 7. Light curve from a 30-min observation near Turin at twilight, with each colored line representing a different pixel. The plot shows normalized counts per GTU for each channel. The overall background decreases with time due to fading ambient light. Peaks correspond to astronomical sources such as Capella and satellite transits.
Figure 8. Signal over background measured for 17 satellite passages detected with the prototype detector and processed with the Stack-CNN pipeline. The red line indicates the mean signal level (4%), and the blue line represents the estimated experimental detection limit (1.5%).
Table 1. Parameter count for each layer in the quantized CNN architecture.

Layer | Output Shape | Parameters
Conv2D (1→16) | 16 × 16 × 16 | 160
Conv2D (16→32) | 8 × 8 × 32 | 4640
FC (Flatten 512 → 64) | 64 | 32,832
FC (64 → 1) | 1 | 65
Total | | 37,697
Table 2. Comparison of model architectures for FPGA implementation.

Property | Original Stack-CNN [6] | Quantized Stack-CNN (QAT-CNN)
Input size | 48 × 48 | 16 × 16
Quantized (8-bit) | No | Yes
Training with QAT | No | Yes
Number of Conv Layers | 3 | 2
Number of Dense Layers | 3 | 2
Flattened feature size | 144 | 512
Total parameters | 16,825 | 37,697
FPGA pipelining friendly | Moderate | High
Table 3. Metadata summary of the exported ONNX model for the quantized Stack-CNN architecture, prepared for FPGA deployment via the Vitis AI toolchain.

Property | Description | Value
IR Version | ONNX Intermediate Representation version | 6
Producer | Exporting framework and version | PyTorch 2.3.0
Opset Version | ONNX operator set version | 11
Number of Nodes | Total operations in the ONNX graph | 11
Number of Initializers | Trainable tensors (e.g., weights/biases) | 8
Input Tensor Shape | Input dimensions (N, C, H, W) | (1, 1, 16, 16)
Output Tensor Shape | Output dimensions | (1, 1)
Table 4. Comparison between the original (unquantized) and quantized Stack-CNN performance.

Property | Original Stack-CNN [6] | Quantized Stack-CNN (QAT-CNN)
Accuracy | 99.97% | 99.90%
Precision | 100% | 99.93%
Recall (TPR) | 100% | 99.87%
F1-Score | 99.97% | 99.90%
False Positive Rate (FPR) | 0.00% | 0.067%
Number of Test Samples | 60 | 3000
Table 5. Confusion matrix on test set (quantized model).

 | Predicted: 0 | Predicted: 1
Actual: 0 | 1499 | 1
Actual: 1 | 2 | 1498
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

