LEO Objects' Light-Curve Acquisition System and Light-Curve Inversion for Attitude Reconstruction

Abstract: In recent years, the increase in space activities has brought the space debris issue to the top of the agenda of all space agencies. Uncontrolled objects are a problem both for operational satellites in orbit (collision avoidance) and for the safety of people on the ground (re-entering objects). Optical systems provide valuable assistance in identifying and monitoring such objects. The Sapienza Space System and Space Surveillance (S5Lab) research group has been working in this field for years and can take advantage of a network of telescopes spread over different continents. This article focuses on the re-entry phase of an object; indeed, knowledge of the state of the object, in terms of position, velocity, and attitude during the descent, is crucial in order to predict the impact point on the ground as accurately as possible. A procedure to retrieve the light curves of orbiting objects by means of optical data is shown, and a method to obtain the attitude from their inversion, based on a stochastic optimization (genetic algorithm), is proposed.


Introduction
The increase in artifacts placed in orbit every year exponentially increases the amount of space debris surrounding the Earth. The European Space Agency (ESA) has estimated that there are currently about 750,000 orbiting debris objects larger than 1 cm [1]; this situation worries both the space agencies of every country and the private industry operating in the aerospace sector because of possible in-flight collisions between their systems and a debris swarm [2,3]. Current technology does not allow the removal of uncontrolled debris from orbit but only its monitoring; over the years, a large number of monitoring systems have been developed, which have made it possible to track and catalog about 23,000 uncontrolled objects [1].
Nowadays, the most comprehensive available catalog of uncontrolled objects is the one maintained by the North American Aerospace Defense Command (NORAD). The American institution periodically releases orbital information about the objects in the form of Two Line Elements (TLE), which report, ordered in two strings of code, both a set of Kozai-Brouwer mean orbital elements and a term, BSTAR (B*), related to the area-to-mass ratio [4]. Unfortunately, the TLEs cannot be used in Space Surveillance and Tracking (SST), as they characterize the orbital motion of the tracked object with a purely theoretical model, which accumulates, during each single orbit, the effect of all the perturbations not considered by the ideal model [5]. For this reason, after a certain period, the errors accumulated by the TLE become so large that it no longer describes the orbit of the tracked object, involving such a level of inaccuracy that it is necessary to update it through a new series of direct measurements and a consequent release of new TLEs after a few hours [6,7].
Due to the huge number of stochastic variables that contribute to changing the orbit of an object, no predictive model will ever describe the behavior of the object rigorously. Today, research is oriented towards continuous and frequent direct measurements of debris, which make it possible both to know the position of the object with very low margins of error and to contribute to the development of innovative techniques aimed at improving the existing mathematical models. Knowledge of the state of the object is fundamental to understanding its behavior; therefore, the object must be observed. Many systems are used for the observation of debris and its direct measurement: laser, radar and optical systems.
In this paper, a set of innovative techniques will be discussed which, by exploiting direct optical observations, make it possible to derive the flight attitude of uncontrolled objects.
In particular, a methodology will be discussed that makes it possible to obtain "light curves" from direct optical observations of debris acquired from different observatories at the same time (simultaneous multi-site observation), and it will be shown how, through their inversion, it is possible to derive the attitude dynamics. Knowledge of the attitude dynamics makes it possible to significantly improve the terms related to the atmospheric drag responsible for the orbital decay of any orbiting object and, consequently, the predictive models can also be improved. Furthermore, performing simultaneous multi-site observations drastically reduces the error in the flight altitude and makes it possible to attribute very precisely the term related to the effect of atmospheric drag to the observed debris.
In recent years, the Sapienza Space System and Space Surveillance (S5Lab) research group has gained experience in satellite systems manufacturing and launches and in the space debris field [8][9][10][11][12][13][14][15][16][17][18], operating networks [19][20][21][22][23][24][25] of completely remote optical observatories, which are able to observe, catalogue and identify a huge number of objects, both known and unknown, thanks to which it was possible to refine the correlation between light curves and attitude dynamics of the observed debris. For these reasons, every methodology discussed in this paper is strongly based on a great number of observations, which makes it possible to obtain reliable and solid results.

System Description
The S5Lab observatory network includes many telescopes located in different countries around the world [19]. To obtain the real light-curves presented in this article, the Remote Space Debris Observation System (RESDOS) and the Sapienza Coupled University Debris Observatory (SCUDO) observatories, located in Rome (RM) and Collepardo (FR), respectively, were used. In both systems, a scientific Complementary Metal-Oxide Semiconductor (sCMOS) sensor is installed integrally with the main telescope and its mount, so the systems are able to track the satellite and acquire video in real time. The sCMOS sensor makes it possible to obtain a readout rate higher than that of a Charge-Coupled Device (CCD) sensor, ensuring a high frame rate (fps). Moreover, the mounts used to move the telescopes are able to track satellites in every orbital regime: Low Earth Orbit (LEO), Medium Earth Orbit (MEO), and Geostationary Earth Orbit (GEO). Both systems are equipped with the same sCMOS sensor and mount; Table 1 summarizes the main characteristics of the two systems.
Simultaneous multi-site observations require extremely precise acquisition synchronization, on the order of milliseconds or tighter. The synchronization error must be as small as possible in order to obtain a better integration of the data acquired from the two observatories. Once the object enters the visibility range, the sCMOS sensor receives a trigger signal to start the recording. To avoid synchronization errors, a GPS receiver is installed in both systems. The Pulse Per Second (PPS) of the GPS signal and the camera's lag time, i.e., the time between the shutter release (trigger signal) and the actual shot, are taken into account: the error on the first is negligible [26], whereas the latter is constant (for the described system) and can therefore be corrected.
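The lag-time correction described above can be sketched as a simple timestamp adjustment. This is a minimal, hedged illustration: the paper only states that the lag is constant and correctable, so the lag value, frame rate, and function name below are hypothetical.

```python
# Hedged sketch: correcting frame timestamps for the camera's constant lag time.
# The lag value (5 ms) and frame rate (20 fps) are assumed for illustration only.

def frame_timestamps(trigger_utc_s: float, fps: float, n_frames: int,
                     lag_s: float) -> list:
    """Return corrected UTC timestamps (seconds) for each frame.

    trigger_utc_s: GPS-referenced trigger time (PPS-aligned, error negligible)
    fps:           acquisition frame rate
    lag_s:         constant shutter lag between trigger and actual exposure
    """
    period = 1.0 / fps
    return [trigger_utc_s + lag_s + i * period for i in range(n_frames)]

# Example with assumed values: 20 fps, 5 ms lag, trigger at t = 0.
ts = frame_timestamps(trigger_utc_s=0.0, fps=20.0, n_frames=3, lag_s=0.005)
# ts -> [0.005, 0.055, 0.105]
```

Because both observatories apply the same GPS-referenced trigger and their own constant lag, the corrected timestamps can be compared directly between sites.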

Photometric Routine
The real light-curve of the object is the variation over time of its apparent magnitude. This magnitude is calculated with respect to the stars present in the field of view in each frame. The stars' catalogue magnitudes are obtained from the star-field resolution, which is performed by a local version of the Astrometry.net software (version 0.84, Dustin Lang, Waterloo, Canada) [27]. For each frame, the celestial coordinates, pixel coordinates and blue (B) and visual (V) catalogue magnitudes of each star are known. The star catalogue used to solve the star field is Tycho-2 [28].

Reference Star Selection Criterion
Many stars may be present in the field of view, but not all of them can be used as a reference for calculating the magnitude of the object. To calculate the apparent magnitude of one star with respect to another, the two stars must be of approximately the same spectral class, that is, of the same color. The color of a star can be determined from its temperature, and the temperature can be obtained from the difference between the blue and visual magnitudes (B-V), as described in [29]. It has been assumed that if B-V < 0.3 the star is blue, and red otherwise, since B-V = 0.3 corresponds to a star temperature of about 7000 K, which is approximately a white star [29]. In addition, the stars at the edges of the field of view are excluded. Before proceeding with the calculation of the intensity of a star, it is necessary to calibrate the image in order to correct sensor-related effects such as variations in quantum efficiency, vignetting, dark current, bias level and the gain of the A/D conversion. The applied method is known as standard calibration [30]. To calculate the apparent magnitude of a star it is necessary to know its intensity, in terms of Analog-to-Digital Units (ADU), the intensities of the other stars and their catalogue magnitudes. It is therefore necessary to calculate only the intensity of each star in the field of view, since the catalogue magnitudes are already known. The procedure used for calculating the intensity of a star (or of an object) is aperture photometry [31]. Figure 1 shows the difference between the catalogued and the calculated magnitude for a 1000-frame video recording, referring to the tracking of ATLAS 2AS CENTAUR (SSN 28096) taken on 30/10/2020 from the RESDOS observatory.
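The color selection described above reduces to a single threshold on the catalogue B-V index. A minimal sketch follows; the star records are hypothetical tuples, since the actual catalogue data structure is not specified in the text.

```python
# Hedged sketch of the reference-star selection by spectral class: stars with
# catalogue B - V < 0.3 are treated as "blue" (hotter than ~7000 K), the rest
# as "red". Star identifiers and the record layout are illustrative only.

def classify_by_color(stars):
    """Split stars into blue and red lists using the B - V = 0.3 threshold.

    stars: iterable of (name, B_mag, V_mag) tuples from the catalogue.
    """
    blue, red = [], []
    for name, b_mag, v_mag in stars:
        (blue if (b_mag - v_mag) < 0.3 else red).append(name)
    return blue, red

catalogue = [("HIP1", 4.1, 4.0),   # B-V = 0.1 -> blue
             ("HIP2", 6.0, 5.2),   # B-V = 0.8 -> red
             ("HIP3", 5.3, 5.1)]   # B-V = 0.2 -> blue
blue, red = classify_by_color(catalogue)
# blue -> ["HIP1", "HIP3"], red -> ["HIP2"]
```

Only stars in the same list as the reference set (and away from the frame edges, and unsaturated) are then used for the differential photometry.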
Figure 1. The figure shows how the mean and the standard deviation of the difference between the catalogued and calculated magnitudes decrease when using stars of the same spectral class. On the left is the histogram obtained using all the stars present in the video recording, in the center using only the red stars and, on the right, using only the blue stars.

Star Intensity and Background Estimation
The estimation of the background level makes it possible to correct for the fact that the same pixels that collect photons from the object also collect photons from the sky. A common technique is to place a software annulus around the object with a total number of pixels of about three times the number contained within the source aperture. For the pixels of the sky annulus, the median intensity is found and then all those with values differing by more than ±3σ from the median are tossed out. With this cut-off, cosmic-ray hits, bad pixels, and contamination from astronomical neighbors are eliminated. The cut-off is repeated once more on the new distribution. Finally, the background level is the median value of the final histogram [32].
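The sigma-clipped median described above can be sketched in a few lines; this is an illustrative implementation of the technique, not the authors' code, and the sample pixel values are synthetic.

```python
import statistics

# Hedged sketch of the background estimation: take the median of the sky-annulus
# pixels, reject values farther than 3 sigma from it, repeat the clipping once
# on the survivors, and return the median of the final distribution.

def background_level(annulus_pixels, n_sigma=3.0, passes=2):
    """Return the sigma-clipped median of the sky-annulus pixel values."""
    pixels = list(annulus_pixels)
    for _ in range(passes):
        med = statistics.median(pixels)
        sigma = statistics.pstdev(pixels)
        pixels = [p for p in pixels if abs(p - med) <= n_sigma * sigma]
    return statistics.median(pixels)

# A cosmic-ray hit (5000 ADU) in an otherwise flat sky is rejected by the clipping.
sky = [100, 102, 98, 101, 99, 5000, 100, 103, 97, 100]
bg = background_level(sky)
# bg -> 100
```

The per-pixel level returned here is what the aperture-photometry step subtracts from the source signal.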
For the calculation of the star intensity, a software aperture of star-dependent radius is centred on the pixel coordinates of the star and the pixels in this area (A) are summed; this is the total integrated photometric source signal (S). To remove the background, the following formula is applied:

I = S − n_pix · B

where n_pix is the total number of pixels contained within the area A and B is the background level discussed above. With this value, it is possible to exclude all the stars that are saturated.
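The aperture-photometry step reduces to summing the pixels in the aperture and subtracting n_pix times the background level. A minimal sketch follows; the saturation limit is an assumption (a 16-bit sensor), as the text does not state the actual full-well value.

```python
# Hedged sketch of the aperture-photometry step: sum the pixels inside the
# source aperture to get the integrated signal S, then take I = S - n_pix * B,
# with B the per-pixel background from the sky annulus. The saturation
# threshold (65535 ADU, i.e. 16-bit) is assumed for illustration.

def source_intensity(aperture_pixels, background_level, full_well_adu=65535):
    """Return (intensity_adu, saturated) for one source aperture.

    aperture_pixels:  ADU values of the pixels inside the aperture (area A)
    background_level: per-pixel sky level B estimated from the annulus
    """
    s_total = sum(aperture_pixels)                     # integrated signal S
    n_pix = len(aperture_pixels)                       # pixels in area A
    intensity = s_total - n_pix * background_level     # I = S - n_pix * B
    saturated = any(p >= full_well_adu for p in aperture_pixels)
    return intensity, saturated

intensity, saturated = source_intensity([250, 300, 280, 270], background_level=100)
# intensity -> 700 ADU, saturated -> False
```

Stars flagged as saturated are dropped from the reference list, as described above.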
It has been shown by Howell that there exists a relation between the radius of the star aperture and the signal-to-noise ratio obtained for that measurement. An optimum aperture radius, that is, one that provides the best signal-to-noise ratio for the measurement, can be determined and generally is near 1 · FWHM. This optimum radius is a weak function of the source brightness, becoming smaller for fainter sources. Once the optimum star radius is found, it is possible to calculate the intensity of the star using this radius [31,32]. Figures 2 and 3 show, respectively, the growth curve, i.e., the variation of the star intensity with respect to the radius, for three different stars and, for the best one, the Signal-to-Noise Ratio (SNR) trend with respect to the radius.
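The optimum-radius search can be sketched as scanning a set of radii and keeping the one that maximizes the SNR. This is an illustrative sketch under simplifying assumptions: the SNR formula below keeps only source and sky shot noise (no dark current or read noise, unlike Howell's full CCD equation), and the image and star profile are synthetic.

```python
import math

# Hedged sketch of the optimum-aperture search: evaluate the SNR of the
# aperture measurement for growing radii and keep the radius with maximum SNR.
# Simplified SNR (source + sky shot noise only); synthetic pixel grid.

def snr(signal_adu, n_pix, background_adu):
    """Simplified signal-to-noise ratio of one aperture measurement."""
    return signal_adu / math.sqrt(signal_adu + n_pix * background_adu)

def optimum_radius(image, cx, cy, background_adu, radii):
    """Return the radius (from `radii`) that maximizes the aperture SNR."""
    best_r, best_snr = None, -1.0
    for r in radii:
        pixels = [image[y][x] - background_adu
                  for y in range(len(image)) for x in range(len(image[0]))
                  if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2]
        s = sum(pixels)
        value = snr(s, len(pixels), background_adu) if s > 0 else 0.0
        if value > best_snr:
            best_r, best_snr = r, value
    return best_r

# Synthetic 11x11 frame: flat sky of 10 ADU with a faint star at (5, 5).
image = [[10] * 11 for _ in range(11)]
image[5][5] = 110
for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    image[5 + dy][5 + dx] = 60
for dx, dy in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    image[5 + dy][5 + dx] = 30
best = optimum_radius(image, cx=5, cy=5, background_adu=10, radii=[1, 2, 3, 4])
# best -> 2: growing the aperture beyond the star adds sky noise but no signal
```

For this compact synthetic star the SNR peaks at r = 2 and then degrades, mirroring the behavior the text describes: larger apertures collect more sky pixels without adding source flux.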

In the following example, it is shown how the mean and the standard deviation of the error in magnitude change according to the value of the radius used for each star in each frame. The analysed video is the same used in Section 3.1, and a minimum in the standard deviation occurs for r = 3, as shown in Figure 4. This radius is the optimum value for this video. The optimum radius does not assume the same value for each video, since it depends on several factors: the stars present in the Field of View (FoV), the seeing, and the characteristics of the sensor.

Stars Magnitude
The previous procedure calculates the intensity of each star present in the field of view for each frame. Therefore, the intensity of each star is obtained as a function of time. Since the object is being tracked, it will be stationary in the field of view, while the stars will move. Therefore, the same stars will not always be present during the entire recording of the object. The object video is then divided into time intervals. In each interval, only the stars that are present for the entire length of the interval are chosen. The intensity of each star is approximated as a constant equal to the median of the intensities that the star assumes in that interval. At this point, the apparent magnitude of each star present in the interval is calculated with respect to all the others present in the same interval by:

mag_star,i = mag_star,j_cat − 2.5 · log10(I_star,i / I_star,j)

for i,j = 1,…,N_star and i ≠ j, where mag_star,i is the magnitude of the i-th star, mag_star,j_cat is the catalogue magnitude of the j-th star, I_star,i is the intensity of the i-th star in ADU and I_star,j is the intensity of the j-th star in ADU. For each star, therefore, a vector of possible apparent magnitudes is obtained, and it is assumed that the apparent magnitude of the star is equal to the median of these values. In each interval, there is a certain number of stars n, which are combined in groups of m, with m ≤ n, in order to find the combination of m stars that minimizes the standard deviation of the measurements. The set of combinations of stars chosen for each interval represents the list of stars that will be used as references for calculating the magnitude of the object being tracked. Figure 5 shows how the proposed method improves the magnitude estimation.
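The relative-magnitude step above can be sketched as follows. This is an illustrative implementation (the combination search over star groups is omitted for brevity): each star's magnitude is estimated against every other star in the interval via the Pogson relation, and the median of the estimates is kept.

```python
import math
import statistics

# Hedged sketch of the relative-magnitude step: the apparent magnitude of star
# i is computed against every other star j in the same interval through
# mag_i = mag_cat_j - 2.5 * log10(I_i / I_j), and the median of those estimates
# is taken as the magnitude of star i. Inputs are hypothetical per-interval
# median intensities and catalogue magnitudes.

def apparent_magnitudes(intensities_adu, catalogue_mags):
    """Return the median relative magnitude of each star in the interval."""
    n = len(intensities_adu)
    result = []
    for i in range(n):
        estimates = [catalogue_mags[j]
                     - 2.5 * math.log10(intensities_adu[i] / intensities_adu[j])
                     for j in range(n) if j != i]
        result.append(statistics.median(estimates))
    return result

# Two reference stars whose flux ratio is ~2.512 (one magnitude) recover each
# other's catalogue magnitudes.
mags = apparent_magnitudes([25120.0, 10000.0], [4.0, 5.0])
# mags -> [~4.0, ~5.0]
```

In the actual procedure, this computation would be repeated for each candidate group of m reference stars, keeping the group that minimizes the standard deviation of the estimates.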


Object Magnitude Variation
To calculate the change in magnitude of the tracked object over time (light-curve), it is necessary to calculate the intensity of the object in each frame. If the object is not perfectly still in the field of view during the recording, as often happens, it is necessary to use an algorithm that determines the pixel coordinates of the object center in each frame. Once the coordinates of the object in each frame have been obtained, it is possible to calculate the intensity of the object as described in Section 3.2 and consequently its magnitude, as described in the previous section, using as reference those stars that result from the optimization process. In Figure 6, the flowchart of the software for the video analysis is shown.

In Phase 1, each frame of the video is analyzed individually.
This consists of initializing some parameters that are useful for the following phases: the frame interval that the user wants to analyze, the object center, used for the subsequent tracking, and the kernel dimensions for the calculation of the object's intensity and the background estimation. The stellar field solved in Phase 2 makes it possible to understand where the sensor is pointing on the celestial vault. In this way, the stars present in the field of view are identified in terms of celestial coordinates (RA, Dec), pixel coordinates and magnitudes (blue and visual). In the third phase, since the pixel coordinates of the stars are known, those at the edges are excluded so that the kernel applied to them does not extend beyond the image. A spectral analysis is performed on the remaining ones, discriminating between red and blue stars, so as to choose stars belonging to the same spectral class. Finally, the unsaturated stars are chosen for the calculation of the object's magnitude. Once the intensity of the stars in the field of view is known, the apparent magnitude of each star is calculated with respect to the others in the field of view during Phase 4. During the whole video, the stars present are not always the same: some remain for a certain frame interval, and others for other intervals. This analysis is therefore carried out for each frame interval in which the same stars are present for the entire duration of that interval. The apparent magnitude of each star is calculated as the median value of the magnitudes of that star with respect to the others. For each frame interval, every single star is compared to different groups of other stars, so different combinations can occur. The combination of stars that minimizes the standard deviation of the measurements is the one chosen to calculate the apparent magnitude of the object. In the last phase, the object is followed for the entire duration of the video and tracked along its path.
Starting from the coordinates entered in Phase 1, the software is able to identify the position of the object frame by frame. Once the flux and background have been calculated, using the kernel parameters given as input in Phase 1, it is possible to calculate the intensity of the object over time and therefore its apparent magnitude. When the frames are finished, the software returns the light curve of the object; otherwise, it switches to the next frame until the last frame is reached.
The output light-curve does not take into account the varying range between the object and the observatory, so a normalization with respect to the range and the Earth's radius is necessary. In particular, the normalization used must take into account the distance between the observatory and the satellite, thus eliminating the trend due to the range, but must not modify the average value of the object's magnitude. Several normalization procedures have been tried, and the most efficient one, according to the criteria described above, can be expressed by the following formula:

M_R = M − 5 · log10(r / R_t)

where M_R is the normalized magnitude, which is called reduced magnitude, M is the object's magnitude without normalization, r is the range and R_t is the Earth's radius, used as a scale factor for the range. The result of the normalization procedure is shown in Figures 7 and 8.
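The range normalization can be sketched as a one-line correction per light-curve sample. This assumes the reduced magnitude takes the standard distance-scaling form M_R = M − 5·log10(r/R_t) with the Earth's radius as the scale factor, consistent with the symbols defined in the text; the function name and radius value are illustrative.

```python
import math

# Hedged sketch of the range normalization: the reduced magnitude removes the
# trend due to the observer-object range r by rescaling it with the Earth's
# radius R_t, leaving the overall magnitude level of the same order.

EARTH_RADIUS_KM = 6371.0  # R_t, used as scale factor for the range

def reduced_magnitude(mag, range_km):
    """Return M_R = M - 5 * log10(r / R_t) for one light-curve sample."""
    return mag - 5.0 * math.log10(range_km / EARTH_RADIUS_KM)

# At a range equal to one Earth radius the magnitude is unchanged; at twice
# that range the correction brightens the sample by 5*log10(2) ~ 1.5 mag.
m1 = reduced_magnitude(6.0, 6371.0)        # -> 6.0
m2 = reduced_magnitude(6.0, 2 * 6371.0)    # -> ~4.495
```

Applying this sample by sample removes the slow rise and fall in apparent magnitude caused by the object approaching and receding during the pass.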
The decrease and the increase in the magnitude values, respectively at the start and at the end of the tracking, represent the rise and the set of the object.


Light-Curves Examples
To find the optimal shooting parameters, many tests were carried out. Mainly, it is necessary to choose the binning, the exposure time, the frame rate, the image type (8, 12 or 16 bit) and whether to use a global or rolling shutter. The main parameter to keep under control is the exposure time, since an overly high value means that the stars in the field of view are represented by overly long streaks, with consequent failure in the resolution of the star field. Conversely, if an overly short exposure time is used, not enough stars are identified for the astrometric resolution. Regarding the object, a high exposure time could lead to saturation of the pixels that represent the object in the field of view, whereas a low exposure time involves a lower signal-to-noise ratio for the object, which can prevent it from being visible.
Figures 9-11 show some examples of the light-curves taken from both the RESDOS and SCUDO observatories, recorded within the Inter-Agency Space Debris Coordination Committee (IADC) campaign WG1 AI 38.2, "Attitude motion characterization of LEO upper stages using different observation techniques" [33], useful to validate the light-curve analysis software. From these examples, it is possible to see how the tumbling motion of these objects is quite smooth, showing a repetition period even over a long passage. The represented errors change depending on the actual state of the observed sky, so the error bars reflect, for every single measurement, the real observing conditions.

Light-Curves Examples
To find the optimal shooting parameters, many tests were carried out. Mainly, it is necessary to check the binning, the exposure time, the frame rate, the image type (8, 12 or 16 bit) and whether to use global or rolling shutter. The main parameter to take under control is the exposure time, since an overly high value of this means that the stars in the field of view are represented by overly long strips with consequent failure in the resolution of the star field. Conversely, if an overly short exposure time is used, not enough stars are identified for the astrometric resolution. Regarding the object, a high exposure time value could lead to a saturation of the pixels that represent the object in the field of view Instead, a low exposure time value involves a lower signal to noise ratio of the object that prevents it from being visible.
In Figures 9-11, some examples are shown of the light-curves taken from both the RESDOS and SCUDO observatories, recorded within the Inter-Agency Space Debris Coordination Committee (IADC) campaign WG1 AI 38.2, "Attitude motion characterization of LEO upper stages using different observation techniques" [33], used to validate the light-curve analysis software. These examples show that the tumbling motion of these objects is quite smooth, exhibiting a repetition period even when the passage is long. The errors represented change depending on the actual state of the observed sky, so the error bars reflect, for every single measurement, the real observation conditions. The decrease and increase in the magnitude values, at the start and at the end of the tracking, respectively, represent the rise and set of the object. Many tests were needed to calibrate the software parameters, both to obtain a correct calculation of the magnitude of the stars and the object and to correctly follow the object in the video frames. Once the software was validated, a simultaneous observation test was pursued.
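The magnitude calculation mentioned above can be sketched with the standard differential-photometry relation: the object's magnitude follows from the ratio of its summed pixel flux to that of a reference star of known catalog magnitude. This is a minimal illustration, not the actual analysis software; the function name and values are hypothetical.

```python
import math

def object_magnitude(flux_obj, flux_ref, mag_ref):
    """Differential photometry: magnitude of the object from its summed
    pixel flux and that of a catalog reference star of known magnitude."""
    return mag_ref - 2.5 * math.log10(flux_obj / flux_ref)

# An object collecting 1/10 of the flux of a magnitude-6.0 reference star
# is 2.5 magnitudes fainter.
m = object_magnitude(100.0, 1000.0, 6.0)   # -> 8.5
```

In practice, several reference stars are used and the photometric parameters are tuned to minimize the scatter of the resulting magnitudes, as described above.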
Figures 12 and 13 show an example of light-curves obtained during a bi-static observation taken from the RESDOS and SCUDO observatories. The object under consideration is the satellite ATLAS 2AS CENTAUR (SSN 28096) during its passage above Italy, which occurred on 30/10/2020 at 19:29:00 UTC. It is possible to notice how, even if the main behavior is similar, some small differences are representative of the different attitudes of the reflective faces with respect to the two observatories. These differences are of paramount importance in the process of light-curve inversion discussed later.


Attitude Reconstruction
An object's magnitude may be directly measured using optical instruments through light-curve analysis [34][35][36]. The attitude determination problem is fundamental in order to understand the behavior of orbiting objects, above all uncontrolled ones. In particular, the knowledge of the attitude of a debris object is critical during the re-entry phase, where the atmosphere plays an important role in the evolution of the object's descent given its interaction with the object's surface. In this section, a method for attitude reconstruction is explained.

Overview of the Method
Once the observed light-curve is retrieved from an optical observation, the next step is to compare it with a synthetic one in order to retrieve the object's attitude. Many synthetic light-curves are generated with different initial attitude parameters: the more similar a synthetic light-curve is to the observed one, the greater the likelihood that the corresponding attitude parameters represent the object's real attitude. The critical phase in the whole process is the synthetic light-curve generator: it must be as realistic and reliable as possible, both from the optical and the physical point of view.
First, an SGP-4 (Simplified General Perturbations) routine is run over the interval of interest to retrieve the object's orbit [4]. Moreover, the observer and Sun positions are computed, for each instant, in the same reference frame as the object.
A 3D model of the object is needed to generate a synthetic image of it over time. The artificial light-curve is the result of the flux of light reflected by the object, intended as the sum of the pixel values of the image. Later, a genetic algorithm is used to minimize a cost function defined as the difference between the real light-curve and the synthetic one. The algorithm tries different initial attitude parameters until the two curves are comparable. In the next sections, the whole process is explained in detail.

Virtual Reality Simulation
As mentioned above, the crucial phase is to generate an accurate model of the object, and thus a representation of the light it reflects (the light-curve) that is as accurate as possible. To do this, a virtual reality simulation is implemented; many parameters and variables, from both the optical and the physical point of view, have to be taken into account, such as the atmospheric extinction, the object's shape and materials, and the Sun-object-observer phase angle, in order to obtain the light reflected by each part of the object and to accurately reproduce the shadow areas cast by the object on itself. A bad estimate of these parameters leads to an inaccurate synthetic light-curve and therefore to an imprecise attitude determination. In this context, a virtual reality engine was developed specifically for space observation simulations: the Virtual Space Observation Engine (VISONE). Two different modules can be found in the software: the Physical Engine and the Rendering Engine. Figure 14 shows a schematic of the VISONE engine.

Physical Engine
This part of the software studies the physics of the problem. In particular, to obtain a simulation of the light reflected by the object that is as realistic as possible, it is necessary to compute, instant by instant:
• Observer position (i.e., ground station);
• Position of the Sun.
All these parameters make it possible to compute the phase angle (see Figure 15), which is fundamental for computing the amount of light reflected by the object.
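As a minimal sketch of this step (the function name is illustrative, and all positions are assumed to be expressed in the same reference frame, e.g., ECI), the phase angle is the angle at the object between the direction to the Sun and the direction to the observer:

```python
import math

def phase_angle(obj, sun, obs):
    """Sun-object-observer phase angle (radians), with all positions
    given as 3-vectors in a common reference frame."""
    to_sun = [s - o for s, o in zip(sun, obj)]
    to_obs = [b - o for b, o in zip(obs, obj)]
    dot = sum(a * b for a, b in zip(to_sun, to_obs))
    n1 = math.sqrt(sum(a * a for a in to_sun))
    n2 = math.sqrt(sum(b * b for b in to_obs))
    # Clamp to guard against rounding just outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
```

For example, an object at the origin with the Sun along x and the observer along y sees a 90-degree phase angle.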
Figure 14. VISONE schematic - some of the input parameters taken by VISONE are: the interval of integration and the time step; the 3D model of the object whose attitude we want to find, its inertia matrix, and its initial dynamical state (i.e., initial angles and angular velocities); the last TLE of the object; and the position of the observer. VISONE then computes the position of the object and of the Sun in the same reference frame and renders in an image the 3D model of the orbital object as it would be seen by an observer on Earth.

Figure 15. Phase angle - this image shows the phase angle, observer-object-sun, needed to compute the amount of light reflected by the object.


The position of the object, for each instant of time, is computed using an SGP-4 [4] routine, which takes as input the TLE of the object. As mentioned in the introduction, the TLEs are characterized by low accuracy and short reliability, which can lead to errors of several km in the satellite position [5]. This means that a TLE loses its utility in a few days. Moreover, in the case of a re-entering object, for which the light-curve plays a fundamental role in the re-entry prediction, the reliability period decreases to a few hours. Therefore, the TLE closest to the measurement time is chosen. While the ground station position is well known, the Sun position is computed using the ephemeris. It is essential that all these positions are expressed in the same reference system, e.g., the Earth-Centered Inertial (ECI) one. Figure 16 shows the Physical Engine structure.
Regarding the dynamic state of the object, the classical Euler equations for a rigid body are used under the acceptable hypothesis of null external torque, since its effects on the object are negligible during the light-curve acquisition period. Here, the dynamic state is represented by the Euler angles and the angular velocity components in the satellite reference frame (φ0, θ0, ψ0, p0, q0, r0). A specific inertia matrix is considered for every type of object in order to account for the different geometries.
Figure 16. Physical Engine - this engine takes care of computing, for each time step, the position of the orbital object and its dynamical state, taking as input the TLE and the initial state with the inertia matrix. At the same time, the position of the observer and of the Sun are computed over the same integration interval with the same sampling rate. Moreover, all the results are given in the same reference frame.
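The dynamical step above can be sketched by integrating the torque-free Euler equations for the body rates (p, q, r) with a fixed-step Runge-Kutta 4 scheme; the inertia values, initial rates, and step size below are illustrative only, not those used for any specific object.

```python
def euler_rates(w, I):
    """Torque-free Euler equations for a rigid body in principal axes."""
    p, q, r = w
    I1, I2, I3 = I
    return [(I2 - I3) * q * r / I1,
            (I3 - I1) * r * p / I2,
            (I1 - I2) * p * q / I3]

def rk4_step(w, I, dt):
    """One fixed-step Runge-Kutta 4 integration step for the body rates."""
    k1 = euler_rates(w, I)
    k2 = euler_rates([wi + 0.5 * dt * ki for wi, ki in zip(w, k1)], I)
    k3 = euler_rates([wi + 0.5 * dt * ki for wi, ki in zip(w, k2)], I)
    k4 = euler_rates([wi + dt * ki for wi, ki in zip(w, k3)], I)
    return [wi + dt * (a + 2 * b + 2 * c + d) / 6
            for wi, a, b, c, d in zip(w, k1, k2, k3, k4)]

def kinetic_energy(w, I):
    return 0.5 * sum(Ii * wi * wi for Ii, wi in zip(I, w))

def ang_momentum_sq(w, I):
    return sum((Ii * wi) ** 2 for Ii, wi in zip(I, w))

# Illustrative principal inertias (kg m^2) and initial body rates (rad/s).
I = (10.0, 12.0, 15.0)
w0 = [0.05, 0.02, -0.03]
T0, L0 = kinetic_energy(w0, I), ang_momentum_sq(w0, I)
w = list(w0)
for _ in range(1000):       # 100 s of propagation at dt = 0.1 s
    w = rk4_step(w, I, 0.1)
```

A quick sanity check for such an integrator is that, with zero external torque, the rotational kinetic energy and the angular-momentum magnitude stay numerically constant over the propagation.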

Rendering Engine
The second step to retrieve the attitude is the Rendering Engine. In this step, the aim is to recreate a synthetic image of the object as realistically as possible. For this purpose, the output of the Physical Engine is the input of the rendering one. In the Rendering Engine, new parameters, such as atmospheric extinction, the object's shape and materials, and the direction of the sunlight, must be considered. Since the engine must accurately reproduce the shadows on the object's surface due to its components, e.g., solar panels or antennas, the worse the synthetic image quality, the worse the light-curve quality, leading to poor results in terms of attitude determination. Indeed, shadows and reflected lights, e.g., by the antenna, create valleys and peaks in the shape of the light-curves. In Figure 17, the result of the rendering is shown.
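The atmospheric extinction term can be sketched with the standard plane-parallel airmass model, m = m0 + k·X(z); whether VISONE uses exactly this model is an assumption of this illustration, and the coefficient values are hypothetical.

```python
import math

def airmass(zenith_angle_deg):
    """Plane-parallel airmass approximation X(z) = 1/cos(z),
    adequate away from the horizon."""
    return 1.0 / math.cos(math.radians(zenith_angle_deg))

def extinct_magnitude(m_top, k, zenith_angle_deg):
    """Magnitude observed through the atmosphere: m = m0 + k * X(z),
    with k the extinction coefficient (mag/airmass)."""
    return m_top + k * airmass(zenith_angle_deg)

# At 60 deg from zenith the airmass is 2, so with k = 0.2 mag/airmass a
# magnitude-5.0 source dims to 5.4.
m_obs = extinct_magnitude(5.0, 0.2, 60.0)
```

More refined airmass formulas exist for low elevations, but this form conveys how extinction reshapes the synthetic light-curve as the object's elevation changes during a pass.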

The developed software uses the well-known loader library Assimp [37], which makes the software able to work with most of the material properties used in the 3D rendering field. The Blinn-Phong lighting model [38] is used to recreate the lighting conditions, whereas the shadow areas are modeled following the exponential shadow mapping model [39]. These choices make it possible to be fast and precise in the light-source computation, be the sources directional or far away, and in the evaluation of the light reflected by the object. The results are shown in Figure 17.
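The Blinn-Phong evaluation can be sketched for a single facet as follows; this is a simplified per-facet version with illustrative reflection coefficients, not the VISONE implementation.

```python
import math

def blinn_phong_intensity(n, l, v, kd=0.7, ks=0.3, shininess=32):
    """Brightness of a facet with unit normal n, lit from unit direction l
    and viewed from unit direction v, per the Blinn-Phong model."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def normalize(a):
        m = math.sqrt(dot(a, a))
        return [x / m for x in a]
    diffuse = max(0.0, dot(n, l))            # Lambertian term
    h = normalize([li + vi for li, vi in zip(l, v)])  # half vector
    specular = max(0.0, dot(n, h)) ** shininess
    return kd * diffuse + ks * specular

# A facet facing both the light and the viewer reflects at full intensity.
peak = blinn_phong_intensity([0, 0, 1], [0, 0, 1], [0, 0, 1])
```

Summed over all visible, unshadowed facets, such per-facet intensities produce the peaks and valleys that appear in the synthetic light-curve.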
In Figure 18, it can be seen how the Rendering Engine uses the data computed by the Physical Engine; the figure also shows the correspondence between the real parameters and the model parameters.


Automatic Attitude Determination
The real light-curve, obtained as mentioned in Section 3, depends only on the time t; therefore, it can be defined as I_T(t). Instead, the synthetic one generated by VISONE depends not only on the time, but also on the initial attitude conditions in terms of Euler angles and angular velocity components. Let φ0, θ0, ψ0 be the initial Euler angles and let p0, q0, r0 be the initial angular velocities. The synthetic light-curve I_S can be defined as:

I_S(t, φ0, θ0, ψ0, p0, q0, r0) (4)

Therefore, it is possible to define a cost function as the sum over time of the modulus of the difference between the real and the synthetic light-curve:

∑t |I_T(t) − I_S(t, φ0, θ0, ψ0, p0, q0, r0)| (5)

In this way, the issue is reduced to a minimization problem in which the final task is to find the initial attitude parameters that generate the synthetic light-curve as close as possible to the real one. Those parameters will be the presumable attitude parameters of the observed object. In this context, a genetic algorithm [40] is used to solve the problem defined in Equation (5). In addition, it is possible to define validity ranges for each parameter, or to lock or release parts of the object, such as the movement of the solar panels. This makes it possible to improve the attitude solution both when prior information about the attitude of the observed object is known (e.g., the solar panels point towards the Sun, or the main body points to the nadir) and in the case of re-entering objects. Figure 19 shows the optimization architecture.
Figure 19. Optimization architecture - the scheme depicts how the proposed method finds the attitude. The genetic algorithm uses Equation (5) to compare the observed light-curve and the synthetic one, then changes the initial attitude parameters accordingly in order to create a new synthetic light-curve. This process is repeated until the best synthetic attitude, i.e., the presumable object attitude, is found.
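The optimization loop can be sketched with a toy two-parameter curve generator standing in for VISONE; all names, parameter meanings, and GA settings here are hypothetical stand-ins, and the cost is the Equation (5)-style sum of absolute differences.

```python
import math
import random

random.seed(42)                     # deterministic run for this sketch

T = [0.1 * k for k in range(100)]

def synthetic_curve(a, b):
    """Toy stand-in for VISONE: a two-parameter 'light-curve'."""
    return [a * math.sin(t) + b for t in T]

observed = synthetic_curve(1.5, 0.4)    # target; (1.5, 0.4) unknown to the GA

def cost(params):
    """Equation (5)-style cost: sum over time of |I_T(t) - I_S(t, params)|."""
    return sum(abs(o - s) for o, s in zip(observed, synthetic_curve(*params)))

def evolve(generations=60, pop_size=40, bounds=(-3.0, 3.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[:pop_size // 4]              # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            children.append([(x + y) / 2 + random.gauss(0.0, 0.1)  # crossover
                             for x, y in zip(p1, p2)])             # + mutation
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
```

The bounds passed to `evolve` play the role of the validity ranges mentioned above: prior knowledge about the object's attitude narrows them and speeds up convergence.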

Method Validation
In order to evaluate the proposed method, two kinds of tests were carried out: the first checked the quality and the accuracy of the method on a ground truth of synthetic datasets, while the second validated the method on a real case.

Synthetic Data
The algorithm was tested on synthetic data in order to check the reliability of the procedure. Once the initial attitude parameters were defined, white noise was applied to the synthetic light-curve. The resulting synthetic light-curve was given as the target to the algorithm, which returned the attitude parameters that created it. In Figure 20, the synthetic target light-curve and the one found are shown. In Table 2, the real initial attitude parameters and the ones found by our method are shown. This result shows that the proposed method has good performance and reliability also in the presence of noise. The difference between the target and synthetic parameters reported in Table 2 is due to the optimization process. Each parameter affects the attitude, and so the resulting light-curve is affected in the same way. Nevertheless, the choices made by the genetic algorithm during the optimization process may affect the precision of the parameters found, as well as the time it takes to find them. The longer the optimizer is left to search for the solution, the more the solution can be refined.


Table 2. The initial attitude parameters found by the proposed method on a synthetic dataset in comparison with the real ones, where φ0, θ0, ψ0 are the Euler angles (in degrees) and p0, q0, r0 the angular velocities (in degrees/s).

Real Case Test
In order to test the proposed method on real data, the re-entry campaign of the winter of 2017-2018 of the Chinese space station Tiangong-1 was chosen. In Figure 21, the light-curve of Tiangong-1 passing over Rome on 15/02/2018 is shown. The light-curve was sampled at 24 fps with a video recording duration of 71 s. The observed light-curve (in blue in Figure 21) is very noisy. These fluctuations are caused by the observational method (not entirely optimal) and by the algorithm that determines the flux of light reflected by the object by analyzing the image. Indeed, it seems unrealistic that a passive object (i.e., one that only reflects sunlight) varies its intensity in only 1/24 of a second. It is more interesting to see how, even in the presence of noise, the algorithm is able to find the parameters that generate a light-curve similar to the observed one, discarding the noise.
Figure 20. Synthetic test - in blue, the synthetic light-curve created with VISONE; in red, the light-curve found by the genetic algorithm.
In the real case, since the SGP4 routine only provides a prediction of the object's position, which could differ from the real one by a time shift, an additional delay parameter is added to the cost function (Equation (5)) to cope with this kind of error. Moreover, in this particular case, another parameter was added to the cost function to take into account a further degree of freedom due to the rotation of the solar panels with respect to the main body.
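The effect of the delay parameter can be illustrated with a toy example in which a known sample shift between two curves is recovered by scanning for the delay that minimizes the augmented cost; the names, curves, and shift below are hypothetical.

```python
import math

def delayed_cost(observed, synthetic, delay):
    """Equation (5)-style cost augmented with an integer sample delay,
    to absorb along-track timing errors in the SGP4 prediction."""
    pairs = [(observed[i], synthetic[i + delay])
             for i in range(len(observed))
             if 0 <= i + delay < len(synthetic)]
    return sum(abs(o - s) for o, s in pairs) / len(pairs)

# Recover a known shift between two toy curves by scanning the delay.
synthetic = [math.sin(0.2 * k) for k in range(200)]
observed = synthetic[15:115]     # the "observed" curve starts 15 samples late
best_delay = min(range(-30, 31),
                 key=lambda d: delayed_cost(observed, synthetic, d))
```

In the full method, this delay is simply one more gene optimized by the genetic algorithm alongside the attitude parameters, rather than a separate grid search.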
Analyzing the result, the attitude parameters found show that the station seems to have the x-axis pointed along the velocity vector and the y-axis pointed to the nadir, with an angle between the solar panels and the Sun of about 24°. This configuration creates a drag area, with respect to the velocity vector, of about 48 m². The attitude parameters found by the proposed method are in line with the community consensus regarding the presumable attitude of the station in that period of time [41][42][43].

Conclusions
In this paper, an in-depth method to retrieve light-curves has been shown. Despite the noise of the sCMOS sensor used for the optical observations, the results prove that it is possible to obtain a light-curve by means of an optical system. To obtain these results, star-field resolution was performed in order to find the reference stars on which to calculate the object's magnitude. Moreover, an optimization process that minimized the standard deviation with respect to several photometric parameters was applied. The whole process has been applied to a bi-static optical system, making it possible to obtain simultaneous data. Furthermore, a system for light-curve inversion based on a stochastic optimization (genetic algorithm) was identified, implemented, and tested on the real case of Tiangong-1. The obtained results show that the method can be used for future re-entry objects.