Survey of the Current Activities in the Field of Modeling the Space Debris Environment at TU Braunschweig

The Institute of Space Systems at Technische Universität Braunschweig has long-term experience in the field of space debris modeling. This article reviews the current state of ongoing research in this area. Extensive activities are currently underway to update the European space debris model MASTER. In addition to updating the historical population, the future evolution of the space debris environment is also being investigated. The competencies developed within these activities are used to address current problems with regard to the possibility of an increasing number of catastrophic collisions. Related research areas include, for example, orbit determination and the simulation of sensor systems for the acquisition and cataloging of orbital objects. In particular, the ability to provide simulated measurement data for object populations in almost all size ranges is an important prerequisite for these investigations. Selected results on the distribution of space debris in Earth orbit are presented in terms of spatial density. Furthermore, specific fragmentation events are discussed.


Introduction
An important research field of the Institute of Space Systems is modeling of the space debris environment (cf. Figure 1).
For more than two decades, the Institute has been responsible for developing the European reference model for the description of the space debris environment. The model is called MASTER (Meteoroid and Space debris Terrestrial Environment Reference) and is distributed by ESA's Space Debris Office (https://sdup.esoc.esa.int). The space debris group of the Institute has many years of experience in the scientific description of the various sources of space debris, especially in the small particle size regime. This includes the comparison of the scientifically generated particle population with actually obtained measurement values. For the validation of the MASTER population, a tool is used that simulates the detection capability of different sensors such as radars and telescopes. This tool, called PROOF (Program for Radar and Optical Observation Forecasting), uses the MASTER population and simulates detection rates of observational measurements [1]. It allows a comparison of simulations with actual measurement campaigns and is therefore used to validate the MASTER model. The first version of PROOF was developed in cooperation with the Astronomical Institute of the University of Bern (AIUB) and the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR, formerly FGAN). The current version of PROOF is described in [2]. The ongoing research at the Institute covers a broad spectrum of space-debris-related topics, ranging from providing a sophisticated model of the space debris environment through the evaluation of upcoming mega-constellations to Space Surveillance and Tracking (SST) infrastructures, precise orbit determination and Active Debris Removal (ADR). This article is a summary publication that aims to present the current state of research at the Institute. Many of the current activities are based on the expertise in the field of space debris modeling, from which many spin-offs have been derived. The contribution to the full research field of space debris is significant when it comes to the MASTER model. The provided space debris population is used in engineering tools that provide risk assessments for operational spacecraft, e.g., debris and meteoroid flux levels or collision avoidance maneuver frequencies, but also give detailed insight into the space debris situation in terms of highly populated orbit regimes and the temporal evolution of the environment. In addition, future population trends are provided that are especially interesting for risk analyses of future satellite missions and the upcoming mega-constellations. The research also spans the development and evaluation of different ADR measures, which combines the long-time experience in orbital dynamics with the development of satellite hardware that will help stabilize the constantly growing space debris environment. Furthermore, the Institute provides support to any space observation segment in terms of design, operation and maintenance. The current state of research in these fields is outlined, and selected research results are presented and discussed.

Fragmentation Events
Fragmentation events refer to on-orbit explosions and collisions. They yield the highest contribution to the space debris environment in terms of numbers for objects larger than approximately 1 cm in diameter. The history of events shapes the spatial distribution of the historical population, which can be validated against observation data from radars and telescopes. The validated historical population of the MASTER model is provided up to a specific reference epoch. This epoch is usually close to the release date of the model. The last version of MASTER has the reference epoch of 1 May 2009 [2]. However, since that time, there have been more on-orbit fragmentations that contribute to the space debris environment up to this day. In order to maintain an accurate space debris model, a fragmentation list that contains all historically confirmed fragmentations needs to be maintained. Preliminary work in this field has been done in [3]. There have been several new events; on average, about five events per year can be expected today (cf. Section 2.1). Another important task is to review known events. This is required when new measurement data concerning a historical event becomes available. In particular, the Fengyun-1C event requires an update, since recent measurements indicate that significantly more debris has been released [4] than was expected in 2009 (cf. Section 2.2). Therefore, and above all, the object population must be significantly adjusted at about 800 km altitude [5]. The number of objects at this altitude is consequently much higher today. In the following, the current state of knowledge about this event is presented.

Event History
To establish an accurate space debris model, not only recent events have to be considered; the complete history of on-orbit fragmentations also has to be maintained on a regular basis. Currently, there are 261 considered fragmentations, and each of them is modeled as an individual event.
There are some additional events that were categorized as unknown anomalies; however, they are neglected in the fragmentation modeling process [6]. All considered events are categorized as explosions or (non-)catastrophic collisions, including detailed information on the event characteristics such as the involved mass or collision velocity. Out of these 261 events, 247 are categorized as explosions (≈95%) [6]. The remaining 14 events are categorized as confirmed (e.g., the Fengyun-1C anti-satellite test) and assumed collisions. However, looking at the total amount of debris tracked by the Space Surveillance Network (SSN), there are twice as many explosion fragments as collision fragments. Figure 2a shows the number of fragmentations over the last 60 years. The mean number of fragmentations per year amounts to 4.9 (SD = 2.8), with no sign of regression or fundamental change. However, there was a slight decrease in fragmentations for three years after the introduction of the Inter-Agency Space Debris Coordination Committee (IADC) Space Debris Mitigation Guidelines in 2010. An examination of the fragmented mass over the years is shown in Figure 2b. It shows that most of the objects that experienced a breakup had a mass of less than 500 kg. Looking at the other extreme of the histogram, there are two events with fragmentation masses of 26,000 kg and 30,000 kg. These events happened in the late 1960s during the Apollo program, but, due to their low event altitudes of below 300 km, almost all fragments re-entered the Earth's atmosphere and consequently do not contribute to the space debris environment in the long term.
Looking at the difference between the event fragmentation (breakup) epoch and the initial launch of the corresponding payload, the age of the object can be obtained. A histogram covering all fragmentations and showing the objects' ages is given in Figure 3. It is evident that most breakups occur during the first year of operation in space. The exact reasons for the breakups are very difficult to uncover, but there are a few likely possibilities such as propulsion- or electrical-related events. Over 80% of the objects experienced a breakup during the first 10 years of operation. However, there are also single events that happened over 30 years after launch. These are mostly old and non-passivated upper stages with some stored residual energy. The events with the largest generation of trackable debris to date are shown in Table 1. The Fengyun-1C event is still by far the most prominent event. Due to the relatively high event altitude, most of the objects are still in orbit and consequently could collide with other spacecraft. The second largest event was the deliberate destruction of USA-193. However, due to its low event altitude, no objects pose a long-term threat to other spacecraft [8]. These two events have a deliberate cause of fragmentation, whereas the Cosmos-2251 and the Breeze-M fragmentations do not. There is a trend in the number of generated objects: deliberately destroyed objects yield a higher number of fragments.

Fragmentation Characteristics of Individual Events
To explain the properties and modeling approaches of individual events, the fragmentation characteristics of four exemplary events are discussed in the following. These are the fragmentations of Fengyun-1C (2007), Cosmos-2251 (2009), DMSP-F13 (2015), and NOAA-16 (2015). A more detailed representative analysis is shown for the Fengyun-1C event. The TLE (Two-Line Elements) catalog is a free source for the amount of cataloged debris, with each entry related to its parent object [4]. However, only objects that could be tracked, correlated and re-tracked enter the catalog. Objects that were tracked and correlated but never re-tracked do not enter the catalog. When modeling individual fragmentation events, it is important to consider the total number of detected objects related to the same event. Otherwise, modeling these events could lead to a strong underestimation in terms of object numbers. The initial procedure for obtaining the total number of generated fragments relies on initial publications by the SSN as well as more refined data from the Orbital Debris Quarterly News (orbitaldebris.jsc.nasa.gov). One specific case of missing objects in the catalog is the explosion of the Breeze-M (2012-044C) in 2012. Although there were about 700 reported detections for this event [9], only 112 objects have entered the catalog to this day [4]. During the MASTER population generation, detailed research and data acquisition are performed to include the correct number of detected debris originating from each event.
One of the most important models is the standard NASA breakup model [10], which is used in a modified version also in MASTER-2009 [2]. Furthermore, the recommendations concerning the implementation given by Krisko [11] are considered. Therein, a power law describes the number-size distribution of a fragmentation cloud right after the event. The NASA breakup model provides two power laws for the cumulated number of fragments due to explosions and collisions:

N_exp(d > L_c) = 6 s_f L_c^(−1.6),
N_col(d > L_c) = 0.1 M_tot^(0.75) L_c^(−1.71),

with N(d > L_c) representing the total number of fragments with a diameter d larger than a characteristic length L_c (usually the mean diameter of the fragment). Additionally, s_f is an object-type-dependent scaling factor between zero and one, and M_tot is the mass involved in the collision. However, this approach must not be applied while ignoring the number of detected objects, because these shape the power law. To illustrate this, the number-size distribution of the NASA breakup model for the Fengyun-1C fragmentation is shown in Figure 4a. To match the number of detected objects with the number-size distribution, the curve has to be shifted to intercept the number of detected objects at the sensor's minimum trackable diameter. This calibration is performed for each event in the fragmentation database to ensure a consistent modeling approach based on the underlying detection data. However, the number of cataloged TLE fragments is not static but increases over the years, which forces a re-evaluation of clouds when new information on the debris number becomes available. The time history of the number of TLE fragments for four different events is shown in Figure 5. These include the Chinese anti-satellite test regarding Fengyun-1C [12], fragments from Cosmos-2251, which was involved in the Cosmos-Iridium collision in 2009 [13], and the explosions of DMSP-F13 [14] and NOAA-16 [15]. Therein, the shaded areas show the time from the last number update until today. In particular, the numbers of TLE fragments for the Fengyun-1C and Cosmos-2251 events have been significantly increasing over the last 6-7 years. After the Fengyun-1C event, the number of cataloged TLE objects was constantly increasing for almost one year. During the MASTER-2009 population update, the number had increased to around 2000 and has now stabilized at 3438 objects [4]. At each population update, the underlying clouds were generated based on the most recent available data. The evolution of the number-size distribution for the different numbers of TLE objects for the Fengyun-1C event is shown in Figure 4b. Over a time frame of 10 years, the total number of objects larger than the minimum trackable diameter, which is approximately 10 cm, increased by almost a factor of 3.5. This has a direct consequence on the object numbers in the 1 cm regime. Table 2 gives an overview of the total number change over the years.
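The calibration described above can be illustrated with a short sketch. This is a simplified stand-in, not the MASTER implementation: the parent mass and catalog numbers are example values, and the unmodified NASA collision power law is assumed.

```python
def collision_fragments(l_c, m_tot):
    """Cumulative number of collision fragments with characteristic
    length larger than l_c (meters), NASA standard breakup model."""
    return 0.1 * m_tot ** 0.75 * l_c ** -1.71

def calibrated_fragments(l_c, m_tot, n_detected, d_min=0.1):
    """Shift the power law so it intercepts the number of detected
    objects at the sensor's minimum trackable diameter d_min."""
    scale = n_detected / collision_fragments(d_min, m_tot)
    return scale * collision_fragments(l_c, m_tot)

# Example: a Fengyun-1C-like cloud with 3438 cataloged fragments
# above the ~10 cm tracking threshold, extrapolated down to 1 cm.
n_1cm = calibrated_fragments(0.01, 950.0, 3438)
```

Because the scaling step cancels the mass term, the calibrated count depends only on the detected number and on the slope of the power law; extrapolating from 10 cm down to 1 cm multiplies the count by 10^1.71 ≈ 51 in this sketch.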

Fengyun-1C debris number evolution
The increase factor for objects with d > 10 cm is also fully applied to the smaller diameter regime. This results in an increase in the number of objects larger than 1 cm from 60,000 to 204,000.
All fragmentation events undergo the same number change according to the updates on their cataloged fragments. The process is applied to all events but poses some challenges regarding recent or relatively new events. The numbers of TLE fragments for the DMSP-F13 and NOAA-16 events have increased since the initial tracking of the clouds (cf. Figure 5c,d). Since the events occurred in early and late 2015, respectively, and the fragmentation altitudes of both events were between 800 km and 900 km, it is expected that these events will lead to more detected fragments in the future, with a long-term contribution to the space debris environment. This modeling approach is validated by comparing measurement data from dedicated space debris observation campaigns to the MASTER model population. These so-called "beam-park" experiments usually have a minimum detection diameter as low as around 1 cm in Low Earth Orbit (LEO), which allows for a more refined calibration of the population. Comparing measurement data with the model data is performed with PROOF and is a crucial step in the validation phase [16][17][18].
One optional step in the model verification is to visualize the initial cloud evolution of each event, as shown in Figure 6. These visualizations are created with a tool called DOCTOR (Display of Objects Circulating in Terrestrial Orbits) and help to identify major modeling issues, to check data integrity, and to support illustrative publications [19][20][21]. One major benefit of modeling the space debris environment source by source is that every object can be related to a specific event and source and visualized using a source-specific color code. Especially when interpreting measurement data, the MASTER model can be used to check the integrity of dedicated space debris observation campaigns.

Sodium Potassium Droplets
Space debris is made up of very different sources. In the centimeter range, the dominant source is fragments, followed by slag particles from solid rocket motors. The third largest contribution is sodium-potassium droplets (NaK droplets) that have been released from orbital nuclear reactors and are highlighted here as an example. These droplets are found today mainly in orbits near 900 km altitude and are treated as a historical source due to their early generation but relatively late discovery. The NaK droplets were a major new contribution to the space debris environment in the 1980s. It is assumed that about 16 nuclear reactors leaked coolant into space during an end-of-life (EOL) procedure, namely the reactor core ejection, mostly in orbits near 950 km altitude. During the opening of the reactor casing, the primary cooling loop was inevitably opened as well. Consequently, the coolant contained therein, the eutectic NaK alloy, could escape into space in this way. For 16 satellites, such a reactor core ejection is suspected; however, only for 13 satellites has it been confirmed by observations. The modeling of this source has already been described in detail and has been part of the MASTER model for years [2,22]. Today, it is estimated that approximately 5.3 kg of NaK has been released into space per ejection event [23].
However, there is a small supplement that has been added to this source. In addition to the 16 reactor core ejections, there have recently been two leakages that have now been added as single events. These two leakages affect two other reactors of a different type that were launched in the late 1980s for testing purposes [24]. The cooling systems were initially tight but failed in both cases a few years ago. As a result, smaller amounts of NaK droplets have leaked, which make only a very minor contribution to the space debris environment. An estimated 250 g has been released per event [23]; however, since these droplets were observed during radar measurements [25], they are considered in the model for the sake of completeness. In total, there are now about 20,000 droplets in space, mostly in orbits near 900 km altitude [26]. Today, the NaK droplets dominate the centimeter population; however, droplets that are smaller than half a centimeter have virtually completely re-entered the atmosphere. The contribution of droplets to the entire centimeter population at 800 km altitude today is about 10% [23]. Even if it is only a historical contribution to the space debris environment, the droplets still make the second largest contribution to space debris at this altitude.

Spatial Density
With the software tools developed at the Institute of Space Systems (IRAS), a scientific description of the distribution of the objects in orbit is possible. If the number of objects is averaged per altitude shell, a very simple representation is obtained with which the distribution of debris over different altitudes can roughly be described. This is the spatial density. It indicates how many objects per unit volume can be found at a certain altitude. In the following, the spatial density of the last release of the MASTER model, in its version of the year 2009, is compared with current modeling results. It should be noted that the current data has not yet been finally validated; updated intermediate results are presented. These are advanced analyses of results that have already been published before. Figure 7 shows the spatial densities in low Earth orbits for three different size classes.
The illustrations refer to 1 May 2009. The analysis is limited to orbits between 200 km and 2000 km altitude. The reason is that these orbits today show the highest spatial density, which is related to the high level of space activity in low Earth orbits. As size classes, objects with d > 1 mm (Figure 7a), d > 1 cm (Figure 7b) and d > 10 cm (Figure 7c) were selected. In all three size classes, it can be observed that the spatial density is highest at about 800 km altitude. In lower orbits, the air drag is still strong enough that the lifetime of debris objects is relatively short. According to the new modeling results, the number of debris objects at about 800 km is significantly higher than originally assumed. This is mainly due to the re-evaluation of the Fengyun-1C event.
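As an illustration of the spatial density concept (a minimal sketch, not the actual MASTER analysis code), the object count per altitude shell is divided by the shell volume; the object count used below is a hypothetical example value.

```python
import math

R_EARTH_KM = 6378.0  # mean equatorial Earth radius

def shell_volume_km3(h_low_km, h_high_km):
    """Volume of the spherical shell between two altitudes."""
    r1 = R_EARTH_KM + h_low_km
    r2 = R_EARTH_KM + h_high_km
    return 4.0 / 3.0 * math.pi * (r2 ** 3 - r1 ** 3)

def spatial_density(n_objects, h_low_km, h_high_km):
    """Average spatial density (objects per km^3) in one shell."""
    return n_objects / shell_volume_km3(h_low_km, h_high_km)

# Hypothetical example: 2000 objects counted in a 20 km shell at 800 km
rho = spatial_density(2000, 790.0, 810.0)
```

Binning the whole population this way over 200-2000 km yields the density-versus-altitude profiles of Figure 7.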
In the context of this study, however, not only individual events were examined; all event lists have been updated as well. This causes smaller changes in the results. In Figure 7a, the distribution of debris in the millimeter range at about 1500 km altitude changes slightly. This is related to the update of the list of solid rocket motor firings. In this case, a more precise assignment of the positions where the firings were carried out leads, as far as we know today, to somewhat lower expected numbers of slag particles in the millimeter range at 1500 km altitude.
Basically, however, it can be said that the results presented here are qualitatively consistent with those of the past and that they quantitatively meet the expectations that existed before the investigation began. In the case of the Fengyun-1C event, simply the number of observed objects has increased significantly. It is therefore necessary to consider these new findings in the model. The final results may differ slightly from those presented here. An important intermediate result, however, can already be seen clearly: the collision probability, especially in sun-synchronous orbits, is higher than originally assumed.

Long-Term Evolution
While modeling the historical space debris distribution, existing measurement data is used to validate the scientifically generated population. As a result, important population reference points are created that ensure model reliability. For the future population, referring to the extrapolated population that might stabilize in the next decades, this is not possible. Hence, a meaningful estimate must be made about the expected evolution of the space debris environment. The scientific challenge here is the development of algorithms for the long-term propagation of orbits. Both analytic and numerical methods are available at the Institute. A very important field of research is the investigation of algorithms for calculating the probability of a catastrophic collision. A catastrophic collision is defined as an impact event in which the target, e.g., an upper stage or active payload, is hit so severely that it is completely fragmented. The debris released in this process can cause further catastrophic collisions in the long term; the Institute has done much research and made many contributions in this field during the last several decades [27,28]. A crucial step in modeling future population trends is the prediction of future traffic in space and the assessment of post-mission disposal rates. To be able to carry on safely with space exploration, it is necessary to know the dynamic behavior of the space debris environment. Long-term simulations provide the opportunity to gain insight into the temporal evolution of the future space debris environment. Usually, long-term simulations start with a so-called business-as-usual (BAU) scenario that forecasts the development of parameters such as launch rates, post-mission disposal (PMD) rates, the number of explosions, and solid rocket motor firings. The history of these parameters is analyzed and extrapolated to establish future population models that are based on current and past space flight activities. In most cases, it is reasonable to extrapolate the parameters for different categories (such as countries and orbits) separately in order to be able to consider effects that influence only single categories. A famous example of such an effect is the dissolution of the Soviet Union, but the increasing number of objects launched by private companies in the past few years can also be included in this way.
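The per-category extrapolation can be sketched as follows. This is a hypothetical linear-trend example with made-up launch counts, not the actual extrapolation scheme used in the study:

```python
def extrapolate_rate(years, counts, horizon):
    """Fit a least-squares line to a historical annual rate for one
    category and extrapolate it 'horizon' years beyond the last year.
    Negative extrapolated rates are clipped to zero."""
    n = len(years)
    mean_y = sum(years) / n
    mean_c = sum(counts) / n
    slope = (sum((y - mean_y) * (c - mean_c) for y, c in zip(years, counts))
             / sum((y - mean_y) ** 2 for y in years))
    intercept = mean_c - slope * mean_y
    last = years[-1]
    return [max(0.0, intercept + slope * (last + k))
            for k in range(1, horizon + 1)]

# Hypothetical launch counts per year for one category
future = extrapolate_rate([2015, 2016, 2017, 2018], [10, 12, 14, 16], 2)
```

Fitting each category separately allows a step change in one category (e.g., a new commercial operator) to be modeled without distorting the trends of the others.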

Simulation Tools
After designing the BAU scenario, long-term simulations are conducted with the calculated parameters and, in most cases, also with variations of them. At the Institute, two tools for long-term simulations are currently under development: ESA's DELTA4 [29] and the in-house tool LUCA2 [30]. The general approach for long-term simulations is shown in Figure 8. The starting point of all simulations is the creation of an initial background population, usually including objects larger than d = 10 cm. As the simulation proceeds, the different modules seen in Figure 8 are executed at every time step until the desired time frame (e.g., 200 years) is reached. The first step is launching new objects, such as payloads and upper stages, based on the predefined launch rates. In some special cases, it is also possible to launch mega-constellations (cf. Section 5.2). Objects of mega-constellations have the ability to execute maneuvers for spiraling up to their desired orbits with electrical propulsion engines. The next steps are the generation and calculation of explosion as well as collision events. For both, there are several approaches to calculate the number of events. In the case of explosions, the first option uses a fixed number of explosion events per year, and the second is described by a time-dependent explosion rate. For the calculation of collision events, the first approach (called "CUBE") discretizes the space around Earth into small cubes, usually with an edge length of ten kilometers. With a time step of five days, a collision probability is calculated for each pairing of objects that are in the same cube. The second approach, called Orbit Trace, compares all orbits to find crossings within a defined maximum collision distance. Additional filtering within the algorithm avoids collisions between synchronized objects, e.g., constellation satellites. The third algorithm to calculate the collision rate uses the object flux. Taking into account the time-dependent object flux on a target object, the algorithm extracts the number of collision events using a Poisson distribution. In addition to collision events, active objects have the possibility to perform collision avoidance maneuvers. For the propagation of the population, a variety of different propagators is available. To keep the simulation time acceptable, semi-analytical propagators are used. Nevertheless, numerical propagators like NEPTUNE [31] can also be included. At the end of each simulation time step, different mitigation measures are performed, such as PMD and Active Debris Removal. For PMD, different kinds of maneuvers are possible, for example, a transfer to an eccentric orbit with less than a 25-year lifetime or to a graveyard orbit for objects in Geostationary Earth Orbit (GEO), which conforms with the IADC Mitigation Guidelines. In the past, simulations only took chemical propulsion systems into account when modeling PMD maneuvers. To investigate the impact of different systems, it is also possible to include electrical propulsion systems as well as the use of solar/drag sails. Because of randomized parameters within the algorithms, especially for collisions, the simulations are conducted as Monte Carlo runs to obtain the statistical variation of these parameters.
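The flux-based collision step can be illustrated with a minimal sketch (assumed parameter values; not the DELTA4 or LUCA2 implementation): given a flux on a target of known cross-section, the number of collision events in a time step is drawn from a Poisson distribution.

```python
import math
import random

def expected_collisions(flux_per_m2_yr, cross_section_m2, dt_years):
    """Mean number of impacts on the target during one time step:
    object flux times target cross-section times step length."""
    return flux_per_m2_yr * cross_section_m2 * dt_years

def sample_collision_events(mean, rng=random):
    """Draw an event count from a Poisson distribution via Knuth's
    sequential-search method (adequate for the small means here)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Hypothetical step: flux of 1e-6 impacts/(m^2*yr) on a 10 m^2 target
mean = expected_collisions(1e-6, 10.0, 5.0)
events = sample_collision_events(mean)
```

Repeating such draws across many Monte Carlo runs yields the statistical spread of the collision count that the long-term simulations report.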

Kessler Syndrome and Mega-Constellations
One of the most important problems currently being investigated is the possible onset of the collisional cascading effect, the so-called "Kessler syndrome" [32]. This effect has to be taken into account in long-term simulations, since it cannot be ruled out that it will become the dominant effect in the generation of space debris in the distant future. Although catastrophic collisions in space are still relatively rare today, a gradual transition is possible from the linear increase in the space debris population through newly launched objects to an exponential increase through the onset of the Kessler syndrome. Several studies are currently investigating this instability, especially of the LEO population [33,34]. Due to the high spatial debris density at 800 km altitude, it is expected that the Kessler syndrome would first occur at these altitudes. Deviating from this expectation, however, the mega-constellations must also be examined separately [35,36]. If such constellations are established, the problem of space debris may also arise in other orbits. Mega-constellations are characterized by an extraordinarily high number of operational satellites. Although these constellations should preferably be installed at altitudes at which there is a low debris density today, it is to be expected that this can lead to inter-constellation collisions if control over the satellites is lost. In particular, the question of the disposal of these satellites from their orbits at the end of their operational lifetime is of crucial importance for the stable operation of such a constellation. These questions are currently being studied intensively. The Institute has already been involved in associated work [37]. The most important statement is that constellations can perhaps be operated stably if a reliable post-mission disposal of the satellites can be guaranteed. However, the requirements for such a maneuver are extremely high, and the technical reliability must be guaranteed.

Space Situational Awareness
Recently, the existing skills are being extended for the field of Space Situational Awareness (SSA).The focus of the research in the field of Space Surveillance and Tracking (SST), i.e., the awareness of current Resident Space Object (RSO) environment around Earth, which includes all space debris objects, is to optimize SST systems to a point where awareness is as "live" and accurate as possible.This is leading to the necessity of building capabilities to continuously survey and track RSOs.This is done by an orchestration of sensors, sensor-near data processing and data centers with orbit determination, cataloging and sensor tasking capabilities.The Institute is developing a simulator for such an SST system being built from specialized software tools (cf. Figure 9) [38].The MWG tool produces artificial radar sensor measurements in the form of sets comprised of, among others, azimuth, elevation and range values.These sets are called tracklets and are coordinated by the PROCOR tool for further processing.PROCOR assigns the tracklets to instances of another tool called SMART to derive orbit data from them.The orbit data is saved in one of multiple maintained object data catalogs.The catalog data is used for evaluation purposes, such as visualization by a tool named DOCTOR and statistical analyses by a tool named CAT.In order to keep the stored orbit data fresh, the CAMP tool predicts where and when objects can be seen by sensors ("Passes").These passes can be used by sensors or artificial measurement generators like MWG to rediscover the objects and produce new measurements.In particular, the existence of a validated population including the non-trackable objects plays an important role.It is about knowing in which orbits a high number of objects can be expected.This knowledge is a necessary prerequisite for being able to optimally design observation and tracking systems.In addition, it is possible to provide the basis for the effectiveness with which orbit 
determination of the observed objects is possible.An important spin-off research field based on the modeling skills is the orbit determination.With knowledge of the space debris environment, simulated detections can be provided to test algorithms that can be used to simulate the orbit determination of a large number of objects.Based on these competences, the Institute can simulate detections for selected sensors.It must now be decided whether the quality of the detection is sufficient to carry out a subsequent determination of the orbit.Above all, the duration of the observation and the possible repetition of the detection of the same object play an important role.It also simulates whether enough information has been collected to create an initial orbit.In the field of orbit determination, extensive investigations are currently underway.Several algorithms for initial orbit determination and orbit improvement may be utilized.The SST system is augmented by a sophisticated radar measurement generator to produce artificial measurements.The state of the overall system is that it implements the whole data flow from sensors and pre-existing RSO catalogs back to the tasking of sensors.The system is used to evaluate the performance of a given SST system configuration.Another area of research focuses on the oncoming requirements for SST systems connected to networks of sensor networks, and a technology named Data Stream Management Systems (DSMS) is researched in the SST context.The concept of DSMS arose from conventional database management systems being able to respond to technological advancements.An ever-increasing amount of relatively small devices is capable of producing sensor data and distributing it over networks.The conventional approach to save all raw data and to process it in batches regularly can often not keep up with the demand to get a near-real-time view on what is happening.DSMSs continuously process incoming data streams and produce output data streams to 
solve this problem (cf. Figure 10). The processing is data-centric, meaning that processing is triggered one element at a time, as soon as a data element arrives in the system. The execution of one of the main SST functionalities, initial orbit determination, has successfully been evaluated using the DSMS approach, and the execution performance has been benchmarked [39]. It has been found that initial orbit determination on raw measurement data output by a radar sensor poses no problem for a DSMS, even on consumer hardware. This leads to the idea that DSMSs could enhance conventional SST systems. Such so-called SST System Architecture Enhancements have been conceptualized and designed for a DSMS [40]. One specific SST System Architecture Enhancement, in which a DSMS aggregates sensor measurements on-the-fly, has successfully been implemented and tested. In summary, the results show that DSMSs can help conventional SST systems to cope with new requirements such as (near-)real-time processing.
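The on-the-fly aggregation of sensor measurements described above can be illustrated with a small stream-processing sketch (a hypothetical, simplified example: the `Measurement` fields and the 10 s tracklet gap are assumptions for illustration, not the actual DSMS implementation):

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class Measurement:
    object_id: int
    epoch: float      # seconds since an arbitrary reference
    range_km: float

def aggregate_tracklets(stream: Iterable[Measurement],
                        gap_s: float = 10.0) -> Iterator[List[Measurement]]:
    """Group consecutive measurements of the same object into tracklets,
    emitting each tracklet as soon as a time gap or a new object is seen
    (data-centric: one element is processed at a time as it arrives)."""
    current: List[Measurement] = []
    for m in stream:
        if current and (m.object_id != current[-1].object_id
                        or m.epoch - current[-1].epoch > gap_s):
            yield current
            current = []
        current.append(m)
    if current:
        yield current

# Example stream: two objects passing through the sensor field of view
stream = [Measurement(1, 0.0, 900.0), Measurement(1, 1.0, 899.5),
          Measurement(2, 30.0, 1200.0), Measurement(2, 31.0, 1199.0)]
tracklets = list(aggregate_tracklets(stream))
print(len(tracklets))  # 2 tracklets
```

A generator pipeline like this processes each measurement exactly once and emits results incrementally, which is the property that distinguishes stream processing from batch processing over a stored database.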

Orbit Determination
Two Line Elements (TLEs) provided by the Joint Space Operations Center (JSpOC) have a low precision due to the analytical SGP4 propagation theory; the error can amount to several hundred meters [41,42], whereas precise orbit data is not publicly accessible. Nevertheless, the need for independent data, obtained by telescope and radar sensors, arises for many applications, such as the determination of collision probabilities. These sensors, however, cannot track every object at every orbit revolution, which is why the objects have to be propagated into the future to obtain their orbital data. This propagation increases the uncertainty. Thus, the initial uncertainty has to be as low as possible, which is the goal of precise orbit determination (POD). In POD, new measurements of the position and velocity of objects have to be obtained, for example, by telescopes, radars or GPS. These measurements are used to correct the predicted orbit using, e.g., least-squares methods or Kalman filters [43]. The latter refine the state vector without processing previous detections and are thus applicable to real-time applications.
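The batch least-squares correction mentioned above can be sketched with a toy linear example (illustrative only; real POD estimates a six-dimensional state against full orbital dynamics and iterates the correction):

```python
import numpy as np

# Toy batch least-squares fit: estimate a 2-D state (position and velocity
# along one axis) from noisy position measurements at known times.
rng = np.random.default_rng(0)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
true_state = np.array([5.0, 1.5])              # position, velocity
z = true_state[0] + true_state[1] * t + 0.1 * rng.standard_normal(t.size)

# Design matrix for the linear measurement model z = pos + vel * t
A = np.column_stack([np.ones_like(t), t])
est, *_ = np.linalg.lstsq(A, z, rcond=None)
print(est.round(1))
```

All measurements are processed in one batch; a Kalman filter, in contrast, incorporates each detection as it arrives, which is why it suits real-time processing.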
At the Institute, different Kalman filters are in use: the Extended Kalman Filter (EKF) [44], the Unscented Kalman Filter (UKF) [38,45], and the Ensemble Kalman Filter (EnKF). The first linear Kalman filter (KF) was proposed in [46]. The basic idea of Kalman filters is to refine the detections using both the detections themselves and a mathematical model that describes the orbit of the objects [43]. Kalman filters consist of two basic parts: the time update and the measurement update [47]. Within the time update, the state vector and its uncertainty (usually described by covariance matrices) are propagated to the epoch of the next available detection. The uncertainty of the model, the process noise, has to be considered. Within the measurement update, the propagated state vector is updated using the respective detection. Firstly, the Kalman gain is calculated. This is a matrix (for multivariate distributions) that weights how much trust is placed in the propagated state vector versus the actual detection. Then, the propagated state vector and covariance matrix are modified by the Kalman gain to obtain the updated values.
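The two-step structure described above can be sketched with a minimal linear KF (a toy one-dimensional constant-velocity model for illustration, not the Institute's implementation):

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Time update: propagate state and covariance; Q is the process noise."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Measurement update: blend prediction and detection via the Kalman gain."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain: trust weighting
    x = x + K @ (z - H @ x)             # correct state with the residual
    P = (np.eye(len(x)) - K @ H) @ P    # reduce the uncertainty
    return x, P

# State [position, velocity]; only the position is measured
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])
x, P = np.array([0.0, 1.0]), np.eye(2)
for z in [1.1, 1.9, 3.2]:
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([z]), H, R)
print(round(x[0], 2))
```

Note how each detection is processed once and then discarded; only the state vector and covariance matrix carry the accumulated information forward.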
The KF is applicable to linear problems and is thus not suitable for orbit determination. The EKF extends the time update by a linearization using Jacobi matrices to propagate the uncertainty [43]. However, Jacobi matrices are complicated to implement and can lead to instability [48]. The UKF, in contrast, does not use Jacobi matrices, but propagates the uncertainty using a set of sampling points, called sigma points [48]. These sigma points map the uncertainty of the state vector, or, more precisely, the uncertainty of the position and velocity. Figuratively, the sigma points span an ellipsoid around the mean, as shown in Figure 11. For simplicity, only six sigma points are shown. They are treated as ordinary state vectors that are propagated within the filtering process. The propagation yields a new distribution of the sigma points that describes the uncertainty of the propagated state vector. To generate these sigma points, a Gaussian distribution of the uncertainty is assumed, with the state vector as its mean and the covariance matrix containing the standard deviations in the respective coordinate directions. Applying the Unscented Transformation (UT), 13 symmetrically distributed sigma points are sampled [48,49]. In the literature, different statements are given as to whether the state vector itself can be regarded as a sigma point [48,50]. Thus, 12 sigma points are considered if the state vector is not accounted for. After propagation, the sigma points are merged to obtain the propagated covariance matrix, which is calculated from the deviations of the propagated sigma points with respect to their mean value. Within the measurement update, state vector and covariance matrix are updated using the Kalman gain [43,48].
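The sigma-point sampling can be sketched as follows (a minimal illustration assuming the common Cholesky-based unscented transform; the scaling parameter `kappa` and the weighting conventions vary between UT formulations):

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Unscented transform sampling: for an n-dimensional state this
    returns 2n+1 sigma points (13 for a 6-D position/velocity state),
    placed symmetrically around the mean along the columns of a
    matrix square root of the scaled covariance."""
    n = len(x)
    L = np.linalg.cholesky((n + kappa) * P)  # matrix square root of scaled P
    pts = [x]                                # the mean is the central point
    for i in range(n):
        pts.append(x + L[:, i])
        pts.append(x - L[:, i])
    return np.array(pts)

# 6-D state: position (km) and velocity (km/s)
x = np.array([7000.0, 0.0, 0.0, 0.0, 7.5, 0.0])
P = np.diag([1.0, 1.0, 1.0, 1e-4, 1e-4, 1e-4])
pts = sigma_points(x, P)
print(pts.shape)  # 13 sigma points in 6 dimensions
# The sample mean of the symmetric sigma points recovers the state vector:
print(np.allclose(pts.mean(axis=0), x))
```

Because the points come in symmetric pairs around the mean, dropping the central point yields the 12-point variant mentioned above.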
A further extension of the KF is the EnKF, which was originally proposed for data filtering within ocean models [51] and has since been applied to other fields such as the geological sciences [52] and weather forecasting [53]. Within the research at the Institute, the EnKF is tested for orbit determination. Similar to the UKF, the EnKF uses sampling points, called ensemble members, to approximate the covariance matrix. In contrast to the sigma points, however, the ensemble members are randomly distributed. The ensemble members are propagated in the same way to obtain the propagated ensemble, from which the covariance matrix is calculated. Another difference is that the uncertainty of the detection (the measurement noise) is also approximated by sampling points [51,54], which are used within the measurement update. Finally, the ensemble is merged to obtain the desired state vector and covariance matrix.
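A minimal sketch of the EnKF measurement update (an illustrative toy example, not the Institute's implementation; the perturbed-observation variant of [51] is assumed, in which the measurement noise is represented by random samples):

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, z, H, R):
    """EnKF measurement update: the covariance is estimated from the
    randomly sampled ensemble, and the measurement noise is likewise
    represented by perturbed observations."""
    n_ens = ensemble.shape[0]
    x_mean = ensemble.mean(axis=0)
    A = ensemble - x_mean                   # ensemble anomalies
    P = A.T @ A / (n_ens - 1)               # sample covariance matrix
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain from the samples
    # Perturb the detection for each member (stochastic EnKF)
    z_pert = z + rng.multivariate_normal(np.zeros(len(z)), R, size=n_ens)
    return ensemble + (z_pert - ensemble @ H.T) @ K.T

# Toy 2-D state, 200 members; only the first component is measured
ensemble = rng.multivariate_normal([0.0, 1.0], np.eye(2), size=200)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
updated = enkf_update(ensemble, np.array([2.0]), H, R)
print(updated.mean(axis=0).round(1))
```

The ensemble mean and sample covariance after the update play the roles of the state vector and covariance matrix of the other filters; no Jacobi matrices or deterministic sigma points are needed.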

Active Debris Removal
The orbital lifetime of many disused spacecraft, especially in the already densely populated orbits at around 800 km altitude, is so high that many of these objects can serve as potential collision partners for a long time. A catastrophic collision event can lead to a drastic increase of the population in these orbits, which is why these objects pose a high risk. Active Debris Removal (ADR) is, in addition to consistent mitigation, the only possibility to stabilize critical regions of the space debris environment in the long term. Therefore, the Institute also addresses the problem of actively removing such objects. The major challenge of a debris removal mission is the non-cooperativity of the target, i.e., the absence of Global Navigation Satellite System (GNSS) data, attitude or orbit control, reflectors for vision-based navigation, or a specific interface for docking. For this reason, two major areas are investigated: proximity operations, and docking and capture technologies. The overall objective is to identify innovative key technologies for ADR and to prove their relevance by achieving high Technology Readiness Levels (TRL).

Proximity Operations
Guidance, navigation and control problems typically have a nonlinear design space, several constraints, and competing objectives. Therefore, a simulation-based design approach, as depicted in Figure 12, is being developed to achieve efficient and high-TRL design processes. Preliminary design stages are performed in lower-fidelity environments to ensure fast concept verification and low development costs. For this purpose, a Mathematica-based symbolic simulation environment has been created. The De-Orbit, Formation Flight and Re-Entry Analytical Toolbox (DIFFRACT) provides a modular set of analytical and semi-analytical tools employed to model motion dynamics, perturbation forces and guidance schemes. The relative motion is modeled using state transition matrices based on a quasi-nonlinear parametrization of relative orbital elements (ROE) [55].
For more advanced design stages, the closed-loop performance of several subsystems needs to be evaluated. The SpaceCraft ATTitude and Orbit numERical simulator (SCATTER) is a Matlab/Simulink-based (2017b, MathWorks, Natick, MA, USA) high-fidelity simulation environment developed for this purpose. Compared to the symbolic simulation environment, SCATTER additionally models the Clohessy-Wiltshire (CW) equations, nonlinear relative motion and attitude dynamics (six degrees of freedom), detailed perturbative forces and torques, and major subsystems of the spacecraft such as the propulsion and attitude determination and control subsystems. The environment is composed of several Matlab scripts and a Simulink library of standard (modified) and custom blocks. This enables the interconnection of different models to assemble the components of the system to be designed. Both environments have been verified, demonstrating their successful implementation and the achievement of the design requirements. The verification process included the following steps: sanity checks comparing simulation results to the analytical solution for simplified models (e.g., the CW equations), comparison of SCATTER and DIFFRACT against each other, and comparison to already validated and verified software such as the NEPTUNE propagator. The results show good concordance. Figure 13 depicts the relative trajectory during the closing phase of a rendezvous mission simulated via DIFFRACT (Figure 13a) and SCATTER (Figure 13b). The symbolic simulation uses an unperturbed quasi-nonlinear relative motion model and an idealized impulsive maneuver scheme, while the numerical simulation uses a nonlinear motion model, a reaction wheel model, and a thruster model, and thus includes a Linear Quadratic Regulator (LQR) position controller and a PD attitude controller. The figure shows that the computed radial impulsive guidance scheme works well under ideal conditions and even under the presence of actuator uncertainties
due to the good performance of the closed-loop attitude and position control. The maneuver execution errors result in additional hops during the closing. To refine the design for on-orbit qualification and achieve advanced TRLs, a Hardware-in-the-Loop (HIL) simulation environment has been constructed. The Experimental Lab for proxImity operations and Space Situational Awareness (ELISSA) uses an air-bearing table to emulate orbital relative motion and the contact dynamics during the docking phase. The air-bearing table consists of modular aeromechanical platforms with embedded nozzles, supported by an aluminum structure. The airflow is supplied to the platforms by two side-channel blowers, generating an air cushion that supports propeller-driven satellite mockups. The mockups' thrust control is achieved via Pulse Width Modulation (PWM). An optical capture system has been installed and tested, achieving position uncertainties below approximately 1 mm. ELISSA provides the possibility of replacing inaccurate or complex models (such as docking mechanisms) with real-world counterparts and of testing the system under real-time constraints. Mechanisms that are currently in preparation and will soon be tested on the table employ gecko-inspired dry adhesives.
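The CW equations mentioned above admit a closed-form solution for the in-plane relative motion; a minimal sketch follows (the textbook Hill/CW solution, not taken from SCATTER or DIFFRACT):

```python
import math

def cw_propagate(x0, y0, vx0, vy0, t, n):
    """Closed-form Clohessy-Wiltshire solution for in-plane relative
    motion (x radial, y along-track, n = mean motion of the chief)."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + s / n * vx0 + 2 / n * (1 - c) * vy0
    y = (6 * (s - n * t)) * x0 + y0 - 2 / n * (1 - c) * vx0 \
        + (4 * s - 3 * n * t) / n * vy0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = -6 * n * (1 - c) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    return x, y, vx, vy

# Chief in a ~90 min LEO; deputy offset 100 m along-track, zero relative velocity
n = 2 * math.pi / 5400.0          # mean motion, rad/s
x, y, vx, vy = cw_propagate(0.0, 100.0, 0.0, 0.0, 5400.0, n)
print(x, y)  # a pure along-track offset is an equilibrium of the CW model
```

Such closed-form state transitions are what make the CW model attractive for the fast, low-fidelity design stages, while the nonlinear models in SCATTER capture the effects the CW linearization neglects.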

ADR Mechanisms
The Institute investigates several technologies for ADR, namely gecko-inspired adhesives [56], robotic arms and grippers [57], and tether systems [58,59]. In this section, an overview of the tether system research activities is given. The use of a net to capture the target is a promising solution because the net does not need any specific interface on the target. However, the control and stabilization of the target after capture prove to be much more challenging. A fast stabilization of the captured target is therefore required to avoid entanglement or collision of the flexibly connected objects.
Since 2002, Airbus DS (Bremen, Germany) has been developing a Net Capture System (NCS). The In-Flight Demonstration (IFD) of the NCS is planned for 2018 as part of the RemoveDebris mission [60]. The IFD marks an essential step in the development chain of the whole system towards operational usage.
However, in addition to the validation of the mechanism, issues regarding the control of the system need to be resolved. Therefore, the Institute is developing a software tool, the Tether Dynamics Toolbox (TDT), in cooperation with Airbus. This Matlab/Simulink toolbox provides the capability to analyze an ADR mission (with a flexible link) from the initial capture (including the stabilization phase) until the de-orbiting of the ADR system [61].
To validate the simulation environment, the Synchronized Position Hold, Engage, Reorient, Experimental Satellites (SPHERES) Tether Demo experimental study was carried out in 2017. More than 16 different test scenarios were pre-defined to gain experimental data for the validation of the tether dynamics simulation. The results are yet to be published.

Conclusions
The Institute of Space Systems of the Technische Universität Braunschweig has long-term experience in the field of space debris. The research fields cover a broad range of disciplines, from the scientific and fundamental modeling of the space debris environment to direct applications in the SSA/SST domain and Active Debris Removal concepts. Whereas the main goal of the Institute is to support sustainable space flight in terms of risk analysis and collision countermeasures, it also supports other organizations in space debris related research activities such as planning space debris observations, experimental campaigns or re-entry predictions.
As a non-profit organization, the Institute of Space Systems of the TU Braunschweig can support any company in the space debris sector; no information will be disclosed without an agreement.

Figure 1. Visualization of the space debris environment for objects with d > 1 cm in 2018.

Figure 2. Evaluation of the history of breakup events. (a) Number of on-orbit breakups per year; (b) histogram data on the number of objects per fragmentation mass.

Figure 3. Age of objects at breakup epoch.

Figure 4. Number-size distributions for the Fengyun-1C cloud. (a) NASA breakup model approach for the Fengyun-1C event as of 2018; (b) simulated Fengyun-1C event for a different number of TLE fragments.

Figure 8. Exemplary program flow for long-term simulations.

Figure 9. Top-level data flow of the SST System Simulator.

Figure 10. Comparison of conventional Database Management Systems (top) and Data Stream Management Systems (bottom). The different filled shapes represent different data; the different empty shapes represent queries. Illustration by German Wikipedia user JakobVoss, derivative work by German Wikipedia user Clavipath and Sven Müller, license: CC-BY-SA 3.0.

Figure 11. Sampled sigma points around the state vector that map the uncertainty (covariance matrix).

Figure 13. In-plane relative trajectory during closing, computed via DIFFRACT using an unperturbed ROE-based relative motion model (a) and via SCATTER using unperturbed nonlinear Cartesian-based relative motion, reaction wheel and thruster models (b).

Table 2. Number evolution of the Fengyun-1C event over the years at event epoch.

• Proximity operations: development and verification of guidance, navigation and control concepts to rendezvous with a non-cooperative target.
• Docking and capture technologies: development and verification of suitable mechanisms to dock with or capture a non-cooperative target.