Article

An Effective Procedure to Build Space Object Datasets Based on STK

1 Key Laboratory of Infrared System Detection and Imaging Technologies, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
2 School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
* Author to whom correspondence should be addressed.
Aerospace 2023, 10(3), 258; https://doi.org/10.3390/aerospace10030258
Submission received: 4 January 2023 / Revised: 2 March 2023 / Accepted: 6 March 2023 / Published: 8 March 2023

Abstract

With the development of space technology, deep learning methods, with their excellent generalization ability, are increasingly applied in various space activities. However, space object data are difficult to obtain, which greatly limits the application of these methods in space. Moreover, the images in the existing public spacecraft dataset are mostly rendered, so they lack physical meaning, and the amount of data is limited. In this paper, we propose an effective procedure to build a space object dataset based on STK, which helps to overcome these limitations of deep learning methods in space activities. First, based on STK, we conduct orbit simulation for 24 space targets and establish a simulation dataset; second, we shoot and label 600 images of 6 typical targets to build a real-shot validation dataset. Finally, the STK-based space object dataset is verified to be effective through six semantic segmentation networks and can be used to train semantic segmentation of real spacecraft. Extensive experiments show that the accuracy of transferring training results from the simulation dataset to the real-shot dataset is slightly reduced, but the mPA remains above 85%. In particular, after adding the orbital physical simulation data, the accuracy of the six semantic segmentation methods is generally improved. Therefore, STK-based physical simulation of orbit is an effective method for space object dataset construction.

1. Introduction

With the development of aerospace technology, competition over limited orbital resources is becoming increasingly fierce. Both the quantity and the difficulty of on-orbit servicing (OOS) tasks, such as on-orbit maintenance of failed spacecraft [1,2,3], on-orbit refueling of spacecraft [4,5], orbital transfer [4], and debris removal from important orbits [6,7], are increasing. In view of risk and cost, on-orbit servicing is gradually evolving from manual implementation to robotic automation [8]. High-precision detection, identification, and perception of space objects are the basis for both the offensive and defensive needs of space situational awareness and for OOS requirements. Most applications of space situational awareness and OOS involve the detection and manipulation of non-cooperative targets. Non-cooperative targets are space objects that lack communication response devices or other active sensors and whose structure, size, and motion state cannot (or cannot completely) be obtained [9]. Due to this lack of prior knowledge, most traditional detection and identification methods are difficult to apply directly to non-cooperative targets [10].
With their small size, light weight, low power consumption, and rich information acquisition, vision cameras have become an indispensable means of detecting space objects [11]. Although an image is physically only a two-dimensional array, it contains deep semantic information. In real space activities, feature extraction of space objects should not be limited to superficial features; the high-level semantic information of images is more instructive for environment understanding and automation. With the emergence of convolutional neural networks (CNNs), semantic segmentation methods based on deep learning have developed rapidly and greatly improved accuracy compared with traditional semantic segmentation methods [12]. The semantic information of images obtained through deep learning is gradually being applied in space activities, such as pose measurement and 3D reconstruction of space objects. Image-based semantic segmentation can be applied to pose measurement of non-cooperative space targets [13,14,15], and extracting semantic information is the basis of semantic photogrammetry and semantic-based 3D reconstruction [16,17].
Deep learning requires a large amount of data for training, but it is quite difficult to obtain images of space objects in the space environment, let alone the amount of data required for training [18]. Because large amounts of space object data cannot be obtained, deep-learning-based semantic segmentation cannot realize its full strength. In response to the lack of training data, many researchers have turned to transfer learning, using simulated or rendered images as a substitute for real space data [19]. Hoang released the first space object dataset in 2021 and verified its segmentation and detection accuracy with semantic segmentation and object detection networks; however, that work only verified the accuracy of the networks within the dataset itself and could not prove that models trained on the dataset remain usable in real space environments [20]. Faraco generated a simulation dataset of space objects through CAD and, after training, validated it on real data from the MEV-1 on-orbit servicing mission. The results show that a model trained only on artificially generated images can be applied to real space imagery. However, the training dataset in that work used only six models and the validation dataset only one model, so its robustness is limited [21].
In this paper, in order to verify the robustness of training results transferred to real space tasks, we build a real-shot validation dataset of six space object models photographed in a simulated space environment in the laboratory. We select six combinations of the best and latest semantic segmentation networks, train them on the spacecraft dataset, and validate the training results directly on the real-shot validation dataset. Although the validation accuracy on the real-shot dataset is lower than the internal validation accuracy of the public dataset, its mIoU (mean intersection over union) is still greater than 80%. The results show that the semantic segmentation training results of the spacecraft dataset have a certain robustness in real tasks.
The spacecraft dataset contains only slightly more than 3000 images, and most of them are synthesized by rendering, so they cannot truly reflect the physical conditions in orbit. In view of the difficulty of obtaining real satellite images, we propose a method based on physical simulation of orbit. We use the EOIR module of STK to simulate the real positional relationship between the target and the detector, the orbital parameters, and the space lighting conditions. By setting the material and reflectivity of each component, we obtain images with real physical meaning. We select a total of 24 space objects for simulation and obtain 1440 visible-band images. We add this physical simulation dataset to the spacecraft dataset to form a mixed dataset. In order to verify the effectiveness of the physical simulation data, we use a variety of excellent and advanced semantic segmentation methods to perform the same training on the mixed dataset as on the spacecraft dataset and verify the segmentation accuracy on the real-shot dataset.

2. Materials and Methods

The exponential increase in computing power over the past few decades has led to the rapid development and wider application of machine learning. Especially in the field of computer vision, the ability of machines to understand the surrounding environment from images has been widely recognized since the emergence of convolutional neural networks (CNNs). Various machine learning algorithms for image understanding have been researched, among which object detection and semantic segmentation have attracted the most attention. Object detection identifies the various objects in an image and draws a bounding box around each object. Semantic segmentation not only identifies objects, as object detection does, but also raises the recognition resolution to the pixel level.

2.1. Semantic Segmentation Network

With the advent of FCN, semantic segmentation entered a new era [22], and various methods have since been studied. We select five semantic segmentation models, FCN, PSPNet, OCRNet, APCNet, and Deeplab V3+ [23,24,25,26], and three backbone feature extraction networks, ResNet, ResNeSt, and HRNet [27,28,29]. The selected networks include both classic and recent architectures. The segmentation networks and backbone feature extraction networks are combined in several pairings to avoid conclusions that depend on a single network. We train the above models on the MMSegmentation platform [30].

2.2. Precision Validation Index

Pixel accuracy ($PA$) is the ratio of the number of correctly classified pixels to all pixels in the segmentation result. The formula is shown in (1), where $k$ denotes the number of target categories and $p_{ij}$ denotes the number of pixels of category $i$ that are predicted as category $j$.

$$PA = \frac{\sum_{i=0}^{k} p_{ii}}{\sum_{i=0}^{k}\sum_{j=0}^{k} p_{ij}} \tag{1}$$
Intersection over union ($IoU$) and mean intersection over union ($mIoU$) are important evaluation indicators in the field of semantic segmentation. $IoU$ is the ratio of the intersection to the union of the predicted segmentation and the ground-truth segmentation; the larger the value, the closer the prediction is to the ground truth. The formula is as follows:

$$IoU = \frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}} \tag{2}$$

$mIoU$ is the mean of the $IoU$ values over all categories. The formula is as follows:

$$mIoU = \frac{1}{k+1}\sum_{i=0}^{k}\frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}} \tag{3}$$
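For clarity, the following minimal Python sketch (illustrative only; it is not taken from the authors' code, and the function and variable names are our own) computes PA, per-class IoU, and mIoU from a confusion matrix in a way consistent with Equations (1)–(3).

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Accumulate a (num_classes x num_classes) confusion matrix.
    Entry [i, j] counts pixels of ground-truth class i predicted as class j (p_ij)."""
    mask = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def pixel_accuracy(cm):
    """PA: correctly classified pixels over all pixels, as in Equation (1)."""
    return np.diag(cm).sum() / cm.sum()

def iou_per_class(cm):
    """IoU_i = p_ii / (sum_j p_ij + sum_j p_ji - p_ii), as in Equation (2)."""
    inter = np.diag(cm)
    union = cm.sum(axis=1) + cm.sum(axis=0) - inter
    return inter / np.maximum(union, 1)

def mean_iou(cm):
    """mIoU: mean of the per-class IoU values, as in Equation (3)."""
    return iou_per_class(cm).mean()

# Toy usage with 4 classes (background + antenna + solar panel + body).
gt   = np.array([[0, 1, 1], [2, 2, 3]])
pred = np.array([[0, 1, 2], [2, 2, 3]])
cm = confusion_matrix(pred, gt, num_classes=4)
print(pixel_accuracy(cm), mean_iou(cm))
```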

3. Dataset Preparation

There are many datasets for semantic segmentation, such as VOC 2012 for universal scenes [31], ADE20K for indoor scenes [32], Cityscapes for urban scenes [33], and KITTI for autonomous driving [34]. However, there was no public dataset applicable to space objects until the spacecraft dataset was released in 2021 [20].
This section introduces the datasets used for semantic segmentation network training and validation, including the public dataset (spacecraft dataset) for training, the real-shot validation dataset for validation, and the physical simulation dataset for data expansion.

3.1. Spacecraft Dataset

The spacecraft dataset is the first public dataset for object detection and instance segmentation in space. It consists of 3117 images with a resolution of 1280 × 720 and includes 10,350 labels for 3667 spacecraft. The pixel area of the spacecraft in the images varies widely, from about 100 pixels up to almost the entire image. On average, spacecraft pixels account for 13.27% of each image; among the spacecraft components, antennae account for 13.21% of the labeled pixels, solar panels for 43.39%, and bodies for 43.40% [20].
Orbital resources in geosynchronous orbit are even scarcer. This paper focuses on space exploration in geosynchronous orbit, so the influence of the Earth background is not taken into consideration. Therefore, the original dataset is preprocessed to remove the background, and the processed results are shown in Figure 1.
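As an illustration of this preprocessing step, the sketch below masks out every pixel that the label image marks as background. It assumes a hypothetical label encoding in which 0 denotes background and non-zero values denote spacecraft components; the authors' actual tooling and label format are not specified, and the file paths are placeholders.

```python
import cv2
import numpy as np

def remove_background(image_path, label_path, out_path):
    """Keep only spacecraft pixels. Assumed label encoding: 0 = background,
    non-zero = antenna / solar panel / body (hypothetical convention)."""
    image = cv2.imread(image_path, cv2.IMREAD_COLOR)      # 1280x720 color frame
    label = cv2.imread(label_path, cv2.IMREAD_GRAYSCALE)  # per-pixel class ids
    mask = (label > 0).astype(np.uint8)                   # 1 on spacecraft, 0 elsewhere
    cleaned = image * mask[:, :, None]                    # black out Earth/background
    cv2.imwrite(out_path, cleaned)

# Placeholder paths for one image/label pair of the spacecraft dataset.
remove_background("images/spacecraft_0001.png",
                  "labels/spacecraft_0001.png",
                  "processed/spacecraft_0001.png")
```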

3.2. STK-Based Physical Simulation Dataset Construction

Most pictures in the spacecraft dataset are collected from synthetic images and videos, and most of them are obtained by rendering. With the continuous improvement of computing power and the application of ray-tracing technology, the gap between rendered and real images is narrowing, but rendered images still have several inherent disadvantages: reflection, refraction, and indirect lighting are not calculated from the physical properties of the target during rendering; lighting conditions are not set according to orbit dynamics; and the optical simulation of the imaging system is missing. For these reasons, rendered images differ greatly from real images taken in orbit. In addition, the training of deep learning models is restricted by the only slightly more than 3000 pictures in the public dataset. Therefore, we propose a dataset construction method based on the physical simulation of orbit using the EOIR module of STK.
STK (Satellite Tool Kit) is a simulation and analysis tool for aerospace applications developed by AGI (Analytical Graphics, Inc.). It is mainly applied to satellite orbit analysis, aerospace visualization, and space target defense. We use STK for space environment modeling, orbit modeling, target structure modeling, target temperature modeling, and photoelectric sensor modeling to realize optical payload imaging simulation. The specific process is shown in Figure 2.
A total of 24 space objects were selected, including ALOS, Anik F1, APSTAR-6C, Arkon, Cloud, Comets, Dawn, DSP, ETS-VIII, Galileo, GeoLITE, GPS, GP-B, Hubble, Iridium, Jason1, Quickbird2, QuikSCAT, SINOSAT-2, Telstar, Thuraya, UFO1, and XM. First, each space object model was preprocessed: the model was manually divided into parts (mainly the body, antenna, lens, docking ring, panel, and other key payloads), and the material and properties of each part were determined. Next, a new STK scenario was created, a new satellite was created as the target, and a geosynchronous orbit was set. Then, another satellite was created as the detector, at the same orbital altitude as the target satellite, at a distance of 20 km, in fly-around mode; the orbit setup is shown in Figure 3a. Next, the parameters of the sensor were set, including focal length, imaging band, resolution, and CCD size. After the sensor was set up, the target appears in the camera's field of view as shown in Figure 3b. By repeating the above steps three times and setting the reflectivity of the body, solar panel, and antenna, in turn, to 1 and that of all other parts to 0, the labels of the three target categories were obtained.
Then, we set the attribute information of the target, such as the material, reflectivity, and emissivity of each part, according to the preprocessing results. Next, we ran the software to obtain simulated images of the target at different distances and attitudes from the perspective of the sensor. The above process was repeated many times: each part of the target was imaged separately by changing the attribute information of the target, and the semantic labels of the simulation data were obtained after post-processing.
Through the above method, 24 space objects were simulated. A total of 1440 simulation images were obtained, some of which are shown in Figure 4.
To stay consistent with the spacecraft dataset, we set three label categories: antenna, solar panel, and body.
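The three single-component EOIR runs can then be merged into one semantic label image. The sketch below assumes hypothetical file names, one grayscale render per run, and an illustrative noise threshold; it is one plausible post-processing step, not the authors' exact pipeline.

```python
import numpy as np
from PIL import Image

# Hypothetical renders of one frame, each produced with only one component
# class given non-zero reflectivity: class id -> render file.
runs = {1: "antenna_only.png", 2: "panel_only.png", 3: "body_only.png"}

label = None
for class_id, path in runs.items():
    render = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    if label is None:
        label = np.zeros(render.shape, dtype=np.uint8)  # 0 = background
    # Pixels that received signal in this run belong to this component;
    # a small threshold suppresses sensor noise (value is an assumption),
    # and later runs simply overwrite earlier ones at overlapping pixels.
    label[render > 10] = class_id

Image.fromarray(label).save("label_frame.png")
```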

3.3. Real-Shot Validation Dataset Construction of Space Object

Space objects mainly include spacecraft, space debris, and asteroids. Asteroids are mostly rocky and of a single type, and most space debris are the remains of spacecraft such as satellites, space stations, and spaceships. Therefore, only semantic segmentation of spacecraft is discussed in this paper. Although spacecraft have different shapes, most of them contain components with similar shapes and structures, such as square solar panels and circular reflector antennae. To solve the thermal insulation problem in space, the body of a satellite is wrapped with a reflective coating. Although most spacecraft share the same components, the exact structure varies greatly; for example, satellite solar panels come in single-panel, symmetrical-panel, and multi-panel configurations, and most satellite antennae are circular reflector antennae while a few are whip antennae. Considering both the common and the distinguishing characteristics of spacecraft, we select six spacecraft models as the simulated non-cooperative spacecraft, including the Beidou satellite, Dongfanghong satellite (DFH), Fengyun-3 satellite (FY-3), Fengyun-4 satellite (FY-4), Tiangong spacecraft (TG), and Tianwen satellite (TW). All of them are scale models: the scales of FY-3, FY-4, and TW are 1:30, the scale of Beidou is 1:35, the scale of DFH is 1:12, and the scale of TG is 1:50. The models are shown in Figure 5.
The models we chose include a spaceship, single-panel satellites, and multi-panel satellites. The surface of each satellite model is covered with a golden coating to simulate the thermal insulation layer of a satellite. None of these six models is contained in the spacecraft dataset or the STK simulation dataset, so they can effectively verify the validity of transferring the training results of the spacecraft dataset and the simulation dataset to the semantic segmentation of non-cooperative targets. The spectral reflective properties of the models were measured with a Fieldspec 4 spectroradiometer over 350–1500 nm at a 1 nm interval, as shown in Figure 6.
As illustrated in Figure 6, the white antenna has high reflectivity in all bands. The solar panels have low reflectivity in all bands but are slightly more reflective in the blue band than in the red and green bands. The reflectivity curve of the model body lies between those of the antenna and the solar panel and has an absorption valley in the blue band. Comparing the reflectivity of the model body surface with that of the multi-layer thermal insulation of a real satellite shows that, although there are slight differences in the blue and short-wave infrared bands, the overall profiles are consistent.
To simulate the photographing environment in space, we used a black curtain to create a dark room. Shooting in the dark room prevents ambient reflections and ensures a single, stable light source. We used a solar simulator to produce lighting from different incident angles. The solar simulator and the robot arm are shown in Figure 7.
The color temperature of the solar simulator is 6000 K, and its wavelength range is 400–800 nm. We installed the model on the robot arm and obtained space object images at different positions, attitudes, and lighting angles by rotating the manipulator. The layout of the solar simulator and robot arm is shown in Figure 8.
When shooting, we placed the camera on a tripod, changed the lighting and relative position by moving the tripod, and changed the relative attitude of the model and the camera with a six-DoF (degrees of freedom) ABB robot arm. The real-shot validation dataset was collected by a Point Grey camera. The light intensity during acquisition was set to 0.1 solar constant, and the lighting angles were 0°, 30°, and 60°. Taking Beidou as an example, images of the satellite model under different lighting conditions are shown in Figure 9.
The specific shooting conditions are listed in Table 1.
The target was photographed at different positions, attitudes, and lighting angles. Image Labeler in MATLAB was used for data labeling, and the label types included the body, antenna, and solar panel of the satellite. The label data were used to post-process the raw images and remove everything other than the target model, such as the robot arm and the curtain background. Post-processing makes the data as close as possible to real data acquired in space [35]. The images and labels are shown in Figure 10.
During the shooting process, we took pictures of all positions and attitudes of the target under each lighting angle. After shooting, we pooled all the pictures and manually removed those in which the target was occluded by the robot arm. The final dataset consists of 100 high-quality images per model.

4. Results and Discussion

In this section, we used the spacecraft dataset to train the six semantic segmentation network combinations and verified their segmentation results on the real-shot validation dataset. The simulation dataset was then added to the spacecraft dataset to form the mixed dataset; the same training parameters were applied to train the six network combinations, and the segmentation results were again verified on the real-shot validation dataset.

4.1. Joint Validation of Semantic Segmentation Accuracy of Spacecraft Dataset and Real-Shot Validation Dataset

Firstly, the semantic segmentation networks were trained and validated on the public dataset, and the segmentation accuracy was calculated. The semantic segmentation networks comprise five models: the classic FCN, the widely used and well-performing PSPNet and Deeplab V3+, and the relatively new OCRNet and APCNet. The most widely used ResNet and the more recent ResNeSt and HRNet were selected as backbone feature extraction networks. For ResNet and ResNeSt, we chose ResNet101 and ResNeSt101; for HRNet, we chose HRNet48. Due to the complex network structure of Deeplab V3+, its training batch size was set to six, while the other networks used eight. Deeplab V3+ was trained for 80k iterations and the other networks for 60k iterations. The input image size for all networks was 512 × 512, and the SGD optimizer was applied to all models. The PyTorch and MMSegmentation open-source toolkits were used in this experiment. The experimental platform is listed in Table 2.
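As a rough idea of how such an experiment is configured in MMSegmentation 0.x, a hedged config sketch is given below. The base config paths, the dataset config, and the pretrained-weight key are placeholders to be adapted to the spacecraft data; this is not the authors' actual configuration.

```python
# pspnet_r101_spacecraft.py -- illustrative MMSegmentation (0.x) config sketch
_base_ = [
    '../_base_/models/pspnet_r50-d8.py',       # model template (ResNet-50 backbone by default)
    '../_base_/datasets/spacecraft.py',         # hypothetical dataset config with 512x512 inputs
    '../_base_/default_runtime.py',
    '../_base_/schedules/schedule_160k.py',
]
model = dict(
    pretrained='open-mmlab://resnet101_v1c',    # switch the backbone to ResNet-101
    backbone=dict(depth=101),
    decode_head=dict(num_classes=4),            # background, antenna, solar panel, body
    auxiliary_head=dict(num_classes=4),
)
data = dict(samples_per_gpu=8, workers_per_gpu=4)       # batch size 8 (6 for Deeplab V3+)
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
runner = dict(type='IterBasedRunner', max_iters=60000)  # 80k for Deeplab V3+
checkpoint_config = dict(by_epoch=False, interval=10000)
evaluation = dict(interval=10000, metric='mIoU')
```

Training would then typically be launched through MMSegmentation's standard tools/train.py entry point with this config file.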
We used pixel accuracy (PA) and intersection over union (IoU) to compare the semantic segmentation performance of the models. The IoU and PA of each category and the overall mIoU and mPA were calculated, where mIoU and mPA are the averages over the antenna, body, and solar panel categories. The validation results on the spacecraft dataset are shown in Figure 11.
Figure 11 shows that the segmentation results of the semantic segmentation networks are good: mIoU is higher than 85%, and mPA is higher than 90%. Among the three categories, solar panels have the highest segmentation accuracy, followed by the satellite body, and the antenna has the lowest, which is related to the antenna having the smallest proportion in the dataset. In general, the antenna is smaller than the body or the solar panel and occupies the smallest pixel area in the dataset, so its segmentation accuracy after training is low. The proportions of the solar panel and the body are similar, so their segmentation accuracy is higher than that of the antenna. Because we removed all backgrounds from the original dataset during preprocessing, our segmentation accuracy is higher than the training results reported by the original authors.
We validated the training results of the spacecraft dataset on the real-shot validation dataset. IoU and PA were also calculated, and the results are shown in Figure 12.
Figure 12 shows that the accuracy of the training results on the real-shot validation dataset is lower than on the spacecraft dataset. Most pictures in the spacecraft dataset come from the Internet and are rendered, so they differ greatly in style from real images. Although the accuracy declines, the mPA remains above 80% and the mIoU above 75% after the background is removed. The accuracy drop after migrating to the real-shot validation dataset varies slightly across components and networks; the details are shown in Figure 13.
Figure 13 shows that, when the training results of the spacecraft dataset are validated on the real-shot validation dataset, both mIoU and mPA decrease. Among the three categories, only the solar panels improve in accuracy. This is because the solar panels in the real-shot dataset, which include single- and double-panel configurations, still have common structural shapes and are simpler than the variety of panels in the spacecraft dataset. Among the six methods, PSPNet migrates best to the real-shot validation dataset, with the smallest decline in accuracy.

4.2. Joint Validation of Semantic Segmentation Accuracy of Mixed Dataset and Real-Shot Validation Dataset

The previous subsection verified the usability of the spacecraft dataset training results on the real-shot validation dataset. To address the lack of space object data, we added the STK simulation dataset to the spacecraft dataset to form a mixed dataset, trained the same networks on the mixed dataset, validated the accuracy of the training results on the real-shot dataset, and thereby verified the feasibility of the STK simulation dataset for space applications.
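One simple way to form such a mixed dataset is to merge the two image/label trees into a single directory before training. The sketch below assumes a hypothetical directory layout with matching image and label file names; the actual organization of the two datasets is not described here.

```python
import shutil
from pathlib import Path

# Hypothetical source directories (images and labels share file names).
SOURCES = [("spacecraft/images", "spacecraft/labels"),
           ("stk_sim/images", "stk_sim/labels")]
OUT = Path("mixed_dataset")

for img_dir, lbl_dir in SOURCES:
    tag = Path(img_dir).parts[0]                 # prefix avoids file-name collisions
    for src, dst in ((img_dir, OUT / "images"), (lbl_dir, OUT / "labels")):
        dst.mkdir(parents=True, exist_ok=True)
        for f in sorted(Path(src).glob("*.png")):
            # Image/label pairs keep the same prefixed name, so they stay matched.
            shutil.copy(f, dst / f"{tag}_{f.name}")
```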
The training process on the mixed dataset is consistent with that on the spacecraft dataset: we used the same training parameters and numbers of iterations. The IoU and PA of the six methods were calculated after training, and the results are shown in Figure 14.
After the STK simulation dataset is added, the internal validation accuracy on the mixed dataset is lower than when training and validating on the spacecraft dataset alone. This is because there are image differences between the STK simulation dataset and the spacecraft dataset, and simply mixing the two datasets slightly decreases the internal validation accuracy. After mixing, the OCR_ResNet method obtains the highest accuracy, owing to its clear advantage in antenna recognition.
After validation on the mixed dataset, the same validation was conducted on the real-shot validation dataset. The declining trend of accuracy is similar to that in Section 4.1, where networks trained on the spacecraft dataset were validated on the real-shot dataset. A distinct feature can be seen in Figure 15: the training results of Deeplabv3p_ResNeSt have the lowest validation accuracy on the mixed dataset but the best validation accuracy after migration to the real-shot validation dataset. This is likely because Deeplabv3p is the most complex network here, and a complex network favors the generalization ability of the training results; at the same time, training complex networks requires more data. Therefore, compared with the training results of the spacecraft dataset, the migration verification accuracy of the training results on the larger mixed dataset improves significantly.
The training results of the spacecraft dataset and the mixed dataset were both validated on the real-shot validation dataset, and the two kinds of validation accuracy are compared in Figure 16.
It can be seen from Figure 16 that, compared with the spacecraft dataset, the mixed dataset augmented with STK simulation data affects different methods and target categories differently when the training results are migrated to the validation dataset. For antenna segmentation, five methods show a significant improvement in accuracy, and OCR_HRNet also improves slightly. This results from the category proportions in the datasets: the proportion of antennae in the spacecraft dataset is lower than in the STK simulation data, so the mixed dataset trains antenna segmentation better. For the body category, three methods show that the training results of the mixed dataset are not as good as those of the original spacecraft dataset, while the migration accuracy of the other three methods increases. This is because STK lacks a high-precision physical simulation of complex objects such as satellite bodies. The segmentation accuracy of solar panels is not improved by some of the methods for two reasons. On the one hand, the training results on the spacecraft dataset already recognize solar panels well, and it is difficult to improve further by adding physical simulation data. On the other hand, the validation dataset includes both single and dual solar panels, but single-solar-panel models are missing from the physical simulation dataset. Therefore, in the migration verification of the mixed-dataset training results, the segmentation accuracy of solar panels does not increase significantly. This problem can be solved by using a wider variety of satellite models when producing physical simulation data.
It can be seen from Figure 16 and Table 3 that relatively high mIoU and mPA are obtained by all six methods when the training results on the mixed dataset are transferred to the real-shot images for validation. When training with the spacecraft dataset, the OCR_HRNet method migrates best to the validation dataset and obtains the best mIoU and mPA. The training effect of Deeplabv3p is not outstanding, but after adding physical simulation data, its accuracy improvement is the most obvious among the six methods. This is because Deeplabv3p has the most complex structure among these networks, and a large amount of data is important for training complex networks. When training with the spacecraft dataset, the two methods that use HRNet as the backbone feature extraction network rank first and third; with the mixed dataset, OCR_HRNet still ranks second. Therefore, using HRNet as the backbone network, and especially the OCR_HRNet combination, is more suitable for generalization to real-shot data.
Semantic segmentation results on the real-shot validation dataset using Deeplabv3p_ResNeSt are shown in Figure 17.
Overall, after adding the physical simulation data, the migration verification accuracy (mIoU and mPA) of the six semantic segmentation methods improves to different extents. This shows that adding physical simulation data universally improves the accuracy of different semantic segmentation networks. The improvement is limited in our experiments: the average improvement of the six networks is only about one percent, mainly because of the small amount of simulation data, since the feasibility of the dataset production method is verified here with only 24 simulated satellites. Nevertheless, adding orbital physical simulation data is clearly beneficial, and this paper provides an effective method for space object dataset expansion.
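As a quick consistency check of the "about one percent" figure, the snippet below recomputes the average mIoU and mPA gains from the per-method scores reported in Table 3.

```python
# (spacecraft dataset score, mixed dataset score) per method, from Table 3.
miou = {"FCN_HRNet": (80.06, 79.96), "PSPNet_ResNet": (81.45, 82.14),
        "Deeplabv3p_ResNeSt": (77.56, 81.99), "OCR_ResNet": (79.43, 79.85),
        "OCR_HRNet": (81.77, 82.11), "APCNet_ResNet": (78.46, 80.43)}
mpa  = {"FCN_HRNet": (85.21, 85.56), "PSPNet_ResNet": (85.25, 86.02),
        "Deeplabv3p_ResNeSt": (84.47, 86.73), "OCR_ResNet": (79.43 and 85.20, 85.75) if False else (85.20, 85.75),
        "OCR_HRNet": (86.04, 86.31), "APCNet_ResNet": (85.06, 85.93)}

def avg_gain(scores):
    """Mean of (mixed - spacecraft) over the six methods."""
    return sum(mixed - spacecraft for spacecraft, mixed in scores.values()) / len(scores)

print(f"mean mIoU gain: {avg_gain(miou):.2f}")  # ~1.29, matching Table 3
print(f"mean mPA gain:  {avg_gain(mpa):.2f}")   # ~0.85, matching Table 3 up to rounding
```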

5. Conclusions

The release of the spacecraft dataset ended the absence of public image datasets of space objects. However, the dataset still has shortcomings: it contains limited data, and most images are synthesized by rendering and therefore lack physical meaning. Because the dataset was only recently released, it had not been shown whether its training results can be directly applied to real-shot space images. In view of these problems, we conducted the following work: (1) a real-shot validation dataset was constructed to validate the possibility of migrating the training results of the spacecraft dataset to real-scene applications; (2) a dataset expansion method using the EOIR module of STK for the physical simulation of orbit was proposed. This method not only alleviates the lack of space data but also yields data with real physical meaning.
We selected six satellite models: the Beidou satellite, Dongfanghong satellite, Fengyun-3 satellite (FY-3), Fengyun-4 satellite (FY-4), Tiangong spacecraft, and Tianwen satellite. We used a black curtain to create a dark room simulating the space environment and a solar simulator to reproduce solar illumination. We collected 600 pictures of the 6 targets at different lighting angles and poses to build the real-shot validation dataset. We then used the EOIR module of STK for the physical simulation of orbit, carried out visible-band simulation of 24 satellites in geosynchronous orbit, and obtained 1440 physical simulation images, which were added to the spacecraft dataset to form the mixed dataset.
We selected six semantic segmentation network combinations, trained them on the spacecraft dataset, and validated the training results on the real-shot validation dataset. The results show a certain decline in validation accuracy on the real-shot dataset, but the decline is not severe; therefore, training results obtained on the spacecraft dataset can be applied to the semantic segmentation of real space objects. We then used the same semantic segmentation networks and training settings on the mixed dataset and validated the training results on the real-shot validation dataset. Compared with the migration validation accuracy of the spacecraft dataset, the training results of the mixed dataset with STK physical simulation data have better generalization ability. After adding the physical simulation data, the segmentation accuracy of the various methods improves to varying degrees; although the improvement is limited, it holds across the semantic segmentation methods. Therefore, STK-based physical simulation of orbit is an effective method for space dataset expansion. In future work, we will add a wider variety of satellite models and produce and release new large-scale datasets.

Author Contributions

Conceptualization, R.W. and H.D.; methodology, R.W.; software, R.W.; validation, H.D. and H.P.; formal analysis, R.W.; investigation, A.S.; resources, H.P.; data curation, A.S.; writing—original draft preparation, R.W.; writing—review and editing, R.W.; visualization, A.S.; supervision, H.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Innovation Program CX-387 of the Shanghai Institute of Technical Physics.

Data Availability Statement

Not applicable.

Acknowledgments

We are thankful to the anonymous reviewers for their constructive comments on the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Barnhart, D.; Sullivan, B.; Hunter, R.; Bruhn, J.; Fowler, E.; Hoag, L.M.; Chappie, S.; Henshaw, G.; Kelm, B.E.; Kennedy, T. Phoenix program status-2013. In Proceedings of the AIAA SPACE 2013 Conference and Exposition, San Diego, CA, USA, 10–12 September 2013; p. 5341.
2. Shoemaker, M.A.; Vavrina, M.; Gaylor, D.E.; Mcintosh, R.; Volle, M.; Jacobsohn, J. OSAM-1 decommissioning orbit design. In Proceedings of the AAS/AIAA Astrodynamics Specialist Conference, South Lake Tahoe, CA, USA, 9–13 August 2020.
3. Kimura, S.; Nagai, Y.; Yamamoto, H.; Masuda, K.; Abe, N. Approach for on-orbit maintenance and experiment plan using 150kg-class satellites. In Proceedings of the 2005 IEEE Aerospace Conference, Big Sky, MT, USA, 5–12 March 2005.
4. Tarabini, L.; Gil, J.; Gandia, F.; Molina, M.Á.; Del Cura, J.M.; Ortega, G. Ground guided CX-OLEV rendez-vous with uncooperative geostationary satellite. Acta Astronaut. 2007, 61, 312–325.
5. Reed, B.B.; Smith, R.C.; Naasz, B.J.; Pellegrino, J.F.; Bacon, C.E. The restore-L servicing mission. In Proceedings of the AIAA Space 2016, Long Beach, CA, USA, 13–16 September 2016; p. 5478.
6. Aglietti, G.; Taylor, B.; Fellowes, S.; Ainley, S.; Tye, D.; Cox, C.; Zarkesh, A.; Mafficini, A.; Vinkoff, N.; Bashford, K. RemoveDEBRIS: An in-orbit demonstration of technologies for the removal of space debris. Aeronaut. J. 2020, 124, 1–23.
7. Telaar, J.; Estable, S.; De Stefano, M.; Rackl, W.; Lampariello, R.; Ankersen, F.; Fernandez, J.G. Coupled control of chaser platform and robot arm for the e.deorbit mission. In Proceedings of the 10th International ESA Conference on Guidance Navigation and Control Systems (GNC), Salzburg, Austria, 29 May–2 June 2017; p. 4.
8. Ellery, A. Tutorial Review on Space Manipulators for Space Debris Mitigation. Robotics 2019, 8, 34.
9. National Research Council. Assessment of Options for Extending the Life of the Hubble Space Telescope: Final Report; The National Academies Press: Washington, DC, USA, 2005; p. 160.
10. Maestrini, M.; Di Lizia, P. Guidance Strategy for Autonomous Inspection of Unknown Non-Cooperative Resident Space Objects. J. Guid. Control. Dyn. 2022, 45, 1126–1136.
11. Zhang, Y.; Huang, P.; Meng, Z.; Liu, Z. Precise angles-only navigation for noncooperative proximity operation with application to tethered space robot. IEEE Trans. Control. Syst. Technol. 2018, 27, 1139–1150.
12. Mo, Y.; Wu, Y.; Yang, X.; Liu, F.; Liao, Y. Review the state-of-the-art technologies of semantic segmentation based on deep learning. Neurocomputing 2022, 493, 626–646.
13. Ding, H.; Yi, J.; Wang, Z.; Zhang, Y.; Wu, H.; Cao, S. Automated synthetic datasets construction for part semantic segmentation of non-cooperative satellites. In Proceedings of the Thirteenth International Conference on Machine Vision, Shenzhen, China, 26 February–1 March 2021; SPIE: Bellingham, WA, USA, 2021.
14. Du, H.; Hu, H.; Xie, X.; He, Y. Pose Measurement Method of Non-cooperative Targets Based on Semantic Segmentation. In Proceedings of the 2021 6th International Conference on Intelligent Computing and Signal Processing (ICSP), Xi'an, China, 9–11 April 2021.
15. Sharma, S. Pose Estimation of Uncooperative Spacecraft Using Monocular Vision and Deep Learning; Stanford University: Stanford, CA, USA, 2019.
16. Stathopoulou, E.K.; Remondino, F. Semantic Photogrammetry—Boosting Image-Based 3D Reconstruction with Semantic Labeling. In Proceedings of the 8th International Workshop on 3D Virtual Reconstruction and Visualization of Complex Architectures (3D-ARCH), Bergamo, Italy, 6–8 February 2019; Copernicus Gesellschaft Mbh: Göttingen, Germany, 2019.
17. Hane, C.; Zach, C.; Cohen, A.; Pollefeys, M. Dense Semantic 3D Reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1730–1743.
18. Armstrong, W.; Draktontaidis, S.; Lui, N. Semantic Image Segmentation of Imagery of Unmanned Spacecraft Using Synthetic Data; Technical Report; Stanford University: Stanford, CA, USA, 2021.
19. Cheplygina, V.; De Bruijne, M.; Pluim, J.P.W. Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med. Image Anal. 2019, 54, 280–296.
20. Hoang Anh, D.; Chen, B.; Chin, T.-J. A Spacecraft Dataset for Detection, Segmentation and Parts Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021.
21. Faraco, N.; Maestrini, M.; Di Lizia, P. Instance Segmentation for Feature Recognition on Noncooperative Resident Space Objects. J. Spacecr. Rocket. 2022, 59, 2160–2174.
22. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015.
23. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
24. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
25. Yuan, Y.; Chen, X.; Wang, J. Object-contextual representations for semantic segmentation. In Proceedings of the European Conference on Computer Vision, 2020; Springer: Berlin/Heidelberg, Germany, 2020.
26. He, J.; Deng, Z.; Zhou, L.; Wang, Y.; Qiao, Y. Adaptive pyramid context network for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019.
27. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
28. Zhang, H.; Wu, C.; Zhang, Z.; Zhu, Y.; Lin, H.; Zhang, Z.; Sun, Y.; He, T.; Mueller, J.; Manmatha, R. ResNeSt: Split-attention networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, New Orleans, LA, USA, 19–20 June 2022.
29. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3349–3364.
30. MMSegmentation Contributors. MMSegmentation: OpenMMLab Semantic Segmentation Toolbox and Benchmark. Available online: https://github.com/open-mmlab/mmsegmentation (accessed on 18 May 2022).
31. Everingham, M.; Winn, J. The PASCAL visual object classes challenge 2012 (VOC2012) development kit. Pattern Anal. Stat. Model. Comput. Learn. Tech. Rep. 2012, 2007, 1–45.
32. Zhou, B.; Zhao, H.; Puig, X.; Fidler, S.; Barriuso, A.; Torralba, A. Scene parsing through ADE20K dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
33. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
34. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012.
35. Park, T.H.; Märtens, M.; Lecuyer, G.; Izzo, D.; D'Amico, S. SPEED+: Next-generation dataset for spacecraft pose estimation across domain gap. In Proceedings of the 2022 IEEE Aerospace Conference (AERO), Big Sky, MT, USA, 5–12 March 2022.
Figure 1. Examples of original, processed, and labeled images in the dataset. Red: solar panels; blue: antennae; green: satellite body.
Figure 2. Flowchart of the physical simulation of orbit.
Figure 3. Simulation in STK: (a) geosynchronous orbit (GEO); (b) camera's field of view.
Figure 4. Examples of images and labels in the dataset. Red: solar panels; blue: antennae; green: body.
Figure 5. The models used for the validation dataset: (a) Beidou; (b) DFH; (c) FY-3; (d) FY-4; (e) TG; (f) TW.
Figure 6. Spectral reflective properties of the models: (a) spectral reflectance curves for the three main parts of the satellite model, including the body, antenna, and solar panels; (b) the spectral reflectance curve of the satellite model body compared with that of a real satellite.
Figure 7. Environmental simulation equipment: (a) solar simulator; (b) robot arm.
Figure 8. Visualization of the environment simulation room layout.
Figure 9. Images of the satellite model under different lighting conditions: (a) 0°; (b) 30°; (c) 60°.
Figure 10. Images and labels of the real-shot validation dataset (red: solar panels; blue: antennae; green: body): (a) Beidou; (b) DFH; (c) FY-3; (d) FY-4; (e) TG; (f) TW.
Figure 11. Per-category and overall segmentation accuracy obtained by training on the spacecraft dataset: (a) intersection over union; (b) pixel accuracy.
Figure 12. Per-category and overall segmentation accuracy obtained by training on the spacecraft dataset and validating on the real-shot validation dataset: (a) intersection over union; (b) pixel accuracy.
Figure 13. Comparison of the validation accuracy of the training results on the spacecraft dataset and the real-shot validation dataset. The green upper boundary is the validation accuracy on the real-shot validation dataset, and the yellow upper boundary is the validation accuracy on the spacecraft dataset.
Figure 14. Per-category and overall accuracy obtained by training and validating on the mixed dataset: (a) intersection over union; (b) pixel accuracy.
Figure 15. Per-category and overall accuracy obtained by training on the mixed dataset and validating on the real-shot validation dataset: (a) intersection over union; (b) pixel accuracy.
Figure 16. Comparison of the validation accuracy of the mixed dataset and spacecraft dataset training results on the real-shot validation dataset.
Figure 17. Semantic segmentation results on the real-shot validation dataset using Deeplabv3p_ResNeSt: (a) Beidou; (b) DFH; (c) FY-3; (d) FY-4; (e) TG; (f) TW.
Table 1. Image shooting conditions.

Name             Configuration
Camera           Point Grey
Focal length     75 mm
Resolution       2048 × 2048
Light intensity  0.1 solar constant
Lighting angle   0°, 30°, and 60°
Table 2. Environment.

Name                     Configuration
Operating system         Ubuntu 20.04
GPU                      NVIDIA TITAN RTX
CPU                      Intel i9-10900K
CUDA                     CUDA 11.3
Memory                   32 GB
Deep learning framework  PyTorch 1.12
Python                   3.8
Table 3. Comparison of the validation accuracy between the mixed dataset and the spacecraft dataset training results in the real-shot validation dataset.

Method               Dataset              mIoU (%)   mPA (%)
FCN_HRNet            spacecraft dataset   80.06      85.21
                     mixed dataset        79.96      85.56
PSPNet_ResNet        spacecraft dataset   81.45      85.25
                     mixed dataset        82.14      86.02
Deeplabv3p_ResNeSt   spacecraft dataset   77.56      84.47
                     mixed dataset        81.99      86.73
OCR_ResNet           spacecraft dataset   79.43      85.20
                     mixed dataset        79.85      85.75
OCR_HRNet            spacecraft dataset   81.77      86.04
                     mixed dataset        82.11      86.31
APCNet_ResNet        spacecraft dataset   78.46      85.06
                     mixed dataset        80.43      85.93
Average accuracy improvement              1.29       0.85