Next Article in Journal
Design and Experiment of the Combined Machine for Transplanting Outcrop of Codonopsis with Micro Ridge Covered with Film
Previous Article in Journal
Methylparaben Adsorption onto Activated Carbon and Activated Olive Stones: Comparative Analysis of Efficiency, Equilibrium, Kinetics and Effect of Graphene-Based Nanomaterials Addition
Previous Article in Special Issue
Crowded Space: A Review on Radar Measurements for Space Debris Monitoring and Tracking
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Communication

Dim Staring Debris Targets Detection Method with Dense Long Trailing Star

1
Beijing Institute of Spacecraft System Engineering, China Academy of Space Technology, Beijing 100094, China
2
College of Computer Science and Engineering, Chongqing University of Technology, Chongqing 400054, China
3
China Research and Development Academy of Machinery Equipment, Beijing 100089, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(16), 9148; https://doi.org/10.3390/app13169148
Submission received: 30 June 2023 / Revised: 8 August 2023 / Accepted: 9 August 2023 / Published: 11 August 2023
(This article belongs to the Special Issue Space Debris: Monitoring and Hazard Evaluation)

Abstract

:

Featured Application

The method studied in this paper is applied to the debris detection on-board, especially the dim targets and the occluded by a long trailing star.

Abstract

With the gradual increase in spacecraft in orbit, space debris monitoring has become the key to the sustainable development of space missions. A staring debris detection method is proposed for high-density stars with long tails. In order to solve the problem that the gray level of a long trailing image is not stable and continuous, rectangular fitting is used to complete the aggregation of the trailing image and reduce the influence of noise on the trailing information. The occluded state of the target was analyzed, the feature calculation method was improved, the semi-occluded scene was statistically classified, the fully connected network (FCN) based finite point feature was accurately classified, and the semi-occluded image was extracted. Based on the extracted semi-occluded image, the inter-frame association can improve the success probability of target association and realize the effective detection and tracking of debris. The detection accuracy was tested for the changing inter-frame interval and signal-to-noise ratio (SNR), and the relationship between the index parameters and key parameters was given. Compared with previous literature, this design can detect and track the occluded target with a detection rate of more than 90% and a false alarm rate of less than 10%.

1. Introduction

With the continuous development of civil spaceflight and the intensive launch of satellite missions, space debris has become an important factor affecting the safe and reliable execution of current aerospace missions [1]. In the early stage, debris monitoring mainly relied on ground-based radar and astronomical telescopes. However, with the increasing number of debris, limited by observation time and field of view, research on space-based debris monitoring has been gradually turned to in the past decade [2].
Space-based debris monitoring mainly relies on optical cameras to image the debris target, obtain astronomical data similar to a “star list”, and confirm the target information according to the difference in the motion state between the star background and the target [3]. However, with the increase in fragment targets to small scales such as decimeter level and centimeter level, the number of targets in the image and the complexity of content gradually bring great difficulty to the process of information extraction [4]. For remote and small-scale weak targets, the target-follow camera long exposure mode is often used to achieve the target-background SNR, which is sufficient for detection and recognition [5]. In this mode, the staring target forms a point target on the image, the background star forms a long trailing shape, and the non-gazing target forms shapes with different trailing lengths [6].
In the early stage, space debris target detection mainly relied on long-term observation for orbit determination to judge the target [7]. With the development of image processing technology, there are two main types of image processing-based methods: One is to obtain the mapping relationship between a single image and the celestial sphere according to the match of the camera imaging data and the star list, and then perform matching and eliminating for each star target to finally retain the fragment target information [8]. The other is to conduct registration and difference comparison judgment for the front and back frames of continuous imaging and distinguish target information based on different time domain distributions of stars and debris targets [9]. These two methods have good accuracy in data processing under the background of short trailing stars [10], and can give consideration to both real-time performance and accuracy under high SNR by relying on inter-frame differential calculation [11].
The early algorithm mainly aimed at the point target, combined with multi-frame accumulation, to achieve the multi-frame information of the moving target to form a shape similar to “dotted line”, and the line detection method was used to calculate the position and trajectory [12]. The detection of orbiting debris targets mainly relies on DSP and FPGA to form a heterogeneous computing platform to achieve real-time image processing [13]. There are also some methods to achieve trajectory by high-precision positioning of the target [14]. In the case that multiple debris targets exist simultaneously in the same field of view, the method based on a traditional tracking algorithm can also be adopted [15]. The recent method partially takes deep learning into consideration and uses a deep convolutional network to realize saliency detection for small fragment targets [16]. In literature [17], a fast grid neural network was used to complete the recognition of debris targets, which greatly improved the calculation speed while ensuring recognition accuracy. According to literature [18], the target has a significantly shorter tail compared with the stellar background, and the network-based detection can be realized through the training and learning of target data. The effective processing of weak targets is still the focus of debris monitoring at present. The method based on topological scanning proposed in literature [19] can detect and process a small number of trailing scenes. Once the trailing tail is too long, the target and star are likely to be superposed, which will lead to the failure of inter-frame correlation of target information.
To sum up, it can be seen that the monitoring of orbiting debris targets is mainly based on the motion difference between the target and the stellar background, and the information extraction of the debris target is completed by feature detection, association tracking, and other methods according to the predicted state parameters [20,21]. The scene solved by the previous algorithm is relatively simple, mainly facing the “sparse” background and target scene. In practical application, especially for some specific extremely weak targets, the background star will form a long tail, which will affect the target detection.
In this paper, we propose a method for detecting faint staring debris in the background of dense stars with long trailing tails. The gray level of the long trailing image is not stable and continuous, and the long tail may occlude the possible debris target. In these scenarios, the faint debris target can hardly be detected. We conduct the rectangle contour fitting to the stellar tails, as the angle and length of the trailing star can be predicted. Then we obtain the image with the stellar tail, and the debris target superposed together, which can be analyzed in three cases. These images are put into the fully connected network (FCN), which can tell which case the images belong to, then we can obtain the information of the occluded faint debris target. Aiming at the problem that the inter-frame association cannot be sustained due to the occlusion point target of star trailing, the inter-frame association is carried out based on the extracted semi-occluded image to improve the success probability of target association and realize the effective detection and tracking of debris.
The structure of this paper is as follows: Section 2 presents the problem and the corresponding solution algorithm; Section 3 gives the engineering implementation framework. Section 4 presents the experimental results and the comparison with previous algorithms. Section 5 concludes the paper.

2. Semi-Occluded Object Detection Algorithm Based on FCN

The staring debris mode in the background of long trailing stars aims to solve two problems in the process of target observation: one is that the debris target is too weak and has to rely on long time exposure to improve its SNR; the other is to reduce the SNR of the fixed stars by the long trailing tail of the background star to prevent the high density of the background fixed star from affecting the detection of the debris target. However, this mode also brings two new problems: one is that the long tail of the star leads to the occlusion of the debris target by the tail, which makes the conventional 3~5 frame association method ineffective; the other is that the tail is too long, causing the tail itself to be split into multiple segments, forming more false alarms.
By focusing on these problems, this section presents an object detection method based on a single-frame image, which integrates traditional multi-frame association to realize object tracking. It mainly includes four parts: simulation scene and parameter description, trailing aggregation based on rectangle fitting, semi-occluded target clustering recognition based on FCN, and multi-frame association tracking.

2.1. Simulation Scenario and Parameter Description

The simulation parameters are shown in Table 1. The stellar trailing imaging length is 26 pixels, and the width is 3–4 pixels. The imaging size of the debris target is 3 × 3 pixels; the inter-frame imaging interval is 32 pixels; the SNR of both star and target is five; the position difference of debris between frames is 0.5 pixels. The average distance between stars is 20 pixels.
Only weak targets are considered, and the generation of bright and large debris targets and stars is not considered, nor are the effects of bright and large targets on the imaging of weak targets considered.
A total of 100 groups of continuous image data were generated, with 15 consecutive frames in each group. Considering the effectiveness of simulation and testing, only gaze imaging is considered for the debris point target, and the debris target with other motion conditions is not considered in this paper.
Figure 1 shows the superposition data of three frames of images. Red, green, and blue, respectively, represent the trailing stars of the three frames before and after, and the debris point target is in the center of the image. Figure 2 shows the gray levels of the local target and the star. It can be seen that the amplitude of the star trailing is discontinuous and fluctuates greatly under the influence of noise, and the middle of the trailing is prone to “fracture”. Therefore, the process should not only eliminate the background through the motion difference between the frames, but also identify the whole trailing information according to the distribution features of the image.

2.2. Trailing Aggregation Based on Rectangular Fitting

Due to noise interference and low SNR condition, stellar tails often appear in an “unstable” state in images. These states will seriously interfere with the image processing, mainly including three cases (as shown in Figure 3): The middle part of the trailing tail is broken, “Crack”; The end of the trailing tail is attached to other trailing tails, “Adhesion”; The trailing end diverges and becomes difficult to determine the edge, “Divergence”. Traditional image processing methods can hardly solve these three problems.
In order to ensure the accuracy of subsequent detection, the influence of the above problems must be eliminated first. Considering that both the angle and length of the trailing star can be predicted, the simplified rectangular fitting method is adopted here to conduct a contour search for the trailing target and finally form the edge rectangle box with the minimum error.
The main rectangle contour fitting algorithm includes the following four steps:
Step 1 (contrast segmentation calculation): Denote I N as the image, where N is the size. Use the 3 × 3 and 7 × 7 square operator, and denote I d as the contrast of I N , where T d is the threshold. x , y [ 0 , N 1 ] and x , y Z + , the formula for calculating the contrast is as follows:
I d ( x , y ) = 1 , i = 1 1 j = 1 1 I N ( x + i , y + j ) i = 3 3 j = 3 3 I N ( x + i , y + j ) i = 1 1 j = 1 1 I N ( x + i , y + j ) T d . 0 , e l s e .
Step 2 (binary labeling): Traditional binary annotation calculation is performed on I d to obtain M target sets:
I b ( m ) = [ ( x ( 0 , m ) , y ( 0 , m ) ) , ( x ( 1 , m ) , y ( 1 , m ) ) , , ( x ( K m 1 , m ) , y ( K m 1 , m ) ) ]
where, m is denoted as the m-th target, and m [ 0 , M 1 ] , m Z + ; K m is the valid pixel number of the labeled target; ( x ( k m , m ) , y ( k m , m ) ) represents the coordinates of the k m pixel of the m-th labeled target, and k m [ 0 , K m 1 ] , k m Z + .
Step 3 (trailing rectangle generation): For any marked target I b ( m ) , the position coordinates of its centroid I c ( m ) are obtained as follows:
I c ( m ) = [ k m = 0 K m 1 x ( k m , m ) K m , k m = 0 K m 1 y ( k m , m ) K m ]
The reason why gray value weighting is not used in Equation (3) is that the target width is only three to four pixels, which is greatly disturbed by noise, and the weighting will cause more calculation errors. The center of mass of the m-th marked target is taken as any point in the rectangle, and the trailing angle ± θ of the star is taken as the search direction to conduct the fitting rectangular box search.
The fitting process is shown in Figure 4. Wf = 4, Lf = 26. The number of rectangles is, at most, 4Wf Lf = 416. The cyan dot is the centroid of the binary-labeled region. When the rotation angle is θ , the rectangle with I c ( m ) as the lower left point is shown in Figure 5. Through any point in the rectangle, the coordinate of the lower left corner of the matrix can be calculated, and then the coordinate of the four corners of the rectangle can be obtained so as to obtain the generating rectangle set R ( m ) of the m-th labeled target. R ( m , q ) denotes the set of rectangles with width and length Wf and Lf at any point with I c ( m ) , and q denotes the q-th rectangle.
Step 4 (trailing rectangle fitting): The corresponding image region I N ( R ( m , q ) ) is deduced according to R ( m , q ) , and then the gray summation S ( m , q ) and histogram h ( m , q ) in the rectangular region are calculated. Therefore, the optimal rectangle solution constraint is shown in Equation (4).
S min S ( m , q ) S max min ( | h ( m , q ) h s t | 2 )
Among them, S min and S max are the upper and lower limits of gray summation in the rectangular area and h s t is the histogram of trailing targets in the standard rectangular area. The three thresholds are obtained statistically and used to obtain the optimal rectangular fitting by constraint Equation (4). Figure 6 shows the histogram features of the rectangular region of the trailing image. Figure 7 shows the results of the optimal rectangle fitting.

2.3. Semi-Occluded Target Clustering Recognition Based on Edge Features

After the tail aggregation calculation, the interference of noise on the stellar tail and among the stellar tail regions is eliminated. However, in the process of detecting the debris target, they will still be blocked by the star’s trailing tail. This section will analyze several possible situations, mainly explain the features of the debris target under the occlusion in detail, and give the solution.
Considering the star trailing width Wf = 4 and the fragment target size Ld = 3, when occlusion occurs, it can be divided into three types according to the occlusion size as in Figure 8.
Case one: Occlusion is more than 2/3, which means that only one line of the debris target exposes the star trail. In this case, the debris target and the star trail are superimposed. Although there is local brightness accumulation, it is difficult to distinguish due to the noise;
Case two: When the occlusion is between 2/3 and 1/5, there are more than two lines of star trailing. In this case, an obvious bump is formed at the edge of the trailing star, and the bump has light and dark changes, which is obviously different from the noise bump. A large number of sample extraction statistical features need to be formed by the fragment target extraction;
Case three: When the occlusion is below 1/5, the debris target has only local contact with the stellar tail, and such occlusion is easier to distinguish on the basis of the rectangular fitting.
The distribution of occluded targets is shown in Figure 9 as a Gaussian distribution, while the star trailing statistics are mainly Rayleigh distribution. Therefore, in cases two and three, debris targets can be distinguished according to different distributions.
Figure 10 shows the common star trailing occlusion debris target scene, covering three types of occlusion cases. The left side of each scene shows a single gray image (a,c,e), and the right side shows three consecutive superimposed images (b,d,f).
Figure 11 shows the image features of the occluded fragment target. It can be seen that the gray features of the fragment target are quite different from the stellar trail. In case one, when the object is completely occluded, we can also distinguish it according to the continuous frame image. It can be seen that although there are feature differences in occlusion, there are fewer feature pixels, and it is difficult to detect quantitatively by noise interference, which makes it impossible to distinguish rules. Although the statistical distribution difference is obvious, the fragment target pixels are too few to be directly extracted according to the statistical distribution difference.
By focusing on these difficulties and problems, this section designs a small, fully connected convolutional network (FCN). By training the occluding fragment target samples, the edge “bulge” information after rectangular fitting is used as a suspected target for FCN calculation and finally determines whether it is a fragment target. The small FCN design is shown in Table 2, which contains six layers of convolution calculation. The final output result 0 represents the non-fragmented target, and 1 represents the fragmented target.

2.4. Multi-Frame Association Tracking

A traditional multi-frame association mainly relies on inter-frame difference to achieve star background removal after registration. However, in long-tail scenes, especially when the SNR between stars and targets is low, it is difficult to eliminate the interference caused by noise after registration, which will seriously affect the correlation of fragmented targets. At the same time, due to the long tail, fragmented targets are easily obscured by the tail, resulting in the inability to continuously associate fragmented target information between multiple frames and causing tracking interruption.
In response to this issue, this section adopts a multi-frame suspected target detection based on a small FCN. Suspected “protruding” targets in three consecutive frames of images are put into the extended network in Table 2, and further association and confirmation of fragmented targets are completed through multiple frames to improve the accuracy of detection and tracking.

3. Engineering Implementation Framework

The algorithm presented in Section 2 mainly includes five parts: image contrast calculation, binary segmentation annotation, star trailing rectangle fitting, fragment target FCN identification, and inter-frame correlation. Among them, contrast calculation and binary segmentation annotation can be achieved through parallel pipeline computing based on digital logic and real-time processing of input data streams; Star trailing rectangle fitting uses digital logic-based rectangle parameter calculation and slice extraction acceleration; Fragment target FCN identification and inter-frame correlation are solved in real-time using DSP. This is also because in the staring fragment target mode, there are fewer targets in a “stationary” state, and DSP can be used to complete the calculation.
Figure 12 shows the calculation and testing block diagram. During the testing process, the simulated image data and instructions are sent through the upper computer of the PC. Sent to the FPGA of the hardware board through the network port, the image data are first stored in the cache after being parsed by the network communication protocol module, and instructions are used for module control. In FPGA, the contrast of the data cached in DDR is calculated, and the annotation of the Binary image and the rectangular fitting calculation of the star trail are completed. According to the rectangular fitting, the suspected fragment target with raised rectangular edges is sent to the DSP. The DSP judges the data reception based on the EMIF bus and performs FCN calculations to determine whether the target is true or false. Finally, inter-frame data association is completed in the DSP. At the same time, DSP serves as the main controller of the hardware, responsible for instruction parsing, telemetry generation, and module program scheduling of the entire system.

4. Experimental Results and Comparison

This section tests the algorithm, mainly including three items: accuracy of star/target detection based on Table 1 parameters; The relationship between the accuracy of star/target detection and the inter-frame interval Fi when SNR = 5; The relationship between the accuracy of star/target detection and the SNR when Fi = 50. Finally, a comparison of performance indicators was made between the test results and previous literature.
Star/debris targets for detecting and tracking based on 100 sets of simulation data are generated based on the parameters in Table 1. The total true value of the number of stars in the simulation is N(s, t), and the true value of the number of debris targets is N(d, t); The number of detected stars is N(s, d), and the number of fragment targets is N(d, d); The detected number of false alarms for stars is N(s, f), and the number of false alarms for debris targets is N(d, f); The number of missed stars is N(s, L), and the number of debris targets is N(d, L). Among them,
N ( s , t ) = N ( s , d ) + N ( s , L ) N ( s , f ) N ( d , t ) = N ( d , d ) + N ( d , L ) N ( d , f )
Therefore, the precision Ps, recall Rc and F1 are as:
P s = ( N ( s , d ) N ( s , f ) ) + ( N ( d , d ) N ( d , f ) ) N ( s , d ) + N ( d , d ) R c = ( N ( s , d ) N ( s , f ) ) + ( N ( d , d ) N ( d , f ) ) ( N ( s , d ) + N ( d , d ) ) + ( N ( s , L ) + N ( d , L ) ) F 1 = 2 P s R c P s + R c

4.1. Accuracy of Star/Target Detection Based on Table 1 Parameters

Generate 100 sets of simulation data for algorithm simulation training, and test the other 100 sets of generated data based on Equations (5) and (6). Among the 100 sets of test data, there are a total number of N(s, t) = 123417 stars, with N(s, d) = 113219 detected stars, N(s, f) = 10771 false alarms, and N(s, L) = 20969 missed stars; debris N(d, t) = 2000, detected debris N(s, d) = 1764, false alarms N(d, f) = 223, and missed debris N(d, L) = 459. Obtain the precision and recall rates of stars and debris, as shown in Table 3.
In 100 sets of generated test data, a total of 472 debris targets were occluded by stellar tails, including 108 targets in occlusion one, 229 targets in occlusion two, and 135 targets in occlusion three. Therefore, it can be estimated that in the simulation scenario based on Table 1, the target’s occlusion rate is 23.6%, the probability of complete occlusion (case one) is 5.4%, the probability of edge-to-tail contact (case three) is 6.75%, and the probability of occlusion below 1/3 (case two) is 11.45%. Considering that both scenario two and scenario three can detect targets through processing, 77.12% of the occluded targets can have a chance to be detected after processing.

4.2. The Relationship between the Accuracy of Star/Target Detection and the Interval Time between Frames

The smaller the time interval between frames, the closer the minimum distance between the two images of star tails and the higher the density of tails in the entire image. When Fi = Lf, the debris target has a (Ld + 2Wf)/Pi = 55% chance of being obscured by the stellar tail, completely losing the possibility of successful detection. Therefore, studying the relationship between the accuracy of star/target detection and the inter-frame interval Pi is helpful for subsequent research on in-orbit imaging modes.
Based on the parameters in Table 1, the 100 × 24 sets of test data were generated with 24 types of Fi, where F i ( L f , 50 ] , F i Z + , for the precision and recall corresponding to the detection results of stars and debris, as in Figure 13.
When the interval between frames is less than 34, the duty cycle of the star trailing overlay image is 76%, making it difficult to accurately register and determine the star between two frames. Rectangle fitting detection is basically ineffective, and the accuracy rate drops sharply.

4.3. The Relationship between the Accuracy of Star/Target Detection and Signal-to-Noise Ratio

The SNR basically determines the detection accuracy. This section generates test data for different SNRs, conducts tests, and provides changes in detection accuracy. Based on the parameters in Table 1, the detection accuracy during statistics is shown in Figure 14.
When the SNR is less than five, the detection accuracy gradually decreases. By 2.5, the detection accuracy is greatly reduced due to the segmentation contrast threshold. By 1.4, it is basically impossible to detect stars and targets. At this time, the tail of stars is affected by noise, and the image is in a dispersed, fractured state, making it difficult to extract features through feature detection; When SNR > 6.5, the distinction between stellar trailing and debris target features is relatively clear, with almost no missed detections, and false alarms decrease as the SNR increases.

4.4. Comparison

Previous algorithms mainly focused on detection, tracking, and target feature extraction, relying mainly on traditional frame differences and correlations, making it difficult to apply to dim target detection in dense stellar long tail backgrounds. This section qualitatively compares the literature method with this method.
In Table 4, previous literature mainly focused on detection and tracking under low SNR conditions. We conduct the rectangle contour fitting to the stellar tails, as the angle and length of the trailing star can be predicted. Then we obtain the image with the stellar tail, and the debris target superposed together, which can be analyzed in three cases. These images are put into the FCN, which can tell which case the images belong to, then we can obtain the information on the occluded faint debris target.
In this paper, under the condition of a detection accuracy of 90%, star and debris target detection were achieved in dense and long tail scenes, solving the problem of occlusion obstruction and continuous tracking caused by tail dragging under a high-duty cycle. When compared with previous algorithms, it has significant advantages.

5. Conclusions

This paper presents a detection method for staring debris targets in high-density star long-tail scenarios. By performing rectangular fitting on trailing stars, the influence of noise on the grayscale values of the trailing region was reduced. Based on the statistical characteristics of the staring targets, FCN was trained and learned to detect occluded targets. In total, 100 sets of 15 consecutive frames of images generated by simulation were tested, with a recall rate of over 90%. We conducted detection accuracy tests on changing inter-frame intervals and SNR and provided the relationship between indicator parameters and key parameters. When the inter-frame interval is less than 34, the detection accuracy begins to decrease; When the SNR is below 2.5, the detection accuracy drops sharply. Compared with previous literature, this design has the ability to detect and track occluded targets while ensuring an over 90% detection rate and a less than 10% false alarm rate simultaneously, effectively solving the problem of continuous tracking caused by target occlusion in previous methods.

Author Contributions

Conceptualization, J.Y.; methodology, D.H.; software, X.S.; validation, J.Y. and W.L.; formal analysis, D.H. and X.W.; investigation, X.S.; resources, X.S.; data curation, W.L.; writing—original draft preparation, J.Y.; writing—review and editing, D.H.; visualization, X.S.; supervision, D.H.; project administration, J.Y.; funding acquisition, D.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study aims to improve and optimize the computational process of the robotic arm and does not involve human or animal studies. Therefore, ethical review and approval were waived for this study.

Informed Consent Statement

This study is not applicable. This study did not involve human studies.

Data Availability Statement

This research is mainly aimed at the optimization of principle algorithm. Experimental data and results are mainly obtained by repeated experimental statistics. Experimental data can be obtained according to the formula in the paper, and comparison data can be obtained according to references.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mohammed, S.; Teja, M.J.S.; Kora, P.; Vunnava, D.E.; Lokesh, M. Space debris detection unit for spacecrafts. In Proceedings of the 2022 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 25–27 January 2022; pp. 1–2. [Google Scholar]
  2. Zarcone, G.; Mariani, L.; Bucciarelli, M.; Hosseim, S.H.; Curiano, F.; Celesti, P.; Palo, L.D. Image processing for geo detection. In Proceedings of the 2021 3rd International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 11–13 June 2021; pp. 1–6. [Google Scholar]
  3. Vance, L.; Thangavelautham, J.; Fernandez, J.M. Top Level Systems Requirements Analysis for a Ground Cooperative Orbital Debris Avoidance System. In Proceedings of the 2021 IEEE Aerospace Conference (50100), Big Sky, MT, USA, 6–13 March 2021; pp. 1–6. [Google Scholar]
  4. Dabescu, R.; Itu, R.; Muresan, M.P.; Turcu, V. Compact Solution for Low Earth Orbit Surveillance. In Proceedings of the 2021 IEEE 17th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 28–30 October 2021; pp. 401–408. [Google Scholar]
  5. Wang, Y.; Niu, Z.; Song, L. A Simulation Algorithm of The Space-based Optical Star Map with Any Length of Exposure Time. In Proceedings of the 2021 International Conference of Optical Imaging and Measurement (ICOIM), Xi’an, China, 27–29 August 2021; pp. 72–76. [Google Scholar]
  6. Xue, D.; Sun, J.; Hu, Y.; Zheng, Y.; Zhu, Y.; Zhang, Y. StarNet: Convolutional Neural Network for Dim Small Target Extraction in Star Image. In Proceedings of the 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM), Xi’an, China, 13–16 September 2018; pp. 1–7. [Google Scholar]
  7. Scire, G.; Piergentili, F.; Santoni, F. Spacecraft Recognition in Co-Located Satellites Cluster Through Optical Measures. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1699–1708. [Google Scholar] [CrossRef]
  8. Liu, M.; Wang, H.; Wen, D.; Ran, X.; Zhao, H.; Yang, S. Celestial Positioning Method and Observation Analysis for Space Object. Acta Photonica Sin. 2014, 43, 153–160. [Google Scholar]
  9. Hou, W.; Mei, F. Dim Target Detection Under Deep Space Background Based on Topological Invariance. Electron. Opt. Control 2019, 26, 12–15, 29. [Google Scholar]
  10. Zhang, J.; Lou, S.; Ren, J. Space Target Detection Algorithm Based on Sequential Multi-frames Projection. Laser Infrared 2015, 45, 88–93. [Google Scholar]
  11. Tan, J.; Huang, J.; Wang, K.; Wu, W. Grading Detecting Method for Observation Images of Geosynchronous Earth Orbit Debris. Acta Photonica Sin. 2017, 46, 90–98. [Google Scholar]
  12. Ding, S.; Wang, H.; Chen, D.; Fu, T.; Gao, M. An improved method for dim space debris detection based on hough transform. In Proceedings of the 2016 IEEE 13th International Conference on Signal Processing (ICSP), Chengdu, China, 6–10 November 2016; pp. 1534–1538. [Google Scholar]
  13. Vallduriola, G.V.; Scharf, A.; Pittet, J.N.; Utzmann, J.; Trujillo, D.A.S.; Vananti, A.; Helfers, T.; Daens, D. The use of different architectures and streak observations algorithms to detect space debris. In Proceedings of the ARCS Workshop 2018—31th International Conference on Architecture of Computing Systems, Braunschweig, Germany, 9–12 April 2018; pp. 1–7. [Google Scholar]
  14. Tan, J.; Huang, J.; Wang, K.; Wu, W. An improved two-dimensional gaussian fitting method for space target positioning. In Proceedings of the 2016 3rd International Conference on Information Science and Control Engineering (ICISCE), Beijing, China, 8–10 July 2016; pp. 204–208. [Google Scholar]
  15. Quan, S.; Ding, D.; Zhaodong, N. Space debris tracking via generalized labeled multi-Bernoulli random finite sets. In Proceedings of the 2019 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Dalian, China, 20–22 September 2019; pp. 1–4. [Google Scholar]
  16. Tao, J.; Cao, Y.; Zhuang, L.; Zhang, Z.; Ding, M. Deep convolutional neural network based small space debris saliency detection. In Proceedings of the 2019 25th International Conference on Automation and Computing (ICAC), Lancaster, UK, 5–7 September 2019; pp. 1–6. [Google Scholar]
  17. Xiang, Y.; Xi, J.; Cong, M.; Yang, Y.; Ren, C.; Han, L. Space debris detection with fast grid-based learning. In Proceedings of the 2020 IEEE 3rd International Conference of Safe Production and Informatization (IICSPI), Chongqing, China, 28–30 November 2020; pp. 205–209. [Google Scholar]
  18. Xi, J.; Xiang, Y.; Ersoy, O.K.; Cong, M.; Wei, X.; Gu, J. Space Debris Detection Using Feature Learning of Candidate Regions in Optical Image Sequences. IEEE Access 2020, 8, 150864–150877. [Google Scholar] [CrossRef]
  19. Liu, D.; Chen, B.; Chin, T.J.; Rutten, M.G. Topological Sweep for Multi-Target Detection of Geostationary Space Objects. IEEE Trans. Signal Process. 2020, 68, 5166–5177. [Google Scholar] [CrossRef]
  20. Han, L.; Tan, C.; Liu, Y.; Song, R. Research on the On-orbit Real-time Space Target Detection Algorithm. Spacecr. Recovery Remote Sens. 2021, 6, 122–131. [Google Scholar]
  21. Gao, P.; Shen, M.; Guo, X.; Yang, D.; Yu, H.; Zhou, W.; Zhao, Y. Space Debris Tracking Strategy through Monte-Carlo Based Track-before-Detect Framework. Astron. Res. Technol. Publ. Natl. Astron. Obs. China 2019, 16, 33–43. [Google Scholar]
Figure 1. Superposition of three consecutive images (red: 1th; green: 2nd; blue: 3rd).
Figure 1. Superposition of three consecutive images (red: 1th; green: 2nd; blue: 3rd).
Applsci 13 09148 g001
Figure 2. Gray amplitude comparison between local target and star (blue: low grayscale; yellow: high grayscale).
Figure 2. Gray amplitude comparison between local target and star (blue: low grayscale; yellow: high grayscale).
Applsci 13 09148 g002
Figure 3. Problems in stellar trailing detection.
Figure 3. Problems in stellar trailing detection.
Applsci 13 09148 g003
Figure 4. Schematic diagram of rectangular fitting (a) general trailing fitting (b) fracture trailing fitting.
Figure 4. Schematic diagram of rectangular fitting (a) general trailing fitting (b) fracture trailing fitting.
Applsci 13 09148 g004
Figure 5. Schematic diagram of fitting rectangular expressions (red: parallel lines on the x-axis).
Figure 5. Schematic diagram of fitting rectangular expressions (red: parallel lines on the x-axis).
Applsci 13 09148 g005
Figure 6. Histogram features of the rectangular region of the trailing image (blue: grayscale value).
Figure 6. Histogram features of the rectangular region of the trailing image (blue: grayscale value).
Applsci 13 09148 g006
Figure 7. Fitting result of trailing rectangle (red, blue, yellow: targets separated in different frames).
Figure 7. Fitting result of trailing rectangle (red, blue, yellow: targets separated in different frames).
Applsci 13 09148 g007
Figure 8. Classification of debris target occlusion.
Figure 8. Classification of debris target occlusion.
Applsci 13 09148 g008
Figure 9. Gray distribution of occluded debris.
Figure 9. Gray distribution of occluded debris.
Applsci 13 09148 g009
Figure 10. Object scene of common star trailing occluded debris (a,b,f) case 3: (left): single gray image; (right): three consecutive frames of superimposed image; (c,d) case 2: (left): single gray image; (right): three consecutive frames of superimposed image; (e) case 1: (left): single gray image; (right): three consecutive frames of superimposed image (red, blue, yellow: targets separated in different frames).
Figure 10. Object scene of common star trailing occluded debris (a,b,f) case 3: (left): single gray image; (right): three consecutive frames of superimposed image; (c,d) case 2: (left): single gray image; (right): three consecutive frames of superimposed image; (e) case 1: (left): single gray image; (right): three consecutive frames of superimposed image (red, blue, yellow: targets separated in different frames).
Applsci 13 09148 g010
Figure 11. Image features of occluded debris object (single frame grayscale image and three frames continuous superposition image).
Figure 11. Image features of occluded debris object (single frame grayscale image and three frames continuous superposition image).
Applsci 13 09148 g011
Figure 12. Engineering hardware testing verification block diagram.
Figure 12. Engineering hardware testing verification block diagram.
Applsci 13 09148 g012
Figure 13. The relationship between the accuracy of star/target detection and frame spacing.
Figure 13. The relationship between the accuracy of star/target detection and frame spacing.
Applsci 13 09148 g013
Figure 14. The relationship between the accuracy of star/target detection and signal-to-noise ratio.
Figure 14. The relationship between the accuracy of star/target detection and signal-to-noise ratio.
Applsci 13 09148 g014
Table 1. Description of simulation parameters.
Table 1. Description of simulation parameters.
No.SymbolParameterValue
1LfStart trailing length26 pixels
2WfStart trailing width4 pixels
3LdDebris size3 × 3 pixels
4FiInter-frame spacing50 pixels
5SNRSignal-to-noise ratio5
6JDebris inter-frame jitter0.5 pixel
7PiMean stellar spacing20 pixels
8 θ Trailing angle70 degrees
Table 2. Small FCN network architecture.
Table 2. Small FCN network architecture.
StageConvolutional KernelInputsOutputs
13 × 3 × 168 × 8 (×3) Note8 × 8 × 16
23 × 3 × 16 × 328 × 8 × 168 × 8 × 32
33 × 3 × 32 × 648 × 8 × 324 × 4 × 64
43 × 3 × 64 × 1284 × 4 × 642 × 2 × 128
53 × 3 × 128 × 2562 × 2 × 1281 × 1 × 256
61 × 2561 × 1 × 2561 × 1
Note: (×3) represents an extended network, used for continuous data input in the front and back three frames.
Table 3. Accuracy of star/target detection based on Table 1 parameters.
Table 3. Accuracy of star/target detection based on Table 1 parameters.
No.TypePrecision PsRecall RcF1
1Star0.90490.76350.8282
2Debris0.87360.69320.7730
3Star + Debris0.90440.76230.8273
Table 4. Comparison with different literatures (×: unavailable; √: available).
Table 4. Comparison with different literatures (×: unavailable; √: available).
No.MethodsLow SNROcclusion DetectionLightweight ComputingContinuous Tracking
1[12]×××
2[13]××
3[15]××
4[16]××
5[17]×××
6[18]××
7[19]×
8Proposed
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yu, J.; Huang, D.; Li, W.; Wang, X.; Shi, X. Dim Staring Debris Targets Detection Method with Dense Long Trailing Star. Appl. Sci. 2023, 13, 9148. https://doi.org/10.3390/app13169148

AMA Style

Yu J, Huang D, Li W, Wang X, Shi X. Dim Staring Debris Targets Detection Method with Dense Long Trailing Star. Applied Sciences. 2023; 13(16):9148. https://doi.org/10.3390/app13169148

Chicago/Turabian Style

Yu, Jiyang, Dan Huang, Wenjie Li, Xianjie Wang, and Xiaolong Shi. 2023. "Dim Staring Debris Targets Detection Method with Dense Long Trailing Star" Applied Sciences 13, no. 16: 9148. https://doi.org/10.3390/app13169148

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop