Article

Segmenting Star Images with Complex Backgrounds Based on Correlation between Objects and 1D Gaussian Morphology

Yunlong Zou, Jinyu Zhao, Yuanhao Wu and Bin Wang

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(9), 3763; https://doi.org/10.3390/app11093763
Submission received: 2 March 2021 / Revised: 14 April 2021 / Accepted: 16 April 2021 / Published: 22 April 2021
(This article belongs to the Section Optics and Lasers)

Abstract
Space object recognition in high Earth orbits (between 2000 km and 36,000 km) is affected by moonlight and clouds, which produce bright or saturated image areas and uneven image backgrounds, making it difficult to separate dim objects from complex backgrounds with gray thresholding alone. In this paper, we present a method for segmenting star images with complex backgrounds based on the correlation between space objects and one-dimensional (1D) Gaussian morphology, shifting the focus from gray thresholding to correlation thresholding. We build 1D Gaussian functions from groups of five consecutive column values of an image under a minimum mean square error rule, and use the correlation coefficients between the column data and the functions to extract objects and stars. Lateral correlation is then repeated around the identified objects and stars to recover their complete outlines, and false alarms are removed by thresholding two quantities: the standard deviation and the ratio of mean square error to variance. We analyze the selection of each threshold, and experimental results demonstrate that the proposed correlation segmentation method has clear advantages on complex backgrounds, which makes it attractive for object detection and tracking on cloudy, bright moonlit nights.

1. Introduction

Threshold segmentation [1] is an important step in recognizing and extracting space objects from star images. Stars and skylight backgrounds are separated by their difference in grayscale, and the segmentation thresholds in common use today are global thresholds [2,3,4,5,6,7,8,9,10,11,12,13] and local thresholds [14,15,16,17,18,19]. A global threshold specifies a single value for all pixels of an image, as in the Otsu method [2,3,4,5], the maximum entropy method [6,7,8,9,10] and the minimum error thresholding method [11]; these are applicable only to star images in which the gray levels of objects and backgrounds are clearly separated. A local threshold is selected according to the features of different regions of a star image, as in the Bernsen algorithm [14], the Niblack algorithm [15,16], and the layered detection strategy over irregularly sized subregions [19]; here the choice of sub-blocks has a great impact on the segmentation results.
However, under bright moonlight, thin clouds and the vignetting of the optical system, local image backgrounds fluctuate strongly, and many gray-saturated regions may appear. In such cases, neither type of threshold achieves good separation between complex backgrounds and dim objects. Gray threshold segmentation also ignores the imaging characteristics of space objects, whose energy distributions are approximately symmetric Gaussian bright spots diffusing into their surroundings [20,21,22,23]. However, the 2D morphology of these bright spots is not always significant. On the one hand, the gray level of a dim target is close to the background intensity, and its 2D Gaussian shape may be drowned in fluctuating backgrounds [24,25]. On the other hand, objects and stars elongate in the direction of relative motion over an exposure of a few seconds, and the Gaussian features along that direction change. Therefore, we can only use the 1D Gaussian morphology of an object streak perpendicular to the direction of relative motion. However, for a locally discontinuous dim streak, the 1D morphology is not consistent everywhere in a complex background, and recognition is poor when a point spread function (PSF) of constant amplitude is used to fit the streak [26]. Deep learning methods have also been applied to target detection [27,28], but they require a large amount of training data.
We aim to achieve star image segmentation using the morphological characteristics of objects, without relying on gray thresholds. The analysis above shows that a 2D Gaussian function cannot accurately describe streak morphology, and that fitting a streak with a constant-amplitude 1D PSF is not suited to the poor background conditions of star images. Accordingly, we fit each group of image data with a 1D Gaussian function of its own amplitude, and use the correlation between the image data and the theoretical object model to identify each group of data with Gaussian shape within a streak area. In addition, we analyze the standard deviation and the ratio of mean square error to variance of each group of local image data, setting reasonable thresholds to separate objects from the background and to remove false alarms. Because it relies on the Gaussian morphology of objects and stars, our correlation segmentation method is independent of gray thresholds; it exploits the morphological features of space objects flexibly, weakens the influence of complex backgrounds, and achieves better segmentation.

2. Object Models and Algorithms

The CCD images taken by the Chinese Academy of Sciences ground-based optical telescope, with a 3.2° × 3.2° field of view, an 800 mm aperture and a 2″ angular resolution, can be expressed as
I = bg + s + ob + n, (1)
where I represents the gray values of a CCD image, bg is the deep space background, s the stars, ob the objects (usually satellites and space debris), and n the noise, which mainly includes Gaussian noise from the circuit and Poisson noise from dark current and background light.
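For intuition, the additive model of Equation (1) can be sketched in a few lines of Python (a minimal illustration rather than our processing code; the image size, source positions, amplitudes and noise levels below are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 120, 120
yy, xx = np.mgrid[0:H, 0:W]

# bg: a smoothly varying deep-space background (illustrative moonlight gradient)
bg = 500.0 + 0.8 * xx + 0.5 * yy

# s + ob: point sources rendered as symmetric 2D Gaussian bright spots
img = bg.copy()
for cx, cy, amp in [(30, 40, 900.0), (80, 25, 300.0), (60, 90, 150.0)]:
    img += amp * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * 1.2 ** 2))

# n: Poisson noise from background light and dark current plus Gaussian circuit noise
I = rng.poisson(img).astype(float) + rng.normal(0.0, 5.0, size=(H, W))
```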
In different observation modes, objects and stars take different shapes in star images. When the telescope tracks the motion of the stars, the stars are relatively stationary and appear as points, whereas the target appears as a streak; in this mode the target streak must be extracted. Conversely, when the telescope moves with the target, the target appears as a point and the stars appear as streaks; in this mode the star streaks must be removed first. The recognition of target streaks and star streaks is therefore the focus of our work. We uniformly call targets or stars with point shapes "point targets" and those with long streak shapes "target streaks", as shown in Figure 1. In either observation mode, either the target or the stars are stationary relative to the star image, so the direction of the streaks coincides with the direction of relative motion, and the streaks are straight.
The length of a streak is related to the target's velocity relative to the stars and to the exposure time. We can increase the exposure time to make a dark target brighter and easier to identify, but as the exposure time grows, bright stars saturate and their bright spots spread over a large surrounding area, which hinders target recognition. Exposure time should therefore be adjusted dynamically according to the observing requirements and weather conditions; for high-orbit space objects it is usually on the order of seconds. We treat the various noises, influence factors and deep space background together as an undulating image background. In addition, we first rotate the star image according to the streak parameters so that the streaks are horizontal (the X-axis of the image is horizontal and the Y-axis vertical). On this basis, we fit the column data with a 1D Gaussian function:
$$\tilde{y}_k = BG + A\exp\left(-\frac{x_k^2}{2\sigma^2}\right) \qquad (2)$$
where ỹ_k are the object gray values of the constructed model, BG is the local image background, A is the object amplitude above the local background intensity, σ² is the variance of the Gaussian function, and x_k are the image ordinates. To ensure that the model fits the image data well, the mean square error should be minimized:
$$f(x_k;\sigma^2) = \frac{1}{n}\sum_{k}^{n}\left(\tilde{y}_k - y_k\right)^2 \rightarrow \min \qquad (3)$$
where f is the mean square error function and y_k are the observed object gray values. We find the σ₀² at which f attains its minimum; the Gaussian function with σ₀² is then the optimal fitting model for the group of data. Setting the derivative of f with respect to σ² to zero gives
$$\frac{A}{\sigma^4}\sum_{k}^{n}\left[A x_k^2 \exp\left(-\frac{x_k^2}{\sigma^2}\right) - \left(y_k - BG\right)x_k^2 \exp\left(-\frac{x_k^2}{2\sigma^2}\right)\right] = 0 \qquad (4)$$
The number of pixels occupied by dim objects in star images differs and is unstable [29]. In star images taken by our wide-field telescope, dim objects or stars usually occupy a few pixels, while bright stars occupy more. The object size on the image is determined by the telescope configuration and parameter settings, but this does not affect the extraction of column data with Gaussian form, and five consecutive values suffice to reflect the trend of the Gaussian morphology; selecting more data increases the computational burden and invites overfitting. Therefore, we fit a Gaussian model to five consecutive pixels of a column at a time, within regions of interest or over the entire star image, and take the center of the Gaussian model as the zero point, so the coordinates are
$$x_0^2 = 0,\quad x_{-1}^2 = x_1^2 = 1,\quad x_{-2}^2 = x_2^2 = 4 \qquad (5)$$
In addition, we set a parameter t = exp{−1/(2σ²)}, and Equation (4) is rewritten as
$$8At^7 - 4\left(y_{-2} + y_2 - 2BG\right)t^3 + 2At - \left(y_{-1} + y_1 - 2BG\right) = 0 \qquad (6)$$
The local background BG and the peak value A in each group of column data are defined as
$$BG = \frac{1}{2}\left(\min\{y_k\} + \operatorname{Nextmin}\{y_k\}\right) \qquad (7)$$
$$A = \max\{y_k\} - BG \qquad (8)$$
where Nextmin{·} denotes the second-smallest value of the group of data. Equation (6) is easy to solve for t₀, so the variance is
$$\sigma_0^2 = -\frac{1}{2\ln t_0} \qquad (9)$$
For each group of column data, whether or not it belongs to an object, an ideal object model closest to the data is generated. We then segment the star image and identify objects and stars based on the correlation between the data and these ideal object models. The few false alarms with Gaussian morphology can be removed by analyzing the standard deviation and the 2D characteristics of space objects.
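The per-group fit of Equations (5)–(9) can be sketched as follows (a minimal sketch under the stated assumptions, not the authors' released code; NumPy's generic polynomial root finder stands in for whatever solver is actually used):

```python
import numpy as np

def fit_group(y):
    """Fit a 1D Gaussian to five consecutive column values
    y = [y_-2, y_-1, y_0, y_1, y_2]; returns (BG, A, sigma0_sq)
    per Equations (6)-(9), or None if no physical root exists."""
    y = np.asarray(y, dtype=float)
    y_sorted = np.sort(y)
    BG = 0.5 * (y_sorted[0] + y_sorted[1])       # Eq. (7): mean of the two smallest values
    A = y.max() - BG                             # Eq. (8)

    # Eq. (6): 8A t^7 - 4(y_-2 + y_2 - 2BG) t^3 + 2A t - (y_-1 + y_1 - 2BG) = 0
    coeffs = np.zeros(8)
    coeffs[0] = 8.0 * A                          # t^7
    coeffs[4] = -4.0 * (y[0] + y[4] - 2.0 * BG)  # t^3
    coeffs[6] = 2.0 * A                          # t^1
    coeffs[7] = -(y[1] + y[3] - 2.0 * BG)        # t^0
    roots = np.roots(coeffs)
    real = roots[np.isreal(roots)].real
    real = real[(real > 0.0) & (real < 1.0)]     # t = exp{-1/(2 sigma^2)} must lie in (0, 1)
    if real.size == 0:
        return None
    t0 = real[0]                                 # with several admissible roots, one could keep the one minimizing f
    return BG, A, -1.0 / (2.0 * np.log(t0))      # Eq. (9)
```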

3. Parameter Analysis of Object Determination

3.1. Correlation Coefficient

Correlation coefficients (denoted by r) measure the degree of correlation between the data and the models. Usually the center of an object streak occupies one or two rows of pixels; we call these two types of object streaks Ob 1 and Ob 2, as shown in Figure 2a,b. Taking the position of each pixel along the red dotted lines as the longitudinal center, we conduct model fitting; the resulting correlation coefficients are shown in Figure 2c,d. We focus on six pixels at different positions of the two objects, covering object centers, object edges and image backgrounds; their data and Gaussian models are shown in Figure 3.
It can be seen that the correlation coefficients exceed 0.9 from the center of Ob 1 to its horizontal edges, indicating high correlation. At image background 1, however, the correlation is weak, below 0.5. Line1 and Line3 are the two vertical edges of Ob 1, where the Gaussian models are slightly mismatched with the data; their correlation coefficients are usually below 0.6 but above those of image background 1. At image background 2 there is a noise spike whose correlation coefficient is also high. The centers of Ob 2 are Line2 and Line3, with most correlation coefficients between 0.6 and 0.9. Line1 and Line4 are the two vertical edges, where the Gaussian models are heavily mismatched with the data, so there is no correlation between the two.
If the selected correlation coefficient threshold is low, objects with weak Gaussian morphology are recognized well, but many false alarms appear, as at pixel d. Conversely, if the threshold is high, dim objects with low model correlation cannot be recognized. We therefore set the correlation coefficient threshold to 0.5: the objects and stars can still be recognized, and the few false alarms from background noise are removed afterwards.
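In code, the r test of this subsection might look as follows (a sketch with hypothetical helper names; fit_group from the Section 2 sketch supplies BG, A and σ₀²):

```python
import numpy as np

X2 = np.array([4.0, 1.0, 0.0, 1.0, 4.0])  # x_k^2 for k = -2, -1, 0, 1, 2, Eq. (5)

def model_values(BG, A, sigma0_sq):
    """Evaluate the fitted 1D Gaussian model at the five column positions."""
    return BG + A * np.exp(-X2 / (2.0 * sigma0_sq))

def passes_correlation(y, BG, A, sigma0_sq, r_thresh=0.5):
    """Keep a group as an object/star candidate when the Pearson correlation
    between the data and the fitted model exceeds the 0.5 threshold."""
    r = np.corrcoef(np.asarray(y, dtype=float), model_values(BG, A, sigma0_sq))[0, 1]
    return r > r_thresh
```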

3.2. Standard Deviation

We take Figure 4a as an example to illustrate removing false alarms with the standard deviation. First, all pixels of the image are sorted by increasing gray value, and we calculate the standard deviations of the lowest 1–100% of pixels; we call this the grayscale sequence (GS) method. Based on experience, we start calculating the change rate of the standard deviation from the lowest 90% of pixels and control the calculation interval to reduce the number of calculations. When the GS method is applied to large data sets, partitioning the star image becomes important: we divide it into multiple subimages and process only the regions of interest. The standard deviations obtained are increasing, and we analyze their change rate to find the mutation value at which the standard deviation marks the boundary between objects and image backgrounds. The change rate of the standard deviation is stable while the selected pixels belong to the background, and it fluctuates significantly only once a large number of object pixels are included. The standard deviation threshold selected with the GS method therefore lags, and some dark objects and stars may be filtered out.
This hysteresis can be avoided if the standard deviation corresponds to the object itself rather than to the overall trend of image grayscales. In our column recognition method the data are grouped in fives; we calculate the standard deviation of every group and sort them incrementally, which we call the standard deviation sequence (SDS) method. Since the standard deviation of a group reflects the fluctuation of an object or of the background, its change rate shifts markedly at the transition from background groups to dark-object groups. The threshold-selection results of both methods are shown in Figure 4. In Figure 4b,d, a mutation value is greater than all previous values, and the mean value over an interval of fixed size increases markedly across it. We cannot predict how pronounced the mutation of the change rate is at this value, because it lies in a short transition interval after which the change rate increases rapidly. What we can do is find an optimal value within the transition interval and take the standard deviation at that position as the threshold for segmenting backgrounds and dark objects.
The selected thresholds for the two methods are 11.4 and 7.29, respectively; we verify the effectiveness of the SDS method in Section 4.2. If the standard deviation of a group of data is below 7.29, we treat it directly as background and skip modeling, which reduces model-fitting time by 71% and avoids the risk of misidentifying background data with Gaussian morphology as target data. Of course, if the standard deviation of a group exceeds 7.29, we cannot rule out that it is a bad pixel or background noise.
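A possible reading of the SDS selection in code (the mutation test below is our illustrative interpretation of the "relatively large increase in the mean value" criterion; the window size and jump factor are assumptions):

```python
import numpy as np

def sds_threshold(groups, window=50, jump=2.0):
    """Standard deviation sequence (SDS) sketch: sort the per-group standard
    deviations and locate the change-rate mutation that marks the transition
    from background groups to dark-object groups."""
    stds = np.sort([np.std(g) for g in groups])
    rate = np.diff(stds)                      # change rate of the sorted sequence
    for i in range(window, rate.size - window):
        # mutation: larger than all previous rates, and the local mean of
        # the rate jumps by the assumed factor across this position
        if rate[i] > rate[:i].max() and \
           rate[i:i + window].mean() > jump * rate[i - window:i].mean():
            return stds[i]                    # threshold at the mutation position
    return stds[-1]                           # fallback: no mutation found
```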

3.3. K Value

The mean square error between the data and the models varies greatly with the model amplitude: values at an object are much larger than those at the background, which does not imply that the fit at the background is better. It is therefore crucial to normalize the mean square error across object models of different amplitudes.
The mean square error represents the degree of deviation between the model and the data, whereas the variance represents the degree of fluctuation of the data. Now, the ratio of the two is defined as
$$K = \frac{MSE}{D(y)} = \frac{\sum_k\left(\tilde{y}_k - y_k\right)^2 / n}{\sum_k\left(y_k - m\right)^2 / (n-1)} \qquad (10)$$
where MSE is the mean square error, D(y) is the variance, m is the mean of the group of data, n is the number of data points, and K is the mean square error between the model and the data per unit of data fluctuation.
We analyze the K values of Ob 1 and Ob 2. K values are small when the correlation coefficients are large, with an obvious linear relation; when r is 0.5, both K values are below 1, as shown in Figure 5a,b. For a star image containing more objects, the relation between K values and correlation coefficients is shown in Figure 5c,d. Some K values decrease as r increases, others increase with r, and the rest show no regular distribution, but all exceed the K value at r = 0.5. We consider r ≈ 0 to mean that the two groups of data are uncorrelated, i.e., orthogonal; such data can be regarded approximately as random noise with equal means.
Finally, we filter out false alarms by setting a K threshold. We focus on the part where r is greater than 0.5 and K decreases as r increases, which reflects significant Gaussian morphology of the image data and high correlation with the model. From Figure 5, K is 1 when r is 0.5, and false-alarm pixels are randomly distributed where K exceeds 1.5. We therefore set the K threshold to 1 (any value between 1 and 1.5 would serve). The remaining parts are background noise, bad pixels and object edges. In this way, objects and stars are well segmented from the star image.
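Combining the three thresholds, the per-group decision can be sketched as follows (hypothetical names; model_values comes from the Section 3.1 sketch, and Equation (10) gives K):

```python
import numpy as np

def k_value(y, y_model):
    """K = MSE / D(y), Eq. (10): model-data deviation per unit of data fluctuation."""
    y = np.asarray(y, dtype=float)
    y_model = np.asarray(y_model, dtype=float)
    n = y.size
    mse = np.sum((y_model - y) ** 2) / n
    var = np.sum((y - y.mean()) ** 2) / (n - 1)
    return mse / var

def is_object_group(y, BG, A, sigma0_sq, sigma_thresh=7.29, k_thresh=1.0):
    """Combined per-group decision of Section 3: standard deviation
    preselection, r > 0.5, and K below the chosen threshold
    (1 here; any value in [1, 1.5] would serve)."""
    if np.std(y) < sigma_thresh:            # background group: skip modeling entirely
        return False
    ym = model_values(BG, A, sigma0_sq)     # from the Section 3.1 sketch
    r = np.corrcoef(np.asarray(y, dtype=float), ym)[0, 1]
    return bool(r > 0.5 and k_value(y, ym) < k_thresh)
```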

4. Experimental Results

In the experiments, real and simulated star images of different scenes are used to verify the effectiveness of our proposed correlation segmentation method. First, we analyze the problems arising in the segmentation process in two simple scenes and demonstrate the effect of horizontal supplementary recognition. Second, we verify the validity of the SDS method. Then, the results of our method are compared with traditional threshold segmentation methods on complex backgrounds. Finally, the identification rate and false alarm rate of our method on complex backgrounds are obtained from two special scenes of 2000 frames each.

4.1. Horizontal Supplementary Recognition

The correlation segmentation method recognizes and extracts point targets and streaks with 1D Gaussian morphology. We choose two simple scenes for analysis; the difference between them is whether a point object in the image is close to a streak, as shown in Figure 6. The vertical Gaussian features of objects are not significant at their upper and lower edges, so the identified point object is a strip with its longitudinal edges lost. Moreover, when an object and a star approach each other vertically, the column recognition method is disturbed and the point object becomes unrecognizable. Therefore, horizontal supplementary recognition should be carried out on the rows (about two lines) above and below the identified objects and stars, using the same modeling method; a sketch follows below. This supplements point object recognition well but has little impact on the recognition of extended object streaks.
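One way to realize this supplementary pass (a sketch; group_test is the hypothetical per-group classifier assembled from Sections 2 and 3, here applied to horizontal five-pixel groups):

```python
def supplement_rows(image, detected, group_test, reach=2):
    """Horizontal supplementary recognition sketch: around every detected
    pixel, repeat the five-pixel Gaussian-fit test along rows within
    'reach' lines above and below."""
    extra = set()
    nrows, ncols = image.shape
    for row, col in detected:
        for dr in range(-reach, reach + 1):
            if dr == 0:
                continue
            r = row + dr
            if 0 <= r < nrows and 2 <= col <= ncols - 3:
                group = image[r, col - 2:col + 3]   # horizontal 5-pixel group
                if group_test(group):
                    extra.update((r, c) for c in range(col - 2, col + 3))
    return extra
```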

4.2. Comparison of GS Method and SDS Method

For bright objects, the star image is well segmented by the correlation segmentation method combined with the standard deviation threshold selected by either method of Section 3.2; the only difference is that the lower threshold results in slightly more noise points. For a dark object without clear contour or significant Gaussian morphology, however, the SDS method has an obvious advantage, as shown in Figure 7.
We analyze the number of recognized pixels of the five objects under different σ thresholds. When the σ threshold exceeds 24.2, the pixel count begins to decrease, meaning that parts of the object edges are filtered out, as shown in Figure 7d. The threshold obtained with the SDS method, 24.11, agrees closely with the optimal threshold of 24.2. This result shows that the SDS method is more effective: it identifies dark objects while still filtering out most of the noise. Although the GS method removes background noise well, it makes object recognition incomplete or impossible.

4.3. Comparison of Segmentation Methods

To verify the effectiveness of our star image segmentation method combining the correlation coefficient, standard deviation and K value, global and local threshold segmentation methods are used for comparison. We set the thresholds statistically: T1 = μ + σ and T2 = μ + 2σ. Because the local threshold method gives different segmentation results for different sub-blocks, the sub-block size is chosen so that the fewest false alarms are produced while identifying as many objects as possible. In the comparison, an object streak is considered successfully identified when more than half of its length is recognized; otherwise the object is not counted in the figures. If multiple objects that overlap in the original image are identified as one extended streak whose shape is easy to distinguish, we count according to the original image. The results are shown in Figure 8.
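The comparison baselines are straightforward to state in code (a minimal sketch; μ and σ denote the image mean and standard deviation):

```python
import numpy as np

def global_threshold_mask(img, k=1):
    """Baseline global segmentation used for comparison:
    T1 = mu + sigma (k = 1) and T2 = mu + 2*sigma (k = 2)."""
    return img > img.mean() + k * img.std()
```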
The star images, 120 × 120 pixels in size, have uneven backgrounds. Recognition is poor under both the low and the high global threshold, and the results of local threshold segmentation are clearly better; however, the contours of some objects remain unclear, and the choice of windows causes many false alarms. Gray threshold segmentation thus adapts poorly, while our proposed segmentation method is more effective and identifies more objects. The segmentation results of the different star images with the different methods are listed in Table 1.
In addition, we simulated 2000 frames each based on the backgrounds of S5 and S6, with forty target streaks randomly distributed in every frame, and calculated the recognition rate and false alarm rate; the results are shown in Table 2. Our correlation segmentation method achieves a high recognition rate while keeping the false alarm rate low. Both metrics are of course influenced by many factors, such as noise level, target intensity, background complexity and threshold selection. To raise the recognition rate, we can lower each threshold at the cost of a higher false alarm rate, so a balance must be struck according to the application environment. Moreover, large background fluctuations and low target intensity strongly affect target recognition.
For star images of 120 × 120 pixels, our algorithm runs on an Intel Core i5-6500 CPU at 3.2 GHz under MATLAB R2016a with an average computation time of about 0.3 s. For ground-based telescope observation the exposure time is generally on the order of seconds, and when processing star images with complex backgrounds we often select small regions of interest, so the method meets real-time requirements. Over the simulation experiments of 2000 frames each, our method stably achieves a recognition rate above 97% and a false alarm rate below 0.5%.
Because the grayscale of the image background varies greatly from place to place, no single global threshold can separate all objects from the background. The local threshold method, in turn, must choose sub-blocks appropriate to the object size: a complete object should lie within one sub-block, and every sub-block should contain objects, otherwise false alarms and incomplete recognition across sub-blocks arise. For a star image with a large number of objects it is difficult to satisfy both conditions. Our proposed correlation segmentation method handles these cases well and overcomes the limitations of gray segmentation.

5. Conclusions

The influence of clouds and moonlight challenges star image segmentation, and local threshold methods mitigate but do not overcome it. We present a new star image segmentation method in which the correlation coefficients between the data and models are used to segment star images, and the standard deviation threshold and K value are combined to remove false alarms. The results show that our method works well on complex backgrounds; it can be used for object detection and tracking on cloudy, bright moonlit nights and can be applied in machine learning systems for astronomical observation and object classification.

Author Contributions

Conceptualization, Y.Z. and Y.W.; methodology, Y.Z. and B.W.; software, Y.Z. and Y.W.; validation, Y.Z., J.Z., Y.W. and B.W.; formal analysis, Y.W. and B.W.; investigation, Y.Z. and J.Z.; resources, J.Z.; data curation, J.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z.; visualization, Y.W.; supervision, B.W.; project administration, J.Z.; funding acquisition, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support of the Joint Fund of Astronomy of the National Natural Science Foundation of China (No. U1831106) and the Third Phase of Innovative Engineering Projects of the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences (No. 065x32CN60).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

Authors have no relevant financial interests and no other potential conflicts of interest to disclose in the manuscript.

References

  1. Sezgin, M.; Sankur, B. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 2004, 13, 146–168. [Google Scholar]
  2. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  3. Xu, X.; Xu, S.; Jin, L.; Song, E. Characteristic analysis of Otsu thresholding and its applications. Pattern Recognit. Lett. 2011, 32, 956–961. [Google Scholar] [CrossRef]
  4. Moghaddamn, R.F.; Cheriet, M. AdOtsu: An adaptive and parameterless generalization of Otsu’s method for document image binarization. Pattern Recognit. 2012, 45, 2419–2431. [Google Scholar] [CrossRef]
  5. Cai, H.; Yang, Z.; Cao, X.; Xia, W.; Xu, X. A new iterative triclass thresholding technique in image segmentation. IEEE Trans. Image Process. 2014, 23, 1038–1046. [Google Scholar] [CrossRef]
  6. Kapur, J.N.; Sahoo, P.K.; Wong, A.K.C. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285. [Google Scholar] [CrossRef]
  7. Sahoo, P.K.; Arora, G. A thresholding method based on two-dimensional Renyi’s entropy. Pattern Recognit. 2004, 37, 1149–1161. [Google Scholar] [CrossRef]
  8. Chang, C.I.; Du, Y.; Wang, J. Survey and comparative analysis of entropy and relative entropy thresholding techniques. IEE Proc. Vis. Image Signal Process. 2006, 153, 837–850. [Google Scholar] [CrossRef]
  9. Chen, J.; Guan, B.; Wang, H.; Zhang, X.; Tang, Y.; Hu, W. Image thresholding segmentation based on two dimensional histogram using gray level and local entropy information. IEEE Access 2018, 6, 5269–5275. [Google Scholar] [CrossRef]
  10. Zou, Y.B.; Zhang, J.Y.; Upadhyay, M.; Sun, S.F.; Jiang, T.Y. Automatic image thresholding based on Shannon entropy difference and dynamic synergic entropy. IEEE Access 2020, 8, 171218–171239. [Google Scholar] [CrossRef]
  11. Kittler, J.; Illingworth, J. Minimum error thresholding. Pattern Recognit. 1986, 19, 41–47. [Google Scholar] [CrossRef]
  12. Ng, H.F. Automatic thresholding for defect detection. Pattern Recognit. Lett. 2006, 27, 1644–1649. [Google Scholar] [CrossRef]
  13. Lee, S.U.; Chung, S.Y.; Park, R.H. A comparative performance study of several global thresholding techniques for segmentation. Comput. Vis. Graph. Image Process. 1990, 52, 171–190. [Google Scholar] [CrossRef]
  14. Bernsen, J. Dynamic thresholding of gray-level images. In Eighth International Conference on Pattern Recognition Proceedings; IEEE Computer Society Press: Washington, DC, USA, 1986; pp. 1251–1255. [Google Scholar]
  15. Niblack, W. An Introduction to Digital Image Processing; Prentice Hall: Englewood Cliffs, NJ, USA, 1986; pp. 115–116. [Google Scholar]
  16. Trier, O.D.; Jain, A.K. Goal-directed evaluation of binarization methods. IEEE Trans. Pattern Anal. 1995, 17, 1191–1201. [Google Scholar] [CrossRef]
  17. Pai, Y.T.; Chang, Y.F.; Ruan, S.J. Adaptive thresholding algorithm: Efficient computation technique based on intelligent block detection for degraded document images. Pattern Recognit. 2010, 43, 3177–3187. [Google Scholar] [CrossRef]
  18. Hedjam, R.; Moghaddam, R.F.; Cheriet, M. A spatially adaptive statistical method for the binarization of historical manuscripts and degraded document images. Pattern Recognit. 2011, 44, 2184–2196. [Google Scholar] [CrossRef]
  19. Zheng, C.X.; Pulido, J.; Thorman, P.; Hamann, B. An improved method for object detection in astronomical images. Mon. Not. R. Astron. Soc. 2015, 451, 4445–4459. [Google Scholar] [CrossRef] [Green Version]
  20. Salomon, P.M.; Goss, W.C. A Microprocessor-Controlled CCD Star Tracker. In Proceedings of the AIAA 14th Aerospace Sciences Meeting, Washington, DC, USA, 26–28 January 1976; pp. 76–116. [Google Scholar]
  21. Li, J.; Liu, Z.; Liu, F. Using sub-resolution features for self-compensation of the modulation transfer function in remote sensing. Opt. Express. 2017, 25, 4018–4037. [Google Scholar] [CrossRef]
  22. Li, Z.; Liang, B.; Zhang, T. Image simulation for airborne star tracker under strong background radiance. In Proceedings of the 2012 IEEE International Conference on Computer Science and Automation Engineering (CSAE), Zhangjiajie, China, 25–27 May 2012; pp. 644–648. [Google Scholar]
  23. Wei, M.; Xing, F.; You, Z. A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images. Light Sci. Appl. 2018, 7, 18006. [Google Scholar] [CrossRef]
  24. Delabie, T. Star position estimation improvements for accurate star tracker attitude estimation. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Kissimmee, FL, USA, 5–9 January 2015; pp. 1–15. [Google Scholar]
  25. Sun, T.; Xing, F.; You, Z.; Wei, M. Motion-blurred star acquisition method of the star tracker under high dynamic conditions. Opt. Express 2013, 21, 20096–20110. [Google Scholar] [CrossRef]
  26. Kouprianov, V. Distinguishing features of CCD astrometry of faint GEO objects. Adv. Space Res. 2008, 41, 1029–1038. [Google Scholar] [CrossRef]
  27. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. 2015, 39, 1–14. [Google Scholar] [CrossRef] [Green Version]
  28. Uijlings, J.R.R.; van de Sande, K.E.A.; Gevers, T.; Smeulders, A.W.M. Selective search for object recognition. Int. J. Comput. Vision 2013, 104, 154–157. [Google Scholar] [CrossRef]
  29. Zhang, C.; Chen, B.; Zhou, X. Small target trace acquisition algorithm for sequence star images with moving background. Opt. Precis. Eng. 2008, 16, 524–530. [Google Scholar]
Figure 1. Grayscale distribution of space objects. (a) Point object. (b) Object streak.
Figure 2. Two types of object streaks and correlation coefficients between data and models in different positions (pixel a: center of Ob 1, pixel b: image background 1, pixel c: longitudinal edge of Ob 1, pixel d: image background 2, pixel e: center of Ob 2, pixel f: longitudinal edge of Ob 2). (a) Ob 1. (b) Ob 2. (c) Correlation coefficients of Ob 1. (d) Correlation coefficients of Ob 2.
Figure 3. Image data and fitting models at the six positions of Ob 1 and Ob 2 (pixels (a–f)).
Figure 4. Standard deviation thresholds with the GS method and the SDS method. (a) A star image with bright stars, faint stars and object streaks. (b) σ change rate with the GS method. (c) σ change with the GS method. (d) σ change rate with the SDS method. (e) σ change with the SDS method.
Figure 5. The relation between K values and correlation coefficients. (a) Ob 1. (b) Ob 2. (c) Pixel: 200 × 200. (d) Pixel: 400 × 400.
Figure 6. Recognition results in two scenes including point objects and star streaks are shown, and the difference between the two scenes is whether the point object in the image is close to the streak. Supplementary recognition is beneficial to the completion of point object contour recognition, but has little effect on streak recognition. (a) Original images containing stars and objects. (b) Column recognition. (c) Horizontal supplementary recognition.
Figure 7. Recognition results of extended objects. (a) Original image. (b) GS method (th1 = 37.67). (c) SDS method (th2 = 24.11). (d) Recognized pixel number of the five objects under different σ thresholds.
Figure 8. Results of star image segmentation (green squares are false alarms, orange squares are objects that are not clearly identified, and red ellipses are distinguishable objects). (a) S4 is an original image, and the lower right part is seriously affected; S5 has forty simulation streaks, and its background comes from S4; S6 is a simulation image with forty streaks, and the upper left part is affected. (b) Global T1 = μ + σ. (c) Global T2 = μ + 2σ. (d) Local T1 = μ + σ. (e) Local T2 = μ + 2σ. (f) Our proposed method.
Table 1. Segmentation results of different star images with different methods.

                                Global Methods      Local Methods       Our Method
Scenes                          T1       T2         T1       T2
Number of recognized objects
  S4                            6        3          27       13         47
  S5                            1        3          34       21         40
  S6                            5        1          30       11         40
Number of false alarms
  S4                            Heavy    0          8        0          0
  S5                            Heavy    Slight     15       0          0
  S6                            Heavy    0          4        0          1
Table 2. Recognition results of simulated star images, 2000 frames each (m: mean, σ: standard deviation).

          Recognized Objects    Recognition    False Alarms      False Alarm    Algorithm
Scenes    m         σ           Rate           m        σ        Rate           Speed (s)
S5        39.02     1.08        97.6%          0.17     0.47     0.43%          0.29
S6        39.16     1.05        97.9%          0.19     0.51     0.48%          0.30
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
