Multi-Level Convolutional Network for Ground-Based Star Image Enhancement
Abstract
1. Introduction
1.1. Literature Review
1.2. Analysis of Image Characteristics
1.3. Brief Description of Work and Contribution
- (1) Firstly, most CNN-based background suppression algorithms feed the whole image into the feature extraction network without relying on any prior information; they are completely blind background suppression methods. The noise and stray light in different real images have different effects: they are highly complex and randomly distributed over individual local regions or even the entire image, which makes the generalization ability of such networks relatively weak. If the feature extraction network is given an estimate of the background, used as auxiliary information to help it understand the background noise intensity distribution of each image, its generalization ability may improve.
- (2) Secondly, the model’s receptive field is often limited to a single scale. Many neural-network-based background suppression algorithms use a single fixed receptive-field scale, yet background feature information is rarely confined to a small fixed-scale area, which makes it difficult for the network to extract accurate, discriminative features and leads to a poor overall suppression effect. Fully fusing receptive fields at different scales helps the network exploit multi-level spatial features.
- (3) Thirdly, deep networks lose information on weak targets. Although increasing the number of layers yields higher-level semantic information and a larger receptive field, which helps the enhancement effect, a small target in a star image often covers only a dozen or so pixels, and its fine-grained information is lost as layers and pooling operations accumulate.
- In view of the lack of prior information on image content and the resulting poor fitting and generalization ability of the model, and inspired by the literature [36], we use multiple residual bottleneck structures to obtain a weight map that serves as a level estimate of the background and target information, and adopt it as auxiliary information to adjust the original image, so that the fitting and generalization ability of the network model improves and training becomes more stable.
- Considering the problem of a single receptive field, we designed a U-Net cascade module with different depths and inserted a new multi-scale convolutional block into it, which has fewer parameters than a standard convolutional block (a simplified sketch of such a block is given after this list). It not only widens the network, but also captures multi-scale background feature information at a fixed network depth. In experiments, it improves the model and makes it converge faster.
- Recursive feature fusion: small-target information is lost as the network deepens, so we adopt a recursive fusion strategy. Specifically, we use repeated connections within the same level of the model, giving it the ability to regulate semantic information horizontally and making the semantic information it learns more effective and accurate.
- For training, we combine simulated images with real images, where the real data are optical space-debris detection images taken by a ground-based telescope platform. We transfer the parameters obtained from training on the simulated images as pre-training for the ground-based images, which makes the network fit the ground-based data better and faster.
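To make the multi-scale convolution idea above more concrete, the sketch below shows one plausible way to build such a block from parallel branches with different receptive fields. This is only an illustrative PyTorch sketch under our own assumptions (branch kernel sizes, dilation rates, and channel split); the actual MSC block used in MBS-Net is the one defined in Section 2.3.

```python
import torch
import torch.nn as nn

class MultiScaleConvBlock(nn.Module):
    """Illustrative multi-scale convolution block (NOT the paper's exact MSC design).

    The output channels are split across parallel branches with different
    receptive fields (1x1, 3x3, and dilated 3x3 convolutions), so several
    scales are captured at a single network depth with fewer parameters
    than one full-width 3x3 convolution.
    """
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        b = out_ch // 4
        self.branch1 = nn.Conv2d(in_ch, b, kernel_size=1)
        self.branch2 = nn.Conv2d(in_ch, b, kernel_size=3, padding=1)
        self.branch3 = nn.Conv2d(in_ch, b, kernel_size=3, padding=2, dilation=2)
        self.branch4 = nn.Conv2d(in_ch, out_ch - 3 * b, kernel_size=3, padding=4, dilation=4)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the multi-scale responses along the channel dimension.
        y = torch.cat([self.branch1(x), self.branch2(x),
                       self.branch3(x), self.branch4(x)], dim=1)
        return self.act(y)

if __name__ == "__main__":
    block = MultiScaleConvBlock(32, 32)
    x = torch.randn(1, 32, 64, 64)
    print(block(x).shape)  # torch.Size([1, 32, 64, 64])
```

In a design of this kind, the 1 × 1 and dilated branches cover several receptive-field sizes at the same depth while using fewer parameters than a single full 3 × 3 convolution, which is the property the contribution above relies on.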
2. Methods
2.1. The Overall Architecture
2.2. Background Information Estimation Stage
2.3. Multi-Level U-Net Cascade Module
2.4. Recursive Feature Fusion Module
2.5. Processing of Data Sets
3. Results
3.1. Introduction to the Evaluation Indicators
3.1.1. Presentation of the Evaluation Index for the Global Area
3.1.2. Presentation of the Evaluation Index for Local Regions
- Threshold segmentation is performed uniformly with the Otsu [49] algorithm to determine the range of pixels belonging to the target itself within a single target region of the labeled image, calculated as follows.
- First, the region around the target is normalized to obtain its normalized histogram, and the mean gray level ($m_g$) of this region is calculated. In the following formula, $i$ denotes the gray level, which ranges from 0 to $L-1$, $p_i$ is the probability of a pixel having gray level $i$ ($n_i$ pixels at that level), and $n$ is the total number of pixels in the region:
$$ m_g = \sum_{i=0}^{L-1} i\,p_i, \qquad p_i = \frac{n_i}{n} \tag{3} $$
- Second, for $k = 0, 1, \ldots, L-1$, the cumulative sums $P_1(k)$ and $P_2(k)$ are computed as follows:
$$ P_1(k) = \sum_{i=0}^{k} p_i, \qquad P_2(k) = \sum_{i=k+1}^{L-1} p_i = 1 - P_1(k) \tag{4} $$
- Third, the cumulative means $m_1(k)$ and $m_2(k)$ are calculated:
$$ m_1(k) = \frac{1}{P_1(k)} \sum_{i=0}^{k} i\,p_i, \qquad m_2(k) = \frac{1}{P_2(k)} \sum_{i=k+1}^{L-1} i\,p_i \tag{5} $$
- Fourth, the between-class variance $\sigma_B^2(k)$ is calculated, and the threshold $k^*$ is the value of $k$ that maximizes $\sigma_B^2(k)$; if the maximum is not unique, $k^*$ is obtained by averaging the corresponding values of $k$:
$$ \sigma_B^{2}(k) = P_1(k)\bigl(m_1(k) - m_g\bigr)^{2} + P_2(k)\bigl(m_2(k) - m_g\bigr)^{2} \tag{6} $$
- Fifth, according to the obtained threshold, threshold segmentation is performed on the area around the target, yielding the binary region $I_f$:
$$ I_f(x, y) = \begin{cases} 1, & I(x, y) > k^{*} \\ 0, & \text{otherwise} \end{cases} \tag{7} $$
- A structuring element $B$ is set. The region $I_f$ obtained after threshold segmentation is first eroded and then dilated (a morphological opening), with the aim of removing impulse noise while retaining the original target region:
$$ I_o = (I_f \ominus B) \oplus B \tag{8} $$
- The foreground (target) area and the background area are then finalized on the processed image.
- The SCR of each target region is calculated after background suppression and compared with the SCR of the same region in the original image, using the target and background pixel ranges determined above; the ratio of the two gives the SCRG used for local evaluation (a simple computational sketch is given below).
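As an illustration of the local evaluation procedure above, the following Python sketch segments a target patch with Otsu’s method, cleans the mask with an erosion followed by a dilation, and computes an SCR value for the patch. The 3 × 3 structuring element and the SCR definition |μ_t − μ_b| / σ_b are assumptions made for this sketch, not necessarily the exact choices of the paper.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(patch: np.ndarray, levels: int = 256) -> int:
    """Otsu's threshold on a uint8 patch, following the steps listed above."""
    hist, _ = np.histogram(patch, bins=levels, range=(0, levels))
    p = hist / hist.sum()                      # normalized histogram p_i
    mg = np.sum(np.arange(levels) * p)         # mean gray level m_g
    P1 = np.cumsum(p)                          # cumulative sum P_1(k)
    m = np.cumsum(np.arange(levels) * p)       # cumulative mean up to k
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mg * P1 - m) ** 2 / (P1 * (1.0 - P1))  # between-class variance
    sigma_b2 = np.nan_to_num(sigma_b2)
    # Average the k values if the maximum is not unique.
    return int(np.flatnonzero(sigma_b2 == sigma_b2.max()).mean())

def target_mask(patch: np.ndarray) -> np.ndarray:
    """Segment the target, then erode and dilate to remove impulse noise."""
    k = otsu_threshold(patch)
    fg = patch > k
    B = np.ones((3, 3), dtype=bool)            # structuring element B (assumed 3x3)
    return ndimage.binary_dilation(ndimage.binary_erosion(fg, B), B)

def scr(patch: np.ndarray, mask: np.ndarray) -> float:
    """Signal-to-clutter ratio of the patch (common |mu_t - mu_b| / sigma_b form).

    Assumes the mask contains at least one foreground pixel.
    """
    mu_t = patch[mask].mean()
    mu_b, sigma_b = patch[~mask].mean(), patch[~mask].std()
    return abs(mu_t - mu_b) / (sigma_b + 1e-8)
```

The SCRG reported in this subsection is then the ratio of the SCR of the suppressed patch to the SCR of the original patch, computed over the same target mask.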
3.2. Training Strategies and Implementation Details
3.3. Comparative Experiments with Different Methods
- 1. Global quantitative evaluation results on different datasets: four datasets are used in the experiments. A is a simulated image dataset with relatively small background fluctuation, while B, C and D are real ground-based star images with complex backgrounds and large background fluctuation. Table 1, Table 2 and Table 3 show the experimental results for the three global evaluation indicators (PSNR, SSIM and BSF), respectively; a brief sketch of how such indicators can be computed is given after this list. From the experimental data, we can clearly see that, compared with the traditional background suppression algorithms, the suppression effect of the deep-learning-based algorithms is more pronounced and stable; in most cases they outperform the traditional algorithms on the three commonly used global indicators. In addition, the MBS-Net algorithm proposed in this paper performs best on all four datasets. Among the deep learning algorithms, on the three real datasets B, C and D, our algorithm outperforms the recently proposed BSC-Net of Li et al. [29] by an average of 3.73 dB in PSNR, 2.54 in SSIM (%), and 6.98 in BSF. This fully shows that the proposed algorithm achieves background suppression superior to the other algorithms on all three global evaluation indicators.
- 2. Quantitative evaluation results for local target regions on different datasets: the improvement in the target signal-to-clutter ratio is one of the most important criteria for testing the effect of the algorithm. For this reason, we randomly selected several targets from each test image in the three real test datasets and calculated the SCRG of the local area around these targets. Five of the above algorithms were selected for comparison; the specific results are given in Table 4. We find that the results of the traditional algorithms vary greatly when the filter size changes, whereas the deep learning algorithms do not suffer from this drawback and are very stable. Among these algorithms, the average SCRG of MBS-Net reaches 1.54, which is significantly higher than that of the other algorithms. This means that our algorithm not only works well for overall background suppression, but also achieves more prominent results in retaining targets and improving the signal-to-clutter ratio.
- 3. Visualization of selected images after background suppression: Figure 6, Figure 7 and Figure 8 show the background suppression results for images containing stray light and clouds, so that the effect of the different algorithms can be seen more clearly and intuitively. In these visual results, we enlarge a local area containing the target, annotate the target’s SCRG, and show a three-dimensional gray-scale view of the whole image, which allows the suppression effect to be inspected from both the global and local perspectives. We find that the traditional algorithms can only improve the signal-to-clutter ratio of some targets some of the time, and their overall suppression effect is not stable: median filtering merely reduces the variance of the background by smoothing it indiscriminately, while the other two traditional methods improve the ratio by lowering the gray level of the background, and when they encounter complex cloud and stray-light backgrounds, their suppression effect is poor. In contrast, the three deep-learning-based algorithms are more robust. On the stray-light and cloud background test images, our proposed MBS-Net performs best: the improvement in the target’s signal-to-clutter ratio is higher than that of the other algorithms and, intuitively, the target is also more completely preserved. This is because, first, our algorithm attends to the different types of background in different images, so that the network can estimate the background information before extracting features from each image; second, the design of the multi-level receptive-field module and the multi-scale convolution makes the semantic information obtained by feature extraction richer and strengthens the ability to distinguish background from target.
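For reference, the global indicators discussed above can be computed roughly as follows. These are the standard textbook definitions of PSNR and BSF; the exact normalization used by the authors (peak value, background region chosen for BSF) is not stated here, so this is an approximate sketch rather than the paper’s implementation.

```python
import numpy as np

def psnr(pred: np.ndarray, label: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a suppressed image and its label."""
    mse = np.mean((pred.astype(np.float64) - label.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / (mse + 1e-12))

def bsf(original_bg: np.ndarray, suppressed_bg: np.ndarray) -> float:
    """Background suppression factor: background std before vs. after suppression."""
    return float(np.std(original_bg) / (np.std(suppressed_bg) + 1e-12))

def scrg(scr_out: float, scr_in: float) -> float:
    """Signal-to-clutter ratio gain for a local target region (cf. Table 4)."""
    return scr_out / (scr_in + 1e-12)
```

SSIM is typically computed with an off-the-shelf implementation such as skimage.metrics.structural_similarity rather than re-derived by hand.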
4. Discussion
4.1. Discussion on Ablation Experiment
4.2. Analysis of Different Depths of MUFE
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
References
1. Liou, J.C.; Krisko, P. An update on the effectiveness of post mission disposal in LEO. In Proceedings of the 64th International Astronautical Congress, Beijing, China, 23–27 September 2013; IAC-13-A6.4.2.
2. Kessler, D.J.; Cour-Palais, B.G. Collision frequency of artificial satellites: The creation of a debris belt. J. Geophys. Res. 1978, 83, 2637–2646.
3. Masias, M.; Freixenet, J.; Lladó, X.; Peracaula, M. A review of source detection approaches in astronomical images. Mon. Not. R. Astron. Soc. 2012, 422, 1674–1689.
4. Diprima, F.; Santoni, F.; Piergentili, F.; Fortunato, V.; Abbattista, C.; Amoruso, L.; Cordona, T. An efficient and automatic debris detection framework based on GPU technology. In Proceedings of the 7th European Conference on Space Debris, Darmstadt, Germany, 18–24 April 2017.
5. Nixon, M.S.; Aguado, A.S. Feature Extraction and Image Processing. Newnes 2002, 5, 67–97.
6. Kouprianov, V. Distinguishing features of CCD astrometry of faint GEO objects. Adv. Space Res. 2008, 41, 1029–1038.
7. Rui, Y.; Yan-ning, Z.; Jin-qiu, S.; Yong-peng, Z. Smear Removal Algorithm of CCD Imaging Sensors Based on Wavelet Transform in Star-sky Image. Acta Photonica Sin. 2011, 40, 413–418.
8. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
9. Yair, W.; Erick, F. Contour Extraction of Compressed JPEG Images. J. Graph. Tools 2001, 6, 37–43.
10. Wei, M.S.; Xing, F.; You, Z. A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images. Light Sci. Appl. 2018, 7, 18006.
11. Wang, X.; Zhou, S. An Algorithm Based on Adjoining Domain Filter for Space Image Background and Noise Filtrating. Comput. Digit. Eng. 2012, 40, 98–99.
12. Yan, M.; Wu, F.; Wang, Z. Removal of SJ-9A Optical Imagery Stray Light Stripe Noise. Spacecr. Recovery Remote Sens. 2014, 35, 72–80.
13. Chen, H.; Zhang, Y.; Zhu, X.; Wang, X.; Qi, W. Star map enhancement method based on background suppression. J. PLA Univ. Sci. Technol. Nat. Sci. Ed. 2015, 16, 7–11.
14. Zou, Y.; Zhao, J.; Wu, Y.; Wang, B. Segmenting Star Images with Complex Backgrounds Based on Correlation between Objects and 1D Gaussian Morphology. Appl. Sci. 2021, 11, 3763.
15. Batson, J.; Royer, L. Noise2Self: Blind denoising by self-supervision. In Proceedings of the International Conference on Machine Learning (ICML), Long Beach, CA, USA, 9–15 June 2019; pp. 524–533.
16. Gil Zuluaga, F.H.; Bardozzo, F.; Ríos Patiño, J.I.; Tagliaferri, R. Blind microscopy image denoising with a deep residual and multiscale encoder/decoder network. arXiv 2021, arXiv:2105.00273.
17. Soh, J.W.; Cho, N.I. Deep universal blind image denoising. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 2021; pp. 747–754.
18. Liu, P.; Zhang, H.; Lian, W.; Zuo, W. Multi-level wavelet convolutional neural networks. IEEE Access 2019, 7, 74973–74985.
19. Zhao, Y.; Jiang, Z.; Men, A.; Ju, G. Pyramid real image denoising network. In Proceedings of the 2019 IEEE Visual Communications and Image Processing (VCIP), Sydney, Australia, 1–4 December 2019; pp. 1–4.
20. Liu, J.-M.; Meng, W.-H. Infrared Small Target Detection Based on Fully Convolutional Neural Network and Visual Saliency. Acta Photonica Sin. 2020, 49, 0710003.
21. Tian, C.; Fei, L.; Zheng, W.; Xu, Y.; Zuo, W.; Lin, C.W. Deep learning on image denoising: An overview. Neural Netw. 2020, 131, 251–275.
22. Liu, G.; Yang, N.; Guo, L.; Guo, S.; Chen, Z. A One-Stage Approach for Surface Anomaly Detection with Background Suppression Strategies. Sensors 2020, 20, 1829.
23. Liang-Kui, L.; Shao-You, W.; Zhong-Xing, T. Point target detection in infrared over-sampling scanning images using deep convolutional neural networks. J. Infrared Millim. Waves 2018, 37, 219.
24. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a Fast and Flexible Solution for CNN-Based Image Denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622.
25. Chen, J.; Chen, J.; Chao, H.; Yang, M. Image Blind Denoising with Generative Adversarial Network Based Noise Modeling. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 3155–3164.
26. Wu, S.-C.; Zuo, Z.-R. Small target detection in infrared images using deep convolutional neural networks. J. Infrared Millim. Waves 2019, 38, 371–380.
27. Xie, M.; Zhang, Z.; Zheng, W.; Li, Y.; Cao, K. Multi-Frame Star Image Denoising Algorithm Based on Deep Reinforcement Learning and Mixed Poisson–Gaussian Likelihood. Sensors 2020, 20, 5983.
28. Zhang, Z.; Zheng, W.; Ma, Z.; Yin, L.; Xie, M.; Wu, Y. Infrared Star Image Denoising Using Regions with Deep Reinforcement Learning. Infrared Phys. Technol. 2021, 117, 103819.
29. Li, Y.; Niu, Z.; Sun, Q.; Xiao, H.; Li, H. BSC-Net: Background Suppression Algorithm for Stray Lights in Star Images. Remote Sens. 2022, 14, 4852.
30. Chen, K.; Zou, Z.; Shi, Z. Building extraction from remote sensing images with sparse token transformers. Remote Sens. 2021, 13, 4441.
31. Zhang, Z.; Jiang, Y.; Jiang, J.; Wang, X.; Luo, P.; Gu, J. STAR: A structure-aware lightweight transformer for real-time image enhancement. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 4106–4115.
32. Chen, K.; Li, W.; Lei, S.; Chen, J.; Jiang, X.; Zou, Z.; Shi, Z. Continuous remote sensing image super-resolution based on context interaction in implicit function space. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16.
33. Peng, L.; Zhu, C.; Bian, L. U-shape transformer for underwater image enhancement. IEEE Trans. Image Process. 2023, 32, 3066–3079.
34. Wang, X. Target trace acquisition method in serial star images of moving background. Opt. Precis. Eng. 2008, 16, 524–530.
35. Tao, J.; Cao, Y.; Ding, M. Progress of Space Debris Detection Technology. Laser Optoelectron. Prog. 2022, 59, 1415010.
36. Guo, S.; Yan, Z.; Zhang, K.; Zuo, W.; Zhang, L. Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1712–1722.
37. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
38. Mei, Y.; Fan, Y.; Zhang, Y.; Yu, J.; Zhou, Y.; Liu, D.; Fu, Y.; Huang, T.S.; Shi, H. Pyramid Attention Networks for Image Restoration. arXiv 2020, arXiv:2004.13824.
39. Zhao, H.-S.; Shi, J.-P.; Qi, X.-J.; Wang, X.-G.; Jia, J.-Y. Pyramid scene parsing network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
40. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Object Detectors Emerge in Deep Scene CNNs. In Proceedings of the 2015 International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
41. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; pp. 234–241.
42. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
43. Bai, X.; Zhou, F.; Xue, B. Image enhancement using multi scale image features extracted by top-hat transform. Opt. Laser Technol. 2012, 44, 328–336.
44. Liu, Y.; Zhao, C.; Xu, Q. Neural Network-Based Noise Suppression Algorithm for Star Images Captured During Daylight Hours. Acta Opt. Sin. 2019, 39, 0610003.
45. Kahr, E.; Montenbruck, O.; O’Keefe, K.P.G. Estimation and analysis of two-line elements for small satellites. J. Spacecr. Rockets 2013, 50, 433–439.
46. Schrimpf, A.; Verbunt, F. The star catalogue of Wilhelm IV, Landgraf von Hessen-Kassel. arXiv 2021, arXiv:2103.10801.
47. Wang, Y.; Niu, Z.; Huang, J.; Li, P.; Sun, Q. Fast Simulation Method for Space-Based Optical Observation Images of Massive Space Debris. Laser Optoelectron. Prog. 2022, 59, 1611006.
48. Huang, K.; Mao, X.; Liang, X.G. A Novel Background Suppression Algorithm for Infrared Images. Acta Aeronaut. Astronaut. Sin. 2010, 31, 1239–1244.
49. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
50. Bertin, E.; Arnouts, S. SExtractor: Software for source extraction. Astron. Astrophys. Suppl. Ser. 1996, 117, 393–404.
51. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
Table 1. PSNR results of different methods on the four datasets.

| Datasets | Original | Median-Blur | Top-Hat | DnCNN | BM3D | Source Extractor | BSC-Net | MBS-Net |
|---|---|---|---|---|---|---|---|---|
| A | 26.42 | 25.10 | 31.715 | 52.29 | 26.44 | 38.99 | 82.50 | 85.62 |
| B | 7.747 | 7.747 | 41.22 | 56.44 | 7.883 | 50.31 | 62.18 | 65.22 |
| C | 12.12 | 12.13 | 43.82 | 60.74 | 12.14 | 51.93 | 63.28 | 66.75 |
| D | 9.340 | 9.341 | 41.03 | 59.33 | 11.79 | 50.48 | 61.27 | 65.96 |
Table 2. SSIM (%) results of different methods on the four datasets.

| Datasets | Original | Median-Blur | Top-Hat | DnCNN | BM3D | Source Extractor | BSC-Net | MBS-Net |
|---|---|---|---|---|---|---|---|---|
| A | 4.397 | 3.302 | 56.50 | 95.05 | 3.412 | 55.33 | 98.82 | 98.87 |
| B | 0.5667 | 0.3444 | 35.79 | 89.37 | 0.2799 | 5.350 | 94.33 | 95.65 |
| C | 0.6279 | 0.3188 | 35.82 | 83.35 | 0.6312 | 9.730 | 90.56 | 92.95 |
| D | 0.5023 | 0.5281 | 28.30 | 72.33 | 0.5590 | 10.41 | 84.06 | 88.26 |
Table 3. BSF results of different methods on the four datasets.

| Datasets | Original | Median-Blur | Top-Hat | DnCNN | BM3D | Source Extractor | BSC-Net | MBS-Net |
|---|---|---|---|---|---|---|---|---|
| A | - | 0.1257 | 0.3614 | 2.278 | 0.2722 | 1.834 | 3.332 | 4.916 |
| B | - | 1.056 | 0.5967 | 4.958 | 1.058 | 1.606 | 7.758 | 8.471 |
| C | - | 1.021 | 1.171 | 14.42 | 1.028 | 2.856 | 11.91 | 15.68 |
| D | - | 1.011 | 4.604 | 5.667 | 1.009 | 4.554 | 22.48 | 38.96 |
Table 4. SCRG of local target regions on the three real datasets.

| Datasets | Top-Hat | BM3D | Source Extractor | BSC-Net | MBS-Net |
|---|---|---|---|---|---|
| B | 1.118 | 1.089 | 1.291 | 1.406 | 1.538 |
| C | 1.048 | 0.8575 | 1.347 | 1.330 | 1.477 |
| D | 0.4542 | 1.166 | 1.217 | 1.476 | 1.612 |
Ablation results: components (BIE, MUFE, RF) versus evaluation indicators (SSIM, PSNR).

| BIE | MUFE | RF | SSIM | PSNR |
|---|---|---|---|---|
| - | - | - | 0.9099 | 65.51 |
| √ | - | - | 0.9106 | 65.62 |
| - | √ | - | 0.9155 | 65.96 |
| √ | √ | - | 0.9175 | 66.37 |
| - | √ | √ | 0.9256 | 66.48 |
| √ | √ | √ | 0.9331 | 66.75 |
| Input Size | Model Structure | Params (M) | FLOPs (10⁹) |
|---|---|---|---|
| 32 × 512 × 512 | Conv 3 × 3 | 9.248 × 10⁻³ | 2.424 |
| 32 × 512 × 512 | MSC1 | 7.840 × 10⁻³ | 2.055 |
| 512 × 512 | MBS-Net* | 9.244 | 137.7 |
| 512 × 512 | MBS-Net (MSC1) | 8.108 | 124.2 |
| 512 × 512 | BSC-Net | 9.282 | 146.8 |
| 512 × 512 | DnCNN | 0.556 | 145.7 |
| Network | Params (M) | FLOPs (10⁹) | Test Time on B (s) | Test Time on C (s) | Test Time on D (s) |
|---|---|---|---|---|---|
| Net-2 | 7.906 | 119.4 | 5.004 | 6.135 | 4.241 |
| Net-3 | 8.004 | 120.6 | 5.108 | 6.402 | 4.491 |
| Net-4 | 8.108 | 124.2 | 5.261 | 6.564 | 4.563 |
| Net-5 | 8.113 | 125.9 | 5.353 | 6.702 | 4.646 |