1. Introduction
Acquiring images with ground-based telescopes is of great significance to space situational awareness. However, various optical phenomena often affect detector imaging, resulting in image nonuniformity. The imaging nonuniformity of ground-based large-aperture telescopes is mainly caused by vignetting, stray light [1,2], and detector nonuniformity. Such degradation not only reduces image quality and signal-to-noise ratio (SNR) but also severely affects subsequent image segmentation and target detection. Therefore, optical system analysis and nonuniform background correction of space images are necessary preprocessing steps.
Vignetting is a common problem in optical systems and includes natural vignetting, mechanical vignetting, and other effects [2]. Under vignetting, the energy captured by the detector decreases from the center to the edge of the image. Stray light is another factor degrading telescope imaging: it refers to the background radiation noise produced by light rays from outside the imaging field of view that reach the system's image plane. Stray light reduces the signal-to-noise ratio (SNR) of the target, degrading the detection and recognition ability of the whole system, and in severe cases burying the target signal entirely in the stray-light background.
Much research has been done on vignetting correction in imaging systems. The simplest method is flat-field correction with uniform illumination [3,4,5]. Although flat-field correction can solve the vignetting problem for a fixed scene, the correction must be repeated whenever the scene changes, and obtaining uniformly illuminated flat-field images is particularly difficult for large-aperture telescopes. Another approach is to correct vignetting through image processing. Such methods can be divided into multi-frame and single-frame methods according to the number of images processed. The traditional multi-frame method extracts vignetting information from multiple images of the same scene. Yuan et al. [6] proposed an improved radial gradient correction method; the corrected images have better radial uniformity and finer details. Goldman et al. [7] considered both known and unknown response curves and used polynomial fitting to remove vignetting from multi-frame images. Litvinov et al. [8] proposed a method for simultaneously estimating the camera radiometric response, camera gain, and vignetting from multiple image frames.
Single-frame vignetting correction methods are more flexible because they do not require accumulating information over multiple frames. Zheng et al. [9] performed vignetting correction via segmentation, but the result depends on segmentation accuracy. The radial-brightness-channel method proposed by Cho et al. [10] is faster, but the scenarios it applies to are limited. Lopez-Fuentes et al. [11] corrected vignetting distortion by minimizing the log-intensity entropy of the image, but overcorrection is a frequent problem. The influence of stray light on imaging is random; researchers generally reduce the impact of external and internal stray light by designing optical structures to suppress it [12].
Methods based on pre-calibration or iteration suffer from problems such as being suited to only a single scene or requiring heavy computation. In recent years, with the wide application of deep learning in various fields, optical problems have also increasingly been solved by deep learning, including optical interferometry [13], single-pixel imaging [14,15], wavefront sensing [16,17,18], remote sensing [19,20,21,22,23,24,25,26], and Fourier ptychography [27,28,29]. Using deep learning for image enhancement in imaging systems has also attracted growing interest [22,23,30,31,32]. Chang et al. [33] applied a deep residual network to infrared images and showed good robustness to nonuniformity induced by vignetting and noise. Fang et al. [34] applied U-Net to nonuniformity correction, compensating for detector imaging defects by learning ring artifacts. Jian et al. [35] introduced a filter into a convolutional neural network to optimize images exhibiting high-frequency inhomogeneity in the infrared. Zhang et al. [36] proposed a shearlet deep neural network and defined regularization based on shearlet-extracted features to aid image detail restoration and noise removal.
Generative adversarial networks (GANs) are widely used in image generation and enhancement because of their excellent fitting capacity. Zhang et al. [37] applied a conditional generative adversarial network to rain removal, effectively improving the accuracy of subsequent target detection in rain. Armanious et al. [38] proposed MedGAN, a new medical image-to-image translation framework that outperforms other methods on different translation tasks. Dai et al. [39] proposed a residual cycle-consistent generative adversarial network to correct nonuniformity in medical images; several correction metrics improved over traditional GAN and U-Net baselines. Kuang et al. [40] used generative adversarial networks to correct nonuniformity caused by optical noise in infrared images and achieved excellent results on a variety of infrared images.
This paper proposes a correction algorithm based on a conditional generative adversarial network (CGAN) to solve the imaging nonuniformity problem of ground-based telescopes. Since no dataset is available for training, we create a dataset containing nonuniform image pairs. First, following the Kang-Weiss vignetting model [41], we add random vignetting to each image by sampling its hyperparameter space. At the same time, Zernike polynomials and sigmoid functions are introduced to simulate the background nonuniformity caused by stray light or detector response. Furthermore, we modify the CGAN to extract background information better, applying a cascade network and an extremely efficient spatial pyramid (EESP) module to improve the performance of the generator. The trained network infers the nonuniform background, and the clean space image is obtained by removing that background.
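As an illustrative sketch of how such nonuniform image pairs can be synthesized, the snippet below combines a simplified Kang-Weiss-style falloff with low-order Zernike terms and a sigmoid ramp. The simplified model form, the chosen Zernike terms, and all parameter values are our assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def kang_weiss_vignetting(h, w, f=0.8, alpha=0.3):
    """Simplified Kang-Weiss-style falloff: off-axis illumination factor
    times a linear geometric factor (tilt factor omitted for brevity)."""
    y, x = np.mgrid[0:h, 0:w]
    # radius normalized so the image corner sits at r = 1
    r = np.hypot(x - w / 2, y - h / 2) / (0.5 * np.hypot(h, w))
    illum = 1.0 / (1.0 + (r / f) ** 2) ** 2    # off-axis illumination A(r)
    geom = np.clip(1.0 - alpha * r, 0.0, 1.0)  # geometric factor G(r)
    return illum * geom

def zernike_sigmoid_background(h, w, c_tilt=0.1, c_defocus=0.15, k=6.0):
    """Low-order Zernike terms (tilt + defocus) plus a sigmoid ramp,
    mimicking stray-light / detector nonuniformity."""
    y, x = np.mgrid[0:h, 0:w]
    u = 2 * x / (w - 1) - 1                    # unit-square coordinates
    v = 2 * y / (h - 1) - 1
    rho2 = u ** 2 + v ** 2
    tilt = c_tilt * u                          # Z(1,1) ~ rho*cos(theta)
    defocus = c_defocus * (2 * rho2 - 1)       # Z(2,0)
    ramp = 1.0 / (1.0 + np.exp(-k * u))        # sigmoid gradient across the frame
    return tilt + defocus + 0.2 * ramp

# build one nonuniform training pair from a toy "clean" star-field image
clean = np.zeros((256, 256))
clean[100, 120] = 1.0
degraded = clean * kang_weiss_vignetting(256, 256) + zernike_sigmoid_background(256, 256)
```

In practice the hyperparameters (the falloff parameters, the Zernike coefficients, and the sigmoid slope) would be sampled randomly per image so that the dataset covers diverse nonuniform backgrounds.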
The rest of this paper is arranged as follows. Section 1 introduces related methods and the significance of vignetting and nonuniformity correction. Section 2 introduces the basic process of nonuniformity correction and the principle of GANs. Section 3 presents the proposed method in detail, including the design of the network structure and the simulation of the nonuniform background. Section 4 reports test results on simulated datasets with various comparisons, and also tests the algorithm on real images against multiple methods. Section 5 discusses the simulation and experimental results. Finally, Section 6 summarizes the work and contributions of this paper.
2. Preliminaries
An imaging system with interference from the optical system can be expressed as:

g(x, y) = f(x, y) · V(x, y) + S(x, y) + n(x, y),

where f(x, y) and g(x, y) are the pure image and the image with the nonuniform background, V(x, y) is the vignetting background, S(x, y) is the nonuniform background caused by stray light or the detector, and n(x, y) is the detector noise. Assuming that the vignetting changes slowly and smoothly and that stray light does not cause abnormal exposure in any image, the corrected image can be obtained by:

f̂(x, y) = g(x, y) − B(x, y),

where B(x, y) is the nonuniform background. We aim to train a robust model that learns the nonuniform background from a large number of images:

B(x, y) = Φ(g(x, y)),

where Φ is the network model used to infer the nonuniform background after supervised learning. The nonuniform image is assumed to be free of exposure distortion; in other words, there is no significant number of saturated pixels in the image, only maximum-valued pixels at the locations of space targets or stars.
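Writing f for the pure image, V for the vignetting factor, S for the stray-light/detector background, n for the detector noise, and B for the combined nonuniform background, the degradation-and-subtraction model can be sketched numerically as follows (symbol names and array sizes are illustrative):

```python
import numpy as np

def degrade(f, V, S, sigma=0.01, rng=None):
    """g = f * V + S + n : vignetting is multiplicative, the stray-light /
    detector background additive, and n is zero-mean detector noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    return f * V + S + rng.normal(0.0, sigma, f.shape)

def correct(g, B):
    """Corrected image: subtract the (network-estimated) nonuniform background."""
    return g - B

# noise-free sanity check: with the ideal background B = g - f,
# subtraction recovers the pure image exactly
f = np.zeros((64, 64))
f[32, 32] = 1.0                 # single point target
V = np.full((64, 64), 0.9)      # flat vignetting factor for the toy example
S = np.full((64, 64), 0.1)      # flat stray-light background
g = degrade(f, V, S, sigma=0.0)
B_true = g - f                  # the background the network should learn
f_hat = correct(g, B_true)
```

In the actual pipeline B is not known; it is inferred by the trained network Φ from the degraded image g alone.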
Generative adversarial network (GAN) is a deep-learning-based generative algorithm proposed by Goodfellow et al. in 2014, applicable to various generation tasks. A GAN comprises two networks, a generator (G) and a discriminator (D). In image generation, the generator produces images from a random noise vector z; the generated image is denoted G(z). The discriminator determines whether an image is real or generated.
During GAN training, the generator G is expected to produce images realistic enough to deceive the discriminator D, while D is expected to distinguish the images generated by G from the real ones. For the GAN framework, the value function V(D, G) is:

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))],

where x is the real image. The trained generator G can therefore be used to generate "fake" images.
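To make the value function concrete, here is a tiny numerical sketch; the discriminator outputs below are hand-picked probabilities, not the output of a trained network.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))] for batches of
    discriminator outputs in (0, 1)."""
    eps = 1e-12  # guard against log(0)
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# at the theoretical equilibrium D(x) = D(G(z)) = 1/2, V = -2 log 2
v_equilibrium = gan_value(np.array([0.5]), np.array([0.5]))

# a discriminator that separates real from fake pushes V toward 0
v_d_winning = gan_value(np.array([0.99]), np.array([0.01]))
```

D is trained to maximize this value while G is trained to minimize it (in practice G often maximizes E[log D(G(z))] instead, for stronger early gradients).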
5. Discussion
Simulation and experimental results show the effectiveness of the proposed method. First, we conduct ablation experiments to verify the effectiveness of the proposed network structure and the rationality of the loss function. The proposed network structure combines cascading with skip connections to effectively improve network performance. Compared with the loss function used in traditional CGAN, we also explore alternative loss terms: SSIM can evaluate the degree of training and thus improve training accuracy. Furthermore, we analyze the influence of different patch sizes in the discriminator; the results are similar to those of many CGAN algorithms. Finally, we verify the applicability of the algorithm by feeding images of different resolutions into the generator. We find that sensitivity to resolution varies greatly across nonuniform backgrounds: an image whose nonuniform background has a gentle gradient still yields good results after downsampling, whereas when the background gradient is large, only slight downsampling can be applied without destroying the integrity of the nonuniform background.
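Since SSIM is used to monitor training quality, a minimal single-window SSIM is sketched below. The standard metric averages SSIM over a sliding Gaussian window; this global simplification (our own, for illustration) is sufficient for tracking relative progress.

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window SSIM over the whole image, with dynamic range L.
    Compares luminance (means), contrast (variances), and structure (covariance)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

x = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # toy "ground-truth" image
score_same = ssim_global(x, x)                # identical images -> 1.0
score_shifted = ssim_global(x, x + 0.3)       # brightness offset lowers SSIM
```

A library implementation such as scikit-image's windowed SSIM would normally be preferred for reporting final numbers.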
We tested the network trained on the simulated dataset on real images and achieved very good results. This is mainly due to the large size and diversity of the simulated dataset: the variety of nonuniform backgrounds greatly improves the generalization of the network. In our experiments, we found that when the diversity of the simulated dataset is poor, an "overfitting" phenomenon occurs, that is, performance is excellent on the simulated dataset, but the correction effect on real images is poor.
While constructing the dataset, we also tried simulation with ray-tracing software. However, this approach not only restricted the variety of simulated nonuniform backgrounds but also often failed to represent the general case with a limited set of optical systems, limiting the dataset. Nonuniform correction of space images with supervised learning is extremely dataset-dependent, so subsequent research may prefer unsupervised learning methods.