Communication

Automatic Inhomogeneous Background Correction for Spatial Target Detection Image Based on Partition Processing

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 College of Communication Engineering, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Photonics 2023, 10(4), 433; https://doi.org/10.3390/photonics10040433
Submission received: 27 February 2023 / Revised: 22 March 2023 / Accepted: 3 April 2023 / Published: 12 April 2023
(This article belongs to the Special Issue Advances in Photoelectric Tracking Systems)

Abstract

High-resolution imaging with wide field of view (FoV) ground-based telescopes is often affected by the skylight background and detector noise, resulting in an inhomogeneous background. In this paper, we propose an improved method for correcting the non-uniform background of spatial images based on partition processing. First, an evaluation metric is introduced to assess the partition size, and a suitable partition value is determined automatically by iteration for the telescope's varying operating conditions. Then, the mean and variance in each partitioned region are calculated iteratively to extract the background of that region. Finally, after applying bilinear interpolation to the backgrounds extracted from the regions, the inhomogeneous background of the full image is obtained and removed from the original image. Experiments on simulated and real images show that the proposed method effectively removes the inhomogeneous background of spatial images and meets the requirements of real-time processing of high-resolution images under long-exposure conditions.

1. Introduction

With the growing number of space debris objects, space target detection has become a complex and challenging subject. Ground-based photoelectric telescopes are the most commonly used equipment for space target detection; several organizations have carried out observations with them and published their results [1,2,3,4,5]. Removing the inhomogeneous background from the captured images is an important prerequisite for improving space target detection capability. Image degradation not only affects image quality and the image signal-to-noise ratio (SNR), but also seriously impacts subsequent image segmentation and target detection. Therefore, inhomogeneous background correction of spatial images is a necessary pre-processing step. In addition, for applications in which small telescopes search for space targets, remote unattended automatic observation with multi-station deployment can be achieved if the equipment is low-cost, observes automatically, and is easy to deploy [6].
Geometric modification, image smoothing and filtering, morphological operations, and arithmetic operations are used to process spatial images taken by ground-based telescopes. Each of these operations enhances or highlights the region of interest for later viewing and processing. However, image processing operations can also damage the original information of the image, a process known as image degradation.
Although ground-based telescope systems can use adaptive optics or phase modulation to improve image quality, these systems are usually expensive and complex [7,8,9]. Even when more effective image enhancement techniques, such as deconvolution, are applied, real-time processing requirements still cannot be satisfied [10,11,12]. Much research on inhomogeneous background correction has been published in the image processing literature. A simple approach is flat-field correction under uniform illumination [13,14]. Although it solves the problem for a fixed scene, the correction must be repeated whenever the observation environment changes, and for ground-based telescopes, obtaining a flat-field image under uniform illumination is difficult. Another way to address the inhomogeneous background is image processing. Depending on the number of images processed, the methods can be classified as multi-frame and single-frame methods. The widely used multi-frame methods extract the inhomogeneous background from multiple images of the same scene. Yuan et al. [15] proposed an improved radial gradient correction method with better radial uniformity, achieving more accurate details in the corrected images. Litvinov et al. [16] proposed a method to simultaneously estimate the camera radiometric response, camera gain, and vignetting from multiple consecutive images. However, multi-frame methods have difficulty adapting to drastic scene changes, such as observing a fast-moving target in low Earth orbit (LEO).
Single-frame inhomogeneous background correction methods are more flexible because they do not require the accumulation of multi-frame information. Studies of single-frame correction mainly concentrate on the field of computed tomography, where real-time processing requirements are low [17,18]. In the field of spatial target detection, Zhang et al. [19] used maximum likelihood estimation for correction. Partitioning the image into segments is currently the most common approach for the real-time correction of an inhomogeneous background in a single frame. However, the segmentation size is usually a fixed empirical value, which cannot be adapted from frame to frame.
This paper presents a single-frame method for automatically removing the inhomogeneous background from telescope images. It is particularly suitable for images captured by small telescopes of integrated design and meets the real-time processing requirement of 2.5 s at 4 Kpx × 4 Kpx resolution. By introducing a split-size evaluation index, the segmentation value can be adjusted across different scenes. Experiments with simulated and captured images show that the method achieves better quality and performance than processing with a fixed segmentation size.

2. Equipment and Methods

We first describe the data capture device in Section 2.1 and then the workflow of the method, which is illustrated in Figure 1. The method consists of three steps: automatic partitioning, background estimation, and background removal. The automatic partitioning step determines the split value and partitions the image into several regions. The background estimation step extracts the background within each region. The background removal step computes the full-image background and removes it from the original image. A sketch of how the three steps compose is given below; the steps are explained in detail in Section 2.2, Section 2.3 and Section 2.4, respectively.
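In outline, the three steps compose as in the following Python sketch. This is a minimal illustration only; the helper names (determine_split_size, estimate_block_backgrounds, bilinear_upsample) are ours, not from the paper, and each helper is sketched in Sections 2.2, 2.3 and 2.4 below.

```python
import numpy as np

def correct_inhomogeneous_background(image: np.ndarray) -> np.ndarray:
    """Three-step workflow; helper names are illustrative, not from the paper."""
    # Step 1: automatic partition -- find a suitable split size for this frame.
    block = determine_split_size(image, init=10, step=10, tol=0.01)
    # Step 2: background estimation -- one background gray value per block.
    grid = estimate_block_backgrounds(image, block)
    # Step 3: background removal -- interpolate to full size and subtract.
    background = bilinear_upsample(grid, image.shape)
    return np.clip(image.astype(np.float64) - background, 0, None)
```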

2.1. Experimental Equipment

The image acquisition equipment for the experiment is shown in Figure 2. The system consists of an optical system, a camera, a focusing mechanism, a motion control system, and an integrated control box. We chose a scientific CMOS detector with a resolution of 4096 px × 4096 px and a pixel size of 9 μm × 9 μm. The detector was cooled to −40 °C to improve imaging and increase the telescope's observing capability. The telescope's optical system is a transmissive imaging design, which reduces its size and weight; it has an aperture of 150 mm and a field of view of 6.5° × 6.5°. Because of the telescope's integrated design, the software system, including the image processing and motion control modules, runs on an NVIDIA Jetson AGX Xavier. Real-time processing is therefore challenged by the high resolution of the images and the low computing power of the processing device.
The method in this paper is designed to meet the real-time processing requirements of this device. We use images taken with this equipment both in a city with a strong skylight background and in the countryside with a weak skylight background. To facilitate the presentation of our results, the captured images were stored and processed offline; the images were not compressed in any way, to preserve their original information.

2.2. Determination of Region Split Size

The main causes of image degradation are atmospheric scattering noise and CMOS detector noise [20]. Mohamed et al. [21] and Sonnenschein et al. [22] pointed out that the noise generated by atmospheric scattering is mainly Poisson noise. Pain et al. [23] and Nguyen et al. [24] described the sources of CMOS noise: dark current noise follows a Poisson distribution, and readout noise follows a Gaussian distribution. An inhomogeneous image here is not an exposure-distorted image; that is, the great majority of pixels are unsaturated, and only space targets or stars reach the maximum grey level. An imaging system subject to optical system interference can be expressed as:
I' = I + V \times \mathcal{S} + N
where I and I' are the pure image and the image with the inhomogeneous background, respectively; V is the change caused by the optical system, \mathcal{S} is the inhomogeneous background caused by stray light or the detector, and N is the detector noise. We assume that the changes caused by the optical system are slow and smooth and that stray light does not cause abnormal exposure in the images. The corrected image can be obtained by the following equation:
I = I' - B
where B is the inhomogeneous background. Our goal is to determine a method F that extracts the inhomogeneous background from the degraded image:
B = F(I')
Although the distribution of the abovementioned noise is inhomogeneous, the variation of the inhomogeneous background within a sufficiently small image area is very flat. Therefore, we split the image into several small blocks R_block. A larger block size reduces the processing time but cannot estimate the background of the original image accurately; a smaller block size estimates the background more accurately but requires a correspondingly longer processing time.
Our method determines the split size in step 1. First, a small initial split size is set to split the image I'; this initial size differs for different scenes and image resolutions, and in this article it is set to 10 px. Then, the mean pixel value within each block is calculated to obtain the evaluation image S(m, n) of size m × n, which is used to evaluate the segmentation quality. Since this block-averaging is a low-pass filtering operation, the smoothness between regions characterizes the changing trend of the high-frequency part of the image. Therefore, an effective smoothness-evaluating operator must perform high-pass filtering on the image. One way to high-pass filter an image is to take its second-order derivative. For a two-dimensional image, the Laplace operator ∇²S(m, n) can be defined as follows:
\nabla^2 S(m,n) = \frac{\partial^2 S(m,n)}{\partial m^2} + \frac{\partial^2 S(m,n)}{\partial n^2}
We note that the second-order derivatives in the m and n directions may have opposite signs and tend to cancel each other out. This phenomenon occurs frequently in telescope images, meaning the Laplace operator may be unstable. We overcome this problem with the modified Laplace operator ∇²_M S(m, n):
\nabla^2_M S(m,n) = \left| \frac{\partial^2 S}{\partial m^2} \right| + \left| \frac{\partial^2 S}{\partial n^2} \right|
The modified Laplace operator is always greater than or equal to the unmodified one. The discrete Laplace operator is usually approximated as a 3 × 3 operator. Therefore, the discrete approximation of the modified Laplace (ML) operator can be obtained as:
ML(m,n) = \left| 2S(m,n) - S(m-1,n) - S(m+1,n) \right| + \left| 2S(m,n) - S(m,n-1) - S(m,n+1) \right|
ML(m, n) evaluates the undulation between small regions of the image [25]. Further, the mean ML value over all segmented regions of the evaluation image S(m, n) is taken as the evaluation value E_1 of the original image at the current segmentation size:
E_1 = \frac{1}{m \times n} \sum_{m,n} \left( \left| 2S(m,n) - S(m-1,n) - S(m+1,n) \right| + \left| 2S(m,n) - S(m,n-1) - S(m,n+1) \right| \right)
We iteratively scale up the size of the segmented regions by a preset step size and update the evaluation value until the kth evaluation value E_k satisfies the following termination condition:
\frac{\left| E_k - E_{k-1} \right|}{E_{k-1}} < 0.01
When this condition is satisfied, the rate of change of the evaluation value is small, and we consider the split size suitable for the current frame. A compact sketch of this step is given below.
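The following numpy sketch illustrates the split-size search under stated assumptions: the 10 px initial size, 1% tolerance, and ML operator follow the text, while the step size and the max_block cap are illustrative parameters of our own.

```python
import numpy as np

def evaluation_value(image: np.ndarray, block: int) -> float:
    """Mean modified-Laplacian (ML) value of the block-mean evaluation image S."""
    h, w = image.shape
    m, n = h // block, w // block                    # evaluation image size m x n
    S = image[:m * block, :n * block].astype(np.float64)
    S = S.reshape(m, block, n, block).mean(axis=(1, 3))   # mean of each block
    # Discrete ML operator on interior points: |2S - up - down| + |2S - left - right|
    ml = (np.abs(2 * S[1:-1, 1:-1] - S[:-2, 1:-1] - S[2:, 1:-1]) +
          np.abs(2 * S[1:-1, 1:-1] - S[1:-1, :-2] - S[1:-1, 2:]))
    return float(ml.mean())

def determine_split_size(image, init=10, step=10, tol=0.01, max_block=400):
    """Grow the split size until the relative change in E drops below tol."""
    block, e_prev = init, evaluation_value(image, init)
    while block + step <= max_block:
        e_k = evaluation_value(image, block + step)
        if e_prev > 0 and abs(e_k - e_prev) / e_prev < tol:  # termination condition
            return block + step
        block, e_prev = block + step, e_k
    return block
```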

2.3. Region Background Extraction

In Section 2.2, the mean value of a region is simply taken as the background of the block, but an accurate block background must exclude bright stars and faint (dark) points. For star maps taken by telescopes, most image elements belong to the background grayscale, whose range is concentrated, while stars and faint points present high and low grayscale values, respectively. To extract the regional background accurately, the dark and bright spots should therefore be removed during the inhomogeneity correction. The removal is performed as follows. First, we calculate the mean m_block,1 and the standard deviation σ_block,1 within a block:
m_{block,1} = \frac{\sum_{(x,y) \in R_{block}} f_{block}(x,y)}{N_{block}}
\sigma_{block,1} = \sqrt{ \frac{\sum_{(x,y) \in R_{block}} \left( f_{block}(x,y) - m_{block,1} \right)^2 }{N_{block}} }
where f_block(x, y) denotes the gray value at (x, y) in the region block R_block, and N_block is the number of pixels in R_block. Then, we calculate the segmentation thresholds T^L_block,1 and T^H_block,1 for the removal:
T_{block,1}^{L} = m_{block,1} - 3 \sigma_{block,1}
T_{block,1}^{H} = m_{block,1} + 3 \sigma_{block,1}
Next, pixels are culled according to the following rule:
g_{block,1}(x,y) = \begin{cases} 0, & f_{block}(x,y) \le T_{block,1}^{L} \\ f_{block}(x,y), & T_{block,1}^{L} < f_{block}(x,y) < T_{block,1}^{H} \\ 0, & f_{block}(x,y) \ge T_{block,1}^{H} \end{cases}
g_block,1(x, y) is the image after the first removal of bright and dark pixels. We then analyze g_block,1(x, y) to calculate the mean m_block,2 and standard deviation σ_block,2 of the points remaining after the removal:
m_{block,2} = \frac{\sum_{(x,y) \in R_{block}, \, g_{block,1}(x,y) \neq 0} g_{block,1}(x,y)}{N_{block} - N_0}
\sigma_{block,2} = \sqrt{ \frac{\sum_{(x,y) \in R_{block}, \, g_{block,1}(x,y) \neq 0} \left( g_{block,1}(x,y) - m_{block,2} \right)^2 }{N_{block} - N_0} }
where N_0 denotes the number of pixels with g_block,1(x, y) = 0. Our method extracts the block background by removing pixels in each iteration, updating the pixel set until the standard deviation of the block satisfies the following condition:
\frac{\sigma_{block,n-1} - \sigma_{block,n}}{\sigma_{block,n-1}} < 0.01
σ_block,n−1 and σ_block,n denote the standard deviations of the block after the (n − 1)th and nth removal. When this condition is satisfied, the standard deviation after the nth removal is close to the previous one, and the dark and bright spots are considered to be removed cleanly. The mean m_block,n at this point is taken as the background gray value of the current region; a sketch of this per-block loop follows.
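The sketch below implements the iterative 3σ culling loop described above. The max_iter guard and the empty-set check are our own safety additions, not stated in the paper.

```python
import numpy as np

def block_background(block_pixels: np.ndarray, tol=0.01, max_iter=20) -> float:
    """One block's background gray value via iterative 3-sigma culling."""
    vals = block_pixels.astype(np.float64).ravel()
    sigma_prev = vals.std()
    for _ in range(max_iter):
        m = vals.mean()
        kept = vals[(vals > m - 3 * sigma_prev) & (vals < m + 3 * sigma_prev)]
        if kept.size == 0:                 # guard: nothing left after culling
            break
        sigma = kept.std()
        vals = kept
        if sigma_prev == 0 or (sigma_prev - sigma) / sigma_prev < tol:
            break                          # std has stabilized: spots removed
        sigma_prev = sigma
    return float(vals.mean())

def estimate_block_backgrounds(image: np.ndarray, block: int) -> np.ndarray:
    """Coarse background grid: one estimated gray value per block."""
    h, w = image.shape
    m, n = h // block, w // block
    grid = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            grid[i, j] = block_background(
                image[i * block:(i + 1) * block, j * block:(j + 1) * block])
    return grid
```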

2.4. Full Image Background Estimation

There are stepped differences in gray value between the blocks of the background obtained in Section 2.3, whereas the gray-level undulation of the background in the actual image is usually continuous. To recover the image background accurately, an interpolation filter is required. We choose bilinear interpolation for full-image recovery: compared with nearest-neighbor interpolation it produces smoother results, and it is significantly faster than bicubic spline interpolation, which helps meet the real-time requirement.
Similar to up-sampling, our method takes the center pixel of each region as a sample point; the estimated background is then obtained by interpolating the gray values of the pixels between the sample points. Finally, a corrected image is obtained by subtracting the estimated background from the original image, as sketched below.
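A minimal sketch of this step, assuming OpenCV's bilinear resize as a convenient stand-in: the paper interpolates from block-center sample points explicitly, whereas cv2.resize places its samples slightly differently, so this is an approximation. Clipping negative residuals to zero is also our own choice.

```python
import numpy as np
import cv2  # OpenCV, used here only for its bilinear resize

def bilinear_upsample(grid: np.ndarray, shape) -> np.ndarray:
    """Interpolate the coarse block grid to full image resolution (bilinear)."""
    h, w = shape
    # cv2.resize expects dsize as (width, height).
    return cv2.resize(grid.astype(np.float32), (w, h),
                      interpolation=cv2.INTER_LINEAR).astype(np.float64)

def remove_background(image: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Subtract the estimated full-resolution background from the original."""
    background = bilinear_upsample(grid, image.shape)
    return np.clip(image.astype(np.float64) - background, 0, None)
```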

3. Results and Discussion

In this section, we conduct experiments to verify the effectiveness and practicality of the proposed method. We first introduce the evaluation indicators in Section 3.1. In Section 3.2, we evaluate the effectiveness on simulated images and analyze the experimental results. We then test our method on image data captured by the experimental equipment of Section 2.1 and report the results in Section 3.3. The experimental results on both simulated and captured images demonstrate the effectiveness of our method.

3.1. Evaluation Indicators

It should be noted that ground truth images are available for the simulated images but not for the captured images; therefore, the evaluation indicators are applied to the simulated images. Structural similarity (SSIM) and mean square error (MSE) are used to evaluate the correction quality on the simulated inhomogeneous backgrounds. The MSE is defined as follows:
MSE = \frac{1}{m \times n} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left( I(i,j) - K(i,j) \right)^2
where m and n are the image dimensions (both set to 4096 here), and I and K are the ground truth image and the corrected image, respectively. The SSIM is defined as follows:
SSIM = \frac{(2 \mu_x \mu_y + c_1)(2 \sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}
where μ_x and μ_y are the means of the input x and the output y, σ_x and σ_y are their standard deviations, and σ_xy is the covariance between x and y. c_1 and c_2 are constants that avoid errors when the denominator approaches zero.
A smaller MSE indicates a smaller error between the ground truth image and the corrected image, which means the background noise is better removed. An SSIM closer to 1 indicates that the ground truth and corrected images are more similar, which means the information of the original image is better preserved during correction. A minimal numpy sketch of both indicators is given below.
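For reference, a minimal numpy version of both indicators. The single global SSIM window and the conventional constants c1 = (0.01·L)², c2 = (0.03·L)² are assumptions, since the paper does not state its constants; library code such as skimage.metrics.structural_similarity instead averages local windows.

```python
import numpy as np

def mse(gt: np.ndarray, corrected: np.ndarray) -> float:
    """Mean squared error between ground truth I and corrected image K."""
    diff = gt.astype(np.float64) - corrected.astype(np.float64)
    return float(np.mean(diff ** 2))

def ssim_global(x: np.ndarray, y: np.ndarray, data_range=65535.0) -> float:
    """Single-window (global) SSIM following the formula above."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2          # assumed conventional constants
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return float((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```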

3.2. Experiment on Simulated Images

We generated a set of simulated images to evaluate the effectiveness of the proposed method. The simulated inhomogeneous backgrounds are generated according to the model of [26]. We generated a background-free image and then composited the background onto it to obtain the test image. In this way, the ground truth image, the inhomogeneous background, and the simulated image are all available; a hypothetical sketch of such a simulation is given below. We apply the method of Section 2 to the simulated images to obtain the estimated backgrounds and corrected images; the results are shown in Figure 3.
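The paper's backgrounds follow the model of [26], which we do not reproduce here; the sketch below is only a hypothetical stand-in that composites a smooth radial background, with the Poisson (scattering, dark current) and Gaussian (readout) noise sources cited in Section 2.2. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_degraded(gt: np.ndarray, bg_level=200.0, read_noise=5.0) -> np.ndarray:
    """Hypothetical simulator: smooth background + shot noise + readout noise."""
    h, w = gt.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2) / (h * h + w * w)
    background = bg_level * (1.0 - 0.5 * r2)       # smooth, center-bright surface
    signal = np.clip(gt.astype(np.float64) + background, 0, None)
    shot = rng.poisson(signal).astype(np.float64)  # Poisson component
    return shot + rng.normal(0.0, read_noise, size=(h, w))  # Gaussian readout noise
```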
Figure 4 compares the grey-level distributions of Figure 3a,d along the same row (scan line 350). Figure 4a shows the grey-level distribution of the ground truth image on the scan line; Figure 4b shows that of the corrected image. The average grey-level distribution of the corrected image remains the same as that of the ground truth image, and the superimposed noise is removed well.
The effectiveness of the proposed method can be evaluated quantitatively on the simulated images by comparing it with fixed empirical split region sizes. The results in Table 1 show that the proposed method finds a reasonable split region size compared with the methods based on empirical values, and it achieves a better correction result on the simulated degraded image, thus obtaining a noise-suppressed image.

3.3. Experiment on Captured Images

We demonstrated the effectiveness of the method on simulated images in Section 3.2. Unlike the simulated images, ground truth images are difficult to obtain for the captured images, so our method cannot be evaluated quantitatively on them. However, its effect can be assessed from the improvement in image quality before and after processing.
In Figure 5, the first column shows the captured images and the second column the images corrected by our method. The third and fourth columns show the background estimated by the method and its gray-level distribution, respectively. The grey scale of the images has been stretched to present detail. Note that the observations in rows 1–2 were made in cities, where the images contain background stray light scattered by the atmosphere due to the high level of city lighting. The observations in rows 3–4 were made in the countryside, where atmospheric stray light is weaker and the background is mainly caused by detector readout noise and diffraction from bright stars. The comparative results show that our method corrects the inhomogeneous background well under both strong and weak sky backgrounds, demonstrating its universality.
We also processed the case in which the field is partially obscured by a small amount of cloud. As shown in Figure 6, the method still works well, since the variation of the clouds is usually smooth over small areas.
Figure 7a,b displays the captured and corrected images, respectively; Figure 7c shows the background estimated by the method, and Figure 7d is the normalized background distribution. Due to the low elevation angle, more stray light is refracted and scattered by the atmosphere near the horizon, causing large undulations in the vertical direction of the shot. This variation is still relatively smooth in small areas and can therefore be removed well by the proposed method.
We also compare the performance of our method with the single-image correction method of Zhang et al. [19]. Note that our method was run in MATLAB R2018a with a single thread, so only one CPU core was used in the tests. As shown in Table 2, averaged over a set of 40 star images, the running time of our method is lower than that of Zhang et al.'s method at the same resolution. Meanwhile, for long-exposure images, the processing-time requirement (2.5 s) is also met at 4 Kpx × 4 Kpx resolution.

4. Conclusions

This paper presents a method for the inhomogeneous background correction of single-frame images from small telescopes. The method removes the skylight background and camera noise from the images. It is model-free, adapts to complex environments, and extracts background information from a single frame. It offers high performance, high correction accuracy, and the ability to handle the heterogeneity of complex sky backgrounds, showing strong practicality.
The experiments on the simulated and captured images prove the effectiveness of the method, which significantly improves image quality under complex noise and complicated skylight backgrounds. It applies to astronomical observation and remote sensing, and could also serve agricultural, ocean, and other ground-based observations; the method is robust enough to be extended to medical detection and analysis and to high dynamic range imaging. In future research, we will speed up our method with an optimized C++ implementation or GPU-accelerated computation.

Author Contributions

Conceptualization, T.C., Z.W. and J.C.; methodology, J.C., C.J., Z.W. and C.L. (Changhua Liu); software, C.J.; validation, M.S. and C.L. (Changhua Liu); formal analysis, C.J. and C.L. (Changzheng Lu); data curation, C.J. and C.L. (Changzheng Lu); writing—original draft preparation, C.J.; writing—review and editing, C.J. and C.L. (Changzheng Lu); visualization, C.J. and M.S.; project administration, T.C.; funding acquisition, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China, Grant Nos. 62001448 and U21A20451.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank Risong Zhang for assistance with the language and figures, and every member of Changchun Changguang Orion Optoelectronic Technology Co., Ltd.; all their contributions were necessary for the completion of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gao, Y.; Zhao, J.-Y. Space object detecting ability improvement method based on optimal principle. Optoelectron. Lett. 2019, 15, 459–462.
  2. Steindorfer, M.A.; Kirchner, G.; Koidl, F.; Wang, P.; Jilete, B.; Flohrer, T. Daylight space debris laser ranging. Nat. Commun. 2020, 11, 3735.
  3. Beck, P.; Robson, L.; Gallaway, M.; Jones, H.R.; Campbell, D. Efficient follow-up of exoplanet transits using small telescopes. Publ. Astron. Soc. Pac. 2019, 131, 084402.
  4. Yibin, R.; Yu, Z.; Xuejian, N. Application of large shipborne theodolite in space target measurement. In Proceedings of the 17th International Conference on Optical Communications and Networks (ICOCN2018), Zhuhai, China, 16–19 November 2018; SPIE: Bellingham, WA, USA, 2019; p. 110482J.
  5. Sun, R.-Y.; Zhan, J.-W.; Zhao, C.-Y.; Zhang, X.-X. Algorithms and applications for detecting faint space debris in GEO. Acta Astronaut. 2015, 110, 9–17.
  6. Luo, H.; Zheng, J.-H.; Wang, W.; Cao, J.-J.; Zhu, J.; Chen, G.-P.; Zhang, Y.-S.; Liu, C.-S.; Mao, Y.-D. FocusGEO II. A telescope with imaging mode based on image overlay for debris at Geosynchronous Earth Orbit. Adv. Space Res. 2022, 69, 2618–2628.
  7. Paolo, C.; Guido, A.; Matteo, A.; Carmelo, A.; Andrea, B.; Andrea, B.; Maria, B.; Andrea, B.; Marco, B.; Lorenzo, B.; et al. MAORY: The adaptive optics module for the Extremely Large Telescope (ELT). In Adaptive Optics Systems VII; SPIE: Bellingham, WA, USA, 2020; p. 114480Y.
  8. Ryan, D.; Mark, C.; Yutaka, H. On the feasibility of using a laser guide star adaptive optics system in the daytime. J. Astron. Telesc. Instrum. Syst. 2019, 5, 019002.
  9. Dong, L.; Wang, B. Research on the new detection method of suppressing the skylight background based on the shearing interference and the phase modulation. Opt. Express 2020, 28, 12518–12528.
  10. Sanders, T.; Hedges, R.; Schulz, T.; Abijaoude, M.; Peters, J.; Steinbock, M.; Arreola, A.; Holmes, T. Real Time Deconvolution of Adaptive Optics Ground Based Telescope Imagery. J. Astronaut. Sci. 2022, 69, 175–191.
  11. Torben, E.A.; Mette, O.-P.; Anita, E. Image-based wavefront sensing for astronomy using neural networks. J. Astron. Telesc. Instrum. Syst. 2020, 6, 034002.
  12. Hickman, S.; Weddell, S.; Clare, R. Image Correction with Curvature and Geometric Wavefront Sensors in Simulation and On-sky. In Proceedings of the 2019 International Conference on Image and Vision Computing New Zealand (IVCNZ), Dunedin, New Zealand, 2–4 December 2019; pp. 1–6.
  13. Yu, W. Practical anti-vignetting methods for digital cameras. IEEE Trans. Consum. Electron. 2004, 50, 975–983.
  14. Tek, F.B.; Dempster, A.G.; Kale, I. Computer vision for microscopy diagnosis of malaria. Malar. J. 2009, 8, 153.
  15. Yuan, X.; Wang, X.; Dou, A.; Ding, X. Vignetting Correction of Post-Earthquake UAV Images. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 5704–5707.
  16. Litvinov, A.; Schechner, Y.Y. Addressing radiometric nonidealities: A unified framework. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; pp. 52–59.
  17. Kaiqiong, S.; Yaqin, L.; Shan, Z.; Jun, W. Hybrid active contour model for inhomogeneous image segmentation with background estimation. J. Electron. Imaging 2018, 27, 023018.
  18. Vopalensky, M.; Kumpova, I.; Vavrik, D. Suppression of residual gradients in the flat-field corrected images. E-J. Nondestruct. Test. 2019, 25.
  19. Zhang, D.; Yang, Q.Y.; Chen, T. Vignetting correction for a single star-sky observation image. Appl. Opt. 2019, 58, 4337–4344.
  20. Zhang, Y.; Wang, J.; Su, J.; Cheng, X.; Zhang, Z. Astrometric observations of a near-Earth object using the image fusion technique. Astron. J. 2021, 162, 250.
  21. Mohamed, E.H.; Michael, C.R.; Durdu, O.G. Reconstruction of images degraded by aerosol scattering and measurement noise. Opt. Eng. 2015, 54, 033101.
  22. Sonnenschein, C.M.; Horrigan, F.A. Signal-to-Noise Relationships for Coaxial Systems that Heterodyne Backscatter from the Atmosphere. Appl. Opt. 1971, 10, 1600–1604.
  23. Pain, B.; Cunningham, T.; Hancock, B.; Wrigley, C.; Sun, C. Excess noise and dark current mechanisms in CMOS imagers. In Proceedings of the IEEE Workshop on CCD's and Advanced Image Sensors, Karuizawa, Nagano, Japan, 9–11 June 2005.
  24. Nguyen, T.-K.; Kim, C.-H.; Ihm, G.-J.; Yang, M.-S.; Lee, S. CMOS low-noise amplifier design optimization techniques. IEEE Trans. Microw. Theory Tech. 2004, 52, 1433–1442.
  25. Nayar, S.K.; Nakagawa, Y. Shape from focus. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 824–831.
  26. Kang, S.B.; Weiss, R. Can we calibrate a camera using an image of a flat, textureless Lambertian surface? In Proceedings of the Computer Vision—ECCV 2000: 6th European Conference on Computer Vision, Dublin, Ireland, 26 June–1 July 2000; pp. 640–653.
Figure 1. Flowchart of the method.
Figure 2. The structural overview of the image acquisition equipment.
Figure 3. An example of correcting a simulated image with our method. (a) The ground truth image; (b) the simulated inhomogeneous background; (c) the simulated image obtained by compositing the ground truth and the simulated background; (d) the clear image after correction; (e) the grey-level distribution of (b); and (f) the background estimated by our method.
Figure 4. Comparison of the grey-level distribution on the same row in the ground truth image and the corrected image. (a) Ground truth image grey-level distribution; (b) corrected image grey-level distribution.
Figure 5. Correction results on images captured with the image acquisition equipment described in Section 2.1. I–IV show different images.
Figure 6. In the case of a small amount of cloud, our method can still obtain reliable results. (a) The captured image, taken with the image acquisition equipment described in Section 2.1; (b) the corrected image with the clouds removed; (c) the background estimated by our method; and (d) the grey-level distribution of (c).
Figure 7. The results of removing an inhomogeneous background using our method at a low elevation angle. (a) The captured image; (b) the corrected image; (c) the estimated background; (d) the grey-level distribution of (c).
Table 1. Performance of different partition areas in the test set.

Split Area Size    MSE     SSIM
10 px × 10 px      33.62   0.9437
20 px × 20 px      23.02   0.9813
30 px × 30 px      8.65    0.9827
40 px × 40 px      9.55    0.9805
Ours               7.46    0.9837
Table 2. Comparison of average execution time.

Image Size       CPU                   Average Running Time   Method
2 kpx × 2 kpx    [email protected] GHz   9.29 s                 Zhang et al. [19]
2 kpx × 2 kpx    [email protected] GHz   1.25 s                 Ours
4 kpx × 4 kpx    [email protected] GHz   2.36 s                 Ours