Technical Note

A DeturNet-Based Method for Recovering Images Degraded by Atmospheric Turbulence

Xiangxi Li, Xingling Liu, Weilong Wei, Xing Zhong, Haotong Ma and Junqiu Chu

1 Chang Guang Satellite Technology Co., Ltd., Changchun 130102, China
2 The Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
3 National Key Laboratory of Optical Field Manipulation Science and Technology, Chinese Academy of Sciences, Chengdu 610209, China
4 Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209, China
5 University of Chinese Academy of Sciences, Beijing 101408, China
6 University of Electronic Science and Technology of China (UESTC), Chengdu 610054, China
* Authors to whom correspondence should be addressed.
Remote Sens. 2023, 15(20), 5071; https://doi.org/10.3390/rs15205071
Submission received: 31 August 2023 / Revised: 8 October 2023 / Accepted: 19 October 2023 / Published: 23 October 2023

Abstract

Atmospheric turbulence is one of the main causes of image blurring, jitter, and other degradation when detecting targets over long distances. Due to the randomness of turbulence, degraded images are hard to restore directly using traditional methods. With the rapid development of deep learning, blurred images can be restored directly by establishing a nonlinear mapping relationship between the degraded and original objects with neural networks. These data-driven, end-to-end neural networks offer advantages in turbulence image reconstruction owing to their real-time operation and simplified optical systems. In this paper, inspired by the connection between turbulence phase-diagram characteristics and attention mechanisms in neural networks, we propose a new deep neural network called DeturNet to enhance network performance and improve the quality of image reconstruction. DeturNet employs global information aggregation operations and amplifies salient cross-dimensional receptive regions, thereby contributing to the recovery of turbulence-degraded images.

1. Introduction

Atmospheric turbulence is one of the main issues that cause image degradation when detecting objects at long ranges. Caused by random fluctuations of the refractive index, turbulence introduces a spatio-temporal blur that cannot be measured directly [1,2,3]. With the development of adaptive optics, Noll [4] established a relationship between turbulence and wavefront distortion described by Zernike polynomials. Thus, a turbulence-degraded image can be efficiently restored using deconvolution algorithms when the wavefront distortion or its point spread function (PSF) is correctly obtained [5,6,7,8,9].
Depending on whether the PSF is known in advance, deconvolution algorithms can be divided into non-blind and blind algorithms. Non-blind deconvolution algorithms restore degraded images using a known PSF; typical examples include inverse filtering, Wiener filtering, and related algorithms [10,11,12,13]. However, non-blind deconvolution requires additional devices, such as a wavefront sensor (WFS), to measure the exact PSF, which makes the system complex [10,14]. In contrast, blind deconvolution algorithms reconstruct degraded images when the PSF is poorly determined, so they do not need any wavefront detection devices. However, because both the PSF and the original image must be recovered from the degraded images, blind deconvolution is an ill-posed problem. Thus, traditional blind deconvolution algorithms are mostly based on image sequences or iterative methods, such as lucky imaging [15], iterative blind deconvolution [16], and their improved variants [17,18].
In recent years, deep learning methods have received much attention in turbulence blind deconvolution [19,20]. Compared with other methods, deep learning has considerable advantages in solving ill-posed and nonlinear problems thanks to its end-to-end, data-driven approach [21,22,23]. After training with a large amount of data, a deep neural network directly establishes a hidden nonlinear mapping between the input and output. Thus, deep learning can be applied to ill-posed optical information restoration problems, such as holographic reconstruction [24,25], super-resolution imaging [26,27], image denoising [28,29], and phase extraction [30,31]. Deep learning has also achieved good results in areas such as the modulation classification of signals [32,33,34,35,36], which demonstrates its successful application in various fields. Some researchers have also shown the effectiveness of deep learning in turbulence image reconstruction [37,38,39]. However, deep learning-based restoration of turbulence-degraded images still faces difficulties, such as obtaining both the turbulence-degraded image and its PSF, the effect of noise, the complexity of the blur kernel, and the similarity of the recovered target information. Thus, we need to establish a new deep neural network, perform different kinds of experiments to obtain different data, and test the effectiveness of the network in both simulations and experiments.
In this paper, we propose a deep neural network called DeturNet for restoring images degraded by atmospheric turbulence. The advantage of this network is that it preserves global information and transfers image features more effectively by enhancing the interaction of information across different dimensions. In addition, the network shows good robustness in our results and retains some generalization ability even with a small, single-scene dataset. To verify its performance, we conducted simulations as well as laboratory and outdoor experiments. Both the simulations and experiments show that the method removes turbulence-induced degradation well, is robust to noise, and is fast enough for real-time applications.

2. Materials and Methods

2.1. Atmospheric Turbulence Imaging Model

In this section, the properties and a simplified model of atmospheric turbulence in long-range imaging are discussed in detail. Figure 1 shows a model of light propagating through atmospheric turbulence. In a high-F-number optical system, the PSF of the turbulence can be considered a spatially invariant function [40]. Thus, the observed degraded image can be modeled using Equation (1) [5].
$$g(r,\theta) = f(r,\theta) \otimes h(r,\theta) + n(r,\theta) \qquad (1)$$
where $g(r,\theta)$ is the observed image, $f(r,\theta)$ represents the actual object, $h(r,\theta)$ represents the PSF, $n(r,\theta)$ represents the noise, $(r,\theta)$ represents the spatial coordinates of the image plane, and $\otimes$ denotes the convolution operation.
Then, the relationship between the PSF and the wavefront distortion is given by Equation (2):
$$h(r,\theta) = \left| \mathcal{F}\left\{ P(r,\theta)\exp\left[ i\varphi(r,\theta) \right] \right\} \right|^{2} \qquad (2)$$
where $P(r,\theta)$ represents the optical pupil function of the telescope, $\varphi(r,\theta)$ is the wavefront distortion, and $\mathcal{F}\{\cdot\}$ represents the Fourier transform. Since the optical pupil function is fixed in this paper, the PSF depends only on the wavefront distortion.
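To make the imaging model concrete, the following is a minimal NumPy sketch of Equations (1) and (2): given a wavefront distortion over a circular pupil, it builds the PSF and degrades an image by convolution plus additive noise. The grid size, pupil sampling, and noise level are illustrative assumptions rather than the settings used in this paper.

```python
import numpy as np

def psf_from_phase(phi, pupil):
    """Equation (2): h = |F{ P * exp(i*phi) }|^2, normalized to unit energy."""
    field = pupil * np.exp(1j * phi)
    h = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return h / h.sum()

def degrade(f, h, noise_sigma=0.01):
    """Equation (1): g = f (*) h + n, with the convolution done in the Fourier domain."""
    g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(np.fft.ifftshift(h))))
    return g + noise_sigma * np.random.randn(*f.shape)

# Example with an aberration-free phase over a circular pupil.
n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
pupil = (x ** 2 + y ** 2 <= 1.0).astype(float)
f = np.random.rand(n, n)                      # stand-in for the object image
g = degrade(f, psf_from_phase(np.zeros((n, n)), pupil))
```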
The phase $\varphi(r,\theta)$ of the wavefront distortion caused by atmospheric turbulence can be decomposed into Zernike polynomials, which are orthogonal over the circular domain [4], as shown in Equation (3):
$$\varphi(r,\theta) = \sum_{i=1}^{\infty} a_i z_i(r,\theta) \qquad (3)$$
where $a_i$ is the Zernike coefficient of the $i$th Zernike polynomial $z_i(r,\theta)$. The relationship between the Zernike coefficients and the turbulence can be determined using [4]. According to Kolmogorov turbulence theory, the relationship between the covariance matrix $C = [c_{ij}]$ and the Zernike coefficient vector $A = \{a_1, a_2, \ldots, a_n\}$ is given by Equation (4):
$$\left\langle a_i a_j \right\rangle = c_{ij} \left( \frac{D}{r_0} \right)^{5/3} \qquad (4)$$
where $c_{ij}$ is the covariance coefficient, $D$ is the aperture of the telescope, and $r_0$ is the atmospheric coherence length. Thus, the coefficient vector $A$ can be derived from the Karhunen–Loève expansion, as shown in Equation (5):
$$C = V S V^{T}, \qquad A = V B \qquad (5)$$
where $V$ is the eigenvector matrix, $S$ is the diagonal Karhunen–Loève coefficient matrix, and $B$ represents the phase wavefront, which can be considered a Gaussian-distributed random vector with zero mean and variance matrix $S$. Based on the above analysis, we can obtain a zero-mean Gaussian random coefficient vector $A$ and ensure that the calculated wavefront distortion conforms to the Kolmogorov turbulence model.
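The sampling procedure implied by Equations (3)–(5) can be sketched as follows. The helper noll_covariance below is a placeholder for the covariance coefficients c_ij of [4] (here just a random positive semi-definite matrix), so the snippet only illustrates the eigendecomposition and the zero-mean Gaussian draw, not the true Kolmogorov statistics.

```python
import numpy as np

def noll_covariance(n_modes):
    # Placeholder for the Zernike covariance coefficients c_ij of [4];
    # any symmetric positive semi-definite matrix works for this demo.
    m = np.random.randn(n_modes, n_modes)
    return m @ m.T / n_modes

def sample_zernike_coeffs(n_modes=57, D_over_r0=8.0, rng=None):
    rng = rng or np.random.default_rng()
    C = noll_covariance(n_modes) * D_over_r0 ** (5.0 / 3.0)   # Equation (4)
    S, V = np.linalg.eigh(C)                                  # C = V S V^T, Equation (5)
    B = rng.normal(0.0, np.sqrt(np.clip(S, 0.0, None)))       # zero mean, variance S
    return V @ B                                              # A = V B

coeffs = sample_zernike_coeffs()   # one draw corresponds to one turbulence phase screen
```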

2.2. DeturNet

To solve the ill-posed turbulence problem, a deep learning method called DeturNet is proposed in this section, as shown in Figure 2. Inspired by existing networks [41,42], DeturNet consists of two subnetworks, both with U-Net structures. Connecting the two subnetworks in series has the following advantages: because the output of the first subnetwork serves as the input of the second, the overall network becomes deeper; we believe that the deeper the network, the weaker the effect of atmospheric turbulence on the reconstruction and the more easily the network learns the feature distribution of the image, which leads to better reconstructions. Beyond atmospheric turbulence, the two-subnetwork design may also be applicable to other low-level, end-to-end restoration tasks.
Each subnetwork has a five-layer structure that contains three parts: downsampling, upsampling, and skip connections. In the downsampling stage, an HIN block is used to downsample the features and perform instance normalization, as shown in Figure 3a. In the upsampling stage, a Res block is used to improve the reconstruction speed and avoid vanishing gradients, as shown in Figure 3b. Between downsampling and upsampling, skip connections are established by adding a cross-stage feature fusion (CSFF) module and a supervised attention module (SAM) [43]. The CSFF module allows features from one stage to be transformed and fused with features from the next stage, helping to enrich the multiscale features of the next stage, while the SAM helps propagate useful features from the feature map to the next stage. Although these modules are beneficial for information extraction and transmission, the above mechanism suffers from the loss of channel and spatial local information in the image.
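As a concrete illustration of the downsampling stage, the following is a hedged PyTorch sketch of a half instance normalization (HIN) block in the spirit of HINet [41]: instance normalization is applied to only half of the feature channels. The kernel sizes, activation, and residual path are our assumptions, not necessarily the exact configuration used in DeturNet.

```python
import torch
import torch.nn as nn

class HINBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.norm = nn.InstanceNorm2d(out_ch // 2, affine=True)   # normalize half the channels
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        y = self.conv1(x)
        y1, y2 = torch.chunk(y, 2, dim=1)                  # split channels in half
        y = self.act(torch.cat([self.norm(y1), y2], dim=1))  # instance norm on one half only
        y = self.act(self.conv2(y))
        return y + self.shortcut(x)                         # residual connection
```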
In order to preserve the channel and spatial local information and enhance the interaction between different dimensions of information, we added a global attention mechanism (GAM) [44] to improve the utilization of multichannel information and amplify the global interaction representation, as shown in Figure 3c. The GAM comprises two submodules: a channel attention mechanism (Figure 4) and a spatial attention mechanism (Figure 5). The two-layer multilayer perceptron (MLP) is an encoder-decoder structure with a channel reduction rate r, which is used to amplify the spatial dependence of the channels across different dimensions.
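A hedged PyTorch sketch of the GAM used in Figure 3c is given below, with the channel sub-module of Figure 4 implemented as a two-layer MLP with reduction rate r over permuted dimensions and the spatial sub-module of Figure 5 as 7 × 7 convolutions, following the description in [44]; the exact layer configuration in DeturNet may differ.

```python
import torch
import torch.nn as nn

class GAM(nn.Module):
    def __init__(self, channels, r=4):
        super().__init__()
        # Channel attention: encoder-decoder MLP with reduction rate r (Figure 4).
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
        )
        # Spatial attention: two 7x7 convolutions (Figure 5).
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels // r, 7, padding=3),
            nn.BatchNorm2d(channels // r),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, 7, padding=3),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):                                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        attn = self.channel_mlp(x.permute(0, 2, 3, 1).reshape(-1, c))
        attn = torch.sigmoid(attn.reshape(b, h, w, c).permute(0, 3, 1, 2))
        x = x * attn                                         # channel attention
        return x * torch.sigmoid(self.spatial(x))            # spatial attention
```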
Through this two-subnetwork design, the depth of the network is increased, its nonlinear fitting ability is improved, and the effect of complex input transformations on network performance can be learned more easily, which helps the network better fit the characteristics of the data. Given the characteristics of atmospheric turbulence, this dual-subnetwork structure improves, to a certain extent, the model's ability to recover turbulence-degraded images.

2.3. The Implementation of DeturNet

In this section, the construction of the training dataset and platform is described. As shown in Figure 6, we first selected 700 remote sensing images of aircraft from the NWPU-RESISC45 dataset [45] as the original image dataset and simulated 700 phase screens using the 4th–60th Zernike polynomials according to Equations (3)–(5) as the atmospheric turbulence phase screens. The NWPU-RESISC45 remote sensing dataset is a large-scale public dataset published by Northwestern Polytechnical University for remote sensing scene classification; the aircraft category is shown in the original images in Figure 6. Then, we generated the degraded images from each original image and turbulence phase screen according to Equations (1) and (2). Finally, the degraded images and the corresponding original images were used as the inputs and labels of the dataset, respectively.
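A minimal sketch of how the degraded/label pairs could be organized for training is shown below; it assumes the simulated blurred images and their originals are kept as two aligned lists of 256 × 256 arrays, which is our assumption about the data layout rather than the authors' implementation.

```python
import torch
from torch.utils.data import Dataset

class TurbulencePairs(Dataset):
    """Pairs a turbulence-degraded image (input) with its original image (label)."""

    def __init__(self, blurred, clean):
        assert len(blurred) == len(clean)
        self.blurred, self.clean = blurred, clean

    def __len__(self):
        return len(self.blurred)

    def __getitem__(self, i):
        x = torch.as_tensor(self.blurred[i]).float().unsqueeze(0)   # degraded input
        y = torch.as_tensor(self.clean[i]).float().unsqueeze(0)     # original label
        return x, y
```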
The whole dataset was divided into a training set, a validation set, and a test set in a ratio of 8:1:1. The network input image size was 256 × 256, and the label size was the same. We used the L1 loss as the loss function:
$$\mathrm{Loss} = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|$$
where $\hat{y}_i$ represents the output image, $y_i$ represents the corresponding label image, and $N$ represents the number of images in the batch.

3. Results and Discussion

3.1. The Test of Image Restoration in Simulation

In this section, we test the effectiveness and robustness of DeturNet using simulations. Training took approximately 3 h for a total of 300 epochs. The initial learning rate was 1 × 10−4, with cosine decay to a minimum learning rate of 1 × 10−6. The batch size was 16. The optimizer was Adam, and the weight decay was set to 2 × 10−6.
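The training configuration above can be expressed as the following PyTorch sketch (Adam, initial learning rate 1 × 10−4, cosine decay to 1 × 10−6, weight decay 2 × 10−6, batch size 16, 300 epochs, L1 loss); model and train_set stand for the DeturNet module and the paired dataset and are placeholders for illustration.

```python
import torch
from torch.utils.data import DataLoader

def train(model, train_set, epochs=300, device="cuda"):
    loader = DataLoader(train_set, batch_size=16, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=2e-6)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs, eta_min=1e-6)
    loss_fn = torch.nn.L1Loss()                       # L1 loss defined in Section 2.3
    model.to(device)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
        sched.step()                                  # cosine learning-rate decay per epoch
```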
The network was implemented in PyTorch 1.8.0 with Python 3.7.11 in Sichuan, China. An Intel Xeon CPU (2.5 GHz) and an NVIDIA GeForce RTX 3090 GPU were used for the training and testing phases. The loss curve of the DeturNet training process is shown in Figure 7.
The restoration results are shown in Figure 8 and demonstrate that our method can effectively remove turbulence-induced degradation from an image. We also compared our method with other traditional and deep learning image recovery methods: the blind recovery method proposed by Jin et al. [46], the deep learning algorithm U-Net [42], and DeepRFT [47]. All the methods were retrained to achieve optimal recovery. The restoration results were analyzed using both subjective and objective criteria. From a visual point of view, DeturNet recovers the turbulence-degraded images better, and its result is closer to the original image than those of the other methods, which still leave some blur in the edge information of the target. Meanwhile, two evaluation functions, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), are used for objective evaluation:
$$\mathrm{PSNR} = 10 \log_{10} \left( \frac{\mathrm{MaxValue}^{2}}{\frac{1}{M \times N} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left( X(i,j) - Y(i,j) \right)^{2}} \right)$$
$$\mathrm{SSIM}(X,Y) = l(X,Y)\, c(X,Y)\, s(X,Y)$$
where $X$ denotes the pixel values of the reference image, $Y$ denotes the pixel values of the evaluated image, and $(i,j)$ denotes the pixel coordinates. MaxValue is the maximum gray level of the image, which is usually 255 for 8-bit images. $l$, $c$, and $s$ represent the luminance, contrast, and structure terms, respectively [48]. These two evaluation functions capture different aspects of image degradation. The PSNR reflects the ratio between the maximum possible signal energy and the noise energy, which affects the fidelity of the representation. The SSIM reflects the structural similarity between two images. A higher PSNR and an SSIM closer to 1 generally indicate better image quality [49,50,51].
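For reference, the two full-reference metrics can be computed as follows; the PSNR follows the definition above for 8-bit images (MaxValue = 255), and the SSIM is taken from scikit-image's implementation of the luminance/contrast/structure product.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(x, y, max_value=255.0):
    # Mean squared error between reference x and evaluated image y.
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)

def ssim(x, y):
    return structural_similarity(x, y, data_range=255)
```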
In the outdoor experiments, the PSNR and SSIM evaluation criteria were not applicable due to the lack of corresponding labels. Therefore, we utilized three additional no-reference evaluation criteria, namely the variance, information entropy, and average gradient (AG), to objectively assess the recovery quality. These criteria are calculated as follows:
$$\mathrm{Variance} = \frac{1}{MN} \sum_{i} \sum_{j} \left( f(i,j) - \mu \right)^{2}, \qquad \mu = \frac{1}{MN} \sum_{i} \sum_{j} f(i,j)$$
$$\mathrm{Entropy} = - \sum_{k=0}^{n} P(k) \log_{2} P(k)$$
$$\mathrm{AG} = \frac{1}{MN} \sum_{i} \sum_{j} \sqrt{ \frac{ \left( \partial f / \partial x \right)^{2} + \left( \partial f / \partial y \right)^{2} }{2} }$$
where $M \times N$ is the image size, $f(i,j)$ is the gray value of pixel $(i,j)$, and $P(k)$ is the ratio of the number of pixels with gray value $k$ to the total number of pixels. $\partial f / \partial x$ and $\partial f / \partial y$ represent the gradients in the $x$ and $y$ directions, respectively.
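A minimal NumPy sketch of the three no-reference criteria, assuming an 8-bit grayscale image, is given below.

```python
import numpy as np

def variance(img):
    f = img.astype(np.float64)
    return np.mean((f - f.mean()) ** 2)

def entropy(img):
    # Gray-level histogram normalized to probabilities P(k).
    p = np.bincount(img.astype(np.uint8).ravel(), minlength=256) / img.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    gy, gx = np.gradient(img.astype(np.float64))      # gradients along y and x
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))
```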
In Figure 8, we can see that the evaluation indices of the DeturNet-restored images are the highest among all the methods for all four images. The average results of the two evaluation indices on the test set are also shown in Table 1, where PSNR_std represents the standard deviation of the PSNR over the test set. On the test set, DeturNet yields an average improvement of 3.16 dB (16.8%) in the PSNR and 0.0899 (13.3%) in the SSIM over the blurred images, which is the highest among the four methods. Thus, our method performs best on the test sets, showing the superiority of the proposed approach. We also compared the running time of our method with that of the other three methods, as shown in Table 2. The results show that our method has better real-time capability than the other methods. As computing power continues to increase, the proposed method has the potential to perform real-time image reconstruction.

3.2. Ablation

To further validate the role of each module in DeturNet, we conducted ablation experiments. It should be noted that the ablation experiments are not intended to verify the superior performance of any particular module but rather to demonstrate that DeturNet achieves its de-turbulence capability through the joint action of all modules. The results of the ablation experiments are shown in Table 3. The hyperparameters were kept the same as those described above. It can be seen that removing any one module decreases the performance of DeturNet and weakens its de-turbulence ability.

3.3. The Robustness of DeturNet to Different Noise Levels

To test the robustness of the network to different noise levels, Gaussian noise with zero mean and variances of 0.01, 0.03, 0.05, and 0.08 was added to the test set, which was then fed directly into the trained network. The test images and reconstructed images are shown in Figure 9. The results for the entire test set are shown in Table 4.
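The noise injection used for this robustness test can be sketched as follows, assuming images normalized to [0, 1].

```python
import numpy as np

def add_gaussian_noise(img, var, rng=None):
    rng = rng or np.random.default_rng()
    noisy = img + rng.normal(0.0, np.sqrt(var), img.shape)   # zero-mean Gaussian noise
    return np.clip(noisy, 0.0, 1.0)

noise_variances = [0.01, 0.03, 0.05, 0.08]
```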
As can be seen in Figure 9, increasing the noise affects both the blurred and restored images. When the noise is small, the reconstructed image is only slightly affected. When the noise increases, the reconstructed image shows a ringing effect; however, it still preserves most of the detailed information, and the target can still be clearly recognized. From Table 4, we can see that the PSNR and SSIM decline steadily, which indicates that the capability of our algorithm decreases slowly as the noise increases, while the overall reconstruction capability remains good.

3.4. The Robustness of DeturNet under Different Turbulence Intensities

To test the robustness of our network under different turbulence conditions, we considered different turbulence intensities (D/r0 = 5 and D/r0 = 10), re-simulated the datasets at the corresponding intensities, and retrained the network. The results are shown in Figure 10. The degraded images show that stronger turbulence causes more severe blurring, which makes the recovery of turbulence-degraded images extremely challenging.
The test results in Figure 10 show that, although the image is completely blurred by the turbulence of D/r0 = 10, the recovery results of DeturNet can still reconstruct the target information, such as the aircraft outline and building edges. However, some high-frequency information is lost, such as the edge details in the recovered results, when the turbulence intensity increases further. Table 5 shows that the reconstruction results are valid for the different turbulence intensity tests.

3.5. Laboratory Experiment Results and Discussions

In this section, we build a laboratory imaging system to verify the performance and robustness of our network in practice. The laboratory experiment bridges the simulations and the outdoor experiments: compared with the simulated dataset, it provides more realistic atmospheric turbulence, and compared with the outdoor experiments, it offers turbulence-free reference images as the basis of the training dataset. As shown in Figure 11, an LED and a digital micromirror device (DMD) were used to generate dynamic targets loaded with aircraft images from the NWPU-RESISC45 dataset, where the LED provides a stable, broad-spectrum light source and the DMD rapidly projects the loaded targets. A turbulence screen was used to generate stochastic atmospheric turbulence with the same intensity as in the simulations.
Through the experimental process, a dataset of 700 turbulence-blurred images was obtained and divided into a training set, a validation set, and a test set in the ratio of 8:1:1. The subjective quality of the recovered images and the two evaluation criteria show that our proposed method has a better reconstruction ability than the other methods. Some of the recovered images are shown in Figure 12, and the average results for the test set are shown in Table 6. Both the simulation and experimental results show that the DeturNet-based recovery method is more effective and robust for turbulence-degraded image restoration, which means that the proposed method has the potential to be used in image restoration for astronomical observation, remote sensing observation, and traffic detection.

3.6. Outdoor Experiment Results and Discussions

To further validate the effectiveness of our method, we collected real turbulence-degraded images outdoors in a natural environment. The experimental scene is shown in Figure 13a. In this case, the telescope focal length was 1250 mm, the object distance of the target was about 200 m, the CCD camera pixel size was 5.5 µm, the CCD camera exposure time was 3 ms, the acquisition time was 4 p.m., and the maximum and minimum temperatures of the day were 33 °C and 24 °C, respectively. One of the acquired images is shown in Figure 13b.
In the experiment, we captured images of a toy car and calibration targets. We selected some regions of interest and tested the recoverability and generalization effect directly using DeturNet trained on a laboratory experimental dataset. The tested DeturNet-based single-frame recovery results are shown in Figure 14, from top to bottom, for scenarios 1–4. We observed the reconstructed images using both the subjective and objective methods. In a subjective evaluation, DeturNet exhibited a good recovery effect. The target contour edge information in the image was better recovered, and the boundary was more visible. Meanwhile, the variances of DeturNet restoration results were much larger than those of the blurred images and the results of the comparison methods.
Two objective no-reference evaluation criteria, the information entropy and average gradient (AG), were used in this paper, as shown in Table 7. Although DeturNet was trained only on the laboratory dataset, the network achieves good recovery results in the outdoor experiments. Thus, from the perspective of visual effect, histogram distribution, and the values of the variance, entropy, and average gradient, the results show that the network can, to a certain extent, recover scenes that were completely absent from the training data. The variance and entropy reflect the amount of information in the image; the larger the value, the richer the information in the image hierarchy. The average gradient reflects the ability of an image to express contrast in fine detail; in general, a larger average gradient means a sharper image. Since the original data are unknown, these evaluation functions have some limitations. For example, in the first row of images, even though U-Net has the highest variance, the edges of the toy car in its recovered image are obviously distorted, while the edges of the toy car reconstructed by DeturNet are more realistic and sharper. Through subjective judgment supplemented by objective metrics, it can be seen that DeturNet outperforms the other deep learning algorithms in terms of generalization ability. The results also suggest that the generalization ability on uncorrelated datasets can be improved further with sufficient training, and we will continue our research on this issue.

4. Conclusions

In this paper, we propose a single-frame deep learning method called DeturNet for atmospheric-turbulence image reconstruction. Compared with other deep learning methods, DeturNet has a deeper network structure that is more consistent with the characteristics of turbulence. We verified the effectiveness of DeturNet through simulations and experiments. The simulation and laboratory experimental results showed that DeturNet has a better reconstruction effect and noise immunity than the other methods, and the outdoor experimental results show that DeturNet generalizes well after training on the laboratory dataset. These results show that DeturNet provides good recovery for blind turbulence image reconstruction. Combined with the other advantages of deep learning, such as low cost and high speed, DeturNet can become an effective alternative for turbulence image reconstruction.

Author Contributions

Conceptualization, X.L. (Xiangxi Li) and X.Z.; methodology, X.L. (Xiangxi Li) and J.C.; software, X.L. (Xiangxi Li) and W.W.; validation, X.L. (Xingling Liu); writing—original draft preparation, X.L. (Xiangxi Li); writing—review and editing, J.C.; funding acquisition, H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62175243 and 12304331, in part by the Excellent Youth Foundation of Sichuan Scientific Committee under Grant 2019JDJQ0012, in part by the Youth Innovation Promotion Association, CAS under Grant 2020372, and the Outstanding Scientist Project of Tianfu Qingcheng Program.

Data Availability Statement

Data will be made available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hufnagel, R.E.; Stanley, N.R. Modulation Transfer Function Associated with Image Transmission through Turbulent Media. J. Opt. Soc. Am. JOSA 1964, 54, 52–61. [Google Scholar] [CrossRef]
  2. Roggemann, M.C.; Welsh, B.M. Imaging through Turbulence; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  3. Furhad, M.H.; Tahtali, M.; Lambert, A. Restoring Atmospheric-Turbulence-Degraded Images. Appl. Opt. 2016, 55, 5082–5090. [Google Scholar] [CrossRef] [PubMed]
  4. Noll, R.J. Zernike Polynomials and Atmospheric Turbulence. J. Opt. Soc. Am. JOSA 1976, 66, 207–211. [Google Scholar] [CrossRef]
  5. Wang, K.; Zhang, M.; Tang, J.; Wang, L.; Hu, L.; Wu, X.; Li, W.; Di, J.; Liu, G.; Zhao, J. Deep Learning Wavefront Sensing and Aberration Correction in Atmospheric Turbulence. PhotoniX 2021, 2, 8. [Google Scholar] [CrossRef]
  6. Xin, Q.; Ju, G.; Zhang, C.; Xu, S. Object-Independent Image-Based Wavefront Sensing Approach Using Phase Diversity Images and Deep Learning. Opt. Express 2019, 27, 26102. [Google Scholar] [CrossRef]
  7. Lane, R.G. Blind Deconvolution of Speckle Images. JOSA A 1992, 9, 1508–1514. [Google Scholar] [CrossRef]
  8. Sheppard, D.G.; Hunt, B.R.; Marcellin, M.W. Iterative Multiframe Superresolution Algorithms for Atmospheric-Turbulence-Degraded Imagery. J. Opt. Soc. Am. A JOSAA 1998, 15, 978–992. [Google Scholar] [CrossRef]
  9. Ellerbroek, B.; Rhoadarmer, T. Adaptive Wavefront Control Algorithms for Closed Loop Adaptive Optics. Math. Comput. Model. 2001, 33, 145–158. [Google Scholar] [CrossRef]
  10. Rigaut, F.; Ellerbroek, B.L.; Northcott, M.J. Comparison of Curvature-Based and Shack–Hartmann-Based Adaptive Optics for the Gemini Telescope. Appl. Opt. AO 1997, 36, 2856–2868. [Google Scholar] [CrossRef]
  11. Krishnan, D.; Fergus, R. Fast Image Deconvolution Using Hyper-Laplacian Priors. Adv. Neural Inf. Process. Syst. 2009, 22. [Google Scholar]
  12. Sankhe, P.D.; Patil, M.; Margaret, M. Deblurring of Grayscale Images Using Inverse and Wiener Filter. In Proceedings of the International Conference & Workshop on Emerging Trends in Technology, Mumbai, India, 25–26 February 2011; pp. 145–148. [Google Scholar]
  13. Singh, M.K.; Tiwary, U.S.; Kim, Y.-H. An Adaptively Accelerated Lucy-Richardson Method for Image Deblurring. EURASIP J. Adv. Signal Process. 2007, 2008, 365021. [Google Scholar] [CrossRef]
  14. Wild, W.J. Linear Phase Retrieval for Wave-Front Sensing. Opt. Lett. 1998, 23, 573–575. [Google Scholar] [CrossRef] [PubMed]
  15. Fried, D.L. Probability of Getting a Lucky Short-Exposure Image through Turbulence. J. Opt. Soc. Am. 1978, 68, 1651–1658. [Google Scholar]
  16. Ayers, G.R.; Dainty, J.C. Iterative Blind Deconvolution Method and Its Applications. Opt. Lett. OL 1988, 13, 547–549. [Google Scholar] [CrossRef] [PubMed]
  17. Davey, B.; Lane, R.; Bates, R. Blind Deconvolution of Noisy Complex-Valued Image. Opt. Commun. 1989, 69, 353–356. [Google Scholar] [CrossRef]
  18. Tsumuraya, F.; Miura, N.; Baba, N. Iterative Blind Deconvolution Method Using Lucy’s Algorithm. Astron. Astrophys. 1994, 282, 699–708. [Google Scholar]
  19. Wu, S.; Dong, C.; Qiao, Y. Blind Image Restoration Based on Cycle-Consistent Network. IEEE Trans. Multimed. 2022, 25, 1111–1124. [Google Scholar] [CrossRef]
  20. Huang, L.; Xia, Y. Joint Blur Kernel Estimation and CNN for Blind Image Restoration. Neurocomputing 2020, 396, 324–345. [Google Scholar] [CrossRef]
  21. Tao, X.; Gao, H.; Shen, X.; Wang, J.; Jia, J. Scale-Recurrent Network for Deep Image Deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8174–8182. [Google Scholar]
  22. Zhang, K.; Ren, W.; Luo, W.; Lai, W.-S.; Stenger, B.; Yang, M.-H.; Li, H. Deep Image Deblurring: A Survey. Int. J. Comput. Vis. 2022, 130, 2103–2130. [Google Scholar]
  23. Brunton, S.L.; Kutz, J.N. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control; Cambridge University Press: Cambridge, UK, 2019. [Google Scholar]
  24. Rivenson, Y.; Zhang, Y.; Günaydın, H.; Teng, D.; Ozcan, A. Phase Recovery and Holographic Image Reconstruction Using Deep Learning in Neural Networks. Light Sci. Appl. 2018, 7, 17141. [Google Scholar] [CrossRef] [PubMed]
  25. Ren, Z.; Xu, Z.; Lam, E.Y. End-to-End Deep Learning Framework for Digital Holographic Reconstruction. Adv. Photonics 2019, 1, 016004. [Google Scholar] [CrossRef]
  26. Fang, L.; Monroe, F.; Novak, S.W.; Kirk, L.; Schiavon, C.R.; Yu, S.B.; Zhang, T.; Wu, M.; Kastner, K.; Latif, A.A. Deep Learning-Based Point-Scanning Super-Resolution Imaging. Nat. Methods 2021, 18, 406–416. [Google Scholar] [CrossRef]
  27. Masutani, E.M.; Bahrami, N.; Hsiao, A. Deep Learning Single-Frame and Multiframe Super-Resolution for Cardiac MRI. Radiology 2020, 295, 552–561. [Google Scholar] [CrossRef] [PubMed]
  28. Tian, C.; Fei, L.; Zheng, W.; Xu, Y.; Zuo, W.; Lin, C.-W. Deep Learning on Image Denoising: An Overview. Neural Netw. 2020, 131, 251–275. [Google Scholar]
  29. Elad, M.; Kawar, B.; Vaksman, G. Image Denoising: The Deep Learning Revolution and Beyond–A Survey Paper. SIAM J. Imaging Sci. 2023, 16, 1594–1654. [Google Scholar] [CrossRef]
  30. Wang, K.; Li, Y.; Kemao, Q.; Di, J.; Zhao, J. One-Step Robust Deep Learning Phase Unwrapping. Opt. Express 2019, 27, 15100–15115. [Google Scholar] [CrossRef] [PubMed]
  31. Wang, K.; Di, J.; Li, Y.; Ren, Z.; Kemao, Q.; Zhao, J. Transport of Intensity Equation from a Single Intensity Image via Deep Learning. Opt. Lasers Eng. 2020, 134, 106233. [Google Scholar] [CrossRef]
  32. Zheng, Q.; Zhao, P.; Li, Y.; Wang, H.; Yang, Y. Spectrum Interference-Based Two-Level Data Augmentation Method in Deep Learning for Automatic Modulation Classification. Neural Comput. Appl. 2021, 33, 7723–7745. [Google Scholar] [CrossRef]
  33. Zheng, Q.; Zhao, P.; Zhang, D.; Wang, H. MR-DCAE: Manifold Regularization-based Deep Convolutional Autoencoder for Unauthorized Broadcasting Identification. Int. J. Intell. Syst. 2021, 36, 7204–7238. [Google Scholar] [CrossRef]
  34. Zheng, Q.; Tian, X.; Yu, Z.; Wang, H.; Elhanashi, A.; Saponara, S. DL-PR: Generalized Automatic Modulation Classification Method Based on Deep Learning with Priori Regularization. Eng. Appl. Artif. Intell. 2023, 122, 106082. [Google Scholar] [CrossRef]
  35. Zheng, Q.; Zhao, P.; Wang, H.; Elhanashi, A.; Saponara, S. Fine-Grained Modulation Classification Using Multi-Scale Radio Transformer with Dual-Channel Representation. IEEE Commun. Lett. 2022, 26, 1298–1302. [Google Scholar] [CrossRef]
  36. Zheng, Q.; Tian, X.; Yang, M.; Wu, Y.; Su, H. PAC-Bayesian Framework Based Drop-Path Method for 2D Discriminative Convolutional Network Pruning. Multidimens. Syst. Signal Process. 2020, 31, 793–827. [Google Scholar] [CrossRef]
  37. Shu, J.; Xie, C.; Gao, Z. Blind Restoration of Atmospheric Turbulence-Degraded Images Based on Curriculum Learning. Remote Sens. 2022, 14, 4797. [Google Scholar] [CrossRef]
  38. Mei, K.; Patel, V.M. LTT-GAN: Looking Through Turbulence by Inverting GANs. IEEE J. Sel. Top. Signal Process. 2023, 17, 587–598. [Google Scholar] [CrossRef]
  39. Jin, D.; Chen, Y.; Lu, Y.; Chen, J.; Wang, P.; Liu, Z.; Guo, S.; Bai, X. Neutralizing the Impact of Atmospheric Turbulence on Complex Scene Imaging via Deep Learning. Nat. Mach. Intell. 2021, 3, 876–884. [Google Scholar] [CrossRef]
  40. Block, N.R.; Introne, R.E.; Schott, J.R. Image Quality Analysis of a Spectra-Radiometric Sparse-Aperture Model. In Proceedings of the Spaceborne Sensors; SPIE: Bellingham, WA, USA, 2004; Volume 5418, pp. 127–138. [Google Scholar]
  41. Chen, L.; Lu, X.; Zhang, J.; Chu, X.; Chen, C. HINet: Half Instance Normalization Network for Image Restoration. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 182–192. [Google Scholar]
  42. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  43. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.-H.; Shao, L. Multi-Stage Progressive Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14821–14831. [Google Scholar]
  44. Liu, Y.; Shao, Z.; Hoffmann, N. Global Attention Mechanism: Retain Information to Enhance Channel-Spatial Interactions. arXiv 2021, arXiv:2112.05561. [Google Scholar]
  45. Cheng, G.; Han, J.; Lu, X. Remote Sensing Image Scene Classification: Benchmark and State of the Art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef]
  46. Jin, M.; Roth, S.; Favaro, P. Normalized Blind Deconvolution. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 668–684. [Google Scholar]
  47. Mao, X.; Liu, Y.; Shen, W.; Li, Q.; Wang, Y. Deep Residual Fourier Transformation for Single Image Deblurring. arXiv 2021, arXiv:2111.11745. [Google Scholar]
  48. Hore, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
  49. Zhang, C.; Huang, Y.P.; Guo, Z.Y.; Yang, J. Real-Time Lane Detection Method Based on Semantic Segmentation. Opto-Electron. Eng. 2022, 49, 210378. [Google Scholar]
  50. Rui, S.; Han, Z.; Cheng, Z.; Zhang, X. Super-resolution reconstruction of infrared image based on channel attention and transfer learning. OEE 2021, 48, 200045. [Google Scholar] [CrossRef]
  51. Liao, M.; Zheng, S.; Pan, S.; Lu, D.; He, W.; Situ, G.; Peng, X. Deep-Learning-Based Ciphertext-Only Attack on Optical Double Random Phase Encryption. OEA 2021, 4, 200016. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the imaging model in which (a) is the original image, (b) is the turbulence disturbance, and (c) is the degraded image observed by the imaging system.
Figure 2. The DeturNet structure, in which each subnetwork has five layers.
Figure 3. (a) HIN Block, (b) Res Block, (c) GAM module.
Figure 4. The structure of the channel attention module. Fin represents the input, and Fout represents the output.
Figure 5. The structure of the spatial attention module. Fin represents the input, and Fout represents the output.
Figure 6. The image dataset: the original images are from the NWPU-RESISC45 dataset, the turbulence phase screens were generated using Zernike polynomials, and the blurred images were obtained by degrading the original images with the phase screens.
Figure 7. Loss curves of DeturNet during the training process.
Figure 8. Restoration effect comparison of images in the test sets: (a) original images; (b) blurred images; (c) Jin et al. [46]; (d) U-Net; (e) DeepRFT; (f) DeturNet. The numbers in the images are the PSNR and SSIM.
Figure 9. Robustness test results for different noise intensities. The test images are the same as those shown in Figure 8. For each scene, the top row shows the noisy images and the bottom row shows the recovery results. The last four columns represent images with different noise levels. The numbers in the images are the PSNR and SSIM.
Figure 10. Test results for different turbulence intensities. The first three columns of the figure show the original images, blurred images, and recovery results for the turbulence intensity D/r0 = 5. The last three columns show the original images, blurred images, and recovery results for the turbulence intensity D/r0 = 10. The numbers in the images are the PSNR and SSIM.
Figure 11. Laboratory setup of the imaging system. The LED provides a stable light source, and the DMD is used to provide changing target information. A turbulent screen is used to simulate a real turbulence environment. The receiving module was an imaging system with F = 16. The focal length of the lens was 100 mm, the diaphragm through-aperture size was 6.25 mm, and the exposure time was 20 ms.
Figure 12. Restoration effect comparison of images in the experiment: (a) original images; (b) blurred images; (c) Jin et al. [46]; (d) U-Net; (e) DeepRFT; (f) DeturNet. The numbers under the images are the PSNR and SSIM.
Figure 13. (a) Outfield experimental diagram and (b) outfield experimental acquisition image.
Figure 14. Outfield image restoration results with histograms. The numbers under the images represent the variance, and the lower-right corner of each image shows its histogram. Generally, the higher the value, the richer the image information [46].
Table 1. Comparison of the test sets.

             Blurred    Jin et al. [46]    U-Net     DeepRFT    DeturNet
PSNR (dB)    18.85      18.31              21.48     21.40      22.01
PSNR_std     2.40       2.19               2.37      2.60       3.20
SSIM         0.6745     0.6625             0.7304    0.7416     0.7644
SSIM_std     0.0848     0.0870             0.0934    0.0886     0.0833
Table 2. Comparison of the average time consumed by methods to recover images.

         Jin et al. [46]    U-Net       DeepRFT      DeturNet
Time     113.38 s           48.87 ms    139.97 ms    47.16 ms
Table 3. Results of ablation experiments.

Method      SAM   CSFF   GAM   PSNR (dB)   SSIM
DeturNet    –     √      √     21.51       0.7614
DeturNet    √     –      √     21.65       0.7597
DeturNet    √     √      –     21.58       0.7576
DeturNet    √     √      √     22.01       0.7644
Table 4. Comparison of the Gaussian test sets.

                               Var = 0    Var = 0.01    Var = 0.03    Var = 0.05    Var = 0.08
Noise test sets    PSNR (dB)   18.85      18.83         18.77         18.69         18.55
                   SSIM        0.6745     0.6654        0.6380        0.6064        0.5611
Recovery results   PSNR (dB)   22.01      21.86         21.50         21.30         20.89
                   SSIM        0.7644     0.7460        0.7019        0.6603        0.6044
Table 5. Comparison of the test sets under different turbulence intensities.

             D/r0 = 5               D/r0 = 8               D/r0 = 10
             Blurred   DeturNet     Blurred   DeturNet     Blurred   DeturNet
PSNR (dB)    19.10     22.72        18.85     22.01        18.37     21.20
SSIM         0.6819    0.7797       0.6745    0.7644       0.6431    0.7302
Table 6. Comparison of the test sets in the experiment.

             Blurred    Jin et al. [46]    U-Net     DeepRFT    DeturNet
PSNR (dB)    18.73      18.70              24.19     27.10      27.69
PSNR_std     2.18       2.19               3.00      3.25       3.25
SSIM         0.5615     0.5573             0.6970    0.7719     0.7863
SSIM_std     0.1030     0.1058             0.0891    0.0710     0.0681
Table 7. Comparison of the test scenes.

                      Blurred    Jin et al. [46]    U-Net    DeepRFT    DeturNet
Scene 1   Entropy     6.08       6.13               6.95     6.76       6.80
          AG          1.78       1.66               2.84     3.73       2.86
Scene 2   Entropy     6.55       5.91               7.27     7.54       7.32
          AG          3.55       3.73               6.38     7.67       6.75
Scene 3   Entropy     7.31       7.34               7.67     7.73       7.64
          AG          2.87       3.26               5.21     5.44       5.71
Scene 4   Entropy     6.85       6.70               6.87     7.42       6.86
          AG          2.59       2.67               4.36     4.69       3.78
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
