Article

Data-Decoupled Scattering Imaging Method Based on Autocorrelation Enhancement

1 Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo 315000, China
2 Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315000, China
3 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
4 Qilu Aerospace Information Research Institute, Jinan 250100, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2023, 13(4), 2394; https://doi.org/10.3390/app13042394
Submission received: 27 December 2022 / Revised: 3 February 2023 / Accepted: 10 February 2023 / Published: 13 February 2023

Abstract

Target recovery through scattering media is an important aspect of optical imaging. Although various algorithms combining deep-learning methods for target recovery through scattering media exist, they have limitations in robustness and generalization. To address these issues, this study proposes a data-decoupled scattering imaging method based on autocorrelation enhancement. This method constructs basic-element datasets, acquires the speckle images corresponding to these elements, and, using speckle autocorrelation as prior physical knowledge, trains a deep-learning model on the autocorrelation images generated from these elements to achieve scattering recovery imaging of targets across data domains. To remove noise terms and enhance the signal-to-noise ratio, a deep-learning model based on the encoder–decoder structure was used to recover speckle autocorrelation images with a high signal-to-noise ratio. Finally, clear reconstruction of the target is achieved by applying the traditional phase-recovery algorithm. The results demonstrate that this process improves the peak signal-to-noise ratio of the data from 15 to 37.28 dB and the structural similarity from 0.38 to 0.99, allowing a clear target image to be reconstructed. Supplementary experiments on the robustness and generalization of the method were also conducted, and the results prove that it performs well on frosted-glass plates with different scattering characteristics.

1. Introduction

Owing to the influence of inhomogeneous propagation media properties, photons carrying target information during imaging can change their propagation path when passing through media such as the atmosphere and biological tissues. Consequently, multiple scattering phenomena occur, resulting in failure to identify the target object in the image plane [1,2,3]. Overcoming the effect of scattering and recovering the target hidden behind the scattering media will greatly benefit laser detection, biomedical imaging, military applications, etc.
In the past decade or so, researchers have proposed various methods to image targets through scattering media by performing measurements and calculations, such as wavefront shaping, transmission-matrix measurement, and phase conjugation [4,5,6,7,8]. However, each of these methods has specific advantages, disadvantages, and applicability conditions; for example, transmission-matrix measurement and phase-conjugation methods require complex imaging systems, and their associated measurements are computationally intensive. With further research, Bertolotti et al. and Katz et al. proposed scattering imaging techniques based on speckle autocorrelation [9,10,11,12,13,14,15,16]; however, because the phase-recovery step is highly susceptible to noise terms and autocorrelation retains only the amplitude information of the target, poor reconstruction results are often obtained.
According to the principle of speckle formation, the scattering medium re-encodes the incident light field that carries the target information. Researchers have proposed implementing computational imaging through scattering media using machine learning, especially deep-learning methods that do not require complex physical models [17,18,19,20,21,22,23,24]. For example, Hui et al. demonstrated the feasibility of machine learning for target imaging through scattering media using the support vector regression (SVR) algorithm [17]. Yang et al. used a convolutional neural network based on the U-Net architecture to reconstruct handwritten digits in the task of speckle image recovery through media such as frosted glass [18]. Zhang et al. applied an encoder–decoder-based generative adversarial network to an optical imaging system to capture the target phase pattern and reconstruct the target in speckle images [19]. These studies overcame the limitations of traditional methods, achieving target imaging through scattering media with relatively low computational effort and without changing the optical path, and demonstrated the feasibility of machine learning and deep learning for computational imaging through scattering media without any prior physical knowledge. Nonetheless, the aforementioned end-to-end computational imaging methods effectively recover images only for targets in the same domain, owing to the lack of guidance from prior knowledge.
To further improve the generalization of the model and explore the influence of target information in speckle images on imaging, Lyu et al. designed a hybrid neural network (HNN) deep-learning model that achieves target reconstruction by retrieving as little as 0.1% of the target information from speckle images [25]. Sun et al. proposed classification before reconstruction to achieve efficient target reconstruction under different scattering media [26]. Furthermore, Li et al. developed a deep neural network to achieve target imaging under different scattering conditions [27]. Although these researchers improved the generalization of their models, the approaches remain, in essence, data-driven.
Owing to the limitation of the acquisition environment, it is extremely difficult to acquire a large variety of comprehensive and scenario-diverse datasets. Therefore, the exploration of physical models of scattering imaging and overcoming data limitations to achieve imaging through scattering media have become attractive research trends. Zhu et al. combined autocorrelation physical models with deep learning to effectively reduce the training datasets and reconstruct the target under the same data domain through scattering media [28]. However, in the practical optical-path acquisition, images are susceptible to interference from external factors and generate large noise terms during the autocorrelation calculation, which affects the quality of speckle-correlation imaging. This issue has raised the question of how to combine physical constraints with deep learning to achieve autocorrelation-based speckle recovery with a low signal-to-noise ratio and effectively suppress background noise terms.
To address these key issues, this paper designs a scattering recovery algorithm, autocorrelation-enhanced generative adversarial networks (A-eGAN), for random-scale and complex targets based on prior physical knowledge of autocorrelation to reduce the impact of noise on the autocorrelation results. After loading the speckle autocorrelation image datasets of basic elements as the training set, speckle autocorrelation recoveries of the targets under different data domains were achieved by training the deep-learning model. The experimental results show that the method in this paper can improve the signal-to-noise ratio of the autocorrelation images in the target speckle image datasets under different data domains collected from frosted glass plates with different meshes. High-resolution reconstruction of the target is achieved with the aid of the phase recovery algorithm. The main contributions of this paper are as follows:
(1)
Achieving highly generalizable speckle recovery with better interpretability by combining basic elements with autocorrelation.
(2)
Designing a neural network structure suited to random-scale and complex targets to enhance the reconstruction performance of the model.
(3)
Using a two-stage approach (i.e., enhancement followed by reconstruction) to develop a method that is robust under different scattering media and interference terms.

2. Materials and Methods

2.1. Principle of Autocorrelation

When the illumination angle of the light source is constant, the scattering imaging process is determined within a certain field of view. The scattering-based imaging system mainly consists of a laser light source, target, scattering medium, and detector, as shown in Figure 1.
The relationship between the target image and corresponding output speckle image captured by the sensor can be described as follows:
$$E_{\mathrm{out}} = K \ast E_{\mathrm{in}},$$
where $E_{\mathrm{out}}$ represents the received speckle image, $K$ represents the point-spread function of the scattering medium (i.e., frosted glass), $E_{\mathrm{in}}$ represents the target image of the input vector, and $\ast$ represents the convolution of the target with the point-spread function. Even though the target cannot be directly identified from the speckle image, the speckle image contains the target information according to the convolution theorem. According to the autocorrelation principle, the autocorrelation operation on the target speckle image is written as follows:
$$E_{\mathrm{out}} \star E_{\mathrm{out}} = (K \ast E_{\mathrm{in}}) \star (K \ast E_{\mathrm{in}}) = (E_{\mathrm{in}} \star E_{\mathrm{in}}) \ast (K \star K),$$
Since the point-spread function of an imaging system is a peaked function, the above equation can be written as
$$E_{\mathrm{out}} \star E_{\mathrm{out}} = (K \ast E_{\mathrm{in}}) \star (K \ast E_{\mathrm{in}}) = (E_{\mathrm{in}} \star E_{\mathrm{in}}) + C,$$
where $\star$ denotes the autocorrelation operation and $C$ is the noise term.
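The autocorrelation relation above can be computed efficiently through the Wiener-Khinchin theorem (the Fourier transform of the autocorrelation equals the power spectrum). A minimal numpy sketch, with a hypothetical square target plus additive noise standing in for a measured speckle image:

```python
import numpy as np

def autocorrelate(img):
    """Autocorrelation via the Wiener-Khinchin theorem:
    FT{I star I} = |FT(I)|^2, so the autocorrelation is IFT{|FT(I)|^2}."""
    F = np.fft.fft2(img)
    ac = np.fft.ifft2(np.abs(F) ** 2).real
    return np.fft.fftshift(ac)  # move the zero-lag peak to the centre

# Hypothetical square target plus noise standing in for a speckle image
rng = np.random.default_rng(0)
obj = np.zeros((64, 64))
obj[28:36, 28:36] = 1.0
speckle = obj + 0.05 * rng.standard_normal((64, 64))

ac = autocorrelate(speckle)
peak = np.unravel_index(np.argmax(ac), ac.shape)  # zero lag dominates
```

The zero-lag term always dominates the autocorrelation, which is why the structural features survive even when the speckle itself looks like noise.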
The autocorrelation of the target and its energy spectral density form a Fourier transform pair:
$$\mathrm{FT}\{E_{\mathrm{in}} \star E_{\mathrm{in}}\} = |\mathrm{FT}(E_{\mathrm{in}})|^2,$$
where $\mathrm{FT}\{\cdot\}$ represents the Fourier transform operation, from which the amplitude information of the target in the Fourier spectrum is obtained. Owing to the lack of phase information, the target image cannot be recovered by the inverse Fourier transform alone. Hence, the HIO-ER phase-recovery algorithm is used to recover the phase information of the target [21,22]. The structure is shown in Figure 2.
This method starts from an arbitrary initial estimate and obtains the amplitude distribution of the target via iterative transformations in the Fourier domain; once the retrieved phase distribution matches that of the target, the real target can be recovered by the inverse Fourier transform. The detailed iterations are written as follows:
$$G_k(u,v) = \mathrm{FT}\{g_k(x,y)\}, \qquad G'_k(u,v) = |\mathrm{FT}\{O(u,v)\}| \, e^{i \arg G_k(u,v)}, \qquad g'_k(x,y) = \mathrm{IFT}\{G'_k(u,v)\},$$
The initial iterate $g_1(x,y)$ is assigned an arbitrary value, $g_k(x,y)$ is the $k$-th iterate, $\mathrm{IFT}\{\cdot\}$ represents the inverse Fourier transform operation, and the $(k+1)$-th iterate is obtained from the result of the $k$-th iteration, $g'_k(x,y)$. In this process, the "hybrid input-output method" is used to impose the following physical constraints:
$$g_{k+1}(x,y) = \begin{cases} g'_k(x,y), & (x,y) \notin \Gamma \\ g_k(x,y) - \beta\, g'_k(x,y), & (x,y) \in \Gamma \end{cases}$$
where $\Gamma$ contains all points that violate the object-domain constraint (the imaging target is a non-negative real number), and $\beta$ is a constant decay factor. New estimated objects are generated continuously during the iterative process until all points satisfy the constraint.
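The HIO-ER scheme above can be sketched in numpy as follows. This is an illustrative implementation under the non-negativity constraint, not the authors' exact code; the iteration counts and $\beta$ value are assumptions:

```python
import numpy as np

def hio_er(magnitude, n_hio=100, n_er=20, beta=0.9, seed=0):
    """Illustrative HIO-ER sketch: impose the measured Fourier amplitude,
    then enforce non-negativity in the object domain. Iteration counts
    and beta are assumed values, not the authors' exact settings."""
    rng = np.random.default_rng(seed)
    g = rng.random(magnitude.shape)                 # arbitrary initial estimate g_1
    for k in range(n_hio + n_er):
        G = np.fft.fft2(g)
        Gp = magnitude * np.exp(1j * np.angle(G))   # keep phase, replace amplitude
        gp = np.fft.ifft2(Gp).real                  # back to the object domain
        gamma = gp < 0                              # points violating non-negativity
        if k < n_hio:                               # hybrid input-output updates
            g = np.where(gamma, g - beta * gp, gp)
        else:                                       # error-reduction updates
            g = np.where(gamma, 0.0, gp)
    return g

obj = np.zeros((32, 32))
obj[12:20, 12:20] = 1.0
rec = hio_er(np.abs(np.fft.fft2(obj)))  # recovery is only up to translation/flip
```

Because only the Fourier amplitude is constrained, the reconstruction is determined only up to translation and point reflection, consistent with the position ambiguity discussed later in the paper.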

2.2. Model and Data Design

2.2.1. Overall Algorithm Flow

According to the autocorrelation principle, speckle images exhibit obvious structural features after the autocorrelation operation. However, the autocorrelation results are still inevitably influenced by noise, which affects the reconstruction results of the phase-recovery algorithm. In this section, using the prior physical knowledge of speckle correlation, deep learning is combined with traditional algorithms. Meanwhile, basic-element datasets processed by the autocorrelation operation are constructed as the initial training datasets of the network model, alleviating the massive data requirements of deep neural networks. The specific process is shown in Figure 3.

2.2.2. Neural Network Model

To transform speckle autocorrelation images with low signal-to-noise ratios into autocorrelation images with high signal-to-noise ratios, a generative adversarial network (GAN) was chosen as the base model in this paper [29,30,31,32,33]. A GAN produces better output by balancing two modules, the generative model and the discriminative model. The generative model adopts the U-Net architecture, a general encoder–decoder model well suited to recovery, employing downsampling and upsampling operations for feature extraction and recovery. Several existing methods based on the prototype U-Net architecture achieve speckle reconstruction and recover their target objects to varying degrees. The generative model aims to generate new images that are as similar as possible to the real target image, while the goal of the discriminative model is to distinguish the generated image from the real image.
As for the network structure details, the feature-extraction step uses a 3 × 3 convolution kernel to generate 512 × 512 × 64 feature maps from the input data, using LeakyReLU as the activation function. Consequently, the input layer outputs 64 feature maps, each of 512 × 512 pixels. Image recovery is performed using a 3 × 3 convolution kernel for the deconvolution operation, while skip connections in the ResNet style are added between the downsampling and upsampling paths. High-level features from deeper layers are combined with low-level features from shallower layers to better recover the structural details of the image.
Owing to the limited size of the convolution kernel, the GAN's generator can only capture relationships within local regions. In traditional convolutional GANs, high-resolution details are generated only as a function of local points in low-resolution feature maps, so stacking multiple convolutional layers is computationally inefficient for establishing global dependencies. The self-attention mechanism provides a global perceptual field: it learns the relationship between a pixel and all other locations, so that the generation of each location no longer depends solely on nearby locations. Instead, more distant but informative locations are introduced, and the features of each location are computed as a weighted sum over all other locations. In this study, we construct a generative adversarial network with a self-attention mechanism to obtain global feature information of the image. This not only better eliminates the effect of background noise but also preserves the details of the image. In addition, the discriminator uses a simple convolutional neural network, with an activation layer as its last layer. The specific network structure is shown in Figure 4.
For the self-attention mechanism shown in module (c) of Figure 4, the function of this module is to replace the traditional convolutional feature map with a self-attention feature map. For the feature map output by the convolution layer, the three $1 \times 1$ convolution branches are $f(x)$, $g(x)$, and $h(x)$; $H$ and $W$ denote the height and width of the feature map, and $C$ represents the number of channels. The output matrix of $f(x)$ is transposed, multiplied with the output matrix of $g(x)$, and normalized by SoftMax to obtain the attention feature map. The attention feature map is multiplied with the $h(x)$ output matrix, convolved by a $1 \times 1$ kernel, and the output is integrated into the self-attention feature map.
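The data flow of module (c) can be sketched in numpy, with plain weight matrices standing in for the $1 \times 1$ convolutions of the $f(x)$, $g(x)$, and $h(x)$ branches and the output projection; all names and shapes here are illustrative assumptions, not the paper's exact layer sizes:

```python
import numpy as np

def softmax_cols(s):
    """Column-wise SoftMax normalization."""
    e = np.exp(s - s.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def self_attention(x, Wf, Wg, Wh, Wv, gamma=1.0):
    """SAGAN-style self-attention over a C x H x W feature map. Weight
    matrices stand in for the 1x1 convolutions of the f, g, h branches
    and the output projection; shapes are illustrative."""
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                   # N = H*W spatial locations
    f, g, h = Wf @ flat, Wg @ flat, Wh @ flat    # query, key, value branches
    attn = softmax_cols(f.T @ g)                 # N x N attention map
    out = Wv @ (h @ attn)                        # weighted sum over all locations
    return x + gamma * out.reshape(C, H, W)      # residual connection

rng = np.random.default_rng(0)
C, Ck, H, W = 8, 2, 4, 4
x = rng.standard_normal((C, H, W))
Wf, Wg, Wh = (rng.standard_normal((Ck, C)) for _ in range(3))
Wv = rng.standard_normal((C, Ck))
y = self_attention(x, Wf, Wg, Wh, Wv)  # same shape as the input feature map
```

Each output location is thus a weighted sum of every location in the feature map, which is the global dependency the paragraph above describes.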
The MSE loss function is used in this study. It achieves recovery of the target details by minimizing the average of the squared errors between corresponding points of the predicted and original data:
$$\mathrm{MSE} = \frac{1}{M \times N} \sum_{i=1}^{M \times N} (x_i - x'_i)^2,$$
where $x_i$ and $x'_i$ are the $i$-th ($i = 1, 2, 3, \ldots, M \times N$) pixel values of the target image and the corresponding recovered image, respectively.

2.3. Data Setup

In optical imaging, each image element receives a beam of light corresponding to a spatial location and has a different pixel value depending on the intensity of the light. The final image obtained by the camera is presented as the sum of the images constructed by all the light rays independently, as shown in Figure 5.
The imaging process can be expressed as follows:
$$E_{\mathrm{in}} = \sum_{i=0}^{n} f(x_i),$$
where $x_i$ represents a beam of light emitted outward from the target object, and $f(\cdot)$ represents the mapping function from light intensity to pixel value. When the light beam encounters scattering media during its propagation, it undergoes scattering to form a speckle image superimposed on the final imaging surface. The imaging process can then be expressed as follows:
$$E_{\mathrm{out}} = K \ast E_{\mathrm{in}} = K \ast \sum_{i=0}^{n} f(x_i),$$
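The forward model above can be illustrated with an FFT-based circular convolution; the random-phase screen producing the speckle-like PSF here is a hypothetical stand-in for the frosted glass, not a calibrated diffuser model:

```python
import numpy as np

def conv2_fft(a, b):
    """Circular 2-D convolution via the FFT (convolution theorem)."""
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

rng = np.random.default_rng(1)
size = 64
E_in = np.zeros((size, size))
E_in[24:40, 24:40] = 1.0                                  # target image
diffuser = np.exp(2j * np.pi * rng.random((size, size)))  # random-phase screen
K = np.abs(np.fft.ifft2(diffuser)) ** 2                   # speckle-like PSF
E_out = conv2_fft(E_in, K)                                # measured speckle image
```

The result is an intensity pattern in which the square target is no longer visually identifiable, even though its information is preserved by the convolution.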
The principle of differential geometry states that any complex geometry can be approximated by a sufficiently large collection of sufficiently small basic geometric elements (e.g., squares). Therefore, in this study, we propose using basic geometric elements in the training set so that the deep-neural-network model can learn the mapping between basic-element speckle autocorrelation images and ground-truth autocorrelation images. The model can then be extended to the mapping between complex-target speckle autocorrelation images and their ground-truth autocorrelation images, thereby recovering complex speckle autocorrelation images.
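A dataset of basic elements of the kind described above might be generated as follows. This is a hypothetical sketch: the shape sizes, positions, and image dimensions are assumptions, not the authors' exact data design:

```python
import numpy as np

def make_element(kind, size=64, seed=None):
    """Generate one basic-element target: a circle, square, or triangle at a
    random position and scale (hypothetical re-creation of the dataset design)."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    r = int(rng.integers(6, 12))                  # element half-size (assumed range)
    cx, cy = rng.integers(r, size - r, size=2)    # random centre, fully inside frame
    yy, xx = np.mgrid[:size, :size]
    if kind == "circle":
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2] = 1.0
    elif kind == "square":
        img[cy - r:cy + r, cx - r:cx + r] = 1.0
    elif kind == "triangle":                      # upright isosceles triangle
        inside = (yy >= cy - r) & (yy <= cy + r) & \
                 (np.abs(xx - cx) <= (yy - (cy - r)) / 2)
        img[inside] = 1.0
    return img

dataset = [make_element(k, seed=i) for i, k in
           enumerate(["circle", "square", "triangle"] * 4)]
```

Rotational variants and position jitter, as used in the paper, follow the same pattern with an extra rotation step.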
To verify the autocorrelation recovery of the proposed method on complex target data, the Fashion-MNIST dataset was chosen as the test set for the experiment. Fashion-MNIST is a set of common 28 × 28 grayscale images; it is more complex than MNIST and can thus better represent the actual performance of the method.

3. Results and Analysis

3.1. Optical Imaging System

As shown in Figure 6, a green laser with a wavelength of 532 nm, modulated by a half-wave plate (Edmund, HWP, #49-210), was used in combination with a spatial light modulator; adjusting the polarization state of the incident light improved its utilization. The light was then collimated and expanded by a lens pair (fL1 = 35 mm, fL2 = 400 mm) to serve as the illumination source. The spatial light modulator (SLM) (Holoeye Pluto-2-VIS-016, pixel size 8 μm) loads the virtual target image into the optical path. The laser beam carrying the target information was modulated using two lenses (fL3 = 250 mm, fL4 = 100 mm; diaphragm aperture 8 mm) and was incident on a static scattering medium (Edmund, frosted glass, 220 and 600 mesh). The speckle image formed after exiting was received by the detector (PYTHON1300 CMOS camera), with a distance (d) of 50 mm between the target and the scattering medium. Autocorrelation operations were performed on the acquired speckle images to form the training and test sets of the model.
Meanwhile, to address the robustness of deep neural networks, 220- and 600-mesh frosted-glass plates with different scattering characteristics were chosen as scattering media, and the experimental optical path was used to obtain the corresponding scattering images of the target. Virtual objects composed of basic elements such as circles, triangles, and squares, together with their rotational variations and random positions, were used as the initial data for neural network training. The basic-element dataset serves as a simple target dataset because it lacks fine detail. The Fashion-MNIST dataset was used as an unknown complex target hidden behind the scattering medium, as it is composed of common everyday clothing targets with more detailed features. Figure 7 shows the basic geometric targets and complex targets loaded through the SLM; the received speckle images, which are indistinguishable by the human eye; and the structured results obtained after performing the autocorrelation operation. Moreover, as Figure 7 shows, the autocorrelation results of the same target under different scattering media have structures similar to their ground-truth autocorrelation, differing only in how they are affected by noise. Autocorrelation can thus be used as prior knowledge to facilitate the optimization of neural networks and to improve generalization.

3.2. Model Training

The experiment uses 6000 basic-element image sets collected with the above optical path, pre-processed by autocorrelation to obtain the training set of the model. The simultaneously collected Fashion-MNIST data were used as the test set, totalling 2000 sets. The speckle recovery algorithms designed in this paper were run on a computing platform based on the PyTorch deep-learning framework with a CPU i7-8700 and a GPU RTX 2080Ti, accelerated by PyTorch 1.9 and CUDA 10.1. The number of training epochs was set to 100, and the loss curves during model training are shown in Figure 8.
From Figure 8, it can be seen that after 40 iterations, the loss curves of the training and validation sets tend to converge.

3.3. Experimental Results

To prove that autocorrelation recovery of complex target speckle images can be achieved by training deep-learning models on constructed basic-element datasets, thereby overcoming limitations on target shape and class, the following procedure was followed. First, the basic-element data and the Fashion-MNIST data collected under 220-mesh frosted glass were used as the training and test sets, respectively, to validate the experimental idea. The experimental results are shown in Figure 9.
The results show that after the low-signal-to-noise-ratio speckle autocorrelation is recovered by the model, the autocorrelation images closely match their ground-truth counterparts with a high signal-to-noise ratio. High-resolution reconstruction of the target image is then achieved by combining the traditional phase-recovery algorithm. However, during the autocorrelation process, the model lacks the position information of the target, and phase recovery uses only the target amplitude information. Therefore, the phase-recovery result does not reflect the position of the target. This result also verifies that complex target autocorrelation can be recovered when the basic-element training set is used.
Based on these experimental results, the effect of autocorrelation recovery can be further verified under the condition of frosted-glass plates with different meshes as scattering media. The experimental design included crossover experiments to verify the robustness and feasibility of the model. First, the experiment was designed to use the experimental data collected from 220-mesh frosted glass as the training set to verify its recovery results for the data collected from 600-mesh frosted glass. The experimental results are shown in Figure 10:
With the data collected using low-mesh frosted glass as the training set, the test results on data from high-mesh frosted glass maintain a high signal-to-noise ratio, with a PSNR of 37.28 dB and an SSIM of 0.98. The data collected from high-mesh frosted glass were then used as the training set to verify recovery on the experimental data collected from low-mesh frosted glass. The results are shown in Figure 11.
Based on the phase recovery results, the target can still be recovered using different meshes. However, affected by the different characteristics of scattering media, its autocorrelation recovery results have a PSNR of 34.65 dB and SSIM of 0.94 in the validation from high mesh to low mesh. Both values decreased, but the model is still able to reconstruct clear targets using the phase recovery algorithm.
In conclusion, the experimental results indicate that by combining deep learning with traditional methods and incorporating prior physical knowledge in the autocorrelation, high-resolution reconstruction of targets can be achieved under the influence of different noise terms, providing a new approach to reconstructing speckle images of targets in complex environments.

3.4. Reconstruction Quality Analysis and Comparative Experiment

To better characterize the performance of the proposed deep-learning method in autocorrelation image recovery, the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mutual information (MI) are used as quality metrics for the autocorrelation images recovered by deep learning.
The PSNR is expressed as:
$$\mathrm{PSNR}(x,y) = 10 \log_{10} \frac{MAX^2}{\frac{1}{m \times n} \sum_{i=1}^{m \times n} (x_i - y_i)^2},$$
where $m \times n$ is the number of pixels in the image, $x_i$ is the value of each pixel of the reconstructed image, $y_i$ is the value of each pixel of the reference image, and $MAX$ is the maximum pixel value of the image. The SSIM is expressed as:
$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)},$$
where $\mu_x$ and $\mu_y$ are the mean pixel values of the reconstructed and reference images, respectively; $\sigma_x$ and $\sigma_y$ are their standard deviations; $\sigma_{xy}$ is their covariance; and $c_1$ and $c_2$ are two small positive constants that avoid division by zero. The MI is expressed as:
$$\mathrm{MI}(x,y) = \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} h_{xy}(i,j) \log_2 \frac{h_{xy}(i,j)}{h_x(i)\, h_y(j)},$$
MI reflects the amount of information shared by the two images, where $h_x(i)$ and $h_y(j)$ represent the normalized histograms of $x$ and $y$, and $h_{xy}(i,j)$ represents the joint histogram of the two images.
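The three metrics can be computed directly from their definitions. A numpy sketch follows; the SSIM here is the global, single-window form given above, and the `bins` parameter for MI is an assumed discretization:

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window (global) SSIM, directly from the formula above."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def mutual_info(x, y, bins=32):
    """Mutual information from normalized marginal and joint histograms."""
    hxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hxy / hxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                  # skip empty bins (0 log 0 = 0)
    return float(np.sum(pxy[nz] *
                        np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

x = np.tile(np.arange(16, dtype=float), (16, 1))  # toy image, 16 gray levels
```

For identical inputs, SSIM evaluates to 1 and MI reduces to the entropy of the image, which is a quick sanity check on any implementation.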
Combined with the evaluation metrics, the effectiveness of this paper’s method is first verified, while the results are analyzed and compared with those of mainstream target image recovery methods. The results are shown in Table 1 below.
Analysis of the data in the table shows that the method in this paper performs well in the recovery of autocorrelated images, as indicated by the values marked in bold. The results of MI evaluation show that the method effectively recovers the target information.

4. Discussion

The following conclusions can be drawn from the reconstruction results presented in Section 3. The prior physical knowledge of autocorrelation is used as a constraint because scattered autocorrelation has distinct structural characteristics. Unconstrained models can achieve some robustness to noise, but at the cost of longer training times, a higher risk of overfitting, and a dependence on large training datasets. By contrast, robustness to noise in speckle imaging can be improved, and the limitation of the data domain overcome, by constructing basic-element datasets to recover complex targets. For cases involving low signal-to-noise ratios, autocorrelation-enhanced recovery is performed first by a deep-learning model. The generator was built from U-Net and ResNet components and included five skip layers to fuse high-level information with low-level features. The discriminator was a five-layer convolutional neural network. To further enhance the robustness of the model, basic elements were used in the training set because a complex target is geometrically derived from simple targets; this achieves data decoupling and yields high-signal-to-noise-ratio autocorrelation outputs for complex target speckle images. Target reconstruction is then achieved in conjunction with traditional phase recovery.

5. Conclusions

In the case of target speckle images affected by noise, introducing high signal-to-noise autocorrelation as an intermediate optimization target can improve the network learning process, learning efficiency, and robustness to noise. Particularly for speckle autocorrelation severely affected by noise, the experimental results reveal that high-quality autocorrelation information helps to improve the reconstruction achieved during phase recovery. Therefore, improving the signal-to-noise ratio of speckle autocorrelation is a crucial physical constraint in overcoming the noise terms of scattering imaging. With this physical constraint, a deep-learning method based on constructed basic-element datasets is introduced with the aim of enhancing speckle autocorrelation at low signal-to-noise ratios. This approach elevated the average peak signal-to-noise ratio to 37.28 dB and achieved a structural similarity of 0.99. Importantly, the method achieves higher-quality physical-knowledge-based reconstruction than the end-to-end method. In addition, the high signal-to-noise autocorrelation helps to improve reconstruction efficiency and overcomes the limitation of the target data domain. Moreover, extended imaging through frosted-glass plates with different scattering properties was achieved, providing a basis for further practical applications.

Author Contributions

Conceptualization, C.W. and J.Z.; methodology, C.W.; software, C.W.; validation, C.W., S.Y. and Y.Y.; investigation, C.W.; data curation, W.L. and H.Z.; writing—original draft preparation, C.W.; writing—review and editing, S.Y.; supervision, J.X.; project administration, J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Technology Innovation 2025 Major Project (2020Z019, 2021Z063) of Ningbo and the Natural Science Foundation of Zhejiang (LQ23F050002).

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Acknowledgments

We thank Hanfeng Feng and Zhiyu Wang for technical supports and experimental discussion. We thank the reviewers for useful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Principle of scatter imaging.
Figure 2. Flow of the phase-recovery algorithm: (a) phase-recovery algorithm flow and (b) the ground truth. DFT denotes the discrete Fourier transform, PI the phase information, and IDFT the inverse discrete Fourier transform.
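For readers who want to follow the DFT → magnitude replacement → IDFT loop in Figure 2a, it can be sketched as a Fienup-style error-reduction iteration. This is an illustrative sketch under our own naming (`error_reduction`, the non-negativity constraint), not the paper's exact phase-recovery implementation:

```python
import numpy as np

def error_reduction(fourier_mag, n_iter=200, seed=0):
    """Fienup-style error-reduction phase retrieval (illustrative sketch).

    fourier_mag is the known Fourier magnitude of the target; by the
    Wiener-Khinchin theorem it is the square root of the DFT of the
    target's autocorrelation. Returns a real, non-negative estimate.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, fourier_mag.shape)  # random initial PI
    g = np.fft.ifft2(fourier_mag * np.exp(1j * phase)).real
    for _ in range(n_iter):
        G = np.fft.fft2(g)                          # DFT
        G = fourier_mag * np.exp(1j * np.angle(G))  # enforce measured magnitude, keep PI
        g = np.fft.ifft2(G).real                    # IDFT
        g[g < 0] = 0                                # object-domain constraint: non-negativity
    return g
```

In practice the loop is usually restarted from several random phase guesses and the best reconstruction kept, since error reduction can stall in local minima.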
Figure 3. A-eGAN algorithm flow: (a) ground truth; (b) speckle autocorrelation image; (c) high signal-to-noise autocorrelation image output by the deep-learning model; and (d) reconstruction result of phase recovery.
Figure 4. Deep-learning model: (a) generative network; (b) discriminative network; (c) self-attention mechanism.
Figure 5. Relationship between complex targets and basic elements.
Figure 6. Experimental optical path design: (a) simulated optical path design and (b) actual data-acquisition optical path.
Figure 7. Experimental data and autocorrelation reprocessing results. GT is the ground truth of the target, Speckle the corresponding speckle image, GT_corr the ground-truth autocorrelation, 220Spec_corr the speckle autocorrelation under 220-mesh frosted glass, and 600Spec_corr the speckle autocorrelation under 600-mesh frosted glass.
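The Spec_corr images above are conventionally obtained via the Wiener–Khinchin theorem (autocorrelation as the inverse FFT of the power spectrum) rather than a direct sliding-window correlation. A minimal sketch under that assumption (`speckle_autocorrelation` is our own name):

```python
import numpy as np

def speckle_autocorrelation(img):
    """Normalized autocorrelation of a speckle image via the
    Wiener-Khinchin theorem: autocorr = IFFT(|FFT(img)|^2)."""
    img = img.astype(np.float64) - img.mean()  # remove the DC pedestal
    power = np.abs(np.fft.fft2(img)) ** 2      # power spectrum
    ac = np.fft.ifft2(power).real              # circular autocorrelation
    ac = np.fft.fftshift(ac)                   # move the zero-lag peak to the center
    return ac / ac.max()                       # normalize to a unit peak
```

Removing the mean before transforming suppresses the large constant background term that would otherwise dominate the zero-lag peak.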
Figure 8. Loss-function curve.
Figure 9. Test results under 220-mesh frosted glass. GT is the ground truth of the target, GT_corr the ground-truth autocorrelation, Spec_corr the speckle autocorrelation under 220-mesh frosted glass, Corr_rec the deep-model prediction, Corr_Cont Curve the comparison between the deep-model prediction and the ground-truth autocorrelation, and Phase_rec the phase-recovery reconstruction.
Figure 10. Results obtained using 220-mesh frosted-glass collection data as the training set and testing on 600-mesh frosted-glass collection data.
Figure 11. Results obtained using 600-mesh frosted-glass collection data as the training set and testing on 220-mesh frosted-glass collection data.
Table 1. Test results for complex object datasets and comparison of related methods (PSNR, SSIM, and MI).
Method    PSNR/dB    SSIM    MI
U-Net     23.62      0.73    0.76
HNN       25.48      0.81    0.91
Ours      35.22      0.95    1.21
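The PSNR column of Table 1 can be reproduced with a few lines of NumPy. The helper below (`psnr`, with a `data_range` parameter for images scaled to [0, 1]) is our own sketch; SSIM and MI would normally be computed with a library such as scikit-image rather than hand-rolled code:

```python
import numpy as np

def psnr(gt, rec, data_range=1.0):
    """Peak signal-to-noise ratio in dB between ground truth and reconstruction."""
    mse = np.mean((gt.astype(np.float64) - rec.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```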
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wang, C.; Zhuang, J.; Ye, S.; Liu, W.; Yuan, Y.; Zhang, H.; Xiao, J. Data-Decoupled Scattering Imaging Method Based on Autocorrelation Enhancement. Appl. Sci. 2023, 13, 2394. https://doi.org/10.3390/app13042394


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
