Communication

Phase Imaging through Scattering Media Using Incoherent Light Source

Huichuan Lin, Cheng Huang, Zhimin He, Jun Zeng, Fuchang Chen, Chaoqun Yu, Yan Li, Yongtao Zhang, Huanting Chen and Jixiong Pu
1 College of Physics and Information Engineering, Minnan Normal University, Zhangzhou 363000, China
2 Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science & Engineering, Huaqiao University, Xiamen 361021, China
* Author to whom correspondence should be addressed.
Photonics 2023, 10(7), 792; https://doi.org/10.3390/photonics10070792
Submission received: 11 June 2023 / Revised: 5 July 2023 / Accepted: 6 July 2023 / Published: 9 July 2023
(This article belongs to the Special Issue Nonlinear Optics and Hyperspectral Polarization Imaging)

Abstract

Phase imaging normally employs a coherent light source, while an incoherent light source is not preferred due to its random wavefront. Another challenge for practical phase imaging is imaging through scattering media, which scatter photons in a random manner and produce severely distorted speckle images. Based on a convolutional neural network (CNN), this paper presents an approach for phase imaging through scattering media using an incoherent light source. A CNN was trained and utilized to reconstruct the target images from the captured speckle images. Similarities of over 90% between the reconstructed images and their target images were achieved. It is concluded that an incoherent light source can be used as an illumination source for phase imaging through scattering media with the assistance of deep learning. This phase imaging approach with an incoherent light source through scattering media can be used to record the refractive indices of transparent samples, which might lead to its application in biomedical imaging.

1. Introduction

The amplitude and phase of a light beam are excellent information carriers in optical imaging. Optical imaging that records amplitude is referred to as bright-field imaging, while imaging that records phase is called phase imaging [1,2]. In biological imaging and astronomical observation, phase imaging is preferable to bright-field imaging. In label-free biomedical tomography, for example, phase imaging can be used to obtain refractive index distributions by recording the phase of the image light wave, thereby realizing fast, non-destructive, high-resolution imaging of transparent samples [3,4]. Both coherent and incoherent light can be used as illumination sources for bright-field imaging, as their amplitudes can be readily modulated. Phase modulation of the light source is essential for phase imaging. However, it is a great challenge to modulate the phase of an incoherent beam due to its random wavefront, which accounts for the absence of incoherent light sources in phase imaging. Coherent light sources are currently adopted for phase imaging, but they are not considered ideal because of the speckle effect that results from their coherence [5]. Therefore, to conduct phase imaging while avoiding the speckle effect induced by coherent light sources, the exploration of phase imaging with an incoherent light source is a worthwhile endeavor.
Imaging through scattering media is another inevitable obstacle in biological imaging and astronomical observation. Biological tissues or fog that obscure the target scramble its spatial information into random diffusion [6,7,8]. This topic has drawn significant research interest, as its solutions would benefit a multitude of applications such as deep-tissue imaging [9,10], underwater imaging [11,12], imaging through fog [13,14], etc. However, the complexity of random transmission caused by perturbations in scattering media still makes it challenging. A number of approaches, such as point spread functions [15,16], speckle correlation [17], transmission matrices [18], wavefront shaping [19] and deep-learning-based methods [20,21], have been proposed in recent years for removing the effects of scattering media and achieving high-quality images. Among these methods, deep-learning-based approaches, with the advantages of higher accuracy, better robustness, faster processing and lower hardware requirements, have shown great potential in retrieving the original information from images severely distorted by scattering media. A variety of deep-learning networks have been proposed for the reconstruction of images degraded by scattering media. Li et al. reconstructed images through unseen diffusers by combining a CNN with speckle correlation, and their work was later extended to restore targets hidden behind scattering media at unknown locations [22,23]. Similar work was reported by Rahmani and collaborators, in which a CNN was utilized to reconstruct amplitude and phase images from the speckles transmitted through a multimode fiber [24]. Lai et al. proposed the reconstruction of images of two adjacent objects through a scattering medium via deep learning [25]. Based on a deep-learning neural network, Qiao and collaborators realized real-time X-ray phase-contrast imaging through random media [26]. In addition to the reconstruction of 2D images, deep learning has been demonstrated to produce a 3D phase image from a single shot [27]. Using a neural network, Zheng et al. realized incoherent bright-field imaging through highly nonstatic and optically thick turbid media [28]. In general, deep learning provides a powerful tool for the reconstruction of images through unseen diffusers. Realizing phase imaging with an incoherent light source while eliminating the influence of scattering media would thus be valuable for biological imaging and astronomical observation. Therefore, in this paper we investigate this issue and propose a phase imaging approach that uses an incoherent light source through scattering media with the aid of deep learning.

2. Experimental Setup

As shown in Figure 1, an optical imaging platform was set up for the study of phase imaging with an incoherent light source through scattering media. A quasi-monochromatic red light-emitting diode (LED: CREE 3W) was utilized as the incoherent illumination source; its central wavelength and line width were 630 nm and 20 nm, respectively. The divergent incoherent beam emitted by the LED passed through a biconvex lens (L1: focal length f = 150 mm, clear aperture Φ = 40 mm) and a diaphragm (D: clear aperture Φ = 10 mm), turning it into an approximately paraxial beam. In order to improve the modulation effectiveness of the phase-only spatial light modulator (SLM: UPO-labs), a horizontal polarizer (P) that converted the paraxial beam into a horizontally polarized beam was placed behind the diaphragm. A beam splitter (BS) was inserted between the horizontal polarizer and the SLM, splitting the horizontally polarized beam into two portions. One portion was transmitted through the beam splitter and arrived at the SLM. As indicated in Figure 1, the distance over which the illumination light propagated from the LED to the SLM was z = d1 + d2. According to the van Cittert–Zernike theorem, the spatial coherence of quasi-monochromatic light radiating from an incoherent source can be expressed in the paraxial approximation as follows [29]:
\mu(\mathbf{r}_1, \mathbf{r}_2, z) = \exp\!\left(\frac{i2\pi\left(r_1^2 - r_2^2\right)}{2\lambda z}\right) \times \frac{\iint_S I(x, y)\,\exp\!\left[\frac{i2\pi}{\lambda z}\left((x_2 - x_1)x + (y_2 - y_1)y\right)\right]\mathrm{d}x\,\mathrm{d}y}{\iint_S I(x, y)\,\mathrm{d}x\,\mathrm{d}y} \qquad (1)
In Equation (1), λ is the central wavelength of the quasi-monochromatic light, I(x, y) is the intensity distribution in the source plane, and z is the propagation distance from the light source to the observation plane. r1 = x1ex + y1ey and r2 = x2ex + y2ey, where ex and ey are unit vectors, represent the spatial coordinates of two observation points in the observation plane. Equation (1) indicates that the spatial coherence of the beam is a function of the propagation distance and the intensity distribution of the light source. Once the propagation distance is nonzero, the beam changes from incoherent light to partially coherent light. For the beam incident on and modulated by the SLM, the observation plane is located at the position of the SLM and the propagation distance is the distance from the LED to the SLM, that is, z = d1 + d2. It can therefore be deduced that the beam at the position of the SLM is actually a partially coherent beam, although the original beam radiating from the LED illumination source is incoherent. In other words, the beam changes from incoherent light to partially coherent light during transmission.
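For readers who want a feel for how weak this residual coherence is, the following minimal numerical sketch evaluates the magnitude of Equation (1) for an assumed uniform circular emitting area; the source diameter used here is an illustrative placeholder rather than the actual CREE LED chip size, so the printed value is not expected to match the measured contrasts.

import numpy as np

# Minimal sketch of Eq. (1): magnitude of the degree of coherence between two
# observation points separated by (dx, dy), for an assumed uniform circular
# source. The source size is a placeholder; wavelength and z follow the text.
wavelength = 630e-9        # central wavelength of the LED (m)
source_diameter = 1.0e-3   # assumed emitting-area diameter (m), illustrative only
z = 1.10                   # propagation distance d1 + d2 = 110 cm (m)

n = 256
coords = np.linspace(-source_diameter / 2, source_diameter / 2, n)
X, Y = np.meshgrid(coords, coords)
I = (X**2 + Y**2 <= (source_diameter / 2)**2).astype(float)   # uniform disk

def coherence_magnitude(dx, dy):
    # |mu| from Eq. (1); the leading quadratic phase factor has unit modulus
    # and therefore drops out of the magnitude.
    kernel = np.exp(1j * 2 * np.pi * (dx * X + dy * Y) / (wavelength * z))
    return abs(np.sum(I * kernel) / np.sum(I))

# Degree of coherence over the 0.3 mm slit spacing used in the measurement below.
print(coherence_magnitude(0.3e-3, 0.0))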
Figure 1. Optical experimental setup of incoherent phase imaging through scattering media, and the spectral characteristics of the red LED. P: polarizer, BS: beam splitter, SLM: spatial light modulator, L1 and L2: biconvex lens, SM: scattering media, and D: diaphragm. Inset (a): distribution of two-slit interference fringe when the propagation distance is 110 cm, and inset (b): distribution of two-slit interference fringe when the propagation distance is 120 cm.
To verify the conclusion of the above theoretical analysis, the coherence of the beam at the position of the SLM, before modulation by the SLM, was measured experimentally. The fringe contrast obtained from a two-slit interference experiment reflects the coherence of a beam. For the coherence measurement, two slits of 0.11 mm slit width and 0.3 mm slit spacing took the place of the SLM, and a CCD camera behind the two slits captured the interference fringe patterns, as illustrated in the bottom right corner of Figure 1. When d1 and d2 were set to 10 cm and 100 cm, i.e., a propagation distance of 110 cm, the resultant two-slit interference fringe is shown in inset (a) of Figure 1. With d1 = 20 cm and d2 = 100 cm, the fringe is shown in inset (b) of Figure 1. In both insets, Imax refers to the maximum gray level of the interference fringe, and Imin refers to the minimum. Substituting Imax and Imin into Equation (2) below, the contrast of the interference fringes (i.e., the value of K) can be obtained.
K = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}} \qquad (2)
According to Equation (2), the contrasts of the interference fringes were calculated to be 0.05 and 0.11 for d1 values of 10 cm and 20 cm, respectively, as labeled in the bottom left corners of insets (a) and (b) in Figure 1. It is well known that K = 1 indicates that the incident beam is completely coherent, whereas K = 0 corresponds to an incoherent beam. The measured fringe contrasts lie between 0 and 1, and thus the beams incident on the two slits were partially coherent. Therefore, in the subsequent phase imaging experiments, the partially coherent beams incident on the SLM were phase-modulated by it and reflected back to the beam splitter.
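As a small illustration of how Equation (2) is applied to a captured fringe image, the sketch below reads an 8-bit grayscale image (the filename is a placeholder), averages a few rows around the fringe center to suppress noise, and computes K from the extrema of the resulting cross-section; this is an assumption-laden example, not the exact processing used for insets (a) and (b).

import numpy as np
from PIL import Image

# Estimate the fringe contrast K = (Imax - Imin) / (Imax + Imin) from a
# two-slit interference image. "fringe.png" is a hypothetical filename.
fringe = np.asarray(Image.open("fringe.png").convert("L"), dtype=float)

# Average several rows around the fringe center to reduce camera noise,
# then take the maximum and minimum gray levels of the cross-section.
row = fringe.shape[0] // 2
profile = fringe[row - 5:row + 5, :].mean(axis=0)

i_max, i_min = profile.max(), profile.min()
K = (i_max - i_min) / (i_max + i_min)
print(f"fringe contrast K = {K:.2f}")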
Theoretical analysis and practical measurements showed that the beams incident on the SLM were partially coherent, with measured coherence contrasts of 0.05 and 0.11, both close to zero. It was therefore essential to test whether such partially coherent beams could be effectively modulated by the SLM and used for phase imaging. The vortex phase illustrated in the top right corners of insets (a) and (b) in Figure 1 was loaded onto the SLM to modulate the partially coherent beams, and the modulated beams were reflected back to the beam splitter. The beam splitter reflected the beams perpendicularly into a CCD camera (AVT PIKE F-421B, 2048 × 2048 pixels, pixel pitch = 7.5 μm) through another biconvex lens (L2: focal length f = 100 mm; clear aperture Φ = 40 mm). The images captured by the CCD camera for the different d1 spacings are shown in the bottom right corners of insets (a) and (b) in Figure 1; they indicate the intensity distributions of the partially coherent beams phase-modulated by the SLM. The dark-gray circular patterns in the centers of these images indicate that both partially coherent beams were successfully modulated by the SLM. The circular pattern in inset (b) is darker than that in inset (a), which suggests that the beam with the higher coherence (inset (b)) was modulated more effectively.
After the preparatory work described above, the experiments on phase imaging with an incoherent light source through a scattering medium were carried out by inserting a scattering medium (SM: ground glass, THORLABS DG100X100-120, 100 mm × 100 mm N-BK7 ground-glass diffuser, 120 grit) between BS and L2, as shown in the experimental setup in Figure 1. Once a beam is modulated by the SLM, it carries the specific phase information that the SLM adds to it. The modulated beams reflected by the BS entered the scattering medium and were scattered into random speckles, and the phase information that they carried was scrambled by the random diffusion. The random speckles produced by the SM were focused by lens L2 and imaged by the CCD. Figure 2 shows a phase image loaded onto the SLM to modulate the partially coherent beam and the corresponding speckle pattern captured by the CCD. No trace of the phase image can be discerned from the speckle pattern due to the decorrelation caused by perturbations in the scattering medium.
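To illustrate qualitatively why the speckle pattern retains no visible trace of the phase image, the toy forward model below propagates a fully coherent field carrying a simple phase object through a random phase screen standing in for the ground-glass diffuser; it is a deliberately simplified, coherent approximation of the actual partially coherent experiment, not the physical model of the setup in Figure 1.

import numpy as np

# Toy forward model: phase object -> random diffuser -> far-field intensity.
# This coherent, single-wavelength simulation is only a qualitative stand-in
# for the partially coherent LED illumination and lens imaging in Figure 1.
rng = np.random.default_rng(seed=0)

N = 256
phase_object = np.zeros((N, N))
phase_object[96:160, 120:136] = np.pi                    # a simple bar-shaped phase "digit"

diffuser = np.exp(1j * 2 * np.pi * rng.random((N, N)))   # random phase screen
field = np.exp(1j * phase_object) * diffuser

# Intensity in the lens focal plane: a speckle pattern in which the bar-shaped
# phase object is no longer recognizable.
speckle = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
speckle /= speckle.max()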

3. Data Acquisition and Processing

As mentioned in the Introduction, deep-learning neural-network-based methods have been proposed to reconstruct images affected by scatterers or diffusers. Given its higher accuracy, better robustness, faster processing, greater flexibility and better generalization ability, a method based on the convolutional neural network (CNN) was selected from a number of candidates for the goal of this paper. A CNN is composed of convolutional layers, pooling layers and up-sampling layers, and its training can be either unsupervised or supervised. Unsupervised learning involves learning from sample data without labels or categories in order to discover the structural knowledge of the sample data, while supervised learning involves learning a function or model from a given training data set, which can then be employed to predict the result for new data [30]. Supervised learning is clearly more suitable for reconstructing phase information from speckle images. A supervised-learning CNN with a U-net architecture was adopted in this study; its frame diagram is shown in Figure 3. The U-net consists of an encoder and a decoder, which form a contraction path and an expansion path, respectively. The decoder is connected to the corresponding feature layers of the encoder, and information on different spatial scales is transmitted through these connections to retain the high-frequency information. The supervised-learning U-net requires a data set consisting of sample data with labels for network training. Images of handwritten digits from the MNIST dataset and hand-drawn sketches from the Google Quick Draw dataset were displayed on the SLM as the phase images to modulate the incident partially coherent beams [31]. They were resized to 512 × 512 pixels and then loaded onto the SLM. The grayscale speckle images captured by the CCD were also 512 × 512 pixels in size. A compact sketch of a U-net of this kind is given below.
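The following Keras sketch shows a U-net of the kind described above, mapping a 256 × 256 speckle image to a reconstructed phase image. The depth, channel counts and activation choices are illustrative assumptions, since the exact architecture parameters of the network used in this work are not reproduced here.

import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, channels):
    # Two 3x3 convolutions with ReLU, as in a standard U-net stage.
    x = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    return x

def build_unet(size=256, base=32):
    # Encoder (contraction path), bottleneck and decoder (expansion path),
    # with skip connections passing multi-scale features to the decoder.
    inputs = layers.Input((size, size, 1))

    c1 = conv_block(inputs, base)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, base * 2)
    p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, base * 4)
    p3 = layers.MaxPooling2D()(c3)

    b = conv_block(p3, base * 8)

    u3 = layers.Concatenate()([layers.UpSampling2D()(b), c3])
    c4 = conv_block(u3, base * 4)
    u2 = layers.Concatenate()([layers.UpSampling2D()(c4), c2])
    c5 = conv_block(u2, base * 2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c5), c1])
    c6 = conv_block(u1, base)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)
    return Model(inputs, outputs)

model = build_unet()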
The transmission distance z was first set to 110 cm, for which the coherence contrast K of the partially coherent beam incident on the SLM was 0.05. In total, 7000 images from the datasets were loaded onto the SLM as phase images and 7000 speckle images were captured by the CCD. The phase images and their corresponding speckle images were paired to form an input–output dataset of 7000 data pairs. These pairs were randomly split into a training set for training the network and a test set for evaluating network performance, in a ratio of 6 to 1. In order to improve training efficiency and reduce training time, all the images in the training set were resized from 512 × 512 pixels to 256 × 256 pixels. The U-net was then trained by feeding the input–output data pairs of the training set into the network and optimized with the Adam (adaptive moment estimation) optimizer, which was employed to minimize the loss function [32]. The training program was implemented in the Keras/TensorFlow framework and accelerated with a GPU (NVIDIA RTX 2080 SUPER). The structural similarity index (SSIM) was used to quantify the similarity of each reconstructed image to its corresponding phase image [33]. A sketch of the training and evaluation procedure follows.
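A corresponding training and evaluation sketch is given below, reusing the model from the previous sketch; the file names, loss function, learning rate and batch size are assumptions made for illustration, and tf.image.ssim serves as the SSIM measure mentioned above.

import numpy as np
import tensorflow as tf

# "speckles.npy" and "phases.npy" are placeholder files holding the paired
# 256 x 256 speckle images (network input) and phase images (target),
# scaled to [0, 1]; the acquisition and resizing steps are omitted here.
speckles = np.load("speckles.npy")[..., np.newaxis].astype("float32")
phases = np.load("phases.npy")[..., np.newaxis].astype("float32")

# Random 6:1 split into training and test sets, as described in the text.
idx = np.random.permutation(len(speckles))
n_train = len(idx) * 6 // 7
train, test = idx[:n_train], idx[n_train:]

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # rate assumed
              loss="mean_absolute_error")                              # loss assumed
model.fit(speckles[train], phases[train], epochs=12, batch_size=8,
          validation_data=(speckles[test], phases[test]))

# Quantify reconstruction fidelity with SSIM between prediction and target.
pred = model.predict(speckles[test])
ssim = tf.image.ssim(tf.constant(pred), tf.constant(phases[test]), max_val=1.0)
print("mean SSIM on test set:", float(tf.reduce_mean(ssim)))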
The handwritten digits reconstructed by the U-net when K = 0.05 are shown in Figure 4a, and the reconstruction results for the hand-drawn sketches are shown in Figure 4b. The values at the top of the reconstructed images are the SSIM indexes that indicate the similarity of each reconstructed image to its corresponding phase image. Comparisons between the phase images and the reconstructed images demonstrate that the U-net restored the phase images from the speckle images effectively. The features of the handwritten digits and the hand drawings, particularly their edges, were retrieved from the speckle images and rendered accurately in the reconstructed images. The SSIM values of the reconstructed images indicate that structural similarities of about 0.9 for the handwritten digits and over 0.7 for the hand drawings were achieved in the reconstructions with the U-net.
An epoch in the network training process refers to a single pass through the entire training set; the number of epochs is used to determine when to stop training. After each epoch, the structural similarity for the U-net trained with handwritten digits and that for the U-net trained with hand drawings were calculated, and they are plotted in Figure 5. The training converged and the structural similarities were almost at their maxima by epoch twelve. When the epoch number was less than eight, the red line was above the black line, whereas the situation reversed when it exceeded eight. That is, the U-net performed better at reconstructing the hand drawings when it was undertrained, but when it was sufficiently trained, it showed better performance in reconstructing the handwritten digits. This is because the relatively richer graphical information of the hand drawings makes their features more recognizable to the network in the early epochs, while also making them more difficult to reconstruct completely in the final epochs. The results in Figure 4 and Figure 5 demonstrate that a partially coherent beam with very low coherence (K = 0.05) derived from an incoherent LED source can be used for phase imaging through scattering media.
As indicated by the coherence measurements described previously, the coherence of the partially coherent beam emitted by the LED could be increased by increasing the propagation distance d1. The coherence contrast K rose from 0.05 to 0.11 when d1 was set to 20 cm. Figure 6 shows the reconstruction of the phase images through scattering media by the U-net when K was 0.11. Comparing the SSIM indexes in Figure 6 with the corresponding results for K = 0.05 in Figure 4 and Figure 5 shows that the structural similarities of the reconstructions of the handwritten digits did not increase with the increase in the coherence of the modulated beam. For the hand drawings, however, they were significantly improved, and an average structural similarity of 0.84 was achieved.
The SSIM indexes for the reconstruction of the handwritten digits and of the hand drawings are plotted as functions of the epoch number in Figure 7 for a coherence contrast of 0.11. The curves show a trend similar to that in Figure 5: the SSIM rose gradually with increasing epoch number until it reached its maximum when the training converged, and the reconstruction performance of the U-net for the handwritten digits was slightly better than that for the hand drawings. An obvious difference from Figure 5 is that the red line lies very close to the black line. Together with the discussion in the previous paragraph, it can be concluded that the improvement in the coherence of the partially coherent beam helps the U-net to reconstruct complex phase images more accurately. However, the U-net can reconstruct simple phase images through scattering media accurately even if the coherence of the modulated beam is very low, for example, K = 0.05. To further clarify the relationship between reconstruction accuracy and the value of K, the maximum SSIM is plotted as a function of K in Figure 8. A similar conclusion can be drawn from these curves: the U-net is capable of accurately reconstructing simple phase images through scattering media even with a modulated beam of very low coherence, whereas for complex phase images, increasing the beam coherence improves the reconstruction accuracy.
In order to further verify the feasibility of phase imaging with an incoherent light source through scattering media, a hybrid training method was utilized, in which a training set consisting of both handwritten digits and hand drawings was employed to train the U-net. The performance of the network was then evaluated through the reconstruction of the images in the test set, and the results are illustrated in Figure 9. In the case of K = 0.05, the U-net could accurately reconstruct the hand-drawn images but made errors in the reconstruction of the handwritten digits. When K = 0.11, both the handwritten digits and the hand drawings were accurately reconstructed from the speckle images, and a final SSIM of over 0.8 was obtained.
Figure 10 illustrates the structural similarity indexes of the U-net during the hybrid training process. A significant improvement of over 30% in network performance was achieved when the partially coherent beam incident on the SLM had the higher coherence. In the case of K = 0.11, although the coherence was still low, the U-net trained through hybrid training accurately reconstructed both the handwritten digits and the hand drawings, with a final structural similarity of 0.8. These results demonstrate that phase imaging through a scattering medium can be achieved with an incoherent light source simply by controlling the transmission distance between the incoherent light source and the target.

4. Conclusions

In summary, we have experimentally demonstrated that phase imaging through scattering media with an incoherent light source can be realized with the aid of a CNN. Quantitative analyses of the structural similarities between the original and reconstructed phase images were performed to indicate the fidelity of phase image reconstruction and to evaluate the performance of the network. The results of our study suggest that incoherent light sources can be used as illumination sources for phase imaging through scattering media. As the incoherent beam radiated by an incoherent light source propagates to the position of phase modulation, it automatically converts from an incoherent beam into a partially coherent one. The random wavefront of the dominant incoherent part of the beam does not prevent successful phase modulation, and the CNN reconstructs the phase images from the speckle images produced by the scattering media. This phase imaging approach with an incoherent light source through scattering media can be used to record the refractive indices of transparent samples, which might lead to its application in biomedical imaging.

Author Contributions

Conceptualization, H.L.; methodology, H.L. and Z.H.; software, F.C. and C.H.; validation, Y.L., C.Y., H.C. and Y.Z.; data curation, H.L. and Z.H.; writing—original draft preparation, H.L.; writing—review and editing, J.Z. and J.P.; funding acquisition, H.L., H.C. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (NSFC), grant numbers 61975072 and 12174173, by Natural Science Foundation of Fujian province, grant numbers 2022H0023, 2022J02047 and 2022G02006, and by the Young Core Instructor from the Education Department of Fujian Province, grant numbers JT180295 and JAT190393.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, K.; Kim, K.; Jung, J.; Heo, J.; Cho, S.; Lee, S.; Chang, G.; Jo, Y.; Park, H.; Park, Y. Quantitative phase imaging techniques for the study of cell pathophysiology: From principles to applications. Sensors 2013, 13, 4170–4191. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Mir, M.; Bhaduri, B.; Wang, R.; Zhu, R.; Popescu, G. Quantitative phase imaging. Prog. Opt. 2012, 57, 217. [Google Scholar]
  3. Hu, C.; He, S.; Lee, Y.J.; He, Y.; Kong, E.M.; Li, H.; Anastasio, M.A.; Popescu, G. Live-dead assay on unlabeled cells using phase imaging with computational specificity. Nat. Commun. 2022, 13, 713. [Google Scholar] [CrossRef] [PubMed]
  4. Park, Y.; Depeursinge, C.; Popescu, G. Quantitative phase imaging in biomedicine. Nat. Photonics 2018, 12, 578–589. [Google Scholar] [CrossRef]
  5. Redding, B.; Cerjan, A.; Huang, X.; Lee, M.L.; Stone, A.D.; Choma, M.A.; Cao, H. Low spatial coherence electrically pumped semiconductor laser for speckle-free full-field imaging. Proc. Natl. Acad. Sci. USA 2015, 112, 1304–1309. [Google Scholar] [CrossRef] [PubMed]
  6. Wallyn, J.; Anton, N.; Akram, S.; Vandamme, T.F. Biomedical imaging: Principles, technologies, clinical aspects, contrast agents, limitations and future trends in nanomedicines. Pharm. Res. 2019, 36, 78. [Google Scholar] [CrossRef]
  7. Park, J.H.; Yu, Z.; Lee, K.; Lai, P.; Park, Y. Perspective: Wavefront shaping techniques for controlling multiple light scattering in biological tissues: Toward in vivo applications. APL Photonics 2018, 3, 100901. [Google Scholar] [CrossRef] [Green Version]
  8. Bertolotti, J.; Katz, O. Imaging in complex media. Nat. Phys. 2022, 18, 1008–1017. [Google Scholar] [CrossRef]
  9. Cao, J.; Yang, Q.; Miao, Y.; Li, Y.; Qiu, S.; Zhu, Z.; Wang, P.; Chen, Z. Enhance the delivery of light energy ultra-deep into turbid medium by controlling multiple scattering photons to travel in open channels. Light Sci. Appl. 2022, 11, 108. [Google Scholar] [CrossRef]
  10. May, M.A.; Barré, N.; Kummer, K.K.; Kress, M.; Ritsch-Marte, M.; Jesacher, A. Fast holographic scattering compensation for deep tissue biological imaging. Nat. Commun. 2021, 12, 4340. [Google Scholar] [CrossRef]
  11. Zhou, J.; Wei, X.; Shi, J.; Chu, W.; Zhang, W. Underwater image enhancement method with light scattering characteristics. Comput. Electr. Eng. 2022, 100, 107898. [Google Scholar] [CrossRef]
  12. Zhou, J.; Liu, D.; Xie, X.; Zhang, W. Underwater image restoration by red channel compensation and underwater median dark channel prior. Appl. Opt. 2022, 61, 2915–2922. [Google Scholar] [CrossRef] [PubMed]
  13. Liu, D.; Sun, J.; Gao, S.; Ma, L.; Jiang, P.; Guo, S.; Zhou, X. Single-parameter estimation construction algorithm for Gm-APD ladar imaging through fog. Opt. Commun. 2021, 482, 126558. [Google Scholar] [CrossRef]
  14. Suganya, R.; Kanagavalli, R. Hybrid gated recurrent unit and convolutional neural network-based deep learning architecture-based visibility improvement scheme for improving fog-degraded images. Int. J. Inf. Technol. 2022, 14, 19–29. [Google Scholar] [CrossRef]
  15. Edrei, E.; Scarcelli, G. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media. Sci. Rep. 2016, 6, 33558. [Google Scholar] [CrossRef] [Green Version]
  16. Li, L.; Li, Q.; Sun, S.; Lin, H.-Z.; Liu, W.-T.; Chen, P.-X. Imaging through scattering layers exceeding memory effect range with spatial-correlation-achieved point-spread-function. Opt. Lett. 2018, 43, 1670–1673. [Google Scholar] [CrossRef] [PubMed]
  17. Katz, O.; Heidmann, P.; Fink, M.; Gigan, S. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nat. Photonics 2014, 8, 784–790. [Google Scholar] [CrossRef] [Green Version]
  18. Kim, M.; Choi, W.; Choi, Y.; Yoon, C.; Choi, W. Transmission matrix of a scattering medium and its applications in biophotonics. Opt. Express 2015, 23, 12648–12668. [Google Scholar] [CrossRef]
  19. Mosk, A.P.; Lagendijk, A.; Lerosey, G.; Fink, M. Controlling waves in space and time for imaging and focusing in complex media. Nat. Photonics 2012, 6, 283–292. [Google Scholar] [CrossRef] [Green Version]
  20. Zhang, Y.; Zhao, H.; Wu, H.; Chen, Z.; Pu, J. Recognition of orbital angular momentum of vortex beams based on convolutional neural network and multi-objective classifier. Photonics 2023, 10, 631. [Google Scholar] [CrossRef]
  21. Li, S.; Deng, M.; Lee, J.; Sinha, A.; Barbastathis, G. Imaging through glass diffusers using densely connected convolutional networks. Optica 2018, 5, 803–813. [Google Scholar] [CrossRef]
  22. Li, Y.; Xue, Y.; Tian, L. Deep speckle correlation: A deep learning approach toward scalable imaging through scattering media. Optica 2018, 5, 1181–1190. [Google Scholar] [CrossRef]
  23. Li, Y.; Cheng, S.; Xue, Y.; Tian, L. Displacement-agnostic coherent imaging through scatter with an interpretable deep neural network. Opt. Express 2021, 29, 2244–2257. [Google Scholar] [CrossRef] [PubMed]
  24. Rahmani, B.; Loterie, D.; Konstantinou, G.; Psaltis, D.; Moser, C. Multimode optical fiber transmission with a deep learning network. Light Sci. Appl. 2018, 7, 69. [Google Scholar] [CrossRef] [Green Version]
  25. Lai, X.; Li, Q.; Chen, Z.; Shao, X.; Pu, J. Reconstructing images of two adjacent objects passing through scattering medium via deep learning. Opt. Express 2021, 29, 43280–43291. [Google Scholar] [CrossRef]
  26. Qiao, Z.; Shi, X.; Yao, Y.; Wojcik, M.J.; Rebuffi, L.; Cherukara, M.J.; Assoufid, L. Real-time X-ray phase-contrast imaging using SPINNet—A speckle-based phase-contrast imaging neural network. Optica 2022, 9, 391. [Google Scholar] [CrossRef]
  27. Fan, W.; Chen, T.; Xu, X.; Chen, Z.; Hu, H.; Zhang, D.; Wang, D.; Pu, J.; Zhu, S. Single-Shot Recognition of 3D Phase Images with Deep Learning. Laser Photonics Rev. 2022, 16, 2100719. [Google Scholar] [CrossRef]
  28. Zheng, S.; Wang, H.; Dong, S.; Wang, F.; Situ, G. Incoherent imaging through highly nonstatic and optically thick turbid media based on neural network. Photonics Res. 2021, 9, B220–B228. [Google Scholar] [CrossRef]
  29. Lin, H.; Tao, H.; He, M.; Pu, X. Spatial Coherence of High-Power Single-Color LED. Acta Opt. Sin. 2012, 32, 0323003. [Google Scholar]
  30. Lyu, M.; Wang, H.; Li, G.; Zheng, S.; Situ, G. Learning-based lensless imaging through optically thick scattering media. Adv. Photonics 2019, 1, 036002. [Google Scholar] [CrossRef] [Green Version]
  31. Grother, P. NIST Special Database 19. NIST Handprinted Forms and Characters Database; NIST: Gaithersburg, MD, USA, 2008.
  32. Kingma, D.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  33. Wang, Z.; Bovik, A.C. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Process. Mag. 2009, 26, 98–117. [Google Scholar] [CrossRef]
Figure 2. Phase image and its corresponding speckle patterns.
Figure 3. The framework of the U-net with supervised learning. Each blue frame corresponds to a multi-channel feature map, the number of channels is in the frame, the size of x-y is at the bottom left of the frame, the orange frame represents the replication function map, and the arrows represent different operations.
Figure 4. Reconstruction of the phase image through scattering media by using the U-net when the value of K was 0.05. (a) Handwritten digits as original phase images, and (b) quickdraw images as original phase images. The digits above the reconstruction of the phase image are values of the SSIM index.
Figure 5. The structural similarity curves as a function of the training iteration step during training of handwritten digit dataset and quickdraw image dataset, respectively. The value of K is 0.05.
Figure 6. Reconstruction of the phase image through scattering media using the U-net when the value of K was 0.11. (a) Handwritten digits as original phase images, and (b) quickdraw images as original phase images. The digits above the reconstruction of the phase image are values of the SSIM index.
Figure 7. The structural similarity curves as a function of the training iteration step during the training of the handwritten digit dataset and quickdraw image dataset, respectively. The value of K is 0.11.
Figure 8. The structural similarity curves as a function of the value of K.
Figure 9. Reconstruction of the phase image through scattering media using the U-net under hybrid training. (a) The value of K is 0.05, and (b) the value of K is 0.11. The digits above the reconstruction of the phase image are values of the SSIM index.
Figure 10. The structural similarity curves as a function of the training iteration step during the hybrid training of the handwritten digit dataset and quickdraw image dataset.
