Article

Confocal Single-Pixel Imaging

Department of Biomedical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
* Author to whom correspondence should be addressed.
Photonics 2023, 10(6), 687; https://doi.org/10.3390/photonics10060687
Submission received: 15 May 2023 / Revised: 6 June 2023 / Accepted: 13 June 2023 / Published: 14 June 2023
(This article belongs to the Topic Biomedical Photonics)

Abstract

Obtaining depth-selective images requires gating procedures such as spatial, nonlinear, or coherence gating to differentiate light originating from different depths of the volume of interest. Nonlinear gating requires pulsed excitation sources and excitation probes, limiting easy usage. Coherence gating requires broadband sources and interferometry, which demands specialized, stable setups. Spatial gating can be used for both fluorescence and reflection geometries with various light sources and thus places the fewest requirements on hardware, but it still requires a pinhole, which makes it difficult to use for photography or widefield imaging schemes. Here, we demonstrate that a single digital micromirror device (DMD) can simultaneously function as a dynamic illumination modulator and an automatically synchronized dynamic pinhole array to obtain depth-sectioned widefield images. Utilizing the multiplexed measurement advantage of single-pixel imaging, we show that the depth and ballistic light gating of the confocal single-pixel imaging scheme can be used to obtain images through glare and multiple scattering, where conventional widefield imaging fails to recover clear images due to saturation or randomly scattered noise.

1. Introduction

Optical imaging has become an essential part of everyday life and carries valuable information, as can be seen in the impact of high-quality cameras embedded in smartphones. The same holds true as we increase the resolution to examine smaller structures in both industry and the life sciences [1,2]. High-resolution optical imaging enables the imaging of the subcellular organelles making up individual cells and the nondestructive inspection of industrial products en masse. These examples illustrate the strength of optical imaging for fast and easy high-resolution imaging of the superficial layers of various objects. However, when trying to look through multiple layers of objects or inside deep structures, optical imaging faces difficulties associated with depth selectivity and the multiple scattering of light.
Since optical sensors detect the intensity of light regardless of its direction or whether it is converging or diverging, cameras cannot differentiate light originating from different depths of the sample or tell whether the light has undergone multiple scattering. In other words, to differentiate ballistic photons that originate from specific depths of the sample and obtain depth-selective imaging, we need to add specific gating mechanisms such as spatial, nonlinear, or coherence gating into an optical system. Nonlinear gating requires the use of expensive pulsed laser sources and cannot be used for non-excitable targets [3]. Coherence gating requires broadband sources and interferometry and cannot be used for fluorescence [4]. Spatial gating places no limitations on light source selection and can be applied for both reflection and fluorescence imaging [5]. Spatial gating, however, requires the use of a pinhole, which restricts its use in widefield imaging such as photography. Conventional confocal microscopy also suffers from reduced signal-to-noise ratios when applied to thick scattering samples, as the amount of ballistic light is exponentially reduced as a function of sample thickness. Due to the importance of 3D imaging, advances in developing different methods to realize depth selectivity are still ongoing; for example, realizing unconventional optical geometries that incorporate orthogonal illumination and detection paths as in lightsheet microscopy [6], and encoding depth information based on the high spatial frequency contrast of specially designed illumination patterns as in structured illumination microscopy [7,8].
Whereas absorption of light reduces the energy of the propagating light without changing its direction, the propagation direction of light is altered whenever it meets a refractive index mismatch. In other words, we can overcome absorption by simply using wavelengths that are not absorbed by the specific target of interest [9]. However, overcoming multiple scattering is a much more difficult problem, since refractive index mismatches among different structures exist for all wavelengths. To overcome optical aberrations and multiple scattering due to refractive index inhomogeneity, the fields of adaptive optics [10,11,12,13] and wavefront shaping [14,15,16,17,18,19] are gaining popularity, where wavefront distortions that are accumulated during the transport of light through complex media are actively compensated. Originally developed for astronomy [20], adaptive optics and wavefront shaping have been shown to be equally effective in correcting optical distortions due to complex media, especially for thick biological tissues. While actively compensating for the distortions has been shown to be effective for various microscopy methods such as confocal [21], multiphoton [22,23,24,25], lightsheet [26,27], and optical coherence tomography [28,29], its application requires the additional design and setup of complex hardware and measurement algorithms that are out of reach for most small research groups. Adaptive optics is also fundamentally limited by the optical memory effect, which restricts the effective imaging field of view that can benefit from the complex phase correction [30,31,32], and requires an extensive number of measurements when trying to correct for distortions due to multiple scattering, where Shack–Hartmann wavefront sensors fail to operate.
In a simpler approach, single-pixel imaging [33] has been shown to be less sensitive to multiple scattering than conventional widefield imaging. Conventional widefield imaging relies on the combination of a 2D sensor (camera) and a lens, with the respective distances between the sample of interest, the lens, and the sensor satisfying the imaging condition. In contrast, single-pixel imaging does not require the imaging condition to be satisfied between the sample of interest and the detector. Rather, the entirety of the light that is scattered or emitted from the sample is simply integrated onto a single-pixel photodetector, removing the spatial information in detection. In such a scheme, the spatial information about the sample is not directly measured on the detection side and therefore must be obtained in a different manner. Single-pixel imaging satisfies this requirement by illuminating the sample with known 2D patterns. In other words, the spatial degree of freedom is imposed onto the illumination rather than the detector. By simply summing the illuminated patterns weighted with their respective measured intensities after interacting with the sample, we can obtain the 2D image of the object as in traditional imaging. Since the requirements on the sensor are less stringent than in conventional imaging and only a single-pixel sensor is required rather than a 2D pixel array, the detection of light in exotic ranges such as the deep infrared becomes possible, where cheap camera solutions do not exist.
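The weighted-sum reconstruction described above can be sketched in a few lines. The example below is our own minimal illustration (object, sizes, and variable names are hypothetical, not the authors' code), using a Sylvester-construction Hadamard matrix as the orthogonal pattern basis:

```python
import numpy as np

# Minimal single-pixel imaging sketch. The object is never imaged directly:
# each measurement is the total light (a single number) after illuminating
# with one known pattern, and the image is recovered as the sum of patterns
# weighted by their measured signals.
rng = np.random.default_rng(0)
n = 8                                   # reconstructed image is n x n pixels
obj = rng.random((n, n))                # ground-truth reflectance (unknown in practice)
N = n * n                               # one measurement per degree of freedom

# Orthogonal basis: Sylvester-construction Hadamard matrix, rows reshaped to 2D.
H = np.array([[1]])
while H.shape[0] < N:
    H = np.block([[H, H], [H, -H]])
patterns = H.reshape(N, n, n)           # N illumination patterns with +1/-1 entries

# One "single-pixel" measurement per pattern: integrated intensity at the detector.
signals = np.array([(p * obj).sum() for p in patterns])

# Reconstruction: weighted sum of the patterns (H is symmetric and H @ H = N*I).
recon = (signals[:, None, None] * patterns).sum(axis=0) / N
print(np.allclose(recon, obj))          # True: orthogonality gives exact recovery
```

Because the basis is orthogonal, exactly N measurements suffice for an N-pixel image, which is the efficiency argument made for the Hadamard patterns used later in this paper.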
This unique geometry, in which the information relayed between the sample and detector is only the total intensity without any spatial information, has allowed demonstrations where the insertion of turbid media between the sample and detector has negligible effects on image reconstruction quality. The initial demonstrations showed this in transmission geometry, where there was no disturbance along the illumination-source-to-sample path and the disturbance was solely distributed along the sample-to-detector path [34,35]. As previously discussed, the scattering of light in the detection path does not compromise image reconstruction. However, this approach required a clear path between the illumination source and the sample, a severe constraint limiting practical applications. Recently, it was shown that the application of temporal focusing can reduce this constraint by utilizing the longer center wavelength, the broader bandwidth used for multiphoton excitation, and the required ultrafast pulsewidth of the excitation source [36]. This approach, however, requires expensive amplifiers to increase the maximum peak power while reducing the pulse repetition rate to realize multiphoton excitation over a wide field of view, which only a limited number of well-funded labs have access to.
Here, we describe the use of a double-pass single-pixel imaging geometry, which we call confocal single-pixel imaging, to reduce the deleterious effects of multiple scattering and simultaneously gate ballistic photons at specific depths to automatically obtain depth selectivity. The required hardware for confocal single-pixel imaging is identical to that of conventional single-pixel imaging, minimizing the cost and complexity of the setup. The only difference is that the target light, after interacting with the sample, is redirected to the DMD in a double-pass geometry so that individual mirrors of the DMD serve as both illuminating and detecting confocal pinholes. Utilizing this simple setup, we demonstrate that we can dynamically change the single-pixel imaging characteristics from conventional widefield imaging to depth-selective imaging. In contrast to confocal microscopy, our method can easily be applied to macroscopic photography using standard camera lenses, which we demonstrate can be utilized to obtain depth-selective widefield imaging resistant to glare as well as imaging through complex structures that induce multiple scattering.

2. Materials and Methods

Our imaging setup (Figure 1) consists of just a LED light source (Thorlabs, Newton, NJ, USA, MWWHL4), a DSLR macrolens (Nikkor, f/2.8, focal length 105 mm) for both illumination and detection of light to and from the sample, a DMD to control the illumination patterns as well as the confocal gating array (Texas Instruments, Dallas, TX, USA, DLP LightCrafter), a photodiode detector (Thorlabs, PDA100A-EC), and a lens to focus light onto the photodiode. The DMD had a 7.6 µm micromirror pitch distance and a total of 608 × 684 pixels. The mirrors can be tilted to ±12° for independent binary amplitude modulation of each pixel. Using a HDMI connection port, image commands could be updated at 60 Hz. Using 3-color 8-bit dynamic range images, the actual refresh rate of binary patterns could be controlled at a 1440 Hz refresh rate. The DMD surface is simply aligned to be conjugate to the target sample depth using the DSLR macrolens. This alignment automatically aligns the reflected light from the targeted depth back onto the DMD. Due to the double pass geometry, the setup is alignment free and the same mirror segments of the DMD are used for both illumination and detection paths. Since each mirror of the DMD defines the spatial coordinates on the target sample plane, the illumination can be actively turned on or off on an individual pixel basis. The illuminated light that reaches the target is then scattered and reflected back to the same mirror of origin. Other scattered light that has been scattered from different depths or scattered by multiple scattering does not pass through the confocal gating and is removed from the image reconstruction process. A final lens simply integrates the total reflected light into a photodiode. While we demonstrate our method by measuring the scattered light from the sample in a brightfield imaging geometry, the same setup can be easily adapted to be used for other schemes such as fluorescence imaging.
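The 1440 Hz figure follows directly from the numbers above; as a quick sanity check (our own arithmetic, using only values stated in the text):

```python
# Sanity check of the binary-pattern refresh rate: a 60 Hz HDMI stream carries
# 3 color channels x 8 bit planes per frame, and the DMD can display each bit
# plane as one binary pattern.
hdmi_fps = 60
bit_planes_per_frame = 3 * 8
binary_pattern_rate = hdmi_fps * bit_planes_per_frame
print(binary_pattern_rate)  # 1440
```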
In conventional confocal microscopy, decreasing the pinhole size enhances depth selectivity and resolution while reducing the signal-to-noise ratio due to the lower detection throughput. The same holds for spinning disk confocal microscopy, where many pinholes are used in parallel: although the signals are measured through multiple pinholes simultaneously, the light passing through each pinhole is measured individually, and reducing the pinhole size decreases the signal-to-noise ratio for the respective pinhole. In comparison, single-pixel imaging utilizes multiplexed measurements, where the signals collected from all areas across the field of view are integrated and processed together. This results in the so-called Fellgett (multiplex) advantage, and we can utilize the light collection efficiency of the entire system as a whole.
To take advantage of the multiplexed detection and increase the depth-sectioning efficiency of confocal single-pixel imaging, we tested several design parameters for optimal results. Since we utilize single-pixel imaging reconstruction algorithms, it is advantageous to use orthogonal illumination patterns. When illuminating the sample with orthogonal 2D patterns, we only need M = N measurements, where N is the number of definable pixels on the sample plane. In short, we need exactly as many measurements as there are degrees of freedom in the field of view that we want to measure. The use of nonorthogonal illumination patterns requires M ≫ N measurements, which makes the imaging process inefficient. In that case, compressive sensing can be applied to reduce the number of measurements by exploiting the sparsity of natural images [37].
The spacing between adjacent micromirrors was chosen in consideration of the diffraction limit:
Rayleigh diffraction limit = 0.61 λ/NA
in which λ is the wavelength and NA is the numerical aperture of the optical system. Here, the minimum spacing corresponds to the diffraction limit of the optical system, and as the spacing between two micromirrors increases, the confocal gating efficiency increases. However, as the spacing increases, the light throughput and SNR of the acquired images decrease, so an appropriate spacing should be selected under this trade-off. Since the diffraction limit of the system configuration is 1.12 µm, the effective size of the pinhole was chosen to be 1.95 µm, which can be realized by grouping the 576 by 576 active pixels of the DMD into macropixels of 6 by 6 pixels to make 96 by 96 effective pixels. The minimal spacing between adjacent macropixels is 11.7 µm, which is far enough to eliminate out-of-focus light. The spacing varies depending on which pattern is displayed. The confocal orthogonal illumination/pinhole patterns were generated by simply redistributing standard Hadamard basis patterns according to the chosen minimal spacing (Figure 2). As the illumination basis is still orthogonal, theoretically we only require the same number of measurements as the number of pixels in the measured image. Since light intensity cannot be negative, we measured an identical set of conjugate patterns in which the 1 and 0 intensity mirror positions were reversed, resulting in a measurement number of 2 × N, where N is the number of pixels of the obtained image.
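The redistribution of a pattern onto a sparse, raster-shifted pinhole grid can be sketched as follows. This is our own simplified illustration (function name, sizes, and layout are assumptions; the paper's actual pattern generation may differ in detail):

```python
import numpy as np

def confocal_pattern(h, spacing, offset):
    """Spread a small pattern onto a sparse pinhole grid.

    Active macropixels end up `spacing` grid positions apart; `offset`
    (dy, dx) raster-shifts the sparse grid, and iterating over all
    spacing**2 offsets tiles the full field of view.
    """
    s = h.shape[0]
    out = np.zeros((s * spacing, s * spacing), dtype=h.dtype)
    dy, dx = offset
    out[dy::spacing, dx::spacing] = h
    return out

# A 4 x 4 sub-pattern spread with spacing 3: adjacent "on" pinholes are now
# 3 macropixels apart in the 12 x 12 frame.
h = np.ones((4, 4), dtype=int)
p = confocal_pattern(h, spacing=3, offset=(0, 0))
print(p.shape, int(p.sum()))  # (12, 12) 16

# All spacing**2 raster offsets together cover every macropixel exactly once.
cover = sum(confocal_pattern(h, 3, (dy, dx)) for dy in range(3) for dx in range(3))
print(bool((cover == 1).all()))  # True
```

The coverage check at the end mirrors the raster-scanning idea: the sparse grids for all offsets jointly tile the full field of view without overlap.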
In detail, each confocal orthogonal pattern (96 by 96) is generated by reshaping a vectorized Hadamard matrix (256 by 256). The reshaped pattern is spread out over a larger matrix (561 by 561), which is then zero-padded to match the size of the DMD (608 by 684). Before spreading out, the diameter of the multiplexed pinholes and the spacing between pinholes are determined considering the system's diffraction limit. The Hadamard pattern is composed of 1 and −1 entries and, since light intensity cannot be negative, is realized by displaying two complementary binary patterns consecutively and subtracting the second measurement. The patterns are streamed live onto the DMD, which updates the displayed pattern at a refresh rate of 1440 Hz. The analog voltage signal from the single-pixel detector is captured by a data acquisition (DAQ) card (National Instruments, Austin, TX, USA, PCIe-6321) synchronized with the DMD. For image acquisition, the signal values corresponding to each complementary pattern pair are subtracted to generate the measurement for a single Hadamard pattern, as previously discussed. Each displayed pattern is given a weight defined by the measured light intensity reflecting off the object, and summation of all the weighted patterns reconstructs the object. In short, several sub-regions assigned based on the confocal principle are imaged through the conventional single-pixel imaging method, filling out the FOV in a multiplexed raster scanning manner.
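The complementary-pattern subtraction can be illustrated with a small numerical sketch (our own example; the object and pattern sizes are arbitrary):

```python
import numpy as np

# A +1/-1 Hadamard pattern cannot be displayed directly, since light intensity
# is non-negative; instead, two complementary binary patterns are shown and
# the two detector readings are subtracted.
rng = np.random.default_rng(1)
obj = rng.random((4, 4))                 # hypothetical object reflectance
p = rng.choice([-1, 1], size=(4, 4))     # one +1/-1 pattern

c1 = (1 + p) // 2                        # binary pattern: on where p == +1
c2 = (1 - p) // 2                        # complement: on where p == -1

s1 = (c1 * obj).sum()                    # measurement with the first pattern
s2 = (c2 * obj).sum()                    # measurement with the complement
print(np.isclose(s1 - s2, (p * obj).sum()))  # True
```

Since c1 − c2 equals the ±1 pattern exactly, the subtracted reading is identical to the (physically unrealizable) signed measurement.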
O(x, y) = Σ_m O_m(x, y) = Σ_m (1/N) Σ_{i=1}^{N} (S_m^i − ⟨S_m⟩) I_i(x, y)
where O is the reconstructed image in the Cartesian coordinate system (x, y); S_m^i is the signal measured with the theoretical Hadamard pattern I_i and ⟨S_m⟩ is its mean over the measurement set; N is the number of modes of SPI, which is also identical to the number of pixels in I_i; and m is the iteration index for the raster scanning across the total FOV, with O_m retrieved at every iteration. Using orthogonal illumination patterns, we need the same number of measurements as the number of pixels in the image. To generate Hadamard patterns with +1 and −1 values, we need twice as many measurements, resulting in 18,432 measurements to acquire a 96 by 96 pixel image. At the system refresh rate of 1440 Hz, the current image measurement time is 12.8 s. The image reconstruction is simply a summation of the weighted patterns, which can be implemented in real time. If needed, the number of measurements can be reduced by employing compressive sensing in the reconstruction process [37].
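The measurement budget and acquisition time quoted above follow directly from the stated numbers (a simple check of our own, not part of the authors' code):

```python
# Acquisition budget from the stated numbers: an orthogonal basis needs one
# pattern per pixel, doubled for the complementary (+1/-1) pattern pairs.
pixels = 96 * 96
measurements = 2 * pixels
refresh_hz = 1440
acquisition_time_s = measurements / refresh_hz
print(measurements, acquisition_time_s)  # 18432 12.8
```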

3. Results

We verified the depth-sectioning capability for widefield imaging using confocal single-pixel imaging (Figure 3). Using a USAF target as the object, depth sectioning was clearly observed with the confocal orthogonal basis illumination patterns for extended objects: the entire USAF target within the field of view disappeared from the detection range as soon as the target was moved in the axial direction. In contrast, using conventional Hadamard basis illumination patterns, out-of-focus blurred images continued to be measured, with increased blurring as a function of axial displacement. As both imaging modalities can be realized in a single setup, we can also modulate different parts of the field of view simultaneously, for instance, imaging some regions of interest (ROIs) via conventional widefield imaging and others via spatially gated imaging if needed. Comparing the axial PSFs of conventional confocal imaging, confocal single-pixel imaging, and conventional single-pixel imaging, we found the z-PSF FWHM to be 1, 1.07, and 1.40, respectively, normalized to the performance of conventional confocal imaging. The depth-sectioning performance of confocal single-pixel imaging is therefore comparable to that of conventional confocal imaging.
To demonstrate further applications, we employed confocal single-pixel imaging to obtain widefield images through severe glare (Figure 4). Glare was generated by placing a Petri dish in front of the USAF target. In conventional imaging, the severe glare reflected from the out-of-plane refractive index mismatch overwhelmed the signal from the actual target of interest, causing saturation and compromising quantitative imaging across the field of view (Figure 4a). In comparison, confocal single-pixel imaging effectively removed the glare originating from different depths and successfully measured the target without glare artifacts (Figure 4b). To quantify the image quality enhancement from glare suppression, we defined the signal-to-background ratio of our images by choosing a part of the object that should contain no signal as the background and a constant area of the object of interest as the true signal. Comparing the outlined regions, we found the signal-to-background ratio in conventional widefield imaging to be 1.2559, whereas that in confocal single-pixel imaging was 3.3685, clearly demonstrating the enhancement in contrast due to glare suppression.
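The signal-to-background metric described above can be sketched as follows (our own illustration with synthetic data; the actual signal and background regions used in the paper are defined on its figures):

```python
import numpy as np

def signal_to_background(img, signal_mask, background_mask):
    """Mean intensity over a region expected to hold signal, divided by the
    mean over a region expected to be dark."""
    return img[signal_mask].mean() / img[background_mask].mean()

# Synthetic example: bright bars (0.9) on a dim background (0.1).
img = np.full((10, 10), 0.1)
img[2:4, :] = 0.9
sig = np.zeros_like(img, dtype=bool)
sig[2:4, :] = True
bg = np.zeros_like(img, dtype=bool)
bg[7:9, :] = True
print(round(signal_to_background(img, sig, bg), 3))  # 9.0
```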
We finally tested confocal single-pixel imaging through scattering layers (Figure 5). To explore further applications, we demonstrate the applicability of our confocal SPI system with various illumination wavelengths, including ranges that normally cannot be imaged using conventional cameras. Using various illumination sources, we observed that broadband sources suffered least from speckle artifacts in the recovered image, while longer wavelengths suffered less from image aberrations and displayed sharper contrast through scattering [32]. Consequently, our multiplexed confocal regime is expected to extract more informative light with a higher SNR from deeper, scattering tissue compared with conventional SPI and confocal systems.
The feasibility of confocal SPI through scattering media was demonstrated with broadband visible light in Figure 5. Within the length of the mean free path, the broadband illumination provided lower speckle contrast than infrared light. To make the speckle visible, we placed a single layer of Scotch tape (reduced scattering coefficient μs′ ~ 3 cm−1 [38]) and optimized the distance between the layer and the target (Figure 5a–c), while speckle-tolerant imaging was achieved using broadband LED illumination (Figure 5d–f), showing higher contrast than conventional widefield imaging (Figure 5g–i).

4. Discussion and Conclusions

Although various gating mechanisms have been demonstrated to be effective in microscopy, their direct application across scales spanning from the microscopic to the macroscopic is currently difficult. For example, nonlinear microscopy, which relies on high peak pulse powers, is not effective for macroscopic imaging, since obtaining the required flux for larger effective pixel sizes would demand more powerful laser sources with prohibitive costs. Similar restrictions hold for other laser scanning microscopy techniques such as confocal microscopy and optical coherence tomography, since the telecentric scan lenses used to translate the angular tilt of light into a spatial shift typically have short focal lengths, which, along with the limited scan angles of scan mirrors, limit the achievable high-quality scan range. In this regard, we have demonstrated confocal single-pixel imaging, which utilizes conventional camera lenses for widefield depth-selective macroscopic imaging. Using DMDs, which can dynamically modify illumination patterns in a double-pass geometry, we obtained automatic spatial gating with the same individual mirrors acting as both illuminator and detector. To realize efficient spatial gating using 2D DMDs, we developed confocal orthogonal illumination patterns which require exactly the same number of measurements as the obtained image pixel resolution.
While our current demonstration utilized brightfield epi-reflection imaging and realized removal of glare artifacts and multiple scattering background noise, the same principle can be used for various applications such as fluorescence and phase imaging. Removing glare may find applications in long range surveillance where looking through windows on a sunny day or looking inside deep waters through surface reflections is required. Macroscopic imaging through turbid media will also find valuable use in various applications such as imaging through fog or turbulence as well as biomedical imaging.

Author Contributions

Investigation, C.A. and J.-H.P.; data curation, C.A.; writing, C.A. and J.-H.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Research Foundation of Korea, grant number 2021R1A2C3012903.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Roda, A.; Michelini, E.; Zangheri, M.; Di Fusco, M.; Calabria, D.; Simoni, P. Smartphone-based biosensors: A critical review and perspectives. TrAC Trends Anal. Chem. 2016, 79, 317–325.
2. Blahnik, V.; Schindelbeck, O. Smartphone imaging technology and its applications. Adv. Opt. Technol. 2021, 10, 145–232.
3. Denk, W.; Strickler, J.H.; Webb, W.W. Two-photon laser scanning fluorescence microscopy. Science 1990, 248, 73–76.
4. Huang, D.; Swanson, E.A.; Lin, C.P.; Schuman, J.S.; Stinson, W.G.; Chang, W.; Hee, M.R.; Flotte, T.; Gregory, K.; Puliafito, C.A.; et al. Optical Coherence Tomography. Science 1991, 254, 1178–1181.
5. Jonkman, J.; Brown, C.M.; Wright, G.D.; Anderson, K.I.; North, A.J. Tutorial: Guidance for quantitative confocal microscopy. Nat. Protoc. 2020, 15, 1585–1611.
6. Huisken, J.; Swoger, J.; Del Bene, F.; Wittbrodt, J.; Stelzer, E.H.K. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 2004, 305, 1007–1009.
7. Neil, M.A.A.; Juskaitis, R.; Wilson, T. Method of obtaining optical sectioning by using structured light in a conventional microscope. Opt. Lett. 1997, 22, 1905–1907.
8. Neil, M.A.A.; Juskaitis, R.; Wilson, T. Real time 3D fluorescence microscopy by two beam interference illumination. Opt. Commun. 1998, 153, 1–4.
9. Denk, W.; Svoboda, K. Photon upmanship: Why multiphoton imaging is more than a gimmick. Neuron 1997, 18, 351–357.
10. Booth, M.J. Adaptive optics in microscopy. Philos. Trans. A Math. Phys. Eng. Sci. 2007, 365, 2829–2843.
11. Ji, N. Adaptive optical fluorescence microscopy. Nat. Methods 2017, 14, 374–380.
12. Ahn, C.; Hwang, B.; Nam, K.; Jin, H.; Woo, T.; Park, J.-H. Overcoming the penetration depth limit in optical microscopy: Adaptive optics and wavefront shaping. J. Innov. Opt. Health Sci. 2019, 12, 1930002.
13. Hwang, B.; Woo, T.; Park, J.H. Fast diffraction-limited image recovery through turbulence via subsampled bispectrum analysis. Opt. Lett. 2019, 44, 5985–5988.
14. Mosk, A.P.; Lagendijk, A.; Lerosey, G.; Fink, M. Controlling waves in space and time for imaging and focusing in complex media. Nat. Photonics 2012, 6, 283–292.
15. Vellekoop, I.M. Feedback-based wavefront shaping. Opt. Express 2015, 23, 12189–12206.
16. Horstmeyer, R.; Ruan, H.; Yang, C. Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue. Nat. Photonics 2015, 9, 563–571.
17. Yu, Z.; Li, H.; Zhong, T.; Park, J.-H.; Cheng, S.; Woo, C.M.; Zhao, Q.; Yao, J.; Zhou, Y.; Huang, X.; et al. Wavefront shaping: A versatile tool to conquer multiple scattering in multidisciplinary fields. Innovation 2022, 3, 100292.
18. Nam, K.; Park, J.H. Increasing the enhancement factor for DMD-based wavefront shaping. Opt. Lett. 2020, 45, 3381–3384.
19. Jin, H.; Hwang, B.; Lee, S.; Park, J.-H. Limiting the incident NA for efficient wavefront shaping through thin anisotropic scattering media. Optica 2021, 8, 428–437.
20. Beckers, J.M. Adaptive optics for astronomy: Principles, performance, and applications. Annu. Rev. Astron. Astrophys. 1993, 31, 13–62.
21. Booth, M.J.; Neil, M.A.A.; Juskaitis, R.; Wilson, T. Adaptive aberration correction in a confocal microscope. Proc. Natl. Acad. Sci. USA 2002, 99, 5788–5792.
22. Debarre, D.; Botcherby, E.J.; Watanabe, T.; Srinivas, S.; Booth, M.J.; Wilson, T. Image-based adaptive optics for two-photon microscopy. Opt. Lett. 2009, 34, 2495–2497.
23. Ji, N.; Milkie, D.E.; Betzig, E. Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues. Nat. Methods 2010, 7, 141–147.
24. Park, J.H.; Sun, W.; Cui, M. High-resolution in vivo imaging of mouse brain through the intact skull. Proc. Natl. Acad. Sci. USA 2015, 112, 9236–9241.
25. Park, J.H.; Kong, L.; Zhou, Y.; Cui, M. Large-field-of-view imaging by multi-pupil adaptive optics. Nat. Methods 2017, 14, 581–583.
26. Liu, T.-L.; Upadhyayula, S.; Milkie, D.E.; Singh, V.; Wang, K.; Swinburne, I.A.; Mosaliganti, K.R.; Collins, Z.M.; Hiscock, T.W.; Shea, J.; et al. Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms. Science 2018, 360, eaaq1392.
27. Bourgenot, C.; Saunter, C.D.; Taylor, J.M.; Girkin, J.M.; Love, G.D. 3D adaptive optics in a light sheet microscope. Opt. Express 2012, 20, 13252–13261.
28. Jang, J.; Lim, J.; Yu, H.; Choi, H.; Ha, J.; Park, J.H.; Oh, W.Y.; Jang, W.; Lee, S.; Park, Y. Complex wavefront shaping for optimal depth-selective focusing in optical coherence tomography. Opt. Express 2013, 21, 2890–2902.
29. Fiolka, R.; Si, K.; Cui, M. Complex wavefront corrections for deep tissue focusing using low coherence backscattered light. Opt. Express 2012, 20, 16532–16543.
30. Feng, S.C.; Kane, C.; Lee, P.A.; Stone, A.D. Correlations and Fluctuations of Coherent Wave Transmission through Disordered Media. Phys. Rev. Lett. 1988, 61, 834–837.
31. Freund, I.; Rosenbluh, M.; Feng, S. Memory effects in propagation of optical waves through disordered media. Phys. Rev. Lett. 1988, 61, 2328–2331.
32. Vellekoop, I.M.; Aegerter, C.M. Scattered light fluorescence microscopy: Imaging through turbid layers. Opt. Lett. 2010, 35, 1245–1247.
33. Edgar, M.P.; Gibson, G.M.; Padgett, M.J. Principles and prospects for single-pixel imaging. Nat. Photonics 2018, 13, 13–20.
34. Tajahuerce, E.; Duran, V.; Clemente, P.; Irles, E.; Soldevila, F.; Andres, P.; Lancis, J. Image transmission through dynamic scattering media by single-pixel photodetection. Opt. Express 2014, 22, 16945–16955.
35. Duran, V.; Soldevila, F.; Irles, E.; Clemente, P.; Tajahuerce, E.; Andres, P.; Lancis, J. Compressive imaging in scattering media. Opt. Express 2015, 23, 14424–14433.
36. Escobet-Montalbán, A.; Spesyvtsev, R.; Chen, M.; Saber, W.A.; Andrews, M.; Herrington, C.S.; Mazilu, M.; Dholakia, K. Wide-field multiphoton imaging through scattering media without correction. Sci. Adv. 2018, 4, eaau1338.
37. Katz, O.; Bromberg, Y.; Silberberg, Y. Compressive ghost imaging. Appl. Phys. Lett. 2009, 95, 131110.
38. Luu, L.; Roman, P.A.; Mathews, S.A.; Ramella-Roman, J.C. Microfluidics based phantoms of superficial vascular network. Biomed. Opt. Express 2012, 3, 1350–1364.
Figure 1. Experimental set-up for confocal single-pixel imaging. (Left inset) A representative illustration of the micromirror device, on which a subset of the micromirrors acts as a pinhole array to extract ballistic and singly reflected light from the focused plane. (Right inset) The nth confocal Hadamard pattern displayed. (LED) Light-emitting diode, (DMD) digital micromirror device, (L1) macro lens, (S) scattering media, (O) target object, (BS) beam splitter, (L2) condenser lens, (M) light dumping mirror, (PD) photodetector.
Figure 2. Design of the confocal orthogonal basis and workflow chart for confocal gated SPI. x and y: sample-plane coordinates; N: size of the Hadamard pattern; I: reshaped Hadamard pattern; Ci1 and Ci2: i-th pair of complementary confocal patterns; O: reconstructed object.
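To make the differential measurement scheme summarized in the Figure 2 workflow concrete, the following is a minimal numerical sketch (not the authors' implementation, and without the confocal DMD gating itself) of Hadamard-basis single-pixel reconstruction using complementary pattern pairs analogous to Ci1 and Ci2: each Hadamard row is split into its positive and negative binary parts, the two detector readings are subtracted, and the object is recovered by the inverse Hadamard transform.

```python
import numpy as np

def hadamard_matrix(n):
    # Sylvester construction; n must be a power of 2
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Simulated N x N object (stand-in for the sample plane)
N = 8
rng = np.random.default_rng(0)
obj = rng.random((N, N))

H = hadamard_matrix(N * N)       # each row reshapes to one N x N pattern
pos = (H > 0).astype(float)      # analogous to Ci1: "on" mirrors for +1 entries
neg = (H < 0).astype(float)      # analogous to Ci2: complementary pattern

# Single-pixel detector readings: total light collected under each pattern
m_pos = pos @ obj.ravel()
m_neg = neg @ obj.ravel()
coeffs = m_pos - m_neg           # differential measurement equals H @ obj

# Reconstruction: H is orthogonal up to a factor of N*N
recon = (H.T @ coeffs) / (N * N)
recon = recon.reshape(N, N)
```

Because the complementary patterns are binary (displayable on a DMD) while their difference reproduces the bipolar Hadamard basis, the subtraction also cancels any pattern-independent background, which is what enables the gating demonstrated in the following figures.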
Figure 3. Comparison between widefield imaging and confocal single-pixel imaging. (a–d) Images acquired under the confocal configuration, which includes the confocal orthogonal basis pattern set and double pathway. (e–h) Images acquired under the conventional widefield configuration, which includes the conventional orthogonal basis pattern set and single pathway. Each column shows serial image acquisition from the focal plane to out-of-focus planes at a step size of 200 µm. It can be seen that confocal SPI successfully gates out-of-focus blurring as well as the out-of-focus background due to surface reflections. Scale bar: 100 µm.
Figure 4. Performance comparison for glare suppression in widefield imaging and confocal single-pixel imaging. The red dotted circle is a guide to the eye marking the center of the glare, where information is completely lost due to saturation in (a) conventional imaging, whereas in (b) only light originating from the object is selectively gated using confocal single-pixel imaging. Scale bar: 1000 µm.
Figure 5. Imaging through scattering layers. (a) Ground truth; (b,c) confocal SPI using visible (405 nm) and short-wavelength infrared light (1650 nm), respectively. Scale bar: 50 µm. (d) Ground truth; (e,f) confocal SPI through 5 and 7 layers of Scotch tape using a broadband LED (400–700 nm), respectively. (g) Ground truth; (h,i) conventional widefield imaging through 5 and 7 layers of Scotch tape using a broadband LED (400–700 nm), respectively. Scale bar: 100 µm.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Ahn, C.; Park, J.-H. Confocal Single-Pixel Imaging. Photonics 2023, 10, 687. https://doi.org/10.3390/photonics10060687