Optical Diffraction Tomography Using Nearly In-Line Holography with a Broadband LED Source

Abstract: We present an optical tomography method for 3D refractive index reconstruction of weakly scattering objects using LED light sources. We are able to record holograms by minimizing the optical path difference between the signal and reference beams while separating the scattered field from its twin image. We recorded multiple holograms by illuminating the LEDs sequentially and reconstructed the 3D refractive index distribution of the sample. The reconstructions show a high signal-to-noise ratio, in which speckle artifacts are strongly suppressed by the partially incoherent illumination of the LEDs. Results from combining different illumination wavelengths are also described, demonstrating higher acquisition speed.


Introduction
Optical Diffraction Tomography (ODT) is an emerging tool for label-free imaging of semi-transparent samples in three-dimensional space [1][2][3][4][5][6][7][8][9][10]. Being semi-transparent, such objects do not strongly alter the amplitude of the illuminating field. However, the total phase delay at a particular wavelength is a function of both the refractive index and the thickness of the sample. Due to this ambiguity, one cannot distinguish between these two parameters from 2D projections. Hence, to reconstruct the 3D refractive index (RI) map of semi-transparent samples, holographic detection is needed to extract the phase of the field after passing through the sample. Then, by acquiring holograms at different illumination angles, the 3D RI map can be reconstructed using inverse scattering models [10].
Holographic detection was introduced by Gabor, who used "in-line" holography. He showed that the intensity image retrieved from in-line holography is composed of an "in-focus" image in addition to an "out-of-focus" image (i.e., the "Twin" image) [11]. Due to this "Twin" image problem, in-line holography usually encounters difficulties in retrieving the phase of the object. Leith and Upatnieks proposed "off-axis" holography [12]. In this configuration, a small tilt is introduced between the reference arm and the sample arm, which shifts the "out-of-focus" image with respect to the "in-focus" image in the Fourier domain. Since then, "off-axis" interferometry has been widely used in ODT, by first extracting the phase and then applying the inverse models [13,14].
Several limitations still restrict the use of ODT in biological imaging, including phase instability due to interferometry and laser fluctuations, and speckle artifacts due to the high coherence of the laser source. To overcome these limitations, Lei Tian, Laura Waller, and co-workers used a relatively broadband source (i.e., LED illumination) to illuminate the sample for Fourier ptychographic and 3D imaging [15][16][17][18]. In particular, in [18], Tian and Waller used LED illumination and demonstrated an iterative reconstruction scheme with a multi-slice forward model to estimate the 3D complex RI distribution by minimizing an error function between the intensity patterns estimated from the forward model and the measured images. Their approach showed in-focus reconstructions at different depths while taking multiple scattering phenomena into account.
In recent years, this approach has been demonstrated in both reflection and transmission configurations [19][20][21][22][23][24]. For example, a motion-free illumination scanning scheme was demonstrated for 3D RI reconstruction using an LED ring that mimics a circular scanning approach [25]. Other approaches to intensity diffraction tomography use nonlinear iterative schemes that reconstruct the 3D RI map from 2D intensity images by minimizing an error function [26,27]. These models usually start with an initial guess of the 3D RI, which is iteratively refined by minimizing the error between the actual measurement from the experiment and the intensity profile from the forward physical model.
In the earlier work [18], the phase and absorption transfer functions were calculated in the spatial domain using the intensity images as a function of the illumination angle. In this work, we use LED illumination and apply the Fourier diffraction theorem, or Wolf transform [1,28], to reconstruct the 3D scattering potential in the 3D Fourier space, followed by a 3D inverse Fourier transform to produce the 3D RI distribution. The reconstructions showed higher resolution, lower speckle noise, and higher contrast compared to the results we presented earlier for a Wolf transform reconstruction applied to projections obtained with laser illumination [28].
We begin by discussing the use of the Fourier diffraction theorem on the "in-line" intensity data for the retrieval of the 3D RI map of the sample. We describe the theory behind our work and then show the reconstructed 3D RI map. After that, we show the effect of a slight misalignment of the illumination on the quality and contrast of the 3D RI reconstruction. Finally, we show the effect of adding a different wavelength to the final reconstruction.

Theory
The intensity pattern captured by the detector of the ODT system is denoted by I_t(x, y), with x and y being the horizontal and vertical dimensions of the 2D intensity pattern. The detected intensity is given by:

I_t(x, y) = |U_i + U_s|^2 = |U_i|^2 + |U_s|^2 + 2|U_i||U_s| cos(ϕ_s − ϕ_i),   (1)

where |U_i| is the amplitude of the incident field, |U_s| is the amplitude of the scattered field, and ϕ_s − ϕ_i is the difference between the phases of the complex scattered and incident fields, carrying the phase information of the sample. For weakly scattering samples (i.e., the Born approximation), we can assume that |U_s| << |U_i|, and defining U_i = e^{jϕ_i} with |U_i| = 1, Equation (1) can be simplified as follows:

I_t ≈ 1 + 2|U_s| cos(∆ϕ),   (2)

where ∆ϕ = ϕ_s − ϕ_i. Equation (2) can be rewritten as:

I_t = 1 + |U_s| e^{j∆ϕ} + |U_s| e^{−j∆ϕ}.   (3)

Multiplying both sides of Equation (3) by U_i = e^{jϕ_i}, we obtain:

I_t e^{jϕ_i} = |U_s| e^{jϕ_s} + e^{jϕ_i} + |U_s| e^{−jϕ_s} e^{2jϕ_i} = U_s + e^{jϕ_i} + U_s* e^{2jϕ_i} = U_s + e^{jϕ_i} (1 + U_s* e^{jϕ_i}).   (4)

Equation (4) includes the effect of the scattered field (i.e., U_s), which we refer to as the "Principal" image, and its complex conjugate (i.e., U_s*), which we refer to as the twin image.
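The Born-approximation step from Equation (1) to Equation (2) can be checked numerically. The sketch below uses illustrative field values of our choosing (not measured data) and verifies that the linearized intensity differs from the exact one only by the neglected |U_s|^2 term:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weak scattered field (Born regime: |U_s| << |U_i| = 1).
phi_i = 0.3                      # incident phase (plane wave, arbitrary value)
U_i = np.exp(1j * phi_i)
U_s = 0.01 * np.exp(1j * (phi_i + rng.uniform(0, 2 * np.pi, 1000)))

# Exact detected intensity, Equation (1): |U_i + U_s|^2
I_exact = np.abs(U_i + U_s) ** 2

# Linearized intensity, Equation (2): 1 + 2|U_s| cos(delta_phi)
dphi = np.angle(U_s) - phi_i
I_born = 1 + 2 * np.abs(U_s) * np.cos(dphi)

# The neglected |U_s|^2 term is the only difference between the two.
err = np.max(np.abs(I_exact - I_born))
print(err <= np.abs(U_s).max() ** 2 + 1e-12)   # True
```

For |U_s| = 0.01 the neglected term is 10^-4 of the incident intensity, which is why the linearization holds for weakly scattering samples.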
As has been shown previously [28], at illumination angles smaller than that allowed by the numerical aperture, the two terms tend to cancel each other, which results in a low contrast 3D reconstruction while maintaining the high frequency features of the sample. Figure 1 shows the effect of changing the illumination angle on the 2D intensity image of a simulated digital phantom, where different illumination angles were assumed. Note that the 2D Fourier transform of each intensity image includes two circles in the Fourier domain. Each circle is the result of the spectral filtering applied by the limited numerical aperture of the objective lens. From Equation (3), we see that we have three terms: a Zero-order term (the first term on the right-hand side) and the two cross terms. The shift of the two circles from the Zero-order term depends on the illumination angle of the incident plane wave. In other words, for normal incidence, the two circles completely overlap with each other. However, as we increase the illumination angle, the shift between the two circles increases until we reach the limit of the numerical aperture, at which point the two circles are tangent to each other. Only when the illumination is at the maximum angle permitted by the NA of the objective lens can the complex field be retrieved from the intensity image, as would be the case for an off-axis interferometric setup with a separate reference arm for holographic detection. Figure 1 shows that, for an accurate extraction of the scattered field, the sample should be illuminated at the maximum angle permitted by the NA of the objective lens [21]. The scattered field can be extracted by simply multiplying the intensity measurement with the incident plane wave, which shifts the spectrum in the Fourier domain. This is followed by the spatial filtering of the "Principal" image with a circular filter whose size is determined by the NA of the objective lens, as shown in Figure 2.
From Figure 2, it can be seen that only when the illumination angle is at the edge of the imaging NA can we extract the complex scattered field from the intensity image. This can be experimentally realized by illuminating along a circular cone whose center is perfectly aligned with the imaging objective lens, as demonstrated in the experimental setup described in the following section. Figure 2. Processing of the 2D intensity images before mapping into the 3D Fourier space. The leftmost panel shows the intensity measurements and the corresponding Fourier transform. The middle panel shows the effect of multiplying by the incident plane wave, which results in centering the scattered field highlighted by the white circles. The final step is the filtering of the scattered field with a circular filter whose size matches the size of the numerical aperture in the Fourier space, indicated by the red circle. Scale bar = 8 µm.
By multiplying the intensity image with the incident plane wave to shift the spectrum in the Fourier domain, the scattered field spectrum becomes centered around the origin. To filter out the complex scattered field in the Fourier domain, we apply a low pass filter given by the following equation:

U_s(k_x, k_y) = LPF{ F_2D{ I_t(x, y) e^{jϕ_i(x,y)} } },   (5)

where U_s(k_x, k_y) is the 2D Fourier transform of U_s(x, y), and LPF{.} represents a circular low pass filter whose radius is given by k_0 NA, where k_0 is the wave number in free space. By extracting the complex field along the direction k = (k_x, k_y, k_z) for each illumination k-vector k_in = (k_in_x, k_in_y, k_in_z), one Fourier component of the 3D spectrum of the scattering potential is obtained:

F(κ) = F(k − k_in) = (j k_z / π) U_s(k_x, k_y),   with   k_z = sqrt((k_0 n_0)^2 − k_x^2 − k_y^2),   (6)

where κ = k − k_in is the 3D spatial frequency of the scattering potential, k_0 = 2π/λ, λ is the wavelength of the illumination beam, and n_0 is the refractive index of the surrounding medium. We refer to Equation (6) as the Wolf transform [1,29].
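The demodulate-and-filter procedure of Equation (5) can be sketched as follows; the function name, units, and parameter values are illustrative choices, not taken from our processing code:

```python
import numpy as np

def extract_scattered_field(I, kin_xy, NA, wavelength, dx):
    """Demodulate an in-line intensity image and low-pass filter the
    'Principal' order (a sketch of Equations (4)-(5)).

    I          : 2D background-normalized intensity image
    kin_xy     : (k_in_x, k_in_y) transverse illumination wavevector [rad/um]
    NA         : numerical aperture of the objective lens
    wavelength : illumination wavelength [um]
    dx         : pixel size in the sample plane [um]
    """
    ny, nx = I.shape
    x = np.arange(nx) * dx
    y = np.arange(ny) * dx
    X, Y = np.meshgrid(x, y)

    # Multiply by the incident plane wave e^{j phi_i}: centers U_s in Fourier space.
    demod = I * np.exp(1j * (kin_xy[0] * X + kin_xy[1] * Y))

    # Circular low-pass filter of radius k0 * NA (Equation (5)).
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k0 = 2 * np.pi / wavelength
    lpf = (KX**2 + KY**2) <= (k0 * NA) ** 2

    Us_spec = np.fft.fft2(demod) * lpf      # filtered 2D spectrum of U_s
    return np.fft.ifft2(Us_spec)            # complex scattered field U_s(x, y)
```

The returned complex field is what is subsequently mapped onto the Ewald sphere via Equation (6).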
By applying Equation (6) for different 2D projections and accumulating the various spectral components in the 3D Fourier domain, the 3D spectrum F(κ) is formed. Subsequently, F(r) can be spatially reconstructed using an inverse 3D Fourier transform. Finally, with the scattering potential defined as F(r) = k_0^2 (n^2(r) − n_0^2), n(r) is retrieved using the following equation:

n(r) = sqrt(n_0^2 + F(r)/k_0^2).   (7)

In summary, we obtain the 3D RI distribution through the following steps: (1) the raw images are processed to remove the background; (2) the illumination angle is calculated; (3) the intensity image is multiplied with the incident plane wave to shift the scattered field spectrum to the center; (4) the resulting spectrum is low pass filtered with a circular filter whose radius is proportional to the numerical aperture of the imaging objective lens; (5) the resulting low pass filtered spectrum is mapped to the 3D Fourier space of the sample as a spherical cap (i.e., diffraction); (6) by applying steps 1-5 for all intensity images with different illumination angles, the 3D scattering potential is formed in the 3D Fourier space; and (7) by applying an inverse 3D Fourier transform, the spatial distribution of the scattering potential is calculated, from which the 3D RI distribution is retrieved.
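The mapping of each filtered 2D spectrum onto its Ewald cap (steps 5 and 6 above) can be sketched with nearest-neighbor gridding. This is a simplified illustration with hypothetical names; interpolation and normalization conventions vary between implementations:

```python
import numpy as np

def accumulate_projection(F3d, counts, Us_spec, kin, k0n0, dk):
    """Map one filtered 2D spectrum onto its Ewald cap in the 3D Fourier volume.

    F3d, counts : 3D accumulators, shape (N, N, N) (complex / int)
    Us_spec     : filtered 2D spectrum of U_s, shape (N, N), centered layout
    kin         : illumination wavevector (kx, ky, kz) with |kin| = k0n0
    k0n0        : wave number in the medium (2 * pi * n0 / lambda)
    dk          : spacing of the 3D Fourier grid
    """
    N = Us_spec.shape[0]
    k = (np.arange(N) - N // 2) * dk
    KX, KY = np.meshgrid(k, k)
    KZ2 = k0n0**2 - KX**2 - KY**2
    valid = KZ2 > 0                           # points on the Ewald sphere
    KZ = np.sqrt(np.where(valid, KZ2, 0.0))

    # Scattering-potential frequency kappa = k - k_in (Equation (6)).
    def to_index(q):
        return np.round(q / dk).astype(int) + N // 2

    ix = to_index(KX - kin[0])
    iy = to_index(KY - kin[1])
    iz = to_index(KZ - kin[2])
    ok = (valid & (ix >= 0) & (ix < N) & (iy >= 0) & (iy < N)
                & (iz >= 0) & (iz < N))

    # Fourier diffraction theorem weight, Equation (6): (j kz / pi) * U_s.
    vals = (1j * KZ / np.pi) * Us_spec
    np.add.at(F3d, (iz[ok], iy[ok], ix[ok]), vals[ok])
    np.add.at(counts, (iz[ok], iy[ok], ix[ok]), 1)
```

After all projections are accumulated, dividing F3d by counts where nonzero and taking a 3D inverse Fourier transform yields F(r), from which n(r) follows via Equation (7).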
Preprocessing of the images is performed by subtracting the background from the raw images to remove any noise from the camera or the ambient environment. The background is retrieved by applying a low pass filter to the raw images. The normalized intensity profile is then calculated as follows:

I_norm(x, y) = (I_raw(x, y) − I_Bkg(x, y)) / I_Bkg(x, y),   (8)

where I_Bkg is the background signal.
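A minimal sketch of this background normalization, assuming a Gaussian low-pass filter for the background estimate; the filter width (sigma_frac) is an illustrative choice of ours, not a value from the paper:

```python
import numpy as np

def normalize_background(I_raw, sigma_frac=0.05):
    """Estimate the background by low-pass filtering the raw image in the
    Fourier domain, then subtract and normalize (a sketch of Equation (8))."""
    ny, nx = I_raw.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Gaussian low-pass filter: keeps only slowly varying background structure.
    lpf = np.exp(-(fx**2 + fy**2) / (2 * sigma_frac**2))
    I_bkg = np.real(np.fft.ifft2(np.fft.fft2(I_raw) * lpf))
    return (I_raw - I_bkg) / I_bkg
```

For a perfectly flat frame the estimated background equals the frame itself and the normalized output is zero everywhere, as expected for a scene with no sample modulation.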

Experimental Setup and 3D RI Reconstructions
The experimental setup used in our experiments combined a standard bright-field microscope (AmScope T490B-DK 40X-2000X, Irvine, CA, USA) in which the bright-field illumination unit of the commercial microscope was replaced by an LED ring illumination unit (Adafruit, ID: 1586, 24 LED pixels, bandwidth = 20 nm, New York, NY, USA). The experimental setup is shown in Figure 3. The LED ring had a radius of approximately 30 mm. To assume plane-wave illumination, the LED ring was placed far enough from the sample (at a distance of around 35 mm) so that the wavefront illuminating the sample was sufficiently flat. For imaging, an objective lens with a magnification of 40× and an NA of 0.65 was used (Plan Achro, part of the AmScope system). Images were captured using a scientific color CMOS camera (Thorlabs Zelux, resolution: 1440 × 1080, pixel size: 3.45 µm, Bergkirchen, Germany). The LED ring was driven using an Arduino kit (Arduino Uno), and a Matlab script was used to synchronize the LED ring with the camera for the different illumination angles. Each image took 0.5 s to acquire, with a total of 12 s for the acquisition of the 24 images. Using more powerful LEDs or laser diodes could result in shorter acquisition times.
While an LED source was used in this study, the described technique is also applicable to laser sources. The advantages of using LEDs are the elimination of the speckle noise usually seen with highly coherent illumination and their practical convenience (low-cost angle scanning by sequential LED illumination).
The center of the LED ring was aligned with the optical axis as described in the previous section. As we will show later, any misalignment severely affects the quality of the reconstruction. In addition, the illumination and imaging NAs were matched by controlling the distance between the sample and the LED ring to ensure proper extraction of the "Principal" field alone from the intensity measurement. As shown in Figure 3b, the distance (h) was controlled so that the illumination and imaging NAs were perfectly matched. In other words, the distance (h) should satisfy the following condition:

R / sqrt(R^2 + h^2) = NA,   i.e.,   h = R sqrt(1 − NA^2) / NA,   (9)

where R is the radius of the LED ring. Figure 4 shows an example of a raw image taken of a cheek cell extracted from a human mouth. Figure 4b shows the two circles (i.e., cross terms) expected from the theory. As explained in the previous section, by carefully aligning the LED ring to match the imaging NA, we were able to completely decouple the two cross terms for proper reconstruction of the 3D RI distribution. We refer to the raw intensity images as holograms, since holography generally refers to an image from which the amplitude and phase can be extracted, as first described by Dennis Gabor, in which the recorded image on the film carries information about the complex field of the transparent sample being imaged. In Gabor's original work, the reference beam propagated in the same direction as the signal beam; therefore, it was difficult to separate one from the other. The work of Leith and Upatnieks placed the reference beam at a sufficiently large angle that the reconstructed signal beam was separated in the Fourier space from the Zero-order beam. The holograms presented here (and also in the work of Waller and Tian) were recorded with a reference beam, namely the portion of the incident beam that was not scattered, at an angle just large enough to separate the signal beam from the twin image (the conjugate term) in the Fourier space.
Since the object in this case is a weakly scattering, phase-only object, the modulation of the incident beam by the sample did not appreciably affect the reconstructions. Fringes were not visible in the recording shown in Figure 4a due to their low contrast and to the sampling by the camera, which was close to the Nyquist rate. The fringes were weak, but they were certainly present, since the 2D Fourier transform shown in Figure 4b contains two orders.
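The NA-matching condition relating the ring radius and the ring-to-sample distance can be checked numerically. With the geometry stated above (ring radius of about 30 mm, objective NA of 0.65), the condition indeed places the ring roughly 35 mm from the sample:

```python
import math

# NA-matching: sin(theta) = R / sqrt(R^2 + h^2) = NA, solved for h.
def ring_height(R_mm, NA):
    """Ring-to-sample distance h that matches illumination and imaging NA."""
    return R_mm * math.sqrt(1.0 - NA**2) / NA

h = ring_height(30.0, 0.65)   # LED ring radius 30 mm, objective NA 0.65
print(round(h, 1))            # 35.1, matching the ~35 mm distance used
```

This consistency between the stated ring radius, NA, and standoff distance is what guarantees that each LED illuminates at the edge of the imaging NA.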
To retrieve the illumination angle, we adopted a previously developed algorithm, which retrieves the illumination angle by detecting the circle boundaries and the distance between the center of the circle and the Zero-order term [30]. After acquiring different projections by illuminating the LEDs individually and sequentially (see the Supplementary Video), the 3D RI reconstruction was retrieved by mapping the filtered Fourier transform shown in Figure 2 into the 3D Fourier space. This was followed by taking the 3D inverse Fourier transform of the scattering potential in the Fourier space to calculate the 3D RI distribution. Figure 5 shows the 3D RI reconstruction for the cheek cell shown in Figure 4 (see also the Supplementary Video).
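A simplified stand-in for the angle-retrieval algorithm of Ref. [30] is to locate the off-center peak of the intensity spectrum (the center of one cross-term circle). The sketch below assumes the object spectrum peaks at its own DC, which holds for typical weakly scattering samples; the function name and parameters are illustrative:

```python
import numpy as np

def estimate_kin(I, dx, exclude_radius=3):
    """Estimate the transverse illumination wavevector from the off-center
    peak of the 2D intensity spectrum (a rough proxy for the circle center)."""
    S = np.abs(np.fft.fftshift(np.fft.fft2(I)))
    ny, nx = S.shape
    cy, cx = ny // 2, nx // 2
    yy, xx = np.mgrid[0:ny, 0:nx]
    # Mask the Zero-order term so argmax finds a cross-term peak instead.
    S[(yy - cy)**2 + (xx - cx)**2 <= exclude_radius**2] = 0
    py, px = np.unravel_index(np.argmax(S), S.shape)
    kx = 2 * np.pi * (px - cx) / (nx * dx)
    ky = 2 * np.pi * (py - cy) / (ny * dx)
    return kx, ky
```

On a synthetic fringe pattern with known spatial frequency, the estimate recovers that frequency up to the sign ambiguity between the two conjugate orders.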

Effect of Misalignment on the Reconstruction Quality
In this section, we study the effect of optical misalignment on the quality of the 3D reconstruction. As described in the above sections, it is critical that the illumination and imaging NA are identical (i.e., conical illumination with the center aligned with the optical axis). Any misalignment will result in the "Principal" and the "Twin" images overlapping, which will degrade the contrast of the final 3D RI reconstruction. To study this, the same optical setup was used; however, the LED ring was slightly misaligned from the optical axis. Figure 7 shows images for two LED illuminations. As clearly seen, when the "Principal" and the "Twin" images overlap (i.e., in the Fourier domain), we see very low contrast in the intensity images, since the "Principal" and the "Twin" images tend to cancel each other [28]. However, in the other case, when they do not overlap, the contrast is enhanced. As a result of this effect, the final 3D RI reconstruction will not reflect the true 3D refractive index distribution of the sample, since we cannot retrieve the complex scattered field alone from the intensity images because of the overlapping circles for certain projections. Figure 8 shows the retrieved 3D RI distribution resulting from optical misalignment. From the figure, it is observed that the bacteria highlighted in the blue box take an RI value lower than the surrounding medium (i.e., water), which does not agree with the literature [17,31]; this is also in contradiction to the reconstruction shown in Figure 6, when the LEDs were perfectly aligned and the illumination and imaging NA were matched. This error in the 3D RI reconstruction might be attributed to the twin image and the principal image overlapping, which resulted in an erroneous calculation of the 3D reconstruction. In addition, we see that the refractive index contrast decreased due to the cancelation of the low frequency components caused by the overlap of the principal and the twin image. Figure 7.
Effect of misalignment on the captured intensity images under LED illumination. An illumination angle smaller than the maximum angle allowed by the NA of the objective lens results in overlapping spectra (circles) and low contrast images. This is not the case when the illumination angle is maximized and the spectra do not overlap. Figure 8. 3D RI reconstruction for a misaligned LED ring. As shown in the red box, the contrast is highly suppressed as a result of the cancelation of the low spatial frequencies due to the overlap between the two circles in some of the projections. On the other hand, as shown in the dark blue box, we see an artifact where the contrast is inverted for the bacterial structure, which does not agree with the literature [17,31], where the bacterial structures have a higher refractive index than the cytoplasm.

ODT Using Wavelength Diversity
Finally, 3D RI reconstruction based on wavelength diversity is discussed in this section. Since the LED ring supports "RGB" colors, we captured images at three different wavelengths: red (623 nm), green (515 nm), and blue (468 nm). Theoretically, this corresponds to mapping onto Ewald spheres with different radii k = 2π/λ_illum, as shown in Figure 9. By following the same procedures discussed in the previous sections at each particular wavelength, the 3D refractive index reconstruction was retrieved. Practically, objective lenses suffer from chromatic aberrations, which result in the image not being in best focus at all wavelengths, as shown in Figure 10. This leads to distortions (aberrations) when mapping the projections onto the Ewald sphere, resulting in an inaccurate estimation of the final 3D RI distribution. One could correct for this by acquiring intensity images at different focal planes and calculating the cross-correlation function between those images and a reference image [32], which effectively corrects for these aberrations. Instead, to correct for the chromatic aberrations, a different approach based on Fresnel propagation was taken to refocus the image digitally, since we have access to the scattered field. This was performed in two steps: first, we reconstructed the 3D RI distribution without calibrating for the aberrations at red and blue (given that the green channel is in focus).
Then, by monitoring the reconstructions, the fields were backpropagated by the distance by which the sample was displaced from the best plane of focus (z = 0). Figure 11 shows an example of the reconstructions along YZ for green and blue. Note how the reconstruction for the blue illumination was displaced from the best plane of focus, representing the error due to the chromatic aberrations introduced by the objective lens. Figure 11. Effect of chromatic aberrations on the 3D RI reconstruction at green illumination (left) and blue illumination (right). Note how the 3D reconstruction at blue is shifted from the z = 0 plane due to chromatic aberrations.
The second step was to backpropagate the complex field extracted from the intensity image by the distance ∆z to refocus it using the Fresnel propagation as follows:

E_calib(x, y) = F_2D^{-1}{ F_2D{ E_uncalib(x, y) } e^{j k_z ∆z} },   with   k_z = sqrt(k^2 − k_x^2 − k_y^2),   (10)

where E_uncalib is the chromatically aberrated field and E_calib is the calibrated field after removing the chromatic aberration. Figure 12 shows the effect of refocusing on the displacement along the optical axis on the XY slice at z = 0 (best plane of focus), in which the image came into focus after calibrating for the chromatic aberrations. After calibrating each wavelength channel, the 3D RI distribution was retrieved by combining all the calibrated projections into the 3D Fourier space. However, this was based on the assumption that the sample does not have strong dispersion, and thus, the RI value is almost constant at the different wavelengths. Figure 13 shows the 3D frequency support in the Fourier space and the corresponding XY slices at different depths. While we did not observe enhancement in the optical resolution by going from one wavelength to three wavelengths, imaging at multiple wavelengths can still be advantageous in other respects. For example, instead of capturing 24 projections where each projection corresponds to one LED, we could simultaneously operate three LED pixels, each with a different color, and then decouple them in the postprocessing, since we had an RGB camera. This increases the throughput of the system by a factor of three [31,33]. In addition, the incoherent superposition of the three RGB images increases the SNR of the reconstructed 3D object. This is not visually evident in Figure 13 because the image quality was already very good with a single color, but in cases where we want to carry out high speed recording and we operate at low light levels, the aid of the RGB illumination can prove helpful. The color scanning influences the transverse spatial resolution of the 3D RI distribution, since the diffraction limited resolution is proportional to the wavelength.
This is a relatively minor effect. Finally, we could use the wavelength scanning in a reflection or 90-degree configuration. This would result in a resolution enhancement along the axial direction, helping resolve the missing cone problem, which arises when imaging in the transmission configuration through a limited-NA imaging system, where the spatial frequencies missed by the objective lens form a cone-shaped void in the 3D Fourier space, as shown in Figure 13b with the red arrows. Regarding the number of projections, we believe that a higher number of projections would result in a higher fidelity of the 3D RI distribution, since more of the Ewald sphere (3D Fourier space) is filled. On the other hand, for coherent detection, we observed that a higher number of projections could enhance the "effective" resolution, as the coherent noise averages out with an increasing number of projections, resulting in a better signal-to-noise ratio.
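The chromatic refocusing step of Equation (10) can be sketched with an angular-spectrum propagator. This is a generic sketch under our assumptions (square pixels, evanescent components discarded), not the exact implementation used in our processing:

```python
import numpy as np

def refocus(E, dz, wavelength, dx, n0=1.0):
    """Numerically propagate a complex field by dz (angular-spectrum form of
    the Fresnel step in Equation (10)) to undo a chromatic focal shift.

    E          : 2D complex field at the detector plane
    dz         : propagation distance [same units as wavelength and dx]
    wavelength : illumination wavelength
    dx         : pixel size in the sample plane
    n0         : refractive index of the surrounding medium
    """
    ny, nx = E.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k = 2 * np.pi * n0 / wavelength
    kz2 = k**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    # Transfer function e^{j kz dz}; evanescent components are discarded.
    H = np.exp(1j * kz * dz) * (kz2 > 0)
    return np.fft.ifft2(np.fft.fft2(E) * H)
```

As a sanity check, propagating a uniform plane wave by dz only multiplies it by the global phase e^{j k dz}, as expected from the transfer function at zero transverse frequency.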

Conclusions
In conclusion, we presented a technique for 3D refractive index reconstruction using the Wolf transform based on intensity measurements. The technique relies on mapping the extracted scattered field into the 3D Fourier space and then taking an inverse 3D Fourier transform to retrieve the 3D RI in the spatial domain. The reconstructions showed excellent signal-to-noise ratio due to the use of a partially incoherent illumination source, which minimizes the speckle noise usually seen in coherent detection. To retrieve the 3D RI distribution, the illumination and the imaging NA must be perfectly matched. Finally, we investigated the effect of adding other illumination wavelengths and showed how to correct for the chromatic aberrations introduced by the objective lens.
Author Contributions: Conceptualization, A.B.A. and D.P.; methodology, A.B.A., A.R. and D.P.; writing-original draft preparation, A.B.A. and D.P.; writing-review and editing, A.B.A. and D.P.; supervision, D.P. All authors have read and agreed to the published version of the manuscript.

Funding: Swiss National Science Foundation (SNSF 514481).
Institutional Review Board Statement: Not applicable.