# Deep Deconvolution of Object Information Modulated by a Refractive Lens Using Lucy-Richardson-Rosen Algorithm


## Abstract


## 1. Introduction

## 2. Materials and Methods

Consider a point object with an intensity of $I_o$ located at a distance of $z_s$ from a refractive lens with a complex amplitude of $\mathrm{exp}\left[-j\pi {R}^{2}/\left(\lambda f\right)\right]$, where $f$ is the focal length of the lens given as $\frac{1}{f}=\frac{1}{u}+\frac{1}{{z}_{h}}$, $u$ is the ideal object distance, $z_h$ is the distance between the refractive lens and the sensor, which is the ideal image distance, $\lambda$ is the wavelength and $R$ is the radial coordinate given as $R=\sqrt{{x}^{2}+{y}^{2}}$. The complex amplitude of light reaching the refractive lens is given as ${\psi}_{1}={C}_{1}\sqrt{{I}_{o}}Q\left(\frac{1}{{z}_{s}}\right)L\left(\frac{\overline{{r}_{o}}}{{z}_{s}}\right)$, where $Q\left(1/{z}_{s}\right)=\mathrm{exp}\left[j\pi {R}^{2}/\left(\lambda {z}_{s}\right)\right]$ and $L\left(\overline{o}/{z}_{s}\right)=\mathrm{exp}\left[j2\pi \left({o}_{x}x+{o}_{y}y\right)/\left(\lambda {z}_{s}\right)\right]$ are the quadratic and linear phases, $z_s$ is the actual object distance, the axial aberration is quantified as $z_s-u$, and $C_1$ is a complex constant. The complex amplitude after the optical modulator is given as ${\psi}_{2}={C}_{1}\sqrt{{I}_{o}}Q\left(\frac{1}{{z}_{1}}\right)L\left(\frac{\overline{{r}_{o}}}{{z}_{s}}\right)$, where ${z}_{1}=\frac{{z}_{s}f}{f-{z}_{s}}$. The intensity distribution obtained in the sensor plane located at a distance of $z_h$ is the PSF, given as ${I}_{PSF}={\left|{\psi}_{2}\otimes Q\left(1/{z}_{h}\right)\right|}^{2}$, where $\otimes$ is a 2D convolutional operator. When $z_s=u$, the imaging condition is satisfied, $z_1$ becomes $z_h$ and a point image is obtained on the sensor. The lateral resolution in the object plane is given as $1.22\lambda {z}_{s}/D$, where $D$ is the diameter of the lens. The axial resolution of the system is given as $8\lambda {({z}_{s}/D)}^{2}$ and the magnification of the system is given as $M={z}_{h}/{z}_{s}$. By the linearity condition of incoherent imaging, the intensity distribution obtained for an object with a function $O$ is given as ${I}_{O}=O\otimes {I}_{PSF}$.

The image ${I}_{O}$ is obtained by sampling of $O$ by ${I}_{PSF}$ and, therefore, when the imaging condition is satisfied, the object information is sampled at the lateral resolution of the system. When the imaging condition is not satisfied, ${I}_{PSF}$ is blurred and so is the object information. In the indirect imaging mode, the task is to extract $O$ from ${I}_{O}$ and ${I}_{PSF}$. A direct method to extract $O$ is to cross-correlate ${I}_{O}$ with ${I}_{PSF}$ as ${I}_{R}={I}_{O}\ast {I}_{PSF}$, which is given as ${I}_{R}=\left({I}_{PSF}\otimes O\right)\ast {I}_{PSF}$. Rearranging the terms, we obtain ${I}_{R}=O\otimes \left({I}_{PSF}\ast {I}_{PSF}\right)$. So, the reconstructed information is the object information sampled by the autocorrelation function of ${I}_{PSF}$. The width of the autocorrelation function cannot be smaller than the diffraction-limited spot size under normal conditions. When the imaging condition is satisfied or when a diffuser is used, the autocorrelation function is sharp. When the imaging condition is not satisfied, the autocorrelation function is blurred, making correlation-based reconstruction ineffective. The advanced version of correlation, known as non-linear reconstruction (NLR), is effective in reducing the background noise arising from the positive nature of ${I}_{PSF}$ during correlation, but is affected by the nature of the intensity distribution [24,26]. The non-linear reconstruction is given as ${I}_{R}=\left|{\mathcal{F}}^{-1}\left\{{\left|{\tilde{I}}_{PSF}\right|}^{\alpha}\mathrm{exp}\left[-j\,\mathrm{arg}\left({\tilde{I}}_{PSF}\right)\right]{\left|{\tilde{I}}_{O}\right|}^{\beta}\mathrm{exp}\left[j\,\mathrm{arg}\left({\tilde{I}}_{O}\right)\right]\right\}\right|$, where $\tilde{I}$ denotes the Fourier transform of $I$ and the parameters $\alpha$ and $\beta$ are tuned to minimize the background noise. While this is one of the robust correlation-based reconstruction methods, LRA uses a different approach involving the calculation of the maximum-likelihood solution, but once again from ${I}_{PSF}$ and ${I}_{O}$. The (n+1)th reconstructed image in LRA is given as ${I}_{R}^{n+1}={I}_{R}^{n}\left\{\frac{{I}_{O}}{{I}_{R}^{n}\otimes {I}_{PSF}}\otimes {{I}_{PSF}}^{\prime}\right\}$, where ${{I}_{PSF}}^{\prime}$ refers to the flipped version of ${I}_{PSF}$ and the loop is iterated until the maximum-likelihood reconstruction is obtained. In the above equation, the denominator is a convolution between two positive functions, which results in non-zero values. The initial guess of the LRA is often the recorded image itself, and the final solution is a maximum-likelihood solution. As seen in the above equation, there is a forward convolution ${I}_{R}^{n}\otimes {I}_{PSF}$, and in LRRA the correlation of the ratio between ${I}_{O}$ and this forward convolution with ${I}_{PSF}$ is replaced by the NLR. This yields a better estimate with reduced background noise and rapid convergence. In this study, the performances of LRA, NLR and LRRA are compared. The schematic of the Lucy-Richardson-Rosen algorithm is shown in Figure 2.

## 3. Simulation Studies

For the simulation studies, $z_h$ and $f$ were selected as 0.8 m and 0.4 m, respectively, and $z_s$ was varied from 0.4 m to 1.2 m in steps of 0.1 m. The recorded PSFs for $z_s$ = 0.4 m to 1.2 m in steps of 0.1 m are shown in Figure 3. A test object ‘CIPHR’ was used, and the object intensity distributions were calculated by a convolution between the test object and the PSF. The images of the test object for the different cases of axial aberration are shown in Figure 3, together with the reconstruction results using LRA, NLR and LRRA. It can be seen that the performance of LRRA is significantly better than that of LRA and better than that of NLR. The LRA and NLR reconstructions used consistent conditions for all cases, namely 20 iterations for LRA and $\alpha$ = 0, $\beta$ = 0.6 for NLR. In the case of LRRA, the conditions were changed for every case: the values of ($\alpha$, $\beta$, n) for $z_s$ = 0.4 m to 1.2 m are (0, 0.5, 5), (0, 0.5, 5), (0, 0.5, 5), (0, 0.5, 5), (0, 0.5, 1), (0, 0.5, 8), (0, 0.5, 8), (0, 0.5, 8) and (0, 0.6, 5), respectively. In the case of NLR, the reconstruction improves when the PSF pattern is larger, as expected, due to the improved sharpness of the autocorrelation function for larger patterns. A 3D simulation was carried out by stacking the 2D intensity distributions into cube data. The images of the 3D PSF, the object varied from 0.6 m to 1 m, and the cross-sectional images of the reconstructions using NLR, LRA and LRRA are shown in Figure 4a–e, respectively. Comparing Figure 4c–e, it is seen that NLR and LRRA performed better than LRA, while LRRA exhibited the best performance.
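The simulated PSFs can be generated by numerically propagating the field $\psi_2$ from the lens to the sensor; the sketch below uses angular-spectrum propagation in NumPy, followed by incoherent image formation via the convolution theorem. All numerical values (grid size, pixel pitch, aperture diameter, wavelength) are illustrative assumptions and are not tuned to reproduce Figure 3.

```python
import numpy as np

# Illustrative parameters (assumed, not the paper's exact geometry)
wav = 632.8e-9                 # wavelength (m)
f, z_s, z_h = 0.4, 0.5, 0.8    # focal length, object and sensor distances (m)
N, dx = 512, 10e-6             # grid samples and pixel pitch (m)
D = 2e-3                       # lens aperture diameter (m)

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
R2 = X ** 2 + Y ** 2

# Field just after the lens: point-source quadratic phase times the lens phase,
# clipped by the circular aperture (psi_2 up to constant factors)
psi2 = np.exp(1j * np.pi * R2 / (wav * z_s)) * np.exp(-1j * np.pi * R2 / (wav * f))
psi2 = np.where(R2 <= (D / 2) ** 2, psi2, 0)

# Angular-spectrum propagation over z_h to the sensor plane
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
arg = 1 - (wav * FX) ** 2 - (wav * FY) ** 2
kz = 2 * np.pi / wav * np.sqrt(np.maximum(arg, 0.0))
H = np.where(arg > 0, np.exp(1j * kz * z_h), 0)   # evanescent components dropped
i_psf = np.abs(np.fft.ifft2(np.fft.fft2(psi2) * H)) ** 2

# Incoherent image formation: I_O = O (conv) I_PSF, via the convolution theorem
obj = np.zeros((N, N))
obj[N // 2, N // 2 - 40] = 1.0   # two point emitters as a toy object
obj[N // 2, N // 2 + 40] = 1.0
i_o = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(i_psf))))
```

Circular (FFT-based) convolution is used here, so the energy of the image equals the product of the object and PSF energies; for a faithful simulation, the grid must be large enough that the defocused PSF does not wrap around the edges.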

## 4. Experiments

The images of the PSFs recorded at $z_s$ = 7 cm, 7.1 cm and 7.2 cm, the corresponding direct imaging (DI) results of the object and the reconstruction results using LRA (n = 20), NLR ($\alpha$ = 0.2, $\beta$ = 0.7) and LRRA ($\alpha$ = 0.6, $\beta$ = 0.9) with n = 2, 12 and 12 for the above three cases are shown in Figure 7. The differences between the simulation and experimental results are due to the cumulative effect of the following conditions. The point object used in the simulation was a single-pixel object with a size of 10 µm, while the one used in the experiment was 100 µm. The image sensor used in the experiment was a low-cost web camera in which it is not possible to control the exposure conditions as can be done in scientific cameras. Most web cameras have their own auto-correction algorithms to enhance images. There was also stray light entering the camera. These three factors might have contributed to the increase in background noise. Furthermore, the objects used in the simulation only transmit or block light but, in the experiments, the objects additionally scatter light. Finally, there were experimental errors in the form of differences in the locations of the pinhole and the objects. We believe that the cumulative effect of all the above caused the discrepancy between the simulation and experimental results.

The images of the PSFs recorded at $z_s$ = 7 cm, 7.2 cm and 7.4 cm, the corresponding direct imaging (DI) results of the object and the reconstruction results using LRA (n = 20), NLR ($\alpha$ = 0.2, $\beta$ = 0.7) and LRRA ($\alpha$ = 0.8, $\beta$ = 0.9, n = 10), ($\alpha$ = 0.8, $\beta$ = 1, n = 10) and ($\alpha$ = 0.8, $\beta$ = 0.9, n = 15) for the above three cases are shown in Figure 8. The third test object consists of two circular objects, each with a radius of 360 μm. The images of the PSFs recorded at $z_s$ = 7 cm, 7.2 cm and 7.4 cm, the corresponding direct imaging (DI) results of the object and the reconstruction results using LRA (n = 20), NLR ($\alpha$ = 0.2, $\beta$ = 0.7) and LRRA ($\alpha$ = 0.6, $\beta$ = 0.9, n = 12), ($\alpha$ = 0.8, $\beta$ = 1, n = 15) and ($\alpha$ = 0.8, $\beta$ = 1, n = 15) for the above three cases are shown in Figure 9.
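The ($\alpha$, $\beta$, n) values above were selected per case. One way to automate this selection, sketched below, is a grid search that scores each candidate reconstruction; the entropy score and the single-pass NLR used as the reconstruction stub are our illustrative assumptions, not the authors' tuning procedure.

```python
import itertools
import numpy as np

def entropy(img):
    """Shannon entropy of the normalized image; sharper reconstructions score lower."""
    p = img / (img.sum() + 1e-12)
    return float(-np.sum(p * np.log(p + 1e-12)))

def nlr_reconstruct(i_o, i_psf, alpha, beta):
    """Single-pass NLR, used here as a stand-in for the full LRRA loop."""
    A, B = np.fft.fft2(i_psf), np.fft.fft2(i_o)
    out = np.fft.ifft2(np.abs(A) ** alpha * np.exp(-1j * np.angle(A)) *
                       np.abs(B) ** beta * np.exp(1j * np.angle(B)))
    return np.abs(out)

def tune(i_o, i_psf,
         alphas=(0.0, 0.2, 0.4, 0.6, 0.8),
         betas=(0.5, 0.6, 0.7, 0.8, 0.9, 1.0)):
    """Return the (alpha, beta) pair whose reconstruction has the lowest entropy."""
    return min(itertools.product(alphas, betas),
               key=lambda ab: entropy(nlr_reconstruct(i_o, i_psf, *ab)))
```

The same search extends directly to the iteration count n by adding it as a third axis of the grid.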

The structural similarity index measure (SSIM) [30] was used to quantitatively compare the reconstruction results. It is given as $SSIM\left({I}_{1},{I}_{2}\right)=\frac{\left(2{\mu}_{{I}_{1}}{\mu}_{{I}_{2}}+{D}_{1}\right)\left(2{\sigma}_{{I}_{1}{I}_{2}}+{D}_{2}\right)}{\left({\mu}_{{I}_{1}}^{2}+{\mu}_{{I}_{2}}^{2}+{D}_{1}\right)\left({\sigma}_{{I}_{1}}^{2}+{\sigma}_{{I}_{2}}^{2}+{D}_{2}\right)}$, where $I_1$ and $I_2$ are the two compared images; ${\mu}_{{I}_{1}}$ and ${\mu}_{{I}_{2}}$ are the local mean values of $I_1$ and $I_2$, respectively; ${\sigma}_{{I}_{1}}$ and ${\sigma}_{{I}_{2}}$ are the standard deviations of $I_1$ and $I_2$ with means ${\mu}_{{I}_{1}}$ and ${\mu}_{{I}_{2}}$; ${\sigma}_{{I}_{1}{I}_{2}}$ is the covariance; and $D_1$ and $D_2$ are constants used to keep the values of the components in the denominator non-zero. The maps of the SSIM for the above cases are shown in Figure 10. It should be noted that the presence of stray light in the recorded images could significantly affect the SSIM index, which could explain the slight variations observed in Figure 10. The SSIM values are plotted in Figure 11. It can be seen that LRRA performed better than both the LRA and NLR techniques.
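For reference, a minimal global version of the SSIM metric used above can be written in a few lines of NumPy. Note that the SSIM maps in Figure 10 are computed with local windowed statistics as in Wang et al. [30]; the single-window variant and the constant values below are simplifying assumptions.

```python
import numpy as np

def ssim_global(i1, i2, d1=1e-4, d2=9e-4):
    """Global (single-window) SSIM between two images of equal shape."""
    i1 = i1.astype(float)
    i2 = i2.astype(float)
    mu1, mu2 = i1.mean(), i2.mean()
    var1, var2 = i1.var(), i2.var()            # sigma^2 terms in the denominator
    cov = ((i1 - mu1) * (i2 - mu2)).mean()     # covariance sigma_{I1 I2}
    return ((2 * mu1 * mu2 + d1) * (2 * cov + d2)) / \
           ((mu1 ** 2 + mu2 ** 2 + d1) * (var1 + var2 + d2))
```

A value of 1 indicates identical images; decorrelated images with similar statistics score well below 1.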

## 5. Discussion

## 6. Conclusions

A refractive lens can faithfully image object information only within the focal depth of the system, given as $8\lambda {({z}_{s}/D)}^{2}$, beyond which the object information becomes blurred. There are many computational techniques that can be used to deblur the object information, but they are often limited to a smaller range of axial aberrations [31,32,33]. In this study, a recently developed computational technique called the LRRA has been implemented for deep deconvolution of images formed by a refractive lens and compared against NLR and LRA. The performance of LRRA was significantly better than that of LRA and better than that of NLR in both the simulation and experimental studies. Since the simulation and experimental studies confirm the possibility of a higher range of deconvolution, we believe that this study will benefit 3D imaging using spatially incoherent light; proof-of-concept 3D imaging has been demonstrated here. Recalling the novelty conditions described in the introduction: in our original article [29], the intensity distributions were localized, so the autocorrelation function was sharp and, consequently, NLR performed better than LRA. In this study, the PSF is blurred, so the autocorrelation function has a width twice that of the PSF, and a correlation-based reconstruction system is therefore expected to perform poorly, with a low resolution. This is exactly what is seen here, where NLR did not reconstruct the object information satisfactorily. The optimal case of LRRA shifts between NLR and LRA depending upon the type of intensity distribution and offers a better performance than both methods. In summary, LRRA enables converting a refractive lens-based direct imaging system into a 3D imaging system in which direct and indirect imaging methods co-exist. When the imaging condition is satisfied, the system acts as a direct imaging system. When the imaging condition is not satisfied, LRRA is applied to reconstruct the information at that plane using the pre-recorded PSF. To the best of our knowledge, such an incoherent holography system does not exist. We believe that this study will improve the current state-of-the-art incoherent 3D imaging technology.

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

1. Rosen, J.; Vijayakumar, A.; Kumar, M.; Rai, M.R.; Kelner, R.; Kashter, Y.; Bulbul, A.; Mukherjee, S. Recent advances in self-interference incoherent digital holography. *Adv. Opt. Photon.* **2019**, *11*, 1–66.
2. Lichtman, J.W.; Conchello, J.A. Fluorescence microscopy. *Nat. Methods* **2005**, *2*, 910–919.
3. Kim, M.K. Adaptive optics by incoherent digital holography. *Opt. Lett.* **2012**, *37*, 2694–2696.
4. Liu, J.-P.; Tahara, T.; Hayasaki, Y.; Poon, T.-C. Incoherent Digital Holography: A Review. *Appl. Sci.* **2018**, *8*, 143.
5. Poon, T.-C. Optical Scanning Holography—A Review of Recent Progress. *J. Opt. Soc. Korea* **2009**, *13*, 406–415.
6. Rosen, J.; Alford, S.; Anand, V.; Art, J.; Bouchal, P.; Bouchal, Z.; Erdenebat, M.-U.; Huang, L.; Ishii, A.; Juodkazis, S.; et al. Roadmap on Recent Progress in FINCH Technology. *J. Imaging* **2021**, *7*, 197.
7. Murty, M.V.R.K.; Hagerott, E.C. Rotational shearing interferometry. *Appl. Opt.* **1966**, *5*, 615.
8. Sirat, G.; Psaltis, D. Conoscopic holography. *Opt. Lett.* **1985**, *10*, 4–6.
9. Rosen, J.; Brooker, G. Digital spatially incoherent Fresnel holography. *Opt. Lett.* **2007**, *32*, 912–914.
10. Kim, M.K. Incoherent digital holographic adaptive optics. *Appl. Opt.* **2013**, *52*, A117–A130.
11. Kelner, R.; Rosen, J.; Brooker, G. Enhanced resolution in Fourier incoherent single channel holography (FISCH) with reduced optical path difference. *Opt. Express* **2013**, *21*, 20131–20144.
12. Vijayakumar, A.; Kashter, Y.; Kelner, R.; Rosen, J. Coded aperture correlation holography—a new type of incoherent digital holograms. *Opt. Express* **2016**, *24*, 12430–12441.
13. Ables, J.G. Fourier Transform Photography: A New Method for X-Ray Astronomy. *Publ. Astron. Soc. Aust.* **1968**, *1*, 172–173.
14. Dicke, R.H. Scatter-Hole Cameras for X-Rays and Gamma Rays. *Astrophys. J. Lett.* **1968**, *153*, L101.
15. Cieślak, M.J.; Gamage, K.A.; Glover, R. Coded-aperture imaging systems: Past, present and future development—A review. *Radiat. Meas.* **2016**, *92*, 59–71.
16. Anand, V.; Rosen, J.; Juodkazis, S. Review of engineering techniques in chaotic coded aperture imagers. *Light. Adv. Manuf.* **2022**, *3*, 1–9.
17. Vijayakumar, A.; Rosen, J. Interferenceless coded aperture correlation holography—a new technique for recording incoherent digital holograms without two-wave interference. *Opt. Express* **2017**, *25*, 13883–13896.
18. Singh, A.K.; Pedrini, G.; Takeda, M.; Osten, W. Scatter-plate microscope for lensless microscopy with diffraction limited resolution. *Sci. Rep.* **2017**, *7*, 10687.
19. Antipa, N.; Kuo, G.; Heckel, R.; Mildenhall, B.; Bostan, E.; Ng, R.; Waller, L. DiffuserCam: Lensless single-exposure 3D imaging. *Optica* **2018**, *5*, 1–9.
20. Sahoo, S.K.; Tang, D.; Dang, C. Single-shot multispectral imaging with a monochromatic camera. *Optica* **2017**, *4*, 1209–1213.
21. Vijayakumar, A.; Rosen, J. Spectrum and space resolved 4D imaging by coded aperture correlation holography (COACH) with diffractive objective lens. *Opt. Lett.* **2017**, *42*, 947.
22. Anand, V.; Ng, S.H.; Maksimovic, J.; Linklater, D.; Katkus, T.; Ivanova, E.P.; Juodkazis, S. Single shot multispectral multidimensional imaging using chaotic waves. *Sci. Rep.* **2020**, *10*, 1–13.
23. Anand, V.; Ng, S.H.; Katkus, T.; Juodkazis, S. Spatio-Spectral-Temporal Imaging of Fast Transient Phenomena Using a Random Array of Pinholes. *Adv. Photon. Res.* **2021**, *2*, 2000032.
24. Rai, M.R.; Anand, V.; Rosen, J. Non-linear adaptive three-dimensional imaging with interferenceless coded aperture correlation holography (I-COACH). *Opt. Express* **2018**, *26*, 18143–18154.
25. Horner, J.L.; Gianino, P.D. Phase-only matched filtering. *Appl. Opt.* **1984**, *23*, 812–816.
26. Smith, D.; Gopinath, S.; Arockiaraj, F.G.; Reddy, A.N.K.; Balasubramani, V.; Kumar, R.; Dubey, N.; Ng, S.H.; Katkus, T.; Selva, S.J.; et al. Nonlinear Reconstruction of Images from Patterns Generated by Deterministic or Random Optical Masks—Concepts and Review of Research. *J. Imaging* **2022**, *8*, 174.
27. Richardson, W.H. Bayesian-Based Iterative Method of Image Restoration. *J. Opt. Soc. Am.* **1972**, *62*, 55–59.
28. Lucy, L.B. An iterative technique for the rectification of observed distributions. *Astron. J.* **1974**, *79*, 745.
29. Anand, V.; Han, M.; Maksimovic, J.; Ng, S.H.; Katkus, T.; Klein, A.; Bambery, K.; Tobin, M.J.; Vongsvivut, J.; Juodkazis, S.; et al. Single-shot mid-infrared incoherent holography using Lucy-Richardson-Rosen algorithm. *Opto-Electron. Sci.* **2022**, *1*, 210006.
30. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image Quality Assessment: From Error Visibility to Structural Similarity. *IEEE Trans. Image Process.* **2004**, *13*, 600–612.
31. Beck, A.; Teboulle, M. Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems. *IEEE Trans. Image Process.* **2009**, *18*, 2419–2434.
32. Biemond, J.; Lagendijk, R.; Mersereau, R. Iterative methods for image deblurring. *Proc. IEEE* **1990**, *78*, 856–883.
33. Wang, R.; Tao, D. Recent progress in image deblurring. *arXiv* **2014**, arXiv:1409.6838.

**Figure 2.** Schematic of LRRA. ML—maximum likelihood; OTF—optical transfer function; n—number of iterations; ⊗—2D convolutional operator; ${\mathcal{F}}^{\ast}$—complex conjugate of the Fourier transform.

**Figure 3.**Simulation results of PSF, object intensity and reconstruction results using NLR, LRA and LRRA.

**Figure 4.** (**a**) Image of the 3D PSF ($z_s$ = 0.6 to 1 m); X–Y cross-sectional images obtained from the cube data of (**b**) imaging using a lens and reconstruction using (**c**) NLR, (**d**) LRA and (**e**) LRRA.

**Figure 5.**Images of the simulated PSF, DI of the test object, reconstruction results using LRA, NLR and LRRA.

**Figure 6.**Photograph of the experimental setup: (1) LED source, (2) Iris, (3) LED power source, (4) Lens L1 (f = 50 mm), (5) Test object, (6) Lens L2 (f = 35 mm), (7) ND filter (ND 1.5), (8) Image sensor and (9) XY stage movement controller.

**Figure 7.**Images of the PSF, DI of the test object—1, reconstruction results using LRA, NLR and LRRA. The scale bar has a length of 1 mm.

**Figure 8.**Images of the PSF, DI of the test object—2, reconstruction results using LRA, NLR and LRRA. The scale bar has a length of 1 mm.

**Figure 9.**Images of the PSF, DI of the test object—3, reconstruction results using LRA, NLR and LRRA. The scale bar has a length of 1 mm.

**Figure 10.** SSIM maps for the test objects with respect to the direct imaging and the reconstruction results using LRA, NLR and LRRA. The colormap is grayscale, with black indicating the lowest similarity and white the highest.

**Figure 11.**SSIM values of the test objects with respect to the direct imaging and the reconstruction results.

**Figure 12.** (**a**) Image of the recorded intensity distribution for a 4 mm thick object consisting of two thin objects (test objects 2 and 3). (**b**) Reconstructed image using LRRA at the plane of test object 2.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Praveen, P.A.; Arockiaraj, F.G.; Gopinath, S.; Smith, D.; Kahro, T.; Valdma, S.-M.; Bleahu, A.; Ng, S.H.; Reddy, A.N.K.; Katkus, T.;
et al. Deep Deconvolution of Object Information Modulated by a Refractive Lens Using Lucy-Richardson-Rosen Algorithm. *Photonics* **2022**, *9*, 625.
https://doi.org/10.3390/photonics9090625

**AMA Style**

Praveen PA, Arockiaraj FG, Gopinath S, Smith D, Kahro T, Valdma S-M, Bleahu A, Ng SH, Reddy ANK, Katkus T,
et al. Deep Deconvolution of Object Information Modulated by a Refractive Lens Using Lucy-Richardson-Rosen Algorithm. *Photonics*. 2022; 9(9):625.
https://doi.org/10.3390/photonics9090625

**Chicago/Turabian Style**

Praveen, P. A., Francis Gracy Arockiaraj, Shivasubramanian Gopinath, Daniel Smith, Tauno Kahro, Sandhra-Mirella Valdma, Andrei Bleahu, Soon Hock Ng, Andra Naresh Kumar Reddy, Tomas Katkus,
and et al. 2022. "Deep Deconvolution of Object Information Modulated by a Refractive Lens Using Lucy-Richardson-Rosen Algorithm" *Photonics* 9, no. 9: 625.
https://doi.org/10.3390/photonics9090625