Correlation Plenoptic Imaging With Entangled Photons

Plenoptic imaging is a novel optical technique for three-dimensional imaging in a single shot. It is enabled by the simultaneous measurement of both the location and the propagation direction of light in a given scene. In the standard approach, the maximum spatial and angular resolutions are inversely proportional, and so are the resolution and the maximum achievable depth of focus of the 3D image. We have recently proposed a method to overcome such fundamental limits by combining plenoptic imaging with an intriguing correlation remote-imaging technique: ghost imaging. Here, we theoretically demonstrate that correlation plenoptic imaging can be effectively achieved by exploiting the position-momentum entanglement characterizing spontaneous parametric down-conversion (SPDC) photon pairs. As a proof-of-principle demonstration, we show that correlation plenoptic imaging with entangled photons may enable the refocusing of an out-of-focus image with the same depth of focus as a standard plenoptic device, but without sacrificing diffraction-limited image resolution.


Introduction
Plenoptic imaging, also known as light-field or integral imaging, is a novel optical imaging modality [1]. Its key principle is to record the three-dimensional light field of a given scene by measuring both the location and the propagation direction of the incoming light. In particular, several images of the scene, one for each propagation direction of light within the scene, are acquired in a single shot. On one hand, such images provide the viewpoints required for the three-dimensional reconstruction of the scene; in fact, plenoptic imaging is the simplest method for 3D imaging available with present technological means [2][3][4]. On the other hand, the available angular information also simplifies low-light shooting: The acquired images can be combined, in post-processing, to give an overall image characterized by the same depth of field as the N original images, but a signal-to-noise ratio N times larger [5].
Figure 1. Setup for CPI with entangled photons. Signal and idler beams emitted from the SPDC source impinge on a beam-splitter (BS); both beams are split into a reflected path a and a transmitted path b. The reflected beam propagates toward the lens L_a, of focal length f, and is refracted toward the high-resolution sensor array S_a. The transmitted beam propagates through the object, which plays the role of the desired scene, and is collected by the lens L_b, of focal length F, before being detected by the high-resolution sensor array S_b. The two sensors are connected to a coincidence counting circuit. On one hand, the distances z_b, z_b', z_b'' are chosen in such a way that the source and the sensor S_b lie in conjugate planes of the lens L_b. On the other hand, the distances z_a and z_a' are such that, when the two-photon thin-lens equation 1/(z_b + z_a) + 1/z_a' = 1/f is satisfied, a ghost image of the object is retrieved on sensor S_a, triggered by sensor S_b.

Background
The coincidence detection of entangled photons from SPDC is described by the second-order Glauber correlation function [26]:

    G^(2)(r_a, t_a; r_b, t_b) = < E_a^(-)(r_a, t_a) E_b^(-)(r_b, t_b) E_b^(+)(r_b, t_b) E_a^(+)(r_a, t_a) >,    (1)

where E_j^(+) is the positive-energy part of the electric field at sensor j (with j = a, b), placed in r_j = (rho_j, z_j), and t_j is the time of the detection:

    E_j^(+)(r_j, t_j) = integral dω d²κ e^{-iωt_j} g_j(κ, ω; rho_j, z_j) a_k,    (2)

where ω is the frequency and k = (κ, ω/c) the wave vector of the detected radiation, and g_j is the Green's function propagating the field mode k from the source to the sensor.
The negative-energy part E_j^(-)(r_j, t_j) of the electric field is the Hermitian conjugate of the field E_j^(+) of Equation (2). A scalar approximation for the electric field has been assumed, which physically corresponds to considering a fixed polarization of light. The positive- and negative-energy parts of the electric field involve the photon annihilation and creation operators (a_k and a_k^†), respectively, associated with the wave vector k. The expectation value in Equation (1) is taken over the two-photon signal-idler state produced by SPDC [27][28][29]:

    |Ψ> = N integral dν d²κ_s d²κ_i s(LDν) h_tr(κ_s + κ_i) a^†_(κ_s, Ω_s+ν) a^†_(κ_i, Ω_i-ν) |0>,    (3)

where N is a normalization constant, ν is the detuning with respect to the central frequency of signal and idler, Ω_s = Ω_i = ω_p/2, which is linked by phase matching to the central frequency ω_p of the pump laser, L is the length of the SPDC crystal, D is the difference between the inverse group velocities of signal and idler, s(LDν) is the spectrum of the SPDC biphoton [30,31], and h_tr is the Fourier transform of the pump transverse profile F:

    h_tr(κ) = integral d²rho_s F(rho_s) e^{-iκ·rho_s}.    (4)

We have assumed, for simplicity, degenerate SPDC radiation, but the result can be easily generalized to the non-degenerate situation [32,33]. Without loss of generality, we shall further assume the source to be monochromatic, so that the time dependence of the correlation function is not relevant. By employing the canonical commutation relations [a_k, a_k'] = 0 and [a_k, a^†_k'] = δ(k - k'), with δ the Dirac delta distribution, and the inversion symmetry of the Fourier transform of the transverse pump profile, h_tr(κ) = h_tr(-κ), the spatial part of the two-photon correlation function reads, up to irrelevant constants:

    G^(2)(rho_a, rho_b) = | integral d²rho_s F(rho_s) g_a(rho_s; rho_a) g_b(rho_s; rho_b) |².    (5)

This result indicates the strong coupling between the two remote sensors, as enabled by the momentum-momentum entanglement characterizing SPDC biphotons.
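The momentum-momentum entanglement can be illustrated numerically: since the pair amplitude is weighted by the Fourier transform of the pump evaluated at the sum of the transverse momenta, the sum κ_s + κ_i is narrow (set by the pump width) while each momentum alone is broad (set by phase matching). A minimal sketch, with purely illustrative Gaussian widths not taken from the paper:

```python
import numpy as np

# Toy model of SPDC transverse-momentum entanglement: the pair amplitude
# is weighted by h_tr(k_s + k_i), so the SUM of the transverse momenta is
# narrow while each momentum alone is broad. Widths are arbitrary units.
rng = np.random.default_rng(0)
sig_sum, sig_diff = 0.05, 5.0
ksum = rng.normal(0.0, sig_sum, 100_000)    # k_s + k_i: narrow (pump)
kdiff = rng.normal(0.0, sig_diff, 100_000)  # k_s - k_i: broad (phase matching)
k_s, k_i = (ksum + kdiff) / 2, (ksum - kdiff) / 2

corr = np.corrcoef(k_s, k_i)[0, 1]
print(corr)    # close to -1: near-perfect anticorrelation
```

In this regime the two momenta are almost perfectly anticorrelated, which is the resource exploited by the coincidence measurement between the two sensors.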
Let us now evaluate the propagators in the two arms of the setup depicted in Figure 1; we shall assume, for simplicity, the lenses to be diffraction-limited. In arm a, light propagates in free space for a distance z_a from the source to the lens L_a, and is then detected by the sensor S_a, placed at a distance z_a' from the lens. In the paraxial approximation, propagation of a field of frequency Ω in free space from (rho_1, z_1) to (rho_2, z_2) is described by the function [34]:

    G(rho_2 - rho_1, z_2 - z_1) ∝ e^{i(Ω/c)(z_2 - z_1)} G(rho_2 - rho_1)[Ω/(c(z_2 - z_1))],    (6)

with G(x)[y] = e^{iy|x|²/2}. Knowing the initial field E(rho_1), one can determine the final field E(rho_2) = integral d²rho_1 E(rho_1) G(rho_2 - rho_1, z_2 - z_1). Propagation through a lens of focal length f is described by G(rho_l)[-Ω/(cf)], with rho_l the transverse coordinate on the lens plane. Hence, the propagator associated with arm a of the setup reads:

    g_a(rho_s; rho_a) = C_a integral d²rho_l G(rho_l - rho_s)[Ω/(c z_a)] G(rho_l)[-Ω/(cf)] G(rho_a - rho_l)[Ω/(c z_a')],    (7)

where rho_s and rho_l are transverse coordinates on the source and the lens L_a planes, respectively, and C_a, C_a' contain irrelevant constants. In arm b, light propagates for a distance z_b from the source to the object, which represents the desired scene to image, then for a distance z_b' from the object to the lens L_b, and for a further distance z_b'' before being detected by the sensor S_b. By indicating with A the aperture function of the object, and assuming the focusing condition 1/(z_b + z_b') + 1/z_b'' = 1/F to be satisfied, the propagator associated with arm b of the setup reads:

    g_b(rho_s; rho_b) = C_b integral d²rho_o d²rho_l A(rho_o) G(rho_o - rho_s)[Ω/(c z_b)] G(rho_l - rho_o)[Ω/(c z_b')] G(rho_l)[-Ω/(c F)] G(rho_b - rho_l)[Ω/(c z_b'')],    (9)

where rho_o and rho_l are transverse coordinates on the object and the lens L_b planes, respectively, M = z_b''/(z_b + z_b') is the magnification of the image of the source on the sensor array S_b, and C_b, C_b' contain irrelevant constants. By inserting in Equation (5) the Green's functions given by Equations (7) and (9), and the laser pump profile on the SPDC crystal, as defined in Equation (4), one finds that the second-order correlation function associated with signal-idler pairs from SPDC is given by the plenoptic correlation function of Equation (10), where K contains irrelevant constants.
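The paraxial kernel G(x)[y] = e^{iy|x|²/2} can be checked numerically. The sketch below, with purely illustrative parameters, propagates a one-dimensional Gaussian field by direct quadrature of the Fresnel kernel and compares the resulting beam width with the analytic Gaussian-beam prediction:

```python
import numpy as np

# 1D Fresnel propagation by direct quadrature of the paraxial kernel
# G(x)[y] = exp(i*y*|x|^2/2) used in the text. Parameters are illustrative.
lam = 1e-6                       # wavelength: 1 um, as in the simulation section
k = 2 * np.pi / lam              # wave number Omega/c
w0 = 50e-6                       # 1/e field radius of the input Gaussian
dz = 10e-3                       # propagation distance z2 - z1

x = np.linspace(-0.5e-3, 0.5e-3, 1001)     # transverse grid (m)
dx = x[1] - x[0]
E1 = np.exp(-x**2 / w0**2)                 # field on the initial plane

# E2(x2) = sqrt(k/(2*pi*i*dz)) * sum_x1 E1(x1) exp(i*k*(x2-x1)^2/(2*dz)) dx
kernel = np.exp(1j * k * (x[:, None] - x[None, :])**2 / (2 * dz))
E2 = np.sqrt(k / (2j * np.pi * dz)) * (kernel @ E1) * dx

# Width of the propagated beam vs the analytic Gaussian-beam prediction
I2 = np.abs(E2)**2
w_num = 2 * np.sqrt(np.sum(x**2 * I2) / np.sum(I2))
zR = np.pi * w0**2 / lam                   # Rayleigh range
w_th = w0 * np.sqrt(1 + (dz / zR)**2)
print(w_num, w_th)                         # the two widths agree at the percent level
```

The quadrature reproduces the well-known diffraction-induced spreading, confirming that the kernel used in the propagators above correctly describes paraxial free-space propagation.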

Plenoptic Properties of the Correlation Function and Refocusing Capability
As shown in Equation (10), the proposed CPI protocol is theoretically described by a second-order correlation function encoding both spatial and angular information; hence, it is characterized by the key refocusing capability typical of plenoptic imaging.
To develop an intuition about the result of Equation (10), we consider the simple case in which the distance between the object and the source, z_b = z_bF, satisfies the two-photon thin-lens equation [25,35]:

    1/(z_bF + z_a) + 1/z_a' = 1/f.    (11)

In this case, by integrating the result of Equation (10) over the whole sensor array S_b, one gets the standard (incoherent) ghost image of the object, magnified by a factor m = z_a'/(z_a + z_bF), as reported in Equation (12) [25,35], where h_tr is the Fourier transform of the laser pump profile, as defined in Equation (4). The above result is valid under the hypothesis that h_tr is similar to, or narrower than, the Fourier transform of the aperture function of the imaging lens L_a. In fact, such an incoherent ghost image is formally equivalent to the incoherent image one would obtain in a standard imaging system, with h_tr playing the role that the Fourier transform of the imaging lens aperture function plays as the point-spread function of standard imaging.
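As a quick numerical check, the two-photon thin-lens condition can be inverted for the focused object plane using the distances quoted later in the simulation section; a minimal sketch:

```python
# Two-photon thin-lens check, with the distances used later in the
# simulation section (all lengths in mm):
# 1/(z_bF + z_a) + 1/z_a' = 1/f, magnification m = z_a'/(z_bF + z_a).
f = 12.0       # focal length of lens L_a
z_a = 10.0     # source -> lens L_a
z_ap = 30.0    # lens L_a -> sensor S_a (z_a' in the text)

z_bF = 1.0 / (1.0 / f - 1.0 / z_ap) - z_a   # focused object-source distance
m = z_ap / (z_bF + z_a)
print(z_bF, m)    # -> 10.0 and 1.5, the values quoted in the simulation section
```

The recovered focused plane (10 mm from the source) and magnification (1.5) match the numbers used in the simulation below.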
However, the second-order correlation function of Equation (10) can do much better than standard ghost imaging: The deep physical difference lies in the coherent nature of the ghost image it describes, which is obtained from the general expression of Equation (10) when the focusing condition of Equation (11) holds.
The coherence of such a ghost image is the immediate consequence of measuring coincidences between the spatial sensor S_a and any single pixel of the angular sensor S_b. This can be better understood in terms of the Klyshko picture [35] reported in Figure 2: The light illuminating the object and contributing to the coincidence detection between any pair of pixels rho_a and rho_b has a well-defined propagation direction (i.e., it is spatially coherent). As made clear by Figure 2, the Klyshko picture also enables the interpretation of the proposed setup for CPI with entangled photons as a sort of correlation pinhole camera. Such a perspective helps develop an intuition about the analogy between the proposed scheme and standard plenoptic imaging, as well as an understanding of the role played by the sensor S_b in retrieving the angular information about the two-photon light field. In fact, due to the quasi one-to-one correspondence between points on the sensor S_b and points on the source, one can trace, in post-processing, the geometrical ray connecting each point of the source with each point of the object. This leads to the peculiar refocusing and 3D imaging capabilities of plenoptic imaging. Now, to explicitly demonstrate this last point and better highlight the plenoptic properties of the second-order correlation function of Equation (10), we shall consider the more general out-of-focus situation (z_b ≠ z_bF) and rewrite the correlation function as an integral, over the object and source planes, of the product of the pump profile F and the object aperture function A with the phase factor e^{i(Ω/c)ϕ(rho_o, rho_s; rho_a, rho_b)}; the phase ϕ is defined in Equation (14), and the resulting expression is reported in Equation (15). The stationary points of the phase of Equation (14) enable us to determine the geometrical correspondence between points on the object and on the source with points on the sensors S_a and S_b, respectively.
In particular, the stationarity of ϕ with respect to rho_s determines the object point that gives the predominant contribution to the integral of Equation (15), as reported in Equation (16), where the identity ζ(z_a, z_a') = (z_bF + z_a) z_a'/z_bF has been used. When the focusing condition of Equation (11) is satisfied, this object point becomes independent of the specific sensor pixel rho_b. Hence, the focused ghost image is not sensitive to the change of perspective enabled by the high resolution of the angular sensor S_b. On the other hand, the stationarity of ϕ with respect to rho_o yields the focusing of the source on the sensor S_b (Equation (17)). Thus, in the geometrical-optics limit, the second-order correlation function of Equation (15) reduces to the product of the tilted and rescaled geometrical image of the object and the source profile (Equation (18)). Interestingly, by properly rescaling the variable rho_a, the object can be completely decoupled from the source; in fact, the rescaled second-order correlation function of Equation (19) gives the perfect geometrical image of the desired scene. Such rescaling is formally identical to the one employed both in standard plenoptic imaging [5] and in correlation plenoptic imaging with chaotic light [18]. Similar to standard plenoptic imaging, the signal-to-noise ratio of the refocused image can be improved by integrating the result of Equation (19) over the whole sensor array S_b, thus employing light coming from the whole light source; the outcome, Equation (20), represents the refocused incoherent ghost image of an object placed at a generic distance z_b from the source, and is thus the central result of the present paper.
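The mechanism behind Equation (20) — realign each coherent sub-image, then integrate over rho_b — can be illustrated with a deliberately oversimplified one-dimensional toy model, in which each viewpoint u (playing the role of a pixel rho_b) sees a purely shifted copy of the object, and the tilts and magnifications of the actual correlation function are ignored; all parameters are illustrative:

```python
import numpy as np

# Toy 1D model of refocusing by rescaling: each viewpoint u sees a purely
# shifted copy of the object (magnifications ignored; illustrative only).
x = np.linspace(-1.0, 1.0, 801)
obj = ((np.abs(x) > 0.25) & (np.abs(x) < 0.4)).astype(float)   # double slit

alpha = 0.5                         # out-of-focus parameter
us = np.linspace(-0.3, 0.3, 21)     # viewpoints on the "angular" sensor

def shift(u):
    return u * (1 - 1 / alpha)      # geometric shift seen by viewpoint u

# Out-of-focus coherent sub-images, one per viewpoint
subs = [np.interp(x - shift(u), x, obj, left=0, right=0) for u in us]

# Plain sum over viewpoints (bucket detection): blurred, out-of-focus image
blurred = sum(subs) / len(us)

# Realigning each sub-image before the sum refocuses the image
refocused = sum(np.interp(x + shift(u), x, s, left=0, right=0)
                for u, s in zip(us, subs)) / len(us)
print(np.abs(blurred - obj).max(), np.abs(refocused - obj).max())
```

Summing the raw sub-images reproduces the blurred bucket-detector image, while shifting each sub-image back before summing recovers the sharp double-slit profile, mirroring the refocusing of Equation (20).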
The possibility of reconstructing the light field and refocusing an out-of-focus image, as reported in Equation (20), relies on the accuracy with which both object and source points are in one-to-one correspondence with points on the sensors S_a and S_b, respectively. We have already demonstrated that the Fourier transform of the transverse pump profile determines the object point-spread function (see Equation (12)), with a spot size Δrho_a ~ m c z_bF/(Ω D_s), where D_s is the diameter of the pump profile.
On the other hand, it is easy to check that the source is imaged with a point-spread function given by the Fourier transform of the object aperture function. From Equation (10), one can infer that a point on the source corresponds to a spot of width Δrho_b ~ M c z_b/(Ω d) on the sensor S_b, with d the smallest length scale of the aperture function of the object. Thus, as long as the pixel sizes lie above these resolution limits, the spatial and angular resolutions are decoupled. The structure of a standard plenoptic device, instead, entails an inverse proportionality between the angular resolution and the spatial resolution of the focused image, even in the geometrical-optics regime [1,5]. Thus, our protocol of plenoptic imaging with entangled photons enables us to beat this intrinsic limitation and achieve a larger depth of field (depending on the angular resolution), while leaving unchanged both the resolution of the focused image and the total number of pixels.
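For an order-of-magnitude feel for these two limits, one can evaluate them with the parameters of the simulation in the next section; the pump diameter D_s is not specified explicitly, so we assume, purely for illustration, D_s ≈ 2σ:

```python
import numpy as np

# Order-of-magnitude resolution estimates (illustrative assumptions:
# pump diameter D_s ~ 2*sigma; distances from the simulation section).
lam = 1e-6                      # wavelength (m)
Omega_over_c = 2 * np.pi / lam  # Omega/c = 2*pi/lambda
m, M = 1.5, 0.8                 # ghost-image and source magnifications
z_bF, z_b = 10e-3, 3e-3         # focused and actual object distances (m)
D_s = 2 * 0.6e-3                # assumed pump diameter (m)
d = 0.2e-3                      # smallest object feature (m)

drho_a = m * z_bF / (Omega_over_c * D_s)   # object point-spread width
drho_b = M * z_b / (Omega_over_c * d)      # source point-spread width
print(drho_a, drho_b)   # both come out on the micrometre scale
```

Both spot sizes come out on the micrometre scale, i.e., of the same order as the 6 µm pixel size adopted in the simulation below.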

Simulation of CPI With Entangled Photons From SPDC
In Figure 3, we show the enhanced depth of field enabled by the refocusing capability of the SPDC correlation plenoptic protocol. A mask with a transparent letter E, whose thickness is d = 0.2 mm, is placed in a setup with z_a = 10 mm, z_a' = 30 mm, and f = 12 mm, which would give a focused ghost image magnified by m = 1.5. The object mask is illuminated by SPDC photons with λ = 1 µm, generated by a pump whose Gaussian transverse profile has width σ = 0.6 mm. With respect to the source, the object is placed at a distance z_b = 3 mm, which is less than one third of the focused-plane distance z_bF = 10 mm; the ghost image of such an object would be focused at a distance z_aF ≈ 5 z_a' from the lens. The widths of the sensors S_a and S_b are fixed to W_a = 6md = 1.8 mm and W_b = 4Mσ = 1.9 mm, with M = 0.8 the magnification of the source image reproduced on S_b. Their pixel size, δ = 6 µm, is close to both resolution limits, as defined by the source and by the object aperture. The results reported in Figure 3 clearly indicate that the refocusing procedure recovers the information on the aperture function of the object, which is completely lost in the out-of-focus ghost image.
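The geometry quoted above can be cross-checked with a few lines; a minimal sketch (lengths in millimetres, values from this section):

```python
# Consistency check of the simulation parameters (lengths in mm).
d, sigma = 0.2, 0.6            # object feature size and pump width
z_a, z_ap, f = 10.0, 30.0, 12.0
z_bF = 10.0                    # focused object plane
m, M = 1.5, 0.8                # ghost-image and source magnifications
delta = 6e-3                   # pixel size: 6 um

# Two-photon thin-lens equation and ghost-image magnification
assert abs(1 / (z_bF + z_a) + 1 / z_ap - 1 / f) < 1e-12
assert abs(z_ap / (z_bF + z_a) - m) < 1e-12

W_a = 6 * m * d                # width of sensor S_a
W_b = 4 * M * sigma            # width of sensor S_b
N_a, N_b = round(W_a / delta), round(W_b / delta)
print(W_a, W_b, N_a, N_b)      # -> ~1.8 mm, ~1.92 mm, 300 and 320 pixels
```

The pixel counts per side, 300 and 320, add up to the 620 total pixels used in the comparison with a standard plenoptic camera.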
We shall now compare the above results with those achievable by a standard plenoptic camera having the same pixel size and the same total number of pixels per side (N_tot = N_a + N_b = 620). To this end, we introduce the parameter α = S_i'/S_i, given by the ratio between the distance S_i' from the focusing element to the image plane and the actual distance S_i between the focusing element (imaging lens) and the detector. Generally, perfect refocusing is possible if the condition of Equation (21) is satisfied [5], where Δx is the minimum distance that can be resolved on the image plane, and Δu is the minimum distance that can be resolved on the imaging lens. In a standard plenoptic camera whose sensor has pixels of size δ, the image resolution is Δx^(p) = 2δN_u^(p), with N_u^(p) the number of pixels per side devoted to the angular measurement. In CPI, instead, Δx^(c) = 2δ, since pixels of width δ can also be used to retrieve the image. On the other hand, the resolution on the imaging lens is given by Δu^(c) = 2D_s'/N_b, where D_s' is the effective diameter of the lens L_a, which can be obtained by properly rescaling the size D_s of the pump profile. Evaluating the right-hand side of the perfect refocusing condition of Equation (21) in this case shows that the maximum achievable depth of focus, in the setup employed for the simulation reported in Figure 3, is |1 − 1/α| < 0.26. A standard plenoptic camera with the same pixel size and total number of pixels per side would achieve this same depth of focus only if N_u^(p) = 18 pixels were employed for the angular resolution; this condition imposes a loss of spatial resolution by a factor of 18 (Δx^(p) ≈ 0.2 mm) with respect to the CPI protocol considered above.

Discussion
At the heart of the refocusing capability of the second-order correlation function of Equation (10) is the larger depth of focus of the coherent ghost image (Equation (13)) with respect to the incoherent ghost image (Equation (12)), as reported in Figure 4. In fact, the maximum achievable depth of focus of the proposed CPI scheme is the result of the increased depth of focus of coherent ghost imaging with respect to incoherent ghost imaging. Figure 4 reports the incoherent and coherent ghost images of Equations (12) and (13), respectively, in the same setup described in Section 3; both functions have been normalized to their value at rho_a = 0 for each value of α.
This can be better understood by considering the origin of both the out-of-focus and the refocused image: The first is obtained by integrating the out-of-focus coherent image (Equations (10), (15), or (18)) over the whole sensor S_b, exactly as a bucket detector would do in standard ghost imaging; the second is obtained by integrating, over the same sensor S_b, the rescaled version of such an out-of-focus coherent image, as indicated in Equation (20). Now, as shown in Figure 5, the out-of-focus coherent image is a projection of the focused image (hence, it is either enlarged or reduced with respect to it), as seen from the viewpoint defined by the specific value of rho_b. The integration of all such coherent images over the whole sensor S_b implies the overlap of all the projections taken from the different viewpoints rho_b; the resulting incoherent image is thus characterized by a loss of resolution, namely, it appears out of focus. The rescaled coherent image restores the correct size of the focused image and, most importantly, tilts the image in such a way as to cancel the specific viewpoint from which it was taken. As a consequence, the integration of all such rescaled coherent images over the whole sensor S_b no longer has a detrimental effect on the resolution of the resulting incoherent image; the post-processed image thus appears refocused.

Conclusions and Outlook
In view of practical applications, it is worth mentioning that all the above results apply to both reflective and transmissive objects. In addition, in contrast with chaotic light, entangled photons from SPDC enable the use of different wavelengths in the two arms of the setup: Light illuminating the object is not required to have the same spectrum as the light remotely detected by S_a to retrieve the desired image [32,33]. This is quite interesting in view of applications requiring specific illumination wavelengths for the object. In this scenario, one may choose two different sensors so as to maximize the detection efficiency in each arm.
As plenoptic imaging is being broadly adopted in diverse fields, such as digital photography [6][7][8], microscopy [3,4], and 3D imaging, sensing, and rendering [2], our proposed scheme has direct applications in several biomedical and engineering fields. Interestingly, the coherent nature of the correlation plenoptic imaging technique may lead to innovative coherent microscopy modalities.