Resolution Limit of Correlation Plenoptic Imaging between Arbitrary Planes

Abstract: Correlation plenoptic imaging (CPI) is an optical imaging technique based on intensity correlation measurement, which enables detecting, within fundamental physical limits, both the spatial distribution and the direction of light in a scene. This provides the possibility to perform tasks such as three-dimensional reconstruction and refocusing of different planes. Compared with standard plenoptic imaging devices, based on direct intensity measurement, CPI overcomes the problem of the strong trade-off between spatial and directional resolution. Here, we study the resolution limit in a recent development of the technique, called correlation plenoptic imaging between arbitrary planes (CPI-AP). The analysis, based on Gaussian test objects, highlights the main properties of the technique, as compared with standard imaging, and provides an analytical guideline to identify the limits at which an object can be considered resolved.


Introduction
Plenoptic imaging (PI) identifies a category of devices and techniques characterized by the possibility of detecting the light field, namely the combined information on the spatial distribution and propagation direction of light, in a single exposure of the scene of interest [1]. The range of applications of PI is currently expanding, including, among others, microscopy [2][3][4][5], stereoscopy [1,6,7], wavefront sensing [8][9][10][11], particle image velocimetry [12], particle tracking and sizing [13], and photography, where it is employed to add refocusing capabilities to digital cameras [14]. Cutting-edge applications include 3D functional imaging of neuronal activity [5], surgery [15], endoscopy [16], and flow visualization [17]. In the state of the art, PI represents an extremely promising method to perform 3D imaging [18], because it enables the parallel acquisition of 2D images from different perspectives with only one sensor. State-of-the-art plenoptic devices are characterized by the presence of a microlens array in the image plane of the main lens. This additional component focuses repeated images of the main lens (one for each microlens) onto the sensor [1,14]. These repeated images represent different perspectives of the illuminated object, which can be used to reconstruct light paths from the lens to the sensor, providing the possibility to refocus different planes in post-processing, change the viewpoint, and reconstruct images with a larger depth of field. However, the architecture of traditional plenoptic systems entails a trade-off between spatial and directional resolution, the origin of which lies in the fundamental trade-off between resolution and depth of field. Given a sensor with N_tot pixels per line and an array of N_x microlenses per line, each with N_u pixels per line behind it, then N_x N_u = N_tot.
This imposes an increase by a factor N_u in the linear size of the spatial resolution cell, which makes the diffraction limit, determined by the numerical aperture of the main lens, unreachable. Essentially, the resolution and depth of field that can be obtained by a standard plenoptic device are the same as one would obtain with an N_u-times smaller numerical aperture of the main lens, entailing the practical (but not fundamental) advantage of a greater luminosity and signal-to-noise ratio of the final image, as well as the possibility of the parallel acquisition of multiperspective images.
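The trade-off above reduces to simple arithmetic. The following sketch uses hypothetical sensor and microlens numbers, chosen only for illustration, to make the resolution cost of the microlens array explicit:

```python
# Hypothetical numbers, chosen only for illustration: a sensor with
# N_tot pixels per line, partitioned by the microlens array into N_x
# spatial cells of N_u directional pixels each, so that N_x * N_u = N_tot.
N_tot = 4096              # sensor pixels per line
N_u = 8                   # directional pixels behind each microlens
N_x = N_tot // N_u        # microlenses (spatial resolution cells) per line

# The linear size of a spatial resolution cell grows by a factor N_u
# with respect to the bare sensor.
print(f"N_x = {N_x} spatial cells per line (vs {N_tot} without the array)")
```

With these numbers, an eight-fold gain in directional sampling costs an eight-fold loss in spatial sampling, exactly the trade-off the text describes.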
In order to overcome this practical limitation, we recently developed and experimentally demonstrated a new technique, named correlation plenoptic imaging (CPI), capable of performing plenoptic imaging without losing spatial resolution [19,20]. In particular, the resolution of focused images can reach the physical (diffraction) limit. The system is based on encoding spatial and directional measurement on two separate sensors, by measuring second-order spatio-temporal correlations of light: the spatial information on a given plane in the scene is retrieved on one sensor [21][22][23][24][25], and the angular information is simultaneously obtained on the second sensor [26] thanks to the correlated nature of light beams [19,27,28]. As a result of such separation, the spatial vs. directional resolution trade-off is significantly mitigated. This technique paves the way toward the development of novel quantum plenoptic cameras, which will enable one to perform the same tasks as standard plenoptic systems, such as refocusing and scanning-free 3D imaging, along with a relevant performance increase in terms of resolution (which can be diffraction limited), depth of field, and noise [29]. In the first realizations of CPI, two particular reference planes, one inside the scene of interest and one practically coinciding with the focusing element, were imaged in order to reconstruct directional information. Such a task becomes non-trivial in the case of composite lenses, such as those found in a commercial camera or a microscope, requiring the introduction of correction factors in the refocusing algorithms. We thus developed an alternative protocol, called correlation plenoptic imaging between arbitrary planes (CPI-AP), in which this difficulty is overcome by retrieving the images of two generic planes, typically placed inside the three-dimensional scene [30].
The proposed protocol greatly simplifies the experimental implementation and improves refocusing precision; furthermore, it relaxes the trade-off between resolution and depth of field, providing an unprecedented combination of the two.
As in all CPI protocols, for technical and physical reasons that will be explained throughout this work, defining resolution limits in CPI-AP is not trivial. In fact, the usual definition, based on a point-spread function, becomes immaterial in correlation plenoptic imaging. In this paper, we consider the paradigmatic case of objects characterized by a Gaussian profile, to identify reasonable definitions of a resolution limit. The analytical form of the results provides a direct comparison, both qualitative and quantitative, with the case of standard imaging, based on direct intensity measurement.

Methods
In a second-order imaging protocol, light from a source is split (e.g., by a beam splitter) into two optical paths a and b, characterized by their optical propagators, that transfer the field from a point on the source, identified by the coordinates ρ_o, to points of coordinates ρ_a and ρ_b on the detector planes. The correlation between fluctuations of the intensities I_a and I_b measured at the end of the corresponding paths generally encodes more information than the average intensities. The imaging properties of the correlation imaging device can be retrieved from the correlation function

Γ(ρ_a, ρ_b) = ⟨ΔI_a(ρ_a) ΔI_b(ρ_b)⟩,

where ΔI_j = I_j − ⟨I_j⟩ denotes the intensity fluctuation and ⟨·⟩ the statistical average. Intensity fluctuation correlations contain relevant information for imaging if light is chaotic or if the two beams are composed of entangled photons, produced, e.g., by spontaneous parametric down-conversion [21,28].
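As a toy numerical illustration of this measurement concept (not a full optical simulation), one can estimate Γ from a sequence of frames: simulated chaotic intensity patterns are copied to the two detectors, and the correlation of their fluctuations is averaged over frames. The source model and pixel counts below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pix = 5000, 16

# Toy chaotic source: each frame is an independent speckle-like pattern
# (exponentially distributed intensities); an ideal beam splitter sends
# identical copies of each frame to the two detectors D_a and D_b.
I_a = rng.exponential(scale=1.0, size=(n_frames, n_pix))
I_b = I_a.copy()

# Correlation of intensity fluctuations:
# Gamma[i, j] ~ <Delta I_a(rho_i) Delta I_b(rho_j)> over the frames.
dI_a = I_a - I_a.mean(axis=0)
dI_b = I_b - I_b.mean(axis=0)
Gamma = dI_a.T @ dI_b / n_frames

# In this idealized model the fluctuations are perfectly correlated
# pixel-by-pixel, so Gamma is concentrated on its diagonal.
diag = Gamma.diagonal().mean()
off = np.abs(Gamma - np.diag(Gamma.diagonal())).mean()
print(f"mean diagonal = {diag:.2f}, mean |off-diagonal| = {off:.3f}")
```

In a real CPI setup, the structure of Γ away from the diagonal is precisely what carries the directional information; here the point is only that Γ is built frame-by-frame from intensity fluctuations.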
Let us now consider a typical setup of CPI-AP [30], shown in Figure 1. Light comes from an object, which emits chaotic light, propagates toward the lens L_f, characterized by the focal length f, and then encounters a beam splitter (BS). The latter generates two copies of the input beam, which are eventually detected by one of the two sensors D_a and D_b, both spatially resolving. The detectors are placed in such a way that they collect the focused images of two planes in proximity of the object, called D^o_a and D^o_b, respectively.
As demonstrated in [30], plenoptic information can be retrieved by analysing the spatiotemporal correlations between the fluctuations in the intensity acquired by the two sensors. For evaluating Γ(ρ_a, ρ_b) in the discussed CPI-AP setup, we shall assume that the object is positioned at a distance z from the lens L_f. Light emitted by this object is characterized by the intensity profile A(ρ_o). We further assume that transverse coherence can be safely neglected and that emission is quasi-monochromatic around the central wavelength λ (corresponding to the wavenumber k = 2π/λ). In these conditions, propagation from an arbitrary point ρ_o on the object plane to a point ρ_a (ρ_b) on the detector D_a (D_b) occurs through the proper paraxial optical transfer function g_a (g_b) [31]. Neglecting irrelevant factors (independent of ρ_a and ρ_b), the resulting correlation function reads

Γ(ρ_a, ρ_b) = | ∫ dρ_o A(ρ_o) g_a*(ρ_a, ρ_o) g_b(ρ_b, ρ_o) |²,

where, for j = a, b, the propagators g_j involve the pupil function P(ρ_l) of the lens L_f and the magnifications M_j = z_j′/z_j of the object planes D^o_j on the detectors D_j, with z_j the lens-to-plane and z_j′ the lens-to-detector distances (see [30] for the explicit expressions).

Figure 1.
Representation of a setup for correlation plenoptic imaging between arbitrary planes (CPI-AP); the object is supposed to be a chaotic light emitter [30]. The lens L_f focuses the images of the two planes D^o_a and D^o_b on the two spatially resolving sensors D_a and D_b, respectively. By correlating the intensity fluctuations retrieved by each pair of pixels on the two detectors, information on the distribution and direction of light from the object is retrieved.
The plenoptic properties of Γ(ρ_a, ρ_b) from Equation (3) can be fully understood by considering the dominant contribution to the integrals in the geometrical-optics limit k → ∞, reported in Equation (4). In this result, we observe that Γ(ρ_a, ρ_b) encodes at the same time images of the (squared) object intensity profile A² and of the lens pupil function P. While the latter is independent of the distance z between the lens L_f and the object plane, the image of the object depends on a linear combination of the coordinates of the two detectors; if the object is placed in either of the planes D^o_a or D^o_b, Γ depends on the coordinates of only one detector, either ρ_a or ρ_b, respectively. This means that, for z = z_a (z = z_b), A² no longer depends on ρ_b (ρ_a), and thus, integrating Γ over ρ_b (ρ_a) provides a focused image of the object. As described in [30], when the object lies outside the depth of field around one of the two conjugate planes, integrating the correlation function over any detector plane coordinate provides a blurred image. A "refocusing" algorithm, able to decouple the image of the lens from the image of the object, is therefore necessary; this is achieved by defining a proper linear combination of the detector coordinates ρ_a and ρ_b, such as the one given by the two variables ρ_r and ρ_s of Equation (5). The transformation in Equation (5) can be inverted and plugged into Equation (4), yielding the refocused correlation function of Equation (6), with ρ_a(ρ_r, ρ_s) and ρ_b(ρ_r, ρ_s) satisfying the system of Equations (5). The effect of applying the transformation of Equation (5) to the argument of Γ is thus to realign all the displaced images in order to reconstruct the focused image. We can now integrate the refocused function Γ_ref over the ρ_s variable, which gives the refocused image of Equation (7). In the limit k → ∞, this approximation tends to become exact, and the refocused image coincides with the squared object intensity profile.
The refocusing procedure is robust against transverse alignment shifts of the two sensors with respect to the optical axes, since the effect on the refocused image of Equation (7) is a mere translation, which does not affect the relative transverse distance between details nor the image resolution. A natural benchmark for the refocused image of Equation (6) is represented by the images captured by the two detectors D_j (with j = a, b) through direct intensity measurement. In the k → ∞ limit, these quantities provide faithful images of the object profile only in the case z = z_j. For different object positions, a geometrical spread of the image occurs, as we show in more detail in our case study.

Results
Due to the structure of the correlation function of Equation (2), the refocused image of Equation (7) takes the form of a double integral over the object coordinates, with Φ a proper function involving the optical propagators. Such a feature prevents us from defining a proper point-spread function, as one naturally does in the case of the direct intensity image. The resolution limits of CPI therefore require a careful evaluation, and even ad hoc definitions. In fact, the plenoptic reconstruction of the direction of light in CPI-AP is based on imaging the two arbitrary planes focused by the lens L_f on the detectors D_a and D_b; a transmissive object acts as a diffractive aperture for such an image; hence, a point object would hinder the one-to-one correspondence between points of the plane D^o_a and points of D^o_b. To address this issue, we must test the behaviour of the refocused images of objects of finite size. As a testbed, we consider a class of objects whose intensity profile is characterized by a Gaussian shape of standard deviation a. This also allows an analytical determination of the refocusing function and a direct comparison of the results obtained in different cases. The lens aperture is modelled by a Gaussian pupil function of width σ, which is often a good approximation for the pupil, especially for composite lenses. In this hypothesis, we first compute the correlation function of Equation (2). Then, following the refocusing procedure defined by Equations (5)-(7), we obtain a Gaussian refocused image, whose peak value is Σ_0. On the one hand, the width ∆(a) of this image can be expressed through the decomposition ∆²(a) = a² + δ(a)², where a² is the value obtained in the limit k → ∞, providing a perfectly resolved image, as expected from Equation (7).
On the other hand, the quantity δ(a) defines the spread due to the finite image resolution and is determined by factors such as the wavelength, the lens size σ, the distances z_j of the two reference planes from the lens, and the object axial position z. It is worth noticing that the finite-resolution contribution δ(a) also depends on the object width a, a feature not present in standard imaging, but already observed in other CPI cases, which we shall return to later. Based on Equation (11), the standard images of the object retrieved by each detector D_j separately (see Equation (13)) are characterized by the width of Equation (25), in which the effective magnification for an object at a distance z is rescaled by a geometrical projection factor z_j/z. By inspection of the results of Equations (21), (22), and (25), we can outline the main differences between the two cases:

• The different factors 1 and 2 in front of a² in Equations (21) and (25), respectively, are determined by the fact that the direct intensity provides an image of A, while the refocused correlation yields an image of A² (see Equations (7)-(11));

• While the spread of the direct intensity image is independent of a, the spread δ of the refocused CPI-AP image depends on the object width. This dependence of the correlation image on the object is related to the role of δ as an "effective aperture" in correlation imaging;

• Consistent with the previous point, δ(a) in Equation (22) is monotonically decreasing with the object size a. This entails that the total image width ∆(a) can have a counterintuitive non-monotonic behaviour with a, with a minimum at a > 0, unless the object is very close to one of the reference planes. Noticeably, the value δ(a = 0) is always finite, unlike in previously analysed cases [20];

• As expected, in the out-of-focus case, the direct intensity image cannot provide a faithful representation of the object, even for k → ∞, since a residual, purely geometrical spread, proportional to the lens aperture, is still present. Moreover, as the distance from the focused plane increases, the dependence of ∆_S(j) on a becomes progressively weaker, making objects of different widths indistinguishable. This is not the case for CPI-AP, since refocusing provides a perfectly resolved image of A², independent of the distance from the focused planes;

• The resolution and depth-of-field limits of traditional plenoptic imaging devices [14] are determined by the properties of the collected sub-images, obtained by reducing the main lens numerical aperture by a factor N_u^(-1), with N_u the number of directional resolution cells per line. Therefore, the image width is obtained by the replacement σ → σ/N_u (Equation (29)) in Equation (25). Besides negatively affecting the resolution of the focused image, such a change entails a limitation to the image width at k → ∞, which is qualitatively similar to the case reported in Equation (27) for standard imaging, although quantitatively attenuated.
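The non-monotonic behaviour of the total image width is easy to visualize numerically. The sketch below assumes the quadrature decomposition ∆²(a) = a² + δ(a)² discussed in the text, but the decreasing profile chosen for δ(a) is purely hypothetical (it does not reproduce the exact expression of Equation (22)); it is picked only to show that a minimum at a > 0 can appear:

```python
import numpy as np

# Decomposition Delta^2(a) = a^2 + delta(a)^2, with a toy, monotonically
# decreasing spread delta(a); the functional form below is hypothetical.
delta0, a0 = 5.0, 3.0                            # arbitrary units
a = np.linspace(0.0, 20.0, 2001)
delta = delta0 / np.sqrt(1.0 + (a / a0) ** 2)    # decreases with a
Delta = np.sqrt(a ** 2 + delta ** 2)

i_min = int(Delta.argmin())
print(f"Delta(0) = {Delta[0]:.2f}; minimum Delta = {Delta[i_min]:.2f} "
      f"at a = {a[i_min]:.2f} > 0")
```

Because the spread term shrinks as the object grows, the image of a slightly larger object can be narrower than that of a point-like one, which is exactly why a point-spread function is uninformative here.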
The above considerations highlight both the enormous potential of the refocusing capability of CPI-AP and the difficulty of defining a resolution limit. In particular, the peculiar non-monotonic behaviour of the image size ∆, which decreases with the object size when a comes close to zero, makes the definition of a point-spread function non-informative of the imaging capabilities of the system. To define a resolution limit, we followed the general idea that an object is resolved when the width of its image is at least approximately proportional to its own width.
Let us start from the case of a focused object (z = z_j), in which the image width takes the much simpler form ∆²(a) = a² + (z_j/(kσ))², with the spread determined only by diffraction at the lens. For small a, the image width is dominated by the constant spread, and the size of the object can hardly be inferred from it. Instead, for large a, ∆(a) is essentially proportional to a, up to a small correction. A conventional transition point between the two regimes can be identified as the value ã of the object width such that ã = z_j/(kσ) (31), namely, the value at which the width of the perfectly resolved image becomes equal to the spread. Incidentally, this value coincides with the minimum image width, ∆(a = 0) = z_j/(kσ) (32). Motivated by these observations, we formulated two definitions to identify a lower limit to the object width that can be resolved, in the sense that it is proportional to the corresponding image width. The two definitions coincide in the focused cases z = z_j.
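Assuming the focused-case transition width takes the form ã = z_j/(kσ), as the dimensional structure of Equation (31) suggests, the parameters quoted in the figure captions give a limit of a few micrometres:

```python
import math

# Parameters quoted in the text (figure captions); the formula
# a_tilde = z_j / (k * sigma) is the focused-case transition width of
# Equation (31), as reconstructed here.
lam = 532e-9       # wavelength [m]
z_a = 293e-3       # distance of the reference plane D_a^o from the lens [m]
sigma = 8.2e-3     # width of the Gaussian lens pupil [m]

k = 2.0 * math.pi / lam          # wavenumber [1/m]
a_tilde = z_a / (k * sigma)      # transition object width [m]
print(f"a_tilde = {a_tilde * 1e6:.2f} um")
```

The result, close to 3 µm, sets the scale below which the focused image width is dominated by the diffraction spread rather than by the object itself.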
First, by generalizing Equation (31), we define ã for an arbitrary object position z as the width value such that the perfectly resolved contribution to the image width ∆²(a) becomes equal to the spread contribution: ã² = δ(ã)². By solving the above equation, we obtain ã as a function of the object position z. The second limit, ā, is instead defined by requiring that the image width at perfect resolution equals the minimum image width attainable at the given z (see the Discussion). In Figure 2, we represent a graphical identification of ã, both in the case z = z_j (specifically, z = z_a), in which the spread δ is constant, and in the case z ≠ z_j (specifically, z = (z_a + z_b)/2), in which δ decreases with a, thus providing an even more reliable proportionality between image and object widths. In all plots, the parameters are fixed to λ = 532 nm, z_a = 293 mm, z_b = 343 mm, and σ = 8.2 mm. A comparison between the two definitions of resolution limits for an object with a Gaussian intensity profile is reported in Figure 4, showing in the same plot the behaviour of ã and ā with varying z. The two quantities behave consistently, with minima close to the two reference planes z = z_j and a local maximum close to z = (z_a + z_b)/2. While, as discussed before, the two limits coincide close to the focused planes, the limit ā tends to be more restrictive by a factor √2 in the out-of-focus cases.

Figure 4.
Behaviour of the limit object widths ã (dashed red line) and ā (solid blue line) as a function of the distance z between the object and the lens. The two functions behave consistently, with minima close to the two reference planes z_a = 293 mm and z_b = 343 mm and a local maximum close to the midpoint z = (z_a + z_b)/2. While the two limits coincide close to the focused planes, the limit ā is more restrictive by a factor of √2 in the out-of-focus cases.

Discussion
We defined and discussed different characterizations of the resolution limits in CPI-AP for objects with a Gaussian profile. The difficulty in defining resolution limits in an unambiguous way has clearly emerged, since the two limit quantities that we considered, though coinciding in the focused case, deviate from each other as the object is placed away from the two reference planes. Specifically, the limit ā, which is obtained by imposing that the image width at perfect resolution is larger than the minimum image width, turns out to be generally more restrictive than the limit ã, which is obtained by requiring that the image width at perfect resolution is larger than the spread due to the finite resolution.
The definition of resolution limits in the present work is conceptually different with respect to the one considered in the previous literature on the topic, which was based on the ability to discriminate a double slit with very specific features, namely a centre-to-centre distance equal to twice the slit width (see, e.g., [30] for the CPI-AP setup and [20] for a different CPI system). Despite this difference, the results obtained in our work are fully consistent with the previous ones in terms of the variation of the resolution with varying object axial position. We remark that, though the present analysis highlights a better performance of CPI-AP in terms of resolution, a full evaluation of the advantages with respect to standard techniques must also take into account the problem of noise, which affects correlation imaging in a specific way (see, e.g., [32]). A thorough discussion of this issue will be a matter for future research.
An alternative approach to the definition of conventional resolution limits is to employ the modulation transfer function (MTF) criterion [33], in which the visibility of the image of a periodic intensity profile is analysed. This is outside the scope of the present paper, but we plan to investigate it in future research. In fact, we expect that the analytical results obtained with a sinusoidal profile can be exploited to provide full control of the system performance.
Let us finally remark that the starting point of our analysis, namely the form of the correlation function given by Equation (2), relies on the physical assumption that the transverse coherence length on the source is much smaller than both the intensity profile extension and the linear size of the resolution cell defined by the lens. However, especially in a microscopy context, transverse coherence is not necessarily negligible and may affect the imaging properties of the device. Further research will be devoted to investigating how residual transverse coherence affects the resolution and depth of field of the CPI-AP system.

Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

PI      Plenoptic imaging
CPI     Correlation plenoptic imaging
CPI-AP  Correlation plenoptic imaging between arbitrary planes
MTF     Modulation transfer function