Full-Resolution Light-Field Camera via Fourier Dual Photography

Abstract: Conventional light-field cameras with a micro-lens array suffer from a resolution trade-off and a shallow depth of field. Here we develop a full-resolution light-field camera based on dual photography. We extend the principle of dual photography from real space to Fourier space to obtain the two-dimensional (2D) angular information of the light field. The camera uses a spatial light modulator at the image plane as a virtual 2D detector to record the 2D spatial distribution of the image, and a real 2D detector at the Fourier plane of the image to record the angles of the light rays. The Fourier-spectrum signals recorded by each pixel of the real 2D detector can be used to reconstruct a perspective image through single-pixel imaging. Based on the perspective images reconstructed by different pixels, we experimentally demonstrate that the camera can digitally refocus on objects at different depths. The camera achieves light-field imaging at full resolution and provides an extreme depth of field. The method provides a new approach to developing full-resolution light-field cameras.


Introduction
Light-field imaging [1,2], also known as integral imaging [3][4][5] or plenoptic imaging [6], is used for capturing 4D light-field information (2D spatial and 2D angular information). Many light-field camera schemes have been proposed for different applications. The most widely used light-field cameras operate with a micro-lens array (MLA) [7]. Such a camera encodes the 4D light-field information onto a 2D detector through the MLA. This MLA scheme enables light-field capture in a single shot, but imposes a trade-off between spatial resolution and angular resolution [8]. Consequently, in order to obtain the angular information, the spatial resolution of the reconstructed image has to be sacrificed, and it is lower than the resolution of the 2D detector. Spatial resolution is particularly important for light-field microscopes. To improve it, several Fourier light-field microscopes have been proposed, in which an MLA or a diffuser is placed in the Fourier plane instead of the intermediate image plane [9][10][11][12][13]. In some applications, such as rendering and relighting, full-resolution light-field information is desired [14,15]. To acquire the light field at the full resolution of the detector, multi-shot schemes based on coded masks have been proposed [16][17][18]. These schemes use a spatial light modulator (SLM) as a time-varying aperture to modulate the light field and a 2D detector to record it. Recently, to address the resolution trade-off in light-field microscopes, MLA-free schemes based on single-pixel imaging [19][20][21][22][23] have been reported [24][25][26], in which an LED light source combined with a digital micromirror device (DMD) is used to illuminate the sample. Two light-field cameras based on single-pixel imaging, using a DMD and a liquid-crystal-on-silicon SLM, respectively, have also been reported [27,28].
Here, we develop a full-resolution light-field camera (FRLFC) by means of Fourier dual photography. Fourier dual photography is the extension of dual photography from real space to Fourier space. Dual photography was introduced by Sen et al. in 2005 [29] for efficiently capturing the light transport between a camera and a projector; it can generate a full-resolution dual image from the viewpoint of the projector. Unlike the conventional light-field camera with an MLA, which uses a single 2D detector to acquire the 4D light-field information, the FRLFC places a 2D detector at the Fourier plane to acquire the 2D angular information and an SLM at the image plane. The SLM acts as a virtual 2D detector that acquires the 2D spatial information according to the principle of Fourier dual photography. In other words, the 4D light-field information is acquired by a pair of 2D detectors, so full-resolution light-field imaging is achieved by the FRLFC. The experimental results demonstrate the feasibility of the proposed method.

Fourier Dual Photography
Figure 1 shows the imaging configuration of dual photography. An SLM is used to project patterns onto the object in a scene; this mode of operation is commonly referred to as structured illumination. Figure 1a is the primal configuration. Exploiting Helmholtz reciprocity [30], we can virtually interchange the SLM and the detector to computationally reconstruct an image from the viewpoint of the SLM, as shown in Figure 1b, which is the dual configuration. When the SLM is instead placed at the image plane to modulate the image of an object under ambient light illumination, the mode of operation is commonly referred to as structured detection. We can extend the principle of dual photography from the structured-illumination mode to the structured-detection mode. Figure 2 shows the imaging configuration of dual photography in the structured-detection mode.
Figure 2a shows the primal configuration, where the object is imaged by imaging lens L1 onto the SLM and then imaged by imaging lens L2 onto the detector. Figure 2b shows the dual configuration, where the object is imaged by imaging lenses L1 and L2 onto the virtual SLM and then imaged by imaging lens L2 onto the virtual detector. It should be noted that Figure 2a,b are formally asymmetric, because in practice modulation should be carried out before detection; hence the delay lens L2 is added in Figure 2b. Considering that light-field imaging needs to obtain the 2D angular information of the light rays, and that a camera usually works with objects under ambient light illumination or with self-luminous objects, we extend dual photography in the structured-detection mode from real space to Fourier space, which we call Fourier dual photography. Figure 3a shows the primal imaging configuration of Fourier dual photography, where the object is imaged by the imaging lens onto the SLM and then focused by the Fourier lens onto the 2D detector at the Fourier plane.
Figure 3b shows the dual configuration of Fourier dual photography, where the object is imaged by the imaging lens onto the image plane and then focused by the Fourier lens onto the virtual SLM, and finally imaged onto the virtual detector by the Fourier lens.

Methods
The FRLFC adopts the imaging configuration of Fourier dual photography shown in Figure 3a. According to the principle of dual photography, the SLM acts as a virtual detector that can computationally reconstruct a dual image of the object, recording the 2D spatial information at the full resolution of the SLM. By loading multiple structured patterns onto the SLM to modulate the image of the object, multiple Fourier spectrum images can be captured by the detector. The Fourier spectrum signals recorded by each pixel of the detector can be used to reconstruct a perspective image of the object using the single-pixel imaging method [31]. The images reconstructed by different pixels of the detector contain the 2D angular information of the light rays, because the Fourier spectra recorded by different pixels correspond to light rays with different angles. Figure 4 shows how the depth information of objects in a scene is recorded by the FRLFC. A, B, and C represent three different pixels of the detector; pixel B is on the optical axis of the optical system. The spatial coordinates of the perspective images reconstructed by the single-pixel detectors are determined by the SLM. E_A F_A, E_B F_B, and E_C F_C represent the spatial coordinate ranges of the perspective images reconstructed by pixels A, B, and C, respectively.
When the object (solid arrow) is at the conjugate object plane of the SLM, as shown in Figure 4a, the spatial coordinates of the images reconstructed by pixels A, B, and C are the same, and E_A F_A, E_B F_B, and E_C F_C overlap. However, if the object (dashed arrow) is out of the conjugate object plane, as shown in Figure 4b,c, the spatial coordinates of the images reconstructed by pixels A, B, and C are not the same, and E_A F_A, E_B F_B, and E_C F_C do not overlap. Specifically, when the object is in front of the conjugate object plane, as shown in Figure 4b, the image reconstructed by pixel A deviates upward from the image reconstructed by pixel B, and the image reconstructed by pixel C deviates downward from it. In contrast, when the object is behind the conjugate object plane, as shown in Figure 4c, the image reconstructed by pixel A deviates downward from the image reconstructed by pixel B, and the image reconstructed by pixel C deviates upward from it. The coordinate deviation of a reconstructed image is related to the depth of the object and the position of the pixel; therefore, the depth information of the objects can be obtained from the images reconstructed by different pixels of the detector. In Figure 4c, (∆s_xi, ∆s_yi) represent the (x, y) coordinate deviation values of the image reconstructed by pixel A, and ∆z_image is the position deviation of the image plane from the SLM plane.
Based on the perspective images reconstructed by different pixels, we can digitally refocus on objects at different depths. The digital refocusing algorithm is summarized as follows:

Step 1: Calculate the coordinate deviation values of the image reconstructed by the ith single-pixel detector according to the refocusing depth ∆z_image:

∆s_xi = (x_i − x_center) ∆z_image / f, ∆s_yi = (y_i − y_center) ∆z_image / f,

where (x_i, y_i) is the coordinate of the single-pixel detector, (x_center, y_center) is the coordinate of the center point B of the Fourier spectrum image, and f is the focal length of the Fourier lens.
Step 2: Shift the perspective image reconstructed by the ith pixel in the Fourier domain:

I_i^s(x, y) = FT^{−1}{ FT[I_i(x, y)] · exp[ j2π ( f_x ∆s_xi + f_y ∆s_yi ) ] },

where I_i(x, y) is the perspective image reconstructed by the ith pixel, I_i^s(x, y) is the shifted perspective image, (f_x, f_y) are the frequency coordinates, FT[ ] and FT^{−1}{ } are the Fourier transform and inverse Fourier transform operators, respectively, and j is the imaginary unit.
Step 3: Sum all the shifted images to obtain the refocused image I_∆z(x, y) at the depth ∆z_image:

I_∆z(x, y) = Σ_i I_i^s(x, y).

Step 4: Change the refocusing depth ∆z_image and repeat Steps 1 to 3 to obtain object images at different depths.
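The shift-and-sum procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the scale factor f (taken here as the Fourier-lens focal length) and the sign convention of the phase ramp are assumptions, and real data would supply the perspective images and detector-pixel coordinates from the actual setup.

```python
import numpy as np

def refocus(perspectives, coords, center, dz, f):
    """Shift-and-sum digital refocusing (sketch of Steps 1-3).

    perspectives : list of 2D arrays, one perspective image per single-pixel detector
    coords       : list of (x_i, y_i) detector positions in the Fourier plane
    center       : (x_center, y_center), position of the on-axis pixel B
    dz           : refocusing depth (dz_image)
    f            : assumed scale factor (Fourier-lens focal length)
    """
    acc = np.zeros_like(perspectives[0], dtype=float)
    for img, (xi, yi) in zip(perspectives, coords):
        # Step 1: coordinate deviation of this perspective image
        dsx = (xi - center[0]) * dz / f
        dsy = (yi - center[1]) * dz / f
        # Step 2: sub-pixel shift via the Fourier shift theorem
        fy = np.fft.fftfreq(img.shape[0])[:, None]
        fx = np.fft.fftfreq(img.shape[1])[None, :]
        shifted = np.fft.ifft2(np.fft.fft2(img)
                               * np.exp(2j * np.pi * (fx * dsx + fy * dsy)))
        # Step 3: accumulate the realigned images
        acc += shifted.real
    return acc
```

Sweeping dz (Step 4) and keeping the sharpest result reproduces the refocusing stack; the Fourier-domain shift allows non-integer deviations, which integer array rolls cannot express.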
Color light-field imaging with the FRLFC can be achieved by placing a color detector at the Fourier plane. The 2D color detector records not only the angular information but also the color information. Each pixel of the detector yields three 1D intensity sequences, corresponding to the red, green, and blue channels. Using these three intensity sequences, we can reconstruct red, green, and blue perspective images from every pixel of the detector. Based on all the perspective images, color light-field imaging is realized with the steps described above.

Results and Discussion
To demonstrate the light-field imaging capability of the FRLFC, we conduct two experiments. The SLM used in the experiments is a liquid-crystal device (LCD, 5.5 inches, 1440 × 2560 pixels, pixel size 47.25 µm, a mobile phone display). The 2D detector used in the experiments is a color charge-coupled device (CCD) (Point Grey: GS3-U3-60QS6C-C, 1 inch CCD, 2736 × 2190 pixels, pixel size 4.54 µm). The focal lengths of the imaging lens (Nikon: f/1.8D AF NIKKOR) and the Fourier lens are 50 mm and 15 mm, respectively.
In the first experiment, we attempted to reconstruct a 3D scene consisting of three toy masks at different depths under ambient light illumination. To ensure the scene was uniformly illuminated, we used eleven white light-emitting diodes (LEDs) (color temperature 6500 K, brightness 1000 lumens) to illuminate the scene from different directions. We loaded a series of Hadamard basis patterns on the LCD to modulate the image of the 3D objects according to the Hadamard single-pixel imaging method [31]. The Hadamard patterns were displayed in a 128 × 128 pixel area of the LCD; that is, the size of the Hadamard patterns was 128 × 128 pixels. Figure 5a shows some examples of the Hadamard basis patterns. We captured 128 × 128 Fourier spectrum images, each of which corresponds to a Hadamard basis pattern displayed on the LCD. Figure 5b shows a Fourier spectrum image captured by the camera. As the pixel size of the CCD is only 4.54 µm, the light signal detected by each pixel is very weak. To improve the signal-to-noise ratio, we binned multiple pixels of the CCD into a single-pixel detector, obviously at the expense of angular resolution. The layout of the single-pixel detectors is shown in Figure 5c, where each orange dot represents a single-pixel detector. The radius of each orange dot is 50 pixels. Using the single-pixel imaging method, we can reconstruct red, green, and blue perspective images of the objects from the measurements of each single-pixel detector. The size of the reconstructed images is 128 × 128 pixels, the same as that of the modulation patterns. Figure 6a1-d1 show the positions of the single-pixel detectors (orange dots). Figure 6a2-a4,b2-b4,c2-c4,d2-d4 show four groups of perspective images reconstructed by the leftmost, topmost, rightmost, and bottommost single-pixel detectors, respectively. It can be seen that the perspective images reconstructed by different single-pixel detectors are shifted with respect to each other, as shown in Figure 6a2-d2.
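As a rough sketch of how each single-pixel detector's intensity sequence yields a perspective image, the following uses a complete ±1 Hadamard basis (Sylvester construction) and inverts it by orthogonality. It is illustrative only: a real LCD displays non-negative patterns, so in practice differential (complementary-pattern) measurements are used to obtain the ±1 values, and the function names here are our own.

```python
import numpy as np

def hadamard_matrix(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two.
    Each row, reshaped, is one +/-1 basis pattern for the SLM."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def reconstruct(measurements, n):
    """Recover an n x n perspective image from the n*n single-pixel
    measurements, one per Hadamard basis pattern.

    Because H @ H.T = (n*n) * I, the inverse transform is just H.T
    applied to the measurement vector, divided by n*n."""
    H = hadamard_matrix(n * n)
    img = H.T @ measurements / (n * n)
    return img.reshape(n, n)
```

In the experiment this reconstruction is repeated for every single-pixel detector and every color channel; with 128 × 128 patterns, n = 128 and each detector records 128 × 128 intensity values.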
Figure 6e shows the physical size of the masks. These results confirm that the images reconstructed by different single-pixel detectors contain different angular information. With the perspective images reconstructed by all the single-pixel detectors, we can digitally refocus on the objects at different depths using the algorithm described above. From the red, green, and blue refocused images, we can synthesize digitally refocused full-color images, as shown in Figure 7.
In the second experiment, we attempted to capture a scene of self-luminous objects with the FRLFC. The scene consists of three LED lamps with different colors and shapes. The experimental parameters are the same as those of the first experiment. Figure 8a1-d1 show the positions of the single-pixel detectors (orange dots). Figure 8a2-a4,b2-b4,c2-c4,d2-d4 show four groups of perspective images reconstructed by the leftmost, topmost, rightmost, and bottommost single-pixel detectors, respectively.
It can be seen that the perspective images reconstructed by different single-pixel detectors are shifted with respect to each other, as shown in Figure 8a2-d2. Figure 8e shows the physical size of the self-luminous objects. Based on the perspective images reconstructed by all single-pixel detectors, we can reconstruct images of the objects at different depths, as shown in Figure 9.

For a common camera, the lateral resolution of the acquired image is mainly determined by the imaging resolution of the imaging lens and the sampling frequency of the digital image sensor. For the proposed FRLFC, since the SLM acts as a virtual digital image sensor, the lateral resolution is determined by the imaging resolution of the imaging lens and the sampling frequency of the SLM. In the experiment, the imaging resolution of the imaging lens is about 1.21 µm (1.22λ f/D = 1.22 × 0.55 µm × 1.8 ≈ 1.21 µm), which is much smaller than the 47.25 µm pixel size of the LCD spatial light modulator. Therefore, the lateral resolution of the FRLFC in the experiment is mainly determined by the pixel size of the LCD.
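The back-of-envelope resolution comparison in the text can be checked numerically (values copied from the text; a 0.55 µm mid-visible wavelength is assumed):

```python
# Diffraction-limited spot size vs. SLM pixel pitch
wavelength_um = 0.55    # assumed mid-visible design wavelength
f_number = 1.8          # Nikon f/1.8D lens at full aperture, f/D = 1.8
slm_pixel_um = 47.25    # LCD pixel size from the experiment

# Rayleigh criterion: 1.22 * lambda * f/D
spot_um = 1.22 * wavelength_um * f_number   # about 1.21 um

# The lens spot is far smaller than the SLM pixel, so the SLM pitch
# sets the lateral resolution of the FRLFC.
print(f"spot = {spot_um:.2f} um, SLM pixel = {slm_pixel_um} um")
```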
Depth of field (DOF) is an important performance parameter of a light-field camera [32]. We have tested the DOF of the FRLFC. In the experiment, a lamp with twelve LED beads is used as a self-luminous object to be imaged. The size of this lamp is approximately 22 mm × 40 mm. Figure 10 shows the reconstructed images of the lamp at different depths using the single-pixel imaging method with the whole CCD as a single-pixel detector, which is equivalent to a common camera at its maximum aperture according to the principle of dual photography. In the figure, U200, U250, and so on denote the object distance u = 200 mm, u = 250 mm, etc. The images are considered clear from u = 600 mm to u = 800 mm, so the DOF is about 200 mm. Figure 11 shows the reconstructed images of the lamp at different depths using the single-pixel imaging method with a single-pixel detector of radius 10 pixels at the center of the Fourier spectrum, which is equivalent to a common camera with an aperture of radius 10 pixels. As can be seen from Figure 11, the images are considered clear from u = 200 mm to u = 1350 mm, so the DOF is longer than 1150 mm. The smaller the single-pixel detector, the longer the DOF of the reconstructed image. Therefore, when a single pixel of the camera is used as a single-pixel detector, an extreme DOF can be obtained with the FRLFC. Importantly, since the spatial resolution of the image reconstructed by a single-pixel detector is determined by the sampling frequency of the SLM rather than the size of the single-pixel detector, the proposed FRLFC can provide an extreme DOF without sacrificing spatial resolution. In comparison, the light-field camera with an MLA provides a longer DOF than the common camera at the expense of image resolution. It should be noted that the light-field cameras discussed above do not include light-field microscopes.
Figure 11. The reconstructed images of the lamp at different depths using the single-pixel imaging method with a single-pixel detector of radius 10 pixels.

Conclusions
In summary, we have developed a light-field camera based on dual photography. The principle of dual photography is extended from real space to Fourier space to obtain the 2D angular information of the light rays. Experimental results demonstrate that the proposed light-field camera can realize color light-field imaging. Compared with conventional light-field cameras with an MLA, the proposed camera avoids the resolution trade-off problem, and full-resolution light-field imaging can, in principle, be realized. The high-resolution angular information obtained also enables light-field imaging with an extreme depth of field. Although the proposed method is currently not suitable for dynamic scene imaging, it has the potential for high-resolution light-field imaging of static scenes and provides a new approach to developing full-resolution light-field cameras.

Supplementary Materials:
The following supporting information can be downloaded at: www.mdpi.com/xxx/s1, Visualization 1: The reconstructed perspective images; Visualization 2: The digitally refocused images; Visualization 3: The reconstructed perspective images of the self-luminous objects; Visualization 4: The digitally refocused images of the self-luminous objects.