Fast Image Reconstruction Technique for Parallel Phase-Shifting Digital Holography

Abstract: For incoherent and coherent digital holography, the parallel phase-shifting technique has been used to reduce the number of exposures required for the phase-shifting technique, which eliminates the zero-order diffraction and conjugate image components. Although the parallel phase-shifting technique can decrease the hologram recording time, the image interpolations require additional calculation time. In this study, we propose a technique that reduces the calculation time for the image interpolations; this technique is based on the convolution theorem. We experimentally verified the proposed technique and compared it with the conventional technique. The proposed technique is more effective for more precise interpolation algorithms because its calculation time does not depend on the size of the interpolation kernel.

The phase-shifting technique has been used in digital holography to obtain the complex amplitude of a wavefront diffracted from objects by eliminating the zero-order diffraction and conjugate image components [13,14]. Most incoherent digital holography methods use the phase-shifting technique, which requires the capture of three or four holograms with different phase shifts. To avoid multiple exposures, the parallel phase-shifting technique [15,16] was proposed, which obtains the complex-amplitude distribution of an object wave from a single exposure. In this technique, multiple holograms with different phase shifts are recorded by space multiplexing. The parallel phase-shifting technique can be effectively implemented using polarization image sensors [17]. Recently commercialized polarization image sensors have a high resolution (>1000 × 1000) and operate at a video rate or a higher frame rate (≥60 Hz) [18]. The use of the parallel phase-shifting technique enables real-time digital holography [19-21]. The single-shot recording enables the measurement of short-duration phenomena [22-24] and of rapidly moving microorganisms [25,26]. In addition, single-shot multi-wavelength incoherent digital holography was demonstrated using a color polarization image sensor [27] or a polarization-sensitive phase-shifted array [28].
Polarization image sensors consist of four groups of pixels corresponding to the four phase shifts. Pixels with the same phase shift are extracted to synthesize the four images required for the phase-shifting technique. An interpolation or demosaicing technique is used to obtain four pixel values at identical positions in the four images corresponding to the four phase shifts. The parallel phase-shifting technique reduces the time taken for image capture; however, the image interpolations require additional time. This additional time becomes an obstacle for real-time operation, especially as the resolution of the image sensor increases. In general, interpolation algorithms with higher accuracy require a longer calculation time than those with lower accuracy. Thus, decreasing the time required for the image interpolations is as important as decreasing the time required for the diffraction calculation.
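As a concrete illustration, the conventional extraction-and-interpolation step can be sketched as follows. This is a minimal sketch, assuming a 2 × 2 micro-polarizer unit, a bilinear kernel, and an illustrative pixel-to-phase mapping; the actual mapping depends on the sensor.

```python
import numpy as np
from scipy.ndimage import convolve

def extract_sparse(hologram, row_off, col_off):
    """Keep only the pixels of one phase-shift group; zero out the rest."""
    sparse = np.zeros_like(hologram, dtype=float)
    sparse[row_off::2, col_off::2] = hologram[row_off::2, col_off::2]
    return sparse

# Bilinear interpolation kernel for a lattice with spacing 2 (3 x 3 support).
bilinear = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])

# Assumed mapping of the 2 x 2 pixel offsets to phase shifts 0, pi/2, pi, 3pi/2.
offsets = {"0": (0, 0), "pi/2": (0, 1), "pi": (1, 1), "3pi/2": (1, 0)}

hologram = np.random.rand(8, 8)  # stand-in for a captured hologram
I = {d: convolve(extract_sparse(hologram, *off), bilinear, mode="constant")
     for d, off in offsets.items()}

# Four-step phase shifting applied to the interpolated images.
u = (I["0"] - I["pi"]) + 1j * (I["pi/2"] - I["3pi/2"])
```

Note that each captured pixel value is preserved at its own lattice position, since the bilinear kernel's support does not reach the neighboring samples of the same group.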
In this study, we propose a technique to reduce the calculation time required for the image interpolations in the parallel phase-shifting technique. In Section 2, the proposed technique is explained. The experimental verification of the proposed technique is shown in Section 3. The discussion is given in Section 4, which is followed by the conclusions in Section 5.

Theory
First, we explain the parallel phase-shifting technique using the polarization image sensor [17]. Figure 1a shows the arrangement of the micro-polarizers attached to the pixels of the image sensor. The directions of the polarizing axes of the micro-polarizers are 0°, 45°, 90°, and 135°. When the object and reference waves, having circular polarizations in opposite directions, interfere on the image sensor, the phase shifts between the two waves captured by the polarized pixels are δ = 0, π/2, π, and 3π/2, as shown in Figure 1b.
Figure 2 illustrates the calculation procedure of the conventional parallel phase-shifting technique [15,16]. Pixels having the same phase shifts are extracted to obtain four sparse images. The extracted sparse images are represented by Io(x, y; 0), Io(x, y; π/2), Io(x, y; π), and Io(x, y; 3π/2), corresponding to the phase shifts of δ = 0, π/2, π, and 3π/2, respectively. The interpolated images corresponding to these phase shifts are denoted by I(x, y; 0), I(x, y; π/2), I(x, y; π), and I(x, y; 3π/2), respectively. We performed image interpolations to obtain four pixel values at identical positions in the four images. The phase-shifting technique provides the complex-amplitude distribution of the object wave on the image sensor by using the following equation:

u(x, y) = [I(x, y; 0) − I(x, y; π)] + i[I(x, y; π/2) − I(x, y; 3π/2)].

The diffraction calculation is applied to u(x, y) to obtain the reconstructed image. Several kinds of diffraction calculations can be used, such as the Fresnel and Fraunhofer diffraction equations. In this study, we use the angular spectrum representation for the diffraction calculation [29,30]. The angular spectrum method provides the reconstructed image v(x, y) at a distance zr using the following equation:

v(x, y) = IFT[ U(fx, fy) exp( i2πzr (1/λ² − fx² − fy²)^{1/2} ) ],

where IFT denotes the inverse Fourier transform and U(fx, fy) is the Fourier transform of u(x, y). The Fourier transform is performed to obtain U(fx, fy), followed by the multiplication of the phase term. Finally, the inverse Fourier transform gives the reconstructed image.

Next, we explain the calculation technique proposed in this study. When the resolution of the image sensors increases and the fast Fourier transform (FFT) algorithm is used for the diffraction calculation, the calculation time for the Fourier transform increases in proportion to N log N, where N is the number of pixels. The calculation time for the interpolations increases linearly with the resolution; therefore, this time needs to be reduced. In this study, the interpolation is represented by the convolution of an image and an interpolation kernel. The four interpolated images are obtained for the phase shifts δ = 0, π/2, π, and 3π/2 using the following equation:

I(x, y; δ) = Io(x, y; δ) * f(x, y),

where * denotes the convolution operation and f(x, y) represents the interpolation kernel. Various kinds of interpolation algorithms are obtained by changing the interpolation kernel. When the convolution theorem is used, the reconstructed image calculated by the angular spectrum method is given as follows:

v(x, y) = IFT[ FT[u0(x, y)] FT[f(x, y)] exp( i2πzr (1/λ² − fx² − fy²)^{1/2} ) ],

where FT represents the Fourier transform, and u0(x, y) is given as follows:

u0(x, y) = [Io(x, y; 0) − Io(x, y; π)] + i[Io(x, y; π/2) − Io(x, y; 3π/2)].

The above explanation is based on conventional phase-shifting techniques [13,31]. Figure 3 depicts the proposed technique. The image extractions and interpolations are removed. The image captured by the image sensor is multiplied by the complex-number matrix represented by C to obtain u0(x, y), and the Fourier transform is performed. Then, the Fourier transform of the interpolation kernel is multiplied, which is followed by the multiplication of the phase term. Finally, the inverse Fourier transform is performed to obtain the reconstructed image. The proposed technique completely removes the four image-interpolation calculations from the calculation procedure. However, two multiplication operations, using C and the Fourier transform of the kernel, are added. The Fourier transform of the kernel can be calculated in advance. Image interpolation generally requires more time than image multiplication. Therefore, the proposed technique can reduce the calculation time for the parallel phase-shifting technique.
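The proposed procedure thus reduces to elementwise multiplications and one FFT/IFFT pair. A minimal sketch follows; the function name, parameter values, and the layout of C (1, i, −1, −i for the phase shifts 0, π/2, π, 3π/2 on an assumed 2 × 2 micro-polarizer unit) are illustrative assumptions.

```python
import numpy as np

def reconstruct(hologram, C, kernel_ft, wavelength, pitch, z_r):
    """Proposed pipeline: multiply by C, FFT once, multiply by the
    precomputed kernel spectrum and the angular-spectrum phase, IFFT."""
    N = hologram.shape[0]
    u0 = C * hologram                       # replaces the four interpolations
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    phase = np.exp(2j * np.pi * z_r * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(u0) * kernel_ft * phase)

N = 256
# C assigns 1, 1j, -1, -1j to the pixels with phase shifts 0, pi/2, pi, 3pi/2
# (assumed 2 x 2 unit layout).
C = np.empty((N, N), dtype=complex)
C[0::2, 0::2], C[0::2, 1::2] = 1.0, 1.0j
C[1::2, 1::2], C[1::2, 0::2] = -1.0, -1.0j

kernel_ft = np.ones((N, N))   # FT of a delta kernel, i.e., no interpolation
hologram = np.random.rand(N, N)
v = reconstruct(hologram, C, kernel_ft,
                wavelength=525e-9, pitch=3.45e-6, z_r=0.1)
```

With zr = 0 and a flat kernel spectrum, the pipeline returns C multiplied by the hologram unchanged, which makes the role of each factor easy to verify.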
When the Fourier transform of the interpolation kernel is calculated in advance, the calculation time required for the proposed technique does not depend on the size of the interpolation kernel. The kernel size depends on the interpolation algorithm and usually increases with the accuracy of the algorithm. Research [32] has shown that the root mean square error of the reconstructed images decreases when highly accurate interpolation algorithms, such as bicubic interpolation (with a kernel size of 7 × 7), are used. The proposed technique can be combined with any diffraction calculation that uses a Fourier transform, not only the angular spectrum technique.
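This offline precomputation can be sketched as follows: embed the small kernel in a full-size array so that its center sits at the origin (with circular wraparound, matching the FFT's periodic convolution) and transform it once. The function name is illustrative.

```python
import numpy as np

def precompute_kernel_ft(kernel, shape):
    """Embed a small interpolation kernel in a full-size array, centered
    at the origin with wraparound, and Fourier-transform it once."""
    k = np.zeros(shape)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    # shift so the kernel center lands on index (0, 0)
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(k)

# A delta kernel leaves the image unchanged; its spectrum is flat.
delta = np.zeros((3, 3))
delta[1, 1] = 1.0
F = precompute_kernel_ft(delta, (16, 16))
```

After this one-time step, the per-frame cost is a single elementwise multiplication by the stored spectrum, regardless of the kernel size.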
The proposed technique assumes that the interpolation process is space-invariant. Recently, space-variant techniques [33] have been developed for more precise image interpolation. Because such interpolation processes cannot be expressed as a convolution, the proposed technique cannot be applied to them.
Experimental Verification

Figure 4 illustrates the Fresnel incoherent correlation holography (FINCH) recording system. The objective lens provides a converging wavefront from incoherent light diffracted by an object. The polarizer transforms the incoherent light into linearly polarized light with a polarization angle of 45°. The phase-modulation spatial light modulator (SLM) modulates the phase of the linearly polarized light in either the horizontal or the vertical direction but does not modulate the phase in the orthogonal direction. A quadratic phase distribution is displayed on the phase-modulation SLM so that the horizontally and vertically polarized lights have wavefronts with different radii of curvature. Then, the quarter-wave plate transforms the horizontally and vertically polarized lights into right-handed and left-handed circularly polarized lights. Finally, the polarization image sensor records the interference of the two circularly polarized lights. An LED with a central wavelength of 525 nm and a bandwidth of 30 nm was used as an incoherent light source. We used an SLM-100 (Santec Corporation, Aichi, Japan) with a resolution of 1440 × 1050 and a pixel pitch of 10.4 μm as the phase-modulation SLM. The polarization camera was a VP-TRI050S-P (Lucid Vision Labs, Inc., Richmond, British Columbia, Canada), which had a resolution of 2448 × 2048 and a pixel pitch of 3.45 μm. The central 2048 × 2048 pixels were used for hologram recording. The focal length of the objective lens was 20 mm, and the focal length of the quadratic phase distribution displayed on the SLM was 600 mm. The theoretical minimum resolvable distance was 1.60 μm [6]. A negative test target, USAF1951, was used as the recording object. The calculations were performed using a personal computer (Intel Core i7-8550U 4.0GHz CPU, 8GB RAM).
Figure 5a-c shows the reconstructed images obtained using the conventional technique, and Figure 5d-f shows those obtained using the proposed technique. For the image interpolations, we used the bilinear, bicubic, and Lanczos-3 algorithms with kernel sizes of 3 × 3, 7 × 7, and 11 × 11, respectively. The peak signal-to-noise ratios (PSNRs) between the images calculated using the conventional and proposed techniques were 61.2, 61.3, and 62.1 dB for the bilinear, bicubic, and Lanczos-3 algorithms, respectively. Thus, the reconstructed images obtained by the two techniques were virtually equivalent.
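For reference, the PSNR used for this comparison can be computed as follows. This is a minimal sketch; the exact peak normalization used here is an assumption (the maximum of the reference image is taken as the peak value, and definitions vary).

```python
import numpy as np

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB, with the peak taken from the
    reference image (an assumption; definitions vary)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(reference.max() ** 2 / mse)
```

For example, two unit-amplitude images differing by a uniform 0.01 offset give a PSNR of 40 dB.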
Table 1 shows a comparison of the calculation times required by the different algorithms. The calculation time was measured separately for the object wave generation and the diffraction calculation. The results are averages of 100 calculations. The proposed technique effectively reduced the calculation time for the object wave generation, and this time did not depend on the kernel size of the interpolation. The diffraction calculation time for the proposed technique was slightly longer than that for the conventional technique because the multiplication by the Fourier transform of the interpolation kernel was added to the diffraction calculation.
Figure 6 shows magnified reconstructed images obtained using the proposed technique, for comparing the effects of the interpolation algorithms. The highest-spatial-frequency line groups were extracted. From top to bottom, the line spacings are 2.76, 2.46, and 2.19 μm. In [32], the authors reported that bicubic interpolation produced smaller errors in the reconstructed images than bilinear interpolation; however, the calculation time required for bicubic interpolation was approximately twice that required for bilinear interpolation. In our experiments, no differences were observed among the reconstructed images obtained with the three interpolation algorithms because the line spacings were much larger than the theoretical minimum resolvable distance of the experimental system.

Discussion
Our experimental results show that the proposed technique can effectively reduce the calculation time for parallel phase-shifting digital holography without degrading the reconstructed images. The proposed technique is theoretically equivalent to the conventional technique; therefore, we discuss mainly the calculation time.
As shown in Table 1, the calculation time for the proposed technique was mostly occupied by the diffraction calculation. In this study, the FFT program offered by the Fastest Fourier Transform in the West (FFTW) library [34] was used for the diffraction calculation. The use of graphics processing units (GPUs) for the FFT calculation would substantially reduce the calculation time [35]. Ref. [35] reported that the diffraction calculation time using a GPU is approximately 5% of that using a CPU. When this reduction ratio is applied to our results, the total calculation time could be reduced to approximately 40 ms, and image reconstruction with a frame rate of approximately 40 Hz could be achieved. Considering recent advances in GPU technology, video-rate (60 Hz) image reconstruction should be achievable in the near future.
The proposed technique replaces the four image-interpolation calculations with two multiplication operations using fixed complex-number matrices. These multiplications can be performed in parallel; therefore, the use of GPUs will also reduce the calculation time for the multiplication operations.
In this study, we used a monochrome polarization image sensor. Color polarization image sensors have also been developed and used for incoherent digital holography [27]. When a color polarization image sensor is used, the four extracted images corresponding to the four phase shifts become sparser, and the interpolation kernel size increases. Therefore, the proposed technique is even more effective for color digital holography using color polarization image sensors.

Conclusions
In this study, we proposed a fast image reconstruction technique for parallel phase-shifting digital holography. The proposed technique replaces the four image-interpolation processes required for the conventional parallel phase-shifting technique with two multiplication operations using fixed complex-number matrices. The proposed technique was experimentally verified using a FINCH system employing the polarization image sensor. The proposed technique increased the calculation speed for the image interpolations by factors of 6.4, 9.3, and 15.2 for the bilinear, bicubic, and Lanczos-3 interpolation algorithms, respectively. The PSNRs between the images obtained by the conventional and proposed techniques were 61.2, 61.3, and 62.1 dB for the bilinear, bicubic, and Lanczos-3 algorithms, respectively.

Figure 1. Parallel phase-shifting technique using the polarization image sensor: (a) polarizing axes of the micro-polarizers attached to the pixels and (b) phase shifts for the pixels.

Figure 2. Conventional calculation procedure for parallel phase-shifting digital holography.

Figure 3. Proposed calculation procedure for parallel phase-shifting digital holography.

Figure 5. Reconstructed images obtained using (a-c) the conventional technique and (d-f) the proposed technique with the (a,d) bilinear, (b,e) bicubic, and (c,f) Lanczos-3 interpolation algorithms.

Figure 6. Magnified views of the reconstructed images obtained using the proposed technique with the (a) bilinear, (b) bicubic, and (c) Lanczos-3 interpolation algorithms.

Table 1. Calculation time for the conventional and proposed techniques.