Lensless Three-Dimensional Quantitative Phase Imaging Using Phase Retrieval Algorithm

Quantitative phase imaging (QPI) techniques are widely used for the label-free examination of transparent biological samples. QPI techniques can be broadly classified into interference-based and interferenceless methods. The interferometric methods, which record the complex amplitude, are usually bulky, with many optical components, and use coherent illumination. The interferenceless approaches, which need only the intensity distribution and work using phase retrieval algorithms, have gained attention as they require fewer resources, lower cost and less space, and can work with incoherent illumination. With rapid developments in computational optical techniques and deep learning, QPI has reached new levels of application. In this tutorial, we discuss one of the basic optical configurations of a lensless QPI technique based on the phase retrieval algorithm. Simulative studies on QPI of thin, thick and greyscale phase objects, with assistive pseudocodes and computational codes in Octave, are provided. Binary phase samples with positive and negative resist profiles were fabricated using lithography, and single-plane and two-plane phase objects were constructed. Light diffracted from a point object is modulated by the phase samples, and the corresponding intensity patterns are recorded. The phase retrieval approach is applied for 2D and 3D phase reconstructions. Commented codes in Octave for image acquisition and automation using a web camera in an open source operating system are provided.


Introduction
Quantitative phase imaging (QPI) techniques enable the measurement of thickness and refractive index variations in optically transparent objects, and have been widely used to study unstained biological samples, which modulate the phase but not the amplitude [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. A wide variety of techniques, such as the Schlieren method [4], the Zernike phase contrast method [5], tomography [6], interference microscopy [7][8][9] and digital holography [10], can be applied for QPI. Some of these techniques are preferred due to their easier implementation and improved performance. For instance, the phase contrast microscope invented by Frits Zernike in 1933 consisted of annular illumination and phase and amplitude modulating ring elements in a sequence that reduced the background light and highlighted the phase profile with an improved contrast [5]; it was widely adopted for phase imaging tasks. Nearly two decades later, a modified version of the phase contrast microscope, called differential interference contrast microscopy, was developed by Georges Nomarski [11]. In this technique, two orthogonally polarized, mutually coherent and spatially separated light waves are interfered, and the relative phase differences are imaged. In subsequent studies, interferometry and digital holography techniques evolved into reliable phase imaging methods [12][13][14]. In most interference-based phase imaging applications, a coherent light source is split into two beams: one passes through the sample, while the other is used as a reference. The resulting interferogram is processed to extract the phase distribution of the object. Multimodal imaging methods that measure both amplitude and phase have also been developed, based on dual beam approaches [15,16], coded aperture methods [17,18], Lloyd's mirror [19] and low-coherence two-beam holography techniques [20].
In all the above techniques, many optical components such as lenses, prisms, beam splitters, polarizers, etc., are needed for beam splitting, combining and for creation of interference patterns.
In 2004, a phase contrast imaging technique was introduced [21] in which collimated laser light was modulated by a phase sample and the corresponding intensity pattern was recorded. The well-known Gerchberg-Saxton algorithm (GSA) [22][23][24] was implemented to retrieve the phase of the object. This mode of imaging is one of the most economical available for phase contrast imaging applications. Later, multiple wavelengths were used to improve the convergence of the phase retrieval algorithm [25]. In all the above studies, phase imaging in only a single plane was demonstrated. With the development of smartphone devices, the lensless imaging method was converted into highly compact versions [26][27][28][29][30]. Furthermore, the technique evolved into shadow imaging [26,28], fluorescence imaging [30], and learning-based imaging [31]. Recent studies have shown the possibility of employing low-coherence light for QPI. The topic has been extensively reviewed [32], but few tutorials are available on it.
In this manuscript, we demonstrate a lensless, single camera shot, 3D QPI technique using incoherent light. A phase retrieval algorithm has been implemented with accurate scaling and sampling conditions, such that the algorithm converges within a few iterations to the optimal reconstruction, enabling the real-time monitoring of live biological specimens. The manuscript consists of seven sections. In the second section, the methodology and theoretical studies are presented. The third section describes the simulation and computational procedure. The simulative studies with different types of samples are presented in the fourth section. In the fifth section, the experiments and results are presented. The results are discussed in the sixth section. The summary and conclusion are presented in the final section. Five supplementary materials are provided, which consist of the original commented codes in Octave for simulation and for real experiments with the automation of the image sensor and phase retrieval. We believe that the presented tutorial will greatly benefit researchers beginning research in the area of QPI.

Methodology
The optical configuration of a basic lensless QPI system using a phase retrieval algorithm is shown in Figure 1. The illumination system consists of a light emitting diode and a pinhole. The light diffracted from the pinhole is incident on the sample located at a distance d1 from it. The light modulated by the sample is recorded by an image sensor located at a distance d2 from the sample. The complex amplitude incident on the sample is given as C1√Io Q(1/d1), and after the sample it is C2√Io Q(1/d1)exp[−jΦs(x, y)], where C1 and C2 are complex constants, √Io is the amplitude of the point object generated by the pinhole, Q is the quadratic phase factor given as Q(b) = exp(jπbR²/λ), where R = (x² + y²)^(1/2), and Φs(x, y) = (2πt(x, y)/λ)(n − 1), where t(x, y) and n are the thickness function and refractive index of the sample, respectively. In this case, it is assumed that n is constant for simplicity, but both n and t can vary simultaneously across the sample, and they cannot be separated without additional measurements. The complex amplitude at the image sensor is given by C2√Io Q(1/d1)exp[−jΦs(x, y)] ⊗ Q(1/d2), where '⊗' is a two-dimensional convolution operator. The intensity distribution recorded by the image sensor is given as |C2√Io Q(1/d1)exp[−jΦs(x, y)] ⊗ Q(1/d2)|². The phase of the sample is calculated using an iterative GSA [22]. There are two planes of interest: the sample plane and the sensor plane. It is assumed that the sample introduces only phase modulation and no amplitude modulation. For smaller distances, a spherical phase factor given by S[R(d)] = exp[j2πR(d)/λ], where R(d) = (x² + y² + d²)^(1/2), is needed. The schematic of the GSA is shown as an inset in Figure 1.
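As a concrete illustration of this forward model, the following is a minimal Python/NumPy sketch (the paper's supplementary codes are in Octave; the grid size, pixel pitch and square test object here are illustrative assumptions): it builds the quadratic phase factor Q(b), applies a phase-only sample and propagates to the sensor by an FFT-based convolution.

```python
import numpy as np

# Forward model sketch: point-source illumination, thin phase sample,
# Fresnel propagation to the sensor. Grid size, pixel pitch and the
# square test object are illustrative assumptions.
wavelength = 530e-9              # m
d1, d2 = 2.0, 0.2                # pinhole-to-sample and sample-to-sensor (m)
N, pitch = 256, 4.65e-6          # grid size (pixels) and sampling period (m)

x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2                 # R^2 = x^2 + y^2

def Q(b):
    """Quadratic phase factor Q(b) = exp(j*pi*b*R^2/lambda)."""
    return np.exp(1j * np.pi * b * R2 / wavelength)

def convolve2d_fft(u, h):
    """2D convolution realised by three FFT operations."""
    return np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(np.fft.ifftshift(h)))

# Thin phase sample: a centred square with a pi phase step
phi_s = np.zeros((N, N))
phi_s[N//2 - 20:N//2 + 20, N//2 - 20:N//2 + 20] = np.pi

# Field after the sample, then propagated to the sensor
field_after_sample = Q(1 / d1) * np.exp(-1j * phi_s)
sensor_field = convolve2d_fft(field_after_sample, Q(1 / d2))
intensity = np.abs(sensor_field)**2      # what the image sensor records
```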
Unlike lens-based imaging techniques, the resolution limit of a lensless system is set by the pixel size of the sensor, and the gap between the maximum achievable limit (λ/2) and the pixel size is often bridged using pixel super-resolution techniques [28,29]. The magnification of the system based on geometric projection is given as M = (1 + d2/d1), and the numerical aperture is given as NA = D/2(d1 + d2), where D is the diameter of the image sensor.
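A quick numerical check of these two expressions, using illustrative values for d1, d2 and the sensor size:

```python
# Geometric-projection magnification and numerical aperture of the lensless
# system; d1, d2 and the sensor extent D are illustrative values.
d1, d2 = 10.0e-2, 5.0e-2          # m
D = 1024 * 4.65e-6                # sensor extent: pixel count x pixel size (m)

M = 1 + d2 / d1                   # M = (1 + d2/d1)
NA = D / (2 * (d1 + d2))          # NA = D / 2(d1 + d2)
```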
J. Imaging 2020, 6, x FOR PEER REVIEW

In two beam interference-based phase measurements, the amplitude and phase at the image sensor are known, and so the complex amplitude at the sample plane can be directly calculated, either by a back propagation or by a Fourier transform of the complex amplitude recorded at the sensor plane. In the interferenceless phase retrieval approaches, a part of the information, namely the phase at the sensor plane, is not available. The amplitude at the sensor plane is available, and the amplitude at the sample plane is assumed or known. The algorithm begins with a complex amplitude constructed at the sensor plane, with the square root of the intensity recorded by the image sensor as the amplitude function and an initial guess phase that is either uniform or random. Previous studies indicated a faster convergence when the initial guess phase was uniform [21]. This complex amplitude is propagated to the sample plane by convolving it with S[−R(d)].
The resulting complex amplitude's amplitude is replaced by a uniform function, while the resulting phase, which is the phase estimate of the sample, is carried on to the next step. This modified complex amplitude is forward propagated by convolution with S[R(d)], and so on.
During every back and forth propagation, the amplitude functions are replaced by the measured and assumed functions, while the obtained phase functions are transferred as they are. By this process, the two phase functions gradually converge to satisfy the amplitude functions at the two planes. The entire process is iterated until the phase pattern is generated with a minimum error, which is measured by comparing the reconstructed phase pattern with the original phase pattern. There are numerous ways to quantify the error in the reconstructed phase. In this case, an error function defined by a cross-correlation between the original phase pattern and the reconstructed phase pattern is used for quantifying the similarity between the two. The correlation function is defined as C = |Φs(x, y) * Φr(x, y, p)|², where Φr(x, y, p) is the phase pattern reconstructed by the phase retrieval algorithm after p iterations and '*' is a 2D correlation operator [33,34] with a phase-only filter [35]. Once C(x = 0, y = 0) reaches a stable value, the iteration is stopped. The maximum possible value occurs when the cross-correlation reaches the autocorrelation value, i.e., when Φr(x, y, p) matches Φs(x, y). There are alternative cost functions, such as SSIM [36], entropy [37] and mean-squared error [38], which could be used in place of the cross-correlation. SSIM and mean-squared error require a reference image, while entropy is a blind evaluator.
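The stopping criterion can be sketched as follows, a hedged Python/NumPy illustration of a cross-correlation with a phase-only filter (the exact filter details in [33-35] may differ):

```python
import numpy as np

# Stopping-criterion sketch: cross-correlation of two phase patterns with a
# phase-only filter applied to one spectrum. Implementation details are an
# assumption; see [33-35] for the exact correlation filters.
def phase_only_correlation(a, b):
    A = np.fft.fft2(a)
    B_phase_only = np.exp(1j * np.angle(np.fft.fft2(b)))   # phase-only filter
    return np.fft.ifft2(A * np.conj(B_phase_only))

rng = np.random.default_rng(0)
phi_s = rng.uniform(0, np.pi, (64, 64))      # stand-in "original" phase

# When the reconstruction equals the original, the zero-shift value C(0,0)
# reaches the autocorrelation peak.
c = phase_only_correlation(phi_s, phi_s)
peak = np.abs(c[0, 0])
```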

Computational Procedure
The computer simulation begins with a sampling of the planes of interest. The length and breadth of the matrices are set to match twice the image sensor's length N1 and breadth N2 in pixels, and the pixel size ∆ of the sensor is set as the sampling period for the simulation. The pseudocode for the entire process is shown in Table 1. The first step involves the definition of the computational space, sampling, meshgrid and wavelength. In the next step, the amplitude and phase matrices at the sample and sensor planes are constructed. The intensity pattern recorded by the sensor with a size of N1 × N2 is normalized and its square root values are calculated, and the matrix is zero padded to a size of 2N1 × 2N2 to form the amplitude matrix A1 at the sensor plane. The phase matrix P1 at the sensor plane is set to a constant for all pixels, and the complex amplitude C1 is constructed using A1 and P1. The amplitude matrix A2 at the sample plane is constructed by assigning all the pixels a value of 1. The phase retrieval algorithm is executed with a for loop and begins at the sensor plane with the complex amplitude C1. The complex amplitude at the sensor plane is propagated to the sample plane by a convolution realized by three Fourier transform operations, given as Cn ⊗ S = F⁻¹[F(Cn) × F(S)], where F denotes the 2D Fourier transform [38]. At the sample plane, the phase matrix is carried on while the amplitude matrix is replaced by A2. The phase at the sample plane is extracted after several iterations, when the correlation coefficient C(x = 0, y = 0) does not vary beyond a threshold value. Table 1. Pseudocode for the phase retrieval algorithm.

Task No.	Task Steps

1. Defining the computational space
Step-I: Define the length and breadth of the computational space in pixels (2×N1, 2×N2), the sampling period ∆, the meshgrid and the wavelength λ.

2. Defining the initial matrices and the forward and backward propagators
Initial matrices: Sensor plane - amplitude A1 = √I, zero padded from N1 × N2 to 2N1 × 2N2 (A1 = 0 outside the sensor area), where I is the normalized recorded intensity pattern, and phase P1 = 0 (for all X, Y). Sample plane - amplitude A2 = 1 (for all X, Y).
Propagators: forward propagator S[R(d)] and backward propagator S[−R(d)].

3. Phase retrieval loop
Construct the initial complex amplitude C1 at the sensor plane as C1 = A1 exp(jP1).
Start for loop
Step-I: Convolve the complex amplitude with the backward propagator: C2 = C1 ⊗ S[−R(d)].
Step-II: Replace the amplitude of C2 with A2 and carry on the phase P2 at the sample plane, i.e., C2 = A2 exp(jP2).
Step-III: Convolve the modified complex amplitude C2 with the forward propagator: C1 = C2 ⊗ S[R(d)].
Step-IV: Replace the amplitude of C1 by A1 and carry on the phase for the next iteration.
Iterate Steps I-IV until the phase pattern is generated with a minimum error, indicated by the convergence of the correlation coefficient C(x = 0, y = 0) to a stable value. Display P2.
End for loop
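The pseudocode above can be sketched in Python/NumPy as follows (the supplementary codes accompanying the paper are in Octave; the grid, the propagator realisation and the test object here are illustrative assumptions):

```python
import numpy as np

# NumPy sketch of the pseudocode in Table 1 (the paper's codes are in Octave).
# Grid, distances, the spherical propagator realisation and the test object
# are illustrative assumptions; backward propagation uses the conjugate kernel.
N, pitch, wavelength, d = 128, 4.65e-6, 530e-9, 0.05

x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
Rd = np.sqrt(X**2 + Y**2 + d**2)                     # R(d)
S = np.exp(1j * 2 * np.pi * Rd / wavelength)         # S[R(d)]
H = np.fft.fft2(np.fft.ifftshift(S))                 # forward transfer function

def prop(u, forward=True):
    return np.fft.ifft2(np.fft.fft2(u) * (H if forward else np.conj(H)))

# Simulated measurement from a known phase object
phi_true = np.zeros((N, N))
phi_true[40:80, 40:80] = np.pi / 2
I_sensor = np.abs(prop(np.exp(1j * phi_true)))**2

A1 = np.sqrt(I_sensor)               # measured amplitude at the sensor plane
A2 = np.ones((N, N))                 # assumed amplitude at the sample plane
C1 = A1 * np.exp(1j * 0.0)           # uniform initial guess phase

for _ in range(50):
    C2 = prop(C1, forward=False)             # Step-I: sensor -> sample
    P2 = np.angle(C2)
    C2 = A2 * np.exp(1j * P2)                # Step-II: replace amplitude
    C1 = prop(C2, forward=True)              # Step-III: sample -> sensor
    C1 = A1 * np.exp(1j * np.angle(C1))      # Step-IV: replace amplitude

phi_rec = P2                          # retrieved sample-plane phase
```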

Simulative Studies
The simulative studies were carried out for a wavelength λ = 530 nm, d1 = 200 cm and d2 = 20 cm. Five test objects are considered. The first object is a Kangaroo phase object with a step height of t = 530 nm and a refractive index n = 1.5, used for the first type of simulation study. In this study, the object is illuminated by the light diffracted from a point object located at a large distance, and so the phase of the incident light is close to that of a plane wave. The amplitude and phase simulated using Fresnel propagation at distances of 10 cm, 15 cm and 20 cm after the sample plane are shown in Figure 2a-f. The intensity distribution simulated at d2 = 20 cm is used in the phase retrieval algorithm. The reconstructed phase patterns after 2, 20 and 50 iterations are shown in Figure 2g-i, respectively. The reference phase image is shown in Figure 2j. The plot of the correlation function against the number of iterations is shown in Figure 2k. The initial guess phase is selected as uniform in this case; the performance of the algorithm is highly sensitive to the initial guess phase. The computational codes for the simulation in Octave are given in Supplementary Material 1. The simulation conditions were straightforward, with a nearly plane wavefront at the input. If the incident wavefront is spherical with a smaller radius of curvature, then it is necessary to optimize the distance d2 to compensate for the effect of the spherical phase. The appropriate propagation distance for obtaining the best reconstruction is d2′ = d1d2/(d1 − d2), within the region of paraxial approximation. For instance, when d1 = 200 cm and d2 = 20 cm, the optimal propagation distance d2′ is 22.2 cm, which can be tested in the Octave codes provided in Supplementary Material 1.
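The quoted optimal distance can be checked numerically, assuming the effective-distance relation d2′ = d1d2/(d1 − d2) for spherical illumination:

```python
# Numerical check of the optimal reconstruction distance under spherical
# illumination, assuming the effective-distance relation d2' = d1*d2/(d1 - d2).
d1, d2 = 200.0, 20.0                  # cm, values used in the simulation
d2_eff = d1 * d2 / (d1 - d2)          # cm; about 22.2 cm for these values
```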

The next two objects, namely the logo and the emblem of Swinburne, are binary amplitude objects at the sample plane and the sensor plane, respectively. The phase patterns are calculated using the phase retrieval algorithm at the sample plane and the sensor plane, respectively. The images of the two objects and the corresponding amplitude and phase patterns at the sample and sensor planes after 200 iterations are shown in Figure 3a-f, respectively. The Octave code for simulating the optical fields and phase retrieval is given in Supplementary Material 2. A thick object is constructed from two thin phase objects, the Kangaroo phase object and the Swinburne logo, each with a step phase difference of π with respect to the background. The distance between the two objects is ∆d = 20 cm, with d1 = 200 cm and d2 = 20 cm. The Octave code for constructing the thick object from two thin objects is given in Supplementary Material 3. The images of the amplitude and phase at the sensor plane are shown in Figure 4a,b, respectively. The image reconstructions after 10 iterations executed for d2 = 20 cm and 40 cm are shown in Figure 4c,d, respectively. The presence of the background phase from the other object affects the convergence of the algorithm, as shown in Figure 4e, and so it is necessary to carefully observe the reconstructions as the number of iterations increases. The fourth object is a complex object consisting of a scattering layer of 200 × 200 pixels, with a scattering ratio σ = 0.12, synthesized using a Fourier GSA as described in [39]. The amplitude around the scattering layer is zero. The phase image of the scattering layer, the amplitude and phase at the sensor plane and the reconstructed phase image after 20 iterations are shown in Figure 5a-d, respectively.
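The two-plane (thick object) geometry can be sketched as follows, a hedged NumPy illustration in which the complex field is known and is simply back-propagated to two different distances to refocus each plane (in the actual method, the sensor-plane phase is first recovered by the GSA):

```python
import numpy as np

# Two-plane ("thick") object sketch: two thin phase planes separated by
# delta_d, imaged at the sensor, then refocused by back-propagating to two
# different distances. Grid, objects and distances are illustrative; in the
# actual method the sensor-plane phase comes from the phase retrieval step.
N, pitch, wavelength = 128, 4.65e-6, 530e-9
d2, delta_d = 0.20, 0.20                       # m

x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)

def prop(u, d):
    """Propagate by |d| (d > 0) or back-propagate (d < 0) via FFT convolution."""
    Rd = np.sqrt(X**2 + Y**2 + d**2)
    S = np.exp(1j * 2 * np.pi * Rd / wavelength)
    if d < 0:
        S = np.conj(S)
    return np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(np.fft.ifftshift(S)))

phi1 = np.zeros((N, N)); phi1[30:50, 30:50] = np.pi      # first thin plane
phi2 = np.zeros((N, N)); phi2[80:100, 80:100] = np.pi    # second thin plane

u = np.exp(1j * phi1)                       # field leaving plane 1
u = prop(u, delta_d) * np.exp(1j * phi2)    # cross the gap, pass plane 2
u_sensor = prop(u, d2)                      # reach the sensor

rec_plane2 = np.angle(prop(u_sensor, -d2))              # refocus on plane 2
rec_plane1 = np.angle(prop(u_sensor, -(d2 + delta_d)))  # refocus on plane 1
```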
The simulation is repeated for a linear phase with a maximum value of 8π, surrounded by a constant phase of zero but unit amplitude. The image of the linear phase with modulo-2π phase modulation and the amplitude and phase at the sensor are shown in Figure 5e-g, respectively. The reconstructed phase is shown in Figure 5h. The original phase profile is shown in Figure 5i. The unwrapped reconstructed phase profile is shown in Figure 5j. The simulation can be carried out using one of the Supplementary Materials 1-3, and the phase unwrapping can be achieved with the unwrap function in a for loop in Octave. The modulo-2π phase profiles of the original and reconstructed phase patterns are compared in Figure 5k. The performance of the GSA under different computational configurations has been studied [40]. The formation of greyscale objects with and without modulo-2π phase structures has been discussed in [41].
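The unwrapping step can be illustrated in Python with numpy.unwrap applied row by row, mirroring the Octave unwrap-in-a-for-loop suggestion (the ramp below is an illustrative stand-in for the object of Figure 5):

```python
import numpy as np

# Unwrapping sketch: a linear phase ramp with a maximum of 8*pi is wrapped
# modulo 2*pi and then recovered row by row with numpy.unwrap (the Octave
# unwrap function in a for loop does the same job).
N = 64
ramp = np.tile(np.linspace(0, 8 * np.pi, N), (N, 1))   # original linear phase
wrapped = np.angle(np.exp(1j * ramp))                  # modulo-2*pi version

unwrapped = np.vstack([np.unwrap(row) for row in wrapped])
max_error = np.max(np.abs(unwrapped - ramp))           # should be ~0
```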

Experiments and Results
The test samples for the study were fabricated using electron beam lithography (EBL; Raith 150-Two, Dortmund, Germany) [42][43][44] in PMMA 950K resist (MicroChemicals GmbH, Nicolaus-Otto-Str. 39, D-89079 Ulm, Germany) on indium tin oxide (ITO) glass substrates of 1.1 mm thickness. Two samples, namely the Swinburne object and the Star object, were fabricated with positive (resist removed) and negative (resist present) configurations, respectively. The thickness t of the PMMA resist was about 800 nm, and its refractive index closely matches that of glass. The patterns developed after EBL exposure represent mostly phase structures (a jump in PMMA thickness), which were used for the optical imaging experiments. An experimental set up based on Figure 1 was built using an LED (λc = 530 nm, FWHM = 33 nm) and a pinhole with a diameter of 100 µm. The distances were d1 = 10 cm and d2 = 5 cm. The experiment was first carried out using the single thin samples, one after the other. The intensity patterns were recorded by the image sensor (Thorlabs DCU223M, 1024 × 768 pixels, pixel size = 4.65 µm). The optical microscope images of the two objects, the intensity patterns recorded using the lensless imaging system and the phase images reconstructed using the phase retrieval algorithm are shown in Figure 6. A wavelength of 530 nm and a pixel size of 4.65 µm were used in the phase retrieval algorithm. The propagators were synthesized for d = 5 cm, but the best reconstruction was obtained for d = 6.5 cm. The difference in the values of d can be caused by a range of factors, such as the spherical phase of the incident light, errors in distance measurements, the thickness of the glass, etc. Since there can be only one value of the distance for every axial plane, in the first run the optimal distance, which resulted in the best focus, was estimated. Later, the same distance was used for phase retrieval for objects located in the same plane.
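The search for the best reconstruction distance can be automated; the sketch below scans candidate distances and scores each reconstruction with image entropy, an illustrative focus criterion (the paper only reports estimating, in a first run, the distance that gave the best focus):

```python
import numpy as np

# Autofocus sketch: scan candidate reconstruction distances and keep the one
# whose reconstruction minimises image entropy. The entropy criterion is an
# illustrative choice, not the paper's stated procedure.
def entropy(img):
    p = img / img.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def best_distance(u_sensor, propagate, distances):
    """Return the distance whose back-propagation gives the lowest entropy."""
    scores = [entropy(np.abs(propagate(u_sensor, -d))**2) for d in distances]
    return distances[int(np.argmin(scores))]

# Sanity check of the criterion: a concentrated (sharply focused) pattern has
# lower entropy than a spread-out (defocused) one.
focused = np.zeros((32, 32)); focused[16, 16] = 1.0
defocused = np.ones((32, 32))
```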
The phase maps and images shown in Figure 6 match exactly with the positive and negative resist profiles of the

Experiments and Results
The test samples for the study were fabricated using electron beam lithography (EBL; Raith 150 2 , Dortmund, Germany) [42][43][44] in PMMA 950K(MicroChemicals GmbH Nicolaus-Otto-Str. 39 D-89079, Ulm, Germany) resist on indium tin oxide (ITO) glass substrates with 1.1 mm thickness. Two samples, namely Swinburne object and Star object, were fabricated with positive (resist removed) and negative (resist present) configurations respectively. The thickness t of PMMA resist was about 800 nm, and its refractive index is closely matching that of glass. The developed patterns after EBL exposure represent mostly phase structures (a jump in PMMA thickness) which were used for the optical imaging experiments. An experimental set up based on Figure 1 is built using a LED (λ c = 530 nm, FWHM = 33 nm) and a pinhole with a diameter of 100 µm. The distances were d 1 = 10 cm and d 2 = 5 cm. The experiment was first carried out using the single thin samples, one after the other. The images of the intensity patterns were recorded by the image sensor (Thorlabs DCU223M, 1024 × 768 pixels, pixel size = 4.65 µm). The optical microscope images of the two objects, the intensity patterns recorded using the lensless imaging system and the phase images reconstructed using the phase retrieval algorithm, are shown in Figure 6. The wavelength value of 530 nm and a pixel size of 4.65 µm were used in the phase retrieval algorithm. The propagators were synthesized for d = 5 cm, but the best reconstruction was obtained for d = 6.5 cm. The difference in the values of d can be caused by a range of factors, such as the spherical phase of the incident light, errors in distance measurements, the thickness of glass, etc. Since there can be only one value of the distance for every axial plane, in the first run, the optimal distance which resulted in the best focus was estimated. Later, the same distance was used for phase retrieval for objects located in the same plane. 
The phase maps and images shown in Figure 6 match the positive and negative resist profiles of the Swinburne logo object and the star object, respectively, after fabrication. From the colour bar shown in Figure 6, it is seen that the phase difference between the two levels is approximately π, which corresponds to a thickness of 530 nm, less than the measured value of 800 nm. The difference between the retrieved and expected phase values is partly due to the broad spectral width of 33 nm, and partly due to experimental and computational errors. The results shown in Figure 6 are phase maps and images for a single axial plane and therefore do not constitute three-dimensional QPI.
To demonstrate QPI in 3D, a thick object was constructed by attaching two different thin phase objects with a 3 cm spacer. A phase object with a negative resist profile, 'Nan', was fabricated using EBL, as shown in Figure 7a, and the thick-object construction is shown in Figure 7b. The phase image reconstructions at the two planes are shown in Figure 7c,d, respectively. In Figure 7c, the star object is in focus, while in Figure 7d, the 'Nan' object is in focus. The phase retrieval algorithm was executed with a different distance, d + 3 cm, to obtain the reconstruction for the second plane. The combined phase reconstruction is shown in Figure 7e.
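Running the retrieval loop twice, once with d and once with d + 3 cm, yields the two axial planes. The sketch below is a hedged NumPy illustration of a Gerchberg-Saxton-style loop with a pure-phase object constraint; the function names, grid size, and constraint details are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

wavelength, pixel, N = 530e-9, 4.65e-6, 256

def propagate(field, d):
    """Angular-spectrum propagation by distance d."""
    fx = np.fft.fftfreq(N, pixel)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0)
    kz = 2 * np.pi / wavelength * np.sqrt(arg)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * d))

def retrieve_phase(intensity, d, iterations=2):
    """Iterate between sensor and object planes, enforcing the measured
    amplitude at the sensor and a pure-phase object at the sample plane."""
    sensor_amp = np.sqrt(intensity)
    field = sensor_amp.astype(complex)                    # initial guess: zero phase
    for _ in range(iterations):
        obj = propagate(field, -d)                        # back to object plane
        obj = np.exp(1j * np.angle(obj))                  # pure-phase constraint
        field = propagate(obj, d)                         # forward to sensor plane
        field = sensor_amp * np.exp(1j * np.angle(field)) # measured amplitude
    return np.angle(propagate(field, -d))

# With I as the recorded intensity, the two planes follow from two runs:
# phase1 = retrieve_phase(I, 0.065)         # first plane (star object in focus)
# phase2 = retrieve_phase(I, 0.065 + 0.03)  # second plane ('Nan' object in focus)
```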

Discussion
The phase step of the imaged objects (in air) is given by 2πt(n − 1)/λ, which is approximately 1.5π for PMMA of refractive index 1.5 and thickness 800 nm at a wavelength of 530 nm. This is a small phase change, typical for bio-imaging microscopy carried out in solution (water, refractive index 1.33) and for micro-optical elements. The lateral resolution of imaging is limited by the pixel size of the camera rather than by the NA of a lens, as in regular microscopy. The field of view is limited by the physical size of the image sensor. There is some freedom in the lensless case with diverging-wave illumination of the sample: the distance d2 can be adjusted to trade off imaging resolution against the field of view. The phase retrieval algorithm required as few as two iterations to converge, and the execution time was <2 s on a computer (Intel Core i5-8250U CPU @ 1.60-1.80 GHz). It must be noted that this computation time does not prevent real-time observation of a biological event or of the fabrication of a micro-optical element, unless the event requires real-time intervention.
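The numbers quoted above can be checked directly from the phase-step relation:

```python
import numpy as np

n, t, lam = 1.5, 800e-9, 530e-9        # PMMA index, thickness, wavelength
phase_step = 2 * np.pi * t * (n - 1) / lam
print(phase_step / np.pi)               # ≈ 1.51, i.e. about 1.5*pi

# Conversely, the ~pi step read off the colour bar in Figure 6 corresponds to:
t_retrieved = np.pi * lam / (2 * np.pi * (n - 1))
print(t_retrieved * 1e9)                # ≈ 530 (nm), below the measured 800 nm
```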

Summary, Conclusions, Outlook
A tutorial for a compact, single-camera-shot, three-dimensional QPI technique using a partially coherent light source has been presented with pseudocodes and Octave codes. The tutorial uses a set of low-cost and easily available components: low-cost LEDs, a basic camera or web camera, an open-source OS and the open-source Octave software, with spatial and temporal resolutions on par with regular microscopes, but at a cost at least an order of magnitude lower than conventional microscopes. Supplementary files for automating a web camera and for quantitative phase imaging using Octave have been provided. This integrated approach creates a viable tool for 3D QPI for the real-time observation of biological events. The technique has been demonstrated here for only two planes; in principle, it can be used to monitor multiple planes of an object simultaneously, in real time, a capability otherwise found only in 3D phase microscopes. The phase retrieval algorithm is highly sensitive to various optical and computational parameters, such as distances, external light, sampling, zero padding, object size, and the initial guesses for the phase and amplitude patterns. Once the optical system and the algorithm are calibrated to achieve optimal reconstruction, the method can be effective. One of the main drawbacks of using a source with a large spectral width is the uncertainty in the determination of phase values. While the contrast within a 0-2π cycle is lower than that of the original phase pattern, it is possible to measure phase profiles beyond the 0-2π range and unwrap them to the original phase without modulo-2π folding.
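The unwrapping mentioned above can be illustrated in one dimension. Assuming NumPy, `np.unwrap` removes the modulo-2π folds whenever neighbouring samples of the true phase differ by less than π:

```python
import numpy as np

true_phase = np.linspace(0, 6 * np.pi, 500)    # ramp spanning three 2*pi cycles
wrapped = np.angle(np.exp(1j * true_phase))    # folded into (-pi, pi]
unwrapped = np.unwrap(wrapped)                 # folds removed row by row
print(np.max(np.abs(unwrapped - true_phase)))  # error at floating-point level
```

For a 2D phase map, the same call can be applied along each axis in turn, although noisy measured phase generally calls for more robust 2D unwrapping schemes.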
This method is promising for bio-microscopy of cells and particularly for monitoring the interaction of bio-membranes with nanotextured surfaces to reveal mechano-bactericidal action [46]. There is strong application potential for the QPI method in laser tweezers [47], where optically transparent 3D objects are manipulated in complex environments with different reflectivities and shapes of micro-surroundings. Modifications inside transparent materials, phase changes and refractive index alterations during femtosecond laser writing of micro-optical elements [44] can be augmented with in situ monitoring using the proposed method. These are directions we plan to explore in applications of this QPI technique. The technique can also be integrated into polariscopy for orientation measurements [48]. We believe that the tutorial will be useful for beginners and young scientists who would like to begin research in the area of QPI and follow the latest developments in the area [49,50]. Furthermore, the low-cost, easy-to-implement nature of the method will encourage academic research activities on QPI in developing countries.