Sensors | Article | Open Access | Published: 28 February 2023

Three-Dimensional Digital Zooming of Integral Imaging under Photon-Starved Conditions

Research Center for Hyper-Connected Convergence Technology, School of ICT, Robotics and Mechanical Engineering, Institute of Information and Telecommunication Convergence (IITC), Hankyong National University, 327 Chungang-ro, Anseong 17579, Kyonggi-do, Republic of Korea
* Author to whom correspondence should be addressed.
This article belongs to the Collection 3D Imaging and Sensing System

Abstract

In this paper, we propose a new three-dimensional (3D) visualization method for objects at long distances under photon-starved conditions. In conventional three-dimensional visualization techniques, the visual quality of three-dimensional images may be degraded because object images at long distances have low resolution. Thus, our proposed method utilizes digital zooming, which crops and interpolates the region of interest in an image, to improve the visual quality of three-dimensional images at long distances. Under photon-starved conditions, three-dimensional images at long distances may not be visualized because too few photons are detected. Photon counting integral imaging can be used to solve this problem, but objects at long distances may still yield only a small number of photons. In our method, a three-dimensional image can be reconstructed by combining photon counting integral imaging with digital zooming. In addition, to estimate more accurate three-dimensional images at long distances under photon-starved conditions, multiple observation photon counting integral imaging (i.e., N observation photon counting integral imaging) is used. To show the feasibility of our proposed method, we perform optical experiments and calculate performance metrics such as the peak sidelobe ratio. The results show that our method can improve the visualization of three-dimensional objects at long distances under photon-starved conditions.

1. Introduction

Three-dimensional (3D) visualization of objects at long distances under photon-starved conditions has been a great challenge in many applications, such as defense, astronomy, and wildlife observation. In the military case, defense or reconnaissance operations that search for enemies at long distances, by day or night, are required. In astronomy, observing stars billions of light years away is a critical problem. Likewise, observing wild animals, which are often nocturnal and wary, requires imaging at long distances in low light.
However, it is difficult to visualize three-dimensional objects located at long distances with conventional imaging methods, since the lateral and longitudinal resolutions of the image at long distances are reduced by the limitations of the optical devices and the image sensor. When a camera takes a picture, an object at a long distance occupies fewer pixels than a close one. Therefore, the lateral and longitudinal resolutions (i.e., the three-dimensional resolution) of images of objects at long distances are reduced. To visualize three-dimensional objects at long distances, integral imaging [1,2,3], first proposed by G. Lippmann, can be utilized. It uses two-dimensional (2D) images with different perspectives captured by a lenslet array or camera array, where these images are referred to as elemental images. Integral imaging can provide full parallax and continuous viewing points of three-dimensional objects without viewing glasses or coherent light sources [1,2,3,4,5,6,7,8]. However, due to the limited three-dimensional resolution for objects at long distances, the visual quality of the reconstructed three-dimensional images may be degraded. This resolution problem becomes critical under photon-starved conditions: because the image sensor detects fewer photons carrying the information of an object at a long distance, the elemental images may not record the object at all. That is, the visual quality of three-dimensional images is degraded even further under photon-starved conditions.
To visualize three-dimensional objects under photon-starved conditions, photon counting integral imaging [9,10,11] has been proposed. It models a photon detector computationally with a statistical distribution, such as the Poisson distribution, because photons arrive rarely per unit time and space [11]. In addition, for three-dimensional image reconstruction, statistical estimation methods, such as maximum likelihood estimation (MLE) [9,10,11] or Bayesian approaches [12,13,14], are utilized. However, photon counting integral imaging may not accurately estimate three-dimensional images of objects at long distances under photon-starved conditions, since the object images have low resolution and an insufficient number of photons. Therefore, a new three-dimensional image visualization technique is required to solve these problems.
In this paper, we propose three-dimensional digital zooming of integral imaging under photon-starved conditions. It magnifies the region of interest (ROI) in elemental images captured by synthetic aperture integral imaging (SAII) [15]. Then, three-dimensional images at long distances are obtained by volumetric computational reconstruction (VCR) [16,17,18,19,20,21,22,23] and photon counting integral imaging [9,10,11,12,13,14]. Under photon-starved conditions, computational photon counting imaging detects photons throughout the entire scene, which may degrade the resolution of distant objects because too few photons fall on them. In our method, by contrast, photon counting imaging is applied only to the ROI of the elemental images, so more photons can be extracted from the ROI and three-dimensional images at long distances can be visualized. In addition, multiple observations of photon counting imaging are considered in our method, which we call "N observation photon counting imaging". This improves the visual quality of images under photon-starved conditions: photons are detected randomly in each observation, so multiple observations increase the number of samples. Additionally, to estimate more accurate three-dimensional images under photon-starved conditions, statistical estimation methods, such as MLE, are used in our method.
This paper is organized as follows. We describe the basic concepts of optical and digital zooming and of integral imaging in Section 2. Then, we introduce the computational photon counting method and our proposed method in Section 3. To show the feasibility of our proposed method, we present experimental results in Section 4. Finally, we conclude with a summary in Section 5.

3. Three-Dimensional Visualization of Objects at Long Distances under Photon-Starved Conditions

3.1. Computational Photon Counting Imaging

Under photon-starved conditions, it is difficult to record object information from the scene with conventional imaging methods due to the insufficient number of photons. To overcome this problem, computational photon counting imaging is utilized in our method. Photon detection can be modeled as a Poisson random process under these conditions because photons arrive rarely per unit time and space [11]. Computational photon counting imaging can be described as follows [9,10,11,12,13,14]:
$$\lambda(x, y) = N_p \, \frac{I(x, y)}{\sum_{x=1}^{N_x} \sum_{y=1}^{N_y} I(x, y)} \tag{6}$$

$$C(x, y) \mid \lambda(x, y) \sim \mathrm{Poisson}\big(\lambda(x, y)\big) \tag{7}$$
where λ(x, y) is the normalized irradiance of the elemental image, which has unit energy; I(x, y) is the recorded two-dimensional image; C(x, y) is the photon counting image; and N_p is the expected number of photons from the elemental image. Since λ has unit energy by Equation (6), N_p photons are extracted randomly from the recorded two-dimensional image by Equation (7). To obtain a three-dimensional image under photon-starved conditions, photon counting integral imaging [11] can be used. First, to estimate the recorded two-dimensional image from the photon counting image, maximum likelihood estimation (MLE) is applied as follows [9,10,11]:
$$L(\lambda_{kl}) = \prod_{k=0}^{K-1} \prod_{l=0}^{L-1} \frac{\lambda_{kl}(x, y)^{C_{kl}} \, e^{-\lambda_{kl}(x, y)}}{C_{kl}!} \tag{8}$$

$$l(\lambda_{kl}) \propto \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} C_{kl} \ln \lambda_{kl}(x, y) - \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} \lambda_{kl}(x, y) \tag{9}$$

$$\frac{\partial\, l(\lambda_{kl})}{\partial \lambda_{kl}} = \frac{C_{kl}}{\lambda_{kl}(x, y)} - 1 = 0 \tag{10}$$

$$\hat{\lambda}_{kl} = C_{kl} \tag{11}$$
where λ_kl is the normalized irradiance of the kth-column, lth-row elemental image; L(·) and l(·) are the likelihood and log-likelihood functions; and λ̂_kl is the estimated two-dimensional image of the scene for the kth-column, lth-row view, respectively. By using MLE and VCR, a three-dimensional image can be visualized under photon-starved conditions as follows [9,10,11]:
$$I_p(x, y, z) = \frac{1}{O(x, y, z)} \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} \hat{\lambda}_{kl}\big(x + \Delta S_{x_k},\, y + \Delta S_{y_l}\big) \tag{12}$$
where I_p(x, y, z) is the three-dimensional image under photon-starved conditions obtained by photon counting integral imaging, and O(x, y, z) is the overlapping matrix counting how many shifted elemental images contribute to each pixel. However, three-dimensional objects at long distances under photon-starved conditions may still not be visualized because too few photons fall within the ROI of the scene. Therefore, in this paper, we combine digital zooming with photon counting integral imaging for three-dimensional objects at long distances under photon-starved conditions, as described in the next section. In our method, a modified VCR is used for digital zooming, where the shifting pixels are recalculated because the elemental images are cropped to the ROI of the scene. In addition, to improve the visual quality of the photon counting images, N observation photon counting imaging is proposed.
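To make Equations (6)–(12) concrete, the following NumPy sketch is a minimal illustration, not the authors' code; names such as photon_counting, n_photons, and est_imgs are our assumptions. It draws a photon counting image, takes the MLE estimate, and performs the shift-and-average reconstruction.

```python
import numpy as np

def photon_counting(elemental_img, n_photons, rng=np.random.default_rng()):
    """Eqs. (6)-(7): scale the unit-energy normalized irradiance by the
    expected photon number N_p, then draw a Poisson count at every pixel."""
    lam = n_photons * elemental_img / elemental_img.sum()   # Eq. (6)
    return rng.poisson(lam)                                 # Eq. (7)

def mle_estimate(photon_img):
    """Eq. (11): the MLE of the Poisson rate is the observed count itself."""
    return photon_img.astype(float)

def vcr(est_imgs, dsx, dsy):
    """Eq. (12): shift each estimated elemental image by its per-camera
    pixel offset and average the overlapping contributions.
    est_imgs[k][l] is the kth-column, lth-row estimate; dsx, dsy are the
    (rounded, integer) shifting pixels at the chosen reconstruction depth."""
    K, L = len(est_imgs), len(est_imgs[0])
    H, W = est_imgs[0][0].shape
    recon = np.zeros((H + (L - 1) * dsy, W + (K - 1) * dsx))
    overlap = np.zeros_like(recon)                          # O(x, y, z)
    for k in range(K):
        for l in range(L):
            y0, x0 = l * dsy, k * dsx
            recon[y0:y0 + H, x0:x0 + W] += est_imgs[k][l]
            overlap[y0:y0 + H, x0:x0 + W] += 1
    return recon / np.maximum(overlap, 1)
```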

3.2. Three-Dimensional Digital Zooming of Integral Imaging and N Observation Photon Counting Integral Imaging under Photon-Starved Conditions

As mentioned earlier, conventional integral imaging suffers from poor lateral resolution and poor depth resolution for objects at long distances, and the resolution of three-dimensional images of such objects becomes even worse under photon-starved conditions. To solve these resolution problems, our method utilizes digital zooming and VCR to visualize three-dimensional images at long distances, as illustrated in Figure 3.
Figure 3. Procedure of our method.
As mentioned earlier, digital zooming obtains new elemental images by cropping the ROI from the original elemental images before overlapping the elemental images with shifting pixels on the reconstruction plane. When the ROI is cropped from an original elemental image, the aspect ratio of the original elemental image is preserved. Then, the new elemental images are interpolated to the same size as the original elemental images. Therefore, the new VCR for three-dimensional objects at long distances, considering digital zooming, can be written as follows:
$$N_x' = \frac{1}{m} N_x, \qquad N_y' = \frac{1}{m} N_y \qquad (m > 1) \tag{13}$$

$$N_x' : N_x = z' : z \;\Rightarrow\; z' = \frac{1}{m} z \tag{14}$$

$$\Delta S_x' = \frac{N_x' \times p_x \times f}{c_x \times z'} \tag{15}$$

$$\Delta S_y' = \frac{N_y' \times p_y \times f}{c_y \times z'} \tag{16}$$

$$\Delta S_{x_k}' = \mathrm{round}\big(k \times \Delta S_x'\big), \quad \text{for } k = 0, 1, 2, \ldots, K-1 \tag{17}$$

$$\Delta S_{y_l}' = \mathrm{round}\big(l \times \Delta S_y'\big), \quad \text{for } l = 0, 1, 2, \ldots, L-1 \tag{18}$$

$$\tilde{O}(x, y, z') = \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} \mathbb{1}\big(x + \Delta S_{x_k}',\, y + \Delta S_{y_l}'\big) \tag{19}$$

$$\tilde{I}(x, y, z') = \frac{1}{\tilde{O}(x, y, z')} \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} \tilde{E}_{kl}\big(x + \Delta S_{x_k}',\, y + \Delta S_{y_l}'\big) \tag{20}$$
where N′_x and N′_y are the width and height of the ROI cropped from the original elemental image; m is the zooming ratio, a real value greater than one; z′ is the zooming distance produced by digital zooming; ΔS′_x and ΔS′_y are the numbers of shifting pixels for the new elemental images; ΔS′_{x_k} and ΔS′_{y_l} are the rounded numbers of shifting pixels of the kth-column, lth-row new elemental image; Õ(x, y, z′) is the new overlapping matrix; Ẽ_kl is the kth-column, lth-row new elemental image; and Ĩ(x, y, z′) is the three-dimensional image by digital zooming, respectively. The ROI is always smaller than the original elemental images, and the new elemental images are interpolated to the same size as the original elemental images. Thus, through Equations (13)–(20), the shifting pixels are changed, and a three-dimensional image by digital zooming can be obtained.
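The following sketch mirrors Equations (13)–(18) and the crop-and-interpolate step. As before, this is our hedged illustration with hypothetical argument names, and OpenCV's resize is used only as one possible interpolation backend.

```python
import cv2  # OpenCV, used here only for the interpolation step
import numpy as np

def zoom_parameters(Nx, Ny, m, px, py, f, cx, cy, z, K, L):
    """Eqs. (13)-(18): cropped ROI size, zoomed depth, and rounded shifting
    pixels for the new elemental images produced by digital zooming.
    Nx, Ny: original image size in pixels; px, py: camera pitch;
    f: focal length; cx, cy: sensor size; z: reconstruction depth."""
    Nx_p, Ny_p = Nx / m, Ny / m                 # Eq. (13)
    z_p = z / m                                 # Eq. (14)
    dSx = Nx_p * px * f / (cx * z_p)            # Eq. (15)
    dSy = Ny_p * py * f / (cy * z_p)            # Eq. (16)
    dSxk = [round(k * dSx) for k in range(K)]   # Eq. (17)
    dSyl = [round(l * dSy) for l in range(L)]   # Eq. (18)
    return z_p, dSxk, dSyl

def digital_zoom(elemental, roi, interp=cv2.INTER_LINEAR):
    """Crop the ROI (x, y, w, h) and interpolate it back to the size of the
    original elemental image; the aspect ratio is preserved by construction."""
    x, y, w, h = roi
    H, W = elemental.shape[:2]
    return cv2.resize(elemental[y:y + h, x:x + w], (W, H),
                      interpolation=interp)
```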
Three-dimensional objects at long distances under photon-starved conditions can be visualized by combining our digital zooming with computational photon counting integral imaging. However, since the new elemental images have limited resolution, photon counting integral imaging alone may not produce three-dimensional images of sufficient visual quality. Therefore, in this paper, we propose N observation photon counting imaging, as depicted in Figure 4. In this method, photon counting images are generated and estimated N times; the estimates are then accumulated and averaged. The averaged estimates have better visual quality because the number of samples for each pixel of the photon counting image increases. To verify the feasibility of our method, we describe our experimental setup and results in the next section; a minimal sketch of the N observation estimate follows Figure 4.
Figure 4. N observation photon counting imaging.
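A minimal sketch of the N observation estimate, under the same assumptions as the earlier snippets (hypothetical names; averaging N independent Poisson realizations reduces the per-pixel variance of the estimate by a factor of N, and N = 10 matches the experiments in Section 4):

```python
import numpy as np

def n_observation_estimate(elemental_img, n_photons, N=10,
                           rng=np.random.default_rng()):
    """Generate N independent photon counting images of the same scene,
    take the MLE of each (Eq. (11), i.e., the counts themselves), and
    average the N estimates pixel by pixel."""
    lam = n_photons * elemental_img / elemental_img.sum()   # Eq. (6)
    return np.mean([rng.poisson(lam) for _ in range(N)], axis=0)
```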

4. Simulation and Experimental Results

In this section, we present our simulation and experimental setup for obtaining the elemental images by SAII. Then, we show the simulation and experimental results to prove the feasibility of our method.

4.1. Simulation and Experimental Setup

Before implementing the optical experiment, we carried out a computer simulation in “Blender”. The simulation environment is depicted in Figure 5.
Figure 5. Simulation setup.
The object is an ‘ISO-12233’ resolution chart located 1600 mm from the camera array. In this simulation setup, the focal length and sensor size of the virtual camera are 50 mm and 36 mm (H) × 24 mm (V), respectively. The size of each elemental image is 1080 (H) × 720 (V) pixels, and the pitch between cameras is 2 mm. The total number of elemental images is 5 (H) × 5 (V). With this setup, we evaluate the performance of interpolation methods such as “nearest”, “bilinear”, and “bicubic”.
The experimental setup for recording the elemental images by SAII is illustrated in Figure 6. A yellow helicopter and a fire truck are used as three-dimensional objects at close distance (40 mm) and long distance (520 mm), respectively, because two objects at different distances are required to demonstrate the zooming ability. In this setup, a Nikon D850 is used as the image sensor, and a Nikon DX AF-S Nikkor 18–55 mm lens is used as the camera lens. The sensor size is 36 mm (H) × 24 mm (V), and the focal length of the camera lens is set to 18 mm. The total number of recorded elemental images is 5 (H) × 5 (V), each elemental image has 5408 (H) × 3600 (V) pixels, and the pitch between cameras is 0.5 mm. For digital zooming, the size of the ROI is set to 676 (H) × 450 (V) pixels, which is eight times smaller than the original elemental image; that is, the magnification ratio is eight. Thus, when a three-dimensional image is reconstructed by VCR with digital zooming, the distance between the objects and the camera is divided by the magnification ratio, i.e., the ratio between the size of the elemental image and that of the cropped ROI. Figure 7a shows the center image among the elemental images captured by our experimental setup, and Figure 7b shows the corresponding new elemental image for digital zooming, i.e., the cropped and interpolated ROI of Figure 7a.
Figure 6. Experiment setup.
Figure 7. (a) 13th elemental image and (b) new elemental image for ROI of (a).
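For illustration only, the digital zooming step with the numbers above might look as follows; the filename and ROI origin are hypothetical, while the ROI size, output size, and bilinear interpolation follow the setup described here and in Section 4.2.

```python
import cv2

# Crop a 676 x 450 ROI from the 5408 x 3600 elemental image (m = 8) and
# interpolate it back to full size with bilinear interpolation.
elemental = cv2.imread("elemental_13.png")   # assumed filename
x0, y0 = 2366, 1575                          # illustrative ROI origin
roi = elemental[y0:y0 + 450, x0:x0 + 676]
zoomed = cv2.resize(roi, (5408, 3600), interpolation=cv2.INTER_LINEAR)
```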
To demonstrate visualization under photon-starved conditions, we captured the elemental images under these conditions, as shown in Figure 8. The elemental image under photon-starved conditions is shown in Figure 8a; it was captured with our experimental setup using an exposure time of 13 s to gather photons carrying the object information at long distance. Then, for digital zooming, a new elemental image was generated, as shown in Figure 8d. To estimate the elemental images under these conditions, photon counting imaging with maximum likelihood estimation was used, where the numbers of extracted photons are 97,344 and 973,440. Finally, photon counting images without and with digital zooming were obtained, as shown in Figure 8b,c and Figure 8e,f, respectively.
Figure 8. Elemental images and estimated images under photon-starved conditions. (a) 13th elemental image, (b,c) estimated images of (a) by computational photon counting imaging with 97,344 and 973,440 photons, respectively, (d) new elemental image for the ROI of (a), (e,f) estimated images of (d) by computational photon counting imaging with 97,344 and 973,440 photons, respectively.

4.2. Experimental Results

Figure 9 shows the three-dimensional images obtained in simulation. Figure 9a–c are the three-dimensional images reconstructed by digital zooming VCR with the “bicubic”, “bilinear”, and “nearest” interpolation methods, respectively. All were reconstructed at the same depth of 200 mm and digitally zoomed with a magnification ratio of eight.
Figure 9. Simulation results with (a) bicubic, (b) bilinear, and (c) nearest, respectively.
To verify our method, we calculate the peak sidelobe ratio (PSR) of the correlation at different depths as the performance metric. The PSR of the correlation peak is defined as the number of standard deviations by which the peak exceeds the mean value of the correlation surface. It can be calculated by [28]:
$$\mathrm{PSR} = \frac{\max[c(x)] - \mu_c}{\sigma_c} \tag{21}$$
where μ_c is the mean and σ_c is the standard deviation of the correlation surface. The higher the PSR value, the better the recognition performance.
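As a minimal sketch, Equation (21) can be computed from a plain 2D cross-correlation as below; note that reference [28] uses advanced correlation filters, so this is an illustrative simplification, not the exact metric pipeline used in the paper.

```python
import numpy as np
from scipy.signal import correlate2d

def psr(reference, reconstruction):
    """Eq. (21): how many standard deviations the correlation peak
    rises above the mean of the correlation surface."""
    c = correlate2d(reconstruction, reference, mode="same")
    return (c.max() - c.mean()) / c.std()
```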
To calculate the PSR value, the image reconstructed at 200 mm is used as the reference. Since the magnification ratio is eight, the focusing depth is 1600/8 = 200 mm by Equation (14). Table 1 shows the CPU time for obtaining three-dimensional images at 200 mm depth and the corresponding PSR values. The computer used in this simulation has an AMD Ryzen 7 1700X eight-core processor at 3.40 GHz and 16 GB of RAM. The CPU times of the interpolation methods are almost the same. The “nearest” method is the fastest, but its PSR value at the focusing depth is the lowest. As shown in Figure 10, the “nearest” method has the worst PSR value, and the “bicubic” method has the best. The PSR curves of the “bicubic” and “bilinear” methods across reconstruction depths are very similar to each other; only the “nearest” method fails to peak at the focusing depth (200 mm). In Table 1, the speed of the “bilinear” method is almost the same as that of the “nearest” method, while its PSR value is almost the same as that of the “bicubic” method. The “bicubic” method is the slowest, but its PSR value is higher than that of the other interpolation methods. The “bilinear” method is faster than the “bicubic” method but slower than the “nearest” method; since it interpolates each new pixel from the four nearest pixels weighted by distance, its PSR value is better than that of the “nearest” method and worse than that of the “bicubic” method, which uses a weighted combination of the 16 nearest pixels. Overall, from Figure 9, Table 1, and Figure 10, the “nearest” interpolation method gives the worst result among the three interpolation methods and the “bicubic” method gives the best; however, the “bilinear” method is nearly as fast as the “nearest” method while producing results almost as good as the “bicubic” method.
Table 1. Processing time and PSR value for interpolation methods such as nearest, bilinear, and bicubic.
Figure 10. PSR results of simulation by digital zooming VCR with “bicubic” (blue line), “bilinear” (orange line), and “nearest” (magenta line) interpolation methods.
Table 2 shows the PSR values of three-dimensional images reconstructed at various depths using digital zooming with the “nearest”, “bilinear”, and “bicubic” interpolation methods, where the magnification ratio is 8.8. The expected reconstruction depth of the focused object is therefore 1600/8.8 = 181.82 mm by Equation (14). However, in Table 2, the peak PSR values (727.47, 720.57, and 705.19 for “bicubic”, “bilinear”, and “nearest”, respectively) occur at 186.36 mm, 196.36 mm, and 186.36 mm, not at the focusing depth. Thus, Table 2 shows that the location of the peak PSR value has an error for each interpolation method.
Table 2. PSR values of simulation results via various reconstruction depths by digital zooming with interpolation methods such as nearest, bilinear, and bicubic. Magnification ratio is 8.8.
Figure 11 shows three-dimensional images from our experiment under normal illumination. Figure 11a is the three-dimensional image reconstructed by VCR without digital zooming; here, the fire truck may not be recognized because of its low resolution. Figure 11b is the three-dimensional image reconstructed by VCR with digital zooming, where the fire truck can be recognized. To show that our method outperforms the conventional method, we cropped the “911” characters at various reconstruction depths. Figure 11c,d were reconstructed at 344 mm and 392 mm by the conventional method without digital zooming, and Figure 11e,f were reconstructed at 43 mm and 49 mm by our method, respectively. The results obtained by our method have noticeably better visual quality than those of the conventional method.
Figure 11. (a) Reconstructed three-dimensional image by conventional method, (b) reconstructed three-dimensional image by our proposed method, (c,d) cropped image from the results by conventional method at 344 mm and 392 mm, and (e,f) cropped image from the results by our proposed method at 43 mm and 49 mm, respectively.
In Table 3, the PSR values of the “bilinear” method are better than those of the other two interpolation methods (“nearest” and “bicubic”) across all depths. In addition, its peak value (60.997) is located at 43 mm, the correct object depth, whereas the other two methods have peak PSR values at wrong object depths. Moreover, since pixel values produced by the “bicubic” method may be negative, the “bilinear” method gives the best result. Based on this result, we utilize the “bilinear” interpolation method in our proposed method under photon-starved conditions.
Table 3. Comparison of PSR values for interpolation methods, such as nearest, bilinear, and bicubic via reconstruction depths under photon-starved conditions.
Figure 12 shows the reconstructed three-dimensional images under photon-starved conditions. The fire truck may not be visualized by the conventional method due to its low resolution and the insufficient number of photons, as shown in Figure 12a–c. In contrast, the three-dimensional image reconstructed by photon counting integral imaging with digital zooming, shown in Figure 12d–f, has better visual quality. However, its visual quality is still not sufficient for recognition. Therefore, in this paper, we use N observation photon counting integral imaging to improve the visual quality of the reconstructed three-dimensional image under photon-starved conditions. As shown in Figure 12g–i, our method can visualize the reconstructed three-dimensional image at long distance under photon-starved conditions and improves its visual quality enough for recognition.
Figure 12. (a) Reconstructed three-dimensional image under photon-starved conditions by conventional method, (b,c) cropped images from results by computational photon counting imaging without digital zooming at 344 mm and 392 mm, (d) reconstructed three-dimensional image under photon-starved conditions by computational photon counting imaging with digital zooming, (e,f) cropped images from results by computational photon counting imaging with digital zooming at 43 mm and 49 mm, where 97,344 photons are used, (g) reconstructed three-dimensional image under photon-starved conditions by our method, where 97,344 photons and 10 observations are used, (h,i) cropped images from results by our method at 43 mm and 49 mm, respectively.
For the PSR graph in Figure 13, the three-dimensional image reconstructed at 43 mm is used as the reference image. The distance of 43 mm follows from Equation (14), because the object is focused at 344 mm and m is 8. The red line in Figure 13 is the PSR result of the conventional method at different depths. The magenta and blue lines in Figure 13 are the PSR results of our method with and without N observation photon counting integral imaging, respectively. Because of digital zooming, the reconstruction depths of our method are eight times smaller than those of the conventional method; they are therefore rescaled by multiplying the reconstruction depths of our method by 8. Table 4 shows the PSR values of the conventional method, digital zooming, and our proposed method, where the conventional method uses integral imaging and computational photon counting imaging only. The PSR curve of the conventional method (red line) is flat and peaks at the wrong reconstruction depth of the object. In contrast, the PSR curve of our method (magenta line) has a sharp peak at the correct reconstruction depth of the object. The peak PSR value of the conventional method is 32.1834, at 356 mm and 360 mm. In contrast, the peak PSR values of our method with and without N observation photon counting integral imaging are 60.9974 and 41.7063, respectively, both at 43 mm (i.e., 43 × 8 = 344 mm). Therefore, Figure 13 and Table 4 show that our proposed method can provide a more accurate position of three-dimensional objects at long distances with improved visual quality under photon-starved conditions.
Figure 13. PSR results by computational photon counting integral imaging without digital zooming (conventional method, red line), computational photon counting integral imaging with digital zooming (blue line) and N observation photon counting integral imaging with digital zooming (magenta line).
Table 4. Comparison among conventional imaging method, digital zooming method, and our proposed method via various reconstruction depths by PSR.

5. Conclusions

In this paper, we have presented three-dimensional visualization of objects at long distances under photon-starved conditions using digital zooming and N observation photon counting integral imaging. The conventional method has lower lateral and longitudinal resolution for objects at long distances; our method solves this resolution problem by using digital zooming. In addition, under photon-starved conditions, the three-dimensional information of objects at long distances can be obtained more accurately via digital zooming VCR with N observation photon counting integral imaging. As shown in the experimental results, our method provides better visual quality of three-dimensional objects at long distances under photon-starved conditions than the conventional method. Therefore, we believe that our method may be used in many applications, such as defense and astronomy. However, our method has a drawback. Because digital zooming relies on interpolation algorithms to improve visual quality, it is difficult to zoom in on objects at very long distances or to recover fine object detail. That is, when the object is located too far away and the required magnification is too large, interpolation fails because the nearest pixels no longer carry useful information. In such cases, additional optical zooming is required. We will continue to examine these problems in future work.

Author Contributions

Conceptualization, G.Y. and M.C.; methodology, G.Y.; software, G.Y.; validation, G.Y.; formal analysis, G.Y.; investigation, G.Y. and M.C.; resources, G.Y.; writing—original draft preparation, G.Y.; writing—review and editing, M.C.; visualization, G.Y.; supervision, M.C.; project administration, M.C.; funding acquisition, M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2020R1F1A1068637).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
3D    Three-dimensional
2D    Two-dimensional
ROI   Region of interest
VCR   Volumetric computational reconstruction
SAII  Synthetic aperture integral imaging
MLE   Maximum likelihood estimation
PSR   Peak sidelobe ratio

References

  1. Lippmann, G. La Photographie Integrale. Comp. Rend. Acad. Sci. 1908, 146, 446–451. [Google Scholar]
  2. Sokolov, A.P. Autostereoscopy and Integral Photography by Professor Lippmann’s Method; Moscow State University: Moscow, Russia, 1911. [Google Scholar]
  3. Ives, H.E. Optical properties of a Lippmann lenticulated sheet. J. Opt. Soc. Amer. 1931, 21, 171–176. [Google Scholar]
  4. Burckhardt, C.B. Optimum parameters and resolution limitation of integral photography. J. Opt. Soc. Amer. 1968, 58, 71–76. [Google Scholar] [CrossRef]
  5. Okoshi, T. Three-Dimensional Imaging Techniques; Academic Press: New York, NY, USA, 1976. [Google Scholar]
  6. Okoshi, T. Three-Dimensional displays. Proc. IEEE 1980, 68, 548–564. [Google Scholar] [CrossRef]
  7. Javidi, B.; Okano, F.; Son, J.-Y. Three-Dimensional Imaging, Visualization, and Display Technology; Springer: New York, NY, USA, 2009. [Google Scholar]
  8. Cho, M.; Daneshpanah, M.; Moon, I.; Javidi, B. Three-Dimensional Optical Sensing and Visualization Using Integral Imaging. Proc. IEEE 2011, 99, 556–575. [Google Scholar]
  9. Cho, M.; Javidi, B. Three-dimensional photon counting integral imaging using moving array lens technique. Opt. Lett. 2012, 37, 1487–1489. [Google Scholar] [CrossRef]
  10. Cho, M.; Javidi, B. Three-dimensional photon counting image with axially distributed sensing. Sensors 2016, 16, 1184. [Google Scholar] [CrossRef]
  11. Tavakoli, B.; Javidi, B.; Watson, E. Three-dimensional visualization by photon counting computational integral imaging. Opt. Exp. 2008, 16, 4426–4436. [Google Scholar] [CrossRef]
  12. Cho, M. Three-dimensional color photon counting microscopy using Bayesian estimation with adaptive priori information. Chin. Opt. Lett. 2015, 13, 070301. [Google Scholar]
  13. Jung, J.; Cho, M.; Dey, D.-K.; Javidi, B. Three-dimensional photon counting integral imaging using Bayesian estimation. Opt. Lett. 2010, 35, 1825–1827. [Google Scholar] [CrossRef]
  14. Lee, J.; Cho, M. Enhancement of three-dimensional image visualization under photon-starved conditions. Appl. Opt. 2022, 61, 6374–6382. [Google Scholar] [CrossRef] [PubMed]
  15. Jang, J.-S.; Javidi, B. Three-dimensional synthetic aperture integral imaging. Opt. Lett. 2002, 27, 1144–1146. [Google Scholar] [CrossRef]
  16. Hong, S.-H.; Jang, J.-S.; Javidi, B. Three-dimensional volumetric object reconstruction using computational integral imaging. Opt. Exp. 2004, 12, 483–491. [Google Scholar] [CrossRef] [PubMed]
  17. Hong, S.-H.; Javidi, B. Three-Dimensional visualization of partially occluded objects using integral imaging. IEEE OSA J. Display Technol. 2005, 1, 354–359. [Google Scholar] [CrossRef]
  18. Levoy, M. Light fields and computational imaging. IEEE Comput. Mag. 2006, 39, 46–55. [Google Scholar] [CrossRef]
  19. Hwang, Y.S.; Hong, S.-H.; Javidi, B. Free view 3-D visualization of occluded objects by using computational synthetic aperture integral imaging. J. Disp. Technol. 2007, 3, 64–70. [Google Scholar] [CrossRef]
  20. Tavakoli, B.; Daneshpanah, M.; Javidi, B.; Watson, E. Performance of 3D integral imaging with position uncertainty. Opt. Exp. 2007, 15, 11889–11902. [Google Scholar] [CrossRef] [PubMed]
  21. Arimoto, H.; Javidi, B. Integral three-dimensional imaging with computed reconstruction. Opt. Lett. 2001, 26, 157–159. [Google Scholar] [CrossRef] [PubMed]
  22. Vaish, V.; Levoy, M.; Szeliski, R.; Zitnick, C.L.; Kang, S.-B. Reconstructing occluded surfaces using synthetic apertures: Stereo, focus and robust measures. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 2, pp. 2331–2338. [Google Scholar]
  23. Cho, B.; Kopycki, P.; Martinez-Corral, M.; Cho, M. Computational volumetric reconstruction of integral imaging with improved depth resolution considering continuously non-uniform shifting pixels. Opt. Laser Eng. 2018, 111, 114–121. [Google Scholar] [CrossRef]
  24. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Pearson Education, Inc.: Upper Saddle River, NJ, USA, 2008; pp. 65–68. [Google Scholar]
  25. Occorsio, D.; Ramella, G.; Themistoclakis, W. Image Scaling by de la Vallée-Poussin Filtered Interpolation. J. Math. Imaging Vis. 2022, 2, 1–29. [Google Scholar] [CrossRef]
  26. Occorsio, D.; Ramella, G.; Themistoclakis, W. Lagrange-Chebyshev Interpolation for image resizing. Math. Comput. Simul. 2022, 197, 105–126. [Google Scholar] [CrossRef]
  27. Wang, Z.; Liu, D.; Yang, J.; Han, W.; Huang, H. Deep Networks for Image Super-Resolution With Sparse Prior. In Proceedings of the IEEE International Conference on Computer Vision 2015, Santiago, Chile, 7–13 December 2015; pp. 370–378. [Google Scholar]
  28. Cho, M.; Mahalanobis, A.; Javidi, B. 3D passive photon counting automatic target recognition using advanced correlation filters. Opt. Lett. 2011, 36, 861–863. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
