Article

Vari-Focal Light Field Camera for Extended Depth of Field

Hyun Myung Kim, Min Seok Kim, Sehui Chang, Jiseong Jeong, Hae-Gon Jeon and Young Min Song

1 Gwangju Institute of Science and Technology, School of Electrical Engineering and Computer Science, 123 Cheomdangwagi-ro, Buk-gu, Gwangju 61005, Korea
2 SOSLAB, B-101, BI Center, GIST, 123 Cheomdangwagi-ro, Buk-gu, Gwangju 61005, Korea
3 Gwangju Institute of Science and Technology, AI Graduate School, 123 Cheomdangwagi-ro, Buk-gu, Gwangju 61005, Korea
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work and should be considered co-first authors.
Micromachines 2021, 12(12), 1453; https://doi.org/10.3390/mi12121453
Submission received: 26 October 2021 / Revised: 17 November 2021 / Accepted: 24 November 2021 / Published: 26 November 2021
(This article belongs to the Special Issue Advanced MEMS and Optical System Assembly and Integration)

Abstract: The light field camera provides a robust way to capture both spatial and angular information within a single shot. One of its important applications is 3D depth sensing, which extracts depth information from the acquired scene. However, conventional light field cameras suffer from a shallow depth of field (DoF). Here, a vari-focal light field camera (VF-LFC) with an extended DoF is newly proposed for mid-range 3D depth sensing applications. As the main lens of the system, a vari-focal lens with four different focal lengths is adopted to extend the DoF up to ~15 m. The focal length of the micro-lens array (MLA) is optimized by considering the DoF both in the image plane and in the object plane for each focal length. By dividing the measurement range into regions associated with each focal length, highly reliable depth estimation is available over the entire DoF. The proposed VF-LFC is evaluated by the disparity data extracted from images at different distances. Moreover, depth measurement in an outdoor environment demonstrates that our VF-LFC could be applied in various fields such as delivery robots, autonomous vehicles, and remote sensing drones.

1. Introduction

Camera systems have advanced to mimic the mechanism and function of biological eyes [1,2,3,4]. However, conventional 2D imaging systems are limited in capturing the real world because they lack depth information. A light field camera provides a facile way to capture the 3D information (light field or plenoptic function) of an object with only a single image sensor and a micro-lens array (MLA), without any external light source, and it has therefore attracted considerable interest in both academia and industry. The MLA, the core optical component, acts like a multi-camera array and acquires light field data that can be utilized for depth estimation, multi-view imaging, digital refocusing, and 3D reconstruction. The initial light field cameras were proposed with two different approaches: placing a pinhole grating inside the camera, or placing a micro-lens array in front of the image plane [5,6]. Based on these concepts, Ng introduced the first hand-held plenoptic camera, called the standard light field camera (plenoptic 1.0), in 2005 [7], and a focused light field camera (plenoptic 2.0) was proposed by Lumsdaine and Georgiev in 2009 [8]. Since then, numerous studies based on these two types of light field camera have flourished, and various applications have been explored by utilizing the light field data. For instance, the development of large-area MLA fabrication facilitates the integration of customized light field cameras for all-in-focus imaging from a single exposure [9], and geometrical calibration and depth estimation algorithms have been developed to extract depth information from the lenslet light field camera [10,11].
In particular, 3D depth sensing is one of the major concerns in modern security and automation systems, and the light field camera could be a vital technique for depth sensing vision systems [12]. To realize 3D imaging equivalent to human vision, near- (~1 m) to mid-range (~10 m) depth sensing is required in artificial vision systems, including the light field camera. However, a conventional light field camera inherently has a shallow depth of field (DoF) because the MLA has a finite DoF. To address this problem, extensive research has been conducted to extend the DoF, including the adoption of multi-focal MLAs [13,14,15] and focal length modulation based on liquid-crystal MLAs (LC-MLAs) [16,17]. Although multi-focal MLAs and LC-MLAs successfully extend the DoF to some extent, difficulties remain in practical applications because of the complex manufacturing process and low spatial resolution of multi-focal MLAs, and the inconsistent focal length behavior at low voltage and/or relatively low transparency of LC-MLAs [18,19,20,21]. Therefore, another approach that extends the DoF through the main lens rather than the MLA should be explored.
In this study, we demonstrate a vari-focal light field camera (VF-LFC) with an extended DoF for near- to mid-range depth sensing applications. Each optical component, i.e., the vari-focal lens and the MLA, is selected and designed to meet the target DoF based on the numerical calculations presented in the following section. We considered both the DoF of the MLA (image plane) and the DoF of the main lens (object plane) to optimize the focal length of the MLA. For mechanical robustness of the system, an optical alignment module was designed by ray-tracing simulation, considering the space between the image sensor and the cover glass as well as the thicknesses of the cover glass and the MLA. In addition, the alignment module was designed to be compatible with C-mount optical components, which are generally adopted in machine vision. The implemented VF-LFC successfully extracts depth information within the range of 0.5 m to 15 m by varying its focal length according to the region of interest. To verify the extended DoF of the fabricated VF-LFC, the simulated and measured disparity data were analyzed as a function of object distance. In addition, we demonstrated measurements in an outdoor environment to confirm that our VF-LFC is applicable to various fields. The proposed VF-LFC verifies that DoF customization of an LFC system can be readily achieved by optimizing the focal lengths of the MLA and the vari-focal lens.

2. Materials and Methods

Conventional light field cameras consist of a fixed-focal-length main lens, an MLA, and an image sensor, as shown in Figure 1a. Each micro-lens of the MLA acts as a single camera by focusing the image from the main lens onto the image sensor with a slightly different perspective, making it possible to capture light field data including disparity. The disparity must retain a sufficient slope to extract depth information within the target distance range, but it rapidly converges as the distance increases beyond the DoF of the light field camera, as shown in Figure 1c. Therefore, to extract depth information from far objects, a light field camera should have a large DoF. However, in a conventional light field camera system with a fixed focal lens, the MLA has a fundamentally limited DoF, so the conventional light field camera suffers from a shallow DoF [22].
To overcome this limitation, we adopted as the main lens a vari-focal lens that can adjust its focal length within the range of 20 mm to 75 mm (CBC Co., Ltd., Computar lenses, Tokyo 309084, Japan) instead of a fixed focal lens. By changing the focal length of the system, the total DoF of the system can be largely extended while providing a sufficient disparity slope, as shown in Figure 1d. Here, we divided the measurement range into four regions associated with four specific focal lengths (20 mm, 30 mm, 50 mm, and 75 mm) to achieve depth estimation for object distances of up to 15 m. In the measurement region of the short focal length, the depth information of near objects can be extracted, and as the focal length increases, the depth of faraway objects can be estimated. It is noted that adjacent measurement regions should partially overlap to realize continuous depth estimation over the total DoF of the system. In addition, each measurement region should exhibit a large disparity change to enable depth estimation with high accuracy.
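For illustration, this region-to-focal-length assignment can be expressed as a simple lookup, as in the minimal Python sketch below. The region bounds are the ones used in the experiments of Section 3, while the names and structure of the snippet are purely illustrative and not part of the actual system software.

```python
# Minimal sketch: choosing the vari-focal setting for a target distance.
# Region bounds follow the experimental split in Section 3 (FL20: 0.5-1.5 m,
# FL30: 1-5 m, FL50: 4-10 m, FL75: 8-15 m); names are illustrative only.

MEASUREMENT_REGIONS_M = {   # focal length (mm) -> (near, far) object distance (m)
    20: (0.5, 1.5),
    30: (1.0, 5.0),
    50: (4.0, 10.0),
    75: (8.0, 15.0),
}

def focal_lengths_for_distance(distance_m):
    """Return every focal length whose measurement region covers the distance.

    More than one entry may be returned inside the deliberately overlapping
    bands (e.g., 1.0-1.5 m is covered by both the 20 mm and 30 mm settings).
    """
    return [f for f, (near, far) in MEASUREMENT_REGIONS_M.items()
            if near <= distance_m <= far]

print(focal_lengths_for_distance(1.2))   # [20, 30] -> overlap region
print(focal_lengths_for_distance(12.0))  # [75]
```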
The VF-LFC consists of a vari-focal lens as the main lens, a micro-lens array, and an image sensor, as shown in Figure 2a. Based on the plenoptic 2.0 configuration, an image of an object located at $a_L$ is formed by the main lens at the position $b_L$. The MLA is located behind the image plane at a distance $a$, and the image sensor is located behind the MLA at a distance $b$. The MLA focuses the image of the main lens, as an object, on the image sensor. The depth estimation principle of the VF-LFC is as follows. The distance $a_L$ from the main lens to the object can be calculated by the thin lens equation using the focal length of the main lens $f_L$ and $b_L$, which is expressed by
$\frac{1}{f_L} = \frac{1}{a_L} + \frac{1}{b_L}$
Each micro-image formed on the image sensor shows the main lens image from a slightly different point of view. Figure 2b describes the triangulation scheme based on the projection of two micro-images. $p_{x_{1,2}}$ is the distance from the image point to the principal point of the respective micro-image, and $d_{1,2}$ is the distance from the intersection point of the image plane with the baseline, i.e., the optical axis of the respective micro-lens. In Figure 2b, an up-pointing arrow and a down-pointing arrow represent positive and negative distance values, respectively. Triangles with equal angles are similar; thus, the following relation holds:
$p_{x_i} = -d_i \frac{b}{a}$
The baseline distance between two micro-lenses is defined as follows:
$d = d_2 - d_1$
Similarly, if we define the parallax of the image point, $p_x$, as the difference between $p_{x_1}$ and $p_{x_2}$, it follows from Equations (2) and (3) that:
$p_x = p_{x_1} - p_{x_2} = (d_2 - d_1)\frac{b}{a}$
The distance between the image plane and the MLA, $a$, is described as a function of the baseline distance $d$, the distance between the MLA and the sensor, $b$, and the estimated parallax, $p_x$:
$a = \frac{d \cdot b}{p_x}$
In addition, the distance between the image plane and the main lens, $b_L$, can be calculated from $a$ and the known parameters $B_L$ and $b$:
$b_L = B_L - b - a$
The object plane distance from the main lens, $a_L$, is calculated by substituting Equation (5) into Equation (1), which yields
$a_L = \left(\frac{1}{f_L} - \frac{1}{B_L - b - a}\right)^{-1}$
Consequently, the depth can be estimated from a point which is simultaneously focused on at least two micro-images.
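The chain of relations above can be condensed into a few lines of code. The following Python sketch is illustrative only: $b$ and $d$ take the values later listed in Table 2, while the main-lens-to-sensor distance $B_L$ and the example parallax are assumed numbers, not values reported in this work.

```python
# A minimal sketch of the depth-from-parallax chain described above (micro-image
# triangulation plus the thin lens equation).  b and d follow Table 2; B_L (the
# main-lens-to-sensor distance) and the example parallax are assumed values.

def depth_from_parallax(p_x_mm, f_L_mm, B_L_mm, b_mm=1.235, d_mm=0.1018):
    """Estimate the object distance a_L (mm) from the parallax p_x of a point
    observed in two neighboring micro-images (all lengths in mm)."""
    a_mm = d_mm * b_mm / p_x_mm          # image plane to MLA distance, a = d*b/p_x
    b_L_mm = B_L_mm - b_mm - a_mm        # main lens to its image plane, b_L = B_L - b - a
    return 1.0 / (1.0 / f_L_mm - 1.0 / b_L_mm)   # thin lens equation solved for a_L

# Illustrative numbers only (B_L and p_x are not taken from the paper):
a_L_mm = depth_from_parallax(p_x_mm=0.030, f_L_mm=20.0, B_L_mm=26.0)
print(f"estimated object distance: {a_L_mm / 1000:.2f} m")   # ~0.72 m
```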
Figure 2c shows the image-side DoF according to the MLA focal length. $F_L$ is the focal length of the main lens when the object is at infinity. If the image-side DoF extends beyond $F_L$, the image becomes blurred because it is out of the focal range. Therefore, MLA focal lengths of 955 µm and 965 µm are more desirable than 975 µm for the selected vari-focal lens. For depth estimation with high accuracy, the object should be located within the DoF of the light field camera system. Since the image-side and object-side DoFs are correlated with each other, the object-side DoF is determined by the image-side DoF, $a_+$ and $a_-$. The object-side DoF was calculated for focal lengths of 20 mm, 30 mm, 50 mm, and 75 mm (Figure 2d). The dotted lines indicate the boundaries between the measurement regions of each focal length. In the case of the 955 µm focal length, the object-side DoF does not cover the target object distance of 15 m, even though there is no blurred region within the image-side DoF. In this regard, we designed the MLA of the VF-LFC to have a focal length of 965 μm, considering the DoF of both the image side and the object side.
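As a rough illustration of how the image-side DoF bounds $a_-$ and $a_+$ translate into the object-side DoF of Figure 2d, the sketch below propagates an assumed image-side interval through the main lens for each focal length. The interval and the main-lens-to-sensor distance $B_L$ are placeholders chosen only to produce plausible numbers; the actual curves in Figure 2d come from the MLA design values in Figure 2c.

```python
# A minimal sketch (illustrative values): mapping the image-side DoF of the MLA
# to an object-side DoF through the main lens, for each vari-focal setting.

def object_distance(a_mm, f_L_mm, B_L_mm, b_mm=1.235):
    """Map an image-plane-to-MLA distance a (image side) to an object distance
    a_L (object side) via b_L = B_L - b - a and the thin lens equation."""
    b_L_mm = B_L_mm - b_mm - a_mm
    return 1.0 / (1.0 / f_L_mm - 1.0 / b_L_mm)

# Hypothetical image-side DoF bracket around the nominal 4.5 mm MLA object distance.
a_minus_mm, a_plus_mm = 4.0, 5.0

for f_L in (20.0, 30.0, 50.0, 75.0):
    B_L = f_L + 6.5   # assumed main-lens-to-sensor distance in mm (illustrative only)
    near_m = object_distance(a_minus_mm, f_L, B_L) / 1000.0   # smaller a -> nearer object
    far_m = object_distance(a_plus_mm, f_L, B_L) / 1000.0     # larger a -> farther object
    print(f"f_L = {f_L:4.0f} mm: object-side DoF ~ {near_m:.2f} m to {far_m:.2f} m")
```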

3. Results and Discussion

Figure 3a shows the design schematic of the optical alignment module according to the target MLA focal length, and the integrated VF-LFC module is shown in Figure 3b. We set the C-mount flange back distance to 17.5 mm and determined the image plane distance in consideration of the main lens focal point of 5.3 mm. Given the previously designed MLA focal length of 965 µm and the MLA center-of-DoF distance of 4.5 mm, the distance $b$ from the MLA to the image sensor was set to 1.235 mm through the thin lens equation in Equation (1). Figure 3c shows the measurements of near and far objects with focal lengths of 20 mm and 75 mm through the proposed VF-LFC. For a near object, a focused light field image was acquired at the 20 mm focal length, whereas at the 75 mm focal length the micro-image itself becomes too large and blurred, which lowers accuracy during image processing. On the other hand, for a far object, the micro-image of the object is too small to distinguish and shows low resolution when the focal length is 20 mm. Therefore, the 75 mm focal length is more adequate for far-object imaging, and its micro-image appears similar to the micro-image of the near object captured with the 20 mm focal length. As a result, Figure 3c clearly shows that the object-side DoF can be extended by controlling the focal length according to the target distance. The parameters of the MLA and image sensor are presented in Table 1.
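A quick thin-lens check of this spacing, assuming the stated 4.5 mm center-of-DoF distance as the MLA's object-side conjugate, is sketched below; it lands close to the 1.235 mm design value, with the small offset presumably coming from rounding and the exact conjugate distance used in the actual design.

```python
# Thin-lens sanity check of the MLA-to-sensor spacing (illustrative, assuming
# the 4.5 mm center-of-DoF distance as the MLA's object-side conjugate).

f_mla_mm = 0.965   # MLA focal length (Table 1)
a_mm = 4.5         # main-lens image plane (center of DoF) to MLA distance

b_mm = 1.0 / (1.0 / f_mla_mm - 1.0 / a_mm)
print(f"MLA-to-sensor spacing b ~ {b_mm:.3f} mm")   # ~1.228 mm, vs. 1.235 mm in the design
```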
In Figure 4, an experimental demonstration was conducted with the VF-LFC to verify that the disparity variation within each measurement region, for each focal length, was sufficient to extract depth information with high accuracy. A checkerboard calibration target (with a downsized checkerboard in the center) was used as the object for light field imaging from near to far distances, as shown in Figure 4a. The object distance range for each focal length was divided as follows: FL20 covers 0.5–1.5 m at 0.5 m intervals, while FL30 covers 1–5 m, FL50 covers 4–10 m, and FL75 covers 8–15 m, each at 1 m intervals. For continuity between adjacent measurement regions, the measurement region for each focal length was set to partially overlap with its neighboring regions. Figure 4c shows a portion of the reference image captured at each focal length. Although the captured images were measured with different focal lengths, all of them contain a region that shows a disparity similar to that of the other measurement regions, which indicates that the VF-LFC was finely aligned and assembled as initially designed. Figure 4d shows the disparity as a function of object distance based on the calibration result and the experimental result. The solid lines shown in the graph are based on the calculation of Equation (6), and the parallax was converted to disparity in pixel units. The intrinsic parameters used in the calculation are presented in Table 2. From the images captured by the VF-LFC, the disparity was calibrated by fixing the internal variables to suit our device, based on the open-source Plenoptic Toolbox 2.0 [23]. Disparity calculation was performed on adjacent micro-lens images through a semi-global matching process, and disparity values were extracted at each of 10 locations along the boundary lines of the checkerboard to acquire accurate values. The averages and standard deviations of the calculated disparity values are expressed as red dots and error bars, respectively. Even though the standard deviation of the measured data becomes relatively large where the disparity slope is steeper, the overall trend of the measured data matches the calibration results well. As a result, sufficient disparity variation is guaranteed up to a distance of 15 m, which indicates that our VF-LFC with greatly extended DoF can extract depth information with high accuracy. Figure 4e shows the sequential depth measurement scene with signs at five different distances. Micro-images from the four different focal lengths (Figure 4f) are stitched after image rendering, and the stitched image is then overlaid with the normalized disparity map, as shown in Figure 4g. This clearly indicates that the proposed VF-LFC can extract sequential depth information within the extended DoF range. Table 3 classifies light field camera systems by objective lens, measurement range, and main characteristics. It shows that our VF-LFC achieves the most extended DoF with a simple configuration compared to other light field cameras, regardless of light field type.
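Two small post-processing steps mentioned above lend themselves to a short illustration: converting the metric parallax into a pixel-unit disparity using the 2.4 µm pixel pitch from Table 1, and summarizing the 10 per-boundary disparity samples as the mean and standard deviation plotted in Figure 4d. The sketch below is a hypothetical stand-in (the sample values and names are invented), not the toolbox code actually used.

```python
import numpy as np

# Illustrative sketch: parallax-to-pixel-disparity conversion and the mean/std
# summary behind the red dots and error bars in Figure 4d.  The sample values
# below are invented for demonstration.

PIXEL_PITCH_MM = 2.4e-3   # 2.4 um pixel size of the IMX178 sensor (Table 1)

def parallax_to_disparity_px(p_x_mm):
    """Convert a metric parallax on the sensor (mm) to disparity in pixels."""
    return p_x_mm / PIXEL_PITCH_MM

print(f"{parallax_to_disparity_px(0.012):.1f} px")   # e.g. 12 um of parallax -> 5.0 px

# Hypothetical disparity samples extracted at 10 checkerboard boundary points.
samples_px = np.array([5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 5.1, 4.8, 5.2, 5.0])
print(f"disparity: {samples_px.mean():.2f} px +/- {samples_px.std(ddof=1):.2f} px")
```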
To demonstrate the practicality of the VF-LFC, outdoor measurements were performed, as shown in Figure 5. Measurements were carried out at the four different focal lengths. Regardless of focal length, clear micro-images were acquired even outdoors, as shown in Figure 5b. Figure 5c describes the relationship among the field of view, object distance, and focal length of the vari-focal lens. As the focal length increases, the field of view naturally narrows so that farther objects can be clearly focused. Figure 5d shows the images produced by the rendering algorithms (top row) and the disparity maps extracted from the micro-images captured by the VF-LFC at each focal length (bottom row). It should be noted that the disparity maps at the different focal lengths show continuous disparity variation, which indicates that high-accuracy depth information can be estimated within the entire extended DoF, up to an object distance of 15 m. Hence, compared to conventional light field camera systems, the proposed vari-focal light field camera enables the acquisition of depth information from near to far regions. Further improvement of the disparity maps can be achieved by additional post-processing to reduce the artefacts caused by excessive light reflection in the scene [28,29].
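The narrowing field of view in Figure 5c follows directly from the lens geometry. As a simple illustration under the assumption of an ideal thin lens focused near infinity (and ignoring any cropping introduced by the MLA), the horizontal field of view can be estimated from the sensor width implied by Table 1:

```python
import math

# Illustrative estimate of the horizontal field of view for each focal length,
# assuming an ideal thin lens focused near infinity and ignoring MLA effects.
# Sensor width follows from Table 1: 3096 pixels x 2.4 um.

sensor_width_mm = 3096 * 2.4e-3   # ~7.43 mm

for f_mm in (20, 30, 50, 75):
    fov_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * f_mm)))
    print(f"f = {f_mm:2d} mm -> horizontal FOV ~ {fov_deg:.1f} deg")
```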

4. Conclusions

In summary, we proposed the VF-LFC with an extended DoF by introducing a vari-focal lens as the main lens. Unlike conventional light field cameras, which suffer from an inherently limited DoF, the proposed VF-LFC successfully extends its DoF up to ~15 m. We divided the measurement range into four regions according to the focal lengths of the vari-focal lens (20 mm, 30 mm, 50 mm, and 75 mm) to achieve the target DoF. Each measurement region partially overlaps with its adjacent regions for continuity. By considering both the DoF of the MLA (image side) and the DoF of the main lens (object side), the focal length of the MLA was designed to be 965 μm. For mechanical robustness, a light field module was precisely designed to meet the optical alignment requirements, and the fabricated module is compatible with C-mount lenses, which are generally used in machine vision. The disparity data extracted from the object images in each measurement region match the theoretically calculated values well and indicate that the fabricated VF-LFC can provide depth information within the range of the extended DoF. In addition, the outdoor measurements demonstrate that our VF-LFC can be a practical technique for various applications, such as autonomous vehicles, delivery robots, and remote sensing drones, that require 3D information acquisition from short to medium distances.

Author Contributions

Conceptualization, H.M.K.; methodology, H.M.K.; software, M.S.K. and S.C.; validation, H.M.K., M.S.K., J.J., H.-G.J. and Y.M.S.; formal analysis, M.S.K. and S.C.; investigation, H.M.K.; resources, M.S.K. and S.C.; data curation, M.S.K.; writing—original draft preparation, H.M.K. and S.C.; writing—review and editing, H.M.K., S.C., J.J., H.-G.J., and Y.M.S.; visualization, H.M.K. and M.S.K.; project administration, H.-G.J., Y.M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the GIST Research Institute (GRI), through a grant funded by the GIST in 2021, and the Institute of Information & Communications Technology Planning and Evaluation (IITP), through a grant funded by the Korean government (MSIT) (No. 2020-0-01000, Light field and LiDAR sensor fusion systems for full self-driving). This work was also supported by the GIST-MIT Research Collaboration grant funded by the GIST in 2021 (GK14550), and by Samsung Electronics.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, G.J.; Choi, C.; Kim, D.H.; Song, Y.M. Bioinspired Artificial Eyes: Optic Components, Digital Cameras, and Visual Prostheses. Adv. Funct. Mater. 2018, 28, 1705202.
  2. Song, Y.M.; Xie, Y.; Malyarchuk, V.; Xiao, J.; Jung, I.; Choi, K.J.; Liu, Z.; Park, H.; Lu, C.; Kim, R.H.; et al. Digital cameras with designs inspired by the arthropod eye. Nature 2013, 497, 95–99.
  3. Gu, L.; Poddar, S.; Lin, Y.; Long, Z.; Zhang, D.; Zhang, Q.; Shu, L.; Qiu, X.; Kam, M.; Javey, A.; et al. A biomimetic eye with a hemispherical perovskite nanowire array retina. Nature 2020, 581, 278–282.
  4. Kim, M.S.; Lee, G.J.; Choi, C.; Kim, M.S.; Lee, M.; Liu, S.; Cho, K.W.; Kim, H.M.; Cho, H.; Choi, M.K.; et al. An aquatic-vision-inspired camera based on a monocentric lens and a silicon nanorod photodiode array. Nat. Electron. 2020, 3, 546–553.
  5. Ives, H.E. Parallax Panoramagrams Made with a Large Diameter Lens. J. Opt. Soc. Am. 1930, 20, 332–342.
  6. Lippmann, G. Épreuves réversibles donnant la sensation du relief. J. Phys. Theor. Appl. 1908, 7, 821–825.
  7. Ng, R.; Levoy, M.; Brédif, M.; Duval, G.; Horowitz, M.; Hanrahan, P. Light Field Photography with a Hand-Held Plenoptic Camera; Research Report CSTR 2005-02; Stanford University: Stanford, CA, USA, 2005.
  8. Georgiev, T.; Lumsdaine, A. Superresolution with Plenoptic 2.0 Cameras. In Frontiers in Optics 2009/Laser Science XXV/Fall 2009 OSA Optics & Photonics Technical Digest, OSA Technical Digest (CD); Optical Society of America: Washington, DC, USA, 2009; paper STuA6.
  9. Kim, H.M.; Kim, M.S.; Lee, G.J.; Yoo, Y.J.; Song, Y.M. Large area fabrication of engineered microlens array with low sag height for light-field imaging. Opt. Express 2019, 27, 4435–4444.
  10. Bok, Y.; Jeon, H.G.; Kweon, I.S. Geometric Calibration of Micro-Lens-Based Light Field Cameras Using Line Features. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 287–300.
  11. Jeon, H.G.; Park, J.; Choe, G.; Park, J.; Bok, Y.; Tai, Y.W.; Kweon, I.S. Accurate depth map estimation from a lenslet light field camera. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1547–1555.
  12. Kim, H.M.; Kim, M.S.; Lee, G.J.; Jang, H.J.; Song, Y.M. Miniaturized 3D Depth Sensing-Based Smartphone Light Field Camera. Sensors 2020, 20, 2129.
  13. Perwaß, C.; Wietzke, L. Single lens 3D-camera with extended depth-of-field. In Human Vision and Electronic Imaging XVII; Proceedings of SPIE; SPIE: Bellingham, WA, USA, 2012; Volume 8291, ID 829108.
  14. Georgiev, T.; Lumsdaine, A. The multifocus plenoptic camera. In Digital Photography VIII; Proceedings of SPIE; SPIE: Bellingham, WA, USA, 2012; Volume 8299, ID 829908.
  15. Lee, J.H.; Chang, S.; Kim, M.S.; Kim, Y.J.; Kim, H.M.; Song, Y.M. High-Identical Numerical Aperture, Multifocal Microlens Array through Single-Step Multi-Sized Hole Patterning Photolithography. Micromachines 2020, 11, 1068.
  16. Lei, Y.; Tong, Q.; Zhang, X.; Sang, H.; Xie, C. Plenoptic camera based on a liquid crystal microlens array. In Novel Optical Systems Design and Optimization XVIII; Proceedings of SPIE; SPIE: Bellingham, WA, USA, 2015; Volume 9579, ID 95790T.
  17. Chen, M.; He, W.; Wei, D.; Hu, C.; Shi, J.; Zhang, X.; Wang, H.; Xie, C. Depth-of-Field-Extended Plenoptic Camera Based on Tunable Multi-Focus Liquid-Crystal Microlens Array. Sensors 2020, 20, 4142.
  18. Hsieh, P.Y.; Chou, P.Y.; Lin, H.A.; Chu, C.Y.; Huang, C.T.; Chen, C.H.; Qin, Z.; Corral, M.M.; Javidi, B.; Huang, Y.P. Long working range light field microscope with fast scanning multifocal liquid crystal microlens array. Opt. Express 2018, 26, 10981–10996.
  19. Xin, Z.; Wei, D.; Xie, X.; Chen, M.; Zhang, X.; Liao, J.; Wang, H.; Xie, C. Dual-polarized light-field imaging micro-system via a liquid-crystal microlens array for direct three-dimensional observation. Opt. Express 2018, 26, 4035–4049.
  20. Kwon, H.; Kizu, Y.; Kizaki, Y.; Ito, M.; Kobayashi, M.; Ueno, R.; Suzuki, K.; Funaki, H. A Gradient Index Liquid Crystal Microlens Array for Light-Field Camera Applications. IEEE Photonics Technol. Lett. 2015, 27, 836–839.
  21. Algorri, J.F.; Morawiak, P.; Bennis, N.; Zografopoulos, D.C.; Urruchi, V.; Rodríguez-Cobo, L.; Jaroszewicz, L.R.; Sánchez-Pena, J.M.; López-Higuera, J.M. Positive-negative tunable liquid crystal lenses based on a microstructured transmission line. Sci. Rep. 2020, 10, 10153.
  22. Pertuz, S.; Pulido-Herrera, E.; Kamarainen, J.K. Focus model for metric depth estimation in standard plenoptic cameras. ISPRS J. Photogramm. Remote Sens. 2018, 144, 38–47.
  23. Palmieri, L. The Plenoptic Toolbox 2.0. Available online: https://github.com/PlenopticToolbox/PlenopticToolbox2.0 (accessed on 17 October 2018).
  24. Monteiro, N.B.; Marto, S.; Barreto, J.P.; Gaspar, J. Depth range accuracy for plenoptic cameras. Comput. Vis. Image Underst. 2018, 168, 104–117.
  25. Chen, Y.; Jin, X.; Dai, Q. Distance measurement based on light field geometry and ray tracing. Opt. Express 2017, 25, 59–76.
  26. Bae, S.I.; Kim, K.; Jang, K.W.; Kim, H.K.; Jeong, K.H. High Contrast Ultrathin Light-Field Camera Using Inverted Microlens Arrays with Metal–Insulator–Metal Optical Absorber. Adv. Opt. Mater. 2021, 9, 2001657.
  27. Zeller, N.; Quint, F.; Stilla, U. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry. ISPRS J. Photogramm. Remote Sens. 2016, 118, 83–100.
  28. Ibrahim, M.M.; Liu, Q.; Khan, R.; Yang, J.; Adeli, E.; Yang, Y. Depth map artefacts reduction: A review. IET Image Process. 2020, 14, 2630–2644.
  29. Tao, M.W.; Su, J.C.; Wang, T.C.; Malik, J.; Ramamoorthi, R. Depth Estimation and Specular Removal for Glossy Surfaces Using Point and Line Consistency with Light-Field Cameras. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1155–1169.
Figure 1. (a,b) Schematic of conventional light field camera (a) and proposed light field camera with a vari-focal lens (b). (c,d) Relationship between disparity and object distance. (c) The disparity of the conventional light field camera decreases and converges. (d) By controlling the focal length before the disparity converges, the proposed VF-LFC provides an extremely extended DoF.
Figure 2. (a) Schematic of the VF-LFC system with the main design parameters. (b) Schematic of the optical path through the MLA between the image sensor and the image plane of the main lens. (c) The image-side DoF bounds, $a_+$ and $a_-$, calculated from the thin lens equation for each MLA focal length. (d) The object-side DoF for each focal length of the vari-focal lens according to the MLA focal length; the dotted lines are the boundaries between the measurement regions of each focal length.
Figure 3. (a) Schematic of the optical alignment module for the VF-LFC. (b) Photographs of the assembled VF-LFC module. (c) Photographs of light field images captured by the VF-LFC with focal lengths of 20 mm (left) and 75 mm (right); inset: original texture image.
Figure 4. (a) Photograph of the VF-LFC and reference image (inset). (b) Schematic of the VF-LFC experimental setup according to the measurement regions for each focal length. (c) Micro-images from the VF-LFC at four different focal lengths. (d) Disparity according to object distance; solid lines are the calibrated results and red dots represent the experimental results. (e) Photograph of the sequential depth measurement setup with the VF-LFC. (f) Micro-images from the VF-LFC at four different focal lengths. (g) Stitched image of the four focal-length photographs, overlaid with the normalized disparity map.
Figure 5. (a) Photograph of the outdoor measurement setup of the VF-LFC. (b) Micro-images from the VF-LFC at four different focal lengths. (c) Photographs indicating the field-of-view difference and object distance for each focal length. (d) Photographs after image rendering (top row) and disparity maps according to each focal length captured by the VF-LFC (bottom row).
Table 1. Parameters of MLA and image sensor.

Microlens array: pitch 240 µm; focal length 965 µm; array type hexagonal.
Image sensor: model IMX178 (Sony, Tokyo, Japan); pixel size 2.4 µm; number of pixels 3096 (H) × 2080 (V).
Table 2. Intrinsic parameters for vari-focal light field camera.

$b$ (mm): 1.235; $d$ (µm): 101.8; $f_L$ (mm): 20, 30, 50, 75; $b_L$ (mm): 5.3 + $b$ + $f_L$.
Table 3. Light field camera classification.

Light Field Type | Objective Lens | Depth of Field | Main Characteristics | Reference
Unfocused LFC (1.0) | Zoom lens | 0.05–2 m | Lytro 1st generation | [24]
Unfocused LFC (1.0) | Zoom lens | 0.5–0.9 m | - | [25]
Focused LFC (2.0) | 3.04 mm | 0.05–0.25 m | Small form factor | [26]
Focused LFC (2.0) | 35 mm | 0.7–5.2 m | Raytrix camera | [27]
Focused LFC (2.0) | 39.8 mm | ~0.59 m | LC-MLA (tunable) | [19]
Focused LFC (2.0) | Zoom lens | 0.5–15 m | Vari-focal lens | Ours