Article

Analysis of Disparity Information for Depth Extraction Using CMOS Image Sensor with Offset Pixel Aperture Technique †

by Byoung-Soo Choi, Jimin Lee, Sang-Hwan Kim, Seunghyuk Chang, JongHo Park, Sang-Jin Lee and Jang-Kyoo Shin
1 School of Electronics Engineering, Kyungpook National University, Daegu 41566, Korea
2 Center for Integrated Smart Sensors, KAIST, Daejeon 34141, Korea
* Author to whom correspondence should be addressed.
This paper is an extended version of “Extraction of Disparity Information Using Pixels with Integrated Apertures” published in the Proceedings of 2018 IEEE International Conference on Imaging Systems and Techniques (IST), Krakow, Poland, 16–18 October 2018.
Sensors 2019, 19(3), 472; https://doi.org/10.3390/s19030472
Submission received: 20 December 2018 / Revised: 22 January 2019 / Accepted: 22 January 2019 / Published: 24 January 2019

Abstract

A complementary metal oxide semiconductor (CMOS) image sensor (CIS) using the offset pixel aperture (OPA) technique was designed and fabricated in a 0.11-µm CIS process. In a conventional camera, the aperture is located on the camera lens; in a CIS camera using the OPA technique, apertures are instead integrated into the pixels as left-offset pixel apertures (LOPAs) and right-offset pixel apertures (ROPAs). A color pattern comprising LOPA, blue, red, green, and ROPA pixels is built. Disparity information can be acquired from the LOPA and ROPA channels, so both disparity information and two-dimensional (2D) color information can be acquired simultaneously from the LOPA, blue, red, green, and ROPA channels. A geometric model of the OPA technique is constructed to estimate the disparity of the image, and the measurement results are compared with the estimated results. Depth extraction is thus achieved by a single CIS using the OPA technique, which can be easily adapted to commercial CIS cameras.

1. Introduction

Complementary metal oxide semiconductor (CMOS) image sensors (CISs) are widely used in products such as mobile phones, digital single-lens reflex cameras, and closed-circuit television systems [1,2,3]. CISs offer several advantages, including high integration, low power consumption, and high frame rates, and they continue to be developed for a wide range of imaging systems. A conventional CIS camera, however, captures only two-dimensional (2D) images without depth information. To obtain depth information, time-of-flight, stereo vision, and structured light techniques have been used [4,5,6,7,8,9,10,11]. Each of these techniques has drawbacks: the time-of-flight technique requires a high-power external light source, the stereo vision technique usually requires two or more cameras, and the structured light technique requires a costly hardware system.
In this study, a CIS using the offset pixel aperture (OPA) technique is developed, and its performance is evaluated. Three-dimensional (3D) information, comprising 2D color information and depth-related disparity information, can be obtained using a single CIS without an external light source. This paper is an extended version of a previous paper [12]. The OPA technique is based on the depth-from-disparity method, in which the disparity of an image increases with the distance between the camera and the object. Several techniques based on the depth-from-disparity method have been developed to obtain images with disparity information [13,14,15,16]. The depth-from-disparity method was previously used for phase-detection auto-focusing (PDAF) [17,18]. However, the depth resolution of the proposed OPA technique can be better than that of the PDAF technique, because the offset between the two apertures is larger. Furthermore, the proposed OPA technique uses no color filter on the aperture pixels, resulting in higher sensitivity than the PDAF technique. The concept of the OPA technique was previously proposed [19], and a hardware implementation for depth extraction was developed [20].
In this paper, disparity information for depth extraction using a CIS with the OPA technique is analyzed based on a mathematical model. In addition, the effect of the aperture offset in the OPA pixels is investigated via optical simulation based on the finite-difference time-domain method. The OPA was designed and fabricated in the first metal layer of the CIS process, and the aperture offset was optimized through the optical simulation. Based on the simulation results, a CIS using the OPA technique was fabricated, and its performance is evaluated. The sensor uses small pixels, based on four-transistor active pixel sensors with a pinned photodiode, and can be easily adapted to high-resolution conventional CISs. A geometric model of the OPA technique is constructed to estimate the disparity of the image, and its operating principle is explained. In the measurements, disparity information is obtained from the left-OPA (LOPA) and right-OPA (ROPA) channels of the CIS, and the measured disparity as a function of distance is compared with the results estimated from the geometric model. This study provides a useful reference for further research on depth extraction and for the implementation of 3D camera systems for gesture recognition, motion detection, and object reconstruction.

2. Operating Principle

Depth-from-disparity is an attractive method of depth extraction. Stereo vision is a typical technique based on the depth-from-disparity method, and the OPA technique in this paper is based on it as well. In this method, a pair of 2D images obtained from two viewpoints is used for depth extraction. The amount of disparity depends on the distance between the object and the focus point and on the characteristics of the camera lens; the disparity of the image increases with the distance between the camera lens and the object. The geometric model for the depth-from-disparity method is shown in Figure 1, and its principle is described by the following equations:
$$\frac{1}{u} + \frac{1}{v} = \frac{1}{f} \tag{1}$$
$$v = \frac{fu}{u - f} \tag{2}$$
$$v_0 = \frac{f u_0}{u_0 - f}, \tag{3}$$
where $u$ is the distance between the focused point of an object and the lens, $v$ is the distance between the lens and the focused plane, and $f$ is the focal length of the lens. The model has two holes, and the baseline ($OA$) is the distance between them. The disparity ($D$) is related to the parameters of the model by the following equations:
$$v - v_0 = \frac{f^2 (u - u_0)}{(u - f)(f - u_0)} \tag{4}$$
$$D = \frac{(v - v_0)\,OA}{v} = \frac{f^2 (u - u_0)}{u (f - u_0)} \cdot \frac{OA}{f}. \tag{5}$$
Equation (5) indicates that the image has no disparity at $u = u_0$ and that the disparity increases with the distance from the in-focus point; it is also proportional to the baseline between the two holes. However, depth extraction becomes more difficult, and the possibility of a false match increases, as the distance between the two holes increases. Figure 2 shows the disparity of the images captured through the left and right holes. When these images are distinguished, the disparity can be calculated from the difference between two correlated points in the images from the left and right holes.
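As a numerical sketch of the model above, Equation (5) can be evaluated directly. The function below and its parameter values (a 16 mm lens focused at 200 mm with a 10 mm baseline) are illustrative assumptions, not values from the paper:

```python
def disparity_two_holes(u, u0, f, OA):
    """Disparity D from Equation (5) of the two-hole geometric model.

    u  : object distance (any consistent length unit)
    u0 : in-focus object distance (disparity is zero at u = u0)
    f  : focal length of the lens
    OA : baseline, i.e. the distance between the two holes
    """
    return f ** 2 * (u - u0) / (u * (f - u0)) * (OA / f)

# Illustrative values: a 16 mm lens focused at 200 mm, 10 mm baseline.
f, u0, OA = 16.0, 200.0, 10.0
print(disparity_two_holes(200.0, u0, f, OA))   # 0.0: no disparity in focus
print(disparity_two_holes(1300.0, u0, f, OA))  # larger magnitude far from focus
```

The sign of $D$ only indicates the direction of the shift between the two views; its magnitude grows both with the defocus $|u - u_0|$ and with the baseline $OA$, matching the trade-off discussed above.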
The geometric model of the OPA technique is shown in Figure 3. The OPA in the pixel acts as two holes in the lens. The equivalent F-number (FP) of the OPA technique is defined as the following equation:
$$F_P = \frac{f}{OA} = \frac{h}{O_P}, \tag{6}$$
where $OA$ and $f$ are defined in Equation (5), $h$ is the height of the pixel, and $O_P$ is the offset of the OPA. The disparity in terms of the parameters of the geometric model of the OPA technique then follows from Equation (5):
$$D = \frac{f^2 (u - u_0)}{u (f - u_0)\,F_P} = \frac{f^2 (u - u_0)}{u (f - u_0)} \cdot \frac{O_P}{h}. \tag{7}$$
The disparity therefore depends on the offset of the OPA and on the height of the pixel: to increase the disparity, the offset should be large or the pixel height low. However, the pixel height is fixed by the CIS process, and the offset is limited by the pixel size. Optimization of the offset is therefore necessary to maximize the disparity.
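Equations (6) and (7) can be sketched the same way. The aperture offset of 0.65 µm below is the value reported later in the paper, but the pixel height `h` is an assumed placeholder chosen only to make the example concrete:

```python
def disparity_opa(u, u0, f, OP, h):
    """Disparity D from Equation (7) of the OPA geometric model.

    OP : offset between the LOPA and ROPA apertures
    h  : height of the pixel (assumed placeholder in the example below)
    """
    FP = h / OP  # equivalent F-number, Equation (6): F_P = f/OA = h/O_P
    return f ** 2 * (u - u0) / (u * (f - u0) * FP)

# Illustrative call: f, u, u0 in mm; OP = 0.65 um = 0.65e-3 mm (the
# optimized offset from the paper); h = 3.0e-3 mm (assumed).
print(disparity_opa(1300.0, 200.0, 16.0, 0.65e-3, 3.0e-3))
```

A larger offset $O_P$ lowers the equivalent F-number and increases the disparity, which is why the offset is optimized rather than left at the minimum the process allows.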
The vertical structure of the blue, red, and green pixels without the OPA is shown in Figure 4a, and the vertical structures of the LOPA and ROPA pixels are shown in Figure 4b,c, respectively. The vertical structure comprises a microlens, silicon oxide (SiO2), metal layers, a photodiode, and a silicon (Si) substrate. In the LOPA and ROPA pixels, most of the light incident on the photodiode arrives at an incidence angle of θP owing to the aperture offset (OP/2). The pixel apertures of the LOPA and ROPA pixels were designed in the first metal layer of the CIS process, where the light is focused on the first metal layer by the microlens; the second and third metal layers are used for signal lines in the pixels. The radius of curvature of the microlens and its height above the first metal layer are carefully modeled. The LOPA and ROPA pixels are based on a white pixel, which has no color filter; the white pixel suits the OPA technique because it ensures higher sensitivity.
Figure 5a presents optical simulation results showing the variation in optical power with incidence angle as a function of the aperture offset. The optical simulation, based on the finite-difference time-domain method, accounts for the refractive indices, the heights of the microlens and the active pixel sensor, and the positions of the layers of the active pixel sensor. As the aperture offset increases, the peak of the optical power shifts to a larger incidence angle; however, the optical power at the peak, which determines the sensitivity, decreases because less light is transmitted. The disparity as a function of the aperture offset is shown in Figure 5b. Based on these simulation results, the aperture offset was optimized at 0.65 µm, and the CIS using the OPA technique for depth extraction was fabricated accordingly.

3. Design and Fabrication

Figure 6a shows the layouts of the LOPA and ROPA pixels. The active pixel sensor is based on four transistors with a pinned photodiode; the incident light is focused at the center of the pinned photodiode and converted to an electrical signal. This active pixel sensor is suitable because of its small size, high sensitivity, and low dark current. The unit pixel is as small as 2.8 × 2.8 µm2, and the fill factor of the pixel is 39%. The resolution of the pixel array is 1632 (horizontal, H) × 1124 (vertical, V), including the optical black region. The pixel was designed using the 0.11-µm 2-poly 4-metal CIS process. In the LOPA and ROPA pixels, incident light with a high incidence angle is transmitted through the LOPA and ROPA; because the LOPA and ROPA pixels produce different incidence angles of the transmitted light, disparity information can be obtained from their apertures.
The color pattern of the pixel array is shown in Figure 6b. The color pattern comprises the LOPA, blue, red, green, and ROPA pixels. A unit pattern comprises 4 × 4 pixels. The blue, red, and green pixels with color filters are used to obtain color images, and the LOPA and ROPA pixels are used to obtain disparity images.
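Channel separation from the raw image can be sketched as a tiled-mask lookup. The 4 × 4 unit pattern below is a hypothetical arrangement; the actual placement of the LOPA, blue, red, green, and ROPA pixels is the one shown in Figure 6b:

```python
import numpy as np

# Hypothetical 4 x 4 unit pattern (illustration only; the real layout
# is the one in Figure 6b). Labels: LOPA/ROPA aperture pixels, and
# B/G/R color pixels.
UNIT = np.array([
    ["LOPA", "G", "ROPA", "G"],
    ["G",    "B", "G",    "R"],
    ["LOPA", "G", "ROPA", "G"],
    ["G",    "R", "G",    "B"],
])

def channel_mask(shape, channel):
    """Boolean mask selecting `channel` pixels in a raw frame of `shape`."""
    reps = (shape[0] // 4 + 1, shape[1] // 4 + 1)
    return np.tile(UNIT, reps)[: shape[0], : shape[1]] == channel

raw = np.random.rand(8, 8)                   # stand-in for a raw frame
lopa = raw[channel_mask(raw.shape, "LOPA")]  # LOPA channel samples
ropa = raw[channel_mask(raw.shape, "ROPA")]  # ROPA channel samples
```

In practice the selected samples would be reassembled into sub-images per channel; the mask lookup above only shows how the five channels are disentangled from one raw frame.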
Figure 7 shows the structures of the fabricated pixels, obtained by scanning electron microscopy with the view tilted vertically by 54°. The structures comprise microlenses, color filters, SiO2, third metal layers (M3), first metal layers (M1), Si substrates, and shallow trench isolation. The ROPA pixel, based on the white pixel, has no color filter.
The camera board with the fabricated CIS using the OPA technique is shown in Figure 8; the performance of the fabricated CIS is evaluated with this board. The fabricated CIS comprises a pixel array with the color pattern, driving circuits (including the vertical and horizontal scanners), column-parallel readout circuits, bias circuits, and other components. Supply voltages of 1.5 V (analog and digital) and 3.3 V (analog) are used. The frame rate and exposure time for the measurements are 17 frames per second (fps) and 59 ms, respectively. The characteristics of the fabricated CIS are summarized in Table 1.

4. Measurement Results and Discussion

Figure 9a shows the measurement system used to evaluate the performance of the fabricated CIS using the OPA technique as a function of incidence angle. In this system, a collimator produced parallel light, and the incidence angle was varied from −20° to 20°; this range was chosen by considering an F-number of 1.4, which corresponds to a maximum incidence angle of 19.65° when a camera lens is used to obtain images. The measurement was performed under dark conditions, and the output of the camera without the lens was measured.
The comparison of the simulation and measurement results is shown in Figure 9b. The simulated output corresponds to Figure 5 with an aperture offset of 0.65 µm. The measured output was averaged over 100 frames to suppress temporal random noise, and both the simulated and measured outputs were normalized to the range from 0 to 1. In both cases, the incidence angles at the peak outputs of the LOPA and ROPA pixels are shifted by more than 17° from 0°; the incidence angle at the peak is determined by θP, which is related to the aperture offset, as shown in Figure 4. These results confirm that light with a high incidence angle is transmitted in the LOPA and ROPA pixels. The measurement results agree well with the simulation results, with slightly different output: the disparities of the simulation and measurement results are 35.2° and 36.9°, respectively. The 1.7° difference is caused by the difference between the ideal microlens model used in the simulation and the fabricated microlens. Additionally, the incidence angle at the crossing point of the LOPA and ROPA outputs is shifted slightly, by −1.4° from 0°, in the measurement result, because the microlens is shifted owing to overlay error in the CIS fabrication process. An improved microlens process is necessary to obtain better performance.
Figure 10 shows the measurement setup for evaluating the disparity information. A black-and-white printed image with an edge was used to calculate the disparity. The measurement system comprises the camera board with the fabricated CIS, a camera lens, and a rail to control the distance. The F-number and focal length of the camera lens were 1.4 and 16 mm, respectively. The camera lens was focused at 20 cm, and the distance between the camera lens and the printed image was varied from 20 cm to 130 cm. The output image of the fabricated CIS was measured at each distance, and the edge of the output images was defined as the point where the light intensity is half the maximum value.
The output images from the LOPA and ROPA channels, obtained with the setup of Figure 10, are shown in Figure 11a. The LOPA and ROPA output images are separated from the raw image containing the LOPA, blue, red, green, and ROPA channels. Each output image of 101 × 101 pixels is the central part of the sample image; it is averaged over 100 frames to reduce temporal noise and then normalized. The edge of the image is defined as the point where the intensity drops to half of the maximum [20]. Comparing the disparities, the disparity at a distance of 130 cm is larger than that at 20 cm.
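The half-maximum edge definition can be implemented with linear interpolation between the two samples that bracket the threshold; this sketch assumes a 1-D dark-to-bright intensity profile taken across the edge:

```python
import numpy as np

def edge_position(profile):
    """Sub-pixel edge location: first crossing of half the maximum.

    profile : 1-D intensity profile across a dark-to-bright edge.
    Returns the fractional index where the intensity reaches half of
    the profile maximum, by linear interpolation between samples.
    """
    half = profile.max() / 2.0
    idx = int(np.argmax(profile >= half))  # first sample at/above half-max
    if idx == 0:
        return 0.0
    p0, p1 = profile[idx - 1], profile[idx]
    return (idx - 1) + (half - p0) / (p1 - p0)

profile = np.array([0.0, 0.05, 0.25, 0.75, 0.95, 1.0])
print(edge_position(profile))  # 2.5: halfway between samples 2 and 3
```

The disparity then follows by comparing the edge positions found in the LOPA and ROPA images.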
The measured and estimated disparities as functions of distance are shown in Figure 11b. The disparity was calculated by subtracting the positions of the edges in the output images of the LOPA and ROPA channels. The disparity increases with distance, and the depth can be calculated from the disparity information. The measured disparities agree well with the results estimated from Equation (7): the root-mean-square difference between the measured and estimated results is only 0.14 pixel, confirming that the theory of the OPA technique and the measurements are in good agreement.
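Because Equation (7) is monotonic in the object distance, it can be inverted in closed form to recover depth from a measured disparity. This sketch uses illustrative parameter values (all lengths in the same unit), not calibration data from the paper:

```python
def depth_from_disparity(D, u0, f, FP):
    """Invert Equation (7): object distance u from disparity D.

    Solving D = f^2 (u - u0) / (u (f - u0) F_P) for u gives
    u = f^2 u0 / (f^2 - D (f - u0) F_P).
    """
    return f ** 2 * u0 / (f ** 2 - D * (f - u0) * FP)

# Round trip: disparity at u = 1300 mm, then recover u from it
# (f = 16 mm, focus at u0 = 200 mm, illustrative F_P = 1.4).
f, u0, FP = 16.0, 200.0, 1.4
u = 1300.0
D = f ** 2 * (u - u0) / (u * (f - u0) * FP)
print(depth_from_disparity(D, u0, f, FP))  # 1300.0 (up to rounding)
```

In a real system, $f$, $u_0$, and $F_P$ would come from calibration, and $D$ from the edge-position difference between the LOPA and ROPA images.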

5. Conclusions

A CIS using the OPA technique for depth extraction was designed and fabricated in a 0.11-µm CIS process. The color pattern of the OPA technique comprises the LOPA, blue, red, green, and ROPA pixels: color information is obtained from the blue, red, and green pixels, and disparity information related to the distance between the camera lens and the object is obtained from the LOPA and ROPA pixels. The aperture offset in the LOPA and ROPA pixels was optimized by optical simulation using the finite-difference time-domain method. In the measurements, the disparity of the LOPA and ROPA pixels was evaluated as a function of incidence angle: with an aperture offset of 0.65 µm, a disparity of 35.2° was achieved in the simulation and 36.9° in the measurement. The measurement results agree well with the simulation results, apart from a small difference caused by the difference between the ideal microlens model used in the simulation and the fabricated microlens. Additionally, the disparities were measured as a function of distance; the measurements confirm that the disparity increases with distance and that the root-mean-square difference between the measured results and those estimated from the OPA model is only 0.14 pixel, in good agreement with the theoretical expressions. This work will be useful for depth extraction using high-resolution, low-cost CISs, and the developed CIS using the OPA technique has many applications, such as motion detection, gesture recognition, and object reconstruction.

Author Contributions

This work was realized through the collaboration of all authors. B.-S.C., J.L., and S.-H.K. performed the simulations and experiments. S.C., J.P., S.-J.L., and J.-K.S. analyzed the results and guided the research direction. B.-S.C. and J.-K.S. wrote the paper.

Funding

This work was supported by the Center for Integrated Smart Sensors funded by the Ministry of Science and ICT as Global Frontier Project (CISS-2016M3A6A6931333), and by the BK21 Plus project funded by the Ministry of Education, Korea (21A20131600011).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fossum, E.R.; Hondongwa, D.B. A Review of the Pinned Photodiode for CCD and CMOS Image Sensors. IEEE J. Electron Devices Soc. 2014, 2, 33–43.
  2. Bigas, M.; Cabruja, E.; Forest, J.; Salvi, J. Review of CMOS image sensors. Microelectron. J. 2006, 37, 433–451.
  3. Park, S.C.; Park, M.K.; Kang, M.G. Super-Resolution Image Reconstruction: A Technical Overview. IEEE Signal Process. Mag. 2003, 20, 21–36.
  4. Chiabrando, F.; Chiabrando, R.; Piatti, D.; Rinaudo, F. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera. Sensors 2009, 9, 10080–10096.
  5. Cho, J.; Choi, J.; Kim, S.-J.; Park, S.; Shin, J.; Kim, J.D.K.; Yoon, E. A 3-D Camera with Adaptable Background Light Suppression Using Pixel-Binning and Super-Resolution. IEEE J. Solid-State Circuits 2014, 49, 2319–2332.
  6. Kawahito, S.; Halin, I.A.; Ushinaga, T.; Sawada, T.; Homma, M.; Maeda, Y. A CMOS Time-of-Flight Range Image Sensor with Gates-on-Field-Oxide Structure. IEEE Sens. J. 2007, 7, 1578–1586.
  7. Bamji, C.S.; O'Connor, P.; Elkhatib, T.; Mehta, S.; Thompson, B.; Prather, L.A.; Snow, D.; Akkaya, O.C.; Daniel, A.; Payne, A.D.; et al. A 0.13 μm CMOS System-on-Chip for a 512 × 424 Time-of-Flight Image Sensor with Multi-Frequency Photo-Demodulation up to 130 MHz and 2 GS/s ADC. IEEE J. Solid-State Circuits 2015, 50, 303–319.
  8. Niclass, C.; Soga, M.; Matsubara, H.; Kato, S.; Kagami, M. A 100-m Range 10-Frame/s 340 × 96-Pixel Time-of-Flight Depth Sensor in 0.18-μm CMOS. IEEE J. Solid-State Circuits 2013, 48, 559–572.
  9. Lin, Y.; Yang, J.; Lv, Z.; Wei, W.; Song, H. A Self-Assessment Stereo Capture Model Applicable to the Internet of Things. Sensors 2015, 15, 20925–20944.
  10. Okutomi, M.; Kanade, T. A Multiple-Baseline Stereo. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 353–363.
  11. Liberadzki, P.; Adamczyk, M.; Witkowski, M.; Sitnik, R. Structured-Light-Based System for Shape Measurement of the Human Body in Motion. Sensors 2018, 18, 2827.
  12. Choi, B.-S.; Kim, S.-H.; Lee, J.; Chang, S.; Park, J.; Lee, S.-J.; Shin, J.-K. Extraction of Disparity Information Using Pixels with Integrated Apertures. In Proceedings of the IEEE International Conference on Imaging Systems and Techniques, Kraków, Poland, 16–18 October 2018.
  13. Adelson, E.H.; Wang, J.Y.A. Single Lens Stereo with a Plenoptic Camera. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 99–106.
  14. Levin, A.; Fergus, R.; Durand, F.; Freeman, W.T. Image and Depth from a Conventional Camera with a Coded Aperture. ACM Trans. Graph. 2007, 26, 70.
  15. Jang, J.; Park, S.; Jo, J.; Paik, J. Depth map generation using a single image sensor with phase masks. Opt. Express 2016, 24, 12868–12878.
  16. Wang, A.; Molnar, A. A Light-Field Image Sensor in 180 nm CMOS. IEEE J. Solid-State Circuits 2012, 47, 257–271.
  17. Śliwiński, P.; Wachel, P. A Simple Model for On-Sensor Phase-Detection Autofocusing Algorithm. J. Comput. Commun. 2013, 1, 11–17.
  18. Morimitsu, A.; Hirota, I.; Yokogawa, S.; Ohdaira, I.; Matsumura, M.; Takahashi, H.; Yamazaki, T.; Oyaizu, H.; Incesu, Y.; Atif, M.; et al. A 4M pixel full-PDAF CMOS image sensor with 1.58 μm 2 × 1 On-Chip Micro-Split-Lens technology. In Proceedings of the International Image Sensor Workshop, Vaals, The Netherlands, 8–11 June 2015; pp. 5–8.
  19. Choi, B.-S.; Kim, S.-H.; Lee, J.; Oh, C.-W.; Chang, S.; Park, J.; Lee, S.-J.; Shin, J.-K. 3D CMOS image sensor based on white pixel with off-center rectangular apertures. In Proceedings of the IS&T International Symposium on Electronic Imaging, Burlingame, CA, USA, 28 January–2 February 2018.
  20. Yun, W.J.; Kim, Y.G.; Lee, Y.M.; Lim, J.Y.; Kim, H.J.; Khan, M.U.K.; Chang, S.; Park, H.S.; Kyung, C.M. Depth extraction with offset pixels. Opt. Express 2018, 26, 15825–15841.
Figure 1. Geometric model for the depth-from-disparity method.
Figure 2. Disparity images from the left and right holes.
Figure 3. The geometric model of the OPA technique.
Figure 4. Vertical structures of (a) blue, red, green, (b) LOPA, and (c) ROPA pixels.
Figure 5. Optical simulation results that demonstrate (a) the variations in optical power with the incidence angle as a function of the aperture offset and (b) the disparity according to the aperture offset.
Figure 6. (a) Layouts of the LOPA and ROPA pixels and (b) color pattern of the pixel array.
Figure 7. Structures of the fabricated pixels obtained by the scanning electron microscopy.
Figure 8. Camera board with the fabricated CIS using the OPA technique.
Figure 9. (a) Measurement system used to evaluate performance of the fabricated CIS using the OPA technique according to the incidence angle and (b) comparison of the simulation and measurement results.
Figure 10. Measurement setup for evaluation of disparity information.
Figure 11. (a) Output images from the LOPA and ROPA channels, and (b) the measured and estimated results with disparity, according to distance.
Table 1. Summary of the fabricated CIS.
Parameter       Value
Process         0.11-µm 2-poly 4-metal CIS process
Pixel type      Four-transistor active pixel sensor with pinned photodiode
Pixel size      2.8 × 2.8 µm2
Pixel array     1632 (H) × 1124 (V)
Color pattern   LOPA, blue, red, green, and ROPA pixels
Chip size       7 mm (H) × 10 mm (V)
Power supply    3.3 V (analog), 1.5 V (analog and digital)
Frame rate      17 fps

Cite as: Choi, B.-S.; Lee, J.; Kim, S.-H.; Chang, S.; Park, J.; Lee, S.-J.; Shin, J.-K. Analysis of Disparity Information for Depth Extraction Using CMOS Image Sensor with Offset Pixel Aperture Technique. Sensors 2019, 19, 472. https://doi.org/10.3390/s19030472