Article

Micro 4D Imaging Sensor Using Snapshot Narrowband Imaging Method

1 College of Mechanical Engineering and Automation, Huaqiao University, Xiamen 361021, China
2 School of Mechanical and Automotive Engineering, Fujian University of Technology, Fuzhou 350118, China
* Author to whom correspondence should be addressed.
Micromachines 2023, 14(9), 1689; https://doi.org/10.3390/mi14091689
Submission received: 12 August 2023 / Revised: 26 August 2023 / Accepted: 28 August 2023 / Published: 29 August 2023

Abstract

The spectral and depth (SAD) imaging method plays an important role in the field of computer vision. However, accurate depth estimation and spectral image capture from a single image, without increasing the volume of the imaging sensor, remain an unresolved problem. Our research finds that a snapshot narrow-band imaging (SNBI) method can discern wavelength-dependent spectral aberration and simultaneously capture spectral-aberration-defocused images for quantitative depth estimation. First, a micro 4D imaging (M4DI) sensor is proposed by integrating a mono-chromatic imaging sensor with a miniaturized narrow-band microarrayed spectral filter mosaic. The appearance and volume of the M4DI sensor are the same as those of the integrated mono-chromatic imaging sensor. A simple remapping algorithm is developed to separate the raw image into four narrow spectral band images. Then, a depth estimation algorithm is developed to generate 3D data with a dense depth map at every exposure of the M4DI sensor. Compared with existing SAD imaging methods, the M4DI sensor has the advantages of simple implementation, low computational burden, and low cost. A proof-of-principle M4DI sensor was applied to sense the depth of objects and to track a tiny target's trajectory. The relative error in three-dimensional positioning is less than 7% for objects within 1.1 to 2.8 m.

1. Introduction

Spectral imaging sensors can obtain spectral information together with two-dimensional spatial information (x, y, λ), and they have been widely used in remote sensing [1], biomedical engineering [2,3], and food/crop quality detection [4,5]. In parallel, three-dimensional (3D) imaging, which obtains three-dimensional spatial information (x, y, z), plays an important role in computer vision tasks such as trajectory tracking [6,7], 3D reconstruction [8,9,10], and automatic driving [11,12]. In recent years, spectral and depth (SAD) imaging methods combining 3D spatial imaging and spectral imaging have been developed.
The simplest SAD imaging method is to fuse data from multiple sensors [13,14,15] to obtain 4D information (x, y, z, λ), but such systems are bulky and suffer from alignment errors. Monocular SAD imaging methods can obtain 4D information (x, y, z, λ) from one imaging device, but most rely on scanning [16,17] or multiple frames [18,19,20,21], which lowers the temporal resolution and requires the scene to be static. With the development of imaging technology, SAD imaging methods are gradually moving toward monocular snapshot designs [22,23,24,25]. On the one hand, the existing methods use dispersive elements [22,24,25] or a Wollaston prism [23] to map the spectrum to pixel positions, which increases the volume of the system, and most rely on computational imaging [22,23,24,25], which cannot display 4D data cubes in real time because of the massive computation needed to reconstruct spectral images. On the other hand, the existing methods use light field imaging to obtain depth, which also increases the volume of the imaging system. Therefore, a monocular snapshot SAD imaging method with simple implementation and low computational complexity is of great research value.
The key problem of the SAD imaging method is how to use a single two-dimensional imaging sensor to obtain multidimensional information (x, y, z, λ) in real time. Regarding snapshot spectral imaging, the snapshot narrow-band imaging (SNBI) method developed by our team [26] can capture a multispectral image in a single shot. The SNBI method uses a miniaturized narrow-band microarrayed spectral filter mosaic to transform grayscale cameras into snapshot multispectral cameras without increasing the volume of the imaging system. Regarding depth imaging, extensive research has been undertaken into depth sensing in 3D unstructured scenes using various 3D imaging methods, including light field imaging [7,22,23,24,25], multicolor depth from defocus (DFD) [27,28,29,30,31], time-of-flight [19,21,32], and multicamera stereo vision [33,34]. Most of them cannot obtain depth in a single frame or extend the measurement range without increasing the volume of the imaging sensor. In particular, the DFD approach has unique advantages: it recovers depth by analyzing the amount of defocus blur in a single image and requires a simpler optical design. In addition, the DFD approach is a passive depth estimation method that is not disturbed by infrared illumination from the sun, so it can be used indoors as well as outdoors. The focus of our research is to estimate depth from a multispectral image using a micro-imaging sensor. We previously showed that it is feasible to use multispectral images to detect spectral-aberration-caused defocus [35].
This study proposes a micro 4D imaging (M4DI) sensor that can dynamically capture spectral and 3D spatial information. The M4DI sensor integrates a mono-chromatic imaging sensor with a miniaturized narrow-band microarrayed spectral filter mosaic. It has the same volume as the integrated mono-chromatic imaging sensor and offers compactness, light weight, and low cost. Four-channel multispectral images are obtained by a simple remapping algorithm in a single exposure. Then, we propose a method that uses defocus cues from the multispectral images to estimate depth. Postprocessing imposes a light computational burden, which makes the method suitable for a real-time micro-imaging sensor. Finally, the system parameters are determined and the depth estimation performance of the prototype is tested.

2. System and Methods

2.1. Micro 4D Imaging Sensor

The M4DI sensor was developed by integrating a mono-chromatic imaging sensor with a miniaturized narrow-band microarrayed spectral filter mosaic. The filter mosaic contains 135 × 160 square compound pixels (CPs), and each CP covers 16 × 16 pixels of the underlying mono-chromatic imaging sensor. Higher spatial and spectral resolution can be realized in the future by improving the manufacturing accuracy of the filter mosaic. Each CP consists of four optical microfilters arranged side-by-side in a two-dimensional manner (Figure 1a). The side length of a CP is 104 µm; hence, the side length of each microfilter is 52 µm, allowing light within one narrow spectral band to pass through. Jointly, a CP passes four narrow spectral bands, B1 = 450 ± 10 nm, B2 = 525 ± 10 nm, B3 = 620 ± 10 nm, and B4 = 415 ± 10 nm, while blocking all other wavelengths. The transmittance of all four passbands exceeds 70%, at least four orders of magnitude higher than that of the stopbands, whose transmittance is below 0.004% (Figure 1b). Therefore, the M4DI sensor exhibits no noticeable cross-talk between different spectral bands.
As shown in Figure 2, an experimental platform for the M4DI sensor was established; Table 1 lists the components used. The axial dispersion lens is characterized by a different focal length for each wavelength of light, which enhances the defocus difference between spectral images. Light is filtered by the miniaturized narrow-band filter mosaic and captured by a grayscale camera. A laser range sensor, fixed to the M4DI sensor, provides the ground-truth depth; its relative depth error is less than 0.2% within a range of 5 m.

2.2. Multispectral Images and Depth Map

2.2.1. Multispectral Image Acquisition

Figure 3 illustrates the spatial arrangement of four neighboring pixels of the bands Bi (i = 1, 2, 3, 4) within the raw image, which is determined by the filter mosaic shown in Figure 1a. At a single exposure, the M4DI sensor captures a raw image of the scene (Figure 3). A simple remapping algorithm was developed to separate the raw image R(r,c) (r = 1, 2, …, 2160; c = 1, 2, …, 2560) into four narrow spectral band images Bi(m,n), where i = 1, 2, 3, 4, m = 1, 2, …, 135, and n = 1, 2, …, 160, according to Equation (1):
$$B_i(m,n)=\frac{1}{16}\sum_{x=2}^{5}\sum_{y=2}^{5}R\!\left(16m+x+8\times\mathrm{mod}(i-1,2),\;16n+y+8\times\left\lfloor\frac{i-1}{2}\right\rfloor\right), \quad (1)$$
where mod(a,b) is the remainder of the division of a by b, and ⌊·⌋ denotes rounding toward negative infinity. Within a compound pixel, all the gray values covered by the channel-i microfilter are averaged. Owing to manufacturing limitations, there is a gap between adjacent microfilters, so only the inner 4 × 4 window of each microfilter patch is used (x, y = 2, 3, 4, 5).
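As a concrete illustration, the remapping of Equation (1) can be written in a few lines of NumPy. This is only a sketch under the geometry stated above (135 × 160 compound pixels of 16 × 16 sensor pixels, four 8 × 8 microfilter patches per CP, inner 4 × 4 window); zero-based indices replace the paper's one-based m, n, and i.

```python
import numpy as np

def remap_raw_to_bands(raw):
    """Sketch of Eq. (1): split a raw mosaic frame into four band images.

    raw: (2160, 2560) grayscale frame. Each 16x16 compound pixel holds four
    8x8 microfilter patches; only the inner 4x4 window of each patch is
    averaged, skipping the gaps between adjacent microfilters.
    """
    cp = raw.astype(np.float64).reshape(135, 16, 160, 16)  # (m, row in CP, n, col in CP)
    bands = np.empty((4, 135, 160))
    for i in range(4):                 # zero-based band index (paper: i = 1..4)
        ro = 8 * (i % 2) + 2           # 8*mod(i-1, 2) plus inner-window offset
        co = 8 * (i // 2) + 2          # 8*floor((i-1)/2) plus inner-window offset
        bands[i] = cp[:, ro:ro + 4, :, co:co + 4].mean(axis=(1, 3))
    return bands
```

The reshape exposes each 16 × 16 compound pixel as its own block, so the per-band average is a single vectorized slice rather than a loop over pixels.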

2.2.2. Depth from Multispectral Imaging with Chromatic Aberration

The key to DFD is extracting defocus information from the blurred image. A blurred image can be modeled as the convolution of a sharp image with a point spread function (PSF). For circular apertures, the defocus pattern can be approximated by the Gaussian function $g(x,\sigma)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right)$, whose standard deviation σ represents the degree of blur. The defocused image I(x) captured by the imaging sensor can be represented by the following formula:
$$I(x)=f(x)\otimes g(x,\sigma)+n(x), \quad (2)$$
where x is the pixel coordinate of the image, ⊗ denotes convolution, and n(x) is random noise.
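For intuition, Equation (2) can be simulated directly: blur an ideal step edge with a Gaussian PSF and add noise. The amplitude, offset, blur width, and noise level below are illustrative assumptions, not measured system parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
x = np.arange(256)
f = np.where(x < 128, 40.0, 200.0)      # ideal edge A*u(x) + B, with B = 40, A = 160
sigma = 3.0                              # assumed defocus blur width in pixels
I = gaussian_filter1d(f, sigma) + rng.normal(0.0, 1.0, f.shape)  # Eq. (2): f ⊗ g + n
```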
The chromatic aberration of a lens means that each wavelength of light has a different focal length, caused by the wavelength dependence of the refractive index of the lens material. In general, chromatic aberration is eliminated in a postprocessing step. Our research instead focuses on estimating depth from the blur differences between multispectral images. The relationship between the defocus cue σi of the ith spectral channel λi and the depth d is as follows:
$$\sigma_i(x)=kDs\left|\frac{1}{d_i}-\frac{1}{d}\right|, \quad (3)$$
where D is the optical aperture; k is a constant related to the imaging system; s is the distance between the sensor and the lens; and di is the optimal focus distance (OFD) corresponding to the ith spectral channel λi.
Generally, defocus cues are most easily estimated at edges in the image texture [36], and the blurred step edge is the dominant edge type. Calculating the defocus degree of a step edge can therefore be used to estimate depth. The image at an edge can be expressed as $f(x)=\left(A\,u(x)+B\right)\otimes g(x,\sigma_0)$, where u(x) is the step function, σ0 is a fixed standard deviation of the step blur edge, A is the amplitude, and B is the offset. The defocused image captured by the imaging sensor can be represented by the following formula:
$$I_i(x)=\left(A\,u(x)+B\right)\otimes g(x,\sigma_0)\otimes g(x,\sigma_i)+n(x), \quad (4)$$
where Ii(x) is the pixel value at image position x of the ith spectral channel λi, and σi is the standard deviation of the ith spectral channel λi. Median filtering is applied to the multispectral image captured by the M4DI sensor to eliminate noise. Before depth estimation, a preprocessing step normalizes the four spectral images so that their amplitudes A are comparable. The normalized spectral image I(λ) is obtained using Equation (5):
$$I(\lambda)=\frac{B(\lambda)-B_{min}(\lambda)}{B_{max}(\lambda)-B_{min}(\lambda)}, \quad (5)$$
where B(λ) is the spectral image obtained using Equation (1), Bmin(λ) is the minimum gray value of the spectral image, and Bmax(λ) is the maximum gray value of the spectral image.
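This preprocessing amounts to a median filter followed by the min-max normalization of Equation (5). A minimal sketch follows; the 3 × 3 filter window is an assumption, as the paper does not state the window size.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(band):
    """Denoise one spectral image, then min-max normalize it (Eq. 5)."""
    b = median_filter(band.astype(np.float64), size=3)  # 3x3 median window (assumed)
    return (b - b.min()) / (b.max() - b.min())
```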
Then, the gradient of the blurred image becomes:
$$\nabla I_i(x)=\frac{A}{\sqrt{2\pi\left(\sigma_0^{2}+\sigma_i^{2}\right)}}\exp\!\left(-\frac{x^{2}}{2\left(\sigma_0^{2}+\sigma_i^{2}\right)}\right). \quad (6)$$
Writing $\nabla I_i(0)$ as $\nabla I_i$, the gradient at the edge (x = 0) becomes:
$$\nabla I_i=\frac{A}{\sqrt{2\pi\left(\sigma_0^{2}+\sigma_i^{2}\right)}}. \quad (7)$$
According to Equation (7), the edge gradients of any three wavelength images can be combined to eliminate the fixed standard deviation σ0. For convenience, the following ratio is denoted as M:
$$M=\frac{\nabla I_i^{-2}-\nabla I_j^{-2}}{\nabla I_k^{-2}-\nabla I_j^{-2}}=\frac{\sigma_i^{2}-\sigma_j^{2}}{\sigma_k^{2}-\sigma_j^{2}}, \quad i\neq j\neq k. \quad (8)$$
Substituting Equation (3) into Equation (8) and defining $D_i = 1/d_i$ gives:
$$\hat{d}_{i,j,k}=\frac{2\left[M\left(D_k-D_j\right)-\left(D_i-D_j\right)\right]}{M\left(D_k^{2}-D_j^{2}\right)-\left(D_i^{2}-D_j^{2}\right)}, \quad i\neq j\neq k. \quad (9)$$
Equation (9) shows that the depth can be estimated from three spectral images. In an imaging system, there is a limit to the gradient change the image sensor can perceive: when the blur is too large, the gradient change may be too small to sense. Therefore, the three spectral images with the closest central wavelengths are applied to Equation (9), and the remaining spectral image extends the depth estimation to a new range. In this paper, four spectral images are used for depth estimation:
$$\hat{d}=\begin{cases}\hat{d}_{1,2,3} & \text{if }\nabla I_2\text{ or }\nabla I_3=\max\left(\nabla I_1,\nabla I_2,\nabla I_3,\nabla I_4\right)\\[2pt] \hat{d}_{1,2,4} & \text{if }\nabla I_1\text{ or }\nabla I_4=\max\left(\nabla I_1,\nabla I_2,\nabla I_3,\nabla I_4\right)\\[2pt] 0 & \text{otherwise.}\end{cases} \quad (10)$$
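Equations (8)–(10) reduce to a few arithmetic operations per edge pixel. The sketch below assumes the per-band edge-gradient magnitudes (taken from the normalized images, so that A is comparable) are already available at one edge pixel; the OFD values are those measured in Section 3.

```python
# Optimal focus distances (m) measured in Section 3; D_i = 1/d_i as in Eq. (9).
OFD = {1: 1.6, 2: 2.3, 3: 4.0, 4: 1.4}

def depth_from_triple(grad, i, j, k):
    """Eqs. (8)-(9): depth from the edge gradients of three spectral bands."""
    M = (1 / grad[i]**2 - 1 / grad[j]**2) / (1 / grad[k]**2 - 1 / grad[j]**2)  # Eq. (8)
    Di, Dj, Dk = 1 / OFD[i], 1 / OFD[j], 1 / OFD[k]
    return 2 * (M * (Dk - Dj) - (Di - Dj)) / (M * (Dk**2 - Dj**2) - (Di**2 - Dj**2))

def depth_at_edge(grad):
    """Eq. (10): choose the band triple by which gradient is strongest.

    grad: dict mapping band index 1..4 to |edge gradient| at one edge pixel.
    """
    strongest = max(grad, key=grad.get)
    if strongest in (2, 3):
        return depth_from_triple(grad, 1, 2, 3)
    if strongest in (1, 4):
        return depth_from_triple(grad, 1, 2, 4)
    return 0.0
```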
Equation (10) yields a sparse depth map in which depth is estimated only at edges. A dense depth map, wherein each pixel has a depth value, can be obtained from the sparse depth map using existing algorithms [37] with some modifications. We use the matting Laplacian method to interpolate the sparse depth $\hat{d}$ into a dense depth map δ. The interpolation minimizes the following cost function:
$$E(\delta)=\delta^{T}L\delta+\rho\left(\delta-\hat{d}\right)^{T}H\left(\delta-\hat{d}\right), \quad (11)$$
where L is the matting Laplacian matrix proposed in [37]; ρ is a smoothing constant; and H is a diagonal matrix whose element Hmm equals 1 if $\hat{d}(m)\neq 0$ at pixel m, and 0 otherwise. Differentiating Equation (11) with respect to δ and setting the derivative to zero yields:
$$\left(L+\rho H\right)\delta=\rho H\hat{d}, \quad (12)$$
Equation (12) can be rewritten in the following way:
$$\delta=\left(L+\rho H\right)^{-1}\rho H\hat{d}. \quad (13)$$
The value of ρ is determined by the camera system or estimation method. In our system, we use a fixed ρ value of 0.001.
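Given the sparse map, Equation (13) is one sparse linear solve. To stay self-contained, the sketch below substitutes a plain 4-neighbor grid Laplacian for the matting Laplacian of [37]; the structure of the system (L + ρH)δ = ρH d̂ is unchanged, but the result will be less edge-aware than the authors' interpolation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def densify(sparse_depth, rho=0.001):
    """Solve (L + rho*H) delta = rho*H*d_hat (Eq. 13) for a dense depth map.

    Uses a 4-neighbor grid Laplacian as a stand-in for the matting
    Laplacian of Levin et al. [37]; zeros in sparse_depth mean 'no estimate'.
    """
    h, w = sparse_depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    # Horizontal and vertical neighbor pairs of the pixel grid.
    r = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    c = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    W = sp.coo_matrix((np.ones(r.size), (r, c)), shape=(n, n))
    W = W + W.T                                            # symmetric adjacency
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W    # graph Laplacian
    H = sp.diags((sparse_depth.ravel() != 0).astype(float))  # H_mm = 1 where d_hat(m) != 0
    delta = spsolve((L + rho * H).tocsc(), rho * (H @ sparse_depth.ravel()))
    return delta.reshape(h, w)
```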

3. Results and Discussion

According to Equation (9), the OFD of each spectral channel must be determined. The OFD can be obtained by illuminating the imaging sensor with a point light source. Figure 4 illustrates the band-wise defocus variation with depth, measured as the full-width at half-maximum (FWHM) of a point light source imaged by the M4DI sensor; the depth at which the FWHM is minimal is the OFD. From Figure 4, the measured OFDs are d4 = 1.4 m, d3 = 4 m, d2 = 2.3 m, and d1 = 1.6 m. Note that these OFDs are fixed in our system because the lens and the focal length of each channel are fixed.
The image processing pipeline of the M4DI sensor is shown in Figure 5. At a single exposure, four spectral images are separated from the raw image. The spectral images are normalized, and their gradients are calculated. We use the Canny edge detector to locate edges and estimate the depth at the edges (sparse depth). Finally, a dense depth map is obtained by filling in the sparse depth according to the normalized spectral images.
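Tying the sketches above together, one pass of this pipeline might look as follows. The Canny thresholds, the choice of the 525 nm band for edge detection, and the horizontal gradient operator are illustrative assumptions rather than the authors' exact settings.

```python
import cv2
import numpy as np

def process_frame(raw):
    """End-to-end sketch of Figure 5: raw mosaic frame -> dense depth map."""
    bands = remap_raw_to_bands(raw)                   # Eq. (1)
    norm = np.stack([preprocess(b) for b in bands])   # median filter + Eq. (5)
    grads = [np.abs(np.gradient(b, axis=1)) for b in norm]   # horizontal gradients
    edges = cv2.Canny((norm[1] * 255).astype(np.uint8), 50, 150) > 0
    sparse = np.zeros(norm.shape[1:])
    for m, n in zip(*np.nonzero(edges)):              # sparse depth at edge pixels only
        g = {i + 1: max(grads[i][m, n], 1e-6) for i in range(4)}
        sparse[m, n] = depth_at_edge(g)               # Eq. (10)
    return densify(sparse)                            # Eq. (13)
```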
A plane sample was placed at intervals of 10 cm over the range of 1~4 m to test the accuracy and range of depth estimation. The results are shown in Figure 6a. The true depth, illustrated by the black curve, was measured using a laser range sensor with 1 mm accuracy within a depth range of 5 m. When the 620 nm, 525 nm, and 450 nm channel images are used as inputs, the depth estimation range is 1.3~2.8 m; when the 525 nm, 450 nm, and 415 nm channel images are used, it is 1.1~1.4 m. Therefore, the depth estimation range of the M4DI sensor is 1.1 to 2.8 m, and its maximum relative error is within 7%. We further applied the M4DI sensor to recover the trajectory of a box printed with tiny characters. The central pixel position and depth of the character “U” (size: 10 × 17 mm) were obtained by template matching. The trajectory sampled at 16 different positions is shown in Figure 6b. This shows that the M4DI sensor could be applied to target recognition and tracking.
We further conducted a qualitative evaluation to test whether the M4DI sensor can perform 3D sensing of complicated objects. Experimental samples were placed at different depths, and the depth estimation results are shown in Figure 7. Multiple factors can affect the depth measurement accuracy. One is the uneven reflectivity of the surface material: if the reflectivity peaks in one of the four narrow bands, the received light intensity, and hence the gradient of that band, is enhanced, causing an apparently shallower depth than the real depth. Another is the absence of texture, which leaves a passive depth estimation method with no depth cue. Specifically, as shown in Figure 7, the samples contain regions with no texture (symbol A), strong specular reflection (symbol B), and weak illumination (symbol C). Even with these error-inducing factors, the far–near relationships between samples can be distinguished, that is, a < b < c, d < e, f < g, and h < i.

4. Conclusions

In conclusion, this paper presents an M4DI sensor for multispectral depth imaging that obtains narrow-band spectral images in four channels, with a relative depth recovery error of less than 7% over the depth range of 1.1 to 2.8 m. The M4DI sensor was developed by integrating a mono-chromatic imaging sensor with a miniaturized narrow-band microarrayed spectral filter mosaic, giving it the same volume as the integrated mono-chromatic imaging sensor. The advantages of the proposed M4DI sensor include its efficiency in generating 4D cubic data, its extended depth of focus when combining its spectral bands, its compactness, its light weight, and its ability to work passively. These advantages make it unique among SAD imaging methods, as no existing SAD imaging method can obtain a multispectral depth image in a single frame without increasing the volume of the imaging sensor. The M4DI sensor was applied to sense the depth of objects and to track tiny target trajectories. As a miniaturized real-time imaging system, it has the potential to identify and track targets in narrow spaces.

Author Contributions

Conceptualization, W.J. and D.Y.; methodology, W.J.; software, W.J.; validation, W.J.; formal analysis, W.J.; investigation, D.Y. and C.H.; resources, Q.Y. and L.K.; data curation, W.J. and C.H.; writing—original draft, W.J.; writing—review and editing, D.Y.; supervision, D.Y.; project administration, C.H.; funding acquisition, C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Natural Science Foundation of Fujian (2020J02005) and the General Program of the Fujian Province Natural Science Foundation (2021J01293).

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kukkonen, M.; Maltamo, M.; Korhonen, L.; Packalen, P. Comparison of multispectral airborne laser scanning and stereo matching of aerial images as a single sensor solution to forest inventories by tree species. Remote Sens. Environ. 2019, 231, 111208.
2. Paquit, V.C.; Tobin, K.W.; Price, J.R.; Meriaudeau, F. 3D and Multispectral Imaging for Subcutaneous Veins Detection. Opt. Express 2009, 17, 11360–11365.
3. Li, Q.; He, X.; Wang, Y.; Liu, H.; Xu, D.; Guo, F. Review of spectral imaging technology in biomedical engineering: Achievements and challenges. J. Biomed. Opt. 2013, 18, 100901.
4. Nicolaï, B.M.; Beullens, K.; Bobelyn, E.; Peirs, A.; Saeys, W.; Theron, K.I.; Lammertyn, J. Nondestructive measurement of fruit and vegetable quality by means of NIR spectroscopy: A review. Postharvest Biol. Technol. 2007, 46, 99–118.
5. Sun, G.; Wang, X.; Sun, Y.; Ding, Y.; Lu, W. Measurement Method Based on Multispectral Three-Dimensional Imaging for the Chlorophyll Contents of Greenhouse Tomato Plants. Sensors 2019, 19, 3345.
6. Bahraini, M.S.; Rad, A.B.; Bozorg, M. SLAM in Dynamic Environments: A Deep Learning Approach for Moving Object Tracking Using ML-RANSAC Algorithm. Sensors 2019, 19, 20.
7. Zheng, Y.; Song, L.; Huang, J.; Zhang, H.; Fang, F. Detection of the three-dimensional trajectory of an object based on a curved bionic compound eye. Opt. Lett. 2019, 44, 4143–4146.
8. Han, Q.; Wang, S.; Fang, Y.; Wang, L.; Du, X.; Li, H.; He, Q.; Feng, Q. A Rail Fastener Tightness Detection Approach Using Multi-source Visual Sensor. Sensors 2020, 20, 1367.
9. Zhou, S.; Song, W. Robust Image-Based Surface Crack Detection Using Range Data. J. Comput. Civil. Eng. 2020, 34, 16.
10. Zhang, Y.; Liu, W.; Wang, W.; Gao, P.; Xing, H.; Ma, J. Extraction method of a nonuniform auxiliary laser stripe feature for three-dimensional reconstruction of large components. Appl. Opt. 2020, 59, 6573–6583.
11. Liu, Z.; Yu, S.; Zheng, N. A Co-Point Mapping-Based Approach to Drivable Area Detection for Self-Driving Cars. Engineering 2018, 4, 479–490.
12. Li, H.; Savkin, A.V.; Vucetic, B. Autonomous Area Exploration and Mapping in Underground Mine Environments by Unmanned Aerial Vehicles. Robotica 2020, 38, 442–456.
13. Wang, L.; Xiong, Z.; Shi, G.; Zeng, W.; Wu, F. Simultaneous Depth and Spectral Imaging With a Cross-Modal Stereo System. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 812–817.
14. Kim, M.H.; Harvey, T.A.; Kittle, D.S.; Rushmeier, H.; Dorsey, J.; Prum, R.O.; Brady, D.J. 3D imaging spectroscopy for measuring hyperspectral patterns on solid objects. ACM Trans. Graph. 2012, 31, 1–11.
15. Zhao, Y.; Yue, T.; Chen, L.; Wang, H.; Ma, Z.; Brady, D.J.; Cao, X. Heterogeneous camera array for multispectral light field imaging. Opt. Express 2017, 25, 14008.
16. Latorre-Carmona, P.; Sánchez-Ortiga, E.; Xiao, X.; Pla, F.; Martínez-Corral, M.; Navarro, H.; Saavedra, G.; Javidi, B. Multispectral integral imaging acquisition and processing using a monochrome camera and a liquid crystal tunable filter. Opt. Express 2012, 20, 25960.
17. Farber, V.; Oiknine, Y.; August, I.; Stern, A. Compressive 4D spectro-volumetric imaging. Opt. Lett. 2016, 41, 5174.
18. Marquez, M.; Rueda-Chacon, H.; Arguello, H. Compressive Spectral Light Field Image Reconstruction via Online Tensor Representation. IEEE Trans. Image Process. 2020, 29, 3558–3568.
19. Rueda, H.; Fu, C.; Lau, D.L.; Arce, G.R. Single Aperture Spectral plus ToF Compressive Camera: Toward Hyperspectral plus Depth Imagery. IEEE J. Sel. Top. Signal Process. 2017, 11, 992–1003.
20. Zia, A.; Zhou, J.; Gao, Y. Exploring Chromatic Aberration and Defocus Blur for Relative Depth Estimation From Monocular Hyperspectral Image. IEEE Trans. Image Process. 2021, 30, 4357–4370.
21. Rueda-Chacon, H.; Florez-Ospina, J.F.; Lau, D.L.; Arce, G.R. Snapshot Compressive ToF plus Spectral Imaging via Optimized Color-Coded Apertures. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2346–2360.
22. Feng, W.; Rueda, H.; Fu, C.; Arce, G.R.; He, W.; Chen, Q. 3D compressive spectral integral imaging. Opt. Express 2016, 24, 24859–24871.
23. Zhu, S.; Gao, L.; Zhang, Y.; Lin, J.; Jin, P. Complete plenoptic imaging using a single detector. Opt. Express 2018, 26, 26495.
24. Cui, Q.; Park, J.; Smith, R.T.; Gao, L. Snapshot hyperspectral light field imaging using image mapping spectrometry. Opt. Lett. 2020, 45, 772–775.
25. Cui, Q.; Park, J.; Ma, Y.; Gao, L. Snapshot hyperspectral light field tomography. Optica 2021, 8, 1552.
26. Yi, D.; Kong, L.; Zhao, Y. Contrast-Enhancing Snapshot Narrow-Band Imaging Method for Real-Time Computer-Aided Cervical Cancer Screening. J. Digit. Imaging 2020, 33, 211–220.
27. Cao, Z.; Zhai, C. Defocus-based three-dimensional particle location with extended depth of field via color coding. Appl. Opt. 2019, 58, 4734–4739.
28. Haim, H.; Elmalem, S.; Giryes, R.; Bronstein, A.M.; Marom, E. Depth Estimation From a Single Image Using Deep Learned Phase Coded Mask. IEEE Trans. Comput. Imaging 2018, 4, 298–310.
29. Trouvé, P.; Champagnat, F.; Le Besnerais, G.; Sabater, J.; Avignon, T.; Idier, J. Passive depth estimation using chromatic aberration and a depth from defocus approach. Appl. Opt. 2013, 52, 7152–7164.
30. Trouvé-Peloux, P.; Sabater, J.; Bernard-Brunel, A.; Champagnat, F.; Le Besnerais, G.; Avignon, T. Turning a conventional camera into a 3D camera with an add-on. Appl. Opt. 2018, 57, 2553–2563.
31. Sitzmann, V.; Diamond, S.; Peng, Y.; Dun, X.; Boyd, S.; Heidrich, W.; Heide, F.; Wetzstein, G. End-to-end Optimization of Optics and Image Processing for Achromatic Extended Depth of Field and Super-resolution Imaging. ACM Trans. Graph. 2018, 37, 144-1–144-13.
32. Georgiev, M.; Bregovic, R.; Gotchev, A. Time-of-Flight Range Measurement in Low-Sensing Environment: Noise Analysis and Complex-Domain Non-Local Denoising. IEEE Trans. Image Process. 2018, 27, 2911–2926.
33. Furukawa, Y.; Hernández, C. Multi-View Stereo: A Tutorial. Found. Trends Comput. Graph. Vis. 2015, 9, 1–148.
34. Wilburn, B.; Joshi, N.; Vaish, V.; Talvala, E.-V.; Antunez, E.; Barth, A.; Adams, A.; Horowitz, M.; Levoy, M. High performance imaging using large camera arrays. ACM Trans. Graph. 2005, 24, 765–776.
35. Jiang, W.; Yi, D.; Kong, L. Depth from spectrally-varying blurring detected by a snapshot narrow band multispectral imaging sensor. J. Eng. 2019, 2019, 8591–8594.
36. Zhuo, S.; Sim, T. Defocus map estimation from a single image. Pattern Recognit. 2011, 44, 1852–1858.
37. Levin, A.; Lischinski, D.; Weiss, Y. A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 228–242.
Figure 1. Illustration of (a) the geometric arrangement and (b) the spectral transmittance of the filter mosaic used in this study.
Figure 2. The experimental setup for the proposed M4DI sensor.
Figure 3. Spatial arrangement of four neighboring pixels of four bands.
Figure 4. Variations in each spectral channel defocus of the dispersive optical lens.
Figure 5. The image processing process of the M4DI sensor.
Figure 6. Illustration of (a) the result of depth estimation and (b) the track recovery result of the tiny character “U” (size: 10 × 17 mm).
Figure 7. Depth estimated by the M4DI sensor: the first row shows the images of four different scenes captured by a smartphone; the second row shows four pseudocolor images composed of 620 nm, 525 nm, and 450 nm channel images; the third row shows the sparse depth (only at edges) for the four scenes; and the fourth row shows the dense depth map for the four scenes.
Table 1. List of components used in the experimental platform.

Components | Manufacturer | Function
Mono-chromatic imaging sensor | United Scientific Camera & Imaging Corp. (A55-G17M) | Capture gray image
Miniaturized narrow-band microarrayed spectral filter mosaic | Self-built | Convert gray image into narrow-band spectral image
Axial-dispersive optical lens | Self-built | Produce chromatic dispersion
Laser range sensor | Zhiwei Robotics Corp. (RPLIDAR A1M8-R6) | Obtain true depth for testing performance
