Article

Hybrid Space Calibrated 3D Network of Diffractive Hyperspectral Optical Imaging Sensor

by Hao Fan 1,2, Chenxi Li 1,*, Bo Gao 1,2, Huangrong Xu 1, Yuwei Chen 1,2, Xuming Zhang 3, Xu Li 3 and Weixing Yu 1,2,*

1 Key Laboratory of Spectral Imaging Technology of Chinese Academy of Sciences, Xi’an Institute of Optics and Precision Mechanics, Xi’an 710119, China
2 Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
3 Department of Applied Physics, Hong Kong Polytechnic University, Hong Kong 999077, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(21), 6903; https://doi.org/10.3390/s24216903
Submission received: 13 September 2024 / Revised: 17 October 2024 / Accepted: 24 October 2024 / Published: 28 October 2024

Abstract

Diffractive multispectral optical imaging plays an essential role in optical sensing but typically suffers from image blurring caused by the spatially variant point spread function. Here, we propose a novel, high-quality, and efficient hybrid space calibrated 3D network “HSC3D” for spatially variant diffractive multispectral imaging, which combines a 3D U-Net structure with space calibration modules for magnification and rotation effects to achieve high-accuracy eight-channel multispectral restoration. The algorithm combines the advantages of the space calibration module and a U-Net architecture with 3D convolutional layers to improve the image quality of diffractive multispectral imaging without requiring complex equipment modifications or large amounts of data. A diffractive multispectral imaging system is established by designing and manufacturing one diffractive lens and four refractive lenses, whose monochromatic aberration is carefully corrected to improve imaging quality. The mean peak signal-to-noise ratio and mean structural similarity index of the reconstructed multispectral images are improved by 3.33 dB and 0.08, respectively, a clear image-quality improvement over a typical Unrolled Network algorithm. The new algorithm, with its strong space calibration ability and high imaging quality, has great application potential in diffractive lens spectroscopy and paves the way for complex practical diffractive multispectral image sensing.

1. Introduction

Diffraction multispectral imaging typically uses binary optical diffractive elements, which are axially dispersive, to realize spectral separation and optical imaging, attracting great interest among researchers in optical sensing applications such as gas sensing [1], industrial detection [2,3], and security sensing [4]. A diffractive lens, the critical optical element of diffraction multispectral imaging, disperses the spectra of incident light, and different spectral components are separated at different imaging positions along the optical axis. Since Denise M. Lyons reported the diffractive optic image spectrometer (DOIS) system in 1995 [5], scholars have carried out many studies on diffractive spectral imaging [6,7,8]. In 1999, Michele Hinnrichs et al. proposed a diffractive hyperspectral system for gas sensing with a camcorder-sized prototype, and in 2003 they further realized dual-band mid-wavelength infrared/long-wavelength infrared hyperspectral imaging using a single lens and a single dual-band focal plane array [9]. Neelam Gupta et al. developed a diffraction imaging spectroscopy system by using a stepper motor on a linear rail, confirming the detection of surface contaminants [2]. Phuong-Ha Cu-Nguyen et al. demonstrated a confocal hyperspectral sensing system utilizing diffractive optical elements and a tunable membrane fluidic lens [10], and further designed a highly compact tunable hyperchromatic lens system for the wavelength range of 450–900 nm [11]. These reported diffractive multispectral imaging systems utilize a stepper motor or a tunable fluidic lens to adjust the image position and obtain multispectral images in the visible and infrared spectrum, but they show low spectral resolution and cannot meet the need for more accurate and more sensitive multispectral diffractive imaging.
With the rapid development of artificial intelligence, many researchers have turned to high-accuracy multispectral reconstruction of diffractive images by introducing AI algorithms, thereby avoiding complex equipment modifications and extensive data processing [12,13,14]. Zhang Ming-qi et al., following the path of traditional methods, proposed an improved inverse filtering restoration algorithm to solve the ill-posed inverse filtering problem of diffraction spectroscopy imaging [15]. Daniel S. Jeon and coworkers proposed the use of a neural network algorithm in spectral image reconstruction to improve the spectral resolution and imaging quality [16]. F. S. Oktem developed a model-based fast reconstruction algorithm in the extreme ultraviolet regime by combining data-driven and model-driven methods to improve the reconstruction accuracy [17]. Zhao Hai-bo designed a dual-channel visible and near-infrared diffraction computational imaging system that utilizes additional calibration information to improve the spatial resolution of the DOIS for complex imaging scenery [18]. These diffraction spectral methods use end-to-end algorithms that are accurate and efficient. However, they require substantial real-world data and sophisticated calibration methods to meet the precision demands of neural networks. On the other hand, hardware improvements of the multispectral diffractive imaging system normally entail larger amounts of data, higher economic expense, and a more complex system due to the use of more cameras and spiral diffractive lenses; as a result, they cannot meet the requirements of miniature diffractive multispectral imaging technology.
Furthermore, spatial variation, involving magnification, rotation, and displacement, is an important factor in the multispectral diffractive imaging system; it leads to a complex spatially variant point spread function (PSF) that dramatically degrades the imaging quality. Michele Hinnrichs et al. resampled acquired spectral images with different magnifications, and spectral image reconstruction was then carried out at the same magnification [6]. Qiang Sun et al. added an optical zoom module to the diffractive multispectral imaging system to avoid the magnification effect [19]. These reported works utilize precise PSF estimation, large training datasets, and novel optical structures to enhance the accuracy of multispectral reconstruction in various diffractive multispectral imaging applications. However, complex measurements such as image capture, registration, and color correction are necessary for PSF estimation, and large training datasets are necessary for hardware improvements to achieve high-accuracy calibration of targeted images. These traditional methods suffer from complex manual procedures and, again, cannot meet the development trend of diffractive multispectral imaging. Thus, how to achieve convenient, high-accuracy diffraction spectroscopy imaging that overcomes complex space-variation factors and algorithmic limitations in complex sensing remains an open question.
In this paper, we propose a diffractive hyperspectral optical imaging system with a hybrid space calibrated 3D network “HSC3D” that addresses complex optical sensing with a spatially variant PSF. Our HSC3D algorithm utilizes a 3D U-Net structure with 3D convolutional layers, constructed by our recently published method [20], combined with a space calibration module to improve the quality of the reconstructed images. Multispectral data are simulated through the forward process of diffractive optical imaging and further corrected by using a fabricated calibration module. The diffraction multispectral imaging system is composed of one diffractive lens and four refractive lenses, whose monochromatic aberration is designed and corrected to improve imaging quality. The magnification and rotation variant effects of the measured eight-channel multispectral images are corrected by the space calibration module and then passed to the 3D U-Net to reconstruct the multispectral images accurately. The mean peak signal-to-noise ratio (MPSNR) and the mean structure similarity index measure (MSSIM) of the reconstructed multispectral images are calculated and discussed. The effectiveness of our HSC3D algorithm is also analyzed and compared with a typical Unrolled Network.
This article proceeds as follows. In Section 2, we introduce the hybrid space calibrated 3D Network. In Section 3, we present our established diffractive multispectral imaging system. In Section 4, we analyze the reconstruction of eight-channel multispectral images and compare it with other algorithms. In Section 5, the conclusions of the proposed approach and directions for future work are given.

2. Experimental Methods

Diffraction multispectral imaging typically uses diffractive lenses to disperse light along the optical axis to obtain multispectral information. According to the dispersion property of diffractive lenses, $f_\lambda = \lambda_0 f_0 / \lambda$, incident light of longer wavelengths is imaged at the front image plane, while light of shorter wavelengths is imaged at the back image plane [21]. The incident light travels through the diffractive lens and disperses along the optical axis, forming narrowband multispectral images. Multispectral images at different wavelengths can be obtained by moving the image plane over multiple measurements. Figure 1a shows the ideal imaging process of the diffractive multispectral imaging system. When the measurement plane is adjusted to different positions $z_1$, $z_2$, and $z_3$ along the optical axis, the multispectral images at different wavelengths show identical dimensions ($D_1 = D_2 = D_3$) and rotation angles ($\theta_1 = \theta_2 = \theta_3$). In contrast, under the magnification effect of the real multispectral imaging process in Figure 1b, a larger wavelength produces a smaller image dimension ($D_1 < D_2 < D_3$), according to the transverse magnification of the system [6]. Figure 1c shows the rotation effect of the real multispectral imaging process, where different rotation angles ($\theta_1 \neq \theta_2 \neq \theta_3$) emerge among the multispectral images along the optical axis. In addition, as shown in Figure 1d, practical multispectral diffractive imaging presents more complex spatial variations combining the magnification and rotation effects, whose multispectral images have both different dimensions ($D_1 < D_2 < D_3$) and different rotation angles ($\theta_1 \neq \theta_2 \neq \theta_3$). The multispectral imaging information is related to the intensity of each spectral component and can be calculated as follows:
$$g(m,n) = \sum_{\lambda} \sum_{\xi,\zeta} I(\xi,\zeta,\lambda)\, H(m-\xi,\, n-\zeta,\, \lambda) + \eta,$$
where $g(m,n)$ represents the spectral image, and $m$ and $n$ denote the center of each pixel in the $x$ and $y$ directions. $I(\xi,\zeta,\lambda)$ represents the original spectral information, $H$ represents the PSF of the diffractive lens, and $\eta$ denotes noise. The PSF of multispectral diffractive imaging can be calculated by using the method established in [20]:
$$H(x,y,\lambda) = \left| \frac{1}{\lambda f}\, e^{\frac{ik}{2f}\left(x^2 + y^2\right)} \iint P(s,t,\lambda)\, e^{\frac{ik}{2f}\left(s^2 + t^2\right)}\, e^{-\frac{ik}{f}\left(xs + yt\right)}\, \mathrm{d}s\, \mathrm{d}t \right|^2,$$
where $P(s,t,\lambda)$, $(x, y)$, $(s, t)$, $f$, $\lambda$, and $k$ denote the incident light field, the spatial coordinates in the plane of the image sensor, the spatial coordinates in the plane of the diffractive lens, the distance between the lens and the sensor, the wavelength, and the wave number, respectively. Considering the space-variant effect in practical multispectral diffractive imaging, the calibrated spectral image $y(m,n)$ can be expressed by
$$y(m,n) = \sum_{\lambda} \sum_{\xi,\zeta} H(m-\xi,\, n-\zeta,\, \lambda)\, K(\lambda, z)\, I(\xi,\zeta,\lambda) + \eta,$$
where the space variation matrix $K(\lambda, z)$ is related to the wavelength $\lambda$ and the focus position $z$. For a specific wavelength $\lambda_i$ at a specific position $z_i$ along the optical axis, the space variation matrix $K$ can be expressed as a function of $x_{max}$ and $y_{max}$,
$$K(\lambda_i, z_i) = f(x_{max}, y_{max}),$$
where $x_{max}$ and $y_{max}$ denote the maximum values of the spatial coordinates in the plane of the image sensor. The dimension $D$ of the multispectral image at a specific wavelength $\lambda_i$ satisfies $D = 4 x_{max} y_{max}$. Furthermore, the reconstruction of multispectral images can be regarded as a 2D multichannel deconvolution problem and can be formalized as
$$\hat{I}_\lambda = \underset{I}{\arg\min}\; \frac{1}{2} \left\| y - KHI \right\|_2^2 + \lambda R(I),$$
where $R(I)$ is a regularization term that serves as a constraint and is used to avoid overfitting.
To obtain the reconstructed multispectral images, the inverse problem of Equation (4) can be separated into the following two steps. Firstly, the measured images are calibrated to remove spatial variations and noise, and can be expressed as
$$\hat{S}_1 = \underset{S_1}{\arg\min}\; \frac{1}{2} \left\| y - K S_1 \right\|_2^2 + \lambda R(S_1),$$
where S 1 represents the environmentally influenced multichannel spectral images. Secondly, the multichannel spectral information is unmixed by means of a multispectral inverse convolutional reconstruction network and can be formalized as
$$\hat{I}_\lambda = \underset{I}{\arg\min}\; \frac{1}{2} \left\| \hat{S}_1 - HI \right\|_2^2 + \lambda R(I).$$
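To make this two-step formulation concrete, the following NumPy sketch simulates the forward measurement model above: each spectral slice of a scene cube is blurred by its wavelength-dependent PSF, the blurred slices are summed on the sensor, and noise is added. The toy cube, the `forward_model` helper, and the Gaussian PSF stack are hypothetical stand-ins for the calibrated PSFs of the real system, so this is a minimal sketch rather than the authors’ simulation pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward_model(cube, psfs, noise_std=0.01, rng=None):
    """Simulate the diffractive measurement g(m, n): blur each spectral slice
    of the scene cube with its wavelength-dependent PSF, sum the blurred
    slices on the sensor, and add Gaussian noise (the eta term).

    cube : (L, H, W) array, scene intensity I(xi, zeta, lambda)
    psfs : (L, h, w) array, PSF H(x, y, lambda) for each channel
    """
    rng = np.random.default_rng() if rng is None else rng
    blurred = np.stack(
        [fftconvolve(cube[i], psfs[i], mode="same") for i in range(cube.shape[0])]
    )
    g = blurred.sum(axis=0)                          # sum over wavelengths
    return g + rng.normal(0.0, noise_std, g.shape)   # additive noise eta

# Example: an 8-channel cube blurred by Gaussian PSFs that widen with wavelength.
L, H, W = 8, 64, 64
yy, xx = np.mgrid[-15:16, -15:16]
psfs = np.stack(
    [np.exp(-(xx**2 + yy**2) / (2.0 * (1.0 + 0.3 * i) ** 2)) for i in range(L)]
)
psfs /= psfs.sum(axis=(1, 2), keepdims=True)         # normalize each PSF
g = forward_model(np.random.rand(L, H, W), psfs)     # simulated sensor image
```

The two-step reconstruction then inverts exactly this process: the calibration step removes the space-variant factor $K$, and the network unmixes the summed, blurred channels.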
In practice, diffractive multispectral imaging commonly presents the magnification effect (Figure 1b) and the rotation effect (Figure 1c), which arise from the variable transverse magnification, small vibrations of the imaging system, and the complex sensing background [22,23].
To solve this problem, the 3D network is combined with the space calibration module to achieve high-accuracy multispectral image reconstruction. Figure 2 exhibits the architecture of our HSC3D algorithm that is composed of four critical parts, including magnification and rotation calibration, intensity calibration, denoising, and 3D U-Net reconstruction.
  • Magnification and rotation calibration: The magnification variation α of diffractive multispectral imaging at the focal lengths of different wavelengths is calibrated through simulation of multispectral training data. Also, the random rotation angle difference β introduced by the complex imaging process of the diffractive multispectral system (Figure 1c) is input into the data preprocessing step to improve the robustness of the network.
  • Intensity calibration: Intensity calibration is applied to the multispectral images at different wavelengths to obtain a spectral profile close to the final recovered image. The variation in intensity is caused by small fluctuations of the light source and the transmission deviation between different wavelength channels.
  • Denoising: Noise, an important factor in diffractive multispectral imaging, causes differences between simulated and measured information; it includes environmental noise, dark current, photon noise, readout noise, and analog-to-digital converter (ADC) noise. Google’s MAXIM model is used as a preprocessor to remove the noise of the aliased images.
  • 3D U-Net: The calibrated diffractive multispectral images are trained by the 3D U-Net to reconstruct the multispectral images. The 3D U-Net is composed of an encoding module and a decoding module based on the U-Net framework. The encoder consists of a down-sampling and feature extraction module that transforms the input 3D multispectral image into a multichannel feature map. The decoding module, with an up-sampling and image reconstruction module, reduces the multichannel 4D feature tensor back to the 3D multispectral image. Both the feature extraction and image reconstruction modules are made up of a norm layer, a 3D convolution layer, a rectified linear unit (ReLU), and a simplified channel attention (SCA) layer. The 3D convolution layer and the SCA layer capture 3D features and adjust the weights between adjacent spectral channels to reconstruct the diffractive multispectral images (a minimal sketch of such a block is given after this list).
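For illustration only, the PyTorch sketch below shows one plausible feature-extraction block of the kind just described (norm layer, 3D convolution, ReLU, and simplified channel attention). The channel counts, the residual connection, and the exact SCA formulation (global pooling followed by a 1×1×1 convolution) are our assumptions, not the published network definition.

```python
import torch
import torch.nn as nn

class SimplifiedChannelAttention(nn.Module):
    """SCA: global average pooling followed by a 1x1x1 convolution produces
    per-channel weights that rescale the feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.proj = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x):
        return x * self.proj(self.pool(x))

class FeatureBlock3D(nn.Module):
    """One encoder/decoder block: norm -> 3D conv -> ReLU -> SCA, with a
    residual path so the block refines rather than replaces its input."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.InstanceNorm3d(channels),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            SimplifiedChannelAttention(channels),
        )

    def forward(self, x):
        return x + self.body(x)

# A (batch, features, spectral, height, width) tensor, e.g., 8 spectral planes.
x = torch.randn(1, 16, 8, 64, 64)
y = FeatureBlock3D(16)(x)   # output keeps the input shape
```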
The aforementioned network structure was implemented in Python. Numerical experiments were conducted on a server with an Intel(R) Xeon(R) Platinum 8255C CPU and two RTX 3080 (10 GB) GPUs, running Ubuntu 20.04.4 LTS. The model was constructed with the PyTorch framework, version 1.11.0, guided by the open-source BasicSR library [24]. In this way, our HSC3D algorithm combines the advantages of the space calibration module and the 3D U-Net, and is able to learn the spatial and intensity calibration factors and obtain the network parameters automatically.

3. Diffractive Multispectral Imaging System

The diffractive multispectral optical imaging system is designed and fabricated to obtain multispectral information. As shown in Figure 3, the experimental diffraction multispectral imaging system consists of one diffractive lens and four refractive lenses, whose monochromatic aberration has been carefully corrected to achieve high-quality imaging. The diffractive lens is fabricated by photolithography, and its microscopic surface profile is measured by a LuphoScan 50 with an accuracy of 2 nm using a non-contact scanning method. The measured profile of the fabricated diffractive lens in Figure 3a shows that around 78.89% of the fabrication deviation is less than 0.1 µm, which keeps the wave aberration caused by these deviation errors below λ/10. From Figure 3b, the simulated modulation transfer function (MTF) of our diffractive multispectral optical imaging system is higher than 0.76 for wavelengths ranging from 510 to 580 nm, demonstrating the high imaging quality of the system.
The experimental schematic of the diffractive hyperspectral optical imaging system is shown in Figure 3c. The uniformly illuminated incident light from the monochromator (Omno150300) and integrating sphere (NBT-JF-150m) passed through the USAF1951 calibration target and carried the spatial information of the different spectral images. The transmitted light then traveled through a collimator with a focal length of 500 mm at 555 nm and formed parallel light, which was input into the fabricated diffractive multispectral imaging system. Finally, the light intensity of the diffractive multispectral channels was recorded by moving the complementary metal-oxide semiconductor (CMOS) pixel sensor installed on a stepper motor stage (Zolix PA300) with a moving accuracy of around 25 μm. During the experimental process, monochromatic light at an interval of 10 nm was used to calibrate the focus position for the different wavelengths, and wide-spectrum light was used to record the actual images. A 550 nm filter (EO65744) with a spectral width of 80 nm was inserted between the monochromator and the integrating sphere to prevent interference from outside light.

4. Diffractive Multispectral Imaging Experimental Results

Magnification factors α and rotation factors β are computed by using intensity-based automatic image registration methods. The features of the eight-channel images are identified by a difference-of-Gaussian function, where each feature point carries three attributes: position, scale, and direction. The geometric transform matrix of the eight-channel images is then calculated by fitting the paired feature points using the least squares method (a toy sketch of this fitting step follows this paragraph). From Figure 4a, the measured magnification factor αmea (black line) declines with increasing wavelength, as the focus distance of the imaging system grows along the optical axis. The calibrated magnification factor αcal (red line) in Figure 4a is calculated by using an image warping algorithm and is closer to 1, presenting a smaller deviation than the measured magnification factor αmea. In addition, the measured rotation factor βmea (black line in Figure 4b) is corrected by using the image warping algorithm, and the calibrated rotation factor βcal (red line) is closer to 0° at the wavelengths of 570 nm and 580 nm. The measurement errors of the magnification factor αmea and the rotation factor βmea are around 0.0015 and 0.0126, respectively, which can be attributed to the effect of different measurement backgrounds during multispectral diffractive imaging. Furthermore, the calibration errors of the magnification factor αcal and the rotation factor βcal are around 0.0012 and 0.0092, respectively, which can arise from the data processing of the calibration model. As a result, the measurement and calibration errors are quite small, around 0.01, confirming that our proposed calibration module can reduce the space-variant issues.
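As a toy illustration of that registration step, the sketch below fits a similarity transform, i.e., an isotropic scale α and a rotation β, to paired feature points by least squares using the closed-form Procrustes solution. The synthetic point pairs and the `fit_similarity` helper are our assumptions; the paper’s intensity-based registration pipeline is more involved, and this only shows how α and β can be recovered from matched features.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity fit (Procrustes): returns the scale alpha and
    rotation beta (degrees) that best map src points onto dst points.

    src, dst : (N, 2) arrays of matched feature coordinates.
    """
    src_c = src - src.mean(axis=0)              # remove translation
    dst_c = dst - dst.mean(axis=0)
    u, s, vt = np.linalg.svd(dst_c.T @ src_c)   # cross-covariance SVD
    d = np.sign(np.linalg.det(u) * np.linalg.det(vt))
    e = np.array([1.0, d])                      # reflection guard
    r = u @ np.diag(e) @ vt                     # optimal rotation (det = +1)
    alpha = (s * e).sum() / (src_c ** 2).sum()  # optimal isotropic scale
    beta = np.degrees(np.arctan2(r[1, 0], r[0, 0]))
    return alpha, beta

# Synthetic check: points shrunk by 2% and rotated by 1.5 degrees.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, (50, 2))
t = np.radians(1.5)
rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
alpha, beta = fit_similarity(pts, 0.98 * pts @ rot.T)
print(alpha, beta)   # ~0.98 and ~1.5
```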
Furthermore, the MPSNR and MSSIM parameters are calculated to characterize the effectiveness of the space calibration and the imaging quality of the multispectral images. The peak signal-to-noise ratio (PSNR) assesses the similarity between two images and satisfies the following [25]:
$$PSNR = 20 \log_{10}\!\left( \frac{\max(I)}{\sqrt{MSE}} \right),$$
where the mean squared error (MSE) can be calculated by
$$MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i,j) - K(i,j) \right]^2.$$
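For reference, a minimal NumPy implementation of these two metrics might look like the following, assuming images normalized to a peak value of 1 (the peak argument would be 255 for 8-bit data):

```python
import numpy as np

def psnr(img, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB between an image and its reference."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return 20.0 * np.log10(peak / np.sqrt(mse))

# Example: a reference image corrupted by mild Gaussian noise.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
noisy = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
print(psnr(noisy, ref))   # roughly 26 dB for this noise level
```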
Also, the structure similarity index measure (SSIM) captures the effects of brightness, contrast, and structure, and is consistent with the human eye’s subjective perception of image quality [26]. It can be calculated as follows:
$$SSIM(x,y) = \frac{\left( 2\mu_x \mu_y + C_1 \right)\left( 2\sigma_{xy} + C_2 \right)}{\left( \mu_x^2 + \mu_y^2 + C_1 \right)\left( \sigma_x^2 + \sigma_y^2 + C_2 \right)},$$
where $\mu_x$, $\sigma_x$, and $\sigma_{xy}$ can be calculated as follows:
$$\mu_x = \sum_{i=1}^{N} w_i x_i,$$
$$\sigma_x = \left( \sum_{i=1}^{N} w_i \left( x_i - \mu_x \right)^2 \right)^{1/2},$$
$$\sigma_{xy} = \sum_{i=1}^{N} w_i \left( x_i - \mu_x \right)\left( y_i - \mu_y \right).$$
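A compact single-window version of the SSIM formula, using uniform weights $w_i = 1/N$ over the whole image instead of the usual sliding Gaussian window, and the conventional stabilizing constants $C_1 = (0.01 L)^2$ and $C_2 = (0.03 L)^2$ for dynamic range $L$, could read:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM with uniform weights: global means, variances, and
    covariance in place of the usual sliding Gaussian window."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2.0 * mu_x * mu_y + c1) * (2.0 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
noisy = np.clip(ref + rng.normal(0.0, 0.1, ref.shape), 0.0, 1.0)
print(ssim_global(ref, ref))     # identical images give exactly 1.0
print(ssim_global(noisy, ref))   # noise pushes the score below 1
```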
As can be seen from Table 1, the calibrated PSNR values of the multispectral channel images at wavelengths of 520 nm, 530 nm, 540 nm, 550 nm, 560 nm, 570 nm, and 580 nm are 9.95 dB, 10.89 dB, 11.93 dB, 12.71 dB, 13.11 dB, 13.26 dB, and 13.36 dB, respectively, all higher than the original values. Similarly, the calibrated SSIM values of the multispectral channel images at those wavelengths are 0.53, 0.57, 0.64, 0.67, 0.62, 0.65, and 0.68, respectively, also higher than those of the original images. To fully evaluate the multispectral imaging performance, the MPSNR and MSSIM values are also calculated. Compared with those of the original eight-channel multispectral images, the MPSNR and MSSIM values are improved by 0.11 dB and 0.04, respectively. As a result, our calibration model shows good performance in reducing the space-variant issues of the magnification and rotation effects and in increasing the imaging quality of diffractive multispectral imaging.
The calibrated multispectral images are then input into the 3D U-Net to reconstruct the multispectral images. Figure 5a–c show the eight-channel multispectral images reconstructed by the 3D U-Net, the Unrolled Network algorithm, and our HSC3D network, respectively. As shown in Figure 5a, the quality of the eight-channel multispectral images reconstructed by the 3D U-Net alone decreases markedly as the wavelength increases, especially at 580 nm. This trend may come from the magnification and rotation effects shown in Figure 4, in accordance with reported research describing similar decreases in imaging quality [22,23]. The eight-channel multispectral images reconstructed by the Unrolled Network in Figure 5b present a typical ghosting issue, in which the multispectral images erroneously retain residual image content, and exhibit a similar decrease in imaging quality to that in Figure 5a. The spectral images reconstructed by our HSC3D network (Figure 5c) are clearer than those in Figure 5a and avoid the ghosting issues present in Figure 5b. Further, the PSNR and SSIM of the Unrolled Network and our HSC3D network are calculated and analyzed to characterize the imaging performance quantitatively.
As shown in Table 2, the calculated PSNR values of the eight-channel images recovered by our HSC3D network at wavelengths of 510 nm, 520 nm, 530 nm, 540 nm, 550 nm, 560 nm, 570 nm, and 580 nm are 8.90 dB, 9.64 dB, 10.44 dB, 11.19 dB, 11.85 dB, 12.41 dB, 12.84 dB, and 13.00 dB, respectively, all larger than those obtained with the Unrolled Network. Similarly, the calculated SSIM values of the eight-channel images recovered by our HSC3D network are 0.54, 0.58, 0.63, 0.65, 0.65, 0.64, 0.65, and 0.65, respectively, also higher than those obtained with the Unrolled Network. These improvements in the PSNR and SSIM of the recovered images are in accordance with Figure 5b,c, confirming the improved imaging quality relative to the Unrolled Network. Furthermore, the MPSNRhyb and MSSIMhyb values of the eight-channel images recovered by our HSC3D network are around 11.28 dB and 0.62, respectively, which are 3.33 dB and 0.08 higher than those obtained with the Unrolled Network. All of these improvements show that our new algorithm performs better in overall spectral image restoration.

5. Conclusions

In conclusion, we proposed a novel, high-quality, and efficient hybrid space calibrated 3D network, “HSC3D”, for space-variant diffractive multispectral imaging. Our HSC3D network utilizes a 3D U-Net structure combined with space calibration models for magnification and rotation correction to achieve more accurate multispectral restoration. A prototype diffractive multispectral imaging system was designed and manufactured, consisting of one diffractive lens and four refractive lenses, whose monochromatic aberration was carefully corrected to realize high-quality multispectral imaging. The measured eight-channel multispectral images, with space-variant magnification and rotation effects, were calibrated by employing an intensity-based automatic image registration module and then input into the 3D network to reconstruct the multispectral images. The MPSNR and MSSIM values of the reconstructed eight-channel multispectral images obtained with our hybrid space calibrated 3D network are improved by 3.33 dB and 0.08, respectively, in comparison with the typical Unrolled Network algorithm, confirming a clear improvement in image quality. Our algorithm combines the advantages of the space calibration model and a U-Net architecture with 3D convolutional layers to improve the image quality of diffractive multispectral imaging, and thus has no need for large amounts of experimental data or complex equipment modifications. More complex physical variations of the PSF, such as the aberrations of different optical elements, can be analyzed and calibrated in future work to achieve an even wider spectrum and more sensitive diffractive multispectral imaging capability. Moreover, a discrete pixel-by-pixel PSF of the diffractive multispectral imaging array can be studied to advance staring diffractive multispectral imaging by using a grid hybrid space calibrated 3D algorithm. The proposed HSC3D network can also be adapted to more complex practical cases with various spatial variations, such as space imaging [27], microscopy [28], and security camera systems [29].

Author Contributions

Conceptualization, H.F., C.L., and W.Y.; methodology, H.F.; validation, B.G. and X.L.; formal analysis, H.F., C.L., and B.G.; investigation, H.F. and Y.C.; resources, C.L. and W.Y.; data curation, H.F. and H.X.; writing—review and editing, C.L. and W.Y.; supervision, W.Y.; funding acquisition, X.Z., C.L., and W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 62061160488, 62305376, and 62105360, and the China National Key Research and Development Program, grant numbers 2020YFC2200202 and 2020YFC2200200.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data can be obtained from the authors upon reasonable request.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Hinnrichs, M. Remote sensing for gas plume monitoring using state-of-the-art infrared hyperspectral imaging. In Proceedings of the Photonics East (ISAM, VVDC, IEMB), Boston, MA, USA, 1–6 November 1999; p. 12. [Google Scholar]
  2. Smith, D.; Gupta, N. Data collection with a dual-band Infrared hyperspectral imager. In Proceedings of the Optics and Photonics 2005, San Diego, CA, USA, 31 July–4 August 2005; Volume 5881, pp. 40–50. [Google Scholar] [CrossRef]
  3. Blanch-Perez-del-Notario, C.; Geelen, B.; Li, Y.; Vandebriel, R.; Bentell, J.; Jayapala, M.; Charle, W. Compact High-Speed Snapshot Multispectral Imagers in the VIS/NIR (460 to 960 nm) and SWIR Range (1.1 to 1.65 µm) and Its Potential in a Diverse Range of Applications; SPIE: Bellingham, WA, USA, 2023; Volume 12338. [Google Scholar]
  4. Shen, Y.; Li, J.; Lin, W.; Chen, L.; Huang, F.; Wang, S. Camouflaged Target Detection Based on Snapshot Multispectral Imaging. Remote Sens. 2021, 13, 3949. [Google Scholar] [CrossRef]
  5. Whitcomb, K.; Lyons, D.; Hartnett, S. DOIS: A Diffractive Optic Image Spectrometer; SPIE: Bellingham, WA, USA, 1996; Volume 2749. [Google Scholar]
  6. Hinnrichs, M.; Massie, M.A. New approach to imaging spectroscopy using diffractive optics. In Proceedings of the Imaging Spectrometry III, San Diego, CA, USA, 28–30 July 1997; pp. 194–205. [Google Scholar]
  7. Zhao, H.; Liu, Y.; Yu, X.; Xu, J.; Wang, Y.; Zhang, L.; Zhong, X.; Xue, F.; Sun, Q. Diffractive Optical Imaging Spectrometer with Reference Channel; SPIE: Bellingham, WA, USA, 2020; Volume 11566. [Google Scholar]
  8. Gundogan, U.; Oktem, F.S. Computational spectral imaging with diffractive lenses and spectral filter arrays. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 2938–2942. [Google Scholar]
  9. Hinnrichs, M.; Gupta, N.; Goldberg, A. Dual Band (MWIR/LWIR) Hyperspectral Imager. In Proceedings of the 32nd Applied Imagery Pattern Recognition Workshop, Washington, DC, USA, 15–17 October 2003; pp. 73–80. [Google Scholar]
  10. Cu-Nguyen, P.-H.; Grewe, A.; Hillenbrand, M.; Sinzinger, S.; Seifert, A.; Zappe, H. Tunable hyperchromatic lens system for confocal hyperspectral sensing. Opt. Express 2013, 21, 27611–27621. [Google Scholar] [CrossRef] [PubMed]
  11. Cu-Nguyen, P.-H.; Grewe, A.; Feßer, P.; Seifert, A.; Sinzinger, S.; Zappe, H. An imaging spectrometer employing tunable hyperchromatic microlenses. Light Sci. Appl. 2016, 5, e16058. [Google Scholar] [CrossRef] [PubMed]
  12. Bacca, J.; Martinez, E.; Arguello, H. Computational spectral imaging: A contemporary overview. J. Opt. Soc. Am. A 2023, 40, C115–C125. [Google Scholar] [CrossRef] [PubMed]
  13. Huang, L.; Luo, R.; Liu, X.; Hao, X. Spectral imaging with deep learning. Light Sci. Appl. 2022, 11, 61. [Google Scholar] [CrossRef]
  14. Yuan, X.; Brady, D.J.; Katsaggelos, A.K. Snapshot Compressive Imaging: Theory, Algorithms, and Applications. IEEE Signal Process. Mag. 2021, 38, 65–88. [Google Scholar] [CrossRef]
  15. Zhang, M.; Cao, G.; Chen, Q.; Sun, Q. Image Restoration Method Based on Improved Inverse Filtering for Diffractive Optic Imaging Spectrometer. Comput. Sci. 2019, 46, 86–93. [Google Scholar] [CrossRef]
  16. Jeon, D.S.; Baek, S.-H.; Yi, S.; Fu, Q.; Dun, X.; Heidrich, W.; Kim, M.H. Compact snapshot hyperspectral imaging with diffracted rotation. ACM Trans. Graph. 2019, 38, 117. [Google Scholar] [CrossRef]
  17. Oktem, F.S.; Kar, O.F.; Bezek, C.D.; Kamalabadi, F. High-Resolution Multi-Spectral Imaging With Diffractive Lenses and Learned Reconstruction. IEEE Trans. Comput. Imaging 2021, 7, 489–504. [Google Scholar] [CrossRef]
  18. Xie, H.; Zhao, Z.; Han, J.; Xiong, F.; Zhang, Y. Dual camera snapshot high-resolution-hyperspectral imaging system with parallel joint optimization via physics-informed learning. Opt. Express 2023, 31, 14617–14639. [Google Scholar] [CrossRef] [PubMed]
  19. Bin, Y.; Danni, C.; Qiang, S.; Junle, Q.; Hanben, N. Design and Analysis of New Diffractive Optic Imaging Spectrometer. Acta Opt. Sin. 2009, 29, 1260–1263. [Google Scholar] [CrossRef]
  20. Fan, H.; Li, C.; Xu, H.; Zhao, L.; Zhang, X.; Jiang, H.; Yu, W. High accurate and efficient 3D network for image reconstruction of diffractive-based computational spectral imaging. IEEE Access 2024, 12, 120720–120728. [Google Scholar] [CrossRef]
  21. Born, M.; Wolf, E. Chapter VIII—Elements of the Theory of Diffraction. In Principles of Optics, 6th ed.; Pergamon Press: New York, NY, USA, 1980; pp. 370–458. [Google Scholar]
  22. Lohmann, A.W.; Paris, D.P. Space-Variant Image Formation. J. Opt. Soc. Am. 1965, 55, 1007–1013. [Google Scholar] [CrossRef]
  23. Sawchuk, A.A. Space-variant image motion degradation and restoration. Proc. IEEE 1972, 60, 854–861. [Google Scholar] [CrossRef]
  24. Wang, X.; Xie, L.; Yu, K.; Chan, K.C.K.; Loy, C.C.; Dong, C. BasicSR: Open Source Image and Video Restoration Toolbox. 2022. Available online: https://github.com/XPixelGroup/BasicSR (accessed on 24 October 2024).
  25. Bukhari, K.Z.; Wong, J. Visual Data Transforms Comparison; Delft University of Technology: Delft, The Netherlands, 1955. [Google Scholar]
  26. Zhou, W.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  27. Guilloteau, C.; Oberlin, T.; Berné, O.; Dobigeon, N. Hyperspectral and Multispectral Image Fusion Under Spectrally Varying Spatial Blurs—Application to High Dimensional Infrared Astronomical Imaging. IEEE Trans. Comput. Imaging 2020, 6, 1362–1374. [Google Scholar] [CrossRef]
  28. Toader, B.; Boulanger, J.; Korolev, Y.; Lenz, M.O.; Manton, J.; Schönlieb, C.-B.; Mureşan, L. Image Reconstruction in Light-Sheet Microscopy: Spatially Varying Deconvolution and Mixed Noise. J. Math. Imaging Vis. 2022, 64, 968–992. [Google Scholar] [CrossRef] [PubMed]
  29. Janout, P.; Páta, P.; Skala, P.; Bednář, J. PSF Estimation of Space-Variant Ultra-Wide Field of View Imaging Systems. Appl. Sci. 2017, 7, 151. [Google Scholar] [CrossRef]
Figure 1. (a) The schematic diagram of the ideal imaging process of diffractive multispectral imaging, (b) the schematic diagram of the magnification effect of diffractive multispectral imaging, (c) the schematic diagram of the rotation effect of diffractive multispectral imaging, and (d) the schematic diagram of the complex spatial variations combining the magnification and rotation effects of practical diffractive multispectral imaging, where λ1, λ2, and λ3 denote different wavelengths and satisfy λ1 > λ2 > λ3, and D1, D2, and D3 denote the dimensions of the diffractive multispectral images at the different wavelengths.
Figure 2. Experimental method of our hybrid space calibrated 3D network (HSC3D).
Figure 3. (a) The measured microscopic surface profile of the fabricated diffractive lens. (b) The simulated MTF of the diffractive multispectral optical imaging system for wavelengths ranging from 510 nm to 580 nm, where the solid and dashed lines depict the tangential and sagittal results, respectively. (c) The experimental schematic of the diffractive multispectral optical imaging system.
Figure 4. (a) The magnification factors α and (b) rotation factors β of the 8-channel multispectral images, where the black line and red line depict the measured and calibrated results, respectively.
Figure 5. Reconstructed 8-channel (510 nm, 520 nm, 530 nm, 540 nm, 550 nm, 560 nm, 570 nm, and 580 nm) multispectral images from (a) the 3D U-Net, (b) the Unrolled Network, and (c) our HSC3D network.
Table 1. Comparison of PSNR and SSIM between calibrated and original 8-channel multispectral images.
Wavelength (nm) | PSNRmea (dB) | SSIMmea | PSNRcal (dB) | SSIMcal
510 | 9.09 | 0.48 | 9.09 | 0.48
520 | 9.84 | 0.51 | 9.95 | 0.53
530 | 10.77 | 0.55 | 10.89 | 0.57
540 | 11.80 | 0.60 | 11.93 | 0.64
550 | 12.58 | 0.63 | 12.71 | 0.67
560 | 12.97 | 0.61 | 13.11 | 0.62
570 | 13.15 | 0.60 | 13.26 | 0.65
580 | 13.25 | 0.60 | 13.36 | 0.68
Mean value | 11.68 | 0.57 | 11.79 | 0.61
Table 2. Comparison of PSNR and SSIM between Unrolled Network and our HSC3D network algorithms.
Wavelength (nm) | PSNRunr (dB) | SSIMunr | PSNRhyb (dB) | SSIMhyb
510 | 4.03 | 0.44 | 8.90 | 0.54
520 | 7.87 | 0.47 | 9.64 | 0.58
530 | 8.66 | 0.52 | 10.44 | 0.63
540 | 9.08 | 0.53 | 11.19 | 0.65
550 | 9.08 | 0.62 | 11.85 | 0.65
560 | 8.70 | 0.60 | 12.41 | 0.64
570 | 8.19 | 0.57 | 12.84 | 0.65
580 | 7.97 | 0.54 | 13.00 | 0.65
Mean value | 7.95 | 0.54 | 11.28 | 0.62
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
