Article

Retina-like Computational Ghost Imaging for an Axially Moving Target

1 Key Laboratory of Biomimetic Robots and Systems, School of Optics and Photonics, Beijing Institute of Technology, Ministry of Education, Beijing 100081, China
2 Yangtze Delta Region Academy, Beijing Institute of Technology, Jiaxing 314003, China
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(11), 4290; https://doi.org/10.3390/s22114290
Submission received: 9 May 2022 / Revised: 30 May 2022 / Accepted: 2 June 2022 / Published: 5 June 2022
(This article belongs to the Collection Computational Imaging and Sensing)

Abstract

Unlike traditional optical imaging schemes, computational ghost imaging (CGI) reconstructs images from the spatial distribution of the illumination patterns and the light intensity collected by a single-pixel (bucket) detector. Compared with stationary scenes, relative motion between the target and the imaging system in a dynamic scene degrades the reconstructed images. We therefore propose a time-variant retina-like computational ghost imaging method for axially moving targets. The illumination patterns are designed with retina-like structures, and the radius of the foveal region is modified according to the axial movement of the target. By combining the time-variant retina-like patterns with compressive sensing algorithms, high-quality imaging results are obtained. Experimental verification demonstrates the method's effectiveness in improving the reconstruction quality of axially moving targets. The proposed method retains the inherent merits of CGI and provides a useful reference for high-quality GI reconstruction of moving targets.

1. Introduction

Ghost imaging (GI), also known as single-pixel imaging, has attracted broad attention in recent years [1,2,3,4]. A traditional ghost imaging system consists of two optical paths: a reference path and a signal path. In the reference path, the beam propagates freely, and its intensity distribution is recorded by a spatially resolving detector. In the signal path, the beam illuminates the target, and the transmitted or reflected light is collected by a bucket detector without spatial resolution. The image is computed by correlating the measurements of the two paths, rather than from either one alone [5,6]. In 2008, computational ghost imaging (CGI) [7,8] was proposed; it omits the reference path by using a spatial light modulator or digital micromirror device (DMD) to actively generate known, predictable illumination patterns. The single-arm structure improves the practicality of GI in fluorescence imaging [9], terahertz imaging [10,11], multispectral imaging [12,13,14,15,16,17], three-dimensional imaging [18,19,20,21,22], and imaging through scattering media [23,24,25].
For practical applications of GI, the development trend inevitably runs from stationary targets to moving ones. Moving targets pose a greater challenge to GI than stationary ones: when the target moves relative to the imaging system, the reconstructed image suffers both resolution degradation and a decline in image quality. Eliminating the degradation caused by target motion is therefore worthy of further research.
Generally, the relative motion between the target and the imaging system can be divided into two cases: tangential motion perpendicular to the optical axis and axial motion along the optical axis. In 2011, Li et al. [26] introduced a quasistatic approximation of target motion: within a sufficiently short interval, the target can be regarded as stationary. Zhang et al. [27] proposed a Fourier-transform ghost diffraction method to eliminate the image degradation caused by a shaking target. Li et al. [28] successfully reconstructed a tangentially moving target by translationally compensating the light-intensity distribution in the reference path. In 2020, Yang et al. [29] proposed a tracking compensation method: by shifting or rotating the illumination patterns preloaded on the DMD according to a precise estimate of the moving target's motion track, they achieved high-quality reconstruction. Gong et al. [30] investigated the influence of the axial correlation depth of the light field in GI applications. In 2015, Li et al. [31] experimentally reconstructed an axially moving target in a dual-arm GI system based on resizing speckle patterns and velocity retrieval. In 2017, high-order GI theory was applied to investigate the impact of high-order intensity correlations of the light field on axially moving target reconstruction [32]. Compared with tangential motion, axial motion of the target has received relatively little attention, and many existing studies are still based on a dual-arm ghost imaging system.
Inspired by the structure of the human eye, retina-like patterns are designed to balance high resolution against high imaging efficiency [33,34]. In our previous work [35], retina-like patterns were utilized in three-dimensional CGI and proved effective in compressing redundant information and speeding up the imaging process while improving the reconstruction quality of the foveal region of interest. In this paper, we propose a time-variant retina-like computational ghost imaging (VRGI) method that uses retina-like patterns with a changeable foveal-region radius to reconstruct an axially moving target while keeping the target well confined within the light field throughout its movement. There is no need to predict the precise motion track of the target or to adjust the DMD in advance. Meanwhile, compared with a dual-arm GI system, our method is easier to implement and restores the imaging quality of moving targets without extra hardware requirements.

2. Methods

In a CGI system, a series of illumination patterns are projected onto the target object. The reflected or transmitted light intensity of the target is collected by the single-pixel detector. The final image is reconstructed by correlating the information of illumination patterns and their corresponding light-intensity measurements.
The measurement process is described as follows:
$$S_m = \sum_{x, y} U_m(x, y)\, O(x, y) \tag{1}$$
where Um(x,y) represents a sequence of illumination patterns, m is the pattern sequence number, and (x,y) represents the 2D Cartesian coordinates. O(x,y) is the target. Thus, the corresponding collected light intensity is denoted as Sm.
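The measurement process above can be simulated in a few lines; this is a minimal sketch in which the pattern count, pattern size, and toy target are all illustrative assumptions, not values from the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

M, x, y = 1024, 64, 64                        # assumed pattern count and size

patterns = rng.integers(0, 2, size=(M, x, y))  # binary patterns U_m(x, y)
target = np.zeros((x, y))
target[20:44, 20:44] = 1.0                     # toy reflective target O(x, y)

# S_m = sum over (x, y) of U_m(x, y) * O(x, y): one scalar per pattern,
# exactly what a bucket detector without spatial resolution records.
S = np.einsum('mxy,xy->m', patterns, target)
```

Each entry of `S` is a single bucket-detector reading; the image itself never reaches the detector, only these correlated sums.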
When considering the reconstruction algorithm, it has been proven that the compressed sensing (CS) algorithm [36], also known as compressed sampling, has better performance than the conventional second-order correlation algorithm. It provides an alternative approach to data acquisition and compression that reduces the number of required measurements by the aid of the redundant structure of images, and, therefore, has drawn much research attention.
Specifically, the data collection in CS can be described as:
$$y = Ab \tag{2}$$
where y is the measurement vector, A is the vectorized representation of the projected patterns, and b is a vector, which denotes the target scene that is assumed to be sparse. The reconstruction process is to calculate b from the pattern matrix A and the corresponding measurement y.
At present, CS algorithms mainly include l1 minimization [37], greedy algorithms [38], and TV minimization [39]. The total variation (TV) regularization algorithm, which transforms the image reconstruction problem into a constrained optimization problem, is selected in this paper as our reconstruction algorithm, as it performs better in preserving the boundaries or edges of the reconstructed images [40]. By exploiting TV solver, a sparse image may still be reconstructed with a relatively high quality even though the number of measurements is deemed insufficient.
Mathematically, c represents the gradient of the image, and G is the gradient calculation matrix. $U \in \mathbb{R}^{M \times a}$ is the illumination pattern matrix (with M being the number of projected patterns and each pattern comprising a = x × y pixels); $O \in \mathbb{R}^{a \times 1}$ represents the target scene rearranged as a vector; and $S \in \mathbb{R}^{M \times 1}$ is the collected light intensity. The $\ell_1$ norm is used to calculate the total variation of the image, so the optimization model becomes:
$$\min \|c\|_{\ell_1} \quad \text{s.t.} \quad GO = c, \quad UO = S \tag{3}$$
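As a toy illustration of TV-regularized compressive reconstruction, the sketch below solves a penalized form of the problem by plain subgradient descent. This is not the dedicated TV solver used in the paper; the scene size, sampling ratio, and regularization weight are all assumptions chosen only to keep the example small and fast:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16                                        # small n x n toy scene for speed
a, M = n * n, (n * n) // 2                    # undersampled: M < a measurements

b_true = np.zeros((n, n))
b_true[4:12, 4:12] = 1.0                      # piecewise-constant target
U = rng.integers(0, 2, size=(M, a)).astype(float)  # binary pattern matrix
S = U @ b_true.ravel()                        # bucket measurements

def tv_subgrad(img):
    # Subgradient of anisotropic TV: sum of |horizontal| + |vertical| diffs.
    gx = np.sign(np.diff(img, axis=1))
    gy = np.sign(np.diff(img, axis=0))
    g = np.zeros_like(img)
    g[:, 1:] += gx; g[:, :-1] -= gx
    g[1:, :] += gy; g[:-1, :] -= gy
    return g

b = np.zeros(a)
lam = 0.5                                     # assumed regularization weight
step = 1.0 / np.linalg.norm(U, 2) ** 2        # step from the spectral norm
for _ in range(500):
    grad = U.T @ (U @ b - S) + lam * tv_subgrad(b.reshape(n, n)).ravel()
    b -= step * grad
```

Even with half as many measurements as pixels, the TV term steers the iterate toward a piecewise-constant image consistent with the data, which is the property the paper exploits to keep the sampling ratio low.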
The retina has high resolution in the foveal region and low resolution in the edge region. This characteristic can effectively enhance the quality of the reconstructed images in the foveal region of interest. On this basis, the CGI method with retina-like patterns (or the so-called RGI method) uses retina-like patterns with space-variant resolution instead of random patterns with uniform resolution.
With reference to our previous studies on retina-like structures, the geometry of the retina-like patterns can be expressed as follows:
$$\begin{cases} r_{p+1} = r_p \times \varepsilon \\[2pt] \varepsilon = \dfrac{1 + \sin(\pi/Q)}{1 - \sin(\pi/Q)} \\[2pt] r_1 = \dfrac{r_0}{1 - \sin(\pi/Q)} \\[2pt] \theta_q = q \times \dfrac{2\pi}{Q} \quad (q = 1, 2, 3, \dots, Q) \\[2pt] \zeta_p = \log_\varepsilon(r_p) = \log_\varepsilon(r_1) + p - 1 \quad (p = 1, 2, 3, \dots, P) \end{cases} \tag{4}$$
Each pattern comprises P rings, and each ring consists of Q sectors. $r_p$ is the radius of the pth ring, $\theta_q$ is the angle of the qth sector, and $\varepsilon$ is the radial growth coefficient. In the foveal area, retina-like patterns are nested with high-resolution speckles identical to random patterns, whereas the peripheral area carries low-resolution speckles whose resolution decreases with increasing eccentricity.
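The geometry above can be computed directly; in this sketch the values of r0, P, and Q are arbitrary illustrative choices:

```python
import numpy as np

def retina_geometry(r0, P, Q):
    """Ring radii and sector angles of a retina-like pattern (geometry above)."""
    s = np.sin(np.pi / Q)
    eps = (1 + s) / (1 - s)                   # radial growth coefficient
    r1 = r0 / (1 - s)                         # radius of the innermost ring
    radii = r1 * eps ** np.arange(P)          # r_p = r_1 * eps^(p-1)
    thetas = np.arange(1, Q + 1) * 2 * np.pi / Q  # sector angles theta_q
    return radii, thetas, eps

radii, thetas, eps = retina_geometry(r0=2.0, P=10, Q=32)
```

Because consecutive ring radii differ by the fixed factor ε, sector area grows with eccentricity, which is exactly the space-variant resolution the pattern is designed to provide.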
Compared with imaging a stationary target, the image degradation caused by target motion is a nonnegligible problem in CGI research on moving targets. For axially moving targets, the RGI method can keep the target well confined within a spatially fixed light field during the movement, instead of adjusting the whole light field as the target moves. The target can always be illuminated by the foveal region of the retina-like patterns.
However, the projection size of the foveal region of the retina-like patterns changes at different positions along the axis. The farther away from the optical system it is, the larger the projection area. To maintain the stability of the moving target in the light field of illuminated patterns, the proportion of the projection area of the foveal region occupied by the target should be kept unchanged as much as possible as the target moves back and forth along the axis. The foveal region of the illuminated retina-like patterns should be larger when the target is moving closer to the imaging system.
Focusing on the reconstruction of axially moving targets, the VRGI method adjusts the radius of the foveal region of the retina-like patterns, according to the specific motion parameters of the target and the information of the scene, to maintain the proportion of the target in the foveal region during the movement.
Based on a retina-like pattern with space-variant resolution, as shown in Equation (4), we change the radius r0 of the foveal region according to the axially moving velocity v, the moving distance (expressed as v × t) of the target, the size of the target (expressed as d), and D, which is the distance between the target and the optical projection system. The mathematical formula can be expressed as follows:
$$r_0(i) = r_0(1) + v \times t(i) \times (d/D) \tag{5}$$
where r0(i) is the radius of the foveal region of the i-th retina-like pattern, v is the axial velocity of the target, t(i) is the moving time corresponding to the i-th speckle, d is the size of the target, and D is the distance between the target and the projection system. We set different increments according to different sampling measurements so that the radius of the foveal region of the retina-like pattern can be adjusted evenly during the process of movement.
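The radius schedule is a one-line computation; the sketch below uses illustrative numbers loosely echoing the experiment (d/D = 0.0625, v = 2 mm/s, five evenly spaced pattern times), all of which are assumptions for the example:

```python
def foveal_radius_schedule(r0_init, v, times, d, D):
    """r0(i) = r0(1) + v * t(i) * (d / D), the formula above."""
    return [r0_init + v * t * (d / D) for t in times]

# Assumed values: initial radius 21, v = 2 mm/s, pattern times 0.5 s apart.
times = [0.5 * i for i in range(5)]
radii = foveal_radius_schedule(21, v=2.0, times=times, d=1.0, D=16.0)
```

Each projected pattern thus gets a slightly larger foveal radius as the target approaches the imaging system, keeping the target's share of the foveal projection roughly constant.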

3. Experimental Results

To implement the proposed method, we built the experimental setup shown in Figure 1. An LED light source operating at 400~750 nm (@20 W) uniformly illuminates the DMD (Texas Instruments DLP Discovery 4100 development kit, Texas Instruments, US). The maximum binary pattern refresh rate of the DMD is up to 22 kHz, the size of the DMD is 1024 × 768 pixels, and the micromirror pitch is 13.68 μm. The DMD projects preloaded patterns onto the target through a projection lens. The target in the experiment is a symmetrical Chinese character printed on paper. It is placed on a one-dimensional guide rail and moves back and forth along the optical axis at a constant speed, v. The distance and speed of the movement are controlled by a stepper motor. A photodetector (Thorlabs PDA36A, Newton, NJ, USA) with an active area of 13 mm2 functions as a bucket detector and measures the reflected light intensities of the target. A data acquisition board (PICO6404E, Pico Technology, Rowley Park, UK) acquires and transfers the measured intensities to a computer for reconstruction.
In our experiment, the resolution of the retina-like patterns is 64 × 64, and the ratio d/D of the target size d to the distance D between the target and the optical projection system is set at 0.0625. The target moves at a uniform velocity toward the optical system along the optical axis. The total moving distance is 5 mm. To compare the reconstruction results of the axially moving target at different v values, we set the moving velocities as 0.5 mm/s, 1 mm/s, 1.5 mm/s, and 2 mm/s.
According to Equation (5), retina-like patterns with uniformly varying foveal-region radii are generated according to the required number of measurements. The radius of the foveal region varies within the range [21, 23]. We set the projection time of the illumination patterns equal to the motion time of the target, ensuring that the target does not move beyond the illumination field.
Since the compressed sensing algorithm enables the GI reconstruction of an N-pixel image with far fewer measurements than N, we set the sampling numbers as 1024, 1229, 1434, and 1638 here. The experiments are performed with these four sampling measurements and different moving velocities, and the reconstruction results retrieved by the TV regularization algorithm are shown in Figure 2.
The experimental results compare the VRGI with variable foveal region radii and the traditional RGI method with an unchanged foveal region radius (a minimum value of 21, a middle value of 22, and a maximum value of 23). For the 1024, 1229, 1434, and 1638 measurements, all the reconstructed images become blurred with increasing moving velocity, but generally, VRGI outperforms traditional RGI no matter how fast the target moves along the optical axis, with a more distinguishable and detailed structure of the character.
For quantitative comparison, the peak signal-to-noise ratio (PSNR) [41] was used as the criterion. The PSNR is defined as follows:
$$\begin{cases} \text{PSNR} = 10 \log_{10} \dfrac{(2^k - 1)^2}{\text{MSE}} \\[2pt] \text{MSE} = \dfrac{1}{a} \displaystyle\sum_{x, y} \left( O(x, y) - O'(x, y) \right)^2 \end{cases} \tag{6}$$
where k represents the number of bits, and O(x,y) and O′(x,y) represent the original image and reconstructed image, respectively. MSE stands for the mean square error.
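This metric is straightforward to compute; the toy images below are assumptions for illustration only:

```python
import numpy as np

def psnr(original, reconstructed, k=8):
    """PSNR in dB between a k-bit original and its reconstruction."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10 * np.log10((2 ** k - 1) ** 2 / mse)

img = np.full((64, 64), 100.0)
rec = img + 5.0                # uniform error of 5 grey levels -> MSE = 25
# PSNR = 10*log10(255^2 / 25) = 20*log10(51) ≈ 34.15 dB for k = 8
```

Higher PSNR means the reconstruction is closer to the original; for 8-bit images, values above roughly 20 dB correspond to visually recognizable results.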
Figure 3 shows the quantitative comparison of PSNR values for 1024, 1229, 1434, and 1638 measurements. Generally, reconstruction quality decreases roughly linearly with moving velocity: the faster the target moves, the more the image degrades, with inevitable blurring and a higher noise level. However, changing the foveal-region radius r0 according to the movement reduces the negative influence of the image degradation caused by target motion. The PSNR values of the proposed method are clearly higher than those of the other three groups of the traditional RGI method.
As shown in Figure 3a, for 1024 measurements, as the moving velocity of the target increases from 0.5 mm/s to 2 mm/s, the PSNR values of the VRGI results decrease only from 21.78 dB to 21.26 dB. For the traditional RGI method with an unchanged foveal-region radius, the best reconstruction quality reaches only 21.29 dB at the slowest velocity and only 18.35 dB when the moving velocity increases to 2 mm/s. The reconstruction quality of VRGI at the fastest velocity is thus equivalent to or even better than that of RGI at the slowest velocity. In our experiment, when the velocity of the moving target increases to 2 mm/s, the reconstruction performance of RGI with r0 = 21 is unsatisfactory for 1638 measurements, such that the target can hardly be identified, as shown in Figure 2. Evidently, 2 mm/s has not reached the velocity limit of the VRGI method, even though it also suffers a quality decline as the target moves faster.
Moreover, patterns with different r0 values show different adaptability to the increase in the target's moving velocity. RGI with r0 = 21 is greatly affected by the velocity increase, while RGI with r0 = 23 is relatively stable. However, this does not mean that the relation between foveal-region size and reconstruction quality is simply linear; the influence of the number of measurements must also be considered. Under low measurement conditions, the PSNR values of RGI images with r0 = 23 change steadily as the velocity increases, but the reconstructed results do not outperform those of small-r0 patterns.
In addition, the TV solver is applied here to decrease the sampling ratio, and we indeed obtain high-quality images under low measurement conditions, as shown in Figure 3a,b. However, when the number of measurements reaches 1638, the redundant computation raises the noise level instead of putting the extra measurements to effective use.
Therefore, a reasonable selection of the variation range of r0 and of the sampling measurements is needed, according to the scene information and target motion parameters in practical applications, to achieve the optimal match.

4. Conclusions

The image degradation caused by relative motion between the target and the imaging system decreases the imaging resolution and affects the reconstruction quality in ghost imaging. In this paper, we proposed and demonstrated a time-variant retina-like computational ghost imaging strategy for axially moving targets to suppress the image degradation caused by target motion in GI applications. In comparison with previous axially-moving-target schemes conducted in a dual-arm system, the proposed method maintains the inherent advantages of computational ghost imaging and reconstructs the target in a single-arm system without extra hardware requirements. The illumination patterns projected by the DMD are designed with retina-like structures, and the radius of the foveal region can be modified according to the target's motion. The experimental results demonstrate that the proposed method reconstructs axially moving targets better than the traditional RGI method, whose retina-like patterns have an unchanged foveal-region size. By reasonably matching the size of the foveal region in the retina-like patterns to the sampling range of measurements, the most effective combination is achievable. Although the proposed method can restrain the motion-induced image degradation to a certain extent, there is still room for optimization. Further improvement can be achieved by adopting a more efficient reconstruction algorithm, as well as by filling the retina-like structure with other forms of illumination patterns that encode more information or sparsity priors of the targets. We believe the proposed method will broaden applications in real-time imaging, remote sensing, unmanned driving, and other fields that need to estimate scene information from images.

Author Contributions

Conceptualization, Y.Z. and J.C.; methodology, Y.Z. and J.C.; validation, Y.Z., H.C. and D.Z.; formal analysis, Y.Z. and H.C.; investigation, B.H.; resources, J.C.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z., H.C. and D.Z.; supervision, Q.H.; project administration, Q.H. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (61871031 and 61875012); Beijing Municipal Natural Science Foundation (4222017); Funding of foundation enhancement program under Grant (2019-JCJQ-JJ-273); Graduate Interdisciplinary Innovation Project of Yangtze Delta Region Academy of Beijing Institute of Technology (Jiaxing), No. GIIP2021-006.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Acknowledgments

The authors thank the editor and the anonymous reviewers for their valuable suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gibson, G.M.; Johnson, S.D.; Padgett, M.J. Single-pixel imaging 12 years on: A review. Opt. Express 2020, 28, 28190–28208. [Google Scholar] [CrossRef]
  2. Lu, T.; Qiu, Z.; Zhang, Z.; Zhong, J. Comprehensive comparison of single-pixel imaging methods. Opt. Lasers Eng. 2020, 134, 106301. [Google Scholar] [CrossRef]
  3. Zhang, D.; Zhai, Y.; Wu, L.; Chen, X. Correlated two-photon imaging with true thermal light. Opt. Lett. 2005, 30, 2354–2356. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Sun, M.; Zhang, J. Single-Pixel Imaging and Its Application in Three-Dimensional Reconstruction: A Brief Review. Sensors 2019, 19, 732. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Pittman, T.B.; Shih, Y.H.; Strekalov, D.V.; Sergienko, A.V. Optical imaging by means of two-photon quantum entanglement. Phys. Rev. A 1995, 52, R3429–R3432. [Google Scholar] [CrossRef] [PubMed]
  6. Bennink, R.S.; Bentley, S.J.; Boyd, R.W. “Two-Photon” Coincidence Imaging with a Classical Source. Phys. Rev. Lett. 2002, 89, 113601. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Shapiro, J.H. Computational ghost imaging. Phys. Rev. A 2008, 78, R061802. [Google Scholar] [CrossRef]
  8. Bromberg, Y.; Katz, O.; Silberberg, Y. Ghost imaging with a single detector. Phys. Rev. A 2009, 79, 53840. [Google Scholar] [CrossRef] [Green Version]
  9. Studer, V.; Bobin, J. Compressive Fluorescence Microscopy for Biological and Hyperspectral Imaging. Proc. Natl. Acad. Sci. USA 2012, 109, E1679–E1687. [Google Scholar] [CrossRef] [Green Version]
  10. Zanotto, L.; Piccoli, R. Single-pixel terahertz imaging: A review. Opto.-Electron. Adv. 2020, 3, 200012. [Google Scholar] [CrossRef]
  11. Olivieri, L.; Gongora, J.T.; Peters, L.; Cecconi, V.; Cutrona, A.; Tunesi, J.; Tucker, R.; Pasquazi, A.; Peccianti, M. Hyperspectral terahertz microscopy via nonlinear ghost imaging. Optica 2020, 7, 186–191. [Google Scholar] [CrossRef] [Green Version]
  12. Li, Z.; Suo, J.; Hu, X.; Deng, C.; Fan, J.; Dai, Q. Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation. Sci. Rep. 2017, 7, 41435. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Bian, L.; Suo, J.; Situ, G.; Li, Z.; Fan, J.; Chen, F.; Dai, Q. Multispectral imaging using a single bucket detector. Sci. Rep. 2016, 6, 24752. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Rousset, F.; Ducros, N.; Peyrin, F.; Valentini, G.; Andrea, C.D.; Farina, A. Time-resolved multispectral imaging based on an adaptive single-pixel camera. Opt. Express 2018, 26, 10550–10558. [Google Scholar] [CrossRef]
  15. Huang, J.; Shi, D. Multispectral computational ghost imaging with multiplexed illumination. J. Opt. 2017, 19, 75701. [Google Scholar] [CrossRef] [Green Version]
  16. Huang, J.; Shi, D.; Meng, W.; Zha, L.; Yuan, K.; Hu, S.; Wang, Y. Spectral encoded computational ghost imaging. Opt. Commun. 2020, 474, 126105. [Google Scholar] [CrossRef]
  17. Duan, D.; Xia, Y. Pseudo color ghost coding imaging with pseudo thermal light. Opt. Commun. 2018, 413, 295–298. [Google Scholar] [CrossRef]
  18. Deng, Q.; Zhang, Z.; Zhong, J. Image-free real-time 3-D tracking of a fast-moving object using dual-pixel detection. Opt. Lett. 2020, 45, 4734–4737. [Google Scholar] [CrossRef]
  19. Soltanlou, K.; Latifi, H. Three-dimensional imaging through scattering media using a single pixel detector. Appl. Opt. 2019, 58, 7716–7726. [Google Scholar] [CrossRef]
  20. Zhang, Z.; Zhong, J. Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels. Opt. Lett. 2016, 41, 2497–2500. [Google Scholar] [CrossRef]
  21. Yang, X.; Zhang, Y.; Yang, C.; Xu, L.; Wang, Q.; Zhao, Y. Heterodyne 3D ghost imaging. Opt. Commun. 2016, 368, 1–6. [Google Scholar] [CrossRef]
  22. Zhang, C.; Guo, S.; Guan, J.; Cao, J.; Gao, F. Three-dimensional ghost imaging using acoustic transducer. Opt. Commun. 2016, 368, 134–140. [Google Scholar] [CrossRef]
  23. Gong, W.; Han, S. Correlated imaging in scattering media. Opt. Lett. 2011, 36, 394–396. [Google Scholar] [CrossRef] [Green Version]
  24. Xu, Y.; Liu, W.; Zhang, E.; Li, Q.; Dai, H.; Chen, P. Is ghost imaging intrinsically more powerful against scattering? Opt. Express 2015, 23, 32993–33000. [Google Scholar] [CrossRef] [PubMed]
  25. Satat, G.; Tancik, M.; Gupta, O.; Heshmat, B.; Raskar, R. Object classification through scattering media with deep learning on time resolved measurement. Opt. Express 2017, 25, 17466–17479. [Google Scholar] [CrossRef] [Green Version]
  26. Li, H.; Xiong, J.; Zeng, G. Lensless ghost imaging for moving objects. Opt. Eng. 2011, 50, 7005. [Google Scholar] [CrossRef]
  27. Zhang, C.; Gong, W.; Han, S. Improving imaging resolution of shaking targets by Fourier-transform ghost diffraction. Appl. Phys. Lett. 2013, 102, 21111. [Google Scholar] [CrossRef] [Green Version]
  28. Li, E.; Bo, Z.; Chen, M.; Gong, W.; Han, S. Ghost imaging of a moving target with an unknown constant speed. Appl. Phys. Lett. 2014, 104, 251120. [Google Scholar] [CrossRef]
  29. Yang, Z.; Li, W.; Song, Z.; Yu, W.; Wu, L. Tracking Compensation in Computational Ghost Imaging of Moving Objects. IEEE Sens. J. 2021, 21, 85–91. [Google Scholar] [CrossRef]
  30. Gong, W.; Han, S. The influence of axial correlation depth of light field on lensless ghost imaging. J. Opt. Soc. Am. B 2010, 27, 675–678. [Google Scholar] [CrossRef]
  31. Li, X.; Deng, C.; Chen, M.; Gong, W.; Han, S. Ghost imaging for an axially moving target with an unknown constant speed. Photon. Res. 2015, 3, 153–157. [Google Scholar] [CrossRef] [Green Version]
  32. Liang, Z.; Fan, X.; Cheng, Z.; Zhu, B.; Chen, Y. Research of high-order thermal ghost imaging for an axial moving target. J. Optoelectron. Laser 2017, 28, 547–552. [Google Scholar]
  33. Phillips, D.B.; Sun, M.J.; Taylor, J.M. Adaptive foveated single-pixel imaging with dynamic supersampling. Sci. Adv. 2017, 3, e1601782. [Google Scholar] [CrossRef] [Green Version]
  34. Hao, Q.; Tao, Y.; Cao, J.; Tang, M.; Cheng, Y.; Zhou, D.; Ning, Y.; Bao, C.; Cui, H. Retina-like Imaging and Its Applications: A Brief Review. Appl. Sci. 2021, 11, 7058. [Google Scholar] [CrossRef]
  35. Zhang, K. Modeling and Simulations of Retina-Like Three-Dimensional Computational Ghost Imaging. IEEE Photonics J. 2019, 11, 1–13. [Google Scholar] [CrossRef]
  36. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  37. Oldenburg, D.W. Inversion of band limited reflection seismograms. In Inverse Problems of Acoustic & Elastic Waves; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1984. [Google Scholar]
  38. Mallat, S.G.; Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41, 3397–3415. [Google Scholar] [CrossRef] [Green Version]
  39. Bian, L.; Suo, J.; Dai, Q.; Chen, F. Experimental comparison of single-pixel imaging algorithms. J. Opt. Soc. Am. A 2018, 35, 78–87. [Google Scholar] [CrossRef]
  40. Li, C. An Efficient Algorithm for Total Variation Regularization with Applications to the Single Pixel Camera and Compressive Sensing. Master’s Thesis, Rice University, Houston, TX, USA, 2011. [Google Scholar]
  41. Liu, H.; Yang, B.; Guo, Q.; Shi, J.; Guan, C.; Zheng, G.; Mühlenbernd, H.; Li, G.; Zentgraf, P.; Zhang, P. Single-pixel computational ghost imaging with helicity-dependent metasurface hologram. Sci. Adv. 2017, 3, e1701477. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Experimental setup.
Figure 2. Reconstruction of an axially moving target by RGI and VRGI with different measurements and different velocities.
Figure 3. PSNR values of RGI and VRGI images: (a) PSNR of RGI and VRGI images at different velocities with 1024 measurements; (b) PSNR of RGI and VRGI images at different velocities with 1229 measurements; (c) PSNR of RGI and VRGI images at different velocities with 1434 measurements; and (d) PSNR of RGI and VRGI images at different velocities with 1638 measurements.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Zhang, Y.; Cao, J.; Cui, H.; Zhou, D.; Han, B.; Hao, Q. Retina-like Computational Ghost Imaging for an Axially Moving Target. Sensors 2022, 22, 4290. https://doi.org/10.3390/s22114290

