Article

Super-Resolution Remote Imaging Using Time Encoded Remote Apertures

Ji Hyun Nam and Andreas Velten
1 Department of Electrical and Computer Engineering, University of Wisconsin Madison, Madison, WI 53706-1691, USA
2 Department of Biostatistics and Medical Informatics, University of Wisconsin Madison, Madison, WI 53726, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(18), 6458; https://doi.org/10.3390/app10186458
Submission received: 21 July 2020 / Revised: 9 September 2020 / Accepted: 10 September 2020 / Published: 16 September 2020
(This article belongs to the Special Issue Advanced Ultrafast Imaging)

Abstract

Imaging of scenes using light or other wave phenomena is subject to the diffraction limit. The spatial profile of a wave propagating between a scene and the imaging system is distorted by diffraction, resulting in a loss of resolution that is proportional to the traveled distance. We show here that it is possible to reconstruct sparse scenes from the temporal profile of the wave-front using only one spatial pixel or a spatial average. The temporal profile of the wave is not affected by diffraction, yielding an imaging method that can in theory achieve wavelength-scale resolution independent of the distance to the scene.

1. Introduction

1.1. Previous Attempts to Beat the Diffraction Limit of Resolution

The diffraction limit constrains the spatial resolution of any image acquired by a conventional imaging system with a finite aperture. A wave traveling from the scene to the imaging system is degraded by diffraction, so the wavefront reaching the aperture can no longer be used to reconstruct an image of the object at the optimal resolution. This degradation causes a well-known linear drop in resolution with distance; the resulting linear dependence of resolution on imaging distance is known as the Rayleigh criterion.
Over the last several years, much work in different fields has been done to beat the diffraction limit. Astronomical imaging is an area where super-resolution methods are extensively studied; most of the super-resolution methods used in astronomy combine image capturing techniques with image post-processing. Early on, co-addition methods were widely used: multiple pictures are captured successively within a short exposure time and combined with a shift-and-add algorithm [1] or a linear reconstruction scheme [2]. Various regularization-based optimization methods were later developed: Orieux et al. [3] successfully applied quadratic-regularized optimization to "Spectral and Photometric Imaging Receiver" data. Rust et al. [4] demonstrated a high-resolution fluorescence microscopy method (STORM) that operates past the diffraction limit by localizing photoswitchable fluorophores with high accuracy. Jarrett et al. [5] proposed a deconvolution technique, the maximum correlation method, combined with a re-sampling kernel to super-resolve "Wide-field Infrared Survey Explorer" data.
Several image post-processing methods exist in digital imaging applications for constructing high-resolution images from low-resolution inputs. Widely used super-resolution methods include the nonuniform interpolation approach [6], the frequency domain approach [7], regularized optimization (deterministic [8,9,10,11] and stochastic [12,13,14,15]), iterative back-projection [16], and the adaptive filtering approach [17]. These methods produce high-resolution outputs that mitigate the effects of diffraction. However, all of them still suffer from the fundamental limit: the resolution of the super-resolved image depends on the distance and aperture of the imaging system. The further away the region of interest, the lower the achievable resolution.
It is well known that a scattering medium around an object can be used to couple evanescent waves into the far-field, so that images of the embedded object with wavelength-scale resolution can be captured from large distances [18]. Existing approaches require prior characterization of the scattering medium from the location of the object, which makes them unsuitable for many practical applications. Here, we show that scattering in sparse scenes can be leveraged to determine the geometry of the entire scattering scene (object and medium) without prior information, based on just a single-pixel time response of the scene.

1.2. Our Contribution

In this work, we propose a novel super-resolution imaging approach that reconstructs scenes from internally scattered light using their time response. This temporal signal is not subject to diffraction. We show that the resolution of our method is independent of distance and aperture size and that we can reconstruct the scene up to Euclidean congruence. We demonstrate the capabilities of the approach using simulated and experimental data.

2. Time Encoded Remote Apertures (TERA)

2.1. Fundamentals of Imaging

In this section, we review the fundamental process of obtaining an image of an object and define terms to avoid confusion. Consider two objects emitting or reflecting light, $p_1$ and $p_2$, and an imaging system $C$ with an aperture of size $D$ placed at a distance $d$ from the objects (Figure 1a). To infer the positions of the object points $p_i$, $i = 1, 2$ within the scene, the imaging system redirects all rays from each point $p_i$ such that they constructively interfere at exactly one point in the focal plane. In other words, the imaging system evaluates the length of a light ray, as encoded in its phase. In this case, the resolution $r$ of the imaging system is determined by the Rayleigh diffraction limit:
$$ r = \frac{1.22\,\lambda d}{D} $$
where $\lambda$ is the wavelength of the emitted light, which determines how accurately the imaging system can measure the length of each ray, $d$ is the distance between the focal plane of the imaging system and the object, and $D$ is the diameter of the aperture of the imaging system. Figure 1a shows an example of a diffraction pattern (Airy disk) with two targets ($p_i$, $i = 1, 2$) resulting from a circular aperture. Another way of imaging is to measure the time of flight of intensity fluctuations; we refer to these intensity fluctuations as second-order coherence. One can use a short-pulsed laser to illuminate the object and a lens-less time-of-flight detector to observe the back-scattered light and reconstruct an image. The resolution of such an image is described by the Rayleigh criterion, except that the wavelength $\lambda$ is replaced by the time resolution $\tau$ of the time-of-flight detector (more precisely, by the corresponding path length $c\tau$). We call this the transient Rayleigh criterion.
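
To make the two criteria concrete, here is a small numeric sketch (our own illustration; only the formulas come from the text) evaluated for the hardware parameters used later in Section 5:

```python
C = 299_792_458.0  # speed of light, m/s

def rayleigh_resolution(wavelength, distance, aperture):
    """Classical diffraction limit: r = 1.22 * lambda * d / D."""
    return 1.22 * wavelength * distance / aperture

def transient_rayleigh_resolution(tau, distance, aperture):
    """Transient Rayleigh criterion: lambda replaced by c * tau."""
    return 1.22 * (C * tau) * distance / aperture

def depth_resolution(tau):
    """Round-trip range resolution c * tau / 2; note it contains
    neither the distance d nor the aperture D."""
    return C * tau / 2

# 532 nm laser, 4 m standoff, 20 um SPAD, 80 ps response (Section 5)
print(rayleigh_resolution(532e-9, 4.0, 20e-6))            # ~0.13 m
print(transient_rayleigh_resolution(80e-12, 4.0, 20e-6))  # ~5.9 km
print(depth_resolution(80e-12))                           # ~0.012 m
```

The third quantity is the key observation: the round-trip depth resolution implied by an 80 ps response is independent of both $d$ and $D$, while both Rayleigh-type limits degrade linearly with distance. Exploiting this distance-independent temporal information is the basis of the TERA approach described next.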

2.2. TERA Imaging Overview

Consider a situation where one can detect not only directly back-scattered light (first bounce), but also light that has bounced between the objects (second bounce). Such a signal can be obtained by illuminating the scene with a short-pulsed laser and detecting the returning light with a single time-of-flight detector. The temporal signal of this multi-bounce light directly encodes the distance between the two object points in the scene. Figure 1b shows a simplified example of how the information about the scene is encoded in the temporal signal. Suppose the scene contains two small point objects $p_1$ and $p_2$ with a negligibly small diameter $\delta$. The distances from $p_1$ and $p_2$ to the imaging system $C$ are $d_1$ and $d_2$, respectively, and the distance between $p_1$ and $p_2$ is $d_3$. The scene is illuminated by a pulsed laser, and a time-of-flight detector measures the returning light. The time-resolved measurement, or time response, of the scene is illustrated on the left side of Figure 1b. The first impulse is due to light directly reflected from object $p_1$ and appears at $t_1 = 2d_1/c$, where $c$ is the speed of light. Similarly, the second impulse, appearing at $t_2 = 2d_2/c$, corresponds to light reflected from object $p_2$. The last impulse, at $t_3$, is due to light that traveled along two paths: ($C \to p_1 \to p_2 \to C$) and ($C \to p_2 \to p_1 \to C$). These two paths have the same travel distance and thus both appear at $t_3 = (d_1 + d_2 + d_3)/c$. From the three values $t_1$, $t_2$, $t_3$, one can find $d_1$, $d_2$, and $d_3$; in other words, one can completely reconstruct the relative positions of $p_1$ and $p_2$ up to Euclidean congruence.
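
For concreteness, the following minimal sketch (function and variable names are ours, not from the paper) inverts the three peak times of Figure 1b into the three distances:

```python
C = 299_792_458.0  # speed of light, m/s

def two_point_scene(t1, t2, t3):
    """Invert the three peak times of Figure 1b into distances, using
    t1 = 2*d1/c, t2 = 2*d2/c, t3 = (d1 + d2 + d3)/c."""
    d1 = C * t1 / 2
    d2 = C * t2 / 2
    d3 = C * t3 - d1 - d2
    return d1, d2, d3

# Example: d1 = 4.00 m, d2 = 4.10 m, d3 = 0.15 m
t1, t2, t3 = 2 * 4.00 / C, 2 * 4.10 / C, (4.00 + 4.10 + 0.15) / C
print(two_point_scene(t1, t2, t3))  # (4.0, 4.1, 0.15) up to float error
```
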
Now, fix the distance $d_3$ and keep increasing the distances $d_1$ and $d_2$. At some point, $d_3$ falls below the Rayleigh diffraction limit of a conventional imaging system, i.e., the two points $p_1$ and $p_2$ can no longer be visually separated. For the proposed imaging system, however, increasing $d_1$ and $d_2$ only shifts the entire signal in time; the differences between the three time tags $t_1$, $t_2$, $t_3$ are conserved. Therefore, one can still recover the scene ($d_1$, $d_2$, and $d_3$) up to Euclidean congruence. Note that we can find the exact relative distance ($d_3$) between the objects and their distances ($d_1$, $d_2$) from the imaging system, but the orientation (rotation) of the point-cloud remains ambiguous: because we use a single-pixel detector, one can rotate the entire point-cloud about the depth axis, or translate it perpendicular to the depth direction, and record the same time response. This is what we mean by reconstructing the point-cloud up to Euclidean congruence.
The natural question that arises is whether it is possible to reconstruct the scene when the number of objects is larger, and whether the reconstruction is unique, i.e., whether two or more configurations can generate the same time response. Below, we show that the reconstruction of a point-cloud with n objects is possible under some assumptions.
In this section, we use the results of Gkioulekas et al. [19], which show that, when the measured time response of the scene contains a sufficiently rich set of first and second bounces, the point-cloud can be reconstructed up to Euclidean congruence. Here, we follow the same notation as [19].
Now, consider a scenario where the scene consists of multiple point objects (Figure 2a). Let $p_1, \ldots, p_n \in \mathbb{R}^3$ be $n$ point objects in the scene and $p_0 \in \mathbb{R}^3$ be the position of our imaging system (the laser and detector are co-located); see Figure 2b. The graph $S = (p_1, \ldots, p_n)$ is a scene configuration and $K = (p_0, p_1, \ldots, p_n)$ is a total configuration. The scene is illuminated with a pulsed laser, and the detector observes the returning signal from the scene. We define a light path $\alpha_k = [p_0, p_1, \ldots, p_z, p_0]$ ($z \in \mathbb{N}$) as a finite sequence of points along which light has traveled. Note that any path $\alpha$ starts and ends at $p_0$, since our light source and detector are co-located. A first bounce, or ping, is a light path $\alpha_k = [p_0, p_i, p_0]$ for $i = 1, \ldots, n$. A second bounce, or loop, is a light path $\alpha_k = [p_0, p_i, p_j, p_0]$ for $i \neq j$. Let $v_k$ be the length of $\alpha_k$. Our measurement then contains the ensemble $\beta = [v_1, v_2, \ldots, v_k]$, where each $v_k$ is the length of a returned first or second bounce.
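
As a forward-model sketch (our own illustration, assuming ideal noiseless peak detection), the unlabeled ensemble $\beta$ can be generated from a candidate configuration as follows:

```python
import numpy as np
from itertools import combinations

def measurement_ensemble(p0, points):
    """Return the sorted, unlabeled list beta of ping and loop path
    lengths for sensor position p0 (shape (3,)) and scene points
    (shape (n, 3)). The labels (which v_k is a ping or a loop) are
    lost, exactly as in the single-pixel measurement."""
    p0 = np.asarray(p0, dtype=float)
    pts = np.asarray(points, dtype=float)
    pings = [2.0 * np.linalg.norm(p - p0) for p in pts]      # [p0, p_i, p0]
    loops = [np.linalg.norm(pi - p0) + np.linalg.norm(pj - pi)
             + np.linalg.norm(pj - p0)
             for pi, pj in combinations(pts, 2)]             # [p0, p_i, p_j, p0]
    return sorted(pings + loops)
```
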
Next, let $K_5$ be a sub-graph of $K$ with 5 vertices, including $p_0$. If the measurement ensemble $\beta$ contains all pings and loops of $K_5$ that start and end at $p_0$, we say that $K_5$ is contained within $\beta$; see Figure 2c. In this example, the measurement ensemble $\beta$ contains four pings ($\alpha = [p_0, p_i, p_0]$ for $i = 1, \ldots, 4$) and six loops ($\alpha = [p_0, p_i, p_j, p_0]$ for $i, j \in \{1, 2, 3, 4\}$ and $i \neq j$). Next, if the measurement ensemble $\beta$ contains a ping ($\alpha = [p_0, p_4, p_0]$) and three loops ($\alpha = [p_0, p_i, p_4, p_0]$ for $i = 1, \ldots, 3$), we say that $\beta$ allows for trilateration; see Figure 2d.
Finally, we can apply the theorem from Gkioulekas et al. [19], which states that if $K = (p_0, p_1, \ldots, p_n)$ is an unknown configuration and $\beta$ is a measurement ensemble of $K$ that allows for trilateration, then there is a trilateration-based process that reconstructs $K$ up to Euclidean congruence. Readers can find a detailed mathematical proof of this statement in [19]. Note that there exist non-generic (degenerate) point configurations $K$ for which a unique reconstruction is not possible; however, these special configurations occur very rarely [19].

3. Reconstruction Algorithm

In this section, we design a trilateration-based reconstruction algorithm for our imaging system. Algorithms exist (TRIBOND [20] and LIGA [21]) that reconstruct a point-cloud from a list of unassigned edge measurements. TRIBOND is a deterministic algorithm that addresses the unassigned distance geometry problem (uDGP). It has been successfully applied to reconstruct the structure of molecules and nanoparticles using edge distance lists extracted from X-ray or neutron diffraction data. However, we cannot apply the TRIBOND algorithm directly to our data, because our data contain not only first bounces but also second bounces, and these bounces are unlabeled; the algorithm requires modification. Gkioulekas et al. [19] showed that it is possible to reconstruct a point-cloud from unlabeled edge measurements. Here, we modify the TRIBOND algorithm [20] with the method described in [19] to process data acquired by our imaging system. The modified TRIBOND algorithm (see Algorithm 1) consists of two parts: core finding and adding a vertex.

3.1. Core Finding

The first step of the buildup algorithm is to find the core. The core of the embedded point-cloud is an over-constrained set of five points, including the source; see Figure 2c. The core can be broken down into three pieces: the base triangle and two tetrahedra. The base triangle (Figure 2c, $(p_0, p_1, p_3)$) is constructed using two first bounces (Figure 2c, dashed green lines) and one second bounce (Figure 2c, solid red line). Each tetrahedron (Figure 2c, $(p_0, p_1, p_2, p_3)$ and $(p_0, p_1, p_4, p_3)$) uses one first and two second bounces. Because the bounces are not labeled, one has to exhaustively search over all possible first and second bounce pairs to build the base triangle and tetrahedra. Finally, we loop through all of the remaining bounces to find one second bounce that fits the bridge bond $(p_2, p_4)$ between the two free vertices of the tetrahedra. If a correct second bounce is found and the bridge bond is satisfied, the core structure has been found and we can move to "adding a vertex". If the bridge bond is not satisfied, we restart "core finding" and choose another base triangle and tetrahedra by exhaustive search. This process is repeated until the core structure is found.

3.2. Adding a Vertex

After the core is found, the next step is to iteratively add vertices to the core. First, we choose a random tetrahedron from the current structure. We then search over all possible combinations of four distances from the remaining distance pool: three of them form one first and two second bounce distances, and the remaining distance is used to test the bridge bond for the chosen rigid substructure. For instance, in Figure 2d, $(p_0, p_1, p_2, p_3)$ is chosen as the rigid substructure and $p_4$ is the added point, using one first bounce $[p_0, p_4, p_0]$ and two second bounces ($[p_0, p_1, p_4, p_0]$ and $[p_0, p_2, p_4, p_0]$). The second bounce $[p_0, p_3, p_4, p_0]$ is used for the bridge bond check.
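
The geometric core of this vertex-adding step is ordinary three-sphere trilateration plus the bridge-bond test. A sketch under our own naming follows (the paper does not publish code); path lengths are first converted to point-to-point distances as defined in Section 2.2:

```python
import numpy as np

def trilaterate(centers, radii):
    """Intersect three spheres; returns the up-to-two candidate points
    (standard algebraic three-sphere solution)."""
    P1, P2, P3 = (np.asarray(c, dtype=float) for c in centers)
    r1, r2, r3 = radii
    ex = (P2 - P1) / np.linalg.norm(P2 - P1)
    i = ex @ (P3 - P1)
    ey = P3 - P1 - i * ex
    ey /= np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(P2 - P1)
    j = ey @ (P3 - P1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z2 = r1**2 - x**2 - y**2
    if z2 < 0:
        return []                                   # inconsistent distances
    base = P1 + x * ex + y * ey
    return [base + np.sqrt(z2) * ez, base - np.sqrt(z2) * ez]

def add_vertex(p0, pi, pj, pk, ping, loop_i, loop_j, loop_k, tol=1e-6):
    """Place a new vertex from one ping and two loops, with a third
    loop (through pk) serving as the bridge-bond check."""
    p0, pi, pj, pk = (np.asarray(p, dtype=float) for p in (p0, pi, pj, pk))
    r0 = ping / 2.0                                 # |p_new - p0|
    ri = loop_i - np.linalg.norm(pi - p0) - r0      # |p_new - p_i|
    rj = loop_j - np.linalg.norm(pj - p0) - r0      # |p_new - p_j|
    rk = loop_k - np.linalg.norm(pk - p0) - r0      # bridge: |p_new - p_k|
    for cand in trilaterate([p0, pi, pj], [r0, ri, rj]):
        if abs(np.linalg.norm(cand - pk) - rk) < tol:
            return cand                             # bridge bond satisfied
    return None                                     # reject this 4-tuple
```
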
Algorithm 1: Modified TRIBOND
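The full listing of Algorithm 1 appears only as an image in the published article. As a stand-in, the following brute-force sketch (our own reconstruction under the definitions above, not the authors' code) makes the unlabeled-assignment problem concrete for tiny scenes; the core-plus-buildup strategy of Sections 3.1 and 3.2 exists precisely to avoid this enumeration:

```python
import numpy as np
from itertools import combinations, permutations

def reconstruct_bruteforce(beta, n):
    """Brute-force stand-in for Algorithm 1, tractable only for tiny n.
    beta: unlabeled ping/loop path lengths for n scene points. Returns
    an (n+1) x (n+1) pairwise distance matrix with index 0 = sensor."""
    assert len(beta) == n + n * (n - 1) // 2
    pairs = list(combinations(range(1, n + 1), 2))
    for ping_idx in combinations(range(len(beta)), n):    # guess the pings
        d = {i + 1: beta[k] / 2.0 for i, k in enumerate(ping_idx)}
        loops = [beta[k] for k in range(len(beta)) if k not in ping_idx]
        for perm in permutations(loops):                  # assign loops to pairs
            D = np.zeros((n + 1, n + 1))
            for i, di in d.items():
                D[0, i] = D[i, 0] = di
            ok = True
            for (i, j), v in zip(pairs, perm):
                e = v - d[i] - d[j]                       # loop = d_i + d_j + e_ij
                if e < abs(d[i] - d[j]) or e > d[i] + d[j]:
                    ok = False                            # violates triangle bounds
                    break
                D[i, j] = D[j, i] = e
            if ok and embeds_in_r3(D):
                return D
    return None

def embeds_in_r3(D, tol=1e-6):
    """Classical-MDS realizability test: the doubly centered Gram matrix
    of a valid Euclidean distance matrix is PSD with rank <= 3."""
    m = D.shape[0]
    J = np.eye(m) - np.ones((m, m)) / m
    G = -0.5 * J @ (D ** 2) @ J
    w = np.linalg.eigvalsh(G)                             # ascending order
    scale = max(1.0, float(w[-1]))
    return w[0] > -tol * scale and int((w > tol * scale).sum()) <= 3
```

For n = 3 this enumerates at most C(6,3) · 3! = 120 assignments, but the count grows factorially with n; building up from an over-constrained core and trilaterating one vertex at a time is what keeps larger scenes tractable.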

4. Simulations

Here, we test the modified TRIBOND algorithm using simulated data. We generate the data using a simplified version of the transient light transport renderer [22,23], which has been used successfully in many time-of-flight applications [24,25]. As in the previous section, we consider the following scenario. Let $p_1, \ldots, p_n \in \mathbb{R}^3$ be $n$ point objects in the scene and $p_0 \in \mathbb{R}^3$ be the position of the TERA imaging system. Let $d_i$ be the distance from $p_i$ to $p_0$ for $i = 1, \ldots, n$ and $d_{ij}$ be the distance from $p_i$ to $p_j$ for all $i \neq j$; see Figure 3. In this simulation, we assume an exact distance list and do not consider noise. The simulated scene contains $n = 10$ point objects located far away from the imaging system. Figure 3 shows the simulated time response, which contains a mixture of first and second bounces; the first and second bounce signals are also shown separately. The locations of the peaks in the signal are marked with blue triangles. We transform these peak locations into a distance list and use it as input to the modified TRIBOND algorithm. The right side of Figure 3 shows the result of the reconstruction. The reconstruction matches the original point-cloud configuration up to Euclidean congruence; in other words, all of the pair-wise distances were recovered, while the rotation and translation of the point-cloud remain unknown.
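
The conversion from a measured transient to the algorithm's input is a peak-finding step followed by a time-to-length scaling. A minimal sketch follows; the threshold choice, the t = 0 calibration, and the use of scipy.signal.find_peaks are our assumptions, not specified in the paper:

```python
import numpy as np
from scipy.signal import find_peaks

C = 299_792_458.0  # speed of light, m/s

def transient_to_distance_list(histogram, bin_width_s, min_height=None):
    """Extract peak times from a transient histogram and convert them
    to total path lengths v_k = c * t_k (pings and loops, unlabeled).
    Assumes bin 0 corresponds to the laser emission time t = 0."""
    peak_bins, _ = find_peaks(np.asarray(histogram, dtype=float),
                              height=min_height)
    return C * peak_bins * bin_width_s
```
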
To address more general scenarios and numerically evaluate the algorithm, we conduct the following simulation. Our sensor's dark count rate is 10 photons per second and the temporal impulse response of the system is 80 ps (see Section 5 for more details on the experimental hardware). Poisson noise was added to the simulated data. We assume that after-pulsing noise can be avoided by using a fast gated SPAD detector and, therefore, do not add after-pulsing noise in the simulation. The laser power is assumed to be 10 W. We generate sets of simulated scenes with five to twenty diffuse targets with sizes varying from 1 to 8 cm. Reconstructing all of these scenes would not be computationally feasible. To evaluate whether we have sufficient data to reconstruct a scene, we note that, to reconstruct a vertex of a point cloud, it is sufficient to have one first bounce and two second bounce signals from that vertex (see Section 3.2 and Gkioulekas et al. [19] for details). By counting the number of peaks and comparing them to the ground truth pairwise distances, we evaluate which vertices of the scene remain recoverable given correctly identified peaks. Figure 4 shows the results of this statistical analysis. A large number of objects or large object diameters may occlude some light paths and create overlapping peaks, which reduces the recoverability rate. However, a high recoverability rate is achieved for a small number of points with small diameters, since most of the peaks are available. Note that false peaks in the data provide an additional complication to the reconstruction; Section 5 discusses this issue.
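For reference, the stated noise model can be reproduced in a few lines; the 80 ps bin width and the uniform-in-time dark-count assumption are ours:

```python
import numpy as np

def noisy_transient(expected_counts, exposure_s, rep_rate_hz=10e6,
                    bin_width_s=80e-12, dark_count_rate=10.0, seed=0):
    """Apply Poisson shot noise plus dark counts to an ideal transient
    (mean signal photons per bin), matching the simulation assumptions:
    10 cps DCR, no after pulsing. Dark counts arrive uniformly in time,
    so each bin collects DCR * exposure * (its fraction of a repetition
    period), i.e. DCR * exposure * bin_width * rep_rate."""
    rng = np.random.default_rng(seed)
    dark_per_bin = dark_count_rate * exposure_s * bin_width_s * rep_rate_hz
    return rng.poisson(np.asarray(expected_counts, dtype=float) + dark_per_bin)
```
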

5. Proof of Concept Experiments

In this section, we demonstrate the performance of the algorithm using experimental data. The experimental setup is shown in Figure 5a. The imaging system contains an ultra-fast laser (One Five Katana HP amplified diode laser) that emits laser pulses with 50 ps width at a 10 MHz repetition rate. An example of the laser illumination area is shown in Figure 5d. The laser has 1 W emission power at a wavelength of 532 nm. The scene is located around 4 m away from the imaging system. The returning scattered light from the scene is collected by a SPAD (PMD series) with a 20 μm diameter active area. The SPAD has a photon detection efficiency of 49% at a wavelength of 555 nm, a dark count rate (DCR) of 10 photons per second, an after-pulsing probability (AP) of 0.1–3%, and a dead time of 77 ns. The SPAD is coupled with a time-correlated single-photon counter (PicoQuant HydraHarp). The total effective temporal impulse response of our imaging system is 80 ps. The exposure time for each scene capture was set to 2 min. Note that we do not use any lenses in front of the SPAD.
Figure 5b,c,e,f show examples of experimental scenes. The scene is located approximately 4 m away from the imaging system. The circular patches and stars have a size of 4 × 4 cm, and the plane is 8 × 8 cm. First, we test the method using two circular patches. We place two patches 15 cm apart and move them apart up to 35 cm in 1 cm steps (21 different positions); one patch is stationary while the other moves. See Figure 6. The first column shows the actual scene picture. The second column is a reference simulated picture of the scene as seen by a camera with an aperture size of 20 μm: the ground truth scene was captured using a regular camera, and the image was convolved with a blur kernel that corresponds to a 20 μm sensor at the corresponding depth. The third column is the reconstruction of the scene from the acquired data. Finally, the fourth column shows the actual data. In the reconstruction, the two patches are clearly separated. Figure 7 shows a plot of the ground truth (green) and reconstructed (blue) distances; the RMS error between them is 0.03 cm. Next, we test the method using different targets (star, plane, circular patch). Note that the circular patch and plane are white with diffuse surfaces, whereas the stars are brown and made of wood. See Figure 8a,b. The shapes of the star and plane peaks differ slightly from those of the circular patches, but three peaks still exist in the data: stars have sharper, more pronounced peaks, whereas planes have wide peaks. Lastly, Figure 8c shows the reconstruction of three circular patches; the data contain three first-bounce and three second-bounce peaks.
As mentioned in the simulation section, objects can in some cases block light paths, which results in missing peaks in the collected time response. Peaks may also be too weak to detect and be missing for that reason. We can reconstruct vertices that have at least one first and two second bounce distance peaks preserved in the data (Section 3.2). Objects can also have low reflectance or be poorly oriented, which may lead to lower peak intensity; however, as long as the peak is detected, the actual intensity is not relevant. If needed, one can increase the exposure time or laser power to obtain a better signal. Additionally, peaks can overlap with each other. One possible way to address overlapping peaks is to add a post-processing step in which all of the discovered nodes are randomly tested for a potential double peak and added to the distance list for further reconstruction. This is an interesting approach and a subject for further study.
In addition to missing peaks, the collected data may contain incorrect peaks due to background noise from ambient light, dark counts, and after pulsing. Incorrect peaks may also be what we call orphaned peaks: true peaks from a patch for which not enough other peaks are available to reconstruct the patch. A wrong peak can create an invalid vertex in the point-cloud if it satisfies the vertex-adding condition (Section 3.2). The algorithm would then find an incorrect core and most likely only be able to account for a very small fraction of the detected peaks in a small network, leading to an incorrect reconstruction. In this case, the re-initialization that happens routinely as part of the described algorithm can help: because all distances used for the core build-up and vertex-adding parts are randomized, there is a probability that the wrong peak will not be used in the core and will be left over after all of the correct nodes and edges have been identified. A small number of false peaks thus does not prevent a correct reconstruction. A better way to deal with incorrectly detected peaks is an interesting subject for further research.
Our SPAD's dark count rate is 10 photons per second; coupled with a 1 W laser, this dark count rate is negligible in our experiments. The after-pulsing probability is around 0.1–3% and can also be ignored in our experiments. One way to reduce after-pulsing noise is to use fast time-gated SPADs. Gated mode allows a SPAD to be sensitive only during a short gate window of hundreds of picoseconds, and the gating window can be moved in time. By progressively moving the active window through the scene in time, one can largely eliminate after pulsing.

6. Conclusions

In this paper, we present a novel imaging method that does not depend on the fundamental diffraction limit. The method combines hardware and algorithms to break the resolution limit under certain conditions on the scene (a point-cloud assumption). In conventional imaging systems, the fundamental diffraction limit is governed by the size of the aperture, the wavelength, and the distance to the target. With fixed wavelength and distance, our proposed method does not depend on the size of the aperture but instead on the time resolution of the imaging system. The method is robust to changes in surface materials. Currently, the main limitation is that the method is only applicable to point-cloud scenes; such sparse scenes commonly occur in aerial or space imaging scenarios. The paper gives the base theory and introduces a method to use multiply scattered light (second bounces). As a next step, we are exploring the possibility of using multiply scattered light for continuous surfaces, as well as using machine learning to classify objects below the resolution limit using multiply scattered light.

Author Contributions

J.H.N. contributed to the method, implemented the algorithms, developed simulations and conducted experiments. A.V. conceived the method and advised all parts of the project. All authors contributed to writing the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Air Force Research Laboratory through program AFRL-AFOSR (VA-TR-2018-0422).

Acknowledgments

We appreciate the help of Toan Le with the hardware setup, and helpful discussions with Xiochun Liu and Atul Ingle.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AP   After Pulsing
DCR   Dark Count Rate
RMS   Root Mean Square
SPAD   Single Photon Avalanche Diode
TERA   Time Encoded Remote Apertures

References

1. Christou, J. Infrared speckle imaging: Data reduction with application to binary stars. Exp. Astron. 1991, 2, 27–56.
2. Fruchter, A.; Hook, R. Drizzle: A method for the linear reconstruction of undersampled images. Publ. Astron. Soc. Pac. 2002, 114, 144.
3. Orieux, F.; Giovannelli, J.F.; Rodet, T.; Abergel, A.; Ayasso, H.; Husson, M. Super-resolution in map-making based on a physical instrument model and regularized inversion. Application to SPIRE/Herschel. Astron. Astrophys. 2012, 539, A38.
4. Rust, M.J.; Bates, M.; Zhuang, X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 2006, 3, 793–796.
5. Jarrett, T.; Masci, F.; Tsai, C.; Petty, S.; Cluver, M.; Assef, R.J.; Benford, D.; Blain, A.; Bridge, C.; Donoso, E.; et al. Extending the Nearby Galaxy Heritage with WISE: First Results from the WISE Enhanced Resolution Galaxy Atlas. Astron. J. 2012, 145, 6.
6. Clark, J.; Palmer, M.; Lawrence, P. A transformation method for the reconstruction of functions from nonuniformly spaced samples. IEEE Trans. Acoust. Speech Signal Process. 1985, 33, 1151–1165.
7. Tsai, R. Multiple frame image restoration and registration. Adv. Comput. Vis. Image Process. 1989, 1, 1715–1989.
8. Hong, M.C.; Kang, M.G.; Katsaggelos, A.K. Regularized multichannel restoration approach for globally optimal high-resolution video sequence. In Visual Communications and Image Processing ’97; International Society for Optics and Photonics: San Jose, CA, USA, 1997; Volume 3024, pp. 1306–1317.
9. Hong, M.C.; Kang, M.G.; Katsaggelos, A.K. An iterative weighted regularized algorithm for improving the resolution of video sequences. In Proceedings of the International Conference on Image Processing, Santa Barbara, CA, USA, 26–29 October 1997; Volume 2, pp. 474–477.
10. Hardie, R.C.; Barnard, K.J.; Bognar, J.G.; Armstrong, E.E.; Watson, E.A. High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system. Opt. Eng. 1998, 37, 247–261.
11. Bose, N.K.; Lertrattanapanich, S.; Koo, J. Advances in superresolution using L-curve. In Proceedings of the IEEE International Symposium on Circuits and Systems, Sydney, NSW, Australia, 6–9 May 2001; pp. 433–436.
12. Tom, B.C.; Katsaggelos, A.K. Reconstruction of a High-Resolution Image by Simultaneous Registration, Restoration, and Interpolation of Low-Resolution Images; IEEE: Washington, DC, USA, 1995; p. 2539.
13. Schultz, R.R.; Stevenson, R.L. Extraction of high-resolution frames from video sequences. IEEE Trans. Image Process. 1996, 5, 996–1011.
14. Hardie, R.C.; Barnard, K.J.; Armstrong, E.E. Joint MAP registration and high resolution image estimation using a sequence of undersampled images. IEEE Trans. Image Process. 1997, 6, 1621–1633.
15. Cheeseman, P.; Kanefsky, B.; Kraft, R.; Stutz, J.; Hanson, R. Super-resolved surface reconstruction from multiple images. In Maximum Entropy and Bayesian Methods; Springer: Dordrecht, The Netherlands, 1996; pp. 293–308.
16. Irani, M.; Peleg, S. Improving resolution by image registration. CVGIP Graph. Model. Image Process. 1991, 53, 231–239.
17. Elad, M.; Feuer, A. Superresolution restoration of an image sequence: Adaptive filtering approach. IEEE Trans. Image Process. 1999, 8, 387–395.
18. Ourir, A.; Lerosey, G.; Lemoult, F.; Fink, M.; de Rosny, J. Far field subwavelength imaging of magnetic patterns. Appl. Phys. Lett. 2012, 101, 111102.
19. Gkioulekas, I.; Gortler, S.J.; Theran, L.; Zickler, T. Determining Generic Point Configurations From Unlabeled Path or Loop Lengths. arXiv 2017, arXiv:1709.03936.
20. Duxbury, P.M.; Granlund, L.; Gujarathi, S.; Juhas, P.; Billinge, S.J. The unassigned distance geometry problem. Discret. Appl. Math. 2016, 204, 117–132.
21. Juhás, P.; Granlund, L.; Duxbury, P.; Punch, W.; Billinge, S. The Liga algorithm for ab initio determination of nanostructure. Acta Crystallogr. Sect. A Found. Crystallogr. 2008, 64, 631–640.
22. Jarabo, A.; Marco, J.; Muñoz, A.; Buisan, R.; Jarosz, W.; Gutierrez, D. A Framework for Transient Rendering. ACM Trans. Graph. 2014, 33, 1–10.
23. Hernandez, Q.; Gutierrez, D.; Jarabo, A. A computational model of a single-photon avalanche diode sensor for transient imaging. arXiv 2017, arXiv:1703.02635.
24. Peters, C.; Klein, J.; Hullin, M.B.; Klein, R. Solving Trigonometric Moment Problems for Fast Transient Imaging. ACM Trans. Graph. 2015, 34, 220:1–220:11.
25. Tsai, C.Y.; Kutulakos, K.N.; Narasimhan, S.G.; Sankaranarayanan, A.C. The geometry of first-returning photons for non-line-of-sight imaging. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
Figure 1. Comparison between conventional and Time Encoded Remote Apertures (TERA) imaging systems. (a) Conventional imaging system. To infer the position of an object within the focal plane, an imaging system needs to determine the length difference between two rays. The figure shows the fundamental relation between distance, resolution, and aperture. The resolution of the system is determined by the Rayleigh criterion. (b) TERA imaging system. A pulsed laser illuminates the scene, and the SPAD detects the light reflected back from the scene. The collected time response contains information about the distance between the two targets. The resolution of the system is determined by the time resolution of the SPAD.
Figure 2. (a) Illustration of the imaging system (a pulsed laser and time-of-flight camera) and the point-cloud scene. (b) We consider the imaging system as part of the point cloud. $p_0$ is the location of the imaging system; $p_1, p_2, \ldots, p_n$ are point objects in the scene. (c) Dashed green lines are pings, and solid red lines are loop paths. If the measurement ensemble $\beta$ contains four pings and six loops, we say that the sub-graph is contained in the measurement ensemble. (d) If "x" are known points and the measurement ensemble $\beta$ contains one ping and three loops, we say that the measurement allows for trilateration.
Figure 3. Time response and reconstruction. (a) Total time response of a scene with 10 randomly generated points. The time response includes the first and second bounces. A peak extraction algorithm is applied to find the peaks, which are marked by blue triangles. (b) Second bounce time response. (c) Reconstruction of the point-cloud. After applying the peak extraction algorithm, modified TRIBOND is used to reconstruct the point-cloud. The red dotted lines correspond to first bounce paths connecting a point in the scene and the imaging system; the black lines are second bounce paths.
Figure 4. Statistical analysis of scene recoverability. The plot shows the probability of correctly imaging all targets in a scene of spheres randomly placed in a volume of 10 cubic meters. The number of spheres varies from 5 to 20 and the sphere diameter from 1 to 8 cm. The plot is generated from simulated data using realistic detector and photon noise parameters and assuming a 10 W illumination pulse from the laser.
Figure 5. Imaging system and examples of the scene. (a,d) The imaging system with a co-located ultrafast pulsed laser and SPAD. We placed a diffuser in front of the laser to scatter the laser light into the scene; note that there is no lens in front of the SPAD. On the right, the scene with illumination is shown. (b,c,e,f) Examples of the test scenes. All of the targets have dimensions of approximately 4 × 4 cm, except the plane, which is 8 × 8 cm.
Figure 6. Resolution test using two targets. (a) Circular patches with 3 cm diameter placed 15 cm apart. The first column shows the ground truth image; the second column is a simulated image of the scene as captured by an imaging system with a 20 μm aperture; the third column shows the reconstructed image; the last column is the acquired data. In the time response, three peaks are clearly visible: two first bounces and one second bounce. (b) Circular patches with 3 cm diameter placed 23 cm apart. (c) Circular patches with 3 cm diameter placed 35 cm apart.
Figure 7. Ground truth distance vs. reconstructed distance. Circular patches with 3 cm diameter are placed 15 cm apart and moved apart up to 35 cm. The green line is the ground truth distance and the blue line is the reconstructed distance. The RMS error over all 21 samples is 0.03 cm.
Figure 8. Experiments with different targets. (a) Two star targets of size 4 × 4 cm are placed 30 cm apart. The stars are made of wood and their surface color is brown. (b) A circular patch and a plane are placed 30 cm apart. (c) Three circular white patches with 4 cm diameter are placed in the scene. In the time response, we can find six peaks: three first and three second bounces.
