Advanced Ultrafast Imaging

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Optics and Lasers".

Deadline for manuscript submissions: closed (31 July 2020) | Viewed by 23556

Special Issue Editors


Dr. Keiichi Nakagawa
Guest Editor
Department of Bioengineering, Department of Precision Engineering, The University of Tokyo, Tokyo 113-8656, Japan
Interests: biomedical engineering; precision engineering; optical engineering; ultrafast science

Dr. Jinyang Liang
Guest Editor
Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes (Québec) J3X 1S2, Canada
Interests: ultrafast imaging; photoacoustic imaging; laser beam/pulse shaping

Special Issue Information

Dear Colleagues,

Ultrafast imaging is a powerful tool for studying fast dynamics in nature and for analyzing fast processes in industrial applications. To meet the demands of scientific research and technological development, a wide variety of ultrafast imaging methods have been developed over the last 150 years. Particularly in recent years, new principles, materials, devices, and systems for ultrafast image acquisition have emerged in diverse research fields, including optics, semiconductors, information science, accelerators, and biology. In addition, new imaging concepts based on ultrafast measurements have been proposed not simply to provide high imaging speed, but to provide unique information about the target.

This Special Issue, “Advanced Ultrafast Imaging”, aims to reflect recent developments in ultrafast imaging technology and to distill the essence of ultrafast imaging as a guide for developing future devices and methods. It also aims to provide an accessible introduction to each technique for potential users. Submissions are expected to focus on the ultrafast imaging method itself rather than on the phenomena being captured. New ideas in ultrafast imaging are also welcome.

Topics of interest include, but are not limited to, the following areas:

  • Ultrafast optical imaging: single-shot imaging, all-optical imaging, holography, frequency comb, optical Kerr gate
  • Ultrafast cameras and detectors: CCD and CMOS sensors, framing cameras, streak cameras, SPADs, ultrafast scintillators
  • Ultrafast light sources: high-repetition lasers, supercontinuum lasers
  • Time-resolved imaging: pump-probe technique, stroboscopic imaging
  • Computational imaging: compressive sensing, machine learning
  • Ultrafast spectroscopy: single-shot spectroscopy, dual-comb CARS, SRS, THz spectroscopy
  • Bioimaging: fluorescence imaging, time-resolved fluorescence microscopy, cytometry, AFM, medical diagnosis
  • X-ray, electron, and other imaging: XFEL, ultrafast electron microscopy, neutron imaging
  • Quantum imaging, time-of-flight imaging, single-photon imaging
  • New theory and design of ultrafast imaging

We hope this Special Issue serves as a roadmap for all developers and users of ultrafast imaging.

Dr. Keiichi Nakagawa
Dr. Jinyang Liang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Photography 
  • Ultrafast optics 
  • Computational imaging 
  • Image sensor 
  • Quantum imaging 
  • Bioimaging

Published Papers (8 papers)

Research

13 pages, 1447 KiB  
Article
Super-Resolution Remote Imaging Using Time Encoded Remote Apertures
by Ji Hyun Nam and Andreas Velten
Appl. Sci. 2020, 10(18), 6458; https://doi.org/10.3390/app10186458 - 16 Sep 2020
Cited by 6 | Viewed by 2366
Abstract
Imaging of scenes using light or other wave phenomena is subject to the diffraction limit. The spatial profile of a wave propagating between a scene and the imaging system is distorted by diffraction, resulting in a loss of resolution that is proportional to the traveled distance. We show here that it is possible to reconstruct sparse scenes from the temporal profile of the wave-front using only one spatial pixel or a spatial average. The temporal profile of the wave is not affected by diffraction, yielding an imaging method that can, in theory, achieve wavelength-scale resolution independent of the distance from the scene.
(This article belongs to the Special Issue Advanced Ultrafast Imaging)
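The core idea, recovering a sparse scene from a single-pixel temporal trace, lends itself to a compact illustration. The following is a minimal, hypothetical sketch (not the authors' reconstruction algorithm): the pulse width, time binning, and scene are invented, and the recovery is plain greedy matching pursuit over time-shifted pulses.

```python
# Toy sketch: greedy sparse recovery of reflector ranges from one time trace.
import numpy as np

c, dt, n = 3e8, 10e-12, 2048                     # speed of light, 10 ps bins, trace length
t = np.arange(n) * dt
pulse = np.exp(-0.5 * ((t - t.mean()) / 30e-12) ** 2)    # assumed 30 ps probe pulse

def shifted(k):                                   # dictionary atom: pulse peaked near bin k
    return np.roll(pulse, k - n // 2)

# Simulated single-pixel measurement: three reflectors at different delays.
true_bins, true_amps = [300, 870, 1400], [1.0, 0.6, 0.8]
trace = sum(a * shifted(k) for k, a in zip(true_bins, true_amps))
trace += 0.01 * np.random.default_rng(0).normal(size=n)  # detector noise

# Matching pursuit: peel off the strongest delayed pulse three times.
residual, recovered = trace.copy(), []
for _ in range(3):                                # assumed scene sparsity of 3
    k = int(np.argmax([residual @ shifted(j) for j in range(n)]))
    atom = shifted(k)
    a = (residual @ atom) / (atom @ atom)
    recovered.append((k * dt * c / 2, round(a, 2)))   # delay converted to range [m]
    residual -= a * atom

print(sorted(recovered))                          # recovered (range, amplitude) pairs
```

Because only time-of-arrival information is used here, the achievable resolution is set by the temporal profile rather than by diffraction of the spatial wavefront, which is the point the paper develops rigorously.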

16 pages, 2698 KiB  
Article
CFAM: Estimating 3D Hand Poses from a Single RGB Image with Attention
by Xianghan Wang, Jie Jiang, Yanming Guo, Lai Kang, Yingmei Wei and Dan Li
Appl. Sci. 2020, 10(2), 618; https://doi.org/10.3390/app10020618 - 15 Jan 2020
Cited by 5 | Viewed by 2769
Abstract
Precise 3D hand pose estimation can be used to improve the performance of human–computer interaction (HCI). Specifically, computer-vision-based hand pose estimation can make this process more natural. Most traditional computer-vision-based hand pose estimation methods use depth images as the input, which requires complicated and expensive acquisition equipment. Estimation from a single RGB image is more convenient and less expensive. Previous methods based on RGB images utilize only 2D keypoint score maps to recover 3D hand poses, but ignore the hand texture features and the underlying spatial information in the RGB image, which leads to relatively low accuracy. To address this issue, we propose a channel fusion attention mechanism that combines 2D keypoint features and RGB image features at the channel level. In particular, the proposed method re-plans the channel weights by cascading RGB image and 2D keypoint features, which enables the rational planning and utilization of the various features. Moreover, our method improves the fusion performance of different types of feature maps. Multiple comparative experiments on public datasets demonstrate that the accuracy of our proposed method is comparable to the state of the art.
(This article belongs to the Special Issue Advanced Ultrafast Imaging)
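As a rough illustration of channel-level fusion with attention, the sketch below (PyTorch, with assumed channel counts and a squeeze-and-excitation style gate; not the authors' exact CFAM module) reweights the concatenation of RGB-image features and 2D keypoint score maps before pose regression.

```python
import torch
import torch.nn as nn

class ChannelFusionAttention(nn.Module):
    """Reweight concatenated RGB and keypoint feature channels (illustrative only)."""
    def __init__(self, rgb_ch=256, kpt_ch=21, reduction=8):
        super().__init__()
        ch = rgb_ch + kpt_ch                              # channels after concatenation
        self.gate = nn.Sequential(                        # squeeze-and-excitation style gating
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, rgb_feat, kpt_maps):                # (B, rgb_ch, H, W), (B, kpt_ch, H, W)
        x = torch.cat([rgb_feat, kpt_maps], dim=1)        # channel-level fusion
        w = self.gate(x.mean(dim=(2, 3)))                 # per-channel weights from global pooling
        return x * w[:, :, None, None]                    # reweighted fused feature map

fused = ChannelFusionAttention()(torch.randn(2, 256, 32, 32), torch.randn(2, 21, 32, 32))
print(fused.shape)                                        # torch.Size([2, 277, 32, 32])
```

A 3D pose regressor would then consume the fused tensor; the paper's argument is that such reweighting lets the keypoint evidence and the image texture and spatial information complement each other.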

12 pages, 2028 KiB  
Article
Combination of 2D Compressive Sensing Spectral Domain Optical Coherence Tomography and Interferometric Synthetic Aperture Microscopy
by Luying Yi, Liqun Sun, Xiangyu Guo and Bo Hou
Appl. Sci. 2019, 9(19), 4003; https://doi.org/10.3390/app9194003 - 25 Sep 2019
Cited by 1 | Viewed by 2543
Abstract
Combining the advantages of compressive sensing spectral domain optical coherence tomography (CS-SDOCT) and interferometric synthetic aperture microscopy (ISAM) in terms of data volume, imaging speed, and lateral resolution, we demonstrate how compressive sampling and ISAM can be used simultaneously to reconstruct an optical coherence tomography (OCT) image. Specifically, an OCT image is reconstructed from two-dimensional (2D) under-sampled spectral data, dimension by dimension, through a CS reconstruction algorithm. During the iterative process of the CS algorithm, the deterioration of lateral resolution beyond the depth of focus (DOF) of a Gaussian beam is corrected. In the end, with less spectral data, we obtain an OCT image with spatially invariant lateral resolution throughout the imaging depth. The method is verified by imaging the cells of an orange: a 0.7 × 1.5 mm image was reconstructed using only 50% × 50% of the spectral data, in which the dispersion of the structure at a depth of approximately 5.7 Rayleigh ranges above the focus was decreased by approximately 2.4 times, consistent with the result obtained with 100% of the data.
(This article belongs to the Special Issue Advanced Ultrafast Imaging)
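The compressive-sampling half of the pipeline can be sketched in one dimension. Below is a minimal, hypothetical example (not the authors' algorithm, and without the ISAM defocus correction): a simulated A-scan that is sparse in depth is recovered from 50% randomly sampled spectral data by iterative soft-thresholding (ISTA).

```python
import numpy as np

n, m = 1024, 512                                  # full spectral length, samples kept (50%)
rng = np.random.default_rng(0)

depth = np.zeros(n)
depth[[100, 400, 700]] = [1.0, 0.6, 0.85]         # sparse scatterer profile (simplified model)
spectrum = np.fft.fft(depth)                      # spectral interferogram of the A-scan
keep = rng.choice(n, m, replace=False)            # random 50% spectral sampling mask
y = spectrum[keep]

# ISTA for  min_x 0.5*||(F x)[keep] - y||^2 + lam*||x||_1
x, lam, step = np.zeros(n, dtype=complex), 0.05, 1.0 / n
for _ in range(200):
    r = np.fft.fft(x)[keep] - y                   # residual in the measured spectral bins
    grad = np.zeros(n, dtype=complex)
    grad[keep] = r
    x = x - step * n * np.fft.ifft(grad)          # gradient step (adjoint of the sampled FFT)
    mag = np.abs(x)
    x = np.where(mag > lam, x * (1 - lam / np.maximum(mag, 1e-12)), 0)  # complex soft threshold

print(np.flatnonzero(np.abs(x) > 0.2))            # indices of recovered scatterers (ideally 100, 400, 700)
```

In the paper, this style of reconstruction is applied to the 2D spectral data dimension by dimension, with the defocus correction folded into the iterations so that the lateral resolution stays constant with depth.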

10 pages, 3557 KiB  
Article
Optimizing Single-Shot Coherent Power-Spectrum Scattering Imaging Adaptively by Tuning Feedback Coefficient for Practical Exposure Conditions
by Wei Wang, Yanfang Guo, Wusheng Tang, Wenjun Yi, Mengzhu Li, Mengjun Zhu, Junli Qi, Jubo Zhu and Xiujian Li
Appl. Sci. 2019, 9(18), 3676; https://doi.org/10.3390/app9183676 - 5 Sep 2019
Cited by 3 | Viewed by 2103
Abstract
Using only a single power-spectrum pattern, single-shot coherent power-spectrum imaging can provide a clear object image for real-time applications even when the object is hidden behind opaque scattering media; however, the feedback coefficient β that gives the reconstruction with the fewest retrieval iterations and the fastest speed normally has to be pre-estimated through time-consuming iterative loops. Here we report a method for estimating the optimal β value adaptively from the captured raw power-spectrum images, so as to optimize single-shot coherent power-spectrum imaging under practical exposure conditions. The results demonstrate that, based on an exposure-level analysis of the captured raw power-spectrum images, even in underexposure, moderate-exposure, and overexposure cases, the β value can be determined quickly with a compact expression, allowing the algorithm to achieve clear reconstruction output efficiently. The proposed method helps advance coherent diffractive imaging devices toward real-time imaging through turbid media in artificial intelligence (AI), driving-assistance, and flight-assistance applications.
(This article belongs to the Special Issue Advanced Ultrafast Imaging)
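For readers unfamiliar with the role of the feedback coefficient, the sketch below shows a generic hybrid input-output (HIO) phase-retrieval loop from a single power spectrum, where β controls how strongly pixels that violate the object-domain constraints are pushed back on each iteration. It is a simplified stand-in with assumed object, support, and iteration count, not the authors' adaptive β-estimation method.

```python
import numpy as np

rng = np.random.default_rng(1)
obj = np.zeros((64, 64))
obj[20:40, 25:45] = rng.random((20, 20))                      # hidden object (simulation only)
power_spectrum = np.abs(np.fft.fft2(obj)) ** 2                # the single-shot measurement
magnitude = np.sqrt(power_spectrum)                           # Fourier-magnitude constraint

support = np.zeros_like(obj, dtype=bool)
support[16:48, 20:50] = True                                  # assumed object support
beta = 0.9                                                    # feedback coefficient
g = rng.random(obj.shape)                                      # random initial guess

for _ in range(500):                                          # HIO iterations
    G = np.fft.fft2(g)
    gp = np.fft.ifft2(magnitude * np.exp(1j * np.angle(G))).real   # enforce Fourier magnitude
    bad = (~support) | (gp < 0)                               # object-domain constraint violations
    g = np.where(bad, g - beta * gp, gp)                      # feedback update controlled by beta

err = np.linalg.norm(np.abs(np.fft.fft2(g)) - magnitude) / np.linalg.norm(magnitude)
print(f"relative Fourier-magnitude error after 500 iterations: {err:.3f}")
```

A β that is too small or too large slows convergence or causes stagnation, which is why an adaptive, exposure-aware choice of β, as proposed in the paper, matters for real-time use.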

17 pages, 8723 KiB  
Article
Multi-Focus Image Fusion Based on Decision Map and Sparse Representation
by Bin Liao, Hua Chen and Wei Mo
Appl. Sci. 2019, 9(17), 3612; https://doi.org/10.3390/app9173612 - 2 Sep 2019
Cited by 2 | Viewed by 2793
Abstract
As the focal length of an optical lens in a conventional camera is limited, it is usually arduous to obtain an image in which every object is in focus. This problem can be solved by multi-focus image fusion. In this paper, we propose an entirely new multi-focus image fusion method based on a decision map and sparse representation (DMSR). First, we obtained a decision map by analyzing low-scale images with sparse representation, measuring the effective clarity level, and using spatial-frequency methods to process uncertain areas. Subsequently, the transitional area around the focus boundary was determined by the decision map, and we implemented the transitional-area fusion based on sparse representation. The experimental results show that the proposed method is superior to five other fusion methods, both in terms of visual effect and quantitative evaluation.
(This article belongs to the Special Issue Advanced Ultrafast Imaging)
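The role of the spatial-frequency measure in forming a decision map can be shown with a compact sketch. This is only the block-wise clarity comparison, with a hypothetical block size and without the sparse-representation stage or transitional-area handling of the full DMSR method.

```python
import numpy as np

def spatial_frequency(block):
    """Classic spatial-frequency clarity measure: RMS of row and column differences."""
    rf = np.diff(block, axis=1)
    cf = np.diff(block, axis=0)
    return np.sqrt((rf ** 2).mean() + (cf ** 2).mean())

def fuse(img_a, img_b, bs=16):
    """Per block, keep the source image whose block has the higher spatial frequency."""
    fused = np.empty_like(img_a)
    decision = np.zeros(img_a.shape, dtype=np.uint8)          # 0 -> take A, 1 -> take B
    for i in range(0, img_a.shape[0], bs):
        for j in range(0, img_a.shape[1], bs):
            sl = (slice(i, i + bs), slice(j, j + bs))
            take_b = spatial_frequency(img_b[sl]) > spatial_frequency(img_a[sl])
            decision[sl] = take_b
            fused[sl] = img_b[sl] if take_b else img_a[sl]
    return fused, decision

# Toy usage: A is sharp on the left half, B is sharp on the right half.
rng = np.random.default_rng(2)
sharp = rng.random((64, 64))
a, b = sharp.copy(), sharp.copy()
a[:, 32:] = 0.5                                               # right half of A "defocused"
b[:, :32] = 0.5                                               # left half of B "defocused"
fused, dmap = fuse(a, b)
print(dmap[0, 0], dmap[0, -1])                                # 0 (take A) on the left, 1 (take B) on the right
```

The full method refines exactly the blocks where this hard comparison is unreliable, namely the transitional area around the focus boundary, using sparse representation.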

19 pages, 1412 KiB  
Article
Improving Lossless Image Compression with Contextual Memory
by Alexandru Dorobanțiu and Remus Brad
Appl. Sci. 2019, 9(13), 2681; https://doi.org/10.3390/app9132681 - 30 Jun 2019
Cited by 6 | Viewed by 3605
Abstract
With the increased use of image acquisition devices, including cameras and medical imaging instruments, the amount of information requiring long-term storage is also growing. In this paper, we give a detailed description of the state-of-the-art lossless compression software PAQ8PX applied to grayscale image compression. We propose a new online learning algorithm for predicting the probability of bits from a stream, and then integrate the algorithm into PAQ8PX's image model. To verify the improvements, we test the new software on three public benchmarks. Experimental results show better scores on all of the test sets.
(This article belongs to the Special Issue Advanced Ultrafast Imaging)
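To give a flavor of the context-mixing framework in which PAQ-family compressors (and the proposed addition) operate, here is a toy bit predictor: each context model outputs a probability, and a logistic mixer learns its weights online. It is a deliberately simplified illustration, not PAQ8PX's actual model or the paper's contextual-memory algorithm.

```python
import math

def stretch(p): return math.log(p / (1 - p))        # logit
def squash(x): return 1 / (1 + math.exp(-x))        # logistic

class Mixer:
    """Online logistic mixing of several per-model bit probabilities."""
    def __init__(self, n_models, lr=0.02):
        self.w, self.lr = [0.0] * n_models, lr
    def predict(self, probs):
        self.x = [stretch(p) for p in probs]
        self.p = squash(sum(w * x for w, x in zip(self.w, self.x)))
        return self.p
    def update(self, bit):
        err = bit - self.p                           # gradient step on the coding cost
        self.w = [w + self.lr * err * x for w, x in zip(self.w, self.x)]

class Counter:                                       # order-0 model: running bit frequency
    def __init__(self): self.n0 = self.n1 = 1
    def predict(self): return self.n1 / (self.n0 + self.n1)
    def update(self, bit): self.n1 += bit; self.n0 += 1 - bit

class Repeater:                                      # trivial model: expect the previous bit again
    def __init__(self): self.last = 0
    def predict(self): return 0.9 if self.last else 0.1
    def update(self, bit): self.last = bit

models, mixer = [Counter(), Repeater()], Mixer(2)
bits, cost = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1], 0.0
for b in bits:
    p = mixer.predict([m.predict() for m in models])
    cost += -math.log2(p if b else 1 - p)            # ideal arithmetic-coding cost of this bit
    mixer.update(b)
    for m in models:
        m.update(b)
print(f"{cost:.2f} bits to code {len(bits)} input bits")
```

PAQ8PX mixes the predictions of many such context models; the paper adds a new online-learning predictor to its image model and measures the resulting gain on the three public benchmarks.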

11 pages, 6916 KiB  
Article
A Novel Focal Length Measurement Method for Center-Obstructed Omni-Directional Reflective Optical Systems
by Hojong Choi, Joo-Youn Jo and Jae-Myung Ryu
Appl. Sci. 2019, 9(11), 2350; https://doi.org/10.3390/app9112350 - 8 Jun 2019
Cited by 12 | Viewed by 3587
Abstract
An omni-directional optical system can be used as a surveillance camera owing to its wide field angle. However, when a system is designed with a central screen-obscuring structure to increase the resolution of the off-axis field, the conventional methods cannot be used to measure the effective focal length (EFL). We assumed the actual and theoretical distortion values of the fabricated optical system to be the same and determined the system's EFL by finding the point of minimum deviation between the measured and theoretical distortions. The feasibility of the determined EFL was verified through a tolerance analysis of the system. For these precise measurements, we also analyzed the sources of error. To verify the proposed measurement method, we measured the focal length of a center-obstructed omni-directional reflective optical system with an 80–135° field of view (FOV). The EFL obtained from the measurement was 0.3739 mm, only approximately 11 µm different from the EFL calculated using the design software. Thus, the reliability of focal length measurements in omni-directional optical systems is improved.
(This article belongs to the Special Issue Advanced Ultrafast Imaging)
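The minimum-deviation idea translates into a very small optimization. The sketch below uses an equidistant (f·θ) reference height and entirely made-up field angles, distortion values, and image heights; it is not the authors' design data, only an illustration of searching for the EFL at which the measured and theoretical distortion agree best.

```python
import numpy as np

theta = np.deg2rad([80, 90, 100, 110, 120, 135])              # hypothetical field angles [rad]
design_dist = np.array([-2.0, -3.5, -5.0, -6.8, -8.5, -11.0]) # hypothetical design distortion [%]

f_true = 0.374                                                # [mm]; used only to simulate "measured" data
measured_h = f_true * theta * (1 + design_dist / 100)         # simulated measured image heights [mm]

def deviation(f):
    """Sum of squared differences between measurement-implied and design distortion for a trial EFL."""
    ideal = f * theta
    measured_dist = (measured_h - ideal) / ideal * 100
    return np.sum((measured_dist - design_dist) ** 2)

candidates = np.linspace(0.30, 0.45, 15001)                   # 0.01 um grid over plausible EFLs
efl = candidates[np.argmin([deviation(f) for f in candidates])]
print(f"estimated EFL: {efl:.4f} mm")                         # recovers 0.3740 mm in this toy case
```

In the paper, the theoretical distortion comes from the lens design software and the heights from the fabricated system, and the same minimum-deviation search yields an EFL within about 11 µm of the design value.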

11 pages, 1486 KiB  
Article
Super-Resolution Lensless Imaging of Cells Using Brownian Motion
by Yuan Fang, Ningmei Yu and Yuquan Jiang
Appl. Sci. 2019, 9(10), 2080; https://doi.org/10.3390/app9102080 - 21 May 2019
Cited by 6 | Viewed by 3132
Abstract
The lensless imaging technique, which integrates a microscope onto a complementary metal oxide semiconductor (CMOS) digital image sensor, has become increasingly important for the miniaturization of biological microscopes and cell detection equipment. However, limited by the pixel size of the CMOS image sensor (CIS), the resolution of a cell image without optical amplification is low. This is a key defect of the lensless imaging technique and has been studied by many scholars. In this manuscript, we propose a method to improve the resolution of cell images using the Brownian motion of living cells in liquid. A two-step motion-estimation algorithm for image registration is proposed. Then, the raw holographic images are reconstructed using a normalized-convolution super-resolution algorithm. The results show that cell images collected with the lensless imaging system are close in quality to those obtained with a 10× objective lens.
(This article belongs to the Special Issue Advanced Ultrafast Imaging)
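The reconstruction step can be sketched compactly. Below is a minimal normalized-convolution super-resolution routine under assumed inputs (registered low-resolution frames with already-estimated sub-pixel shifts); it is not the authors' full two-step registration pipeline or holographic reconstruction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nc_super_resolve(frames, shifts, scale=4, sigma=1.0):
    """Scatter shifted low-res samples onto a fine grid, then divide a Gaussian-weighted
    signal sum by the corresponding certainty (sample-density) sum."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))                    # weighted signal accumulator
    cert = np.zeros_like(acc)                                 # certainty accumulator
    for frame, (dy, dx) in zip(frames, shifts):               # shifts estimated from Brownian motion
        ys = np.rint((np.arange(h) + dy) * scale).astype(int)
        xs = np.rint((np.arange(w) + dx) * scale).astype(int)
        yy, xx = np.meshgrid(ys, xs, indexing="ij")
        ok = (yy >= 0) & (yy < h * scale) & (xx >= 0) & (xx < w * scale)
        np.add.at(acc, (yy[ok], xx[ok]), frame[ok])
        np.add.at(cert, (yy[ok], xx[ok]), 1.0)
    num = gaussian_filter(acc, sigma)                         # applicability (Gaussian) smoothing
    den = gaussian_filter(cert, sigma)
    return num / np.maximum(den, 1e-8)                        # normalized-convolution estimate

# Toy usage: four 32x32 frames (placeholder data) with estimated sub-pixel shifts.
rng = np.random.default_rng(3)
scene = rng.random((32, 32))
shifts = [(0.0, 0.0), (0.25, 0.5), (0.5, 0.25), (0.75, 0.75)]  # as if returned by registration
hi_res = nc_super_resolve([scene] * 4, shifts)
print(hi_res.shape)                                            # (128, 128)
```

The point the paper exploits is that the cells' own Brownian motion supplies the diverse sub-pixel shifts for free, so no mechanical scanning of the source or sensor is needed.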
