Search Results (31)

Search Parameters:
Keywords = plenoptic imaging

10 pages, 5686 KiB  
Communication
3D Correlation Imaging for Localized Phase Disturbance Mitigation
by Francesco V. Pepe and Milena D’Angelo
Photonics 2024, 11(8), 733; https://doi.org/10.3390/photonics11080733 - 6 Aug 2024
Cited by 1 | Viewed by 923
Abstract
Correlation plenoptic imaging is a procedure to perform light-field imaging without spatial resolution loss, by measuring the second-order spatiotemporal correlations of light. We investigate the possibility of using correlation plenoptic imaging to mitigate the effect of a phase disturbance in the propagation from the object to the main lens. We assume that this detrimental effect, which can be due to a turbulent medium, is localized at a specific distance from the lens, and is slowly varying in time. The mitigation of turbulence effects has already fostered the development of both light-field imaging and correlation imaging procedures. Here, we aim to merge these aspects, proposing a correlation light-field imaging method to overcome the effects of slowly varying turbulence, without the loss of lateral resolution typical of traditional plenoptic imaging devices.
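The second-order correlation measurement at the heart of this technique can be sketched numerically: given synchronized frame stacks from two high-resolution detectors, the correlation function is estimated as the covariance between every pixel pair across the two sensors. The function name and array layout below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def second_order_correlation(frames_a, frames_b):
    """Estimate the second-order correlation function G(x_a, x_b) from
    synchronized frame stacks of two detectors (each of shape N x H x W).
    Returns <I_a I_b> - <I_a><I_b> for every pair of (flattened) pixels."""
    n = frames_a.shape[0]
    a = frames_a.reshape(n, -1).astype(float)
    b = frames_b.reshape(n, -1).astype(float)
    mean_a = a.mean(axis=0)
    mean_b = b.mean(axis=0)
    # <I_a(x_a) I_b(x_b)> averaged over frames, minus the product of means
    return a.T @ b / n - np.outer(mean_a, mean_b)
```

For two H x W detectors this yields an (H·W) x (H·W) matrix, i.e. the four-dimensional correlation function flattened over its two spatial arguments.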

29 pages, 4039 KiB  
Article
Mind the Exit Pupil Gap: Revisiting the Intrinsics of a Standard Plenoptic Camera
by Tim Michels, Daniel Mäckelmann and Reinhard Koch
Sensors 2024, 24(8), 2522; https://doi.org/10.3390/s24082522 - 15 Apr 2024
Cited by 1 | Viewed by 1910
Abstract
Among the common applications of plenoptic cameras are depth reconstruction and post-shot refocusing. These require a calibration relating the camera-side light field to that of the scene. Numerous methods with this goal have been developed based on thin lens models for the plenoptic camera’s main lens and microlenses. Our work addresses the often-overlooked role of the main lens exit pupil in these models, specifically in the decoding process of standard plenoptic camera (SPC) images. We formally deduce the connection between the refocusing distance and the resampling parameter for the decoded light field and provide an analysis of the errors that arise when the exit pupil is not considered. In addition, previous work is revisited with respect to the exit pupil’s role, and all theoretical results are validated through a ray tracing-based simulation. With the public release of the evaluated SPC designs alongside our simulation and experimental data, we aim to contribute to a more accurate and nuanced understanding of plenoptic camera optics.
(This article belongs to the Special Issue Short-Range Optical 3D Scanning and 3D Data Processing)

13 pages, 2662 KiB  
Review
A Mini-Review of Recent Developments in Plenoptic Background-Oriented Schlieren Technology for Flow Dynamics Measurement
by Yulan Liu, Feng Xing, Liwei Su, Huijun Tan and Depeng Wang
Aerospace 2024, 11(4), 303; https://doi.org/10.3390/aerospace11040303 - 12 Apr 2024
Cited by 4 | Viewed by 2281
Abstract
To uncover underlying fluid mechanisms, it is crucial to explore imaging techniques for high-resolution, large-scale three-dimensional (3D) measurements of the flow field. Plenoptic background-oriented schlieren (plenoptic BOS), an emerging volumetric method in recent years, has demonstrated the ability to resolve volumetric flow dynamics with a single plenoptic camera. The focus-stack-based plenoptic BOS system can qualitatively infer the position of the density gradient in 3D space from the relative sharpness of the refocused BOS images. Plenoptic BOS systems based on tomography or specular enhancement techniques have been realized for high-fidelity 3D flow measurements owing to the increased number of acquisition views. Here, we first review the fundamentals of plenoptic BOS, then discuss the system configuration and typical applications of single-view and multi-view plenoptic BOS. We also discuss the related challenges and offer an outlook on the potential development of plenoptic BOS in the future.
(This article belongs to the Special Issue Gust Influences on Aerospace)

21 pages, 2127 KiB  
Article
Fusion and Allocation Network for Light Field Image Super-Resolution
by Wei Zhang, Wei Ke, Zewei Wu, Zeyu Zhang, Hao Sheng and Zhang Xiong
Mathematics 2023, 11(5), 1088; https://doi.org/10.3390/math11051088 - 22 Feb 2023
Cited by 1 | Viewed by 1829
Abstract
Light field (LF) images taken by plenoptic cameras can record spatial and angular information from real-world scenes, and it is beneficial to fully integrate these two pieces of information to improve image super-resolution (SR). However, most of the existing approaches to LF image SR cannot fully fuse the information at the spatial and angular levels. Moreover, the performance of SR is hindered by the difficulty of incorporating distinctive information from different views and extracting informative features from each view. To solve these core issues, we propose a fusion and allocation network (LF-FANet) for LF image SR. Specifically, we have designed an angular fusion operator (AFO) to fuse distinctive features among different views, and a spatial fusion operator (SFO) to extract deep representation features for each view. Following these two operators, we further propose a fusion and allocation strategy to incorporate and propagate the fusion features. In the fusion stage, the interaction information fusion block (IIFB) fully supplements distinctive and informative features among all views. In the allocation stage, the fused output features are allocated to the next AFO and SFO for further distillation of valid information. Experimental results on both synthetic and real-world datasets demonstrate that our method achieves performance on par with state-of-the-art methods. Moreover, our method can preserve the parallax structure of LF and generate faithful details of LF images.
(This article belongs to the Section E1: Mathematics and Computer Science)

25 pages, 20118 KiB  
Article
Light Field View Synthesis Using the Focal Stack and All-in-Focus Image
by Rishabh Sharma, Stuart Perry and Eva Cheng
Sensors 2023, 23(4), 2119; https://doi.org/10.3390/s23042119 - 13 Feb 2023
Viewed by 2789
Abstract
Light field reconstruction and synthesis algorithms are essential for improving the low spatial resolution of hand-held plenoptic cameras. Previous light field synthesis algorithms produce blurred regions around depth discontinuities, especially stereo-based algorithms, where no information is available to fill the occluded areas in the light field image. In this paper, we propose a light field synthesis algorithm that uses focal stack images and the all-in-focus image to synthesize a 9 × 9 sub-aperture view light field image. Our approach uses depth from defocus to estimate a depth map. Then, we use the depth map and the all-in-focus image to synthesize the sub-aperture views and their corresponding depth maps by mimicking the apparent shifting of the central image according to the depth values. We handle occluded regions in the synthesized sub-aperture views by filling them with information recovered from the focal stack images. We also show that, if the depth levels in the image are known, we can synthesize a high-accuracy light field image with just five focal stack images. The accuracy of our approach is compared with three state-of-the-art algorithms, one non-learning and two CNN-based, and the results show that our algorithm outperforms all three in terms of PSNR and SSIM metrics.
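The core synthesis step described above — shifting the central all-in-focus image according to per-pixel depth to mimic a sub-aperture view — can be illustrated with a minimal forward-warping sketch. The function name, its parameters (`du`, `dv`, `shift_per_depth`) and the nearest-neighbor warp are hypothetical simplifications; the paper's occlusion filling from focal stack images is omitted (occluded pixels are simply left at zero here).

```python
import numpy as np

def synthesize_view(all_in_focus, depth_map, du, dv, shift_per_depth=1.0):
    """Synthesize one sub-aperture view at angular offset (du, dv) by
    shifting each pixel of the central all-in-focus image proportionally
    to its depth value (a disparity proxy). Nearest-neighbor forward warp."""
    h, w = all_in_focus.shape[:2]
    out = np.zeros_like(all_in_focus)
    ys, xs = np.mgrid[0:h, 0:w]
    # disparity grows with the depth value; sign set by the view offset
    dx = np.round(du * shift_per_depth * depth_map).astype(int)
    dy = np.round(dv * shift_per_depth * depth_map).astype(int)
    tx = np.clip(xs + dx, 0, w - 1)
    ty = np.clip(ys + dy, 0, h - 1)
    out[ty, tx] = all_in_focus
    return out
```

Calling this for all 81 offsets (du, dv) in {-4, …, 4}² would produce the 9 × 9 view grid; the zero-valued holes are where the paper's focal-stack-based filling would apply.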
(This article belongs to the Special Issue 3D Reconstruction with RGB-D Cameras and Multi-sensors)

14 pages, 1626 KiB  
Article
Refocusing Algorithm for Correlation Plenoptic Imaging
by Gianlorenzo Massaro, Francesco V. Pepe and Milena D’Angelo
Sensors 2022, 22(17), 6665; https://doi.org/10.3390/s22176665 - 3 Sep 2022
Cited by 13 | Viewed by 1859
Abstract
Correlation plenoptic imaging (CPI) is a technique capable of acquiring the light field emerging from a scene of interest, namely, the combined information of intensity and propagation direction of light. This is achieved by evaluating correlations between the photon numbers measured by two high-resolution detectors. Volumetric information about the object of interest is decoded, through data analysis, from the measured four-dimensional correlation function. In this paper, we investigate the relevant aspects of the refocusing algorithm, a post-processing method that, once applied to the correlation function, isolates the image of a selected transverse plane within the 3D scene. In particular, we aim at bridging the gap between the existing literature, which only deals with refocusing algorithms in the case of continuous coordinates, and the experimental reality, in which the correlation function is available as a discrete quantity defined on the sensor pixels.
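For orientation, the classical shift-and-sum refocusing that CPI generalizes can be sketched on an ordinary 4D light field: each sub-aperture view is translated proportionally to its angular offset, and the views are averaged. This is a simplified, integer-pixel illustration of the idea under assumed array conventions, not the discrete correlation-function algorithm analyzed in the paper.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum refocusing of a 4D light field L[u, v, y, x]
    (angular indices u, v; spatial indices y, x). alpha selects the
    refocused depth plane; alpha = 1 reproduces the nominal focal plane.
    Integer-pixel shifts with wrap-around (np.roll) for brevity."""
    nu, nv, h, w = light_field.shape
    cu, cv = (nu - 1) / 2, (nv - 1) / 2
    out = np.zeros((h, w), dtype=float)
    for u in range(nu):
        for v in range(nv):
            # shift proportional to the angular distance from the central view
            sy = int(round((1 - 1 / alpha) * (u - cu)))
            sx = int(round((1 - 1 / alpha) * (v - cv)))
            out += np.roll(light_field[u, v], (sy, sx), axis=(0, 1))
    return out / (nu * nv)
```

The paper's concern — how such shifts behave when the continuous refocusing integral is discretized on sensor pixels — is visible even here in the `round()` calls, which quantize the ideal sub-pixel shifts.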
(This article belongs to the Special Issue Camera Calibration and 3D Reconstruction)

16 pages, 1200 KiB  
Article
A First Attempt to Combine NIRS and Plenoptic Cameras for the Assessment of Grasslands Functional Diversity and Species Composition
by Simon Taugourdeau, Mathilde Dionisi, Mylène Lascoste, Matthieu Lesnoff, Jean Marie Capron, Fréderic Borne, Philippe Borianne and Lionel Julien
Agriculture 2022, 12(5), 704; https://doi.org/10.3390/agriculture12050704 - 17 May 2022
Cited by 1 | Viewed by 2512
Abstract
Grassland represents more than half of all agricultural land. Numerous metrics (biomass, functional traits, species composition) can be used to describe grassland vegetation and its multiple functions. Measuring these metrics is generally destructive and laborious; indirect measurement using optical tools is a possible alternative. Some tools have high spatial resolution (digital cameras), and others have high spectral resolution (near-infrared spectrometry, NIRS). A plenoptic camera is a multifocal camera that produces clear images at different depths within an image. The objective of this study was to test the value of combining plenoptic images and NIRS data to characterize different descriptors of two Mediterranean legume mixtures. On these mixtures, we measured biomass, species biomass, and functional trait diversity. NIRS and plenoptic images were acquired just before the field measurements. The plenoptic images were analyzed using the Trainable Weka Segmentation plugin in ImageJ to evaluate the percentage of each species in the image. We calculated the average and standard deviation of the different colors (red, green, blue reflectance) in the image. We assessed the explanatory power of the outputs of the image and NIRS analyses using variance partitioning and partial least squares. The biomass of Trifolium michelianum and Vicia sativa was predicted with more than 50% of the variability explained. For the other descriptors, the explained variability was lower but nevertheless significant. The percentage of variance explained was nevertheless quite low, and further work is required to produce a usable tool, but this work already demonstrates the value of combining image analysis and NIRS.

12 pages, 963 KiB  
Article
Resolution Limit of Correlation Plenoptic Imaging between Arbitrary Planes
by Francesco Scattarella, Milena D’Angelo and Francesco V. Pepe
Optics 2022, 3(2), 138-149; https://doi.org/10.3390/opt3020015 - 12 Apr 2022
Cited by 9 | Viewed by 2299
Abstract
Correlation plenoptic imaging (CPI) is an optical imaging technique based on intensity correlation measurement, which enables detecting, within fundamental physical limits, both the spatial distribution and the direction of light in a scene. This provides the possibility to perform tasks such as three-dimensional reconstruction and refocusing of different planes. Compared with standard plenoptic imaging devices, based on direct intensity measurement, CPI overcomes the problem of the strong trade-off between spatial and directional resolution. Here, we study the resolution limit in a recent development of the technique, called correlation plenoptic imaging between arbitrary planes (CPI-AP). The analysis, based on Gaussian test objects, highlights the main properties of the technique, as compared with standard imaging, and provides an analytical guideline to identify the limits at which an object can be considered resolved.

21 pages, 5817 KiB  
Article
Objective Quality Assessment Metrics for Light Field Image Based on Textural Features
by Huy PhiCong, Stuart Perry, Eva Cheng and Xiem HoangVan
Electronics 2022, 11(5), 759; https://doi.org/10.3390/electronics11050759 - 1 Mar 2022
Cited by 11 | Viewed by 2732
Abstract
Light Field (LF) imaging is a plenoptic data collection method enabling a wide variety of image post-processing operations such as 3D extraction, viewpoint change and digital refocusing. Moreover, LF provides the capability to capture rich information about a scene, e.g., texture, geometric information, etc. Therefore, a quality assessment model for LF images is needed and poses significant challenges. Many LF Image Quality Assessment (LF-IQA) metrics have recently been presented based on the unique characteristics of LF images. State-of-the-art objective assessment metrics such as SSIM and IW-SSIM take into account image content and the human visual system. However, most of these metrics are designed for images and video with natural content. Additionally, other models based on LF characteristics (e.g., depth information, angular information) trade high performance for high computational complexity, and are difficult to implement in LF applications due to the immense data requirements of LF images. Hence, this paper presents a novel content-adaptive LF-IQA metric that improves on conventional LF-IQA performance while remaining low in computational complexity. The experimental results clearly show improved performance compared to conventional objective IQA metrics, and we also identify metrics that are well suited for LF image assessment. In addition, we present a comprehensive content-based feature analysis to determine the most appropriate feature that influences human visual perception among the widely used conventional objective IQA metrics. Finally, a rich LF dataset is selected from the EPFL dataset, allowing for the study of light field quality along qualitative factors such as depth (wide and narrow), focus (background or foreground) and complexity (simple and complex).
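The reference metrics mentioned above can be sketched as follows. PSNR is standard; the SSIM shown here uses global image statistics rather than the sliding window of the standard definition, so it is a deliberate simplification for illustration, not the IW-SSIM or windowed SSIM the paper evaluates.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def global_ssim(ref, test, peak=255.0):
    """Single-window (global-statistics) SSIM with the usual stabilizing
    constants C1 = (0.01*peak)^2 and C2 = (0.03*peak)^2."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For LF images these scores are typically averaged over all sub-aperture views, which is one reason purely 2D metrics can miss angular (parallax) distortions.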
(This article belongs to the Special Issue Advances in Signal, Image and Information Processing)

13 pages, 5354 KiB  
Article
Flexible Plenoptic X-ray Microscopy
by Elena Longo, Domenico Alj, Joost Batenburg, Ombeline de La Rochefoucauld, Charlotte Herzog, Imke Greving, Ying Li, Mikhail Lyubomirskiy, Ken Vidar Falch, Patricia Estrela, Silja Flenner, Nicola Viganò, Marta Fajardo and Philippe Zeitoun
Photonics 2022, 9(2), 98; https://doi.org/10.3390/photonics9020098 - 8 Feb 2022
Cited by 2 | Viewed by 3825
Abstract
X-ray computed tomography (CT) is an invaluable technique for generating three-dimensional (3D) images of inert or living specimens. X-ray CT is used in many scientific, industrial, and societal fields. Compared to conventional 2D X-ray imaging, CT requires longer acquisition times because up to several thousand projections are required for reconstructing a single high-resolution 3D volume. Plenoptic imaging—an emerging technology in visible light field photography—highlights the potential of capturing quasi-3D information with a single exposure. Here, we show the first demonstration of a flexible plenoptic microscope operating with hard X-rays; it is used to computationally reconstruct images at different depths along the optical axis. The experimental results are consistent with the expected axial refocusing, precision, and spatial resolution. Thus, this proof-of-concept experiment opens the horizons to quasi-3D X-ray imaging, without sample rotation, with spatial resolution of a few hundred nanometres.
(This article belongs to the Special Issue Advances in X-ray Optics)

18 pages, 80059 KiB  
Article
All-In-Focus Polarimetric Imaging Based on an Integrated Plenoptic Camera with a Key Electrically Tunable LC Device
by Mingce Chen, Zhexun Li, Mao Ye, Taige Liu, Chai Hu, Jiashuo Shi, Kewei Liu, Zhe Wang and Xinyu Zhang
Micromachines 2022, 13(2), 192; https://doi.org/10.3390/mi13020192 - 26 Jan 2022
Cited by 4 | Viewed by 2538
Abstract
In this paper, a prototype plenoptic camera based on a key electrically tunable liquid-crystal (LC) device for all-in-focus polarimetric imaging is proposed. By using computer numerical control machining and 3D printing, the proposed imaging architecture can be integrated into a hand-held prototype plenoptic camera, greatly improving its applicability for outdoor imaging measurements. Compared with previous square-period liquid-crystal microlens arrays (LCMLAs), the utilized hexagonal-period LCMLA increases the light utilization rate by ~15%. Experiments demonstrate that the proposed imaging approach can simultaneously realize both plenoptic and polarimetric imaging without any macroscopic moving parts. With the depth-based rendering method, both all-in-focus images and all-in-focus degree of linear polarization (DoLP) images can be obtained efficiently. Due to the large depth-of-field advantage of plenoptic cameras, the proposed camera enables polarimetric imaging over a larger depth range than conventional 2D polarimetric cameras. Currently, raw light field images with three polarization states, I0, I60 and I120, can be captured by the proposed imaging architecture, with a switching time of several tens of milliseconds. Selected local patterns corresponding to target features of interest can be effectively suppressed or markedly enhanced by switching between these polarization states. Experiments also show that visibility in scattering media can be appreciably improved. The proposed polarimetric imaging approach is therefore expected to exhibit excellent development potential.
(This article belongs to the Special Issue Optics and Photonics in Micromachines)

23 pages, 4246 KiB  
Article
Card3DFace—An Application to Enhance 3D Visual Validation in ID Cards and Travel Documents
by Leandro Dihl, Leandro Cruz and Nuno Gonçalves
Appl. Sci. 2021, 11(19), 8821; https://doi.org/10.3390/app11198821 - 23 Sep 2021
Cited by 2 | Viewed by 2344
Abstract
The identification of a person is a natural way to gain access to information or places. A face image is an essential element of visual validation. In this paper, we present the Card3DFace application, which captures a single-shot image of a person’s face. After reconstructing the 3D model of the head, the application generates several images from different perspectives, which, when printed on a card with a layer of lenticular lenses, produce a 3D visualization effect of the face. The image acquisition is achieved with a regular consumer 3D camera, using either plenoptic, stereo or time-of-flight technologies. This procedure aims to assist and improve the human visual recognition of ID cards and travel documents through an affordable and fast process while simultaneously increasing their security level. The whole system pipeline is analyzed and detailed in this paper. The results of the experiments performed with polycarbonate ID cards show that this end-to-end system is able to produce cards with realistic 3D visualization effects for humans.

10 pages, 1898 KiB  
Article
An Artificial-Intelligence-Driven Predictive Model for Surface Defect Detections in Medical MEMS
by Amin Amini, Jamil Kanfoud and Tat-Hean Gan
Sensors 2021, 21(18), 6141; https://doi.org/10.3390/s21186141 - 13 Sep 2021
Cited by 8 | Viewed by 3699
Abstract
With the advancement of miniaturization in electronics and the ubiquity of micro-electro-mechanical systems (MEMS) in applications including computing, sensing and medical apparatus, increasing production yields and ensuring product quality standards have become important focuses in manufacturing. Hence, the need for high-accuracy, automatic defect detection in the early phases of MEMS production has been recognized. This not only eliminates human interaction in the defect detection process, but also saves the raw material and labor required. This research developed an automated defect recognition (ADR) system that uses a unique plenoptic camera and a machine-learning approach to detect surface defects on MEMS wafers. The developed algorithm can be applied at any stage of the production process, detecting defects at both the whole-wafer and single-component scale. The developed system achieved an average F1 score of 0.81 for true positive defect detection, with a processing time of 18 s per image, based on 6 validation sample images containing 371 labels.
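The reported F1 score is the harmonic mean of precision and recall over the labeled defects. As a reminder of the definition (the counts below are illustrative, not the paper's validation data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, 8 true positives with 2 false positives and 2 false negatives gives precision = recall = 0.8, hence F1 = 0.8.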
(This article belongs to the Section Biomedical Sensors)

20 pages, 1903 KiB  
Article
Design of an FPGA Hardware Optimizing the Performance and Power Consumption of a Plenoptic Camera Depth Estimation Algorithm
by Faraz Bhatti and Thomas Greiner
Algorithms 2021, 14(7), 215; https://doi.org/10.3390/a14070215 - 15 Jul 2021
Cited by 1 | Viewed by 3262
Abstract
A plenoptic-camera-based system captures the light field, which can be exploited to estimate the 3D depth of the scene. This process generally consists of a large number of recurrent operations and thus requires high computational power. A general-purpose-processor-based system, due to its sequential architecture, consequently suffers from long execution times. A desktop graphics processing unit (GPU) can be employed to resolve this problem; however, it is an expensive solution with respect to power consumption and therefore cannot be used in mobile applications with low energy budgets. In this paper, we propose a modified plenoptic depth estimation algorithm that works on a single frame recorded by the camera, together with a corresponding FPGA-based hardware design. For this purpose, the algorithm is modified for parallelization and pipelining. In combination with efficient memory access, the results show good performance and lower power consumption compared to other systems.
(This article belongs to the Special Issue Algorithms in Reconfigurable Computing)

14 pages, 4277 KiB  
Review
Towards Quantum 3D Imaging Devices
by Cristoforo Abbattista, Leonardo Amoruso, Samuel Burri, Edoardo Charbon, Francesco Di Lena, Augusto Garuccio, Davide Giannella, Zdeněk Hradil, Michele Iacobellis, Gianlorenzo Massaro, Paul Mos, Libor Motka, Martin Paúr, Francesco V. Pepe, Michal Peterek, Isabella Petrelli, Jaroslav Řeháček, Francesca Santoro, Francesco Scattarella, Arin Ulku, Sergii Vasiukov, Michael Wayne, Claudio Bruschini, Milena D’Angelo, Maria Ieronymaki and Bohumil Stoklasa
Appl. Sci. 2021, 11(14), 6414; https://doi.org/10.3390/app11146414 - 12 Jul 2021
Cited by 25 | Viewed by 4014
Abstract
We review the advancement of the research toward the design and implementation of quantum plenoptic cameras, radically novel 3D imaging devices that exploit both momentum–position entanglement and photon–number correlations to provide the typical refocusing and ultra-fast, scanning-free, 3D imaging capability of plenoptic devices, along with dramatically enhanced performances, unattainable in standard plenoptic cameras: diffraction-limited resolution, large depth of focus, and ultra-low noise. To further increase the volumetric resolution beyond the Rayleigh diffraction limit, and achieve the quantum limit, we are also developing dedicated protocols based on quantum Fisher information. However, for the quantum advantages of the proposed devices to be effective and appealing to end-users, two main challenges need to be tackled. First, due to the large number of frames required for correlation measurements to provide an acceptable signal-to-noise ratio, quantum plenoptic imaging (QPI) would require, if implemented with commercially available high-resolution cameras, acquisition times ranging from tens of seconds to a few minutes. Second, the elaboration of this large amount of data, in order to retrieve 3D images or refocused 2D images, requires high-performance and time-consuming computation. To address these challenges, we are developing high-resolution single-photon avalanche photodiode (SPAD) arrays and high-performance low-level programming of ultra-fast electronics, combined with compressive sensing and quantum tomography algorithms, with the aim of reducing both the acquisition and the elaboration time by two orders of magnitude. Routes toward exploitation of the QPI devices are also discussed.
(This article belongs to the Special Issue Basics and Applications in Quantum Optics)