Search Results (31)

Search Parameters:
Keywords = HDR reconstruction

17 pages, 4785 KiB  
Article
A Clustered Adaptive Exposure Time Selection Methodology for HDR Structured Light 3D Reconstruction
by Zhuang Li, Rui Ma and Shuyu Duan
Sensors 2025, 25(15), 4786; https://doi.org/10.3390/s25154786 - 3 Aug 2025
Viewed by 222
Abstract
Fringe projection profilometry (FPP) has been widely applied in industrial 3D measurement due to its high precision and non-contact advantages. However, FPP often encounters measurement problems with high-dynamic-range objects, consequently impacting phase computation. In this paper, an adaptive exposure time selection method is proposed to calculate the optimal number of exposures and exposure time by using an improved clustering method to divide the region with different reflection degrees. Meanwhile, the phase order sharing strategy is adopted in the phase unwrapping stage, and the same set of complementary Gray code patterns is used to calculate the phase orders under different exposure times. The experimental results demonstrate that the measurement error of the method described in this paper was reduced by 25.4% under almost the same exposure times. Full article
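The exposure-selection idea above can be sketched in a few lines. This is an illustrative stand-in, not the paper's method: it assumes a linear camera response, uses a naive 1-D k-means in place of the paper's improved clustering, and picks an arbitrary target gray level of 128.

```python
def kmeans_1d(values, k, iters=20):
    """Naive 1-D k-means: group pixel gray levels by reflectivity (k >= 2)."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def optimal_exposures(cluster_means, t_ref, target=128.0):
    """Assuming a linear radiometric response, scale the reference exposure
    so each cluster's mean gray level lands near the target level."""
    return [t_ref * target / m for m in cluster_means]
```

Bright regions (high mean gray level) get proportionally shorter exposures, dark regions longer ones, which is the core of any multi-exposure HDR structured-light scheme.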

19 pages, 4858 KiB  
Article
A Novel Solution for Reconstructing More Details High Dynamic Range Image
by Kuo-Ching Hung, Sheng-Fuu Lin and Ching-Hung Lee
Appl. Sci. 2025, 15(11), 5819; https://doi.org/10.3390/app15115819 - 22 May 2025
Viewed by 414
Abstract
Although scholars have made significant progress in obtaining high dynamic range (HDR) images by using deep learning algorithms to fuse multiple exposure images, there are still challenges, such as image artifacts and distortion in high-brightness and low-brightness saturated areas. To this end, we propose a more-detailed high dynamic range (MDHDR) method. Firstly, our proposed method uses super-resolution to enhance the details of the long-exposure and short-exposure images and fuses each into the medium-exposure image. Then, the HDR image is reconstructed by fusing the original and enhanced medium-exposure images. Extensive experimental results show that the proposed method can reconstruct HDR images with better image clarity in quantitative tests and improve HDR-VDP2 by 4.8% in qualitative tests. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

14 pages, 7865 KiB  
Article
Time-Interval-Guided Event Representation for Scene Understanding
by Boxuan Wang, Wenjun Yang, Kunqi Wu, Rui Yang, Jiayue Xie and Huixiang Liu
Sensors 2025, 25(10), 3186; https://doi.org/10.3390/s25103186 - 19 May 2025
Viewed by 538
Abstract
The recovery of scenes under extreme lighting conditions is pivotal for effective image analysis and feature detection. Traditional cameras face challenges with low dynamic range and limited spectral response in such scenarios. In this paper, we advocate for the adoption of event cameras to reconstruct static scenes, particularly those in low illumination. We introduce a new method to elucidate the phenomenon where event cameras continue to generate events even in the absence of brightness changes, highlighting the crucial role played by noise in this process. Furthermore, we substantiate that events predominantly occur in pairs and establish a correlation between the time interval of event pairs and the relative light intensity of the scene. A key contribution of our work is the proposal of an innovative method to convert sparse event streams into dense intensity frames without dependence on any active light source or motion, achieving the static imaging of event cameras. This method expands the application of event cameras in static vision fields such as HDR imaging and leads to a practical application. The feasibility of our method was demonstrated through multiple experiments. Full article
(This article belongs to the Special Issue Computational Optical Sensing and Imaging)
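The interval-to-intensity mapping described above can be sketched as follows. The inverse-proportional model and the normalisation are assumptions for illustration, not the paper's exact fitted relationship.

```python
def relative_intensity(pair_intervals, eps=1e-9):
    """Illustrative mapping: treat the brightness of each pixel as inversely
    proportional to the mean time interval between its paired events
    (shorter intervals -> brighter pixel), normalised to [0, 1]."""
    raw = [1.0 / (dt + eps) for dt in pair_intervals]
    peak = max(raw)
    return [r / peak for r in raw]
```

Converting each pixel's event-pair interval this way yields a dense intensity frame from a sparse event stream without any active light source or motion.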

42 pages, 2122 KiB  
Review
A Review Toward Deep Learning for High Dynamic Range Reconstruction
by Gabriel de Lima Martins, Josue Lopez-Cabrejos, Julio Martins, Quefren Leher, Gustavo de Souza Ferreti, Lucas Hildelbrano Costa Carvalho, Felipe Bezerra Lima, Thuanne Paixão and Ana Beatriz Alvarez
Appl. Sci. 2025, 15(10), 5339; https://doi.org/10.3390/app15105339 - 10 May 2025
Viewed by 903
Abstract
High Dynamic Range (HDR) image reconstruction has gained prominence in a wide range of fields; not only is it implemented in computer vision, but industries such as entertainment and medicine also benefit considerably from this technology due to its ability to capture and reproduce scenes with a greater variety of luminosities, extending conventional levels of perception. This article presents a review of the state of the art of HDR reconstruction methods based on deep learning, ranging from classical approaches that are still expressive and relevant to more recent proposals involving the advent of new architectures. The fundamental role of high-quality datasets and specific metrics in evaluating the performance of HDR algorithms is also discussed, as well as emphasizing the challenges inherent in capturing multiple exposures and dealing with artifacts. Finally, emerging trends and promising directions for overcoming current limitations and expanding the potential of HDR reconstruction in real-world scenarios are highlighted. Full article
(This article belongs to the Special Issue Novel Research on Image and Video Processing Technology)
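Deep-learning HDR papers surveyed in reviews like this one conventionally report quality as PSNR between μ-law tonemapped images (PSNR-μ). A minimal sketch, using the conventional μ = 5000 on linear values normalised to [0, 1]:

```python
import math

MU = 5000.0  # mu-law compression constant conventionally used in HDR papers

def mu_tonemap(x):
    """mu-law range compression of a linear HDR value in [0, 1]."""
    return math.log(1.0 + MU * x) / math.log(1.0 + MU)

def psnr_mu(pred, ref):
    """PSNR between mu-law tonemapped images (flat lists of values in [0, 1])."""
    mse = sum((mu_tonemap(p) - mu_tonemap(r)) ** 2
              for p, r in zip(pred, ref)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(1.0 / mse)
```

Tonemapping before computing PSNR weights errors the way a viewer of the displayed image would, rather than in linear radiance.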

20 pages, 49431 KiB  
Article
Generative Adversarial Network-Based Lightweight High-Dynamic-Range Image Reconstruction Model
by Gustavo de Souza Ferreti, Thuanne Paixão and Ana Beatriz Alvarez
Appl. Sci. 2025, 15(9), 4801; https://doi.org/10.3390/app15094801 - 25 Apr 2025
Cited by 1 | Viewed by 666
Abstract
The generation of High-Dynamic-Range (HDR) images is essential for capturing details at various brightness levels, but current reconstruction methods, using deep learning techniques, often require significant computational resources, limiting their applicability on devices with moderate resources. In this context, this paper presents a lightweight architecture for reconstructing HDR images from three Low-Dynamic-Range inputs. The proposed model is based on Generative Adversarial Networks and replaces traditional convolutions with depthwise separable convolutions, reducing the number of parameters while maintaining high visual quality and minimizing luminance artifacts. The evaluation of the proposal is conducted through quantitative, qualitative, and computational cost analyses based on the number of parameters and FLOPs. Regarding the qualitative analysis, a comparison between the models was performed using samples that present reconstruction challenges. The proposed model achieves a PSNR-μ of 43.51 dB and SSIM-μ of 0.9917, achieving competitive quality metrics comparable to HDR-GAN while reducing the computational cost by 6× in FLOPs and 7× in the number of parameters, using approximately half the GPU memory consumption, demonstrating an effective balance between visual fidelity and efficiency. Full article
(This article belongs to the Special Issue Advances in Image Recognition and Processing Technologies)
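The parameter savings from replacing standard convolutions with depthwise separable ones can be checked by counting weights. At 3×3 with 64 channels in and out, the reduction is about 7.9×, consistent in scale with the ~7× parameter reduction reported above (the exact figure depends on the full architecture):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    """Depthwise separable: one k x k filter per input channel,
    plus a 1 x 1 pointwise projection to c_out channels."""
    return k * k * c_in + c_in * c_out
```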

20 pages, 2194 KiB  
Article
An Efficient and Low-Complexity Transformer-Based Deep Learning Framework for High-Dynamic-Range Image Reconstruction
by Josue Lopez-Cabrejos, Thuanne Paixão, Ana Beatriz Alvarez and Diodomiro Baldomero Luque
Sensors 2025, 25(5), 1497; https://doi.org/10.3390/s25051497 - 28 Feb 2025
Cited by 2 | Viewed by 1529
Abstract
High-dynamic-range (HDR) image reconstruction involves creating an HDR image from multiple low-dynamic-range images as input, providing a computational solution to enhance image quality. This task presents several challenges, such as frame misalignment, overexposure, and motion, which are addressed using deep learning algorithms. In this context, various architectures with different approaches exist, such as convolutional neural networks, diffusion networks, generative adversarial networks, and Transformer-based architectures, with the latter offering the best quality but at a high computational cost. This paper proposes an HDR reconstruction architecture using a Transformer-based approach to achieve results competitive with the state of the art while reducing computational cost. The number of self-attention blocks was reduced for feature refinement. To prevent quality degradation, a Convolutional Block Attention Module was added, enhancing image features by using the central frame as a reference. The proposed architecture was evaluated on two datasets, achieving the best results on Tel’s dataset in terms of quality metrics. The computational cost indicated that the architecture was significantly more efficient than other Transformer-based approaches for reconstruction. The results of this research suggest that low-complexity Transformer-based architectures have great potential, with applications extending beyond HDR reconstruction to other domains. Full article
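Why trimming self-attention blocks cuts cost: each block's attention is quadratic in token count, so the total scales linearly with the number of blocks. A rough multiply-accumulate estimate (ignoring the linear projections and MLPs, so an undercount of real FLOPs):

```python
def attention_macs(tokens, dim, blocks):
    """Rough multiply-accumulate count for `blocks` self-attention layers:
    Q @ K^T and attention @ V each cost tokens^2 * dim."""
    return blocks * 2 * tokens * tokens * dim
```

Halving the block count halves this term directly, which is the lever the architecture above pulls before compensating with the CBAM refinement.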

19 pages, 11243 KiB  
Article
A Simple Polarization-Based Fringe Projection Profilometry Method for Three-Dimensional Reconstruction of High-Dynamic-Range Surfaces
by Xiang Sun, Zhenjun Luo, Shizhao Wang, Jianhua Wang, Yunpeng Zhang and Dandan Zou
Photonics 2025, 12(1), 27; https://doi.org/10.3390/photonics12010027 - 31 Dec 2024
Viewed by 1139
Abstract
Three-dimensional (3D) reconstruction of high-dynamic-range (HDR) surfaces plays an important role in the fields of computer vision and image processing. Traditional 3D measurement methods often face the risk of information loss when dealing with surfaces that have HDR characteristics. To address this issue, this paper proposes a simple 3D reconstruction method, which combines the features of non-overexposed regions in polarized and unpolarized images to improve the reconstruction quality of HDR surface objects. The optimum fringe regions are extracted from images with different polarization angles, and the non-overexposed regions in normally captured unpolarized images typically contain complete fringe information and are less affected by specular highlights. The optimal fringe information from different polarized image groups is gradually used to replace the incorrect fringe information in the unpolarized image, resulting in a complete set of fringe data. Experimental results show that the proposed method requires only 24~36 images and simple phase fusion to achieve successful 3D reconstruction. It can effectively mitigate the negative impact of overexposed regions on absolute phase calculation and 3D reconstruction when reconstructing objects with strongly reflective surfaces. Full article
(This article belongs to the Special Issue New Perspectives in Optical Design)
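The fringe analysis underlying FPP methods like this one is standard four-step phase shifting: with shifts of 0, π/2, π, and 3π/2, the wrapped phase is φ = atan2(I₄ − I₂, I₁ − I₃). A minimal sketch with synthetic fringes (the paper's contribution, polarized/unpolarized fringe fusion, is not reproduced here):

```python
import math

def fringe(a, b, phi, delta):
    """Synthetic fringe intensity I = A + B*cos(phi + delta)."""
    return a + b * math.cos(phi + delta)

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase-shifting formula:
    I4 - I2 = 2B*sin(phi), I1 - I3 = 2B*cos(phi)."""
    return math.atan2(i4 - i2, i1 - i3)
```

Overexposed pixels clip the sinusoid and corrupt this arctangent, which is exactly why the method above swaps in fringe data from polarized captures in saturated regions.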

23 pages, 106560 KiB  
Article
RLUNet: Overexposure-Content-Recovery-Based Single HDR Image Reconstruction with the Imaging Pipeline Principle
by Yiru Zheng, Wei Wang, Xiao Wang and Xin Yuan
Appl. Sci. 2024, 14(23), 11289; https://doi.org/10.3390/app142311289 - 3 Dec 2024
Viewed by 1672
Abstract
With the popularity of High Dynamic Range (HDR) display technology, consumer demand for HDR images is increasing. Since HDR cameras are expensive, reconstructing High Dynamic Range (HDR) images from traditional Low Dynamic Range (LDR) images is crucial. However, existing HDR image reconstruction algorithms often fail to recover fine details and do not adequately address the fundamental principles of the LDR imaging pipeline. To overcome these limitations, the Reversing Lossy UNet (RLUNet) has been proposed, aiming to effectively balance dynamic range expansion and recover overexposed areas through a deeper understanding of LDR image pipeline principles. The RLUNet model comprises the Reverse Lossy Network, which is designed according to the LDR–HDR framework and focuses on reconstructing HDR images by recovering overexposed regions, dequantizing, linearizing the mapping, and suppressing compression artifacts. This framework, grounded in the principles of the LDR imaging pipeline, is designed to reverse the operations involved in lossy image operations. Furthermore, the integration of the Texture Filling Module (TFM) block with the Recovery of Overexposed Regions (ROR) module in the RLUNet model enhances the visual performance and detail texture of the overexposed areas in the reconstructed HDR image. The experiments demonstrate that the proposed RLUNet model outperforms various state-of-the-art methods on different testsets. Full article
(This article belongs to the Special Issue Applications in Computer Vision and Image Processing)
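The "reverse the lossy LDR pipeline" idea can be illustrated by its two simplest inverse steps, dequantisation and linearisation. A fixed gamma of 2.2 stands in here for the camera's true tone curve, which RLUNet learns rather than assumes:

```python
def linearize(code, gamma=2.2):
    """Invert the display-referred encoding of an 8-bit LDR code value:
    dequantise to [0, 1], then undo an assumed gamma-2.2 tone mapping."""
    return (code / 255.0) ** gamma
```

The remaining, harder inversions (recovering clipped highlights and suppressing compression artifacts) are what the network's ROR and TFM modules handle.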

15 pages, 7931 KiB  
Article
Color Models in the Process of 3D Digitization of an Artwork for Presentation in a VR Environment of an Art Gallery
by Irena Drofova and Milan Adamek
Electronics 2024, 13(22), 4431; https://doi.org/10.3390/electronics13224431 - 12 Nov 2024
Viewed by 1741
Abstract
This study deals with the color reproduction of a work of art to digitize it into a 3D realistic model. The experiment aims to digitize a work of art for application in a virtual reality environment concerning faithful color reproduction. Photogrammetry and scanning with a LiDAR sensor are used to compare the methods and work with colors during the reconstruction of the 3D model. An innovative tablet with a camera and LiDAR sensor is used for both methods. At the same time, current findings from the field of color vision and colorimetry are applied to 3D reconstruction. The experiment focuses on working with the RGB and L*a*b* color models and, simultaneously, on the sRGB, CIE XYZ, and Rec.2020(HDR) color spaces for transforming colors into a virtual environment. For this purpose, the color is defined in the Hex Color Value format. This experiment is a starting point for further research on color reproduction in the digital environment. This study represents a partial contribution to the much-discussed area of forgeries of works of art in current trends in forensics and forgery. Full article
(This article belongs to the Section Electronic Multimedia)

15 pages, 7263 KiB  
Article
Reconstructing High Dynamic Range Image from a Single Low Dynamic Range Image Using Histogram Learning
by Huei-Yung Lin, Yi-Rung Lin, Wen-Chieh Lin and Chin-Chen Chang
Appl. Sci. 2024, 14(21), 9847; https://doi.org/10.3390/app14219847 - 28 Oct 2024
Viewed by 1783
Abstract
High dynamic range imaging is an important field in computer vision. Compared with general low dynamic range (LDR) images, high dynamic range (HDR) images represent a larger luminance range, making the images closer to the real scene. In this paper, we propose an approach for HDR image reconstruction from a single LDR image based on histogram learning. First, the dynamic range of an LDR image is expanded to an extended dynamic range (EDR) image. Then, histogram learning is established to predict the intensity distribution of an HDR image of the EDR image. Next, we use histogram matching to reallocate pixel intensities. The final HDR image is generated through regional adjustment using reinforcement learning. By decomposing low-frequency and high-frequency information, the proposed network can predict the lost high-frequency details while expanding the intensity ranges. We conduct the experiments based on HDR-Real and HDR-EYE datasets. The quantitative and qualitative evaluations have demonstrated the effectiveness of the proposed approach compared to the previous methods. Full article
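The pixel-reallocation step mentioned above is classic histogram matching via CDFs. A minimal sketch in which a fixed target histogram stands in for the one the paper's network predicts:

```python
def histogram_match(values, target_values):
    """Map each value to the same quantile of the target distribution,
    the classic CDF-matching rule (illustrative, non-learned version)."""
    src_sorted = sorted(values)
    tgt_sorted = sorted(target_values)
    n = len(src_sorted)

    def match(v):
        # rank of v in the source distribution -> same quantile of the target
        rank = sum(1 for s in src_sorted if s <= v)
        q = (rank - 1) / max(n - 1, 1)
        idx = round(q * (len(tgt_sorted) - 1))
        return tgt_sorted[idx]

    return [match(v) for v in values]
```

Each output value is drawn from the target distribution at the source value's quantile, so the matched image inherits the target's intensity statistics.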

17 pages, 42688 KiB  
Article
The Multi-Detectors System of the PANDORA Facility: Focus on the Full-Field Pin-Hole CCD System for X-ray Imaging and Spectroscopy
by David Mascali, Eugenia Naselli, Sandor Biri, Giorgio Finocchiaro, Alessio Galatà, Giorgio Sebastiano Mauro, Maria Mazzaglia, Bharat Mishra, Santi Passarello, Angelo Pidatella, Richard Rácz, Domenico Santonocito and Giuseppe Torrisi
Condens. Matter 2024, 9(2), 28; https://doi.org/10.3390/condmat9020028 - 20 Jun 2024
Cited by 2 | Viewed by 1735
Abstract
PANDORA (Plasmas for Astrophysics Nuclear Decays Observation and Radiation for Archaeometry) is an INFN project aiming at measuring, for the first time, possible variations in in-plasma β-decay lifetimes in isotopes of astrophysical interest as a function of thermodynamical conditions of the in-laboratory controlled plasma environment. Theoretical predictions indicate that the ionization state can dramatically modify the β-decay lifetime (even of several orders of magnitude). The PANDORA experimental approach consists of confining a plasma able to mimic specific stellar-like conditions and measuring the nuclear decay lifetime as a function of plasma parameters. The β-decay events will be measured by detecting the γ-ray emitted by the daughter nuclei, using an array of 12 HPGe detectors placed around the magnetic trap. In this frame, plasma parameters have to be continuously monitored online. For this purpose, an innovative, non-invasive multi-diagnostic system, including high-resolution time- and space-resolved X-ray analysis, was developed, which will work synergically with the γ-rays detection system. In this contribution, we will describe this multi-diagnostics system with a focus on spatially resolved high-resolution X-ray spectroscopy. The latter is performed by a pin-hole X-ray camera setup operating in the 0.5–20 keV energy domain. The achieved spatial and energy resolutions are 450 µm and 230 eV at 8.1 keV, respectively. An analysis algorithm was specifically developed to obtain SPhC (Single Photon-Counted) images and local plasma emission spectrum in High-Dynamic-Range (HDR) mode. Thus, investigations of image regions where the emissivity can change by even orders of magnitude are now possible. Post-processing analysis is also able to remove readout noise, which is often observable and dominant at very low exposure times (ms). 
Several measurement campaigns have already been performed in compact magnetic plasma traps, e.g., the ATOMKI ECRIS in Debrecen and the Flexible Plasma Trap at LNS; the main outcomes are presented briefly. The collected data allowed for a quantitative and absolute evaluation of local emissivity, elemental analysis, and the local evaluation of plasma density and temperature. This paper also discusses the new plasma emission models, implemented in PIC (particle-in-cell) codes, which were developed to obtain powerful 3D maps of the X-rays emitted by the magnetically confined plasma. These data also support the evaluation procedure of spatially resolved plasma parameters from the experimental spectra as well as, in the near future, the development of appropriate algorithms for the tomographic reconstruction of plasma parameters in the X-ray domain. The described setups also include the most recent upgrade, consisting of the use of fast X-ray shutters with special triggering systems that will be routinely implemented to perform both space- and time-resolved spectroscopy during transient, stable, and turbulent plasma regimes (in the ms timescale). Full article
(This article belongs to the Special Issue High Precision X-ray Measurements 2023)

33 pages, 6045 KiB  
Article
A Display-Adaptive Pipeline for Dynamic Range Expansion of Standard Dynamic Range Video Content
by Gonzalo Luzardo, Asli Kumcu, Jan Aelterman, Hiep Luong, Daniel Ochoa and Wilfried Philips
Appl. Sci. 2024, 14(10), 4081; https://doi.org/10.3390/app14104081 - 11 May 2024
Cited by 1 | Viewed by 1726
Abstract
Recent advancements in high dynamic range (HDR) display technology have significantly enhanced the contrast ratios and peak brightness of modern displays. In the coming years, it is expected that HDR televisions capable of delivering significantly higher brightness and, therefore, contrast levels than today’s models will become increasingly accessible and affordable to consumers. While HDR technology has gained prominence over the past few years, low dynamic range (LDR) content is still consumed due to a substantial volume of historical multimedia content being recorded and preserved in LDR. Although the amount of HDR content will continue to increase as HDR becomes more prevalent, a large portion of multimedia content currently remains in LDR. In addition, it is worth noting that although the HDR standard supports multimedia content with luminance levels up to 10,000 cd/m2 (a standard measure of brightness), most HDR content is typically limited to a maximum brightness of around 1000 cd/m2. This limitation aligns with the current capabilities of consumer HDR TVs but is a factor approximately five times brighter than current LDR TVs. To accurately present LDR content on a HDR display, it is processed through a dynamic range expansion process known as inverse tone mapping (iTM). This LDR to HDR conversion faces many challenges, including the inducement of noise artifacts, false contours, loss of details, desaturated colors, and temporal inconsistencies. This paper introduces complete inverse tone mapping, artifact suppression, and a highlight enhancement pipeline for video sequences designed to address these challenges. Our LDR-to-HDR technique is capable of adapting to the peak brightness of different displays, creating HDR video sequences with a peak luminance of up to 6000 cd/m2. 
Furthermore, this paper presents the results of comprehensive objective and subjective experiments to evaluate the effectiveness of the proposed pipeline, focusing on two primary aspects: real-time operation capability and the quality of the HDR video output. Our findings indicate that our pipeline enables real-time processing of Full HD (FHD) video (1920 × 1080 pixels), even on hardware that has not been optimized for this task. Furthermore, we found that when applied to existing HDR content, typically capped at a brightness of 1000 cd/m2, our pipeline notably enhances its perceived quality when displayed on a screen that can reach higher peak luminances. Full article
(This article belongs to the Special Issue Intelligent Systems: Methods and Implementation)
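A toy global expansion curve illustrates the display-adaptive core of inverse tone mapping: normalised LDR luminance is raised to a power and scaled to the target display's peak. The paper's actual pipeline is far richer, adding artifact suppression, highlight enhancement, and temporal consistency; the gamma of 2.4 here is an assumed placeholder:

```python
def expand_luminance(ldr_lum, peak_nits, gamma=2.4):
    """Toy inverse tone mapping: map a normalised LDR luminance in [0, 1]
    onto an HDR display with the given peak brightness in cd/m^2."""
    return peak_nits * (ldr_lum ** gamma)
```

With `peak_nits=6000.0` this reaches the 6000 cd/m² peak quoted above; a power curve keeps midtones close to the original while stretching highlights toward the display's capability.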

17 pages, 9162 KiB  
Article
High Dynamic Range Image Reconstruction from Saturated Images of Metallic Objects
by Shoji Tominaga and Takahiko Horiuchi
J. Imaging 2024, 10(4), 92; https://doi.org/10.3390/jimaging10040092 - 15 Apr 2024
Cited by 1 | Viewed by 2114
Abstract
This study considers a method for reconstructing a high dynamic range (HDR) original image from a single saturated low dynamic range (LDR) image of metallic objects. A deep neural network approach was adopted for the direct mapping of an 8-bit LDR image to HDR. An HDR image database was first constructed using a large number of various metallic objects with different shapes. Each captured HDR image was clipped to create a set of 8-bit LDR images. All pairs of HDR and LDR images were used to train and test the network. Subsequently, a convolutional neural network (CNN) was designed in the form of a deep U-Net-like architecture. The network consisted of an encoder, a decoder, and a skip connection to maintain high image resolution. The CNN algorithm was constructed using the learning functions in MATLAB. The entire network consisted of 32 layers and 85,900 learnable parameters. The performance of the proposed method was examined in experiments using a test image set. The proposed method was also compared with other methods and confirmed to be significantly superior in terms of reconstruction accuracy, histogram fitting, and psychological evaluation. Full article
(This article belongs to the Special Issue Imaging Technologies for Understanding Material Appearance)
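The LDR training pairs described above come from clipping HDR captures. A minimal sketch of simulating one 8-bit exposure from linear radiance; the exposure scaling and rounding details are assumptions, not the paper's exact procedure:

```python
def clip_to_ldr(hdr, exposure):
    """Simulate one 8-bit LDR capture from linear HDR radiance:
    scale by the exposure, clip saturated values, quantise to 0..255."""
    return [min(255, max(0, round(h * exposure * 255.0))) for h in hdr]
```

Values that saturate to 255 are exactly the information the network must hallucinate back, which is why metallic specular highlights make this task hard.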

23 pages, 11789 KiB  
Article
ERS-HDRI: Event-Based Remote Sensing HDR Imaging
by Xiaopeng Li, Shuaibo Cheng, Zhaoyuan Zeng, Chen Zhao and Cien Fan
Remote Sens. 2024, 16(3), 437; https://doi.org/10.3390/rs16030437 - 23 Jan 2024
Cited by 3 | Viewed by 2676
Abstract
High dynamic range imaging (HDRI) is an essential task in remote sensing, enhancing low dynamic range (LDR) remote sensing images and benefiting downstream tasks, such as object detection and image segmentation. However, conventional frame-based HDRI methods may encounter challenges in real-world scenarios due to the limited information inherent in a single image captured by conventional cameras. In this paper, an event-based remote sensing HDR imaging framework is proposed to address this problem, denoted as ERS-HDRI, which reconstructs the remote sensing HDR image from a single-exposure LDR image and its concurrent event streams. The proposed ERS-HDRI leverages a coarse-to-fine framework, incorporating the event-based dynamic range enhancement (E-DRE) network and the gradient-enhanced HDR reconstruction (G-HDRR) network. Specifically, to efficiently achieve dynamic range fusion from different domains, the E-DRE network is designed to extract the dynamic range features from LDR frames and events and perform intra- and cross-attention operations to adaptively fuse multi-modal data. A denoise network and a dense feature fusion network are then employed for the generation of the coarse, clean HDR image. Then, the G-HDRR network, with its gradient enhancement module and multiscale fusion module, performs structure enforcement on the coarse HDR image and generates a fine informative HDR image. In addition, this work introduces a specialized hybrid imaging system and a novel, real-world event-based remote sensing HDRI dataset that contains aligned remote sensing LDR images, remote sensing HDR images, and concurrent event streams for evaluation. Comprehensive experiments have demonstrated the effectiveness of the proposed method. Specifically, it improves state-of-the-art PSNR by about 30% and the SSIM score by about 9% on the real-world dataset. Full article
(This article belongs to the Special Issue Computer Vision and Image Processing in Remote Sensing)

19 pages, 7287 KiB  
Article
Preliminary Characterization of an Active CMOS Pad Detector for Tracking and Dosimetry in HDR Brachytherapy
by Thi Ngoc Hang Bui, Matthew Large, Joel Poder, Joseph Bucci, Edoardo Bianco, Raffaele Aaron Giampaolo, Angelo Rivetti, Manuel Da Rocha Rolo, Zeljko Pastuovic, Thomas Corradino, Lucio Pancheri and Marco Petasecca
Sensors 2024, 24(2), 692; https://doi.org/10.3390/s24020692 - 22 Jan 2024
Viewed by 1887
Abstract
We assessed the accuracy of a prototype radiation detector with a built-in CMOS amplifier for use in dosimetry for high-dose-rate brachytherapy. The detectors were fabricated on two substrates of epitaxial high-resistivity silicon. The radiation detection performance of the prototypes was tested by ion-beam-induced charge (IBIC) microscopy using a 5.5 MeV alpha-particle microbeam. We also carried out HDR Ir-192 radiation source tracking at different depths and measured the angular dose dependence in a water-equivalent phantom. The detectors show sensitivities spanning from (5.8 ± 0.021) × 10⁻⁸ to (3.6 ± 0.14) × 10⁻⁸ nC Gy⁻¹ mCi⁻¹ mm⁻². The depth variation of the dose agrees within 5% with that calculated by TG-43. Higher discrepancies are recorded at 2 mm and 7 mm depths due to the scattering of secondary particles and the perturbation of the radiation field induced in the ceramic/golden package. Dwell positions and dwell times are reconstructed within ±1 mm and 20 ms, respectively. The prototype detectors provide unprecedented sensitivity thanks to their monolithic amplification stage. Future investigation of this technology will include the optimisation of the packaging technique. Full article
(This article belongs to the Special Issue Integrated Circuits and CMOS Sensors)
