Article

PLPGR-Net: Photon-Level Physically Guided Restoration Network for Underwater Laser Range-Gated Image

by Qing Tian 1, Longfei Hu 1, Zheng Zhang 1 and Qiang Yang 2,3,*

1 School of Artificial Intelligence and Computer Science, North China University of Technology, Beijing 100144, China
2 Western China (Mianyang) Transformation Center for Advanced Technological Achievements (MianYang Science & Technology City Institute of Advanced Technology), Mianyang 621000, China
3 Department of Engineering Physics, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(12), 2217; https://doi.org/10.3390/jmse13122217
Submission received: 19 October 2025 / Revised: 12 November 2025 / Accepted: 19 November 2025 / Published: 21 November 2025
(This article belongs to the Special Issue Advancements in Deep-Sea Equipment and Technology, 3rd Edition)

Abstract

Underwater laser range-gated imaging (ULRGI) effectively suppresses backscatter from water bodies through a time-gated photon capture mechanism, significantly extending underwater detection ranges compared to conventional imaging techniques. However, as imaging distance increases, rapid laser power attenuation causes localized pixel loss in captured images. To address ULRGI's reliance on multi-frame stacking, which suffers from poor real-time performance and artifact generation, this paper proposes the Photon-Level Physically Guided Underwater Laser-Gated Image Restoration Network (PLPGR-Net). To overcome image degradation caused by water scattering and address the challenge of strong coupling between target echo signals and scattering noise, we designed a three-branch architecture driven by photon-level physical priors, comprising a scattering background suppression module, a sparse photon perception module, and an enhanced U-Net high-frequency information recovery module. By establishing a multidimensional physical-constraint loss system, we guide image reconstruction across three dimensions, namely pixels, features, and physical laws, ensuring that the restored results align with underwater photon distribution characteristics. This approach significantly enhances operational efficiency in critical applications such as underwater infrastructure inspection and cultural relic detection. Comparative experiments on our proprietary dataset against state-of-the-art denoising and underwater image restoration algorithms validate the method's outstanding performance and its deep integration of physical interpretability with the generalization capability of deep learning.

1. Introduction

Underwater optical imaging serves as a critical pillar for marine science research and maritime safety assurance, while also representing a major challenge that has long perplexed both academia and industry. Its applications span underwater infrastructure inspection [1], underwater robot navigation and obstacle avoidance, underwater search and rescue operations, and military reconnaissance. However, traditional underwater optical imaging methods face challenges such as limited detection range, severe backscatter noise from aquatic media, and imaging difficulties in deep-sea darkness. These challenges become more pronounced in turbid waters, significantly limiting the practical application effectiveness of conventional underwater optical imaging.
ULRGI is currently one of the primary applied technologies in deep-sea exploration. To overcome the severe absorption and scattering effects of the underwater environment on light, this technology precisely synchronizes nanosecond-level pulsed laser illumination with the nanosecond-level gating time of an ICMOS (intensified CMOS) camera. This synchronization effectively masks the backscatter noise [2,3] generated by the underwater environment and suspended particles in target echo signals, enabling imaging in deep-water dark environments. As imaging distance increases, the propagation attenuation of pulsed lasers in seawater accelerates. Additionally, photons undergo continuous forward and backward scattering by suspended particles during both emission and reflection, leading to energy dispersion. This reduces the proportion of effective signal received by detectors, causing dual degradation in image quality: images exhibit non-uniform sub-pixel speckle, blurring, and low contrast. Enhancing and restoring underwater laser range-gated images therefore holds promise for extending the hardware detection limit. Mounted on remotely operated vehicles (ROVs), this technology enables long-range acquisition of information on submerged communication cables, cultural relics, and biological remains.
Confronted with extreme conditions like light-deprived seawater and high suspended particle concentrations, the presence of suspended particles, plankton, and microorganisms further degrades underwater visibility and detection range, posing severe challenges to imaging systems. Traditional underwater optical cameras are limited to imaging within 2 m [4]. Even with intense illumination, suspended particles reflect excessive light back to the lens, creating a “light haze”—akin to shining a flashlight in a blizzard. This intense scattered light exacerbates diffusion, causing sensor overexposure and preventing breakthroughs in long-range imaging. ULRGI achieves imaging at 3–5 times the attenuation length [5,6,7]. However, due to its imaging mechanism, ULRGI images suffer from pulse broadening, gating noise, and partial water scattering noise. Returned target information remains heavily contaminated by multiple noise sources, significantly increasing the difficulty of image restoration in this field.
Traditional underwater image restoration techniques, as shown in Figure 1, primarily follow two approaches to enhance image quality [8]: one is driven by physical models based on the optical properties of water [9], and the other relies on classical image processing algorithms. Physics-model-driven enhancement methods require precise simulation and modeling of laser transmission through water, constructing degradation models based on water scattering for recovery, or employing point spread function (PSF) estimation for blind deconvolution. For instance, Hou et al. [10] incorporated a sparsity prior on the illumination channel into an extended underwater imaging model, while Voss and Chapin [11] employed specialized instruments to measure the point spread function in the ocean. However, the process from measurement to modeling faces challenges such as high equipment costs and complex parameter estimation. Consequently, the resulting underwater imaging models often lack sufficient precision, leading to time-consuming blind restoration algorithms that cannot run in real time in practical applications. Peng and Cosman [12] proposed an underwater scene depth estimation technique based on image blurriness and light absorption, applying the physical principles of underwater light propagation to the image formation model (IFM). However, the inherently complex and diverse characteristics of the underwater environment pose a core obstacle to acquiring accurate and universal prior knowledge; when confronted with unfamiliar environmental conditions, the domain-specific nature of such priors often leads to suboptimal restoration performance. In recent years, as research into the challenging task of underwater image enhancement has deepened, several specialized underwater optical image restoration algorithms have emerged [13,14,15,16]. Li et al. [17] proposed an iterative color correction and vision-based enhancement framework to address issues such as color distortion in underwater images. Cao et al. [18] proposed an adaptive enhancement method for small underwater targets based on dense residual denoising. Li et al. [19] introduced CMFNet, an end-to-end ultra-lightweight underwater image enhancement network. Zhang et al. [20] developed a fusion method combining the frequency and spatial domains for underwater image restoration. Yang et al. [21] presented a structure-texture decomposition approach for enhancing underwater image details and edges. Among these, classical image processing algorithms fail to eliminate scattering noise at the physical level, often amplifying scattering noise and distorting details. Single-branch end-to-end restoration networks often misinterpret scattered noise as target detail, yielding results that violate the laws of underwater light propagation. Furthermore, the integration of physical models with network architectures remains largely “loosely coupled”, with physical priors not deeply incorporated into branch design; this results in abrupt performance degradation in specific scenarios.
Unlike traditional underwater optical cameras, laser-gated imaging employs pulsed beams whose intensity approximately follows a Gaussian distribution. This uneven distribution during underwater transmission often leads to overexposure or underexposure of local target features. Furthermore, the temporal broadening of the laser pulse during transmission makes it challenging to precisely match the gating signal with the target echo signal, and correcting the color bias of laser-gated images is more difficult than for conventional underwater images. Numerous researchers have focused on enhancing underwater laser range-gated images as a key research direction; for instance, Wu et al. [22] proposed a deep learning-based convolutional sparse coding denoising network to improve resolution, while Liu et al. [23] proposed a U-Net architecture incorporating residual connections. However, most methods fail to effectively account for the noise photon distribution characteristics inherent in underwater gated imaging, making it difficult to recover pixel loss caused by scattering noise and camera dark noise. Therefore, to more precisely adapt to the core characteristics of underwater laser-gated images, existing algorithms must be redesigned to ultimately achieve efficient and reliable restoration of such images, as shown in Figure 2.
In this paper, we propose PLPGR-Net, a photon-level physics-guided underwater laser-gated image restoration network. Its main contributions are summarized as follows:
  • PLPGR-Net achieves effective separation of echo photons from scattering noise by integrating a water scattering background suppression module with a sparse photon perception module to model photon propagation and distribution characteristics. The network utilizes an enhanced U-Net decoder to recover high-frequency image information and introduces a multi-scale discriminator with self-attention to learn true photon distribution patterns, thereby constraining image generation. Ultimately, the algorithm overcomes the limitations of traditional multi-frame fusion methods that rely on temporal continuity, achieving high-quality reconstruction from single-frame underwater laser-gated images.
  • This paper proposes a multidimensional physical joint-constraint loss framework that translates underwater optical imaging principles into a differentiable optimization objective. The framework integrates gradient consistency, frequency-domain amplitude spectrum constraints, and sparse photon-region binary cross-entropy loss to synergistically guide image reconstruction across three dimensions: pixel-level, feature-level, and photon-level.
  • We have developed a highly integrated underwater laser-gated imaging system for underwater detection and constructed a paired dataset of “single-frame gated images and multi-frame enhanced images,” which holds significant importance for advancing the research and development of underwater equipment. Experiments demonstrate that our algorithm significantly outperforms existing noise reduction and underwater image restoration techniques in repairing laser-gated underwater images. This breakthrough is expected to propel underwater detection technology from passive environmental adaptation toward active sensing and reconstruction capabilities.

2. Related Works

In recent years, laser range-gated imaging technology has demonstrated unique advantages in long-range underwater detection (as shown in Figure 3). However, its practical application still faces numerous technical challenges. Existing research has primarily focused on system integration and design at the macro level [24], while issues such as target signal attenuation under nanosecond-level timing control, image edge blurring caused by water scattering noise, and reduced contrast remain insufficiently explored. Particularly when applying laser range-gated imaging to underwater scenarios, the strong coupling between target echo signals and water scattering noise fundamentally limits the ability to extract meaningful information from background noise. In image enhancement, both traditional physics-based algorithms and data-driven deep learning methods struggle to address the unique challenges inherent in underwater laser range-gated images, such as low signal-to-noise ratio and compressed dynamic range. This study achieves adaptive integration of spatial composite features through a gated feature fusion mechanism. By combining deformable convolutions with channel attention mechanisms within a multi-scale restoration module, it successfully decouples target echo signals from water scattering noise. This approach systematically resolves edge blurring and scattering noise issues in underwater laser-gated images, opening a novel pathway to overcome the current physical limitations of underwater detection range.

2.1. Characteristics of Underwater Laser Range-Gated Imaging

Underwater laser range-gated imaging technology, as a highly promising solution to the severe backscatter noise and low target resolution of traditional underwater optical imaging, has garnered significant attention from researchers worldwide. Following theoretical and practical validation, it has now advanced to the engineering application stage. Leveraging its unique advantages, future development will focus on multi-physics joint compensation algorithms and miniaturized portable devices. Research by Fournier et al. [25] demonstrates that in underwater imaging applications, laser range-gated systems can extend imaging range by 3–5 times compared to traditional illumination imaging, and the ultra-narrow gate width design exhibits superior scattering suppression. However, both target reflection signals and water scattering noise in laser-gated images fall under the category of “valid detection light signals within the gating period”: their photon flux is converted into high grayscale values in image pixels, lacking chromaticity information.

2.2. Bottlenecks in the Reconstruction of Underwater Laser Range-Gated Images

Water scattering noise and background light interference in laser range-gated images predominantly exhibit a mixture of Poisson noise [26] and Gaussian noise, characterized by zero mean and temporal incoherence. Multi-frame stacking and averaging [27] reduces the noise variance in proportion to the number of frames by averaging pixel grayscale values across frames. However, this method not only extends image acquisition time but also introduces pixel misalignment between frames when targets or underwater robotic platforms experience minor current disturbances; in such cases, stacking averaging may even generate ghosting or artifacts. Johnstone et al. [28] also demonstrated that post-processing sparse photon images can provide superior grayscale information compared to simple frame averaging.
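To make the variance argument concrete, the following minimal Python sketch averages N synthetic, temporally incoherent noisy frames of a constant scene and checks the residual noise against the theoretical σ/√N; the frame size, signal level, and noise level are illustrative assumptions, not parameters from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.full((128, 128), 0.6)   # idealized noise-free gated frame (assumed value)
sigma = 0.2                         # assumed per-frame noise standard deviation

# Averaging N zero-mean, temporally incoherent noisy frames reduces the noise
# variance by a factor of N, i.e., the standard deviation by sqrt(N).
for n in (1, 16, 64):
    frames = target + rng.normal(0.0, sigma, size=(n, 128, 128))
    residual = frames.mean(axis=0) - target
    print(f"N={n:3d}  residual std = {residual.std():.4f}  (theory {sigma / np.sqrt(n):.4f})")
```

The cost of this gain is the longer acquisition time and the static-scene assumption discussed above, which is exactly what single-frame restoration seeks to avoid.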
Due to the high-gray-level characteristics of both target echo signals and noise signals in underwater laser range-gated images, coupled with the three-channel gray-level input feature, CNN architectures such as DnCNN [29] and IRCNN [30] struggle to effectively distinguish signals from noise at the pixel level. They may fail to activate corresponding feature channels due to the lack of prominent edges. As shown in Figure 4, deep convolutional neural networks like ResNet [31] and DenseNet [32] struggle to learn detailed features in high-grayscale regions. High-frequency noise components may be directly propagated to deeper layers via residual paths. As illustrated in Figure 5, when target and noise grayscale values are similar, residual learning may treat noise as a “valid signal” and amplify it, resulting in significantly high grayscale noise retention in denoised images. Meanwhile, backbone networks pretrained on underwater color datasets rely on the statistical distribution of color images. Mismatched distributions between convolution kernel weights and spectral dimensions cause feature extraction bias, resulting in poor transfer learning performance.

3. Methodology

3.1. Overview

This study focuses on resolving the strong coupling between target photons and scattered photons in underwater laser range-gated images. It eliminates the need for complex frame alignment and temporal processing, thereby avoiding the motion artifacts that arise when multiple frames are stacked during target or payload platform movement, which makes the approach suitable for dynamic scenarios and real-time applications. Inspired by implementations of composite multi-branch recovery tasks within a general framework [33,34], and addressing the poor system adaptability and unstable imaging of laser-gated imaging in complex underwater environments, we propose an innovative single-frame image restoration algorithm for underwater laser range-gated imaging: PLPGR-Net. This achieves efficient, high-quality imaging, further enabling underwater laser range-gated imaging systems with fixed hardware configurations to overcome physical limitations and image at greater distances. As shown in Figure 6, the proposed network operates as follows: first, an integrated underwater laser range-gated imaging system acquires time-synchronized degraded images and their corresponding high-quality reference images, on which PLPGR-Net undergoes end-to-end optimization training. Once trained and deployed, it directly processes single-frame degraded images, breaking away from the traditional multi-frame stacking paradigm and requiring only a single forward pass to output restored images. This provides highly robust optical imaging support for complex scenarios such as underwater military reconnaissance and deep-sea exploration.
This underwater laser range-gated single-frame image restoration network achieves end-to-end collaborative processing from degraded single-frame images to clear restored images through a four-stage architecture featuring shared underlying feature encoding, multi-task module branch decoupling, adaptive weight fusion, and multi-dimensional physical joint loss constraints. The processing flow begins with pre-processing the input image $I_{input} \in \mathbb{R}^{3 \times 1024 \times 1024}$, obtaining $I_{norm} = (I_{input} - 0.5)/0.5 \in [-1,1]^{3 \times 1024 \times 1024}$ through normalization, which enters the shared feature encoder (SFE) for multi-level downsampling. This extracts initial features $F_0 \in \mathbb{R}^{64 \times 1024 \times 1024}$ capturing fundamental spatial information. After progressively extracting multi-scale bottleneck features, the network concurrently activates three specialized processing modules for underwater laser range-gated images. The outputs from these three modules are fed into the critical adaptive branch fusion (ABF) to yield $I_{fused}$. During this process, the Scattering Background Suppression Module (SBSM) and Sparse Photon Sensing Module (SPSM) employ physics-guided methods to suppress strong water scattering while locating and enhancing valid target information in low-photon signals. Concurrently, the High-Frequency Information Reconstruction Module (HFIRM) ensures structural integrity and visual quality of the final output. Finally, residual connections yield the preliminary reconstruction result $I_{output} = \mathrm{Tanh}(I_{fused} + I_{norm})$, with the Multi-Dimensional Joint Physical Loss Module (MDJPLM) providing feedback on output accuracy. Detailed descriptions of ABF, SBSM, SPSM, HFIRM, and MDJPLM are provided below.

3.2. Adaptive Branch Fusioner

Different regions within underwater laser range-gated images exhibit distinctly different noise-signal coupling characteristics. Target areas require enhancement of sparse photon signals, background areas necessitate suppression of scattering noise, while edge regions demand accurate discrimination and preservation of detailed features. Traditional fixed-weight fusion methods cannot adapt to these spatially varying processing requirements. The ABF module achieves true spatially adaptive fusion by dynamically learning the preference weights for each branch output at every pixel location [35], fundamentally overcoming the limitations of fixed-weight fusion strategies.
This module takes the outputs $I_{bg\_suppressed}, I_{photon\_enhanced}, I_{detail} \in \mathbb{R}^{3 \times 1024 \times 1024}$ from the three branches as input. It concatenates these outputs along the channel dimension to form a 9-channel composite feature representation $I_{concat} \in \mathbb{R}^{9 \times 1024 \times 1024}$. During weight generation, it uses a $1 \times 1$ convolution to learn preliminary fusion weights $W_{raw} \in \mathbb{R}^{3 \times H \times W}$, capturing cross-channel dependencies. It then applies a spatial softmax: for each spatial location $(i, j)$, $W_{c,i,j}$ is computed so that the three weights at each position sum to 1, forming a probabilistic weight distribution $W \in \mathbb{R}^{3 \times H \times W}$. Finally, element-wise multiplication and weighted fusion are performed to achieve spatially adaptive fine-grained fusion. The core objective of the entire optimization process is to minimize the supervised loss term, ensuring the fused output approximates the true target as closely as possible at the pixel level.
$I_{concat} = \mathrm{Concat}(I_{bg\_suppressed},\, I_{photon\_enhanced},\, I_{detail})$ (1)
$W_{raw} = \mathrm{Conv}_{1\times1}(I_{concat}) \in \mathbb{R}^{3 \times H \times W}$ (2)
$W_{c,i,j} = \dfrac{\exp(W_{raw}[c,i,j])}{\sum_{c'=1}^{3} \exp(W_{raw}[c',i,j])}$ (3)
$I_{fused} = \sum_{c=1}^{3} W_c \odot I_{branch}^{(c)}$ (4)
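A minimal PyTorch sketch of Equations (1)–(4) follows; the class and argument names are our own, and the encoder and branch networks are assumed to exist elsewhere.

```python
import torch
import torch.nn as nn

class AdaptiveBranchFusion(nn.Module):
    """Pixel-wise soft selection among three branch outputs (Eqs. (1)-(4))."""
    def __init__(self, branches=3, channels=3):
        super().__init__()
        # 1x1 conv maps the 9-channel concatenation to one raw weight per branch
        self.weight_conv = nn.Conv2d(branches * channels, branches, kernel_size=1)

    def forward(self, bg_suppressed, photon_enhanced, detail):
        concat = torch.cat([bg_suppressed, photon_enhanced, detail], dim=1)  # (B,9,H,W)
        w = torch.softmax(self.weight_conv(concat), dim=1)  # (B,3,H,W), sums to 1 per pixel
        # Weighted per-pixel sum of the three branch images
        return (w[:, 0:1] * bg_suppressed
                + w[:, 1:2] * photon_enhanced
                + w[:, 2:3] * detail)

x = torch.rand(1, 3, 256, 256)
print(AdaptiveBranchFusion()(x, x, x).shape)  # torch.Size([1, 3, 256, 256])
```

Because the softmax is computed independently at every pixel, each spatial location can favor a different branch, which is what distinguishes this fusion from fixed global weighting.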

3.3. Scattering Background Suppression Module

In underwater laser range-gated images, backscatter noise from the underwater environment manifests as global background interference. This noise fundamentally arises from “the random propagation of photons reflected by impurities in the water”, severely impairing the visibility of target signals. Traditional denoising methods often struggle to distinguish target photons from scattered noise photons, leading to either excessive suppression that causes target loss or insufficient suppression that leaves residual noise. The SBSM module leverages the Beer–Lambert law [36], a fundamental principle of underwater optical imaging. It employs deep learning to learn global illumination distribution patterns, modeling the physical characteristics of underwater images—namely, the global nature of backscatter noise and the regional unevenness of scattering intensity. This process generates spatially variable suppression masks, enabling precise suppression of background scattering noise while preserving regions containing potential target signals.
The bottleneck features $F_{bottleneck} \in \mathbb{R}^{512 \times 128 \times 128}$ from the shared encoder serve as input to this module. Feature enhancement is performed via residual dense blocks (RDB) to obtain $F_{enhanced}$, employing dense connections and residual learning to capture complex feature dependencies. A $3 \times 3$ convolution then maps the 512-channel features to 3 channels, and a Sigmoid activation generates the suppression mask $M_{bg} \in \mathbb{R}^{3 \times 128 \times 128}$ with values in $[0,1]$. An $8\times$ upsampling of $M_{bg}$ yields $M_{bg}^{up} \in \mathbb{R}^{3 \times 1024 \times 1024}$, restoring the low-resolution mask to the input image dimensions. Finally, element-wise multiplication with the original image performs background suppression to obtain $I_{bg\_suppressed}$, achieving spatially adaptive background darkening.
$F_{enhanced} = F_{bottleneck} + \alpha \cdot \mathrm{Conv}_{1\times1}(\mathrm{Concat}[\cdot])$ (5)
In Equation (5), $\alpha$ is a learnable weight parameter that controls the contribution of the $\mathrm{Conv}_{1\times1}(\mathrm{Concat}[\cdot])$ term relative to $F_{bottleneck}$, enabling the model to adaptively adjust the intensity of feature enhancement.
$M_{bg} = \mathrm{Sigmoid}(\mathrm{Conv}_{3\times3}(F_{enhanced}))$ (6)
$M_{bg}^{up} = \mathrm{Interpolate}(M_{bg},\, \mathrm{scale}=8,\, \mathrm{mode}=\mathrm{bilinear})$ (7)
$I_{bg\_suppressed} = I_{input} \odot M_{bg}^{up}$ (8)
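The following sketch illustrates Equations (6)–(8) in PyTorch; a plain two-layer residual block stands in for the residual dense block described above, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScatterBackgroundSuppression(nn.Module):
    """Generates a 0-1 suppression mask from bottleneck features (Eqs. (6)-(8))."""
    def __init__(self, in_ch=512):
        super().__init__()
        # Stand-in for the residual dense block (RDB) described in the text
        self.enhance = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(in_ch, in_ch, 3, padding=1),
        )
        self.mask_conv = nn.Conv2d(in_ch, 3, kernel_size=3, padding=1)

    def forward(self, f_bottleneck, image):
        f_enh = f_bottleneck + self.enhance(f_bottleneck)   # residual enhancement
        mask = torch.sigmoid(self.mask_conv(f_enh))         # (B,3,128,128), values in [0,1]
        mask_up = F.interpolate(mask, scale_factor=8, mode="bilinear",
                                align_corners=False)        # back to 1024x1024
        return image * mask_up                              # spatially adaptive darkening

feat = torch.rand(1, 512, 128, 128)
img = torch.rand(1, 3, 1024, 1024)
print(ScatterBackgroundSuppression()(feat, img).shape)  # torch.Size([1, 3, 1024, 1024])
```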

3.4. Sparse Photon Sensing Module

In underwater laser-gated images, target signals often manifest as sparsely distributed photon points. These photon signals are easily overwhelmed by intense backscatter noise, and their photon-counting nature leads to signal discontinuity. Building upon the concept of time-of-flight-correlated single-photon counting to reconstruct sparse photon regions [37], the SPSM module addresses the challenge of sparse signal detection. It employs dilated convolutions to expand the receptive field, capturing widely distributed yet sparse photon events. Combined with a channel attention mechanism that focuses on important feature channels, it ultimately outputs a photon presence probability map, providing precise guidance for subsequent signal enhancement.
First, dilated convolution feature extraction is performed on the input bottleneck feature $F_{bottleneck} \in \mathbb{R}^{512 \times 128 \times 128}$. A $3 \times 3$ convolution with a dilation factor of 2 yields $F_{dilated} \in \mathbb{R}^{64 \times 128 \times 128}$, expanding the receptive field from $3 \times 3$ to $5 \times 5$ while maintaining resolution. This effectively captures sparse yet potentially widely distributed photon signals. The $F_{dilated}$ feature is enhanced via channel attention to produce $F_{attended}$: the channel attention layer first obtains channel-level statistics through global average pooling (GAP), then learns channel importance weights through two fully connected layers, and finally multiplies the original feature to achieve channel recalibration. A $1 \times 1$ convolution maps the 64-channel feature to 3 channels, and Sigmoid activation generates the photon presence probability map $P_{photon} \in \mathbb{R}^{3 \times 128 \times 128}$. Finally, $8\times$ upsampling and signal enhancement are applied to obtain $I_{photon\_enhanced}$. This module employs a probability-map-guided additive enhancement strategy to strengthen potential target regions.
$F_{dilated} = \mathrm{LeakyReLU}(\mathrm{Conv}_{3\times3}^{\,dilation=2,\,padding=2}(F_{bottleneck}))$ (9)
$F_{attended} = F_{dilated} \odot \mathrm{Sigmoid}(FC_2(\mathrm{ReLU}(FC_1(\mathrm{GAP}(F_{dilated})))))$ (10)
$P_{photon} = \mathrm{Sigmoid}(\mathrm{Conv}_{1\times1}(F_{attended}))$ (11)
$P_{photon}^{up} = \mathrm{Interpolate}(P_{photon},\, \mathrm{scale}=8)$ (12)
$I_{photon\_enhanced} = I_{input} \odot (1 + P_{photon}^{up})$ (13)
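A compact PyTorch rendering of Equations (9)–(13) follows; the squeeze-and-excitation style attention and channel sizes mirror the description above, while class and variable names are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparsePhotonSensing(nn.Module):
    """Dilated conv + channel attention -> photon probability map (Eqs. (9)-(13))."""
    def __init__(self, in_ch=512, mid_ch=64, reduction=4):
        super().__init__()
        self.dilated = nn.Conv2d(in_ch, mid_ch, 3, padding=2, dilation=2)
        # Squeeze-and-excitation style channel attention (two FC layers)
        self.fc1 = nn.Linear(mid_ch, mid_ch // reduction)
        self.fc2 = nn.Linear(mid_ch // reduction, mid_ch)
        self.prob_conv = nn.Conv2d(mid_ch, 3, kernel_size=1)

    def forward(self, f_bottleneck, image):
        f = F.leaky_relu(self.dilated(f_bottleneck), 0.2)    # (B,64,128,128)
        s = F.adaptive_avg_pool2d(f, 1).flatten(1)           # global average pooling
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))     # channel importance weights
        f = f * w[:, :, None, None]                          # channel recalibration
        p = torch.sigmoid(self.prob_conv(f))                 # photon presence probability map
        p_up = F.interpolate(p, scale_factor=8, mode="bilinear", align_corners=False)
        return image * (1.0 + p_up)                          # probability-guided enhancement

feat = torch.rand(1, 512, 128, 128)
img = torch.rand(1, 3, 1024, 1024)
print(SparsePhotonSensing()(feat, img).shape)  # torch.Size([1, 3, 1024, 1024])
```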

3.5. High-Frequency Information Reconstruction Module

To further enhance image texture information after scattered noise has been suppressed and faint signals amplified, the High-Frequency Information Reconstruction Module (HFIRM) was designed to specifically restore target contours and edge details. Based on an enhanced U-Net decoder architecture, this module leverages the multi-level features extracted during the encoding phase and employs residual dense blocks to strengthen feature representation, ensuring reconstructed images retain structural integrity close to that of the original.
This module takes the bottleneck feature $F_{bottleneck} \in \mathbb{R}^{512 \times 128 \times 128}$ as input, upsamples it to $256 \times 256$ via the first transposed convolution, and concatenates it along channels with the encoder's second-layer output to form a 512-channel fused feature map. It then upscales the resolution to $512 \times 512$ through another transposed convolution and concatenates with the encoder's first-layer output to form a 256-channel feature. A third upsampling restores the original $1024 \times 1024$ size while compressing the channel count to 64; concatenation with the initial encoded features then forms a 128-channel feature, which a subsequent $1 \times 1$ convolution compresses to 64 channels. Inside the RDB, five $3 \times 3$ convolution layers progressively concatenate the preceding features, and a $1 \times 1$ convolution fuses them back to 64 channels. Finally, a $3 \times 3$ convolution maps to 3 channels with a hyperbolic tangent activation, outputting the high-frequency detail reconstruction result.
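The decoder path described above can be sketched as follows; the residual dense block after the $1 \times 1$ compression is omitted for brevity, the skip-feature channel counts follow the text, and the module is resolution-agnostic (the demo uses small tensors to keep memory low).

```python
import torch
import torch.nn as nn

class HighFreqReconstruction(nn.Module):
    """U-Net style decoder with skip connections, following the stage sizes in the text."""
    def __init__(self):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1)  # 2x upsample
        self.up2 = nn.ConvTranspose2d(512, 128, 4, stride=2, padding=1)
        self.up3 = nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1)
        self.squeeze = nn.Conv2d(128, 64, kernel_size=1)   # 128 -> 64 channels
        self.out_conv = nn.Conv2d(64, 3, kernel_size=3, padding=1)

    def forward(self, bottleneck, enc2, enc1, enc0):
        x = torch.cat([self.up1(bottleneck), enc2], dim=1)  # 256 + 256 = 512 channels
        x = torch.cat([self.up2(x), enc1], dim=1)           # 128 + 128 = 256 channels
        x = torch.cat([self.up3(x), enc0], dim=1)           # 64 + 64 = 128 channels
        x = self.squeeze(x)                                  # RDB would refine x here
        return torch.tanh(self.out_conv(x))                  # high-frequency detail output

# Small demo shapes (the real network uses 128/256/512/1024 spatial sizes)
b = torch.rand(1, 512, 16, 16)
e2, e1, e0 = torch.rand(1, 256, 32, 32), torch.rand(1, 128, 64, 64), torch.rand(1, 64, 128, 128)
print(HighFreqReconstruction()(b, e2, e1, e0).shape)  # torch.Size([1, 3, 128, 128])
```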

3.6. Multi-Scale PatchGAN Discriminator with Self-Attention Enhancement

Assessing the generation quality of underwater laser-gated images presents unique challenges, as generated images may exhibit inconsistencies in global photon distribution patterns. Traditional discriminators struggle to capture artifacts that are “locally realistic but globally inconsistent”. This module employs a multi-scale architecture combined with self-attention mechanisms to evaluate image authenticity across multiple spatial scales. Simultaneously, it leverages self-attention to capture long-range dependencies, identifying overall consistency in photon distribution. This dual-discrimination capability—combining “multi-scale local discrimination” with “long-distance dependency capture”—indirectly constrains the generator’s photon distribution, ensuring visually coherent reconstruction results.
As shown in Figure 7, the discriminator takes generated or real images $I \in \mathbb{R}^{3 \times 1024 \times 1024}$ as input. Multi-scale processing is performed at three scales: $I$, $I_1$, and $I_2$. The discriminators at each scale share the same architecture but have independent parameters. Taking the original scale as an example, features are sequentially extracted through strided downsampling: $F_1 \in \mathbb{R}^{64 \times 512 \times 512}$, $F_2 \in \mathbb{R}^{128 \times 256 \times 256}$, and $F_3 \in \mathbb{R}^{256 \times 128 \times 128}$. The self-attention mechanism is applied to the 256-channel features to compute $F_{att}$, followed by the extraction of deep features $F_4 \in \mathbb{R}^{512 \times 64 \times 64}$. Finally, patch-level discrimination is performed, outputting the authenticity probability $D_{output} \in \mathbb{R}^{1 \times 63 \times 63}$ for each image patch. The outputs from all three scales collectively form the final discrimination result.
$I_1 = \mathrm{Interpolate}(I,\, \mathrm{scale}=0.5)$ (14)
$I_2 = \mathrm{Interpolate}(I,\, \mathrm{scale}=0.25)$ (15)
$F_1 = \mathrm{LeakyReLU}(\mathrm{Conv}_{4\times4}^{\,stride=2}(I))$ (16)
$F_2 = \mathrm{LeakyReLU}(\mathrm{InstanceNorm}(\mathrm{Conv}_{4\times4}^{\,stride=2}(F_1)))$ (17)
$F_3 = \mathrm{LeakyReLU}(\mathrm{InstanceNorm}(\mathrm{Conv}_{4\times4}^{\,stride=2}(F_2)))$ (18)
$\mathrm{Attention} = \mathrm{Softmax}\!\left(\dfrac{Q^{\top} K}{\sqrt{d_k}}\right)$ (19)
$F_{att} = \gamma \cdot \mathrm{Reshape}(V \cdot \mathrm{Attention}) + F_3$ (20)
$Q, K = \mathrm{Conv}_{1\times1}(F_3) \in \mathbb{R}^{32 \times 16384}$ (21)
$V = \mathrm{Conv}_{1\times1}(F_3) \in \mathbb{R}^{256 \times 16384}$ (22)
$F_4 = \mathrm{LeakyReLU}(\mathrm{InstanceNorm}(\mathrm{Conv}_{4\times4}^{\,stride=2}(F_{att})))$ (23)
$D_{output} = \mathrm{Conv}_{4\times4}^{\,padding=1}(F_4)$ (24)
$D = [D_0, D_1, D_2]$ (25)
In Equation (19), $Q^{\top}$ denotes the transpose of the query matrix, and $d_k$ is the dimension of both the query and key vectors; the square root scales the dot product to prevent gradient instability. In Equation (20), $\gamma$ is a learnable weight parameter within the self-attention module, controlling the extent to which the attention-enhanced features contribute to the original features.
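A minimal PyTorch implementation of the self-attention block in Equations (19)–(22) is sketched below; initializing γ to zero so the block starts as an identity is a common convention we assume here, and the demo uses a reduced spatial size because the attention map grows quadratically with H × W.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Non-local self-attention over spatial positions (Eqs. (19)-(22))."""
    def __init__(self, channels=256, key_channels=32):
        super().__init__()
        self.q = nn.Conv2d(channels, key_channels, 1)
        self.k = nn.Conv2d(channels, key_channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learnable residual weight (Eq. (20))
        self.scale = key_channels ** 0.5

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2)                    # (B, 32, H*W)
        k = self.k(x).flatten(2)                    # (B, 32, H*W)
        v = self.v(x).flatten(2)                    # (B, 256, H*W)
        # Softmax over the first spatial axis so each output position aggregates
        # a convex combination of value vectors, matching V . Attention in Eq. (20)
        attn = torch.softmax(q.transpose(1, 2) @ k / self.scale, dim=1)  # (B, HW, HW)
        out = (v @ attn).view(b, c, h, w)
        return self.gamma * out + x                 # residual connection

x = torch.rand(1, 256, 32, 32)   # small spatial size keeps the HW x HW map tractable
print(SelfAttention2d()(x).shape)  # torch.Size([1, 256, 32, 32])
```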

3.7. Multi-Dimensional Joint Physical Loss Module

Since a single loss function struggles to comprehensively capture all aspects of image quality, this module combines six loss functions with distinct physical interpretations, as shown in Figure 8. These encompass pixel-level accuracy, adversarial realism, feature-level precision, edge sharpness, frequency-domain texture consistency, and photon-level specificity. Together, they form a comprehensive constrained optimization objective for underwater laser-gated images, ensuring generated images achieve both visual quality and physical plausibility.
The system takes the forward-propagation outputs as inputs to compute six loss components. It employs pixel-level loss, adversarial loss, and perceptual loss to enhance feature consistency between generated and real images. Gradient loss and frequency-domain loss are introduced to sharpen edges, with texture consistency constrained in the frequency domain via the fast Fourier transform. Finally, we designed a sparse photon-assisted loss: leveraging the brightness distribution of the clear image as weak supervision, we constrain the consistency between the photon probability map and the true signal region via binary cross-entropy (BCE). This highlights target signal photons, guiding the model to focus on recovering sparse photon signals, and further enables learning of photon statistical patterns, thereby optimizing the recovery quality of photon-sensitive regions.
$\mathcal{L}_{1} = \lVert I_{fake} - I_{real} \rVert_1$ (26)
$\mathcal{L}_{adv} = -\mathbb{E}[\log(D(I_{fake}))]$ (27)
$\mathcal{L}_{perc} = \lVert \phi_l(I_{fake}) - \phi_l(I_{real}) \rVert_1$ (28)
$\mathcal{L}_{grad} = \lVert \nabla I_{fake} - \nabla I_{real} \rVert_1$ (29)
$\mathcal{L}_{fft} = \lVert\, |\mathrm{FFT}(I_{fake})| - |\mathrm{FFT}(I_{real})| \,\rVert_1$ (30)
$\mathcal{L}_{sparse} = \mathrm{BCE}(P_{photon},\, \mathrm{Binarize}(I_{real}))$ (31)
$\mathcal{L}_{total} = \lambda_{L1}\mathcal{L}_{1} + \lambda_{adv}\mathcal{L}_{adv} + \lambda_{perc}\mathcal{L}_{perc} + \lambda_{grad}\mathcal{L}_{grad} + \lambda_{fft}\mathcal{L}_{fft} + \lambda_{sparse}\mathcal{L}_{sparse}$ (32)
In Equation (27), $\mathbb{E}[\cdot]$ denotes the mathematical expectation and $D(\cdot)$ the output of the discriminator. In Equation (32), the respective weights are $\lambda_{L1} = 100$, $\lambda_{adv} = 1$, $\lambda_{perc} = 0.5$, $\lambda_{grad} = 10$, $\lambda_{fft} = 1$, and $\lambda_{sparse} = 5$.
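The sketch below assembles Equation (32) with the stated weights; the perceptual term is omitted (it would compare VGG features $\phi_l$ of the two images), the binarization threshold is an assumed value, and the discriminator output is assumed to already be a probability in (0, 1).

```python
import torch
import torch.nn.functional as F

def total_loss(fake, real, p_photon, d_fake, threshold=0.5):
    """Combines the loss terms of Eq. (32) with the stated weights (perceptual omitted)."""
    l1 = F.l1_loss(fake, real)                                  # Eq. (26)
    adv = -torch.log(d_fake + 1e-8).mean()                      # Eq. (27), generator side
    gy_f, gx_f = torch.gradient(fake, dim=(-2, -1))             # Eq. (29): image gradients
    gy_r, gx_r = torch.gradient(real, dim=(-2, -1))
    grad = F.l1_loss(gx_f, gx_r) + F.l1_loss(gy_f, gy_r)
    fft = F.l1_loss(torch.fft.fft2(fake).abs(),                 # Eq. (30): amplitude spectra
                    torch.fft.fft2(real).abs())
    # Eq. (31): bright pixels of the clear reference act as weak supervision
    target_mask = (real > threshold).float()
    p_up = F.interpolate(p_photon, size=real.shape[-2:], mode="bilinear",
                         align_corners=False)
    sparse = F.binary_cross_entropy(p_up.clamp(1e-6, 1 - 1e-6), target_mask)
    return 100 * l1 + 1 * adv + 10 * grad + 1 * fft + 5 * sparse

fake, real = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
p, d = torch.rand(1, 3, 32, 32), torch.rand(1, 1, 15, 15)
print(total_loss(fake, real, p, d).item())
```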

4. Experimental Results

4.1. Dataset Description

This section introduces the underwater laser-gated image dataset used for training and for evaluating algorithm effectiveness. We are releasing for the first time a dataset of reconstructed underwater laser-gated slices, containing multiple “underwater real-world target scenarios” such as marine life and divers, as shown in Figure 9. To construct this dataset, we designed an integrated underwater nanosecond laser imaging system; Figure 10 shows its three-dimensional structural model. The core components include a 450 nm ultra-high-repetition-rate semiconductor laser with a maximum average power of approximately 1.5 W and a miniaturized nanosecond-level ultrafast single-photon gated camera featuring a maximum shutter repetition rate of 500 kHz, a minimum shutter width of 3 ns, and a frame rate of up to 98 fps. The laser has a field of view of 6.8° horizontally and 4.9° vertically, ensuring sufficient coverage for practical aquatic applications while maintaining effective beam focusing during emission. To guarantee data source stability and validity, we constructed an indoor water pipeline measuring 16 m in length and 20 cm in diameter as an experimental testing platform. Over a four-month period, the underwater laser-gated imaging system captured 1120 single-frame gated images alongside corresponding clear reference images of marine-life targets in various forms and combinations.
To obtain ultra-low-noise, high-signal-to-noise-ratio “true” images as benchmarks for supervised learning, we employed a static staring method in fixed-delay mode, with an exposure time of 10 ms, a gate width of 20 ns, and a laser repetition rate of 100 kHz. By actively synchronizing the laser pulse with the camera gate, we effectively capture the echo signal window reflected from the target. Building upon this foundation, we dynamically adjusted image stacking from 20 to 100 frames based on target reflectivity, achieving an optimal balance between preserving sufficient detail and maximizing noise suppression, and thereby generating a reliable reference image for each single-frame underwater laser-gated image. To simulate the diverse scattering noise of real underwater environments and enhance model generalization, we established a controlled experimental setup: 1 mL of pure milk was quantitatively added to the experimental water pipe at a time as a scattering medium, generating a series of “noisy” single-frame images with varying noise intensities and degradation levels. Each noisy single-frame image was strictly paired with a reference image acquired at the same medium concentration.

4.2. Implementation Details

During network training, the Adam optimizer was employed to update the parameters of both the generator and the discriminator, with β1 = 0.5 and β2 = 0.999, accelerating early gradient descent while meeting convergence requirements. The initial learning rate was set to 2 × 10⁻⁴ and gradually decreased in later stages to prevent loss oscillation and further improve the model's accuracy in separating echo signals from scattering noise. Additionally, to increase training data diversity, image degradation levels were progressively raised by adding milk to tap water, improving generalization to real underwater scenes.
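These optimizer settings translate directly into PyTorch as follows; the networks are placeholders, and the step-decay schedule is an assumption, since the text states only that the learning rate is gradually decreased.

```python
import torch

# Placeholders standing in for PLPGR-Net and the multi-scale discriminator
G, D = torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)

# Adam with beta1 = 0.5, beta2 = 0.999 and initial learning rate 2e-4, as in the text
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

# Gradual late-stage decay; the specific step size and factor are assumptions
sched_g = torch.optim.lr_scheduler.StepLR(opt_g, step_size=50, gamma=0.5)
sched_d = torch.optim.lr_scheduler.StepLR(opt_d, step_size=50, gamma=0.5)
```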

4.3. Quantitative Evaluation

To establish a multidimensional evaluation framework, we comprehensively quantified the restoration performance and practical deployment value of underwater laser-gated image restoration across five dimensions: objective error, structural fidelity, detail preservation, subjective perception, and model efficiency. Six key evaluation metrics from image restoration and engineering practice were selected: mean absolute error (MAE), a pixel-based metric robust to outliers and well suited to the brightness inhomogeneity caused by scattering noise; peak signal-to-noise ratio (PSNR), derived from pixel error, which intuitively measures overall restoration accuracy; the Structural Similarity Index (SSIM) [38], which emphasizes perceived structural similarity and specifically quantifies improvements in the structural blurring caused by target–noise coupling; and LPIPS [39], which quantifies human-perceived differences in restoration quality and matches the subjective visual bias caused by detail loss due to photon sparsity. Additionally, to accurately reflect the inherent computational overhead of different models, we report total model parameters (Params) and cumulative floating-point operations (FLOPs) to evaluate model compactness and inference efficiency, ensuring deployment feasibility in practical scenarios such as underwater embedded devices. These metrics establish a precise alignment of degradation issues, evaluation dimensions, and engineering requirements, providing comprehensive validation from performance to practicality for the proposed method.
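For reference, MAE and PSNR reduce to a few lines of NumPy, shown below on synthetic data; SSIM and LPIPS would typically come from skimage.metrics.structural_similarity and the lpips package, respectively.

```python
import numpy as np

def mae(x, y):
    """Mean absolute error between two images."""
    return np.mean(np.abs(x - y))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images normalized to [0, data_range]."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((256, 256))                                   # synthetic reference
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)    # synthetic degraded image
print(f"MAE = {mae(noisy, ref):.4f}, PSNR = {psnr(noisy, ref):.2f} dB")
```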

4.4. Comparison with State-of-the-Art Methods

To comprehensively validate the performance advantages of PLPGR-Net in underwater laser-gated image restoration, this study selected advanced image restoration methods as baselines. These comprise seven algorithms across three major categories: a traditional enhancement algorithm (BM3D [40]), deep learning-based image denoising and restoration algorithms (DnCNN, USRNet [41], MPRNet [43], NAFNet [44], DDNM [45]), and an underwater image restoration algorithm (CWR [42]). By selecting seven representative algorithms from these three technical pathways, this study comprehensively validates PLPGR-Net's superior performance in the specific degradation scenarios of underwater laser-gated images. To verify the method's universality, all experiments employ an identical split in which the training set comprises 80% of the total dataset and the test set 10%. All neural networks were trained from scratch until convergence. BM3D is a non-data-driven method that requires no training; it was evaluated on the same test set to maintain consistent comparison conditions.
PLPGR-Net achieved an average PSNR of 36.27 dB in underwater laser-gated image tests across varying photon densities, as shown in Table 1. This represents a 44.62% improvement over the traditional BM3D algorithm (25.08 dB), a 24.55% improvement over CWR (29.12 dB), an underwater image restoration algorithm with a similar architecture, and a 42.07% improvement over the multi-scale restoration model MPRNet (25.53 dB). This advantage stems from PLPGR-Net's physically guided layers, which model the propagation behavior of single photons: the generated photon probability map accurately distinguishes target photons from scattering noise. PLPGR-Net also demonstrates significant advantages in structural integrity and detail preservation, achieving nearly 90% structural similarity to multi-frame stacked clear images. The results indicate that it restores finer details and higher-resolution images.
Secondly, PLPGR-Net also performs well in terms of model size. Compared to other high-performance models, it has 5.17 M parameters, significantly fewer than NAFNet (116.41 M) and DDNM (552.81 M), enabling efficient use of computational resources. Given the camera resolution of 1600 × 1088, the model's moderate FLOPs ensure rapid processing of large images in practical engineering applications: it restores a single 1600 × 1088 image within one second.
As shown in Figure 11, to fully demonstrate the model's generalization capability, we selected images with scattering noise ranging from weak to strong for processing. PLPGR-Net exhibits significantly superior visual plausibility and practicality compared to competing algorithms in underwater laser-gated image reconstruction, owing to its preservation of sparse photon signals. The spatial distribution of reconstructed photon signals closely matches the actual scene, with no significant loss or spurious generation. Among the deep learning restoration algorithms, a limitation of DnCNN is its failure to effectively filter scattered noise photons (manifesting as low-intensity photons), due to its lack of photon-specific design. While NAFNet preserves most photons, it suffers from target blurring in visual perception. Consequently, although these deep learning-based restoration techniques exhibit only modest visual bias, they primarily achieve noise reduction by blurring scattered noise photons, placing them at a significant disadvantage relative to PLPGR-Net in preserving high-frequency details.
In summary, PLPGR-Net leverages its physically guided and multi-branch adaptive fusion design to specifically address the core visual challenges in underwater laser-gated image reconstruction. It achieves precise reconstruction of sparse photon signals without generating artifacts. Its comprehensive advantages in both quantitative performance and qualitative visual effects, coupled with its stability in complex scenarios, demonstrate that it is the more reliable and practical reconstruction solution for this task.
As shown in Figure 12, heatmaps constructed from the original image, the clear reference image, and the restored image reveal that the heatmap of the image containing water scattering noise clearly exposes defects in the photon distribution of the underwater laser-gated image, whereas the heatmap of the true, clear image exhibits a highly ordered photon density distribution, establishing an ideal reference for photon distribution. In the noisy image, numerous randomly distributed yellow/red medium-to-high-brightness patches are interspersed within the blue low-photon-density background regions (corresponding to the water background). Target contours become blurred and fragmented by the intrusion of these noise patches, and some genuine photon signals become indistinguishable from noise patches, making it impossible to visually differentiate “valid photons” from “scattered noise”. Comparing the three images reveals that the restored image's photon distribution closely approximates the true image, with uneven background scattering noise significantly suppressed. The medium-to-high-brightness patches in the target area align closely with the true image in shape, position, and photon density gradient. This heatmap behavior, characterized by thorough background noise removal and precise target photon restoration, directly visualizes the restoration algorithm's core capability of reconstructing true photon distributions and offers a more intuitive understanding of how well the restoration matches real-world scenarios than purely numerical metrics.

4.5. Ablation Studies

We conducted systematic ablation experiments to validate the effectiveness of each component in PLPGR-Net. As shown in Table 2, the experimental setup included six progressive configurations: the baseline model employed a standard U-Net architecture trained with only the L1 loss. Adding the Scattering Background Suppression Module (SBSM) to the baseline improved PSNR from 26.53 dB to 31.46 dB and SSIM from 0.599 to 0.830, primarily enhancing noise suppression in background regions but still under-restoring target details. Further incorporating the Sparse Photon Sensing Module (SPSM) achieved a PSNR of 33.61 dB and an SSIM of 0.867, significantly enhancing the recovery of sparse target signals. Integrating the background suppression, sparse photon sensing, and detail reconstruction branches further boosted PSNR to 34.41 dB and SSIM to 0.883, establishing preliminary multi-branch collaborative processing. Building upon this, the adaptive fusion layer and the discriminator's self-attention mechanism maintained high PSNR while raising SSIM to 0.892, markedly improving visual quality and better balancing global consistency with local realism. The final full model, PLPGR-Net, integrating all components and the multi-dimensional loss functions, achieved the best performance on the test set, improving on the baseline by 9.74 dB in PSNR and 0.293 in SSIM. It also exhibited outstanding background purification, sparse photon enhancement, and detail fidelity in visual evaluations, fully validating the synergy of the modules and the effectiveness of the overall architecture.

5. Discussion and Conclusions

This study successfully developed an integrated underwater laser-gated imaging platform. By employing a high-precision synchronized laser emitter and a nanosecond-level ultrafast single-photon gated camera, it achieved high-quality data acquisition of underwater targets under varying turbidity conditions. Based on this platform, we systematically constructed a large-scale underwater laser-gated image dataset encompassing blurred-sharp image pairs of various target types and water conditions in underwater scenes, providing a robust data foundation for deep learning model training. Leveraging this dataset, we proposed PLPGR-Net, a photon-level physically guided restoration network with sparse photon-adaptive enhancement. Through multi-branch collaborative processing and physics-constrained optimization, PLPGR-Net achieves high-fidelity restoration of single-frame laser-gated slices, effectively suppressing backscatter noise while preserving target detail integrity and yielding significantly enhanced visual quality in the restored images.
This research addresses several critical challenges in underwater laser-gated imaging: First, to tackle the strong coupling between target signals and water scattering noise—both composed of photons—we propose a photon coupling disentanglement mechanism that effectively separates signal photons from noise photons. Second, to address pixel loss and photon sparsity, a dedicated sparse photon enhancement branch was designed to ensure the complete preservation and amplification of sparse photon signals. Third, to overcome the reliance on static assumptions and motion artifacts in traditional multi-frame stacking methods, high-quality restoration of single-frame underwater laser-gated slice images was achieved, significantly enhancing the method’s practicality and robustness. The profound significance of this advancement lies in establishing a novel technical paradigm to push beyond the physical imaging range limitations of underwater laser-gated imaging systems. By deeply integrating the powerful representational capabilities of deep learning, the physical prior knowledge of underwater optics, and the unique characteristics of underwater laser-gated images, this approach not only propels the development of underwater vision technology but also provides reliable technical support for marine exploration, underwater engineering, military applications, and scientific research. It holds significant theoretical value and broad application prospects.

Author Contributions

Conceptualization, L.H. and Q.T.; methodology, L.H.; software, L.H.; validation, Q.T.; formal analysis, Q.T., Z.Z. and Q.Y.; investigation, L.H.; resources, Q.T. and Q.Y.; data curation, L.H.; writing—original draft preparation, L.H.; writing—review and editing, Q.T. and Z.Z.; visualization, L.H. and Z.Z.; supervision, Q.T. and Z.Z.; project administration, Q.T. and Q.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original findings presented in this study are included in the article. The dataset was collected using a self-developed underwater laser range-gated imaging system under specific conditions, such as water quality and laser divergence angle. Due to technical limitations of components like the camera and light source within this underwater imaging equipment, as well as experimental environment constraints, these data may not be directly applicable to other researchers lacking equivalent hardware or experimental facilities. For further inquiries, please contact the corresponding author.

Acknowledgments

The authors would like to express profound gratitude to the reviewers and editors who contributed their time and expertise to ensure the quality of this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Du, X.; Sun, Y.; Song, Y.; Dong, L.; Zhao, X. Revealing the potential of deep learning for detecting submarine pipelines in side-scan sonar images: An investigation of pre-training datasets. Remote Sens. 2023, 15, 4873. [Google Scholar] [CrossRef]
  2. Guo, Y.; Wang, Y.; Jin, F.; Li, G. Water tank experiments for laser backward-scattering properties of bubble cluster. In Proceedings of the 2016 IEEE/OES China Ocean Acoustics (COA), Harbin, China, 9–11 January 2016; pp. 1–4. [Google Scholar]
  3. Sun, F.; Wang, G.; Xu, J. An approach for underwater dim targets detection via power spectrum of back-scattering noise. In Proceedings of the 2011 International Conference on Multimedia Technology, Hangzhou, China, 26–28 July 2011; pp. 2922–2925. [Google Scholar]
  4. Bystrov, A.; Hoare, E.; Gashinova, M.; Cherniakov, M.; Tran, T.Y. Underwater optical imaging for automotive wading. Sensors 2018, 18, 4476. [Google Scholar] [CrossRef]
  5. Schmidt, J.; Peters, E.; Stephan, M.; Zielinski, O. Feasibility study on LED-based underwater gated-viewing for inspection tasks. In Proceedings of the OCEANS 2022, Hampton Roads, VA, USA, 17–20 October 2022; pp. 1–8. [Google Scholar]
  6. Lin, H.; Han, H.; Ma, L.; Ding, Z.; Jin, D.; Zhang, X. Range Intensity Profiles of Multi-Slice Integration for Pulsed Laser Range-Gated Imaging System. Photonics 2022, 9, 505. [Google Scholar] [CrossRef]
  7. Su, L.; Duan, C.; Sun, L.; Song, B.; Lei, P.; Chen, J.; He, J.; Zhou, Y.; Wang, X. Influence of optical polarization on underwater range-gated imaging for target recognition distance under different water quality conditions. Infrared Laser Eng. 2024, 53, 20230372-1. [Google Scholar]
  8. Yang, M.; Hu, J.; Li, C.; Rohde, G.; Du, Y.; Hu, K. An in-depth survey of underwater image enhancement and restoration. IEEE Access 2019, 7, 123638–123657. [Google Scholar] [CrossRef]
  9. Chandrasekar, A.; Sreenivas, M.; Biswas, S. Phish-net: Physics inspired system for high resolution underwater image enhancement. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 1–6 January 2024; pp. 1506–1516. [Google Scholar]
  10. Hou, G.; Li, N.; Zhuang, P.; Li, K.; Sun, H.; Li, C. Non-Uniform Illumination Underwater Image Restoration via Illumination Channel Sparsity Prior. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 799–814. [Google Scholar] [CrossRef]
  11. Voss, K.J.; Chapin, A.L. Measurement of the point spread function in the ocean. Appl. Opt. 1990, 29, 3638–3642. [Google Scholar] [CrossRef]
  12. Peng, Y.T.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE Trans Image Process 2017, 26, 1579–1594. [Google Scholar] [CrossRef] [PubMed]
  13. Zhang, K.; Zhang, Y.; Yuan, D.; Feng, X. Underwater Image Enhancement Using Dynamic Color Correction and Lightweight Attention-Embedded SRResNet. J. Mar. Sci. Eng. 2025, 13, 1546. [Google Scholar] [CrossRef]
  14. Lu, C.; Hong, L.; Fan, Y.; Shu, X. A Multi-Scale Contextual Fusion Residual Network for Underwater Image Enhancement. J. Mar. Sci. Eng. 2025, 13, 1531. [Google Scholar] [CrossRef]
  15. Hu, W.; Rong, Z.; Zhang, L.; Liu, Z.; Chu, Z.; Zhang, L.; Zhou, L.; Xu, J. Enhancing Underwater Images with LITM: A Dual-Domain Lightweight Transformer Framework. J. Mar. Sci. Eng. 2025, 13, 1403. [Google Scholar] [CrossRef]
  16. Zhao, S.; Ye, X.; Mei, X.; Guo, S.; Qi, H. TFCNet: A Hybrid Architecture for Multi-Task Restoration of Complex Underwater Optical Images. J. Mar. Sci. Eng. 2025, 13, 1090. [Google Scholar] [CrossRef]
  17. Li, Z.; Liu, W.; Wang, J.; Yang, Y. Progressive Color Correction and Vision-Inspired Adaptive Framework for Underwater Image Enhancement. J. Mar. Sci. Eng. 2025, 13, 1820. [Google Scholar] [CrossRef]
  18. Cao, D.; Lv, Z.; Ding, W.; Wei, J.; Ma, X.; Chao, F. Adaptive enhancement of underwater image based on dense residual denoising. In Proceedings of the 2025 IEEE 8th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 14–16 March 2025; Volume 8, pp. 1582–1587. [Google Scholar]
  19. Li, S.; Yang, X.; Han, H.; Meng, Q. CMFNET: An end-to-end ultra-lightweight underwater image enhancement network. In Proceedings of the 2024 IEEE 7th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 20–22 September 2024; Volume 7, pp. 1611–1615. [Google Scholar]
  20. Zhang, W.; Li, X.; Xu, S.; Li, X.; Yang, Y.; Xu, D.; Liu, T.; Hu, H. Underwater image restoration via adaptive color correction and contrast enhancement fusion. Remote Sens. 2023, 15, 4699. [Google Scholar] [CrossRef]
  21. Yang, J.; Wang, X.; Yue, H.; Fu, X.; Hou, C. Underwater image enhancement based on structure-texture decomposition. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1207–1211. [Google Scholar]
  22. Wu, S.; Liu, C.; Zhang, H.; Duan, E.; Jiang, N. Image Denoising of Fishing Nets Underwater with Range Selection Based on Convolutional Sparse Coding. In Proceedings of the 2023 4th International Symposium on Computer Engineering and Intelligent Communications (ISCEIC), Nanjing, China, 18–20 August 2023; pp. 77–80. [Google Scholar]
  23. Liu, P.; Chen, S.; He, W.; Wang, J.; Chen, L.; Tan, Y.; Luo, D.; Chen, W.; Jiao, G. Enhanced U-Net for Underwater Laser Range-Gated Image Restoration: Boosting Underwater Target Recognition. J. Mar. Sci. Eng. 2025, 13, 803. [Google Scholar] [CrossRef]
  24. Xu, S.W.; Zhang, S.C.; Tang, S.W. Design and implementation of the laser range-gating imaging synchronization control system. In Proceedings of the 2011 International Conference on Electronics and Optoelectronics, Dalian, China, 29–31 July 2011; Volume 2, p. V2-237. [Google Scholar]
  25. Fournier, G.R.; Bonnier, D.; Forand, J.L.; Pace, P.W. Range-gated underwater laser imaging system. Opt. Eng. 1993, 32, 2185–2190. [Google Scholar] [CrossRef]
  26. Porzio, A.; Ercolano, P.; Scarano, D.; Bruscino, C.; Peluso, M.; Salvoni, D.; Ejrnaes, M.; Zhang, C.; Li, H.; You, L.; et al. Reconstruction of photon number distributions from single photon events. In Proceedings of the 2025 25th Anniversary International Conference on Transparent Optical Networks (ICTON), Barcelona, Spain, 6–10 July 2025; pp. 1–3. [Google Scholar]
  27. Driggers, R.G.; Vollmerhausen, R.H.; Devitt, N.; Halford, C.; Barnard, K.J. Impact of speckle on laser range-gated shortwave infrared imaging system target identification performance. Opt. Eng. 2003, 42, 738–746. [Google Scholar] [CrossRef]
  28. Johnstone, G.E.; Herrnsdorf, J.; Dawson, M.D.; Strain, M.J. Efficient reconstruction of low photon count images from a high speed camera. Photonics 2023, 10, 10. [Google Scholar] [CrossRef]
  29. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef]
  30. Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3929–3938. [Google Scholar]
  31. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
32. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  33. Fan, C.M.; Liu, T.J.; Liu, K.H. Compound multi-branch feature fusion for image deraindrop. In Proceedings of the 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 8–11 October 2023; pp. 3399–3403. [Google Scholar]
  34. Wu, J.; Zhang, G.; Fan, Y. MambaRA-GAN: Underwater Image Enhancement via Mamba and Intra-Domain Reconstruction Autoencoder. J. Mar. Sci. Eng. 2025, 13, 1745. [Google Scholar] [CrossRef]
  35. Xue, Q.; Kolagunda, A.; Eliuk, S.; Wang, X. AWDF: An Adaptive Weighted Deep Fusion Architecture for Multi-modality Learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 2503–2512. [Google Scholar]
  36. Chen, Y.; Hu, X.; Wang, D.; Chen, H.; Zhan, C.; Ren, H. Researches on underwater transmission characteristics of blue-green laser. In Proceedings of the OCEANS 2014-TAIPEI, Taipei, Taiwan, 7–10 April 2014; pp. 1–5. [Google Scholar]
  37. Kang, Y.; Li, L.; Liu, D.; Li, D.; Zhang, T.; Zhao, W. Fast long-range photon counting depth imaging with sparse single-photon data. IEEE Photonics J. 2018, 10, 1–10. [Google Scholar] [CrossRef]
  38. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  39. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 586–595. [Google Scholar]
  40. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
41. Zhang, K.; Van Gool, L.; Timofte, R. Deep unfolding network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3217–3226. [Google Scholar]
  42. Han, J.; Shoeiby, M.; Malthus, T.; Botha, E.; Anstee, J.; Anwar, S.; Wei, R.; Petersson, L.; Armin, M.A. Single underwater image restoration by contrastive learning. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021. [Google Scholar]
  43. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14821–14831. [Google Scholar]
  44. Chen, L.; Chu, X.; Zhang, X.; Sun, J. Simple baselines for image restoration. In European Conference on Computer Vision; Springer Nature: Cham, Switzerland, 2022; pp. 17–33. [Google Scholar]
  45. Wang, Y.; Yu, J.; Zhang, J. Zero-shot image restoration using denoising diffusion null-space model. In Proceedings of the Eleventh International Conference on Learning Representations (ICLR), Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
Figure 1. Classic underwater image enhancement algorithms and networks.
Figure 2. Sample reconstruction results (b) of different original underwater laser range-gated images (a). All original underwater laser-gated images are sourced from our self-built dataset. Our method consistently produces visually pleasing results.
Figure 3. Principle of synchronized imaging combining an underwater laser with nanosecond-level ultrafast gating: (a) process of precisely controlling nanosecond-level laser pulse emission and nanosecond-level gating to achieve imaging; (b) principle of realizing the high-gain characteristics of the image intensifier.
Figure 4. Signal-to-noise ratio (SNR) and signal-noise cross-correlation analysis of actual underwater laser range-gated images across different frequency components: (a) SNR declines steeply in the high-frequency region, indicating strong coupling between high-frequency signals and noise; (b) the peak of the cross-correlation curve at zero displacement indicates that the noise is not randomly distributed but is coupled with target edges and textural structures.
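The analysis summarized in Figure 4 can be reproduced with a short script. The sketch below is our assumption of the procedure, not the paper's exact implementation: the noise is estimated as the residual between a gated frame and its reference, the per-band SNR is measured inside radial FFT bands (panel (a)), and the zero-displacement correlation between the noise and an edge map of the reference is computed (panel (b)). The function names `band_snr` and `zero_lag_edge_corr` are illustrative.

```python
import numpy as np

def band_snr(reference: np.ndarray, noisy: np.ndarray, n_bands: int = 8):
    """SNR (dB) per radial frequency band for a 2D grayscale pair;
    noise is taken as noisy - reference."""
    noise = noisy.astype(np.float64) - reference.astype(np.float64)
    f_sig = np.fft.fftshift(np.fft.fft2(reference.astype(np.float64)))
    f_noise = np.fft.fftshift(np.fft.fft2(noise))
    h, w = reference.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, radius.max(), n_bands + 1)
    snr_db = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (radius >= lo) & (radius < hi)
        p_sig = np.sum(np.abs(f_sig[band]) ** 2)
        p_noise = np.sum(np.abs(f_noise[band]) ** 2)
        snr_db.append(10 * np.log10(p_sig / max(p_noise, 1e-12)))
    return snr_db  # a steep drop in the last bands mirrors panel (a)

def zero_lag_edge_corr(reference: np.ndarray, noisy: np.ndarray) -> float:
    """Normalized zero-displacement correlation between the noise residual
    and the gradient-magnitude (edge) map of the reference."""
    noise = noisy.astype(np.float64) - reference.astype(np.float64)
    gy, gx = np.gradient(reference.astype(np.float64))
    edge_map = np.hypot(gx, gy)
    a = (edge_map - edge_map.mean()) / (edge_map.std() + 1e-12)
    b = (noise - noise.mean()) / (noise.std() + 1e-12)
    return float((a * b).mean())  # > 0 indicates edge-coupled noise, as in panel (b)
```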
Figure 5. (a) Original underwater laser-gated image. (b) Reference image corresponding to (a). (c) Noise obscures the sharp edge transitions of the real scene, so the gradient profile loses its steepness. (d) When the propagation intensity of the skip path approaches 1, water scattering noise is directly “copied” to the deep layers through residual connections, with minimal suppression by convolutional or activation layers during forward propagation.
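The noise-copying effect described in panel (d) can be illustrated in a few lines of PyTorch. The block below is a hypothetical minimal residual block, not one of PLPGR-Net's layers: with the identity skip at full strength, the scattering-noise component of the input remains strongly correlated with the block output.

```python
import torch
import torch.nn as nn

# Minimal sketch (hypothetical block): with an identity skip, a residual block
# computes y = x + f(x). If the learned branch f barely attenuates scattering
# noise, the noise in x reaches deeper layers essentially unchanged.
torch.manual_seed(0)
branch = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.ReLU())

clean = torch.zeros(1, 1, 64, 64)
noise = 0.1 * torch.randn_like(clean)   # stand-in for water scattering noise
x = clean + noise
y = x + branch(x)                       # residual connection (skip strength = 1)

# High correlation between the input noise and the output shows the "copying".
corr = torch.corrcoef(torch.stack([noise.flatten(), y.flatten()]))[0, 1]
print(f"noise/output correlation: {corr.item():.3f}")
```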
Figure 6. PLPGR-Net architecture for laser range-gated image restoration.
Figure 7. Multi-scale PatchGAN with attention architecture.
Figure 8. Logical flow of the multi-dimensional combined physical loss module. (1) Black arrows indicate directional logical relationships between inputs and outputs; (2) dashed frames represent complete input units composed of the enclosed content; (3) dots denote the selection of different numbers of single-frame images for stacking.
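As a structural illustration only (the actual loss terms and weights are defined in the main text, not here), a combined loss of this kind typically sums pixel-level, feature-level, and physics-motivated penalties with scalar weights. The class name and default weights below are placeholders.

```python
import torch.nn as nn

# Skeleton of a multi-dimensional combined loss (placeholder names/weights;
# see the main text for PLPGR-Net's actual formulation).
class CombinedLoss(nn.Module):
    def __init__(self, w_pix: float = 1.0, w_feat: float = 0.1, w_phys: float = 0.01):
        super().__init__()
        self.w_pix, self.w_feat, self.w_phys = w_pix, w_feat, w_phys
        self.l1 = nn.L1Loss()

    def forward(self, restored, reference, feat_restored, feat_reference, phys_penalty):
        loss_pix = self.l1(restored, reference)             # pixel dimension
        loss_feat = self.l1(feat_restored, feat_reference)  # feature dimension
        # phys_penalty: precomputed physics-consistency term (physical-law dimension)
        return (self.w_pix * loss_pix
                + self.w_feat * loss_feat
                + self.w_phys * phys_penalty)
```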
Figure 9. Physical setup of the extinction platform used for dataset acquisition (a) and virtual marine biological targets (b).
Figure 10. Underwater laser-gated imaging platform: (a) the system primarily consists of a nanosecond-gated camera, laser emitter, and lens; (b) internal three-dimensional structure of the integrated system. Red indicates watertight connectors, while green denotes image reception and processing modules.
Figure 11. Visual comparison between state-of-the-art methods and ours on real data. The noise standard deviation, computed jointly from the original and reference images in the leftmost column, is 11.33, 11.23, 14.14, 16.43, 19.50, and 30.96 from top to bottom, showing an overall upward trend.
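Under the assumption we make here (the caption does not spell out the estimator), the per-row noise levels quoted in Figure 11 can be obtained as the standard deviation of the residual between each original gated frame and its reference; the function name is illustrative.

```python
import numpy as np

def noise_std(original: np.ndarray, reference: np.ndarray) -> float:
    """Noise level of one image pair as std(original - reference)."""
    residual = original.astype(np.float64) - reference.astype(np.float64)
    return float(residual.std())
```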
Figure 12. (a,b) In each panel, the first row displays, from left to right: the underwater laser-gated noise image, the reference image, and the image restored by the proposed algorithm, with the corresponding heatmaps below each.
Table 1. Quantitative comparison of different image restoration methods on real datasets.

| Method | PSNR | SSIM | FID | MAE | LPIPS | Params (M) | Flops (G) |
|---|---|---|---|---|---|---|---|
| BM3D [40] | 25.08 | 0.540 | 93.315 | 9.848 | 0.357 | 0.00 | 1.18 |
| DnCNN [29] | 24.80 | 0.507 | 117.378 | 11.003 | 0.178 | 0.56 | 73.13 |
| USRNet [41] | 24.89 | 0.506 | 95.881 | 10.727 | 0.310 | 0.59 | 64.46 |
| CWR [42] | 29.12 | 0.751 | 105.040 | 5.302 | 0.210 | 11.40 | 84.87 |
| MPRNet [43] | 25.53 | 0.531 | 57.900 | 10.537 | 0.145 | 15.74 | 43.52 |
| NAFNet [44] | 23.99 | 0.490 | 56.449 | 12.651 | 0.139 | 116.41 | 2040.00 |
| DDNM [45] | 25.35 | 0.510 | 54.701 | 11.022 | 0.182 | 552.81 | 1113.75 |
| PLPGR-Net (proposed) | 36.27 | 0.892 | 28.796 | 1.765 | 0.119 | 13.71 | 697.77 |
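For reproducibility, the full-reference metrics in Table 1 can be computed with standard tooling. The sketch below covers PSNR, SSIM, and MAE on 8-bit grayscale pairs using scikit-image and NumPy; FID and LPIPS require learned feature extractors (e.g., the third-party `pytorch-fid` and `lpips` packages) and are omitted. The exact settings (data range, SSIM window) are our assumptions, not necessarily those used in the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def basic_metrics(restored: np.ndarray, reference: np.ndarray) -> dict:
    """PSNR/SSIM/MAE for an 8-bit grayscale pair (assumed data_range=255)."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored, data_range=255)
    mae = float(np.mean(np.abs(reference.astype(np.float64) -
                               restored.astype(np.float64))))
    return {"PSNR": psnr, "SSIM": ssim, "MAE": mae}
```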
Table 2. Quantitative comparison of different modules and the full model on real datasets.

| Metric | Base | +SBSM | +SPSM | +HFIRM and ABF | +MSPGA |
|---|---|---|---|---|---|
| PSNR | 26.53 | 31.46 | 33.61 | 34.41 | 36.27 |
| SSIM | 0.599 | 0.830 | 0.867 | 0.883 | 0.892 |
| FID | 45.970 | 69.118 | 68.141 | 30.466 | 28.796 |
| MAE | 8.748 | 3.294 | 2.155 | 2.679 | 1.765 |
| LPIPS | 0.268 | 0.170 | 0.159 | 0.119 | 0.119 |
| Params (M) | 9.65 | 12.84 | 13.13 | 13.46 | 13.71 |
| Flops (G) | 18.70 | 148.50 | 153.35 | 685.07 | 697.77 |