Article

DCT Underwater Image Enhancement Based on Attenuation Analysis

by Leyuan Wang, Miao Yang *, Can Pan and Jiaju Tao
School of Electronic Engineering, Jiangsu Ocean University, Lianyungang 222000, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(23), 7192; https://doi.org/10.3390/s25237192
Submission received: 27 September 2025 / Revised: 21 November 2025 / Accepted: 22 November 2025 / Published: 25 November 2025

Abstract

Underwater images often suffer from color distortion, reduced contrast, and blurred details due to the selective absorption and scattering of light by water, which limits the performance of underwater visual tasks. To address these issues, this paper proposes an underwater image enhancement method that integrates multi-channel attenuation analysis and discrete cosine transform (DCT). First, the color statistics of an in situ-captured underwater image are mapped to those of a reference image selected from a well-illuminated natural image dataset with standard color distribution; no pristine underwater image is required. This mapping yields a color transfer image, i.e., an intermediate color-corrected result obtained via statistical matching. Subsequently, this image is fused with an attenuation weight map and the original input to produce the final color-corrected result. Second, taking advantage of the median’s resistance to extreme value interference and the Sigmoid function’s flexible control of gray-scale transformation, the gray-scale range is adjusted in different regions through nonlinear mapping to achieve global contrast balance. Finally, considering the visual system’s sensitivity to high-frequency details, a saliency map is extracted using Gabor filtering, and the frequency characteristics are analyzed through block DCT transformation. Adaptive gain is then applied to enhance high-frequency details. Experiments were conducted on the UIEB, EUVP, and LSUI datasets and compared with existing methods. Through qualitative and quantitative analysis, it was verified that the proposed algorithm not only effectively enhances underwater images but also significantly improves image clarity.

1. Introduction

In many activities such as underwater welding, seabed exploration, scientific research, and biodiversity studies, high-quality visual information is a key foundation for supporting the execution of tasks [1]. However, the complex underwater environment poses significant challenges to obtaining clear images. The selective attenuation of light by water and the scattering and blocking effects of suspended particles result in common problems in underwater images, such as low visibility, color distortion, insufficient contrast, and blurred details [2]. Although many underwater image enhancement methods have been proposed, most of them fail to achieve good detail enhancement.
At present, underwater image enhancement technologies are mainly classified into three major categories: non-physical model methods, physical model-based methods, and deep learning-based methods. Non-physical model methods [3,4,5] take image processing technology as the core and do not rely on complex optical physical modeling. They improve image quality through classic means such as histogram equalization, contrast enhancement, denoising and smoothing, super-resolution reconstruction, and enhancement filters. However, the variability of image degradation makes it difficult for a single image processing operation to adapt to all scenarios. On the other hand, some methods may cause image information loss or distortion during multi-domain transformation. Therefore, it is necessary to comprehensively consider the balance of image quality in multiple feature dimensions.
In recent years, underwater image enhancement techniques based on physical models [6,7,8] and deep learning [9,10,11] have seen significant progress. Due to the difficulty in comprehensively covering all underwater optical effects, physical model methods [12] have limited generalization ability across different scenarios. Meanwhile, deep learning-based methods [13] are constrained by the difficulty in obtaining high-quality paired images, resulting in a deviation between the model training data and the real underwater environment. Additionally, they suffer from weak interpretability, high computational resource consumption, and complex parameter tuning.
This paper introduces a novel method for enhancing underwater images by integrating multi-channel attenuation analysis with discrete cosine transform (DCT). The approach employs a collaborative optimization strategy for color correction, contrast enhancement, and detail enhancement, resulting in a comprehensive improvement in underwater image quality. In particular, the color correction module generates a dynamic weight map that combines the attenuation indices of the red and blue channels. This adaptive mechanism effectively compensates for color loss in various regions of the image. The contrast enhancement module utilizes a mean–median fusion threshold segmentation histogram and applies Sigmoid nonlinear mapping to separately enhance the background and foreground regions. Additionally, the detail enhancement module employs Gabor filtering to extract a visual saliency map and establishes a three-level gain mechanism based on the AC energy and frequency characteristics in the DCT domain. This mechanism enhances details in salient regions while mitigating blocking artifacts through a weighted superposition strategy. The key contributions of this study are outlined as follows:
(1) A color correction algorithm based on multi-channel attenuation analysis is proposed. By combining CIELab space-based color statistical matching with red/blue channel attenuation/scattering physical models, it achieves adaptive color distortion compensation in complex scenarios such as strong- and weak-light conditions.
(2) A contrast enhancement strategy based on mean–median fusion segmentation is designed. Combined with Sigmoid nonlinear mapping, it suppresses extreme value interference while precisely regulating the gray dynamic range, effectively resolving contrast imbalance.
(3) A DCT detail enhancement mechanism is constructed. By guiding DCT domain frequency gain adjustment with visual saliency, it enhances edge and texture details while effectively avoiding distortion caused by amplification of high-frequency noise and excessive sharpening of details, achieving a balance between enhancement effect and image naturalness.

2. Related Work

2.1. Underwater Image Enhancement Method Based on Physical Model

Underwater image enhancement methods based on physical models rely on Jaffe’s foundational imaging model [14]—which decomposes received light into three physical components—and reconstruct clear images by quantifying the optical parameters of water. Existing methods mainly fall into three categories: first, those centered on He et al.’s Dark Channel Prior (DCP) [15], including Wang et al.’s [16] extension to more scenarios via adaptive attenuation curves and Ravi et al.’s [17] optimization of color cast and brightness by combining underexposure correction. Second, there are methods focused on multi-function integration, such as Li et al.’s [18] method that integrates color correction and defogging (outperforming DCP) and Peng et al.’s [19] model that incorporates depth estimation to improve image quality. Third, there are innovative approaches: Zhuang et al.’s [20] Super-Laplacian algorithm for color restoration, Li et al.’s [21] method of splitting high/low-frequency components for enhancement, Yan et al.’s [22] bio-normalization model simulating biological vision to correct color bias, and Acharya et al.’s [23] histogram-splitting method to preserve image information. However, these methods heavily depend on prior assumptions, leading to insufficient robustness in complex underwater environments, with DCP prone to distortion in specific areas.

2.2. Underwater Image Enhancement Methods Based on Non-Physical Models

Non-physical model-based underwater image enhancement methods are mainly divided into two categories: spatial domain methods and transform domain methods. Spatial domain methods focus on directly manipulating image pixels, achieving enhancement by adjusting pixel gray values and optimizing local or global pixel distribution. The fusion-based method proposed by Ancuti et al. [24] uses dual-branch processing to obtain contrast enhancement results and color correction results, respectively, improving contrast while enhancing detail edges; Tolie et al.’s work [25] centers on addressing the poor quality of underwater images caused by light attenuation, water turbidity, and optical device limitations, introducing a blind quality assessment method that leverages channel-based structural features, chrominance dispersion rate scores, and overall saturation and hue and fuses these features through a multiple linear regression model, achieving superior accuracy, consistency, and computational efficiency in assessing both raw and enhanced underwater images; Muniraj et al. [26] focus on the principle of color constancy, introducing gamma correction to enhance color intensity and combining it with a defogging algorithm to design a correction factor, achieving a breakthrough in improving color cast and clarity; and the graph signal processing method by Sharma et al. [27] is more innovative, replacing traditional transformation methods with graph Fourier transform and image wavelet filter banks, effectively improving image clarity and contrast.
The core logic of transform domain methods is to map an image from the spatial domain to a specific transform domain and use the characteristics of this domain to achieve enhancement, with discrete cosine transform (DCT) being the current core technical direction. The total JND contour model based on DCT proposed by Sung-Ho Bae et al. [28] integrates multiple masking effects such as spatial contrast sensitivity function and luminance masking, indirectly optimizing image quality while improving image and video compression performance; the review study by Wan Azani Mustafa et al. [29] systematically sorts out the advantages and disadvantages of DCT and Discrete Wavelet Transform (DWT) methods, providing a theoretical reference for the development of subsequent image enhancement technologies; for specific scenarios, An [30] proposes a DCT-based local contrast enhancement detection algorithm, which enhances the AC coefficients of the target area in Synthetic Aperture Radar (SAR) images and exhibits better detection results and accuracy than traditional methods in complex noise environments; and the AIIE method proposed by Ju et al. [31] in the same period is more targeted, specifically adapted to scenarios with low-frequency interference such as underwater environments. It utilizes the statistical characteristics in the DCT domain and a designed mask to suppress low-frequency information and highlight high-frequency information, significantly improving image visibility. Overall, although non-physical model-based methods offer fast processing speeds and do not require explicit modeling, they often suffer from incomplete color cast correction and limited detail recovery. Moreover, by neglecting the underwater optical degradation process, these methods are prone to over- or underenhancement, exhibit poor adaptability in complex environments, lack robustness in enhancement performance, and may even introduce noise or artifacts.

2.3. Underwater Image Enhancement Method Based on Deep Learning

Underwater image enhancement technology based on deep learning acquires the mapping relationship between degraded images and reference images through end-to-end learning, breaking through the limitations of traditional physical models. Existing research mainly focuses on two directions: first, network innovation. For example, Md Jahidul Islam et al. [32] proposed a residual generative model that can handle both image enhancement and super-resolution tasks; Junaed Sattar et al. [33] developed the FUnIE-GAN network and constructed the EUVP dataset; Ankita Naik et al. [34] designed the lightweight network Shallow-UWnet; Restormer, proposed by Zamir et al. [35], can efficiently capture long-distance pixel information; the multi-color encoder by Li et al. [36] can solve the problems of low contrast and color spots in underwater images; Deep-Wave-Net, proposed by Preethi et al. [37], has a faster convergence speed; the DICAM model by Tolie et al. [38] can correct image color bias and distance-related degradation; and the Swin Transformer by Liu et al. [39] can adapt to multiple image tasks. Second, there is dataset improvement: Li et al. [40] constructed the UIEB dataset and proposed the Water_Net network, but this network failed to effectively solve the backscattering problem; Peng et al. [41] constructed the LSUI dataset, which contains richer scenes. However, these methods entail considerable computational overhead and necessitate high-quality paired training data. In complex environments, they frequently suffer from color deviations and over-smoothed details, and their generalization performance across diverse water types remains limited.

3. Our Method

The flowchart of our method is shown in Figure 1. A color transfer image is first generated by aligning the statistical distributions of the degraded input and a reference image; it is then fused with the original frame and an attenuation weight map to yield an initial color-corrected image. This corrected image is subsequently segmented through mean–median fusion and transformed by region-adaptive Sigmoid nonlinear mapping to produce a contrast-enhanced representation. In parallel, a grayscale version of the enhanced image is derived, and a visual saliency map is computed via multi-scale feature extraction followed by adaptive smoothing; this map serves as the dominant spatial weight for frequency domain gain control. The enhanced image is also converted to YUV space, where the luminance (Y) channel undergoes block-wise DCT processing with saliency-weighted gain adjustment and blocking artifact suppression through overlapping-block weighted fusion. The refined Y channel is finally recombined with the U and V channels and converted back to the RGB space to deliver the detail-preserving enhanced result.

3.1. Color Correction

This paper proposes an improved CTCS [42] method. First, the in situ underwater degraded image and a reference image—selected from a well-illuminated natural dataset exhibiting standard color statistics and requiring no adaptive adjustment—are both transformed into the CIELab space. The three-channel statistical moments of each image are then computed, and channel-wise color matching is applied to the degraded image to generate the color-transferred result in the CIELab space. Subsequently, a dual-channel attenuation–interaction model is employed to derive an attenuation weight map, which is then fused with the color-transferred image and the original frame to yield the final color-corrected result. See Table A1 for the meanings of relevant symbols.
In the CIELab space, the mean and standard deviation of the original image and the reference image are calculated, respectively:
$$\mu_{Lab}^{S}=\frac{1}{H}\sum_{x=1}^{H} I_{Lab}^{S}(x),\qquad \sigma_{Lab}^{S}=\left[\frac{1}{H}\sum_{x=1}^{H}\left(I_{Lab}^{S}(x)-\mu_{Lab}^{S}\right)^{2}\right]^{\frac{1}{2}}$$
$$\mu_{Lab}^{R}=\frac{1}{H}\sum_{x=1}^{H} I_{Lab}^{R}(x),\qquad \sigma_{Lab}^{R}=\left[\frac{1}{H}\sum_{x=1}^{H}\left(I_{Lab}^{R}(x)-\mu_{Lab}^{R}\right)^{2}\right]^{\frac{1}{2}}$$
Here, $I_{Lab}^{S}(x)$ and $I_{Lab}^{R}(x)$ denote the values of the original image and the reference image at pixel $x$ in the CIELab space; $\mu_{Lab}^{S}$ and $\sigma_{Lab}^{S}$ are the mean and standard deviation of the original image in the CIELab color space, while $\mu_{Lab}^{R}$ and $\sigma_{Lab}^{R}$ are the mean and standard deviation of the reference image; $H = M \times N$ is the total number of pixels. Based on these statistical features, color transfer is performed using the following formula:
$$I_{CT,Lab}(i)=\frac{I_{Lab}(i)-\mu_{Lab}^{S}}{\sigma_{Lab}^{S}}\,\sigma_{Lab}^{R}+\mu_{Lab}^{R}$$
Here, $I_{CT,Lab}$ is the result after color style transfer, and $I_{Lab}(i)$ is the pixel value of the original degraded image in channel $i$ of the CIELab space (where $i$ corresponds to the L, a, and b channels). After obtaining $I_{CT,Lab}$, the color-transferred image $I_{CT}$ in the RGB space is obtained through the standard CIELab-to-RGB color space conversion.
To mitigate the limitation of conventional color transfer methods in neglecting wavelength-dependent underwater attenuation, the algorithm first estimates channel-specific attenuation indices for the red and blue bands of the original image as
$$A_{r} = 1-\left(\frac{r}{255}\right)^{\gamma_{1}}$$
$$A_{b} = 1-\left(\frac{b}{255}\right)^{\gamma_{2}}$$
Here, $r$ and $b$ are the red- and blue-channel values of the original image (normalized by 255 in the equations above), and $\gamma_{1}$ is the red-light attenuation sensitivity parameter: $\gamma_{1} = 1.1$ in strong-light scenes and $\gamma_{1} = 1.4$ in weak-light scenes. $\gamma_{2}$ is the blue-light scattering sensitivity parameter: $\gamma_{2} = 0.9$ in strong-light scenes and $\gamma_{2} = 0.6$ in weak-light scenes. The attenuation weight map is then generated as
$$W = \frac{A_{r}}{A_{r} + A_{b} + \epsilon}$$
where $\epsilon = 0.001$ prevents division by zero and $W \in [0, 1]$; the larger the value, the more the red-light attenuation dominates the distortion in that region.
Ultimately, the corrected image is obtained through a weighted fusion of the attenuation map and the color transfer result:
$$I_{CC}(i,j)=W(i,j)\,I_{CT}(i,j)+\left(1-W(i,j)\right)I_{S}(i,j)$$
Here, $I_{CC}(i,j)$ denotes the final corrected image, $W(i,j)$ represents the attenuation weight map, $I_{CT}(i,j)$ is the color-transferred image, and $I_{S}(i,j)$ is the original degraded image. The proposed fusion scheme adaptively modulates the contribution of the color transfer term versus the original pixel values as a function of the local attenuation weight. In regions where attenuation is severe, the color transfer result dominates, thereby replenishing lost chromatic information; conversely, in weakly attenuated regions, the original data are preferentially preserved to safeguard natural appearance. Consequently, the model delivers accurate, location-aware correction of underwater color distortions while remaining consistent with human visual system characteristics and the physical propagation laws of light in water.
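For concreteness, a minimal sketch of this color correction stage is given below. It assumes 8-bit BGR inputs, OpenCV's CIELab conversion, and the strong-light parameter values stated above; the function name color_correct and all variable names are illustrative rather than the authors' implementation.

```python
import cv2
import numpy as np

def color_correct(src_bgr, ref_bgr, gamma1=1.1, gamma2=0.9, eps=1e-3):
    """Sketch of the color-correction stage: CIELab statistical transfer followed by
    attenuation-weighted fusion with the original frame."""
    # CIELab statistical color transfer
    src_lab = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    ref_lab = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    mu_s, sigma_s = src_lab.mean(axis=(0, 1)), src_lab.std(axis=(0, 1)) + 1e-6
    mu_r, sigma_r = ref_lab.mean(axis=(0, 1)), ref_lab.std(axis=(0, 1))
    ct_lab = (src_lab - mu_s) / sigma_s * sigma_r + mu_r
    ct_bgr = cv2.cvtColor(np.clip(ct_lab, 0, 255).astype(np.uint8), cv2.COLOR_Lab2BGR)

    # Attenuation weight map from the red and blue channels
    src = src_bgr.astype(np.float32)
    a_r = 1.0 - (src[..., 2] / 255.0) ** gamma1   # red-channel attenuation index
    a_b = 1.0 - (src[..., 0] / 255.0) ** gamma2   # blue-channel scattering index
    w = (a_r / (a_r + a_b + eps))[..., None]      # W in [0, 1]

    # Weighted fusion of the color-transfer result and the original image
    out = w * ct_bgr.astype(np.float32) + (1.0 - w) * src
    return np.clip(out, 0, 255).astype(np.uint8)
```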

3.2. Contrast Enhancement

Following color correction, chromatic aberrations are notably reduced; however, the image still exhibits low global contrast arising from water column scattering and wavelength-dependent attenuation. Abdul Ghani et al. [43] used the mean as the segmentation threshold to divide the histogram into two sub-regions. Owing to the abundance of extreme gray-level values in underwater imagery, the sample median offers higher robustness against outliers than the arithmetic mean. Consequently, we define the optimal separation intensity $I_{sep,c}$ as the midpoint between the channel-wise mean and median, ensuring a stable partition of the tonal distribution.
With $I_{sep,c}$ serving as the channel-specific boundary, each histogram is partitioned into background and foreground sub-histograms. Both segments are then subjected to Sigmoid-based nonlinear stretching, where the compressive–expansive response expands low-contrast intervals while compressing high-contrast extremes, yielding a perceptually natural contrast enhancement that respects the original tonal distribution of underwater scenes.
The background region intensity mapping formula is
$$I_{B} = I_{min,c} + \left(I_{sep,c} - I_{min,c}\right)\times \mathrm{Sigmoid}\left(P_{B}\right)$$
where $I_{B}$ is the intensity of a background pixel after stretching, $I_{min,c}$ is the minimum pixel intensity of channel $c$, and $P_{B}$ is the cumulative distribution probability of the background pixel.
The foreground region intensity mapping formula is
$$I_{F} = \left(I_{sep,c} + 1\right) + \left(I_{max,c} - I_{sep,c} - 1\right)\times \mathrm{Sigmoid}\left(P_{F}\right)$$
where $I_{F}$ is the intensity of a foreground pixel after stretching, $I_{max,c}$ is the maximum pixel intensity of channel $c$, and $P_{F}$ is the cumulative distribution probability of the foreground pixel.
Upon completion of the channel-wise regional Sigmoid stretching, the background sub-images $I_{B}$ of all channels are merged to yield a low-enhancement image that preserves dark-region details while maintaining subdued foreground brightness. Conversely, the foreground sub-images $I_{F}$ are integrated to produce an overenhancement image that accentuates bright-area details at the cost of amplifying background noise. The two complementary images are finally combined to generate the contrast-balanced enhanced result $I_{BF}$.
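A minimal sketch of this contrast enhancement stage follows. The text does not fix the exact Sigmoid parameterization or how the cumulative distribution probabilities are estimated, so the steepness and centering of the Sigmoid and the per-segment empirical CDF below are assumptions; the background and foreground results are also merged directly into one output rather than fused as two intermediate images.

```python
import numpy as np

def _sigmoid(p):
    # Assumed centering/steepness; the source only specifies "Sigmoid(P)".
    return 1.0 / (1.0 + np.exp(-(p - 0.5) * 8.0))

def contrast_enhance(img):
    """Mean-median split of each channel histogram with Sigmoid stretching."""
    out = np.zeros_like(img, dtype=np.float32)
    for c in range(img.shape[2]):
        ch = img[..., c].astype(np.float32)
        i_sep = 0.5 * (ch.mean() + np.median(ch))   # mean-median fusion threshold
        i_min, i_max = ch.min(), ch.max()
        bg, fg = ch <= i_sep, ch > i_sep

        # Empirical cumulative distribution probability within each sub-histogram
        p_bg = np.searchsorted(np.sort(ch[bg]), ch[bg], side="right") / max(bg.sum(), 1)
        p_fg = np.searchsorted(np.sort(ch[fg]), ch[fg], side="right") / max(fg.sum(), 1)

        out[..., c][bg] = i_min + (i_sep - i_min) * _sigmoid(p_bg)             # background
        out[..., c][fg] = (i_sep + 1) + (i_max - i_sep - 1) * _sigmoid(p_fg)   # foreground
    return np.clip(out, 0, 255).astype(np.uint8)
```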

3.3. Detail Enhancement

Two complementary operations are executed in parallel on the contrast-enhanced image. First, the image is converted to grayscale, and a visual saliency map is computed via multi-scale feature extraction followed by adaptive smoothing; this map subsequently serves as the primary spatial weight for frequency domain gain control. Second, the enhanced image is transformed into the YUV space, where the luminance (Y) channel undergoes block-wise discrete cosine transform (DCT) enhancement guided by the saliency weights. Blocking artifacts are mitigated through overlapping-block weighted fusion. Finally, the refined Y channel is recombined with the U and V channels, and the composite image is converted back to the RGB color space to yield an enhanced result with preserved detail clarity.

3.3.1. Theoretical Basis

The human visual system exhibits heightened sensitivity to mid-frequency and high-frequency spatial components such as edges and fine-scale textures and spontaneously allocates selective attention to visually salient regions [44]. The discrete cosine transform (DCT)—a canonical frequency domain operator—orthogonally decomposes an N × N image block into a compact ensemble of cosine basis functions, thereby furnishing an analytical substrate for targeted spectral manipulation. Formally, the DCT transformation is defined as
$$D(u,v)=\alpha_{u}\alpha_{v}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\cos\!\left[\frac{(2x+1)u\pi}{2N}\right]\cos\!\left[\frac{(2y+1)v\pi}{2N}\right]$$
In the formula, $f(x,y)$ is the pixel value of the image block, $u,v = 0,1,\ldots,N-1$, and $\alpha_{u}=\sqrt{1/N}$ for $u=0$ or $\sqrt{2/N}$ for $u>0$. The same applies to $\alpha_{v}$.
In the discrete cosine transform (DCT) domain, the coefficient located at position (0,0), known as the DC coefficient, represents the average intensity or luminance of the corresponding image block. The remaining coefficients, termed AC coefficients, encapsulate the spatial frequency details: high-frequency AC coefficients primarily characterize edges and textures, whereas low-frequency AC coefficients correspond to smoother regions within the block. This clear separation of information in the frequency domain offers a precise and manipulable basis for targeted image detail enhancement.
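The following toy snippet, using OpenCV's dct on a single random 8 × 8 block, illustrates this DC/AC split: the DC coefficient carries the block's average intensity (up to the orthonormal scaling factor), while all remaining energy lies in the AC coefficients.

```python
import cv2
import numpy as np

# Random 8x8 block standing in for one luminance sub-block.
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float32)
coeffs = cv2.dct(block)

dc = coeffs[0, 0]                                   # average intensity term
ac_energy = float(np.sum(coeffs ** 2) - dc ** 2)    # energy of edges and textures
print(f"block mean = {block.mean():.2f}, DC / 8 = {dc / 8:.2f}, AC energy = {ac_energy:.1f}")
```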

3.3.2. Visual Salience Perception

In underwater imagery, the perceptual significance of different regions varies considerably. Applying uniform enhancement across the entire image may inadvertently amplify background noise or result in over-sharpening of non-critical areas. This approach fails to align with the way the human visual system (HVS) naturally prioritizes focal regions. To address this, we propose the integration of a visual saliency perception stage that emulates the HVS’s ability to identify key areas of interest, thereby providing regional weighting cues for subsequent adaptive enhancement [45].
To extract salient texture and edge information, we perform multi-scale Gabor filtering, expressed as
$$G_{\lambda,\theta,\psi,\sigma,\gamma}(x,y)=\exp\!\left(-\frac{x'^{2}+\gamma^{2}y'^{2}}{2\sigma^{2}}\right)\cos\!\left(2\pi\frac{x'}{\lambda}+\psi\right)$$
In the formula, $x' = x\cos\theta + y\sin\theta$, $y' = -x\sin\theta + y\cos\theta$, and $\lambda$, $\theta$, $\psi$, $\sigma$, and $\gamma$ represent the wavelength, orientation, phase, standard deviation of the Gaussian envelope, and aspect ratio, respectively. Two Gabor filters in the horizontal direction ($\theta = 0$, $\lambda = 0.15$) and the vertical direction ($\theta = 90^{\circ}$, $\lambda = 0.2$) are selected to filter the luminance channel. The filtering results are denoted as $G_{1}(x,y)$ and $G_{2}(x,y)$. The preliminary saliency map $S(x,y)$ is obtained by adding the absolute values of the filtering results, as shown in Equation (11), to highlight the salient regions.
$$S(x,y)=\left|G_{1}(x,y)\right|+\left|G_{2}(x,y)\right|$$
To suppress noise and smooth the saliency map, adaptive normalization is performed: $S(x,y)$ is mapped to the [0, 255] interval to obtain $S_{norm1}(x,y)$, and a Gaussian kernel $G_{gauss}(x,y)$ with a size of 21 × 21 and a standard deviation of 7 is then applied for blurring. A 21 × 21 kernel is large enough to cover the local neighborhood of salient regions, merging adjacent pixel features for effective noise suppression without excessively blurring salient details, while a standard deviation of 7 provides an appropriate blur strength. The operation is expressed as
$$S_{blur}(x,y)=S_{norm1}(x,y)\ast G_{gauss}(x,y)$$
where $\ast$ denotes convolution. The result is normalized to the [0, 1] interval to obtain the final visual saliency map $S_{final}(x,y)$, which reflects regional attention; higher values indicate that the region is more likely to attract visual attention.
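A sketch of this saliency stage is shown below, assuming a uint8 grayscale input. The paper's wavelength values are passed directly to OpenCV's lambd argument, and the Gabor kernel size, sigma, gamma, and psi are assumptions not fixed by the text, so they may require retuning in practice.

```python
import cv2
import numpy as np

def saliency_map(gray):
    """Gabor-based saliency: two oriented filters, absolute-sum, normalization to
    [0, 255], 21x21 Gaussian blur (sigma = 7), then mapping to [0, 1]."""
    g = gray.astype(np.float32)
    k1 = cv2.getGaborKernel((21, 21), sigma=4.0, theta=0.0,       lambd=0.15, gamma=0.5, psi=0)
    k2 = cv2.getGaborKernel((21, 21), sigma=4.0, theta=np.pi / 2, lambd=0.2,  gamma=0.5, psi=0)
    s = np.abs(cv2.filter2D(g, cv2.CV_32F, k1)) + np.abs(cv2.filter2D(g, cv2.CV_32F, k2))
    s = cv2.normalize(s, None, 0, 255, cv2.NORM_MINMAX)       # S_norm1
    s = cv2.GaussianBlur(s, (21, 21), sigmaX=7)                # S_blur
    return cv2.normalize(s, None, 0.0, 1.0, cv2.NORM_MINMAX)   # S_final in [0, 1]
```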

3.3.3. Frequency Domain Adaptive Gain Adjustment

Conventional frequency domain enhancement methods typically apply a uniform gain, which unavoidably amplifies high-frequency noise, over-emphasizes structurally irrelevant regions, and deviates from human visual perception. The discrete cosine transform (DCT) mitigates these drawbacks by decomposing the image into spectrally disjoint coefficients: low-frequency terms represent smooth areas, whereas high-frequency terms encode edges and textures. This separability enables tiered processing—the selective amplification of high-frequency components for detail sharpening while mildly adjusting low-frequency components for naturalness preservation—thereby achieving a perceptually convincing balance between clarity and fidelity. Building on this, we propose a “three-level gain tuning” method that fuses DCT frequency characteristics with visual saliency to deliver precise, detail-oriented enhancement. First, the luminance channel is subjected to block DCT transformation: the luminance channel $Y(x,y)$ is divided into 8 × 8 sub-blocks. For each sub-block $B(x,y)$, a DCT transformation is performed to obtain the coefficient matrix $D(u,v)$, and the intra-block AC energy is calculated to measure the regional contrast. The AC energy is defined as the sum of the squares of all coefficients within the block minus the square of the DC coefficient, that is,
$$E_{ac}=\sum_{u=0}^{7}\sum_{v=0}^{7}D^{2}(u,v)-D^{2}(0,0)$$
To suppress numerical fluctuations, the AC energy $E_{ac}$ is first log-transformed and subsequently mapped to the range [0, 1] via the hyperbolic tangent function, yielding the normalized contrast measure $C$.
The frequency gain function is
$$G_{freq}(u,v)=\alpha_{max}\,C\,\exp\!\left(\beta\, f(u,v)^{\kappa}\right)$$
Among them, $\alpha_{max} = 1.2$ is the maximum gain coefficient, $C$ is the contrast measurement value, $\beta = 0.12$ controls the gain bandwidth, and $\kappa = 0.8$ adjusts the gain rate so that the high-frequency region achieves stronger enhancement. The gain factor is further refined by incorporating edge-direction characteristics. The edge direction is determined from the horizontal and vertical low-frequency variation in the DCT coefficients: the horizontal low-frequency variation coefficient $d_{x} = D(0,1)$ and the vertical low-frequency variation coefficient $d_{y} = D(1,0)$ are computed, and the edge ratio is defined as
$$z = \frac{d_{y}}{d_{x} + \epsilon}$$
Here, $\epsilon = 10^{-6}$ is used to avoid division by zero.
The directional gain factor $G_{factor}$ is adjusted according to the value of $z$: if $z > 2.0$ (vertical edge), then $G_{factor} = 1.4$ for $j < i$ (to enhance the vertical high-frequency region) or $1.0$ for $j \geq i$ (to prevent overenhancement); if $z < 0.5$ (horizontal edge), then $G_{factor} = 1.5$ for $j < i$ (to enhance the horizontal high-frequency region) or $1.0$ for $j \geq i$; for isotropic regions ($0.5 \leq z \leq 2.0$), $G_{factor} = 1.2$. The total gain is ultimately obtained by integrating the visual saliency, the frequency gain, and the direction factor, where $S_{final}$ is the average value of $S_{final}(x,y)$ within each sub-block. The formula for the total gain is
$$G(u,v)=S_{final}\times G_{factor}\times G_{freq}(u,v)$$
Adjust the DCT coefficients using the total gain, and the adjustment formula is
$$D'(u,v)=D(u,v)\times\left[1 + 0.8\tanh\!\left(G(u,v)\right)\right]$$
Among them, the DC coefficient $D(0,0)$ remains unchanged to maintain the overall brightness stability of the image. After the coefficient adjustment is completed, the inverse DCT transformation is performed on each sub-block to reconstruct the enhanced image block $B'(x,y)$.
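A per-block sketch of this three-level gain adjustment is given below. Only the stated constants (alpha_max, beta, kappa, the 0.8 tanh adjustment, and the direction factors) come from the description above; taking the frequency index f(u, v) as u + v, the scaling inside the contrast measure, and using absolute values when forming the edge ratio are assumptions.

```python
import cv2
import numpy as np

def enhance_block(block, sal_mean, alpha_max=1.2, beta=0.12, kappa=0.8):
    """Adjust the DCT coefficients of one 8x8 luminance sub-block.

    block: float32 8x8 sub-block of the Y channel; sal_mean: block-average saliency.
    """
    d = cv2.dct(block.astype(np.float32))

    # Normalized contrast measure C: log-compressed AC energy mapped to [0, 1] via tanh
    e_ac = np.sum(d ** 2) - d[0, 0] ** 2
    c = np.tanh(np.log1p(e_ac) / 10.0)            # the /10 scaling is an assumption

    # Edge-direction factor from the low-frequency coefficients d_x = D(0,1), d_y = D(1,0)
    z = abs(d[1, 0]) / (abs(d[0, 1]) + 1e-6)
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    if z > 2.0:                                    # vertical edge
        g_factor = np.where(v < u, 1.4, 1.0)
    elif z < 0.5:                                  # horizontal edge
        g_factor = np.where(v < u, 1.5, 1.0)
    else:                                          # isotropic region
        g_factor = np.full((8, 8), 1.2)

    # Frequency gain and total gain (saliency x direction factor x frequency gain)
    g_freq = alpha_max * c * np.exp(beta * (u + v).astype(np.float32) ** kappa)
    g_total = sal_mean * g_factor * g_freq

    # Coefficient adjustment; the DC term is restored to keep overall brightness stable
    d_new = d * (1.0 + 0.8 * np.tanh(g_total))
    d_new[0, 0] = d[0, 0]
    return cv2.idct(d_new)
```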

3.3.4. Enhanced Post-Processing

During the block-based frequency domain enhancement process, the discontinuity at the boundaries between blocks can easily cause blocking artifacts [46], and the gain adjustment may lead to local brightness anomalies. Additionally, there is a risk of color distortion in the channel-merging stage. To address these issues, the post-processing of enhancement employs a multi-strategy collaborative optimization approach: to reduce the blocking artifacts caused by block processing, the algorithm adopts an overlapping block strategy and a weighted fusion mechanism. The block movement step size is set to half the block size (i.e., 4 pixels), and the enhanced sub-blocks obtained from the inverse DCT are weighted using a Hanning window $H(x,y)$, which is defined as
$$H(x,y)=0.5\left[1-\cos\!\left(\frac{2\pi x}{7}\right)\right]\cdot 0.5\left[1-\cos\!\left(\frac{2\pi y}{7}\right)\right]$$
The weighted sub-blocks are accumulated into the output luminance channel $E(x,y)$, and the sum of the weights is simultaneously recorded as $H_{acc}(x,y)$, that is, $E(x,y) \leftarrow E(x,y) + B'(x,y)\,H(x,y)$ and $H_{acc}(x,y) \leftarrow H_{acc}(x,y) + H(x,y)$. After the enhancement is completed, the output luminance channel is normalized through $E(x,y)/H_{acc}(x,y)$ and limited to the range [0, 255] to avoid overexposure or underexposure. Finally, the enhanced luminance channel $E(x,y)$ is recombined with the temporarily stored chrominance channels $U(x,y)$ and $V(x,y)$ to form a YUV image, which is then converted back to the RGB color space to obtain the final color image with enhanced details. By combining the inverse DCT transformation with an overlapping weighted superposition strategy, the algorithm effectively balances detail enhancement and blocking artifact suppression, ultimately achieving a high-quality enhancement effect with clear details in the salient regions and high visual comfort.
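The overlapping-block accumulation can be sketched as follows. It reuses the per-block enhancement above (wrapped, e.g., as lambda b: enhance_block(b, sal_mean=0.5)); keeping only full blocks is a simplification and not the authors' exact border handling.

```python
import numpy as np

def overlap_merge(y, enhance_block_fn, block=8, step=4):
    """Accumulate Hanning-weighted enhanced sub-blocks and normalize by the weight sum.

    y: float32 luminance channel; enhance_block_fn: per-block enhancement function.
    """
    h2d = np.outer(np.hanning(block), np.hanning(block))   # separable Hanning window H(x, y)
    e = np.zeros_like(y, dtype=np.float32)                  # accumulated luminance E(x, y)
    h_acc = np.zeros_like(y, dtype=np.float32)               # accumulated weights H_acc(x, y)

    for r in range(0, y.shape[0] - block + 1, step):         # step = half the block size
        for c in range(0, y.shape[1] - block + 1, step):
            b = enhance_block_fn(y[r:r + block, c:c + block])
            e[r:r + block, c:c + block] += b * h2d
            h_acc[r:r + block, c:c + block] += h2d

    return np.clip(e / np.maximum(h_acc, 1e-6), 0, 255)      # normalize and clamp to [0, 255]
```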

4. Experiment and Analysis

In this section, the proposed method is compared with existing techniques, including DCTE [47], UNTV [48], PCDE [49], PCFB [50], UDHTV [51], ZSRM [52], and WFAC [53]. Among the comparison algorithms, UDHTV is a physical model-based enhancement method, UNTV, PCDE, PCFB, and ZSRM are non-physical model-based enhancement methods, and WFAC and DCTE are frequency domain enhancement methods. By choosing these seven types of algorithms for comparison, we can comprehensively cover mainstream technical routes such as physical modeling, non-physical constraints, and frequency domain processing. Through the performance differences among different methods in color cast correction, defogging, and detail enhancement, the comprehensive advantages and applicable scenarios of the proposed method are highlighted. Three representative low-quality image datasets are selected for the experiments, namely, UIEB [54], EUVP [55], and LSUI [56]. Both the UIEB and EUVP datasets focus on the systematic perceptual research and analysis of underwater image enhancement methods. The UIEB dataset contains 950 real underwater scene images collected from the Internet, while the EUVP dataset includes over 12,000 paired underwater images and 8000 unpaired underwater images, providing rich test samples for the performance evaluation of underwater image enhancement algorithms. The LSUI dataset is a large-scale underwater image dataset containing 5004 pairs of images, covering diverse underwater scenes and providing strictly selected high-quality reference images, specifically designed for underwater image enhancement research.

4.1. Qualitative Assessment

To visually verify the comprehensive effect of the proposed underwater image enhancement method, this section reports qualitative comparison experiments based on the UIEB, EUVP, and LSUI datasets. For the experiments, we selected DCTE, UNTV, PCDE, PCFB, UDHTV, ZSRM, and WFAC as mainstream comparison methods. Through visual effect comparison, the performance of the proposed method in color correction, contrast enhancement, and detail enhancement is evaluated. The qualitative comparison results are shown in Figure 2, which integrates the enhancement effects of the three datasets: (a)–(c) are samples from the UIEB dataset, (d)–(f) are samples from the EUVP dataset, and (g)–(i) are samples from the LSUI dataset. Each dataset contains three typical underwater images. The experiments verify the enhancement stability of the method in different underwater scenarios through the comparison of multiple dataset samples.
As shown in Figure 2, DCTE delivers a flat enhancement with no standout dimension: colors remain dim and shallow, while edges and textures gain little clarity, so complex patterns in LSUI still look almost identical to the original. UNTV fails to remove color casts—greenish or bluish hues linger—and keeps underwater creatures and reefs blurry; its low contrast leaves the entire image gray and lifeless. PCDE clearly overenhances: it over-brightens the scene, producing whitish “washed-out” areas that destroy natural underwater tones; aggressive sharpening turns texture edges ragged and noisy, strong contrast blows highlights and crushes shadows, flattening depth. PCFB tinges regions yellow, yielding a dark, disharmonious palette; weak detail recovery leaves edges soft, and blown contrast erases highlight information, breaks layering, and produces a harsh glare. The UDHTV algorithm has an obvious overenhancement problem. In terms of color correction, some areas show over-saturation, and the blue tone of scenes in the UIEB database is overly intensified, appearing unnatural. In terms of contrast enhancement, local highlight areas are overexposed, and the transition between dark and bright areas is harsh, which destroys the layering of the image. Although there is a certain effect in detail enhancement, overenhancement causes texture details to be masked by noise, resulting in relatively low overall visual comfort. ZSRM offers moderate enhancement—no obvious color cast—but its mild contrast still muddies light–dark separation; detail gains are limited, complex textures stay blurred, and edge acuity falls short of DCT. Finally, WFAC remains conservative, giving a hazy impression: blue-green tones are subdued, overall grayness is high, layering is weak, and the soft edge profile provides the lowest detail definition of all. Our column in Figure 2 clearly shows that the proposed method has the best comprehensive performance of the three core indicators. In color correction, it can effectively correct the color shift in underwater scenes, with no color bias or over-saturation, and is close to the true color of the scene. In contrast enhancement, by adaptively adjusting the gain of high-frequency coefficients, it enhances the overall contrast while avoiding local overexposure—details in dark areas are clearly presented, and bright areas are not over-brightened, significantly enhancing the image layering and visual impact. The detail enhancement effect is particularly outstanding: the directional adjustment module specifically strengthens edges and textures, making edges sharper and details richer without noise amplification, achieving a good balance between detail clarity and overall naturalness.
By comparing the 8 groups of results from 9 samples across the three datasets in Figure 2, the following conclusions can be drawn: UDHTV causes image distortion due to overenhancement, resulting in low visual comfort; WFAC adopts a conservative enhancement approach, suffering from insufficient enhancement, which leads to hazy images and missing details; ZSRM achieves moderate enhancement but performs poorly in detail processing, with no outstanding advantages in any dimension; DCTE shows a plain overall enhancement effect, featuring dark colors and limited detail improvement, thus performing mediocrely; UNTV has average color correction and insufficient detail and contrast enhancement, leading to high image grayscale and weak visual impact; PCDE’s overenhancement results in bright colors and whitish images, accompanied by noise amplification and damaged image layers; PCFB has problems such as insufficient details, partial yellowish tones and overexposed contrast, leading to uncoordinated color tones and low detail recognition; and the DCT algorithm achieves a better balance in color naturalness, contrast balance, and detail clarity, and its comprehensive performance is significantly superior to other algorithms.

4.2. Quantitative Evaluation

To quantitatively verify the performance of the method, in this section, seven metrics including UIQM [57] (Underwater Image Quality Measure), SSIM [58] (Structural Similarity Index), PSNR [59] (Peak Signal-to-Noise Ratio), UCIQE [57] (Underwater Color Image Quality Evaluator), information entropy [60] (IE), average gradient [61] (AG), and standard deviation [62] (SD) are adopted for evaluation with the UIEB, EUVP, and LSUI datasets. The quantitative results are summarized in Table 1, Table 2 and Table 3.
From the results of the UIEB dataset in Table 1, the DCT algorithm stands out in several key indicators. In terms of overall quality, its UIQM reaches 4.5542, significantly higher than other algorithms. Meanwhile, its UCIQE ranks first at 0.5100, highlighting its advantages in color balance and contrast. In terms of detail enhancement, it achieves the highest information entropy of 7.6433, an average gradient of 122.4165 that far exceeds similar algorithms, and a standard deviation of 67.5269 that leads by a large margin. These indicate its remarkable effects in detail preservation, edge clarity and contrast improvement. In terms of structural consistency, the SSIM of DCT (0.8172) is slightly lower than that of PCFB (0.8270) and DCTE (0.8211), but it still remains at a good level. Its PSNR (31.3165) is close to that of most algorithms, belonging to the upper-middle level. In summary, although the DCT algorithm is not the best in terms of SSIM and PSNR, it has obvious advantages in core indicators such as UCIQE and information entropy. Its overall performance is stable and excellent, especially in color balance, detail preservation, and contrast improvement.
The results for the EUVP underwater image dataset in Table 2 show that the DCT algorithm has significant advantages. In terms of overall quality, its UIQM reaches 4.9359, which is significantly higher than that of other algorithms; its UCIQE stands at 0.4915, second only to ZSRM (0.4942), still highlighting its advantages in color balance and contrast. In terms of detail enhancement, it achieves the highest information entropy of 7.7333, an average gradient of 146.1495 that far exceeds similar algorithms (with WFAC ranking second at 125.6689), and a standard deviation of 71.7036 that leads by a large margin, which indicates its remarkable effects in detail preservation, edge clarity, and contrast improvement. In terms of structural consistency, the SSIM of DCT (0.8093) is slightly lower than that of DCTE (0.8148) and UDHTV (0.8129) but higher than most algorithms such as UNTV (0.7748) and ZSRM (0.7340), still remaining at a good level; its PSNR (31.4466) is close to the values of DCTE (31.4721), UNTV (31.4732) and other algorithms, belonging to the upper-middle level. In summary, although the DCT algorithm is not the best in terms of SSIM and PSNR, it has obvious advantages in core indicators such as UIQM, information entropy, average gradient, and standard deviation, with stable and excellent overall performance, especially in color balance, detail preservation, and contrast improvement.
The results for the LSUI dataset in Table 3 further validate the advantages of the DCT algorithm. In terms of overall quality, its UIQM reaches 4.5252, significantly higher than other algorithms (with WFAC ranking second at 3.9052); its UCIQE ranks first at 0.4883, far exceeding other algorithms, highlighting its advantages in color balance and contrast. In terms of detail enhancement, its IE (information entropy) is 7.6418, second only to WFAC (7.7409); its AG (average gradient) is 146.0433, which far surpasses similar algorithms; and its SD (standard deviation) leads by a large margin at 69.2745, indicating its remarkable effects in detail preservation, edge clarity, and contrast improvement. In terms of structural consistency, the SSIM of DCT (0.7783) is slightly lower than that of UNTV (0.8056) and UDHTV (0.7998) but higher than that of ZSRM (0.7720) and WFAC (0.6870), still remaining at a reasonable level; its PSNR (31.3688) is close to the values of ZSRM (31.4130), UDHTV (31.3572), and other algorithms, belonging to the upper-middle level. In summary, although the DCT algorithm is not the best in terms of SSIM and PSNR, it has obvious advantages in core indicators such as UIQM, UCIQE, average gradient, and standard deviation, with stable and excellent overall performance, especially in color balance, detail preservation, and contrast improvement.
Across the three benchmark datasets, the proposed DCT method consistently ranks first in terms of global visual quality, detail fidelity, and contrast enhancement. Its superior edge-sharpening and texture-preserving capabilities directly address the core degradations encountered in underwater imaging. Among the competitors, DCTE attains marginally higher scores on certain structural consistency indices, whereas WFAC occasionally surpasses others on isolated detail metrics; UNTV, PCDE, and PCFB exhibit intermediate performance. Overall, none of the alternative algorithms match the comprehensive enhancement delivered by the DCT framework.

4.3. Ablation Experiment

To quantitatively evaluate the contribution of each constituent module—color correction, contrast enhancement, and DCT-based detail refinement—systematic ablation studies were conducted on the UIEB dataset. Progressive removal of individual components and subsequent performance comparison elucidate both their standalone efficacy and their synergistic role in improving underwater image quality.
For the ablation experiment, we selected representative samples from the UIEB dataset and set up four comparison scenarios to isolate the influence of each component: the original image; the method without color correction (-w/o Color Correction, -w/o CC): removing the color correction module while retaining the contrast enhancement and DCT detail enhancement processes; the method without contrast enhancement (-w/o Contrast Enhancement, -w/o CE): removing the contrast enhancement module while retaining the color correction and DCT detail enhancement processes; and the method without DCT detail enhancement (-w/o DCT Detail Enhancement, -w/o HDE): removing the DCT detail enhancement module while retaining the color correction and contrast enhancement processes. The experiment adopted a combined qualitative and quantitative evaluation approach: qualitative results are presented through visual comparison images, as shown in Figure 3, to demonstrate the visual effect differences among various scenarios; quantitative assessment was conducted by calculating metrics such as UIQM (Underwater Image Quality Measure), SSIM (Structural Similarity Index), PSNR (Peak Signal-to-Noise Ratio), UCIQE (Underwater Color Image Quality Evaluation), information entropy (IE), average gradient (AG), and standard deviation (SD). The average scores of each scenario on the UIEB dataset are summarized in Table 4, with bolded values indicating the best results for the corresponding metrics.
Figure 3 shows the original underwater images selected from the UIEB dataset, as well as the enhancement results of the complete method and four ablation scenarios. Visual inspection unambiguously reveals the function of each module. In the absence of color correction (-w/o CC), the image retains moderate contrast and coarse details yet suffers from a pronounced blue-green cast caused by wavelength-selective attenuation; the foreground and background exhibit poor chromatic coherence, yielding an unnaturally rigid appearance. This confirms that the color correction module is essential for neutralizing scattering-induced color shift and for re-balancing channel intensities. When contrast enhancement is disabled (-w/o CE), the global histogram is restored to a natural palette, but the dynamic range collapses: dark-region details are submerged, bright-region highlights are compressed, and the entire image appears veiled and “grayish”. Even though the detail enhancement module is active, the low-contrast baseline prevents texture information from being perceived, underscoring the role of contrast enhancement in expanding the dynamic range. Without DCT detail enhancement (-w/o HDE), color fidelity and global contrast remain satisfactory, yet object boundaries and fine textures are blurred, and the overall plasticity is reduced; in complex regions, high-frequency details are clearly lost. This demonstrates that the DCT module, by boosting high-frequency coefficients in a visually adaptive manner, compensates for the local information deficit that contrast enhancement alone cannot restore.
Table 4 presents the average quantitative evaluation results for the complete method and three ablation scenarios on the UIEB dataset. Bold entries denote the best result for each metric. Ablation results demonstrate that the full algorithm consistently outperforms all degraded variants, corroborating the synergistic value of color correction, contrast enhancement, and DCT-based detail refinement. Removing color correction (-w/o CC) reduces UIQM to 2.7586 and UCIQE to 0.4100, confirming that this module is indispensable for natural color recovery and overall perceptual quality. Eliminating contrast enhancement (-w/o CE) decreases PSNR to 22.28 dB; although the average gradient remains high (101.11), the drop in luminance fidelity and global quality underscores the module’s role in boosting image layering. Excluding DCT detail enhancement (-w/o HDE) lowers the average gradient to 91.79 and information entropy to 7.583, revealing a clear loss in texture acuity and fine-detail richness. The complete algorithm, integrating all three modules, attains UIQM = 4.5542, PSNR = 31.32 dB, UCIQE = 0.5100, and average gradient = 122.42—delivering optimal color naturalness, contrast stratification, and detail sharpness.

4.4. Computational Complexity Experiment

4.4.1. Experimental Set Up

The hardware environment of this experiment adopts an Intel Core i7-12700H CPU, an NVIDIA RTX 3060 GPU, and 32 GB DDR4 3200 MHz memory. The Intel Core i7-12700H CPU is manufactured by Intel Corporation, with its source city being Santa Clara, CA, USA. The graphics card selected for the experiment is the NVIDIA RTX 3060 GPU, which is produced by NVIDIA Corporation and also sourced from Santa Clara, CA, USA. The memory configuration is 32 GB DDR4 3200 MHz, adopting products of the Samsung brand, whose source city is Suwon, Republic of Korea. The software environment is built based on the Windows 11 Professional operating system, using Python 3.9 as the programming language and relying on OpenCV 4.8.0 and NumPy 1.25.2 for image processing and numerical calculation. The runtime measurement is implemented with the timeit.Timer tool, and each group of tests is repeated 50 times to eliminate errors caused by random system fluctuations. For dataset selection, 100 representative samples are randomly selected from each of the three benchmark datasets (UIEB, EUVP, and LSUI), and all samples are uniformly adjusted to a resolution of 640 × 480 to ensure a consistent data scale. The measurement scope of the color correction component starts from CIELab space conversion and ends at the fusion of the attenuation weight map and the color transfer image; the contrast enhancement component starts from the mean–median fusion threshold calculation and ends at the background/foreground image fusion; the detail enhancement component starts from Gabor filtering and ends at inverse DCT and overlapping block weighted fusion.
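A sketch of the timing harness implied by this setup is shown below; the synthetic 640 × 480 test image and the component callables are placeholders, and only the 50-repetition averaging with timeit.Timer follows the description above.

```python
import timeit
import numpy as np

# Synthetic 640x480 test image standing in for a resized dataset sample.
img = np.random.default_rng(0).integers(0, 256, (480, 640, 3), dtype=np.uint8)

def mean_runtime_ms(fn, repeats=50):
    """Average wall-clock runtime of fn(img) over the given number of repetitions, in ms."""
    timer = timeit.Timer(lambda: fn(img))
    return 1000.0 * float(np.mean(timer.repeat(repeat=repeats, number=1)))

# Example: mean_runtime_ms(detail_enhancement) for a hypothetical component function.
```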

4.4.2. Experimental Results and Analysis

Table 5 shows the average runtime of the three core components on the UIEB, EUVP, and LSUI datasets. As can be seen from Table 5, the runtime of each component follows a consistent pattern across the three datasets. The detail enhancement component has the longest average runtime, which is 16.87 ms, and it is the main source of the algorithm’s computational overhead; the color correction component ranks second; and the contrast enhancement component has the shortest runtime. At the same time, the runtime fluctuation of the same component across different datasets is small, with the maximum difference being less than 0.2 ms. Combined with the unified resolution and experimental parameter settings, this verifies the stability and reliability of the complexity results and also provides a clear data basis for the subsequent engineering optimization of the algorithm.

4.5. Application Testing

To verify the practical value of the enhancement method in real visual tasks, this section applies the enhanced images to two typical downstream tasks: edge detection and key point detection. By comparing the task results of the original images with those of the enhanced images, the promoting effect of the enhancement method on visual feature extraction is evaluated. The tests are based on representative samples from the EUVP dataset. Edge detection uses the Canny operator, and key point detection uses the SIFT algorithm. The effects are comprehensively analyzed through qualitative and quantitative indicators.

4.5.1. Edge Detection

Edges are fundamental visual cues for characterizing image structure and object boundaries, and the fidelity of edge extraction is inherently determined by the magnitude of gray-level gradients [63]. In raw underwater images, contrast attenuation induces gradient smoothing, which causes the Canny edge detector to generate fragmented contours, missing weak edges, and spurious responses. As illustrated in Figure 4, compared with seven mainstream underwater image enhancement algorithms (i.e., DCTE, UNTV, PCDE, PCFB, UDHTV, ZSRM, and WFAC), the proposed algorithm effectively enhances edge intensity and enriches fine-grained detail information through the targeted amplification of gray-level gradients. Consequently, it achieves continuous object silhouettes, preserved weak edges, and improved edge-background separation, thereby significantly improving the accuracy of edge detection for underwater scenes.
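The comparison can be reproduced with a short snippet such as the one below; the file names and the Canny hysteresis thresholds (50/150) are placeholders rather than values fixed by the paper.

```python
import cv2

orig = cv2.imread("original.png", cv2.IMREAD_GRAYSCALE)   # raw underwater frame
enh = cv2.imread("enhanced.png", cv2.IMREAD_GRAYSCALE)    # output of the enhancement method
edges_orig = cv2.Canny(orig, 50, 150)                      # low/high hysteresis thresholds
edges_enh = cv2.Canny(enh, 50, 150)
print("edge pixels:", int((edges_orig > 0).sum()), "->", int((edges_enh > 0).sum()))
```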

4.5.2. Key Point Detection

The key points serve as the basis for local feature matching, and their quantity and stability depend on the clarity of the detail texture. Due to the blurriness of the details in the original underwater images, the key points extracted by the SIFT algorithm are concentrated in high-contrast areas, with a sparse distribution and weak discrimination of neighborhood features. Through detail enhancement, the local features in low-contrast areas (such as shadow areas and weak-textured surfaces) of the enhanced images are highlighted. As shown in Figure 5, the number of key points significantly increases and their distribution becomes more uniform. The quantitative results in Table 6 show that the average number of key points in the enhanced images increases by 107%, verifying the effectiveness of the enhancement method in enriching and stabilizing local features.
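A corresponding key point count comparison with OpenCV's SIFT is sketched below; the file names are placeholders, and the default SIFT parameters are assumed.

```python
import cv2

sift = cv2.SIFT_create()
kp_orig = sift.detect(cv2.imread("original.png", cv2.IMREAD_GRAYSCALE), None)
kp_enh = sift.detect(cv2.imread("enhanced.png", cv2.IMREAD_GRAYSCALE), None)
print(f"key points: {len(kp_orig)} -> {len(kp_enh)}")   # enhancement should raise the count
```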

4.5.3. Object Detection

Object detection underpins critical underwater downstream tasks such as biological monitoring and search-and-rescue; its accuracy is contingent upon feature discriminability and contour sharpness. To quantify the detection benefit conferred by enhancement, we constructed a benchmark of 50 representative EUVP images containing fish, coral colonies, and man-made equipment and annotated them with axis-aligned bounding boxes in Pascal VOC format. A COCO-pre-trained YOLOv5s detector was fine-tuned on this subset while keeping the IoU threshold at 0.5. The original frames (baseline) and their enhanced counterparts—produced by DCTE, UNTV, PCDE, PCFB, UDHTV, ZSRM, WFAC, and the proposed DCT method—were fed into the identical network. The mean precision (P) [64], recall (R) [64], F1-score [65], and F1-score improvement across 50 independent runs are reported in Table 7. F1-score improvement refers to the relative increase in the F1-score achieved by each algorithm compared to the F1-score of the original image. This metric is used to quantify the algorithm’s enhancement in object detection performance relative to the original image. It is calculated using the following formula:
$$\text{F1-score improvement}=\frac{F1_{\text{algorithm}}-F1_{\text{original}}}{F1_{\text{original}}}\times 100\%$$
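As a simple check of this metric, a one-line helper can compute the relative gain; the example values are purely illustrative.

```python
def f1_improvement(f1_algorithm: float, f1_original: float) -> float:
    """Relative F1-score gain over the original images, in percent."""
    return (f1_algorithm - f1_original) / f1_original * 100.0

print(f"{f1_improvement(0.75, 0.50):.1f}%")   # illustrative values -> 50.0%
```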
Table 7 confirms that raw underwater images yield the lowest scores, underscoring the detrimental effect of aquatic degradation on detection performance. Among the competing algorithms, DCTE ranks lowest; UNTV, PCFB, and PCDE deliver moderate results; and UDHTV, ZSRM, and WFAC perform comparatively better yet still exhibit deficiencies such as missing details, residual color casts, or overenhancement. In sharp contrast, the proposed DCT method achieves the highest precision, recall, and F1-score. Relative to WFAC, these metrics rise by 15.7%, 14.0%, and 14.8%, respectively, while the F1-score surpasses that of the original images by 51.7%. This substantial gain originates from the joint optimization of color correction, contrast enhancement, and detail boosting, which collectively suppress both false positives and false negatives. The pronounced margin over all competitors validates the practical utility of the DCT method for real-world underwater vision tasks.
Figure 6 presents the qualitative comparison results of underwater object detection between the proposed algorithm and seven mainstream methods (i.e., DCTE, UNTV, PCDE, PCFB, UDHTV, ZSRM, and WFAC), where the detection bounding boxes of all algorithms are marked in red. From the perspective of the localization accuracy and coverage range of the bounding boxes, it can be observed that the detection boxes of other comparative algorithms generally exhibit an over-expansion phenomenon: the size of the bounding boxes is excessively large, failing to accurately fit the target contours. Some algorithms even include background regions within the bounding boxes or miss key local parts of the targets, which reflects their insufficient capability to identify underwater target boundaries. In contrast, the detection boxes of the proposed algorithm can strictly match the actual contours of the targets with optimal size adaptability, without obvious over-coverage or under-coverage issues, thus accurately capturing the complete morphology of underwater targets. This difference originates from the synergistic effect of multi-channel attenuation analysis and DCT-based detail enhancement in the proposed algorithm, which effectively improves the edge clarity of images and the separability between targets and backgrounds. Consequently, more reliable visual features are provided for the YOLOv5s detector, thereby significantly optimizing the localization accuracy of the detection boxes and achieving precise detection of underwater targets.

5. Conclusions

To tackle the three major challenges of underwater imaging—color cast, low contrast, and blurred details—this paper presents a unified enhancement framework that couples multi-channel attenuation analysis with DCT. By integrating physical priors with frequency domain processing, the proposed method breaks the limitation of conventional schemes that handle these issues in isolation. A three-level synergy of color correction, contrast stretching, and detail sharpening is designed as the core: each module has a dedicated role while mutually reinforcing the others, enabling parameter-free, adaptive enhancement. Experiments on three public datasets—UIEB, EUVP, and LSUI—as well as on downstream tasks such as object detection and edge extraction demonstrate that our results significantly outperform mainstream algorithms such as UDHTV in both visual quality and no-reference metrics like UIQM and UCIQE. Ablation studies further confirm that all three modules are indispensable. The algorithm respects human visual characteristics and underwater light propagation physics, effectively suppressing overenhancement and artifacts. Future work will focus on refining the attenuation model and developing lightweight implementations to facilitate real-time applications.

Author Contributions

Methodology, L.W.; Validation, C.P.; Investigation, J.T.; Writing—original draft, L.W.; Writing—review and editing, M.Y.; Supervision, M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 62271236, the National Key R&D Program Project under Grant 2023YFC3108205, the Key Country-Specific Industrial Technology R&D Cooperation Project under Grant 23GH002, and the 521 Talent Program under Grant LYG065212024009.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The following table summarizes all symbols used in this paper, arranged in the order of their first appearance.
Table A1. Symbol notation table.

Symbol         Definition
I_Lab^S(x)     original image in CIELab
I_Lab^R(x)     reference image in CIELab
μ_Lab^S        mean of original
σ_Lab^S        std of original
μ_Lab^R        mean of reference
σ_Lab^R        std of reference
H              total number of pixels, M × N
I_CT,Lab       color-transferred image
r, b           normalized R/B channels
γ1             red attenuation parameter
γ2             blue scattering parameter
A_r            red attenuation index
A_b            blue attenuation index
W              attenuation weight map
I_CC           final color-corrected image
I_sep,c        mean–median separation threshold
I_B            background intensity after stretch
I_F            foreground intensity after stretch
P_B            cumulative probability of background
P_F            cumulative probability of foreground
Y              luminance channel
D_(u,v)        DCT coefficient
E_ac           AC energy
C              contrast measure
f_(u,v)        frequency
α_max          maximum gain coefficient
β              gain bandwidth control
κ              gain rate adjustment
d_x            horizontal low-frequency change
d_y            vertical low-frequency change
z              edge direction ratio
G_freq         frequency gain
G_factor       directional gain factor
S_final        visual saliency weight
G_(u,v)        total gain
D′_(u,v)       adjusted DCT coefficient
H_(x,y)        Hanning window
E_(x,y)        accumulated intensity
H_acc          accumulated weight
P              precision
R              recall
F1             F1-score

References

  1. Jiang, Z.; Li, Z.; Yang, S.; Fan, X.; Liu, R. Target Oriented Perceptual Adversarial Fusion Network for Underwater Image Enhancement. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6584–6598. [Google Scholar] [CrossRef]
  2. Zhuang, P.; Li, C.; Wu, J. Bayesian retinex underwater image enhancement. Eng. Appl. Artif. Intell. 2021, 101, 104171. [Google Scholar] [CrossRef]
  3. Garg, D.; Garg, N.K.; Kumar, M. Underwater image enhancement using blending of CLAHE and percentile methodologies. Multimed. Tools Appl. 2018, 77, 26545–26561. [Google Scholar] [CrossRef]
  4. Zhou, J.; Pang, L.; Zhang, D.; Zhang, W. Underwater Image Enhancement Method via Multi-Interval Subhistogram Perspective Equalization. IEEE J. Ocean. Eng. 2023, 48, 474–488. [Google Scholar] [CrossRef]
  5. Bai, L.; Zhang, W.; Pan, X.; Zhao, C. Underwater Image Enhancement Based on Global and Local Equalization of Histogram and Dual-Image Multi-Scale Fusion. IEEE Access 2020, 8, 128973–128990. [Google Scholar] [CrossRef]
  6. Zhang, M.; Peng, J. Underwater Image Restoration Based on A New Underwater Image Formation Model. IEEE Access 2018, 6, 58634–58644. [Google Scholar] [CrossRef]
  7. Yu, H.; Li, X.; Lou, Q.; Lei, C.; Liu, Z. Underwater image enhancement based on DCP and depth transmission map. Multimed. Tools Appl. 2020, 79, 20373–20390. [Google Scholar] [CrossRef]
  8. Yang, M.; Sowmya, A.; Wei, Z.; Zheng, B. Offshore Underwater Image Restoration Using Reflection-Decomposition-Based Transmission Map Estimation. IEEE J. Ocean. Eng. 2020, 45, 521–533. [Google Scholar] [CrossRef]
  9. Yang, M.; Hu, K.; Du, Y.; Wei, Z.; Sheng, Z.; Hu, J. Underwater image enhancement based on conditional generative adversarial network. Signal Process. Image Commun. 2020, 81, 115723. [Google Scholar] [CrossRef]
  10. Liu, R.; Jiang, Z.; Yang, S.; Fan, X. Twin Adversarial Contrastive Learning for Underwater Image Enhancement and Beyond. IEEE Trans. Image Process. 2022, 31, 4922–4936. [Google Scholar] [CrossRef]
  11. Wang, Y.; Guo, J.; Gao, H.; Yue, H. UIEC^2-Net: CNN-based underwater image enhancement using two color space. Signal Process. Image Commun. 2021, 96, 116250. [Google Scholar] [CrossRef]
  12. Zhang, W.; Zhuang, P.; Sun, H.; Li, G.; Kwong, S.; Li, C. Underwater Image Enhancement via Minimal Color Loss and Locally Adaptive Contrast Enhancement. IEEE Trans. Image Process. 2022, 31, 3997–4010. [Google Scholar] [CrossRef]
  13. Lyu, Z.; Chen, Y.; Hou, Y. MCPNet: Multi-space color correction and features prior fusion for single-image dehazing in non-homogeneous haze scenarios. Pattern Recognit. 2024, 150, 110290. [Google Scholar] [CrossRef]
  14. Jaffe, J.S. Computer modeling and the design of optimal underwater imaging systems. IEEE J. Ocean. Eng. 1990, 15, 101–111. [Google Scholar] [CrossRef]
  15. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1956–1963. [Google Scholar]
  16. Wang, Y.; Liu, H.; Chau, L.-P. Single Underwater Image Restoration Using Adaptive Attenuation-Curve Prior. IEEE Trans. Circuits Syst. I Regul. Pap. 2018, 65, 992–1002. [Google Scholar] [CrossRef]
  17. Ravi, R.V.; Goyal, S.B.; Islam, S.M.N. Underwater Image Dehazing using Dark Channel Prior and Filtering Techniques. In Proceedings of the 2022 International Conference on Computational Modelling, Simulation and Optimization (ICCMSO), Pathum Thani, Thailand, 23–25 December 2022; pp. 12–16. [Google Scholar]
  18. Li, C.; Guo, J.; Guo, C.; Cong, R.; Gong, J. A hybrid method for underwater image correction. Pattern Recognit. Lett. 2017, 94, 62–67. [Google Scholar] [CrossRef]
  19. Peng, Y.T.; Cosman, P.C. Underwater Image Restoration Based on Image Blurriness and Light Absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594. [Google Scholar] [CrossRef]
  20. Zhuang, P.; Wu, J.; Porikli, F.; Li, C. Underwater Image Enhancement with Hyper-Laplacian Reflectance Priors. IEEE Trans. Image Process. 2022, 31, 5442–5455. [Google Scholar] [CrossRef]
  21. Li, X.; Hou, G.; Li, K.; Pan, Z. Enhancing underwater image via adaptive color and contrast enhancement, and denoising. Eng. Appl. Artif. Intell. 2022, 111, 104759. [Google Scholar] [CrossRef]
  22. Yan, X.; Wang, G.; Wang, G.; Wang, Y.; Fu, X. A novel biologically-inspired method for underwater image enhancement. Signal Process. Image Commun. 2022, 104, 116670. [Google Scholar] [CrossRef]
  23. Li, Y.; Li, L.; Yao, J.; Xia, M.; Wang, H. Contrast-Aware Color Consistency Correction for Multiple Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4941–4955. [Google Scholar] [CrossRef]
  24. Ancuti, C.; Haber, T.; Bekaert, P. Enhancing Underwater Images and Videos by Fusion. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012. [Google Scholar]
  25. Tolie, H.F.; Ren, J.; Cai, J.; Chen, R.; Zhao, H. Blind Quality Assessment Using Channel-Based Structural, Dispersion Rate Scores, and Overall Saturation and Hue for Underwater Images. IEEE J. Ocean. Eng. 2025, 50, 1944–1959. [Google Scholar] [CrossRef]
  26. Muniraj, M.; Dhandapani, V. Underwater image enhancement by combining color constancy and dehazing based on depth estimation. Neurocomputing 2021, 460, 211–230. [Google Scholar] [CrossRef]
  27. Sharma, S.; Varma, T. Graph signal processing based underwater image enhancement techniques. Eng. Sci. Technol. Int. J. 2022, 32, 101059. [Google Scholar] [CrossRef]
  28. Bae, S.H.; Kim, M. A DCT-Based Total JND Profile for Spatiotemporal and Foveated Masking Effects. IEEE Trans. Circuits Syst. Video Technol. 2017, 27, 1196–1207. [Google Scholar] [CrossRef]
  29. Azani Mustafa, W.; Yazid, H.; Khairunizam, W.; Aminuddin Jamlos, M.; Zunaidi, I.; Razlan, Z.M.; Shahriman, A.B. Image Enhancement Based on Discrete Cosine Transforms (DCT) and Discrete Wavelet Transform (DWT): A Review. IOP Conf. Ser. Mater. Sci. Eng. 2019, 557, 012027. [Google Scholar] [CrossRef]
  30. An, R.; Huo, W.; Zhang, Y.; Pei, J.; Zhang, Y.; Huang, Y. A DCT-Based Local Contrast Enhancement SAR Imaging Detection Algorithm. In Proceedings of the IGARSS 2024—2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 7–12 July 2024; pp. 9866–9869. [Google Scholar]
  31. Ju, M.; He, C.; Ding, C.; Ren, W.; Zhang, L.; Ma, K. All-Inclusive Image Enhancement for Degraded Images Exhibiting Low-Frequency Corruption. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 838–856. [Google Scholar] [CrossRef]
  32. Islam, M.J.; Luo, P.; Sattar, J. Simultaneous Enhancement and Super-Resolution of Underwater Imagery for Improved Visual Perception. In Proceedings of the Robotics: Science and Systems 2020, Corvallis, OR, USA, 12–16 July 2020. [Google Scholar]
  33. Islam, M.J.; Xia, Y.; Sattar, J. Fast Underwater Image Enhancement for Improved Visual Perception. IEEE Robot. Autom. Lett. 2020, 5, 3227–3234. [Google Scholar] [CrossRef]
  34. Naik, A.; Swarnakar, A.; Mittal, K. Shallow-UWnet: Compressed Model for Underwater Image Enhancement (Student Abstract). Proc. AAAI Conf. Artif. Intell. 2021, 35, 15853–15854. [Google Scholar] [CrossRef]
  35. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.-H. Restormer: Efficient Transformer for High-Resolution Image Restoration. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5728–5739. [Google Scholar]
  36. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An Underwater Image Enhancement Benchmark Dataset and Beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389. [Google Scholar] [CrossRef]
  37. Wang, C. Underwater Image Enhancement by Dehazing with Minimum Information Loss and Histogram Distribution Prior. IEEE Trans. Image Process. 2016, 25, 5664–5677. [Google Scholar] [CrossRef]
  38. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  39. Peng, L.; Zhu, C.; Bian, L. U-Shape Transformer for Underwater Image Enhancement. IEEE Trans. Image Process. 2023, 32, 3066–3079. [Google Scholar] [CrossRef]
  40. Wang, J.; Li, P.; Deng, J.; Du, Y.; Zhuang, J.; Liang, P.; Liu, P. CA-GAN: Class-Condition Attention GAN for Underwater Image Enhancement. IEEE Access 2020, 8, 130719–130728. [Google Scholar] [CrossRef]
  41. Tolie, H.F.; Ren, J.; Elyan, E. DICAM: Deep Inception and Channel-wise Attention Modules for underwater image enhancement. Neurocomputing 2024, 584, 127585. [Google Scholar] [CrossRef]
  42. Zhang, W.; Wang, Y.; Li, C. Underwater Image Enhancement by Attenuated Color Channel Correction and Detail Preserved Contrast Enhancement. IEEE J. Ocean. Eng. 2022, 47, 718–735. [Google Scholar] [CrossRef]
  43. Abdul Ghani, A.S.; Mat Isa, N.A. Underwater image quality enhancement through integrated color model with Rayleigh distribution. Appl. Soft Comput. 2015, 27, 219–230. [Google Scholar] [CrossRef]
  44. Mukherjee, J.; Mitra, S.K. Enhancement of color images by scaling the DCT coefficients. IEEE Trans. Image Process. 2008, 17, 1783–1794. [Google Scholar] [CrossRef]
  45. Wang, H.; Yu, L.; Yin, H.; Li, T.; Wang, S. An improved DCT-based JND estimation model considering multiple masking effects. J. Vis. Commun. Image Represent. 2020, 71, 102850. [Google Scholar] [CrossRef]
  46. Liu, S.; Bovik, A.C. Efficient DCT-domain blind measurement and reduction of blocking artifacts. IEEE Trans. Circuits Syst. Video Technol. 2002, 12, 1139–1149. [Google Scholar] [CrossRef]
  47. Bang, S.; Kim, W. DCT Domain Detail Image Enhancement for More Resolved Images. Electronics 2021, 10, 2461. [Google Scholar] [CrossRef]
  48. Xie, J.; Hou, G.; Wang, G.; Pan, Z. A Variational Framework for Underwater Image Dehazing and Deblurring. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 3514–3526. [Google Scholar] [CrossRef]
  49. Zhang, W.; Jin, S.; Zhuang, P.; Liang, Z.; Li, C. Underwater Image Enhancement via Piecewise Color Correction and Dual Prior Optimized Contrast Enhancement. IEEE Signal Process. Lett. 2023, 30, 229–233. [Google Scholar] [CrossRef]
  50. Zhang, W.; Liu, Q.; Feng, Y.; Cai, L.; Zhuang, P. Underwater Image Enhancement via Principal Component Fusion of Foreground and Background. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 10930–10943. [Google Scholar] [CrossRef]
  51. Li, Y.; Hou, G.; Zhuang, P.; Pan, Z. Dual High-Order Total Variation Model for Underwater Image. arXiv 2024, arXiv:2407.14868. [Google Scholar] [CrossRef]
  52. Li, F.; Liu, C.; Li, X. Underwater image enhancement with zero-point symmetry prior and reciprocal mapping. Displays 2024, 85, 102845. [Google Scholar] [CrossRef]
  53. Zhang, W.; Liu, Q.; Lu, H.; Wang, J.; Liang, J. Underwater Image Enhancement via Wavelet Decomposition Fusion of Advantage Contrast. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 7807–7820. [Google Scholar] [CrossRef]
  54. Kalantzopoulos, G.N.; Lundvall, F.; Checchia, S.; Lind, A.; Wragg, D.S.; Fjellvag, H.; Arstad, B. In Situ Flow MAS NMR Spectroscopy and Synchrotron PDF Analyses of the Local Response of the Bronsted Acidic Site in SAPO-34 during Hydration at Elevated Temperatures. ChemPhysChem 2018, 19, 519–528. [Google Scholar] [CrossRef]
  55. Wang, Y.; Song, W.; Fortino, G.; Qi, L.; Zhang, W.; Liotta, A. An Experimental-Based Review of Image Enhancement and Image Restoration Methods for Underwater Imaging. IEEE Access 2019, 7, 140233–140251. [Google Scholar] [CrossRef]
  56. Zhang, W.; Wang, H.; Ren, P.; Zhang, W. MACT: Underwater image color correction via Minimally Attenuated Channel Transfer. Pattern Recognit. Lett. 2025, 187, 28–34. [Google Scholar]
  57. Li, C.Y.; Mazzon, R.; Cavallaro, A. An Online Platform for Underwater Image Quality Evaluation. In Proceedings of the International Conference on Pattern Recognition, Beijing, China, 20–24 August 2018; Springer: Cham, Switzerland, 2019; Volume 11188, pp. 37–44. [Google Scholar] [CrossRef]
  58. Cao, Z.; Liang, Y.; Deng, L.; Vivone, G. An Efficient Image Fusion Network Exploiting Unifying Language and Mask Guidance. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 9845–9862. [Google Scholar] [CrossRef]
  59. Lv, M.; Song, S.; Jia, Z.; Li, L.; Ma, H. Multi-Focus Image Fusion Based on Dual-Channel Rybak Neural Network and Consistency Verification in NSCT Domain. Fractal Fract. 2025, 9, 432. [Google Scholar] [CrossRef]
  60. Harte, J. Maximum Entropy and Ecology: A Theory of Abundance, Distribution, and Energetics; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  61. Ruder, S. An overview of gradient descent optimization algorithms. arXiv 2016, arXiv:1609.04747. [Google Scholar]
  62. Lee, D.K.; In, J.; Lee, S. Standard Deviation and Standard Error of the Mean. Korean J. Anesthesiol. 2015, 68, 220–223. [Google Scholar] [CrossRef]
  63. Zhang, D.; He, Z.; Zhang, X.; Wang, Z.; Ge, W.; Shi, T.; Lin, Y. Underwater image enhancement via multi-scale fusion and adaptive color-gamma correction in low-light conditions. Eng. Appl. Artif. Intell. 2023, 126, 106972. [Google Scholar] [CrossRef]
  64. Zhou, P.; Ni, B.; Geng, C.; Hu, J.; Xu, Y. Scale-transferrable object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 528–537. [Google Scholar]
  65. Li, L.; Ma, H.; Zhang, X.; Zhao, X.; Lv, M.; Jia, Z. Synthetic Aperture Radar Image Change Detection Based on Principal Component Analysis and Two-Level Clustering. Remote Sens. 2024, 16, 1861. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the algorithm.
Figure 2. Comparison of the effects of different algorithms on three underwater image datasets. (a–c) are samples from the UIEB dataset, (d–f) are samples from the EUVP dataset, and (g–i) are samples from the LSUI dataset.
Figure 3. Qualitative comparison results of the ablation experiments.
Figure 4. Qualitative comparison of edge detection results between the proposed algorithm and mainstream underwater image enhancement algorithms.
Figure 5. Comparison of key point distribution between the original image and the enhanced image. (a–c) respectively correspond to three representative groups of underwater image samples from the EUVP dataset; each sub-figure pair shows the key point detection results of the original image and the enhanced image.
Figure 6. Qualitative comparison of underwater object detection results between the proposed algorithm and mainstream methods.
Table 1. Experimental comparison data of each algorithm for the UIEB dataset. All bold values in the table indicate the optimal results under the corresponding evaluation metrics, and italic values denote the suboptimal results.

Method   UIQM    SSIM    PSNR     UCIQE   IE      AG        SD
DCTE     4.0210  0.8211  31.3231  0.4620  7.0321  101.5100  40.4632
UNTV     4.1221  0.8112  31.3101  0.4576  7.1221  104.3201  45.6781
PCDE     4.3214  0.8138  31.3019  0.4621  7.1456  101.7892  40.7829
PCFB     4.4123  0.8270  31.3028  0.4762  7.4653  113.3398  44.7921
UDHTV    4.4022  0.8162  31.3131  0.4410  6.9961  101.4178  40.4631
ZSRM     4.9806  0.8074  31.1688  0.4744  6.9978   71.2831  40.4722
WFAC     3.8381  0.7820  31.3575  0.4178  7.5633  112.4023  55.3321
DCT      4.5542  0.8172  31.3165  0.5100  7.6433  122.4165  67.5269
Table 2. Experimental comparison data of each algorithm for the EUVP dataset. All bold values in the table indicate the optimal results under the corresponding evaluation metrics, and italic values denote the suboptimal results.

Method   UIQM    SSIM    PSNR     UCIQE   IE      AG        SD
DCTE     4.6328  0.8148  31.4721  0.4770  7.2403  101.1231  49.2145
UNTV     4.4892  0.7748  31.4732  0.4711  7.2431  102.1293  50.9128
PCDE     4.5011  0.7990  31.4001  0.4521  7.3129  102.4928  52.4938
PCFB     4.6122  0.8100  31.4529  0.4811  7.3190  103.4132  53.8732
UDHTV    4.4341  0.8129  31.4685  0.4774  7.2396   99.1021  48.1158
ZSRM     4.4036  0.7340  31.4019  0.4942  7.2315  102.3219  50.0777
WFAC     4.4506  0.6470  31.1022  0.4337  7.6899  125.6689  59.5134
DCT      4.9359  0.8093  31.4466  0.4915  7.7333  146.1495  71.7036
Table 3. Experimental comparison data of each algorithm for the LSUI dataset. All bold values in the table indicate the optimal results under the corresponding evaluation metrics, and italic values denote the suboptimal results.

Method   UIQM    SSIM    PSNR     UCIQE   IE      AG        SD
DCTE     3.3487  0.7912  31.3575  0.4366  7.0558   71.4901  43.2123
UNTV     3.6171  0.8056  31.3417  0.4482  7.2341  102.7821  43.8989
PCDE     3.3813  0.7978  31.3122  0.4511  7.3123  108.7891  44.5879
PCFB     3.8321  0.7928  31.3322  0.4611  7.4123  111.2342  44.9876
UDHTV    3.3630  0.7998  31.3572  0.4369  7.0558   71.4888  41.3137
ZSRM     3.8696  0.7720  31.4130  0.4683  7.2024   90.5919  45.9988
WFAC     3.9052  0.6870  31.2243  0.4306  7.7409  121.1898  56.2596
DCT      4.5252  0.7783  31.3688  0.4883  7.6418  145.0433  69.2745
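Some of the no-reference metrics reported in Tables 1–3 are simple grayscale statistics. The sketch below shows common formulations of information entropy (IE), average gradient (AG), and standard deviation (SD); exact definitions vary in the literature, so these are illustrative rather than necessarily the ones used in this paper, and the image path is a placeholder.

```python
# Minimal sketch of three no-reference metrics used in Tables 1-3:
# information entropy (IE), average gradient (AG) and standard deviation (SD).
import cv2
import numpy as np

def ie(gray: np.ndarray) -> float:
    """Shannon entropy (bits) of the 8-bit gray-level histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def ag(gray: np.ndarray) -> float:
    """Average gradient based on horizontal and vertical finite differences."""
    g = gray.astype(np.float64)
    dx = np.diff(g, axis=1)[:-1, :]
    dy = np.diff(g, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def sd(gray: np.ndarray) -> float:
    """Standard deviation of gray levels."""
    return float(gray.astype(np.float64).std())

if __name__ == "__main__":
    img = cv2.imread("enhanced/sample_001.png")  # placeholder path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    print(f"IE={ie(gray):.4f}  AG={ag(gray):.4f}  SD={sd(gray):.4f}")
```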
Table 4. Quantitative results of ablation experiments based on the UIEB dataset.

Variant     UIQM    SSIM    PSNR     UCIQE   IE      AG        SD
Raw image   1.9544  0.7639  19.4438  0.3221  6.8900   41.3818  36.3170
-w/o CC     2.7586  0.7822  21.4057  0.4100  7.0645   61.4828  41.5255
-w/o CE     4.1832  0.7911  22.2839  0.4511  7.1490  101.1127  72.4165
-w/o HDE    3.9916  0.7933  22.1154  0.4479  7.5827   91.7882  59.1186
Complete    4.5542  0.8122  31.3165  0.5100  7.6433  122.4165  67.5269
Table 5. Average runtime of each component on the three datasets.

Dataset   Color Correction Runtime (ms)   Contrast Enhancement Runtime (ms)   Detail Enhancement Runtime (ms)   Total Runtime (ms)
UIEB      8.21 ± 0.35                     5.17 ± 0.22                         16.83 ± 0.51                      30.21
EUVP      8.35 ± 0.41                     5.23 ± 0.25                         17.02 ± 0.48                      30.60
LSUI      8.18 ± 0.38                     5.12 ± 0.23                         16.75 ± 0.45                      30.15
Average   8.25 ± 0.38                     5.17 ± 0.23                         16.87 ± 0.48                      30.29
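Per-module runtimes like those in Table 5 are typically obtained by timing each stage repeatedly and reporting mean and standard deviation. The sketch below shows one way to do this; the three stage functions are stand-in dummy operations, not the paper's actual modules, and the repeat count is arbitrary.

```python
# Minimal sketch of per-stage runtime measurement (mean ± std in milliseconds).
import time
import numpy as np

def time_stage(fn, img, repeats: int = 50):
    """Return mean and std of the runtime of fn(img) in milliseconds."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(img)
        samples.append((time.perf_counter() - t0) * 1e3)
    return float(np.mean(samples)), float(np.std(samples))

if __name__ == "__main__":
    img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # dummy input
    stages = {
        "color correction": lambda x: x.astype(np.float32) / 255.0,    # stand-in op
        "contrast enhancement": lambda x: np.clip(x * 1.1, 0, 255),    # stand-in op
        "detail enhancement": lambda x: np.fft.rfft2(x.mean(axis=2)),  # stand-in op
    }
    for name, fn in stages.items():
        m, s = time_stage(fn, img)
        print(f"{name}: {m:.2f} ± {s:.2f} ms")
```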
Table 6. Comparison of key point quantities. (a)–(c) correspond to the subfigures of Figure 5.

        Raw Image   Enhanced Image   Increase in Magnitude
(a)     707         1463             108%
(b)     304         677              122%
(c)     469         902              92%
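The key-point comparison behind Table 6 can be reproduced in outline by counting detected features on the raw and enhanced versions of the same image. The excerpt does not state which detector was used, so the sketch below assumes ORB purely for illustration; the image paths are placeholders.

```python
# Minimal sketch: counting key points on raw vs. enhanced images with ORB (assumed detector).
import cv2

def count_keypoints(path: str, max_features: int = 5000) -> int:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=max_features)
    keypoints = orb.detect(gray, None)
    return len(keypoints)

if __name__ == "__main__":
    raw = count_keypoints("raw/sample_a.png")            # placeholder path
    enhanced = count_keypoints("enhanced/sample_a.png")  # placeholder path
    increase = (enhanced - raw) / raw * 100 if raw else float("inf")
    print(f"raw: {raw}  enhanced: {enhanced}  increase: {increase:.0f}%")
```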
Table 7. Comparison of target detection results of different algorithms. All bold values in the table indicate the optimal results under the corresponding evaluation metrics.

Method      P      R      F1     F1 Improvement
Raw image   0.628  0.541  0.582  -
DCTE        0.685  0.623  0.653  12.2%
UNTV        0.712  0.668  0.689  18.4%
PCDE        0.698  0.647  0.672  15.5%
PCFB        0.689  0.656  0.672  15.5%
UDHTV       0.775  0.700  0.735  26.3%
ZSRM        0.753  0.691  0.721  23.9%
WFAC        0.764  0.714  0.738  26.8%
DCT         0.892  0.875  0.883  51.7%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
