Article

Hyperspectral Image Denoising Based on Non-Convex Correlated Total Variation

1 School of Mathematics and Statistics, Northwestern Polytechnical University, Xi’an 710021, China
2 School of Automation, Northwestern Polytechnical University, Xi’an 710021, China
3 Research and Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen 518063, China
* Author to whom correspondence should be addressed.
† These authors contributed equally and can be considered as co-first authors.
Remote Sens. 2025, 17(12), 2024; https://doi.org/10.3390/rs17122024
Submission received: 10 April 2025 / Revised: 2 June 2025 / Accepted: 9 June 2025 / Published: 12 June 2025

Abstract

Hyperspectral image (HSI) quality is generally degraded by diverse noise contamination during acquisition, which adversely impacts subsequent processing performance. Current techniques predominantly rely on nuclear norms and low-rank matrix approximation theory to model the inherent property that HSIs lie in a low-dimensional subspace. Recent research has demonstrated that HSI gradient maps also exhibit low-rank priors. The correlated total variation (CTV), which is defined as the nuclear norm of gradient maps, can simultaneously model low-rank and local smoothness priors, and shows better performance than the standard nuclear norm. However, similar to nuclear norms, CTV may excessively penalize large singular values. To overcome these constraints, this study introduces a non-convex correlated total variation (NCTV), which shows the potential to eliminate mixed noise (including Gaussian, impulse, stripe, and dead-line noise) while preserving critical textures and spatial–spectral details. Numerical experiments on both simulated and real HSI datasets demonstrate that the proposed NCTV method achieves better performance in detail retention compared with the state-of-the-art techniques.

1. Introduction

Hyperspectral images (HSIs) are an essential modality in remote sensing that play a critical role in such diverse applications as environmental monitoring and agricultural assessment [1,2,3,4]. By capturing target scenes across continuous spectral bands, HSIs provide rich spectral information, enabling precise classification and identification of ground objects [5,6]. However, during practical acquisition HSIs are often contaminated by various types of noise, including Gaussian noise, impulse noise, stripe noise, and dead-line noise [3,7]. Such noise not only degrades image quality but also adversely affects downstream tasks such as object detection [8], image fusion [9,10,11], image segmentation [12], compressive snapshot reconstruction [13], cloud removal [14], and quantitative analysis [4,15,16,17]. Therefore, eliminating noise in HSIs while preserving structural details remains a critical research topic in HSI processing [18].
With the rapid development of deep learning techniques in recent years, data-driven denoising methods have shown remarkable success in HSI processing [19]. For instance, convolutional neural networks (CNNs) and autoencoders can be trained in an end-to-end manner to automatically extract complex noise characteristics and structural information from large-scale datasets, achieving excellent performance under specific noise conditions [20,21,22]. Furthermore, architectures such as 3D-CNNs and transformers have been proposed to exploit the joint spatial–spectral correlations in HSIs, resulting in significantly improved denoising performance [23,24].
Despite these advances, deep learning methods often require a large amount of labeled or clean training data, and can exhibit limited generalization when faced with unknown or complex noise distributions commonly encountered in real-world scenarios [25]. In contrast, model-based methods such as the non-convex correlated total variation (NCTV) approach proposed in this paper are built upon well-established mathematical priors, enabling them to adapt flexibly to diverse noise types without relying on training data. This makes them robust and interpretable in practical applications involving mixed noise environments.
Methods based on mathematical models have attracted significant attention in the field of HSI denoising. For an HSI with $HW$ pixels and $B$ bands, let $X$ and $Y$ represent the clean and noisy image, respectively. The observation model can be expressed as $Y = X + S$, where $S$ represents the noise term. The denoising problem is typically formulated as a minimization:
$$\min_{X} \; \mathcal{L}(X, Y) + \gamma R(X) \tag{1}$$
where the first term $\mathcal{L}(\cdot, \cdot)$ is the data fidelity term (often chosen as the $\ell_1$-norm) and the second term $R(\cdot)$ is the regularization term with weight $\gamma$. The design of $R(\cdot)$ typically relies on prior knowledge, which includes three common types [26]:
1. Non-local self-similarity prior: exploits the recurrence of structurally similar patches across spatial domains [4].
2. Local smoothness prior: enforces consistency among adjacent pixels through spatial continuity constraints [7,27,28].
3. Low-rank prior: leverages spectral correlation to model global redundancy in the data cube [29,30,31,32].
While the non-local self-similarity prior achieves strong denoising performance, it relies on computationally intensive operations such as patch extraction and high-order tensor reorganization, which limits its practical applicability. In contrast, the local smoothness and low-rank priors strike a better balance between efficiency and effectiveness [33,34].
The low-rank prior is widely regarded as one of the most prominent priors for hyperspectral images [35]. For a clean HSI data matrix, its rank is typically significantly lower than either the number of pixels H W or the band count B [36]. Mathematically, when an HSI is represented as a matrix, the global correlation along the spectrum indicates that the data matrix has a low-rank property. This property has been utilized in [37,38,39] for HSI denoising. By leveraging this prior, a robust principal component analysis (RPCA) model can be established to achieve HSI denoising, with the optimization problem formulated as follows [40]:
$$\min_{X, S} \; \|X\|_* + \lambda \|S\|_1, \quad \mathrm{s.t.} \;\; Y = X + S, \tag{2}$$
where $\|X\|_* = \sum_i \sigma_i$ represents the nuclear norm (with $\sigma_i$ as the $i$th singular value of matrix $X$) and the $\ell_1$-norm of matrix $S$ is defined as $\|S\|_1 = \sum_{i,j} |s_{ij}|$. Researchers such as Wright et al. [41] and Candès et al. [42] have proven that, under the incoherence assumption, exact recovery is possible even when the data are severely corrupted by sparse noise. As a result, this model has been extensively applied in the field of HSI denoising [43,44,45,46,47].
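For readers who wish to experiment with this decomposition, the following NumPy sketch implements Equation (2) via the standard inexact augmented Lagrangian iteration. This is a minimal illustration, not the code used in this paper (the experiments in Section 4 were run in MATLAB); the initialization values, iteration count, and default $\lambda = 1/\sqrt{\max(m, n)}$ follow common RPCA practice.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: the proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Soft thresholding: the proximal operator of the l1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(Y, lam=None, mu=1e-3, rho=1.5, n_iter=100):
    # Inexact augmented-Lagrangian iteration for Eq. (2):
    #   min ||X||_* + lam * ||S||_1  s.t.  Y = X + S.
    m, n = Y.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    X, S, Gamma = np.zeros_like(Y), np.zeros_like(Y), np.zeros_like(Y)
    for _ in range(n_iter):
        X = svt(Y - S + Gamma / mu, 1.0 / mu)       # low-rank update
        S = soft(Y - X + Gamma / mu, lam / mu)      # sparse-noise update
        Gamma = Gamma + mu * (Y - X - S)            # multiplier update
        mu *= rho
    return X, S
```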
Additionally, HSIs are not only affected by sparse noise but may also be influenced by non-sparse noise. In such cases, the performance of RPCA-based models can be further improved by incorporating the local smoothness prior through the introduction of a total variation (TV) regularization term [48]. The optimization problem can then be expressed as   
$$\min_{X, S} \; \|X\|_* + \lambda \|S\|_1 + \gamma \|X\|_{\mathrm{TV}}, \quad \mathrm{s.t.} \;\; Y = X + S, \tag{3}$$
where the TV term $\|X\|_{\mathrm{TV}} = \sum_{i=1}^{3} \|\nabla_i X\|_1$ penalizes gradient magnitudes across the spatial and spectral dimensions, with $\nabla_1 X$, $\nabla_2 X$, and $\nabla_3 X$ respectively denoting the horizontal, vertical, and spectral gradients of the data along the three dimensions.
Although the aforementioned model and its variants have achieved significant progress, they struggle to reconcile the local smoothness and low-rank priors. While incorporating the local smoothness prior into RPCA-based models can handle non-sparse noise [49,50,51,52,53], the additional TV regularization term in Formula (3) also introduces a tunable parameter $\gamma$. The effectiveness of these methods depends heavily on the parameter settings, undoubtedly increasing the burden of manual parameter tuning. More importantly, simply combining the nuclear norm and total variation treats low-rankness and local smoothness as independent components, resulting in inefficient modeling. To address this issue, Peng et al. [54] developed a correlated total variation (CTV) regularization term that can simultaneously model the local smoothness and low-rank priors, defined as
$$\|X\|_{\mathrm{CTV}} = \frac{1}{3} \sum_{i=1}^{3} \|\nabla_i X\|_*. \tag{4}$$
Obviously, CTV imposes the nuclear norm on the gradient maps $\nabla_i X$ instead of on the HSI itself; therefore, CTV directly models the low-rank property of the gradient maps. This not only removes the tunable parameter but also provides better characterization of details, leading to superior performance over the standard nuclear norm.
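As a concrete illustration of Equation (4), the sketch below evaluates the CTV of an HSI tensor. The use of periodic (circular) first-order differences is an assumption here, chosen to be consistent with the FFT-based solver in Section 3.2.

```python
import numpy as np

def gradients(X):
    # First-order differences along height, width, and spectrum
    # (periodic boundaries, matching the FFT-based solver in Section 3.2).
    return [np.roll(X, -1, axis=k) - X for k in range(3)]

def ctv(X_tensor):
    # Eq. (4): mean nuclear norm of the three unfolded gradient maps.
    H, W, B = X_tensor.shape
    total = 0.0
    for G in gradients(X_tensor):
        sigma = np.linalg.svd(G.reshape(H * W, B), compute_uv=False)
        total += sigma.sum()  # nuclear norm of one gradient map
    return total / 3.0
```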
As CTV is built upon nuclear norms, it inherits some of their drawbacks. It is well known that the nuclear norm is a convex relaxation of the rank; thus, minimizing the nuclear norm (i.e., the sum of the singular values) inevitably introduces a shrinkage effect for large singular values. Given that large singular values correspond to important information, this shrinkage effect results in biased estimation of the data [55,56,57]. Approximating the rank in the family of non-convex norms can reduce this shrinkage effect [56,57,58,59,60].
To this end, the present study proposes a new HSI denoising method based on the non-convex correlated total variation (NCTV). The proposed approach introduces a non-convex function to more accurately approximate the rank of the matrix and integrates the gradient information of hyperspectral images to construct a novel regularization term that effectively combines the low-rank and local smoothness priors. Extensive simulation experiments show that when compared to existing algorithms, the proposed method not only effectively removes mixed noise from hyperspectral images but also preserves image details more meticulously during the denoising process, thereby significantly improving the overall quality of denoising.
The main contributions of this work can be briefly summarized as follows:
1. We propose the non-convex correlated total variation (NCTV) regularization term for simultaneously modeling the low-rank and local smoothness priors.
2. We propose a hyperspectral image denoising algorithm based on NCTV, demonstrating excellent denoising performance in scenarios with severe mixed noise.
The remainder of this article is structured as follows: Section 2 reviews relevant background knowledge and explains the motivation behind this study while also analyzing the low-rank models for hyperspectral image denoising; Section 3 presents the non-convex correlated total variation regularization (NCTV) and describes its application in hyperspectral image denoising; Section 4 reports the outcomes of our numerical experiments; finally, Section 5 summarizes the findings of this study.

2. Background and Motivation

As highlighted in the introduction, HSIs are inherently susceptible to complex noise contamination during acquisition [61,62]. This degradation arises from multiple sources, including sensor thermal fluctuations, photon shot noise, atmospheric scattering, and calibration errors, which manifest as mixed noise types such as Gaussian noise, impulse noise (i.e., sparse outliers), stripe noise (i.e., line artifacts), and dead-line noise (i.e., column-wise dropouts). The degradation model of a hyperspectral image can be expressed as the sum of a clean image $X$ and noise $S$, i.e., $Y = X + S$ [63]. Critically, the spectral correlation inherent to HSIs (stemming from the high redundancy between adjacent bands) endows the data with low-rank properties. This characteristic enables effective subspace regularization through low-rank constraints, as the intrinsic dimensionality of the clean HSI $X$ is significantly lower than its ambient space defined by the spatial dimensions ($HW$) and spectral bands ($B$).
The theoretical foundation for exploiting this low-rank structure was established by Wright et al. [41] and Candès et al. [42], who introduced the RPCA framework. The RPCA model originally aimed to minimize the rank of the data; due to the discreteness of the matrix rank, this problem is NP-hard [64]. To overcome this difficulty, a common approach is convex relaxation [45,60,65], in which the rank is replaced with the nuclear norm so as to convert the problem into a convex one, as shown in Equation (2). The nuclear norm, defined as $\|X\|_* = \sum_i \sigma_i$, linearly penalizes singular values, leading to uniform shrinkage across all magnitudes; however, this significantly differs from the rank function, which nonlinearly “counts” non-zero singular values via the step function:
$$\mathrm{Rank}(X) = \sum_i f(\sigma_i) \tag{5}$$
where $f(\sigma_i)$ represents the step function
$$f(\sigma_i) = \mathbb{I}(\sigma_i > 0) = \begin{cases} 1, & \text{if } \sigma_i > 0; \\ 0, & \text{if } \sigma_i = 0, \end{cases} \tag{6}$$
with $\mathbb{I}(\sigma_i > 0)$ denoting the indicator function of the event $\sigma_i > 0$.
In fact, the definition of the nuclear norm can also be rewritten as a function of the singular values:
$$\|X\|_* = \sum_i g(\sigma_i) \tag{7}$$
where $g(\sigma_i)$ represents the identity mapping. Comparing Equations (5) and (7), replacing the matrix rank with the nuclear norm amounts to approximating the step function by the identity mapping. As visualized in Figure 1, this approximation is suboptimal, particularly for larger singular values. This mismatch introduces a shrinkage bias in which dominant singular values that encode critical structural and spectral information are disproportionately penalized. Consequently, nuclear norm minimization risks oversmoothing details, thereby undermining the fidelity of the recovered HSI.
Although the correlated total variation (CTV) regularization term provides an innovative approach to unifying low-rank and local smoothness priors by applying the nuclear norm to the gradient maps $\nabla_i X$, it still inherits this fundamental flaw. By imposing $\|\nabla_i X\|_*$, CTV propagates the shrinkage bias into the gradient domain, potentially erasing fine textures and edges. This limitation underscores the need for non-convex rank surrogates that can better approximate the step function.
In summary, while convex relaxation via the nuclear norm has enabled progress in HSI denoising, its inherent approximation errors necessitate advanced regularization strategies. Addressing these limitations forms the cornerstone of our proposed non-convex CTV framework, which is presented in the next section.

3. Method

3.1. Non-Convex Correlated Total Variation

According to the analysis in the previous section, we propose to utilize the low-rankness of the gradient maps of hyperspectral images for regularization, i.e., applying the nuclear norm $\|\nabla_i X\|_*$ to $\nabla_i X$. However, the nuclear norm has limitations in accurately estimating the rank; essentially, this stems from the error incurred when the identity mapping is used to approximate the step function. To address this issue, we employ the non-convex function $h(\sigma_i) = \sqrt{\sigma_i}$ in place of the identity mapping, which is the difference between our NCTV method and the previous CTV method. Figure 1 shows that $h(\sigma_i) = \sqrt{\sigma_i}$ approximates the step function better than $g(\sigma_i) = \sigma_i$. Although various non-convex functions exist, only regularization with $h(\sigma_i) = \sqrt{\sigma_i}$ admits a closed-form solution; this property permits explicit mathematical expression of the results and facilitates efficient algorithmic implementation. Based on this observation, we propose the following definition.
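A quick numerical check of this argument, using a few illustrative singular values:

```python
import numpy as np

# Worked check of Figure 1's argument for a few illustrative singular values.
sigmas = np.array([0.0, 0.04, 0.25, 1.0, 9.0])
f = (sigmas > 0).astype(float)   # step function: what the rank actually counts
g = sigmas                       # identity mapping: the nuclear-norm surrogate
h = np.sqrt(sigmas)              # square root: the Schatten-1/2 surrogate
# For the dominant value 9.0, g contributes 9.0 while h contributes 3.0;
# h is much closer to the step function's 1.0, so large (informative)
# singular values are penalized far less under the non-convex surrogate.
```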
Definition 1 (Non-Convex Correlated Total Variation). For a tensor $\mathcal{X} \in \mathbb{R}^{H \times W \times B}$, its matrix form is denoted as $X \in \mathbb{R}^{HW \times B}$. If $\nabla_i X$ ($i = 1, 2, 3$) is the gradient map along the $i$th dimension, then the non-convex correlated total variation is defined as
$$\|X\|_{\mathrm{NCTV}} = \frac{1}{3} \sum_{i=1}^{3} \|\nabla_i X\|_{*,1/2}, \tag{8}$$
where $\|\nabla_i X\|_{*,1/2} = \sum_{j=1}^{B} \sqrt{\sigma_j}$ is the Schatten-$1/2$ penalty and $\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_B)$ is the vector of singular values of $\nabla_i X$.
HSIs can be represented in two forms, the matrix and the tensor; these two forms can be converted into each other using the reshape operation. While the form in the definition is a matrix, the algorithmic implementation first computes gradients along the three dimensions in tensor form. These tensor-based gradients are subsequently converted into matrix form in order to apply non-convex nuclear norm regularization, as the nuclear norm operator is defined for matrix-structured data.
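Mirroring the hypothetical `ctv` sketch in Section 1, Equation (8) can be evaluated as follows. This is an illustration of the definition rather than the paper's implementation, and it reuses the `gradients` helper sketched earlier.

```python
import numpy as np

def nctv(X_tensor):
    # Eq. (8): mean Schatten-1/2 penalty of the three gradient maps,
    # computed on tensor-form gradients and their matrix unfoldings.
    H, W, B = X_tensor.shape
    total = 0.0
    for G in gradients(X_tensor):  # hypothetical helper sketched in Section 1
        sigma = np.linalg.svd(G.reshape(H * W, B), compute_uv=False)
        total += np.sqrt(sigma).sum()  # sum of square-rooted singular values
    return total / 3.0
```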

3.2. Application of the Non-Convex Correlated Total Variation in Hyperspectral Image Denoising

Based on the proposed non-convex correlated total variation and combined with the idea of robust principal component analysis, the following model is established:
$$\min_{X, S} \; \|X\|_{\mathrm{NCTV}} + \lambda \|S\|_1 \quad \mathrm{s.t.} \;\; Y = X + S. \tag{9}$$
According to the definition of the non-convex correlated total variation, this can be rewritten as follows:
$$\min_{X, S} \; \frac{1}{3} \sum_{i=1}^{3} \|\nabla_i X\|_{*,1/2} + \lambda \|S\|_1 \quad \mathrm{s.t.} \;\; Y = X + S. \tag{10}$$
Because the regularization term is applied to the discrete gradients of the data, auxiliary variables $G_i$ are introduced for decoupling:
$$\min_{X, S, \{G_i\}} \; \frac{1}{3} \sum_{i=1}^{3} \|G_i\|_{*,1/2} + \lambda \|S\|_1 \quad \mathrm{s.t.} \;\; Y = X + S, \;\; G_i = \nabla_i X \;\; (i = 1, 2, 3). \tag{11}$$
Because the objective function is non-convex and non-smooth, the optimization is challenging; therefore, the alternating direction method of multipliers (ADMM) [66,67] is used to solve it [68]. Following the ADMM framework, we first formulate the augmented Lagrangian function corresponding to the objective function as follows:
$$\mathcal{L}\big(X, S, \{\Gamma_i\}_{i=1}^{4}, \{G_i\}_{i=1}^{3}\big) = \sum_{i=1}^{3} \|G_i\|_{*,1/2} + 3\lambda \|S\|_1 + \sum_{i=1}^{3} \frac{\mu}{2} \left\| \nabla_i X - G_i + \frac{\Gamma_i}{\mu} \right\|_F^2 + \frac{\mu}{2} \left\| Y - X - S + \frac{\Gamma_4}{\mu} \right\|_F^2 \tag{12}$$
where $\mu$ is the penalty parameter and $\Gamma_i$ ($i = 1, 2, 3, 4$) are the Lagrange multipliers.
Next, we discuss how to solve the subproblems for each variable.
1. Update $S^{k+1}$: removing all terms that do not involve $S$ from the Lagrangian function results in the following subproblem:
$$\min_{S} \; 3\lambda \|S\|_1 + \frac{\mu}{2} \left\| Y - X - S + \frac{\Gamma_4}{\mu} \right\|_F^2. \tag{13}$$
The solution to the above problem can be expressed as $S^{k+1} = \mathcal{S}_{3\lambda/\mu}\big(Y - X^{k} + \Gamma_4^{k}/\mu\big)$, where $\mathcal{S}_{\tau}(x) = \mathrm{sign}(x) \max(|x| - \tau, 0)$ is the element-wise soft-thresholding operator.
2. Update $(X^{k+1}, G_i^{k+1})$: we first update $G_i$ by solving the following subproblem:
$$\min_{G_i} \; \|G_i\|_{*,1/2} + \frac{\mu}{2} \left\| G_i - \left( \nabla_i X^{k} + \frac{\Gamma_i^{k}}{\mu} \right) \right\|_F^2. \tag{14}$$
The solution to this subproblem is as follows:
$$G_i^{k+1} = U \, H_{1/\mu}(\Sigma) \, V^{T}, \qquad U \Sigma V^{T} = \mathrm{svd}\big( \nabla_i X^{k} + \Gamma_i^{k}/\mu \big) \tag{15}$$
where
$$H_{\gamma}(x) = \begin{cases} \dfrac{2}{3} x \left( 1 + \cos\left( \dfrac{2\pi}{3} - \dfrac{2}{3} h_{\gamma}(x) \right) \right), & |x| > \dfrac{\sqrt[3]{54}}{4} \, \gamma^{2/3} \\[2mm] 0, & \text{otherwise} \end{cases} \tag{16}$$
$$h_{\gamma}(x) = \arccos\left( \frac{\gamma}{8} \left( \frac{|x|}{3} \right)^{-3/2} \right). \tag{17}$$
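For reference, Equations (16) and (17) translate directly into code. The sketch below is our reading of the formulas (not the authors' released code) and applies the operator entrywise to a vector of singular values:

```python
import numpy as np

def half_threshold(x, gamma):
    # Eqs. (16)-(17): closed-form half-thresholding for the Schatten-1/2
    # penalty, applied entrywise to a vector of singular values x >= 0.
    x = np.asarray(x, dtype=float)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * gamma ** (2.0 / 3.0)
    out = np.zeros_like(x)
    mask = np.abs(x) > thresh
    phi = np.arccos((gamma / 8.0) * (np.abs(x[mask]) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * x[mask] * (
        1.0 + np.cos(2.0 * np.pi / 3.0 - (2.0 / 3.0) * phi)
    )
    return out
```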
Next, we update $X$ by solving the following subproblem:
$$\min_{X} \; \sum_{i=1}^{3} \frac{\mu}{2} \left\| \nabla_i X - G_i^{k+1} + \Gamma_i^{k}/\mu \right\|_F^2 + \frac{\mu}{2} \left\| Y - X - S^{k+1} + \Gamma_4^{k}/\mu \right\|_F^2. \tag{18}$$
Optimizing the above problem can be regarded as solving the following linear system:
$$\left( \mu I + \mu \sum_{i=1}^{3} \nabla_i^{T} \nabla_i \right)(X) = \mu \big( Y - S^{k+1} \big) + \Gamma_4^{k} + \sum_{i=1}^{3} \nabla_i^{T}\big( \mu G_i^{k+1} - \Gamma_i^{k} \big), \tag{19}$$
where $\nabla_i^{T}(\cdot)$ represents the adjoint (transpose) of the operator $\nabla_i(\cdot)$. Because the matrix corresponding to the operator $\nabla_i^{T} \nabla_i$ has a block-circulant structure, it can be diagonalized by applying the Fourier transform to both sides of the linear system. According to the convolution theorem, the closed-form solution for $X^{k+1}$ can then be derived as follows:
$$\begin{aligned} H &= \sum_{i=1}^{3} \mathcal{F}(\nabla_i)^{*} \odot \mathcal{F}\big( \mu G_i^{k+1} - \Gamma_i^{k} \big) \\ T_x &= |\mathcal{F}(\nabla_1)|^{2} + |\mathcal{F}(\nabla_2)|^{2} + |\mathcal{F}(\nabla_3)|^{2} \\ X^{k+1} &= \mathcal{F}^{-1}\left( \frac{\mathcal{F}\big( \mu Y - \mu S^{k+1} + \Gamma_4^{k} \big) + H}{\mu \mathbf{1} + \mu T_x} \right) \end{aligned} \tag{20}$$
where $\mathbf{1}$ represents a tensor with all elements equal to 1, $\odot$ represents element-wise multiplication, $\mathcal{F}(\cdot)$ represents the Fourier transform, and $|\cdot|^{2}$ represents the element-wise squared magnitude.
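The FFT solve in Equation (20) can be sketched as follows. Periodic boundary conditions for the difference operators are assumed, and this hypothetical helper operates directly on the $H \times W \times B$ tensor:

```python
import numpy as np

def x_update(Y, S, G, Gam, Gam4, mu):
    # FFT solve of the linear system (19), cf. Eq. (20), on the H x W x B
    # tensor, assuming periodic-boundary first-order differences.
    shape = Y.shape
    Fd = []
    for ax in range(3):
        d = np.zeros(shape)
        idx = [0, 0, 0]
        d[tuple(idx)] = -1.0
        idx[ax] = 1
        d[tuple(idx)] = 1.0
        Fd.append(np.fft.fftn(d))  # frequency response of the difference kernel
    numer = np.fft.fftn(mu * Y - mu * S + Gam4)
    denom = mu * np.ones(shape)
    for i in range(3):
        numer += np.conj(Fd[i]) * np.fft.fftn(mu * G[i] - Gam[i])
        denom += mu * np.abs(Fd[i]) ** 2
    return np.real(np.fft.ifftn(numer / denom))
```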
3. Update the multipliers $\Gamma_i^{k+1}$ ($i = 1, 2, 3, 4$): based on the ADMM principle, the multipliers are further updated using the following equations:
$$\begin{aligned} \Gamma_i^{k+1} &= \Gamma_i^{k} + \mu \big( \nabla_i X^{k+1} - G_i^{k+1} \big), \quad i = 1, 2, 3 \\ \Gamma_4^{k+1} &= \Gamma_4^{k} + \mu \big( Y - X^{k+1} - S^{k+1} \big) \\ \mu &\leftarrow \rho \mu \end{aligned} \tag{21}$$
where $\rho$ is a constant greater than 1.
Algorithm 1 summarizes the workflow, while Figure 2 shows the flow of the algorithm.
Algorithm 1 ADMM Algorithm for the NCTV Denoising Model
Require: Noisy hyperspectral image $\mathcal{Y} \in \mathbb{R}^{H \times W \times B}$, its matrix form $Y \in \mathbb{R}^{HW \times B}$, $\mu = 10^{-6}$, $\epsilon = 10^{-6}$
Ensure: $X$
1: Initialize $S = 0$.
2: Initialize $X$ randomly from a standard normal distribution.
3: while not converged do
4:    Update $G_i^{k+1}$.
5:    Update $S^{k+1}$.
6:    Update $X^{k+1}$.
7:    Update $\Gamma_i^{k+1}$.
8:    Update $\mu = \rho \mu$.
9:    Update $k := k + 1$.
10:   if $\|Y - X^{k+1} - S^{k+1}\|_F^2 / \|Y\|_F^2 \le \epsilon$ and $\|\nabla_i X^{k+1} - G_i^{k+1}\|_F^2 / \|Y\|_F^2 \le \epsilon$ then
11:      break
12:   end if
13: end while
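Putting the pieces together, a skeleton of Algorithm 1 might look like the following sketch. It reuses the hypothetical helpers introduced above (`gradients`, `soft`, `half_threshold`, and `x_update`) and mirrors the update order of the pseudocode, but it is not the authors' implementation.

```python
import numpy as np

def nctv_denoise(Y, lam, mu=1e-6, rho=1.7, eps=1e-6, max_iter=100):
    # Skeleton of Algorithm 1 on an H x W x B cube.
    H, W, B = Y.shape
    X = np.random.standard_normal(Y.shape)        # random initialization
    S = np.zeros_like(Y)
    G = [np.zeros_like(Y) for _ in range(3)]
    Gam = [np.zeros_like(Y) for _ in range(3)]
    Gam4 = np.zeros_like(Y)
    y_norm = np.linalg.norm(Y) ** 2
    for _ in range(max_iter):
        # G-update: SVD of each unfolded gradient map + half-thresholding, Eq. (15).
        for i, D in enumerate(gradients(X)):
            M = (D + Gam[i] / mu).reshape(H * W, B)
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            G[i] = (U @ np.diag(half_threshold(s, 1.0 / mu)) @ Vt).reshape(Y.shape)
        S = soft(Y - X + Gam4 / mu, 3.0 * lam / mu)   # S-update, Eq. (13)
        X = x_update(Y, S, G, Gam, Gam4, mu)          # X-update, Eq. (20)
        D = gradients(X)
        for i in range(3):                            # multiplier updates, Eq. (21)
            Gam[i] = Gam[i] + mu * (D[i] - G[i])
        Gam4 = Gam4 + mu * (Y - X - S)
        mu *= rho
        r1 = np.linalg.norm(Y - X - S) ** 2 / y_norm
        r2 = max(np.linalg.norm(D[i] - G[i]) ** 2 for i in range(3)) / y_norm
        if r1 <= eps and r2 <= eps:                   # convergence check
            break
    return X
```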

4. Experiments

This study included comprehensive experiments to validate the effectiveness of the proposed non-convex correlated total variation method and compare the results with existing hyperspectral image denoising methods. Prior to denoising, the grayscale values of each band were scaled proportionally to the range [0, 1] and subsequently restored to their original dynamic range in postprocessing. All experiments were performed using MATLAB R2022a on a laptop computer.
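A minimal sketch of this preprocessing step is given below; band-wise min-max scaling is an assumption, as the paper does not state whether the scaling is per band or global.

```python
import numpy as np

def scale01(Y):
    # Band-wise min-max scaling to [0, 1]; returns the scaled cube and the
    # statistics needed to restore the original dynamic range afterwards.
    mn = Y.min(axis=(0, 1), keepdims=True)
    mx = Y.max(axis=(0, 1), keepdims=True)
    return (Y - mn) / (mx - mn + 1e-12), (mn, mx)

def unscale01(X01, stats):
    # Invert scale01 after denoising.
    mn, mx = stats
    return X01 * (mx - mn + 1e-12) + mn
```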
The compared methods are briefly introduced below along with their respective parameter settings.
1. NMoG (non-i.i.d. mixture of Gaussians) [69]: NMoG models the noise within each spectral band using a distinct MoG distribution and imposes hierarchical priors on the MoG parameters. The parameters were set as follows: target rank of the low-rank component = 5, initial rank of the low-rank component = 30, rank reduction per iteration = 7, Gaussian mixture components reduced per band = 1, maximum number of iterations = 30, and convergence tolerance = $10^{-4}$.
2. NGMeet (non-local meets global) [70]: NGMeet proposes a unified spatial–spectral denoising paradigm that jointly models the global spectral low-rank property (via an orthogonal basis and reduced image) and spatial non-local similarity (via low-rank regularization on the reduced image).
3. LRTV (low-rank total variation) [46]: LRTV integrates nuclear norm minimization for the spectral low-rank property, total variation regularization for spatial smoothness, and $\ell_1$-norm regularization for sparse noise separation within a unified framework. The parameters were set as follows: when the number of bands exceeded 100, $\tau$ was set to 0.015, $\lambda$ to $20/\sqrt{MN}$, and the target rank to 10; when the number of bands did not exceed 100, $\tau$ was set to 0.01, $\lambda$ to $10/\sqrt{MN}$, and the target rank to 5.
4. CTV (correlated total variation) [54]: CTV regularization captures the joint low-rankness and local smoothness by applying the nuclear norm to the gradient maps of the data. The $\rho$ parameter was set to 1.5.
5. 3DTNN (three-directional tensor nuclear norm) [71]: 3DTNN employs a convex three-directional tensor nuclear norm as a regularizer to enforce low-rankness across all modes of the hyperspectral image tensor. The standard deviation of random noise was set to a uniform distribution between 0 and 70/255, while $\theta$ was set to 0.001, $\phi$ to 0.004, $\varpi$ to 1, and $\omega$ to 100.
6. 3DLogTNN (three-directional log-based tensor nuclear norm) [71]: This model employs a non-convex logarithmic function to approximate the rank by penalizing singular values differently across three directional tensor nuclear norms. The standard deviation of random noise was set to a uniform distribution between 0 and 70/255. The $\theta$ parameter was set to 0.001, $\phi$ to 0.00005, $\varpi$ to 0.011, and $\omega$ to 10,000. Finally, the logarithmic tolerance was set to 80.
7. WNLRATV (weighted non-local low-rank model with adaptive total variation regularization) [72]: WNLRATV integrates a weighted term based on non-i.i.d. mixture-of-Gaussian noise modeling, a non-local low-rank tensor prior, and an adaptive edge-preserving total variation regularization for denoising. The parameters were set as follows: the initial rank was set to 3, the target rank to 6, $\alpha$ to 30, $\beta$ to 1, $\gamma$ to 0.08, the maximum number of iterations to 15, the patch number to 200, and $\lambda$ to 0.2.
8. BALMF (band-wise asymmetric Laplacian noise modeling matrix factorization) [63]: BALMF models the hyperspectral image noise per band using an asymmetric Laplacian distribution within a low-rank matrix factorization framework. The $r$ parameter was set to 4.

4.1. Simulation Results and Analysis

4.1.1. Data Simulation

In this section, we first compare the denoising effects of different methods on images containing mixed noise. The selected datasets were Reno_256_128 and WDCMall, with image sizes of 256 × 256 × 128 and 200 × 200 × 160, respectively. The pixel values of all images were normalized to the range [0, 1]. In this experiment, six types of noise were considered, briefly described as follows (a sketch of one possible simulation of the most complex case follows the list):
1. Gaussian noise with a standard deviation of 25/255 is added following an independent and identically distributed (i.i.d.) pattern.
2. Non-i.i.d. Gaussian noise is added with a standard deviation randomly distributed between 10/255 and 70/255.
3. On the basis of noise type 2, impulse noise is randomly added to 1/3 of the bands, with the noise intensity randomly distributed between 0.05 and 0.2.
4. On the basis of noise type 2, stripe noise is randomly added to 1/3 of the bands, with the noise intensity randomly distributed between 0.05 and 0.2.
5. On the basis of noise type 2, 1/3 of the bands are randomly selected for added dead-line noise, with the noise intensity randomly distributed between 0.05 and 0.2.
6. On the basis of noise type 2, 1/3 of the bands are randomly selected for added mixed noise consisting of impulse, stripe, and dead-line noise, with the noise intensity randomly distributed between 0.05 and 0.2 for all types.
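The exact simulation code is not given in the paper, so the following sketch shows one plausible reading of the most complex setting (noise case 6); the salt-and-pepper impulse values, constant column offsets for stripes, and zeroed columns for dead lines are all assumptions.

```python
import numpy as np

def add_mixed_noise(X, rng=None):
    # One plausible reading of noise case 6: non-i.i.d. Gaussian noise on all
    # bands, then impulse, stripe, and dead-line noise on a random third of
    # the bands, each with a corruption ratio drawn from [0.05, 0.2].
    if rng is None:
        rng = np.random.default_rng(0)
    H, W, B = X.shape
    Y = X + rng.uniform(10 / 255, 70 / 255, size=B) * rng.standard_normal((H, W, B))
    for b in rng.choice(B, B // 3, replace=False):
        mask = rng.random((H, W)) < rng.uniform(0.05, 0.2)  # impulse (salt & pepper)
        Y[..., b][mask] = rng.integers(0, 2, mask.sum()).astype(float)
        cols = rng.choice(W, int(rng.uniform(0.05, 0.2) * W), replace=False)
        Y[:, cols, b] += rng.uniform(-0.25, 0.25)           # stripes (amplitude assumed)
        dead = rng.choice(W, int(rng.uniform(0.05, 0.2) * W), replace=False)
        Y[:, dead, b] = 0.0                                 # dead lines: zeroed columns
    return np.clip(Y, 0.0, 1.0)                             # clipping is an assumption
```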

4.1.2. Results Analysis

Four quantitative metrics were adopted for evaluation: the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), error relative to global variance (ERGAS), and spectral angle mapper (SAM). Typically, the denoised image will be closer to the original image if the first two metrics are higher and the last two metrics are lower, indicating better denoising performance.
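Two of these metrics are straightforward to state in code. The following sketch computes the mean PSNR over bands and the mean spectral angle, under the convention used here that images are scaled to [0, 1]:

```python
import numpy as np

def psnr(x, y):
    # Mean PSNR (dB) over bands, for cubes scaled to [0, 1].
    mse = np.mean((x - y) ** 2, axis=(0, 1))
    return float(np.mean(10.0 * np.log10(1.0 / np.maximum(mse, 1e-12))))

def sam(x, y, eps=1e-8):
    # Mean spectral angle mapper (degrees) over all pixels.
    num = np.sum(x * y, axis=2)
    den = np.linalg.norm(x, axis=2) * np.linalg.norm(y, axis=2) + eps
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))
    return float(np.degrees(ang.mean()))
```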
Examples of typical bands of the denoised images obtained by the different methods are shown in Figure 3 and Figure 4. For the results of case 4 in Figure 3, it can be seen that the results from methods such as NMoG, NGMeet, LRTV, and BALMF still contain noticeable dot-pattern noise. Moreover, upon magnifying the zoomed-in regions, it is apparent that the result obtained by our non-convex correlated total variation (NCTV) method is the closest to the true image and that the residual is relatively minimal, yielding the best denoising effect. For the results of case 6 in Figure 4, the results from methods such as NMoG, NGMeet, 3DTNN, 3DLogTNN, and BALMF still contain dot-pattern impulse noise and line-stripe noise, while the magnified regions show that the denoising effect of our NCTV method is better. Therefore, compared with the other denoising methods, our proposed method achieves the best visual effect in this simulation scenario.
Table 1 and Table 2 summarize the performance of all competing methods in terms of PSNR, SSIM, ERGAS, and SAM on the Reno_256_128 and WDCMall datasets. It can be observed that our NCTV method achieves the best or second-best metrics under almost all noise conditions. In particular, our method obtains the best results on both datasets under mixed complex noise conditions, implying that it may perform better in real-world scenarios. Moreover, the results of our method are the most stable when comprehensively considering all noise conditions, verifying the effectiveness of the non-convex correlated total variation in removing mixed noise.
To further compare the differences among the various methods, Figure 5 displays the PSNR values for each band of the denoised images. The Reno_256_128 dataset with noise type 3 and the WDCMall dataset with noise type 6 are shown, with the corresponding PSNR curves displayed in Figure 5a,b. The curves reveal that the PSNR values of the NCTV method are relatively high on most bands compared to the other methods, indicating the superior denoising effect of NCTV.

4.2. Experiments on Real-World Datasets

In the real-world experiment, we used the Mars and Indian_Pines hyperspectral images as real datasets and tested our denoising method against other existing methods. The Mars hyperspectral image was captured and cropped into blocks of size 400 × 400 with 102 channels, while the Indian_Pines hyperspectral image was captured and cropped into blocks of size 145 × 145 , with 206 channels.
The results for the denoised hyperspectral images are shown in Figure 6 and Figure 7. It can be observed that the denoising effects of the NMoG, NGMeet, LRTV, CTV, and BALMF methods are not satisfactory, as various levels of impulse and stripe noise remain in the images. The NCTV method proposed in this work provides better denoising performance than most methods, especially in terms of impulse and stripe noise. The results also show that NCTV restores the majority of details and maintains color consistency with the original image.

4.3. Model Analysis

4.3.1. Parameter Sensitivity

This section analyzes the selection and setting of the model parameters of our NCTV method, specifically the penalty parameter $\lambda$ and hyperparameter $\rho$, in order to examine its sensitivity. From robust principal component analysis theory, the optimal parameter can be proven to be $\lambda^* = 1/\sqrt{HW}$. However, the NCTV method proposed in this study deviates from the assumptions of robust principal component analysis, requiring adjustment of the $\lambda$ parameter. Based on experience, $\lambda$ can be set to $s/\sqrt{HW}$, leaving the $s$ parameter to be tuned. In a grid search experiment, we took uniform sampling points in the interval $[10^{-1.5}, 10^{1}]$ as candidate values for the $s$ parameter. On the other hand, previous studies usually set $\rho$ to 1.5; thus, we searched for the optimal parameter near this value, taking uniform sampling points in the interval $[1.6, 1.8]$ as candidate values for the hyperparameter $\rho$.
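Expressed compactly, the grid search described above might look like the following fragment; `clean`, `noisy`, `H`, and `W` are assumed to be in scope, and `nctv_denoise` and `psnr` are the hypothetical helpers sketched earlier.

```python
import numpy as np

# Hypothetical grid search: lambda = s / sqrt(H * W), rho near 1.7,
# scoring each setting by PSNR against the clean reference.
s_grid = np.logspace(-1.5, 1.0, 10)
rho_grid = np.linspace(1.6, 1.8, 5)
best = max(
    ((s, r, psnr(clean, nctv_denoise(noisy, lam=s / np.sqrt(H * W), rho=r)))
     for s in s_grid for r in rho_grid),
    key=lambda t: t[2],
)
print("best (s, rho, PSNR):", best)
```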
To examine the impact of the parameter λ and hyperparameter ρ , we implemented NCTV on the WDCMall dataset. As depicted in Figure 8, the PSNR curve gradually increases before gradually descending as λ increases, achieving the optimal performance when λ is around 0.05. On the other hand, Figure 9 indicates that the optimal hyperparameter ρ value is around 1.7.

4.3.2. Processing Time

Table 3 illustrates the processing time of different methods on the Reno_256_128 dataset. NMoG and LRTV are the fastest methods; however, as described before, the denoising performance of these methods is not as good as our NCTV method. Although our method is not the fastest, it provides better denoising results. In particular, although the denoising effect of the WNLRATV method is satisfactory, its processing time is much longer, meaning that its efficiency is relatively lower than that of our NCTV method.

5. Conclusions

In conclusion, our proposed denoising method for hyperspectral images based on the non-convex correlated total variation (NCTV) demonstrates superior performance compared to other existing methods when handling images with mixed noise containing Gaussian, impulse, stripe, and dead-line noise. Through simulation experiments on the Reno_256_128 and WDCMall datasets and using PSNR, SSIM, ERGAS, and SAM as evaluation metrics, our results indicate that the proposed NCTV method can effectively remove noise while largely preserving the texture and detail of the images. The results of the proposed NCTV method are also more stable under different noise conditions. Additionally, although the processing time of our method is not the best, its denoising results are superior in terms of both visual effect and metrics when compared to the NMoG, NGMeet, LRTV, CTV, 3DTNN, 3DLogTNN, and BALMF methods. Through a parameter sensitivity analysis, we determined the optimal range for the penalty parameter λ and hyperparameter ρ. In summary, the proposed non-convex correlated total variation denoising method provides an effective technical approach for the field of hyperspectral image denoising. Although our model possesses the aforementioned advantages, there remains room for improvement in certain aspects. The performance and efficiency could be further enhanced; in addition, future work could refine the model architecture and accelerate the algorithm.

Author Contributions

Conceptualization, S.X.; methodology, S.X.; software, J.S.; validation, C.M., Y.Y., and S.W.; formal analysis, Y.Y.; writing—original draft preparation, J.S.; writing—review and editing, S.W.; visualization, C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Guangdong Basic and Applied Basic Research Foundation under Grant 2023A1515011358, by the National Natural Science Foundation of China under Grant 12201497, and by the Young Talent Fund of Xi’an Association for Science and Technology Grant 0959202513207.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, S110–S122, Imaging Spectroscopy Special Issue. [Google Scholar] [CrossRef]
  2. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352. [Google Scholar] [CrossRef]
  3. Hu, Y.; Li, X.; Gu, Y.; Jacob, M. Hyperspectral Image Recovery Using Nonconvex Sparsity and Low-Rank Regularizations. IEEE Trans. Geosci. Remote Sens. 2020, 58, 532–545. [Google Scholar] [CrossRef]
  4. Chen, Y.; Guo, Y.; Wang, Y.; Wang, D.; Peng, C.; He, G. Denoising of Hyperspectral Images Using Nonconvex Low Rank Matrix Approximation. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5366–5380. [Google Scholar] [CrossRef]
  5. Lu, H.; Yang, Y.; Huang, S.; Tu, W.; Wan, W. A Unified Pansharpening Model Based on Band-Adaptive Gradient and Detail Correction. IEEE Trans. Image Process. 2022, 31, 918–933. [Google Scholar] [CrossRef] [PubMed]
  6. Yang, Y.; Tu, W.; Huang, S.; Lu, H.; Wan, W.; Gan, L. Dual-Stream Convolutional Neural Network With Residual Information Enhancement for Pansharpening. IEEE Trans. Geosci. Remote. Sens. 2022, 60, 5402416. [Google Scholar] [CrossRef]
  7. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral Image Restoration Using Low-Rank Matrix Recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  8. Sun, K.; Zhang, J.; Xu, S.; Zhao, Z.; Zhang, C.; Liu, J.; Hu, J. CACNN: Capsule Attention Convolutional Neural Networks for 3D Object Recognition. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 4091–4102. [Google Scholar] [CrossRef] [PubMed]
  9. Bai, H.; Zhao, Z.; Zhang, J.; Wu, Y.; Deng, L.; Cui, Y.; Jiang, B.; Xu, S. ReFusion: Learning Image Fusion from Reconstruction with Learnable Loss Via Meta-Learning. Int. J. Comput. Vis. 2025, 133, 2547–2567. [Google Scholar] [CrossRef]
  10. Bai, H.; Zhao, Z.; Zhang, J.; Jiang, B.; Deng, L.; Cui, Y.; Xu, S.; Zhang, C. Deep Unfolding Multi-Modal Image Fusion Network via Attribution Analysis. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 3498–3511. [Google Scholar] [CrossRef]
  11. Xu, S.; Amira, O.; Liu, J.; Zhang, C.; Zhang, J.; Li, G. HAM-MFN: Hyperspectral and Multispectral Image Multiscale Fusion Network With RAP Loss. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4618–4628. [Google Scholar] [CrossRef]
  12. Wang, C.; Pedrycz, W.; Li, Z.; Zhou, M. Residual-driven Fuzzy C-Means Clustering for Image Segmentation. IEEE CAA J. Autom. Sin. 2021, 8, 876–889. [Google Scholar] [CrossRef]
  13. Chen, Y.; Lai, W.; He, W.; Zhao, X.; Zeng, J. Hyperspectral Compressive Snapshot Reconstruction via Coupled Low-Rank Subspace Representation and Self-Supervised Deep Network. IEEE Trans. Image Process. 2024, 33, 926–941. [Google Scholar] [CrossRef] [PubMed]
  14. Chen, Y.; Chen, M.; He, W.; Zeng, J.; Huang, M.; Zheng, Y. Thick Cloud Removal in Multitemporal Remote Sensing Images via Low-Rank Regularized Self-Supervised Network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5506613. [Google Scholar] [CrossRef]
  15. Manolakis, D.; Shaw, G. Detection algorithms for hyperspectral imaging applications. IEEE Signal Process. Mag. 2002, 19, 29–43. [Google Scholar] [CrossRef]
  16. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse Unmixing of Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039. [Google Scholar] [CrossRef]
  17. Cerra, D.; Müller, R.; Reinartz, P. Noise Reduction in Hyperspectral Images Through Spectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2014, 11, 109–113. [Google Scholar] [CrossRef]
  18. Xu, S.; Ke, Q.; Peng, J.; Cao, X.; Zhao, Z. Pan-Denoising: Guided Hyperspectral Image Denoising via Weighted Represent Coefficient Total Variation. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5528714. [Google Scholar] [CrossRef]
  19. Zhang, Q.; Zheng, Y.; Yuan, Q.; Song, M.; Yu, H.; Xiao, Y. Hyperspectral Image Denoising: From Model-Driven, Data-Driven, to Model-Data-Driven. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 13143–13163. [Google Scholar] [CrossRef]
  20. Zhang, Q.; Yuan, Q.; Li, J.; Sun, F.; Zhang, L. Deep spatio-spectral Bayesian posterior for hyperspectral image non-i.i.d. noise removal. ISPRS J. Photogramm. Remote Sens. 2020, 164, 125–137. [Google Scholar] [CrossRef]
  21. Zhang, Q.; Yuan, Q.; Li, J.; Liu, X.; Shen, H.; Zhang, L. Hybrid Noise Removal in Hyperspectral Imagery With a Spatial–Spectral Gradient Network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7317–7329. [Google Scholar] [CrossRef]
  22. Yuan, Q.; Zhang, Q.; Li, J.; Shen, H.; Zhang, L. Hyperspectral Image Denoising Employing a Spatial–Spectral Deep Residual Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1205–1218. [Google Scholar] [CrossRef]
  23. Liu, W.; Lee, J. A 3-D Atrous Convolution Neural Network for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5701–5715. [Google Scholar] [CrossRef]
  24. Chen, H.; Yang, G.; Zhang, H. Hider: A Hyperspectral Image Denoising Transformer With Spatial–Spectral Constraints for Hybrid Noise Removal. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 8797–8811. [Google Scholar] [CrossRef]
  25. Zhang, Q.; Yuan, Q.; Song, M.; Yu, H.; Zhang, L. Cooperated Spectral Low-Rankness Prior and Deep Spatial Prior for HSI Unsupervised Denoising. IEEE Trans. Image Process. 2022, 31, 6356–6368. [Google Scholar] [CrossRef] [PubMed]
  26. Shi, K.; Peng, J.; Gao, J.; Luo, Y.; Xu, S. Hyperspectral Image Denoising via Double Subspace Deep Prior. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5531015. [Google Scholar] [CrossRef]
  27. Lu, X.; Wang, Y.; Yuan, Y. Graph-regularized low-rank representation for destriping of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4009–4018. [Google Scholar] [CrossRef]
  28. Yuan, F.; Chen, Y.; He, W.; Zeng, J. Feature Fusion-Guided Network With Sparse Prior Constraints for Unsupervised Hyperspectral Image Quality Improvement. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5511912. [Google Scholar] [CrossRef]
  29. Liu, X.; Bourennane, S.; Fossati, C. Denoising of hyperspectral images using the PARAFAC model and statistical performance analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3717–3724. [Google Scholar] [CrossRef]
  30. Li, C.; Ma, Y.; Huang, J.; Mei, X.; Ma, J. Hyperspectral image denoising using the robust low-rank tensor recovery. J. Opt. Soc. Am. A 2015, 32, 1604–1612. [Google Scholar] [CrossRef]
  31. Renard, N.; Bourennane, S.; Blanc-Talon, J. Denoising and dimensionality reduction using multilinear tools for hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2008, 5, 138–142. [Google Scholar] [CrossRef]
  32. Wang, Y.; Xu, S.; Cao, X.; Ke, Q.; Ji, T.; Zhu, X. Hyperspectral Denoising Using Asymmetric Noise Modeling Deep Image Prior. Remote Sens. 2023, 15, 1970. [Google Scholar] [CrossRef]
  33. Wu, J.M.; Yin, S.B.; Jiang, T.X.; Liu, G.S.; Zhao, X.L. PALADIN: A novel plug-and-play 3D CS-MRI reconstruction method. Inverse Probl. 2025, 41, 035014. [Google Scholar] [CrossRef]
  34. Xu, S.; Zhang, J.; Wang, J.; Sun, K.; Zhang, C.; Liu, J.; Hu, J. A model-driven network for guided image denoising. Inf. Fusion 2022, 85, 60–71. [Google Scholar] [CrossRef]
  35. Xu, S.; Peng, J.; Ji, T.; Cao, X.; Sun, K.; Fei, R.; Meng, D. Stacked Tucker Decomposition With Multi-Nonlinear Products for Remote Sensing Imagery Inpainting. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5533413. [Google Scholar] [CrossRef]
  36. Letexier, D.; Bourennane, S. Noise removal from hyperspectral images by multidimensional filtering. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2061–2069. [Google Scholar] [CrossRef]
  37. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  38. Elad, M.; Aharon, M. Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar] [CrossRef]
  39. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279. [Google Scholar] [CrossRef]
  40. Jiang, T.X.; Ng, M.K.; Pan, J.; Song, G.J. Nonnegative low rank tensor approximations with multidimensional image applications. Numer. Math. 2023, 153, 141–170. [Google Scholar] [CrossRef]
  41. Wright, J.; Ganesh, A.; Rao, S.; Peng, Y.; Ma, Y. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 7–10 December 2009; pp. 2080–2088. [Google Scholar]
  42. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM (JACM) 2011, 58, 1–37. [Google Scholar] [CrossRef]
  43. Song, H.; Wang, G.; Zhang, K. Hyperspectral image denoising via low-rank matrix recovery. Remote Sens. Lett. 2014, 5, 872–881. [Google Scholar] [CrossRef]
  44. He, W.; Zhang, H.; Zhang, L.; Shen, H. Hyperspectral image denoising via noise-adjusted iterative low-rank matrix approximation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3050–3061. [Google Scholar] [CrossRef]
  45. Xie, Y.; Qu, Y.; Tao, D.; Wu, W.; Yuan, Q.; Zhang, W. Hyperspectral image restoration via iteratively regularized weighted schatten p-norm minimization. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4642–4659. [Google Scholar] [CrossRef]
  46. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2015, 54, 178–188. [Google Scholar] [CrossRef]
  47. Ye, M.; Qian, Y.; Zhou, J. Multitask sparse nonnegative matrix factorization for joint spectral–spatial hyperspectral imagery denoising. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2621–2639. [Google Scholar] [CrossRef]
  48. Xu, S.; Zhang, J.; Zhang, C. Hyperspectral image denoising by low-rank models with hyper-Laplacian total variation prior. Signal Process. 2022, 201, 108733. [Google Scholar] [CrossRef]
  49. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  50. Sun, J.; Xu, Z.; Shum, H.Y. Image super-resolution using gradient profile prior. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA; pp. 1–8. [Google Scholar]
  51. Chambolle, A. An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 2004, 20, 89–97. [Google Scholar]
  52. Chambolle, A.; Caselles, V.; Cremers, D.; Novaga, M.; Pock, T. An introduction to total variation for image analysis. Theor. Found. Numer. Methods Sparse Recovery 2010, 9, 227. [Google Scholar]
  53. Li, S.Z. Markov random field models in computer vision. In Proceedings of the Computer Vision—ECCV’94: Third European Conference on Computer Vision, Stockholm, Sweden, 2–6 May 1994; Proceedings, Volume II. Springer: Berlin/Heidelberg, Germany, 1994; pp. 361–370. [Google Scholar]
  54. Peng, J.; Wang, Y.; Zhang, H.; Wang, J.; Meng, D. Exact Decomposition of Joint Low Rankness and Local Smoothness Plus Sparse Matrices. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 5766–5781. [Google Scholar] [CrossRef]
  55. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
  56. Zhang, C.H. Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 2010, 38, 894–942. [Google Scholar] [CrossRef] [PubMed]
  57. Chen, Y.; Wang, Y.; Li, M.; He, G. Augmented Lagrangian alternating direction method for low-rank minimization via non-convex approximation. Signal Image Video Process. 2017, 11, 1271–1278. [Google Scholar] [CrossRef]
  58. Frank, L.E.; Friedman, J.H. A statistical view of some chemometrics regression tools. Technometrics 1993, 35, 109–135. [Google Scholar] [CrossRef]
  59. Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
  60. Kang, Z.; Peng, C.; Cheng, Q. Robust PCA via nonconvex rank approximation. In Proceedings of the 2015 IEEE International Conference on Data Mining, Atlantic City, NJ, USA, 14–17 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 211–220. [Google Scholar]
  61. Lu, H.; Yang, Y.; Huang, S.; Chen, X.; Chi, B.; Liu, A.; Tu, W. AWFLN: An Adaptive Weighted Feature Learning Network for Pansharpening. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5400815. [Google Scholar] [CrossRef]
  62. Wang, C.; Lin, J.; Li, X. Structural-Equation-Modeling-Based Indicator Systems for Image Quality Assessment. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 1–14. [Google Scholar] [CrossRef]
  63. Xu, S.; Cao, X.; Peng, J.; Ke, Q.; Ma, C.; Meng, D. Hyperspectral Image Denoising by Asymmetric Noise Modeling. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5545214. [Google Scholar] [CrossRef]
  64. Peng, J.; Wang, H.; Cao, X.; Jia, X.; Zhang, H.; Meng, D. Stable Local-Smooth Principal Component Pursuit. SIAM J. Imaging Sci. 2024, 17, 1182–1205. [Google Scholar] [CrossRef]
  65. Lu, C.; Tang, J.; Yan, S.; Lin, Z. Nonconvex nonsmooth low rank minimization via iteratively reweighted nuclear norm. IEEE Trans. Image Process. 2015, 25, 829–839. [Google Scholar] [CrossRef]
  66. Bai, J.; Zhang, M.; Zhang, H. An inexact ADMM for separable nonconvex and nonsmooth optimization. Comput. Optim. Appl. 2025, 90, 445–479. [Google Scholar] [CrossRef]
  67. Bai, J.; Chen, Y.; Yu, X.; Zhang, H. Generalized Asymmetric Forward-Backward-Adjoint Algorithms for Convex-Concave Saddle-Point Problem. J. Sci. Comput. 2025, 102, 80. [Google Scholar] [CrossRef]
  68. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends® Mach. Learn. 2011, 3, 1–122. [Google Scholar]
  69. Chen, Y.; Cao, X.; Zhao, Q.; Meng, D.; Xu, Z. Denoising Hyperspectral Image With Non-i.i.d. Noise Structure. IEEE Trans. Cybern. 2018, 48, 1054–1066. [Google Scholar] [CrossRef]
  70. He, W.; Yao, Q.; Li, C.; Yokoya, N.; Zhao, Q. Non-Local Meets Global: An Integrated Paradigm for Hyperspectral Denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, 16–20 June 2019; pp. 6868–6877. [Google Scholar]
  71. Zheng, Y.; Huang, T.; Zhao, X.; Jiang, T.; Ma, T.; Ji, T. Mixed Noise Removal in Hyperspectral Image via Low-Fibered-Rank Regularization. IEEE Trans. Geosci. Remote Sens. 2020, 58, 734–749. [Google Scholar] [CrossRef]
  72. Chen, Y.; Cao, W.; Pang, L.; Cao, X. Hyperspectral Image Denoising With Weighted Nonlocal Low-Rank Model and Adaptive Total Variation Regularization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5544115. [Google Scholar] [CrossRef]
Figure 1. Approximation of the step function by different functions.
Figure 2. Flowchart of the algorithm.
Figure 3. Denoising results of all methods after adding Gaussian and stripe noise to the Reno_256_128 dataset (corresponding to noise type 4). The PSNR value is indicated in the upper left corner of each image. The box in the bottom left corner magnifies local details of the image, while the box in the bottom right corner displays the residual image of the corresponding region.
Figure 4. Denoising results of all methods after adding four types of mixed noise to the WDCMall dataset (corresponding to noise type 6). The PSNR value is indicated in the upper left corner of each image. The box in the bottom left corner magnifies local details of the image, while the box in the bottom right corner displays the residual image of the corresponding region.
Figure 5. PSNR metrics by band for various denoising methods, with band numbers on the x-axis and PSNR values on the y-axis.
Figure 6. Denoising results of all methods on the Mars dataset. The box in the bottom right corner magnifies local details of the image.
Figure 7. Denoising results of all methods on the Indian_Pines dataset. The box in the bottom right corner magnifies local details of the image.
Figure 8. Sensitivity analysis of the penalty parameter λ.
Figure 9. Sensitivity analysis of the hyperparameter ρ.
Table 1. Metrics on the Reno_256_128 dataset. The best result in each row is shown in bold; the column on the far right is the NCTV method proposed in this paper.

| Case | Metric | Noisy | NMoG | NGMeet | LRTV | CTV | 3DTNN | 3DLogTNN | WNLRATV | BALMF | NCTV |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | PSNR | 20.17 | 35.23 | **39.24** | 32.81 | 34.03 | 34.45 | 36.02 | 36.57 | 33.22 | 35.58 |
| 1 | SSIM | 0.3747 | 0.9371 | **0.9758** | 0.8983 | 0.9136 | 0.9522 | 0.9572 | 0.9541 | 0.889 | 0.9436 |
| 1 | ERGAS | 317.43 | 57.31 | **35.46** | 76.44 | 64.23 | 61.22 | 51.25 | 48.26 | 71.81 | 55.21 |
| 1 | SAM | 23.95 | 3.73 | **2.04** | 4.26 | 4.5 | 3.03 | 2.82 | 2.89 | 4.99 | 3.33 |
| 2 | PSNR | 16.87 | **34.59** | 31.67 | 30.85 | 31.78 | 32.27 | 33.87 | 33.01 | 31.85 | 33.96 |
| 2 | SSIM | 0.2696 | **0.9265** | 0.8683 | 0.8512 | 0.8629 | 0.9258 | 0.9186 | 0.915 | 0.8826 | 0.9198 |
| 2 | ERGAS | 559.41 | **61.76** | 130.8 | 117.09 | 85.29 | 79.15 | 68.46 | 76.86 | 118.08 | 66.07 |
| 2 | SAM | 37.13 | 4.06 | 9.85 | 7.6 | 6.09 | **3.74** | 4.22 | 4.65 | 9.45 | 3.88 |
| 3 | PSNR | 15.86 | 31.67 | 29.81 | 30.12 | 32.05 | 31.99 | 34.07 | 33.92 | 30.24 | **34.12** |
| 3 | SSIM | 0.2418 | 0.8886 | 0.8378 | 0.8358 | 0.8682 | 0.9242 | **0.9283** | 0.9274 | 0.8693 | 0.921 |
| 3 | ERGAS | 629.76 | 150.72 | 162.04 | 156.77 | 83.34 | 82.54 | 66.66 | 71.37 | 155.56 | **64.24** |
| 3 | SAM | 38.31 | 10.16 | 10.84 | 11.4 | 5.98 | 3.81 | 3.95 | 4.23 | 11.33 | **3.71** |
| 4 | PSNR | 16.35 | 33.03 | 31.49 | 30.38 | 31.17 | 30.72 | 30.81 | 31.58 | 31.31 | **33.56** |
| 4 | SSIM | 0.2509 | 0.9015 | 0.8694 | 0.8413 | 0.8463 | 0.8722 | 0.8377 | 0.8806 | 0.867 | **0.9106** |
| 4 | ERGAS | 575.66 | 115.38 | 139.97 | 133.17 | 92.03 | 100.14 | 113.61 | 98.71 | 129.88 | **69** |
| 4 | SAM | 37.91 | 8.55 | 10.2 | 9.5 | 6.62 | 6.26 | 8.41 | 5.86 | 9.77 | **4.04** |
| 5 | PSNR | 16.01 | **30.06** | 28.84 | 27.33 | 27.48 | 27.08 | 26.46 | 29.26 | 28.15 | 29.63 |
| 5 | SSIM | 0.2375 | **0.8891** | 0.8396 | 0.7659 | 0.7559 | 0.7323 | 0.68 | 0.8541 | 0.7814 | 0.8695 |
| 5 | ERGAS | 585.57 | 156.39 | 169.36 | 217.42 | 188.62 | 230.21 | 250.54 | **141.23** | 225.14 | 150.13 |
| 5 | SAM | 39.9 | 10.56 | 11.61 | 14.91 | 13.85 | 15.96 | 18.4 | **7.42** | 15.54 | 10.19 |
| 6 | PSNR | 14.36 | 27.77 | 27.3 | 27.15 | 26.97 | 25.94 | 25.7 | 28.78 | 26.54 | **29.24** |
| 6 | SSIM | 0.1911 | 0.8371 | 0.8222 | 0.7599 | 0.7409 | 0.7102 | 0.6721 | 0.8439 | 0.7484 | **0.8634** |
| 6 | ERGAS | 690.46 | 202.48 | 189.33 | 225.61 | 195.79 | 243.66 | 257.79 | **146.98** | 259.12 | 154.53 |
| 6 | SAM | 42.42 | 13.06 | 12.84 | 15.25 | 14.32 | 17.06 | 19.08 | **8.19** | 18.06 | 10.35 |
Table 2. Metrics on the WDCMall dataset. The best result in each row is shown in bold; the column on the far right is the NCTV method proposed in this paper.

| Case | Metric | Noisy | NMoG | NGMeet | LRTV | CTV | 3DTNN | 3DLogTNN | WNLRATV | BALMF | NCTV |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | PSNR | 20.17 | 35.53 | **38.01** | 32.05 | 33.57 | 33.64 | 35.22 | 34.72 | 33.74 | 35.36 |
| 1 | SSIM | 0.5223 | 0.9695 | **0.9829** | 0.9232 | 0.9496 | 0.9668 | 0.9714 | 0.9639 | 0.9523 | 0.9699 |
| 1 | ERGAS | 368.76 | 60.9 | **45.26** | 91.87 | 76.59 | 75.3 | 63.04 | 65.8 | 75.4 | 64.27 |
| 1 | SAM | 26.86 | 5.27 | **3.37** | 5.75 | 6.48 | 4.4 | 4.56 | 3.79 | 6.54 | 4.92 |
| 2 | PSNR | 16.73 | **34.01** | 31.37 | 29.78 | 31.04 | 28.15 | 26.63 | 31.24 | 31.48 | 33.42 |
| 2 | SSIM | 0.3732 | **0.9566** | 0.9238 | 0.872 | 0.9106 | 0.823 | 0.7608 | 0.9176 | 0.9287 | 0.9536 |
| 2 | ERGAS | 638.64 | **78.98** | 123.64 | 152.34 | 105.6 | 156.25 | 209.98 | 98.53 | 112.24 | 80.37 |
| 2 | SAM | 38.9 | 7.29 | 11.15 | 11.63 | 8.76 | 12.42 | 17.52 | **5.17** | 10.19 | 5.95 |
| 3 | PSNR | 15.74 | 31.17 | 28.86 | 29.92 | 31.62 | 28.33 | 27.28 | 32.08 | 30.35 | **33.84** |
| 3 | SSIM | 0.3448 | 0.9147 | 0.8675 | 0.8766 | 0.918 | 0.8202 | 0.7716 | 0.9278 | 0.8933 | **0.9591** |
| 3 | ERGAS | 781.42 | 180.49 | 221.46 | 166.92 | 100.18 | 159.33 | 207.53 | 93.8 | 157.62 | **76.13** |
| 3 | SAM | 40.38 | 11.2 | 13.67 | 12.06 | 7.88 | 11.93 | 16.15 | 5.63 | 9.92 | **5.24** |
| 4 | PSNR | 16.44 | **34.16** | 30.97 | 29.54 | 30.8 | 28.87 | 28.16 | 31.54 | 31.57 | 33.37 |
| 4 | SSIM | 0.3646 | **0.9602** | 0.9115 | 0.8695 | 0.9074 | 0.8654 | 0.8292 | 0.9212 | 0.9225 | 0.954 |
| 4 | ERGAS | 660.02 | **70.94** | 133.14 | 161.59 | 106.49 | 141.18 | 167.41 | 96.14 | 97.23 | 78.18 |
| 4 | SAM | 39.79 | 6.05 | 11.18 | 12.78 | 8.89 | 10.56 | 13.97 | **5.52** | 8.55 | 5.91 |
| 5 | PSNR | 16.3 | **31.12** | 28.86 | 27.15 | 27.95 | 26.8 | 27.14 | 29.15 | 29.13 | 29.88 |
| 5 | SSIM | 0.3568 | **0.9414** | 0.8989 | 0.8212 | 0.8454 | 0.8045 | 0.7979 | 0.8921 | 0.8703 | 0.9141 |
| 5 | ERGAS | 679.36 | **141.75** | 171.92 | 231.82 | 189.1 | 246.81 | 252.03 | 149.13 | 199.12 | 153.49 |
| 5 | SAM | 42.28 | 10.12 | 12.92 | 16.96 | 14.81 | 18.08 | 19.79 | **7.99** | 14.14 | 11.18 |
| 6 | PSNR | 14.46 | 27.8 | 26.67 | 26.91 | 27.37 | 23.25 | 22.08 | 28.56 | 27.03 | **29.39** |
| 6 | SSIM | 0.2865 | 0.8819 | 0.8509 | 0.7992 | 0.8269 | 0.646 | 0.5961 | 0.8642 | 0.7992 | **0.9084** |
| 6 | ERGAS | 826.68 | 210.11 | 230.92 | 244.93 | 196.95 | 313.68 | 360.36 | 182.72 | 260.23 | **156.2** |
| 6 | SAM | 44.13 | 14.11 | 15.52 | 17.24 | 14.74 | 22.83 | 26.56 | **9.66** | 15.66 | 10.74 |
Table 3. Processing times on the Reno_256_128 dataset in all cases (in seconds).

| Method | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 | Case 6 |
|---|---|---|---|---|---|---|
| NMoG | 88.46 | 127.22 | 115.96 | 108.67 | 104.06 | 138.45 |
| NGMeet | 143.90 | 125.23 | 128.43 | 118.65 | 123.90 | 174.14 |
| LRTV | 85.89 | 90.18 | 103.39 | 94.78 | 93.25 | 103.40 |
| CTV | 145.89 | 150.53 | 173.55 | 162.89 | 157.76 | 162.34 |
| 3DTNN | 215.74 | 301.84 | 279.58 | 265.59 | 260.54 | 349.90 |
| 3DLogTNN | 381.56 | 409.09 | 399.21 | 366.45 | 356.73 | 473.65 |
| WNLRATV | 948.94 | 811.66 | 967.99 | 911.02 | 915.61 | 1091.36 |
| BALMF | 215.75 | 153.76 | 208.99 | 208.10 | 209.64 | 225.33 |
| NCTV | 178.76 | 132.49 | 138.10 | 135.01 | 141.77 | 211.23 |