Article

Hyperspectral Denoising Using Asymmetric Noise Modeling Deep Image Prior

1
School of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
2
Guangxi Key Laboratory of Multi-Source Information Mining & Security, Guangxi Normal University, Guilin 541004, China
3
Research and Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen 518063, China
4
School of Mathematics and Statistics, Northwestern Polytechnical University, Xi’an 710021, China
5
Key Laboratory for Intelligent Networks and Network Security, Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(8), 1970; https://doi.org/10.3390/rs15081970
Submission received: 27 February 2023 / Revised: 4 April 2023 / Accepted: 4 April 2023 / Published: 8 April 2023
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing II)

Abstract

Deep image prior (DIP) is a powerful technique for image restoration that leverages an untrained network as a handcrafted prior. DIP can also be used for hyperspectral image (HSI) denoising tasks and has achieved impressive performance. Recent works further incorporate different regularization terms to enhance the performance of DIP and successfully show notable improvements. However, most DIP-based methods for HSI denoising rarely consider the distribution of complicated HSI mixed noise. In this paper, we propose the asymmetric Laplace noise modeling deep image prior (ALDIP) for HSI mixed noise removal. Based on the observation that real-world HSI noise exhibits heavy-tailed and asymmetric properties, we model the HSI noise of each band using an asymmetric Laplace distribution. Furthermore, in order to fully exploit the spatial–spectral correlation, we propose ALDIP-SSTV, which combines ALDIP with a spatial–spectral total variation (SSTV) term to preserve more spatial–spectral information. Experiments on both synthetic data and real-world data demonstrate that ALDIP and ALDIP-SSTV outperform state-of-the-art HSI denoising methods.

Graphical Abstract

1. Introduction

Hyperspectral images (HSIs) are a type of remote sensing data that provide rich information about the spectral characteristics of a scene. HSIs can be utilized for diverse visual tasks including object detection [1] and classification [2,3,4,5,6,7,8]. However, during the generation and transmission process, HSIs are often corrupted by severe noise, making denoising techniques crucial for effectively analyzing and interpreting the images. Therefore, HSI denoising is vital and has inspired extensive research.
Conventional techniques for HSI denoising, often called model-based methods, can be categorized into two groups: filter-based methods and low-rank-based methods. Among filter-based methods, 3D model-based methods [9,10] first attempted to take advantage of spatial–spectral information, followed by a variety of methods that used penalties to exploit spatial and spectral information [11,12,13,14]. Low-rank-based methods have been found to be more efficient for HSI denoising, and various methods were developed based on low-rank matrix recovery [15,16,17,18,19]. Treating HSI data as a third-order tensor, many low-rank approaches based on tensor decomposition [20,21,22,23] have achieved good results.
Recently, deep learning has made great progress in a variety of fields. Deep-learning-based denoising methods are regarded as state-of-the-art, and many network architectures have been proposed for HSI denoising [24,25,26,27,28]. In refs. [25,27], convolutional-neural-network (CNN)-based architectures were suggested. Generative adversarial networks (GANs) were also examined in [28]. These supervised deep-learning-based methods share the shortcoming that they need a large training set to perform well, while HSI data are limited. To address this issue, a variety of unsupervised methods were developed [29,30,31]. These methods can denoise a single observed HSI without external data.
Among these unsupervised methods, ref. [29] uses deep image prior (DIP) [32] for HSI inverse problems (denoising, inpainting, super-resolution). In ref. [29], 2D convolution was also extended to 3D, but the 3D variant performed worse; moreover, neither variant is as advanced as most state-of-the-art methods. A popular research trend is to combine DIP with other data priors. For example, reference [30] inserts spatial–spectral total variation (SSTV) into DIP and achieves state-of-the-art results. Despite their good performance, most DIP-based methods assume HSIs are corrupted by Gaussian noise or Laplace noise (a.k.a. sparse noise). It is well known that HSI noise is very complicated, including Gaussian, impulse, stripe and deadline noise; by no means can HSI noise be simply modeled as Gaussian or Laplace. Designing a proper noise model for real HSIs plays an important role in HSI denoising and deepens the understanding of the HSI noise pattern. Hence, there is still room to enhance the performance of DIP if it is equipped with more suitable and reasonable noise assumptions.
Our previous work [19] revealed that synthetic and real-world HSI noise are both heavy-tailed and asymmetric. Taking the Urban dataset as an example, Figure 1 analyzes the statistical distribution of real-world HSI noise. Band 87, which is degraded by deadline and horizontal stripe noise, is shown in Figure 1a. An approximately clean band is generated by averaging bands 85, 86, 87, 88 and 89, as shown in Figure 1b. Finally, the noise of band 87 can be roughly estimated by subtracting the clean band from the observed one, as displayed in Figure 1c. We discuss the noise distribution below.
Firstly, traditional DIP’s loss function is the mean squared error (MSE, or $\ell_2$-norm), which hypothesizes that the noise obeys a Gaussian distribution, while Figure 1d demonstrates that the Laplace distribution fits real-world HSI noise better than the Gaussian distribution. This fact suggests that HSI noise is heavy-tailed, so the mean absolute error (MAE, or $\ell_1$-norm) is the better choice.
Secondly, HSI noise is also asymmetric. For example, the noise frequencies are highly distinct for $n = -0.05$ and $n = 0.05$. Neither the Gaussian nor the Laplace distribution can characterize this property. We propose to utilize an asymmetric Laplace (AL) distribution for modeling real-world HSI noise, and Figure 1e illustrates that the AL distribution is more suitable for characterizing noise that is both heavy-tailed and asymmetric.
Based on the analysis above, incorporating DIP with the AL distribution assumption may enhance the denoising performance for HSI data. Inspired by this discovery, an asymmetric Laplace noise modeling deep image prior (ALDIP) method is formulated to boost performance in HSI mixed-noise removal, where the key idea is to assume that the HSI noise of each band obeys an AL distribution. Additionally, to fully utilize spatial–spectral information, we incorporate a spatial–spectral total variation (SSTV) [33] term to preserve the spatial–spectral local smoothness. To validate the performance of ALDIP and ALDIP-SSTV, DIP2D-$\ell_2$ (2D-convolution DIP with $\ell_2$ loss), DIP2D-$\ell_1$ (2D-convolution DIP with $\ell_1$ loss), ALDIP and ALDIP-SSTV are applied to the real-world HSI dataset Shanghai. The results are displayed in Figure 2.
According to Figure 2, it is evident that ALDIP and ALDIP-SSTV preserve more details than DIP2D-$\ell_2$ and DIP2D-$\ell_1$. This confirms our analysis of HSI noise and shows that modeling HSI noise with an asymmetric Laplace distribution provides superior results for HSI mixed-noise removal. Furthermore, we compare ALDIP and ALDIP-SSTV with other state-of-the-art methods on two synthetic and three real-world HSI datasets. The results show that ALDIP and ALDIP-SSTV outperform the other methods. The main contributions of this paper can be summarized as follows:
  • We propose ALDIP for HSI mixed-noise removal. More specifically, we combine a more suitable and reasonable noise model with DIP. Our model hypothesizes that real-world HSI noise obeys an asymmetric Laplace (AL) distribution.
  • ALDIP-SSTV is presented by incorporating the SSTV term to fully utilize spatial–spectral information for performance improvement.
  • A variety of experiments are conducted that rigorously validate the effectiveness of our methods. The result shows that our methods outperform many state-of-the-art methods.
The rest of this paper is structured as follows: Section 2 contains a brief overview of the related work. Section 3 introduces the methods we propose. Section 4 introduces the experiments we conduct. Finally, Section 5 concludes the paper.

2. Related Works

2.1. Low-Rank Models for HSI Denoising

As is known, HSIs exhibit high correlations in the spectral dimension, and clean HSIs tend to have many small singular values, which means clean HSIs are approximately low-rank. For an HSI with B bands and $M \times N$ pixels denoted by $\mathbf{Y} \in \mathbb{R}^{MN \times B}$, low-rank matrix factorization (LRMF) is modeled as follows:
$$\min_{\mathbf{U} \in \mathbb{R}^{MN \times R},\, \mathbf{V} \in \mathbb{R}^{B \times R}} \|\mathbf{Y} - \mathbf{U}\mathbf{V}^{T}\|_{2}^{2}, \tag{1}$$
where $\mathbf{U}$ and $\mathbf{V}$ are of rank R. Singular value decomposition (SVD) [34] can solve Equation (1). The above model hypothesizes that the noise obeys a Gaussian distribution, while real-world HSI noise is complicated. To achieve higher robustness, the model can be approached from another perspective. We assume that a noisy HSI $\mathbf{Y}$ can be regarded as a combination of a low-rank clean HSI $\mathbf{X}$ and unknown sparse noise $\mathbf{S}$, i.e.,
$$\mathbf{Y} = \mathbf{X} + \mathbf{S}. \tag{2}$$
Robust principal component analysis (RPCA) can be utilized to recover the low-rank clean HSI $\mathbf{X}$ and the sparse noise $\mathbf{S}$ from $\mathbf{Y}$, and the corresponding optimization problem is written as
$$\min_{\mathbf{X}, \mathbf{S}} \|\mathbf{X}\|_{*} + \lambda \|\mathbf{S}\|_{1}, \quad \text{s.t.} \ \mathbf{Y} = \mathbf{X} + \mathbf{S}, \tag{3}$$
where $\|\cdot\|_{*}$ denotes the nuclear norm of a matrix and $\lambda$ denotes the parameter for the regularization term. Ref. [16] utilized the above model to recover HSIs and performed well when removing Gaussian noise, impulse noise, deadlines and stripes.
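For reference, Equation (1) has a closed-form solution via the truncated SVD (Eckart–Young theorem). A minimal numpy sketch (the data and variable names below are illustrative, not from the paper):

```python
import numpy as np

def lrmf_svd(Y, R):
    """Best rank-R factorization of Y (Eq. (1)) via truncated SVD
    (Eckart-Young): Y ~ U @ V.T with U of shape MN x R, V of shape B x R."""
    Uf, s, Vt = np.linalg.svd(Y, full_matrices=False)
    U = Uf[:, :R] * s[:R]   # absorb the leading singular values into U
    V = Vt[:R].T
    return U, V

# toy example: a nearly rank-2 matrix recovered with a small residual
rng = np.random.default_rng(0)
ground = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 30))
Y = ground + 0.01 * rng.standard_normal((100, 30))
U, V = lrmf_svd(Y, R=2)
rel_err = np.linalg.norm(Y - U @ V.T) / np.linalg.norm(Y)
print(rel_err < 0.05)  # True
```

The residual that survives truncation is exactly the energy in the discarded singular values, which is why SVD is optimal for the Frobenius-norm objective of Equation (1).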
Besides LRMF and RPCA, tensor decomposition algorithms including Tucker and CANDECOMP/PARAFAC (CP) [34] have been developed [21,22,23,35] by regarding the HSI data cube as a third-order tensor, and have achieved remarkable results in HSI denoising.
Low-rank denoising methods can be improved by adding different regularizations. To preserve more spatial details, the non-local similarity (NLS) property is widely combined with low-rank models [34,36,37]. Total variation regularization can help keep local smoothness to a certain degree. It has been widely used in low-rank-based methods [22,38,39]. A TV-regularized low-rank (LRTV) matrix factorization method [39] was proposed and has impressive performance.

2.2. Deep Learning Based Methods for HSI Denoising

Deep-learning-based methods have shown great promise in HSI denoising. It has been proved that convolutional neural networks (CNNs) can approximate any complex, non-linear relationship between the input and output data [40], which indicates that CNNs can handle complicated mixed noise. Ref. [28] first introduced CNNs to HSI denoising, using 2D filters to capture spatial and spectral structures. In ref. [41], two parallel feature-extraction branches are used to obtain spatial and spectral information. To further exploit the spectral correlations, a 3D U-net was proposed for HSI denoising [24]. To account for the directivity of the spatial structure and the spectral differences, the method of ref. [42] was proposed for HSI mixed-noise removal. Refs. [43,44,45,46] utilize attention to reduce redundant information.
Although deep-learning-based methods deliver state-of-the-art performance, they depend on large amounts of data. Paired real-world HSIs are limited and difficult to obtain, while models trained on synthetic data are difficult to generalize to real data.

2.3. Noise Modeling for HSI Denoising

Noise modeling in denoising tasks can be regarded as a type of prior knowledge. For HSI denoising, an insightful understanding of the noise distribution provides valuable information to better remove the noise and preserve the underlying clean signal.
Most methods simply model HSI noise as a Gaussian distribution. In this case, the $\ell_2$ norm is used for the fidelity term. This simplifies the optimization problem and makes it more tractable. However, in most cases, it does not accurately reflect the complicated mixed noise in real-world HSIs.
Complex mixed HSI noise can be modeled as a mixture of Gaussians (MoG), which can approximate any continuous distribution. Based on this theory, ref. [47] developed MoG-LRMF. In subsequent works, MoG was combined with low-rank tensor decomposition [48,49,50]. However, the MoG’s universal approximation property holds only as the number of Gaussian components tends to infinity, which is not feasible in real-world applications. Refs. [51,52] model unknown noise as a mixture of exponential power distributions (MoEP), which alleviates MoG’s limitations. These methods are based on independent and identically distributed (i.i.d.) noise assumptions, so they neglect the diverse noise patterns of different bands. The non-i.i.d. MoG (NMoG) proposed in [50] breaks through this bottleneck by assigning each band a different form of MoG noise.
The asymmetry of HSI noise is ignored by the above noise modeling methods. In ref. [19], HSI noise is modeled as an AL distribution and a bandwise-AL-noise-based matrix factorization (BALMF) method was proposed to achieve better performance.

2.4. Deep Image Prior

The recent work of [32] is an unsupervised deep-learning-based method. It found that the structure of a generator network can be used as a handcrafted prior to solve standard inverse problems, e.g., denoising and inpainting. The knowledge implicitly contained in the network is called the deep image prior (DIP). Subsequently, ref. [29] applied DIP to HSI restoration. In ref. [29], 2D convolution was also extended to 3D, but the 3D variant performed worse.
DIP for HSI solves the following optimization problem:
$$\min_{\theta} \|f_{\theta}(\mathbf{z}) - \mathbf{x}_{0}\|_{2}^{2}, \tag{4}$$
where $\mathbf{z}$ is a randomly sampled input, $\mathbf{x}_{0}$ is a corrupted image, and $f$ is a CNN with parameters $\theta$. The restored result is
$$\mathbf{x}^{*} = f_{\theta^{*}}(\mathbf{z}), \quad \text{where} \ \theta^{*} = \arg\min_{\theta} \|f_{\theta}(\mathbf{z}) - \mathbf{x}_{0}\|_{2}^{2}. \tag{5}$$
Apparently, the optimization will eventually reproduce the degraded image $\mathbf{x}_{0}$ itself. Ref. [32] found that the CNN learns the signal first and the noise only slowly. Therefore, the number of iterations is restricted to a certain number; this operation is called early stopping.
To improve the performance of DIP, many studies have incorporated various regularization terms into the loss function [53,54,55]. For HSI denoising, ref. [30] incorporates a spatial–spectral total variation (SSTV) term to preserve the spatial–spectral local smoothness of HSI.

3. Proposed Model

3.1. Model Formulation

Given a noisy HSI with B bands and $M \times N$ pixels denoted by $\mathcal{Y} \in \mathbb{R}^{M \times N \times B}$, it can be considered as a clean HSI $\mathcal{X} \in \mathbb{R}^{M \times N \times B}$ plus mixed noise $\mathcal{N} \in \mathbb{R}^{M \times N \times B}$, which can be formulated as follows:
$$\mathcal{Y} = \mathcal{X} + \mathcal{N}. \tag{6}$$
For each pixel, it can be formulated as
$$\mathcal{Y}_{i,j,k} = \mathcal{X}_{i,j,k} + \mathcal{N}_{i,j,k}, \tag{7}$$
where $\mathcal{Y}_{i,j,k}$, $\mathcal{X}_{i,j,k}$ and $\mathcal{N}_{i,j,k}$ denote the $(i,j,k)$-th element of $\mathcal{Y}$, $\mathcal{X}$ and $\mathcal{N}$, respectively.
To effectively characterize the self-similarity of HSIs, we employ the DIP method. Specifically, based on Equation (5), a clean HSI X can be generated by a network:
$$\mathcal{X} = f_{\Theta}(\mathcal{Z}), \tag{8}$$
where $\Theta$ denotes the parameters of the network and $\mathcal{Z} \in \mathbb{R}^{M \times N \times B}$ is the network input, randomly sampled from $\mathrm{Uniform}(0, 0.1)$. This formulation implicitly contains the prior captured by the neural network.
According to the loss function in Equation (4), the DIP model only considers Gaussian noise. To overcome this drawback, we assume the HSI noise of each band obeys an AL distribution. In more detail, we consider that the real-world HSI noise differs from band to band but follows the same kind of distribution. Mathematically,
$$\mathcal{N}_{i,j,k} \sim \mathrm{AL}_{k}\left(\mathcal{N}_{i,j,k} \,\middle|\, 0, \lambda_{k}, \kappa_{k}\right), \quad k = 1, 2, \ldots, B, \tag{9}$$
where $\lambda_{k}$ and $\kappa_{k}$ are the k-th elements of $\boldsymbol{\lambda}$ and $\boldsymbol{\kappa}$, respectively. The probability density function (pdf) of the AL distribution is
$$f_{\mathrm{AL}}(x; \mu, \lambda, \kappa) = \lambda \kappa (1 - \kappa) \exp\left(-\lambda \eta \,|x - \mu|\right), \tag{10}$$
where $-\infty < \mu < \infty$ is the location parameter, $\lambda > 0$ is the scale parameter and $0 < \kappa < 1$ is the skew parameter. $\eta = \kappa \, \mathbb{I}(x \geq \mu) + (1 - \kappa) \, \mathbb{I}(x < \mu)$, where $\mathbb{I}(e)$ is the indicator function defined as follows:
$$\mathbb{I}(e) = \begin{cases} 1 & \text{if } e \text{ is true} \\ 0 & \text{if } e \text{ is false} \end{cases}. \tag{11}$$
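For concreteness, the AL density of Equation (10) can be sketched in a few lines of numpy; the parameter values below are illustrative, and the numerical check confirms that the density integrates to one:

```python
import numpy as np

def al_pdf(x, mu=0.0, lam=1.0, kappa=0.5):
    """Asymmetric Laplace pdf of Eq. (10):
    f(x) = lam * kappa * (1 - kappa) * exp(-lam * eta * |x - mu|),
    where eta = kappa if x >= mu, and (1 - kappa) otherwise."""
    eta = np.where(x >= mu, kappa, 1.0 - kappa)
    return lam * kappa * (1.0 - kappa) * np.exp(-lam * eta * np.abs(x - mu))

# sanity check: the density integrates to ~1 (Riemann sum over a wide grid)
x = np.linspace(-60.0, 60.0, 240001)
mass = al_pdf(x, mu=0.0, lam=0.8, kappa=0.3).sum() * (x[1] - x[0])
print(round(mass, 3))  # ≈ 1.0
```

Setting `kappa = 0.5` recovers a symmetric Laplace density; any other value makes one tail decay faster than the other, which is exactly the asymmetry the model exploits.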
According to Equations (7) and (9), each pixel of the noisy HSI $\mathcal{Y}$ has the following distribution:
$$\mathcal{Y}_{i,j,k} \sim \mathrm{AL}_{k}\left(\mathcal{Y}_{i,j,k} \,\middle|\, \mathcal{X}_{i,j,k}, \lambda_{k}, \kappa_{k}\right), \quad k = 1, 2, \ldots, B. \tag{12}$$
Given this assumption, the log-likelihood function can be written as follows:
$$\begin{aligned} l(\mathcal{X}, \boldsymbol{\lambda}, \boldsymbol{\kappa}) &= \sum_{i,j,k} \log \mathrm{AL}_{k}\left(\mathcal{Y}_{i,j,k} \,\middle|\, \mathcal{X}_{i,j,k}, \lambda_{k}, \kappa_{k}\right) \\ &= \sum_{i,j,k} \left( -|\mathcal{Y}_{i,j,k} - \mathcal{X}_{i,j,k}| \, \lambda_{k} \eta_{i,j,k} + \log \lambda_{k} \kappa_{k} (1 - \kappa_{k}) \right) \\ &= -\|\mathcal{W} \odot (\mathcal{Y} - \mathcal{X})\|_{1} + MN \sum_{k=1}^{B} \log \lambda_{k} \kappa_{k} (1 - \kappa_{k}), \end{aligned} \tag{13}$$
where the $(i,j,k)$-th element of $\mathcal{W}$ is defined by $\mathcal{W}_{i,j,k} = \lambda_{k} \eta_{i,j,k} = \lambda_{k} \left( \kappa_{k} \, \mathbb{I}(\mathcal{Y}_{i,j,k} \geq \mathcal{X}_{i,j,k}) + (1 - \kappa_{k}) \, \mathbb{I}(\mathcal{Y}_{i,j,k} < \mathcal{X}_{i,j,k}) \right)$ and $\odot$ denotes the element-wise product.
According to the maximum likelihood estimation (MLE) principle, the parameters $\Theta$, $\boldsymbol{\lambda}$ and $\boldsymbol{\kappa}$ can be iteratively updated by maximizing the log-likelihood function $l(\mathcal{X}, \boldsymbol{\lambda}, \boldsymbol{\kappa})$. Therefore, the optimization model is as follows:
$$\begin{aligned} \min_{\Theta, \boldsymbol{\lambda}, \boldsymbol{\kappa}} \ & \|\mathcal{W} \odot (\mathcal{Y} - \mathcal{X})\|_{1} - MN \sum_{k=1}^{B} \log \lambda_{k} \kappa_{k} (1 - \kappa_{k}) \\ \text{s.t.} \ & \mathcal{X} = f_{\Theta}(\mathcal{Z}). \end{aligned} \tag{14}$$
After removing the equality constraint, the problem is cast as follows:
$$\min_{\Theta, \boldsymbol{\lambda}, \boldsymbol{\kappa}} \|\mathcal{W} \odot (\mathcal{Y} - f_{\Theta}(\mathcal{Z}))\|_{1} - MN \sum_{k=1}^{B} \log \lambda_{k} \kappa_{k} (1 - \kappa_{k}). \tag{15}$$
To further preserve the spatial–spectral local smoothness, we incorporate a spatial–spectral total variation (SSTV) term into the ALDIP model. The SSTV term is given below:
$$\|\mathcal{X}\|_{\mathrm{SSTV}} = \|\mathbf{D}_{h} \mathcal{X} \mathbf{D}\|_{1} + \|\mathbf{D}_{v} \mathcal{X} \mathbf{D}\|_{1}, \tag{16}$$
where $\mathbf{D}_{h}$ and $\mathbf{D}_{v}$ are horizontal and vertical 2D finite-differencing operators, respectively, and $\mathbf{D}$ is a 1D finite-differencing operator on the spectral signature of each pixel. After incorporating the SSTV term into ALDIP, we obtain the ALDIP-SSTV model:
$$\min_{\Theta, \boldsymbol{\lambda}, \boldsymbol{\kappa}} \|\mathcal{W} \odot (\mathcal{Y} - f_{\Theta}(\mathcal{Z}))\|_{1} - MN \sum_{k=1}^{B} \log \lambda_{k} \kappa_{k} (1 - \kappa_{k}) + \tau \|f_{\Theta}(\mathcal{Z})\|_{\mathrm{SSTV}}, \tag{17}$$
where $\tau$ is the regularization parameter for the SSTV term.
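Since $\mathbf{D}$, $\mathbf{D}_h$ and $\mathbf{D}_v$ are plain finite differences, the SSTV term of Equation (16) reduces to a few numpy calls. A sketch for an $M \times N \times B$ cube (the height × width × band axis convention is an assumption of this sketch):

```python
import numpy as np

def sstv(X):
    """Spatial-spectral TV of Eq. (16): first-order difference along the
    spectral axis, then the l1 norms of the horizontal and vertical
    differences of that result."""
    Xs = np.diff(X, axis=2)   # D: spectral differencing
    dh = np.diff(Xs, axis=1)  # D_h: horizontal differencing
    dv = np.diff(Xs, axis=0)  # D_v: vertical differencing
    return np.abs(dh).sum() + np.abs(dv).sum()

# a spatially and spectrally constant cube has zero SSTV
assert sstv(np.ones((8, 8, 5))) == 0.0
rng = np.random.default_rng(0)
print(sstv(rng.standard_normal((8, 8, 5))) > 0)  # True
```

Because the spatial and spectral differences act on different axes, the order in which they are applied does not matter.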

3.2. Solving Algorithm

The solving algorithms for ALDIP (Equation (15)) and ALDIP-SSTV (Equation (17)) are similar; the following derivation is based on ALDIP-SSTV. The loss function of ALDIP-SSTV is as follows:
$$\mathcal{L}(\Theta, \boldsymbol{\lambda}, \boldsymbol{\kappa}) = \|\mathcal{W} \odot (\mathcal{Y} - f_{\Theta}(\mathcal{Z}))\|_{1} - MN \sum_{k=1}^{B} \log \lambda_{k} \kappa_{k} (1 - \kappa_{k}) + \tau \|f_{\Theta}(\mathcal{Z})\|_{\mathrm{SSTV}}. \tag{18}$$
The parameters of the AL noise model ($\boldsymbol{\lambda}$ and $\boldsymbol{\kappa}$) and of the network ($\Theta$) are iteratively updated by minimizing Equation (18).
Updating $\boldsymbol{\lambda}$: The loss function with respect to $\boldsymbol{\lambda}$ is written as:
$$\mathcal{L}(\boldsymbol{\lambda}) = \|\mathcal{W} \odot (\mathcal{Y} - f_{\Theta}(\mathcal{Z}))\|_{1} - MN \sum_{k=1}^{B} \log \lambda_{k}. \tag{19}$$
For $\lambda_{k}$, $k = 1, 2, \ldots, B$, the loss function is:
$$\mathcal{L}(\lambda_{k}) = \lambda_{k} \sum_{i,j} \eta_{i,j,k} \, |\mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k}| - MN \log \lambda_{k}. \tag{20}$$
The gradient of the above equation can be obtained as follows:
$$\frac{\partial \mathcal{L}(\lambda_{k})}{\partial \lambda_{k}} = \sum_{i,j} \eta_{i,j,k} \, |\mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k}| - \frac{MN}{\lambda_{k}}. \tag{21}$$
Setting Equation (21) to zero yields the updating formula:
$$\lambda_{k} = \frac{MN}{\sum_{i,j} \eta_{i,j,k} \, |\mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k}|}, \quad k = 1, 2, \ldots, B. \tag{22}$$
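The closed-form update of Equation (22) vectorizes directly over bands. A sketch with synthetic noise (for $\kappa = 0.5$, AL noise with $\lambda = 4$ is Laplace noise with scale $1/2$, so the estimate should land near 4; the sizes below are illustrative):

```python
import numpy as np

def update_lambda(Y, X, kappa):
    """Eq. (22): one closed-form scale lambda_k per band.
    Y, X are M x N x B cubes; kappa holds one skew value per band."""
    M, N, B = Y.shape
    resid = Y - X
    eta = np.where(resid >= 0, kappa, 1.0 - kappa)  # eta_{i,j,k}
    return (M * N) / (eta * np.abs(resid)).sum(axis=(0, 1))

rng = np.random.default_rng(0)
# symmetric AL noise with lambda = 4 <=> Laplace noise with scale 0.5
noise = rng.laplace(scale=0.5, size=(100, 100, 3))
lam = update_lambda(noise, np.zeros_like(noise), np.full(3, 0.5))
print(lam.round(2))  # each entry close to the true value 4
```

The estimate is just the reciprocal of the per-band weighted mean absolute residual, so it adapts automatically as the network output changes between iterations.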
Updating $\boldsymbol{\kappa}$: The loss function with respect to $\kappa_{k}$ is
$$\mathcal{L}(\kappa_{k}) = \lambda_{k} \sum_{i,j} \eta_{i,j,k} \, |\mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k}| - MN \log \kappa_{k} (1 - \kappa_{k}). \tag{23}$$
For $\eta_{i,j,k}$, taking the partial derivative with respect to $\kappa_{k}$:
$$\begin{aligned} \frac{\partial \eta_{i,j,k}}{\partial \kappa_{k}} &= \frac{\partial}{\partial \kappa_{k}} \left[ \kappa_{k} \, \mathbb{I}(\mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k} \geq 0) + (1 - \kappa_{k}) \, \mathbb{I}(\mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k} < 0) \right] \\ &= \mathbb{I}(\mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k} \geq 0) - \mathbb{I}(\mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k} < 0) \\ &= \mathrm{sign}(\mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k}). \end{aligned} \tag{24}$$
Thus, the gradient of Equation (23) is written as follows:
$$\frac{\partial \mathcal{L}(\kappa_{k})}{\partial \kappa_{k}} = \lambda_{k} \sum_{i,j} \left( \mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k} \right) - \frac{MN}{\kappa_{k}} + \frac{MN}{1 - \kappa_{k}}. \tag{25}$$
Setting the above equation to zero, we obtain the following quadratic equation:
$$\xi_{k} \kappa_{k}^{2} - (\xi_{k} + 2MN) \kappa_{k} + MN = 0, \tag{26}$$
where $\xi_{k} = \lambda_{k} \sum_{i,j} \left( \mathcal{Y}_{i,j,k} - f_{\Theta}(\mathcal{Z})_{i,j,k} \right)$. This quadratic equation can be solved directly using the quadratic formula. The following root satisfies the constraint $0 < \kappa_{k} < 1$ [19]:
$$\kappa_{k} = \frac{\xi_{k} + 2MN - \sqrt{\xi_{k}^{2} + 4M^{2}N^{2}}}{2\xi_{k}}, \quad k = 1, 2, \ldots, B. \tag{27}$$
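Equation (27) likewise vectorizes over bands. A sketch with roughly symmetric synthetic residuals, for which the estimated skew parameter should sit near 0.5 (the $\xi_k \to 0$ limit of Equation (27) is exactly 0.5; the guard against an exact zero below is an implementation detail of this sketch):

```python
import numpy as np

def update_kappa(Y, X, lam):
    """Eq. (27): the root of the quadratic in Eq. (26) lying in (0, 1)."""
    M, N, B = Y.shape
    MN = M * N
    xi = lam * (Y - X).sum(axis=(0, 1))        # xi_k per band
    xi = np.where(xi == 0, 1e-12, xi)          # avoid 0/0; the limit is 0.5
    return (xi + 2 * MN - np.sqrt(xi**2 + 4 * MN**2)) / (2 * xi)

rng = np.random.default_rng(1)
Y = rng.laplace(size=(50, 50, 4))              # roughly symmetric residuals
kappa = update_kappa(Y, np.zeros_like(Y), np.full(4, 2.0))
print(kappa.round(2))  # each entry in (0, 1), near 0.5
```

Strongly positive residual sums drive $\kappa_k$ below 0.5 and strongly negative sums drive it above, which is how the update tracks the skewness of each band's noise.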
Updating $\Theta$: After updating the parameters of the AL noise model, $\boldsymbol{\lambda}$ and $\boldsymbol{\kappa}$, we update the network parameters $\Theta$. The loss function with respect to $\Theta$ is
$$\mathcal{L}(\Theta) = \|\mathcal{W} \odot (\mathcal{Y} - f_{\Theta}(\mathcal{Z}))\|_{1} + \tau \|f_{\Theta}(\mathcal{Z})\|_{\mathrm{SSTV}}. \tag{28}$$
The adaptive moment estimation (ADAM) algorithm is used for this step; its momentum and bias-correction terms accelerate the learning process and stabilize the updates.
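For reference, one ADAM step with momentum and bias correction can be written in a few lines of numpy; this is a generic sketch (the hyperparameter values are ADAM's common defaults, not ones prescribed by the paper):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update: exponential moving averages of the gradient (m)
    and its square (v), with bias correction for the warm-up phase."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)                     # bias-corrected momentum
    v_hat = v / (1 - b2**t)                     # bias-corrected 2nd moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(x) = x^2 from x = 1; x approaches the minimizer 0
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
print(abs(x) < 0.1)  # True
```

The per-coordinate normalization by `sqrt(v_hat)` is what makes the step size robust to the widely varying gradient scales produced by the weighted $\ell_1$ loss.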
The solving algorithm is summarized in Algorithm 1.
Algorithm 1: ALDIP or ALDIP-SSTV.

4. Experiments

To validate the effectiveness and performance of ALDIP and ALDIP-SSTV, we conduct experiments on both synthetic and real-world HSI data. To further demonstrate the superiority of our methods, the following state-of-the-art methods are selected for comparison:
  • Low-rank methods: fast hyperspectral denoising (FastHyDe) [56], which is based on low-rank and sparse representations.
  • TV-regularized low-rank methods: TV-regularized low-rank matrix factorization (LRTV) [39], TV-regularized low-rank tensor decomposition (LRTDTV) [22], and three-dimensional correlated total variation regularized RPCA (CTV) [57].
  • Noise modeling methods: non-i.i.d. mixture of Gaussians modeling low-rank matrix factorization (NMoG) [50] and a bandwise-AL-noise-based matrix factorization (BALMF) [19].
  • DIP-based methods: 2D-convolution DIP with $\ell_2$ loss (DIP2D-$\ell_2$) [29], 2D-convolution DIP with $\ell_1$ loss (DIP2D-$\ell_1$) [29] and spatial–spectral constrained unsupervised deep image prior (S2DIP) [30].
In this paper, we use a four-layer hourglass architecture with skip connections, which is shown in Figure 3. We use the Adam optimizer with a learning rate of 0.01 for the network in the following experiments.

4.1. Synthetic Data Experiment

Two HSI datasets are selected to conduct the simulated denoising experiment:
  • Indian Pines: a ground-truth scene gathered by the AVIRIS sensor over the Indian Pines test site in north-western Indiana, with 145 × 145 pixels and 224 bands.
  • Pavia Centre: a cropped HSI with 200 × 200 pixels and 80 bands acquired by the ROSIS sensor during a flight campaign over Pavia, northern Italy.
To simulate real-world HSI noise, we first set up eight different noise cases that synthesize various types of noise ranging from simple to complex. Furthermore, four more cases are set up to simulate scenarios where the signal is more severely corrupted by mixed noise. Building on case 8, these four cases simulate more severe mixture noise either by adding stronger noise (cases 9 and 10) or by corrupting more bands (cases 11 and 12). Table 1 shows the details of the synthetic noise settings. Cases 3 to 12 are more consistent with real-world HSI noise.
For a fair comparison, the hyper-parameters of all methods are fine-tuned to achieve their best performance, and the evaluation metrics include PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity), SAM (Spectral Angle Mapper) and ERGAS (Erreur Relative Globale Adimensionnelle de Synthèse).
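Of these metrics, PSNR and SAM are simple enough to sketch in numpy; the definitions below follow the common conventions (the paper does not fix a particular implementation, so the peak value and angle units are assumptions of this sketch):

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(peak**2 / mse)

def sam(ref, est, eps=1e-12):
    """Mean spectral angle (degrees) between per-pixel spectra of two
    M x N x B cubes; 0 means identical spectral directions."""
    dot = (ref * est).sum(axis=2)
    denom = np.linalg.norm(ref, axis=2) * np.linalg.norm(est, axis=2) + eps
    ang = np.degrees(np.arccos(np.clip(dot / denom, -1.0, 1.0)))
    return ang.mean()

ref = np.full((4, 4, 3), 0.5)
est = ref + 0.1                     # constant offset: MSE = 0.01
print(round(psnr(ref, est), 2))     # 20.0
print(sam(ref, ref) < 1e-3)         # True
```

PSNR penalizes per-pixel error magnitude, while SAM only measures the direction of each spectrum, which is why the constant-offset example above has finite PSNR but essentially zero spectral angle.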
Table 2 and Table 3 present the evaluation results for the twelve cases on the Indian Pines and Pavia Centre datasets. We will analyze the experiment results to gain a deeper understanding of the performance of our model.
Firstly, we compare our methods with non-DIP-based methods. Although our methods are not specifically designed for Gaussian noise, ALDIP and ALDIP-SSTV perform well in cases 1 and 2, and even outperform other methods in some metrics. This can be attributed to the DIP and SSTV terms; the effect of AL noise modeling is not obvious in these cases. As we add more types of noise and increase their intensities, the superior performance of our methods becomes more apparent. In cases 3 to 12, our methods rank first or second in most metrics. In some cases on the Pavia Centre dataset (cases 3, 5, 7 and 8), NMoG takes first place on the PSNR metric, but its structural and spectral similarity is not as good as that of our methods.
Secondly, we compare our methods with other DIP-based methods. In cases 1 and 2, DIP2D-$\ell_2$ performs well. However, in the other cases, it performs much worse than the other DIP-based methods. Therefore, the Gaussian-distribution assumption performs poorly for mixed HSI noise. S2DIP has the best performance in cases 1 and 2; in the other cases, our methods outperform S2DIP. Next, we focus on the comparison between DIP2D-$\ell_1$ and ALDIP. When updating the parameters of the network, DIP2D-$\ell_1$ uses $\|\mathcal{Y} - f_{\Theta}(\mathcal{Z})\|_{1}$ as the loss function, while ALDIP uses $\|\mathcal{W} \odot (\mathcal{Y} - f_{\Theta}(\mathcal{Z}))\|_{1}$. ALDIP multiplies by an additional weight $\mathcal{W}$, which guides the network to learn the skewness of the noise. To assess the effect of this operation, we further report the difference between the above two methods on each metric. Table 4 shows the improvement brought by AL noise modeling, where positive numbers represent positive effects and negative numbers represent negative effects.
According to Table 4, AL noise modeling consistently achieves good improvements on the Indian Pines dataset. However, on the Pavia Centre dataset, AL modeling leads to a decrease in PSNR and ERGAS but an improvement in SSIM and SAM in cases 1–5. For the other cases with more intense noise, the effect of AL modeling becomes increasingly obvious. In other words, the more complex the noise, the more asymmetric it becomes.
Figure 4 and Figure 5 display the denoising results on Indian Pines and Pavia Centre. Figure 4a and Figure 5a show the noisy HSIs, which are seriously degraded by a mixture of Gaussian noise, impulse noise, deadlines and stripes. Some local areas are magnified to display detailed information. It can be seen that DIP-based methods outperform the other methods, and among the DIP-based methods, our methods retain more of the important details.
Pixel (67,7) of the Indian Pines dataset in case 9 is chosen to visualize the spectral curves in Figure 6. On the Indian Pines dataset, the spectral curves of DIP2D-$\ell_1$, S2DIP, ALDIP and ALDIP-SSTV almost perfectly fit the ground-truth curve. However, a zoomed-in local area of the curves shows that ALDIP and ALDIP-SSTV preserve details that the other methods fail to.
Pixel (52,59) of the Pavia Centre dataset in case 12 is also selected. In Figure 7, it is clear that ALDIP and ALDIP-SSTV finely recover the true spectrum. Therefore, our proposed ALDIP and ALDIP-SSTV achieve restorations closest to the truth.

4.2. Real Data Experiments

Three real-world HSI datasets are used to conduct the real data experiments to validate the effectiveness of our methods. They are:
  • Shanghai: captured by the GaoFen-5 satellite with 300 × 300 pixels and 155 bands.
  • Terrain: captured by Hyperspectral Digital Imagery Collection Experiment with 500 × 307 pixels and 210 bands.
  • Urban: captured by Hyperspectral Digital Imagery Collection Experiment with 307 × 307 pixels and 210 bands.
In the real data experiments, the early stopping trick is applied to all DIP-based methods via manual monitoring to prevent overfitting. Figure 8 shows that on the Shanghai dataset, FastHyDe, LRTDTV, NMoG and the DIP-based methods successfully remove the stripes. On closer inspection of these denoising results, FastHyDe, LRTDTV, DIP2D-$\ell_2$ and DIP2D-$\ell_1$ lose some details in the zoomed local area. Moreover, ALDIP and ALDIP-SSTV preserve more details than DIP2D-$\ell_2$ and DIP2D-$\ell_1$, which demonstrates their superiority for HSI mixed-noise removal.
The denoising results on the Terrain dataset are shown in Figure 9. Apparently, none of the non-DIP-based methods can completely remove the stripes. Among the DIP-based methods, the results of ALDIP and ALDIP-SSTV are less blurred and preserve more details. On the Urban dataset, the effectiveness of our methods is even more significant. As shown in Figure 10, the non-DIP-based methods perform poorly, and it can be seen visually that ALDIP and ALDIP-SSTV outperform all other methods.

4.3. Sensitivity Analysis

At first glance, ALDIP needs to initialize the parameters $\boldsymbol{\lambda}$ and $\boldsymbol{\kappa}$. Revisiting Equation (22), it can be seen that only $\boldsymbol{\kappa}$ needs to be initialized, because updating $\boldsymbol{\lambda}$ depends on $\boldsymbol{\kappa}$ rather than on the previous $\boldsymbol{\lambda}$. In practice, we do not know whether the noise is negatively skewed, positively skewed or symmetric. An intuitive strategy is to initialize $\kappa_{k} = 0.5$ $(k = 1, 2, \ldots, B)$, which corresponds to the symmetric case.
As stated before, the loss function of ALDIP can be deemed a weighted $\ell_1$ norm, where the weight is determined by $\boldsymbol{\lambda}$ and $\boldsymbol{\kappa}$. DIP2D-$\ell_1$ is hence employed as the baseline. Figure 11 exhibits the PSNR curves versus different initial values of $\boldsymbol{\kappa}$. The PSNR of ALDIP fluctuates within a very small interval and is always higher than that of the baseline. We thus conclude that ALDIP is not sensitive to the initial value of $\boldsymbol{\kappa}$.
Furthermore, as shown in Figure 12, ALDIP always reaches the optimal PSNR value with fewer iterations than DIP2D-$\ell_1$, no matter how $\boldsymbol{\kappa}$ is initialized. This indicates that DIP guided by AL noise modeling has a faster learning process.
DIP-based methods require early stopping to avoid the inherent overfitting issue. Therefore, the number of steps is a critical hyperparameter that affects the performance of our methods. Figure 13 visualizes how the number of steps impacts this performance. As shown in Figure 13, the PSNR values on both the Indian Pines and Pavia Centre datasets approach their respective peaks at around 800 iterations when the image suffers from i.i.d. Gaussian noise (case 1); a similar conclusion holds for non-i.i.d. Gaussian noise (case 2). For the mixed noise, cases 5, 8 and 10 are selected for display. At about 1500 iterations, ALDIP reaches a high PSNR value on both datasets. Thus, we set the number of steps to 800 for the Gaussian noise cases and 1500 for the mixed noise cases.

4.4. Execution Time

Table 5 presents the execution time and corresponding PSNR of the DIP-based methods when achieving their optimal PSNR. Although DIP2D-$\ell_2$ has the shortest execution time, its performance is far inferior to the other methods. Among DIP2D-$\ell_1$, S2DIP and ALDIP, ALDIP exhibits the shortest execution time and the best performance. Although the parameters of the AL model must be updated at each iteration, this is not a time-consuming process. Even with this additional cost, ALDIP still exhibits a shorter execution time due to its improved learning speed; under the guidance of AL noise modeling, DIP is accelerated significantly. However, with the SSTV term incorporated, ALDIP-SSTV has the longest execution time, because the backpropagation through the SSTV term is time-consuming.

5. Conclusions

In this paper, considering the asymmetric and heavy-tailed properties of HSI noise, we combine an AL noise model with the deep image prior and propose ALDIP and ALDIP-SSTV for HSI denoising. Compared with other state-of-the-art methods, our methods perform better on both synthetic and real-world data. As unsupervised-learning-based methods, ALDIP and ALDIP-SSTV avoid training on massive paired data. Several directions remain for further improving our methods. For instance, an automatic early stopping strategy is needed to replace manual monitoring. Moreover, beyond SSTV, other more advanced regularizations could be combined with ALDIP. Last but not least, constructing a more suitable network architecture for HSI denoising is also an interesting problem. Additionally, our methods focus on modifying the loss function to enable the network to learn the asymmetric and heavy-tailed properties of complex noise; designing a learnable module that achieves the same objective would be an interesting future direction.

Author Contributions

Conceptualization, Y.W. and S.X.; methodology, Y.W. and S.X.; software, Y.W. and S.X.; validation, Y.W. and S.X.; formal analysis, Y.W. and S.X.; investigation, Y.W. and S.X.; resources, Y.W. and S.X.; data curation, Y.W. and S.X.; writing—original draft preparation, Y.W. and S.X.; writing—review and editing, Y.W. and S.X.; visualization, Y.W. and S.X.; supervision, Y.W. and S.X.; project administration, Y.W. and S.X.; funding acquisition, S.X., X.C., Q.K., T.-Y.J. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by Research Fund of Guangxi Key Lab of Multi-source Information Mining & Security under Grant MIMS22-16, in part by National Natural Science Foundation of China under Grant 12201497, 62272375 and 12001432, in part by Guangdong Basic and Applied Basic Research Foundation under Grant 2023A1515011358, in part by Shaanxi Fundamental Science Research Project for Mathematics and Physics under Grant 22JSQ033, and in part by Fundamental Research Funds for the Central Universities under Grant D5000220060.

Data Availability Statement

Indian Pines and Pavia Centre dataset: https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 20 January 2023); Terrain and Urban dataset: http://www.erdc.usace.army.mil/Media/Fact-Sheets/Fact-Sheet-Article-View/Article/610433/hypercube/ (accessed on 20 January 2023); the Shanghai dataset is available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Real-world HSI noise analysis. (a) Band 87 of the Urban dataset. (b) Generated clean band. (c) Approximate noise of the band. (d) Histogram of the approximate noise and probability density function curves fitted by the Laplace distribution and Gaussian distribution, respectively. (e) Histogram of the approximate noise and the probability density function curve fitted by the asymmetric Laplace distribution.
Figure 2. Comparison of DIP2D-ℓ2 (2D convolution DIP with ℓ2 loss) and DIP2D-ℓ1 (2D convolution DIP with ℓ1 loss) on the real-world HSI Shanghai dataset. Two local areas (red and blue squares) are demarcated and zoomed for easy observation. (a) Real-world noisy HSI Shanghai (enhanced pseudo-images composed of the 152nd, 89th and 43rd bands). (b) DIP2D-ℓ2 denoising result. (c) DIP2D-ℓ1 denoising result. (d) ALDIP denoising result. (e) ALDIP-SSTV denoising result.
Figure 3. Network architecture used for experiments.
Figure 4. GT, noisy and denoising results by the compared methods on the Indian Pines dataset (case 8). The pseudo-images composed of the 185th, 136th and 19th bands are displayed. Two local areas (red and blue squares) are demarcated and zoomed for easy observation.
Figure 5. GT, noisy and denoising results by the compared methods on the Pavia Centre dataset (case 12). The pseudo-images composed of the 30th, 14th and 2nd bands are displayed. A local area (red square) is demarcated and zoomed for easy observation.
Figure 6. Spectral curves of methods for comparison at pixel (67,7) on Indian Pines dataset (case 9).
Figure 7. Spectral curves of methods for comparison at pixel (52,59) on Pavia Centre dataset (case 12).
Figure 8. Noisy and denoising results by the compared methods on the real-world noisy HSI Shanghai dataset. The enhanced pseudo-images composed of the 152nd, 88th and 43rd bands are displayed. Two local areas (red and blue squares) are demarcated and zoomed for easy observation.
Figure 9. Noisy and denoising results by the compared methods on the real-world noisy HSI Terrain dataset. The images of band 104 are displayed. Two local areas (red and blue squares) are demarcated and zoomed for easy observation.
Figure 10. Noisy and denoising results by the compared methods on the real-world noisy HSI Urban dataset. The pseudo-images composed of the 104th, 108th and 109th bands are displayed. Two local areas (red and blue squares) are demarcated and zoomed for easy observation.
Figure 11. The trend of PSNR changing with the initial value of κ on Indian Pines (case 8) and Pavia Centre (case 12).
Figure 12. The trend of iterations to reach the best PSNR changing with the initial value of κ on Indian Pines (case 8) and Pavia Centre (case 12).
Figure 13. The empirical number of steps for experiments and how different numbers of steps impact the noise removal results.
Table 1. Synthetic noise setting. σ denotes the standard deviation of Gaussian noise. p denotes the sampling rate of impulse noise. q is the number of deadlines and stripes.

| Case | Gaussian noise (bands, σ) | Impulse noise (bands, p) | Deadline (bands, q) | Stripe (bands, q) |
|---|---|---|---|---|
| Case 1 | all bands, 0.05 | - | - | - |
| Case 2 | all bands, 0.01∼0.05 * | - | - | - |
| Case 3 | all bands, 0.01 | all bands, p = 0.1 | - | - |
| Case 4 | all bands, 0.01 | all bands, p = 0.1 | 10% bands, q = 35 | - |
| Case 5 | all bands, 0.01 | all bands, p = 0.1 | 20% bands, q = 35 | 20% bands, q = 35 |
| Case 6 | all bands, 0.01 | all bands, p = 0.15 | 30% bands, q = 35 | 30% bands, q = 35 |
| Case 7 | all bands, 0.01 | all bands, p = 0.25 | 40% bands, q = 35 | 40% bands, q = 35 |
| Case 8 | all bands, 0.01 | all bands, p = 0.3 | 50% bands, q = 35 | 50% bands, q = 35 |
| Case 9 | all bands, 0.01 | all bands, p = 0.35 | 50% bands, q = 40 | 50% bands, q = 40 |
| Case 10 | all bands, 0.01 | all bands, p = 0.4 | 50% bands, q = 45 | 50% bands, q = 45 |
| Case 11 | all bands, 0.01 | all bands, p = 0.3 | 60% bands, q = 35 | 60% bands, q = 35 |
| Case 12 | all bands, 0.01 | all bands, p = 0.3 | 70% bands, q = 35 | 70% bands, q = 35 |

* In case 2, each band is corrupted by Gaussian noise with σ randomly sampled from 0.01 to 0.05.
Table 2. The metrics of a synthetic data experiment on the HSI dataset Indian Pines.

| Case | Metric | Noisy | FastHyDe | LRTV | LRTDTV | NMoG | BALMF | CTV | DIP2D-ℓ2 | DIP2D-ℓ1 | S2DIP | ALDIP | ALDIP-SSTV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Case 1 | PSNR | 26.03 | 39.23 | 43.76 | 43.44 | 39.05 | 36.13 | 40.08 | 41.04 | 40.10 | 44.91 | 40.10 | 41.84 |
| | SSIM | 0.6036 | 0.9590 | 0.9929 | 0.9956 | 0.9583 | 0.9092 | 0.9721 | 0.9780 | 0.9699 | 0.9971 | 0.9669 | 0.9816 |
| | SAM | 5.8973 | 1.1306 | 0.6230 | 0.6513 | 1.2091 | 1.7804 | 1.05 | 0.8983 | 0.9976 | 0.6930 | 0.9892 | 0.7755 |
| | ERGAS | 116.62 | 26.80 | 16.08 | 17.16 | 27.32 | 37.87 | 24.41 | 21.18 | 23.74 | 15.70 | 23.66 | 20.46 |
| Case 2 | PSNR | 30.84 | 41.78 | 47.28 | 44.80 | 44.10 | 39.98 | 42.25 | 44.67 | 44.05 | 47.02 | 46.03 | 47.07 |
| | SSIM | 0.7569 | 0.9812 | 0.9968 | 0.9973 | 0.9865 | 0.9561 | 0.9812 | 0.9904 | 0.9888 | 0.9981 | 0.9921 | 0.9947 |
| | SAM | 3.9517 | 0.8090 | 0.3974 | 0.5323 | 0.6781 | 1.2462 | 0.862 | 0.5932 | 0.6510 | 0.5345 | 0.5022 | 0.4529 |
| | ERGAS | 78.13 | 20.29 | 11.09 | 15.24 | 15.61 | 26.88 | 19.56 | 14.01 | 15.15 | 12.42 | 12.12 | 11.39 |
| Case 3 | PSNR | 18.52 | 28.06 | 48.47 | 44.62 | 46.80 | 47.19 | 47.84 | 30.46 | 49.45 | 48.11 | 49.82 | 50.40 |
| | SSIM | 0.4557 | 0.8076 | 0.9988 | 0.9980 | 0.9948 | 0.9947 | 0.9944 | 0.9133 | 0.9978 | 0.9993 | 0.9980 | 0.9984 |
| | SAM | 13.1936 | 2.7764 | 0.2494 | 0.4344 | 0.3999 | 0.4128 | 0.4319 | 1.8306 | 0.3375 | 0.2903 | 0.3218 | 0.3047 |
| | ERGAS | 273.80 | 92.08 | 8.96 | 15.68 | 16.75 | 11.03 | 10.24 | 69.44 | 8.74 | 9.41 | 7.86 | 7.48 |
| Case 4 | PSNR | 17.10 | 24.74 | 37.09 | 39.96 | 35.24 | 41.96 | 44.11 | 30.27 | 48.99 | 48.06 | 49.76 | 50.31 |
| | SSIM | 0.4073 | 0.7181 | 0.9655 | 0.9845 | 0.9809 | 0.9743 | 0.9692 | 0.9061 | 0.9978 | 0.9992 | 0.9979 | 0.9982 |
| | SAM | 17.9269 | 8.6414 | 2.8213 | 1.5948 | 5.6250 | 3.6964 | 1.5977 | 1.9643 | 0.2914 | 0.2979 | 0.3245 | 0.3057 |
| | ERGAS | 371.64 | 213.74 | 75.18 | 45.80 | 117.76 | 87.57 | 36.52 | 70.96 | 8.35 | 9.47 | 7.89 | 7.50 |
| Case 5 | PSNR | 16.89 | 24.59 | 36.78 | 39.45 | 36.01 | 41.19 | 41.23 | 29.77 | 48.00 | 47.72 | 48.81 | 49.62 |
| | SSIM | 0.3937 | 0.7089 | 0.9602 | 0.9792 | 0.9805 | 0.9662 | 0.9354 | 0.8975 | 0.9972 | 0.9991 | 0.9977 | 0.9981 |
| | SAM | 18.3366 | 8.7609 | 3.0209 | 1.6783 | 5.3124 | 3.2847 | 2.8146 | 2.3404 | 0.3566 | 0.3564 | 0.3558 | 0.3234 |
| | ERGAS | 377.49 | 217.16 | 81.72 | 51.00 | 111.17 | 84.29 | 61.26 | 76.72 | 10.06 | 10.31 | 9.28 | 8.18 |
| Case 6 | PSNR | 16.06 | 22.94 | 36.18 | 38.91 | 40.04 | 39.93 | 37.91 | 29.72 | 47.24 | 47.55 | 48.66 | 49.54 |
| | SSIM | 0.3622 | 0.6569 | 0.9584 | 0.9817 | 0.9830 | 0.9592 | 0.9880 | 0.8838 | 0.9968 | 0.9990 | 0.9973 | 0.9980 |
| | SAM | 20.8171 | 10.6067 | 2.5872 | 1.5086 | 5.0481 | 3.8595 | 1.0938 | 2.6751 | 0.4134 | 0.3835 | 0.3713 | 0.3334 |
| | ERGAS | 423.77 | 258.33 | 76.07 | 48.55 | 106.80 | 88.84 | 36.42 | 77.99 | 10.98 | 10.39 | 9.24 | 8.25 |
| Case 7 | PSNR | 15.30 | 21.85 | 34.35 | 37.80 | 39.61 | 37.16 | 36.63 | 29.63 | 46.30 | 47.03 | 48.11 | 49.71 |
| | SSIM | 0.3310 | 0.6335 | 0.9457 | 0.9763 | 0.9845 | 0.9369 | 0.9857 | 0.8871 | 0.9960 | 0.9987 | 0.9966 | 0.9975 |
| | SAM | 22.6722 | 11.0750 | 3.3362 | 1.3765 | 4.5066 | 3.9360 | 1.1551 | 2.9089 | 0.4594 | 0.4495 | 0.3846 | 0.3577 |
| | ERGAS | 461.45 | 278.01 | 84.80 | 43.50 | 94.44 | 98.32 | 40.56 | 80.22 | 12.44 | 11.48 | 9.77 | 8.37 |
| Case 8 | PSNR | 14.52 | 19.96 | 32.80 | 36.49 | 39.31 | 35.02 | 35.66 | 29.18 | 46.01 | 46.61 | 47.95 | 49.32 |
| | SSIM | 0.3012 | 0.5566 | 0.9300 | 0.9750 | 0.9849 | 0.9128 | 0.9838 | 0.8668 | 0.9959 | 0.9985 | 0.9969 | 0.9973 |
| | SAM | 24.6293 | 13.6830 | 3.2331 | 1.3849 | 4.3261 | 3.2084 | 1.2382 | 3.2743 | 0.4792 | 0.4933 | 0.4013 | 0.3914 |
| | ERGAS | 497.95 | 323.77 | 89.67 | 51.52 | 92.30 | 94.75 | 44.59 | 85.39 | 12.86 | 12.28 | 9.95 | 9.19 |
| Case 9 | PSNR | 14.29 | 19.70 | 28.54 | 33.94 | 34.15 | 33.31 | 34.65 | 28.81 | 44.48 | 46.08 | 47.10 | 50.20 |
| | SSIM | 0.2916 | 0.6960 | 0.9171 | 0.9444 | 0.9216 | 0.8749 | 0.9811 | 0.8525 | 0.9937 | 0.9981 | 0.9962 | 0.9981 |
| | SAM | 25.6182 | 12.3029 | 4.7091 | 2.6407 | 8.0042 | 5.0992 | 1.2395 | 3.6646 | 0.6310 | 0.5571 | 0.4811 | 0.3556 |
| | ERGAS | 518.06 | 313.29 | 123.04 | 80.01 | 180.95 | 132.82 | 48.79 | 91.31 | 16.00 | 13.39 | 11.94 | 8.18 |
| Case 10 | PSNR | 14.08 | 19.12 | 27.13 | 31.01 | 29.21 | 29.56 | 33.66 | 28.84 | 42.70 | 46.14 | 46.99 | 49.79 |
| | SSIM | 0.2831 | 0.6754 | 0.8783 | 0.9047 | 0.8376 | 0.8267 | 0.9777 | 0.8502 | 0.9919 | 0.9983 | 0.9959 | 0.9978 |
| | SAM | 26.8287 | 13.9469 | 5.8248 | 4.0121 | 10.7548 | 7.0674 | 1.4748 | 3.7119 | 0.7584 | 0.5570 | 0.4909 | 0.3923 |
| | ERGAS | 538.55 | 342.07 | 139.38 | 107.30 | 259.88 | 167.56 | 55.69 | 91.63 | 20.07 | 13.53 | 12.52 | 9.32 |
| Case 11 | PSNR | 13.80 | 18.62 | 31.58 | 34.60 | 37.87 | 32.87 | 35.23 | 27.67 | 43.95 | 45.67 | 47.37 | 48.79 |
| | SSIM | 0.2796 | 0.5233 | 0.9714 | 0.9615 | 0.9811 | 0.9003 | 0.9821 | 0.8369 | 0.9936 | 0.9982 | 0.9956 | 0.9972 |
| | SAM | 26.4903 | 14.6744 | 2.3998 | 2.0343 | 5.0204 | 4.4053 | 1.2945 | 3.9564 | 0.6378 | 0.5478 | 0.4563 | 0.4067 |
| | ERGAS | 531.88 | 349.84 | 79.30 | 63.94 | 104.89 | 108.83 | 47.07 | 101.46 | 16.17 | 13.65 | 10.70 | 9.56 |
| Case 12 | PSNR | 13.14 | 17.55 | 31.01 | 33.62 | 35.51 | 31.37 | 34.6 | 27.31 | 43.29 | 45.44 | 45.54 | 48.05 |
| | SSIM | 0.2576 | 0.4994 | 0.9706 | 0.9600 | 0.9657 | 0.8594 | 0.9804 | 0.8235 | 0.9931 | 0.9982 | 0.9949 | 0.9966 |
| | SAM | 28.0145 | 15.0541 | 2.4515 | 2.1039 | 6.0969 | 4.8016 | 1.4321 | 4.0036 | 0.6207 | 0.5570 | 0.4942 | 0.4582 |
| | ERGAS | 561.12 | 372.75 | 79.56 | 65.50 | 131.85 | 122.80 | 50.42 | 103.93 | 17.03 | 13.88 | 13.13 | 11.05 |
Table 3. The metrics of a synthetic data experiment on the HSI dataset Pavia Centre.

| Case | Metric | Noisy | FastHyDe | LRTV | LRTDTV | NMoG | BALMF | CTV | DIP2D-ℓ2 | DIP2D-ℓ1 | S2DIP | ALDIP | ALDIP-SSTV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Case 1 | PSNR | 26.30 | 39.20 | 39.48 | 35.86 | 39.51 | 37.43 | 38.8 | 39.17 | 39.00 | 40.04 | 37.45 | 38.69 |
| | SSIM | 0.7424 | 0.9800 | 0.9829 | 0.9620 | 0.9811 | 0.9680 | 0.9794 | 0.9809 | 0.9805 | 0.9845 | 0.9814 | 0.9840 |
| | SAM | 17.0067 | 3.7325 | 3.8531 | 4.6013 | 3.5918 | 5.3550 | 3.7128 | 3.7796 | 4.0133 | 3.0926 | 4.3570 | 4.0333 |
| | ERGAS | 175.10 | 40.62 | 39.03 | 58.34 | 39.22 | 50.16 | 41.83 | 40.37 | 41.22 | 36.22 | 49.91 | 43.81 |
| Case 2 | PSNR | 31.65 | 41.78 | 42.08 | 37.46 | 44.68 | 41.65 | 41.78 | 41.98 | 42.59 | 43.80 | 43.47 | 44.10 |
| | SSIM | 0.8729 | 0.9902 | 0.9912 | 0.9734 | 0.9938 | 0.9868 | 0.9902 | 0.9891 | 0.9907 | 0.9928 | 0.9943 | 0.9949 |
| | SAM | 12.1869 | 3.0119 | 3.0857 | 4.0471 | 2.5040 | 3.9117 | 2.7267 | 3.2144 | 2.9836 | 2.4591 | 2.5291 | 2.4204 |
| | ERGAS | 113.60 | 30.25 | 29.25 | 49.42 | 22.24 | 33.56 | 30.44 | 29.99 | 27.94 | 23.97 | 25.83 | 24.08 |
| Case 3 | PSNR | 22.36 | 35.48 | 43.90 | 38.49 | 47.64 | 46.40 | 42.71 | 35.64 | 47.25 | 47.43 | 46.36 | 47.29 |
| | SSIM | 0.6967 | 0.9662 | 0.9948 | 0.9806 | 0.9970 | 0.9964 | 0.9942 | 0.9673 | 0.9967 | 0.9969 | 0.9974 | 0.9975 |
| | SAM | 20.0879 | 4.1047 | 2.6064 | 3.3883 | 1.9752 | 2.0972 | 2.1413 | 4.9394 | 1.9686 | 1.8505 | 1.7034 | 1.7181 |
| | ERGAS | 273.22 | 60.78 | 24.40 | 43.81 | 15.86 | 18.88 | 30.51 | 59.42 | 16.09 | 15.74 | 17.51 | 16.00 |
| Case 4 | PSNR | 21.57 | 32.82 | 43.60 | 36.12 | 47.52 | 45.64 | 42.07 | 35.53 | 47.20 | 47.39 | 46.40 | 47.35 |
| | SSIM | 0.6549 | 0.9493 | 0.9946 | 0.9565 | 0.9969 | 0.9962 | 0.9942 | 0.9660 | 0.9967 | 0.9969 | 0.9974 | 0.9975 |
| | SAM | 23.0573 | 7.4007 | 2.6426 | 7.9608 | 2.0154 | 2.1448 | 2.1413 | 5.0382 | 1.9759 | 1.8543 | 1.7262 | 1.6897 |
| | ERGAS | 308.95 | 101.51 | 25.14 | 90.77 | 16.03 | 20.30 | 30.51 | 60.22 | 16.16 | 15.81 | 17.43 | 15.94 |
| Case 5 | PSNR | 21.30 | 32.67 | 43.51 | 36.14 | 47.37 | 45.61 | 42.07 | 34.40 | 46.93 | 46.88 | 46.22 | 46.97 |
| | SSIM | 0.6343 | 0.9471 | 0.9945 | 0.9560 | 0.9969 | 0.9960 | 0.9937 | 0.9583 | 0.9965 | 0.9966 | 0.9972 | 0.9973 |
| | SAM | 24.7838 | 7.5769 | 2.6701 | 8.1455 | 2.0495 | 2.2043 | 2.2856 | 5.6935 | 2.0375 | 1.9416 | 1.8048 | 1.8292 |
| | ERGAS | 319.28 | 103.08 | 25.37 | 86.62 | 16.29 | 20.43 | 33.02 | 69.52 | 16.67 | 16.75 | 17.93 | 16.75 |
| Case 6 | PSNR | 20.12 | 31.57 | 43.27 | 36.53 | 46.95 | 45.47 | 41.82 | 34.04 | 46.42 | 46.74 | 46.67 | 47.21 |
| | SSIM | 0.5786 | 0.9312 | 0.9943 | 0.9584 | 0.9967 | 0.9957 | 0.9934 | 0.9524 | 0.9963 | 0.9965 | 0.9966 | 0.9974 |
| | SAM | 27.9489 | 8.3278 | 2.6936 | 7.9102 | 2.0806 | 2.2514 | 2.2472 | 5.9020 | 2.1359 | 1.9748 | 2.0690 | 1.7600 |
| | ERGAS | 363.07 | 116.00 | 26.00 | 85.87 | 17.06 | 21.18 | 33.27 | 72.71 | 17.71 | 16.98 | 17.24 | 16.34 |
| Case 7 | PSNR | 19.02 | 30.37 | 43.12 | 36.30 | 47.03 | 45.47 | 41.4 | 33.53 | 46.04 | 46.00 | 46.40 | 46.95 |
| | SSIM | 0.5223 | 0.9210 | 0.9941 | 0.9593 | 0.9966 | 0.9954 | 0.9929 | 0.9466 | 0.9960 | 0.9961 | 0.9963 | 0.9972 |
| | SAM | 31.2914 | 8.8795 | 2.7314 | 7.6461 | 2.1179 | 2.3055 | 2.2937 | 6.0722 | 2.1985 | 2.1082 | 2.1499 | 1.8355 |
| | ERGAS | 408.59 | 127.06 | 26.27 | 80.63 | 16.91 | 21.26 | 35.1 | 77.29 | 18.60 | 18.61 | 18.15 | 17.15 |
| Case 8 | PSNR | 18.45 | 29.59 | 42.69 | 35.64 | 46.79 | 45.08 | 41.08 | 32.85 | 45.62 | 45.90 | 46.12 | 46.48 |
| | SSIM | 0.4854 | 0.9076 | 0.9935 | 0.9538 | 0.9965 | 0.9950 | 0.9927 | 0.9386 | 0.9956 | 0.9960 | 0.9961 | 0.9971 |
| | SAM | 32.6879 | 9.3407 | 2.7869 | 8.3467 | 2.1769 | 2.4255 | 2.3561 | 6.4693 | 2.2487 | 2.0995 | 2.1969 | 1.8739 |
| | ERGAS | 436.01 | 133.66 | 27.50 | 89.53 | 17.59 | 22.48 | 35.61 | 83.97 | 19.73 | 18.76 | 18.84 | 18.20 |
| Case 9 | PSNR | 17.85 | 28.79 | 41.96 | 34.91 | 44.94 | 43.03 | 40.73 | 31.97 | 44.18 | 44.20 | 44.57 | 45.20 |
| | SSIM | 0.4520 | 0.8932 | 0.9927 | 0.9506 | 0.9951 | 0.9909 | 0.9919 | 0.9294 | 0.9943 | 0.9946 | 0.9949 | 0.9952 |
| | SAM | 34.6958 | 10.5187 | 3.0950 | 8.1836 | 2.8347 | 3.3299 | 2.4071 | 6.8319 | 2.8962 | 2.7547 | 2.8309 | 2.7075 |
| | ERGAS | 466.46 | 148.92 | 31.78 | 91.92 | 24.96 | 33.94 | 37.9 | 92.21 | 26.54 | 25.90 | 25.74 | 24.12 |
| Case 10 | PSNR | 17.41 | 28.42 | 41.50 | 34.22 | 44.23 | 42.53 | 39.48 | 31.32 | 43.95 | 44.28 | 44.42 | 45.00 |
| | SSIM | 0.4261 | 0.8847 | 0.9922 | 0.9413 | 0.9944 | 0.9876 | 0.9902 | 0.9215 | 0.9939 | 0.9946 | 0.9948 | 0.9950 |
| | SAM | 35.8854 | 11.0685 | 3.1327 | 8.8384 | 3.0609 | 3.6675 | 3.2593 | 7.3129 | 2.9434 | 2.7617 | 2.8315 | 2.7416 |
| | ERGAS | 491.22 | 155.12 | 33.31 | 102.06 | 30.26 | 42.02 | 46.45 | 101.00 | 27.08 | 25.80 | 26.27 | 24.69 |
| Case 11 | PSNR | 18.18 | 28.73 | 42.45 | 34.97 | 46.36 | 43.93 | 39.43 | 32.50 | 45.50 | 45.71 | 46.00 | 46.71 |
| | SSIM | 0.4617 | 0.8992 | 0.9932 | 0.9522 | 0.9962 | 0.9933 | 0.9896 | 0.9354 | 0.9955 | 0.9958 | 0.9961 | 0.9963 |
| | SAM | 33.8180 | 9.6114 | 2.8141 | 7.3038 | 2.2481 | 2.6948 | 3.2593 | 6.6250 | 2.2602 | 2.1392 | 2.1863 | 2.0957 |
| | ERGAS | 449.56 | 144.97 | 28.39 | 84.59 | 18.79 | 26.89 | 46.45 | 87.30 | 19.79 | 19.19 | 18.89 | 17.47 |
| Case 12 | PSNR | 17.81 | 27.76 | 41.79 | 34.73 | 45.43 | 42.80 | 39.3 | 32.02 | 44.92 | 45.07 | 45.67 | 46.52 |
| | SSIM | 0.4357 | 0.8871 | 0.9924 | 0.9540 | 0.9959 | 0.9936 | 0.9896 | 0.9305 | 0.9951 | 0.9954 | 0.9959 | 0.9962 |
| | SAM | 35.8303 | 9.4789 | 2.8691 | 7.1715 | 2.3083 | 2.6295 | 3.1511 | 6.9558 | 2.3550 | 2.2468 | 2.2371 | 2.1170 |
| | ERGAS | 469.43 | 157.33 | 30.54 | 80.55 | 20.99 | 28.06 | 45.64 | 91.81 | 21.20 | 20.57 | 19.59 | 17.91 |
Table 4. The improvement of AL noise modeling, i.e., the improvement of ALDIP over DIP2D-ℓ1 in four metrics. Positive numbers indicate positive effects, while negative numbers indicate negative effects.

| Dataset | Metric | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 | Case 6 | Case 7 | Case 8 | Case 9 | Case 10 | Case 11 | Case 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Indian Pines | PSNR | −0.01 | 1.98 | 0.37 | 0.77 | 0.81 | 1.42 | 1.81 | 1.94 | 2.61 | 4.29 | 3.43 | 2.25 |
| | SSIM | 0.0030 | 0.0033 | 0.0003 | 0.0001 | 0.0004 | 0.0005 | 0.0006 | 0.0010 | 0.0025 | 0.0040 | 0.0020 | 0.0018 |
| | SAM | 0.0084 | 0.1487 | 0.0157 | 0.0331 | 0.0008 | 0.0421 | 0.0748 | 0.0779 | 0.1499 | 0.2675 | 0.1815 | 0.1265 |
| | ERGAS | 0.09 | 3.03 | 0.89 | 0.46 | 0.78 | 1.74 | 2.67 | 2.91 | 4.06 | 7.55 | 5.47 | 3.90 |
| Pavia Centre | PSNR | −1.55 | 0.87 | −0.89 | −0.80 | −0.71 | 0.25 | 0.37 | 0.50 | 0.39 | 0.47 | 0.50 | 0.75 |
| | SSIM | 0.0009 | 0.0035 | 0.0007 | 0.0007 | 0.0007 | 0.0003 | 0.0002 | 0.0006 | 0.0006 | 0.0009 | 0.0006 | 0.0008 |
| | SAM | −0.3436 | 0.4545 | 0.2652 | 0.2496 | 0.2327 | 0.0669 | 0.0486 | 0.0519 | 0.0654 | 0.1119 | 0.9014 | 0.1179 |
| | ERGAS | −8.6971 | 2.1089 | −1.4204 | −1.2677 | −1.2612 | 0.4747 | 0.4451 | 0.8953 | 0.8005 | 0.8074 | 0.9014 | 1.6018 |
Table 5. The execution time (Exe. time) and PSNR of DIP-based methods on Indian Pines (case 8) and Pavia Centre (case 12).

| Dataset | Metric | DIP2D-ℓ2 | DIP2D-ℓ1 | S2DIP | ALDIP | ALDIP-SSTV |
|---|---|---|---|---|---|---|
| Indian Pines | Exe. time | 72.29 | 123.74 | 341.27 | 114.23 | 660.58 |
| | PSNR | 29.27 | 45.75 | 46.61 | 47.44 | 50.00 |
| Pavia Centre | Exe. time | 42.75 | 109.48 | 128.90 | 74.59 | 231.91 |
| | PSNR | 32.03 | 44.91 | 44.95 | 45.74 | 46.46 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Wang, Y.; Xu, S.; Cao, X.; Ke, Q.; Ji, T.-Y.; Zhu, X. Hyperspectral Denoising Using Asymmetric Noise Modeling Deep Image Prior. Remote Sens. 2023, 15, 1970. https://doi.org/10.3390/rs15081970

