Article

Hyperspectral Image Denoising Using Global Weighted Tensor Norm Minimum and Nonlocal Low-Rank Approximation

by Xiangyang Kong, Yongqiang Zhao, Jize Xue and Jonathan Cheung-Wai Chan
1 Research & Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen 518057, China
2 Ministry of Basic Education, Sichuan Engineering Technical College, Deyang 618000, China
3 Department of Electronics and Informatics, Vrije Universiteit Brussel, 1050 Brussel, Belgium
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(19), 2281; https://doi.org/10.3390/rs11192281
Submission received: 6 August 2019 / Revised: 22 September 2019 / Accepted: 24 September 2019 / Published: 29 September 2019
(This article belongs to the Special Issue Advanced Techniques for Spaceborne Hyperspectral Remote Sensing)

Abstract

A hyperspectral image (HSI) contains abundant spatial and spectral information, but it is always corrupted by various noises, especially Gaussian noise. Global correlation (GC) across spectral domain and nonlocal self-similarity (NSS) across spatial domain are two important characteristics for an HSI. To keep the integrity of the global structure and improve the details of the restored HSI, we propose a global and nonlocal weighted tensor norm minimum denoising method which jointly utilizes GC and NSS. The weighted multilinear rank is utilized to depict the GC information. To preserve structural information with NSS, a patch-group-based low-rank-tensor-approximation (LRTA) model is designed. The LRTA makes use of Tucker decompositions of 4D patches, which are composed of a similar 3D patch group of HSI. The alternating direction method of multipliers (ADMM) is adapted to solve the proposed models. Experimental results show that the proposed algorithm can preserve the structural information and outperforms several state-of-the-art denoising methods.


1. Introduction

A hyperspectral image (HSI) consists of hundreds of contiguous bands at specific wavelengths and can deliver rich spectral information for real scenes. HSIs have been widely used in many fields, such as urban planning, mapping, agriculture monitoring, forestry, and mineral resources exploration [1,2]. However, an HSI is unavoidably corrupted by various noises, e.g., Gaussian noise, mixed Poisson–Gaussian noise, dead lines, and stripes. The noise will influence subsequent processing, such as classification [3,4,5,6,7], segmentation [8], unmixing [9,10], object detection [11,12], background subtraction [13], and super-resolution [14]. The central limit theorem establishes that the composite effect of many independent noise sources (e.g., thermal noise, shot noise, etc.) should approach a Gaussian distribution. From a practical point of view, current imaging systems designed under the assumption of additive Gaussian noise perform quite well, and the assumption of signal-independent Gaussian noise has been broadly used in HSI denoising [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]. From a theoretical point of view, Gaussian noise is the worst-case scenario for additive noise, as the Gaussian distribution maximizes the entropy subject to a variance constraint [15]. Therefore, reducing the noise of an HSI, especially Gaussian noise, is an important preprocessing step in practical applications.
When noise is added to an image to simulate degradation, it is usually drawn from N(0, Σ), where Σ is the noise covariance matrix, which may contain off-diagonal elements. When Σ = σ²I_b, where b is the number of bands and σ² is the variance of the independent and identically distributed noise, N(0, Σ) is often written as N(0, σ²) for the sake of simplicity; this represents spectrally uncorrelated noise. In the case of spectrally correlated noise, the noise covariance matrix Σ can be generated by:
$$\Sigma = \begin{pmatrix} \sigma_1^2 & c_2\sigma_2^2 & 0 & \cdots & 0 \\ c_2\sigma_2^2 & \sigma_2^2 & \ddots & & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & \sigma_{p-1}^2 & c_p\sigma_p^2 \\ 0 & \cdots & 0 & c_p\sigma_p^2 & \sigma_p^2 \end{pmatrix}, \tag{1}$$
where $c_i$ is the correlation coefficient between the noise in bands $i$ and $i-1$. This is the simple case where the noise in each band is correlated only with that of its neighbors; $\Sigma$ can be adapted to account for more complex interactions. In the literature, additive Gaussian noise is assumed to be spectrally uncorrelated, and spectrally correlated noise can be transformed into spectrally uncorrelated noise by decorrelation methods. In this paper, we mainly consider reducing Gaussian noise that is spectrally uncorrelated.
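As a concrete illustration, the following minimal NumPy sketch generates both noise models. The toy dimensions, the variable names (`clean`, `noisy_corr`), and the correlation value 0.3 are illustrative assumptions for demonstration, not values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, b = 64, 64, 8                        # toy HSI: width, height, bands
clean = rng.random((d, h, b))

# Spectrally uncorrelated case: Sigma = sigma^2 * I_b, i.e., N(0, sigma^2)
sigma = 0.2
noisy = clean + sigma * rng.standard_normal((d, h, b))

# Spectrally correlated case: tridiagonal Sigma with off-diagonals c_i * sigma_i^2
sigmas = np.full(b, sigma)
c = np.full(b, 0.3)                        # correlation with the previous band
Sigma = np.diag(sigmas ** 2)
for i in range(1, b):
    Sigma[i, i - 1] = Sigma[i - 1, i] = c[i] * sigmas[i] ** 2
L = np.linalg.cholesky(Sigma)              # Sigma = L @ L.T (Sigma must be positive definite)
white = rng.standard_normal((d * h, b))    # i.i.d. N(0, 1) samples per pixel
noisy_corr = clean + (white @ L.T).reshape(d, h, b)
```

Each pixel's band vector then has covariance $L L^T = \Sigma$, which is the correlated model above; setting all $c_i = 0$ recovers the uncorrelated case considered in this paper.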
HSI denoising methods can be classified into different categories from different perspectives. In this work, we classify them from a mathematical point of view into three categories: vectorization, matricization, and tensor methods (here, tensor refers specifically to arrays of order higher than two). An HSI contains hundreds of spectral channels, and each channel can be regarded as a grayscale image; in the tensor framework, an HSI has three modes, one spectral mode and two spatial modes. Vectorization methods represent the HSI as a vector, such as Spatio-Spectral Total Variation (SSTV) [18], which may result in very high dimensionality and computational complexity. Matricization methods can be divided into two categories. The first is band-wise (or band-by-band) processing: each band of the HSI is a gray image and can be denoised band by band with traditional gray-level image denoising methods, such as the nonlocal-based algorithm [19], K-singular value decomposition (K-SVD) [20], and block-matching 3-D filtering (BM3D) [21]. Band-wise methods ignore the correlation in the spectral domain, leading to relatively poor performance. The second category builds HSI denoising methods under a matrix framework by reshaping the HSI into a matrix [10], which is evidently more reasonable than band-wise processing. However, although matrix-based HSI denoising methods generally outperform band-wise methods, they still inevitably destroy the intrinsic spatial structure of the HSI compared with tensor-based models [22].
Spectral information in an HSI is of great importance in HSI analysis; therefore, it is essential for HSI denoising techniques to preserve it. A low-rank (LR) prior can reveal the low-dimensional structure in high-dimensional data and has often been used to regularize the denoising problem [22,23,24]. From a physical point of view, the tensor-based approach is feasible and effective for HSI processing. Tensor-based approaches have achieved promising results in HSI denoising [25,26,27,28,29,30,31,32,33,34,35,36], as they can process the spatial and spectral information jointly. Renard et al. [25] proposed a low-rank-tensor-approximation (LRTA) method that employs Tucker factorization to obtain an LR approximation of the input HSI. Because LRTA depends on an estimate of the tensor rank, it may produce unstable results. Liu et al. [27] designed the parallel factor analysis (PARAFAC) method, which uses parallel factor analysis [28] to denoise an HSI. It regards the two spatial dimensions as two modes of the tensor, which leads to vertical and horizontal artifacts. Xue et al. [29] used a rank-1 tensor decomposition for HSI denoising; however, the number of components of this decomposition cannot be estimated precisely, and an accurate rank estimate is difficult to guarantee. By jointly considering Tucker and Canonical Polyadic (CP) factorizations, a new sparsity measure with physical meaning was presented to characterize tensor data, achieving better recovery of corrupted multispectral images (MSIs) [30].
Considering the nonlocal self-similarity across space in an MSI, a tensor dictionary learning (TDL) model [31] was proposed for denoising by enforcing hard constraints on the rank of the core tensor. In real data, this constraint is not guaranteed, as the true rank of the core tensor is hard to determine. Yan et al. [34] considered the low-rankness of patches, and Hao et al. [35] considered the low-rankness of the unfolding matrices of an HSI. However, the former ignored the correlation between different modes, and the latter ignored the global low-rankness of the whole HSI (see Figure 1). Figure 1 illustrates the low-rankness of the unfoldings (matricizations) of an HSI along its three modes. The blue curves in Figure 1b,c are the singular values of the matrices unfolded along mode-1 and mode-2 (the spatial dimensions). As both show almost the same tendency, the HSI has the same property in the spatial domain along mode-1 and mode-2. The fast decay of the blue curve in Figure 1d indicates a strong correlation along mode-3 (the spectral dimension). The yellow squares in Figure 1b–d are examples of similar matrix patches in the unfolding matrices along the three modes, which imply the global similarity of the HSI. Similar cube patches in the HSI can be used to enhance denoising performance; Xue et al. [36] have shown good results by considering nonlocal similarity. However, the singular values of the global HSI, or the weighted nuclear norm introduced in [37], should be treated differently to further improve performance. How to fully exploit the high correlation and nonlocal similarity is still a challenge for HSI denoising algorithms. By jointly taking into account the global correlation and nonlocal similarities, we propose a novel denoising approach.
The main contributions of this paper are three-fold. First, instead of only exploring the mode-3 LR prior of the clean HSI [35], we also consider the LR property of mode-1 and mode-2, which we call global low-rankness. This makes full use of the global information in both the spectral and spatial domains.
Second, we use k-nearest neighbors (k-NN) to cluster the nonlocal similar patches (3-order tensors) into a 4-order tensor, in which the third dimension keeps the spectral mode consistent with the HSI itself. These constructed tensors are LR. This differs from the method in [35], which reshapes the similar patches into a new 3-order tensor.
Third, the joint global correlation (GC) and nonlocal low-rankness regularizations are integrated into a single scheme, in which the alternating direction method of multipliers (ADMM) [38,39] is utilized and extended. Extensive experiments show that the proposed method can effectively remove Gaussian noise and recover details of the original scene. Compared with state-of-the-art methods, the proposed method performs well in both quantitative evaluation and visual comparison.
The remainder of this paper is organized as follows: Section 2 reviews the related work and details the proposed method for HSI denoising. The denoising algorithm and optimization procedure are presented in Section 3. Experimental results and discussion are reported in Section 4, and conclusions are drawn in Section 5.

2. Materials and Methods

2.1. Motivation

We use a boldface Euler script letter (e.g., $\mathcal{X}$), a boldface capital letter (e.g., $\mathbf{X}$), a boldface lowercase letter (e.g., $\mathbf{x}$), and a lowercase letter (e.g., $x$) to denote a tensor, a matrix, a vector, and a scalar, respectively. For an $N$-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, its element at location $(i_1, i_2, \ldots, i_N)$ is denoted by $x_{i_1 i_2 \cdots i_N}$, where $\mathbb{R}$ is the real field. A fiber of $\mathcal{X}$ is a vector obtained by fixing all indices but one [16]. Its $L_1$ norm and Frobenius norm are defined as $\|\mathcal{X}\|_1 = \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} |x_{i_1 i_2 \cdots i_N}|$ and $\|\mathcal{X}\|_F^2 = \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} x_{i_1 i_2 \cdots i_N}^2$, respectively. The matricization (or unfolding) of $\mathcal{X}$ along mode-$n$ is denoted by $X_{(n)} \in \mathbb{R}^{I_n \times (I_1 \cdots I_{n-1} I_{n+1} \cdots I_N)}$ and is obtained by taking all the mode-$n$ fibers as the columns of the resulting matrix. The $n$-mode product of $\mathcal{X}$ with a matrix $U \in \mathbb{R}^{K \times I_n}$ is denoted by $\mathcal{O} = \mathcal{X} \times_n U \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times K \times I_{n+1} \times \cdots \times I_N}$. The Tucker decomposition of $\mathcal{X}$ is defined as a core tensor multiplied by a matrix along each mode [17]: $\mathcal{X} = \mathcal{G} \times_1 U_1 \times_2 U_2 \times_3 \cdots \times_N U_N$, where $\mathcal{G}$ is the core tensor of the same size as $\mathcal{X}$, and $U_i \in \mathbb{R}^{I_i \times I_i}$ is the orthogonal factor matrix, which can be regarded as the principal components in mode $i$. For more details of tensor notation, the reader is referred to [16].
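The following NumPy sketch makes the unfolding, $n$-mode product, and Tucker decomposition (here computed with a truncated higher-order SVD) concrete. The column ordering of `unfold` follows NumPy's default memory layout rather than the exact convention of [16], but the functions are mutually consistent; the function names are ours.

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization X_(n): the mode-n fibers become the columns."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold(., n) for a tensor of the given target shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def nmode_product(X, U, n):
    """n-mode product X x_n U: multiply every mode-n fiber by U."""
    shape = X.shape[:n] + (U.shape[0],) + X.shape[n + 1:]
    return fold(U @ unfold(X, n), n, shape)

def hosvd(X, ranks):
    """Truncated HOSVD giving a Tucker decomposition X ~ G x_1 U1 ... x_N UN."""
    Us = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
          for n, r in enumerate(ranks)]        # leading left singular vectors of X_(n)
    G = X
    for n, U in enumerate(Us):
        G = nmode_product(G, U.T, n)           # core tensor G = X x_1 U1' ... x_N UN'
    return G, Us

# quick check on a random 3-order tensor
X = np.random.default_rng(1).random((10, 12, 8))
G, Us = hosvd(X, (5, 5, 4))
Xhat = G
for n, U in enumerate(Us):
    Xhat = nmode_product(Xhat, U, n)           # low-multilinear-rank approximation
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```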
The degradation model with additive Gaussian noise for an HSI can be represented by $\mathcal{Y} = \mathcal{X} + \mathcal{N}$, where $\mathcal{Y}, \mathcal{X}, \mathcal{N} \in \mathbb{R}^{d \times h \times b}$ denote the noisy HSI, the clean HSI, and the Gaussian noise, respectively, and $d$, $h$, $b$ denote the width, height, and number of bands of the HSI. In this paper, we assume the additive white Gaussian noise $\mathcal{N}$ has zero mean and variance $\sigma^2$.
With the LR prior, the denoising problem for a matrix $X$ can be described as [37]
$$\arg\min_{X} \frac{1}{2}\|Y - X\|_F^2 + \mu\,\mathrm{rank}(X), \tag{2}$$
where $\mu$ is the tradeoff parameter. Similarly, the denoising problem for an HSI in tensor format can be described as
$$\arg\min_{\mathcal{X}} \frac{1}{2}\|\mathcal{Y} - \mathcal{X}\|_F^2 + \mu\,\mathrm{rank}(\mathcal{X}). \tag{3}$$
In the low-rank approximation problem, the nuclear norm is usually introduced as a surrogate for the LR constraint. The matrix unfolded along mode-3 is LR [22]. By unfolding along mode-3 and applying the weighted nuclear norm minimization (WNNM) of [37], excellent noise removal performance was shown in [35]. However, the method in [35] ignores the fact that the unfolding matrix along each mode encodes information, which means that the matrices unfolded along mode-1 and mode-2 have the same LR property.
For each matrix, nuclear norm minimization treats all singular values equally, which leads to a sub-optimal estimate. To overcome this problem, the denoising problem based on the global correlation can be described as
$$\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \|\mathcal{X}\|_{w,*} \quad \text{s.t.} \quad \|\mathcal{Y} - \mathcal{X}\|_F^2 \le \sigma^2, \tag{4}$$
where $\|\mathcal{X}\|_{w,*} = \sum_{i=1}^{N} \alpha_i \|X_{(i)}\|_{w,*}$, and $\|X\|_{w,*} = \sum_i |w_i \sigma_i(X)|_1$ is the weighted nuclear norm of matrix $X$ [37]. Here $w = [w_1, w_2, \ldots, w_n]$ ($w_i \ge 0$) is the non-negative weight of $\sigma_i(X)$, with $w_i^k = C / (|\sigma_i(X)| + \varepsilon)$, where $C = 0.05$ is a constant and $\varepsilon > 0$ is a small number that avoids division by zero; $\sigma_i(X)$ is the $i$-th largest singular value of $X$. In this paper, the original HSI is treated as a 3-order tensor, so $N = 3$. Our experiments show that good denoising results are obtained when $\alpha_1$ and $\alpha_2$ are equal and smaller than $\alpha_3$; we empirically chose $\alpha_1 = \alpha_2 = 0.2$, $\alpha_3 = 0.6$.
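A minimal sketch of the weighted norm and its shrinkage operator follows, reusing `unfold` from the sketch above. The one-pass shrinkage is a standard surrogate in the spirit of WNNM [37], not the exact solver used in this paper.

```python
import numpy as np
# reuses unfold() from the sketch above

def weighted_svt(M, mu, C=0.05, eps=1e-8):
    """Weighted singular-value thresholding: with w_i = C / (sigma_i + eps),
    large (signal) singular values are shrunk less than small (noise) ones."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - mu * C / (s + eps), 0.0)
    return (U * s_shrunk) @ Vt                 # U @ diag(s_shrunk) @ Vt

def weighted_tensor_norm(X, alphas=(0.2, 0.2, 0.6), C=0.05, eps=1e-8):
    """||X||_{w,*} = sum_i alpha_i * ||X_(i)||_{w,*} for a 3-order tensor."""
    total = 0.0
    for n, a in enumerate(alphas):
        s = np.linalg.svd(unfold(X, n), compute_uv=False)
        total += a * np.sum((C / (s + eps)) * s)
    return total
```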

2.2. The Low-Rankness Approximation of Nonlocal Similar Patches Groups

An HSI represents continuous imaging of the same scene across the spectral domain. Spectral measurements taken from the same objects or materials at different locations are similar, as they exhibit almost the same spectral reflectance. The patches referred to in the following are 3-D full-band patches (FBPs). As these patches contain the nonlocal similarity information in the spatial domain and the global correlation information across all spectral bands, they are often used as prior knowledge in denoising methods [22,31]. However, matricization methods usually concatenate the patches into a matrix, and this process often results in spectral distortion. Based on the GC prior, noise in the HSI is removed globally, but local spatial and spectral distortions appear, and much residual noise remains. The distortion and residual noise can be removed efficiently with nonlocal low-rank regularization [40], which uses nonlocal self-similarity (NSS) to characterize the HSI [41]. The similarity of two given patches is evaluated by the Frobenius norm of their difference; smaller norms represent higher similarity [42].
Motivated by [30], we separate the noise-free HSI $\mathcal{X}$ into many overlapping 3D patches of size $t \times t \times b$, where $t \times t$ is the spatial size and $b$ is the number of bands of the HSI. These patches are collected in a patch set $S = \{\mathcal{P}_i \in \mathbb{R}^{t \times t \times b}, i \in \Gamma\}$, where $\Gamma$ is the index set and $\mathcal{P}_i$ is the $i$-th 3D patch. For each exemplar patch $\mathcal{P}_i$, we search for similar patches with the k-nearest neighbors (k-NN) method [43,44] in the spatial domain; two patches are considered similar if the Euclidean distance between their vectorizations is smaller than a given threshold. For each 3D patch of $S$, similar 3D patches are searched within a large window around it. $S$ can then be grouped into $J$ clusters, and the similar 3D patches in each cluster are reshaped into a 4-order tensor $\mathcal{R}_p(\mathcal{X})$ of size $t \times t \times b \times N_0$, where $p$ is the index of the $p$-th cluster, as shown in Figure 2, and $N_0$ is the number of patches. The selection of $J$ is discussed in Section 4.4. This is more reasonable than reshaping each cluster into a 3-order tensor, as done in [30]. Because the patches in each cluster have very similar structures, $\mathcal{R}_p(\mathcal{X})$ can be approximated by an LR tensor $\mathcal{L}_p$, i.e., $\mathcal{R}_p(\mathcal{X}) \approx \mathcal{L}_p$. The corresponding optimization problem is
$$\hat{\mathcal{L}}_p = \arg\min_{\mathcal{L}_p} \frac{1}{2}\|\mathcal{R}_p(\mathcal{X}) - \mathcal{L}_p\|_F^2 + \mu\,\mathrm{rank}(\mathcal{L}_p). \tag{5}$$
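Before continuing, a naive sketch of the patch-grouping step described above follows. The exhaustive window search and the values of the patch size `t`, stride, window `radius`, and group size `N0` are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def group_similar_patches(X, t=6, stride=4, N0=30, radius=20):
    """Collect, for every exemplar FBP, its N0 most similar full-band patches
    inside a search window, stacked as a 4-order tensor R_p of shape
    (t, t, b, N0). Returns the groups and their top-left coordinates."""
    d, h, b = X.shape
    groups, coords = [], []
    for i in range(0, d - t + 1, stride):
        for j in range(0, h - t + 1, stride):
            ref = X[i:i + t, j:j + t, :]
            cands = []
            for ii in range(max(0, i - radius), min(d - t, i + radius) + 1):
                for jj in range(max(0, j - radius), min(h - t, j + radius) + 1):
                    p = X[ii:ii + t, jj:jj + t, :]
                    cands.append((np.sum((p - ref) ** 2), ii, jj, p))
            cands.sort(key=lambda item: item[0])        # nearest by squared F-norm
            sel = cands[:N0]
            groups.append(np.stack([p for _, _, _, p in sel], axis=3))
            coords.append([(ii, jj) for _, ii, jj, _ in sel])
    return groups, coords
```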
As an HSI has a strong correlation across the spectrum, $(\mathcal{L}_p)_{(3)}$ is LR. The patches in each cluster have very similar structures, which implies that $(\mathcal{L}_p)_{(4)}$ is also LR. Therefore, $\mathcal{L}_p$ can be represented by a Tucker decomposition: $\mathcal{L}_p = \mathcal{G}_p \times_1 U_1^p \times_2 U_2^p \times_3 U_3 \times_4 U_4^p$, where $\mathcal{G}_p$ is the core tensor and $U_1^p, U_2^p, U_3, U_4^p$ are factor matrices with orthogonal columns, i.e., $U_i^{pT} U_i^p = I$ and $U_3^T U_3 = I$. Note that the factor matrix in the spectral mode is a shared matrix $U_3$ for all $p$, ensuring that $\mathcal{L}_p$ is low-rank in the spectral mode as a whole. Similar to the global denoising problem, the ideal HSI patches can be estimated by solving the following optimization problem:
$$\hat{\mathcal{L}}_p = \arg\min_{\mathcal{L}_p} \frac{\lambda}{2}\|\mathcal{R}_p(\mathcal{X}) - \mathcal{L}_p\|_F^2 + \eta\|\mathcal{L}_p\|_{w,*}, \tag{6}$$
where $\|\cdot\|_{w,*}$ is the weighted tensor norm defined after (4).
By replacing $\mathcal{L}_p$ in (6) with its Tucker decomposition, we obtain the following problem:
$$\min_{\mathcal{G}_p, U_1^p, U_2^p, U_3, U_4^p} \frac{\lambda}{2}\|\mathcal{R}_p(\mathcal{X}) - \mathcal{G}_p \times_1 U_1^p \times_2 U_2^p \times_3 U_3 \times_4 U_4^p\|_F^2 + \eta\|\mathcal{G}_p\|_{w,*}, \tag{7}$$
where $\mathcal{G}_p$ is the core tensor.

2.3. Proposed Method

Considering both global and nonlocal low-rankness, we propose the following regularized optimization problem:
$$\min_{\mathcal{X}, \mathcal{G}_p, U_1^p, U_2^p, U_3, U_4^p} \mu\|\mathcal{X}\|_{w,*} + \sum_{p=1}^{K}\left(\frac{\lambda}{2}\|\mathcal{R}_p(\mathcal{X}) - \mathcal{G}_p \times_1 U_1^p \times_2 U_2^p \times_3 U_3 \times_4 U_4^p\|_F^2 + \eta\|\mathcal{G}_p\|_{w,*}\right) \quad \text{s.t.} \quad \|\mathcal{Y} - \mathcal{X}\|_F^2 \le \sigma^2. \tag{8}$$
The unconstrained version of (8) is
$$\min_{\mathcal{X}, \mathcal{G}_p, U_1^p, U_2^p, U_3, U_4^p} \frac{1}{2}\|\mathcal{Y} - \mathcal{X}\|_F^2 + \mu\|\mathcal{X}\|_{w,*} + \sum_{p=1}^{K}\left(\frac{\lambda}{2}\|\mathcal{R}_p(\mathcal{X}) - \mathcal{G}_p \times_1 U_1^p \times_2 U_2^p \times_3 U_3 \times_4 U_4^p\|_F^2 + \eta\|\mathcal{G}_p\|_{w,*}\right). \tag{9}$$

3. Optimization Procedure and Algorithm

The scheme of the proposed HSI denoising method is summarized in Figure 3. To solve the proposed denoising model, we apply the variable-splitting technique [39] and introduce auxiliary variables $\mathcal{Q}$ and $\mathcal{Z}_p$ ($p = 1, 2, \ldots, K$). Replacing the variable $\mathcal{X}$ in the second term and $\mathcal{G}_p$ in the third term, problem (9) can be reformulated as
$$\min_{\mathcal{X}, \mathcal{Q}, \mathcal{Z}_p, U_1^p, U_2^p, U_3, U_4^p} \frac{1}{2}\|\mathcal{Y} - \mathcal{X}\|_F^2 + \mu\|\mathcal{Q}\|_{w,*} + \sum_{p=1}^{K}\left(\frac{\lambda}{2}\|\mathcal{R}_p(\mathcal{X}) - \mathcal{G}_p \times_1 U_1^p \times_2 U_2^p \times_3 U_3 \times_4 U_4^p\|_F^2 + \eta\|\mathcal{Z}_p\|_{w,*}\right) \quad \text{s.t.} \quad \mathcal{X} = \mathcal{Q}, \; \mathcal{G}_p = \mathcal{Z}_p. \tag{10}$$
By introducing two penalty parameters, α and β, (10) can be changed to the unconstrained version:
$$\min_{\mathcal{X}, \mathcal{Q}, \mathcal{Z}_p, U_1^p, U_2^p, U_3, U_4^p} \frac{1}{2}\|\mathcal{Y} - \mathcal{X}\|_F^2 + \mu\|\mathcal{Q}\|_{w,*} + \frac{\alpha}{2}\|\mathcal{X} - \mathcal{Q}\|_F^2 + \sum_{p=1}^{K}\left(\frac{\lambda}{2}\|\mathcal{R}_p(\mathcal{X}) - \mathcal{G}_p \times_1 U_1^p \times_2 U_2^p \times_3 U_3 \times_4 U_4^p\|_F^2 + \eta\|\mathcal{G}_p\|_{w,*} + \frac{\beta}{2}\|\mathcal{G}_p - \mathcal{Z}_p\|_F^2\right). \tag{11}$$
The proposed algorithm for denoising is summarized in Algorithm 1. Please refer to Appendix A for the details of the optimization process of this algorithm.
Algorithm 1. Proposed method for HSI denoising
Input: Noisy HSI Y
Output: Denoised HSI X
Initialization:
Set parameters α, β, λ, μ, η; R1 = ceil(h × 0.6) and R2 = ceil(d × 0.6); $\mathcal{X}$ is initialized by the (R1, R2, R3)-Tucker approximation of $\mathcal{Y}$, where ceil(a) denotes the smallest integer not less than a. Other variables are initialized to 0.
1: while not converged do
2: updating $\mathcal{Q}$ via $\mathcal{Q} = \mathrm{fold}\{\mathcal{D}_{\tau w}(\mathcal{X})\}$
3: updating $\mathcal{G}_p$ via $\mathcal{G}_p = \mathrm{fold}\{\mathcal{D}_{\upsilon w}(\mathcal{Z}_p)\}$
4: updating $\mathcal{Z}_p$ via $\mathcal{Z}_p = \dfrac{\lambda\,\mathcal{R}_p(\mathcal{X}) \times_1 U_1^{pT} \times_2 U_2^{pT} \times_3 U_3^{T} \times_4 U_4^{pT} + 2\beta\,\mathcal{I}}{\lambda + 2\beta}$
5: updating $U_1^p$ via $U_1^p = \mathrm{SVD}\big((\mathcal{R}_p(\mathcal{X}) \times_2 U_2^{pT} \times_3 U_3^{T} \times_4 U_4^{pT})_{(1)}, r_1\big)$
6: updating $U_2^p$ via $U_2^p = \mathrm{SVD}\big((\mathcal{R}_p(\mathcal{X}) \times_1 U_1^{pT} \times_3 U_3^{T} \times_4 U_4^{pT})_{(2)}, r_2\big)$
7: updating $U_3$ via $U_3 = \mathrm{eigs}\big(\sum_{p=1}^{K} Q_p Q_p^T, r_3\big)$
8: updating $U_4^p$ via $U_4^p = \mathrm{SVD}\big((\mathcal{R}_p(\mathcal{X}) \times_1 U_1^{pT} \times_2 U_2^{pT} \times_3 U_3^{T})_{(4)}, r_4\big)$
9: updating $\mathcal{X}$ via $\mathcal{X} = \mathrm{Ten}(\Phi_1^{-1}\Phi_2)$, where $\Phi_1 = (1 + \alpha)\mathbf{I} + \lambda\sum_{p=1}^{K} R_p^T R_p$ and $\Phi_2 = y + \alpha q + \lambda\sum_{p=1}^{K} R_p^T m$
10: updating $\alpha \leftarrow 1.05\alpha$, $\beta \leftarrow 1.05\beta$
11: end while
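To make steps 3–8 concrete, a simplified sketch for a single patch group follows, reusing `unfold`, `fold`, `nmode_product`, and `weighted_svt` from the sketches in Section 2. It alternates HOOI-style factor updates with a weighted shrinkage of the core, and omits the spectral factor $U_3$ shared across clusters as well as the exact penalty bookkeeping of Algorithm 1; it is an illustration of the idea, not the paper's implementation.

```python
import numpy as np
# reuses unfold, fold, nmode_product and weighted_svt from Section 2

def lowrank_group_estimate(R, ranks, eta=1.0, beta=10.0, n_iter=10):
    """Approximate one patch group R (t x t x b x N0) by a low-rank Tucker
    tensor: refresh the factors U_n alternately (HOOI style, cf. steps 5-8),
    then shrink the core with the weighted SVT (cf. step 3) and reconstruct."""
    Us = [np.linalg.svd(unfold(R, n), full_matrices=False)[0][:, :r]
          for n, r in enumerate(ranks)]                  # HOSVD initialisation
    for _ in range(n_iter):
        for n in range(4):
            P = R
            for m, U in enumerate(Us):
                if m != n:
                    P = nmode_product(P, U.T, m)         # project the other modes
            Us[n] = np.linalg.svd(unfold(P, n),
                                  full_matrices=False)[0][:, :ranks[n]]
    G = R
    for n, U in enumerate(Us):
        G = nmode_product(G, U.T, n)                     # core tensor
    G = fold(weighted_svt(unfold(G, 3), eta / beta), 3, G.shape)
    L = G
    for n, U in enumerate(Us):
        L = nmode_product(L, U, n)                       # L_p = G x_1 U1 ... x_4 U4
    return L
```

For example, `lowrank_group_estimate(R, (6, 6, 3, 5))` would approximate a group of stacked 6 × 6 full-band patches with multilinear rank (6, 6, 3, 5), keeping the spectral and nonlocal modes strongly compressed.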

4. Experimental Results and Discussion

Experiments on both simulated and real HSI data were executed to qualitatively and quantitatively assess the denoising performance of the proposed approach. We implemented six state-of-the-art HSI denoising methods for comparison, namely HyRes [22], LRTA [25], PARAFAC [27], tensor dictionary learning (TDL) [31], Intrinsic Tensor Sparsity regularization (ITSReg) [30], and tensor singular value decomposition (t-SVD) [32]. Among them, LRTA, PARAFAC, TDL, and ITSReg are tensor-based; t-SVD, although rooted in matrix decomposition, can process 3-D HSI data directly as a tensor, so it was also chosen for comparison. Their implementation codes can be obtained directly from the authors' websites. In our experiments, the parameters of the compared methods were kept at the default settings provided in the reference papers.

4.1. Experiment on Simulated Noisy Data

The first simulated experiments were conducted on the Washington D.C. dataset (WDC for short) (https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html). This image comprises 1280 lines and 307 columns, with a spatial resolution of 3 m and 210 bands. We extracted a 341 × 307 sub-image and removed the seriously degraded bands. The data were normalized to the range [0, 1], and the variance of the added noise varied from 0.1 to 0.3.
Five quantitative image quality assessment indices (IQAs) were employed for performance evaluation: peak signal-to-noise ratio (PSNR) [45], structure similarity (SSIM) [45], erreur relative globale adimensionnelle de synthèse (ERGAS) [46], feature similarity (FSIM) [47], and spectral angle mapper (SAM) [48]. PSNR and SSIM evaluate the similarity between the target image and the reference image based on mean squared error (MSE) and structural consistency, respectively; PSNR is measured in dB. FSIM evaluates the perceptual consistency with the reference image. Larger values of these three measures indicate better results. ERGAS measures the fidelity of the restored image based on the weighted sum of the MSE in each band, and SAM calculates the average angle between the spectrum vectors of the target HSI and the reference HSI across all spatial positions, so it reflects the fidelity of the spectral reflectance of the target HSI. In contrast to the former three measures, smaller values of these two indicate better denoising results. In this paper, the denoising IQAs (MPSNR, MSSIM, MFSIM, MERGAS, and MSAM) for each HSI were calculated as the average over all bands.
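For reference, minimal NumPy sketches of three of these indices follow (per-band PSNR averaged over bands, SAM, and ERGAS). The peak value is taken as 1 for data normalized to [0, 1], the `ratio` argument of ERGAS (the resolution ratio in [46]) is a placeholder, and the function names are ours, not from an existing toolbox.

```python
import numpy as np

def mpsnr(ref, est):
    """Mean over bands of per-band PSNR (dB), for HSIs in [0, 1] of shape (d, h, b)."""
    mse = np.mean((ref - est) ** 2, axis=(0, 1))          # per-band MSE
    return float(np.mean(10.0 * np.log10(1.0 / mse)))     # peak value is 1

def msam(ref, est, eps=1e-12):
    """Mean spectral angle (radians) between spectra over all pixels."""
    dot = np.sum(ref * est, axis=2)
    den = np.linalg.norm(ref, axis=2) * np.linalg.norm(est, axis=2) + eps
    return float(np.mean(np.arccos(np.clip(dot / den, -1.0, 1.0))))

def mergas(ref, est, ratio=1.0):
    """ERGAS: 100/ratio times the root mean of per-band relative MSE."""
    mse = np.mean((ref - est) ** 2, axis=(0, 1))
    mu2 = np.mean(ref, axis=(0, 1)) ** 2
    return float(100.0 / ratio * np.sqrt(np.mean(mse / mu2)))
```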
The performance of all methods on the WDC dataset at different noise intensity levels is listed in Table 1. The ERGAS and SAM indices of the proposed method were lower than those of the compared methods, while its other three indices were higher. The visual superiority of our method over the compared methods is also evident. To further depict the visual denoising performance, the denoising results under variance 0.3 are shown in Figure 4 as a false-color image, with the red, green, and blue channels composed of bands 60, 27, and 17, as described at https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html. The enlarged part of the denoising results shows obvious visual artifacts in the results of PARAFAC and LRTA. The artifacts of PARAFAC come from the fact that the two spatial dimensions should not be considered as two modes of the tensor; with such a decomposition, the outer products lead to vertical and horizontal artifacts (see Figure 4d, Figure 5d, and Figure 6d). The details were over-smoothed by t-SVD and ITSReg, as shown in the zoomed square. TDL considers only the nonlocal similarity but ignores the global correlation, so the HSI reconstructed by TDL was not as complete as that of the proposed method. Even though the false-color image of HyRes looks similar to that of the proposed method, blurred parts can still be observed in the enlarged part. To verify the effectiveness of the algorithm on different datasets and at different noise intensity levels, the Urban dataset of size 301 × 201 × 162 was also used. Figure 5 and Figure 6 show the denoising results of bands 42 and 74, respectively, with noise variance 0.2. A similar conclusion can be drawn for this dataset (see Table 2 and Figure 5 and Figure 6). Our algorithm not only suppresses noise but also preserves image details and texture.
As shown by the enlarged part of the denoising results, PARAFAC fails to maintain the structural integrity and generates obvious artifacts because it lacks an accurate estimate of the rank of the HSI. The result of LRTA still contains much residual noise. As an extension of matrix singular value decomposition, t-SVD still treats all singular values on the diagonal equally, which usually results in over-smoothing in some regions and residual noise in others. Although TDL considers nonlocal similarity in the spectral domain, it shows higher spectral distortion and cannot preserve the details. ITSReg considers the intrinsic structure but ignores the nonlocal similarity. HyRes shows almost the same performance as the proposed method. Compared with LRTA, PARAFAC, TDL, t-SVD, and ITSReg, the proposed method produces better Gaussian noise removal; since we consider the global correlation and nonlocal similarity jointly, it achieves a better visual outcome and spectral fidelity. Figure 7 shows the PSNR, SSIM, and FSIM values of each band of the Urban dataset with noise variance 0.2. The PSNR, SSIM, and FSIM values of the proposed method are higher than those of the compared methods, indicating better noise removal.
To further investigate the performance of preserving useful spectral information while removing noise, we plotted the spectral value curves at spatial locations (100,100) and (150,150) in Figure 8a,b, respectively. The results are contrast-stretched to 0–255 for better visualization. The closer a curve is to the original spectrum, the better the spectral information is preserved. Although HyRes showed performance close to that of the proposed method overall, it exhibited severe distortion in various bands, and the same was observed for the other five compared methods. This result is consistent with the IQAs and the visual inspection mentioned above.
For a further qualitative comparison, the mean cross-track profile was adopted. Figure 9 shows the horizontal mean profiles of the 35th band of the Indian Pine dataset before and after denoising. The horizontal axis in Figure 9 represents the row number, and the vertical axis represents the mean digital number value of each row. As shown in Figure 9a, the curve fluctuates sharply because of the noise. The fluctuation is suppressed to varying degrees by all the methods, but the proposed method provides evidently smoother curves while preserving the spatial information.

4.2. Real HSI Denoising

In this section, we present the results on the Indian Pine dataset (https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html). Its size is 145 × 145 × 220, and its spatial resolution is 20 m. The implementation strategies and parameter settings of all competing methods were the same as in the simulated experiments above. Since the noise level was unknown, we used the no-reference HSI quality assessment (NHQA for short) [49] method, which is based on quality-sensitive features. The results are listed in Table 3, from which we see that the value of the proposed method was lower than the others, indicating better performance.
By analyzing Table 1, Table 2 and Table 3, we can conclude that the proposed method works better than the compared methods regardless of the noise level and testing data (synthetic or real). This demonstrates the potential of the global and nonlocal low-rank constraints in our algorithm. The proposed algorithm provides results competitive with the state-of-the-art algorithms.
Figure 10 shows the 107th band of the restored images obtained by all competing methods. Figure 10a shows that the original HSI band was corrupted by heavy noise. Although the restored results in Figure 10b–f look satisfactory, residual noise remains in Figure 10b–d, and Figure 10e,f show over-smoothing with missing details. By comparison, our method better restores textures and edge details and manages to remove structural noise, as shown in Figure 10h: some stripe and random noise was removed. This further substantiates the robustness of the proposed method.

4.3. Comparison of Computational Costs

In this section, we compare the computational costs of the different models on these datasets: WDC (341 × 307 × 160), Urban (301 × 201 × 162), and Indian Pine (145 × 145 × 220). The running times (in seconds) of the denoising task for LRTA, PARAFAC, TDL, t-SVD, ITSReg, HyRes, and the proposed method are listed in Table 4. All experiments were implemented in MATLAB R2014a on a Windows 7 platform with a Core i5-7300HQ CPU at 2.50 GHz and 8 GB of memory.
The LRTA method is the fastest, but its denoising performance is lower than that of the other methods. The proposed method needs to search for similar 3D patches and to run the optimization procedure with Tucker decomposition, both of which make it relatively slow. In the optimization procedure, although each sub-problem has a closed-form solution in the ADMM framework, the running time is still long; in future work, we intend to investigate how to reduce the processing time. Although HyRes obtains similar performance both qualitatively and quantitatively, it can be observed from Figure 8 and Figure 9 that its denoised HSI suffers from spectral distortion.

4.4. Parameter Selection

In Algorithm 1, there are eight parameters: α, β, λ, μ, η, R1, R2, and R3. Since the auxiliary variable Q should be close to the estimated X, the regularization parameter α is gradually increased so that the error between the two variables decreases as the iterations proceed. In all experiments, we initialized α as 10 and updated it by α ← 1.05α; similarly, we initialized β = 10 and updated it by β ← 1.05β. The parameters R1 and R2 for the factor matrices U1 and U2 control the complexity of the spatial redundancy. They were empirically set as R1 = ceil(h × 0.6) and R2 = ceil(d × 0.6) in all experiments, as this setting works fairly well. R3, which controls the complexity of the spectral redundancy, should be tuned carefully for each dataset; we set R3 to 1 for the real-world datasets.
The parameter λ controls the error of the LR approximation. In our experiments with simulated data, we initialized λ = 40/σ and λ = 10/σ for the WDC and Indian Pine datasets, respectively, where σ is the variance of the Gaussian noise. For the real-data experiments, we set λ = 1.5 in the first iteration and then gradually increased λ as the error decreased.
The parameters μ and η balance the regularizations from the spectral and spatial domains, respectively. To determine suitable values for these two parameters, we conducted simulated experiments. Figure 11 shows an example using mean PSNR (MPSNR) as the selection criterion; a greedy strategy was applied to select the parameter values one by one. Although the parameters obtained this way are not globally optimal, they achieve favorable denoising performance.
The curves of MPSNR versus the regularization parameters μ and η are plotted in Figure 11a,b. MPSNR is insensitive to the values of μ and η; therefore, the proposed method is robust to the choice of these two parameters. For convenience, we set both μ and η to 1 in all experiments with simulated and real datasets.
Additionally, we conducted experiments to determine the best number of clusters J. The MPSNR versus J is displayed in Figure 12. The best MPSNR was achieved at J = 45, and MPSNR declined slightly when J increased from 45 to 60, mainly because the similarity within a cluster cannot be guaranteed when J is too large. In other words, J is optimal where the denoising performance reaches its plateau.

4.5. Analysis of Convergence

To illustrate the convergence of the proposed algorithm, we provide the MPSNR and MSSIM convergence curves for the Urban dataset under different noise levels in Figure 13a,b. The values of MPSNR and MSSIM stop changing after about 20 iterations. We also provide the convergence curves of the synthetic WDC dataset and the Indian Pine dataset in Figure 14a,b, using the objective function value as the assessment index of convergence; the steepest decline occurs in the first 20 iterations.

4.6. A Comparison with State-of-the-Art Clustering Methods

In Table 5, the proposed tensor-based learning, traditional deep learning methods, and the clustering approach in [7] are compared. Both the clustering approach in [7] and our proposed method can be regarded as tensor-based learning. However, there are still some differences. The main difference is that the clustering approach in [7] is built on CP (rank-1) decomposition (CPD), while our method is built on Tucker decomposition (TD). Each constructed 4-D tensor possesses two local spatial modes, one spectral mode, and one nonlocal spatial mode, and each mode has a specific physical meaning. Compared with CPD, which lacks a reasonable interpretation for describing the prior information of each mode, TD has a stronger ability to characterize the low-rank property of an HSI. For other differences, please refer to Table 5.

5. Conclusions

In this paper, we proposed an HSI denoising method that jointly utilizes the nonlocal and global low-rankness of an HSI. Global low-rankness was exploited via the unfolding matrices along the three modes, which preserves the structural information of the original HSI. To take advantage of nonlocal similarity, an LR constraint was added as regularization, exploiting the structural information of the image patches and the structured sparsity of similar patches. We also split the variables and designed an efficient ADMM algorithm to solve the model. The experiments with both simulated and real datasets show that the proposed method, based on the intrinsic GC and NSS features, is effective: the values of ERGAS and SAM were lower while FSIM, SSIM, and PSNR were higher (Table 1 and Table 2), and Figure 8 and Figure 9 illustrate that the spectral and spatial features were more consistent with the references. The experimental results demonstrate that our method achieves competitive performance compared with other state-of-the-art methods. However, the parameters have to be selected experimentally, and the time cost is high, which is a disadvantage. In the future, we will concentrate on accelerating the algorithm to improve its practical significance.

Author Contributions

X.K. contributed to the research experiments, data analysis, and writing of the paper. Y.Z. helped to design the experiments and the research analysis. J.X. collected and preprocessed the original data. J.C.-W.C. helped with the analysis and writing of the paper. All co-authors helped to revise the manuscript.

Funding

This work is supported by National Natural Science Foundation of China (NSFC) (61771391, 61371152), Science Technology Innovation Commission of Shenzhen Municipality (JCYJ20170815162956949, JCYJ20180306171146740) and the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University (CX201917).

Acknowledgments

Our gratitude goes to the anonymous reviewers and editors for their careful work and thoughtful suggestions that have improved this paper substantially.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Optimization Process of Algorithm 1

The ADMM strategy is introduced to optimize the objective function (11):
(I) Optimizing Q :
$$\min_{\mathcal{Q}} \mu\|\mathcal{Q}\|_{w,*} + \frac{\alpha}{2}\|\mathcal{X} - \mathcal{Q}\|_F^2. \tag{A1}$$
This is a weighted tensor nuclear norm minimization problem, and its closed-form solution is given in [41]: $\mathcal{Q} = \mathrm{fold}\{\mathcal{D}_{\tau w}(\mathcal{X})\}$, where $\tau = \mu/\alpha$.
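As an illustration of this step, the following sketch applies the weighted matrix shrinkage to every mode unfolding with the weights $\alpha_i$ and recombines the results, reusing `unfold`, `fold`, and `weighted_svt` from the sketches in Section 2; it is an illustrative surrogate, and the exact closed form of [41] may differ in its details.

```python
import numpy as np
# reuses unfold, fold and weighted_svt from the sketches in Section 2

def weighted_tensor_shrink(X, tau, alphas=(0.2, 0.2, 0.6)):
    """Illustrative surrogate for Q = fold{ D_{tau w}(X) }: weighted SVT on
    each mode unfolding of X, recombined with the mode weights alpha_i."""
    Q = np.zeros_like(X)
    for n, a in enumerate(alphas):
        Q += a * fold(weighted_svt(unfold(X, n), tau), n, X.shape)
    return Q
```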
(II) Optimizing G p :
$$\min_{\mathcal{G}_p} \eta\|\mathcal{G}_p\|_{w,*} + \frac{\beta}{2}\|\mathcal{G}_p - \mathcal{Z}_p\|_F^2. \tag{A2}$$
Here we omit the summation over $p$ for simplicity. Similar to sub-problem (I), the closed-form solution is $\mathcal{G}_p = \mathrm{fold}\{\mathcal{D}_{\upsilon w}(\mathcal{Z}_p)\}$, where $\upsilon = \eta/\beta$.
(III) Optimizing Z p , U 1 p , U 2 p , U 3 , U 4 p :
$$\min_{\mathcal{Z}_p, U_1^p, U_2^p, U_3, U_4^p} \sum_{p=1}^{K} \frac{\lambda}{2}\|\mathcal{R}_p(\mathcal{X}) - \mathcal{Z}_p \times_1 U_1^p \times_2 U_2^p \times_3 U_3 \times_4 U_4^p\|_F^2. \tag{A3}$$
The optimization problem in (A3) can be solved approximately by alternately updating the following formulas:
$$\mathcal{Z}_p = \frac{\lambda\,\mathcal{R}_p(\mathcal{X}) \times_1 U_1^{pT} \times_2 U_2^{pT} \times_3 U_3^{T} \times_4 U_4^{pT} + 2\beta\,\mathcal{I}}{\lambda + 2\beta}, \tag{A4}$$
$$U_1^p = \mathrm{SVD}\big((\mathcal{R}_p(\mathcal{X}) \times_2 U_2^{pT} \times_3 U_3^{T} \times_4 U_4^{pT})_{(1)}, r_1\big), \tag{A5}$$
$$U_2^p = \mathrm{SVD}\big((\mathcal{R}_p(\mathcal{X}) \times_1 U_1^{pT} \times_3 U_3^{T} \times_4 U_4^{pT})_{(2)}, r_2\big), \tag{A6}$$
$$U_4^p = \mathrm{SVD}\big((\mathcal{R}_p(\mathcal{X}) \times_1 U_1^{pT} \times_2 U_2^{pT} \times_3 U_3^{T})_{(4)}, r_4\big), \tag{A7}$$
$$U_3 = \mathrm{eigs}\Big(\sum_{p=1}^{K} Q_p Q_p^T, r_3\Big), \tag{A8}$$
where $\mathcal{I}$ is the identity tensor, $Q_p = (\mathcal{R}_p(\mathcal{X}) \times_1 U_1^{pT} \times_2 U_2^{pT} \times_4 U_4^{pT})_{(3)}$, $\mathrm{SVD}(A, r)$ returns the top $r$ left singular vectors of matrix $A$, and $\mathrm{eigs}(A, r)$ returns the top $r$ eigenvectors of matrix $A$. We denote the rank constraints on $U_1$, $U_2$, and $U_3$ by $R_1$, $R_2$, and $R_3$. The proposed denoising algorithm is summarized in Algorithm 1.
(IV) Optimizing X :
$$\min_{\mathcal{X}} \frac{1}{2}\|\mathcal{Y} - \mathcal{X}\|_F^2 + \frac{\alpha}{2}\|\mathcal{X} - \mathcal{Q}\|_F^2 + \sum_{p=1}^{K}\frac{\lambda}{2}\|\mathcal{R}_p(\mathcal{X}) - \mathcal{G}_p \times_1 U_1^p \times_2 U_2^p \times_3 U_3 \times_4 U_4^p\|_F^2. \tag{A9}$$
The vectorization of (A9) is
$$\min_{x} \frac{1}{2}\|y - x\|_2^2 + \frac{\alpha}{2}\|x - q\|_2^2 + \sum_{p=1}^{K}\frac{\lambda}{2}\|R_p x - m\|_2^2. \tag{A10}$$
The solution of (A10) is
$$\mathcal{X} = \mathrm{Ten}(\Phi_1^{-1}\Phi_2), \tag{A11}$$
where
$$\Phi_1 = (1 + \alpha)\mathbf{I} + \lambda\sum_{p=1}^{K} R_p^T R_p, \qquad \Phi_2 = y + \alpha q + \lambda\sum_{p=1}^{K} R_p^T m,$$
where $y = \mathrm{Vec}(\mathcal{Y})$, $x = \mathrm{Vec}(\mathcal{X})$, $q = \mathrm{Vec}(\mathcal{Q})$, $\mathcal{M} = \mathcal{Z}_p \times_1 U_1^p \times_2 U_2^p \times_3 U_3 \times_4 U_4^p$, and $m = \mathrm{Vec}(\mathcal{M})$; $\mathrm{Vec}(\cdot)$ denotes the vectorization of a tensor, and $\mathrm{Ten}(x)$ is the tensorization of vector $x$, i.e., the inverse operation of $\mathrm{Vec}$.
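A useful observation for implementing (A11): each $R_p$ only extracts voxels, so $R_p^T R_p$ is diagonal, counting how often each voxel is covered by a patch group, and $\Phi_1^{-1}$ reduces to an element-wise division. The sketch below illustrates this; the `estimates` argument, pairing index slices with patch-group tensors, is an illustrative stand-in for the $R_p^T m$ bookkeeping, not the paper's data structure.

```python
import numpy as np

def update_X(Y, Q, estimates, alpha=10.0, lam=1.0):
    """Closed-form X update of (A11). Since R_p^T R_p is diagonal, Phi_1 is
    a per-voxel scalar and the solve becomes an element-wise division.
    `estimates` is a list of (slices, L) pairs: `slices` locates a patch in
    the HSI and `L` is its low-rank estimate (a stand-in for R_p^T m)."""
    num = Y + alpha * Q                       # y + alpha * q
    den = np.full_like(Y, 1.0 + alpha)        # diagonal of (1 + alpha) * I
    for slc, L in estimates:
        num[slc] += lam * L                   # accumulate lambda * R_p^T m
        den[slc] += lam                       # accumulate lambda * diag(R_p^T R_p)
    return num / den                          # Ten(Phi_1^{-1} Phi_2)
```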

References

  1. Rasti, B.; Scheunders, P.; Ghamisi, P.; Licciardi, G.; Chanussot, J. Noise Reduction in Hyperspectral Imagery: Overview and Application. Remote Sens. 2018, 10, 482. [Google Scholar] [CrossRef]
  2. Han, J.; Zhang, D.; Cheng, G.; Guo, L.; Ren, J. Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3325–3337. [Google Scholar] [CrossRef]
  3. Zhang, L.; Zhang, L.; Tao, D.; Huang, X. On Combining Multiple Features for Hyperspectral Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2012, 50, 879–893. [Google Scholar] [CrossRef]
  4. Li, J.; Zhang, H.; Huang, Y.; Zhang, L. Hyperspectral image classification by nonlocal joint collaborative representation with a locally adaptive dictionary. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3707–3719. [Google Scholar] [CrossRef]
  5. Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral–spatial hyperspectral image classification via multiscale adaptive sparse representation. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7738–7749. [Google Scholar] [CrossRef]
  6. Zhao, Y.; Zhang, G.; Jie, F.; Gao, S.; Chen, C.; Pan, Q. Unsupervised Classification of Spectropolarimetric Data by Region-Based Evidence Fusion. IEEE Geosci. Remote Sens. Lett. 2011, 8, 755–759. [Google Scholar] [CrossRef]
  7. Makantasis, K.; Doulamis, A.D.; Doulamis, N.D.; Nikitakis, A. Tensor-based classification models for hyperspectral data analysis. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6884–6898. [Google Scholar] [CrossRef]
  8. Li, J.; Bioucas-Dias, J.M.; Plaza, A. Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4085–4098. [Google Scholar] [CrossRef]
  9. Yi, C.; Zhao, Y.Q.; Yang, J.; Chan, J.C.W.; Kong, S.G. Joint Hyperspectral Super-Resolution and Unmixing with Interactive Feedback. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3823–3834. [Google Scholar] [CrossRef]
  10. Yang, J.; Zhao, Y.-Q.; Chan, J.C.-W.; Kong, S.G. Coupled Sparse Denoising and Unmixing with Low Rank Constraint for Hyper-spectral Image. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1818–1833. [Google Scholar] [CrossRef]
  11. Gao, S.B.; Cheng, Y.M.; Zhao, Y.Q.; Xiao, L.-P. Data-driven quadratic correlation filter using sparse coding for infrared targets detection. J. Infrared Millim. Waves 2014, 33, 498–506. [Google Scholar] [CrossRef]
  12. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Hyper-Laplacian Regularized Nonlocal Low-rank Matrix Recovery for Hyperspectral Image Compressive Sensing Reconstruction. Inf. Sci. 2019, 501, 406–420. [Google Scholar] [CrossRef]
  13. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Total Variation and Rank-1 Constraint RPCA for Background Subtraction. IEEE Access 2018, 6, 49955–49966. [Google Scholar] [CrossRef]
  14. Yi, C.; Zhao, Y.Q.; Chan, J.C.W. Spectral super-resolution for multispectral image based on spectral improvement strategy and spatial preservation strategy. IEEE Trans. Geosci. Remote Sens. 2019, 1–15. [Google Scholar] [CrossRef]
  15. Shomorony, I.; Avestimehr, A.S. Worst-Case Additive Noise in Wireless Networks. IEEE Trans. Inf. Theory 2013, 59, 3833–3847. [Google Scholar] [CrossRef] [Green Version]
  16. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  17. Lathauwer, L.D.; Moor, B.D.; Vandewalle, J. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 2000, 21, 1253–1278. [Google Scholar] [CrossRef]
  18. Aggarwal, H.K.; Majumdar, A. Hyperspectral Image Denoising Using Spatio-Spectral Total Variation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 442–446. [Google Scholar] [CrossRef]
  19. Buades, A.; Coll, B.; Morel, J.-M. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 60–65. [Google Scholar] [CrossRef]
  20. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar] [CrossRef] [PubMed]
  21. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  22. Rasti, B.; Ulfarsson, M.O.; Ghamisi, P. Automatic Hyperspectral Image Restoration Using Sparse and Low-Rank Modeling. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2335–2339. [Google Scholar] [CrossRef] [Green Version]
  23. Zhao, Y.Q.; Yang, J. Hyperspectral Image Denoising via Sparse Representation and Low-Rank Constraint. IEEE Trans. Geosci. Remote Sens. 2015, 53, 296–308. [Google Scholar] [CrossRef]
  24. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Nonlocal Tensor Sparse Representation and Low-Rank Regularization for Hyperspectral Image Compressive Sensing Reconstruction. Remote Sens. 2019, 11, 193. [Google Scholar] [CrossRef]
  25. Renard, N.; Bourennane, S.; Blanc-Talon, J. Denoising and dimensionality reduction using multilinear tools for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2008, 5, 138–142. [Google Scholar] [CrossRef]
  26. Xue, J.; Zhao, Y.; Liao, W.; Kong, S.G. Joint Spatial and Spectral Low-Rank Regularization for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1940–1958. [Google Scholar] [CrossRef]
  27. Liu, X.; Bourennane, S.; Fossati, C. Denoising of hyperspectral images using the parafac model and statistical performance analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3717–3724. [Google Scholar] [CrossRef]
  28. Sena, M.M.; Trevisan, M.G.; Poppi, R.J. Parallel factor analysis. In Practical Three-Way Calibration; Elsevier: Amsterdam, The Netherlands, 2005; pp. 109–125. [Google Scholar] [CrossRef]
  29. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Nonconvex tensor rank minimization and its applications to tensor recovery. Inf. Sci. 2019, 503, 109–128. [Google Scholar] [CrossRef]
  30. Xie, Q.; Zhao, Q.; Meng, D.; Xu, Z.; Gu, S.; Zuo, W.; Zhang, L. Multispectral Images Denoising by Intrinsic Tensor Sparsity Regularization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1692–1700. [Google Scholar] [CrossRef]
  31. Peng, Y.; Meng, D.; Xu, Z.; Gao, C.; Yang, Y.; Zhang, B. Decomposable Nonlocal Tensor Dictionary Learning for Multispectral Image Denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2949–2956. [Google Scholar] [CrossRef]
  32. Drineas, P.; Mahoney, M.W. A randomized algorithm for a tensor-based generalization of the singular value decomposition. Linear Algebra Appl. 2007, 420, 553–571. [Google Scholar] [CrossRef] [Green Version]
  33. Buades, A.; Coll, B.; Morel, J.-M. A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 2005, 4, 490–530. [Google Scholar] [CrossRef]
  34. Yan, L.; Fang, H.; Zhong, S.; Zhang, Z.; Chang, Y. Weighted Low-rank Tensor Recovery for Hyperspectral Image Restoration. arXiv 2017, arXiv:1709.00192. [Google Scholar]
  35. Hao, R.; Su, Z. A patch-based low-rank tensor approximation model for multiframe image denoising. J. Comput. Appl. Math. 2018, 329, 125–133. [Google Scholar] [CrossRef]
  36. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Nonlocal Low-Rank Regularized Tensor Decomposition for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5174–5189. [Google Scholar] [CrossRef]
  37. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted Nuclear Norm Minimization with Application to Image Denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869. [Google Scholar] [CrossRef]
  38. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  39. Fu, Y.; Wang, R.; Jin, Y.; Zhang, H. Fast image fusion based on alternating direction algorithms. In Proceedings of the 12th International Conference on Signal Processing, Alsace, France, 20–22 July, 2015; pp. 713–717. [Google Scholar] [CrossRef]
  40. Dong, W.; Li, G.; Shi, G.; Li, X.; Ma, Y. Low-Rank Tensor Approximation with Laplacian Scale Mixture Modeling for Multiframe Image Denoising. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 442–449. [Google Scholar] [CrossRef]
  41. Zhang, C.; Hu, W.; Jin, T.; Mei, Z. Nonlocal image denoising via adaptive tensor nuclear norm minimization. Neural Comput. Appl. 2016, 29, 3–19. [Google Scholar] [CrossRef]
  42. Peng, X.; Lu, C.; Yi, Z.; Tang, H. Connections Between Nuclear Norm and Frobenius Norm Based Representations. IEEE Trans. Neural Netw. Learn. Syst. 2015, 29, 218–224. [Google Scholar] [CrossRef] [PubMed]
  43. Peng, X.; Feng, J.; Xiao, S.; Yau, W.Y.; Zhou, J.T.; Yang, S. Structured AutoEncoders for Subspace Clustering. IEEE Trans. Image Process. 2018, 27, 5076–5086. [Google Scholar] [CrossRef] [PubMed]
  44. Peng, X.; Yu, Z.; Yi, Z.; Tang, H. Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering. IEEE Trans. Cybern. 2012, 47, 1053–1066. [Google Scholar] [CrossRef] [PubMed]
  45. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  46. Wald, L. Data Fusion: Definitions and Architectures. Fusion of Images of Different Spatial Resolutions; Presses de l’Ecole, Ecole des Mines de Paris: Paris, France, 2002. [Google Scholar]
  47. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. Fsim: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed]
  48. Du, P.; Chen, Y.; Fang, T.; Tang, H. Error analysis and improvements of spectral angle mapper (SAM) model. In MIPPR 2005: SAR and Multispectral Image Processing; International Society for Optics and Photonics: Bellingham, WA, USA, 2005; pp. 1–6. [Google Scholar] [CrossRef]
  49. Yang, J.; Zhao, Y.; Yi, C.; Chan, J.C.W. No-Reference Hyperspectral Image Quality Assessment via Quality-Sensitive Features Learning. Remote Sens. 2017, 9, 305. [Google Scholar] [CrossRef]
Figure 1. Illustration of global spatial-and-spectral correlation in hyperspectral images (HSI). (a) Original HSI and (b) the matrix unfolding by mode-1 and its singular value curve; (c) the matrix unfolding by mode-2 and its singular values curve, and (d) the matrix unfolding by mode-3 and its singular values curve. The x-label is the length of the unfolding matrix.
Figure 2. The 4-order tensor grouped in each cluster can be reconstructed by low-rank Tucker decomposition.
Figure 3. Scheme of the proposed HSI denoising method using global and nonlocal low-rank (LR) regularizations.
Figure 4. Simulated noise removal results on Washington D.C. (WDC) data at band 10 (noise variance is 0.3). (a) Original image. (b) Noisy image. Denoising results by (c) low-rank-tensor-approximation (LRTA), (d) parallel factor analysis (PARAFAC), (e) tensor dictionary learning (TDL), (f) tensor singular value decomposition (t-SVD), (g) Intrinsic Tensor Sparsity (ITSReg), (h) HyRes, (i) Proposed.
Figure 5. Visual quality comparison of the denoising results of band 42 of the Urban dataset with noise variance 0.2. (a) Original image. (b) Noisy image. Denoising results by (c) LRTA, (d) PARAFAC, (e) TDL, (f) t-SVD, (g) ITSReg, (h) HyRes, (i) Proposed.
Figure 6. Visual quality comparison of the denoising results of band 74 of the Urban dataset with noise variance 0.2. (a) Original image. (b) Noisy image. Denoising results by (c) LRTA, (d) PARAFAC, (e) TDL, (f) t-SVD, (g) ITSReg, (h) HyRes, (i) Proposed.
Figure 7. Comparison of IQAs of the denoising results with variance 0.20 Gaussian noise: including (a) peak signal-to-noise ratio (PSNR), (b) structure similarity (SSIM), and (c) feature similarity (FSIM), (d) erreur relative globale adimensionnelle de synthese (ERGAS), (e) spectral angle mapper (SAM).
Figure 8. Comparison of spectral reflectance value at location (a) (100,100), (b) (150,150) under variance 0.20 Gaussian noise.
Figure 9. Horizontal mean profiles of 35th band in WDC data, the horizontal axis represents the row number, and the vertical axis represents the mean digital number values of each row. (a) LRTA, (b) PARAFAC, (c) TDL, (d) tSVD, (e) ITSReg, (f) HyRes, (g) Proposed.
Figure 10. Visual quality comparison of the denoising results of all methods on band 107 in Indian Pine HSI data. (a) Original image, (b) LRTA, (c) PARAFAC, (d) TDL, (e) t-SVD, (f) ITSReg, (g) HyRes, (h) Proposed.
Figure 11. Sensitivity analysis of parameter: (a) Relationship between mean PSNR (MPSNR) and μ. (b) Relationship between MPSNR and η.
Figure 12. Relationship between MPSNR and the cluster number J.
Figure 13. Curves of (a) MPSNR, (b) MSSIM of URBAN dataset under noise levels 0.1, 0.2, and 0.3.
Figure 14. Convergence curves for (a) WDC dataset under noise level 0.2 and (b) Indian Pine dataset.
Table 1. Average performance of six competing methods with respect to five IQAs on the WDC dataset.
Variance  Index    Noisy Image  LRTA     PARAFAC  TDL      t-SVD    ITSReg   HyRes    Proposed
0.1       MERGAS   235.618      80.934   205.254  58.892   66.516   66.400   53.142   51.285
0.1       MFSIM    0.837        0.951    0.842    0.977    0.973    0.973    0.989    0.989
0.1       MSSIM    0.713        0.937    0.741    0.970    0.968    0.966    0.984    0.986
0.1       MPSNR    19.993       29.580   21.209   32.313   31.156   31.096   33.015   33.212
0.1       MSAM     0.3420       0.1258   0.1082   0.0776   0.0610   0.0602   0.0488   0.0481
0.2       MERGAS   471.236      128.882  208.844  99.246   106.972  123.583  90.942   90.867
0.2       MFSIM    0.711        0.908    0.839    0.951    0.941    0.925    0.961    0.967
0.2       MSSIM    0.452        0.870    0.732    0.931    0.922    0.895    0.944    0.951
0.2       MPSNR    13.973       25.354   21.057   27.619   26.910   25.630   27.983   28.375
0.2       MSAM     0.5428       0.1473   0.1112   0.0875   0.0717   0.0724   0.0681   0.0621
0.3       MERGAS   706.855      165.924  214.940  132.553  142.304  164.568  125.017  124.803
0.3       MFSIM    0.620        0.874    0.833    0.925    0.905    0.875    0.937    0.944
0.3       MSSIM    0.293        0.807    0.718    0.885    0.868    0.820    0.896    0.904
0.3       MPSNR    10.451       23.088   20.805   25.032   24.400   23.127   25.16    25.814
0.3       MSAM     0.6901       0.1402   0.1162   0.0898   0.0704   0.0727   0.0614   0.0698
Table 2. Average performance of six competing methods with respect to five IQAs on the URBAN dataset.
Variance  Index    Noisy Image  LRTA     PARAFAC  TDL      t-SVD    ITSReg   HyRes    Proposed
0.1       MERGAS   304.984      89.666   253.521  69.326   79.281   76.134   63.81    64.371
0.1       MFSIM    0.831        0.960    0.829    0.974    0.971    0.972    0.988    0.989
0.1       MSSIM    0.580        0.905    0.655    0.948    0.943    0.946    0.951    0.965
0.1       MPSNR    19.992       30.798   21.604   33.087   31.916   32.159   35.074   35.108
0.1       MSAM     0.514        0.124    0.167    0.086    0.093    0.083    0.064    0.068
0.2       MERGAS   609.968      151.650  258.479  117.230  127.088  140.455  111.057  110.847
0.2       MFSIM    0.698        0.911    0.825    0.944    0.937    0.924    0.957    0.960
0.2       MSSIM    0.348        0.802    0.643    0.887    0.882    0.869    0.904    0.915
0.2       MPSNR    13.972       26.119   21.435   28.443   27.676   26.777   29.684   29.975
0.2       MSAM     0.775        0.175    0.177    0.1104   0.108    0.103    0.098    0.091
0.3       MERGAS   914.952      201.083  266.740  155.195  168.758  191.443  146.910  146.281
0.3       MFSIM    0.604        0.871    0.819    0.914    0.900    0.875    0.907    0.928
0.3       MSSIM    0.224        0.719    0.625    0.833    0.821    0.791    0.851    0.862
0.3       MPSNR    10.450       23.630   21.160   25.930   25.168   24.060   27.021   27.380
0.3       MSAM     0.934        0.184    0.191    0.117    0.112    0.112    0.115    0.103
Table 3. Comparison of no-reference HSI quality assessment (NHQA) in [49] on Indian Pine dataset.
        LRTA     PARAFAC  TDL      t-SVD    ITSReg   HyRes    Proposed
NHQA    27.3619  27.4287  27.1911  27.1038  27.1360  26.9105  26.8241
Table 4. Comparison of computational time (in seconds) for the denoising methods.
Dataset      Size             LRTA  PARAFAC  t-SVD          TDL   ITSReg         HyRes  Proposed
WDC          341 × 307 × 160  48    269      4.2306 × 10^4  1131  0.6579 × 10^4  159    9.16 × 10^4
URBAN        301 × 201 × 162  27    132      0.2531 × 10^4  45    1.0631 × 10^4  136    4.921 × 10^4
Indian Pine  145 × 145 × 220  104   2237     0.1968 × 10^4  175   0.5053 × 10^4  182    2.184 × 10^4
Table 5. Comparisons of differences between tensor-based learning and traditional deep learning.
                                Proposed Approach  Approach of [7]   Traditional Deep Learning
layer of prior                  single             single            multi
time cost                       low                low               high
learning method                 on-line            on-line           off-line
decomposition                   Tucker             rank-1 canonical  -
labeled training samples        -                  large number      large number
tunable parameters              small              small             huge
spatial and spectral structure  integrated         integrated        destroyed
computational complexity        low                low               high
classification accuracy         low                high              high
