Article

SLRL4D: Joint Restoration of Subspace Low-Rank Learning and Non-Local 4-D Transform Filtering for Hyperspectral Image

1
School of Computer and Software, Nanjing University of Information Science and Technology (NUIST), Nanjing 210044, China
2
Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology, NUIST, Nanjing 210044, China
3
Jiangsu Engineering Center of Network Monitoring, NUIST, Nanjing 210044, China
4
Henan Key Laboratory of Food Safety Data Intelligence, Zhengzhou University of Light Industry, Zhengzhou 450002, China
5
Department of Criminal Science and Technology, Nanjing Forest Police College, Nanjing 210023, China
*
Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in IGARSS 2018.
Remote Sens. 2020, 12(18), 2979; https://doi.org/10.3390/rs12182979
Submission received: 3 August 2020 / Revised: 31 August 2020 / Accepted: 9 September 2020 / Published: 14 September 2020
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

During the process of signal sampling and digital imaging, hyperspectral images (HSI) inevitably suffer from the contamination of mixed noises. The fidelity and efficiency of subsequent applications are considerably reduced by this degradation. Recently, as a formidable implement for image processing, low-rank regularization has been widely extended to the restoration of HSI. Meanwhile, further exploration of the non-local self-similarity of low-rank images has proven useful in exploiting the spatial redundancy of HSI. Better preservation of spatial-spectral features is achieved under both low-rank and non-local regularizations. However, existing methods generally regularize the original space of HSI; exploration of the intrinsic properties of its subspace, which leads to better denoising performance, is relatively rare. To address these challenges, a joint method of subspace low-rank learning and non-local 4-D transform filtering, named SLRL4D, is put forward for HSI restoration. Technically, the original HSI is projected into a low-dimensional subspace. Then, both spectral and spatial correlations are explored simultaneously by imposing low-rank learning and non-local 4-D transform filtering on the subspace. An alternating direction method of multipliers-based algorithm is designed to solve the formulated convex signal-noise isolation problem. Finally, experiments on multiple datasets are conducted to illustrate the accuracy and efficiency of SLRL4D.

Graphical Abstract

1. Introduction

It began with the forging of the great imaging spectrometer. Many merits were given to the generated hyperspectral image (HSI)—multiple bands, high resolution and rich details [1]. With the assistance of HSI, the vision of mankind is no longer limited merely to the spatial dimension. An increasing number of application fields now rely on HSI, for instance, urban planning, geological exploration, military surveillance and precision agriculture [2]. Among these application fields, many critical computer vision tasks such as unmixing [3], target detection [4], and classification [5] demand extremely high data quality. However, during sampling and acquisition, HSI is constantly degraded by mixed noises, for example, Gaussian noise, impulse noise, stripes and deadlines [6]. Under these circumstances, the accuracy and efficiency of the above tasks are severely weakened. Therefore, as a key pre-processing step in HSI processing, the design of restoration methodology has been an extremely valuable and active research topic in recent years.
The general idea of traditional HSI restoration methods is to extend the band-by-band or pixel-by-pixel methods that are applied to gray images. The disadvantage of these methods is distinct; that is, the spatial or spectral correlations of the 3-D HSI are not maintained. Although methods based on wavelet shrinkage [7,8,9], non-local means [10], sparse representation [11,12,13,14] or deep learning [15,16,17,18,19] achieve the combination of spatial-spectral information, the sphere of their application is still limited. These methods insufficiently explore the prior knowledge of the original image and the mixed noises, and usually remove only a specific type of noise; thus, their performance in real scenarios is not guaranteed.

1.1. Restoration by Low-Rank Property

Inspired by the robust principal component analysis model (RPCA) [20], He et al. conducted research to explore the low-rank property of HSI and modelled HSI restoration as a low-rank matrix recovery (LRMR) problem, thus realizing the removal of various mixed noises [21]. This work proved that, with extremely high probability, the low-rank latent clean HSI matrix can be restored from the observed HSI matrix degraded by mixed noises. During the process of restoration, performance would be much improved if the global and local redundancies of HSI were both thoroughly utilized [22]. In Reference [23], a sparse method is employed to encode the local redundancy of the spatial dimension, while the global redundancy of the spectral dimension is explored by a low-rank constraint. Chen et al. [24] proposed a low-rank non-negative matrix factorization method to capture the spectral geometric features while maximizing the restoration of spatial detail. However, the spatial smoothing information of HSI is not fully utilized by the studies above. Due to its powerful ability to maintain the spatial smoothness of the image, total variation (TV) regularized low-rank methods [25] are widely used in HSI restoration research. Aggarwal et al. [26] extended the traditional TV model to the three-dimensional space, and proposed a spatio-spectral total variation (SSTV) method for HSI restoration. The proposed SSTV method considers the high correlations in both the spatial and spectral dimensions, and performance enhancement of the TV-based restoration is achieved. However, in SSTV, only local spatial and spectral features are explored; the global low-rank property of HSI is not investigated further. Based on LRMR, Wang et al. continued their previous research [27] and proposed a combinatorial method with low-rank constraints named LSSTV [28].
The HSI restoration problem is modelled as a weighted nuclear norm minimization problem, and the jagged distortion left in SSTV is solved under certain circumstances. As an improvement of the LRMR, He et al. designed a TV regularized low-rank factorization method [29]. For further enhancement, they divided the HSI cube into many local patches, then imposed low-rank constraints on these patches [30]. This is because local regions of an image have a high probability of containing the same materials and thus usually share similar spectral features [31]. Tensor representation methods also show great potential to explore the low-rank property [32]. Fan et al. extended the low-rank constraints to tensor-represented HSI, and proposed a low-rank tensor recovery method that maintains correlations between the three dimensions of HSI [33]. Wang et al. [34] combined tensor Tucker decomposition and anisotropic spatial-spectral TV, achieving exciting results in the removal of mixed noises.
However, the lack of spatial information often leads to poor restoration performance in the presence of strong Gaussian noise. Most of the low-rank methods above only consider the piecewise smoothness or local similarity of the spatial dimension. Meanwhile, TV-based methods often cause distortion in the spectral domain, and some noises also appear smooth (e.g., white Gaussian noise) [35]. The structural information of the restored HSI is not guaranteed in these methods.

1.2. Restoration by Non-Local Similarity

Non-local self-similarity (NSS) theory holds that a patch of an image has many homogeneous patches with similar characteristics at other positions of the image. Since the structure of the clean image is redundant, patches similar to the target patch may exist at any position [36]. NSS-based restoration methods exploit this information redundancy of HSI to remove noises, and details of the spatial domain are preserved to the greatest extent in the course of denoising [37]. The priors of low-rankness and NSS are bonded in Reference [38]. By imposing non-local priors on the LRMR model, both the pixels and the spectra of HSI are doubly corrected to remove mixed noises while preserving the fine spatial structure of HSI, thus improving the denoising performance. Bai et al. [39] applied a fixed window to extract non-local patches from the original image, and characterized the non-local similarity of HSI in the spatial domain and the global similarity in the spectral domain. In Reference [40], Xue et al. embedded the combined prior terms of low-rankness and NSS into a sparse representation model. For further research, the tensor CANDECOMP/PARAFAC (CP) decomposition was applied to the joint denoising model [41]. Under this circumstance, the spatial information of HSI, for instance, image edges and textures, was further exploited, and the low-rank property was further utilized by the clustering of non-local patches. Both the global correlation along the spectrum and the NSS were explored by Xie et al. [42]. In this work, all non-local patches were stacked into a third-order tensor, achieving excellent denoising performance. Similar research was conducted in Reference [43], where the group sparsity and the low-rank property of HSI are further utilized. Zhang et al. hypothesized that the piecewise smooth feature exists in the restored non-local patches of HSI, and thus conducted research on the combination of NSS and TV [44].
Considering the poor maintenance of CP decomposition on the intrinsic structure of HSI, Chen et al. applied a novel tensor decomposition approach named tensor ring decomposition to the non-local based HSI restoration model [45].
Unfortunately, although NSS-based low-rank methods have delivered gratifying restoration performance, the setting of certain parameters impacts the restoration result excessively. For instance, various sizes of the search window and numbers of non-local patches often bring different results, some of which are unacceptable. Maggioni et al. designed a block-matching and 4-D filtering (BM4D) restoration algorithm, which extended planar-space denoising to three-dimensional space and provided new ideas for HSI restoration [46,47]. This work simultaneously exploits the local and non-local correlations between voxels instead of pixels; high-dimensional data are thus divided into many patches with similar features. Then, these patches are stacked together to form a four-dimensional hyperrectangle. Eventually, two diverse estimation steps occur, and the signal and noise are isolated in turn by thresholding and Wiener filtering. The best legacy of this work is that essentially parameter-free denoising is accomplished when applying BM4D to the HSI restoration problem. However, as with all NSS-based restoration methods, the processing procedure of BM4D is relatively intricate, which ineluctably results in excessive time consumption when confronting the huge size of the original image space of HSI.

1.3. Restoration by Subspace Projection

Recently, the subspace low-rank (SLR) representation method has shown great potential to address the above challenges [48,49,50,51,52]. The theory of manifold learning suggests that data in high-dimensional space are immoderately redundant, and the principal information lives in a low-dimensional subspace [53]. Similar theories exist in the HSI compression field. Since the high-dimensional data are redundant, according to the low-rank property of HSI, the dimensions of the dictionary matrix and coefficient matrix generated from the low-rank decomposition are necessarily much lower than the dimension of the original image, whereby the purpose of data compression is realized. The reconstruction process is the inverse of compression; that is, the linear combination of dictionary and coefficients yields the original image. Therefore, through this low-rank decomposition method, it is no longer necessary to store the original high-dimensional data, but only the compressed low-dimensional data. For low-rank image restoration, Figure 1 illustrates the singular value distributions in the original space and the low-dimensional subspace of the HYDICE Urban dataset. Since the spectra of HSI live in a low-dimensional subspace, as shown in Figure 1, the subspace has lower singular values than the original space, which indicates that the subspace decomposition can yield a more accurate low-rank approximation. Part of this subspace low-rank property is given by the lower dimension of the projected matrix, and the other part is given by the finiteness of the linear combination of end-members. Any pixel in HSI cannot be mixed from all end-members in the scene, but only a few of them (usually 2–3 end-members). This sparsity of the linear combination of end-members leads to the low-rankness of the decomposed coefficient matrix.
Therefore, by imposing low-rank regularization on the subspace of HSI instead of the original space, better restoration results can be achieved. Reference [49] fused spectral subspace low-rank learning and superpixel segmentation; its restoration results proved that the subspace does carry a stronger low-rank property. Taking advantage of the subspace decomposition, a fast restoration method that applies a denoiser in the subspace is proposed in [50], but the fixed subspace basis it uses also prevents further improvement of the restoration results. References [48,51] proposed a Gaussian noise-removal method by combining low-rank subspace decomposition and weighted nuclear norm minimization, and achieved outstanding results. However, this advancement comes with reduced usability, as complex parameter settings greatly suppress real application. Reference [52] modeled the mixed denoising problem with a robust ℓ₁ norm, achieving the unification of complex statistical noise distributions. The number of parameters that need to be manually tuned is reduced; thus, its usability and efficiency in real applications are further enhanced. However, these studies also show that, along with low-rank-based restoration methods, SLR-based methods are powerless when encountering structured sparse noise and strong Gaussian noise. It is essential to further exploit the high correlation in the spatial dimension of HSI by exploring the NSS prior on the basis of SLR. Meanwhile, the parameters should be kept as simple as possible to make the proposed method easier to implement. Through the discreet design of a low-dimensional orthogonal dictionary and prior regularization, the joint method of SLR and NSS not only accomplishes precise restoration of HSI, but also reduces the cost of computation. In this way, the NSS property is utilized in the subspace, and the heavy computational cost of NSS-based methods is distinctly alleviated.
In this paper, a joint method of subspace low-rank learning and non-local 4-D transform filtering, named SLRL4D, is presented for HSI restoration. By assuming that the latent clean HSI exists in the low-dimensional subspace, the NSS prior of the HSI is embedded into the SLR learning architecture. Thus, the high correlation of HSI in both spatial and spectral dimensions is thoroughly explored, and the successful removal of the mixed noises of HSI is realized. The main contributions of this paper are summarized as:
  • The subspace low-rank learning method is designed to restore the clean HSI from the contaminated HSI which is degraded by mixed noises. An orthogonal dictionary with far lower dimensions is learned during the process of alternating updates, thus leading to precise restoration of the clean principal signal of HSI.
  • Based on the full exploration of the low-rank property in the subspace, BM4D filtering is employed to further explore the NSS property within the subspace of HSI rather than the original space. By preserving the details of the spatial domain more precisely, the completely parameter-free BM4D filtering leads to easier application in the real world.
  • Each term in the proposed restoration model is convex after careful design; thus, a convex optimization algorithm based on the alternating direction method of multipliers (ADMM) can be derived to solve and optimize this model. Meanwhile, since the HSI is decomposed into two sub-matrices of lower dimensions, the computational consumption of the proposed SLRL4D algorithm is much lower than that of other existing restoration methods.
  • Extensive experiments of quantitative analysis and visual evaluation are conducted on several simulated and real datasets, and it is demonstrated that our proposed method not only achieves better restoration results, but also improves reliability and efficiency compared with the latest HSI restoration methods.
The remainder of this paper is organized as follows. Section 2 formulates the preliminary problem of HSI restoration. Then, the combined restoration approach of subspace low-rank learning and non-local 4-D transform filtering is elaborated in Section 3. In Section 4, Section 5 and Section 6, substantial experimental evaluations are conducted to demonstrate that our proposed method is both effective and efficient. Eventually, the conclusion and future work are given in Section 7.

2. Problem Formulation

2.1. Degradation Model

During the process of acquisition and transmission, a succession of noises (e.g., Gaussian, deadline, impulse and stripe noises) frequently degrades the imaging quality of HSI. Denote the observed HSI cube as a 3-D tensor 𝒴 ∈ ℝ^{m×n×l} with spatial size m × n and l spectral bands.
Considering a scenario of mixed noises contamination, the degradation model of HSI restoration is formulated as:
Y = X + S + N ,
where Y ∈ ℝ^{l×d} is the 2-D HSI matrix reshaped from the 3-D tensor 𝒴, with d = m × n pixels and l spectral bands. X ∈ ℝ^{l×d} denotes the latent clean HSI to be recovered; N ∈ ℝ^{l×d} and S ∈ ℝ^{l×d} denote the additive Gaussian noise and the sparse noise, respectively.
The purpose of HSI denoising is to seek the restoration of the clean image X from the observed image Y by isolating S and N. However, Equation (1) is clearly an ill-posed problem. By employing prior knowledge to introduce regularization constraints, the uncertain solution domain can be compressed:
min_{X,S} R₁(X) + λ R₂(S)   s.t.   ‖Y − X − S‖_F² ≤ ε,
where R₁(X) and R₂(S) are the regularization terms that constrain the clean image and the sparse noise, respectively. The regularization parameter λ is accountable for the balance of the two regularization terms. ‖Y − X − S‖_F² ≤ ε is the fidelity term, which constrains the additive Gaussian noise, and ε is determined by the standard deviation of the Gaussian noise.
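As a concrete illustration, the degradation model of Equation (1) can be simulated in a few lines. This is a minimal numpy sketch, not the paper's code: the matrix sizes, the rank-5 construction of the clean signal, and the noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
l, d = 191, 64 * 64              # spectral bands, pixels (d = m * n)

# Latent clean HSI X: low-rank by construction (rank 5 << l)
r = 5
X = rng.random((l, r)) @ rng.random((r, d))

# Sparse noise S: impulse corruption on ~5% of the entries
S = np.zeros((l, d))
mask = rng.random((l, d)) < 0.05
S[mask] = rng.choice([-1.0, 1.0], size=mask.sum())

# Additive Gaussian noise N
N = 0.05 * rng.standard_normal((l, d))

# Observed degraded HSI: Y = X + S + N
Y = X + S + N
print(np.linalg.matrix_rank(X))  # → 5: the clean part stays low-rank
```

Restoration then amounts to isolating S and N from Y to recover the low-rank X.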

2.2. Low-Rank Regularization

One of the most efficient choices for R₁(X) is low-rank regularization, owing to the high linear correlation of the spectral domain between diverse areas of HSI. The low-rank property of HSI can be effectively explored by enforcing the nuclear norm on the original image X. As for R₂(S), the ℓ₀-norm is an option that meets the constraint of sparsity; thereby, the restoration problem of HSI is reformulated as:
min_{X,S} ‖X‖_* + λ‖S‖₀   s.t.   ‖Y − X − S‖_F² ≤ ε,   rank(X) ≤ r.
In the process of solving, the ℓ₁-norm is employed as the convex relaxation of the ℓ₀-norm; thus, Equation (3) is transformed into a solvable problem.

2.3. Non-Local Regularization

Furthermore, the NSS is also a highly effective regularization constraint while exploring the low-rank property of images:
min_{X,S} ‖X‖_* + λ₁‖X‖_NL + λ₂‖S‖₁   s.t.   ‖Y − X − S‖_F² ≤ ε,   rank(X) ≤ r.
The non-local regularization ‖X‖_NL takes full advantage of the redundant information in the latent clean HSI and maintains the image details in the course of restoration. By setting a neighborhood window and a search window, the estimated weight of the target area is calculated from areas with a similar neighborhood structure.
However, even though these regularizations work favorably for HSI restoration, the following disadvantages still exist:
  • For the low-rank regularization, when stripes or deadlines emerge at the same location in HSI, they also appear to have a structured low-rank property; hence, it is difficult to isolate the sparse noise from the low-rank signal. Meanwhile, its denoising ability is highly degraded when encountering heavy Gaussian noise.
  • For the non-local regularization, the performance of restoration relies on the selection of the search window and the neighborhood window; better performance is often achieved at much higher computational complexity. In practical applications, the excessively high processing time is fatal.

3. Methodology Design

3.1. Subspace Low-Rank Representation

According to the basic principle of manifold learning, high-dimensional data lives on a low-dimensional manifold. Meanwhile, both the spatial and spectral domains of HSI are greatly correlated. Therefore, under the linear mixture model, the HSI matrix X can be decomposed into the product of an endmember matrix and an abundance matrix. In specific HSI scenes, each pixel is a mixture of a few pure endmembers, and the number of endmembers is far less than the dimension of the original HSI space. Projecting the original image signal onto a low-dimensional subspace, the clean HSI is represented as
X = DA ,
where D ∈ ℝ^{l×p} is the dictionary matrix that spans the subspace of the original space of X, and A ∈ ℝ^{p×d} is the representation coefficient matrix of X with respect to D. In this equation, p is the dimension of the subspace. Consequently, the subspace low-rank representation is denoted as
arg min_{D,A,S} ‖A‖_* + λ‖S‖₁   s.t.   ‖Y − DA − S‖_F² ≤ ε,   X = DA.
In this enhanced low-rank restoration model, the nuclear norm is employed to constrain the coefficient matrix A. It has been proven that, given a suitable dictionary D, the whole HSI can be recovered accurately by the SLR representation model [49]. This inspires the idea that employing the latent clean HSI as the dictionary might be a good choice. Therefore, the SLR representation model is converted to
arg min_{X,A,S} ‖A‖_* + λ‖S‖₁   s.t.   ‖Y − XA − S‖_F² ≤ ε,   ‖Y − X − S‖_F² ≤ ε.
An additional fidelity term is introduced into Equation (6) by supposing that the dictionary D equals the clean HSI X. Although certain effects are achieved, the following disadvantages of this strategy still exist.
  • The use of the clean HSI X ∈ ℝ^{l×d} as the dictionary D ∈ ℝ^{l×p} will inevitably cause the situation l = p. Hence, the dimension of the coefficient matrix A immediately increases to l × l. The speed of calculation and the rate of convergence decrease remarkably.
  • In the face of heavy Gaussian noise, this model cannot produce satisfactory restoration results. The main reason is that the plentiful sharp features of the spatial domain in the original HSI are directly ignored.
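The subspace representation X = DA of Equation (5) can be sketched numerically. The following is an illustrative sketch (not the authors' code), assuming the orthogonal dictionary is taken from the leading left singular vectors of a synthetic X whose spectra lie in a p-dimensional subspace:

```python
import numpy as np

rng = np.random.default_rng(1)
l, d, p = 100, 500, 8        # bands, pixels, subspace dimension

# Synthetic clean HSI whose spectra live in a p-dimensional subspace
X = rng.random((l, p)) @ rng.random((p, d))

# Orthogonal dictionary D (l x p): leading left singular vectors of X
U, s, Vt = np.linalg.svd(X, full_matrices=False)
D = U[:, :p]                 # satisfies D^T D = I
A = D.T @ X                  # representation coefficients, p x d

# The low-dimensional pair (D, A) reconstructs X almost exactly
err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
print(err < 1e-8)            # → True
```

The pair (D, A) occupies l×p + p×d entries instead of l×d, which is the compression argument made above.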

3.2. Proposed SLRL4D Model

To address these challenges, we propose the SLRL4D method as follows. The flowchart of the proposed SLRL4D method is depicted in Figure 2. In our model, an orthogonal dictionary D ∈ ℝ^{l×p} with p ≪ l is learned during the process of HSI restoration. In this manner, the computational cost is much decreased due to the reduction of subspace dimensions. Meanwhile, aiming to achieve the finest preservation of spatial details, the well-known non-local 4-D transform filtering method BM4D is introduced into the subspace low-rank learning model. By superimposing cubes of voxels with similar features in A, the composited hyperrectangle accomplishes successful exploration of the NSS. Under the above regularization, the proposed restoration method SLRL4D is designed as
arg min_{D,A,S} φ_NL(A) + γ‖A‖_* + λ‖S‖₁   s.t.   ‖Y − DA − S‖_F² ≤ ε,   DᵀD = I,
where φ_NL(A) is the non-local regularization term imposed on the representation coefficient matrix A. The corresponding convex optimization subproblem can be efficiently solved by BM4D filtering without any parameters. In our design, the following advantages emerge:
  • The low-rank property of the spectral dimension, which lives in the subspace, and the NSS of the spatial dimension are exploited synchronously in the process of restoration. Better restoration results are achieved consequently.
  • The dimension of the subspace p is far smaller than the dimension of the original space l, which brings far less resource consumption and computational time in practical experimentation. Higher restoration efficiency is achieved consequently.
  • Each term of this model is convex; thereby, it is solvable with the distributed ADMM algorithm. Easier solution and optimization of the restoration are achieved consequently.

3.3. Model Optimization

Obviously, attempting to solve Equation (8) directly is extremely difficult. By introducing two auxiliary variables L₁ = L₂ = A, the proposed SLRL4D model is translated into the following equivalent form
𝓛(L₁, L₂, S, D, A, Γ₁, Γ₂, Γ₃) = φ_NL(L₁) + γ‖L₂‖_* + λ‖S‖₁ + ζ_I(DᵀD) + (1/2μ)‖Y − DA − S − Γ₁‖_F² + (1/2μ)‖A − L₁ − Γ₂‖_F² + (1/2μ)‖A − L₂ − Γ₃‖_F²,
where Γ₁, Γ₂ and Γ₃ are three augmented Lagrange multipliers, μ is the penalty parameter, and ζ_I(DᵀD) is the indicator function transformed from the orthogonal constraint on the dictionary D. In this way, ADMM readily solves our proposed model by minimizing the above Lagrangian function [54]. The original difficult problem is solved by alternately optimizing the five easily solved subproblems into which it divides.
The main steps of solving Equation (9) are summarized in Algorithm 1. The mixed noises and the latent clean HSI are gradually isolated while alternately updating the following five variables:
  • Update S (Line 4): The subproblem of updating S is given by:
    S^{k+1} = arg min_S λ‖S‖₁ + (1/2μ)‖Y − DᵏAᵏ − S − Γ₁ᵏ‖_F².
    The soft-thresholding algorithm is employed to efficiently solve this subproblem:
    S^{k+1} = softTH_{λμ}(Y − DᵏAᵏ − Γ₁ᵏ),
    where softTH_τ(M) = sign(M) ⊙ max(0, |M| − τ) is the soft-thresholding operator with threshold value τ, and |M| is the element-wise absolute value of the matrix M.
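The soft-thresholding operator admits a direct element-wise implementation. A minimal sketch (the function name soft_threshold is illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(M, tau):
    """Element-wise soft-thresholding: sign(M) * max(0, |M| - tau)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

M = np.array([[ 3.0, -0.5],
              [-2.0,  0.2]])
# Entries with |value| <= tau are zeroed; the rest shrink toward zero
print(soft_threshold(M, 1.0))
```

Applied to Y − DᵏAᵏ − Γ₁ᵏ with threshold λμ, this yields the sparse-noise estimate S^{k+1} in closed form.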
  • Update D (Line 6):
    The subproblem of updating D is given by:
    D^{k+1} = arg min_D ζ_I(DᵀD) + (1/2μ)‖Y − DAᵏ − S^{k+1} − Γ₁ᵏ‖_F².
    According to the reduced-rank Procrustes rotation algorithm [49], the solution of this subproblem is formulated as
    D^{k+1} = ÛV̂ᵀ,
    where Σ̂ = (Y − S^{k+1} − Γ₁ᵏ)(Aᵏ)ᵀ, and Û and V̂ are the left and right singular vector matrices of Σ̂. In this way, the solution to D is given in the D = UVᵀ form.
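The Procrustes-rotation update for D can be sketched as follows; procrustes_dictionary is a hypothetical helper name, and the random matrices merely stand in for the algorithm's iterates:

```python
import numpy as np

def procrustes_dictionary(Y, S, Gamma1, A):
    """Reduced-rank Procrustes rotation: the orthogonal D minimizing
    ||Y - S - Gamma1 - D A||_F subject to D^T D = I is D = U V^T,
    where U, V come from the SVD of (Y - S - Gamma1) A^T."""
    Sigma_hat = (Y - S - Gamma1) @ A.T       # l x p cross-matrix
    U, _, Vt = np.linalg.svd(Sigma_hat, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(2)
l, d, p = 30, 200, 4
Y = rng.random((l, d))
A = rng.random((p, d))
D = procrustes_dictionary(Y, np.zeros((l, d)), np.zeros((l, d)), A)
print(np.allclose(D.T @ D, np.eye(p)))       # → True: D is orthogonal
```

Discarding the singular values of Σ̂ and keeping only UVᵀ is exactly what enforces the orthogonality constraint DᵀD = I.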
    Algorithm 1 ADMM-based algorithm for solving Equation (9).
    Require: The contaminated HSI Y, dimension of subspace r, regularization parameters γ > 0 and λ > 0, stop criterion ε, maximum iteration k_max.
    Ensure: The latent denoised HSI X.
     1: Initialization: estimate D with the HySime algorithm [55]; set A = S = L₁ = L₂ = 0; set Γᵢ = 0, i = 1, 2, 3; set ε = 10⁻⁸.
     2: while not converged do
     3:   Update S^{k+1} with L₁^{k}, L₂^{k}, D^{k}, A^{k} fixed
     4:     S^{k+1} ← arg min_S 𝓛(L₁^{k}, …, S, …, Γᵢ^{k})
     5:   Update D^{k+1} with L₁^{k}, L₂^{k}, S^{k+1}, A^{k} fixed
     6:     D^{k+1} ← arg min_D 𝓛(…, D, …, Γᵢ^{k})
     7:   Update L₁^{k+1} with L₂^{k}, S^{k+1}, D^{k+1}, A^{k} fixed
     8:     L₁^{k+1} ← arg min_{L₁} 𝓛(L₁, …, Γᵢ^{k})
     9:   Update L₂^{k+1} with L₁^{k+1}, S^{k+1}, D^{k+1}, A^{k} fixed
    10:     L₂^{k+1} ← arg min_{L₂} 𝓛(L₁^{k+1}, L₂, …, Γᵢ^{k})
    11:   Update A^{k+1} with L₁^{k+1}, L₂^{k+1}, S^{k+1}, D^{k+1} fixed
    12:     A^{k+1} ← arg min_A 𝓛(…, A, Γᵢ^{k})
    13:   Update Lagrange multipliers Γᵢ^{k+1}, i = 1, 2, 3
    14:     Γ₁^{k+1} ← Γ₁^{k} + (Y − D^{k+1}A^{k+1} − S^{k+1})
    15:     Γ₂^{k+1} ← Γ₂^{k} + (A^{k+1} − L₁^{k+1})
    16:     Γ₃^{k+1} ← Γ₃^{k} + (A^{k+1} − L₂^{k+1})
    17:   Update iteration
    18:     k ← k + 1
    19: end while
    20: return X = D^{k+1}A^{k+1}.
  • Update L 1 (Line 8):
    The subproblem of updating L 1 is given by:
    L₁^{k+1} = arg min_{L₁} φ_NL(L₁) + (1/2μ)‖Aᵏ − L₁ − Γ₂ᵏ‖_F².
    Under our design, BM4D filtering efficiently solves this non-local regularized subproblem:
    L₁^{k+1} = BM4D(Aᵏ − Γ₂ᵏ).
  • Update L 2 (Line 10):
    The subproblem of updating L 2 is given by:
    L₂^{k+1} = arg min_{L₂} γ‖L₂‖_* + (1/2μ)‖Aᵏ − L₂ − Γ₃ᵏ‖_F².
    Applying the well-known singular value shrinkage method [56] to this nuclear norm subproblem, the solution is formulated as:
    L₂^{k+1} = 𝒟_{μγ}(Aᵏ − Γ₃ᵏ, r),
    where 𝒟_τ(·) is the singular value thresholding operator, which is defined as:
    𝒟_τ(M) = U D_τ(Σ_r) Vᵀ = U diag(max(0, δᵢ − τ))_{1≤i≤r} Vᵀ,
    and M = UΣVᵀ is the singular value decomposition of M, with δᵢ (1 ≤ i ≤ r) the first r singular values of M.
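The singular value thresholding operator 𝒟_τ(·) can be sketched directly from its definition. A minimal illustration (svt is a hypothetical helper, not the authors' code):

```python
import numpy as np

def svt(M, tau, r):
    """Singular value thresholding: keep at most the first r singular
    values, shrink each by tau, and rebuild U diag(.) V^T."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s[:r] - tau, 0.0)
    return U[:, :r] @ np.diag(s_shrunk) @ Vt[:r, :]

rng = np.random.default_rng(3)
M = rng.random((6, 5)) @ rng.random((5, 40))  # rank-5 matrix, 6 x 40
L = svt(M, tau=1.0, r=5)
print(np.linalg.matrix_rank(L) <= 5)          # → True
```

Shrinking the spectrum this way is the proximal operator of the nuclear norm, which is why it solves the L₂ subproblem in closed form.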
  • Update A (Line 12):
    The subproblem of updating A is given by:
    A^{k+1} = arg min_A (1/2μ)‖A − L₁^{k+1} − Γ₂ᵏ‖_F² + (1/2μ)‖A − L₂^{k+1} − Γ₃ᵏ‖_F² + (1/2μ)‖Y − D^{k+1}A − S^{k+1} − Γ₁ᵏ‖_F².
    By calculating the partial derivative, the solution of this convex subproblem is equivalent to the linear equation below:
    A^{k+1} = (D^{(k+1)T} D^{k+1} + 2I)⁻¹ (D^{(k+1)T} Ω^{k+1} + Λ₁^{k+1} + Λ₂^{k+1}),
    where Ω^{k+1} = Y − S^{k+1} − Γ₁ᵏ, Λ₁^{k+1} = L₁^{k+1} + Γ₂ᵏ, Λ₂^{k+1} = L₂^{k+1} + Γ₃ᵏ, I is the identity matrix, and D^{(k+1)T} is the transpose of the matrix D^{k+1}.
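This closed-form A-update is a single linear solve. A minimal sketch under illustrative sizes (update_A is a hypothetical helper); note that with an orthogonal dictionary the system matrix reduces to 3I:

```python
import numpy as np

def update_A(D, Y, S, G1, L1, G2, L2, G3):
    """Closed-form A-update: solve (D^T D + 2I) A =
    D^T (Y - S - G1) + (L1 + G2) + (L2 + G3)."""
    p = D.shape[1]
    Omega = Y - S - G1
    rhs = D.T @ Omega + (L1 + G2) + (L2 + G3)
    return np.linalg.solve(D.T @ D + 2.0 * np.eye(p), rhs)

rng = np.random.default_rng(4)
l, d, p = 20, 100, 3
D, _ = np.linalg.qr(rng.random((l, p)))       # orthogonal dictionary
Y = rng.random((l, d))
Z = np.zeros((p, d))
A = update_A(D, Y, np.zeros((l, d)), np.zeros((l, d)), Z, Z, Z, Z)
# With D^T D = I the system matrix is 3I, so A = rhs / 3
print(np.allclose(A, (D.T @ Y) / 3.0))        # → True
```

Since the system matrix is only p × p, this step stays cheap even for large images, which is one source of the efficiency claimed for SLRL4D.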

4. Experimental Configurations

Aiming to verify the effectiveness and efficiency of our proposed SLRL4D restoration method, extensive experiments were conducted on multiple simulated and real HSI datasets. The main configuration of the experiment environment is an Intel Core i7-9700K CPU @ 4.90 GHz, an RTX 2060 Super GPU with 8 GB graphics memory, and 16 GB dual-channel RAM. Before the experiments, the gray values of each band were first normalized into the interval [0, 1]. The source code of all the comparison methods was downloaded from the authors' homepages or obtained directly from the authors. All our experiments were run on MATLAB 2018b and Windows 10. The source code of the proposed SLRL4D, the corresponding datasets, and the parameter settings are available at https://github.com/cxunhey/SLRL4D.

4.1. Datasets

Four different datasets were employed in our experiments, including two simulated datasets and two real datasets. The details of these datasets are listed as follows.
  • HYDICE Washington DC Mall (WDC): This dataset was acquired over the Washington DC Mall by the HYDICE sensor, and contains intricate ground substances, for example, rooftops, rubble roads, streets, lawns, vegetation, water areas, and shadows. The corrected data contain 191 high-quality clean bands ranging from 0.4–2.4 μm and 1208 × 307 pixels at 2.0 m/pixel. A sub-cube of size 256 × 256 × 191 is cropped for our simulated experiment.
  • ROSIS Pavia University (PU): This dataset was acquired over Pavia University by the ROSIS sensor, and contains intricate ground substances, for example, asphalt roads, meadows, buildings, bricks, vegetation and shadows. The corrected data contain 103 high-quality clean bands ranging from 0.43–0.86 μm and 610 × 340 pixels at 1.3 m/pixel. A sub-cube of size 256 × 256 × 103 is cropped for our simulated experiment.
  • HYDICE Urban: This dataset was acquired in the Copperas Cove by the HYDICE sensor, which also contains intricate ground substances. The spectral and spatial resolutions of this dataset are 210 bands and 307 × 307 pixels. Multiple bands are severely corrupted by the heavy mixed noises and watervapour absorption hence this data is employed to validate the performance of the proposed method in the real application.
  • EO-1 Australia: This dataset was acquired in Australia by the EO-1 HYPERION sensor, the spectral and spatial resolutions of this dataset are 150 bands and 200 × 200 pixels. Multiple bands are severely corrupted by a series of deadlines, stripes and Gaussian noise, hence this data is also employed to validate the performance of SLRL4D in the real application.

4.2. Comparison Methods

Nine different methods were employed in our experiments, and all parameters were manually optimized according to the instructions in the corresponding references. The details of these methods are listed as follows.
  • NLR-CPTD [41]: One of the latest HSI restoration methods, which combines low-rank CP tensor decomposition with non-local patch grouping. It achieves state-of-the-art performance in removing mixed Gaussian–stripe noises.
  • LRTDGS [57]: One of the latest HSI restoration methods, which combines low-rank Tucker tensor decomposition with group sparsity regularization. It achieves state-of-the-art performance in isolating structured sparse noise.
  • LRTDTV [34]: A method combining Tucker tensor decomposition and anisotropic TV; both the piecewise-smooth property of the spatial–spectral domain and the global correlation of all bands are successfully exploited.
  • LLRSSTV [30]: Local overlapping patches of the HSI are first cropped, and SSTV is then embedded in the LRMR framework; in this way, the separation of the clean HSI patches from the mixed noises is achieved.
  • FSSLRL [49]: Superpixel segmentation is creatively introduced into the subspace low-rank learning architecture, and both spatial and spectral correlations are exploited simultaneously.
  • TDL [58]: A dictionary learning denoising method based on the well-known Tucker tensor decomposition; outstanding Gaussian noise removal is achieved with a fast rate of convergence.
  • PCABM4D [47]: An enhanced principal component analysis (PCA) method based on BM4D; BM4D filtering is employed to eliminate the remaining low-energy noisy components after PCA is applied to the HSI.
  • LRMR [21]: A patch-based restoration method within the low-rank matrix recovery framework; the low-rank property of HSI is exploited by dividing the original HSI into several local patches.
  • BM4D [46]: An extension of the well-known BM3D denoising algorithm; the NSS of HSI is efficiently exploited on voxels instead of pixels.

4.3. Simulation Configurations

To simulate extreme noise contamination and validate the restoration performance of the proposed method, six different noise cases were added to the WDC and PU datasets.
  • Case 1: Zero-mean Gaussian noise with the same intensity was added to each band, and the variance was set to 0.25.
  • Case 2: Zero-mean Gaussian noise with different intensities was added to each band, and the variances were randomly selected from 0.15 to 0.25.
  • Case 3: The variances and affected bands of the Gaussian noise were selected in the same manner as in Case 2. Meanwhile, impulse noise was added to 20 continuous bands with a percentage of 10%.
  • Case 4: Deadlines were added to 20 continuous bands, with widths and numbers randomly selected from [1, 3] and [3, 10], respectively. The Gaussian noise was still added in the same manner as in Case 2.
  • Case 5: Stripes were added to 20 continuous bands, with their numbers randomly selected from [3, 10]. The Gaussian noise was again added in the same manner as in Case 2.
  • Case 6: Mixed noises of multiple types were added to different bands; that is, the Gaussian noise, impulse noise, deadlines, and stripes of Cases 2–5 were added simultaneously.
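The degradations above can be sketched as follows; this is a minimal Python illustration combining the Case 2–5 corruptions as in Case 6, with hypothetical function and parameter names (the experiments themselves were run in MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_mixed_noise(cube, g_var=(0.15, 0.25), impulse_ratio=0.1,
                    n_corrupt=20):
    """Illustrative Case 6-style degradation of a normalized HSI cube
    (rows x cols x bands). Names and details are hypothetical,
    following the settings described in the text."""
    rows, cols, bands = cube.shape
    noisy = cube.copy()
    # Case 2: zero-mean Gaussian noise with band-wise random variance.
    for b in range(bands):
        var = rng.uniform(*g_var)
        noisy[:, :, b] += rng.normal(0.0, np.sqrt(var), (rows, cols))
    start = rng.integers(0, bands - n_corrupt + 1)
    for b in range(start, start + n_corrupt):
        # Case 3: impulse (salt-and-pepper) noise at a 10% percentage.
        mask = rng.random((rows, cols)) < impulse_ratio
        noisy[:, :, b][mask] = rng.integers(0, 2, mask.sum())
        # Case 4: deadlines with random width in [1, 3], number in [3, 10].
        for _ in range(rng.integers(3, 11)):
            c = rng.integers(0, cols - 3)
            noisy[:, c:c + rng.integers(1, 4), b] = 0.0
        # Case 5: additive stripes on randomly chosen columns.
        for _ in range(rng.integers(3, 11)):
            noisy[:, rng.integers(0, cols), b] += 0.5
    return noisy
```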

5. Results

5.1. Experimental Results on Simulated Datasets

5.1.1. Visual Evaluation Results

For the visual comparison of the different methods, the representative Case 1 and Case 6 are presented. Figure 3 and Figure 4 illustrate the restoration results on the WDC dataset. To facilitate the evaluation, zoomed-in views of a specific area are enlarged in the bottom-right corner.
As shown in Figure 3, the original band 1 of WDC is highly degraded by heavy Gaussian noise, and the ground information is completely unobservable. BM4D fails to remove the noise and distorts the spatial domain of the image. LRMR, PCABM4D, and TDL partially remove the noise, but their results are still unacceptable. FSSLRL, LLRSSTV, LRTDTV, and LRTDGS are powerless in the face of heavy Gaussian noise. The non-local regularized method NLR-CPTD achieves fine removal of the Gaussian noise, but a gap remains compared with SLRL4D. As the enlarged box shows, SLRL4D performs best in preserving details while accurate restoration is achieved.
Similar results are shown in Figure 4. Figure 4b indicates that band 110 of WDC in Case 6 is severely contaminated by mixed noises. BM4D, LRMR, and PCABM4D all fail, and TDL fails to completely remove the stripes. The four low-rank based methods, FSSLRL, LLRSSTV, LRTDTV, and LRTDGS, achieve fine removal of the mixed noises; however, the proposed SLRL4D not only achieves equally outstanding mixed-noise removal but also the finest detail preservation, as shown in the enlarged box. In addition, Figure 4k shows that the ability of NLR-CPTD to remove stripes partly degrades when heavy Gaussian noise is present.
The restoration results of all methods on the PU dataset are shown in Figure 5 and Figure 6. Band 66 of the PU dataset is also degraded by heavy Gaussian noise. Among all methods, SLRL4D undoubtedly accomplishes the best spatial information restoration, as shown in the enlarged boxes of Figure 5. Severe mixed noises corrupt band 82 of PU in Case 6, as shown in Figure 6b, and no progress is made by BM4D, LRMR, PCABM4D, or TDL. Only FSSLRL, LLRSSTV, and LRTDGS achieve removal of the mixed noises, but none of them surpasses SLRL4D in spatial detail preservation.
In summary, whether under Gaussian noise or mixed noises, SLRL4D achieves the best visual performance among all nine competitors, revealing the superiority of low-rank learning and non-local filtering in the subspace of HSI.

5.1.2. Quantitative Evaluation Results

Five evaluation metrics were employed for quantitative evaluation: three positive metrics, namely, the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM), and two negative metrics, namely, the erreur relative globale adimensionnelle de synthèse (ERGAS) and the mean spectral angle (MSA). Higher values of the positive metrics indicate better restoration performance, as do lower values of the negative metrics. Figure 7a–l shows the curves of the three positive metrics for all restoration methods on the simulated WDC and PU datasets in Case 1 and Case 6. For the Gaussian-degraded Case 1 on WDC, SLRL4D achieves the highest PSNR, SSIM, and FSIM in most bands, with a clear lead in more than half of them. Similar results occur in Case 6 on WDC, which is heavily contaminated by mixed noises: SLRL4D still achieves the best results in most bands, and although a few bands are not optimal, the band-averaged results still exceed those of all competing methods. In Case 1 on the PU dataset, SLRL4D attains the best PSNR and SSIM values effortlessly, and the same first-rank SSIM results hold in Case 6. For the remaining metrics on the PU dataset, top-ranking results are also achieved in all bands, and the band-averaged optimum is likewise attained by SLRL4D.
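Two of the metrics used above can be sketched as follows; this is a minimal Python illustration with hypothetical helper names (the paper's evaluation used MATLAB implementations):

```python
import numpy as np

def band_psnr(ref, est):
    """Per-band PSNR (dB) for [0, 1]-normalized cubes
    (rows x cols x bands); higher is better."""
    mse = ((ref - est) ** 2).mean(axis=(0, 1))
    return 10.0 * np.log10(1.0 / np.maximum(mse, 1e-12))

def mean_spectral_angle(ref, est):
    """MSA (degrees): average angle between the reference and restored
    spectra over all pixels; lower is better."""
    r = ref.reshape(-1, ref.shape[2])
    e = est.reshape(-1, est.shape[2])
    cos = (r * e).sum(1) / (np.linalg.norm(r, axis=1) *
                            np.linalg.norm(e, axis=1) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()
```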
Table 1 reports all five quantitative evaluation metrics of the seven competitive methods on the WDC and PU datasets under all simulated cases. Owing to the poor results of BM4D and LRMR, they are omitted from this table. The optimal value of each metric among all methods is highlighted in bold. As shown for Case 1 and Case 2, NLR-CPTD clearly achieves the second-best PSNR, SSIM, and FSIM values under heavy Gaussian noise.
The reason is that the self-similarity of HSI is sufficiently exploited by the non-local regularization. However, SLRL4D still performs better even when NLR-CPTD reaches competitive results, revealing the superiority of non-local BM4D filtering over the patch matching of NLR-CPTD. Meanwhile, as a BM4D-filtering-based method, PCABM4D achieves only average results. Both phenomena indicate that the combination of subspace low-rank learning and non-local BM4D filtering is genuinely powerful for removing heavy Gaussian noise. Cases 3–6 show that the proposed method achieves nearly globally optimal quantitative results in all cases against every competitor; thus, the ability of SLRL4D to remove mixed noises is convincingly verified. NLR-CPTD performs well under heavy Gaussian noise, but its restoration results in the mixed-noise cases are relatively inferior. LRTDTV and LRTDGS achieve pleasing results in the mixed-noise cases, but a conspicuous gap remains compared with the subspace-based SLRL4D.

5.1.3. Qualitative Evaluation Results

To further illustrate the ability of SLRL4D to preserve spectral signatures while restoring a clean HSI, the spectral difference curves between the original and restored HSI are presented in Figure 8, Figure 9, Figure 10 and Figure 11. As shown in Figure 8b, extensive fluctuations occur in the spectrum of pixel (92, 218) of WDC under heavy Gaussian noise contamination. It is clear that SLRL4D yields the flattest spectral difference curve after restoration. Figure 9 shows that, in the face of intricate mixed noises, BM4D, PCABM4D, TDL, and NLR-CPTD fail to recover the spectral signatures.
This is unsurprising, given their poor mixed-denoising performance. Methods combining low-rank and other regularization terms, that is, FSSLRL, LLRSSTV, and LRTDTV, achieve suboptimal spectral signature preservation, with some medium-scale fluctuations remaining. Both LRTDGS and SLRL4D preserve the spectral signatures well, but whether in the visual comparison or the quantitative analysis, far better results are always accomplished by SLRL4D.
Regarding the PU dataset, similar phenomena occur, as illustrated in Figure 10 and Figure 11. In both Case 1 and Case 6, the proposed method exhibits far better stability in the spectral difference curves than any competitive method. Figure 10k,l indicates that although NLR-CPTD shows suboptimal ability in removing heavy Gaussian noise with the help of non-local regularization, its ability to maintain spectral correlation is far inferior to that of SLRL4D. For the mixed-noise-degraded Case 6, Figure 11 and Table 1 collectively indicate that SLRL4D not only achieves better quantitative results than both LRTDTV and LRTDGS but also exceeds them in restoring spectral signatures on the PU dataset. In conclusion, the proposed SLRL4D achieves not only the best visual and quantitative performance but also outstanding preservation of spectral signatures.

5.1.4. Classification Evaluation Results

In this evaluation, we conducted classification experiments on the restored Pavia University dataset to further verify the superiority of the proposed method. The representative Cases 1 and 6 are employed. Table 2 shows the classification results of the HSI restored by each competitor and by the proposed SLRL4D. The well-known support vector machine (SVM) [59,60,61] classifier with an RBF kernel is used for classification. Three indexes, that is, overall accuracy (OA), average accuracy (AA), and the kappa statistic (Kappa), are employed for quantitative evaluation. Before classification, about 10% of the labeled samples are randomly selected for training, and the remaining samples are used for testing. Ten Monte Carlo runs are conducted, and the averaged results are listed in Table 2. We conclude that restoration does improve the classification accuracy, and better restoration delivers higher accuracy. The classification results once again verify that the proposed SLRL4D can accurately and effectively restore the HSI from mixed noise.
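The classification protocol above can be sketched as follows, assuming scikit-learn is available; the function name is hypothetical and this is not the authors' evaluation code:

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def classify_restored(pixels, labels, train_ratio=0.1, seed=0):
    """Sketch of the classification check: an RBF-kernel SVM trained
    on ~10% of the labeled pixels, scored by OA and Kappa. `pixels`
    is (n_samples x n_bands)."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        pixels, labels, train_size=train_ratio,
        stratify=labels, random_state=seed)
    pred = SVC(kernel="rbf", gamma="scale").fit(x_tr, y_tr).predict(x_te)
    return accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred)
```

In the paper, such a run is repeated ten times (Monte Carlo) and the OA, AA, and Kappa values are averaged.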

5.1.5. Comparison with Deep Learning Methods

In recent years, deep learning methods have earned a position that cannot be ignored in the field of computer vision [62,63,64,65,66]. To further verify the capacity of the proposed SLRL4D to remove heavy Gaussian noise, two state-of-the-art deep learning HSI denoising methods, HSID-CNN [15] and HSI-SDeCNN [16], are employed for comparison. Like most deep learning methods, HSID-CNN and HSI-SDeCNN are mainly designed to remove Gaussian noise. Therefore, for a fair comparison, the following experiments use exactly the same WDC dataset and heavy-Gaussian-noise simulation as the article corresponding to HSI-SDeCNN [16], that is, the noise level σn was set to [50, 75, 100]. The detailed quantitative results are shown in Table 3, with the optimal results again in bold. SLRL4D achieves better restoration than both of these latest deep learning methods in the quantitative assessment.

5.2. Experimental Results on Real Datasets

5.2.1. Visual Evaluation Results

The aforementioned discussion demonstrated the outstanding mixed-denoising ability of SLRL4D on the simulated datasets. However, noise degradation in real scenes is far more intricate than in simulation.
Figure 12a shows the representative bands 103, 105, 144, 148, and 210 of the well-known real Urban dataset, in which the intensity of the mixed noises increases gradually. Except for band 103, which is only slightly contaminated, the other four bands are completely contaminated by a variety of heavy mixed noises, making it impossible to acquire any valuable information. For the slightly contaminated band 103, PCABM4D, TDL, FSSLRL, and LLRSSTV fail to remove the cross stripes; the remaining three comparison methods remove the stripes, but the spatial contrast of some areas is slightly over-enhanced, as observed in the enlarged boxes. Both denoising and detail preservation are achieved by the proposed SLRL4D. For band 105, as the density of the mixed noises increases, PCABM4D, TDL, and NLR-CPTD completely lose their power, and the low-rank based methods FSSLRL and LRTDTV cannot deal with the heavy stripes. LLRSSTV and LRTDGS achieve fine restoration results, but as shown in the enlarged boxes, their results are distorted and some areas become too bright compared with SLRL4D. For band 144, even LLRSSTV fails to remove the cross stripes, and, similar to band 105, LRTDGS causes distortions after restoration, as shown in the enlarged box; the proposed SLRL4D successfully preserves the image details. Similar results re-emerge on band 148: the difficulty of restoration is further increased, but SLRL4D still accomplishes its mission without causing any abnormal brightness. For band 210, SLRL4D also achieves the best restoration results; the combination of subspace low-rank learning and subspace non-local filtering proves itself again.

5.2.2. Quantitative Evaluation Results

The horizontal mean profiles of bands 103, 105, 144, 148, and 210 of the HYDICE Urban dataset are shown in Figure 13. In each subfigure, the horizontal axis represents the row number, and the vertical axis represents the mean digital number of each row. As shown in Figure 13a, rapid fluctuations exist in the curves of the original bands owing to the heavy mixed-noise contamination of all five bands; particularly in the last three bands, the severe mixed noises cause abnormal jitter. As shown in Figure 13b–e,h, PCABM4D, TDL, FSSLRL, and LLRSSTV achieve inferior restoration performance. For the non-local method NLR-CPTD, fine results are achieved only when the noise level is relatively mild. Figure 13f,g indicates that both LRTDTV and LRTDGS produce relatively flat horizontal mean profile curves. However, the blue box shows that, compared with these two methods, the proposed SLRL4D achieves flatter curves in all five bands, indicating that the cross stripes and other diverse noises of the Urban dataset are eliminated more efficiently. In sum, the competitive restoration performance of the proposed SLRL4D is verified.
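The profile itself is simple to compute; a minimal sketch with a hypothetical function name:

```python
import numpy as np

def horizontal_mean_profile(band):
    """Mean digital number of each row of a single band image
    (2-D array); a flatter curve after restoration indicates that
    stripes and other noises were suppressed."""
    return band.mean(axis=1)
```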
To further illustrate the efficiency of the proposed SLRL4D in real scenarios, a blind no-reference assessment index named the Q-metric was employed to evaluate the restoration of spatial information. Table 4 lists the average Q values over all bands of both the HYDICE Urban and EO-1 Hyperion datasets. A higher Q-metric value indicates a better restoration result; the optimal values are shown in bold. The proposed SLRL4D acquires the best restoration indexes among all seven competitive methods.
The running time of an algorithm effectively measures its efficiency. Table 5 lists the running times of all methods on the two real datasets; each method was run 10 times and the results averaged. To ensure a fair comparison, all methods were run in the same experimental environment, and the number of iterations for the methods requiring an iterative solution was set to 50. As shown by the bold values, TDL achieves the fastest running speed of all methods, but its restoration results are also worse than those of most methods. The proposed SLRL4D achieves a competitive running time while ensuring the denoising performance.

6. Discussion

Several parameters of the SLRL4D model need to be discussed. With the help of BM4D filtering, no parameter of the non-local term requires arduous tuning, which makes our algorithm far more efficient and stable than existing non-local methods. Only the subspace dimension p, the penalty parameter μ, and the regularization parameters γ and λ, which weight the nuclear norm and the sparse norm, respectively, remain to be discussed. Obviously, our model depends on fewer parameters than congeneric methods.

6.1. The Impact of Parameters μ , γ and λ

Remarkable impacts are caused by the penalty parameter μ, as well as by the soft-thresholding parameter λ and the singular value thresholding parameter γ. Since more parameters bring more complex tuning, to make the method easier to use, we first empirically set μ = 10 max(d, l)/(d + l) [49]. For the regularization parameters γ and λ, extensive experiments were conducted over the intervals [0.1, 2] and [0.01, 0.1] to evaluate their sensitivity. As shown in Figure 14, the mean PSNR, mean SSIM, and ERGAS are employed to demonstrate the restoration performance of different parameter combinations on the mixed-noise-degraded Case 6 of the WDC dataset. It is easy to see that the proposed SLRL4D achieves good restoration results over a large range of parameters, which indicates that our method does not require extremely specific parameter settings. Therefore, we recommend choosing γ and λ within the ranges [0.1, 1] and [0.01, 0.05], respectively.
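Such a sensitivity sweep can be sketched as a grid search; names here are hypothetical, and `restore` merely stands in for the SLRL4D solver, which is not reproduced:

```python
import itertools
import numpy as np

def tune_gamma_lambda(noisy, clean, restore, gammas, lambdas):
    """Hypothetical sweep over the recommended ranges, e.g.
    gamma in [0.1, 1] and lambda in [0.01, 0.05], keeping the
    combination with the best mean PSNR on a [0, 1] cube."""
    best, best_psnr = None, -np.inf
    for g, lam in itertools.product(gammas, lambdas):
        est = restore(noisy, g, lam)
        mse = float(((clean - est) ** 2).mean())
        psnr = 10.0 * np.log10(1.0 / max(mse, 1e-12))
        if psnr > best_psnr:
            best, best_psnr = (g, lam), psnr
    return best, best_psnr
```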

6.2. The Dimension of Subspace p

Theoretically, the number of endmembers in an HSI is the ideal choice for the subspace dimension p, and the HySime signal subspace identification algorithm is a highly effective tool for estimating it. For the simulated Case 6 on the WDC dataset, HySime gives p = 3 as the estimated dimension. However, as illustrated in Figure 15, both the mean PSNR and the MSA achieve their best results at p = 5 instead of p = 3. The primary cause of this phenomenon is that HySime only delivers a precise estimate for Gaussian-degraded signals. Therefore, for the mixed-noise cases, we recommend choosing the subspace dimension p within the range [3, 8] according to the specific degradation status.
All our experiments select parameters within the above ranges; for instance, the parameters used for the simulated Cases 1–6 of the WDC dataset are listed in Table 6.
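As a rough illustration of subspace-dimension selection, the following is a simple SVD-energy heuristic, not the HySime algorithm itself; the function name and the energy threshold are hypothetical:

```python
import numpy as np

def estimate_subspace_dim(hsi, energy=0.999):
    """Choose the smallest p whose leading singular values of the
    band-unfolded matrix (pixels x bands) capture the given fraction
    of signal energy. Illustrative stand-in, NOT HySime."""
    rows, cols, bands = hsi.shape
    s = np.linalg.svd(hsi.reshape(rows * cols, bands),
                      compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)
```

As noted above, for mixed noise the paper recommends searching p in [3, 8] rather than trusting such an estimate.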

6.3. Convergence Analysis

Figure 16 shows the rising tendency of the PSNR and SSIM under the simulated Case 6 of the WDC dataset as the number of iterations increases. Both the PSNR and SSIM converge after rapid growth in the first few iterations; hence, the rapid convergence of the proposed SLRL4D is verified.

7. Conclusions and Future Work

Focusing on the mixed-noise degradation problem of HSI, this paper presents a joint method of subspace low-rank learning and non-local 4-D transform filtering for HSI restoration. In summary, the spatial-spectral correlations of HSI are explored comprehensively through joint low-rank regularization and non-local 4-D filtering within a subspace decomposition framework. In this manner, the low-rank property of the spectral domain and the non-local self-similarity of the spatial domain are exploited simultaneously. In contrast with methods that exploit priors in the original space, our subspace-based method not only obtains better restoration results but also consumes fewer computational resources. The carefully designed restoration model is efficiently solved by a convex optimization algorithm. Extensive experimental evaluations on multiple simulated and real datasets validate that the proposed HSI restoration method is both effective and efficient. In future work, tensor-representation-based subspace low-rank learning will be considered to further improve the performance, and more precise non-local self-similarity regularization of the spatial domain of HSI will be explored. Meanwhile, using low-rankness for image compression is also worth further study.

Author Contributions

Conceptualization, L.S.; Funding acquisition, L.S.; Investigation, C.H.; Methodology, L.S.; Software, C.H.; Supervision, Y.Z.; Validation, L.S. and S.T.; Visualization, C.H.; Writing—original draft, C.H.; Writing—review & editing, L.S., Y.Z. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under grants Nos. 61971233, 61672291, 61972206, 61672293 and 61702269, in part by the Natural Science Foundation of Jiangsu Province under grant No. BK20171074, in part by the Henan Key Laboratory of Food Safety Data Intelligence under grant No. KF2020ZD01, in part by the Engineering Research Center of Digital Forensics, Ministry of Education, in part by the Qing Lan Project of Higher Education of Jiangsu Province, and in part by the PAPD fund.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HSI   Hyperspectral image
LRMR   Low-rank matrix recovery
NSS   Non-local self-similarity
CP   CANDECOMP/PARAFAC
BM4D   Block-matching and 4D filtering
SLR   Subspace low-rank
ADMM   Alternating direction method of multipliers
PSNR   Peak signal-to-noise ratio
SSIM   Structural similarity
FSIM   Feature similarity
ERGAS   Erreur relative globale adimensionnelle de synthèse
MSA   Mean spectral angle

References

  1. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef] [Green Version]
  2. Imani, M.; Ghassemian, H. An overview on spectral and spatial information fusion for hyperspectral image classification: Current trends and challenges. Inf. Fusion 2020, 59, 59–83. [Google Scholar] [CrossRef]
  3. Lu, X.; Dong, L.; Yuan, Y. Subspace clustering constrained sparse NMF for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3007–3019. [Google Scholar] [CrossRef]
  4. Moeini Rad, A.; Abkar, A.A.; Mojaradi, B. Supervised distance-based feature selection for hyperspectral target detection. Remote Sens. 2019, 11, 2049. [Google Scholar] [CrossRef] [Green Version]
  5. Sun, L.; Ma, C.; Chen, Y.; Zheng, Y.; Shim, H.J.; Wu, Z.; Jeon, B. Low rank component Induced spatial-spectral kernel method for hyperspectral image classification. IEEE Trans. Circuits Syst. Video Technol. 2020, 1. [Google Scholar] [CrossRef]
  6. Chen, Y.; Li, J.; Zhou, Y. Hyperspectral image denoising by total variation-regularized bilinear factorization. Signal Process. 2020, 174, 107645. [Google Scholar] [CrossRef]
  7. Othman, H.; Qian, S.E. Noise reduction of hyperspectral imagery using hybrid spatial-spectral derivative-domain wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 2006, 44, 397–408. [Google Scholar] [CrossRef]
  8. Chen, G.; Qian, S.E. Denoising of hyperspectral imagery using principal component analysis and wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 2010, 49, 973–980. [Google Scholar] [CrossRef]
  9. Zhang, X.; Peng, F.; Long, M. Robust coverless image steganography based on DCT and LDA topic classification. IEEE Trans. Multimed. 2018, 20, 3223–3238. [Google Scholar] [CrossRef]
  10. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar]
  11. Qian, Y.; Ye, M. Hyperspectral imagery restoration using nonlocal spectral-spatial structured sparse representation with noise estimation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 6, 499–515. [Google Scholar] [CrossRef]
  12. Song, Y.; Cao, W.; Shen, Y.; Yang, G. Compressed sensing image reconstruction using intra prediction. Neurocomputing 2015, 151, 1171–1179. [Google Scholar] [CrossRef]
  13. Song, Y.; Yang, G.; Xie, H.; Zhang, D.; Sun, X. Residual domain dictionary learning for compressed sensing video recovery. Multimed. Tools Appl. 2017, 76, 10083–10096. [Google Scholar] [CrossRef]
  14. Shen, Y.; Li, J.; Zhu, Z.; Cao, W.; Song, Y. Image reconstruction algorithm from compressed sensing measurements by dictionary learning. Neurocomputing 2015, 151, 1153–1162. [Google Scholar] [CrossRef]
  15. Yuan, Q.; Zhang, Q.; Li, J.; Shen, H.; Zhang, L. Hyperspectral image denoising employing a spatial–spectral deep residual convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1205–1218. [Google Scholar] [CrossRef] [Green Version]
  16. Maffei, A.; Haut, J.M.; Paoletti, M.E.; Plaza, J.; Bruzzone, L.; Plaza, A. A single model CNN for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2516–2529. [Google Scholar] [CrossRef]
  17. Long, M.; Zeng, Y. Detecting iris liveness with batch normalized convolutional neural network. Comput. Mater. Contin. 2019, 58, 493–504. [Google Scholar] [CrossRef]
  18. Zhang, J.; Jin, X.; Sun, J.; Wang, J.; Sangaiah, A.K. Spatial and semantic convolutional features for robust visual object tracking. Multimed. Tools Appl. 2020, 79, 15095–15115. [Google Scholar] [CrossRef]
  19. Zeng, D.; Dai, Y.; Li, F.; Wang, J.; Sangaiah, A.K. Aspect based sentiment analysis by a linguistically regularized CNN with gated mechanism. J. Intell. Fuzzy Syst. 2019, 36, 3971–3980. [Google Scholar] [CrossRef]
  20. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM (JACM) 2011, 58, 1–37. [Google Scholar] [CrossRef]
  21. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 4729–4743. [Google Scholar] [CrossRef]
  22. Wang, Q.; He, X.; Li, X. Locality and structure regularized low rank representation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 911–923. [Google Scholar] [CrossRef] [Green Version]
  23. Zhao, Y.Q.; Yang, J. Hyperspectral image denoising via sparse representation and low-rank constraint. IEEE Trans. Geosci. Remote Sens. 2014, 53, 296–308. [Google Scholar] [CrossRef]
  24. Chen, Y.; Huang, T.Z.; Zhao, X.L.; Deng, L.J. Hyperspectral image restoration using framelet-regularized low-rank nonnegative matrix factorization. Appl. Math. Model. 2018, 63, 128–147. [Google Scholar] [CrossRef]
  25. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D: Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  26. Aggarwal, H.K.; Majumdar, A. Hyperspectral image denoising using spatio-spectral total variation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 442–446. [Google Scholar] [CrossRef]
  27. Wu, Z.; Wang, Q.; Wu, Z.; Shen, Y. Total variation-regularized weighted nuclear norm minimization for hyperspectral image mixed denoising. J. Electron. Imaging 2016, 25, 013037. [Google Scholar] [CrossRef]
  28. Wang, Q.; Wu, Z.; Jin, J.; Wang, T.; Shen, Y. Low rank constraint and spatial spectral total variation for hyperspectral image mixed denoising. Signal Process. 2018, 142, 11–26. [Google Scholar] [CrossRef]
  29. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2015, 54, 178–188. [Google Scholar] [CrossRef]
  30. He, W.; Zhang, H.; Shen, H.; Zhang, L. Hyperspectral image denoising using local low-rank matrix recovery and global spatial–spectral total variation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 713–729. [Google Scholar] [CrossRef]
  31. Fang, Y.; Li, H.; Ma, Y.; Liang, K.; Hu, Y.; Zhang, S.; Wang, H. Dimensionality reduction of hyperspectral images based on robust spatial information using locally linear embedding. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1712–1716. [Google Scholar] [CrossRef]
  32. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.W. Nonconvex tensor rank minimization and its applications to tensor recovery. Inf. Sci. 2019, 503, 109–128. [Google Scholar] [CrossRef]
  33. Fan, H.; Chen, Y.; Guo, Y.; Zhang, H.; Kuang, G. Hyperspectral image restoration using low-rank tensor recovery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4589–4604. [Google Scholar] [CrossRef]
  34. Wang, Y.; Peng, J.; Zhao, Q.; Leung, Y.; Zhao, X.L.; Meng, D. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1227–1243. [Google Scholar] [CrossRef] [Green Version]
  35. Yang, J.H.; Zhao, X.L.; Ma, T.H.; Chen, Y.; Huang, T.Z.; Ding, M. Remote sensing images destriping using unidirectional hybrid total variation and nonconvex low-rank regularization. J. Comput. Appl. Math. 2020, 363, 124–144. [Google Scholar] [CrossRef]
  36. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279. [Google Scholar]
  37. Katkovnik, V.; Foi, A.; Egiazarian, K.; Astola, J. From local kernel to nonlocal multiple-model image denoising. Int. J. Comput. Vis. 2010, 86, 1. [Google Scholar] [CrossRef] [Green Version]
  38. Zhu, R.; Dong, M.; Xue, J.H. Spectral nonlocal restoration of hyperspectral images with low-rank property. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 8, 3062–3067. [Google Scholar]
  39. Bai, X.; Xu, F.; Zhou, L.; Xing, Y.; Bai, L.; Zhou, J. Nonlocal similarity based nonnegative tucker decomposition for hyperspectral image denoising. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 701–712. [Google Scholar] [CrossRef] [Green Version]
  40. Xue, J.; Zhao, Y.; Liao, W.; Kong, S.G. Joint spatial and spectral low-rank regularization for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1940–1958. [Google Scholar] [CrossRef]
  41. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.W. Nonlocal low-rank regularized tensor decomposition for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5174–5189. [Google Scholar] [CrossRef]
  42. Xie, Q.; Zhao, Q.; Meng, D.; Xu, Z.; Gu, S.; Zuo, W.; Zhang, L. Multispectral images denoising by intrinsic tensor sparsity regularization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1692–1700. [Google Scholar]
  43. Huang, Z.; Li, S.; Fang, L.; Li, H.; Benediktsson, J.A. Hyperspectral image denoising with group sparse and low-rank tensor decomposition. IEEE Access 2017, 6, 1380–1390. [Google Scholar] [CrossRef]
  44. Zhang, H.; Liu, L.; He, W.; Zhang, L. Hyperspectral image denoising with total variation regularization and nonlocal low-rank tensor decomposition. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3071–3084. [Google Scholar] [CrossRef]
  45. Chen, Y.; He, W.; Yokoya, N.; Huang, T.Z.; Zhao, X.L. Nonlocal tensor-ring decomposition for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2019, 58, 1348–1362. [Google Scholar] [CrossRef]
  46. Maggioni, M.; Katkovnik, V.; Egiazarian, K.; Foi, A. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Trans. Image Process. 2012, 22, 119–133. [Google Scholar] [CrossRef] [PubMed]
  47. Chen, G.; Bui, T.D.; Quach, K.G.; Qian, S.E. Denoising hyperspectral imagery using principal component analysis and block-matching 4D filtering. Can. J. Remote Sens. 2014, 40, 60–66. [Google Scholar] [CrossRef]
  48. Sun, L.; Jeon, B. A novel subspace spatial-spectral low rank learning method for hyperspectral denoising. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, 10–13 December 2017; pp. 1–4. [Google Scholar]
  49. Sun, L.; Jeon, B.; Soomro, B.N.; Zheng, Y.; Wu, Z.; Xiao, L. Fast superpixel based subspace low rank learning method for hyperspectral denoising. IEEE Access 2018, 6, 12031–12043. [Google Scholar] [CrossRef]
  50. Zhuang, L.; Bioucas-Dias, J.M. Fast hyperspectral image denoising and inpainting based on low-rank and sparse representations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 730–742. [Google Scholar] [CrossRef]
  51. He, W.; Yao, Q.; Li, C.; Yokoya, N.; Zhao, Q. Non-local meets global: An integrated paradigm for hyperspectral denoising. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 6861–6870. [Google Scholar]
52. Zhuang, L.; Ng, M.K. Hyperspectral mixed noise removal by ℓ1-norm-based subspace representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1143–1157. [Google Scholar] [CrossRef]
  53. Huang, H.; Shi, G.; He, H.; Duan, Y.; Luo, F. Dimensionality reduction of hyperspectral imagery based on spatial–spectral manifold learning. IEEE Trans. Cybern. 2019, 50, 2604–2616. [Google Scholar] [CrossRef] [Green Version]
  54. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  55. Bioucas-Dias, J.M.; Nascimento, J.M. Hyperspectral subspace identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445. [Google Scholar] [CrossRef] [Green Version]
  56. Sun, L.; Wu, F.; Zhan, T.; Liu, W.; Wang, J.; Jeon, B. Weighted nonlocal low-rank tensor decomposition method for sparse unmixing of hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1174–1188. [Google Scholar] [CrossRef]
  57. Chen, Y.; He, W.; Yokoya, N.; Huang, T.Z. Hyperspectral image restoration using weighted group sparsity-regularized low-rank tensor decomposition. IEEE Trans. Cybern. 2020, 50, 3556–3570. [Google Scholar] [CrossRef] [PubMed]
  58. Peng, Y.; Meng, D.; Xu, Z.; Gao, C.; Yang, Y.; Zhang, B. Decomposable nonlocal tensor dictionary learning for multispectral image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2949–2956. [Google Scholar]
  59. Kuang, F.; Zhang, S.; Jin, Z.; Xu, W. A novel SVM by combining kernel principal component analysis and improved chaotic particle swarm optimization for intrusion detection. Soft Comput. 2015, 19, 1187–1199. [Google Scholar] [CrossRef]
  60. Chen, Y.; Xiong, J.; Xu, W.; Zuo, J. A novel online incremental and decremental learning algorithm based on variable support vector machine. Clust. Comput. 2019, 22, 7435–7445. [Google Scholar] [CrossRef]
  61. Chen, Y.; Xu, W.; Zuo, J.; Yang, K. The fire recognition algorithm using dynamic feature fusion and IV-SVM classifier. Clust. Comput. 2019, 22, 7665–7675. [Google Scholar] [CrossRef]
  62. Zeng, D.; Dai, Y.; Li, F.; Sherratt, R.S.; Wang, J. Adversarial learning for distant supervised relation extraction. Comput. Mater. Contin. 2018, 55, 121–136. [Google Scholar]
  63. Meng, R.; Rice, S.G.; Wang, J.; Sun, X. A fusion steganographic algorithm based on faster R-CNN. Comput. Mater. Contin. 2018, 55, 1–16. [Google Scholar]
  64. Zhou, S.; Ke, M.; Luo, P. Multi-camera transfer GAN for person re-identification. J. Vis. Commun. Image Represent. 2019, 59, 393–400. [Google Scholar] [CrossRef]
  65. He, S.; Li, Z.; Tang, Y.; Liao, Z.; Li, F.; Lim, S.J. Parameters compressing in deep learning. Comput. Mater. Contin. 2020, 62, 321–336. [Google Scholar] [CrossRef]
  66. Gui, Y.; Zeng, G. Joint learning of visual and spatial features for edit propagation from a single image. Vis. Comput. 2020, 36, 469–482. [Google Scholar] [CrossRef]
Figure 1. Singular values comparison of the original Urban hyperspectral image (HSI) and the corresponding subspace.
Figure 2. Flowchart of the proposed subspace low-rank learning and non-local 4-d transform filtering (SLRL4D) method.
Figure 3. Visual comparison on band 1 of Washington DC Mall (WDC) dataset under simulated Case 1. (a) Original. (b) Contaminated. (c) BM4D. (d) LRMR. (e) PCABM4D. (f) TDL. (g) FSSLRL. (h) LLRSSTV. (i) LRTDTV. (j) LRTDGS. (k) NLR-CPTD. (l) SLRL4D.
Figure 4. Visual comparison on band 110 of WDC dataset under simulated Case 6. (a) Original. (b) Contaminated. (c) BM4D. (d) LRMR. (e) PCABM4D. (f) TDL. (g) FSSLRL. (h) LLRSSTV. (i) LRTDTV. (j) LRTDGS. (k) NLR-CPTD. (l) SLRL4D.
Figure 5. Visual comparison on band 66 of PU dataset under simulated Case 1. (a) Original. (b) Contaminated. (c) BM4D. (d) LRMR. (e) PCABM4D. (f) TDL. (g) FSSLRL. (h) LLRSSTV. (i) LRTDTV. (j) LRTDGS. (k) NLR-CPTD. (l) SLRL4D.
Figure 6. Visual comparison on band 82 of PU dataset under simulated Case 6. (a) Original. (b) Contaminated. (c) BM4D. (d) LRMR. (e) PCABM4D. (f) TDL. (g) FSSLRL. (h) LLRSSTV. (i) LRTDTV. (j) LRTDGS. (k) NLR-CPTD. (l) SLRL4D.
Figure 7. Peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM) comparison of all restoration methods under simulated Case 1 and Case 6. (a–c) Case 1 of WDC dataset. (d–f) Case 6 of WDC dataset. (g–i) Case 1 of Pavia University (PU) dataset. (j–l) Case 6 of PU dataset.
Figure 8. Spectrum difference comparison in pixel (92, 218) of WDC under Case 1. (a) Original, (b) Contaminated, (c) BM4D, (d) LRMR, (e) PCABM4D, (f) TDL, (g) FSSLRL, (h) LLRSSTV, (i) LRTDTV, (j) LRTDGS, (k) NLR-CPTD, (l) SLRL4D.
Figure 9. Spectrum difference comparison in pixel (53, 129) of WDC under Case 6. (a) Original, (b) Contaminated, (c) BM4D, (d) LRMR, (e) PCABM4D, (f) TDL, (g) FSSLRL, (h) LLRSSTV, (i) LRTDTV, (j) LRTDGS, (k) NLR-CPTD, (l) SLRL4D.
Figure 10. Spectrum difference comparison in pixel (140, 105) of PU under Case 1. (a) Original, (b) Contaminated, (c) BM4D, (d) LRMR, (e) PCABM4D, (f) TDL, (g) FSSLRL, (h) LLRSSTV, (i) LRTDTV, (j) LRTDGS, (k) NLR-CPTD, (l) SLRL4D.
Figure 11. Spectrum difference comparison in pixel (72, 150) of PU under Case 6. (a) Original, (b) Contaminated, (c) BM4D, (d) LRMR, (e) PCABM4D, (f) TDL, (g) FSSLRL, (h) LLRSSTV, (i) LRTDTV, (j) LRTDGS, (k) NLR-CPTD, (l) SLRL4D.
Figure 12. Visual comparison of all restoration methods on bands 103, 105, 144, 148, 210 of HYDICE Urban dataset. (a) Original, (b) PCABM4D, (c) TDL, (d) FSSLRL, (e) LLRSSTV, (f) LRTDTV, (g) LRTDGS, (h) NLR-CPTD, (i) SLRL4D.
Figure 13. Horizontal mean profiles comparison of all methods on bands 103, 105, 144, 148, 210 of HYDICE Urban dataset. (a) Original, (b) PCABM4D, (c) TDL, (d) FSSLRL, (e) LLRSSTV, (f) LRTDTV, (g) LRTDGS, (h) NLR-CPTD, (i) SLRL4D.
Figure 14. Sensitivity analysis of parameters λ and γ by employing PSNR, SSIM, and erreur relative globale adimensionnelle de synthèse (ERGAS) under Case 6 of the simulated WDC dataset. (a) PSNR, (b) SSIM, (c) ERGAS.
Figure 15. Sensitivity analysis of subspace dimension p by employing PSNR and mean spectral angle (MSA) under Case 6 of simulated WDC dataset. (a) PSNR, (b) MSA.
Figure 16. Convergence analysis of SLRL4D by employing PSNR and SSIM under Case 6 of simulated WDC dataset. (a) PSNR, (b) SSIM.
Table 1. Quantitative comparison under all six diverse cases in the simulated WDC and PU datasets.

| Dataset | Case | Index | Noisy | PCABM4D [47] | TDL [58] | FSSLRL [49] | LLRSSTV [30] | LRTDTV [34] | LRTDGS [57] | NLR-CPTD [41] | SLRL4D |
|---|---|---|---|---|---|---|---|---|---|---|---|
| WDC | Case 1 | PSNR | 12.042 | 30.341 | 30.012 | 30.339 | 30.225 | 30.981 | 31.330 | 32.090 | 32.633 |
| | | SSIM | 0.1330 | 0.8639 | 0.8647 | 0.8586 | 0.8529 | 0.8740 | 0.8871 | 0.9070 | 0.9208 |
| | | FSIM | 0.4919 | 0.9328 | 0.9230 | 0.9330 | 0.9239 | 0.9299 | 0.9370 | 0.9475 | 0.9520 |
| | | ERGAS | 60.8209 | 7.0790 | 7.3258 | 7.1147 | 7.3759 | 6.5940 | 6.3424 | 5.7895 | 5.4803 |
| | | MSA | 0.8570 | 0.1307 | 0.1222 | 0.1302 | 0.1432 | 0.1050 | 0.1058 | 0.0976 | 0.0802 |
| | Case 2 | PSNR | 14.129 | 31.534 | 31.422 | 31.611 | 31.539 | 32.150 | 32.410 | 33.396 | 33.439 |
| | | SSIM | 0.1922 | 0.8966 | 0.8985 | 0.8985 | 0.8997 | 0.9008 | 0.9090 | 0.9288 | 0.9350 |
| | | FSIM | 0.5539 | 0.9460 | 0.9416 | 0.9493 | 0.9419 | 0.9446 | 0.9485 | 0.9590 | 0.9607 |
| | | ERGAS | 48.5655 | 6.1748 | 6.2320 | 6.1407 | 8.4325 | 5.7740 | 5.6087 | 5.0149 | 4.9961 |
| | | MSA | 0.7572 | 0.1136 | 0.1059 | 0.1092 | 0.1607 | 0.0927 | 0.0925 | 0.0858 | 0.0742 |
| | Case 3 | PSNR | 13.826 | 30.656 | 30.445 | 31.196 | 31.515 | 32.081 | 32.314 | 32.341 | 32.969 |
| | | SSIM | 0.1831 | 0.8846 | 0.8818 | 0.8924 | 0.8989 | 0.8995 | 0.9087 | 0.9148 | 0.9224 |
| | | FSIM | 0.5472 | 0.9428 | 0.9356 | 0.9469 | 0.9414 | 0.9441 | 0.9486 | 0.9540 | 0.9552 |
| | | ERGAS | 50.5963 | 7.4086 | 7.5494 | 6.5403 | 9.6627 | 5.8170 | 5.6860 | 6.3406 | 5.4038 |
| | | MSA | 0.7640 | 0.1486 | 0.1484 | 0.1260 | 0.1617 | 0.0926 | 0.0934 | 0.1280 | 0.0876 |
| | Case 4 | PSNR | 13.948 | 30.427 | 31.408 | 31.503 | 31.809 | 31.893 | 32.248 | 32.144 | 33.312 |
| | | SSIM | 0.1890 | 0.8816 | 0.8994 | 0.8957 | 0.8940 | 0.9005 | 0.9078 | 0.9143 | 0.9338 |
| | | FSIM | 0.5486 | 0.9360 | 0.9425 | 0.9480 | 0.9431 | 0.9429 | 0.9478 | 0.9494 | 0.9599 |
| | | ERGAS | 49.7693 | 7.6996 | 6.5065 | 6.2197 | 6.9987 | 6.0993 | 5.8574 | 6.3738 | 5.0699 |
| | | MSA | 0.7651 | 0.1455 | 0.1143 | 0.1115 | 0.1381 | 0.0983 | 0.0955 | 0.1136 | 0.0751 |
| | Case 5 | PSNR | 14.005 | 31.446 | 31.458 | 31.530 | 31.581 | 32.080 | 32.292 | 33.244 | 33.390 |
| | | SSIM | 0.1892 | 0.8946 | 0.8970 | 0.8969 | 0.8928 | 0.8994 | 0.9102 | 0.9271 | 0.9284 |
| | | FSIM | 0.5501 | 0.9451 | 0.9409 | 0.9481 | 0.9407 | 0.9438 | 0.9488 | 0.9577 | 0.9576 |
| | | ERGAS | 49.5954 | 6.2429 | 6.2088 | 6.1925 | 8.1999 | 5.8178 | 5.7303 | 5.1017 | 5.0423 |
| | | MSA | 0.7637 | 0.1146 | 0.1042 | 0.1099 | 0.1472 | 0.0935 | 0.0955 | 0.0869 | 0.0736 |
| | Case 6 | PSNR | 13.790 | 30.141 | 30.639 | 31.110 | 31.788 | 32.047 | 32.217 | 31.748 | 32.919 |
| | | SSIM | 0.1802 | 0.8765 | 0.8878 | 0.8925 | 0.8944 | 0.8995 | 0.9086 | 0.9059 | 0.9215 |
| | | FSIM | 0.5458 | 0.9366 | 0.9388 | 0.9468 | 0.9438 | 0.9437 | 0.9481 | 0.9483 | 0.9553 |
| | | ERGAS | 50.5140 | 8.4916 | 7.8616 | 6.6505 | 7.0572 | 5.8479 | 5.8564 | 7.2990 | 5.4843 |
| | | MSA | 0.7654 | 0.1684 | 0.1575 | 0.1308 | 0.1380 | 0.0929 | 0.0963 | 0.1467 | 0.0903 |
| PU | Case 1 | PSNR | 12.042 | 28.113 | 30.440 | 28.702 | 29.612 | 30.325 | 30.455 | 31.041 | 31.671 |
| | | SSIM | 0.0892 | 0.7018 | 0.8408 | 0.7291 | 0.8194 | 0.7954 | 0.8217 | 0.8290 | 0.8717 |
| | | FSIM | 0.4234 | 0.8831 | 0.9134 | 0.8970 | 0.9126 | 0.9016 | 0.9054 | 0.9206 | 0.9180 |
| | | ERGAS | 61.3613 | 9.5350 | 7.3390 | 8.9760 | 8.2130 | 8.1152 | 7.6081 | 6.8476 | 6.3751 |
| | | MSA | 0.9318 | 0.1885 | 0.1186 | 0.1764 | 0.1452 | 0.1702 | 0.1478 | 0.1385 | 0.0984 |
| | Case 2 | PSNR | 14.097 | 29.555 | 31.599 | 30.144 | 30.529 | 31.744 | 31.549 | 32.495 | 32.643 |
| | | SSIM | 0.1345 | 0.7656 | 0.8617 | 0.8035 | 0.8551 | 0.8435 | 0.8552 | 0.8672 | 0.8924 |
| | | FSIM | 0.4847 | 0.9079 | 0.9316 | 0.9246 | 0.9277 | 0.9251 | 0.9221 | 0.9372 | 0.9321 |
| | | ERGAS | 49.3741 | 8.1296 | 6.4632 | 7.6367 | 7.5414 | 6.9415 | 6.7190 | 5.9089 | 5.7106 |
| | | MSA | 0.8319 | 0.1625 | 0.1159 | 0.1441 | 0.1336 | 0.1520 | 0.1349 | 0.1223 | 0.0916 |
| | Case 3 | PSNR | 13.756 | 28.407 | 29.684 | 30.064 | 31.210 | 31.565 | 31.420 | 30.847 | 31.906 |
| | | SSIM | 0.1253 | 0.7435 | 0.8123 | 0.7962 | 0.8282 | 0.8385 | 0.8518 | 0.8348 | 0.8799 |
| | | FSIM | 0.4794 | 0.9014 | 0.9146 | 0.9221 | 0.9257 | 0.9240 | 0.9203 | 0.9240 | 0.9253 |
| | | ERGAS | 51.0981 | 9.5420 | 8.5100 | 7.7057 | 6.8000 | 7.0056 | 6.8903 | 7.6459 | 6.2219 |
| | | MSA | 0.8253 | 0.1930 | 0.1704 | 0.1432 | 0.1352 | 0.1517 | 0.1317 | 0.1646 | 0.1005 |
| | Case 4 | PSNR | 14.056 | 28.232 | 30.186 | 30.278 | 31.354 | 31.750 | 31.386 | 30.635 | 32.506 |
| | | SSIM | 0.1356 | 0.7452 | 0.8200 | 0.8049 | 0.8309 | 0.8442 | 0.8499 | 0.8449 | 0.8903 |
| | | FSIM | 0.4841 | 0.8911 | 0.9183 | 0.9250 | 0.9255 | 0.9261 | 0.9195 | 0.9212 | 0.9316 |
| | | ERGAS | 49.4070 | 10.1610 | 8.1262 | 7.5204 | 6.7023 | 6.8369 | 6.8518 | 8.2926 | 5.8021 |
| | | MSA | 0.8261 | 0.2126 | 0.1613 | 0.1434 | 0.1359 | 0.1540 | 0.1357 | 0.1697 | 0.0945 |
| | Case 5 | PSNR | 13.903 | 29.384 | 31.368 | 29.999 | 30.462 | 31.520 | 31.305 | 32.272 | 32.578 |
| | | SSIM | 0.1295 | 0.7587 | 0.8538 | 0.7973 | 0.8523 | 0.8356 | 0.8470 | 0.8615 | 0.8912 |
| | | FSIM | 0.4791 | 0.9054 | 0.9294 | 0.9228 | 0.9272 | 0.9216 | 0.9185 | 0.9345 | 0.9320 |
| | | ERGAS | 50.4128 | 8.2778 | 6.6221 | 7.7509 | 7.5701 | 7.0474 | 6.8714 | 6.0468 | 5.7400 |
| | | MSA | 0.8419 | 0.1648 | 0.1171 | 0.1465 | 0.1335 | 0.1552 | 0.1348 | 0.1245 | 0.0936 |
| | Case 6 | PSNR | 13.469 | 27.448 | 28.899 | 29.932 | 31.067 | 31.334 | 31.238 | 29.485 | 31.811 |
| | | SSIM | 0.1199 | 0.7197 | 0.7823 | 0.7892 | 0.8219 | 0.8289 | 0.8448 | 0.8109 | 0.8773 |
| | | FSIM | 0.4705 | 0.8849 | 0.9087 | 0.9189 | 0.9226 | 0.9196 | 0.9171 | 0.9086 | 0.9238 |
| | | ERGAS | 52.5291 | 11.0025 | 9.5224 | 7.8226 | 6.9016 | 7.1722 | 7.0374 | 9.2811 | 6.2780 |
| | | MSA | 0.8339 | 0.2239 | 0.1910 | 0.1456 | 0.1371 | 0.1562 | 0.1369 | 0.1916 | 0.1025 |
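The quality indices reported above (PSNR, ERGAS, MSA) are standard full-reference measures. As a point of reference, the following is a minimal NumPy sketch of their commonly used definitions; the exact normalizations (e.g., the ERGAS scale factor and intensity range) are assumptions, not the authors' evaluation code.

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Mean per-band PSNR in dB; assumes intensities scaled to [0, peak]."""
    mse = np.mean((ref - est) ** 2, axis=(0, 1))  # per-band MSE
    return float(np.mean(10.0 * np.log10(peak ** 2 / mse)))

def ergas(ref, est):
    """ERGAS: 100 * sqrt(mean over bands of MSE_b / mean_b^2)."""
    mse = np.mean((ref - est) ** 2, axis=(0, 1))
    mu = np.mean(ref, axis=(0, 1))
    return float(100.0 * np.sqrt(np.mean(mse / mu ** 2)))

def msa(ref, est, eps=1e-12):
    """Mean spectral angle (radians) between corresponding pixel spectra."""
    r = ref.reshape(-1, ref.shape[-1])
    e = est.reshape(-1, est.shape[-1])
    cos = np.sum(r * e, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

For instance, a uniform offset of 0.1 on data in [0, 1] yields a per-band MSE of 0.01 and hence a PSNR of 10·log10(1/0.01) = 20 dB.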
Table 2. Classification accuracy for Case 1 and Case 6 of the Pavia University dataset.

| Case | Index | Noisy | PCABM4D [47] | TDL [58] | FSSLRL [49] | LLRSSTV [30] | LRTDTV [34] | LRTDGS [57] | NLR-CPTD [41] | SLRL4D |
|---|---|---|---|---|---|---|---|---|---|---|
| Case 1 | OA | 0.57732 | 0.70891 | 0.74824 | 0.60057 | 0.65361 | 0.71816 | 0.73532 | 0.76189 | 0.84743 |
| | AA | 0.52242 | 0.67714 | 0.68235 | 0.50377 | 0.55286 | 0.63976 | 0.66712 | 0.72033 | 0.80327 |
| | Kappa | 0.45628 | 0.63205 | 0.68247 | 0.46198 | 0.55072 | 0.64255 | 0.66595 | 0.70220 | 0.80847 |
| Case 6 | OA | 0.59775 | 0.72702 | 0.73404 | 0.63980 | 0.66957 | 0.73811 | 0.75806 | 0.77881 | 0.86227 |
| | AA | 0.54232 | 0.69083 | 0.67431 | 0.54924 | 0.57729 | 0.65813 | 0.69144 | 0.72933 | 0.83022 |
| | Kappa | 0.48407 | 0.65680 | 0.66464 | 0.53553 | 0.57537 | 0.66784 | 0.69498 | 0.72158 | 0.82752 |
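The classification indices in Table 2 (overall accuracy, average accuracy, and Cohen's kappa) all derive from a confusion matrix. A small sketch of the standard definitions follows; this is an illustrative implementation, not the authors' evaluation code.

```python
import numpy as np

def accuracy_metrics(conf):
    """OA, AA, and Cohen's kappa from a confusion matrix.

    Rows are reference (true) classes, columns are predicted classes.
    """
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                     # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))  # mean per-class accuracy
    # Expected agreement by chance, from the row/column marginals
    pe = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / total ** 2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa
```

A perfectly diagonal confusion matrix gives OA = AA = kappa = 1, while a classifier no better than chance drives kappa toward 0 even when OA is nonzero.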
Table 3. Comparison with deep learning methods.

| Noise Level | Index | BM4D [46] | HSID-CNN [15] | HSI-SDeCNN [16] | SLRL4D |
|---|---|---|---|---|---|
| σn = 50 | PSNR | 26.752 | 28.968 | 29.612 | 29.722 |
| | SSIM | 0.9208 | 0.9532 | 0.9608 | 0.9656 |
| σn = 75 | PSNR | 24.261 | 26.753 | 27.351 | 28.045 |
| | SSIM | 0.8670 | 0.9273 | 0.9371 | 0.9489 |
| σn = 100 | PSNR | 22.577 | 25.296 | 25.753 | 26.783 |
| | SSIM | 0.8119 | 0.9014 | 0.9121 | 0.9321 |
Table 4. Blind Q-Metric comparison on HYDICE Urban and EO1-Hyperion datasets.

| Dataset | Noisy | PCABM4D [47] | TDL [58] | FSSLRL [49] | LLRSSTV [30] | LRTDTV [34] | LRTDGS [57] | NLR-CPTD [41] | SLRL4D |
|---|---|---|---|---|---|---|---|---|---|
| Urban | 0.06293 | 0.07881 | 0.06306 | 0.06217 | 0.06830 | 0.07408 | 0.07077 | 0.09202 | 0.09818 |
| Hyperion | 0.02432 | 0.03426 | 0.03198 | 0.03144 | 0.03198 | 0.03385 | 0.03716 | 0.03966 | 0.04459 |
Table 5. Running time comparison on HYDICE Urban and EO1-Hyperion datasets.

| Dataset | PCABM4D [47] | TDL [58] | FSSLRL [49] | LLRSSTV [30] | LRTDTV [34] | LRTDGS [57] | NLR-CPTD [41] | SLRL4D |
|---|---|---|---|---|---|---|---|---|
| Urban | 738.59 | 56.08 | 258.17 | 337.96 | 323.52 | 202.90 | >1000 | 269.24 |
| Hyperion | 235.21 | 13.47 | 80.64 | 101.66 | 77.50 | 55.82 | >1000 | 57.72 |
Table 6. Parameters used on simulated Cases 1–6 of the WDC dataset.

| Parameter | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 | Case 6 |
|---|---|---|---|---|---|---|
| λ | 1 | 1 | 0.2 | 0.4 | 0.4 | 0.2 |
| γ | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 |
| p | 4 | 4 | 5 | 4 | 5 | 4 |

Share and Cite

Sun, L.; He, C.; Zheng, Y.; Tang, S. SLRL4D: Joint Restoration of Subspace Low-Rank Learning and Non-Local 4-D Transform Filtering for Hyperspectral Image. Remote Sens. 2020, 12, 2979. https://doi.org/10.3390/rs12182979
