Article

Hyperspectral Image Denoising Based on Principal-Third-Order-Moment Analysis

Shouzhi Li, Xiurui Geng, Liangliang Zhu, Luyan Ji and Yongchao Zhao
1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 Key Laboratory of Technology in Geo-Spatial Information Processing and Application System, Chinese Academy of Sciences, Beijing 100190, China
3 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
4 School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 242099, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(2), 276; https://doi.org/10.3390/rs16020276
Submission received: 10 November 2023 / Revised: 28 December 2023 / Accepted: 5 January 2024 / Published: 10 January 2024
(This article belongs to the Section Remote Sensing Image Processing)

Abstract: Denoising serves as a critical preprocessing step for the subsequent analysis of the hyperspectral image (HSI). Due to their high computational efficiency, low-rank-based denoising methods that project the noisy HSI into a low-dimensional subspace identified by certain criteria have gained widespread use. However, methods employing second-order statistics as criteria often struggle to retain the signal of small targets in the denoising results, while methods utilizing high-order statistics encounter difficulties in effectively suppressing noise. To tackle these challenges, we delve into a novel criterion to determine the projection subspace and propose an innovative low-rank-based method that preserves the spectral characteristics of small targets while significantly reducing noise. Experimental results on synthetic and real datasets demonstrate the effectiveness of the proposed method, in terms of both small-target preservation and noise reduction.

1. Introduction

A hyperspectral image (HSI) is composed of tens or hundreds of spectral reflectance bands, providing abundant spectral information about the objects present in the image. Thus, HSIs have been applied in various fields of remote sensing, such as classification [1,2,3] and target detection [4,5,6]. In comparison to an RGB remote-sensing image, an HSI exhibits relatively low spatial resolution, so many man-made small targets of interest in the scene are represented by only a few pixels, and the detection of such targets relies heavily on their distinctive spectral characteristics. However, HSIs are often corrupted by noise during acquisition and transmission, which can drown out these spectral characteristics and degrade the accuracy of target detection. Therefore, denoising plays a vital role as a preprocessing step for HSI. The denoising problem is commonly addressed by incorporating prior knowledge [7] of the HSI signal, and existing denoising algorithms can be broadly categorized into two groups based on the prior knowledge they employ.
The methods in the first category [8,9,10,11,12,13,14] utilize the spatial-smoothness prior of the HSI to construct the restoration result. For instance, Zhao [12] proposed the spectral-recognition-spatial-smooth-hyper-spectral filter (SRSSHF), which restores each pixel by taking a weighted average of its neighboring pixels, with the weights determined by the correlation coefficients between the central pixel and its neighbors. This method can effectively preserve the edges in the restoration result. Considering the situation in which the noise level varies across bands, Yuan [10] proposed the spectral–spatial kernel (SSK), which alleviates the influence of the bands with high noise levels on the calculation of the weights. Additionally, Buades [15] introduced the 'patch', consisting of all the pixels in a local region, as the fundamental unit and proposed the non-local-means (NLM) method, which determines the weights based on the distances between neighboring patches and the central patch. Because the patch provides additional local spatial information, the NLM method achieves improved denoising performance. Further, Maggioni [14] proposed the block-matching-and-4D-filtering (BM4D) method, which aggregates similar patches and performs collaborative filtering in a transform domain. The above methods utilize only local spatial information for denoising.
In addition, the spatial-smoothness prior has also led to many total-variation (TV) regularization-based approaches [16,17,18] that exploit the smoothness assumption over the entire HSI. Rudin [18] initially introduced the original TV-denoising method, with the aim of finding a restoration image that approximates the noisy image while minimizing the $L_1$ norm of its spatial gradient. Aggarwal [16] observed that the spectral dimension also adheres to the smoothness prior and introduced gradient information along the spectral direction, leading to the spatio-spectral-total-variation (SSTV) method. Recently, Takemoto [17] proposed the graph-SSTV method, which can better preserve complex spatial structures in the restoration result. However, with this kind of method, the signal of small targets occupying only a single pixel or a sub-pixel area is often mistaken for noise and removed from the restoration results.
The methods in the second category are based on the low-rank prior [19,20,21,22,23,24,25,26,27,28], and they generally offer notable computational efficiency. Methods of this kind assume that the HSI signal concentrates in a low-dimensional subspace, while the noise exists throughout the space. Consequently, they generally perform denoising by projecting the noisy HSI into a subspace identified by a certain criterion. Many of them employ the variance of the projection components to evaluate the signal energy present in the corresponding direction. Based on the assumption that noise follows an independent-and-identically-distributed (i.i.d.) Gaussian distribution, Richards [29] used the first few principal components of the noisy HSI matrix to construct the objective subspace. However, in real-world scenarios, the noise in a natural HSI generally exhibits a complex distribution. Chen [30] observed that the noise distributions of different bands have certain correlations and proposed a non-i.i.d. mixture-of-Gaussians (NMoG) noise model to fit the real noise. Xu [31] utilized the bandwise asymmetric Laplacian (BAL) distribution to model the real noise, introducing the BAL-matrix-factorization (BALMF) method. Additionally, Zhuang [32] and Fu [33] assumed that real noise is a mixture of Gaussian noise and sparse noise and proposed the FastHyMix and RoSEGS methods, respectively. In both methods, the sparse noise is first located and masked, to avoid influencing the estimation of the signal subspace.
Moreover, researchers [26,34] have noticed that HSI signals also exhibit significant high-order statistical features in the signal subspace. They have proposed methods that utilize the high-order statistics of the projection components as criteria. For instance, Chiang [26] extracted the subspace corresponding to the maximum skewness, and the projection results achieved remarkable performance in the target-detection task. Geng [28] developed the principal-skewness-analysis-(PSA) method, which can rapidly extract an arbitrary number of the maximum skewness components. Wang [34] explored the application of the independent-component-analysis-(ICA) method to HSI, utilizing a mixture of skewness and kurtosis as the criterion.
It should be noted that existing low-rank-based denoising methods face a challenge in simultaneously preserving the spectra of small targets and suppressing noise. As small targets constitute only a minor portion of the overall HSI signal, they do not contribute to significant second-order statistical characteristics. Therefore, methods relying on variance as a criterion often struggle to identify the subspace corresponding to the small-target signal. Consequently, the spectral characteristic of the small targets is generally eliminated in the denoised results of the variance-based methods, severely affecting the detection of these small targets. By contrast, high-order-statistic-based methods can effectively capture the target signal, due to their sensitivity to minority components within the signal. However, the values of skewness and kurtosis are easily influenced by noise, causing these methods to fail to accurately identify the signal subspace in noisy situations (which will be further discussed in the subsequent section). As a result, evident residual noise generally persists in the denoised results of high-order-statistic-based methods.
To overcome these limitations, this paper investigates a new low-rank-based denoising method that preserves the small-target signal while reducing noise. The pivotal factor is finding a new criterion that is sensitive to the minor portion of the signal yet robust to noise.
This paper is organized as follows. In Section 2, we briefly introduce the PSA algorithm. In Section 3, we propose a novel denoising method. The experimental results are shown in Section 4. Finally, we conclude this paper in Section 5.

2. Background

In this section, we provide a brief introduction to the principal-skewness-analysis (PSA) [28] algorithm, which is designed to calculate the component with maximum skewness. The commonly used symbols in this paper are summarized at the end of this paper.
Let $\mathbf{X} \in \mathbb{R}^{N \times L}$ represent a noise-free HSI with $N$ pixels and $L$ bands. For simplicity, all HSIs used in this paper have been normalized, that is,
$$\frac{1}{N}\sum_{i=1}^{N} X_{i,j} = 0, \qquad j = 1, 2, \ldots, L. \tag{1}$$
Here, $X_{i,j}$ denotes the element located at the $i$th row and $j$th column of $\mathbf{X}$. The PSA algorithm initially whitens the data, using the following equation:
$$\mathbf{R} = \mathbf{X}\mathbf{F}, \tag{2}$$
where $\mathbf{R}$ is the whitened data and $\mathbf{F} = \mathbf{E}\mathbf{D}^{-\frac{1}{2}}$ is the whitening operator. Here, $\mathbf{E}$ and $\mathbf{D}$ are, respectively, the eigenvector matrix and the eigenvalue matrix of the covariance matrix of $\mathbf{X}$. Then, the PSA algorithm constructs the co-skewness tensor, calculated as
$$\mathcal{S} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{R}_{i,:} \circ \mathbf{R}_{i,:} \circ \mathbf{R}_{i,:}. \tag{3}$$
In this equation, $\mathbf{R}_{i,:}$ represents the $i$th row of $\mathbf{R}$ (the whitened spectrum of the $i$th pixel), $\circ$ represents the outer product and $\mathcal{S} \in \mathbb{R}^{L \times L \times L}$ is the co-skewness tensor. Subsequently, the maximum skewness direction can be determined by solving the following optimization model:
$$\max_{\mathbf{u}} \; \mathcal{S} \times_1 \mathbf{u} \times_2 \mathbf{u} \times_3 \mathbf{u} \qquad \mathrm{s.t.} \; \mathbf{u}^T\mathbf{u} = 1, \tag{4}$$
where $\mathbf{u} \in \mathbb{R}^{L \times 1}$ represents an arbitrary direction vector. It should be noted that the solution of Equation (4) is the maximum skewness direction of $\mathbf{R}$; the corresponding maximum skewness direction of $\mathbf{X}$ can be obtained by multiplying it by $\mathbf{E}\mathbf{D}^{-\frac{1}{2}}$. The PSA has been shown [28] to be effective at extracting the signal subspace of a noise-free HSI.
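As a concrete reference, the following NumPy sketch (our own illustration, not the authors' released code) shows how the whitening step of Equation (2) and the co-skewness tensor of Equation (3) can be formed; the tensor is averaged over the $N$ whitened pixel spectra, which is what makes it an $L \times L \times L$ array, and a nondegenerate covariance matrix is assumed.

```python
import numpy as np

def whiten(X):
    """Whitening step of Eq. (2): X is a centered (N, L) HSI matrix."""
    C = X.T @ X / X.shape[0]            # L x L covariance matrix of X
    eigvals, E = np.linalg.eigh(C)      # D = diag(eigvals), E = eigenvector matrix
    F = E @ np.diag(eigvals ** -0.5)    # whitening operator F = E D^(-1/2)
    return X @ F, F

def coskewness_tensor(R):
    """Co-skewness tensor of Eq. (3), averaged over the N whitened pixel spectra."""
    return np.einsum('ni,nj,nk->ijk', R, R, R) / R.shape[0]
```

The maximum skewness direction in Equation (4) can then be searched by a fixed-point iteration of the kind described in Section 3.2.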

3. Method

In this section, we first investigate the issues of the PSA algorithm in the presence of noise. Subsequently, we introduce a new criterion that is robust to Gaussian noise to determine the projection subspace, and we develop a novel denoising method based on it.

3.1. Maximum Third-Order Moment Criterion

First, we explore the impact of noise on the projection direction searched by the PSA. Actually, the objective of the PSA algorithm is equivalent to finding a unit direction vector $\mathbf{u}$ that maximizes the following function [28]:
$$\mathrm{skew}(\mathbf{v}) = \frac{\mu_3(\mathbf{v})}{\left(\mathrm{var}(\mathbf{v})\right)^{\frac{3}{2}}} = \frac{\frac{1}{N}\sum_{i=1}^{N} v_i^3}{\left(\frac{1}{N}\sum_{i=1}^{N} v_i^2\right)^{\frac{3}{2}}}, \tag{5}$$
where $\mathbf{v} = \mathbf{X}\mathbf{u}$ represents the projection coefficients along direction $\mathbf{u}$, and $v_i$ is the $i$th element of $\mathbf{v}$. The third-order moment and the variance of $\mathbf{v}$ are, respectively, $\mu_3(\mathbf{v})$ and $\mathrm{var}(\mathbf{v})$. As $\mathbf{X}$ has been normalized, the mean of $\mathbf{v}$ is always zero and is eliminated in Equation (5).
In a noisy situation, the observed HSI is usually modeled as
$$\mathbf{Y} = \mathbf{X} + \mathbf{N}, \tag{6}$$
where $\mathbf{Y}$ represents the observed noisy HSI and $\mathbf{N}$ is the zero-mean Gaussian noise with covariance matrix $\mathbf{C} \in \mathbb{R}^{L \times L}$. We denote $\mathbf{w} = \mathbf{Y}\mathbf{u}$ and $\mathbf{n} = \mathbf{N}\mathbf{u}$ as, respectively, the projection coefficients of $\mathbf{Y}$ and $\mathbf{N}$ along direction $\mathbf{u}$. In the noisy situation, the PSA algorithm aims to maximize the skewness of $\mathbf{w}$:
$$\mathrm{skew}(\mathbf{w}) = \frac{\mu_3(\mathbf{w})}{\left(\mathrm{var}(\mathbf{w})\right)^{\frac{3}{2}}} = \frac{\frac{1}{N}\sum_{i=1}^{N}(v_i + n_i)^3}{\left(\frac{1}{N}\sum_{i=1}^{N}(v_i + n_i)^2\right)^{\frac{3}{2}}} = \frac{\frac{1}{N}\sum_{i=1}^{N}\left(v_i^3 + 3v_i^2 n_i + 3v_i n_i^2 + n_i^3\right)}{\left(\frac{1}{N}\sum_{i=1}^{N}\left(v_i^2 + 2v_i n_i + n_i^2\right)\right)^{\frac{3}{2}}}, \tag{7}$$
where $n_i$ is the $i$th element of $\mathbf{n}$. As a linear combination of $L$ Gaussian noise components, $n_i$ is a random sample from a Gaussian distribution with zero mean and variance $\sigma^2$, where $\sigma^2 = \mathbf{u}^T\mathbf{C}\mathbf{u}$. According to the law of large numbers, as $N$ increases, we have
$$\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} v_i^2 n_i = 0, \tag{8}$$
$$\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} v_i n_i^2 = 0, \tag{9}$$
$$\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} v_i n_i = 0, \tag{10}$$
$$\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} n_i^3 = 0, \tag{11}$$
$$\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} n_i^2 = \sigma^2. \tag{12}$$
As a result, Equation (7) can be approximately simplified as
$$\mathrm{skew}(\mathbf{w}) = \frac{\frac{1}{N}\sum_{i=1}^{N} v_i^3}{\left(\frac{1}{N}\sum_{i=1}^{N} v_i^2 + \sigma^2\right)^{\frac{3}{2}}}. \tag{13}$$
Comparing Equation (13) to Equation (5), it is evident that the noise introduces an additional term $\sigma^2$ in the denominator, resulting in a reduction in skewness along direction $\mathbf{u}$. Additionally, the distribution of the HSI signal in different directions varies, leading to changes in the values of $\sum_{i=1}^{N} v_i^3$ and $\sum_{i=1}^{N} v_i^2$ with respect to the projection direction. Consequently, the extent of the reduction caused by $\sigma^2$ is direction-dependent, suggesting that the maximum skewness direction of the noisy HSI likely deviates from that of the noise-free HSI. Therefore, the PSA algorithm is unable to accurately identify the signal subspace in the presence of noise.
Fortunately, we find an important conclusion: the numerator of Equation (13), namely the third-order moment, is equal to that of Equation (5) in the limit of large $N$:
$$\mu_3(\mathbf{w}) = \mu_3(\mathbf{v}). \tag{14}$$
Equation (14) indicates that the maximum third-order-moment direction remains unaffected by the Gaussian noise. Furthermore, it has been observed [35] that the third-order-moment value is also sensitive to the minority components in the signal, which implies that the third-order moment retains the property of preserving the small target. Therefore, we consider utilizing the third-order moment as the criterion for determining the projection subspace for denoising.
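A quick numerical check of this argument is given below (our own illustration; the skewed "signal" projection and the noise levels are arbitrary choices): as the Gaussian noise level grows, the sample skewness of the noisy projection shrinks as predicted by Equation (13), while its third-order moment stays close to that of the clean projection, as predicted by Equation (14).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# A skewed, zero-mean "signal" projection v (a stand-in for a few bright target pixels)
v = rng.exponential(scale=1.0, size=N)
v -= v.mean()

def skewness(x):
    return np.mean(x ** 3) / np.mean(x ** 2) ** 1.5

def third_moment(x):
    return np.mean(x ** 3)

for sigma in (0.0, 0.5, 1.0, 2.0):
    w = v + rng.normal(0.0, sigma, size=N)   # noisy projection w = v + n
    print(f"sigma={sigma:3.1f}  skew(w)={skewness(w):6.3f}  mu3(w)={third_moment(w):6.3f}")
# skew(w) decreases as sigma grows, while mu3(w) stays close to mu3(v).
```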

3.2. Principal Third-Order-Moment Analysis

In this subsection, we develop the principal-third-order-moment-analysis (PTMA) algorithm to calculate the maximum third-order-moment directions and apply it for denoising. The procedure of the PTMA is as follows.
Inspired by the PSA algorithm, we first construct the co-third-order-moment tensor $\mathcal{M}$ as
$$\mathcal{M} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{Y}_{i,:} \circ \mathbf{Y}_{i,:} \circ \mathbf{Y}_{i,:}, \tag{15}$$
where $\mathbf{Y}_{i,:}$ is the $i$th row of $\mathbf{Y}$, i.e., the spectrum of the $i$th pixel. Then, the first maximum-third-order-moment direction can be determined by solving the following optimization problem:
$$\mathop{\mathrm{argmax}}_{\mathbf{u}} \; \mathcal{M} \times_1 \mathbf{u} \times_2 \mathbf{u} \times_3 \mathbf{u} \qquad \mathrm{s.t.} \; \mathbf{u}^T\mathbf{u} = 1. \tag{16}$$
The Lagrangian function of Equation (16) is
$$L(\mathbf{u}, \lambda) = \mathcal{M} \times_1 \mathbf{u} \times_2 \mathbf{u} \times_3 \mathbf{u} - \lambda\left(\mathbf{u}^T\mathbf{u} - 1\right). \tag{17}$$
Subsequently, the $\mathbf{u}$ that maximizes Equation (16) satisfies the following equation:
$$\mathcal{M} \times_1 \mathbf{u} \times_3 \mathbf{u} = \lambda\mathbf{u}. \tag{18}$$
We apply the fixed-point-iteration method [36] to solve Equation (18), and a specific iteration step is
$$\mathbf{u} \leftarrow \mathcal{M} \times_1 \mathbf{u} \times_3 \mathbf{u}, \qquad \mathbf{u} \leftarrow \mathbf{u} / \|\mathbf{u}\|_2. \tag{19}$$
The convergence result (denoted as $\mathbf{u}_1$) is the first maximum-third-order-moment direction. To obtain the second projection direction while ensuring that it does not converge to $\mathbf{u}_1$, we project $\mathbf{Y}$ into the orthogonal complement space of $\mathbf{u}_1$ and reconstruct the co-third-order-moment tensor. These two operations are equivalent to directly applying the projection matrix to $\mathcal{M}$ along each mode, as follows:
$$\mathcal{M} \leftarrow \mathcal{M} \times_1 \mathbf{P}_{\mathbf{u}_1} \times_2 \mathbf{P}_{\mathbf{u}_1} \times_3 \mathbf{P}_{\mathbf{u}_1}, \tag{20}$$
where $\mathbf{P}_{\mathbf{u}_1} = \mathbf{I} - \mathbf{u}_1\mathbf{u}_1^T$ is the projection matrix onto the orthogonal complement subspace of the vector $\mathbf{u}_1$, and $\mathbf{I}$ is the $L \times L$ identity matrix. Then, the same iteration process, i.e., Equation (19), is applied to the updated $\mathcal{M}$ to calculate the second maximum-third-order-moment direction $\mathbf{u}_2$, and the subsequent directions are obtained in the same manner.
We assume that $r$ represents the estimated dimension of the signal subspace (many algorithms [25,37,38] have been developed to estimate $r$). The restoration result is $\mathbf{Y}\mathbf{U}\mathbf{U}^T$, where $\mathbf{U} = [\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_r]$. The pseudocode of the PTMA is presented in Algorithm 1. It should be noted that although the PTMA requires only a minor modification of the PSA algorithm, it achieves a significant theoretical improvement in noise suppression.
Algorithm 1: PTMA algorithm
Require: centralized noisy HSI data $\mathbf{Y} \in \mathbb{R}^{N \times L}$; the estimated dimension of the objective subspace $r$
Ensure: denoising result $\hat{\mathbf{X}}$
1: Calculate the co-third-order-moment tensor $\mathcal{M}$ according to Equation (15)
2: Randomly initialize the transformation matrix $\mathbf{U} \in \mathbb{R}^{L \times r}$
3: for $i = 1 : r$ do
4:    Initialize $\mathbf{u}_i$ with a random unit vector
5:    while the stop conditions are not met do
6:       $\mathbf{u}_i = \mathcal{M} \times_1 \mathbf{u}_i \times_3 \mathbf{u}_i$
7:       $\mathbf{u}_i = \mathbf{u}_i / \|\mathbf{u}_i\|_2$
8:    end while
9:    $\mathbf{P}_{\mathbf{u}_i} = \mathbf{I} - \mathbf{u}_i\mathbf{u}_i^T$
10:   $\mathcal{M} = \mathcal{M} \times_1 \mathbf{P}_{\mathbf{u}_i} \times_2 \mathbf{P}_{\mathbf{u}_i} \times_3 \mathbf{P}_{\mathbf{u}_i}$
11:   $\mathbf{U}[:, i] = \mathbf{u}_i$
12: end for
13: Calculate the denoising result by $\hat{\mathbf{X}} = \mathbf{Y}\mathbf{U}\mathbf{U}^T$
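The following NumPy sketch is our own re-implementation of Algorithm 1 for illustration (the authors' MATLAB code is linked in the Data Availability Statement); the stopping rule, the iteration cap and the random initialization are assumptions, while the tensor construction and deflation follow Equations (15), (19) and (20).

```python
import numpy as np

def ptma(Y, r, n_iter=100, tol=1e-8, seed=0):
    """Sketch of Algorithm 1 (PTMA). Y: centered (N, L) noisy HSI matrix, r: subspace dimension.

    Returns the denoised HSI X_hat = Y U U^T.
    """
    rng = np.random.default_rng(seed)
    N, L = Y.shape
    M = np.einsum('ni,nj,nk->ijk', Y, Y, Y) / N       # co-third-order-moment tensor, Eq. (15)
    U = np.zeros((L, r))
    for i in range(r):
        u = rng.standard_normal(L)
        u /= np.linalg.norm(u)
        for _ in range(n_iter):                       # fixed-point iteration, Eq. (19)
            u_new = np.einsum('ijk,i,k->j', M, u, u)  # M x_1 u x_3 u
            u_new /= np.linalg.norm(u_new)
            # stop when the direction is stable (up to sign)
            if min(np.linalg.norm(u_new - u), np.linalg.norm(u_new + u)) < tol:
                u = u_new
                break
            u = u_new
        U[:, i] = u
        P = np.eye(L) - np.outer(u, u)                # projector onto the orthogonal complement
        # deflation M <- M x_1 P x_2 P x_3 P, Eq. (20), applied mode by mode
        M = np.einsum('ia,ajk->ijk', P, M)
        M = np.einsum('jb,ibk->ijk', P, M)
        M = np.einsum('kc,ijc->ijk', P, M)
    return Y @ U @ U.T
```

In practice, the subspace dimension r would be estimated beforehand, e.g., with an information criterion such as AIC, as done in the experiments below.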

4. Experiments

In this section, we utilized simulated and real noisy hyperspectral images (HSIs) to assess the denoising and target-preservation capabilities of the PTMA. For comparison, seven advanced denoising methods were used as competitors: the non-local-means-3-D filter (NLM3D) [13], the block-matching-and-4D filter (BM4D) [14], the low-rank matrix recovery (LRMR) [21], global local factorization (GLF) [39], the spectral–spatial kernel (SSK) [10], the spectral-recognition-spatial-smooth-hyper-spectral filter (SRSSHF) [12] and the PSA.

4.1. Synthetic Experiment

In this subsection, synthetic noisy datasets were generated based on two public HSI datasets, namely subregions of the Washington-DC-Mall data and the University-of-Pavia data. The Washington data were acquired by the airborne-visible/infrared-imaging-spectrometer-(AVIRIS) sensor, with a spectral range of 0.4 to 2.4 μm, and the Pavia data were acquired by the reflective-optics-system-imaging-spectrometer-(ROSIS) sensor, with a spatial resolution of 1.3 m.
To obtain relatively clean images, we first removed the bands severely corrupted by atmospheric water vapor. Then, the spectral vectors of each image were projected onto a subspace spanned by its first few principal eigenvectors, and the projection result was taken as the reference noise-free HSI. The processed Washington dataset consisted of 200 × 200 pixels and 191 bands, whose false-color image composed of bands 60, 27, 14 is shown in Figure 1a. The processed Pavia dataset consisted of 200 × 200 pixels and 87 bands, whose false-color image composed of bands 60, 40, 20 is shown in Figure 1b.
To generate noisy HSIs, four kinds of noise were added to both clean datasets (both datasets had been normalized), and for each noise type we generated noise at two distinct intensity levels. Specifically, the noise types were as follows (a simulation sketch follows the list).
Case 1 (i.i.d. Gaussian noise): The noise conformed to a Gaussian distribution with zero mean. The standard deviations of the low-intensity and high-intensity situations were set as 0.03 and 0.05, respectively.
Case 2 (Non-i.i.d. Gaussian noise): The noise conformed to a Gaussian distribution with zero mean and a band-varying standard deviation. The standard deviations of the low-intensity and high-intensity situations were sampled from the uniform distributions U(0.01, 0.05) and U(0.01, 0.1), respectively.
Case 3 (Non-i.i.d. Gaussian noise + Poisson noise): The standard deviations of the Gaussian noise in the low- and high-intensity situations were sampled from the uniform distributions U(0.01, 0.02) and U(0.01, 0.07), respectively. The Poisson noise was pixel-value correlated, i.e., the output pixel value with Poisson noise was sampled from a Poisson distribution with mean equal to the pixel value. We used a scale factor β to control the intensity of the Poisson noise: if an input pixel value was a, then the corresponding output pixel was generated from a Poisson distribution with mean a × β and then divided by β. The scale factor β was set as 7 × 10^11 and 5 × 10^12 for the low-intensity and high-intensity situations, respectively.
Case 4 (Non-i.i.d. Gaussian noise + Poisson noise + Strip noise): The standard deviations of the Gaussian noise in the low- and high-intensity situations were sampled from the uniform distributions U(0.01, 0.02) and U(0.01, 0.03), respectively. β was set as 3 × 10^11 and 10^12, and the number of strips in each band was set as 10 and 30, respectively.
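The sketch below (our own illustration) indicates how the four degradation cases could be reproduced on a clean, normalized HSI cube; the per-case parameters (sigma, sigma_range, beta, n_strips) follow the values listed above, while the strip amplitudes, the column selection and the random seed are assumptions, since they are not specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_case_noise(X, case, sigma=0.05, sigma_range=(0.01, 0.1), beta=5e12, n_strips=30):
    """X: clean normalized HSI cube of shape (H, W, L). Returns a noisy copy for Cases 1-4."""
    Y = X.copy()
    H, W, L = X.shape
    if case == 1:                                    # Case 1: i.i.d. Gaussian noise
        return Y + rng.normal(0.0, sigma, size=X.shape)
    sigmas = rng.uniform(*sigma_range, size=L)       # Cases 2-4: band-varying Gaussian noise
    Y += rng.normal(0.0, 1.0, size=X.shape) * sigmas
    if case >= 3:                                    # Cases 3-4: signal-dependent Poisson noise
        clean = np.clip(X, 0, None)                  # Poisson means must be nonnegative
        Y += rng.poisson(clean * beta) / beta - clean
    if case == 4:                                    # Case 4: strip noise on random columns of each band
        for b in range(L):
            cols = rng.choice(W, size=n_strips, replace=False)
            Y[:, cols, b] += rng.uniform(-0.5, 0.5, size=n_strips)
    return Y
```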
Then, the competitors and our method were employed for denoising. The parameters of the competitors were set as follows. For the synthetic noise, the standard deviation of the noise, which was required as a parameter by BM4D, GLF, the SSK and the SRSSHF, was known. For NLM3D, the search-area size, the small-similar-patch size and the big-similar-patch size were set as 7 × 7, 3 × 3 and 5 × 5, respectively. For the LRMR, the patch size and the upper bound of the cardinality were set to their default values, 20 × 20 and 4000, respectively, as recommended in the literature. In addition, GLF and the LRMR also required the rank of the signal subspace, which was estimated for each noisy HSI according to the Akaike information criterion.
To quantitatively evaluate the denoising performance of these methods, four metrics were computed: the mean of the peak signal-to-noise ratio of each band (MPSNR), the mean of the structural similarity index of each band (MSSIM), the mean of the spectral angle mapping (MSAM) and the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) [40]. Higher MPSNR and MSSIM values and lower MSAM and ERGAS values generally imply better restoration results.
The MPSNR was calculated as follows:
$$\mathrm{MPSNR} = \frac{1}{L}\sum_{b=1}^{L} \mathrm{PSNR}_b, \tag{21}$$
$$\mathrm{PSNR}_b = 10 \cdot \log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}_b}\right) = 10 \cdot \log_{10}\!\left(\frac{\mathrm{MAX}^2}{\frac{1}{N}\left\|\hat{\mathbf{X}}_{:,b} - \mathbf{X}_{:,b}\right\|_2^2}\right), \tag{22}$$
where $\mathrm{PSNR}_b$ denotes the PSNR value of the $b$th band. MAX is the maximum possible pixel value of the HSI, which was set as 1 because the HSI had been normalized. $\hat{\mathbf{X}}_{:,b}$ and $\mathbf{X}_{:,b}$ represent the $b$th band of the denoised HSI and the reference noise-free HSI, respectively.
The MSSIM was calculated as follows:
$$\mathrm{MSSIM} = \frac{1}{L}\sum_{b=1}^{L} \mathrm{SSIM}_b, \tag{23}$$
$$\mathrm{SSIM}_b = \frac{\left(2\mu(\hat{\mathbf{X}}_{:,b})\,\mu(\mathbf{X}_{:,b}) + c_1\right)\left(2\sigma(\hat{\mathbf{X}}_{:,b},\mathbf{X}_{:,b}) + c_2\right)}{\left(\mu(\hat{\mathbf{X}}_{:,b})^2 + \mu(\mathbf{X}_{:,b})^2 + c_1\right)\left(\sigma(\hat{\mathbf{X}}_{:,b})^2 + \sigma(\mathbf{X}_{:,b})^2 + c_2\right)}, \tag{24}$$
where $\mathrm{SSIM}_b$ is the SSIM value of the $b$th band, $\mu(\hat{\mathbf{X}}_{:,b})$ and $\mu(\mathbf{X}_{:,b})$ are the means of $\hat{\mathbf{X}}_{:,b}$ and $\mathbf{X}_{:,b}$, respectively, $\sigma(\hat{\mathbf{X}}_{:,b})$ and $\sigma(\mathbf{X}_{:,b})$ are their standard deviations, $\sigma(\hat{\mathbf{X}}_{:,b},\mathbf{X}_{:,b})$ is their covariance, and $c_1$ and $c_2$ are constants.
The calculation procedure for the MSAM was
$$\mathrm{MSAM} = \frac{1}{N}\sum_{i=1}^{N} \arccos\frac{\mathbf{X}_{i,:}\,\hat{\mathbf{X}}_{i,:}^{T}}{\|\mathbf{X}_{i,:}\|_2 \cdot \|\hat{\mathbf{X}}_{i,:}\|_2}, \tag{25}$$
where $\mathbf{X}_{i,:}$ and $\hat{\mathbf{X}}_{i,:}$ are the $i$th rows (pixel spectra) of $\mathbf{X}$ and $\hat{\mathbf{X}}$, respectively.
The calculation procedure for the ERGAS was
$$\mathrm{ERGAS} = 100\sqrt{\frac{1}{L}\sum_{b=1}^{L}\left(\frac{\mathrm{RMSE}_b}{\mu(\mathbf{X}_{:,b})}\right)^{2}}, \qquad \mathrm{RMSE}_b = \frac{1}{\sqrt{N}}\left\|\mathbf{X}_{:,b} - \hat{\mathbf{X}}_{:,b}\right\|_2, \tag{26}$$
where $\mu(\mathbf{X}_{:,b})$ is the mean of the $b$th band of the reference HSI.
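For reference, three of these metrics can be computed directly from the definitions above; the sketch below is our own illustration, it reports MSAM in degrees (an assumption, which appears to match the magnitudes in Tables 1 and 2), and the MSSIM is usually obtained band by band with an off-the-shelf SSIM routine such as skimage.metrics.structural_similarity.

```python
import numpy as np

def mpsnr(X, X_hat, max_val=1.0):
    """Mean PSNR over bands, Eqs. (21)-(22); X and X_hat are (N, L) matrices."""
    mse = np.mean((X - X_hat) ** 2, axis=0)                  # per-band MSE
    return float(np.mean(10.0 * np.log10(max_val ** 2 / mse)))

def msam(X, X_hat, eps=1e-12):
    """Mean spectral angle over pixels, Eq. (25), reported here in degrees."""
    num = np.sum(X * X_hat, axis=1)
    den = np.linalg.norm(X, axis=1) * np.linalg.norm(X_hat, axis=1) + eps
    return float(np.degrees(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0)))))

def ergas(X, X_hat):
    """ERGAS, Eq. (26): per-band RMSE normalized by the band mean."""
    rmse = np.sqrt(np.mean((X - X_hat) ** 2, axis=0))
    return float(100.0 * np.sqrt(np.mean((rmse / np.mean(X, axis=0)) ** 2)))
```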
The metrics for the Washington and Pavia datasets are shown in Table 1 and Table 2, respectively. It can be seen that the PTMA achieved the highest MPSNR and MSSIM for most noise types, which indicates that the PTMA is not only applicable to Gaussian noise but also robust against complex mixed noise. However, considering the ERGAS metric, BM4D obtained the best performance in most situations. It is worth noting that the ERGAS aggregates the per-band error normalized by the average signal value of the respective band, whereas the PSNR depends only on the per-band MSE. Therefore, the reason that the PTMA achieved both higher MPSNR and higher ERGAS values than BM4D is that, for spectral bands with higher signal intensity, the PTMA left more residual noise than BM4D, whereas for bands with lower signal intensity, the PTMA left less.
We counted the average computation time of the PTMA and the competitors over all the noise cases (Table 1 and Table 2). It can be seen that the PTMA was the most computationally efficient algorithm, with a computation time of less than 10% of that of BM4D. In addition, it is worth noting that the PTMA determines the projection directions from $\mathcal{M} \in \mathbb{R}^{L \times L \times L}$, which means that, once $\mathcal{M}$ has been formed, the computational complexity of the PTMA is almost independent of the image size.
Furthermore, to intuitively evaluate the denoising performance, we illustrate the restoration images of the Washington dataset with the Case 1 noise type at the high noise level and the Pavia dataset with the Case 4 noise type at the high noise level in Figure 2 and Figure 3, respectively. For the Washington data, two enlarged local regions are shown on the left of each image, to display the details clearly. The lower region contains abundant edges. The upper region contains a small 'white' vehicle occupying a single pixel, labeled by the green circle; it serves as an indicator of whether the signal of the small target can be maintained after denoising.
From Figure 2, it is evident that NLM3D, GLF and the SSK caused severe blurring of the edges, and the restoration result of the LRMR showed a noticeable bias. Except in the results of the LRMR, the PSA and the PTMA, the small target was not identifiable in the restored HSIs. In Figure 3, the Pavia data were contaminated by severe strip noise, and in all the results except those of the LRMR and the PTMA there were still noticeable residual strips, which implies that these methods were not effective at removing the strip noise. The image information in the PSA result was barely perceptible, which means that the strip noise severely affected the signal subspace identified by the PSA. By contrast, by applying the third-order moment as the criterion, the PTMA achieved the best performance in removing the strip noise while preserving the image information.
Moreover, comparisons of the denoised spectra to the noise-free spectrum are shown in Figure 4 and Figure 5. For the Washington data, we display the spectra of the small target labeled in Figure 2, to illustrate the ability of these methods to preserve the spectral characteristics of small targets. It can be seen that the spectral shapes produced by NLM3D, BM4D, GLF and the SSK deviated significantly from the original spectral shape. The LRMR, the PSA and the PTMA preserved the spectral characteristic of the small target well, but the LRMR and the PSA retained more residual noise than the PTMA. In addition, Figure 5 shows the spectra of the pixel located at (50, 50) in the Pavia data. The selected pixel corresponds to bare soil, serving as an indicator of whether the spectral characteristics of the background can be maintained by these methods. It can be seen that the denoised spectrum of the PTMA was the closest to the original spectrum, whereas the spectrum of the PSA still differed significantly from the clean spectrum. Therefore, by replacing the skewness criterion with the third-order moment, the PTMA retained the capability of preserving small targets while concurrently achieving a significant enhancement in denoising efficacy.

4.2. Real Experiment

In this subsection, we utilized two HSIs containing real complex noise as the experimental datasets, to evaluate the denoising and target-preservation ability of the PTMA under real-noise conditions. The first noisy HSI was acquired by the airborne-visible/infrared-imaging spectrometer (AVIRIS) over the Indian Pines test site located in northwestern Indiana. It comprised 145 × 145 pixels and 224 spectral reflectance bands within the wavelength range of 0.4–2.5 μm. Its false-color image, composed of bands 40, 27, 14, is shown in Figure 6a. It can be seen that there was no apparent noise in these three bands. However, the noise level of the Indian Pines dataset varied across bands; for instance, the image of the second band was affected by intense noise, as shown in Figure 6b.
The other noisy HSI was acquired by the advanced hyperspectral imager (AHSI) aboard the GaoFen-5 (GF-5) satellite, the fifth satellite in the series of the China High-Resolution Earth Observation System. The AHSI is a 330-channel imaging spectrometer with a 30 m spatial resolution covering the 0.4–2.5 μm spectral range. The visible and near-infrared range consists of 150 bands with a spectral resolution of 5 nm, while the shortwave infrared range consists of 180 bands with a spectral resolution of 10 nm. However, due to the detector's low responsivity, 25 bands (band numbers 193–200 and 246–262) had zero signal intensity and were thus removed. After preprocessing, the selected HSI consisted of 305 bands and 256 × 256 pixels, acquired over the southwest region of Beijing, China. The false-color image, composed of bands 67, 37, 20 and shown in Figure 6d, exhibited high image quality. However, several bands contained noticeable strip noise, such as band 305, shown in Figure 6e.
The PTMA and all the competitors were employed to restore both of these datasets. Through testing, it was determined that the optimal standard deviations for the noise in these two datasets, which were required by certain competitors, should be set to 0.03 and 0.012, respectively. These values were found to yield the best denoising performance. And the remaining parameters remained consistent with those used in the previous subsection.
To intuitively evaluate the denoising performance, the images of the restoration results of the Indian Pines data and the GF-5 data are displayed in Figure 7 and Figure 8. From Figure 7, it is obvious that there was visible residual noise in the results of NLM3D, BM4D and the SSK. This suggests that NLM3D and BM4D, which achieved good denoising results under the synthetic Gaussian noise, are not suitable for a situation where the noise level varies across different bands. The LRMR, GLF and the PTMA all reduced the noise well while preserving the edges. Comparing the results of the PSA and the PTMA, it is evident that the PTMA achieved much better denoising performance than the PSA, which demonstrates that the third-order moment is also robust to noise with varying variance.
According to Figure 8, it can be observed that the restoration results of NLM3D, the SSK and the SRSSHF were still contaminated by severe stripes, which means that they were not applicable to the strip noise. The result of BM4D was severely blurred. The PSA exhibited some efficacy in suppressing the striping noise, but fell short of complete elimination. The LRMR, GLF and the PTMA achieved relatively good denoising performance. It is worth noting that the number of the residual strips in the HSI denoised by the PTMA was less than that in the LRMR and GLF results. Consequently, the PTMA demonstrated its ability to accurately identify the signal subspace, even in the presence of stripe noise.
To evaluate the target-preservation capabilities of these methods in real noisy conditions, we selected two materials that were present in the datasets and that constituted only a small fraction of the entire image signal, as the small targets to be detected. For the Indian Pines dataset, we selected oats, which consisted of 20 pixels, as the small target. The location of the target is shown in Figure 6c. For the GF-5 dataset, we selected a mine, which consisted of 15 pixels, as the small target. The location of the target is shown in Figure 6f.
Considering that whether the spectral characteristics of the small target are maintained in the restored HSIs directly affects the target-detection accuracy, we used the target-detection accuracy to evaluate the target-preservation ability of these methods. The spectral angle mapper (SAM) [41] was employed as the target-detection algorithm. The SAM algorithm requires a reference target spectrum as input, so we used the average of the spectra of all the small-target pixels in the noisy HSI as the reference spectrum, to minimize the noise in the reference spectrum as much as possible. The output value of the SAM at position $i$ was calculated as
$$\mathrm{SAM}(i) = \arccos\frac{\hat{\mathbf{X}}_{i,:}\,\mathbf{r}_t}{\|\hat{\mathbf{X}}_{i,:}\|_2 \cdot \|\mathbf{r}_t\|_2}, \tag{27}$$
in which $\mathbf{r}_t$ and $\hat{\mathbf{X}}_{i,:}$ represent the reference spectrum of the target and the $i$th spectrum (row) of the denoised HSI, respectively. The output range of the SAM is $[0, \pi/2]$, and it was then normalized into $[0, 1]$ to create the target-detection score map.
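A minimal sketch of this detector (our own illustration), operating on an (N, L) denoised HSI matrix, is shown below.

```python
import numpy as np

def sam_score_map(X_hat, r_t, eps=1e-12):
    """Normalized SAM detection scores, Eq. (27); smaller values mean spectra closer to the target."""
    num = X_hat @ r_t                                   # inner products with the reference spectrum
    den = np.linalg.norm(X_hat, axis=1) * np.linalg.norm(r_t) + eps
    angles = np.arccos(np.clip(num / den, -1.0, 1.0))   # spectral angles, in [0, pi/2] for nonnegative spectra
    return angles / (np.pi / 2)                         # normalized into [0, 1]
```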
In assessing the target-detection accuracy, we employed the receiver-operating-characteristic (ROC) curve [42] and the corresponding area under the curve (AUC) as the metrics. The ROC curve plots the true positive rate (TPR) against the false positive rate (FPR) at each threshold setting. The TPR and FPR were calculated as
$$\mathrm{TPR} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \tag{28}$$
$$\mathrm{FPR} = \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}}, \tag{29}$$
where TP is the number of true positives, FN the number of false negatives, FP the number of false positives and TN the number of true negatives. An ROC curve that lies closer to the top-left corner indicates better target-detection performance, as it maintains a high true positive rate while keeping the false positive rate low, and a higher AUC value likewise indicates better target-detection performance.
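The ROC curve and the AUC can be obtained by thresholding the score map against the ground-truth target mask; a sketch using scikit-learn (our own illustration) is given below, where the score is flipped so that larger values indicate the target, an assumption about the sign convention only.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def detection_roc(score_map, target_mask):
    """ROC curve and AUC for a normalized SAM score map (smaller score = more target-like)."""
    detection_statistic = 1.0 - score_map               # flip so that larger values mean "target"
    fpr, tpr, _ = roc_curve(target_mask.astype(int).ravel(), detection_statistic.ravel())
    return fpr, tpr, auc(fpr, tpr)
```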
The ROC curves and AUC values of the detection score maps are presented in Figure 9 and Table 3, respectively. It can be seen that the PTMA achieved the highest TPR values at most FPR values and obtained the highest AUC value for both datasets. These experimental results confirm that the PTMA remains effective under non-Gaussian noise conditions, in terms of both noise reduction and target preservation.

5. Conclusions

Existing low-rank-based denoising methods often encounter challenges in simultaneously preserving the signal of small targets and effectively reducing noise. In this paper, we demonstrated that the third-order-moment value is asymptotically unaffected by additive Gaussian noise while remaining sensitive to minor components in the signal. Consequently, we adopted the maximum third-order moment as the criterion for identifying the signal subspace and introduced the principal-third-order-moment-analysis algorithm, which can rapidly calculate the first few maximum third-order-moment directions. Two simulated and two real HSIs were employed to assess the effectiveness of the proposed method, and the proposed denoising method was compared to seven state-of-the-art competitors. The extensive experimental results demonstrated the superiority of the proposed method over the competitors, in terms of noise removal and small-target preservation.

Author Contributions

Conceptualization, S.L.; methodology, S.L.; software, S.L.; validation, S.L.; investigation, S.L.; data curation, Y.Z.; writing—original draft preparation, S.L.; writing—review and editing, S.L., X.G., L.Z. and L.J.; visualization, S.L.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key Research and Development Program of China (31400), and the Second Tibetan Plateau Scientific Expedition and Research Program (STEP), Grant No. 2019QZKK0206.

Data Availability Statement

The Washington data, Pavia data and Indian Pines data are all publicly archived datasets and can be obtained at http://lesun.weebly.com/hyperspectral-data-set.html. In addition, the GF-5 data used in this paper can be obtained at https://github.com/SZ-Li/PTMA-Matlab-Code.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

Notation | Description
$\mathbf{X} \in \mathbb{R}^{N \times L}$ | Noise-free HSI with $N$ pixels and $L$ bands
$\mathbf{X}_i \in \mathbb{R}^{N \times 1}$ | The $i$th column of $\mathbf{X}$, the image vector of the $i$th band
$\mathbf{X}_{b,:} \in \mathbb{R}^{1 \times L}$ | The $b$th row of $\mathbf{X}$, the spectrum vector at position $b$
$X_{i,j}$ | Scalar element located at the $i$th row and $j$th column of $\mathbf{X}$
$\mathbf{N}$ | Additive-noise matrix
$\mathbf{Y}$ | Observed noisy HSI
$\hat{\mathbf{X}}$ | Estimated noise-free HSI
$\mathbf{u} \in \mathbb{R}^{L \times 1}$ | Unit projection direction vector
$\mathbf{v}, \mathbf{w}, \mathbf{n} \in \mathbb{R}^{N \times 1}$ | Projection coefficients of $\mathbf{X}$, $\mathbf{Y}$ and $\mathbf{N}$ along direction $\mathbf{u}$
$v_i, w_i, n_i$ | The $i$th elements of $\mathbf{v}$, $\mathbf{w}$ and $\mathbf{n}$

References

  1. Sun, G.; Fu, H.; Ren, J.; Zhang, A.; Zabalza, J.; Jia, X.; Zhao, H. SpaSSA: Superpixelwise adaptive SSA for unsupervised spatial-spectral feature extraction in hyperspectral image. IEEE Trans. Cybern. 2021, 52, 6158–6169. [Google Scholar] [CrossRef] [PubMed]
  2. Huang, S.; Zhang, H.; Pižurica, A. Subspace clustering for hyperspectral images via dictionary learning with adaptive regularization. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–17. [Google Scholar] [CrossRef]
  3. Hennessy, A.; Clarke, K.; Lewis, M. Hyperspectral classification of plants: A review of waveband selection generalisability. Remote Sens. 2020, 12, 113. [Google Scholar] [CrossRef]
  4. Manolakis, D.; Marden, D.; Shaw, G.A. Hyperspectral image processing for automatic target detection applications. Linc. Lab. J. 2003, 14, 79–116. [Google Scholar]
  5. Sun, X.; Qu, Y.; Gao, L.; Sun, X.; Qi, H.; Zhang, B.; Shen, T. Target detection through tree-structured encoding for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4233–4249. [Google Scholar] [CrossRef]
  6. Zhu, D.; Du, B.; Zhang, L. Target dictionary construction-based sparse representation hyperspectral target detection methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1254–1264. [Google Scholar] [CrossRef]
  7. Zhang, J.; Zhao, D.; Gao, W. Group-based sparse representation for image restoration. IEEE Trans. Image Process. 2014, 23, 3336–3351. [Google Scholar] [CrossRef] [PubMed]
  8. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), Bombay, India, 7 January 1998; pp. 839–846. [Google Scholar]
  9. Peng, H.; Rao, R.; Dianat, S.A. Multispectral image denoising with optimized vector bilateral filter. IEEE Trans. Image Process. 2013, 23, 264–273. [Google Scholar] [CrossRef]
  10. Yuan, Y.; Zheng, X.; Lu, X. Spectral–spatial kernel regularized for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3815–3832. [Google Scholar] [CrossRef]
  11. Peng, H.; Rao, R. Bilateral kernel parameter optimization by risk minimization. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 3293–3296. [Google Scholar]
  12. Zhao, Y.; Pu, R.; Bell, S.S.; Meyer, C.; Baggett, L.P.; Geng, X. Hyperion image optimization in coastal waters. IEEE Trans. Geosci. Remote Sens. 2012, 51, 1025–1036. [Google Scholar] [CrossRef]
  13. Manjón, J.V.; Coupé, P.; Martí-Bonmatí, L.; Collins, D.L.; Robles, M. Adaptive non-local means denoising of MR images with spatially varying noise levels. J. Magn. Reson. Imaging 2010, 31, 192–203. [Google Scholar] [CrossRef]
  14. Maggioni, M.; Katkovnik, V.; Egiazarian, K.; Foi, A. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Trans. Image Process. 2012, 22, 119–133. [Google Scholar] [CrossRef] [PubMed]
  15. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar]
  16. Aggarwal, H.K.; Majumdar, A. Hyperspectral image denoising using spatio-spectral total variation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 442–446. [Google Scholar] [CrossRef]
  17. Takemoto, S.; Naganuma, K.; Ono, S. Graph spatio-spectral total variation model for hyperspectral image denoising. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  18. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  19. He, W.; Zhang, H.; Zhang, L.; Shen, H. Hyperspectral image denoising via noise-adjusted iterative low-rank matrix approximation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3050–3061. [Google Scholar] [CrossRef]
  20. Green, A.A.; Berman, M.; Switzer, P.; Craig, M.D. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans. Geosci. Remote Sens. 1988, 26, 65–74. [Google Scholar] [CrossRef]
  21. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 4729–4743. [Google Scholar] [CrossRef]
  22. Bioucas-Dias, J.M.; Nascimento, J.M. Hyperspectral subspace identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445. [Google Scholar] [CrossRef]
  23. Fan, H.; Li, C.; Guo, Y.; Kuang, G.; Ma, J. Spatial–spectral total variation regularized low-rank tensor decomposition for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6196–6213. [Google Scholar] [CrossRef]
  24. Liu, Y.; Shan, C.; Gao, Q.; Gao, X.; Han, J.; Cui, R. Hyperspectral image denoising via minimizing the partial sum of singular values and superpixel segmentation. Neurocomputing 2019, 330, 465–482. [Google Scholar] [CrossRef]
  25. Grünwald, P.D. The Minimum Description Length Principle; MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  26. Chiang, S.S.; Chang, C.I.; Ginsberg, I.W. Unsupervised target detection in hyperspectral images using projection pursuit. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1380–1391. [Google Scholar] [CrossRef]
  27. Ifarraguerri, A.; Chang, C.I. Unsupervised hyperspectral image analysis with projection pursuit. IEEE Trans. Geosci. Remote Sens. 2000, 38, 2529–2538. [Google Scholar]
  28. Geng, X.; Ji, L.; Sun, K. Principal skewness analysis: Algorithm and its application for multispectral/hyperspectral images indexing. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1821–1825. [Google Scholar] [CrossRef]
  29. Richards, J.A.; Richards, J.A. Remote Sensing Digital Image Analysis; Springer: Berlin, Germany, 2022; Volume 5. [Google Scholar]
  30. Chen, Y.; Cao, X.; Zhao, Q.; Meng, D.; Xu, Z. Denoising hyperspectral image with non-iid noise structure. IEEE Trans. Cybern. 2017, 48, 1054–1066. [Google Scholar] [CrossRef]
  31. Xu, S.; Cao, X.; Peng, J.; Ke, Q.; Ma, C.; Meng, D. Hyperspectral Image Denoising by Asymmetric Noise Modeling. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  32. Zhuang, L.; Ng, M.K. FastHyMix: Fast and parameter-free hyperspectral image mixed noise removal. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 702–4716. [Google Scholar] [CrossRef]
  33. Fu, X.; Guo, Y.; Xu, M.; Jia, S. Hyperspectral Image Denoising Via Robust Subspace Estimation and Group Sparsity Constraint. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5512716. [Google Scholar] [CrossRef]
  34. Wang, J.; Chang, C.I. Independent component analysis-based dimensionality reduction with applications in hyperspectral image analysis. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1586–1600. [Google Scholar] [CrossRef]
  35. Tukey, J.W. Exploratory Data Analysis; UMI: Boston, MA, USA, 1977; Volume 2. [Google Scholar]
  36. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  37. Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–723. [Google Scholar] [CrossRef]
  38. Neath, A.A.; Cavanaugh, J.E. The Bayesian information criterion: Background, derivation, and applications. Wiley Interdiscip. Rev. Comput. Stat. 2012, 4, 199–203. [Google Scholar] [CrossRef]
  39. Zhuang, L.; Fu, X.; Ng, M.K.; Bioucas-Dias, J.M. Hyperspectral image denoising based on global and nonlocal low-rank factorizations. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10438–10454. [Google Scholar] [CrossRef]
  40. Renza, D.; Martinez, E.; Arquero, A. A new approach to change detection in multispectral images by means of ERGAS index. IEEE Geosci. Remote Sens. Lett. 2012, 10, 76–80. [Google Scholar] [CrossRef]
  41. Chang, C.I. An information-theoretic approach to spectral variability, similarity, and discrimination for hyperspectral image analysis. IEEE Trans. Inf. Theory 2000, 46, 1927–1932. [Google Scholar] [CrossRef]
  42. Fawcett, T. An introduction to ROC analysis. Pattern Recognit. Lett. 2006, 27, 861–874. [Google Scholar] [CrossRef]
Figure 1. (a) The false-color image composed by bands 60, 27, 14 of the Washington dataset. (b) The false-color image composed by bands 60, 40, 20 of the Pavia dataset.
Figure 2. Restoration results of Washington data with Case 1 noise type and high noise level.
Figure 3. The restoration results of Pavia dataset with Case 4 noise type and high noise level.
Figure 4. Comparison of the spectra of the small target in the HSIs shown in Figure 2 to the clean spectrum.
Figure 5. Comparison of the spectra of the small target in the HSIs shown in Figure 3 to the clean spectrum.
Figure 6. (a) The false-color image of the Indian Pines dataset composed by bands 40, 27, 14. (b) The second band of the Indian Pines dataset. (c) The mask of the target in the Indian Pines dataset. (d) The selected GF-5 data composed by bands 67, 37, 20. (e) The image of the 305th band of the GF-5 data. (f) The mask of the target in the GF-5 dataset.
Figure 7. The second band of the restoration results of the Indian Pines data.
Figure 8. The 305th band in the restoration results of the GF-5 data.
Figure 9. The ROC curves of the restoration results. (a) The Indian Pines dataset. (b) The GF-5 dataset.
Table 1. Performance of the proposed and competitors on Washington data.
| Case (level) | Index | Noisy | NLM3D | BM4D | LRMR | GLF | SSK | SRSSHF | PSA | PTMA |
| Case 1 (low noise level) | MPSNR | 30.46 | 42.88 | 46.37 | 44.10 | 37.24 | 37.04 | 41.70 | 45.33 | 48.45 |
| | MSSIM | 0.65 | 0.97 | 0.99 | 0.98 | 0.93 | 0.92 | 0.93 | 0.94 | 0.99 |
| | MSAM | 10.36 | 2.39 | 1.78 | 2.11 | 4.14 | 3.68 | 3.26 | 3.58 | 1.67 |
| | ERGAS | 24.68 | 4.99 | 3.78 | 7.49 | 24.68 | 7.27 | 6.33 | 18.55 | 4.52 |
| Case 1 (high noise level) | MPSNR | 26.02 | 39.74 | 43.62 | 40.68 | 33.88 | 32.93 | 40.69 | 40.95 | 44.80 |
| | MSSIM | 0.48 | 0.95 | 0.98 | 0.97 | 0.87 | 0.85 | 0.92 | 0.92 | 0.97 |
| | MSAM | 16.86 | 3.28 | 2.51 | 3.12 | 6.24 | 6.12 | 3.35 | 4.75 | 2.28 |
| | ERGAS | 41.31 | 7.10 | 4.90 | 9.17 | 41.31 | 14.28 | 10.72 | 21.11 | 8.26 |
| Case 2 (low noise level) | MPSNR | 34.26 | 43.47 | 46.67 | 44.59 | 38.68 | 38.85 | 42.27 | 36.64 | 48.11 |
| | MSSIM | 0.73 | 0.97 | 0.99 | 0.98 | 0.94 | 0.93 | 0.92 | 0.86 | 0.99 |
| | MSAM | 10.23 | 2.42 | 1.88 | 2.33 | 4.10 | 3.67 | 3.53 | 13.96 | 1.67 |
| | ERGAS | 24.58 | 4.94 | 3.85 | 7.53 | 24.58 | 7.03 | 9.61 | 55.10 | 7.36 |
| Case 2 (high noise level) | MPSNR | 32.38 | 39.23 | 40.70 | 43.96 | 37.65 | 37.36 | 42.03 | 32.92 | 46.57 |
| | MSSIM | 0.82 | 0.94 | 0.94 | 0.98 | 0.94 | 0.95 | 0.94 | 0.74 | 0.99 |
| | MSAM | 9.13 | 5.56 | 5.72 | 2.63 | 4.84 | 4.61 | 3.78 | 15.29 | 2.24 |
| | ERGAS | 17.26 | 6.90 | 6.76 | 7.65 | 17.26 | 8.19 | 8.71 | 57.59 | 7.60 |
| Case 3 (low noise level) | MPSNR | 42.80 | 49.02 | 51.98 | 50.21 | 43.75 | 44.23 | 44.03 | 35.87 | 52.15 |
| | MSSIM | 0.92 | 0.99 | 1.00 | 0.99 | 0.97 | 0.98 | 0.95 | 0.82 | 1.00 |
| | MSAM | 3.96 | 1.37 | 0.91 | 1.29 | 2.17 | 1.73 | 2.67 | 14.29 | 1.27 |
| | ERGAS | 9.46 | 2.75 | 1.75 | 5.28 | 9.46 | 3.75 | 8.76 | 58.55 | 5.40 |
| Case 3 (high noise level) | MPSNR | 26.67 | 33.66 | 35.73 | 33.08 | 31.03 | 31.33 | 36.59 | 32.82 | 36.80 |
| | MSSIM | 0.55 | 0.84 | 0.84 | 0.86 | 0.81 | 0.82 | 0.83 | 0.79 | 0.87 |
| | MSAM | 21.24 | 11.63 | 11.41 | 12.09 | 13.11 | 12.57 | 11.63 | 16.65 | 11.34 |
| | ERGAS | 35.81 | 13.00 | 12.50 | 14.23 | 35.81 | 17.23 | 12.84 | 29.83 | 11.75 |
| Case 4 (low noise level) | MPSNR | 36.94 | 43.20 | 44.24 | 48.25 | 41.08 | 41.47 | 45.39 | 33.13 | 48.95 |
| | MSSIM | 0.92 | 0.97 | 0.98 | 0.99 | 0.96 | 0.97 | 0.97 | 0.75 | 0.99 |
| | MSAM | 5.45 | 3.18 | 3.07 | 1.57 | 3.15 | 2.71 | 2.26 | 15.34 | 1.77 |
| | ERGAS | 10.49 | 4.36 | 3.24 | 6.48 | 10.49 | 4.92 | 5.09 | 58.55 | 7.12 |
| Case 4 (high noise level) | MPSNR | 32.63 | 39.22 | 40.47 | 45.05 | 37.86 | 37.69 | 43.09 | 32.94 | 45.24 |
| | MSSIM | 0.85 | 0.94 | 0.95 | 0.99 | 0.94 | 0.95 | 0.95 | 0.77 | 0.99 |
| | MSAM | 8.99 | 5.59 | 5.66 | 2.39 | 4.78 | 4.45 | 3.63 | 15.37 | 2.28 |
| | ERGAS | 20.08 | 12.01 | 11.92 | 7.55 | 20.08 | 8.85 | 9.09 | 58.26 | 7.98 |
| Average computation time (s) | | 0 | 155.9 | 392.6 | 40.7 | 23.7 | 53.6 | 19.8 | 29.9 | 17.1 |
The bold items correspond to the best denoising performance.
Table 2. Performance of the proposed and competitors on Pavia data.
| Case (level) | Index | Noisy | NLM3D | BM4D | LRMR | GLF | SSK | SRSSHF | PSA | PTMA |
| Case 1 (low noise level) | MPSNR | 30.46 | 39.52 | 42.63 | 39.83 | 35.59 | 33.00 | 33.45 | 25.14 | 43.04 |
| | MSSIM | 0.78 | 0.97 | 0.99 | 0.98 | 0.94 | 0.89 | 0.91 | 0.77 | 0.98 |
| | MSAM | 15.25 | 4.21 | 3.37 | 4.65 | 6.70 | 10.84 | 5.23 | 7.72 | 3.07 |
| | ERGAS | 58.66 | 17.91 | 12.98 | 13.33 | 58.66 | 41.41 | 54.24 | 177.50 | 16.53 |
| Case 1 (high noise level) | MPSNR | 26.02 | 36.10 | 39.49 | 36.22 | 32.93 | 28.66 | 31.80 | 26.10 | 39.75 |
| | MSSIM | 0.62 | 0.94 | 0.97 | 0.95 | 0.90 | 0.76 | 0.87 | 0.71 | 0.98 |
| | MSAM | 23.11 | 5.66 | 4.50 | 6.82 | 9.64 | 16.74 | 5.76 | 7.89 | 5.68 |
| | ERGAS | 98.33 | 25.16 | 18.80 | 21.19 | 98.33 | 77.83 | 74.74 | 162.03 | 23.32 |
| Case 2 (low noise level) | MPSNR | 33.38 | 40.24 | 42.59 | 40.08 | 35.84 | 35.14 | 31.67 | 24.48 | 43.65 |
| | MSSIM | 0.82 | 0.97 | 0.99 | 0.97 | 0.94 | 0.89 | 0.86 | 0.70 | 0.99 |
| | MSAM | 15.23 | 4.26 | 3.41 | 5.20 | 6.70 | 11.05 | 5.70 | 11.14 | 3.29 |
| | ERGAS | 59.97 | 16.96 | 12.73 | 19.42 | 59.97 | 49.24 | 50.08 | 180.30 | 17.38 |
| Case 2 (high noise level) | MPSNR | 30.97 | 36.90 | 39.03 | 35.08 | 33.41 | 31.75 | 30.57 | 24.15 | 39.98 |
| | MSSIM | 0.69 | 0.94 | 0.97 | 0.93 | 0.91 | 0.78 | 0.84 | 0.66 | 0.97 |
| | MSAM | 24.45 | 6.10 | 4.49 | 9.29 | 10.20 | 18.52 | 6.74 | 12.01 | 6.35 |
| | ERGAS | 108.00 | 26.08 | 19.79 | 38.07 | 108.00 | 94.00 | 59.38 | 184.07 | 25.27 |
| Case 3 (low noise level) | MPSNR | 43.47 | 45.46 | 47.32 | 45.76 | 39.23 | 42.68 | 34.50 | 24.28 | 49.15 |
| | MSSIM | 0.96 | 0.99 | 0.99 | 0.99 | 0.97 | 0.98 | 0.93 | 0.66 | 1.00 |
| | MSAM | 6.27 | 3.00 | 2.14 | 2.35 | 3.81 | 4.64 | 4.55 | 10.98 | 1.73 |
| | ERGAS | 23.51 | 7.75 | 7.36 | 7.60 | 23.51 | 18.20 | 38.78 | 202.85 | 8.56 |
| Case 3 (high noise level) | MPSNR | 25.80 | 28.41 | 28.74 | 28.46 | 27.85 | 27.06 | 27.14 | 26.95 | 28.83 |
| | MSSIM | 0.66 | 0.85 | 0.87 | 0.87 | 0.83 | 0.77 | 0.78 | 0.82 | 0.88 |
| | MSAM | 21.60 | 6.80 | 5.12 | 8.97 | 9.56 | 15.35 | 6.94 | 8.23 | 4.94 |
| | ERGAS | 84.83 | 22.53 | 16.44 | 37.58 | 84.83 | 59.47 | 49.93 | 26.03 | 20.50 |
| Case 4 (low noise level) | MPSNR | 36.52 | 41.49 | 41.37 | 44.11 | 37.42 | 37.97 | 36.68 | 22.96 | 44.87 |
| | MSSIM | 0.95 | 0.98 | 0.99 | 0.99 | 0.97 | 0.97 | 0.95 | 0.59 | 0.99 |
| | MSAM | 7.99 | 3.55 | 3.29 | 2.68 | 4.73 | 6.09 | 5.00 | 13.93 | 3.09 |
| | ERGAS | 25.14 | 8.81 | 7.68 | 8.16 | 25.14 | 21.29 | 30.24 | 213.85 | 11.37 |
| Case 4 (high noise level) | MPSNR | 32.69 | 37.41 | 37.54 | 39.95 | 35.38 | 34.33 | 34.60 | 23.15 | 43.14 |
| | MSSIM | 0.90 | 0.96 | 0.97 | 0.98 | 0.96 | 0.94 | 0.93 | 0.65 | 0.99 |
| | MSAM | 11.49 | 5.39 | 4.90 | 4.36 | 6.24 | 8.72 | 5.48 | 13.74 | 3.37 |
| | ERGAS | 35.21 | 12.86 | 10.79 | 12.49 | 35.21 | 29.80 | 45.86 | 211.69 | 13.09 |
| Average computation time (s) | | 0 | 53.4 | 169.2 | 17.6 | 12.1 | 40.9 | 13.5 | 3.73 | 2.75 |
The bold items correspond to the best denoising performance.
Table 3. The AUC of the restoration results of the Indian Pines and GF-5 data.
| AUC | NLM3D | BM4D | LRMR | GLF | SSK | SRSSHF | PSA | PTMA |
| Indian Pines | 0.940 | 0.956 | 0.939 | 0.823 | 0.915 | 0.763 | 0.936 | 0.968 |
| GF-5 | 0.963 | 0.949 | 0.958 | 0.962 | 0.968 | 0.959 | 0.973 | 0.979 |
The bold items correspond to the highest target detection accuracy.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
