Article

Mixed Noise Estimation Model for Optimized Kernel Minimum Noise Fraction Transformation in Hyperspectral Image Dimensionality Reduction

1 University of Chinese Academy of Sciences, Beijing 100049, China
2 Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
3 Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
4 Department of Photogrammetry and Remote Sensing, Finnish Geospatial Research Institute, FI-02430 Masala, Finland
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(13), 2607; https://doi.org/10.3390/rs13132607
Submission received: 16 June 2021 / Revised: 29 June 2021 / Accepted: 30 June 2021 / Published: 2 July 2021
(This article belongs to the Section Remote Sensing Image Processing)

Abstract:
Dimensionality reduction (DR) is of great significance for simplifying and optimizing hyperspectral image (HSI) features. As a widely used DR method, kernel minimum noise fraction (KMNF) transformation preserves the high-order structures of the original data perfectly. However, the conventional KMNF noise estimation (KMNF-NE) uses the local regression residual of neighbourhood pixels, which depends heavily on spatial information. Due to the limited spatial resolution, there are many mixed pixels in HSI, making KMNF-NE unreliable for noise estimation and leading to poor performance in KMNF for classification on HSIs with low spatial resolution. In order to overcome this problem, a mixed noise estimation model (MNEM) is proposed in this paper for optimized KMNF (OP-KMNF). The MNEM adopts the sequential and linear combination of the Gaussian prior denoising model, median filter, and Sobel operator to estimate noise. It retains more details and edge features, making it more suitable for noise estimation in KMNF. Experiments using several HSI datasets with different spatial and spectral resolutions are conducted. The results show that, compared with some other DR methods, the improvement of OP-KMNF in average classification accuracy is up to 4%. To improve the efficiency, the OP-KMNF was implemented on graphics processing units (GPU) and sped up by about 60× compared to the central processing unit (CPU) implementation. The outcome demonstrates the significant performance of OP-KMNF in terms of classification ability and execution efficiency.

Graphical Abstract

1. Introduction

Hyperspectral remote sensing combines imaging technology and spectroscopy technology to obtain continuous and narrow-band image data with high spectral resolution [1], which improves the ability to monitor the Earth’s systems and human activities [2,3]. However, the high-dimensional data obtained by the hyperspectral sensor are challenging to analyse and apply [4,5]. In hyperspectral image (HSI) classification, with limited training samples, the classifier performance first improves as the dimension increases but degrades when the dimension is higher than an optimal value (the Hughes phenomenon) [6,7]. Dimensionality reduction (DR) is a powerful tool to solve this problem and has become a critical step in HSI processing tasks because of its effectiveness at simplifying and optimizing HSI features [8].
DR methods are generally divided into two major categories: band selection and feature extraction. Band selection methods select a subset of spectral features from the original data [9,10]. These methods can be split into six groups [11]: ranking-based methods [12,13,14], search-based methods [15,16], clustering-based methods [17,18,19], sparsity-based methods [20,21,22], embedding-learning-based methods [11], and hybrid-scheme-based methods [23,24,25]. Although band selection retains valuable bands for subsequent processing, the algorithms have a large computational burden and are often not robust in complex scenes. Moreover, it is difficult to determine the optimal number of bands [11]. Unlike band selection, feature extraction methods transform the original data into an optimized feature space through mathematical manipulation [26]. These methods can be divided into two classes. The first class consists of linear feature extraction methods such as non-negative matrix factorization (NMF) [27], non-negative matrix underapproximation (NMU) [28], linear discriminant analysis (LDA) [29], principal component analysis (PCA) [30], and minimum noise fraction (MNF) transformation [31]. The second class consists of non-linear feature extraction methods, which can be categorized into three groups: manifold-learning-based methods [32,33,34], graph-theory-based methods [35], and kernel-based methods [36,37]. NMF preserves the non-negativity of HSIs in low-dimensional space by imposing a non-negativity constraint. NMU is closely related to NMF but enables a recursive procedure for NMF and has the advantage of identifying features sequentially. However, both NMF and NMU ignore the intrinsic geometric structure of the data, which makes it difficult to obtain the desired results when the data are non-linear [38].
LDA is a practical subspace technique that mathematically maps the original high-dimensional space to a lower-dimensional space in which class separation is maximized [29]. However, it requires the data to be labelled first, which increases the workload of the DR process. PCA is an orthogonal linear transformation based on information content, yet when noise is unevenly distributed across bands, it cannot guarantee that the components are arranged according to image quality [31]. In contrast, after MNF transformation the components are sorted by image quality regardless of how the noise is distributed in each band [31].
Due to factors such as atmospheric absorption and scattering, and the interaction of terrain objects within a pixel during the imaging process, non-linear characteristics are inherent in HSI [39]. The linear feature extraction methods mentioned above cannot adequately handle these non-linear characteristics. Kernel MNF (KMNF) transformation was proposed to solve this problem in MNF. As a kernel-based method, KMNF adopts kernel functions to transform the original data from the input space to a high-dimensional feature space in which the data become linearly separable, so that non-linear features can be effectively extracted. However, the kernel functions in KMNF increase the computational complexity and reduce the execution efficiency [38,40,41]. Although KMNF is widely used in non-linear feature extraction, it has been shown that KMNF cannot produce the expected results because of inaccurate noise estimation [5,26,42,43]. The conventional MNF noise estimation (MNF-NE) and KMNF noise estimation (KMNF-NE) utilize the residual of a local regression fitting 3 × 3 neighbourhood pixels to a paraboloid or a plane, which depends heavily on the relationship between adjacent pixels [38,44]. The pixels obtained by hyperspectral sensors are mixtures of the terrain objects located in the field of view. Due to the limited spatial resolution, there are many mixed pixels in HSI, making the spatial information unreliable for noise estimation [26]. To solve this problem in MNF, Gao et al. proposed optimized MNF (OMNF), which employs spectral and spatial decorrelation (SSDC) to produce more precise noise estimation and significant improvements in MNF for classification [2,42,45,46]. Similarly, Gao et al. proposed an optimized KMNF (OKMNF), which uses SSDC to improve the noise estimation during KMNF [26]. It was found that OKMNF is unsuitable for processing HSIs with complex scenes because of the weakness of SSDC in estimating noise. Zhao et al. introduced image segmentation to improve the performance of KMNF, combining spectral decorrelation with the spatial information of homogeneous regions generated by image segmentation to estimate noise. It produced better results than KMNF but was more time-consuming [5].
Owing to instrumental noise and atmospheric effects, HSIs often suffer from degradations during the imaging process caused by a mixture of noise types, including thermal noise, quantization noise, and sparse noise. Thermal noise and quantization noise in HSI are subject to additive Gaussian noise, which is signal-independent. Due to instrument imperfections, sparse noise such as missing pixels, salt-and-pepper noise, and other outliers also often exists in HSI [47]. In view of the inaccurate noise estimation in KMNF and considering the types of noise in HSIs, this paper proposes a mixed noise estimation model (MNEM) during KMNF for an optimized KMNF (OP-KMNF). Instead of using the residual of a local regression in the nearest neighbourhood pixels, the MNEM adopts a sequential and a linear combination of the Gaussian prior denoising model, the median filter, and the Sobel operator to estimate noise during KMNF. To improve the execution efficiency, OP-KMNF is implemented on graphics processing units (GPU), achieving significant acceleration compared to the central processing unit (CPU) implementation.
It is worthwhile highlighting several aspects of the proposed method here: (1) Aiming at the problem of inaccurate noise estimation during KMNF transformation, a new noise estimation method, MNEM, is proposed. It is more effective and more robust, retains more details and edge features, and is more suitable than KMNF-NE and SSDC for noise estimation in KMNF. Experimental results on multiple hyperspectral datasets (the Indian Pines, Salinas, and Xiong’an datasets) show that OP-KMNF outperforms other traditional DR methods (LDA, PCA, MNF, OMNF, factor analysis (FA), kernel PCA (KPCA), KMNF, OKMNF, and locality preserving projections (LPP)) by up to 4% in average classification accuracy. (2) Different DR methods have different applicable conditions. In this paper, the adaptability of OP-KMNF and the other DR methods (LDA, PCA, MNF, OMNF, FA, KPCA, KMNF, OKMNF, and LPP) to hyperspectral images with different spatial and spectral resolutions is assessed. The experimental results show the adaptability and robustness of the OP-KMNF transformation. (3) To address the low execution efficiency of the OP-KMNF algorithm caused by introducing the kernel function, we introduce GPU-based parallel computing and test the execution efficiency of the algorithm on two different computing systems. The execution efficiency speeds up by about 60× compared to the CPU implementation. Moreover, the computational cost of OP-KMNF is analysed in detail, which provides a reference for implementing a real-time OP-KMNF algorithm in the future.
The remainder of this paper is organized as follows. The procedures of MNEM and OP-KMNF transformation are described in detail in Section 2. The experiments and results for the proposed methods in this paper and other traditional DR methods are shown in Section 3. Section 4 presents the performance of the proposed methods in classification and execution efficiency quantitatively. The conclusions are given in Section 5.

2. Proposed Method

For optical images, it is generally assumed that signal and noise are independent of each other; that is, an HSI is composed of signal and noise [37]:

$y(x) = y_S(x) + y_N(x)$, (1)

where $y(x)$ is the pixel vector at location $x$, and $y_S(x)$ and $y_N(x)$ are the signal and noise components of $y(x)$. A hyperspectral image $Y$ containing $n$ pixels and $b$ bands can be expressed as a matrix with $n$ rows and $b$ columns.
The covariance matrix $\Sigma_Y$ of image $Y$ can be expressed as the sum of the signal covariance matrix $\Sigma_S$ and the noise covariance matrix $\Sigma_N$, as in Equation (2):

$\Sigma_Y = \Sigma_S + \Sigma_N$ (2)

Treating $\bar{y}_i$ as the average value of the $i$th band, a matrix $Y_{mean}$ with $n$ rows and $b$ columns can be obtained, which can be expressed as follows:

$Y_{mean} = \begin{bmatrix} \bar{y}_1 & \bar{y}_2 & \cdots & \bar{y}_b \\ \vdots & \vdots & \ddots & \vdots \\ \bar{y}_1 & \bar{y}_2 & \cdots & \bar{y}_b \end{bmatrix}$ (3)
Then, we centre the matrix $Y$, with the centred matrix $Y_C$ given by:

$Y_C = Y - Y_{mean}$ (4)

The covariance matrix $\Sigma_Y$ of $Y$ can be expressed as follows:

$\Sigma_Y = Y_C^T Y_C / (n - 1)$ (5)
Similarly, $\Sigma_S$ and $\Sigma_N$ can be obtained.
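As an illustrative sketch (not the authors' implementation), the mean-subtraction covariance computation described above can be written in a few lines of NumPy:

```python
import numpy as np

def centered_covariance(Y):
    """Covariance of an n-pixel, b-band image matrix Y via mean subtraction:
    subtract the per-band means (the rows of Y_mean), then form
    Y_C^T Y_C / (n - 1)."""
    n = Y.shape[0]
    Y_C = Y - Y.mean(axis=0, keepdims=True)   # centred matrix Y_C = Y - Y_mean
    return Y_C.T @ Y_C / (n - 1)

# sanity check against numpy's own covariance estimator
Y = np.random.rand(100, 5)
assert np.allclose(centered_covariance(Y), np.cov(Y, rowvar=False))
```

Applying the same routine to the noise matrix in place of $Y$ yields the noise covariance.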
The noise fraction $NF$, which is shown in Equation (6), is defined as the ratio of the noise covariance to the image covariance:

$NF = a^T \Sigma_N a / a^T \Sigma_Y a = a^T Y_N^T Y_N a / a^T Y^T Y a$, (6)

where $a$ is the eigenvector matrix of $NF$. Minimizing $NF$ is equivalent to maximizing $1/NF$:

$1/NF = a^T \Sigma_Y a / a^T \Sigma_N a = a^T Y^T Y a / a^T Y_N^T Y_N a$ (7)
The dual formulation is obtained by re-parameterizing with $a = Y^T b$:

$1/NF = b^T Y Y^T Y Y^T b / b^T Y Y_N^T Y_N Y^T b$. (8)
Introducing a non-linear mapping $\Phi: y \rightarrow \Phi(y)$, the kernelized $1/NF$ can be expressed as follows:

$1/K\_NF = b^T \Phi(Y) \Phi(Y)^T \Phi(Y) \Phi(Y)^T b / b^T \Phi(Y) \Phi(Y_N)^T \Phi(Y_N) \Phi(Y)^T b$ (9)

$1/K\_NF = b^T K^2 b / b^T K_N K_N^T b$, (10)

where $\Phi(Y)$ is the matrix after the mapping of $Y$, $\Phi(Y_N)$ is the matrix after the mapping of $Y_N$, $K = \Phi(Y) \Phi(Y)^T$, and $K_N = \Phi(Y) \Phi(Y_N)^T$.
In KMNF, in order to extract components arranged by image quality, we need to maximize $1/K\_NF$ according to Equation (10) [26]. Maximizing $1/K\_NF$ amounts to maximizing a Rayleigh quotient, which can be solved as a generalized eigenvalue problem [5,37]. It can be written as follows:
$K^2 b = \lambda K_N K_N^T b$ (11)

$K^2 b = \lambda (K_N K_N^T)^{1/2} (K_N K_N^T)^{1/2} b$ (12)

$(K_N K_N^T)^{-1/2} K^2 (K_N K_N^T)^{-1/2} [(K_N K_N^T)^{1/2} b] = \lambda [(K_N K_N^T)^{1/2} b]$, (13)

where $\lambda$ is an eigenvalue of $(K_N K_N^T)^{-1/2} K^2 (K_N K_N^T)^{-1/2}$ and $(K_N K_N^T)^{1/2} b$ is the corresponding eigenvector. The matrix $b$ is then obtained from this formula.
As mentioned above, $a = Y^T b$; after the non-linear mapping, $Y^T b$ is transformed to $\Phi(Y)^T b$. The feature extraction result $R$ can be obtained by:

$R = \Phi(Y) a = \Phi(Y) \Phi(Y)^T b = K b$. (14)
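A minimal NumPy sketch of the eigenproblem and projection described above, assuming the Gram matrices are given; the small ridge `eps` is an added assumption to keep the toy problem well conditioned and is not part of the derivation:

```python
import numpy as np

def kmnf_projection(K, K_N, n_components, eps=1e-6):
    """Sketch of the symmetrized KMNF eigenproblem: given Gram matrices
    K = Phi(Y) Phi(Y)^T and K_N = Phi(Y) Phi(Y_N)^T, eigendecompose the
    symmetrized matrix and return the feature-extraction result R = K b."""
    n = K.shape[0]
    A = K_N @ K_N.T + eps * np.eye(n)                  # ridge for stability (assumption)
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T   # (K_N K_N^T)^(-1/2)
    M = A_inv_sqrt @ (K @ K) @ A_inv_sqrt
    lam, U = np.linalg.eigh(M)                         # ascending eigenvalues
    U = U[:, ::-1][:, :n_components]                   # leading eigenvectors (max 1/K_NF)
    b = A_inv_sqrt @ U                                 # b = (K_N K_N^T)^(-1/2) eigenvectors
    return K @ b                                       # R = K b

# toy example with symmetric positive semidefinite Gram matrices
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 6))
N = rng.standard_normal((20, 6))
K = X @ X.T
K_N = X @ N.T
R = kmnf_projection(K, K_N, 3, eps=1e-3)
assert R.shape == (20, 3) and np.all(np.isfinite(R))
```

In practice the kernels would be evaluated with a kernel function (e.g. RBF) on pixel samples; that step is omitted here.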
The above analysis shows that the calculation accuracy of $1/K\_NF$ mainly depends on the noise estimation results, so noise estimation is a key step in the KMNF transformation. KMNF-NE uses the residual of a local regression fitting a 3 × 3 neighbourhood to a paraboloid or a plane, which can be regarded as a filtering problem [37,44]. However, recent studies show that space-based noise estimation methods like KMNF-NE are data-selective and unstable [46].
Considering the types of noise in HSIs, we propose an MNEM that combines a Gaussian prior denoising model, median filter, and Sobel operator to estimate noise during KMNF.

2.1. Mixed Noise Estimation Model

The Gaussian prior denoising model learns the Gaussian model from the image itself, which enables it to better adapt to the data and achieve better denoising results in the images mixed with Gaussian noise [48].
For images mixed with Gaussian noise, the noise model can be expressed as follows:
$Z_G = S_G + N_G$, (15)

where $Z_G$ is an image mixed with Gaussian noise, $S_G$ is the clean image, and $N_G$ is the noise, which follows a Gaussian distribution.
In the Gaussian prior model, the clean image patch is assumed to satisfy a Gaussian distribution, that is, $S_G \sim N(\mu, \Sigma)$; the denoised result $R_G$ with Gaussian parameters can be obtained by maximizing the posterior probability $p$ as in Equation (16) [48]:

$R_G = \arg\max_{S_G} p(S_G \mid Z_G)$
$= \arg\max_{S_G} p(Z_G \mid S_G) \, p(S_G)$
$= \arg\max_{S_G} \frac{1}{(2\pi\sigma^2)^{d/2}} \exp\left(-\frac{1}{2\sigma^2} \|Z_G - S_G\|_2^2\right) \times \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2} (S_G - \mu)^T \Sigma^{-1} (S_G - \mu)\right)$
$= \arg\min_{S_G} \frac{1}{2\sigma^2} \|Z_G - S_G\|_2^2 + \frac{1}{2} (S_G - \mu)^T \Sigma^{-1} (S_G - \mu)$, (16)

where $S_G$ is the clean image patch, $\mu$ is the mean of the clean image patch, $\Sigma$ is the covariance of the clean image patch, and $\sigma$ is the standard deviation of the Gaussian noise.
In Equation (16), the second line follows from the first by Bayes' theorem, and the last line is obtained by taking the negative logarithm of the third line. The result depends on the balance between the first term (fidelity term) and the second term (prior term). When $\Sigma$ is a positive semidefinite matrix, Equation (16) can be regarded as a convex optimization problem [48]. Thus, setting the derivative to 0, we obtain Equation (17):

$R_G = (\Sigma + \sigma^2 I)^{-1} (\Sigma Z_G + \sigma^2 \mu)$. (17)

Then, the Gaussian noise mixed in the image can be expressed as in Equation (18):

$N_G = Z_G - R_G$. (18)
Recent works show the validity of this method for filtering out Gaussian noise in HSI [48].
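The closed form of Equation (17) and the residual of Equation (18) can be sketched directly in NumPy; this assumes the patch Gaussian $(\mu, \Sigma)$ has already been learned, which is the part of [48] not shown here:

```python
import numpy as np

def gaussian_prior_denoise(Z, mu, Sigma, sigma):
    """MAP denoiser of Equation (17): R_G = (Sigma + sigma^2 I)^-1
    (Sigma Z + sigma^2 mu), for a vectorized patch Z of dimension d.
    Returns the denoised patch and the Gaussian noise estimate (Eq. (18))."""
    d = Z.shape[0]
    R = np.linalg.solve(Sigma + sigma**2 * np.eye(d), Sigma @ Z + sigma**2 * mu)
    return R, Z - R

# sanity checks: with zero observation noise the patch is returned unchanged;
# with very large noise the estimate shrinks towards the prior mean mu = 0
d = 4
Sigma = np.eye(d)
mu = np.zeros(d)
Z = np.array([1.0, -2.0, 0.5, 3.0])
R0, N0 = gaussian_prior_denoise(Z, mu, Sigma, sigma=0.0)
assert np.allclose(R0, Z)
Rbig, _ = gaussian_prior_denoise(Z, mu, Sigma, sigma=100.0)
assert np.all(np.abs(Rbig) < np.abs(Z))
```

The two limiting cases checked at the end mirror the fidelity/prior balance discussed for Equation (16).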
The median filter, a typical non-linear filter for signal restoration based on order statistics, replaces the value of a pixel with the median grayscale value in an N × N neighbourhood. It preserves details well while filtering out noise (especially impulse noise) but cannot effectively maintain the edge features of the image.
As a widely used edge detection algorithm, the Sobel operator produces a better edge detection result and has a smooth suppression effect on noise. The Sobel operator is a discrete difference operator. Applying this operator to any pixels in the image produces the corresponding gray vector or its normal vector [49].
The Sobel operator adopts the spatial neighbourhood (3 × 3) of an image to detect edge information, as shown in the following equations:

$NSX_{x,y,i} = Z_{x+1,y-1,i} + 2 Z_{x+1,y,i} + Z_{x+1,y+1,i} - Z_{x-1,y-1,i} - 2 Z_{x-1,y,i} - Z_{x-1,y+1,i}$ (19)

$NSY_{x,y,i} = Z_{x-1,y-1,i} + 2 Z_{x,y-1,i} + Z_{x+1,y-1,i} - Z_{x-1,y+1,i} - 2 Z_{x,y+1,i} - Z_{x+1,y+1,i}$ (20)

$NS_{x,y,i} = \sqrt{NSX_{x,y,i}^2 + NSY_{x,y,i}^2}$, (21)

where $Z_{x,y,i}$ is the value of the pixel located at line $x$, column $y$, and band $i$ of the hyperspectral image $Z_S$; $NSX_{x,y,i}$ is the transverse edge detection value; $NSY_{x,y,i}$ is the longitudinal edge detection value; and $NS_{x,y,i}$ is the edge detection value of the Sobel operator.
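The per-band Sobel edge magnitude defined above can be sketched with NumPy slicing; this toy version handles only interior pixels and omits any border padding:

```python
import numpy as np

def sobel_band(Z):
    """Sobel edge magnitude for one band Z (2-D array), interior pixels only.
    NSX is the transverse response, NSY the longitudinal response."""
    NSX = (Z[2:, :-2] + 2 * Z[2:, 1:-1] + Z[2:, 2:]
           - Z[:-2, :-2] - 2 * Z[:-2, 1:-1] - Z[:-2, 2:])
    NSY = (Z[:-2, :-2] + 2 * Z[1:-1, :-2] + Z[2:, :-2]
           - Z[:-2, 2:] - 2 * Z[1:-1, 2:] - Z[2:, 2:])
    return np.sqrt(NSX**2 + NSY**2)          # combined edge magnitude

# a constant band has no edges; a step edge produces a strong response
flat = np.ones((5, 5))
assert np.allclose(sobel_band(flat), 0.0)
step = np.zeros((5, 6)); step[:, 3:] = 1.0
assert sobel_band(step).max() > 0.0
```

Applying this band by band yields the edge map used in the MNEM combinations.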
As mentioned above, HSIs often suffer from annoying degradations by a mixture of various types of noise that include thermal noise, quantization noise, and sparse noise [47]. According to the types of mixed noise in HSIs, the MNEM, which combines a Gaussian prior denoising model, median filter, and Sobel operator, is proposed in this paper to estimate noise.
The flowchart of the noise estimation process is shown in Figure 1. It adopts two different ways of using multiple filters. In image denoising, a filter can suppress or filter out different types of noise in the image. A linear combination processing is introduced in this paper to combine multiple filters for noise estimation. The linear combination processing method is the MNEM-Ratio shown in Figure 1. The sequential processing method is the MNEM-Order shown in Figure 1. First, the median filter is used to filter out the impulse noise in each band of HSIs. Second, considering that the median filter cannot retain the edge features of the image well, the Sobel operator is used for edge detection to enhance the edge information. Finally, the prior Gaussian denoising model is utilized to suppress the Gaussian noise, and the residual between the denoised image and the original image is calculated to estimate the noise in HSIs.
For the MNEM-Ratio, the mean spectral angle distance (MSAD), which is expressed as in Equation (24), is used as the evaluation criterion to determine $P_M$, $P_S$, and $P_G$. In evaluating noise estimation performance, the smaller the value of MSAD, the better the denoised result. The procedure is shown in Algorithm 1.
Algorithm 1. The procedure for determining $P_M$, $P_S$, and $P_G$ in MNEM-Ratio
Input: hyperspectral image Y.
Step 1: input Y into the median filter to obtain the median filter denoised image Y_Median.
Step 2: Sobel operator is used in Y to get the Sobel denoised image Y_DeNoise_Sobel = Y − Y_Sobel.
Step 3: Y is input into the Gaussian prior denoising model to obtain the denoised image Y_Gaussian.
Step 4: calculate the MSAD values MSAD_Median, MSAD_Sobel, and MSAD_Gaussian between the input image and the denoised images Y_Median, Y_DeNoise_Sobel, and Y_Gaussian.
Step 5: take the reciprocals, that is, $MSAD_M = 1/MSAD\_Median$, $MSAD_S = 1/MSAD\_Sobel$, and $MSAD_G = 1/MSAD\_Gaussian$.
Step 6: $P_M = MSAD_M / (MSAD_M + MSAD_S + MSAD_G)$, $P_S = MSAD_S / (MSAD_M + MSAD_S + MSAD_G)$, and $P_G = MSAD_G / (MSAD_M + MSAD_S + MSAD_G)$.
Output: the values of $P_M$, $P_S$, and $P_G$.
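Steps 5 and 6 of Algorithm 1 reduce to normalized reciprocal MSAD weights, which can be sketched as:

```python
def mnem_ratio_weights(msad_median, msad_sobel, msad_gaussian):
    """Steps 5-6 of Algorithm 1: weight each denoiser by the reciprocal of
    its MSAD, so the denoiser with the smallest (best) MSAD receives the
    largest weight; the three weights sum to 1."""
    inv = [1.0 / msad_median, 1.0 / msad_sobel, 1.0 / msad_gaussian]
    total = sum(inv)
    return tuple(v / total for v in inv)     # (P_M, P_S, P_G)

# example MSAD values are illustrative, not taken from the paper
P_M, P_S, P_G = mnem_ratio_weights(0.5, 1.0, 0.25)
assert abs(P_M + P_S + P_G - 1.0) < 1e-12
assert P_G > P_M > P_S                       # smallest MSAD, largest weight
```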
To evaluate the noise estimation performance of the MNEM, we apply the proposed method to each band in HSI. The following evaluation criteria are used [50,51]:
1. Mean peak signal-to-noise ratio (MPSNR)
$MPSNR = \frac{1}{M} \sum_{m=1}^{M} PSNR_m$, (22)

where $PSNR_m$ is the PSNR in band $m$, and $M$ is the number of HSI bands.
2. Mean structure similarity (MSSIM)
$MSSIM = \frac{1}{M} \sum_{m=1}^{M} SSIM_m$, (23)

where $SSIM_m$ is the structural similarity (SSIM) in band $m$, and $M$ is the number of HSI bands.
3. Mean spectral angle distance (MSAD)
$MSAD = \frac{1}{xSize \times ySize} \sum_{p=1}^{xSize \times ySize} \frac{180}{\pi} \arccos \frac{(z^p)^T \cdot r^p}{\|z^p\|_2 \cdot \|r^p\|_2}$, (24)

where MSAD is calculated between the original image pixel $z^p$ and the denoised image pixel $r^p$ and then averaged over all pixels in the spatial domain. The larger the values of MPSNR and MSSIM and the smaller the value of MSAD, the better the denoised results.
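Equation (24) can be implemented vectorized over all pixels; a sketch with the spectra stored row-wise:

```python
import numpy as np

def msad(Z, R):
    """Mean spectral angle distance (Equation (24)) in degrees between
    original pixels Z and denoised pixels R, both (n_pixels, n_bands)."""
    num = np.sum(Z * R, axis=1)                        # (z^p)^T r^p
    den = np.linalg.norm(Z, axis=1) * np.linalg.norm(R, axis=1)
    # clip guards arccos against rounding just outside [-1, 1]
    angles = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return angles.mean()

# identical spectra give a zero angle; orthogonal spectra give 90 degrees
Z = np.array([[1.0, 0.0], [0.0, 2.0]])
assert abs(msad(Z, Z)) < 1e-6
assert abs(msad(np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])) - 90.0) < 1e-6
```

Note that MSAD is scale-invariant per pixel, so it complements the intensity-based MPSNR.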

2.2. Optimized Kernel Minimum Noise Fraction (OP-KMNF) Transformation

Instead of using the residual of a local regression in the nearest neighbourhood pixels, we adopt MNEM-Ratio and MNEM-Order to estimate noise during KMNF, yielding OP-KMNF. The OP-KMNF-Ratio and OP-KMNF-Order transformation procedures are shown in Algorithm 2 and Algorithm 3.
Algorithm 2. The procedure of OP-KMNF-Ratio
Input: hyperspectral image Y
Step 1: input Y into the median filter to obtain Y_Median, and then Noise_Median = Y − Y_Median.
Step 2: Sobel operator is used in Y to get Noise_Sobel = Y_Sobel.
Step 3: Y is input into the Gaussian prior denoising model to obtain Y_Gaussian, and then
Noise_Gaussian = Y − Y_Gaussian.
Step 4: noise estimation: MNEM-Ratio = $P_M$ × Noise_Median + $P_S$ × Noise_Sobel + $P_G$ × Noise_Gaussian.
Step 5: transformation and kernelization of the noise fraction $1/NF$ according to Equation (10).
Step 6: calculate the eigenvectors of $(K_N K_N^T)^{-1/2} K^2 (K_N K_N^T)^{-1/2}$, and obtain the matrix $b$.
Step 7: map all pixels onto the transformation matrix using Equation (14).
Output: hyperspectral image feature extraction result R .
Algorithm 3. The procedure of OP-KMNF-Order
Input: hyperspectral image Y
Step 1: input Y into the median filter to obtain Y_Median.
Step 2: Sobel operator is used in Y_Median to get Noise_Sobel, and then
    Y_Sobel = Y_Median + Noise_Sobel.
Step 3: Y_Sobel is input into the Gaussian prior denoising model to obtain Y_Gaussian, and then
    MNEM-Order = Y − Y_Gaussian.
Step 4: transformation and kernelization of noise fraction 1 / N F according to Equation (10).
Step 5: calculate the eigenvectors of $(K_N K_N^T)^{-1/2} K^2 (K_N K_N^T)^{-1/2}$, and obtain the matrix $b$.
Step 6: map all pixels onto the transformation matrix using Equation (14).
Output: hyperspectral image feature extraction result   R .
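The sequential noise-estimation stage of Algorithm 3 can be sketched for a single band using SciPy's stock median and Sobel filters; the Gaussian-prior stage is passed in as a callable because its patch model is learned separately, and this is an illustration rather than the authors' implementation:

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def mnem_order_noise(Y, gaussian_denoise):
    """Sketch of MNEM-Order for one band Y: median filter (Step 1), Sobel
    edge re-injection (Step 2), Gaussian-prior denoising (Step 3); the noise
    estimate is the residual Y - Y_Gaussian."""
    Y_median = median_filter(Y, size=3)                 # suppress impulse noise
    noise_sobel = np.hypot(sobel(Y_median, axis=0),     # edge magnitude map
                           sobel(Y_median, axis=1))
    Y_sobel = Y_median + noise_sobel                    # re-inject edge detail
    Y_gaussian = gaussian_denoise(Y_sobel)              # Gaussian-prior stage
    return Y - Y_gaussian                               # MNEM-Order residual

# with an identity Gaussian stage, a clean constant band yields zero noise
flat = np.full((6, 6), 5.0)
assert np.allclose(mnem_order_noise(flat, lambda x: x), 0.0)
```

Looping this over bands and stacking the residuals gives the noise matrix $Y_N$ consumed by Step 4 of Algorithm 3.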

2.3. Graphics Processing Units (GPU)-Based Parallel Computing

Due to the kernel functions in KMNF, the computational complexity increases and the execution efficiency decreases. Recently, parallel processing has attracted increasing attention because of its powerful data processing capacity. The main parallel platforms include the CPU, field programmable gate arrays (FPGA), and graphics processing units (GPU). Compared to FPGA and CPU, the GPU excels at floating-point operations and is more flexible to program [52,53].
Basic linear algebra subroutines (BLAS) are a set of kernels that provide a standard builder for executing basic vector and matrix operations [54]. BLAS specifications are widely used to develop high-quality linear algebra software, such as LAPACK and FLAME [55,56]. Based on the GPU parallel platform, NVIDIA computing unified device architecture (CUDA) provides a BLAS library (called CUBLAS) [57]. Recent works show the superior performance of CUBLAS in processing matrix and vector operations [54,57,58]. An analysis of OP-KMNF shows voluminous matrix and vector operations in the program, so this paper realizes the parallel processing of OP-KMNF based on the CUBLAS of GPU.
In this paper, the parallel computing performance of OP-KMNF is evaluated in two different computing systems. Computing system No. 1 consists of an Intel Core i5-4460 CPU at 3.20 GHz with four cores and an NVIDIA GeForce GTX745 GPU card, and computing system No. 2 consists of an Intel Core i7-10750H CPU at 2.60 GHz with six cores and an NVIDIA GeForce RTX2060 GPU card. The hardware specifications of NVIDIA GeForce GTX745 and NVIDIA GeForce RTX2060 are shown in Table 1.

3. Results

The experiments were conducted on three datasets (the Salinas, Indian Pines, and Xiong’an datasets), introduced in Section 3.1. All DR algorithms were implemented in Visual Studio 2015, with OpenCV 3.1.0 used as the dependency library for the median filter and the Sobel operator. The first experiment used MPSNR, MSSIM, and MSAD as evaluation criteria to evaluate the denoised results of the MNEM adopted in OP-KMNF; the results are shown in Section 3.2. The second experiment took the average accuracy as the evaluation criterion to assess the performance of OP-KMNF with a support vector machine (SVM) classifier. The SVM classifier with a radial basis function (RBF) kernel was applied in this experiment. In all experiments, 25% of the samples were randomly selected for training and the rest were employed for testing, and ten-fold cross-validation was used to find the best parameters for the SVM. The experimental results are given in Section 3.3. To verify the adaptability of OP-KMNF to HSIs with different spatial and spectral resolutions, pixel merging and band merging were performed on the Xiong’an dataset to obtain images with different spatial and spectral resolutions. The experimental results for HSIs with different spatial resolutions are given in Section 3.4, and those for different spectral resolutions in Section 3.5. The last experiment verified the execution efficiency of OP-KMNF implemented on GPU by increasing the data volume in processing, and the computational costs of OP-KMNF were analysed in detail for a data volume of 400 × 400 × 250. The results are given in Section 3.6. To ensure the reliability of the experimental results, each experiment was conducted five times, and the average values and confidence intervals are reported in this paper.
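The classification protocol just described (RBF-kernel SVM, 25% training split, ten-fold cross-validation for parameter selection) can be sketched with scikit-learn; the parameter grid and the random stand-in features below are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((300, 9))                    # stand-in for extracted DR features
y = np.arange(300) % 3                      # stand-in for three class labels

# 25% of samples for training, the rest for testing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.25, random_state=0)

# ten-fold cross-validation over an assumed RBF-SVM parameter grid
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": ["scale", 0.1, 1.0]},
                    cv=10)
grid.fit(X_tr, y_tr)
accuracy = grid.score(X_te, y_te)           # average accuracy on the test split
```

Real experiments would feed in the features produced by each DR method and repeat the run five times, as the section describes.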

3.1. Input Data

3.1.1. Salinas Dataset

The Salinas dataset is an image of the Salinas Valley, California, USA, collected by the airborne visible/infrared imaging spectrometer (AVIRIS). It consists of 512 × 217 pixels with a spatial resolution of 3.7 m and originally has 224 spectral bands. In the experiments, we used 204 bands after discarding the 108th–112th, 154th–167th, and 224th bands because of the water absorption phenomenon. The pseudo-colour image and the ground truth classification map of Salinas are shown in Figure 2.

3.1.2. Indian Pines Dataset

Indian Pines is the earliest dataset used for hyperspectral image classification, collected by AVIRIS in Indiana, USA. It consists of 145 × 145 pixels with a spatial resolution of 20 m and originally has 220 spectral bands. In the experiments, we used 200 bands after discarding the 104th–108th, 150th–163rd, and 220th bands because of the water absorption phenomenon. The pseudo-colour image and the ground truth classification map of Indian Pines are shown in Figure 3.

3.1.3. Xiong’an Dataset

The Xiong’an dataset was collected in the Xiong’an New Area, Hebei Province, China by the airborne multi-modular imaging spectrometer (AMMIS), a next-generation Chinese airborne hyperspectral imager. Funded by the China High-Resolution Earth Observation Project, AMMIS was designed and manufactured by the Shanghai Institute of Technical Physics, Chinese Academy of Sciences. The dataset consists of 512 × 512 pixels and 250 spectral bands, with a wavelength range of 0.4–1.0 μm and a spatial resolution of 0.5 m [59,60,61,62,63]. The pseudo-colour image and the ground truth classification map of Xiong’an are shown in Figure 4.

3.2. Experiments on Noise Estimation

The denoised results of KMNF-NE, SSDC, and MNEM were compared, and the MPSNR, MSSIM, and MSAD values on the Salinas, Indian Pines, and Xiong’an datasets are shown in Table 2. To visualize the results, the denoised results for band 5 of the Salinas dataset, band 187 of the Indian Pines dataset, and band 15 of the Xiong’an dataset are shown in Figure 5, Figure 6 and Figure 7.
In this section, experiments were conducted to assess the noise estimation performance of the MNEM. As mentioned above, the larger the values of MPSNR and MSSIM and the smaller the value of MSAD, the better the denoised results. The experimental results show that MNEM is more effective and more robust and retains more details and edge features than KMNF-NE and SSDC. Therefore, we adopt MNEM as the noise estimation method during KMNF for OP-KMNF.

3.3. Experiments on OP-KMNF

The numbers of samples and training samples of the Salinas, Indian Pines, and Xiong’an datasets are listed in Table 3 and Table 4. The average accuracies of different DR methods for SVM classification on the Salinas, Indian Pines, and Xiong’an datasets are shown in Table 5. Taking the feature extracted by each method in the Xiong’an dataset (number of features = 9) as an example, the statistics on the classification accuracies of each class are given in Table 6. In order to visualize the classification results, the SVM classification results of the Indian Pines, Salinas, and Xiong’an datasets after different DR methods (number of features = 5) are shown in Figure 8, Figure 9 and Figure 10.
In this section, SVM is used as the classification algorithm, and the average accuracy is used to assess the results. Experimental results on multiple hyperspectral datasets (the Indian Pines, Salinas, and Xiong’an datasets) show that OP-KMNF outperforms the other traditional DR methods (LDA, PCA, MNF, OMNF, FA, KPCA, KMNF, OKMNF, and LPP) in terms of average classification accuracy. The improvements of OP-KMNF in average accuracy on the Salinas, Indian Pines, and Xiong’an datasets were 2.33%, 1.85%, and 4.54%, respectively, and, compared with KMNF, the improvements were 2.42%, 4.92%, and 7.64%. Taking the features extracted by each method on the Xiong’an dataset (number of features = 9) as an example, the classification accuracy of each class was assessed. In the classification of Corn, Robinia, Populus, and Sophora japonica, OP-KMNF showed a significant improvement over the other methods.

3.4. Adaptability of OP-KMNF to Hyperspectral Images with Different Spatial Resolutions

HSIs with different spatial resolutions are obtained after pixel merging on the Xiong’an dataset. We treat the Xiong’an dataset as Spatial_resolution_1, and for each band, four adjacent pixels are averaged to obtain Spatial_resolution_2, and eight adjacent pixels are averaged to obtain Spatial_resolution_3. The spatial resolution of Spatial_resolution_1 is 0.5 m, the spatial resolution of Spatial_resolution_2 is 1 m, and the spatial resolution of Spatial_resolution_3 is 2 m. The pseudo-colour images and the ground truth classification map are shown in Figure 11.
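The per-band pixel merging used to build the coarser-resolution datasets amounts to block averaging; a sketch, with the block size `k` as a parameter (edge remainders are simply cropped, which is an assumption about the preprocessing rather than a detail stated in the paper):

```python
import numpy as np

def merge_pixels(band, k):
    """Average k x k blocks of pixels in one band, coarsening the spatial
    resolution by a factor of k (e.g. k = 2 turns 0.5 m pixels into 1 m)."""
    H, W = band.shape
    band = band[:H - H % k, :W - W % k]      # crop edge remainders
    return band.reshape(H // k, k, W // k, k).mean(axis=(1, 3))

band = np.arange(16, dtype=float).reshape(4, 4)
out = merge_pixels(band, 2)
assert out.shape == (2, 2)
assert np.isclose(out[0, 0], (0 + 1 + 4 + 5) / 4)   # mean of the top-left block
```

Applying the same routine to every band of the cube produces the lower-resolution datasets used in this section.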
The numbers of samples and training samples used in the Xiong’an dataset with different spatial resolutions are listed in Table 7. The average accuracies of different DR methods for SVM classification are shown in Figure 12.
In this section, experiments were conducted to verify the adaptability of OP-KMNF to HSIs with different spatial resolutions. In the experiments on Spatial_resolution_1, Spatial_resolution_2, and Spatial_resolution_3, OP-KMNF showed better performance than the other DR methods tested (LDA, PCA, MNF, OMNF, FA, KPCA, KMNF, OKMNF, and LPP).

3.5. Adaptability of OP-KMNF to Hyperspectral Images with Different Spectral Resolutions

HSIs with different spectral resolutions were obtained by band merging on the Xiong’an dataset. We treat the original Xiong’an dataset as Spectral_resolution_1; for each pixel, two adjacent bands are averaged to obtain Spectral_resolution_2, and four adjacent bands are averaged to obtain Spectral_resolution_3. The spectral resolutions of Spectral_resolution_1, Spectral_resolution_2, and Spectral_resolution_3 are 2.4 nm, 4.8 nm, and 9.6 nm, respectively. The pseudo-colour images and the ground truth classification map are shown in Figure 13.
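The band-merging step is the same kind of averaging applied along the spectral axis. A minimal NumPy sketch, assuming the band count is divisible by the merging factor (names are illustrative):

```python
import numpy as np

def spectral_bin(cube, factor=2):
    """Average `factor` adjacent bands for every pixel.

    `cube` is a (rows, cols, bands) array; the band count is assumed to be
    divisible by `factor`.
    """
    r, c, b = cube.shape
    # Group adjacent bands, then average within each group.
    return cube.reshape(r, c, b // factor, factor).mean(axis=3)
```

Averaging pairs of 2.4 nm bands in this way, for example, yields 4.8 nm bands with half the spectral channels.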
The numbers of samples and training samples used in the Xiong’an dataset with different spectral resolutions are listed in Table 8. The average accuracies of different DR methods for SVM classification are shown in Figure 14.
In this section, experiments were conducted to verify the adaptability of OP-KMNF to HSIs with different spectral resolutions. In the experiments on Spectral_resolution_1, Spectral_resolution_2, and Spectral_resolution_3, OP-KMNF showed better performance than the other DR methods tested (LDA, PCA, MNF, OMNF, FA, KPCA, KMNF, OKMNF, and LPP).

3.6. GPU Implementation of OP-KMNF

Table 9 shows the runtimes and speedups of OP-KMNF before and after parallel computing on two different computing systems with different data volumes. The CPU runtime was tested on computing system No. 1. Taking the processing of a data volume of 400 × 400 × 250 as an example, we detail the computational costs of OP-KMNF; the results are shown in Table 10.
In this section, experiments were conducted to compare the time consumption of OP-KMNF before and after parallel computing with identical data volumes on two different computing systems. The experimental results show that implementing OP-KMNF in parallel on the GPU leads to a significant improvement in efficiency. In the experiments on a data volume of 400 × 400 × 250, OP-KMNF-Order and OP-KMNF-Ratio sped up by 63.95× and 64.27×, respectively. From the computational costs of OP-KMNF with a data volume of 400 × 400 × 250, it can be seen that the noise estimation of OP-KMNF-Order and OP-KMNF-Ratio sped up by 27.86× and 27.10× on computing system No. 1 and by 37.34× and 38.86× on computing system No. 2. For the KMNF transformation, the execution sped up by 174.02× on computing system No. 1 and 559.69× on computing system No. 2.

4. Discussion

In this section, we discuss the performance of the MNEM, OP-KMNF, and the GPU-based parallel computing proposed in this paper. The results are given in Section 3.
We conducted experiments to assess the noise estimation performance of the MNEM, and the results demonstrate that the MNEM is more effective, more robust, and retains more details and edge features than KMNF-NE. From the experimental results on the Salinas dataset (spatial resolution: 3.7 m), Indian Pines dataset (spatial resolution: 20 m), and Xiong’an dataset (spatial resolution: 0.5 m), the improvements of the MNEM in MPSNR, MSSIM, and MSAD were greatest for the Indian Pines dataset, which shows that the MNEM outperformed KMNF-NE and SSDC on a dataset with limited spatial resolution and many mixed pixels. As mentioned above, noise estimation is a crucial component of KMNF, so we adopted the MNEM for noise estimation during KMNF to form OP-KMNF.
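As described earlier, the MNEM combines a Gaussian prior denoising model, a median filter, and a Sobel operator. The following NumPy sketch illustrates only the median-residual and Sobel noise components and a linear combination of them; the coefficients `p_m` and `p_s` are illustrative placeholders, and the Gaussian-prior term of the actual MNEM is omitted:

```python
import numpy as np

def shifts3x3(band):
    """All nine 3x3-neighbourhood shifts of a 2D band (edge-padded)."""
    p = np.pad(band, 1, mode="edge")
    h, w = band.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def noise_median(band):
    """Noise component: residual of a 3x3 median filter."""
    return band - np.median(shifts3x3(band), axis=0)

def noise_sobel(band):
    """Edge/noise component: Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    s = shifts3x3(band)  # shift index k = 3*i + j
    gx = sum(kx[i, j] * s[3 * i + j] for i in range(3) for j in range(3))
    gy = sum(kx.T[i, j] * s[3 * i + j] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

def mixed_noise(band, p_m=0.5, p_s=0.5):
    # Illustrative linear combination of the two spatial components;
    # the Gaussian-prior denoising term of the MNEM is not modelled here.
    return p_m * noise_median(band) + p_s * noise_sobel(band)
```

On a perfectly flat band both components vanish, while on a linear ramp the median residual stays zero and only the Sobel term responds, which is why combining them retains edge information that a pure smoothing residual would treat as noise.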
We validated the performance of the DR methods using SVM classification, with the average accuracy used to assess the results. Experimental results obtained using three real hyperspectral datasets (the Indian Pines, Salinas, and Xiong’an datasets) showed that (1) compared with the other DR methods (LDA, PCA, MNF, OMNF, FA, KPCA, KMNF, OKMNF, and LPP), OP-KMNF led to significant improvements in average classification accuracy; (2) extracting more features does not necessarily yield higher classification accuracy, so the number of features should be selected to achieve the highest classification accuracy; (3) in practice, FA can be used as a dimensionality reduction method and, in many cases, performs better than MNF and KMNF; and (4) the experimental results prove the importance of accurate noise estimation for the KMNF algorithm and provide a way to optimize KMNF in the future. Compared to the other nine DR algorithms under the same experimental conditions, OP-KMNF handles non-linear features well and performs better in subsequent classification processing.
To verify the adaptability of OP-KMNF to HSIs with different spatial resolutions, we obtained such HSIs by pixel merging on the Xiong’an dataset; the spatial resolutions of Spatial_resolution_1, Spatial_resolution_2, and Spatial_resolution_3 were 0.5 m, 1 m, and 2 m, respectively. Experimental results obtained using these images showed that (1) compared with the other DR methods (LDA, PCA, MNF, OMNF, FA, KPCA, KMNF, OKMNF, and LPP), OP-KMNF led to significant improvements in average classification accuracy on HSIs with different spatial resolutions; (2) in most cases, the classification performance of the DR methods (LDA, PCA, MNF, OMNF, FA, KPCA, KMNF, OKMNF, LPP, and OP-KMNF) improved with decreasing spatial resolution; and (3) the MNEM adapts well to HSIs with different spatial resolutions during KMNF, and the lower the spatial resolution, the greater the performance improvement of OP-KMNF over KMNF.
To verify the adaptability of OP-KMNF to HSIs with different spectral resolutions, we obtained such HSIs by band merging on the Xiong’an dataset; the spectral resolutions of Spectral_resolution_1, Spectral_resolution_2, and Spectral_resolution_3 were 2.4 nm, 4.8 nm, and 9.6 nm, respectively. Experimental results obtained using these images showed that (1) compared with the other DR methods (LDA, PCA, MNF, OMNF, FA, KPCA, KMNF, OKMNF, and LPP), OP-KMNF led to significant improvements in average classification accuracy on HSIs with different spectral resolutions; (2) the classification performance of LDA and LPP improved with decreasing spectral resolution, while that of KMNF and OKMNF worsened; and (3) the MNEM adapts well to HSIs with different spectral resolutions during KMNF, and the lower the spectral resolution, the greater the performance improvement of OP-KMNF over KMNF.
In practice, both performance and efficiency need to be considered. We introduced GPU-based parallel computing to improve the execution efficiency of OP-KMNF and compared the time consumption before and after parallel computing under the same data volumes on two different computing systems. The results showed that the parallel implementation on GPU led to a significant improvement in efficiency. The analysis of the experimental results in Table 9 and Table 10 showed that (1) the larger the data volume, the more significant the acceleration effect; (2) when the data volume was 400 × 400 × 251, the execution efficiency sped up by about 60× compared to the CPU implementation; and (3) as parallel computing systems develop, the execution efficiency of the algorithm will improve further.
From the experimental results, we can see that MNEM-Order and MNEM-Ratio have different adaptability to different datasets. On the Salinas dataset, MNEM-Ratio shows better noise estimation performance, while on the Indian Pines and Xiong’an datasets, MNEM-Order performs better. In most cases, OP-KMNF-Ratio achieves higher average classification accuracy on the Salinas dataset, while OP-KMNF-Order performs better when the number of features is 3. Similar results also appeared on the Indian Pines and Xiong’an datasets. The reason for this phenomenon is not yet fully understood.
To further analyse the causes of this phenomenon, the PSNR and SSIM values of each band of the Salinas, Indian Pines, and Xiong’an datasets are shown in Figure 15 and Figure 16. It can be seen that each noise estimation method has different adaptability to different bands. For KMNF transformation, extracting more features does not necessarily yield higher classification accuracy; moreover, the noise fraction matrix of the HSI is processed as a whole during the KMNF transformation, and the impact of the noise estimation result of each band on the final result has not been quantitatively assessed. All of these factors affect the final results. In future work, we will perform more in-depth research to explain this phenomenon.
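The per-band PSNR reported in Figure 15 follows the standard definition PSNR = 10·log10(peak²/MSE). A minimal sketch, with the peak-value handling an assumption since the paper's exact normalization is not restated here:

```python
import numpy as np

def band_psnr(reference, estimate, peak=None):
    """Peak signal-to-noise ratio of one band, in dB.

    `peak` defaults to the maximum of the reference band; pass an explicit
    value (e.g. 1.0 for normalized reflectance) if a fixed dynamic range
    is intended.
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    peak = reference.max() if peak is None else peak
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return np.inf  # identical bands
    return 10.0 * np.log10(peak ** 2 / mse)
```

Evaluating this band by band (rather than as a single cube-wide mean) is what exposes the per-band differences in adaptability discussed above.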

5. Conclusions

The MNEM proposed in this paper estimates the noise of HSIs more effectively, is more robust, and retains more details and edge features than KMNF-NE, further optimizing the DR performance of KMNF in terms of classification. In consideration of the computational cost, we introduced GPU-based parallel computing to OP-KMNF. Experimental results show that the algorithm handles non-linear features well and adapts well to HSIs with different spatial and spectral resolutions. It is also superior to other commonly used DR algorithms in terms of classification. The experiments on two different computing systems proved that the execution efficiency improves further as parallel systems develop, which points toward real-time dimensionality reduction for HSIs. In the future, the performance of OP-KMNF in other applications (e.g., anomaly detection) and the implementation of its real-time algorithm will be studied.

Author Contributions

Conceptualization, T.X. and Y.W.; methodology, T.X.; software, T.X.; validation, T.X. and J.J.; formal analysis, T.X.; investigation, T.X.; resources, Y.W.; data curation, T.X.; writing—original draft preparation, T.X.; writing—review and editing, T.X., Y.C., J.J. and M.W.; visualization, R.G., T.W. and X.D.; supervision, Y.W. and Y.C.; project administration, Y.W.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (61627804). This research was also financially supported by the Academy of Finland projects “Ultrafast Data Production with Broadband Photodetectors for Active Hyperspectral Space Imaging” (336145) and “Forest-Human-Machine Interplay—Building Resilience, Redefining Value Networks and Enabling Meaningful Experiences (UNITE)” (337656), and by the Strategic Research Council project “Competence-Based Growth Through Integrated Disruptive Technologies of 3D Digitalization, Robotics, Geospatial Information and Image Processing/Computing—Point Cloud Ecosystem” (314312). The Chinese Academy of Sciences (181811KYSB20160040) and Huawei (9424877) are also acknowledged.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Xiong’an dataset used here can be downloaded from http://www.hrs-cas.com/a/share/shujuchanpin/01/05/2019/1049.html, accessed on 20 June 2021.

Acknowledgments

We are grateful to the State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, which provided the Xiong’an hyperspectral dataset for this research.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Flowchart of Noise Estimation. P_M, P_S, and P_G are the coefficients of Noise_Median, Noise_Sobel, and Noise_Gaussian in the mixed noise estimation model (MNEM)-Ratio.
Figure 2. (a) The pseudo-colour image of Salinas; (b) ground truth classification map of Salinas.
Figure 3. (a) The pseudo-colour image of Indian Pines; (b) ground truth classification map of Indian Pines.
Figure 4. (a) The pseudo-colour image of Xiong’an; (b) ground truth classification map of Xiong’an.
Figure 5. The denoising results of band 5 in Salinas. (a) Original image; (b) denoised result of KMNF-NE; (c) denoised result of SSDC; (d) denoised result of MNEM-Order; (e) denoised result of MNEM-Ratio.
Figure 6. The denoising results of band 187 in Indian Pines. (a) Original image; (b) denoised result of KMNF-NE; (c) denoised result of SSDC; (d) denoised result of MNEM-Order; (e) denoised result of MNEM-Ratio.
Figure 7. The denoising results of band 15 in Xiong’an. (a) Original image; (b) denoised result of KMNF-NE; (c) denoised result of SSDC; (d) denoised result of MNEM-Order; (e) denoised result of MNEM-Ratio.
Figure 8. The SVM classification results of Salinas after different DR methods (number of features = 5).
Figure 9. The SVM classification results of Indian Pines after different DR methods (number of features = 5).
Figure 10. The SVM classification results of Xiong’an after different DR methods (number of features = 5).
Figure 11. (a) The pseudo-colour image of Spatial_resolution_1 (0.5 m); (b) The pseudo-colour image of Spatial_resolution_2 (1 m); (c) The pseudo-colour image of Spatial_resolution_3 (2 m); (d) ground truth classification map.
Figure 12. Comparison of the average accuracies of SVM classification on (a) Spatial_resolution_1, (b) Spatial_resolution_2, and (c) Spatial_resolution_3 after using different DR methods.
Figure 13. (a) The pseudo-colour image of Spectral_resolution_1 (2.4 nm); (b) The pseudo-colour image of Spectral_resolution_2 (4.8 nm); (c) The pseudo-colour image of Spectral_resolution_3 (9.6 nm); (d) ground truth classification map.
Figure 14. Comparison of the average accuracies of SVM classification on Spectral_resolution_1, Spectral_resolution_2, and Spectral_resolution_3 after using different DR methods.
Figure 15. The peak signal-to-noise ratio (PSNR) of each band on the (a) Salinas, (b) Indian Pines, and (c) Xiong’an datasets.
Figure 16. The structure similarity (SSIM) of each band on the (a) Salinas, (b) Indian Pines, and (c) Xiong’an datasets.
Table 1. Hardware parameters of the NVIDIA GeforceGTX745 and the NVIDIA GeforceRTX2060.
| Parameter | GeForce GTX 745 | GeForce RTX 2060 |
|---|---|---|
| CUDA Cores | 384 | 1920 |
| Global Memory | 4096 MBytes | 6144 MBytes |
| Shared Memory | 49,152 bytes | 49,152 bytes |
| Constant Memory | 65,536 bytes | 65,536 bytes |
| Clock Rate | 1.03 GHz | 1.20 GHz |
| Memory Bus Width | 128 bit | 192 bit |
Table 2. Mean peak signal-to-noise ratio (MPSNR), mean structure similarity (MSSIM) and mean spectral angle distance (MSAD) of different methods on Salinas, Indian Pines, and Xiong’an datasets.
| Method | MPSNR | MSSIM | MSAD |
|---|---|---|---|
| Salinas | | | |
| KMNF-NE | 38.43 | 0.9889 | 2.3579 |
| SSDC | 33.73 | 0.9377 | 8.1005 |
| MNEM-Order | 26.97 | 0.9734 | 2.3828 |
| MNEM-Ratio | 43.69 | 0.9985 | 0.4701 |
| Indian Pines | | | |
| KMNF-NE | 29.78 | 0.9487 | 2.1644 |
| SSDC | 26.47 | 0.8793 | 6.0796 |
| MNEM-Order | 46.29 | 0.9794 | 0.1983 |
| MNEM-Ratio | 29.76 | 0.9519 | 0.5402 |
| Xiong’an | | | |
| KMNF-NE | 33.58 | 0.9785 | 2.0654 |
| SSDC | 37.86 | 0.9864 | 0.8181 |
| MNEM-Order | 43.71 | 0.9787 | 0.5339 |
| MNEM-Ratio | 36.96 | 0.9907 | 1.3098 |
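The MSAD column in Table 2 averages the spectral angle between each original pixel spectrum and its denoised counterpart over all pixels. A minimal sketch, assuming angles are reported in degrees:

```python
import numpy as np

def mean_spectral_angle(reference_cube, estimate_cube):
    """Mean spectral angle distance between the pixel spectra of an
    (rows, cols, bands) cube and its denoised estimate, in degrees
    (the reporting unit is an assumption)."""
    ref = reference_cube.reshape(-1, reference_cube.shape[-1]).astype(np.float64)
    est = estimate_cube.reshape(-1, estimate_cube.shape[-1]).astype(np.float64)
    dots = np.sum(ref * est, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(est, axis=1)
    # Clip to guard against rounding slightly outside [-1, 1].
    angles = np.degrees(np.arccos(np.clip(dots / norms, -1.0, 1.0)))
    return angles.mean()
```

Because the spectral angle is scale-invariant, it complements PSNR/SSIM by penalizing spectral distortion rather than amplitude error.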
Table 3. Samples and training samples used for the Salinas and Indian Pines datasets.
| Salinas Classes | Samples | Training | Indian Pines Classes | Samples | Training |
|---|---|---|---|---|---|
| Broccoli_green_weeds_1 | 2009 | 502 | Alfalfa | 46 | 12 |
| Broccoli_green_weeds_2 | 3726 | 932 | Corn_notill | 1428 | 357 |
| Fallow | 1976 | 494 | Corn_mintill | 830 | 208 |
| Fallow_rough_plow | 1394 | 349 | Corn | 237 | 59 |
| Fallow_smooth | 2678 | 670 | Grass_pasture | 483 | 121 |
| Stubble | 3959 | 990 | Grass_trees | 730 | 183 |
| Celery | 3579 | 895 | Grass_pasture_mowed | 28 | 7 |
| Grapes_untrained | 11,271 | 2818 | Hay_windrowed | 478 | 120 |
| Soil_vineyard_develop | 6203 | 1551 | Oats | 20 | 5 |
| Corn_senesced_green_weeds | 3278 | 820 | Soybean_notill | 972 | 243 |
| Lettuce_romaine_4wk | 1068 | 267 | Soybean_mintill | 2455 | 614 |
| Lettuce_romaine_5wk | 1927 | 482 | Soybean_clean | 593 | 148 |
| Lettuce_romaine_6wk | 916 | 229 | Wheat | 205 | 51 |
| Lettuce_romaine_7wk | 1070 | 268 | Woods | 1265 | 316 |
| Vineyard_untrained | 7268 | 1817 | Buildings_Grass_Trees_Drives | 386 | 97 |
| Vineyard_vertical_trellis | 1807 | 452 | Stone_Steel_Towers | 93 | 23 |
Table 4. Samples and training samples used for the Xiong’an dataset.
| Classes | Samples | Training |
|---|---|---|
| Corn | 84,496 | 21,124 |
| Soybean | 10,562 | 2641 |
| Pear_trees | 1303 | 326 |
| Grassland | 27,703 | 6926 |
| Sparsewood | 9292 | 2323 |
| Robinia | 25,761 | 6440 |
| Paddy | 30,029 | 7507 |
| Populus | 5534 | 1384 |
| Sophora japonica | 811 | 203 |
| Peach_trees | 1498 | 375 |
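The training counts in Tables 3 and 4 are consistent with drawing roughly 25% of each class's labelled pixels. The exact selection protocol is not stated in this section, so the per-class random split below is only a plausible sketch; `stratified_training_mask`, its `fraction`, and the background-label convention are illustrative assumptions:

```python
import numpy as np

def stratified_training_mask(labels, fraction=0.25, seed=0):
    """Pick a per-class random subset of labelled pixels as training
    samples. fraction=0.25 matches the roughly 25% per-class split seen
    in Tables 3 and 4 (the paper's actual protocol is not given here)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).ravel()
    mask = np.zeros(labels.size, dtype=bool)
    for cls in np.unique(labels):
        if cls == 0:          # 0 = unlabelled background, by convention
            continue
        idx = np.flatnonzero(labels == cls)
        n_train = max(1, int(round(fraction * idx.size)))
        mask[rng.choice(idx, size=n_train, replace=False)] = True
    return mask
```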
Table 5. The average accuracies of support vector machine (SVM) classification on the Salinas, Indian Pines, and Xiong’an datasets after applying different dimensionality reduction (DR) methods. All results are percentages.
Salinas

| Number of Features | 3 | 4 | 5 | 15 | 25 | 35 | 45 | 55 | 65 |
|---|---|---|---|---|---|---|---|---|---|
| PCA | 82.04 ± 2.13 | 85.07 ± 1.50 | 86.53 ± 0.95 | 90.54 ± 1.71 | 90.93 ± 1.55 | 90.85 ± 1.71 | 91.02 ± 1.75 | 91.23 ± 1.57 | 91.22 ± 1.44 |
| MNF | 88.42 ± 1.44 | 89.15 ± 1.64 | 89.22 ± 1.44 | 92.30 ± 0.97 | 92.80 ± 1.22 | 92.74 ± 1.62 | 92.59 ± 1.65 | 92.53 ± 1.59 | 92.40 ± 1.69 |
| OMNF | 88.55 ± 1.30 | 89.27 ± 1.55 | 89.31 ± 1.31 | 92.30 ± 1.17 | 92.78 ± 1.30 | 92.74 ± 1.60 | 92.54 ± 1.66 | 92.42 ± 1.79 | 92.29 ± 1.85 |
| FA | 85.87 ± 0.21 | 89.04 ± 1.58 | 89.02 ± 1.63 | 91.66 ± 1.65 | 92.15 ± 1.78 | 92.01 ± 2.08 | 93.12 ± 1.44 | 93.80 ± 0.97 | 93.76 ± 0.84 |
| KPCA | 86.14 ± 0.61 | 88.37 ± 0.67 | 88.48 ± 0.84 | 90.26 ± 1.61 | 90.85 ± 1.75 | 91.09 ± 1.87 | 91.37 ± 1.84 | 91.45 ± 1.96 | 91.40 ± 1.95 |
| KMNF | 88.42 ± 1.44 | 89.15 ± 1.64 | 89.22 ± 1.44 | 92.30 ± 0.97 | 92.80 ± 1.22 | 92.74 ± 1.62 | 92.60 ± 1.64 | 92.54 ± 1.58 | 92.41 ± 1.68 |
| OKMNF | 87.32 ± 1.19 | 88.55 ± 1.13 | 88.43 ± 1.05 | 92.01 ± 1.14 | 92.97 ± 1.32 | 93.21 ± 1.29 | 93.25 ± 1.31 | 93.59 ± 1.46 | 93.66 ± 1.26 |
| LDA | 86.27 ± 1.84 | 86.72 ± 1.47 | 88.61 ± 1.85 | 90.79 ± 2.12 | 91.46 ± 1.96 | 91.37 ± 1.88 | 91.35 ± 1.83 | 91.49 ± 1.51 | 91.48 ± 1.53 |
| LPP | 86.30 ± 0.96 | 88.86 ± 1.43 | 89.22 ± 1.43 | 91.36 ± 2.28 | 91.94 ± 1.90 | 92.05 ± 2.00 | 91.91 ± 1.88 | 91.88 ± 1.81 | 91.85 ± 2.04 |
| OP-KMNF-Ratio | 87.25 ± 0.46 | 91.12 ± 1.62 | 91.64 ± 1.70 | 93.92 ± 1.16 | 94.25 ± 1.20 | 94.50 ± 1.36 | 94.63 ± 1.59 | 94.61 ± 1.72 | 94.69 ± 1.81 |
| OP-KMNF-Order | 89.86 ± 0.45 | 90.80 ± 1.66 | 90.66 ± 1.44 | 93.27 ± 0.98 | 94.02 ± 1.21 | 94.36 ± 1.62 | 94.23 ± 1.64 | 94.12 ± 1.59 | 94.09 ± 1.67 |

Indian Pines

| Number of Features | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|
| PCA | 36.45 ± 3.49 | 43.97 ± 3.43 | 44.92 ± 1.79 | 44.83 ± 1.46 | 49.87 ± 1.05 | 52.91 ± 2.93 | 53.65 ± 1.98 |
| MNF | 55.33 ± 0.04 | 57.84 ± 1.32 | 58.88 ± 0.79 | 59.66 ± 2.25 | 60.97 ± 0.85 | 61.85 ± 1.38 | 60.94 ± 2.20 |
| OMNF | 54.49 ± 0.59 | 57.06 ± 1.40 | 58.40 ± 1.06 | 59.38 ± 2.59 | 60.97 ± 1.70 | 61.83 ± 1.21 | 60.94 ± 1.91 |
| FA | 47.14 ± 0.52 | 54.49 ± 0.94 | 55.94 ± 0.59 | 57.40 ± 3.57 | 59.22 ± 2.67 | 61.41 ± 0.12 | 61.13 ± 1.54 |
| KPCA | 36.36 ± 0.63 | 39.69 ± 0.61 | 41.55 ± 0.69 | 43.85 ± 0.76 | 47.01 ± 0.23 | 47.60 ± 0.12 | 52.86 ± 0.42 |
| KMNF | 52.25 ± 1.18 | 55.05 ± 0.70 | 55.83 ± 1.38 | 58.54 ± 0.43 | 56.87 ± 0.09 | 58.38 ± 0.04 | 60.90 ± 2.30 |
| OKMNF | 32.67 ± 0.42 | 43.99 ± 1.79 | 43.51 ± 1.75 | 46.30 ± 0.31 | 49.55 ± 1.68 | 51.30 ± 2.17 | 54.47 ± 2.10 |
| LDA | 31.92 ± 0.98 | 39.32 ± 0.12 | 49.51 ± 1.83 | 52.44 ± 0.28 | 54.86 ± 1.15 | 54.90 ± 0.60 | 55.57 ± 1.17 |
| LPP | 36.55 ± 0.68 | 42.49 ± 0.26 | 44.94 ± 1.32 | 55.86 ± 5.62 | 54.76 ± 3.73 | 58.20 ± 2.78 | 59.26 ± 0.62 |
| OP-KMNF-Ratio | 56.00 ± 0.19 | 58.54 ± 0.20 | 59.54 ± 0.64 | 55.99 ± 0.35 | 56.54 ± 1.36 | 60.80 ± 1.04 | 60.71 ± 0.87 |
| OP-KMNF-Order | 53.72 ± 0.20 | 57.25 ± 1.04 | 56.88 ± 0.08 | 61.50 ± 1.47 | 61.79 ± 0.52 | 61.90 ± 0.29 | 62.98 ± 1.27 |

Xiong’an

| Number of Features | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|
| PCA | 31.05 ± 1.78 | 31.70 ± 1.91 | 36.03 ± 2.64 | 36.82 ± 2.97 | 37.00 ± 2.75 | 44.25 ± 3.04 | 46.36 ± 1.94 |
| MNF | 38.12 ± 0.45 | 51.98 ± 1.37 | 55.93 ± 2.62 | 60.17 ± 1.69 | 62.83 ± 0.85 | 64.11 ± 1.95 | 67.03 ± 0.90 |
| OMNF | 38.15 ± 0.51 | 53.39 ± 1.47 | 58.54 ± 2.63 | 60.01 ± 1.77 | 62.94 ± 0.81 | 63.98 ± 1.98 | 66.91 ± 0.88 |
| FA | 40.39 ± 2.56 | 51.82 ± 1.94 | 54.59 ± 2.14 | 57.03 ± 1.55 | 57.88 ± 1.24 | 58.74 ± 0.97 | 59.54 ± 1.14 |
| KPCA | 29.60 ± 2.54 | 34.12 ± 1.40 | 34.75 ± 1.90 | 34.83 ± 1.64 | 35.58 ± 1.71 | 37.97 ± 1.77 | 38.34 ± 1.65 |
| KMNF | 43.82 ± 2.30 | 51.27 ± 2.16 | 54.37 ± 2.40 | 58.61 ± 1.56 | 61.71 ± 2.15 | 60.57 ± 1.87 | 63.38 ± 2.11 |
| OKMNF | 39.42 ± 1.74 | 53.34 ± 1.32 | 58.98 ± 1.48 | 59.61 ± 1.66 | 61.41 ± 2.17 | 63.18 ± 0.85 | 65.71 ± 0.83 |
| LDA | 37.57 ± 2.35 | 40.93 ± 3.65 | 42.25 ± 3.21 | 43.00 ± 1.11 | 44.94 ± 2.31 | 49.51 ± 2.18 | 55.88 ± 2.19 |
| LPP | 30.03 ± 1.13 | 41.48 ± 3.86 | 46.29 ± 3.96 | 47.75 ± 4.06 | 52.96 ± 3.69 | 53.21 ± 2.32 | 54.35 ± 2.05 |
| OP-KMNF-Ratio | 43.99 ± 2.02 | 54.03 ± 1.69 | 58.47 ± 1.46 | 61.42 ± 0.99 | 64.42 ± 1.30 | 68.03 ± 0.52 | 70.14 ± 0.23 |
| OP-KMNF-Order | 48.36 ± 0.98 | 54.32 ± 1.20 | 59.53 ± 2.95 | 61.73 ± 0.75 | 62.38 ± 1.42 | 65.60 ± 2.16 | 67.35 ± 1.97 |
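Each entry in Table 5 is a mean ± standard deviation over repeated classification runs. Neither the number of repetitions nor the standard-deviation convention is stated here, so the formatter below assumes the sample form (ddof=1):

```python
import numpy as np

def summarize_runs(accuracies):
    """Report repeated-classification accuracies as 'mean ± std', the
    format used in Table 5 (sample standard deviation assumed)."""
    acc = np.asarray(accuracies, dtype=np.float64)
    return f"{acc.mean():.2f} ± {acc.std(ddof=1):.2f}"
```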
Table 6. The class accuracies of SVM classification on Xiong’an with number of features = 9. All the results are percentages.
| Method | Corn | Soybean | Pear_trees | Grassland | Sparsewood | Robinia | Paddy | Populus | Sophora japonica | Peach_trees |
|---|---|---|---|---|---|---|---|---|---|---|
| PCA | 69.72 | 28.10 | 51.11 | 39.88 | 8.09 | 69.22 | 63.74 | 13.28 | 29.10 | 91.32 |
| MNF | 79.33 | 88.54 | 77.13 | 65.15 | 47.18 | 78.34 | 75.48 | 23.49 | 48.46 | 87.25 |
| OMNF | 79.25 | 88.62 | 76.90 | 64.94 | 46.86 | 78.37 | 75.39 | 23.40 | 48.09 | 87.25 |
| FA | 74.98 | 76.97 | 73.98 | 46.20 | 29.60 | 75.40 | 66.89 | 25.03 | 36.00 | 90.39 |
| KPCA | 69.17 | 19.63 | 37.68 | 33.76 | 3.95 | 69.06 | 50.52 | 6.27 | 0.00 | 93.32 |
| KMNF | 77.40 | 83.59 | 77.21 | 61.32 | 44.64 | 77.21 | 59.73 | 31.91 | 31.44 | 89.32 |
| OKMNF | 76.54 | 83.13 | 74.52 | 52.80 | 40.17 | 83.32 | 77.75 | 32.62 | 47.84 | 88.38 |
| LDA | 73.54 | 73.69 | 70.22 | 43.43 | 22.31 | 70.98 | 72.47 | 29.80 | 12.08 | 90.25 |
| LPP | 71.13 | 63.98 | 73.98 | 46.16 | 30.28 | 68.72 | 67.16 | 32.06 | 0.00 | 89.99 |
| OP-KMNF (Ratio) | 78.17 | 87.67 | 75.98 | 52.60 | 44.19 | 80.54 | 81.14 | 42.84 | 69.30 | 88.99 |
| Rank ¹ | 3 | 3 | 4 | 5 | 4 | 2 | 1 | 1 | 1 | 7 |
| Improve ² | −1.16 | −0.87 | −1.23 | −12.55 | −2.99 | −2.77 | +3.39 | +10.22 | +20.84 | −4.33 |
| OP-KMNF (Order) | 79.42 | 84.65 | 76.82 | 45.35 | 44.07 | 83.42 | 76.39 | 43.02 | 48.71 | 91.66 |
| Rank ¹ | 1 | 3 | 4 | 7 | 5 | 1 | 2 | 1 | 1 | 2 |
| Improve ² | +0.09 | −3.89 | −0.39 | −19.80 | −3.11 | +0.10 | −1.36 | +10.40 | +0.25 | −1.66 |

¹ The rank of OP-KMNF in class accuracy. ² The improvement of OP-KMNF compared with the method that has the best class accuracy.
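The Rank and Improve rows of Table 6 are derived from the per-class accuracies as footnoted: the rank counts how many methods score strictly higher, and the improvement is the gap to the best competing method. A sketch of that bookkeeping (the function name is illustrative):

```python
def rank_and_improvement(per_class_acc, method):
    """For each class, the rank of `method` among all methods (1 = best)
    and its gap to the best competing method, as in Table 6's footnotes.
    `per_class_acc` maps method name -> list of per-class accuracies."""
    methods = list(per_class_acc)
    n_classes = len(per_class_acc[method])
    ranks, improves = [], []
    for c in range(n_classes):
        scores = {m: per_class_acc[m][c] for m in methods}
        target = scores[method]
        ranks.append(1 + sum(1 for m in methods if scores[m] > target))
        best_other = max(scores[m] for m in methods if m != method)
        improves.append(round(target - best_other, 2))
    return ranks, improves
```

For example, on the Paddy class OKMNF is the strongest competitor at 77.75%, so OP-KMNF (Ratio) at 81.14% ranks first with a +3.39 improvement, matching the table.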
Table 7. Samples and training samples used in the Xiong’an dataset with different spatial resolutions.
| Classes | Samples (0.5 m) | Training (0.5 m) | Samples (1 m) | Training (1 m) | Samples (2 m) | Training (2 m) |
|---|---|---|---|---|---|---|
| Corn | 84,496 | 21,124 | 20,728 | 5182 | 4925 | 1231 |
| Soybean | 10,562 | 2641 | 2474 | 618 | 523 | 130 |
| Pear_trees | 1303 | 326 | 312 | 78 | 71 | 18 |
| Grassland | 27,703 | 6926 | 6734 | 1683 | 1583 | 396 |
| Sparsewood | 9292 | 2323 | 2254 | 563 | 534 | 133 |
| Robinia | 25,761 | 6440 | 6274 | 1568 | 1508 | 377 |
| Paddy | 30,029 | 7507 | 7364 | 1841 | 1761 | 440 |
| Populus | 5534 | 1384 | 1345 | 336 | 318 | 80 |
| Sophora japonica | 811 | 203 | 182 | 45 | 36 | 9 |
| Peach_trees | 1498 | 375 | 348 | 87 | 73 | 18 |
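The 1 m and 2 m datasets in Table 7 contain roughly 4× and 16× fewer labelled pixels than the 0.5 m original, as expected from spatial downsampling. How the lower-resolution images were generated is not stated in this section; non-overlapping block averaging, sketched below, is one plausible scheme:

```python
import numpy as np

def block_average(cube, factor):
    """Downsample an (rows, cols, bands) cube by averaging non-overlapping
    factor x factor blocks, e.g. factor=2 to go from 0.5 m to 1 m pixels
    (a plausible sketch; the paper's actual resampling is not given here)."""
    r, c, b = cube.shape
    r2, c2 = r // factor, c // factor
    cube = cube[:r2 * factor, :c2 * factor, :]   # trim ragged edges
    return cube.reshape(r2, factor, c2, factor, b).mean(axis=(1, 3))
```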
Table 8. Samples and training samples used in the Xiong’an dataset with different spectral resolutions.
| Classes | Samples (2.4 nm) | Training (2.4 nm) | Samples (4.8 nm) | Training (4.8 nm) | Samples (9.6 nm) | Training (9.6 nm) |
|---|---|---|---|---|---|---|
| Corn | 84,496 | 21,124 | 84,496 | 21,124 | 84,496 | 21,124 |
| Soybean | 10,562 | 2641 | 10,562 | 2641 | 10,562 | 2641 |
| Pear_trees | 1303 | 326 | 1303 | 326 | 1303 | 326 |
| Grassland | 27,703 | 6926 | 27,703 | 6926 | 27,703 | 6926 |
| Sparsewood | 9292 | 2323 | 9292 | 2323 | 9292 | 2323 |
| Robinia | 25,761 | 6440 | 25,761 | 6440 | 25,761 | 6440 |
| Paddy | 30,029 | 7507 | 30,029 | 7507 | 30,029 | 7507 |
| Populus | 5534 | 1384 | 5534 | 1384 | 5534 | 1384 |
| Sophora japonica | 811 | 203 | 811 | 203 | 811 | 203 |
| Peach_trees | 1498 | 375 | 1498 | 375 | 1498 | 375 |
Table 9. The runtimes and speedups of OP-KMNF on two different computing systems.
OP-KMNF-Order

| Data Size | CPU Runtime | GPU1 Runtime | Speedup 1 | GPU2 Runtime | Speedup 2 |
|---|---|---|---|---|---|
| 100 × 100 × 250 | 307.730 s | 20.924 s | 14.92× | 12.251 s | 25.12× |
| 150 × 150 × 250 | 678.703 s | 30.373 s | 22.35× | 17.352 s | 39.11× |
| 200 × 200 × 250 | 1177.293 s | 44.535 s | 26.44× | 23.923 s | 49.21× |
| 250 × 250 × 250 | 1837.850 s | 63.312 s | 29.03× | 34.143 s | 53.83× |
| 300 × 300 × 250 | 2627.564 s | 85.893 s | 30.59× | 45.275 s | 58.04× |
| 350 × 350 × 250 | 3586.769 s | 113.238 s | 31.67× | 58.944 s | 60.85× |
| 400 × 400 × 250 | 4682.999 s | 142.582 s | 32.84× | 73.232 s | 63.95× |

OP-KMNF-Ratio

| Data Size | CPU Runtime | GPU1 Runtime | Speedup 1 | GPU2 Runtime | Speedup 2 |
|---|---|---|---|---|---|
| 100 × 100 × 250 | 317.973 s | 21.906 s | 14.52× | 12.511 s | 25.42× |
| 150 × 150 × 250 | 704.654 s | 32.255 s | 21.85× | 18.161 s | 38.80× |
| 200 × 200 × 250 | 1234.365 s | 46.222 s | 26.71× | 24.101 s | 51.22× |
| 250 × 250 × 250 | 1918.372 s | 66.053 s | 29.04× | 34.686 s | 55.31× |
| 300 × 300 × 250 | 2742.355 s | 88.857 s | 30.86× | 46.064 s | 59.53× |
| 350 × 350 × 250 | 3739.979 s | 119.276 s | 31.36× | 59.001 s | 63.39× |
| 400 × 400 × 250 | 4877.802 s | 151.954 s | 32.10× | 75.891 s | 64.27× |
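The speedup columns in Table 9 are simply the ratio of CPU runtime to GPU runtime; for example, the 400 × 400 × 250 OP-KMNF-Order row gives 4682.999 s / 142.582 s ≈ 32.84×:

```python
def speedup(cpu_seconds, gpu_seconds):
    """Speedup as reported in Table 9: CPU runtime divided by GPU runtime."""
    return cpu_seconds / gpu_seconds
```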
Table 10. The computational costs of OP-KMNF when the data volume is 400 × 400 × 250.
| Program Execution | OP-KMNF-Order CPU | OP-KMNF-Order GPU1 | OP-KMNF-Order GPU2 | OP-KMNF-Ratio CPU | OP-KMNF-Ratio GPU1 | OP-KMNF-Ratio GPU2 |
|---|---|---|---|---|---|---|
| Data reading | 29.342 s | 29.342 s | 3.365 s | 29.342 s | 29.342 s | 3.365 s |
| Noise estimation | 2300.203 s | 82.571 s | 61.605 s | 2471.537 s | 91.206 s | 63.606 s |
| KMNF transformation | 2402.198 s | 13.804 s | 4.292 s | 2402.198 s | 13.804 s | 4.292 s |
| Output | 31.525 s | 31.525 s | 0.202 s | 31.525 s | 31.525 s | 0.202 s |
Share and Cite

MDPI and ACS Style

Xue, T.; Wang, Y.; Chen, Y.; Jia, J.; Wen, M.; Guo, R.; Wu, T.; Deng, X. Mixed Noise Estimation Model for Optimized Kernel Minimum Noise Fraction Transformation in Hyperspectral Image Dimensionality Reduction. Remote Sens. 2021, 13, 2607. https://doi.org/10.3390/rs13132607

