Article

Hyperspectral Image Denoising and Classification Using Multi-Scale Weighted EMAPs and Extreme Learning Machine

1 School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China
2 School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
3 School of Mechanical & Automotive Engineering, South China University of Technology, Guangzhou 510641, China
4 GRGBanking Equipment Co., Ltd., Guangzhou 510663, China
* Author to whom correspondence should be addressed.
Electronics 2020, 9(12), 2137; https://doi.org/10.3390/electronics9122137
Submission received: 1 November 2020 / Revised: 5 December 2020 / Accepted: 10 December 2020 / Published: 14 December 2020
(This article belongs to the Section Artificial Intelligence)

Abstract

Recently, extended multi-attribute profiles (EMAPs) have attracted much attention due to their good performance in feature extraction and classification of remote sensing images. Since the EMAPs concatenate multiple attribute features without considering the pixel-based nature of Hyperspectral Image (HSI) classification, noise may be introduced and homogeneous regions may become unsmooth. To tackle this problem, we propose the weighted EMAPs (WEMAPs), based on the weighted mean filter (WMF), to reduce the noise and smoothen the homogeneous regions. We then construct multiscale WEMAPs (MWEMAPs) to produce multiscale features that capture different spatial structures of the HSI for better classification. Finally, a new joint decision fusion and feature fusion (JDFFF) framework is proposed based on decision fusion (DF) and the MWEMAPs with the extreme learning machine (ELM) classifier. That is, the classification results from various scales are combined into a final one with the ELM to perform the HSI classification. Experimental results show that the proposed algorithm significantly outperforms many state-of-the-art HSI classification algorithms.

1. Introduction

Hyperspectral images (HSIs) have been successfully applied in a wide range of applications, such as land and ocean mapping, geological analysis, brain cancer detection, mining, and precision agriculture [1,2,3]. As HSIs contain rich spectral and spatial information, each pixel in the scene can be classified [4]. However, the supervised classification of HSIs remains a challenging task due to the unbalanced ratio between the limited training samples and the large number of spectral bands, i.e., the Hughes phenomenon [5]. To this end, many advanced techniques have been proposed for feature extraction and dimensionality reduction [6,7], such as the principal component analysis (PCA) and its variations [8,9], the attribute profiles (APs) [10,11], the linear discriminant analysis (LDA) [12], the nonparametric weighted feature extraction (NWFE) [13], and the singular spectrum analysis (SSA) [14]. In addition, a number of state-of-the-art technologies have been successfully applied to HSI classification, including the support vector machine (SVM) [15], the extreme learning machine (ELM) [16], the sparse multinomial logistic regression (SMLR) and its variations [17,18], the active multi-kernel domain adaptation [1], and new parallel frameworks for auto-encoder training [19]. Among these approaches, the APs and the ELM have received much attention due to their good performance.
The APs were employed to extract the spatial information for remote sensing image classification, where the concept has also been developed into the morphological profiles (MPs) [20] and the extended morphological profiles (EMPs) [21]. The APs are found to be more useful than the MPs in extracting spatial information from high-resolution images [22]. As the original APs were extracted from each individual band, the dimensionality of the APs becomes very high if all the spectral bands of the HSI are used [22]. To this end, the extended APs (EAPs) were introduced, which extract APs only from the first few principal components of the HSI [11]. By combining different EAPs, the extended multi-attribute profiles (EMAPs) were introduced [22], which can better model the spatial information of the HSI.
The APs and their variant, the EAPs, have been widely applied to HSI classification because of their good performance. For example, the EAPs and the independent component analysis were combined for the classification of urban HSIs [23]. In [24], a generalized composite kernel based on the EMAPs was proposed to extract both the spectral and the spatial information for HSI classification. In [22], the EMAPs were combined with the random subspace for hyperspectral image classification. However, the APs and their variants, the EAPs and the EMAPs, still have some drawbacks, as explained below.
APs can be computed by applying several attribute filters, i.e., thickening and thinning operators, to an image with a group of thresholds. Consider two different cases: (1) Given a certain attribute such as the area, the AP is determined by the given thresholds, yet the resulting structure varies with the thresholds and the spectral bands of the HSIs. (2) Different attributes may carry different structural information. As the EMAPs are composed of EAPs with different attributes computed on the first several principal components of the HSIs, different attributes and thresholds may introduce noise into the HSIs. Recently, the weighted mean filters (WMFs) [3] were found to be successful in reducing the noise and smoothening the homogeneous regions in HSIs. As a result, the WMF and the EMAPs are combined in this paper into a new method, namely the weighted EMAPs (WEMAPs), for effective denoising and feature extraction in HSIs.
For the classification of HSIs, both decision-level fusion (DF) [3,25] and feature-level fusion (FF) [17,26] have been used. The FF aims to improve the discriminant ability by combining different features [26]; here, two features extracted by the WEMAPs and the WMFs are used. The DF, such as the majority voting (MV) [27], reaches a joint decision based on the probability outputs of the individual classifiers [26]. In [28,29], DF and FF were successfully combined for multimodal biometric authentication and emotion recognition. In [30], the combined DF and FF were applied to the classification of both HSI and LiDAR data. To the best of our knowledge, the work in [30] is the only one reported to use the DF and the FF simultaneously for HSI classification, by merely combining different MPs of the HSI and the LiDAR data. The HSIs may contain both small and large homogeneous regions due to their complex structures [3], yet the MPs may ignore the discriminative information of the HSIs [10]. In order to capture different spatial structures and more effectively model the discriminant information of the HSIs, a novel multiscale fusion framework, namely the joint DF and FF (JDFFF), based on both the WMF and the proposed WEMAPs is proposed in this paper for HSI classification.
Both the SVM and the ELM have been widely used for HSI classification because of their abilities in handling the Hughes phenomenon [31]. However, the SVM takes much running time to solve a large constrained optimization problem [26]. Compared to the SVM, training the ELM is simpler and faster [26], as the input weights and the biases between the input layer and the hidden layer of the ELM are randomly generated. In the proposed framework, the classification results of various scales are combined into a final one based on the proposed JDFFF. Details will be discussed in Section 3, including an overall flowchart in Figure 1. The main contributions of this work can be highlighted as follows: (1) The WEMAPs are proposed for integrating different EMAP attributes to reduce the noise and smoothen the homogeneous regions. (2) The JDFFF framework is proposed to capture different spatial structures and better model the discriminant information of the HSIs. (3) The classification results of different scales are combined into a final one using the MV within the proposed JDFFF for the HSI classification.
The rest of this paper is organized as follows: related algorithms are reviewed in Section 2 and the proposed JDFFF framework is presented in Section 3; extended experimental results and analysis are presented in Section 4; Section 5 concludes this paper with some remarks.

2. Related Works

2.1. Normalization

As an important preprocessing step for HSIs, a number of normalization approaches exist [26]. We chose the Max method for normalization for simplicity and consistency, since it is a widely used normalization method [26]. Given HSI data $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{d \times N}$, where $N$ denotes the number of pixels and $d$ the number of spectral bands, the Max normalization can be expressed as:

$$x_{ij} = x_{ij} / \max(X) \quad (1)$$

where $\max(X)$ and $x_{ij}$ are the largest value of the HSI and any pixel value of the HSI, respectively.
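As a concrete illustration, the Max normalization of Equation (1) can be sketched in a few lines (a NumPy sketch with our own naming and toy data; the paper's experiments were run in MATLAB):

```python
import numpy as np

def max_normalize(X):
    """Scale every pixel value by the global maximum of the HSI cube (Eq. 1)."""
    return X / np.max(X)

# Toy d-by-N matrix: 3 bands, 4 pixels.
X = np.array([[2.0, 4.0, 8.0, 1.0],
              [0.5, 2.0, 4.0, 0.5],
              [1.0, 1.0, 2.0, 0.25]])
X_norm = max_normalize(X)
# The largest entry of X_norm is exactly 1.
```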

2.2. Attribute Profiles (APs)

The mathematical morphology is a powerful framework for analyzing the spatial information of the HSIs [22,32,33]. Let $\gamma_R$ and $\phi_R$ be the morphological opening and closing; their operations on a grey-level image $f$ can be defined as [10]:

$$\gamma_R^i(f) = R_f^{\delta}\left(\varepsilon^i(f)\right) \quad (2)$$

$$\phi_R^i(f) = R_f^{\varepsilon}\left(\delta^i(f)\right) \quad (3)$$

where $\delta^i$ and $\varepsilon^i$ are the dilation and the erosion with a given structuring element of size $i$ ($i = 1, \ldots, n$), and $R_f^{\delta}$ and $R_f^{\varepsilon}$ are the geodesic reconstruction by dilation and by erosion, respectively [32]. Then, the MPs can be defined as follows:

$$\mathrm{MPs}(f) = \left\{\gamma_R^n(f),\ \gamma_R^{n-1}(f),\ \ldots,\ \gamma_R^1(f),\ \phi_R^1(f),\ \ldots,\ \phi_R^{n-1}(f),\ \phi_R^n(f)\right\} \quad (4)$$

Analogous to the MPs, for a given sequence of thresholds $\lambda_1, \ldots, \lambda_n$, the APs can be represented as a concatenation of a series of attribute thinning and attribute thickening operations [10,24] as follows:

$$\mathrm{APs}(f) = \left\{\check{\gamma}_R^n(f),\ \check{\gamma}_R^{n-1}(f),\ \ldots,\ \check{\gamma}_R^1(f),\ f,\ \check{\phi}_R^1(f),\ \ldots,\ \check{\phi}_R^{n-1}(f),\ \check{\phi}_R^n(f)\right\} \quad (5)$$

where $\check{\gamma}_R$ and $\check{\phi}_R$ denote the thinning and the thickening transformations, respectively.

2.3. Extreme Learning Machine (ELM)

The ELM [34,35] was originally proposed for the single-hidden-layer feedforward neural network [36] with one linear output layer and one hidden layer [26]. Let $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{d \times n}$ and $Y = [y_1, y_2, \ldots, y_n] \in \mathbb{R}^{m \times n}$ be $n$ training samples and their corresponding labels with $m$ classes. The output function of the ELM with $L$ hidden neurons can be represented by:

$$f_L(x_i) = \sum_{j=1}^{L} \beta_j h(a_j, b_j, x_i) = y_i \quad \text{for } i = 1, \ldots, n, \quad (6)$$

where $\beta_j$ is the output weight between the hidden layer and the output layer, and $h(\cdot)$ is the activation function. The above $n$ equations can be rewritten in matrix form as:

$$H\beta = Y^T, \quad (7)$$

where $\beta = [\beta_1, \ldots, \beta_L]^T \in \mathbb{R}^{L \times m}$ and

$$H = \begin{bmatrix} h(a_1, b_1, x_1) & \cdots & h(a_L, b_L, x_1) \\ \vdots & \ddots & \vdots \\ h(a_1, b_1, x_n) & \cdots & h(a_L, b_L, x_n) \end{bmatrix}. \quad (8)$$

The solution of Equation (7) of the original ELM can be expressed by:

$$\beta = H^{\dagger} Y^T, \quad (9)$$

where $H^{\dagger}$ is the Moore–Penrose generalized inverse of the matrix $H$ [37], i.e., $H^{\dagger} = H^T (HH^T)^{-1}$ or $H^{\dagger} = (H^T H)^{-1} H^T$. A positive value $1/C$ is normally added to every diagonal element of $HH^T$ or $H^T H$ in order to improve the stability and the generalization of the inverse operator [26], where $I$ is an identity matrix. Finally, Equation (6) can be rewritten as:

$$f_L(x_i) = h(x_i)\beta = h(x_i)\left(H^T H + \frac{I}{C}\right)^{-1} H^T Y^T, \quad (10)$$

or

$$f_L(x_i) = h(x_i)\beta = h(x_i) H^T \left(HH^T + \frac{I}{C}\right)^{-1} Y^T. \quad (11)$$

It should be noted that the sizes of $I$ in Equations (10) and (11) are different, depending on the dimensions of $H^T H$ and $HH^T$, respectively. Similar to the SVM [38,39], the output function of the ELM using the RBF kernel can be represented as follows:

$$f_L(x_i) = h(x_i)\beta = h(x_i) H^T \left(HH^T + \frac{I}{C}\right)^{-1} Y^T = \left[K(x_i, x_1), \ldots, K(x_i, x_N)\right] \left(\Omega_{\mathrm{ELM}} + \frac{I}{C}\right)^{-1} Y^T, \quad (12)$$

where $\Omega_{\mathrm{ELM}} = HH^T$ and $\Omega_{\mathrm{ELM}}(i, j) = h(x_i) \cdot h(x_j) = K(x_i, x_j)$.

Two well-known constrained optimization models of the improved ELM are widely used. One defines $\beta$ as in Equation (10) or Equation (11) without a kernel and introduces a regularization term against overfitting, called the generalized ELM (GELM):

$$\beta = \left(\frac{I}{C} + H^T H\right)^{-1} H^T Y^T. \quad (13)$$

The other introduces kernels into Equation (12), i.e., the kernel function is applied to the GELM for kernel data representation, called the kernel ELM (KELM):

$$\beta = \left(\frac{I}{C} + \Omega_{\mathrm{ELM}}\right)^{-1} Y^T. \quad (14)$$
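To make the closed-form training concrete, the following sketch implements the GELM of Equation (13) with a random sigmoid hidden layer in NumPy (all names, the toy data, and the parameter choices L = 50 and C = 100 are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def gelm_train(X, Y, L=50, C=100.0):
    """GELM training (Eq. 13): random input weights/biases, then a
    ridge-regularized least-squares solve for the output weights.
    X: d x n training samples; Y: m x n one-hot labels."""
    d, n = X.shape
    A = rng.standard_normal((L, d))           # random input weights a_j
    b = rng.standard_normal((L, 1))           # random biases b_j
    H = (1.0 / (1.0 + np.exp(-(A @ X + b)))).T  # n x L, one row h(x_i) per sample
    beta = np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ Y.T)  # L x m
    return A, b, beta

def gelm_predict(X, A, b, beta):
    H = (1.0 / (1.0 + np.exp(-(A @ X + b)))).T
    return np.argmax(H @ beta, axis=1)

# Toy two-class problem: two well-separated Gaussian blobs in 2-D.
X0 = rng.standard_normal((2, 40)) + np.array([[3.0], [3.0]])
X1 = rng.standard_normal((2, 40)) - np.array([[3.0], [3.0]])
X = np.hstack([X0, X1])
Y = np.zeros((2, 80)); Y[0, :40] = 1; Y[1, 40:] = 1
A, b, beta = gelm_train(X, Y)
acc = np.mean(gelm_predict(X, A, b, beta) == np.r_[np.zeros(40), np.ones(40)])
```

The single linear solve is what makes ELM training fast compared with the iterative constrained optimization of the SVM, as discussed later in the paper.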

3. The Proposed Joint Decision Fusion and Feature Fusion (JDFFF) Framework

3.1. Weighted Mean Filters (WMFs)

For $n$ training samples $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{d \times n}$ and the corresponding class labels $Y = [y_1, y_2, \ldots, y_n] \in \mathbb{R}^{m \times n}$, let $(p_i, q_i)$ be the spatial coordinate of the $i$-th training sample $x_i$. The local pixel neighborhood centered at $x_i$ can be expressed by [3]:

$$N(x_i) = \left\{x = (p, q) \mid p \in [p_i - c, p_i + c],\ q \in [q_i - c, q_i + c]\right\}, \quad (15)$$

where $c = (w_0 - 1)/2$ and $w_0$ is the odd width (scale) of the neighborhood window.

Let $s = w_0^2 - 1$ denote the total number of the neighboring pixels of $x_i$, and denote the pixels in its spatial neighborhood $N(x_i)$ by $x_i, x_{i1}, x_{i2}, \ldots, x_{is}$. The spatial WMF of a labeled pixel $x_i$ can be represented by:

$$x_i^{\mathrm{WMF}} = \frac{\sum_{x_j \in N(x_i)} v_j x_j}{\sum_{x_j \in N(x_i)} v_j} = \frac{x_i + \sum_{k=1}^{s} v_k x_{ik}}{1 + \sum_{k=1}^{s} v_k}, \quad (16)$$

where the weight $v_k = \exp\left(-\gamma \left\|x_i - x_{ik}\right\|^2\right)$ decays with the spectral distance between the center pixel and each of its neighboring pixels. According to [3], the degree of filtering $\gamma$ is set to 0.2 in this paper.
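A direct, unoptimized implementation of the WMF in Equation (16) might look as follows (a NumPy sketch; the function name and the toy cube are ours, and border pixels are simply clipped to the image bounds):

```python
import numpy as np

def wmf(cube, i, j, w0=3, gamma=0.2):
    """Weighted mean filter (Eq. 16) at pixel (i, j) of an H x W x d cube.
    The weight of each neighbor decays with its spectral distance to the
    center pixel; the center pixel itself gets weight exp(0) = 1."""
    c = (w0 - 1) // 2
    H, W, d = cube.shape
    center = cube[i, j]
    num = np.zeros(d)
    den = 0.0
    for p in range(max(0, i - c), min(H, i + c + 1)):
        for q in range(max(0, j - c), min(W, j + c + 1)):
            v = np.exp(-gamma * np.sum((center - cube[p, q]) ** 2))
            num += v * cube[p, q]
            den += v
    return num / den

# A spectrally constant 5 x 5 x 3 cube is left unchanged by the filter.
cube = np.full((5, 5, 3), 2.0)
out = wmf(cube, 2, 2)
```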

3.2. The Proposed Weighted Extended Multi-Attribute Profiles (WEMAPs)

The MPs [21] and the APs [10,11] have been successfully applied for combining both the spectral and spatial information for HSI classification [24]. The APs are acquired by applying a sequence of the attribute filters to a gray level image [10]. Since the original APs were employed to process only one spectral band, the dimensionalities of the APs will be very large if the full spectral bands of the HSIs are utilized to extract all the APs [22]. To address this problem, the first several principal components of the HSIs were used for extracting the APs to reduce the dimensionalities [11,22].
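The projection onto the first few principal components that precedes the AP extraction can be sketched as follows (a NumPy sketch; the function name and the toy data are ours, and a standard SVD-based PCA stands in for whatever PCA implementation the authors used):

```python
import numpy as np

def first_principal_components(X, p=4):
    """Project a d x N HSI matrix onto its first p principal components,
    i.e., the band-reduced representation from which the EAPs are built."""
    Xc = X - X.mean(axis=1, keepdims=True)     # center each band
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :p].T @ Xc                     # p x N component scores

# 10-band, 100-pixel toy data reduced to 3 components.
X = np.random.default_rng(1).standard_normal((10, 100))
PCs = first_principal_components(X, p=3)
```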
The EAPs with the first $p$ principal components of the HSIs can be expressed as follows:

$$\mathrm{EAPs}(f) = \left\{\mathrm{AP}(\mathrm{PC}_1),\ \mathrm{AP}(\mathrm{PC}_2),\ \ldots,\ \mathrm{AP}(\mathrm{PC}_p)\right\}. \quad (17)$$

The EMAPs are composed of $v$ EAPs with different attributes $\{1, 2, \ldots, v\}$ as:

$$\mathrm{EMAPs} = \left\{\mathrm{EAP}_1,\ \mathrm{EAP}_2,\ \ldots,\ \mathrm{EAP}_v\right\}. \quad (18)$$
Although the EMAPs lead to an increase in the feature dimensionality, they also increase the capability of extracting the spatial information compared to a single EAP [10]. However, there is a key drawback in the EMAPs: they combine multiple attribute features without considering the pixel-based nature of HSI classification. This introduces noise into the HSIs and leads to non-smooth homogeneous regions. To tackle these problems, the WEMAPs are proposed in this paper for HSI classification and are introduced in detail as follows.
Let $X^{\mathrm{EMAPs}} = [x_1^{\mathrm{EMAPs}}, x_2^{\mathrm{EMAPs}}, \ldots, x_N^{\mathrm{EMAPs}}]$ be the features of the HSIs extracted by the EMAPs and $(p_i, q_i)$ be the pixel coordinate of the sample $x_i^{\mathrm{EMAPs}}$. The local pixel neighborhood centered at $x_i^{\mathrm{EMAPs}}$ can be defined as:

$$N(x_i^{\mathrm{EMAPs}}) = \left\{x^{\mathrm{EMAPs}} = (p, q) \mid p \in [p_i - c, p_i + c],\ q \in [q_i - c, q_i + c]\right\}. \quad (19)$$

The proposed WEMAPs can be represented by:

$$x_i^{\mathrm{WEMAPs}} = \frac{\sum_{x_j^{\mathrm{EMAPs}} \in N(x_i^{\mathrm{EMAPs}})} v_j^{\mathrm{WEMAPs}}\, x_j^{\mathrm{EMAPs}}}{\sum_{x_j^{\mathrm{EMAPs}} \in N(x_i^{\mathrm{EMAPs}})} v_j^{\mathrm{WEMAPs}}} = \frac{\hat{x}_i + \sum_{k=1}^{s} \hat{v}_k \hat{x}_k}{1 + \sum_{k=1}^{s} \hat{v}_k}, \quad (20)$$

where $\hat{x}_k = x_{ik}^{\mathrm{EMAPs}}$ and $\hat{v}_k = v_k^{\mathrm{WEMAPs}} = \exp\left(-\gamma \left\|x_i^{\mathrm{EMAPs}} - x_{ik}^{\mathrm{EMAPs}}\right\|^2\right)$. From Equations (18) and (20), it can be seen that the proposed WEMAPs consider the ensemble of multiple attribute features, which can not only reduce the noise but also smoothen the homogeneous regions of the HSIs.

3.3. Feature Fusion (FF)

Since different features such as WMF and WEMAPs may represent certain characteristics and reflect various properties [26], their combination becomes a natural choice [40]. Then, it is straightforward to stack different features into a composite one. In the proposed FF, the WMFs and the proposed WEMAPs are combined as follows:
$$x_i^{\mathrm{FF}} = \left((x_i^{\mathrm{WMF}})^T,\ (x_i^{\mathrm{WEMAPs}})^T\right)^T \quad \text{for } i = 1, \ldots, N. \quad (21)$$

3.4. Joint Decision and Feature Fusion (JDFFF)

Different from the FF, the objective of the DF [27] is to reach a joint decision based on multiple classification results [26]. Based on both the FF and the DF, the JDFFF framework is proposed for performing the HSI classification. The main steps of the proposed JDFFF framework can be summarized as follows. First, the discriminant features extracted by the WMFs and the proposed WEMAPs are combined to form the multiscale features. Second, the classification results of the corresponding scales obtained by the ELMs (the GELM or the KELM) are combined into the final one using the MV within the proposed JDFFF.
For multiscale features, the widths (scales) of the neighborhood windows w 0 can be set to be 3, 5, 7, etc. Figure 1 illustrates the flowchart of the proposed JDFFF framework.
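The decision-fusion step of the JDFFF, i.e., combining the per-scale label maps by majority voting, can be sketched as follows (a NumPy sketch with our own names; ties are broken toward the smaller class index by argmax):

```python
import numpy as np

def majority_vote(label_maps):
    """Decision fusion by majority voting (MV): combine per-scale predicted
    labels (each an array of N class indices) into a single final labeling."""
    stacked = np.stack(label_maps)               # S x N, one row per scale
    n_classes = stacked.max() + 1
    # Count, for every pixel (column), how many scales voted for each class.
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, stacked)
    return np.argmax(votes, axis=0)              # most-voted class per pixel

# Three "scales" predict labels for 4 pixels; MV picks the most frequent label.
preds = [np.array([0, 1, 2, 1]),
         np.array([0, 1, 1, 1]),
         np.array([0, 2, 2, 1])]
fused = majority_vote(preds)
# fused == [0, 1, 2, 1]
```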

4. Experimental Results and Discussion

In this section, the proposed JDFFF framework will be evaluated on three well-known hyperspectral datasets, which are detailed below.
Indian Pines dataset: This dataset [18] was acquired by the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor in June 1992. It has 145 × 145 pixels with 220 bands between 400 nm and 2450 nm, covering the visible and infrared spectral regions. The spatial resolution of this dataset is 20 m. After removing 20 water absorption bands, 200 bands remain in this image. There are in total 10,366 pixels labeled in 16 classes for classification.
Pavia University dataset: This dataset [18] was collected by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor in 2002. The image contains 610 × 340 pixels with 103 valid bands after removing 12 noisy and water absorption bands. The dataset has 42,776 sample pixels labeled in 9 classes.
Kennedy Space Center (KSC): This dataset [41] was obtained by the NASA AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) instrument at the Kennedy Space Center in Florida. AVIRIS collects data in 224 bands of 10 nm width with center wavelengths between 400 nm and 2500 nm. The image has 512 × 614 pixels, and the spatial resolution of the KSC data, acquired from a height of approximately 20 km, is 18 m. After removing the water absorption and low-SNR bands, 176 bands were used for analysis. For classification, 13 classes are defined for the site, representing the various land-cover types that occur in this environment.
Relevant results are summarized and discussed as follows. In addition, all the abbreviations in this article have been listed in Table 1.

4.1. Evaluation Criteria and Parameter Settings

The parameter settings and evaluation criteria used in our experiments are discussed as follows. For both the EMAPs and the proposed WEMAPs, although many attributes can be utilized for extracting the discriminant features, according to [42], only four attributes are considered here: the area, the moment of inertia, the standard deviation, and the length of the diagonal of the bounding box. Their thresholds are selected respectively from the sets {100, 200, 500, 1000}, {20, 30, 40, 50}, {0.2, 0.3, 0.4, 0.5}, and {10, 25, 50, 100}. The number of hidden neurons of the GELM is set to 1000 according to [43]. For all the kernel-based algorithms, the radial basis function (RBF) is used, where the kernel parameter $\sigma$ and the penalty parameter $C$ are fine-tuned in the training stage. The parameter $C$ in the GELM with the composite kernels (CKs) is also fine-tuned. The parameter $\sigma$ varies in the range $\{2^{-4}, 2^{-3}, \ldots, 2^{3}, 2^{4}\}$ and the penalty parameter $C$ varies in the range $\{2^{1}, 2^{2}, \ldots, 2^{19}, 2^{20}\}$. All the above parameters are automatically optimized using three-fold cross-validation. Other parameters in the KSVM, KSVM-CK, GELM, GELM-CK, KELM and KELM-CK follow [43]. The LIBSVM toolbox in MATLAB is used for implementing the SVM algorithms [44]. The parameters for the SMLR-SPATV (the SMLR with the weighted Markov random fields) follow [45]. All the experiments are conducted in MATLAB R2015a on a computer with a 2.9 GHz CPU and 32 GB RAM. Each experiment is randomly run ten times and the average of the ten groups of results is computed for performance assessment.

4.2. Investigation on the Effect of Different Strategies

In this subsection, the performance of different methods and the denoising performance of the proposed model, obtained by introducing various levels of noise into the initial raw dataset, are investigated. In each class, 15 samples are randomly selected for training and the remaining ones are used for testing. The scales of the WMFs, WEMAPs and FF are set to 3, i.e., the parameter c in Equation (19) is set to 1. For the proposed JDFFF, the scales are set in the range from 3 to 9. We add Gaussian noise (i.i.d., zero mean with variance σ²) to the initial raw dataset and set σ = 0.02, 0.04, 0.06. Table 2 summarizes the classification results obtained from the different strategies, from which some observations can be made: (1) Adding the WMFs helps to achieve better results than the corresponding approaches without the WMFs, and the proposed WEMAPs produce better classification results than both the EMAPs and the WMFs. By combining both the WMFs and our WEMAPs, the proposed FF achieves better results than using the proposed WEMAPs alone, and the proposed JDFFF further improves the classification results. (2) When only the raw data or the WMFs are used, noise greatly affects the performance, and the greater the noise, the greater the impact. In contrast, the performances of the EMAPs, WEMAPs, FF and JDFFF are almost immune to the amount of noise in the initial raw dataset.

4.3. Investigation on the Suitability of Different Datasets

In this subsection, we study the applicability of the proposed method to different datasets. In each class, 15 samples are randomly selected for training and the remaining samples are used for testing. The scales of the WEMAPs and FF are set to 3, i.e., the parameter c in Equation (19) is set to 1. For the proposed JDFFF, the scales are set in the range from 3 to 9. Table 3 summarizes the classification results obtained from the different datasets and strategies, from which it can be observed that the proposed method can be applied to datasets of various sizes and spatial resolutions, although testing on larger datasets takes more time.

4.4. Investigation on the Effect of Scales

In this subsection, the impacts of the scales of the WMFs, the proposed WEMAPs and the FF on the GELM and the KELM are investigated. Again, 15 training samples in each class are randomly selected for training and the remaining samples are used for testing. Figure 2 shows the classification results, where the following observations can be made: the proposed WEMAPs achieve better classification accuracies than the WMFs, and the FF further improves the classification accuracies of the WEMAPs.

4.5. Classification Results and Comparisons on the Two Datasets

In this subsection, the performance of the proposed JDFFF is further evaluated by comparison with some well-known state-of-the-art approaches, where different numbers of samples Q = 5, 10, 15, 20, 25, 30 are randomly chosen from each class for training. Note that the number of training samples in each class is capped at 50% if Q exceeds half of the samples in that class. We also apply the proposed FF and JDFFF to the KSVM in order to show the good performance of the proposed frameworks. For the JDFFF, the scales range from 3 to 9; for the FF, the scale is set to 3. For performance evaluation, four metrics are used, including the overall accuracy (OA), the average accuracy (AA), the kappa statistic (k) and the standard deviation (S) [43]. Table 4 and Table 5 show the classification results for the Indian Pines dataset and the Pavia University dataset, respectively. Without loss of generality, the average computational times of all the methods with 30 training samples are also listed in the tables for comparison. From these two tables, we can make the following observations:
(1) The proposed FF achieved better classification results than the CK and the MRF methods, and was further improved by the proposed JDFFF. Furthermore, the proposed FF-GELM and FF-KELM produced better classification accuracies than FF-SVM. The same holds for the JDFFF methods, i.e., JDFFF-GELM and JDFFF-KELM outperform JDFFF-SVM.
(2) The proposed FF based on the ELM algorithms (the GELM and the KELM) requires less computational time than the other spectral-spatial SVM-based algorithms, because it inherits the fast training speed of the ELM. Likewise, the proposed JDFFF with the ELM algorithms has the advantage of less computational time compared with the other spectral-spatial SVM-based methods.
In addition, Figure 3 and Figure 4 show the results for the Indian Pines dataset and the Pavia University dataset with about 30 training samples per class.

5. Conclusions

In this paper, we propose a novel framework for HSI classification. First, an improved version of the EMAPs, called the WEMAPs, is shown to better model the discriminant information and reduce the noise in the HSIs. Second, the features extracted by the WEMAPs and the WMFs are combined to obtain a better FF. Third, the proposed multiscale fusion framework, namely the JDFFF, further improves the HSI classification results. Finally, the GELM and the KELM are applied within the proposed JDFFF for performing the HSI classification. Experimental results show the good performance of the proposed framework.
In future work, dimension reduction [46] will be applied to further reduce the computational time of the proposed JDFFF. Additionally, hyperspectral unmixing [47] will be explored to further improve the classification results.

Author Contributions

M.L., F.C. and Z.Y. implemented the proposed method, analyzed results and drafted the paper; M.L. and X.H. conceived and designed the experiments; X.H. and Z.Y. analyzed results and also revised the paper with Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Research and Development Project in Key Areas of Guangdong Province (no. 2018B010109004), the Technology Project of Guangdong Province (no. 2019A050513011), the Guangzhou Science and Technology Plan Project (no. 202002030386), and the Guangdong Graduate Education Innovation Project (no. 2020XSLT16).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deng, C.; Liu, X. Active multi-kernel domain adaptation for hyperspectral image classification. Pattern Recognit. 2018, 77, 306–315. [Google Scholar] [CrossRef] [Green Version]
  2. Torti, E.; Leon, R.; La, M. Parallel classification pipelines for skin cancer detection exploiting hyperspectral imaging on hybrid systems. Electronics 2020, 9, 1053. [Google Scholar] [CrossRef]
  3. Zhou, Y.; Peng, J.; Chen, C.L.P. Dimension reduction using spatial and spectral regularized local discriminant embedding for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1082–1095. [Google Scholar] [CrossRef]
  4. Plaza, A.; Benediktsson, J.A.; Boardman, J.W. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, S110–S122. [Google Scholar] [CrossRef]
  5. Hughes, G.F. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inform. Theory. 1968, 14, 55–63. [Google Scholar] [CrossRef] [Green Version]
  6. Song, X.; Wu, L.; Hao, H. Hyperspectral image denoising based on spectral dictionary learning and sparse coding. Electronics 2019, 8, 86. [Google Scholar] [CrossRef] [Green Version]
  7. Lin, L.; Chen, C.; Yang, J. Deep transfer HSI classification method based on information measure and optimal neighborhood noise reduction. Electronics 2019, 8, 1112. [Google Scholar] [CrossRef] [Green Version]
  8. Kang, X.; Xiang, X.; Li, S.; Benediktsson, J.A. PCA-based edge-preserving features for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7140–7151. [Google Scholar] [CrossRef]
  9. Zabalza, J.; Ren, J.; Yang, M. Novel folded-PCA for improved feature extraction and data reduction with hyperspectral imaging and SAR in remote sensing. ISPRS J. Photogramm. Remote Sens. 2014, 93, 112–122. [Google Scholar] [CrossRef] [Green Version]
  10. Mura, M.D.; Benediktsson, J.A.; Waske, B. Morphological attribute profiles for the analysis of very high resolution images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3747–3762. [Google Scholar] [CrossRef]
  11. Mura, M.D.; Benediktsson, J.A.; Waske, B. Extended profiles with morphological attribute filters for the analysis of hyperspectral data. Int. J. Remote Sens. 2010, 31, 5975–5991. [Google Scholar] [CrossRef]
  12. Friedman, J.H. Regularized discriminant analysis. J. Am. Stat. Assoc. 1989, 84, 165–175. [Google Scholar]
  13. Kuo, B.C.; Landgrebe, D.A. Nonparametric weighted feature extraction for classification. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1096–1105. [Google Scholar]
Figure 1. The flowchart of the proposed JDFFF framework for HSI classification.
Figure 2. The impact of the window scale of the WMFs, the proposed WEMAPs, and the FF on the GELM and the KELM. (a) Classification results on the Indian Pines dataset. (b) Classification results on the Pavia University dataset.
Figure 3. Results for the Indian Pines dataset with about 30 training samples per class: (a) Ground truth. (b) KSVM (OA = 74.42). (c) SVM-CK (OA = 85.93). (d) FF-SVM (OA = 90.63). (e) JDFFF-SVM (OA = 91.54). (f) SMLR-SPATV (OA = 89.10). (g) GELM (OA = 66.85). (h) KELM (OA = 74.37). (i) GELM-CK (OA = 88.86). (j) KELM-CK (OA = 91.96). (k) GELM-FF (OA = 94.23). (l) KELM-FF (OA = 94.93). (m) GELM-JDFF (OA = 94.93). (n) KELM-JDFF (OA = 95.13).
Figure 4. Results for the Pavia University dataset with about 30 training samples per class. (a) Ground truth. (b) KSVM (OA = 81.89). (c) SVM-CK (OA = 89.42). (d) FF-SVM (OA = 97.77). (e) JDFFF-SVM (OA = 98.50). (f) SMLR-SPATV (OA = 90.14). (g) GELM (OA = 79.16). (h) KELM (OA = 80.93). (i) GELM-CK (OA = 83.95). (j) KELM-CK (OA = 90.99). (k) GELM-FF (OA = 99.12). (l) KELM-FF (OA = 98.71). (m) GELM-JDFF (OA = 99.43). (n) KELM-JDFF (OA = 99.39).
Table 1. List of abbreviations used in this paper (sorted alphabetically).
Abbreviations List
AA: average accuracies
APs: attribute profiles
CKs: composite kernels
DF: decision fusion
ELM: extreme learning machine
EMAPs: extended multi-attribute profiles
FF: feature-level fusion
FF-GELM: feature-level fusion-generalized extreme learning machine
FF-KELM: feature-level fusion-kernel extreme learning machine
FF-KSVM: feature-level fusion-kernel support vector machine
GELM: generalized extreme learning machine
GELM-CKs: generalized extreme learning machine-composite kernels
HSI: hyperspectral image
JDFFF: joint decision fusion and feature fusion
JDFFF-GELM: joint decision fusion and feature fusion-generalized extreme learning machine
JDFFF-KELM: joint decision fusion and feature fusion-kernel extreme learning machine
JDFFF-KSVM: joint decision fusion and feature fusion-kernel support vector machine
K: kappa coefficient
KELM: kernel extreme learning machine
KELM-CKs: kernel extreme learning machine-composite kernels
KSVM: kernel support vector machine
LDA: linear discriminant analysis
MPs: morphological profiles
MV: majority voting
MWEMAPs: multiscale weighted extended multi-attribute profiles
OA: overall accuracies
PCA: principal component analysis
SMLR: sparse multinomial logistic regression
SMLR-SPATV: sparse multinomial logistic regression-weighted Markov random fields
SSA: singular spectrum analysis
SVM: support vector machine
SVM-CKs: support vector machine-composite kernels
WEMAPs: weighted extended multi-attribute profiles
WMFs: weighted mean filters
Table 2. The classification overall accuracies (in percentages) on these two datasets with 15 labeled samples per class for training.
Dataset | Strategy | GELM | GELM (Noise 0.02) | GELM (Noise 0.04) | GELM (Noise 0.06) | KELM | KELM (Noise 0.02) | KELM (Noise 0.04) | KELM (Noise 0.06)
Indian Pines | raw data | 61.02 ± 1.52 | 55.45 ± 1.64 | 50.21 ± 1.36 | 44.85 ± 1.89 | 66.93 ± 2.45 | 60.07 ± 2.37 | 54.37 ± 2.11 | 47.98 ± 2.33
Indian Pines | WMFs | 75.35 ± 2.07 | 72.26 ± 2.78 | 69.42 ± 2.15 | 66.32 ± 2.45 | 78.35 ± 3.09 | 75.12 ± 1.61 | 72.31 ± 2.51 | 69.21 ± 2.63
Indian Pines | EMAPs | 88.34 ± 2.02 | 88.25 ± 2.64 | 88.12 ± 2.51 | 88.09 ± 2.74 | 88.93 ± 1.73 | 88.54 ± 2.24 | 88.29 ± 2.66 | 88.17 ± 2.62
Indian Pines | WEMAPs | 90.51 ± 1.77 | 90.42 ± 1.91 | 90.32 ± 2.12 | 90.31 ± 2.35 | 91.25 ± 1.95 | 91.14 ± 1.62 | 91.12 ± 2.24 | 91.15 ± 2.33
Indian Pines | FF | 91.86 ± 2.02 | 91.77 ± 1.46 | 91.68 ± 1.72 | 91.71 ± 1.42 | 92.22 ± 1.37 | 92.18 ± 1.65 | 92.15 ± 2.14 | 92.08 ± 2.26
Indian Pines | JDFFF | 92.74 ± 1.23 | 92.68 ± 1.61 | 92.57 ± 1.87 | 92.61 ± 1.54 | 93.09 ± 0.86 | 92.97 ± 1.18 | 92.92 ± 1.64 | 92.87 ± 1.67
Pavia University | raw data | 74.48 ± 3.67 | 63.58 ± 3.45 | 58.45 ± 3.12 | 54.26 ± 3.58 | 74.20 ± 5.12 | 63.72 ± 3.41 | 58.63 ± 3.09 | 54.62 ± 4.35
Pavia University | WMFs | 85.07 ± 3.69 | 76.02 ± 3.56 | 68.91 ± 3.68 | 66.12 ± 3.46 | 84.21 ± 5.20 | 75.55 ± 5.51 | 68.31 ± 3.36 | 65.58 ± 2.04
Pavia University | EMAPs | 95.90 ± 1.85 | 95.85 ± 2.23 | 95.88 ± 2.31 | 95.78 ± 2.18 | 95.41 ± 3.21 | 95.34 ± 1.75 | 95.36 ± 1.28 | 95.31 ± 1.68
Pavia University | WEMAPs | 96.97 ± 1.15 | 96.88 ± 1.45 | 96.85 ± 1.96 | 96.79 ± 1.84 | 97.11 ± 1.06 | 97.05 ± 1.32 | 97.02 ± 2.62 | 96.95 ± 1.86
Pavia University | FF | 97.31 ± 1.15 | 97.26 ± 1.78 | 97.28 ± 1.79 | 97.18 ± 2.03 | 97.15 ± 0.74 | 97.11 ± 1.58 | 97.08 ± 1.05 | 97.04 ± 1.63
Pavia University | JDFFF | 97.98 ± 1.10 | 97.96 ± 1.21 | 97.93 ± 1.36 | 97.95 ± 1.58 | 98.16 ± 0.98 | 98.15 ± 0.85 | 98.08 ± 0.83 | 98.06 ± 1.15
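The WMF rows in Table 2 correspond to smoothing each band of the HSI cube with a spatial weighted mean filter before classification. A minimal sketch of such a filter, assuming a Gaussian spatial weighting for illustration (the paper's actual weight definition may differ), could look like:

```python
import numpy as np

def weighted_mean_filter(cube, window=3, sigma=1.0):
    """Smooth each band of an HSI cube (H x W x B) with a weighted mean
    over a square spatial window. A Gaussian spatial kernel is assumed
    here purely for illustration."""
    r = window // 2
    # Gaussian weights over the window offsets, normalized to sum to 1
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    w = np.exp(-(ys**2 + xs**2) / (2.0 * sigma**2))
    w /= w.sum()
    # replicate-pad the spatial borders so output size matches input
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.zeros(cube.shape, dtype=float)
    for dy in range(window):
        for dx in range(window):
            out += w[dy, dx] * padded[dy:dy + cube.shape[0],
                                      dx:dx + cube.shape[1], :]
    return out
```

Because the weights sum to one, homogeneous regions are preserved while zero-mean noise is attenuated, which is consistent with the WMF rows degrading far less under added noise than the raw-data rows.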
Table 3. The classification overall accuracies (in percentages) and running times (in seconds) on these three datasets with 15 labeled samples per class for training.
Dataset | Strategy | GELM | Time (s) | KELM | Time (s)
Indian Pines | raw data | 61.02 ± 1.52 | 0.43 | 66.93 ± 2.45 | 0.41
Indian Pines | WEMAPs | 90.51 ± 1.77 | 0.56 | 91.25 ± 1.95 | 0.52
Indian Pines | FF | 91.86 ± 2.02 | 1.25 | 92.22 ± 1.37 | 2.08
Indian Pines | JDFFF | 92.74 ± 1.23 | 17.15 | 93.09 ± 0.86 | 21.32
Pavia University | raw data | 74.48 ± 3.67 | 0.69 | 74.20 ± 5.12 | 0.57
Pavia University | WEMAPs | 96.97 ± 1.15 | 0.76 | 97.11 ± 1.06 | 0.62
Pavia University | FF | 97.31 ± 1.15 | 5.35 | 97.15 ± 0.74 | 5.16
Pavia University | JDFFF | 97.98 ± 1.10 | 74.05 | 98.16 ± 0.98 | 72.67
Kennedy Space Center | raw data | 87.43 ± 1.23 | 0.85 | 87.76 ± 0.98 | 0.74
Kennedy Space Center | WEMAPs | 88.32 ± 0.95 | 0.91 | 88.65 ± 0.75 | 0.83
Kennedy Space Center | FF | 93.75 ± 1.12 | 7.54 | 93.93 ± 1.06 | 7.02
Kennedy Space Center | JDFFF | 93.86 ± 1.09 | 87.52 | 93.94 ± 0.83 | 84.26
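The sub-second training times for GELM and KELM in Table 3 reflect the closed-form ELM solution: the hidden-layer weights are random and only the output weights are solved by regularized least squares. A minimal sketch, where the function names, the tanh activation, and the ridge parameter C are illustrative rather than the paper's exact configuration:

```python
import numpy as np

def train_elm(X, y, n_hidden=200, C=1.0, rng=None):
    """Basic ELM: random hidden layer, then output weights beta solved in
    closed form as beta = (H^T H + I/C)^-1 H^T T (one-hot targets T)."""
    rng = np.random.default_rng(rng)
    n_classes = int(y.max()) + 1
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden activations
    T = np.eye(n_classes)[y]                         # one-hot target matrix
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

No iterative optimization is involved, only one linear solve, which is why training scales mildly with the number of labeled samples in the tables above.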
Table 4. Classification accuracy (%) with different numbers of labeled samples for the Indian Pines dataset (the best result of each row is marked in bold).
Q | Index | KSVM | SVM-CKs | FF-KSVM | JDFFF-KSVM | SMLR-SPATV | GELM | KELM | GELM-CKs | KELM-CKs | FF-GELM | FF-KELM | JDFFF-GELM | JDFFF-KELM
5 | OA | 54.1 ± 4.1 | 59.18 ± 5.13 | 71.6 ± 2.9 | 74.5 ± 1.9 | 69.8 ± 5.7 | 50.8 ± 4.4 | 55.2 ± 3.8 | 62.3 ± 1.6 | 73.1 ± 2.8 | 80.3 ± 2.1 | 81.3 ± 2.1 | 82.3 ± 2.3 | 82.5 ± 2.7
| AA | 66.1 ± 2.2 | 71.1 ± 2.8 | 80.9 ± 3.1 | 83.1 ± 1.4 | 80.8 ± 0.6 | 64.8 ± 3.5 | 68.3 ± 1.5 | 75.9 ± 1.6 | 83.1 ± 2.2 | 86.6 ± 1.8 | 87.4 ± 1.4 | 88.2 ± 1.3 | 88.3 ± 1.8
| k | 48.8 ± 4.4 | 54.4 ± 5.5 | 68.2 ± 3.2 | 71.4 ± 2.1 | 66.3 ± 6.1 | 45.7 ± 0.2 | 50.3 ± 3.8 | 58.1 ± 1.8 | 69.8 ± 3.1 | 77.8 ± 2.4 | 78.9 ± 2.3 | 80.1 ± 2.6 | 80.3 ± 3.1
10 | OA | 64.3 ± 2.8 | 71.1 ± 4.1 | 79.3 ± 2.5 | 81.5 ± 2.8 | 77.8 ± 5.3 | 57.6 ± 2.9 | 63.3 ± 3.1 | 70.5 ± 1.7 | 80.9 ± 2.3 | 86.8 ± 1.9 | 88.2 ± 1.4 | 88.3 ± 2.1 | 88.8 ± 1.9
| AA | 74.8 ± 2.1 | 81.5 ± 2.6 | 87.5 ± 0.9 | 89.4 ± 1.2 | 88.1 ± 1.9 | 70.8 ± 2.2 | 74.5 ± 2.2 | 82.8 ± 1.1 | 89.2 ± 1.5 | 92.1 ± 1.2 | 92.8 ± 0.9 | 93.1 ± 1.3 | 93.3 ± 1.1
| k | 59.9 ± 3.1 | 67.7 ± 4.6 | 76.6 ± 2.8 | 79.2 ± 3.1 | 75.1 ± 5.7 | 52.6 ± 3.1 | 58.8 ± 3.2 | 67.1 ± 1.9 | 78.5 ± 2.5 | 85.1 ± 2.2 | 86.6 ± 1.6 | 86.7 ± 2.2 | 87.3 ± 2.1
15 | OA | 67.1 ± 2.4 | 78.7 ± 3.5 | 86.1 ± 1.9 | 86.9 ± 1.9 | 83.1 ± 2.4 | 61.1 ± 1.3 | 65.7 ± 3.3 | 77.7 ± 2.1 | 85.3 ± 2.1 | 91.8 ± 1.5 | 91.6 ± 2.2 | 92.5 ± 1.5 | 93.1 ± 1.1
| AA | 77.6 ± 1.8 | 86.7 ± 1.9 | 91.5 ± 1.3 | 92.2 ± 1.2 | 91.6 ± 0.7 | 75.1 ± 0.9 | 77.9 ± 2.5 | 87.4 ± 0.8 | 92.1 ± 1.6 | 95.2 ± 0.9 | 95.1 ± 1.4 | 95.6 ± 0.8 | 96.1 ± 0.7
| k | 63.2 ± 2.6 | 76.1 ± 3.9 | 84.3 ± 2.1 | 85.1 ± 2.1 | 81.1 ± 2.6 | 56.6 ± 1.3 | 61.7 ± 3.6 | 75.1 ± 2.2 | 83.4 ± 2.4 | 90.7 ± 1.7 | 90.5 ± 2.5 | 91.4 ± 1.7 | 92.1 ± 1.1
20 | OA | 71.1 ± 2.2 | 83.0 ± 2.4 | 88.1 ± 2.1 | 89.5 ± 1.2 | 85.9 ± 2.4 | 63.5 ± 1.4 | 70.8 ± 2.6 | 82.7 ± 1.7 | 89.6 ± 1.1 | 92.8 ± 1.2 | 93.5 ± 0.9 | 94.2 ± 0.9 | 94.1 ± 1.4
| AA | 80.9 ± 1.3 | 89.6 ± 1.2 | 92.8 ± 1.2 | 93.8 ± 0.7 | 92.5 ± 1.0 | 77.2 ± 0.9 | 81.9 ± 1.0 | 91.1 ± 1.1 | 94.6 ± 0.7 | 95.7 ± 0.9 | 96.2 ± 0.6 | 96.6 ± 0.6 | 96.6 ± 0.9
| k | 67.4 ± 2.5 | 80.8 ± 2.6 | 86.5 ± 2.2 | 88.1 ± 1.4 | 84.1 ± 2.6 | 59.3 ± 1.4 | 67.2 ± 2.8 | 80.5 ± 1.8 | 88.2 ± 1.1 | 91.8 ± 1.4 | 92.6 ± 1.1 | 93.4 ± 1.1 | 93.3 ± 1.6
25 | OA | 72.6 ± 1.8 | 85.5 ± 1.5 | 90.1 ± 1.1 | 90.5 ± 1.1 | 87.8 ± 2.1 | 66.5 ± 1.7 | 73.1 ± 2.3 | 85.9 ± 1.3 | 90.8 ± 1.1 | 93.1 ± 1.4 | 93.8 ± 1.1 | 93.8 ± 0.9 | 94.5 ± 1.1
| AA | 82.5 ± 1.6 | 91.8 ± 1.1 | 94.1 ± 0.7 | 94.6 ± 0.8 | 93.8 ± 0.9 | 79.5 ± 0.8 | 83.8 ± 0.9 | 92.6 ± 0.9 | 95.5 ± 0.5 | 96.1 ± 1.1 | 96.5 ± 0.6 | 96.8 ± 0.6 | 97.1 ± 0.6
| k | 69.1 ± 1.9 | 83.6 ± 1.7 | 88.7 ± 1.1 | 89.2 ± 1.2 | 86.2 ± 2.2 | 62.5 ± 1.7 | 69.7 ± 2.5 | 84.1 ± 1.5 | 89.6 ± 1.2 | 92.1 ± 1.6 | 92.9 ± 1.3 | 92.9 ± 1.1 | 93.8 ± 1.3
30 | OA | 74.4 ± 1.8 | 85.9 ± 1.8 | 90.6 ± 1.4 | 91.5 ± 1.1 | 89.1 ± 1.5 | 66.8 ± 0.6 | 74.3 ± 1.4 | 88.8 ± 1.6 | 91.9 ± 0.9 | 94.2 ± 1.1 | 94.9 ± 1.1 | 94.9 ± 1.1 | 95.1 ± 1.1
| AA | 83.8 ± 1.2 | 92.1 ± 0.9 | 94.5 ± 0.6 | 95.2 ± 0.8 | 94.9 ± 5.6 | 79.7 ± 0.9 | 85.3 ± 1.1 | 94.3 ± 1.1 | 96.1 ± 0.7 | 96.8 ± 0.6 | 97.2 ± 0.5 | 97.3 ± 0.6 | 97.4 ± 0.6
| k | 71.1 ± 1.9 | 84.1 ± 2.1 | 89.3 ± 1.6 | 90.3 ± 1.3 | 87.6 ± 1.6 | 63.1 ± 0.6 | 71.2 ± 1.5 | 87.3 ± 1.8 | 90.8 ± 1.1 | 93.4 ± 1.3 | 94.2 ± 1.1 | 94.2 ± 1.2 | 94.4 ± 1.3
| Time (s) | 23.98 | 41.80 | 37.51 | 152.47 | 30.3 | 0.45 | 1.05 | 1.86 | 9.59 | 1.83 | 2.69 | 19.58 | 23.77
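The OA, AA, and kappa (k) indices reported in Tables 4 and 5 follow the standard confusion-matrix definitions. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Overall accuracy (OA), average accuracy (AA), and kappa coefficient
    computed from the confusion matrix of predicted vs. true labels."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n                             # fraction correct overall
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))        # mean per-class accuracy
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2     # chance agreement
    kappa = (oa - pe) / (1.0 - pe)                    # agreement beyond chance
    return oa, aa, kappa
```

AA weights every class equally, so it penalizes classifiers that ignore small classes, while kappa discounts the agreement expected by chance; this is why the three indices can rank methods slightly differently in the tables.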
Table 5. Accuracy (%) with different numbers of labeled samples for the Pavia University dataset (the best result of each row is marked in bold).
Q | Index | KSVM | SVM-CKs | FF-KSVM | JDFFF-KSVM | SMLR-SPATV | GELM | KELM | GELM-CKs | KELM-CKs | FF-GELM | FF-KELM | JDFFF-GELM | JDFFF-KELM
5 | OA | 61.7 ± 10.7 | 63.3 ± 4.5 | 81.2 ± 2.6 | 84.76 ± 3.87 | 67.46 ± 6.94 | 60.9 ± 8.1 | 59.9 ± 6.7 | 63.2 ± 3.7 | 63.7 ± 8.1 | 88.8 ± 4.5 | 89.3 ± 5.4 | 91.61 ± 4.03 | 90.1 ± 3.1
| AA | 73.1 ± 4.9 | 73.4 ± 3.3 | 86.4 ± 0.9 | 87.7 ± 2.1 | 76.1 ± 5.5 | 70.6 ± 4.6 | 71.9 ± 3.4 | 71.5 ± 2.2 | 72.5 ± 2.8 | 91.2 ± 2.2 | 91.5 ± 2.5 | 92.7 ± 1.8 | 91.9 ± 1.5
| k | 53.1 ± 10.8 | 54.7 ± 4.7 | 76.1 ± 2.9 | 80.3 ± 4.6 | 59.3 ± 7.2 | 52.1 ± 8.3 | 51.1 ± 6.7 | 54.4 ± 3.5 | 55.2 ± 8.7 | 85.6 ± 5.5 | 86.27 ± 6.7 | 89.1 ± 5.1 | 87.2 ± 3.7
10 | OA | 71.2 ± 4.4 | 74.2 ± 6.7 | 87.7 ± 4.8 | 89.8 ± 4.2 | 77.4 ± 4.5 | 69.8 ± 4.3 | 67.6 ± 3.3 | 71.6 ± 3.5 | 75.7 ± 6.5 | 92.1 ± 3.3 | 92.00 ± 3.1 | 93.3 ± 3.7 | 93.7 ± 3.1
| AA | 79.1 ± 1.9 | 79.9 ± 2.8 | 92.3 ± 2.4 | 93.5 ± 1.9 | 85.1 ± 2.4 | 77.9 ± 1.4 | 78.3 ± 1.9 | 77.2 ± 1.6 | 80.9 ± 3.2 | 95.4 ± 1.6 | 95.48 ± 1.9 | 95.9 ± 3.1 | 95.9 ± 3.1
| k | 63.8 ± 4.8 | 67.3 ± 7.8 | 84.3 ± 5.9 | 86.9 ± 5.2 | 71.5 ± 4.8 | 62.2 ± 4.5 | 59.8 ± 3.5 | 63.9 ± 4.2 | 69.2 ± 7.9 | 89.7 ± 4.1 | 89.6 ± 3.8 | 91.3 ± 4.6 | 91.8 ± 3.7
15 | OA | 74.1 ± 6.4 | 85.7 ± 2.6 | 96.2 ± 1.1 | 97.1 ± 0.8 | 79.3 ± 3.1 | 74.7 ± 3.8 | 72.8 ± 4.3 | 76.1 ± 3.2 | 86.2 ± 3.4 | 97.4 ± 1.1 | 97.4 ± 0.8 | 97.9 ± 1.1 | 98.3 ± 0.9
| AA | 81.1 ± 3.1 | 87.3 ± 1.3 | 96.7 ± 0.7 | 97.2 ± 0.6 | 86.1 ± 1.6 | 79.7 ± 1.2 | 81.2 ± 1.8 | 79.8 ± 1.8 | 87.2 ± 0.9 | 98.1 ± 0.8 | 97.9 ± 0.6 | 98.4 ± 0.5 | 98.5 ± 0.7
| k | 67.2 ± 7.4 | 81.4 ± 3.3 | 95.1 ± 1.4 | 96.2 ± 1.2 | 73.8 ± 3.6 | 67.6 ± 4.2 | 65.7 ± 4.9 | 69.2 ± 3.7 | 82.1 ± 4.1 | 96.60 ± 1.5 | 96.6 ± 1.1 | 97.3 ± 1.4 | 97.7 ± 1.2
20 | OA | 75.9 ± 3.6 | 86.1 ± 2.9 | 96.1 ± 1.7 | 97.1 ± 2.1 | 85.3 ± 2.9 | 74.4 ± 3.1 | 75.3 ± 3.8 | 78.3 ± 1.1 | 86.1 ± 2.9 | 97.5 ± 1.6 | 97.5 ± 1.5 | 98.4 ± 1.5 | 98.3 ± 1.8
| AA | 82.6 ± 1.1 | 88.2 ± 1.5 | 97.5 ± 0.5 | 97.9 ± 0.7 | 87.9 ± 2.6 | 80.4 ± 1.6 | 82.6 ± 1.6 | 82.4 ± 1.1 | 87.5 ± 2.1 | 98.3 ± 0.8 | 98.4 ± 0.5 | 98.9 ± 0.6 | 98.9 ± 0.6
| k | 69.5 ± 4.1 | 81.9 ± 3.5 | 94.8 ± 2.2 | 96.2 ± 2.6 | 81.1 ± 3.6 | 67.6 ± 3.6 | 68.8 ± 4.3 | 72.1 ± 1.3 | 81.8 ± 3.7 | 96.7 ± 2.1 | 96.7 ± 2.1 | 97.9 ± 1.9 | 97.7 ± 2.3
25 | OA | 80.6 ± 2.3 | 87.2 ± 2.1 | 97.1 ± 1.0 | 98.1 ± 1.1 | 86.3 ± 5.3 | 77.3 ± 2.5 | 78.4 ± 3.2 | 79.5 ± 4.9 | 88.8 ± 1.8 | 98.6 ± 0.6 | 98.6 ± 0.7 | 99.2 ± 0.2 | 99.1 ± 0.4
| AA | 85.5 ± 1.2 | 89.1 ± 1.3 | 97.9 ± 0.5 | 98.2 ± 0.8 | 90.2 ± 2.2 | 82.7 ± 1.4 | 84.9 ± 1.1 | 83.4 ± 1.6 | 89.9 ± 1.3 | 98.9 ± 0.4 | 99.1 ± 0.3 | 99.3 ± 0.2 | 99.3 ± 0.2
| k | 75.1 ± 2.7 | 83.3 ± 2.6 | 96.2 ± 1.3 | 97.4 ± 1.4 | 82.5 ± 6.4 | 71.1 ± 2.8 | 72.6 ± 3.7 | 73.8 ± 5.7 | 85.3 ± 2.3 | 98.2 ± 0.8 | 98.1 ± 0.9 | 99.1 ± 0.3 | 98.9 ± 0.5
30 | OA | 81.8 ± 1.3 | 89.4 ± 1.8 | 97.7 ± 1.1 | 98.5 ± 0.6 | 90.1 ± 2.5 | 79.1 ± 1.7 | 80.9 ± 2.4 | 83.9 ± 1.9 | 90.9 ± 1.8 | 99.1 ± 0.5 | 98.7 ± 1.7 | 99.4 ± 0.3 | 99.3 ± 0.4
| AA | 86.1 ± 0.8 | 90.6 ± 0.8 | 98.3 ± 0.6 | 98.6 ± 0.3 | 92.1 ± 1.9 | 83.6 ± 0.5 | 85.8 ± 0.8 | 85.6 ± 0.6 | 91.4 ± 1.2 | 99.3 ± 0.2 | 99.1 ± 0.8 | 99.4 ± 0.3 | 99.4 ± 0.1
| k | 76.6 ± 1.5 | 86.1 ± 2.3 | 97.1 ± 1.3 | 98.1 ± 0.8 | 87.1 ± 3.1 | 73.1 ± 2.1 | 75.5 ± 2.9 | 79.1 ± 2.3 | 88.1 ± 2.4 | 98.8 ± 0.6 | 98.3 ± 2.2 | 99.2 ± 0.4 | 99.1 ± 0.5
| Time (s) | 4.98 | 16.48 | 14.62 | 109.95 | 106.6 | 0.84 | 0.62 | 6.15 | 7.86 | 6.41 | 6.25 | 78.63 | 76.31
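The JDFFF columns combine the per-scale classification maps in a decision-fusion step. One simple fusion rule is per-pixel majority voting (MV, listed in Table 1); the sketch below assumes plain unweighted voting, whereas the paper's DF stage may weight the scales differently:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse per-scale label vectors (each an int array over the same pixels)
    by per-pixel majority voting. Ties go to the smallest class label."""
    stacked = np.stack(label_maps, axis=0)          # (n_scales, n_pixels)
    n_classes = int(stacked.max()) + 1
    # count votes per class for every pixel, then pick the winning class
    votes = np.apply_along_axis(np.bincount, 0, stacked, minlength=n_classes)
    return votes.argmax(axis=0)
```

Fusing several scales this way lets small structures (captured at fine scales) and large homogeneous regions (captured at coarse scales) both contribute, which matches the consistent JDFFF gains over single-scale FF in Tables 4 and 5.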
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Liu, M.; Cao, F.; Yang, Z.; Hong, X.; Huang, Y. Hyperspectral Image Denoising and Classification Using Multi-Scale Weighted EMAPs and Extreme Learning Machine. Electronics 2020, 9, 2137. https://doi.org/10.3390/electronics9122137
