Article

Spectral-Spatial Joint Classification of Hyperspectral Image Based on Broad Learning System

1 School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
2 School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(4), 583; https://doi.org/10.3390/rs13040583
Submission received: 2 January 2021 / Revised: 2 February 2021 / Accepted: 4 February 2021 / Published: 6 February 2021
(This article belongs to the Special Issue Machine Learning and Pattern Analysis in Hyperspectral Remote Sensing)

Abstract

At present, many researchers combine spectral and spatial features to enhance hyperspectral image (HSI) classification accuracy. However, some methods utilize the spatial features insufficiently. To further improve HSI classification performance, this paper proposes the spectral-spatial joint classification of HSI based on the broad learning system (BLS), termed SSBLS, which consists of three parts. First, a Gaussian filter is adopted to smooth each band of the original spectra based on the spatial information to remove noise. Second, the test sample labels are obtained using the optimal BLS classification model trained with the spectral features smoothed by the Gaussian filter. Finally, a guided filter corrects the BLS classification results based on the spatial contextual information to improve classification accuracy. Experimental results on three real HSI datasets demonstrate that the mean overall accuracies (OAs) of ten experiments are 99.83% on the Indian Pines dataset, 99.96% on the Salinas dataset, and 99.49% on the Pavia University dataset. Compared with other methods, the proposed method achieves the best performance.

Graphical Abstract

1. Introduction

Hyperspectral images (HSIs) are widely used in various fields [1,2,3,4] due to their many characteristics, such as high-resolution spectral imaging, the unity of spectral and spatial images, and rapid non-destructive testing. One of the important tasks in HSI applications is HSI classification. At first, researchers utilized only spectral features for classification. However, spectral information is easily affected by factors such as illumination, noise, and sensors, so the phenomenon of "same object with different spectra, and different objects with the same spectrum" often appears. This increases the difficulty of object recognition and seriously reduces classification accuracy. Researchers therefore began to combine spectral and spatial features to improve classification accuracy.
The spectral feature extraction of HSI can be realized by unsupervised [5,6], supervised [7,8], and semi-supervised methods [7,9,10]. Representative unsupervised methods include principal component analysis (PCA) [11], independent component analysis (ICA) [12], and locality preserving projections (LPP) [13]; several well-known unsupervised feature extraction methods are based on PCA and ICA. The foundation of some supervised feature extraction techniques for HSIs [14,15] is the well-known linear discriminant analysis (LDA). Many semi-supervised spectral feature extraction methods combine supervised and unsupervised methods to classify HSIs using limited labeled samples together with unlabeled samples. For example, Cai et al. [16] proposed semi-supervised discriminant analysis (SDA), which adds a graph Laplacian-based regularization constraint to LDA to capture the local manifold structure of unlabeled samples and avoid overfitting when labeled samples are scarce. Sugiyama et al. [17] introduced semi-supervised local Fisher discriminant analysis (SELF), which combines a supervised method (local Fisher discriminant analysis [8]) with PCA. These feature extraction methods (PCA, ICA, and LDA) cannot describe the complex structure of HSI spectral features. LPP, which is essentially a linear version of Laplacian eigenmaps, can describe the nonlinear manifold structure of data and is widely used in the spectral feature extraction of HSIs [18,19]. He et al. [20] applied multiscale super-pixel-wise LPP to HSI classification. Deng et al. [21] proposed the tensor locality preserving projection (TLPP) algorithm to reduce the dimensionality of HSI. However, in LPP, the number of nearest neighbors used to construct the adjacency graph is difficult to determine [22]. All of the above spectral feature extraction methods rely on dimensionality reduction, which loses some spectral information.
The Gaussian filter [23] can smooth the spectral information without reducing the number of bands in order to remove noise from HSI data. Because of this noise-removal capability and its linear computation, the Gaussian filter is widely used in HSI classification [24,25]. Tu et al. [26] used a Gaussian pyramid to capture features at different scales by stepwise filtering and down-sampling. Shao et al. [27] utilized the Gaussian filter to fit the trigger and echo signal waveforms for coal/rock classification, where the spectra of four types of coal/rock specimens were captured by a 91-channel hyperspectral light detection and ranging (LiDAR) system (HSL).
In terms of spatial feature extraction, a Markov model was initially adopted to capture spatial features [28]. However, it has two disadvantages: intractable computational problems and insufficient samples to describe the desired object. The morphological profile (MP) model [29] was then put forward. Although MP has a strong ability to extract spatial features, it lacks a flexible structuring element (SE) shape, the ability to characterize the grey-level features of a region, and low computational complexity [30]. Benediktsson et al. [31] proposed the extended morphological profile (EMP) to classify HSIs with high spatial resolution from urban areas. To solve the problems of MP, Mura et al. [32] proposed the morphological attribute profile (MAP) as a generalization of MP. The extended morphological profiles with partial reconstruction (EMPP) [33] were introduced to classify high-resolution HSIs in urban areas. Subsequently, the extended morphological attribute profiles (EMAP) [34] were adopted to cut down the redundancy of MAP. The framework of morphological attribute profiles with partial reconstruction [35] achieved better performance on the classification of high-resolution HSIs. Geiss et al. [36] proposed object-based MPs, which greatly improve classification accuracy compared with standard MPs. Samat et al. [37] used extra-trees and the maximally stable extremal region-guided morphological profile (MSER_MP) to achieve an ideal classification effect. The broad learning system (BLS) classification architecture based on LPP and the local binary pattern (LBP) (LPP_LBP_BLS) [19] was proposed to obtain high-precision classification. However, LBP only uses the local features of pixels and needs an adjacency matrix, which requires a great deal of computation.
In recent years, the guided filter [38,39,40,41,42] has attracted much interest from researchers due to its low computational complexity and edge-preserving ability. Hierarchical guidance filtering-based ensemble classification for hyperspectral images (HiFi-We) [42] was proposed; it obtains individual learners from spectral-spatial joint features generated at different scales, and its ensemble model, consisting of hierarchical guidance filtering (HGF) and a matrix of spectral angle distance (mSAD), is built via a weighted ensemble strategy.
Researchers have devoted a great deal of work to building various classifiers to improve HSI classification accuracy [43], such as random forests [44], neural networks [45], support vector machines (SVM) [46,47], deep learning [48], reinforcement learning [49], and broad learning systems [50]. Among these classifiers, the BLS classifier [51,52] has attracted more and more research attention due to its simple structure, few training parameters, and fast training process. Ye et al. [53] proposed a novel regularized deep cascade broad learning system (DCBLS) method for large-scale data, which is successful in image denoising. The discriminative locality preserving broad learning system (DPBLS) [54] was utilized to capture the manifold structure between neighboring pixels of hyperspectral images. Wang et al. [55] proposed an HSI classification method based on domain adaptation broad learning (DABL) to address the limitation or absence of available labeled samples. Kong et al. [56] proposed a semi-supervised BLS (SBLS), which first uses HGF to preprocess the HSI data and then the class-probability structure (CP) and BLS to classify, achieving semi-supervised classification with small sample sets.
In order to make full use of spectral-spatial joint features to improve HSI classification performance, we propose the SSBLS method. It comprises three parts. First, the Gaussian filter smooths the spectral features on each band of the original HSI based on the spatial information to remove noise; the inherent spectral characteristics of the pixels are extracted, realizing the first fusion of spectral and spatial information. Second, the pixel vectors of spectral-spatial joint features are input into the BLS, which extracts sparse and compact features through a random weight matrix fine-tuned by a sparse autoencoder and predicts the labels of the test samples; the initial probability maps are constructed. In the last step, a guided filter corrects the initial probability maps under the guidance of a grey-scale image obtained by reducing the spectral dimensionality of the original HSI to one via PCA; the spatial context information is fully utilized by the guided filter. In SSBLS, spatial information is used in the first and third steps; in the second step, BLS classifies using the spectral-spatial joint features; and in the third step, the first principal component of the spectral information provides the grey-scale image. Therefore, the full use of spectral-spatial joint features in the proposed method contributes to better classification performance. The major contributions of our work can be summarized as follows:
(1)
We found that the organic combination of the Gaussian filter and BLS can enhance classification accuracy. The Gaussian filter captures the inherent spectral information of each pixel based on the HSI spatial information. BLS extracts sparse and compact features using random weights fine-tuned by the sparse autoencoder in the feature mapping process. Sparse features can represent low-level structures such as edges and high-level structures such as local curvatures and shapes [57]; these contribute to the improvement of classification accuracy. The inherent spectral features are input into BLS for training and prediction, thereby improving the classification accuracy of the proposed method. Experimental data support this conclusion.
(2)
We take full advantage of spectral-spatial features in SSBLS. The Gaussian filter first smooths each spectral band based on the spatial information of the HSI to achieve the first fusion of spectral-spatial information. The guided filter then corrects the BLS classification results based on the spatial context information, with its grey-scale guidance image obtained from the original HSI via the first principal component of PCA. These three operations sufficiently join the spectral and spatial information together, which helps improve the accuracy of SSBLS.
(3)
SSBLS utilizes the guided filter to rectify misclassified hyperspectral pixels based on the spatial contextual information and obtain the correct classification labels, thereby improving the overall accuracy of SSBLS. The experimental results also support this point.
The rest of this paper is organized as follows. Section 2 describes the proposed method in detail. Section 3 presents the experiments and analysis. The discussion of the proposed method is in Section 4. Section 5 is the summary.

2. Proposed Method of Spectral-Spatial Joint Classification of HSI Based on Broad Learning System

The flowchart of SSBLS is shown in Figure 1; the method mainly consists of three steps: (1) After the original HSI data are input, a Gaussian filter with an appropriately sized window is applied to extract the inherent spectral features of the samples based on the spatial information. (2) The test sample labels are obtained using the optimal BLS classification model trained with the pixel vectors smoothed by the Gaussian filter, and the initial probability maps are constructed from the BLS classification results. (3) To improve classification accuracy, the guided filter corrects the initial probability maps based on the spatial context information of the HSI under the guidance of the grey-scale guidance image, which is obtained via the first principal component of PCA.

2.1. Spectral Feature Extraction of HSI Based on Gaussian Filter

The first step of the proposed method is to smooth the spectral features on each band with a 2-dimensional (2-D) Gaussian filter based on the spatial information of the HSI. The Gaussian filter is one of the most widely used and effective window-based filtering methods. It is usually used as a low-pass filter to suppress high-frequency noise, and it can repair detected missing regions [58]. When the Gaussian filter captures the spectral features of the HSI, the weight of each hyperspectral pixel in the filter window decays exponentially with its distance from the center pixel: the closer a neighboring pixel is to the center pixel, the greater its weight, and the farther away, the smaller its weight. The weight of each pixel in the Gaussian filter window is determined by the following 2-D Gaussian function:
G(x, y) = \frac{1}{2\pi\sigma^{2}}\, e^{-(x^{2}+y^{2})/(2\sigma^{2})} \qquad (1)
where x and y are the coordinates of the pixels in the Gaussian filter window on each band of the HSI, and the coordinate of the center pixel of the window is (0, 0). \sigma is the standard deviation of the Gaussian filter; it controls the degree of blurring of the spectral information: the greater the value of \sigma, the smoother the blurred spectral features. The Gaussian function [59] is separable, so a larger-sized Gaussian filter can be realized efficiently. The 2-D Gaussian convolution can be performed in two steps: first, the spectral image on each band of the HSI is convolved with a 1-D Gaussian function; then, the result is convolved with the same 1-D Gaussian function rotated 90 degrees. Therefore, the computation of 2-D Gaussian filtering grows linearly with the size of the filter window instead of quadratically.
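The separability argument above can be checked numerically: convolving with a 1-D Gaussian kernel along the columns and then along the rows reproduces the full 2-D Gaussian filter. The sketch below (Python with NumPy/SciPy, used purely for illustration; the kernel radius and \sigma are arbitrary values, not the paper's) assumes a single band stored as a 2-D array.

```python
import numpy as np
from scipy.ndimage import convolve1d, gaussian_filter

sigma, radius = 2.0, 6                       # illustrative values only
x = np.arange(-radius, radius + 1)
g1d = np.exp(-x**2 / (2 * sigma**2))         # 1-D Gaussian kernel
g1d /= g1d.sum()                             # normalize to unit sum

band = np.random.default_rng(1).random((32, 32))   # one spectral band

# two 1-D passes: along columns (axis 0), then along rows (axis 1)
separable = convolve1d(convolve1d(band, g1d, axis=0), g1d, axis=1)

# full 2-D Gaussian filter with the same kernel radius
full_2d = gaussian_filter(band, sigma=sigma, truncate=radius / sigma)

assert np.allclose(separable, full_2d, atol=1e-6)
```

The final assertion confirms the two results agree to floating-point precision, which is why the cost grows linearly with the window size.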
The original HSI data with n samples are denoted as X = (x_1, x_2, x_3, \ldots, x_n), where each sample lies in an m-dimensional space and m is the number of HSI bands. Y^{GaF} = (y_1, y_2, y_3, \ldots, y_n), y_i \in R^{m}, is obtained from X after blurring by the Gaussian filter. The superscript "GaF" represents the Gaussian filter, and O^{GaF} stands for the Gaussian filtering operation. The spectral feature extraction of HSI based on the Gaussian filter can be represented as Equation (2).
Y^{GaF} = O^{GaF}(X) \qquad (2)
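A minimal sketch of the operation O^{GaF}, assuming the HSI cube is stored as a (height, width, bands) NumPy array and using scipy's gaussian_filter per band; the function name and the window-to-truncate conversion are our own choices, not from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_smooth_hsi(cube, sigma, window_size):
    """Smooth each spectral band of an HSI cube of shape (H, W, B) with a
    2-D Gaussian filter (Eq. (2)). scipy builds an odd, symmetric kernel of
    radius `truncate * sigma`, so an even window size S is effectively
    rounded to the odd window S + 1."""
    radius = window_size // 2
    truncate = radius / sigma           # kernel radius = truncate * sigma
    smoothed = np.empty(cube.shape, dtype=np.float64)
    for b in range(cube.shape[2]):      # filter band by band
        smoothed[:, :, b] = gaussian_filter(cube[:, :, b].astype(np.float64),
                                            sigma=sigma, truncate=truncate)
    return smoothed
```

Smoothing a constant band leaves it unchanged (the kernel sums to one), which is a quick sanity check on the normalization.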

2.2. HSI Classification Based on the Combination of Gaussian Filter and BLS

Chen and Liu put forward BLS based on the rapid and dynamic learning features of the functional-link network [60,61,62]. BLS is built as a flat network, in which the input data are first mapped into mapped feature nodes, and then all mapped feature nodes are mapped into enhancement nodes for expansion. The BLS network expands through both mapped feature nodes and enhancement nodes. Moreover, through rigorous mathematical methods, Igelnik and Pao [63] proved that enhancement nodes contribute to the improvement of classification accuracy. BLS is built on the basis of the traditional random vector functional-link neural network (RVFLNN) [64]. However, unlike the traditional RVFLNN, which constructs enhancement nodes by applying a nonlinear activation function to a linear combination of the input nodes, BLS first maps the inputs into a set of mapped feature nodes via mapping functions and then maps all mapped feature nodes into enhancement nodes through other activation functions.
The second step of the proposed method is to input the HSI pixel vectors smoothed by the Gaussian filter to train the BLS classification model; the test sample labels are then computed by the optimal BLS classification model to construct the initial probability maps. The notation in Table 1 is used to present the described HSI classification procedure. The HSI samples smoothed by the Gaussian filter are split into a training set and a test set. The training pixel vectors are mapped into mapped feature nodes by a random weight matrix, which is fine-tuned with a sparse autoencoder. Then, the mapped feature nodes are mapped into enhancement nodes using other random weights. The optimal connection weights from all mapped feature nodes and enhancement nodes to the output are obtained through L2-norm regularized optimization solved by ridge regression approximation, yielding the optimal BLS model. The test sample labels are predicted by the optimal model to construct the initial probability maps.
First, the HSI data smoothed by the Gaussian filter, Y^{GaF} \in R^{n \times m} with n samples and m dimensions, are mapped into mapped feature nodes. Y^{BLS} \in R^{n \times C} is the result of BLS classification, where C is the number of sample classes. There are d feature mappings, each with e nodes, which can be represented as Equation (3) [19]:
Z_i = \phi_i\!\left(Y^{GaF} W_i^{fe} + \beta_i^{fe}\right), \quad i = 1, \ldots, d \qquad (3)
where \phi_i is the mapping function, Z_i is the i-th group of mapped features, W_i^{fe} is a random weight matrix of appropriate dimensions, and \beta_i^{fe} is a randomly generated bias; the superscript "fe" denotes the mapped-feature operation. Z^{i} \equiv [Z_1, \ldots, Z_i] is the concatenation of the first i groups of mapped features. Furthermore, \phi_i and \phi_k can be different functions when i \neq k. Z^{d} = [Z_1, Z_2, \ldots, Z_d] represents all mapped feature nodes. To capture sparse and compact features, the sparse autoencoder is used to fine-tune the initial W_i^{fe} [19].
Then, Equation (4) is utilized to compute enhancement nodes from mapped feature nodes
H_l = \xi_l\!\left(Z^{d} W_l^{en} + \beta_l^{en}\right) \qquad (4)
where \xi_l is an activation function; when l \neq k, \xi_l and \xi_k can be different functions. H_l is the l-th group of enhancement nodes, W_l^{en} is a randomly generated weight matrix of appropriate dimensions, and \beta_l^{en} is a randomly generated bias; the superscript "en" denotes the enhancement-node mapping. H^{l} \equiv [H_1, \ldots, H_l] is the concatenation of the first l groups of enhancement nodes [19].
Combined with Equation (4), the output result of BLS can be expressed by Equation (5)
Y^{BLS} = \left[Z_1, \ldots, Z_d \,\middle|\, \xi_1(Z^{d} W_1^{en} + \beta_1^{en}), \ldots, \xi_l(Z^{d} W_l^{en} + \beta_l^{en})\right] W^{op} = \left[Z_1, \ldots, Z_d \,\middle|\, H_1, \ldots, H_l\right] W^{op} = \left[Z^{d} \,\middle|\, H^{l}\right] W^{op} \qquad (5)
where W^{op} is the weight matrix connecting all mapped feature nodes and all enhancement nodes to the output of the BLS; the superscript "op" represents the optimal weight [19]. The optimal connecting weight matrix can be obtained from the L2-norm regularized least squares problem shown in Equation (6):
\arg\min_{W^{op}} \; \left\| \left[Z^{d} \,\middle|\, H^{l}\right] W^{op} - Y^{GaF} \right\|_2^2 + \lambda \left\| W^{op} \right\|_2^2 \qquad (6)
where \lambda further restricts the squared L2-norm of W^{op}, \|\cdot\|_2 represents the L2-norm, and \|\cdot\|_2^2 its square. Equation (7) is obtained by the ridge regression approximation [19].
W^{op} = \left(\lambda I + \left[Z^{d} \,\middle|\, H^{l}\right]^{T} \left[Z^{d} \,\middle|\, H^{l}\right]\right)^{-1} \left[Z^{d} \,\middle|\, H^{l}\right]^{T} Y^{GaF} \qquad (7)
When \lambda \to 0, Equation (7) reduces to the least squares problem; when \lambda \to \infty, the solution is bounded and tends to zero. Thus, \lambda \to 0 is taken, and a small positive number is added to the diagonal of [Z^{d}|H^{l}]^{T}[Z^{d}|H^{l}] or [Z^{d}|H^{l}][Z^{d}|H^{l}]^{T} to obtain the approximate Moore-Penrose generalized inverse [19]. Consequently, we have Equation (8).
\left[Z^{d} \,\middle|\, H^{l}\right]^{+} = \lim_{\lambda \to 0} \left(\lambda I + \left[Z^{d} \,\middle|\, H^{l}\right]^{T} \left[Z^{d} \,\middle|\, H^{l}\right]\right)^{-1} \left[Z^{d} \,\middle|\, H^{l}\right]^{T} \qquad (8)
So we have:
W^{op} = \left[Z^{d} \,\middle|\, H^{l}\right]^{+} Y^{GaF} \qquad (9)
Finally, the output of BLS is:
Y^{BLS} = \left[Z^{d} \,\middle|\, H^{l}\right] \left[Z^{d} \,\middle|\, H^{l}\right]^{+} Y^{GaF} \qquad (10)
After the spectral features smoothed by the Gaussian filter are input into BLS, the initial classification result is Y^{BLS} = (y_1^{BLS}, y_2^{BLS}, \ldots, y_n^{BLS}). The probability maps of this result are expressed as P = (p_1, \ldots, p_C), where p_c is the probability map that all pixels belong to class c. p_{i,c} \in p_c (i = 1, 2, \ldots, n) is the probability that pixel i belongs to class c (c = 1, 2, \ldots, C), as defined in Equation (11).
p_{i,c} = \begin{cases} 1 & \text{if } y_i^{BLS} = c \\ 0 & \text{otherwise} \end{cases} \qquad (11)
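Equations (3)-(11) can be sketched end to end in a few lines of Python. This is a simplified stand-in, not the authors' implementation: the mapped features use plain random linear maps without the sparse auto-encoder fine-tuning, the tanh enhancement activation is our choice, and the ridge target is the one-hot training label matrix (the role the target matrix plays when BLS is used as a classifier). All function and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_features(Y, Wfe, Wen):
    """Build [Z^d | H^l]: d groups of mapped feature nodes (Eq. (3), with a
    linear phi here) followed by tanh enhancement nodes (Eq. (4))."""
    ones = np.ones((Y.shape[0], 1))
    Z = np.hstack([np.hstack([Y, ones]) @ W for W in Wfe])   # Z^d
    H = np.tanh(np.hstack([Z, ones]) @ Wen)                  # H^l
    return np.hstack([Z, H])

def bls_train_predict(Y_train, T_train, Y_test, d=6, e=34, E=100, lam=1e-8):
    """Solve the ridge problem of Eq. (6) for W^op and predict test labels.
    T_train is the one-hot label matrix of the training samples."""
    m = Y_train.shape[1]
    Wfe = [rng.standard_normal((m + 1, e)) for _ in range(d)]
    Wen = rng.standard_normal((d * e + 1, E))
    A = bls_features(Y_train, Wfe, Wen)
    # Eq. (7): W_op = (lam*I + A^T A)^{-1} A^T T, approximating A^+ T
    # (Eqs. (8)-(9)) as lam -> 0
    Wop = np.linalg.solve(lam * np.eye(A.shape[1]) + A.T @ A, A.T @ T_train)
    labels = np.argmax(bls_features(Y_test, Wfe, Wen) @ Wop, axis=1)
    P = np.eye(T_train.shape[1])[labels]    # Eq. (11): one-hot probability maps
    return labels, P
```

On well-separated synthetic classes this sketch reaches near-perfect accuracy, which is enough to illustrate the role of the ridge-regularized output weights.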

2.3. Correction to the Results of BLS Classification Based on Guided Filter

In the third step of the proposed method, the guided filter is applied to correct each probability map p_c under the guidance of the grey-scale guidance image V, yielding the output q_c (c = 1, 2, \ldots, C). V is obtained from the original HSI via the first principal component of PCA. The output of the guided filter [38] is a local linear transformation of the guidance image and has a good edge-preserving characteristic. At the same time, under the guidance of the guidance image, the output image becomes more structured and less smooth than the input image. For both grey-scale and high-dimensional images, the guided filter has low time complexity, regardless of the kernel size and the intensity range. In this step, the filtering output is Q = (q_1, q_2, \ldots, q_C), where q_c is the probability map that all pixels belong to class c. q_{i,c} \in q_c (i = 1, 2, \ldots, n), the probability that pixel i belongs to class c, can be expressed as a linear transformation of the guidance image in a window \omega_k centered at pixel k, as shown in Equation (12).
q_{i,c} = a_k V_i + b_k, \quad \forall i \in \omega_k \qquad (12)
where (a_k, b_k) are linear coefficients assumed to be constant in \omega_k, and \omega_k is a window of radius r. This local linear model guarantees that q_c has an edge only where V has an edge, because \nabla q_c = a_k \nabla V. The cost function in the window \omega_k, shown in Equation (13), is minimized; this both realizes the linear model of Equation (12) and minimizes the difference between q_c and p_c.
E(a_k, b_k) = \sum_{i \in \omega_k} \left( (a_k V_i + b_k - p_{i,c})^2 + \varepsilon a_k^2 \right) \qquad (13)
where \varepsilon, which defines the degree of blurring of the guided filter, is a regularization parameter penalizing large a_k. Equation (13) is a linear ridge regression model, solved by Equations (14) and (15).
a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} V_i p_{i,c} - \mu_k \bar{p}_{k,c}}{\sigma_k^2 + \varepsilon} \qquad (14)

b_k = \bar{p}_{k,c} - a_k \mu_k \qquad (15)
Here, \mu_k and \sigma_k^2 are the mean and variance of the guidance image in \omega_k, |\omega| is the number of pixels in \omega_k, and \bar{p}_{k,c} = \frac{1}{|\omega|} \sum_{i \in \omega_k} p_{i,c} is the mean of p_c in \omega_k.
Pixel i is involved in all the overlapping windows that cover it, so the value of q_{i,c} in Equation (12) differs between windows. q_{i,c} can be acquired by averaging all the possible values computed in the different windows. Thus, after computing a_k and b_k for all windows \omega_k in V, the output is calculated by Equation (16):
q_{i,c} = \frac{1}{|\omega|} \sum_{k:\, i \in \omega_k} (a_k V_i + b_k) \qquad (16)
Due to the symmetry of the window \omega_k, \sum_{k:\, i \in \omega_k} a_k = \sum_{k \in \omega_i} a_k, so Equation (16) can be expressed as Equation (17):
q_{i,c} = \bar{a}_i V_i + \bar{b}_i \qquad (17)
where \bar{a}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} a_k and \bar{b}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} b_k are the mean coefficients of all windows covering pixel i.
In fact, a_k in Equation (14) can be rewritten as a weighted sum of the input image p_c: a_k = \sum_j A_{k,j}(V)\, p_{j,c}, where A_{k,j} is a weight that depends only on the guidance image V. Similarly, b_k = \sum_j B_{k,j}(V)\, p_{j,c}. The kernel weight is explicitly expressed by:
\theta_{i,j}(V) = \frac{1}{|\omega|^2} \sum_{k:\, (i,j) \in \omega_k} \left( 1 + \frac{(V_i - \mu_k)(V_j - \mu_k)}{\sigma_k^2 + \varepsilon} \right) \qquad (18)
So, Equation (17) can be changed to Equation (19).
q_{i,c} = \sum_j \theta_{i,j}(V)\, p_{j,c} \qquad (19)
After the initial probability maps are corrected by the guided filter, pixel i has C probability values q_{i,c}, c = 1, 2, \ldots, C. The index of the highest of these C probabilities is taken as the label of pixel i, namely:
y_i^{GuF} = \arg\max_{c} q_{i,c}, \quad c = 1, 2, \ldots, C \qquad (20)
After the guided filter corrects the initial probability maps, the labels of all labeled samples of the HSI are Y^{GuF} = (y_1^{GuF}, y_2^{GuF}, \ldots, y_n^{GuF}). The superscript "GuF" represents the guided filtering operation.
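The closed-form solution in Equations (14)-(17) and the label rule in Equation (20) reduce to a handful of box-filter (windowed mean) operations. The following Python sketch is our illustrative implementation, not the authors' code: the function names and the use of scipy's uniform_filter for the box means are our choices, and V and each probability map are assumed to be 2-D arrays of the same shape.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(V, p, r, eps):
    """Guided filter on one probability map p with grey-scale guidance V.
    All windowed means (box filters of size 2r+1) are computed with
    uniform_filter; see Eqs. (14)-(17)."""
    size = 2 * r + 1
    box = lambda A: uniform_filter(A, size=size, mode='reflect')
    mu = box(V)                        # mean of V in each window
    var = box(V * V) - mu * mu         # variance sigma_k^2
    mean_p = box(p)                    # \bar{p}_{k,c}
    cov_Vp = box(V * p) - mu * mean_p
    a = cov_Vp / (var + eps)           # Eq. (14)
    b = mean_p - a * mu                # Eq. (15)
    return box(a) * V + box(b)         # Eq. (17): q = \bar{a} V + \bar{b}

def correct_labels(V, P, r=3, eps=1e-3):
    """Filter each class-probability map and take the arg-max label (Eq. (20)).
    P has shape (H, W, C); V is a 2-D guidance image of shape (H, W)."""
    Q = np.stack([guided_filter(V, P[:, :, c], r, eps)
                  for c in range(P.shape[2])], axis=2)
    return np.argmax(Q, axis=2)
```

On a synthetic two-region image, an isolated pixel misclassified deep inside one region is pulled back to the majority label after filtering, which is exactly the correction behavior the paper relies on.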
In summary, the algorithmic steps of HSI classification based on SSBLS are summarized in Algorithm 1.
Algorithm 1. Algorithmic details of SSBLS
1. Input: original HSI dataset X; S, the size of the Gaussian filter window; \sigma, the standard deviation of the Gaussian function; N, the number of training samples; M, the number of mapped feature windows; F, the number of mapped feature nodes per window; E, the number of enhancement nodes; r, the radius of the guided filter window \omega_k; \varepsilon, the penalty parameter of the guided filter.
2. Select the optimal parameters S and \sigma, apply the Gaussian filter to smooth each spectral band of the original HSI data X, and obtain Y^{GaF}.
3. Randomly select N samples from the labeled samples of each class of the HSI as the training set Y_{train}^{GaF}; the remaining labeled samples form the test set Y_{test}^{GaF}.
4. Randomly generate W_i^{fe} with F columns and \beta_i^{fe}, fine-tune W_i^{fe} using the sparse autoencoder, and obtain Z_{train,i} from Y_{train}^{GaF}, W_i^{fe}, \beta_i^{fe} and Equation (3), where i = 1, 2, \ldots, M and F = e.
5. Set the random weights W^{fe} = [W_1^{fe}, \ldots, W_M^{fe}], the random biases \beta^{fe} = [\beta_1^{fe}, \ldots, \beta_M^{fe}], and the feature mapping group Z_{train}^{d} = [Z_{train,1}, \ldots, Z_{train,M}], where M = d.
6. Randomly generate W^{en} with F \times M rows and E columns and \beta^{en}, and map Z_{train}^{d} into H_{train}^{l} using Equation (4).
7. Obtain the optimal connecting weights W^{op} according to Equations (8) and (9).
8. Map Y_{test}^{GaF} into Z_{test}^{d} with W^{fe} and \beta^{fe} using Equation (3), map Z_{test}^{d} into H_{test}^{l} by Equation (4) with W^{en} and \beta^{en}, and compute the test sample labels Y_{test}^{BLS} with W^{op} and Equation (5).
9. Construct the initial probability maps of the entire HSI from the labels of the training and test samples, namely P = (p_1, p_2, \ldots, p_C).
10. Generate the grey-scale guidance image V from the original HSI via the first principal component of PCA. According to Equations (18) and (19) and the optimal parameters r and \varepsilon, correct each initial probability map p_c to obtain the final probability maps Q = (q_1, q_2, \ldots, q_C), where c = 1, 2, \ldots, C.
11. According to Equation (20) and the maximum probability principle, obtain Y^{GuF}, the classification results of all samples, and obtain the test sample labels after removing the training samples.
12. Output: the test sample labels.
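Step 10's grey-scale guidance image, the first principal component of the original HSI, can be sketched as follows. The cube layout (height, width, bands), the SVD-based PCA, the function name, and the normalization to [0, 1] are our illustrative choices, not details from the paper:

```python
import numpy as np

def first_pc_guidance(cube):
    """Project each pixel's spectrum onto the first principal component of
    the HSI cube (H, W, B) to obtain the 2-D grey-scale guidance image V."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    Xc = X - X.mean(axis=0)                      # center the spectra
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = (Xc @ Vt[0]).reshape(H, W)             # scores on the first PC
    spread = np.ptp(pc1)
    return (pc1 - pc1.min()) / (spread if spread > 0 else 1.0)
```

The resulting image is then passed to the guided filter as V; rescaling to [0, 1] simply keeps the guidance intensities in a fixed range.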

3. Experiment Results

We assess the proposed SSBLS through extensive experiments. All experiments are performed in MATLAB R2014a on a computer with a 2.90 GHz Intel Core i7-7500U central processing unit (CPU), 32 GB of memory, and Windows 10.

3.1. Hyperspectral Image Dataset

The performance of the SSBLS method and other comparison methods is evaluated on three public hyperspectral datasets: the Indian Pines, Salinas, and Pavia University datasets (available at http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes, accessed on 4 November 2018).
The Indian Pines dataset was acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) sensor flying over the Indian Pines test site in North-western Indiana. The scene has 21,025 pixels and 200 bands, with wavelengths ranging from 0.4 to 2.5 μm. The image is two-thirds agriculture and one-third forest or other perennial natural vegetation. It contains two major two-lane highways, a railway line, some low-density housing, other built structures, and pathways, and has 16 land-cover classes. In our experiments, we selected the nine classes with more than 400 samples each. The original hyperspectral image and ground truth are given in Figure 2.
The Salinas scene was obtained by the 224-band AVIRIS sensor over the Salinas Valley, California, USA, with a high spatial resolution of 3.7 m. The dataset has 512 × 217 pixels with 204 bands after 20 water absorption bands were discarded. We used all 16 classes of samples in the scene. The original hyperspectral image and ground truth are given in Figure 3.
The Pavia University dataset was collected by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor over Pavia in northern Italy. The image has 610 × 340 pixels with 103 bands. Pixels containing no information were removed, and nine sample categories were used in our experiments. Figure 4 shows the original hyperspectral image, the category names with labeled samples, and the ground truth.

3.2. Parameters Analysis

The adjustable parameters of SSBLS are the size of the Gaussian filter window (S), the standard deviation of the Gaussian function (\sigma), the number of mapped feature windows in BLS (M), the number of mapped feature nodes per window in BLS (F), the number of enhancement nodes (E), the radius of the guided filter window \omega_k (r), and the penalty parameter of the guided filter (\varepsilon). These parameters are analyzed using the overall accuracy (OA) to evaluate the performance of SSBLS.

3.2.1. Influence of Parameter S and σ on OA

In this section, the influence of S and \sigma on OA was analyzed on the three datasets. S and \sigma took different values while the other parameters were fixed: M = 20, F = 40, E = 1000, r = 2, \varepsilon = 0.001. S was chosen from [2, 4, 6, \ldots, 30] and \sigma from [1, 2, 3, \ldots, 15] on the Indian Pines and Salinas datasets. On the Pavia University dataset, S and \sigma were chosen from [1, 3, 5, \ldots, 29] and [1, 2, 3, \ldots, 15], respectively. The mean OAs of ten experiments are shown in Figure 5. As S and \sigma increased, the OAs gradually increased and then gradually decreased after reaching a peak. If S is too small, a larger target is divided into multiple parts distributed across different Gaussian filter windows; if S is too large, the window contains multiple small targets. Both cause misclassification. When \sigma is too small, the weights change drastically from the center to the boundary; as \sigma becomes larger, the weights change smoothly and become relatively uniform across the window, approaching a mean filter. Therefore, the optimal values of S and \sigma differ between HSI datasets. On the Indian Pines dataset, the OA was largest when S = 18 and \sigma = 7, so S and \sigma were set to 18 and 7, respectively, in the subsequent experiments. On the Salinas dataset, SSBLS performed best when S = 24 and \sigma = 7, so S and \sigma were set to 24 and 7, respectively. Similarly, the best values of S and \sigma were 21 and 4, respectively, on the Pavia University dataset.

3.2.2. Influence of Parameters M and F on OA

The experiments were carried out on the three datasets. The values of S and σ were the optimal values obtained from the above analysis, with E = 1000, r = 2, and ε = 0.001. In the Indian Pines and Pavia University datasets, M and F were chosen from [2, 4, 6, …, 20] and [2, 6, 10, …, 38], respectively. The value ranges of M and F were [2, 4, 6, …, 20] and [4, 8, 12, …, 40] in the Salinas dataset. As shown in Figure 6, the OAs of SSBLS gradually grew as M and F became larger. When M and F were too small, less feature information was extracted and the mean OA of ten experiments was lower. When M and F were too large, the performance of SSBLS improved, but the computation and consumed time also rose. Therefore, in the subsequent experiments, the best values of M and F were 6 and 34, respectively, in the Indian Pines dataset, 12 and 36 in the Salinas dataset, and 8 and 26 in the Pavia University dataset.
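The roles of M and F can be illustrated with a minimal sketch of the BLS mapped-feature layer: M windows, each projecting the spectra through its own random weights into F nodes. The paper additionally fine-tunes these weights with a sparse autoencoder, which this sketch omits, and the tanh mapping is only an illustrative choice:

```python
import numpy as np

def mapped_features(X, n_windows, nodes_per_window, rng):
    """Generate the BLS mapped-feature layer: M windows of F nodes each.

    Each window applies its own random projection of the input spectra.
    The sparse-autoencoder fine-tuning used in the paper is omitted.
    """
    groups = []
    for _ in range(n_windows):
        W = rng.standard_normal((X.shape[1], nodes_per_window))
        beta = rng.standard_normal(nodes_per_window)
        groups.append(np.tanh(X @ W + beta))  # one mapped-feature window
    return np.hstack(groups)                  # shape (n_samples, M * F)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 40))            # toy spectra: 100 pixels, 40 bands
Z = mapped_features(X, n_windows=6, nodes_per_window=34, rng=rng)
```

With M = 6 and F = 34 (the optimal Indian Pines values above), each pixel is represented by 6 × 34 = 204 mapped-feature nodes.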

3.2.3. Influence of Parameter E on OA

In the three datasets, S, σ, M, and F were set to the optimal values obtained from the above experiments, and r and ε were 2 and 10−3, respectively. E was chosen from [500, 550, 600, …, 1200] in the Indian Pines dataset, and from [50, 100, 150, …, 800] in the Salinas and Pavia University datasets. In the three datasets, the average OAs of ten experiments trended upward with increasing E, as shown in Figure 7. As E grew, the number of features extracted by BLS increased, but the computation and consumed time also grew. Therefore, the number of enhancement nodes was set to 1050 in the Indian Pines dataset and 700 in both the Salinas and Pavia University datasets.
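How the E enhancement nodes extend the mapped features, and how the output weights are then solved by ridge-regularized least squares, can be sketched following the standard BLS formulation. The weight scales, the tanh activation, the regularization value, and the toy data are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def bls_fit(Z, Y, n_enhance, reg=1e-8):
    """Append E enhancement nodes to the mapped features Z and solve the
    output weights by ridge-regularized least squares. Y holds one-hot
    labels; the random-weight scales are illustrative.
    """
    Wh = rng.standard_normal((Z.shape[1], n_enhance))
    bh = rng.standard_normal(n_enhance)
    H = np.tanh(Z @ Wh + bh)                  # enhancement nodes
    A = np.hstack([Z, H])                     # broad feature layer [Z | H]
    # W = (A^T A + reg I)^{-1} A^T Y  (ridge-regularized pseudoinverse)
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wh, bh, W

def bls_predict(Z, Wh, bh, W):
    A = np.hstack([Z, np.tanh(Z @ Wh + bh)])
    return A @ W                              # soft class scores

Z = rng.standard_normal((200, 50))            # toy mapped features
Y = np.eye(4)[rng.integers(0, 4, 200)]        # one-hot labels, 4 classes
Wh, bh, W = bls_fit(Z, Y, n_enhance=700)
scores = bls_predict(Z, Wh, bh, W)
```

Increasing `n_enhance` widens the feature layer A, which is why larger E tends to raise accuracy at the cost of extra computation.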

3.2.4. Influence of Parameter r on OA

The experiments were carried out on the three datasets. The values of S, σ, M, F, and E were the optimal values analyzed previously, ε was 10−3, and r was chosen from [1, 2, 3, …, 9]. Figure 8 indicates that as r grew, the average OAs of ten experiments first increased and then decreased. In the Indian Pines dataset, the mean OA was largest at r = 3, so r was set to 3. In the Salinas dataset, SSBLS performed best at r = 5, so r was set to 5. In the Pavia University dataset, the average OA was greatest at r = 3, so r was set to 3.

3.2.5. Influence of Parameter ε on OA

In the three datasets, S, σ, M, F, E, and r were the optimal values obtained in the above experiments. The value range of ε was [10−7, 10−6, 10−5, …, 1]. In the Indian Pines and Salinas datasets, as ε increased, the mean OAs first increased and then decreased, as shown in Figure 9. In the Indian Pines dataset, the average OA was largest at ε = 10−3, so ε was set to 10−3 in the subsequent comparison experiments. In the Salinas dataset, SSBLS performed best at ε = 10−1, so the optimal value of ε was 10−1. In the Pavia University dataset, the classification effect was best at ε = 10−7, so the best value of ε was 10−7.
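The roles of r and ε can be seen in a sketch of the gray-scale guided filter of He et al. [38], where r sets the box-window radius and ε penalizes the local linear coefficient (larger ε gives stronger smoothing). The box-filter implementation and the toy arrays are illustrative; in SSBLS, p would be one class-probability map and I the guidance image:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Gray-scale guided filter: smooth p under the guidance of I.

    r is the window radius, eps the penalty parameter analyzed above.
    """
    size = 2 * r + 1                              # box window side length
    mean = lambda x: uniform_filter(x, size=size, mode='reflect')
    mean_I, mean_p = mean(I), mean(p)
    var_I = mean(I * I) - mean_I ** 2
    cov_Ip = mean(I * p) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                    # larger eps -> more smoothing
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)                  # q = a_bar * I + b_bar

rng = np.random.default_rng(0)
I = rng.random((32, 32))                          # toy guidance image
q = guided_filter(I, I.copy(), r=3, eps=1e-3)     # p = I: output stays close to I
```

With p = I and small ε, the output reproduces the guidance almost exactly, which is the edge-preserving property SSBLS relies on when correcting the probability maps.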

3.3. Ablation Studies on SSBLS

We conducted several ablation experiments to investigate the behavior of SSBLS on the three datasets. In these experiments, we randomly took 200 labeled samples from each class as training samples and used the remaining labeled samples as test samples. We used OA, average accuracy (AA), and the kappa coefficient (Kappa) to measure the performance of the different methods, as shown in Table 2; the highest values are shown in bold.
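The three measures can all be computed from a confusion matrix; a minimal NumPy sketch (the function name and toy labels are illustrative):

```python
import numpy as np

def classification_scores(y_true, y_pred, n_classes):
    """Compute OA, AA, and Cohen's kappa from integer label vectors,
    the three measures reported in the tables. Assumes every class
    occurs at least once in y_true.
    """
    C = np.zeros((n_classes, n_classes))          # confusion matrix
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    n = C.sum()
    oa = np.trace(C) / n                          # overall accuracy
    aa = np.mean(np.diag(C) / C.sum(axis=1))      # mean per-class accuracy
    pe = (C.sum(axis=0) @ C.sum(axis=1)) / n**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 0, 2, 2])
oa, aa, kappa = classification_scores(y_true, y_pred, 3)
# -> oa = aa = 5/6 ≈ 0.833, kappa = 0.75
```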
First, we used only BLS to classify the original hyperspectral data. On the Salinas dataset the effect was good, with an OA of 91.98%; however, the results were unsatisfactory on the Indian Pines and Pavia University datasets.
Second, we disentangled the influence of the Gaussian filter on the classification results. We used the Gaussian filter to smooth the original HSI and then used BLS to classify, i.e., BLS based on the Gaussian filter (GBLS). The OA was about 20% higher than that of BLS in the Indian Pines dataset, about 7% higher in the Salinas dataset, and about 17% higher in the Pavia University dataset. These results show that the Gaussian filter helps to improve the classification accuracy.
Next, we used BLS to classify the original hyperspectral data and then applied the guided filter to rectify the misclassified pixels of BLS. The results in terms of OA, AA, and Kappa were also better than those of BLS, showing that the guided filter also plays a role in improving classification performance.
Finally, we used the proposed method for HSI classification. This method first uses the Gaussian filter to smooth the original spectral features based on the spatial information of the HSI, then classifies with BLS, and finally applies the guided filter to correct the pixels misclassified by BLS. The results are the best of the four methods, showing that both the Gaussian filter and the guided filter contribute to improved classification performance.
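The three stages can be sketched end-to-end on a toy cube. Note the stand-ins: a nearest-centroid classifier replaces BLS, and a mean filter on each class-probability map replaces the guided filter, so only the structure of the pipeline mirrors the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def ssbls_pipeline(cube, train_mask, labels, sigma=2.0, r=3):
    """Structural sketch of the SSBLS stages (with simplified stand-ins)."""
    # Stage 1: Gaussian-smooth every band in the spatial domain.
    smoothed = np.stack([gaussian_filter(cube[..., b], sigma)
                         for b in range(cube.shape[-1])], axis=-1)
    # Stage 2: classify each pixel's smoothed spectrum (BLS stand-in).
    classes = np.unique(labels[train_mask])
    centroids = np.stack([smoothed[train_mask & (labels == c)].mean(0)
                          for c in classes])
    d = ((smoothed[..., None, :] - centroids) ** 2).sum(-1)
    probs = np.exp(-d) / np.exp(-d).sum(-1, keepdims=True)  # soft labels
    # Stage 3: spatially filter each probability map, then re-decide.
    for k in range(len(classes)):
        probs[..., k] = uniform_filter(probs[..., k], size=2 * r + 1)
    return classes[probs.argmax(-1)]

rng = np.random.default_rng(0)
cube = rng.normal(0.0, 0.1, (20, 20, 5))
cube[:, 10:, :] += 1.0                       # two spectrally distinct regions
labels = np.zeros((20, 20), dtype=int)
labels[:, 10:] = 1
train_mask = rng.random((20, 20)) < 0.2      # ~20% labeled training pixels
pred = ssbls_pipeline(cube, train_mask, labels)
```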
From the above analysis, we know that the combination of Gaussian filtering and BLS greatly improves classification performance, especially on the Indian Pines and Pavia University datasets. Although the classification accuracy of BLS based on the Gaussian filter (GBLS) was already relatively high, it improved further after adding the guided filter to GBLS, indicating that the guided filter also helps improve classification accuracy.

3.4. Experimental Comparison

In order to demonstrate the advantages of SSBLS on the three real datasets, we compared SSBLS with SVM [65], HiFi-We [42], SSG [66], spectral-spatial hyperspectral image classification with edge-preserving filtering (EPF) [41], support vector machine based on the Gaussian filter (GSVM), feature extraction of hyperspectral images with image fusion and recursive filtering (IFRF) [67], LPP_LBP_BLS [19], BLS [50], and GBLS. All methods take the original HSI data as input, and the experimental parameters are set to their optimal values. In each experiment, 200 labeled samples per class were randomly selected as the training set, and the remaining labeled samples formed the test set. We report the individual classification accuracy (ICA), OA, AA, Kappa, overall consumed time (t), and test time (tt). All results are the mean values of ten experiments, as shown in Table 3, Table 4 and Table 5, and the highest values are shown in bold.
(1) Compared with the conventional SVM classifier, the performance of BLS was close to that of SVM on the Indian Pines and Salinas datasets. However, when BLS and SVM used the HSI data filtered by the Gaussian filter, GBLS clearly outperformed GSVM. In the Pavia University dataset, the OA of BLS was 16.56% lower than that of SVM; after filtering the Pavia University data with the Gaussian filter, the OA of GBLS was about 3% higher than that of GSVM. SSBLS had the best performance overall. The experimental results in Table 3, Table 4 and Table 5 illustrate that the combination of the Gaussian filter and BLS contributes to improving the classification accuracy.
(2) HiFi-We first extracts different spatial context information of the samples by hierarchical guidance filtering (HGF), which generates diverse sample sets. As the hierarchy level increases, the pixel spectral features become smoother and the spatial features are enhanced, and a series of classifiers can be obtained from the HGF outputs. Second, a matrix of spectral angle distances (mSAD) is defined to measure the diversity among training samples in each hierarchy. Finally, an ensemble strategy combines the individual classifiers and mSAD. This method achieved good performance, but its OA, AA, and Kappa were not as good as those of SSBLS. The main reason is that SSBLS exploits spectral-spatial joint features sufficiently in the three operations of the Gaussian filter, BLS, and the guided filter, which helps improve its accuracy.
(3) SSG assigns labels to unlabeled samples based on a graph method; it integrates the spatial information, spectral information, and cross-information between them through a complete composite kernel, forms a huge kernel matrix over the labeled and unlabeled samples, and finally applies the Nyström method for classification. The computational complexity of the huge kernel matrix is large, which increases the consumed time of classification. In contrast, SSBLS not only achieves higher OA than SSG but also takes less time.
(4) The EPF method adopts SVM for classification to construct the initial probability maps, and then uses the bilateral filter or the guided filter to refine these maps and improve the final classification accuracy. Its results were very good on the three real hyperspectral datasets, but SSBLS performed better. This is mainly because SSBLS first uses the Gaussian filter to extract the inherent spectral features based on spatial information, and additionally applies the guided filter to rectify the misclassified pixels of BLS based on the spatial context information.
(5) IFRF divides the HSI bands into multiple subsets of neighboring hyperspectral bands, fuses each subset by averaging, and finally uses transform-domain recursive filtering to extract features from each fused subset for classification with SVM. This method works very well, but SSBLS performed better: the mean OA of SSBLS was 1.03% higher than that of IFRF in the Indian Pines dataset, 0.24% higher in the Salinas dataset, and 1.5% higher in the Pavia University dataset. There are three reasons for this. First, when SSBLS uses the Gaussian filter to smooth the HSI spectral features based on the spatial information, the weight of each neighboring pixel decreases with its distance from the center pixel in the Gaussian filter window, so the filtering removes noise. Second, in SSBLS the integration of the Gaussian filter and BLS helps extract sparse and compact spectral features fused with spatial features, achieving outstanding classification accuracy. Third, SSBLS applies the guided filter based on the spatial context information to rectify misclassified hyperspectral pixels, improving the final classification accuracy.
(6) The LPP_LBP_BLS method uses LPP to reduce the dimensionality of the HSI in the spectral domain, then uses LBP to extract spatial features in the spatial domain, and finally classifies with BLS. Its performance was very good, but it has two disadvantages. First, the LBP operation greatly increases the number of processed spectral-spatial features; for example, with 50 spectral bands per pixel after dimensionality reduction, each pixel has 2950 spectral-spatial features after the LBP operation. Second, LPP_LBP_BLS worked very well on the Indian Pines and Salinas datasets, but its mean OA only reached 97.14% on the Pavia University dataset, indicating that the method is somewhat data-selective and not robust enough. The average OAs of SSBLS on the three datasets are all above 99.49%. In the Indian Pines dataset, the mean OA is 99.83%, and the highest OA obtained during the experiments is 99.97%. In the Salinas dataset, the average OA is 99.96%, and the highest OA sometimes reaches 100%. This shows that SSBLS is more robust, especially on the Pavia University dataset. As the parameters change, the OAs change regularly, as shown in Figure 5c and Figure 6c.
(7) Compared with BLS and GBLS, it can be seen in Table 3, Table 4 and Table 5 that BLS had an unsatisfactory classification effect when using only the original HSI data; when GBLS adopted the spectral features smoothed by the Gaussian filter, its OA was greatly improved. This indicates that the combination of Gaussian filtering and BLS contributes to improved classification accuracy. The classification accuracy of SSBLS was higher than those of BLS and GBLS because SSBLS applies the guided filter based on the spatial contextual information to rectify the misclassified pixels, further improving the classification accuracy.
In summary, on the three datasets, the OA, AA, and Kappa of SSBLS were better than those of the nine other comparison methods, as can be clearly seen from Figure 10, Figure 11 and Figure 12. From Table 3, Table 4 and Table 5, it can be seen that the execution time of SSBLS was shorter than those of SVM, HiFi-We, SSG, EPF, GSVM, IFRF, and LPP_LBP_BLS, and the pretreatment and training time of SSBLS was shorter than those of HiFi-We, SSG, EPF, IFRF, and LPP_LBP_BLS.

4. Discussion

The experimental results on the three public datasets indicate that SSBLS had the best performance among all compared methods in terms of the three measurements (OA, AA, and Kappa). There are three main reasons for this. First, the combination of the Gaussian filter and BLS contributes to the improvement of SSBLS classification accuracy: the Gaussian filter effectively fuses the spectral and spatial features of the HSI to extract the inherent spectral characteristics of each pixel, while BLS expresses the smoothed spectral information as sparse, compact features in the feature-mapping process using random weight matrices fine-tuned by the sparse autoencoder, which also improves the classification accuracy. It can be clearly seen from Table 3, Table 4 and Table 5 that the performances of GBLS and SSBLS using the HSI data smoothed by the Gaussian filter were greatly improved. Second, SSBLS takes full advantage of spectral-spatial joint features: the Gaussian filter first smooths each band in the spectral domain based on the spatial information, achieving the first fusion of spectral and spatial information, and the guided filter then corrects the BLS classification results under the guidance of a gray-scale guidance image, obtained as the first principal component of the original HSI via PCA. These operations join the spectral features and spatial information together sufficiently. Third, SSBLS applies the guided filter to rectify misclassified HSI pixels, further enhancing its classification accuracy.
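The guidance image described here, the first principal component of the original HSI, can be sketched in plain NumPy; the min-max rescaling to [0, 1] is an illustrative choice rather than the paper's stated normalization:

```python
import numpy as np

def guidance_image(cube):
    """First principal component of the HSI, rescaled to [0, 1], as a
    gray-scale guidance image for the guided filter.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    Xc = X - X.mean(axis=0)                        # center the spectra
    # leading eigenvector of the band covariance matrix (eigh: ascending order)
    _, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    pc1 = (Xc @ vecs[:, -1]).reshape(rows, cols)   # project onto leading PC
    return (pc1 - pc1.min()) / (np.ptp(pc1) + 1e-12)

rng = np.random.default_rng(0)
cube = rng.random((16, 16, 8))                     # toy stand-in for an HSI cube
guide = guidance_image(cube)
```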

5. Conclusions

To take full advantage of spectral-spatial joint features to improve HSI classification accuracy, we proposed the SSBLS method in this paper. The method consists of three parts. First, the Gaussian filter smooths each spectral band based on the spatial information of the HSI to remove noise and fuse the spectral and spatial information. Second, the optimal BLS model is obtained by training BLS on the spectral features smoothed by the Gaussian filter, and the test sample labels are computed to construct the initial probability maps. Finally, the guided filter is applied to rectify the misclassified pixels of BLS based on the HSI spatial context information to improve the classification accuracy. The experimental results on the three public datasets show that the proposed method outperforms the other methods (SVM, HiFi-We, SSG, EPF, GSVM, IFRF, LPP_LBP_BLS, BLS, and GBLS) in terms of OA, AA, and Kappa.
The proposed method is a supervised learning classifier that requires many labeled samples. However, the number of labeled HSI samples is very limited, and labeling unlabeled samples is costly. Therefore, the next step is to study a semi-supervised learning classification method to improve the semi-supervised classification accuracy of HSI.

Author Contributions

All of the authors made significant contributions to the work. G.Z. and Y.C. conceived and designed the experiments; G.Z., X.W., Y.K., and Y.C. performed the experiments; G.Z., X.W., Y.K., and Y.C. analyzed the data; G.Z. wrote the original paper; X.W., Y.K., and Y.C. reviewed and edited the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 61772532, Grant 61976215, and Grant 61703219.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Dicker, D.; Lerner, J.; Van Belle, P.; Barth, S.F.; Guerry, D.; Herlyn, M.; El-Deiry, W.S. Differentiation of normal skin and melanoma using high resolution hyperspectral imaging. Cancer Biol. Ther. 2006, 5, 1033–1038.
2. Chawira, M.; Dube, T.; Gumindoga, W. Remote sensing based water quality monitoring in Chivero and Manyame lakes of Zimbabwe. Phys. Chem. Earth 2013, 66, 38–44.
3. Polak, A.; Kelman, T.; Murray, P.; Marshall, S.; Stothard, D.J.M.; Eastaugh, N.; Eastaugh, F. Hyperspectral imaging combined with data classification techniques as an aid for artwork authentication. J. Cult. Herit. 2017, 26, 1–11.
4. Wu, T.; Cheng, Q.; Wang, J.; Cui, S.; Wang, S. The discovery and extraction of Chinese ink characters from the wood surfaces of the Huangchangticou tomb of Western Han Dynasty. Archaeol. Anthropol. Sci. 2019, 11, 4147–4155.
5. Slavkovikj, V.; Verstockt, S.; De, N.W.; Van, H.S.; Rik, V.D.W. Unsupervised spectral sub-feature learning for hyperspectral image classification. Int. J. Remote Sens. 2016, 37, 309–326.
6. Chang, C.I.; Du, Q.; Sun, T.L.; Althouse, M.L.G.A. Joint Band Prioritization and Band-Decorrelation Approach to Band Selection for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2631–2641.
7. Samat, A.; Persello, C.; Gamba, P.; Liu, S.; Abuduwaili, J.; Li, E. Supervised and Semi-Supervised Multi-View Canonical Correlation Analysis Ensemble for Heterogeneous Domain Adaptation in Remote Sensing Image Classification. Remote Sens. 2017, 9, 337.
8. Sugiyama, M. Dimensionality Reduction of Multimodal Labeled Data by Local Fisher Discriminant Analysis. J. Mach. Learn. Res. 2007, 8, 1027–1061.
9. Shao, Z.; Zhang, L. Sparse Dimensionality Reduction of Hyperspectral Image Based on Semi-Supervised Local Fisher Discriminant Analysis. Int. J. Appl. Earth Obs. Geoinf. 2014, 31, 122–129.
10. Chen, S.; Zhang, D. Semisupervised Dimensionality Reduction with Pairwise Constraints for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2011, 8, 369–373.
11. Prasad, S.; Bruce, L.M. Limitations of principal components analysis for hyperspectral target recognition. IEEE Geosci. Remote Sens. Lett. 2008, 5, 625–629.
12. Wang, J.; Chang, C.I. Applications of independent component analysis in endmember extraction and abundance quantification for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2601–2616.
13. He, X.; Niyogi, P. Locality Preserving Projections. In Proceedings of the 17th Annual Conference on Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, 8 December 2003; Thrun, S., Saul, K., Scholkopf, B., Eds.; MIT Press: Cambridge, MA, USA, 2004.
14. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of hyperspectral images with regularized linear discriminant analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873.
15. Li, J.; Qian, Y. Dimension Reduction of Hyperspectral Images with Sparse Linear Discriminant Analysis. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; IEEE: New York, NY, USA, 2011.
16. Cai, D.; He, X.; Han, J. Semi-Supervised Discriminant Analysis. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; IEEE: New York, NY, USA, 2007.
17. Sugiyama, M.; Idé, T.; Nakajima, S.; Sese, J. Semi-Supervised Local Fisher Discriminant Analysis for Dimensionality Reduction. Mach. Learn. 2010, 78, 35–61.
18. Kianisarkaleh, A.; Ghassemian, H. Spatial-spectral locality preserving projection for hyperspectral image classification with limited training samples. Int. J. Remote Sens. 2016, 37, 5045–5059.
19. Zhao, G.; Wang, X.; Cheng, Y. Hyperspectral Image Classification based on Local Binary Pattern and Broad Learning System. Int. J. Remote Sens. 2020, 44, 9393–9417.
20. He, L.; Chen, X.; Li, J.; Xie, X. Multiscale Superpixelwise Locality Preserving Projection for Hyperspectral Image Classification. Appl. Sci. 2019, 9, 2161.
21. Deng, Y.; Li, H.; Pan, L.; Shao, L.; Du, Q.; Emery, W.J. Modified Tensor Locality Preserving Projection for Dimensionality Reduction of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 277–281.
22. Zhai, Y.; Zhang, L.; Wang, N.; Yi, G.; Tong, Q. A Modified Locality-Preserving Projection Approach for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1059–1063.
23. Dong, W.; Xiao, S.; Li, Y. Hyperspectral pansharpening based on guided filter and Gaussian filter. J. Vis. Commun. Image Represent. 2018, 53, 171–179.
24. Kishore, R.K.; Saradhi, V.G.P.; Rajya, L.D. Spatial residual clustering and entropy based ranking for hyperspectral band selection. Int. J. Remote Sens. 2020, 53 (Suppl. 1), 82–89.
25. Li, Z.; Zhu, Q.; Wang, Y.; Zhang, Z.; Zhou, X.; Lin, A.; Fan, J. Feature extraction method based on spectral dimensional edge preservation filtering for hyperspectral image classification. Int. J. Remote Sens. 2019, 41, 90–113.
26. Tu, B.; Li, N.; Fang, L.; He, D.; Ghamisi, P. Hyperspectral Image Classification with Multi-Scale Feature Extraction. Remote Sens. 2019, 11, 534.
27. Shao, H.; Chen, Y.; Yang, Z.; Jiang, C.; Li, W.; Wu, H.; Wen, Z.; Wang, S.; Puttonen, E.; Hyyppa, J. A 91-Channel Hyperspectral LiDAR for Coal/Rock Classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1052–1056.
28. Moser, G.; Serpico, S.B.; Benediktsson, J.A. Land-cover mapping by Markov modeling of spatial-contextual information in very-high-resolution remote sensing images. Proc. IEEE 2013, 101, 631–651.
29. Pesaresi, M.; Benediktsson, J.A. A new approach for the morphological segmentation of high-resolution satellite imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 309–320.
30. Ghamisi, P.; Mura, M.D.; Benediktsson, J.A. A Survey on Spectral-Spatial Classification Techniques Based on Attribute Profiles. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2335–2353.
31. Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491.
32. Mura, M.D.; Benediktsson, J.A.; Waske, B.; Bruzzone, L. Morphological attribute profiles for the analysis of very high resolution images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3747–3762.
33. Liao, W.; Bellens, R.; Pizurica, A.; Philips, W.; Pi, Y. Classification of Hyperspectral Data over Urban Areas Based on Extended Morphological Profile with Partial Reconstruction. In Proceedings of the 14th International Conference on Advanced Concepts for Intelligent Vision Systems, Brno, Czech Republic, 4–7 September 2012; Blanc-Talon, J., Philips, W., Popescu, D., Scheunders, P., Zemcik, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2012.
34. Xia, J.; Mura, M.D.; Chanussot, J.; Du, P.; He, X. Random Subspace Ensembles for Hyperspectral Image Classification with Extended Morphological Attribute Profiles. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4768–4786.
35. Liao, W.; Dalla Mura, M.; Chanussot, J.; Bellens, R.; Philips, W. Morphological attribute profiles with partial reconstruction. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1738–1756.
36. Geiss, C.; Klotz, M.; Schmitt, A.; Taubenbock, H. Object-Based Morphological Profiles for Classification of Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5952–5963.
37. Samat, A.; Persello, C.; Liu, S.; Li, E.; Miao, Z.; Abuduwaili, J. Classification of VHR Multispectral Images Using ExtraTrees and Maximally Stable Extremal Region-Guided Morphological Profile. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 3179–3195.
38. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
39. Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Graph. 2008, 27, 67.
40. Rhemann, C.; Hosni, A.; Bleyer, M.; Rother, C.; Gelautz, M. Fast cost-volume filtering for visual correspondence and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 504–511.
41. Kang, X.; Li, S.; Benediktsson, J.A. Spectral-spatial hyperspectral image classification with edge-preserving filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2666–2677.
42. Pan, B.; Shi, Z.; Xu, X. Hierarchical Guidance Filtering-Based Ensemble Classification for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4177–4189.
43. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Mathieu, F.; Paolo, G.; Anthony, G.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, S110–S122.
44. Jain, V.; Phophalia, A. Exponential Weighted Random Forest for Hyperspectral Image Classification. In Proceedings of the 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, 28 July–2 August 2019; IEEE: New York, NY, USA, 2019.
45. Marpu, P.R.; Gamba, P.; Niemeyer, I. Hyperspectral data classification using an ensemble of class-dependent neural networks. In Proceedings of the 2009 First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Grenoble, France, 26–28 August 2009; IEEE: New York, NY, USA, 2009.
46. Peng, J.; Zhou, Y.; Chen, C.L.P. Region-Kernel-Based Support Vector Machines for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4810–4824.
47. Samat, A.; Du, P.; Liu, S.; Li, J.; Cheng, L. E2LMs: Ensemble extreme learning machines for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 1060–1069.
48. Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A Fast Dense Spectral–Spatial Convolution Network Framework for Hyperspectral Images Classification. Remote Sens. 2018, 10, 1068.
49. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063.
50. Chen, C.L.P.; Liu, Z. Broad learning system: An effective and efficient incremental learning system without the need for deep architecture. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 10–24.
51. Yao, H.; Zhang, Y.; Wei, Y.; Tian, Y. Broad Learning System with Locality Sensitive Discriminant Analysis for Hyperspectral Image Classification. Math. Probl. Eng. 2020, 2020, 1–16.
52. Chen, P. Minimum class variance broad learning system for hyperspectral image classification. IET Image Process. 2020, 14, 3039–3045.
53. Ye, H.; Li, H.; Chen, C.L.P. Adaptive Deep Cascade Broad Learning System and Its Application in Image Denoising. IEEE Trans. Cybern. 2020.
54. Chu, Y.; Lin, H.; Yang, L.; Zhang, D.; Diao, Y.; Fan, X.; Shen, C. Hyperspectral image classification based on discriminative locality broad. Knowl. Based Syst. 2020, 206, 1–17.
55. Wang, H.; Wang, X.; Chen, C.L.P.; Cheng, Y. Hyperspectral Image Classification Based on Domain Adaptation Broad Learning. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2020, 13, 3006–3018.
56. Kong, Y.; Wang, X.; Cheng, Y.; Chen, C.L.P. Hyperspectral Imagery Classification Based on Semi-Supervised Broad Learning System. Remote Sens. 2018, 10, 685.
57. Ji, N.N.; Zhang, J.S.; Zhang, C.X. A sparse-response deep belief network based on rate distortion theory. Pattern Recognit. 2014, 47, 3179–3191.
58. Teng, Y.; Zhang, Y.; Chen, Y.; Ti, C. Adaptive Morphological Filtering Method for Structural Fusion Restoration of Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 655–667.
59. Guo, H. A simple algorithm for fitting a Gaussian function [DSP tips and tricks]. IEEE Signal Process. Mag. 2011, 28, 134–137.
60. Chen, C.L.P.; Wan, J.Z. A rapid learning and dynamic stepwise updating algorithm for flat neural networks and the application to time-series prediction. IEEE Trans. Syst. Man Cybern. Part B 1999, 29, 62–72.
61. Pao, Y.H.; Takefuji, Y. Functional-link net computing: Theory, system architecture, and functionalities. Computer 1992, 25, 76–79.
62. Chen, C.L.P. A rapid supervised learning neural network for function interpolation and approximation. IEEE Trans. Neural Netw. 1996, 7, 1220–1230.
63. Igelnik, B.; Pao, Y.H. Stochastic choice of basis functions in adaptive function approximation and the functional-link net. IEEE Trans. Neural Netw. 1995, 6, 1320–1329.
64. Pao, Y.H.; Park, G.H.; Sobajic, D.J. Learning and generalization characteristics of the random vector functional-link net. Neurocomputing 1994, 6, 163–180.
65. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
66. Camps-Valls, G.; Marsheva, T.V.B.; Zhou, D. Semi-supervised graph-based hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3044–3054.
67. Kang, X.; Li, S.; Benediktsson, J.A. Feature Extraction of Hyperspectral Images with Image Fusion and Recursive Filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3742–3752.
Figure 1. The flowchart of hyperspectral image (HSI) classification via the spectral-spatial joint classification broad learning system (SSBLS).
Figure 2. Indian Pines dataset with (a) original hyperspectral image, (b) ground truth, and (c) category names with labeled samples.
Figure 3. Salinas dataset with (a) original hyperspectral image, (b) ground truth, and (c) category names with labeled samples.
Figure 4. Pavia University dataset with (a) original hyperspectral image, (b) ground truth, and (c) category names with labeled samples.
Figure 5. The relationship of the Gaussian filter window ( S ), standard deviation of the Gaussian function ( σ ), and overall accuracy (OA) in the three datasets. (a) Indian Pines; (b) Salinas; (c) Pavia University.
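As Figure 5 indicates, the first SSBLS stage smooths every spectral band with a 2-D Gaussian kernel controlled by the window size S and the standard deviation σ. A minimal sketch of that per-band smoothing with NumPy/SciPy; the cube size, band count, and σ value here are illustrative assumptions, not the paper's tuned settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_bands(cube, sigma):
    """Apply a 2-D Gaussian filter to each band of an HSI cube of shape (H, W, B).

    SciPy sizes the kernel from sigma and its `truncate` parameter
    (window ~ 2 * truncate * sigma + 1), which plays the role of S in Figure 5.
    """
    out = np.empty(cube.shape, dtype=float)
    for b in range(cube.shape[-1]):
        out[..., b] = gaussian_filter(cube[..., b].astype(float), sigma=sigma)
    return out

# Illustrative noisy 3-band cube
rng = np.random.default_rng(0)
cube = rng.normal(size=(16, 16, 3))
smoothed = smooth_bands(cube, sigma=1.5)
```

Because the filter is applied band by band, only spatial neighborhoods are mixed; the spectral dimension is left untouched for the BLS classifier.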
Figure 6. The relationship of the number of mapped feature windows in BLS ( M ), the number of mapped feature nodes per window in BLS ( F ), and the OA in the three datasets. (a) Indian Pines; (b) Salinas; (c) Pavia University.
Figure 7. The relationship of the number of enhancement nodes ( E ) and OA in the three datasets. (a) Indian Pines; (b) Salinas; (c) Pavia University.
Figure 8. The relationship of OA and r in the three datasets.
Figure 9. The relationship of OA and ε in the three datasets.
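The r and ε swept in Figures 8 and 9 are the guided filter's window radius and regularization constant. A self-contained NumPy/SciPy sketch of the standard single-channel guided filter (He et al.) used for edge-preserving correction; this is an illustrative implementation under the usual box-filter formulation, not the paper's exact code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Guided filter: guide image I, input p, window radius r, regularizer eps.

    Fits a local linear model p ~ a*I + b in each (2r+1)x(2r+1) window;
    eps penalizes large a (larger eps -> stronger smoothing).
    """
    size = 2 * r + 1
    mean = lambda x: uniform_filter(np.asarray(x, dtype=float), size=size)
    mean_I, mean_p = mean(I), mean(p)
    var_I = mean(I * I) - mean_I**2
    cov_Ip = mean(I * p) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)

# Sanity check: with I = p and eps -> 0 the filter is near the identity
rng = np.random.default_rng(1)
guide = rng.normal(size=(32, 32))
out = guided_filter(guide, guide, r=2, eps=1e-8)
```

In SSBLS this filter would be applied to the per-class probability (score) maps, with a grayscale guide built from the HSI, before taking the arg-max label.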
Figure 10. Classification maps of the Indian Pines dataset. (a) SVM 80.68%; (b) HiFi-We 86.88%; (c) SSG 69.14%; (d) EPF 95.83%; (e) GSVM 95.95%; (f) IFRF 98.86%; (h) LPP_LBP_BLS 99.74%; (i) BLS 78.58%; (j) GBLS 99.39%; (k) SSBLS 99.97%.
Figure 11. Classification maps of the Salinas dataset. (a) SVM 91.89%; (b) HiFi-We 91.10%; (c) SSG 81.46%; (d) EPF 95.92%; (e) GSVM 96.93%; (f) IFRF 98.86%; (h) LPP_LBP_BLS 99.83%; (i) BLS 92.00%; (j) GBLS 99.84%; (k) SSBLS 99.99%.
Figure 12. Classification maps of the Pavia University dataset. (a) SVM 91.89%; (b) HiFi-We 91.10%; (c) SSG 81.46%; (d) EPF 95.92%; (e) GSVM 96.93%; (f) IFRF 98.86%; (h) LPP_LBP_BLS 99.83%; (i) BLS 92.00%; (j) GBLS 99.84%; (k) SSBLS 99.59%.
Table 1. The meaning of notations in the BLS classification procedure.

| Notation | Meaning |
|---|---|
| Y^GaF | The HSI data smoothed by the Gaussian filter |
| Z_i | The i-th group of mapped features, with e nodes |
| φ_i | The i-th mapping function for feature mapping |
| W_i^fe | The i-th random weight matrix for feature mapping |
| β_i^fe | The i-th random bias for feature mapping |
| Z^i | The concatenation of the first i groups of mapped features, i = 1, …, d |
| Z^d | All mapped feature nodes |
| H_l | The l-th group of enhancement nodes |
| ξ_l | The l-th function for computing the l-th group of enhancement nodes |
| W_l^en | The l-th random weight matrix for computing the l-th group of enhancement nodes |
| β_l^en | The l-th random bias for computing the l-th group of enhancement nodes |
| H^l | The concatenation of the first l groups of enhancement nodes |
| W^op | The connecting weight matrix from all mapped feature nodes and enhancement nodes to the output |
| Y_BLS | The output of BLS |
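The notation in Table 1 can be made concrete with a small NumPy sketch of the BLS forward pass: random feature mappings Z_i, enhancement nodes H_l, and a ridge-regression solve for the output weights W^op. All dimensions and the identity/tanh activation choices below are illustrative assumptions, not the paper's tuned configuration:

```python
import numpy as np

rng = np.random.default_rng(42)
n, dim, classes = 200, 30, 3           # samples, spectral bands, classes (illustrative)
X = rng.normal(size=(n, dim))          # stands in for Y^GaF (smoothed spectra)
Y = np.eye(classes)[rng.integers(0, classes, n)]   # one-hot labels

d, e = 4, 10    # d mapped-feature windows, e nodes each (M and F in Figure 6)
m, q = 1, 50    # enhancement groups and nodes per group (E in Figure 7)

# Mapped features: Z_i = phi_i(X W_i^fe + beta_i^fe); phi_i taken as identity here
Zs = [X @ rng.normal(size=(dim, e)) + rng.normal(size=e) for _ in range(d)]
Zd = np.hstack(Zs)                     # Z^d: all mapped feature nodes

# Enhancement nodes: H_l = xi_l(Z^d W_l^en + beta_l^en); xi_l taken as tanh here
Hs = [np.tanh(Zd @ rng.normal(size=(Zd.shape[1], q)) + rng.normal(size=q))
      for _ in range(m)]
A = np.hstack([Zd] + Hs)               # [Z^d | H^m], the full state matrix

# W^op via regularized pseudo-inverse (ridge regression)
lam = 1e-3
Wop = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
Y_bls = A @ Wop                        # Y_BLS: the network output
pred = Y_bls.argmax(axis=1)            # predicted class per sample
```

The closed-form solve is what keeps BLS training fast relative to deep networks: no gradient descent is needed, only one regularized least-squares problem.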
Table 2. The results of ablation experiments on the three datasets in terms of overall accuracy (OA, %), average accuracy (AA, %), and kappa coefficient (Kappa, %).

| Dataset | Metric | BLS | GaussianF+BLS | BLS+GuidedF | SSBLS |
|---|---|---|---|---|---|
| Indian Pines | OA | 78.32 | 99.32 | 96.69 | 99.83 |
| | AA | 80.66 | 99.19 | 96.84 | 99.86 |
| | Kappa | 74.29 | 99.18 | 96.04 | 99.80 |
| Salinas | OA | 91.98 | 99.74 | 96.84 | 99.96 |
| | AA | 96.26 | 99.80 | 98.82 | 99.97 |
| | Kappa | 91.04 | 99.71 | 96.46 | 99.95 |
| Pavia University | OA | 70.23 | 99.15 | 85.77 | 99.49 |
| | AA | 70.21 | 98.77 | 81.50 | 99.35 |
| | Kappa | 61.22 | 98.86 | 87.10 | 99.32 |
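The OA, AA, and Kappa values reported throughout these tables are standard functions of the confusion matrix. A short sketch of their computation (the 2-class counts below are made up for illustration):

```python
import numpy as np

def oa_aa_kappa(conf):
    """Compute overall accuracy, average accuracy, and Cohen's kappa.

    conf[i, j] = number of samples with true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                        # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))     # mean per-class accuracy
    # Expected agreement by chance from the row/column marginals
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total**2
    kappa = (oa - pe) / (1 - pe)                       # chance-corrected agreement
    return oa, aa, kappa

conf = [[48, 2],    # illustrative counts: true class 1
        [5, 45]]    # true class 2
oa, aa, kappa = oa_aa_kappa(conf)   # oa = 0.93, aa = 0.93, kappa = 0.86
```

AA weights every class equally, so it penalizes poor performance on small classes that OA can mask; Kappa discounts agreement expected from the class marginals alone.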
Table 3. Classification results of all comparison methods on the Indian Pines dataset in terms of individual classification accuracy (ICA, %), overall accuracy (OA, %), average accuracy (AA, %), kappa coefficient (Kappa, %), overall consumed time (t, s), and test time (tt, s). SVM: support vector machine; HiFi-We: hierarchical guidance filtering-based ensemble classification for hyperspectral images; EPF: edge-preserving filtering; GSVM: Gaussian support vector machine; IFRF: image fusion and recursive filtering; LPP_LBP_BLS: locality preserving projections local binary pattern broad learning system.

| Class | SVM | HiFi-We | SSG | EPF | GSVM | IFRF | LPP_LBP_BLS | BLS | GBLS | SSBLS |
|---|---|---|---|---|---|---|---|---|---|---|
| C1 | 77.43 | 85.48 | 53.21 | 94.53 | 90.77 | 97.74 | 99.63 | 72.59 | 98.61 | 99.48 |
| C2 | 77.75 | 82.32 | 60.81 | 95.47 | 95.89 | 98.18 | 99.68 | 59.56 | 99.72 | 100.00 |
| C3 | 95.09 | 90.02 | 92.08 | 93.22 | 97.39 | 99.68 | 100.00 | 90.23 | 98.29 | 99.93 |
| C4 | 98.66 | 96.47 | 97.96 | 96.17 | 99.08 | 99.10 | 100.00 | 97.22 | 97.79 | 99.63 |
| C5 | 99.86 | 99.75 | 99.21 | 100.00 | 99.96 | 100.00 | 100.00 | 99.85 | 99.72 | 100.00 |
| C6 | 81.04 | 69.56 | 71.58 | 86.35 | 95.23 | 96.40 | 99.79 | 60.94 | 99.10 | 99.95 |
| C7 | 64.94 | 92.24 | 55.49 | 97.69 | 95.04 | 99.59 | 99.27 | 82.49 | 99.85 | 99.87 |
| C8 | 83.56 | 60.79 | 60.28 | 92.90 | 99.49 | 98.74 | 100.00 | 63.52 | 99.85 | 99.90 |
| C9 | 97.83 | 99.47 | 94.23 | 99.52 | 99.33 | 100.00 | 99.85 | 99.53 | 99.75 | 100.00 |
| OA | 80.31 | 86.14 | 69.09 | 95.38 | 95.84 | 98.80 | 99.74 | 78.32 | 99.32 | 99.83 |
| AA | 86.24 | 86.23 | 76.09 | 95.09 | 96.91 | 98.83 | 99.80 | 80.66 | 99.19 | 99.86 |
| Kappa | 76.79 | 83.61 | 63.80 | 94.48 | 95.00 | 98.56 | 99.64 | 74.29 | 99.18 | 99.80 |
| t (s) | 2.15 | 83.26 | 440.71 | 160.70 | 2.26 | 27.73 | 113.45 | 0.80 | 1.25 | 1.42 |
| tt (s) | 1.47 | 0.64 | 285.99 | 4.48 | 0.87 | 0.35 | 0.48 | 0.31 | 0.31 | 0.45 |
Table 4. Classification results of all comparison methods on the Salinas dataset in terms of individual classification accuracy (ICA, %), overall accuracy (OA, %), average accuracy (AA, %), kappa coefficient (Kappa, %), overall consumed time (t, s), and test time (tt, s).

| Class | SVM | HiFi-We | SSG | EPF | GSVM | IFRF | LPP_LBP_BLS | BLS | GBLS | SSBLS |
|---|---|---|---|---|---|---|---|---|---|---|
| C1 | 99.62 | 99.97 | 98.03 | 100.00 | 99.76 | 100.00 | 100.00 | 99.78 | 100.00 | 100.00 |
| C2 | 99.74 | 99.25 | 92.31 | 99.92 | 99.74 | 100.00 | 100.00 | 99.91 | 100.00 | 100.00 |
| C3 | 99.60 | 96.40 | 77.99 | 98.91 | 99.30 | 99.92 | 100.00 | 98.33 | 100.00 | 100.00 |
| C4 | 99.61 | 97.70 | 99.45 | 98.87 | 97.31 | 98.20 | 100.00 | 98.84 | 98.84 | 99.85 |
| C5 | 98.54 | 97.37 | 95.28 | 99.76 | 98.50 | 99.98 | 99.40 | 98.87 | 99.71 | 99.90 |
| C6 | 99.78 | 100.00 | 99.60 | 99.97 | 99.22 | 99.98 | 99.48 | 99.88 | 99.85 | 99.97 |
| C7 | 99.66 | 99.34 | 98.04 | 99.81 | 99.71 | 99.87 | 100.00 | 99.91 | 100.00 | 100.00 |
| C8 | 84.47 | 84.44 | 58.46 | 91.46 | 88.38 | 99.73 | 99.73 | 84.95 | 100.00 | 100.00 |
| C9 | 99.64 | 99.08 | 91.65 | 99.50 | 99.78 | 100.00 | 100.00 | 99.35 | 100.00 | 100.00 |
| C10 | 95.64 | 90.56 | 75.36 | 96.42 | 99.28 | 99.98 | 99.97 | 97.35 | 100.00 | 100.00 |
| C11 | 99.33 | 89.32 | 78.55 | 98.84 | 100.00 | 99.08 | 99.88 | 98.10 | 99.99 | 100.00 |
| C12 | 99.97 | 94.34 | 99.52 | 99.90 | 99.63 | 100.00 | 99.94 | 98.77 | 100.00 | 99.96 |
| C13 | 99.59 | 96.21 | 97.61 | 99.78 | 99.59 | 99.83 | 100.00 | 99.89 | 99.97 | 99.97 |
| C14 | 98.21 | 85.68 | 91.84 | 97.57 | 99.83 | 98.92 | 100.00 | 95.28 | 99.87 | 100.00 |
| C15 | 69.74 | 69.28 | 68.68 | 85.45 | 98.25 | 99.10 | 99.72 | 71.19 | 98.57 | 99.79 |
| C16 | 98.87 | 97.97 | 88.19 | 99.24 | 99.99 | 99.97 | 100.00 | 99.82 | 100.00 | 100.00 |
| OA | 91.87 | 90.31 | 81.45 | 95.63 | 96.88 | 99.72 | 99.83 | 91.98 | 99.74 | 99.96 |
| AA | 96.38 | 93.56 | 88.16 | 97.84 | 98.74 | 99.66 | 99.88 | 96.26 | 99.80 | 99.97 |
| Kappa | 90.90 | 89.17 | 79.37 | 95.11 | 96.51 | 99.68 | 99.81 | 91.04 | 99.71 | 99.95 |
| t (s) | 9.21 | 167.57 | 1308.90 | 317.40 | 9.53 | 57.13 | 217.34 | 4.11 | 5.06 | 6.10 |
| tt (s) | 7.54 | 3.10 | 136.23 | 16.33 | 7.21 | 1.20 | 4.96 | 2.15 | 2.20 | 3.16 |
Table 5. Classification results of all comparison methods on the Pavia University dataset in terms of individual classification accuracy (ICA, %), overall accuracy (OA, %), average accuracy (AA, %), kappa coefficient (Kappa, %), overall consumed time (t, s), and test time (tt, s).

| Class | SVM | HiFi-We | SSG | EPF | GSVM | IFRF | LPP_LBP_BLS | BLS | GBLS | SSBLS |
|---|---|---|---|---|---|---|---|---|---|---|
| C1 | 95.24 | 93.19 | 64.43 | 99.00 | 93.74 | 97.58 | 93.62 | 87.18 | 99.58 | 99.60 |
| C2 | 95.56 | 93.57 | 59.47 | 99.59 | 97.07 | 99.74 | 97.81 | 88.33 | 99.72 | 99.87 |
| C3 | 71.87 | 52.08 | 47.39 | 94.50 | 92.13 | 95.77 | 98.76 | 45.92 | 98.92 | 99.51 |
| C4 | 77.88 | 63.23 | 97.40 | 98.22 | 94.14 | 94.71 | 89.09 | 63.76 | 96.83 | 97.81 |
| C5 | 98.17 | 100.00 | 99.28 | 99.05 | 99.78 | 99.90 | 99.64 | 99.46 | 99.64 | 99.97 |
| C6 | 70.47 | 56.20 | 79.35 | 93.05 | 99.35 | 98.93 | 99.90 | 46.75 | 99.48 | 99.68 |
| C7 | 58.77 | 44.83 | 94.65 | 94.40 | 99.82 | 96.69 | 99.75 | 57.27 | 99.34 | 99.93 |
| C8 | 85.38 | 71.20 | 79.43 | 92.24 | 93.55 | 94.00 | 99.74 | 51.03 | 96.88 | 98.13 |
| C9 | 99.91 | 95.00 | 99.97 | 99.88 | 93.55 | 91.94 | 94.73 | 92.19 | 98.52 | 99.62 |
| OA | 86.79 | 76.87 | 69.20 | 97.52 | 96.17 | 97.99 | 97.14 | 70.23 | 99.15 | 99.49 |
| AA | 83.70 | 74.37 | 80.15 | 96.67 | 95.90 | 96.58 | 97.00 | 70.21 | 98.77 | 99.35 |
| Kappa | 82.62 | 70.33 | 61.72 | 96.67 | 94.88 | 97.31 | 95.95 | 61.22 | 98.86 | 99.32 |
| t (s) | 4.22 | 92.92 | 473.09 | 97.94 | 4.49 | 39.17 | 189.01 | 2.19 | 3.78 | 3.97 |
| tt (s) | 3.01 | 1.84 | 50.63 | 17.23 | 2.48 | 3.67 | 7.21 | 1.05 | 1.08 | 1.31 |
Zhao, G.; Wang, X.; Kong, Y.; Cheng, Y. Spectral-Spatial Joint Classification of Hyperspectral Image Based on Broad Learning System. Remote Sens. 2021, 13, 583. https://doi.org/10.3390/rs13040583