Article

Spectral-Spatial Classification of Hyperspectral Images: Three Tricks and a New Learning Setting

1 Institute for Computing and Information Science, Radboud University Nijmegen, 6525 EC Nijmegen, The Netherlands
2 Institute for Molecules and Materials, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands
3 Corbion, 4206 AC Gorinchem, The Netherlands
4 Faculty of Management, Science and Technology, Open University of The Netherlands, 6419 AT Heerlen, The Netherlands
* Authors to whom correspondence should be addressed.
Remote Sens. 2018, 10(7), 1156; https://doi.org/10.3390/rs10071156
Submission received: 18 May 2018 / Revised: 10 July 2018 / Accepted: 19 July 2018 / Published: 21 July 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Spectral-spatial classification of hyperspectral images has been the subject of many studies in recent years. When there are only a few labeled pixels for training and a skewed class label distribution, this task becomes very challenging because of the increased risk of overfitting when training a classifier. In this paper, we show that in this setting, a convolutional neural network with a single hidden layer can achieve state-of-the-art performance when three tricks are used: a spectral-locality-aware regularization term and smoothing- and label-based data augmentation. The shallow network architecture prevents overfitting in the presence of many features and few training samples. The locality-aware regularization forces neighboring wavelengths to have similar contributions to the features generated during training. The new data augmentation procedure favors the selection of pixels in smaller classes, which is beneficial for skewed class label distributions. The accuracy of the proposed method is assessed on five publicly available hyperspectral images, where it achieves state-of-the-art results. Like other spectral-spatial classification methods, we use the entire image (labeled and unlabeled pixels) to infer the class of its unlabeled pixels. To investigate the positive bias induced by the use of the entire image, we propose a new learning setting where unlabeled pixels are not used for building the classifier. Results show the beneficial effect of the proposed tricks also in this setting and substantiate the advantages of using labeled and unlabeled pixels from the image for hyperspectral image classification.

Graphical Abstract

1. Introduction

Hyperspectral images contain rich spectral information coming from contiguous spectral bands. In the spectral domain, pixels are represented by vectors for which each component is a measurement corresponding to specific wavelengths [1]. The length of the vector is equal to the number of spectral bands that the sensor collects. For hyperspectral images, several hundreds of spectral bands of the same scene are typically available, which form the features of a pixel. Current operational imaging systems provide images for various applications, e.g., in ecology, geology and precision agriculture [2].
A relevant task of hyperspectral image processing is classification, which aims at building a classifier using the pixel features in order to assign each pixel to one of a given set of classes [3]. Current state-of-the-art methods take a spectral-spatial approach, meaning that they use neighborhood information of labeled pixels. Spectral-spatial methods are based on diverse techniques, such as Markov random fields [4,5,6], discriminative feature construction [7,8,9,10,11,12], modification and fusion of classifiers [13,14], label propagation, active learning and semi-supervised learning [15,16], the use of external unlabeled data [17] and deep (convolutional) neural networks [18,19,20,21,22,23,24]. Furthermore, object-based methods utilize geometric features of the image extracted by means of segmentation techniques [25,26,27].
These methods achieve excellent performance on benchmark hyperspectral image classification tasks when a large number of labeled pixels for training is provided [18,19]. However, pixel labeling is an expensive task. Therefore, a problem of more practical relevance is to perform hyperspectral image classification with only a few manually-labeled pixels for training. A second problem is the inherent class unbalance of hyperspectral images, where some classes have many pixels, while other classes have only a few.
In this paper, we propose to tackle these problems using a simple shallow Convolutional Neural Network (CNN) and three ‘tricks’: spectral-locality-aware regularization, smoothing-based data augmentation and label-based data augmentation. The shallow architecture is used to prevent overfitting caused by the few labeled pixels and the many features. Locality-aware regularization forces neighboring wavelengths to have similar contributions to the generated features of the neural network. Smoothing-based data augmentation takes advantage of the spectra of neighboring pixels, and label-based data augmentation exploits labels of neighboring pixels in favor of small classes.
Extensive experiments indicate the effectiveness of the proposed method, which achieves comparable or better accuracy performance than existing methods, such as deep neural networks [28], multiple kernel learning [29], probabilistic class structure regularized sparse representation graph [30,31] and low-rank Gabor filtering [12] (see the results in Table 10).
Spectral-spatial methods exploit information from neighborhood pixels. Since the training and testing pixels are drawn from the same image, their features are likely to overlap in the spatial domain due to the shared source of information: for instance, [23] employed input patches, the central pixel of which is in the training set, and [12] applied Gabor filters to an L-size neighborhood of training pixels. As a consequence, the resulting learning setting used in spectral-spatial methods has an intrinsic positive bias induced by the overlap between training and test samples. In order to investigate such a bias, we consider also a non-overlapping learning setting, where only the labeled pixels initially selected for training are used for building a classifier.

Related Work

Below, we briefly mention a few selected spectral-spatial approaches and methods for hyperspectral image classification. We refer the reader to [32] for a recent survey of hyperspectral image classification methods.
We can divide methods for hyperspectral image classification into three broad categories: (1) pre-processing-based; (2) end-to-end methods; (3) hybrid methods. Pre-processing-based methods construct features prior to training a classifier. Recent methods in this category include the Discriminative Low-Rank Gabor Filtering (DLRGF) method by [12] for spectral-spatial feature extraction prior to classification, a deep CNN with 2D input patches and R-PCA [20] and a deep stacked auto-encoder with 2D input patches and PCA [33].
End-to-end methods learn features while training a classifier. These methods include (multiple) kernel learning methods, which use kernels to implicitly map the input space into a high-dimensional non-linear space (see the recent survey [29]), and sparse representation-based methods, like [30,31,34], which learn a sparse representation of test pixels as a linear combination of a few training samples from a given dictionary, where the corresponding sparse representation coefficients implicitly encode the class information. Hybrid methods involve multi-step procedures, which include pre- and/or post-processing steps. For instance, the superpixel-based graphical model by [35] consists of three steps: superpixel generation using the watershed segmentation algorithm after performing gradient fusion among multiple spectral bands; development of the superpixel-based graphical model with the aid of pixel-level attributes; and the loopy belief propagation algorithm applied at the superpixel level. Here, a superpixel is a group of spatially-connected similar pixels. Object-based methods segment an image and simultaneously try to assign a class to each segment [25,26,27].
Methods specifically related to the one we propose are based on convolutional neural networks and data augmentation. Due to the success of convolutional neural networks in image classification, a plethora of CNN-based methods for hyperspectral image classification have been proposed. They differ mainly in the architecture that they use, the specific loss function that is optimized and the representation of the input data, that is, as single pixels, patches of pixels, cubes of pixels, etc. Moreover, some CNN-based methods use preprocessing, often PCA, to either build a low-dimensional set of non-linear input features or to extract additional information (e.g., edge detection). These methods include [20], a deep CNN with 2D input patches and R-PCA; [33], a deep stacked auto-encoder with 2D input patches and PCA; [18], a contextual deep CNN; [36], a multi-hypothesis prediction; [12], a low-rank Gabor filtering method; [19], a deep CNN with 1D pixel spectra; [23], a deep CNN with 1D pixel spectra, 2D pixel patches or 3D pixel cubes; [21], a deep CNN with 1D pixel spectra; and [28], a deep CNN with a uniform smoothing kernel and 1D pixel spectra. Fortunately, the authors of the latter method shared the source code with us, which we could then use in our comparative experimental analysis.
Data augmentation is used to enhance the performance of deep neural networks for image classification. This approach has also been used in the context of hyperspectral image classification, in deep CNN-based methods. For instance, [37] used blocks of 5 × 5 pixels as samples and rotated and flipped the resulting training samples to enlarge the training set. In the deep CNN-based method by [38], the number of training samples was augmented four times by mirroring the training samples across the horizontal, vertical and diagonal axes. Our new data augmentation procedure is different because it takes into account the spatial locality of the data.
In [39], it has been observed that the dependence caused by overlap between the training and testing samples may be artificially enhanced by some spatial information processing techniques used in spectral-spatial classification methods, such as spatial filtering and morphological operators. Therefore, the authors introduced an alternative controlled random sampling strategy for spectral-spatial methods to reduce the overlap between training and testing samples and provided a more objective evaluation. However, the proposed strategy uses information on the class distribution, which may not be available in real-life scenarios. The non-overlapping learning setting that we propose overcomes this limitation.

2. Materials and Methods

A hyperspectral image is represented by a three-dimensional matrix of spectral pixels in $\mathbb{R}^{H \times W \times M}$, where $H$ is the height, $W$ is the width and $M$ is the number of wavelengths. We denote such an input image by $P$ and the original input image by $P^{\mathrm{orig}}$.
A subset of pixels $I \subseteq H \times W$ from the input image has known class labels. This subset is called the training set, denoted by $(\mathbf{x}, \mathbf{y})$. We denote by $x_i$ the $i$-th pixel of the training set, $1 \le i \le |I|$, and by $y_i$ its label. The rest of the pixels of the image form the test set, denoted as $\mathbf{x}^{\mathrm{test}}$. The number of classes is denoted by $K$, and we will treat labels as binary vectors, so $y_{i,k} = 1$ if and only if the $i$-th pixel belongs to class $k$.
Our method extends the training set by doing data augmentation. With a slight abuse of notation, we also denote the resulting training set by $\mathbf{x}$ with assigned labels $\mathbf{y}$.

2.1. Data

We consider five groups of hyperspectral images, which are publicly available (http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes):
  • Pavia Center: obtained with a Reflective Optics System Imaging Spectrometer (ROSIS) sensor during a flight campaign over Pavia, Northern Italy
  • Pavia University: scanned using the ROSIS sensor during a flight campaign over Pavia, Northern Italy
  • Kennedy Space Center (KSC): obtained with the Airborne/Visible Infrared Imaging Spectrometer (AVIRIS) sensor over the Kennedy Space Center, Florida (USA)
  • Indian Pines: scanned using the AVIRIS sensor over the Indian Pines test site in north-western Indiana (USA)
  • Salinas: scanned using the AVIRIS sensor over Salinas Valley, California (USA)
As is common practice in hyperspectral image classification, we consider only the foreground pixels. Moreover, for the Indian Pines dataset, as done, e.g., in [12], we discard classes with very few samples and keep the remaining 12 classes (Corn-notill, Corn-min, Corn, Grass/Pasture, Grass/Trees, Hay-windrowed, Soybeans-notill, Soybeans-min, Soybean-clean, Wheat, Woods and Bldg-Grass-Tree-Drives). Characteristics of the images (size, number of features, foreground pixels and considered classes) are given in Table 1.

2.2. Learning Settings

In order to build and assess a classifier, pixels of a hyperspectral image are divided into a training (labeled pixels) and a test (unlabeled pixels) set. Different learning settings can be considered, depending on how training and test sets are used for building a classifier. Here, we consider two learning settings: (1) the transductive setting used in spectral-spatial hyperspectral image classification and (2) a new learning setting where only the labeled pixels selected for training are used to build a classifier.

2.2.1. Transductive Learning Setting

Spectral-spatial methods exploit information from neighborhood pixels. Since the training and testing pixels are drawn from the same image, their features are likely to overlap in the spatial domain due to the shared source of information [39]. In this transductive learning setting, a pixel-based random sampling strategy is used to select labeled pixels for training, and unlabeled pixels from the rest of the image can also be used when building a classifier by using information from the neighborhood of training pixels. The overall number of labeled pixels for each dataset is reported in Table 1. Unlabeled (test) pixels for each of the considered hyperspectral images are shown in Figure 1.

2.2.2. Non-Overlapping Learning Setting

The transductive learning setting used in spectral-spatial methods has an intrinsic positive bias due to the use of the neighborhood of each training pixel in the image, resulting in an overlap between training and test samples. In order to investigate such bias, we consider also a non-overlapping learning setting, where only training samples, i.e., the labeled pixels initially selected for training, are used for building a classifier.
In particular, we propose to randomly select a single patch of pixels for each class to use as training data. We use a patch of 7 × 7 labeled pixels for each class as a training set, which ensures that we have enough training pixels (at most 49) per class.
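As an illustration, a minimal NumPy sketch of this patch-based sampling is given below. The exact sampling details (for example, how the patch location is chosen) are not fully specified in the text, so this is one possible reading, with illustrative function and variable names.

```python
import numpy as np

def sample_patch_per_class(labels, patch=7, seed=0):
    """Pick one random patch x patch window per class and keep only its pixels
    of that class for training. `labels` is an H x W integer array with -1
    marking background/unlabeled pixels (a sketch, not the released code)."""
    rng = np.random.default_rng(seed)
    H, W = labels.shape
    train_mask = np.zeros((H, W), dtype=bool)
    for c in np.unique(labels[labels >= 0]):
        rows, cols = np.where(labels == c)
        k = rng.integers(len(rows))                       # centre on a random pixel of class c
        r0 = int(np.clip(rows[k] - patch // 2, 0, H - patch))
        c0 = int(np.clip(cols[k] - patch // 2, 0, W - patch))
        window = labels[r0:r0 + patch, c0:c0 + patch]
        train_mask[r0:r0 + patch, c0:c0 + patch] |= (window == c)
    return train_mask                                     # True for pixels used as training data
```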
In general, there is a mismatch between both learning settings used in spectral-spatial hyperspectral image classification and standard supervised learning. In supervised learning, methods are tested on independent, identically distributed (i.i.d.) data; in the case of image datasets, methods should therefore be trained on a set of images that is independent of the test set images. Even if we do not use pixels other than those selected by our sampling procedure, the samples in our training set are not guaranteed to be independent, so our controlled random sampling procedure is not a proper supervised learning setting. Nevertheless, this setting is useful for assessing the performance of methods without the bias caused by overlap between the training and testing samples.
In [39], another controlled random sampling method to select labeled pixels was introduced. The proposed method considers connected component areas in the image, consisting of pixels with the same class. For each such area, pixels are randomly sampled. Each selected pixel and its 8 neighbors form the training set. Pixels in the rest of the image are only used at test time. See [39] (Algorithm 1) for a detailed description of this method. Although this procedure is interesting for assessing the performance of spectral-spatial methods, it is impractical, since one would have to know the class composition of the whole image in order to perform the step ‘selects all unconnected partitions P in the class c’. Our controlled random sampling procedure overcomes this drawback because it does not use information on the class distribution and selects pixels by randomly sampling a patch for each class.
Algorithm 1: CNN-RSL.
Input: Hyperspectral image P partitioned into a labeled training set (x, y) and an unlabeled test set x^test.
Output: Predicted labels ŷ^test of x^test.
1: (x, y) = Augment-train-set(x, y, P)   (see Equations (2) and (4), Section 2.6)
2: M = Train-model(x, y, P)   (uses the loss in Equation (3))
3: ŷ^test = Predict-test-labels(x^test, M)
4: return ŷ^test

2.3. The Baseline CNN

The baseline on which we build our method is a Convolutional Neural Network (CNN) with a single hidden convolutional layer (see Figure 2). Unlike in larger CNN architectures, we do not use pooling or fully-connected hidden layers. We chose this simple architecture because of the limited amount of labeled training data and the relatively high number of features. In this context, a simpler architecture has fewer parameters to learn, which reduces the risk of overfitting.
For training, we use the standard L2 regularized cross-entropy loss function,
$$ \mathcal{L}_{\mathrm{CNN}}(W) = \underbrace{-\frac{1}{|I|}\sum_{i=1}^{|I|}\sum_{k=1}^{K} y_{i,k}\,\log \hat{y}_{i,k}}_{\text{Cross-entropy loss}} \;+\; \underbrace{\lambda_1 \lVert W \rVert^2}_{\text{L2 regularization}}. \qquad (1) $$
Here, $\hat{y}_i = \psi_2(w_2 \cdot \psi_1(w_1 x_i))$ is the network’s output, $\psi_1(\cdot)$ and $\psi_2(\cdot)$ are the activation functions and $W = [w_1, w_2]$ are the weights from the input to the single hidden layer ($w_1$) and from the hidden layer to the output ($w_2$). $K$ is the number of classes. For the hidden layer, we use a rectified linear activation function $\psi_1(u) = \max(0, u)$, and for the prediction layer, we use softmax, $\psi_2(u)_k = \exp(u_k) / \sum_l \exp(u_l)$.
To learn the network weights that optimize this loss function, we use the common ‘Glorot’ procedure for initializing the weights [40] and Stochastic Gradient Descent (SGD) [41] for updating them.
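For concreteness, the architecture and training setup described above might be sketched in PyTorch as follows. The layer sizes, learning rate and regularization strength below are placeholders rather than the tuned values of Table 2, and this is a sketch rather than the authors’ released implementation.

```python
import torch
import torch.nn as nn

class ShallowSpectralCNN(nn.Module):
    """Single-hidden-layer 1D CNN sketch: one convolutional layer (ReLU),
    then a prediction layer with softmax applied inside the loss."""
    def __init__(self, n_bands, n_classes, n_kernels=16, kernel_size=51, stride=1):
        super().__init__()
        self.conv = nn.Conv1d(1, n_kernels, kernel_size, stride=stride)
        out_len = (n_bands - kernel_size) // stride + 1
        self.out = nn.Linear(n_kernels * out_len, n_classes)   # prediction layer
        nn.init.xavier_uniform_(self.conv.weight)               # 'Glorot' initialization [40]
        nn.init.xavier_uniform_(self.out.weight)

    def forward(self, x):                           # x: (batch, n_bands) pixel spectra
        h = torch.relu(self.conv(x.unsqueeze(1)))   # psi_1 = ReLU over the feature maps
        return self.out(h.flatten(1))               # logits; softmax (psi_2) is in the loss

# Plain SGD with momentum 0.7 (Section 2.3); weight_decay plays the role of the
# L2 term of Equation (1). n_bands/n_classes here correspond to KSC (Table 1).
model = ShallowSpectralCNN(n_bands=176, n_classes=13)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.7, weight_decay=1e-3)
loss_fn = nn.CrossEntropyLoss()
```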
The standard parameters of a CNN are:
  • learning rate ( η );
  • momentum (default value used in experiments: 0.7 );
  • number of convolutional kernels (#kernels);
  • size of the convolutional kernels (N);
  • stride for the convolution (s);
  • L2 regularization constant ( λ 1 ).
To enhance the robustness of the CNN to perturbed versions of the data, we add random noise to copies of the original data and add these copies to the training data, increasing the amount of available training data. In the absence of any further knowledge, it is natural to use Gaussian noise.
The new spectrum of a pixel is generated by adding random Gaussian noise to the original wavelengths of the spectrum as follows:
$$ P^{\mathrm{noise}}_{ijk} = P_{ijk} + \beta \cdot \epsilon_{ijk}, \qquad (2) $$
where $P^{\mathrm{noise}}_{ijk}$ is the $k$-th wavelength of the new $(i,j)$-th spectral pixel, generated by perturbing $P_{ijk}$ with the addition of Gaussian noise $\epsilon_{ijk}$ having zero mean and unit variance. $\beta$ is a constant term that we fixed at 0.01. This procedure is applied to all the pixels in the training set.
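A minimal NumPy sketch of Equation (2) could look as follows (the function name and data layout are ours, not part of the released implementation):

```python
import numpy as np

def add_gaussian_noise(P, beta=0.01, rng=None):
    """Noise-based augmentation of Equation (2): P_noise = P + beta * eps,
    with eps ~ N(0, 1) drawn independently per pixel and per wavelength.
    P is the H x W x M image cube."""
    rng = np.random.default_rng() if rng is None else rng
    return P + beta * rng.standard_normal(P.shape)
```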
We use this noise-based data augmentation together with the CNN as a baseline. In the following sections, we describe three tricks to enhance CNN by exploiting spectral-spatial locality:
  • by constraining weights of the neural network corresponding to nearby wavelengths to assume similar values (see Section 2.4);
  • by generating pixels with smoothed spectra from neighbors of labeled pixels (see Section 2.5);
  • by propagating the label of a pixel to its neighbors and adding them to the training set (see Section 2.6).

2.4. Trick 1: Locality-Aware Regularization

We add a term to the CNN loss function, which penalizes large differences between values of adjacent weights, as done in [42]. In this way we enforce that neighboring wavelengths have similar contributions to the generated features, thus taking advantage of the spectral-locality of the data. The augmented loss function consists of the regularized cross-entropy loss term plus our regularization term, which constrains nearby weights to assume similar values:
$$ \mathcal{L}(W) = \underbrace{-\frac{1}{|I|}\sum_{i=1}^{|I|}\sum_{k=1}^{K} y_{i,k}\,\log \hat{y}_{i,k}}_{\text{Cross-entropy loss}} + \underbrace{\lambda_1 \lVert W \rVert^2}_{\text{L2 regularization}} + \underbrace{\lambda_2 \lVert w_1 - \mathrm{shift}(w_1) \rVert^2}_{\text{Locality-aware regularization}}. \qquad (3) $$
Here, the variables are as in Equation (1). $\mathrm{shift}(\cdot)$ is an operation that shifts the elements of an array one position to the left, and $\lambda_2$ controls the new spectral-locality-aware regularization term.
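A sketch of how this penalty can be computed for the first-layer convolutional weights is given below; boundary handling of the shift is simplified to a finite difference along the wavelength axis.

```python
import torch

def locality_aware_penalty(conv_weight):
    """Locality-aware term of Equation (3): squared differences between adjacent
    first-layer weights along the wavelength axis, i.e. ||w1 - shift(w1)||^2
    up to boundary handling. `conv_weight` has shape (n_kernels, 1, kernel_size)."""
    return ((conv_weight[..., 1:] - conv_weight[..., :-1]) ** 2).sum()

# During training, added to the usual loss (lambda2 as tuned in Section 3.2):
# loss = loss_fn(model(x), y) + lambda2 * locality_aware_penalty(model.conv.weight)
```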

2.5. Trick 2: Smoothing-Based Data Augmentation

Spectra of nearby pixels are assumed to be related because they are part of an image containing semantically homogeneous components, such as urban or rural areas.
Recent state-of-the-art methods exploit this property in different ways, such as the use of patches to train a (deep) neural network [18,20,33], the generation of discriminatory features using Gabor filters [12,36] or the use of additive Gaussian noise in addition to linear combinations of training pixels [28].
Here, we just use a Gaussian smoothing filter, because of its simplicity and invariance to rotation of the image. This operation has been called spatial smoothing [12,22,43].
Keeping the same notation introduced in Section 2, we generate the smoothed image $P^{\mathrm{smt}}$ as:
$$ P^{\mathrm{smt}}_{ijk} = \frac{\sum_{i'j'} P^{\mathrm{noise}}_{i'j'k}\, \exp\!\left(-\lVert (i,j) - (i',j') \rVert^2 / 2\sigma\right)}{\sum_{i'j'} \exp\!\left(-\lVert (i,j) - (i',j') \rVert^2 / 2\sigma\right)}, \qquad (4) $$
where $P_{i'j'k}$ is the $k$-th wavelength of the $(i',j')$-th spectral pixel and $P^{\mathrm{smt}}_{ijk}$ is the new smoothed wavelength.
In practice, the above sum is computed over pixels $(i',j')$ whose distance from pixel $(i,j)$ is at most $3\sigma$. Figure 3 shows two pixel-spectra of the same class before and after spatial smoothing is applied.
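A direct, unoptimized NumPy sketch of Equation (4) is given below. Note that the exponent follows Equation (4) as written (2σ in the denominator rather than 2σ²), so a standard Gaussian-filter routine is not assumed to be a drop-in replacement; names are illustrative.

```python
import numpy as np

def spatial_smooth(P_noise, sigma):
    """Gaussian spatial smoothing of Equation (4), applied band by band and
    truncated at radius 3*sigma; pixels outside the image are ignored."""
    H, W, _ = P_noise.shape
    r = int(np.ceil(3 * sigma))
    padded = np.pad(P_noise, ((r, r), (r, r), (0, 0)))   # zero padding
    inside = np.pad(np.ones((H, W)), r)                  # 1 inside the image, 0 outside
    num = np.zeros_like(P_noise)
    den = np.zeros((H, W))
    for a in range(2 * r + 1):
        for b in range(2 * r + 1):
            w = np.exp(-((a - r) ** 2 + (b - r) ** 2) / (2.0 * sigma))
            mask = w * inside[a:a + H, b:b + W]          # drop out-of-image neighbours
            num += mask[..., None] * padded[a:a + H, b:b + W, :]
            den += mask
    return num / den[..., None]
```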

2.6. Trick 3: Label-Based Data Augmentation

We also exploit spectral-spatial locality with data augmentation at the semantic level, by assuming that neighbor pixels are likely to have the same class. According to this assumption, the label of a pixel in the training set can be propagated to its neighbors. The resulting labeled neighbor pixels are inserted into the training set, which becomes larger at the cost of introducing label-noise. Indeed, this data augmentation procedure is likely to add new pixels with an incorrect label, and even copies of the same pixel labeled in different ways. In this way, the network is trained using a training set that contains pixels with uncertainty on their label.
In order to keep the probability that our assumption is wrong as low as possible, we randomly sample only a subset of pixels in the Moore neighborhood of each pixel (consisting of its 8 surrounding pixels).
Furthermore, we can use this augmentation step to tackle the class unbalance, by favoring the selection of pixels in smaller classes. Specifically, for pixel i in the training set with label y i and for each pixel j in its neighborhood, j is selected with probability:
$$ p(\text{select } j) = 1 - \frac{C_{y_i} - \min(C)}{\max(C) - \min(C)}, \qquad (5) $$
where $C_{y_i}$ is the number of pixels in the training set with label $y_i$, and $C = [C_1, \ldots, C_K]$ is the vector consisting of the number of pixels of each class. All selected neighbors are added to the (multi-)set $I$ of labeled pixels to give $I^{\mathrm{la}}$. For any $j \in I^{\mathrm{la}}$ that was added with label augmentation, its label will be $y^{\mathrm{la}}_j = y_i$. This selection procedure biases the insertion of more pixels from smaller classes.
In summary, our label-based data augmentation procedure can be described as follows: for each pixel i in the training set,
  • find its Moore neighborhood;
  • select a subset of pixels in the neighborhood; see Equation (5);
  • propagate the label of i to the selected neighbor pixels;
  • insert the selected pixels into the training set.
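A minimal sketch of this neighbor-selection step is given below; the data layout and names are illustrative, and a guard is added for the class-balanced case where max(C) = min(C).

```python
import numpy as np

def label_augment(train_idx, train_labels, labels_shape, class_counts, rng=None):
    """Label-based augmentation following Equation (5): each Moore neighbour of a
    training pixel is added with a probability that is higher for smaller classes."""
    rng = np.random.default_rng() if rng is None else rng
    H, W = labels_shape
    cmin, cmax = class_counts.min(), class_counts.max()
    new_idx, new_labels = [], []
    for (i, j), y in zip(train_idx, train_labels):
        denom = cmax - cmin
        p = 1.0 if denom == 0 else 1.0 - (class_counts[y] - cmin) / denom   # Equation (5)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue                          # skip the pixel itself
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W and rng.random() < p:
                    new_idx.append((ni, nj))          # propagate label y to the neighbour
                    new_labels.append(y)
    return new_idx, new_labels
```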

2.7. Incorporating the Tricks into the Baseline CNN: CNN-RSL

The resulting method for hyperspectral image labeling, called CNN-RSL, incorporates into CNN the proposed three tricks: Regularization (R), Smoothing-based data augmentation (S) and Label-based data augmentation (L).
The ‘augment-train-set’ step of Algorithm 1 (also illustrated in Figure 4) consists of the following steps:
  • the original image is perturbed with random Gaussian noise (Equation (2), Section 2.3);
  • the resulting image is spatially smoothed (S step);
  • label augmentation is applied (L step);
  • the spectra for the labeled pixels are selected from the original, noisy and smoothed images; these are combined to form the training set;
  • the spectra are rescaled to [0, 1], which is common practice for artificial neural networks. Note that this rescaling retains the original distribution of the features, while helping the CNN training converge faster.
The resulting algorithm, called CNN-RSL, is summarized in the pseudo-code of Algorithm 1.
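Putting the pieces together, the ‘augment-train-set’ step might be sketched as follows, chaining the illustrative helpers defined in the previous subsections (add_gaussian_noise, spatial_smooth, label_augment). Whether rescaling is done per spectrum or per feature is not specified in the text, so per-spectrum min-max scaling is assumed here.

```python
import numpy as np

def augment_train_set(P, train_idx, train_labels, class_counts, beta=0.01, sigma=3.0):
    """High-level sketch of the 'augment-train-set' step of Algorithm 1."""
    P_noise = add_gaussian_noise(P, beta)                     # Section 2.3
    P_smt = spatial_smooth(P_noise, sigma)                    # S step, Section 2.5
    extra_idx, extra_labels = label_augment(                  # L step, Section 2.6
        train_idx, train_labels, P.shape[:2], class_counts)
    idx = list(train_idx) + extra_idx
    lab = list(train_labels) + extra_labels
    rows, cols = zip(*idx)
    # spectra of the (augmented) labeled pixels from the three image versions
    x = np.concatenate([P[rows, cols], P_noise[rows, cols], P_smt[rows, cols]])
    y = np.array(lab * 3)
    # per-spectrum min-max rescaling to [0, 1] (one possible reading)
    x = (x - x.min(axis=1, keepdims=True)) / (np.ptp(x, axis=1, keepdims=True) + 1e-12)
    return x, y
```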

3. Experiments

In this section, we describe the experiments conducted on the five groups of hyperspectral images. First, we describe the 16 algorithms considered in our experiments. Next, we report the results, which we also compare with published results from existing methods based on different approaches. Finally, we discuss these results.

3.1. Algorithms

We assess the performance of our baseline CNN with all combinations of the proposed tricks:
  • R: spectral-locality-aware regularization term (see Section 2.4);
  • S: smoothing-based data augmentation (see Section 2.5);
  • L: label-based data augmentation (see Section 2.6).
The combination of the CNN with all three tricks yields CNN-RSL, while the other six combinations are: CNN-R (with R), CNN-S (with S), CNN-L (with L), CNN-RS (with R and S), CNN-RL (with R and L) and CNN-SL (with S and L).
Moreover, in order to investigate the effect of these tricks on other types of neural networks, we incorporate S and L also in the following methods:
  • SVM-RBF: a support vector machine with the Radial Basis Function (RBF) kernel;
  • HL-ELM: a deep convolutional neural network for hyperspectral image labeling for which we were able to retrieve the source code. HL-ELM has two convolutional and two max pooling hidden layers arranged one after the other (see [28]).

3.2. Parameter Setting

The parameters of the resulting neural networks and the range of values used in our experiments are:
  • the number of kernels of the convolutional layer, #kernels ∈ {4, 8, 16, 32};
  • the size of the kernels of the convolutional layer, N ∈ [2, 91];
  • the stride for the convolution, s ∈ [1, 4];
  • the parameters in the regularization terms, λ1, λ2 = 10^n for n ∈ [−4, 4];
  • the learning rate, η = 10^n for n ∈ [−4, −1].
In order to tune these parameters, we use the standard Random Grid Search Cross-Validation framework (RGS-CV) [44]. Resulting values of the CNN-RSL parameters are given in Table 2.
We also use RGS-CV to select the value of σ , the parameter of our spatial smoothing procedure, from the set { 1 , 1.67 , 2.33 , 3 , 3.67 , 4.33 , 5 } .
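A minimal sketch of the RGS-CV loop over these grids is given below; evaluate_cv stands for a user-supplied cross-validation routine, and the trial budget is illustrative.

```python
import numpy as np

# Grids mirroring the ranges listed above (Section 3.2).
GRID = {
    'n_kernels': [4, 8, 16, 32],
    'kernel_size': list(range(2, 92)),
    'stride': [1, 2, 3, 4],
    'lambda1': [10.0 ** n for n in range(-4, 5)],
    'lambda2': [10.0 ** n for n in range(-4, 5)],
    'lr': [10.0 ** n for n in range(-4, 0)],
    'sigma': [1, 1.67, 2.33, 3, 3.67, 4.33, 5],
}

def random_grid_search(evaluate_cv, n_trials=60, seed=0):
    """Sample random configurations from GRID and keep the one with the best
    cross-validated accuracy, as returned by the user-supplied evaluate_cv."""
    rng = np.random.default_rng(seed)
    best_score, best_params = -np.inf, None
    for _ in range(n_trials):                      # n_trials is an illustrative budget
        params = {k: rng.choice(v) for k, v in GRID.items()}
        score = evaluate_cv(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```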
Like our neural network model, SVM-RBF also has a few parameters to be tuned:
  • the Gaussian exponent constant, γ = 10^n where n ∈ [−4, 4];
  • the regularization constant, C = 10^n where n ∈ [−4, 4].
For HL-ELM, we use the parameter setting described in [28].

3.3. Results and Discussion

The results of the experiments with few labeled pixels for training are given in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. CNN-RSL achieved the best performance, with significant improvement over the baselines. On the Pavia University dataset with 1% training data, CNN-S was most effective (95.01% mean accuracy), closely followed by CNN-RSL (94.74%). On the Salinas dataset, the improvement in accuracy from CNN to both CNN-RSL and CNN-S was about 10%. In this case, CNN-RSL was slightly better than CNN-S. On the other hand, with 1% training data, on the KSC dataset, the improvement in accuracy from CNN to CNN-RSL was about 12% (from 78.09% to 90.34%), while CNN-S achieved an 84.74% mean accuracy; and on the Indian Pines dataset, the improvement in accuracy from CNN to CNN-RSL was more than 30% (from 54.83% to 86.42%), while CNN-S achieved a 70.93% mean accuracy. Overall, statistical tests showed the superiority of our method when using all tricks. The increase in accuracy with respect to CNN, SVM-RBF and HL-ELM was higher when fewer training pixels were used, since smoothing- and label-based data augmentation were more beneficial in that case. Clearly, these two tricks also helped to improve the performance of SVM-RBF and HL-ELM. As expected, by increasing the number of training pixels per class, the average test accuracy of all methods increased.
Existing methods using a transductive setting, such as [18,19,21], have been shown to achieve very good results on these datasets when 200 labeled pixels for each class are used as the training set. Our method also achieves excellent performance in this setting: it improves significantly over the considered baselines, according to a binomial test for comparing classifiers [45]; see Table 9.
Table 10 reports the results of our method and published results of the following state-of-the-art methods based on different approaches: discriminative low-rank Gabor filtering [12], multiple kernel learning [29], kernel sparse representation [34] and probabilistic class structure regularized sparse representation graph [30,31]. Unfortunately, due to the diversity of choices regarding the number of training pixels, it is not possible to completely fill Table 10. CNN-RSL achieved higher accuracy compared to the other considered methods. Only in three cases, namely on the Pavia University dataset with 1% and 5% of the pixels and the Indian Pines dataset with 10% of the pixels available for training, did [29,34] report a higher accuracy than CNN-RSL.
The test set accuracies in the non-overlapping setting are reported in Table 11. In this setting, spatial smoothing (the S trick) can only be used in a limited way: each pixel in the training set was smoothed using only the other pixels in the training set. Label-based data augmentation (the L trick) cannot be applied any longer, since this step would add new pixels from the image to the training set. Spectral-locality-aware regularization (the R trick) can still be used, since it does not involve the use of pixels that are not in the training set. As we can see, also in the non-overlapping setting, smoothing-based data augmentation helped to achieve a higher accuracy for all the methods we used. Unsurprisingly, since we took a single 7 × 7 patch of pixels per class as the training data, the performance of all methods was much lower than in the transductive setting reported in Table 3, Table 4, Table 5, Table 6 and Table 7. In particular, if there is a large variation in the spectra of a single class, we will miss this by using only a single patch per class. Since we did not use label-based data augmentation, our reference method here was CNN-RS.
In general, results in the non-overlapping learning setting showed a large decrease in performance compared with that in the transductive learning setting. Nevertheless, in this setting, spectral-locality-aware regularization and data augmentation, the latter used in a very limited form, were still beneficial, with significant increases in accuracy on the KSC and Indian Pines images.
Overall, the results of all experiments substantiated the beneficial effect of the proposed tricks, which we discuss below.
In general, smoothing-based data augmentation (the S trick) introduces spatial locality into each pixel’s spectrum by averaging it with its neighboring pixels’ spectra. Since neighboring pixels are likely to belong to the same area, spatial smoothing makes nearby spectra look more alike and eases the network’s classification task. Smoothing-based data augmentation has the largest impact on the test accuracy, with significant improvements across all datasets, notably on the Indian Pines and Salinas.
Label augmentation (the L trick) had a bigger impact on small classes, which were also the most difficult to classify correctly, especially in a setting with very few training samples. For a large training set, label-based data augmentation may have a detrimental effect, which was nevertheless mitigated or neutralized when used in combination with the other components of CNN-RSL. In particular, label-based data augmentation had a clearly beneficial effect for the KSC and Indian Pines datasets, which were the datasets having more classes and fewer pixels. The label augmentation trick tended to balance the classes by selecting more new training samples from smaller classes. In our experiments with 10 labeled pixels per class (see Table 8), the training set was already class balanced. In this case, label augmentation still improved the results, but the difference was not as large as in the experiments with class unbalanced training data. In particular, for the KSC dataset, a 6% gain in accuracy was achieved by the baseline CNN when using 10 labeled samples per class instead of 2% of randomly selected labeled pixels as the training set (from 83.98% to 90.89%), although selecting 2% of the data results in 10 samples per class on average (see Table 1). On the other hand, with CNN-RSL, the gain in accuracy was only 2.5% (from 95.36% to 97.85%). This shows that our data augmentation tricks mitigated the negative effect of the class unbalanced distribution of the training set.
Spectral-locality-aware regularization (the R trick) helped to achieve a higher classification accuracy, when used in conjunction with data augmentation, as can be seen by the reduced accuracy of CNN-SL compared to CNN-RSL. Notably, on the Indian Pines image, with only 1% labeled pixels, a gain of almost 20% was achieved when using locality-aware regularization and data augmentation over using only data augmentation. Locality-aware regularization also helped to improve accuracy on the other datasets, although the gain was not as big as for Indian Pines.
We conclude this section with a discussion about the convergence and run time of CNN-RSL. Figure 5 illustrates the convergence behavior of our loss function during the training of CNN-RSL on one of the datasets used in the experiments. To assess convergence, we use early stopping. The training stops when the validation error does not decrease for at least 100 epochs.
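As a small illustration, the stopping rule described above might be implemented as follows; the validation-split details are not specified in the text, so this is only a sketch of the rule itself.

```python
def should_stop(val_errors, patience=100):
    """Early stopping: stop when the validation error has not decreased
    for at least `patience` epochs. `val_errors` is one value per epoch."""
    if len(val_errors) <= patience:
        return False
    best = min(val_errors)
    return min(val_errors[-patience:]) > best   # no improvement in the last `patience` epochs
```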
The run time of CNN-RSL depends on the number of pixels and on the value of σ used for the spatial smoothing. In fact, the time needed for spatial smoothing is proportional to both σ and the number of pixels. This is a disadvantage of CNN-RSL with respect to algorithms that use deep architectures and no spatial smoothing. However, spatial smoothing can be highly parallelized, given that each pixel is smoothed independently of the other pixels. Consequently, its running time can be drastically reduced [47]. In Figure 6, we report the running time of CNN-RSL using a single CPU with a 2300-MHz clock speed. The values refer to the time needed for predicting a single pixel, and they also include the preprocessing time.

4. Conclusions

We have introduced a simple method based on convolutional neural networks and data augmentation for spectral-spatial classification of remotely-sensed hyperspectral images.
The main characteristic of our method is its capability to exploit spectral-spatial information at both the data level (through data augmentation) and the classifier level (through the locality-aware regularization). We proposed two types of data augmentation: smoothing-based data augmentation, which constructs new pixels from the spectra of neighbors of the labeled pixels, and label-based data augmentation, which expands the training set with neighbors of the labeled pixels. Smoothing-based data augmentation consistently improves the test accuracy of the tested methods, while the contribution of label-based data augmentation is mostly beneficial for datasets with many small classes and skewed class distributions. Furthermore, we modified the loss function by inserting a term that penalizes differences among network weights corresponding to nearby wavelengths of the spectra.
Both CNNs and data augmentation have been widely used in hyperspectral classification [23]. Therefore, at first, the contribution of the proposed method seems limited. Nevertheless, CNN-RSL differs from previous methods in two main aspects: (1) the considered CNN architecture is a very basic shallow architecture with only one hidden layer, without pooling and fully-connected layers, which is advantageous because it does not need a large amount of data or computational resources for training as deep neural networks do, and it is more robust to overfitting; (2) we perform data augmentation not only with a rather standard smoothing-based technique, but also with a new label-based technique to favor the selection of pixels in smaller classes, which is beneficial when few labeled pixels are available and when the class distribution is skewed.
An advantage of the proposed method is its modularity, which favors qualitative analysis of the contribution of the single tricks, as well as their embedding in other types of neural networks. The results of our extensive comparative analysis demonstrated the usefulness of the method.
Our data augmentation approach uses neighbors of training pixels, that is test samples, when building a classifier. This transductive learning setting is the natural setting for hyperspectral image classification. When no overlap between training and test data is allowed, our label augmentation strategy cannot be used. Nevertheless, a limited form of smoothing data augmentation and the spectral-locality-aware regularization term can still be used. The results of experiments showed a substantial drop in accuracy with the non-overlapping learning setting.
Our approach considers a single image. In future work, we intend to adapt the approach to multiple images. For instance, in a dynamic setting, where time-series spectral images are given in order to study seasonal changes of vegetation species, we intend to develop multi-channel convolutional neural networks with locality-aware regularization to enforce smooth change in time.
To guarantee full reproducibility of all results and to facilitate direct usage of CNN-RSL, the source code of our method is publicly available at https://bitbucket.org/TeslaH2O/cnn_hyperspectral.

Author Contributions

J.A. implemented the proposed algorithm, ran the experiments and, together with T.v.L. and E.M., provided the methodologies and wrote the manuscript. L.M.C.B. and T.T. contributed to the discussion and gave valuable suggestions to improve the manuscript.

Funding

This research received no external funding.

Acknowledgments

Thanks to Jeroen Jansen for reading and providing comments on a previous version of the manuscript. Thanks to the authors of [28] for providing the source code of their method.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chang, C.I. Hyperspectral Imaging: Techniques for Spectral Detection and Classification; Springer: Berlin, Germany, 2003; Volume 1. [Google Scholar]
  2. Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Advances in spectral-spatial classification of hyperspectral images. Proc. IEEE 2013, 101, 652–675. [Google Scholar] [CrossRef]
  3. Camps-Valls, G.; Tuia, D.; Bruzzone, L.; Benediktsson, J.A. Advances in hyperspectral image classification: Earth monitoring with statistical learning methods. IEEE Sign. Proc. Mag. 2014, 31, 45–54. [Google Scholar] [CrossRef]
  4. Hoekman, D.H.; Vissers, M.A.M.; Tran, T.N. Unsupervised Full-Polarimetric SAR Data Segmentation as a Tool for Classification of Agricultural Areas. IEEE J. STARS 2011, 4, 402–411. [Google Scholar] [CrossRef]
  5. Tran, T.N.; Wehrens, R.; Buydens, L.M. Clustering multispectral images: A tutorial. Chemom. Intell. Lab. Syst. 2005, 77, 3–17. [Google Scholar] [CrossRef]
  6. Tran, T.N.; Wehrens, R.; Hoekman, D.H.; Buydens, L.M.C. Initialization of Markov random field clustering of large remote sensing images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1912–1919. [Google Scholar] [CrossRef]
  7. Ghamisi, P.; Benediktsson, J.A.; Sveinsson, J.R. Automatic spectral—Spatial classification framework based on attribute profiles and supervised feature extraction. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5771–5782. [Google Scholar] [CrossRef]
  8. Falco, N.; Benediktsson, J.A.; Bruzzone, L. Spectral and spatial classification of hyperspectral images based on ICA and reduced morphological attribute profiles. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6223–6240. [Google Scholar] [CrossRef]
  9. He, L.; Li, Y.; Li, X.; Wu, W. Spectral—Spatial classification of hyperspectral images via spatial translation-invariant wavelet-based sparse representation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2696–2712. [Google Scholar] [CrossRef]
  10. Hu, F.; Xia, G.S.; Wang, Z.; Huang, X.; Zhang, L.; Sun, H. Unsupervised feature learning via spectral clustering of multidimensional patches for remotely sensed scene classification. IEEE J. STARS 2015, 8. [Google Scholar] [CrossRef]
  11. Yang, W.; Yin, X.; Xia, G.S. Learning high-level features for satellite image classification with limited labeled samples. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4472–4482. [Google Scholar] [CrossRef]
  12. He, L.; Li, J.; Plaza, A.; Li, Y. Discriminative Low-Rank Gabor Filtering for Spectral—Spatial Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1381–1395. [Google Scholar] [CrossRef]
  13. Veganzones, M.A.; Tochon, G.; Dalla-Mura, M.; Plaza, A.J.; Chanussot, J. Hyperspectral image segmentation using a new spectral unmixing-based binary partition tree representation. IEEE Trans. Image Process. 2014, 23, 3574–3589. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Lu, T.; Li, S.; Fang, L.; Jia, X.; Benediktsson, J.A. From Subpixel to Superpixel: A Novel Fusion Framework for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4398–4411. [Google Scholar] [CrossRef]
  15. Sun, B.; Kang, X.; Li, S.; Benediktsson, J.A. Random-walker-based collaborative learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 212–222. [Google Scholar] [CrossRef]
  16. Wang, Z.; Du, B.; Zhang, L.; Zhang, L.; Jia, X. A Novel Semisupervised Active-Learning Algorithm for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3071–3083. [Google Scholar] [CrossRef]
  17. Kemker, R.; Kanan, C. Self-taught feature learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2693–2705. [Google Scholar] [CrossRef]
  18. Lee, H.; Kwon, H. Contextual Deep CNN Based Hyperspectral Classification. arXiv, 2016; arXiv:1604.03519. [Google Scholar]
  19. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015. [Google Scholar] [CrossRef]
  20. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep Supervised Learning for Hyperspectral Data Classification through Convolutional Neural Networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium, Milan, Italy, 26–31 July 2015; pp. 4959–4962. [Google Scholar]
  21. Slavkovikj, V.; Verstockt, S.; Neve, W.D.; Hoecke, S.V.; Walle, R.V.D. Hyperspectral Image Classification with Convolutional Neural Networks. In Proceedings of the 23rd ACM international conference on Multimedia, Brisbane, Australia, 26–30 October 2015; pp. 1159–1162. [Google Scholar]
  22. Liang, H.; Li, Q. Hyperspectral imagery classification using sparse representations of convolutional neural network features. Remote Sens. 2016, 8. [Google Scholar] [CrossRef]
  23. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  24. Hu, F.; Xia, G.S.; Hu, J.; Zhang, L. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery. Remote Sens. 2015, 7, 14680–14707. [Google Scholar] [CrossRef] [Green Version]
  25. Amini, S.; Homayouni, S.; Safari, A.; Darvishsefat, A.A. Object-based classification of hyperspectral data using Random Forest algorithm. Geo-Spat. Inf. Sci. 2018, 21, 127–138. [Google Scholar] [CrossRef] [Green Version]
  26. O’Neil-Dunne, J.; Pelletier, K.; MacFaden, S.; Troy, A.; Grove, J.M. Object-based high-resolution land-cover mapping. In Proceedings of the 2009 17th International Conference on Geoinformatics, Fairfax, VA, USA, 12–14 August 2009. [Google Scholar]
  27. Blaschke, T.; Lang, S.; Hay, G.J. Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications; Lecture Notes in Geoinformation and Cartography; Springer: Berlin, Germany, 2008. [Google Scholar]
  28. Lv, Q.; Niu, X.; Dou, Y.; Xu, J.; Lei, Y. Classification of Hyperspectral Remote Sensing Image Using Hierarchical Local-Receptive-Field-Based Extreme Learning Machine. IEEE Geosci. Remote Sens. Lett. 2016, 13, 434–438. [Google Scholar] [CrossRef]
  29. Gu, Y.; Chanussot, J.; Jia, X.; Benediktsson, J.A. Multiple Kernel Learning for Hyperspectral Image Classification: A Review. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6547–6565. [Google Scholar] [CrossRef]
  30. Pan, L.; Li, H.C.; Meng, H.; Li, W.; Du, Q.; Emery, W.J. Hyperspectral Image Classification via Low-Rank and Sparse Representation With Spectral Consistency Constraint. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2117–2121. [Google Scholar] [CrossRef]
  31. Shao, Y.; Sang, N.; Gao, C.; Ma, L. Probabilistic class structure regularized sparse representation graph for semi-supervised hyperspectral image classification. Pattern Recognit. 2017, 63, 102–114. [Google Scholar] [CrossRef]
  32. He, L.; Li, J.; Liu, C.; Li, S. Recent Advances on Spectral-Spatial Hyperspectral Image Classification: An Overview and New Guidelines. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1579–1597. [Google Scholar] [CrossRef]
  33. Chen, Y.; Lin, Z.; Zhao, X.; Member, S.; Wang, G.; Gu, Y. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. STARS 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  34. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification via kernel sparse representation. IEEE Trans. Geosci. Remote Sens. 2013, 51, 217–231. [Google Scholar] [CrossRef]
  35. Zhang, G.; Jia, X.; Hu, J. Superpixel-based graphical model for remote sensing image mapping. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5861–5871. [Google Scholar] [CrossRef]
  36. Ma, X.; Wang, H.; Geng, J. Spectral-Spatial Classification of Hyperspectral Image Based on Deep Auto-Encoder. IEEE J. STARS 2016, 9, 4073–4085. [Google Scholar] [CrossRef]
  37. Yu, S.; Jia, S.; Xu, C. Convolutional neural networks for hyperspectral image classification. Neurocomputing 2017, 219, 88–98. [Google Scholar] [CrossRef]
  38. Lee, H.; Kwon, H. Going Deeper With Contextual CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2017, 26, 4843–4855. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Liang, J.; Zhou, J.; Qian, Y.; Wen, L.; Bai, X.; Gao, Y. On the sampling strategy for evaluation of spectral-spatial methods in hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 862–880. [Google Scholar] [CrossRef]
  40. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), Sardinia, Italy, 13–15 May 2010; pp. 249–256. [Google Scholar]
  41. Bottou, L. Large-Scale Machine Learning with Stochastic Gradient Descent. In Proceedings of the 19th International Conference on Computational Statistics, COMPSTAT’2010, Paris, France, 22–27 August 2010; Lechevallier, Y., Saporta, G., Eds.; Physica-Verlag HD: Salenstein, Switzerland, 2010; pp. 177–186. [Google Scholar] [Green Version]
  42. Acquarelli, J.; van Laarhoven, T.; Gerretzen, J.; Tran, T.N.; Buydens, L.M.; Marchiori, E. Convolutional neural networks for vibrational spectroscopic data analysis. Anal. Chim. Acta 2017, 954, 22–31. [Google Scholar] [CrossRef] [PubMed]
  43. Velasco-Forero, S.; Manian, V. Improving Hyperspectral Image Classification Using Spatial Preprocessing. IEEE Geosci. Remote Sens. Lett. 2009, 6, 297–301. [Google Scholar] [CrossRef]
  44. Bergstra, J.; Bengio, Y. Random Search for Hyper-parameter Optimization. J. Machine Learn. Res. 2012, 13, 281–305. [Google Scholar]
  45. Salzberg, S.L. On Comparing Classifiers: Pitfalls to Avoid and a Recommended Approach. Data Min. Knowl. Discov. 1997, 1, 317–328. [Google Scholar] [CrossRef]
  46. Kang, X.; Li, S.; Benediktsson, J.A. Feature Extraction of Hyperspectral Images With Image Fusion and Recursive Filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3742–3752. [Google Scholar] [CrossRef]
  47. Cope, B. Implementation of 2D Convolution on FPGA, GPU and CPU; Imperial College Report: London, UK, 2006; pp. 2–5. [Google Scholar]
Figure 1. Unlabeled pixels for each of the considered hyperspectral images.
Figure 2. Single hidden convolutional layer CNN architecture. The input of the CNN is the spectral feature vector of a pixel to which 1D convolutions are applied in the convolutional layer. Afterwards, the resulting feature maps are flattened and fed to the last, fully-connected, layer, which outputs the class prediction of the input pixels.
Figure 3. Effect of spatial smoothing on the spectra of two neighboring pixels: original image (left); image after spatial smoothing (right). Spectra look more similar after spatial smoothing.
Figure 4. CNN-RSL (Regularization (R), Smoothing-based data augmentation (S) and Label-based data augmentation (L)) data processing flowchart. Data augmentation is applied to the original hyperspectral image. The labeled pixels from each of the three hyperspectral images (original, noisy and smoothed) form the training set (x, y), which is used to train the CNN with the spectral-locality-aware regularization term (CNN-R).
Figure 5. Convergence behavior of the CNN-RSL loss function (average over 10 folds of cross-validation on the KSC dataset).
Figure 6. Prediction time in milliseconds per pixel (ms/pixel) depending on the Gaussian window size σ.
Table 1. Description of the hyperspectral images. KSC, Kennedy Space Center.

Image            | Size       | # Features | # Foreground Pixels | # Classes
Pavia Center     | 1096 × 715 | 102        | 148,152             | 9
Pavia University | 610 × 340  | 103        | 42,776              | 9
KSC              | 512 × 614  | 176        | 5211                | 13
Indian Pines     | 145 × 145  | 220        | 10,062              | 12
Salinas          | 512 × 217  | 224        | 54,129              | 16
Table 2. Parameter values of CNN-RSL trained with 1% labeled pixels of each class using the Random Grid Search Cross-Validation framework (RGS-CV).

Dataset          | #kernels | N  | s | λ1    | λ2  | η     | σ
Pavia Center     | 32       | 47 | 1 | 0.001 | 0.1 | 0.001 | 2.33
Pavia University | 32       | 35 | 1 | 0.010 | 0.1 | 0.001 | 2.33
KSC              | 16       | 51 | 1 | 0.010 | 0.1 | 0.001 | 3.00
Indian Pines     | 16       | 53 | 1 | 0.001 | 0.1 | 0.001 | 3.67
Salinas          | 32       | 49 | 1 | 0.001 | 0.1 | 0.001 | 4.33
Table 3. Test set classification accuracy on the Pavia Center dataset, when using 1–5% randomly sampled labeled pixels per class for training. We report the mean and standard deviation over 10 runs. We also list the mean and standard deviation of the number of training pixels per class for each training %. The best accuracy for each training set is indicated in bold. An ‘*’ means that the best accuracy is significantly better than the accuracy achieved by the corresponding method according to a binomial test for comparing classifiers [45] (p-value < 0.05).

Method     | 1% (165 ± 211) | 2% (329 ± 422) | 3% (494 ± 633) | 4% (658 ± 845) | 5% (823 ± 1056)
CNN        | 97.79 ± 0.22 * | 97.98 ± 0.20 * | 98.14 ± 0.25 * | 98.57 ± 0.29 * | 98.69 ± 0.25 *
CNN-S      | 99.23 ± 0.16 * | 99.26 ± 0.15 * | 99.24 ± 0.12 * | 99.28 ± 0.11 * | 99.35 ± 0.22 *
CNN-L      | 98.48 ± 0.18 * | 98.81 ± 0.20 * | 99.00 ± 0.27 * | 99.09 ± 0.14 * | 99.20 ± 0.22 *
CNN-R      | 98.01 ± 0.28 * | 98.32 ± 0.25 * | 98.41 ± 0.22 * | 98.63 ± 0.30 * | 98.79 ± 0.22 *
CNN-RS     | 99.21 ± 0.88 * | 99.28 ± 0.68 * | 99.28 ± 0.51 * | 99.33 ± 0.58 * | 99.41 ± 0.35 *
CNN-SL     | 98.11 ± 0.96 * | 98.59 ± 0.85 * | 98.84 ± 0.74 * | 98.96 ± 0.59 * | 99.12 ± 0.54 *
CNN-RL     | 98.76 ± 0.39 * | 98.86 ± 0.45 * | 98.95 ± 0.41 * | 99.20 ± 0.43 * | 99.34 ± 0.29 *
CNN-RSL    | 99.52 ± 0.07   | 99.67 ± 0.15   | 99.72 ± 0.04   | 99.76 ± 0.07   | 99.82 ± 0.05
SVM-RBF    | 84.31 ± 2.51 * | 85.41 ± 1.77 * | 85.89 ± 1.65 * | 87.04 ± 1.83 * | 87.19 ± 1.67 *
SVM-RBF-S  | 99.05 ± 0.52 * | 99.21 ± 0.51 * | 99.50 ± 0.34 * | 99.60 ± 0.38 * | 99.64 ± 0.33 *
SVM-RBF-L  | 90.12 ± 5.24 * | 92.14 ± 4.26 * | 92.25 ± 1.85 * | 92.84 ± 0.95 * | 92.80 ± 0.82 *
SVM-RBF-SL | 98.95 ± 0.51 * | 99.09 ± 0.57 * | 99.25 ± 0.62 * | 99.36 ± 0.41 * | 99.45 ± 0.49 *
HL-ELM     | 96.22 ± 0.08 * | 97.27 ± 0.11 * | 97.78 ± 0.08 * | 98.09 ± 0.11 * | 98.22 ± 0.10 *
HL-ELM-S   | 98.75 ± 0.17 * | 98.92 ± 0.18 * | 99.17 ± 0.13 * | 99.28 ± 0.12 * | 99.30 ± 0.09 *
HL-ELM-L   | 96.32 ± 0.15 * | 97.54 ± 0.22 * | 97.87 ± 0.17 * | 98.12 ± 0.22 * | 98.29 ± 0.14 *
HL-ELM-SL  | 99.05 ± 0.23 * | 99.27 ± 0.25 * | 99.40 ± 0.10 * | 99.43 ± 0.06 * | 99.52 ± 0.07 *
Table 4. Test set classification accuracy on the Pavia University dataset, when using 1–5% randomly sampled labeled pixels per class for training. We report the mean and standard deviation over 10 runs. We also list the mean and standard deviation of the number of training pixels per class for each training %. The best accuracy for each training set is indicated in bold. An ‘*’ means that the best accuracy is significantly better than the accuracy achieved by the corresponding method according to a binomial test for comparing classifiers [45] (p-value < 0.05).
Table 4. Test set classification accuracy on the Pavia University dataset, when using 1–5% randomly sampled labeled pixels per class for training. We report the mean and standard deviation over 10 runs. We also list the mean and standard deviation of the number of training pixels per class for each training %. The best accuracy for each training set is indicated in bold. An ‘*’ means that the best accuracy is significantly better than the accuracy achieved by the corresponding method according to a binomial test for comparing classifiers [45] (p-value < 0.05).
Method | 1% (48 ± 52) | 2% (95 ± 104) | 3% (143 ± 157) | 4% (190 ± 209) | 5% (238 ± 261)
CNN | 88.76 ± 0.20 * | 89.04 ± 0.26 * | 90.16 ± 0.27 * | 91.52 ± 0.23 * | 92.79 ± 0.21 *
CNN-S | 95.01 ± 0.74 | 95.55 ± 0.68 * | 95.87 ± 0.59 * | 95.98 ± 0.46 * | 96.25 ± 0.72 *
CNN-L | 88.77 ± 0.13 * | 89.51 ± 0.02 * | 88.79 ± 0.49 * | 89.05 ± 0.81 * | 89.20 ± 0.83 *
CNN-R | 89.04 ± 0.51 * | 89.86 ± 0.45 * | 90.29 ± 0.50 * | 91.50 ± 0.36 * | 92.65 ± 0.45 *
CNN-RS | 94.25 ± 0.49 * | 95.61 ± 0.52 * | 95.92 ± 0.47 * | 96.15 ± 0.53 * | 96.51 ± 0.29 *
CNN-SL | 91.12 ± 1.15 * | 92.17 ± 0.95 * | 92.53 ± 0.87 * | 93.10 ± 0.61 * | 93.32 ± 0.41 *
CNN-RL | 88.51 ± 0.55 * | 88.91 ± 0.58 * | 89.13 ± 0.49 * | 89.62 ± 0.52 * | 89.98 ± 0.77 *
CNN-RSL | 94.74 ± 0.25 | 96.36 ± 0.68 | 96.54 ± 0.46 | 96.65 ± 0.31 | 96.70 ± 0.44
SVM-RBF | 75.96 ± 2.56 * | 74.85 ± 0.78 * | 74.92 ± 1.41 * | 75.43 ± 1.26 * | 75.98 ± 1.96 *
SVM-RBF-S | 89.18 ± 4.23 * | 91.77 ± 3.59 * | 93.20 ± 3.57 * | 93.74 ± 4.21 * | 94.35 ± 3.68 *
SVM-RBF-L | 57.51 ± 2.67 * | 79.06 ± 1.47 * | 82.56 ± 2.86 * | 82.82 ± 0.86 * | 90.51 ± 1.24 *
SVM-RBF-SL | 91.68 ± 2.28 * | 93.89 ± 1.23 * | 95.00 ± 0.78 * | 95.48 ± 2.59 * | 95.56 ± 0.59 *
HL-ELM | 75.77 ± 2.02 * | 78.90 ± 2.13 * | 82.26 ± 2.55 * | 83.27 ± 2.18 * | 86.51 ± 2.27 *
HL-ELM-S | 90.75 ± 0.75 * | 92.91 ± 0.81 * | 93.85 ± 0.57 * | 94.37 ± 0.26 * | 95.23 ± 0.19 *
HL-ELM-L | 75.96 ± 1.76 * | 81.41 ± 2.31 * | 83.98 ± 1.95 * | 85.59 ± 1.72 * | 86.81 ± 1.63 *
HL-ELM-SL | 92.79 ± 0.91 * | 94.74 ± 0.79 * | 95.56 ± 0.58 * | 96.21 ± 0.36 * | 96.34 ± 0.18 *
Table 5. Test set classification accuracy on the KSC dataset, when using 1–5% randomly sampled labeled pixels per class for training. We report the mean and standard deviation over 10 runs. We also list the mean and standard deviation of the number of training pixels per class for each training %. The best accuracy for each training set is indicated in bold. An ‘*’ means that the best accuracy is significantly better than the accuracy achieved by the corresponding method according to a binomial test for comparing classifiers [45] (p-value < 0.05).
Method | 1% (4 ± 2) | 2% (8 ± 5) | 3% (12 ± 7) | 4% (16 ± 9) | 5% (20 ± 11)
CNN | 78.09 ± 0.99 * | 83.98 ± 0.92 * | 85.37 ± 1.21 * | 87.06 ± 1.25 * | 88.27 ± 1.75 *
CNN-S | 84.74 ± 1.32 * | 85.95 ± 0.78 * | 86.48 ± 0.72 * | 88.61 ± 0.66 * | 90.18 ± 0.52 *
CNN-L | 84.65 ± 1.85 * | 88.02 ± 1.70 * | 91.94 ± 0.99 * | 93.58 ± 0.36 * | 94.21 ± 0.15 *
CNN-R | 80.24 ± 1.44 * | 84.00 ± 0.82 * | 85.41 ± 0.94 * | 88.56 ± 0.81 * | 90.19 ± 0.59 *
CNN-RS | 84.95 ± 1.14 * | 85.86 ± 0.70 * | 86.50 ± 0.85 * | 88.95 ± 0.91 * | 90.09 ± 0.32 *
CNN-SL | 87.07 ± 0.85 * | 90.24 ± 0.78 * | 92.85 ± 0.71 * | 95.05 ± 0.66 * | 97.28 ± 0.53 *
CNN-RL | 84.78 ± 1.83 * | 88.53 ± 0.80 * | 92.65 ± 0.77 * | 93.66 ± 0.80 * | 94.09 ± 0.42 *
CNN-RSL | 90.34 ± 0.97 | 95.36 ± 0.02 | 97.22 ± 0.37 | 98.80 ± 0.60 | 99.79 ± 0.20
SVM-RBF | 67.85 ± 1.97 * | 76.17 ± 2.85 * | 79.87 ± 2.59 * | 79.17 ± 3.25 * | 82.45 ± 2.62 *
SVM-RBF-S | 89.03 ± 2.04 * | 90.87 ± 3.16 * | 93.51 ± 3.05 * | 95.36 ± 2.53 * | 96.76 ± 1.98 *
SVM-RBF-L | 81.10 ± 2.81 * | 87.14 ± 1.59 * | 87.66 ± 2.51 * | 91.15 ± 1.95 * | 92.47 ± 2.06 *
SVM-RBF-SL | 89.14 ± 3.15 * | 90.74 ± 1.65 * | 91.12 ± 2.54 * | 96.00 ± 1.56 * | 97.38 ± 0.85 *
HL-ELM | 79.21 ± 0.65 * | 81.88 ± 1.36 * | 82.79 ± 0.95 * | 83.87 ± 1.23 * | 86.21 ± 0.92 *
HL-ELM-S | 81.52 ± 2.42 * | 86.95 ± 1.24 * | 90.58 ± 1.12 * | 92.17 ± 0.87 * | 93.46 ± 0.51 *
HL-ELM-L | 79.60 ± 1.54 * | 83.14 ± 0.98 * | 83.56 ± 0.64 * | 85.35 ± 0.53 * | 86.48 ± 0.45 *
HL-ELM-SL | 88.11 ± 2.52 * | 91.31 ± 1.14 * | 93.64 ± 1.26 * | 95.72 ± 0.82 * | 96.77 ± 0.58 *
Table 6. Test set classification accuracy on the Indian Pines dataset, when using 1–5% randomly sampled labeled pixels per class for training. We report the mean and standard deviation over 10 runs. We also list the mean and standard deviation of the number of training pixels per class for each training %. The best accuracy for each training set is indicated in bold. An ‘*’ means that the best accuracy is significantly better than the accuracy achieved by the corresponding method according to a binomial test for comparing classifiers [45] (p-value < 0.05).
Method | 1% (8 ± 6) | 2% (17 ± 12) | 3% (25 ± 18) | 4% (34 ± 24) | 5% (42 ± 30)
CNN | 54.83 ± 0.23 * | 60.83 ± 0.87 * | 64.56 ± 0.74 * | 67.58 ± 0.67 * | 70.83 ± 0.77 *
CNN-S | 70.93 ± 0.40 * | 78.59 ± 0.54 * | 83.88 ± 0.65 * | 87.77 ± 0.70 * | 90.51 ± 0.53 *
CNN-L | 65.79 ± 0.51 * | 71.83 ± 0.42 * | 76.34 ± 0.46 * | 78.54 ± 0.27 * | 80.36 ± 0.88 *
CNN-R | 56.24 ± 0.39 * | 61.12 ± 0.45 * | 64.89 ± 0.55 * | 67.64 ± 0.50 * | 71.03 ± 0.83 *
CNN-RS | 72.63 ± 0.38 * | 78.63 ± 0.44 * | 84.01 ± 0.58 * | 88.32 ± 0.57 * | 90.83 ± 0.49 *
CNN-SL | 68.11 ± 0.54 * | 72.67 ± 0.38 * | 76.71 ± 0.53 * | 78.76 ± 0.57 * | 80.85 ± 0.38 *
CNN-RL | 66.02 ± 0.43 * | 72.00 ± 0.53 * | 76.14 ± 0.37 * | 78.82 ± 0.46 * | 80.78 ± 0.61 *
CNN-RSL | 86.42 ± 0.66 | 92.70 ± 0.80 | 94.45 ± 0.92 | 96.00 ± 0.38 | 96.42 ± 0.24
SVM-RBF | 58.75 ± 0.49 * | 60.58 ± 0.36 * | 61.47 ± 0.28 * | 63.81 ± 0.47 * | 64.45 ± 0.32 *
SVM-RBF-S | 77.23 ± 2.90 * | 83.23 ± 2.46 * | 86.44 ± 3.01 * | 88.91 ± 2.78 * | 89.52 ± 2.67 *
SVM-RBF-L | 65.74 ± 2.92 * | 69.47 ± 1.43 * | 70.38 ± 1.35 * | 77.18 ± 1.77 * | 77.90 ± 1.59 *
SVM-RBF-SL | 85.14 ± 2.53 * | 90.12 ± 1.91 * | 92.95 ± 1.57 * | 93.24 ± 1.72 * | 94.17 ± 0.86 *
HL-ELM | 66.29 ± 0.51 * | 71.69 ± 1.24 * | 74.28 ± 0.79 * | 76.60 ± 0.64 * | 78.04 ± 0.52 *
HL-ELM-S | 73.88 ± 0.54 * | 82.48 ± 0.96 * | 86.51 ± 0.74 * | 88.49 ± 0.68 * | 90.58 ± 0.55 *
HL-ELM-L | 66.34 ± 0.14 * | 73.15 ± 0.11 * | 73.19 ± 0.21 * | 76.85 ± 0.16 * | 78.27 ± 0.19 *
HL-ELM-SL | 82.05 ± 0.96 * | 88.12 ± 0.54 * | 91.39 ± 0.49 * | 93.37 ± 0.53 * | 94.41 ± 0.45 *
Table 7. Test set classification accuracy on the Salinas dataset when using 1–5% randomly sampled labeled pixels per class for training. We report the mean and standard deviation over 10 runs. We also list the mean and standard deviation of the number of training pixels per class for each training %. The best accuracy for each training set is indicated in bold. An ‘*’ means that the best accuracy is significantly better than the accuracy achieved by the corresponding method according to a binomial test for comparing classifiers [45] (p-value < 0.05).
Method | 1% (34 ± 27) | 2% (68 ± 54) | 3% (101 ± 81) | 4% (135 ± 107) | 5% (169 ± 134)
CNN | 87.13 ± 0.42 * | 88.21 ± 0.37 * | 88.54 ± 0.22 * | 89.46 ± 0.57 * | 91.14 ± 0.17 *
CNN-S | 96.87 ± 0.35 * | 97.05 ± 0.29 * | 97.17 ± 0.31 * | 97.21 ± 0.17 * | 97.52 ± 0.54 *
CNN-L | 87.13 ± 0.59 * | 87.93 ± 0.52 * | 88.29 ± 0.37 * | 88.45 ± 0.36 * | 89.56 ± 0.75 *
CNN-R | 88.19 ± 0.37 * | 89.15 ± 0.45 * | 89.98 ± 0.40 * | 90.56 ± 0.32 * | 91.27 ± 0.22 *
CNN-RS | 94.93 ± 0.65 * | 97.02 ± 0.51 * | 97.14 ± 0.57 * | 97.30 ± 0.72 * | 97.57 ± 0.60 *
CNN-SL | 93.15 ± 0.84 * | 93.96 ± 0.73 * | 94.23 ± 0.68 * | 95.17 ± 0.52 * | 96.25 ± 0.43 *
CNN-RL | 86.82 ± 0.77 * | 87.02 ± 0.65 * | 87.87 ± 0.60 * | 88.02 ± 0.54 * | 88.36 ± 0.46 *
CNN-RSL | 96.93 ± 0.55 | 97.16 ± 0.69 | 97.68 ± 0.78 | 98.21 ± 0.41 | 99.03 ± 0.17
SVM-RBF | 72.38 ± 1.85 * | 73.51 ± 2.67 * | 73.78 ± 2.54 * | 74.36 ± 2.16 * | 74.84 ± 2.04 *
SVM-RBF-S | 93.35 ± 3.41 * | 94.68 ± 2.87 * | 96.18 ± 2.26 * | 96.87 ± 2.59 * | 97.16 ± 1.85 *
SVM-RBF-L | 84.29 ± 2.85 * | 85.36 ± 2.16 * | 85.87 ± 2.14 * | 86.68 ± 2.27 * | 87.16 ± 1.55 *
SVM-RBF-SL | 96.23 ± 0.47 * | 96.31 ± 0.36 * | 97.38 ± 0.52 * | 97.47 ± 0.75 * | 97.68 ± 0.48 *
HL-ELM | 87.50 ± 0.53 * | 88.95 ± 0.74 * | 89.38 ± 0.45 * | 91.02 ± 0.40 * | 91.36 ± 0.25 *
HL-ELM-S | 92.16 ± 0.78 * | 94.64 ± 0.89 * | 95.36 ± 0.45 * | 96.17 ± 0.41 * | 96.42 ± 0.48 *
HL-ELM-L | 87.77 ± 0.56 * | 89.56 ± 0.46 * | 90.16 ± 0.37 * | 91.11 ± 0.32 * | 91.87 ± 0.40 *
HL-ELM-SL | 93.51 ± 0.92 * | 95.44 ± 0.83 * | 96.49 ± 0.35 * | 97.41 ± 0.26 * | 97.81 ± 0.20 *
Table 8. Average classification accuracy of test data over 10 runs using 10 randomly sampled labeled pixels per class for training. The best accuracy for each dataset is indicated in bold. An ‘*’ means that the best accuracy is significantly better than the accuracy achieved by the corresponding method according to a binomial test for comparing classifiers [45] (p-value < 0.05).
Method | Pavia Center | Pavia University | KSC | Indian Pines | Salinas
CNN | 92.21 ± 0.56 * | 85.51 ± 1.65 * | 90.89 ± 2.83 * | 76.75 ± 0.56 * | 88.58 ± 2.15 *
CNN-S | 97.44 ± 0.38 * | 89.12 ± 1.10 * | 93.62 ± 2.74 * | 80.59 ± 0.10 * | 91.31 ± 0.79 *
CNN-L | 96.57 ± 0.48 * | 88.36 ± 1.39 * | 92.91 ± 2.54 * | 79.77 ± 0.52 * | 90.66 ± 1.42 *
CNN-R | 92.84 ± 0.44 * | 86.11 ± 1.22 * | 91.31 ± 2.14 * | 77.50 ± 0.49 * | 89.01 ± 1.54 *
CNN-RS | 97.87 ± 0.29 * | 90.30 ± 2.11 * | 94.41 ± 2.03 * | 81.25 ± 0.12 * | 91.68 ± 0.84 *
CNN-SL | 97.25 ± 0.41 * | 88.67 ± 1.02 * | 93.10 ± 1.95 * | 80.03 ± 0.46 * | 92.81 ± 0.79 *
CNN-RL | 96.88 ± 0.43 * | 88.39 ± 1.18 * | 93.04 ± 2.11 * | 82.92 ± 0.41 * | 91.75 ± 0.93 *
CNN-RSL | 98.65 ± 0.37 | 95.76 ± 1.14 | 97.85 ± 1.35 | 83.96 ± 0.60 | 93.01 ± 1.44
SVM-RBF | 90.04 ± 0.64 * | 83.27 ± 1.80 * | 88.71 ± 2.76 * | 76.49 ± 0.73 * | 87.59 ± 2.81 *
SVM-RBF-S | 90.82 ± 0.72 * | 89.49 ± 1.46 * | 80.86 ± 1.94 * | 78.32 ± 0.91 * | 85.19 ± 1.61 *
SVM-RBF-L | 90.65 ± 0.79 * | 89.05 ± 1.65 * | 80.16 ± 1.88 * | 78.02 ± 1.14 * | 84.78 ± 1.82 *
SVM-RBF-SL | 91.08 ± 0.58 * | 89.79 ± 1.37 * | 81.47 ± 1.85 * | 78.96 ± 0.83 * | 85.69 ± 1.62 *
HL-ELM | 90.89 ± 0.85 * | 83.47 ± 1.92 * | 85.10 ± 2.74 * | 76.22 ± 0.68 * | 84.19 ± 1.69 *
HL-ELM-S | 78.44 ± 0.51 * | 88.59 ± 1.80 * | 91.12 ± 1.74 * | 78.33 ± 0.61 * | 84.79 ± 1.55 *
HL-ELM-L | 77.81 ± 0.61 * | 86.19 ± 1.85 * | 90.96 ± 1.83 * | 77.53 ± 0.75 * | 84.58 ± 1.73 *
HL-ELM-SL | 91.20 ± 0.67 * | 90.07 ± 1.77 * | 92.50 ± 1.53 * | 79.31 ± 0.66 * | 85.71 ± 1.33 *
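Throughout Tables 3–8, the '*' marks are assigned with a binomial test for comparing classifiers [45] at p < 0.05. As a rough sketch only: a common binomial formulation is a two-sided sign test restricted to the test pixels on which exactly one of the two classifiers is correct. We have not verified that this matches the exact variant of [45], and the helper name `binomial_compare` is ours.

```python
from math import comb

def binomial_compare(y_true, pred_a, pred_b):
    """Two-sided binomial (sign) test on the pixels where exactly one of the
    two classifiers is correct; returns the p-value."""
    a_only = sum(a == t != b for t, a, b in zip(y_true, pred_a, pred_b))
    b_only = sum(b == t != a for t, a, b in zip(y_true, pred_a, pred_b))
    n, k = a_only + b_only, min(a_only, b_only)
    if n == 0:
        return 1.0                                   # the classifiers never disagree
    tail = sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, 2.0 * tail)                      # two-sided p-value

# Example: a p-value below 0.05 would earn the weaker method an '*'.
y_true = [0, 1, 2, 1, 0, 2] * 100
pred_a = list(y_true)                                # always correct
pred_b = [(y + 1) % 3 for y in y_true]               # always wrong
print(binomial_compare(y_true, pred_a, pred_b))      # ~0, clearly significant
```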
Table 9. Test set classification accuracy when using 200 randomly sampled labeled pixels per class for training. We report the mean and standard deviation over 10 runs. The best accuracy for each dataset is indicated in bold. With the exception of methods from [18,19], an ‘*’ means that the best accuracy is significantly better than the accuracy achieved by the corresponding method according to a binomial test for comparing classifiers [45] (p-value < 0.05).
Method | Pavia Center | Pavia University | KSC | Indian Pines | Salinas
CNN-RSL | 99.52 ± 0.11 | 97.85 ± 0.07 | 99.87 ± 0.17 | 97.73 ± 0.22 | 98.54 ± 0.41
SVM-RBF | 96.26 ± 2.38 * | 78.31 ± 9.59 * | 92.96 ± 3.81 * | 71.18 ± 5.56 * | 87.36 ± 2.18 *
SVM-RBF-SL | 98.67 ± 0.14 * | 90.02 ± 0.19 * | 98.92 ± 0.09 * | 95.12 ± 3.58 * | 97.29 ± 2.01 *
Hu et al. [19] | – | 92.56 | – | – | 92.60
HL-ELM-SL [28] | 98.04 ± 0.09 * | 95.74 ± 0.49 * | 93.08 ± 0.66 * | 95.40 ± 0.54 * | 97.84 ± 0.09 *
Lee & Kwon [18] report 92.06; MH-KELM [36] achieves 80.07 ± 0.01 * and 91.75 ± 0.29 *.
Table 10. Average classification accuracy reported on other experimental settings used in the literature. We considered the following methods: Low-Rank and Sparse Representation Classifier with a Spectral Consistency Constraint (LRSRC-SCC), Probabilistic Class Structure Regularized Sparse Representation (PCSSR), Multiple Kernel Learning (MKL), Discriminative Low-Rank Gabor Filtering (DLRGF), Kernel Sparse Representation (KSR), Image Fusion and Recursive Filtering (IFRF) and our method (CNN-RSL). The best accuracy for each dataset and % training samples employed is indicated in bold.
Dataset | Train % | Other methods (LRSRC-SCC [30], PCSSR [31], MKL [29], DLRGF [12], KSR [34], IFRF [46]) | CNN-RSL
Pavia University | 1% | 94.15 ± 0.56, 96.87, 87.88 ± 1.19 * | 94.74 ± 0.25 *
Pavia University | 5% | 97.58 ± 1.51 | 96.70 ± 0.44 *
KSC | 1% | 79.80, 89.40 ± 0.88 *, 84.01 ± 1.76 * | 90.34 ± 0.97
KSC | 5% | 88.48, 98.73 ± 0.99 *, 93.46 ± 1.23 * | 99.79 ± 0.20
Indian Pines | 1% | 83.59 ± 0.81 *, 84.50 ± 1.24 * | 86.42 ± 0.66
Indian Pines | 5% | 95.16 ± 0.24 *, 95.31 ± 0.85 * | 96.42 ± 0.24
Indian Pines | 10% | 95.18 ± 0.58 *, 98.47, 96.27 ± 0.34 * | 97.62 ± 0.17 *
Salinas | 1% | 95.31, 96.36 ± 0.51 * | 96.93 ± 0.55
Salinas | 5% | 97.98, 98.36 ± 0.51 * | 99.03 ± 0.17
Table 11. Average classification accuracy of test data over 10 runs in the non-overlapping learning setting (see Section 2.2.2). The best accuracy for each dataset is indicated in bold. An ‘*’ means that the best accuracy is significantly better than the accuracy achieved by the corresponding method according to a binomial test for comparing classifiers [45] (p-value < 0.05).
Method | Pavia Center | Pavia University | KSC | Indian Pines | Salinas
CNN | 92.68 ± 2.35 * | 51.03 ± 8.40 * | 66.76 ± 5.58 * | 39.10 ± 5.15 * | 75.66 ± 3.82 *
CNN-RS | 93.38 ± 3.69 | 52.74 ± 5.82 | 74.86 ± 4.24 | 49.22 ± 3.20 | 77.90 ± 3.73
SVM-RBF | 80.91 ± 6.52 * | 32.43 ± 11.35 * | 65.92 ± 6.53 * | 24.13 ± 5.99 * | 76.89 ± 3.78 *
SVM-RBF-S | 81.46 ± 3.53 * | 48.44 ± 9.97 * | 68.95 ± 8.23 * | 36.48 ± 10.97 * | 78.29 ± 3.64 *
HL-ELM | 84.72 ± 2.54 * | 49.01 ± 3.97 * | 53.14 ± 4.65 * | 43.85 ± 5.49 * | 71.87 ± 5.07 *
HL-ELM-S | 88.82 ± 1.79 * | 50.28 ± 3.86 * | 64.23 ± 1.81 * | 45.12 ± 4.76 * | 75.84 ± 1.59 *
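Table 11 refers to the non-overlapping learning setting of Section 2.2.2, in which unlabeled pixels are not available when the classifier is built. Purely as an illustration of that constraint (and not the procedure of Section 2.2.2, which we do not reproduce here), the sketch below blanks out every pixel outside the training set before spatial neighborhoods are extracted for training; the masking choice and the name `mask_non_training` are our assumptions.

```python
import numpy as np

def mask_non_training(cube, train_idx, fill=0.0):
    """Copy of a hyperspectral cube (H, W, bands) in which every pixel that is
    not a training pixel is replaced by `fill`, so training-time neighborhoods
    cannot draw information from unlabeled pixels."""
    h, w, bands = cube.shape
    keep = np.zeros(h * w, dtype=bool)
    keep[train_idx] = True
    masked = np.full_like(cube, fill)
    masked.reshape(-1, bands)[keep] = cube.reshape(-1, bands)[keep]
    return masked

# Toy example: a 50 x 50 scene with 103 bands and 100 training pixels.
rng = np.random.default_rng(0)
cube = rng.random((50, 50, 103))
train_idx = rng.choice(50 * 50, size=100, replace=False)
train_cube = mask_non_training(cube, train_idx)
```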
