Abstract
This paper proposes a novel deep learning framework named the bidirectional-convolutional long short-term memory (Bi-CLSTM) network to automatically learn the spectral-spatial features of hyperspectral images (HSIs). In the network, the issue of spectral feature extraction is considered as a sequence learning problem, and a recurrent connection operator across the spectral domain is used to address it. Meanwhile, inspired by the widely used convolutional neural network (CNN), a convolution operator across the spatial domain is incorporated into the network to extract the spatial feature. In addition, to sufficiently capture the spectral information, a bidirectional recurrent connection is proposed. In the classification phase, the learned features are concatenated into a vector and fed to a Softmax classifier via a fully-connected operator. To validate the effectiveness of the proposed Bi-CLSTM framework, we compare it with six state-of-the-art methods, including the popular 3D-CNN model, on three widely used HSIs (i.e., Indian Pines, Pavia University, and Kennedy Space Center). The obtained results show that Bi-CLSTM clearly improves the classification performance as compared to 3D-CNN.
1. Introduction
Current hyperspectral sensors can acquire images with high spectral and spatial resolutions simultaneously. For example, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor covers 224 continuous spectral bands across the electromagnetic spectrum with a spatial resolution of 3.7 m. Such rich information has been successfully used in various applications such as national defense, urban planning, precision agriculture, and environmental monitoring [1].
For these applications, an essential step is image classification, whose purpose is to identify the label of each pixel. Hyperspectral image (HSI) classification is a challenging task in which two important issues exist [2,3]. The first is the curse of dimensionality. HSI provides very high-dimensional data with hundreds of spectral channels ranging from the visible to the short-wave infrared region of the electromagnetic spectrum. Such high-dimensional data with limited numbers of training samples can easily result in the Hughes phenomenon [4], meaning that the classification accuracy starts to decrease once the number of features exceeds a threshold. The second is the use of spatial information. Improved spatial resolutions may increase spectral variations among intra-class pixels while decreasing spectral variations among inter-class pixels [5,6]. Thus, using spectral information alone is not enough to obtain a satisfactory result.
To solve the first issue, a widely used method is to project the original data into a low-dimensional subspace in which most of the useful information is preserved. In the existing literature, a large number of works have been proposed [7,8,9,10]. They can be roughly divided into two categories: unsupervised feature extraction (FE) methods and supervised ones. The unsupervised methods attempt to reveal low-dimensional data structures without using any label information of training samples. These methods retain the overall structure of the data rather than focusing on the separating information of samples. Typical methods include but are not limited to principal component analysis (PCA) [7], neighborhood preserving embedding (NPE) [11], and independent component analysis (ICA) [12]. Different from these, the aim of supervised learning methods is to exploit the information of labeled data to learn a discriminant subspace. One typical method is linear discriminant analysis (LDA) [13,14], which aims to maximize the inter-class distance and minimize the intra-class distance. In [8], a non-parametric weighted FE (NWFE) method was proposed; NWFE extends LDA by integrating nonparametric scatter matrices with training samples around the decision boundary. Local Fisher discriminant analysis (LFDA) was proposed in [15], which extends LDA by assigning greater weights to closely connected samples.
To address the second issue, many works have been proposed to incorporate the spatial information into the spectral information [16,17,18]. This is because the coverage area of one kind of material or one object usually contains more than one pixel. Current spatial-spectral feature fusion methods can be categorized into three classes: feature-level fusion, decision-level fusion, and regularization-level fusion [3]. In feature-level fusion, one often extracts the spatial features and the spectral features independently and then concatenates them into a vector [5,19,20,21]; however, the direct concatenation leads to a high-dimensional feature space. In decision-level fusion, multiple results are first derived using the spatial and spectral information, respectively, and then combined according to some strategy such as majority voting [22,23,24]. In regularization-level fusion, a regularizer representing the spatial information is incorporated into the original objective function. For example, in [25,26], based on Markov random field (MRF) modeling, the joint prior probabilities of each pixel and its spatial neighbors were incorporated into the Bayesian classifier as a regularizer. Although this method works well in capturing the spatial information, optimizing the objective function in MRF is time-consuming, especially on high-resolution data.
Recently, deep learning (DL) has attracted much attention in the field of remote sensing [27,28,29,30]. The core idea of DL is to automatically learn high-level semantic features from the data itself in a hierarchical manner. In [31,32], the autoencoder model was successfully used for HSI classification. In general, the input of the autoencoder model is a high-dimensional vector. Thus, to learn the spatial features of HSIs, an alternative is to flatten a local image patch into a vector and then feed it into the model. However, this destroys the two-dimensional (2D) structure of images, leading to the loss of spatial information. Similar issues can be found in the deep belief network (DBN) [33]. To address this issue, convolutional neural network (CNN) based deep models have been widely used [2,34]. They directly take the original image or a local image patch as the network input, and use locally-connected, weight-sharing structures to extract the spatial features of HSIs. In [2], the authors designed a CNN network with three convolutional layers and one fully-connected layer, whose input is the first principal component of the HSI extracted by PCA. Although the experimental results demonstrate that this model can successfully learn the spatial features of HSIs, it may fail to extract the spectral features. Recently, a three-dimensional (3D) CNN model was proposed in [34]. In order to extract the spectral-spatial features of HSIs, the authors take 3D image patches as the input of the network. This complex structure inevitably increases the number of parameters, easily leading to overfitting with a limited number of training samples.
In this paper, we propose a bidirectional-convolutional long short-term memory (Bi-CLSTM) network to address the spectral-spatial feature learning problem. Specifically, we regard all the spectral bands as an image sequence and model their relationships using a powerful LSTM network [35]. Similar to other fully-connected networks such as the autoencoder and DBN, LSTM cannot capture the spatial information of HSIs. Inspired by [36], we replace the fully-connected operators in the network with convolutional operators, resulting in a convolutional LSTM (CLSTM) network, which can thus simultaneously learn the spectral and spatial features. In addition, LSTM assumes that only previous states affect the current state, whereas the spectral channels in the sequence are mutually correlated in both directions. To address this issue, we further propose a Bi-CLSTM network. During the training of the Bi-CLSTM network, we adopt two tricks, dropout and data augmentation, to alleviate the overfitting problem.
To sum up, the main contributions of this paper are as follows. First, we consider images in all the spectral bands as an image sequence, and use LSTM to effectively model their relationships; second, considering the specific characteristics of hyperspectral images, we further propose a unified framework to combine the merits of LSTM and CNN for spectral-spatial feature extraction.
2. Review of RNN and LSTM
Recurrent neural network (RNN) [37,38] is an extension of traditional neural networks designed to address sequence learning problems. Unlike a feedforward neural network, RNN adds recurrent edges that connect a neuron to itself across time so that it can model a probability distribution over sequence data. Figure 1 demonstrates an example of RNN. The input of the network is a sequence $(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)$. At time step $t$, the node updates its hidden state $\mathbf{h}_t$, given its previous state $\mathbf{h}_{t-1}$ and the present input $\mathbf{x}_t$, by

$$\mathbf{h}_t = \varphi(\mathbf{W}_{xh} \cdot \mathbf{x}_t + \mathbf{W}_{hh} \cdot \mathbf{h}_{t-1} + \mathbf{b}), \quad (1)$$

where $\mathbf{W}_{xh}$ is the weight between the input node and the recurrent hidden node, $\mathbf{W}_{hh}$ is the weight between the recurrent hidden node and itself from the previous time step, and $\mathbf{b}$ and $\varphi$ are the bias and nonlinear activation function, respectively.
Figure 1.
The structure of RNN.
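To make the recurrence in Equation (1) concrete, the following NumPy sketch unrolls it over a toy sequence. The tanh activation, the dimensions, and all variable names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    """One recurrent update following Equation (1): the new hidden state
    depends on the current input and the previous hidden state.
    tanh is one common choice for the activation function."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

# Toy dimensions: 5-dimensional inputs, 8 hidden units (illustrative).
rng = np.random.default_rng(0)
W_xh, W_hh, b = rng.normal(size=(8, 5)), rng.normal(size=(8, 8)), np.zeros(8)
h = np.zeros(8)
for x in rng.normal(size=(10, 5)):  # a length-10 input sequence
    h = rnn_step(x, h, W_xh, W_hh, b)
```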
As an important branch of the deep learning family, RNNs have recently shown promising results in many machine learning and computer vision tasks [39,40]. However, it has been observed that training RNN models to capture long-term dependencies in sequence data is difficult. As can be seen from Equation (1), the contribution of the recurrent hidden node at time $m$ to itself at time $n$ may approach infinity or zero as the time interval $n-m$ increases, depending on whether the norm of the recurrent weight $\mathbf{W}_{hh}$ is larger or smaller than one. This leads to the gradient vanishing and exploding problem [41]. To address this issue, Hochreiter and Schmidhuber proposed LSTM [35], which replaces the recurrent hidden node by a memory cell. The memory cell contains a node with a self-connected recurrent edge of fixed weight one, ensuring that the gradient can pass across many time steps without vanishing or exploding. The LSTM unit consists of four important parts: the input gate $\mathbf{i}_t$, the output gate $\mathbf{o}_t$, the forget gate $\mathbf{f}_t$, and the candidate cell value $\tilde{\mathbf{c}}_t$. Based on these parts, the memory cell $\mathbf{c}_t$ and the output $\mathbf{h}_t$ can be computed by:

$$\begin{aligned}
\mathbf{i}_t &= \sigma(\mathbf{W}_{xi} \cdot \mathbf{x}_t + \mathbf{W}_{hi} \cdot \mathbf{h}_{t-1} + \mathbf{b}_i),\\
\mathbf{f}_t &= \sigma(\mathbf{W}_{xf} \cdot \mathbf{x}_t + \mathbf{W}_{hf} \cdot \mathbf{h}_{t-1} + \mathbf{b}_f),\\
\tilde{\mathbf{c}}_t &= \tanh(\mathbf{W}_{xc} \cdot \mathbf{x}_t + \mathbf{W}_{hc} \cdot \mathbf{h}_{t-1} + \mathbf{b}_c),\\
\mathbf{c}_t &= \mathbf{f}_t \circ \mathbf{c}_{t-1} + \mathbf{i}_t \circ \tilde{\mathbf{c}}_t,\\
\mathbf{o}_t &= \sigma(\mathbf{W}_{xo} \cdot \mathbf{x}_t + \mathbf{W}_{ho} \cdot \mathbf{h}_{t-1} + \mathbf{b}_o),\\
\mathbf{h}_t &= \mathbf{o}_t \circ \tanh(\mathbf{c}_t),
\end{aligned} \quad (2)$$

where $\sigma$ is the logistic sigmoid function, '·' is a matrix multiplication operator, '∘' is a dot (element-wise) product operator, and $\mathbf{b}_i$, $\mathbf{b}_f$, $\mathbf{b}_c$, and $\mathbf{b}_o$ are bias terms. The weight matrix subscripts have obvious meanings. For instance, $\mathbf{W}_{hi}$ is the hidden-input gate matrix, and $\mathbf{W}_{xo}$ is the input-output gate matrix, etc.
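The gating logic of Equation (2) can be written compactly as below; this is a minimal NumPy sketch with illustrative names and a dictionary-based weight layout of our own choosing, not a trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update following Equation (2). '@' is the matrix product
    ('.') and '*' the element-wise (dot) product ('o') of the text."""
    i = sigmoid(W["xi"] @ x_t + W["hi"] @ h_prev + b["i"])  # input gate
    f = sigmoid(W["xf"] @ x_t + W["hf"] @ h_prev + b["f"])  # forget gate
    g = np.tanh(W["xc"] @ x_t + W["hc"] @ h_prev + b["c"])  # candidate value
    o = sigmoid(W["xo"] @ x_t + W["ho"] @ h_prev + b["o"])  # output gate
    c = f * c_prev + i * g     # the fixed-weight self-connection of the cell
    h = o * np.tanh(c)
    return h, c

# Toy usage: 5-dimensional inputs, 8 hidden units (illustrative).
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(8, 5)) if k[0] == "x" else rng.normal(size=(8, 8))
     for k in ("xi", "hi", "xf", "hf", "xc", "hc", "xo", "ho")}
b = {k: np.zeros(8) for k in "ifco"}
h, c = lstm_step(rng.normal(size=5), np.zeros(8), np.zeros(8), W, b)
```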
3. Methodology
The flowchart of the proposed Bi-CLSTM model is shown in Figure 2. Suppose an HSI can be represented as a 3D matrix $\mathbf{X} \in \mathbb{R}^{m \times n \times l}$ with $m \times n$ pixels and $l$ spectral channels. Given a pixel at the spatial position $(i, j)$, where $1 \le i \le m$ and $1 \le j \le n$, we can choose a small sub-cube $\mathbf{X}_{ij} \in \mathbb{R}^{s \times s \times l}$ centered at it, where $s$ is the spatial size of the sub-cube. The goal of Bi-CLSTM is to learn the most discriminative spectral-spatial information from $\mathbf{X}_{ij}$. Such information is the final feature representation for the pixel at the spatial position $(i, j)$. If we split the sub-cube across the spectral channels, then $\mathbf{X}_{ij}$ can be considered as an $l$-length sequence $(\mathbf{x}_{ij}^1, \mathbf{x}_{ij}^2, \ldots, \mathbf{x}_{ij}^l)$. The image patches in the sequence are fed into the CLSTM one by one to extract the spectral feature via a recurrent operator and the spatial feature via a convolution operator simultaneously (a code sketch of this preparation follows Figure 2).
Figure 2.
Flowchart of the Bi-CLSTM network for HSI classification. For a given pixel, a local cube surrounding it is first extracted, and then unfolded across the spectral domain. The unfolded images are fed into the Bi-CLSTM network one by one.
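A minimal sketch of this patch-to-sequence preparation is given below, assuming NumPy arrays and a hypothetical patch size s; zero-padding the image border so that edge pixels also get full patches is our assumption.

```python
import numpy as np

def extract_sequence(hsi, i, j, s):
    """Cut an s x s x l sub-cube approximately centered at pixel (i, j)
    and unfold it across the spectral domain into an l-length sequence."""
    r = s // 2
    padded = np.pad(hsi, ((r, r), (r, r), (0, 0)))  # spatial zero-padding
    cube = padded[i:i + s, j:j + s, :]              # s x s x l sub-cube
    # Keep a singleton channel axis on each band patch for the CLSTM below.
    return [cube[:, :, k, None] for k in range(hsi.shape[2])]

hsi = np.random.rand(145, 145, 200)              # an Indian Pines-sized cube
sequence = extract_sequence(hsi, 10, 20, s=32)   # s = 32 is hypothetical
```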
CLSTM is a modification of LSTM that replaces the fully-connected operators by convolutional operators [36]. The structure of CLSTM is shown in Figure 3, where the left side zooms in on its core computation unit, called a memory cell. In the memory cell, '⊗' and '⊕' represent the dot product and matrix addition, respectively. For the $k$-th image patch $\mathbf{x}_k$ in the sequence, CLSTM first decides what information to throw away from the previous cell state $\mathbf{c}_{k-1}$ via the forget gate $\mathbf{f}_k$. The forget gate pays attention to $\mathbf{x}_k$ and the previous output $\mathbf{h}_{k-1}$, and outputs a value between 0 and 1 after an activation function. Here, 1 represents "keep the whole information" and 0 represents "throw away the information completely". Secondly, CLSTM needs to decide what new information to store in the current cell state $\mathbf{c}_k$. This includes two parts: first, the input gate $\mathbf{i}_k$ decides what information to update in the same way as the forget gate; second, the memory cell creates a candidate value $\tilde{\mathbf{c}}_k$ computed from $\mathbf{x}_k$ and $\mathbf{h}_{k-1}$. After finishing these two parts, CLSTM multiplies the previous memory cell state $\mathbf{c}_{k-1}$ by $\mathbf{f}_k$, adds the product to $\mathbf{i}_k \circ \tilde{\mathbf{c}}_k$, and updates the cell state to $\mathbf{c}_k$. Finally, CLSTM decides what information $\mathbf{h}_k$ to output via the cell state $\mathbf{c}_k$ and the output gate $\mathbf{o}_k$. The above process can be formulated as the following equations:

$$\begin{aligned}
\mathbf{i}_k &= \sigma(\mathbf{W}_{xi} * \mathbf{x}_k + \mathbf{W}_{hi} * \mathbf{h}_{k-1} + \mathbf{b}_i),\\
\mathbf{f}_k &= \sigma(\mathbf{W}_{xf} * \mathbf{x}_k + \mathbf{W}_{hf} * \mathbf{h}_{k-1} + \mathbf{b}_f),\\
\tilde{\mathbf{c}}_k &= \tanh(\mathbf{W}_{xc} * \mathbf{x}_k + \mathbf{W}_{hc} * \mathbf{h}_{k-1} + \mathbf{b}_c),\\
\mathbf{c}_k &= \mathbf{f}_k \circ \mathbf{c}_{k-1} + \mathbf{i}_k \circ \tilde{\mathbf{c}}_k,\\
\mathbf{o}_k &= \sigma(\mathbf{W}_{xo} * \mathbf{x}_k + \mathbf{W}_{ho} * \mathbf{h}_{k-1} + \mathbf{b}_o),\\
\mathbf{h}_k &= \mathbf{o}_k \circ \tanh(\mathbf{c}_k),
\end{aligned} \quad (3)$$

where $\sigma$ is the logistic sigmoid function, '∗' is a convolutional operator, '∘' is a dot product, and $\mathbf{b}_i$, $\mathbf{b}_f$, $\mathbf{b}_c$, and $\mathbf{b}_o$ are bias terms. The weight matrix subscripts have the obvious meaning. For example, $\mathbf{W}_{hi}$ is the hidden-input gate matrix, and $\mathbf{W}_{xo}$ is the input-output gate matrix, etc. To implement the convolutional and recurrent operators in CLSTM simultaneously, the spatial sizes of $\mathbf{h}_{k-1}$ and $\mathbf{c}_{k-1}$ must be the same as that of $\mathbf{x}_k$ (we use zero-padding [42] to ensure that the input keeps its original spatial size after the convolution operation). A code sketch of this update follows Figure 3.
Figure 3.
The structure of CLSTM.
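The sketch below mirrors Equation (3), with every matrix product replaced by a 'same'-padded 2D convolution so the spatial size is preserved. The conv helper and the (k, k, C_in, C_out) weight layout are illustrative assumptions; a practical implementation would use an optimized framework such as TensorFlow.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv(x, kernels):
    """'same'-padded 2D convolution: x is (H, W, C_in) and kernels is
    (k, k, C_in, C_out); zero-padding keeps the spatial size unchanged."""
    H, W, _ = x.shape
    out = np.zeros((H, W, kernels.shape[-1]))
    for o in range(kernels.shape[-1]):
        for c in range(x.shape[-1]):
            out[:, :, o] += convolve2d(x[:, :, c], kernels[:, :, c, o],
                                       mode="same")
    return out

def clstm_step(x_k, h_prev, c_prev, W, b):
    """One CLSTM update: the same gating as LSTM, but with convolutions
    in place of matrix products, so the spatial structure survives."""
    i = sigmoid(conv(x_k, W["xi"]) + conv(h_prev, W["hi"]) + b["i"])  # input gate
    f = sigmoid(conv(x_k, W["xf"]) + conv(h_prev, W["hf"]) + b["f"])  # forget gate
    g = np.tanh(conv(x_k, W["xc"]) + conv(h_prev, W["hc"]) + b["c"])  # candidate
    o = sigmoid(conv(x_k, W["xo"]) + conv(h_prev, W["ho"]) + b["o"])  # output gate
    c = f * c_prev + i * g            # element-wise gating of the cell state
    h = o * np.tanh(c)
    return h, c
```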
In the existing literature [43,44,45], LSTM has been well acknowledged as a powerful network for orderly sequence learning problems, based on the assumption that previous states affect future states. However, different from the traditional sequence learning problem, the spectral channels in the sequence are correlated with each other in both directions. In [46], the bidirectional recurrent neural network (Bi-RNN) was proposed to use both subsequent and previous information to model sequential data. Motivated by it, we use the Bi-CLSTM network shown in Figure 2 to sufficiently extract the spectral feature. Specifically, the image patches are fed into the CLSTM network one by one in a forward and a backward sequence, respectively. After that, we acquire two spectral-spatial feature sequences. In the classification stage, they are concatenated into a vector denoted as $\mathbf{G}$, and a Softmax layer is used to obtain the probability of each class that the pixel belongs to. The Softmax function ensures that the activations of the output units sum to 1, so that we can interpret the output as a set of conditional probabilities. Given the vector $\mathbf{G}$, the probability that the input belongs to category $c$ equals

$$P(y = c \mid \mathbf{G}) = \frac{\exp(\mathbf{W}_c \mathbf{G} + b_c)}{\sum_{c'} \exp(\mathbf{W}_{c'} \mathbf{G} + b_{c'})}, \quad (4)$$

where $\mathbf{W}$ and $\mathbf{b}$ are the weights and biases of the Softmax layer and the summation is over all the output units. The pseudocode for the Bi-CLSTM model is given in Algorithm 1, where we use simplified variables to make the procedure clear.
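The Softmax layer itself reduces to a few lines. The max-shift below is a standard numerical-stability trick, not a detail stated in the paper.

```python
import numpy as np

def softmax_probs(G, W, b):
    """Class posteriors of Equation (4) from the concatenated feature
    vector G; W has one row (and b one entry) per class."""
    z = W @ G + b
    z -= z.max()        # stabilizer; does not change the probabilities
    e = np.exp(z)
    return e / e.sum()  # the class probabilities sum to one
```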
It is well known that the performance of DL algorithms depends on the number of training samples. However, only a small number of labeled samples are often available in HSIs. To this end, we adopt two data augmentation methods: flipping and rotating operators. Specifically, we rotate the HSI patches by 90, 180, and 270 degrees anticlockwise and flip them horizontally and vertically. Furthermore, we rotate the horizontally and vertically flipped patches by 90 degrees separately. Figure 4 shows some examples of the flipping and rotating operators (a code sketch follows Figure 4). As a result, the number of training samples is increased by a factor of eight. In addition to data augmentation, dropout [47] is also used to improve the performance of Bi-CLSTM. We set some outputs of neurons to zero, which means that these neurons do not propagate any information forward or participate in the back-propagation learning algorithm. Every time an input is sampled, the network drops neurons randomly to form a different structure. In the next section, we will validate the effectiveness of the data augmentation and dropout methods.
Figure 4.
The example of data augmentation. (a) the original image; (b–d) the images after rotation of 90, 180, and 270 degrees anticlockwise; (e) vertical flip of (c); (f) horizontal flip of (d); (g–h) the horizontally and vertically flipped images of (c,d).
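Under one reading of the augmentation described above, the eight variants per patch can be generated as follows; patch is an s × s × l NumPy array, and np.rot90 rotates anticlockwise in the spatial plane.

```python
import numpy as np

def augment(patch):
    """Return eight variants of a patch: the original, three rotations,
    two flips, and 90-degree rotations of the two flipped patches."""
    r90, r180, r270 = (np.rot90(patch, k) for k in (1, 2, 3))
    flip_h, flip_v = np.fliplr(patch), np.flipud(patch)
    return [patch, r90, r180, r270, flip_h, flip_v,
            np.rot90(flip_h), np.rot90(flip_v)]
```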
Algorithm 1: Algorithm for the Bi-CLSTM model.
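Since the original listing is reproduced as a figure, the following is only a hedged reconstruction of its overall flow, reusing the clstm_step and softmax_probs sketches above. Pooling between the CLSTM layers (Table 4) and all training logic are omitted, and every name is illustrative.

```python
import numpy as np

def bi_clstm_forward(sequence, Wf, bf, Wb, bb, W_sm, b_sm, n_filters=32):
    """Sketch of Algorithm 1: a forward and a backward CLSTM sweep over
    the band sequence, concatenation into the feature vector G, and
    Softmax classification as in Equation (4)."""
    H, W, _ = sequence[0].shape               # each band patch is (H, W, 1)
    h_f = c_f = np.zeros((H, W, n_filters))   # zero initial states
    h_b = c_b = np.zeros((H, W, n_filters))
    fwd, bwd = [], []
    for x in sequence:                        # forward sweep over the bands
        h_f, c_f = clstm_step(x, h_f, c_f, Wf, bf)
        fwd.append(h_f)
    for x in reversed(sequence):              # backward sweep over the bands
        h_b, c_b = clstm_step(x, h_b, c_b, Wb, bb)
        bwd.append(h_b)
    G = np.concatenate([np.stack(fwd).ravel(), np.stack(bwd).ravel()])
    return softmax_probs(G, W_sm, b_sm)       # per-class posteriors
```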
4. Experimental Results
4.1. Datasets
We test the proposed Bi-CLSTM model on three HSIs, which are widely used to evaluate classification algorithms.
- Indian Pines: The first dataset was acquired by the AVIRIS sensor over the Indian Pines test site in northwestern Indiana, USA, on 12 June 1992, and it contains 224 spectral bands. We utilize 200 bands after removing four bands containing zero values and 20 noisy bands affected by water absorption. The spatial size of the image is 145 × 145 pixels, and the spatial resolution is 20 m. The false-color composite image and the ground truth map are shown in Figure 5. The number of available labeled samples is 10,249, ranging from 20 to 2455 per class.
Figure 5. Indian Pines scene dataset. (a) false-color composite of the Indian Pines scene; (b) ground truth map containing 16 mutually exclusive land cover classes. - Pavia University: The second dataset was acquired by the reflective optics system imaging spectrometer (ROSIS) sensor during a flight campaign over Pavia, northern Italy, on 8 July 2002. The original image was recorded with 115 spectral channels ranging from 0.43 μm to 0.86 μm. After removing noisy bands, 103 bands are used. The image size is 610 × 340 pixels with a spatial resolution of 1.3 m. A three-band false-color composite image and the ground truth map are shown in Figure 6. In the ground truth map, there are nine different classes of land covers, each with more than 1000 labeled pixels.
Figure 6. Pavia University scene dataset. (a) false-color composite of the Pavia University scene; (b) ground truth map containing nine mutually exclusive land cover classes. - Kennedy Space Center (KSC): The third dataset was acquired by the AVIRIS sensor over the Kennedy Space Center (KSC), Florida, on 23 March 1996. It contains 224 spectral bands. We utilize 176 of them after removing bands with water absorption and low signal-to-noise ratio. The spatial size of the image is 512 × 614 pixels, and the spatial resolution is 18 m. Discriminating different land covers in this dataset is difficult due to the similarity of spectral signatures among certain vegetation types. For classification purposes, thirteen classes representing the various land-cover types that occur in this environment are defined. Figure 7 demonstrates a false-color composite image and the ground truth map.
Figure 7. KSC dataset. (a) false-color composite of the KSC. (b) ground truth map containing 13 mutually exclusive land cover classes.
For the Indian Pines and KSC datasets, we randomly select a fixed number of pixels from each class as the training set and use the remaining pixels as the testing set. Following the experiments in [3,49], we randomly choose 3921 pixels as the training set and the rest of the pixels as the testing set for the Pavia University dataset. The detailed numbers of training and testing samples are listed in Table 1, Table 2 and Table 3.
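A per-class random split of this kind might be sketched as follows; n_train stands in for the per-class training count, which the text leaves symbolic.

```python
import numpy as np

def split_per_class(labels, n_train, rng):
    """Randomly draw n_train labeled pixels per class for training; all
    remaining labeled pixels form the test set (0 marks the background)."""
    train_idx, test_idx = [], []
    for c in np.unique(labels[labels > 0]):
        idx = rng.permutation(np.flatnonzero(labels == c))  # flat pixel indices
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)

rng = np.random.default_rng(0)
gt = rng.integers(0, 17, size=(145, 145))  # stand-in ground-truth map
train_idx, test_idx = split_per_class(gt, n_train=100, rng=rng)  # 100 is hypothetical
```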
Table 1.
Number of pixels for training/testing and the total number of pixels for each class in the Indian Pines ground truth map.
Table 2.
Number of pixels for training/testing and the total number of pixels for each class in the Pavia University ground truth map.
Table 3.
Number of pixels for training/testing and the total number of pixels for each class in the KSC ground truth map.
4.2. Experimental Setup
We compare the proposed Bi-CLSTM model with several FE methods, including regularized local discriminant embedding (RLDE) [50], matrix-based discriminant analysis (MDA) [3], 2D-CNN, 3D-CNN, LSTM [49], and CNN+LSTM. We train the DL models on a single TITAN X GPU and implement them in TensorFlow. Additionally, we also directly use the original pixels as a benchmark. The optimal reduced dimension for RLDE is chosen from a set of candidate values, and the optimal window size for MDA is selected from a given candidate set. For 2D-CNN and 3D-CNN, we take the same configuration as described in [34]. For LSTM, we build a single recurrent layer with 128 hidden nodes. For CNN+LSTM, we apply CNN to extract spatial features from each band and then employ LSTM to fuse them; the configuration of CNN is the same as that in [34], and the number of hidden nodes in LSTM is 128. For Bi-CLSTM, we build a bidirectional network with two CLSTM layers to extract features. Similar to CNN, the convolution operations in Bi-CLSTM are followed by max-pooling, and we empirically set the convolution kernel size and fix the number of convolution kernels to 32. Without loss of generality, we initialize the states of CLSTM to zeros. The detailed configuration of Bi-CLSTM is listed in Table 4. The dimension of each layer in Bi-CLSTM is detailed in Table 5, where $l$ and $C$ indicate the numbers of spectral bands and classes, respectively, and F-CLSTM and B-CLSTM indicate the forward and backward CLSTM, respectively. When training Bi-CLSTM, we set the loss function to cross entropy and optimize it by the Adam algorithm [48] with a fixed learning rate.
Table 4.
Detailed configuration of Bi-CLSTM.
Table 5.
The dimension of each layer in Bi-CLSTM.
In order to reduce the effects of random selection, all the algorithms are repeated five times and the average results are reported. The classification performance is evaluated by the overall accuracy (OA), the average accuracy (AA), the per-class accuracy, and the Kappa coefficient $\kappa$. OA is the ratio of the number of correctly classified pixels to the total number of pixels in the testing set, AA is the average of the per-class accuracies, and $\kappa$ is the percentage of agreement corrected by the amount of agreement that would be expected purely by chance. Clearly, larger values of the three metrics correspond to better performance.
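All three metrics follow directly from the confusion matrix; a compact sketch with our own variable names is given below.

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """OA, AA, and the Kappa coefficient computed from a confusion matrix
    whose rows are true classes and columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    oa = np.trace(cm) / cm.sum()                            # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))              # average accuracy
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / cm.sum() ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```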
4.3. Parameter Selection
There are four important factors influencing Bi-CLSTM: dropout, data augmentation, the network framework, and the size of the input image patches. Firstly, to find the optimal size of the image patches, we fix the other three factors and select the size from four candidate values. Table 6 demonstrates the effects of different sizes on the OA of the KSC dataset. From this table, we can observe that OA increases as the patch size increases, and a moderate size already achieves a sufficiently high accuracy. Since a larger size dramatically increases the computation time while the accuracy improvement is limited, we choose such a moderate size as the optimal one.
Table 6.
OA of Bi-CLSTM with different sizes of input image patches on the KSC dataset.
Secondly, to investigate the performance of the bidirectional network structure, we fix the other influence factors and compare the forward CLSTM (F-CLSTM) with Bi-CLSTM on the KSC dataset. Here, F-CLSTM is a forward network with the same configuration as the Bi-CLSTM listed in Table 4. As shown in Table 7, the bidirectional network indeed outperforms the ordinary forward network, which certifies the effectiveness of Bi-CLSTM as compared to the forward CLSTM. Finally, we also validate the effectiveness of the dropout and data augmentation operators. We set the dropout probability to the common value of 0.6 and fix the other influence factors. Table 8 reports the OA values with and without the dropout operator on the KSC dataset. It can be observed that using dropout significantly improves the accuracy from 94.41% to 99.13%. Similarly, we expand the number of training samples by eight times, as described in Section 3, and fix the other influence factors. Table 8 demonstrates that data augmentation improves the accuracy from 95.07% to 99.13% (a code sketch of the dropout operator follows Table 8).
Table 7.
OA of F-CLSTM and Bi-CLSTM on the KSC dataset.
Table 8.
OA of Bi-CLSTM on the KSC dataset with and without dropout and data augmentation.
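For reference, the dropout operator evaluated in Table 8 amounts to randomly zeroing activations during training. The inverted-dropout rescaling in this sketch is a common implementation choice rather than a detail stated in the paper; with a dropout probability of 0.6, the keep probability is 0.4.

```python
import numpy as np

def dropout(h, p_keep, rng, training=True):
    """Inverted dropout: randomly zero activations during training and
    rescale the survivors so the expected activation is unchanged."""
    if not training:
        return h                      # identity at test time
    mask = rng.random(h.shape) < p_keep
    return h * mask / p_keep
```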
4.4. Performance Comparison
To demonstrate the superiority of the proposed Bi-CLSTM model, we quantitatively and qualitatively compare it with the aforementioned methods. Table 9 reports the quantitative results acquired by the eight methods on the Indian Pines dataset. From these results, we can observe that most of the DL methods perform better than the traditional methods. 2D-CNN only uses the principal component of all spectral bands, leading to the loss of spectral information; therefore, its performance is inferior to that of MDA. LSTM takes the hyperspectral pixel vector as input without considering spatial-domain information, achieving the worst performance among all methods. Different from 2D-CNN and LSTM, CNN+LSTM feeds the spatial features from each band into the LSTM model to capture the spectral information, obtaining better performance than MDA. This is because, as a neural network, CNN+LSTM is able to capture the nonlinear distribution of hyperspectral data, while the linear FE method MDA may fail to. Nevertheless, its spectral FE and spatial FE processes are independent, so the trained parameters in CNN+LSTM may not be optimal. 3D-CNN and Bi-CLSTM address this issue by extracting spectral and spatial features simultaneously, and achieve higher OA, AA, and $\kappa$ than CNN+LSTM. 3D-CNN takes a specific number of spectral bands as the network input every time; therefore, it cannot learn the relationships between non-adjacent spectral bands. Via recurrent connections, Bi-CLSTM can model the correlations across all the spectral bands. Thus, compared to 3D-CNN, Bi-CLSTM improves OA from 95.30% to 96.78%. Figure 8 demonstrates the classification maps achieved by the eight methods on the Indian Pines dataset. It can be observed that Bi-CLSTM obtains more homogeneous maps than the other methods.
Table 9.
OA, AA, per-class accuracy (%), and standard deviations after five runs performed by eight methods on the Indian Pines dataset using the per-class training pixels listed in Table 1.
Figure 8.
Classification maps using eight different methods on the Indian Pines dataset: (a) original; (b) RLDE; (c) MDA; (d) 2D-CNN; (e) 3D-CNN; (f) LSTM; (g) CNN+LSTM; (h) Bi-CLSTM.
Similar results are shown in Table 10 and Figure 9 for the Pavia University scene dataset. Again, 3D-CNN, CNN+LSTM, and Bi-CLSTM achieve better performance than the other methods. Specifically, the OA, AA, and $\kappa$ obtained by 3D-CNN and CNN+LSTM are higher than those of MDA, and Bi-CLSTM obtains better performance than 3D-CNN and CNN+LSTM. It is worth noting that the improvement in OA, AA, and $\kappa$ from MDA to Bi-CLSTM is not as remarkable as that on the Indian Pines dataset, because MDA has already obtained a high performance and further improvement is very difficult. Table 11 and Figure 10 show the classification results of the different methods on the KSC dataset. As on the other two datasets, Bi-CLSTM achieves the highest OA, AA, and $\kappa$ among all methods.
Table 10.
OA, AA, per-class accuracy (%), and standard deviations after five runs performed by eight methods on the Pavia University Scene dataset using 3921 pixels as the training set.
Figure 9.
Classification maps using eight different methods on the Pavia University dataset: (a) original; (b) RLDE; (c) MDA; (d) 2D-CNN; (e) 3D-CNN; (f) LSTM; (g) CNN+LSTM; (h) Bi-CLSTM.
Table 11.
OA, AA, per-class accuracy (%), and standard deviations after five runs performed by eight methods on the KSC dataset using the per-class training pixels listed in Table 3.
Figure 10.
Classification maps using eight different methods on the KSC dataset: (a) original; (b) RLDE; (c) MDA; (d) 2D-CNN; (e) 3D-CNN; (f) LSTM; (g) CNN+LSTM; (h) Bi-CLSTM.
To test the computational efficiency of the different deep learning methods, we train and test them on a personal computer with an Intel Core i7-4790 CPU and a GTX TITAN X GPU, using the TensorFlow framework. As shown in Table 12, 3D-CNN and Bi-CLSTM cost more training and testing time than 2D-CNN, LSTM, and CNN+LSTM because their inputs are sub-cubes while the others' are vectors or matrices. In addition, compared to 3D-CNN, training and testing Bi-CLSTM are faster. This is because the convolutional kernel sizes in each direction of Bi-CLSTM are smaller than those of 3D-CNN, and the Bi-CLSTM network is shallower.
Table 12.
Computation time (min.) of five deep learning methods on three datasets.
5. Conclusions
In this paper, we propose a novel bidirectional-convolutional long short-term memory (Bi-CLSTM) network to automatically learn the spectral-spatial features of hyperspectral images (HSIs). The network takes all the spectral channels of an HSI as input, and a bidirectional recurrent connection operator across them is used to sufficiently explore the spectral information. In addition, motivated by the widely used convolutional neural network (CNN), the fully-connected operators in the network are replaced by convolution operators across the spatial domain to capture the spatial information. Conducting experiments on three HSIs collected by different instruments (AVIRIS and ROSIS), we compare the proposed method with several feature extraction methods, including the deep learning algorithms 2D-CNN, 3D-CNN, LSTM, and CNN+LSTM. The experimental results indicate that using spatial information improves the classification performance and yields more homogeneous regions in the classification maps compared to using spectral information alone. In addition, the proposed method improves the OA, AA, and $\kappa$ on the three HSIs as compared to the other methods. We also evaluate the influence of different components of the network, including dropout, data augmentation, and the patch size.
Acknowledgments
This work was supported in part by the Natural Science Foundation of China under Grant Numbers 61532009 and 61522308, and in part by the Natural Science Foundation of Jiangsu Province, China, under Grant 15KJA520001.
Author Contributions
Qingshan Liu proposed the algorithm. Renlong Hang and Feng Zhou performed the experiment. Xiaotong Yuan and Renlong Hang supervised the study, analyzed the results and gave insightful suggestions for the manuscript. Renlong Hang and Feng Zhou drafted the manuscript. All coauthors contributed to the revision of the manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Zhang, S.; Li, S.; Fu, W.; Fang, L. Multiscale Superpixel-Based Sparse Representation for Hyperspectral Image Classification. Remote Sens. 2017, 9, 139. [Google Scholar] [CrossRef]
- Zhao, W.; Du, S. Spectral-Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
- Hang, R.; Liu, Q.; Song, H.; Sun, Y. Matrix-Based Discriminant Subspace Ensemble for Hyperspectral Image Spatial-Spectral Feature Fusion. IEEE Trans. Geosci. Remote Sens. 2015, 54, 783–794. [Google Scholar] [CrossRef]
- Hughes, G. On the Mean Accuracy of Statistical Pattern Recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Tao, D.; Huang, X. On Combining Multiple Features for Hyperspectral Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2012, 50, 879–893. [Google Scholar] [CrossRef]
- Xu, J.; Hang, R.; Liu, Q. Patch-Based Active Learning PTAL for Spectral-Spatial Classification on Hyperspectral Data. Int. J. Remote Sens. 2014, 35, 1846–1875. [Google Scholar] [CrossRef]
- Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O.; Benediktsson, J.A. Model-Based Fusion of Multi- and Hyperspectral Images Using PCA and Wavelets. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2652–2663. [Google Scholar] [CrossRef]
- Kuo, B.C.; Landgrebe, D.A. Nonparametric Weighted Feature Extraction for Classification. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1096–1105. [Google Scholar]
- Chen, H.T.; Chang, H.W.; Liu, T.L. Local Discriminant Embedding and Its Variants. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 846–853. [Google Scholar]
- Wang, Q.; Meng, Z.; Li, X. Locality Adaptive Discriminant Analysis for Spectral–Spatial Classification of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2077–2081. [Google Scholar] [CrossRef]
- He, X.; Cai, D.; Yan, S.; Zhang, H.J. Neighborhood Preserving Embedding. In Proceedings of the Tenth IEEE International Conference on Computer Vision, Beijing, China, 17–21 October 2005; Volume 2, pp. 1208–1213. [Google Scholar]
- Villa, A.; Benediktsson, J.A.; Chanussot, J.; Jutten, C. Hyperspectral Image Classification with Independent Component Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4865–4876. [Google Scholar] [CrossRef]
- Friedman, J.H. Regularized Discriminant Analysis. J. Am. Stat. Assoc. 1989, 84, 165–175. [Google Scholar] [CrossRef]
- Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images with Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873. [Google Scholar] [CrossRef]
- Sugiyama, M. Dimensionality Reduction of Multimodal Labeled Data by Local Fisher Discriminant Analysis. J. Mach. Learn. Res. 2007, 8, 1027–1061. [Google Scholar]
- Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J. Advances in Spectral-Spatial Classification of Hyperspectral Images. Proc. IEEE 2013, 101, 652–675. [Google Scholar] [CrossRef]
- Sun, L.; Wu, Z.; Liu, J.; Xiao, L. Supervised Spectral-Spatial Hyperspectral Image Classification with Weighted Markov Random Fields. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1490–1503. [Google Scholar] [CrossRef]
- Liu, J.; Wu, Z.; Wei, Z.; Xiao, L.; Sun, L. Spatial-Spectral Kernel Sparse Representation for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2462–2471. [Google Scholar] [CrossRef]
- Fauvel, M.; Benediktsson, J.A.; Chanussot, J.; Sveinsson, J.R. Spectral and Spatial Classification of Hyperspectral Data Using SVMs and Morphological Profiles. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3804–3814. [Google Scholar] [CrossRef]
- Mura, M.D.; Villa, A.; Benediktsson, J.A.; Chanussot, J. Classification of Hyperspectral Images by Using Extended Morphological Attribute Profiles and Independent Component Analysis. IEEE Geosci. Remote Sens. Lett. 2011, 8, 542–546. [Google Scholar] [CrossRef]
- Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of Hyperspectral Data from Urban Areas Based on Extended Morphological Profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491. [Google Scholar] [CrossRef]
- Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J. Spectral-Spatial Classification of Hyperspectral Imagery Based on Partitional Clustering Techniques. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2973–2987. [Google Scholar] [CrossRef]
- Jimenez, L.O.; Rivera-Medina, J.L.; Rodriguez-Diaz, E.; Arzuaga-Cruz, E. Integration of Spatial and Spectral Information by Means of Unsupervised Extraction and Classification for Homogenous Objects Applied to Multispectral and Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 844–851. [Google Scholar] [CrossRef]
- Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Multiple Spectral-Spatial Classification Approach for Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4122–4132. [Google Scholar] [CrossRef]
- Jia, X.; Richards, J.A. Managing the Spectral-Spatial Mix in Context Classification Using Markov Random Fields. IEEE Geosci. Remote Sens. Lett. 2008, 5, 311–314. [Google Scholar] [CrossRef]
- Jackson, Q.; Landgrebe, D.A. Adaptive Bayesian Contextual Classification Based on Markov Random Fields. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2454–2463. [Google Scholar] [CrossRef]
- Wu, H.; Prasad, S. Convolutional Recurrent Neural Networks for Hyperspectral Data Classification. Remote Sens. 2017, 9, 298. [Google Scholar] [CrossRef]
- Liang, H.; Li, Q. Hyperspectral Imagery Classification Using Sparse Representations of Convolutional Neural Network Features. Remote Sens. 2016, 8, 99. [Google Scholar] [CrossRef]
- He, Z.; Liu, H.; Wang, Y.; Hu, J. Generative Adversarial Networks-Based Semi-Supervised Learning for Hyperspectral Image Classification. Remote Sens. 2017, 9, 1042. [Google Scholar] [CrossRef]
- Ding, C.; Li, Y.; Xia, Y.; Wei, W.; Zhang, L.; Zhang, Y. Convolutional Neural Networks Based Hyperspectral Image Classification Method with Adaptive Kernels. Remote Sens. 2017, 9, 618. [Google Scholar] [CrossRef]
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
- Tao, C.; Pan, H.; Li, Y.; Zou, Z. Unsupervised Spectral-Spatial Feature Learning with Stacked Sparse Autoencoder for Hyperspectral Imagery Classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2438–2442. [Google Scholar]
- Chen, Y.; Zhao, X.; Jia, X. Spectral-Spatial Classification of Hyperspectral Data Based on Deep Belief Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392. [Google Scholar] [CrossRef]
- Chen, Y.; Jiang, H.; Li, C.; Jia, X. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735. [Google Scholar] [CrossRef] [PubMed]
- Xingjian, S.; Chen, Z.; Wang, H.; Yeung, D.Y.; Wong, W.K.; Woo, W.C. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems; 2015; pp. 802–810. Available online: papers.nips.cc/paper/5955-convolutional-lstm-network-a-machine-learning-approach-for-precipitation-nowcasting.pdf (accessed on 15 December 2017).
- Williams, R.; Zipser, D. A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Neural Comput. 1989, 1, 270–280. [Google Scholar] [CrossRef]
- Rodriguez, P.; Wiles, J.; Elman, J.L. A Recurrent Neural Network That Learns to Count. Connect. Sci. 1999, 11, 5–40. [Google Scholar] [CrossRef]
- Cho, K.; Merrienboer, B.V.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation. arXiv. 2014. Available online: https://arxiv.org/pdf/1406.1078 (accessed on 15 December 2017).
- Ranzato, M.; Szlam, A.; Bruna, J.; Mathieu, M.; Collobert, R.; Chopra, S. Video (Language) Modeling: A Baseline for Generative Models of Natural Videos. arXiv. 2014. Available online: https://arxiv.org/pdf/1412.6604 (accessed on 15 December 2017).
- Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J. Gradient Flow in Recurrent Nets: The Difficulty of Learning Long-Term Dependencies. In A Field Guide to Dynamical Recurrent Neural Networks; IEEE Press, 2001; Available online: www.bioinf.jku.at/publications/older/ch7.pdf (accessed on 15 December 2017).
- Dumoulin, V.; Visin, F. A Guide to Convolution Arithmetic for Deep Learning. arXiv. 2016. Available online: https://arxiv.org/pdf/1603.07285 (accessed on 15 December 2017).
- Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems; 2014; pp. 3104–3112. Available online: papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf (accessed on 15 December 2017).
- Mikolov, T.; Karafiát, M.; Burget, L.; Cernocky, J.; Khudanpur, S. Recurrent neural network based language model. In Proceedings of the Conference of the International Speech Communication Association (INTERSPEECH 2010), Chiba, Japan, 26–30 September 2010; pp. 1045–1048. [Google Scholar]
- Graves, A.; Fernández, S.; Schmidhuber, J. Bidirectional LSTM networks for improved phoneme classification and recognition. In Proceedings of the Artificial Neural Networks: Formal Models and Their Applications (ICANN 2005), Warsaw, Poland, 11–15 September 2005; p. 753. [Google Scholar]
- Schuster, M.; Paliwal, K.K. Bidirectional Recurrent Neural Networks. IEEE Trans. Signal Process. 1997, 45, 2673–2681. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv. 2014. Available online: https://arxiv.org/pdf/1412.6980 (accessed on 15 December 2017).
- Mou, L.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef]
- Zhou, Y.; Peng, J.; Chen, C.L.P. Dimension Reduction Using Spatial and Spectral Regularized Local Discriminant Embedding for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1082–1095. [Google Scholar] [CrossRef]
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
