Article

Dynamic Wide and Deep Neural Network for Hyperspectral Image Classification

1 College of Geological Engineering and Geomatics, Chang’an University, Xi’an 710054, China
2 Key Laboratory of Western China’s Mineral Resources and Geological Engineering, Ministry of Education, Xi’an 710054, China
3 School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
4 Department of Mathematics and Information Science, College of Science, Chang’an University, Xi’an 710064, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2021, 13(13), 2575; https://doi.org/10.3390/rs13132575
Submission received: 8 June 2021 / Revised: 22 June 2021 / Accepted: 28 June 2021 / Published: 1 July 2021

Abstract

Recently, deep learning has been successfully and widely used in hyperspectral image (HSI) classification. Considering the difficulty of acquiring HSIs, only a small number of pixels are usually available as training instances. It is therefore hard to fully exploit the advantages of deep learning networks; for example, very deep architectures with a large number of parameters lead to overfitting. This paper proposes a dynamic wide and deep neural network (DWDNN) for HSI classification, which includes multiple efficient wide sliding window and subsampling (EWSWS) networks and can grow dynamically according to the complexity of the problem. The EWSWS network in the DWDNN is designed in both the wide and deep directions with transform kernels as hidden units. These multiple layers of kernels extract features from the low to the high level, and because they are extended in the wide direction, they learn features more steadily and smoothly. Sliding windows with strides and subsampling reduce the feature dimension at each layer, thereby reducing the computational load. Finally, the only trainable weights are those of the fully connected layer, and they are computed easily with the iterative least squares method. The proposed DWDNN was tested on several HSI datasets, including the Botswana, Pavia University, and Salinas remote sensing datasets, with different numbers of instances (from small to large). The experimental results show that the proposed method achieved the highest test accuracies compared to both typical machine learning methods, such as the support vector machine (SVM), multilayer perceptron (MLP), and radial basis function (RBF) network, and recently proposed deep learning methods, including the 2D convolutional neural network (CNN) and the 3D CNN designed for HSI classification.

Graphical Abstract

1. Introduction

Hyperspectral images (HSIs) can be viewed as 3D data blocks that contain abundant 2D spatial information and hundreds of spectral bands. They can therefore be used to recognize different materials at the pixel level and have been used widely in applications such as natural resource monitoring, geological mapping, and object detection [1]. Many machine learning methods have been proposed for HSI classification in the past few decades, mainly supervised methods such as multinomial logistic regression (MLR) [2], the support vector machine (SVM) [3], decision trees [4], the multilayer perceptron (MLP) [5], random forest (RF) [6], and sparse representation classifiers [7].
Recently, deep learning methods have been introduced for HSI classification to combine both spatial and spectral features [8,9,10,11,12,13,14]. Considering the size of the pixel patches, these models are usually not very deep. A five-layer CNN [9] was proposed that combined batch normalization, dropout, and the parametric rectified linear unit (PReLU) activation function. Local contextual features were also learned by a contextual CNN [8] and a diverse region-based CNN [13] to improve performance. Different features, such as multiscale features [11], multiple morphological profiles [10], and diversified metrics [15], were learned by different CNNs. The CNN was also extended to three dimensions, as in the 3D CNN [16], mixed CNN [17], and hybrid spectral CNN (HybridSN) [18]. As new techniques such as the fully convolutional network (FCN), attention mechanisms, active learning, and transfer learning have been applied successfully to computer vision problems, they have also been adopted for HSI classification. Examples include the FCN with an efficient nonlocal module (ENL-FCN), active learning methods [19,20], 3D octave convolution with the spatial–spectral attention network (3DOC-SSAN) [21], a CNN with transfer learning that uses an unsupervised pre-training step [22], the superpixel pooling convolutional neural network with transfer learning [23], and the lightweight spectral–spatial attention network [24]. Researchers have also tried to learn features more efficiently and robustly using a proxy-based deep learning framework [25].
Beyond CNNs, other architectures have also been applied to HSI classification, such as the deep recurrent neural network [26] and spectral–spatial attention networks based on the RNN and CNN, which learn spectral correlations within a continuous spectrum and the relevance of neighboring pixels. The deep support vector machine (DSVM) [27] extended the SVM in the deep direction. Harmonic networks [28] were proposed using circular harmonics instead of CNN kernels; these were extended as naive Gabor networks [29] for HSI classification, which reduce the number of learned parameters. A cascaded dual-scale crossover network [30] was proposed to extract more features without extending the architecture in the deep direction. Recently, a recurrent feedback convolutional neural network [31] was proposed to overcome overfitting, and a generative adversarial minority oversampling method [32] was proposed to deal with imbalanced HSI data.
A single HSI dataset usually cannot supply a large number of training samples, which is a drawback for deep learning models with many layers and hyperparameters. Incremental learning learns new knowledge without forgetting what has already been learned, thereby overcoming catastrophic forgetting [33]. It also allows the learning model to grow according to the complexity of the task, which is useful for HSI classification with limited training samples. Researchers have proposed different incremental methods, such as elastic weight consolidation (EWC) [34], which remembers old knowledge by selectively slowing the updates of important weights, and incremental moment matching (IMM) [35], which incrementally matches the moments of the posterior distributions of the parameters of Bayesian neural networks. A similar idea is scalable learning, which mainly includes multistage and tree-like variants. The parallel, self-organizing, hierarchical neural networks (PSHNNs) [36] and parallel consensual neural networks (PCNNs) [37] combine multiple stages of neural networks with an instance rejection mechanism or statistical consensus theory; the final output is the consensus among all stages. Scalable-effort classifiers [38,39] were proposed as multiple stages of classifiers whose number can grow with increasing architectural complexity. A Tree-CNN was proposed [40] in which deep learning models are organized hierarchically to learn incrementally. Conditional deep learning (CDL) [41] activates deeper convolutional layers only for inputs that are hard to classify. Stochastic configuration networks (SCNs) [42] grow their hidden units incrementally using a stochastic configuration. Beyond the learning methods themselves, researchers have also studied learning security, such as adversarial examples [43] and backdoor attacks on multiple learning models [44].
In addition to the deep direction, a learning model can be extended in the wide [45,46] or both the wide and deep [47] directions. It has been shown that the training process of wide fully connected neural networks can be described by the evolution of a Gaussian process, and wide neural networks usually generalize better [45,46]. Recently, researchers have also combined HSI with LiDAR data for land cover classification, for example, with discriminant correlation analysis [48] and with inverse coefficient of variation features and a multilevel fusion method [49].
In this paper, we propose a dynamic wide and deep neural network (DWDNN) for hyperspectral image classification, which combines wide and deep learning and generates a model with the proper architectural complexity for different data and tasks. It is based on multiple dynamically organized efficient wide sliding window and subsampling (EWSWS) networks. Each EWSWS network has multiple layers of transform kernels with sliding windows and strides, which can be extended in the wide direction to learn both spatial and spectral features sufficiently. The parameters of these transform kernels can be obtained from randomly chosen training samples, and the number of transform-kernel outputs can be reduced. With multiple EWSWS layers combined in the deep direction, spatial and spectral features from the lower to the higher level can be learned efficiently with a proper configuration of the hyperparameters. The EWSWS networks are generated one-by-one dynamically. In this way, the DWDNN learns features from HSI data more smoothly and efficiently and overcomes overfitting. The weights of the DWDNN are mainly in the fully connected layer and can be learned easily using iterative least squares. The contributions of the proposed DWDNN are as follows:
  • Extracting spatial and spectral features from the low level to the high level efficiently by the EWSWS network with a proper architectural complexity;
  • Training the DWDNN easily, because the parameters of the transform kernels in EWSWS networks can be obtained directly by randomly choosing training samples or with the unsupervised learning method. The only weights are those in the fully connected layers, which can be computed with iterative least squares;
  • Generating learning models with the proper architectural complexity according to the characteristics of the HSI data. Therefore, learning can be more efficient and smooth.
The rest of the paper is organized as follows: Section 2 presents the detailed description of the proposed DWDNN. Section 3 presents the datasets and the experimental settings. Section 4 gives the classification results for the HSI data. Section 5 and Section 6 provide the discussions and conclusions.

2. Dynamic Wide and Deep Neural Network for Hyperspectral Image Classification

The DWDNN mainly includes the following: (1) hyperspectral image preprocessing; (2) an efficient wide sliding window and subsampling (EWSWS) network; and (3) the dynamic growth of the wide and deep neural network. The architecture of the DWDNN is shown in Figure 1.

2.1. Hyperspectral Image Preprocessing and Instances’ Preparation

Suppose the HSI data are $X \in \mathbb{R}^{W \times H \times B}$, where W, H, and B are the width, height, and number of bands of the HSI data. Principal component analysis (PCA) is first performed to reduce the redundant spectral information, and the data are normalized. Suppose the number of classes is C. After PCA, image patches of size $P_W \times P_H \times B_1$ are generated and split into training, validation, and testing instances with the corresponding ratios. Each image patch is flattened into a vector, and a sliding window with a given stride generates the input subvectors. The strides can be adjusted over different ranges: large strides reduce the number of window positions, and hence the number of kernel units generated for the EWSWS network, saving computational load, and the strides can also be tuned flexibly to extract features at different levels. The HSI preprocessing process is shown in Figure 2.
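For concreteness, the following sketch illustrates this preprocessing pipeline under stated assumptions (scikit-learn's PCA, min–max normalization, and reflect padding at the image borders; the function and its names are illustrative, not the authors' code):

```python
import numpy as np
from sklearn.decomposition import PCA

def preprocess_hsi(cube, n_components=15, patch=9):
    """cube: (W, H, B) HSI block -> (W*H, patch*patch*n_components) patch vectors."""
    W, H, B = cube.shape
    flat = cube.reshape(-1, B).astype(np.float64)
    reduced = PCA(n_components=n_components).fit_transform(flat)
    # Min-max normalization per principal-component band (one possible choice).
    reduced = (reduced - reduced.min(axis=0)) / (np.ptp(reduced, axis=0) + 1e-12)
    img = reduced.reshape(W, H, n_components)
    r = patch // 2
    padded = np.pad(img, ((r, r), (r, r), (0, 0)), mode="reflect")
    patches = np.stack([
        padded[i:i + patch, j:j + patch, :].ravel()  # flatten each patch to a vector
        for i in range(W) for j in range(H)
    ])
    return patches
```

The sliding windows with strides are then taken over these flattened patch vectors, as described in Section 2.2.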

2.2. Efficient Wide Sliding Window and Subsampling Network

For HSI classification, one key point is how to use the large number of bands efficiently, even though PCA is usually used to reduce the redundant information. CNN-based deep learning methods usually combine spatial and spectral features to obtain higher HSI classification performance. However, the limited training samples make it hard to use very deep architectures, which have a large number of convolutional kernels to be learned. Recently, researchers have used different kernels to enhance HSI classification performance and reduce the number of learned kernels, such as circular harmonics [28], naive Gabor filters [29], and the recently proposed WSWS network with Gaussian kernels [50]. As discussed in Section 2.1, in this paper, strides were combined with the WSWS layers to make them efficient when learning features at different levels, yielding the EWSWS layers, which learn these bands of images more efficiently. The architecture of the EWSWS layer is shown in Figure 3.
For HSI classification, the $N_p$ 3D hyperspectral image patches after PCA are the input to the EWSWS network. Suppose the size of the one-dimensional sliding window is w, the length of the sliding window. The stride s is used to reduce the redundant number of sliding positions according to the characteristics of the input data. For the $n$-th sliding position, the 1D vectors are $p_n \in \mathbb{R}^{w B_{PCA}}$. They are fed into the EWSWS network composed of layers of transform kernels denoted as $\{g_{n1}, g_{n2}, \ldots, g_{nM_n}\}$, where $M_n$ denotes the number of transform kernels for the $n$-th sliding position. The outputs of the transform kernels for the $n$-th sliding window are denoted as $G_n = [g_{n1}(p_n), g_{n2}(p_n), \ldots, g_{nM_n}(p_n)]$. In general, suppose there is more than one channel. Then, summation along the channel direction is performed for each set of transform kernels:
$$G_{\mathrm{sum}\_n} = \mathrm{sum}(G_n) = [\mathrm{sum}(g_{n1}(p_n)), \mathrm{sum}(g_{n2}(p_n)), \ldots, \mathrm{sum}(g_{nM_n}(p_n))]$$
Then, sorting is performed along the sample dimension, and in order to reduce the number of the outputs of the transform kernels, subsampling is performed with the sorted output. These two operations are denoted by:
$$G_{\mathrm{sort}\_n} = \mathop{\mathrm{sort}}_{i=1}^{N_p} \left( G_{\mathrm{sum}\_n} \right)$$
$$G_{S\_n} = \mathrm{subsampling}\left( G_{\mathrm{sort}\_n}, N_n^S \right)$$
where $N_n^S$ is the subsampling interval.
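As a minimal sketch of one sliding-window position, the code below assumes Gaussian transform kernels with centers drawn from training vectors (the paper allows generic transform kernels) and the single-channel case, with the sort taken along the sample dimension as in the equations above; all names are illustrative:

```python
import numpy as np

def ewsws_window(P_win, centers, sigma, sub_interval):
    """P_win: (N_p, w) slices of the N_p input vectors under one window position.
    centers: (M_n, w) kernel centers drawn from training vectors (assumption)."""
    d2 = ((P_win[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    G = np.exp(-d2 / (2.0 * sigma ** 2))   # transform-kernel outputs, (N_p, M_n)
    G_sorted = np.sort(G, axis=0)          # sort along the sample dimension
    return G_sorted[::sub_interval, :]     # subsample with interval N_n^S
```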
For each sliding window, a set of transformed outputs is generated in the wide direction, and finally, they can be combined together as:
$$G_{EWSWS} = \left[ G_{S\_1}, G_{S\_2}, \ldots, G_{S\_N} \right]$$
In order to obtain higher level features, the transform layers are extended in the deep direction with multiple transform kernel layers and different strides. Suppose the outputs of the first layer are $G_{EWSWS}^1 = [G_{S\_1}^1, G_{S\_2}^1, \ldots, G_{S\_{N_1}}^1]$. Then, the outputs of the following layers are denoted as:
$$G_{EWSWS}^l = \left[ G_{S\_1}^l, G_{S\_2}^l, \ldots, G_{S\_{N_l}}^l \right]$$
where $N_l$ is the number of sliding windows in the $l$-th layer of transform kernels. For each layer, the stride is denoted as $s_l$.
Suppose there are L layers. The output of the $L$-th layer of transform kernels is $G_{EWSWS}^L$. The final outputs of the EWSWS network are given by:
$$Y = G_{EWSWS}^L W$$
The least squares estimate $\hat{W}$ is computed by minimizing the squared error:
$$\hat{W} = \arg\min_W \left\| G_{EWSWS}^L W - D \right\|^2$$
where D is the vector of the desired ground truths of the classes.
Finally, the pseudoinverse $(G_{EWSWS}^L)^+$ of $G_{EWSWS}^L$ is used to compute $\hat{W}$ [51]:
$$\hat{W} = (G_{EWSWS}^L)^+ D = \left( (G_{EWSWS}^L)^T G_{EWSWS}^L \right)^+ (G_{EWSWS}^L)^T D$$
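In code, this closed-form solution is a single pseudoinverse call; the sketch below is illustrative (NumPy's `pinv` computes the Moore–Penrose pseudoinverse):

```python
import numpy as np

def solve_output_weights(G, D):
    """G: (N, F) final EWSWS-layer outputs; D: (N, C) one-hot class targets."""
    return np.linalg.pinv(G) @ D  # W_hat = G^+ D

# Prediction on new data: Y = G_test @ W_hat; the class is the argmax over columns.
```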

2.3. Dynamic Wide and Deep Neural Network

When the EWSWS layers are extended in both the wide and deep directions, the learning model can learn high-level features from both the spatial and spectral domains of the HSI data. Unlike the CNN, which uses gradient descent to learn the weights gradually, the least squares method learns all the linear weights at once, which is much faster. On the negative side, without proper hyperparameters the learning model may easily switch between underfitting and overfitting. Another problem is that when there are many training samples and the patches are large, computing the weights requires substantial computational resources.
Considering the above issues, the iterative least squares method [52] was combined with the EWSWS networks to form the DWDNN. Multiple EWSWS networks can be added to the DWDNN one-by-one using iterative least squares to gain learning capacity dynamically. When the training data have been learned sufficiently, the growing process stops automatically. During this process, the training data are split in both the feature and sample domains; therefore, the method can handle HSI data with a high feature dimension and a large number of training samples. Another advantage of the DWDNN is that, while the proper architecture is found dynamically, the learning process is much more stable with well-trained weights in the fully connected layer. The dynamic learning process is shown in Figure 4.

2.3.1. Data Splitting in the Sample Space

In order to learn the information in the training set sufficiently, the training samples are split into a number of batches, as in the BP neural network or the CNN, with a proportion of overlap for each training batch determined by an overlapping factor $\lambda$ $(0 \le \lambda \le 1)$. For HSI classification, the 3D training patches $p_{tr}$ are split into $N_{batch}$ training subsets with the overlapping factor $\lambda$, which can be described as:
$$p_{tr\_1}, p_{tr\_2}, \ldots, p_{tr\_N_{batch}}$$
The whole training set can be learned in $N_{epoch}$ epochs with the DWDNN using the iterative least squares method. After each training epoch, the training set is shuffled, and new training batches are split using the overlapping factor.
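One plausible implementation of this overlapping split is sketched below; the exact overlap scheme (consecutive batches sharing a fraction $\lambda$ of their samples) is our assumption, since only the factor itself is specified:

```python
import numpy as np

def split_batches(n_samples, batch_size, lam, seed=None):
    """Return index batches of size batch_size; consecutive batches overlap
    by a fraction lam (0 <= lam < 1) of the batch size."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)                 # reshuffled each epoch
    step = max(1, int(round(batch_size * (1.0 - lam))))
    batches, s = [], 0
    while s < n_samples:
        batches.append(idx[s:s + batch_size])
        s += step                                    # smaller step -> more overlap
    return batches                                   # N_batch follows from lam
```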

2.3.2. Generating the DWDNN Dynamically

In this section, we describe how the DWDNN is generated dynamically. The process starts with the first EWSWS network; the succeeding EWSWS networks are then added one by one according to the training errors of the previous ones. If the problem scale can be estimated, a certain number of EWSWS networks can be given in advance so that the DWDNN grows quickly to the proper size. Suppose the first EWSWS network is $Net_1$ and the first subset $p_{tr\_1}$ is used to train it. The outputs of its EWSWS layers in the first iteration are $G_{EWSWS_1}$. The learning of the desired outputs, the computed weights, and the outputs are as follows:
$$G_{EWSWS_1} W_1^1 = D$$
$$W_1^1 = G_{EWSWS_1}^+ D$$
$$Y_1^1 = G_{EWSWS_1} G_{EWSWS_1}^+ D$$
The remaining error of $Net_1$ is:
$$e_1^1 = D - Y_1^1 = D - G_{EWSWS_1} G_{EWSWS_1}^+ D$$
The second EWSWS network $Net_2$ is added to reduce the learning error, and $G_{EWSWS_2}$ is the output of its EWSWS layers. The learning of the remaining error, the computed weights, and the outputs are given by:
$$G_{EWSWS_2} W_2^1 = e_1^1$$
$$W_2^1 = G_{EWSWS_2}^+ e_1^1$$
$$Y_2^1 = G_{EWSWS_2} G_{EWSWS_2}^+ e_1^1$$
The remaining error of $Net_2$ is:
$$e_2^1 = e_1^1 - Y_2^1 = e_1^1 - G_{EWSWS_2} G_{EWSWS_2}^+ e_1^1$$
A desired error threshold $\varepsilon$ and the maximum number of EWSWS networks for the DWDNN, denoted $N_{max}$, are given according to the learning task. The succeeding EWSWS networks are generated one-by-one until the remaining error is less than $\varepsilon$ or the number of EWSWS networks reaches $N_{max}$. Suppose $P_1$ EWSWS networks were generated in the first iteration. Then, the second training subset $p_{tr\_2}$ is used to train the current DWDNN. Using $Net_1$, the learning of the remaining error from the first iteration, the computed weights, and the outputs are given by:
$$G_{EWSWS_1} W_1^2 = Y_1^1 + e_{P_1}^1$$
$$W_1^2 = G_{EWSWS_1}^+ \left( Y_1^1 + e_{P_1}^1 \right)$$
$$Y_1^2 = G_{EWSWS_1} G_{EWSWS_1}^+ \left( Y_1^1 + e_{P_1}^1 \right)$$
The remaining error of $Net_1$ for the second iteration is:
$$e_1^2 = Y_1^1 + e_{P_1}^1 - Y_1^2$$
The weights are updated for the succeeding EWSWS networks, and the remaining error of $Net_{P_1}$ for the second iteration is:
$$e_{P_1}^2 = e_{P_1 - 1}^2 - Y_{P_1}^2$$
Then, $P_2$ EWSWS networks are added until the remaining error is less than $\varepsilon$ or the number of EWSWS networks reaches $N_{max}$. The succeeding training subsets are used similarly, and $P_{total} = P_1 + P_2 + \cdots + P_{N_{batch}}$ EWSWS networks are generated and combined as the final DWDNN. If the training set is not very big, a single subset is enough to train the DWDNN iteratively. During the iterations, the validation set can also be used to stop the training early to avoid overfitting.
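The growth loop can be summarized in a short sketch; `make_net` and the `features` method are hypothetical stand-ins for constructing an EWSWS network and running the forward pass of Section 2.2:

```python
import numpy as np

def grow_dwdnn(make_net, X, D, eps, n_max):
    """Add EWSWS networks one-by-one, each fitted to the current residual."""
    nets, residual = [], D.copy()
    while len(nets) < n_max:
        net = make_net()                      # a new EWSWS network
        G = net.features(X)                   # its final transform-kernel outputs
        net.W = np.linalg.pinv(G) @ residual  # least squares fit to the residual
        residual = residual - G @ net.W       # update the remaining error
        nets.append(net)
        if np.linalg.norm(residual) < eps:    # stop when the error is small enough
            break
    return nets

# Prediction: Y = sum(net.features(X_test) @ net.W for net in nets)
```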

2.3.3. Weights’ Resplitting Method

Suppose the outputs of the transform kernels of the DWDNN, which are the outputs of the $P_{total}$ EWSWS networks, are denoted as $G_{DWDNN}$. This can be expressed as:
$$G_{DWDNN} = \left[ G_{EWSWS_1}, G_{EWSWS_2}, \ldots, G_{EWSWS_{P_{total}}} \right]$$
When the feature dimension of the matrix $G_{DWDNN}$ extracted by the $P_{total}$ EWSWS networks is larger than the number of training samples, the feature matrix is split along the feature dimension to learn more stably. The process can be rewritten as:
$$Y = G_{DWDNN} W = \left[ \Psi_1, \Psi_2, \ldots, \Psi_Q \right] W = \Psi_1 W_1 + \Psi_2 W_2 + \cdots + \Psi_Q W_Q$$
Then, $\Psi_1, \Psi_2, \ldots, \Psi_Q$ are used to compute the weights of the DWDNN.
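A plausible rendering of this resplitting step, in the spirit of the splitting iterative least squares of [52] (the block-wise residual update below is our assumption):

```python
import numpy as np

def resplit_solve(G, D, q):
    """Split the feature matrix G into q column blocks [Psi_1, ..., Psi_Q]
    and solve each block against the running residual."""
    blocks = np.array_split(G, q, axis=1)
    weights, residual = [], D.copy()
    for Psi in blocks:
        W_k = np.linalg.pinv(Psi) @ residual  # smaller, better-posed subproblem
        residual = residual - Psi @ W_k
        weights.append(W_k)
    return weights  # Y = sum(Psi_k @ W_k) over the blocks
```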

3. Datasets and Experimental Settings

The HSI datasets Botswana, Pavia University, and Salinas [53] used in the experiments and shown in Table 1 are as follows:
(1) Botswana: This dataset was acquired by the NASA EO-1 satellite over the Okavango Delta, Botswana, in 2001–2004. The data have a 30 m pixel resolution over a 7.7 km strip and 242 bands covering the 400–2500 nm spectrum. After removing the uncalibrated and noisy bands that cover water absorption, 145 bands remain as candidate features. There are 14 classes of land cover;
(2) Pavia University: The data were acquired by the ROSIS sensor over Pavia. The images are 610 × 610 (after discarding the pixels without information, the image is 610 × 340 ). There are 9 classes in total in the images;
(3) Salinas: This dataset was acquired over Salinas Valley, California, with 204 bands (after discarding water absorption bands), and the size is 512 × 217. There are 16 classes in total in the images.
The experiments were performed on a desktop with an Intel i7-8700K CPU, an NVIDIA RTX 2080 Ti GPU, and 32 GB of memory. The number of hyperspectral bands was reduced to 15 using principal component analysis. Different HSI patch sizes affect the classification results: performance usually increases with patch size, but so does the computational load [21]. There is therefore a balance between the patch size and the computational load, and we chose 9 × 9 as the patch size for all datasets so that the DWDNN could achieve good performance while keeping the computation efficient. For the hyperspectral data, there was one channel after the selected bands were concatenated as a vector. The proportions of instances for training and validation were both 0.2 for Pavia University and Salinas; for Botswana, the ratios were 0.14 and 0.01. The training instances were organized together as a single set to train the DWDNN, and the remaining instances were used for testing. The overall accuracy (OA), average accuracy (AA), and Kappa coefficient [50] were used to evaluate the performance. The DWDNN was composed of 10 EWSWS networks, and the detailed parameters of each EWSWS network on the different datasets are shown in Table 2. The window-size parameters come in two formats: integers denote absolute window sizes, and decimals denote the window size as a fraction of the length of the input vector.
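The reported metrics follow their standard confusion-matrix definitions, as in the sketch below:

```python
import numpy as np

def oa_aa_kappa(conf):
    """conf: (C, C) confusion matrix with rows = true class, cols = predicted."""
    n = conf.sum()
    oa = np.trace(conf) / n                            # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))     # mean per-class accuracy
    pe = (conf.sum(axis=0) @ conf.sum(axis=1)) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                       # Kappa coefficient
    return oa, aa, kappa
```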

4. Performance of Classification with Different Hyperspectral Datasets

For the Botswana dataset, the proposed DWDNN was compared with SVM [21,26], multilayer perceptron (MLP) [50], RBF [50], the CNN [50], the recently proposed 2D CNN [21,26], the 3D CNN [21,26], deep recurrent neural networks (DRNNs) [21,26], and the wide sliding window and subsampling (WSWS) network [50]. MLP was implemented with 1000 hidden units. The RBF network had 100 Gaussian kernels, and the centers of these kernels were chosen randomly from the training instances. The CNN was composed of a convolutional layer with 6 convolutional kernels, a pooling layer (scale 2), a convolutional layer with 12 convolutional kernels, and a pooling layer (scale 2). The patch size was 9 × 9 for MLP and RBF and 3 × 3 for the CNN.
The classification results are shown in Table 3 and Figure 5. The proposed DWDNN had the best OA and Kappa coefficient among the compared methods. The OA, AA, and Kappa coefficient of the DWDNN could reach 99.64%, 99.60%, and 99.61%, respectively. The test accuracy for each class in the table represents how many samples could be classified correctly among the total number of samples in the corresponding class. The test accuracies of 12 of the 14 classes reached 100%. Figure 5 shows the predicted results of the whole hyperspectral image. The proposed DWDNN had much smoother classification results for almost all classes. For example, the class of exposed soils with yellow color in Figure 5g has much smoother connected regions.
For the Pavia University and Salinas datasets, the proposed DWDNN was compared with SVM [17,54], MLP [50], RBF [50], the CNN [50], the 2D CNN [17,55], the 3D CNN [16,17], sparse modeling of spectral blocks (SMSB) [56], and the WSWS Net [50]. MLP had 1000 hidden units. The RBF network had 2000 Gaussian kernels, and the centers of these kernels were chosen randomly from the training instances. The CNN was composed of a convolutional layer with 6 convolutional kernels, a pooling layer (scale 2), a convolutional layer with 12 convolutional kernels, and a pooling layer (scale 2). The patch sizes were 9 × 9 for MLP, 5 × 5 for RBF, and 3 × 3 for the CNN, respectively.
The classification results for the Pavia University dataset are shown in Table 4 and Figure 6. The proposed DWDNN had the highest classification results compared with the other methods. The OA, AA, and Kappa coefficient of the DWDNN were 99.69%, 99.31%, and 99.59%, respectively. The test accuracy for each class in the table represents the percentage of samples that could be classified correctly among the total number of samples in the corresponding class. Figure 6 shows the predicted results of the whole hyperspectral image. The instances without class information were predicted. The proposed DWDNN and the compared WSWS Net had smoother classification results than other compared methods. For example, the predicted classes of bare soil with brown color and trees with green color in Figure 6g are much smoother than the predicted results of the compared methods.
The classification results for the Salinas dataset are shown in Table 5 and Figure 7. The proposed DWDNN had the best classification results compared with the other methods, and the OA, AA, and Kappa coefficient were 99.76%, 99.73%, and 99.73%, respectively. The test accuracy for each class in the table represents how many samples could be classified correctly among the total number of samples in the corresponding class. The instances without class information were also predicted, as shown in Figure 7. The proposed DWDNN had smoother predicted results than the compared methods, which can be seen from the predicted classes of grapes-untrained (purple) and vineyard-untrained (dark yellow) in Figure 7g.

5. Discussion

5.1. Visualization of the Extracted Features from the DWDNN

In this discussion, the extracted high-level features are visualized to show that the proposed DWDNN can extract discriminative features effectively, which explains its good performance on the HSI classification task.
For the Botswana dataset, the proportions for training and validation were 0.2 and 0.2. The basic parameter group of each single EWSWS network had four EWSWS layers, with (stride, window size, number of transform kernels, subsampling number) set to (12, 71, 80, 40) for the first EWSWS layer; (400, 0.1 of the current input length, 40, 20) for the second; (60, 0.7 of the current input length, 40, 20) for the third; and (2, 0.5 of the current input length, 20, 10) for the fourth.
For the Pavia University dataset, the proportions for training and validation were 0.2 and 0.2. The basic parameter group of each single EWSWS network had four EWSWS layers, with (stride, window size, number of transform kernels, subsampling number) set to (2, 5, 100, 50) for the first EWSWS layer; (320, 0.1 of the current input length, 100, 50) for the second; (40, 0.2 of the current input length, 32, 16) for the third; and (12, 0.3 of the current input length, 12, 6) for the fourth. For the Salinas dataset, the settings were the same as in Table 2.
The DWDNN was composed of multiple EWSWS networks, and each EWSWS network had multiple layers to extract features from the low level to the high level. The features extracted by the DWDNN from the hyperspectral datasets are shown in Figure 8, Figure 9 and Figure 10. These features were taken from the training samples and combined in a cascade from the fourth layer of transform kernels in the DWDNN. Each curve represents the feature vector extracted from one training instance, and the instances of the same class are stacked together. For the Botswana dataset, the extracted features from Classes 2, 10, 13, and 14 are shown, with all the training instances of each class stacked together. It is observed in Figure 8 that the extracted features of the same class have very similar curves; Classes 10 and 13 have more training samples, yet their feature curves remain very similar. For Pavia University, the extracted features from Classes 1, 3, 5, and 7 are shown, with one-tenth of the training instances of each class stacked together. It is observed in Figure 9 that all the classes have very similar feature curves; Class 1 has more training samples but still shows similar curves across these samples. For Salinas, the extracted features from Classes 1, 11, 13, and 14 are shown, again with one-tenth of the training instances of each class stacked together. It is observed in Figure 10 that all the classes have very similar feature curves; because 210 features were extracted and shown, these curves are denser than for the other two datasets.
It is observed from all the figures with the three datasets that different classes have different feature curves, and the instances in the same class have similar feature curves. This demonstrated that the proposed DWDNN can learn features effectively with the HSI data.

5.2. Smooth Fine-grained Learning with Different Numbers of EWSWS Networks

In this part, the experimental setting of the Botswana data was the same as in the previous discussion.
For the Pavia University and Salinas datasets, the settings were the same as in Table 2. The main advantage of the DWDNN is that it can learn smoothly with fine-grained hyperparameter settings. That is because the features can be learned in both the deep and wide directions iteratively. The DWDNN was composed of a number of EWSWS Nets, and the training can start from an EWSWS network with a basic group of parameters, then it can learn incrementally with the succeeding EWSWS networks one after the other. Therefore, the DWDNN with the proper architectural complexity can be obtained without overfitting.
It is observed in Table 6, Table 7 and Table 8 that the performance improves smoothly as the number of EWSWS networks increases. The testing performance over the iterations of the DWDNN with 10 EWSWS networks is shown in Figure 11; during the iterations, the testing performance improved steadily, and the iterations were stopped by the validation process. In Table 6, Table 7 and Table 8, the testing accuracies rose above 98% as the number of EWSWS networks increased, so the number of EWSWS networks can actually be reduced while still reaching the desired testing performance. It is also observed in Figure 11 that the iteration can start from a point with good performance to reduce the number of iterations.

5.3. Running Time Analysis

The proposed method is extended in both the wide and deep directions. The number of EWSWS networks, the strides of the sliding windows, and the subsampling ratios at each EWSWS layer can be used to balance performance against computational load. The running times, including training and testing, were compared with different methods on the Botswana dataset for further discussion, with the same parameter settings as in Section 4. The results are shown in Table 9. Compared with the classical machine learning models MLP and RBF and the recently proposed WSWS network, the proposed DWDNN has a longer training time but still a good testing time. That is because the DWDNN was composed of 10 EWSWS networks and needed somewhat more time to train the model iteratively. During testing, it computes quickly because the number of parameters of the DWDNN was reduced greatly through measures such as the strides of the sliding windows and the subsampling of the transform-kernel outputs. The proposed DWDNN had both shorter training and testing times than the CNN.

6. Conclusions

Recently, deep learning has been used effectively in hyperspectral image classification. However, it is hard to take full advantage of deep learning networks because of the limited training instances in hyperspectral remote sensing scenes, since the acquisition of hyperspectral remote sensing images is usually expensive. Overcoming overfitting therefore becomes an important issue for hyperspectral image classification with deep learning. In this paper, we proposed a dynamic wide and deep neural network (DWDNN) for hyperspectral image classification. It is composed of multiple efficient wide sliding window and subsampling (EWSWS) networks, which can be generated dynamically according to the complexity of the learning task; therefore, a learning architecture with the proper complexity can be generated to overcome overfitting. Each EWSWS network is extended in both the wide and deep directions with transform kernels as the hidden units. Fine-grained features can be learned smoothly and effectively because the sliding windows with strides and subsampling were designed both to reduce the feature dimension and to retain the important features. In the DWDNN, only the weights of the fully connected layers are computed, using iterative least squares, which makes the DWDNN easy to train. The proposed method was evaluated on the Botswana, Pavia University, and Salinas HSI datasets, where it showed the best performance among the compared methods, including classical machine learning methods and some recently proposed deep learning models for hyperspectral image classification. A limitation of the proposed method is that it does not extract two-dimensional spatial features in images as effectively as the CNN does: the image patches are flattened into vectors, and one-dimensional features are extracted by the sliding windows through the EWSWS layers, which may weaken the spatial structure of the HSIs. This can be addressed in future work by using two-dimensional transform kernels directly on the HSIs.

Author Contributions

All the authors made significant contributions to this work. All authors contributed to the methodology validation and results analysis and reviewed the manuscript. Conceptualization, O.K.E. and J.X.; methodology, J.X. and M.C.; software and experiments, J.X. and M.C.; validation, W.Z., J.G. and T.W.; writing, original draft preparation, J.X.; funding acquisition, Z.L., C.Z. and W.Z. All authors read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key R&D Program of China under Grant 2018YFC1504805; in part by the National Natural Science Foundation of China under Grants 61806022, 41941019, and 41874005; in part by the Fundamental Research Funds for the Central Universities, 300102269103, 300102269304, 300102260301, 300102261404, and 300102120201; in part by Fund No.19-163-00-KX-002-030-01; in part by the Key Research and Development Program of Shaanxi (Grant No. 2021NY-170); in part by the Special Project of Forestry Science and Technology Innovation Plan in Shaanxi Province SXLK2021-0225; and in part by the China Scholarship Council (CSC) under Scholarship 201404910404.

Acknowledgments

The authors are grateful to the Editor and reviewers for their constructive comments, which significantly improved this work.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HSI	Hyperspectral image
DWDNN	Dynamic wide and deep neural network
EWSWS	Efficient wide sliding window and subsampling
MLR	Multinomial logistic regression
SVM	Support vector machine
MLP	Multilayer perceptron
RF	Random forest
CNN	Convolutional neural network
DSVM	Deep support vector machine
FCN	Fully convolutional network
EWC	Elastic weight consolidation
IMM	Incremental moment matching
PSHNN	Parallel, self-organizing, hierarchical neural network
PCNN	Parallel consensual neural network
CDL	Conditional deep learning
SCN	Stochastic configuration network
PCA	Principal component analysis
OA	Overall accuracy
AA	Average accuracy
DRNN	Deep recurrent neural network
WSWS	Wide sliding window and subsampling
SMSB	Sparse modeling of spectral blocks

References

  1. Li, W.; Wu, G.; Zhang, F.; Du, Q. Hyperspectral image classification using deep pixel-pair features. IEEE Trans. Geosci. Remote Sens. 2016, 55, 844–853. [Google Scholar] [CrossRef]
  2. Böhning, D. Multinomial logistic regression algorithm. Ann. Inst. Stat. Math. 1992, 44, 197–200. [Google Scholar] [CrossRef]
  3. Tarabalka, Y.; Fauvel, M.; Chanussot, J.; Benediktsson, J.A. SVM-and MRF-based method for accurate classification of hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2010, 7, 736–740. [Google Scholar] [CrossRef] [Green Version]
  4. Safavian, S.R.; Landgrebe, D. A survey of decision tree classifier methodology. IEEE Trans. Syst. Man Cybern. 1991, 21, 660–674. [Google Scholar] [CrossRef] [Green Version]
  5. Xi, J.; Ersoy, O.K.; Fang, J.; Wu, T.; Wei, X.; Zhao, C. Parallel Multistage Wide Neural Network; Technical Reports, 757; Department of Electrical and Computer Engineering, Purdue University: West Lafayette, Indiana, 2020. [Google Scholar]
  6. Li, J.; Bioucas-Dias, J.M.; Plaza, A. Semisupervised hyperspectral image classification using soft sparse multinomial logistic regression. IEEE Geosci. Remote Sens. Lett. 2012, 10, 318–322. [Google Scholar]
  7. Wu, Z.; Wang, Q.; Plaza, A.; Li, J.; Liu, J.; Wei, Z. Parallel implementation of sparse representation classifiers for hyperspectral imagery on GPUs. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2912–2925. [Google Scholar] [CrossRef]
  8. Lee, H.; Kwon, H. Going deeper with contextual CNN for hyperspectral image classification. IEEE Trans. Image Process. 2017, 26, 4843–4855. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Mei, S.; Ji, J.; Hou, J.; Li, X.; Du, Q. Learning sensor-specific spatial-spectral features of hyperspectral images via convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4520–4533. [Google Scholar] [CrossRef]
  10. Gao, Q.; Lim, S.; Jia, X. Hyperspectral image classification using convolutional neural networks and multiple feature learning. Remote Sens. 2018, 10, 299. [Google Scholar] [CrossRef] [Green Version]
  11. Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A fast dense spectral–spatial convolution network framework for hyperspectral images classification. Remote Sens. 2018, 10, 1068. [Google Scholar] [CrossRef] [Green Version]
  12. Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147. [Google Scholar] [CrossRef]
  13. Zhang, M.; Li, W.; Du, Q. Diverse region-based CNN for hyperspectral image classification. IEEE Trans. Image Process. 2018, 27, 2623–2634. [Google Scholar] [CrossRef]
  14. Cheng, G.; Li, Z.; Han, J.; Yao, X.; Guo, L. Exploring hierarchical convolutional features for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6712–6722. [Google Scholar] [CrossRef]
  15. Gong, Z.; Zhong, P.; Yu, Y.; Hu, W.; Li, S. A CNN With Multiscale Convolution and Diversified Metric for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3599–3618. [Google Scholar] [CrossRef]
  16. Hamida, A.B.; Benoit, A.; Lambert, P.; Amar, C.B. 3D deep learning approach for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434. [Google Scholar] [CrossRef] [Green Version]
  17. Zheng, J.; Feng, Y.; Bai, C.; Zhang, J. Hyperspectral Image Classification Using Mixed Convolutions and Covariance Pooling. IEEE Trans. Geosci. Remote Sens. 2020, 59, 522–534. [Google Scholar] [CrossRef]
  18. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3D–2D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281. [Google Scholar] [CrossRef] [Green Version]
  19. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Li, J.; Plaza, A. Active Learning With Convolutional Neural Networks for Hyperspectral Image Classification Using a New Bayesian Approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6440–6461. [Google Scholar] [CrossRef]
  20. Cao, X.; Yao, J.; Xu, Z.; Meng, D. Hyperspectral image classification with convolutional neural network and active learning. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4604–4616. [Google Scholar] [CrossRef]
  21. Tang, X.; Meng, F.; Zhang, X.; Cheung, Y.; Ma, J.; Liu, F.; Jiao, L. Hyperspectral Image Classification Based on 3D Octave Convolution With Spatial-Spectral Attention Network. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1–18. [Google Scholar] [CrossRef]
  22. Masarczyk, W.; Głomb, P.; Grabowski, B.; Ostaszewski, M. Effective Training of Deep Convolutional Neural Networks for Hyperspectral Image Classification through Artificial Labeling. Remote Sens. 2020, 12, 2653. [Google Scholar] [CrossRef]
  23. Xie, F.; Gao, Q.; Jin, C.; Zhao, F. Hyperspectral image classification based on superpixel pooling convolutional neural network with transfer learning. Remote Sens. 2021, 13, 930. [Google Scholar] [CrossRef]
  24. Cui, Y.; Xia, J.; Wang, Z.; Gao, S.; Wang, L. Lightweight Spectral-Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 1–14. [Google Scholar] [CrossRef]
  25. Yuan, Y.; Wang, C.; Jiang, Z. Proxy-Based Deep Learning Framework for Spectral-Spatial Hyperspectral Image Classification: Efficient and Robust. IEEE Trans. Geosci. Remote Sens. 2021, 1–15. [Google Scholar] [CrossRef]
  26. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef] [Green Version]
  27. Okwuashi, O.; Ndehedehe, C.E. Deep support vector machine for hyperspectral image classification. Pattern Recognit. 2020, 103, 107298. [Google Scholar] [CrossRef]
  28. Worrall, D.E.; Garbin, S.J.; Turmukhambetov, D.; Brostow, G.J. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5028–5037. [Google Scholar]
  29. Liu, C.; Li, J.; He, L.; Plaza, A.; Li, S.; Li, B. Naive Gabor Networks for Hyperspectral Image Classification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 103, 376–390. [Google Scholar] [CrossRef]
  30. Cao, F.; Guo, W. Cascaded dual-scale crossover network for hyperspectral image classification. Knowl. Based Syst. 2020, 189, 105122. [Google Scholar] [CrossRef]
  31. Li, H.C.; Li, S.S.; Hu, W.S.; Feng, J.H.; Sun, W.W.; Du, Q. Recurrent Feedback Convolutional Neural Network for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2021, 1–5. [Google Scholar] [CrossRef]
  32. Roy, S.K.; Haut, J.M.; Paoletti, M.E.; Dubey, S.R.; Plaza, A. Generative Adversarial Minority Oversampling for Spectral-Spatial Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 1–15. [Google Scholar] [CrossRef]
  33. Parisi, G.I.; Kemker, R.; Part, J.L.; Kanan, C.; Wermter, S. Continual lifelong learning with neural networks: A review. Neural Netw. 2019, 113, 54–71. [Google Scholar] [CrossRef]
  34. Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A.A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. USA 2017, 114, 3521–3526. [Google Scholar] [CrossRef] [Green Version]
  35. Lee, S.W.; Kim, J.H.; Jun, J.; Ha, J.W.; Zhang, B.T. Overcoming catastrophic forgetting by incremental moment matching. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2017; pp. 4652–4662. [Google Scholar]
  36. Ersoy, O.K.; Hong, D. Parallel, self-organizing, hierarchical neural networks. IEEE Trans. Neural Netw. 1990, 1, 167–178. [Google Scholar] [CrossRef] [Green Version]
  37. Benediktsson, J.A.; Sveinsson, J.R.; Ersoy, O.K.; Swain, P.H. Parallel consensual neural networks. IEEE Trans. Neural Netw. 1997, 8, 54–64. [Google Scholar] [CrossRef] [Green Version]
  38. Venkataramani, S.; Raghunathan, A.; Liu, J.; Shoaib, M. Scalable-effort classifiers for energy-efficient machine learning. In Proceedings of the 52nd Annual Design Automation Conference, San Francisco, CA, USA, 8–12 June 2015; ACM: New York, NY, USA, 2015; p. 67. [Google Scholar]
  39. Panda, P.; Venkataramani, S.; Sengupta, A.; Raghunathan, A.; Roy, K. Energy-Efficient Object Detection Using Semantic Decomposition. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2017, 25, 2673–2677. [Google Scholar] [CrossRef]
  40. Roy, D.; Panda, P.; Roy, K. Tree-CNN: A Deep Convolutional Neural Network for Lifelong Learning. arXiv 2018, arXiv:1802.05800. [Google Scholar]
  41. Panda, P.; Sengupta, A.; Roy, K. Conditional Deep Learning for energy-efficient and enhanced pattern recognition. In Proceedings of the 2016 Design, Automation Test in Europe Conference Exhibition (DATE), Dresden, Germany, 14–18 March 2016; pp. 475–480. [Google Scholar]
  42. Wang, D.; Li, M. Stochastic Configuration Networks: Fundamentals and Algorithms. IEEE Trans. Cybern. 2017, 47, 3466–3479. [Google Scholar] [CrossRef] [Green Version]
  43. Kwon, H.; Lee, J. AdvGuard: Fortifying Deep Neural Networks against Optimized Adversarial Example Attack. IEEE Access 2020. [Google Scholar] [CrossRef]
  44. Kwon, H.; Yoon, H.; Park, K.W. Multi-targeted backdoor: Indentifying backdoor attack for multiple deep neural networks. IEICE Trans. Inf. Syst. 2020, 103, 883–887. [Google Scholar] [CrossRef]
  45. Neyshabur, B.; Li, Z.; Bhojanapalli, S.; LeCun, Y.; Srebro, N. The role of over-parametrization in generalization of neural networks. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  46. Lee, J.; Xiao, L.; Schoenholz, S.S.; Bahri, Y.; Sohl-Dickstein, J.; Pennington, J. Wide neural networks of any depth evolve as linear models under gradient descent. arXiv 2019, arXiv:1902.06720. [Google Scholar]
  47. Cheng, H.T.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; et al. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, Boston, MA, USA, 15 September 2016; ACM: New York, NY, USA, 2016; pp. 7–10. [Google Scholar]
  48. Jahan, F.; Zhou, J.; Awrangjeb, M.; Gao, Y. Fusion of hyperspectral and LiDAR data using discriminant correlation analysis for land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3905–3917. [Google Scholar] [CrossRef] [Green Version]
  49. Jahan, F.; Zhou, J.; Awrangjeb, M.; Gao, Y. Inverse Coefficient of Variation Feature and Multilevel Fusion Technique for Hyperspectral and LiDAR Data Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 367–381. [Google Scholar] [CrossRef] [Green Version]
  50. Xi, J.; Ersoy, O.K.; Fang, J.; Cong, M.; Wu, T.; Zhao, C.; Li, Z. Wide Sliding Window and Subsampling Network for Hyperspectral Image Classification. Remote Sens. 2021, 13, 1290. [Google Scholar] [CrossRef]
  51. Aghagolzadeh, S.; Ersoy, O.K. Optimal adaptive multistage image transform coding. IEEE Trans. Circuits Syst. Video Technol. 1991, 1, 308–317. [Google Scholar] [CrossRef] [Green Version]
  52. Xi, J.; Ersoy, O.K.; Fang, J.; Cong, M.; Wei, X.; Wu, T. Scalable Wide Neural Network: A Parallel, Incremental Learning Model Using Splitting Iterative Least Squares. IEEE Access 2021, 9, 50767–50781. [Google Scholar] [CrossRef]
  53. Hyperspectral Remote Sensing Scenes. Available online: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes#Indian_Pines (accessed on 25 March 2021).
  54. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  55. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef] [Green Version]
  56. Azar, S.G.; Meshgini, S.; Rezaii, T.Y.; Beheshti, S. Hyperspectral image classification based on sparse modeling of spectral blocks. Neurocomputing 2020, 407, 12–23. [Google Scholar] [CrossRef]
Figure 1. Architecture of the DWDNN. The DWDNN is composed of multiple EWSWS networks, which are generated in the wide direction. Each EWSWS network has L layers of transform kernels with sliding windows and strides in the deep direction. Each WSWS layer can be extended in the wide direction. Therefore, the DWDNN can learn features effectively with both the wide and deep architecture. The parameters of these transform kernels can be learned with randomly chosen training samples. The weights of the DWDNN are mainly in the fully connected layer and can be learned easily using iterative least squares.
Figure 2. Hyperspectral image preprocessing and instances’ preparation.
Figure 3. The architecture of the EWSWS layers. The EWSWS layers are in both the wide and deep direction. Each layer includes the transform kernels, sorting, and subsampling. The sliding window with a stride is used for each layer, and the layer is extended in the wide direction by using these sliding windows.
Figure 4. Dynamic iterative learning of the DWDNN. The weights are related to the combined outputs of the final EWSWS layers from the EWSWS networks. The weights’ resplitting block and iterative least squares are used to learn these weights in the fully connected layer.
Figure 5. Classification results of the Botswana dataset (the instances without class information were also predicted). (a) False color image, (b) reference image, (c) MLP, (d) RBF network, (e) CNN, (f) WSWS network, and (g) DWDNN.
Figure 6. Classification results of the Pavia University dataset (the instances without class information were also predicted). (a) False color image, (b) reference image, (c) MLP, (d) RBF network, (e) CNN, (f) WSWS network, and (g) DWDNN.
Figure 7. Classification results of the Salinas dataset (the instances without class information were also predicted). (a) False color image, (b) reference image, (c) MLP, (d) RBF network, (e) CNN, (f) WSWS network, and (g) DWDNN.
Figure 8. Stacked extracted features of the training instances from the last transform kernel layers of the DWDNN (10 EWSWS networks) with the Botswana dataset. (a) Extracted features for class 2. (b) Extracted features for class 10. (c) Extracted features for class 13. (d) Extracted features for class 14.
Figure 9. Stacked extracted features of the training instances from the last transform kernel layers of the DWDNN (10 EWSWS networks) with the Pavia University dataset. (a) Extracted features for class 1. (b) Extracted features for class 3. (c) Extracted features for class 5. (d) Extracted features for class 7.
Figure 10. Stacked extracted features of the training instances from the last transform kernel layers of the DWDNN (10 EWSWS networks) with the Salinas dataset. (a) Extracted features for class 1. (b) Extracted features for class 11. (c) Extracted features for class 13. (d) Extracted features for class 14.
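Figures 8–10 stack each training instance's feature vector as one row of an image. A small matplotlib sketch of that visualization, assuming a hypothetical `class_features` array of shape (instances, features):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_stacked_features(class_features, class_id):
    """Show per-instance feature vectors stacked row-wise (illustrative)."""
    plt.imshow(np.asarray(class_features), aspect="auto")
    plt.xlabel("Feature index")
    plt.ylabel("Training instance")
    plt.title(f"Extracted features for class {class_id}")
    plt.show()
```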
Figure 11. Classification results of the DWDNN. (a–c) Classification results with different numbers of EWSWS networks. (d–f) Classification results over different iterations (10 EWSWS networks).
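The dynamic growth that Figure 11a–c evaluates can be sketched as a plateau-stopping loop. This reuses `ewsws_layer`, `fit_fc_weights`, and `predict` from the sketches above (labels are one-hot, as before); the stopping tolerance and per-network seeding are our assumptions, not the paper's exact rule.

```python
import numpy as np

def grow_dwdnn(x_tr, y_tr, x_val, y_val, max_networks=10, tol=1e-3):
    """Append EWSWS networks until validation accuracy plateaus
    (illustrative; see ewsws_layer / fit_fc_weights / predict above)."""
    networks, best_acc, w = [], 0.0, None
    for k in range(max_networks):
        # Each appended network uses its own transform kernels (seed=k).
        networks.append(lambda X, k=k: np.stack(
            [ewsws_layer(x, 2, 4, 100, 50, seed=k) for x in X]))
        w = fit_fc_weights([net(x_tr) for net in networks], y_tr)
        scores = predict([net(x_val) for net in networks], w)
        acc = (scores.argmax(1) == y_val.argmax(1)).mean()
        if k > 0 and acc - best_acc < tol:   # no meaningful gain: stop growing
            break
        best_acc = acc
    return networks, w
```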
Table 1. Description of the hyperspectral remote sensing datasets.
| Class No. | Botswana | No. * | Pavia University | No. * | Salinas | No. * |
|---|---|---|---|---|---|---|
| 1 | Water | 270 | Asphalt | 6631 | Broccoli-green-weeds-1 | 2009 |
| 2 | Hippo grass | 101 | Meadows | 18,649 | Broccoli-green-weeds-2 | 3726 |
| 3 | Floodplain grasses1 | 251 | Gravel | 2099 | Fallow | 1976 |
| 4 | Floodplain grasses2 | 215 | Trees | 3064 | Fallow-rough-plow | 1394 |
| 5 | Reeds1 | 269 | Painted metal sheets | 1345 | Fallow-smooth | 2678 |
| 6 | Riparian | 269 | Bare soil | 5029 | Stubble | 3959 |
| 7 | Firescar2 | 259 | Bitumen | 1330 | Celery | 3579 |
| 8 | Island interior | 203 | Self-blocking bricks | 3682 | Grapes-untrained | 11,271 |
| 9 | Acacia woodlands | 314 | Shadows | 947 | Soil-vineyard-develop | 6203 |
| 10 | Acacia shrublands | 248 |  |  | Corn-senesced-green-weeds | 3278 |
| 11 | Acacia grasslands | 305 |  |  | Lettuce-romaine-4wk | 1068 |
| 12 | Short mopane | 181 |  |  | Lettuce-romaine-5wk | 1927 |
| 13 | Mixed mopane | 268 |  |  | Lettuce-romaine-6wk | 916 |
| 14 | Exposed soils | 95 |  |  | Lettuce-romaine-7wk | 1070 |
| 15 |  |  |  |  | Vineyard-untrained | 7268 |
| 16 |  |  |  |  | Vineyard-vertical-trellis | 1807 |
| Total |  | 3248 |  | 42,776 |  | 54,129 |
* No. represents the number of samples.
Table 2. Parameter settings for the DWDNN on different hyperspectral datasets.
| Dataset | EWSWS Layer | Stride | Window | Kernels | Subsampling |
|---|---|---|---|---|---|
| Botswana | 1st | 12 | 12 | 100 | 50 |
| Botswana | 2nd | 400 | 0.9 | 100 | 50 |
| Botswana | 3rd | 60 | 0.7 | 40 | 20 |
| Botswana | 4th | 2 | 0.5 | 20 | 10 |
| Pavia University | 1st | 2 | 4 | 100 | 50 |
| Pavia University | 2nd | 320 | 0.1 | 100 | 50 |
| Pavia University | 3rd | 40 | 0.1 | 32 | 16 |
| Pavia University | 4th | 12 | 0.3 | 32 | 16 |
| Salinas | 1st | 12 | 51 | 100 | 50 |
| Salinas | 2nd | 400 | 0.1 | 100 | 50 |
| Salinas | 3rd | 60 | 0.7 | 40 | 20 |
| Salinas | 4th | 2 | 0.5 | 20 | 10 |
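As transcribed in Table 2, each EWSWS network is fully specified by four numbers per layer. The snippet below records the Botswana column as a plain configuration; the key names are our own, and we read the fractional window values as fractions of the layer's input length, which is an interpretation rather than a statement from the paper.

```python
# Botswana EWSWS network settings from Table 2 (key names are illustrative;
# fractional windows are interpreted as fractions of the input length).
BOTSWANA_EWSWS_LAYERS = [
    {"stride": 12,  "window": 12,  "kernels": 100, "subsampling": 50},  # 1st
    {"stride": 400, "window": 0.9, "kernels": 100, "subsampling": 50},  # 2nd
    {"stride": 60,  "window": 0.7, "kernels": 40,  "subsampling": 20},  # 3rd
    {"stride": 2,   "window": 0.5, "kernels": 20,  "subsampling": 10},  # 4th
]
```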
Table 3. Classification results of different methods on the Botswana dataset.
| Class No. * | SVM | MLP | RBF | CNN | 2D CNN | 3D CNN | DRNN | WSWS | DWDNN |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 98.50 | 98.25 | 100.00 | 100.00 | 98.79 | 99.92 | 97.71 | 97.38 | 100.00 |
| 2 | 98.87 | 100.00 | 97.67 | 100.00 | 98.91 | 99.15 | 99.29 | 100.00 | 100.00 |
| 3 | 96.29 | 96.71 | 88.73 | 95.31 | 99.04 | 97.20 | 99.66 | 100.00 | 100.00 |
| 4 | 79.78 | 96.72 | 100.00 | 95.63 | 99.24 | 99.46 | 99.65 | 98.91 | 100.00 |
| 5 | 78.42 | 84.21 | 90.79 | 84.21 | 94.27 | 85.25 | 97.69 | 98.68 | 98.14 |
| 6 | 90.63 | 82.89 | 68.42 | 81.14 | 91.64 | 93.35 | 98.75 | 100.00 | 100.00 |
| 7 | 100.00 | 99.55 | 90.91 | 100.00 | 99.35 | 99.83 | 100.00 | 100.00 | 100.00 |
| 8 | 95.03 | 98.84 | 100.00 | 95.38 | 98.87 | 99.77 | 98.84 | 100.00 | 100.00 |
| 9 | 90.85 | 94.76 | 93.63 | 97.38 | 98.24 | 97.54 | 99.81 | 100.00 | 100.00 |
| 10 | 100.00 | 91.94 | 34.60 | 78.67 | 98.62 | 100.00 | 100.00 | 100.00 | 100.00 |
| 11 | 98.04 | 98.07 | 83.40 | 93.82 | 94.45 | 97.16 | 98.00 | 100.00 | 100.00 |
| 12 | 96.16 | 91.56 | 78.57 | 97.40 | 98.89 | 97.02 | 97.52 | 100.00 | 96.33 |
| 13 | 93.70 | 99.12 | 83.26 | 88.55 | 98.74 | 99.58 | 99.58 | 100.00 | 100.00 |
| 14 | 100.00 | 91.36 | 100.00 | 95.06 | 99.83 | 99.69 | 100.00 | 100.00 | 100.00 |
| OA (%) | 95.22 | 94.45 | 85.21 | 92.50 | 98.34 | 97.18 | 99.01 | 99.60 | 99.64 |
| AA (%) | 94.09 | 94.57 | 86.43 | 93.04 | 97.99 | 97.53 | 99.00 | 99.64 | 99.60 |
| Kappa (%) | 94.42 | 94.02 | 84.18 | 91.92 | 98.20 | 96.94 | 98.92 | 99.57 | 99.61 |
* Class No.: 1, Water; 2, Hippo grass; 3, floodplain grasses1; 4, floodplain grasses2; 5, Reeds1; 6, Riparian; 7, Firescar2; 8, Island interior; 9, Acacia woodlands; 10, Acacia shrublands; 11, Acacia grasslands; 12, Short mopane; 13, Mixed mopane; 14, Exposed soils.
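The OA, AA, and kappa rows in Tables 3–8 follow the standard confusion-matrix definitions. A short self-contained sketch of how they can be computed (our own helper; scikit-learn's accuracy and cohen_kappa_score would give the same values):

```python
import numpy as np

def accuracy_metrics(conf):
    """OA, AA, and Cohen's kappa (in percent) from a confusion matrix
    whose rows are reference classes and columns are predictions."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    oa = np.trace(conf) / n                          # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))   # average per-class accuracy
    pe = conf.sum(axis=0) @ conf.sum(axis=1) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return 100 * oa, 100 * aa, 100 * kappa

# Toy three-class example.
print(accuracy_metrics([[50, 2, 0], [3, 45, 1], [0, 2, 47]]))
```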
Table 4. Classification results of different methods on the Pavia University dataset.
| Class No. * | SVM | MLP | RBF | CNN | 2D CNN | 3D CNN | SMSB | WSWS | DWDNN |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 94.72 | 97.13 | 97.65 | 96.18 | 98.51 | 98.40 | 99.11 | 99.10 | 99.87 |
| 2 | 97.15 | 98.43 | 99.53 | 96.69 | 99.54 | 96.91 | 98.97 | 100.00 | 100.00 |
| 3 | 82.73 | 85.15 | 80.26 | 80.86 | 84.62 | 97.05 | 98.89 | 93.01 | 96.98 |
| 4 | 96.82 | 95.05 | 93.84 | 87.21 | 98.04 | 98.84 | 98.74 | 98.37 | 99.29 |
| 5 | 99.71 | 99.88 | 91.58 | 99.63 | 100.00 | 100.00 | 100.00 | 99.88 | 99.75 |
| 6 | 90.48 | 96.35 | 87.06 | 88.30 | 97.10 | 99.32 | 99.87 | 99.97 | 100.00 |
| 7 | 87.73 | 90.85 | 90.30 | 82.58 | 95.05 | 98.92 | 99.79 | 99.00 | 98.62 |
| 8 | 88.29 | 93.21 | 92.43 | 94.12 | 96.39 | 98.33 | 98.99 | 98.33 | 99.59 |
| 9 | 99.90 | 99.30 | 94.84 | 99.30 | 99.69 | 99.90 | 98.04 | 98.95 | 99.65 |
| OA (%) | 94.33 | 96.47 | 95.18 | 93.66 | 97.84 | 96.52 | 99.11 | 99.19 | 99.69 |
| AA (%) | 92.97 | 95.04 | 91.98 | 91.65 | 96.56 | 97.47 | 99.16 | 98.51 | 99.31 |
| Kappa (%) | 92.51 | 95.36 | 93.66 | 91.72 | 97.19 | 95.50 | 98.79 | 98.93 | 99.59 |
* Class No.: 1, Asphalt; 2, Meadows; 3, Gravel; 4, Trees; 5, Painted metal sheets; 6, Bare soil; 7, Bitumen; 8, Self-blocking bricks; 9, Shadows.
Table 5. Classification results of different methods on the Salinas dataset.
| Class No. * | SVM | MLP | RBF | CNN | 2D CNN | 3D CNN | SMSB | WSWS | DWDNN |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 99.60 | 100.00 | 99.75 | 98.51 | 100.00 | 98.41 | 99.78 | 100.00 | 100.00 |
| 2 | 99.82 | 100.00 | 100.00 | 99.82 | 99.96 | 100.00 | 99.97 | 99.87 | 99.91 |
| 3 | 99.26 | 99.41 | 99.83 | 99.66 | 99.63 | 99.23 | 99.94 | 98.82 | 99.24 |
| 4 | 99.40 | 99.52 | 99.52 | 98.68 | 99.28 | 99.90 | 99.28 | 97.73 | 98.80 |
| 5 | 99.42 | 97.70 | 97.14 | 99.38 | 99.20 | 99.43 | 99.54 | 99.38 | 99.88 |
| 6 | 100.00 | 100.00 | 100.00 | 99.96 | 100.00 | 99.55 | 99.97 | 99.96 | 100.00 |
| 7 | 99.83 | 0.00 | 100.00 | 99.95 | 100.00 | 99.72 | 99.88 | 99.91 | 99.95 |
| 8 | 85.25 | 90.64 | 91.22 | 74.24 | 93.62 | 89.75 | 98.87 | 99.72 | 99.81 |
| 9 | 99.71 | 100.00 | 100.00 | 100.00 | 100.00 | 99.81 | 99.91 | 99.76 | 99.70 |
| 10 | 97.03 | 99.08 | 98.98 | 93.44 | 98.82 | 98.36 | 98.85 | 99.64 | 99.95 |
| 11 | 98.24 | 99.53 | 99.69 | 96.72 | 99.73 | 98.12 | 99.79 | 100.00 | 99.84 |
| 12 | 99.46 | 100.00 | 100.00 | 99.74 | 100.00 | 98.96 | 99.94 | 99.91 | 99.74 |
| 13 | 98.77 | 99.64 | 99.27 | 98.91 | 100.00 | 98.93 | 99.03 | 99.82 | 99.82 |
| 14 | 97.30 | 99.84 | 99.69 | 100.00 | 99.86 | 98.60 | 98.86 | 100.00 | 99.69 |
| 15 | 92.71 | 85.53 | 79.33 | 88.65 | 91.52 | 79.31 | 97.63 | 99.52 | 99.56 |
| 16 | 99.41 | 99.91 | 100.00 | 98.53 | 99.92 | 94.51 | 99.92 | 100.00 | 99.72 |
| OA (%) | 92.94 | 89.27 | 95.14 | 92.42 | 97.39 | 93.95 | 99.26 | 99.67 | 99.76 |
| AA (%) | 94.61 | 91.92 | 97.78 | 96.64 | 98.85 | 97.02 | 99.45 | 99.63 | 99.73 |
| Kappa (%) | 92.12 | 88.20 | 94.64 | 91.69 | 97.07 | 93.31 | 99.17 | 99.63 | 99.73 |
* Class No.: 1, Broccoli-green-weeds-1; 2, Broccoli-green-weeds-2; 3, Fallow; 4, Fallow-rough-plow; 5, Fallow-smooth; 6, Stubble; 7, Celery; 8, Grapes-untrained; 9, Soil-vineyard-develop; 10, Corn-senesced-green-weeds; 11, Lettuce-romaine-4wk; 12, Lettuce-romaine-5wk; 13, Lettuce-romaine-6wk; 14, Lettuce-romaine-7wk; 15, Vineyard-untrained; 16, Vineyard-vertical-trellis.
Table 6. Classification results with different numbers of EWSWS networks with the Botswana dataset.
| Class Number * | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 98.38 | 100.00 | 99.38 | 100.00 | 96.91 | 98.15 | 98.77 | 100.00 | 100.00 |
| 2 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 3 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 4 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 5 | 95.65 | 96.27 | 96.27 | 96.89 | 99.38 | 98.14 | 98.76 | 99.38 | 98.76 |
| 6 | 100.00 | 98.76 | 100.00 | 98.14 | 100.00 | 100.00 | 99.38 | 99.38 | 100.00 |
| 7 | 99.35 | 99.35 | 100.00 | 100.00 | 99.35 | 100.00 | 100.00 | 100.00 | 100.00 |
| 8 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 9 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 99.47 | 100.00 |
| 10 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 11 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 12 | 98.17 | 99.08 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 13 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 14 | 89.47 | 92.98 | 92.98 | 98.25 | 98.25 | 98.25 | 98.25 | 98.25 | 100.00 |
| OA (%) | 99.13 | 99.28 | 99.43 | 99.54 | 99.59 | 99.64 | 99.69 | 99.74 | 99.90 |
| AA (%) | 98.72 | 99.03 | 99.19 | 99.52 | 99.56 | 99.61 | 99.65 | 99.62 | 99.91 |
| Kappa (%) | 99.05 | 99.22 | 99.39 | 99.50 | 99.55 | 99.61 | 99.67 | 99.72 | 99.89 |
* Class Number: 1, Water; 2, Hippo grass; 3, floodplain grasses1; 4, floodplain grasses2; 5, Reeds1; 6, Riparian; 7, Firescar2; 8, Island interior; 9, Acacia woodlands; 10, Acacia shrublands; 11, Acacia grasslands; 12, Short mopane; 13, Mixed mopane; 14, Exposed soils.
Table 7. Classification results with different numbers of EWSWS networks with the Pavia University dataset.
| Class Number * | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 99.55 | 99.52 | 99.52 | 99.77 | 99.57 | 99.82 | 99.85 | 99.77 | 99.97 |
| 2 | 99.87 | 99.90 | 99.99 | 99.98 | 99.96 | 99.97 | 100.00 | 99.99 | 100.00 |
| 3 | 92.61 | 94.12 | 94.36 | 94.92 | 94.84 | 96.51 | 95.79 | 96.19 | 96.98 |
| 4 | 97.23 | 97.99 | 97.77 | 98.42 | 99.13 | 98.53 | 99.29 | 98.91 | 99.29 |
| 5 | 99.63 | 99.75 | 100.00 | 100.00 | 100.00 | 99.88 | 99.50 | 100.00 | 99.75 |
| 6 | 100.00 | 100.00 | 100.00 | 100.00 | 99.97 | 100.00 | 100.00 | 100.00 | 100.00 |
| 7 | 97.49 | 97.37 | 98.50 | 99.00 | 99.12 | 99.25 | 99.00 | 99.87 | 98.62 |
| 8 | 99.19 | 99.23 | 99.46 | 99.19 | 99.41 | 99.23 | 99.37 | 99.59 | 99.05 |
| 9 | 97.54 | 97.01 | 98.42 | 98.07 | 97.54 | 97.54 | 97.54 | 98.07 | 98.07 |
| OA (%) | 99.10 | 99.23 | 99.36 | 99.45 | 99.46 | 99.54 | 99.58 | 99.61 | 99.69 |
| AA (%) | 98.12 | 98.32 | 98.67 | 98.82 | 98.84 | 98.97 | 98.97 | 99.16 | 99.31 |
| Kappa (%) | 98.80 | 98.98 | 99.15 | 99.27 | 99.29 | 99.39 | 99.44 | 99.49 | 99.59 |
* Class Number: 1, Asphalt; 2, Meadows; 3, Gravel; 4, Trees; 5, Painted metal sheets; 6, Bare soil; 7, Bitumen; 8, Self-blocking bricks; 9, Shadows.
Table 8. Classification results with different numbers of EWSWS networks with the Salinas dataset.
| Class Number * | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 99.92 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 2 | 99.15 | 99.28 | 99.28 | 99.42 | 99.42 | 99.55 | 99.69 | 99.78 | 99.91 |
| 3 | 96.69 | 99.24 | 98.57 | 98.31 | 98.82 | 98.90 | 98.90 | 99.07 | 99.24 |
| 4 | 98.21 | 98.44 | 98.92 | 99.04 | 98.80 | 99.16 | 98.56 | 98.92 | 98.80 |
| 5 | 99.25 | 99.13 | 99.75 | 99.75 | 100.00 | 99.94 | 99.94 | 99.81 | 99.88 |
| 6 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| 7 | 99.44 | 99.91 | 99.44 | 99.35 | 99.67 | 99.58 | 99.63 | 99.72 | 99.95 |
| 8 | 97.71 | 98.54 | 99.35 | 99.26 | 99.31 | 99.62 | 99.72 | 99.72 | 99.81 |
| 9 | 99.57 | 99.52 | 99.52 | 99.57 | 99.62 | 99.65 | 99.54 | 99.70 | 99.70 |
| 10 | 97.76 | 98.37 | 98.83 | 98.88 | 99.49 | 99.44 | 99.39 | 99.49 | 99.95 |
| 11 | 99.06 | 99.22 | 99.22 | 99.38 | 100.00 | 100.00 | 100.00 | 99.84 | 99.84 |
| 12 | 99.57 | 99.74 | 99.83 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 99.74 |
| 13 | 98.18 | 99.64 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 99.82 |
| 14 | 97.66 | 100.00 | 100.00 | 99.84 | 99.84 | 99.69 | 100.00 | 100.00 | 99.69 |
| 15 | 95.44 | 96.90 | 97.45 | 97.78 | 98.19 | 98.30 | 98.92 | 98.90 | 99.56 |
| 16 | 99.08 | 98.25 | 99.08 |  | 98.89 | 99.63 | 99.82 | 99.72 | 99.72 |
| OA (%) | 98.29 | 98.87 | 99.17 | 99.20 | 99.39 | 99.47 | 99.58 | 99.61 | 99.76 |
| AA (%) | 98.56 | 99.14 | 99.33 | 99.34 | 99.54 | 99.59 | 99.63 | 99.67 | 99.73 |
| Kappa (%) | 98.10 | 98.74 | 99.07 | 99.11 | 99.32 | 99.41 | 99.53 | 99.67 | 99.73 |
* Class Number: 1, Broccoli-green-weeds-1; 2, Broccoli-green-weeds-2; 3, Fallow; 4, Fallow-rough-plow; 5, Fallow-smooth; 6, Stubble; 7, Celery; 8, Grapes-untrained; 9, Soil-vineyard-develop; 10, Corn-senesced-green-weeds; 11, Lettuce-romaine-4wk; 12, Lettuce-romaine-5wk; 13, Lettuce-romaine-6wk; 14, Lettuce-romaine-7wk; 15, Vineyard-untrained; 16, Vineyard-vertical-trellis.
Table 9. Running time analysis with the Botswana dataset.
|  | MLP | RBF | CNN | WSWS | DWDNN |
|---|---|---|---|---|---|
| Training time (s) | 7.2 | 0.1 | 139.4 | 3.4 | 17.4 |
| Test time (s) | 0.3 | 0.1 | 0.2 | 5.6 | 0.1 |