Article

Hyperspectral Image Classification Based on 3D Coordination Attention Mechanism Network

1 College of Communication and Electronic Engineering, Qiqihar University, Qiqihar 161000, China
2 College of Information and Communication Engineering, Dalian Nationalities University, Dalian 116000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(3), 608; https://doi.org/10.3390/rs14030608
Submission received: 26 December 2021 / Revised: 11 January 2022 / Accepted: 19 January 2022 / Published: 27 January 2022
(This article belongs to the Special Issue Deep Reinforcement Learning in Remote Sensing Image Processing)

Abstract

In recent years, owing to its powerful feature extraction ability, deep learning has been widely used in hyperspectral image classification tasks. However, the features extracted by classical deep learning methods have limited discrimination ability, resulting in unsatisfactory classification performance. In addition, because labeled samples of hyperspectral images (HSIs) are limited, achieving high classification performance with few training samples is also a research hotspot. In order to solve the above problems, this paper proposes a deep learning framework named the three-dimensional coordination attention mechanism network (3DCAMNet). In this paper, a three-dimensional coordination attention mechanism (3DCAM) is designed. This attention mechanism can not only capture the long-distance dependencies of spatial positions in HSIs in the vertical and horizontal directions, but also capture the differences in importance between spectral bands. In order to extract the spectral and spatial information of HSIs more fully, a convolution module based on a convolutional neural network (CNN) is adopted. In addition, a linear module is introduced after the convolution module to extract finer high-level features. In order to verify the effectiveness of 3DCAMNet, a series of experiments was carried out on five datasets, namely, Indian Pines (IP), Pavia University (UP), Kennedy Space Center (KSC), Salinas Valley (SV), and University of Houston (HT). The OAs obtained by the proposed method on the five datasets were 95.81%, 97.01%, 99.01%, 97.48%, and 97.69%, respectively, which were 3.71%, 9.56%, 0.67%, 2.89%, and 0.11% higher than those of the state-of-the-art A2S2K-ResNet. The experimental results show that, compared with some state-of-the-art methods, 3DCAMNet not only has higher classification performance, but also has stronger robustness.

1. Introduction

Over the past few decades, with the rapid development of hyperspectral imaging technology, sensors can capture hyperspectral images (HSIs) with hundreds of spectral bands. Hyperspectral image classification, an important task in the field of remote sensing, assigns accurate labels to individual pixels according to a multidimensional feature space [1,2,3]. In practical applications, hyperspectral image classification technology has been widely used in many fields, such as military reconnaissance, vegetation and ecological monitoring, atmospheric assessment, and geological disaster monitoring [4,5,6,7,8].
Traditional machine-learning methods mainly include two steps: feature extraction and classification [9,10,11,12,13,14]. In the early stage of hyperspectral image classification, many classical methods appeared, such as feature mining techniques [15] and Markov random fields [16]. However, these methods cannot effectively extract features with strong discrimination ability. In order to adapt to the nonlinear structure of hyperspectral data, the support vector machine (SVM), a pattern recognition algorithm, was applied to HSIs [17], but this method struggles to handle multiclass classification problems effectively.
With the development of deep learning (DL) technology, DL-based methods have been widely used in hyperspectral image classification [18,19,20]. In particular, hyperspectral image classification methods based on convolutional neural networks (CNNs) have attracted extensive attention because they can effectively deal with nonlinearly structured data [21,22,23,24,25,26,27,28]. In [29], the spectral features of HSIs were extracted for the first time by stacking a multilayer one-dimensional convolutional neural network (1DCNN). In addition, Yu et al. [30] proposed a CNN with a deconvolution and hashing method (CNNDH). Exploiting the spectral correlation and band variability of HSIs, a recurrent neural network (RNN) was used to extract spectral features [31]. In recent years, some two-dimensional neural networks have also been applied to hyperspectral image classification and have obtained satisfactory classification performance. For example, a two-dimensional stacked autoencoder (2DSAE) was used to extract deep spatial features [32]. In addition, Makantasis et al. [33] proposed a two-dimensional convolutional neural network (2DCNN), which extracts spatial information and classifies the original HSIs pixel by pixel in a supervised manner. In [34], Feng et al. proposed a CNN-based multilayer spatial–spectral feature fusion and sample augmentation method with local and nonlocal constraints (MSLN-CNN). MSLN-CNN not only fully extracts the complementary spatial–spectral information between shallow and deep layers, but also avoids the overfitting caused by an insufficient number of samples. In addition, in [35], Gong et al. proposed a multiscale convolutional neural network (MSCNN), which improves the representation ability of HSIs by extracting deep multiscale features. At the same time, a spectral–spatial unified network (SSUN) was proposed for HSIs [36]. This method shares a unified objective function for feature extraction and classifier training, so that all parameters can be optimized simultaneously. Considering the inherent data attributes of HSIs, spatial–spectral features can be extracted more fully by using a three-dimensional convolutional neural network (3DCNN). In [37], an unsupervised feature learning strategy based on a three-dimensional convolutional autoencoder (3DCAE) was used to maximally explore spatial–spectral structure information and learn effective features in an unsupervised manner. Roy et al. [38] proposed a mixed 3DCNN and 2DCNN feature extraction method (Hybrid-SN). This method first extracts spatial and spectral features through the 3DCNN, then extracts deep spatial features using the 2DCNN, and finally realizes high-precision classification. In [39], a robust generative adversarial network (GAN) was proposed, which effectively improved the classification performance. In addition, Paoletti et al. [40] proposed the pyramidal residual network (PyResNet).
Although the above methods can effectively improve the classification performance of HSIs, the results are still not satisfactory. In recent years, in order to further improve classification performance, the channel attention mechanism has been widely studied in computer vision and applied to the field of hyperspectral image classification [41,42,43,44]. For example, the squeeze-and-excitation network (SENet) improved classification performance by introducing a channel attention mechanism [45]. Wang et al. [46] proposed the spatial–spectral squeeze-and-excitation network (SSSE), which utilizes a squeeze operator and an excitation operation to refine the feature maps. In addition, embedding an attention mechanism into a popular model can also effectively improve classification performance. In [47], Mei et al. proposed attention-based bidirectional recurrent neural networks (bi-RNNs), in which the attention map is calculated by the tanh and sigmoid functions. Roy et al. [48] proposed a fused squeeze-and-excitation network (FuSENet), which obtains channel attention through global average pooling (GAP) and global max pooling (GMP). Ding et al. [49] proposed the local attention network (LANet), which enriches the semantic information of low-level features by embedding local attention in high-level features. However, channel attention can only obtain the attention map of the channel dimension, ignoring spatial information. In [50], in order to obtain prominent spatial features, the convolutional block attention module (CBAM) not only emphasizes the differences between channels through channel attention, but also uses pooling operations along the channel axis to generate a spatial attention map that highlights the importance of different spatial pixels. In order to fully extract spatial and spectral features, Zhong et al. [51] proposed a spatial–spectral residual network (SSRN). Recently, Zhu et al. [52] added spectral and spatial attention to SSRN, forming the residual spectral–spatial attention network (RSSAN), and achieved better classification performance. In the process of feature extraction, in order to avoid interference between the extracted spatial and spectral features, Ma et al. [53] designed a double-branch multi-attention (DBMA) network that extracts spatial features and spectral features using different attention mechanisms in the two branches. Similarly, Fu et al. [54] proposed a dual attention network (DANet), incorporating spatial attention and channel attention. Specifically, spatial attention is used to obtain the dependence between any two positions of the feature map, and channel attention is used to obtain the dependence between different channels. In [55], Li et al. proposed a double-branch dual-attention (DBDA) network. By adding spatial attention and channel attention modules to the two branches, DBDA achieves better classification performance. In order to highlight important features as much as possible, Cui et al. [56] proposed a new dual-triple attention network (DTAN), which uses three branches to obtain cross-dimensional interactive information and attention maps between different dimensions. In addition, in [57], in order to expand the receptive field and extract more effective features, Roy et al. proposed an attention-based adaptive spectral–spatial kernel improved residual network (A2S2K-ResNet).
Although many excellent classification methods have been applied to hyperspectral image classification, extracting features with strong discrimination ability and realizing high-precision classification with small samples are still big challenges. In recent years, although spatial and channel attention mechanisms could capture spatial and channel dependence, they were still limited in capturing long-distance dependence. Considering the spatial location relationships and the different importance of different bands, we propose a three-dimensional coordination attention mechanism network (3DCAMNet). 3DCAMNet comprises three main components: a convolution module, a linear module, and a three-dimensional coordination attention mechanism (3DCAM). Firstly, the convolution module uses a 3DCNN to fully extract spatial and spectral features. Secondly, the linear module aims to generate feature maps containing more information. Lastly, the designed 3DCAM not only considers the vertical and horizontal directions of spatial information, but also highlights the importance of different bands.
The main contributions of this paper are summarized as follows:
(1)
The three-dimensional coordination attention mechanism-based network (3DCAMNet) proposed in this paper is mainly composed of a three-dimensional coordination attention mechanism (3DCAM), linear module, and convolution module. This network structure can extract features with strong discrimination ability, and a series of experiments showed that 3DCAMNet can achieve good classification performance and has strong robustness.
(2)
In this paper, a 3DCAM is proposed. This attention mechanism obtains the 3D coordination attention map of HSIs by exploring the long-distance relationship between the vertical and horizontal directions of space and the importance of different channels of spectral dimension.
(3)
In order to extract spatial–spectral features as fully as possible, a convolution module is used in this paper. Similarly, in order to obtain feature maps containing more information, a linear module is introduced after the convolution module to extract finer high-level features.
The remainder of this paper is organized as follows: Section 2 introduces the components of 3DCAMNet in detail; Section 3 presents the experimental results and analysis; Section 4 provides the discussion; Section 5 draws the conclusions.

2. Methodology

In this section, we introduce the three components of 3DCAMNet in detail: the 3D coordination attention mechanism (3DCAM), linear module, and convolution module.

2.1. Overall Framework of 3DCAMNet

For a hyperspectral image, $Z = \{X, Y\}$, where $X$ is the set of all pixel data of the image, and $Y$ is the set of labels corresponding to all pixels. In order to effectively learn edge features, the input image is padded and processed pixel by pixel to obtain $N$ cubes of size $S \in \mathbb{R}^{H \times W \times L}$. Here, $H \times W$ is the spatial size of the cube, and $L$ is the number of spectral bands. The designed 3DCAMNet is mainly composed of three parts. Firstly, features are extracted from the input image by the convolution module. Secondly, in order to fully consider the importance of the spatial and spectral dimensions of the input image, a 3D coordination attention mechanism (3DCAM) is designed. Thirdly, after feature extraction, in order to extract high-level features more accurately, a linear module inspired by the ghost module is designed. Lastly, the final classification results are obtained through the fully connected (FC) layer and the softmax layer. The overall framework of 3DCAMNet is shown in Figure 1. Next, we introduce the principle and structure of each module in 3DCAMNet step by step.
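As an illustration of this pixel-by-pixel cube extraction, the following is a minimal NumPy sketch that pads an H × W × L image and collects one S-sized cube per pixel. The padding mode and function names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def extract_cubes(image, patch_size=9):
    """Pad the H x W x L image and extract one patch_size x patch_size x L
    cube centered on each pixel, as described for the 3DCAMNet input."""
    h, w, l = image.shape
    r = patch_size // 2
    # Assumed padding mode; the paper only states that the image is padded.
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="reflect")
    cubes = np.empty((h * w, patch_size, patch_size, l), dtype=image.dtype)
    for i in range(h):
        for j in range(w):
            cubes[i * w + j] = padded[i:i + patch_size, j:j + patch_size, :]
    return cubes
```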

2.2. 3DCAM

Applying an attention mechanism in a convolutional neural network (CNN) can effectively enhance feature discrimination ability, and attention is widely used in hyperspectral image classification. Hyperspectral images contain rich spatial and spectral information, and effectively extracting features along both the spatial and the spectral dimensions is the key to better classification. Therefore, we propose a 3D coordination attention mechanism (3DCAM), which explores the long-distance relationships between the vertical and horizontal directions of the spatial dimension and the differences in band importance along the spectral dimension, and obtains attention masks for the spatial and spectral dimensions accordingly.
The structure of the proposed 3DCAM is shown in Figure 2. 3DCAM includes two parts: spectral attention and spatial coordination attention. Spectral and spatial attention can adaptively learn different spectral bands and spatial contexts, so as to improve the ability to distinguish different bands and obtain more accurate spatial relationships. Assuming that the input of 3DCAM is $F \in \mathbb{R}^{H \times W \times L}$, the output $F_{out}$ can be represented as
$F_{out} = F \otimes M_H(F) \otimes M_W(F) \otimes M_L(F),$
where $F$ and $F_{out}$ represent the input and output of 3DCAM, respectively. $M_H$ represents the attention map in direction $H$, with size $H \times 1 \times 1$. $M_W$ represents the attention map in direction $W$, with size $1 \times W \times 1$. Similarly, $M_L$ represents the attention map in direction $L$, with size $1 \times 1 \times L$. $M_H$ and $M_W$ are obtained by considering the vertical and horizontal directions of the spatial information, so as to capture long-distance dependencies. Specifically, $F$ is passed through global average pooling to obtain $F_H \in \mathbb{R}^{H \times 1 \times 1}$ in the vertical direction and $F_W \in \mathbb{R}^{1 \times W \times 1}$ in the horizontal direction, and the two results are cascaded. In order to obtain the long-distance dependence in the vertical and horizontal directions, the cascaded result is fed into a unit convolution layer, a batch normalization (BN) layer, and a nonlinear activation layer. The activation function of the nonlinear activation layer is h_swish [58]; this activation function has relatively few parameters and gives the neural network a richer representation ability. The h_swish function can be expressed as
$f(x) = x \cdot \mathrm{sigmoid}(\alpha x),$
where $\alpha$ is a trainable parameter. Finally, the obtained result is split and convolved to obtain the vertical attention map $M_H$ and the horizontal attention map $M_W$.
Similarly, $F$ passes through the global average pooling layer to obtain $F_L \in \mathbb{R}^{1 \times 1 \times L}$, and the obtained result then passes through the unit convolution layer and the activation function layer to obtain the spectral attention map $M_L(F)$. The implementation process of 3DCAM is shown in Algorithm 1.
Algorithm 1 Details of 3DCAM.
Input: Feature $F \in \mathbb{R}^{H \times W \times L}$.
Output: Feature of 3DCAM: $F_{out} \in \mathbb{R}^{H \times W \times L}$.
Initialization: Initialize all weight parameters of the convolutional kernels.
1: Pass $F$ through the L-AvgPool, H-AvgPool, and W-AvgPool layers to generate $F_L \in \mathbb{R}^{1 \times 1 \times L}$, $F_H \in \mathbb{R}^{H \times 1 \times 1}$, and $F_W \in \mathbb{R}^{1 \times W \times 1}$, respectively;
2: Reshape $F_H$ to $1 \times H \times 1$ and cascade it with $F_W$ to generate $F_{HW}$;
3: Convolve $F_{HW}$ with the 3D unit convolution kernel and pass the result through the regularization (batch normalization) and nonlinear activation layers to generate the refined $F_{HW}$;
4: Split the refined $F_{HW}$ and convolve the two parts with the 3D unit convolution kernel to generate $F_H$ and $F_W$;
5: Normalize $F_H$ and $F_W$ with the sigmoid function to generate the attention maps $M_H(F) \in \mathbb{R}^{H \times 1 \times 1}$ and $M_W(F) \in \mathbb{R}^{1 \times W \times 1}$;
6: Convolve $F_L$ with the 3D unit convolution kernel to generate the refined $F_L$;
7: Normalize the refined $F_L$ with the sigmoid function to generate the attention map $M_L(F) \in \mathbb{R}^{1 \times 1 \times L}$;
8: Apply the attention maps $M_H(F)$, $M_W(F)$, and $M_L(F)$ to the input feature $F$ to obtain $F_{out} \in \mathbb{R}^{H \times W \times L}$.
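For concreteness, the following is a minimal PyTorch-style sketch of the attention computation in Algorithm 1. It assumes unit (1 × 1 × 1) 3D convolutions, uses PyTorch's built-in Hardswish as a stand-in for the h_swish activation, and applies the three attention maps multiplicatively by broadcasting; the class and variable names are illustrative only, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CoordinationAttention3D(nn.Module):
    """Sketch of the 3DCAM attention branches (Algorithm 1)."""
    def __init__(self, channels):
        super().__init__()
        # Unit (1x1x1) 3D convolutions for the spatial and spectral branches
        self.conv_hw = nn.Conv3d(channels, channels, kernel_size=1)
        self.bn_hw = nn.BatchNorm3d(channels)
        self.act = nn.Hardswish()          # stand-in for h_swish
        self.conv_h = nn.Conv3d(channels, channels, kernel_size=1)
        self.conv_w = nn.Conv3d(channels, channels, kernel_size=1)
        self.conv_l = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, f):
        # f: (N, C, H, W, L) feature cube
        n, c, h, w, l = f.shape
        # Global average pooling along each pair of axes
        f_h = f.mean(dim=(3, 4), keepdim=True)                # (N, C, H, 1, 1)
        f_w = f.mean(dim=(2, 4), keepdim=True)                # (N, C, 1, W, 1)
        f_l = f.mean(dim=(2, 3), keepdim=True)                # (N, C, 1, 1, L)
        # Cascade the two spatial descriptors along one axis
        f_hw = torch.cat([f_h, f_w.permute(0, 1, 3, 2, 4)], dim=2)  # (N, C, H+W, 1, 1)
        f_hw = self.act(self.bn_hw(self.conv_hw(f_hw)))
        # Split back into vertical and horizontal parts
        f_h2, f_w2 = torch.split(f_hw, [h, w], dim=2)
        m_h = torch.sigmoid(self.conv_h(f_h2))                         # (N, C, H, 1, 1)
        m_w = torch.sigmoid(self.conv_w(f_w2.permute(0, 1, 3, 2, 4)))  # (N, C, 1, W, 1)
        m_l = torch.sigmoid(self.conv_l(f_l))                          # (N, C, 1, 1, L)
        # Apply the three attention maps to the input via broadcasting
        return f * m_h * m_w * m_l
```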

2.3. Convolution Module

CNNs have strong feature extraction abilities. In particular, the convolution and pooling operations in a CNN can be used to extract deeper information from the input data. Owing to the data properties of HSIs, applying a three-dimensional convolutional neural network (3DCNN) preserves the correlation between data pixels, so that structural information is not lost. In addition, the effective extraction of spatial and spectral information in hyperspectral images is still the focus of hyperspectral image classification.
In order to effectively extract the spatial–spectral features of HSIs, a convolution module based on space and spectrum is proposed in this paper. Inspired by Inception V3 [58], the convolution layers use small convolution kernels, which can not only learn the spatial–spectral features of HSIs, but also effectively reduce the number of parameters. The structure of the convolution module based on space and spectrum is shown in Figure 3.
As can be seen from Figure 3, the input $X_i$ consists of $c$ feature maps of size $n \times n \times b$. $X_o$ is the output of $X_i$ after multilayer convolution, which can be expressed as
$X_o = F(X_i),$
where $F$ is a nonlinear composite function. Specifically, the module consists of three layers, and each layer is composed of a convolution, batch normalization (BN), and a nonlinear activation function (ReLU). The convolution kernel size of each convolution layer is 1 × 1 × 3. The ReLU function increases the nonlinearity between the layers of the neural network, enabling it to complete complex tasks; it is defined as
$g_{\mathrm{activate}}(x) = \begin{cases} x, & x > 0 \\ 0, & x \le 0, \end{cases}$
where $x$ represents the input of the nonlinear activation function, and $g_{\mathrm{activate}}$ represents the nonlinear activation function.
In addition, in order to accelerate convergence, a BN layer is added before the ReLU to normalize the data, which alleviates the problem of gradient dispersion to a certain extent [59]. The normalization formula is as follows:
$\hat{x}^{(i)} = \dfrac{x^{(i)} - E[x^{(i)}]}{\sqrt{\mathrm{Var}[x^{(i)}]}},$
where $E[x^{(i)}]$ represents the mean of the input values of each neuron, and $\mathrm{Var}[x^{(i)}]$ represents the variance of the input values of each neuron.
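As a rough sketch of this module (not the authors' exact configuration), each layer can be assembled as a 1 × 1 × 3 3D convolution followed by BN and ReLU; the constant channel count and the spectral padding below are assumptions.

```python
import torch.nn as nn

def conv_module(channels, num_layers=3):
    """Sketch of the spectral-spatial convolution module: num_layers blocks,
    each a 1x1x3 3D convolution followed by batch normalization and ReLU."""
    layers = []
    for _ in range(num_layers):
        layers += [
            nn.Conv3d(channels, channels, kernel_size=(1, 1, 3), padding=(0, 0, 1)),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        ]
    return nn.Sequential(*layers)
```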

2.4. Linear Module

In the task of hyperspectral image classification, extracting as much feature information as possible is the key to improving classification performance. Inspired by the ghost module [60], this paper adopts a linear module. On the basis of the features output after fusing the 3DCAM and the convolution module, the linear module generates feature maps containing more information.
The structure of the linear module is shown in Figure 4. The input $y_i$ is linearly convolved to obtain $y_m$, and the obtained feature map $y_m$ is then cascaded with the input $y_i$ to obtain the output $y_o$. The output $y_m$ of the linear convolution is calculated as follows:
$y_m = \varphi(y_i) = v_{i,j}^{x,y,z},$
$v_{i,j}^{x,y,z} = \sum_{C} \sum_{\alpha=0}^{h_i-1} \sum_{\beta=0}^{w_i-1} \sum_{\gamma=0}^{l_i-1} K_{i,j,C}^{\alpha,\beta,\gamma}\, v_{i-1,C}^{x+\alpha,\, y+\beta,\, z+\gamma} + b_{i,j},$
where $\varphi$ is a linear convolution function, $v_{i,j}^{x,y,z}$ represents the neuron at position $(x, y, z)$ of the $j$-th feature map in the $i$-th layer, $h_i$, $w_i$, and $l_i$ represent the height, width, and spectral dimension of the convolution kernel, respectively, and $C$ indexes the feature maps of layer $i-1$. In addition, $K_{i,j,C}^{\alpha,\beta,\gamma}$ represents the weight of the $j$-th convolution kernel at $(\alpha, \beta, \gamma)$ on the $C$-th feature map of layer $i$, $v_{i-1,C}^{x+\alpha, y+\beta, z+\gamma}$ represents the value of the neuron at $(x+\alpha, y+\beta, z+\gamma)$ of the $C$-th feature map in layer $i-1$, and $b_{i,j}$ is the bias term.
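A possible reading of this module is sketched below, under the assumption that the cheap linear map φ is a depthwise 3D convolution, as in the ghost module; the kernel size and class name are illustrative.

```python
import torch
import torch.nn as nn

class LinearModule(nn.Module):
    """Sketch of the ghost-inspired linear module: a cheap depthwise 3D
    convolution generates extra feature maps that are cascaded (concatenated)
    with the input along the channel axis."""
    def __init__(self, channels, kernel_size=(1, 1, 3)):
        super().__init__()
        padding = tuple(k // 2 for k in kernel_size)
        # Depthwise convolution plays the role of the linear map phi
        self.linear_conv = nn.Conv3d(channels, channels, kernel_size,
                                     padding=padding, groups=channels)

    def forward(self, y_i):
        y_m = self.linear_conv(y_i)          # cheap "linear" feature maps
        return torch.cat([y_i, y_m], dim=1)  # output y_o = [y_i, y_m]
```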

3. Experimental Results and Analysis

In order to verify the classification performance of 3DCAMNet, a series of experiments was conducted on five datasets. All experiments were run on the same configuration, i.e., an Intel Core i9-9900K CPU, an NVIDIA GeForce RTX 2080 Ti GPU, and 32 GB of RAM. This section covers the experimental setup, comparison of results, and discussion.

3.1. Experimental Setting

3.1.1. Datasets

Five common datasets were selected, namely, Indian Pines (IP), Pavia University (UP), Kennedy Space Center (KSC), Salinas Valley (SV), and University of Houston (HT). The IP, KSC, and SV datasets were captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. The UP and HT datasets were obtained by the Reflective Optics System Imaging Spectrometer (ROSIS-3) sensor and the Compact Airborne Spectrographic Imager (CASI) sensor, respectively.
Specifically, IP has 16 feature categories with a spatial size of 145 × 145, and 200 spectral bands are used for the experiments. Compared with IP, UP has fewer feature categories, only nine, and an image size of 610 × 340; after removing 13 noisy bands, 103 bands are used in the experiments. The spatial resolution of KSC is 20 m and its spatial size is 512 × 614; after removing the water absorption bands, 176 bands are left for the experiments. SV has a spatial size of 512 × 217 and contains 16 feature categories, with 204 spectral bands available for the experiments. The last dataset, HT, has a high spatial resolution and a spatial size of 349 × 1905; it contains 114 bands covering the wavelength range of 380–1050 nm and 15 feature categories. The details of the datasets are shown in Table 1.

3.1.2. Experimental Setting

In 3DCAMNet, the batch size and the maximum number of training epochs were 16 and 200, respectively, and the Adam optimizer was used during training. The learning rate and input spatial size were 0.0005 and 9 × 9, respectively. In addition, the cross-entropy loss was used to measure the difference between the real probability distribution and the predicted probability distribution. Table 2 shows the hyperparameter settings of 3DCAMNet.
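For illustration, these settings translate into a training loop along the following lines; `model` and `train_loader` are hypothetical placeholders for the 3DCAMNet network and a data loader yielding (cube, label) batches of size 16.

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs=200, lr=0.0005, device="cuda"):
    """Minimal training loop matching the reported hyperparameters:
    Adam optimizer, learning rate 0.0005, cross-entropy loss, 200 epochs."""
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for cubes, labels in train_loader:
            cubes, labels = cubes.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(cubes), labels)
            loss.backward()
            optimizer.step()
```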

3.1.3. Evaluation Index

Three evaluation indicators were adopted in the experiments, namely, overall accuracy (OA), average accuracy (AA), and the Kappa coefficient (Kappa) [61]; all three are dimensionless. The confusion matrix $H = (a_{i,j})_{n \times n}$ is constructed from the real category information of the original pixels and the predicted category information, where $n$ is the number of categories, and $a_{i,j}$ is the number of samples of category $j$ that are classified as category $i$. Assuming that the total number of samples of the HSI is $M$, OA is the ratio of the number of correctly classified samples to the total number of samples:
$\mathrm{OA} = \dfrac{\sum_{i=1}^{n} a_{i,i}}{M} \times 100\%,$
where $a_{i,i}$ are the correctly classified elements (the diagonal) of the confusion matrix. Similarly, AA is the average of the per-class classification accuracies:
$\mathrm{AA} = \dfrac{1}{n} \sum_{i=1}^{n} \dfrac{a_{i,i}}{\sum_{j=1}^{n} a_{i,j}} \times 100\%.$
The Kappa coefficient is another performance evaluation index, calculated as follows:
$\mathrm{Kappa} = \dfrac{\sum_{i=1}^{n} a_{i,i} - \sum_{i=1}^{n} \frac{a_{i,\cdot}\, a_{\cdot,i}}{M}}{M - \sum_{i=1}^{n} \frac{a_{i,\cdot}\, a_{\cdot,i}}{M}},$
where $a_{i,\cdot}$ and $a_{\cdot,i}$ denote the sum of the elements in row $i$ and the sum of the elements in column $i$ of the confusion matrix $H$, respectively.
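A small NumPy helper illustrating these three indicators, computed directly from a confusion matrix following the formulas above; the function name is arbitrary.

```python
import numpy as np

def evaluate(conf):
    """Compute OA, AA, and Kappa (in %) from an n x n confusion matrix H = (a_ij)."""
    conf = np.asarray(conf, dtype=float)
    m = conf.sum()                                  # total number of samples M
    diag = np.diag(conf)                            # correctly classified samples a_ii
    oa = diag.sum() / m                             # overall accuracy
    aa = np.mean(diag / conf.sum(axis=1))           # mean of a_ii / sum_j a_ij
    chance = np.sum(conf.sum(axis=1) * conf.sum(axis=0)) / m  # sum_i a_i. * a_.i / M
    kappa = (diag.sum() - chance) / (m - chance)
    return 100 * oa, 100 * aa, 100 * kappa
```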

3.2. Experimental Results

In this section, the proposed 3DCAMNet is compared with other advanced classification methods, including SVM [17], SSRN [52], PyResNet [40], DBMA [53], DBDA [55], Hybrid-SN [38], and A2S2K-ResNet [57]. In the experiments, the training proportions of the IP, UP, KSC, SV, and HT datasets were 3%, 0.5%, 5%, 0.5%, and 5%, respectively; a sketch of this per-class sampling procedure is given below. In addition, for a fair comparison, the input spatial size of all methods was 9 × 9, and the final experimental results are the average of 30 runs.
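The per-class random split can be implemented roughly as follows; the exact sampling scheme (seeding, minimum per-class count) is an assumption, not specified in the paper.

```python
import numpy as np

def stratified_split(labels, train_ratio, seed=0):
    """Randomly pick train_ratio of the labelled pixels of each class
    (e.g. 0.03 for IP, 0.005 for UP) for training; the rest are for testing."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(train_ratio * idx.size)))
        train_idx.append(idx[:n_train])
        test_idx.append(idx[n_train:])
    return np.concatenate(train_idx), np.concatenate(test_idx)
```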
SVM is a classification method based on the radial basis function (RBF) kernel. SSRN designs spatial and spectral residual modules to extract spatial–spectral information from the neighborhood blocks of the input three-dimensional cube data. PyResNet gradually increases the feature dimension of each layer through the residual method, so as to obtain more location information. In order to further improve the classification performance, DBMA and DBDA design spectral and spatial branches to extract the spectral–spatial features of HSIs, and use attention mechanisms to emphasize the channel features and spatial features in the two branches, respectively. Hybrid-SN verifies the effectiveness of a hybrid spectral CNN, whereby spectral–spatial features are first extracted through a 3DCNN, and spatial features are then extracted through a 2DCNN. A2S2K-ResNet designs an adaptive kernel attention module, which not only automatically adjusts the receptive fields (RFs) of the network, but also jointly extracts spectral–spatial features, so as to enhance the robustness of hyperspectral image classification. Unlike the attention mechanisms in the above methods, the 3D coordination attention mechanism proposed in this paper captures the long-distance dependence in the vertical and horizontal directions as well as the importance of the spectral bands. Similarly, in order to further extract spectral and spatial features with more discriminative power, the 3DCNN and the linear module are used to fully extract joint spectral–spatial features, so as to improve the classification performance.
The classification accuracies of all methods on the IP, UP, KSC, SV, and HT datasets are shown in Table A1, Table A2, Table A3, Table A4 and Table A5, respectively. It can be seen that, on the five datasets, the proposed method not only obtained the best OA, AA, and Kappa compared with the other methods, but also had an advantage in the classification accuracy of almost every class. Specifically, due to the complex distribution of ground objects in the IP dataset, the classification accuracy of all methods on this dataset was relatively low, but the proposed method obtained better accuracy not only for the categories that were easy to classify, but also for the categories that were difficult to classify, such as Class 2, Class 4, and Class 9. Similarly, on the UP dataset, the proposed method has clear advantages over the other methods in terms of OA, AA, and Kappa as well as the per-class accuracies. Compared with the IP dataset, the UP dataset has fewer feature categories, and all methods exhibited better classification results, but the method in this paper obtained the highest classification accuracy. The KSC dataset contains 13 feature categories, which are spatially scattered; it can be seen from Table A3 that all classification methods obtained satisfactory results, but the proposed method obtained the best classification accuracy. In addition, because the sample distribution of the SV dataset is relatively balanced and the ground object distribution is relatively regular, the classification accuracy of all methods was high. On the contrary, the HT images were collected over the University of Houston campus, with a complex distribution and many categories, but the method proposed in this paper could still achieve high-precision classification.
In addition, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 show the classification visualization results of all methods, including the false-color composite image and the classification map of each method. Because traditional classification methods such as SVM cannot effectively extract spatial–spectral features, their classification maps were poor, rough, and noisy. The deep network methods based on ResNet, including SSRN and PyResNet, obtained good classification results, but there was still a small amount of noise. In addition, DBMA, DBDA, and A2S2K-ResNet all added an attention mechanism to the network, which yielded better classification visualization results, but there were still many classification errors. In contrast, the classification maps obtained by the method proposed in this paper were smoother and closer to the real ground-truth map. This fully verifies the superiority of the proposed method.
In conclusion, analysis from multiple angles verified that the proposed method has more advantages than the other methods. First, among all methods, the proposed method had the highest overall accuracy (OA), average accuracy (AA), and Kappa coefficient (Kappa). In addition, the method proposed in this paper could not only achieve high classification accuracy for the categories that were easy to classify, but also had strong discrimination ability for the categories that were difficult to classify. Second, among the classification visualization results of all methods, the method in this paper obtained smoother results that were closer to the false-color composite image.

4. Discussion

In this section, we discuss in detail the modules and parameters that affect the classification performance of the proposed method, including the impact of different attention mechanisms on OA, the impact of different input spatial sizes and different training sample ratios on OA, ablation experiments on the different modules of 3DCAMNet, and a comparison of the running time and number of parameters of different methods on the IP dataset.

4.1. Effects of Different Attention Mechanisms on OA

In order to verify the effectiveness of 3DCAM, we consider two other typical attention mechanisms for comparison, SE and CBAM, as shown in Figure 10. The experimental results of the three attention mechanisms are shown in Table 3. The results show that the classification accuracy of 3DCAM on the five datasets was better than that of SE and CBAM, and CBAM was better than SE on the whole. The reason is that SE attention only emphasizes the differences in importance between channels, without considering spatial differences. Although CBAM considers both channel dependence and spatial dependence, it does not fully exploit spatial location information. In contrast, for hyperspectral data, 3DCAM fully considers the positional relationships in the horizontal and vertical directions of space, obtains the long-distance dependence, and also considers the differences along the spectral dimension. Therefore, our proposed 3DCAM can better mark important spectral bands and spatial location information.

4.2. Effects of Different Input Space Sizes and Different Training Sample Ratios on OA

The input spatial size n × n and the training sample proportion p are two important hyperparameters of 3DCAMNet, and their values have a great impact on the classification performance. Input spatial sizes of 5 × 5, 7 × 7, 9 × 9, 11 × 11, and 13 × 13 were used to explore the optimal spatial size for the 3DCAMNet method. The training sample proportion p refers to the proportion of labelled samples used for training. The value of p for the IP, KSC, and HT datasets was chosen from {1.0%, 2.0%, 3.0%, 4.0%, 5.0%}, while for the UP and SV datasets it was chosen from {0.5%, 1.0%, 1.5%, 2.0%, 2.5%}. Figure 11 shows the OA of 3DCAMNet with different input sizes n and different training sample ratios p for all datasets. As can be seen from Figure 11, when n = 5 and the proportion of training samples of the IP, UP, KSC, SV, and HT datasets was 1.0%, 0.5%, 1.0%, 0.5%, and 1.0%, respectively, the OA obtained by the proposed method was the lowest. With the increase in the proportion of training samples, OA increased slowly. In addition, when n = 9 and the number of training samples was the highest, the classification performance was the best.

4.3. Comparison of Contributions of Different Modules in 3DCAMNet

In order to verify the effectiveness of the method proposed in this paper, we conducted ablation experiments on its two key modules: the linear module and 3DCAM. The experimental results are shown in Table 4. It can be seen that, when both the linear module and 3DCAM were used, the OA obtained on all datasets was the highest, which fully reflects the strong generalization ability of the proposed method. On the contrary, when neither module was used, the OA obtained on all datasets was the lowest. In addition, when either the linear module or the 3DCAM module was added to the network, the overall accuracy improved. In general, the ablation experiments show that the classification performance of the basic network was the lowest, but as the modules were gradually added, the classification performance gradually improved. The ablation experiments fully verify the effectiveness of the linear module and 3DCAM.

4.4. Comparison of Running Time and Parameters of Different Methods on IP Dataset

When the input size was 9 × 9 × 200, the comparison of the number of parameters and the running time between 3DCAMNet and other advanced methods was as shown in Table 5. It can be seen that PyResNet, which is based on space and spectrum, required the most parameters. This is because it obtains more location information by gradually increasing the feature dimension of all layers, which inevitably requires more parameters. In addition, DBDA had the longest running time of all methods. The number of parameters of the proposed method was similar to that of the other methods, and its running time was moderate. For further comparison, the OA values obtained by these methods on the IP dataset are shown in Figure 12. Combined with Table 5, it can be seen that, compared with the other methods, the number of parameters and running time of the proposed 3DCAMNet were moderate, while 3DCAMNet achieved the highest OA.

5. Conclusions

A 3DCAMNet method was proposed in this paper. It is mainly composed of three modules: a convolution module, a linear module, and 3DCAM. Firstly, the convolution module uses a 3DCNN to fully extract spatial–spectral features. Secondly, the linear module is introduced after the convolution module to extract finer features. Lastly, the designed 3DCAM can not only capture the long-distance dependence between the vertical and horizontal directions in HSI space, but also capture the differences in importance between spectral bands. The proposed 3DCAM was compared with two classical attention mechanisms, i.e., SE and CBAM, and the experimental results show that the classification method based on 3DCAM obtained better classification performance. Compared with some state-of-the-art methods, such as A2S2K-ResNet and Hybrid-SN, 3DCAMNet achieved better classification performance. The reason is that, although A2S2K-ResNet can expand the receptive field (RF) via its adaptive convolution kernel, its deep features cannot be reused. Similarly, Hybrid-SN extracts spatial and spectral features using a 2DCNN and a 3DCNN, but its classification performance was still worse than that of 3DCAMNet because of its small RF and insufficiently extracted features. In addition, in order to verify the effectiveness of the proposed method, a series of experiments was carried out on five datasets. The experimental results show that 3DCAMNet has higher classification performance and stronger robustness than other state-of-the-art methods, highlighting the effectiveness of the proposed 3DCAMNet method for hyperspectral classification. In future work, we will consider more efficient attention mechanism modules and spatial–spectral feature extraction modules.

Author Contributions

Conceptualization, C.S. and D.L.; data curation, D.L.; formal analysis, T.Z.; methodology, C.S. and D.L.; software, D.L.; validation, D.L. and C.S.; writing—original draft, D.L.; writing—review and editing, C.S. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Heilongjiang Science Foundation Project of China under Grant LH2021D022, in part by the National Natural Science Foundation of China (41701479, 62071084), and in part by the Fundamental Research Funds in Heilongjiang Provincial Universities of China under Grant 135509136.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data associated with this research are available online. The IP dataset is available for download at http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 21 November 2021). The UP dataset is available for download at http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes#Pavia_University_scene (accessed on 21 November 2021). The KSC dataset is available for download at http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 21 November 2021). The SV dataset is available for download at http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 21 November 2021). The HT dataset is available for download at http://www.grss-ieee.org/community/technical-committees/data-fusion/2013-ieee-grss-data-fusion-contest/ (accessed on 21 November 2021).

Acknowledgments

We would like to thank the handling editor and the anonymous reviewers for their careful reading and helpful remarks.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Classification results of different methods on IP dataset (%).
Class | SVM [17] | SSRN [52] | PyResNet [40] | DBMA [53] | DBDA [55] | Hybrid-SN [38] | A2S2K-ResNet [57] | Proposed
1 | 36.62 | 87.22 | 26.67 | 82.90 | 87.99 | 73.05 | 89.41 | 98.78
2 | 55.49 | 85.71 | 80.92 | 82.84 | 90.61 | 66.14 | 90.49 | 96.38
3 | 62.33 | 90.65 | 81.24 | 79.53 | 92.07 | 77.63 | 92.32 | 94.80
4 | 42.54 | 83.86 | 62.17 | 87.42 | 93.96 | 62.61 | 93.78 | 94.98
5 | 85.05 | 98.41 | 91.75 | 96.36 | 99.02 | 89.56 | 97.83 | 98.89
6 | 83.32 | 98.29 | 94.26 | 96.62 | 96.82 | 92.23 | 97.20 | 98.11
7 | 59.87 | 82.98 | 19.75 | 49.28 | 67.63 | 44.90 | 88.70 | 70.53
8 | 89.67 | 97.81 | 100.00 | 99.14 | 98.94 | 90.65 | 98.81 | 100.00
9 | 39.28 | 67.03 | 69.09 | 54.77 | 78.24 | 37.79 | 64.56 | 92.71
10 | 92.32 | 88.23 | 82.96 | 83.92 | 84.03 | 70.23 | 88.59 | 91.24
11 | 64.73 | 89.39 | 89.59 | 90.97 | 93.92 | 77.38 | 89.76 | 96.67
12 | 50.55 | 86.98 | 59.82 | 80.15 | 88.91 | 67.60 | 92.48 | 92.01
13 | 86.74 | 99.06 | 80.07 | 97.46 | 97.81 | 82.10 | 96.89 | 99.58
14 | 88.67 | 97.16 | 96.31 | 95.68 | 97.63 | 93.12 | 96.02 | 97.51
15 | 61.82 | 82.01 | 86.36 | 82.46 | 91.48 | 76.21 | 91.34 | 94.31
16 | 98.66 | 96.30 | 90.37 | 94.50 | 89.81 | 45.12 | 93.31 | 97.29
OA (%) | 68.76 | 90.24 | 85.65 | 86.59 | 92.44 | 77.61 | 92.10 | 95.81
AA (%) | 66.73 | 89.44 | 75.67 | 84.63 | 90.55 | 71.65 | 91.36 | 94.61
Kappa (%) | 63.98 | 88.86 | 83.6 | 84.79 | 91.38 | 74.35 | 90.97 | 95.22
Table A2. Classification results of different methods on UP dataset (%).
Class | SVM [17] | SSRN [52] | PyResNet [40] | DBMA [53] | DBDA [55] | Hybrid-SN [38] | A2S2K-ResNet [57] | Proposed
1 | 81.26 | 94.60 | 88.11 | 92.22 | 94.24 | 74.67 | 83.81 | 95.57
2 | 84.53 | 98.15 | 97.77 | 96.34 | 99.16 | 92.08 | 92.72 | 99.38
3 | 56.56 | 74.38 | 30.97 | 83.79 | 91.03 | 63.00 | 72.97 | 92.63
4 | 94.34 | 96.11 | 84.79 | 95.92 | 97.01 | 83.44 | 98.12 | 97.77
5 | 95.38 | 98.94 | 96.64 | 98.85 | 98.83 | 88.95 | 98.68 | 98.74
6 | 80.66 | 92.09 | 54.3 | 91.58 | 98.27 | 83.42 | 86.51 | 98.45
7 | 94.13 | 69.86 | 38.3 | 88.04 | 98.48 | 68.86 | 88.07 | 99.66
8 | 71.12 | 84.54 | 75.5 | 81.64 | 88.38 | 56.96 | 74.11 | 87.19
9 | 99.94 | 88.86 | 91.15 | 93.22 | 97.98 | 65.31 | 90.97 | 98.87
OA (%) | 82.06 | 92.76 | 83.01 | 92.32 | 96.52 | 81.33 | 87.45 | 97.01
AA (%) | 79.22 | 88.61 | 73.06 | 91.29 | 95.93 | 75.19 | 87.33 | 96.47
Kappa (%) | 75.44 | 90.43 | 76.9 | 89.79 | 95.37 | 75.01 | 83.16 | 96.02
Table A3. Classification results of different methods on KSC dataset (%).
Class | SVM [17] | SSRN [52] | PyResNet [40] | DBMA [53] | DBDA [55] | Hybrid-SN [38] | A2S2K-ResNet [57] | Proposed
1 | 92.43 | 97.88 | 94.10 | 100.00 | 100.00 | 99.41 | 100.00 | 99.82
2 | 87.14 | 92.67 | 85.59 | 93.38 | 96.17 | 93.25 | 99.13 | 97.61
3 | 72.47 | 86.11 | 81.15 | 80.70 | 91.28 | 86.45 | 87.81 | 98.94
4 | 54.45 | 86.50 | 77.23 | 68.91 | 83.62 | 93.34 | 98.53 | 92.40
5 | 64.11 | 74.79 | 74.97 | 74.41 | 79.30 | 93.86 | 92.36 | 95.94
6 | 65.23 | 99.05 | 78.77 | 95.51 | 96.11 | 95.72 | 99.92 | 99.53
7 | 75.50 | 84.92 | 84.74 | 85.81 | 94.89 | 94.94 | 95.85 | 96.97
8 | 87.33 | 98.48 | 95.22 | 94.93 | 98.90 | 97.75 | 99.41 | 99.97
9 | 87.94 | 98.47 | 93.94 | 96.81 | 99.98 | 98.94 | 99.76 | 99.98
10 | 97.01 | 99.21 | 98.97 | 99.27 | 100.00 | 99.97 | 100.00 | 100.00
11 | 96.03 | 99.23 | 99.48 | 99.59 | 99.16 | 99.14 | 100.00 | 98.86
12 | 93.76 | 98.46 | 96.14 | 97.47 | 99.30 | 99.13 | 99.64 | 99.48
13 | 99.72 | 99.89 | 99.73 | 100.00 | 100.00 | 99.61 | 100.00 | 100.00
OA (%) | 87.96 | 95.42 | 91.49 | 94.15 | 97.33 | 97.32 | 98.34 | 99.01
AA (%) | 82.55 | 93.51 | 89.23 | 91.29 | 95.28 | 96.27 | 97.87 | 98.42
Kappa (%) | 86.59 | 94.91 | 90.52 | 93.48 | 97.02 | 97.02 | 98.84 | 98.88
Table A4. Classification results of different methods on SV dataset (%).
SV | SVM [17] | SSRN [52] | PyResNet [40] | DBMA [53] | DBDA [55] | Hybrid-SN [38] | A2S2K-ResNet [57] | Proposed
1 | 99.42 | 96.56 | 98.49 | 100.00 | 99.62 | 96.70 | 99.84 | 100.00
2 | 98.79 | 99.72 | 99.69 | 99.98 | 99.25 | 97.11 | 99.99 | 99.95
3 | 87.98 | 93.64 | 96.37 | 97.43 | 96.85 | 95.83 | 94.98 | 97.80
4 | 97.54 | 97.29 | 96.69 | 93.46 | 94.34 | 53.87 | 96.16 | 97.06
5 | 95.10 | 94.47 | 91.03 | 98.70 | 95.42 | 90.34 | 99.13 | 98.92
6 | 99.90 | 99.74 | 99.61 | 98.86 | 99.99 | 97.03 | 99.73 | 99.96
7 | 95.59 | 98.86 | 98.69 | 97.98 | 98.58 | 98.35 | 99.72 | 99.89
8 | 71.66 | 88.73 | 83.09 | 91.98 | 86.80 | 85.17 | 90.15 | 95.84
9 | 98.08 | 99.52 | 98.86 | 98.62 | 98.99 | 97.93 | 99.67 | 99.67
10 | 85.39 | 97.05 | 97.55 | 96.95 | 97.62 | 94.65 | 98.52 | 99.10
11 | 86.98 | 94.69 | 95.31 | 92.83 | 94.28 | 59.18 | 95.21 | 96.43
12 | 94.20 | 98.15 | 98.19 | 98.63 | 97.95 | 93.87 | 97.64 | 99.79
13 | 93.43 | 97.86 | 75.11 | 98.51 | 99.45 | 54.35 | 97.10 | 99.93
14 | 92.03 | 93.24 | 87.30 | 94.28 | 95.29 | 59.06 | 93.29 | 96.53
15 | 71.02 | 76.41 | 81.15 | 87.54 | 81.18 | 83.34 | 84.79 | 92.20
16 | 97.82 | 99.20 | 98.55 | 99.55 | 99.71 | 85.75 | 99.77 | 100.0
OA (%) | 86.98 | 91.12 | 91.52 | 95.22 | 92.32 | 89.10 | 94.59 | 97.48
AA (%) | 91.56 | 95.33 | 93.48 | 96.58 | 95.96 | 83.91 | 96.61 | 98.32
Kappa (%) | 85.45 | 90.15 | 90.54 | 94.67 | 91.44 | 87.85 | 93.98 | 97.19
Table A5. Classification results of different methods on HT dataset (%).
HT | SVM [17] | SSRN [52] | PyResNet [40] | DBMA [53] | DBDA [55] | Hybrid-SN [38] | A2S2K-ResNet [57] | Proposed
1 | 95.99 | 94.63 | 89.05 | 93.13 | 95.55 | 78.34 | 98.51 | 97.37
2 | 96.97 | 98.82 | 95.92 | 97.10 | 98.14 | 83.13 | 99.38 | 99.39
3 | 99.56 | 99.95 | 99.95 | 99.91 | 99.97 | 97.15 | 99.98 | 100.00
4 | 97.94 | 99.00 | 95.86 | 98.34 | 98.10 | 84.55 | 97.81 | 99.67
5 | 95.58 | 97.07 | 98.66 | 98.24 | 99.88 | 85.82 | 99.42 | 99.13
6 | 99.54 | 99.93 | 94.24 | 99.20 | 99.66 | 87.66 | 97.89 | 99.86
7 | 88.55 | 95.07 | 94.99 | 93.29 | 95.52 | 68.43 | 97.35 | 95.32
8 | 84.14 | 90.59 | 90.42 | 94.12 | 98.04 | 66.54 | 99.05 | 99.36
9 | 82.56 | 94.90 | 83.48 | 93.20 | 95.22 | 61.04 | 94.08 | 96.06
10 | 86.82 | 92.43 | 78.95 | 91.38 | 92.78 | 65.34 | 94.81 | 96.11
11 | 87.94 | 98.71 | 87.87 | 95.24 | 96.27 | 65.41 | 97.25 | 97.99
12 | 84.29 | 95.34 | 88.31 | 93.20 | 95.28 | 62.86 | 96.82 | 97.21
13 | 76.40 | 96.93 | 94.41 | 91.53 | 94.69 | 79.14 | 97.09 | 90.59
14 | 97.29 | 99.18 | 97.95 | 98.76 | 99.92 | 79.85 | 97.65 | 99.24
15 | 99.37 | 98.68 | 98.60 | 97.99 | 98.06 | 80.77 | 99.16 | 99.02
OA (%) | 90.93 | 96.02 | 90.67 | 94.88 | 96.69 | 73.31 | 97.58 | 97.69
AA (%) | 91.53 | 96.75 | 92.58 | 95.64 | 97.14 | 76.40 | 97.75 | 97.75
Kappa (%) | 90.19 | 95.70 | 89.92 | 94.46 | 96.42 | 71.16 | 97.38 | 97.50

References

  1. Li, C.; Ma, Y.; Mei, X.; Liu, C.; Ma, J. Hyperspectral image classification with robust sparse representation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 641–645. [Google Scholar] [CrossRef]
  2. Yu, C.; Wang, Y.; Song, M.; Chang, C.-I. Class signature-constrained background-suppressed approach to band selection for classification of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 14–31. [Google Scholar] [CrossRef]
  3. Yu, H.; Gao, L.; Li, W.; Du, Q.; Zhang, B. Locality sensitive discriminant analysis for group sparse representation-based hyperspectral imagery classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1358–1362. [Google Scholar] [CrossRef]
  4. Yuen, P.W.; Richardson, M. An introduction to hyperspectral imaging and its application for security, surveillance and target acquisition. Imaging Sci. J. 2010, 58, 241–253. [Google Scholar] [CrossRef]
  5. Li, H.; Song, Y.; Chen, C.L.P. Hyperspectral image classification based on multiscale spatial information fusion. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5302–5312. [Google Scholar] [CrossRef]
  6. Gevaert, C.M.; Suomalainen, J.; Tang, J.; Kooistra, L. Generation of spectral–temporal response surfaces by combining multispectral satellite and hyperspectral UAV imagery for precision agriculture applications. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 3140–3146. [Google Scholar] [CrossRef]
  7. Van der Meer, F. Analysis of spectral absorption features in hyperspectral imagery. Int. J. Appl. Earth Observ. Geoinf. 2004, 5, 55–68. [Google Scholar] [CrossRef]
  8. Makki, I.; Younes, R.; Francis, C.; Bianchi, T.; Zucchetti, M. A survey of landmine detection using hyperspectral imaging. ISPRS J. Photogramm. Remote Sens. 2017, 124, 40–53. [Google Scholar] [CrossRef]
  9. Kang, X.; Li, S.; Benediktsson, J.A. Spectral–spatial hyperspectral image classification with edge-preserving filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2666–2677. [Google Scholar] [CrossRef]
  10. Fauvel, M.; Benediktsson, J.A.; Chanussot, J.; Sveinsson, J.R. Spectral and spatial classification of hyperspectral data using SVMs and morphological profiles. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3804–3814. [Google Scholar] [CrossRef] [Green Version]
  11. Li, J.; Huang, X.; Gamba, P.; Bioucas-Dias, J.M.; Zhang, L.; Benediktsson, J.A.; Plaza, A. Multiple feature learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1592–1606. [Google Scholar] [CrossRef] [Green Version]
  12. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification via kernel sparse representation. IEEE Trans. Geosci. Remote Sens. 2013, 51, 217–231. [Google Scholar] [CrossRef] [Green Version]
  13. Li, J.; Khodadadzadeh, M.; Plaza, A.; Jia, X.; Bioucas-Dias, J.M. A discontinuity preserving relaxation scheme for spectral—Spatial hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 625–639. [Google Scholar] [CrossRef]
  14. Yu, C.; Xue, B.; Song, M.; Wang, Y.; Li, S.; Chang, C.-I. Iterative target-constrained interference-minimized classifier for hyperspectral classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 1095–1117. [Google Scholar] [CrossRef]
  15. Jia, X.; Kuo, B.-C.; Crawford, M.M. Feature mining for hyperspectral image classification. Proc. IEEE 2013, 101, 676–697. [Google Scholar] [CrossRef]
  16. Ghamisi, P.; Benediktsson, J.A.; Ulfarsson, M.O. Spectral spatial classification of hyperspectral images based on hidden Markov random fields. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2565–2574. [Google Scholar] [CrossRef] [Green Version]
  17. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  18. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef] [Green Version]
  19. Audebert, N.; le Saux, B.; Lefevre, S. Deep learning for classification of hyperspectral data: A comparative review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 159–173. [Google Scholar] [CrossRef] [Green Version]
  20. Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature Extraction for Hyperspectral Imagery: The Evolution from Shallow to Deep. arXiv 2020, arXiv:2003.02822. [Google Scholar]
  21. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef] [Green Version]
  22. Lu, X.; Zheng, X.; Yuan, Y. Remote sensing scene classification by unsupervised representation learning. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5148–5157. [Google Scholar] [CrossRef]
  23. Ma, X.; Wang, H.; Geng, J. Spectral-spatial classification of hyperspectral image based on deep auto-encoder. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 4073–4085. [Google Scholar] [CrossRef]
  24. Song, W.; Li, S.; Fang, L.; Lu, T. Hyperspectral image classification with deep feature fusion network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184. [Google Scholar] [CrossRef]
  25. Huang, H.; Xu, K. Combing triple-part features of convolutional neural networks for scene classification in remote sensing. Remote Sens. 2019, 11, 1687. [Google Scholar] [CrossRef] [Green Version]
  26. Chen, Y.; Zhu, K.; Zhu, L.; He, X.; Ghamisi, P.; Benediktsson, J.A. Automatic design of convolutional neural network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7048–7066. [Google Scholar] [CrossRef]
  27. Huang, H.; Duan, Y.; He, H.; Shi, G. Local linear spatial–spectral probabilistic distribution for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1259–1272. [Google Scholar] [CrossRef]
  28. Li, Y.; Xie, W.; Li, H. Hyperspectral image reconstruction by deep convolutional neural network for classification. Pattern Recognit. 2017, 63, 371–383. [Google Scholar] [CrossRef]
  29. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef] [Green Version]
  30. Yu, C.; Zhao, M.; Song, M.; Wang, Y.; Li, F.; Han, R.; Chang, C.-I. Hyperspectral image classification method based on CNN architecture embedding with hashing semantic feature. IEEE J. Sel. Top. Appl. Earth Observ. 2019, 12, 1866–1881. [Google Scholar] [CrossRef]
  31. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef] [Green Version]
  32. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  33. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962. [Google Scholar]
  34. Feng, J.; Chen, J.; Liu, L.; Cao, X.; Zhang, X.; Jiao, L.; Yu, T. CNN-based multilayer spatial–spectral feature fusion and sample augmentation with local and nonlocal constraints for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 1299–1313. [Google Scholar] [CrossRef]
  35. Gong, Z.; Zhong, P.; Yu, Y.; Hu, W.; Li, S. A CNN with multiscale convolution and diversified metric for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3599–3618. [Google Scholar] [CrossRef]
  36. Xu, Y.; Zhang, L.; Du, B.; Zhang, F. Spectral-spatial unified networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5893–5909. [Google Scholar] [CrossRef]
  37. Mei, S.; Ji, J.; Geng, Y.; Zhang, Z.; Li, X.; Du, Q. Unsupervised spatial-spectral feature learning by 3D convolutional autoencoder for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6808–6820. [Google Scholar] [CrossRef]
  38. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef] [Green Version]
  39. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063. [Google Scholar] [CrossRef]
  40. Paoletti, M.E.; Haut, J.M.; Fernandez-Beltran, R.; Plaza, J.; Plaza, A.J.; Pla, F. Deep pyramidal residual networks for spectral-spatial hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 740–754. [Google Scholar] [CrossRef]
  41. Chen, L.; Zhang, H.; Xiao, J.; Nie, L.; Shao, J.; Liu, W.; Chua, T.-S. SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5659–5667. [Google Scholar]
  42. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  43. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the 15th European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
  44. Hu, Y.; Li, J.; Huang, Y.; Gao, X. Channel-wise and spatial feature modulation network for single image super-resolution. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 3911–3927. [Google Scholar] [CrossRef] [Green Version]
  45. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  46. Wang, L.; Peng, J.; Sun, W. Spatial—Spectral squeeze-and-excitation residual network for hyperspectral image classification. Remote Sens. 2019, 11, 884. [Google Scholar] [CrossRef] [Green Version]
  47. Mei, X.; Pan, E.; Ma, Y.; Dai, X.; Huang, J.; Fan, F.; Du, Q.; Zheng, H.; Ma, J. Spectral-spatial attention networks for hyperspectral image classification. Remote Sens. 2019, 11, 963. [Google Scholar] [CrossRef] [Green Version]
  48. Roy, S.K.; Dubey, S.R.; Chatterjee, S.; Chaudhuri, B.B. FuSENet: Fused squeeze-and-excitation network for spectral-spatial hyperspectral image classification. IET Image Process. 2020, 14, 1653–1661. [Google Scholar] [CrossRef]
  49. Ding, L.; Tang, H.; Bruzzone, L. LANet: Local attention embedding to improve the semantic segmentation of remote sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 59, 426–435. [Google Scholar] [CrossRef]
  50. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the 2018 European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  51. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858. [Google Scholar] [CrossRef]
  52. Zhu, M.; Jiao, L.; Liu, F.; Yang, S.; Wang, J. Residual spectral-spatial attention network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 449–462. [Google Scholar] [CrossRef]
  53. Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-branch multiattention mechanism network for hyperspectral image classification. Remote Sens. 2019, 11, 1307. [Google Scholar] [CrossRef] [Green Version]
  54. Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154. [Google Scholar]
  55. Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network. Remote Sens. 2020, 12, 582. Available online: https://www.mdpi.com/2072-4292/12/3/582 (accessed on 21 November 2021). [CrossRef] [Green Version]
  56. Cui, Y.; Yu, Z.; Han, J.; Gao, S.; Wang, L. Dual-Triple Attention Network for Hyperspectral Image Classification Using Limited Training Samples. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  57. Roy, S.K.; Manna, S.; Song, T.; Bruzzone, L. Attention-Based Adaptive Spectral–Spatial Kernel ResNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7831–7843. [Google Scholar] [CrossRef]
  58. Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
59. Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
  60. Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. GhostNet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 1580–1589. [Google Scholar]
  61. Pontius, R.G.; Millones, M. Death to Kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429. [Google Scholar] [CrossRef]
Figure 1. The overall framework of the proposed method.
Figure 2. Block diagram of 3DCAM module.
Figure 3. Convolution module structure diagram.
Figure 4. Structure diagram of linear module.
Figure 5. Classification visualization results for IP dataset obtained using eight methods: (a) ground-truth map, (b) SVM, (c) SSRN, (d) PyResNet, (e) DBMA, (f) DBDA, (g) Hybrid-SN, (h) A2S2K-ResNet, and (i) proposed method.
Figure 6. Classification visualization results for KSC dataset obtained using eight methods: (a) ground-truth map, (b) SVM, (c) SSRN, (d) PyResNet, (e) DBMA, (f) DBDA, (g) Hybrid-SN, (h) A2S2K-ResNet, and (i) proposed method.
Figure 7. Classification visualization results for UP dataset obtained using eight methods: (a) ground-truth map, (b) SVM, (c) SSRN, (d) PyResNet, (e) DBMA, (f) DBDA, (g) Hybrid-SN, (h) A2S2K-ResNet, and (i) proposed method.
Figure 8. Classification visualization results for SV dataset obtained using eight methods: (a) ground-truth map, (b) SVM, (c) SSRN, (d) PyResNet, (e) DBMA, (f) DBDA, (g) Hybrid-SN, (h) A2S2K-ResNet, and (i) proposed method.
Figure 9. Classification visualization results for the HT dataset: (a) ground-truth map, and (b) classification map of the proposed method.
Figure 10. Comparison of classification results using different attention mechanisms in the proposed method: (a) SE, (b) CBAM, and (c) 3DCAM.
Figure 11. Relationship between the training proportion and OA with different patch sizes of n × n for the proposed 3DCAMNet: (a) IP dataset, (b) UP dataset, (c) KSC dataset, (d) SV dataset, and (e) HT dataset.
Figure 12. Comparison of the OA values obtained by the method on the IP dataset.
Table 1. Experimental dataset information.
IP dataset
No. | Class | Number
1 | Alfalfa | 46
2 | Corn-notill | 1428
3 | Corn-mintill | 830
4 | Corn | 237
5 | Grass/pasture | 483
6 | Grass/trees | 730
7 | Grass/pasture-mowed | 28
8 | Hay-windrowed | 478
9 | Oats | 20
10 | Soybean-notill | 972
11 | Soybean-mintill | 2455
12 | Soybean-clean | 593
13 | Wheat | 205
14 | Woods | 1265
15 | Bldg-Grass-Tree-Drives | 386
16 | Stone-Steel-Towers | 93
Total | / | 10,249

UP dataset
No. | Class | Number
1 | Asphalt | 6631
2 | Meadows | 18,649
3 | Gravel | 2099
4 | Trees | 3064
5 | Painted metal sheets | 1345
6 | Bare Soil | 5029
7 | Bitumen | 1330
8 | Self-Blocking Bricks | 3682
9 | Shadows | 947
Total | / | 42,776

KSC dataset
No. | Class | Number
1 | Scrub | 761
2 | Willow-swamp | 243
3 | CP-hammock | 256
4 | Slash-pine | 252
5 | Oak/Broadleaf | 161
6 | Hardwood | 229
7 | Swamp | 105
8 | Graminoid-marsh | 431
9 | Spartina-marsh | 520
10 | Cattail-marsh | 404
11 | Salt-marsh | 419
12 | Mud-flats | 503
13 | Water | 927
Total | / | 5211

SV dataset
No. | Class | Number
1 | Brocoli-green-weeds_1 | 2009
2 | Brocoli-green-weeds_2 | 3726
3 | Fallow | 1976
4 | Fallow-rough-plow | 1394
5 | Fallow-smooth | 2678
6 | Stubble | 3959
7 | Celery | 3579
8 | Grapes-untrained | 11,271
9 | Soil-vinyard-develop | 6203
10 | Corn-senesced-green-weeds | 3278
11 | Lettuce-romaine-4wk | 1068
12 | Lettuce-romaine-5wk | 1927
13 | Lettuce-romaine-6wk | 916
14 | Lettuce-romaine-7wk | 1070
15 | Vinyard-untrained | 7268
16 | Vinyard-vertical-trellis | 1807
Total | / | 54,129

HT dataset
No. | Class | Number
1 | Healthy grass | 1251
2 | Stressed grass | 1254
3 | Synthetic grass | 697
4 | Trees | 1244
5 | Soil | 1242
6 | Water | 325
7 | Residential | 1268
8 | Commercial | 1244
9 | Road | 1252
10 | Highway | 1227
11 | Railway | 1235
12 | Parking Lot 1 | 1233
13 | Parking Lot 2 | 469
14 | Tennis Court | 428
15 | Running Track | 660
Total | / | 15,029
Table 2. Hyperparameter settings of 3DCAMNet.

Layer Name | Output Shape | Filter Size | Padding
Conv1 | 9 × 9 × L, 24 | 1 × 1 × 7, 24 | N
ConvBlock_1 | 9 × 9 × L, 24 | 1 × 1 × 3, 24 | Y
ConvBlock_2 | 9 × 9 × L, 24 | 1 × 1 × 3, 24 | Y
ConvBlock_3 | 9 × 9 × L, 24 | 1 × 1 × 3, 24 | Y
Avgpooling_h | 1 × 9 × 1, 24 | / | /
Avgpooling_w | 9 × 1 × 1, 24 | / | /
Avgpooling_l | 1 × 1 × L, 24 | / | /
Conv_h | 1 × 9 × 1, 24 | 1 × 1 × 1, 24 | Y
Conv_w | 9 × 1 × 1, 24 | 1 × 1 × 1, 24 | Y
Conv_l | 1 × 1 × L, 24 | 1 × 1 × 1, 24 | Y
Linear Conv | 9 × 9 × L, 48 | 1 × 1 × 1, 48 | Y
Conv2 | 9 × 9 × 1, 48 | 1 × 1 × L, 48 | N
Avgpooling | 1 × 1 × 1, 48 | / | /
Flatten (out) | class × 1 | 48 | N
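To make the layer list in Table 2 easier to follow, the sketch below assembles the three directional pooling branches (Avgpooling_h/_w/_l) and their 1 × 1 × 1 convolutions (Conv_h/_w/_l) into a 3D coordination-attention block in PyTorch. The sigmoid gating, the tensor layout, and the class name CoordAttention3D are assumptions made for a minimal sketch; this is not the authors' exact 3DCAM implementation (the module itself is defined in Figure 2).

```python
import torch
import torch.nn as nn


class CoordAttention3D(nn.Module):
    """Illustrative 3D coordination-attention block assembled from the
    pooling and 1 x 1 x 1 convolution layers listed in Table 2.
    Assumed tensor layout: (batch, channels, height, width, bands)."""

    def __init__(self, channels: int = 24):
        super().__init__()
        # Directional average pooling (Avgpooling_h / _w / _l in Table 2).
        self.pool_h = nn.AdaptiveAvgPool3d((None, 1, 1))  # keep spatial height
        self.pool_w = nn.AdaptiveAvgPool3d((1, None, 1))  # keep spatial width
        self.pool_l = nn.AdaptiveAvgPool3d((1, 1, None))  # keep spectral bands
        # Per-direction 1 x 1 x 1 convolutions (Conv_h / Conv_w / Conv_l in Table 2).
        self.conv_h = nn.Conv3d(channels, channels, kernel_size=1)
        self.conv_w = nn.Conv3d(channels, channels, kernel_size=1)
        self.conv_l = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Attention factors for the height, width, and spectral directions.
        a_h = torch.sigmoid(self.conv_h(self.pool_h(x)))  # (N, C, H, 1, 1)
        a_w = torch.sigmoid(self.conv_w(self.pool_w(x)))  # (N, C, 1, W, 1)
        a_l = torch.sigmoid(self.conv_l(self.pool_l(x)))  # (N, C, 1, 1, L)
        # Broadcast the three factors back onto the feature cube.
        return x * a_h * a_w * a_l


if __name__ == "__main__":
    # A 9 x 9 patch with 200 spectral bands and 24 feature channels.
    x = torch.randn(2, 24, 9, 9, 200)
    print(CoordAttention3D(24)(x).shape)  # torch.Size([2, 24, 9, 9, 200])
```

Multiplying the three broadcast factors onto the input cube is one common way of combining directional attention; the published module may fuse the branches differently.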
Table 3. OA comparison of classification results obtained using different attention mechanisms (%).

Dataset | SE | CBAM | 3DCAM
IP | 94.22 | 94.96 | 95.81
UP | 96.92 | 96.64 | 97.01
KSC | 98.32 | 98.51 | 99.01
SV | 96.86 | 97.10 | 97.48
HT | 97.30 | 97.43 | 97.69
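For reference, the OA values in Tables 3 and 4 are overall accuracies, i.e., the fraction of test pixels whose predicted label matches the ground truth. A minimal sketch of the computation (assuming the test labels and predictions are available as integer arrays) is:

```python
import numpy as np


def overall_accuracy(y_true, y_pred) -> float:
    """Fraction of test pixels whose predicted label matches the ground truth."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((y_true == y_pred).mean())


print(overall_accuracy([1, 2, 2, 3], [1, 2, 3, 3]))  # 0.75
```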
Table 4. OA values obtained with different module combinations in 3DCAMNet (%). A check mark (√) indicates that the module is included.

Linear Module | × | √ | × | √
3DCAM | × | × | √ | √
IP | 95.02 | 95.78 | 95.00 | 95.81
UP | 96.28 | 96.94 | 96.58 | 97.01
KSC | 98.33 | 98.80 | 98.75 | 99.01
SV | 96.20 | 96.56 | 96.87 | 97.48
HT | 96.28 | 97.14 | 97.25 | 97.69
Table 5. Comparison of running time and parameters of different methods on the IP dataset.

Network | Input Size | Parameters | Running Time (s)
SSRN [52] | 9 × 9 × 200 | 364 k | 106
PyResNet [40] | 9 × 9 × 200 | 22.4 M | 56
DBMA [53] | 9 × 9 × 200 | 609 k | 222
DBDA [55] | 9 × 9 × 200 | 382 k | 194
Hybrid-SN [38] | 9 × 9 × 200 | 373 k | 37
A2S2K-ResNet [57] | 9 × 9 × 200 | 403 k | 40
3DCAMNet | 9 × 9 × 200 | 423 k | 146
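Parameter counts such as those in the "Parameters" column of Table 5 (e.g., 423 k for 3DCAMNet) can be reproduced for any PyTorch model by summing the element counts of its trainable tensors. The module passed in below is only a stand-in for illustration, not one of the compared networks:

```python
import torch.nn as nn


def count_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters (the 'Parameters' column of Table 5)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


# Stand-in example: a single 1 x 1 x 7 spectral convolution with 24 filters.
print(count_parameters(nn.Conv3d(1, 24, kernel_size=(1, 1, 7))))  # 192
```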