Article

A Novel Transformer Network with a CNN-Enhanced Cross-Attention Mechanism for Hyperspectral Image Classification

1 School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing 210044, China
3 School of Atmospheric Sciences, Nanjing University of Information Science and Technology, Nanjing 210044, China
4 Internet of Things & Smart City Innovation Platform, Zhuhai Fudan Innovation Institute, Zhuhai 519031, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(7), 1180; https://doi.org/10.3390/rs16071180
Submission received: 23 January 2024 / Revised: 13 March 2024 / Accepted: 26 March 2024 / Published: 28 March 2024
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Recently, with the remarkable advancements of deep learning in the field of image processing, convolutional neural networks (CNNs) have garnered widespread attention from researchers in the domain of hyperspectral image (HSI) classification. Moreover, due to the high performance demonstrated by the transformer architecture in classification tasks, there has been a proliferation of neural networks combining CNNs and transformers for HSI classification. However, the majority of current methods extract spatial–spectral features from single-scale HSI patches centered on each pixel, overlooking the rich multi-scale feature information inherent to the data. To address this problem, we designed a novel transformer network with a CNN-enhanced cross-attention (TNCCA) mechanism for HSI classification. It is a dual-branch network that takes HSI input data at two different scales and extracts shallow spatial–spectral features using a multi-scale 3D and 2D hybrid convolutional neural network. After converting the feature maps into tokens, a series of 2D convolutions and dilated convolutions are employed to generate two sets of Q (queries), K (keys), and V (values) at different scales in a cross-attention module. This transformer with CNN-enhanced cross-attention explores multi-scale CNN-enhanced features and fuses them from both branches. Experimental evaluations conducted on three widely used HSI datasets under the constraint of limited sample size demonstrate the excellent classification performance of the proposed network.


1. Introduction

Hyperspectral imaging (HSI) has emerged as a powerful technique for remote sensing and the analysis of the Earth’s surface [1,2]. By capturing and analyzing a large number of narrow and contiguous spectral bands, HSI data provides rich and detailed information about the composition and properties of observed objects [3,4]. The ability to differentiate between different land cover types and detect subtle variations in materials has made HSI classification a crucial task in various fields, including agriculture [5], environmental monitoring [6], mineral exploration [7], and military reconnaissance [8]. HSI classification has become a hot research topic [9,10,11,12,13].
Currently, several HSI classification methods based on traditional machine learning algorithms have been proposed. These methods include Support Vector Machines (SVMs) [14,15] and Random Forest (RF) [16]. In addition, the k-Nearest Neighbors (k-NN) [17] algorithm is a non-parametric classification method based on the assumption that samples with similar feature values belong to the same class; it assigns an unlabeled pixel the most frequent class among its k nearest neighboring pixels in the feature space. Linear Discriminant Analysis (LDA) [18] is a supervised dimensionality reduction and classification algorithm. It seeks a linear transformation that maximizes the differences between classes and minimizes the within-class scatter, yielding discriminative features for pixel classification. The Extended Morphological Attribute Profile (EMAP) [19] is a comprehensive spatial–spectral representation built by applying sequences of morphological attribute filters to the principal (or independent) components of the hyperspectral image; the resulting profiles capture the geometric structure of image objects and, combined with a classifier, enable the accurate characterization of the materials present in hyperspectral data.
Traditional machine learning methods for hyperspectral classification have limitations in feature extraction, high-dimensional data, and modeling nonlinear relationships [20]. In contrast, deep learning offers advantages such as automatic feature learning, strong nonlinear modeling capabilities, compact data representation, and data augmentation for improved generalization [21]. These benefits make deep learning well-suited to handling high-dimensional, nonlinear, and complex hyperspectral data, leading to enhanced classification accuracy and robustness.
Due to the popularity of deep learning research, deep learning methods have also been applied to HSI classification tasks. Initially, researchers used only convolutional layers to solve classification tasks, such as 1D-CNN [22], 2D-CNN [23], and 3D-CNN [24]; subsequently, more complex and deeper networks were designed. He et al. [25] observed that HSI differs significantly from ordinary 3D object images because it combines 2D spatial and 1D spectral features, so existing deep neural networks could not be directly applied to HSI classification tasks. To address this issue, they proposed a Multiscale 3D Deep Convolutional Neural Network (M3D-CNN), which jointly learned both two-dimensional multiscale spatial features and one-dimensional spectral features from HSI data in an end-to-end manner. To achieve better classification performance by combining two types of convolutions, Roy et al. [26] effectively combined 3D-CNN with 2D-CNN. Zhu et al. [27] recognized the remarkable capabilities of Generative Adversarial Networks (GANs) in various applications. As a result, they explored the application of GANs in the field of HSI classification and designed one CNN for discriminating samples and another CNN for generating synthetic input samples. Their approach achieved superior classification accuracy compared to previous methods. Due to the sequential nature of hyperspectral pixels, Mou et al. [28] applied Recurrent Neural Networks (RNNs) to HSI classification tasks and proposed a novel RNN model that effectively analyzed HSI pixels as sequential data. Their research demonstrated the significant potential of RNNs in HSI classification tasks. Traditional CNN models can only capture fixed receptive fields for HSI, making it challenging to extract feature information with different object distributions. To address this issue, Wan et al. [29] applied Graph Convolutional Networks (GCNs) to HSI classification tasks. They designed a multi-scale dynamic GCN (MDGCN) that updated the graph dynamically during the convolution process, leveraging multiscale features in HSI.
With the introduction of attention mechanisms, Haut et al. [30] combined CNNs and Residual Networks (ResNets) with visual attention. Visual attention effectively assisted in identifying the most representative parts of the data. Experimental results demonstrated that deep attention models had a strong competitive advantage. Sun et al. [31] discovered that CNN-based methods, due to the presence of interfering pixels, weaken the discriminative power of spatial–spectral features. Hence, they proposed a Spectral–Spatial Attention Network (SSAN) that captured discriminative spatial–spectral features from attention areas in HSI. To leverage the diverse spatial–spectral features inherent in different regions of the training data, Hang et al. [32] proposed a novel attention-aided CNN. It consisted of two subnetworks responsible for extracting spatial and spectral features, respectively. Both subnetworks incorporated attention modules to assist in constructing a discriminative network. To mitigate the interference between spatial and spectral features during the extraction process, Ma et al. [33] designed a Double-Branch Multi-Attention mechanism network (DBMA). It employed two branches, each focusing on extracting spatial and spectral features, respectively, thereby reducing mutual interference. Subsequently, Zhu et al. [34] discovered that the equal treatment of all spectral bands using deep neural networks restricted feature learning and was not conducive to classification performance in HSI. Therefore, they proposed a Residual Spectral–Spatial Attention Network (RSSAN) to address this issue. The RSSAN took raw 3D cubes as input data and employed spectral attention and spatial attention to suppress irrelevant components and emphasize relevant components, achieving adaptive feature refinement.
Recently, with the introduction of Vision Transformer [35] into image processing, which originated from the transformer model in natural language processing, more and more efficient transformer structures have been designed [36]. To fully exploit the sequential properties inherent in the spectral feature of HSI, Hong et al. [37] proposed a new classification network called SpectralFormer. It can learn the spectral sequence information. Similarly, He et al. [38] also addressed this issue and designed a classification framework called Spatial–Spectral Transformer to capture the sequential spectral relationships in HSI. Due to the limited ability of CNN to capture deep semantic features, Sun et al. [39] discovered that transformer structures can effectively complement this drawback. They proposed a method called Spectral–Spatial Feature Tokenization Transformer (SSFTT). It combined CNNs and transformers to extract abundant spectral–spatial features. Mei et al. [40] found that the features extracted using the current transformer structures exhibited excessive discretization and, thus, proposed a Group-Aware Hierarchical Transformer (GAHT) based on group perception. This network used a hierarchical structure and achieved a significant improvement in classification performance. Fang et al. [41] introduced a Multi-Attention Joint Representation with Lightweight Transformer (MAR-LWFormer) for scenarios with extremely limited samples. They employed a three-branch structure to extract multi-scale features and demonstrated excellent classification performance. To utilize morphological features, Roy et al. [42] proposed a novel transformer (morphFormer) that combined morphological convolutional operations with attention mechanisms.
In the current research, most models are capable of effectively extracting spatial–spectral information from HSI. However, training on fixed-size sample cubes constrains the model's ability to extract multi-scale features. Additionally, in practical applications, there is often a scarcity of labeled samples in HSI datasets [43]. Therefore, it is crucial to develop a network model that can adequately extract spatial–spectral features from HSI even in scenarios with limited samples.
The proposed TNCCA model offers the following three main contributions:
  • Taking patches of different sizes from the HSI, we employ a mixed-fusion multi-scale module to extract shallow spatial–spectral features. This module primarily consists of two multi-scale convolutional neural networks designed for the different-sized inputs. The network utilizes convolutional kernels of varying sizes to extract shallow feature information at different scales.
  • An efficient transformer encoder was designed in which we apply 2D convolution and dilated convolution to tokens to obtain two sets of Q, K, and V with different scale information. This enables the transformer architecture with cross-attention to not only learn deeper feature information and promote the interaction of deep semantic information but also effectively fuse feature information of different sizes from the two branches.
  • We designed an innovative dual-branch network specifically for classification tasks in small-sample scenarios. This network efficiently integrates a multi-scale CNN with a transformer encoder to fully exploit the multi-scale spatial–spectral features of HSI. We validated this network on three datasets, and the experimental results indicated that our proposed network was competitive compared to state-of-the-art methods.

2. Materials and Methods

In Figure 1, we illustrate an overview diagram of the proposed TNCCA model, which is an efficient dual-branch deep learning network for HSI classification. The network consists of the following sub-modules: the data preprocessing module for HSI, the shallow feature extraction module that utilizes different fusion methods to combine multi-scale spatial–spectral features, the module that converts the shallow features into tokens with different quantities assigned to different sizes, and the transformer module with CNN-enhanced cross-attention. Finally, there is the classifier head, which takes the input pixels and outputs the corresponding classification labels.
In summary, the TNCCA model consists of the following five components: HSI data preprocessing, a dual-branch multi-scale shallow feature extraction module, a feature-maps-to-tokens conversion module, a transformer with a CNN-enhanced cross-attention module, and a classifier head.

2.1. HSI Data Preprocessing

The processing of the original HSI, $X \in \mathbb{R}^{a \times b \times l}$, is described in this section, where a and b denote the two spatial dimensions and l denotes the spectral dimension. The typically large number of spectral bands in HSI increases computational complexity and consumes significant computational resources; therefore, we use a PCA operation to reduce the spectral dimensionality of the original image from l to r.
To obtain information at different scales, we extract two square patches of different sizes, $X_1^p \in \mathbb{R}^{s_1 \times s_1 \times r}$ and $X_2^p \in \mathbb{R}^{s_2 \times s_2 \times r}$ ($s_1 > s_2$), centered at each pixel. These two patches are combined into a pair and fed into the network together. Finally, the pair generated for each pixel is placed into a collection, A, and the training and test sets are randomly partitioned from A according to the sampling rate. Each group of training and testing data contains the corresponding ground truth labels. The labels, denoted as $Y \in \mathbb{R}^{a \times b}$, are obtained from the set of ground truth labels.
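To make the preprocessing concrete, the sketch below reduces the spectral dimension with PCA and cuts a pair of patches around a pixel. It is a minimal illustration, not the authors' code: the function names, the reflect padding at the image borders, and the use of scikit-learn's PCA are our assumptions.

```python
# Minimal sketch of the preprocessing step described above (not the authors' code).
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(hsi: np.ndarray, r: int = 30) -> np.ndarray:
    """hsi: (a, b, l) cube -> (a, b, r) cube after PCA on the spectral axis."""
    a, b, l = hsi.shape
    flat = hsi.reshape(-1, l)
    reduced = PCA(n_components=r).fit_transform(flat)
    return reduced.reshape(a, b, r)

def extract_patch_pair(cube: np.ndarray, row: int, col: int, s1: int = 13, s2: int = 7):
    """Return the (s1, s1, r) and (s2, s2, r) patches centered at (row, col)."""
    pad = s1 // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    cr, cc = row + pad, col + pad
    big = padded[cr - s1 // 2: cr + s1 // 2 + 1, cc - s1 // 2: cc + s1 // 2 + 1, :]
    small = padded[cr - s2 // 2: cr + s2 // 2 + 1, cc - s2 // 2: cc + s2 // 2 + 1, :]
    return big, small
```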

2.2. Dual-Branch Multi-Scale Shallow Feature Extraction Module

As shown in Figure 2, a pair of cubes of different sizes, denoted as $X_1^p$ and $X_2^p$, is fed into the network. First, they pass through a 3D convolutional layer. In the first branch, the larger cube is processed with 8 convolutional kernels of size $3 \times 5 \times 5$. In the second branch, the smaller cube is processed with 4 convolutional kernels of size $1 \times 3 \times 3$. Padding is applied to maintain the original size of the cubes. The above process can be represented by the following equation:
$$X_1^{3d} = \mathrm{Conv3D}_{(3\times5\times5)}(X_1^p), \quad X_2^{3d} = \mathrm{Conv3D}_{(1\times3\times3)}(X_2^p)$$
where Conv3D and Conv2D denote 3D and 2D convolutional layers with the indicated kernel sizes, respectively (Conv2D appears in the equations below).
After passing through a 3D convolutional layer, we extract shallow spatial features at different scales using multi-scale 2D convolutional layers. Similarly, we use different numbers of convolutional kernels and different kernel sizes in different branches. In the first branch, we use 32 2D convolutional kernels of size ( 7 × 7 ), 16 kernels of size ( 5 × 5 ), and 16 kernels of size ( 1 × 1 ). The information from these three different scales is fused through the Concatenation operation. In the second branch, smaller kernel sizes are used to extract shallow spatial features. Specifically, we use 64 2D convolutional kernels of size ( 3 × 3 ), 64 2D dilated convolutional kernels with a dilation rate of 2 and size ( 3 × 3 ), and 64 2D convolutional kernels of size ( 1 × 1 ). The information from these three different scales is fused through element-wise addition.
Finally, we obtain two sets of 2D features, $F_1$ and $F_2$, respectively. This process can be represented by the following equations:
$$F_1 = \mathrm{Concat}\big(\mathrm{Conv2D}_{(7\times7)}(X_1^{3d}),\ \mathrm{Conv2D}_{(5\times5)}(X_1^{3d}),\ \mathrm{Conv2D}_{(1\times1)}(X_1^{3d})\big)$$
$$F_2 = \mathrm{Conv2D}_{(3\times3)}(X_2^{3d}) + \mathrm{DilatedConv2D}_{(3\times3)}(X_2^{3d}) + \mathrm{Conv2D}_{(1\times1)}(X_2^{3d})$$
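A minimal PyTorch sketch of this dual-branch shallow feature extractor is given below. The kernel counts and sizes follow the text; the padding values and the reshape that merges the 3D-convolution output channels with the spectral dimension before the 2D convolutions are our assumptions.

```python
# Sketch of the dual-branch shallow feature extraction module (not the authors' code).
import torch
import torch.nn as nn

class ShallowFeatureExtractor(nn.Module):
    def __init__(self, r: int = 30):
        super().__init__()
        # Branch 1: larger patch, 8 kernels of size 3x5x5, then 7x7 / 5x5 / 1x1 2D convs
        self.conv3d_1 = nn.Conv3d(1, 8, kernel_size=(3, 5, 5), padding=(1, 2, 2))
        self.b1_7x7 = nn.Conv2d(8 * r, 32, kernel_size=7, padding=3)
        self.b1_5x5 = nn.Conv2d(8 * r, 16, kernel_size=5, padding=2)
        self.b1_1x1 = nn.Conv2d(8 * r, 16, kernel_size=1)
        # Branch 2: smaller patch, 4 kernels of size 1x3x3, then 3x3 / dilated 3x3 / 1x1 2D convs
        self.conv3d_2 = nn.Conv3d(1, 4, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.b2_3x3 = nn.Conv2d(4 * r, 64, kernel_size=3, padding=1)
        self.b2_dil = nn.Conv2d(4 * r, 64, kernel_size=3, padding=2, dilation=2)
        self.b2_1x1 = nn.Conv2d(4 * r, 64, kernel_size=1)

    def forward(self, x1, x2):
        # x1: (B, 1, r, s1, s1), x2: (B, 1, r, s2, s2)
        x1 = self.conv3d_1(x1).flatten(1, 2)   # -> (B, 8*r, s1, s1)
        x2 = self.conv3d_2(x2).flatten(1, 2)   # -> (B, 4*r, s2, s2)
        # Branch 1 fuses the three scales by concatenation, branch 2 by element-wise addition
        f1 = torch.cat([self.b1_7x7(x1), self.b1_5x5(x1), self.b1_1x1(x1)], dim=1)
        f2 = self.b2_3x3(x2) + self.b2_dil(x2) + self.b2_1x1(x2)
        return f1, f2
```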

2.3. Feature-Maps-to-Tokens Conversion Module

After obtaining the multi-scale 2D feature information from the dual-branch shallow feature extraction module, these features need to be tokenized to better match the structure of the transformer.
The flattened feature maps are denoted as $F_1^{flat}$ and $F_2^{flat}$, respectively. These two variables can be represented by the following equation:
$$F_1^{flat} = \mathrm{TS}\big(\mathrm{Flatten}(F_1)\big), \quad F_2^{flat} = \mathrm{TS}\big(\mathrm{Flatten}(F_2)\big)$$
where $\mathrm{TS}(\cdot)$ is a transpose function. Next, $F_1^{flat}$ is multiplied by a learnable weight matrix $W_1$ through a $1 \times 1$ operation, and similarly, $F_2^{flat}$ is multiplied by a learnable weight matrix $W_2$. Weight matrices of different shapes are used so that a different number of tokens is assigned to each branch. The resulting attention weights are then multiplied with the flattened feature maps to produce the feature tokens, as follows:
$$T_1^f = \mathrm{softmax}\big(\mathrm{TS}(F_1^{flat} W_1)\big)\, F_1^{flat}, \quad T_2^f = \mathrm{softmax}\big(\mathrm{TS}(F_2^{flat} W_2)\big)\, F_2^{flat}$$
To accomplish the classification task, we also prepend a learnable classification token, initialized with zeros. Then, to preserve the original positional information, a positional embedding is added to the tokens. The tokens of the two branches are obtained from the following equation:
$$T_1 = \mathrm{Concat}\big(T_1^{cls},\ T_1^f\big) + T_1^{pos}, \quad T_2 = \mathrm{Concat}\big(T_2^{cls},\ T_2^f\big) + T_2^{pos}$$
where $T^{cls}$ denotes the classification token and $T^{pos}$ the positional embedding of the corresponding branch.
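The following sketch illustrates one way to implement this tokenization in PyTorch. The learnable weight matrix, the zero-initialized classification token, and the positional embedding follow the description above; the number of tokens per branch and the softmax axis are assumptions, and the module would be instantiated once per branch with a different token count.

```python
# Sketch of the feature-maps-to-tokens conversion module (not the authors' code).
import torch
import torch.nn as nn

class Tokenizer(nn.Module):
    def __init__(self, channels: int = 64, n_tokens: int = 4):
        super().__init__()
        self.w = nn.Parameter(torch.empty(channels, n_tokens))          # learnable weight matrix W
        nn.init.xavier_uniform_(self.w)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, channels))      # classification token, zero-init
        self.pos_embed = nn.Parameter(torch.zeros(1, n_tokens + 1, channels))

    def forward(self, feat):                       # feat: (B, C, H, W)
        flat = feat.flatten(2).transpose(1, 2)     # (B, HW, C), i.e. TS(Flatten(F))
        attn = (flat @ self.w).transpose(1, 2)     # (B, n_tokens, HW)
        tokens = attn.softmax(dim=-1) @ flat       # (B, n_tokens, C)
        cls = self.cls_token.expand(feat.size(0), -1, -1)
        return torch.cat([cls, tokens], dim=1) + self.pos_embed
```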

2.4. Transformer with CNN-Enhanced Cross-Attention Module

The transformer possesses powerful feature-information-mining capabilities, as it can capture long-range dependencies and acquire global contextual information. To further explore the deep feature information contained in the data and fully integrate the multi-scale feature information extracted via the two branches, we embed a cross-attention in the transformer structure.
As shown in Figure 3, we utilize different convolutional layers to obtain the attention mechanism's Q, K, and V tensors from the output $T_1$ of the previous module. First, we apply a 2D convolutional layer with a kernel size of $3 \times 3$ and padding of 1 to obtain $Q_1$. Next, a 2D convolutional layer with a kernel size of $5 \times 5$ and padding of 2 is used to obtain $K_1$. Finally, we employ a dilated convolutional layer with a kernel size of $3 \times 3$, padding of 2, and a dilation rate of 2 to obtain $V_1$.
Next, we apply similar multi-scale convolutions to the other output, $T_2$, to obtain $Q_2$, $K_2$, and $V_2$. First, we use a 2D convolutional layer with a kernel size of $3 \times 3$ and padding of 1 to obtain $Q_2$. Then, we employ a dilated convolutional layer with a kernel size of $3 \times 3$, padding of 2, and a dilation rate of 2 to obtain $K_2$. Finally, we utilize a 2D convolutional layer with a kernel size of $5 \times 5$ and padding of 2 to obtain $V_2$. Once we have obtained these tensors, we combine them through the attention operation, with the values exchanged between the two branches, to obtain the deep features $A_1$ and $A_2$. The process can be represented by the following formulas:
$$A_1 = \mathrm{softmax}\!\left(\frac{Q_1\,\mathrm{TS}(K_1)}{\sqrt{d_{K_1}}}\right) V_2$$
$$A_2 = \mathrm{softmax}\!\left(\frac{Q_2\,\mathrm{TS}(K_2)}{\sqrt{d_{K_2}}}\right) V_1$$
where $d_{K_1}$ is the dimension of $K_1$, and $d_{K_2}$ is the dimension of $K_2$. We obtain the deep features from the two branches and sum them element-wise. Then, we pass the summed features through a multi-layer perceptron block with a residual structure to obtain the final deep feature, $\mathrm{DF}$. This can be obtained using the following equation:
$$\mathrm{DF} = \mathrm{LN}\big(\mathrm{MLP}(A_1 + A_2)\big) + (A_1 + A_2)$$
where $\mathrm{MLP}(\cdot)$ is the multi-layer perceptron and LN denotes layer normalization. The MLP mainly comprises two linear layers with a Gaussian Error Linear Unit (GELU) activation function in between.
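Below is a hedged PyTorch sketch of this transformer block with CNN-enhanced cross-attention. The kernel sizes, paddings, and dilation rates of the Q/K/V convolutions, the crossing of V between branches, and the residual MLP with layer normalization follow the text; treating each branch's token matrix as a single-channel 2D map for the convolutions and assuming equal token counts in both branches are our simplifying assumptions.

```python
# Sketch of the transformer block with CNN-enhanced cross-attention (not the authors' code).
import math
import torch
import torch.nn as nn

class CNNCrossAttention(nn.Module):
    def __init__(self, dim: int = 64, mlp_ratio: int = 4):
        super().__init__()
        # Branch 1 projections: 3x3 conv -> Q1, 5x5 conv -> K1, dilated 3x3 conv -> V1
        self.q1 = nn.Conv2d(1, 1, 3, padding=1)
        self.k1 = nn.Conv2d(1, 1, 5, padding=2)
        self.v1 = nn.Conv2d(1, 1, 3, padding=2, dilation=2)
        # Branch 2 projections: 3x3 conv -> Q2, dilated 3x3 conv -> K2, 5x5 conv -> V2
        self.q2 = nn.Conv2d(1, 1, 3, padding=1)
        self.k2 = nn.Conv2d(1, 1, 3, padding=2, dilation=2)
        self.v2 = nn.Conv2d(1, 1, 5, padding=2)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))
        self.norm = nn.LayerNorm(dim)

    @staticmethod
    def _attend(q, k, v):
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d_K)) V
        scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
        return scores.softmax(dim=-1) @ v

    def forward(self, t1, t2):                      # t1, t2: (B, N, D), assumed equal shapes
        c1, c2 = t1.unsqueeze(1), t2.unsqueeze(1)   # view tokens as (B, 1, N, D) maps
        q1, k1, v1 = (m(c1).squeeze(1) for m in (self.q1, self.k1, self.v1))
        q2, k2, v2 = (m(c2).squeeze(1) for m in (self.q2, self.k2, self.v2))
        a1 = self._attend(q1, k1, v2)               # values crossed from branch 2
        a2 = self._attend(q2, k2, v1)               # values crossed from branch 1
        fused = a1 + a2                             # element-wise fusion of the two branches
        return self.norm(self.mlp(fused)) + fused   # DF = LN(MLP(A1 + A2)) + (A1 + A2)
```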

2.5. Classifier Head

We extract the learnable classification token, $T_{cls}^{DF}$, from the output tokens, $\mathrm{DF}$, of the transformer encoder. Then, we pass it through a linear layer to obtain a one-dimensional vector, denoted as $\mathbf{I} \in \mathbb{R}^{1 \times c}$, where c represents the number of classes. The softmax function is used to ensure that the activations of the output units sum to 1. By selecting the index of the maximum value, we obtain the class label for that pixel. The entire process can be represented by the following equation:
$$\mathbf{I} = \mathrm{Linear}\big(T_{cls}^{DF}\big), \quad \mathrm{Label} = \arg\max\big(\mathrm{Softmax}(\mathbf{I})\big)$$
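A small sketch of the classifier head follows. Assuming the classification token occupies the first position of the token sequence (as in the tokenizer sketch above), it is projected to c class scores and the label is the index of the largest softmax probability.

```python
# Sketch of the classifier head (not the authors' code).
import torch
import torch.nn as nn

class ClassifierHead(nn.Module):
    def __init__(self, dim: int = 64, n_classes: int = 15):
        super().__init__()
        self.fc = nn.Linear(dim, n_classes)

    def forward(self, df_tokens):            # df_tokens: (B, N+1, D), cls token assumed first
        logits = self.fc(df_tokens[:, 0])    # vector I with one score per class
        probs = logits.softmax(dim=-1)
        return probs.argmax(dim=-1)          # predicted class label per pixel
```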
The complete procedure of the TNCCA method, as proposed, is outlined in Algorithm 1.
Algorithm 1 Multi-scale Feature Transformer with CNN-Enhanced Cross-Attention (TNCCA)
Input: HSI data $X \in \mathbb{R}^{a \times b \times l}$ and ground truth labels $Y \in \mathbb{R}^{a \times b}$; the spectral dimension of the original data is reduced to $r = 30$ using a PCA operation. A set of paired cubes with sizes $s_1 = 13$ and $s_2 = 7$ is then extracted. Subsequently, the training set of the model is randomly sampled at a sampling rate of 1%.
Output: Predicted labels for the test dataset.
1: Set the batch size of the training data to 64, and use the Adam optimizer with a learning rate of $lr = 5 \times 10^{-4}$. Decay the learning rate to $lr \times 0.9$ every 50 steps. Set the total number of training epochs to $\epsilon = 500$.
2: After the dimensionality reduction of the original HSI using PCA, the cube pair $X_1^p$ and $X_2^p$ centered at each pixel is extracted and placed into a collection. The collection is then divided into a training set and a testing set according to Table 1.
3: Create training and test data loaders. Each group of training and testing data obtains its corresponding ground truth labels from $Y$.
4: for $i = 1$ to $\epsilon$ do
5:     Extract the multi-scale shallow spatial–spectral features $F_1$ and $F_2$ with the dual-branch multi-scale shallow feature extraction module.
6:     Convert the feature maps into tokens $T_1$ and $T_2$ with the feature-maps-to-tokens conversion module; these serve as inputs to the next module.
7:     Pass the tokens through the transformer encoder with CNN-enhanced cross-attention to obtain the deep semantic features $\mathrm{DF}$.
8:     Extract the learnable classification token $T_{cls}^{DF}$ from $\mathrm{DF}$ and feed it into the classification head to predict the class of the current pixel.
9: end for
10: Apply the trained model to the test dataset to generate predicted labels.
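The sketch below mirrors the training schedule of Algorithm 1 (batch size 64, Adam with an initial learning rate of 5 × 10^-4 decayed by a factor of 0.9, and 500 epochs). Interpreting "every 50 steps" as every 50 epochs, assuming the model returns class logits, and using placeholder dataset objects are our assumptions.

```python
# Sketch of the training schedule in Algorithm 1 (not the authors' code).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_tncca(model, train_set, test_set, device="cuda", epochs=500):
    # train_set / test_set are placeholders yielding (patch_large, patch_small, label)
    train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=64)
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.9)
    criterion = nn.CrossEntropyLoss()
    model.to(device)
    for epoch in range(epochs):
        model.train()
        for x1, x2, y in train_loader:
            x1, x2, y = x1.to(device), x2.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x1, x2), y)   # model assumed to return class logits
            loss.backward()
            optimizer.step()
        scheduler.step()                          # lr <- lr * 0.9 every 50 epochs
    # Final evaluation on the test set
    model.eval()
    preds = []
    with torch.no_grad():
        for x1, x2, _ in test_loader:
            preds.append(model(x1.to(device), x2.to(device)).argmax(dim=-1).cpu())
    return torch.cat(preds)
```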
Table 1. Division of training samples and test samples in the Houston2013 dataset, the Trento dataset, and the Pavia University dataset.

| No. | Houston2013: Class | Training (1%) | Test | Trento: Class | Training (1%) | Test | Pavia University: Class | Training (1%) | Test |
|---|---|---|---|---|---|---|---|---|---|
| #1 | Healthy Grass | 13 | 1238 | Apple Trees | 40 | 3994 | Asphalt | 66 | 6565 |
| #2 | Stressed Grass | 13 | 1241 | Buildings | 29 | 2874 | Meadows | 186 | 18,463 |
| #3 | Synthetic Grass | 7 | 690 | Ground | 5 | 474 | Gravel | 21 | 2078 |
| #4 | Tree | 12 | 1232 | Woods | 91 | 9032 | Trees | 31 | 3033 |
| #5 | Soil | 12 | 1230 | Vineyard | 105 | 10,396 | Metal Sheets | 13 | 1332 |
| #6 | Water | 3 | 322 | Roads | 31 | 3143 | Bare Soil | 50 | 4979 |
| #7 | Residential | 13 | 1255 | | | | Bitumen | 13 | 1317 |
| #8 | Commercial | 12 | 1232 | | | | Bricks | 37 | 3645 |
| #9 | Road | 13 | 1239 | | | | Shadows | 9 | 938 |
| #10 | Highway | 12 | 1215 | | | | | | |
| #11 | Railway | 12 | 1223 | | | | | | |
| #12 | Parking Lot 1 | 12 | 1221 | | | | | | |
| #13 | Parking Lot 2 | 5 | 464 | | | | | | |
| #14 | Tennis Court | 4 | 424 | | | | | | |
| #15 | Running Track | 7 | 653 | | | | | | |
| Total | | 150 | 14,879 | | 301 | 29,913 | | 426 | 42,350 |

3. Results

3.1. Data Description

The proposed TNCCA model was tested on three widely used datasets. Below, we introduce these three datasets one by one.
Houston2013 dataset: The Houston2013 dataset was jointly provided by the research group at the University of Houston and the National Mapping Center of the United States. It contains a wide range of categories and has been widely used by researchers. The dataset consists of 144 bands and 349 × 1905 classified pixels across 15 different classification categories. Figure 4 displays the pseudocolored image and ground truth map of the Houston2013 dataset.
Trento dataset: The Trento dataset was captured in the southern region of Trento, Italy, using the Airborne Imaging Spectrometer for Applications (AISA) Eagle sensor. The dataset consists of 63 spectral bands and has dimensions of 600 × 166 pixels for classification, covering six different categories of ground objects. Figure 5a,b respectively display the pseudocolored image and ground truth map.
Pavia University dataset: The Pavia University dataset was a collection of HSI taken in 2001, specifically at Pavia University in Italy. The dataset was an HSI obtained using a Reflective Optics System Imaging Spectrometer (ROSIS) sensor. The image comprised 115 bands and had dimensions of 610 × 340 classified pixels. There were a total of nine land cover classification categories. To reduce the interference of noise, we removed 12 bands that contained noise. Figure 6 displays the pseudocolored image and ground truth map of the dataset.
We present the division of training and test samples for the three datasets in Table 1, which includes the specific data for each category. For each category, we used 1 % of the total number of samples as the training set.

3.2. Parameter Analysis

In the model we proposed, there was a set of hyperparameters, such as batch size, the size of the first cubic patch, and the size of the second cubic patch. We conducted experimental analysis on these parameters to ensure that their values were optimal. The analysis results are shown in Figure 7, Figure 8 and Figure 9.
(1) Batch Size: We observed that the performance of the transformer architecture is highly sensitive to the batch size; different batch sizes result in varying classification performance. We set the batch size to the following candidate values: {16, 32, 64, 128, 256}. We then experimentally determined the batch size that yielded the best performance for our proposed model.
(2) Patch Size: Since the cubic patch served as the input to the model, selecting a patch size that was too small could limit the model’s receptive field, while choosing a size that was too large could result in excessive data volume and increased computational complexity. Our proposed TNCCA selected two different sizes of cubic patches to extract multi-scale features, for which the size of the cubic patch in the first branch was slightly larger than that in the second branch. These two cubic patches served as inputs to the model, and their sizes significantly impacted the classification accuracy. Therefore, we conducted experiments on these two hyperparameters.
We first selected the parameter for the first branch from the set { 9 , 11 , 13 , 15 , 17 } , and the experimental results showed that the model achieved the best classification performance when its value was 13. Then, for the second branch, we selected the parameter from the set { 3 , 5 , 7 , 9 , 11 } . From Figure 7, Figure 8 and Figure 9, it can be observed that the model achieved the highest classification metrics when its value was 7.
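The search described above can be summarized as a one-at-a-time sweep over the candidate values; the sketch below is illustrative only. The hypothetical build_and_evaluate helper (which would train the model with the given settings and return its overall accuracy) and the fixed values used while one parameter is varied are assumptions.

```python
# Illustrative one-at-a-time hyperparameter sweep over the candidate values in the text.
def sweep(build_and_evaluate):
    results = {}
    for batch_size in (16, 32, 64, 128, 256):
        results[("batch", batch_size)] = build_and_evaluate(batch_size=batch_size, s1=13, s2=7)
    for s1 in (9, 11, 13, 15, 17):
        results[("s1", s1)] = build_and_evaluate(batch_size=64, s1=s1, s2=7)
    for s2 in (3, 5, 7, 9, 11):
        results[("s2", s2)] = build_and_evaluate(batch_size=64, s1=13, s2=s2)
    best_setting = max(results, key=results.get)   # setting with the highest OA
    return best_setting, results
```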

3.3. Classification Results and Analysis

We compared the classification performance of our proposed model with eight advanced classification models; in this section, we describe the conducted experiments and analyze the results. The comparison models comprised SVM [14], 1D-CNN [22], 3D-CNN [24], M3D-CNN [25], 3D-DLA [44], Hybrid [26], SSFTT [39], and morphFormer [42]. To maintain the original performance of the comparative models, we used the training strategies described in their respective papers. The number of training and testing samples for each model was the same as the numbers listed in Table 1, and random sampling was employed. If you wish to reproduce our experiments, you can download the code from the following link: https://github.com/cupid6868/TNCCA.git (accessed on 25 March 2024).
(1) Quantitative results and analysis: We present the results in Table 2, Table 3 and Table 4, where we demonstrate the superior performance of our proposed model. We highlight the best results for each metric. We conducted experiments on three datasets: the Houston2013 dataset, the Trento dataset, and the Pavia University dataset. The comparative classification metrics included overall accuracy (OA), average accuracy (AA), the Kappa coefficient ( κ ), and class-wise accuracy. The data in the tables clearly indicate that our proposed TNCCA outperformed the other eight models on the experimental datasets. Let us take the Houston2013 dataset as an example. The proposed TNCCA exhibited the best classification performance for classes such as ‘Synthetic Grass’, ‘Soil’, ‘Water’, ‘Commercial’, ‘Parking Lot 2’, ‘Tennis Court’, and ‘Running Track’. Additionally, for classes like ‘Healthy Grass’, ‘Stressed Grass’, and ‘Parking Lot 1’, although our model’s performance was not the best, it still ranked among that of the top methods. In contrast, SVM and 1D-CNN showed extremely low classification performance for certain classes. This clearly demonstrates that, in the context of small sample sizes, our proposed model effectively utilized multi-scale feature information and fully exploited the spatial–spectral characteristics in HSI.
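For reference, the metrics reported in the tables can be computed from a confusion matrix as in the short sketch below: OA is the fraction of correctly classified samples, AA is the mean of the per-class accuracies, and κ measures agreement beyond chance. The helper name is ours.

```python
# Computing OA, AA and the Kappa coefficient from a confusion matrix.
import numpy as np

def classification_metrics(conf: np.ndarray):
    """conf[i, j] = number of samples of true class i predicted as class j."""
    total = conf.sum()
    oa = np.trace(conf) / total                         # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)        # recall of each class
    aa = per_class.mean()                               # average accuracy
    expected = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total**2
    kappa = (oa - expected) / (1 - expected)            # agreement beyond chance
    return oa * 100, aa * 100, kappa * 100              # reported as percentages / x100
```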
(2) Visual evaluation and analysis: We present the aforementioned experimental results in the form of classification maps, shown in Figure 10, Figure 11 and Figure 12. By comparing the spatial contours of the classification maps with the noise contained in the images, we can clearly observe the superior classification performance of the proposed TNCCA compared to other models.
In the classification maps, it is obvious that the classification map of TNCCA exhibited the clearest spatial contours and contained the least amount of noise. Conversely, the classification maps of the other models showed more instances of misclassifications and interfering noise. Let us take the classification map of the Houston2013 dataset as an example. The classification map of our proposed model closely resembles the ground truth map. On the other hand, the classification maps of SVM, 1D-CNN, 3D-CNN, M3D-CNN, and 3D-DLA exhibited more misclassifications and noise. In the zoomed-in window, we can clearly observe the high classification performance of our proposed model for classes such as ‘Parking Lot 2’, ‘Road’, and ‘Synthetic Grass’.
In conclusion, our proposed model outperformed the compared models and demonstrated the best classification performance. It highlighted the model’s capability of extracting features effectively in small sample scenarios.

3.4. Analysis of Inference Speed

To demonstrate the inference speed of our proposed model, TNCCA, we present the training time and testing time of the model with different datasets in Table 5. The data show that our training speed is fast, as the model can complete 500 epochs in a very short period. To facilitate the observation of model performance during the training process, we adopted a training strategy of conducting a test after each epoch. This resulted in a significantly longer testing time compared to the training time. Additionally, we employed dynamic learning rates to accelerate the convergence speed.
Among the three tested datasets, the Pavia University dataset, which had larger spatial dimensions and higher spectral dimensions, took the longest time, with 1.26 min for training and only 0.153 s per epoch. The training times for the other datasets were shorter. From this table, it is easy to conclude that our proposed model not only achieved high classification accuracy but also trained at a fast speed, demonstrating high efficiency.

3.5. Ablation Analysis

To validate the effectiveness of each module in our proposed model, we conducted ablation experiments on the four modules using the Houston2013 dataset. These four modules comprised a 3D convolutional layer (3D-Conv), a multi-scale 2D convolutional module (Ms2D-Conv), a feature map tokenization module (Tokenizer), and a transformer encoder module (TE). We evaluated their performance in terms of OA, AA, and κ by considering five different combinations of these modules. The results are listed in Table 6.
Specifically, we first kept only the 3D convolutional layer, and it was evident that the performance was extremely poor. In the next step, we removed the transformer encoder with the CNN-enhanced cross-attention mechanism, which was one of the main innovations of this paper. The results showed a significant decrease in classification performance. The OA, AA, and κ values of the model decreased by 4.71 % , 5.89 % , and 5.32 % , respectively, compared to TNCCA. Next, we removed the 3D convolutional layer and replaced the multi-scale 2D convolutional module with a regular 2D convolutional layer. In this configuration, the model’s OA decreased by 1.14 % , and its AA decreased by 1.66 % , compared to TNCCA. Then, we removed the 3D convolutional layer, which resulted in the loss of rich spectral information in the HSI. We observed that the model’s OA decreased by 0.64 % , and its AA decreased by 1.8 % , compared to TNCCA. Finally, we replaced only the multi-scale 2D convolutional module with a regular 2D convolutional layer. In this case, the model’s OA decreased by 0.17 % , and its AA decreased by 0.20 % , compared to TNCCA. This clearly demonstrated the positive contributions of these four modules in enhancing the accuracy of network classification.

4. Conclusions

The paper has introduced a novel dual-branch deep learning classification model that effectively captures spatial–spectral feature information from HSI and achieves high classification performance in small sample scenarios. The two branches of the model utilize cubic patches of different sizes as inputs to fully exploit the limited samples and extract features at different scales. First, we employed a 3D convolutional layer and a multi-scale 2D convolutional module to extract shallow-level features. Then, the obtained feature maps were transformed into tokens, assigning a larger number of tokens to the larger cubic patches. Next, we utilized a transformer with CNN-enhanced cross-attention to delve into the deep-level feature information and fuse the different-scale information from the two branches. Finally, through extensive experiments, we demonstrated that the proposed TNCCA model exhibits superior classification performance.
In our future work, we aim to explore the rich multi-scale spatial–spectral features in HSI from different perspectives to improve classification accuracy. However, as the classification accuracy improves, there is an increasing demand for lightweight operations and reducing the computational complexity of the models. We will utilize more novel lightweight operations to design more efficient classification models.

Author Contributions

Methodology, X.W. and B.L.; conceptualization, X.W. and L.S.; software, X.W. and L.S.; validation, X.W. and C.L.; investigation, B.L. and X.W.; writing—original draft preparation, X.W.; writing—review and editing, B.L. and L.S.; visualization, C.L. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Jiangsu key R&D plan, no. BE2022161.

Data Availability Statement

The data presented in this study are available in the article.

Acknowledgments

The authors thank the anonymous reviewers and the editors for their insightful comments and helpful suggestions that helped improve the quality of our manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HSI: Hyperspectral image
RF: Random Forest
SVM: Support Vector Machine
LDA: Linear Discriminant Analysis
PCA: Principal Component Analysis
CNN: Convolutional Neural Network
GAN: Generative Adversarial Network
GCN: Graph Convolutional Network
RNN: Recurrent Neural Network
ResNet: Residual Network
TE: Transformer encoder
Q: Queries
K: Keys
V: Values
MLP: Multi-layer perceptron
LN: Layer normalization

References

  1. He, C.; Cao, Q.; Xu, Y.; Sun, L.; Wu, Z.; Wei, Z. Weighted Order-p Tensor Nuclear Norm Minimization and Its Application to Hyperspectral Image Mixed Denoising. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5510505. [Google Scholar] [CrossRef]
  2. Sun, L.; Wang, Q.; Chen, Y.; Zheng, Y.; Wu, Z.; Fu, L.; Jeon, B. CRNet: Channel-Enhanced Remodeling-Based Network for Salient Object Detection in Optical Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5618314. [Google Scholar] [CrossRef]
  3. Gao, H.; Zhang, Y.; Chen, Z.; Xu, S.; Hong, D.; Zhang, B. A Multidepth and Multibranch Network for Hyperspectral Target Detection Based on Band Selection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5506818. [Google Scholar] [CrossRef]
  4. Gao, H.; Zhang, Y.; Chen, Z.; Xu, F.; Hong, D.; Zhang, B. Hyperspectral Target Detection via Spectral Aggregation and Separation Network With Target Band Random Mask. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5515516. [Google Scholar] [CrossRef]
  5. Gevaert, C.M.; Suomalainen, J.; Tang, J.; Kooistra, L. Generation of Spectral–Temporal Response Surfaces by Combining Multispectral Satellite and Hyperspectral UAV Imagery for Precision Agriculture Applications. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3140–3146. [Google Scholar] [CrossRef]
  6. Gong, P.; Li, Z.; Huang, H.; Sun, G.; Wang, L. ICESat GLAS Data for Urban Environment Monitoring. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1158–1172. [Google Scholar] [CrossRef]
  7. Wang, J.; Zhang, L.; Tong, Q.; Sun, X. The Spectral Crust project—Research on new mineral exploration technology. In Proceedings of the 2012 4th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Shanghai, China, 4–7 June 2012; pp. 1–4. [Google Scholar] [CrossRef]
  8. Ardouin, J.P.; Levesque, J.; Rea, T.A. A demonstration of hyperspectral image exploitation for military applications. In Proceedings of the 2007 10th International Conference on Information Fusion, Québec, QC, Canada, 9–12 July 2007; pp. 1–8. [Google Scholar] [CrossRef]
  9. Su, Y.; Gao, L.; Jiang, M.; Plaza, A.; Sun, X.; Zhang, B. NSCKL: Normalized Spectral Clustering With Kernel-Based Learning for Semisupervised Hyperspectral Image Classification. IEEE Trans. Cybern. 2023, 53, 6649–6662. [Google Scholar] [CrossRef] [PubMed]
  10. Su, Y.; Chen, J.; Gao, L.; Plaza, A.; Jiang, M.; Xu, X.; Sun, X.; Li, P. ACGT-Net: Adaptive Cuckoo Refinement-Based Graph Transfer Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5521314. [Google Scholar] [CrossRef]
  11. Yu, H.; Gao, L.; Liao, W.; Zhang, B.; Zhuang, L.; Song, M.; Chanussot, J. Global Spatial and Local Spectral Similarity-Based Manifold Learning Group Sparse Representation for Hyperspectral Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3043–3056. [Google Scholar] [CrossRef]
  12. Gao, H.; Yang, Y.; Li, C.; Gao, L.; Zhang, B. Multiscale Residual Network With Mixed Depthwise Convolution for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3396–3408. [Google Scholar] [CrossRef]
  13. Yan, L.; Fan, B.; Liu, H.; Huo, C.; Xiang, S.; Pan, C. Triplet Adversarial Domain Adaptation for Pixel-Level Classification of VHR Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3558–3573. [Google Scholar] [CrossRef]
  14. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  15. Ye, Q.; Huang, P.; Zhang, Z.; Zheng, Y.; Fu, L.; Yang, W. Multiview Learning With Robust Double-Sided Twin SVM. IEEE Trans. Cybern. 2022, 52, 12745–12758. [Google Scholar] [CrossRef] [PubMed]
  16. Ham, J.; Chen, Y.; Crawford, M.; Ghosh, J. Investigation of the random forest framework for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 492–501. [Google Scholar] [CrossRef]
  17. Guo, Y.; Han, S.; Li, Y.; Zhang, C.; Bai, Y. K-Nearest Neighbor combined with guided filter for hyperspectral image classification. Procedia Comput. Sci. 2018, 129, 159–165. [Google Scholar] [CrossRef]
  18. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images With Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873. [Google Scholar] [CrossRef]
  19. Dalla Mura, M.; Villa, A.; Benediktsson, J.A.; Chanussot, J.; Bruzzone, L. Classification of Hyperspectral Images by Using Extended Morphological Attribute Profiles and Independent Component Analysis. IEEE Geosci. Remote Sens. Lett. 2011, 8, 542–546. [Google Scholar] [CrossRef]
  20. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef]
  21. Lu, W.; Wang, X.; Sun, L.; Zheng, Y. Spectral–Spatial Feature Extraction for Hyperspectral Image Classification Using Enhanced Transformer with Large-Kernel Attention. Remote Sens. 2024, 16, 67. [Google Scholar] [CrossRef]
  22. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
  23. Zhao, W.; Du, S. Spectral–Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
  24. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  25. He, M.; Li, B.; Chen, H. Multi-scale 3D deep convolutional neural network for hyperspectral image classification. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3904–3908. [Google Scholar] [CrossRef]
  26. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef]
  27. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative Adversarial Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063. [Google Scholar] [CrossRef]
  28. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef]
  29. Wan, S.; Gong, C.; Zhong, P.; Du, B.; Zhang, L.; Yang, J. Multiscale Dynamic Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3162–3177. [Google Scholar] [CrossRef]
  30. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Plaza, A.; Li, J. Visual Attention-Driven Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8065–8080. [Google Scholar] [CrossRef]
  31. Sun, H.; Zheng, X.; Lu, X.; Wu, S. Spectral–Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3232–3245. [Google Scholar] [CrossRef]
  32. Hang, R.; Li, Z.; Liu, Q.; Ghamisi, P.; Bhattacharyya, S.S. Hyperspectral Image Classification With Attention-Aided CNNs. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2281–2293. [Google Scholar] [CrossRef]
  33. Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-Branch Multi-Attention Mechanism Network for Hyperspectral Image Classification. Remote Sens. 2019, 11, 1307. [Google Scholar] [CrossRef]
  34. Zhu, M.; Jiao, L.; Liu, F.; Yang, S.; Wang, J. Residual Spectral–Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 449–462. [Google Scholar] [CrossRef]
  35. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  36. Sun, L.; Wang, X.; Zheng, Y.; Wu, Z.; Fu, L. Multiscale 3-D–2-D Mixed CNN and Lightweight Attention-Free Transformer for Hyperspectral and LiDAR Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 2100116. [Google Scholar] [CrossRef]
  37. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification With Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5518615. [Google Scholar] [CrossRef]
  38. He, X.; Chen, Y.; Lin, Z. Spatial-Spectral Transformer for Hyperspectral Image Classification. Remote Sens. 2021, 13, 498. [Google Scholar] [CrossRef]
  39. Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–Spatial Feature Tokenization Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522214. [Google Scholar] [CrossRef]
  40. Mei, S.; Song, C.; Ma, M.; Xu, F. Hyperspectral Image Classification Using Group-Aware Hierarchical Transformer. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5539014. [Google Scholar] [CrossRef]
  41. Fang, Y.; Ye, Q.; Sun, L.; Zheng, Y.; Wu, Z. Multiattention Joint Convolution Feature Representation With Lightweight Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5513814. [Google Scholar] [CrossRef]
  42. Roy, S.K.; Deria, A.; Shah, C.; Haut, J.M.; Du, Q.; Plaza, A. Spectral–Spatial Morphological Attention Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5503615. [Google Scholar] [CrossRef]
  43. Gao, H.; Chen, Z.; Xu, F. Adaptive spectral-spatial feature fusion network for hyperspectral image classification using limited training samples. Int. J. Appl. Earth Obs. Geoinf. 2022, 107, 102687. [Google Scholar] [CrossRef]
  44. Ben Hamida, A.; Benoit, A.; Lambert, P.; Ben Amar, C. 3-D Deep Learning Approach for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434. [Google Scholar] [CrossRef]
Figure 1. Overview diagram of the proposed TNCCA model.
Figure 2. Dual-branch multi-scale shallow feature extraction module.
Figure 3. Transformer with CNN-enhanced cross-attention module.
Figure 4. Presentation of the Houston2013 dataset. (a) Pseudo-color image composed of three spectral bands. (b) Ground truth map.
Figure 5. Presentation of the Trento dataset. (a) Pseudo-color image composed of three spectral bands. (b) Ground truth map.
Figure 6. Presentation of the Pavia University dataset. (a) Pseudo-color image composed of three spectral bands. (b) Ground truth map.
Figure 7. Validation of the optimal hyperparameters with different classification metrics for the Houston2013 dataset. (a) Batch size. (b) Size of the cubic patch in the first branch. (c) Size of the cubic patch in the second branch.
Figure 8. Validation of the optimal hyperparameters with different classification metrics for the Trento dataset. (a) Batch size. (b) Size of the cubic patch in the first branch. (c) Size of the cubic patch in the second branch.
Figure 9. Validation of the optimal hyperparameters with different classification metrics for the Pavia University dataset. (a) Batch size. (b) Size of the cubic patch in the first branch. (c) Size of the cubic patch in the second branch.
Figure 10. Visualization of classification results using different classification methods with the Houston2013 dataset. (a) Ground truth map, (b) SVM (OA = 33.24%), (c) 1D-CNN (OA = 33.63%), (d) 3D-CNN (OA = 48.39%), (e) M3D-CNN (OA = 68.44%), (f) 3D-DLA (OA = 70.49%), (g) hybrid (OA = 77.29%), (h) SSFTT (OA = 87.85%), (i) morphFormer (OA = 87.17%), and (j) the proposed method (OA = 90.72%).
Figure 11. Visualization of classification results using different classification methods with the Trento dataset. (a) Ground truth map, (b) SVM (OA = 67.74%), (c) 1D-CNN (OA = 70.75%), (d) 3D-CNN (OA = 91.27%), (e) M3D-CNN (OA = 94.91%), (f) 3D-DLA (OA = 92.91%), (g) hybrid (OA = 92.21%), (h) SSFTT (OA = 97.88%), (i) morphFormer (OA = 98.20%), and (j) the proposed method (OA = 98.98%).
Figure 12. Visualization of classification results using different classification methods with the Pavia University dataset. (a) Ground truth map, (b) SVM (OA = 68.90%), (c) 1D-CNN (OA = 74.54%), (d) 3D-CNN (OA = 82.68%), (e) M3D-CNN (OA = 92.56%), (f) 3D-DLA (OA = 89.36%), (g) hybrid (OA = 92.72%), (h) SSFTT (OA = 96.96%), (i) morphFormer (OA = 96.99%), and (j) the proposed method (OA = 98.59%).
Table 2. Comparison of classification performance using the Houston2013 dataset with different methods (the optimal results are shown in bold, and the names of land covers are shown in italics).

| Class | SVM [14] | 1D-CNN [22] | 3D-CNN [24] | M3D-CNN [25] | 3D-DLA [44] | Hybrid [26] | SSFTT [39] | morphFormer [42] | TNCCA |
|---|---|---|---|---|---|---|---|---|---|
| Healthy Grass | 85.78 ± 0.00 | 85.70 ± 0.00 | 73.99 ± 6.96 | 94.74 ± 5.10 | 85.11 ± 0.28 | 89.68 ± 2.88 | 85.78 ± 6.71 | 96.66 ± 1.79 | 94.82 ± 2.49 |
| Stressed Grass | 1.39 ± 2.41 | 0.00 ± 0.00 | 41.94 ± 0.17 | 81.35 ± 5.60 | 75.10 ± 7.70 | 83.42 ± 2.56 | 89.79 ± 6.94 | 96.21 ± 1.69 | 96.13 ± 1.78 |
| Synthetic Grass | 0.00 ± 0.00 | 0.00 ± 0.00 | 47.89 ± 4.20 | 90.33 ± 3.02 | 92.89 ± 1.01 | 73.04 ± 11.29 | 92.41 ± 10.15 | 98.26 ± 0.72 | 99.34 ± 0.32 |
| Tree | 42.77 ± 28.19 | 37.85 ± 8.81 | 48.98 ± 16.58 | 84.68 ± 3.12 | 93.37 ± 3.11 | 72.64 ± 21.09 | 90.99 ± 3.37 | 93.15 ± 1.05 | 90.85 ± 2.58 |
| Soil | 61.76 ± 52.71 | 95.09 ± 0.92 | 78.78 ± 2.06 | 87.66 ± 8.48 | 96.61 ± 1.74 | 99.72 ± 0.46 | 99.75 ± 0.21 | 92.62 ± 5.57 | 100 ± 0.00 |
| Water | 0.00 ± 0.00 | 0.00 ± 0.00 | 16.77 ± 2.63 | 38.61 ± 7.19 | 38.81 ± 16.97 | 78.98 ± 11.47 | 82.60 ± 1.35 | 79.60 ± 3.43 | 91.55 ± 3.31 |
| Residential | 87.94 ± 1.98 | 95.75 ± 0.04 | 48.96 ± 2.08 | 53.30 ± 6.21 | 49.61 ± 3.50 | 53.01 ± 3.43 | 74.42 ± 3.87 | 77.71 ± 4.59 | 82.54 ± 2.94 |
| Commercial | 30.65 ± 12.64 | 0.00 ± 0.00 | 29.54 ± 3.55 | 54.49 ± 7.91 | 45.83 ± 2.60 | 70.94 ± 2.03 | 69.23 ± 2.59 | 67.28 ± 1.78 | 82.88 ± 3.53 |
| Road | 7.02 ± 6.74 | 86.54 ± 2.98 | 39.87 ± 16.20 | 58.59 ± 6.57 | 68.44 ± 0.42 | 55.82 ± 0.80 | 87.27 ± 3.51 | 88.86 ± 3.91 | 84.57 ± 4.02 |
| Highway | 17.55 ± 30.41 | 0.21 ± 0.38 | 45.59 ± 7.91 | 60.90 ± 4.45 | 57.17 ± 33.02 | 77.91 ± 3.03 | 95.08 ± 1.07 | 87.57 ± 9.54 | 91.59 ± 3.60 |
| Railway | 23.73 ± 28.90 | 0.00 ± 0.00 | 39.98 ± 10.29 | 38.37 ± 8.64 | 55.79 ± 30.36 | 72.03 ± 5.33 | 92.58 ± 5.11 | 87.18 ± 1.06 | 81.26 ± 5.85 |
| Parking Lot 1 | 7.88 ± 13.66 | 2.48 ± 3.55 | 39.68 ± 13.03 | 73.16 ± 9.21 | 73.32 ± 16.06 | 89.62 ± 2.33 | 83.48 ± 5.40 | 74.50 ± 3.30 | 89.46 ± 2.26 |
| Parking Lot 2 | 0.50 ± 0.69 | 0.00 ± 0.00 | 41.48 ± 4.72 | 39.87 ± 10.37 | 29.45 ± 8.02 | 52.94 ± 7.05 | 84.69 ± 5.40 | 84.77 ± 3.14 | 90.77 ± 3.21 |
| Tennis Court | 0.00 ± 0.00 | 0.00 ± 0.00 | 40.33 ± 26.68 | 51.02 ± 6.60 | 74.92 ± 11.14 | 100 ± 0.00 | 99.76 ± 0.23 | 90.09 ± 3.42 | 100 ± 0.00 |
| Running Track | 62.68 ± 54.52 | 0.00 ± 0.00 | 67.99 ± 15.59 | 85.96 ± 7.39 | 97.54 ± 1.86 | 100 ± 0.00 | 100 ± 0.00 | 97.65 ± 0.57 | 100 ± 0.00 |
| OA (%) | 33.24 ± 5.37 | 33.63 ± 0.67 | 48.39 ± 2.48 | 68.44 ± 1.81 | 70.49 ± 1.27 | 77.29 ± 1.19 | 87.85 ± 1.20 | 87.17 ± 0.80 | 90.72 ± 0.89 |
| AA (%) | 28.64 ± 4.57 | 26.91 ± 0.54 | 46.78 ± 1.52 | 66.20 ± 1.54 | 68.93 ± 0.42 | 77.98 ± 1.21 | 88.52 ± 0.82 | 87.47 ± 0.79 | 91.72 ± 0.74 |
| κ × 100 | 27.46 ± 5.69 | 27.58 ± 0.73 | 44.12 ± 2.63 | 65.82 ± 1.95 | 68.06 ± 1.37 | 75.44 ± 1.29 | 86.87 ± 1.29 | 86.13 ± 0.87 | 89.97 ± 0.97 |
Table 3. Comparison of classification performance using the Trento dataset with different methods (the optimal results are shown in bold, and the names of land covers are shown in italics).

| Class | SVM [14] | 1D-CNN [22] | 3D-CNN [24] | M3D-CNN [25] | 3D-DLA [44] | Hybrid [26] | SSFTT [39] | morphFormer [42] | TNCCA |
|---|---|---|---|---|---|---|---|---|---|
| Apple Trees | 0.37 ± 0.32 | 0.00 ± 0.00 | 78.58 ± 34.28 | 97.72 ± 0.55 | 86.28 ± 4.62 | 99.04 ± 0.56 | 99.64 ± 0.23 | 99.49 ± 0.23 | 99.58 ± 0.25 |
| Buildings | 66.96 ± 5.56 | 73.56 ± 0.67 | 75.59 ± 11.99 | 80.15 ± 3.32 | 82.65 ± 1.42 | 67.16 ± 12.93 | 98.08 ± 0.38 | 91.66 ± 1.63 | 98.32 ± 0.31 |
| Ground | 0.00 ± 0.00 | 0.00 ± 0.00 | 45.44 ± 16.43 | 71.49 ± 13.08 | 57.63 ± 18.06 | 35.43 ± 14.97 | 51.26 ± 2.53 | 91.20 ± 5.05 | 97.79 ± 1.90 |
| Woods | 92.87 ± 0.93 | 89.39 ± 0.53 | 98.11 ± 2.59 | 98.82 ± 0.55 | 97.75 ± 0.40 | 100 ± 0.00 | 100 ± 0.00 | 99.97 ± 0.01 | 100 ± 0.00 |
| Vineyard | 75.15 ± 1.61 | 84.40 ± 1.00 | 99.54 ± 0.13 | 99.49 ± 0.45 | 99.49 ± 0.07 | 100 ± 0.00 | 99.91 ± 0.09 | 99.92 ± 0.11 | 100 ± 0.00 |
| Roads | 67.53 ± 2.70 | 70.08 ± 1.67 | 81.61 ± 8.08 | 82.04 ± 4.15 | 80.40 ± 2.62 | 66.89 ± 2.86 | 89.71 ± 2.49 | 92.84 ± 1.33 | 93.17 ± 1.48 |
| OA (%) | 67.74 ± 0.52 | 70.75 ± 0.22 | 91.27 ± 6.45 | 94.91 ± 0.56 | 92.91 ± 0.65 | 92.21 ± 1.19 | 97.88 ± 0.25 | 98.20 ± 0.12 | 98.98 ± 0.22 |
| AA (%) | 50.48 ± 1.17 | 52.90 ± 0.01 | 79.81 ± 10.23 | 88.28 ± 3.10 | 84.03 ± 2.45 | 78.09 ± 0.34 | 89.77 ± 0.61 | 95.85 ± 0.93 | 97.64 ± 0.62 |
| κ × 100 | 55.45 ± 0.80 | 59.46 ± 0.29 | 88.22 ± 8.81 | 93.21 ± 0.76 | 90.49 ± 0.88 | 89.54 ± 1.59 | 97.17 ± 0.33 | 97.60 ± 0.16 | 98.64 ± 0.30 |
Table 4. Comparison of classification performance using the Pavia University dataset with different methods (the optimal results are shown in bold, and the names of land covers are shown in italics).

| Class | SVM [14] | 1D-CNN [22] | 3D-CNN [24] | M3D-CNN [25] | 3D-DLA [44] | Hybrid [26] | SSFTT [39] | morphFormer [42] | TNCCA |
|---|---|---|---|---|---|---|---|---|---|
| Asphalt | 94.76 ± 0.61 | 91.32 ± 0.27 | 83.24 ± 3.03 | 94.44 ± 1.69 | 88.32 ± 5.04 | 92.46 ± 0.93 | 97.91 ± 0.66 | 96.75 ± 0.98 | 98.61 ± 0.57 |
| Meadows | 92.45 ± 1.20 | 95.58 ± 1.22 | 93.89 ± 4.27 | 98.14 ± 1.35 | 96.42 ± 1.06 | 99.95 ± 0.07 | 98.39 ± 0.33 | 99.75 ± 0.20 | 99.98 ± 0.02 |
| Gravel | 0.00 ± 0.00 | 0.00 ± 0.00 | 54.52 ± 20.93 | 68.65 ± 5.04 | 80.95 ± 1.39 | 94.80 ± 0.50 | 82.53 ± 1.10 | 82.17 ± 1.63 | 87.11 ± 0.87 |
| Trees | 15.81 ± 2.28 | 60.44 ± 4.77 | 66.00 ± 21.73 | 95.57 ± 1.52 | 91.10 ± 1.73 | 76.81 ± 4.40 | 95.73 ± 1.67 | 96.03 ± 1.11 | 98.48 ± 0.55 |
| Metal Sheets | 99.07 ± 0.18 | 99.44 ± 0.17 | 90.29 ± 15.20 | 99.62 ± 0.52 | 97.99 ± 1.36 | 86.76 ± 19.29 | 100 ± 0.00 | 99.82 ± 0.30 | 100 ± 0.00 |
| Bare Soil | 18.51 ± 6.58 | 9.58 ± 1.72 | 78.17 ± 8.13 | 77.51 ± 11.15 | 74.37 ± 2.27 | 99.43 ± 0.87 | 99.66 ± 0.42 | 99.16 ± 1.18 | 99.69 ± 0.14 |
| Bitumen | 0.00 ± 0.00 | 0.00 ± 0.00 | 57.27 ± 5.24 | 81.87 ± 7.21 | 81.67 ± 6.23 | 81.67 ± 21.37 | 99.16 ± 0.62 | 79.87 ± 4.37 | 99.56 ± 0.31 |
| Bricks | 86.91 ± 2.98 | 92.42 ± 1.27 | 73.79 ± 8.03 | 92.83 ± 2.12 | 77.66 ± 10.19 | 72.84 ± 7.42 | 95.40 ± 1.81 | 95.70 ± 1.19 | 95.93 ± 1.50 |
| Shadows | 0.00 ± 0.00 | 98.36 ± 0.53 | 57.78 ± 21.11 | 96.97 ± 1.61 | 94.34 ± 2.30 | 64.81 ± 14.40 | 82.37 ± 7.04 | 93.85 ± 1.60 | 98.11 ± 0.70 |
| OA (%) | 68.90 ± 0.76 | 74.54 ± 0.28 | 82.68 ± 1.84 | 92.56 ± 1.48 | 89.36 ± 1.32 | 92.72 ± 1.96 | 96.96 ± 0.41 | 96.99 ± 0.47 | 98.59 ± 0.12 |
| AA (%) | 45.28 ± 0.61 | 60.79 ± 0.23 | 72.77 ± 1.63 | 89.51 ± 2.76 | 86.98 ± 2.04 | 85.50 ± 6.27 | 94.57 ± 0.84 | 93.68 ± 0.97 | 97.50 ± 0.17 |
| κ × 100 | 56.26 ± 0.98 | 64.42 ± 0.24 | 76.85 ± 2.48 | 90.04 ± 2.07 | 85.81 ± 1.77 | 90.29 ± 2.63 | 95.98 ± 0.54 | 96.01 ± 0.63 | 98.14 ± 0.16 |
Table 5. The inference speed of TNCCA on different datasets (epoch = 500).

| Dataset | Training time (min) | Testing time (min) |
|---|---|---|
| Houston2013 | 0.58 | 13.85 |
| Trento | 0.91 | 23.47 |
| Pavia University | 1.26 | 33.39 |
Table 6. Ablation experiments on different modules (using the Houston2013 dataset). A check mark (✓) indicates that the module is kept, a cross (×) that it is removed, and 2D-Conv that the multi-scale 2D module is replaced by a regular 2D convolutional layer.

| Case | 3D-Conv | Ms2D-Conv | Tokenizer | TE | OA (%) | AA (%) | κ × 100 |
|---|---|---|---|---|---|---|---|
| 1 | ✓ | × | × | × | 48.39 | 46.78 | 44.12 |
| 2 | ✓ | ✓ | ✓ | × | 85.81 | 85.63 | 84.65 |
| 3 | × | 2D-Conv | ✓ | ✓ | 89.58 | 90.06 | 88.73 |
| 4 | ✓ | 2D-Conv | ✓ | ✓ | 90.55 | 91.52 | 89.56 |
| 5 | ✓ | ✓ | ✓ | ✓ | 90.72 | 91.72 | 89.97 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Wang, X.; Sun, L.; Lu, C.; Li, B. A Novel Transformer Network with a CNN-Enhanced Cross-Attention Mechanism for Hyperspectral Image Classification. Remote Sens. 2024, 16, 1180. https://doi.org/10.3390/rs16071180
