Article

Multi-Feature Cross Attention-Induced Transformer Network for Hyperspectral and LiDAR Data Classification

by Zirui Li 1, Runbang Liu 1, Le Sun 2,3 and Yuhui Zheng 3,*
1 Ocean College, Jiangsu University of Science and Technology, Zhenjiang 212100, China
2 Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing 210044, China
3 School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(15), 2775; https://doi.org/10.3390/rs16152775
Submission received: 18 June 2024 / Revised: 24 July 2024 / Accepted: 26 July 2024 / Published: 29 July 2024

Abstract:
Transformers have shown remarkable success in modeling sequential data and capturing intricate patterns over long distances. Their self-attention mechanism allows for efficient parallel processing and scalability, making them well-suited for the high-dimensional data in hyperspectral and LiDAR imagery. However, further research is needed on how to more deeply integrate the features of two modalities in attention mechanisms. In this paper, we propose a novel Multi-Feature Cross Attention-Induced Transformer Network (MCAITN) designed to enhance the classification accuracy of hyperspectral and LiDAR data. The MCAITN integrates the strengths of both data modalities by leveraging a cross-attention mechanism that effectively captures the complementary information between hyperspectral and LiDAR features. By utilizing a transformer-based architecture, the network is capable of learning complex spatial-spectral relationships and long-range dependencies. The cross-attention module facilitates the fusion of multi-source data, improving the network’s ability to discriminate between different land cover types. Extensive experiments conducted on benchmark datasets demonstrate that the MCAITN outperforms state-of-the-art methods in terms of classification accuracy and robustness.

1. Introduction

Hyperspectral image classification (HSIC) [1] is of great significance in the field of remote sensing, and it is widely used in agriculture [2,3,4], environmental monitoring [5,6], urban planning [7,8,9], military reconnaissance [10,11], and other fields. HSI can provide detailed spectral features by capturing spectral information in multiple continuous bands, helping distinguish different types of ground objects [12]. However, due to the high dimensionality and complexity of hyperspectral data, classification solely relying on HSI faces challenges such as data redundancy and noise interference [13,14]. For this reason, joint classification, combined with LiDAR data, has become an effective solution. LiDAR data provides high-resolution spatial and structural information, which makes up for the lack of spatial resolution of HSI and complements the spectral information of hyperspectral data. The combination of the two types of data can significantly improve classification performance [15]. By jointly utilizing the three-dimensional spatial information of LiDAR and the spectral information of HSI, we can more accurately identify and classify ground objects and reduce confusion. The fusion of hyperspectral and LiDAR data can provide rich information in multiple dimensions, making the classification results more comprehensive and reliable. Therefore, it is very necessary to combine HSI and LiDAR data for joint classification.
In the past five years, HSIC methods have made significant progress [16], mainly reflected in the widespread application of deep learning technology and the development of multi-source data fusion methods [17]. Recently, Yang et al. proposed an HSIC method based on a multi-level feature fusion network of interactive transformer and convolutional neural networks (CNNs) [18]. In addition, Yang et al. also proposed a method based on deformable dilated convolution pyramid feature extraction [19]. Cao et al. investigated the use of convolutional neural networks (CNNs) combined with active learning for classifying HSI [20]. Xue et al. explored a self-calibrating convolution [21] for collaborative classification of hyperspectral and LiDAR data. From the perspective of development history, HSIC methods have experienced a transformation from traditional machine learning methods to deep neural network methods. Early HSIC methods mainly relied on machine learning algorithms such as SVM [22,23,24] and random forest (RF) [25,26,27]. These methods improved the accuracy of classification to a certain extent. However, with the increase in data volume and computing power, deep learning methods [28,29,30] have gradually become the mainstream of HSIC. Deep learning models such as CNN, RNN, and GAN [31] have greatly improved the performance of HSIC by automatically extracting multi-level feature representations. Although deep learning methods have made significant progress, there are still some problems, such as dependence on large amounts of annotated data, a high computational cost, and insufficient generalizability to different data sets. To overcome these limitations, combining multi-source data (such as LiDAR data) for classification becomes an effective solution.
Deep learning has demonstrated remarkable capabilities in extracting features from raw data and adjusting parameters, particularly through its multi-layered network structure that can automatically capture complex feature representations from data [32,33,34]. Common deep learning structures include RNN [35], LSTM [36], CNN [37], etc. Among these, CNNs have particularly strong feature extraction capabilities and can automatically learn deep semantic features from images [38,39]. Some approaches based on CNN depth features have emerged. For instance, Li et al. [40] proposed a spatial-spectral saliency reinforcement network (Sal2RN) to enhance joint classification performance. Despite these advancements, the mainstream methods still face challenges due to the very different dimensions and feature distribution of HSI and LiDAR data [41,42]. To address this, Gao et al. [43] proposed an adaptive, multiscale spatial-spectral enhancement network (AMSSE-Net) that includes an adaptive feature-fusion module. In addition to CNNs, other advanced network structures have been used for joint HSI and LiDAR data classification to improve accuracy, such as autoencoders (SAEs) [44], GCNs [45], GANs [46], etc. While these classical deep learning methods can effectively extract local features from images, they are not as effective in dealing with global relationships, and they lack a consideration of location information. To address this, the transformer network has been applied to joint classification [47].
In learning overall sequence features, transformers [48,49,50,51] rely on their global modeling capability, self-attention mechanism, adaptability to sequences of different lengths, and multitasking ability. They are widely used in joint hyperspectral and LiDAR classification; combining HSI and LiDAR for land-cover classification can establish long-range dependencies and helps make full use of spectral information and global features. Through multi-branch networks that combine self- and cross-guided attention mechanisms, effective fusion and classification of hyperspectral and LiDAR data can be achieved. Ni et al. [52] proposed a model called the Multiscale Head Selection Transformer (MHST). Through a multiscale head selection mechanism, the transformer network selects and integrates hyperspectral and LiDAR features at various scales. This mechanism allows MHST to capture features at different scales, enhancing classification accuracy and robustness. Yang et al. [53] proposed a LiDAR-guided cross-attention fusion method. This approach uses LiDAR data to guide band selection in HSI and employs a cross-attention mechanism to fuse LiDAR and hyperspectral data, thereby enhancing classification performance. Roy et al. [54] proposed a Cross-HL Attention Transformer model, extending the self-attention mechanism by cross-attending to hyperspectral and LiDAR data. This approach facilitates effective fusion and feature extraction from multiple data sources. The transformer network enables end-to-end classification processing, yielding superior classification results. The implementation of cross-attention demonstrates the capability to integrate information from diverse data sources in hyperspectral and LiDAR classification, thereby enhancing the performance and accuracy of classification models. By employing multi-feature fusion, the multidimensional features of land cover are captured more comprehensively, improving the classifier's ability to recognize complex scenes and thus achieving more precise hyperspectral and LiDAR data classification.
In summary, the contributions of the proposed MCAITN method are threefold:
(1) The proposed method introduces a novel architecture that leverages the strengths of transformer networks to enhance classification accuracy in HSI and LiDAR data fusion. The MCAITN effectively captures the complementary information between the two modalities, leading to improved feature representation and classification performance.
(2) The MCAITN incorporates a cross-attention module that selectively focuses on the most relevant features from each modality. This targeted attention mechanism enables the network to emphasize informative features from both HSI and LiDAR data, enhancing its ability to discriminate between different land-cover types and improving the overall classification accuracy.
(3) Comprehensive experiments on benchmark datasets reveal that MCAITN exceeds current SOTA methods in classification accuracy, resilience to noise and data variability, and computational efficiency.

2. Materials and Methods

The MCAITN architecture is illustrated in Figure 1. Initially, PCA is applied to reduce the dimensionality of the original HSI data. Then, the LiDAR data and the dimensionality-reduced HSI data are segmented into small three-dimensional blocks that serve as the input of the shallow CNN feature-extraction module. Next, the extracted joint spectral–spatial features are fed into the HSI and LiDAR branches for processing so as to preserve the local context and crucial information. Finally, they are input into the cross-feature enhanced-attention transformer encoder for comprehensive feature cross-learning, and the classification token is extracted for the final classification; the number of encoders is N.

2.1. HSI and LiDAR-Data Preprocessing

Assume a hyperspectral data cube and LiDAR data, denoted as $X \in \mathbb{R}^{W \times H \times C}$ and $Y \in \mathbb{R}^{W \times H}$, respectively, where X and Y are the original inputs to the whole classification framework, W and H correspond to the width and height of the HSI, and C represents the number of bands.
Typically, HSI is rich in spectral bands, which contain very valuable information but also introduce a lot of redundancy. To reduce the computational complexity, PCA is used to reduce the dimension of the HSI data. The label of each pixel in X is denoted as a one-hot vector, $Z \in \mathbb{R}^{1 \times 1 \times B}$, where B denotes the number of land-cover categories. Then, PCA is performed along the spectral dimension of the HSI data X. After the dimensionality reduction, the spatial resolution of X remains unchanged, but the number of spectral bands is reduced from C to c, i.e., $X_{pca} \in \mathbb{R}^{W \times H \times c}$. In other words, PCA eliminates the redundant spectral information in the HSI and retains the spatial information without degradation.
Then, we divided $X_{pca}$ into small, overlapping three-dimensional adjacent patches. Each patch was denoted as $X_{patch} \in \mathbb{R}^{m \times m \times c}$, where $m \times m$ represents the spatial size of the patch, and c represents the number of spectral bands. The label for each patch was derived from the ground-truth label of the central pixel within that patch. Taking the patch $X_{patch}^{ij}$ with central pixel position $(i, j)$ as an example, its spatial coverage ranged from $i-(m-1)/2$ to $i+(m-1)/2$ in width and from $j-(m-1)/2$ to $j+(m-1)/2$ in height, and it contained all spectra within this spatial range. It should be emphasized that, when generating patches for edge pixels, the margin on one side of these pixels is smaller than $(m-1)/2$, so a padding operation is required. The resulting patches were divided into training and test sets according to the given proportion.
For the LiDAR image Y, a similar operation was performed: it was segmented into small overlapping patches, each denoted as $Y_{patch} \in \mathbb{R}^{n \times n}$, where $n \times n$ denotes the spatial size of the patch.
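For reference, the preprocessing above can be sketched with NumPy and scikit-learn as follows; the helper names (apply_pca, extract_patches) and the zero-padding choice at the image borders are illustrative assumptions rather than the released implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def apply_pca(hsi, n_components=30):
    """Reduce the spectral dimension of an HSI cube (W, H, C) -> (W, H, c)."""
    W, H, C = hsi.shape
    flat = hsi.reshape(-1, C)                      # treat each pixel as a C-dimensional sample
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(W, H, n_components)

def extract_patches(image, size):
    """Cut overlapping patches centered on every pixel; edge pixels are zero-padded."""
    pad = (size - 1) // 2
    if image.ndim == 2:                            # LiDAR raster (W, H)
        padded = np.pad(image, pad, mode="constant")
    else:                                          # HSI cube (W, H, c): pad spatial dims only
        padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    W, H = image.shape[:2]
    patches = [padded[i:i + size, j:j + size] for i in range(W) for j in range(H)]
    return np.stack(patches)                       # (W*H, size, size[, c])

# Example with the values used later in the paper: 30 PCA bands,
# 11x11 HSI patches, and 5x5 LiDAR patches (MUUFL setting).
# hsi_patches = extract_patches(apply_pca(hsi, 30), 11)
# lidar_patches = extract_patches(lidar, 5)
```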

2.2. Shallow CNN Feature-Extraction Module

In recent years, CNNs have become extensively utilized in HSIC, primarily due to their remarkable proficiency in extracting local features, setting them apart as a dominant approach in the field. In MCAITN, we use a shallow CNN feature-extraction module to effectively extract the spectral and spatial information of HSI. For LiDAR data, we use a two-dimensional convolution block to extract its features. The shallow CNN feature extraction module mainly consists of a 3D convolution block and a 2D convolution block. Such a structure helps effectively integrate spectral and spatial information in the early stage of feature extraction.
First, we introduce a 3D convolution-based block to process the input 3D neighborhood patch. The convolution block consists of a 3D convolution layer with a kernel size of 8 @ 3 × 3 × 3 and a nonlinear activation layer. The stride and padding are 1 and 0, respectively. Specifically, the 3D convolution layer performs convolution operations along the spectral and spatial dimensions to generate a 3D feature map containing spectral–spatial features. The calculation process of the 3D convolution block is as follows:
$F_{3d} = \Phi\left( X_{patch} \,\Theta\, w_{3d} + b_{3d} \right)$
where $F_{3d}$ represents the three-dimensional feature map, and $w_{3d}$ and $b_{3d}$ represent the weight and bias parameters, respectively. $\Theta$ is the three-dimensional convolution operator, and $\Phi$ is the activation function.
Then, the obtained feature map is flattened along the spectral dimension and used as the input of the 2D convolution block. Similarly, the 2D convolution block is composed of a 2D convolution layer with a kernel size of 64 @ 3 × 3 and a subsequent nonlinear activation layer. The 2D convolution block performs convolution along the spatial dimensions to extract more discriminative spatial information. The calculation process is as follows:
$F_{2d} = \Phi\left( f(F_{3d}) \odot w_{2d} + b_{2d} \right)$
where $F_{2d}$ represents the 2D feature map, f denotes the flattening operation, and $w_{2d}$ and $b_{2d}$ denote the weight and bias parameters, respectively. $\odot$ is the 2D convolution operator, and $\Phi$ is the activation function.
For LiDAR data, since they are two-dimensional, we use the two-dimensional convolution block in the shallow CNN feature-extraction module to extract their spatial features. The calculation process is as follows:
$Y_{2d} = \Phi\left( Y_{patch} \odot w_{2d} + b_{2d} \right)$
Finally, we flatten the 2D feature maps $F_{2d}$ and $Y_{2d}$ along the spatial dimension and output the features $F_H$ and $F_L$. Through this step, we achieve the exploration of spatial and spectral information in the data at a relatively low computational cost.
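A minimal PyTorch sketch of this module is given below. The filter counts follow the stated kernel settings (8 three-dimensional kernels and 64 two-dimensional kernels); the ReLU activation, the channels-first reshaping, and the 3 × 3 spatial kernel of the 2D block are assumptions.

```python
import torch
import torch.nn as nn

class ShallowFeatureExtractor(nn.Module):
    """3D conv over (spectral, spatial) dims for HSI and 2D conv for LiDAR, as in Section 2.2."""
    def __init__(self, pca_bands=30, embed_dim=64):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3),   # 8 @ 3x3x3, stride 1, no padding
            nn.ReLU(),
        )
        # after Conv3d the spectral axis shrinks from pca_bands to pca_bands - 2
        self.conv2d_hsi = nn.Sequential(
            nn.Conv2d(8 * (pca_bands - 2), embed_dim, kernel_size=3),  # 64 @ 3x3 (assumed 2D kernel)
            nn.ReLU(),
        )
        self.conv2d_lidar = nn.Sequential(
            nn.Conv2d(1, embed_dim, kernel_size=3),
            nn.ReLU(),
        )

    def forward(self, hsi_patch, lidar_patch):
        # hsi_patch: (B, 1, c, m, m); lidar_patch: (B, 1, n, n)
        x = self.conv3d(hsi_patch)                       # (B, 8, c-2, m-2, m-2)
        x = x.flatten(1, 2)                              # fold the spectral maps into channels
        x = self.conv2d_hsi(x)                           # (B, 64, m-4, m-4)
        y = self.conv2d_lidar(lidar_patch)               # (B, 64, n-2, n-2)
        f_h = x.flatten(2).transpose(1, 2)               # (B, (m-4)^2, 64) token sequence F_H
        f_l = y.flatten(2).transpose(1, 2)               # (B, (n-2)^2, 64) token sequence F_L
        return f_h, f_l
```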

2.3. HSI Semantic Tokenizer and LiDAR Gaussian-Weighted Feature Tokenizer

As shown in Figure 1, for HSI, we use position embedding to encode the position information of each semantic token obtained from the feature $F_{2d}$ extracted via the shallow CNN feature-extraction module (flattened into $F_H$). Each token is represented by $[\, F_H^1, F_H^2, \ldots, F_H^w \,]$. These tokens are concatenated with a learnable classification token, $T_0^{cls}$, used for the classification task. The position encoding, $PE_{pos}$, is then added to the token representation. The resulting semantic token embedding sequence is as follows:
$T_H = [\, T_0^{cls}, F_H^1, F_H^2, \ldots, F_H^w \,] + PE_{pos}$
For LiDAR data, we apply semantic tokenization so that high-level semantic concepts can be represented and processed at the level of LiDAR features. The flattened input feature map is defined as $F_L \in \mathbb{R}^{nn \times z}$, where $nn$ is the flattened spatial size, and z is the number of channels. The feature token is defined as $T \in \mathbb{R}^{w \times z}$, where w denotes the number of tokens. For the feature mapping of $F_L$, $T_L$ can be obtained from the following formula:
$T_L = \underbrace{\mathrm{softmax}(F_L W_a)^{\top}}_{A} \, F_L$
where $W_a$ represents the weight matrix initialized from a Gaussian distribution, and $F_L W_a$ denotes a pointwise (1 × 1) projection that maps $F_L$ into a semantic group, denoted as A. The $\mathrm{softmax}(\cdot)$ function is then applied to emphasize the relatively crucial semantic components, and the result is combined with $F_L$ to yield the semantic sequence $T_L = [\, F_L^1, F_L^2, \ldots, F_L^w \,]$.
Finally, we input the obtained $T_H$ and $T_L$ into the CFETE module to learn the relationships between the high-level semantic features.
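The two tokenizers can be sketched in PyTorch as follows: the HSI branch prepends a learnable classification token and adds learnable position embeddings, while the LiDAR branch implements $T_L = \mathrm{softmax}(F_L W_a)^{\top} F_L$ with a Gaussian-initialized $W_a$. The token count, embedding dimension, and initialization scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HSITokenizer(nn.Module):
    """Prepend a class token and add position embeddings to the HSI feature sequence."""
    def __init__(self, num_tokens, dim=64):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens + 1, dim))

    def forward(self, f_h):                               # f_h: (B, num_tokens, dim)
        cls = self.cls_token.expand(f_h.size(0), -1, -1)
        return torch.cat([cls, f_h], dim=1) + self.pos_embed

class GaussianWeightedTokenizer(nn.Module):
    """T_L = softmax(F_L W_a)^T F_L, with W_a drawn from a Gaussian distribution."""
    def __init__(self, dim=64, num_tokens=4):
        super().__init__()
        self.w_a = nn.Parameter(torch.randn(dim, num_tokens) * 0.02)

    def forward(self, f_l):                               # f_l: (B, n*n, dim)
        attn = torch.softmax(f_l @ self.w_a, dim=1)       # weight each spatial position per group
        return attn.transpose(1, 2) @ f_l                 # (B, num_tokens, dim) semantic tokens
```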

2.4. Cross-Feature Enhanced-Attention Transformer Encoder

As shown in Figure 1, CFETE is mainly composed of a cross-feature enhanced attention (CFEA) block and a simple, fully connected feed-forward network (FFN).
The original multi-head self-attention (MHSA) mechanism aims to establish global, long-range dependencies between input feature sequences. In order to provide MHSA with more valuable context information, we extend the traditional MHSA mechanism to CFEA and take the HSI and LiDAR semantic-feature sequences as the input of CFEA, which can be expressed as follows:
$T_{in} = [\, T_H, T_L \,]$
In the CFEA module, firstly, the two feature sequences are linearly transformed to obtain five different matrices: the query $Q_H \in \mathbb{R}^{m \times d}$, key $K_H \in \mathbb{R}^{m \times d}$, and value $V_H \in \mathbb{R}^{m \times d}$ of the HSI, and $K_L \in \mathbb{R}^{n \times d}$, $V_L \in \mathbb{R}^{n \times d}$ of the LiDAR. The linear transformation process is defined as follows:
$Q_H = T_H W_q, \quad K_H = T_H W_k, \quad V_H = T_H W_v, \quad K_L = T_L W_k, \quad V_L = T_L W_v$
where m and n are the numbers of HSI feature vectors in $T_H$ and LiDAR feature vectors in $T_L$, respectively, and d represents their dimension. $W_q$, $W_v$, and $W_k$ are learnable weight matrices. Then, Q, K, and V are divided into h parts along the d dimension, expressed as follows:
$Q_H = [\, Q_H^1, Q_H^2, \ldots, Q_H^h \,], \quad K_H = [\, K_H^1, K_H^2, \ldots, K_H^h \,], \quad V_H = [\, V_H^1, V_H^2, \ldots, V_H^h \,], \quad K_L = [\, K_L^1, K_L^2, \ldots, K_L^h \,], \quad V_L = [\, V_L^1, V_L^2, \ldots, V_L^h \,]$
where h represents the number of attention heads, $Q_H^i \in \mathbb{R}^{m \times (d/h)}$, $K_H^i \in \mathbb{R}^{m \times (d/h)}$, $V_H^i \in \mathbb{R}^{m \times (d/h)}$, $K_L^i \in \mathbb{R}^{n \times (d/h)}$, and $V_L^i \in \mathbb{R}^{n \times (d/h)}$. Next, we use $K_L$ and $V_L$ to expand $K_H$ and $V_H$; the process is as follows:
$K^i = \mathrm{Concat}(K_H^i, K_L^i), \quad V^i = \mathrm{Concat}(V_H^i, V_L^i)$
Since $K_L$ and $V_L$ are projection matrices of the LiDAR data, they preserve the local context and salient spatial feature representations. We utilize the extended K and V matrices and integrate them into the self-attention mechanism. This allows the model to consider not only the spatial–spectral characteristics of the HSI but also the feature representation of the LiDAR data during the self-attention operation, thereby incorporating a broader range of contextual information. This helps provide a more comprehensive set of information, allowing the model to better understand the relationships and dependencies between different parts of the input sequence. Furthermore, employing a multi-head mechanism allows the model to process different subspaces of information concurrently. The extended parts of K and V in each head provide additional information, strengthen the cross-fusion between the two data sources, and improve the model's generalization ability and prediction performance.
Following this, the attention scores between each Q and K in each attention head are calculated, and then these scores are converted into attention weights using the softmax function. Finally, these weights are multiplied by V. The calculation process for each head is as follows:
$H^i = \mathrm{Attention}(Q^i, K^i, V^i) = \mathrm{Softmax}\!\left( \frac{Q^i (K^i)^{\top}}{\sqrt{d}} \right) V^i$
The final output of CFEA is constructed by concatenating the attention results from all attention heads and further projecting them. We represent it as follows:
$\mathrm{CFEA}(T_{in}) = \mathrm{Concat}(H^1, H^2, \ldots, H^h) \, W_o$
where $W_o$ is the output projection parameter matrix.
Then, the output of CFEA is used as the input of FFN. The feed-forward layer consists of two FC layers with a Gaussian error linear unit (GELU) inserted in between. It is defined as follows:
$\mathrm{FFN}(X) = \mathrm{FC}_2(\mathrm{GELU}(\mathrm{FC}_1(X)))$
In summary, the entire calculation process of CFETE can be summarized as follows:
$z'_l = \mathrm{CFEA}(\mathrm{LN}(z_l)) + z_l, \qquad z_{l+1} = \mathrm{FFN}(\mathrm{LN}(z'_l)) + z'_l$
where LN is the layer norm, which alleviates the gradient-vanishing and -exploding problems, thereby speeding up the training process. $z_l$ represents the input of the $l$-th layer of CFETE, and $z'_l$ is the intermediate output after the CFEA block.
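The following PyTorch sketch condenses one CFETE layer under the definitions above: the queries come from the HSI tokens, the keys and values are formed by projecting both token sets with the shared $W_k$ and $W_v$ and concatenating them, and a pre-norm residual structure wraps CFEA and the FFN. The head count, FFN hidden width, normalization of the LiDAR tokens, and the choice to update only the HSI token stream across layers are assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class CFEA(nn.Module):
    """Cross-feature enhanced attention: HSI keys/values are extended with LiDAR keys/values."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.heads = heads
        self.scale = dim ** -0.5          # the paper scales the scores by sqrt(d)
        self.w_q = nn.Linear(dim, dim)
        self.w_k = nn.Linear(dim, dim)    # shared between the HSI and LiDAR tokens
        self.w_v = nn.Linear(dim, dim)
        self.w_o = nn.Linear(dim, dim)

    def _heads(self, x):                  # (B, N, dim) -> (B, heads, N, dim/heads)
        B, N, D = x.shape
        return x.view(B, N, self.heads, D // self.heads).transpose(1, 2)

    def forward(self, t_h, t_l):
        q = self._heads(self.w_q(t_h))
        k = torch.cat([self._heads(self.w_k(t_h)), self._heads(self.w_k(t_l))], dim=2)
        v = torch.cat([self._heads(self.w_v(t_h)), self._heads(self.w_v(t_l))], dim=2)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = (attn @ v).transpose(1, 2).flatten(2)       # concatenate the heads
        return self.w_o(out)

class CFETELayer(nn.Module):
    """Pre-norm block: z' = CFEA(LN(z)) + z; z_next = FFN(LN(z')) + z'."""
    def __init__(self, dim=64, heads=4, hidden=128):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.cfea = CFEA(dim, heads)
        self.ffn = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, t_h, t_l):
        z = self.cfea(self.norm1(t_h), self.norm1(t_l)) + t_h   # LiDAR tokens act as extra context
        return self.ffn(self.norm2(z)) + z
```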

2.5. Classification Head

To achieve the final classification, we use a multi-layer perceptron (MLP) head. Typically, an MLP consists of multiple FC layers, and the MLP head refers to its last layer. In this paper, the MLP head consists of layer norms and FC layers. The classification token is extracted from the output of CFETE and used as the input for the MLP head; the output dimension of the MLP head is equal to the total number of classes to be predicted. The unit with the highest value in this output corresponds to the predicted label for that pixel. Algorithm 1 outlines the entire execution process of the method.
Algorithm 1 MCAITN network.
Require: HSI data $X \in \mathbb{R}^{W \times H \times C}$, LiDAR data $Y \in \mathbb{R}^{W \times H}$; number of PCA bands c; patch size S; training rate $\alpha\%$.
Ensure: Predicted labels of the test set.
1: Obtain the HSI after PCA transformation, denoted as $X_{pca}$.
2: Obtain patches from $X_{pca}$ and Y, respectively, and divide the patches into the training set and the test set.
3: Set the batch size bs = 32, the learning rate lr = 5e-4, and the number of epochs e = 100.
4: for i = 1 to e do
5:   Perform Conv3D and Conv2D on the HSI patch to obtain the spatial–spectral features $F_{3d}$ and $F_{2d}$; perform Conv2D on the LiDAR patch to obtain the spatial feature $Y_{2d}$.
6:   Flatten the 2D feature maps to obtain $F_L$ and $F_H$.
7:   Use the Gaussian-weighted feature tokenizer to generate the feature sequence $T_L$ from $F_L$.
8:   Obtain $T_H$ by adding position information and an additional classification token to $F_H$.
9:   Input the features $T_H$ and $T_L$ into CFETE for feature learning.
10:   for j = 1 to L do
11:     Perform the CFEA operation according to Equation (11).
12:     Perform the FFN operation according to Equation (13).
13:   end for
14:   Input the first classification token $T_{out}^{0}$ into the MLP head to obtain the category label.
15: end for
16: Use the trained model to predict the test set.
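Assuming the modules above are wrapped in a single MCAITN nn.Module that maps an HSI patch and a LiDAR patch to class logits, Algorithm 1 reduces to a standard PyTorch training loop, sketched below; the cross-entropy loss and the data-loader interface are assumptions, since the paper does not spell them out.

```python
import torch
import torch.nn as nn

def train_mcaitn(model, train_loader, test_loader, device="cuda", epochs=100, lr=5e-4):
    """Hyperparameters follow Algorithm 1: batch size 32, lr 5e-4, 100 epochs, Adam."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()            # assumed loss; not stated in the paper

    for epoch in range(epochs):
        model.train()
        for hsi_patch, lidar_patch, label in train_loader:
            hsi_patch, lidar_patch, label = (t.to(device) for t in (hsi_patch, lidar_patch, label))
            logits = model(hsi_patch, lidar_patch)       # MLP-head output over all classes
            loss = criterion(logits, label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    # prediction on the test set: the unit with the highest value gives the label
    model.eval()
    preds = []
    with torch.no_grad():
        for hsi_patch, lidar_patch, _ in test_loader:
            logits = model(hsi_patch.to(device), lidar_patch.to(device))
            preds.append(logits.argmax(dim=1).cpu())
    return torch.cat(preds)
```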

3. Experiments

To validate the proposed method’s effectiveness, we conducted experiments on three classic hyperspectral and LiDAR joint classification datasets and compared them with current mainstream methods. In the experiments, three classification metrics, overall accuracy (OA), average accuracy (AA), and the kappa coefficient, were used to quantitatively assess the experimental performance.

3.1. Dataset Description

3.1.1. MUUFL

The MUUFL dataset was obtained using a reflective optical system-imaging spectrometer sensor, capturing the area around the Gulf Park campus of the University of Southern Mississippi. The spatial dimensions of both the HSI and LiDAR data are 325 × 220 pixels, with 325 representing the height and 220 the width. After noisy bands were filtered out, the HSI data were reduced to 64 spectral bands. This dataset categorizes the land into 11 different classes. Figure 2 displays an overview of the dataset.

3.1.2. Trento

The Trento dataset features HSI and LiDAR data collected over a rural area south of Trento, Italy. The dataset has a spatial resolution of 1 m and dimensions of 600 × 166 pixels. It includes 63 spectral bands in the HSI data, with wavelengths ranging from 0.42 to 0.99 μm. There are six distinct land-cover classes within this dataset. Figure 3 shows an overview of the dataset.

3.1.3. Augsburg

The Augsburg dataset includes HSI data and LiDAR-based DSM data collected over the city of Augsburg, Germany. The dataset's spatial dimensions are 332 × 485 pixels, representing height and width. The HSI data comprise 180 spectral bands, with wavelengths ranging from 0.4 to 2.5 μm. This dataset encompasses seven land-cover classes. Figure 4 presents an overview of the dataset.
The datasets are available at https://github.com/AnkurDeria/MFT (accessed on 1 January 2024). The names of the land-cover categories, along with the numbers of training and testing samples used in the experiments for the three datasets mentioned above, are presented in Table 1.

3.2. Experimental Setup

3.2.1. Evaluation Indicators

We selected four widely used evaluation metrics to quantitatively assess the classification performance of all methods: single-class accuracy, overall accuracy (OA), average accuracy (AA), and the kappa coefficient ( κ ). Higher values for each metric signify better classification performance.
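For reference, these summary metrics can be computed from the confusion matrix, for example as in the following scikit-learn-based sketch (per-class accuracy is taken as the diagonal of the row-normalized confusion matrix).

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def classification_report(y_true, y_pred):
    """Return per-class accuracy, OA, AA, and the kappa coefficient."""
    cm = confusion_matrix(y_true, y_pred)
    per_class = np.diag(cm) / cm.sum(axis=1)          # accuracy of each land-cover class
    oa = np.diag(cm).sum() / cm.sum()                 # overall accuracy
    aa = per_class.mean()                             # average accuracy
    kappa = cohen_kappa_score(y_true, y_pred)
    return per_class, oa, aa, kappa
```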

3.2.2. Configurations

To ensure a fair comparison of the classification performance of the models, both the proposed method and the comparison methods were implemented using the PyTorch framework. All training and testing experiments were conducted on an Intel Xeon Silver 4210 CPU and an NVIDIA GeForce RTX 2080Ti GPU. Additionally, the parameters of the comparison methods were maintained as per their original settings to achieve optimal performance. For our method, the network parameters were updated using the Adam optimizer, with a batch size of 32 and the number of training epochs set to 100.

3.3. Classification Results and Analysis

In this subsection, we will quantitatively and qualitatively analyze the comparison results between the proposed MCAITN method and the current mainstream methods. These methods include SVM [22], S2FL [55], EndNet [44], MDL [56], LSAF [57], CCRNet [58], CoupledCNN [59], and HCT [12].

3.3.1. Quantitative Results and Analysis

Table 2, Table 3 and Table 4 present the quantitative results for the three classic datasets MUUFL, Augsburg, and Trento, along with the standard deviations for each metric. From Table 2, it is evident that traditional machine learning methods, such as SVM, have a lower joint classification accuracy, achieving an OA of only 72.23%. In contrast, neural network methods perform relatively better, with methods like CCRNet, CoupledCNN, and HCT having OA values mostly above 83% and AA values around 90%. MCAITN demonstrates significant improvements over competing methods across all evaluation metrics (OA, AA, and kappa), reaching an OA of 90.43%, an AA of 91.94%, and a kappa of 0.8745. Additionally, the table shows that the standard deviations for the OA, AA, and kappa values of our method are relatively low, indicating that the proposed method consistently produces stable classification results across ten random experiments.
From the quantitative results in Table 3 and Table 4, conclusions similar to those in Table 2 can be drawn; that is, the MCAITN method proposed in this paper achieved the best quantitative indicators in terms of OA, AA, and kappa. The joint-classification methods based on deep learning are significantly better than traditional classifiers such as SVM, mainly due to the powerful nonlinear feature-extraction capabilities of neural networks. Although the MCAITN method improved on the various classification indicators over the second-best method, HCT, for the Augsburg and Trento datasets the OA only increased by 0.26% and 0.11%, the AA increased by 1.73% and 0.21%, and the kappa increased by 0.36 and 0.17, respectively. It can be seen that the improvement of the MCAITN method was smallest for the Trento dataset. The main reason may be that the MUUFL scene mainly consists of intertwined buildings and vegetation, so the elevation information in the LiDAR data has a stronger positive effect on the classification results, whereas the Augsburg scene is dominated by vegetation, where the auxiliary classification ability of elevation information is limited. The Trento dataset also contains fields, houses, roads, and trees, but they are more scattered, and good results can be obtained from the hyperspectral information alone.

3.3.2. Visual Evaluation and Analysis

To further assess the performance of the proposed MCAITN method and the other methods, a qualitative visual analysis was conducted using representative samples from the MUUFL, Augsburg, and Trento databases; the results are illustrated in Figure 5, Figure 6 and Figure 7.
The visual results indicate that the MCAITN method is capable of producing more accurate and detailed classifications compared to the other methods. In the MUUFL dataset, for instance, the MCAITN method was able to distinctly classify various land cover types such as grassland, forests, and buildings. The other methods often struggled to differentiate between these classes, resulting in more overlapping classifications.
With the Augsburg dataset, the MCAITN method accurately captured the roads, buildings, and vegetation, especially in terms of edge delineation. The other methods either produced less clear classifications or misclassified some of the areas.
With the Trento dataset, which is characterized by high complexity and varying texture information, the MCAITN method once again demonstrated its robustness by identifying different land cover types more accurately than the other methods. The complex nature of the dataset posed a challenge for some of the methods, leading to confusion in classifications.
In summary, both the quantitative and qualitative analyses indicate that the MCAITN method provides better results compared to the current mainstream methods for HSI and LiDAR data classification tasks.

4. Discussion

4.1. Parameter Analysis

HSIs have a very high number of spectral dimensions, and directly processing these high-dimensional data significantly increases computational complexity. Moreover, adjacent bands often exhibit high correlation, leading to a large amount of redundant information, which adversely affects classification performance. Therefore, we considered a set of candidate values {10, 20, 30, and 40} for the retained spectral dimensions and fixed other hyperparameters to explore their impact on classification performance. As shown in Figure 8, with an increase in spectral dimensions, the classification performance of all three datasets initially rose and then stabilized. Considering both classification performance and computational complexity, we set the spectral dimension to 30.
On one hand, directly processing the entire hyperspectral and LiDAR images consumes significant computational resources and memory. By dividing the images into smaller patches, we can reduce the amount of data processed at each step, thereby improving computational efficiency. On the other hand, HSI and LiDAR images have different spectral and spatial resolutions, so the patch size for these images can also impact classification performance. We fixed other hyperparameters and considered a set of candidate patch sizes {5, 7, 9, 11, and 13} for both types of image inputs. As shown in Figure 9, with the increase in the HSI patch size, the classification performance for all three datasets initially improved and then stabilized. Considering both computational complexity and classification performance, we set the HSI patch size to 11.
As illustrated in Figure 10, it is evident that the MUUFL, Augsburg, and Trento databases achieved the best classification performance with a LiDAR patch size of 5, 13, and 7, respectively.
HSIs are typically high-dimensional and sparse data, and an appropriate learning rate helps the model find stable feature representations in such data, enhancing classification performance. Moreover, a suitable learning rate balances the convergence speed and stability, enabling the model to reach the optimal solution in a shorter time. We kept other hyperparameters unchanged and considered a set of candidate learning rates {1e-5, 5e-5, 1e-4, 5e-4, and 1e-3}. As shown in Figure 11, with an increasing learning rate, the AA for the MUUFL database gradually increased, while OA and kappa first increased and then decreased, achieving the best performance at 1e-4. For the Augsburg and Trento databases, the best classification performance was achieved at 5e-4 and 1e-4, respectively.
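The analysis above amounts to a one-factor-at-a-time sweep, which can be sketched as follows; train_and_evaluate is a hypothetical helper that trains MCAITN under the given setting and returns (OA, AA, kappa).

```python
# One-factor-at-a-time sweep over the candidate values reported in Section 4.1.
candidates = {
    "pca_bands": [10, 20, 30, 40],
    "hsi_patch": [5, 7, 9, 11, 13],
    "lidar_patch": [5, 7, 9, 11, 13],
    "lr": [1e-5, 5e-5, 1e-4, 5e-4, 1e-3],
}
baseline = {"pca_bands": 30, "hsi_patch": 11, "lidar_patch": 5, "lr": 5e-4}

results = {}
for name, values in candidates.items():
    for value in values:
        setting = {**baseline, name: value}            # vary one hyperparameter, fix the rest
        results[(name, value)] = train_and_evaluate(**setting)  # hypothetical helper -> (OA, AA, kappa)
```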

4.2. Ablation Study

To validate the effectiveness of each component in our proposed network on classification performance, we conducted ablation experiments on the MUUFL database involving four components: Conv3D, Conv2D, LiDAR-branch, and CFEA-TE. The results are listed in Table 5.
In Case 1 and Case 2, we removed the 2D convolution block and the 3D convolution block, respectively. The results showed a decrease in the model’s classification performance in both scenarios. However, the OA, AA, and kappa in Case 1 were slightly higher than those in Case 2, suggesting that the 3D convolution block, which performs joint convolution in both spatial and spectral dimensions, is more effective at feature extraction compared to the 2D convolution block, which only performs spatial convolution. In Case 3 and Case 4, we performed classification experiments using only the LiDAR branch and the HSI branch, respectively, with the encoder utilizing a conventional transformer encoder. The results show a significant decrease in performance when only LiDAR data were used for classification, while using only HSI data yielded better classification performance. This suggests that the information contained in the LiDAR data is considerably less than that in the HSI data, and solely using LiDAR data is insufficient for classification. Finally, Case 5 represents our complete proposed classification model. Compared to Case 3 and Case 4, it achieves the best classification performance, demonstrating that our cross-feature enhanced-attention transformer encoder effectively integrates LiDAR and HSI data for joint classification. In summary, each component of the proposed MCAITN network positively contributes to the final classification performance.

5. Conclusions

This paper has introduced a novel, multi-feature, cross-attention transformer classification network named MCAITN for the joint classification of HSI and LiDAR data. The innovation of this method lies in its effective coupling of hyperspectral features with LiDAR data features through the Q, K, and V vectors in the cross-attention mechanism. It further integrates the two discriminative features iteratively, adaptively adjusting their respective advantageous features to enhance classification accuracy. The experimental results show that, compared to mainstream joint classification methods for hyperspectral and LiDAR data, the MCAITN method can better fuse the features of the two modalities, achieving an average classification accuracy improvement of about 1% at a 3% sampling rate. Another advantage of this type of method is that its architecture can easily be generalized to feature extraction for the fusion of more modalities.
In the future, directions for improvement include altering the way Q, K, and V connections are established between the two types of data markings (currently concatenation) to enable a more effective fusion of the features from the two modalities, thus further enhancing accuracy. Additionally, designing a more lightweight network architecture is also a direction for research.

Author Contributions

Conceptualization, Z.L., L.S. and Y.Z.; methodology, R.L.; software, Z.L.; validation, L.S. and R.L.; writing—original draft preparation, Z.L. and L.S.; writing—review and editing, Y.Z.; visualization, Z.L.; supervision, L.S. and Y.Z.; funding acquisition, R.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant No. 62076137.

Data Availability Statement

The data used in this study are publicly available; see Section 3.1 (Dataset Description) for details and access links.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

MCAITN: Multi-feature cross-attention-induced transformer network
CNNs: Convolutional neural networks
SVM: Support vector machine
RF: Random forest
RNNs: Recurrent neural networks
GANs: Generative adversarial networks
LSTM: Long short-term memory
HSI: Hyperspectral image
HSIC: Hyperspectral image classification
IP-CNN: Interleaving perception convolutional neural network
Sal2RN: Saliency reinforcement network
DSHFNet: Dynamic-scale hierarchical fusion network
AMSSE-Net: Adaptive multiscale spatial–spectral enhancement network
CMSE: Cross-modal semantic enhancement
SAEs: Autoencoders
GCNs: Graph convolutional networks
CFEA: Cross-feature enhanced attention
FFN: Feed-forward network
MLP: Multi-layer perceptron
FC: Fully connected

References

  1. He, L.; Li, J.; Liu, C.; Li, S. Recent Advances on Spectral–Spatial Hyperspectral Image Classification: An Overview and New Guidelines. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1579–1597. [Google Scholar] [CrossRef]
  2. Teke, M.; Deveci, H.S.; Haliloğlu, O.; Gürbüz, S.Z.; Sakarya, U. A short survey of hyperspectral remote sensing applications in agriculture. In Proceedings of the 2013 6th International Conference on Recent Advances in Space Technologies (RAST), Istanbul, Turkey, 12–14 June 2013; pp. 171–176. [Google Scholar] [CrossRef]
  3. Agilandeeswari, L.; Prabukumar, M.; Radhesyam, V.; Phaneendra, K.L.N.B.; Farhan, A. Crop Classification for Agricultural Applications in Hyperspectral Remote Sensing Images. Appl. Sci. 2022, 12, 1670. [Google Scholar] [CrossRef]
  4. Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent Advances of Hyperspectral Imaging Technology and Applications in Agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
  5. Camps-Valls, G.; Tuia, D.; Bruzzone, L.; Benediktsson, J.A. Advances in Hyperspectral Image Classification: Earth Monitoring with Statistical Learning Methods. IEEE Signal Process. Mag. 2014, 31, 45–54. [Google Scholar] [CrossRef]
  6. Stuart, M.B.; Davies, M.; Hobbs, M.J.; Pering, T.D.; McGonigle, A.J.S.; Willmott, J.R. High-Resolution Hyperspectral Imaging Using Low-Cost Components: Application within Environmental Monitoring Scenarios. Sensors 2022, 22, 4652. [Google Scholar] [CrossRef]
  7. Weber, C.; Aguejdad, R.; Briottet, X.; Avala, J.; Fabre, S.; Demuynck, J.; Zenou, E.; Deville, Y.; Karoui, M.; Benhalouche, F.; et al. Hyperspectral Imagery for Environmental Urban Planning. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 1628–1631. [Google Scholar] [CrossRef]
  8. Brabant, C.; Alvarez-Vanhard, E.; Laribi, A.; Morin, G.; Thanh Nguyen, K.; Thomas, A.; Houet, T. Comparison of Hyperspectral Techniques for Urban Tree Diversity Classification. Remote Sens. 2019, 11, 1269. [Google Scholar] [CrossRef]
  9. Nisha, A.; Anitha, A. Current Advances in Hyperspectral Remote Sensing in Urban Planning. In Proceedings of the 2022 Third International Conference on Intelligent Computing Instrumentation and Control Technologies (ICICICT), Kannur, India, 11–12 August 2022; pp. 94–98. [Google Scholar] [CrossRef]
  10. Shimoni, M.; Haelterman, R.; Perneel, C. Hypersectral Imaging for Military and Security Applications: Combining Myriad Processing and Sensing Techniques. IEEE Geosci. Remote Sens. Mag. 2019, 7, 101–117. [Google Scholar] [CrossRef]
  11. Zhao, J.; Zhou, B.; Wang, G.; Ying, J.; Liu, J.; Chen, Q. Spectral Camouflage Characteristics and Recognition Ability of Targets Based on Visible/Near-Infrared Hyperspectral Images. Photonics 2022, 9, 957. [Google Scholar] [CrossRef]
  12. Zhao, G.; Ye, Q.; Sun, L.; Wu, Z.; Pan, C.; Jeon, B. Joint Classification of Hyperspectral and LiDAR Data Using a Hierarchical CNN and Transformer. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16. [Google Scholar] [CrossRef]
  13. Sun, L.; He, C.; Zheng, Y.; Wu, Z.; Jeon, B. Tensor cascaded-rank minimization in subspace: A unified regime for hyperspectral image low-level vision. IEEE Trans. Image Process. 2022, 32, 100–115. [Google Scholar] [CrossRef]
  14. Sun, L.; Cao, Q.; Chen, Y.; Zheng, Y.; Wu, Z. Mixed noise removal for hyperspectral images based on global tensor low-rankness and nonlocal SVD-aided group sparsity. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–17. [Google Scholar] [CrossRef]
  15. Song, T.; Zeng, Z.; Gao, C.; Chen, H.; Li, J. Joint Classification of Hyperspectral and LiDAR Data Using Height Information Guided Hierarchical Fusion-and-Separation Network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15. [Google Scholar] [CrossRef]
  16. Ahmad, M.; Shabbir, S.; Roy, S.K.; Hong, D.; Wu, X.; Yao, J.; Khan, A.M.; Mazzara, M.; Distefano, S.; Chanussot, J. Hyperspectral Image Classification—Traditional to Deep Models: A Survey for Future Prospects. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 968–999. [Google Scholar] [CrossRef]
  17. Song, W.; Li, S.; Fang, L.; Lu, T. Hyperspectral Image Classification with Deep Feature Fusion Network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184. [Google Scholar] [CrossRef]
  18. Yang, H.; Yu, H.; Zheng, K.; Hu, J.; Tao, T.; Zhang, Q. Hyperspectral Image Classification Based on Interactive Transformer and CNN With Multilevel Feature Fusion Network. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
  19. Yang, J.; Li, A.; Qian, J.; Qin, J.; Wang, L. A Hyperspectral Image Classification Method Based on Pyramid Feature Extraction with Deformable–Dilated Convolution. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
  20. Cao, X.; Yao, J.; Xu, Z.; Meng, D. Hyperspectral Image Classification with Convolutional Neural Network and Active Learning. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4604–4616. [Google Scholar] [CrossRef]
  21. Xue, Z.; Yu, X.; Tan, X.; Liu, B.; Yu, A.; Wei, X. Multiscale Deep Learning Network with Self-Calibrated Convolution for Hyperspectral and LiDAR Data Collaborative Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  22. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  23. Baassou, B.; He, M.; Mei, S. An accurate SVM-based classification approach for hyperspectral image classification. In Proceedings of the 2013 21st International Conference on Geoinformatics, Kaifeng, China, 20–22 June 2013; pp. 1–7. [Google Scholar] [CrossRef]
  24. Xie, L.; Li, G.; Xiao, M.; Peng, L.; Chen, Q. Hyperspectral Image Classification Using Discrete Space Model and Support Vector Machines. IEEE Geosci. Remote Sens. Lett. 2017, 14, 374–378. [Google Scholar] [CrossRef]
  25. Amini, S.; Homayouni, S.; Safari, A. Semi-supervised classification of hyperspectral image using random forest algorithm. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 2866–2869. [Google Scholar] [CrossRef]
  26. Wang, S.; Dou, A.; Yuan, X.; Zhang, X. The airborne hyperspectral image classification based on the random forest algorithm. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 2280–2283. [Google Scholar] [CrossRef]
  27. Zhang, Y.; Cao, G.; Li, X.; Wang, B. Cascaded Random Forest for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1082–1094. [Google Scholar] [CrossRef]
  28. Yang, X.; Ye, Y.; Li, X.; Lau, R.Y.K.; Zhang, X.; Huang, X. Hyperspectral Image Classification with Deep Learning Models. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5408–5423. [Google Scholar] [CrossRef]
  29. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef]
  30. Ullah, F.; Ullah, I.; Khan, R.U.; Khan, S.; Khan, K.; Pau, G. Conventional to Deep Ensemble Methods for Hyperspectral Image Classification: A Comprehensive Survey. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 3878–3916. [Google Scholar] [CrossRef]
  31. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative Adversarial Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063. [Google Scholar] [CrossRef]
  32. Deng, X.; Dragotti, P.L. Deep Convolutional Neural Network for Multi-Modal Image Restoration and Fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3333–3348. [Google Scholar] [CrossRef] [PubMed]
  33. Sun, L.; Wang, X.; Zheng, Y.; Wu, Z.; Fu, L. Multiscale 3-D–2-D Mixed CNN and Lightweight Attention-Free Transformer for Hyperspectral and LiDAR Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–16. [Google Scholar] [CrossRef]
  34. Fang, Y.; Ye, Q.; Sun, L.; Zheng, Y.; Wu, Z. Multiattention Joint Convolution Feature Representation with Lightweight Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14. [Google Scholar] [CrossRef]
  35. Liang, L.; Zhang, S.; Li, J. Multiscale DenseNet Meets With Bi-RNN for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 5401–5415. [Google Scholar] [CrossRef]
  36. Hu, W.S.; Li, H.C.; Pan, L.; Li, W.; Tao, R.; Du, Q. Spatial–Spectral Feature Extraction via Deep ConvLSTM Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4237–4250. [Google Scholar] [CrossRef]
  37. Huang, L.; Chen, Y. Dual-Path Siamese CNN for Hyperspectral Image Classification with Limited Training Samples. IEEE Geosci. Remote Sens. Lett. 2021, 18, 518–522. [Google Scholar] [CrossRef]
  38. Yu, C.; Han, R.; Song, M.; Liu, C.; Chang, C.I. Feedback Attention-Based Dense CNN for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  39. Bhatti, U.A.; Yu, Z.; Chanussot, J.; Zeeshan, Z.; Yuan, L.; Luo, W.; Nawaz, S.A.; Bhatti, M.A.; Ain, Q.U.; Mehmood, A. Local Similarity-Based Spatial—Spectral Fusion Hyperspectral Image Classification with Deep CNN and Gabor Filtering. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  40. Li, J.; Liu, Y.; Song, R.; Li, Y.; Han, K.; Du, Q. Sal2RN: A Spatial—Spectral Salient Reinforcement Network for Hyperspectral and LiDAR Data Fusion Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14. [Google Scholar] [CrossRef]
  41. Zhang, Y.; Xu, S.; Hong, D.; Gao, H.; Zhang, C.; Bi, M.; Li, C. Multimodal Transformer Network for Hyperspectral and LiDAR Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–17. [Google Scholar] [CrossRef]
  42. Wang, X.; Zhu, J.; Feng, Y.; Wang, L. MS2CANet: Multiscale Spatial—Spectral Cross-Modal Attention Network for Hyperspectral Image and LiDAR Classification. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
  43. Gao, H.; Feng, H.; Zhang, Y.; Xu, S.; Zhang, B. AMSSE-Net: Adaptive Multiscale Spatial–Spectral Enhancement Network for Classification of Hyperspectral and LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–17. [Google Scholar] [CrossRef]
  44. Hong, D.; Gao, L.; Hang, R.; Zhang, B.; Chanussot, J. Deep Encoder—Decoder Networks for Classification of Hyperspectral and LiDAR Data. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  45. Du, X.; Zheng, X.; Lu, X.; Doudkin, A.A. Multisource Remote Sensing Data Classification with Graph Fusion Network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10062–10072. [Google Scholar] [CrossRef]
  46. Dam, T.; Anavatti, S.G.; Abbass, H.A. Mixture of Spectral Generative Adversarial Networks for Imbalanced Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  47. Zhang, Y.; Peng, Y.; Tu, B.; Liu, Y. Local Information Interaction Transformer for Hyperspectral and LiDAR Data Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 1130–1143. [Google Scholar] [CrossRef]
  48. Sun, L.; Zhang, H.; Zheng, Y.; Wu, Z.; Ye, Z.; Zhao, H. MASSFormer: Memory-Augmented Spectral-Spatial Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 8257–8268. [Google Scholar] [CrossRef]
  49. Fu, L.; Zhang, D.; Ye, Q. Recurrent thrifty attention network for remote sensing scene recognition. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–17. [Google Scholar] [CrossRef]
  50. Ding, K.; Lu, T.; Fu, W.; Li, S.; Ma, F. Global–Local Transformer Network for HSI and LiDAR Data Joint Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
  51. Zhang, M.; Gao, F.; Zhang, T.; Gan, Y.; Dong, J.; Yu, H. Attention Fusion of Transformer-Based and Scale-Based Method for Hyperspectral and LiDAR Joint Classification. Remote Sens. 2023, 15, 650. [Google Scholar] [CrossRef]
  52. Ni, K.; Wang, D.; Zheng, Z.; Wang, P. MHST: Multiscale Head Selection Transformer for Hyperspectral and LiDAR Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5470–5483. [Google Scholar] [CrossRef]
  53. Yang, J.X.; Zhou, J.; Wang, J.; Tian, H.; Liew, A.W.C. LiDAR-Guided Cross-Attention Fusion for Hyperspectral Band Selection and Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15. [Google Scholar] [CrossRef]
  54. Roy, S.K.; Sukul, A.; Jamali, A.; Haut, J.M.; Ghamisi, P. Cross Hyperspectral and LiDAR Attention Transformer: An Extended Self-Attention for Land Use and Land Cover Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15. [Google Scholar] [CrossRef]
  55. Hong, D.; Hu, J.; Yao, J.; Chanussot, J.; Zhu, X.X. Multimodal remote sensing benchmark datasets for land cover classification with a shared and specific feature learning model. ISPRS J. Photogramm. Remote Sens. 2021, 178, 68–80. [Google Scholar] [CrossRef]
  56. Hong, D.; Gao, L.; Yokoya, N.; Yao, J.; Chanussot, J.; Du, Q.; Zhang, B. More Diverse Means Better: Multimodal Deep Learning Meets Remote-Sensing Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4340–4354. [Google Scholar] [CrossRef]
  57. Feng, M.; Gao, F.; Fang, J.; Dong, J. Hyperspectral and Lidar Data Classification Based on Linear Self-Attention. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 2401–2404. [Google Scholar] [CrossRef]
  58. Wu, X.; Hong, D.; Chanussot, J. Convolutional Neural Networks for Multimodal Remote Sensing Data Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–10. [Google Scholar] [CrossRef]
  59. Hang, R.; Li, Z.; Ghamisi, P.; Hong, D.; Xia, G.; Liu, Q. Classification of Hyperspectral and LiDAR Data Using Coupled CNNs. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4939–4950. [Google Scholar] [CrossRef]
Figure 1. Overall architecture of MCAITN.
Figure 2. MUUFL. (a) Pseudo-color image of HSI. (b) Gray image of the LiDAR-based DSM. (c) Ground-truth map.
Figure 3. Trento. (a) Pseudo-color image of HSI. (b) Gray image of the LiDAR-based DSM. (c) Ground-truth map.
Figure 4. Augsburg. (a) Pseudo-color image of HSI. (b) Gray image of the LiDAR-based DSM. (c) Ground-truth map.
Figure 5. Classification maps of the MUUFL dataset: (a) ground truth, (b) SVM, (c) S2FL, (d) EndNet, (e) MDL, (f) LSAF, (g) CCRNet, (h) CoupledCNN, (i) HCT, and (j) MCAITN.
Figure 6. Classification maps of the Augsburg dataset: (a) ground truth, (b) SVM, (c) S2FL, (d) EndNet, (e) MDL, (f) LSAF, (g) CCRNet, (h) CoupledCNN, (i) HCT, and (j) MCAITN.
Figure 7. Classification maps of the Trento dataset: (a) ground truth, (b) SVM, (c) S2FL, (d) EndNet, (e) MDL, (f) LSAF, (g) CCRNet, (h) CoupledCNN, (i) HCT, and (j) MCAITN.
Figure 8. The impact of retained spectral dimensions on OA, AA, and the kappa coefficient. (a) MUUFL. (b) Augsburg. (c) Trento.
Figure 9. The impact of the HSI patch size on OA, AA, and the kappa coefficient. (a) MUUFL. (b) Augsburg. (c) Trento.
Figure 10. The impact of the LiDAR patch size on OA, AA, and the kappa coefficient. (a) MUUFL. (b) Augsburg. (c) Trento.
Figure 11. The impact of the learning rate on OA, AA, and the kappa coefficient. (a) MUUFL. (b) Augsburg. (c) Trento.
Table 1. Training and test samples in the MUUFL, Augsburg, and Trento datasets.
No. | MUUFL Class | Training | Test | Augsburg Class | Training | Test | Trento Class | Training | Test
C01 | Trees | 150 | 23,096 | Forest | 675 | 12,832 | Apple Trees | 129 | 3905
C02 | Mostly Grass | 150 | 4120 | Residential Area | 1516 | 28,813 | Buildings | 125 | 2778
C03 | Mixed Ground Surface | 150 | 6732 | Industrial Area | 192 | 3659 | Ground | 105 | 374
C04 | Dirt and Sand | 150 | 1676 | Low Plants | 1342 | 25,515 | Woods | 154 | 8969
C05 | Road | 150 | 6537 | Allotment | 28 | 547 | Vineyard | 184 | 10,317
C06 | Water | 150 | 316 | Commercial Area | 82 | 1563 | Roads | 122 | 3052
C07 | Buildings Shadow | 150 | 2083 | Water | 76 | 1454 | - | - | -
C08 | Buildings | 150 | 6090 | - | - | - | - | - | -
C09 | Sidewalk | 150 | 1235 | - | - | - | - | - | -
C10 | Yellow Curb | 150 | 33 | - | - | - | - | - | -
C11 | Cloth Panels | 150 | 119 | - | - | - | - | - | -
- | Total | 1650 | 52,037 | Total | 3911 | 74,383 | Total | 819 | 29,395
Table 2. Performance of various classifiers with the MUUFL dataset (best results are in boldface).
No. | SVM [22] | S2FL [55] | EndNet [44] | MDL [56] | LSAF [57] | CCRNet [58] | CoupledCNN [59] | HCT [12] | MCAITN
1 | 74.81 ± 01.79 | 81.02 ± 00.74 | 84.21 ± 00.96 | 89.42 ± 03.88 | 88.70 ± 00.99 | 84.81 ± 01.91 | 89.16 ± 01.85 | 91.12 ± 01.43 | 91.75 ± 01.57
2 | 72.94 ± 01.91 | 76.99 ± 02.36 | 83.28 ± 02.09 | 74.57 ± 13.12 | 85.29 ± 02.48 | 85.81 ± 01.57 | 86.96 ± 00.94 | 85.47 ± 02.35 | 86.62 ± 03.42
3 | 57.59 ± 02.06 | 66.16 ± 01.51 | 71.85 ± 02.07 | 77.96 ± 04.67 | 78.97 ± 02.26 | 66.15 ± 03.63 | 81.60 ± 02.32 | 81.53 ± 04.88 | 82.21 ± 03.20
4 | 63.39 ± 01.34 | 82.56 ± 02.98 | 87.65 ± 01.49 | 90.83 ± 09.13 | 97.16 ± 01.41 | 94.39 ± 02.98 | 94.36 ± 02.97 | 96.07 ± 00.44 | 96.92 ± 00.89
5 | 79.06 ± 00.93 | 84.46 ± 01.22 | 88.96 ± 01.86 | 75.93 ± 04.39 | 87.72 ± 01.28 | 85.77 ± 02.63 | 89.66 ± 02.69 | 87.57 ± 04.96 | 89.26 ± 01.97
6 | 92.51 ± 01.31 | 94.49 ± 00.62 | 94.38 ± 01.53 | 99.79 ± 00.18 | 100 ± 00.00 | 99.03 ± 00.64 | 98.92 ± 00.64 | 99.45 ± 00.69 | 99.50 ± 00.48
7 | 82.45 ± 01.03 | 84.19 ± 01.23 | 88.70 ± 01.52 | 90.01 ± 07.08 | 94.46 ± 01.40 | 88.54 ± 01.82 | 91.84 ± 01.65 | 94.23 ± 00.42 | 93.64 ± 01.78
8 | 66.16 ± 01.50 | 79.49 ± 01.72 | 80.56 ± 02.05 | 96.32 ± 02.05 | 95.59 ± 00.35 | 94.48 ± 00.72 | 96.71 ± 01.29 | 95.15 ± 02.92 | 96.84 ± 00.84
9 | 79.48 ± 01.73 | 71.55 ± 02.78 | 75.39 ± 03.02 | 70.23 ± 11.80 | 77.14 ± 02.78 | 65.45 ± 02.27 | 72.71 ± 02.56 | 78.38 ± 01.55 | 80.53 ± 02.33
10 | 82.93 ± 03.49 | 92.33 ± 03.89 | 97.31 ± 02.48 | 84.85 ± 08.02 | 93.94 ± 05.25 | 87.72 ± 03.08 | 94.32 ± 02.85 | 95.45 ± 02.62 | 95.67 ± 04.58
11 | 75.09 ± 02.46 | 85.82 ± 02.53 | 98.18 ± 01.18 | 100.00 ± 00.00 | 99.72 ± 00.48 | 97.95 ± 01.81 | 98.47 ± 00.59 | 99.02 ± 01.06 | 98.44 ± 01.32
OA (%) | 72.23 ± 01.37 | 78.31 ± 00.18 | 82.92 ± 00.64 | 85.58 ± 00.45 | 88.18 ± 00.43 | 83.12 ± 01.01 | 88.73 ± 00.39 | 88.93 ± 00.97 | 90.43 ± 00.67
AA (%) | 75.13 ± 01.53 | 81.73 ± 01.85 | 86.41 ± 00.87 | 86.36 ± 01.23 | 90.79 ± 00.50 | 86.37 ± 00.99 | 90.43 ± 01.34 | 91.22 ± 01.57 | 91.94 ± 00.52
κ × 100 | 65.41 ± 01.42 | 72.47 ± 00.33 | 77.82 ± 01.04 | 81.17 ± 00.37 | 84.60 ± 00.52 | 78.25 ± 01.17 | 85.16 ± 01.03 | 85.29 ± 00.85 | 87.45 ± 00.83
Table 3. Performance of various classifiers with the Augsburg dataset (best results are in boldface).
No. | SVM [22] | S2FL [55] | EndNet [44] | MDL [56] | LSAF [57] | CCRNet [58] | CoupledCNN [59] | HCT [12] | MCAITN
1 | 95.78 ± 00.39 | 97.18 ± 00.15 | 92.49 ± 00.49 | 95.56 ± 03.04 | 99.15 ± 00.21 | 96.44 ± 00.97 | 97.47 ± 00.97 | 98.98 ± 00.17 | 98.97 ± 00.27
2 | 89.41 ± 01.27 | 72.29 ± 01.26 | 88.61 ± 00.53 | 93.82 ± 03.88 | 98.53 ± 00.05 | 96.69 ± 00.76 | 97.71 ± 00.65 | 98.69 ± 00.26 | 98.82 ± 00.29
3 | 06.47 ± 01.35 | 32.25 ± 03.09 | 41.38 ± 03.13 | 79.42 ± 07.18 | 89.22 ± 03.02 | 82.76 ± 03.83 | 84.71 ± 03.57 | 88.33 ± 04.20 | 88.92 ± 03.16
4 | 67.32 ± 01.39 | 87.45 ± 01.04 | 94.25 ± 00.53 | 99.74 ± 00.06 | 99.22 ± 00.31 | 98.02 ± 00.41 | 97.56 ± 00.53 | 98.94 ± 00.29 | 99.07 ± 00.25
5 | 06.86 ± 01.82 | 40.34 ± 05.19 | 31.75 ± 03.13 | 56.26 ± 13.26 | 87.08 ± 05.64 | 41.69 ± 06.06 | 69.43 ± 03.05 | 80.04 ± 08.55 | 86.66 ± 08.75
6 | 10.90 ± 01.87 | 39.97 ± 02.81 | 28.32 ± 04.21 | 57.34 ± 21.99 | 54.36 ± 05.16 | 33.38 ± 05.05 | 72.84 ± 02.36 | 70.38 ± 02.06 | 74.89 ± 03.78
7 | 53.27 ± 01.85 | 70.35 ± 01.29 | 50.65 ± 02.07 | 47.58 ± 12.57 | 70.02 ± 02.64 | 59.39 ± 06.24 | 61.98 ± 02.58 | 72.81 ± 04.62 | 72.91 ± 03.00
OA (%) | 76.01 ± 00.83 | 78.77 ± 00.37 | 86.77 ± 00.56 | 93.50 ± 01.46 | 96.85 ± 00.23 | 94.35 ± 00.74 | 95.59 ± 00.75 | 97.08 ± 00.21 | 97.34 ± 00.15
AA (%) | 47.15 ± 00.78 | 62.83 ± 01.06 | 61.06 ± 00.95 | 75.68 ± 00.51 | 85.37 ± 01.33 | 72.63 ± 02.80 | 83.1 ± 01.90 | 86.88 ± 01.07 | 88.61 ± 01.14
κ × 100 | 64.82 ± 01.09 | 70.87 ± 00.71 | 80.73 ± 00.58 | 90.63 ± 02.08 | 95.48 ± 00.33 | 93.09 ± 00.61 | 93.92 ± 00.79 | 95.82 ± 00.29 | 96.18 ± 00.21
Table 4. Performance of various classifiers with the Trento dataset (best results are in boldface).
No. | SVM [22] | S2FL [55] | EndNet [44] | MDL [56] | LSAF [57] | CCRNet [58] | CoupledCNN [59] | HCT [12] | MCAITN
1 | 80.05 ± 01.08 | 80.35 ± 00.71 | 87.52 ± 00.62 | 98.06 ± 01.39 | 99.66 ± 00.09 | 99.13 ± 00.91 | 99.32 ± 00.21 | 99.59 ± 00.17 | 99.41 ± 00.44
2 | 77.03 ± 03.13 | 80.32 ± 01.23 | 87.43 ± 00.82 | 99.37 ± 00.74 | 98.85 ± 00.79 | 96.74 ± 01.41 | 97.87 ± 00.29 | 98.49 ± 01.15 | 99.32 ± 00.32
3 | 85.64 ± 02.57 | 90.47 ± 00.71 | 98.22 ± 01.04 | 99.07 ± 00.27 | 97.86 ± 00.92 | 94.17 ± 02.11 | 98.39 ± 00.31 | 100.00 ± 00.00 | 100.00 ± 00.00
4 | 92.48 ± 01.23 | 93.14 ± 00.31 | 98.37 ± 00.32 | 100.00 ± 00.00 | 100.00 ± 00.00 | 99.97 ± 00.04 | 100.00 ± 00.00 | 100.00 ± 00.00 | 100.00 ± 00.00
5 | 82.43 ± 01.01 | 82.14 ± 00.39 | 93.66 ± 00.36 | 99.98 ± 00.03 | 99.87 ± 00.19 | 99.95 ± 00.05 | 100.00 ± 00.00 | 99.98 ± 00.02 | 100.00 ± 00.00
6 | 82.21 ± 01.39 | 80.78 ± 01.22 | 86.68 ± 01.65 | 96.16 ± 02.14 | 98.31 ± 00.60 | 96.46 ± 01.49 | 97.96 ± 00.89 | 97.96 ± 01.01 | 98.56 ± 01.01
OA (%) | 84.43 ± 00.51 | 85.14 ± 00.48 | 93.01 ± 00.31 | 99.27 ± 00.19 | 99.31 ± 00.19 | 98.68 ± 00.63 | 99.04 ± 00.33 | 99.59 ± 00.09 | 99.70 ± 00.09
AA (%) | 83.31 ± 01.72 | 84.53 ± 00.31 | 91.98 ± 00.80 | 98.78 ± 00.30 | 99.09 ± 00.08 | 97.74 ± 00.81 | 98.92 ± 00.25 | 99.34 ± 00.15 | 99.55 ± 00.14
κ × 100 | 79.45 ± 00.68 | 80.25 ± 00.65 | 90.55 ± 00.29 | 99.02 ± 00.26 | 99.23 ± 00.13 | 98.19 ± 00.57 | 98.97 ± 00.26 | 99.44 ± 00.12 | 99.61 ± 00.12
Table 5. Evaluating model components: ablation analysis with the MUUFL database (the best results are in boldface).
Cases | Conv3D | Conv2D | LiDAR-Branch | CFEA-TE | OA (%) | AA (%) | κ × 100
1 | ✓ | - | ✓ | ✓ | 87.57 | 88.14 | 83.75
2 | - | ✓ | ✓ | ✓ | 86.89 | 87.78 | 82.93
3 | - | - | ✓ | TE | 55.61 | 50.44 | 43.16
4 | ✓ | ✓ | - | TE | 88.69 | 90.63 | 85.19
5 | ✓ | ✓ | ✓ | ✓ | 90.43 | 91.94 | 87.45
