Article

CCFormer: Cross-Modal Cross-Attention Transformer for Classification of Hyperspectral and LiDAR Data

by Hufeng Guo 1,2,*, Baohui Tian 2 and Wenyi Liu 1,*

1 State Key Laboratory of Dynamic Measurement Technology, School of Instrument and Electronics, North University of China, Taiyuan 030051, China
2 Department of Transportation Information Engineering, Henan College of Transportation, Zhengzhou 451460, China
* Authors to whom correspondence should be addressed.
Sensors 2025, 25(18), 5698; https://doi.org/10.3390/s25185698
Submission received: 6 August 2025 / Revised: 4 September 2025 / Accepted: 11 September 2025 / Published: 12 September 2025
(This article belongs to the Special Issue Remote Sensing in Urban Surveying and Mapping)

Abstract

The fusion of multi-source remote sensing data has emerged as a critical technical approach to enhancing the accuracy of ground object classification. The synergistic integration of hyperspectral images and light detection and ranging data can significantly improve the capability of identifying ground objects in complex environments. However, modeling the correlation between their heterogeneous features remains a key technical challenge. Conventional methods often result in feature redundancy due to simple concatenation, making it difficult to effectively exploit the complementary information across modalities. To address this issue, this paper proposes a cross-modal cross-attention Transformer network for the classification of hyperspectral images combined with light detection and ranging data. The proposed method aims to effectively integrate the complementary characteristics of hyperspectral images and light detection and ranging data. Specifically, it employs a two-level pyramid architecture to extract multi-scale features at the shallow level, thereby overcoming the redundancy limitations associated with traditional stacking-based fusion approaches. Furthermore, an innovative cross-attention mechanism is introduced within the Transformer encoder to dynamically capture the semantic correlations between the spectral features of hyperspectral images and the elevation information from light detection and ranging data. This enables effective feature alignment and enhancement through the adaptive allocation of attention weights. Extensive experiments conducted on three publicly available datasets demonstrate that the proposed method exhibits notable advantages over existing state-of-the-art approaches.

1. Introduction

In recent decades, remote sensing technology has witnessed remarkable advancements. Notably, imaging technology has undergone rapid development, paving the way for more in-depth analysis in the field of intelligent Earth observation. With enhanced data acquisition capabilities—including high-resolution imagery, multi-spectral and hyperspectral sensing, and synthetic aperture radar—remote sensing data are now widely applied in critical domains such as land use planning, mineral resource exploration, agricultural quality assessment, precision agriculture, ecological monitoring, and national defense [1,2,3,4]. Among these applications, pixel-based remote sensing image classification serves as a core research task and has become a key driver in advancing remote sensing technology toward greater quantification and intelligence through the accurate identification of ground objects and detailed information extraction.
In the early stages, research on remote sensing image classification primarily relied on single data sources, such as multispectral or hyperspectral imagery [5]. Currently, hyperspectral data has emerged as a prominent resource in the field of remote sensing, attracting significant attention due to its hundreds or even thousands of spectral channels, which enable the acquisition of rich spectral information. It demonstrates notable advantages in identifying surface objects and has yielded favorable application outcomes [2,4]. However, the use of hyperspectral image (HSI) data alone for land classification is constrained by two key phenomena: “same object with different spectra” (where the spectral characteristics of the same ground object vary under different environmental conditions) and “different objects with the same spectra” (where distinct ground objects exhibit similar spectral signatures). These phenomena limit classification accuracy [6]. For instance, in urban environments, sidewalks and building rooftops may be constructed from similar materials, resulting in highly similar spectral curves. Additionally, environmental factors such as illumination conditions and atmospheric scattering can alter the spectral properties of surface objects, further exacerbating the “different objects with the same spectra” issue [1]. Consequently, relying solely on spectral data makes it challenging to accurately distinguish between certain land use and surface cover types.
Light Detection and Ranging (LiDAR) data can generate precise three-dimensional profiles by measuring the distance between the sensor and the Earth’s surface, thereby providing high-precision topographic and structural information [3]. Considering the challenges in HSI classification mentioned in the previous paragraph, the elevation information provided by LiDAR offers critical vertical dimension data that can help mitigate these classification ambiguities. Therefore, the fusion of hyperspectral and LiDAR data enables the full utilization of the complementary strengths of both data modalities, thereby effectively enhancing the classification accuracy [7].
The integration of HSI and LiDAR data has garnered considerable attention within the domain of multimodal remote sensing. However, effectively combining the rich spectral information from HSI with the elevation features provided by LiDAR remains a critical challenge. Traditional classification approaches predominantly emphasize data-level fusion strategies [8]. For instance, Ghamisi et al. [9] employed the Attribute Profiles model to capture spatial characteristics of both data types and further extracted multimodal spatial features using the Extinction Profiles technique. Although conventional machine learning techniques—such as Support Vector Machine (SVM) [10], Extreme Learning Machine (ELM) [11], and Random Forest (RF) [12]—have demonstrated moderate success in the classification of multimodal remote sensing data, these shallow models are limited in their ability to uncover deep and complex data relationships. Particularly when confronted with the nonlinear characteristics of HSI data, such methods often compromise the original spatial and spectral structure, leading to the loss of valuable implicit information [7]. For example, SVM and RF rely heavily on manually engineered features and linear assumptions, which hinder their capacity to accurately model the intricate nonlinear relationships inherent in high-dimensional hyperspectral data [13]. Furthermore, these models frequently encounter the “curse of dimensionality”—even with sufficient training samples, their classification performance falls short compared to that of deep learning approaches [14].
Compared with traditional approaches, deep learning methods demonstrate superior feature representation capabilities, enabling the automatic extraction of multi-level and highly abstract features from raw data [15,16]. Among these, Convolutional Neural Networks (CNNs) are widely employed as fundamental models owing to their effective local receptive fields and parameter sharing mechanisms [17]. Zhao et al. [18] proposed a dual-interactive hierarchical adaptive fusion network built upon a dual-branch CNN architecture. This network is capable of extracting discriminative, high-level semantic features from both HSI and LiDAR data, thereby achieving improved classification performance. Huang et al. [19] proposed a method based on convolutional neural networks. By incorporating a cross-attention mechanism, significant spatial weights are assigned to LiDAR data with respect to HSI, thereby enhancing the interaction between the two modalities and fully exploiting the complementary information from data fusion. Ge et al. [20] proposed a multi-scale CNN framework that integrates parameter sharing with a local-global cross-attention mechanism. This approach enables the joint deep semantic representation and data fusion of HSI and LiDAR data. Liu et al. [21] developed a multi-scale spatial feature module and achieved feature fusion through concatenation operations, thereby proposing a multi-scale and multi-directional feature extraction network. Indeed, the design of multi-scale modules and the incorporation of attention mechanisms into CNNs can mitigate the limitation of fixed receptive fields and enhance the capability of extracting remote sensing features. However, stacking such modules significantly increases the number of parameters and computational complexity [22,23]. In addition, due to the inherent local receptive field of convolutional operations, these models still face challenges in effectively capturing long-range dependencies across different scenarios and in handling the long-sequence characteristics of spectral features.
Given the exceptional capability of Vision Transformers (ViT) in modeling long-distance dependencies in visual tasks, researchers have incorporated the Transformer architecture into remote sensing image classification tasks [24,25]. Yang et al. [26] developed a stackable modal fusion block as the central component of the model and introduced a multi-modal data fusion framework tailored for the integration and classification of HSI and LiDAR data, achieving an overall classification accuracy of 99.91%. Huang et al. [27] integrated a CNN with the latest Transformer architecture and proposed a novel multi-modal cross-layer fusion Transformer network, aiming to enhance both the stability and performance of the model. Sun et al. [28] proposed a multi-scale 3D-2D hybrid CNN feature extraction framework combined with a lightweight attention-free fusion network for multi-source data, based on the integration of a convolutional neural network and Transformer architectures, thereby substantially enhancing the performance of joint classification. Ni et al. [29] focused on the selective convolutional kernel mechanism and the spectral-spatial interaction transformer for feature learning, and subsequently proposed a selective spectral-spatial aggregation transformer network. Roy et al. [30] introduced cross-attention, extended the traditional self-attention mechanism, and proposed a novel multi-modal deep learning framework that effectively integrates remote sensing data.
Although current methodologies have demonstrated substantial improvements in classification accuracy, the integration of remote sensing data for fusion-based classification continues to present certain challenges.
  • Feature extraction remains suboptimal. Solely relying on a simplistic network architecture to extract basic information from HSI and LiDAR data limits the in-depth exploration of the complementary characteristics of multimodal data. For instance, the integrated representation and cross-scale interaction among spectral, spatial, and elevation features are not effectively achieved, leading to underutilization of feature diversity.
  • There exists a feature fusion defect. Relying solely on simple feature stacking and fusion overlooks the correlations between heterogeneous features, which in turn limits the model’s final decision-making capability.
To address the aforementioned challenges, this paper proposes a cross-modal cross-attention Transformer (CCFormer) network that enables efficient fusion and classification through multi-scale feature interaction and semantic guidance. The network features a three-level architectural design. First, shallow feature extraction is performed using a dual-stream pyramid module. In the spectral branch, multi-scale convolutional kernels are employed to capture fine-grained spectral features from hyperspectral data, while the LiDAR branch utilizes multi-scale convolutional kernels to extract elevation structure features. Second, a cross-attention Transformer encoder is introduced to achieve cross-modal feature alignment and bidirectional semantic guidance through a cross self-attention mechanism. Finally, class probabilities are generated via a lightweight classification head. The proposed framework effectively mitigates the redundancy caused by simple feature concatenation and enhances the model’s decision-making capability through pyramid-based multi-scale feature refinement and cross-modal attention interaction.
The main contributions of this work are summarized as follows.
  • In the shallow feature extraction stage, considering the rich spectral characteristics of HSI and the elevation information provided by LiDAR, a pyramid spectral multi-scale module and a spatial multi-scale module are, respectively, designed. Through a two-level multi-scale feature extraction process, the features are progressively refined, allowing for a comprehensive exploration of the intrinsic representations of each modality. This lays a solid foundation for subsequent cross-attention fusion by providing complementary features.
  • A CCFormer is proposed to enable bidirectional interaction between HSI and LiDAR features by reformulating the attention calculation paradigm. The spectral features of HSI are used as the query, with the elevation-structure features of LiDAR serving as the key and value, and weights are dynamically assigned to enhance the correlation between heterogeneous features. In parallel, the LiDAR features are employed as the query and the HSI features as the key and value, yielding a bidirectional attention mechanism.
  • Performance evaluation was conducted on three representative remote sensing datasets. The experimental results demonstrate that the proposed algorithm surpasses the current state-of-the-art methods.

2. Related Work

2.1. Multiscale Feature Extraction

In image segmentation and classification tasks, multi-scale methods enhance model performance by capturing features across various spatial dimensions [31]. The theoretical foundation of these methods can be traced back to the concept of scale space introduced in 1962, with a significant quantitative advancement achieved in 1983 through the application of Gaussian filters. Initially, multi-scale features were primarily extracted using image pyramids. However, this approach has two major limitations. First, it lacks the capability for effective cross-scale feature interaction and analysis, making it difficult to establish meaningful associations across different scales. Second, the Gaussian function, which forms the core of this method, tends to blur edge and detail information in the image, thereby reducing the accuracy in locating target boundaries [14]. With the advent of deep learning, architectures such as GoogLeNet (featuring the Inception module) and VGGNet in 2014 advanced multi-scale modeling capabilities [32,33]. In recent years, innovations such as the Feature Pyramid Network (FPN) and skip connections have been introduced [34]. More recently, hybrid architectures combining CNNs and Transformers have enabled more sophisticated feature fusion, leading to notable improvements in both classification accuracy and the precision of segmentation boundary detection [25].
Owing to the high-dimensional nature of the data and the complex spectral characteristics inherent in HSIs, multi-scale approaches have emerged as a crucial strategy for effective feature extraction [24,25,26,27,28]. Meanwhile, the application of multi-scale methods in the field of remote sensing has also become a research hotspot.

2.2. Vision Transformer

Since its introduction in 2017, the Transformer model has overcome the limitations of traditional recurrent neural network (RNN) architectures by employing a self-attention mechanism. This innovation enables the efficient capture of long-range dependencies and supports parallel computation, thereby driving significant advancements in the field of natural language processing (NLP) [35]. The core self-attention mechanism enhances semantic modeling by dynamically assigning weights to features within the input sequence, establishing the Transformer as the dominant architecture for tasks such as machine translation. In 2020, ViT extended the pure Transformer framework into the domain of computer vision [36]. By utilizing patch-based image representation and hierarchical attention mechanisms, ViT achieved performance on image classification tasks that was comparable to or exceeded that of state-of-the-art CNNs, demonstrating the potential for cross-modal generalization. This milestone not only broadens the applicability of the Transformer architecture but also highlights the universal strengths of self-attention in modeling global feature relationships, signifying a paradigm shift in deep learning from localized convolutional operations to global, dynamic feature interactions.
As illustrated in Figure 1, the multi-head self-attention mechanism, which serves as a core component of the Transformer encoder, enables efficient global feature interaction by performing parallel computations across multiple scaled dot-product attention modules. We define a sequence of $n$ entity vectors $x_1, x_2, \ldots, x_n$ as the input matrix $X \in \mathbb{R}^{n \times d}$, where $d$ denotes the embedding dimension of each entity. The purpose of the self-attention mechanism is to encode each entity using global contextual information, thereby capturing the interactions among all $n$ entities.
In the specific implementation, the input $X$ is linearly projected through three learnable weight matrices $W_Q \in \mathbb{R}^{d \times d_q}$, $W_K \in \mathbb{R}^{d \times d_k}$, and $W_V \in \mathbb{R}^{d \times d_v}$ (where the dimensions of the query and the key are the same, i.e., $d_q = d_k$) to obtain the query matrix $Q$, the key matrix $K$, and the value matrix $V$, respectively. This process can be expressed by the following formula:
$$Q = XW_Q, \quad K = XW_K, \quad V = XW_V$$
Subsequently, the scaled dot-product attention $\alpha_{ij}$ is computed between the query vector $q_i$ (derived from matrix $Q$) and the key vector $k_j$ (obtained from $K$). This process can be represented by the following formula:
$$\alpha_{ij} = \mathrm{softmax}\left(\frac{q_i k_j^{T}}{\sqrt{d_k}}\right) = \frac{\exp\left(q_i k_j^{T}/\sqrt{d_k}\right)}{\sum_{j}\exp\left(q_i k_j^{T}/\sqrt{d_k}\right)}$$
Here, $\sqrt{d_k}$ serves as the scaling factor, where $d_k$ corresponds to the dimension of $q_i$ and $k_j$. Applying the derived attention map $\alpha$ (the weight matrix whose elements are $\alpha_{ij}$) to the value matrix $V$ then generates the attention output, which can be represented by the following formula:
$$A_i = \sum_{j} \alpha_{ij} v_j$$
where $A$ is the output of single-head attention, obtained by computing a weighted sum based on the attention scores, $A_i$ is its $i$-th row, and $v_j$ is the $j$-th row vector of matrix $V$.
The multi-head self-attention mechanism partitions the input features into $h$ distinct sub-spaces. Each attention head independently computes its attention scores, after which the results are concatenated and integrated. This mechanism enables multi-dimensional parallel modeling, thereby simultaneously improving feature representation and enhancing the capability to capture complex patterns. The corresponding formula is as follows:
$$MH = \mathrm{Concat}\left(A_1, A_2, \ldots, A_h\right) W$$
where $MH \in \mathbb{R}^{n \times d}$ represents the multi-head attention output over $h$ heads, $W \in \mathbb{R}^{h d_v \times d}$ represents the learnable weight matrix, and $\mathrm{Concat}$ denotes the concatenation function.
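To make the above concrete, the following is a minimal PyTorch sketch of the scaled dot-product multi-head self-attention described by the preceding formulas, assuming a batched token sequence of shape (batch, n, d); the class name, bias-free projections, and equal per-head dimensions are illustrative choices, not details taken from the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Scaled dot-product multi-head self-attention over a token sequence X of shape (batch, n, d)."""
    def __init__(self, d: int, num_heads: int):
        super().__init__()
        assert d % num_heads == 0, "embedding dim must be divisible by the number of heads"
        self.num_heads = num_heads
        self.d_head = d // num_heads
        # Learnable projections W_Q, W_K, W_V and the output projection W
        self.w_q = nn.Linear(d, d, bias=False)
        self.w_k = nn.Linear(d, d, bias=False)
        self.w_v = nn.Linear(d, d, bias=False)
        self.w_o = nn.Linear(d, d, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        # Project and split into h heads: (b, h, n, d_head)
        q = self.w_q(x).view(b, n, self.num_heads, self.d_head).transpose(1, 2)
        k = self.w_k(x).view(b, n, self.num_heads, self.d_head).transpose(1, 2)
        v = self.w_v(x).view(b, n, self.num_heads, self.d_head).transpose(1, 2)
        # alpha_ij = softmax(q_i k_j^T / sqrt(d_k)), the scaled dot-product attention map
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        alpha = scores.softmax(dim=-1)
        # A = alpha V per head; concatenate the heads and apply the output projection W
        out = (alpha @ v).transpose(1, 2).reshape(b, n, d)
        return self.w_o(out)
```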

3. Methodology

In this section, the overall architecture of the proposed method is introduced, followed by a detailed analysis of its key internal modules, which include a shallow feature extraction module and a cross-attention Transformer.

3.1. Overall Architecture

Considering the complementary characteristics of HSIs—which offer high spectral resolution but limited spatial detail—and LiDAR data—which provide rich three-dimensional structural information but lack spectral features—this study addresses the challenge of enhancing the robustness of ground object classification in scenarios with limited training samples. To this end, a multi-scale cross-guided CCFormer framework is proposed. By leveraging a cross-modal feature mutual guidance and fusion mechanism, the framework effectively integrates the spectral-spatial information from hyperspectral images with the elevation-structural information from LiDAR data, thereby mitigating classification ambiguities caused by the limitations of single-source data. Figure 2 illustrates the multi-modal cross-guidance and fusion strategy employed in the proposed framework. This framework achieves collaborative enhancement of spectral details and geometric features through a dual-guided cross-attention mechanism, offering a novel solution for small-sample classification in complex scenarios. The method follows a hierarchical architecture, comprising a shallow multi-scale feature extraction module, a CCFormer encoder, and a classification decision layer, thereby establishing an end-to-end cross-modal fusion classification framework.
The inputs to the model consist of the raw HSI data and LiDAR data, denoted as $X_H \in \mathbb{R}^{W \times H \times L}$ and $X_L \in \mathbb{R}^{W \times H}$, respectively, where $W$ and $H$ correspond to the spatial width and height dimensions of the data, while $L$ denotes the spectral dimension of the HSI. During the data preprocessing phase, a square window of size $s \times s$ is applied in a sliding manner pixel by pixel, and each resulting $s \times s$ block is treated as an individual sample. The samples derived from the HSI data and LiDAR data are denoted as $X_h \in \mathbb{R}^{s \times s \times L}$ and $X_l \in \mathbb{R}^{s \times s}$, respectively. Subsequently, a shallow multiscale feature extraction process is applied to each sample, involving a two-level multiscale feature mining strategy. With respect to the spectral dimension of HSI, multiscale band grouping is performed along with local spectral correlation modeling. Regarding the spatial dimension of LiDAR data, multiscale spatial filtering and local geometric structure analysis are implemented. In the cross-modal feature fusion stage, the HSI features and LiDAR features are fed into the cross-attention Transformer encoder unit. Bidirectional information exchange is facilitated through cross-modal feature interaction guidance, enabling comprehensive feature fusion at multiple levels via a multi-level attention mechanism. Finally, the classification head performs the final classification by mapping the extracted high-dimensional features into the corresponding category label space.
It is worth noting that traditional HSIC methods typically flatten either a single-pixel spectral vector or the spectral information of a local region into a one-dimensional vector to serve as input tokens for the Transformer. In contrast, this study employs a per-band feature serialization strategy. Specifically, given an input feature $X_h \in \mathbb{R}^{m \times m \times b}$, a linear projection is applied to generate $b$ band-level tokens, as shown in the following equation:
$$X_{tokens} = \left[x_{seq}^{1};\, x_{seq}^{2};\, \ldots;\, x_{seq}^{b}\right], \quad x_{seq}^{i} \in \mathbb{R}^{m \times m \times 1}$$
where $m \times m$ represents the spatial dimension, $b$ represents the spectral dimension, $X_{tokens}$ is the resulting sequence of tokens ($b$ tokens in total), and $x_{seq}^{i}$ is a single token. This approach aims to provide a structured input format that facilitates the evaluation of the importance of each spectral band, particularly when integrated with LiDAR data in subsequent analyses.
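As a small illustration of this per-band serialization, the sketch below flattens each spectral band of an $m \times m \times b$ patch into one token; the subsequent linear projection to the embedding dimension is omitted, and the helper name is hypothetical.

```python
import torch

def band_tokens(x_h: torch.Tensor) -> torch.Tensor:
    """Serialize an HSI patch of shape (m, m, b) into b band-level tokens.

    Each token x_seq_i is the flattened m*m spatial slice of band i, so the
    output has shape (b, m*m). A linear projection to the embedding dimension
    would follow in the full model (not shown here).
    """
    m, m2, b = x_h.shape
    assert m == m2, "expected a square spatial patch"
    # (m, m, b) -> (b, m, m) -> (b, m*m): one token per spectral band
    return x_h.permute(2, 0, 1).reshape(b, m * m)

# Example: an 11x11 patch with 64 bands yields 64 tokens of length 121.
tokens = band_tokens(torch.randn(11, 11, 64))
print(tokens.shape)  # torch.Size([64, 121])
```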

3.2. Shallow Multiscale Feature Extraction

In the shallow feature extraction stage, considering the distinct characteristics of HSI and LiDAR data, a spectral pyramid-based multiscale feature extraction module and a spatial pyramid-based multiscale feature extraction module are designed, respectively. The former achieves multi-granularity decoupling of spectral features through multi-scale band grouping and local spectral correlation modeling. The latter enables multi-level decomposition of spatial features by employing multi-scale spatial filtering and local geometric structure analysis.
As illustrated in Figure 3, the Spectral Pyramid-Based Multiscale Feature Extraction Module comprises a spectral-dimensional pyramid structure composed of three layers. The pyramid performs feature extraction at multiple scales by applying convolutional kernels of sizes $1 \times 1 \times 1$, $1 \times 1 \times 3$, and $1 \times 1 \times 5$, respectively, along the spectral dimension. Following the convolution in each layer, batch normalization is applied to standardize the data distribution across batches, thereby maintaining a relatively stable input distribution for subsequent layers. Subsequently, the ReLU activation function is employed to introduce nonlinearity, which facilitates faster network convergence and helps mitigate the risk of overfitting. This structure can be mathematically represented by the following equations:
$$F_h^{1}(X_h) = \sigma\left(\beta\left(3DConv_{1 \times 1 \times 1}(X_h)\right)\right)$$
$$F_h^{2}(X_h) = \sigma\left(\beta\left(3DConv_{1 \times 1 \times 3}(X_h)\right)\right)$$
$$F_h^{3}(X_h) = \sigma\left(\beta\left(3DConv_{1 \times 1 \times 5}(X_h)\right)\right)$$
$$F_h^{out} = Cat\left(F_h^{1}(X_h),\, F_h^{2}(X_h),\, F_h^{3}(X_h)\right)$$
where $X_h \in \mathbb{R}^{s \times s \times L}$ represents a sample of the input HSI, and $F_h^{i}$ and $F_h^{out}$ denote the output of the $i$-th pyramid layer and the overall output features of the pyramid, respectively. Additionally, $\sigma$, $\beta$, $3DConv$, and $Cat$ denote the ReLU, 3D batch normalization, 3D convolution, and concatenation functions, respectively.
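A minimal PyTorch sketch of this spectral pyramid branch is given below, assuming the HSI patch is laid out as (batch, channel, bands, height, width) with a single input channel; the output channel width and the "same" padding are assumptions, since these hyperparameters are not specified here.

```python
import torch
import torch.nn as nn

class SpectralPyramid(nn.Module):
    """Three-branch spectral pyramid: 3D convolutions with spectral kernel extents 1, 3 and 5
    (spatial extent 1x1), each followed by BatchNorm3d and ReLU, then concatenated along the
    channel dimension, mirroring the four equations above."""
    def __init__(self, in_ch: int = 1, out_ch: int = 8):
        super().__init__()
        def branch(k):  # k = spectral kernel size; padding keeps the band count unchanged
            return nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=(k, 1, 1), padding=(k // 2, 0, 0)),
                nn.BatchNorm3d(out_ch),
                nn.ReLU(inplace=True),
            )
        self.b1, self.b2, self.b3 = branch(1), branch(3), branch(5)

    def forward(self, x_h: torch.Tensor) -> torch.Tensor:
        # x_h: (batch, in_ch, L, s, s) with L spectral bands and an s x s spatial window
        return torch.cat([self.b1(x_h), self.b2(x_h), self.b3(x_h)], dim=1)
```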
As illustrated in Figure 4, the Spatial Pyramid-Based Multiscale Feature Extraction Module comprises a spatial-dimensional pyramid formed by three layers of convolutional kernels with sizes of $1 \times 1$, $3 \times 3$, and $5 \times 5$, respectively. Its structure is similar to that of the Spectral Pyramid-Based Multiscale Feature Extraction Module, the difference being that one operates in two dimensions while the other operates in three. The module can be mathematically represented by the following equations:
$$F_l^{1}(X_l) = \sigma\left(\beta\left(2DConv_{1 \times 1}(X_l)\right)\right)$$
$$F_l^{2}(X_l) = \sigma\left(\beta\left(2DConv_{3 \times 3}(X_l)\right)\right)$$
$$F_l^{3}(X_l) = \sigma\left(\beta\left(2DConv_{5 \times 5}(X_l)\right)\right)$$
$$F_l^{out} = Cat\left(F_l^{1}(X_l),\, F_l^{2}(X_l),\, F_l^{3}(X_l)\right)$$
where $X_l \in \mathbb{R}^{s \times s}$ represents a sample of the input LiDAR data, $F_l^{i}$ and $F_l^{out}$ denote the output of the $i$-th pyramid layer and the overall output features of the pyramid, respectively, and $2DConv$ represents the 2D convolution function.
The two feature extraction modules have similar structures. The primary distinction lies in their operational spaces: one performs multi-scale feature extraction in the planar domain, with a focus on spatial feature extraction; the other operates in the three-dimensional space, emphasizing spectral feature extraction.
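Under the same assumptions as the spectral sketch above, the LiDAR spatial pyramid reduces to the 2D analogue sketched below; only the dimensionality of the convolutions changes.

```python
import torch
import torch.nn as nn

class SpatialPyramid(nn.Module):
    """2D analogue for the LiDAR branch: kernels 1x1, 3x3, 5x5 with BatchNorm2d
    and ReLU, concatenated along the channel dimension."""
    def __init__(self, in_ch: int = 1, out_ch: int = 8):
        super().__init__()
        def branch(k):  # k = spatial kernel size; padding preserves the window size
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
        self.b1, self.b2, self.b3 = branch(1), branch(3), branch(5)

    def forward(self, x_l: torch.Tensor) -> torch.Tensor:
        # x_l: (batch, in_ch, s, s) LiDAR/DSM window
        return torch.cat([self.b1(x_l), self.b2(x_l), self.b3(x_l)], dim=1)
```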

3.3. Cross-Attention Transformer

As the core component of the Transformer architecture, the multi-head self-attention mechanism plays a crucial role in capturing long-range dependencies among features. Inspired by the cross-modal complementarity between elevation features and spectral-spatial features in remote sensing scenarios, this study enhances the multi-head self-attention mechanism by incorporating a feature interaction strategy, thereby constructing a cross-attention Transformer module. The architectural design of CCFormer is elaborated in the left portion of Figure 2.
The query vector matrix Q , as a central element of the self-attention mechanism, functions as a dynamic interface that facilitates cross-modal feature interaction. To effectively integrate multi-source remote sensing data, this mechanism computes similarity scores between the query vector matrix Q and the key vector matrix K across different modalities, followed by a weighted aggregation of the value vector matrix V , thereby enhancing the spatial-spectral correlation within key regions. Specifically, the query vector matrix Q L from the LiDAR branch offers attention-based guidance for the spectral features of the HSI, whereas the query vector matrix Q H from the HSI branch assesses the discriminative significance of the LiDAR elevation data, establishing a bidirectional cross-modal attention framework. This interactive guidance approach enables the adaptive fusion of elevation and spectral-spatial information through dynamic feature weighting, laying a theoretical foundation for the collaborative representation of multi-modal remote sensing data.
Specifically, the token sequences of HSI and LiDAR are first subjected to feature normalization using LayerNorm. Subsequently, each modality undergoes feature projection through linear transformation and chunking operations to generate the corresponding query matrices ($Q_H$ and $Q_L$), key matrices ($K_H$ and $K_L$), and value matrices ($V_H$ and $V_L$). In the cross-modal interaction phase, the affinity score is computed by constructing the similarity matrix between the query vectors from one modality and the key vectors from the other. To preserve relative spatial position information, rotary position encoding is integrated into this computation. Finally, regularization is applied through the Dropout mechanism. This process can be represented by the following formulas:
$$Att_1 = Dp\left(\mathrm{softmax}\left(\frac{q_l k_h^{T}}{\sqrt{d}} + Pos\left(q_l, k_h\right)\right)\right)$$
$$Att_2 = Dp\left(\mathrm{softmax}\left(\frac{q_h k_l^{T}}{\sqrt{d}} + Pos\left(q_h, k_l\right)\right)\right)$$
where $Att_1$ and $Att_2$ denote the similarity scores between the LiDAR elevation feature query vector $q_l$ and the HSI key vector $k_h$, and between the HSI query vector $q_h$ and the LiDAR key vector $k_l$, respectively. $q_h$, $q_l$, $k_h$, and $k_l$ are derived from the matrices $Q_H$, $Q_L$, $K_H$, and $K_L$, respectively. The function $Pos$ refers to the rotary position embedding operation, and $Dp$ denotes the dropout regularization operation.
The two sets of cross-modal attention scores $Att_1$ and $Att_2$ are then each applied to the value vectors $v_h$ through a weighted sum; the resulting output is multiplied by another set of value vectors $v_l$ to derive the final single-head attention output $Att$, as illustrated in the following equation:
$$Att = Lin\left(\left(Att_1 v_h + Att_2 v_h\right) v_l\right)$$
where $v_h$ and $v_l$ are derived from the matrices $V_H$ and $V_L$, respectively. $Lin$ represents a linear function, specifically expressed as $Y = XW_l + b$, where $W_l$ denotes the weight matrix and $b$ the bias term; both parameters are learnable. Subsequently, the output of multi-head attention is obtained through a weighted concatenation of the outputs from the individual single-head attention mechanisms. The process is illustrated in Equation (4) presented above.
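The following single-head sketch illustrates one way the bidirectional cross-attention above could be realized in PyTorch. It assumes both token sequences share the same length and embedding dimension, omits the rotary position term and the multi-head split, and reads the final product with the LiDAR values as an element-wise gating; these are simplifying assumptions rather than confirmed implementation details.

```python
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    """Single-head sketch of the bidirectional cross-modal attention described above.

    Assumptions: both modalities provide (batch, n, d) token sequences of equal
    length; the Pos(.,.) term is omitted; the product with v_l is element-wise.
    """
    def __init__(self, d: int, dropout: float = 0.4):  # 0.4 matches the dropout reported in Section 4.2
        super().__init__()
        self.norm_h = nn.LayerNorm(d)
        self.norm_l = nn.LayerNorm(d)
        # Per-modality projections for query, key and value
        self.qkv_h = nn.Linear(d, 3 * d, bias=False)
        self.qkv_l = nn.Linear(d, 3 * d, bias=False)
        self.drop = nn.Dropout(dropout)
        self.lin = nn.Linear(d, d)  # the Lin(.) applied to the combined output
        self.scale = d ** -0.5

    def forward(self, tokens_h: torch.Tensor, tokens_l: torch.Tensor) -> torch.Tensor:
        # tokens_h, tokens_l: (batch, n, d) HSI and LiDAR token sequences
        q_h, k_h, v_h = self.qkv_h(self.norm_h(tokens_h)).chunk(3, dim=-1)
        q_l, k_l, v_l = self.qkv_l(self.norm_l(tokens_l)).chunk(3, dim=-1)
        # Att_1: LiDAR queries attend to HSI keys
        att1 = self.drop((q_l @ k_h.transpose(-2, -1) * self.scale).softmax(dim=-1))
        # Att_2: HSI queries attend to LiDAR keys
        att2 = self.drop((q_h @ k_l.transpose(-2, -1) * self.scale).softmax(dim=-1))
        # Aggregate HSI values under both score maps, then gate with LiDAR values
        return self.lin((att1 @ v_h + att2 @ v_h) * v_l)
```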
Then, the output of the multi-head attention is first combined with the original input through a residual connection, which helps alleviate the issue of gradient vanishing. Following this, a feed-forward neural network is applied to extract high-order features, ultimately yielding the output tensor T o u t . This procedure can be concisely expressed using the following mathematical formulation:
$$T_{out} = LN\left(LN\left(X + MH(X)\right) + FFN\left(LN\left(X + MH(X)\right)\right)\right)$$
where $X$ represents the input of the multi-head attention, and $MH$, $LN$, and $FFN$ denote the multi-head attention, the layer normalization operation, and the feed-forward neural network, respectively. Here, $FFN(x) = \mathrm{ReLU}\left(xW_1 + b_1\right)W_2 + b_2$, where $W_1$ and $W_2$ are the weight matrices of the first and second layers, and $b_1$ and $b_2$ are the corresponding bias terms.
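A compact sketch of this residual-plus-feed-forward wrapper is shown below, taking the multi-head attention output as an argument; the 4x hidden expansion of the FFN is an assumed default rather than a value reported in the paper.

```python
import torch
import torch.nn as nn

class EncoderTail(nn.Module):
    """Residual + LayerNorm + feed-forward wrapper applied to the multi-head attention output,
    following T_out = LN(LN(X + MH(X)) + FFN(LN(X + MH(X))))."""
    def __init__(self, d: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)
        # Two-layer FFN with ReLU; the 4*d hidden width is an assumption
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))

    def forward(self, x: torch.Tensor, mh_out: torch.Tensor) -> torch.Tensor:
        # x: encoder input X; mh_out: multi-head attention output MH(X)
        y = self.norm1(x + mh_out)        # residual connection followed by LayerNorm
        return self.norm2(y + self.ffn(y))  # FFN branch with a second residual and LayerNorm
```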

4. Experiments and Analysis

In this section, we first present the experimental data and subsequently provide a detailed description of the experimental settings. To evaluate the performance of the proposed network, we conducted comparative experiments with several widely adopted models. Furthermore, ablation studies were carried out to validate the contribution of each individual component within the model.

4.1. Datasets Description

To comprehensively evaluate the applicability and effectiveness of the proposed method across diverse scenarios, we selected three widely recognized remote sensing datasets: MUUFL, Trento, and Houston2013. These datasets encompass a range of typical land cover types, including parks, agricultural areas, university campuses, and urban environments. They vary in terms of spatial resolution and spectral bands, thereby enabling the simulation of realistic application conditions. Detailed parameter specifications for each dataset are provided in Table 1.
MUUFL: The MUUFL dataset integrates HSI and LiDAR data, which were collected in November 2010 over the campus of the University of Southern Mississippi Gulf Park in Long Beach, Mississippi, USA. The hyperspectral data were acquired using the ITRES CASI-1500 sensor, covering a spectral range of 375–1050 nm (0.38–1.05 µm) with 64 spectral bands. The spatial dimensions of the HSI data are 325 × 220 pixels, with a spatial resolution of 0.54 × 1.0 m2. The LiDAR data were collected using the Gemini airborne system, with a spatial resolution of 0.60 × 0.78 m2. The dataset comprises 11 land cover classes and includes a total of 53,687 labeled samples. Figure 5 presents the false-color composite of the hyperspectral data, the digital surface model (DSM) derived from LiDAR data, and the land cover map. The distribution of samples across the training and test sets is summarized in Table 2.
Trento: The Trento dataset combines HSI and LiDAR data collected from rural areas of Trento, located in southern Italy. The HSI data was captured using the airborne Eagle sensor and comprises 63 spectral bands ranging from 0.42 to 0.99 µm. The dataset covers an area of 166 × 600 pixels with a spatial resolution of 1 m. The LiDAR data were obtained from the Optech ALTM 3100 EA airborne sensor, which produced a single raster dataset and, together with the HS data, was used to generate a DSM. This dataset is primarily intended for land cover classification involving six distinct categories and includes a total of 30,214 labeled samples. Figure 6 presents the hyperspectral false-color image of the Trento data, the LiDAR-derived DSM, and the ground truth map. The distribution of samples in the training and test sets is summarized in Table 3.
Houston 2013: The Houston 2013 dataset was provided by the IEEE GRSS Data Fusion Contest. It was collected in June 2012 by the National Center for Airborne Laser Mapping (NCALM) in the United States, covering areas within and around the campus of the University of Houston, Texas. This dataset combines HSI data with a DSM derived from LiDAR. The HS data was captured using the ITRES CASI-1500 sensor, comprising 144 spectral bands within the wavelength range of 0.38 to 1.05 µm. It has a spatial resolution of 2.5 m and a scene size of 349 × 1905 pixels. The LiDAR data is single-band and matches the HS data in both spatial resolution and scene dimensions. The primary objective of this dataset is to enable the classification of 15 distinct land use/land cover categories, and it includes a total of 15,029 labeled samples. Figure 7 presents the false-color HSI of the Houston 2013 data, the LiDAR-derived DSM, and the ground truth map. The distribution of samples in the training and test sets is summarized in Table 4.

4.2. Experimental Setup

This study constructs an experimental system using the PyTorch 1.13.1 framework. The hardware environment is equipped with an NVIDIA RTX 4090 GPU (24 GB VRAM, Santa Clara, CA, USA), and the programming environment is established on Python 3.8. The experimental parameters are configured as follows: 20 training samples are assigned to each class, with an input window size of 11 × 11 pixels. During training, the model is trained for 200 epochs, with an initial learning rate of 0.001, a batch size of 512, and the Adam optimizer is employed for model optimization; additionally, a dropout probability of 0.4 is applied to mitigate overfitting. At the network architecture level, the number of hidden channels is set to 32, the encoder adopts a two-layer structure with 6 attention heads, the temperature parameter is set to 1.0, and the hierarchical loss coefficient is set to 0.005.
The experiment uses the Overall Accuracy (OA), Average Accuracy (AA), and Kappa coefficient as quantitative indicators to evaluate the performance of the method. Among them, OA is the proportion of correctly classified pixels to the total number of pixels; AA is the average of the accuracies of each category; the Kappa coefficient measures the overall performance of the classifier by statistically analyzing the consistency between the model predictions and the true labels. The specific mathematical representation formulas are as follows.
$$OA = \frac{\sum_{i=1}^{c} m_{ii}}{N_{total}}$$
$$AA = \frac{1}{c}\sum_{i=1}^{c} \frac{m_{ii}}{\sum_{j=1}^{c} m_{ij}}$$
$$Kappa = \frac{OA - \sum_{i=1}^{c}\frac{R_i}{N_{total}}\cdot\frac{C_i}{N_{total}}}{1 - \sum_{i=1}^{c}\frac{R_i}{N_{total}}\cdot\frac{C_i}{N_{total}}}$$
Among them, m i j denotes the number of pixels that truly belong to class i but are classified as class j (with m i i representing the number of correctly classified samples). c refers to the total number of object class categories, and N t o t a l indicates the total number of test samples. Furthermore, R i and C i denote the sum of the i-th row and the sum of the i-th column of the confusion matrix, respectively.
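For reference, all three metrics can be computed directly from the confusion matrix, as in the short NumPy sketch below (the function name is illustrative).

```python
import numpy as np

def classification_metrics(conf: np.ndarray):
    """Compute OA, AA and the Kappa coefficient from a c x c confusion matrix
    whose entry conf[i, j] counts pixels of true class i predicted as class j."""
    n_total = conf.sum()
    oa = np.trace(conf) / n_total
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))
    # Expected agreement from the row (R_i) and column (C_i) marginals
    pe = np.sum(conf.sum(axis=1) * conf.sum(axis=0)) / n_total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Example with a small 2-class confusion matrix
print(classification_metrics(np.array([[90, 10], [5, 95]])))
```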

4.3. Classification Results

To evaluate the performance of the proposed method, we conducted comparative experiments involving seven SOTA models: HybridSN [15], SSFTT [16], DCTN [23], MS2CANet [24], MFT [8], MADNet [25], and PyionNet [3]. In the experiments, HybridSN, SSFTT, and DCTN—recognized as classical models for HSI classification—utilize a single HSI as input. In contrast, MS2CANet, MFT, MADNet, and PyionNet, which are SOTA models for HSI-LiDAR data fusion and classification, incorporate both the HSI and the DSM derived from LiDAR as input modalities.
HybridSN is a hybrid neural network specifically designed for HSI classification. Its core innovation lies in integrating the advantages of 3D convolution and 2D convolution to effectively capture the spatial-spectral joint features in hyperspectral data. SSFTT is based on the Transformer architecture. Through spatial-spectral feature fusion and a self-attention mechanism, it achieves high accuracy and robustness in the HSI classification task. DCTN employs a dual-branch convolutional Transformer architecture and incorporates an efficient interactive adaptive mechanism, thereby achieving outstanding performance in the HSIC task. MS2CANet is a multi-scale pyramid fusion framework that incorporates spatial-spectral cross-modal attention. It enhances the model’s capacity to learn multi-scale information, thereby improving classification accuracy. MFT is a multi-modal fusion transformer network that incorporates the mCrossPA mechanism to integrate complementary information sources with HSI tokens for land cover classification. MADNet is a multi-level attention-based dynamic scaling network that employs an attention module to extract features from HSIs and LiDAR data across multiple levels. PyionNet incorporates a pyramid multi-scale feature extraction module and a progressive cross-fusion mechanism, thereby significantly enhancing the classification accuracy of multi-source data integration.
To ensure the fairness of the experiment, all models are configured with the optimal parameters reported in the literature and are independently executed 10 times using the same training and test sets. Random errors are minimized by statistically analyzing the average performance and standard deviations, thereby improving the comparability across different methods.

4.3.1. Quantitative Analysis

The classification performance of the eight methods was evaluated using three datasets: MUUFL, Trento, and Houston2013. The results of these evaluations are summarized in Table 5, Table 6 and Table 7. For clarity, the highest OA, AA, Kappa coefficient, and per-class classification accuracies are highlighted in bold.
Table 5 presents the classification results of various methods on the MUUFL dataset. The results indicate that the proposed method in this study achieves the highest performance, with an OA, AA, and Kappa coefficient of 88.35%, 87.03%, and 87.43%, respectively. These metrics are 2.27%, 1.46%, and 1.79% higher than those of the second-best method, PyionNet. PyionNet demonstrates relatively balanced classification accuracy across object classes, achieving an OA of 86.08%. In contrast, HybridSN performs the least effectively, with an OA of only 71.5%. Other methods improve feature fusion and spatial-spectral relationship modeling by incorporating attention mechanisms or Transformer architectures, resulting in significantly higher classification accuracy compared to HybridSN. Furthermore, all classification methods that integrate HSI and LiDAR data achieve OA values exceeding 80%, which is notably higher than the performance of the three methods based solely on single-source HSI data.
Table 6 presents a comparison of the classification performance of eight different methods on the Trento dataset. This dataset was acquired from a rural area characterized by large, homogeneous farmlands with significant inter-class spectral variations, thereby offering favorable conditions for classification. The experimental results indicate that, with the exception of HybridSN (OA = 85.51%), the OA of the remaining seven methods exceed 94%. Among them, PyionNet achieved commendable classification performance, with the OA, AA, and Kappa coefficient reaching 98.09%, 97.23%, and 97.65%, respectively. This performance can be attributed to its mechanism of multi-scale feature extraction and cross-fusion. Notably, the method proposed in this study demonstrated superior results, achieving an OA of 99.02%, an AA of 98.59%, and a Kappa value of 99.15%.
Table 7 presents a comparison of the classification performance of eight models on the Houston 2013 dataset. The results indicate that the five methods employing joint classification of HSI and LiDAR data yield significantly better performance than the three methods based on single-source HSI classification. Furthermore, MS2CANet, MADNet, and PyionNet demonstrate superior performance in certain specific categories. For instance, PyionNet achieves the highest classification accuracy for the categories Trees, Water, and Parking Lot2. Notably, the method proposed in this study exhibits outstanding performance, achieving the highest OA (91.85%), AA (92.33%), and Kappa coefficient (91.63%). These results not only validate the effectiveness of multi-modal data fusion in classification tasks but also highlight the superiority of the proposed method in handling complex urban environments.
In summary, the following conclusions can be drawn: Across the three datasets, models based on multi-source data demonstrate superior classification performance compared to single-source input models. Among the single-source HSI models, DCTN significantly outperforms SSFTT and HybridSN in classification accuracy, as it integrates the technical strengths of both CNNs and Transformers. Among the multimodal models, MADNet employs a spectral angle attention mechanism that enables dynamic scale selection, resulting in overall performance that surpasses that of MS2CANet and MFT, which rely on simple fusion strategies. Furthermore, PyionNet exhibits strong competitiveness due to its efficient fusion architecture. Particularly on the Trento dataset, it achieves an OA of 98.09%, with consistently high and balanced performance across all categories on the other two datasets. Notably, the method proposed in this study achieves the highest classification accuracy across multiple object categories, with an overall accuracy surpassing all comparative models, thereby fully demonstrating its effectiveness and robustness.

4.3.2. Qualitative Analysis

To systematically evaluate the performance differences between the proposed method and existing comparative approaches, this study conducted qualitative visual analysis experiments on three publicly available datasets. The experimental results are presented in Figure 8, Figure 9 and Figure 10. Through direct comparison of the classification performance of different methods in representative scenarios, the advantages of the proposed method in terms of boundary preservation, detail representation, and noise suppression can be further substantiated.
Figure 8 presents the classification visualization results of each model on the MUUFL dataset. Due to the relatively low spectral discrimination among multiple adjacent object classes in this dataset, single-source input models (HybridSN, SSFTT, DCTN) exhibit noticeable confusion at the boundaries of different land cover types, particularly at the junctions of water/grass/forest and sand/mixed land surfaces in the upper region. Among the multi-source data models, PyionNet demonstrates clear classification boundaries and a competitive visual performance. In comparison, the method proposed in this study excels in detail representation and boundary preservation, effectively mitigating inter-class misclassification and the blurring effect.
Figure 9 presents a comparison of the classification visualization results obtained by various methods on the Trento dataset. As can be observed from the figure, HybridSN produces a significant amount of noise in the vineyard and apple tree regions. SSFTT, DCTN, and MS2CANet exhibit minor misclassifications at the boundary between the lower vineyard and the road. MFT demonstrates noticeable classification errors in the building area. PyionNet shows relatively better performance in these regions, with only a limited number of noisy points appearing in the central ground area. In contrast, the method proposed in this study not only minimizes classification errors but also achieves the best overall classification performance.
Figure 10 presents the visual classification results of eight algorithms on the Houston 2013 dataset. This dataset is characterized by a relatively scattered sample distribution and a complex urban scene. As a result, the first six methods exhibit noticeable classification errors in regions such as railways, stressed grass areas, and the junctions between running tracks and grass in the lower portion of the map. In contrast, PyionNet and the method proposed in this study demonstrate superior classification performance, with no evident misclassification observed.

4.4. Ablation Experiment

To evaluate the effectiveness of multimodal fusion in the system, this study designed two experimental frameworks—single-modal (HSI/LiDAR) and dual-modal fusion—for comparative analysis. Experimental results (as presented in Table 8) demonstrate that, across three standard remote sensing datasets, the dual-modal fusion approach substantially outperforms the single-modal approach in various classification metrics. Notably, the performance advantage is more pronounced in test sets characterized by higher scene complexity. This comparative analysis confirms the superiority of the multimodal data fusion strategy in remote sensing image classification, suggesting that the complementary nature of multi-source information can effectively enhance the model’s ability to distinguish complex land cover types.
To systematically evaluate the effectiveness of each module within the model, this study conducted three ablation experiments: (1) EXP1 excluded the shallow multi-scale feature extraction module and retained only the CCFormer encoder along with the classifier; (2) EXP2 removed the CCFormer encoder and instead employed a basic feature concatenation and fusion strategy in conjunction with the shallow multi-scale feature extraction module; (3) Full denotes the complete model architecture. As presented in Table 9, the experimental results demonstrate that although the CCFormer encoder alone (EXP1) achieves relatively high classification accuracy, the integration of the shallow feature extraction module with the CCFormer encoder leads to further performance enhancement. Specifically, the full model significantly outperforms the two simplified variants in terms of overall accuracy (OA) and average accuracy (AA), confirming the effectiveness of combining multi-scale feature extraction with the Transformer-based architecture. This comparison clearly illustrates the complementary nature of the model’s components, where the shallow feature extraction module provides a more robust and semantically rich foundation for subsequent high-level feature learning.

5. Conclusions

This paper presents a cross-modal cross-attention Transformer network designed for the fusion and classification of HSI and LiDAR data. Initially, the proposed method employs a dual-branch shallow multi-scale feature extraction module to separately capture the spectral-spatial features of HSI and the elevation features from LiDAR data. Subsequently, a cross-attention mechanism is integrated into the Transformer architecture to jointly guide and fuse the early-stage features of both modalities, thereby enabling adaptive feature complementation and enhancement across modalities. Experimental results on three widely recognized datasets—MUUFL, Trento, and Houston2013—demonstrate that the OA achieved by the proposed method reaches 88.35%, 99.02%, and 91.85%, respectively. The method significantly outperforms existing state-of-the-art approaches, thereby validating its effectiveness and technical advancement. This study offers a novel approach for multi-modal remote sensing data fusion and classification. In future work, we plan to investigate self-supervised pre-training strategies for multi-modal feature learning to enhance generalization in unsupervised scenarios.

Author Contributions

Conceptualization, H.G. and W.L.; methodology, H.G.; software, H.G.; validation, H.G. and B.T.; formal analysis, H.G.; investigation, H.G. and B.T.; resources, H.G. and B.T.; data curation, H.G. and B.T.; writing—original draft preparation, H.G.; writing—review and editing, H.G. and W.L.; visualization, H.G.; supervision, W.L.; project administration, W.L.; funding acquisition, W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Science and Technology Research Project of Henan Province under Grant No. 252102241019, in part by the Key Scientific Research Projects of Colleges and Universities of Henan Province under Grant No. 26A510005 and Grant No. 25B510005, and in part by the Higher Education Teaching Reform Research and Practice Projects of Henan Province under Grant No. 2024SJGLX0951.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HSI     Hyperspectral Image
HSIC    Hyperspectral Image Classification
LiDAR   Light Detection and Ranging
CNN     Convolutional Neural Network
SVM     Support Vector Machine
ELM     Extreme Learning Machine
RF      Random Forest
ViT     Vision Transformer
FPN     Feature Pyramid Network
RNN     Recurrent Neural Network
NLP     Natural Language Processing
DSM     Digital Surface Model
OA      Overall Accuracy
AA      Average Accuracy

References

  1. Zhang, Z.; Huang, L.; Wang, Q.; Jiang, L.; Qi, Y.; Wang, S.; Shen, T.; Tang, B.-H.; Gu, Y. UAV Hyperspectral Remote Sensing Image Classification: A Systematic Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 18, 3099–3124. [Google Scholar] [CrossRef]
  2. Guo, H.; Liu, W. EDB-Net: Efficient Dual-Branch Convolutional Transformer Network for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 12485–12500. [Google Scholar] [CrossRef]
  3. Pan, H.; Zhang, Q.; Ge, H.; Liu, M.; Shi, C. PyionNet: Pyramid Progressive Cross-Fusion Network for Joint Classification of Hyperspectral and LiDAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 12042–12058. [Google Scholar] [CrossRef]
  4. Guo, H.; Liu, W. S3L: Spectrum Transformer for Self-Supervised Learning in Hyperspectral Image Classification. Remote Sens. 2024, 16, 970. [Google Scholar] [CrossRef]
  5. Ghasemi, N.; Justo, J.A.; Celesti, M.; Despoisse, L.; Nieke, J. Onboard Processing of Hyperspectral Imagery: Deep Learning Advancements, Methodologies, Challenges, and Emerging Trends. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 4780–4790. [Google Scholar] [CrossRef]
  6. Liu, Y.; Zhang, Y.; Zhang, J. Lightweight Multi-Head MambaOut with CosTaylorFormer for Hyperspectral Image Classification. Remote Sens. 2025, 17, 1864. [Google Scholar] [CrossRef]
  7. Wang, L.; Deng, S. Hypergraph Convolution Network Classification for Hyperspectral and LiDAR Data. Sensors 2025, 25, 3092. [Google Scholar] [CrossRef]
  8. Roy, S.K.; Deria, A.; Hong, D.; Rasti, B.; Plaza, A.; Chanussot, J. Multimodal Fusion Transformer for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5515620. [Google Scholar] [CrossRef]
9. Ghamisi, P.; Benediktsson, J.A.; Phinn, S. Land-Cover Classification Using Both Hyperspectral and LiDAR Data. Int. J. Image Data Fusion 2015, 6, 189–215.
10. Melgani, F.; Bruzzone, L. Classification of Hyperspectral Remote Sensing Images with Support Vector Machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
11. Chen, T.; Chen, S.; Chen, L.; Chen, H.; Zheng, B.; Deng, W. Joint Classification of Hyperspectral and LiDAR Data via Multiprobability Decision Fusion Method. Remote Sens. 2024, 16, 4317.
12. Platel, A.; Sandino, J.; Shaw, J.; Bollard, B.; Gonzalez, F. Advancing Sparse Vegetation Monitoring in the Arctic and Antarctic: A Review of Satellite and UAV Remote Sensing, Machine Learning, and Sensor Fusion. Remote Sens. 2025, 17, 1513.
13. Takhtkeshha, N.; Mandlburger, G.; Remondino, F.; Hyyppä, J. Multispectral Light Detection and Ranging Technology and Applications: A Review. Sensors 2024, 24, 1669.
14. Pan, H.; Li, X.; Ge, H.; Wang, L.; Yu, X. Multi-Scale Hierarchical Cross Fusion Network for Hyperspectral Image and LiDAR Classification. J. Frankl. Inst. 2025, 362, 107713.
15. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281.
16. Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–Spatial Feature Tokenization Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522214.
17. Guo, H.; Liu, W. DMAF-NET: Deep Multi-Scale Attention Fusion Network for Hyperspectral Image Classification with Limited Samples. Sensors 2024, 24, 3153.
18. Zhao, Y.; Bao, W.; Xu, J.; Xu, X. BIHAF-Net: Bilateral Interactive Hierarchical Adaptive Fusion Network for Collaborative Classification of Hyperspectral and LiDAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 15971–15988.
19. Huang, J.; Zhang, Y.; Yang, F.; Chai, L. Attention-Guided Fusion and Classification for Hyperspectral and LiDAR Data. Remote Sens. 2023, 16, 94.
20. Ge, H.; Wang, L.; Pan, H.; Liu, Y.; Li, C.; Lv, D.; Ma, H. Cross Attention-Based Multi-Scale Convolutional Fusion Network for Hyperspectral and LiDAR Joint Classification. Remote Sens. 2024, 16, 4073.
21. Liu, Y.; Ye, Z.; Xi, Y.; Liu, H.; Li, W.; Bai, L. Multiscale and Multidirection Feature Extraction Network for Hyperspectral and LiDAR Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 9961–9973.
22. Wang, Q.; Zhou, B.; Zhang, J.; Xie, J.; Wang, Y. Joint Classification of Hyperspectral Images and LiDAR Data Based on Dual-Branch Transformer. Sensors 2024, 24, 867.
23. Zhou, Y.; Huang, X.; Yang, X.; Peng, J.; Ban, Y. DCTN: Dual-Branch Convolutional Transformer Network with Efficient Interactive Self-Attention for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5508616.
24. Wang, X.; Zhu, J.; Feng, Y.; Wang, L. MS2CANet: Multiscale Spatial–Spectral Cross-Modal Attention Network for Hyperspectral Image and LiDAR Classification. IEEE Geosci. Remote Sens. Lett. 2024, 21, 5501505.
25. He, Y.; Xi, B.; Li, G.; Zheng, T.; Li, Y.; Xue, C.; Chanussot, J. Multilevel Attention Dynamic-Scale Network for HSI and LiDAR Data Fusion Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5529916.
26. Yang, B.; Wang, X.; Xing, Y.; Cheng, C.; Jiang, W.; Feng, Q. Modality Fusion Vision Transformer for Hyperspectral and LiDAR Data Collaborative Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 17052–17065.
27. Huang, W.; Wu, T.; Zhang, X.; Li, L.; Lv, M.; Jia, Z.; Zhao, X.; Ma, H.; Vivone, G. MCFTNet: Multimodal Cross-Layer Fusion Transformer Network for Hyperspectral and LiDAR Data Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 12803–12818.
28. Sun, L.; Wang, X.; Zheng, Y.; Wu, Z.; Fu, L. Multiscale 3-D–2-D Mixed CNN and Lightweight Attention-Free Transformer for Hyperspectral and LiDAR Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 2100116.
29. Ni, K.; Li, Z.; Yuan, C.; Zheng, Z.; Wang, P. Selective Spectral-Spatial Aggregation Transformer for Hyperspectral and LiDAR Classification. IEEE Geosci. Remote Sens. Lett. 2024, 22, 5501205.
30. Roy, S.K.; Sukul, A.; Jamali, A.; Haut, J.M.; Ghamisi, P. Cross Hyperspectral and LiDAR Attention Transformer: An Extended Self-Attention for Land Use and Land Cover Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5512815.
31. Shen, Y.; Zhu, S.; Chen, C.; Du, Q.; Xiao, L.; Chen, J.; Pan, D. Efficient Deep Learning of Nonlocal Features for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 6029–6043.
32. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-V4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31.
33. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
34. Schuster, R.; Battrawy, R.; Wasenmuller, O.; Stricker, D. ResFPN: Residual Skip Connections in Multi-Resolution Feature Pyramid Networks for Accurate Dense Pixel Matching. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 180–187.
35. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762.
36. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
Figure 1. The multi-head self-attention mechanism within the Transformer encoder: (a) Multi-Head Attention; (b) Scaled Dot-Product Attention.
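For readers who prefer code to diagrams, the scaled dot-product attention and its multi-head form shown in Figure 1 (following Vaswani et al. [35]) can be summarized with the minimal PyTorch sketch below; the token count, embedding width, and head count are illustrative assumptions rather than CCFormer's actual configuration.

```python
# Minimal sketch of the attention mechanism illustrated in Figure 1.
# Dimensions are illustrative assumptions, not the paper's settings.
import torch
import torch.nn.functional as F


def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, heads, tokens, d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (batch, heads, tokens, tokens)
    weights = F.softmax(scores, dim=-1)              # attention weights
    return weights @ v                               # weighted sum of the values


class MultiHeadAttention(torch.nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.d_k = heads, dim // heads
        self.to_qkv = torch.nn.Linear(dim, dim * 3)  # joint Q, K, V projection
        self.proj = torch.nn.Linear(dim, dim)        # output projection

    def forward(self, x):                            # x: (batch, tokens, dim)
        b, n, _ = x.shape
        qkv = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, self.heads, self.d_k).transpose(1, 2) for t in qkv)
        out = scaled_dot_product_attention(q, k, v)  # (batch, heads, tokens, d_k)
        out = out.transpose(1, 2).reshape(b, n, -1)  # merge the heads again
        return self.proj(out)


x = torch.randn(2, 9, 64)                            # e.g., 9 tokens of width 64
print(MultiHeadAttention()(x).shape)                 # torch.Size([2, 9, 64])
```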
Figure 2. Overall framework of the proposed CCFormer-based HSI-LiDAR joint classification network. Tokens marked with an asterisk (*) denote classification tokens.
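As a rough illustration of the cross-modal cross-attention that Figure 2 places inside the Transformer encoder, the sketch below lets tokens from one modality query tokens from the other, so attention weights are allocated across modalities. The projection sizes, token counts, and the residual form are assumptions made for illustration, not the exact CCFormer layer.

```python
# Hedged sketch of a cross-modal cross-attention step: queries come from one
# modality's tokens, keys and values from the other's. Sizes are assumptions.
import torch
import torch.nn.functional as F


class CrossAttention(torch.nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.q = torch.nn.Linear(dim, dim)
        self.k = torch.nn.Linear(dim, dim)
        self.v = torch.nn.Linear(dim, dim)

    def forward(self, tokens_a, tokens_b):
        # tokens_a: (batch, n_a, dim), e.g., HSI spectral-spatial tokens
        # tokens_b: (batch, n_b, dim), e.g., LiDAR-DSM elevation tokens
        q = self.q(tokens_a)
        k, v = self.k(tokens_b), self.v(tokens_b)
        attn = F.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        return tokens_a + attn @ v   # modality A enhanced by attending to modality B


hsi_tokens = torch.randn(2, 10, 64)    # e.g., 9 patch tokens + 1 classification token
lidar_tokens = torch.randn(2, 10, 64)
print(CrossAttention()(hsi_tokens, lidar_tokens).shape)   # torch.Size([2, 10, 64])
```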
Figure 3. Spectral Pyramid-Based Multiscale Feature Extraction Module.
Figure 4. Spatial Pyramid-Based Multiscale Feature Extraction Module.
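The pyramid modules of Figures 3 and 4 extract shallow features at more than one scale before fusion. The generic sketch below captures only that idea, namely parallel convolutions with different receptive fields whose outputs are concatenated; the kernel sizes, channel counts, and 2-D patch shape are assumptions and do not reproduce the paper's exact modules.

```python
# Generic sketch of pyramid-style multiscale feature extraction in the spirit
# of Figures 3 and 4. Kernel sizes and channel counts are assumptions.
import torch


class MultiScaleBlock(torch.nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch = out_ch // 2
        self.small = torch.nn.Conv2d(in_ch, branch, kernel_size=3, padding=1)  # fine scale
        self.large = torch.nn.Conv2d(in_ch, branch, kernel_size=5, padding=2)  # coarse scale
        self.act = torch.nn.ReLU()

    def forward(self, x):
        # Concatenate the two scales along the channel dimension.
        return self.act(torch.cat([self.small(x), self.large(x)], dim=1))


patch = torch.randn(2, 30, 11, 11)          # e.g., a dimension-reduced HSI patch
print(MultiScaleBlock(30, 64)(patch).shape)  # torch.Size([2, 64, 11, 11])
```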
Figure 5. The MUUFL dataset. (a) Pseudocolor representation of the HSI. (b) DSM of the LiDAR data. (c) Ground-truth map.
Figure 6. The Trento dataset. (a) Pseudocolor representation of the HSI. (b) DSM of the LiDAR data. (c) Ground-truth map.
Figure 7. The Houston2013 dataset. (a) Pseudocolor representation of the HSI. (b) DSM of the LiDAR data. (c) Ground-truth map.
Figure 8. Classification maps obtained by eight methods on the MUUFL dataset; white boxes highlight areas with concentrated errors. (a) Ground truth. (b) HybridSN. (c) SSFTT. (d) DCTN. (e) MFT. (f) MS2CANet. (g) MADNet. (h) PyionNet. (i) Proposed method.
Figure 9. Classification maps obtained by eight methods on the Trento dataset; white boxes highlight areas with concentrated errors. (a) Ground truth. (b) HybridSN. (c) SSFTT. (d) DCTN. (e) MFT. (f) MS2CANet. (g) MADNet. (h) PyionNet. (i) Proposed method.
Figure 10. Classification maps obtained by eight methods on the Houston2013 dataset; white boxes highlight areas with concentrated errors. (a) Ground truth. (b) HybridSN. (c) SSFTT. (d) DCTN. (e) MFT. (f) MS2CANet. (g) MADNet. (h) PyionNet. (i) Proposed method.
Table 1. Summary of the HSI and LiDAR datasets.
Dataset | MUUFL | Trento | Houston2013
Data Type | HSI, DSM | HSI, DSM | HSI, DSM
Samples | 53,687 | 30,214 | 15,029
Number of Classes | 11 | 6 | 15
Pixels | 325 × 220 | 166 × 600 | 349 × 1905
Resolution (m) | 0.54 × 1.0 | 1 | 2.5
Number of Bands | 64 (HSI) | 63 (HSI) | 144 (HSI)
Area and Country | Mississippi, USA | Italy | Texas, USA
Table 2. The labels and quantities of the samples in the MUUFL dataset.
Class | Category | Train Samples | Test Samples | Total Samples
01 | Trees | 20 | 23,226 | 23,246
02 | Mostly Grass | 20 | 4250 | 4270
03 | Mixed Ground Surface | 20 | 6862 | 6882
04 | Dirt and Sand | 20 | 1806 | 1826
05 | Road | 20 | 6667 | 6687
06 | Water | 20 | 446 | 466
07 | Building Shadow | 20 | 2213 | 2233
08 | Building | 20 | 6220 | 6240
09 | Sidewalk | 20 | 1365 | 1385
10 | Yellow Curb | 20 | 163 | 183
11 | Cloth Panels | 20 | 249 | 269
Total Samples | | 220 | 53,467 | 53,687
Table 3. The labels and quantities of the samples in the Trento dataset.
Class | Category | Train Samples | Test Samples | Total Samples
01 | Apple Trees | 20 | 4014 | 4034
02 | Building | 20 | 2883 | 2903
03 | Ground | 20 | 459 | 479
04 | Woods | 20 | 9103 | 9123
05 | Vineyard | 20 | 10,481 | 10,501
06 | Roads | 20 | 3154 | 3174
Total Samples | | 120 | 30,094 | 30,214
Table 4. The labels and quantities of the samples in the Houston2013 dataset.
Class | Category | Train Samples | Test Samples | Total Samples
01 | Healthy Grass | 20 | 1231 | 1251
02 | Stressed Grass | 20 | 1234 | 1254
03 | Synthetic Grass | 20 | 677 | 697
04 | Trees | 20 | 1224 | 1244
05 | Soil | 20 | 1222 | 1242
06 | Water | 20 | 305 | 325
07 | Residential | 20 | 1248 | 1268
08 | Commercial | 20 | 1224 | 1244
09 | Road | 20 | 1232 | 1252
10 | Highway | 20 | 1207 | 1227
11 | Railway | 20 | 1215 | 1235
12 | Parking Lot 1 | 20 | 1213 | 1233
13 | Parking Lot 2 | 20 | 449 | 469
14 | Tennis Court | 20 | 408 | 428
15 | Running Track | 20 | 640 | 660
Total Samples | | 300 | 14,729 | 15,029
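Tables 2–4 use a fixed budget of 20 labeled pixels per class for training and keep the remainder for testing. A simple way to reproduce that kind of split is sketched below; the array and function names are illustrative and are not taken from the authors' code.

```python
# Sketch of the fixed per-class split reported in Tables 2-4: 20 labeled
# pixels per class for training, the rest for testing.
import numpy as np


def split_per_class(labels, n_train=20, seed=0):
    """labels: 1-D array of class ids for all pixels (0 = unlabeled)."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        if c == 0:                          # skip unlabeled pixels
            continue
        idx = np.flatnonzero(labels == c)   # all pixels of class c
        rng.shuffle(idx)
        train_idx.extend(idx[:n_train])     # first n_train go to training
        test_idx.extend(idx[n_train:])      # the remainder go to testing
    return np.array(train_idx), np.array(test_idx)


labels = np.random.randint(0, 12, size=53687)    # MUUFL-sized toy example, 11 classes
train_idx, test_idx = split_per_class(labels)
print(len(train_idx))                            # 20 samples x 11 classes = 220
```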
Table 5. Quantitative comparison results of eight methods on the MUUFL dataset.
Class No. | HybridSN | SSFTT | DCTN | MFT | MS2CANet | MADNet | PyionNet | Proposed
1 | 88.44 | 88.89 | 84.73 | 82.51 | 83.12 | 90.17 | 87.46 | 90.15
2 | 71.26 | 72.85 | 74.61 | 70.62 | 74.33 | 80.36 | 80.11 | 83.97
3 | 49.78 | 62.81 | 67.47 | 68.46 | 81.53 | 69.94 | 80.80 | 85.03
4 | 73.20 | 90.86 | 78.17 | 75.67 | 80.66 | 89.80 | 87.51 | 89.20
5 | 34.80 | 63.85 | 68.44 | 77.35 | 82.56 | 88.09 | 83.40 | 93.21
6 | 95.51 | 97.33 | 94.21 | 97.15 | 96.99 | 98.90 | 95.32 | 95.71
7 | 62.00 | 83.33 | 75.55 | 80.10 | 83.12 | 86.96 | 82.58 | 87.93
8 | 84.22 | 78.99 | 88.36 | 90.33 | 89.88 | 90.08 | 88.33 | 91.51
9 | 40.90 | 43.59 | 66.19 | 69.69 | 74.68 | 70.28 | 76.90 | 75.73
10 | 60.11 | 77.91 | 79.77 | 71.36 | 83.37 | 89.28 | 88.91 | 85.44
11 | 90.47 | 89.56 | 91.09 | 90.79 | 89.44 | 90.46 | 89.99 | 87.38
OA (%) | 71.50 ± 2.51 | 78.77 ± 1.77 | 80.11 ± 1.89 | 81.46 ± 1.99 | 82.98 ± 1.77 | 85.65 ± 1.77 | 86.08 ± 1.88 | 88.35 ± 1.87
AA (%) | 68.45 ± 3.08 | 77.45 ± 2.33 | 80.71 ± 2.62 | 81.10 ± 2.70 | 83.61 ± 1.60 | 85.85 ± 1.24 | 85.57 ± 2.01 | 87.03 ± 1.13
Kappa × 100 | 65.96 ± 3.05 | 72.64 ± 2.59 | 81.20 ± 3.21 | 81.00 ± 2.89 | 82.90 ± 1.86 | 86.10 ± 1.38 | 85.64 ± 1.98 | 87.43 ± 1.51
HybridSN, SSFTT, and DCTN take only the HSI as input; MFT, MS2CANet, MADNet, PyionNet, and the proposed method take both the HSI and the LiDAR-DSM. The bolded value indicates the optimal value.
Table 6. Quantitative comparison results of eight methods on the Trento dataset.
Class No. | HybridSN | SSFTT | DCTN | MFT | MS2CANet | MADNet | PyionNet | Proposed
1 | 64.36 | 97.79 | 95.88 | 95.09 | 98.18 | 98.00 | 95.47 | 99.37
2 | 83.21 | 77.26 | 87.68 | 99.07 | 95.54 | 97.09 | 97.07 | 97.68
3 | 95.45 | 96.59 | 98.67 | 89.84 | 95.51 | 96.39 | 100 | 97.36
4 | 97.71 | 99.95 | 99.76 | 98.08 | 100 | 99.99 | 98.11 | 100
5 | 89.09 | 98.99 | 95.80 | 98.16 | 97.50 | 98.99 | 98.07 | 99.95
6 | 67.23 | 89.97 | 90.21 | 89.99 | 91.40 | 90.77 | 94.65 | 97.20
OA (%) | 85.51 ± 2.61 | 94.11 ± 1.87 | 95.41 ± 2.19 | 95.01 ± 1.97 | 96.35 ± 1.88 | 96.62 ± 1.75 | 98.09 ± 1.17 | 99.02 ± 1.39
AA (%) | 82.85 ± 3.08 | 93.67 ± 2.14 | 94.89 ± 1.94 | 95.10 ± 2.22 | 97.36 ± 1.59 | 96.87 ± 2.49 | 97.23 ± 0.70 | 98.59 ± 1.50
Kappa × 100 | 82.11 ± 2.93 | 94.07 ± 2.19 | 94.20 ± 1.75 | 95.44 ± 1.69 | 96.03 ± 1.71 | 97.09 ± 1.98 | 97.65 ± 1.25 | 99.15 ± 1.37
HybridSN, SSFTT, and DCTN take only the HSI as input; MFT, MS2CANet, MADNet, PyionNet, and the proposed method take both the HSI and the LiDAR-DSM. The bolded value indicates the optimal value.
Table 7. Quantitative comparison results of eight methods on the Houston2013 dataset.
Class No. | HybridSN | SSFTT | DCTN | MFT | MS2CANet | MADNet | PyionNet | Proposed
1 | 48.23 | 91.78 | 80.55 | 76.67 | 80.39 | 77.59 | 89.31 | 83.67
2 | 84.68 | 78.42 | 91.23 | 93.02 | 91.47 | 98.90 | 93.22 | 90.78
3 | 75.47 | 80.57 | 92.34 | 95.44 | 99.89 | 90.55 | 99.68 | 98.90
4 | 69.28 | 90.98 | 89.16 | 95.33 | 97.56 | 95.15 | 96.11 | 96.00
5 | 85.14 | 87.07 | 92.11 | 94.60 | 98.57 | 93.66 | 95.47 | 94.67
6 | 87.39 | 90.92 | 91.58 | 93.47 | 98.01 | 94.22 | 98.25 | 98.06
7 | 65.79 | 93.02 | 90.26 | 86.29 | 80.73 | 89.61 | 84.12 | 94.34
8 | 91.33 | 92.36 | 87.60 | 84.39 | 90.21 | 89.69 | 87.95 | 94.23
9 | 71.87 | 93.02 | 86.16 | 84.77 | 85.71 | 95.33 | 75.66 | 90.56
10 | 86.72 | 82.60 | 79.12 | 65.99 | 67.23 | 80.81 | 89.17 | 84.34
11 | 88.90 | 77.68 | 88.09 | 92.19 | 94.16 | 94.99 | 90.21 | 90.69
12 | 69.02 | 70.36 | 87.34 | 87.48 | 93.19 | 88.96 | 90.68 | 93.06
13 | 80.14 | 91.82 | 91.56 | 84.99 | 78.87 | 85.86 | 94.73 | 94.33
14 | 90.70 | 91.71 | 92.40 | 93.18 | 90.19 | 94.76 | 96.65 | 97.30
15 | 65.77 | 89.87 | 85.23 | 96.77 | 90.25 | 94.22 | 93.21 | 90.11
OA (%) | 74.32 ± 1.51 | 83.05 ± 1.77 | 87.99 ± 2.27 | 88.12 ± 2.07 | 88.13 ± 1.78 | 89.60 ± 1.95 | 90.01 ± 1.27 | 91.85 ± 1.15
AA (%) | 75.24 ± 1.48 | 84.40 ± 2.84 | 88.29 ± 2.08 | 88.30 ± 2.48 | 89.10 ± 1.85 | 90.96 ± 2.20 | 91.21 ± 1.33 | 92.33 ± 0.93
Kappa × 100 | 66.89 ± 1.25 | 81.79 ± 1.49 | 88.20 ± 1.88 | 88.07 ± 1.88 | 88.02 ± 1.77 | 90.11 ± 1.59 | 90.25 ± 1.78 | 91.63 ± 1.28
HybridSN, SSFTT, and DCTN take only the HSI as input; MFT, MS2CANet, MADNet, PyionNet, and the proposed method take both the HSI and the LiDAR-DSM. The bolded value indicates the optimal value.
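The OA, AA, and Kappa × 100 figures reported in Tables 5–7 (and in Tables 8 and 9 below) follow the standard definitions computed from a confusion matrix; a compact sketch of those computations is given here for reference, and it is a generic implementation rather than the authors' code.

```python
# Overall accuracy (OA), average per-class accuracy (AA), and Cohen's kappa
# x 100 from a confusion matrix, as conventionally reported in Tables 5-9.
import numpy as np


def classification_scores(conf):
    """conf[i, j]: number of class-i test samples predicted as class j."""
    total = conf.sum()
    oa = np.trace(conf) / total                       # fraction of correct predictions
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))    # mean of per-class accuracies
    expected = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total ** 2
    kappa = (oa - expected) / (1 - expected)          # chance-corrected agreement
    return 100 * oa, 100 * aa, 100 * kappa


conf = np.array([[95, 5],
                 [10, 90]])
print(classification_scores(conf))   # toy 2-class case: OA 92.5, AA 92.5, kappa 85.0
```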
Table 8. Experimental comparison results under varying input conditions.
Input Case | MUUFL (OA (%) / AA (%) / Kappa × 100) | Trento (OA (%) / AA (%) / Kappa × 100) | Houston2013 (OA (%) / AA (%) / Kappa × 100)
HSI | 81.38 / 80.70 / 80.98 | 93.56 / 94.25 / 92.55 | 85.58 / 86.29 / 85.64
LiDAR | 57.47 / 55.76 / 54.22 | 64.77 / 65.32 / 63.16 | 69.98 / 71.43 / 68.86
HSI + LiDAR | 88.35 / 87.03 / 87.43 | 99.02 / 98.59 / 99.15 | 91.85 / 92.33 / 91.63
The bolded value indicates the optimal value.
Table 9. Comparative results of the ablation experiments.
Exp | MUUFL (OA (%) / AA (%) / Kappa × 100) | Trento (OA (%) / AA (%) / Kappa × 100) | Houston2013 (OA (%) / AA (%) / Kappa × 100)
Exp1 | 79.44 / 79.56 / 78.87 | 87.86 / 88.98 / 87.84 | 82.18 / 83.36 / 81.47
Exp2 | 49.35 / 50.85 / 48.95 | 55.33 / 58.47 / 56.66 | 51.66 / 51.69 / 50.11
Full | 88.35 / 87.03 / 87.43 | 99.02 / 98.59 / 99.15 | 91.85 / 92.33 / 91.63
The bolded value indicates the optimal value.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
