Article

Spatial-Spectral Linear Extrapolation for Cross-Scene Hyperspectral Image Classification

School of Electronic Information Engineering, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(11), 1816; https://doi.org/10.3390/rs17111816
Submission received: 17 February 2025 / Revised: 12 May 2025 / Accepted: 19 May 2025 / Published: 22 May 2025
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

In realistic hyperspectral image (HSI) cross-scene classification tasks, it is rarely possible to obtain target domain samples during the training phase. Therefore, a model needs to be trained on one or more source domains (SD) and achieve robust domain generalization (DG) performance on an unknown target domain (TD). Popular DG strategies constrain the model’s predictive behavior in a synthetic space through deep, nonlinear source expansion, usually adopting an HSI generation model to enrich the diversity of training samples. However, recent studies have shown that the activation functions of neurons in a network exhibit asymmetry across categories, so the network learns task-irrelevant features while attempting to learn task-related ones (called “feature contamination”). For example, even if some intrinsic properties of HSIs (lighting conditions, atmospheric environment, etc.) are irrelevant to the label, the neural network still tends to learn them, producing features that tie the classification to these spurious components. To alleviate this problem, this study replaces the common nonlinear generative network with a specific linear projection transformation, reducing the number of nonlinearly activated neurons during training and thereby the learning of contaminated features. Specifically, this study proposes a dimensionally decoupled spatial-spectral linear extrapolation (SSLE) strategy for sample augmentation. Inspired by the weakening effect of water vapor absorption and Rayleigh scattering on band reflectivity, we simulate a common spectral drift based on Markov random fields to achieve linear spectral augmentation. Further considering the common spatial co-occurrence of classes within patch images, we design spatial weights combined with the label determinism of the center pixel to construct a linear spatial enhancement. Finally, to ensure that the discriminator’s high-level features are interpreted consistently across the sample space, we use inter-class contrastive learning to align the back-end feature representation. Extensive experiments were conducted on four datasets: an ablation study demonstrated the effectiveness of the proposed modules, and a comparative analysis with advanced DG algorithms showed the superiority of our model under various spectral and category shifts. In particular, on the Houston18/Shanghai datasets, its overall accuracy was 0.51%/0.83% higher than the best results of the other methods, and its Kappa coefficient was 0.78%/2.07% higher, respectively.
Keywords: hyperspectral image classification; domain generalization; data augmentation; contrastive learning
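
To make the augmentation idea concrete, the following minimal NumPy sketch illustrates the two linear operations described in the abstract on a toy patch: a smooth band-wise gain that mimics spectral drift, and a center-anchored spatial extrapolation that leaves the labeled center pixel unchanged. This is an illustrative sketch only; the drift model, the weighting scheme, and all function and parameter names (spectral_drift, spatial_extrapolation, strength, alpha) are assumptions made for this example and do not reproduce the paper's SSLE implementation.

# Hedged illustration of linear spatial-spectral augmentation for an HSI patch.
# NOT the authors' SSLE code; the drift model, weights, and names are assumptions.
import numpy as np

def spectral_drift(patch, strength=0.1, rng=None):
    """Apply a smooth, band-wise linear gain to mimic spectral drift
    (e.g., slowly varying atmospheric attenuation across wavelengths).
    patch: (H, W, B) reflectance cube; returns an augmented copy.
    """
    rng = rng or np.random.default_rng()
    bands = patch.shape[-1]
    # Random walk over bands, normalized and smoothed, gives a slowly varying curve.
    walk = np.cumsum(rng.normal(0.0, 1.0, size=bands))
    walk = (walk - walk.mean()) / (walk.std() + 1e-8)
    smooth = np.convolve(walk, np.ones(7) / 7.0, mode="same")
    gain = 1.0 + strength * smooth          # per-band multiplicative factor
    return patch * gain                     # broadcasts over (H, W, B)

def spatial_extrapolation(patch, alpha=0.3):
    """Linearly extrapolate each pixel away from the center pixel, weighted by
    spatial distance, so the labeled center pixel is preserved exactly.
    """
    h, w, _ = patch.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    weight = (dist / (dist.max() + 1e-8))[..., None]   # 0 at center, 1 at edge
    center = patch[cy, cx][None, None, :]
    return patch + alpha * weight * (patch - center)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.random((9, 9, 48)).astype(np.float32)   # toy 9x9 patch, 48 bands
    aug = spatial_extrapolation(spectral_drift(cube, rng=rng))
    print(aug.shape)  # (9, 9, 48)

Both operations are purely linear in the pixel values, in line with the paper's motivation of avoiding additional nonlinearly activated neurons during augmentation.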

Share and Cite

MDPI and ACS Style

Lin, L.; Zhao, H.; Gao, S.; Wang, J.; Zhang, Z. Spatial-Spectral Linear Extrapolation for Cross-Scene Hyperspectral Image Classification. Remote Sens. 2025, 17, 1816. https://doi.org/10.3390/rs17111816

