HL-Mamba: A High–Low Frequency Interaction Mamba Network for Hyperspectral Image Classification
Abstract
1. Introduction
- 1.
- A high–low frequency interaction Mamba network is proposed, which innovatively integrates frequency-domain feature decoupling with the efficient long-range modeling capability of Mamba to tackle the core issues of feature entanglement and computational redundancy in HSI classification.
- 2.
- A dynamic cross-frequency interaction module is designed, where low-frequency structural features guide aggregation of discriminative high-frequency details and high-frequency textures refine global structural representations, yielding more discriminative spectral–spatial features for HSI classification.
- 3.
- A frequency alignment loss is designed to constrain the consistency and complementarity of distribution between the high- and low-frequency components, further enhancing the discriminability of classification.
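To make the first contribution concrete, the sketch below shows one generic way a high–low frequency split of a spatial patch can be obtained with the DFT. The function name, circular mask shape, and cutoff ratio are illustrative assumptions, not the paper's exact HLFDMM design.

```python
import numpy as np

def freq_decompose(patch, radius_ratio=0.25):
    """Split a 2-D patch into low- and high-frequency parts via the DFT.

    A centred circular low-pass mask keeps frequencies within
    `radius_ratio` of the half spectrum width; the high-frequency part
    is the residual, so low + high reconstructs the input exactly.
    """
    f = np.fft.fftshift(np.fft.fft2(patch))          # centre the spectrum
    h, w = patch.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= radius_ratio * min(h, w)          # circular low-pass mask
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = patch - low                               # residual = high frequency
    return low, high
```

The low-frequency component then carries the global structure (suited to efficient long-range Mamba modeling), while the high-frequency residual carries edges and textures.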
2. Materials and Methods
2.1. Dataset Description
- 1.
- Indian Pines dataset [37]: Captured over northwestern Indiana, USA, this dataset has a spatial resolution of 20 m and 220 spectral bands. The image has a spatial size of 145 × 145 pixels and contains 16 land-cover classes dominated by agricultural crops (e.g., corn, soybean), forests, and low-density residential areas. The strong spectral similarity among corn, soybean, and grass species poses a significant challenge for HSI classification.
- 2.
- Pavia University dataset [38]: Acquired by the ROSIS sensor over Pavia, Italy, this dataset features a high spatial resolution of 1.3 m and 103 spectral bands. With a spatial size of 610 × 340 pixels and 9 urban surface classes (e.g., asphalt, meadows, bare soil, self-blocking bricks), it is well suited for evaluating the fine-grained spatial discrimination capability of models owing to its rich spatial details.
- 3.
- Houston dataset [39]: Released by the IEEE GRSS Data Fusion Contest 2013, this dataset includes 349 × 1905 pixels and 144 spectral bands, covering 15 complex urban and suburban classes (e.g., roads, parking lots, vegetation, buildings). High intra-class variability and mixed pixels caused by shadows and structural occlusions make it a challenging benchmark for urban HSI classification.
- 4.
- WHU-Hi-HanChuan dataset [40]: A large-scale dataset captured by a UAV-borne hyperspectral sensor operated by Wuhan University over Hanchuan, China. It covers 1100 × 5100 pixels with 270 spectral bands and a fine spatial resolution of 1 m. The 16 land-cover categories include agricultural crops (strawberry, soybean, sorghum), vegetation, water bodies, and built-up areas. The vast scene size and high spectral diversity enable a rigorous test of the model’s generalization to real-world scenarios.
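The per-class training/test counts listed in Table 1 correspond to a fixed-count stratified split of the labeled pixels. A minimal sketch of such a split is shown below; the function name and interface are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def split_per_class(labels, train_counts, seed=0):
    """Randomly draw a fixed number of training pixels per class.

    `labels` is a 1-D array of class ids for labeled pixels;
    `train_counts` maps class id -> number of training samples,
    mirroring the per-class "Train" columns in Table 1. All remaining
    labeled pixels of each class go to the test set.
    """
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for cls, n_train in train_counts.items():
        idx = np.flatnonzero(labels == cls)   # pixels of this class
        rng.shuffle(idx)                      # random, reproducible order
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.asarray(train_idx), np.asarray(test_idx)
```

In practice the split is usually repeated over several random seeds and the reported accuracies averaged, which is consistent with the small fixed train counts (e.g., 1 Alfalfa pixel) in Table 1.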
2.2. Data Processing Methods
3. Proposed Method
3.1. Mamba: State Space Model for Sequence Modeling
3.2. Overall Framework
3.3. High–Low Frequency Decomposition Mamba Module
3.4. Cross-Frequency Interaction Module
3.5. Frequency Alignment Loss
4. Results
4.1. Experimental Setup
4.2. Evaluation Metrics
4.3. Performance Comparison
4.3.1. Quantitative Results and Analysis
- 1.
- Indian Pines Dataset: As shown in Table 2, HL-Mamba achieves the highest OA (94.07%), AA (90.40%), and Kappa (93.25%) among all compared methods. Compared with the second-best method DBDA (OA = 93.44%), HL-Mamba improves OA by 0.63%. The key reason is that the high–low frequency decomposition effectively separates the global structure and edge details of crops, and the cross-frequency interaction module enhances the discriminability of spectrally similar classes (e.g., Alfalfa and Oats), which is difficult for comparison methods to achieve. Notably, HL-Mamba shows significant advantages in class-level accuracy for challenging classes with high spectral similarity. For example, the accuracy of Alfalfa (a minority class) reaches 53.78%, which is 2.45% higher than DBDA (51.33%) and far superior to CDCNN (0.89%) and FDSSC (24.00%). For Oats, HL-Mamba achieves 79.47% accuracy, outperforming M3DCNN (78.42%) and DBDA (73.16%). This indicates that the frequency-aware feature learning of HL-Mamba effectively enhances the discrimination of spectrally similar classes.
- 2.
- Pavia University Dataset: Table 3 presents the quantitative results of HL-Mamba and the comparison methods on the Pavia University dataset. This dataset is characterized by high spatial resolution and large sample sizes for most classes, demanding that models capture fine-grained spatial details when processing large-scale data. HL-Mamba achieves the highest AA of 90.66% and Kappa of 91.79%. At the class level, HL-Mamba performs strongly on classes with similar spatial textures and spectral properties: for the Asphalt and Bitumen classes, which are easily confused owing to their similar gray-scale textures in spatial images, it achieves accuracies of 90.82% and 95.18%, respectively. For the Shadows class, a typical low-frequency structural class with large intra-class variability, HL-Mamba reaches 82.70%, far higher than M3DCNN’s 63.55%. These gains reflect the HLFDMM’s ability to capture fine-grained spatial details via the high-frequency branch and global structural consistency via the low-frequency branch: the low-frequency branch models the global context of shadow regions, while the cross-frequency interaction module refines the edge details of urban structures, effectively mitigating misclassification caused by spatial similarity.
- 3.
- Houston Dataset: The Houston dataset is a challenging urban hyperspectral benchmark with complex mixed pixels, shadows, and occlusions, requiring models to extract more robust spectral–spatial features. As shown in Table 4, HL-Mamba achieves the highest OA of 87.32%, AA of 88.18%, and Kappa of 86.29%. These challenges are addressed by the synergy of the CFIM and FAL. For the Commercial class, which is often misclassified due to the shadows of high-rise buildings, HL-Mamba’s accuracy of 63.68% is 3.45% higher than DBDA’s 60.23%; for the Parking Lot 2 class, which is small in scale and easily confused with roads, accuracy reaches 86.90%, outperforming the second-best M3DCNN (86.55%). The FAL enhances the invariance of features to illumination variations, making the model less sensitive to shadow-induced spectral distortions, while the CFIM fuses structural (low-frequency) and texture (high-frequency) features, making the model robust to urban interference factors.
- 4.
- WHU-Hi-HanChuan Dataset: As a large-scale agricultural and suburban hyperspectral dataset, WHU-Hi-HanChuan features a vast scene size and high spectral diversity, providing a demanding test of model generalization. Table 5 presents the quantitative results, where HL-Mamba achieves the highest OA of 95.28%, AA of 89.81%, and Kappa of 94.47%, outperforming the second-ranked M3DCNN (OA = 94.26%, Kappa = 93.28%). For crop classes with overlapping spectral bands, such as Soybean and Sorghum, HL-Mamba achieves 92.70% and 97.42% accuracy, respectively, benefiting from the HLFDMM’s effective decomposition of spectral–spatial features. For the Water-spinach class, a minority crop with sparse samples, accuracy reaches 85.78%, 15.03% higher than DBDA (70.75%), as the cross-frequency interaction module aggregates discriminative details from high-frequency features, compensating for the scarcity of training samples. These results demonstrate that frequency-aware feature learning effectively extracts discriminative spectral features from the overlapping spectral bands of different crops.
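For reference, the OA, AA, and Kappa scores reported in Tables 2–5 follow the standard definitions computed from a confusion matrix over the labeled test pixels; a minimal sketch:

```python
import numpy as np

def hsi_metrics(y_true, y_pred, n_classes):
    """Overall accuracy, average accuracy, and Cohen's kappa.

    Assumes every class appears at least once in `y_true`.
    """
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)               # confusion matrix
    total = cm.sum()
    oa = np.trace(cm) / total                        # overall accuracy
    per_class = np.diag(cm) / cm.sum(axis=1)         # per-class recall
    aa = per_class.mean()                            # average accuracy
    # expected chance agreement from the row/column marginals
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```

OA weights classes by their test-set sizes, whereas AA treats all classes equally, which is why minority classes such as Alfalfa and Water-spinach affect AA far more strongly than OA.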
4.3.2. Visualization Results and Analysis
- 1.
- Indian Pines Dataset: Qualitatively, Figure 5 visualizes the classification maps of HL-Mamba and the comparison methods on the Indian Pines dataset, which is dominated by agricultural land cover with similar crop types. The classification map of HL-Mamba exhibits sharp edge details across the entire scene. In contrast, CDCNN and FDSSC suffer from severe salt-and-pepper noise in minority-class regions, and their results show blurred boundaries between Oats and the surrounding grassland classes. For the Grass-pasture-mowed class, a sparse category prone to misclassification, HL-Mamba clearly demarcates its distribution with minimal pixel confusion. This visual superiority stems from the CFIM and the frequency-aware feature learning of HL-Mamba: the high-frequency branch captures fine texture differences between similar crops, while the low-frequency branch anchors the global structural distribution of sparse classes.
- 2.
- Pavia University: Qualitatively, the classification map of HL-Mamba (Figure 6) shows clear edge details and minimal misclassification noise compared to the other methods. For instance, in building regions, HL-Mamba accurately distinguishes adjacent classes with few obvious mixed pixels, while CDCNN and FDSSC exhibit slight misclassification along these edges. Additionally, HL-Mamba effectively suppresses salt-and-pepper noise in the Gravel class, which is attributed to the CFIM’s integration of global and local features, enhancing the spatial consistency of the classification results.
- 3.
- Houston Dataset: Figure 7 presents the qualitative classification results on the Houston dataset, a complex urban scene with mixed pixels. The classification map of HL-Mamba accurately delineates the boundaries of distinct urban functional areas, with no obvious mixed pixels in the transition zones. For shadow-affected regions, HL-Mamba maintains consistent classification accuracy without spectral distortion-induced misclassification, while DBDA and M3DCNN show scattered misclassified pixels in these shadowed areas. Notably, HL-Mamba precisely identifies small-scale high-texture objects embedded in complex urban landscapes, with clear and complete details of their shapes. Additionally, for the Healthy grass and Stressed grass classes with high intra-class variability, HL-Mamba’s classification map shows smooth spatial transitions between the two grass types, reflecting the effective fusion of global structure and local details via the CFIM.
- 4.
- WHU-Hi-HanChuan Dataset: Qualitatively, Figure 8 shows that HL-Mamba’s classification map accurately captures the large-scale distribution of agricultural crops and urban areas, with no obvious misclassification in the transition zones between Strawberry fields and Soybean fields. Compared to CDCNN and FDSSC, HL-Mamba’s classification map has better spatial continuity, especially in the Water class, where the boundaries of water bodies are clearly delineated without being affected by surrounding vegetation shadows. This confirms that HL-Mamba is well-suited for large-scale HSI classification tasks.
4.3.3. Computational Complexity
5. Discussion
5.1. Sensitivity Analysis of Hyperparameter
5.2. Module Ablation
5.3. Limitations
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| HSI | Hyperspectral image |
| CNN | Convolutional neural network |
| PCA | Principal component analysis |
| RNN | Recurrent neural network |
| HL-Mamba | High–low frequency interaction Mamba network |
| DFT | Discrete Fourier transform |
References
- Yang, X.; Cao, W.; Lu, Y.; Zhou, Y. Hyperspectral image transformer classification networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5528715. [Google Scholar] [CrossRef]
- Ahmad, M.; Ghous, U.; Usama, M.; Mazzara, M. WaveFormer: Spectral–spatial wavelet transformer for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2024, 21, 5502405. [Google Scholar] [CrossRef]
- Rajabi, R.; Zehtabian, A.; Singh, K.D.; Tabatabaeenejad, A.; Ghamisi, P.; Homayouni, S. Hyperspectral imaging in environmental monitoring and analysis. Front. Environ. Sci. 2024, 11, 1353447. [Google Scholar] [CrossRef]
- Avola, G.; Matese, A.; Riggi, E. An overview of the special issue on “precision agriculture using hyperspectral images”. Remote Sens. 2023, 15, 1917. [Google Scholar] [CrossRef]
- Booysen, R.; Lorenz, S.; Thiele, S.T.; Fuchsloch, W.C.; Marais, T.; Nex, P.A.M.; Gloaguen, R. Accurate hyperspectral imaging of mineralised outcrops: An example from lithium-bearing pegmatites at Uis, Namibia. Remote Sens. Environ. 2022, 269, 112790. [Google Scholar] [CrossRef]
- Yuan, J.; Wang, S.; Wu, C.; Xu, Y. Fine-grained classification of urban functional zones and landscape pattern analysis using hyperspectral satellite imagery: A case study of Wuhan. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 3972–3991. [Google Scholar] [CrossRef]
- Mei, S.; Song, C.; Ma, M.; Xu, F. Hyperspectral image classification using group-aware hierarchical transformer. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5539014. [Google Scholar] [CrossRef]
- He, L.; Li, J.; Liu, C.; Li, S. Recent advances on spectral–spatial hyperspectral image classification: An overview and new guidelines. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1579–1597. [Google Scholar] [CrossRef]
- Harikiran, J.J.H. Hyperspectral image classification using support vector machines. IAES Int. J. Artif. Intell. 2020, 9, 684. [Google Scholar] [CrossRef]
- Florimbi, G.; Fabelo, H.; Torti, E.; Lazcano, R.; Madroñal, D.; Ortega, S.; Salvador, R.; Leporati, F.; Danese, G.; Báez-Quevedo, A. Accelerating the K-nearest neighbors filtering algorithm to optimize the real-time classification of human brain tumor in hyperspectral images. Sensors 2018, 18, 2314. [Google Scholar] [CrossRef]
- Zhang, Y.; Cao, G.; Li, X.; Wang, B.; Fu, P. Active semi-supervised random forest for hyperspectral image classification. Remote Sens. 2019, 11, 2974. [Google Scholar] [CrossRef]
- Imani, M.; Ghassemian, H. Edge patch image-based morphological profiles for classification of Multispectral and hyperspectral data. IET Image Process. 2017, 11, 164–172. [Google Scholar] [CrossRef]
- Uddin, M.P.; Mamun, M.A.; Afjal, M.I.; Hossain, M.A. Information-theoretic feature selection with segmentation-based folded principal component analysis (PCA) for hyperspectral image classification. Int. J. Remote Sens. 2021, 42, 286–321. [Google Scholar] [CrossRef]
- Liu, Y.; Hu, J.; Kang, X.; Luo, J.; Fan, S. Interactformer: Interactive transformer and CNN for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5531715. [Google Scholar] [CrossRef]
- Liu, Y.; He, Q.; Li, J.; Liu, X.; Fiorio, P.R.; Nakai, É.S.; Yang, B. Towards resolution-arbitrary remote sensing change detection with Spatial-frequency dual domain learning. ISPRS J. Photogramm. Remote Sens. 2026, 231, 137–150. [Google Scholar] [CrossRef]
- Slavkovikj, V.; Verstockt, S.; De Neve, W.; Van Hoecke, S.; Van de Walle, R. Hyperspectral image classification with convolutional neural networks. In Proceedings of the 23rd ACM international conference on Multimedia, Brisbane, Australia, 26–30 October 2015; pp. 1159–1162. [Google Scholar]
- Ghaderizadeh, S.; Abbasi-Moghadam, D.; Sharifi, A.; Zhao, N.; Tariq, A. Hyperspectral image classification using a hybrid 3D-2D convolutional neural networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7570–7588. [Google Scholar] [CrossRef]
- Yang, J.; Wu, C.; Du, B.; Zhang, L. Enhanced multiscale feature fusion network for HSI classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10328–10347. [Google Scholar] [CrossRef]
- Zhou, W.; Kamata, S.-I.; Luo, Z.; Wang, H. Multiscanning strategy-based recurrent neural network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5521018. [Google Scholar] [CrossRef]
- Qi, W.; Zhang, X.; Wang, N.; Zhang, M.; Cen, Y. A spectral-spatial cascaded 3D convolutional neural network with a convolutional long short-term memory network for hyperspectral image classification. Remote Sens. 2019, 11, 2363. [Google Scholar] [CrossRef]
- Mou, L.; Ghamisi, P.; Zhu, X.X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef]
- Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–spatial feature tokenization transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522214. [Google Scholar] [CrossRef]
- Peng, Y.; Zhang, Y.; Tu, B.; Li, Q.; Li, W. Spatial–spectral transformer with cross-attention for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5537415. [Google Scholar] [CrossRef]
- Yang, J.; Du, B.; Zhang, L. From center to surrounding: An interactive learning framework for hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2023, 197, 145–166. [Google Scholar] [CrossRef]
- Dang, L.; Pang, P.; Lee, J. Depth-wise separable convolution neural network with residual connection for hyperspectral image classification. Remote Sens. 2020, 12, 3408. [Google Scholar] [CrossRef]
- Tan, Y.; Li, M.; Yuan, L.; Shi, C.; Luo, Y.; Wen, G. Hyperspectral image classification with embedded linear vision transformer. Earth Sci. Inform. 2025, 18, 69. [Google Scholar] [CrossRef]
- Ma, X.; Wang, W.; Li, W.; Wang, J.; Ren, G.; Ren, P.; Liu, B. An ultralightweight hybrid CNN based on redundancy removal for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5506212. [Google Scholar] [CrossRef]
- Wang, Y.; Liu, L.; Xiao, J.; Yu, D.; Tao, Y.; Zhang, W. MambaHSI+: Multidirectional State Propagation for Efficient Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2025, 63, 4411414. [Google Scholar] [CrossRef]
- Liu, R.; Liang, J.; Yang, J.; Hu, M.; He, J.; Zhu, P.; Zhang, L. DHSNet: Dual Classification Head Self-Training Network for Cross-Scene Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5534515. [Google Scholar] [CrossRef]
- Qiao, X.; Huang, W. A dual frequency transformer network for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 10344–10358. [Google Scholar] [CrossRef]
- Zhuang, P.; Zhang, X.; Wang, H.; Zhang, T.; Liu, L.; Li, J. FAHM: Frequency-aware hierarchical mamba for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 6299–6313. [Google Scholar] [CrossRef]
- Anand, R.; Veni, S.; Aravinth, J. Robust classification technique for hyperspectral images based on 3D-discrete wavelet transform. Remote Sens. 2021, 13, 1255. [Google Scholar] [CrossRef]
- Wang, K.; Gu, X.F.; Yu, T.; Meng, Q.Y.; Zhao, L.M.; Feng, L. Classification of hyperspectral remote sensing images using frequency spectrum similarity. Sci. China Technol. Sci. 2013, 56, 980–988. [Google Scholar] [CrossRef]
- Gong, J.; Li, F.; Wang, J.; Yang, Z.; Ding, X. A split-frequency filter network for hyperspectral image classification. Remote Sens. 2023, 15, 3900. [Google Scholar] [CrossRef]
- Hong, D.; Wu, X.; Ghamisi, P.; Chanussot, J.; Yokoya, N.; Zhu, X.X. Invariant attribute profiles: A spatial-frequency joint feature extractor for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3791–3808. [Google Scholar] [CrossRef]
- Yang, J.; Du, B.; Wang, D.; Zhang, L. ITER: Image-to-pixel representation for weakly supervised HSI classification. IEEE Trans. Image Process. 2024, 33, 257–272. [Google Scholar] [CrossRef]
- Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962. [Google Scholar]
- Liu, X.; Hu, Q.; Cai, Y.; Cai, Z. Extreme learning machine-based ensemble transfer learning for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3892–3902. [Google Scholar] [CrossRef]
- Chen, H.; Miao, F.; Chen, Y.; Xiong, Y.; Chen, T. A hyperspectral image classification method using multifeature vectors and optimized KELM. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2781–2795. [Google Scholar] [CrossRef]
- Hu, X.; Zhong, Y.; Luo, C.; Wang, X. WHU-Hi: UAV-borne hyperspectral with high spatial resolution (H2) benchmark datasets for hyperspectral image classification. arXiv 2020, arXiv:2012.13920. [Google Scholar]
- Maćkiewicz, A.; Ratajczak, W. Principal components analysis (PCA). Comput. Geosci. 1993, 19, 303–342. [Google Scholar] [CrossRef]
- Lee, H.; Kwon, H. Going deeper with contextual CNN for hyperspectral image classification. IEEE Trans. Image Process. 2017, 26, 4843–4855. [Google Scholar] [CrossRef]
- Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A fast dense spectral–spatial convolution network framework for hyperspectral images classification. Remote Sens. 2018, 10, 1068. [Google Scholar] [CrossRef]
- Hu, W.; Shi, W.; Lan, C.; Li, Y.; He, L. Hyperspectral Image Classification with Multi-Path 3D-CNN and Coordinated Hierarchical Attention. Remote Sens. 2025, 17, 4035. [Google Scholar] [CrossRef]
- Li, G.; Ye, M. DGCNet: An efficient 3d-densenet based on dynamic group convolution for hyperspectral remote sensing image classification. Spectrosc. Lett. 2025, 1–14. [Google Scholar] [CrossRef]
- Li, G.; Ye, M. Spatial-spectral hyperspectral classification based on learnable 3D group convolution. Spectrosc. Lett. 2025, 58, 704–716. [Google Scholar] [CrossRef]
- Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of hyperspectral image based on double-branch dual-attention mechanism network. Remote Sens. 2020, 12, 582. [Google Scholar] [CrossRef]
- Zhao, J.; Wang, J.; Ruan, C.; Dong, Y.; Huang, L. Dual-branch spectral–spatial attention network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5504718. [Google Scholar] [CrossRef]
| Indian Pines Dataset | Houston Dataset | ||||||
| Class | Class Name | Train | Test | Class | Class Name | Train | Test |
| 1 | Alfalfa | 1 | 45 | 1 | Healthy grass | 13 | 1238 |
| 2 | Corn-notill | 43 | 1385 | 2 | Stressed grass | 13 | 1241 |
| 3 | Corn-mintill | 25 | 805 | 3 | Synthetic grass | 7 | 690 |
| 4 | Corn | 7 | 230 | 4 | Trees | 12 | 1232 |
| 5 | Grass-pasture | 14 | 469 | 5 | Soil | 12 | 1230 |
| 6 | Grass-trees | 22 | 708 | 6 | Water | 3 | 322 |
| 7 | Grass-pasture-mowed | 1 | 27 | 7 | Residential | 13 | 1255 |
| 8 | Hay-windrowed | 14 | 464 | 8 | Commercial | 12 | 1232 |
| 9 | Oats | 1 | 19 | 9 | Road | 13 | 1239 |
| 10 | Soybean-notill | 29 | 943 | 10 | Highway | 12 | 1215 |
| 11 | Soybean-mintill | 73 | 2382 | 11 | Railway | 12 | 1223 |
| 12 | Soybean-clean | 18 | 575 | 12 | Parking Lot 1 | 12 | 1221 |
| 13 | Wheat | 6 | 199 | 13 | Parking Lot 2 | 5 | 464 |
| 14 | Woods | 38 | 1227 | 14 | Tennis Court | 4 | 424 |
| 15 | Buildings-Grass-Trees-Drives | 12 | 374 | 15 | Running Track | 7 | 653 |
| 16 | Stone-Steel-Towers | 3 | 90 | - | Total | 150 | 14,879 |
| - | Total | 307 | 9942 | ||||
| Pavia University Dataset | WHU-Hi-HanChuan Dataset | ||||||
| Class | Class Name | Train | Test | Class | Class Name | Train | Test |
| 1 | Asphalt | 33 | 6598 | 1 | Strawberry | 224 | 44,511 |
| 2 | Meadows | 93 | 18,556 | 2 | Cowpea | 114 | 22,639 |
| 3 | Gravel | 10 | 2089 | 3 | Soybean | 51 | 10,236 |
| 4 | Trees | 15 | 3049 | 4 | Sorghum | 27 | 5326 |
| 5 | Painted metal sheets | 7 | 1338 | 5 | Water-spinach | 6 | 1194 |
| 6 | Bare Soil | 25 | 5004 | 6 | Watermelon | 23 | 4510 |
| 7 | Bitumen | 7 | 1323 | 7 | Greens | 29 | 5874 |
| 8 | Self-Blocking Bricks | 18 | 3664 | 8 | Trees | 90 | 17,888 |
| 9 | Shadows | 5 | 942 | 9 | Grass | 47 | 9422 |
| - | Total | 213 | 42,563 | 10 | Red-roof | 52 | 10,464 |
| 11 | Gray-roof | 84 | 16,827 | ||||
| 12 | Plastic | 18 | 3661 | ||||
| 13 | Bare-soil | 46 | 9070 | ||||
| 14 | Road | 93 | 18,467 | ||||
| 15 | Bright-object | 6 | 1130 | ||||
| 16 | Water | 377 | 75,024 | ||||
| - | Total | 1287 | 256,243 | ||||
| Class | CDCNN | FDSSC | DBDA | SSFTT | DBSSAN | DGCNet | LGCNet | M3DCNN | Ours |
|---|---|---|---|---|---|---|---|---|---|
| Alfalfa | 0.89 | 24.00 | 51.33 | 38.67 | 15.56 | 20.00 | 20.67 | 42.22 | 53.78 |
| Corn-notill | 78.06 | 86.82 | 92.35 | 81.86 | 73.60 | 73.69 | 72.58 | 92.09 | 92.53 |
| Corn-mintill | 88.05 | 84.17 | 91.13 | 86.41 | 61.13 | 66.45 | 63.65 | 90.93 | 92.37 |
| Corn | 79.70 | 80.52 | 87.78 | 77.26 | 57.74 | 52.83 | 55.35 | 89.87 | 90.70 |
| Grass-pasture | 90.30 | 87.74 | 91.62 | 83.54 | 67.01 | 74.03 | 72.81 | 87.80 | 90.00 |
| Grass-trees | 91.27 | 91.81 | 98.81 | 94.04 | 94.65 | 96.43 | 94.83 | 97.16 | 96.84 |
| Grass-pasture-mowed | 12.59 | 56.67 | 90.37 | 50.37 | 20.74 | 27.41 | 18.52 | 90.74 | 93.33 |
| Hay-windrowed | 97.41 | 97.33 | 99.20 | 97.76 | 98.43 | 98.36 | 97.56 | 98.97 | 99.46 |
| Oats | 0.00 | 32.11 | 73.16 | 67.37 | 11.58 | 25.26 | 17.37 | 78.42 | 79.47 |
| Soybean-notill | 79.95 | 87.25 | 90.36 | 80.84 | 69.09 | 71.63 | 67.30 | 89.85 | 89.70 |
| Soybean-mintill | 81.38 | 91.05 | 95.27 | 90.34 | 82.67 | 83.72 | 83.74 | 95.26 | 95.93 |
| Soybean-clean | 84.23 | 79.97 | 82.12 | 78.00 | 51.10 | 54.35 | 52.05 | 87.01 | 90.38 |
| Wheat | 89.65 | 95.08 | 98.14 | 89.40 | 90.90 | 94.07 | 89.50 | 97.74 | 98.04 |
| Woods | 98.21 | 97.39 | 98.32 | 92.67 | 97.07 | 97.46 | 95.83 | 96.50 | 98.04 |
| Buildings-Grass-Trees-Drives | 84.17 | 90.11 | 91.39 | 88.56 | 75.91 | 74.49 | 69.73 | 92.78 | 94.92 |
| Stone-Steel-Towers | 78.00 | 87.33 | 97.00 | 84.00 | 84.78 | 82.67 | 80.67 | 87.44 | 90.89 |
| OA (%) | 84.93 | 89.13 | 93.44 | 86.97 | 77.92 | 79.49 | 77.89 | 93.06 | 94.07 |
| AA (%) | 70.87 | 79.33 | – | 80.07 | 65.75 | 68.30 | 65.76 | 88.42 | 90.40 |
| Kappa (%) | 83.00 | 87.62 | – | 85.15 | 74.73 | 76.52 | 74.64 | 92.11 | 93.25 |
| Class | CDCNN | FDSSC | DBDA | SSFTT | DBSSAN | DGCNet | LGCNet | M3DCNN | Ours |
|---|---|---|---|---|---|---|---|---|---|
| Asphalt | 96.30 | 88.17 | 87.45 | 90.09 | 81.52 | 84.69 | 85.85 | 89.85 | 90.82 |
| Meadows | 95.36 | 99.15 | 99.52 | 99.26 | 98.72 | 99.29 | 89.30 | 99.41 | 99.28 |
| Gravel | 73.62 | 77.45 | 82.16 | 69.51 | 48.86 | 64.13 | 57.58 | 77.14 | 76.09 |
| Trees | 80.65 | 84.45 | 89.99 | 86.76 | 70.43 | 74.36 | 70.37 | 84.68 | 85.89 |
| Painted metal sheets | 99.57 | 99.32 | 99.78 | 98.83 | 99.19 | 99.51 | 99.13 | 99.78 | 99.79 |
| Bare Soil | 90.76 | 96.47 | 94.01 | 96.38 | 86.92 | 85.19 | 85.13 | 97.09 | 97.78 |
| Bitumen | 86.02 | 96.53 | 97.99 | 82.00 | 59.06 | 65.31 | 55.59 | 92.80 | 95.18 |
| Self-Blocking Bricks | 81.99 | 90.44 | 81.56 | 78.00 | 64.64 | 65.28 | 60.70 | 85.56 | 83.02 |
| Shadows | 77.31 | 83.30 | 80.61 | 72.52 | 59.84 | 71.60 | 64.17 | 63.55 | 82.70 |
| OA (%) | 91.13 | 93.84 | 93.46 | 92.17 | 85.18 | 87.27 | 81.61 | 93.33 | – |
| AA (%) | 86.84 | – | 90.34 | 85.93 | 74.35 | 78.82 | 74.20 | 87.76 | 90.66 |
| Kappa (%) | 88.57 | – | 91.06 | 89.59 | 79.94 | 82.80 | 76.53 | 91.14 | 91.79 |
| Class | CDCNN | FDSSC | DBDA | SSFTT | DBSSAN | DGCNet | LGCNet | M3DCNN | Ours |
|---|---|---|---|---|---|---|---|---|---|
| Healthy grass | 81.39 | 88.34 | 90.35 | 86.95 | 86.79 | 89.99 | 89.39 | 90.74 | 90.55 |
| Stressed grass | 87.51 | 88.76 | 92.39 | 89.19 | 86.45 | 90.61 | 89.33 | 89.46 | 93.22 |
| Synthetic grass | 98.55 | 98.32 | 98.13 | 98.86 | 96.70 | 95.13 | 96.09 | 98.33 | 98.94 |
| Trees | 89.33 | 88.98 | 91.78 | 85.98 | 83.68 | 83.12 | 86.57 | 89.16 | 90.32 |
| Soil | 98.07 | 97.32 | 99.59 | 95.83 | 97.20 | 98.67 | 97.79 | 99.11 | 99.62 |
| Water | 79.19 | 78.76 | 82.14 | 75.90 | 66.96 | 76.34 | 72.42 | 71.93 | 81.34 |
| Residential | 92.68 | 86.69 | 84.32 | 85.22 | 68.13 | 75.78 | 78.21 | 80.96 | 84.72 |
| Commercial | 61.00 | 65.86 | 60.23 | 60.62 | 62.49 | 67.82 | 67.66 | 62.83 | 63.68 |
| Road | 81.82 | 79.45 | 74.61 | 74.47 | 65.85 | 63.54 | 69.23 | 73.61 | 79.01 |
| Highway | 89.53 | 90.25 | 90.62 | 92.83 | 84.05 | 85.45 | 86.53 | 91.65 | 85.99 |
| Railway | 95.08 | 93.07 | 92.31 | 91.10 | 81.29 | 85.41 | 83.70 | 89.90 | 89.22 |
| Parking Lot 1 | 74.83 | 76.52 | 81.88 | 82.24 | 74.28 | 80.48 | 80.09 | 83.76 | 82.83 |
| Parking Lot 2 | 73.47 | 80.80 | 86.51 | 80.28 | 81.08 | 82.50 | 82.56 | 86.55 | 86.90 |
| Tennis Court | 99.60 | 99.72 | 100.00 | 95.64 | 86.96 | 94.76 | 93.92 | 99.03 | 99.34 |
| Running Track | 99.86 | 99.40 | 97.64 | 97.92 | 97.00 | 99.25 | 99.65 | 99.51 | 96.98 |
| OA (%) | 86.32 | 86.83 | – | 85.70 | 80.64 | 83.68 | 84.27 | 86.50 | 87.32 |
| AA (%) | 86.79 | 87.48 | – | 86.20 | 81.26 | 84.59 | 84.88 | 87.10 | 88.18 |
| Kappa (%) | 85.21 | 85.77 | – | 84.54 | 79.06 | 82.36 | 82.99 | 85.41 | 86.29 |
| Class | CDCNN | FDSSC | DBDA | SSFTT | DBSSAN | DGCNet | LGCNet | M3DCNN | Ours |
|---|---|---|---|---|---|---|---|---|---|
| Strawberry | 87.19 | 98.18 | 97.20 | 98.58 | 97.42 | 97.20 | 97.02 | 97.83 | 98.40 |
| Cowpea | 86.28 | 94.65 | 92.00 | 92.77 | 86.06 | 89.61 | 90.88 | 93.12 | 94.01 |
| Soybean | 82.19 | 88.24 | 88.92 | 93.09 | 88.32 | 88.60 | 88.33 | 89.92 | 92.70 |
| Sorghum | 94.44 | 96.73 | 97.05 | 95.84 | 95.31 | 97.10 | 96.29 | 96.03 | 97.42 |
| Water-spinach | 58.73 | 86.38 | 70.75 | 78.02 | 50.64 | 63.20 | 63.89 | 70.47 | 85.78 |
| Watermelon | 60.78 | 74.44 | 69.20 | 74.67 | 57.71 | 61.28 | 52.09 | 71.65 | 76.06 |
| Greens | 86.60 | 92.75 | 90.67 | 93.02 | 82.81 | 89.15 | 85.92 | 91.66 | 92.61 |
| Trees | 86.66 | 89.95 | 87.95 | 87.28 | 83.28 | 83.44 | 83.10 | 88.49 | 90.79 |
| Grass | 83.39 | 87.71 | 87.51 | 86.69 | 71.61 | 85.72 | 77.82 | 89.01 | 90.12 |
| Red-roof | 96.32 | 96.11 | 95.78 | 96.25 | 93.75 | 93.51 | 93.65 | 97.40 | 97.80 |
| Gray-roof | 93.44 | 96.51 | 95.82 | 97.17 | 94.23 | 95.28 | 93.39 | 96.26 | 97.30 |
| Plastic | 79.53 | 86.58 | 82.66 | 85.14 | 46.78 | 55.83 | 48.07 | 82.86 | 88.01 |
| Bare-soil | 72.14 | 73.45 | 76.86 | 73.82 | 68.80 | 70.72 | 66.62 | 77.83 | 77.76 |
| Road | 92.80 | 93.68 | 90.05 | 89.61 | 86.11 | 89.93 | 88.77 | 91.86 | 93.73 |
| Bright-object | 68.92 | 68.27 | 66.49 | 57.42 | 40.73 | 62.29 | 53.69 | 60.06 | 64.64 |
| Water | 99.04 | 99.56 | 99.63 | 99.76 | 99.44 | 99.68 | 99.66 | 99.68 | 99.75 |
| OA (%) | 90.21 | – | 93.62 | 94.18 | 90.25 | 92.07 | 91.07 | 94.26 | 95.28 |
| AA (%) | 83.03 | – | 86.78 | 87.44 | 77.69 | 82.66 | 79.95 | 87.13 | 89.81 |
| Kappa (%) | 88.63 | – | 92.53 | 93.18 | 88.56 | 90.70 | 89.53 | 93.28 | 94.47 |
| Metric | CDCNN | FDSSC | DBDA | SSFTT | DBSSAN | DGCNet | LGCNet | M3DCNN | Ours |
|---|---|---|---|---|---|---|---|---|---|
| Parameters (M) | 0.3022 | 0.1839 | 0.0389 | 0.1485 | 0.0721 | 3.3664 | 0.3782 | 0.4412 | 0.2957 |
| FLOPs (G) | 0.0506 | 0.4964 | 0.0556 | 0.0114 | 0.1398 | 0.0344 | 0.0193 | 0.0095 | 0.0354 |
| Test Time (s) | 0.3979 | 2.0812 | 0.9961 | 0.5796 | 1.4631 | 23.5910 | 29.6271 | 4.2803 | 0.2934 |
| Number of Mamba Modules | OA (%) | AA (%) | Kappa (%) |
|---|---|---|---|
| 2 | 92.15 | 87.32 | 90.98 |
| 3 (Ours) | 94.07 | 90.40 | 93.25 |
| 4 | 93.58 | 89.76 | 92.71 |
| 5 | 92.89 | 88.61 | 91.88 |
| Loss Weight | OA (%) | AA (%) | Kappa (%) |
|---|---|---|---|
| 0.0 | 92.86 | 88.57 | 91.85 |
| 0.1 | 93.52 | 89.63 | 92.58 |
| 0.2 (Ours) | 94.07 | 90.40 | 93.25 |
| 0.3 | 93.84 | 89.95 | 92.99 |
| 0.4 | 93.21 | 89.12 | 92.27 |
| 0.5 | 92.93 | 88.76 | 91.93 |
| 0.6 | 92.53 | 88.24 | 91.48 |
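The weight swept above scales the frequency alignment term in the total loss. Since the paper's exact formulation is defined in Section 3.5 and not reproduced here, the sketch below is only a hypothetical stand-in: it penalizes a consistency gap between the channel statistics of the two branches plus a redundancy (correlation) term, matching the "consistency and complementarity" intuition.

```python
import numpy as np

def frequency_alignment_loss(f_low, f_high, weight=0.2):
    """Hypothetical frequency alignment penalty (illustrative only).

    `f_low` and `f_high` are (samples, channels) feature matrices from
    the low- and high-frequency branches. The paper's actual FAL may
    differ; this sketch only mirrors its stated goals.
    """
    # consistency: match the per-channel means of the two branches
    mean_gap = np.mean((f_low.mean(axis=0) - f_high.mean(axis=0)) ** 2)
    # complementarity: penalise correlation between the branches
    fl = f_low - f_low.mean(axis=0)
    fh = f_high - f_high.mean(axis=0)
    corr = np.abs((fl * fh).mean())
    return weight * (mean_gap + corr)
```

With a weight of 0 the term vanishes (first row of the table), while overly large weights over-constrain the branches, which is consistent with the accuracy peak at 0.2 and decline beyond it.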
| HLFDMM | CFIM | FAL | OA (%) | AA (%) | Kappa (%) |
|---|---|---|---|---|---|
| × | × | × | 89.25 | 84.11 | 87.98 |
| ✓ | × | × | 92.86 | 88.57 | 91.85 |
| × | ✓ | × | 90.18 | 85.62 | 89.03 |
| × | × | ✓ | 89.76 | 84.83 | 88.59 |
| ✓ | ✓ | × | 93.69 | 89.92 | 92.88 |
| ✓ | × | ✓ | 93.24 | 89.31 | 92.37 |
| × | ✓ | ✓ | 90.87 | 86.45 | 89.81 |
| ✓ | ✓ | ✓ | 94.07 | 90.40 | 93.25 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Teng, Y.; Gan, S.; Yuan, X. HL-Mamba: A High–Low Frequency Interaction Mamba Network for Hyperspectral Image Classification. Sensors 2026, 26, 1556. https://doi.org/10.3390/s26051556