Abstract
Hyperspectral image (HSI) classification requires models that can simultaneously capture spatial structures and spectral continuity. Although state space models (SSMs), particularly Mamba, have shown strong capability in long-sequence modeling, their application to HSI remains limited by insufficient spectral relation modeling and the constraints of unidirectional processing. To address these challenges, we propose BiMambaHSI, a novel bidirectional spectral-spatial framework. First, we propose a joint spectral-spatial gated Mamba (JGM) encoder that applies forward-backward state modeling with input-dependent gating, explicitly capturing bidirectional long-range spectral-spatial dependencies and overcoming the limitations of conventional unidirectional Mamba. Second, we introduce the spatial-spectral Mamba block (SSMB), which employs parallel bidirectional branches to extract spatial and spectral features separately and integrates them through a lightweight adaptive fusion mechanism. This design enhances spectral continuity, spatial discrimination, and cross-dimensional interaction while preserving the linear complexity of pure SSMs. Extensive experiments on five public benchmark datasets (Pavia University, Houston, Indian Pines, WHU-Hi-HanChuan, and WHU-Hi-LongKou) demonstrate that BiMambaHSI consistently achieves state-of-the-art performance, improving classification accuracy and robustness over existing CNN- and Transformer-based methods.