BioLiteNet: A Biomimetic Lightweight Hyperspectral Image Classification Model
Abstract
1. Introduction
2. Related Work
2.1. Existing Hyperspectral Image Classification Methods
2.2. Lightweighting Techniques in Hyperspectral Classification
2.3. Biomimetic and Bio-Inspired Mechanisms in Hyperspectral Image Processing
3. BioLiteNet
3.1. Overview of BioLiteNet
3.2. BeeSenseSelector
3.3. AffScaleConv
4. Experiment
4.1. Experimental Setup
4.2. Description of the Datasets
4.3. Evaluation Indicators
4.4. Comparison Experiment
4.5. Results and Analyses
4.6. Ablation Experiments
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Experimental environment:

| Item | Setting | Item | Setting |
|---|---|---|---|
| OS | Linux | RAM | 40 GB |
| CUDA | 10.2 | Video memory | 11 GB |
| Python | 3.6.5 | Server platform | AutoDL |
| Keras | 2.2.0 | GPU | NVIDIA GeForce RTX 2080 Ti |
| TensorFlow | 1.10.0 | CPU | Intel(R) Xeon(R) Platinum 8255C @ 2.50 GHz |
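The version pins above (Python 3.6.5, Keras 2.2.0, TensorFlow 1.10.0, CUDA 10.2) are easy to get wrong when re-running the experiments; the sanity check below is a minimal sketch of our own, not part of the original setup.

```python
# Minimal sanity check for the software stack listed above (TF 1.10 / Keras 2.2).
# The target versions come from the table; the check itself is our own sketch.
import tensorflow as tf
import keras
from tensorflow.python.client import device_lib

assert tf.__version__.startswith("1.10"), "Expected TensorFlow 1.10.x, got %s" % tf.__version__
assert keras.__version__.startswith("2.2"), "Expected Keras 2.2.x, got %s" % keras.__version__

# Confirm that a CUDA device (RTX 2080 Ti in the table) is visible to TensorFlow 1.x.
gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == "GPU"]
print("Visible GPUs:", gpus if gpus else "none")
```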
| WHU-Hi-LongKou (Class) | Samples | Pavia University (Class) | Samples | Indian Pines (Class) | Samples |
|---|---|---|---|---|---|
| Corn | 34,511 | Asphalt | 6631 | Alfalfa | 46 |
| Cotton | 8374 | Meadows | 18,649 | Corn-notill | 1428 |
| Sesame | 3031 | Gravel | 2099 | Corn-mintill | 830 |
| Broad-leaf soybean | 63,212 | Trees | 3064 | Corn | 237 |
| Narrow-leaf soybean | 4151 | Painted metal sheets | 1345 | Grass-pasture | 483 |
| Rice | 11,854 | Bare soil | 5029 | Grass-trees | 730 |
| Water | 67,056 | Bitumen | 1330 | Grass-pasture-mowed | 28 |
| Roads and houses | 7124 | Self-Blocking Bricks | 3682 | Hay-windrowed | 478 |
| Mixed weed | 5229 | Shadows | 947 | Oats | 20 |
| | | | | Soybean-notill | 972 |
| | | | | Soybean-mintill | 2455 |
| | | | | Soybean-clean | 593 |
| | | | | Wheat | 205 |
| | | | | Woods | 1265 |
| | | | | Buildings-Grass-Trees-Drives | 386 |
| | | | | Stone-Steel-Towers | 93 |
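The comparison tables below draw training sets either as a fixed fraction of each class (5%, 10%, 20%) or as a fixed number of samples per class (25, 50). The sketch below shows one common way to draw such a stratified split from a ground-truth label map; the function and variable names are illustrative, not taken from the paper's code.

```python
# Sketch of stratified sampling from a ground-truth label map, matching the
# per-class ratio (5%/10%/20%) and per-class count (25/50) settings used below.
import numpy as np

def split_train_test(gt, ratio=None, per_class=None, seed=0):
    """gt: 2-D array of class labels, 0 = unlabeled. Returns flat train/test indices."""
    assert (ratio is None) != (per_class is None), "give exactly one of ratio / per_class"
    rng = np.random.RandomState(seed)
    labels = gt.ravel()
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        if c == 0:                                   # skip unlabeled background
            continue
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        n_train = per_class if per_class is not None else max(1, int(round(ratio * len(idx))))
        n_train = min(n_train, len(idx) - 1)         # keep at least one test sample per class
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)

# Example: 5% of each Indian Pines class, or 25 samples per class for another dataset.
# train_idx, test_idx = split_train_test(gt_indian_pines, ratio=0.05)
# train_idx, test_idx = split_train_test(gt_other, per_class=25)
```

Note that for very small classes (e.g., Oats with 20 samples) a 5% ratio leaves only a single training pixel, which, if the class indices follow the order listed above, is consistent with the low class-9 accuracies in the first table below.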
| Train Samples | Class | HybridSN | ViT | lvt | dit | hit | yang | rvt | t2t | DGPF | SS-Mamba | Ours |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5% | 1 | 31.82 | 0 | 0 | 2.27 | 22.73 | 0 | 0 | 0 | 72.95 | 39.55 | 86.36 ± 0.93 |
| | 2 | 90.79 | 0 | 61.24 | 77.3 | 64.7 | 42.74 | 57.19 | 0 | 92.8 | 84.77 | 82.83 ± 0.79 |
| | 3 | 97.47 | 0 | 29.66 | 11.66 | 43.35 | 14.45 | 15.72 | 0 | 91.51 | 84.33 | 83.14 ± 0.89 |
| | 4 | 84.89 | 0 | 50 | 39.38 | 46.46 | 28.76 | 51.33 | 0 | 93.73 | 85.24 | 94.67 ± 1.13 |
| | 5 | 99.78 | 0 | 7.63 | 11.98 | 29.19 | 33.12 | 16.99 | 0 | 95.08 | 85.53 | 84.97 ± 0.77 |
| | 6 | 99.57 | 18.16 | 62.54 | 76.66 | 92.51 | 79.97 | 74.06 | 20.32 | 96.97 | 93.75 | 97.69 ± 0.63 |
| | 7 | 85.19 | 0 | 0 | 0 | 74.07 | 0 | 0 | 0 | 90.38 | 49.23 | 11.54 ± 1.27 |
| | 8 | 100 | 76.48 | 87.03 | 81.1 | 87.69 | 84.18 | 87.03 | 87.91 | 99.8 | 99.8 | 99.12 ± 0.19 |
| | 9 | 63.16 | 0 | 0 | 10.53 | 5.26 | 15.79 | 0 | 0 | 69.47 | 18.95 | 0.00 ± 0.31 |
| | 10 | 96.32 | 0 | 1.84 | 15.91 | 64.94 | 49.46 | 26.08 | 0 | 94.3 | 87.35 | 94.48 ± 0.57 |
| | 11 | 98.24 | 89.16 | 54.78 | 56.58 | 79.55 | 87.48 | 56.71 | 92.46 | 96.11 | 94.64 | 97.17 ± 0.88 |
| | 12 | 94.67 | 0 | 8.87 | 31.03 | 39.18 | 45.21 | 0.35 | 0 | 92.8 | 79.4 | 95.92 ± 0.78 |
| | 13 | 99.49 | 0 | 82.56 | 71.79 | 89.23 | 91.79 | 89.23 | 0 | 94.1 | 95.69 | 94.36 ± 1.13 |
| | 14 | 97.59 | 96.42 | 95.76 | 92.68 | 96.26 | 88.77 | 95.26 | 94.93 | 98.86 | 97.43 | 89.93 ± 1.32 |
| | 15 | 91.55 | 0 | 22.07 | 33.79 | 40.6 | 40.05 | 17.17 | 0 | 94.44 | 82.7 | 66.49 ± 1.06 |
| | 16 | 85.23 | 0 | 22.47 | 19.1 | 69.66 | 13.48 | 0 | 0 | 73.82 | 69.78 | 30.34 ± 0.89 |
| | OA | 95.86 | 38.11 | 49.27 | 53.63 | 69.27 | 61.68 | 50.82 | 39.4 | 94.9 | 89.56 | 90.02 ± 0.97 |
| | AA | 88.48 | 16.48 | 34.5 | 37.16 | 55.61 | 42.07 | 34.54 | 17.39 | 90.45 | 78.01 | 75.56 ± 0.63 |
| | Kappa | 95.28 | 25.37 | 41.8 | 46.93 | 64.99 | 55.66 | 43.46 | 26.81 | 94.19 | 88.07 | 88.56 ± 0.82 |
| 10% | 1 | 87.8 | 0 | 14.29 | 26.19 | 69.05 | 0 | 0 | 0 | 92.93 | 73.17 | 76.1 ± 2.23 |
| | 2 | 95.64 | 17.65 | 52.95 | 60.96 | 80.33 | 62.13 | 59.1 | 55.68 | 96.09 | 90.58 | 96.4 ± 2.19 |
| | 3 | 97.19 | 22.36 | 36.68 | 31.33 | 63.72 | 61.04 | 26.51 | 22.49 | 97.26 | 94.24 | 95.6 ± 1.89 |
| | 4 | 100 | 15.42 | 46.26 | 67.29 | 66.36 | 68.22 | 56.07 | 56.07 | 96.43 | 97.65 | 91.27 ± 2.32 |
| | 5 | 98.39 | 19.77 | 32.41 | 32.41 | 54.71 | 34.48 | 26.67 | 9.43 | 96.09 | 93.33 | 94.02 ± 2.11 |
| | 6 | 98.78 | 66.97 | 83.26 | 85.69 | 97.9 | 90.87 | 78.84 | 92.09 | 98.36 | 98.63 | 96.7 ± 1.55 |
| | 7 | 100 | 0 | 0 | 3.85 | 23.08 | 0 | 0 | 0 | 93.6 | 44 | 69.2 ± 1.89 |
| | 8 | 100 | 87.7 | 88.4 | 87.24 | 89.79 | 89.56 | 89.1 | 88.4 | 99.95 | 100 | 98.23 ± 0.68 |
| | 9 | 94.44 | 0 | 0 | 0 | 16.67 | 0 | 0 | 0 | 87.22 | 22.22 | 2.78 ± 0.47 |
| | 10 | 99.66 | 0 | 60.8 | 59.43 | 72.57 | 63.2 | 54.29 | 1.94 | 97.27 | 89.49 | 96.71 ± 1.71 |
| | 11 | 99.23 | 72.04 | 76.88 | 61.31 | 84.21 | 79.59 | 66.11 | 63.26 | 97.82 | 96.79 | 98.93 ± 1.98 |
| | 12 | 95.32 | 0 | 34.27 | 39.89 | 67.23 | 47.94 | 58.43 | 14.04 | 94.64 | 87.83 | 84.04 ± 2.13 |
| | 13 | 100 | 87.57 | 91.89 | 70.27 | 89.19 | 100 | 97.84 | 56.76 | 98.04 | 100 | 93.26 ± 2.05 |
| | 14 | 100 | 96.31 | 93.42 | 91.13 | 95.87 | 93.24 | 95.35 | 93.85 | 99.63 | 98.42 | 98.15 ± 1.96 |
| | 15 | 100 | 21.84 | 37.07 | 24.43 | 40.8 | 40.23 | 36.49 | 18.39 | 98.24 | 95.68 | 89.42 ± 1.67 |
| | 16 | 91.67 | 0 | 91.67 | 95.24 | 78.57 | 82.14 | 84.52 | 0 | 85.48 | 79.76 | 82.98 ± 2.12 |
| | OA | 95.86 | 38.11 | 49.27 | 53.63 | 69.27 | 61.68 | 50.82 | 39.4 | 94.9 | 89.56 | 90.02 ± 1.21 |
| | AA | 88.48 | 16.48 | 34.5 | 37.16 | 55.61 | 42.07 | 34.54 | 17.39 | 90.45 | 78.01 | 75.56 ± 0.72 |
| | Kappa | 95.28 | 25.37 | 41.8 | 46.93 | 64.99 | 55.66 | 43.46 | 26.81 | 94.19 | 88.07 | 88.56 ± 1.05 |
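OA, AA, and Kappa in these tables are the standard overall accuracy, average (per-class) accuracy, and Cohen's kappa, all reported in percent. The sketch below shows how they can be computed from test predictions; it uses scikit-learn, which is our assumption and is not listed in the environment table.

```python
# OA, AA, and Cohen's kappa as reported in the result tables (values in percent).
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def oa_aa_kappa(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()                              # overall accuracy
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)   # recall of each class
    aa = per_class.mean()                                     # average (per-class) accuracy
    kappa = cohen_kappa_score(y_true, y_pred)
    return 100 * oa, 100 * aa, 100 * kappa, 100 * per_class

# oa, aa, kappa, per_class = oa_aa_kappa(test_labels, model_predictions)
```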
| Train Samples | Class | HybridSN | ViT | lvt | dit | hit | yang | rvt | t2t | DGPF | SS-Mamba | Ours |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10% | 1 | 100 | 86.91 | 89.88 | 90.97 | 90.75 | 91.1 | 92.17 | 89.11 | 89.08 | 58.63 | 79.77 ± 5.12 |
| | 2 | 100 | 85.37 | 85.87 | 84.8 | 86.37 | 86.28 | 78.71 | 85.34 | 99.86 | 99.21 | 99.31 ± 4.33 |
| | 3 | 98.62 | 71.59 | 85.13 | 81.22 | 80.79 | 62.38 | 77.2 | 83.39 | 81.46 | 47.11 | 84.65 ± 6.74 |
| | 4 | 99.53 | 93.15 | 94.6 | 93.29 | 94.49 | 87.42 | 95.21 | 92.78 | 91.58 | 87.47 | 71.74 ± 5.89 |
| | 5 | 100 | 97.19 | 100 | 100 | 99.67 | 99.75 | 100 | 99.5 | 99.8 | 97.52 | 89.41 ± 4.26 |
| | 6 | 100 | 88.47 | 77.8 | 94.7 | 99.18 | 67.11 | 99.93 | 85.47 | 98.91 | 68.65 | 96.89 ± 3.98 |
| | 7 | 100 | 83.46 | 77.44 | 89.47 | 89.89 | 89.14 | 89.31 | 89.72 | 93.88 | 42.6 | 89.29 ± 5.37 |
| | 8 | 99.64 | 79.09 | 91.04 | 94.84 | 94.6 | 61.5 | 96.29 | 94.99 | 85.7 | 76.27 | 67.85 ± 6.51 |
| | 9 | 99.06 | 93.55 | 100 | 98.94 | 99.3 | 98.83 | 97.54 | 98.48 | 82.36 | 72.28 | 19.4 ± 6.89 |
| | OA | 99.85 | 85.81 | 87.07 | 89.15 | 90.39 | 82.34 | 87.33 | 88.08 | 94.79 | 81.55 | 88.2 ± 5.26 |
| | AA | 99.65 | 77.88 | 80.18 | 82.82 | 83.5 | 74.35 | 82.64 | 81.88 | 91.4 | 72.19 | 77.59 ± 4.94 |
| | Kappa | 99.8 | 81.79 | 83.31 | 86.13 | 87.71 | 76.71 | 84.02 | 84.69 | 93.08 | 74.27 | 84.25 ± 6.02 |
| 20% | 1 | 100 | 92.21 | 90.88 | 92.03 | 93.16 | 93.42 | 92.4 | 90.93 | 96.22 | 99.92 | 92.11 ± 4.76 |
| | 2 | 100 | 82.86 | 86.23 | 85.76 | 86.41 | 86.39 | 86.16 | 85.63 | 99.98 | 99.63 | 99.45 ± 3.84 |
| | 3 | 99.64 | 84.17 | 88.39 | 86.9 | 88.1 | 85.6 | 85.12 | 86.19 | 87.89 | 26.64 | 91.49 ± 5.63 |
| | 4 | 99.63 | 93.11 | 94.86 | 94.58 | 94.58 | 95.15 | 94.29 | 94.78 | 95.01 | 96.74 | 60.73 ± 6.21 |
| | 5 | 100 | 99.63 | 100 | 100 | 99.72 | 99.81 | 100 | 99.91 | 99.82 | 99.47 | 98.03 ± 3.97 |
| | 6 | 99.91 | 99.73 | 99.33 | 99.45 | 100 | 99.95 | 98.41 | 95.83 | 99.96 | 78.7 | 99.72 ± 3.69 |
| | 7 | 100 | 99.44 | 99.25 | 99.91 | 97.27 | 98.97 | 98.5 | 96.99 | 98.26 | 72.62 | 96.63 ± 5.18 |
| | 8 | 99.93 | 98.1 | 96.88 | 98.64 | 99.19 | 98.44 | 98.71 | 96.37 | 93.84 | 94.79 | 86.53 ± 5.86 |
| | 9 | 99.6 | 96.83 | 100 | 99.34 | 99.87 | 100 | 98.94 | 98.94 | 92.57 | 73.3 | 43.6 ± 6.74 |
| | OA | 99.93 | 89.76 | 91.27 | 91.33 | 91.88 | 91.81 | 91.28 | 90.36 | 97.69 | 91.58 | 92.7 ± 5.49 |
| | AA | 99.86 | 84.61 | 85.58 | 85.66 | 85.83 | 85.77 | 85.25 | 84.56 | 95.95 | 82.42 | 85.36 ± 5.12 |
| | Kappa | 99.91 | 86.99 | 88.85 | 88.92 | 89.62 | 89.54 | 88.85 | 87.67 | 96.94 | 88.65 | 90.23 ± 6.28 |
| Train Samples | Class | HybridSN | ViT | lvt | dit | hit | yang | rvt | t2t | DGPF | SS-Mamba | Ours |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 25 | 1 | 99.72 | 76.47 | 79.22 | 78.54 | 78.54 | 79.57 | 68.08 | 4.18 | 99.29 | 83.95 | 84.34 ± 6.73 |
| | 2 | 96.18 | 37.27 | 45.55 | 11.52 | 42.47 | 88.47 | 66.19 | 46.3 | 96.66 | 87.22 | 87.64 ± 7.94 |
| | 3 | 100 | 67.6 | 70.86 | 41.88 | 60.35 | 64.04 | 65.37 | 0 | 99.49 | 95.11 | 95.48 ± 6.15 |
| | 4 | 92.58 | 8.27 | 46.67 | 48.34 | 55.79 | 39.76 | 40.54 | 76.3 | 93.44 | 57.61 | 58.08 ± 8.12 |
| | 5 | 99.28 | 44.16 | 84.08 | 63.48 | 76.08 | 43.19 | 72.39 | 0 | 99.05 | 83.21 | 83.69 ± 7.02 |
| | 6 | 95.82 | 88.11 | 88.84 | 46.17 | 68.74 | 89.08 | 93.73 | 83.73 | 97.67 | 78.12 | 78.56 ± 5.34 |
| | 7 | 98.38 | 94.2 | 96.5 | 95.94 | 95.83 | 95.61 | 97.52 | 97.08 | 99.04 | 98.67 | 99.00 ± 4.26 |
| | 8 | 89.4 | 68.77 | 86.55 | 56.64 | 81.07 | 71.4 | 83.84 | 0 | 90.72 | 51.1 | 51.60 ± 7.58 |
| | 9 | 91.4 | 90.62 | 94.49 | 91.2 | 90.78 | 91.37 | 96.12 | 24.1 | 93.85 | 36.78 | 37.14 ± 6.47 |
| | OA | 96.13 | 59.58 | 74.62 | 69.02 | 75.22 | 72.49 | 71.94 | 63.51 | 96.76 | 78.21 | 78.64 ± 7.13 |
| | AA | 95.86 | 57.55 | 69.28 | 53.37 | 64.96 | 66.25 | 68.38 | 33.17 | 96.58 | 74.63 | 75.06 ± 8.24 |
| | Kappa | 94.95 | 52.56 | 68.88 | 61.73 | 69.27 | 66.49 | 65.87 | 53.81 | 95.77 | 71.23 | 71.64 ± 7.68 |
| 50 | 1 | 99.96 | 65.66 | 84.48 | 51.73 | 87.99 | 88.04 | 87.75 | 22.88 | 99.64 | 94.98 | 95.38 ± 5.13 |
| | 2 | 98.93 | 55.25 | 65.92 | 21.82 | 92.05 | 90.11 | 44.32 | 0 | 98.98 | 92.51 | 92.88 ± 6.37 |
| | 3 | 100 | 46.19 | 25.13 | 5.2 | 82.32 | 77.69 | 80.74 | 0 | 99.73 | 99.33 | 99.60 ± 4.89 |
| | 4 | 92.32 | 51.54 | 73.28 | 69.87 | 63.64 | 64.28 | 51.47 | 87.87 | 97.46 | 76.21 | 76.64 ± 7.85 |
| | 5 | 100 | 79.69 | 87.25 | 36.41 | 4.15 | 45.6 | 88.95 | 13.28 | 99.67 | 92.91 | 93.37 ± 6.94 |
| | 6 | 98.86 | 79.41 | 93.47 | 78.93 | 86.23 | 90.69 | 94.3 | 64.72 | 99.21 | 83.47 | 83.99 ± 5.61 |
| | 7 | 99.83 | 96.49 | 96.77 | 96.87 | 96.03 | 94.75 | 96.34 | 94.82 | 99.35 | 98.85 | 99.19 ± 3.77 |
| | 8 | 95.58 | 87.39 | 91.53 | 85.98 | 78.63 | 86.27 | 94.85 | 39.37 | 92.94 | 35.91 | 36.30 ± 7.42 |
| | 9 | 96.75 | 92.99 | 94.07 | 95.48 | 92.97 | 92.47 | 91.83 | 56.57 | 96.87 | 35.12 | 35.57 ± 6.68 |
| | OA | 97.22 | 73.23 | 84.49 | 73.83 | 81.19 | 82.18 | 78.22 | 68.97 | 98.52 | 86.11 | 86.53 ± 6.42 |
| | AA | 98.02 | 65.46 | 71.19 | 54.23 | 68.4 | 72.99 | 73.06 | 37.95 | 98.21 | 78.82 | 79.21 ± 5.57 |
| | Kappa | 96.37 | 67.13 | 80.31 | 67.15 | 76.44 | 77.71 | 73.14 | 59.67 | 98.06 | 81.82 | 82.21 ± 7.79 |
| Datasets | Metrics | HybridSN | ViT | lvt | dit | hit | yang | rvt | t2t | DGPF | SS-Mamba | Ours |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Indian Pines | Params (M) | 2.26 | 2.49 | 30.37 | 49.82 | 48.86 | 43.7 | 8.52 | 696.41 | 4.03 | 1.18 | 0.02 |
| Indian Pines | FLOPs (G) | 1.7 | 222.21 | 2669.28 | 4381.55 | 1.09 | 1.81 | 749.21 | 2.77 | 0.01 | 1.30 | 0.000086 |
| Pavia University | Params (M) | 2 | 2.38 | 30.26 | 48.97 | 12.47 | 25.65 | 8.42 | 696.41 | 2.81 | 1.16 | 0.02 |
| Pavia University | FLOPs (G) | 0.32 | 109.57 | 1369.82 | 2217.6 | 0.28 | 0.75 | 380.98 | 2.77 | 0.01 | 0.83 | 0.000071 |
| WHU-Hi-LongKou | Params (M) | 2 | 2.56 | 30.44 | 50.43 | 57.32 | 57.07 | 8.6 | 696.41 | 4.91 | 1.20 | 0.02 |
| WHU-Hi-LongKou | FLOPs (G) | 0.32 | 309 | 3612.55 | 5987.27 | 1.55 | 2.78 | 1020.45 | 2.77 | 0.01 | 1.62 | 0.000096 |
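Figures of this kind (parameters in millions, FLOPs in GFLOPs) can be read off a Keras/TensorFlow 1.x model; the sketch below is one typical way to obtain them and is our own addition, not the authors' measurement code.

```python
# Sketch: parameter count and FLOPs estimation for a Keras 2.2 / TensorFlow 1.10 model.
# The tf.profiler-based approach is our assumption of a typical measurement method.
import keras.backend as K
import tensorflow as tf

def count_params_millions(model):
    return model.count_params() / 1e6                # parameters in millions (M)

def estimate_gflops():
    run_meta = tf.RunMetadata()
    opts = tf.profiler.ProfileOptionBuilder.float_operation()
    flops = tf.profiler.profile(K.get_session().graph,
                                run_meta=run_meta, cmd="op", options=opts)
    return flops.total_float_ops / 1e9               # FLOPs in billions (G)

# Build or load the model in the default graph first, then:
# print(count_params_millions(model), estimate_gflops())
```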
| Method | OA | AA | Kappa | Params | FLOPs |
|---|---|---|---|---|---|
| Ours | 95.55 | 85.24 | 94.92 | 22 k | 92.6 k |
| Ours without AffScaleConv | 96.61 | 89.53 | 96.12 | 28.8 k | 133.8 k |
| Ours without BeeSenseSelector | 95.16 | 84.57 | 94.46 | 20.9 k | 90.6 k |
| Ours without AffScaleConv and BeeSenseSelector | 95.08 | 83.03 | 94.38 | 27.7 k | 131.7 k |
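The two ablated components are the BeeSenseSelector (Section 3.2) and AffScaleConv (Section 3.3) modules; their internals are not reproduced here. The sketch below only illustrates how the four variants in the table can be wired with boolean switches, using deliberately generic placeholder layers in place of the real modules.

```python
# Ablation wiring sketch. The placeholder Conv2D layers stand in for the paper's
# BeeSenseSelector and AffScaleConv blocks and do NOT reproduce their design.
from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense
from keras.models import Model

def build_variant(input_shape, num_classes, use_selector=True, use_affscale=True):
    x_in = Input(shape=input_shape)
    x = x_in
    if use_selector:
        x = Conv2D(32, 1, activation="relu", name="beesenseselector_placeholder")(x)
    if use_affscale:
        x = Conv2D(32, 3, padding="same", activation="relu",
                   name="affscaleconv_placeholder")(x)
    x = GlobalAveragePooling2D()(x)
    out = Dense(num_classes, activation="softmax")(x)
    return Model(x_in, out)

# Four variants matching the ablation rows above:
#   build_variant(s, c, True,  True)   -> Ours
#   build_variant(s, c, True,  False)  -> Ours without AffScaleConv
#   build_variant(s, c, False, True)   -> Ours without BeeSenseSelector
#   build_variant(s, c, False, False)  -> Ours without both modules
```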