Semantic-Aware Co-Parallel Network for Cross-Scene Hyperspectral Image Classification
Abstract
1. Introduction
- We propose a multimodal network for cross-scene hyperspectral image classification in which image features are aligned by category in a semantic space, enhancing inter-class separability and learning cross-domain invariant representations.
- We extract rich image features through the collaboration of a spatial–spectral feature extraction module and a multiscale feature extraction module. Compared with using either module alone, the parallel design extracts features more effectively.
- We construct a semantic space that enables information sharing between the two modalities, supporting the learning of cross-domain invariant representations, and we optimize a supervised contrastive loss to better arrange feature distributions by category (a minimal sketch of such a loss follows this list).
- Experiments show that our method outperforms all comparison methods, achieving the best classification results on all three datasets and demonstrating strong generalization ability.
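As a rough illustration of the class-wise alignment described above, the following PyTorch-style sketch shows one common form of supervised contrastive loss. The function name and temperature are assumptions for demonstration; the exact formulation used in the paper (Equations (15)–(20)) may differ.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Class-wise supervised contrastive loss (illustrative only).

    features: (N, D) embeddings projected into the shared semantic space.
    labels:   (N,) integer class labels.
    Samples of the same class are pulled together; all others are pushed apart.
    """
    features = F.normalize(features, dim=1)                  # cosine-similarity space
    sim = features @ features.T / temperature                # (N, N) similarity logits
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))          # exclude self-pairs
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    mean_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return -mean_pos[pos_mask.any(dim=1)].mean()             # anchors with positives only
```

In the proposed network, a term of this kind would be combined with the image–text alignment loss into the total loss of Equation (21).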
2. Related Works
2.1. CNN in HSI Classification
2.2. Domain Generalization
2.3. Vision-Language Model
3. Proposed Method
3.1. Image Encoder
3.2. Optimized Semantic Space
| Algorithm 1: Pseudocode of SCPNet | |
|---|---|
| 1 | Training stage: |
| 2 | Input: source-domain samples, total epoch number T, knowledge base |
| 3 | Output: the learned network parameters |
| 4 | Initialize the network parameters |
| 5 | Load pretrained parameters |
| 6 | For epoch = 1 : T do |
| 7 | Extract image features and text features through Equations (5)–(11) |
| 8 | Establish an optimized semantic space and map image features into it through Equations (12)–(14) |
| 9 | For all samples do |
| 10 | Calculate the loss through Equation (3) |
| 11 | Calculate the loss through Equations (15)–(20) |
| 12 | Calculate the total loss through Equation (21) |
| 13 | End for |
| 14 | Update the parameters by gradient descent |
| 15 | End for |
| 16 | Testing stage: |
| 17 | Input: target-domain samples |
| 18 | Load: the trained parameters |
| 19 | Extract image features |
| 20 | Output: classification vector |
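The sketch below restates the training and testing stages of Algorithm 1 in PyTorch-style code. The module names (image_encoder, projector), the 0.07 temperature, and the loss weight lam are illustrative assumptions rather than the released SCPNet implementation; text_feats are precomputed features of the class prompts from the frozen, pretrained text encoder, and the contrastive term reuses the loss sketched in the Introduction.

```python
import torch
import torch.nn.functional as F

def train_scpnet(image_encoder, projector, text_feats, loader,
                 epochs, lr=1e-3, lam=1.0, device="cuda"):
    """Training skeleton mirroring Algorithm 1 (illustrative, not the official code)."""
    params = list(image_encoder.parameters()) + list(projector.parameters())
    optim = torch.optim.Adam(params, lr=lr)
    text_feats = F.normalize(text_feats, dim=1).to(device)   # frozen class prototypes

    for epoch in range(epochs):
        for patches, labels in loader:                        # source-domain samples only
            patches, labels = patches.to(device), labels.to(device)
            img_feats = F.normalize(projector(image_encoder(patches)), dim=1)

            logits = img_feats @ text_feats.T / 0.07          # image-text similarity
            cls_loss = F.cross_entropy(logits, labels)        # alignment/classification term
            con_loss = supervised_contrastive_loss(img_feats, labels)
            loss = cls_loss + lam * con_loss                  # total loss (cf. Equation (21))

            optim.zero_grad()
            loss.backward()
            optim.step()
    return image_encoder, projector

@torch.no_grad()
def predict(image_encoder, projector, text_feats, patches):
    """Testing stage: assign each target-domain patch to the nearest class prompt."""
    feats = F.normalize(projector(image_encoder(patches)), dim=1)
    return (feats @ text_feats.T).argmax(dim=1)               # classification vector
```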
4. Experiment and Discussion
4.1. Description of Datasets
4.2. Experiment Setting
4.3. Parameter Tuning
4.4. Ablation Study
4.5. Comparison Experiment
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Alkhatib, M.Q.; Al-Saad, M.; Aburaed, N.; Almansoori, S.; Zabalza, J.; Marshall, S.; Al-Ahmad, H. Tri-CNN: A three branch model for hyperspectral image classification. Remote Sens. 2023, 15, 316.
- Cao, X.; Yao, J.; Xu, Z.; Meng, D. Hyperspectral image classification with convolutional neural network and active learning. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4604–4616.
- Ortac, G.; Ozcan, G. Comparative study of hyperspectral image classification by multidimensional Convolutional Neural Network approaches to improve accuracy. Expert Syst. Appl. 2021, 182, 115280.
- Song, A.; Kim, Y. Deep learning-based hyperspectral image classification with application to environmental geographic information systems. Korean J. Remote Sens. 2017, 33, 1061–1073.
- Zhang, X.; Li, W.; Gao, C.; Yang, Y.; Chang, K. Hyperspectral pathology image classification using dimension-driven multi-path attention residual network. Expert Syst. Appl. 2023, 230, 120615.
- Wang, C.; Liu, B.; Liu, L.; Zhu, Y.; Hou, J.; Liu, P.; Li, X. A review of deep learning used in the hyperspectral image analysis for agriculture. Artif. Intell. Rev. 2021, 54, 5205–5253.
- Galdames, F.J.; Perez, C.A.; Estevez, P.A.; Adams, M. Rock lithological instance classification by hyperspectral images using dimensionality reduction and deep learning. Chemom. Intell. Lab. Syst. 2022, 224, 104538.
- Farahani, A.; Voghoei, S.; Rasheed, K.; Arabnia, H.R. A brief review of domain adaptation. In Advances in Data Science and Information Engineering; Springer: Cham, Switzerland, 2021; pp. 877–894.
- Wang, M.; Deng, W. Deep visual domain adaptation: A survey. Neurocomputing 2018, 312, 135–153.
- Li, J.; Lu, K.; Huang, Z.; Zhu, L.; Shen, H.T. Heterogeneous domain adaptation through progressive alignment. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 1381–1391.
- Long, M.; Cao, Y.; Wang, J.; Jordan, M. Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; PMLR: Cambridge, MA, USA, 2015; pp. 97–105.
- Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-adversarial training of neural networks. J. Mach. Learn. Res. 2016, 17, 1–35.
- Blanchard, G.; Lee, G.; Scott, C. Generalizing from several related classification tasks to a new unlabeled sample. Adv. Neural Inf. Process. Syst. 2011, 24, 2178–2186.
- Jin, X.; Lan, C.; Zeng, W.; Chen, Z. Feature alignment and restoration for domain generalization and adaptation. arXiv 2020, arXiv:2006.12009.
- Lu, W.; Wang, J.; Li, H.; Chen, Y.; Xie, X. Domain-invariant feature exploration for domain generalization. arXiv 2022, arXiv:2207.12020.
- Wang, J.; Lan, C.; Liu, C.; Ouyang, Y.; Qin, T.; Lu, W.; Chen, Y.; Zeng, W.; Yu, P. Generalizing to unseen domains: A survey on domain generalization. IEEE Trans. Knowl. Data Eng. 2022, 35, 8052–8072.
- Devillers, B.; Choksi, B.; Bielawski, R.; VanRullen, R. Does language help generalization in vision models? arXiv 2021, arXiv:2104.08313.
- Wu, W.; Sun, Z.; Song, Y.; Wang, J.; Ouyang, W. Transferring vision-language models for visual recognition: A classifier perspective. Int. J. Comput. Vis. 2024, 132, 392–409.
- Zhang, P.; Li, X.; Hu, X.; Yang, J.; Zhang, L.; Wang, L.; Choi, Y.; Gao, J. VinVL: Revisiting visual representations in vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19–25 June 2021; IEEE: New York, NY, USA, 2021; pp. 5579–5588.
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
- Song, W.; Li, S.; Fang, L.; Lu, T. Hyperspectral image classification with deep feature fusion network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184.
- Guo, T.; Wang, R.; Luo, F.; Gong, X.; Zhang, L.; Gao, X. Dual-view spectral and global spatial feature fusion network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–13.
- Ahmad, M.; Khan, A.M.; Mazzara, M.; Distefano, S.; Ali, M.; Sarfraz, M.S. A fast and compact 3-D CNN for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 19, 1–5.
- Zhou, J.; Zeng, S.; Gao, G.; Chen, Y.; Tang, Y. A novel spatial-spectral pyramid network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14.
- Sima, H.; Gao, F.; Zhang, Y.; Sun, J.; Guo, P. Collaborative optimization of spatial-spectrum parallel convolutional network (CO-PCN) for hyperspectral image classification. Int. J. Mach. Learn. Cybern. 2023, 14, 2353–2366.
- Cao, F.; Guo, W. Deep hybrid dilated residual networks for hyperspectral image classification. Neurocomputing 2020, 384, 170–181.
- Roy, S.K.; Jamali, A.; Chanussot, J.; Ghamisi, P.; Ghaderpour, E.; Shahabi, H. SimPoolFormer: A two-stream vision transformer for hyperspectral image classification. Remote Sens. Appl. Soc. Environ. 2025, 37, 101478.
- Zunair, H.; Ben Hamza, A. Learning to recognize occluded and small objects with partial inputs. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; IEEE: New York, NY, USA, 2024; pp. 675–684.
- Kingma, D.P.; Welling, M. Auto-encoding variational Bayes. arXiv 2013, arXiv:1312.6114.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
- Hu, S.; Zhang, K.; Chen, Z.; Chan, L. Domain generalization via multidomain discriminant analysis. In Uncertainty in Artificial Intelligence; PMLR: Cambridge, MA, USA, 2020; pp. 292–302.
- Krueger, D.; Caballero, E.; Jacobsen, J.-H.; Zhang, A.; Binas, J.; Zhang, D.; Le Priol, R.; Courville, A. Out-of-distribution generalization via risk extrapolation (REx). In Proceedings of the International Conference on Machine Learning, Vienna, Austria, 18–24 June 2021; PMLR: Cambridge, MA, USA, 2021; pp. 5815–5826.
- Du, Y.; Wang, J.; Feng, W.; Pan, S.; Qin, T.; Xu, R.; Wang, C. AdaRNN: Adaptive learning and forecasting of time series. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Queensland, Australia, 1–5 November 2021; ACM: New York, NY, USA, 2021; pp. 402–411.
- Sagawa, S.; Koh, P.W.; Hashimoto, T.B.; Liang, P. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv 2019, arXiv:1911.08731.
- Dong, W.; Du, B.; Xu, Y. Source domain prior-assisted segment anything model for single domain generalization in medical image segmentation. Image Vis. Comput. 2024, 150, 105216.
- Li, X.; Yin, X.; Li, C.; Zhang, P.; Hu, X.; Zhang, L.; Wang, L.; Hu, H.; Dong, L.; Wei, F. Oscar: Object-semantics aligned pre-training for vision-language tasks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020, Proceedings, Part XXX; Springer: Cham, Switzerland, 2020; pp. 121–137.
- Chen, Y.-C.; Li, L.; Yu, L.; El Kholy, A.; Ahmed, F.; Gan, Z.; Cheng, Y.; Liu, J. UNITER: Universal image-text representation learning. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 104–120.
- Wang, Z.; Yu, J.; Yu, A.W.; Dai, Z.; Tsvetkov, Y.; Cao, Y. SimVLM: Simple visual language model pretraining with weak supervision. arXiv 2021, arXiv:2108.10904.
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, Vienna, Austria, 18–24 June 2021; PMLR: Cambridge, MA, USA, 2021; pp. 8748–8763.
- Zhang, Y.; Wang, J.; Tang, H.; Qin, R. DALSCLIP: Domain aggregation via learning stronger domain-invariant features for CLIP. Image Vis. Comput. 2024, 154, 105359.
- Yu, J.; Wang, Z.; Vasudevan, V.; Yeung, L.; Seyedhosseini, M.; Wu, Y. CoCa: Contrastive captioners are image-text foundation models. arXiv 2022, arXiv:2205.01917.
- Cheng, Q.; Zhou, Y.; Fu, P.; Xu, Y.; Zhang, L. A deep semantic alignment network for the cross-modal image-text retrieval in remote sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 4284–4297.
- Yuan, Z.; Zhang, W.; Tian, C.; Rong, X.; Zhang, Z.; Wang, H.; Fu, K.; Sun, X. Remote sensing cross-modal text-image retrieval based on global and local information. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16.
- Xu, Y.; Yu, W.; Ghamisi, P.; Kopp, M.; Hochreiter, S. Txt2Img-MHN: Remote sensing image generation from text using modern Hopfield networks. IEEE Trans. Image Process. 2023, 32, 5737–5750.
- Zhao, R.; Shi, Z. Text-to-remote-sensing-image generation with structured generative adversarial networks. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
- Li, Y.; Zhu, Z.; Yu, J.-G.; Zhang, Y. Learning deep cross-modal embedding networks for zero-shot remote sensing image scene classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10590–10603.
- Sumbul, G.; Cinbis, R.G.; Aksoy, S. Fine-grained object recognition and zero-shot learning in remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2017, 56, 770–779.
- Cheng, G.; Yan, B.; Shi, P.; Li, K.; Yao, X.; Guo, L.; Han, J. Prototype-CNN for few-shot object detection in remote sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–10.
- Huang, X.; He, B.; Tong, M.; Wang, D.; He, C. Few-shot object detection on remote sensing images via shared attention module and balanced fine-tuning strategy. Remote Sens. 2021, 13, 3816.
- Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; van Kasteren, T.; Liao, W.; Bellens, R.; Pižurica, A.; Gautama, S. Hyperspectral and LiDAR data fusion: Outcome of the 2013 GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2405–2418.
- Le Saux, B.; Yokoya, N.; Hänsch, R.; Prasad, S. 2018 IEEE GRSS data fusion contest: Multimodal land use classification [technical committees]. IEEE Geosci. Remote Sens. Mag. 2018, 6, 52–54.
- Licciardi, G.; Pacifici, F.; Tuia, D.; Prasad, S.; West, T.; Giacco, F.; Thiel, C.; Inglada, J.; Christophe, E.; Chanussot, J. Decision fusion for the classification of hyperspectral data: Outcome of the 2008 GRS-S data fusion contest. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3857–3865.
- Ye, M.; Qian, Y.; Zhou, J.; Tang, Y.Y. Dictionary learning-based feature-level domain adaptation for cross-scene hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1544–1562.
- Zhang, Y.; Li, W.; Sun, W.; Tao, R.; Du, Q. Single-source domain expansion network for cross-scene hyperspectral image classification. IEEE Trans. Image Process. 2023, 32, 1498–1512.
- Dong, L.; Geng, J.; Jiang, W. Spectral-spatial enhancement and causal constraint for hyperspectral image cross-scene classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5507013.
- Zhao, H.; Zhang, J.; Lin, L.; Wang, J.; Gao, S.; Zhang, Z. Locally linear unbiased randomization network for cross-scene hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5526512.
- Qin, B.; Feng, S.; Zhao, C.; Xi, B.; Li, W.; Tao, R. FDGNet: Frequency disentanglement and data geometry for domain generalization in cross-scene hyperspectral image classification. IEEE Trans. Neural Netw. Learn. Syst. 2024, 36, 10297–10310.
- Parascandolo, G.; Neitz, A.; Orvieto, A.; Gresele, L.; Schölkopf, B. Learning explanations that are hard to vary. arXiv 2020, arXiv:2009.00329.
| ID | Name | Samples (Source: Houston 13) | Samples (Target: Houston 18) |
|---|---|---|---|
| 1 | Grass healthy | 345 | 1353 |
| 2 | Grass stressed | 365 | 4888 |
| 3 | Trees | 365 | 2766 |
| 4 | Water | 285 | 22 |
| 5 | Residential buildings | 319 | 5347 |
| 6 | Non-residential buildings | 408 | 32,459 |
| 7 | Road | 443 | 6365 |
| | Total | 2530 | 53,200 |
| ID | Name | Samples (Source: UP) | Samples (Target: PC) |
|---|---|---|---|
| 1 | Tree | 3064 | 7598 |
| 2 | Asphalt | 6631 | 9248 |
| 3 | Brick | 3682 | 2685 |
| 4 | Bitumen | 1330 | 7287 |
| 5 | Shadow | 3947 | 2863 |
| 6 | Meadow | 18,649 | 3090 |
| 7 | Bare soil | 5029 | 6584 |
| | Total | 39,332 | 39,335 |
| ID | Name | Samples (Source: IndianaSD) | Samples (Target: IndianaTD) |
|---|---|---|---|
| 1 | Concrete/Asphalt | 4867 | 2942 |
| 2 | Corn-CleanTill | 9822 | 6029 |
| 3 | Corn-CleanTill-EW | 11,414 | 7999 |
| 4 | Orchard | 5106 | 1562 |
| 5 | Soybeans-CleanTill | 4731 | 4792 |
| 6 | Soybeans-CleanTill-EW | 2996 | 1638 |
| 7 | Wheat | 3223 | 10,739 |
| | Total | 42,159 | 35,701 |
| Class | Name | Prior Knowledge |
|---|---|---|
| 1 | Grass healthy | The grass healthy is lush |
| 2 | Grass stressed | The grass stressed by the road appears pale |
| 3 | Trees | The trees grow steadily along the road |
| 4 | Water | Water appears smooth with a dark blue or black color |
| 5 | Residential buildings | Residential buildings arranged neatly |
| 6 | Non-residential buildings | Non-residential buildings vary in shape |
| 7 | Road | Roads divide buildings into blocks |
| Class | Name | Prior Knowledge |
|---|---|---|
| 1 | Tree | The trees grow steadily along the road |
| 2 | Asphalt | Asphalt is used to pave roads |
| 3 | Brick | A brick is a type of construction material |
| 4 | Bitumen | Bitumen is a material for building surfaces |
| 5 | Shadow | Shadows will appear on the backlight of the building |
| 6 | Meadow | Meadow is a land covered with grass |
| 7 | Bare soil | No vegetation on the surface of bare soil |
| Class | Name | Prior Knowledge |
|---|---|---|
| 1 | Concrete/Asphalt | No crops on the surfaces of Concrete or Asphalt |
| 2 | Corn-CleanTill | Corn-CleanTill planted with corn |
| 3 | Corn-CleanTill-EW | Corn-CleanTill-EW planted with early maturing maize |
| 4 | Orchard | The orchard is full of fruit trees |
| 5 | Soybeans-CleanTill | Soybeans-CleanTill planted with soybeans |
| 6 | Soybeans-CleanTill-EW | Soybeans-CleanTill-EW grows early soybeans |
| 7 | Wheat | Wheat is an important food crop |
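For illustration, the class-level prior-knowledge sentences above can be encoded into per-class text features with a pretrained vision-language text encoder. The Hugging Face CLIP checkpoint below is an assumption for demonstration only, not necessarily the text encoder used by SCPNet; the Houston prompts are shown.

```python
import torch
from transformers import CLIPModel, CLIPTokenizer  # assumed pretrained text encoder

# Houston prior-knowledge prompts, one per class.
prompts = [
    "The grass healthy is lush",
    "The grass stressed by the road appears pale",
    "The trees grow steadily along the road",
    "Water appears smooth with a dark blue or black color",
    "Residential buildings arranged neatly",
    "Non-residential buildings vary in shape",
    "Roads divide buildings into blocks",
]

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()

with torch.no_grad():
    tokens = tokenizer(prompts, padding=True, return_tensors="pt")
    text_feats = model.get_text_features(**tokens)               # (7, 512) class prototypes
    text_feats = text_feats / text_feats.norm(dim=1, keepdim=True)
```

The normalized text features then serve as fixed class prototypes against which image features are aligned in the semantic space.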
| SSEM | MSEM | SS | OSS | Houston OA | Houston KC | Pavia OA | Pavia KC | Indiana OA | Indiana KC |
|---|---|---|---|---|---|---|---|---|---|
| ✓ | ✓ | 75.35 | 63.05 | 74.86 | 69.81 | 55.27 | 41.94 | ||
| ✓ | ✓ | 78.32 | 63.05 | 72.83 | 66.93 | 54.99 | 41.28 | ||
| ✓ | ✓ | 74.72 | 51.87 | 71.68 | 66.42 | 55.20 | 41.41 | ||
| ✓ | ✓ | ✓ | 79.81 | 63.21 | 78.58 | 74.33 | 55.37 | 41.67 | |
| ✓ | ✓ | ✓ | 82.29 | 68.62 | 84.53 | 81.37 | 56.56 | 44.81 | |
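The ablation and comparison tables report overall accuracy (OA) and the kappa coefficient (KC). A minimal sketch of how these two metrics are computed from predicted and reference labels is given below; the function name is illustrative.

```python
import numpy as np

def overall_accuracy_and_kappa(y_true, y_pred):
    """Compute OA and Cohen's kappa (KC) from label arrays."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    idx = {c: i for i, c in enumerate(classes)}
    n = len(y_true)

    # build the confusion matrix (rows: reference, columns: prediction)
    cm = np.zeros((len(classes), len(classes)), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1

    oa = np.trace(cm) / n                                    # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / (n * n)   # expected chance agreement
    kappa = (oa - pe) / (1 - pe)                             # kappa coefficient
    return oa, kappa
```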
| Class | GroupDRO | ANDMask | VREx | DIFEX | SDEnet | S2ECnet | LLURnet | FDGnet | Ours |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 30.36 ± 7.61 | 35.02 ± 7.54 | 33.75 ± 7.85 | 23.62 ± 6.12 | 62.94 ± 13.10 | 80.34 ± 15.84 | 33.06 ± 10.77 | 58.02 ± 10.98 | 46.27 ± 12.07 |
| 2 | 71.18 ± 1.65 | 70.76 ± 2.95 | 69.87 ± 1.74 | 68.28 ± 5.27 | 78.13 ± 5.64 | 65.09 ± 8.56 | 69.01 ± 5.72 | 77.96 ± 4.38 | 67.96 ± 5.87 |
| 3 | 63.98 ± 1.82 | 64.16 ± 4.38 | 64.30 ± 3.74 | 66.41 ± 4.59 | 57.91 ± 12.53 | 46.71 ± 3.50 | 64.90 ± 4.03 | 67.14 ± 3.98 | 68.51 ± 8.10 |
| 4 | 81.82 ± 5.14 | 81.82 ± 2.30 | 78.18 ± 6.25 | 91.82 ± 8.10 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 5 | 50.73 ± 6.54 | 53.30 ± 7.20 | 55.58 ± 3.94 | 54.34 ± 7.60 | 62.02 ± 3.88 | 73.62 ± 5.34 | 67.80 ± 4.01 | 70.00 ± 10.82 | 74.10 ± 3.90 |
| 6 | 80.92 ± 6.87 | 78.11 ± 6.46 | 78.06 ± 4.84 | 81.87 ± 4.32 | 84.54 ± 3.15 | 83.42 ± 1.86 | 74.25 ± 26.44 | 84.28 ± 1.36 | 94.53 ± 2.33 |
| 7 | 59.83 ± 6.82 | 65.46 ± 4.38 | 62.86 ± 6.22 | 60.13 ± 6.74 | 50.05 ± 1.97 | 47.02 ± 4.26 | 46.63 ± 3.73 | 57.73 ± 4.59 | 51.33 ± 6.29 |
| OA | 72.30 ± 2.67 | 71.61 ± 3.05 | 71.39 ± 1.95 | 72.97 ± 1.98 | 75.63 ± 2.07 | 75.61 ± 1.30 | 78.07 ± 0.63 | 77.54 ± 0.37 | 82.29 ± 1.01 |
| KC | 55.33 ± 2.28 | 55.34 ± 3.07 | 54.65 ± 1.79 | 55.89 ± 2.22 | 59.59 ± 2.51 | 59.96 ± 1.70 | 61.46 ± 1.65 | 62.70 ± 1.16 | 68.62 ± 2.17 |
| Class | GroupDRO | ANDMask | VREx | DIFEX | SDEnet | S2ECnet | LLURnet | FDGnet | Ours |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 79.55 ± 13.27 | 81.85 ± 4.96 | 81.51 ± 14.02 | 82.53 ± 7.87 | 92.79 ± 2.14 | 95.36 ± 2.07 | 83.29 ± 5.39 | 78.24 ± 8.13 | 9.44 ± 1.46 |
| 2 | 82.50 ± 3.93 | 76.45 ± 3.12 | 80.82 ± 2.56 | 81.55 ± 5.80 | 84.73 ± 2.20 | 81.68 ± 6.62 | 84.48 ± 1.19 | 76.92 ± 4.71 | 87.73 ± 2.95 |
| 3 | 19.11 ± 11.83 | 23.07 ± 13.41 | 21.42 ± 10.04 | 25.20 ± 21.68 | 78.39 ± 6.49 | 58.89 ± 25.99 | 74.67 ± 4.26 | 68.56 ± 11.45 | 71.23 ± 6.54 |
| 4 | 73.18 ± 12.26 | 68.40 ± 16.49 | 78.13 ± 7.89 | 74.71 ± 9.97 | 81.05 ± 1.82 | 78.44 ± 6.68 | 84.96 ± 1.13 | 86.20 ± 1.09 | 82.76 ± 3.04 |
| 5 | 80.00 ± 3.08 | 80.44 ± 7.73 | 81.15 ± 4.86 | 90.78 ± 4.28 | 83.89 ± 5.66 | 84.48 ± 4.87 | 86.16 ± 3.93 | 88.84 ± 1.40 | 88.95 ± 3.12 |
| 6 | 77.86 ± 4.90 | 80.82 ± 5.65 | 76.30 ± 5.31 | 76.99 ± 3.79 | 71.42 ± 6.44 | 61.49 ± 7.38 | 75.91 ± 4.88 | 75.26 ± 3.93 | 81.49 ± 4.15 |
| 7 | 74.65 ± 10.06 | 82.76 ± 7.08 | 70.79 ± 8.22 | 69.85 ± 16.45 | 71.21 ± 3.89 | 79.58 ± 6.61 | 87.62 ± 3.75 | 81.04 ± 8.43 | 81.28 ± 7.93 |
| OA | 74.02 ± 3.08 | 74.04 ± 2.22 | 74.39 ± 3.13 | 74.98 ± 3.46 | 81.89 ± 0.87 | 80.58 ± 0.30 | 83.67 ± 0.38 | 80.09 ± 1.11 | 84.53 ± 0.94 |
| KC | 68.64 ± 3.53 | 68.78 ± 2.54 | 69.17 ± 3.58 | 69.98 ± 4.02 | 78.27 ± 1.02 | 75.88 ± 1.43 | 80.41 ± 0.46 | 76.20 ± 1.29 | 81.37 ± 1.19 |
| Class | GroupDRO | ANDMask | VREx | DIFEX | SDEnet | S2ECnet | LLURnet | FDGnet | Ours |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 10.29 ± 6.14 | 11.10 ± 7.54 | 8.45 ± 6.90 | 14.73 ± 15.02 | 0.01 ± 0.02 | 0.63 ± 0.64 | 0.24 ± 0.39 | 0.31 ± 0.48 | 1.43 ± 1.64 |
| 2 | 29.24 ± 18.59 | 8.47 ± 12.09 | 4.13 ± 1.12 | 9.51 ± 14.57 | 0.82 ± 0.79 | 2.39 ± 2.52 | 1.95 ± 2.17 | 1.78 ± 1.19 | 46.39 ± 6.75 |
| 3 | 68.82 ± 16.44 | 85.06 ± 9.66 | 92.21 ± 4.65 | 86.05 ± 11.96 | 94.52 ± 1.82 | 92.80 ± 3.84 | 92.93 ± 3.95 | 90.79 ± 1.52 | 44.17 ± 5.46 |
| 4 | 77.89 ± 4.60 | 70.82 ± 8.46 | 66.91 ± 18.70 | 77.55 ± 12.18 | 83.18 ± 2.93 | 86.03 ± 2.78 | 86.89 ± 3.94 | 88.98 ± 2.34 | 66.07 ± 3.07 |
| 5 | 0.66 ± 0.96 | 0.10 ± 0.16 | 0.02 ± 0.02 | 0.01 ± 0.11 | 0.01 ± 0.02 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 44.03 ± 2.04 |
| 6 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.09 ± 0.14 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| 7 | 96.16 ± 1.03 | 95.40 ± 1.17 | 96.34 ± 1.44 | 94.83 ± 1.89 | 95.96 ± 0.54 | 94.81 ± 1.72 | 94.77 ± 1.99 | 94.96 ± 1.62 | 99.42 ± 0.36 |
| OA | 53.65 ± 0.68 | 53.21 ± 1.24 | 53.96 ± 0.77 | 54.48 ± 1.00 | 54.00 ± 0.26 | 53.53 ± 0.49 | 53.46 ± 0.43 | 54.97 ± 0.41 | 56.56 ± 0.46 |
| KC | 40.30 ± 0.57 | 39.32 ± 1.39 | 39.86 ± 1.16 | 41.05 ± 1.44 | 40.13 ± 0.58 | 39.55 ± 0.58 | 39.52 ± 0.55 | 39.03 ± 0.39 | 44.81 ± 0.60 |
| Dataset | Metric | GroupDRO | ANDMask | VREx | DIFEX | SDEnet | S2ECnet | LLURnet | FDGnet | Ours |
|---|---|---|---|---|---|---|---|---|---|---|
| Houston | Train (s) | 7.08 | 10.94 | 7.59 | 957.21 | 16.75 | 174.87 | 12.15 | 22.09 | 40.27 |
| | Test (s) | 7.19 | 7.16 | 6.87 | 7.14 | 15.16 | 50.31 | 10.14 | 19.99 | 44.75 |
| | Params (M) | 10.80 | 10.80 | 10.80 | 21.78 | 1.79 | 1.51 | 0.55 | 3.14 | 38.88 |
| Pavia | Train (s) | 9.27 | 13.97 | 9.37 | 1068.02 | 49.83 | 356.76 | 31.32 | 53.19 | 139.22 |
| | Test (s) | 8.13 | 7.67 | 7.92 | 11.05 | 22.62 | 185.69 | 19.65 | 24.15 | 53.45 |
| | Params (M) | 10.96 | 10.96 | 10.96 | 22.10 | 2.41 | 2.16 | 0.57 | 4.04 | 47.87 |
| Indiana | Train (s) | 10.92 | 15.36 | 11.12 | 1172.47 | 62.56 | 426.12 | 51.46 | 75.30 | 221.39 |
| | Test (s) | 9.91 | 9.48 | 9.85 | 8.81 | 37.08 | 267.67 | 27.76 | 44.63 | 92.31 |
| | Params (M) | 11.31 | 11.31 | 11.31 | 22.81 | 3.75 | 3.58 | 0.63 | 6.66 | 56.25 |