A Dive into Generative Adversarial Networks in the World of Hyperspectral Imaging: A Survey of the State of the Art
Highlights
- GANs enhance HSI analysis: Integrating GANs into HSI improves data interpretation, which increases the practicality and usefulness of HSI across various industries.
- Identification of research gaps: The survey highlights critical gaps in hyperparameter tuning, architectural efficiency, and domain-specific applications of GANs in HSI. This provides researchers with clear directions for improvement.
- Broader industry adoption: With improved analysis powered by GANs, HSI can be applied more effectively in real-world domains such as agriculture, healthcare, and environmental monitoring.
- Future research roadmap: By exposing gaps in GAN-HSI integration, the study acts as a foundation for future innovation, guiding researchers toward developing more efficient, robust, and domain-adapted models.
Abstract
1. Principles of Hyperspectral Imaging
1.1. Challenges with HSI Processing
- Sensitivity to Environmental Variables: HSI systems are highly sensitive to environmental variables, which can lead to signal degradation and compromise data reliability.
- Scarcity of Labeled Training Data: HSI processing heavily relies on deep learning models due to its high-dimensional nature, but the acute shortage of labeled data limits the performance of the models [15,16,17,18,19,20,21]. The costly and time-consuming annotation process [9,11,22] further exacerbates this challenge across various domains.
1.2. Evolution of Deep Learning Solutions, Especially GANs, for HSI Processing
1.3. The Growing Intersection of GANs and HSI
1.4. Key Contributions of Our Survey
- A comprehensive and structured taxonomy of GAN architectures for HSI is presented, covering classical, conditional, attention-based, transformer-integrated, and hybrid GAN models. This taxonomy, detailed in Section 2 and Section 3, clarifies architectural evolution and highlights design choices specific to spectral–spatial data.
- An extensive survey of GAN-based HSI applications across diverse domains is conducted, including remote sensing, agriculture, environmental monitoring, medical imaging, food safety, and industrial inspection. Section 3 systematically analyzes how GANs address task-specific challenges such as limited labeled data, noise, spectral distortion, and resolution enhancement.
- A critical assessment of current limitations, open research gaps, and practical considerations is provided, including computational inefficiency, spectral–spatial trade-offs, data scarcity, domain generalization, real-time deployment constraints, evaluation metrics, benchmark datasets, ethical considerations, and model interpretability. These issues are synthesized through comparative analysis and summarized in Section 4, Section 5 and Section 6.
- Future research directions for GAN-enabled HSI are outlined in Section 4, with particular emphasis on transformer-based GANs, foundation models, multimodal learning, and scalable deployment, providing a forward-looking roadmap for the field.
1.5. Structure of Our Survey
2. Existing Surveys
Quantitative Benchmarking of Survey Coverage and Novel Contributions
3. The Convergence of Hyperspectral Imaging and GANs: Domain-Specific Applications
3.1. How HSI Can Benefit Across Various Domains
- Medicine: In tissue oxygen saturation mapping, GAN-based frameworks effectively manage the inherent spectral–spatial trade-off in hyperspectral medical data. Chang et al. [37] demonstrated that adversarial learning preserves the fine spectral absorption characteristics critical for oxygenation estimation while simultaneously enhancing spatial continuity through generator–discriminator interaction. This joint optimization mitigates the spectral distortion commonly introduced by purely spatial enhancement techniques. Similarly, in microscopy super-resolution, Liu et al. [38] employed a dual-discriminator GAN architecture to decouple spatial resolution enhancement from spectral fidelity preservation: one discriminator enforces high-frequency spatial realism, while the second constrains spectral consistency, ensuring that super-resolved outputs retain biologically meaningful spectral signatures. This design addresses the common challenge of spectral degradation during aggressive spatial upsampling in medical HSI.
- Agriculture: In agricultural applications, HSI combined with GANs supports crop and weed discrimination, resource optimization, and chemical residue analysis. Tan et al. [39] leveraged HSI for predicting pesticide residues, while Zhang et al. [40] employed attention-guided GANs to augment corn hyperspectral datasets, improving model robustness under limited labeled samples and varying growth conditions.
- Marine and Aquatic Research: In marine and aquatic environments, GAN-assisted HSI facilitates the synthesis and enhancement of vegetation and water-quality data. Hennessy et al. [65] demonstrated the use of GAN-generated hyperspectral vegetation data to support aquatic ecosystem analysis and resource management, highlighting the potential of HSI–GAN integration for studying underwater and coastal environments.
- Food Safety: HSI plays a vital role in food safety by enabling non-destructive assessment of product quality, composition, and freshness. Qi et al. [42] utilized HSI to analyze rice seed viability, while Cui et al. [43] applied GAN-based regression models to predict soluble sugar content in cherry tomatoes, illustrating how GANs enhance spectral sensitivity and prediction accuracy in food quality evaluation.
- Forestry and Vegetation Management: In forestry and vegetation monitoring, GAN-based HSI methods improve detection of defects, stress, and structural anomalies. Li et al. [44] employed GANs for identifying unsound kernels, while HSI more broadly supports large-scale vegetation assessment, forest health monitoring, and sustainable resource management.
- Surveillance and Security: In surveillance and security-related remote sensing, GANs enhance hyperspectral change detection, scene classification, and anomaly identification. Xie et al. [45] introduced federated GANs for hyperspectral change detection, enabling decentralized analysis, while Mahmoudi and Ahmadyfard [88] applied GANs for scene classification, supporting reconnaissance and monitoring of sensitive or restricted areas.
- Geological and Industrial Applications: In geological exploration, HSI–GAN methods improve material discrimination and subsurface mapping. Liu et al. [47] employed physics-informed GAN-based synthesis for geological mapping, while industrial inspection has benefited from precise spectral analysis, as demonstrated by Wang et al. [48] using single-pixel infrared HSI for quality control and defect detection.
- Environmental Monitoring and Climate Research: GAN-enhanced HSI contributes to environmental monitoring by improving biomass estimation, wetland mapping, and ecosystem analysis. Chen et al. [49] expanded wetland vegetation biomass datasets using HSI, supporting climate research and long-term environmental assessment under diverse atmospheric conditions.
- Cultural Heritage and Historical Research: In cultural heritage preservation, HSI–GAN frameworks enable non-invasive analysis and reconstruction of artifacts. Hauser et al. [50] demonstrated the use of synthetic hyperspectral data for artifact analysis, highlighting the role of GANs in enhancing spectral detail while preserving historical integrity.
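Several of the applications above pair the adversarial objective with an explicit spectral-fidelity term so that spatial enhancement does not distort per-pixel spectra. A common choice is the spectral angle between generated and reference spectra. The sketch below is a minimal NumPy illustration of that idea, not an implementation from any cited work; the function names are ours.

```python
import numpy as np

def spectral_angle(ref, gen, eps=1e-12):
    """Per-pixel spectral angle (radians) between two HSI cubes.

    ref, gen: arrays of shape (H, W, bands). The angle is ~0 when a
    generated spectrum is a positive scalar multiple of the reference,
    so it penalizes shape distortion rather than brightness changes.
    """
    dot = np.sum(ref * gen, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(gen, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.arccos(cos)

def sam_loss(ref, gen):
    """Mean spectral angle, usable as an auxiliary generator loss."""
    return float(spectral_angle(ref, gen).mean())

# A brightness-scaled copy has (near-)zero angle; a random cube does not.
cube = np.random.rand(8, 8, 31)
assert sam_loss(cube, 2.0 * cube) < 1e-6
print(sam_loss(cube, np.random.rand(8, 8, 31)))
```

In a dual-discriminator setup such as the microscopy super-resolution design above, a term of this kind can be weighted against the spatial adversarial loss, mirroring how the second discriminator constrains spectral consistency.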
3.2. How GANs Improve HSI with a Range of Use Cases
- Anomaly Detection: GANs are highly effective in detecting anomalies in HSI. Shanmugam and Amali (2024) [52] developed a dual-discriminator conditional GAN optimized with a hybrid metaheuristic (manta ray foraging combined with a volcano eruption algorithm) for anomaly detection. Xie et al. (2024) [45] introduced decentralized federated GANs, which facilitated anomaly detection in distributed edge computing environments.
- Spectral Unmixing: GANs have advanced spectral unmixing by enabling accurate pixel-to-abundance translations. Wang et al. (2024) [46] proposed a conditional GAN integrated with patch transformers [53,54,55] to enhance unmixing accuracy. Sun et al. (2024) [56] tackled shadowing challenges in unmixing by developing a generative autoencoder.
- Superpixel and Super-Resolution Applications: In superpixel-based HSI processing, GANs ensure spatial consistency and improve the fusion of spectral–spatial data. Zhu et al. (2023) [15] demonstrated their effectiveness in enhancing classification and segmentation through superpixel representation. GANs also play a critical role in noise reduction and super-resolution tasks: Zhang et al. (2021) [57] showcased a degradation learning approach for unsupervised hyperspectral image super-resolution, reconstructing high-quality imagery while preserving spectral details.
- Data Augmentation and Unmixing: GANs excel at expanding limited hyperspectral datasets, addressing scarcity challenges. Zhang et al. (2023) [58] proposed a features-kept GAN strategy to enhance HSI classification, while Tan et al. (2024) [24] utilized an improved DCGAN for pesticide residue identification. Dong et al. (2021) [11] leveraged GANs for both spectral fidelity and spatial enhancement. Furthermore, Sun et al. (2024) [56] applied generative autoencoders for shadow removal and abundance estimation.
- Domain Adaptation: GANs facilitate seamless translation between different HSI platforms, standardizing spectral representations across diverse sensor technologies. Zhao et al. (2024) [59] developed HSGAN to reconstruct hyperspectral data from RGB images, enabling cross-platform spectral compatibility.
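The unmixing use cases above rest on the linear mixing model [1], in which each pixel spectrum is a nonnegative, sum-to-one combination of endmember spectra. A generator that emits abundances can satisfy both constraints by construction via a softmax output layer. The following NumPy sketch illustrates this; it is a schematic under those assumptions, and the names are ours rather than from any cited architecture.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mix(endmembers, abundances):
    """Linear mixing model: each pixel = abundances @ endmembers.

    endmembers: (n_endmembers, bands) pure-material spectra.
    abundances: (n_pixels, n_endmembers) per-pixel fractions.
    """
    return abundances @ endmembers

rng = np.random.default_rng(0)
endmembers = rng.random((3, 31))      # 3 materials, 31 spectral bands
logits = rng.normal(size=(100, 3))    # raw generator outputs
abundances = softmax(logits)          # valid abundances by construction

# Physical constraints hold for every pixel:
assert np.all(abundances >= 0)
assert np.allclose(abundances.sum(axis=1), 1.0)

pixels = mix(endmembers, abundances)  # (100, 31) mixed spectra
print(pixels.shape)
```

In the adversarial setting, a discriminator then judges whether the reconstructed pixels (or abundance maps) are plausible, which is the pixel-to-abundance translation framing described above.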
4. Evolutionary Trajectory of Generative Adversarial Networks in Hyperspectral Imaging: A Chronological Analysis
4.1. Unique Integration Mechanisms of GANs with Hyperspectral Data
4.1.1. Joint Spectral–Spatial Representation Learning
4.1.2. Spectral Variation Compensation
4.1.3. Modelling and Mitigating Spectral Mixing
4.1.4. Stable Training for High-Dimensional Spectral Data
4.1.5. Mechanism–Gap Alignment
4.2. Evolution of GAN-Based HSI Classification
4.3. HSI Object and Anomaly Detection Based on GAN
4.4. HSI Super-Resolution (SR) Models Based on GAN
4.5. HSI Fusion Models Based on GAN
4.6. HSI Pan Sharpening Models Based on GAN
4.7. HSI Unmixing Models Based on GANs
4.8. HSI Domain Adaptation Models Based on GANs
5. Taxonomy of GANs in HSI and Hyperparameter Tuning
5.1. Tier 1—Foundational GAN Architectures
5.2. Tier 2—HSI-Adapted Architecture Extensions (Mapped to the Loss Branch)
- Adversarial Loss → used universally across all architectures.
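The universally shared objective in this loss branch is the standard minimax game introduced by Goodfellow et al., stated here for reference with G the generator, D the discriminator, x real hyperspectral samples, and z latent noise:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
```

The HSI-adapted extensions in this tier then add task-specific terms (e.g., spectral-fidelity, reconstruction, or classification losses) on top of this base objective.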
5.3. Computational Characteristics and Resource Considerations
5.4. Advantages, Limitations, and Recommended Use Cases of GAN-Based HSI Models
5.4.1. Multiscale and Fusion-Oriented Models
5.4.2. Federated and Distributed Learning Models
5.4.3. Transformer-Enhanced Spectral Models
5.4.4. Physics-Guided and Fidelity-Constrained Models
5.4.5. Classification-Oriented Semi-Supervised Models
5.4.6. Dual-Discriminator and High-Fidelity Reconstruction Models
6. Research Directions for Future Work
6.1. Computational Efficiency Optimization
6.2. Optimization Strategies and Hyperparameter Sensitivity
6.3. Expanding Object/Anomaly Detection Capabilities
6.4. Enhancing Image Resolution and Reconstruction
6.5. Multimodal and Cross-Modal Fusion
6.6. Foundation Models and Large-Scale Representation Learning
6.7. Domain Adaptation for Cross-Environment Applications
6.8. Application-Specific Innovations
6.9. Ethical, Explainable, and Trustworthy AI in HSI
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| HSI | Hyperspectral Imaging |
| GAN | Generative Adversarial Network |
| DL | Deep Learning |
| ML | Machine Learning |
Appendix A
| Dataset | Sensor/Platform | Bands | Wavelength Range | Spatial Resolution | Scene Type | Typical GAN-HSI Tasks |
|---|---|---|---|---|---|---|
| Indian Pines [17] | AVIRIS | 200 | 0.4–2.5 µm | 20 m | Agricultural fields | Classification, anomaly detection, domain adaptation |
| Salinas [17] | AVIRIS | 204 | 0.4–2.5 µm | 3.7 m | Agriculture (high-resolution) | Classification, super-resolution, reconstruction |
| Pavia University [17] | ROSIS | 103 | 0.43–0.86 µm | 1.3 m | Urban campus | Classification, denoising, SR |
| Pavia Center [17] | ROSIS | 102 | 0.43–0.86 µm | 1.3 m | Urban area | Classification, domain adaptation |
| Houston 2013 [17] | CASI | 144 | 0.38–1.05 µm | 2.5 m | Urban (multi-class) | Classification, domain adaptation, change detection |
| Botswana [17] | Hyperion | 242 | 0.4–2.5 µm | 30 m | Wetlands/Savanna | Classification, anomaly detection |
| Kennedy Space Center (KSC) [17] | AVIRIS | 176 | 0.4–2.5 µm | 18 m | Coastal/Vegetation | Classification, spectral unmixing |
| CAVE [132] | CAVE Multispectral Camera | 31 | 0.4–0.7 µm | ~1 mm/pixel | Indoor scenes/objects | Super-resolution, spectral reconstruction, GAN training |
| Harvard Scene Dataset [133] | Custom HS Camera | 31 | 0.42–0.72 µm | Varies | Indoor objects, natural scenes | Super-resolution, spectral reconstruction |
| Chikusei [134] | Headwall Hyperspec | 128 | 0.36–1.0 µm | 2.5 m | Agricultural/residential | Classification, SR, fusion |
| UAV-HSI [135] | Headwall, Cubert sensors | 50–270 | Visible–NIR | 0.03–0.3 m | Precision agriculture, forestry | Super-resolution, anomaly detection, lightweight GAN evaluation |
| HyRANK Benchmark [130] | ROSIS/Hyperion | 200–242 | Visible–SWIR | 30 m | Mixed land-cover | Domain adaptation, classification |
| PU-Net Multispectral [136] | Multiple UAV sensors | 5–13 | Visible–NIR | cm-level | Agriculture/forestry | GAN fusion, SR, data augmentation |
| Medical HSI (Skin/Tissue) [137] | Various biomedical HSI cameras | 31–200 | Visible–NIR | Sub-mm | Biomedical imaging | Spectral reconstruction, anomaly detection, generative modeling |
References
- Keshava, N.; Mustard, J.F. Spectral unmixing. IEEE Signal Process. Mag. 2002, 19, 44–57. [Google Scholar] [CrossRef]
- Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063. [Google Scholar] [CrossRef]
- Yu, Z.; Cui, W. Robust hyperspectral image classification using generative adversarial networks. Inf. Sci. 2024, 666, 120452. [Google Scholar] [CrossRef]
- Feng, J.; Yu, H.; Wang, L.; Cao, X.; Zhang, X.; Jiao, L. Classification of hyperspectral images based on multiclass spatial–spectral generative adversarial networks. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5329–5343. [Google Scholar] [CrossRef]
- Liu, W.; You, J.; Lee, J. HSIGAN: A conditional hyperspectral image synthesis method with auxiliary classifier. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3330–3344. [Google Scholar] [CrossRef]
- Chen, C.; Wang, Y.; Zhang, N.; Zhang, Y.; Zhao, Z. A review of hyperspectral image super-resolution based on deep learning. Remote Sens. 2023, 15, 2853. [Google Scholar] [CrossRef]
- Feng, H.; Wang, Y.; Li, Z.; Zhang, N.; Zhang, Y.; Gao, Y. Information leakage in deep learning-based hyperspectral image classification: A survey. Remote Sens. 2023, 15, 3793. [Google Scholar] [CrossRef]
- Lu, Y.; Chen, D.; Olaniyi, E.; Huang, Y. Generative adversarial networks (GANs) for image augmentation in agriculture: A systematic review. Comput. Electron. Agric. 2022, 200, 107208. [Google Scholar] [CrossRef]
- Zhang, S.; Xu, M.; Zhou, J.; Jia, S. Unsupervised spatial-spectral cnn-based feature learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5524617. [Google Scholar] [CrossRef]
- Moharram, M.A.; Sundaram, D.M. Land use and land cover classification with hyperspectral data: A comprehensive review of methods, challenges and future directions. Neurocomputing 2023, 536, 90–113. [Google Scholar] [CrossRef]
- Dong, W.; Hou, S.; Xiao, S.; Qu, J.; Du, Q.; Li, Y. Generative dual-adversarial network with spectral fidelity and spatial enhancement for hyperspectral pansharpening. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 7303–7317. [Google Scholar] [CrossRef]
- Song, D.; Tang, Y.; Wang, B.; Zhang, J.; Yang, C. Two-branch generative adversarial network with multiscale connections for hyperspectral image classification. IEEE Access 2022, 11, 7336–7347. [Google Scholar] [CrossRef]
- Liang, H.; Bao, W.; Shen, X.; Zhang, X. Spectral–spatial attention feature extraction for hyperspectral image classification based on generative adversarial network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10017–10032. [Google Scholar] [CrossRef]
- Kumar, V.; Singh, R.S.; Rambabu, M.; Dua, Y. Deep learning for hyperspectral image classification: A survey. Comput. Sci. Rev. 2024, 53, 100658. [Google Scholar] [CrossRef]
- Zhu, C.; Deng, S.; Zhou, Y.; Deng, L.J.; Wu, Q. QIS-GAN: A lightweight adversarial network with quadtree implicit sampling for multispectral and hyperspectral image fusion. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5531115. [Google Scholar] [CrossRef]
- Ranjan, P.; Girdhar, A.; Ankur; Kumar, R. A novel spectral-spatial 3D auxiliary conditional GAN integrated convolutional LSTM for hyperspectral image classification. Earth Sci. Inform. 2024, 17, 5251–5271. [Google Scholar] [CrossRef]
- Ranjan, P.; Girdhar, A. A comprehensive systematic review of deep learning methods for hyperspectral images classification. Int. J. Remote Sens. 2022, 43, 6221–6306. [Google Scholar] [CrossRef]
- Wu, Y.; Li, Z.; Zhao, B.; Song, Y.; Zhang, B. Transfer Learning of Spatial Features from High-resolution RGB Images for Large-scale and Robust Hyperspectral Remote Sensing Target Detection. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5505732. [Google Scholar] [CrossRef]
- Ranjan, P.; Kumar, R.; Girdhar, A. A 3D-convolutional-autoencoder embedded Siamese-attention-network for classification of hyperspectral images. Neural Comput. Appl. 2024, 36, 8335–8354. [Google Scholar] [CrossRef]
- Deng, Y.J.; Yang, M.L.; Li, H.C.; Long, C.F.; Fang, K.; Du, Q. Feature Dimensionality Reduction with L2,p-Norm-Based Robust Embedding Regression for Classification of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5509314. [Google Scholar] [CrossRef]
- Wang, C.; Liu, B.; Liu, L.; Zhu, Y.; Hou, J.; Liu, P.; Li, X. A review of deep learning used in the hyperspectral image analysis for agriculture. Artif. Intell. Rev. 2021, 54, 5205–5253. [Google Scholar] [CrossRef]
- Dubey, S.R.; Singh, S.K. Transformer-based generative adversarial networks in computer vision: A comprehensive survey. IEEE Trans. Artif. Intell. 2024, 5, 4851–4867. [Google Scholar] [CrossRef]
- Zhan, Y.; Wang, Y.; Yu, X. Semisupervised hyperspectral image classification based on generative adversarial networks and spectral angle distance. Sci. Rep. 2023, 13, 22019. [Google Scholar] [CrossRef] [PubMed]
- Tan, H.; Hu, Y.; Ma, B.; Yu, G.; Li, Y. An improved DCGAN model: Data augmentation of hyperspectral image for identification pesticide residues of Hami melon. Food Control 2024, 157, 110168. [Google Scholar] [CrossRef]
- Feng, J.; Zhou, Z.; Shang, R.; Wu, J.; Zhang, T.; Zhang, X.; Jiao, L. Class-aligned and class-balancing generative domain adaptation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5509617. [Google Scholar] [CrossRef]
- Ranjan, P.; Girdhar, A. Deep Siamese network with handcrafted feature extraction for hyperspectral image classification. Multimed. Tools Appl. 2024, 83, 2501–2526. [Google Scholar]
- Ranjan, P.; Girdhar, A. Xcep-Dense: A novel lightweight extreme inception model for hyperspectral image classification. Int. J. Remote Sens. 2022, 43, 5204–5230. [Google Scholar]
- Ranjan, P.; Gupta, G. A Cross-Domain Semi-Supervised Zero-Shot Learning Model for the Classification of Hyperspectral Images. J. Indian. Soc. Remote Sens. 2023, 51, 1991–2005. [Google Scholar]
- Guerri, M.F.; Distante, C.; Spagnolo, P.; Bougourzi, F.; Taleb-Ahmed, A. Deep learning techniques for hyperspectral image analysis in agriculture: A review. ISPRS Open J. Photogramm. Remote Sens. 2024, 3, 100062. [Google Scholar] [CrossRef]
- Lou, C.; Al-qaness, M.A.; AL-Alimi, D.; Dahou, A.; Abd Elaziz, M.; Abualigah, L.; Ewees, A.A. Land use/land cover (LULC) classification using hyperspectral images: A review. Geo-Spatial Inf. Sci. 2024, 28, 345–386. [Google Scholar] [CrossRef]
- Shuai, L.; Li, Z.; Chen, Z.; Luo, D.; Mu, J. A research review on deep learning combined with hyperspectral Imaging in multiscale agricultural sensing. Comput. Electron. Agric. 2024, 217, 108577. [Google Scholar] [CrossRef]
- Ur Rahman, Z.; Asaari, M.S.M.; Ibrahim, H.; Abidin, I.S.Z.; Ishak, M.K. Generative Adversarial Networks (GANs) for Image Augmentation in Farming: A Review. IEEE Access 2024, 12, 179912–179943. [Google Scholar] [CrossRef]
- Mamo, A.A.; Gebresilassie, B.G.; Mukherjee, A.; Hassija, V.; Chamola, V. Advancing Medical Imaging Through Generative Adversarial Networks: A Comprehensive Review and Future Prospects. Cognit. Comput. 2024, 16, 1–23. [Google Scholar] [CrossRef]
- Wang, X.; Sun, L.; Chehri, A.; Song, Y. A review of GAN-based super-resolution reconstruction for optical remote sensing images. Remote Sens. 2023, 15, 5062. [Google Scholar] [CrossRef]
- Patel, U.; Patel, V. A comprehensive review: Active learning for hyperspectral image classifications. Earth Sci. Inform. 2023, 16, 1975–1991. [Google Scholar] [CrossRef]
- Jozdani, S.; Chen, D.; Pouliot, D.; Johnson, B.A. A review and meta-analysis of generative adversarial networks and their applications in remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2022, 108, 102734. [Google Scholar] [CrossRef]
- Chang, M.; Lee, W.; Jeong, K.Y.; Kim, J.W.; Jung, C.H. Efficient Mapping of Tissue Oxygen Saturation using Hyperspectral Imaging and GAN. IEEE Access 2024, 12, 153822–153831. [Google Scholar] [CrossRef]
- Liu, J.; Zhang, H.; Tian, J.H.; Su, Y.; Chen, Y.; Wang, Y. R2D2-GAN: Robust Dual Discriminator Generative Adversarial Network for Microscopy Hyperspectral Image Super-Resolution. IEEE Trans. Med. Imaging 2024, 43, 4064–4074. [Google Scholar] [CrossRef]
- Tan, H.; Ma, B.; Xu, Y.; Dang, F.; Yu, G.; Bian, H. An innovative variant based on generative adversarial network (GAN): Regression GAN combined with hyperspectral imaging to predict pesticide residue content of Hami melon. Spectrochim. Acta Part. A Mol. Biomol. Spectrosc. 2025, 325, 125086. [Google Scholar] [CrossRef]
- Zhang, W.; Li, Z.; Li, G.; Zhou, L.; Zhao, W.; Pan, X. AGANet: Attention-Guided Generative Adversarial Network for Corn Hyperspectral Images Augmentation. IEEE Trans. Consum. Electron. 2024, 71, 3683–3694. [Google Scholar] [CrossRef]
- Gao, Y.; Feng, Y.; Yu, X. Hyperspectral target detection with an auxiliary generative adversarial network. Remote Sens. 2021, 13, 4454. [Google Scholar]
- Qi, H.; Huang, Z.; Jin, B.; Tang, Q.; Jia, L.; Zhao, G.; Cao, D.; Sun, Z.; Zhang, C. SAM-GAN: An improved DCGAN for rice seed viability determination using near-infrared hyperspectral imaging. Comput. Electron. Agric. 2024, 216, 108473. [Google Scholar] [CrossRef]
- Cui, J.; Zhang, Y.; Men, J.; Wu, L. Utilizing wasserstein generative adversarial networks for enhanced hyperspectral imaging: A novel approach to predict soluble sugar content in cherry tomatoes. LWT 2024, 206, 116585. [Google Scholar] [CrossRef]
- Li, H.; Zhang, L.; Sun, H.; Rao, Z.; Ji, H. Discrimination of unsound wheat kernels based on deep convolutional generative adversarial network and near-infrared hyperspectral imaging technology. Spectrochim. Acta Part. A Mol. Biomol. Spectrosc. 2022, 268, 120722. [Google Scholar]
- Xie, W.; Xu, X.; Li, Y. Decentralized Federated GAN for Hyperspectral Change Detection in Edge Computing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 8863–8874. [Google Scholar]
- Wang, L.; Zhang, X.; Zhang, J.; Dong, H.; Meng, H.; Jiao, L. Pixel-to-Abundance Translation: Conditional Generative Adversarial Networks Based on Patch Transformer for Hyperspectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5734–5749. [Google Scholar]
- Liu, L.; Li, W.; Shi, Z.; Zou, Z. Physics-informed hyperspectral remote sensing image synthesis with deep conditional generative adversarial networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5528215. [Google Scholar]
- Wang, D.Y.; Bie, S.H.; Chen, X.H.; Yu, W.K. Single-Pixel Infrared Hyperspectral Imaging via Physics-Guided Generative Adversarial Networks. Photonics 2024, 11, 174. [Google Scholar]
- Chen, C.; Ma, Y.; Ren, G.; Wang, J. Aboveground biomass of salt-marsh vegetation in coastal wetlands: Sample expansion of in situ hyperspectral and Sentinel-2 data using a generative adversarial network. Remote Sens. Environ. 2022, 270, 112885. [Google Scholar]
- Hauser, J.; Shtendel, G.; Zeligman, A.; Averbuch, A.; Nathan, M. SHS-GAN: Synthetic enhancement of a natural hyperspectral database. IEEE Trans. Comput. Imaging 2021, 7, 505–517. [Google Scholar] [CrossRef]
- Li, C.; Wang, R.; Chen, Z.; Gao, H.; Xu, S. Transformer-inspired stacked-GAN for hyperspectral target detection. Int. J. Remote Sens. 2024, 45, 4961–4982. [Google Scholar] [CrossRef]
- Shanmugam, P.; Amali, S.A.M.J. Dual-discriminator conditional generative adversarial network optimized with hybrid manta ray foraging optimization and volcano eruption algorithm for hyperspectral anomaly detection. Expert. Syst. Appl. 2024, 238, 122058. [Google Scholar]
- Yang, X.; Cao, W.; Tang, D.; Zhou, Y.; Lu, Y. ACTN: Adaptive coupling transformer network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5503115. [Google Scholar] [CrossRef]
- Tao, W.; Zhang, H.; Zeng, S.; Wang, L.; Liu, C.; Li, B. Pixel-Level and Global Similarity-Based Adversarial Autoencoder Network for Hyperspectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 7064–7082. [Google Scholar] [CrossRef]
- Hadi, F.; Farooque, G.; Shao, Y.; Yang, J.; Xiao, L. DSSFT: Dual branch spectral-spatial feature fusion transformer network for hyperspectral image unmixing. Earth Sci. Inform. 2025, 18, 352. [Google Scholar] [CrossRef]
- Sun, B.; Su, Y.; Sun, H.; Bai, J.; Li, P.; Liu, F.; Liu, D. Generative Adversarial Autoencoder Network for Anti-Shadow Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2024, 21, 5506005. [Google Scholar] [CrossRef]
- Zhang, S.; Fu, G.; Wang, H.; Zhao, Y. Degradation learning for unsupervised hyperspectral image super-resolution based on generative adversarial network. Signal Image Video Process. 2021, 15, 1695–1703. [Google Scholar] [CrossRef]
- Zhang, M.; Wang, Z.; Wang, X.; Gong, M.; Wu, Y.; Li, H. Features kept generative adversarial network data augmentation strategy for hyperspectral image classification. Pattern Recognit. 2023, 142, 109701. [Google Scholar] [CrossRef]
- Zhao, Y.; Po, L.M.; Lin, T.; Yan, Q.; Liu, W.; Xian, P. HSGAN: Hyperspectral reconstruction from RGB images with generative adversarial network. IEEE Trans. Neural Netw. Learn. Syst. 2024, 34, 17137–17150. [Google Scholar] [CrossRef]
- Chen, Z.; Tong, L.; Qian, B.; Yu, J.; Xiao, C. Self-attention-based conditional variational auto-encoder generative adversarial networks for hyperspectral classification. Remote Sens. 2021, 13, 3316. [Google Scholar]
- Dam, T.; Anavatti, S.G.; Abbass, H.A. Mixture of spectral generative adversarial networks for imbalanced hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 19, 5502005. [Google Scholar] [CrossRef]
- Yang, Y.; Xu, Y.; Wu, Z.; Wang, B.; Wei, Z. Cross-scene classification of hyperspectral images via generative adversarial network in latent space. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5526217. [Google Scholar] [CrossRef]
- He, Z.; Xia, K.; Ghamisi, P.; Hu, Y.; Fan, S.; Zu, B. Hypervitgan: Semisupervised generative adversarial network with transformer for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 6053–6068. [Google Scholar] [CrossRef]
- Feng, J.; Gao, A.; Shang, R.; Zhang, X.; Jiao, L. Multi-complementary generative adversarial networks with contrastive learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5520018. [Google Scholar] [CrossRef]
- Hennessy, A.; Clarke, K.; Lewis, M. Generative adversarial network synthesis of hyperspectral vegetation data. Remote Sens. 2021, 13, 2243. [Google Scholar] [CrossRef]
- Ma, C.; Wan, M.; Kong, X.; Zhang, X.; Chen, Q.; Gu, G. Hybrid spatial-spectral generative adversarial network for hyperspectral image classification. JOSA A 2023, 40, 538–548. [Google Scholar] [CrossRef]
- Feng, J.; Zhao, N.; Shang, R.; Zhang, X.; Jiao, L. Self-supervised divide-and-conquer generative adversarial network for classification of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5536517. [Google Scholar] [CrossRef]
- Zhong, J.; Xie, W.; Li, Y.; Lei, J.; Du, Q. Characterization of background-anomaly separability with generative adversarial network for hyperspectral anomaly detection. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6017–6028. [Google Scholar] [CrossRef]
- Dong, W.; Yang, Y.; Qu, J.; Xie, W.; Li, Y. Fusion of hyperspectral and panchromatic images using generative adversarial network and image segmentation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5508413. [Google Scholar] [CrossRef]
- Zhang, F.; Bai, J.; Zhang, J.; Xiao, Z.; Pei, C. An optimized training method for GAN-based hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1791–1795. [Google Scholar] [CrossRef]
- Su, L.; Sui, Y.; Yuan, Y. An unmixing-based multi-attention GAN for unsupervised hyperspectral and multispectral image fusion. Remote Sens. 2023, 15, 936. [Google Scholar] [CrossRef]
- Wang, J.; Gao, F.; Dong, J.; Du, Q. Adaptive DropBlock-enhanced generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5040–5053. [Google Scholar] [CrossRef]
- Hao, S.; Xia, Y.; Ye, Y. Generative adversarial network with transformer for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5510205. [Google Scholar] [CrossRef]
- Zhang, L.; Nie, Q.; Ji, H.; Wang, Y.; Wei, Y.; An, D. Hyperspectral imaging combined with generative adversarial network (GAN)-based data augmentation to identify haploid maize kernels. J. Food Compos. Anal. 2022, 106, 104346. [Google Scholar] [CrossRef]
- Xie, W.; Zhang, J.; Lei, J.; Li, Y.; Jia, X. Self-spectral learning with GAN based spectral–spatial target detection for hyperspectral image. Neural Netw. 2021, 142, 375–387. [Google Scholar] [CrossRef]
- Xu, T.; Han, B.; Li, J.; Du, Y. Domain-invariant feature and generative adversarial network boundary enhancement for multi-source unsupervised hyperspectral image classification. Remote Sens. 2023, 15, 5306. [Google Scholar] [CrossRef]
- Zhang, L.; Wang, Y.; Wei, Y.; An, D. Near-infrared hyperspectral imaging technology combined with deep convolutional generative adversarial network to predict oil content of single maize kernel. Food Chem. 2022, 370, 131047. [Google Scholar] [CrossRef]
- Wang, B.; Zhang, Y.; Feng, Y.; Xie, B.; Mei, S. Attention-enhanced generative adversarial network for hyperspectral imagery spatial super-resolution. Remote Sens. 2023, 15, 3644. [Google Scholar] [CrossRef]
- Bai, J.; Lu, J.; Xiao, Z.; Chen, Z.; Jiao, L. Generative adversarial networks based on transformer encoder and convolution block for hyperspectral image classification. Remote Sens. 2022, 14, 3426. [Google Scholar] [CrossRef]
- Wang, Z.; Wang, X.; Tan, K.; Han, B.; Ding, J.; Liu, Z. Hyperspectral anomaly detection based on variational background inference and generative adversarial network. Pattern Recognit. 2023, 143, 109795. [Google Scholar] [CrossRef]
- Li, Y.; Jiang, T.; Xie, W.; Lei, J.; Du, Q. Sparse coding-inspired GAN for hyperspectral anomaly detection in weakly supervised learning. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5512811. [Google Scholar] [CrossRef]
- Li, Z.; Zhu, X.; Xin, Z.; Guo, F.; Cui, X.; Wang, L. Variational generative adversarial network with crossed spatial and spectral interactions for hyperspectral image classification. Remote Sens. 2021, 13, 3131. [Google Scholar] [CrossRef]
- Yu, W.; Zhang, M.; He, Z.; Shen, Y. Convolutional two-stream generative adversarial network-based hyperspectral feature extraction. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5506010. [Google Scholar] [CrossRef]
- Li, Z.; Shi, S.; Wang, L.; Xu, M.; Li, L. Unsupervised generative adversarial network with background enhancement and irredundant pooling for hyperspectral anomaly detection. Remote Sens. 2022, 14, 1265. [Google Scholar] [CrossRef]
- Shang, Y.; Liu, J.; Zhang, J.; Wu, Z. MFT-GAN: A multiscale feature-guided transformer network for unsupervised hyperspectral pansharpening. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5518516. [Google Scholar] [CrossRef]
- Bai, J.; Zhang, Y.; Xiao, Z.; Ye, F.; Li, Y.; Alazab, M.; Jiao, L. Immune evolutionary generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5543614. [Google Scholar] [CrossRef]
- Wang, B.; Zhang, S.; Feng, Y.; Mei, S.; Jia, S.; Du, Q. Hyperspectral imagery spatial super-resolution using generative adversarial network. IEEE Trans. Comput. Imaging 2021, 7, 948–960. [Google Scholar] [CrossRef]
- Mahmoudi, A.; Ahmadyfard, A. A GAN based method for cross-scene classification of hyperspectral scenes captured by different sensors. Multimedia Tools Appl. 2024, 84, 26351–26369. [Google Scholar] [CrossRef]
- Qin, A.; Tan, Z.; Wang, R.; Sun, Y.; Yang, F.; Zhao, Y.; Gao, C. Distance constraint-based generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5511416. [Google Scholar] [CrossRef]
- Jiang, T.; Xie, W.; Li, Y.; Lei, J.; Du, Q. Weakly supervised discriminative learning with spectral constrained generative adversarial network for hyperspectral anomaly detection. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6504–6517. [Google Scholar] [CrossRef]
- Wang, W.Y.; Li, H.C.; Deng, Y.J.; Shao, L.Y.; Lu, X.Q.; Du, Q. Generative adversarial capsule network with ConvLSTM for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 18, 523–527. [Google Scholar] [CrossRef]
- Wang, D.; Gao, L.; Qu, Y.; Sun, X.; Liao, W. Frequency-to-spectrum mapping GAN for semisupervised hyperspectral anomaly detection. CAAI Trans. Intell. Technol. 2023, 8, 1258–1273. [Google Scholar] [CrossRef]
- Xie, W.; Cui, Y.; Li, Y.; Lei, J.; Du, Q.; Li, J. HPGAN: Hyperspectral pansharpening using 3-D generative adversarial networks. IEEE Trans. Geosci. Remote Sens. 2020, 59, 463–477. [Google Scholar] [CrossRef]
- Sun, M.; Jiang, H.; Yuan, W.; Jin, S.; Zhou, H.; Zhou, Y.; Zhang, C. Discrimination of maturity of Camellia oleifera fruit on-site based on generative adversarial network and hyperspectral imaging technique. J. Food Meas. Charact. 2024, 18, 10–25. [Google Scholar] [CrossRef]
- Zhao, J.; Hu, L.; Huang, L.; Wang, C.; Liang, D. MSRA-G: Combination of multi-scale residual attention network and generative adversarial networks for hyperspectral image classification. Eng. Appl. Artif. Intell. 2023, 121, 106017. [Google Scholar] [CrossRef]
- Luo, H.; Zhu, H.; Liu, S.; Liu, Y.; Zhu, X.; Lai, J. 3-D auxiliary classifier GAN for hyperspectral anomaly detection via weakly supervised learning. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6009805. [Google Scholar] [CrossRef]
- Liang, H.; Bao, W.; Shen, X. Adaptive weighting feature fusion approach based on generative adversarial network for hyperspectral image classification. Remote Sens. 2021, 13, 198. [Google Scholar] [CrossRef]
- Wei, W.; Tong, L.; Guo, B.; Zhou, J.; Xiao, C. Few-shot hyperspectral image classification using relational generative adversarial network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5539016. [Google Scholar] [CrossRef]
- Shi, C.; Zhang, T.; Liao, D.; Jin, Z.; Wang, L. Dual hybrid convolutional generative adversarial network for hyperspectral image classification. Int. J. Remote Sens. 2022, 43, 5452–5479. [Google Scholar] [CrossRef]
- Hang, R.; Zhou, F.; Liu, Q.; Ghamisi, P. Classification of hyperspectral images via multitask generative adversarial networks. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1424–1436. [Google Scholar] [CrossRef]
- Feng, B.; Liu, Y.; Chi, H.; Chen, X. Hyperspectral remote sensing image classification based on residual generative adversarial neural networks. Signal Process. 2023, 213, 109202. [Google Scholar] [CrossRef]
- Jiang, K.; Xie, W.; Li, Y.; Lei, J.; He, G.; Du, Q. Semisupervised spectral learning with generative adversarial network for hyperspectral anomaly detection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5224–5236. [Google Scholar] [CrossRef]
- Chu, M.; Yu, X.; Dong, H.; Zang, S. Domain-adversarial generative and dual feature representation discriminative network for hyperspectral image domain generalization. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5533213. [Google Scholar] [CrossRef]
- Shi, Y.; Han, L.; Han, L.; Chang, S.; Hu, T.; Dancey, D. A latent encoder coupled generative adversarial network (LE-GAN) for efficient hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5534819. [Google Scholar] [CrossRef]
- Tang, R.; Liu, H.; Wei, J. Visualizing near infrared hyperspectral images with generative adversarial networks. Remote Sens. 2020, 12, 3848. [Google Scholar] [CrossRef]
- Xu, T.; Wang, Y.; Li, J.; Du, Y. Generative adversarial network and mutual-point learning algorithm for few-shot open-set classification of hyperspectral images. Remote Sens. 2024, 16, 1285. [Google Scholar] [CrossRef]
- Zhang, X.; Xie, W.; Li, Y.; Lei, J.; Du, Q.; Yang, G. Rank-aware generative adversarial network for hyperspectral band selection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5521812. [Google Scholar] [CrossRef]
- Jiang, T.; Li, Y.; Xie, W.; Du, Q. Discriminative reconstruction constrained generative adversarial network for hyperspectral anomaly detection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4666–4679. [Google Scholar] [CrossRef]
- Kusch, G.; Conroy, M.; Li, H.; Edwards, P.R.; Zhao, C.; Ooi, B.S.; Pugh, J.; Cryan, M.J.; Parbrook, P.J.; Martin, R.W. Multi-wavelength emission from a single InGaN/GaN nanorod analyzed by cathodoluminescence hyperspectral imaging. Sci. Rep. 2018, 8, 1742. [Google Scholar] [CrossRef]
- Zhang, M.; Gong, M.; Mao, Y.; Li, J.; Wu, Y. Unsupervised feature extraction in hyperspectral images based on Wasserstein generative adversarial network. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2669–2688. [Google Scholar] [CrossRef]
- Öner, B.; Pomeroy, J.W.; Kuball, M. Time resolved hyperspectral quantum rod thermography of microelectronic devices: Temperature transients in a GaN HEMT. IEEE Electron Device Lett. 2020, 41, 812–815. [Google Scholar] [CrossRef]
- Zhan, Y.; Hu, D.; Wang, Y.; Yu, X. Semisupervised hyperspectral image classification based on generative adversarial networks. IEEE Geosci. Remote Sens. Lett. 2017, 15, 212–216. [Google Scholar] [CrossRef]
- Wang, D.; Vinson, R.; Holmes, M.; Seibel, G.; Bechar, A.; Nof, S.; Tao, Y. Early detection of tomato spotted wilt virus by hyperspectral imaging and outlier removal auxiliary classifier generative adversarial nets (OR-AC-GAN). Sci. Rep. 2019, 9, 4377. [Google Scholar] [CrossRef]
- Alipour-Fard, T.; Arefi, H. Structure aware generative adversarial networks for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5424–5438. [Google Scholar] [CrossRef]
- Li, Q.; Lin, J.; Clancy, N.T.; Elson, D.S. Estimation of tissue oxygen saturation from RGB images and sparse hyperspectral signals based on conditional generative adversarial network. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 987–995. [Google Scholar] [CrossRef] [PubMed]
- He, Z.; Liu, H.; Wang, Y.; Hu, J. Generative adversarial networks-based semi-supervised learning for hyperspectral image classification. Remote Sens. 2017, 9, 1042. [Google Scholar] [CrossRef]
- Gao, H.; Yao, D.; Wang, M.; Li, C.; Liu, H.; Hua, Z.; Wang, J. A hyperspectral image classification method based on multi-discriminator generative adversarial networks. Sensors 2019, 19, 3269. [Google Scholar] [CrossRef]
- Tao, C.; Wang, H.; Qi, J.; Li, H. Semisupervised variational generative adversarial networks for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 914–927. [Google Scholar] [CrossRef]
- Wang, X.; Tan, K.; Du, Q.; Chen, Y.; Du, P. Caps-TripleGAN: GAN-assisted CapsNet for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7232–7245. [Google Scholar] [CrossRef]
- Feng, J.; Feng, X.; Chen, J.; Cao, X.; Zhang, X.; Jiao, L.; Yu, T. Generative adversarial networks based on collaborative learning and attention mechanism for hyperspectral image classification. Remote Sens. 2020, 12, 1149. [Google Scholar] [CrossRef]
- Zhong, Z.; Li, J.; Clausi, D.A.; Wong, A. Generative adversarial networks and conditional random fields for hyperspectral image classification. IEEE Trans. Cybern. 2019, 50, 3318–3329. [Google Scholar] [CrossRef]
- Zhao, W.; Chen, X.; Chen, J.; Qu, Y. Sample generation with self-attention generative adversarial adaptation network (SaGAAN) for hyperspectral image classification. Remote Sens. 2020, 12, 843. [Google Scholar] [CrossRef]
- Li, H.; Wang, W.; Ye, S.; Deng, Y.; Zhang, F.; Du, Q. A mixture generative adversarial network with category multi-classifier for hyperspectral image classification. Remote Sens. Lett. 2020, 11, 983–992. [Google Scholar] [CrossRef]
- Cui, J.; Li, K.; Lv, Y.; Liu, S.; Cai, Z.; Luo, R.; Zhang, Z.; Wang, S. Development of a new hyperspectral imaging technology with autoencoder-assisted generative adversarial network for predicting the content of polyunsaturated fatty acids in red meat. Comput. Electron. Agric. 2024, 220, 108842. [Google Scholar] [CrossRef]
- Zhang, Y.; Yan, S.; Zhang, L.; Du, B. Fast projected fuzzy clustering with anchor guidance for multimodal remote sensing imagery. IEEE Trans. Image Process. 2024, 33, 4640–4653. [Google Scholar] [CrossRef]
- Zhang, Y.; Yan, S.; Jiang, X.; Zhang, L.; Cai, Z.; Li, J. Dual graph learning affinity propagation for multimodal remote sensing image clustering. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5521713. [Google Scholar] [CrossRef]
- Li, C.; Zhang, B.; Hong, D.; Yao, J.; Chanussot, J. LRR-Net: An interpretable deep unfolding network for hyperspectral anomaly detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5513412. [Google Scholar] [CrossRef]
- Jia, S.; Jiang, S.; Lin, Z.; Xu, M.; Sun, W.; Huang, Q.; Zhu, J.; Jia, X. A semisupervised Siamese network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5516417. [Google Scholar] [CrossRef]
- Jin, Q.; Ma, Y.; Fan, F.; Huang, J.; Mei, X.; Ma, J. Adversarial autoencoder network for hyperspectral unmixing. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 4555–4569. [Google Scholar] [CrossRef]
- Dang, Y.; Li, H.; Liu, B.; Zhang, X. Cross-domain few-shot learning for hyperspectral image classification based on global-to-local enhanced channel attention. IEEE Geosci. Remote Sens. Lett. 2025, 22, 1–5. [Google Scholar] [CrossRef]
- Fletcher, J.R. Regulating, L. HyRANK: A benchmark for hyperspectral domain adaptation. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5453–5466. [Google Scholar]
- Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253. [Google Scholar] [CrossRef]
- Chakrabarti, A.; Zickler, T. Statistics of real-world hyperspectral images. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 193–200. [Google Scholar]
- Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and multispectral data fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56. [Google Scholar] [CrossRef]
- Aasen, H.; Honkavaara, E.; Lucieer, A.; Zarco-Tejada, P.J. Quantitative remote sensing at ultra-high resolution with UAV spectroscopy: A review of sensor technology, measurement procedures, and data correction workflows. Remote Sens. 2018, 10, 1091. [Google Scholar] [CrossRef]
- Feng, C.; Cheng, J.; Xiao, Y.; Cao, Z. Advance one-shot multispectral instance detection with text’s supervision. IEEE Signal Process. Lett. 2024, 31, 1605–1609. [Google Scholar] [CrossRef]
- Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901. [Google Scholar] [CrossRef]
| Reference | Selection Criteria | Focus Domain | Coverage of GAN | Coverage of HSI | Comprehensive Study on GAN and HSI | Remark |
|---|---|---|---|---|---|---|
| Guerri et al., 2024 [29] | DL + HSI | Agriculture: Crop monitoring, yield prediction, quality assessment | Partial | High | Low | This study includes various DL models (AE, CNNs, RNNs, DBNs, GANs, transfer learning, SSL, FSL, and AL) but focuses solely on agriculture and no other domain. |
| Lou et al., 2024 [30] | DL + HSI | Remote Sensing: Land use/land cover (LULC) classification | Partial | High | Low | This study covers hyperspectral datasets and models, including traditional ML, CNN, RNN, DBN, LSTM, GAN, Transformer, and spectral unmixing, but lacks a dedicated focus on GANs. |
| Kumar et al., 2024 [14] | DL + HSI | Remote Sensing & Agriculture: HSI classification | Low | High | Low | This study covers HSI datasets and a range of DL models; however, it is classification-focused, with limited discussion of GANs. |
| Shuai et al., 2024 [31] | DL + HSI | Agriculture: Multiscale agricultural data analysis | Partial | Partial | Low | This study covers multiscale agricultural HSI data and DL models such as CNN, AE, RNN, TL, and AL. However, it is limited to food and crop applications, excluding other HSI data. |
| Dubey and Singh, 2024 [22] | Transformer-based GAN | Computer Vision: Image/video synthesis | High | High | Low | This study includes generic datasets (CIFAR-10, STL-10, CelebA, ImageNet) and models (TransGAN, HiT, STrans-G, Styleformer, Swin-GAN, ViTGAN, VQGAN, StyleSwin) with limited focus on HSI. |
| Rahman et al., 2024 [32] | GANs | Farming | High | Partial | Low | This study covers a range of GAN-based models (CGAN, DCGAN, ES-RGAN, FCN, ProGAN, SAGAN, SRGAN, StyleGAN, CycleGAN, AE-GAN) with limited exploration of multimodal GANs and HSI data. |
| Mamo et al., 2024 [33] | GANs | Medical Imaging: Image generation, reconstruction, segmentation | High | Low | Low | This study covers a range of GANs (ProGAN, CGAN, SAGAN, CycleGAN, Pix2Pix, StarGAN, SinGAN) with no focus on hyperspectral data. |
| Wang et al., 2023 [34] | GANs | Remote Sensing: Super-resolution reconstruction | High | Low | Low | This study covers super-resolution GAN models (SRCNN, SRGAN, ESRGAN, DRGAN, EEGAN, SPGAN, ISRGAN, USRGAN, KernelGAN, TE-SRGAN) with a focus on optical remote sensing data only. |
| Patel and Patel, 2023 [35] | Active Learning | Remote Sensing: HSI | Low | High | Low | This study covers models including CNN, AE, CapsNet, Bayesian CNN, and RBM, with limited discussion of GANs. |
| Moharram and Sundaram, 2023 [10] | LULC + HSI | Remote Sensing: LULC | Low | High | Low | This study covers ML and DL models (CNN, SAE, DBN, RNN, GAN, GRU, KNN, SVM, RF) with a limited focus on GANs. |
| Lu et al., 2022 [8] | GANs | Agriculture: Image augmentation | Partial | High | Low | This study covers GAN models (ACGAN, ARGAN, CGAN, DCGAN, InfoGAN, ProGAN, SAGAN, SRGAN) with limited exploration of hyperspectral data. |
| Jozdani et al., 2022 [36] | GANs | Remote Sensing: Multiple RS applications | High | Partial | Low | This study covers VanillaGAN, AC-GAN, VAE, VAEGAN, and BiGAN across remote sensing data at large, lacking a specific focus on hyperspectral data. |
| Our Study, 2025 | GANs | HSI: All domains | High | High | High | Comprehensive study of HSI applications with GANs as the base model. |
| Model | Ref. | Year | Model | Ref. | Year | Model | Ref. | Year | Model | Ref. | Year |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Regression GAN | [39] | 2025 | GLSGAN | [62] | 2023 | HyperViTGAN | [63] | 2022 | AuxGAN | [41] | 2021 |
| R2D2-GAN | [38] | 2024 | CMC-GAN | [64] | 2023 | DCGAN | [44] | 2022 | GAN | [65] | 2021 |
| OxyGAN | [37] | 2024 | TV-WGAN-GP | [66] | 2023 | SDC-GAN | [67] | 2022 | BASGAN | [68] | 2021 |
| Physics Guided GAN | [48] | 2024 | HSSGAN | [66] | 2023 | GASN-ISGE | [69] | 2022 | (PG + W)GAN-GP | [70] | 2021 |
| AGANet | [40] | 2024 | CMAN | [71] | 2023 | GAN-CF | [49] | 2022 | ADGAN | [72] | 2021 |
| HyperGAN | [46] | 2024 | TRUG | [73] | 2023 | CGAN, DC-GAN | [74] | 2022 | SSTD | [75] | 2021 |
| AE-GAN | [22] | 2024 | MUDA | [76] | 2023 | DCGAN | [77] | 2022 | SACVAEGAN | [60] | 2021 |
| SAM-GAN | [42] | 2024 | AEGAN | [78] | 2023 | TEGAN | [79] | 2022 | SRGAN | [57] | 2021 |
| GAA-AS | [56] | 2024 | VBIGAN-AD | [80] | 2023 | sparseHAD | [81] | 2022 | CSSVGAN | [82] | 2021 |
| DcGAN-HAD | [52] | 2024 | FPGANDA | [58] | 2023 | cs2GAN-FE | [83] | 2022 | SHS-GAN | [50] | 2021 |
| WGAN | [43] | 2024 | TBGAN | [12] | 2023 | BEGAIP | [84] | 2022 | SSAT-GAN | [13] | 2021 |
| MFT-GAN | [85] | 2024 | QIS-GAN | [15] | 2023 | HIEGAN | [86] | 2022 | HSSRGAN | [87] | 2021 |
| GANHDA | [88] | 2024 | DGAN | [89] | 2023 | KLOPDGAN | [90] | 2022 | CapsCLSTMGAN | [91] | 2021 |
| DFGAN | [45] | 2024 | FTSGAN | [92] | 2023 | PDASS | [47] | 2022 | HPGAN | [93] | 2021 |
| MAPRB CycleGAN | [94] | 2024 | MSRA-GAN | [95] | 2023 | 3DACGAN | [96] | 2022 | AWF2-GAN | [97] | 2021 |
| 3DACWGANCLST13M | [98] | 2024 | HSGAN | [59] | 2023 | DHCGAN | [99] | 2022 | MTGAN | [100] | 2021 |
| FSHyperRGAN | [98] | 2024 | RGAN | [101] | 2023 | MGSGAN | [61] | 2022 | SSLGAN | [102] | 2020 |
| D3Net | [103] | 2024 | SAD-GAN | [23] | 2023 | LE-GAN | [104] | 2022 | HVCNN | [105] | 2020 |
| DWGAN | [106] | 2024 | 3D-GAN | [2] | 2018 | R-GAN | [107] | 2022 | HADGAN | [108] | 2020 |
| STGAN | [51] | 2024 | InGAN | [109] | 2018 | WGAN | [110] | 2019 | GaN HEMT | [111] | 2020 |
| Sill-RGAN | [3] | 2024 | HSGAN | [112] | 2018 | OR-AC-GAN | [113] | 2019 | SA-GAN | [114] | 2020 |
| Dual2StO2 | [115] | 2019 | 3DBFGAN | [116] | 2017 | MDGAN | [117] | 2019 | SSVGAN | [118] | 2020 |
| MSGAN | [4] | 2019 | TripleGAN | [119] | 2019 | CA-GAN | [120] | 2020 | |||
| SS-GAN-CRF | [121] | 2020 | | | | | | | | | |
| SaGAAN | [122] | 2020 | | | | | | | | | |
| MGAN-3DCNN | [123] | 2020 | | | | | | | | | |
| Challenges in Literature | Identified Gap | Description |
|---|---|---|
| Architectural Efficiency | ||
| AEGAN [78] demonstrates high computational overhead in attention mechanisms | Lightweight Architecture Development | Multi-head attention increases GPU memory usage by ~3× compared to baseline CNN-GANs, limiting scalability to large HSI scenes |
| MFT-GAN [85] shows excessive resource consumption in transformer operations | Transformer Optimization | Transformer-based spectral attention leads to high FLOPs and long training time, restricting deployment on resource-constrained platforms |
| CMAN [71] exhibits complexity issues in multi-attention mechanisms | Computational Efficiency | Coupled multi-attention blocks increase inference latency and parameter count, reducing real-time applicability |
| Environmental Robustness | ||
| Sill-RGAN [3] struggles with varying lighting conditions | Adaptive Environmental Processing | Performance degrades under varying illumination conditions, indicating limited spectral–illumination invariance |
| HPGAN [93] shows limited cross-sensor compatibility | Sensor Generalization | Trained on single-sensor data; limited cross-sensor transferability without retraining |
| GASN-ISGE [69] faces temporal consistency issues | Temporal Stability | Temporal inconsistencies observed in sequential HSI data, affecting change detection reliability |
| Integration Challenges | ||
| QIS-GAN [15] faces edge deployment limitations | Edge Computing Optimization | High memory footprint and multi-stage processing hinder edge and real-time deployment |
| HSSRGAN [87] shows real-time processing constraints | Real-time Processing | Super-resolution pipeline introduces high inference time, unsuitable for time-sensitive applications |
| BASGAN [68] requires extensive parameter tuning | Automated Optimization | Requires extensive manual hyperparameter tuning across datasets |
| Spectral-Spatial Balance | ||
| SHS-GAN [50] struggles with spectral-spatial trade-offs | Fidelity Balance | Difficulty in preserving fine spectral signatures while enhancing spatial resolution |
| HSSRGAN [87] shows over-smoothing in high-band processing | High-band Processing | Over-smoothing observed in high spectral bands, reducing material discrimination accuracy |
| MSGAN [4] faces feature extraction challenges | Feature Optimization | Limited ability to extract discriminative spectral–spatial features from high-dimensional data |
| Data and Training | ||
| FSHRGAN [87] indicates limited data handling issues | Limited Data Processing | Performance drops significantly under few-shot learning conditions |
| GANHDA [88] shows domain transfer limitations | Domain Adaptation | Domain transfer effectiveness declines under large spectral distribution shifts |
| 3DACGAN [96] requires extensive data augmentation | Data Augmentation | Requires extensive synthetic data generation to achieve stable training |
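Several of the gaps above concern spectral fidelity under varying illumination. A standard way to quantify spectral fidelity in HSI (and one of the performance criteria used by the super-resolution studies surveyed here) is the spectral angle mapper (SAM), which is invariant to per-pixel brightness scaling. The following NumPy sketch is illustrative only and is not code from any surveyed model:

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (radians) between two spectra x and y.

    Invariant to scalar illumination changes: angle(x, c*x) = 0 for c > 0,
    which is why SAM isolates spectral shape from brightness.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cos_sim = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    # clip guards against round-off pushing |cos| slightly above 1
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

# A brightness-scaled spectrum has (numerically) zero angle to the original,
# while spectrally orthogonal pixels yield the maximum angle pi/2.
print(spectral_angle([0.2, 0.5, 0.9], [0.4, 1.0, 1.8]))
print(spectral_angle([1.0, 0.0], [0.0, 1.0]))
```

Averaging this angle over all pixels of a reconstructed scene gives the SAM score reported alongside PSNR and SSIM in the hyperparameter tables below.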
| Parameter | [16] | [3] | [62] | [89] | [38] | [110] |
|---|---|---|---|---|---|---|
| Learning Rate | 5 × 10⁻⁵ | 0.001 (Gen), 0.002 (Dis) | 0.001 | 0.0002 | 0.005 | 0.01 |
| Batch Size | 256 | 16, 24, 38 | 16, 24 | 128 | 1 | 5 |
| Optimizer | RMSProp | Adadelta | Adam | SGD | Adam | Adam, SGD |
| Network | 3D-AuxConvLSTM | MLP, U-Net, Convolutional Attention | 3D-CNN | Encoder-Decoder | GAN | 3D-CNN |
| Activation Function | Leaky ReLU, Sigmoid | NA | ReLU | Sigmoid, Softmax | Sigmoid | ReLU |
| Performance Criteria | OA, AA, KC | OA, AA, KC | OA, AA, KC | OA, AA, KC, F1 | PSNR, SAM, SSIM | OA, AA, KC, F1 |
| Framework Used | TensorFlow 2.16.1 | PyTorch 1.8.1 | PyTorch 1.8.1 | TensorFlow 2.16.1 | PyTorch 1.8.1 | PyTorch 1.8.1 |
| Loss Function | Sparse Categorical Cross-Entropy | Two-part (Label + Domain Prediction) | Logarithmic Cross-Entropy | Cross-Entropy Loss | Adversarial Loss, Content Loss | VAE Loss, Manifold Regularization |
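The classification-oriented studies in the table above report overall accuracy (OA), average accuracy (AA), and the kappa coefficient (KC). All three follow directly from the confusion matrix; the sketch below is a self-contained NumPy illustration (the confusion matrix is invented for the example, not taken from any surveyed paper):

```python
import numpy as np

def classification_metrics(conf):
    """Compute OA, AA, and Cohen's kappa from a confusion matrix.

    conf[i, j] = number of samples with true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    diag = np.diag(conf)
    oa = diag.sum() / total                 # overall accuracy
    aa = (diag / conf.sum(axis=1)).mean()   # mean per-class recall
    # expected agreement by chance, from the row/column marginals
    pe = (conf.sum(axis=1) * conf.sum(axis=0)).sum() / total**2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# Hypothetical 3-class confusion matrix (rows = true, cols = predicted).
conf = np.array([[50,  2,  3],
                 [ 5, 40,  5],
                 [ 2,  3, 45]])
oa, aa, kc = classification_metrics(conf)
print(f"OA={oa:.4f}, AA={aa:.4f}, KC={kc:.4f}")
```

KC discounts the agreement expected by chance, which is why it is reported alongside OA for the class-imbalanced HSI benchmarks used by these models.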
| Parameter | [108] | [102] | [75] | [69] | [41] | [90] | [45] | [52] |
|---|---|---|---|---|---|---|---|---|
| Learning Rate | 0.0001 | 1.0 | 0.001 for E and D, 0.0001 for D1 and D2 | 1 × 10⁻⁴ | 1 × 10⁻³ | 0.0001 | Customized | 0.0001 |
| Optimizer | Adam | Adam | Adam | Adam | Adam | Adam | Adam | Hybrid Manta Ray Foraging |
| Network | Autoencoder | Autoencoder GAN | Autoencoder GAN | BASGAN | GAN | GAN | DFGAN | Dual-discriminator GAN |
| Activation Function | ReLU | Leaky ReLU | Leaky ReLU and Sigmoid | ReLU, Leaky ReLU, Sigmoid, Tanh | Leaky ReLU | ReLU | | |
| Performance Criteria | Mean AUC | AUC scores | AUC scores, FAR scores, ROC curves | ROC curve and AUC | AUC | AUC | OA, AA, KC | F1-score, Recall, AUC Scores, Accuracy, Precision |
| Framework Used | TensorFlow 2.16.1 | TensorFlow 2.16.1 | TensorFlow 2.16.1 | TensorFlow 2.16.1 | TensorFlow 2.16.1 | TensorFlow 2.16.1 | Python 3.13 | PyTorch 1.8.1 |
| Loss Function | Cross Entropy | Adversarial | Background-anomaly separability constrained loss | Adversarial loss, cross-entropy | KL Divergence | Spectral Constrained Reconstruction Loss | Binary Cross Entropy | |
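The anomaly-detection studies in this table evaluate with ROC curves and AUC scores. For untied detector scores, AUC can be computed directly from ranks via the Mann–Whitney U statistic, without tracing the full ROC curve. A minimal NumPy sketch with made-up scores (illustrative, not from any surveyed implementation):

```python
import numpy as np

def auc_score(scores, labels):
    """ROC AUC from detector scores via the Mann-Whitney U statistic.

    labels: 1 = anomaly, 0 = background; higher score = more anomalous.
    Assumes untied scores for simplicity (ties would need average ranks).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)  # rank 1 = lowest
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Perfect separation of anomalies from background gives AUC = 1.0.
print(auc_score([0.1, 0.2, 0.3, 0.8, 0.9], [0, 0, 0, 1, 1]))  # → 1.0
```

AUC equals the probability that a randomly chosen anomaly pixel outranks a randomly chosen background pixel, which is why it is threshold-free and well suited to the heavily imbalanced pixel populations typical of HSI anomaly detection.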
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Ranjan, P.; Nandal, A.; Agarwal, S.; Kumar, R. A Dive into Generative Adversarial Networks in the World of Hyperspectral Imaging: A Survey of the State of the Art. Remote Sens. 2026, 18, 196. https://doi.org/10.3390/rs18020196