Meta-Learning-Integrated Neural Architecture Search for Few-Shot Hyperspectral Image Classification
Abstract
1. Introduction
- Explored the application of NAS to few-shot hyperspectral classification tasks, constructing a multi-source-domain learning framework combined with meta-learning that enriches the learnable meta-knowledge.
- Designed an accurate and robust search space for attention convolution, achieving automatic design of the HSIC feature-extractor architecture under limited samples. The optimal accurate and robust cells are deployed at different positions in the architecture, ensuring that it maintains both classification accuracy and transfer robustness on HSIC.
- Proposed an attention convolution operator within the search space that combines efficient attention mechanisms with depthwise separable convolutions, enhancing the discriminative feature-extraction capability of the searched architecture while preserving the efficiency of the convolution.
- Combined focal loss and label-distribution-aware margin loss, so that the optimal architecture effectively improves classification performance on imbalanced samples.
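The combined loss in the last contribution can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the focal-loss form follows Lin et al. and the LDAM margin Δ_j = C / n_j^(1/4) follows Cao et al., but the weighting coefficient `alpha` and the constants `C`, `s` (LDAM's logit scale), and `gamma` are hypothetical choices.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def focal_loss(logits, labels, gamma=2.0):
    """Focal loss: down-weights easy examples by (1 - p_t)^gamma."""
    p = softmax(logits)
    pt = p[np.arange(len(labels)), labels]  # probability of the true class
    return float(-np.mean((1.0 - pt) ** gamma * np.log(pt + 1e-12)))

def ldam_loss(logits, labels, class_counts, C=0.5, s=30.0):
    """LDAM loss: enforces a larger margin C / n_j^(1/4) for rarer classes."""
    margins = C / np.power(np.asarray(class_counts, dtype=float), 0.25)
    z = logits.copy()
    z[np.arange(len(labels)), labels] -= margins[labels]  # shrink true-class logit
    pt = softmax(s * z)[np.arange(len(labels)), labels]
    return float(-np.mean(np.log(pt + 1e-12)))

def combined_loss(logits, labels, class_counts, alpha=0.5, gamma=2.0):
    """Weighted sum of focal and LDAM losses; alpha is a hypothetical weight."""
    return alpha * focal_loss(logits, labels, gamma) + \
        (1.0 - alpha) * ldam_loss(logits, labels, class_counts)
```

The rare-class margin grows as its sample count shrinks, while the focal term keeps well-classified majority samples from dominating the gradient, which is the complementarity the contribution relies on.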
2. Materials and Methods
2.1. Overall Framework of MLFS-NAS
2.2. Few-Shot Sample Learning of Multi-Source Domain and Target Domain
2.3. Accurate and Robust Search Space
2.3.1. Design of Accurate and Robust Search Space
2.3.2. Internal Design of Search Space
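As a rough illustration of the attention convolution operator introduced in the contributions, the sketch below pairs a depthwise separable convolution with an ECA-style channel attention gate. It is an assumption-laden NumPy mock-up, not the paper's operator: the kernel sizes, the fixed (rather than learned) 1-D attention kernel, and the single-image `(C, H, W)` layout are all illustrative choices.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise conv (one k x k kernel per channel) followed by a
    pointwise 1 x 1 conv that mixes channels. x has shape (C, H, W)."""
    C, H, W = x.shape
    k = dw_kernels.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))  # same-size output
    dw = np.empty_like(x)
    for c in range(C):                      # each channel filtered independently
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * dw_kernels[c])
    # pointwise mixing: (C_out, C) x (C, H, W) -> (C_out, H, W)
    return np.tensordot(pw_weights, dw, axes=([1], [0]))

def eca_gate(x, k=3):
    """ECA-style channel attention: global average pool, 1-D conv across
    channels (a fixed averaging kernel here; learned in practice), sigmoid."""
    C = x.shape[0]
    s = x.mean(axis=(1, 2))                 # per-channel descriptor
    sp = np.pad(s, (k // 2, k // 2), mode='edge')
    w = np.full(k, 1.0 / k)                 # hypothetical fixed kernel
    a = np.array([sp[i:i + k] @ w for i in range(C)])
    gate = 1.0 / (1.0 + np.exp(-a))         # per-channel weights in (0, 1)
    return x * gate[:, None, None]

def attention_conv(x, dw_kernels, pw_weights):
    """Attention convolution: depthwise separable conv + channel gate."""
    return eca_gate(depthwise_separable_conv(x, dw_kernels, pw_weights))
```

Because the gate only rescales channels in (0, 1), the operator keeps the cheap parameter count of the separable convolution while letting informative spectral channels dominate the output.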
Algorithm 1 MLFS-NAS

Initialization and Data Preparation:
1. source_domains = [Domain_D1, Domain_D2] // Source-domain data
2. target_domain_labeled = Domain_Dt_labeled // Labeled target-domain data
3. target_domain_unlabeled = Domain_Dt_unlabeled // Unlabeled target-domain data

Stage 1: Supernet Architecture Search:
1. SuperNet = InitializeSuperNet() // Initialize the supernet
2. for epoch = 1 to SUPERNET_EPOCHS do
       for each domain in source_domains + [target_domain_labeled] do
           S_samples, Q_samples = SplitIntoSupportQuery(domain) // Split into support and query sets
           features = SuperNet(S_samples, Q_samples) // Supernet feature extraction
           loss = Loss(features) // Compute loss
           UpdateSuperNet(SuperNet, loss) // Update supernet parameters
       end for
   end for

Stage 2: Optimal Architecture Extraction and Final Network Construction:
1. EpisodeOptimizer = InitializeOptimizer(FinalNet) // Initialize the final-network optimizer
2. for episode = 1 to EPISODES do
       episode_data = SampleEpisodeData(source_domains, target_domain_labeled) // Sample episode data
       for each batch in episode_data do
           M_samples, other_samples = SplitBatch(batch) // Split into M set and other sets
           features = FinalNet(M_samples, other_samples) // Final-network feature extraction
           loss = Loss(features) // Compute loss
           UpdateFinalNet(FinalNet, loss, EpisodeOptimizer) // Update final-network parameters
       end for
   end for

Stage 4: Transfer Application to Unlabeled Target-Domain Data:
1. NNClassifier = InitializeClassifier() // Initialize the classifier
2. for each sample in target_domain_unlabeled do
       features = FinalNet(sample) // Feature extraction
       prediction = NNClassifier(features) // Classification prediction
   end for
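The transfer stage above classifies unlabeled target-domain samples with a nearest-neighbor classifier over features from the final network. A common way to instantiate `NNClassifier` in few-shot pipelines, shown here as an assumed prototype-based sketch rather than the paper's exact classifier, is to average the support-set embeddings per class and assign each query to the nearest prototype:

```python
import numpy as np

def class_prototypes(support_feats, support_labels, n_classes):
    """Mean embedding per class, computed from the labeled support set."""
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def nn_classify(query_feats, prototypes):
    """Assign each query feature to the class of its nearest prototype
    under squared Euclidean distance."""
    d = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 2-D embeddings standing in for FinalNet(sample) outputs
support = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(support, labels, n_classes=2)
preds = nn_classify(np.array([[0.1, 0.1], [4.9, 5.1]]), protos)  # -> [0, 1]
```

This keeps inference training-free on the target domain: once `FinalNet` is frozen, only the handful of labeled target samples is needed to build the prototypes.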
3. Results
3.1. Dataset Description
3.2. Experimental Environment Configuration and Implementation Details
3.3. Comparison of the Proposed Method with the State-of-the-Art Methods
4. Discussion
4.1. Analysis of Optimal Cell Structure
4.2. Analysis of Related Parameter
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Cantalloube, H.M.J.; Nahum, C.E. Airborne SAR-Efficient Signal Processing for Very High Resolution. Proc. IEEE 2013, 101, 784–797.
- Gao, Z.D.; Hao, Q.; Liu, Y.; Zhu, Y.Y.; Cao, J.; Meng, H.M.; Liu, J.; Chen, H.L. Development of Hyperspectral Imaging and Application Technology. Metrol. Meas. Technol. 2019, 24–34.
- Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature Extraction for Hyperspectral Imagery: The Evolution From Shallow to Deep: Overview and Toolbox. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88.
- Landgrebe, D. Hyperspectral Image Data Analysis. IEEE Signal Process. Mag. 2002, 19, 17–28.
- Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36.
- Kavita, B.; Vijaya, M. Evaluation of Deep Learning CNN Model for Land Use Land Cover Classification and Crop Identification Using Hyperspectral Remote Sensing Images. J. Indian Soc. Remote Sens. 2019, 47, 1949–1958.
- Gao, A.F.; Rasmussen, B.; Kulits, P.; Scheller, E.L.; Greenberger, R.; Ehlmann, B.L. Generalized Unsupervised Clustering of Hyperspectral Images of Geological Targets in the Near Infrared. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 4289–4298.
- Yue, J.; Zhao, W.; Mao, S.; Liu, H. Spectral-Spatial Classification of Hyperspectral Images Using Deep Convolutional Neural Networks. Remote Sens. Lett. 2015, 6, 468–477.
- Zhang, H.; Li, Y.; Zhang, Y.; Shen, Q. Spectral-Spatial Classification of Hyperspectral Imagery Using a Dual-Channel Convolutional Neural Network. Remote Sens. Lett. 2017, 8, 438–447.
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral-Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858.
- Roy, S.K.; Manna, S.; Song, T.; Bruzzone, L. Attention-Based Adaptive Spectral-Spatial Kernel ResNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7831–7843.
- Liu, B.; Yu, X.; Yu, A.; Zhang, P.; Wan, G.; Wang, R. Deep Few-Shot Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2290–2304.
- Chen, Y.; Zhu, K.; Zhu, L.; He, X.; Ghamisi, P.; Benediktsson, J.A. Automatic Design of Convolutional Neural Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7048–7066.
- Zhang, H.; Gong, C.; Bai, Y.; Bai, Z.; Li, Y. 3-D-ANAS: 3-D Asymmetric Neural Architecture Search for Fast Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5508519.
- Xue, X.; Zhang, H.; Fang, B.; Bai, Z.; Li, Y. Grafting Transformer on Automatically Designed Convolutional Neural Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5531116.
- Cao, C.; Xiang, H.; Song, W.; Yi, H.; Xiao, F.; Gao, X. Lightweight Multiscale Neural Architecture Search With Spectra-Spatial Attention for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5505315.
- Xiao, F.; Xiang, H.; Cao, C.; Gao, X. Neural Architecture Search-Based Few-Shot Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5513715.
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Cao, K.; Wei, C.; Gaidon, A.; Arechiga, N.; Ma, T. Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss. Adv. Neural Inf. Process. Syst. 2019, 32.
- Ou, Y.; Feng, Y.; Sun, Y. Towards Accurate and Robust Architectures via Neural Architecture Search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 5967–5976.
- Feng, J.; Fu, C. An Enhanced YOLOv8 Model for Flame and Smoke Detection. In Proceedings of the 2024 4th International Conference on Computer Science and Blockchain (CCSB), Shenzhen, China, 6–8 September 2024; pp. 109–113.
- Sun, H.; Wen, Y.; Feng, H.; Zheng, Y.; Mei, Q.; Ren, D.; Yu, M. Unsupervised Bidirectional Contrastive Reconstruction and Adaptive Fine-Grained Channel Attention Networks for Image Dehazing. Neural Netw. 2024, 176, 106314.
- Wan, D.; Lu, R.; Shen, S.; Xu, T.; Lang, X.; Ren, Z. Mixed Local Channel Attention for Object Detection. Eng. Appl. Artif. Intell. 2023, 123, 106442.
- Guo, T.; Wang, R.; Luo, F.; Gong, X.; Zhang, L.; Gao, X. Dual-View Spectral and Global Spatial Feature Fusion Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5512913.
- Li, Z.; Liu, M.; Chen, Y.; Xu, Y.; Li, W.; Du, Q. Deep Cross-Domain Few-Shot Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5501618.
- Wang, Y.; Liu, M.; Yang, Y.; Li, Z.; Du, Q.; Chen, Y.; Li, F.; Yang, H. Heterogeneous Few-Shot Learning for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5510405.
Land-cover classes and sample numbers of the Pavia Centre (PC) dataset (color key, false-color map, and ground-truth map columns omitted):

No. | Class | Sample Numbers
---|---|---
1 | Water | 65,971
2 | Trees | 7598
3 | Asphalt | 3090
4 | Self-Blocking Bricks | 2685
5 | Bitumen | 6584
6 | Tiles | 9248
7 | Shadows | 7287
8 | Meadows | 42,826
9 | Bare Soil | 2863
Total | | 148,152
Land-cover classes and sample numbers of the Indian Pines (IN) dataset (color key, false-color map, and ground-truth map columns omitted):

No. | Class | Sample Numbers
---|---|---
1 | Alfalfa | 46
2 | Corn-notill | 1428
3 | Corn-mintill | 830
4 | Corn | 237
5 | Grass-pasture | 483
6 | Grass-trees | 730
7 | Grass-pasture-mowed | 28
8 | Hay-windrowed | 478
9 | Oats | 20
10 | Soybean-notill | 972
11 | Soybean-mintill | 2455
12 | Soybean-clean | 593
13 | Wheat | 205
14 | Woods | 1265
15 | Buildings-grass-trees-drives | 386
16 | Stone-steel-towers | 93
Total | | 10,249
Land-cover classes and sample numbers of the WHU-Hi-LongKou (LK) dataset (color key, false-color map, and ground-truth map columns omitted):

No. | Class | Sample Numbers
---|---|---
1 | Corn | 34,511
2 | Cotton | 8374
3 | Sesame | 3031
4 | Broad-leaf soybean | 63,212
5 | Narrow-leaf soybean | 4151
6 | Rice | 11,854
7 | Water | 67,056
8 | Roads and houses | 7124
9 | Mixed weed | 5229
Total | | 204,542
Per-class accuracy, OA, AA, and Kappa of the compared methods on the PC dataset (mean ± standard deviation, %):

Class | SSRN | A2S2K-ResNet | DSGSF | LMSS-NAS | DCFSL | HFSL | MLFS-NAS
---|---|---|---|---|---|---|---
1 | 99.14 ± 0.20 | 99.99 ± 0.01 | 99.98 ± 0.02 | 99.95 ± 0.04 | 99.50 ± 0.12 | 99.78 ± 0.02 | 99.65 ± 0.11
2 | 84.13 ± 4.67 | 88.48 ± 11.20 | 75.05 ± 6.39 | 96.87 ± 1.86 | 92.70 ± 4.31 | 95.56 ± 5.10 | 93.46 ± 1.58
3 | 66.49 ± 7.85 | 57.80 ± 18.19 | 97.56 ± 2.01 | 83.84 ± 5.94 | 84.73 ± 3.56 | 90.26 ± 2.71 | 93.67 ± 2.63
4 | 61.27 ± 12.26 | 62.32 ± 6.68 | 51.49 ± 14.78 | 52.91 ± 15.23 | 99.51 ± 0.21 | 88.20 ± 0.84 | 93.66 ± 3.56
5 | 81.32 ± 8.73 | 90.04 ± 7.34 | 99.97 ± 4.85 | 91.65 ± 5.35 | 86.23 ± 3.83 | 89.85 ± 4.10 | 94.86 ± 2.68
6 | 84.24 ± 4.82 | 73.76 ± 6.72 | 93.29 ± 3.41 | 91.55 ± 0.77 | 94.07 ± 1.25 | 71.71 ± 0.45 | 97.98 ± 1.52
7 | 91.27 ± 1.70 | 96.68 ± 2.66 | 99.96 ± 1.49 | 98.71 ± 0.91 | 84.45 ± 3.64 | 99.86 ± 4.55 | 90.37 ± 2.89
8 | 94.26 ± 5.31 | 97.63 ± 2.62 | 98.67 ± 0.99 | 99.93 ± 0.05 | 98.75 ± 0.22 | 99.80 ± 0.91 | 99.52 ± 0.14
9 | 93.71 ± 0.18 | 99.98 ± 0.01 | 100.00 ± 0.00 | 99.25 ± 0.77 | 95.98 ± 4.73 | 92.51 ± 3.04 | 96.58 ± 1.46
OA (%) | 91.64 ± 1.75 | 99.98 ± 0.01 | 95.77 ± 0.56 | 96.85 ± 1.25 | 96.89 ± 0.30 | 96.24 ± 0.95 | 98.57 ± 0.53
AA (%) | 83.98 ± 0.93 | 85.21 ± 0.04 | 90.67 ± 2.08 | 90.52 ± 2.44 | 92.88 ± 2.56 | 91.95 ± 1.11 | 95.85 ± 0.47
K × 100 | 89.94 ± 2.55 | 90.70 ± 0.03 | 94.00 ± 0.80 | 95.55 ± 1.76 | 95.60 ± 0.09 | 94.68 ± 0.76 | 97.89 ± 0.66
Per-class accuracy, OA, AA, and Kappa of the compared methods on the IN dataset (mean ± standard deviation, %):

Class | SSRN | A2S2K-ResNet | DSGSF | LMSS-NAS | DCFSL | HFSL | MLFS-NAS
---|---|---|---|---|---|---|---
1 | 29.93 ± 23.74 | 27.28 ± 18.94 | 91.66 ± 2.09 | 26.39 ± 9.81 | 98.8 ± 1.69 | 98.78 ± 1.22 | 96.67 ± 2.57
2 | 52.13 ± 14.64 | 51.99 ± 4.89 | 47.63 ± 3.80 | 59.97 ± 18.25 | 38.15 ± 6.16 | 56.71 ± 4.64 | 61.73 ± 1.33
3 | 24.07 ± 3.28 | 42.27 ± 1.02 | 41.86 ± 2.76 | 46.78 ± 24.84 | 51.63 ± 3.94 | 50.61 ± 4.30 | 75.78 ± 1.81
4 | 29.30 ± 12.91 | 30.40 ± 10.22 | 16.01 ± 16.77 | 40.86 ± 3.87 | 72.70 ± 13.01 | 79.74 ± 18.97 | 82.93 ± 9.21
5 | 76.71 ± 12.49 | 77.49 ± 13.48 | 54.72 ± 14.19 | 79.60 ± 12.79 | 61.93 ± 13.01 | 63.39 ± 10.25 | 91.10 ± 0.92
6 | 89.19 ± 7.26 | 89.66 ± 1.85 | 82.06 ± 3.61 | 85.75 ± 10.73 | 94.14 ± 0.69 | 78.41 ± 11.10 | 87.66 ± 1.71
7 | 35.31 ± 26.51 | 25.44 ± 4.12 | 40.00 ± 22.74 | 32.23 ± 19.72 | 100.00 ± 0.00 | 97.83 ± 2.17 | 98.92 ± 1.25
8 | 95.02 ± 6.90 | 100.00 ± 0.00 | 99.32 ± 0.45 | 99.11 ± 0.86 | 97.14 ± 3.73 | 99.68 ± 0.11 | 96.57 ± 2.01
9 | 12.28 ± 5.63 | 33.60 ± 2.42 | 11.59 ± 1.49 | 18.50 ± 15.11 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
10 | 63.05 ± 7.61 | 51.89 ± 11.39 | 45.04 ± 11.45 | 70.90 ± 18.40 | 59.82 ± 0.79 | 67.99 ± 3.36 | 75.32 ± 2.20
11 | 61.22 ± 5.14 | 67.19 ± 2.78 | 73.21 ± 3.88 | 56.15 ± 8.85 | 33.31 ± 3.86 | 64.65 ± 1.71 | 71.52 ± 1.07
12 | 28.32 ± 4.98 | 41.61 ± 2.06 | 42.85 ± 5.74 | 46.20 ± 25.19 | 41.52 ± 6.70 | 55.78 ± 23.98 | 71.44 ± 4.27
13 | 94.99 ± 4.15 | 84.42 ± 0.56 | 80.61 ± 1.22 | 91.56 ± 7.69 | 99.00 ± 0.71 | 98.75 ± 0.25 | 98.77 ± 1.77
14 | 91.43 ± 4.60 | 96.97 ± 4.77 | 93.06 ± 1.95 | 96.30 ± 2.31 | 92.10 ± 5.33 | 80.99 ± 9.48 | 91.48 ± 0.87
15 | 58.55 ± 25.51 | 59.61 ± 6.78 | 55.50 ± 15.89 | 64.14 ± 12.83 | 79.27 ± 8.58 | 99.21 ± 0.52 | 95.67 ± 3.01
16 | 73.47 ± 29.26 | 72.21 ± 4.91 | 89.77 ± 8.01 | 83.60 ± 8.13 | 98.30 ± 2.41 | 94.32 ± 5.68 | 96.55 ± 2.77
OA (%) | 53.70 ± 4.84 | 57.28 ± 0.91 | 59.17 ± 2.23 | 60.96 ± 4.27 | 66.55 ± 1.81 | 69.61 ± 1.71 | 78.39 ± 0.78
AA (%) | 57.18 ± 5.35 | 59.49 ± 3.39 | 60.31 ± 2.69 | 62.38 ± 2.63 | 78.53 ± 1.09 | 80.43 ± 2.26 | 84.35 ± 3.19
K × 100 | 48.66 ± 5.22 | 52.70 ± 0.94 | 54.30 ± 2.18 | 55.23 ± 4.97 | 61.99 ± 1.49 | 65.83 ± 1.30 | 76.91 ± 0.90
Per-class accuracy, OA, AA, and Kappa of the compared methods on the LK dataset (mean ± standard deviation, %):

Class | SSRN | A2S2K-ResNet | DSGSF | LMSS-NAS | DCFSL | HFSL | MLFS-NAS
---|---|---|---|---|---|---|---
1 | 93.97 ± 4.58 | 92.07 ± 0.73 | 99.12 ± 0.01 | 98.17 ± 0.53 | 99.35 ± 0.37 | 94.39 ± 4.03 | 98.45 ± 0.53
2 | 54.57 ± 15.75 | 76.51 ± 19.53 | 94.93 ± 0.42 | 95.23 ± 8.08 | 93.71 ± 4.22 | 99.00 ± 0.39 | 97.58 ± 0.30
3 | 59.98 ± 11.32 | 89.33 ± 3.31 | 65.59 ± 0.69 | 74.45 ± 22.62 | 93.75 ± 6.20 | 81.18 ± 6.03 | 98.32 ± 0.08
4 | 91.85 ± 6.70 | 74.08 ± 2.13 | 94.86 ± 0.21 | 98.40 ± 0.91 | 82.06 ± 5.31 | 97.04 ± 1.25 | 98.53 ± 0.23
5 | 43.85 ± 6.01 | 84.16 ± 0.86 | 76.99 ± 3.43 | 77.33 ± 12.01 | 98.29 ± 1.17 | 91.41 ± 3.71 | 98.42 ± 0.85
6 | 85.98 ± 12.90 | 97.38 ± 2.30 | 98.49 ± 1.21 | 97.39 ± 4.19 | 87.85 ± 3.62 | 96.40 ± 1.96 | 98.31 ± 0.88
7 | 98.65 ± 0.88 | 99.48 ± 0.39 | 99.34 ± 0.30 | 99.31 ± 0.65 | 99.94 ± 0.31 | 99.58 ± 0.37 | 98.31 ± 1.22
8 | 87.20 ± 15.57 | 93.64 ± 2.53 | 94.73 ± 1.17 | 68.24 ± 13.74 | 88.96 ± 3.86 | 96.38 ± 0.86 | 97.64 ± 0.36
9 | 46.62 ± 15.21 | 90.63 ± 1.46 | 74.13 ± 2.42 | 59.23 ± 4.65 | 92.97 ± 1.49 | 87.08 ± 0.75 | 98.55 ± 0.48
OA (%) | 87.48 ± 1.31 | 88.42 ± 0.49 | 94.60 ± 0.14 | 94.99 ± 1.36 | 92.67 ± 1.14 | 96.84 ± 0.97 | 98.74 ± 0.46
AA (%) | 73.63 ± 1.24 | 73.25 ± 1.23 | 88.69 ± 0.47 | 86.84 ± 2.19 | 92.99 ± 0.55 | 93.61 ± 0.45 | 98.93 ± 0.11
K × 100 | 83.64 ± 1.76 | 85.22 ± 0.61 | 92.88 ± 0.23 | 93.34 ± 1.83 | 90.54 ± 1.35 | 95.86 ± 1.27 | 98.61 ± 0.35
Source Datasets | Target Datasets | OA (%) | AA (%) | K × 100
---|---|---|---|---
MI | PC | 97.49 ± 0.98 | 94.62 ± 0.46 | 96.41 ± 0.76
MI | IN | 73.77 ± 0.89 | 76.94 ± 1.37 | 71.35 ± 1.22
MI | LK | 96.50 ± 1.69 | 96.63 ± 0.91 | 96.39 ± 1.13
CK | PC | 97.36 ± 0.51 | 94.56 ± 1.07 | 96.38 ± 0.84
CK | IN | 77.42 ± 1.02 | 79.81 ± 1.14 | 75.29 ± 1.27
CK | LK | 96.47 ± 0.70 | 96.53 ± 0.39 | 96.32 ± 0.51
MI&CK | PC | 98.57 ± 0.53 | 95.85 ± 0.47 | 97.89 ± 0.66
MI&CK | IN | 79.85 ± 0.78 | 80.06 ± 3.19 | 76.91 ± 0.90
MI&CK | LK | 98.74 ± 0.46 | 98.93 ± 0.11 | 98.61 ± 0.35
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, A.; Zhang, K.; Wu, H.; Chen, H.; Wang, M. Meta-Learning-Integrated Neural Architecture Search for Few-Shot Hyperspectral Image Classification. Electronics 2025, 14, 2952. https://doi.org/10.3390/electronics14152952