Unsupervised Multi-Level Feature Extraction for Improvement of Hyperspectral Classification
Abstract
1. Introduction
2. Preliminaries
2.1. Convolution Operation
2.2. Deconvolution Operation
3. Proposed Framework for Multi-Level Feature Extraction
4. Experiments
4.1. Data Set Description
4.2. Network Construction
4.3. Comparison and Analysis of Experimental Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Scafutto, R.D.P.M.; de Souza Filho, C.R.; de Oliveira, W.J. Hyperspectral remote sensing detection of petroleum hydrocarbons in mixtures with mineral substrates: Implications for onshore exploration and monitoring. ISPRS J. Photogramm. Remote Sens. 2017, 128, 146–157.
- Delalieux, S.; Zarco-Tejada, P.J.; Tits, L.; Bello, M.Á.J.; Intrigliolo, D.S.; Somers, B. Unmixing-based fusion of hyperspatial and hyperspectral airborne imagery for early detection of vegetation stress. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2571–2582.
- Chang, C.I. Hyperspectral Imaging: Techniques for Spectral Detection and Classification; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013.
- Tao, Y.; Xu, M.; Zhong, Y.; Cheng, Y. GAN-assisted two-stream neural network for high-resolution remote sensing image classification. Remote Sens. 2017, 9, 1328.
- Anwar, S.M.; Majid, M.; Qayyum, A.; Awais, M.; Alnowami, M.; Khan, M.K. Medical image analysis using convolutional neural networks: A review. J. Med. Syst. 2018, 42, 226.
- Gao, Q.; Lim, S.; Jia, X. Hyperspectral image classification using convolutional neural networks and multiple feature learning. Remote Sens. 2018, 10, 299.
- Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
- Otter, D.W.; Medina, J.R.; Kalita, J.K. A survey of the usages of deep learning for natural language processing. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 604–624.
- Guo, J.; He, H.; He, T.; Lausen, L.; Li, M.; Lin, H.; Shi, X.; Wang, C.; Xie, J.; Zha, S.; et al. GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing. J. Mach. Learn. Res. 2020, 21, 1–7.
- Young, T.; Hazarika, D.; Poria, S.; Cambria, E. Recent trends in deep learning based natural language processing. IEEE Comput. Intell. Mag. 2018, 13, 55–75.
- Wu, S.; Roberts, K.; Datta, S.; Du, J.; Ji, Z.; Si, Y.; Soni, S.; Wang, Q.; Wei, Q.; Xiang, Y.; et al. Deep learning in clinical natural language processing: A methodical review. J. Am. Med. Inform. Assoc. 2020, 27, 457–470.
- Zhao, Z.Q.; Zheng, P.; Xu, S.T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232.
- Yang, C.; Chen, J.; Li, Z.; Huang, Y. Structural Crack Detection and Recognition Based on Deep Learning. Appl. Sci. 2021, 11, 2868.
- Li, K.; Zhang, K.; Zhang, Z.; Liu, Z.; Hua, S.; He, J. A UAV Maneuver Decision-Making Algorithm for Autonomous Airdrop Based on Deep Reinforcement Learning. Sensors 2021, 21, 2233.
- Gadekallu, T.R.; Khare, N.; Bhattacharya, S.; Singh, S.; Reddy Maddikunta, P.K.; Ra, I.H.; Alazab, M. Early detection of diabetic retinopathy using PCA-firefly based deep learning model. Electronics 2020, 9, 274.
- Li, Y.; Zhang, H.; Shen, Q. Spectral-spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67.
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
- Yu, C.; Han, R.; Song, M.; Liu, C.; Chang, C.I. A simplified 2D-3D CNN architecture for hyperspectral image classification based on spatial–spectral fusion. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2020, 13, 2485–2501.
- Zhan, Y.; Hu, D.; Wang, Y.; Yu, X. Semisupervised hyperspectral image classification based on generative adversarial networks. IEEE Geosci. Remote Sens. Lett. 2017, 15, 212–216.
- Xie, F.; Gao, Q.; Jin, C.; Zhao, F. Hyperspectral Image Classification Based on Superpixel Pooling Convolutional Neural Network with Transfer Learning. Remote Sens. 2021, 13, 930.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. arXiv 2014, arXiv:1406.2661.
- Liang, H.; Bao, W.; Shen, X. Adaptive Weighting Feature Fusion Approach Based on Generative Adversarial Network for Hyperspectral Image Classification. Remote Sens. 2021, 13, 198.
- Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434.
- Zhang, M.; Gong, M.; Mao, Y.; Li, J.; Wu, Y. Unsupervised feature extraction in hyperspectral images based on wasserstein generative adversarial network. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2669–2688.
- Hu, A.; Xie, Z.; Xu, Y.; Xie, M.; Wu, L.; Qiu, Q. Unsupervised Haze Removal for High-Resolution Optical Remote-Sensing Images Based on Improved Generative Adversarial Networks. Remote Sens. 2020, 12, 4162.
- Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A.; Bottou, L. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408.
- Protopapadakis, E.; Doulamis, A.; Doulamis, N.; Maltezos, E. Stacked Autoencoders Driven by Semi-Supervised Learning for Building Extraction from near Infrared Remote Sensing Imagery. Remote Sens. 2021, 13, 371.
- Tao, C.; Pan, H.; Li, Y.; Zou, Z. Unsupervised spectral-spatial feature learning with stacked sparse autoencoder for hyperspectral imagery classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2438–2442.
- Zhou, P.; Han, J.; Cheng, G.; Zhang, B. Learning compact and discriminative stacked autoencoder for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4823–4833.
- Zhang, X.; Liang, Y.; Li, C.; Huyan, N.; Jiao, L.; Zhou, H. Recursive autoencoders-based unsupervised feature learning for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1928–1932.
- Mei, S.; Ji, J.; Geng, Y.; Zhang, Z.; Li, X.; Du, Q. Unsupervised spatial–spectral feature learning by 3D convolutional autoencoder for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6808–6820.
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
- Adelson, E.H.; Anderson, C.H.; Bergen, J.R.; Burt, P.J.; Ogden, J.M. Pyramid methods in image processing. RCA Eng. 1984, 29, 33–41.
- Tran, D.; Bourdev, L.; Fergus, R.; Torresani, L.; Paluri, M. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 4489–4497.
- Zuo, Z.; Shuai, B.; Wang, G.; Liu, X.; Wang, X.; Wang, B.; Chen, Y. Learning contextual dependence with convolutional hierarchical recurrent neural networks. IEEE Trans. Image Process. 2016, 25, 2983–2996.
- Agarwal, A.; El-Ghazawi, T.; El-Askary, H.; Le-Moigne, J. Efficient hierarchical-PCA dimension reduction for hyperspectral imagery. In Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, Giza, Egypt, 15–18 December 2007; pp. 353–356.
- Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Liu, Y.; Zhou, S.; Chen, Q. Discriminative deep belief networks for visual data classification. Pattern Recognit. 2011, 44, 2287–2296.
- Abdel-Zaher, A.M.; Eldeib, A.M. Breast cancer classification using deep belief networks. Expert Syst. Appl. 2016, 46, 139–144.
- Li, J.; Xi, B.; Li, Y.; Du, Q.; Wang, K. Hyperspectral classification based on texture feature enhancement and deep belief networks. Remote Sens. 2018, 10, 396.
- Attias, H. Independent factor analysis. Neural Comput. 1999, 11, 803–851.
- Kang, M.; Ji, K.; Leng, X.; Xing, X.; Zou, H. Synthetic aperture radar target recognition with feature fusion based on a stacked autoencoder. Sensors 2017, 17, 192.
- Liang, P.; Shi, W.; Zhang, X. Remote sensing image classification based on stacked denoising autoencoder. Remote Sens. 2018, 10, 16.
Layer | Input Size | Kernel | Output |
---|---|---|---|
Conv-1 | | | |
Conv-2 | | | |
Conv-3 | | | |
Conv-4 | | | |
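The layer parameters of the table above (input sizes, kernels, and output sizes) are not preserved in this version of the text. As a purely illustrative sketch of the kind of network described in Sections 2 and 3, the PyTorch snippet below builds a four-layer 3D convolutional encoder (Conv-1 to Conv-4) with a mirrored deconvolution (transposed-convolution) decoder and trains it on the unsupervised reconstruction loss; all kernel sizes, channel counts, and the input patch size are assumptions, not the configuration reported by the authors.

```python
# Hypothetical sketch of a 3D convolutional autoencoder (3D-CAE) with a
# four-layer encoder mirroring the Conv-1..Conv-4 structure in the table above.
# Kernel sizes, channel counts, and the patch size are illustrative assumptions.
import torch
import torch.nn as nn


class CAE3D(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: each Conv3d acts jointly on the spectral and spatial axes.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),    # Conv-1
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),   # Conv-2
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=(5, 3, 3), padding=(2, 1, 1)),  # Conv-3
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1)),  # Conv-4
            nn.ReLU(inplace=True),
        )
        # Decoder: mirrored deconvolution (transposed-convolution) layers that
        # reconstruct the input cube from the deepest feature maps.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(8, 1, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
        )

    def forward(self, x):
        # x: (batch, 1, bands, patch, patch) hyperspectral cube
        features = self.encoder(x)
        reconstruction = self.decoder(features)
        return features, reconstruction


# Unsupervised training minimizes the reconstruction error, so no labels are needed.
model = CAE3D()
cube = torch.randn(4, 1, 103, 9, 9)  # assumed 103 bands, 9 x 9 patches
features, recon = model(cube)
loss = nn.functional.mse_loss(recon, cube)
```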
Pavia University data set: number of samples per class used for training and testing.

Class No. | Class | Total | Training | Testing |
---|---|---|---|---|
1 | Asphalt | 6631 | 663 | 5968 |
2 | Meadows | 18,649 | 1865 | 16,784 |
3 | Gravel | 2099 | 210 | 1889 |
4 | Trees | 3064 | 306 | 2758 |
5 | Metal sheets | 1345 | 135 | 1210 |
6 | Bare soil | 5029 | 503 | 4526 |
7 | Bitumen | 1330 | 133 | 1179 |
8 | Bricks | 3682 | 368 | 3314 |
9 | Shadows | 947 | 95 | 852 |
Indian Pines data set: number of samples per class used for training and testing.

Class No. | Class | Total | Training | Testing |
---|---|---|---|---|
1 | Alfalfa | 46 | 5 | 41 |
2 | Corn-notill | 1428 | 143 | 1285 |
3 | Corn-mintill | 830 | 83 | 747 |
4 | Corn | 237 | 24 | 213 |
5 | Grass-pasture | 483 | 48 | 435 |
6 | Grass-trees | 730 | 73 | 657 |
7 | Grass-pasture-mowed | 28 | 3 | 25 |
8 | Hay-windrowed | 478 | 48 | 430 |
9 | Oats | 20 | 2 | 18 |
10 | Soybean-notill | 972 | 97 | 875 |
11 | Soybean-mintill | 2455 | 25 | 2430 |
12 | Soybean-clean | 593 | 59 | 534 |
13 | Wheat | 205 | 21 | 184 |
14 | Woods | 1265 | 127 | 1138 |
15 | Buildings-grass-trees-drives | 386 | 39 | 347 |
16 | Stone-steel-towers | 93 | 9 | 84 |
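In both tables above, roughly 10% of the labeled samples of each class are used for training and the remainder for testing. The sketch below shows a minimal per-class (stratified) random split of that kind; the function name, the fixed 10% ratio, and the convention that label 0 marks unlabeled background pixels are illustrative assumptions.

```python
# Minimal sketch of a per-class (stratified) random split: ~10% of the labeled
# pixels of every class go to training, the rest to testing.
# Names, the 10% ratio, and the "0 = unlabeled" convention are assumptions.
import numpy as np


def split_per_class(labels, train_ratio=0.1, seed=0):
    """labels: 1-D array of class ids; 0 is treated as unlabeled background."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        if c == 0:  # skip unlabeled pixels
            continue
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(train_ratio * idx.size)))
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)
```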
Per-class accuracy (%), OA, AA, and Kappa on the Pavia University data set using the single-level feature (Prediction IV) and multi-level features (Predictions III, II, and I).

Class No. | Prediction IV | Prediction III | Prediction II | Prediction I |
---|---|---|---|---|
1 | 95.17 | 96.47 | 97.68 | 98.28 |
2 | 78.94 | 89.47 | 93.14 | 94.14 |
3 | 97.45 | 98.69 | 98.47 | 99.46 |
4 | 95.98 | 96.96 | 97.98 | 97.75 |
5 | 99.86 | 99.98 | 100.00 | 100.00 |
6 | 78.43 | 84.81 | 88.67 | 96.60 |
7 | 77.44 | 79.70 | 79.92 | 91.43 |
8 | 90.71 | 93.45 | 96.06 | 96.79 |
9 | 98.83 | 99.36 | 99.78 | 99.79 |
OA (%) | 92.76 | 95.11 | 96.19 | 98.10 |
AA (%) | 90.33 | 93.20 | 94.65 | 97.14 |
Kappa (%) | 90.33 | 93.49 | 94.93 | 97.48 |
Per-class accuracy (%), OA, AA, and Kappa on the Indian Pines data set using the single-level feature (Prediction IV) and multi-level features (Predictions III, II, and I).

Class No. | Prediction IV | Prediction III | Prediction II | Prediction I |
---|---|---|---|---|
1 | 80.43 | 82.61 | 84.78 | 89.13 |
2 | 56.63 | 70.36 | 92.05 | 96.14 |
3 | 58.89 | 74.58 | 82.28 | 87.04 |
4 | 53.16 | 72.99 | 80.17 | 87.34 |
5 | 84.06 | 95.24 | 97.31 | 98.75 |
6 | 93.84 | 96.85 | 97.81 | 98.90 |
7 | 82.14 | 75.00 | 53.57 | 67.85 |
8 | 97.28 | 98.54 | 100.00 | 100.00 |
9 | 95.00 | 90.00 | 75.00 | 100.00 |
10 | 54.22 | 75.21 | 86.93 | 91.04 |
11 | 76.86 | 75.89 | 83.29 | 88.39 |
12 | 85.36 | 94.63 | 97.07 | 97.56 |
13 | 54.97 | 66.61 | 76.73 | 83.31 |
14 | 94.23 | 98.10 | 97.94 | 99.53 |
15 | 76.69 | 83.68 | 78.76 | 86.27 |
16 | 92.47 | 95.70 | 96.77 | 97.85 |
OA (%) | 73.77 | 81.70 | 88.17 | 92.08 |
AA (%) | 77.27 | 84.12 | 86.28 | 91.83 |
Kappa (%) | 69.95 | 79.16 | 86.54 | 90.98 |
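OA, AA, and Kappa in these tables are the standard agreement measures computed from the confusion matrix of the test set: overall accuracy, the mean of the per-class accuracies, and Cohen's kappa coefficient. A short sketch of how they can be computed is given below; the function and variable names are illustrative.

```python
# Overall accuracy (OA), average accuracy (AA), and Cohen's kappa coefficient
# computed from a confusion matrix; names here are illustrative.
import numpy as np


def classification_metrics(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total                                 # overall accuracy
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    aa = per_class.mean()                                     # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2   # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa
```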
Per-class accuracy (%), OA, AA, and Kappa on the Pavia University data set for supervised feature extraction (DBN, 2D-CNN) and unsupervised feature extraction (FA, SAE, 3D-CAE single-level and multi-level).

Class No. | DBN | 2D-CNN | FA | SAE | 3D-CAE Single-Level | 3D-CAE Multi-Level |
---|---|---|---|---|---|---|
1 | 95.85 | 94.78 | 95.88 | 96.26 | 97.48 | 98.58 |
2 | 75.51 | 80.18 | 79.56 | 73.70 | 93.14 | 94.76 |
3 | 97.88 | 99.12 | 86.47 | 97.55 | 98.76 | 99.68 |
4 | 96.87 | 88.97 | 95.14 | 95.07 | 98.07 | 97.78 |
5 | 99.78 | 98.14 | 99.03 | 100.00 | 100.00 | 100.00 |
6 | 76.60 | 92.38 | 94.55 | 66.91 | 92.32 | 97.71 |
7 | 72.93 | 74.96 | 81.65 | 82.78 | 86.99 | 95.49 |
8 | 95.11 | 95.36 | 69.61 | 90.82 | 96.77 | 98.07 |
9 | 99.79 | 90.39 | 95.88 | 97.88 | 100.00 | 100.00 |
OA (%) | 92.97 | 94.70 | 88.16 | 91.45 | 97.01 | 98.65 |
AA (%) | 90.03 | 90.48 | 88.64 | 89.00 | 95.94 | 98.01 |
Kappa (%) | 90.60 | 92.96 | 84.62 | 88.50 | 96.03 | 98.21 |
Per-class accuracy (%), OA, AA, and Kappa on the Indian Pines data set for supervised feature extraction (DBN, 2D-CNN) and unsupervised feature extraction (FA, SAE, 3D-CAE single-level and multi-level).

Class No. | DBN | 2D-CNN | FA | SAE | 3D-CAE Single-Level | 3D-CAE Multi-Level |
---|---|---|---|---|---|---|
1 | 89.13 | 60.87 | 89.13 | 65.22 | 84.78 | 91.30 |
2 | 92.77 | 95.54 | 61.81 | 86.14 | 93.49 | 94.61 |
3 | 92.36 | 80.74 | 61.90 | 84.59 | 91.25 | 96.98 |
4 | 87.76 | 94.94 | 43.88 | 83.12 | 91.14 | 94.93 |
5 | 75.77 | 86.96 | 87.78 | 83.85 | 97.10 | 97.51 |
6 | 92.33 | 98.36 | 81.51 | 95.21 | 99.17 | 99.45 |
7 | 92.86 | 89.29 | 89.29 | 50.00 | 75.00 | 85.71 |
8 | 98.12 | 99.58 | 93.10 | 94.35 | 99.58 | 100.00 |
9 | 90.00 | 80.00 | 65.00 | 65.00 | 95.00 | 100.00 |
10 | 77.77 | 80.97 | 53.60 | 88.37 | 88.78 | 94.15 |
11 | 81.02 | 99.35 | 88.96 | 93.60 | 92.67 | 95.47 |
12 | 98.54 | 99.02 | 99.99 | 88.29 | 94.63 | 91.39 |
13 | 85.67 | 75.71 | 34.23 | 71.50 | 90.05 | 90.73 |
14 | 98.74 | 99.21 | 95.65 | 92.89 | 98.33 | 99.92 |
15 | 95.34 | 97.15 | 70.47 | 74.35 | 92.75 | 96.89 |
16 | 46.23 | 76.34 | 68.82 | 55.91 | 99.97 | 95.69 |
OA (%) | 87.87 | 92.04 | 75.16 | 87.85 | 93.71 | 96.17 |
AA (%) | 87.15 | 88.38 | 74.07 | 79.53 | 92.73 | 95.29 |
Kappa (%) | 86.24 | 90.85 | 71.16 | 86.12 | 92.83 | 95.63 |
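The "Multi-Level" columns correspond to classifying features gathered from several levels of the network rather than only the deepest (single-level) feature map. The sketch below illustrates one way such multi-level features can be fused by flattening and concatenation before a conventional classifier; the global-average flattening and the scikit-learn SVM used here are assumptions for illustration, not necessarily the authors' exact pipeline.

```python
# Illustrative fusion of features extracted at several network levels.
# The global-average flattening and the SVM classifier are assumptions;
# the paper's exact fusion and classification pipeline may differ.
import numpy as np
from sklearn.svm import SVC


def flatten_level(feature_maps):
    # (n_samples, channels, depth, height, width) -> (n_samples, channels)
    n, c = feature_maps.shape[:2]
    return feature_maps.reshape(n, c, -1).mean(axis=2)


def fuse_levels(level_features):
    # Concatenate flattened features from every selected level (multi-level),
    # instead of keeping only the deepest one (single-level).
    return np.concatenate([flatten_level(f) for f in level_features], axis=1)


# Toy example with random "features" from three levels of the network.
rng = np.random.default_rng(0)
levels_train = [rng.normal(size=(50, ch, 8, 3, 3)) for ch in (16, 32, 64)]
levels_test = [rng.normal(size=(20, ch, 8, 3, 3)) for ch in (16, 32, 64)]
y_train = rng.integers(0, 3, size=50)

clf = SVC(kernel="rbf").fit(fuse_levels(levels_train), y_train)
y_pred = clf.predict(fuse_levels(levels_test))
```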
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).