A Deep Learning-Based Hyperspectral Object Classification Approach via Imbalanced Training Samples Handling
Abstract
1. Introduction
- To address the issue of imbalanced classes, a modified hybrid resampling approach was adopted: large classes are downsampled with one method and minority classes are oversampled with another by generating synthetic samples, yielding a suitably balanced dataset.
- To streamline the classification process and reduce computational load, the number of features was reduced using a new subgrouping-based method combined with a greedy feature selection technique, which retains only the most informative spectral features while discarding redundant ones.
- A hybrid 3D-2D CNN network was utilized; the 3D-CNN and 2D-CNN layers that make up the proposed model are combined to make full use of both the spectral and spatial feature maps, maximizing classification accuracy.
2. Proposed Methodology
2.1. Motivation
2.2. Proposed Resampling Method
1. Calculate the distance between each instance in the majority class and each instance in the minority class;
2. Select the instances from the majority class that have the shortest distance to the instances in the minority class;
3. Store the selected instances for elimination;
4. Determine the number of instances q in the minority class;
5. Retain q × p instances of the majority class.
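The resampling steps listed above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the function name, the choice of Euclidean distance, and the reading that the nearest majority instances are the ones eliminated are our assumptions.

```python
import numpy as np

def nearmiss_undersample(majority, minority, p=1.0):
    """Undersample the majority class following the five steps above.

    majority, minority: 2D arrays of shape (n_samples, n_features).
    p: ratio of retained majority samples to the minority size q.
    """
    # Step 1: pairwise Euclidean distances, majority -> minority.
    dists = np.linalg.norm(
        majority[:, None, :] - minority[None, :, :], axis=-1
    )
    # Step 2: score each majority instance by its shortest distance
    # to any minority instance.
    nearest = dists.min(axis=1)
    # Steps 3-5: mark the closest majority instances for elimination
    # until q * p instances remain.
    q = len(minority)
    keep = int(q * p)
    order = np.argsort(nearest)            # closest first
    survivors = order[len(majority) - keep:]
    return majority[survivors]

rng = np.random.default_rng(0)
maj = rng.normal(0.0, 1.0, size=(200, 4))
mino = rng.normal(3.0, 1.0, size=(40, 4))
balanced_majority = nearmiss_undersample(maj, mino, p=1.0)
print(balanced_majority.shape)  # (40, 4)
```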
2.3. Proposed Dimensionality Reduction Method
Algorithm 1: Overall proposed methodology
2.4. Proposed Deep Learning Model and Classification
Algorithm 2: Dimensionality reduction
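Since the body of Algorithm 2 is not reproduced here, the subgrouping-plus-greedy-selection idea can be sketched as below. The grouping into contiguous band blocks, per-group NMF, and variance as the greedy relevance score are illustrative assumptions standing in for the paper's exact criterion.

```python
import numpy as np
from sklearn.decomposition import NMF

def subgroup_nmf_reduce(pixels, n_groups=4, comps_per_group=4, n_select=5):
    """Split the spectral bands into contiguous groups, run NMF inside
    each group, then greedily keep the components with the highest
    variance (a simple stand-in for the paper's relevance criterion).

    pixels: non-negative array of shape (n_pixels, n_bands).
    Returns an (n_pixels, n_select) feature matrix.
    """
    groups = np.array_split(np.arange(pixels.shape[1]), n_groups)
    feats = []
    for idx in groups:
        nmf = NMF(n_components=comps_per_group, init="nndsvda",
                  max_iter=500, random_state=0)
        feats.append(nmf.fit_transform(pixels[:, idx]))
    feats = np.hstack(feats)                      # all candidate components
    order = np.argsort(feats.var(axis=0))[::-1]   # greedy: best first
    return feats[:, order[:n_select]]

rng = np.random.default_rng(1)
cube = rng.random((300, 64))                      # 300 pixels, 64 bands
reduced = subgroup_nmf_reduce(cube)
print(reduced.shape)  # (300, 5)
```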
3. Experiments
3.1. Dataset Description
1. Salinas Scene: the Salinas Scene (SC) dataset for HSIs was acquired in 1992 using the AVIRIS sensor over the Salinas Valley in California. The original image comprises 224 bands [39]. To create an HSI dataset with 204 bands, we excluded twenty water absorption bands: bands 108–112, 154–167, and 224. The spatial dimensions of the scene are 512 × 217 pixels, and the scene has a total of sixteen labeled classes. Figure 3a displays ground truth images of the SC dataset, and Table 1 shows the sample distributions for the experiment.
2. Pavia University: this dataset consists of ROSIS optical sensor data collected over the University of Pavia (PU) in the city of Pavia in northern Italy. The PU dataset has a spatial resolution of 1.3 m and 103 spectral bands [39]. There are nine ground truth classes in PU. Further information on the PU experimental datasets may be accessed at [23]. The ground truth image and the sample distributions of the original dataset are shown in Figure 3c and Table 2.
3. Kennedy Space Center: the final dataset, referred to as the Kennedy Space Center (KSC) dataset, was acquired on 23 March 1996 using the AVIRIS sensor over the KSC region in Florida. This HSI dataset has spatial dimensions of 512 × 614 pixels and comprises 224 spectral bands [39]. Following the removal of 48 noisy bands, a total of 176 spectral bands were obtained. The dataset consists of thirteen labeled classes, as outlined in Table 3. The sample distributions in the original dataset are shown in Figure 3b and Table 3.
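The water-absorption band removal described for the SC dataset can be expressed as a small helper. The function name is ours, and the band numbers follow the 1-based indexing used in the text.

```python
import numpy as np

# 1-based AVIRIS band numbers excluded from the Salinas scene:
# 108-112, 154-167, and 224 (twenty bands in total).
WATER_BANDS = list(range(108, 113)) + list(range(154, 168)) + [224]

def remove_water_bands(cube):
    """Drop the water absorption bands from an HSI cube of shape
    (height, width, 224), returning a (height, width, 204) cube."""
    drop = {b - 1 for b in WATER_BANDS}           # convert to 0-based
    keep = [b for b in range(cube.shape[-1]) if b not in drop]
    return cube[:, :, keep]

cube = np.zeros((512, 217, 224), dtype=np.float32)
print(remove_water_bands(cube).shape)  # (512, 217, 204)
```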
3.2. Experimental Setup and Hyperparameter Tuning
3.3. Comparison Methods
1. SVM [40]: Mounika et al. used Principal Component Analysis (PCA) for feature extraction in the classification of hyperspectral images with a Support Vector Machine (SVM). Their paper demonstrated the effectiveness of PCA as a preprocessing technique for feature extraction in hyperspectral image classification, providing a valuable tool for researchers in remote sensing and image analysis.
2. 3D-CNN [27]: Li et al. proposed a 3D convolutional neural network (CNN) for spectral–spatial classification of hyperspectral images. Their paper introduced a novel method that exploits both spectral and spatial information to improve classification performance. However, 3D-CNNs for HSI classification can pose significant challenges in terms of computational requirements and data availability, potentially restricting their feasibility for certain practical applications.
3. Fast 3D-CNN [28]: Ahmad et al. introduced a fast and compact 3D-CNN for hyperspectral image classification that achieves high accuracy with a relatively small number of parameters. The paper's contribution lies in an efficient method for processing large amounts of hyperspectral data in real-time applications. A potential disadvantage is the risk of losing relevant information during feature extraction, which can negatively impact classification accuracy.
4. HybridSN [29]: HybridSN is a deep learning architecture proposed for efficient hyperspectral image classification. It stacks 3D convolutional layers followed by a 2D convolutional layer to capture both the spectral and spatial features of hyperspectral data. The architecture can be challenging to tune, as its performance depends strongly on the careful selection and adjustment of hyperparameters.
5. SpectralNET [41]: this paper proposes a deep learning architecture that combines the spectral–spatial features of hyperspectral data with the multi-resolution analysis capabilities of wavelet transforms, with the aim of improving classification performance.
6. MBDA [42]: this paper's main contribution is a Multibranch 3D-Dense Attention Network (MBDA) that incorporates attention mechanisms to capture spatial and spectral information effectively. While MBDA offers promising results in hyperspectral image classification, its architecture and attention mechanisms add complexity, requiring careful implementation and large computational resources for practical deployment.
7. Hybrid 3D-CNN/2D-CNN [43]: in this paper, the authors proposed a hybrid CNN architecture combining 3D-CNN and 2D-CNN with depthwise separable convolutions and group convolutions to improve the classification performance of hyperspectral images. However, the lack of comparison with state-of-the-art methods on larger or more diverse hyperspectral datasets makes it difficult to assess the generalizability and scalability of the method beyond the specific conditions used in the study.
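The SVM baseline [40] can be reproduced in outline with scikit-learn. The synthetic data and hyperparameters below are placeholders, not the settings used in the cited paper.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for flattened hyperspectral pixels: 200 "bands", 5 classes.
X, y = make_classification(n_samples=1000, n_features=200,
                           n_informative=30, n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# PCA compresses the spectral dimension before the SVM, as in [40].
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```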
3.4. Complexity Analysis
1. Conv3D layer: the number of parameters in a Conv3D layer is filters × (kernel_depth × kernel_height × kernel_width × input_channels + 1). The first Conv3D layer has eight filters and a 3 × 3 × 3 kernel, and the input has one channel; thus, this layer has 8 × (3 × 3 × 3 × 1 + 1) = 224 parameters.
2. BatchNormalization layer: the number of parameters in a BatchNormalization layer equals four times the number of filters in the previous layer. The first BatchNormalization layer follows a Conv3D layer with eight filters; thus, it has 4 × 8 = 32 parameters.
3. Conv2D layer: the number of parameters in a Conv2D layer is filters × (kernel_height × kernel_width × input_channels + 1). The Conv2D layer has 32 filters and a 3 × 3 kernel, and its input (after reshaping) has 96 channels; thus, this layer has 32 × (3 × 3 × 96 + 1) = 27,680 parameters.
4. DepthwiseConv2D layer: the number of parameters in a DepthwiseConv2D layer is kernel_height × kernel_width × input_channels + input_channels. The DepthwiseConv2D layer has a 3 × 3 kernel and 32 input channels; thus, this layer has 3 × 3 × 32 + 32 = 320 parameters.
5. Dense layer: the number of parameters in a Dense layer is units × (input_dimension + 1). The first Dense layer has 256 units and an input dimension of 7200; thus, this layer has 256 × (7200 + 1) = 1,843,456 parameters.
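The per-layer counts above can be checked with a few lines of arithmetic. The formulas assume bias terms in every convolutional and dense layer and the usual BatchNormalization convention of four parameters per channel.

```python
def conv3d_params(filters, kernel, in_ch):
    # filters * (k_d * k_h * k_w * in_channels + 1 bias per filter)
    kd, kh, kw = kernel
    return filters * (kd * kh * kw * in_ch + 1)

def batchnorm_params(channels):
    # gamma, beta, moving mean, moving variance per channel
    return 4 * channels

def conv2d_params(filters, kernel, in_ch):
    kh, kw = kernel
    return filters * (kh * kw * in_ch + 1)

def depthwise_conv2d_params(kernel, in_ch):
    # one k_h x k_w kernel per input channel, plus one bias per channel
    kh, kw = kernel
    return kh * kw * in_ch + in_ch

def dense_params(units, in_dim):
    return units * (in_dim + 1)

print(conv3d_params(8, (3, 3, 3), 1))        # 224
print(batchnorm_params(8))                   # 32
print(conv2d_params(32, (3, 3), 96))         # 27680
print(depthwise_conv2d_params((3, 3), 32))   # 320
print(dense_params(256, 7200))               # 1843456
```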
3.5. Classification Performance
3.5.1. Results for the SC
3.5.2. Results for the PU
3.5.3. Results for the KSC
4. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhao, J.L.; Lerner, A.; Shu, Z.; Gao, X.J.; Zee, C.S. Imaging spectrum of neurocysticercosis. Radiol. Infect. Dis. 2015, 1, 94–102. [Google Scholar] [CrossRef] [Green Version]
- Mallapragada, S.; Wong, M.; Hung, C.C. Dimensionality reduction of hyperspectral images for classification. In Proceedings of the Ninth International Conference on Information, Tokyo, Japan, 9–12 October 2018; Volume 1. [Google Scholar]
- Yuen, P.W.; Richardson, M. An introduction to hyperspectral imaging and its application for security, surveillance and target acquisition. Imaging Sci. J. 2010, 58, 241–253. [Google Scholar] [CrossRef]
- Lv, W.; Wang, X. Overview of hyperspectral image classification. J. Sens. 2020, 2020. [Google Scholar] [CrossRef]
- Uddin, M.P.; Mamun, M.A.; Hossain, M.A. PCA-based feature reduction for hyperspectral remote sensing image classification. IETE Tech. Rev. 2021, 38, 377–396. [Google Scholar] [CrossRef]
- Jayaprakash, C.; Damodaran, B.B.; Sowmya, V.; Soman, K. Dimensionality reduction of hyperspectral images for classification using randomized independent component analysis. In Proceedings of the 2018 5th IEEE International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 22–23 February 2018; pp. 492–496. [Google Scholar]
- Boori, M.S.; Paringer, R.A.; Choudhary, K.; Kupriyanov, A.V. Comparison of hyperspectral and multi-spectral imagery to building a spectral library and land cover classification performance. Comput. Opt. 2018, 42, 1035–1045. [Google Scholar] [CrossRef]
- Aparna, G.; Rachana, K.; Rikhita, K.; Phaneendra Kumar, B.L. Comparison of Feature Reduction Techniques for Change Detection in Remote Sensing. In Evolution in Signal Processing and Telecommunication Networks, Proceedings of Sixth International Conference on Microelectronics, Electromagnetics and Telecommunications (ICMEET 2021), Bhubaneswar, India, 27–28 August 2021; Springer: Berlin, Germany, 2022; Volume 2, pp. 325–333. [Google Scholar]
- Rodarmel, C.; Shan, J. Principal component analysis for hyperspectral image classification. Surv. Land Inf. Sci. 2002, 62, 115–122. [Google Scholar]
- Luo, G.; Chen, G.; Tian, L.; Qin, K.; Qian, S.E. Minimum noise fraction versus principal component analysis as a preprocessing step for hyperspectral imagery denoising. Can. J. Remote Sens. 2016, 42, 106–116. [Google Scholar] [CrossRef]
- Du, Q.; Zhu, W.; Yang, H.; Fowler, J.E. Segmented principal component analysis for parallel compression of hyperspectral imagery. IEEE Geosci. Remote Sens. Lett. 2009, 6, 713–717. [Google Scholar]
- Lixin, G.; Weixin, X.; Jihong, P. Segmented minimum noise fraction transformation for efficient feature extraction of hyperspectral images. Pattern Recognit. 2015, 48, 3216–3226. [Google Scholar] [CrossRef]
- Tsuge, S.; Shishibori, M.; Kuroiwa, S.; Kita, K. Dimensionality reduction using non-negative matrix factorization for information retrieval. In Proceedings of the 2001 IEEE International Conference on Systems, Man and Cybernetics. e-Systems and e-Man for Cybernetics in Cyberspace (Cat. No. 01CH37236), Tucson, AZ, USA, 7–10 October 2001; Volume 2, pp. 960–965. [Google Scholar]
- Wu, L.; Kong, C.; Hao, X.; Chen, W. A short-term load forecasting method based on GRU-CNN hybrid neural network model. Math. Probl. Eng. 2020, 2020, 1–10. [Google Scholar] [CrossRef] [Green Version]
- Liu, G.; Xiao, L.; Xiong, C. Image classification with deep belief networks and improved gradient descent. In Proceedings of the 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), Guangzhou, China, 21–24 July 2017; Volume 1, pp. 375–380. [Google Scholar]
- Farrell, M.D.; Mersereau, R.M. On the impact of PCA dimension reduction for hyperspectral detection of difficult targets. IEEE Geosci. Remote Sens. Lett. 2005, 2, 192–195. [Google Scholar] [CrossRef]
- Van Der Maaten, L.; Postma, E.; Van den Herik, J. Dimensionality reduction: A comparative review. J. Mach. Learn. Res. 2009, 10, 13. [Google Scholar]
- Du, Q.; Chang, C.I. Segmented PCA-based compression for hyperspectral image analysis. In Proceedings of the Chemical and Biological Standoff Detection; SPIE: Bellingham, WA, USA, 2004; Volume 5268, pp. 274–281. [Google Scholar]
- Chen, G.; Qian, S.E. Evaluation and comparison of dimensionality reduction methods and band selection. Can. J. Remote Sens. 2008, 34, 26–36. [Google Scholar] [CrossRef]
- Zhang, Z.Y. Nonnegative matrix factorization: Models, algorithms and applications. Data Mining Found. Intell. Paradig. 2012, 2, 99–134. [Google Scholar]
- Kumar, N.; Uppala, P.; Duddu, K.; Sreedhar, H.; Varma, V.; Guzman, G.; Walsh, M.; Sethi, A. Hyperspectral tissue image segmentation using semi-supervised NMF and hierarchical clustering. IEEE Trans. Med Imaging 2018, 38, 1304–1313. [Google Scholar] [CrossRef]
- Harikiran, J.J.H. Hyperspectral image classification using support vector machines. IAES Int. J. Artif. Intell. 2020, 9, 684. [Google Scholar] [CrossRef]
- Petersson, H.; Gustafsson, D.; Bergstrom, D. Hyperspectral image analysis using deep learning—A review. In Proceedings of the 2016 IEEE Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA), Oulu, Finland, 12–15 December 2016; pp. 1–6. [Google Scholar]
- Rissati, J.; Molina, P.; Anjos, C. Hyperspectral image classification using random forest and deep learning algorithms. In Proceedings of the 2020 IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS), Santiago, Chile, 22–26 March 2020; p. 132. [Google Scholar]
- Vaddi, R.; Manoharan, P. Hyperspectral image classification using CNN with spectral and spatial features integration. Infrared Phys. Technol. 2020, 107, 103296. [Google Scholar] [CrossRef]
- Medus, L.D.; Saban, M.; Frances-Villora, J.V.; Bataller-Mompean, M.; Rosado-Muñoz, A. Hyperspectral image classification using CNN: Application to industrial food packaging. Food Control 2021, 125, 107962. [Google Scholar] [CrossRef]
- Li, Y.; Zhang, H.; Shen, Q. Spectral-spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef] [Green Version]
- Ahmad, M.; Khan, A.M.; Mazzara, M.; Distefano, S.; Ali, M.; Sarfraz, M.S. A fast and compact 3-D CNN for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 19, 1–5. [Google Scholar] [CrossRef]
- Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281. [Google Scholar] [CrossRef] [Green Version]
- Kotsiantis, S.; Kanellopoulos, D.; Pintelas, P. Handling imbalanced datasets: A review. GESTS Int. Trans. Comput. Sci. Eng. 2006, 30, 25–36. [Google Scholar]
- Sun, Y.; Wong, A.K.; Kamel, M.S. Classification of imbalanced data: A review. Int. J. Pattern Recognit. Artif. Intell. 2009, 23, 687–719. [Google Scholar] [CrossRef]
- Chaubey, Y.P. Resampling methods: A practical guide to data analysis. Technometrics 2000, 42, 311. [Google Scholar] [CrossRef]
- Somasundaram, A.; Reddy, U.S. Data imbalance: Effects and solutions for classification of large and highly imbalanced data. In Proceedings of the 1st International Conference on Research in Engineering, Computers and Technology (ICRECT 2016), Tiruchirappalli, India, 8–10 September 2016. [Google Scholar]
- Liu, B.; Tsoumakas, G. Dealing with class imbalance in classifier chains via random undersampling. Knowl.-Based Syst. 2020, 192, 105292. [Google Scholar] [CrossRef]
- Zheng, Z.; Cai, Y.; Li, Y. Oversampling method for imbalanced classification. Comput. Inform. 2015, 34, 1017–1037. [Google Scholar]
- Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
- Alsowail, R.A. An Insider Threat Detection Model Using One-Hot Encoding and Near-Miss Under-Sampling Techniques. In Proceedings of the International Joint Conference on Advances in Computational Intelligence: IJCACI 2021, online, 23–24 October 2021; Springer: Berlin, Germany, 2022; pp. 183–196. [Google Scholar]
- Borgognone, M.G.; Bussi, J.; Hough, G. Principal component analysis in sensory analysis: Covariance or correlation matrix? Food Qual. Prefer. 2001, 12, 323–326. [Google Scholar] [CrossRef]
- Billah, M.; Waheed, S. Minimum redundancy maximum relevance (mRMR) based feature selection from endoscopic images for automatic gastrointestinal polyp detection. Multimed. Tools Appl. 2020, 79, 23633–23643. [Google Scholar] [CrossRef]
- Tsai, F.; Lin, E.K.; Yoshino, K. Spectrally segmented principal component analysis of hyperspectral imagery for mapping invasive plant species. Int. J. Remote Sens. 2007, 28, 1023–1039. [Google Scholar] [CrossRef]
- Chakraborty, T.; Trehan, U. Spectralnet: Exploring spatial-spectral waveletcnn for hyperspectral image classification. arXiv 2021, arXiv:2104.00341. [Google Scholar]
- Yin, J.; Qi, C.; Huang, W.; Chen, Q.; Qu, J. Multibranch 3d-dense attention network for hyperspectral image classification. IEEE Access 2022, 10, 71886–71898. [Google Scholar] [CrossRef]
- Diakite, A.; Jiangsheng, G.; Xiaping, F. Hyperspectral image classification using 3D 2D CNN. IET Image Process. 2021, 15, 1083–1092. [Google Scholar] [CrossRef]
- Mounika, K.; Aravind, K.; Yamini, M.; Navyasri, P.; Dash, S.; Suryanarayana, V. Hyperspectral image classification using SVM with PCA. In Proceedings of the 2021 6th IEEE International Conference on Signal Processing, Computing and Control (ISPCC), Solan, India, 7–9 October 2021; pp. 470–475. [Google Scholar]
- Graña, M.; Veganzons, M.; Ayerdi, B. Hyperspectral Remote Sensing Scenes. 2021. Available online: https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 11 July 2023).
No | Class Labels | Samples |
---|---|---|
1 | Brocoli_green_weeds_1 | 2009 |
2 | Brocoli_green_weeds_2 | 3726 |
3 | Fallow | 1976 |
4 | Fallow_rough_plow | 1294 |
5 | Fallow_smooth | 2678 |
6 | Stubble | 3959 |
7 | Celery | 3579 |
8 | Grapes_untrained | 11,271 |
9 | Soil_vinyard_develop | 6203 |
10 | Corn_senesced_green_weeds | 3278 |
11 | Lettuce_romaine_4wk | 1068 |
12 | Lettuce_romaine_5wk | 1927 |
13 | Lettuce_romaine_6wk | 916 |
14 | Lettuce_romaine_7wk | 1070 |
15 | Vinyard_untrained | 7268 |
16 | Vinyard_vertical_trellis | 1807 |
No | Class Labels | Samples |
---|---|---|
1 | Asphalt | 6631 |
2 | Meadows | 18,649 |
3 | Gravel | 2099 |
4 | Trees | 3064 |
5 | Painted_metal_sheets | 1345 |
6 | Bare_Soil | 5029 |
7 | Bitumen | 1330 |
8 | Self-Blocking_Bricks | 3682 |
9 | Shadows | 947 |
No | Class Labels | Samples |
---|---|---|
1 | Scrub | 761 |
2 | Willow_swamp | 243 |
3 | CP_hammock | 256 |
4 | Slash_pine | 252 |
5 | Broadleaf | 161 |
6 | Hardwood | 229 |
7 | Swamp | 105 |
8 | Graminoid_marsh | 431 |
9 | Spartina_marsh | 520 |
10 | Cattail_marsh | 404 |
11 | Salt_marsh | 419 |
12 | Mud_flat | 503 |
13 | Water | 927 |
No | SC: Train (Real) | SC: Train (Synth) | SC: Train (Final) | SC: Test | KSC: Train (Real) | KSC: Train (Synth) | KSC: Train (Final) | KSC: Test | PU: Train (Real) | PU: Train (Synth) | PU: Train (Final) | PU: Test |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 402 | 273 | 675 | 2708 | 152 | 0 | 80 | 321 | 1326 | 0 | 950 | 3803 |
2 | 745 | 0 | 676 | 2707 | 49 | 31 | 80 | 321 | 3730 | 0 | 951 | 3802 |
3 | 395 | 281 | 676 | 2707 | 51 | 29 | 80 | 321 | 420 | 531 | 951 | 3802 |
4 | 259 | 416 | 675 | 2708 | 50 | 30 | 80 | 321 | 613 | 337 | 950 | 3803 |
5 | 536 | 140 | 676 | 2707 | 32 | 48 | 80 | 321 | 269 | 682 | 951 | 3802 |
6 | 792 | 0 | 676 | 2707 | 46 | 34 | 80 | 321 | 1006 | 0 | 951 | 3802 |
7 | 716 | 0 | 676 | 2707 | 21 | 59 | 80 | 319 | 266 | 685 | 951 | 3802 |
8 | 2254 | 0 | 676 | 2707 | 86 | 0 | 81 | 320 | 736 | 214 | 950 | 3803 |
9 | 1241 | 0 | 675 | 2708 | 104 | 0 | 80 | 321 | 189 | 761 | 950 | 3802 |
10 | 656 | 19 | 675 | 2708 | 81 | 0 | 80 | 321 | ||||
11 | 214 | 462 | 676 | 2707 | 84 | 0 | 80 | 321 | ||||
12 | 385 | 291 | 676 | 2707 | 100 | 0 | 81 | 320 | ||||
13 | 183 | 493 | 676 | 2707 | 185 | 0 | 80 | 321 | ||||
14 | 214 | 462 | 676 | 2707 | ||||||||
15 | 1454 | 0 | 676 | 2707 | ||||||||
16 | 361 | 312 | 673 | 2695 |
Methods | Selected Ranked Features (SC) | Selected Ranked Features (PU) | Selected Ranked Features (KSC) |
---|---|---|---|
PCA | PC 1, 2, 3, 4, 5 | PC 1, 2, 3, 4, 5 | PC 1, 2, 3, 4, 5 |
Seg-PCA | Group: PC 4:24, 1:1, 3:14, 2:12, 2:11 | Group: PC 2:12, 1:1, 2:11, 1:2, 1:3 | Group: PC 1:2, 1:1, 2:4, 3:14, 2:5 |
MNF | PC 1, 2, 3, 4, 5 | PC 1, 2, 3, 4, 5 | PC 1, 2, 3, 4, 5 |
Seg-MNF | Group: MNFC 4:1, 1:1, 3:14, 2:11, 1:2 | Group: MNFC 1:1, 2:12, 2:11, 1:2, 1:3 | Group: MNFC 1:2, 2:4, 1:1, 3:14, 2:5 |
Bg-MNF | Group: MNFC 1:2, 1:4, 1:6, 1:3, 1:5 | Group: MNFC 1:3, 1:2, 1:4, 1:5, 1:7 | Group: MNFC 3:18, 3:16, 3:17, 3:15, 3:19 |
NMF | NMFC 1, 2, 3, 4, 5 | NMFC 1, 2, 3, 4, 5 | NMFC 1, 2, 3, 4, 5 |
Proposed | Group: NMFC 1:3, 4:28, 1:2, 1:4, 3:16 | Group: NMFC 1:3, 2:13, 1:10, 2:15, 1:7 | Group: NMFC 3:13, 2:8, 2:6, 3:16, 2:7 |
Layers | Outputs | Parameters |
---|---|---|
Input | [(None, 25, 25, 5, 1)] | 0 |
Conv3D(1) | (None, 23, 23, 3, 8) | 224 |
Batch Normalization1 | (None, 23, 23, 3, 8) | 32 |
Conv3D(2) | (None, 21, 21, 3, 16) | 1168 |
Batch Normalization2 | (None, 21, 21, 3, 16) | 64 |
Conv3D(3) | (None, 19, 19, 3, 32) | 4640 |
Batch Normalization3 | (None, 19, 19, 3, 32) | 128 |
Reshape | (None, 19, 19, 96) | 0 |
Conv2D | (None, 17, 17, 32) | 27,680 |
Batch Normalization4 | (None, 17, 17, 32) | 128 |
Depthwise Conv2D | (None, 15, 15, 32) | 320 |
Batch Normalization5 | (None, 15, 15, 32) | 128 |
Flatten | (None, 7200) | 0 |
Dense | (None, 256) | 1,843,456 |
Dropout | (None, 256) | 0 |
Dense | (None, 128) | 32,896 |
Dropout | (None, 128) | 0 |
Dense | (None, Classes) | 2064 |
Total trainable parameters: 1,912,688; non-trainable parameters: 240
Classes | Random O.S. | Random U.S. | Near-Miss U.S. | SMOTE O.S. | Random U.S. & O.S. | Random U.S. & SMOTE | Proposed |
---|---|---|---|---|---|---|---|
1 | 100.0 | 99.59 | 98.77 | 100.0 | 100.0 | 99.70 | 100.0 |
2 | 99.29 | 99.72 | 99.86 | 100.0 | 99.29 | 99.92 | 100.0 |
3 | 99.70 | 100.0 | 100.0 | 99.51 | 99.70 | 99.85 | 100.0 |
4 | 99.81 | 99.59 | 99.72 | 99.88 | 99.81 | 99.85 | 100.0 |
5 | 100.0 | 99.86 | 99.72 | 99.77 | 100.0 | 99.96 | 99.83 |
6 | 99.88 | 100.0 | 100.0 | 100.0 | 99.88 | 99.88 | 100.0 |
7 | 100.0 | 99.72 | 99.72 | 99.93 | 100.0 | 99.96 | 100.0 |
8 | 99.96 | 99.72 | 99.59 | 99.96 | 99.96 | 99.92 | 100.0 |
9 | 100.0 | 100.0 | 100.0 | 99.95 | 100.0 | 99.96 | 100.0 |
10 | 99.92 | 97.95 | 99.04 | 99.77 | 99.92 | 99.88 | 100.0 |
11 | 99.96 | 99.72 | 99.45 | 99.88 | 99.96 | 100.0 | 99.76 |
12 | 99.96 | 100.0 | 100.0 | 99.92 | 99.96 | 99.96 | 100.0 |
13 | 99.92 | 99.31 | 99.45 | 100.0 | 99.92 | 99.81 | 100.0 |
14 | 99.88 | 99.31 | 99.45 | 100.0 | 99.88 | 99.96 | 100.0 |
15 | 99.81 | 99.86 | 99.86 | 99.84 | 99.81 | 99.85 | 100.0 |
16 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
OA | 99.88 | 99.64 | 99.66 | 99.90 | 99.87 | 99.90 | 99.98 |
Kappa | 99.87 | 99.62 | 99.59 | 99.91 | 99.88 | 99.90 | 99.98 |
AA | 99.88 | 99.65 | 99.61 | 99.90 | 99.88 | 99.90 | 99.97 |
Dimensionality Reduction Methods | OA (Before, 10%) | Kappa (Before, 10%) | AA (Before, 10%) | OA (Before, 20%) | Kappa (Before, 20%) | AA (Before, 20%) | OA (After, 10%) | Kappa (After, 10%) | AA (After, 10%) | OA (After, 20%) | Kappa (After, 20%) | AA (After, 20%) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
PCA | 99.14 | 99.11 | 99.02 | 99.49 | 99.44 | 99.27 | 99.53 | 99.43 | 99.51 | 99.93 | 99.89 | 99.92 |
Segmented PCA | 99.30 | 99.24 | 99.27 | 99.54 | 99.49 | 99.31 | 99.52 | 99.49 | 99.55 | 99.88 | 99.85 | 99.86 |
MNF | 99.32 | 99.21 | 99.22 | 99.89 | 99.87 | 99.84 | 99.49 | 99.42 | 99.48 | 99.42 | 99.40 | 99.39 |
Segmented MNF | 99.34 | 99.29 | 99.23 | 99.31 | 99.24 | 99.61 | 99.58 | 99.55 | 99.56 | 99.91 | 99.90 | 99.91 |
Bg-MNF | 99.23 | 99.20 | 99.26 | 99.84 | 99.82 | 99.69 | 99.53 | 99.39 | 99.51 | 99.90 | 99.83 | 99.88 |
NMF | 99.42 | 99.35 | 99.38 | 99.49 | 99.43 | 99.81 | 99.60 | 99.52 | 99.65 | 99.93 | 99.91 | 99.92 |
Proposed (SNC) | 99.44 | 99.39 | 99.45 | 99.76 | 99.74 | 99.71 | 99.61 | 99.58 | 99.6 | 99.98 | 99.98 | 99.97 |
Window Size (Spatial) | 17 × 17 | 19 × 19 | 21 × 21 | 23 × 23 | 25 × 25 | 27 × 27 | 29 × 29 |
---|---|---|---|---|---|---|---|
Accuracy | 99.32 | 99.69 | 99.86 | 99.74 | 99.98 | 99.97 | 99.91 |
Time (s) | 44.93 | 46.39 | 60.21 | 82.63 | 83.91 | 87.21 | 92.15 |
Comparison Method | OA (10%) | Kappa (10%) | AA (10%) | Precision (10%) | Recall (10%) | F1 (10%) | OA (20%) | Kappa (20%) | AA (20%) | Precision (20%) | Recall (20%) | F1 (20%) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
SVM | 87.76 | 87.40 | 87.64 | 89.56 | 90.21 | 89.88 | 93.56 | 92.96 | 93.26 | 93.56 | 92.96 | 93.34 |
2D CNN | 96.46 | 96.04 | 95.88 | 98.45 | 97.88 | 98.16 | 98.89 | 98.85 | 98.87 | 98.89 | 98.85 | 99.54 |
3D CNN | 99.12 | 99.02 | 99.14 | 99.56 | 99.34 | 99.45 | 99.63 | 99.64 | 99.63 | 99.63 | 99.64 | 99.23 |
Fast 3D CNN | 99.17 | 99.02 | 99.13 | 99.62 | 99.59 | 99.6 | 99.77 | 99.81 | 99.79 | 99.77 | 99.81 | 99.93 |
HybridSN | 99.26 | 99.21 | 99.22 | 99.66 | 99.65 | 99.65 | 99.88 | 99.91 | 99.89 | 99.88 | 99.91 | 99.97 |
Spectral-NET | 99.35 | 99.27 | 99.34 | 99.77 | 99.76 | 97.78 | 99.69 | 99.55 | 99.64 | 100.0 | 99.93 | 99.97 |
Hybrid 3D-2D CNN | 99.49 | 99.41 | 99.42 | 100.0 | 99.94 | 99.97 | 99.77 | 99.68 | 99.75 | 100.0 | 100.0 | 100.0 |
MBDA | 99.42 | 99.36 | 99.39 | 100.0 | 99.93 | 99.96 | 99.62 | 99.54 | 99.60 | 100.0 | 99.97 | 100.0 |
Proposed (SNC) | 99.61 | 99.58 | 99.60 | 100.0 | 100.0 | 100.0 | 99.98 | 99.98 | 99.97 | 100.0 | 100.0 | 100.0 |
Classes | Random O.S. | Random U.S. | Near-Miss U.S. | SMOTE O.S. | Random U.S. & O.S. | Random U.S. & SMOTE | Proposed |
---|---|---|---|---|---|---|---|
1 | 99.08 | 99.83 | 99.79 | 100.0 | 100.0 | 99.13 | 100.0 |
2 | 99.87 | 99.98 | 99.91 | 99.89 | 99.97 | 99.21 | 100.0 |
3 | 100.0 | 99.84 | 99.42 | 99.92 | 99.94 | 98.85 | 99.92 |
4 | 99.58 | 97.53 | 98.61 | 99.92 | 99.88 | 99.73 | 99.87 |
5 | 97.36 | 99.82 | 99.5 | 99.97 | 99.81 | 99.15 | 100.0 |
6 | 99.60 | 100.0 | 100.0 | 99.82 | 99.95 | 99.32 | 99.89 |
7 | 99.62 | 99.92 | 99.95 | 99.97 | 99.72 | 99.41 | 99.95 |
8 | 99.74 | 98.58 | 99.08 | 99.03 | 99.83 | 99.39 | 99.79 |
9 | 100.0 | 99.67 | 99.27 | 99.97 | 99.76 | 99.62 | 99.87 |
OA | 99.48 | 99.52 | 99.56 | 99.85 | 99.89 | 99.38 | 99.94 |
Kappa | 99.43 | 99.42 | 99.51 | 99.79 | 99.86 | 99.29 | 99.92 |
AA | 99.42 | 99.46 | 99.50 | 99.83 | 99.87 | 99.31 | 99.92 |
Dimensionality Reduction Methods | OA (Before, 10%) | Kappa (Before, 10%) | AA (Before, 10%) | OA (Before, 20%) | Kappa (Before, 20%) | AA (Before, 20%) | OA (After, 10%) | Kappa (After, 10%) | AA (After, 10%) | OA (After, 20%) | Kappa (After, 20%) | AA (After, 20%) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
PCA | 98.65 | 98.61 | 98.63 | 99.33 | 99.32 | 98.35 | 99.54 | 99.48 | 99.52 | 99.9 | 99.89 | 99.89 |
Segmented PCA | 99.11 | 98.96 | 99.04 | 99.37 | 99.32 | 99.33 | 99.53 | 99.52 | 99.51 | 99.91 | 99.90 | 99.91 |
MNF | 99.21 | 99.13 | 99.19 | 99.44 | 99.41 | 99.42 | 99.32 | 99.26 | 99.25 | 99.60 | 99.55 | 99.57 |
Segmented MNF | 99.26 | 99.22 | 99.25 | 99.5 | 99.43 | 99.47 | 99.68 | 99.57 | 99.63 | 99.92 | 99.9 | 99.91 |
Bg-MNF | 99.23 | 99.14 | 99.20 | 99.47 | 99.44 | 99.43 | 99.57 | 99.55 | 99.55 | 99.91 | 99.89 | 99.91 |
NMF | 99.24 | 99.22 | 99.19 | 99.56 | 99.49 | 99.52 | 99.65 | 99.54 | 99.59 | 99.92 | 99.90 | 99.92 |
Proposed (SNC) | 99.35 | 99.31 | 99.34 | 99.59 | 99.48 | 98.54 | 99.67 | 99.61 | 99.63 | 99.94 | 99.92 | 99.92 |
Comparison Method | OA (10%) | Kappa (10%) | AA (10%) | Precision (10%) | Recall (10%) | F1 (10%) | OA (20%) | Kappa (20%) | AA (20%) | Precision (20%) | Recall (20%) | F1 (20%) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
SVM | 85.36 | 84.19 | 86.11 | 89.45 | 97.76 | 87.30 | 88.35 | 86.98 | 92.98 | 88.40 | 87.54 | 88.11 |
2D CNN | 95.43 | 95.24 | 95.17 | 96.32 | 98.02 | 97.34 | 96.78 | 96.43 | 98.75 | 98.14 | 97.46 | 97.20 |
3D CNN | 99.06 | 99.01 | 99.05 | 99.10 | 99.21 | 99.14 | 99.24 | 99.21 | 99.17 | 99.70 | 99.79 | 99.74 |
Fast 3D CNN | 99.08 | 99.04 | 98.99 | 99.3 | 99.43 | 99.36 | 99.31 | 99.26 | 99.29 | 99.84 | 99.89 | 99.86 |
Hybrid SN | 99.12 | 98.97 | 99.09 | 99.42 | 99.29 | 99.35 | 99.62 | 99.54 | 99.58 | 100.0 | 100.0 | 100.0 |
Spectral-NET | 99.36 | 99.31 | 99.37 | 100.0 | 99.87 | 99.93 | 99.79 | 99.65 | 99.74 | 100.0 | 100.0 | 100.0 |
Hybrid 3D-2D CNN | 99.19 | 99.03 | 99.14 | 99.78 | 99.65 | 99.71 | 99.53 | 99.45 | 99.51 | 100.0 | 100.0 | 100.0 |
MBDA | 99.21 | 99.13 | 99.17 | 99.83 | 99.87 | 99.92 | 99.61 | 99.57 | 99.59 | 99.78 | 99.89 | 99.85 |
Proposed (SNC) | 99.67 | 99.61 | 99.63 | 100.0 | 100.0 | 100.0 | 99.94 | 99.92 | 99.92 | 100.0 | 100.0 | 100.0 |
Window Size (Spatial) | 17 × 17 | 19 × 19 | 21 × 21 | 23 × 23 | 25 × 25 | 27 × 27 | 29 × 29 |
---|---|---|---|---|---|---|---|
Accuracy | 99.35 | 99.58 | 99.53 | 99.80 | 99.94 | 99.91 | 99.91 |
Time (s) | 42.79 | 45.23 | 62.83 | 80.48 | 82.81 | 84.58 | 89.75 |
Classes | Random O.S. | Random U.S. | Near-Miss U.S. | SMOTE O.S. | Random U.S. & O.S. | Random U.S. & SMOTE | Proposed |
---|---|---|---|---|---|---|---|
1 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
2 | 96.91 | 99.69 | 95.88 | 98.75 | 99.38 | 99.69 | 99.07 |
3 | 94.63 | 96.88 | 97.07 | 96.26 | 98.44 | 100.0 | 100.0 |
4 | 100.0 | 89.72 | 99.50 | 96.26 | 92.21 | 97.20 | 97.51 |
5 | 98.45 | 99.44 | 100.0 | 99.69 | 100.0 | 99.38 | 99.38 |
6 | 92.90 | 97.81 | 95.08 | 97.2 | 96.26 | 98.44 | 99.38 |
7 | 97.62 | 100.0 | 97.62 | 99.69 | 99.69 | 99.37 | 99.69 |
8 | 100.0 | 99.42 | 100.0 | 99.69 | 96.25 | 99.69 | 100.0 |
9 | 99.72 | 100.0 | 100.0 | 99.38 | 100.0 | 99.38 | 100.0 |
10 | 100.0 | 99.69 | 97.52 | 99.69 | 99.07 | 99.69 | 99.69 |
11 | 98.81 | 100.0 | 100.0 | 99.38 | 99.38 | 99.38 | 99.07 |
12 | 100.0 | 100.0 | 100.0 | 99.69 | 100.0 | 99.38 | 100.0 |
13 | 99.05 | 100.0 | 100.0 | 100.0 | 100.0 | 98.75 | 99.38 |
OA | 98.33 | 98.76 | 98.59 | 98.91 | 98.54 | 98.28 | 99.46 |
Kappa | 98.31 | 98.71 | 98.55 | 98.85 | 98.47 | 98.21 | 99.41 |
AA | 98.31 | 98.67 | 98.56 | 98.83 | 98.51 | 99.23 | 99.43 |
Dimensionality Reduction Methods | OA (Before, 10%) | Kappa (Before, 10%) | AA (Before, 10%) | OA (Before, 20%) | Kappa (Before, 20%) | AA (Before, 20%) | OA (After, 10%) | Kappa (After, 10%) | AA (After, 10%) | OA (After, 20%) | Kappa (After, 20%) | AA (After, 20%) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
PCA | 96.16 | 96.1 | 96.14 | 98.33 | 98.26 | 96.32 | 99.12 | 98.94 | 99.08 | 99.25 | 99.18 | 99.23 |
Segmented PCA | 98.31 | 98.28 | 98.27 | 98.63 | 98.58 | 98.56 | 99.25 | 99.18 | 99.23 | 98.81 | 98.74 | 98.73 |
MNF | 98.59 | 98.51 | 98.53 | 98.92 | 98.88 | 98.88 | 98.74 | 98.66 | 98.67 | 98.61 | 98.52 | 98.56 |
Segmented MNF | 99.12 | 99.05 | 99.08 | 99.26 | 99.24 | 99.23 | 98.98 | 98.89 | 98.93 | 99.29 | 99.22 | 99.23 |
Bg-MNF | 99.17 | 99.12 | 99.16 | 99.15 | 99.08 | 99.11 | 99.21 | 99.18 | 99.17 | 99.37 | 99.31 | 99.35 |
NMF | 99.16 | 99.12 | 99.11 | 99.19 | 99.13 | 99.15 | 99.11 | 98.97 | 99.1 | 99.23 | 99.19 | 99.21 |
Proposed (SNC) | 99.21 | 99.16 | 99.17 | 99.31 | 99.22 | 99.25 | 99.28 | 99.23 | 99.26 | 99.46 | 99.41 | 99.43 |
Window Size (Spatial) | 17 × 17 | 19 × 19 | 21 × 21 | 23 × 23 | 25 × 25 | 27 × 27 | 29 × 29 |
---|---|---|---|---|---|---|---|
Accuracy | 98.52 | 98.96 | 99.34 | 99.27 | 99.46 | 99.12 | 99.31 |
Time (s) | 24.38 | 26.88 | 27.74 | 30.49 | 32.26 | 35.34 | 37.29 |
Comparison Method | OA (10%) | Kappa (10%) | AA (10%) | Precision (10%) | Recall (10%) | F1 (10%) | OA (20%) | Kappa (20%) | AA (20%) | Precision (20%) | Recall (20%) | F1 (20%) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
SVM | 86.54 | 85.31 | 86.33 | 90.46 | 89.63 | 90.04 | 87.12 | 86.83 | 90.35 | 90.46 | 91.22 | 90.84 |
2D CNN | 94.15 | 94.14 | 94.26 | 96.35 | 98.11 | 97.22 | 97.52 | 96.82 | 97.51 | 98.64 | 98.59 | 98.61 |
3D CNN | 98.12 | 98.04 | 98.12 | 99.13 | 0.12 | 0.24 | 98.24 | 98.19 | 98.16 | 99.11 | 99.13 | 99.12 |
Fast 3D CNN | 98.40 | 98.21 | 98.37 | 99.18 | 99.12 | 99.15 | 98.89 | 98.73 | 98.73 | 99.32 | 99.3 | 99.31 |
Hybrid SN | 98.50 | 98.48 | 98.59 | 99.21 | 99.16 | 99.18 | 98.93 | 98.99 | 98.52 | 99.39 | 99.37 | 99.38 |
Spectral-NET | 98.82 | 98.76 | 98.83 | 99.23 | 99.21 | 99.22 | 98.98 | 98.82 | 98.99 | 99.49 | 99.46 | 99.47 |
Hybrid 3D-2D CNN | 99.14 | 99.11 | 99.12 | 99.58 | 99.56 | 99.57 | 99.16 | 99.21 | 99.13 | 99.59 | 99.61 | 99.60 |
MBDA | 99.16 | 99.08 | 99.11 | 99.83 | 99.61 | 99.82 | 99.25 | 99.22 | 99.23 | 99.83 | 99.89 | 99.78 |
Proposed (SNC) | 99.28 | 99.23 | 99.26 | 99.76 | 99.75 | 99.75 | 99.46 | 99.41 | 99.43 | 100.0 | 100.0 | 100.0 |
Share and Cite
Islam, M.T.; Islam, M.R.; Uddin, M.P.; Ulhaq, A. A Deep Learning-Based Hyperspectral Object Classification Approach via Imbalanced Training Samples Handling. Remote Sens. 2023, 15, 3532. https://doi.org/10.3390/rs15143532