Article

Hyperspectral Image Classification via Information Theoretic Dimension Reduction

Md Rashedul Islam, Ayasha Siddiqa, Masud Ibn Afjal, Md Palash Uddin and Anwaar Ulhaq
1 Department of Computer Science and Engineering, Hajee Mohammad Danesh Science and Technology University, Dinajpur 5200, Bangladesh
2 School of Information Technology, Deakin University, Geelong, VIC 3220, Australia
3 School of Computing, Mathematics and Engineering, Charles Sturt University, Bathurst, NSW 2795, Australia
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(4), 1147; https://doi.org/10.3390/rs15041147
Submission received: 7 January 2023 / Revised: 7 February 2023 / Accepted: 17 February 2023 / Published: 20 February 2023

Abstract

Hyperspectral images (HSIs) are among the most effective tools for precisely detecting key ground surfaces, vegetation, and minerals. HSIs contain a large amount of information about the ground scene; object classification therefore becomes the most difficult task for such a high-dimensional HSI data cube. Additionally, the HSI's spectral bands exhibit high correlation, and the sheer volume of spectral data creates high-dimensionality issues. Dimensionality reduction is, therefore, a crucial step in the HSI classification pipeline. In order to identify a pertinent subset of features for effective HSI classification, this study proposes a dimension reduction method that combines feature extraction and feature selection. In particular, we exploited the widely used denoising method minimum noise fraction (MNF) for feature extraction and an information-theoretic strategy, cross-cumulative residual entropy (CCRE), for feature selection. Using the normalized CCRE, minimum redundancy maximum relevance (mRMR)-driven feature selection criteria were applied to enhance the quality of the selected features. To assess the effectiveness of the extracted feature subsets, the kernel support vector machine (KSVM) classifier was applied to three publicly available HSIs. The experimental findings manifest a discernible improvement in classification accuracy and the quality of the selected features. Specifically, the proposed method outperforms the traditional methods investigated, with overall classification accuracies on the Indian Pines, Washington DC Mall, and Pavia University HSIs of 97.44%, 99.71%, and 98.35%, respectively.

1. Introduction

Due to the extraordinary advancement of hyperspectral remote sensors, hundreds of narrow and contiguous spectral bands can be acquired from the electromagnetic (EM) spectrum, typically spanning 0.4 μm to 2.5 μm and covering the visible to near-infrared region of the EM spectrum [1]. For example, with an exceptional spectral resolution of 0.01 μm, the airborne visible/infrared imaging spectrometer (AVIRIS) sensor captures 224 spectral images for the Indian Pines (IP) hyperspectral image (HSI) scene [2]. Owing to this superior spectral resolution, the classification of ground objects has become an increasingly active research topic [3]. Every single spectral channel is treated as a feature for classification in this context, as long as it contains distinct ground surface responses [4]. An HSI is represented by a 3D data cube, from which we can extract 2D spatial information corresponding to the HSI's height and width and 1D spectral information corresponding to the HSI's total number of spectral bands. Due to the enormous amount of data made available by HSIs, traditional HSIs pose significant challenges for image processing tasks such as classification [5]. The reasons are as follows: (i) there is a strong correlation between the input image bands; (ii) not all spectral bands convey the same amount of information, and some of them are noisy [6]; (iii) the spectral bands are collected in a continuous range by hyperspectral sensors, which means that certain spectral bands might reveal unusual information about the surface of the earth [7]; (iv) the increased spectral resolution of hyperspectral images benefits classification techniques but strains computational capacity; moreover, because training examples are few, the classification accuracy on this high-dimensional data cube is relatively unsatisfactory; (v) the ratio of the number of input HSI features to training samples is unbalanced, so the test data classification accuracy steadily degrades, a phenomenon known as the Hughes phenomenon or the curse of dimensionality [8]. To increase classification accuracy, it is crucial to condense the high-dimensional HSI data to a useful subspace of informative features. Therefore, the fundamental goal of this study was to use a constructive approach to reduce the HSI dimensions for improved classification.
The high-dimensional HSI data may be reduced to a lower dimension using various feature reduction techniques: feature extraction, feature selection, or a combination of the two [9,10]. By utilizing a linear or nonlinear transformation, feature extraction converts the original images from the original space of S dimensions to a new space of P dimensions, where P << S. However, because HSIs are noisy data, the noise must be removed [11]. Feature extraction strategies may be supervised or unsupervised [12]. Supervised algorithms use known data classes; to infer class separability, these approaches require datasets containing labeled samples. The most used supervised methods include linear discriminant analysis (LDA) [13], nonparametric weighted feature extraction (NWFE) [14], and genetic algorithms [15]. The fundamental drawback of these approaches is the need for labeled samples to reduce dimensionality. The unavailability of labeled data is addressed via unsupervised dimensionality reduction approaches. Minimum noise fraction (MNF) is an extremely popular unsupervised technique for extracting features. Although principal component analysis (PCA) is used to extract features from HSI data in many analyses, PCA does not accurately account for the noise in the HSI data [16,17,18]: PCA only considers the HSI's global variance to decorrelate the data [19]. For such noisy data, image quality is ignored when relying on variance alone because the original signal-to-noise ratio (SNR) is not considered [20]. Therefore, MNF was introduced as a better approach for feature extraction in terms of image quality. In MNF, the components are ordered according to their SNR values, regardless of how noise is distributed across the bands [21]. Some studies have shown that even though feature extraction moves the original data to a newly generated space, ranking the extracted features is still important [13,22,23]. MNF is unsupervised and takes only the SNR into account; therefore, some classes may negatively impact accuracy, and the first few components may not be the most useful ones.
Therefore, feature selection is necessary to prioritize the features generated by the feature extraction method that contain the most spatial information. It is now common practice to combine existing feature extraction and feature selection methods into a blended approach that performs better than either technique alone [10]. Combining the two is advantageous because feature extraction performed prior to feature selection can fully utilize the spectral information to generate new features, whereas feature selection performed afterwards can identify the most informative of those new features based on a set of predefined criteria. Search-based feature selection typically fails due to local minima and the combinatorial explosion of the required computation [24]. Mutual information (MI)-based feature selection is one example of an information-theoretic approach that can uncover non-linear as well as linear correlations between input image bands and ground-truth labels [10]. However, it depends on the marginal and joint probabilities of the outcomes; because estimating marginal and joint probability distributions becomes exponentially harder with increasing dimensionality, MI cannot successfully select features from high-dimensional data [25]. In the proposed approach, we select a subset of informative features using cross-cumulative residual entropy (CCRE), applied as a feature selection technique that quantifies the similarity of information between two images. A significant advantage over MI is that CCRE is more robust and has significantly greater immunity to noise. As CCRE is not bounded to [0, 1], we propose to normalize CCRE and apply it to the MNF-extracted image and the available class labels under the minimum redundancy maximum relevance (mRMR) criteria. Consequently, the informative features are ordered, and a feature subset that can be employed for classification is exposed. The proposed method for generating the subset of features is therefore termed MNF-nCCREmRMR. A kernel support vector machine (KSVM) was used to classify the data, and the results are compared with other methods to determine its reliability. Below is a summary of this paper's significant contributions.
  • We propose a hybrid feature reduction technique that combines MNF and CCRE.
  • To avoid selecting redundant features, we propose a normalized CCRE-based mRMR feature selection approach over the extracted features.
We organize the rest of the paper as follows. In Section 2, we first describe conventional feature reduction methods such as PCA, MNF, MI, and CCRE. In Section 3, the proposed hybrid subset detection method, MNF-nCCREmRMR, according to mRMR, is comprehensively described. In Section 4, we provide a detailed explanation of the experiments carried out on the three real HSI datasets utilizing the proposed method compared with the current state of the art. The results are summarized in Section 5, which also outlines the paper’s conclusion.

2. Preliminaries

2.1. Principal Component Analysis (PCA)

To extract meaningful features from spectral image bands, PCA, the most used linear unsupervised feature extraction method in HSI classification, determines the association between the bands. It relies on the fact that the HSI's neighboring bands are highly correlated and usually convey similar information about ground objects [26,27,28]. Let the spectral vector of a pixel, denoted as $X_n$, in $X$ be defined as $X_n = [X_{n1}\; X_{n2}\; \ldots\; X_{nP}]^T$, where $n \in \{1, 2, \ldots, S_{\mathrm{all}}\}$ and $S_{\mathrm{all}}$ is the total number of pixels. Now, subtract the mean spectral vector, $M$, to obtain the mean-adjusted spectral vector, $I_n$, as:
$I_n = X_n - M$, (1)
where the mean image vector $M = \frac{1}{S_{\mathrm{all}}} \sum_{n=1}^{S_{\mathrm{all}}} X_n$. The zero-mean image, denoted by $I$, is thus obtained as $I = [I_1\; I_2\; \ldots\; I_{S_{\mathrm{all}}}]$. Subsequently, the covariance matrix, $C$, is computed as follows:
$C = \frac{1}{S_{\mathrm{all}}} I I^T$. (2)
Eigenvalues $E = [E_1\; E_2\; \ldots\; E_P]$ and eigenvectors $V = [V_1\; V_2\; \ldots\; V_P]$ are obtained by decomposing the covariance matrix $C$ as $C = V E V^T$. The orthonormal matrix $Z$ is composed by choosing the $K$ eigenvectors with the highest eigenvalues after rearranging them in descending order, where $K < P$ and often $K \ll P$. Finally, the transformed or projected data matrix, $Y$, is calculated as:
$Y = Z^T I$. (3)
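Although the experiments in this paper were run in MATLAB, the following minimal NumPy sketch illustrates Equations (1)–(3) on an HSI cube; the function name and the (H, W, P) cube layout are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def pca_features(hsi, k):
    """Project an (H, W, P) HSI cube onto its top-k principal components."""
    H, W, P = hsi.shape
    X = hsi.reshape(-1, P).astype(np.float64).T   # P x S_all matrix of spectral vectors X_n
    I = X - X.mean(axis=1, keepdims=True)         # mean-adjusted data, Equation (1)
    C = (I @ I.T) / I.shape[1]                    # covariance matrix, Equation (2)
    evals, evecs = np.linalg.eigh(C)              # eigen-decomposition (ascending eigenvalues)
    Z = evecs[:, np.argsort(evals)[::-1][:k]]     # K eigenvectors with the largest eigenvalues
    Y = Z.T @ I                                   # projected data, Equation (3)
    return Y.T.reshape(H, W, k)
```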

2.2. Minimum Noise Fraction (MNF)

The MNF transformation is used to estimate an HSI's intrinsic features and can be viewed as a cascade of two PCA transformations [28]. Instead of using the global variance to assess the relevance of features, MNF uses the SNR, which is more appropriate. Let $X$ denote the input HSI, where $X = [x_1\; x_2\; \ldots\; x_p]^T$ and $p$ represents the number of image bands. As noise exists in the HSI signal, $X$ can be written as $X = S_g + N$, where $S_g$ and $N$ are the noiseless image and the noise, respectively. Now, the covariance can be computed by the following equation:
$C_X = \Sigma_{S_g} + \Sigma_N$, (4)
where $\Sigma_{S_g}$ and $\Sigma_N$ denote the covariance matrices of the signal and the noise, respectively. The MNF transformation can be computed in terms of the noise covariance as:
$Z_i = A^T X$, (5)
Here, A represents the eigenvector matrix, and it can be computed as:
$C_X^{-1} \Sigma_N A = A \Lambda$, (6)
where $\Lambda$ represents the diagonal eigenvalue matrix, whose entries can also be computed as noise ratios:
$\lambda_i = \mathrm{Var}(a_i^T N) / \mathrm{Var}(a_i^T X)$, (7)
where $a_i$ denotes the $i$th eigenvector (column of $A$).
After the MNF transformation, the components are reordered according to their SNR, in contrast to PCA, which employs global data statistics. Therefore, the first few MNF components contain meaningful and less noisy classification features.
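A similar NumPy sketch of the MNF transform is given below; here, the noise covariance is estimated from differences of horizontally adjacent pixels, a common heuristic that is an assumption of this sketch rather than a detail stated in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def mnf_features(hsi, k):
    """Return the first k MNF components of an (H, W, P) HSI cube."""
    H, W, P = hsi.shape
    X = hsi.reshape(-1, P).astype(np.float64)
    Xc = X - X.mean(axis=0)
    cov_x = (Xc.T @ Xc) / Xc.shape[0]             # total covariance C_X, Equation (4)
    D = (hsi[:, 1:, :].astype(np.float64) - hsi[:, :-1, :]).reshape(-1, P)
    cov_n = (D.T @ D) / (2.0 * D.shape[0])        # noise covariance estimate
    # Generalized eigenproblem cov_n a = lambda cov_x a (Equations (6) and (7));
    # ascending noise fractions mean the highest-SNR components come first.
    evals, A = eigh(cov_n, cov_x)
    return (Xc @ A[:, :k]).reshape(H, W, k)
```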

2.3. Mutual Information (MI)

MI is a fundamental statistical notion for determining the dependency of two variables, A and C, and is defined as:
$I(A, C) = \sum_{c \in C} \sum_{a \in A} p(a, c) \log \frac{p(a, c)}{p(a) p(c)}$, (8)
where $p(a)$ and $p(c)$ represent the marginal probability distributions and $p(a, c)$ the joint probability distribution of the two variables A and C. MI can also be defined in terms of entropy, provided that A is a band of an input image and C is a class label of the input image:
$I(A, C) = H(A) + H(C) - H(A, C)$, (9)
where H(A) and H(C) represent the entropies of A and C, and H(A, C) is their joint entropy. The MI value in Equation (8) or Equation (9) can be used as the selection criterion to choose the most helpful and informative features.
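For example, a histogram-based estimate of Equation (8) between one image band and the label image can be sketched as follows; the bin count and function name are illustrative assumptions.

```python
import numpy as np

def mutual_information(band, labels, bins=64):
    """Estimate I(A, C) of Equation (8) from an image band and integer labels."""
    joint, _, _ = np.histogram2d(band.ravel(), labels.ravel(),
                                 bins=(bins, int(labels.max()) + 1))
    p_ac = joint / joint.sum()                # joint distribution p(a, c)
    p_a = p_ac.sum(axis=1, keepdims=True)     # marginal p(a)
    p_c = p_ac.sum(axis=0, keepdims=True)     # marginal p(c)
    nz = p_ac > 0                             # avoid log(0) terms
    return float((p_ac[nz] * np.log(p_ac[nz] / (p_a @ p_c)[nz])).sum())
```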

2.4. Cross-Cumulative Residual Entropy (CCRE)

CCRE was introduced in [29] as a well-known similarity measure for multimodal image registration. CCRE compares the similarity between two images using the cumulative residual distribution rather than the probability distribution. It is built on cumulative residual entropy (CRE), developed in [30]. Let $a$ be a random variable in $\mathbb{R}$. Then, the CRE can be written as:
$\varepsilon(a) = -\int_{\mathbb{R}^+} F(\lambda) \log F(\lambda)\, d\lambda$, (10)
where $F(\lambda) = P(|a| > \lambda)$ and $\mathbb{R}^+ = \{x \in \mathbb{R};\; x \ge 0\}$. As a result, the following formula can be used to estimate the CCRE between images $a$ and $b$:
$C(a, b) = \varepsilon(a) - E[\varepsilon(a|b)]$, (11)
where $\varepsilon(a|b)$ is the conditional CRE between $a$ and $b$, and $E[\varepsilon(a|b)]$ is its expectation. The conditional CRE between $a$ and $b$ can be calculated as follows:
$\varepsilon(a|b) = -\int_{\mathbb{R}^+} P(a > \lambda\,|\,b) \log P(a > \lambda\,|\,b)\, d\lambda$. (12)
Now, the CCRE of two images I and J can be given by:
$C_{CCRE} = \sum_{u=1}^{L} \sum_{v=1}^{L} G(u, v) \log \frac{G(u, v)}{G_I(u) P_J(v)}$, (13)
where L is the maximum value of any pixel in the images, $G(u, v)$ is the joint cumulative residual distribution, $G_I(u)$ is the marginal cumulative residual distribution of I, and $P_J(v)$ is the marginal probability of J. CCRE is mostly used here to determine how the training data and the transformed MNF images are related, so that the informative MNF components can be found based on the available class labels.
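The discrete CCRE of Equation (13) can likewise be estimated from a joint histogram. The sketch below is one plausible reading of Equation (13), with the cumulative residual distribution accumulated along the intensity axis of I; it is not the authors' implementation.

```python
import numpy as np

def ccre(img_i, img_j, bins=64):
    """Estimate the discrete CCRE of Equation (13) between two images."""
    joint, _, _ = np.histogram2d(img_i.ravel(), img_j.ravel(), bins=bins)
    p_uv = joint / joint.sum()                # joint probability of (u, v)
    G = np.cumsum(p_uv[::-1], axis=0)[::-1]   # G(u, v) = P(I >= u, J = v)
    G_i = G.sum(axis=1, keepdims=True)        # marginal cumulative residual of I
    P_j = p_uv.sum(axis=0, keepdims=True)     # marginal probability of J
    denom = G_i * P_j
    nz = (G > 0) & (denom > 0)                # keep only well-defined terms
    return float((G[nz] * np.log(G[nz] / denom[nz])).sum())
```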

3. Proposed Methodology

There are two primary phases in the proposed feature reduction process: (i) feature extraction through the implementation of the classical MNF on the complete HSI; and (ii) feature selection through the normalized CCRE-based mRMR criteria measured on the transformed features of the HSI. Figure 1 illustrates the practical steps of the proposed method.

3.1. mRMR-Driven CCRE-Based Feature Selection

When deciding which features are most useful, the CCRE value in Equation (13) is taken into account. By comparing each newly generated MNF component, $Z_i$, with the available training class labels, $C$, CCRE is able to isolate the most important features. The feature that is calculated to be the most informative is therefore [31]:
$V = \max_{i \in P}\, C_{CCRE}(Z_i, C)$, (14)
where V is the most informative classification feature (maximum CCRE value), which is assigned to S (the set of chosen features). This ranks the MNF components, with the possibility that the first few components are the most useful for classification. However, there may be redundancy in the features chosen using Equation (14). Overall, the selected features should be as relevant as possible while being as little redundant as possible. A greedy strategy can be utilized to select the (k + 1)th feature, which is then added to the subset already chosen. As such, the mRMR criterion for subspace detection can be written as:
$G(Z_i, k) = C_{CCRE}(Z_i, C) - \frac{1}{|S|} \sum_{Z_j \in S} C_{CCRE}(Z_i, Z_j), \quad Z_i \notin S$. (15)
In Equation (15), S represents the subset of selected features, Z_i is the current feature extracted from MNF, k denotes the number of features to be selected for the feature space S, C signifies the ground truth image of the HSI, and Z_j represents a feature already selected into the feature space S.

3.2. Improved Feature Selection

(i) Using normalized CCRE: The values of CCRE are not constrained to a particular range. Direct application of $G(Z_i, k)$ in the aforementioned method is complicated by the fact that it is sensitive to the entropy of the two variables and does not have a fixed range. The quality of a given CCRE value is therefore easier to judge when it is normalized to the range [0, 1] [32,33]. The normalized CCRE can be defined as:
$\hat{C}_{CCRE}(Z, C) = \frac{C_{CCRE}(Z, C)}{C_{CCRE}(Z)\, C_{CCRE}(C)}$. (16)
We propose nCCREmRMR, utilizing the normalized CCRE in Equation (16). Accordingly, the subset of features using our proposed method can be defined as:
$\hat{G}(Z_i, k) = \hat{C}_{CCRE}(Z_i, C) - \frac{1}{|S|} \sum_{Z_j \in S} \hat{C}_{CCRE}(Z_i, Z_j), \quad Z_i \notin S$. (17)
The (k + 1)th feature to be added to the target subspace of features, S, should thus have the largest value of $\hat{G}(Z_i, k)$.
(ii) Discard negative values: Using Equation (17), the largest difference value might be less than zero, meaning the candidate feature is more redundant than it is relevant with respect to the previously selected features, which is undesirable. As a result, in this study, $\hat{G}(Z_i, k)$ was required to be positive, i.e., $\hat{G}(Z_i, k) > 0$. If the greatest difference value is not positive, it might mean that no desirable features remain and that the informative subspace has reached its specified size.
(iii) Remove the noisy features: Equation (17) may still select undesirable features: when the largest difference value derives from two tiny values, the selected feature is only weakly related to the target. A user-defined threshold (T) is introduced to avoid this complication (if $\hat{C}_{CCRE}(Z_i, C) < T$, remove $Z_i$). The preprocessing stage applies the threshold T to condense the search space for the greedy technique and eliminate noisy features. After selection, the set S contains the most informative features. The proposed feature reduction methodology is summarized in Algorithm 1.
Algorithm 1 MNF-nCCREmRMR
(Y is the original HSI data, Z represents the transformed MNF components, C is the ground truth image, T defines a user-defined threshold, and S represents the final subset of n features.)
  i. Start ($Z$: the projected data matrix of the original HSI, Y)
  ii. Evaluate $\hat{C}_{CCRE}(Z_i, C)$ and remove $Z_i$ if $\hat{C}_{CCRE}(Z_i, C) < T$
  iii. Set $S_0 = \{\varnothing\}$
  iv. Set $S_1 = S_0 \cup \{Z_j\}$, where $Z_j$ is the first feature selected utilizing Equations (14) and (16)
  v. Apply steps (vi) and (vii) until $S$ contains $n$ features
  vi. Update $S$ by utilizing Equation (17)
  vii. Output $S$ as the subset of effective features
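Putting the pieces together, one possible NumPy reading of Algorithm 1 is sketched below. It reuses the ccre helper sketched in Section 2.4, normalizes it per Equation (16) with self-CCRE terms standing in for $C_{CCRE}(Z)$ and $C_{CCRE}(C)$ (an assumption), and uses a purely illustrative threshold value.

```python
import numpy as np

def nccre(a, b):
    """Normalized CCRE per Equation (16); the self-CCRE terms are an assumption."""
    return ccre(a, b) / (ccre(a, a) * ccre(b, b))

def mnf_nccre_mrmr(Z, C, n, T=0.05):
    """Sketch of Algorithm 1. Z: list of MNF component images, C: ground-truth
    label image, n: target subset size, T: user-defined threshold (illustrative)."""
    relevance = {i: nccre(Zi, C) for i, Zi in enumerate(Z)}
    candidates = [i for i in relevance if relevance[i] >= T]   # step ii
    S = []                                                     # step iii
    while candidates and len(S) < n:
        # Equation (17): relevance minus mean redundancy w.r.t. the chosen set S
        def g_hat(i):
            red = np.mean([nccre(Z[i], Z[j]) for j in S]) if S else 0.0
            return relevance[i] - red
        best = max(candidates, key=g_hat)
        if g_hat(best) <= 0:           # discard negative difference values
            break
        S.append(best)                 # steps iv-vi: grow the subset greedily
        candidates.remove(best)
    return S                           # step vii
```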

4. Experimental Setup and Analysis

4.1. Remote Sensing Data Sets

For experimental analysis, we used three publicly available HSI datasets that are broadly used for HSI classification: AVIRIS IP, HYDICE Washington DC Mall (WDM), and ROSIS Pavia University (PU). Detailed descriptions of these datasets are given below.

4.1.1. AVIRIS IP Dataset

NASA's AVIRIS sensor captured the IP dataset in June 1992; it consists of 220 imaging bands [34]. The scene has a dimension of 145 × 145 pixels, and its ground truth image is made up of sixteen classes. Furthermore, the data have a 0.01 µm spectral resolution. We excluded the classes "Oats" and "Grass/Pasture mowed" from this experiment because of their insufficient training and testing samples. Figure 2 presents the IP false-color RGB and ground truth images.

4.1.2. HYDICE WDM Dataset

The WDM dataset comprises 191 image bands, each consisting of 1280 × 307 pixels. The HYDICE sensor captured the data in 1995 over the Washington DC Mall. In the scene depicted by the ground truth image, there are a total of seven distinct classes [35]. We did not employ the class "Paths" in this study because its training data are insufficient. Figure 3 presents the WDM false-color RGB and ground truth images.

4.1.3. ROSIS PU Dataset

The ROSIS optical sensor was utilized to gather the PU dataset over the University of Pavia's urban environment in Italy. A spatial resolution of 1.3 m per pixel and an image size of 610 × 610 pixels were used in this experiment [36]. The acquired image contains 103 data channels (with a spectral range from 0.43 μm to 0.86 μm). A false-color composite of the image appears in Figure 4a, whereas the nine ground-truth classes of interest are depicted in Figure 4b. Table 1 reviews the key characteristics of all three datasets.

4.2. Analysis of Feature Extraction and Feature Selection

During feature extraction, the MNF technique generates new features using the transformation principles. To enhance the subset of features, features are then chosen utilizing the normalized CCRE on the newly generated features. Equation (14) alone may select a noisy feature. Furthermore, two weak MNF components might yield a large difference value, so the algorithm could pick a useless component that is only weakly connected to the ground truth image. We implemented a user-defined threshold T to prevent the use of less informative features in classification. The advantage of using T is that it rejects noisy features during preprocessing. As a result, there is a reduced possibility of selecting a noisy feature, and the order of the chosen features is apparent. To assess the robustness of the proposed approach, we compared it to the MNF, CCRE, MNF-MI, and MNF-CCRE methods. Table 2 outlines the abbreviations of the different methods studied. For all studied and proposed methods, the order of the ranked features is presented in Table 3, Table 4 and Table 5 for the three datasets, respectively. The proposed method (MNF-nCCREmRMR), as shown in Table 3, ranks MNF component two (MNF-C: 2) and MNF component four (MNF-C: 4) as the top two features, in contrast to the traditional MNF method, which ranks MNF component one (MNF-C: 1) and MNF component two (MNF-C: 2) as the first and second-ranked features, respectively. Figure 5 provides a graphic representation of the first two ranked features of MNF, MNF-MI, and MNF-CCRE, as well as the proposed approach, on the AVIRIS image. The illustration demonstrates that the features chosen by the proposed approach are visually more informative than those of the other approaches.

4.3. Parameters Tuning for Classification

To evaluate the effectiveness of the chosen features, we used the KSVM classifier with the Gaussian (RBF) kernel to classify the HSIs. We applied a grid search strategy based on tenfold cross-validation to determine the optimal values of the cost parameter, C, and the kernel width, γ [37]. The complete parameter tuning results for all the studied and proposed methods on the three datasets are presented in Table 6, Table 7 and Table 8, respectively. In particular, we obtained the optimal parameters C = 2.9 and γ = 0.5 for the AVIRIS dataset, C = 2.4 and γ = 2.1 for the HYDICE dataset, and C = 2.8 and γ = 3 for the ROSIS PU HSI.
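For illustration, an equivalent tuning step in scikit-learn might look as follows; the search grids and placeholder data are assumptions, as the paper does not state its exact search ranges.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder data standing in for the selected feature subset and its labels.
X_train = np.random.rand(200, 10)
y_train = np.random.randint(0, 3, 200)

# Tenfold cross-validated grid search over the RBF-kernel SVM cost C and
# kernel width gamma; the grids below are illustrative only.
param_grid = {"C": np.arange(0.5, 10.5, 0.5), "gamma": np.arange(0.1, 4.0, 0.1)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```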

4.4. Classification Performance Evaluation Metrics

In this study, the performance of the proposed method was assessed using commonly used quality indicators: overall accuracy (OA), average accuracy (AA), the Kappa coefficient, and the F1 score. OA calculates the proportion of all correctly classified pixels, as follows:
$\mathrm{OA} = \frac{\sum_{i=1}^{C} A_{ii}}{B}$, (18)
where the confusion matrix, A, is determined by contrasting the classification map with the actual image, C denotes the number of classes, $A_{ii}$ represents the number of samples of class i classified as class i, and B represents the total number of test samples. AA is the average proportion of correctly classified pixels per class, derived as follows:
$\mathrm{AA} = \frac{1}{C} \sum_{i=1}^{C} \frac{A_{ii}}{A_{i+}}$, (19)
where $A_{i+}$ represents the total number of samples classified as class i. The Kappa coefficient computes the proportion of correctly classified pixels adjusted for the number of agreements expected purely by chance. It indicates how much better the classification performed than randomly assigning pixels to categories and can be calculated, using the notation of Equations (18) and (19), as:
$\mathrm{Kappa} = \frac{B \sum_{i=1}^{C} A_{ii} - \sum_{i=1}^{C} A_{i+} A_{+i}}{B^2 - \sum_{i=1}^{C} A_{i+} A_{+i}}$, (20)
where $A_{+i}$ represents the total number of actual samples of class i. The F1 score is calculated as follows:
$\mathrm{F1\ score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$, (21)
where the precision and recall can be calculated as follows:
$\mathrm{Precision} = \frac{TP}{TP + FP} \quad \mathrm{and} \quad \mathrm{Recall} = \frac{TP}{TP + FN}$. (22)
In Equation (22), TP, FP, and FN denote the numbers of true positive, false positive, and false negative classifications of the testing samples across the multiple classes.
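Given a confusion matrix, the first three metrics reduce to a few lines, as in the sketch below; it assumes rows index the actual class and columns the predicted class, so the axes may need swapping to match the paper's $A_{i+}$/$A_{+i}$ convention.

```python
import numpy as np

def accuracy_metrics(A):
    """OA, AA, and Kappa from a confusion matrix A, per Equations (18)-(20)."""
    A = np.asarray(A, dtype=np.float64)
    B = A.sum()                                        # total number of test samples
    oa = np.trace(A) / B                               # Equation (18)
    aa = np.mean(np.diag(A) / A.sum(axis=1))           # Equation (19), per-class accuracy
    pe = (A.sum(axis=0) * A.sum(axis=1)).sum() / B**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                       # Equation (20), rearranged
    return oa, aa, kappa
```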

4.5. Classification Results on the AVIRIS IP Dataset

In this experiment, we took approximately 50% of the samples of each class as the training set and 50% as the testing set from a total of 2401 samples. The information regarding the samples utilized for both training and testing is presented in Table S1. As shown in Figure 2, the ground-truth image served as the basis for selecting both the training and testing samples used in classification. We calculated the OA of the AVIRIS data without feature extraction and feature selection and found 66.85% using the first ten spectral bands; this result motivates reducing the number of features used in HSI classification. Table 9 shows the OA, AA, Kappa, and F1 score values of each method. As shown in the table, the proposed MNF-nCCREmRMR approach achieves the highest values and outperforms the state-of-the-art methods on every criterion used to evaluate performance. The two-dimensional line graphs presented in Figure 6 compare the proposed and studied methods in a more meaningful way using the OA versus the number of ranked features. As the number of features increases, the OA increases too.

4.6. Classification Results on the HYDICE WDM Dataset

In this experiment, we took approximately 30% of the samples of each class as the training set and 70% as the testing set from a total of 5154 samples. Table S2 presents the information regarding the samples used for both training and testing. As shown in Figure 3, the ground-truth image is used to choose both the training and testing samples for classification. Table 10 shows the OA, AA, Kappa, and F1 score values of each method. Using the same number of selected features, the proposed MNF-nCCREmRMR method achieves an OA of 99.71%. The table demonstrates that the proposed approach performs better than the state-of-the-art methods on every criterion used to evaluate performance. In addition, the line graphs presented in Figure 7 show the comparison of the proposed method with the others. It can be seen that the overall classification accuracy increases with the number of ranked features.

4.7. Classification Results on the ROSIS PU Dataset

For the ROSIS PU dataset, we took approximately 17% of the samples of each class as the training set and 83% as the testing set from a total of 20,075 samples. The detailed information on the training and testing samples is presented in Table S3. The ground-truth image is used to select the training and testing samples for classification, as shown in Figure 4. Table 11 shows the OA, AA, Kappa, and F1 score results of each method. Using the same number of selected features, the proposed MNF-nCCREmRMR method achieves a classification accuracy of 98.35%. This table also demonstrates that the proposed method outperformed the studied methods on all performance measurement metrics. Based on the two-dimensional line graph presented in Figure 8, it can be seen that as the number of features increases, the overall classification accuracy also increases.
We next calculated the OA on the three datasets at various training-testing ratios to confirm the robustness of the proposed feature reduction technique in comparison to the investigated state-of-the-art techniques. Table 12 shows the OA on the three datasets with 10%, 20%, and 30% training samples. The results reveal that the proposed method was better than the investigated state-of-the-art feature reduction methods for each of the three HSI datasets. In addition, we tested the investigated and proposed methods using three classifiers (naïve Bayes, decision tree, and SVM); the results for the three datasets are presented in Table 13, Table 14 and Table 15, respectively. From these tables' data, we can conclude that the proposed method outperforms the studied methods.

4.8. Features Scatter Plot Analysis

Here, we consider a feature space analysis using the scatter plot of the first two selected features to evaluate the robustness of the proposed method (MNF-nCCREmRMR). Figure 9 depicts the feature space for the AVIRIS IP dataset utilizing the conventional approaches (MNF, MNF-MI, and MNF-CCRE) and the proposed method. We used eight classes in the scatter plot to keep things simple. In this case, the standard MNF and MNF-MI methods exhibit greater class overlap, as shown in Figure 9a,b. In contrast to these studied methods, the proposed MNF-nCCREmRMR method yields classes that are more visually separable. Similarly, the feature space for the traditional MNF, MNF-MI, and MNF-CCRE methods and the proposed approach on the HYDICE WDM dataset is depicted in Figure 10. As shown in Figure 10a,b, the classes overlap considerably, whereas in the proposed method shown in Figure 10d, the class samples are more distinguishable than in the investigated methods. This demonstrates how applying normalized CCRE with the mRMR approach to MNF data enhances the dominance of the selected features.

4.9. Extended Analysis

Each method's execution time is analyzed and listed in this section for comparison. The experiments were carried out using MATLAB R2014b on a desktop computer running the Microsoft Windows 10 operating system and powered by an Intel Core i5 3.2 GHz processor. Table 16 presents the execution time of each method on the different datasets, from which it can be seen that MNF-nCCREmRMR is computationally comparable with the existing methods. In addition, the robustness of the proposed MNF-nCCREmRMR method for multiclass classification was assessed using error matrices. Tables S4–S6 show the error matrices for the AVIRIS IP, HYDICE WDM, and ROSIS PU datasets, respectively. All three error matrices illustrate that the proposed method predicts almost all classes correctly, with only a few misclassifications.

5. Conclusions

This study proposed a dimension reduction strategy that combines feature extraction and feature selection in order to find a relevant subset of features for efficient HSI classification. We specifically made use of the widely utilized feature extraction technique MNF and the information-theoretic approach CCRE for feature selection. The normalized CCRE was employed alongside the mRMR-driven feature selection criterion to enhance the quality of the chosen features. The KSVM classifier was used to analyze the performance of the produced feature subsets on three real HSIs. The testing results showed a considerable improvement in the quality of the selected features as well as in classification accuracy. The results manifest that applying normalized CCRE to the MNF data with the mRMR criteria yields subsets of informative features. The experiments also show that, in comparison to the traditional MNF, feature selection using normalized CCRE after the MNF transformation improves the grade of the selected features. This is because the proposed approach (MNF-nCCREmRMR) selects a subset of less noisy features, provides relevant details about the appropriate objects, and ignores redundant features. The improvement in classification accuracy and the feature space analysis demonstrate the robustness of the proposed technique.

Future Work

Deep learning is now a popular tool for analyzing HSIs but requires a large amount of labeled data, which can be costly and time-consuming to obtain. Therefore, in future work, MNF-nCCREmRMR could be coupled with deep-learning-based approaches to extract both spectral and spatial characteristics of HSIs, further improving the classification performance while overcoming this limitation of deep-learning-based HSI analysis.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs15041147/s1, Table S1. Training and testing samples of the AVIRIS IP dataset. Table S2. Training and testing samples of the HYDICE WDM dataset. Table S3. Training and testing samples of the ROSIS PU dataset. Table S4. Error matrix using the MNF-nCCREmRMR for the IP dataset. Table S5. Error matrix using the MNF-nCCREmRMR for the WDM dataset. Table S6. Error matrix using the MNF-nCCREmRMR for the ROSIS PU dataset.

Author Contributions

Conceptualization, M.R.I., A.S. and M.P.U.; methodology, M.R.I., M.P.U. and M.I.A.; software M.R.I.; validation, M.R.I., A.S. and M.P.U.; formal analysis, M.R.I., A.S. and M.I.A.; investigation, M.R.I., A.S., M.I.A. and M.P.U.; resources, M.R.I., A.S. and A.U.; data curation, M.R.I., A.S. and M.P.U.; writing—original draft preparation, M.R.I. and A.S.; writing—review and editing, M.I.A., M.P.U. and A.U.; visualization, M.R.I., A.S., M.I.A. and M.P.U.; supervision, M.I.A., M.P.U. and A.U.; funding acquisition, M.P.U. and A.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The AVIRIS Indian Pines data are available at https://purr.purdue.edu/publications/1947/1 (accessed on 15 February 2023), while the HYDICE Washington DC Mall data are available at https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html (accessed on 15 February 2023). The ROSIS Pavia University dataset can be found at https://ieee-dataport.org/documents/hyperspectral-data (accessed on 15 February 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Richards, J.A.; Richards, J. Remote Sensing Digital Image Analysis; Springer: Berlin/Heidelberg, Germany, 1999; Volume 3.
  2. Campbell, J.B.; Wynne, R.H. Introduction to Remote Sensing; Guilford Press: New York, NY, USA, 2011.
  3. Chen, Y.; Zhao, X.; Jia, X. Spectral–spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392.
  4. Islam, M.R.; Ahmed, B.; Hossain, M.A. Feature reduction based on segmented principal component analysis for hyperspectral images classification. In Proceedings of the 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox's Bazar, Bangladesh, 7–9 February 2019.
  5. Jia, X.; Kuo, B.-C.; Crawford, M.M. Feature mining for hyperspectral image classification. Proc. IEEE 2013, 101, 676–697.
  6. Tarabalka, Y.; Chanussot, J.; Benediktsson, J.A. Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recognit. 2010, 43, 2367–2379.
  7. Lillesand, T.; Kiefer, R.W.; Chipman, J. Remote Sensing and Image Interpretation; John Wiley & Sons: Hoboken, NJ, USA, 2015.
  8. Hughes, G. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63.
  9. Islam, M.R.; Hossain, M.A.; Ahmed, B. Improved subspace detection based on minimum noise fraction and mutual information for hyperspectral image classification. In Proceedings of the International Joint Conference on Computational Intelligence, Budapest, Hungary, 2–4 November 2020.
  10. Islam, R.; Ahmed, B.; Hossain, A. Feature reduction of hyperspectral image for classification. J. Spat. Sci. 2020, 67, 331–351.
  11. Luo, G.; Chen, G.; Tian, L.; Qin, K.; Qian, S.-E. Minimum noise fraction versus principal component analysis as a preprocessing step for hyperspectral imagery denoising. Can. J. Remote Sens. 2016, 42, 106–116.
  12. Yang, X.; Xu, W.D.; Liu, H.; Zhu, L.Y. Research on dimensionality reduction of hyperspectral image under close range. In Proceedings of the 2019 International Conference on Communications, Information System and Computer Engineering (CISCE), Haikou, China, 5–7 July 2019; pp. 171–174.
  13. Li, H.; Cui, J.; Zhang, X.; Han, Y.; Cao, L. Dimensionality Reduction and Classification of Hyperspectral Remote Sensing Image Feature Extraction. Remote Sens. 2022, 14, 4579.
  14. Kuo, B.C.; Li, C.H. Kernel Nonparametric Weighted Feature Extraction for Classification. In AI 2005: Advances in Artificial Intelligence. AI 2005. Lecture Notes in Computer Science; Zhang, S., Jarvis, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3809.
  15. Gao, H.-M.; Zhou, H.; Xu, L.-Z.; Shi, A.-Y. Classification of hyperspectral remote sensing images based on simulated annealing genetic algorithm and multiple instance learning. J. Central South Univ. 2014, 21, 262–271.
  16. Chang, C.-I.; Du, Q. Interference and noise-adjusted principal components analysis. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2387–2396.
  17. Fauvel, M.; Chanussot, J.; Benediktsson, J.A. Kernel Principal Component Analysis for the Classification of Hyperspectral Remote Sensing Data over Urban Areas. EURASIP J. Adv. Signal Process. 2009, 2009, 783194.
  18. Rodarmel, C.; Shan, J. Principal component analysis for hyperspectral image classification. Surv. Land Inf. Sci. 2002, 62, 115–122.
  19. Hossain, A.; Jia, X.; Pickering, M. Subspace Detection Using a Mutual Information Measure for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2013, 11, 424–428.
  20. Green, A.; Berman, M.; Switzer, P.; Craig, M. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans. Geosci. Remote Sens. 1988, 26, 65–74.
  21. Lixin, G.; Weixin, X.; Jihong, P. Segmented minimum noise fraction transformation for efficient feature extraction of hyperspectral images. Pattern Recognit. 2015, 48, 3216–3226.
  22. Guo, B.; Gunn, S.; Damper, R.; Nelson, J. Band Selection for Hyperspectral Image Classification Using Mutual Information. IEEE Geosci. Remote Sens. Lett. 2006, 3, 522–526.
  23. Hossain, M.A.; Jia, X.; Pickering, M. Improved feature selection based on a mutual information measure for hyperspectral image classification. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 June 2012.
  24. Islam, R.; Ahmed, B.; Hossain, A.; Uddin, P. Mutual Information-Driven Feature Reduction for Hyperspectral Image Classification. Sensors 2023, 23, 657.
  25. Uddin, M.P.; Mamun, M.A.; Afjal, M.I.; Hossain, M.A. Information-theoretic feature selection with segmentation-based folded principal component analysis (PCA) for hyperspectral image classification. Int. J. Remote Sens. 2021, 42, 286–321.
  26. Das, S.; Routray, A.; Deb, A.K. Fast Semi-Supervised Unmixing of Hyperspectral Image by Mutual Coherence Reduction and Recursive PCA. Remote Sens. 2018, 10, 1106.
  27. Machidon, A.L.; Del Frate, F.; Picchiani, M.; Machidon, O.M.; Ogrutan, P.L. Geometrical Approximated Principal Component Analysis for Hyperspectral Image Analysis. Remote Sens. 2020, 12, 1698.
  28. Uddin, M.P.; Mamun, M.A.; Hossain, M.A. PCA-based feature reduction for hyperspectral remote sensing image classification. IETE Tech. Rev. 2020, 38, 377–396.
  29. Wang, F.; Vemuri, B.C. Non-Rigid Multi-Modal Image Registration Using Cross-Cumulative Residual Entropy. Int. J. Comput. Vis. 2007, 74, 201–215.
  30. Rao, M.; Chen, Y.M.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inform. Theory 2004, 50, 1220–1228.
  31. Peng, H.; Long, F.; Ding, C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1226–1238.
  32. Estevez, P.A.; Tesmer, M.; Perez, C.A.; Zurada, J.M. Normalized Mutual Information Feature Selection. IEEE Trans. Neural Networks 2009, 20, 189–201.
  33. Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J. Spectral–Spatial Classification of Hyperspectral Imagery Based on Partitional Clustering Techniques. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2973–2987.
  34. Soelaiman, R.; Asfiandy, D.; Purwananto, Y.; Purnomo, M.H. Weighted kernel function implementation for hyperspectral image classification based on Support Vector Machine. In Proceedings of the International Conference on Instrumentation, Communication, Information Technology, and Biomedical Engineering 2009, Bandung, Indonesia, 23–25 November 2009.
  35. Huang, X.; Han, X.; Zhang, L.; Gong, J.; Liao, W.; Benediktsson, J.A. Generalized Differential Morphological Profiles for Remote Sensing Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1736–1751.
  36. Chen, Z.; Jiang, J.; Jiang, X.; Fang, X.; Cai, Z. Spectral-Spatial Feature Extraction of Hyperspectral Images Based on Propagation Filter. Sensors 2018, 18, 1978.
  37. Hsu, C.-W.; Chang, C.-C.; Lin, C.-J. A Practical Guide to Support Vector Classification; National Taiwan University: Taipei, Taiwan, 2003.
Figure 1. Overview of the proposed method.
Figure 2. AVIRIS IP data: (a) false-color RGB image (bands 50, 27, and 17); (b) ground truth image.
Figure 3. HYDICE WDM data: (a) false-color RGB image (bands 50, 52, and 36); (b) ground truth image.
Figure 4. ROSIS PU data: (a) false-color RGB image; (b) ground truth image.
Figure 5. Visual representation of first two ranked features of the AVIRIS IP dataset: (a) MNF; (b) MNF-MI; (c) MNF-CCRE; and (d) MNF-nCCREmRMR.
Figure 6. Classification performance measure (%) on the AVIRIS IP HSI.
Figure 7. Classification performance measure (%) on the HYDICE WDM HSI.
Figure 8. Overall classification accuracy versus features for the ROSIS PU dataset.
Figure 9. Feature space on the AVIRIS IP dataset: (a) MNF; (b) MNF-MI; (c) MNF-CCRE; and (d) MNF-nCCREmRMR.
Figure 10. Feature space on the HYDICE WDM dataset: (a) MNF; (b) MNF-MI; (c) MNF-CCRE; and (d) MNF-nCCREmRMR.
Table 1. Summary of the HSI datasets.

Dataset | Capturing Sensor | P (Bands) | Wavelength Range (nm) | H | W | Ground Classes | Ground Sampling Distance (m)
IP | AVIRIS | 220 | 400–2500 | 145 | 145 | 16 | 20
WDM | HYDICE | 191 | 400–2400 | 1280 | 307 | 7 | 3
PU | ROSIS | 103 | 430–860 | 610 | 610 | 9 | 1.3
Table 2. Description of different studied and proposed methods.

Acronym | Method Type | Information Used for Dimension Reduction | Main Steps
PCA | Conventional | Spectral | i. Obtain the covariance matrix of mean-adjusted data. ii. Apply the eigen-decomposition operation on the covariance matrix. iii. Calculate the projection matrix.
MNF | Conventional | Spectral | i. Whiten the noise. ii. Obtain the covariance matrix of noise-adjusted data. iii. Calculate the projection matrix.
MI | Conventional | Spatial | i. Calculate the MI of individual spectral bands and the ground truth image. ii. Sort the image bands based on MI values.
CCRE | Conventional | Spatial | i. Calculate the CCRE of individual spectral bands and the ground truth image. ii. Sort the image bands based on CCRE values.
nCCRE | Conventional | Spatial | i. Calculate the normalized CCRE of individual spectral bands and the ground truth image. ii. Sort the image bands based on normalized CCRE values.
PCA-MI | Hybrid | Spectral-spatial | i. Calculate the projection matrix using the PCA approach. ii. Calculate the MI values between the PCA bands and the ground truth image. iii. Select the informative features according to the MI values.
PCA-CCRE | Hybrid | Spectral-spatial | i. Calculate the projection matrix using the PCA approach. ii. Calculate the CCRE values between the PCA bands and the ground truth image. iii. Select the informative features according to the CCRE values.
MNF-MI | Hybrid | Spectral-spatial | i. Calculate the projection matrix using the MNF approach. ii. Calculate the MI values between the MNF bands and the ground truth image. iii. Select the informative features according to the MI values.
MNF-CCRE | Hybrid | Spectral-spatial | i. Calculate the projection matrix using the MNF approach. ii. Calculate the CCRE values between the MNF bands and the ground truth image. iii. Select the informative features according to the CCRE values.
MNF-nCCRE | Hybrid | Spectral-spatial | i. Calculate the projection matrix using the MNF approach. ii. Calculate the normalized CCRE values between the MNF bands and the ground truth image. iii. Select the informative features according to the normalized CCRE values.
MNF-nCCREmRMR | Hybrid | Spectral-spatial | i. Calculate the projection matrix using the MNF approach. ii. Select the informative features using the proposed feature selection method.
Table 3. The order of selected features for the AVIRIS IP dataset.

Method | Orders of the Selected Features
PCA | PCA Components: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
MNF | MNF Components: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
MI | HSI Bands: 29, 22, 23, 32, 188, 128, 43, 192, 24, 25
CCRE | HSI Bands: 22, 29, 28, 32, 27, 26, 30, 33, 25, 24
nCCRE | HSI Bands: 32, 22, 34, 188, 191, 26, 23, 30, 28, 188
PCA-MI | PCA Components: 1, 3, 4, 7, 8, 5, 6, 11, 15, 12
PCA-CCRE | PCA Components: 1, 3, 2, 5, 9, 8, 11, 7, 16, 13
MNF-MI | MNF Components: 2, 5, 4, 3, 6, 7, 8, 9, 10, 11
MNF-CCRE | MNF Components: 3, 2, 6, 4, 5, 8, 7, 10, 9, 15
MNF-nCCRE | MNF Components: 2, 3, 5, 11, 8, 13, 16, 12, 9, 10
MNF-nCCREmRMR | MNF Components: 2, 4, 6, 4, 5, 10, 9, 13, 12, 11
Table 4. The order of selected features for the HYDICE WDM dataset.

Method | Orders of the Selected Features
PCA | PCA Components: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
MNF | MNF Components: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
MI | HSI Bands: 82, 83, 28, 166, 128, 82, 155, 151, 52, 12
CCRE | HSI Bands: 83, 50, 101, 77, 28, 57, 167, 165, 51, 82
nCCRE | HSI Bands: 83, 102, 52, 66, 42, 166, 182, 177, 15, 86
PCA-MI | PCA Components: 1, 3, 4, 5, 2, 9, 11, 15, 13, 10
PCA-CCRE | PCA Components: 1, 2, 5, 4, 9, 7, 11, 15, 16, 12
MNF-MI | MNF Components: 2, 5, 4, 3, 6, 1, 191, 14, 18, 11
MNF-CCRE | MNF Components: 2, 3, 4, 5, 1, 6, 7, 19, 18, 12
MNF-nCCRE | MNF Components: 2, 3, 5, 6, 12, 7, 8, 4, 13, 11
MNF-nCCREmRMR | MNF Components: 2, 4, 8, 5, 3, 6, 7, 11, 13, 12
Table 5. The order of selected features for the ROSIS PU dataset.

Method | Orders of the Selected Features
PCA | PCA Components: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
MNF | MNF Components: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
MI | HSI Bands: 102, 103, 98, 101, 95, 96, 97, 88, 99, 100
CCRE | HSI Bands: 103, 102, 100, 99, 86, 88, 96, 97, 92, 83
nCCRE | HSI Bands: 103, 101, 98, 99, 87, 91, 83, 103, 92, 95
PCA-MI | PCA Components: 1, 2, 4, 3, 6, 8, 7, 9, 12, 11
PCA-CCRE | PCA Components: 1, 3, 5, 2, 9, 11, 10, 6, 15, 7
MNF-MI | MNF Components: 2, 3, 4, 6, 5, 11, 9, 10, 8, 12
MNF-CCRE | MNF Components: 2, 3, 4, 5, 7, 8, 13, 11, 16, 9
MNF-nCCRE | MNF Components: 2, 3, 5, 4, 8, 7, 12, 10, 11, 13
MNF-nCCREmRMR | MNF Components: 2, 4, 3, 9, 7, 11, 15, 16, 10, 8
Table 6. Parameter tuning results using tenfold cross-validation for the AVIRIS IP dataset.

Method | Best C | Best γ | Training Accuracy (%)
PCA | 2.8 | 3 | 98.55
MNF | 5 | 2.9 | 94.63
MI | 3 | 2 | 81.3
CCRE | 2.7 | 3.7 | 89.31
nCCRE | 8 | 2.2 | 88.32
PCA-MI | 10 | 1.3 | 96.98
PCA-CCRE | 7 | 1 | 98.81
MNF-MI | 10 | 2.7 | 97.53
MNF-CCRE | 8 | 2.8 | 97.89
MNF-nCCRE | 5 | 2 | 98.91
MNF-nCCREmRMR | 2.1 | 1.2 | 99.85
Table 7. Parameter tuning results using tenfold cross-validation for the HYDICE WDM dataset.

Method | Best C | Best γ | Training Accuracy (%)
PCA | 10 | 2.9 | 97.55
MNF | 4 | 2.1 | 98.43
MI | 2 | 3.7 | 85.61
CCRE | 3 | 2 | 91.32
nCCRE | 7 | 1.2 | 92.58
PCA-MI | 8 | 2 | 98.58
PCA-CCRE | 10 | 2.1 | 98.93
MNF-MI | 8 | 2.8 | 99.09
MNF-CCRE | 5 | 3 | 99.18
MNF-nCCRE | 3 | 0.5 | 99.88
MNF-nCCREmRMR | 2.2 | 1.8 | 99.95
Table 8. Parameter tuning results using tenfold cross-validation for the ROSIS PU dataset.

Method | Best C | Best γ | Training Accuracy (%)
PCA | 10 | 3 | 95.85
MNF | 5 | 1.9 | 95.63
MI | 2 | 2.7 | 87.86
CCRE | 8 | 2.9 | 88.39
nCCRE | 10 | 0.2 | 88.81
PCA-MI | 8 | 2.5 | 97.98
PCA-CCRE | 10 | 2.9 | 97.89
MNF-MI | 1.2 | 3 | 98.06
MNF-CCRE | 10 | 3 | 98.43
MNF-nCCRE | 8 | 1.5 | 99.24
MNF-nCCREmRMR | 5 | 2.2 | 99.91
Table 9. Classification performance measure (%) on the AVIRIS IP HSI.

Class | PCA | MNF | MI | CCRE | nCCRE | PCA-MI | PCA-CCRE | MNF-MI | MNF-CCRE | MNF-nCCRE | MNF-nCCREmRMR
Alfalfa | 88.89 | 88.89 | 75.00 | 61.54 | 76.19 | 94.12 | 88.89 | 90.00 | 90.00 | 94.12 | 90.00
Wheat | 86.84 | 90.41 | 83.58 | 85.71 | 80.49 | 86.84 | 88.00 | 95.65 | 95.65 | 97.06 | 95.65
Bldge-Grass | 90.00 | 94.74 | 72.00 | 36.73 | 90.00 | 90.48 | 86.36 | 94.74 | 94.74 | 100.00 | 100.00
Soybean-min | 97.63 | 97.70 | 79.31 | 71.32 | 80.95 | 97.08 | 97.08 | 98.84 | 98.87 | 97.16 | 98.87
Stone-Steel | 76.00 | 94.74 | 76.00 | 62.50 | 76.00 | 79.17 | 76.00 | 95.24 | 96.55 | 89.29 | 100.00
Soybean-no till | 92.71 | 91.49 | 57.85 | 90.91 | 61.07 | 92.71 | 94.68 | 98.90 | 98.90 | 96.81 | 98.94
Grass/Pasture | 97.01 | 82.22 | 80.65 | 36.18 | 96.49 | 97.01 | 98.48 | 82.22 | 88.10 | 97.30 | 88.10
Corn-no till | 78.95 | 77.59 | 64.81 | 56.45 | 64.81 | 90.00 | 90.00 | 79.37 | 90.91 | 90.57 | 90.91
Soybean Clean | 64.71 | 52.38 | 52.38 | 52.38 | 52.38 | 64.71 | 57.89 | 64.71 | 72.22 | 86.67 | 75.00
Corn-min | 83.33 | 93.75 | 71.43 | 71.43 | 73.33 | 83.33 | 90.16 | 94.20 | 94.20 | 94.03 | 98.48
Hay-windrowed | 95.17 | 92.67 | 48.08 | 90.09 | 49.26 | 95.17 | 94.52 | 95.86 | 99.29 | 97.96 | 100.00
Woods | 100.00 | 99.58 | 93.78 | 95.15 | 93.78 | 100.00 | 99.22 | 99.58 | 99.59 | 99.61 | 99.59
Grass/Trees | 82.91 | 96.08 | 82.46 | 91.59 | 90.38 | 86.73 | 86.73 | 96.08 | 98.00 | 95.15 | 100.00
Corn | 97.40 | 98.61 | 78.95 | 90.91 | 90.91 | 98.68 | 98.70 | 98.68 | 98.68 | 98.70 | 98.70
AA | 87.97 | 89.35 | 72.59 | 70.921 | 76.86 | 89.72 | 89.05 | 91.7233 | 93.98 | 95.31 | 95.30
OA | 92.38 | 93.04 | 72.95 | 75.126 | 75.96 | 93.55 | 93.63 | 94.90 | 96.72 | 96.90 | 97.44
KAPPA | 91.42 | 92.17 | 69.66 | 72.33 | 73.09 | 92.73 | 92.82 | 94.3 | 96.3 | 96.50 | 97.10
F1 Score | 88.07 | 89.13 | 73.53 | 73.20 | 77.31 | 89.33 | 89.04 | 91.90 | 94.7 | 94.70 | 96.3
Table 10. Classification performance measure (%) on the HYDICE WDM HSI.

Class | PCA | MNF | MI | CCRE | nCCRE | PCA-MI | PCA-CCRE | MNF-MI | MNF-CCRE | MNF-nCCRE | MNF-nCCREmRMR
Shadow (C1) | 90.91 | 58.54 | 88.24 | 72.73 | 90.63 | 96.77 | 97.06 | 58.00 | 71.74 | 97.06 | 100.00
Tree (C2) | 97.08 | 97.24 | 92.14 | 95.17 | 92.63 | 97.09 | 97.09 | 98.34 | 99.23 | 98.49 | 100.00
Roof (C3) | 92.14 | 73.55 | 88.24 | 61.24 | 82.69 | 88.36 | 97.74 | 82.64 | 89.47 | 97.74 | 98.48
Water (C4) | 92.38 | 95.41 | 82.90 | 95.36 | 87.11 | 93.27 | 93.33 | 97.94 | 98.92 | 96.55 | 98.77
Street (C5) | 93.85 | 86.77 | 93.77 | 86.63 | 92.44 | 93.85 | 94.10 | 93.67 | 96.84 | 93.66 | 99.74
Grass (C6) | 86.14 | 93.01 | 80.35 | 78.82 | 83.09 | 92.55 | 92.59 | 96.77 | 96.88 | 99.23 | 100.00
AA | 92.08 | 84.09 | 87.61 | 81.66 | 88.10 | 93.65 | 95.32 | 87.89 | 92.18 | 97.12 | 99.50
OA | 92.50 | 93.36 | 87.27 | 87.67 | 88.65 | 94.54 | 94.97 | 96.12 | 97.56 | 97.93 | 99.71
KAPPA | 89.57 | 90.73 | 82.27 | 82.93 | 84.18 | 92.37 | 92.97 | 94.60 | 96.60 | 97.06 | 99.60
F1 Score | 92.41 | 84.84 | 87.87 | 81.75 | 89.28 | 93.55 | 95.38 | 90.56 | 94.44 | 97.24 | 99.44
Table 11. Classification performance measure (%) on the ROSIS PU HSI.

Class | PCA | MNF | MI | CCRE | nCCRE | PCA-MI | PCA-CCRE | MNF-MI | MNF-CCRE | MNF-nCCRE | MNF-nCCREmRMR
Asphalt | 99.70 | 96.89 | 93.42 | 93.45 | 93.46 | 99.71 | 99.71 | 99.71 | 99.27 | 99.71 | 99.70
Meadows | 90.34 | 92.25 | 73.73 | 75.29 | 75.29 | 99.32 | 99.41 | 99.86 | 97.70 | 99.89 | 100
Gravel | 86.53 | 87.74 | 82.10 | 82.08 | 82.08 | 88.24 | 88.24 | 89.73 | 99.87 | 89.77 | 89.83
Tree | 83.10 | 83.10 | 67.80 | 67.89 | 80.39 | 84.09 | 84.09 | 84.71 | 92.84 | 83.72 | 84.03
Metal sheets | 90.85 | 93.01 | 68.23 | 68.47 | 69.04 | 98.55 | 98.56 | 95.13 | 92.18 | 92.18 | 93.45
Bare soil | 88.01 | 88.05 | 62.60 | 62.60 | 64.10 | 91.41 | 94.71 | 96.04 | 96.29 | 99.62 | 100
Bitumen | 99.52 | 99.36 | 87.89 | 91.82 | 91.88 | 99.36 | 99.36 | 99.52 | 99.52 | 100.00 | 100
Blocking Bricks | 85.57 | 87.65 | 69.30 | 69.30 | 69.30 | 95.42 | 94.68 | 94.68 | 98.08 | 98.09 | 100
Shadow | 86.96 | 86.96 | 86.96 | 86.96 | 86.96 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100
AA | 90.06 | 90.56 | 76.89 | 77.54 | 79.17 | 95.12 | 95.42 | 95.49 | 97.31 | 95.89 | 96.33
OA | 90.87 | 91.35 | 75.48 | 76.13 | 76.90 | 96.12 | 96.58 | 96.89 | 97.62 | 97.88 | 98.35
KAPPA | 88.88 | 89.46 | 70.11 | 70.94 | 71.86 | 95.28 | 95.84 | 96.22 | 97.09 | 97.42 | 98.00
F1 Score | 90.47 | 91.24 | 76.94 | 77.99 | 79.21 | 95.66 | 95.64 | 95.87 | 96.62 | 96.21 | 97.39
Table 12. OA measure using three different training-testing ratios on the three HSI datasets.

Training Size | Dataset | PCA | MNF | MI | CCRE | nCCRE | PCA-MI | PCA-CCRE | MNF-MI | MNF-CCRE | MNF-nCCRE | MNF-nCCREmRMR
10% | IP | 89.94 | 89.29 | 67.39 | 67.95 | 68.18 | 89.21 | 90.12 | 90.09 | 91.10 | 91.39 | 93.21
10% | WDM | 89.89 | 90.08 | 84.57 | 85.17 | 85.31 | 91.85 | 91.89 | 94.38 | 95.75 | 95.56 | 97.39
10% | PU | 90.05 | 90.15 | 74.14 | 75.19 | 74.08 | 95.32 | 95.51 | 94.29 | 96.14 | 96.29 | 97.79
20% | IP | 89.28 | 90.50 | 68.57 | 70.18 | 70.89 | 90.54 | 91.39 | 91.94 | 94.39 | 94.67 | 95.39
20% | WDM | 91.41 | 91.95 | 86.35 | 86.74 | 86.94 | 93.12 | 93.58 | 95.08 | 96.47 | 96.25 | 98.24
20% | PU | 90.87 | 91.35 | 75.48 | 76.13 | 76.90 | 96.12 | 96.58 | 96.89 | 97.62 | 97.88 | 98.35
30% | IP | 91.23 | 91.58 | 70.56 | 72.61 | 72.97 | 91.86 | 92.01 | 93.80 | 95.22 | 95.94 | 96.92
30% | WDM | 92.50 | 93.36 | 87.27 | 87.67 | 88.65 | 94.54 | 94.97 | 96.12 | 97.56 | 97.93 | 99.71
30% | PU | 91.09 | 91.95 | 76.20 | 77.07 | 77.61 | 96.79 | 96.96 | 97.09 | 97.93 | 98.05 | 98.94
Table 13. Classification performance measure (%) on the AVIRIS IP HSI for different dimension reduction methods and classification methods.

Classifier | Metric | PCA | MNF | MI | CCRE | nCCRE | PCA-MI | PCA-CCRE | MNF-MI | MNF-CCRE | MNF-nCCRE | MNF-nCCREmRMR
Naïve Bayesian Classifier | AA | 80.02 | 83.65 | 68.59 | 69.921 | 67.86 | 83.72 | 86.05 | 87.38 | 87.69 | 88.50 | 88.05
Naïve Bayesian Classifier | OA | 81.38 | 83.01 | 67.95 | 68.126 | 67.96 | 82.55 | 86.17 | 86.41 | 87.04 | 88.20 | 89.04
Naïve Bayesian Classifier | KAPPA | 82.33 | 84.24 | 67.66 | 69.33 | 68.09 | 82.73 | 86.09 | 88.63 | 87.02 | 87.31 | 88.89
Naïve Bayesian Classifier | F1 Score | 80.20 | 83.32 | 68.53 | 68.20 | 68.31 | 81.33 | 86.04 | 86.82 | 87.27 | 86.89 | 87.39
Decision Tree | AA | 85.54 | 85.24 | 70.59 | 73.921 | 75.27 | 87.24 | 88.34 | 90.91 | 92.57 | 93.24 | 93.87
Decision Tree | OA | 88.22 | 87.12 | 70.95 | 73.126 | 75.31 | 88.91 | 87.87 | 91.27 | 93.35 | 93.54 | 94.68
Decision Tree | KAPPA | 89.06 | 85.68 | 71.66 | 74.01 | 74.28 | 87.34 | 89.68 | 91.39 | 92.64 | 92.21 | 93.25
Decision Tree | F1 Score | 84.07 | 86.72 | 70.53 | 73.20 | 72.30 | 87.27 | 88.18 | 90.28 | 91.18 | 93.31 | 92.58
SVM | AA | 87.97 | 89.35 | 72.59 | 70.921 | 76.86 | 89.72 | 89.05 | 91.7233 | 93.98 | 95.31 | 95.30
SVM | OA | 92.38 | 93.04 | 72.95 | 75.126 | 75.96 | 93.55 | 93.63 | 94.90 | 96.72 | 96.90 | 97.44
SVM | KAPPA | 91.42 | 92.17 | 69.66 | 72.33 | 73.09 | 92.73 | 92.82 | 94.3 | 96.3 | 96.50 | 97.10
SVM | F1 Score | 88.07 | 89.13 | 73.53 | 73.20 | 77.31 | 89.33 | 89.04 | 91.90 | 94.7 | 94.70 | 96.3
Table 14. Classification performance measure (%) on the HYDICE WDM HSI for different dimension reduction methods and classification methods.

Classifier | Metric | PCA | MNF | MI | CCRE | nCCRE | PCA-MI | PCA-CCRE | MNF-MI | MNF-CCRE | MNF-nCCRE | MNF-nCCREmRMR
Naïve Bayesian Classifier | AA | 82.09 | 82.69 | 66.87 | 67.58 | 67.34 | 86.54 | 86.97 | 88.37 | 87.22 | 88.41 | 88.57
Naïve Bayesian Classifier | OA | 84.64 | 85.72 | 68.39 | 69.29 | 69.38 | 88.23 | 88.56 | 89.21 | 88.19 | 89.57 | 89.86
Naïve Bayesian Classifier | KAPPA | 82.64 | 83.66 | 65.28 | 67.64 | 68.39 | 86.35 | 87.58 | 86.41 | 85.31 | 87.24 | 88.94
Naïve Bayesian Classifier | F1 Score | 81.47 | 81.47 | 65.44 | 67.38 | 67.46 | 83.57 | 86.34 | 87.85 | 83.54 | 87.28 | 89.07
Decision Tree | AA | 90.29 | 88.43 | 82.58 | 83.34 | 84.98 | 93.39 | 91.58 | 92.87 | 93.39 | 94.09 | 96.65
Decision Tree | OA | 91.07 | 90.97 | 84.24 | 84.71 | 85.34 | 95.47 | 93.42 | 94.49 | 95.27 | 95.33 | 97.39
Decision Tree | KAPPA | 88.65 | 87.29 | 80.39 | 82.98 | 83.48 | 93.74 | 91.79 | 92.39 | 92.74 | 95.03 | 96.74
Decision Tree | F1 Score | 88.47 | 86.98 | 79.73 | 82.75 | 81.39 | 94.09 | 90.98 | 91.85 | 92.45 | 94.17 | 96.29
SVM | AA | 92.08 | 84.09 | 87.61 | 81.66 | 88.10 | 93.65 | 95.32 | 87.89 | 92.18 | 97.12 | 99.50
SVM | OA | 92.50 | 93.36 | 87.27 | 87.67 | 88.65 | 94.54 | 94.97 | 96.12 | 97.56 | 97.93 | 99.71
SVM | KAPPA | 89.57 | 90.73 | 82.27 | 82.93 | 84.18 | 92.37 | 92.97 | 94.60 | 96.60 | 97.06 | 99.60
SVM | F1 Score | 92.41 | 84.84 | 87.87 | 81.75 | 89.28 | 93.55 | 95.38 | 90.56 | 94.44 | 97.24 | 99.44
Table 15. Classification performance measure (%) on the ROSIS PU HSI for different dimension reduction methods and classification methods.

Classifier | Metric | PCA | MNF | MI | CCRE | nCCRE | PCA-MI | PCA-CCRE | MNF-MI | MNF-CCRE | MNF-nCCRE | MNF-nCCREmRMR
Naïve Bayesian Classifier | AA | 80.99 | 78.18 | 62.22 | 61.31 | 64.22 | 83.10 | 82.39 | 84.41 | 84.44 | 94.39 | 85.99
Naïve Bayesian Classifier | OA | 81.08 | 80.98 | 64.25 | 65.87 | 67.57 | 85.47 | 85.45 | 85.17 | 86.48 | 86.27 | 87.87
Naïve Bayesian Classifier | KAPPA | 80.45 | 79.45 | 61.34 | 66.20 | 60.97 | 82.36 | 82.64 | 83.36 | 83.60 | 84.21 | 84.95
Naïve Bayesian Classifier | F1 Score | 79.33 | 78.33 | 61.18 | 64.39 | 61.33 | 81.44 | 81.22 | 82.95 | 83.25 | 81.59 | 82.33
Decision Tree | AA | 86.19 | 86.3 | 70.09 | 71.47 | 71.31 | 91.33 | 92.14 | 92.31 | 93.19 | 94.69 | 95.29
Decision Tree | OA | 88.91 | 88.32 | 73.71 | 74.34 | 74.97 | 93.27 | 94.81 | 94.52 | 94.24 | 96.08 | 96.28
Decision Tree | KAPPA | 85.33 | 85.70 | 70.01 | 70.66 | 70.54 | 91.47 | 92.34 | 92.47 | 93.37 | 95.38 | 95.08
Decision Tree | F1 Score | 83.97 | 85.44 | 69.95 | 71.01 | 70.11 | 90.65 | 91.19 | 92.39 | 92.85 | 94.29 | 94.27
SVM | AA | 90.06 | 90.56 | 76.89 | 77.54 | 79.17 | 95.12 | 95.42 | 95.49 | 97.31 | 95.89 | 96.33
SVM | OA | 90.87 | 91.35 | 75.48 | 76.13 | 76.90 | 96.12 | 96.58 | 96.89 | 97.62 | 97.88 | 98.35
SVM | KAPPA | 88.88 | 89.46 | 70.11 | 70.94 | 71.86 | 95.28 | 95.84 | 96.22 | 97.09 | 97.42 | 98.00
SVM | F1 Score | 90.47 | 91.24 | 76.94 | 77.99 | 79.21 | 95.66 | 95.64 | 95.87 | 96.62 | 96.21 | 97.39
Table 16. The computational time (in seconds) of each method on the AVIRIS IP, HYDICE WDM, and ROSIS PU datasets (n/a: stage not applicable).

Dataset | Stage | PCA | MNF | MI | CCRE | PCA-MI | PCA-CCRE | MNF-MI | MNF-CCRE | MNF-nCCREmRMR
AVIRIS IP | Transformation | 0.11 | 0.12 | n/a | n/a | 0.11 | 0.11 | 0.12 | 0.12 | 0.12
AVIRIS IP | Feature Selection | n/a | n/a | 1.56 | 1.38 | 1.51 | 1.37 | 1.6 | 1.45 | 1.58
AVIRIS IP | Total Cost | 0.11 | 0.12 | 1.56 | 1.38 | 1.62 | 1.48 | 1.72 | 1.57 | 1.7
HYDICE WDM | Transformation | 0.17 | 0.18 | n/a | n/a | 0.17 | 0.17 | 0.19 | 0.19 | 0.19
HYDICE WDM | Feature Selection | n/a | n/a | 2.1 | 1.83 | 2.40 | 1.83 | 2.6 | 1.95 | 2.1
HYDICE WDM | Total Cost | 0.17 | 0.18 | 2.1 | 1.83 | 2.57 | 2.0 | 2.79 | 2.14 | 2.29
ROSIS PU | Transformation | 0.12 | 0.10 | n/a | n/a | 0.12 | 0.12 | 0.10 | 0.10 | 0.10
ROSIS PU | Feature Selection | n/a | n/a | 1.88 | 1.49 | 1.91 | 1.52 | 2.1 | 1.6 | 1.78
ROSIS PU | Total Cost | 0.12 | 0.10 | 1.88 | 1.49 | 2.03 | 1.64 | 2.2 | 1.7 | 1.88
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
