Sensors
  • Article
  • Open Access

28 November 2025

Interband Consistency-Driven Structural Subspace Clustering for Unsupervised Hyperspectral Band Selection

College of Computer Science, Liaocheng University, Liaocheng 252059, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(23), 7265; https://doi.org/10.3390/s25237265
This article belongs to the Section Sensing and Imaging

Abstract

In the classification of hyperspectral remote sensing images (HSIs), band selection is crucial for mitigating the curse of dimensionality while preserving the intrinsic physical information within HSIs. Although clustering-based band selection methods are widely applied, they often overlook the inherent physical properties of hyperspectral images. Such approaches typically operate in the raw high-dimensional space, which is susceptible to noise and redundancy. As a result, the generated band combinations fail to adequately characterize the spectral features of the underlying materials, leading to suboptimal band-grouping schemes. To address this, we propose a novel Interband Consistency-Constrained Structural Subspace Clustering (ICC-SSC) method. The core assumption is that the spectral characteristics of land cover inherently reside within a low-dimensional subspace, and bands within this subspace should exhibit strong physical consistency, meaning that the spectral signatures of land covers show significant similarity across these bands. Driven by this physical interpretation, our method innovates in two ways. First, we employ the $\ell_{1,2}$ norm in the self-representation model to discover the inherent grouping structure of the bands. This enforces a small set of common, representative basis bands to reconstruct the others, effectively identifying the most physically informative bands that anchor these material-specific subspaces. Second, we incorporate a total variation (TV) regularization term into the proposed model to capture the smoothness between adjacent bands. This physics-based constraint enhances the consistency of representations among adjacent bands, ensuring that subspace representations across all bands maintain well-structured coherence. An efficient algorithm based on the Alternating Direction Method of Multipliers (ADMM) is derived to solve the proposed model. Extensive experiments on three real HSIs demonstrate that ICC-SSC significantly outperforms state-of-the-art methods.

1. Introduction

Hyperspectral sensors capture intricate details of land cover by generating hyperspectral remote sensing image (HSI) cubes with hundreds of contiguous spectral bands, resulting in a wealth of sensor-derived spectral and spatial information []. High-resolution HSI information is crucial for effectively distinguishing various ground objects, enabling a wide range of applications such as ocean monitoring [], land cover classification [], and object detection []. However, the very capability of these sensors to capture high-dimensional data also presents significant processing challenges. The high dimensionality and strong correlation between adjacent bands inherent in this sensor data often lead to the Hughes phenomenon in classification tasks [], which can diminish the utility of the sensor's output. Therefore, developing effective dimensionality reduction techniques is not merely a data preprocessing step but a crucial requirement for unlocking the full potential of hyperspectral sensor technology.
Dimensionality reduction of HSIs primarily involves two distinct approaches: feature extraction [] and band selection [,]. Through feature extraction, the initial high-dimensional spectral bands are efficiently projected into a low-dimensional space, which facilitates the analysis of HSIs. In contrast, band selection methods aim to identify and extract a representative subset of bands from the target HSI, thus effectively reducing its dimensionality. The advantage of band selection methods lies in their ability to preserve the original spectral information while ensuring its physical interpretability []. Depending on whether training samples are used, band selection methods can be classified into three types: supervised [], unsupervised, and semi-supervised []. Supervised and semi-supervised methods require labeled samples, whereas unsupervised methods offer a more flexible approach by avoiding this requirement, given the challenge of obtaining labeled data [].
Unsupervised band selection methods are mainly categorized into ranking-based methods, clustering-based methods, and search-based methods. Ranking-based methods, such as maximum variance-based principal component analysis (MVPCA) [], fast neighborhood grouping for hyperspectral band selection (FNGBS) [], and similarity-based ranking strategy [], typically select the representative band based on specific metrics and their rankings. However, ranking-based methods rarely consider the correlation between bands []. Search-based methods commonly address the task of band selection by optimizing a specific objective function, such as adaptive multistrategy particle swarm optimization [] and the cuckoo search-based method []. However, the computational complexity of these methods tends to be high []. In terms of clustering-based methods, they perform band selection by grouping similar bands and then selecting a representative band from each group [], such as enhanced fast density-peak-based clustering (E-FDPC) [], Ward’s linkage strategy using divergence (WaLuDi) [], and the adaptive subspace partition strategy (ASPS) []. Recently, due to the successful applications of deep learning in processing HSIs [], deep learning has been used to design effective clustering-based methods for band selection, such as the deep subspace clustering method (DSC) [] and the sparsity regularized deep subspace clustering method []. However, training deep neural network models in these methods requires high-performance devices, and model training is computationally intensive.
Sparse subspace clustering (SSC) [] has received much attention for its superior performance in hyperspectral band selection. The SSC method first constructs a sparse coefficient matrix assuming that each band of the target HSI can be represented by a linear combination of other bands. Subsequently, a similarity matrix is created from this coefficient matrix. Finally, spectral clustering is performed based on the similarity matrix, and the clustering results obtained are used to select representative bands. Therefore, existing studies mainly enhance the effectiveness of SSC-based methods by improving the quality of the generated sparse coefficient matrices. For example, Sun et al. [] proposed an SSC-based band selection method that enforces both the sparsity of the coefficient matrix and its block diagonal structure. Huang et al. [] factorized the coefficient matrix of the self-representation model into the desired coefficient matrix and a sparse error matrix, and they adaptively adjusted the coefficient matrix to take advantage of the intrinsic structural information in HSIs. Cai et al. [] incorporated graph convolution into the self-representation model and provided a closed-form solution; based on the obtained coefficient matrix, ranking-based and clustering-based strategies were proposed to identify the representative band subset. You et al. [] presented a global affinity matrix reconstruction (GAMR) method that adaptively reconstructs the global similarity matrix to guide band selection by iteratively refining pseudo-labels through convex optimization. Although the clustering performance of SSC-based band selection methods has been significantly improved, these methods neglect the high correlation between bands in real HSIs and thus have limited performance. Therefore, it remains challenging to design effective self-representation models that exploit the inherent high correlation between adjacent bands in real HSIs so as to learn a consistent representation of all bands for effective clustering that supports band selection.
To address this inherent challenge in hyperspectral sensor data processing, we propose a novel Interband Consistency-Constrained Structural Subspace Clustering (ICC-SSC) method for hyperspectral band selection. Our approach is grounded in the observation that the spectral signatures of land covers reside in low-dimensional subspaces within the high-dimensional sensor data. We assume that each band within such a material-specific subspace can be represented by a linear combination of a common set of physically informative basis bands. Driven by this sensor-derived physical insight, ICC-SSC innovates in two key aspects: structured sparsity for physical group discovery and physics-aware smoothness constraints, thereby learning more effective band representations. Specifically, instead of the conventional $\ell_1$ norm, we employ the $\ell_{1,2}$ norm in the self-representation model. This enforcement of structured sparsity ensures that a common set of basis bands collectively reconstructs the others within a subspace. This mechanism directly identifies the most representative and physically central bands that anchor the characteristic spectral profiles of distinct materials, leading to a more consistent and interpretable grouping. To explicitly model the high correlation between adjacent bands, a fundamental property of continuous spectral sampling by hyperspectral sensors, we incorporate a total variation (TV) regularization term into the self-representation model. This term penalizes representation discrepancies between neighboring bands, thereby enforcing smoothness along the spectral dimension and ensuring that the learned subspace representations respect the inherent physical continuity of the sensor data. An efficient algorithm based on the Alternating Direction Method of Multipliers (ADMM) is derived to solve the proposed ICC-SSC model. Finally, based on the cluster partitioning result, we select the most representative and informative band from each subspace using an information entropy-based criterion to form the representative band subset. The primary contributions of this study are as follows.
  • We propose a novel Interband Consistency-Constrained Structural Subspace Clustering (ICC-SSC) method for band selection, replacing the conventional $\ell_1$ norm with the $\ell_{1,2}$ norm in the self-representation model. This enforces structured sparsity in the coefficient matrix, ensuring that bands within the same subspace share a common set of basis bands for linear representation. The structural constraint enhances intra-subspace consistency and improves clustering discriminability.
  • To leverage the inherent high correlation between adjacent bands in hyperspectral imagery (HSI), we integrate total variation (TV) regularization into the self-representation model. This explicitly enforces smoothness and consistency in the representations of neighboring bands.
  • We develop an efficient Alternating Direction Method of Multipliers (ADMM)-based optimization strategy to solve the non-convex ICC-SSC model. In addition, the effectiveness of ICC-SSC is demonstrated by an experimental comparison with eight state-of-the-art band selection methods on three real datasets.
This paper is structured as follows: Section 2 briefly describes the sparse subspace clustering model. Next, Section 3 mainly describes the model and optimization solution of the ICC-SSC method proposed in this study. Section 4 outlines the experimental setup, gives a series of experimental results, and discusses the ICC-SSC methodology. Finally, Section 5 summarizes this study.

2. Preliminary

In this section, we briefly introduce relevant work on subspace clustering, which helps in understanding the band selection method proposed in this paper.

2.1. SSC

SSC aims to group data points that are drawn from a union of multiple low-dimensional subspaces. This is achieved by learning a sparse representation of the data, where each point is reconstructed using a small subset of other points []. Specifically, given a high-dimensional data set $\mathcal{Y} = \{y_i\}_{i=1}^{N}$, where $y_i \in \mathbb{R}^{M}$, the reconstruction of each data point $y_i$ is accomplished by effectively combining the other data points in the set $\mathcal{Y}$. Mathematically, the model of SSC is represented as follows:
$$\min_{C} \|C\|_0, \quad \mathrm{s.t.} \quad Y = YC, \ \operatorname{diag}(C) = 0, \tag{1}$$
where $\|\cdot\|_0$ indicates the $\ell_0$ norm of a matrix; $Y = (y_1, y_2, \ldots, y_i, \ldots, y_N) \in \mathbb{R}^{M \times N}$ represents the data matrix composed of the data points in the set $\mathcal{Y}$; $C = (c_1, c_2, \ldots, c_i, \ldots, c_N) \in \mathbb{R}^{N \times N}$ denotes the coefficient matrix consisting of the representation vectors of $Y$; and $\operatorname{diag}(C) \in \mathbb{R}^{N}$ denotes the vector formed by all diagonal components of $C$, so the constraint $\operatorname{diag}(C) = 0$ ensures that data points in $Y$ are not represented by themselves. It should be noted that each column vector $c_i$ of $C$ indicates the relative importance of the other data points in $Y$ when expressing the corresponding data point $y_i$ [].
Mathematically, Equation (1) can be solved by minimizing the objective function $\|C\|_0$. However, this usually results in an NP-hard problem. Fortunately, with a convex relaxation technique, the $\ell_0$ norm can be replaced by the $\ell_1$ norm, resulting in the sparse self-representation model [] as follows:
$$\min_{C} \|C\|_1, \quad \mathrm{s.t.} \quad Y = YC, \ \operatorname{diag}(C) = 0. \tag{2}$$

2.2. $\ell_{1,2}$ Norm-Based SSC

Recently, the $\ell_{1,2}$ norm has been successfully applied to SSC problems []. Specifically, row sparsity of $C$ can be achieved by imposing the $\ell_{1,2}$ norm on the matrix $C$, so that a common group of basis vectors is used to express the data points in the same subspace []. Mathematically, the $\ell_{1,2}$ norm-based SSC model can be expressed as follows:
$$\min_{C} \|C\|_{1,2} + \frac{\lambda}{2} \|Y - YC\|_F^2, \quad \mathrm{s.t.} \quad \operatorname{diag}(C) = 0, \tag{3}$$
where $\|C\|_{1,2} = \sum_{i=1}^{N} \|C_{i,:}\|_2$ and $\lambda$ denotes the regularization parameter.
Then, using the obtained sparse coefficient matrix $C$, the affinity matrix $W$ can be constructed as follows:
$$W = |C| + |C|^T. \tag{4}$$
Next, clustering results are obtained by performing spectral clustering based on W .
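For concreteness, the following minimal sketch (in Python with scikit-learn, our choice of illustration language; not the paper's code) shows the two steps that follow once a coefficient matrix $C$ has been obtained from (2) or (3): building the affinity in (4) and performing spectral clustering on it.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def ssc_cluster(C, n_clusters):
    """Cluster data points from a solved self-representation coefficient matrix C."""
    W = np.abs(C) + np.abs(C).T  # symmetric affinity matrix, Eq. (4)
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(W)
    return W, labels
```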

3. Method

This section describes the model and optimization scheme of the ICC-SSC method proposed in this study. We first introduce a new model of the ICC-SSC method, which aims to obtain clusters that are more favorable for band selection by learning a consistent representation of the bands. Then, the solution method of the ICC-SSC model is analyzed. Furthermore, a strategy for band selection via information entropy is presented based on the clustering results derived from the ICC-SSC method.

3.1. Proposed Model

The overall flowchart of the ICC-SSC method is shown in Figure 1. It can be seen that our proposed ICC-SSC method is realized through an integrated workflow that begins with the input HSI data cube and progressively identifies the most representative band subset. Central to this process is a self-representation model regularized by the $\ell_{1,2}$ norm, which reveals the intrinsic grouping structure of bands by selecting the minimal basis set capable of reconstructing the other bands, thereby identifying distinct spectral subspaces. Meanwhile, a TV regularization term is introduced. By promoting smoothness in the representation coefficients, it ensures physical consistency between adjacent bands, enabling the model to align directly with the spectral continuity inherent in the HSI data. This unified model is efficiently optimized using the Alternating Direction Method of Multipliers (ADMM), generating a coefficient matrix that encodes subspace membership relationships. Based on the similarity matrix constructed from the obtained coefficient matrix, spectral clustering is then performed to partition the bands into coherent clusters. To robustly select representative bands from each cluster, the method uses information entropy as the final screening criterion. The band with the highest entropy value is chosen as the representative of its cluster, thereby forming the representative band subset.
Figure 1. The flowchart of ICC-SSC method, where similar colors represent similar element values.
In addition, Figure 2 illustrates a schematic diagram of the ICC-SSC model, comparing the self-representation learning mechanisms of our proposed ICC-SSC method and the traditional SSC method. Prior to sparse self-representation learning, ICC-SSC first reshapes the hyperspectral image cube of size $H \times W \times B$ into a 2D matrix $X \in \mathbb{R}^{M \times B}$, where $M = H \times W$ represents the spatial dimensionality, and $H$, $W$, and $B$ denote the image height, width, and number of spectral bands, respectively. This is achieved by vectorizing each band image and arranging the resulting vectors in the order of their spectral wavelengths to form the matrix $X$. In Figure 2, we assume that subspace 1 and subspace 2 denote the same subspace of original bands, where band 1 and band 2 represent two neighboring bands in this subspace. In traditional SSC methods, as shown by the dashed arrows, band 1 and band 2 may be represented by different sets of basis bands, which leads to inconsistent representations of band 1 and band 2. In contrast, the ICC-SSC method learns a more consistent representation for both bands by employing a common set of basis bands. Specifically, the ICC-SSC approach employs an $\ell_{1,2}$ norm-based sparse subspace clustering model, which enables the representation of different bands within the same subspace using a common set of basis bands. Furthermore, the ICC-SSC model incorporates total variation regularization to enforce consistency in the representation of adjacent bands.
Figure 2. Schematic diagram of the ICC-SSC method, where similar colors represent similar element values. Assume that subspace 1 and subspace 2 denote the same subspace of original bands, where band 1 and band 2 represent two neighboring bands in this subspace. In traditional SSC methods, as shown by the dashed arrows, band 1 and band 2 may be represented by different sets of basis bands, which leads to inconsistent representations of band 1 and band 2. Conversely, the ICC-SSC method can learn a more consistent representation for both bands by employing a common set of basis bands. Specifically, the ICC-SSC approach employs an $\ell_{1,2}$ norm-based sparse subspace clustering model, which enables the representation of different bands within the same subspace using a common set of basis bands. In addition, the ICC-SSC model ensures consistency in the representation of adjacent bands by integrating total variation regularization.
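The reshaping step described above is a simple flattening of the spatial dimensions; a minimal numpy sketch (with hypothetical dimensions) is shown below.

```python
import numpy as np

# Hypothetical HSI cube of height H, width W, and B spectral bands,
# with bands already ordered by wavelength.
H, W, B = 145, 145, 200
cube = np.random.rand(H, W, B)  # stand-in for a real HSI cube

# Vectorize each band image and stack the vectors column-wise:
# X has shape (H * W) x B, and column j is the flattened image of band j.
X = cube.reshape(H * W, B)
```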
Given the obvious interband correlations in HSIs, we introduce a total variation regularization [] into the $\ell_{1,2}$ norm-based SSC model [] to exploit the similarity relationship among adjacent bands. The model of ICC-SSC can then be expressed as
$$\min_{Z} \|Z\|_{1,2} + \frac{\lambda}{2} \|X - XZ\|_F^2 + \beta\, \mathrm{TV}(Z), \quad \mathrm{s.t.} \quad \operatorname{diag}(Z) = 0, \tag{5}$$
where $Z \in \mathbb{R}^{B \times B}$ represents the coefficient matrix, $\mathrm{TV}(Z) = \sum_{j=1}^{B-1} \|Z_{:,j+1} - Z_{:,j}\|_1$ denotes the total variation regularization [], and $\beta$ indicates the regularization parameter. It is worth noting that $\mathrm{TV}(Z)$ imposes the constraint that the self-representations of neighboring bands should be similar.
For computational convenience, we construct an auxiliary matrix $G$ with $B$ rows and $B-1$ columns, defined as
$$G = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ -1 & 1 & \cdots & 0 \\ 0 & -1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & 1 \\ 0 & 0 & \cdots & -1 \end{pmatrix}, \tag{6}$$
that is, $G_{j,j} = 1$ and $G_{j+1,j} = -1$ for $j = 1, \ldots, B-1$, with all other entries zero.
Based on the definition of $G$, the total variation regularization in (5) can be rewritten as $\mathrm{TV}(Z) = \|ZG\|_{1,1}$, where $\|\cdot\|_{1,1}$ indicates the $\ell_{1,1}$ norm of a matrix: given $C \in \mathbb{R}^{M \times N}$, $\|C\|_{1,1} = \sum_{n=1}^{N} \|C_{:,n}\|_1$. Accordingly, the model in (5) can be re-expressed as
$$\min_{Z} \|Z\|_{1,2} + \frac{\lambda}{2} \|X - XZ\|_F^2 + \beta \|ZG\|_{1,1}, \quad \mathrm{s.t.} \quad \operatorname{diag}(Z) = 0. \tag{7}$$
Since the optimization problem in (7) is an equality-constrained problem, ADMM is an efficient method for solving it.
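As a quick sanity check, the following hypothetical numpy sketch builds $G$ as in (6) and numerically confirms the identity $\mathrm{TV}(Z) = \|ZG\|_{1,1}$ used in (7).

```python
import numpy as np

def difference_matrix(B):
    """Build G from (6): G[j, j] = 1, G[j + 1, j] = -1, shape B x (B - 1)."""
    G = np.zeros((B, B - 1))
    idx = np.arange(B - 1)
    G[idx, idx] = 1.0
    G[idx + 1, idx] = -1.0
    return G

B = 6
Z = np.random.randn(B, B)
G = difference_matrix(B)
tv_direct = sum(np.abs(Z[:, j + 1] - Z[:, j]).sum() for j in range(B - 1))
tv_matrix = np.abs(Z @ G).sum()  # ||ZG||_{1,1}
assert np.isclose(tv_direct, tv_matrix)
```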

3.2. Solution for the Model of ICC-SSC

Following ADMM, to separate the variables, we introduce the auxiliary variables $A$, $V_1$, and $V_2$ into the objective function in (7). The model in (7) can thus be reformulated equivalently as
$$\min_{A, Z, V_1, V_2} \|Z\|_{1,2} + \frac{\lambda}{2} \|X - XA\|_F^2 + \beta \|V_2\|_{1,1}, \quad \mathrm{s.t.} \quad A = Z - \operatorname{diag}(Z), \ V_1 = A, \ V_2 = A G. \tag{8}$$
Subsequently, according to the constraints in (8), the corresponding penalty terms are added to the objective function to obtain the augmented Lagrangian function as follows:
$$\begin{aligned} \mathcal{L}(A, Z, V_1, V_2, \alpha_1, \alpha_2, \alpha_3) = {}& \|Z\|_{1,2} + \frac{\lambda}{2} \|X - XA\|_F^2 + \beta \|V_2\|_{1,1} \\ &+ \frac{\rho}{2} \|A - (Z - \operatorname{diag}(Z))\|_F^2 + \langle \alpha_1, A - (Z - \operatorname{diag}(Z)) \rangle \\ &+ \frac{\rho}{2} \|V_1 - A\|_F^2 + \langle \alpha_2, V_1 - A \rangle \\ &+ \frac{\rho}{2} \|V_2 - V_1 G\|_F^2 + \langle \alpha_3, V_2 - V_1 G \rangle, \end{aligned} \tag{9}$$
where $\alpha_1$, $\alpha_2$, and $\alpha_3$ are the three Lagrange multipliers and $\rho$ signifies the penalty coefficient. Based on the above Lagrangian function in (9), we iteratively optimize the variables $Z$, $A$, $V_1$, and $V_2$ using ADMM, as follows.
$Z$-update: Based on (9), the variable $Z$ is optimized according to the sub-problem given in (10), where $k$ denotes the iteration index:
$$Z^{(k+1)} = \arg\min_{Z} \|Z\|_{1,2} + \frac{\rho}{2} \|A^{(k)} - (Z - \operatorname{diag}(Z))\|_F^2 + \langle \alpha_1^{(k)}, A^{(k)} - (Z - \operatorname{diag}(Z)) \rangle. \tag{10}$$
We solve the sub-problem in (10) following Lemma 4.1 in [], and the resulting update rule for $Z$ is given row-wise by
$$Z_{i,:}^{(k+1)} = \begin{cases} \dfrac{\|\delta_{i,:}^{(k)}\|_2 - \zeta}{\|\delta_{i,:}^{(k)}\|_2}\, \delta_{i,:}^{(k)}, & \|\delta_{i,:}^{(k)}\|_2 \geq \zeta, \\ 0, & \text{otherwise}, \end{cases} \tag{11}$$
where $\zeta = \frac{1}{\rho}$ and $\delta^{(k)} = A^{(k)} + \frac{\alpha_1^{(k)}}{\rho}$.
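The update in (11) is a row-wise group soft-thresholding, i.e., the proximal operator of $\zeta \|\cdot\|_{1,2}$. A minimal numpy sketch (illustrative, not the authors' code) follows:

```python
import numpy as np

def prox_l12(delta, zeta):
    """Row-wise shrinkage from (11): rows whose l2 norm is below zeta vanish;
    the remaining rows are scaled by (||row||_2 - zeta) / ||row||_2."""
    norms = np.linalg.norm(delta, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - zeta / np.maximum(norms, 1e-12))
    return scale * delta
```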
$A$-update: By ignoring the terms in (9) that are unrelated to $A$, we obtain the following sub-problem:
$$\begin{aligned} A^{(k+1)} = \arg\min_{A}\ & \frac{\lambda}{2} \|X - XA\|_F^2 + \langle \alpha_1^{(k)}, A - (Z^{(k+1)} - \operatorname{diag}(Z^{(k+1)})) \rangle \\ &+ \frac{\rho}{2} \|A - (Z^{(k+1)} - \operatorname{diag}(Z^{(k+1)}))\|_F^2 + \frac{\rho}{2} \|V_1^{(k)} - A\|_F^2 + \langle \alpha_2^{(k)}, V_1^{(k)} - A \rangle. \end{aligned} \tag{12}$$
By setting the derivative of the objective function in (12) to zero, we obtain the update rule for $A$ as follows:
$$A^{(k+1)} = (\lambda X^T X + 2\rho I)^{-1} \left(\lambda X^T X + \rho Z^{(k+1)} + \rho V_1^{(k)} - \alpha_1^{(k)} + \alpha_2^{(k)}\right), \tag{13}$$
where $I \in \mathbb{R}^{B \times B}$ represents the identity matrix.
$V_1$-update: The variable $V_1$ is optimized based on the sub-problem shown in (14):
$$V_1^{(k+1)} = \arg\min_{V_1} \frac{\rho}{2} \|V_1 - A^{(k+1)}\|_F^2 + \langle \alpha_2^{(k)}, V_1 - A^{(k+1)} \rangle + \frac{\rho}{2} \|V_2^{(k)} - V_1 G\|_F^2 + \langle \alpha_3^{(k)}, V_2^{(k)} - V_1 G \rangle. \tag{14}$$
By computing the derivative of the objective function in (14) and setting it to zero, we derive the following update rule for $V_1$:
$$V_1^{(k+1)} = \left(A^{(k+1)} - \frac{\alpha_2^{(k)}}{\rho} + V_2^{(k)} G^T + \frac{\alpha_3^{(k)} G^T}{\rho}\right) (I + G G^T)^{-1}. \tag{15}$$
$V_2$-update: By keeping only the terms in (9) that depend on $V_2$, we obtain the following sub-problem for $V_2$:
$$V_2^{(k+1)} = \arg\min_{V_2} \beta \|V_2\|_{1,1} + \frac{\rho}{2} \|V_2 - V_1^{(k+1)} G\|_F^2 + \langle \alpha_3^{(k)}, V_2 - V_1^{(k+1)} G \rangle. \tag{16}$$
To solve the sub-problem in (16), we apply the soft-thresholding operator [], and the resulting update rule for $V_2$ is expressed as
$$V_2^{(k+1)} = \operatorname{soft}\!\left(\frac{\beta}{\rho},\ V_1^{(k+1)} G - \frac{\alpha_3^{(k)}}{\rho}\right), \tag{17}$$
where $\operatorname{soft}(\tau, y) \triangleq \operatorname{sign}(y) \max\{|y| - \tau, 0\}$ denotes the element-wise soft-thresholding function.
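In code, the operator in (17) is a one-liner; a numpy sketch for reference:

```python
import numpy as np

def soft(tau, y):
    """Element-wise soft-thresholding: sign(y) * max(|y| - tau, 0), Eq. (17)."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)
```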
Based on the obtained $A^{(k+1)}$, $Z^{(k+1)}$, $V_1^{(k+1)}$, and $V_2^{(k+1)}$, the update rules for the Lagrange multipliers $\alpha_1$, $\alpha_2$, and $\alpha_3$ are, respectively, given by
$$\begin{aligned} \alpha_1^{(k+1)} &= \alpha_1^{(k)} + \rho \left(A^{(k+1)} - Z^{(k+1)} + \operatorname{diag}(Z^{(k+1)})\right), \\ \alpha_2^{(k+1)} &= \alpha_2^{(k)} + \rho \left(V_1^{(k+1)} - A^{(k+1)}\right), \\ \alpha_3^{(k+1)} &= \alpha_3^{(k)} + \rho \left(V_2^{(k+1)} - V_1^{(k+1)} G\right). \end{aligned} \tag{18}$$
Based on the above update rules, the complete procedure of ICC-SSC is summarized in Algorithm 1. Note that in step 9 of Algorithm 1, ICC-SSC computes the similarity matrix $W$ used for spectral clustering from the resulting sparse coefficient matrix $Z$. Inspired by Equation (4) in [], we calculate $W$ according to
$$W_{i,j} = \frac{Z_{i,j}^2 + Z_{j,i}^2}{Z_{\max}}, \tag{19}$$
where $Z_{\max}$ denotes the largest element in matrix $Z$, which is used to keep the magnitude of $W_{i,j}$ within a certain range. It should be noted that squaring the elements of $Z$ in (19) increases the importance of the bands with higher similarity to band $i$ while reducing the influence of the bands with lower similarity, thus strengthening the connection between highly similar bands in the affinity matrix. This helps spectral clustering obtain more accurate segmentation results.
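A minimal numpy version of (19) is shown below (illustrative; we take $Z_{\max}$ as the largest absolute entry, which is an assumption of this sketch since $Z$ may contain negative values):

```python
import numpy as np

def affinity(Z):
    """Squared, symmetrized affinity from (19), scaled by the largest entry of Z."""
    return (Z**2 + (Z**2).T) / np.abs(Z).max()
```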
Algorithm 1: The Algorithm for ICC-SSC
Input: the data matrix $X$, the number of clusters $n$, the regularization parameters $\lambda$ and $\beta$, the tolerance $\varepsilon$, the penalty parameter $\rho$, and the maximum number of iterations $Iter$.
Output: the selected band set $\Phi$.
1: Set the iteration index $k = 0$ and initialize $A^{(0)}$;
2: Initialize the Lagrange multipliers $\alpha_1^{(0)}$, $\alpha_2^{(0)}$, and $\alpha_3^{(0)}$;
repeat
3:   Update matrix $Z$ according to (11);
4:   Update matrix $A$ by (13);
5:   Update matrix $V_1$ via (15);
6:   Update matrix $V_2$ by (17);
7:   Update $\alpha_t$, where $t = 1, 2, 3$, according to (18);
8:   Set $k = k + 1$;
until $\|Z^{(k+1)} - Z^{(k)}\| < \varepsilon$ or $k \geq Iter$
9: Calculate the affinity matrix $W$ from matrix $Z$ according to (19);
10: Obtain clustering results via spectral clustering on $W$;
11: Based on the clustering results, use information entropy to select the target band set $\Phi$;
12: Return the selected band set $\Phi$.
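To make the loop concrete, the following compact Python sketch strings together the update rules (11), (13), (15), (17), and (18). It is an illustration under stated assumptions (zero initialization of all variables and multipliers, a fixed $\rho$, and a max-absolute-change stopping rule), not the authors' MATLAB implementation.

```python
import numpy as np

def icc_ssc_admm(X, lam, beta, rho, max_iter=100, eps=1e-6):
    """ADMM loop of Algorithm 1 (steps 1-8); returns the coefficient matrix Z."""
    _, B = X.shape
    # Difference matrix G from (6).
    G = np.zeros((B, B - 1))
    G[np.arange(B - 1), np.arange(B - 1)] = 1.0
    G[np.arange(1, B), np.arange(B - 1)] = -1.0

    Z = np.zeros((B, B)); A = np.zeros((B, B))
    V1 = np.zeros((B, B)); V2 = np.zeros((B, B - 1))
    a1 = np.zeros((B, B)); a2 = np.zeros((B, B)); a3 = np.zeros((B, B - 1))

    XtX = X.T @ X
    inv_A = np.linalg.inv(lam * XtX + 2 * rho * np.eye(B))  # constant term in (13)
    inv_V1 = np.linalg.inv(np.eye(B) + G @ G.T)             # constant term in (15)

    for it in range(max_iter):
        Z_old = Z
        # Z-update: row-wise shrinkage (11) with delta = A + a1 / rho.
        delta = A + a1 / rho
        norms = np.linalg.norm(delta, axis=1, keepdims=True)
        Z = np.maximum(0.0, 1.0 - (1.0 / rho) / np.maximum(norms, 1e-12)) * delta
        np.fill_diagonal(Z, 0.0)                            # enforce diag(Z) = 0
        # A-update: closed form (13).
        A = inv_A @ (lam * XtX + rho * Z + rho * V1 - a1 + a2)
        # V1-update: closed form (15).
        V1 = (A - a2 / rho + V2 @ G.T + (a3 @ G.T) / rho) @ inv_V1
        # V2-update: soft-thresholding (17).
        Y = V1 @ G - a3 / rho
        V2 = np.sign(Y) * np.maximum(np.abs(Y) - beta / rho, 0.0)
        # Multiplier updates (18); diag(Z) is already zero, so Z - diag(Z) = Z.
        a1 = a1 + rho * (A - Z)
        a2 = a2 + rho * (V1 - A)
        a3 = a3 + rho * (V2 - V1 @ G)
        if it > 0 and np.abs(Z - Z_old).max() < eps:
            break
    return Z
```

Note that the two matrix inverses are fixed across iterations and are therefore precomputed once, which is consistent with the per-iteration cost analysis in Section 4.6.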

3.3. Band Selection via Information Entropy

After obtaining the clustering results, selecting representative bands from each cluster is a critical step in band selection for HSIs. Noise in hyperspectral images interferes with the affinity matrix and masks the clustering structure, which adversely affects spectral clustering. Considering that information entropy has good noise adaptation ability [,,], we use information entropy to select bands in the ICC-SSC method.
Specifically, the information entropy of band $X_{:,i}$, where $i = 1, \ldots, B$, is calculated by
$$H(X_{:,i}) = -\sum_{\omega \in \Omega} p(\omega) \log p(\omega), \tag{20}$$
where $\Omega$ represents the gray-level space; $H(X_{:,i})$ denotes the information entropy of the band $X_{:,i}$; and $p(\omega)$ signifies the probability of the gray level $\omega$ occurring within the band $X_{:,i}$. Next, within each cluster, we rank the bands by information entropy and select the top-ranked, most information-rich band as that cluster's representative. By prioritizing the bands with higher information entropy, the ICC-SSC method ensures that the selected bands are not only differentiated in terms of spectral similarity but also information-rich. This strategy also gives ICC-SSC good resistance to noise interference.
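A small sketch of this selection stage (hypothetical; approximating gray levels by histogram binning is an assumption of this illustration):

```python
import numpy as np

def select_bands_by_entropy(X, labels, n_bins=256):
    """From each cluster, pick the band with the highest entropy, Eq. (20).

    X: (pixels x bands) data matrix; labels: cluster index for each band.
    """
    B = X.shape[1]
    entropy = np.empty(B)
    for i in range(B):
        hist, _ = np.histogram(X[:, i], bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropy[i] = -(p * np.log(p)).sum()
    # Highest-entropy band per cluster forms the selected band set.
    return sorted(int(np.argmax(np.where(labels == c, entropy, -np.inf)))
                  for c in np.unique(labels))
```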

4. Experiments and Discussion

In this section, we focus on the experimental setup and results, and we discuss the ICC-SSC method. We evaluate the ICC-SSC method on three real HSI datasets and compare its performance with eight representative band selection methods.

4.1. Datasets

We employ three widely used real HSI datasets in our experiments: the Botswana dataset, the Indian Pines dataset, and the Pavia University dataset. These datasets are available online (https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes, accessed on 1 March 2022). Figure 3 shows the pseudo-color images of the three HSI datasets used in our experiments. Table 1 lists the main information about these three datasets.
Figure 3. Pseudo-color images of the datasets used in the experiments. (a) Botswana dataset. (b) Indian Pines dataset. (c) Pavia University dataset.
Table 1. Specific information of the HSI datasets used.
The Botswana dataset was collected by the NASA EO-1 satellite during its overflight of Botswana in 2001. This dataset has a spatial resolution of 30 m and a size of 1476 × 256 pixels. After removing the noisy bands, the remaining 145 bands were employed in our experiments. The study area includes a total of 14 feature classes.
The Indian Pines dataset was acquired by the AVIRIS sensor, covering the wavelength range of 0.4 to 2.5 μm. This dataset includes 16 categories of land cover, and its image has 145 × 145 pixels. In addition, it contains 220 contiguous bands with a spatial resolution of about 20 m. After eliminating 20 absorption bands, 200 bands were used in our experiments.
The Pavia University dataset, acquired by the ROSIS-03 sensor over the University of Pavia, Italy, is a benchmark HSI dataset for urban remote sensing. This dataset contains 115 spectral bands ranging from 430 nm to 860 nm with a spatial resolution of 1.3 m. The image covers 610 × 340 pixels and contains nine land cover classes such as asphalt, meadows, and metal sheets.

4.2. Experimental Setup

We used two classifiers, the Support Vector Machine (SVM) [] and the K-Nearest Neighbor (KNN) [] classifier, to evaluate the band selection performance of our proposed method. Note that the SVM uses a radial basis function kernel, where the penalty parameter $C$ is set to $1 \times 10^5$ and the kernel length scale parameter $\sigma$ is set to 100. The parameter $K$ in the KNN classifier is set to 5. The matrix $A$ in the ICC-SSC algorithm is initialized as an all-zero matrix. In addition, evaluation metrics such as the Overall Accuracy (OA), Average Overall Accuracy (AOA), and Kappa coefficient were used to assess the performance of our method.
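For reproducibility, an equivalent scikit-learn setup might look as follows. This is a sketch: mapping the length scale $\sigma = 100$ to scikit-learn's gamma = 1/(2σ²) is our assumption, and X_train, y_train, X_test, y_test, and the selected band indices are hypothetical names.

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# RBF-kernel SVM with C = 1e5 and length scale sigma = 100.
svm = SVC(kernel="rbf", C=1e5, gamma=1.0 / (2 * 100.0**2))
knn = KNeighborsClassifier(n_neighbors=5)

# Typical usage on the selected bands:
# svm.fit(X_train[:, selected], y_train)
# oa = svm.score(X_test[:, selected], y_test)
```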
To test the performance of the ICC-SSC method, we compared it with eight representative methods, namely, MVPCA [], WaLuDi [], E-FDPC [], DSC [], ASPS [], FNGBS [], GAMR [], and the tensorial global-local graph self-representation (TGSR) method []. Parameter values for all comparison methods were set based on their original studies. During the experiments, the number of selected bands ranged from 5 to 50 at intervals of 5. The average performance over ten runs is reported for all methods. Our experiments were carried out in MATLAB 2016b on a Windows 10 operating system.

4.3. Parameter Setting

ICC-SSC has two regularization parameters, $\lambda$ and $\beta$. To analyze the effect of $\lambda$ and $\beta$ on the classification performance, we performed experiments using the SVM classifier on the three datasets. In our experiments, we analyzed the effect of one parameter while fixing the other. The experimental results are shown in Figure 4. Specifically, Figure 4a shows the effect of the regularization parameter $\lambda$ as its value varies over the interval $[10^{-10}, 10^{-4}]$. From Figure 4a, it can be seen that the OA value is highest when $\lambda$ is set to $1 \times 10^{-9}$ on the Indian Pines dataset. As the value of $\lambda$ increases, the OA value decreases slightly. On the Botswana dataset, the OA value is maximized when $\lambda$ is set to $1 \times 10^{-8}$. When $\lambda > 10^{-8}$, OA decreases significantly, indicating that Botswana is more sensitive to the value of $\lambda$. In contrast, on the Pavia University dataset, the value of $\lambda$ has less effect on OA. In addition, Figure 4b shows the results for different values of the regularization parameter $\beta$ in the range $[10^{-8}, 10^{-2}]$. It is easy to see from Figure 4b that the maximum OA occurs when $\beta$ is set to $1 \times 10^{-2}$ on the Indian Pines and Botswana datasets. However, on the Pavia University dataset, OA achieves its maximum when $\beta$ is chosen as $1 \times 10^{-6}$. When $\beta$ is set to any other value, the OA values are similar and slightly worse. This may be because a too-small value of $\beta$ leaves the consistency constraint ineffective, while a too-large value of $\beta$ makes the consistency constraint too strong, impairing the discriminative ability of the features. Based on this analysis, the regularization parameters $\lambda$ and $\beta$ were configured as follows: $\beta = 1 \times 10^{-2}$ and $\lambda = 1 \times 10^{-9}$ for the Indian Pines dataset, $\beta = 1 \times 10^{-2}$ and $\lambda = 1 \times 10^{-8}$ for the Botswana dataset, and $\beta = 1 \times 10^{-6}$ and $\lambda = 1 \times 10^{-6}$ for the Pavia University dataset.
Figure 4. OA variation of two regularization parameters on three datasets. (a) Regularization parameter λ . (b) Regularization parameter β .

4.4. Experimental Results

4.4.1. Overall Classification Results

Table 2 shows the average AOA and Kappa values obtained by all methods on the three datasets, where the values shown in bold red font indicate the best results. From Table 2, we can see that ICC-SSC achieves the highest classification performance, followed by FNGBS. Specifically, when using the KNN classifier on the Botswana dataset, ICC-SSC slightly outperforms FNGBS, while with the SVM classifier, ICC-SSC achieves significantly better results. When tested on the Indian Pines dataset, ICC-SSC outperforms FNGBS with both classifiers. Overall, ICC-SSC outperformed the other eight methods on all three datasets in terms of the AOA and Kappa metrics. In addition, Figure 5, Figure 6 and Figure 7 display the curves of OA values across 5 to 50 bands, illustrating the results of all band selection methods tested on the different datasets using the SVM and KNN classifiers, respectively.
Table 2. Performance comparison of all methods on three datasets. The best results are shown in red bold font.
Figure 5. OA values on the Botswana dataset in the conditions of the two classifiers and different numbers of selected bands. (a) OA for SVM. (b) OA for KNN.
Figure 6. OA values on the Indian Pines dataset in the conditions of two classifiers and different numbers of selected bands. (a) OA for SVM. (b) OA for KNN.
Figure 7. OA values on the Pavia University dataset in the conditions of two classifiers and different numbers of selected bands. (a) OA for SVM. (b) OA for KNN.
(1) Botswana dataset: Figure 5 presents the OA values for all methods when tested on the Botswana dataset. As seen in Figure 5a, with the SVM classifier, ICC-SSC outperforms the other methods in most cases. Specifically, ICC-SSC performs best when 5 to 35 bands are selected. When selecting 40 bands, ICC-SSC and GAMR provide comparable best performance. At 45 bands, ICC-SSC performs comparably to GAMR and FNGBS but better than the other methods. With 50 bands selected, ICC-SSC has the second-best performance, slightly below that of FNGBS. In addition, Figure 5b presents the results of all the methods with the KNN classifier. We can see that ICC-SSC performs best when 20, 30, and 40 to 50 bands are selected. When selecting five bands, the performance of ICC-SSC and WaLuDi is comparable and better than that of the other methods. Although ICC-SSC performs similarly to WaLuDi and TGSR when selecting 10 bands, it still outperforms the other algorithms. At 25 and 35 bands, ICC-SSC performs similarly to FNGBS and outperforms the other methods. When selecting 15 bands, ICC-SSC ranks second, with its performance falling just below that of WaLuDi. Overall, ICC-SSC offers the best performance on the Botswana dataset.
(2) Indian Pines dataset: Figure 6a,b show the test performance of ICC-SSC and the other eight methods when using the SVM and KNN classifiers, respectively, on the Indian Pines dataset. From Figure 6a, we can see that ICC-SSC performs best when 20, 25, 45, and 50 bands are selected. When using 10 and 15 bands, the OA values of ICC-SSC are comparable to those of FNGBS and TGSR while surpassing those of the other methods. At 30, 35, and 40 bands, ICC-SSC ranks second in performance, with FNGBS achieving the top ranking. When selecting five bands, ICC-SSC achieves second place, while TGSR ranks first. Furthermore, as can be seen from Figure 6b, ICC-SSC performs excellently with the KNN classifier. Specifically, it performs best with 5, 10, 25, and 40 to 50 bands selected. Although ICC-SSC performs similarly to TGSR when selecting 10 and 20 bands, it still outperforms the other algorithms. Although ICC-SSC performs slightly worse than FNGBS at 30 bands, it still outperforms all other methods. Overall, ICC-SSC has the best performance.
(3) Pavia University dataset: Figure 7 shows the OA values obtained by all the methods on the Pavia University dataset. From Figure 7a, it can be seen that ICC-SSC shows the best performance when selecting 5, 15, 20, 25, 35, and 40 bands using the SVM classifier. When choosing 10 bands, the best performance is achieved by ICC-SSC, TGSR, and WaLuDi. At 45 and 50 bands, the performance of ICC-SSC is comparable to FNGBS and TGSR but better than that of the other methods. With 30 bands, ICC-SSC is slightly below FNGBS and TGSR. In addition, Figure 7b presents the results of all the methods with the KNN classifier. As can be seen in the figure, ICC-SSC has the best performance when 5 to 40 bands are selected. When selecting 45 and 50 bands, the performance of the ICC-SSC method is comparable to GAMR but superior to the other methods. Overall, ICC-SSC outperforms the other comparison methods.
In addition, Figure 8, Figure 9 and Figure 10 compare the classification maps produced by ICC-SSC using the two classifiers against the ground truth when 30 bands are selected, providing an intuitive visualization of the quality of the selected bands. Specifically, Figure 8 shows the classification results obtained using the SVM and KNN classifiers on the Botswana dataset compared to the ground truth. Figure 9 presents the corresponding results on the Indian Pines dataset, and Figure 10 displays the corresponding results on the Pavia University dataset. Overall, as can be seen from Figure 8, Figure 9 and Figure 10, the results of the ICC-SSC method using the SVM and KNN classifiers on the three datasets are satisfactory.
Figure 8. Classification maps from ICC-SSC when 30 bands are selected on the Botswana dataset as well as the ground truth. (a) Ground truth. (b) Using SVM. (c) Using KNN.
Figure 9. Classification maps from ICC-SSC when 30 bands are selected on the Indian Pines dataset as well as the ground truth. (a) Ground truth. (b) Using SVM. (c) Using KNN.
Figure 10. Classification maps from ICC-SSC when 30 bands are selected on the Pavia University dataset as well as the ground truth. (a) Ground truth. (b) Using SVM. (c) Using KNN.

4.4.2. Analysis of Per-Class Results

Table 3 lists the detailed performance of ICC-SSC on each class of the Indian Pines dataset, revealing the advantages and specific challenges of agricultural land cover classification. The ICC-SSC method demonstrates excellent performance on several key crop types, achieving outstanding F1-scores for Wheat (0.9415), Grass-trees (0.9535), and Hay-windrowed (0.9571), indicating a reliable discrimination of major agricultural classes. However, the metrics uncover particular difficulties with the Oats class (F1-score: 0.6154), which suffers from low precision (0.5217) despite reasonable recall (0.7500), suggesting issues with false positive detections. Similarly, the Buildings-grass-trees-drives class shows limited performance (F1-score: 0.6690), primarily due to low recall (0.6181), indicating missed detections in this complex mixed-landscape category. The close alignment between macro (0.8182) and weighted (0.8420) F1-scores confirms generally balanced performance across the 16-class agricultural scenario, though the variance in per-class results highlights the challenge of distinguishing spectrally similar crop types at different growth stages.
Table 3. Detailed per-class performance on the Indian Pines dataset.
Table 4 shows the detailed performance of our method on each class of the Pavia University dataset, demonstrating its excellent performance across all categories. As shown in Table 4, our method achieved high F1-scores not only in most categories, such as Meadows (0.9428), but also in challenging minority categories such as Shadows (0.9928). High precision values (eight out of nine categories exceeding 0.85) confirm low false positive rates, while consistently high recall values indicate comprehensive detection capability. Particularly impressive is the almost perfect performance on the Painted metal sheets class (F1-score: 1.0000), demonstrating the outstanding ability of our method to distinguish complex urban materials. In addition, the close agreement between the macro (0.8860) and weighted (0.8964) F1-scores further validates the balanced performance across classes of different sizes.
Table 4. Detailed per-class performance on the Pavia University dataset.

4.5. Convergence Analysis

Figure 11 illustrates the convergence behavior of the proposed ICC-SSC algorithm by plotting the normalized objective function value versus the number of iterations. It is evident that our method exhibits rapid convergence on all three datasets. Specifically, the algorithm converges within 20 iterations on the Indian Pines dataset, and it requires approximately 40 and 45 iterations for the Pavia University and Botswana datasets, respectively. This consistent and efficient convergence across diverse HSI data validates the stability and practicality of the derived ADMM optimization procedure.
Figure 11. Convergence curve of ICC-SSC.

4.6. Algorithm Complexity Analysis

This section analyzes the computational complexity of the proposed method. According to (7), the optimization problem of ICC-SSC comprises three terms: an $\ell_{1,2}$-norm regularization term, a Frobenius-norm fitting term, and the TV regularization term $\|ZG\|_{1,1}$. Under the ADMM framework, the original problem is decomposed into multiple subproblems, whose computational complexities are analyzed as follows.
The subproblem for $A$ is a quadratic problem, whose solution is obtained by solving a linear system. Directly solving this system requires $O(B^3)$ operations, making it the most computationally expensive step per iteration. The subproblem for $V_1$ is also a quadratic problem involving the multiplication of $B \times B$ matrices, with a complexity of $O(B^3)$. In addition, the updates for $Z$ and $V_2$ involve the proximal operators of the $\ell_{1,2}$ norm and the $\ell_{1,1}$ norm, respectively. These are implemented via group soft-thresholding and element-wise soft-thresholding operations, each with a complexity of $O(B^2)$. Moreover, the Lagrange multiplier update is a simple matrix addition with $O(B^2)$ complexity.
In summary, the per-iteration complexity of the algorithm is dominated by the updates of $A$ and $V_1$, each requiring $O(B^3)$ operations. For a preset number of iterations $Iter$, the overall computational complexity of the ADMM solver is $O(Iter \cdot B^3)$. This implies that the computational cost grows cubically with the number of bands $B$, which is a major determinant of the algorithm's scalability and efficiency.

4.7. Discussion

The results shown in Figure 5, Figure 6 and Figure 7 indicate a significant correlation between the number of selected bands and the accuracy of the classifiers. Specifically, the accuracy of the band selection method is positively correlated with the number of selected bands. However, the growth rate of the OA value tends to decrease as the number of selected bands increases. Meanwhile, the experimental results do not demonstrate a linear relationship between the number of selected bands and the classification accuracy, which suggests that there may be a complex interdependence between them, as reported in the literature []. This phenomenon can be explained by the fact that, as the number of bands increases, the redundancy of the information in the selected subset increases correspondingly.
The ICC-SSC method shows important advantages in terms of performance and robustness across classification models. Specifically, our method shows consistent effectiveness on both the SVM and KNN classifiers when the number of selected bands varies between 5 and 50, as shown in Figure 5, Figure 6 and Figure 7. In contrast to other clustering-based methods, ICC-SSC learns a structured sparse representation between bands through self-representation learning while considering the similarity structure between neighboring bands. This enables it to produce effective representations that are more favorable for band selection, confirming the importance of learning a consistent representation of bands for obtaining effective clustering results []. In addition, ICC-SSC benefits from the connectivity-enhancing affinity matrix in (19), which promotes more effective cluster segmentation. As a result, ICC-SSC achieves excellent band selection performance.
Furthermore, this study has some limitations. Specifically, the ICC-SSC method exploits similarity relationships between bands by employing total variation regularization. However, this approach focuses primarily on neighboring bands and thus may not fully capture the overall similarity relationships across all bands. Secondly, the performance of the ICC-SSC model is significantly affected by its three key parameters, namely the regularization parameters $\lambda$ and $\beta$ and the penalty parameter $\rho$; these significantly affect the validity and applicability of our approach, and their effective values may be data-dependent. Therefore, investigating adaptive tuning strategies for these parameters and developing methods that can more effectively exploit the global interband similarity properties of HSIs are important directions for further research.

5. Conclusions

In this paper, we proposed a novel Interband Consistency-Constrained Structural Subspace Clustering (ICC-SSC) approach for hyperspectral band selection. This method is specifically designed to address the data redundancy challenge inherent in hyperspectral sensor data by leveraging two key physical properties: the intrinsic self-representation characteristic of the data and the high correlation among neighboring bands. The core of our contribution lies in its sensor-data-centric design. By employing the $\ell_{1,2}$ norm, our model learns a structured sparse representation that effectively identifies a common set of physically informative basis bands. This ensures that the resulting band groups are not only compact but also spectrally representative and interpretable. Furthermore, the incorporation of TV regularization explicitly enforces smoothness along the spectral dimension, a direct reflection of the continuous sampling nature of hyperspectral sensors. This physics-aware constraint significantly enhances the consistency between adjacent bands, leading to a more physically coherent and stable band grouping structure.
The efficacy of our method was validated through extensive experiments. The results demonstrate that ICC-SSC significantly outperforms state-of-the-art methods, proving its capability in selecting a highly discriminative and non-redundant band subset. This work underscores the critical role of advanced, physics-driven algorithms in maximizing the practical utility of hyperspectral sensors. By transforming high-dimensional, raw sensor output into a compact yet physically meaningful set of bands, our method facilitates more efficient and accurate downstream applications, thereby enhancing the value proposition of hyperspectral sensor technology itself.

Author Contributions

Conceptualization, Z.W. and W.W.; data curation, Z.W.; formal analysis, Z.W. and W.W.; funding acquisition, W.W.; investigation, Z.W. and W.W.; methodology, Z.W. and W.W.; project administration, W.W.; resources, W.W.; software, Z.W.; supervision, W.W.; validation, Z.W.; visualization, Z.W. and W.W.; writing—original draft, Z.W. and W.W.; writing—review and editing, Z.W. and W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Discipline with Strong Characteristics of Liaocheng University—Intelligent Science and Technology under Grant 319462208.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HSI	Hyperspectral remote sensing image
SSC	Sparse subspace clustering
ICC-SSC	Interband consistency-constrained structural subspace clustering
ADMM	Alternating direction method of multipliers
WaLuDi	Ward's linkage strategy using divergence
MVPCA	Maximum-variance principal component analysis
ASPS	Adaptive subspace partition strategy
E-FDPC	Enhanced fast density-peak-based clustering
FNGBS	Fast neighborhood grouping method for hyperspectral band selection
GAMR	Global affinity matrix reconstruction
DSC	Deep subspace clustering method
TGSR	Tensorial global-local graph self-representation
SVM	Support vector machine
KNN	K-nearest neighbor
OA	Overall accuracy
AOA	Average overall accuracy

References

  1. Hu, P.; Liu, X.; Cai, Y.; Cai, Z. Band Selection of Hyperspectral Images Using Multiobjective Optimization-Based Sparse Self-Representation. IEEE Geosci. Remote Sens. Lett. 2019, 16, 452–456. [Google Scholar] [CrossRef]
  2. Kang, X.; Wang, Z.; Duan, P.; Wei, X. The Potential of Hyperspectral Image Classification for Oil Spill Mapping. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5538415. [Google Scholar] [CrossRef]
  3. Yu, C.; Xiao, X.; Gong, B.; Hu, Y.; Song, M.; Yu, H. Distillation-Constrained Prototype Representation Network for Hyperspectral Image Incremental Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5507414. [Google Scholar] [CrossRef]
  4. He, X.; Tang, C.; Liu, X.; Zhang, W.; Sun, K.; Xu, J. Object Detection in Hyperspectral Image via Unified Spectral–Spatial Feature Aggregation. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5521213. [Google Scholar] [CrossRef]
  5. Deng, Y.-J.; Li, H.-C.; Fu, K.; Du, Q.; Emery, W.J. Tensor Low-Rank Discriminant Embedding for Hyperspectral Image Dimensionality Reduction. IEEE Trans. Geosci. Remote Sens. 2018, 56, 7183–7194. [Google Scholar] [CrossRef]
  6. Kumar, B.; Dikshit, O.; Gupta, A.; Singh, M.K. Feature extraction for hyperspectral image classification: A review. Int. J. Remote Sens. 2020, 41, 6248–6287. [Google Scholar] [CrossRef]
  7. Ma, Z.; Yang, B. Spatial–Spectral Hypergraph-Based Unsupervised Band Selection for Hyperspectral Remote Sensing Image. IEEE Sens. J. 2024, 24, 27870–27882. [Google Scholar] [CrossRef]
  8. Sun, W.; Du, Q. Hyperspectral Band Selection: A Review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 118–139. [Google Scholar] [CrossRef]
  9. Baisantry, M.; Sao, A.K.; Shukla, D.P. Discriminative Spectral–Spatial Feature Extraction-Based Band Selection for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5518014. [Google Scholar] [CrossRef]
  10. Shi, J.; Zhang, X.; Liu, X.; Lei, Y.; Jeon, G. Multicriteria semi-supervised hyperspectral band selection based on evolutionary multitask optimization. Knowl.-Based Syst. 2022, 240, 107934. [Google Scholar] [CrossRef]
  11. Yang, H.; Du, Q.; Chen, G. Unsupervised Hyperspectral Band Selection Using Graphics Processing Units. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2011, 4, 660–668. [Google Scholar] [CrossRef]
  12. Chang, C.I.; Du, Q.; Sun, T.L.; Althouse, M. A joint band prioritization and band-decorrelation approach to band selection for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2631–2641. [Google Scholar] [CrossRef]
  13. Wang, Q.; Li, Q.; Li, X. A Fast Neighborhood Grouping Method for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5028–5039. [Google Scholar] [CrossRef]
  14. Xu, B.; Li, X.; Hou, W.; Wang, Y.; Wei, Y. A Similarity-Based Ranking Method for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9585–9599. [Google Scholar] [CrossRef]
  15. Sawant, S.S.; Manoharan, P. Unsupervised band selection based on weighted information entropy and 3D discrete cosine transform for hyperspectral image classification. Int. J. Remote Sens. 2020, 41, 3948–3969. [Google Scholar] [CrossRef]
  16. Wan, Y.; Chen, C.; Ma, A.; Zhang, L.; Gong, X.; Zhong, Y. Adaptive Multistrategy Particle Swarm Optimization for Hyperspectral Remote Sensing Image Band Selection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5520115. [Google Scholar] [CrossRef]
  17. Wu, M.; Ou, X.; Lu, Y.; Li, W.; Yu, D.; Liu, Z.; Ji, C. Heterogeneous Cuckoo Search-Based Unsupervised Band Selection for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 62, 5500616. [Google Scholar] [CrossRef]
  18. Su, H.; Du, Q.; Chen, G.; Du, P. Optimized Hyperspectral Band Selection Using Particle Swarm Optimization. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2659–2670. [Google Scholar] [CrossRef]
  19. He, C.; Zhang, Y.; Gong, D.; Song, X.; Sun, X. A Multitask Bee Colony Band Selection Algorithm with Variable-Size Clustering for Hyperspectral Images. IEEE Trans. Evol. Comput. 2022, 26, 1566–1580. [Google Scholar] [CrossRef]
  20. Jia, S.; Tang, G.; Zhu, J.; Li, Q. A Novel Ranking-Based Clustering Approach for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2016, 54, 88–102. [Google Scholar] [CrossRef]
  21. Martínez-Usó, A.; Pla, F.; Sotoca, J.M.; García-Sevilla, P. Clustering-Based Hyperspectral Band Selection Using Information Measures. IEEE Trans. Geosci. Remote Sens. 2007, 45, 4158–4171. [Google Scholar] [CrossRef]
  22. Wang, Q.; Li, Q.; Li, X. Hyperspectral Band Selection via Adaptive Subspace Partition Strategy. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 4940–4950. [Google Scholar] [CrossRef]
  23. Yang, J.; Zhao, Y.-Q.; Chan, J.C.-W. Learning and Transferring Deep Joint Spectral–Spatial Features for Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742. [Google Scholar] [CrossRef]
  24. Zeng, M.; Cai, Y.; Cai, Z.; Liu, X.; Hu, P.; Ku, J. Unsupervised Hyperspectral Image Band Selection Based on Deep Subspace Clustering. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1889–1893. [Google Scholar] [CrossRef]
  25. Das, S.; Pratiher, S.; Kyal, C.; Ghamisi, P. Sparsity Regularized Deep Subspace Clustering for Multicriterion-Based Hyperspectral Band Selection. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2022, 15, 4264–4278. [Google Scholar] [CrossRef]
  26. Elhamifar, E.; Vidal, R. Sparse Subspace Clustering: Algorithm, Theory, and Applications. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2765–2781. [Google Scholar] [CrossRef]
  27. Sun, W.; Zhang, L.; Du, B.; Li, W.; Lai, Y.M. Band Selection Using Improved Sparse Subspace Clustering for Hyperspectral Imagery Classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 2784–2797. [Google Scholar] [CrossRef]
  28. Huang, S.; Zhang, H.; Pižurica, A. A Structural Subspace Clustering Approach for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5509515. [Google Scholar] [CrossRef]
  29. Cai, Y.; Zhang, Z.; Liu, X.; Cai, Z. Efficient Graph Convolutional Self-Representation for Band Selection of Hyperspectral Image. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2020, 13, 4869–4880. [Google Scholar] [CrossRef]
  30. You, M.; Yuan, A.; Zou, M.; Konno, K. Robust Unsupervised Hyperspectral Band Selection via Global Affinity Matrix Reconstruction. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2023, 16, 7374–7384. [Google Scholar] [CrossRef]
  31. Zhu, P.; Zuo, W.; Zhang, L.; Hu, Q.; Shiu, S.C.K. Unsupervised feature selection by regularized self-representation. Pattern Recognit. 2015, 48, 438–446. [Google Scholar] [CrossRef]
  32. Dong, W.; Wu, X.J.; Kittler, J. Subspace clustering via joint $\ell_{1,2}$ and $\ell_{2,1}$ norms. Inf. Sci. 2022, 612, 675–686. [Google Scholar] [CrossRef]
  33. Sun, W.; Peng, J.; Yang, G.; Du, Q. Fast and Latent Low-Rank Subspace Clustering for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3906–3915. [Google Scholar] [CrossRef]
  34. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Total Variation Spatial Regularization for Sparse Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4484–4502. [Google Scholar] [CrossRef]
  35. Liu, G.; Lin, Z.; Yan, S.; Sun, J.; Yu, Y.; Ma, Y. Robust Recovery of Subspace Structures by Low-Rank Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 172–184. [Google Scholar] [CrossRef]
  36. Zhai, H.; Zhang, H.; Zhang, L.; Li, P. Squaring weighted low-rank subspace clustering for hyperspectral image band selection. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 2434–2437. [Google Scholar]
  37. Zhang, Z.; Wang, D.; Sun, X.; Zhuang, L.; Liu, R.; Ni, L. Spatial Sampling and Grouping Information Entropy Strategy Based on Kernel Fuzzy C-Means Clustering Method for Hyperspectral Band Selection. Remote Sens. 2022, 14, 5058. [Google Scholar] [CrossRef]
  38. Peng, Z.; Jia, Y.; Liu, H.; Hou, J.; Zhang, Q. Maximum Entropy Subspace Clustering Network. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 2199–2210. [Google Scholar] [CrossRef]
  39. Zhang, M.; Ma, J.; Gong, M. Unsupervised Hyperspectral Band Selection by Fuzzy Clustering With Particle Swarm Optimization. IEEE Geosci. Remote Sens. Lett. 2017, 14, 773–777. [Google Scholar] [CrossRef]
  40. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  41. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  42. Zhang, Y.; Qi, J.; Wang, X.; Cai, Z.; Peng, J.; Zhou, Y. Tensorial Global-Local Graph Self-Representation for Hyperspectral Band Selection. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 13213–13225. [Google Scholar] [CrossRef]
  43. Yuan, Y.; Lin, J.; Wang, Q. Dual-Clustering-Based Hyperspectral Band Selection by Contextual Analysis. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1431–1445. [Google Scholar] [CrossRef]
  44. Ning, S.; Wang, W. Global-Local Consistency Constrained Deep Embedded Clustering for Hyperspectral Band Selection. IEEE Access 2023, 11, 129709–129721. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
