Article

Refining Filter Global Feature Weighting for Fully Unsupervised Clustering

1 Department of Computer Science, West University of Timisoara, 300223 Timișoara, Romania
2 Department of Computers, Politehnica University of Timisoara, 300006 Timișoara, Romania
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(16), 9072; https://doi.org/10.3390/app15169072
Submission received: 26 June 2025 / Revised: 8 August 2025 / Accepted: 14 August 2025 / Published: 18 August 2025
(This article belongs to the Special Issue Innovations in Artificial Neural Network Applications)


Featured Application

Our novel feature weighting strategies offer a practical tool for improving the clustering accuracy of unlabeled data, relevant in domains like bioinformatics, marketing segmentation, and anomaly detection.

Abstract

In the context of unsupervised learning, effective clustering plays a vital role in revealing patterns and insights from unlabeled data. However, the success of clustering algorithms often depends on the relevance and contribution of features, which can differ between various datasets. This paper explores feature weighting for clustering and presents new weighting strategies, including methods based on SHAP (SHapley Additive exPlanations), a technique commonly used to provide explainability in supervised machine learning tasks. Rather than using SHAP values solely for explainability, we use them to weight features and thereby improve the clustering process itself in unsupervised scenarios. Our empirical evaluations across five benchmark datasets and multiple clustering methods demonstrate that SHAP-based feature weighting can enhance unsupervised clustering quality, achieving up to a 22.69% improvement over other weighting methods (from 0.586 to 0.719 in terms of the Adjusted Rand Index). Additionally, the situations in which the weighted data boosts the results are highlighted and thoroughly explored, offering insights for practical applications.

1. Introduction

Clustering is a fundamental task in unsupervised learning that aims to group unlabeled data into meaningful subgroups (clusters) based on similarity measures. It has been applied extensively in a broad range of fields such as image segmentation, customer profiling, and bioinformatics, serving as a primary technique to discover hidden structures in data without relying on labels or annotations [1,2]. Numerous clustering algorithms have been developed; some of the most well-known include k-means, hierarchical clustering (e.g., Ward’s method), and density-based algorithms such as DBSCAN [3,4]. Despite their widespread use and versatility, the effectiveness of clustering algorithms often depends heavily on the selected feature representation and dataset, which can substantially influence how similarities or distances among data points are measured [1,2].
Feature weighting (FW) has become an effective approach to mitigating the impact of irrelevant or less informative features on clustering algorithms. Rather than assigning equal importance to all features, global FW methods allocate specific weights to features according to their relevance to the clustering objective. Generally, FW techniques can be categorized into two main types based on their strategies for estimating these weights:
1. Filter FW methods determine weights based on the relationship between the features and a specified reference, which, in an unsupervised scenario, corresponds to the intrinsic characteristics of the data [5].
2. Wrapper FW methods use feedback from a given machine learning algorithm to estimate weights in an iterative, black-box manner. Based on the performance achieved in the previous iteration, computed with either supervised or unsupervised evaluation metrics, the method decides whether to adjust the weights to improve the model's performance in the next iteration [5].
Although FW itself offers a degree of interpretability regarding feature importance, dedicated XAI (explainable artificial intelligence) techniques have been developed for this purpose. For example, SHAP (SHapley Additive exPlanations) decomposes individual predictions into feature-level contributions by using concepts derived from cooperative game theory [6,7]. SHAP has primarily been used to interpret and explain model outputs by attributing a SHAP value to each feature, which indicates the feature's contribution to the final prediction relative to a baseline.
In this paper, we present a novel perspective on SHAP by integrating it as a new FW approach in clustering. Instead of relying on model-specific interpretations, we leverage the core concept behind SHAP values, the quantification of each feature's contribution, to assign data-driven weights that emphasize feature relevance within an unsupervised context. This adaptation of SHAP diverges from its traditional supervised applications and is fundamentally different from existing FW methods that rely on internal clustering metrics or heuristics.
In addition to using stand-alone SHAP values as weights, we combine them as an ensemble with other known FW methods. These combined approaches can surpass the performance of individual methods and even improve the overall effectiveness of unsupervised clustering algorithms.

2. Related Work

The use of FW in clustering has been studied extensively. Early work primarily focused on modifying existing clustering algorithms to assign and update feature weights during the clustering process. For instance, in weighted k-means, each feature is given a weight that is adapted iteratively to minimize within-cluster variance, aiming to place higher emphasis on features that are more discriminative [8].
Similarly, Ward’s method [4], originally introduced for hierarchical clustering, has been extended to account for feature-specific weights (sometimes referred to as Ward variants) by adjusting the distance measure used in building the hierarchy, such as the Minkowski distance [9].
Other works have tackled FW by separating it from the clustering procedure itself (i.e., using filter approaches). Filter-based methods rely on statistical tests or correlations to rank features based on their intrinsic properties, which can be representative of potential cluster structures [5]. In [10], a method called “K-means Clustering-based Feature Weighting” was proposed. It first extracts features from the frequency domain and computes their mean, minimum, maximum, and standard deviation as statistical measurements. In the next stage, the k-means algorithm groups these features, and the average values of the features relative to the resulting cluster centers are used as weights. Gürüler [11] proposes a hybrid system that integrates a complex-valued artificial neural network with a feature weighting technique based on k-means clustering. The clustering method assigns importance to input features by analyzing their distribution and separability across clusters, which improves the network’s ability to discriminate between Parkinson’s and non-Parkinson’s cases. This method also demonstrates how unsupervised clustering can be effectively combined with neural network models to enhance feature relevance and classification performance.
While these filter FW approaches can be computationally efficient and widely applicable, they do not take into account the behavior of a specific clustering algorithm. In [5], an extensive classification of FW research works is presented; it is stated that the global filter FW approach in unsupervised learning is not a commonly employed method and that few works are encountered in the literature.
In recent years, XAI techniques such as SHAP have transformed how researchers interpret model decisions [6,7]. SHAP decomposes predictions into additive feature contributions, making it possible to explain complex models in a manner consistent with game-theoretic axioms. Although SHAP has primarily been used in supervised learning settings (classification and regression), its underlying principle—quantifying each feature’s marginal contribution—is promising for providing FW in unsupervised learning. A few studies have started to explore combining XAI methods with cluster analysis, mostly for model interpretability or cluster labeling [12,13]. However, leveraging SHAP directly to derive feature weights that improve clustering outcomes remains largely unexplored territory.
Our work bridges this gap by introducing a SHAP-based global filter FW approach specifically tailored for clustering. By integrating the core ideas of SHAP values into the feature selection and weighting process, we aim to produce meaningful weights that not only enhance clustering performance but also provide what SHAP was meant to offer initially, i.e., feature importance.

3. Materials and Methods

Our primary objective is to integrate the numerical values derived from SHAP into the FW process for clustering tasks. SHAP typically provides a measure of each feature’s marginal contribution to a predictive model’s output in a supervised context. We adapt this concept to unsupervised tasks by training a surrogate predictive model (e.g., a classification model derived from the pseudo-labels) on an initial prediction $Y_0$ made by a clustering algorithm.
Let $X \in \mathbb{R}^{n \times d}$ be the data matrix, and let a clustering algorithm $\mathcal{C}$ produce an initial prediction $Y_0$. To temper dependence on the quality of $Y_0$, we perform multiple random restarts and select a well-initialized $Y_0$ in a single-pass scheme. In preliminary runs, weight rankings were stable across restarts, and overall performance varied only within baseline variability. For clarity and reproducibility, we therefore proceed with one representative $Y_0$.
We then train a random forest classifier $f$ [14] to predict $Y_0$ from $X$. On tabular data, forests capture non-linearities and interactions without extensive tuning and enable the use of TreeSHAP. We compute SHAP values with TreeExplainer [7], which yields exact, polynomial-time Shapley values for tree ensembles and avoids the variance and background-set sensitivity of model-agnostic approximations.
TreeSHAP returns per-sample, per-class attributions $\phi_{i,k}(x_j; f)$ for feature $i \in \{1, \ldots, d\}$, class $k \in \{1, \ldots, K\}$, and sample $j \in \{1, \ldots, n\}$. We aggregate to a single score per feature via the mean absolute SHAP value [6]:
$$s_i = \frac{1}{nK} \sum_{j=1}^{n} \sum_{k=1}^{K} \left| \phi_{i,k}(x_j; f) \right|, \qquad w_i^{\mathrm{SHAP}} = \frac{s_i}{\sum_{p=1}^{d} s_p}.$$
Taking absolute values focuses on magnitude rather than direction and is a common and empirically supported practice for feature importance with SHAP [7]. The resulting normalized weight vector $W = (w_1^{\mathrm{SHAP}}, \ldots, w_d^{\mathrm{SHAP}})$ is then used to rescale the features column-wise:
$$\tilde{X} = X \cdot \operatorname{diag}(W).$$
We reapply $\mathcal{C}$ to $\tilde{X}$ to obtain predictions $Y$. The entire process is summarized visually in Figure 1. Although the weighting can be iterated in a wrapper-like fashion by recomputing SHAP on $Y$ until a stopping rule is met, in our experiments we use a single iteration to isolate the causal effect of SHAP-derived weights, preserve comparability across algorithms, and maintain computational efficiency. This also avoids potential bias from repeatedly evaluating on the same pseudo-labels.
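As an illustration, the following is a minimal sketch of this single-pass pipeline using scikit-learn and the shap package; the function and variable names are ours, k-means stands in for the generic clustering algorithm $\mathcal{C}$, and the snippet is a simplified outline rather than the exact experimental code.

```python
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def shap_feature_weights(X, n_clusters, random_state=0):
    """Normalized mean-absolute-SHAP feature weights derived from pseudo-labels."""
    # Step 1: initial clustering to obtain the pseudo-labels Y0.
    y0 = KMeans(n_clusters=n_clusters, random_state=random_state).fit_predict(X)

    # Step 2: surrogate random forest trained to predict the pseudo-labels.
    forest = RandomForestClassifier(random_state=random_state).fit(X, y0)

    # Step 3: exact TreeSHAP attributions (per sample, per feature, per class).
    phi = shap.TreeExplainer(forest).shap_values(X)
    if isinstance(phi, list):            # older shap versions: one (n, d) array per class
        phi = np.stack(phi, axis=-1)     # -> (n, d, K)
    phi = np.asarray(phi).reshape(len(X), X.shape[1], -1)

    # Step 4: mean absolute SHAP per feature, normalized to sum to one.
    s = np.abs(phi).mean(axis=(0, 2))
    return s / s.sum()

def shap_weighted_clustering(X, n_clusters, random_state=0):
    """Rescale the features by the SHAP weights and re-run the clustering."""
    w = shap_feature_weights(X, n_clusters, random_state)
    X_weighted = X * w                   # column-wise rescaling, i.e. X · diag(W)
    labels = KMeans(n_clusters=n_clusters, random_state=random_state).fit_predict(X_weighted)
    return labels, w
```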
The resulting SHAP-based weights can then be combined with traditional FW strategies in an ensemble by multiplying the weights and applying them to the data. Multiplication amplifies consensus (features valued by both strategies) and suppresses disagreement (features valued by only one), which makes the metric space sharper along globally discriminative axes. This ensemble weighting strategy aims to retain each method’s strengths and even surpass the performance of the stand-alone method.
Let $W^{(g)} \in \mathbb{R}_+^{d}$ be the normalized weight vector obtained from another FW strategy $g$, with $\sum_{i} w_i^{(g)} = 1$. We combine SHAP and $g$ by elementwise multiplication followed by renormalization:
$$w_i^{\mathrm{ens}} = \frac{w_i^{(\mathrm{SHAP})}\, w_i^{(g)}}{\sum_{p=1}^{d} w_p^{(\mathrm{SHAP})}\, w_p^{(g)}}, \qquad i = 1, \ldots, d.$$
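In code, the ensemble step reduces to an elementwise product followed by renormalization; the sketch below assumes the two normalized weight vectors have already been computed (names are illustrative).

```python
import numpy as np

def ensemble_weights(w_shap, w_other):
    """Elementwise product of two normalized weight vectors, renormalized to sum to one."""
    w = np.asarray(w_shap) * np.asarray(w_other)
    return w / w.sum()

# X_weighted = X * ensemble_weights(w_shap, w_other)  # applied exactly as the single-method weights
```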
We integrate and evaluate SHAP as an FW method alongside several other FW or feature selection methods adapted as FW, ensuring that the selected approaches are as varied as possible in terms of their underlying principles, methodologies, and mathematical foundations:
1. Minkowski distance ($L_p$ norm). Inspired by [9], this approach generalizes both the Euclidean distance ($p = 2$) and the Manhattan distance ($p = 1$) between two points.
2. Minimum Redundancy Maximum Relevance (mRMR). A feature selection method adapted for FW that maximizes the relevance of selected features to the target variable (or pseudo-label in unsupervised cases) while minimizing redundancy among features [15].
3. Principal Component Analysis (PCA). A technique mainly used for feature selection and dimensionality reduction. We adapt the principal component loadings as a proxy for feature importance; larger loadings suggest a stronger influence on the principal components [16].
4. One-way analysis of variance (F-test statistic). Often used to compare statistical models, it can be adapted to act as an FW method. It is the ratio of two scaled sums of squares reflecting different sources of variability [17].
5. t-Distributed Stochastic Neighbor Embedding (t-SNE). An unsupervised non-linear dimensionality reduction technique that embeds high-dimensional points in a low-dimensional space in a way that respects similarities between points [18].
In order to run experiments for evaluating the performance of the FW strategies, we employed a hosted T4 GPU provided by Google Colab. For acquiring datasets and implementing algorithms, as well as for using common evaluation metrics, we used the open-source library scikit-learn [19], along with datasets from the UCI repository [20]. All clustering algorithms were used with their default parameters as specified in scikit-learn.
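As an example of how some of these baseline strategies can be turned into global feature weights with scikit-learn, the sketch below derives normalized weights from the F-test statistic (against pseudo-labels) and from PCA loadings; this is one plausible realization under our assumptions, not necessarily the exact implementation used in the experiments.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import f_classif

def f_test_weights(X, pseudo_labels):
    """Normalized weights from the one-way ANOVA F-statistic computed against pseudo-labels."""
    scores, _ = f_classif(X, pseudo_labels)
    scores = np.nan_to_num(scores, nan=0.0)   # near-constant features can yield NaN scores
    return scores / scores.sum()

def pca_loading_weights(X, n_components=2):
    """Normalized weights from aggregated absolute PCA loadings."""
    pca = PCA(n_components=n_components).fit(X)
    loadings = np.abs(pca.components_).sum(axis=0)  # per-feature contribution to the components
    return loadings / loadings.sum()
```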

3.1. Datasets

We conduct experiments on well-known datasets in the field of machine learning, which can be adapted easily for clustering by ignoring the target feature during the clustering process:
1. Diabetes (Pima) dataset: Contains 768 samples across 8 features that describe patients at risk of diabetes, with the objective of predicting the presence of the disease [21].
2. Wine recognition dataset: Consists of 178 samples characterized by 13 chemical analysis features of wines derived from three different cultivars, resulting in three classes [22].
3. Breast Cancer Wisconsin (diagnostic) dataset: Comprises 569 samples with 30 features describing tumor cells from clinical samples labeled as benign or malignant, resulting in 2 classes [23].
4. Optical recognition of handwritten digits dataset: Contains 1797 images of hand-written digits, resulting in 10 classes where each class refers to a digit [24].
5. Vehicle Silhouettes: Contains 946 instances for classifying a given vehicle as one of four types, using a set of 18 features extracted from its silhouette [25].
Although class labels exist in the datasets, we utilize them only at the evaluation stage to compute external clustering metrics. The clustering itself remains unsupervised.

3.2. Clustering Algorithms

Four common clustering algorithms are employed (a minimal usage sketch follows the list):
1. k-means, a centroid-based algorithm that partitions data into k clusters by minimizing within-cluster variance [1];
2. Hierarchical clustering (Ward’s method), a bottom-up approach that successively merges clusters to minimize the increase in sum-of-squares [4];
3. Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN), a density-based clustering algorithm that can handle varying densities, forming a hierarchical tree of possible clusters and extracting stable subclusters [26];
4. Gaussian Mixture Models (GMM), a model-based technique assuming data are generated from a finite mixture of Gaussians, optimized via the Expectation–Maximization (EM) algorithm [2].
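The sketch below shows how the four algorithms could be instantiated with their scikit-learn defaults (the HDBSCAN import assumes scikit-learn 1.3 or newer); it is illustrative rather than the verbatim experimental code.

```python
from sklearn.cluster import KMeans, AgglomerativeClustering, HDBSCAN
from sklearn.mixture import GaussianMixture

def run_clusterers(X, k):
    """Predicted labels of the four clustering algorithms on (possibly weighted) data X."""
    return {
        "k-means": KMeans(n_clusters=k).fit_predict(X),
        "ward": AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(X),
        "hdbscan": HDBSCAN().fit_predict(X),  # the number of clusters is determined by the algorithm
        "gmm": GaussianMixture(n_components=k).fit_predict(X),
    }
```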

3.3. Evaluation Metrics

We evaluate clustering performance using four metrics that together provide a balanced evaluation from both external and internal perspectives:
1. Adjusted Rand Index (ARI) measures the similarity between the predicted clusters and ground-truth labels, adjusting for chance [27];
2. Silhouette Score quantifies how similar samples are to their own cluster compared to samples from other clusters [28];
3. Normalized Mutual Information (NMI) evaluates the amount of mutual information between cluster assignments and ground-truth classes, normalized to the range [0, 1] [5];
4. Calinski–Harabasz Index (CH), also called the Variance Ratio Criterion, assesses the ratio of between-cluster dispersion to within-cluster dispersion [29].
While both ARI and NMI are external indices, ARI penalizes splits and merges more rigorously, whereas NMI is more tolerant of proportional splits. In contrast, Silhouette and CH are internal indices that evaluate the geometric properties of clusters. The Silhouette score emphasizes compact and well-separated clusters on a local level, while CH favors clusters with high between-cluster to within-cluster variance ratios, often preferring larger, more spherical, and balanced cluster structures.
Thus, a solution may achieve a high score on Silhouette or CH yet score low on ARI or NMI if it is geometrically well-organized but semantically misaligned. ARI can decrease significantly under over-segmentation, while NMI may remain moderate. Additionally, Silhouette and CH may yield differing evaluations due to factors such as cluster shape, outliers, class imbalance, and distance scaling. In practical terms, the results can be summarized as a trade-off between semantic fidelity and geometric separability.
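For completeness, a minimal sketch of how the four metrics can be computed with scikit-learn is shown below; the function name is our own.

```python
from sklearn.metrics import (adjusted_rand_score, calinski_harabasz_score,
                             normalized_mutual_info_score, silhouette_score)

def evaluate_clustering(X, labels_pred, labels_true):
    """Two external (ARI, NMI) and two internal (Silhouette, CH) clustering indices."""
    return {
        "ARI": adjusted_rand_score(labels_true, labels_pred),
        "NMI": normalized_mutual_info_score(labels_true, labels_pred),
        "Silhouette": silhouette_score(X, labels_pred),
        "CH": calinski_harabasz_score(X, labels_pred),
    }
```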

4. Results

To present our results, we plot in Figure 2, Figure 3 and Figure 4 the notable situations where SHAP-based FW exceeds the other FW methods. All results are presented in Appendix A.1, Appendix A.2, Appendix A.3, Appendix A.4 and Appendix A.5, with each notable result related to SHAP highlighted. Rows of the result tables that list multiple FW methods denote an FW ensemble (multiplicative combination).

4.1. Diabetes (Pima) Dataset

For the diabetes dataset, as seen in Appendix A.1, the ensemble methods, specifically SHAP paired with t-SNE and with F-test, consistently offer gains in internal cluster quality (Silhouette, CH) and, in some cases, external metrics (ARI, NMI).
In the case of the k-means algorithm, a noteworthy observation is that the unweighted dataset yields an ARI of 0.101, which, although relatively low, exceeds the performance of all of the FW approaches.
For hierarchical clustering, SHAP alone increases the ARI from 0.102 (unweighted) to 0.108 and greatly improves the Silhouette score (from 0.157 to 0.444). However, the ensemble combining SHAP with t-SNE jumps to an ARI of 0.194 and shows the highest NMI (0.113) among the FW approaches, indicating that pairing SHAP with dimensionality reduction can better isolate the underlying cluster structure. Other ensembles also improve over the baseline, but SHAP combined with t-SNE seems to be the most effective in terms of external metrics.
The ARI and NMI values of HDBSCAN remain quite low. In contrast, internal metrics like the Silhouette and especially the CH index show dramatic improvements, especially in the case of F-test and its SHAP ensemble. This discrepancy suggests that although some methods form very distinct clusters (as per the internal measures), these clusters do not align as well with the true class labels.
SHAP-based weighting alone does not uniformly improve the ARI, especially for k-means and GMM, where the unweighted data or other techniques (e.g., the F-test) sometimes perform better.
Overall, these findings indicate that while SHAP delivers satisfactory performance, its combination with other strategies can further enhance the quality of clustering.

4.2. Wine Dataset

SHAP-based weighting consistently performs well across k-means and GMM. Hierarchical clustering sees a stark improvement with the L p method alone, as shown in Appendix A.2.
Regarding k-means, the ensembles involving SHAP do not strongly surpass the single SHAP approach; in fact, the ensembles have slightly lower ARIs compared to SHAP alone. This might indicate that SHAP’s weighting is already well aligned with the relevant wine features.
When it comes to hierarchical clustering, SHAP alone has an ARI of 0.601, while SHAP combined with PCA jumps to 0.832 in ARI and 0.820 in NMI. This improvement suggests that combining SHAP weights with PCA loadings can better discriminate hierarchical clusters.
In the context of HDBSCAN, mRMR stands out, with a high ARI (0.434) and the best Silhouette among single methods (0.301), and SHAP alone performs similarly (0.440 ARI, 0.233 Silhouette), as seen in Figure 3. The ensemble methods do not yield substantial improvements here in ARI, but do improve Silhouette.
When applying GMM, SHAP has an ARI of 0.947 and an NMI of 0.928, which is near the top. The highest ARI across all strategies is from SHAP alone (0.947), while the ensembles of SHAP– L p and SHAP–PCA remain close but slightly lower in ARI.

4.3. Breast Cancer Wisconsin Dataset

As seen in Appendix A.3, SHAP and mRMR appear to be the two most reliable strategies across k-means, hierarchical clustering, and GMM. Combining the two sometimes helps (particularly with hierarchical clustering) but can harm GMM performance. The F-test systematically shows high internal metrics (Silhouette, CH) but fails to align well with the external metrics.
When applying k-means, SHAP performs reasonably well (0.659 ARI, 0.548 NMI). Notably, the SHAP–mRMR ensemble yields a high ARI of 0.671 and a high Silhouette score of 0.588, suggesting a beneficial combination.
Regarding hierarchical clustering, SHAP performs the best out of the tested methods in terms of ARI (0.719) and NMI (0.599). Moreover, SHAP combined with mRMR stands out with a high ARI of 0.694 and the highest CH (1029.995), indicative of well-separated clusters.
In the context of HDBSCAN, it is notable that the SHAP– L p ensemble performs the best in terms of ARI (0.269) and CH (72.904), as observed in Figure 4. Another ensemble, SHAP–F-test, provides the highest Silhouette score (0.353).
Applying GMM, SHAP leads with both ARI (0.793) and NMI (0.682). Just like in the previous algorithm’s case, the SHAP–F-test ensemble has the highest Silhouette score, 0.634.

4.4. Digits Dataset

As observed in Appendix A.4, mRMR tends to dominate k-means and HDBSCAN with higher ARI, while SHAP excels for hierarchical clustering.
As with the other datasets, the F-test yields extremely high internal metrics (Silhouette and CH) but very low external agreement (ARI and NMI), suggesting that internal compactness can be misleading when the underlying label distribution is complex.
As this dataset contains 10 clusters, we opted to omit t-SNE, since the number of embedding components must be fewer than 4 for the underlying Barnes–Hut algorithm of t-SNE to run efficiently.
While combining SHAP weights with other filter/feature selection techniques yielded synergy on some datasets (e.g., Wine), it did not consistently improve the digit clustering. Instead, single-method approaches (SHAP or mRMR) frequently performed better, indicating that synergy benefits highly depend on the data distribution.

4.5. Vehicle Silhouette Dataset

For this dataset, t-SNE was excluded again due to the high number of clusters. The results in Appendix A.5 demonstrate that SHAP markedly improves clustering performance across most of the clustering algorithms. Compared to the unweighted dataset, SHAP consistently boosts the ARI and the NMI.
For example, in the case of k-means, the unweighted ARI is 0.075, while using SHAP increases it to 0.096. Similar trends are observed for other metrics.
The SHAP–$L_p$ ensemble, in particular, yields the best overall performance, with an ARI of 0.122 and an NMI of 0.168 for k-means, an ARI of 0.133 and an NMI of 0.214 for hierarchical clustering, and an ARI of 0.168 and an NMI of 0.334 for the GMM algorithm.
This synergy further emphasizes SHAP’s ability to capture nuanced, potentially non-linear feature importance information, which complements the strengths of the L p norm.

5. Discussion

In general, SHAP alone often delivers competitive ARI and NMI values across all of the datasets used, and in some cases surpasses the other methods. The synergy with other methods (most notably SHAP–mRMR and SHAP–$L_p$) can improve results for certain algorithms, but it can also perform slightly worse than the stand-alone FW methods.
Empirical patterns show that SHAP and its ensemble counterparts perform well when the dataset exhibits globally informative structure. For example, in the Wine and Breast Cancer datasets, SHAP-based ensembles significantly improved both internal and external clustering metrics. These datasets are characterized by a small number of dominant axes that are useful for separating all classes. In such settings, the multiplicative fusion sharpens separation along relevant dimensions without excluding important complementary cues, aligning geometric separability with semantic consistency.
However, not all datasets benefit equally. In the Diabetes and Vehicle datasets, performance gains were limited or inconsistent. These datasets tend to have a more heterogeneous structure, where different clusters rely on distinct subsets of features. In such cases, multiplying SHAP with another score can inadvertently suppress features critical for specific clusters, especially if those features are not globally ranked highly by both sources. As a result, internal metrics such as Silhouette or CH may improve due to better compactness and separation, while external indices like ARI or NMI may stagnate or decline due to a misalignment with ground-truth labels.
Another recurring phenomenon is the emergence of inflated internal scores alongside degraded semantic alignment in high-dimensional or imbalanced datasets. This is particularly evident in the Digits dataset, where certain combinations of weighting strategies (e.g., SHAP with F-test) led to near-perfect internal metrics but low external agreement. These results reflect overconcentration of weights on a small number of features, distorting distance computations and inducing artificial separation that is not semantically meaningful.
The observed variability in ensemble performance also suggests that the geometric properties of the data interact with the weighting scheme. For example, datasets with high feature redundancy or collinearity may suffer when SHAP is combined with redundancy-penalizing methods like mRMR. While mRMR is designed to suppress collinear features, doing so in conjunction with SHAP can lead to excessive pruning of informative dimensions, particularly in datasets with curved or manifold structure.
In terms of which algorithms are susceptible to improvement by SHAP alone, the situation is dataset-dependent:
  • Diabetes dataset: Hierarchical clustering, HDBSCAN, and GMM;
  • Wine dataset: k-means, HDBSCAN, and GMM;
  • Breast Cancer dataset: Hierarchical clustering and GMM;
  • Digits dataset: Hierarchical clustering;
  • Vehicle Silhouette dataset: k-means, hierarchical clustering, and GMM.
As a practical implication, our findings confirm that no single weighting strategy universally dominates. Rather, performance depends on the synergy between the dataset characteristics, the clustering algorithm, and the metric used. With this in mind, it can be observed that SHAP does not underperform, nor does it perform best under one single configuration. Therefore, practitioners can leverage SHAP as a general-purpose, reliable, and versatile FW method while simultaneously gaining insights into each feature’s contribution.

6. Conclusions

In this paper, we presented feature weighting approaches for clustering, motivated by the need to identify and weight the most informative features during unsupervised learning in a new way. By adapting SHAP—originally designed for supervised settings—we leveraged SHAP values as a principled way to estimate each feature’s contribution in distinguishing pseudo-clusters. Our proposed method was systematically compared against other FW strategies, namely L p , mRMR, PCA, F-test, and t-SNE, both as stand-alone techniques and in combination with SHAP through ensemble weighting.
Experimental results on standard datasets (Diabetes, Wine, Breast Cancer, Digits, and Vehicle Silhouette) and four clustering algorithms (k-means, hierarchical clustering, HDBSCAN, and GMM) demonstrated that SHAP-based feature weighting frequently provides competitive performance, often approaching or outperforming established methods with respect to external clustering metrics such as ARI and NMI, especially for data suited to binary clustering, such as the Breast Cancer dataset. Moreover, in certain scenarios, especially for density-based or hierarchical approaches, combining SHAP with other methods (e.g., SHAP–mRMR or SHAP–$L_p$) proved beneficial in improving cluster separability, as reflected by internal metrics such as Silhouette and CH on relatively well-separated clusters, for example, the Wine dataset. Nonetheless, we observed that these benefits remain dataset- and algorithm-dependent, although the approach performs well enough overall to be considered general-purpose and reliable.
Despite the promising results, limitations exist. First, deriving SHAP values for clustering involves building a pseudo-supervised setup on unlabeled data (training a model on generated labels), which can increase the computational overhead for large datasets. Furthermore, SHAP-based weighting relies on how accurately the pseudo-labels approximate the underlying cluster structure; if the surrogate model poorly reflects the natural groupings or if the pseudo-labeling process is unstable, the resulting weights may not be optimal. Additionally, while our experiments included multiple well-known datasets, testing on other domains or on signal processing data [30,31] could further validate robustness and reveal additional edge cases. By addressing these directions, we aim to strengthen the theoretical foundations of SHAP-inspired feature weighting in unsupervised learning and to broaden its utility beyond a tool used purely for explainability.

Author Contributions

Conceptualization, F.G., D.M.O. and C.I.; methodology, F.G., D.M.O. and C.I.; software, F.G., D.M.O. and C.I.; validation, F.G., D.M.O. and C.I.; formal analysis, F.G., D.M.O. and C.I.; investigation, F.G., D.M.O. and C.I.; resources, F.G., D.M.O. and C.I.; data curation, F.G., D.M.O. and C.I.; writing—original draft preparation, F.G., D.M.O. and C.I.; writing—review and editing, F.G., D.M.O. and C.I.; visualization, F.G., D.M.O. and C.I.; supervision, F.G., D.M.O. and C.I.; project administration, F.G., D.M.O. and C.I.; funding acquisition, F.G., D.M.O. and C.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research is partially supported by the project “Romanian Hub for Artificial Intelligence-HRIA”, Smart Growth, Digitization, and Financial Instruments Program, 2021–2027, MySMIS no. 334906.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ARI      Adjusted Rand Index
CH       Calinski–Harabasz (Index)
DBSCAN   Density-Based Spatial Clustering of Applications with Noise
FW       Feature Weighting
GMM      Gaussian Mixture Model
HDBSCAN  Hierarchical DBSCAN
$L_p$    $L_p$ norm (Minkowski metric)
mRMR     Minimum Redundancy Maximum Relevance
NMI      Normalized Mutual Information
PCA      Principal Component Analysis
SHAP     SHapley Additive exPlanations
Sil      Silhouette score
t-SNE    t-distributed Stochastic Neighbor Embedding
XAI      eXplainable Artificial Intelligence

Appendix A. Complete Clustering Results

Appendix A.1. Diabetes (Pima) Dataset

Method | k-Means (ARI / Sil / NMI / CH) | Hierarchical, Ward (ARI / Sil / NMI / CH) | HDBSCAN (ARI / Sil / NMI / CH) | GMM (ARI / Sil / NMI / CH)
Unweighted | 0.101 / 0.165 / 0.059 / 146.743 | 0.102 / 0.157 / 0.071 / 117.826 | 0.053 / 0.256 / 0.023 / 35.951 | 0.013 / 0.165 / 0.002 / 115.419
SHAP | 0.001 / 0.368 / 0.009 / 538.785 | 0.108 / 0.444 / 0.051 / 619.946 | 0.021 / 0.494 / 0.013 / 145.707 | 0.013 / 0.600 / 0.002 / 1343.91
$L_p$ | 0.014 / 0.232 / 0.017 / 260.910 | 0.100 / 0.275 / 0.048 / 250.810 | 0.024 / 0.439 / 0.012 / 101.140 | 0.013 / 0.396 / 0.002 / 574.569
mRMR | 0.001 / 0.501 / 0.006 / 1152.98 | 0.066 / 0.352 / 0.042 / 499.023 | 0.012 / 0.733 / 0.009 / 421.034 | 0.013 / 0.649 / 0.002 / 1704.41
PCA | 0.096 / 0.187 / 0.052 / 170.693 | 0.139 / 0.229 / 0.078 / 133.408 | 0.046 / 0.256 / 0.019 / 34.111 | 0.013 / 0.174 / 0.002 / 128.979
F-test | 0.089 / 0.665 / 0.041 / 2002.12 | 0.089 / 0.663 / 0.046 / 1955.205 | 0.017 / 0.961 / 0.042 / 6516.82 | 0.098 / 0.592 / 0.070 / 1375.37
t-SNE | 0.002 / 0.302 / 0.008 / 340.947 | 0.012 / 0.329 / 0.002 / 311.014 | 0.015 / 0.436 / 0.005 / 76.060 | 0.013 / 0.329 / 0.002 / 312.241
SHAP, $L_p$ | 0.003 / 0.441 / 0.006 / 773.292 | 0.078 / 0.466 / 0.045 / 868.172 | 0.034 / 0.355 / 0.017 / 226.833 | 0.013 / 0.654 / 0.002 / 1681.47
SHAP, mRMR | 0.005 / 0.589 / 0.001 / 1591.97 | 0.110 / 0.523 / 0.057 / 1054.11 | 0.028 / 0.440 / 0.013 / 324.799 | 0.013 / 0.673 / 0.002 / 1754.12
SHAP, PCA | 0.002 / 0.364 / 0.009 / 517.279 | 0.089 / 0.423 / 0.039 / 558.413 | 0.021 / 0.487 / 0.013 / 147.734 | 0.013 / 0.595 / 0.002 / 1282.64
SHAP, F-test | 0.089 / 0.665 / 0.041 / 2002.12 | 0.089 / 0.663 / 0.046 / 1955.21 | 0.018 / 0.963 / 0.043 / 8498.43 | 0.098 / 0.592 / 0.070 / 1375.37
SHAP, t-SNE | 0.005 / 0.513 / 0.001 / 1137.39 | 0.194 / 0.372 / 0.113 / 377.202 | 0.012 / 0.726 / 0.003 / 726.664 | 0.013 / 0.645 / 0.002 / 1653.94
Note: Bold values indicate notable results involving SHAP (as a stand-alone method or in an ensemble).

Appendix A.2. Wine Dataset

Method | k-Means (ARI / Sil / NMI / CH) | Hierarchical, Ward (ARI / Sil / NMI / CH) | HDBSCAN (ARI / Sil / NMI / CH) | GMM (ARI / Sil / NMI / CH)
Unweighted | 0.871 / 0.284 / 0.875 / 70.940 | 0.789 / 0.277 / 0.786 / 67.647 | 0.345 / 0.133 / 0.449 / 24.689 | 0.897 / 0.284 / 0.875 / 70.940
SHAP | 0.880 / 0.435 / 0.850 / 160.129 | 0.601 / 0.364 / 0.654 / 136.720 | 0.440 / 0.233 / 0.560 / 62.690 | 0.947 / 0.429 / 0.928 / 155.921
$L_p$ | 0.790 / 0.370 / 0.770 / 139.341 | 0.964 / 0.351 / 0.954 / 137.720 | 0.395 / 0.262 / 0.485 / 57.968 | 0.852 / 0.365 / 0.836 / 135.390
mRMR | 0.820 / 0.396 / 0.799 / 153.493 | 0.817 / 0.394 / 0.794 / 165.233 | 0.434 / 0.301 / 0.563 / 85.362 | 0.915 / 0.386 / 0.893 / 146.743
PCA | 0.881 / 0.283 / 0.865 / 69.566 | 0.730 / 0.265 / 0.716 / 64.628 | 0.341 / 0.130 / 0.425 / 24.450 | 0.897 / 0.275 / 0.876 / 67.308
F-test | 0.374 / 0.591 / 0.453 / 487.088 | 0.320 / 0.603 / 0.392 / 462.742 | 0.122 / 0.389 / 0.269 / 25.868 | 0.318 / 0.599 / 0.394 / 470.792
t-SNE | 0.698 / 0.322 / 0.706 / 126.626 | 0.837 / 0.312 / 0.815 / 117.914 | 0.390 / 0.231 / 0.493 / 54.217 | 0.756 / 0.316 / 0.762 / 122.395
SHAP, $L_p$ | 0.804 / 0.450 / 0.784 / 195.976 | 0.588 / 0.415 / 0.652 / 251.179 | 0.423 / 0.274 / 0.539 / 87.190 | 0.915 / 0.441 / 0.893 / 186.088
SHAP, mRMR | 0.834 / 0.446 / 0.807 / 183.181 | 0.712 / 0.408 / 0.743 / 208.013 | 0.423 / 0.349 / 0.532 / 111.704 | 0.897 / 0.430 / 0.876 / 171.891
SHAP, PCA | 0.880 / 0.433 / 0.850 / 157.875 | 0.832 / 0.408 / 0.820 / 150.307 | 0.403 / 0.206 / 0.507 / 50.067 | 0.931 / 0.428 / 0.909 / 153.549
SHAP, F-test | 0.374 / 0.591 / 0.453 / 487.088 | 0.320 / 0.603 / 0.392 / 462.742 | 0.104 / 0.398 / 0.256 / 18.604 | 0.318 / 0.599 / 0.394 / 470.792
SHAP, t-SNE | 0.847 / 0.424 / 0.815 / 187.333 | 0.672 / 0.411 / 0.710 / 235.881 | 0.392 / 0.226 / 0.534 / 65.373 | 0.895 / 0.422 / 0.882 / 185.763
Note: Bold values indicate notable results involving SHAP (as a stand-alone method or in an ensemble).

Appendix A.3. Breast Cancer Dataset

Method | k-Means (ARI / Sil / NMI / CH) | Hierarchical, Ward (ARI / Sil / NMI / CH) | HDBSCAN (ARI / Sil / NMI / CH) | GMM (ARI / Sil / NMI / CH)
Unweighted | 0.676 / 0.344 / 0.562 / 267.680 | 0.575 / 0.339 / 0.456 / 248.628 | 0.156 / 0.028 / 0.212 / 48.146 | 0.774 / 0.314 / 0.661 / 247.283
SHAP | 0.659 / 0.575 / 0.548 / 944.123 | 0.719 / 0.542 / 0.599 / 880.312 | 0.110 / 0.033 / 0.159 / 65.027 | 0.793 / 0.521 / 0.682 / 752.406
$L_p$ | 0.718 / 0.449 / 0.617 / 495.080 | 0.586 / 0.419 / 0.464 / 411.237 | 0.257 / 0.130 / 0.218 / 56.893 | 0.755 / 0.419 / 0.641 / 447.810
mRMR | 0.730 / 0.487 / 0.629 / 617.511 | 0.707 / 0.453 / 0.585 / 554.375 | 0.000 / 0.000 / 0.000 / 0.000 | 0.767 / 0.461 / 0.655 / 570.654
PCA | 0.642 / 0.350 / 0.523 / 269.743 | 0.603 / 0.339 / 0.479 / 258.376 | 0.145 / 0.083 / 0.217 / 33.686 | 0.660 / 0.332 / 0.537 / 247.117
F-test | 0.126 / 0.618 / 0.070 / 920.864 | 0.127 / 0.611 / 0.071 / 908.811 | 0.001 / 0.352 / 0.046 / 27.081 | 0.137 / 0.633 / 0.082 / 921.494
SHAP, $L_p$ | 0.653 / 0.592 / 0.542 / 1068.392 | 0.466 / 0.574 / 0.421 / 900.122 | 0.269 / 0.172 / 0.203 / 72.904 | 0.118 / 0.484 / 0.115 / 248.153
SHAP, mRMR | 0.671 / 0.588 / 0.559 / 1069.674 | 0.694 / 0.577 / 0.592 / 1029.995 | 0.000 / 0.000 / 0.000 / 0.000 | 0.174 / 0.456 / 0.159 / 313.117
SHAP, PCA | 0.659 / 0.573 / 0.548 / 924.684 | 0.689 / 0.546 / 0.568 / 897.500 | 0.076 / 0.054 / 0.145 / 57.197 | 0.103 / 0.513 / 0.114 / 224.856
SHAP, F-test | 0.126 / 0.618 / 0.070 / 920.864 | 0.127 / 0.611 / 0.071 / 908.811 | 0.001 / 0.353 / 0.047 / 21.695 | 0.139 / 0.634 / 0.084 / 918.215
SHAP, t-SNE | 0.688 / 0.581 / 0.583 / 1004.840 | 0.412 / 0.552 / 0.372 / 766.806 | 0.361 / 0.172 / 0.283 / 88.030 | 0.370 / 0.528 / 0.333 / 590.184
Note: Bold values indicate notable results involving SHAP (as a stand-alone method or in an ensemble).

Appendix A.4. Digits Dataset

Method | k-Means (ARI / Sil / NMI / CH) | Hierarchical, Ward (ARI / Sil / NMI / CH) | HDBSCAN (ARI / Sil / NMI / CH) | GMM (ARI / Sil / NMI / CH)
Unweighted | 0.530 / 0.135 / 0.672 / 113.060 | 0.664 / 0.125 / 0.795 / 105.825 | 0.209 / 0.041 / 0.580 / 30.762 | 0.546 / 0.117 / 0.690 / 102.577
SHAP | 0.502 / 0.210 / 0.626 / 243.155 | 0.700 / 0.181 / 0.786 / 215.512 | 0.376 / 0.005 / 0.650 / 69.313 | 0.518 / 0.170 / 0.651 / 241.091
$L_p$ | 0.402 / 0.103 / 0.546 / 5.82 × 10^16 | 0.531 / 0.154 / 0.738 / 184.417 | 0.001 / 0.334 / 0.048 / 30.499 | 0.000 / 1.000 / 0.001 / 3.53 × 10^17
mRMR | 0.560 / 0.192 / 0.660 / 201.636 | 0.667 / 0.185 / 0.788 / 184.384 | 0.454 / 0.083 / 0.724 / 77.951 | 0.615 / 0.199 / 0.717 / 196.626
PCA | 0.511 / 0.134 / 0.660 / 115.868 | 0.660 / 0.136 / 0.778 / 112.902 | 0.219 / 0.032 / 0.584 / 33.854 | 0.557 / 0.128 / 0.687 / 113.244
F-test | 0.003 / 0.986 / 0.068 / 49,688 | 0.003 / 0.984 / 0.068 / 69,182 | 0.003 / 0.992 / 0.070 / 131,083 | 0.003 / 0.986 / 0.068 / 50,981
SHAP, $L_p$ | 0.441 / 0.199 / 0.566 / 1.38 × 10^12 | 0.635 / 0.185 / 0.730 / 237.468 | 0.001 / 0.976 / 0.042 / 232.357 | 0.000 / 0.000 / 0.000 / 0.000
SHAP, mRMR | 0.487 / 0.220 / 0.595 / 274.584 | 0.547 / 0.187 / 0.669 / 256.476 | 0.337 / 0.016 / 0.668 / 73.446 | 0.505 / 0.211 / 0.628 / 302.152
SHAP, PCA | 0.514 / 0.213 / 0.634 / 250.872 | 0.543 / 0.180 / 0.676 / 218.276 | 0.356 / 0.022 / 0.678 / 67.599 | 0.091 / 0.189 / 0.193 / 453.015
SHAP, F-test | 0.003 / 0.986 / 0.068 / 49,688 | 0.003 / 0.984 / 0.068 / 60,631 | 0.003 / 0.993 / 0.070 / 141,957 | 0.003 / 0.986 / 0.068 / 49,688
Note: Bold values indicate notable results involving SHAP (as a stand-alone method or in an ensemble).

Appendix A.5. Vehicle Silhouette Dataset

Method | k-Means (ARI / Sil / NMI / CH) | Hierarchical, Ward (ARI / Sil / NMI / CH) | HDBSCAN (ARI / Sil / NMI / CH) | GMM (ARI / Sil / NMI / CH)
Unweighted | 0.075 / 0.299 / 0.121 / 390.829 | 0.055 / 0.265 / 0.104 / 353.824 | 0.002 / 0.585 / 0.022 / 66.224 | 0.084 / 0.280 / 0.130 / 368.677
SHAP | 0.096 / 0.364 / 0.141 / 563.310 | 0.128 / 0.345 / 0.172 / 651.226 | 0.002 / 0.857 / 0.024 / 483.520 | 0.141 / 0.345 / 0.204 / 605.251
$L_p$ | 0.072 / 0.345 / 0.116 / 681.217 | 0.077 / 0.363 / 0.148 / 636.358 | 0.002 / 0.843 / 0.024 / 428.710 | 0.077 / 0.331 / 0.120 / 681.624
mRMR | 0.114 / 0.298 / 0.130 / 575.250 | 0.126 / 0.313 / 0.165 / 604.981 | 0.002 / 0.758 / 0.024 / 210.294 | 0.104 / 0.180 / 0.182 / 394.014
PCA | 0.076 / 0.327 / 0.122 / 437.053 | 0.110 / 0.316 / 0.148 / 413.983 | 0.002 / 0.597 / 0.022 / 69.256 | 0.085 / 0.298 / 0.131 / 400.975
F-test | 0.051 / 0.576 / 0.057 / 4120.081 | 0.046 / 0.551 / 0.059 / 3821.233 | 0.018 / 0.997 / 0.070 / 709671 | 0.046 / 0.563 / 0.055 / 3952.806
SHAP, $L_p$ | 0.122 / 0.379 / 0.168 / 848.862 | 0.133 / 0.409 / 0.214 / 1080.091 | 0.002 / 0.909 / 0.024 / 1093.220 | 0.168 / 0.392 / 0.334 / 505.685
SHAP, mRMR | 0.083 / 0.300 / 0.108 / 570.853 | 0.093 / 0.336 / 0.127 / 794.904 | 0.002 / 0.887 / 0.024 / 746.153 | 0.086 / 0.343 / 0.126 / 426.536
SHAP, PCA | 0.083 / 0.360 / 0.106 / 582.401 | 0.101 / 0.378 / 0.172 / 598.437 | 0.002 / 0.853 / 0.024 / 466.502 | 0.082 / 0.392 / 0.123 / 451.038
SHAP, F-test | 0.051 / 0.576 / 0.057 / 4120.081 | 0.039 / 0.540 / 0.056 / 3793.931 | 0.018 / 0.997 / 0.070 / 709671 | 0.046 / 0.563 / 0.055 / 3952.806
Note: Bold values indicate notable results involving SHAP (as a stand-alone method or in an ensemble).

References

  1. Jain, A.K. Data clustering: 50 years beyond K-means. Pattern Recognit. Lett. 2010, 31, 651–666. [Google Scholar] [CrossRef]
  2. Xu, R.; Wunsch, D. Survey of clustering algorithms. IEEE Trans. Neural Netw. 2005, 16, 645–678. [Google Scholar] [CrossRef] [PubMed]
  3. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, Oregon, 2–4 August 1996; KDD’96. pp. 226–231. [Google Scholar]
  4. Ward, J.H., Jr. Hierarchical grouping to optimize an objective function. J. Am. Stat. Assoc. 1963, 58, 236–244. [Google Scholar] [CrossRef]
  5. Niño-Adan, I.; Manjarres, D.; Landa-Torres, I.; Portillo, E. Feature weighting methods: A review. Expert Syst. Appl. 2021, 184, 115424. [Google Scholar] [CrossRef]
  6. Lundberg, S. A unified approach to interpreting model predictions. arXiv 2017, arXiv:1705.07874. [Google Scholar] [CrossRef]
  7. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.I. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67. [Google Scholar] [CrossRef] [PubMed]
  8. Modha, D.S.; Spangler, W.S. Feature weighting in k-means clustering. Mach. Learn. 2003, 52, 217–237. [Google Scholar] [CrossRef]
  9. De Amorim, R.C. Feature relevance in ward’s hierarchical clustering using the L p norm. J. Classif. 2015, 32, 46–62. [Google Scholar] [CrossRef]
  10. Güneş, S.; Polat, K.; Yosunkaya, Ş. Efficient sleep stage recognition system based on EEG signal using k-means clustering based feature weighting. Expert Syst. Appl. 2010, 37, 7922–7928. [Google Scholar] [CrossRef]
  11. Gürüler, H. A novel diagnosis system for Parkinson’s disease using complex-valued artificial neural network with k-means clustering feature weighting method. Neural Comput. Appl. 2017, 28, 1657–1666. [Google Scholar] [CrossRef]
  12. Yang, H.; Jiao, L.; Pan, Q. A survey on interpretable clustering. In Proceedings of the 2021 40th Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021; pp. 7384–7388. [Google Scholar]
  13. Hu, L.; Jiang, M.; Dong, J.; Liu, X.; He, Z. Interpretable Clustering: A Survey. arXiv 2024, arXiv:2409.00743. [Google Scholar] [CrossRef]
  14. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  15. Peng, H.; Long, F.; Ding, C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1226–1238. [Google Scholar] [CrossRef] [PubMed]
  16. Jolliffe, I.T. Principal component analysis and factor analysis. In Principal Component Analysis; Springer: Berlin/Heidelberg, Germany, 2002; pp. 150–166. [Google Scholar]
  17. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
  18. van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  19. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  20. Asuncion, A.; Newman, D. UCI Machine Learning Repository. 2007. Available online: https://archive.ics.uci.edu/ (accessed on 12 August 2025).
  21. Smith, J.W.; Everhart, J.E.; Dickson, W.C.; Knowler, W.C.; Johannes, R.S. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In Proceedings of the Annual Symposium on Computer Application in Medical Care, Washington, DC, USA, 6–9 November 1988; p. 261. [Google Scholar]
  22. Aeberhard, S.; Coomans, D.; De Vel, O. Comparison of Classifiers in High Dimensional Settings; Technical Report No. 92-02; Department of Mathematics and Statistics, James Cook University of North Queensland: North Queensland, Australia, 1992. [Google Scholar]
  23. Street, W.N.; Wolberg, W.H.; Mangasarian, O.L. Nuclear feature extraction for breast tumor diagnosis. In Proceedings of the Biomedical Image Processing and Biomedical Visualization, San Jose, CA, USA, 1–4 February 1993; Volume 1905, pp. 861–870. [Google Scholar]
  24. Kaynak, C. Methods of Combining Multiple Classifiers and Their Applications to Handwritten Digit Recognition. Master Thesis, Institute of Graduate Studies in Science and Engineering, Bogazici University, Istanbul, Turkey, 1995. [Google Scholar]
  25. Siebert, J.P. Vehicle Recognition Using Rule Based Methods; Turing Institute: London, UK, 1987. [Google Scholar]
  26. Campello, R.J.; Moulavi, D.; Sander, J. Density-based clustering based on hierarchical density estimates. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Gold Coast, Australia, 14–17 April 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 160–172. [Google Scholar]
  27. Hubert, L.; Arabie, P. Comparing partitions. J. Classif. 1985, 2, 193–218. [Google Scholar] [CrossRef]
  28. Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef]
  29. Caliński, T.; Harabasz, J. A dendrite method for cluster analysis. Commun. Stat.-Theory Methods 1974, 3, 1–27. [Google Scholar] [CrossRef]
  30. Onchis, D. Observing damaged beams through their time–frequency extended signatures. Signal Process. 2014, 96, 16–20. [Google Scholar] [CrossRef]
  31. Feichtinger, H.; Onchiş, D. Constructive reconstruction from irregular sampling in multi-window spline-type spaces. In Progress in Analysis and Its Applications; World Scientific: Singapore, 2010; pp. 257–265. [Google Scholar]
Figure 1. Flowchart of the employed FW methodology.
Figure 2. Hierarchical clustering metrics across three FW methods on the Diabetes dataset.
Figure 3. HDBSCAN metrics across three FW methods on the Wine dataset.
Figure 4. HDBSCAN metrics on four FW methods for the Breast Cancer dataset.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
