Multi-View Utility-Based Clustering: A Mutually Supervised Perspective
Abstract
1. Introduction
- (1) The proposed MUC method moves multi-view clustering from the level of feature- and/or sample-level side information up to the partition level. By leveraging utility-based partition-level side information, the clustering process of MUC is more consistent with human thinking, and such guidance is of a higher caliber and more instructive than the sample- or feature-level side information used in existing multi-view clustering methods.
- (2) While existing multi-view clustering methods obtain the benefits of mutual supervision in a co-trained and/or co-regularized way, the proposed MUC method achieves a novel mutual supervision mode: utility-based partition-level side information first guides the clustering of each view, and the multi-view results are then combined. A distinctive merit of this mode is that, with an alternating optimization strategy, the objective function of MUC can be solved in a K-means-like way. Our theoretical analysis indicates that MUC guarantees enhanced multi-view clustering performance.
- (3) In addition, based on the maximum-entropy principle, the proposed MUC method learns a weight for each view so as to further improve multi-view clustering performance; a hedged sketch of such a weighting rule is given after this list.
- (4) Extensive experimental studies on various multi-view datasets justify the effectiveness of the proposed MUC method in contrast to commonly used single- and multi-view clustering methods.
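As a hedged illustration of the maximum-entropy view weighting in contribution (3): under an entropy regularizer, view weights typically admit a softmax-type closed form. Here $D_v$ denotes the within-cluster distortion of the $v$th view and $\gamma > 0$ is the entropy tradeoff parameter; both symbols and the form itself are illustrative assumptions, and the exact update used by MUC is given by Equation (17) in Section 3.

```latex
% Assumed maximum-entropy weight update (illustrative; MUC's exact Equation (17) may differ):
w_v = \frac{\exp\left(-D_v/\gamma\right)}{\sum_{u=1}^{V} \exp\left(-D_u/\gamma\right)},
\qquad w_v \ge 0, \quad \sum_{v=1}^{V} w_v = 1 .
```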
2. Related Works
3. Multi-View Utility-Based Clustering: A Mutually Supervised Perspective
3.1. Utility-Based Partition-Level Side Information from Other Views
3.2. MUC
3.2.1. Objective Function of MUC
3.2.2. Solution of MUC
Algorithm 1: MUC
Input: The multi-view dataset with V views; random initializations of the partition of each view; the total number K of clusters; the convergence threshold ε; the two tradeoff parameters.
Output: The final partition matrix H.
Procedure:
1: Generate the auxiliary matrix of each view according to Equation (22), where the partition-level side information is generated by averaging or majority voting over the partitions of all other views;
2: while not converged do
3: Randomly select K expanded centroids for each view;
4: Compute the distance matrix of each view by Equation (14);
5: Update the weight of each view by Equation (17);
6: repeat
7: Update the partition of the vth view by Equation (23);
8: Update the centroids of the vth view by Equation (24);
9: Carry out steps 7 and 8 alternately on each view;
10: until the change in the objective function of Equation (22) between two consecutive iterations is smaller than ε;
11: end while
12: Output the final partition matrix H, obtained by averaging or majority voting over the partitions of all views.
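To complement the pseudocode, the following is a minimal, hypothetical Python sketch of Algorithm 1. Since Equations (14), (17) and (22)–(24) are defined in the text, the sketch substitutes standard stand-ins: exploiting the K-means equivalence of category-utility maximization (cf. Wu et al., "K-means-based consensus clustering: A unified view"), each view is clustered on its features augmented with the utility-weighted one-hot side-information partition averaged over the other views, and the view weights follow an assumed maximum-entropy softmax. The function and parameter names (muc_sketch, lam, gamma) are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def one_hot(labels, K):
    """Binary partition matrix H with H[i, k] = 1 iff sample i is in cluster k."""
    H = np.zeros((labels.size, K))
    H[np.arange(labels.size), labels] = 1.0
    return H

def muc_sketch(views, K, lam=1.0, gamma=1.0, max_iter=20, tol=1e-4, seed=0):
    V, n = len(views), views[0].shape[0]
    rng = np.random.default_rng(seed)
    parts = [rng.integers(0, K, n) for _ in range(V)]  # random initial partitions
    w = np.full(V, 1.0 / V)                            # uniform initial view weights
    prev_obj = np.inf
    for _ in range(max_iter):
        obj, distortions = 0.0, np.empty(V)
        for v in range(V):
            # Partition-level side information: mean one-hot partition of the other views.
            side = np.mean([one_hot(parts[u], K) for u in range(V) if u != v], axis=0)
            # "Expanded" data: view features plus utility-weighted side information.
            Xe = np.hstack([views[v], lam * side])
            km = KMeans(n_clusters=K, n_init=10, random_state=seed).fit(Xe)
            parts[v], distortions[v] = km.labels_, km.inertia_ / n
            obj += w[v] * km.inertia_
        # Assumed maximum-entropy weight update (softmax over negative distortions).
        w = np.exp(-distortions / gamma)
        w /= w.sum()
        if abs(prev_obj - obj) < tol:
            break
        prev_obj = obj
    # Final partition by majority voting over the per-view partitions (step 12).
    votes = np.stack(parts)
    final = np.array([np.bincount(votes[:, i], minlength=K).argmax() for i in range(n)])
    return final, w
```

As a quick usage check, muc_sketch([X1, X2], K=3) returns consensus labels and learned view weights for two feature matrices X1, X2 sharing the same row ordering.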
3.3. On Computational Complexity
4. Experimental Studies
4.1. Datasets
4.1.1. Synthetic Dataset
4.1.2. UCI Datasets
4.1.3. Real-Life Image Datasets
- (1) The CMU PIE image dataset [40] consists of 41,368 facial images (64 × 64 pixels) of 68 persons. The commonly used version of CMU PIE, as organized by Deng Cai, consists of five poses (views), i.e., C05 (left), C07 (upward), C09 (downward), C27 (frontal) and C29 (right). In each view, 120 images of five different persons are randomly selected. Figure 4 illustrates images of one person from these five views. To make full use of the CMU PIE image dataset, we generate five multi-view datasets (D1, D2, D3, D4 and D5), each combining four of the above five views; the details are listed in Table 4.
- (2) The AwA image dataset [41] consists of 30,475 images of 50 animal classes, each image with six pre-extracted feature representations (views). In this study, we randomly select eight classes (5133 images) for the experiments.
4.2. Evaluation Criteria
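The experiments report normalized mutual information (NMI) and the Rand index (RI), both of which compare a predicted partition against ground-truth labels. Below is a minimal check using scikit-learn (rand_score is available from scikit-learn 0.24 onward; the toy labels are purely illustrative):

```python
from sklearn.metrics import normalized_mutual_info_score, rand_score

y_true = [0, 0, 1, 1, 2, 2]   # toy ground-truth labels
y_pred = [0, 0, 1, 2, 2, 2]   # toy clustering result

print("NMI:", normalized_mutual_info_score(y_true, y_pred))
print("RI: ", rand_score(y_true, y_pred))
```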
4.3. Adopted Methods and Parameters Settings
4.4. Experimental Results
4.4.1. Synthetic Dataset
4.4.2. UCI Datasets
- (1) In general, the performances of the seven multi-view methods, i.e., Co-FKM, MVKKM, MVSpec, WV-Co-FCM, TW-K-means, MVASM and MUC, are superior to those of the two single-view clustering methods, K-means and ComBKM.
- (2) The proposed MUC method clearly performs best on all seven multi-view datasets in terms of overall average NMI and RI, demonstrating the benefit of mutual supervision based on utility-based partition-level side information. In terms of NMI and RI, MUC is not always significantly superior to WV-Co-FCM or TW-K-means on every dataset (see, for example, Dermatology), although it remains a very close second in such cases. A possible reason is that these datasets are easy to cluster, so the additional utility-based partition-level side information does not play a crucial role in further enhancing multi-view clustering performance.
- (3) An additional meaningful observation is that the weights of the two views vary considerably on some datasets, such as SPECTF. This is possibly because the features of the two views differ greatly: the views provide different amounts of effective feature information, and the separability of the data also differs from view to view.
4.4.3. Real-Life Image Datasets
4.4.4. Statistical Analysis Results
4.4.5. Sensitivity Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhang, P.; Wang, D.; Yu, Z.; Zhang, Y.; Jiang, T.; Li, T. A multi-scale information fusion-based multiple correlations for unsupervised attribute selection. Inf. Fusion 2024, 106, 102276.
- Liu, Z.; Zhang, X.; Jiang, B. Active learning with fairness-aware clustering for fair classification considering multiple sensitive attributes. Inf. Sci. 2023, 647, 119521.
- Hu, Z.; Cai, S.-M.; Wang, J.; Zhou, T. Collaborative recommendation model based on multi-modal multi-view attention network: Movie and literature cases. Appl. Soft Comput. 2023, 144, 110518.
- Zhao, J.; Xie, X.; Xu, X.; Sun, S. Multi-view learning overview: Recent progress and new challenges. Inf. Fusion 2017, 38, 43–54.
- Lai, Z.; Chen, F.; Wen, J. Multi-view robust regression for feature extraction. Pattern Recognit. 2024, 149, 110219.
- Jiang, Y.; Deng, Z.; Chung, F.; Wang, S. Realizing Two-View TSK Fuzzy Classification System by Using Collaborative Learning. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 145–160.
- Bickel, S.; Scheffer, T. Multi-view clustering. In Proceedings of the IEEE International Conference on Data Mining, Brighton, UK, 1–4 November 2004; pp. 19–26.
- Chen, X.; Xu, X.; Huang, J.; Ye, Y. TW-k-means: Automated two-level variable weighting clustering algorithm for multi-view data. IEEE Trans. Knowl. Data Eng. 2013, 25, 932–944.
- Deng, Z.; Liu, R.; Xu, P.; Choi, K.-S.; Zhang, W.; Tian, X.; Zhang, T.; Liang, L.; Qin, B.; Wang, S. Multi-View Clustering with the Cooperation of Visible and Hidden Views. IEEE Trans. Knowl. Data Eng. 2022, 34, 803–815.
- Han, J.; Xu, J.; Nie, F.; Li, X. Multi-view K-Means Clustering with Adaptive Sparse Memberships and Weight Allocation. IEEE Trans. Knowl. Data Eng. 2020, 34, 803–815.
- Cleuziou, G.; Exbrayat, M.; Martin, L.; Sublemontier, J.-H. CoFKM: A centralized method for multiple-view clustering. In Proceedings of the 9th IEEE International Conference on Data Mining (ICDM), Miami Beach, FL, USA, 6–9 December 2009; pp. 752–757.
- Tzortzis, G.F.; Likas, A.C. Kernel-based weighted multi-view clustering. In Proceedings of the IEEE 12th International Conference on Data Mining, Brussels, Belgium, 10–13 December 2012; pp. 675–684.
- Jiang, Y.; Chung, F.-L.; Wang, S.; Deng, Z.; Wang, J.; Qian, P. Collaborative fuzzy clustering from multiple weighted views. IEEE Trans. Cybern. 2015, 45, 688–701.
- Liu, H.; Fu, Y. Clustering with Partition Level Side Information. In Proceedings of the 2015 IEEE International Conference on Data Mining, Atlantic City, NJ, USA, 14–17 November 2015; pp. 877–882.
- Deng, Z.; Choi, K.-S.; Chung, F.-L.; Wang, S. EEW-SC: Enhanced entropy-weighting subspace clustering for high dimensional gene expression data clustering analysis. Appl. Soft Comput. 2011, 11, 4798–4806.
- Zhang, Y.; Chung, F.-L.; Wang, S. A Multiview and Multiexemplar Fuzzy Clustering Approach: Theoretical Analysis and Experimental Studies. IEEE Trans. Fuzzy Syst. 2019, 27, 1543–1557.
- Yang, Y.; Wang, H. Multi-view clustering: A survey. Big Data Min. Anal. 2018, 1, 83–107.
- Kailing, K.; Kriegel, H.; Pryakhin, A.; Schubert, M. Clustering Multi-Represented Objects with Noise. In Advances in Knowledge Discovery and Data Mining, Proceedings of the 8th Pacific-Asia Conference, PAKDD 2004, Sydney, Australia, 26–28 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 394–403.
- de Sa, V.R. Clustering with Two Views. In Proceedings of the International Conference on Machine Learning, Bonn, Germany, 7–11 August 2005; pp. 20–27.
- Zhou, D.; Burges, C. Spectral Clustering and Transductive Learning with Multiple Views. In Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, USA, 20–24 June 2007; pp. 1159–1166.
- Blaschko, M.B.; Lampert, C.H. Correlational Spectral Clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
- Chaudhuri, K.; Kakade, S.; Livescu, K.; Sridharan, K. Multiview Clustering via Canonical Correlation Analysis. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; pp. 129–136.
- Liu, J.; Wang, C.; Gao, J.; Han, J. Multi-view clustering via joint nonnegative matrix factorization. In Proceedings of the 2013 SIAM International Conference on Data Mining, Austin, TX, USA, 2–4 May 2013; pp. 252–260.
- Gupta, A.; Das, S. Transfer Clustering Using a Multiple Kernel Metric Learned Under Multi-Instance Weak Supervision. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 6, 828–838.
- Greene, D.; Cunningham, P. A Matrix Factorization Approach for Integrating Multiple Data Views. In Machine Learning and Knowledge Discovery in Databases, Proceedings of the European Conference, ECML PKDD 2009, Bled, Slovenia, 7–11 September 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 423–438.
- Ye, J. A netting method for clustering-simplified neutrosophic information. Soft Comput. 2017, 21, 7571–7577.
- Pedrycz, W. Collaborative fuzzy clustering. Pattern Recognit. Lett. 2002, 23, 1675–1686.
- Han, T.; Xie, W.; Zisserman, A. Self-supervised Co-training for Video Representation Learning. In Proceedings of the Advances in Neural Information Processing Systems 33 (NeurIPS 2020), Virtual, 6–12 December 2020.
- Kang, Z.; Zhao, X.; Peng, C.; Zhu, H.; Zhou, J.T.; Peng, X.; Chen, W.; Xu, Z. Partition level multiview subspace clustering. Neural Netw. 2020, 122, 279–288.
- Wang, J.; Wu, B.; Ren, Z.; Zhang, H.; Zhou, Y. Multi-scale deep multi-view subspace clustering with self-weighting fusion and structure preserving. Expert Syst. Appl. 2023, 213, 119031.
- Lan, W.; Yang, T.; Chen, Q.; Zhang, S.; Dong, Y.; Zhou, H.; Pan, Y. Multiview subspace clustering via low-rank symmetric affinity graph. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 11382–11395.
- Mirkin, B. Reinterpreting the category utility function. Mach. Learn. 2001, 45, 219–228.
- Wu, J.; Liu, H.; Xiong, H.; Cao, J.; Chen, J. K-means-based consensus clustering: A unified view. IEEE Trans. Knowl. Data Eng. 2015, 27, 155–169.
- Hayashi, M. Bregman divergence based EM algorithm and its application to classical and quantum rate distortion theory. IEEE Trans. Inf. Theory 2023, 69, 3460–3492.
- Ahmed, M.; Seraj, R.; Islam, S.M.S. The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 2020, 9, 1295.
- Andreani, R.; Schuverdt, M.L.; Secchin, L.D. On enhanced KKT optimality conditions for smooth nonlinear optimization. SIAM J. Optim. 2024, 34, 1515–1539.
- Zhou, J.; Zhang, X.; Jiang, Z. Recognition of Imbalanced Epileptic EEG Signals by a Graph-Based Extreme Learning Machine. Wirel. Commun. Mob. Comput. 2021, 2021, 5871684.
- Gu, Q.; Zhou, J. Learning the shared subspace for multi-task clustering and transductive transfer classification. In Proceedings of the 2009 Ninth IEEE International Conference on Data Mining, Miami Beach, FL, USA, 6–9 December 2009; pp. 159–168.
- Bache, K.; Lichman, M. UCI Machine Learning Repository; University of California, School of Information and Computer Science: Irvine, CA, USA, 2013. Available online: http://archive.ics.uci.edu/ml (accessed on 28 January 2024).
- Cai, D.; He, X.; Han, J. Spectral Regression for Efficient Regularized Subspace Learning. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
- Motiian, S.; Piccirilli, M.; Adjeroh, D.A.; Doretto, G. Information bottleneck learning using privileged information for visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1496–1505.
- Friedman, J.H. On bias, variance, 0/1-loss, and the curse-of-dimensionality. Data Min. Knowl. Discov. 1997, 1, 55–77.
| Levels of Side Information | Related Works |
|---|---|
| Feature | Bickel [7], Kailing [18], V.R. de Sa [19], Zhou [20], Blaschko [21], Chaudhuri [22], Liu [23] and Gupta [24] |
| Sample | Long [25], Greene [26], Pedrycz [27], Cleuziou [11] and Jiang [13] |
| Partition | Han [28], Kang [29], Wang [30] and Lan [31] |
| Dataset | Number of Samples | Number of Classes | View 1 (Description) | View 1 (Number of Features) | View 2 (Description) | View 2 (Number of Features) |
|---|---|---|---|---|---|---|
| Dermatology | 366 | 6 | Histopathological view (histopathological information of a case) | 12 | Clinical view (clinical information of a case) | 22 |
| Forest Type | 326 | 4 | Image band view (image band information of the data) | 9 | Spectrum view (spectrum values and their difference values) | 18 |
| Image Segmentation | 2310 | 7 | Shape view (shape information of an image) | 9 | RGB view (RGB information of an image) | 10 |
| Iris | 150 | 3 | Sepal view (sepal length and width of an iris) | 2 | Petal view (petal length and width of an iris) | 2 |
| Multiple Features | 2000 | 10 | Fourier coefficients view (Fourier coefficients of the character shapes) | 76 | Zernike moments view (Zernike moments of the character shapes) | 47 |
| SPECTF | 267 | 2 | Stress view (single-photon emission computed tomography image of the heart under stress) | 22 | Rest view (single-photon emission computed tomography image of the heart at rest) | 22 |
| Water Treatment | 527 | 13 | Input conditions and input demands | 31 | Output demands | 7 |
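As an illustration of how such two-view datasets are formed, the sketch below builds the two-view Iris dataset from the table above by splitting the four Iris features into the sepal view (length, width) and the petal view (length, width); the column ordering follows scikit-learn's load_iris:

```python
# Hypothetical construction of the two-view Iris dataset described above.
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)   # columns: sepal len, sepal wid, petal len, petal wid
view_sepal = X[:, :2]               # View 1: sepal length and width (2 features)
view_petal = X[:, 2:]               # View 2: petal length and width (2 features)
print(view_sepal.shape, view_petal.shape)  # (150, 2) (150, 2)
```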
| Dataset | Combination of Views |
|---|---|
| D1 | C05, C07, C09, C27 |
| D2 | C05, C07, C09, C29 |
| D3 | C05, C07, C27, C29 |
| D4 | C05, C09, C27, C29 |
| D5 | C07, C09, C27, C29 |
| Method | Parameter Settings for Grid Search |
|---|---|
| K-means | The total number K of clusters. |
| ComBKM | The total number K of clusters. |
| Co-FKM | The fuzzifier; the tradeoff parameter with a step size of 0.05, where K denotes the number of clusters. |
| MVKKM | The total number K of clusters; the tradeoff parameter. |
| MVSpec | The total number K of clusters; the tradeoff parameter. |
| WV-Co-FCM | The fuzzifier; the tradeoff parameter with a step size of 0.05, where K denotes the number of clusters; the tradeoff parameter. |
| TW-K-means | The total number K of clusters; the two tradeoff parameters. |
| MVASM | The tradeoff parameter with a step size of 0.1; the tradeoff parameter with a step size of 0.01. |
| MUC | The total number K of clusters; the two tradeoff parameters. |
| View Combination | K-Means | ComBKM | Co-FKM | MVKKM | MVSpec | WV-Co-FCM | TW-K-Means | MVASM | MUC |
|---|---|---|---|---|---|---|---|---|---|
| View1-View2 | 0.8905 (0.0131) | 0.7845 (0.0014) | 0.9527 (0.0036) | 0.9527 (0.0032) | 0.9459 (0.0056) | 0.9658 (0.0068) | 0.9527 (0.0070) | 0.9496 (0) | 0.9784 (0.0011) |
| View1-View3 | 0.8355 (0) | 0.8125 (0) | 0.9355 (0) | 0.9475 (0) | 0.9207 (0) | 0.9498 (0.0022) | 0.9355 (0.0035) | 0.9441 (0.0008) | 0.9491 (0.0009) |
| View2-View3 | 0.8775 (0.0126) | 0.8125 (0.0028) | 0.9371 (0.0009) | 0.9253 (0.0011) | 0.9416 (0.0012) | 0.9348 (0.0086) | 0.9371 (0.0071) | 0.9004 (0) | 0.9638 (0.0012) |
| Avg. NMI | 0.8678 | 0.8032 | 0.9418 | 0.9418 | 0.9361 | 0.9501 | 0.9418 | 0.9314 | 0.9638 |
| Avg. Std. | 0.0086 | 0.0014 | 0.0015 | 0.0014 | 0.0023 | 0.0058 | 0.0059 | 0.0003 | 0.0011 |
| View Combination | K-Means | ComBKM | Co-FKM | MVKKM | MVSpec | WV-Co-FCM | TW-K-Means | MVASM | MUC |
|---|---|---|---|---|---|---|---|---|---|
| View1-View2 | 0.9341 (0.0111) | 0.9043 (0.0017) | 0.9868 (0.0034) | 0.9868 (0.0008) | 0.9847 (0) | 0.9574 (0.0091) | 0.9868 (0.0063) | 0.9583 (0) | 0.9984 (0.0024) |
| View1-View3 | 0.9804 (0) | 0.9419 (0.0011) | 0.9804 (0) | 0.9847 (0) | 0.9847 (0.0004) | 0.9856 (0.0011) | 0.9804 (0) | 0.9873 (0.0005) | 0.9907 (0) |
| View2-View3 | 0.9302 (0.0110) | 0.9419 (0.0021) | 0.9825 (0) | 0.9762 (0) | 0.9762 (0.0002) | 0.9337 (0.0082) | 0.9825 (0.0011) | 0.9475 (0) | 0.9913 (0.0011) |
| Avg. RI | 0.9482 | 0.9294 | 0.9832 | 0.9826 | 0.9819 | 0.9589 | 0.9832 | 0.9644 | 0.9935 |
| Avg. Std. | 0.0074 | 0.0016 | 0.0011 | 0.0003 | 0.0002 | 0.0061 | 0.0025 | 0.0002 | 0.0012 |
| Dataset | K-Means | ComBKM | Co-FKM | MVKKM | MVSpec | WV-Co-FCM | TW-K-Means | MVASM | MUC |
|---|---|---|---|---|---|---|---|---|---|
| D1 | 0.2315 (0.0196) | 0.1895 (0.0175) | 0.4549 (0.0081) | 0.6451 (0.0008) | 0.7055 (0.0014) | 0.6649 (0.0136) | 0.5957 (0.0103) | 0.5723 (0.0058) | 0.6635 (0.0061) |
| D2 | 0.2107 (0.0214) | 0.2101 (0.0182) | 0.4342 (0.0034) | 0.5475 (0.0023) | 0.6638 (0.0011) | 0.6443 (0.0104) | 0.6324 (0.0095) | 0.5237 (0.0055) | 0.6806 (0.0039) |
| D3 | 0.1804 (0.0136) | 0.1943 (0.0192) | 0.4693 (0.0102) | 0.5985 (0.0018) | 0.6720 (0.0009) | 0.6805 (0.0124) | 0.6483 (0.0089) | 0.5724 (0.0064) | 0.6823 (0.0059) |
| D4 | 0.2013 (0.0105) | 0.2833 (0.0138) | 0.4732 (0.0064) | 0.5883 (0.0012) | 0.6804 (0.0010) | 0.6693 (0.0079) | 0.6102 (0.0132) | 0.5313 (0.0093) | 0.6875 (0.0060) |
| D5 | 0.1934 (0.0103) | 0.2019 (0.0157) | 0.4682 (0.0087) | 0.6056 (0.0006) | 0.6690 (0.0013) | 0.6736 (0.0131) | 0.6126 (0.0146) | 0.5923 (0.0066) | 0.6952 (0.0053) |
| Avg. NMI | 0.2035 | 0.2158 | 0.4600 | 0.5970 | 0.6781 | 0.6665 | 0.6198 | 0.5584 | 0.6818 |
| Avg. Std. | 0.0151 | 0.0169 | 0.0074 | 0.0013 | 0.0011 | 0.0115 | 0.0113 | 0.0067 | 0.0054 |
| Dataset | K-Means | ComBKM | Co-FKM | MVKKM | MVSpec | WV-Co-FCM | TW-K-Means | MVASM | MUC |
|---|---|---|---|---|---|---|---|---|---|
| D1 | 0.7173 (0.0113) | 0.7265 (0.0173) | 0.7631 (0.0024) | 0.7968 (0.0019) | 0.8245 (0.0014) | 0.8538 (0.0092) | 0.7984 (0.0075) | 0.7915 (0.0037) | 0.8590 (0.0036) |
| D2 | 0.7236 (0.0084) | 0.7402 (0.0141) | 0.7623 (0.0022) | 0.7027 (0.0003) | 0.8474 (0.0010) | 0.8657 (0.0027) | 0.7724 (0.0021) | 0.7436 (0.0036) | 0.8612 (0.0020) |
| D3 | 0.7224 (0.0095) | 0.7237 (0.0121) | 0.7386 (0.0034) | 0.7522 (0.0022) | 0.8760 (0.0026) | 0.8553 (0.0113) | 0.7992 (0.0059) | 0.7981 (0.0034) | 0.8709 (0.0073) |
| D4 | 0.7309 (0.0107) | 0.7271 (0.0106) | 0.7408 (0.0042) | 0.7432 (0.0031) | 0.8394 (0.0042) | 0.8570 (0.0127) | 0.8309 (0.0072) | 0.7967 (0.0036) | 0.8517 (0.0033) |
| D5 | 0.7231 (0.0137) | 0.7384 (0.0083) | 0.7518 (0.0054) | 0.7367 (0.0017) | 0.8743 (0.0018) | 0.8574 (0.0071) | 0.8368 (0.0053) | 0.7902 (0.0052) | 0.8629 (0.0052) |
| Avg. RI | 0.7235 | 0.7312 | 0.7513 | 0.7463 | 0.8523 | 0.8578 | 0.8075 | 0.7840 | 0.8611 |
| Avg. Std. | 0.0107 | 0.0125 | 0.0035 | 0.0018 | 0.0022 | 0.0086 | 0.0056 | 0.0039 | 0.0043 |
| Criterion | K-Means | ComBKM | Co-FKM | MVKKM | MVSpec | WV-Co-FCM | TW-K-Means | MVASM | MUC |
|---|---|---|---|---|---|---|---|---|---|
| NMI | 0.0902 (0.0012) | 0.1084 (0.0006) | 0.1450 (0.0028) | 0.1155 (0.0081) | 0.1087 (0.0090) | 0.1041 (0) | 0.1197 (0) | 0.0968 (0.0058) | 0.1336 (0.0029) |
| RI | 0.6934 (0) | 0.7506 (0.0010) | 0.7848 (0.0061) | 0.7634 (0.0022) | 0.7401 (0.0031) | 0.7552 (0.0015) | 0.7744 (0.0012) | 0.6985 (0.0041) | 0.8061 (0.0032) |
| i | Method | z | p | Holm | Hypothesis |
|---|---|---|---|---|---|
| 8 | K-means | 6.97137 | 0 | 0.00625 | Rejected |
| 7 | ComBKM | 5.9063 | 0 | 0.007143 | Rejected |
| 6 | MVASM | 5.099428 | 0 | 0.008333 | Rejected |
| 5 | Co-FKM | 3.679334 | 0.000234 | 0.01 | Rejected |
| 4 | TW-K-means | 3.098387 | 0.001946 | 0.0125 | Rejected |
| 3 | MVKKM | 2.743363 | 0.006081 | 0.016667 | Rejected |
| 2 | MVSpec | 2.065591 | 0.038867 | 0.025 | Not Rejected |
| 1 | WV-Co-FCM | 1.807392 | 0.070701 | 0.05 | Not Rejected |
| i | Method | z | p | Holm | Hypothesis |
|---|---|---|---|---|---|
| 8 | K-means | 6.390423 | 0 | 0.00625 | Rejected |
| 7 | ComBKM | 5.519001 | 0 | 0.007143 | Rejected |
| 6 | MVASM | 4.260282 | 0.00002 | 0.008333 | Rejected |
| 5 | MVKKM | 3.51796 | 0.000435 | 0.01 | Rejected |
| 4 | Co-FKM | 3.38886 | 0.000702 | 0.0125 | Rejected |
| 3 | TW-K-means | 2.549714 | 0.010781 | 0.016667 | Rejected |
| 2 | MVSpec | 2.065591 | 0.038867 | 0.025 | Not Rejected |
| 1 | WV-Co-FCM | 1.936492 | 0.052808 | 0.05 | Not Rejected |
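For reproducibility, the Holm column in the two tables above follows the standard step-down procedure: methods are ordered by decreasing Friedman z statistic, two-sided p-values come from the normal tail (e.g., z = 2.065591 gives p = 0.038867, matching the MVSpec rows), the hypothesis in row i is tested at level 0.05/i, and rejection stops at the first failure. A minimal sketch, assuming the z-scores as input:

```python
# Minimal sketch of the Holm step-down procedure behind the tables above.
from scipy.stats import norm

def holm_step_down(z_by_method, alpha=0.05):
    """z_by_method: dict mapping method name -> Friedman z statistic."""
    items = sorted(z_by_method.items(), key=lambda kv: -kv[1])  # largest z first
    k = len(items)
    still_rejecting = True
    results = []
    for step, (name, z) in enumerate(items):
        p = 2 * norm.sf(z)                # two-sided p-value from the normal tail
        threshold = alpha / (k - step)    # Holm thresholds alpha/k, alpha/(k-1), ..., alpha
        still_rejecting = still_rejecting and (p < threshold)
        results.append((name, z, p, threshold,
                        "Rejected" if still_rejecting else "Not Rejected"))
    return results

# Hypothetical usage with three of the NMI z-scores reported above:
for row in holm_step_down({"K-means": 6.97137, "MVSpec": 2.065591, "WV-Co-FCM": 1.807392}):
    print(row)
```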