ICIRD: Information-Principled Deep Clustering for Invariant, Redundancy-Reduced and Discriminative Cluster Distributions
Abstract
1. Introduction
- A unified information-principled deep clustering framework, termed ICIRD, is proposed. It systematically imposes information constraints on the cluster probability distributions under data augmentation scenarios, covering discriminability, redundancy reduction, and cross-view invariance.
- Three complementary information-principled modules are designed, each acting on the cluster probability distributions. The DDS module (Section 4.2) minimizes conditional entropy to enhance assignment certainty and sharpness. The MIDR module (Section 4.3) suppresses redundancy by minimizing inter-cluster mutual information within each view. The CIDC module (Section 4.4) maximizes cross-view mutual information to preserve the semantic structural stability of cluster assignments. A minimal illustrative sketch of these three quantities follows this list.
- Extensive experiments conducted on multiple benchmark datasets demonstrate that ICIRD achieves superior clustering performance compared with state-of-the-art methods. In addition, an analysis of cross-view IDR further shows that the view alignment strategy can partially alleviate the structural distortion caused by cross-view IDR.
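For concreteness, the sketch below shows how the three quantities constrained by DDS, MIDR and CIDC could be estimated from batch-level cluster probability matrices. This is our own illustrative code, not the authors' implementation: the function names and the exact estimator forms (an off-diagonal co-activation surrogate for inter-cluster redundancy, an IIC-style joint for cross-view mutual information) are assumptions.

```python
# Hypothetical sketch (not the authors' released code) of batch-level estimates
# of the quantities that the DDS, MIDR and CIDC modules constrain.
import torch
import torch.nn.functional as F

def dds_conditional_entropy(p):
    """DDS-style term: mean conditional entropy H(C|X) of per-sample cluster
    probabilities p (shape [N, K]); lower values mean sharper assignments."""
    return -(p * torch.log(p.clamp_min(1e-12))).sum(dim=1).mean()

def midr_intercluster_redundancy(p):
    """MIDR-style term: off-diagonal mass of the normalized cluster-cluster
    co-activation matrix within one view, used here as a redundancy surrogate."""
    c = p.t() @ p                      # [K, K] co-activation between cluster columns
    c = c / c.sum()                    # normalize to a joint-like distribution
    off_diag = c - torch.diag(torch.diag(c))
    return off_diag.sum()

def cidc_cross_view_mi(p_a, p_b):
    """CIDC-style term: mutual information of the empirical joint cluster
    distribution built from two augmented views, to be maximized."""
    joint = (p_a.t() @ p_b) / p_a.size(0)    # [K, K] joint over cluster pairs
    joint = (joint + joint.t()) / 2          # symmetrize across views
    pi, pj = joint.sum(1, keepdim=True), joint.sum(0, keepdim=True)
    return (joint * (torch.log(joint.clamp_min(1e-12))
                     - torch.log(pi.clamp_min(1e-12))
                     - torch.log(pj.clamp_min(1e-12)))).sum()

# Toy usage with random soft assignments for two views of the same batch.
p_weak   = F.softmax(torch.randn(256, 10), dim=1)
p_strong = F.softmax(torch.randn(256, 10), dim=1)
print(dds_conditional_entropy(p_weak),
      midr_intercluster_redundancy(p_weak),
      cidc_cross_view_mi(p_weak, p_strong))
```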
2. Related Works
2.1. Deep Clustering
2.2. Mutual Information for Deep Clustering
2.3. Deep Contrastive Clustering
3. Preliminaries
3.1. Information-Theoretic Preliminaries
3.2. Problem Definition
4. Proposed Method
4.1. Methodological Origins and Contributions
4.2. Discriminative Distribution Sharpness Module
4.3. Multi-View Inter-Cluster Distribution Redundancy Reduction Module
4.4. Cross-View Instance Distribution Consistency Module
4.5. Contrastive Representation Anchoring Module
4.6. Theoretical Interpretation of Objectives
4.7. Information-Principled Objective Formulation
| Algorithm 1: ICIRD |
| Input: dataset X; training epochs E; batch size B; temperature; cluster number K; balance hyper-parameters; strong augmentation; weak augmentation T; neural network. |
| Output: clustering result. |
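As a rough illustration of how the inputs and outputs listed in Algorithm 1 fit together, the sketch below outlines one plausible training loop. The model interface (returning a representation z and cluster probabilities p), the optimizer, the learning rate, and the `icird_loss` callable that combines the four module losses with balance weights are all our assumptions rather than the authors' released code.

```python
# Hypothetical training-loop sketch following the structure of Algorithm 1.
import torch

def train_icird(model, loader, weak_aug, strong_aug, icird_loss,
                epochs=1000, lr=3e-4, device="cuda"):
    """Illustrative loop: `model` maps a batch to (representation z, cluster probs p);
    `icird_loss` is assumed to combine the DDS, MIDR, CIDC and CRA terms with
    balance weights (e.g., the estimators sketched after the contribution list)."""
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:
            x = x.to(device)
            x_w, x_s = weak_aug(x), strong_aug(x)     # two augmented views per batch
            z_w, p_w = model(x_w)
            z_s, p_s = model(x_s)
            loss = icird_loss(z_w, z_s, p_w, p_s)     # weighted sum of the four objectives
            opt.zero_grad()
            loss.backward()
            opt.step()
    # Clustering result: argmax over the learned cluster probability distribution.
    model.eval()
    with torch.no_grad():
        preds = [model(x.to(device))[1].argmax(dim=1).cpu() for x in loader]
    return torch.cat(preds)
```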
5. Experiments
5.1. Experiment Setting
5.1.1. Datasets
5.1.2. Evaluation Metrics
5.1.3. Implementation Details
5.1.4. Comparison Methods
5.2. Quantitative Analysis of Clustering Results
5.3. Clustering on Fine-Grained Dataset
5.4. Qualitative Analysis of Clustering Results
5.4.1. Confusion Matrices
5.4.2. Case Studies
5.5. Ablation Studies
5.5.1. Effectiveness Analysis
5.5.2. Convergence Analysis
5.6. Discussion of Cross-View IDR
5.7. Analysis of Hyper-Parameters
5.7.1. Analysis of Representation Dimension
5.7.2. Analysis of Cluster Number
5.7.3. Analysis of Balance Parameters
6. Illustrative Applications and Scalability of ICIRD
6.1. Extending ICIRD to Non-Image Modalities
6.2. Practical Application of ICIRD in Downstream Tasks
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| ICIRD | Information-Principled Deep Clustering for Invariant, Redundancy-Reduced and Discriminative Cluster Distributions |
| DDS | Discriminative Distribution Sharpness |
| MIDR | Multi-View Inter-Cluster Distribution Redundancy Reduction |
| CIDC | Cross-View Instance Distribution Consistency |
| CRA | Contrastive Representation Anchoring |
| MI | Mutual Information |
| DPI | Data Processing Inequality |
| IDR | Inter-cluster Distribution Redundancy |
| KL | Kullback–Leibler Divergence |
References
- Ren, Y.; Pu, J.; Yang, Z.; Xu, J.; Li, G.; Pu, X.; Yu, P.S.; He, L. Deep Clustering: A Comprehensive Survey. arXiv 2022, arXiv:2212.07473. [Google Scholar] [CrossRef]
- Min, E.; Guo, X.; Liu, Q.; Zhang, G.; Cui, J.; Long, J. A Survey of Clustering with Deep Learning: From the Perspective of Network Architecture. IEEE Access 2018, 6, 39501–39514. [Google Scholar] [CrossRef]
- Aljalbout, E.; Golkov, V.; Siddiqui, Y.; Strobel, M.; Cremers, D. Clustering with Deep Learning: Taxonomy and New Methods. arXiv 2018, arXiv:1801.07648. [Google Scholar] [CrossRef]
- Ohl, L.; Mattei, P.A.; Precioso, F. A tutorial on discriminative clustering and mutual information. arXiv 2025, arXiv:2505.04484. [Google Scholar] [CrossRef]
- Zhou, S.; Xu, H.; Zheng, Z.; Chen, J.; Li, Z.; Bu, J.; Wu, J.; Wang, X.; Zhu, W.; Ester, M. A Comprehensive Survey on Deep Clustering: Taxonomy, Challenges, and Future Directions. arXiv 2024, arXiv:2206.07579. [Google Scholar] [CrossRef]
- Xie, J.; Girshick, R.; Farhadi, A. Unsupervised Deep Embedding for Clustering Analysis. In Proceedings of the 33rd International Conference on Machine Learning (ICML), New York, NY, USA, 20–22 June 2016. [Google Scholar]
- Yang, J.; Parikh, D.; Batra, D. Joint unsupervised learning of deep representations and image clusters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5147–5156. [Google Scholar]
- Caron, M.; Bojanowski, P.; Joulin, A.; Douze, M. Deep Clustering for Unsupervised Learning of Visual Features. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Ji, X.; Henriques, J.F.; Vedaldi, A. Invariant Information Clustering for Unsupervised Image Classification and Segmentation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9865–9874. [Google Scholar] [CrossRef]
- Parulekar, A.; Collins, L.; Shanmugam, K.; Mokhtari, A.; Shakkottai, S. InfoNCE Loss Provably Learns Cluster-Preserving Representations. In Proceedings of the Thirty Sixth Conference on Learning Theory, Bangalore, India, 12–15 July 2023; Proceedings of Machine Learning Research. Volume 195, pp. 1914–1961. [Google Scholar]
- He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum Contrast for Unsupervised Visual Representation Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A Simple Framework for Contrastive Learning of Visual Representations. In Proceedings of the 37th International Conference on Machine Learning (ICML), Virtual, 13–18 July 2020. [Google Scholar]
- Li, Y.; Wang, L.; Wang, Y.; Liu, T.; Zhang, L. Contrastive Clustering. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Virtually, 2–9 February 2021. [Google Scholar]
- Liu, Y.; Tu, W.; Zhou, S.; Liu, X.; Song, L.; Yang, X.; Zhu, E. Deep Graph Clustering via Dual Correlation Reduction. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Online, 22 February–1 March 2022. [Google Scholar]
- Huang, D.; Chen, D.H.; Chen, X.; Wang, C.D.; Lai, J.H. Deepclue: Enhanced deep clustering via multi-layer ensembles in neural networks. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 1582–1594. [Google Scholar] [CrossRef]
- Huang, D.; Deng, X.; Chen, D.H.; Wen, Z.; Sun, W.; Wang, C.D.; Lai, J.H. Deep clustering with hybrid-grained contrastive and discriminative learning. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 9472–9483. [Google Scholar] [CrossRef]
- Zhao, Y.; Bai, L. Contrastive clustering with a graph consistency constraint. Pattern Recognit. 2024, 146, 110032. [Google Scholar] [CrossRef]
- Chang, J.; Wang, L.; Meng, G.; Xiang, S.; Pan, C. Deep adaptive image clustering. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 5879–5887. [Google Scholar]
- Nassar, I.; Karlinsky, L.; Feris, R.; Tay, Y.; Padhy, S.; Noy, A.; Zhang, L.; Elhoseiny, M.; Tsai, Y.H.; Nevatia, R. ProtoCon: Pseudo-Label Refinement via Online Clustering and Prototypical Consistency. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
- Nguyen, X.; Wainwright, M.J.; Jordan, M.I. Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization. IEEE Trans. Inf. Theory 2010, 56, 5847–5861. [Google Scholar] [CrossRef]
- Belghazi, M.I.; Baratin, A.; Rajeshwar, S.; Ozair, S.; Bengio, Y.; Courville, A.; Hjelm, D. Mutual Information Neural Estimation. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018. [Google Scholar]
- Hjelm, R.D.; Fedorov, A.; Lavoie-Marchildon, S.; Grewal, K.; Bachman, P.; Trischler, A.; Bengio, Y. Learning Deep Representations by Mutual Information Estimation and Maximization. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Hu, W.; Miyato, T.; Tokui, S.; Matsumoto, E.; Sugiyama, M. Learning Discrete Representations via Information Maximizing Self-Augmented Training. In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia, 6–11 August 2017; pp. 1558–1567. [Google Scholar]
- Wu, J.; Long, K.; Wang, F.; Qian, C.; Li, C.; Lin, Z.; Zha, H. Deep comprehensive correlation mining for image clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8150–8159. [Google Scholar]
- Zhang, H.; Liu, S.; Wang, C.; Yu, X. Deep Descriptive Clustering: Unifying Representation, Interpretability, and Discriminability. In Proceedings of the International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 19–27 August 2021. [Google Scholar]
- Zhang, L.; Li, H.; Chen, Q.; Zhang, W. Mutual Information-Driven Multi-View Clustering. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023. [Google Scholar]
- Yan, X.; Jin, Z.; Han, F.; Ye, Y. Differentiable information bottleneck for deterministic multi-view clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 27435–27444. [Google Scholar]
- Lou, Z.; Zhang, K.; Wu, Y.; Hu, S. Super Deep Contrastive Information Bottleneck for Multi-modal Clustering. In Proceedings of the Forty-Second International Conference on Machine Learning, Vancouver, BC, Canada, 13–19 July 2025. [Google Scholar]
- Caron, M.; Misra, I.; Mairal, J.; Goyal, P.; Bojanowski, P.; Joulin, A. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments. In Proceedings of the 34th International Conference on Neural Information Processing Systems (NeurIPS), Online, 6–12 December 2020. [Google Scholar]
- Li, J.; Zhou, P.; Xiong, C.; Hoi, S.C.H. Prototypical Contrastive Learning of Unsupervised Representations. In Proceedings of the Ninth International Conference on Learning Representations: ICLR 2021, Vienna, Austria, 4–8 May 2021. [Google Scholar]
- Chuang, C.; Robinson, J.; Lin, Y.; Torralba, A.; Jegelka, S. Debiased Contrastive Learning. In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Online, 6–12 December 2020. [Google Scholar]
- Van Gansbeke, W.; Vandenhende, S.; Georgoulis, S.; Proesmans, M.; Van Gool, L. Learning to Classify Images without Labels. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020. [Google Scholar]
- Niu, C.; Shan, H.; Wang, G. SPICE: Semantic Pseudo-Labeling for Image Clustering. IEEE Trans. Image Process. 2022, 31, 7172–7185. [Google Scholar] [CrossRef]
- Zhong, H.; Wu, J.; Chen, C.; Huang, J.; Deng, M.; Nie, L.; Lin, Z.; Hua, X.-S. Graph Contrastive Clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual, 11–17 October 2021. [Google Scholar]
- Deng, X.; Huang, D.; Chen, D.H.; Wang, C.D.; Lai, J.H. Strongly augmented contrastive clustering. Pattern Recognit. 2023, 139, 109470. [Google Scholar] [CrossRef]
- Xu, Y.; Huang, D.; Wang, C.D.; Lai, J.H. Deep image clustering with contrastive learning and multi-scale graph convolutional networks. Pattern Recognit. 2024, 146, 110065. [Google Scholar] [CrossRef]
- Znalezniak, M.; Rola, P.; Kaszuba, P.; Tabor, J.; Śmieja, M. Contrastive hierarchical clustering. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Turin, Italy, 18–22 September 2023; pp. 627–643. [Google Scholar]
- Gray, R.M. Entropy and Information Theory; Springer Science & Business Media: New York, NY, USA, 2011. [Google Scholar]
- Tishby, N.; Pereira, F.C.; Bialek, W. The Information Bottleneck Method. arXiv 2000, arXiv:physics/0004057. [Google Scholar]
- Grandvalet, Y.; Bengio, Y. Semi-supervised Learning by Entropy Minimization. In Proceedings of the 18th International Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada, 13–18 December 2004. [Google Scholar]
- Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images. 2009. Available online: https://bibbase.org/network/publication/krizhevsky-hinton-learningmultiplelayersoffeaturesfromtinyimages-2009 (accessed on 3 November 2025).
- Coates, A.; Ng, A.; Lee, H. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 11–13 April 2011; JMLR Workshop and Conference Proceedings. pp. 215–223. [Google Scholar]
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Strehl, A.; Ghosh, J. Cluster Ensembles—A Knowledge Reuse Framework for Combining Multiple Partitions. J. Mach. Learn. Res. 2002, 3, 583–617. [Google Scholar]
- Hubert, L.; Arabie, P. Comparing Partitions. J. Classif. 1985, 2, 193–218. [Google Scholar] [CrossRef]
- Wang, X.; Qi, G.J. Contrastive learning with stronger augmentations. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5549–5560. [Google Scholar] [CrossRef]
- Kingma, D.P.; Welling, M. Auto-encoding variational bayes. arXiv 2013, arXiv:1312.6114. [Google Scholar]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
- Wu, Z.; Xiong, Y.; Yu, S.X.; Lin, D. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3733–3742. [Google Scholar]
- Huang, J.; Gong, S.; Zhu, X. Deep semantic clustering by partition confidence maximisation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 8849–8858. [Google Scholar]
- Zhong, H.; Chen, C.; Jin, Z.; Hua, X.S. Deep robust clustering by contrastive learning. arXiv 2020, arXiv:2008.03030. [Google Scholar] [CrossRef]
- Tao, Y.; Takagi, K.; Nakata, K. Clustering-friendly representation learning via instance discrimination and feature decorrelation. arXiv 2021, arXiv:2106.00131. [Google Scholar] [CrossRef]
- Dang, Z.; Deng, C.; Yang, X.; Huang, H. Doubly contrastive deep clustering. arXiv 2021, arXiv:2103.05484. [Google Scholar] [CrossRef]
- Zhang, F.; Li, L.; Hua, Q.; Dong, C.R.; Lim, B.H. Improved deep clustering model based on semantic consistency for image clustering. Knowl.-Based Syst. 2022, 253, 109507. [Google Scholar] [CrossRef]
- Khosla, A.; Jayadevaprakash, N.; Yao, B.; Fei-Fei, L. Novel Dataset for Fine-Grained Image Categorization. In Proceedings of the First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011. [Google Scholar]
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the NAACL, Minneapolis, MN, USA, 2–7 June 2019. [Google Scholar]
- He, P.; Liu, X.; Gao, J.; Chen, W. DeBERTa: Decoding-enhanced BERT with Disentangled Attention. In Proceedings of the ICLR, Virtual Event, Austria, 3–7 May 2021. [Google Scholar]
- Guedes, G.B.; da Silva, A.A.E. Classification and Clustering of Sentence-Level Embeddings of Scientific Articles Generated by Contrastive Learning. arXiv 2024, arXiv:2404.00224. [Google Scholar] [CrossRef]
- Gao, T.; Yao, X.; Chen, D. SimCSE: Simple Contrastive Learning of Sentence Embeddings. In Proceedings of the EMNLP, Virtual Event/Punta Cana, Dominican Republic, 7–11 November 2021. [Google Scholar]
- Yan, Y.; Zhang, Y.; Lin, X.; Li, X. CoCLR-Text: Contrastive Cross-View Learning for Text Clustering. In Proceedings of the Findings of ACL, Dublin, Ireland, 22–27 May 2022. [Google Scholar]
- Huang, X.; Khetan, A.; Cvitkovic, M.; Bansal, V. TabTransformer: Tabular Data Modeling Using Contextual Embeddings. In Proceedings of the NeurIPS, Virtual, 6–12 December 2020. [Google Scholar]
- Gorishniy, Y.; Rubachev, I.; Khrulkov, V.; Babenko, A. Revisiting Deep Learning Models for Tabular Data. In Proceedings of the NeurIPS, Virtual, 6–14 December 2021. [Google Scholar]
- Ucar, T.; Artelt, A.; Hammer, B. Self-Supervised Learning for Tabular Data via Masked Feature Reconstruction. In Proceedings of the ICML, Honolulu, HI, USA, 23–29 July 2023. [Google Scholar]
- Fini, E.; Lathuilière, S.; Sangineto, E.; Zhong, Z.; Nabi, M.; Sebe, N.; Ricci, E. A Unified Objective for Novel Class Discovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual, 11–17 October 2021. [Google Scholar]
- Vaze, S.; Han, K.; Vedaldi, A.; Zisserman, A. Generalized Category Discovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning Transferable Visual Models From Natural Language Supervision. In Proceedings of the ICML, Virtual, 18–24 July 2021. [Google Scholar]
- Qiu, L.; Zhang, Q.; Chen, X.; Cai, S. Multi-Level Cross-Modal Alignment for Image Clustering. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vancouver, BC, Canada, 26–27 February 2024; Volume 38, pp. 14695–14703. [Google Scholar] [CrossRef]
| Dataset | Classes | Number of Images | Image Size |
|---|---|---|---|
| CIFAR-10 | 10 | 60,000 | 32 × 32 × 3 |
| CIFAR-100 | 20/100 | 60,000 | 32 × 32 × 3 |
| STL-10 | 10 | 13,000 | 96 × 96 × 3 |
| ImageNet-10 | 10 | 13,000 | 224 × 224 × 3 |
| ImageNet-Dogs | 15 | 19,500 | 224 × 224 × 3 |
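The tables that follow report ACC, NMI, and ARI. For reference, below is a minimal sketch of the standard evaluation protocol: Hungarian-matched clustering accuracy, plus the NMI and ARI of refs. [44,45] computed via scikit-learn. This is our illustration of the conventional computation, not code taken from the paper.

```python
# Conventional clustering metrics (assumed standard protocol, not the paper's code).
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """Hungarian-matched clustering accuracy: best one-to-one cluster-to-class map."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                              # count (cluster, class) co-occurrences
    rows, cols = linear_sum_assignment(-cost)        # maximize matched counts
    return cost[rows, cols].sum() / y_true.size

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])                # permuted but perfect clustering
print(clustering_accuracy(y_true, y_pred),           # 1.0
      normalized_mutual_info_score(y_true, y_pred),  # 1.0
      adjusted_rand_score(y_true, y_pred))           # 1.0
```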
| Datasets | CIFAR-10 | | | CIFAR-100 | | | STL-10 | | | ImageNet-10 | | | ImageNet-Dogs | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Metric | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI |
| VAE [47] | 0.291 | 0.245 | 0.167 | 0.152 | 0.108 | 0.040 | 0.282 | 0.200 | 0.146 | 0.334 | 0.193 | 0.168 | 0.179 | 0.107 | 0.079 |
| JULE [7] | 0.272 | 0.192 | 0.138 | 0.137 | 0.103 | 0.033 | 0.277 | 0.182 | 0.164 | 0.300 | 0.175 | 0.138 | 0.138 | 0.054 | 0.028 |
| DCGAN [48] | 0.315 | 0.265 | 0.176 | 0.151 | 0.120 | 0.045 | 0.298 | 0.210 | 0.139 | 0.346 | 0.225 | 0.157 | 0.174 | 0.121 | 0.078 |
| DEC [6] | 0.301 | 0.257 | 0.161 | 0.185 | 0.136 | 0.050 | 0.359 | 0.276 | 0.186 | 0.381 | 0.282 | 0.203 | 0.195 | 0.122 | 0.079 |
| DAC [18] | 0.522 | 0.396 | 0.306 | 0.238 | 0.185 | 0.088 | 0.470 | 0.366 | 0.257 | 0.527 | 0.394 | 0.302 | 0.275 | 0.219 | 0.111 |
| ID [49] | 0.440 | 0.309 | 0.221 | 0.267 | 0.221 | 0.108 | 0.514 | 0.362 | 0.285 | 0.632 | 0.478 | 0.420 | 0.365 | 0.248 | 0.172 |
| DCCM [24] | 0.623 | 0.496 | 0.408 | 0.327 | 0.285 | 0.173 | 0.482 | 0.376 | 0.262 | 0.710 | 0.608 | 0.555 | 0.383 | 0.321 | 0.182 |
| PICA [50] | 0.696 | 0.591 | 0.512 | 0.337 | 0.310 | 0.171 | 0.713 | 0.611 | 0.531 | 0.870 | 0.802 | 0.761 | 0.352 | 0.352 | 0.201 |
| DRC [51] | 0.727 | 0.621 | 0.547 | 0.367 | 0.356 | 0.208 | 0.747 | 0.644 | 0.569 | 0.884 | 0.830 | 0.798 | 0.389 | 0.384 | 0.233 |
| IDFD [52] | 0.815 | 0.711 | 0.663 | 0.425 | 0.426 | 0.264 | 0.756 | 0.643 | 0.575 | 0.954 | 0.898 | 0.901 | 0.591 | 0.546 | 0.413 |
| CC [13] | 0.790 | 0.705 | 0.637 | 0.429 | 0.431 | 0.266 | 0.850 | 0.764 | 0.726 | 0.893 | 0.859 | 0.822 | 0.429 | 0.445 | 0.274 |
| DCDC [53] | 0.699 | 0.585 | 0.506 | 0.349 | 0.310 | 0.179 | 0.734 | 0.621 | 0.547 | 0.879 | 0.817 | 0.787 | 0.365 | 0.360 | 0.207 |
| DCSC [54] | 0.798 | 0.704 | 0.644 | 0.469 | 0.452 | 0.293 | 0.865 | 0.792 | 0.749 | 0.904 | 0.867 | 0.838 | 0.443 | 0.462 | 0.299 |
| SACC [35] | 0.851 | 0.765 | 0.724 | 0.443 | 0.448 | 0.282 | 0.759 | 0.691 | 0.626 | 0.905 | 0.877 | 0.843 | 0.437 | 0.455 | 0.285 |
| DeepCluE [15] | 0.764 | 0.727 | 0.646 | 0.457 | 0.472 | 0.288 | - | - | - | 0.924 | 0.882 | 0.856 | 0.416 | 0.448 | 0.273 |
| IcicleGCN [36] | 0.807 | 0.729 | 0.660 | 0.461 | 0.459 | 0.311 | - | - | - | 0.955 | 0.904 | 0.905 | 0.415 | 0.456 | 0.279 |
| DHCL [16] | 0.801 | 0.710 | 0.654 | 0.446 | 0.432 | 0.275 | 0.821 | 0.726 | 0.680 | - | - | - | 0.511 | 0.495 | 0.359 |
| CoHiClust [37] | 0.839 | 0.779 | 0.731 | 0.437 | 0.467 | 0.229 | 0.613 | 0.584 | 0.474 | 0.953 | 0.907 | 0.899 | 0.355 | 0.411 | 0.232 |
| CCGCC [17] | 0.864 | 0.778 | 0.742 | 0.482 | 0.486 | 0.316 | 0.779 | 0.698 | 0.645 | 0.904 | 0.859 | 0.833 | 0.579 | 0.568 | 0.449 |
| ICIRD | 0.877 | 0.801 | 0.750 | 0.504 | 0.496 | 0.333 | 0.887 | 0.774 | 0.737 | 0.945 | 0.893 | 0.906 | 0.601 | 0.571 | 0.464 |
| Method | ACC | NMI | ARI |
|---|---|---|---|
| DCSC | 0.097 | 0.259 | 0.034 |
| IDFD | 0.132 | 0.331 | 0.062 |
| CCGCC | 0.140 | 0.341 | 0.059 |
| IcicleGCN | 0.082 | 0.235 | 0.035 |
| DHCL | 0.105 | 0.316 | 0.064 |
| ICIRD | 0.176 | 0.366 | 0.101 |
| Datasets | CIFAR-10 | | | STL-10 | | | Stanford-Dogs | | |
|---|---|---|---|---|---|---|---|---|---|
| Metric | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI |
| | 0.877 | 0.801 | 0.750 | 0.887 | 0.774 | 0.737 | 0.176 | 0.366 | 0.101 |
| | 0.886 | 0.815 | 0.753 | 0.894 | 0.765 | 0.721 | 0.194 | 0.391 | 0.109 |
| | 0.852 | 0.768 | 0.725 | 0.864 | 0.747 | 0.708 | 0.139 | 0.317 | 0.083 |