Information-Theoretic Principles for Advanced Clustering and Structured Representation Learning
A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".
Deadline for manuscript submissions: 31 July 2026
Special Issue Editors
Interests: clustering algorithms; unsupervised learning; deep learning; multi-view representation learning
Interests: pattern recognition; image processing; deep learning; cluster analysis
Interests: artificial intelligence; medical big data; multimodal machine learning
Special Issue Information
Dear Colleagues,
Clustering is a fundamental approach in unsupervised learning and plays a pivotal role in data mining, pattern recognition, and information theory. As real-world datasets grow in complexity, scale, and heterogeneity across engineering, biomedical, and social domains, existing clustering algorithms face mounting challenges. Traditional methods often rely on fixed assumptions about data structure, which limits generalization and risks convergence to local optima when those assumptions are violated; high-dimensional data further exacerbate these issues through the curse of dimensionality. Moreover, many algorithms require extensive parameter tuning, reducing usability and robustness across diverse applications. Recent progress in parameter-free formulations, exemplified by Torque Clustering, suggests that minimizing manual hyperparameters can improve stability and interpretability, especially when combined with information-theoretic criteria such as entropy, mutual information, the information bottleneck, and minimum description length. Although deep clustering offers new possibilities, it still faces gaps in transparency and stability; similar concerns arise in clustering-adjacent tasks such as denoising and feature extraction. In parallel, fast and scalable clustering is increasingly important for real-time and large-scale scenarios, while emerging application areas, from bioinformatics to complex engineering systems, continue to expand the scope and demands of clustering research. Integrating clustering with structured representation learning provides a promising pathway to uncover latent structures and improve downstream tasks.
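As a purely illustrative example of the information-theoretic criteria mentioned above, the minimal sketch below computes the mutual information between a cluster assignment and a set of reference labels using only NumPy. The function names and toy arrays are assumptions for illustration and are not tied to any particular method solicited in this Special Issue.

```python
# Minimal sketch, assuming NumPy only: mutual information between a clustering
# and reference labels. Names and toy data are illustrative, not prescriptive.
import numpy as np

def entropy(labels):
    """Shannon entropy H(X) of a discrete labeling, in nats."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def mutual_information(x, y):
    """I(X; Y) computed from the joint distribution of two discrete labelings."""
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1.0)          # contingency table of co-occurrences
    pxy = joint / joint.sum()                # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)      # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)      # marginal p(y)
    nz = pxy > 0                             # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy usage: compare a clustering against reference labels.
clusters = np.array([0, 0, 1, 1, 2, 2])
labels = np.array([0, 0, 1, 1, 1, 1])
nmi = mutual_information(clusters, labels) / np.sqrt(entropy(clusters) * entropy(labels))
print(f"NMI = {nmi:.3f}")
```

Normalizing the mutual information by the geometric mean of the two entropies, as in the last line, yields the familiar normalized mutual information (NMI) score that is commonly used for cluster validation.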
This Special Issue aims to gather original research and reviews on advanced clustering theory, algorithms, and applications. Topics of interest include, but are not limited to, the following:
- Development of robust, adaptive, and model-free clustering algorithms;
- Clustering methods for high-dimensional, sparse, or noisy data;
- Parameter-free or self-tuning clustering frameworks;
- Advances in deep clustering: architectures, loss functions, and interpretability;
- Clustering-related denoising and data preprocessing techniques;
- Fast and scalable clustering algorithms for large-scale or streaming data;
- Hybrid clustering models combining optimization, heuristics, or ensemble learning;
- Novel metrics for clustering evaluation and validation;
- Real-world applications of clustering in engineering, medical diagnostics, natural sciences, and social systems;
- Information-theoretic approaches to clustering and complexity analysis.
Dr. Jie Yang
Dr. Yan Ma
Dr. Liang Zhao
Guest Editors
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- advanced clustering theory
- automatic clustering
- clustering algorithms
- information-theoretic clustering
- structured representation learning
- parameter-free and self-tuning methods
- unsupervised and semi-supervised learning
- manifold learning and dimensionality reduction
- outlier and anomaly detection
- data analysis
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.
Further information on MDPI's Special Issue policies can be found here.


