
Information-Theoretic Approaches for Machine Learning and AI

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 10 December 2025

Special Issue Editors


Guest Editor
School of Cyber Science and Engineering, Southeast University, Nanjing 210018, China
Interests: coded distributed computation; privacy-preserving and trustworthy machine learning; blockchain security and scalability

Guest Editor
Department of Computer Science, City University of Hong Kong, Hong Kong, China
Interests: information theory; machine learning; recommender systems; algorithms; data science

Special Issue Information

Dear Colleagues,

With the rapid development of artificial intelligence (AI) technology, especially large language models, the ways in which information is acquired, processed, and transmitted are undergoing revolutionary changes. In this context, Shannon entropy and information theory, as fundamental theories for understanding and measuring information, play a crucial role.

As the complexity of deep learning models continues to increase, their internal mechanisms often become a “black box”, posing challenges to the credibility and application of these models. By introducing methods from information theory, we can explore how to quantify the uncertainty and information flow within models, thereby revealing their decision-making processes. This not only aids in understanding the internal workings of the models but also provides effective guidance for model optimization and downstream tasks, such as multimodal compression and knowledge editing. Meanwhile, quantum entropy and quantum information theory offer entirely new perspectives and tools, which promise to advance the frontiers of AI in computational capability, algorithm design, and secure communication. Coding theory also plays a critical role in machine learning by improving the efficiency, privacy, and security of data processing through information encoding and error correction.

The aim of this Special Issue is to attract research that addresses, from an information-theoretic perspective, current challenges in the theory and applications of machine learning. Prospective authors are invited to submit original contributions that leverage information theory and quantum information theory to solve problems in areas including (but not limited to) the following:

  • Model interpretability;
  • Reinforcement learning;
  • Data compression and semantic communication;
  • Federated learning;
  • Large language models;
  • Optimization;
  • Sustainable AI;
  • Security and privacy;
  • Unbiasedness and fairness in AI.

Prof. Dr. Songze Li
Prof. Dr. Linqi Song
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information theory
  • coding theory
  • data compression
  • quantum computing
  • semantic information theory
  • statistical learning theory
  • reinforcement learning
  • large language models
  • federated learning
  • security and privacy
  • unbiasedness and fairness

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

24 pages, 2171 KiB  
Article
Cost-Efficient Distributed Learning via Combinatorial Multi-Armed Bandits
by Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh and Deniz Gündüz
Entropy 2025, 27(5), 541; https://doi.org/10.3390/e27050541 - 20 May 2025
Abstract
We consider the distributed stochastic gradient descent problem, where a main node distributes gradient calculations among n workers. By assigning tasks to all workers and waiting only for the k fastest ones, the main node can trade off the algorithm’s error with its runtime by gradually increasing k as the algorithm evolves. However, this strategy, referred to as adaptive k-sync, neglects the cost of unused computations and of communicating models to workers that reveal a straggling behavior. We propose a cost-efficient scheme that assigns tasks only to k workers, and gradually increases k. To learn which workers are the fastest while assigning gradient calculations, we introduce the use of a combinatorial multi-armed bandit model. Assuming workers have exponentially distributed response times with different means, we provide both empirical and theoretical guarantees on the regret of our strategy, i.e., the extra time spent learning the mean response times of the workers. Furthermore, we propose and analyze a strategy that is applicable to a large class of response time distributions. Compared to adaptive k-sync, our scheme achieves significantly lower errors with the same computational efforts and less downlink communication while being inferior in terms of speed.
(This article belongs to the Special Issue Information-Theoretic Approaches for Machine Learning and AI)
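The trade-off described in the abstract can be sketched in a few lines: adaptive k-sync pays for n gradient computations per round but finishes at the k-th order statistic of the workers' response times, while the cost-efficient scheme pays for only k computations and waits for all of them. The sketch below uses a simple greedy empirical-mean selection as a stand-in for the paper's combinatorial bandit policy; all function and parameter names are illustrative, not taken from the paper.

```python
import random

def simulate(rates, k, rounds=2000, seed=0):
    """Compare adaptive k-sync (assign to all n workers, wait for the
    k fastest) with a cost-efficient scheme that assigns to only k
    workers chosen by empirical mean response time.  Workers have
    exponential response times with the given rates (rate = 1/mean)."""
    rng = random.Random(seed)
    n = len(rates)
    sync_time = 0.0          # total runtime of adaptive k-sync
    ce_time = 0.0            # total runtime of the cost-efficient scheme
    est = [0.0] * n          # empirical mean response time per worker
    cnt = [0] * n            # number of tasks assigned to each worker
    for _ in range(rounds):
        # Adaptive k-sync: all n workers compute; the round ends at the
        # k-th order statistic, but n gradients are paid for.
        draws = [rng.expovariate(r) for r in rates]
        sync_time += sorted(draws)[k - 1]
        # Cost-efficient: assign to k workers (unexplored workers first,
        # then smallest estimated mean) and wait for all of them.
        order = sorted(range(n), key=lambda i: (cnt[i] > 0, est[i]))
        chosen = order[:k]
        round_draws = []
        for i in chosen:
            x = rng.expovariate(rates[i])
            cnt[i] += 1
            est[i] += (x - est[i]) / cnt[i]   # running mean update
            round_draws.append(x)
        ce_time += max(round_draws)
    # Computation cost: n gradients per round vs. k gradients per round.
    return sync_time, ce_time, n * rounds, k * rounds
```

With two fast and two slow workers, the greedy scheme concentrates its k assignments on the fast workers while computing only k gradients per round, matching the qualitative cost saving the abstract reports.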

21 pages, 7300 KiB  
Article
Public Opinion Propagation Prediction Model Based on Dynamic Time-Weighted Rényi Entropy and Graph Neural Network
by Qiujuan Tong, Xiaolong Xu, Jianke Zhang and Huawei Xu
Entropy 2025, 27(5), 516; https://doi.org/10.3390/e27050516 - 12 May 2025
Abstract
Current methods for public opinion propagation prediction struggle to jointly model temporal dynamics, structural complexity, and dynamic node influence in evolving social networks. To overcome these limitations, this paper proposes a public opinion dissemination prediction model based on the integration of dynamic time-weighted Rényi entropy (DTWRE) and graph neural networks. By incorporating a time-weighted mechanism, the model devises two tiers of Rényi entropy metrics—local node entropy and global time-step entropy—to effectively quantify the uncertainty and complexity of network topology at different time points. Simultaneously, by integrating DTWRE features with high-dimensional node embeddings generated by Node2Vec and utilizing GraphSAGE to construct a spatiotemporal fusion modeling framework, the model achieves precise prediction of link formation and key node identification in public opinion dissemination. The model was validated on multiple public opinion datasets, and the results indicate that, compared to baseline methods, it exhibits significant advantages in several evaluation metrics such as AUC, thereby fully demonstrating the effectiveness of the dynamic time-weighted mechanism in capturing the temporal evolution of public opinion dissemination and the dynamic changes in network structure.
(This article belongs to the Special Issue Information-Theoretic Approaches for Machine Learning and AI)
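As a rough illustration of the entropy machinery the abstract mentions, the sketch below computes the Rényi entropy of each network snapshot's degree distribution and averages across snapshots with exponential time weights, so newer snapshots count more. The weighting scheme and all names here are illustrative guesses, not the paper's DTWRE definition.

```python
import math
from collections import Counter

def renyi_entropy(probs, alpha=2.0):
    """Rényi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha);
    reduces to Shannon entropy in the limit alpha -> 1."""
    if abs(alpha - 1.0) < 1e-9:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return math.log(sum(p ** alpha for p in probs if p > 0)) / (1.0 - alpha)

def time_weighted_entropy(snapshots, alpha=2.0, decay=0.5):
    """Average the Rényi entropy of the degree distribution over network
    snapshots (each a list of (u, v) edges), with exponentially larger
    weight on more recent snapshots."""
    T = len(snapshots)
    total = 0.0
    norm = 0.0
    for t, edges in enumerate(snapshots):
        w = math.exp(-decay * (T - 1 - t))   # newest snapshot has weight 1
        deg = Counter()
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        m = sum(deg.values())
        probs = [d / m for d in deg.values()]
        total += w * renyi_entropy(probs, alpha)
        norm += w
    return total / norm
```

For a uniform distribution over four outcomes, every Rényi order gives log 4, which is a handy sanity check on the implementation.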
