
Applications of Information-Theoretic Concepts for Generative AI Systems

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 March 2026)

Special Issue Editors


Dr. Deniz Gençağa
Guest Editor
Department of Electrical and Electronics Engineering, Antalya Bilim University, Antalya 07190, Turkey
Interests: Bayesian data analysis; statistical signal processing; machine learning (for big data); information theory; source separation; computational mathematics and statistics; autonomous and intelligent systems; data mining and knowledge discovery; remote sensing; climatology; astronomy; systems biology; smart grid

Dr. Rita Singh
Guest Editor
Center for Voice Intelligence and Security, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Interests: automated discovery; measurement; representation; voice intelligence; generative AI

Special Issue Information

Dear Colleagues,

Generative Artificial Intelligence (GenAI) has rapidly transformed domains ranging from natural language processing and computer vision to computational creativity. However, its design, optimization, and evaluation present unique challenges, particularly in understanding and controlling uncertainty, bias, and interpretability. Information theory offers a rigorous mathematical framework for addressing these challenges, providing quantifiable measures such as entropy, mutual information, and transfer entropy with which to analyze, optimize, and interpret generative models.
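To make these quantities concrete, the sketch below shows minimal plug-in (histogram) estimators for entropy and mutual information computed directly from samples. The function names, binning choice, and the correlated-Gaussian check are illustrative assumptions, not code from any paper in this issue:

```python
import numpy as np

def entropy(samples, bins=32):
    """Plug-in (histogram) estimate of Shannon entropy, in nats."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(x, y, bins=32):
    """Plug-in estimate of I(X; Y) = sum_ij p_ij * log(p_ij / (p_i q_j))."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint = joint / joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return np.sum(joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz]))

# For correlated Gaussians with correlation rho, the true mutual information
# is -0.5 * log(1 - rho**2); the histogram estimate carries some binning bias.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = 0.8 * x + 0.6 * rng.standard_normal(100_000)
print(mutual_information(x, y))  # close to -0.5 * log(0.36), about 0.51
```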

This Special Issue aims to highlight contributions addressing theoretical advances, computational techniques, and practical applications of information-theoretic concepts in the development, evaluation, and deployment of generative AI systems. Topics of interest include but are not limited to information-theoretic training objectives, causal inference in generative models, rate–distortion theory for compression in AI pipelines, information bottleneck approaches, uncertainty quantification, and the use of transfer entropy for interpretability.

We invite contributions from academia and industry that span theory, algorithms, and real-world applications, fostering a multidisciplinary dialogue to advance the synergy between information theory and next-generation generative AI systems.

Dr. Deniz Gençağa
Dr. Rita Singh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information-theoretic learning
  • generative artificial intelligence
  • deep generative models
  • transformer architectures
  • variational autoencoders (VAEs)
  • generative adversarial networks (GANs)
  • diffusion probabilistic models
  • large language models (LLMs)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research

15 pages, 1217 KB  
Article
Detecting Phase Transitions from Data Using Generative Learning
by Xiyu Zhou, Yan Mi and Pan Zhang
Entropy 2026, 28(4), 406; https://doi.org/10.3390/e28040406 - 3 Apr 2026
Abstract
Identifying phase transitions in complex many-body systems traditionally necessitates the definition of specific order parameters, a task often requiring prior knowledge of the statistical model and the symmetry-breaking mechanism. In this work, we propose a framework for detecting phase transitions directly from raw (experimental) data without requiring knowledge of the underlying model Hamiltonian, parameters, or pre-defined labels. Inspired by generative modeling in machine learning, our method utilizes autoregressive networks to estimate the normalized probability distribution of the system from raw configuration data. We then quantify the intrinsic sensitivity of this learned distribution to control parameters (such as temperature) to construct a robust indicator of phase transitions. This indicator is based on the expectation of the change in absolute logarithmic probability, derived entirely from the raw data. Our approach is purely data-driven: it takes raw data across varying control parameters as input and outputs the most likely estimate of the phase transition point. To validate our approach, we conduct extensive numerical experiments on the 2D Ising model on both triangular and square lattices, and on the Sherrington–Kirkpatrick (SK) model utilizing raw data generated via Markov Chain Monte Carlo and Tensor Network methods. The results demonstrate that our generative approach accurately identifies phase transitions using only raw data. Our framework provides a general tool for exploring critical phenomena in model systems, with the potential to be extended to realistic experimental data where theoretical descriptions remain incomplete.
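As a rough illustration of the kind of indicator the abstract describes (not the authors' actual implementation), the sketch below replaces the deep autoregressive network with a first-order Markov model fitted by counting, then computes the expected absolute change in log-probability between neighboring control parameters. The Markov simplification and all names are assumptions for illustration:

```python
import numpy as np

def fit_markov_chain(samples, eps=1e-6):
    """Fit a first-order Markov model to +/-1 spin configurations: a drastic
    simplification standing in for the paper's deep autoregressive networks,
    but with the same factorization p(x) = p(x_1) * prod_i p(x_i | x_{i-1})."""
    s = ((samples + 1) // 2).astype(int)          # map {-1, +1} -> {0, 1}
    p1 = float(np.clip(s[:, 0].mean(), eps, 1 - eps))
    trans = np.full((2, 2), eps)                  # smoothed transition counts
    np.add.at(trans, (s[:, :-1].ravel(), s[:, 1:].ravel()), 1.0)
    trans /= trans.sum(axis=1, keepdims=True)

    def log_prob(x):
        t = ((x + 1) // 2).astype(int)
        first = np.log(p1) if t[0] else np.log(1 - p1)
        return first + np.log(trans[t[:-1], t[1:]]).sum()
    return log_prob

def transition_indicator(samples_by_T):
    """For consecutive control parameters T_k, T_{k+1}, estimate
    E_{x ~ data(T_k)} |log p_{k+1}(x) - log p_k(x)|; a peak over k flags the
    most likely location of the phase transition."""
    models = [fit_markov_chain(s) for s in samples_by_T]
    return np.array([
        np.mean([abs(models[k + 1](x) - models[k](x)) for x in samples_by_T[k]])
        for k in range(len(models) - 1)
    ])
```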

15 pages, 1351 KB  
Article
An Operator Analysis on Stochastic Differential Equation (SDE)-Based Diffusion Generative Models
by Yunpei Wu and Yoshinobu Kawahara
Entropy 2026, 28(3), 290; https://doi.org/10.3390/e28030290 - 4 Mar 2026
Abstract
Score-based generative models, grounded in stochastic differential equations (SDEs), excel at producing high-quality data but suffer from slow sampling due to the extensive nonlinear computations required for iterative score function evaluations. We propose an approach that integrates score-based reverse SDEs with kernel methods, leveraging the derivative reproducing property of reproducing kernel Hilbert spaces (RKHSs) to efficiently approximate the eigenfunctions and eigenvalues of the Fokker–Planck operator. This enables data generation through linear combinations of eigenfunctions, transforming computationally intensive nonlinear operations into efficient linear ones and thereby significantly reducing computational overhead. In our experiments, despite a slight reduction in sample diversity, the sampling time for a single image on the CIFAR-10 dataset is reduced to 0.29 s. This work introduces theoretical and practical tools for generative modeling, establishing a foundation for real-time applications.
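For context, the sketch below shows the standard Euler–Maruyama sampler for a reverse-time variance-preserving SDE. Each step makes a nonlinear score-function call, which is exactly the per-step cost the paper proposes to replace with linear combinations of precomputed Fokker–Planck eigenfunctions. This is baseline machinery, not the paper's kernel method; the beta schedule and names are illustrative:

```python
import numpy as np

def reverse_sde_sample(score_fn, dim, n_steps=1000,
                       beta=lambda t: 0.1 + 19.9 * t, rng=None):
    """Euler-Maruyama discretization of the reverse-time VP-SDE,
    dx = [-1/2 beta(t) x - beta(t) score(x, t)] dt + sqrt(beta(t)) dW,
    integrated from t = 1 down to t = 0 starting from the Gaussian prior."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.standard_normal(dim)
    dt = 1.0 / n_steps
    for k in range(n_steps, 0, -1):
        t = k * dt
        b = beta(t)
        drift = -0.5 * b * x - b * score_fn(x, t)  # nonlinear score call per step
        x = x - drift * dt + np.sqrt(b * dt) * rng.standard_normal(dim)
    return x

# Sanity check: if the data distribution is N(0, I), the exact score is -x at
# every noise level, and the sampler should return approximately N(0, I) draws.
print(reverse_sde_sample(lambda x, t: -x, dim=2))
```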

34 pages, 2594 KB  
Article
Variational Deep Alliance: A Generative Auto-Encoding Approach to Longitudinal Data Analysis
by Shan Feng, Wenxian Xie and Yufeng Nie
Entropy 2026, 28(1), 113; https://doi.org/10.3390/e28010113 - 18 Jan 2026
Abstract
Rapid advancements in deep learning have had a profound impact on a wide range of scientific studies. This paper harnesses the power of deep neural networks to learn complex relationships in longitudinal data. A novel generative approach, the Variational Deep Alliance (VaDA), is established, in which an "alliance" is formed across repeated measurements via the strength of the Variational Auto-Encoder. VaDA models the generating process of longitudinal data with a unified and well-structured latent space, allowing outcome prediction, subject clustering, and representation learning simultaneously. The integrated model can be inferred efficiently within a stochastic Auto-Encoding Variational Bayes framework, which is scalable to large datasets and can accommodate variables of mixed type. In quantitative comparisons with baseline methods, VaDA shows high robustness and generalization capability across various synthetic scenarios. Moreover, a longitudinal study based on the well-known CelebFaces Attributes dataset is carried out, where we show its usefulness in detecting meaningful latent clusters and generating high-quality face images.
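Since VaDA builds on the Auto-Encoding Variational Bayes framework, a minimal single-observation evidence-lower-bound (ELBO) sketch may help fix ideas. The `encode`/`decode` callables and the Bernoulli likelihood are illustrative assumptions; the paper's contribution, the shared "alliance" structure across repeated measurements, is only gestured at in the comments:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ): the regularizer in the
    Auto-Encoding Variational Bayes objective."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def elbo(x, encode, decode, rng):
    """Single-sample Monte Carlo estimate of the evidence lower bound,
    log p(x) >= E_q[log p(x|z)] - KL(q(z|x) || p(z)),
    for binary data with a Bernoulli decoder. In VaDA the encoder/decoder
    are additionally tied across a subject's repeated measurements (the
    "alliance"); here encode and decode are just illustrative callables."""
    mu, logvar = encode(x)
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)  # reparameterization trick
    x_hat = decode(z)  # Bernoulli means in (0, 1)
    recon = np.sum(x * np.log(x_hat + 1e-9)
                   + (1 - x) * np.log(1 - x_hat + 1e-9), axis=-1)
    return recon - gaussian_kl(mu, logvar)
```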
