
Advances in Federated and Collaborative Learning with Applications for Generative AI

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: 15 October 2025

Special Issue Editors


Dr. Chee Wei Tan
Guest Editor
College of Computing & Data Science, Nanyang Technological University, Singapore 639798, Singapore
Interests: networks; large-scale optimization; opinion dynamics and graph learning; generative artificial intelligence (AI); AI for health-tech

Prof. Dr. Wenyi Zhang
Guest Editor
Department of Electronic Engineering & Information Science, University of Science and Technology of China, Hefei 230027, China
Interests: wireless communications; information theory; statistical inference

Special Issue Information

Dear Colleagues,

The rapid adoption of Generative AI has introduced both exciting opportunities and critical challenges across machine learning, artificial intelligence, and software engineering. A fundamental aspect of advancing these fields is understanding entropy and perplexity, which serve as key measures of uncertainty, information complexity, and model efficiency. These information-theoretic insights are essential for developing scalable and robust AI systems, particularly in handling large-scale, distributed, and privacy-sensitive data.
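As a concrete illustration of how these two measures relate, the minimal sketch below computes the perplexity of a token sequence from its per-token log-probabilities; the function name and example values are our own, not drawn from any particular library:

```python
import math

def perplexity(log_probs):
    """Perplexity of a token sequence from per-token log-probabilities.

    Perplexity is the exponentiated average negative log-likelihood,
    i.e., exp(H) where H is the empirical cross-entropy in nats.
    """
    avg_nll = -sum(log_probs) / len(log_probs)  # cross-entropy (nats/token)
    return math.exp(avg_nll)

# A model assigning probability 0.25 to each of four tokens has
# cross-entropy ln(4) and hence perplexity 4.
print(perplexity([math.log(0.25)] * 4))  # -> 4.0
```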

Federated and collaborative learning play a central role in enabling decentralized AI training, allowing models to be trained across multiple organizations without centralizing data. However, these frameworks introduce challenges such as model synchronization, communication overhead, and data heterogeneity. Entropy-based optimization techniques, including redundancy reduction and perplexity-aware learning, can enhance efficiency by minimizing unnecessary information transfer while preserving model performance.
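One simple instance of such redundancy reduction, sketched below under the assumption of a FedAvg-style setup with hypothetical NumPy client updates, is top-k sparsification: low-magnitude coordinates carry little information about an update, so dropping them before transmission cuts communication cost with limited impact on the aggregate:

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a client update."""
    sparse = np.zeros_like(update)
    idx = np.argpartition(np.abs(update), -k)[-k:]  # indices of top-k entries
    sparse[idx] = update[idx]
    return sparse

def federated_average(client_updates, k):
    """Aggregate sparsified client updates (FedAvg-style mean)."""
    compressed = [top_k_sparsify(u, k) for u in client_updates]
    return np.mean(compressed, axis=0)

rng = np.random.default_rng(0)
updates = [rng.normal(size=1000) for _ in range(10)]  # hypothetical clients
global_step = federated_average(updates, k=100)  # transmit only 10% of entries
```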

In-context learning, an emerging capability in Generative AI, enables models to adapt to new tasks with minimal fine-tuning by leveraging contextual information within prompts. From an information-theoretic perspective, prompt compression and query optimization can be framed as rate-distortion and entropy optimization problems, reducing perplexity and enhancing retrieval efficiency. Optimizing prompt structures not only improves computational efficiency but also reduces the number of tokens required for effective reasoning in real-world applications.
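A simple heuristic in this rate-distortion spirit, sketched below with hypothetical per-token log-probabilities assumed to come from an auxiliary language model, is to drop the most predictable (lowest-surprisal) tokens first, since they carry the fewest bits of information:

```python
def compress_prompt(tokens, log_probs, budget):
    """Keep the `budget` highest-surprisal tokens, preserving order.

    Predictable tokens (high log-prob) contribute little information,
    so removing them trades a small distortion for a large reduction
    in rate (token count).
    """
    surprisal = [-lp for lp in log_probs]  # information content per token
    keep = sorted(range(len(tokens)), key=lambda i: surprisal[i],
                  reverse=True)[:budget]
    return [tokens[i] for i in sorted(keep)]

# Hypothetical log-probs from an auxiliary language model:
tokens = ["please", "summarize", "the", "attached", "quarterly", "report"]
log_probs = [-0.1, -3.2, -0.05, -2.8, -4.1, -2.5]
print(compress_prompt(tokens, log_probs, budget=4))
# -> ['summarize', 'attached', 'quarterly', 'report']
```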

To further mitigate computational bottlenecks, techniques such as low-rank approximations, structured sparsity, and quantization leverage entropy constraints to improve model compression and inference speed in transformer architectures. These methods enable the scalability of generative AI while maintaining model accuracy, ensuring efficient deployment in edge computing and distributed environments. Additionally, perplexity-aware adaptation mechanisms allow AI systems to dynamically adjust their complexity based on input uncertainty, improving robustness and generalization. Moreover, the theory of unsupervised learning with large language models shares fundamental connections with Kolmogorov complexity and Shannon's information theory, highlighting the role of compression and entropy in efficient representation learning.
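To make the low-rank idea concrete, the sketch below approximates a hypothetical weight matrix by truncated SVD; the matrix size and rank are illustrative choices, not drawn from any specific architecture:

```python
import numpy as np

def low_rank_approx(W, r):
    """Rank-r approximation of a weight matrix via truncated SVD.

    Keeping the top-r singular values retains the directions carrying
    the most energy; storage drops from m*n to r*(m+n) parameters.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 512))  # hypothetical transformer weight block
W_r = low_rank_approx(W, r=64)   # 4x fewer parameters: 64*(512+512) vs 512*512
rel_err = np.linalg.norm(W - W_r) / np.linalg.norm(W)
print(f"relative Frobenius error: {rel_err:.3f}")
```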

Another emerging research direction is machine unlearning, where specific data points are selectively removed from trained models without requiring full retraining. From an information-theoretic perspective, this can be viewed as a process of reducing redundant or biased information while preserving model integrity. Techniques such as gradient-based data removal and model pruning have significant implications for privacy, security, and regulatory compliance.
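The sketch below illustrates the gradient-based mechanism on a toy logistic-regression model: ascending the loss on the forget set approximately cancels that data's influence on the trained weights. This is only a first-order illustration under assumed synthetic data; practical unlearning methods additionally bound the residual information (e.g., certified-removal guarantees):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unlearn_by_gradient_ascent(w, X_forget, y_forget, lr=0.1, steps=10):
    """First-order unlearning sketch: ascend the forget-set loss to
    approximately undo what training descended, without full retraining."""
    for _ in range(steps):
        grad = X_forget.T @ (sigmoid(X_forget @ w) - y_forget) / len(y_forget)
        w = w + lr * grad  # gradient *ascent* on the forget set
    return w

rng = np.random.default_rng(0)
w = rng.normal(size=5)                       # hypothetical trained weights
X_forget = rng.normal(size=(20, 5))          # data points to be removed
y_forget = rng.integers(0, 2, size=20).astype(float)
w_unlearned = unlearn_by_gradient_ascent(w, X_forget, y_forget)
```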

This Special Issue invites submissions that explore entropy, perplexity, and information-theoretic approaches to enhancing the computational efficiency of Generative AI, particularly in federated and collaborative learning. Topics of interest include, but are not limited to, the following:

  • Entropy and perplexity analysis of large language models, including Kolmogorov complexity bounds and information-theoretic optimization;
  • Federated and collaborative learning with entropy-aware compression and privacy-preserving techniques;
  • Representation learning in decentralized environments;
  • Parallel and distributed computing strategies for large-scale AI models;
  • Perplexity-constrained compression and query optimization in in-context learning;
  • Prompt compression and entropy-minimized retrieval techniques;
  • Data removal methods for machine unlearning, focusing on entropic efficiency;
  • Scalable techniques to detect AI-generated data using statistical and information-theoretic tools;
  • Applications in AI-assisted software, autonomous systems, and edge computing.

This Special Issue aims to bridge the gap between entropy-based learning, perplexity-aware optimization, and AI efficiency, fostering new advancements in generative AI research.

Dr. Chee Wei Tan
Prof. Dr. Wenyi Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • entropy and perplexity
  • model interpretation and representation learning
  • federated learning and collaborative learning
  • machine unlearning
  • in-context learning
  • distributed optimization
  • large language models
  • AI safety
  • data removal algorithms
  • AI-generated data detection

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on MDPI's website.

Published Papers

This special issue is now open for submission.