Advances in Parallel and Distributed AI Computing

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Parallel and Distributed Algorithms".

Deadline for manuscript submissions: 31 October 2025

Special Issue Editor


Guest Editor
Faculty of Computer Science, University of Koblenz, 56070 Koblenz, Germany
Interests: artificial intelligence; computational social science; data science; simulation; agents

Special Issue Information

Dear Colleagues,

Distributed and parallel systems have been investigated for more than six decades. Parallel systems are now in widespread use, but they often suffer from limited scalability. Distributed computing is a key technology for scalable software systems, especially in data science and sensor networks. In contrast to a centralized AI infrastructure, where all data and computation reside in a single place, distributing AI over several nodes allows large volumes of data to be processed and complex tasks to be solved. Moreover, computing power can be scaled up to train more sophisticated AI models. In sensor networks, including the IoT and edge computing networks, distribution enables, on the one hand, local processing and a reduction in the amount of data transferred and, on the other hand, more resilient and robust networks. Distributed computing is also inherently coupled with self-* concepts, such as self-organization, self-adaptivity, and self-healing.

Of particular interest for this Special Issue are new architectures, services (AIaaS and MLaaS), and virtualization, as well as field studies demonstrating the deployment of distributed architectures in AI/ML and sensing systems. The main objective of this Special Issue is to attract high-quality research presenting emerging solutions, enabling technologies, and/or applications based on efficient and reliable distributed and parallel AI techniques, for example, multi-agent systems coupled with machine learning, that address recent challenges in intelligent distributed systems in order to improve robustness, adaptivity, and scalability.

Topics of interest include but are not limited to the following:

  • Application of artificial intelligence in distributed systems;
  • Application of machine learning algorithms in distributed systems, in particular on very small, low-resource devices deployed in IoT/edge environments;
  • Distributed algorithms, workload balancing, communication and coordination, and sensor fusion—from theory to practice;
  • Distributed data storage;
  • Management solutions for large volumes of data (big data);
  • Distributed and parallel architectures for AI/ML applications;
  • Virtualization and distributed virtual machines;
  • Multi-agent systems;
  • High-performance and efficient distributed systems;
  • New computer processing architectures for ML tasks, accelerators for low-resource and embedded systems, and neural architectures;
  • Security and privacy in distributed systems;
  • Self-* capabilities in distributed systems;
  • Heterogeneous and hierarchical distributed parallel systems.

Prof. Dr. Stefan Bosse
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • self-* capabilities
  • multi-agent systems
  • internet of things
  • tiny ML
  • distributed ML
  • distributed sensor networks (DSNs)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

25 pages, 654 KiB  
Article
Entropy-Regularized Federated Optimization for Non-IID Data
by Koffka Khan
Algorithms 2025, 18(8), 455; https://doi.org/10.3390/a18080455 - 22 Jul 2025
Abstract
Federated learning (FL) struggles under non-IID client data when local models drift toward conflicting optima, impairing global convergence and performance. We introduce entropy-regularized federated optimization (ERFO), a lightweight client-side modification that augments each local objective with a Shannon entropy penalty on the per-parameter update distribution. ERFO requires no additional communication, adds a single scalar hyperparameter λ, and integrates seamlessly into any FedAvg-style training loop. We derive a closed-form gradient for the entropy regularizer and provide convergence guarantees: under μ-strong convexity and L-smoothness, ERFO achieves the same O(1/T) (or linear) rates as FedAvg (with only O(λ) bias for fixed λ and exact convergence when λ_t → 0); in the non-convex case, we prove stationary-point convergence at O(1/√T). Empirically, on five-client non-IID splits of the UNSW-NB15 intrusion-detection dataset, ERFO yields a +1.6 pp gain in accuracy and +0.008 in macro-F1 over FedAvg, with markedly smoother dynamics. On a three-of-five split of PneumoniaMNIST, a fixed λ matches or exceeds FedAvg, FedProx, and SCAFFOLD, achieving 90.3% accuracy and 0.878 macro-F1, while preserving rapid, stable learning. ERFO's gradient-only design is model-agnostic, making it broadly applicable across tasks.
(This article belongs to the Special Issue Advances in Parallel and Distributed AI Computing)
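As a rough illustration of the mechanism described in the abstract, the sketch below shows what an ERFO-style local step might look like in PyTorch. It is a minimal, hypothetical reconstruction: the function names, the use of absolute update magnitudes as the per-parameter distribution, and the sign convention of the entropy term are assumptions, not the paper's reference implementation.

```python
# Minimal, hypothetical sketch of an ERFO-style client update (assumed
# details; not the paper's reference implementation).
import torch

def update_entropy(update: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Shannon entropy H(p) of the per-parameter update distribution,
    where p is obtained by normalizing the absolute update magnitudes
    (the normalization scheme is an assumption)."""
    p = update.abs() / (update.abs().sum() + eps)
    return -(p * (p + eps).log()).sum()

def erfo_local_step(model, loss_fn, batch, global_params, lam=0.01, lr=0.05):
    """One FedAvg-style local SGD step with an entropy regularizer on the
    drift from the current global model (theta - theta_global)."""
    x, y = batch
    task_loss = loss_fn(model(x), y)
    drift = torch.cat([(p - g).flatten()
                       for p, g in zip(model.parameters(), global_params)])
    # The regularizer here rewards high-entropy (evenly spread) updates,
    # one plausible reading of "entropy penalty"; the sign is an assumption.
    loss = task_loss - lam * update_entropy(drift)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
            p.grad = None
    return task_loss.item()
```

In a full FedAvg loop, each client would run several such steps starting from the broadcast global parameters before the server averages the resulting models; since the regularizer only modifies the local gradient, no extra communication is needed, consistent with the abstract's description.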