Distributed Computing Systems: Advances, Trends and Emerging Designs

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 November 2025 | Viewed by 3139

Special Issue Editors


Dr. Jaehwan Lee
Guest Editor
Department of Computer Engineering, Korea Aerospace University, Goyang-si 10540, Republic of Korea
Interests: distributed computing; systems software; operating systems; big data and AI computing platforms

Dr. Sunggon Kim
Guest Editor
BigData and HPC Lab, Seoul National University of Science and Technology, Seoul 01811, Republic of Korea
Interests: high-performance computing; distributed file system; big data analysis; file and storage systems; operating systems; parallel and distributed systems; virtualization and cloud systems; database system

Special Issue Information

Dear Colleagues,

We are pleased to announce the call for submissions for our upcoming Special Issue focusing on Advances, Trends, and Emerging Designs in Distributed Computing Systems.

In today's rapidly evolving technological landscape, distributed computing systems play a pivotal role in shaping the future of computing. From cloud computing to edge computing and from distributed databases to peer-to-peer networks, innovations in distributed systems are driving efficiency, scalability, and reliability across various domains.

Furthermore, the advent of new paradigms, such as serverless computing, containers, virtual machines (VMs), and microservice architectures, presents exciting opportunities for innovation in distributed systems design. These emerging trends demand novel approaches to system architecture, resource allocation, interference management, and fault tolerance to meet the evolving needs of modern applications.

In this Special Issue, we invite contributions that explore cutting-edge research and recent advances in the field of distributed computing systems. Topics of interest include, but are not limited to, the following:

  • Novel architectures and designs for distributed systems;
  • Resource-efficient algorithms and optimization techniques;
  • Scalable and fault-tolerant distributed computing designs;
  • Edge-computing and fog-computing solutions;
  • Blockchain-based distributed systems;
  • Secure and privacy-preserving distributed computing protocols;
  • Machine learning and artificial intelligence for/in distributed systems;
  • Performance evaluation and benchmarking of distributed systems.

We welcome submissions from researchers and practitioners across academia and industry. Both theoretical studies and practical implementations are encouraged, as well as survey papers providing comprehensive insights into the state of the art in distributed computing.

We look forward to your contributions to this exciting Special Issue.

Dr. Jaehwan Lee
Dr. Sunggon Kim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • distributed computing systems
  • scalability
  • fault tolerance
  • IoT edge computing
  • cloud computing
  • resource allocation
  • machine learning
  • security and privacy

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

23 pages, 3552 KiB  
Article
Low-Scalability Distributed Systems for Artificial Intelligence: A Comparative Study of Distributed Deep Learning Frameworks for Image Classification
by Manuel Rivera-Escobedo, Manuel de Jesús López-Martínez, Luis Octavio Solis-Sánchez, Héctor Alonso Guerrero-Osuna, Sodel Vázquez-Reyes, Daniel Acosta-Escareño and Carlos A. Olvera-Olvera
Appl. Sci. 2025, 15(11), 6251; https://doi.org/10.3390/app15116251 - 2 Jun 2025
Viewed by 419
Abstract
Artificial intelligence has experienced tremendous growth in various areas of knowledge, especially in computer science. Distributed computing has become necessary for storing, processing, and generating large amounts of information essential for training artificial intelligence models and algorithms that allow knowledge to be created from large amounts of data. Currently, cloud services offer products for running distributed data training, such as NVIDIA Deep Learning Solutions, Amazon SageMaker, Microsoft Azure, and Google Cloud AI Platform. These services have a cost that adapts to the needs of users who require high processing performance to perform their artificial intelligence tasks. This study highlights the relevance of distributed computing in image processing and classification tasks using a low-scalability distributed system built with devices considered obsolete. To this end, two of the most widely used libraries for the distributed training of deep learning models, PyTorch’s Distributed Data Parallel and Distributed TensorFlow, were implemented and evaluated using the ResNet50 model as a basis for image classification, and their performance was compared with modern environments such as Google Colab and a recent workstation. The results demonstrate that even with low scalability and outdated distributed systems, comprehensive artificial intelligence tasks can still be performed, reducing investment time and costs. With the results obtained and experiments conducted in this study, we aim to promote technological sustainability through device recycling to facilitate access to high-performance computing in key areas such as research, industry, and education.
(This article belongs to the Special Issue Distributed Computing Systems: Advances, Trends and Emerging Designs)
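The data-parallel strategy this paper evaluates (both PyTorch's Distributed Data Parallel and Distributed TensorFlow follow it) boils down to sharding each batch across workers, computing local gradients, and averaging them before a shared update. The following is a minimal, framework-agnostic sketch of that synchronization step, with plain Python standing in for a real communication backend; the toy linear model, worker count, and data are illustrative assumptions, not material from the paper.

```python
# Sketch of synchronous data-parallel training: each worker computes a
# gradient on its own data shard, the gradients are averaged (the
# "all-reduce" a real framework performs), and every worker applies
# the identical update. Toy model: y = w * x with squared loss.

def local_gradient(w, shard):
    """Mean gradient of (w*x - y)^2 over one worker's data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(grads):
    """Stand-in for the collective communication a real backend runs."""
    return sum(grads) / len(grads)

def train_step(w, shards, lr):
    grads = [local_gradient(w, s) for s in shards]  # parallel in reality
    g = allreduce_mean(grads)                       # synchronize gradients
    return w - lr * g                               # same update everywhere

# Two simulated "workers", data drawn from the line y = 3x
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = train_step(w, shards, lr=0.02)
# w converges to 3.0, exactly as if one worker had seen all the data
```

Because every worker applies the same averaged gradient, the model stays bit-identical across nodes; the frameworks compared in the paper differ mainly in how efficiently they implement the all-reduce, which is what the study measures on low-scalability hardware.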

14 pages, 781 KiB  
Article
Efficient I/O Performance-Focused Scheduling in High-Performance Computing
by Soeun Kim, Sunggon Kim and Hwajung Kim
Appl. Sci. 2024, 14(21), 10043; https://doi.org/10.3390/app142110043 - 4 Nov 2024
Viewed by 1969
Abstract
High-performance computing (HPC) systems are becoming increasingly important as contemporary exascale applications demand extensive computational and data processing capabilities. To optimize these systems, efficient scheduling of HPC applications is important. In particular, because I/O is a shared resource among applications and is becoming more important due to the emergence of big data, it is possible to improve performance by considering the architecture of HPC systems and scheduling jobs based on I/O resource requirements. In this paper, we propose a scheduling scheme that prioritizes HPC applications based on their I/O requirements. To accomplish this, our scheme analyzes the IOPS of scheduled applications by examining their execution history. Then, it schedules the applications at pre-configured intervals based on their expected IOPS to maximize the available IOPS across the entire system. Compared to the existing first-come first-served (FCFS) algorithm, experimental results using real-world HPC log data show that our scheme reduces total execution time by 305 h and decreases costs by USD 53 when scheduling 10,000 jobs utilizing public cloud resources.
(This article belongs to the Special Issue Distributed Computing Systems: Advances, Trends and Emerging Designs)
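The core idea described in this abstract, estimating each job's IOPS from its execution history and scheduling at fixed intervals so the system's aggregate IOPS is used without oversubscription, can be sketched as a simple admission policy. The greedy ordering, the job fields, and the budget figure below are illustrative assumptions for exposition, not the paper's actual algorithm.

```python
# Hedged sketch of I/O-aware interval scheduling: at each interval,
# admit queued jobs in descending order of expected IOPS (estimated
# from past runs) while the cluster's aggregate IOPS budget holds.
# Plain FCFS would instead admit strictly in arrival order,
# ignoring whether the I/O subsystem can absorb the load.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    expected_iops: float  # e.g., mean IOPS over the job's execution history

def schedule_interval(queue, iops_budget):
    """Select jobs for one interval without exceeding the IOPS budget."""
    admitted, waiting = [], []
    available = iops_budget
    for job in sorted(queue, key=lambda j: j.expected_iops, reverse=True):
        if job.expected_iops <= available:
            admitted.append(job)
            available -= job.expected_iops
        else:
            waiting.append(job)       # deferred to a later interval
    return admitted, waiting

queue = [Job("sim-A", 400.0), Job("analytics-B", 900.0), Job("checkpoint-C", 300.0)]
admitted, waiting = schedule_interval(queue, iops_budget=1300.0)
# analytics-B (900) and sim-A (400) fill the budget; checkpoint-C waits.
```

Spreading I/O-heavy jobs across intervals this way avoids the contention spikes that arise when FCFS happens to co-schedule several I/O-intensive applications, which is the effect the paper's scheme exploits.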
