Recent Advances of Cloud, Edge, and Parallel Computing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 June 2025 | Viewed by 15241

Special Issue Editors


Guest Editor
School of Advanced Fusion Studies, University of Seoul, Seoul 02504, Republic of Korea
Interests: edge and cloud computing; aerospace and vehicular communications; wireless power transfer and physical-layer security technologies

Guest Editor
Department of Information and Communication Engineering, Myongji University, Yongin, Gyeonggi, Republic of Korea
Interests: federated learning; split learning; active learning; MEC-aided video streaming

Guest Editor
Department of Information and Communication Engineering, Myongji University, Yongin-si 17058, Gyeonggi-do, Republic of Korea
Interests: wireless communication; signal processing; distributed computing

Special Issue Information

Dear Colleagues,

The rapid advancement of the Internet of Things (IoT) and 5G technologies has accelerated large-scale computing, bringing great opportunities to science, engineering, business, and everyday life. At the same time, architectural bottlenecks have emerged: a very large number of devices connect to a comparatively small number of servers in cloud data centers, producing a data deluge. To alleviate this computational burden, edge computing and fog computing have arisen as alternatives to cloud computing, distributing part of the computation and processing logic from the cloud to the edge. However, several challenges remain to be addressed, such as balancing workloads across computing nodes in a parallel and distributed manner, security, low complexity, and low latency. To this end, computing architectures need to be capable of executing parallel and distributed algorithms efficiently.

This Special Issue aims to solicit conceptual, theoretical and experimental contributions to address the unsolved issues in the field of cloud, edge and parallel computing. The topics of interest include, but are not limited to, the following:

Topics:

(1) Optimization for cloud, edge and parallel computing;

(2) Learning-based algorithms for cloud, edge and parallel computing;

(3) Resource allocation and scheduling in cloud, edge and parallel computing;

(4) MIMO/RIS/IRS techniques for cloud, edge and parallel computing;

(5) Complexity analysis in cloud, edge and parallel computing;

(6) Scalability issues in cloud, edge and parallel computing;

(7) Security issues in cloud, edge and parallel computing;

(8) Management and orchestration in cloud, edge and parallel computing;

(9) Architecture of cloud, edge and parallel computing;

(10) Applications of cloud, edge and parallel computing in next-generation networks;

(11) Advanced algorithms in cloud, edge and parallel computing.

Dr. Seongah Jeong
Dr. Jin-Hyun Ahn
Dr. Jin-kyu Kang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website; registered authors can then proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cloud computing
  • edge computing
  • fog computing
  • parallel computing
  • distributed computing
  • optimization
  • learning
  • security

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (8 papers)


Research


21 pages, 3894 KiB  
Article
Bounded-Error LiDAR Compression for Bandwidth-Efficient Cloud-Edge In-Vehicle Data Transmission
by Ray-I Chang, Ting-Wei Hsu, Chih Yang and Yen-Ting Chen
Electronics 2025, 14(5), 908; https://doi.org/10.3390/electronics14050908 - 25 Feb 2025
Viewed by 466
Abstract
Recent advances in autonomous driving have led to an increased use of LiDAR (Light Detection and Ranging) sensors for high-frequency 3D perception, resulting in massive data volumes that challenge in-vehicle networks, storage systems, and cloud-edge communications. To address this issue, we propose a bounded-error LiDAR compression framework that enforces a user-defined maximum coordinate deviation (e.g., 2 cm) in real-world space. Our method combines multiple compression strategies, applied under either an axis-wise or a Euclidean (L2) error metric (namely, Error-Bounded Huffman Coding (EB-HC), Error-Bounded 3D Compression (EB-3D), and the extended Error-Bounded Huffman Coding with 3D Integration (EB-HC-3D)), with a lossless Huffman coding baseline. By quantizing and grouping point coordinates based on a strict threshold (either axis-wise or Euclidean), our method significantly reduces data size while preserving geometric fidelity. Experiments on the KITTI dataset demonstrate that, under a 2 cm error bound, our single-bin compression reduces the data to 25–35% of their original size, while multi-bin processing can further compress the data to 15–25% of their original volume. An analysis of compression ratios, error metrics, and encoding/decoding speeds shows that our method achieves a substantial data reduction while keeping reconstruction errors within the specified limit. Moreover, runtime profiling indicates that our method is well suited for deployment on in-vehicle edge devices, thereby enabling scalable cloud-edge cooperation. Full article
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
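The core of the bounded-error guarantee described above is a quantizer whose reconstruction error never exceeds the user-defined threshold. The sketch below is a minimal, generic illustration of such an axis-wise quantizer, not the authors' EB-HC/EB-3D/EB-HC-3D implementation; the function name and the 2 cm default are assumptions for the example. Rounding each coordinate to a grid of step 2*eps bounds the per-axis error by eps, and the resulting integer symbols are what an entropy coder such as Huffman coding would then compress.

```python
import numpy as np

def quantize_axiswise(points, eps=0.02):
    """Map each coordinate to an integer symbol on a grid of step 2*eps,
    so the per-axis reconstruction error never exceeds eps (2 cm by default).
    `points` is an (N, 3) array of LiDAR coordinates in metres."""
    step = 2.0 * eps
    symbols = np.round(points / step).astype(np.int64)  # integer symbols for entropy coding
    reconstructed = symbols * step                       # decoder-side reconstruction
    assert np.max(np.abs(points - reconstructed)) <= eps + 1e-9
    return symbols, reconstructed

# A random point cloud stays within the 2 cm bound after the round trip.
cloud = np.random.uniform(-50.0, 50.0, size=(1000, 3))
symbols, recon = quantize_axiswise(cloud, eps=0.02)
```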

13 pages, 891 KiB  
Article
A Hybrid Parallel Computing Architecture Based on CNN and Transformer for Music Genre Classification
by Jiyang Chen, Xiaohong Ma, Shikuan Li, Sile Ma, Zhizheng Zhang and Xiaojing Ma
Electronics 2024, 13(16), 3313; https://doi.org/10.3390/electronics13163313 - 21 Aug 2024
Cited by 1 | Viewed by 1993
Abstract
Music genre classification (MGC) is the basis for the efficient organization, retrieval, and recommendation of music resources, so it has important research value. Convolutional neural networks (CNNs) have been widely used in MGC and achieved excellent results. However, CNNs cannot model global features well due to the influence of the local receptive field; these global features are crucial for classifying music signals with temporal properties. Transformers can capture long-range dependencies within an image thanks to adopting the self-attention mechanism. Nevertheless, there are still performance and computational cost gaps between Transformers and existing CNNs. In this paper, we propose a hybrid architecture (CNN-TE) based on CNN and Transformer encoder for MGC. Specifically, we convert the audio signals into mel spectrograms and feed them into a hybrid model for training. Our model employs a CNN to initially capture low-level and localized features from the spectrogram. Subsequently, these features are processed by a Transformer encoder, which models them globally to extract high-level and abstract semantic information. This refined information is then classified using a multi-layer perceptron. Our experiments demonstrate that this approach surpasses many existing CNN architectures when tested on the GTZAN and FMA datasets. Notably, it achieves these results with fewer parameters and a faster inference speed. Full article
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
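As a rough illustration of the pipeline the abstract describes (mel spectrogram, then a CNN for local features, then a Transformer encoder for global modelling, then an MLP classifier), the following PyTorch sketch wires the stages together. The class name `CNNTE`, layer sizes, and pooling choices are assumptions for the example, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CNNTE(nn.Module):
    """Illustrative CNN + Transformer-encoder classifier for mel spectrograms."""
    def __init__(self, n_classes=10, d_model=128):
        super().__init__()
        # CNN front-end: local, low-level features from the (1, n_mels, n_frames) spectrogram.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, d_model, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((1, 64)),
        )
        # Transformer encoder: global modelling over the resulting sequence of 64 tokens.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # MLP head for genre classification.
        self.head = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, mel):                        # mel: (batch, 1, n_mels, n_frames)
        feats = self.cnn(mel)                      # (batch, d_model, 1, 64)
        tokens = feats.flatten(2).transpose(1, 2)  # (batch, 64, d_model)
        encoded = self.encoder(tokens).mean(dim=1) # pooled global representation
        return self.head(encoded)                  # (batch, n_classes) logits

logits = CNNTE()(torch.randn(8, 1, 128, 256))      # e.g. a batch of 8 mel spectrograms
```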

22 pages, 3408 KiB  
Article
Microservices-Based Resource Provisioning for Multi-User Cloud VR in Edge Networks
by Ho-Jin Choi, Nobuyoshi Komuro and Won-Suk Kim
Electronics 2024, 13(15), 3077; https://doi.org/10.3390/electronics13153077 - 3 Aug 2024
Viewed by 1155
Abstract
Cloud virtual reality (VR) is attracting attention in terms of its lightweight head-mounted display (HMD), providing telepresence and mobility. However, it is still in the research stages due to motion-to-photon (MTP) latency, the need for high-speed network infrastructure, and large-scale traffic processing problems. These problems are expected to be partially solved through edge computing, but the limited computing resource capacity of the infrastructure presents new challenges. In particular, in order to efficiently provide multi-user content such as remote meetings on edge devices, resource provisioning is needed that considers the application’s traffic patterns and computing resource requirements at the same time. In this study, we present a microservice architecture (MSA)-based application to provide multi-user cloud VR in edge computing and propose a scheme for planning an efficient service deployment considering the characteristics of each service. The proposed scheme not only guarantees the MTP latency threshold for all users but also aims to reduce networking and computing resource waste. The proposed scheme was evaluated by simulating various scenarios, and the results were compared to several studies. It was confirmed that the proposed scheme represents better performance metrics than the comparison schemes in most cases from the perspectives of networking, computing, and MTP latency. Full article
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
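The provisioning problem the abstract describes, namely placing microservices on edge nodes so that every user stays within the motion-to-photon (MTP) latency budget while wasting as few resources as possible, can be illustrated with a simple greedy heuristic. This is not the paper's deployment-planning scheme; the function name, the 20 ms budget, and the node/service fields are assumptions for the example.

```python
def place_services(services, nodes, mtp_budget_ms=20.0):
    """Greedy sketch: assign each microservice to the feasible edge node with the
    most remaining CPU, where 'feasible' means the node has enough CPU left and
    its network latency keeps the end-to-end MTP latency within budget.
    `services`: list of dicts with 'name', 'cpu', 'proc_ms'.
    `nodes`: list of dicts with 'name', 'cpu_free', 'net_ms'."""
    placement = {}
    for svc in sorted(services, key=lambda s: s["cpu"], reverse=True):  # largest services first
        feasible = [n for n in nodes
                    if n["cpu_free"] >= svc["cpu"]
                    and n["net_ms"] + svc["proc_ms"] <= mtp_budget_ms]
        if not feasible:
            raise RuntimeError(f"no node can host {svc['name']} within the MTP budget")
        target = max(feasible, key=lambda n: n["cpu_free"])  # least-loaded feasible node
        target["cpu_free"] -= svc["cpu"]
        placement[svc["name"]] = target["name"]
    return placement

services = [{"name": "render", "cpu": 4, "proc_ms": 8.0},
            {"name": "session", "cpu": 1, "proc_ms": 2.0}]
nodes = [{"name": "edge-A", "cpu_free": 8, "net_ms": 5.0},
         {"name": "edge-B", "cpu_free": 2, "net_ms": 3.0}]
print(place_services(services, nodes))
```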

18 pages, 2319 KiB  
Article
Handling Efficient VNF Placement with Graph-Based Reinforcement Learning for SFC Fault Tolerance
by Seyha Ros, Prohim Tam, Inseok Song, Seungwoo Kang and Seokhoon Kim
Electronics 2024, 13(13), 2552; https://doi.org/10.3390/electronics13132552 - 28 Jun 2024
Cited by 2 | Viewed by 1789
Abstract
Network functions virtualization (NFV) has become the platform for decomposing a sequence of virtual network functions (VNFs), which can be grouped as a forwarding graph of service function chaining (SFC) to serve multi-service slice requirements. NFV-enabled SFC poses several challenges in achieving reliable and efficient key performance indicators (KPIs) for management and orchestration (MANO) decision-making control. SFC fault tolerance is one of the most critical challenges for provisioning service requests, and it depends on resource availability. In this article, we propose graph neural network (GNN)-based deep reinforcement learning (DRL) to enhance SFC fault tolerance (GRL-SFT), which targets chain graph representation, long-term approximation, and self-organizing service orchestration for future massive Internet of Everything applications. We formulate the problem as a Markov decision process (MDP). DRL seeks to maximize the cumulative reward by maximizing service request acceptance ratios and minimizing average completion delays. The proposed model solves the VNF management problem in a short time and configures node allocation reliably for real-time restoration. Our simulation results demonstrate the effectiveness of the proposed scheme and indicate better performance in terms of total rewards, delays, acceptances, failures, and restoration ratios in different network topologies compared to reference schemes. Full article
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
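The optimization objective stated in the abstract (maximize the service request acceptance ratio while minimizing the average completion delay) can be illustrated with a toy per-episode reward such as the one below. The function name and weighting are assumptions; the paper's GRL-SFT reward is not reproduced here.

```python
def episode_reward(accepted, total_requests, completion_delays_ms, delay_weight=0.01):
    """Toy reward combining the two objectives from the abstract: a higher
    acceptance ratio raises the reward, a higher average completion delay
    lowers it. The weight is illustrative only."""
    acceptance_ratio = accepted / max(total_requests, 1)
    avg_delay = (sum(completion_delays_ms) / len(completion_delays_ms)
                 if completion_delays_ms else 0.0)
    return acceptance_ratio - delay_weight * avg_delay

# Example: 9 of 10 SFC requests embedded, with the listed completion delays.
print(episode_reward(9, 10, [12.0, 15.5, 9.8, 11.2, 14.0, 10.6, 13.3, 12.7, 9.9]))
```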

17 pages, 2027 KiB  
Article
Diffusion-Based Radio Signal Augmentation for Automatic Modulation Classification
by Yichen Xu, Liang Huang, Linghong Zhang, Liping Qian and Xiaoniu Yang
Electronics 2024, 13(11), 2063; https://doi.org/10.3390/electronics13112063 - 25 May 2024
Cited by 1 | Viewed by 1395
Abstract
Deep learning has become a powerful tool for automatically classifying modulations in received radio signals, a task traditionally reliant on manual expertise. However, the effectiveness of deep learning models hinges on the availability of substantial data. Limited training data often results in overfitting, which significantly impacts classification accuracy. Traditional signal augmentation methods like rotation and flipping have been employed to mitigate this issue, but their effectiveness in enriching datasets is somewhat limited. This paper introduces the Diffusion-based Radio Signal Augmentation algorithm (DiRSA), a novel signal augmentation method that significantly enhances dataset scale without compromising signal integrity. Utilizing prompt words for precise signal generation, DiRSA allows for flexible modulation control and significantly expands the training dataset beyond the original scale. Extensive evaluations demonstrate that DiRSA outperforms traditional signal augmentation techniques such as rotation and flipping. Specifically, when applied with the LSTM model in small dataset scenarios, DiRSA enhances modulation classification performance at SNRs above 0 dB by 6%. Full article
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
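DiRSA itself is a diffusion model and is not reproduced here, but the traditional rotation and flipping baselines it is compared against are straightforward to sketch for complex-baseband (I/Q) frames; the function names below are assumptions for the example.

```python
import numpy as np

def rotate_iq(iq, angle_rad):
    """Rotate a complex-baseband signal in the I/Q plane by `angle_rad`.
    `iq` is an (N, 2) array of [I, Q] samples."""
    c = iq[:, 0] + 1j * iq[:, 1]
    r = c * np.exp(1j * angle_rad)
    return np.stack([r.real, r.imag], axis=1)

def flip_iq(iq):
    """Flip (conjugate) the signal by negating the Q component."""
    return np.stack([iq[:, 0], -iq[:, 1]], axis=1)

# Each frame yields several augmented copies: 90/180/270-degree rotations plus a flip.
frame = np.random.randn(128, 2).astype(np.float32)
augmented = [rotate_iq(frame, k * np.pi / 2) for k in (1, 2, 3)] + [flip_iq(frame)]
```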

15 pages, 1092 KiB  
Article
Performance Evaluation of Parallel Graphs Algorithms Utilizing Graphcore IPU
by Paweł Gepner, Bartłomiej Kocot, Marcin Paprzycki, Maria Ganzha, Leonid Moroz and Tomasz Olas
Electronics 2024, 13(11), 2011; https://doi.org/10.3390/electronics13112011 - 21 May 2024
Cited by 1 | Viewed by 1932
Abstract
Recent years have been characterized by increasing interest in graph computations. This trend can be related to the large number of potential application areas. Moreover, the increasing computational capabilities of modern computers have allowed the theory of graph algorithms to be turned into explorations of the best methods for their actual realization. These factors, in turn, brought about ideas such as the creation of a hardware component dedicated to graph computation, i.e., the Graphcore Intelligence Processing Unit (IPU). Interestingly, Graphcore systems are a hardware implementation of the Bulk Synchronous Parallel paradigm, which seemed to be a mostly theoretical concept from the end of the last century. In this context, the question that has to be addressed experimentally is as follows: how good are Graphcore systems in comparison with the standard systems that can be used to run graph algorithms, i.e., CPUs and GPUs? To provide a partial response to this broad question, in this contribution, the PageRank, Single Source Shortest Path and Breadth-First Search algorithms are used to compare the performance of IPU-deployed algorithms to other parallel architectures. The obtained results clearly show that the Graphcore IPU outperforms other devices for the studied heterogeneous algorithms and, currently, provides best-in-class execution time results for a range of graph sizes and densities. Full article
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
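The benchmarked kernels are standard graph algorithms; for reference, a plain device-agnostic power-iteration PageRank (not the IPU implementation evaluated in the paper) looks like the following sketch.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-8, max_iter=100):
    """Power-iteration PageRank on a dense adjacency matrix `adj`
    (adj[i, j] = 1 if there is an edge i -> j). Dangling nodes
    distribute their rank uniformly over all nodes."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    rank = np.full(n, 1.0 / n)
    # Row-normalized transition probabilities; dangling rows stay all-zero.
    contrib = np.where(out_deg[:, None] > 0, adj / np.maximum(out_deg[:, None], 1.0), 0.0)
    for _ in range(max_iter):
        dangling = rank[out_deg == 0].sum() / n
        new_rank = (1 - damping) / n + damping * (rank @ contrib + dangling)
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
    return rank

# Four-node example graph.
A = np.array([[0, 1, 1, 0], [0, 0, 1, 0], [1, 0, 0, 1], [0, 0, 0, 0]], dtype=float)
print(pagerank(A))
```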

23 pages, 2787 KiB  
Article
Offloading Decision and Resource Allocation in Mobile Edge Computing for Cost and Latency Efficiencies in Real-Time IoT
by Chanthol Eang, Seyha Ros, Seungwoo Kang, Inseok Song, Prohim Tam, Sa Math and Seokhoon Kim
Electronics 2024, 13(7), 1218; https://doi.org/10.3390/electronics13071218 - 26 Mar 2024
Cited by 5 | Viewed by 2456
Abstract
Internet of Things (IoT) devices can integrate with applications requiring intensive contextual data processing, intelligent vehicle control, healthcare remote sensing, VR, data mining, traffic management, and interactive applications. However, there are computationally intensive tasks that need to be completed quickly within the time constraints of IoT devices. To address this challenge, researchers have proposed computation offloading, where computing tasks are sent to edge servers instead of being executed locally on user devices. This approach uses edge servers located near users at cellular network base stations and is known as Mobile Edge Computing (MEC). The goal is to offload tasks to edge servers while optimizing both latency and energy consumption. The main objective of this paper is to design an algorithm for time- and energy-optimized task offloading decision-making in MEC environments. To this end, we developed a Lagrange Duality Resource Optimization Algorithm (LDROA) to jointly optimize the offloading decision and resource allocation for each task, i.e., whether to execute it locally or offload it to an edge server. The LDROA technique produces superior simulation outcomes in terms of task offloading, with improved performance in computation latency and cost compared to conventional methods such as Random Offloading, Load Balancing, and the Greedy Latency Offloading scheme. Full article
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
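The LDROA algorithm itself is not reproduced here, but the per-task decision it optimizes, namely weighing local execution latency and energy against transmission plus edge execution, can be sketched with a simple weighted-cost comparison. All parameter values and the energy model (the common kappa * cycles * f^2 convention) are illustrative assumptions.

```python
def should_offload(task_bits, task_cycles,
                   f_local_hz=1e9, f_edge_hz=10e9,
                   uplink_bps=50e6, tx_power_w=0.5, kappa=1e-27,
                   latency_weight=0.5):
    """Weighted latency+energy comparison for one task (illustrative values).
    Local:   t = cycles / f_local,                 e = kappa * cycles * f_local**2
    Offload: t = bits / uplink + cycles / f_edge,  e = tx_power * bits / uplink"""
    t_local = task_cycles / f_local_hz
    e_local = kappa * task_cycles * f_local_hz ** 2
    t_off = task_bits / uplink_bps + task_cycles / f_edge_hz
    e_off = tx_power_w * task_bits / uplink_bps
    cost_local = latency_weight * t_local + (1 - latency_weight) * e_local
    cost_off = latency_weight * t_off + (1 - latency_weight) * e_off
    return cost_off < cost_local, cost_local, cost_off

# A 1 Mbit task needing 5e8 CPU cycles: offloading wins under these assumptions.
print(should_offload(task_bits=1e6, task_cycles=5e8))
```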

Review


37 pages, 3219 KiB  
Review
Digital Twin-Enabled Internet of Vehicles Applications
by Junting Gao, Chunrong Peng, Tsutomu Yoshinaga, Guorong Han, Siri Guleng and Celimuge Wu
Electronics 2024, 13(7), 1263; https://doi.org/10.3390/electronics13071263 - 28 Mar 2024
Cited by 7 | Viewed by 3386
Abstract
The digital twin (DT) paradigm represents a groundbreaking shift in the Internet of Vehicles (IoV) landscape, acting as an instantaneous digital replica of physical entities. This synthesis not only refines vehicular design but also substantially augments driver support systems and streamlines traffic governance. Diverging from the prevalent research which predominantly examines DT’s technical assimilation within IoV infrastructures, this review focuses on the specific deployments and goals of DT within the IoV sphere. Through an extensive review of scholarly works from the past 5 years, this paper provides a fresh and detailed perspective on the significance of DT in the realm of IoV. The applications are methodically categorized across four pivotal sectors: industrial manufacturing, driver assistance technology, intelligent transportation networks, and resource administration. This classification sheds light on DT’s diverse capabilities to confront and adapt to the intricate challenges in contemporary vehicular networks. The intent of this comprehensive overview is to catalyze innovation within IoV by providing an essential reference for researchers who aspire to swiftly grasp the complex dynamics of this evolving domain. Full article
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
