Parallel and Distributed Cloud, Edge and Fog Computing: Latest Advances and Prospects

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 31 December 2024 | Viewed by 14,501

Special Issue Editors


Dr. Marios Avgeris
Guest Editor
Department of Systems and Computer Engineering, Carleton University, 1125 Colonel By Dr, Ottawa, ON K1S 5B6, Canada
Interests: computer networking; IoT; cloud and edge computing; computational offloading; resource allocation; service function chain placement; 6G

Dr. Dimitrios Dechouniotis
Guest Editor
Department of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
Interests: software-defined networks; cognitive radio networks; IoT; big data; social network analysis; recommender systems

Dr. Konstantinos Tsitseklis
Guest Editor
School of Electrical & Computer Engineering, National Technical University of Athens, Zografou Campus, 9, Iroon Polytechniou Str., 15780 Zografou, Greece
Interests: software-defined networks; cognitive radio networks; IoT; big data; social network analysis; recommender systems

Dr. Margarita Vitoropoulou
Guest Editor
School of Electrical & Computer Engineering, National Technical University of Athens, Zografou Campus, 9, Iroon Polytechniou Str., 15780 Zografou, Greece
Interests: big data; caching; social network analysis; recommender systems; information diffusion

Special Issue Information

Dear Colleagues,

Over the course of the 21st century, cloud computing has established itself as a breakthrough paradigm that provides utility computing at large scale and has been adopted across numerous application domains. At the same time, the dawn of the 5G era in networking has paved the way for the next generation of cloud technologies, namely edge and fog computing, which move computational resources closer to the user.

However, as often happens with new technologies, several challenges remain. An unbalanced workload among the nodes of a cloud infrastructure can hamper its performance, and the centralized management and processing of information can have a negative impact, especially when big data applications are deployed. These are some of the problems that parallel and distributed techniques can solve when applied in the context of cloud, edge and fog computing, as they enable the aggregation and sharing of an increasing variety of distributed computational resources at large scale. Still, challenges such as security, increased infrastructure complexity, resilient low-latency communication, and efficient orchestration and synchronization leave ample room for improvement.

To this end, this Special Issue is soliciting conceptual, theoretical, and experimental contributions to a set of currently unresolved challenges in the area of parallel and distributed cloud, edge and fog computing. The topics of interest include, but are not limited to:

  • Distributed resource allocation and scheduling in cloud, edge and fog computing;
  • Optimization algorithms for distributed and parallel computing at network infrastructures;
  • Network routing for distributed and parallel computing;
  • Management and orchestration of distributed computational resources;
  • Middleware and libraries for parallel and distributed computing at the cloud, edge and fog layer;
  • Development of architectures for parallel and distributed computing;
  • Scalability issues in parallel and distributed cloud computing;
  • Applications of parallel and distributed computing in next-generation networking infrastructures;
  • Security issues during network-enabled parallel and distributed computing;
  • Data-resilient, fault-tolerant techniques for intra-infrastructure communication in distributed computing;
  • Advanced algorithms for parallelization and distribution of network applications (AI, control theory, etc.).

Dr. Marios Avgeris
Dr. Dimitrios Dechouniotis
Dr. Konstantinos Tsitseklis
Dr. Margarita Vitoropoulou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cloud computing
  • edge computing
  • fog computing
  • parallel and distributed computing
  • resource management optimization
  • system architecture optimization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (8 papers)


Research

22 pages, 6419 KiB  
Article
Leveraging Seed Generation for Efficient Hardware Acceleration of Lossless Compression of Remotely Sensed Hyperspectral Images
by Amal Altamimi and Belgacem Ben Youssef
Electronics 2024, 13(11), 2164; https://doi.org/10.3390/electronics13112164 - 1 Jun 2024
Viewed by 495
Abstract
In the field of satellite imaging, effectively managing the enormous volumes of data from remotely sensed hyperspectral images presents significant challenges due to the limited bandwidth and power available in spaceborne systems. In this paper, we describe the hardware acceleration of a highly efficient lossless compression algorithm, specifically designed for real-time hyperspectral image processing on FPGA platforms. The algorithm utilizes an innovative seed generation method for square root calculations to significantly boost data throughput and reduce energy consumption, both of which represent key factors in satellite operations. When implemented on the Cyclone V FPGA, our method achieves a notable operational throughput of 1598.67 Mega Samples per second (MSps) and maintains a power requirement of under 1 Watt, leading to an efficiency rate of 1829.1 MSps/Watt. A comparative analysis with existing and related state-of-the-art implementations confirms that our system surpasses conventional performance standards, thus facilitating the efficient processing of large-scale hyperspectral datasets, especially in environments where throughput and low energy consumption are prioritized.
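The abstract leaves the seed-generation details to the paper itself; as a generic illustration of the hardware-friendly pattern it alludes to (a cheap initial guess refined by Newton-Raphson steps), here is a minimal Python sketch. The seeding rule below is a textbook choice, not the authors' method:

```python
import math

def isqrt_seeded(x: int) -> int:
    """Integer square root: cheap seed from the bit length, then Newton steps."""
    if x < 2:
        return x
    # Seed 2**ceil(bits/2) is guaranteed >= sqrt(x), so the Newton
    # iteration below decreases monotonically to floor(sqrt(x)).
    y = 1 << ((x.bit_length() + 1) // 2)
    while True:
        nxt = (y + x // y) // 2   # one Newton-Raphson refinement step
        if nxt >= y:
            return y
        y = nxt

if __name__ == "__main__":
    for v in (3, 100, 65_535, 12_345_678):
        assert isqrt_seeded(v) == math.isqrt(v)
    print("seeded isqrt matches math.isqrt on the samples")
```

In hardware, the seed would come from a small lookup on the operand's leading bits rather than a bit-length computation, which is the kind of choice that trades area for latency.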

16 pages, 4728 KiB  
Article
Efficient FPGA Binary Neural Network Architecture for Image Super-Resolution
by Yuanxin Su, Kah Phooi Seng, Jeremy Smith and Li Minn Ang
Electronics 2024, 13(2), 266; https://doi.org/10.3390/electronics13020266 - 6 Jan 2024
Cited by 1 | Viewed by 1971
Abstract
Super-resolution systems refer to computer-based systems designed to enhance the quality of images or video by producing high-resolution renditions from low-resolution counterparts using computational algorithms and technologies. Various methods and techniques have been used in the development of super-resolution systems, and Convolutional Neural Networks (CNNs) and Deep Learning (DL) methods have outperformed traditional approaches. However, as models become increasingly deep with wider receptive fields, the number of parameters increases significantly. While this often results in better performance, it renders these models impractical for real-life scenarios such as smartphones or other mobile systems. Currently, most proposed methods with higher perceptual quality demand a substantial amount of time to process a single image, even on powerful hardware like NVIDIA GPUs. Such computationally expensive models are not cost-effective for real-world application scenarios. Optimization is needed to reduce the computational costs and memory requirements to enhance their suitability for less powerful hardware configurations. In this work, we propose an efficient binary neural network architecture, ResBinESPCN, designed for image super-resolution. In our design, we improved the energy efficiency of the architecture through algorithmic and hardware-level optimizations. These optimizations not only enhance computational efficiency and reduce memory consumption but also achieve effective image super-resolution in resource-constrained environments. Our experimental validation highlights the effectiveness of this network structure and includes ablation studies on models with varying data bit widths. Hardware analysis substantiates the efficiency and real-time capabilities of this model. Additionally, deploying the model on FPGA using FINN demonstrates its low hardware resource usage and low power consumption.
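Two building blocks such a design rests on are 1-bit weight binarization with a per-filter scale (in the XNOR-Net style) and ESPCN's sub-pixel "pixel shuffle" upscale. The Python sketch below illustrates each in isolation; the shapes and names are assumed for the example and are not the ResBinESPCN definition:

```python
import numpy as np

def binarize(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Approximate w by alpha * sign(w), with alpha = mean(|w|)."""
    alpha = float(np.mean(np.abs(w)))
    return np.sign(w), alpha

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """(C*r*r, H, W) -> (C, H*r, W*r): the ESPCN sub-pixel upscale."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # reorder to (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

if __name__ == "__main__":
    w = np.random.randn(16)
    b, alpha = binarize(w)
    print("binarization error:", np.linalg.norm(w - alpha * b))
    lr_feat = np.random.randn(4 * 3 * 3, 8, 8)            # C=4, r=3, 8x8 input
    print("upscaled shape:", pixel_shuffle(lr_feat, 3).shape)  # (4, 24, 24)
```

The appeal on FPGAs is that ±1 weights turn multiply-accumulates into XNOR-popcount operations, which is where the resource and power savings come from.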

25 pages, 2127 KiB  
Article
ModSoft-HP: Fuzzy Microservices Placement in Kubernetes
by Euripides G. M. Petrakis, Vasileios Skevakis, Panayiotis Eliades, Alkiviadis Aznavouridis and Konstantinos Tsakos
Electronics 2024, 13(1), 65; https://doi.org/10.3390/electronics13010065 - 22 Dec 2023
Viewed by 1208
Abstract
The growing popularity of microservices architectures has generated the need for tools that orchestrate their deployment in containerized infrastructures, such as Kubernetes. Microservices running in separate containers are packed in pods and placed in virtual machines (nodes). For applications with multiple communicating microservices, the decision of which services should be placed in the same node has a considerable impact on both the running time and the operating cost of an application. The default Kubernetes scheduler is not optimal in that case. In this work, the service placement problem is treated as graph clustering. An application is modeled using a graph with nodes and edges representing communicating microservices. Graph clustering partitions the graph into clusters of microservices with high affinity rates. Then, the microservices of each cluster are placed in the same Kubernetes node. A class of methods resorts to hard clustering (i.e., each microservice is placed in exactly one node). We advocate that graph clustering should be fuzzy, to allow highly utilized microservices to run in more than one instance (i.e., pods) in different nodes. ModSoft-HP Scheduler is a custom Kubernetes scheduler that takes scheduling decisions based on the results of the ModSoft fuzzy clustering method followed by heuristic packing (HP). For proof of concept, the workloads of two applications (i.e., an e-commerce application, eShop, and an IoT architecture) are given as input to the default Kubernetes Scheduler, the Bisecting K-means and Heuristic First Fit (hard) clustering schedulers, and the ModSoft-HP fuzzy clustering method. The experimental results demonstrate that ModSoft-HP can achieve up to 90% reduction of egress traffic, up to 20% savings in response time, and up to 25% lower hosting costs compared to service placement with the default Kubernetes Scheduler in the Google Kubernetes Engine.
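To make the "fuzzy clustering followed by heuristic packing" pipeline concrete, here is a toy Python stand-in: memberships come from normalized traffic affinities to seed services (a service above a threshold may join several clusters, i.e., run extra pods), and clusters are then first-fit packed onto nodes. The names, threshold, and clustering rule are hypothetical simplifications, not the ModSoft algorithm:

```python
def fuzzy_clusters(affinity, seeds, threshold=0.35):
    """affinity[(a, b)] = traffic between services a and b."""
    clusters = {s: {s} for s in seeds}
    services = {x for pair in affinity for x in pair}
    for svc in services - set(seeds):
        weights = {s: affinity.get((svc, s), 0) + affinity.get((s, svc), 0)
                   for s in seeds}
        total = sum(weights.values()) or 1
        for s, w in weights.items():
            if w / total >= threshold:   # fuzzy: may join several clusters
                clusters[s].add(svc)
    return list(clusters.values())

def first_fit_pack(clusters, node_capacity, cost):
    """Place each cluster on the first node with enough spare capacity."""
    nodes = [[]]
    def load(node):
        return sum(cost[s] for cl in node for s in cl)
    for cl in sorted(clusters, key=lambda c: -sum(cost[s] for s in c)):
        need = sum(cost[s] for s in cl)
        for node in nodes:
            if load(node) + need <= node_capacity:
                node.append(cl)
                break
        else:
            nodes.append([cl])
    return nodes

if __name__ == "__main__":
    traffic = {("gw", "cart"): 8, ("gw", "search"): 6,
               ("cart", "db"): 9, ("search", "db"): 7}
    cost = {"gw": 2, "cart": 3, "search": 3, "db": 4}
    cls = fuzzy_clusters(traffic, seeds=["cart", "search"])
    print("clusters:", cls)                      # "gw" and "db" join both
    print("nodes:", first_fit_pack(cls, node_capacity=10, cost=cost))
```

Note that a service appearing in two clusters is costed on both nodes, which is exactly the fuzzy-placement point: a busy service runs extra pods rather than forcing all its traffic across nodes.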

19 pages, 2120 KiB  
Article
CLOCIS: Cloud-Based Conformance Testing Framework for IoT Devices in the Future Internet
by Jaehoon Yoo, Jaeyoung Hwang, Jieun Lee, Seongki Yoo and JaeSeung Song
Electronics 2023, 12(24), 4980; https://doi.org/10.3390/electronics12244980 - 12 Dec 2023
Viewed by 946
Abstract
In recent years, the Internet of Things (IoT) has not only become ubiquitous in daily life but has also emerged as a pivotal technology across various sectors, including smart factories and smart cities. Consequently, there is a pressing need to ensure the consistent and uninterrupted delivery of IoT services. Conformance testing has thus become an integral aspect of IoT technologies. However, traditional methods of IoT conformance testing fall short of addressing the evolving requirements put forth by both industry and academia. Historically, IoT testing has necessitated a visit to a testing laboratory, implying that both the testing systems and testers must be co-located. Furthermore, there is a notable absence of a comprehensive method for testing an array of IoT standards, especially given their inherent heterogeneity. With a surge in the development of diverse IoT standards, crafting an appropriate testing environment poses challenges. To address these concerns, this article introduces a method for remote IoT conformance testing, underpinned by a novel conceptual architecture termed CLOCIS. This architecture encompasses an extensible approach tailored to a myriad of IoT standards. Moreover, we elucidate the methods and procedures integral to testing IoT devices. CLOCIS, which realizes this conceptual architecture, is implemented, and to demonstrate its viability, we conduct IoT conformance testing and present the results. When leveraging CLOCIS, small and medium-sized enterprises (SMEs) and entities developing IoT services stand to benefit from a reduced time to market and cost-efficient testing procedures. Additionally, this innovation holds promise for IoT standardization communities, enabling them to promote their standards more effectively.
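As a rough illustration of the remote-testing idea (test cases as data, executed against a device's endpoint over the network, so the tester and the device under test need not be co-located), here is a minimal Python sketch; the endpoint, resources, and case format are hypothetical and unrelated to the actual CLOCIS interfaces:

```python
import json
import urllib.request
from dataclasses import dataclass

@dataclass
class ConformanceCase:
    name: str
    path: str            # resource on the device under test (assumed)
    expect_status: int
    expect_keys: tuple   # keys the JSON response must contain

def run_suite(base_url: str, cases: list) -> dict:
    """Execute each case against a remote device and record a verdict."""
    verdicts = {}
    for case in cases:
        try:
            with urllib.request.urlopen(base_url + case.path, timeout=5) as resp:
                body = json.loads(resp.read() or b"{}")
                ok = (resp.status == case.expect_status and
                      all(k in body for k in case.expect_keys))
        except Exception:
            ok = False   # unreachable device or malformed response fails the case
        verdicts[case.name] = "PASS" if ok else "FAIL"
    return verdicts

if __name__ == "__main__":
    suite = [ConformanceCase("discovery", "/capabilities", 200, ("version",)),
             ConformanceCase("telemetry", "/sensors", 200, ("readings",))]
    # The hostname is a placeholder; without a live device, all cases FAIL.
    print(run_suite("http://device.example:8080", suite))
```

Keeping cases declarative is what makes such a framework extensible: supporting a new IoT standard means adding case definitions, not a new test harness.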

20 pages, 715 KiB  
Article
Survey: An Overview of Lightweight RFID Authentication Protocols Suitable for the Maritime Internet of Things
by Glen Mudra, Hui Cui and Michael N. Johnstone
Electronics 2023, 12(13), 2990; https://doi.org/10.3390/electronics12132990 - 7 Jul 2023
Cited by 7 | Viewed by 2530
Abstract
The maritime sector employs the Internet of Things (IoT) to exploit many of its benefits to maintain a competitive advantage and keep up with the growing demands of the global economy. The maritime IoT (MIoT) not only inherits security threats similar to those of the general IoT, but also faces cyber threats that do not exist in the traditional IoT due to factors such as the support for long-distance communication and low-bandwidth connectivity. Therefore, the MIoT presents a significant concern for the sustainability and security of the maritime industry, as a successful cyber attack can be detrimental to national security and have a flow-on effect on the global economy. A common component of maritime IoT systems is Radio Frequency Identification (RFID) technology. Previous studies have revealed that current RFID authentication protocols are insecure against a number of attacks. This paper provides an overview of vulnerabilities relating to maritime RFID systems and systematically reviews lightweight RFID authentication protocols and their impacts if they were to be used in the maritime sector. Specifically, this paper investigates the capabilities of lightweight RFID authentication protocols that could be used in a maritime environment by evaluating those protocols in terms of the encryption system, authentication method, and resistance to various wireless attacks.
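For readers unfamiliar with the class of protocols surveyed, the sketch below shows the general shape of an "ultralightweight" challenge-response built only from XOR and bitwise rotation, operations cheap enough for passive tags. It is a deliberately toy exchange, not a secure or published protocol; the survey's point is precisely that many designs of this kind are breakable:

```python
WORD = 16  # tag word size in bits (assumed)

def rot(x: int, n: int) -> int:
    """Left-rotate a WORD-bit value by n bits (cheap in tag hardware)."""
    n %= WORD
    return ((x << n) | (x >> (WORD - n))) & (2**WORD - 1)

def tag_response(key: int, challenge: int) -> int:
    """Tag side: respond using only XOR and rotation on the shared key."""
    return rot(key ^ challenge, challenge % WORD) ^ key

def reader_verify(key: int, challenge: int, response: int) -> bool:
    """Reader side: recompute the expected response with the shared key."""
    return response == tag_response(key, challenge)

if __name__ == "__main__":
    import secrets
    key = secrets.randbits(WORD)
    nonce = secrets.randbits(WORD)      # reader's fresh challenge
    resp = tag_response(key, nonce)     # computed on the tag
    print("authenticated:", reader_verify(key, nonce, resp))
```

Because every operation here is linear over bits, an eavesdropper collecting a few challenge/response pairs can recover the key, which illustrates the attack surface such surveys evaluate.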

13 pages, 431 KiB  
Article
Clustering Algorithms for Enhanced Trustworthiness on High-Performance Edge-Computing Devices
by Marco Lapegna, Valeria Mele and Diego Romano
Electronics 2023, 12(7), 1689; https://doi.org/10.3390/electronics12071689 - 3 Apr 2023
Cited by 2 | Viewed by 1168
Abstract
Trustworthiness is a critical concern in edge-computing environments, as edge devices often operate in challenging conditions and are prone to failures or external attacks. Despite significant progress, many solutions remain unexplored. An effective approach to this problem is the use of clustering algorithms, which are powerful machine-learning tools that can discover correlations within vast amounts of data. In the context of edge computing, clustering algorithms have become increasingly relevant, as they can be employed to improve trustworthiness by classifying edge devices based on their behaviors or by detecting attack patterns from insecure domains. In this context, we develop a new hybrid clustering algorithm for computing devices that is suitable for edge-computing infrastructures and that can categorize nodes based on their trustworthiness. This algorithm is thoroughly assessed and compared to two computing systems equipped with high-end GPU devices with respect to performance and energy consumption. The evaluation results highlight the feasibility of designing intelligent sensor networks that make decisions at the data-collection points, thereby enhancing trustworthiness and preventing attacks from unauthorized sources.
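As a simplified illustration of trust-oriented clustering (not the paper's hybrid algorithm), the sketch below groups devices by behavioral metrics with plain k-means and labels the cluster with the lower metric profile as trusted; the features are assumed for the example:

```python
import numpy as np

def kmeans(x: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """Plain k-means: random initial centers, alternate assign/update."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

if __name__ == "__main__":
    # Rows: [message error rate, latency jitter, failed-auth rate] per device.
    behavior = np.array([[0.01, 0.05, 0.00], [0.02, 0.04, 0.01],   # healthy
                         [0.30, 0.60, 0.25], [0.35, 0.55, 0.30]])  # suspect
    labels, centers = kmeans(behavior, k=2)
    trusted = np.argmin(centers.sum(axis=1))  # lower metrics = more trusted
    for i, lab in enumerate(labels):
        print(f"device-{i}: {'trusted' if lab == trusted else 'suspect'}")
```

Running this classification on the edge devices themselves, rather than in a central cloud, is what enables decisions at the data-collection points.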

19 pages, 604 KiB  
Article
Dynamic Load Balancing in Stream Processing Pipelines Containing Stream-Static Joins
by Josip Marić, Krešimir Pripužić, Martina Antonić and Dejan Škvorc
Electronics 2023, 12(7), 1613; https://doi.org/10.3390/electronics12071613 - 29 Mar 2023
Viewed by 1756
Abstract
Data stream processing systems are used to continuously run mission-critical applications for real-time monitoring and alerting. These systems require high throughput and low latency to process incoming data streams in real time. However, changes in the distribution of incoming data streams over time can cause partition skew, which is defined as an unequal distribution of data partitions among workers, resulting in sub-optimal processing due to an unbalanced load. This paper presents the first solution designed specifically to address partition skew in the context of joining streaming and static data. Our solution uses state-of-the-art principles to monitor processing load, detect load imbalance, and dynamically redistribute partitions to achieve optimal load balance. To accomplish this, our solution leverages the collocation of streaming and static data while considering the processing load of the join and the subsequent stream processing operations. Finally, we present the results of an experimental evaluation in which we compared the throughput and latency of four stream processing pipelines containing such a join. The results show that our solution achieved significantly higher throughput and lower latency than the competing approaches.
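The monitor/detect/redistribute loop the abstract describes can be pictured with a toy Python rebalancer over per-partition load counters; the threshold, data structures, and move policy below are hypothetical simplifications (in a real stream-static join, migrating a partition also relocates its slice of the static table):

```python
from collections import defaultdict

IMBALANCE = 1.5  # trigger when the busiest worker exceeds 1.5x the mean load

def rebalance(assignment: dict, load: dict) -> dict:
    """assignment: partition -> worker; load: partition -> tuples/sec."""
    worker_load = defaultdict(float)
    for part, worker in assignment.items():
        worker_load[worker] += load[part]
    mean = sum(worker_load.values()) / len(worker_load)
    for _ in range(len(assignment)):        # bound the number of moves
        busiest = max(worker_load, key=worker_load.get)
        idlest = min(worker_load, key=worker_load.get)
        if worker_load[busiest] <= IMBALANCE * mean:
            break                           # balanced enough
        # Heaviest partition currently on the busiest worker.
        part = max((p for p, w in assignment.items() if w == busiest),
                   key=load.get)
        if worker_load[idlest] + load[part] >= worker_load[busiest]:
            break                           # moving it would not help
        assignment[part] = idlest
        worker_load[busiest] -= load[part]
        worker_load[idlest] += load[part]
    return assignment

if __name__ == "__main__":
    assignment = {f"p{i}": ("w0" if i < 4 else "w1") for i in range(6)}
    rates = {"p0": 90, "p1": 80, "p2": 10, "p3": 10, "p4": 5, "p5": 5}
    print(rebalance(assignment, rates))     # p0 migrates from w0 to w1
```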

18 pages, 2102 KiB  
Article
A Generic Preprocessing Architecture for Multi-Modal IoT Sensor Data in Artificial General Intelligence
by Nicholas Dmytryk and Aris Leivadeas
Electronics 2022, 11(22), 3816; https://doi.org/10.3390/electronics11223816 - 20 Nov 2022
Cited by 1 | Viewed by 1652
Abstract
A main barrier for autonomous and general learning systems is their inability to understand and adapt to new environments, that is, to apply previously learned abstract solutions to new problems. Supervised learning system functions such as classification require data labeling from an external source and do not have the ability to learn feature representations autonomously. This research details an unsupervised learning method for multi-modal feature detection and evaluation to be used for preprocessing in general learning systems. The method comprises a clustering algorithm that can be applied to any generic IoT sensor data and a seeded stimulus-labeling algorithm shaped and evolved by cross-modal input. The method is implemented and tested in two agents consuming audio and image data, each with varying innate stimulus criteria. Their run-time stimuli change over time depending on their experiences, while newly experienced features become meaningful without preprogrammed labeling of distinct attributes. The architecture provides interfaces for higher-order cognitive processes to be built on top of the unsupervised preprocessor. This method is unsupervised and modular, in contrast to the highly constrained and pretrained learning systems that exist, making it extendable and well-disposed for use in artificial general intelligence.
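One minimal way to picture an unsupervised preprocessor of this kind is online "leader" clustering: an incoming feature vector joins the nearest cluster if it is close enough, otherwise it seeds a new one, so novel stimuli become distinct categories without external labels. The radius and data below are assumed for the sketch; this is not the paper's algorithm:

```python
import numpy as np

class OnlineClusterer:
    def __init__(self, radius: float):
        self.radius = radius
        self.centroids: list[np.ndarray] = []
        self.counts: list[int] = []

    def observe(self, x: np.ndarray) -> int:
        """Assign x to a cluster id, creating a new cluster if it is novel."""
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            j = int(np.argmin(dists))
            if dists[j] <= self.radius:
                self.counts[j] += 1   # incremental running-mean update
                self.centroids[j] += (x - self.centroids[j]) / self.counts[j]
                return j
        self.centroids.append(x.astype(float))
        self.counts.append(1)
        return len(self.centroids) - 1

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clu = OnlineClusterer(radius=1.0)
    stream = np.vstack([rng.normal(0, 0.2, (5, 3)),    # one stimulus type
                        rng.normal(3, 0.2, (5, 3))])   # a novel one
    print([clu.observe(x) for x in stream])  # e.g. [0,0,0,0,0,1,1,1,1,1]
```

The same loop applies to any sensor modality once raw input is reduced to a feature vector, which is what makes such a preprocessor generic across IoT data sources.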
