Special Issue "Cloud Computing and Applications"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 29 February 2020.

Special Issue Editor

Prof. Dr. Filipe Araujo
Guest Editor
Department of Informatics Engineering, Centre for Informatics and Systems, University of Coimbra, 3030-290 Coimbra, Portugal
Interests: distributed systems, cloud computing, distributed tracing, observability, wireless ad hoc networks

Special Issue Information

Dear Colleagues, 

Decades of progress in computer hardware and networks, together with recent advances in virtualization and containerization, have made the long-sought vision of cloud computing possible. Cloud providers compete with a wide portfolio of pay-as-you-go services, from simple computing or storage infrastructure to machine learning services, including image, speech, and text recognition. More than simple IT outsourcing, these services can spark new, innovative, and affordable products.

While the benefits are undeniable, citizens and companies must still consider a few drawbacks. First, as integration with the cloud increases, complex pricing schemes become harder to manage and control. A second risk is vendor lock-in, as companies upgrade their online presence with state-of-the-art, provider-dependent cloud services. At the same time, companies should be able to retain part of their data on premises, or spread it across different providers in hybrid and multi-cloud solutions, while maintaining the observability needed to keep distributed systems fine-tuned and highly performant.

The challenges for providers are equally demanding. Downtime, data losses, and data breaches can jeopardize third-party businesses, causing all sorts of damage. To preclude such scenarios, providers must replicate data and services, while maintaining privacy, by preventing access by other users, attackers and their own employees. Finally, providers must operate efficiently, or competition will drive them out of the market.

This Special Issue aims at publishing high-quality manuscripts covering new research on topics related to cloud computing, including but not limited to the following:

  • Cloud applications
  • Cloud architecture
  • Virtualization, containerization and container orchestration
  • Public, private and hybrid clouds
  • Interoperability and portability
  • Microservices
  • Observability and monitoring of distributed systems
  • Security and privacy
  • Reliable operation
  • Efficient operation

Prof. Dr. Filipe Araujo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Cloud applications
  • Cloud architecture
  • Virtualization, containerization and container orchestration
  • Public, private and hybrid clouds
  • Interoperability and portability
  • Microservices
  • Observability and monitoring of distributed systems
  • Security and privacy
  • Reliable operation
  • Efficient operation

Published Papers (5 papers)


Research

Open Access Article
A Hierarchical Modeling and Analysis Framework for Availability and Security Quantification of IoT Infrastructures
Electronics 2020, 9(1), 155; https://doi.org/10.3390/electronics9010155 - 14 Jan 2020
Abstract
Modeling a complete Internet of Things (IoT) infrastructure is crucial to assessing its availability and security characteristics. However, modern IoT infrastructures often have a complex and heterogeneous architecture, so capturing both the architecture and the operative details of an IoT infrastructure in a monolithic model is a challenge for system practitioners and developers. In that regard, we propose a hierarchical modeling framework for the availability and security quantification of IoT infrastructures. The methodology is based on a three-level hierarchical model comprising (i) a reliability block diagram (RBD) at the top level to capture the overall architecture of the IoT infrastructure, (ii) fault trees (FT) at the middle level to elaborate the system architectures of the member systems, and (iii) continuous-time Markov chains (CTMC) at the bottom level to capture the detailed operative states and transitions of the bottom-level subsystems. We consider a specific case study of an IoT smart factory infrastructure, composed of integrated cloud, fog, and edge computing paradigms, to demonstrate the feasibility of the framework. A complete hierarchical model of RBD, FT, and CTMC is developed, and a variety of availability and security measures are computed and analyzed. The analysis results show that more frequent failures in the cloud cause sharper decreases in overall availability, while faster recovery at the edge enhances the availability of the IoT smart factory infrastructure. The results also reveal that the cloud servers' virtual machine monitor (VMM) and virtual machines (VM), and the fog server's operating system (OS), are the components most vulnerable to cyber-security attacks. The proposed modeling and analysis framework, together with the investigation of the analysis results, helps develop and operate IoT infrastructures so as to attain high availability and security, and provides guidelines for decision-making in practice. Full article
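The layered idea in this abstract can be illustrated with a toy calculation: a two-state CTMC gives each subsystem's steady-state availability at the bottom, and series/parallel RBD combinators aggregate them at the top. This is only a minimal sketch; all rates and the cloud/fog/edge structure below are illustrative assumptions, not values from the paper.

```python
# Bottom level: steady-state availability of a two-state (up/down) CTMC.
# With failure rate lam and repair rate mu, the balance equation
# pi_up * lam = pi_down * mu together with pi_up + pi_down = 1
# gives pi_up = mu / (lam + mu).
def ctmc_availability(lam, mu):
    return mu / (lam + mu)

# Top level: RBD combinators. A series structure needs every block up;
# a parallel (redundant) structure fails only if every block is down.
def series(*avail):
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(*avail):
    q = 1.0
    for a in avail:
        q *= 1.0 - a
    return 1.0 - q

# Illustrative rates (per hour) for two redundant cloud servers plus fog
# and edge subsystems; none of these numbers come from the paper.
cloud = parallel(ctmc_availability(0.002, 0.5), ctmc_availability(0.002, 0.5))
fog = ctmc_availability(0.001, 0.2)
edge = ctmc_availability(0.005, 1.0)

overall = series(cloud, fog, edge)
print(f"overall availability: {overall:.6f}")
```

Even this toy version exhibits the paper's qualitative finding: increasing a failure rate lowers overall availability, while raising a repair rate (faster recovery) improves it.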
(This article belongs to the Special Issue Cloud Computing and Applications)

Open Access Article
Bouncer: A Resource-Aware Admission Control Scheme for Cloud Services
Electronics 2019, 8(9), 928; https://doi.org/10.3390/electronics8090928 - 24 Aug 2019
Cited by 1
Abstract
Cloud computing is a paradigm that ensures the flexible, convenient, and on-demand provisioning of a shared pool of configurable network and computing resources. Its services can be offered by either private or public infrastructures, depending on who owns the operational infrastructure. Much research has been conducted to improve cloud resource provisioning techniques. Unfortunately, an abrupt increase in the demand for cloud services can cause resource shortages that affect both providers and consumers. This uncertainty in users' resource demands can lead to catastrophic failures of cloud systems, reducing the number of accepted service requests. In this paper, we present Bouncer, a workload admission control scheme for cloud services. Bouncer works by ensuring that cloud services do not exceed the cloud infrastructure's threshold capacity. Adopting an application-aware approach, we implemented Bouncer on a software-defined network (SDN) infrastructure. Furthermore, we conduct an extensive study to evaluate our framework's performance. Our evaluation shows that Bouncer significantly outperforms conventional state-of-the-art service admission control schemes. Full article
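The core notion of threshold-based admission control can be sketched in a few lines: admit a request only if the cluster's committed resources would stay below a capacity threshold. The class, resource dimensions, and the 90% headroom policy below are illustrative assumptions; the paper's Bouncer is an application-aware scheme running on SDN infrastructure.

```python
# Hypothetical admission controller: admit a service request only if the
# cluster's residual capacity covers its resource demand.
class AdmissionController:
    def __init__(self, cpu_capacity, mem_capacity, threshold=0.9):
        # Reserve headroom: never commit more than `threshold` of capacity.
        self.cpu_limit = cpu_capacity * threshold
        self.mem_limit = mem_capacity * threshold
        self.cpu_used = 0.0
        self.mem_used = 0.0

    def admit(self, cpu_demand, mem_demand):
        if (self.cpu_used + cpu_demand <= self.cpu_limit and
                self.mem_used + mem_demand <= self.mem_limit):
            self.cpu_used += cpu_demand
            self.mem_used += mem_demand
            return True
        return False  # reject: would push the cluster past its threshold

ctrl = AdmissionController(cpu_capacity=100, mem_capacity=256)
print(ctrl.admit(40, 100))  # True: fits within the 90% headroom
print(ctrl.admit(60, 100))  # False: CPU commitment would exceed 90 units
```

Rejecting the excess request up front is what prevents the cascading failures the abstract describes, at the cost of turning some work away during demand spikes.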
(This article belongs to the Special Issue Cloud Computing and Applications)

Open Access Article
EPSim-C: A Parallel Epoch-Based Cycle-Accurate Microarchitecture Simulator Using Cloud Computing
Electronics 2019, 8(6), 716; https://doi.org/10.3390/electronics8060716 - 24 Jun 2019
Abstract
Recently, computing platforms have been configured at large scale to satisfy the diverse requirements of emerging applications such as big data and graph processing, neural networks, and speech recognition. In these platforms, each computing node consists of a multicore processor, an accelerator, and a complex memory hierarchy, connected to other nodes through a variety of high-performance networks. Researchers have traditionally used cycle-accurate simulators to evaluate the performance of computer systems in detail. However, simulators that model modern computing architectures (multi-core, multi-node, datacenter, memory hierarchy, new memories, and new interconnects) are too slow to be practical, and their complexity is rapidly increasing as architectures grow more complex. It is therefore seriously challenging to employ them in the research and development of next-generation computer systems. To address this problem, we previously presented EPSim (Epoch-based Simulator), which divides a simulation run into independently executable epochs and runs them in parallel on a multicore platform, yielding only limited simulation speedup. In this paper, to overcome the computing resource limitations of multicore platforms, we propose EPSim-C (EPSim on Cloud), a novel simulator that extends EPSim and achieves higher performance on a cloud computing platform. EPSim-C performs epoch-based executions in a massively parallel fashion using MapReduce on Hadoop-based systems. In our experiments, we achieved a maximum speedup of 87.0× and an average speedup of 46.1× using 256 cores. As far as we know, EPSim-C is the only existing way to accelerate a cycle-accurate simulator on cloud platforms; this significant performance enhancement allows researchers to model and study current and future cutting-edge computing platforms using real workloads. Full article
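The epoch decomposition described above follows a map-and-reduce shape: split the cycle range into independent epochs, simulate each in parallel, then merge per-epoch statistics. The toy sketch below uses a local thread pool and a placeholder workload; the real EPSim-C distributes epochs as MapReduce tasks on Hadoop, and `simulate_epoch` here is a stand-in, not the simulator's API.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_epoch(epoch):
    # Placeholder workload: pretend each cycle retires one instruction.
    start, end = epoch
    return {"cycles": end - start, "instructions": end - start}

def run_parallel(total_cycles, epoch_len, workers=4):
    # Map step: carve the simulation into independent epochs.
    epochs = [(s, min(s + epoch_len, total_cycles))
              for s in range(0, total_cycles, epoch_len)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(simulate_epoch, epochs))
    # Reduce step: aggregate per-epoch statistics.
    return {k: sum(r[k] for r in results) for k in results[0]}

stats = run_parallel(total_cycles=1_000_000, epoch_len=100_000)
print(stats)
```

Because epochs are independent, the aggregated totals equal those of a sequential run, which is what lets the approach scale out to hundreds of cores without changing results.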
(This article belongs to the Special Issue Cloud Computing and Applications)

Open Access Article
SHIYF: A Secured and High-Integrity YARN Framework
Electronics 2019, 8(5), 548; https://doi.org/10.3390/electronics8050548 - 15 May 2019
Abstract
Cloud computing is becoming a powerful parallel data processing method, and it can be adopted by many network service providers to build a service framework. Although cloud computing can efficiently process large amounts of data, it is easily attacked because of its massively distributed cluster nodes. In this paper, we propose a secured and high-integrity YARN framework (SHIYF), which establishes a close relationship between speculative execution and the security of Yet Another Resource Negotiator (YARN, MapReduce 2.0). SHIYF computes and compares MD5 hashes of the intermediate and final results of the MapReduce process by launching speculative executions at a certain ratio, which makes it possible to find actual and potentially malicious nodes in a Hadoop cluster. A prototype of SHIYF was implemented based on Hadoop 2.8.0. Theoretical derivations and experiments show that SHIYF not only guarantees the security and high integrity of the MapReduce process but also successfully locates malicious and potentially malicious nodes in Hadoop, while increasing overhead only slightly. Furthermore, the malicious node detection ratio exceeds 87%. Full article
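The integrity check at the heart of this scheme can be sketched simply: run the same task on a second node (a speculative execution) and compare MD5 digests of the two outputs; a mismatch flags a suspect node. The functions below simulate node behavior and are purely illustrative; the real system hashes MapReduce intermediate and final results inside Hadoop.

```python
import hashlib

def md5_of(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def run_task(node_is_malicious: bool, payload: bytes) -> bytes:
    # Simulated node: a malicious one tampers with the result.
    return payload[::-1] if node_is_malicious else payload

def verify(payload: bytes, primary_malicious: bool,
           speculative_malicious: bool) -> bool:
    # Compare digests of the primary and speculative outputs.
    primary = run_task(primary_malicious, payload)
    shadow = run_task(speculative_malicious, payload)
    return md5_of(primary) == md5_of(shadow)

payload = b"intermediate map output"
print(verify(payload, False, False))  # True: digests match, no tampering
print(verify(payload, True, False))   # False: mismatch flags a suspect node
```

Launching speculative executions for only a fraction of tasks, as the abstract describes, trades detection coverage against the extra hashing and recomputation overhead.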
(This article belongs to the Special Issue Cloud Computing and Applications)

Open Access Article
Load Balancing Scheme for Effectively Supporting Distributed In-Memory Based Computing
Electronics 2019, 8(5), 546; https://doi.org/10.3390/electronics8050546 - 15 May 2019
Abstract
As digital data have increased exponentially with the growing number of information channels that create and distribute them, distributed in-memory systems were introduced to process big data in real time. However, when load concentrates on a specific node in a distributed in-memory environment, data access performance degrades, reducing overall processing performance. In this paper, we propose a new load balancing scheme that performs data migration or replication according to load status in heterogeneous distributed in-memory environments. The proposed scheme replicates hot data when hot data resides on an overloaded node. If the load on a node increases in the absence of hot data, data are migrated through a hash space adjustment. In addition, when nodes are added or removed, data are redistributed by adjusting the hash space with the adjacent nodes. Clients store the metadata of hot data and reduce accesses to the load balancer through periodic synchronization. Various performance evaluations confirm that the proposed scheme improves overall load balancing performance. Full article
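The hash-space adjustment mentioned here builds on consistent hashing: each key maps to the first node clockwise from its hash, so adding or removing a node only redistributes keys between adjacent nodes. The minimal ring below illustrates that property only; the paper's load-driven boundary shifts and hot-data replication are not modeled, and all names are illustrative.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring (no virtual nodes, no replication)."""

    def __init__(self, nodes):
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def lookup(self, key: str) -> str:
        # First node clockwise from the key's position on the ring.
        hashes = [h for h, _ in self.ring]
        idx = bisect.bisect(hashes, self._h(key)) % len(self.ring)
        return self.ring[idx][1]

    def add_node(self, node: str) -> None:
        bisect.insort(self.ring, (self._h(node), node))

ring = HashRing(["node-a", "node-b", "node-c"])
keys = ("x", "y", "z")
before = {k: ring.lookup(k) for k in keys}
ring.add_node("node-d")
after = {k: ring.lookup(k) for k in keys}
# Only keys falling in the new node's slice of the hash space move,
# and every moved key now belongs to the new node.
moved = [k for k in keys if before[k] != after[k]]
print(moved)
```

This locality is what makes the paper's boundary adjustments cheap: shifting a hash-space boundary migrates data only between the two adjacent nodes involved.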
(This article belongs to the Special Issue Cloud Computing and Applications)
