Special Issue "Algorithms for the Resource Management of Large Scale Infrastructures"

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (31 July 2018)

Special Issue Editors

Guest Editor
Prof. Dr. Danilo Ardagna

Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria Via Golgi 42, 20133 Milano, Italy
Interests: resource provisioning; optimization algorithms; performance modelling; cloud computing; big data management
Guest Editor
Dr. Claudia Canali

Dipartimento di Ingegneria Enzo Ferrari, Università di Modena e Reggio Emilia, Via P. Vivarelli 10/1, 41125 Modena, Italy
Interests: social networks; cloud computing; mobile and wireless systems; distributed systems for ubiquitous web access
Guest Editor
Dr. Riccardo Lancellotti

Dipartimento di Ingegneria Enzo Ferrari, Università di Modena e Reggio Emilia, Via P. Vivarelli 10/1, 41125 Modena, Italy
Interests: management of virtual elements in software-defined data centers; monitoring in cloud computing IaaS infrastructures; management of cloud computing IaaS infrastructures; performance evaluation of multi-tier web clusters

Special Issue Information

Dear Colleagues,

The growing demand for fast, powerful, scalable, and reliable computing and communication infrastructures has driven the evolution of the computation paradigm from in-house solutions towards shared infrastructures, from the cloud computing paradigm (which in the long run can reduce the total cost of ownership by up to 66%) to the most innovative distributed infrastructures proposed for edge/fog computing. The result is the advent of large distributed infrastructures composed of multiple data centers, where virtualization is applied at the level of both computing and networking, to the point where the infrastructure can be described as composed of software-defined data centers.

These infrastructures are becoming more and more popular, especially to support modern applications characterized by huge volumes of data (on the order of peta- or exabytes, so-called big data), possibly collected from thousands of sensors (as in the Internet of Things scenario), and which may be deployed on tens of thousands of VMs interacting through a virtualized network spread over a geographic area. For example, more than 60% of Apache Spark installations are deployed on the cloud. However, the management of such a large and complex infrastructure represents a major problem. On the one hand, there is the need to maximize resource utilization and minimize the energy consumption of the infrastructure, both to reduce costs and for environmental reasons. On the other hand, there is the need to guarantee the resources needed by the hosted applications in terms of computing power, storage, and communication. These requirements may be expressed directly, or may be dictated by a Service Level Agreement at the level of application behavior.

The challenge of infrastructure management in this scenario is clear. Even the simple monitoring of such an infrastructure is difficult, and the complexity of the problem is further exacerbated by the need to react in a timely and unsupervised manner to an ever-changing workload, characterized by unpredictable oscillations in the demands of each application and by applications joining and leaving the infrastructure as they are deployed or dropped. The goal is to devise models, techniques, and algorithms that can support a self-management system able to adapt to, manage, and cope with changes without the need for human supervision.

We invite researchers to submit innovative and original proposals that can advance the state of the art in the field of the management of autonomic infrastructures.

A non-exhaustive list of topics of interest includes:

  • Models and algorithms for resource management
  • Economic models for the management of data centers
  • Models and algorithms for fault tolerance in cloud computing scenarios
  • Algorithms for VM placement/migration in virtualized data centers
  • Management of multi-data-center scenarios
  • Energy models for data centers
  • Resource allocation solutions for microservice-based and container-based systems
  • Management of distributed infrastructures in the presence of software-defined networks, network function virtualization, and/or virtual routers
  • Solutions for the management of fog computing scenarios

Prof. Danilo Ardagna
Dr. Claudia Canali
Dr. Riccardo Lancellotti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 850 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Cloud computing
  • Fog computing
  • Big data applications
  • Self-managing systems
  • Microservice- and container-based systems
  • Software-Defined Networks/Network Function Virtualization
  • Software-Defined Data Centers
  • Heuristic algorithms
  • Game theory
  • Optimization models

Published Papers (4 papers)


Research

Open Access Article: Modeling and Evaluation of Power-Aware Software Rejuvenation in Cloud Systems
Algorithms 2018, 11(10), 160; https://doi.org/10.3390/a11100160
Received: 30 August 2018 / Revised: 13 October 2018 / Accepted: 15 October 2018 / Published: 18 October 2018
Abstract
Long and continuous running of software can cause software aging-induced errors and failures. Cloud data centers suffer from these kinds of failures when Virtual Machine Monitors (VMMs), which control the execution of Virtual Machines (VMs), age. Software rejuvenation is a proactive fault management technique that can prevent the occurrence of future failures by terminating VMMs, cleaning up their internal states, and restarting them. However, the appropriate time and type of VMM rejuvenation can affect performance, availability, and power consumption of a system. In this paper, an analytical model is proposed based on Stochastic Activity Networks for performance evaluation of Infrastructure-as-a-Service cloud systems. Using the proposed model, a two-threshold power-aware software rejuvenation scheme is presented. Many details of real cloud systems, such as VM multiplexing, migration of VMs between VMMs, VM heterogeneity, failure of VMMs, failure of VM migration, and different probabilities for arrival of different VM request types are investigated using the proposed model. The performance of the proposed rejuvenation scheme is compared with two baselines based on diverse performance, availability, and power consumption measures defined on the system.
(This article belongs to the Special Issue Algorithms for the Resource Management of Large Scale Infrastructures)
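The core idea of a two-threshold rejuvenation policy can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's actual Stochastic Activity Network model; the function name, threshold values, and action labels are all illustrative assumptions:

```python
def rejuvenation_action(vmm_age_hours: float, active_vms: int,
                        age_threshold: float = 500.0,
                        vm_threshold: int = 2) -> str:
    """Decide how to rejuvenate an aging VMM (toy two-threshold policy).

    An aging VMM hosting few VMs is restarted immediately (short downtime,
    low migration cost), while a busy one first live-migrates its VMs away,
    preserving availability at the cost of extra migration energy.
    """
    if vmm_age_hours < age_threshold:
        return "none"                  # VMM still young: no action needed
    if active_vms <= vm_threshold:
        return "cold"                  # few VMs: terminate, clean state, restart
    return "migrate-then-restart"      # busy VMM: migrate VMs, then rejuvenate
```

The trade-off the paper evaluates analytically is exactly the one hidden in the two thresholds: rejuvenating too early wastes energy and capacity, rejuvenating too late risks aging-induced failures.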

Open Access Article: SLoPCloud: An Efficient Solution for Locality Problem in Peer-to-Peer Cloud Systems
Algorithms 2018, 11(10), 150; https://doi.org/10.3390/a11100150
Received: 2 September 2018 / Revised: 30 September 2018 / Accepted: 30 September 2018 / Published: 2 October 2018
Abstract
Peer-to-Peer (P2P) cloud systems are becoming more popular due to their high computational capability, scalability, reliability, and efficient data sharing. However, sending and receiving massive amounts of data causes huge network traffic, leading to significant communication delays. In P2P systems, a considerable amount of this traffic and delay is owing to the mismatch between the physical layer and the overlay layer, which is referred to as the locality problem. To achieve higher performance and, consequently, resilience to failures, each peer has to make connections to geographically closer peers. To the best of our knowledge, the locality problem is not considered in any well-known P2P cloud system. However, considering this problem could enhance the overall network performance by shortening the response time and decreasing the overall network traffic. In this paper, we propose a novel, efficient, and general solution for the locality problem in P2P cloud systems, considering the round-trip time (RTT). Furthermore, we suggest a flexible topology as the overlay graph to address the locality problem more effectively. Comprehensive simulation experiments are conducted to demonstrate the applicability of the proposed algorithm in most of the well-known P2P overlay networks, without introducing any serious overhead.
(This article belongs to the Special Issue Algorithms for the Resource Management of Large Scale Infrastructures)
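The essence of RTT-aware overlay construction is simple: when choosing neighbors, prefer peers with the smallest measured round-trip time. A minimal sketch, assuming RTT measurements are already available as a mapping (the function name and data layout are illustrative, not SLoPCloud's actual interface):

```python
def closest_peers(rtt_ms: dict, k: int) -> list:
    """Return the k peers with the smallest measured round-trip time.

    rtt_ms maps a peer identifier to its measured RTT in milliseconds;
    sorting by RTT makes the overlay neighbors match physical proximity,
    which is the mismatch the locality problem describes.
    """
    return sorted(rtt_ms, key=rtt_ms.get)[:k]
```

In a real system the RTT values would come from periodic pings and would be refreshed as network conditions change; the sorting step itself stays the same.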

Open Access Article: Reducing the Operational Cost of Cloud Data Centers through Renewable Energy
Algorithms 2018, 11(10), 145; https://doi.org/10.3390/a11100145
Received: 31 July 2018 / Revised: 31 August 2018 / Accepted: 21 September 2018 / Published: 27 September 2018
Abstract
The success of cloud computing services has led to big computing infrastructures that are complex to manage and very costly to operate. In particular, power supply dominates the operational costs of big infrastructures, and several solutions have to be put in place to alleviate these operational costs and make the whole infrastructure more sustainable. In this paper, we investigate the case of a complex infrastructure composed of data centers (DCs) located in different geographical areas, in which renewable energy generators are installed, co-located with the data centers, to reduce the amount of energy that must be purchased from the power grid. Since renewable energy generators are intermittent, the load management strategies of the infrastructure have to be adapted to the intermittent nature of the sources. In particular, we consider EcoMultiCloud, a multi-objective load management strategy already proposed in the literature, and we adapt it to the presence of renewable energy sources. Hence, cost reduction is achieved in the load allocation process, when virtual machines (VMs) are assigned to a data center of the considered infrastructure, by considering both energy cost variations and the presence of renewable energy production. Performance is analyzed for a specific infrastructure composed of four data centers. Results show that, despite being intermittent and highly variable, renewable energy can be effectively exploited in geographical data centers when a smart load allocation strategy is implemented. In addition, the results confirm that EcoMultiCloud is very flexible and well suited to the considered scenario.
(This article belongs to the Special Issue Algorithms for the Resource Management of Large Scale Infrastructures)
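The allocation principle can be sketched as a tiny scoring function: energy demand not covered by a data center's local renewable production must be bought from the grid at that site's price, so a new VM goes to the DC with the lowest residual grid cost. This is a hedged simplification of the idea, not the actual EcoMultiCloud algorithm; the function name and data layout are assumptions:

```python
def best_data_center(vm_demand_kw: float, dcs: dict) -> str:
    """Pick the DC with the lowest grid-energy cost for a new VM.

    dcs maps a DC name to (spare_renewable_kw, grid_price_per_kwh).
    Only demand exceeding the local renewable surplus is billed.
    """
    def grid_cost(name: str) -> float:
        renewable_kw, price = dcs[name]
        # Demand not covered by local renewables is bought from the grid.
        return max(0.0, vm_demand_kw - renewable_kw) * price
    return min(dcs, key=grid_cost)
```

For example, a DC with enough spare renewable capacity wins even against a DC with a cheaper grid tariff, which is precisely the behavior that lets intermittent renewables be exploited when production is high.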

Open Access Article: Multi-Level Elasticity for Wide-Area Data Streaming Systems: A Reinforcement Learning Approach
Algorithms 2018, 11(9), 134; https://doi.org/10.3390/a11090134
Received: 5 July 2018 / Revised: 31 August 2018 / Accepted: 5 September 2018 / Published: 7 September 2018
Abstract
The capability of efficiently processing the data streams emitted by today's ubiquitous sensing devices enables the development of new intelligent services. Data Stream Processing (DSP) applications allow for processing huge volumes of data in near real-time. To keep up with the high volume and velocity of data, these applications can elastically scale their execution on multiple computing resources to process the incoming data flow in parallel. Since data sources and consumers are usually located at the network edges, geo-distributed computing resources nowadays represent an attractive environment for DSP. However, controlling the applications and the processing infrastructure in such wide-area environments represents a significant challenge. In this paper, we present a hierarchical solution for the autonomous control of elastic DSP applications and infrastructures. It consists of a two-layered hierarchical solution, in which centralized components coordinate subordinated distributed managers, which, in turn, locally control the elastic adaptation of the application components and deployment regions. Exploiting this framework, we design several self-adaptation policies, including reinforcement-learning-based solutions. We show the benefits of the presented self-adaptation policies with respect to static provisioning solutions, and discuss the strengths of reinforcement-learning-based approaches, which learn from experience how to optimize application performance and resource allocation.
(This article belongs to the Special Issue Algorithms for the Resource Management of Large Scale Infrastructures)
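The flavor of a reinforcement-learning elasticity controller can be conveyed with a toy tabular Q-learning sketch: the state is (replicas, load level), the actions are scale in/hold/scale out, and the reward penalizes both SLO violations and over-provisioning. Everything here (state discretization, reward weights, environment dynamics) is an illustrative assumption, not the paper's actual formulation:

```python
import random

ACTIONS = (-1, 0, +1)  # scale in, hold, scale out

def step(replicas, load, action, rng):
    """Toy environment: replicas serve load; unmet load violates the SLO."""
    replicas = min(5, max(1, replicas + action))
    load = min(4, max(0, load + rng.choice((-1, 0, 1))))  # drifting workload
    # Reward trades off SLO violations against resource cost, the two
    # objectives an elasticity controller must balance.
    reward = -5.0 * max(0, load - replicas) - 0.5 * replicas
    return replicas, load, reward

def train(episodes=3000, alpha=0.1, gamma=0.9, eps=0.1, seed=42):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {}  # q[(replicas, load)] -> one value per action in ACTIONS
    for _ in range(episodes):
        replicas, load = 1, rng.randint(0, 4)
        for _ in range(50):
            s = (replicas, load)
            qs = q.setdefault(s, [0.0, 0.0, 0.0])
            a = rng.randrange(3) if rng.random() < eps else qs.index(max(qs))
            replicas, load, r = step(replicas, load, ACTIONS[a], rng)
            ns = q.setdefault((replicas, load), [0.0, 0.0, 0.0])
            # Standard Q-learning update toward the bootstrapped target.
            qs[a] += alpha * (r + gamma * max(ns) - qs[a])
    return q
```

In the paper's hierarchical setting, such learners run per deployment region under a coordinating layer; the sketch only shows the local learning loop.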
