Article

Optimized Energy Cost and Carbon Emission-Aware Virtual Machine Allocation in Sustainable Data Centers

1 School of Computing, SASTRA Deemed University, Thanjavur 613401, India
2 School of Electrical and Electronics Engineering, SASTRA Deemed University, Thanjavur 613401, India
3 Department of Energy IT, Gachon University, Seongnam 13120, Korea
* Authors to whom correspondence should be addressed.
Sustainability 2020, 12(16), 6383; https://doi.org/10.3390/su12166383
Submission received: 29 May 2020 / Revised: 14 July 2020 / Accepted: 4 August 2020 / Published: 7 August 2020
(This article belongs to the Special Issue Sustainability in Applications Using Quantitative Techniques)

Abstract

A cloud data center's total operating cost is dominated by the electricity cost and the carbon tax incurred due to energy consumption from the grid and its associated carbon emission. In this work, we consider geo-distributed sustainable data centers with varying on-site green energy generation, electricity prices, carbon intensities and carbon taxes. The objective function is devised to reduce the operating cost, including the electricity cost and the carbon cost incurred by the power consumption of servers and cooling devices. We propose renewable-aware algorithms that schedule the workload to the data centers so as to maximize green energy usage. Owing to the uncertain and time-variant nature of renewable energy availability, we investigate the impact of carbon footprint, carbon tax and electricity cost in data center selection on total operating cost reduction. In addition, on-demand dynamic optimal frequency-based load distribution within the cluster nodes is performed to eliminate hot spots caused by high processor utilization. The work suggests optimal virtual machine placement decisions that maximize green energy usage with reduced operating cost and carbon emission.

1. Introduction

Large data centers are nowadays an integral part of the information technology (IT) industry. Cloud-based services are highly preferred by organizations and individuals, and organizations consolidate multiple clusters into large data centers. Power consumption has become a significant economic and environmental issue in data centers due to growing demand: data center energy consumption grows by approximately 10–12% per year [1]. Geo-distributed data centers enable providers to establish different renewable energy sources suited to each environment. The energy cost associated with data centers is approximately 42% of their overall operating cost [2]. Service providers are compelled to improve the infrastructure related to server power consumption, cooling provisioning and heat dissipation while maintaining the service level agreement (SLA). Data centers contribute 2% of the world's total carbon dioxide (CO2) emission due to their high energy consumption. The cost of the cooling infrastructure can be 50% or more of the total in a poorly designed data center [3]. Due to increasing power density, heat and thermal management are crucial for data centers to increase the lifetime of the servers and to reduce economic loss in the form of electricity bills. Two possible ways to overcome the problem of CO2 emission are to (1) replace the grid power source with renewable energy sources and (2) improve the Power Usage Effectiveness (PUE) of the data centers. The Green Grid consortium [4] defines the PUE metric as the ratio between the total power consumed by the data center (IT power + overhead power) and the energy consumed by the servers executing the IT load (IT power). The overhead power includes the power consumed by data center infrastructure other than the servers and is dominated by the power consumed by Computer Room Air Conditioning (CRAC) devices. The increase in temperature inside the data center is due to two factors: (1) CPU utilization at higher frequencies and (2) increases in the outside temperature. Thermal management of CRAC units is performed based on rack-level IT loads [5,6]. Two temperature-aware algorithms were proposed to prevent hot spots and to minimize the rise of the operating temperature [7]. A game-based thermal-aware resource allocation was proposed in [8]; it uses a cooperative Nash-bargaining solution to reduce the thermal imbalance in data centers. Threshold-based thermal management was introduced in [9] to handle hot spots effectively but failed to treat the thermal imbalance. Rack-level thermal management was proposed in [10] to distribute the load and handle temperature drops effectively but fails to handle hot spots.
A lower PUE indicates a more efficient data center, with less overhead power relative to IT power. Cloud providers' PUE ranges from 1.1 to 1.2 [11,12], while collocated small data centers still exhibit PUE values of up to 2 [13]. Mixed-integer linear programming was used to minimize operating cost, energy cost and reliability cost by minimizing the number of active PMs in data centers [14]. A stochastic search based on a genetic algorithm was used to reduce IT power consumption and migration cost through energy-aware virtual machine migration [15]. Facebook, Amazon, Microsoft, Apple and Google have built clean energy sources suited to their locations [16,17,18]. Since clean energy is not consistent, its efficient usage carries additional challenges. Data centers draw on off-site grid energy to balance the inconsistent nature of renewable energy. The variable nature of data center workloads and prediction algorithms contribute to power and resource management that uses clean energy more effectively. The two popular on-site energy sources considered are solar and wind. Solar energy follows a pattern: it increases gradually from the morning, reaches its peak at noon, and progressively declines. Wind energy does not have a generation pattern. Renewable energy availability varies with the location of the data center, which paves the way to direct the load to the data center with the maximum renewable supply so that clean energy is used effectively.
In the current state of the art, works approach energy reduction within data centers from different perspectives using traditional energy management techniques. This work highlights two factors: server energy consumption reduction, and the service provider's operating cost and carbon emission reduction. For server energy consumption reduction, it considers the variation of the core parameters of DVFS (Dynamic Voltage and Frequency Scaling), namely frequency, utilization and power consumption. Concerning the workload, the on-demand dynamic optimal frequency for the nodes in the cluster is identified and load balancing is performed to eliminate hot spots due to high processor utilization. Secondly, as many providers own geo-distributed data centers powered by a mixed supply of both grid and renewable sources, this work aims to efficiently utilize the renewable source to reduce the total operating cost and carbon emission. The impact of electricity price, carbon footprint and carbon cost on server and cooling device power consumption is taken into consideration while formulating the proposed objective function. In our previous work [19], VM placement considering dynamic optimal frequency-based allocation was compared with a standard power-efficient algorithm (C-PE). This work extends our previous work with both brown and green energy sources and the related energy cost parameters towards the realization of the proposed objective.
In this work, we pose the following questions: (1) How can the usage of a renewable energy source be maximized when it is not stable? (2) How can the power consumed by CRAC devices and IT devices be reduced to lower the total electricity cost? (3) How can carbon emission be reduced? To address them, an energy source and DVFS-aware VM placement algorithm is proposed to minimize the total cost, carbon footprint and cooling device power consumption for geo-distributed data centers with a mixed supply of grid and clean energy. Container technology along with virtualization is used to provide the necessary environment and isolation for task execution [20].
To achieve the above objectives, this work makes the following key contributions.
  • Optimal DVFS-based VM scheduling to distribute the load among the servers and minimize the operating temperature.
  • Formulation of an objective function for data center selection considering varying carbon tax, electricity cost and carbon intensity.
  • Investigation of the effect of renewable energy source-based data center selection on total cost, carbon cost and CO2 emission.
  • Efficient utilization of VMs through appropriate VM sizing and the mapping of containers to available VM types.
  • Identification of container types using the K-medoids algorithm.
  • Examination of the effect of workload-based tuning of the cooling load on total power consumption.
The remaining sections of the paper are structured as follows: In Section 1, data centers' power consumption issues are delineated. In Section 2, existing research works in the literature related to virtual machine placement and containers are discussed. The architecture of the sustainable data center system model and the problem formulation of stochastic virtual machine placement are given in Section 3 and Section 4. Section 5 and Section 6 present the proposed placement algorithms and the task classification of the Google cluster workload. In Section 7, the experimental environment and the evaluation of the proposed algorithms are detailed. Section 8 concludes the findings of this research work.

2. Related Works

Extensive research has been carried out on energy efficiency in data centers, focusing on optimal QoS, efficient utilization of resources and operating cost reduction. However, it remains challenging to satisfy the needs of users and service providers with efficient energy management. From an energy efficiency perspective, the focus may be at the software level, the hardware level or an intermediate level [21].

2.1. DVFS and Energy-Aware VM Scheduling

The growth of data centers in terms of size and quantity leads to a significant increase in energy consumption, and thus to more challenges in its management. In the DVFS-based energy-efficient power management approach, the working frequency and voltage of the CPU are adjusted dynamically to alter the energy utilization of the servers. For effective energy savings in data centers, task scheduling is carried out based on DVFS. The authors in [22] proposed an energy-aware VM allocation algorithm intended to solve a multi-objective problem optimizing job performance and power consumption along with the associated constraints. DVFS-based energy management and scheduling on heterogeneous systems is performed in [23]. Web server performance control issues were handled using DVFS as a control variable to reduce the server's energy consumption [24].
A DVFS-based approach has been proposed with the objective of enhancing resource utilization and minimizing energy consumption without compromising system performance; the workloads are prioritized based on available resource demand and explicit service level agreement requirements [25]. A DVFS-based technique was utilized for constrained parallel tasks in [26]; the authors claim that the proposed method minimizes energy consumption with minimum task execution time. A DVFS-based approach was applied to optimize the energy efficiency of data centers in [27]. To enhance the trade-off between application performance and energy savings, an integrated approach of DVFS and VM consolidation was addressed and validated using a real test bed [28]. The results indicate a trade-off between energy and migration time while performing energy-efficient VM consolidation among geographically distributed data centers.
A task model was proposed in [29] that depicts the QoS of tasks at the lowest frequency. The energy consumption ratio (ECR) is utilized to estimate the efficiency of diverse frequencies in task execution. To reduce the energy consumption of the servers, incoming tasks are dispatched to active servers and the execution frequencies are then adjusted. A migration algorithm is utilized on individual servers to balance the workload dynamically and minimize the ECR of the server. In [30], a power-aware extension of WorkflowSim integrates a power model for the optimization of energy-saving management considering computing, reconfiguration and network costs, with host energy saving achieved through DVFS. The above-mentioned approaches aim to minimize the energy consumption of data centers as much as possible with a performance trade-off.
Comparatively, in our approach, we consider a renewable energy source along with brown energy for sharing the energy consumption while formulating the optimization problem, which leads to different scenarios that support the performance improvement of the data centers.

2.2. Regional Diversity of Electricity Price and Carbon Footprint-Aware VM Scheduling in Multi-Cloud Green Data Centers

A few authors have formulated the VM allocation problem by merging the energy consumption of data centers with its carbon footprint. Carbon-aware resource allocation considering a single data center was proposed in [31] for provisioning on-demand resources on servers powered by renewable energy. Load distribution among different data centers considering brown energy consumption cost was proposed in [32]. A Min Brown VM placement algorithm was introduced in [33] to minimize brown energy consumption considering the task deadline. VM migration between federated data centers was performed to minimize brown energy cost by considering dynamic electricity pricing [34]. The migration of VMs was considered with the aim of minimizing the carbon footprint in the federated cloud [35]. A combination of wind and solar energy sources was considered with the aim of distributing the load with zero brown energy cost [36]. Delay-constrained applications were considered with the aim of reducing electricity cost [37].
The authors in [38] addressed the VM placement problem with the aim of minimizing energy and the cost associated with the carbon footprint in geographically distributed data centers located within the same country. A dynamic workload scheduling technique was proposed in [39] for servers powered by a renewable energy source. To use renewable energy efficiently, workload migration was addressed in [40]. The authors in [41] proposed a middleware system called GreenWare to increase the renewable energy usage of geo-distributed data centers powered by wind and solar power; the focus of that study was to minimize the carbon footprint of requests within a budget predetermined by the service provider. An adjustable workload allocation approach within geographically distributed data centers based on renewable energy availability was proposed in [42]. A few researchers have focused on resource management strategies in multi-cloud environments. To balance the workload optimally among geographically distributed data centers, an algorithm was proposed in [43] to increase green energy usage and minimize brown energy usage.
With the aim of minimizing brown energy utilization, a load balancing approach utilizing the available green energy was proposed in [44]. A framework was introduced in [45] to minimize the total electricity cost of data centers, with load balancing performed among multiple data centers based on renewable energy availability. A workload and energy management scheme was introduced to decrease the operational cost of the network and the energy costs [46]. A dynamic workload deferral algorithm was introduced in [47] for the multi-cloud environment; based on the diverse locations of the data centers, dynamic electricity prices are taken into account while ensuring the workload deadlines. To allocate workloads to sustainable data centers located at different sites, a Markov chain-based workload scheduling algorithm was proposed in [48].
The above-mentioned approaches formulate the problem of minimizing the total electricity cost of data centers without considering the carbon cost. A data center partially fed by green energy helps the cloud provider reduce its dependency on coal-based energy sources. Comparatively, in our approach, we consider a renewable energy source along with brown energy for sharing the energy consumption of the data centers, with the aim of reducing both the total electricity cost and the carbon cost in geo-distributed data centers.
The amount of renewable energy available and the carbon intensity depend on the location of the data centers. Compared to the existing approaches summarized in Table 1, to enhance renewable energy utilization, we consider a workload shifting approach within geographically distributed data centers with varying carbon intensities and green energy availability. Based on the availability of green energy, the carbon emission in tons/MWh, the electricity price and the carbon cost, preference is given in the selection of the data center for workload shifting. However, due to the intermittent nature of green energy generation, it is still essential to investigate the impact of the aforementioned parameters on the operating cost incurred due to brown energy support.

2.3. Containers

Containers are a lightweight alternative to virtual machines, with lower startup time and communication overhead. They provide a virtual platform and task isolation at the operating system level and are prevalent in providing platform as a service in cloud environments [49]. Docker, a container technology, was compared with the kernel-based virtual machine (KVM) in terms of processing, memory and storage; the performance of containers was the same as bare metal, without the virtualization overhead of VMs. Containers allow horizontally scalable systems for hosting microservices. There is a constraint of resource exploitation under process groups in container-based virtualization techniques [50]. Container as a service lays a bridge between infrastructure as a service (IaaS) and platform as a service (PaaS). Containers offer a portable application environment, freeing application services from platform-as-a-service-specific environments [51]. Docker is an open platform for launching application containers. The Docker Swarm scheduler places containers on available VMs in a round-robin fashion without considering the resource usage of the VMs [52]. A queuing algorithm has been proposed for the placement of containers on VMs to reduce response time and utilize VMs efficiently [53]. A constraint satisfaction programming-based container placement algorithm has been proposed to decrease billing cost and energy consumption by reducing the number of instantiated VMs [54]. A metaheuristic container placement approach has been addressed to reduce migrations and energy consumption and to increase SLA adherence and VM and PM utilization. Figure 1 shows different ways of placing containers. Containers C1, C2 and C3 run directly on the operating system, as in Figure 1a. Containers provide increased performance because, unlike virtual machines, they do not emulate the hardware. The container engine provides isolation, security and resource allocation to containers. The hybrid container architecture, in which the container engine and containers execute on top of a virtual machine, is shown in Figure 1b.

3. The Architecture of the Proposed System

3.1. Sustainable Data Center Model

In data centers, energy consumption plays a critical role in determining the carbon emission of the conventional power generating sources. Data centers ought to be aware of the energy efficiency of IT equipment and cooling subsystems, and of the carbon footprint, with the help of appropriate metrics. Data center ecosystems offer additional flexibility to incorporate on-site renewable power generation to minimize the carbon footprint. The integration of solar and wind energy imposes new challenges on the data center's energy management. Based on the availability of green energy, workloads are assigned to sustainable data centers located in diverse geographical locations with different local weather conditions.
This paper proposes a comprehensive management strategy for sustainable data centers to reduce the energy consumption of the IT load and the cooling supply system. The management techniques must regulate the IT workload based on the available solar and grid energy sources, which can be realized by allocating the workload according to the time-varying nature of renewable power. A data center powered by a hybrid power infrastructure integrating grid utility and solar-based renewable energy is shown in Figure 2. Each rack contains M servers powered by both grid and solar-based renewable energy.

3.2. Proposed Structure of Management System Model

The roles of the management system components presented in Figure 3 are detailed below:
  • Energy-Aware Manager (EAM): The data centers of a cloud provider are located at geo-distributed sites. In addition to physical servers, data centers have additional energy-related parameters: PUE, carbon footprint rate for different energy sources, varying electricity prices and proportional power. The EAM is the centralized node responsible for coordinating the distribution of input requests. It directs requests to the data centers so as to attain minimum operating cost, carbon footprint rate and energy consumption. Each data center registers with the EAM's cloud information service and updates it frequently. The energy-aware manager maintains information about the list of clusters, the carbon footprint rate (CFR), the data center PUE, the total cooling load, the server load, the carbon tax, the carbon cost and the carbon intensity of the data centers.
  • Management Node (MN): Each data center holds several clusters of heterogeneous servers. The cluster manager of each cluster updates the cluster's current utilization, power consumption and number of servers on/off to the MN. The MN receives user requests from the EAM and, based on cluster utilization, distributes the load to the clusters through the cluster managers. The main scheduling algorithm, responsible for the allocation of VMs to PMs and the deallocation of resources after VM termination, is the ARM algorithm (Algorithm 1). It is implemented in the management node.
  • Cluster Manager (CM): Each cluster contains heterogeneous servers with different CPU and memory configurations; the power model of the systems within a cluster is considered homogeneous. Each node in the cluster updates information about its power consumption, resource utilization, number of running VMs, resource availability and current temperature to the CM. The cluster manager is the head node of the cluster and maintains cluster details concerning total utilization, server power consumption, resource availability, power model, type of energy consumed (grid or green) and the temperature of the cluster nodes.
  • Physical Machine Manager (PMM): The PMM is a daemon responsible for maintaining the host CPU utilization percentage, resource allocation for VMs, power consumption, current server temperature, status of VM requests, number of VM requests received, and so on. The PMM shares its resources with the virtual machines and increases its utilization through the virtual machine manager (VMM). It is responsible for updating the aforementioned details to the cluster manager.
  • Virtual Machine Manager (VMM): The VMM utilizes virtualization technology to share the physical machine resources among the virtual machines with process isolation. It decides the number of VMs to be hosted, provisions resources to VMs and monitors each hosted VM's utilization of physical machine resources. It maintains information about CPU utilization, memory utilization, power consumption, the arrival time, execution time and remaining execution time of all active VMs, the number of tasks under execution in each VM, the current state of the VMs, and other resource and process information.
Algorithm 1: ARM Algorithm Approach
Input: DCList, VMinstancelist
Output: TargetVMQ
1 For each interval do
2   ReqQ ← Obtain VM requests based on VMinstancelist;
3   DCQ ← Obtain data centers from DCList;
4   TargetVMQ ← Activate placement algorithm;
5   If interval > min-exe-time then
6     Compl-list ← Collect executed VMs from TargetVMQ;
7     For each VM in Compl-list do
8       Recover the resources related to the VM;
9 Return TargetVMQ.
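To make the ARM loop concrete, the following minimal Python sketch mirrors Algorithm 1. It is an illustrative reconstruction, not the paper's implementation: the VM and PM classes, the pluggable place function and the min_exe_time threshold are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class PM:
    cpu_free: int
    mem_free: int
    def release(self, vm: "VM") -> None:
        # Recover the resources related to the VM (line 8 of Algorithm 1).
        self.cpu_free += vm.cpu
        self.mem_free += vm.mem
        vm.host = None

@dataclass
class VM:
    cpu: int
    mem: int
    arrival: int
    exec_time: int
    host: PM | None = None
    def finished(self, t: int) -> bool:
        return self.host is not None and t >= self.arrival + self.exec_time

def arm(dc_list, vm_requests, num_intervals, place, min_exe_time=0):
    """ARM loop: allocate arriving VMs each interval, reclaim finished ones."""
    target_vm_q = []
    for t in range(num_intervals):
        req_q = [vm for vm in vm_requests if vm.arrival == t]     # line 2
        target_vm_q += place(req_q, dc_list)                      # lines 3-4
        if t > min_exe_time:                                      # line 5
            for vm in [v for v in target_vm_q if v.finished(t)]:  # lines 6-7
                vm.host.release(vm)                               # line 8
                target_vm_q.remove(vm)
    return target_vm_q                                            # line 9
```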

4. Problem Formulation

In this work, each physical machine (PM) is characterized by its resource capacity (processor and memory) and its processor power model. The power consumption is linearly correlated with processor utilization [30]. Each PM has k fixed discrete utilization levels in the execution state. When no workload is assigned, the processor is set to the idle state. The power consumption of the processor at each utilization level is determined by its power model. A VM request is assumed to have three parameters: arrival time, resource requirement and execution time. A VM request is accepted and placed by the placement algorithms if the required resources can be fulfilled by the available PM resource capacity.

4.1. Energy Consumption in Data Centers
The power consumption of all the servers (SP) and of the cooling equipment (overhead power, OP) plays a major role in modeling the data center energy consumption. The amount of energy utilized by the data centers has a direct impact on the carbon footprint.

4.1.1. Power Model of Server

The total facility power (TFP) consumption includes the overhead power consumption (OP) and power consumption of all the servers (SP). It is formulated as (Equation (1)):
$$TFP_d = OP_d + \sum_{c=1}^{tc} ET_c \times \sum_{j=1}^{M} P_j(l) \tag{1}$$
where $tc$, $d$ and $M$ denote the number of clusters, the data center index and the number of machines, respectively.
$P_j(l)$ is the power consumed by the $j$th physical machine. It is derived as [55] (Equation (2)):
$$P_j(l) = \frac{S_j(l) - U_j(l)}{U_j(l+1) - U_j(l)} \times \left( P_j(l+1) - P_j(l) \right) + P_j(l) \tag{2}$$
where $U_j(l) < S_j(l) < U_j(l+1)$ and $0 \le l < k$; $l$ signifies the utilization level and $S_j(l)$ is the current utilization of the $j$th server.
$ET_c$ is the energy type, set to 1 when the cluster is powered by brown energy (B) and to 0 when powered by renewable energy (G).
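To illustrate Equations (1) and (2), the sketch below linearly interpolates a server's power draw between its discrete utilization levels and sums it over a cluster. The utilization/power table is illustrative only, not the SPECpower data used later in the evaluation.

```python
# Illustrative k = 11 discrete utilization levels and measured power values.
UTIL_LEVELS = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]              # percent
POWER_AT_LEVEL = [93, 97, 101, 105, 110, 116, 121, 125, 129, 133, 135]  # watts

def server_power(s: float) -> float:
    """P_j(l): interpolated power (W) at current utilization s (Equation (2))."""
    for l in range(len(UTIL_LEVELS) - 1):
        if UTIL_LEVELS[l] <= s <= UTIL_LEVELS[l + 1]:
            frac = (s - UTIL_LEVELS[l]) / (UTIL_LEVELS[l + 1] - UTIL_LEVELS[l])
            return POWER_AT_LEVEL[l] + frac * (POWER_AT_LEVEL[l + 1] - POWER_AT_LEVEL[l])
    raise ValueError("utilization out of range")

def cluster_it_power(utilizations: list[float]) -> float:
    """Inner sum of Equation (1): total P_j(l) over the M servers of a cluster."""
    return sum(server_power(s) for s in utilizations)
```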

4.1.2. Overhead Power Model

In this work, the data center's temperature is maintained using a Computer Room Air Conditioning (CRAC) unit, which circulates cold air into the clusters to maintain the operating temperature. The overhead power (OP) consumption is formulated in terms of the cooling device's coefficient of performance (CoP) as (Equation (3)):
$$OP_d = \sum_{c=1}^{tc} ET_c \times \frac{\sum_{j=1}^{M} P_j(l)}{CoP(T_{sup})} \tag{3}$$
The coefficient of performance (CoP) is the ratio of the heat removed to the amount of work needed to remove it, and is directly proportional to system efficiency. The CoP of the CRAC unit rises in proportion to the rise in the supply air temperature [56]. The CoP of the CRAC unit can be represented as [57] (Equation (4)):
$$CoP(T_{sup}) = 0.0068\,T_{sup}^2 + 0.0008\,T_{sup} + 0.458 \tag{4}$$
where $T_{sup}$ denotes the difference between the current operating temperature and the required safe operating temperature.
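A small sketch of Equations (3) and (4) for a single grid-powered cluster (ETc = 1); the 10 kW IT load and the 15 °C supply temperature difference in the example are illustrative values.

```python
def cop(t_sup: float) -> float:
    """Coefficient of performance of the CRAC unit (Equation (4))."""
    return 0.0068 * t_sup ** 2 + 0.0008 * t_sup + 0.458

def overhead_power(it_power_w: float, t_sup: float) -> float:
    """Overhead (cooling) power OP for one cluster (Equation (3), ET_c = 1)."""
    return it_power_w / cop(t_sup)

# Example: 10 kW of IT load at a 15-degree supply temperature difference.
# cop(15) = 0.0068*225 + 0.0008*15 + 0.458 = 2.0, so cooling draws ~5 kW.
print(overhead_power(10_000, 15))
```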

4.2. Green Energy

The availability of green energy depends on the environmental weather conditions and time zones of the geographical locations of the data centers. We aim to minimize the carbon footprint by coordinating the green energy availability of the distributed data centers while handling the users' demand. In this work, solar energy is assumed to be the on-site renewable energy used along with brown energy. Solar energy is given higher priority than grid energy during its availability.

4.3. Carbon Cost (CC) and Electricity Cost (EC)

The carbon cost (CC) and electricity cost (EC) of the data center depend upon the carbon tax (CT), the carbon footprint rate (CFR) and the energy price (EP). These factors depend on the green or brown energy sources utilized by the data center. In addition, the carbon footprint rate (tons/MWh), carbon tax (dollars/ton) and energy price (cents/kWh) are location-specific. We aim to reduce the cost associated with the data centers through the optimal selection of a data center, considering the nature of the energy source, the carbon emission, the carbon tax and the energy price, while satisfying the user requests.

4.4. Objective Function

We aim to minimize the data center's overall operating energy cost (TC). An objective function is formulated to calculate this cost considering power consumption and carbon footprint emission. The total cost (TC) for handling the workload in data center d is the sum of the carbon cost (CC) and the electricity cost (EC), formulated as (Equation (5)):
$$TC_d = CC_d + EC_d \tag{5}$$
The first part of Equation (5) represents the carbon cost (CC). It depends on the carbon tax (CT), the carbon footprint rate (CFR) and the total facility power (TFP) consumed by the data center, calculated as (Equation (6)):
$$CC_d = CT_d \times CFR_d \times TFP_d \tag{6}$$
The second part of Equation (5) calculates the data center electricity cost (EC). It is the product of the electricity price (EP) and the total facility power (TFP), calculated as (Equation (7)):
$$EC_d = EP_d \times TFP_d \tag{7}$$
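Putting Equations (5)–(7) together, the total cost reduces to a few multiplications once the facility energy is known. The sketch below assumes the units stated above (carbon tax in dollars/ton, CFR in tons/MWh, energy price in cents/kWh) and an energy figure in kWh; the numbers in the example are illustrative.

```python
def total_cost(tfp_kwh: float, ct_dollars_per_ton: float,
               cfr_tons_per_mwh: float, ep_cents_per_kwh: float) -> float:
    """TC_d = CC_d + EC_d (Equation (5)), with unit conversions made explicit."""
    cc = ct_dollars_per_ton * cfr_tons_per_mwh * (tfp_kwh / 1000.0)  # Eq. (6), $
    ec = (ep_cents_per_kwh / 100.0) * tfp_kwh                        # Eq. (7), $
    return cc + ec

# Example: 1200 kWh of facility energy, $23/ton tax, 0.5 tons/MWh, 10 cents/kWh.
print(total_cost(1200, 23, 0.5, 10))  # 13.8 (carbon) + 120.0 (electricity) = 133.8
```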

Constraints Associated with the Objective Function

The objective function in Equation (5) is subjected to the following constraints:
The sums of the processor requirements $R_{j,i}^{c}$ and memory requirements $R_{j,i}^{m}$ of the n VMs placed on physical machine $PM_i$ must not exceed the processing limit $PM_i^{cpu.max}$ and memory limit $PM_i^{mem.max}$ of the physical machine (Equations (8) and (9)):
$$\sum_{j=1}^{n} R_{j,i}^{c} \le PM_i^{cpu.max} \tag{8}$$
$$\sum_{j=1}^{n} R_{j,i}^{m} \le PM_i^{mem.max} \tag{9}$$
The relation R between VMs and PMs is many-to-one: more than one VM can be placed on one PM, but a VM must be placed on only one physical machine, i.e., $R \subseteq N \times M$ with
$$\forall\, l \in N,\ \forall\, m, n \in M:\ (l, m) \in R \wedge (l, n) \in R \Rightarrow m = n.$$
The total brown energy (B) and green energy consumed by the physical machines should be within the service provider's approved grid electricity consumption (B) and generated green energy (G) (Equations (10) and (11)):
$$TFP_d \le \text{Total assigned brown energy } (B) \tag{10}$$
$$SP_d \le \text{Total generated green energy } (G) \tag{11}$$
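These constraints amount to simple feasibility checks at placement time. A minimal sketch, with illustrative function and parameter names:

```python
def fits_on_pm(pm_cpu_free: float, pm_mem_free: float,
               vm_cpu: float, vm_mem: float) -> bool:
    """Equations (8)-(9): the PM's remaining CPU and memory cover the request."""
    return vm_cpu <= pm_cpu_free and vm_mem <= pm_mem_free

def within_energy_budget(tfp_d: float, brown_budget: float,
                         sp_d: float, green_available: float) -> bool:
    """Equations (10)-(11): facility power within the assigned brown energy,
    server power within the generated green energy."""
    return tfp_d <= brown_budget and sp_d <= green_available
```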

4.5. Performance Metrics

To check the efficiency of the VM to PM mapping, the instruction to total energy ratio (IER), the instruction to cost ratio (ICR) and the instruction to carbon footprint ratio (ICFR) are calculated as (Equations (12)–(14)):
$$ICR = \frac{\sum_{d=1}^{td} \sum_{c=1}^{tc} \sum_{j=1}^{M} \sum_{i=1}^{N} R_{d,c,j,i} \times R_{d,c,j,i}^{c} \times VM_i^{ex}}{\sum_{d=1}^{td} TC_d} \tag{12}$$
$$ICFR = \frac{\sum_{d=1}^{td} \sum_{c=1}^{tc} \sum_{j=1}^{M} \sum_{i=1}^{N} R_{d,c,j,i} \times R_{d,c,j,i}^{c} \times VM_i^{ex}}{\sum_{d=1}^{td} CFR_d \times TFP_d} \tag{13}$$
$$IER = \frac{\sum_{d=1}^{td} \sum_{c=1}^{tc} \sum_{j=1}^{M} \sum_{i=1}^{N} R_{d,c,j,i} \times R_{d,c,j,i}^{c} \times VM_i^{ex}}{\sum_{d=1}^{td} TFP_d} \tag{14}$$
where $R_{d,c,j,i}^{c}$ and $VM_i^{ex}$ are the processor requirement and the execution time of the $i$th VM, and $td$ represents the total number of data centers.
$R_{d,c,j,i}$ encodes the mapping of VMs to PMs: it is set to 1 if $VM_i$ is allocated to $PM_j$ belonging to cluster c in data center d, and to 0 otherwise.
The SLA is calculated by the ratio of VM acceptance (RVA) as (Equation (15)):
$$RVA_d = \frac{\sum_{c=1}^{tc} \sum_{j=1}^{M} \sum_{i=1}^{N} R_{d,c,j,i}}{N} \tag{15}$$
where N signifies the total number of received VM requests and M is the number of machines.
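Once the placements are known, the metrics of Equations (12)–(15) reduce to simple ratios. The sketch below assumes illustrative vm.cpu and vm.exec_time attributes on accepted VM objects.

```python
def placed_work(placements) -> float:
    """Common numerator of Eqs. (12)-(14): sum over accepted VMs of the
    CPU requirement R^c times the VM execution time."""
    return sum(vm.cpu * vm.exec_time for vm in placements)

def icr(placements, total_cost: float) -> float:        # Equation (12)
    return placed_work(placements) / total_cost

def icfr(placements, total_carbon: float) -> float:     # Equation (13)
    return placed_work(placements) / total_carbon

def ier(placements, total_energy: float) -> float:      # Equation (14)
    return placed_work(placements) / total_energy

def rva(num_accepted: int, num_requested: int) -> float:  # Equation (15)
    return num_accepted / num_requested
```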

5. VM Placement Policies

The VM allocation problem can be considered a multitier bin-packing problem. In the first tier, containers are mapped to VMs with the objective of efficient VM utilization; in the second tier, VMs are mapped to PMs to reduce energy consumption and carbon emission. An arriving VM request has different placement choices among multiple data centers in different locations, each with its own carbon footprint rate, PUE, carbon tax and electricity price. In this section, different VM placement methods, with independent data center selection policies, are presented to investigate the impact of the different parameters on energy consumption, RVA percentage, carbon footprint rate and total cost.

5.1. ARM Algorithm

The allocation and reallocation management (ARM) algorithm is given in Algorithm 1. Its operation can be categorized into two parts. Part 1: lines 2–4 perform the VM to PM allocation. Part 2: lines 5–8 perform the resource deallocation in every interval.
The inputs to the algorithm are DCList and VMinstancelist: DCList holds the list of data centers, and VMinstancelist holds the set of VM instances detailed in Section 6.3. The output of the algorithm is TargetVMQ, which holds the VM to PM allocations.

5.2. Renewable and Total Cost-Aware First-Fit Optimal Frequency VM Placement (RC-RFFF)

The proposed RC-RFFF algorithm plans the allocation of VMs on feasible servers, ensuring data center selection based on the minimum total cost obtained from Equation (5), including the carbon tax, carbon footprint rate and energy price of both brown and green energy. The physical machine choice is based on the server's first-fit optimal frequency. For data center selection, first preference is given to renewable source availability, followed by the data centers with the lowest total cost.
The RC-RFFF algorithmic approach is presented in Algorithm 2. DCQ contains the data center list, ReqQ holds the input VM requests and TargetVMQ holds the VM to PM mapping information. RC-RFFF performs data center selection in lines 2–19 of Algorithm 2 based on the carbon tax, energy price, carbon footprint rate and available renewable energy. In line 5, the total dynamic power consumption of the servers in the cluster is calculated using Equation (1) without OPd. In line 6, the power consumption of the VM is estimated by considering the power model of the cluster. Gd in line 8 is set to the available green energy. Lines 9–16 consider the green energy availability while calculating the power consumption of the clusters. The data center selection is based on the sorted order of TCd in line 18. The clusters inside the data center are ordered in increasing order of SPc and Δtot-uti in line 17.
Algorithm 2: ARM RC-RFFF Virtual Machine Placement Algorithm
[The pseudocode of Algorithm 2 is presented as a figure in the original publication; the line numbers referenced below refer to it.]
The host choice is based on the first-fit optimal frequency with a renewable-aware cost calculation. The host selection procedure starts at line 22 of Algorithm 2. The VM is allocated on the first-fit feasible host with the minimum utilization level. For n VM requests, d data centers, c clusters and h available hosts, the complexity of the algorithm is O(ndch). Identifying the data center with the largest green energy availability has complexity O(dc log c), and identifying the host with the optimal frequency has complexity O(ch). Pseudocode for the remaining algorithms discussed in the subsequent sections is not written out, as they are derived from the base Algorithm 2.
The steps of Algorithm 2 carried out in each time interval for new VM allocations are summarized below.
Step 1:
Lines 2–18 identify the data center to schedule the VM on, based on renewable energy availability.
Step 2:
Line 17 sorts the clusters within the data centers in increasing order of energy consumption.
Step 3:
Line 19 sorts the data centers, first in increasing order of total cost (the renewable energy electricity cost and carbon tax are set to 0) and then in non-increasing order of green energy availability.
Step 4:
Lines 22–28 perform the on-demand dynamic optimal frequency-based node selection within the cluster to decide the placement of the VM.

5.3. Cost-Aware First-Fit Optimal Frequency VM Placement (C-FFF)

C-FFF assumes that all data centers have only the brown energy source. The C-FFF algorithm performs data center selection based on the carbon tax, carbon footprint rate and energy price of the available brown energy only. The C-FFF data center selection is the same as in RC-RFFF, except that after calculating Δtot-uti in line 7 of Algorithm 2, the available green energy Gd in line 8 is set to zero. The first-fit optimal frequency-based host selection of C-FFF is the same as in RC-RFFF.

5.4. Renewable and Energy Cost-Aware First-Fit Optimal Frequency VM Placement (REC-RFFF)

REC-RFFF varies from RC-RFFF in calculating the total cost by eliminating the carbon tax and carbon footprint rate parameters in data center selection. In this case, when insufficient renewable energy is available, the data center selection is based on the energy cost of brown energy, estimated from the power consumption and the electricity price of the corresponding data centers. The renewable energy electricity price is set to 0. REC-RFFF differs from RC-RFFF in calculating the total cost in line 18 of Algorithm 2: CCd in Equation (5) is set to 0 while calculating the total cost TCd. The first-fit optimal frequency-based host selection of REC-RFFF is the same as in RC-RFFF.

5.5. Energy Cost with First-Fit Optimal Frequency VM Placement (EC-FFF)

The proposed EC-FFF algorithm assumes that all data centers have only the brown energy source. The EC-FFF data center selection is the same as in REC-RFFF in considering only the energy cost of brown energy for the total cost and eliminating the carbon emission parameters. The total cost TCd in line 18 of Algorithm 2, derived from Equation (5), is modified with CCd set to zero, and the available green energy Gd in line 8 is set to zero. The host selection of EC-FFF is the same as in REC-RFFF.

5.6. Renewable and Carbon Footprint-Aware First-Fit Optimal Frequency VM Placement (RCF-RFFF)

The proposed RCF-RFFF algorithm ensures data center selection based only on the carbon footprint rate, while including renewable energy availability. The carbon footprint rate of the renewable source is set to 0. RCF-RFFF differs from RC-RFFF in data center selection, in the calculation of the total cost in line 18 of Algorithm 2: CTd is set to 1 in Equation (6) to calculate CCd, and the total cost equation in line 18 of Algorithm 2 is replaced with Equation (6). The rest of the algorithm is the same as Algorithm 2. The host selection of RCF-RFFF is the same as in RC-RFFF.

5.7. Carbon Footprint Rate-Aware First-Fit Optimal Frequency VM Placement (CF-FFF)

The CF-FFF algorithm assumes data centers with only brown energy. The CF-FFF data center selection is the same as in RCF-RFFF except that Gd is set to zero in line 8 of Algorithm 2. The host selection of CF-FFF is the same as in RCF-RFFF.

5.8. Renewable and Carbon Cost-Aware First-Fit Optimal Frequency VM Placement (RCC-RFFF)

The RCC-RFFF data center selection is based on the carbon cost obtained from Equation (6), including the carbon tax and the carbon footprint rate but excluding the electricity cost. It is an extension of RCF-RFFF and varies in the calculation of the total cost in line 18 of Algorithm 2: the total cost equation in line 18 is replaced with Equation (6) with CTd set to the data center's carbon tax. The host selection of RCC-RFFF is the same as in RCF-RFFF.

5.9. Carbon Cost-Aware First-Fit Optimal Frequency VM Placement (CC-FFF)

The CC-FFF algorithm assumes data centers with only brown energy. It is the same as RCC-RFFF except in data center selection, where Gd in line 8 is set to 0. The host selection of CC-FFF is the same as in RCC-RFFF.
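Summarizing Sections 5.2–5.9: the eight policies differ only in whether on-site green energy is considered (Gd > 0) and in which cost terms enter the data center ranking in line 18 of Algorithm 2. The sketch below encodes this as a configuration table; the dictionary encoding and the dc attribute names are illustrative, and the CTd = 1 convention that separates RCF-RFFF/CF-FFF from RCC-RFFF/CC-FFF is handled by the carbon-tax value rather than an extra flag.

```python
# (use_green, use_carbon_term, use_electricity_term) per policy.
POLICIES = {
    "RC-RFFF":  (True,  True,  True),   # total cost, renewable-aware
    "C-FFF":    (False, True,  True),   # total cost, brown only
    "REC-RFFF": (True,  False, True),   # electricity cost only
    "EC-FFF":   (False, False, True),
    "RCF-RFFF": (True,  True,  False),  # carbon footprint: pass carbon_tax = 1
    "CF-FFF":   (False, True,  False),
    "RCC-RFFF": (True,  True,  False),  # carbon cost: pass the real carbon tax
    "CC-FFF":   (False, True,  False),
}

def ranking_score(policy: str, dc) -> float:
    """Score used to sort data centers ascending (lower is preferred)."""
    use_green, use_carbon, use_elec = POLICIES[policy]
    g = dc.green_energy if use_green else 0.0      # G_d forced to 0 for *-FFF
    brown = max(dc.power_demand - g, 0.0)          # demand left for the grid
    score = 0.0
    if use_carbon:
        score += dc.carbon_tax * dc.cfr * brown    # Equation (6)
    if use_elec:
        score += dc.energy_price * brown           # Equation (7)
    return score
```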

6. Google Cluster Workload Overview

Three versions of the cloud dataset [58] executed on Google compute nodes are publicly available, exposing the job types, resource usage and scheduling constraints of a real workload. A node receives work in the form of a job; a job contains one or more tasks with individual resource requirements, and Linux containers are used to run each task. In this work, the second version is used. The second version [59] holds 29 days of workload information from 11K machines, collected from May 2011. In this version, two tables, the task events table and the resource usage table, provide information about the resource request and resource usage of each task. The task events table provides the timestamp, job ID, task index and resource requests for CPU cores, memory and local disk space, along with other related information. Each task in the task events table is considered a container request. In this work, the CPU and memory requirements of each task from the task events table are utilized for container task categorization.

6.1. K-Medoids Clustering

K-medoids is an unsupervised partitional clustering algorithm that minimizes the sum of dissimilarities between objects in a cluster. It is robust to noise and outliers. For each cluster, one object is identified as the representative of the cluster. The algorithmic procedure is as follows:
Step 1:
K values from the dataset are identified as the initial medoids.
Step 2:
Calculate the Euclidean distance and associate every data point with the closest medoid.
Step 3:
A selected object is swapped with a new object based on the objective.
Step 4:
Steps 2 and 3 are repeated until there is no change in the medoids.
The repetition of steps 2 and 3 leads to one of the four situations given below (a minimal code sketch of the procedure follows the list):
  • The current cluster member may be shifted out to another cluster.
  • Other cluster members may be assigned to the current cluster with a new medoid.
  • The current medoid may be replaced by a new medoid.
  • The redistribution does not change the objects in the cluster, resulting in a smaller square error criterion.
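A minimal PAM-style sketch of the above procedure, applied to two-dimensional (CPU, memory) request vectors; the iteration cap and the random seed are illustrative assumptions.

```python
import random
import numpy as np

def k_medoids(points: np.ndarray, k: int, iters: int = 100, seed: int = 0):
    """Cluster n x 2 request vectors around k medoids (steps 1-4 above)."""
    rng = random.Random(seed)
    medoids = points[rng.sample(range(len(points)), k)]          # step 1
    for _ in range(iters):
        # Step 2: assign each point to its closest medoid (Euclidean distance).
        dist = np.linalg.norm(points[:, None, :] - medoids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Step 3: within each cluster, swap in the member that minimizes the
        # total dissimilarity to the other members.
        new_medoids = medoids.copy()
        for c in range(k):
            members = points[labels == c]
            if len(members) == 0:
                continue
            pair = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2)
            new_medoids[c] = members[pair.sum(axis=1).argmin()]
        # Step 4: stop when the medoids no longer change.
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels
```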

6.2. Characteristics of Task Clusters

A random sample of 15,000 records from the first-day trace of the Google workload version 2 [59] is considered in this work to identify the container types. The resource requests (processor cores and memory) of the tasks in the trace are normalized based on the maximum resource capacity of the machines [59]. The resource request details are de-normalized based on the machine characteristics given in the physical machine configuration tables. The containers are executed inside the VMs, and the containers placed inside a VM share the VM's resources.
Figure 4 and Figure 5 display the percentage distribution of tasks among the 10 clusters identified using the K-medoids algorithm presented in Section 6.1. The data pattern represents the container resource requirements. The first four clusters contribute 67.47% of the overall tasks, and the remaining 32.53% is shared between clusters 5 to 10. The tasks under clusters 1 to 4 can be categorized as tasks with minimum resource requirements. The tasks under clusters 3, 4, 5, 7 and 9 can be categorized as tasks with medium resource requirements. Tasks under clusters 6 and 10 can be categorized as having the highest resource requirements. Cluster 2 has the highest contribution, 23.8% of the tasks, with requests of 2.5 CPU cores and 2 GB of memory. Task clusters 5 to 10 contain tasks with CPU requirements of more than 6 cores and memory requirements of more than 7 GB. Task cluster 6 has a 1.5% contribution with the highest CPU and memory requests of 22 cores and 27 GB. Task cluster 10 holds 1.5% with the highest CPU requirement of 30 cores and a memory requirement of 9 GB. Statistically, tasks with high resource requirements occur less frequently than tasks with medium and minimum requirements. The medoid identified in each cluster is considered the representative of the cluster to determine the appropriate container size for the tasks within the cluster, as given in Table 2.

6.3. Resource Request-Based Optimal VM Sizing for Container Services (CaaS)

After identifying the cluster types for the tasks from the selected dataset, the virtual machine sizing for executing the tasks of each cluster type has to be identified. The containers are executed on the virtual machines and share the virtual machine's resources. The physical machines are partitioned into virtual machines: virtualization technology enables the sharing of physical resources with resource isolation and increases the utilization of the physical resources.
To estimate the effective VM size for hosting the identified cluster types, the frequency of occurrence of the tasks and their resource usage in each cluster are estimated on an hourly basis over a 24 h duration. The resource requirement per hour (CPU-reqh-C1) for the tasks in cluster C1 is calculated from the average number of tasks (Num_taskh-C1) and the average resource usage (CPU_Usageh-C1) of the tasks belonging to C1 executed in the system in each hour (h). CPU-reqh-C1 is approximated based on the frequency of occurrence within the 24 h period.
The number of vCPUs a virtual machine can hold depends on the capacity of the particular physical machine, the number of virtual machines hosted on it, the infrastructure and the limit set by the provider. The virtual machine CPU (vCPU) count for a VM is decided by dividing the hourly CPU-reqh-C1 by an integer m, where m takes values from 2 to 9. The set of values obtained by dividing CPU-reqh-C1 by m with modulus zero is considered for vCPU sizing.
The virtual machine vCPU configuration for a specific cluster C1 is estimated on hourly basis (h) as
CPU-reqh-C1 = (Num_taskh-C1 × CPU_Usageh-C1)/m
The virtual machine memory configuration for a specific cluster C1 is estimated as
mem-reqh-C1 = (Num_taskh-C1 × mem_Usageh-C1)/m
Further, the virtual machine vCPU and memory sizes are identified for each cluster based on the best match to the number of physical machines and their available capacities.
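The hourly sizing rule can be sketched as follows; the task count and per-task usages in the example are illustrative, not values from the trace.

```python
def candidate_vm_sizes(num_tasks_h: int, cpu_usage: float, mem_usage: float):
    """Divide the hourly aggregate demand of a cluster by m = 2..9, keeping
    only modulus-zero divisors of the CPU requirement (Section 6.3)."""
    cpu_req = num_tasks_h * cpu_usage     # aggregate vCPU demand for the hour
    mem_req = num_tasks_h * mem_usage     # aggregate memory demand (GB)
    sizes = []
    for m in range(2, 10):
        if cpu_req % m == 0:
            sizes.append((cpu_req // m, mem_req / m))
    return sizes                          # candidate (vCPU, memory GB) pairs

# Example: 24 tasks in the hour, each needing 1 core and 0.5 GB.
print(candidate_vm_sizes(24, 1, 0.5))
# [(12, 6.0), (8, 4.0), (6, 3.0), (4, 2.0), (3, 1.5)]
```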

6.4. Determine Optimum Number of Tasks for VM Types

The optimum number of tasks is estimated for each virtual machine type using Algorithm 3, for efficient utilization of the virtual machines; the aim of this mapping is to avoid underutilizing them. Algorithm 3 determines the minimum number of tasks of a cluster type that maximizes the utilization of each VM type's resources. Each cluster type is mapped to the VM types identified in Section 6.3, and the list of feasible VM types is identified as given in the tables below. The minimum number of tasks Nt for maximum utilization of the feasible VMs of each cluster is considered. Table 3 presents the container to VM mapping produced by Algorithm 3; the tasks are mapped to the VMs based on Table 3.
Algorithm 3: Identify the optimum number of tasks from each cluster for a VM type
Input: Task-List, VM-instancelist
Output: NT(tasktype, VMtype)
For each tasktype in Task-List
  For each VMtype in VM-instancelist
    Nt ← the minimum number of tasks of tasktype that causes maximum utilization of VMtype resources,
         i.e., Min(Ntmax-CPU, Ntmax-Mem)
    NT(tasktype, VMtype).add(Nt)
  End
End
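A runnable sketch of the inner computation of Algorithm 3; the task and VM dimensions in the example are illustrative.

```python
import math

def optimum_tasks(task_cpu: float, task_mem: float,
                  vm_cpu: float, vm_mem: float) -> int:
    """Nt = Min(Ntmax-CPU, Ntmax-Mem): the task count at which either the
    CPU or the memory of the VM type is saturated."""
    nt_max_cpu = math.floor(vm_cpu / task_cpu)
    nt_max_mem = math.floor(vm_mem / task_mem)
    return min(nt_max_cpu, nt_max_mem)

# Example: tasks of 2.5 cores / 2 GB packed onto an 8-vCPU, 16 GB VM type.
print(optimum_tasks(2.5, 2.0, 8, 16))   # 3 tasks; packing is CPU-bound
```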

7. Performance Evaluation

The experimental setup and the results obtained from the aforementioned VM placement algorithms are discussed in this section. In view of the expenditure and time involved in comprehensive real-time experimentation, the environment is simulated using MATLAB.

7.1. Experimental Environment for Investigation of Resource Allocation Policies

7.1.1. Data Center Power Requirement

The power consumption of a task is measured based on the processor power consumption incurred by its utilization. All servers are considered to be in the off state when not in use, consuming no power. A temperature of 23 °C is considered the data center's safe operating temperature. The peak server load (IT load) power of the data center is estimated at approximately 52 kW for the server specification given in Table 4. The floor space of the data center is approximately 500 square feet. The total electricity power requirement is approximately 124 kW (including the cooling load, UPS and lighting). The total processor power consumption of the servers is assumed to be within 17.30 kW. The cooling load due to processor utilization is restricted to 12.11 kW [60]. The renewable-aware algorithms assume clusters powered by both grid and renewable energy in all the data centers; the clusters are powered by only one of the energy sources at a time. The cooling devices are powered only by the grid energy source in all the data centers.

7.1.2. Data Center Physical Machine Configuration

Table 4 and Table 5 describe the heterogeneous physical machines used in this simulation, with varying power models based on the SPECpower benchmark [61]. To evaluate the algorithms presented in Section 5, an IaaS is modeled using four small-scale data centers with 100 heterogeneous servers located in four cities, namely Jacksonville, Miami, Orlando and Tampa. Each data center has two clusters of heterogeneous machines powered by both renewable and grid power; the machines in each cluster follow a particular power model. All data centers are assumed to have a cooling device with the CoP of Equation (4), powered only by grid power. VM reservations are modeled as in Table 6, based on Section 6.3. Each data center holds two clusters with unique carbon footprint rates. Each data center's cluster carbon footprint rate, energy price and carbon tax are based on [62,63] and given in Table 7 [38].

7.1.3. Solar Energy

The hourly solar irradiance and temperature data were reported for the entire year of 2018 [64]. The solar output power (P) based on Equation (16) was used to generate the solar energy (kWh/m2/day) of the four data centers. With the Solarbayer configuration of flat-plate collectors of 2684 m2 mounted at a fixed angle [65], the solar power output (P) for mean solar irradiance β (kW/m2) and ambient temperature T is calculated as [66] (Equation (16)):
$$P = \lambda \times A \times \beta \times \left( 1 - 0.005\,(T - 25) \right) \tag{16}$$
where A (m2) is the area of the solar unit and λ is the solar conversion coefficient. We assume the solar energy trace is 0 before 6 a.m. and after 6 p.m. Figure 6 displays the solar power generated at the different locations.
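A sketch of Equation (16) with the 6 a.m.–6 p.m. assumption stated above. The collector area follows the cited configuration; the conversion coefficient λ is an illustrative value, not one reported in the paper.

```python
AREA_M2 = 2684    # flat-plate collector area A from the cited configuration
LAMBDA = 0.15     # solar conversion coefficient (illustrative assumption)

def solar_power_kw(irradiance_kw_m2: float, ambient_temp_c: float, hour: int) -> float:
    """Solar output P (kW) per Equation (16); zero outside 6 a.m. to 6 p.m."""
    if hour < 6 or hour >= 18:
        return 0.0
    return LAMBDA * AREA_M2 * irradiance_kw_m2 * (1 - 0.005 * (ambient_temp_c - 25))

# Midday example: 0.8 kW/m2 irradiance at 30 degrees C gives about 314 kW.
print(solar_power_kw(0.8, 30, 12))
```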

7.2. Experimental Results

The Google workload is studied and the tasks are clustered according to their resource request patterns using the clustering presented in Section 6.1. The VM sizing listed in Table 6 is based on the procedure defined in Section 6.3. In our experiment, the identified task containers are hosted in the corresponding virtual machine types in each processing window. Each processing window is considered to have a duration of 300 s. At the start of each processing window, the input request is received. Based on the Lublin-Feitelson model [67], the arrival pattern of the identified task containers, along with the number of tasks and the runtime of each task, is generated. The Gamma and hyper-Gamma Lublin parameters are utilized to generate tasks with varying holding times under a standard arrival time model. The task containers are mapped to appropriate VM types. Figure 7 displays the CPU demand of the VM types for the task containers in the generated workload. Only the active execution time of a VM is considered. Each VM is assigned at minimum a single physical core of the host. All containers get the same portion of CPU cycles. The CPU limit and CPU request are considered the same. This work considers only the CPU utilization of the VM and does not consider communications between VMs and containers. The memory limit and memory request are considered the same for the guaranteed quality of service class. A local disk space of 10 GB is assumed and allotted for each virtual machine to provide enough space for operating system installation on each VM. The experimental setup is used to evaluate the proposed VM placement model in terms of carbon cost, consumption of green energy, consumption of brown energy, carbon footprint and total operating cost.

7.2.1. Energy and Cost Efficiency of the Proposed Algorithms

We evaluate the proposed VM placement algorithms to explore the impact on grid energy and solar energy consumption, carbon emission and total cost for the CPU demand presented in Figure 7. The renewable-based algorithms, namely RC-RFFF, REC-RFFF, RCF-RFFF and RCC-RFFF, give high priority to renewable sources, when available, to power the servers. When the renewable source is insufficient, the data center selection policy is independent for each proposed algorithm, based on total cost (TC), carbon cost (CC) or electricity cost (EC). The grid energy-based algorithms, namely C-FFF, EC-FFF, CF-FFF and CC-FFF, consider only the grid source, with independent data center selection policies based on the aforementioned parameters.

7.2.2. Discussion on Grid Energy Consumption and Carbon Footprint Emission

The quantity of brown energy consumed by the different VM placement algorithms is depicted in Figure 8. In C-FFF, which ignores renewable energy availability and targets total cost reduction considering the varying electricity price and carbon tax, the brown energy usage is 11,222.78 kWh with a 95% confidence interval (CI) of (1007.74, 14,875.94). In RC-RFFF, targeting total cost reduction, the brown energy usage is 7220.28 kWh with a 95% CI of (218.44, 14,869.16). The RC-RFFF brown energy usage is thus 35.6% less than that of C-FFF due to its consideration of renewable energy. In EC-FFF, with electricity cost reduction as the objective and without consideration of green energy, the brown energy consumption is 11,128.31 kWh with a 95% CI of (958.84, 14,881.43). In REC-RFFF, the brown energy usage is 6913.23 kWh with a 95% CI of (277.13, 14,878.51); REC-RFFF thus uses 37.8% less brown energy than EC-FFF due to its consideration of renewable energy. Similarly, in CF-FFF the brown energy usage is 12,131.7 kWh with a 95% CI of (975.20, 14,875.44), while in RCF-RFFF it is 7903.63 kWh with a CI of (272.06, 14,871.14), which is 34.85% less than CF-FFF. In CC-FFF, the brown energy consumption is 12,029.22 kWh with a 95% CI of (1028.02, 14,870.66); in RCC-RFFF it is 7869.22 kWh with a CI of (269.13, 14,867.92), which is 34.58% less than CC-FFF. It can be inferred from these results that the renewable-based algorithms use less brown energy than their counterparts because they schedule the workload to the data centers based on green energy availability in order to maximize its usage.
In Figure 9, the carbon emissions of the proposed algorithms are compared. The renewable-based algorithms produce less carbon emission than their grid-based counterparts. C-FFF emits 0.44441 tons of carbon with a 95% CI of (0.03716, 0.59748). RC-RFFF emits 0.29734 tons with a CI of (0.01794, 0.59738), 33.09% less than the former. EC-FFF holds 0.45197 tons with a CI of (0.03796, 0.59842) and CF-FFF holds 0.46218 tons with a CI of (0.02619, 0.59758). Similarly, REC-RFFF holds 0.30034 tons with a CI of (0.02234, 0.59792) and RCF-RFFF holds 0.30121 tons with a CI of (0.01084, 0.59745). Both approaches lead to approximately 34% less carbon emission than their grid counterparts.
It is noteworthy that the energy consumption and carbon emission of the renewable-based algorithms in the beginning intervals are significantly lower than those of the grid-based algorithms, and become similar at later intervals, which reveals the uncertainty of renewable energy availability across the intervals within a day.

7.2.3. Discussion on Total Cost

Figure 10 portrays the total operating cost of the proposed algorithms. The C-FFF approach results in a total operating cost of $92.29 with a 95% CI of (7.99, 122.79). RC-RFFF yields $65.35 with a CI of (3.58, 122.75), which is 29.19% less than its grid counterpart. EC-FFF holds $92.13 with a CI of (7.68, 122.84) and REC-RFFF holds $62.47 with a CI of (3.93, 122.82); REC-RFFF thus provides a 32.19% reduction in total operating cost. CF-FFF yields $98.45 with a CI of (7.20, 122.82) and RCF-RFFF results in $69.30 with a CI of (3.30, 122.76). The CC-FFF approach presents $97.63 with a CI of (7.96, 122.76) and RCC-RFFF yields $69.55 with a CI of (3.82, 122.75), which is 28.76% lower than the equivalent brown energy-based approach. The renewable-based algorithms incur a significantly lower cost in the initial intervals and a cost similar to the grid algorithms in later intervals, due to the non-availability of renewable energy later in the day.
Table 8 summarizes the energy consumption of the proposed renewable-based and brown energy-based algorithms discussed in Section 5. To assess how efficiently the different algorithms map VMs to PMs, the instruction to total energy ratio (IER), instruction to carbon footprint ratio (ICFR) and instruction to cost ratio (ICR) are calculated from the results in Table 8 using Equations (12)–(14), and the RVA measure using Equation (15).
Compared with REC-RFFF, RC-RFFF has a 0.4% higher IER and a 1.38% higher ICFR. RCF-RFFF holds 2.28% more IER and 0.5% more ICFR than RCC-RFFF. Compared with RCF-RFFF, RC-RFFF shows a 6.08% increase in IER and a 0.4% decrease in ICFR.
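These ratios can be recomputed from Table 8. The sketch below assumes the direct reading of the metric names (total instructions divided by total energy, carbon footprint and total cost, respectively); Equations (12)–(14) may normalize differently, so small deviations from the quoted percentages are possible.

```python
# Ratio metrics for the renewable-based algorithms, from Table 8 (assumed forms):
#   IER  = instructions per kWh of total energy
#   ICFR = instructions per ton of carbon
#   ICR  = instructions per dollar of total cost
TABLE8 = {
    # name: (instructions, total energy kWh, carbon tons, total cost $)
    "RC-RFFF":  (3.63075e14, 2_154_847, 51.3580,  10_958.48),
    "REC-RFFF": (3.60687e14, 2_115_749, 51.72792, 10_787.10),
    "RCF-RFFF": (3.66820e14, 2_219_782, 51.6695,  11_287.64),
    "RCC-RFFF": (3.66849e14, 2_228_525, 51.9723,  11_384.81),
}

for name, (instr, energy, carbon, cost) in TABLE8.items():
    print(f"{name}: IER={instr/energy:.3e}  ICFR={instr/carbon:.3e}  ICR={instr/cost:.3e}")
```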
Figure 11 displays the ratio of VM acceptance (RVA) metric of all the algorithms, computed using Equation (15), for the first 12 intervals. The RVA percentage and the instruction to energy ratio (IER) are conflicting factors: RVA increases when VM requests have small resource requirements and short execution times, whereas IER favors VM requests with large resource requirements and long execution times. The RC-RFFF and REC-RFFF algorithms achieve approximately 87.5% RVA, while the RCC-RFFF and RCF-RFFF algorithms achieve approximately 90% RVA.
Based on the above discussion, scheduling the workload to the data centers with maximum renewable energy availability should be given first priority to reduce total operating cost and carbon emission. In the absence of a renewable source, data centers should be prioritized by carbon footprint. Renewable- and carbon footprint-aware data center selection combined with DVFS-based first-fit optimal frequency host selection (RCF-RFFF) is therefore the better choice, offering a good trade-off among the considered performance metrics for total operating cost and carbon footprint reduction; a small illustrative sketch of this selection order follows.
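The data structures and helper names below are illustrative, not the paper's implementation; hosts inside the chosen data center would then be filled first-fit at the DVFS frequency that is optimal for the current load.

```python
# Illustrative ordering of data centers per the recommended policy:
# most available green energy first; when none remains, lowest carbon intensity.
from dataclasses import dataclass
from typing import List

@dataclass
class DataCenter:
    name: str
    green_energy_kwh: float   # renewable energy available in the current interval
    carbon_intensity: float   # tons/MWh, as in Table 7

def order_data_centers(dcs: List[DataCenter]) -> List[DataCenter]:
    """Rank by descending green energy, tie-breaking on ascending carbon intensity."""
    return sorted(dcs, key=lambda d: (-d.green_energy_kwh, d.carbon_intensity))

dcs = [DataCenter("DC1", 0.0, 0.124), DataCenter("DC2", 40.0, 0.350),
       DataCenter("DC3", 10.0, 0.466), DataCenter("DC4", 0.0, 0.678)]
print([d.name for d in order_data_centers(dcs)])  # ['DC2', 'DC3', 'DC1', 'DC4']
```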

8. Conclusions

In this paper, the VM placement problem of minimizing the total energy cost and carbon emission of cloud data centers is investigated. The data center architecture is first described with hybrid energy supplies, including grid power and a PV-based renewable energy source. Then, the structure of the management system model is explored along with its utilities. We further formulate the objective function with the operating energy cost of servers and cooling devices. We evaluate renewable-aware algorithms under different parameter variations to investigate the effect of carbon intensity, carbon tax and electricity price on total operating cost and carbon emission reduction. Workload-based optimal DVFS selection is used to distribute the load among the servers within a cluster and avoid hot spots. Our approach jointly considers the power consumption of servers and cooling devices, the thermal impact and the type of energy source in the VM placement decision. The investigation includes the impact of varying energy cost and carbon footprint parameters of data centers on VM placement decisions in the presence of green and brown energy.
To minimize carbon emission and total operating cost, the renewable-aware algorithms give high priority to data centers with sustainable energy so as to maximize renewable energy usage. RVA% and the instruction to total cost, energy and carbon emission ratios are used to evaluate the efficiency of the proposed methods' VM-to-PM mapping. In a nutshell, the investigation of the various parameters for the VM placement decision concludes that total operating cost and carbon emission reduction is best achieved by DVFS-based first-fit optimal frequency host selection combined with renewable- and carbon footprint-based selection among geo-distributed data centers, providing a better trade-off between quality of service and operating cost. In future work, we plan to examine the detailed thermal impact on racks by loading servers based on their location, and to study other renewable energy types.

Author Contributions

Conceptualization, T.R. and K.M.; methodology, T.R. and K.M.; writing—original draft preparation, T.R. and K.M.; supervision, K.G. and Z.W.G.; writing—review and editing, K.G. and Z.W.G.; funding, Z.W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Energy Cloud R&D Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT (2019M3F2A1073164). This work was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (2020R1A2C1A01011131).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ghatikar, G. Demand Response Opportunities and Enabling Technologies for Data Centers: Findings from Field Studies. Available online: https://escholarship.org/uc/item/7bh6n6kt (accessed on 10 January 2020).
2. Hamilton, J. Cooperative expendable micro-slice servers (CEMS): Low cost, low power servers for internet-scale services. In Proceedings of the Conference on Innovative Data Systems Research (CIDR'09), Asilomar, CA, USA, 4–7 January 2009.
3. Grid, G. The Green Grid Power Efficiency Metrics: PUE & DCiE. 2007. Available online: https://www.missioncriticalmagazine.com/ext/resources/MC/Home/Files/PDFs/TGG_Data_Center_Power_Efficiency_Metrics_PUE_and_DCiE.pdf (accessed on 10 January 2020).
4. Belady, C.; Andy, R.; John, P.; Tahir, C. Green Grid Data Center Power Efficiency Metrics: PUE and DCIE; Technical Report; Green Grid: Beaverton, OR, USA, 2008. Available online: https://www.academia.edu/23433359/Green_Grid_Data_Center_Power_Efficiency_Metrics_Pue_and_Dcie (accessed on 10 January 2020).
5. Huang, W.; Allen-Ware, M.; Carter, J.B.; Elnozahy, E.; Hamann, H.; Keller, T.; Lefurgy, C.; Li, J.; Rajamani, K.; Rubio, J. TAPO: Thermal-aware power optimization techniques for servers and data centers. In Proceedings of the 2011 International Green Computing Conference and Workshops, Orlando, FL, USA, 25–28 July 2011; pp. 1–8.
6. Breen, T.J.; Walsh, E.J.; Punch, J.; Shah, A.J.; Bash, C.E. From chip to cooling tower data center modeling: Part I Influence of server inlet temperature and temperature rise across cabinet. In Proceedings of the 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, Las Vegas, NV, USA, 2–5 June 2010; pp. 1–10.
7. Mukherjee, R.; Memik, S.O.; Memik, G. Temperature-aware resource allocation and binding in high-level synthesis. In Proceedings of the 42nd Annual Design Automation Conference, Anaheim, CA, USA, 13–17 June 2005; pp. 196–201.
8. Akbar, S.; Malik, S.U.R.; Khan, S.U.; Choo, R.; Anjum, A.; Ahmad, N. A game-based thermal-aware resource allocation strategy for data centers. IEEE Trans. Cloud Comput. 2019.
9. Villebonnet, V.; Da Costa, G. Thermal-aware cloud middleware to reduce cooling needs. In Proceedings of the 2014 IEEE 23rd International WETICE Conference, Parma, Italy, 23–25 June 2014; pp. 115–120.
10. Song, M.; Zhu, H.; Fang, Q.; Wang, J. Thermal-aware load balancing in a server rack. In Proceedings of the 2016 IEEE Conference on Control Applications (CCA), Buenos Aires, Argentina, 19–22 September 2016; pp. 462–467.
11. Latest Microsoft Datacenter Design Gets Close to Unity PUE. Available online: https://www.datacenterknowledge.com/archives/2016/09/27/latest-microsoft-data-center-design-gets-close-to-unity-pue (accessed on 10 January 2020).
12. Shehabi, A.; Smith, S.J.; Horner, N.; Azevedo, I.; Brown, R.; Koomey, J.; Masanet, E.; Sartor, D.; Herrlin, M.; Lintner, W. United States Data Center Energy Usage Report; Lawrence Berkeley National Laboratory: Berkeley, CA, USA, 2016. Available online: https://www.osti.gov/servlets/purl/1372902/ (accessed on 10 March 2020).
13. APAC Datacenter Survey Reveals High PUE Figures Across the Region. Available online: https://www.datacenterdynamics.com/news/apac-data-center-survey-reveals-high-pue-figures-across-the-region/ (accessed on 10 January 2020).
14. Varasteh, A.; Tashtarian, F.; Goudarzi, M. On reliability-aware server consolidation in cloud datacenters. In Proceedings of the 2017 16th International Symposium on Parallel and Distributed Computing (ISPDC), Innsbruck, Austria, 3–6 July 2017; pp. 95–101.
15. Wang, X.; Du, Z.; Chen, Y.; Yang, M. A Green-Aware Virtual Machine Migration Strategy for Sustainable Datacenter Powered by Renewable Energy. Simul. Model. Pract. Theory 2015, 58, 3–14.
16. Apple Now Globally Powered by 100 Percent Renewable Energy. Available online: https://www.apple.com/newsroom/2018/04/apple-now-globally-powered-by-100-percent-renewable-energy/ (accessed on 10 January 2020).
17. Google Environmental Report 2018. Available online: https://sustainability.google/reports/environmental-report-2019 (accessed on 10 January 2020).
18. Microsoft Says Its Datacenters Will Use 60% Renewable Energy by 2020. Available online: https://venturebeat.com/microsoft-says-it-now-uses-60-renewable-energy-to-power-its-data-centers/ (accessed on 2 February 2020).
19. Renugadevi, T.; Geetha, K.; Prabaharan, N.; Siano, P. Carbon-Efficient Virtual Machine Placement Based on Dynamic Voltage Frequency Scaling in Geo-Distributed Cloud Data Centers. Appl. Sci. 2020, 10, 2701.
20. Pahl, C.; Brogi, A.; Soldani, J.; Jamshidi, P. Cloud Container Technologies: A State-of-the-Art Review. IEEE Trans. Cloud Comput. 2019, 7, 677–692.
21. Shuja, J.; Gani, A.; Shamshirband, S.; Ahmad, R.W.; Bilal, K. Sustainable Cloud Data Centers: A Survey of Enabling Techniques and Technologies. Renew. Sustain. Energy Rev. 2016, 62, 195–214.
22. Borgetto, D.; Casanova, H.; Da Costa, G.; Pierson, J.M. Energy-Aware Service Allocation. Future Gener. Comput. Syst. 2012, 28, 769–779.
23. Terzopoulos, G.; Karatza, H. Performance Evaluation and Energy Consumption of a Real-Time Heterogeneous Grid System Using DVS and DPM. Simul. Model. Pract. Theory 2013, 36, 33–43.
24. Tanelli, M.; Ardagna, D.; Lovera, M.; Zhang, L. Model Identification for Energy-Aware Management of Web Service Systems. In Service-Oriented Computing—ICSOC 2008: Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5364, pp. 599–606.
25. Wu, C.-M.; Chang, R.-S.; Chan, H.-Y. A Green Energy-Efficient Scheduling Algorithm Using the DVFS Technique for Cloud Datacenters. Future Gener. Comput. Syst. 2014, 37, 141–147.
26. Wang, L.; von Laszewski, G.; Dayal, J.; Wang, F. Towards energy aware scheduling for precedence constrained parallel tasks in a cluster with DVFS. In Proceedings of the 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, Melbourne, VIC, Australia, 17–20 May 2010; pp. 368–377.
27. Guérout, T.; Monteil, T.; da Costa, G.; Calheiros, R.N.; Buyya, R.; Alexandru, M. Energy-Aware Simulation with DVFS. Simul. Model. Pract. Theory 2013, 39, 76–91.
28. Rossi, F.D.; Xavier, M.G.; de Rose, C.A.; Calheiros, R.N.; Buyya, R. E-Eco: Performance-Aware Energy-Efficient Cloud Data Center Orchestration. J. Netw. Comput. Appl. 2017, 78, 83–96.
29. Wang, S.; Qian, Z.; Yuan, J.; You, I. A DVFS Based Energy-Efficient Tasks Scheduling in a Data Center. IEEE Access 2017, 5, 13090–13102.
30. Cotes-Ruiz, I.T.; Prado, R.P.; García-Galán, S.; Muñoz-Expósito, J.E.; Ruiz-Reyes, N. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing. PLoS ONE 2017, 12, e0169803.
31. Deng, N.; Stewart, C.; Gmach, D.; Arlitt, M. Policy and mechanism for carbon-aware cloud applications. In Proceedings of the 2012 IEEE Network Operations and Management Symposium, Maui, HI, USA, 16–20 April 2012; pp. 590–594.
32. Le, K.; Bianchini, R.; Martonosi, M.; Nguyen, T.D. Cost- and energy-aware load distribution across data centers. In Proceedings of the SOSP Workshop on Power Aware Computing and Systems (HotPower 2009), Big Sky, MT, USA, 10 October 2009; pp. 1–5.
33. Chen, C.; He, B.; Tang, X. Green-aware workload scheduling in geographically distributed data centers. In Proceedings of the 4th IEEE International Conference on Cloud Computing Technology and Science, Taipei, Taiwan, 3–6 December 2012; pp. 82–89.
34. Giacobbe, M.; Celesti, A.; Fazio, M.; Villari, M.; Puliafito, A. An approach to reduce energy costs through virtual machine migrations in cloud federation. In Proceedings of the 2015 IEEE Symposium on Computers and Communication (ISCC), Larnaca, Cyprus, 6–9 July 2015; pp. 782–787.
35. Giacobbe, M.; Celesti, A.; Fazio, M.; Villari, M.; Puliafito, A. Evaluating a cloud federation ecosystem to reduce carbon footprint by moving computational resources. In Proceedings of the 2015 IEEE Symposium on Computers and Communication (ISCC), Larnaca, Cyprus, 6–9 July 2015; pp. 99–104.
36. Lin, M.; Liu, Z.; Wierman, A.; Andrew, L.L. Online algorithms for geographical load balancing. In Proceedings of the 2012 International Green Computing Conference (IGCC), San Jose, CA, USA, 4–8 June 2012; pp. 1–10.
37. Rao, L.; Liu, X.; Xie, L.; Liu, W. Minimizing electricity cost: Optimization of distributed internet data centers in a multi-electricity-market environment. In Proceedings of the 2010 IEEE INFOCOM, San Diego, CA, USA, 14–19 March 2010; pp. 1–9.
38. Khosravi, A.; Andrew, L.L.; Buyya, R. Dynamic VM Placement Method for Minimizing Energy and Carbon Cost in Geographically Distributed Cloud Data Centers. IEEE Trans. Sustain. Comput. 2017, 2, 183–196.
39. Goiri, Í.; Katsak, W.; Le, K.; Nguyen, T.D.; Bianchini, R. Parasol and GreenSwitch: Managing data centers powered by renewable energy. In ACM SIGARCH Computer Architecture News; ACM SIGPLAN Notices: Houston, TX, USA, 2013; pp. 51–64.
40. Deng, N.; Stewart, C.; Gmach, D.; Arlitt, M.; Kelley, J. Adaptive green hosting. In Proceedings of the 9th International Conference on Autonomic Computing, San Jose, CA, USA, 16–20 September 2012; pp. 135–144.
41. Zhang, Y.; Wang, Y.; Wang, X. GreenWare: Greening cloud-scale data centers to maximize the use of renewable energy. In Proceedings of the ACM/IFIP/USENIX International Conference on Distributed Systems Platforms and Open Distributed Processing, Lisbon, Portugal, 12–16 December 2011; pp. 143–164.
42. Bird, S.; Achuthan, A.; Maatallah, O.A.; Hu, W.; Janoyan, K.; Kwasinski, A.; Matthews, J.; Mayhew, D.; Owen, J.; Marzocca, P. Distributed (Green) Data Centers: A New Concept for Energy, Computing, and Telecommunications. Energy Sustain. Dev. 2014, 19, 83–91.
43. Liu, Z.; Lin, M.; Wierman, A.; Low, S.H.; Andrew, L.L. Greening geographical load balancing. In Proceedings of the ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, San Jose, CA, USA, 7–11 June 2011; pp. 233–244.
44. Liu, Z.; Chen, Y.; Bash, C.; Wierman, A.; Gmach, D.; Wang, Z.; Marwah, M.; Hyser, C. Renewable and cooling aware workload management for sustainable data centers. In Proceedings of the 12th ACM SIGMETRICS/PERFORMANCE Joint International Conference on Measurement and Modeling of Computer Systems, London, UK, 11–15 June 2012; pp. 175–186.
45. Toosi, A.N.; Qu, C.; de Assunção, M.D.; Buyya, R. Renewable-Aware Geographical Load Balancing of Web Applications for Sustainable Data Centers. J. Netw. Comput. Appl. 2017, 83, 155–168.
46. Chen, T.; Zhang, Y.; Wang, X.; Giannakis, G.B. Robust geographical load balancing for sustainable data centers. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 3526–3530.
47. Adnan, M.A.; Sugihara, R.; Gupta, R.K. Energy efficient geographical load balancing via dynamic deferral of workload. In Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, Honolulu, HI, USA, 24–29 June 2012; pp. 188–195.
48. Neglia, G.; Sereno, M.; Bianchi, G. Geographical Load Balancing across Green Datacenters: A Mean Field Analysis. ACM SIGMETRICS Perform. Eval. Rev. 2016, 44, 64–69.
49. Dua, R.; Raja, A.R.; Kakadia, D. Virtualization vs. containerization to support PaaS. In Proceedings of the 2014 IEEE International Conference on Cloud Engineering (IC2E), Boston, MA, USA, 11–14 March 2014; pp. 610–614.
50. Felter, W.; Ferreira, A.; Rajamony, R.; Rubio, J. An updated performance comparison of virtual machines and Linux containers. In Proceedings of the 2015 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Philadelphia, PA, USA, 29–31 March 2015; pp. 171–172.
51. Kozhirbayev, Z.; Sinnott, R.O. A Performance Comparison of Container-Based Technologies for the Cloud. Future Gener. Comput. Syst. 2017, 68, 175–182.
52. Tao, Y.; Wang, X.; Xu, X.; Chen, Y. Dynamic resource allocation algorithm for container-based service computing. In Proceedings of the IEEE 13th International Symposium on Autonomous Decentralized System (ISADS), Bangkok, Thailand, 22–24 March 2017; pp. 61–67.
53. Raj, V.K.M.; Shriram, R. Power aware provisioning in cloud computing environment. In Proceedings of the 2011 International Conference on Computer, Communication and Electrical Technology (ICCCET), Tamil Nadu, India, 18–19 March 2011; pp. 6–11.
54. Tchana, A.; Palma, N.D.; Safieddine, I.; Hagimont, D.; Diot, B.; Vuillerme, N. Software consolidation as an efficient energy and cost saving solution for a SaaS/PaaS cloud model. In Proceedings of the 21st International Conference on Parallel and Distributed Computing, Vienna, Austria, 24–28 August 2015; pp. 305–316.
55. Zhang, X.; Wu, T.; Chen, M.; Wei, T.; Zhou, J.; Hu, S.; Buyya, R. Energy-Aware Virtual Machine Allocation for Cloud with Resource Reservation. J. Syst. Softw. 2019, 147, 147–161.
56. Moore, J.D.; Chase, J.S.; Ranganathan, P.; Sharma, R.K. Making Scheduling "Cool": Temperature-Aware Workload Placement in Data Centers. In Proceedings of the USENIX Annual Technical Conference, Anaheim, CA, USA, 10–15 April 2005; pp. 61–75.
57. Wang, L.; Khan, S.U.; Dayal, J. Thermal Aware Workload Placement with Task-Temperature Profiles in a Data Center. J. Supercomput. 2012, 61, 780–803.
58. Three Versions of the Cloud Dataset. Available online: https://github.com/google/cluster-data (accessed on 9 December 2019).
59. Google Workload Version 2. Available online: https://github.com/google/cluster-data/blob/master/ClusterData2011_2.md (accessed on 10 January 2020).
60. Sawyer, R. Calculating Total Power Requirements for Data Centers. White Paper, American Power Conversion, 2004. Available online: http://accessdc.net/Download/Access_PDFs/pdf1/Calculating%20Total%20Power%20Requirements%20for%20Data%20Centers.pdf (accessed on 9 December 2019).
61. Standard Performance Evaluation Corporation. SPEC Power 2008; Standard Performance Evaluation Corporation: Gainesville, VA, USA, 2008. Available online: http://www.spec.org/power_ssj2008 (accessed on 9 December 2019).
62. Appendix F. Electricity Emission Factors. Available online: http://cloud.agroclimate.org/tools/deprecated/carbonFootprint/references/Electricity_emission_factor.pdf (accessed on 2 February 2020).
63. EIA-Electricity Data. Available online: https://www.eia.gov/electricity/monthly/ (accessed on 9 December 2019).
64. The Hourly Solar Irradiance and Temperature Data. Available online: http://www.soda-pro.com/web-services/radiation/nasa-sse (accessed on 2 February 2020).
65. Solarbayer: Energy Efficient Heating Systems by Renewable Heat Production. Available online: https://www.solarbayer.com/ (accessed on 2 February 2020).
66. Nguyen, D.T.; Le, L.B. Optimal Bidding Strategy for Microgrids Considering Renewable Energy and Building Thermal Dynamics. IEEE Trans. Smart Grid 2014, 5, 1608–1620.
67. Lublin, U.; Feitelson, D.G. The Workload on Parallel Supercomputers: Modeling the Characteristics of Rigid Jobs. J. Parallel Distrib. Comput. 2003, 63, 1105–1122.
Figure 1. Containers: (a) placement on the host operating system; (b) placement on a VM.
Figure 2. Sustainable data center model.
Figure 3. Schematic representation of the management system model.
Figure 4. Clusters based on resource requests of the task.
Figure 5. Clusters based on resource requests.
Figure 6. Solar power generation.
Figure 7. CPU demand for VM requests.
Figure 8. Grid power consumption of servers.
Figure 9. Carbon emission.
Figure 10. Total cost.
Figure 11. Ratio of VM acceptance metric.
Table 1. Comparison summary of existing work for Virtual Machine (VM) placement. Columns: Ref. No.; DVFS; Green Energy; Workload Shifting; Multi-Cloud; Energy; Cost of Electricity; SLA; Carbon Footprint.
[25] Yes Yes
[26] Yes Yes Yes
[27] Yes Yes Yes
[28] Yes Yes Yes
[44] Yes Yes Yes Yes
[46] Yes Yes Yes Yes
[45] Yes Yes Yes Yes
[47] Yes Yes Yes Yes Yes
[48] Yes Yes Yes Yes Yes
[38] Yes Yes Yes Yes Yes
[39] Yes Yes Yes Yes
Proposed Approach | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Table 2. Cluster types with container configuration based on the resource request.

Cluster Type | vCPU | Memory (MB)
1 | 0.5 | 186.496
2 | 2.5 | 1889.28
3 | 6 | 4890.88
4 | 6.25 | 2234.88
5 | 12.5 | 9781.76
6 | 22.19 | 27,686.4
7 | 8.5 | 9781.76
8 | 6.25 | 10,968.32
9 | 18.75 | 7304.96
10 | 30 | 9781.76
Table 3. Optimal number of containers for VM types.

Task Type | VM Type-1 | VM Type-2 | VM Type-3 | VM Type-4 | VM Type-5
1 | 12 | 24 | 48 | 36 | 60
2 | 2 | 5 | 7 | - | 12
3 | 1 | 2 | 3 | - | 5
4 | - | 2 | 4 | 3 | 5
5 | - | - | - | 1 | 2
6 | - | - | - | - | 1
7 | - | 1 | - | 2 | 3
8 | - | - | - | 1 | 3
9 | - | - | 1 | - | 2
10 | - | - | - | - | 1
Table 4. Physical machine configurations.

Machines | Core Speed (GHz) | No. of Cores | Power Model | Memory (GB)
M1 | 1.7 | 2 | 1 | 16
M2 | 1.7 | 4 | 1 | 32
M3 | 1.7 | 8 | 2 | 32
M4 | 2.4 | 8 | 2 | 64
M5 | 2.4 | 8 | 2 | 128
Table 5. Utilization (%) and server power consumption in watts.

Power Model | Idle | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100
1 | 60 | 63 | 66.8 | 71.3 | 76.8 | 83.2 | 90.7 | 100 | 111.5 | 125.4 | 140.7
2 | 41.6 | 46.7 | 52.3 | 57.9 | 65.4 | 73 | 80.7 | 89.5 | 99.6 | 105 | 113
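Utilization-to-power tables like Table 5 are commonly consumed by interpolating linearly between the measured points; the sketch below applies that (assumed) interpolation to power model 1, with the measured values copied from the table.

```python
# Linear interpolation over the Table 5 power curve (power model 1).
# The interpolation itself is our assumption about how the table is used;
# the measured points are copied from the table.
import bisect

UTIL = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]  # percent (0 = idle)
POWER_MODEL_1 = [60, 63, 66.8, 71.3, 76.8, 83.2, 90.7, 100, 111.5, 125.4, 140.7]  # watts

def server_power(util_pct: float, power=POWER_MODEL_1) -> float:
    """Interpolated server power draw in watts at a given CPU utilization."""
    if not 0 <= util_pct <= 100:
        raise ValueError("utilization must be in [0, 100]")
    i = bisect.bisect_left(UTIL, util_pct)
    if UTIL[i] == util_pct:
        return power[i]
    lo, hi = UTIL[i - 1], UTIL[i]
    frac = (util_pct - lo) / (hi - lo)
    return power[i - 1] + frac * (power[i] - power[i - 1])

print(server_power(35))  # 74.05 W, halfway between the 30% and 40% points
```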
Table 6. VM request types.

VM Type | vCPU | Memory (GB)
Type-1 | 1 | 7.2
Type-2 | 2 | 14.4
Type-3 | 4 | 15.360
Type-4 | 3 | 17.510
Type-5 | 5 | 35.020
Table 7. Features of data centers.

Data Center | Carbon Footprint Rate (tons/MWh) | Carbon Tax (dollars/ton) | Energy Price (cents/kWh)
DC1 | 0.124 | 24 | 6.1
DC2 | 0.350 | 22 | 6.54
DC3 | 0.466 | 11 | 10
DC4 | 0.678 | 48 | 5.77
Table 8. Energy consumption summary. (RC-RFFF, REC-RFFF, RCF-RFFF and RCC-RFFF are renewable energy-based algorithms; C-FFF, EC-FFF, CF-FFF and CC-FFF are brown energy-based algorithms.)

Metric | RC-RFFF | REC-RFFF | RCF-RFFF | RCC-RFFF | C-FFF | EC-FFF | CF-FFF | CC-FFF
Total Energy (kWh) | 2,154,847 | 2,115,749 | 2,219,782 | 2,228,525 | 2,137,648 | 2,104,882 | 2,236,434 | 2,232,912
Grid Energy (kWh) | 1,260,817 | 1,227,280 | 1,308,291 | 1,311,969 | 1,751,854 | 1,730,336 | 1,830,431 | 1,821,311
Carbon Footprint (tons) | 51.3580 | 51.72792 | 51.6695 | 51.9723 | 68.7930 | 69.9528 | 70.7007 | 70.4851
Total Cost ($) | 10,958.48 | 10,787.1 | 11,287.64 | 11,384.81 | 14,368.43 | 14,261.55 | 14,950.9 | 14,960.33
Total Server Energy (kWh) | 1,777,480 | 1,739,362 | 1,815,334 | 1,814,790 | 1,751,854 | 1,730,336 | 1,830,431 | 1,821,311
Solar Energy (kWh) | 516,663 | 512,082.5 | 507,043.1 | 502,820.7 | - | - | - | -
Total No. of Instructions | 3.63075 × 10^14 | 3.60687 × 10^14 | 3.6682 × 10^14 | 3.66849 × 10^14 | 3.61387 × 10^14 | 3.60258 × 10^14 | 3.70066 × 10^14 | 3.67871 × 10^14
