Green Cloud Software Engineering for Big Data Processing

Abstract: Internet of Things (IoT) coupled with big data analytics is emerging as the core of smart and sustainable systems which bolster economic, environmental and social sustainability. Cloud-based data centers provide the high-performance computing power needed to analyze voluminous IoT data and deliver invaluable insights to support decision making. However, the multifarious servers in data centers appear to be a black hole of superfluous energy consumption, contributing to 23% of the global carbon dioxide (CO2) emissions of the ICT (Information and Communication Technology) industry. IoT-related energy research focuses on low-power sensors and enhanced machine-to-machine communication performance. To date, cloud-based data centers still face energy-related challenges which are detrimental to the environment. Virtual machine (VM) consolidation is a well-known approach to effect energy-efficient cloud infrastructures. Although several research works demonstrate positive results for VM consolidation in simulated environments, there is a gap in investigations on real, physical cloud infrastructure for big data workloads. This research work addresses that gap by conducting experiments on a real, physical cloud infrastructure. The primary goal of setting up such an infrastructure is to evaluate dynamic VM consolidation approaches, which include integrated algorithms from existing relevant research. An open-source VM consolidation framework, Openstack NEAT, is adopted and experiments are conducted on a multi-node Openstack cloud with Apache Spark as the big data platform. Open-source Openstack has been deployed because it enables rapid innovation and boosts scalability as well as resource utilization. Additionally, this research work investigates performance based on service level agreement (SLA) metrics and the energy usage of compute hosts. Relevant results concerning the best performing combination of algorithms are presented and discussed.


Introduction
Internet of Things (IoT) is the outcome of the emanating third wave of the Internet of Everything. Gartner (2014) [1] predicts that IoT will hit the mainstream by 2020 with almost 25 billion smart objects generating data. Traditional data-processing systems will be replaced by powerful big data processing systems and platforms due to the advent of the voluminous and complex data generated by IoT [2]. To successfully cope with the exponential growth of IoT-generated data and user processing-related demands, scalable and elastic cloud computing technologies provide extensive computational resources for fast, responsive and reliable data processing [3]. Cloud-based data center facilities that house physical networked computers and infrastructure play a crucial role in providing elastic computing resources to create an illusion of infinite resources [4]. Undeniably, such high-performance and responsive computing systems consume a lot of energy.
Statistically, the global energy consumption of data centers has increased by 56% within a short span of 5 years between 2005 and 2010 [5]. According to Gartner (2007) [6], data centers contribute to 2% of the global CO2 emissions, which is on par with that of the aviation industry. European energy policies and climate targets for the years 2020 and 2030 have energy efficiency as a core priority [7]. Data center energy consumption optimization will undoubtedly help reduce operational costs and the associated carbon footprint [8].
Nowadays, IoT, big data analytics and machine learning play a key role in better power grid infrastructure management, natural disaster-related assessment, as well as more efficient power generation and transmission [9]. Predictive analytics is beneficial for capacity planning and operational efficiency improvement. However, the enormous power usage of data centers' high-performance (due to IoT and big data) processing infrastructures [10] calls for an urgent need to optimize data centers with highly efficient facilities [11], facilitated through the implementation of intelligent algorithms [12] and IoT-based technologies [13].
Generally, the IT industry focuses on improving system performance through efficient system designs and an increased number of components based on Moore's law [14]. Although output per watt is improving, improvement in the total power consumption of computing systems is yet to be greatly evident. As a matter of fact, it seems to be increasing at an exponential rate [15]. This trend has created a situation where the server energy consumption cost exceeds the actual hardware cost itself; this is particularly true for large-scale computing infrastructures such as cloud data centers, which are greatly impacted by energy-related issues [16,17]. ICT infrastructures and cloud providers are still looking for energy-efficient solutions to address their overwhelming utility bills and carbon footprint [17]. Green Computing focuses on optimizing computing technologies and practices to reduce negative environmental impact without compromising on performance [18]. Recently, the computing infrastructure industry has shifted its focus to energy efficiency coupled with a high level of quality of service (QoS) for customers and quality in sustainability (QiS) [19].
Virtual machine (VM) consolidation aims to improve resource utilization and reduce energy consumption. It determines a mapping of VMs to physical hosts so that a minimum number of hosts is used [20]. It is one of the green practices in which the number of active computing devices is reduced by transitioning inactive servers to an 'energy saving' mode [21]. Infrastructure as a Service (IaaS) providers consider numerous metrics to define computing performance to meet service level agreements (SLA) (a list of metrics can be found in [22,23]). IBM has provided means to record SLA metrics [24]. A cloud-based big data processing platform for IoT systems requires elastic resources to meet processing demands. VM consolidation-related analysis for such a dynamic system can provide useful insights towards building an energy-aware infrastructure. Thus, this research work focuses on energy-related challenges and state-of-the-art energy-efficient cloud systems. Additionally, it investigates the impact of VM consolidation on the power usage characteristics of compute hosts in a physical private cloud infrastructure. This is followed by conducting appropriate experiments which focus on evaluating VM consolidation algorithms using SLA and energy consumption-related metrics.

Related Work and Underlying Concepts
This section is divided into five sub-sections that address the following: (a) Green IoT and big data, (b) cloud-based data centers, (c) energy-efficient computing systems, (d) cloud resource management, and (e) consolidation of virtual machines. Several works in these areas are reviewed to highlight the state-of-the-art approaches and identify relevant research gaps.

Green IoT and Big Data (Green Software Engineering)
Internet of Things is currently positioned as an added value for future applications [25,26] (e.g., smart systems [27] and support for healthcare and assistive living [28]). It involves both intensive and extensive deployment of sensors and devices. Thus, the effects of IoT (being a pervasive technology) on the environment must be duly considered [29]. For long-term use, it is necessary to optimize the entire system for energy efficiency, resource utilization, and provisioning (e.g., energy optimization of sensor networks in an IoT system [30] and resource provisioning for IoT services [31,32]). Radio-frequency identification (RFID), machine-to-machine (M2M) communications, green cloud computing, and data centers are the key focus areas of green IoT [18]. Energy efficiency in data centers for IoT is pivotal as servers are equally as energy hungry as sensors and devices [33]. Data processing goals are to facilitate quick yet optimal decisions, provide reliable results with low latency for batch and stream processing, and support complex methods for making better-informed decisions [4]. To achieve these goals, powerful computing platforms are essential, where hardware and software play a crucial role. At the same time, IoT data is voluminous and complex, for which scalable systems are necessary [34]. CPU processing time, I/O time, storage resources, and energy efficiency are examples of resource constraints that have an adverse impact on efficient data processing, thus rendering IoT resource management a challenge [35,36].

Cloud-Based Data Centers
Cloud providers are harnessing effective, as well as energy-efficient, ICT infrastructures to address their overwhelming utility bills and carbon footprint [17]. There is a paradigm shift in the computing infrastructure industry. The focus has now shifted to energy efficiency coupled with appropriate management of quality of service (QoS) (e.g., power-aware QoS management [37] and energy-aware QoS routing protocols [38]) and quality of experience (QoE) (e.g., QoE-aware power management in networks [39]) for customers, and quality in sustainability (QiS) [19]. However, end-users are impacted by high resource usage costs due to increased use (caused by overprovisioning) and the total cost of ownership (TCO) passed on by cloud owners [40]. Inadvertently, higher energy consumption incurs increased utility bills and demand for cooling facilities, uninterruptible power supplies (UPS) and power distribution units (PDU). Several studies have shown that reducing system power consumption effectively extends the lifespan of the devices in the system [41,42].
Most cloud data centers use blade servers, which provide more computational power and consume less space [43]. However, blade servers are hard to cool as the components inside each rack are densely packed [44]. As an example in [15], 60 blade servers can be mounted in a 42U rack, where 'U' is the rack unit, a measure of the height of a server [45]. However, such a rack requires up to 4000 W of power supply for the servers and cooling systems, compared to a rack of 1U servers which requires only 2500 W. The sustainability of data centers and their efficiency measures are listed in [46]. Power supply infrastructure, cooling, airflow management, and IT efficiency are the key factors in data center energy efficiency [47]. Power usage effectiveness (PUE) and data center infrastructure efficiency (DCIE) are the widely used energy efficiency metrics developed by the Green Grid Consortium [48][49][50]. PUE is the ratio of the energy consumed by the data center to the energy supplied to the computing equipment. DCIE is the inverse of PUE [46,51].
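As a minimal illustration of these two metrics, the following sketch computes PUE and its inverse, DCIE; the energy figures in the example are illustrative values, not measurements from any facility:

```python
def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power usage effectiveness: total data center energy over the energy
    supplied to the computing (IT) equipment. An ideal facility approaches 1.0."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

def dcie(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Data center infrastructure efficiency: the inverse of PUE,
    often reported as a percentage."""
    return it_equipment_energy_kwh / total_facility_energy_kwh

# Illustrative values: a facility drawing 4000 kWh to deliver 2500 kWh to IT equipment.
print(pue(4000, 2500))   # 1.6
print(dcie(4000, 2500))  # 0.625
```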

Energy-Efficient Computing Systems
Gordon E. Moore, in the year 1965, predicted that with unit cost falling as the number of components per circuit rises, there could be as many as 65,000 components on a single silicon chip [14]. With the increase in the number of components, size has decreased while speed has increased. This is applicable to the increase in the number of cores in a CPU [52]. Increased speed is invaluable for mission-critical tasks, but the accompanying need for more energy adversely affects the system. Simple circuit theory can be used to calculate the power consumption of a CPU [52]. In this case, the CPU can be considered as a variable resistor whose resistance changes with the workload. Power dissipation is depicted in Equation (1):

P_CPU = V_supply × I (1)

where P_CPU is the power dissipated by the CPU, V_supply is the supply voltage and I is the current.
The relationship between CPU utilization and the total power consumption of a server is modelled in [53]: as CPU utilization grows, power consumption grows linearly from the idle-state power consumption up to the power consumed when the server is fully utilized. This relationship is expressed in Equation (2):

P(u) = P_idle + (P_busy − P_idle) × u (2)

where P(u) is the estimated power consumption, P_idle is the idle server power consumption, P_busy is the power consumption when the server is fully utilized and u is the current CPU utilization (0 ≤ u ≤ 1).
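Equations (1) and (2) can be sketched directly in code; the wattage figures in the example are illustrative, not measurements from our testbed:

```python
def cpu_power(v_supply: float, current: float) -> float:
    """Equation (1): CPU modelled as a variable resistor, P = V * I."""
    return v_supply * current

def server_power(p_idle: float, p_busy: float, u: float) -> float:
    """Equation (2): linear server power model, u is CPU utilization in [0, 1]."""
    return p_idle + (p_busy - p_idle) * u

# An illustrative server idling at 100 W and drawing 250 W when fully utilized:
print(server_power(100.0, 250.0, 0.0))  # 100.0 (idle)
print(server_power(100.0, 250.0, 1.0))  # 250.0 (fully utilized)
print(server_power(100.0, 250.0, 0.5))  # 175.0 (50% utilization)
```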
The problem of energy wastage is addressed by an energy-proportional computing system, in which the energy consumed by the computing system is proportional to the workload [54]. Dynamic voltage and frequency scaling (DVFS) is one technique towards this goal: in response to resource demand, the voltage and frequency of the CPU are dynamically adapted, resulting in around 30% power savings in low-activity states of desktops and servers [54]. On the other hand, the dynamic power ranges of other components yield savings of less than 50% for dynamic random access memory (DRAM) and 15% for network switches [53]. The underlying reason for the variation in dynamic power ranges is that the CPU is the only component that supports low power modes. Nonetheless, the transition between active and inactive states has a considerable effect on performance. The inability of server components to reduce power in the idle state has limited the dynamic power range of a server to only 30% [53]. A study benchmarking the power usage characteristics of an embedded processor, including idle power consumption analysis, shows that the energy consumed by the processor diminishes significantly the instant the processor enters an idle state [41]. This study also states that during the idle state, energy consumption reduces without invoking hardware-based frequency scaling or DVFS techniques, thus rendering the approach effective with less overhead [55].

Cloud Resource Management
Several researchers have classified cloud resources into two types: physical and logical, or hardware and software resources [56][57][58]. Another approach, proposed in [59], classifies cloud resources based on their utility into five types: fast computation utility, storage utility, communication utility, power/energy utility and security utility. This work focuses on fast computation utilities (i.e., processor and memory) to provide computing power for data processing. A data center's computing infrastructure consists of three main divisions: application domains, computing environments, and physical resources [15]. Virtualized and non-virtualized resources provide the necessary computing resources. In cloud computing, resource management is a set of processes to effectively and efficiently manage resources while guaranteeing quality of service (QoS) and quality of experience (QoE) for consumers [56]. Resource management in a cloud data center comprises ab-initio resource allocation and periodic resource optimization [29]. Periodic resource optimization entails continuous resource monitoring and VM consolidation [59,60]. Based on resource management-related taxonomy and classifications [59], resource allocation is clearly a multidimensional problem which encompasses the following: meeting consumer requirements, service level agreements (SLA), load balancing to provide a highly available and reliable service and, finally, energy optimization. This research work focuses on guaranteeing QoS by meeting SLA and energy efficiency requirements.

Consolidation of Virtual Machines (VM)
Dynamic power management (DPM) [61] techniques help temporarily reduce energy consumption (via optimized resource utilization) [62], whereas static power management (SPM) reduces permanent power consumption [15] via optimized circuit-level design [62]. VM consolidation is a potential DPM solution for resource utilization improvement and energy consumption reduction [63,64]. Virtualization facilitates the provision of multiple virtual machines on a single physical host [65]. As a result, resources are better utilized, thus increasing the return on investment (ROI) [45]. The VM consolidation technique achieves energy savings by eliminating idle power consumption through switching idle hosts to low power modes such as sleep or hibernate [66,67]. One of the capabilities of virtualization is VM relocation between compute nodes, known as migration [68]. Performing VM migration with no downtime is called live migration [68,69]. There are two main situations in which VMs are migrated: when some physical hosts are under-utilized, VMs are migrated to keep the number of active physical servers to a minimum; and VMs are relocated from overloaded hosts to avoid performance degradation [70]. Dynamic VM consolidation is a complex real-time decision-making problem that involves four subproblems: underload detection, overload detection, VM selection and VM placement [48,71]. A tiered software system for VM consolidation is proposed in [65]. Virtual machine monitors (VMM), or hypervisors [72], continuously observe VM resource utilization and the thermal state of each physical host. Local managers placed in the VMM observe the VMs' resource utilization and send the information to the Global Manager [67]. Commands to turn hosts to idle modes are issued by the Global Manager.
Several VM consolidation algorithms have been proposed and their performance tested for individual sub-problems in [71,73], but there is limited study of the performance of the entire system when different VM consolidation algorithms are combined. This research work focuses on analyzing the effect of VM consolidation when different combinations of algorithms are applied to an IoT-based eye tracking big data workload. A static threshold-based underload detection is commonly applied, as CPU utilization seldom drops below a threshold and complex algorithms would cause unnecessary overhead [74]. On the other hand, sudden peaks are observed with varying workloads, which overload hosts and cause performance degradation [75]. Three categories of algorithms are proposed for overload detection in [71,74]: static threshold-based algorithms, adaptive utilization-based algorithms and regression-based algorithms. We created a composite selection (by choosing one algorithm from each category) to analyze its suitability for a big data workload. Threshold-based heuristic (THR) [45], median absolute deviation (MAD) [71] and local regression robust (LRR) [71] were the algorithms chosen for overload detection. When an overloaded or underloaded host is detected, it is imperative to select the right VM so as to effect minimal performance degradation. Random choice (RC) and minimum migration time (MMT) are the two common algorithms proposed for VM selection [48,76]. VM placement is regarded as a bin packing problem with varying bin sizes: compute nodes are the bins, VMs are the items, and the available CPU capacities are the bin sizes. The best fit decreasing (BFD) algorithm [77] sorts VMs in decreasing order of CPU utilization and places each VM on the host which will experience the least increase in power consumption. We chose BFD as it performs better than first fit decreasing (FFD) for any workload [48]. Based on the selected algorithms, we created six different combinations of composite selection to be tested. Table 1 shows the six combinations deployed for this research. In previous research, VM consolidation was tested in simulated cloud environments using simulation tools such as CloudSim [45,78] on simulated workloads with CPU traces from PlanetLab or Google Cloud Datastore (GCD) [45,69,71]. However, the actual performance was not tested on real physical cloud infrastructure. This research aims to evaluate the combinations of algorithms on an Openstack (open-source and private) cloud, as it is a potential cloud platform for big data processing [79,80]. A comparative analysis between Openstack and public clouds (e.g., Amazon Web Services, AWS EC2) has been conducted [81]. For the purpose of this research, the Openstack infrastructure is an ideal choice for Infrastructure as a Service (IaaS) because the Openstack software affords direct control and management facilities for large pools of compute, storage, and networking resources throughout a data center. This open-source platform provides energy efficiency capabilities using the APIs of the Nova compute service [82]. Openstack NEAT [83] is a dynamic VM consolidation framework developed as an add-on package for Openstack instances. The framework is proposed in [83] but has not been evaluated for big data workloads.
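To make the roles of these algorithms concrete, the following is a simplified sketch (not the Openstack NEAT implementation) of MAD-based overload detection and best-fit-decreasing placement. For brevity the placement minimizes remaining CPU slack; the power-aware BFD variant described above instead picks the host with the least increase in power consumption:

```python
import statistics

def mad_threshold(utilization_history, safety=2.5):
    """Adaptive overload threshold following the median-absolute-deviation
    idea of [71]: 1 - safety * MAD of recent CPU utilization samples."""
    median = statistics.median(utilization_history)
    mad = statistics.median(abs(u - median) for u in utilization_history)
    return 1.0 - safety * mad

def is_overloaded(utilization_history):
    """A host is overloaded when its latest utilization exceeds the threshold."""
    return utilization_history[-1] > mad_threshold(utilization_history)

def best_fit_decreasing(vms, hosts):
    """Place VMs (dicts with 'name' and 'cpu' demand) on hosts (dicts with
    'name' and 'free_cpu'), largest VM first, choosing the feasible host with
    the least remaining slack. Mutates the hosts' free capacity and returns a
    {vm_name: host_name} mapping."""
    placement = {}
    for vm in sorted(vms, key=lambda v: v["cpu"], reverse=True):
        candidates = [h for h in hosts if h["free_cpu"] >= vm["cpu"]]
        if not candidates:
            continue  # no feasible host; a real system would wake a sleeping host
        host = min(candidates, key=lambda h: h["free_cpu"] - vm["cpu"])
        host["free_cpu"] -= vm["cpu"]
        placement[vm["name"]] = host["name"]
    return placement
```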

Research Methodology
Exploring unanswered questions or investigating something that currently does not exist is research, whereas a systematized effort to gain new knowledge is research methodology [84]. Figure 1 depicts the research methodology of this work, which encompasses 5 phases; the final phases analyze the results relating to SLA and energy metrics and recommend the best combination of VM consolidation algorithms for the IoT eye tracker big data workload.


Cloud System Architecture
We developed a tiered cloud architecture based on open-source tools and platforms such as Openstack, Apache Spark and Openstack NEAT [83]. Figure 2 depicts the designed system architecture. An IoT eye tracking system, illustrated in Figure 3, was integrated with the cloud system. For simplicity, a REST API was used for communication between the IoT system and the cloud platform; other messaging protocols such as Message Queuing Telemetry Transport (MQTT) or the Constrained Application Protocol (CoAP) could also be used. The components of the system are described in the following sections.
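As a sketch of the REST-based integration, one eye tracker sample could be posted as JSON. The endpoint URL and payload fields below are hypothetical, chosen only to illustrate the idea; the actual route and schema depend on the deployed cloud service:

```python
import json
import urllib.request

# Hypothetical ingestion endpoint; not part of the deployed system's real API.
GAZE_ENDPOINT = "http://cloud-controller:8080/api/v1/gaze"

def gaze_payload(timestamp_ms, x, y, pupil_diameter_mm):
    """Serialize one eye tracker sample as a JSON document (fields assumed)."""
    return json.dumps({
        "timestamp_ms": timestamp_ms,
        "x": x,
        "y": y,
        "pupil_diameter_mm": pupil_diameter_mm,
    })

def post_gaze_sample(timestamp_ms, x, y, pupil_diameter_mm):
    """POST one sample to the ingestion endpoint and return the HTTP status."""
    req = urllib.request.Request(
        GAZE_ENDPOINT,
        data=gaze_payload(timestamp_ms, x, y, pupil_diameter_mm).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```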

The bottom-most tier is the physical infrastructure, consisting of compute, network and storage resources. The Openstack cloud platform was deployed on this infrastructure for resource virtualization. Openstack is an open-source platform for creating and managing cloud infrastructure and is commonly used by IaaS providers [79]. The Openstack project originated with the aim of building a "massively scalable cloud operating system" [85]. It is built on the concept of a distributed system with asynchronous messaging. It consists of 7 major services for compute, storage, network, monitoring, orchestration and image management, along with authentication and dashboard services [86]. The compute services consist of a web-based API, a controller and a scheduler [87]. The compute controller is responsible for managing VMs on compute hosts. To model a real, physical cloud-based system, we created a 4-node cloud set-up with 1 controller and 3 compute nodes.

IoT Eye Tracking System
The experimental set-up of a doctoral research work (at Leeds Beckett University) on gaze pattern recognition to interpret the vision cognitive behavior of pilots during in-flight startle was used as the IoT eye tracking system [88]. Figure 3 illustrates the set-up, which consists of a flight simulator, flight controls and an eye tracker device.

The relationship between startle and loss of situational awareness (SA) as a causal factor of Loss of Control (LOC), which leads to aviation accidents and fatalities, could be better understood by studying the pilot's eye fixations. The potential relationships that may exist within the problem space are examined by combining machine learning and statistical modelling of eye tracking data. The flight simulator and eye tracker generate performance, gaze fixation and pupil position data during 15 flying tasks with different startle scenarios. The data from this IoT system are diverse and voluminous, and demand a reliable big data processing platform to perform statistical analysis and to classify the pilots based on performance and gaze fixation analysis.

Big Data Processing Platform
Data obtained from the IoT eye tracking system were processed as Spark jobs. Apache Spark, an in-memory data processing engine suitable for both batch and stream processing, was used as the big data platform [89]. 'Sahara' [90] (the renamed Openstack project 'Savanna') provides a means for clustering big data applications on Openstack. The plugins available for creating data-intensive application clusters are Hadoop [91], Spark [92] and Storm [93]. When a cluster is configured and launched, the Sahara orchestrator sends a create-VM request to Nova, which in turn requests the Apache Spark image from 'Glance'. Virtual machines are launched via communication with the hypervisor (KVM) and orchestrated by Heat [94]. The data and jobs to be processed are stored in the object storage 'Swift'. The Spark jobs are then obtained via the Nova API and processed by the infrastructure managed by the Sahara job manager.
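The core of such a batch job (computing, say, the mean fixation duration per pilot) reduces to a grouped average. The sketch below shows that reduction in plain Python for self-containment; on the deployed platform the equivalent groupBy/mean would run as a Spark job over CSV data held in Swift, and the field names here are assumed for illustration:

```python
from collections import defaultdict

def mean_fixation_duration(records):
    """records: iterable of (pilot_id, fixation_duration_ms) tuples.
    Returns the mean fixation duration per pilot. This mirrors the grouped
    aggregation a Spark job would perform over distributed CSV data."""
    totals = defaultdict(lambda: [0.0, 0])  # pilot_id -> [sum, count]
    for pilot_id, duration_ms in records:
        totals[pilot_id][0] += duration_ms
        totals[pilot_id][1] += 1
    return {pilot: s / n for pilot, (s, n) in totals.items()}
```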

Openstack NEAT
In addition to the Openstack controller components, the NEAT Global Manager [95] also ran on the controller node. Figure 4 shows the components of Openstack NEAT. The NEAT Global Manager makes decisions about mapping virtual machines to compute hosts and initiating migration of the selected VMs [83]. A local manager ran on each compute host and made decisions on underload or overload situations and VM selection for migration. A data collector ran locally on each compute node to collect resource utilization data from the hypervisor and send the data to the central database on the controller.
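A minimal sketch of the data collector's role follows; the pluggable `probe` callable stands in for the hypervisor statistics source, and this is an illustration rather than the NEAT source code:

```python
from collections import deque

class DataCollector:
    """Keeps the last `window` CPU utilization samples for a host, which the
    local manager can use for its underload/overload decisions. `probe` is
    any callable returning current CPU utilization in [0, 1] (in NEAT this
    data comes from the hypervisor; here it is an assumed abstraction)."""

    def __init__(self, probe, window=12):
        self.probe = probe
        self.history = deque(maxlen=window)  # oldest samples are discarded

    def collect(self):
        """Take one sample and return the current sliding window as a list."""
        self.history.append(self.probe())
        return list(self.history)
```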


Figure 5 depicts the VM consolidation workflow. Underloaded hosts were identified by invoking the underload detection algorithm (THR: threshold-based heuristic). When a host was under-utilized, the local manager requested the global manager to migrate the VMs from the host using the Openstack VM migration API [97] and to put the host into sleep mode. If the host was not under-utilized, an overload detection algorithm (THR; MAD: median absolute deviation; or LRR: local regression robust) was invoked. If the host was not overloaded, the resource monitoring processes continued. If the host was overloaded, the VMs to be relocated were selected by invoking the VM selection algorithm (RC: random choice; or MMT: minimum migration time) and placed on a suitable host via the VM placement algorithm (BFD: best fit decreasing). The status of the destination host was checked before the global manager migrated the VMs; if the destination host was in sleep mode, it was awakened by sending magic packets using the WakeOnLAN standard [98].
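The decision loop described above can be sketched as follows; the callables stand in for the pluggable algorithms (THR/MAD/LRR, RC/MMT, BFD) and the Openstack migration API, so this is an illustration rather than the NEAT implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)

def consolidation_step(host, underloaded, overloaded, select_vm, place, migrate, sleep):
    """One pass of the local/global manager decision loop for a single host.
    All arguments except `host` are callables standing in for the detection,
    selection and placement algorithms and the migration API."""
    if underloaded(host):
        for vm in list(host.vms):      # global manager relocates every VM ...
            migrate(vm, place(vm))
        sleep(host)                    # ... then the emptied host goes to sleep
        return "underload"
    if overloaded(host):
        vm = select_vm(host)           # pick a VM, e.g., by minimum migration time
        migrate(vm, place(vm))         # place it via best fit decreasing
        return "overload"
    return "normal"                    # neither case: keep monitoring
```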

Experimental Details
The experimental design for this research consists of three phases: plan, execute and analyze. The aim, objectives and expected outcome were defined and the necessary equipment identified in the 'plan' phase. Each experiment was executed for repeated runs or for a specified amount of time. Data were collected at the end of each experiment and saved as CSV files. The collected data were analyzed, interpreted and validated, and the findings were documented for further study.
Figure 6 depicts the experimental set-up. The controller and compute nodes were plugged into the power source through plug-in power and energy monitors. The nodes were connected to the internet through a secure proxy server via a 24-port ethernet switch. Two network interface cards (NICs) were present in each node: NIC1 provided access to the internet, while NIC2 was connected to the management (internal) network. In Openstack terms, the public IP obtained by each virtual machine is called the floating IP address [79].
The compute nodes varied in capacity and configuration. Compute1 was an Intel Core i7-3770 CPU @ 3.40 GHz with 8 cores, while Compute2 was an Intel Core 2 Duo E8400 CPU @ 3.00 GHz with 2 cores and Compute3 was an Intel Core 2 Duo E8500 CPU @ 3.16 GHz with 2 cores. Table 2 presents the configuration and idle power consumption (IPC) of the nodes.

Power consumption of the compute servers varied widely during data processing. The average power consumed during a specific time period is referred to as power consumption, while the peak value during that period is the peak power consumption. Reducing peak power consumption has a positive impact on cost with respect to power supply and distribution [15]. A set of baseline experiments was conducted on the infrastructure to analyze the power usage of each compute node. The first set of experiments was conducted with simulated load generated using stress-ng [99], a stress test utility for exercising OS interfaces and subsystems [100]. The peak power consumption of the controller and compute nodes was observed for different CPU, generic input/output and RAM (virtual memory stressor) workloads. To compare the six combinations of VM consolidation algorithms (presented in Table 1), an experiment was conducted by invoking each 'Combo' on Openstack NEAT. Openstack allows over-committed CPU resources at a ratio of 16:1, where the scheduler can allocate up to 16 virtual cores per physical core [101]. Considering this and the available CPU and RAM resources, the number of VMs running on the cluster at any point in time was set to a minimum of 16 and a maximum of 96. As discussed in Section III.C, a big data workload from the IoT eye tracking system (Figure 3) was processed as Spark jobs on the cluster of virtual machines for 24 h, during which both power consumption and performance data were collected. The experiment was repeated for each 'combo'. It is often argued that virtualization causes overhead on servers [102]; however, another study concludes that the CPU and memory overhead caused by virtualization is insignificant [103]. This paper provides insight into the effect of the virtualization layer on power consumption. For the baseline experiment, no workload was applied to the compute nodes. The peak power consumption, CPU and memory utilization of the compute nodes were recorded when no virtualization was enabled and when KVM, Openstack and Openstack NEAT services were enabled. The results of these experiments are presented in the subsequent section.
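The stress-ng sweep described above (CPU load from 0 to 100% in steps of 10, 60 s per step) can be scripted. The sketch below only builds the command lines; `--cpu`, `--cpu-load` and `--timeout` are standard stress-ng options, but actually executing them requires stress-ng to be installed on the node.

```python
# Sketch: build the stress-ng commands for the CPU-load sweep used in the
# baseline experiments (0-100% load in 10% steps, 60 s each).

def stress_cmd(cores, load_pct, seconds=60):
    """Command line to load `cores` CPU workers at `load_pct` percent."""
    return ["stress-ng", "--cpu", str(cores),
            "--cpu-load", str(load_pct), "--timeout", f"{seconds}s"]

# Sweep over all 8 cores of Compute1; each entry can be run via subprocess.
sweep = [stress_cmd(8, pct) for pct in range(0, 101, 10)]
print(sweep[5])  # the 50% load step
```

Analogous sweeps with `--io` and `--vm`/`--vm-bytes` cover the generic input/output and RAM (virtual memory stressor) workloads.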

Peak Power Consumption for Synthetic Workloads
As discussed in the previous section, stress-ng was used to synthetically stress the compute nodes with CPU, I/O and RAM workloads. The number of cores to be stressed, the number of I/O tasks and the amount of RAM are provided as input. Each experiment ran for a time period of 60 s, with the workload applied in percentages from 0 to 100 at intervals of 10.
Each experiment was repeated 10 times, with the peak power consumption (PPC, in watts) observed and noted. The averaged results are tabulated in Table 3. The graph for a single core is depicted in Figure 7, while the graph for all cores is shown in Figure 8.
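Averaging the repeated runs is a one-liner per workload level; the sketch below shows the shape of that reduction. The readings are made-up placeholders, not the values in Table 3.

```python
# Sketch: average peak power consumption (PPC) over repeated runs.
# The readings below are illustrative placeholders, not Table 3 data.
from statistics import mean

runs_ppc = {   # workload % -> PPC readings (W) from repeated runs (3 shown)
    50: [61.2, 59.8, 60.5],
    100: [88.4, 90.1, 89.0],
}
avg_ppc = {load: round(mean(vals), 1) for load, vals in runs_ppc.items()}
print(avg_ppc)
```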

It is observed that, for similar workloads, the power usage characteristics of single-core and multi-core servers differ, as affirmed by [52]. From Figure 7, it is evident that Compute2 and Compute3 consume approximately similar amounts of power. However, Compute1 consumes less power than the other compute nodes when only one of its cores is stressed. This is because the i7 processor is optimized for power consumption compared to the Core 2 Duo processors [52]. On the contrary, when all cores are stressed (see Figure 8), the PPC of Compute1 changes drastically, particularly when the workload exceeds 50%. The turbo boost feature of the i7 processor could be responsible for this behavior [104]; it reduces execution time by up to 6% at the cost of increasing energy consumption by 16% [105]. Comparing Compute2 and Compute3, the average PPC is approximately 60 W at 50% workload in both cases. However, as the workload increases from 60% to 100%, Compute3 tends to consume more power than Compute2. Several factors could be responsible for this, one of which is the electronic hardware ageing phenomenon [106]. This analysis shows that, to reduce overall energy consumption during data processing, it is important to reduce peak power consumption by effectively identifying underloaded and overloaded hosts and then reducing the number of active hosts by putting the remaining hosts into an idle mode. The idle-mode power consumption of the compute nodes is negligible, as shown in Table 2.

Performance Metrics
To compare the efficiency of the six VM consolidation approaches, metrics in Table 4 were used to evaluate the performance.

Total Energy Consumption (E)
Total energy consumption is the sum of the energy consumed by the compute servers as a result of application workloads over a specific time period. It is measured in kilowatt-hours [59].
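With plug-in power monitors sampling at a fixed interval, the total energy follows by summing power over time and converting joules to kWh. The sampling interval and the constant-load example below are assumptions for illustration.

```python
# Sketch: total energy E (kWh) from periodic power readings, assuming the
# plug-in monitor reports average power (W) at a fixed sampling interval.

def energy_kwh(power_watts, interval_s):
    """Sum P * dt over all samples and convert J -> kWh (1 kWh = 3.6e6 J)."""
    joules = sum(p * interval_s for p in power_watts)
    return joules / 3.6e6

# e.g., a host drawing a constant 60 W for 24 h, sampled every 60 s:
samples = [60.0] * (24 * 60)
print(round(energy_kwh(samples, 60), 2))  # -> 1.44
```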

Number of VM Migrations
VMs are selected for migration once a host is identified as underloaded or overloaded. Minimizing the time spent migrating is a crucial step, which is achieved by reducing the total number of VM migrations.


Power State Changes
The number of state changes (on and off) of compute nodes must be minimal to avoid unnecessary loss of energy.
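Given a chronological trace of each host's power state, this metric is simply the number of on/off transitions. The trace below is an assumed example, not measured data.

```python
# Sketch: count power state changes of a host from a sampled on/off trace.

def state_changes(trace):
    """Number of on<->off transitions in a chronological state trace."""
    return sum(1 for a, b in zip(trace, trace[1:]) if a != b)

trace = ["on", "on", "off", "off", "on", "off"]  # host slept twice, woke once
print(state_changes(trace))  # -> 3
```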
Service Level Agreement (SLA)
The QoS requirements of a system are devised in the form of an SLA (service level agreement), determined by attributes such as throughput or response time, which are application dependent. In the case of IaaS, QoS can be evaluated using SLA metrics that depend on VM and compute resources [74]. IaaS SLA violations (SLAV) can be measured using the two metrics below.
SLATAH (SLA violation time per active host) is the period of time during which a host experiences 100% CPU utilization and the requested performance is not delivered, as it is limited by the node's capacity, causing a violation of the SLA, as shown in Equation (3), where N is the number of compute nodes, Tsi is the total time during which host i experienced 100% utilization leading to an SLA violation, and Tai is the total time during which host i actively provided VMs [107].
PDM (performance degradation due to migrations) is the overall degradation in performance experienced during the migration of virtual machines, as shown in Equation (4), where M is the number of virtual machines, Cdj is the performance degradation of VM j caused by migrations, and Crj is the total processor capacity requested by VM j. In general, Cdj is assumed to be 10% of the CPU utilization in million instructions per second (MIPS) during migrations [107].
SLAV (SLA violation): as SLATAH and PDM are two independent metrics, SLAV is a metric that combines the performance degradation caused by overloading with that caused by VM migrations, as shown in Equation (5): SLAV = SLATAH * PDM. It denotes the violation that takes place when the promised QoS is not met [107].
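Equations (3)-(5) themselves did not survive extraction. From the definitions of the symbols above, and following the standard formulation in [107] that they describe, the equations can be reconstructed as:

```latex
\text{SLATAH} = \frac{1}{N}\sum_{i=1}^{N}\frac{T_{s_i}}{T_{a_i}} \quad (3)
\qquad
\text{PDM} = \frac{1}{M}\sum_{j=1}^{M}\frac{C_{d_j}}{C_{r_j}} \quad (4)
\qquad
\text{SLAV} = \text{SLATAH} \times \text{PDM} \quad (5)
```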

Energy and SLA Violations (ESV):
Energy consumption (E) of the compute nodes and SLAV are negatively correlated, as energy consumption can be reduced at the cost of increased SLA violations, whereas the goal of an energy-aware system is to minimize both energy and SLA violations. Hence, the combined energy and SLA violations (ESV) metric proposed in [67] is shown in Equation (6): ESV = E * SLAV. A lower ESV value indicates that energy saving is high relative to SLA violations.
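The metric chain from per-host and per-VM observations to ESV can be computed directly from the definitions above. The input numbers below are illustrative assumptions, not the paper's measurements.

```python
# Sketch: SLATAH, PDM, SLAV and ESV from the definitions above.
# Input values are illustrative, not measured results.

def slatah(ts, ta):
    """Mean per-host ratio of 100%-utilization time to active time."""
    return sum(s / a for s, a in zip(ts, ta)) / len(ts)

def pdm(cd, cr):
    """Mean per-VM ratio of migration degradation to requested capacity."""
    return sum(d / r for d, r in zip(cd, cr)) / len(cd)

def esv(energy_kwh, slav):
    """Combined energy and SLA violations metric (Equation (6))."""
    return energy_kwh * slav

s = slatah(ts=[120, 60], ta=[86400, 86400])  # seconds over a 24-h run
p = pdm(cd=[100, 50], cr=[2000, 2000])       # MIPS
slav = s * p                                 # Equation (5)
print(esv(3.22, slav))
```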

Performance Evaluation
The experimental results for the investigation of the impact of the six VM consolidation 'combos' on energy consumption are illustrated in Figure 9.
The following discussion summarizes the results obtained by studying the impact of the six VM consolidation approaches. SLA violations are caused both by over-utilization of resources (performance degradation due to 100% resource utilization) and by degradation caused by extensive VM migrations, as defined in [67]; the SLA metrics that define SLA violations are SLATAH and PDM, as discussed in the previous section. Table 5 presents the comparison of the energy and SLA violation metrics of the six approaches. With VM consolidation, the minimum energy saving is 8.33% (Combo1) and the maximum is 44.09% (Combo6). Combo6 clearly outperforms all the other algorithm combinations, reducing electrical energy by 2.54 kWh (within a 24-h duration) by switching underloaded compute nodes to sleep mode. Combo5 and Combo3 save up to 1.73 kWh and 1.44 kWh, respectively, within the same duration. VM selection plays a crucial role in energy saving, as random choice (RC) causes aggressive, energy-consuming migrations, which is mitigated by the minimum migration time (MMT) algorithm [70]. Combining MMT with the prediction-based local regression robust (LRR) and statistical median absolute deviation (MAD) algorithms accomplishes substantial energy saving.
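The reported percentages can be reproduced from the energy figures. This is a sketch: the no-consolidation baseline of roughly 5.76 kWh is inferred from the reported 44.09% saving and 2.54 kWh reduction for Combo6, since it is not stated directly in the text.

```python
# Sketch: percentage energy saving relative to the no-consolidation baseline.
# The 5.76 kWh baseline is inferred from the reported figures, not stated.

def saving_pct(baseline_kwh, consumed_kwh):
    return round(100 * (baseline_kwh - consumed_kwh) / baseline_kwh, 2)

baseline = 5.76                    # inferred 24-h no-consolidation energy
print(saving_pct(baseline, 3.22))  # Combo6: ~44% saving (2.54 kWh reduced)
print(saving_pct(baseline, 4.03))  # Combo5: ~30% saving
```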
Effective identification of overloaded/underloaded hosts and VMs to be migrated is crucial in VM consolidation as aggressive VM migrations lead to unnecessary energy loss [74].In addition, power state changes between sleep and on states should be kept to a minimum [64].Figures 10 and 11 compare the number of VM migrations and power state changes of the six approaches.
From Figure 10, it can be seen that Combo6 has the fewest VM migrations. The combination of LRR's prediction of resource utilization and MMT's strategy of selecting VMs based on minimum migration time is effective in saving energy while causing the least migration overhead. Combo5 is the second-best method in terms of VM migrations. From Figure 11, it is observed that the minimum migration time (MMT) algorithm performs better than random choice (RC) in keeping power state changes at an optimal level, as the 'combos' that employ MMT (Combos 4, 5 and 6) have fewer power state changes than those that apply RC (Combos 1, 2 and 3).
In terms of SLA compliance, the SLA violations (SLAV) metric is computed from SLATAH and PDM for each combo. From Table 5, it is observed that Combo5 (MAD and MMT) has the least SLA violations, followed by Combo6 (LRR and MMT). The threshold-based heuristic (THR) and random choice (RC) algorithms cause the most SLA violations and are not as effective as MMT, MAD and LRR. The energy and SLA violations (ESV) metric in the case of Combo6 is reduced by the energy consumption factor; the balance between energy saving and SLA violations is expressed by ESV [71]. Although Combo5 has fewer SLA violations, it consumes more energy (4.03 kWh) than Combo6 (3.22 kWh). Figures 12 and 13 present graphs of SLAV and ESV, respectively. The results obtained are similar to those presented in [67]. The experimental results for the overhead caused by the virtualization layer on peak power consumption, CPU and memory utilization are presented in Table 6. Notations 'A' and 'B' in Table 6 denote 'no virtualization enabled' and 'virtualization enabled by KVM, Openstack and Openstack NEAT', respectively. The increase in PPC caused by the virtualization layer is negligible (less than 1 W), with less than 0.5% additional CPU and memory utilization.
In summary, based on the results obtained, it is evident that Combo6 is the best combination of VM consolidation algorithms for our IoT eye tracking big data workload. To understand the economic and environmental sustainability implications of Combo6, a projection of the cost of the electrical energy for running the compute nodes, and of the carbon emissions for the required energy generation, was calculated for a period of 30 days. Since the scope of this research is the energy usage of compute nodes, the energy cost is calculated only for running the compute nodes; the controller and other ICT equipment, such as the ethernet switch, are not taken into account. A list of 14 countries that are prime locations of hyperscale data centers [108] or suitable locations for data centers [109] was chosen. The energy cost in USD and the carbon dioxide emissions in kgCO2 per month were calculated using Energy Council (2017) data on the electricity-specific energy generation cost and carbon emissions of countries [110].
Cost and carbon emissions due to wastage during energy generation and transmission are beyond the scope of this research. A comparison is made between 'no VM consolidation applied' and 'Combo6' for processing the same IoT big data workload. Figures 14 and 15 present the projected energy cost and the projected carbon dioxide emissions for the required energy generation in various countries over a period of 30 days. There is a significant decrease in both energy cost and carbon dioxide emissions with Combo6 in every country. In countries such as Denmark that generate energy from renewable sources, the cost per kWh is as high as 0.34 USD; applying Combo6 for VM consolidation can save up to USD 25.908 even for a small 3-node set-up. Countries with colder climates are often preferred locations for data centers, as there is no need for additional cooling systems. Countries such as China, India and Japan are becoming popular data center locations due to the availability of labor, connectivity and a lower cost of electrical energy. Although the cost of electricity generation in these countries is low (e.g., China, USD 0.09 per kWh), the amounts of carbon dioxide and other greenhouse gases (GHG) emitted are very high (e.g., China, 1.33 kgCO2/kWh) compared to countries like Finland (0.01 kgCO2/kWh) and Sweden (0.02 kgCO2/kWh) that primarily use renewable energy sources. Applying energy-saving systems and approaches, such as the most suitable VM consolidation technique, in data centers can save substantial amounts of money and have a great impact on the environment, not only by reducing carbon dioxide emissions but also by increasing the lifetime of computing systems and thereby producing less electronic waste [41].
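The 30-day projection is a direct multiplication of the daily energy saving by each country's price and emission factor. The sketch below reproduces the Denmark cost figure; the emission factor paired with Denmark's price is an illustrative assumption, as the text quotes factors only for China, Finland and Sweden.

```python
# Sketch of the 30-day projection: cost and CO2 for the 2.54 kWh/day saved
# by Combo6. Prices and emission factors are the per-country figures quoted
# in the text; pairings not quoted there are assumptions.

def monthly_saving(kwh_per_day, usd_per_kwh, kgco2_per_kwh, days=30):
    kwh = kwh_per_day * days
    return {"usd": round(kwh * usd_per_kwh, 3),
            "kg_co2": round(kwh * kgco2_per_kwh, 3)}

# Denmark's price (0.34 USD/kWh); 0.01 kgCO2/kWh is an assumed low factor.
print(monthly_saving(2.54, 0.34, 0.01))   # -> ~25.908 USD saved per month
# China's price and emission factor (0.09 USD/kWh, 1.33 kgCO2/kWh).
print(monthly_saving(2.54, 0.09, 1.33))
```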


Conclusions and Future Work
This study investigated the energy consumption and performance of compute hosts for IoT big data processing in a private cloud infrastructure. The results obtained from real physical compute resources address the limitations and fidelity problems of investigating VM consolidation in simulated environments with simulated workloads. From a data center's perspective, compute hosts and cooling systems are considered major consumers of energy. In addition to hardware and software application efficiency, cloud resource management plays a key role in energy saving: VM consolidation reduces overall energy consumption and thereby reduces utility and operational costs. From the observations made, it is clear that power consumption varies with workload, so it is vital to choose apt VM consolidation algorithms for each workload. Furthermore, an energy-aware system is only effective when it meets QoS requirements in addition to saving energy. Therefore, VM consolidation algorithms for IoT big data workloads should be selected for both lower energy consumption and fewer SLA violations so as to meet QoS requirements. For IoT big data workloads, the regression-based LRR algorithm outperformed the static threshold-based THR and adaptive threshold-based MAD algorithms for overload detection. Combo6, combining the local regression robust (LRR) overload detection algorithm with minimum migration time (MMT) VM selection, which predicts resource utilization and chooses the VMs that require the minimum time to migrate, is recommended as it performed better than the other combinations. The additional overhead caused by virtualization on the compute hosts is negligible considering its added value. VM consolidation can also play a vital role in countries that generate electricity from fossil fuels, reducing the negative impact on the environment by burning fewer non-renewables. This work aptly falls under the theme 'Green Technologies and IT'. Furthermore, an energy-aware cloud system must be robust and scalable. Although the global manager of Openstack NEAT is centralized, a distributed model of the VM consolidation framework could prevent a single point of failure. Future directions of this research could encompass the use of a distributed framework with an increased number of compute nodes and different big data platforms for the analysis of VM consolidation algorithms. Additionally, the portfolio of algorithms could be extended by including additional relevant ones.

Figure 1 depicts the research methodology of this work, which encompasses five phases: 1. define the aim and objectives of the research, identify research gaps, and understand state-of-the-art energy-efficient systems and approaches; 2. analyze and select VM consolidation algorithms extracted from existing research, design the cloud system architecture, and set up the cloud-IoT infrastructure; 3. implement a composite selection of VM consolidation algorithms on the configured cloud infrastructure; 4. design and conduct a set of experiments with varying parameters, repeating the experiments 10 times for each combination of algorithms and collecting the data; 5. analyze the results relating to SLA and energy metrics, and recommend the best combination of VM consolidation algorithms for the IoT eye-tracker big data workload.
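The SLA metrics analyzed in phase 5 are not defined in this excerpt; one common formulation in VM consolidation studies (the family of metrics OpenStack NEAT builds on) combines the SLA violation time per active host (SLATAH) with the performance degradation due to migrations (PDM) into a single SLAV value. A minimal sketch with illustrative inputs:

```python
from typing import Sequence

def slatah(active_time: Sequence[float], overload_time: Sequence[float]) -> float:
    """SLA violation Time per Active Host: mean fraction of each host's
    active time during which its CPU was at 100% utilisation."""
    return sum(o / a for a, o in zip(active_time, overload_time)) / len(active_time)

def pdm(cpu_demanded: Sequence[float], cpu_allocated: Sequence[float]) -> float:
    """Performance Degradation due to Migrations: mean fraction of the CPU
    a VM demanded but did not receive while being migrated."""
    return sum((d - a) / d for d, a in zip(cpu_demanded, cpu_allocated)) / len(cpu_demanded)

def slav(active_time, overload_time, cpu_demanded, cpu_allocated) -> float:
    """Combined SLA Violation metric: SLAV = SLATAH * PDM."""
    return slatah(active_time, overload_time) * pdm(cpu_demanded, cpu_allocated)
```

Because SLAV is a product, a combination of algorithms only scores well when it keeps both the time hosts spend saturated and the migration-induced degradation low, which is why energy savings alone are not sufficient to recommend a combination.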

Figure 7. Average peak power consumption of compute nodes for synthetic workloads (single core).

Figure 8. Average peak power consumption of compute nodes for synthetic workloads (all cores).

Figure 9. Total energy consumption of compute nodes in 24 h.

Figure 11. Power state changes of compute servers.

Figure 14. Projected energy costs for 30 days.

Table 1. Combination of algorithms.

Table 2. Configuration of servers.

Table 3. Configuration of servers.

Table 4. Configuration of servers.

Table 5. Energy and SLA violation metrics.

Table 6. Effect of virtualization on peak power consumption and resource utilization.