A Systematic Survey on Energy-Efficient Techniques in Sustainable Cloud Computing

Global warming is one of the most pressing environmental threats today, as rising energy consumption and CO2 emissions have had a severe impact on our environment. Data centers, computing devices, network equipment, and related infrastructure consume vast amounts of energy, generated mainly by thermal power plants. These plants primarily burn fossil fuels such as coal and oil, inducing environmental problems such as global warming and ozone layer depletion, which can even cause premature deaths among living beings. Since the world recognized the importance of these concepts, the recent research trend has shifted towards optimizing energy consumption and green computing. This paper conducts a complete systematic mapping analysis of the impact of high energy consumption in cloud data centers and its effect on the environment. To answer the research questions identified in this paper, one hundred nineteen primary studies published until February 2022 were considered and further categorized. Some new developments in green cloud computing and a taxonomy of the various energy-efficiency techniques used in data centers are also discussed, including VM virtualization and consolidation, power-aware methods, bio-inspired methods, and thermal-management techniques, along with an effort to evaluate the cloud data center's role in reducing energy consumption and CO2 footprints. Most researchers proposed software-level techniques, since these do not require massive infrastructure compared with hardware techniques and are less prone to failures and faults. Finally, we disclose some dominant problems and provide suggestions for future enhancements in green computing.


Introduction and Motivation
Cloud computing is evolving continuously to fulfill the increased demand for computing resources, allowing clients to purchase a specific set of resources and only pay for their use [1]. Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Container as a Service (CaaS) are the different types of well-known services provided by cloud systems [2]. The pay-per-use model allows customers to scale services on demand.
Recent research shows that, if power consumption continues to grow at its current rate, energy costs will eventually exceed a data center's hardware infrastructure expenses. This will also lead to more emissions of CO2, which have a direct impact on our environment. Here, the concept of green computing comes into the picture. Green computing is the ecologically responsible use of computers and related resources; its goal is to optimize the energy consumption of computing machines, servers, data centers, networks, and cooling systems [2,3].
Green computing plays a prominent role in data centers, as they account for a significant percentage of global energy consumption; practices are therefore needed that include the proper deployment of energy-efficient technology in data centers and reduce both infrastructure and carbon emissions. There is a fundamental understanding that the more hardware and equipment employed, the greater the energy requirements [3,4].
Moreover, this also diminishes the cost of operating a data center. Green computing is similar to green chemistry [5], whose primary focus is to reduce the use of hazardous materials. It also includes the proper disposal of electronic waste, such as computing devices, with minimal impact on the environment.
This also leads to an improved distribution mechanism for resources and services, which reduces both equipment and the carbon footprint. As shown in Figure 1, the main task is to decrease energy drops and losses, i.e., energy supplied to data centers that is not consumed or does not provide helpful output, such as energy wasted while running in idle mode [2][3][4][5].
There are millions of servers worldwide; Google is estimated to have over a million servers, and Microsoft's Chicago data center hosts more than 300,000 servers. In the USA, 23.5% of the electric power these facilities consume is generated from coal [3,4]. The increase in energy consumption by data centers has raised many environmental and economic concerns. According to Cisco (2018), global data center traffic will reach 20.6 zettabytes (ZB) per year by 2021, up from roughly 6 ZB per year in 2016. They forecast 628 hyper-scale data centers worldwide in 2021, whereas there were 338 in 2016.
If we talk about mobile users, their number is expected to increase by 70% from 2017 to 2022 [5]. It would benefit both environmental sustainability and internet firms if data centers adopted the concept of green computing. Virtualization is a core aspect of green cloud computing; it has revolutionized cloud computing by optimizing resource utilization and enabling the dynamic migration of resources. It allows equipment to be used more effectively and lowers costs by consolidating the workloads of a set of physical servers onto one [6].
Excessive heat is also liberated due to increased power consumption, which in turn increases the use of cooling devices. Cooling is the second-largest source of power consumption in a data center, the first being the operation of the computing equipment itself [4,5]. A typical data center with a 1-megawatt requirement consumes about $20,000,000 worth of energy during its lifespan [5,6], and 30-40% of this total cost goes to its cooling systems [7]. The cost of operating and cooling an extreme-scale system equals the cost of purchasing new hardware within two years.
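As a rough illustration of the figures cited above (a back-of-the-envelope sketch, not a measured result), the share of a data center's lifetime energy cost attributable to cooling can be computed directly:

```python
# Illustrative arithmetic from the cited figures: a 1-megawatt data center
# with roughly $20,000,000 in lifetime energy costs [5,6], of which
# 30-40% goes to cooling [7].
lifetime_energy_cost = 20_000_000               # USD over the facility's lifespan
cooling_share_low, cooling_share_high = 0.30, 0.40

cooling_cost_low = lifetime_energy_cost * cooling_share_low    # lower bound, USD
cooling_cost_high = lifetime_energy_cost * cooling_share_high  # upper bound, USD
print(f"Cooling costs roughly ${cooling_cost_low:,.0f}-${cooling_cost_high:,.0f}")
```

Under these assumptions, cooling alone accounts for six to eight million dollars of the lifetime energy bill.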
Keeping the cooling at or below 70 °F (21 °C) controls the entire data center's temperature; however, cooling beyond a certain limit can lead to high humidity, exposing the devices to moisture and facilitating salt deposits on conductive filaments in circuits, which can cause short circuits [8]. Maintaining a low temperature is necessary to ensure reliability and prolong the hardware's life. From the above data, we can say that high-performance computation requires more energy for operations and conclude that performance is directly proportional to energy consumption [9][10][11][12].

Motivation for Research
This study primarily intends to address the rapidly growing energy consumption and to analyze this issue by presenting a cross-sectional survey of recent developments in energy-efficient cloud technology. In addition to this, we also offer a brief introduction and attempt to explain the importance of efficient energy usage in cloud data centers. Data center power consumption is incredibly high as they require energy for computation and cooling purposes.
We conducted a systematic mapping study to examine and report our empirical findings on the associations between energy-consumption activities in data centers and their environmental impacts. We hope this research will reduce certain ambiguities concerning the advantages and limitations of energy-efficiency technologies. The motivation behind our research can be summarized as follows:

• The needs and demands for understanding existing energy-efficient techniques are proliferating. Power-saving innovations, issues related to CO2 emissions, and proper energy distribution are crucial requirements in the current era. This paper reports a comprehensive systematic mapping study (SMS) to analyze, amalgamate, and present our empirical findings relevant to energy-efficiency techniques.

• There might be no need to implement more infrastructure when using virtualization, as many virtual machines can operate on a single physical machine. However, this will not control carbon emissions if a data center's infrastructure does not work efficiently. Such a virtualized environment will require more energy to operate and cool the equipment, which prompted a comprehensive systematic mapping procedure to examine and evaluate the growing research in this field.

Our Contribution
We initially retrieved 2903 relevant articles, published until February 2022, from six digital libraries and academic search engines. These papers were then screened by title, abstract, and full text, complemented by manual search, snowballing of references, and other perspectives. A total of 119 primary studies (PSs) were selected after evaluating the quality of the articles. These selected PSs were further investigated to make the following significant contributions.
1. We provide a novel taxonomical overview of various energy-efficient techniques at the cloud data center level by classifying the existing literature extracted from available research papers.
2. We describe the role of energy-efficient techniques in reducing the ecological and financial impact of cloud data centers.
3. We precisely define the research questions related to effective energy utilization in the cloud environment, further associated with cloud data centers.
4. We discuss the results generated by various techniques in different simulation software, which helps to predict and evaluate the effects of techniques used to reduce energy consumption.
5. We outline issues, concerns, and recommendations for future research work.
To conclude, most current studies do not give a comprehensive review of all cloud-based energy-saving approaches. Details about the systematic survey technique are not included in the majority of the surveys, which mostly focus on hardware- and software-level techniques for energy efficiency. Our systematic literature study identifies the significant issues in the current research and presents a taxonomy of the various procedures executed on the cloud as energy-saving methods.
Our survey also provides recommendations for the issues in different techniques. None of the previous studies addressed the threats to validity arising from unintentional inconsistencies in reviewing the literature, selecting the primary studies, evaluating the quality assessment, and extracting data from the selected studies. We discuss these constraints, including conclusion, construct, internal, and external validity threats, together with the actions taken to mitigate them. We hope that the outcome of this SMS will help cloud researchers and professionals to identify open research fields for further investigation and to obtain an up-to-date view of the state of research in this area.
This research will enable the scientific community to consider the consequences and challenges to be addressed for better energy utilization and resource distribution in the cloud-computing industry. Section 2 discusses the evolution of energy-efficiency techniques in data centers and explains the need for and role of energy in data centers. It also discloses cloud data centers' contribution to handling high energy consumption and other energy-related issues. Section 3 elaborates on this article's review methodology and identifies the open research questions in this domain. This section details the search keywords, search strings, and selection criteria (inclusion and exclusion details).
The systematic review results are explained in Section 4, and we provide a detailed description and analysis of various energy-management strategies in the cloud data centers. Different taxonomies were defined to obtain a comprehensive and elaborated classification of current cloud energy-efficiency techniques. Section 5 highlights some issues and recommendations for every category of energy-efficient technique. Section 6 discusses potential threats to the effectiveness of the work. In the end, Section 7 summarizes the conclusions of this SMS, and Section 8 outlines the possible recommendations for further research in this area.

Background and Related Work
Section 2.1 discusses the need for energy-efficient techniques, and Section 2.2 the role of energy efficiency in cloud data centers. Section 2.3 explains the history and evolution of energy-efficient techniques. Finally, Section 2.4 discusses related surveys on energy-efficiency techniques.

Need of Energy-Efficient Techniques
Energy consumption in data centers was estimated at 1.4% of total electrical energy consumption (EEC), increasing by 12% annually [12][13][14]. Green cloud computing is a new area that has attracted researchers' attention worldwide [15]. Today, we need technology that resolves the tradeoff between energy and performance. Proper distribution of workloads and maintaining performance by providing users access to services anywhere, anytime, is one of the cloud's prime objectives. Minimizing energy usage may cause violations of the SLA (Service Level Agreement); since SLA breaches incur penalties for cloud service providers, energy-efficiency systems should avoid any SLA violation [12].
A recent report by Accenture shows that small organizations and mid-range corporations can reduce CO2 emissions by up to 90% if they shift to cloud resources, and even large organizations can achieve at least 40-60% reductions in emissions if they establish their whole infrastructure on the cloud [12][13][14][15]. Most data centers are only interested in reducing electricity costs, not carbon emissions; therefore, the plan is to eliminate the sources of CO2 as well [13].
If data centers are constructed on green-computing principles, the environmental benefits of cloud computing become even more prominent. Massive data centers are typically designed for cloud-computing services, connected by several high-speed networks and virtual servers, and include facilities such as temperature control and power systems [16]. As Figure 1 shows, there are four conditions in a system where energy is used inefficiently, i.e., drops or losses, where both "drop" and "loss" denote inefficient energy use.
An energy drop refers to energy transmitted to the process but not consumed by any subsystem (D1), for example, electricity wasted during transfer or conversion. A second cause of drop is the support systems' overhead (D2), such as cooling or lighting in data centers, primarily supplied by cloud service providers. A power loss refers to energy used for its primary purpose but wasted due to system abandonment (L1), for example, when the system is running but idle [16][17][18][19][20]. Another cause of loss is over-the-limit or redundant usage (L2), such as cooling the system to the maximum at night when the temperature is already low.
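The four categories above can be read as simple energy accounting. The following sketch (an illustration with hypothetical kWh figures, not measured data) computes the fraction of supplied energy that produces useful output after subtracting the drops (D1, D2) and losses (L1, L2):

```python
def useful_energy_fraction(supplied_kwh, drops_kwh, losses_kwh):
    """Fraction of supplied energy left after drops (D1, D2) and losses (L1, L2)."""
    wasted = sum(drops_kwh) + sum(losses_kwh)
    return (supplied_kwh - wasted) / supplied_kwh

# Hypothetical example: 100 kWh supplied; D1 = 5 (transfer/conversion waste),
# D2 = 30 (cooling and lighting overhead); L1 = 10 (servers running idle),
# L2 = 5 (redundant over-cooling at night).
print(useful_energy_fraction(100, drops_kwh=[5, 30], losses_kwh=[10, 5]))  # 0.5
```

In this made-up scenario, only half of the supplied energy performs useful work, which is the kind of inefficiency the techniques surveyed here aim to reduce.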
Reducing power consumption in cloud data centers and cloud computing systems is a challenge that has raised researcher interest over time and has become a trending research domain today.

Role of Energy Efficiency in Cloud Data Centers
Cloud computing has become a crucial evolutionary paradigm. It provides a powerful and dynamic ability to handle all IT needs, including computing, storage, and software requirements. A cloud data center can deliver a powerful and efficient computing environment through virtualization. This concept has been widely disseminated and has gained the interest of organizations seeking to reduce hardware and software investment [15]. It offers cost advantages when thousands of users share the same resources and pay per use with no capital investment, providing high scalability and dynamic provisioning.
The key concepts in cloud computing are grid computing, parallel computing, virtualization, containerization, and distributed computing, and the latest trending areas are microservice architecture and SOA (service-oriented architecture). Cloud technology has recently emerged to meet users' computationally intensive needs by using virtual infrastructure. The cloud is globally accessible via the internet [20,21]. Numerous cloud data centers have also been set up, offering colossal computing infrastructure.
These data centers are home to massive servers that require additional computing tools, such as storage, cooling devices, and backup system support [22][23][24][25][26][27][28][29][30]. Moreover, these thousands of servers in the data center take up enormous space and consume tremendous electricity. These servers dissipate heat and must be maintained in a completely air-conditioned environment specially designed for data centers. The energy consumption of data centers is also expected to increase rapidly [31].
An estimate assumes the proportion of worldwide electricity consumed by data centers will increase from 1.3% to 8% between 2010 and 2020 [17]. This massive increase in power consumption makes it very clear that many data centers and servers are responding to the growing demand for cloud services [32][33][34]. This often represents inefficient use of energy, as not all power supplied to a data center is used to perform cloud storage services.
The cooling network consumes a large portion of the energy, as it requires a continuous power supply. In legacy data centers, up to 50% of the capacity is used by non-server machines [14]. These data centers do not yet apply the best energy-management patterns in their design and operation phases. Still, the industry has made substantial progress in this field. Therefore, the possibilities for improving data centers' energy efficiency now rely heavily on finding ways to increase the energy efficiency of the servers within them [31][32][33][34][35].

Evolution of Energy Efficiency Techniques
Energy efficiency refers to reducing the energy requirements of specific products and services, which limits carbon emissions and thereby helps curb rising temperatures [36][37][38]. Many reasons exist for improving energy efficiency: reduced energy usage decreases expenditure on energy and can result in a financial benefit to customers [39,40]. Another reason is that greater energy consumption releases harmful gases into the environment, increasing the earth's temperature. This section shows the evolution of studies on energy efficiency, as seen in Figure 2.
This topic deals with energy-management technologies, CO2 emissions from data centers, and maximizing performance with minimum energy consumption. Various authors have suggested different methods for optimizing energy consumption. Ref. [14] presented attributes to aggregate the power utilization of massive server collections; a collection of nearly 15,000 points was sampled across different applications for approximately six months.
The authors used the proposed modeling framework to approximate the power management scheme, thereby, minimizing the peak power load and energy utilization at a data center. A novel software platform for energy-efficient cloud management was proposed by [10], which suggests that the cloud infrastructure has real potential for energy efficiency. It substantially improved the performance regarding the response time and cost reductions in dynamic workload scenarios. Similarly, Ref. [36] recommended a simulation paradigm for energy-aware cloud computing data centers.
This simulator was designed to explicitly gather information on power usage by data center modules, such as IT equipment, servers, and switches. It handled not only the workload distribution but also packet-level connectivity patterns under different settings. The simulation findings showed the usefulness of applying power-control strategies such as voltage scaling, frequency scaling, and dynamic equipment shutdown. VMeter, an innovative power-modeling tool that expresses direct correlations between online resources and the overall power usage of a data center, was suggested in [29].
GRMP-Q protocol is a generic resource allocation gossip protocol introduced by [25]. It aims to reduce power consumption by server consolidation while shifting the load pattern. The results indicate that core efficiency metrics do not change with the machine volume, allowing the resource allocation mechanism to be scalable with around 100,000 servers. Developing an energy reduction model for cloud computing [37][38][39][40][41][42][43][44] offers a practical tool to explore various aspects of the network based on mathematical methods to reduce energy usage.
While implementing it on AMPL/CPLEX platforms, they also offered an analytical approach to investigate different aspects of the energy-reduction network. Ref. [45] developed a Cloud Monitor that can generate data models for cloud-based power forecasting, providing insight into the energy costs of cloud activity without any additional external equipment [37]. A new, simple energy-consumption model was introduced by [34]; it also works as an analysis tool for cloud-computing environments to track and calculate total energy consumption based on different runtime activities and to support static or dynamic system-level optimization [46][47][48][49][50][51][52].
To enhance energy efficiency and control carbon footprints, Ref. [22] developed a conceptual model and realistic recommendations for the integrated management of all infrastructures, including computers, networks, storage, and cooling systems [93][94][95][96][97][98][99]. They also proposed a range of potential research directions and suggested more practical improvements in this field.

Related Surveys
Although earlier surveys [18,100,122] have been innovative, there is still a need for a systematic literature survey to determine, update, and expand research objectives, as research in the field of green cloud computing is continuously gaining interest. The first two surveys discussed energy-efficient strategies based on hardware optimization. In [122], existing energy-efficient technologies were summarized across hardware- and software-oriented optimization systems [19]. However, they primarily focused on software-level techniques, and some of the latest innovative technologies, such as bio-inspired and thermal-aware technologies, were also included.
In the current study, we evaluate and extend the taxonomy of energy-efficiency techniques defined by [122], covering power-management systems, bio-inspired approaches, and non-technical techniques. Ref. [25] decomposed sensor cloud energy-efficient approaches into multiple categories and evaluated each one using a variety of metrics.
This survey determines how much each technique uses each parameter and the average percentage of parameters used for sensor cloud energy efficiency. According to the authors' observations, the vast majority of energy-saving approaches ignore quality of service (QoS) factors, scalability, and network lifetime. The total energy usage of a cloud system can be reduced by using the service-allocation mechanisms outlined in [74].
In this survey, the authors formulated the service-allocation problem and an energy model based on a generalized system architecture. A taxonomy of energy-efficient resource-allocation approaches in the literature is also presented. Ref. [91] examined all conceivable sources of high energy consumption in cloud-computing infrastructures. This study examines the current solutions built to increase energy efficiency in large-scale cloud-computing environments without sacrificing quality of service or performance.
Various methods for reducing the power consumption of data centers are discussed in the survey by [51]. All strategies used to lower the energy consumption of specific hardware components/levels are covered in detail, with much emphasis on techniques deployed at the hardware level (network- or server-level) that can lead to energy-efficient or ecologically friendly data centers [122][123][124][125][126][127][128][129][130][131][132].
Several different applications of the bi-level optimization model in managing ship emissions are discussed in an article [146]. These applications include the Energy Efficiency Design Index, the Emissions Control Area, the Market-Based Measure, the Carbon Intensity Indicator, and the Vessel Speed Reduction Incentive Program. An extensive taxonomy of sustainable cloud computing was developed in the paper [36]. A taxonomy was designed to explore the existing methodologies for sustainable cloud computing, covering numerous areas, such as application design, sustainability measures, capacity planning, energy management, and other methods.
The authors of [93] highlight the critical issues related to reducing carbon emissions from ships. The main focus of that paper was on managing a critical measure, the carbon intensity indicator (CII), i.e., the carbon emissions per unit of transport work for each ship. In order to lower the carbon footprint of data centers, other research focused on decreasing brown-energy usage and increasing renewable-energy consumption. The authors of [7] proposed a self-adaptive architecture based on microservices and renewable energy for interactive and batch workloads [140][141][142][143][144][145][146].
To sum up, we conclude that most state-of-the-art papers do not provide a systematic survey of all techniques for energy efficiency in the cloud. Most surveys also do not describe the systematic methodology by which they were conducted. This review augments previous surveys and introduces a new systematic literature survey to discover the key challenges in existing studies and provide a taxonomy of the different techniques applied on the cloud as energy-mitigation techniques. Table 1 compares our systematic survey with previous related studies and highlights the key features that are only included in our survey.

Research Methodology
The research approach in this paper follows the recommendations for systematic surveys given by [12,13,31,84,119]. This review was performed in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines.

Survey Methodology
In the first step, we formulate the research questions and discuss the sources of information, the collection criteria, quality assurance, and the analysis of the papers' findings. The complete process can be seen in Figure 3. The next step focused on common keywords from the domain, and only relevant combinations of these words were used in the final search strings. Then, inclusion and exclusion criteria were defined so that the review contains only related research papers.
The strings were thoroughly applied to the different information sources, followed by reading titles and abstracts to remove papers that were not relevant, or that were linked to the energy domain but not to cloud computing. The procedure was replicated across all sources, and the related articles were studied and evaluated in detail. The obtained documents were thoroughly investigated and carefully categorized in the final step. Consequently, this study contains 119 papers after performing each step mentioned above.

Source of Information
We thoroughly reviewed research articles and conference papers from Scopus, Web of Science, ACM, Research Gate, Google Scholar, books, magazines, and white papers. In Figure 4, we can see the ratio of papers analyzed from multiple sources. The following databases were used in our search:

Research Questions
The research questions, as seen in Table 2, were constructed to explain the different parameters used in different energy-efficiency techniques and their effects on the environment. They were identified in the early stages of the literature survey. Previous work in this field has focused on identifying the unexplored relation between energy efficiency and its role in creating a sustainable environment.

Research Questions (RQs)
RQ1-Which measuring parameters are considered for power consumption in our selected studies?
RQ2-Describe the impact of high energy consumption by cloud data centers. Explain the estimation of total energy consumption by data centers.
RQ3-What is the current status of cloud energy efficiency?
RQ4-What kinds of techniques for energy efficiency were proposed by our selected studies?
RQ5-Describe the various energy-efficiency techniques employed at the hardware level.
RQ6-Describe the various energy-efficiency techniques employed at the software level.
RQ7-Explain the various techniques for virtual machine consolidation applied at data centers.
RQ8-Describe the different power-aware management techniques.
RQ9-Explain the various bio-inspired techniques for energy efficiency applied at data centers.
RQ10-Describe the different thermal-management and cooling technologies employed at data centers.
RQ11-Explain the key aspects that make the cloud favorable for reducing carbon footprints and better energy utilization. Describe the non-technical technologies employed at cloud data centers for enhancing energy efficiency.
We answer and attempt to analyze the research questions listed above based on the data extracted from the primary studies. We also attempt to identify the most and least explored parts of cloud computing with respect to high energy consumption and carbon emissions. The mapping between the PICOC criteria and the SLR RQs can be seen in Table 3. RQ1 answers which parameters the whole computing system is evaluated on. RQ2 addresses the impact of high energy consumption and an estimation of total energy consumption in data centers, and RQ3 discusses the current status of data centers regarding energy efficiency. The different energy-efficiency techniques and how they help reduce carbon footprints are addressed in RQ4 to RQ10. In RQ11, a brief discussion is conducted on some of the non-technical mechanisms employed for energy mitigation in data centers.

Search Keywords
To retrieve all the related papers, the search keywords contained various combinations of words. The aim was to cover multiple fields, including energy-efficient technology related to ICT and its role in a sustainable environment.
The keywords used for searching were mainly focused on our research questions. The final search strings included primary, secondary, and additional "AND" operation keywords as shown in Figure 5. The primary keywords included basic terms related to energy, power, and cloud computing, while the secondary keywords consisted of general words, such as environment, sustainable, techniques, and reduction. To ensure the search strings cover all related papers, we added some new keywords by performing the 'AND' operation, which combines different keywords to enhance our search for related papers.

Search String
To formulate our search string, we complied with certain guidelines [55]. The following steps were used for the preparation of the preliminary string.

• First, the leading search words were taken from the research questions.

• Next, lists of abbreviations, synonyms, and alternate spellings were compiled for the main search words.
• Finally, relevant search strings were constructed using Boolean AND operators. A preliminary string was created with all possible spellings, synonyms, and abbreviations: (cloud OR energy OR power OR power consumption OR methods OR techniques OR carbon OR "energy efficiency") AND (empirical * OR "case study" OR "case studies" OR experiment * OR survey) AND (energy efficiency * OR power-efficiency * ).
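The construction rule described above, with synonyms joined by OR inside a group and groups joined by AND, can be sketched programmatically (a hypothetical helper for illustration, not part of the review tooling):

```python
def build_search_string(groups):
    """Join synonyms with OR inside parentheses, then AND between groups."""
    return " AND ".join("(" + " OR ".join(terms) + ")" for terms in groups)

# Keyword groups loosely modeled on the preliminary string in the text.
groups = [
    ["cloud", "energy", "power", '"power consumption"', '"energy efficiency"'],
    ["empirical*", '"case study"', '"case studies"', "experiment*", "survey"],
    ['"energy efficiency*"', '"power-efficiency*"'],
]
print(build_search_string(groups))
```

Each digital library would still need its own syntax adjustments, as discussed below for the per-library strings.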
We manually selected 30 papers (related to cloud energy consumption and energy efficiency) to validate the search terms of the initial search string. The search strings were applied to the title, abstract, and article keywords in each chosen electronic database.
We thus verified the search terms against the title, abstract, and keywords of the 30 nominated publications. We assume this validation ensures that the related literature published in the identified digital repositories is wholly covered. However, only 17 of the 30 selected papers were captured by our original search string, which helped us refine the string to cover a broader range of articles. After comprehensive experiments, the resulting general search string was obtained. We specified a search string for every digital library, as different libraries require distinct search formation/syntax rules (Table 4). We also conducted a pilot study in the selected digital libraries before shifting to the data-collection method using the formulated search strings. We then compared the test result set with the 30 manually generated reference articles. The search string retrieved 25 of the 30 items, thus validating our search string.

ACM Digital Library:
(Title:(cloud OR processors OR saving OR methods OR techniques OR scheduling OR emission OR "energy-efficiency" OR "power-efficiency") AND (energy-efficient techniques * OR "power-optimization" OR "energy-mitigation" OR "energy reduction" OR "energy efficient") OR Keywords:(cloud OR processors OR saving OR methods OR techniques OR scheduling OR emission OR "energy-efficiency" OR "power-efficiency")) AND (Title:(energy-efficient techniques * OR "power-optimization" OR "energy-mitigation" OR "energy reduction" OR "energy efficient")) OR Abstract:("power-efficiency") AND (energy-efficient techniques * OR "power-optimization" OR "energy-mitigation" OR "energy reduction" OR "energy efficient")) OR Keywords:("power-efficiency") AND (energy-efficient techniques * OR "power-optimization" OR "energy-mitigation" OR "energy reduction" OR "energy efficient")))

Wiley Online Library:
(("power-efficiency") AND (energy-efficient techniques * OR "power-optimization" OR "energy-mitigation" OR "energy reduction" OR "energy efficient" OR "power-optimization" OR "energy-mitigation" OR "energy reduction") in Article-Titles OR (cloud OR processors OR saving OR methods OR techniques OR scheduling OR emission OR "energy-efficiency" OR "power-efficiency") in Abstract OR (cloud OR processors OR saving OR methods OR techniques OR scheduling OR emission OR "energy-efficiency" OR "power-efficiency") in Keywords) AND ((energy-efficiency * OR energy-consumption * ) in Full Text)

Selection Criteria
Once the final search strings were constructed, a large number of articles were identified using them; however, only a fraction of these were actually relevant to the survey. A total of 2309 papers were retrieved using the methods discussed above, of which 119 papers were included in this survey based on the selection criteria and further analysis, as seen in Figure 6. The main reasons for exclusion were as follows: 1. Research papers use the term "energy" in different contexts; many published articles were associated with computer networks, wireless sensor networks, and neural networks rather than with cloud computing. 2. Some research articles were excluded because they mainly concentrated on power consumption in parallel computing, thermal energy, mechanical machines, etc., whereas our survey aims to cover power in the cloud computing domain only. We included papers from January 2008 to February 2022 in our research.

3.6.1. Criteria for Inclusion and Exclusion

Table 5 lists the inclusion and exclusion criteria used to filter out articles that do not relate to our SMS research questions. Initially, 88 papers were picked at random from the automated search results, to which both authors independently applied the inclusion and exclusion criteria. Cohen's Kappa statistic measured the agreement between the two authors [123]; the first trial yielded Kappa = 0.54, which, according to [65], falls into the moderate category.
This moderate consensus between the two authors can be attributed to differing interpretations of the inclusion and exclusion criteria. Therefore, the authors held meetings and diligent consultations to establish a shared interpretation of the refined inclusion criteria and to address inconsistencies accordingly. Later, we re-applied Cohen's Kappa statistics [123] to a different set of 85 randomly chosen papers, this time obtaining substantial agreement. The substantial agreement suggests that both authors were sufficiently clear about the inclusion and exclusion requirements and that these criteria could now be applied in the paper screening process for a more effective and consistent evaluation.
We did not use Cohen's Kappa coefficient again before screening the remaining research articles, as, after the first phase, we already had refined criteria. Regarding the last inclusion criterion, the essential motivation was to consider only energy topics related to the cloud domain: we did not include articles covering the energy domain in networking or other computing devices. Instead, our goal was only to assess the influence of high energy consumption in the cloud.
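As an illustration of how the inter-rater agreement used above can be computed, the following minimal Python sketch implements Cohen's Kappa for two reviewers' include/exclude decisions. The decision lists are hypothetical examples, not the paper's actual screening data.

```python
def cohen_kappa(a_labels, b_labels):
    """Cohen's Kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(a_labels) == len(b_labels)
    n = len(a_labels)
    observed = sum(a == b for a, b in zip(a_labels, b_labels)) / n
    categories = set(a_labels) | set(b_labels)
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum(
        (a_labels.count(c) / n) * (b_labels.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical include (1) / exclude (0) decisions over ten papers.
rater_a = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
kappa = cohen_kappa(rater_a, rater_b)  # 0.4 for these decisions
```

On the Landis and Koch scale referenced by [65], values of 0.41-0.60 are "moderate" and 0.61-0.80 "substantial", which is how the trial values of 0.54 and the later Stage 4 values are graded.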

Inclusion Criteria
1. Articles presenting the importance and use of energy efficiency. 2. Articles that clearly describe a technique for energy efficiency in cloud computing. 3. Peer-reviewed and written by academic researchers or industry professionals. 4. Research papers covering different aspects of energy and the issues it generates. 5. Published in the field of cloud computing in reputable journals, conferences, and magazines. 6. Research papers that describe the direct role of energy in cloud data centers as well as in generating carbon footprints.
Exclusion Criteria 1. Research articles that are not in the context of energy efficiency in the cloud but describe some other theme, such as computer networks. 2. Articles that only present common challenges and references. 3. Summaries of conversations, workshops, book chapters, or conferences. 4. Research papers not written in English. 5. Duplicate research articles, e.g., extended versions published in different venues.

Reference Checking
The reference lists of the 119 relevant articles collected after refining the automated and manual search outcomes were then verified manually, and reference snowballing was conducted independently by the authors. Additionally, with regard to previously published SLRs on energy-efficient cloud computing technologies [18,76,100,122], we cross-checked their references to reduce the probability that related papers were overlooked. After the evaluation by reference snowballing, both authors produced a consistent list of research papers. In this step, the probability of missing any related literature was minimized.

Article Screening
The selection of the primary studies was made via an article screening process consisting of five steps. The relevant literature search was first performed in December 2021 and later revised in February 2022, covering the literature published until February 2022. Figure 7 shows the screening of articles, with each stage ending in a retrieved-item count. In Stage 1, automated and manual searches were performed in selected journals and conference proceedings, integrating six electronic databases for the automatic search.
The automatic search yielded 2903 articles, which were collected via Zotero (a free and open-source reference management tool for managing bibliographic data and related research materials, originally created at the Center for History and New Media at George Mason University) and subsequently exported into an MS Excel spreadsheet. In Stage 2, a cleaning process was performed on the auto-search results to filter out immaterial items, such as bibliographies, workshops, and symposium summaries, and we manually removed 1423 articles. In Stage 3, the authors independently filtered the remaining search results in subsequent stages from the Excel spreadsheet based on title, abstract, and full text, complying with the inclusion/exclusion criteria specified in Section 3.6.1.
At each phase, we used Kappa statistics [123] to examine agreement between the two authors, as suggested by [55]. In Stage 4, we discarded articles based on their titles; the statistical analysis resulted in kappa values of 0.74 and 0.71 for Phase 1 and Phase 2, respectively, which reflects substantial agreement in both cases according to [65]. In the event of a disagreement, the authors agreed to include the research paper in the next step to reduce the risk of eliminating relevant research articles, as suggested by [87].
Following this procedure, we chose 221 (192 + 29) articles and excluded 312 research papers across Phase 1 and Phase 2; these selected articles were then used as input to the next stage. The consistency of agreement between the two authors was considered substantial in both phases. In Stage 5, both authors conducted the inclusion and exclusion procedure independently by reading the complete text, and 110 (90 + 20) papers were obtained. We discussed 19 articles (10 in Phase 1 and 9 in Phase 2) to settle the remaining disagreements between the two authors, after which 9 of the 19 were additionally included.
Hence, this process yielded 119 papers after full-text-based removal. Overall, 1480 (1288 + 192) articles were selected in Stage 2, 947 (475 + 58) in Stage 3, and 221 (192 + 29) in Stage 4. After these stages, we had 221 articles, and the reference snowballing results were further processed based on the abstract and full text. Eventually, the studies selected after Stage 5 (including reference snowballing) were regarded as the final list of 119 primary studies.

Quality Assessment of Study
While performing an SLR, the quality of studies needs to be evaluated to determine each study's significance during synthesis of the results. This helps to select high-quality studies that deliver accurate results, as indicated by [55]. Evaluating research quality is not an obligatory activity in an SMS; however, we attempted it as an add-on, integrating our mapping protocol with a quality assessment step based on a previous SMS [122]. In the first instance, we developed a quality evaluation checklist based on [55].
The checklist involves ten questions covering different aspects of research in this domain, including design, conduct, data analysis, and conclusions. To answer the checklist questions, we read the full text of each article. Each study was evaluated on a three-point scale, with a rating of 'yes', 'partially', or 'no' for each question (the first author evaluated each study's quality, and the second author later checked the outcomes of each study).
The numeric values 1, 0.5, and 0 were assigned to 'yes', 'partially', and 'no' answers for each question. The study's final quality score was then obtained by summing the scores over all quality assessment questions. Studies that scored above 4.4 were eligible for inclusion in the final list. Additionally, Section 3.6.3 outlines the technique employed to resolve inconsistencies in the quality evaluation process. A few questions did not apply to some primary studies, and for those cases we used an 'NA' value. Table 6 shows the results of the quality evaluation phase, indicating positive responses to most quality evaluation questions (QA1, QA3, QA6, QA9, and QA10). In QA1, we analyzed each study's abstract and initial sections to see whether the research goals were clearly defined. The 119 primary studies identified the study goals appropriately (87.6%) or partially (12.4%). Concerning QA2, we examined whether the authors clearly outlined the research methodology used to determine the effect of the cloud's energy efficiency on the environment.
We examined the study design and methods sections for this purpose and found that 76.6% of the studies appropriately explained the research methodology. To answer QA3-QA7, we reviewed the complete composition of each paper. Approximately 93.5% of studies identified the characteristics or metrics for energy efficiency in clouds (QA3), and 65.4% correctly identified the parameters or measures for energy consumption in cloud systems, making it easy to evaluate the level of optimization in energy consumption (QA4). Concerning QA5 and QA6, we noted that the dataset's exact size and language were stated correctly (72% and 94.7%, respectively). Regarding QA7, very few (only 14.3%) primary studies measured the statistical significance of the results obtained concerning the effect of high energy consumption in the cloud. For QA8, we reviewed the articles' discussion, shortcomings, and improvements to check whether validity threats were reported. We replied 'yes' to this question for studies where validity threats are specifically addressed and marked 'partially' for articles where validity threats were listed without sufficient explanation. To answer QA9 and QA10, we reviewed the papers' critical determinations, discussion, and conclusions. This helped us decide whether all the analysis questions were answered and the research results were reported adequately. For most research, positive answers were obtained (99.1% and 96.2%, respectively).
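The scoring scheme above can be sketched in a few lines; the checklist answers below are hypothetical, while the 1/0.5/0 values, the 'NA' skipping, and the 4.4 cut-off follow the text.

```python
SCORES = {"yes": 1.0, "partially": 0.5, "no": 0.0}

def quality_score(answers):
    """Sum the ten checklist answers; 'NA' questions are skipped."""
    return sum(SCORES[a] for a in answers if a != "NA")

def include(answers, threshold=4.4):
    """A study enters the final list only if its score exceeds the threshold."""
    return quality_score(answers) > threshold

# Hypothetical checklist answers for one study (QA1..QA10).
answers = ["yes", "partially", "yes", "yes", "no",
           "yes", "NA", "partially", "yes", "yes"]
score = quality_score(answers)  # 7.0 for this example
```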

Data Extraction
In order to address the research questions specified in Section 3.3, we extracted the related information from the collection of 119 papers. Data extraction information was documented after reading each article in detail. The points below were extracted from every article.

• Primary and complete reference information, including the title, authors, and publication year.
• The energy efficiency techniques used in the cloud.
• Features such as the name and type of technique and the programming language used.
• Empirical results concerning the influence of high energy consumption on cloud performance and the environment.

Results
Only high-quality papers were used in the systematic research approach to classify the energy-efficiency techniques and their role in cloud computing. A total of 2903 publications were found when searching for energy-management applications in the cloud computing area, of which one hundred nineteen papers were finalized for the survey; the distribution of published articles over the years is shown in Figure 8. The articles fall into the following categories: 1. A total of 28 papers were extracted for hardware energy-efficient techniques (controlling the frequency and voltage of servers), and a significant portion of 32 papers covered software energy-efficiency techniques. 2. Different energy efficiency techniques are employed by different types of applications: 17 papers were selected for bio-inspired techniques, 11 for consolidation, and 13 for power-management techniques.

Answer to Research Questions
All 119 PSs were thoroughly reviewed to extract the information needed to answer the research questions mentioned in Section 3.1. The answers to the research questions are as follows. While answering the research questions, we refer to primary studies in the format [S + study reference number] to avoid confusion for readers. It is essential to understand the energy consumption pattern in order to optimize usage. In cloud environments, servers consume a significant portion of energy, and their energy consumption depends on the type of usage and the amount of computation power required [56]. It also depends on the type of computation the server is performing; for example, data recovery and data processing consume different amounts of energy. Networking equipment, lighting, and cooling devices also add to the overall energy consumption. The contribution of each of them corresponds to a few percentage points of the total consumption; however, this share is increasing [49].
The PDUs and UPSs also consume considerable resources and increase the workload, leading to a rise in consumption. The UPS continuously charges and supplies power until generators can start in case of a utility failure. For a basic understanding of the parameters used in power consumption, let us start with the mathematical formula [65] for the total energy consumption of an active server. The active server's overall energy consumption, defined as ∆E_Total, is the amount of energy consumed in a time frame in both the static and dynamic states of the machine, plus the further energy generated by scheduling overhead, indicated by E_Sched:

∆E_Total = E_Static + E_Dynamic + E_Sched (1)

This paper mainly concentrates on energy consumption, which involves various components, such as server idle conditions, cooling systems, computing, storage, and the use of communications resources. These are defined as follows: (i) E_Idle, the energy consumed in server idle mode; (ii) E_Cool, the energy consumed by the cooling system; (iii) E_Compu, the energy consumed by computation resources; (iv) E_Store, the energy consumed by storage resources; (v) E_Commu, the energy consumed by communication resources.
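A minimal sketch of this component-wise decomposition of a server's total energy is shown below; the watt-hour figures are purely illustrative, and the function simply sums the five resource components and the scheduling overhead.

```python
def total_server_energy(e_idle, e_cool, e_compu, e_store, e_commu, e_sched):
    """Delta E_Total as the sum of the five resource components
    plus the scheduling overhead."""
    return e_idle + e_cool + e_compu + e_store + e_commu + e_sched

# Illustrative (hypothetical) per-interval figures in watt-hours.
delta_e_total = total_server_energy(e_idle=120, e_cool=300, e_compu=450,
                                    e_store=80, e_commu=40, e_sched=10)
```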
Equation (1) above can, therefore, be expanded into:

∆E_Total = E_Idle + E_Cool + E_Compu + E_Store + E_Commu + E_Sched (2)

Metrics are used to understand how a technique works as well as to evaluate it; without certain parameters [72], we are unable to validate a technique. Other parameters used to measure energy constraints are given below.
i. The energy consumption must be considered in conjunction with performance metrics on the efficient use of energy in infrastructure. Some energy consumption assessment metrics [17] are listed in Table 7.

Table 7. Power-measuring parameters used to measure the power consumption of the data center.

Green energy consumption: A measure of the amount of green energy (i.e., energy derived from renewable sources) being consumed in the data center. Used to assess the environmental impact of a data center's operation.

Compute power efficiency: CPE = (IT equipment utilization × IT equipment power) / total facility power. A measure of the computational capacity delivered by the data center relative to the overall power used.

Energy reuse factor: ERF = reused energy / total energy consumed. A metric of reusable energy, i.e., energy that is consumed and then reused outside the data center.

Data center productivity: useful work done / total resource taken to produce this useful work. A measure of the amount of fruitful work yielded by the data center.

Thermal design power: The maximum power a computer chip can consume while a process is in execution. Determines the maximum cooling capacity needed by the computer equipment.

SWaP (Space, Wattage and Performance): SWaP = performance / (space × power). A Sun Microsystems data center metric, developed for computing the resources and capacity needed by a data center.
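Several of the Table 7 metrics can be expressed directly as code. The sketch below uses hypothetical utilization, power, and energy figures, and assumes the conventional SWaP form performance / (space × power).

```python
def compute_power_efficiency(it_utilization, it_power_w, facility_power_w):
    """CPE: (IT equipment utilization x IT equipment power) / total facility power."""
    return (it_utilization * it_power_w) / facility_power_w

def energy_reuse_factor(reused_energy_kwh, total_energy_kwh):
    """ERF: share of the consumed energy that is reused outside the data center."""
    return reused_energy_kwh / total_energy_kwh

def swap_metric(performance, space_units, power_w):
    """SWaP: performance per unit of space and power (Sun Microsystems)."""
    return performance / (space_units * power_w)

# Hypothetical figures: 60% utilized IT load of 500 W in a 1000 W facility,
# with 50 kWh of 1000 kWh consumed energy reused.
cpe = compute_power_efficiency(0.6, 500.0, 1000.0)  # 0.3
erf = energy_reuse_factor(50.0, 1000.0)             # 0.05
```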
ii. Some sustainability metrics are considered for evaluating the total energy consumption and its environmental effects; these are aligned with international carbon-reduction initiatives and can be seen in Table 8.

Table 8. Power-measuring parameters used to measure the carbon consumption of a data center.

Carbon usage effectiveness: CUE = total CO2 emission from the energy used / total energy consumed. Measures the greenhouse gases released into the environment by the data center.

Water usage effectiveness: WUE = water used / EIT. A measure of the water requirements of a data center.
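The two Table 8 ratios can likewise be sketched directly; the CO2, energy, and water figures below are hypothetical.

```python
def carbon_usage_effectiveness(total_co2_kg, total_energy_kwh):
    """CUE: total CO2 emission from the energy used / total energy consumed."""
    return total_co2_kg / total_energy_kwh

def water_usage_effectiveness(water_litres, e_it_kwh):
    """WUE: water used / energy consumed by the IT equipment (EIT)."""
    return water_litres / e_it_kwh

# Hypothetical annual figures for one facility.
cue = carbon_usage_effectiveness(total_co2_kg=1200.0, total_energy_kwh=2000.0)  # 0.6
wue = water_usage_effectiveness(water_litres=1800.0, e_it_kwh=1000.0)           # 1.8
```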
iii. Rating systems of data centers deal with energy efficiency metrics as well as other global metrics, such as operational and regulatory terms, as seen in Table 9.

Table 9. Power-measuring parameters used to measure the energy consumption efficiency of a data center.

Data centers can be considered the internet's brain: their role is to process, store, and transmit the data comprising the information and services on which we rely in our daily lives [68,69], whether video streaming, email, social media, online collaboration, or scientific computing. Data centers' power and energy requirements are very high because of computing and cooling, increasing their expenses and carbon emissions. Energy consumption in data centers worldwide is estimated at 1.4% of total electricity energy consumption (EEC) and grows annually at 12% [28].

Usually, a data center has several network appliances and pieces of equipment connected to the internet, facilitating data traffic flows that consume vast amounts of energy, which is progressively transformed into heat. The heat generated must be extracted from the data centers; otherwise, cooling devices are needed that consume still more electricity [85]. As seen in Figure 9, most of the power consumption of data centers comes from infrastructure and servers. Many of the largest data centers worldwide also host several thousand IT hardware devices and consume more than 100 megawatts (MW) of power, sufficient to supply about 80,000 U.S. homes (US DOE 2020) [88].
While global demand for data centers has risen and the number of users increases day by day, global IP traffic has grown sharply: the volume of internet data more than doubled from 2010 to 2018 [27], while worldwide data center storage capacity increased roughly 25-fold [71]. The number of computing instances operating on servers worldwide increased six-fold. Most data on electricity consumption by data centers is not officially available; however, using data obtained at regional levels, we can form some estimate of the energy consumption.
Some statistical approaches, also called "bottom-up" models, help measure energy consumption by accounting for the installed IT equipment in various data centers and using its energy consumption characteristics to estimate the overall energy consumption [87]. Although bottom-up studies consider many factors when deriving an assessment of energy usage, they do not appear very frequently, as they are data-dependent and time-intensive. As [57] reported, data centers accounted for approximately 1.1% to 1.5% of global energy consumption in 2010. With extrapolation models, however, significant growth in data center power usage continues to be predicted, since the demand metrics on which they depend often rise rapidly. Estimates by various authors [8,16] suggest that energy used in data centers has been doubling since 2010 and is expected to grow at the same pace in the future, reflecting the widespread assumption examined by [46] that a significant increase in demand for data translates directly into rapidly rising energy usage in data centers [47,48]. Over the last decade, however, global energy usage in data centers has possibly increased by only about 6 percent between 2010 and 2018 [71]. These findings were based on integrating various recent datasets that take into account installed stocks, functional properties, and the total energy consumption in data centers. Currently, the community faces a sustainability challenge because of the high electricity use by CDCs, whose energy consumption is continuously rising. As shown in Figure 10, most of the energy is consumed by the cooling system and networking equipment. By 2030, the energy consumed may reach 8000 Tera Watt hours (TWh) [6].
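A toy version of such a bottom-up model might look as follows; the inventory counts, per-unit power draws, and PUE overhead factor are illustrative assumptions, not published data. The estimate is simply installed stock × average power × hours, scaled by facility overhead.

```python
# Hypothetical installed stock and average per-unit power draw (watts).
inventory = {
    "servers":    {"count": 10_000, "avg_power_w": 250.0},
    "storage":    {"count": 2_000,  "avg_power_w": 150.0},
    "networking": {"count": 500,    "avg_power_w": 100.0},
}

def annual_energy_twh(inventory, pue=1.6, hours=8760):
    """Bottom-up estimate: sum of stock x power x hours for the IT load,
    scaled by a PUE factor for cooling and facility overhead."""
    it_wh = sum(d["count"] * d["avg_power_w"] * hours for d in inventory.values())
    return it_wh * pue / 1e12  # watt-hours -> terawatt-hours

estimate = annual_energy_twh(inventory)  # ~0.04 TWh for this toy fleet
```

Real bottom-up studies such as [71] refine every term of this product (utilization, vintage, idle power, cooling efficiency), which is precisely why they are data-dependent and time-intensive.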
However, information on the electricity consumed by global data centers is a valuable benchmark for testing claims regarding the CO2 consequences of data center services. One frequent assumption is that global data centers expel about 900 billion kilograms of CO2, on par with the emissions produced by the global aviation industry [86]. Recent claims also indicate that the emissions from 30 min of watching Netflix (1.6 kg CO2) equal the emissions from driving nearly four miles. This statement rests on the assumption that Netflix streaming services in data centers consume approximately 370 TWh per year [50]. However, this is 1.8-times greater than the projected 205 Tera Watt hours (TWh) for all data centers worldwide combined, which provide society with countless other information services beyond simply watching Netflix videos. Three vital factors influence energy performance [58]. First, IT manufacturers must continue to improve the energy efficiency of IT devices, especially servers and storage drives [59][60][61][62][63][64]. Second, increasing deployment of virtualization software, which empowers many applications to run on a single server, undoubtedly reduces every application's energy consumption. Third, the trend toward large cloud and hyper-scale class data centers that use ultra-efficient cooling systems should be promoted to reduce energy consumption [88][89][90][91].

RQ3: The Current Status of Cloud Energy Efficiency
With the advancement of cloud-based technologies, remote connectivity has made increasing collaborative work among customers feasible [73,79]. Almost all cloud-based companies [82,83], such as Amazon, Google, Microsoft, Sun, and IBM, focus on developing improved data centers, thus strengthening their global data center structure [77].
These data centers currently need enormous power to run network equipment, monitors, cooling fans, machines, air conditioners, etc. Worldwide energy use in data centers is 65% higher than in the previous year. Both issues stem from high energy usage and greenhouse emissions [49,50]. We can suppose that cloud-based systems are one of the causes of global warming, because some gases produced by cloud-storage infrastructure, such as carbon dioxide and carbon monoxide, have caused significant problems for our environment [70].
Minimizing data centers' electricity and power usage would both curb global warming and support an intelligent economy, contributing significantly to environmental protection. As per the predictions, the estimated cost of cloud activities for the Amazon website and operating its data centers is about 53 percent of the overall budget. Energy-related expenses amount to 42 percent of the overall cost, including the cost of power needed for system operations and cooling facilities [41].
Data centers use around 2% of the world's energy; however, by 2030, this is projected to be as high as 8%. According to Hewlett Packard Enterprise's research, only about 6% of all the data we generate is actually used. That means the remaining 94% sits in vast digital deposits, generating significant carbon footprints [75]. In 2018, the total data available on the planet totaled 33 zettabytes; however, by an estimation conducted by the International Data Corporation, this will reach 175 zettabytes by 2025. Today, our community generates about 2.5 quintillion bytes of data every day, which must be stored and processed in data centers, leading to gradual hikes in cloud data centers' energy consumption.
Energy efficiency issues have steadily grown since advances in the computing industry brought high energy demand [37]. This section focuses on improving the energy efficiency of data centers. The effect of increased energy use in data centers should be investigated, and this article analyzes the various energy conservation techniques implemented in cloud data centers. To reduce the inevitable data center energy consumption, a parallel investment in renewable power sources will be required [71].

RQ4: What Different Techniques for Energy Efficiency Were Proposed by Our Selected Studies?
There are many ways to optimize energy consumption and improve resource reliability. However, there is scope for improvement in the various energy-saving techniques, as well as in energy-aware resource allocation heuristics, ISN, SN, and SI policies using green control algorithms, thermally efficient resource management, bin packing algorithms for virtual machine placement, and dynamic allocation based on the current utilization of resources [S17].
Different countries' governments are now collecting taxes on greenhouse gases, such as CO2 dissipated by IT industries, and the total cost of cloud services increases due to inefficient energy utilization. To provide a proper understanding of the various techniques, we divide them into seven categories, as shown in Figure 11: hardware-level; software-level (virtualization and consolidation); power-aware management; thermal-based; bio-inspired; and other techniques, such as non-technical ones. These are the fundamental techniques by which energy consumption can be reduced, and they are interconnected with each other. The latest improvements at the hardware level, such as newer processors, have proven to be more energy-efficient than older ones. Moreover, improvements to a server's processing capability are made by regulating its frequencies and voltages. Software solutions have a more significant effect on energy usage [49]; they are mainly concerned with server energy, which involves reducing the number of servers and the number of operating memory nodes. Network techniques that reduce the traffic between servers take less energy during processing [104]. Power management deals with resource allocation and migration strategies that lead to more efficient energy usage. Finally, non-technical techniques include building organization and data center design [62]; these also help to reduce energy. Even the weather and nearby available renewable energy are essential factors for energy-efficient techniques.

RQ5: Describe Various Energy Efficiency Techniques Employed at the Hardware Level
At the hardware level, we deal with the hardware components through which energy can be used efficiently, even across servers with different computing capabilities. Efficiency can be achieved by controlling the servers' frequencies and voltages, computation quality, processor speed, and other hardware components [67]. The processing unit consumes more energy than other hardware components, although processor energy efficiency has improved over time as processors are upgraded in terms of computation, speed, and performance.
Still, more energy optimization needs to be achieved by modifying and upgrading the architecture of microprocessors. The carbon footprint depends on the energy source applied in the data centers [51]. Green computing is incorporated to solve both problems of increasing carbon footprint and energy optimization [52][53][54]. Before discussing hardware-level techniques in detail, we first look at the causes of problems, such as high energy consumption and high carbon footprints.
Moreover, we also attempt to understand the absolute need for cloud and cloud data centers. Today most gadgets, such as smartphones, tablets, smartwatches, health care devices, and sensors connect to clouds for their private data storage purpose. Software applications, such as e-mails, messengers, enterprise apps, social web networks, e-cart apps, audio and video streaming apps, broadcasting, and entertainment services also utilize cloud services to store, process, share, and secure their data.
The most popular search engine giant, Google, hosts all of its services, such as Gmail, Google Earth, Google Drive, Google Play, and YouTube, on its cloud platform (GCP) to offer high-quality services to its worldwide customers. Currently, more than 50 percent of mobile users own smartphones and Android phones, which have become prevalent due to their flexibility and usability; their servers are hosted in the cloud, spanning thousands of apps and websites. DNS, the most extensive database globally, is also expanding as many domain names are purchased and updated. Cloud computing requires a massive number of data centers or server farms.
Each data center comprises hundreds or thousands of physical machines organized in hundreds of racks, each able to run numerous virtual machines (VMs). To provide access to mail, videos, pictures, etc., at any time, data are distributed and stored in massive data centers. Backups of all data are stored and synchronized at geographically distant data centers to protect data during emergencies and unpredictable natural calamities, such as tsunamis and earthquakes.
Google has at least thirty data centers worldwide and more than a million servers that usually consume 500-681 megawatts. Amazon Web Services uses 38 data centers [34] and 454,400 servers globally. We learned that energy efficiency at the hardware level is pursued from the design phase through manufacturing and implementation. Intensive computation by processors generates excessive heat, so cooling is needed to control heat and temperature. Hardware-level techniques always need complementary software-level solutions, as these are very important for managing hardware resources efficiently.
Providing machines with only the minimum required energy saves energy but degrades performance; on the other hand, supplying the maximum required energy increases performance but also increases heat liberation and power consumption. Among hardware techniques, we include cooling equipment, lighting equipment, power supply, architecture and framework, infrastructure, and virtual machine allocation- and scheduling-based techniques; we treat the power supply as a separate domain of techniques. Below, we discuss different hardware-based techniques proposed by various researchers to reduce energy consumption.
Ref. [S2] demonstrated that a Network Connectivity Proxy (NCP) can be used in an organization to boost computers' energy efficiency significantly. The authors note that PC idle time in enterprises is roughly more than 70 percent, and that a new desktop PC consumes 1.5 W while off, 2.5 W in sleep mode, and 60 W while on. If idle time is minimized by putting machines into sleep or off mode, the yearly energy saving per PC would be nearly 400 kWh; for an enterprise with 10,000 such PCs, the savings would be about 4,000,000 kWh.
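The [S2] figures can be checked with a short calculation. Assuming the 70% idle hours would otherwise be spent fully powered on at 60 W rather than asleep at 2.5 W, the per-PC saving comes out near the quoted 400 kWh figure.

```python
def annual_saving_kwh(on_w=60.0, sleep_w=2.5, idle_fraction=0.70,
                      hours_per_year=8760):
    """kWh saved per PC per year if idle hours are spent asleep instead of on."""
    idle_hours = idle_fraction * hours_per_year   # ~6132 h of idle time
    return (on_w - sleep_w) * idle_hours / 1000.0

per_pc = annual_saving_kwh()      # ~352.6 kWh, i.e., "nearly 400 kWh"
fleet = 10_000 * per_pc           # ~3.5 million kWh for 10,000 PCs
```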
Ref. [S3] introduced a green cloud architecture that seeks to reduce data centers' resource usage and facilitates extensive online monitoring, live virtual machine migration, and VM placement optimization. The evaluation was conducted using the online real-time game Tremulous as a VM program. Ref. [S1] attempted to address the energy issue by analyzing how much energy virtualized environments require. Based on the EARI model, the electrical costs associated with virtual machines, such as booting, usage, and migration, were calculated and analyzed quantitatively.
Such application frameworks combine load-balancing solutions with migration facilities and an on/off infrastructure driven by predictive mechanisms. Ref. [S3] proposed a new energy-efficiency framework with a scalable cloud computing architecture and minimal performance overhead. The data center network's performance is enhanced through basic power-aware scheduling methods, variable resource management, live migration, and the virtual-machine architecture.
Ref. [S4] introduced a display power-management technique that reduces the screen's energy consumption, since battery life remains a critical issue. The system detects whether the user is present in front of the PC by capturing images via webcam and processing them. If the user is not looking at the desktop or laptop display, the display power controller senses the user's intent and promotes low-power activity by up to 50%, which can reduce energy consumption by up to 13%.
Ref. [S20] suggests a DVFS strategy for cluster systems deployed with different types of processors that can operate at different voltage and frequency scales, incorporating a power-scheduling algorithm for time-constrained tasks that decreases energy usage without breaching the SLA. A new algorithm was introduced using the Dynamic Voltage and Frequency Scaling (DVFS) technique together with a DVS server that controls the voltage supply. The additional hardware DVS server helps to minimize energy consumption, reduce CO2 emissions, and fulfill quality-of-service guarantees that scheduled work completes within its time limit [S5].
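The trade-off DVFS exploits can be seen in the standard dynamic-power model P = C·V²·f (a textbook approximation, not the specific model of [S20]; the capacitance, voltage, and frequency values below are illustrative):

```python
# Minimal DVFS sketch using the common dynamic-power model P = C * V^2 * f.
def dynamic_power(c_eff, volts, freq_hz):
    """Dynamic CPU power in watts for effective capacitance c_eff."""
    return c_eff * volts ** 2 * freq_hz

def task_energy(cycles, c_eff, volts, freq_hz):
    """Energy (joules) to run a task of `cycles` clock cycles at (V, f)."""
    runtime_s = cycles / freq_hz
    return dynamic_power(c_eff, volts, freq_hz) * runtime_s

# Halving both voltage and frequency quarters the energy per task,
# at the cost of doubling the runtime -- the core DVFS trade-off.
e_fast = task_energy(1e9, 1e-9, 1.2, 2.0e9)   # full speed
e_slow = task_energy(1e9, 1e-9, 0.6, 1.0e9)   # scaled down
print(e_fast / e_slow)                         # 4.0
```

This is why DVFS schedulers with time-limited tasks lower (V, f) only as far as deadlines allow: slower is cheaper in energy but must not breach the SLA.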
Cloud servers in the data centers are only busy from 10% to 30% of the time; on average, 70% of the time, they remain idle. Virtual machines are consolidated into a minimum number of physical machines, and idle machines are placed into lower power states, such as hibernate, sleep, or off. Green algorithms [S17] achieve cost optimization using three N-Control policies: SI, ISN, and SN. Energy and cost are optimized using those strategies.
Frequently switching physical machines between sleep, off, and idle modes itself costs power and performance. The N-Control policy limits excessive switching, which reduces both cost and energy. A Green Cloud Architecture by [S6] for system efficiency in a data center was designed to implement modern, energy-sustainable design, VM system imaging, and image processing modules to discover innovative ways to save energy.
The Integrated Green Cloud Architecture was introduced by [S7], which provides a green-cloud client-based middleware that improves cloud infrastructure control and deployment. The middleware intelligently makes decisions using specific pre-defined parameters, such as quality of service, SLAs, and equipment specifications. Energy-efficient approaches were also introduced in e-learning to reduce the environmental effects of cloud-based online study.
The e-learning strategy has significantly transformed educational systems, decreased the use of paper and documentation, and reduced energy use and CO2 emissions. A cloud-oriented green computing architecture [S8] likewise reduces costs, saves energy, and encourages companies to install and manage apps with limited resources. Ref. [S9] developed a Green Cloud Broker that considers energy-efficiency indicators and provides operations to resolve resource procurement problems in an environment-friendly way.
This mechanism can also determine the allocation and payment for submitted jobs. Ref. [S10] illustrates the platform firmware's role in DRAM power optimization and control. The authors proposed UEFI-based firmware that allocates memory in an energy-aware manner, accurately estimates DRAM power, optimizes DRAM locality, and enhances DRAM running average power limiting (RAPL). The PAPI performance analysis library [S11] is used to measure energy consumption and workload power, exploit RAPL capabilities, and obtain real-time code analysis. When utilizing PAPI higher-level tools, full power and energy analytics are configured automatically, particularly with the newer PAPI versions.
A quantitative study by [S12] was conducted at the chip level to calculate power and productivity in five hardware generations using different specifications. The results recommend that while designing and evaluating energy-efficient hardware, it is essential to include native and managed workloads and suggest including on-chip power meters, making it easy for the researcher to optimize power and performance. Ref.
[S13] offers a hardware accelerator designed specifically for neural networks in which particular emphasis is placed on the effects of memory productivity and energy consumption. Microsoft uses the server with Field Programmable Gate Arrays (FPGAs) to build a Convolution Neural Network (CNN) accelerator. Ref.
[S14] provides a fully automated process for designing and optimizing DNN hardware accelerators for software, architecture, and circuit stages. These studies are related to overall energy efficiency by a general-purpose processor.
Characterization methods are used to identify energy's effect on various designs with power-restricted IoT or mobile devices. Heterogeneous multicore platform energy management was proposed by [S14] to improve performance and efficiency beyond the DVFS technique. The H-EARTtH algorithm finds and schedules a heterogeneous processor's optimal operating point. Users in clouds demand bundles of virtual machine (VM) instances.
A new strategy was proposed by [S15] to manage available resources properly and increase its profit. To overcome a physical machine resource management problem, the authors proposed a G-PMRM and VCG-PMRM algorithm and even a winner-determination algorithm. G-PMRM was more efficient for deciding allocation faster than the VCG-PMRM algorithm. The performance of G-PMRM improved as the number of users increased. Ref.
[S16] explores a price-based approach for energy-efficient power allocation in multiuser relay-assisted networks. The authors imposed a network price as a penalty when power consumption reached a limit value and analyzed its trade-off effect on energy efficiency (EE) and spectral efficiency (SE). Routers, switches, bridges, and various other network components also require energy in a computing environment, and routing algorithms can be designed to minimize greenhouse gas emissions.
Ref. [S27] proposed a new energy-efficient model for wireless storage area networks. The linear power model and low power blade models were used to decrease the energy consumption in the central processing unit, measure the number of tasks performed by the user, and select them based on the user server's needs. Power-management systems such as voltage scaling, frequency scaling, and dynamic shutdown strategies were optimized for the remaining servers.
Ref. [S21] proposed the near-threshold voltage (NTV) concept, promising energy-efficiency improvements of an order of magnitude over current silicon systems. Intel's near-threshold-voltage IA-32 processor demonstrates the circuit techniques and benefits of NTV design. It addresses critical technologies vital for harnessing NTV's real potential to create a new 'green' computing era. A thin-client approach for mobile devices was proposed by [S77], which showed a 57% improvement in performance compared with traditional self-reliant devices.
Laptops are more energy-efficient than desktops; however, a laptop's average life is shorter than a desktop's because cooling facilities extend a machine's life. Regular maintenance of internal cooling components in laptops and desktops also affects energy consumption. E-waste is another aspect of green computing, as greenhouse gas emissions and pollution during manufacturing cause many environmental issues. Proper recycling of hardware components is essential to reduce the carbon footprint and environmental concerns.
Eco-aware online energy management and load scheduling methods were proposed based on renewable energy sources [S117]. Their vital role is to minimize eco-conscious cloud-data-center power consumption costs while ensuring quality of experience (QoE). The method uses Lyapunov optimization theory to develop an online control algorithm that solves a constrained stochastic optimization problem. Ref. [S22] aimed at energy-aware scheduling of non-real-time and real-time tasks on the cloud.
During peak hours, priority-based recommendations help process requests while keeping performance high. The authors in [S29] addressed physical machine resource management issues in the cloud data center and attempted to provide a solution using an optimal auction-based setting and design strategy. The winner-determination algorithm was used for selecting users, provisioning VMs to PMs, allocating selected users, and determining the amount each user pays the service provider [67]. Software-defined networking (SDN) enables a new approach to energy-aware flow scheduling, i.e., time-dimensional scheduling with exclusive routing (EXR) for every flow; in other words, each flow exclusively uses the links on its routing path.
EXR can conserve network resources efficiently relative to standard fair-sharing routing (FSR) and, by assigning higher routing priorities to smaller flows, dramatically reduces flow completion times. Ref. [S19] proposed an architectural model for energy-efficient cloud computing. The energy-allocation heuristics, which offer data center services to consumer systems, boost the data center's energy performance while satisfying quality of service.
Ref. [S26] designed a model to capture the intrinsic trade-off between electricity costs and WAN connectivity costs and formulated the optimal VM placement problem, which is NP-hard because of its binary and quadratic nature. The authors in [S35] introduced new adaptive heuristics for complex VM consolidation, focused on understanding and analyzing the resources used by VMs. The algorithms aimed to reduce energy consumption while maintaining high SLA compliance; using real-world workload traces from more than a thousand PlanetLab VMs, the authors validated the proposed algorithms' high efficiency. Ref.
[S24] provides a methodology for analyzing how conventional integration of data centers with dispatched load impacts grid dynamics and performance. Integrating dispatchable data centers into the power grid reduces stranded power and improves grid cost and stability, even at a high renewable portfolio standard (RPS). [S25] discussed the complexities of harmonized energy management in data centers and proposed a modern, optimized, unified energy-management framework developed within the European Commission-funded GENiC project.
This system prototype [S51] implemented joint workload- and thermal-control algorithms [3]. Different hardware techniques for energy efficiency are compared in Table 10, where the techniques are evaluated by their main benefits and drawbacks. The taxonomy of all hardware-level techniques is shown in Figures 12 and 13, along with the distribution of research articles.
Mainly, there are two fundamental approaches to reducing energy usage at the software level. The first is minimizing the resources used by computers (reducing the number of active servers). The other is decreasing the energy consumed by memory (cutting down the number of active memory ports). This type of technique usually falls under scheduling optimization, a standard green cloud approach that is more economical than hardware optimization.
The best way to minimize power consumption [11] at the software level is to map requests between virtual machines and physical servers correctly. Currently, virtualization technology at the data center focuses on energy management; therefore, we consider all virtualization techniques to be software techniques. There are two primary issues in the cloud framework: where to place VMs initially and where to relocate them if necessary.
Several proposed techniques are discussed below to solve these issues and reduce energy. The impact of virtualization on the Amazon EC2 data center's network performance was assessed by [66]. Virtualization is a concept designed to run several logical (virtual) computers on a single physical computer (hardware device), enabling VM management techniques that achieve high efficiency at low cost through abstraction [42][43][44][45].

Service availability and resource efficiency are enhanced by dynamic migration and by aggregating collections of physical servers into one server. One of the significant energy-efficiency challenges in virtual cloud environments is deciding where new VM requests should be placed on physical servers. Virtualization [37] allows multiple virtual machines, each with a dedicated operating system and hosted applications, to execute tasks simultaneously on the same physical server.
A hypervisor is the system software that acts as an abstraction layer for virtual machines and coordinates with the underlying hardware components according to each virtual machine's instructions [124][125][126][127]. Virtualization is not a new concept in the IT sector, as it was already implemented on mainframes, which belong to the second generation of computing devices. Generally, cloud systems are designed with high-end components, such as RAM, processors, disks, routers, and switches.
Traditional (sequential) processing methods allocate the entire resource set to a task before it begins. A task's allocated resources cannot be shared with any other running task. In this way, resources are underutilized and blocked by individual tasks, and execution takes longer to complete. Hypervisor-based VMs are designed to run multiple jobs in parallel on the same machine with resource-sharing facilities, overcoming these sequential-processing limitations.
Ref. [110] studied and analyzed cloud virtualization for reducing physical hardware using different virtualization techniques. This study found that hypervisor selection has a significant impact on power consumption. The authors also attempted to use an optimal number of VMs, since beyond a specific limit they consume more power. The hypervisor is the central virtualization component on physical servers, which otherwise often use only 10% of their capacity [56].
Thus, virtualization offers the advantage known as server consolidation: the workloads of underutilized physical machines are migrated into virtual machines that are then combined onto a single unit, reducing consumption and the amount of hardware required [138]. Achieving high performance from resources, reducing frequent infrastructure investments, and efficient resource utilization are the main advantages of virtualization. Dynamic workload balancing with VMs, resource sharing across VMs, the design of secure VMs, and energy optimization techniques for virtualization are trending green cloud activities.
Ref. [S18] introduced the EnaCloud method, contributing a live and dynamic application placement policy. EnaCloud uses virtual servers to install applications and deployment tools, reducing the number of operating machines and power consumption. Ref. [S114] developed an online energy-aware provisioning strategy for consolidated and virtualized computing systems running HPC applications. All similar tasks are grouped and sent to the same host system for execution, while unused subsystems are switched off.
Energy efficiency can be achieved through workload-aware, just-right dynamic provisioning and the ability to power down host subsystems where VM mapping is not required. Consolidation greatly reduces the number of servers that remain idle and waste energy. Ref. [S28] developed a technology that consolidates VMs dynamically depending on adjustable utilization thresholds to meet SLAs [129]. They recommended methods for adjusting the thresholds dynamically by evaluating empirical data gathered over the lifetime of the VMs.
Optimization techniques must choose the correct candidates for virtual machine migration. The selected VMs and the new VMs requested by users are placed onto physical nodes. Virtualization has proven to be a solution for energy optimization; however, further improvements in algorithms for VM consolidation and deconsolidation are still required. While considering SLA limitations, ref. [S58] proposed energy-efficient technology that provides resources and manages energy-wastage problems through scheduling algorithms.
According to SLAs, resource managers can consolidate virtual machines onto physical machines to meet client SLA requests. In [35], the authors proposed a nature-inspired ant colony system to consolidate VMs; however, the probability of a peak situation arising increases as consolidation proceeds. Similar energy- and performance-aware policies for deconsolidation are required. Ref. [S30] also showed that optimization is equally significant when working on energy efficiency [130][131][132]. They proposed a method for scheduling VM workflows in hybrid and private clouds that uses pre-power techniques and least-load-first algorithms to make scheduling energy efficient.
With the hybrid pre-power planning strategy, the time needed to respond to incoming requests decreases, and workload-balancing problems are solved with the least-loaded-first algorithm. An Energy-Efficient Scheduling System (EESS) [S31] applies migration, cloning, and regeneration concepts through a first-come-first-out policy with minimum-load distribution. With hybrid energy-efficient scheduling algorithms, incoming VM requests are submitted to the appropriate VMs. However, if requests rise above a certain level, the scheme distributes the workload using migration.
Ref. [S55] proposed a technique to solve the problem of Static Virtual Machine Allocation (SVMAP), introducing a genetic power-aware algorithm (GAPA). GAPA was compared against a baseline scheduling algorithm (best-fit decreasing, BFD) on the same SVMAP problem; BFD sorts the virtual-machine list at the beginning and then applies best-fit placement. Consequently, the total energy consumption of the GAPA algorithm was lower than that of the simulated baseline algorithm. One such technique was developed by [S28] based on reallocation heuristics.
They used the live migration technique for VM migration, utilizing heuristics related to reallocation. These heuristics follow the current requirements while upholding the desired quality of service. Reallocation minimizes the workload of nodes loaded beyond their limits and excludes unused nodes to save power. Ref. [S36] proposed an algorithm, MLS-ONC, using a multi-start local search implemented in OpenNebula, through which the cloud infrastructure was fragmented and distributed geographically to reduce energy consumption and carbon dioxide emissions.
Ref. [S32] proposed a Cloud Global Optimization algorithm for allocating resources at runtime in an energy-efficient way by exploiting the VM live migration technique's ability. The basic idea is to use a unified approach based on consolidation and rearrangement in which it first moves the VM from a server with less load to heavily loaded servers. Next, it changes the VM from the old to a modern server.
A control technique for scheduling VMs and IaaS workloads in a data center was developed by [S37]. This method offers an OptSched scheduler that uses the optimization time specified in requests to the VMM. The algorithm aims to reduce maintenance workload requirements and the number of servers, and to improve the cumulative machine uptime (CMU).
An energy-aware application for resource scaling and management in IaaS architectures was proposed by [S38]. The main aim is to ensure that only a minimum number of servers operates, without exceeding their capacity. It also transfers software from lightly loaded servers to other servers so that the emptied devices can be shut off, avoiding unnecessary energy usage and decreasing operating cost. By reducing the number of active servers, one can reduce energy consumption; this is usually implemented with an optimized scheduling method [11].
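Minimizing active servers is essentially a bin-packing problem, and the best-fit decreasing heuristic mentioned in this section is its classic greedy solution. A minimal single-dimensional sketch (hypothetical names and demands; real placement is multi-dimensional):

```python
# Best-fit-decreasing (BFD) placement sketch: sort VM demands descending,
# place each on the active host that leaves the least spare capacity, and
# open a new host only when nothing fits -- fewer active hosts, less energy.
def bfd_place(vm_demands, host_capacity):
    hosts = []       # remaining capacity of each active host
    placement = {}   # vm name -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        # pick the feasible host with the smallest leftover capacity
        best = min((i for i, free in enumerate(hosts) if free >= demand),
                   key=lambda i: hosts[i], default=None)
        if best is None:                 # nothing fits: power on a new host
            hosts.append(host_capacity)
            best = len(hosts) - 1
        hosts[best] -= demand
        placement[vm] = best
    return placement, len(hosts)

placement, active = bfd_place({"vm1": 60, "vm2": 30, "vm3": 30, "vm4": 50}, 100)
print(active)  # 2 -- two active hosts instead of one per VM
```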
A new, heuristic, energy-efficient approach to deploying VMs in a cloud data center based on a statistical analysis of historical data was proposed by [S39]. The application uses multiple correlation coefficients (MCC) to calculate the intensity of the correlation between two variables and pick a server to provide an acceptable trade-off between power efficiency and SLA violations.
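A hypothetical sketch of this correlation heuristic: VMs whose CPU usage is strongly correlated tend to peak together, so a host whose VMs already correlate with an incoming VM carries a higher SLA-violation risk. With a single co-located VM the multiple correlation coefficient reduces to the absolute Pearson correlation (names and usage traces below are illustrative, not from [S39]):

```python
import statistics

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length usage traces."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

new_vm = [0.2, 0.5, 0.8, 0.4, 0.9]          # CPU usage of the incoming VM
host_a = [0.25, 0.55, 0.75, 0.45, 0.85]     # moves together with the new VM
host_b = [0.5, 0.2, 0.5, 0.9, 0.3]          # largely unrelated workload

# Prefer the host whose workload is less correlated with the incoming VM.
print(abs(correlation(new_vm, host_a)) > abs(correlation(new_vm, host_b)))  # True
```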
If the CPU-usage correlation among the VMs running on a server is high, SLA violations in the data center are more likely to happen. A solution based on real ants' behavior was suggested by [S40]. It uses the Ant Colony Optimization approach to deploy VMs so as to reduce the number of active servers, with deployment conducted by dynamically estimating the current server load.
In this algorithm, each ant (server) receives some of the virtual machine requests and begins scheduling them. After all the ants have formulated their solutions, the best solution under the target function is chosen. Energy-efficient allocation strategies and scheduling algorithms were developed in the context of quality-of-service requirements and power-consumption characteristics [S34]. A machine is chosen at random, based on a discrete random variable, to receive migrations from an overloaded server. MADLVF algorithms [S41] were designed to resolve inconsistent resource use, high energy consumption, and increased CO2 emissions [135].
The Interior Search Algorithm (ISA) can be used to decrease the data center's energy consumption. Inefficient use of resources [S42] can be eliminated by the energy-intensive virtual machine allocation method. The results demonstrate that the energy use of the genetic algorithm (GA) and best-fit decreasing (BFD) was 90-95%, compared with approximately 65% for the estimated EE-IS. The new EE-IS strategy also increased the average electricity saving by around 30%. Ref. [S43] suggested novel QoS-aware virtual machine allocation algorithms.
The type of allocation depends on resource history to boost service efficiency and reduce electricity usage. Ref. [S44] introduced a power-law-featured management framework named VMPL for energy efficiency. VMPL predicts a video's resource utilization based on its popularity, ensures enough resources for upcoming videos, and turns off idle servers to save power. The results show that energy consumption and resource utilization decreased relative to the Nash and best-fit algorithms using the proposed framework [136]. An energy-utilization model was suggested by [S45] based on a mathematical approach: it sets a workload threshold for each server, and if a server exceeds its capacity, a VM is moved from the overloaded server to a new one. Ref. [S46] planned to build two VM scheduling algorithms to minimize power consumption and migration costs.
Ref. [S47] introduced three VM policies for placement and migration. If there is a rise or shortage in the number of VMs, a migration strategy based on the policies named FDT, DRT, and DDT is applied. The suggested approach indicates that minimal migrations and few SLA violations reduce CO2 emissions.
Ref. [S48] addressed the VM placement problem and proposed a heuristic greedy VM placement and live-migration algorithm that improves resource use and lowers energy consumption. The heuristic algorithm estimates the workload and maps storage-sensitive and CPU-intensive workloads to the same physical server. Compared with single-objective approaches focused on CPU utilization only, significant gains in energy conservation and workload balancing were made using multi-objective approaches.
Ref. [S49] proposed a technique for VM deployment that supports VM application planning and live migration to reduce the number of active nodes in virtual databases. The authors also developed a statistical, mathematical framework for VM implementation that incorporates the entire virtualization expense into the complex migration process and seeks to decrease active host involvement, reducing power consumption.
Ref. [S50] proposed a general algorithm based on a logistic regression model and the median absolute deviation for host-overload detection. In combination with VM-consolidation techniques, VM migration [137,138] helps to prevent overloading and reduce the number of active servers, ultimately saving energy. Ref. [S51] suggested a VM placement algorithm in which processes are first mapped to VMs according to user demand, and VMs are then allocated to physical machines. The algorithm reduces the number of active VM-supporting servers, minimizing energy consumption and reducing the rejection rate.
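The median-absolute-deviation (MAD) overload detector mentioned above can be sketched in a few lines: the CPU-utilization threshold adapts to workload variability, so volatile hosts trigger migration earlier than steady ones (the safety parameter `s` and the traces below are assumed for illustration):

```python
import statistics

def mad(history):
    """Median absolute deviation of a CPU-utilization history (0..1)."""
    med = statistics.median(history)
    return statistics.median(abs(u - med) for u in history)

def is_overloaded(cpu_history, current_util, s=2.5):
    # A volatile host (large MAD) gets a lower threshold, triggering
    # migration earlier; a steady host can safely run closer to 100%.
    threshold = 1.0 - s * mad(cpu_history)
    return current_util >= threshold

steady = [0.50, 0.52, 0.49, 0.51, 0.50]
volatile = [0.20, 0.80, 0.35, 0.90, 0.50]
print(is_overloaded(steady, 0.85))    # False: threshold stays high (0.975)
print(is_overloaded(volatile, 0.85))  # True: threshold drops to 0.25
```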
Ref. [S65] suggested an energy-aware scheduling algorithm using the workload-aware consolidation technique (ESWCT). The algorithm attempts to consolidate VMs onto a minimal number of servers based on an optimized balance of resources (processor, memory, and network bandwidth) concurrently shared between a cloud data center and its users. It aims to minimize energy usage by optimizing resource utilization, since heterogeneous loads differ in their resource usage. Ref. [S52] proposed an online scheduling algorithm oriented to IaaS cloud platforms for reducing energy consumption. The algorithm aims to ensure improved quality of operation for heterogeneous computers and various workload scenarios.
An energy-aware virtual machine scheduling algorithm called the dynamic round-robin algorithm was proposed in [S53]. This algorithm turns off idle physical servers temporarily to minimize energy usage in the cloud data center, saving 43.7 percent of energy compared with other scheduling algorithms. A VM-aware energy-efficient scheduling algorithm named EMinTRE-LFT was proposed by [S55], based on the idea that reducing energy consumption is directly equivalent to reducing the total completion time of all physical servers.
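A hypothetical sketch of the dynamic round-robin idea: dispatch in round-robin order, but mark lightly used hosts as "retiring" so they accept no new VMs and are powered off once their last VM finishes (class and method names are illustrative, not from [S53]):

```python
class DynamicRoundRobin:
    def __init__(self, names):
        self.hosts = {n: set() for n in names}   # host -> running VMs
        self.retiring = set()                    # hosts draining their VMs
        self.powered_off = set()
        self._order = list(names)
        self._next = 0

    def place(self, vm):
        """Round-robin placement over hosts that are on and not retiring."""
        for _ in range(len(self._order)):
            name = self._order[self._next]
            self._next = (self._next + 1) % len(self._order)
            if name not in self.retiring and name not in self.powered_off:
                self.hosts[name].add(vm)
                return name
        raise RuntimeError("no host accepts new VMs")

    def finish(self, vm, name):
        self.hosts[name].discard(vm)
        # A retiring host is switched off once its last VM completes.
        if name in self.retiring and not self.hosts[name]:
            self.powered_off.add(name)

sched = DynamicRoundRobin(["h1", "h2"])
sched.place("vm-a")            # lands on h1
sched.place("vm-b")            # lands on h2
sched.retiring.add("h2")       # h2 stops accepting new VMs
sched.place("vm-c")            # round-robin skips h2, lands on h1
sched.finish("vm-b", "h2")     # h2 drains and powers off
print(sched.powered_off)       # {'h2'}
```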
Cloud scheduling algorithms face multiple problems due to the complex and volatile nature of cloud user requests. Ref. [S99] conducted a statistical study to establish the relation between a machine's energy consumption and its efficiency (system performance). They suggested an algorithm that does not require any prior information about user requests.
Ref. [S56] suggested an algorithm for real-time distributed systems to achieve an efficient solution by using a polynomial-time algorithm integrating a variety of heuristic rules. Different software-level techniques for energy efficiency are compared in Table 11. In these various techniques, the comparison is made according to their benefits and disadvantages. In Figures 14 and 15, taxonomies of all software-level techniques are described. In Figure 16, the distribution of research papers among different types of software techniques can be seen.

RQ7: Explain the Various Techniques for Virtual Machine Consolidation Applied at Data Centers
In green cloud computing, consolidation means "the process of deploying different data-center applications related to data processing on a single server with virtualization technology". Consolidation is a critical feature derived from virtualization. It is implemented at the process level for load balancing, better utilization of virtual systems, and reduced power consumption [56].
In the management and organization of resource-pool access, virtualization plays a vital role via the virtual machine monitor, or hypervisor. It hides physical resource details and offers virtualized tools for applications across different levels. The fundamental feature of a virtual machine is that the software it runs is restricted to the virtual machine's resources and abstractions only. Virtualization also facilitates the VM consolidation methodology, which groups many virtual machines onto a single physical server, as shown in Figure 17.
VM consolidation can bring substantial benefits to cloud computing by allowing greater use of the available data center resources [72]. Consolidation can be implemented statically or dynamically. In static consolidation (over-provisioning), the VMM assigns physical resources based on the VM's highest demand, which leads to wasted resources because the workload is not always at an extreme level.

On the other hand, in dynamic consolidation (resizing), the VMM adjusts VM capacities based on current workload demands, allowing better use of data center resources [140]. With a dynamic VM consolidation scheme, VMs can be relocated via live migration to minimize the number of active physical servers (hosts) while considering current PM resource demand. Idle hosts are switched to low-power modes with short transition times to reduce energy waste and minimize total power consumption [96].
Hosts are re-activated when needed to ensure that service specifications are not violated and demand is met. Dynamic consolidation, threshold-based consolidation, and optimization of the consolidation process are presently trending topics in green cloud VM consolidation. The key role of this approach is to reduce energy usage and improve quality of service. The specific issues and benefits of the different VM consolidation methodologies are discussed in this section.
Ref. [S76] proposed a highly energy-efficient dynamic virtual machine consolidation (EQVC) method whose algorithms drive the different stages of VM consolidation. It selects redundant VMs from hosts and migrates them to other hosts before overloading occurs, saving energy while assuring quality. Ref. [S35] thoroughly discussed the need for consolidation, the procedure of dynamic consolidation of virtual machines, and its advantages in detail. They explained how to consolidate multiple virtual machines onto a single physical server (one-to-many) and multiple virtual machines across multiple physical servers (many-to-many) [142].
The authors proposed online deterministic and non-deterministic algorithms to explain the process of VM migration in the cloud. Another paper [S57] proposed a threshold-based approach for the IaaS platform to perform VM consolidation, balancing load efficiently and avoiding resource underutilization. Beyond fixed threshold values, they also introduced determining the threshold dynamically, based on present VM needs and historical usage statistics.
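A dynamically derived threshold in this spirit can be sketched by deriving upper and lower utilization bounds from recent usage statistics; the mean-plus-spread rule and `headroom` factor below are assumptions for illustration, not the exact rule of [S57]:

```python
import statistics

def dynamic_thresholds(usage_history, headroom=1.5):
    """Derive (lower, upper) CPU-utilization bounds from recent history."""
    mean = statistics.fmean(usage_history)
    spread = statistics.pstdev(usage_history)
    upper = min(1.0, mean + headroom * spread)  # above: overloaded, migrate out
    lower = max(0.0, mean - headroom * spread)  # below: underused, consolidate away
    return lower, upper

lo, hi = dynamic_thresholds([0.55, 0.60, 0.58, 0.62, 0.65])
print(lo < 0.6 < hi)  # True: current usage sits inside the adaptive band
```

A steady host thus gets a narrow band (little unnecessary migration), while a bursty host gets a wide one, matching the intuition that thresholds should track present VM needs rather than a fixed cutoff.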
Ref. [77] highlighted practical problems arising during the virtual machine consolidation process. The consolidation process is resource-intensive and requires intelligent support to keep server downtime to a minimum. To overcome these limitations, they proposed a DVFS (Dynamic Voltage Frequency Scaling) virtual machine consolidation technique that saves energy by running servers at different voltage/frequency levels. Refs. [S59,S69] proposed an approach called Data center Energy-efficient Network-aware Scheduling (DENS). The authors designed DENS to balance power consumption in data centers while taking network insights and performance into account; it works by distributing tasks among servers according to their workload, the heat emitted, and their communication potential. Ref. [S23] suggested a VM consolidation approach whose primary purpose was to reduce the energy usage of a data center providing various SLAs. They proposed an adaptive energy management policy that includes VM preemption to regulate energy consumption based on user performance requirements.
Ref. [S60] proposed two methods: the first uses the DVFS technique for energy efficiency, and the second performs VM consolidation. The first method identifies performance weaknesses tied to energy consumption and provides workload management with DVFS, which saves up to 39.14% of energy under dynamic load. The second method, VM consolidation, determines the operating frequency dynamically by distributing the load to achieve quality of service. Various scheduling algorithms for resource utilization have been proposed.
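The energy leverage DVFS offers can be shown with a toy model; the cubic power/frequency relation and all constants below are assumptions for illustration, not the [S60] method:

```python
# Minimal DVFS intuition sketch (assumed cubic power model): dynamic
# power scales roughly with V^2 * f, and since voltage is lowered
# together with frequency, power falls roughly with f^3 while the
# execution time only grows with 1/f.
def energy(work_cycles, freq, p_max=100.0, f_max=2.0):
    """Energy (J) to finish `work_cycles` GHz-seconds of work at `freq` GHz."""
    power = p_max * (freq / f_max) ** 3   # cubic frequency/power model
    time = work_cycles / freq             # slower clock -> longer runtime
    return power * time

full = energy(1.0, 2.0)   # run flat-out: 50.0 J
half = energy(1.0, 1.0)   # half frequency: 12.5 J for the same work
print(full, half)
```

The same work costs a quarter of the energy at half frequency in this model, which is why DVFS-based schemes can report large savings when deadlines allow slowdown.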
However, they do not solve the host machines' heterogeneity problems, which leads to greater energy consumption; thus, ref. [S61] introduced a Job Consolidation Algorithm (JCA) to overcome these problems, which utilizes cloud resources effectively, accounts for server malfunctions, and implements DVFS techniques. Ref. [S62] suggested a self-adaptive solution called SAVE, which relies solely on local knowledge and determines the assignment and transfer of VMs through probabilistic processes [143].
A new approach [S63], named VMDC, was proposed to manage network and communication topology in data centers. Using a multi-objective genetic algorithm, it simultaneously reduces energy consumption and communication traffic, which cause performance problems, while fulfilling the SLA migration time constraints. In [S64], a Bayesian-network-based (BNEM) model allows a comprehensive solution involving nine factors in an actual data center.
Criteria for migrating virtual machines and VM placement are presented by combining three algorithms corresponding to VM consolidation methods; the authors propose a hybrid Bayesian-network-based VM consolidation (BN-VMC) method. Ref. [S65] designed and implemented C4, Aliyun's cloud-scale, cost-effective computing consolidation service. They analyzed user patterns, resource utilization, and migration costs in Aliyun, showing that the assumptions of conventional usage-based consolidation approaches, especially for local-storage computing, cannot be met in the production environment. A heuristic worst-fit method is used to migrate instances for load balancing.
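The worst-fit heuristic mentioned above can be sketched roughly as follows; host and instance names are hypothetical, and this is not Aliyun's implementation:

```python
# Hedged sketch of the worst-fit placement heuristic: each migrating
# instance goes to the host with the MOST remaining capacity, which
# spreads load across hosts for balancing.
def worst_fit(instances, free_capacity):
    """instances: list of (name, demand); free_capacity: host -> free share."""
    placement = {}
    for inst, demand in instances:
        host = max(free_capacity, key=free_capacity.get)  # emptiest host
        if free_capacity[host] < demand:
            raise RuntimeError(f"no host can fit {inst}")
        placement[inst] = host
        free_capacity[host] -= demand
    return placement

caps = {"pm1": 0.8, "pm2": 0.5, "pm3": 0.3}
print(worst_fit([("vm-a", 0.4), ("vm-b", 0.4), ("vm-c", 0.2)], caps))
```

Note the contrast with best-fit consolidation: worst-fit keeps hosts evenly loaded (good for balancing), whereas best-fit packs hosts tightly (good for powering servers down).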
The consolidation technique in [S66] is built on an analysis of historical VM usage data. In choosing a VM for migration, the VM with the minimal migration period is selected over the other VMs. The algorithm achieved a significant energy reduction while ensuring a high quality of operation, improving energy consumption standards by making the best use of resources [145] [S66].
Overutilization causes a node to consume more energy, as it carries more load, often degrading the node's efficiency [17]. Underutilization prevents a node from functioning optimally: underutilized resources sit inactive for long periods, doing little useful work while still consuming energy. Accordingly, different energy-efficient techniques have been designed to prevent both the underuse and overuse of resources, which lead to increased energy consumption. Broad consolidation-based strategies are evaluated by various criteria, such as their pros and cons and implementation environment, in Table 12, and a taxonomy of consolidation techniques is shown in Figure 18. In Figure 19, the distribution of research papers among the different types of consolidation-based techniques can be seen.

As Figure 19 shows, server-level techniques account for 64% of these studies, task-level techniques for 29%, and virtual-level techniques for 7%.

In computer science, optimization algorithms can be developed to solve complex problems using meta-heuristics that simulate or are inspired by biological behaviors, such as ant colonies, bird flocks, bee swarms, and fish schools. These techniques have drawn the attention of computer scientists for solving a variety of scientific and engineering problems, starting from an investigation of the natural correspondence between biology and computer science.

We present an overview of the biologically inspired literature on energy-efficiency techniques developed from animal behaviors such as foraging and mate finding. Researchers can learn from animal behavior to construct optimization algorithms for tackling complicated problems. The Artificial Bee Colony (ABC) algorithm, for example, mimics the foraging of bee colonies; the Particle Swarm Optimization (PSO) algorithm simulates the collective behavior of bird flocks and fish schools; and further examples include the Social Spider Optimization (SSO) algorithm and the Lion Optimization Algorithm (LOA), which imitates the behavior and cooperation characteristics of lions. We now discuss some of the techniques developed on a bio-inspired basis. Bio-inspired virtual machine placement in the IaaS environment was first proposed by [S81], in which the initial VM placement problem was formulated as a multi-objective optimization problem and a multi-objective technique based on Ant Colony Optimization was applied. Ref. [S82] suggests a solution to the virtual machine placement problem. The intention is to obtain a compelling set of non-dominated solutions that simultaneously reduce total resource and electricity consumption, using an ant colony system algorithm in a multi-objective placement technique.
The proposed technique is compared with a current multi-objective genetic algorithm and two single-objective algorithms, the popular bin-packing and MMAS algorithms. Ref. [S83] offers the AVVMC VM consolidation scheme, based on balanced usage of servers' various computing resources (CPU, memory, and network I/O); the authors propose modifying and incorporating the meta-heuristic Ant Colony Optimization (ACO) with vector-algebra-based balancing of computing resource use.
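As a rough illustration of how ACO-style VM placement works, the compressed loop below builds assignments with pheromone-guided ants and reinforces trails of the best solution found; all parameters are arbitrary, and this is a sketch of the general technique, not the AVVMC algorithm:

```python
# Illustrative ACO loop for VM placement: ants assign VMs to PMs guided
# by pheromone, favoring already-used PMs so fewer servers stay active;
# the best-so-far assignment reinforces the pheromone trails.
import random

def aco_place(vms, n_pms, cap=1.0, ants=10, iters=30, rho=0.1, seed=1):
    random.seed(seed)
    tau = [[1.0] * n_pms for _ in vms]          # pheromone[vm][pm]
    best, best_active = None, n_pms + 1
    for _ in range(iters):
        for _ in range(ants):
            load = [0.0] * n_pms
            assign, ok = [], True
            for v, demand in enumerate(vms):
                feas = [p for p in range(n_pms) if load[p] + demand <= cap]
                if not feas:
                    ok = False
                    break
                w = [tau[v][p] * (1 + load[p]) for p in feas]  # prefer used PMs
                p = random.choices(feas, weights=w)[0]
                load[p] += demand
                assign.append(p)
            if not ok:
                continue
            active = sum(1 for l in load if l > 0)
            if active < best_active:
                best, best_active = assign, active
        # evaporate, then reinforce the best-so-far solution
        tau = [[t * (1 - rho) for t in row] for row in tau]
        if best:
            for v, p in enumerate(best):
                tau[v][p] += 1.0 / best_active
    return best, best_active

sol, active = aco_place([0.5, 0.4, 0.3, 0.2, 0.1], n_pms=4)
print(active)  # number of active servers in the best assignment found
```

Real ACO placement schemes add heuristic desirability terms and multi-resource balance (as in AVVMC); the skeleton of construct/evaporate/reinforce is the same.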
The paper [S84] introduced a simultaneous VM placement and data positioning solution. The key goal is to minimize cross-network traffic and bandwidth usage by placing the required VMs and data physically closer to the Physical Machines (PMs). The authors introduce and evaluate an Ant Colony Optimization (ACO)-based metaheuristic algorithm, which selects an adjacent PM on which to place data and VMs. An ACO-based VMP solution based on global search information, which combines artificial ants with local exchange and migration policies, was proposed by [S85].
This algorithm is configured globally by assigning VMs across the total set of active PMs [144]. The solution has been applied to many VMP problems in homogeneous and heterogeneous cloud environments with differing VM sizes as an effective means of minimizing the number of active servers. Ref. [S86] proposed a VMP algorithm based on the recruitment process within ant colonies that enhances the exploitation of PM resources and maintains maximum resource balance. With this new approach, VMP is optimized so that energy consumption and resource waste are reduced and balanced in the cloud DC.
Ref. [S88] established an optimal way of assigning virtual machines to cloud DC physical machines. The model uses a two-stage VMP algorithm. First, an architecture is introduced that serves and schedules a large community of virtual machines. Second, to avoid and reduce the waste of resources and energy consumption in the DC, a multi-objective VMP algorithm known as the VMP Crow Search Algorithm is proposed. A new resource-dependent strategy leads to the optimal process. Ref. [S87], based on a meta-heuristic greedy crow search and the traveling salesman problem, places large numbers of VMs on minimal DCs while meeting the SLA and desired quality of service for optimal DC utilization. Ref. [S89] proposed a demand-prediction technique with an evolutionary optimal virtual machine placement (EOVMP) algorithm. Using this, the standard computation requirement is obtained and used to assign virtual machines under reservation and on-demand job-request processing plans.
Ref. [S91] took an idea from Google's massive data-processing framework to construct multi-objective job scheduling using a GA. This technique has enormous capability to operate on Google networks and uses many ways to train and interpret networking features. The authors developed a practical individual encoding and decoding method and used the servers' overall energy-efficiency function as the individual fitness value.
Ref. [S92] suggested a low-carbon virtual private cloud (LCVPC) technique that lowers the carbon emission rate of remote data centers interconnected by an extensive network, whether private or public, operating in distributed areas. Smart live VM migration is used on the WAN: the VPC managers first calculate both the carbon footprint and the energy used by the entire network, and a modified GA is then applied.
Ref. [S93] proposes an energy-efficiency mechanism within an ant colony framework that assigns cloud resources without violating service-level agreements (SLAs). A new algorithm was proposed by [S94]: a workload-consolidation-driven algorithm developed on the ACO metaheuristic that makes dynamic decisions about workload distribution. It also addresses the workload consolidation problem for single-resource applications and the difficulty of assigning workloads as a multidimensional bin-packing problem.
The bio-inspired technique developed by [S95] combines the strengths of both ACO and Cuckoo Search. In this hybrid algorithm, a voltage-scaling-factor approach reduces energy consumption and schedules jobs efficiently. The PSOTBM mechanism was designed in [S96] to switch off idle or unused servers to reduce energy consumption. PSOTBM performs the initial assignment of computing, storage, and communication resources and applies a mathematical model to find the best solution through PSO-optimized algorithms.
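The PSO mechanism underlying such approaches can be sketched on a toy one-dimensional cost function; this is illustrative of the technique only, not the PSOTBM model, and the cost function and parameters are our own:

```python
# Minimal PSO sketch: particles search for the setpoint x minimizing a
# toy energy-cost function; velocities blend inertia, each particle's
# personal best, and the swarm's global best.
import random

def pso(f, lo, hi, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    random.seed(seed)
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pb = xs[:]                       # personal bests
    gb = min(pb, key=f)              # global best
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pb[i] - xs[i])
                     + c2 * random.random() * (gb - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pb[i]):
                pb[i] = xs[i]
        gb = min(pb, key=f)
    return gb

cost = lambda x: (x - 0.7) ** 2 + 0.1   # toy energy cost, minimum at x = 0.7
print(round(pso(cost, 0.0, 1.0), 2))    # converges near 0.7
```

In placement or scheduling applications, the continuous position is mapped to a discrete assignment and the cost function becomes modeled energy consumption.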
Based on the Simplified Swarm Optimization (SSO) method, a PSO-based technique that reduces energy consumption in a cloud-based system was proposed by [S97]. Energy use is optimized without performance degradation using the DVS technique, storing tasks entering the network and detecting the critical paths needed for DVS's successful implementation.
In evaluation, a graph generator produced large numbers of Directed Acyclic Graphs (DAGs) for testing; SSO was compared with PSO, with both achieving a power reduction of up to 20 percent and PSO proving somewhat more efficient than SSO. Ref. [S27] proposed a nature-inspired ant colony system to consolidate VMs; however, the probability of a peak situation arising increases as consolidation proceeds, so similar energy- and performance-aware policies for deconsolidation are required. Various bio-inspired methods are compared by their metrics and advantages in Table 13. The taxonomy of bio-inspired techniques is shown in Figure 20, and the distribution of research papers among the different types of bio-inspired techniques can be seen in Figure 21.

Data centers were previously designed to focus only on the stability of IT equipment rather than on saving energy. Data center temperature and humidity levels were controlled to provide ideal thermal conditions for the proper working of IT equipment, explicitly maintained at 21.5 °C. For years, the IT infrastructure had to comply with the standards for high-density power modules, which include the required equipment cooling and operating-condition criteria.
Maintaining steady temperature and humidity levels requires a considerable amount of energy. ASHRAE TC 9.9 (of the American Society of Heating, Refrigerating and Air-Conditioning Engineers) specifies an IT server temperature range of 18-27 °C, with allowable temperatures of 15-32 °C (class 1) and 10-35 °C (class 2) and an acceptable relative humidity range of 20-80% [74]. The server's internal parts are kept steady by increasing fan speed to minimize the effects of server inlet temperature.
Consequently, the inlet air temperature does not explicitly influence the stability of the server components. Many experimental studies have been conducted on data center cooling systems under fault conditions, including thermal energy efficiency [20] and optimization of system delivery. Numerous thermal-aware scheduling strategies have been developed for homogeneous High-Performance Computing (HPC) data centers. An efficient scheduling strategy minimizes the temperature of the entire data center. Energy savings are not limited to reducing server energy consumption; they also include reducing the cooling equipment installed in data centers.
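A trivial monitoring helper based on the ASHRAE ranges quoted above might look like this (the classification strings and function name are our own):

```python
# Classifier for the quoted ASHRAE ranges: recommended 18-27 C;
# allowable 15-32 C (class 1) and 10-35 C (class 2); 20-80% relative
# humidity -- a sketch for a data-center monitoring check.
def ashrae_status(temp_c, rh_pct):
    if not 20 <= rh_pct <= 80:
        return "humidity out of range"
    if 18 <= temp_c <= 27:
        return "recommended"
    if 15 <= temp_c <= 32:
        return "allowable (class 1)"
    if 10 <= temp_c <= 35:
        return "allowable (class 2)"
    return "out of range"

print(ashrae_status(21.5, 50))   # "recommended"
print(ashrae_status(33.0, 50))   # "allowable (class 2)"
```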
To model a data center cooling system in the case of a power failure and examine how air temperature rises in IT environments, ref. [S72] developed a transient thermal modeling process. To achieve the necessary temperature control, the method's requirements depend on each cooling system's characteristics. An energy-efficiency technique with low first cost based on ventilation cooling technology (VCT) was proposed by [S100].
A new VCT control system, designed to ensure indoor temperature and humidity control alongside energy savings, can be used under specific meteorological conditions at various telecommunication base stations (TBS). When the outdoor air temperature and humidity satisfy the cooling criteria, fans replace the air conditioners for environmental control.
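The free-cooling decision can be sketched as a simple controller; the threshold values below are illustrative assumptions, not those of [S100]:

```python
# Hedged sketch of the VCT control idea: when outdoor air is cool and
# dry enough, fans (free cooling) replace the air conditioner.
def select_cooling(outdoor_temp_c, outdoor_rh_pct,
                   temp_limit=18.0, rh_limit=70.0):
    if outdoor_temp_c <= temp_limit and outdoor_rh_pct <= rh_limit:
        return "fans"            # ventilation cooling, low energy
    return "air-conditioner"     # fall back to mechanical cooling

print(select_cooling(12.0, 55))  # "fans"
print(select_cooling(30.0, 55))  # "air-conditioner"
```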

Ref. [S101] introduced a new technology for eliminating the discharge of excess heat from a telecommunications plant. The proposed technique cools the station using a heat exchanger when the outdoor temperature is low enough. Instead of running air conditioners for around 5014 h a year, it uses a plate heat exchanger, resulting in an energy-saving rate of up to 29%. The aim was to provide a renewable and sustainable cooling source for data centers and to analyze and demonstrate how much heat can be recovered from a data center server blade.
Ref. [S102] studied 123 servers encapsulated in mineral oil, served by a 10-ton chiller with a design point of 50.2 kWth; the heat quality in the oil discharge line was on average up to 12.3 °C higher than in water-cooled experiments. In [S103], a microchannel separate heat pipe (MCSHP) cooling system for a telecommunications station is implemented, investigating the effects of geometric parameters and environmental conditions. When a 6-8 °C difference between indoor and outdoor temperatures was observed, the cooling capacity improved by 135 percent. Ref. [S104] suggested a cooling system consisting of a three-fluid heat exchanger that incorporates two separate loops, a mechanical cooling loop and a thermosyphon loop, to avoid a reliability risk mode due to switching valves.
A cooling system was studied by [S105] in which servers were immersed in a dielectric liquid and water was used to carry heat away from the data center. The setup involves a dry air cooler and a buffer heat exchanger to dispose of heat from a liquid-cooled rack. The PUE of the cooling system was found to be as low as 1.08. Ref. [S106] focused on energy consumption aspects in a data center. They worked on a direct-fresh-air-cooled container-based data center (CDC), which showed a 20.8 percent decrease in energy usage over one year compared with the projected use of a CRAC-cooled CDC.
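The PUE figure quoted above is simply the ratio of total facility energy to the energy delivered to the IT equipment (a value of 1.0 would mean all energy reaches the servers):

```python
# Power Usage Effectiveness: total facility energy / IT-equipment
# energy. The example values are arbitrary and chosen to reproduce the
# 1.08 figure quoted in the text.
def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

print(round(pue(1080.0, 1000.0), 2))  # 1.08 -> only 8% overhead for cooling etc.
```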
Ref. [S57] developed a latent heat storage unit (LHSU) model to reduce the significant amount of energy consumed in space cooling. The concept of free cooling and its future use in data centers was then explored based on the proven performance characteristics of heat-pipe technologies, implemented and evaluated for a typical data center area by [S107].
The case-study findings suggest possible energy savings of up to 75% using a heat-pipe-based free cooling system. A heat-pipe-based thermal management system for energy-efficient data centers was developed in [S108]: a fluid-loop heat exchanger is connected to the rear door of the rack server, and a new Computational Fluid Dynamics (CFD) technique was further developed [109]. The study compared liquid rear-door cooling with conventional air-cooling systems and demonstrated the energy-efficiency benefits of air-to-liquid heat exchangers [118][119][120].
In [S110], the authors reported the robustness and energy performance of the first hot-water-cooled supercomputer system. The system also has an air-cooled part that allows the coolant output to be measured. The proposed system eliminated chillers and permitted heat reuse [S94]. Different thermal and cooling techniques for energy efficiency are compared in Table 14. The taxonomy of thermal and cooling techniques is shown in Figure 22, and the distribution of research papers among the different types of thermal-management techniques can be seen in Figure 23.

Most energy-efficient strategies rely on migration and consolidation. They maintain energy efficiency at internal logical levels, minimizing energy consumption by controlling the power supplied to virtual machines. Different energy-scaling techniques can minimize power consumption at the superficial level [100]. Power management falls into two categories: static and dynamic power management. In static power management, power is managed at circuit-design, architecture, and system level, focusing on reducing the switching activity of power in circuits [128][129][130][131][132][133][134].

Dynamic power management is applied to systems in the running state [49]. Dynamic voltage and frequency scaling (DVFS) is one of the methods used in DPM to cut energy consumption by adjusting the supplied voltage and clock frequency, for example when the CPU is idle [26]. Many research studies in the area of power management have been performed. Power management techniques are categorized into two major components, as shown in Figure 24.

Static Power Management (SPM). Hardware efficiency is evaluated by employing SPM [108], covering the CPU, memory, storage, network equipment, and power supply [90]. Power reduction in SPM is a continuous process that comprises all optimization approaches used in logic design, circuits, and architecture [92,94]. (c) Architecture-level optimization: mapping a high-level problem architecture to a low-level configuration is a way of saving energy while maintaining a reasonable degree of efficiency [91][92][93][94][95][96][97][98][99].
Dynamic Power Management (DPM). The main focus and key roles of DPM are: (1) to track the resources available and their use by the program; and (2) to optimize application workloads to reduce energy consumption.
DPM technology helps the system automatically change its power state depending on the current situation [102,103]. Moreover, workload requirements can be estimated, allowing future system behavior to be adjusted so that adequate action is taken under those requirements [107]. The power reduction achieved by DPM is therefore temporary, persisting only as long as the current workload and the available resources allow [121]. These techniques are defined by their application level as hardware-level or software-level, as explained below.
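The basic DPM mechanism, putting a device to sleep after a long idle period and waking it on demand, can be sketched as a toy timeout policy (the timeout value, state names, and event format are illustrative):

```python
# Toy dynamic power management policy: a device moves to a low-power
# state after staying idle longer than a timeout and wakes on the next
# request -- the core mechanism behind timeout-based DPM.
def simulate_dpm(events, timeout=3):
    """events: list of (time, 'request') tuples; returns (time, state) pairs."""
    state, last_busy, timeline = "active", 0, []
    req_times = {t for t, _ in events}
    for t in range(max(req_times) + timeout + 2):
        if t in req_times:
            state, last_busy = "active", t      # wake on demand
        elif state == "active" and t - last_busy > timeout:
            state = "sleep"                     # idle too long -> low power
        timeline.append((t, state))
    return timeline

tl = dict(simulate_dpm([(0, "request"), (8, "request")]))
print(tl[2], tl[5], tl[8])  # active sleep active
```

Predictive DPM schemes replace the fixed timeout with a forecast of the next request, trading wake-up latency against idle power.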

Hardware-Level Solutions
Applied at the hardware level, DPM technology seeks to dynamically reconfigure devices so that the required services are fulfilled with a limited number of active components or a minimal load [111][112][113][114][115][116]. Hardware-level DPM strategies [20] can selectively disable idle device components or reduce the performance of partially unexploited components.
Some modules, such as the CPU, can also be switched among active and idle modes to save power.
DPM techniques at the hardware level can be classified as:

Software-Level Solutions
Any adjustment or reconfiguration at the hardware level is challenging to enforce; it is therefore essential to push toward software solutions. Several approaches for energy reduction have been proposed, such as Advanced Power Management (APM) performed by the Basic Input/Output System (BIOS), firmware-based systems, and platform-specific power management systems. These solutions, however, depend on both hardware and software. We now discuss new techniques invented by various researchers using power management technology [139][140][141].
A modern system architecture named FAWN (Fast Array of Wimpy Nodes) was introduced by [S67] for data-intensive computing with low-power CPUs coupled to small amounts of local flash storage. The authors reported that such a combination provides efficient parallel data access. Ref. [S113] investigated FAWN intensively on different workloads. The results demonstrated that nodes with low-power CPUs are more energy-efficient than traditional high-performance CPUs. A new Gordon architecture for low-power data-centric applications was defined in [S68].
Gordon can reduce power consumption by using low-power processors and flash storage. The Non-dominated Sorting Genetic Algorithm II (NSGA-II), a genetic-algorithm-based, power-efficient method for the static scheduling of independent jobs on the homogeneous single-core infrastructure of cloud computing data centers, was implemented by [S69]. Ref. [S70] suggested a genetic algorithm for energy awareness (GAPA) to solve the VM allocation problem. Each individual's chromosome is encoded as a tree structure in which each instance represents the allocation of a VM to a PM.
The PSO-based algorithm introduced by [S71] minimizes the energy cost and the time required to search for feasible solutions on a heterogeneous multiprocessor system, similar to the cloud data center setting. The proposed algorithm finds the most feasible solution for minimizing energy usage by assigning jobs to, or exchanging them among, appropriate processors. Ref. [S74] presented an energy-conscious network algorithm for data centers. The authors propose putting only a few components of the network system to sleep rather than placing the network system entirely in sleep mode.
Ref. [S118] proposed ElasticTree for data center networks, a dynamic power management scheme that dynamically selects a set of active network components, including switches and links. ElasticTree was implemented on OpenFlow switches for the experiments. The results revealed that network energy was reduced by up to 50 percent. Ref. [S79] proposed a DVFS-enabled energy-efficient workflow task scheduling algorithm named DEWTS. Ref. [S73] suggested a scheduling technique for virtual machines using Dynamic Voltage Frequency Scaling (DVFS) in computing clusters to minimize power consumption.
Generally, the supplied voltages are scaled dynamically when allocating virtual machines in a DVFS-enabled cluster. Ref. [S75] introduced a new architecture incorporating practical green improvements into the distributed cloud-computing environment. The system's overall efficiency is greatly enhanced in a cloud-based data center, with minimal performance overhead, through power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design [90].
In [18], the authors explore different power-efficient virtual machine provisioning techniques for real-time services. First, they investigated the different power-aware DVFS-based schemes; they then used a provisioning algorithm to improve energy consumption and system reliability and to reduce operating costs. A systematic approach to optimizing computation in cloud computing was proposed by [S78], who implemented a private cloud system with facilities supplied by the DAS-4 cluster.
By running various high-performance computing workloads on both VMs and hosts [3], the authors compared their performance in terms of power, power efficiency, and energy metrics. Additionally, all these schemes help to minimize the effort needed to cool the equipment. The Surge-Guard power-trading framework [S80] balances idle times with cooling capacity, which reduces total energy consumption [20]. The framework also aims to improve overall energy effectiveness, optimizing idle time and reducing response time.
Reducing static power consumption (SPC) requires reducing current leakage, which can be done in three ways [108]:
• Reducing the voltage level of device components (CPUs and cache memory), known as Supply Voltage Reduction (SVR).
• Cutting down the circuit size of the device, either by designing circuits with fewer transistors or by cutting the power supply to idle components to minimize the effective transistor count.
• Using cooling technologies, which reduce power leakage and enable circuits to operate faster, since electrical resistance falls at lower temperatures; they also reduce the risk of degraded chip reliability and lifetime due to high temperatures.
Four methods can reduce dynamic power consumption (DPC):
• Minimizing switching activity.
• Decreasing physical capacitance, which relies on low-level design parameters such as transistor size.
• Cutting down the supply voltage.
• Reducing the clock frequency.
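All four DPC levers act on the standard dynamic-power relation P = aCV²f (a: switching activity, C: capacitance, V: supply voltage, f: clock frequency). A small numeric sketch with arbitrary values:

```python
# Dynamic power P = a * C * V^2 * f. Because voltage enters squared,
# lowering V is the most effective single lever; the values here are
# arbitrary and purely illustrative.
def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

base = dynamic_power(0.5, 1e-9, 1.2, 2e9)    # baseline: 1.44 W
low_v = dynamic_power(0.5, 1e-9, 0.9, 2e9)   # lower supply voltage: 0.81 W
print(base, low_v, low_v / base)  # cutting 1.2 V to 0.9 V saves ~44%
```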
The power-management technique taxonomy is shown in Figure 25. The techniques are compared according to their operating environment, the methods used, and their primary advantages and limitations in Table 15. In Figure 26, the distribution of research papers among the different types of software techniques can be seen.

i. By reducing processor power dissipation
Processor power dissipation increases significantly as the clock frequency increases and the number of transistors expands. To avoid an increase in cooling costs, CPU power dissipation must be reduced [45]. A processor consumes electrical energy to operate various network devices and chips, and mainly for its charging purposes; this energy is largely emitted into the environment as heat. By using free cooling, the power consumption can be diminished.

ii. Use of renewable energy sources
Every cloud data center requires a diesel generator for backup power, which leads to the discharge of CO2 and GHGs from the data center. Rather than using such energy, clean power options such as hydro and wind power can fulfill operation and cooling requirements. Google's data centers continuously draw almost 260 million watts of electricity, nearly a quarter of the output of a nuclear power plant.
Parasol, a solar-powered data center, was developed by [S111] as a research platform; it includes an air-conditioning unit, electrical distribution, and monitoring infrastructure, and runs entirely on renewable energy sources. In [81], renewable energy certificates had already been obtained to offset non-renewable energy consumption and to finance renewable energy production from wind and solar sources. In 2019, AWS announced that its global infrastructure is focused on reaching 100% renewable energy use.
iii. By using energy-efficient storage
Energy-efficient storage has already been devised to supplement current cloud storage. As data center equipment has a limited lifespan of 9-10 years, it is possible to adopt energy-efficient memory elements such as solid-state storage, which have no moving mechanical components and need less energy for cooling than conventional hard disk drives.

iv. Advanced clock gating mechanism
A clock gate is a hardware switch that turns the clock pulse on and off. The clock of a logic unit is triggered only while the logic block is running; if the logic block is idle, its clock is switched off. This famous technique has been utilized in many synchronous circuits and can also reduce dynamic power dissipation in globally asynchronous, locally synchronous circuits [S112].
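A toy model of the saving clock gating provides (illustrative counts only, not a circuit simulation):

```python
# Ungated, a block's flip-flops see every clock edge; gated, edges
# reach them only while the block is active, eliminating dynamic
# switching during idle cycles.
def clock_edges_seen(total_cycles, active_cycles, gated):
    return active_cycles if gated else total_cycles

busy = [1, 1, 0, 0, 0, 1, 0, 0]   # per-cycle activity of a logic block
ungated = clock_edges_seen(len(busy), sum(busy), gated=False)
gated = clock_edges_seen(len(busy), sum(busy), gated=True)
print(ungated, gated)  # 8 edges vs 3 -> fewer toggles, less dynamic power
```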
v. By using split-plane power
Splitting means dividing up; in processor terms, it refers to dividing the power planes. With split-plane power, the North Bridge (the Intel chipset component that handles communication between the CPU and the motherboard) does not share the processor's power supply: the motherboard provides a split-plane power supply, with separate supplies for the North Bridge and the processor.
vi. By improving the energy efficiency of processors
Dynamic voltage scaling and frequency scaling involve adjusting these parameters according to the hardware or software requirements. The processor voltage or clock rate can be adjusted easily to reduce CPU power usage. This method minimizes energy consumption through real-world techniques such as AMD Cool'n'Quiet, AMD PowerNow!, IBM EnergyScale, Intel SpeedStep, Transmeta LongRun, and LongRun2.
vii. Through energy-efficient computer architecture design
Heterogeneous computer systems are composed of several processors of relatively independent types. In such systems, which usually pair general-purpose processors acting as the central control unit with special-purpose processors such as accelerator units, the energy consumed per unit of work can be reduced. Ref. [S113] developed the Fast Array of Wimpy Nodes (FAWN), an energy-efficient cluster system built from large numbers of wimpy nodes in an integrated cluster. It can function as a classical cluster at only a fraction of the power consumption by using energy-efficient processors and limited flash memory.

viii. By using Nano Data Centers
A new distributed computing platform called Nano Data Centers (NaDa) increases energy efficiency relative to modern and traditional data centers. NaDa offers cloud computing resources through a decentralized data center architecture using a managed peer-to-peer model. Instead of a few large data centers in different geographical locations, it uses many small, interconnected data centers. Under a VoD access model, data access from NaDa [78] consumes 30% less energy than traditional data centers.
To provide processing and storage facilities, NaDa uses ISP-managed home gateways and follows a controlled peer-to-peer architecture to create a distributed data center infrastructure. The authors chose Video-on-Demand (VoD) platforms to determine the opportunity for energy savings in the NaDa network. Using NaDa saves at least 20-30 percent of the electricity used by standard data centers, avoids cooling costs, and reduces network energy consumption.
Optimizing the total power consumption within a data center is an NP-hard problem. The nano data center project was started with the promise of consolidating power and cooling costs; this line of work aims to reduce gross energy usage in a nano data center by implementing an efficient power-aware server selection procedure. The energy consumed by the various parts of a data center is shown in Figure 27. As the current trend shifts toward reducing energy consumption in data centers, [S119] proposed a new distributed computing platform called Nano Data Centers (NaDa). Ref.
[S115] developed two heuristics for selecting nano servers and the intermediary routers leading to them. One heuristic chooses the path with the fewest intermediate routers; the other directs a new request to the least-used server. With the proposed heuristics, the total energy usage in a data center decreases, although computing the optimal solution remains time-complex.

ix. Energy-Saving Strategies for the Compiling Process and Application Software Power Consumption
Compiler optimization also plays a significant role in reducing energy consumption. Better compilation technology helps evaluate an application program's behavior so that device and processor power usage can be reduced. Applying specific approaches at the source-program structure level reduces the power consumed by application software. Approaches targeting application software power consumption concentrate on reducing the space and time complexity of algorithms, for example by using specific algorithms to compress data storage space.
The authors in [S116] proposed a model for lowering the execution frequency, which reduces power in a multithreading system structure. Normally, programmers do not consider the energy efficiency of their code, and even fewer know how to assess and improve it. Hence, ref. [2] conducted a comprehensive study assessing the impact of the choice of programming language, compiler optimization, and implementation. The results show that considerable energy can be saved on the same problem with the same input size by carefully choosing the language, optimization flags, and data structures.
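As a minimal, hedged illustration of the data-structure effect (not one of the benchmarks from [2]): the same membership query does far less work on a hash set than on a list, and reduced CPU time is a rough proxy for reduced energy:

```python
import timeit

# The same membership query on two data structures; reduced CPU time
# is used here as a rough proxy for reduced energy consumption.
N = 100_000
as_list = list(range(N))
as_set = set(as_list)

t_list = timeit.timeit(lambda: (N - 1) in as_list, number=200)  # linear scan
t_set = timeit.timeit(lambda: (N - 1) in as_set, number=200)    # hash lookup

print(f"list: {t_list:.4f}s  set: {t_set:.6f}s")
assert t_set < t_list  # the hash lookup does far less work per query
```

The absolute timings depend on the machine, but the ordering illustrates why choosing the right data structure saves energy on the same input size.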
x. Thermodynamic computing
Crossing machine constraints, researchers are exploring the concept of thermodynamic computing [105] to make computers function more efficiently, on the premise that thermodynamics contributes to the autonomy and evolution of natural systems. Thermodynamics may thus guide the self-organization and development of future computing systems, making them more capable, stable, and energy-efficient.

xi. Nanotechnology
Presently, the industry is at the 10 nm node; moving to 5 and 3 nm can provide further efficiency gains and optimization [S118]. In the future, however, we will require more than today's latest transistor architectures, such as nanosheets and nanowires.

Issues and Recommendations
From the studies examined in the previous section, we plotted the number of studies proposed in each year. Figure 28 shows that between 2011 and 2015 energy efficiency became one of the hottest research domains. With software techniques, huge infrastructure is not required as it is with hardware techniques, and they are less prone to failures and faults.

Power-Consumption Parameters (RQ1)
Issue 1: The majority of the authors of PSs do not consider the recommendations of previous studies when selecting their parameters, and they give insufficient detail about the parameters used in their research. We also see that specific techniques, such as bio-inspired and thermal-management techniques, gained significantly less attention even though their results are productive.
Recommendation 1: It is advised to select parameters that allow a thorough analysis of the techniques, so that more accurate results can be produced, and to justify the choice of each parameter. Without considering all relevant parameters, we cannot conclude whether a specific technique is effective.
Issue 2: Most of the selected PSs provided no valid justification for the energy-consumption parameters chosen in their studies. Moreover, while developing software techniques, most researchers made only negligible efforts regarding energy consumption, as their focus was on boosting performance.

Recommendation 2: Researchers are encouraged to conduct further empirical studies that thoroughly examine the impact of these techniques in industrial setups.
This study highlights the divergence between academic and industrial research on energy-saving techniques: the energy-efficiency techniques studied by academics differ from the energy-efficiency activities industry practitioners apply during implementation. Researchers are advised to include or derive similar information from previous research done by industry professionals. Furthermore, it would be beneficial if research on these energy-efficiency initiatives were conducted on professional software systems in an industrial environment.

Impact of High Energy Consumption and Current Trends (RQ2 and RQ3)
Issue 1: Most of the studies utilize energy-efficiency techniques in data centers; however, they do not greatly consider the carbon footprint, as all the energy being consumed by techniques is generated by non-renewable sources of energy. Moreover, there are not many details about the total carbon emissions from data centers.

(Figure 28: Number of energy-efficiency techniques proposed in each year.)
Recommendation 1: It is suggested that researchers propose techniques that involve minimal non-renewable energy sources for operating data centers as all this energy comes from coal or nuclear power stations, which produce harmful gases in the environment. Researchers should also attempt to provide accurate details of the carbon emissions and the other gases emitted from data centers.
Large corporations generally do not share their data; however, almost all are moving toward green computing technology and attempting to optimize their techniques. Today, ICT can also lessen global pollution by reducing fossil fuel use and utilizing renewable sources. Facebook committed to using 100% green energy in 2011; Google and Apple followed in 2012, and nearly 20 web corporations had done the same by 2017. Google is the top green-energy purchaser in the world.

Energy-Efficient Techniques (RQ4)
Service level agreements (SLAs) govern customer availability and reliability, which are integral aspects of cloud systems; the provider must pay fines for any breach of the SLA. SLAs must cover data leakage, mismanagement of resources, multi-tenancy problems, and data sharing. Metrics in the data center field are measurable, comparable characteristics that lead to energy-efficiency and sustainability improvements.
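One widely used example of such a metric is Power Usage Effectiveness (PUE), the ratio of total facility energy to the energy delivered to IT equipment; the sketch below, with hypothetical kWh readings, is illustrative only:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by IT
    equipment energy. 1.0 is the ideal lower bound; cooling and power
    distribution overhead push real facilities above it."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly readings: 1500 kWh total, 1000 kWh to IT gear.
print(pue(1500.0, 1000.0))  # 1.5, i.e., 0.5 kWh of overhead per IT kWh
```

Tracking a metric like this over time is what makes the energy-efficiency and sustainability improvements mentioned above measurable.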

Hardware Techniques (RQ5)
Issue 1: Many PSs about hardware techniques ignored a critical point: the cost of implementation. Hardware techniques can efficiently reduce the carbon footprint; however, they also increase the cost of construction and operation.
Recommendation 1: It is suggested to report the cost factor, as high efficiency alone will not solve every issue. When constructing hardware, the costs of the hardware and equipment should also be considered.

Issue 2:
The majority of the PSs on hardware techniques face an issue with the construction of such systems. These systems are complex to implement, and the results these PSs obtain in academic environments are not very consistent with the results obtained on industrial systems.

Recommendation 2:
It is suggested to execute studies in both settings to eliminate the possibility of inconsistent results. Proposed studies should be validated in both academic and industrial systems for a proper analysis of the results; such practice yields more robust results and a better understanding of the system's efficiency.

Software Techniques (RQ6-RQ7)
Issue 1: Most of the PSs evaluated the techniques' impact mainly on CloudSim, and implementing such object-oriented software was sometimes more complex than using other tools.

Recommendation:
It is advised to develop tools that can efficiently assess a technique's impact using accessible languages such as Python and C#. Languages with strong AI and data-analysis ecosystems can also be used and can show results more efficiently.

Issue 2:
Most of the PSs missed critical points, such as the network traffic, load balancing, and optimization required in software techniques.

Recommendation 2:
We suggest including these parameters in research. High network traffic leads to more power consumption in the data centers that manage it, and proper load distribution must be ensured: if the load is unevenly assigned across servers, performance will degrade.
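As a hedged sketch of the load-distribution concern (a classic textbook heuristic, not a technique from any of the PSs), a greedy least-loaded scheduler assigns each incoming task to the server with the smallest accumulated load:

```python
import heapq

def assign_tasks(server_count: int, task_loads: list[float]) -> list[list[int]]:
    """Greedy least-loaded placement: each task goes to the server with
    the smallest accumulated load (O(n log m) with a min-heap)."""
    heap = [(0.0, s) for s in range(server_count)]  # (load, server id)
    heapq.heapify(heap)
    placement: list[list[int]] = [[] for _ in range(server_count)]
    for task_id, load in enumerate(task_loads):
        current, server = heapq.heappop(heap)
        placement[server].append(task_id)
        heapq.heappush(heap, (current + load, server))
    return placement

print(assign_tasks(2, [5.0, 4.0, 3.0, 2.0]))  # [[0, 3], [1, 2]]
```

Both servers end up with a load of 7.0 here, which is the even distribution whose absence, as noted above, degrades performance.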

Power-Aware Management Techniques (RQ8)
Issue 1: Most of the studies that utilize power-aware management techniques are costly and are implemented with tools that are either obsolete or no longer available. Moreover, these tools do not evaluate the system against all relevant parameters.

Recommendation 1:
Up-to-date tools should be developed that employ all relevant attributes to predict or evaluate a system, and accurate cost details must be added to the studies.

Bio-Inspired and Thermal Techniques (RQ9-RQ11)
Issue 1: The findings obtained with both techniques have mostly not been implemented in real-world systems. The techniques were evaluated only in simulation, making it challenging to predict the exact results without deploying them in the same environment, and few details about resource wastage or load balancing were found in the PSs.
Recommendation 1: It is advised to develop such a framework or tool capable of executing all implementations only to evaluate a system efficiently by all parameters and requirements.

Issue 2:
Most PSs that utilize a technique improve a particular parameter, whereas another PS applying the same technique with different evaluation parameters may produce ambiguous results.

Recommendation 2:
It is suggested to develop a unified framework for energy-efficient techniques in the cloud that is capable of measuring techniques more meaningfully.
Metrics help designers, managers, and end-users and serve as observable criteria for monitoring, evaluation, optimization, cost estimation, and planning. The difficulty is that evaluating the work of IT equipment or software is subjective because of the diversity of devices, networks, and storage, and their usage ratios.

Threats to Validity
The primary threats to our SMS's validity originate from unintentional inconsistencies introduced while reviewing the literature [1-111], selecting the primary studies, performing the quality assessment, and extracting data from the selected studies. Covering conclusion, internal, and external validity threats, we followed the threat descriptions given by [112] to discuss these constraints and the steps taken to mitigate them [113-121].

Conclusion Validity
Conclusion validity refers to reproducibility, which helps other researchers produce the same results as the original study [30]. This research has documented the search string, electronic search databases, manual search, inclusion and exclusion criteria, and the details of the systematic methodology used for undertaking this study.
In addition to this, the entire SMS process information was taken from a standard SMS conducted by [122], which is available online. However, other researchers could produce different results because of their different perceptions of the selection stage. To reduce this threat, both authors separately examined the titles, abstracts, and full texts of the selected articles while considering inclusion and exclusion criteria. Additionally, the authors discussed the discrepancies they identified during the selection process in the case of disagreements.

Convergent and Discriminant Validity
Construct validity relates to the primary studies identified during the selection process. To select the primary studies without bias, we adopted a well-known protocol suggested by [55]. First, a specific search string was built to select studies relevant to our concept. We further validated our search terms against 20 reference papers to expand our initial collection of keywords and define additional search terms. Paper [55] recommends a methodology for conducting an automatic search for articles in the most suitable and efficient way.
However, some literature could be missing because of the keywords used to search the publication sources. A pilot study on 24 sample papers was conducted by both authors to evaluate the challenges of the automated search and to verify the search string; its findings supported our search-string formulation. After the automated search was completed, a manual search of top publishing houses and conference proceedings databases was conducted to find any relevant study on energy efficiency and ensure that no critical studies were missed.
To enhance the efficiency of the search process and mitigate the validity threat of incomplete primary-study coverage, the authors independently evaluated the chosen study sources and previous surveys. The selection stage comprises the inclusion/exclusion criteria and the method for quality evaluation. Owing to unavailability or language barriers, we might still have missed some critical research related to our domain.

Internal Validity
The process of reviewing research articles using inclusion and exclusion criteria and quality evaluation is prone to reviewer bias that may exclude relevant studies; however, precautions against these threats were taken carefully. A random collection of 76 documents obtained through the automated search was screened separately by both authors to resolve any equivocal interpretations of the inclusion and exclusion criteria. The screening results were compared, and discrepancies were resolved.
Following [101], the first author performed the quality evaluation and data extraction from the primary studies, and the second author verified each decision. Furthermore, the excluded studies did not fulfil the quality specified for this study, as we followed the well-defined quality assessment standards proposed by [55]. In the research classification process, both authors participated fully, and any conflicts of opinion were settled through discussion to ensure mutual consistency.

External Validity
This research focuses mainly on the benefits and pitfalls of cloud-based energy-efficient technologies. The findings on the environmental effect of high energy consumption may differ across programming paradigms, since different techniques apply. Furthermore, the results of this SMS are valid primarily for energy-efficient cloud computing technology; the results may differ for other software domains, such as networking and operating system architecture. This study might act as a framework for many researchers to further recognize and classify the effect of ICT technology's high power usage on our climate.

Conclusions
This paper addresses a rigorous systematic mapping analysis to disclose the current state of the art for energy-efficiency techniques and recognize future accessible issues in the empirical literature concerning the effect of high energy consumption on the environment. First, we provided a short overview of energy-efficiency techniques used in the cloud to cut down the energy consumption and further discuss their need and role in a sustainable environment. Later, we addressed the complete procedure for conducting an SMS to perform the mapping analysis.
A two-phase methodology was employed to determine the relevant literature on our topic. Initially, we searched six databases: Springer, Scopus, ScienceDirect, IEEE Xplore, Wiley, and the ACM Digital Library. Almost ten esteemed journals and six conferences were considered for the manual search. After the manual search, we also conducted snowballing to enhance search efficiency and reduce the chance of neglecting any relevant article. After carefully screening the selected studies against the quality assessment criteria, we selected a pool of 119 PSs out of 2903 initial search results. These PSs were further categorized and analyzed according to our research questions.
The mapping of PSs according to their publishing years shows that the determination to find the impact of high energy usage on the environment is continuously thriving and establishes a highly active research field of cloud computing. We hope that this study will support industry professionals in understanding current research related to the effects and impact of high energy consumption in cloud data centers. The limitations and obstacles addressed in this SMS will also help researchers identify critical research gaps that must be addressed.
In recent years, computer systems' energy consumption has experienced many turning points, as it is one of the latest trends in the ICT industry. The authors attempted to highlight both the beneficial and harmful effects of cloud computing and to explain their impact on the ecosystem. According to the World Energy Outlook report of 2013, annual energy-efficiency spending will increase to US$550 billion by 2035, with almost 60 percent of clean-energy investments.
Cloud computing is typically supposed to establish a harmonious relationship with the community because the manufacturers of ICT equipment and the suppliers in this sector have agreed on environmental policy and NGOs' recommendations about minimizing the harmful impact of energy consumed by operating the data centers. The techniques were grouped into VM-based strategies, consolidation techniques, bio-based optimization processes, thermal control, and non-technical techniques. A future study might examine the effect on results and quality of service by various energy-efficiency techniques.

Future Work
Today, non-renewable resources such as coal are the primary energy source in the world's energy market. With the limited supply of non-renewable energy sources, the emphasis has shifted toward taking concrete steps to reduce energy usage levels in data centers. An energy-efficient strategy that can significantly address the energy consumption issue and proficiently manage massive data volumes is now a fundamental requirement. Other relevant areas also require extensive investigation to optimize energy efficiency in cloud data centers.
In this study, we attempted to draw the various facets of energy-efficient systems into one picture. For green computing, we identified four categories: computer manufacturing, software strategies, public awareness, and standardized policies. The output of this paper will benefit all four categories, as we presented how green computing concepts, various mechanisms, and some non-technical activities in data centers can improve energy savings, time, and costs.
Cloud computing is a modern paradigm that combines existing technologies to improve resource usage and workload distribution. According to our research, cloud computing's contribution to ecological sustainability is important, and the most relevant elements are as follows. According to green ICT standards, cloud technology should be applied with minimal harmful effects on the environment. Providers offering cloud services should decrease their non-renewable energy usage and substitute it with renewable energy to comply with environmental conservation provisions.
A rise in renewable energy consumption would result in lower CO2 emissions. However, because the first indicator is still not planned, it is doubtful that the carbon-driven pollution reductions will meet environmental organizations' expectations. Reductions in resource usage, CO2 emissions, and e-waste can concern both cloud service providers and consumers when selecting their services. Cloud computing can reduce and substitute the amount of equipment organizations demand. Cloud service providers should stay alert and adapt their practices to each region's environmental guidelines.
Organizations and NGOs for the environment should verify and appraise the details about the effect of cloud infrastructure on the ecosystem, and these details are readily available online and can be very advantageous for building up new policies to protect our environment. This research is crucial for decision-makers as they can easily predict a requirement for more comprehensive and robust models for future computing technologies and mitigation options.
Researchers can use the output of this study before developing new techniques, as they will have a better idea of the limitations and scope of the previous studies conducted on energy efficiency. Moreover, more techniques must be developed for modelling emerging trends, such as the rollout of 5G, AI, and edge and fog computing. Using this paper's results, policymakers will obtain early insights into the potential energy-consumption implications of these trends and can accelerate their investments in energy-saving technologies that reduce consumption.

Data Availability Statement: The study did not report any data.
Acknowledgments: This research project was supported by a grant from the Research Center of College of Computer and Information Sciences, Deanship of Scientific Research, King Saud University.

Conflicts of Interest:
The authors declare no conflict of interest.