Entropy-Based Economic Denial of Sustainability Detection

In recent years, the different information security organizations have reported a significant increase in the number and impact of Distributed Denial of Service (DDoS) threats. These attacks typically target the depletion of the computational resources of their victims, hence drastically harming their operational capabilities. Inspired by these methods, Economic Denial of Sustainability (EDoS) attacks pursue a similar goal, but adapted to Cloud computing environments, where the denial is achieved by damaging the economy of both suppliers and customers. Consequently, the most common EDoS approach is to make the offered services unsustainable by exploiting their auto-scaling algorithms. In order to contribute to their mitigation, this paper introduces a novel EDoS detection method based on the study of entropy variations in the metrics taken into account when deciding auto-scaling actuations. Through the prediction and definition of adaptive thresholds, unexpected behaviors capable of fraudulently demanding the hiring of new resources are distinguished. In order to demonstrate the effectiveness of the proposal, an experimental scenario adapted to the singularities of EDoS threats and the assumptions driven by their original definition is described in depth. The preliminary results show high accuracy.


Introduction
The main goal of Denial of Service (DoS) attacks is to deplete the resources of the victim systems in order to disable them. The abbreviation DoS typically refers to threats with a single source; when they originate from multiple sources, the expression Distributed Denial of Service (DDoS) is applied. In the last decades these threats have grown, become more sophisticated and acquired a greater intrusive capacity, which magnifies their impact and hinders their mitigation. The different information security agencies have warned about this problem. For example, the European Union Agency for Network and Information Security (ENISA) registered a 30% increase of DDoS threats in the last year [1]. Given the magnitude of the impact of the threats registered in Autumn 2016 [2], both the European Commission (EC) [3] and the US Government [4] announced an important reinforcement of their measures against these attacks. According to the European Police Office (Europol), their harmful capabilities are propitiated by different circumstances: the rapid proliferation of botnets, the emergence of novel vulnerabilities and amplifying elements, a greater offer of malicious products such as Crimeware-as-a-Service (CaaS) on the black market, the massive popularization of certain technologies (e.g., mobile devices, Internet of Things (IoT) devices, etc.), and the ignorance of users concerning good practices related to data protection and information security [5].
The most common DDoS approach is based on flooding, whose modus operandi is to inject numerous requests in order to saturate the victim's processing capabilities. As highlighted in [6], this is achieved by the constant and continuous generation of large volumes of requests (i.e., high-rate flooding), or by the seasonal injection of less noisy numbers of requests (i.e., low-rate flooding). Consequently, the research community assumed these behaviors and over the last decade has proposed solutions that facilitate their prevention, detection, mitigation and source identification, some of them discussed in depth in [7]. However, the emergence of new monitoring environments, in particular those that adopt the technologies that form the backbone of fifth-generation networks, has led to a variation of these threats that, instead of compromising computing resources, focuses on damaging the economic sustainability of the services they support [8]. They are known as Economic Denial of Sustainability (EDoS) attacks, and they are the object of study of the research described throughout the rest of the paper. With the purpose of cooperating with the research community towards their mitigation, the following main contributions are accomplished:
• An in-depth review of EDoS threats and the efforts made by the research community for their detection, mitigation and source identification.
• A multi-layered architecture for EDoS attack detection, which describes the management of the acquired information from its monitoring to the notification of possible threats.
• A novel entropy-based EDoS detection approach which, assuming their original definition, allows discovering unexpected behavior in local-level metrics related to the auto-scaling capabilities of the victim system.
• An evaluation methodology adapted to the singularities of EDoS threats and the assumptions driven by their original definition.
• Comprehensive experimental studies that validate the proposed detection strategy, in this way motivating its adaptation to future use cases.
The paper is organized into six sections, the first of them being the present introduction. Section 2 studies the main features of EDoS attacks and their countermeasures. Section 3 introduces a novel EDoS detection system based on the analysis of entropy variations. Section 4 describes the performed experimentation. Section 5 describes and discusses the obtained results. Finally, Section 6 presents the conclusions and future work.

Background
This section describes the main features of the Economic Denial of Sustainability threats and some of the most relevant countermeasures in the bibliography.

Economic Denial of Sustainability Attacks
Hoff coined the term Economic Denial of Sustainability (EDoS) attack in 2008 [9,10] and Cohen extended its definition [11], which is currently adopted by the research community. EDoS attacks are usually directed against Cloud computing infrastructures, which play an essential role in the emergent communication technologies. Because of this, Singh et al. formally defined EDoS attacks as "threats whose target is to make the costing model unsustainable, therefore making it no longer viable for a company to affordably use or pay for their Cloud-based infrastructure" [12]. EDoS attacks are also tagged in the bibliography as Reduction of Quality (RoQ) threats [13], or Fraudulent Resource Consumption (FRC) attacks [14]. These intrusions take advantage of the "pay-as-you-go" accounting model offered by most Cloud computing providers and their auto-scaling services [15]. Their modus operandi varies slightly depending on the providers and the Cloud solutions they offer (e.g., OpenStack, Microsoft Azure, Amazon EC2, etc.) [13], as well as the scaling policies they implement (discrete, adaptive, etc.). However, EDoS tends to display a common pattern: the attacker injects requests that must be processed at the server side. They pose an important workload effect, which may be caused by different actions, among them requesting large files or queries [16], HTTP requests on XML files [17], or exploiting alternative Application-layer vulnerabilities [18-20]. When the flooding of requests exceeds the computational capabilities of the hired services, the auto-scaling processes trigger the need to contract additional resources, which increases the bill that the client must pay. Somani et al. [14] studied the consequences of this increase in costs, which has distinct impacts depending on the side. For example, in addition to the impact on the offered Quality of Service, the economic losses may become unsustainable for the clients, who consequently will probably try to find a more profitable provider. This obviously also affects the supplier, which loses reputation, and hence money in the long term. The attack also impairs other services and network layers, mostly because of the impact of deploying additional resources. This involves, among others, physical infrastructure, Network Function Virtualization (NFV) or multi-tenancy, which may compromise additional network resources [21]. For example, in [13] a low-rate flooding variant of EDoS is introduced with the purpose of maximizing the collateral damage (i.e., the consequences of auto-scaling) and making its detection more difficult. Such publication reviews its consequences at different Cloud computing architecture levels.

Countermeasures
In general terms, the extensive literature on defense against conventional DDoS threats lacks publications effective against EDoS attacks. This is because EDoS focuses on making Cloud resources economically unsustainable instead of depleting them. This is often achieved by far less noisy attacks, with a greater resemblance to the behavior of the legitimate user [16]. Because of this, EDoS detection is driven by metrics related to resource consumption at the server side, while conventional DDoS detection usually analyzes network traffic metrics at packet and flow level [22]. Several specific approaches against EDoS attacks are collected and discussed in [8,16,23]. Some of them aim at their detection, which typically distinguishes two methods. The first of them analyzes network traffic metrics, as is the case of those that describe web browsing behaviors [24], time spent at web pages [25] or packet header attributes, for example their TTL [26]. They are easy to implement and efficient, but rely on Application-layer or networking protocol features more related to DDoS than EDoS; hence their accuracy is greatly restricted to each use case [27]. On the other hand, the second approach is based on modeling the economic sustainability of the services, looking for suspicious discordances [28]. This method has been significantly less considered by the research community, mainly because of its specificity; in particular, it entails a greater difference from conventional DDoS detection strategies and demands more complex processes at the server side. However, it is independent of the exploited network layer and provides a more comprehensive understanding of the impact of the requests on the protected services, the latter usually leading to greater accuracy.
Publications based on the prevention and mitigation of EDoS threats focus on hampering their execution and minimizing their impact. The most complex prevention solutions mathematically model the resources required by the protected services and anticipate the consumption of future requests, usually adopting game theory or queuing methods [29]. They allow anticipating harmful situations, facilitating proactive responses, but must be complemented by reactive solutions. Major efforts towards mitigating EDoS threats focus on deploying access control mechanisms, as is the case of crypto-puzzles [30-32], graphical Turing tests [26,33] or reputation systems [34,35]. They are effective, but as highlighted in [23], resolving hard tests or deploying complex reputation schemes consumes additional resources at both client and server sides, and significantly affects the Quality of Experience (QoE) of the protected environment.
Once the threats are detected and mitigated, the final step is to identify their source. The bibliography lacks publications that specifically address this problem, with certain exceptions such as [24]. They usually model server usage behaviors based on Application-layer metrics, among them web session duration, number of HTTP requests, or their impact on the protected environment. More generalist solutions are inherited from the advances on conventional DDoS attack source identification. They mainly include packet traceback techniques, some of them collected in [36]. For example, in [37] a novel approach is proposed that bypasses the deployment difficulties of conventional IP traceback techniques by studying ICMP error messages. As reviewed in [38], the features of the network topology have an important impact on the effectiveness of the source identification approaches, which tend to be problematic in highly non-seasonal environments. Alternatively, traps such as honeypots [39], or decoy virtual machines that co-exist with real ones on the same physical hosts [40], are deployed. They implement the aforementioned methods, thus providing an additional level of security.

EDoS Attack Detection
With the purpose of establishing the basis for defining an appropriate design methodology, the peculiarities of conventional Denial of Service attacks, of legitimate mass access to the protected services (i.e., flash crowds), and their differences with the Denial of Sustainability threats have been taken into account. They allowed defining the following assumptions and limitations concerning the proposal described in the rest of this section:
• As remarked by Hoff in the original definition of EDoS attacks [9], they pose threats that do not aim to deny the service of the victim systems, but to increase the economic cost of the services they offer in order to make them unsustainable.
• Hoff later clarified that at network level, EDoS threats resemble activities performed by legitimate users [10]. This implies that the distribution of the different network metrics (number of requests, number of sessions, frequency, bandwidth consumption, etc.) does not vary significantly when these attacks are launched. This is because, in order to ensure their effectiveness, they must go unnoticed.
• It is possible to identify EDoS attacks by analyzing performance metrics at local level. Given that at network level there are no differences between EDoS and normal traffic, the requests performed by these threats must involve a greater operational cost.
• Requests performed by EDoS attacks have a similar quality to those from legitimate users (for example, a similar success rate). However, attackers may exploit vulnerabilities (usually at the Application layer) to extend their impact [14].
• DDoS attacks usually originate from a large number of clients, each of them performing a huge number of low-quality requests. On the other hand, EDoS attacks also come from many sources, but each client performs an amount of requests similar to that of legitimate users. Unlike flash crowds, EDoS attacks affect the predictability of the performance metrics related to the costs resulting from attending the requests served by the victim [18].
Based on these premises, it is possible to assume that, by studying the predictability of performance metrics at local level (e.g., processing time, memory consumption, input and output operations, CPU consumption, etc.), it is possible to successfully identify EDoS attacks. This is taken into account in the following subsections, where the introduced detection strategy is described. The proposal has the architecture illustrated in Figure 1. It must perform three main tasks: (1) monitoring and aggregation; (2) novelty detection; and (3) decision-making. They are described below.

Monitoring and Aggregation
At the monitoring stage, the factual knowledge necessary to deduce the nature of the requests to be analyzed is collected. Therefore, the detection system monitors local metrics related to the operational cost of responding to the received requests. Assuming that, in order to succeed, EDoS attacks attempt to trigger the auto-scaling mechanisms at the victim side, the metrics that determine these actions acquire special relevance. Note that they are widely studied in the bibliography and vary according to the management services. Examples of well-known local-level metrics are: CPU utilization, warming time, response time, number of I/O requests, bandwidth or memory consumption [13,14]. Because of its relevance in recent commercial Cloud computing solutions (e.g., Google Cloud, Amazon EC2, etc.), the performed experimentation considered the percentage of CPU usage of the victim system. On the other hand, it is important to bear in mind that the analysis of the degree of predictability of events has played an essential role in the defense against conventional DDoS threats. Among the most used aggregated metrics, it is worth mentioning the classical entropy adaptation to information theory proposed by Shannon [41]. Note that approaches like [42] demonstrated its effectiveness when applied to DDoS detection, being a strong element in the discovery of flooding threats. Recent publications such as [16,27,28] tried to adapt this paradigm to the EDoS problem. However, most of them made the mistake of only considering information monitored at network level, hence ignoring part of the information that truly defines the auto-scaling policies. Because of this, the Aggregation stage of the proposed method calculates the information entropy H(X) of the instances {x_1, x_2, ..., x_n} of the qualitative variable X monitored per observation, given their probabilities {p_1, p_2, ..., p_n}. The proposed detection scheme defines X as "the response time (rate) to the different requests performed by each client". Given that X describes discrete events, its entropy is expressed as follows:

H(X) = -\sum_{i=1}^{n} p_i \log_b p_i

where the change-of-base identity \log_a b \cdot \log_b x = \log_a x applies. H(X) is normalized by dividing the obtained value by the maximum observable entropy \log_b n. When the maximum entropy is reached, all the monitored clients made requests with the same CPU overload; on the contrary, if the registered entropy is 0, then (1) a single client carried out all the requests, or (2) there was no CPU consumption during the observation period. The sequence of monitored entropies is studied as a time series {H(X)_t}_{t=0}^{N}.
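The Aggregation stage computation can be sketched as follows; the per-client CPU-time input format and the function name are illustrative assumptions, not taken from the paper:

```python
import math

def normalized_entropy(cpu_time_per_client):
    """Shannon entropy of the per-client CPU-time shares,
    normalized by the maximum observable entropy log(n)."""
    total = sum(cpu_time_per_client.values())
    n = len(cpu_time_per_client)
    if total == 0 or n < 2:
        return 0.0  # no consumption, or a single client did everything
    probs = [t / total for t in cpu_time_per_client.values() if t > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(n)  # 1.0 when all clients consume equally
```

When all clients impose the same load the result is 1, and it drops towards 0 as the consumption concentrates on few clients, which is exactly the slump the detector looks for.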

Novelty Detection
The next analytic step is to recognize the observations that significantly vary from normal behaviors. This is a one-class classification problem where it is assumed that the normal data comprises the previous observations H(X)_{t=1}, ..., H(X)_{t=N-1}, and it is intended to deduce whether H(X)_{t=N} belongs to the same activities. The bibliography provides a large variety of solutions to this problem [43]. However, because it was assumed that EDoS attacks could be identified by discovering discordances in the predictability of local-level aggregated metrics [18], the proposed system implements a forecasting approach.

Detection Criteria
In particular, the entropy for a certain horizon h, Ĥ(X)_{t=N+h}, is predicted. Hence, the following Euclidean distance is considered:

dist(o, ô) = |H(X)_{t=N+h} - Ĥ(X)_{t=N+h}|

If H(X)_{t=N+h} differs from Ĥ(X)_{t=N+h}, so that dist(o, ô) > 0, an unexpected behavior is detected. The significance of this anomaly is established by two adaptive thresholds: the Upper Threshold (Th_sup) and the Lower Threshold (Th_inf). A novelty is discovered if any of the following conditions is met:

H(X)_{t=N+h} > Th_sup   or   H(X)_{t=N+h} < Th_inf     (3)

Prediction
The implemented prediction methodology adopted the Autoregressive Integrated Moving Average ARIMA(p, d, q) paradigm [44], which is defined by the following general-purpose forecast model:

\left(1 - \sum_{i=1}^{p} a_i L^i\right) (1 - L)^d y_t = \left(1 + \sum_{i=1}^{q} \theta_i L^i\right) \epsilon_t

where L is the lag operator, a_i are the parameters of the autoregressive part, θ_i are the parameters of the moving average part, and ε_t is white noise. The adjustment of p, d, q may make the ARIMA model equal to other classical forecasting models; for example, the simple random walk (ARIMA(0,1,0)), AR (ARIMA(1,0,0)), MA (ARIMA(0,0,1)), simple exponential smoothing (ARIMA(0,1,1)), double exponential smoothing (ARIMA(0,2,2)), etc. Predictions ŷ_t on ARIMA models are inferred by a generalization of the autoregressive forecasting method, obtained by recursively replacing the future noise terms by their zero expectation, and the calibration of the adjustment parameters p, d, q considered the Akaike Information Criterion (AIC), as described in [45].
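A full ARIMA fit requires a statistical library, but one of the special cases listed above, simple exponential smoothing (ARIMA(0,1,1)), can be sketched in a few lines of pure Python; the squared-error grid search below is a simple stand-in for the AIC-based calibration, and all names are illustrative:

```python
def ses_forecast(series, alpha=0.3, horizon=1):
    """Simple exponential smoothing (equivalent to ARIMA(0,1,1));
    the h-step-ahead forecast is flat at the last smoothed level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon

def select_alpha(series, candidates=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Pick the smoothing factor with the lowest one-step-ahead
    squared error (a crude stand-in for AIC-based model selection)."""
    def sse(alpha):
        level, err = series[0], 0.0
        for y in series[1:]:
            err += (y - level) ** 2
            level = alpha * y + (1 - alpha) * level
        return err
    return min(candidates, key=sse)
```

In the detector, `series` would be the recent normalized-entropy observations and the forecast Ĥ(X)_{t=N+h} would feed the adaptive thresholds.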

Adaptive Thresholding
On the other hand, the adaptive thresholds define the Prediction Interval (PI) of the sensor, which is deduced in the same way as usually described in the bibliography [4], hence assuming the following expressions:

Th_sup = Ĥ(X)_{t=N+h} + K·σ̂
Th_inf = Ĥ(X)_{t=N+h} − K·σ̂

where σ̂ is the estimated standard deviation of the forecast errors and K determines the confidence level of the estimation (by default Z_{α/2}). Note that, despite linking its value to the normal distribution, it has been demonstrated that when the time series does not approach such a distribution, the obtained error is unrepresentative [46]. Figure 2 illustrates an example of novelty detection. In the first 60 observations no H(X) exceeds the adaptive thresholds; but at observation 61 an EDoS attack was launched, and the inferred changes meet the conditions to be considered novel.
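The thresholding rule can be sketched as follows; estimating σ̂ from the past one-step forecast errors is an assumption, since the paper does not detail how the deviation is obtained, and the function name is illustrative:

```python
import statistics

def novelty_check(history, observed, forecast, k=1.96):
    """Flag an observation whose entropy falls outside the
    prediction interval forecast ± k * sigma, where sigma is
    estimated from past (observed, predicted) forecast errors."""
    errors = [obs - pred for obs, pred in history]
    sigma = statistics.pstdev(errors) if len(errors) > 1 else 0.0
    th_sup = forecast + k * sigma  # upper adaptive threshold
    th_inf = forecast - k * sigma  # lower adaptive threshold
    return observed > th_sup or observed < th_inf
```

Raising `k` widens the interval (fewer alerts, greater discretion), while lowering it tightens the interval (more alerts, greater protection), mirroring the calibration discussion in the Results section.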

Decision-Making and Response
According to the principles of anomaly-based intrusion detection compiled and discussed by Chandola et al. [47], once the appropriate premises are assumed, the identification of discordant behaviors may be indicative of malicious activities. As stated at the beginning of this section, the introduced EDoS detection system relies on the original definitions by C. Hoff and R. Cohen. Therefore, when a local metric directly related to triggering auto-scaling capabilities on Cloud computing becomes unpredictable, it is possible to deduce that the protected environment is being misused, and hence jeopardized. This occurs when dist(o, ô) > 0 and (1) H(X)_{t=N+h} > Th_sup or (2) H(X)_{t=N+h} < Th_inf. Because the performed research focused only on detecting the threats, its response is to notify the detected incident. The report may trigger mitigation measures such as initiating more restrictive access control [30,31,33] or deploying source identification capabilities [24] (whose design and development are out of scope). Therefore, it entails a good complement to many of the proposals in the bibliography.

Experiments
The following sections describe the Cloud-based testbed and related architectural components considered throughout the performed experimentation.They are depicted in Figure 3.

Execution Environment
The experimental cloud computing environment was built with Openstack [48], a well-known open source cloud platform suitable for deploying public and private cloud environments of any size. The auto-scaling features of this cloud platform have also been tested effectively in recent publications [49,50]. The Openstack deployment for the experimental testbed was composed of one controller node and one compute node. The controller runs the core Openstack services, holding the Networking (Neutron) and Compute (Nova) essentials, as well as the Telemetry (Ceilometer) and Message Queue (RabbitMQ) services. In addition, it runs the Orchestration (Heat) services to allow the configuration of auto-scaling policies. The compute node runs on a separate server, hosting the Nova core services. A new Compute instance was launched to deploy the web service used for experimentation. This virtual instance runs an Ubuntu 16.04-x64 server with 8 CPU cores and 8 GB of RAM.
On top of the operating system, a REST (Representational State Transfer) web service written in Flask [51] has been implemented. A REST web service was chosen due to its simplicity and rapid development. REST is the predominant web API design model built upon HTTP methods [52], which allows the system to interact with several kinds of entities (i.e., humans, IoT devices). In REST, every client request (1) generates only a single server response (one-shot) and (2) every response must be generated immediately (one-way) [53]. This request-response model is suitable for focusing the analysis on the measurement of CPU processing times, by tracking the connected user and the impact of their client requests on the CPU consumption.
In addition to the web service, two modules were developed to be run in the background: The HTTP Usage Monitor module and the Entropy Modeler.The former logs information regarding the monitoring of client requests processing times, whereas the latter performs novelty detection methods to trigger anomaly-based alerts to the Openstack orchestration services.
On the client-side, a set of REST-clients have been deployed to generate traffic according to several execution scenarios.The implementation details and characteristics of the components tested in the experimentation stage are explained in the forthcoming sections.

Server-Side Components
The following describes the deployed server-side components: RESTful Web Service, HTTP Usage Monitor and the Entropy Modeler.

RESTful Web Service
To facilitate a seamless interaction with HTTP clients, a REST web service has been implemented on Flask, a Python-based framework for the rapid development of web applications. The REST service exposes four HTTP endpoints that trigger the execution of different list-sorting operations on the server; each of them consumes a different amount of CPU time, which is measured in the background. The endpoints and their average execution times are summarized in Table 1.
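The endpoint workloads can be roughly sketched as below; note this is a self-contained stand-in, not the actual Flask service, and the list sizes are hypothetical, since Table 1 reports execution times rather than input sizes:

```python
import random
import time

# Hypothetical list sizes: endpoint 4 is the costly one an attacker targets.
ENDPOINT_SIZES = {1: 1_000, 2: 5_000, 3: 10_000, 4: 200_000}

def handle(endpoint, rng=random.Random(0)):
    """Serve one request: sort a freshly generated list and
    return the CPU time consumed by the sort."""
    data = [rng.random() for _ in range(ENDPOINT_SIZES[endpoint])]
    start = time.process_time()
    data.sort()
    return time.process_time() - start
```

The asymmetry between endpoint 4 and the others is what lets an attacker raise the server's CPU load while issuing requests at a legitimate-looking rate.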

HTTP Usage Monitor
Once the server receives a client HTTP request, the Usage Monitor module measures the amount of CPU time consumed to process the request before the response is sent back to the client. The module makes use of Python libraries and standard Linux utilities to track the consumption of each client request. The collected data is then aggregated per client over configurable time intervals before being logged to the system. If more than one connection from the same client is observed in the given time interval, only the sum (aggregated metric) of all its processing times is logged. This allows the creation of the time series required for the next processing level.
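A minimal sketch of the per-client, per-interval aggregation, assuming `(timestamp, client_id, cpu_time)` samples as input (the sample format and function name are hypothetical):

```python
from collections import defaultdict

def aggregate(samples, interval=1.0):
    """Group (timestamp, client_id, cpu_time) samples into fixed-size
    time intervals, summing the CPU time consumed per client."""
    buckets = defaultdict(lambda: defaultdict(float))
    for ts, client, cpu in samples:
        buckets[int(ts // interval)][client] += cpu
    return dict(buckets)
```

Each interval's per-client totals become one observation of the time series consumed by the Entropy Modeler.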

Entropy Modeler
This module gathers the time series logged by the HTTP Usage Monitor and computes the entropy of the CPU time usage of the different requests performed by each client.With the resultant normalized entropy, the module forecasts the next h observations for the given time series, in conformance with the ARIMA model.The predicted values are taken to estimate the forecasting upper and lower thresholds.Whenever the resultant entropy falls outside the prediction intervals, a Traffic Anomaly alert is reported to the auto-scaling engine of the corresponding Cloud platform (i.e., Openstack Heat).

Client-Side Component
On a separate server, several clients have been implemented as Python multi-threaded scripts for HTTP traffic generation, which is sent to the web service hosted in the Openstack virtual machine instance. The number of generated traffic requests is a discrete variable that follows a Poisson distribution, since its resemblance to this distribution is widely assumed by the research community [54]. It is modeled according to the traffic load requirements of each evaluation scenario. Every client is represented by a process thread, which allows modeling multiple parallel clients handling their own sets of requests independently of the others. When normal network conditions are modeled, all the clients send HTTP GET requests to the lower CPU-consuming endpoints (1-3) described in Table 1. When an attacker is modeled, it only calls the most complex endpoint (4), which has higher CPU demands at the server side. Note that GET requests can also accept the client ID as a parameter. This facilitates the implementation of different client connections originating from the same computer, since all the thread-based clients share the same source IP address, but are differentiated by client ID.
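Since the Python standard library lacks a Poisson sampler, the request counts of such a traffic generator can be sketched with Knuth's multiplication method; the per-second granularity and all names are assumptions:

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson-distributed request count via Knuth's
    multiplication method (adequate for small lambda)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def request_counts(lam, seconds, seed=42):
    """Number of requests issued by one client in each second."""
    rng = random.Random(seed)
    return [poisson_sample(lam, rng) for _ in range(seconds)]
```

Here `lam` plays the role of the expected rate (ERS, i.e., λ) from Table 2, shared by normal and compromised clients so that the attack stays rate-indistinguishable.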

Test Scenarios
Five main scenarios have been showcased to validate the proposal. All of them compare the entropy levels of the CPU processing times under normal traffic conditions against the entropy measured when an EDoS attack is launched. These attacks aim to produce CPU overhead. Therefore, the attack decreases the server's capacity to handle more connections, and it forces the decision to scale up the current virtual machine instance when the CPU usage rises above a pre-defined CPU limit in the Cloud-platform auto-scaling engine. The set of network traffic conditions described in Table 2 is assumed throughout the experiments. There, clients (C) generate the total number of web requests (TR) at the expected rate (ERS). It is worth remarking that ERS corresponds to the expected number of occurrences (λ) of the Poisson distribution. Therefore, the generated web requests represent the sample of connections to be analyzed. The MTR observation number (5000) is the frontier that divides the TR into two groups of 5000 client requests each. The first one operates under the normal traffic conditions described in Table 2, whereas a percentage of the second group contains the malicious requests, letting the remaining connections operate under the normal conditions. For instance, in the second group a 5% malicious request rate indicates that 250 malicious requests and 4750 normal requests were observed. Table 3 defines the evaluation scenarios (E1 to E5) considered to deploy the EDoS attacks. The experiments performed for each scenario started their execution with the normal web traffic conditions (first group of connections), with all the participant clients requesting endpoints 1-3, as explained before. However, at the time specified by the MTR connection, the attack was launched. It compromised several normal clients (C), which sent malicious requests to endpoint 4, thus increasing the CPU overhead. It is important to remark that the attackers connect to the server at the same rate (ERS) configured for normal clients, making them unnoticeable, since their connection rate resembled legitimate traffic; however, they targeted the most time-consuming endpoint, which was exposed as a service vulnerability. To validate the proposal, a Cloud auto-scaling policy was considered, configured to launch a new virtual machine instance when the CPU consumption rose above 40% over a one-minute interval.
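The split between normal and malicious requests described above can be sketched as follows (the function name is hypothetical):

```python
def scenario_split(total_requests=10_000, mtr=5_000, malicious_rate=0.05):
    """Split the request stream as in the evaluation scenarios: the
    first group (before MTR) is fully normal; in the second group,
    a given percentage of the requests is malicious."""
    second_group = total_requests - mtr
    malicious = int(second_group * malicious_rate)
    return {"normal_first": mtr,
            "malicious": malicious,
            "normal_second": second_group - malicious}
```

With the defaults this reproduces the 5% example from the text: 250 malicious and 4750 normal requests in the second group.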

Results
The experiments were performed with the parametrization presented in Table 3, adapted to each evaluation scenario. The first monitored metric was the CPU time consumed to process the web requests launched by the clients. A summary of the CPU consumption of the server, measured over one-second intervals, is depicted in Figure 4. There, in all scenarios, the first half of the client connections exposed the same behavior, until the attack was triggered (MTR). From that moment on, the CPU overhead was influenced by the attack traffic volume described in Table 3. Bearing in mind the defined auto-scaling policy, it is noted that scenarios E3, E4 and E5 would have automatically launched a new virtual machine instance if the presence of the attack had gone unnoticed, hence demonstrating the consequences of EDoS threats and giving the attack detection strategy an essential role to play. On the other hand, besides the CPU estimation, the entropy of the per-client processing time was constantly measured by the Entropy Modeler over one-second intervals, as plotted in Figure 5. The graph shows that the overall behavior of the entropy was contrary to that noticed in the CPU overhead, with the higher entropy values before the MTR observation. The slump in the entropy level was slightly noticeable in scenario E1 (Figure 5a), but became much more perceptible in scenarios E2 to E5 (Figure 5b to Figure 5e). Thereby, this pattern was directly influenced by the presence of the compromised devices, the entropy decreasing as more malicious requests were generated. Once the entropy was measured for the observed time, the Entropy Modeler estimated the prediction thresholds to infer whether the observed entropy was running outside the predicted intervals, thus leading to the decision of triggering an alert if an EDoS attack was detected. The precision observed in the Receiver Operating Characteristic (ROC) space is summarized in Figure 6. There, five curves are illustrated, each one associated with one of the aforementioned evaluation scenarios (E1, E2, E3, E4, E5). Table 4 compiles several evaluation metrics (True Positive Rate (TPR), False Positive Rate (FPR) and Area Under the Curve (AUC)) and the best calibrations (K) to reach the highest accuracy. Bearing in mind these results, it is possible to deduce that the proposed method proves more effective when the attack originates from a larger number of compromised nodes (e.g., E5, with 20% of the total number of connected clients). This is because a greater number of instances of the random variable X represent similar probabilities, which leads to a more significant decrease in the entropy H(X), and therefore to less concordance with the normal observations. On the other hand, labeling errors occurred mainly due to the issuing of false positives, in situations where fluctuations of H(X) derived from changes in the behavior of legitimate clients acquired a relevance similar to that inferred by malicious activities. Note that the larger the number of compromised nodes taking part in the attacks, the greater the possibility of forcing auto-scaling reactions. Based on this fact, it is possible to state that the proposed method improves its detection capabilities when facing more harmful threats. In addition, the existence of the calibration parameter K allows operators to easily configure the level of restriction at which the system operates: when greater discretion is required, K must adopt higher values. This considerably reduces the likelihood of issuing false alerts, hence helping to minimize the cost of the countermeasures to be applied. In the opposite case, when the monitoring environments require greater protection, it is advisable to decrease K, hence improving the possibility of detecting threats, but potentially leading to the deployment of more unnecessary countermeasures.
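The TPR and FPR compiled in Table 4 follow the usual definitions, which can be sketched as follows (the binary labeling convention, 1 for attack observations and 0 for normal ones, is assumed):

```python
def rates(y_true, y_pred):
    """True and false positive rates from binary labels
    (1 = attack observation, 0 = normal observation)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    pos = sum(y_true)
    neg = len(y_true) - pos
    tpr = tp / pos if pos else 0.0  # detected attacks / all attacks
    fpr = fp / neg if neg else 0.0  # false alerts / all normal
    return tpr, fpr
```

Sweeping the calibration parameter K and recording one (FPR, TPR) pair per value is what traces each of the five ROC curves in Figure 6.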

Conclusions
In this paper, an entropy-based model for the detection of EDoS attacks in Cloud environments has been introduced. For this purpose, a comprehensive review of EDoS-related research was conducted in order to elaborate a multi-layered architecture that tackles the detection of EDoS attacks. The proposed approach achieved good detection accuracy, thus preventing the unnecessary hiring of additional Cloud resources that auto-scaling policies would otherwise have issued in response to fraudulent demands.
The experiments conducted to validate the proposed architecture encompassed all of its defined stages: the monitoring and aggregation of metrics that directly affect the Cloud computing cost model, the novelty detection procedures that recognize an EDoS attack, and the decision-making and response actions to be applied in the system. The experimental testbed implemented a client-server REST architecture executed under different network scenarios. On the web server, the monitored per-client CPU times were evaluated by analyzing their entropy levels, which exposed a decrement when malicious requests originated by the compromised nodes were processed at the server side. In such scenarios, the entropy behaved inversely proportional to the consumed CPU. In addition, the detection method also demonstrated its effectiveness when predicting the entropy thresholds against which the measured entropy is compared; this approach proved highly accurate, as quantified by the area under the ROC curve. It is also worth highlighting the enhancement of the proposed model over other resource-consuming approaches presented in the literature, such as the requesting of large files, database queries, or the exploitation of other web vulnerabilities, since this architecture relies on server-side consumption rather than on anomalous network-level metric patterns.
The presented approach, evaluation methodology and the experiments conducted throughout this work also pose new potential research lines. The experimental scenarios should be extended to cover more diverse network conditions, both to strengthen the validation and to disclose possible evasion techniques. The defined model for measuring resource consumption and diagnosing its entropy can be accommodated to include additional metrics, thus extending its scope to wider analysis scenarios. Furthermore, it might be fitted to enhance adaptive auto-scaling policies on Cloud platforms by incorporating more complex evaluation criteria. Finally, the existing decision-making processes and countermeasures against EDoS attacks remain far from mature, and their evolution could effectively complement the conducted research.

Table 1. HTTP GET endpoints and CPU average cost.

Table 2. Normal traffic conditions for experiments.

Table 3. Network attack conditions and scenarios.

Table 4. Summary of results in ROC space.