A Quantitative Risk Assessment Model Involving Frequency and Threat Degree under Line-of-Business Services for Infrastructure of Emerging Sensor Networks

The prospect of Line-of-Business Services (LoBSs) for the infrastructure of Emerging Sensor Networks (ESNs) is exciting. Access control remains a top challenge in this scenario, as the service provider's server contains many valuable resources. LoBSs' users are highly diverse, coming from a wide range of locations with vastly different characteristics. The cost of joining is low, and in many cases intruders are eligible users conducting malicious actions. As a result, user access should be adjusted dynamically, so assessing LoBSs' risk dynamically based on both the frequency and threat degree of malicious operations is necessary. In this paper, we propose a Quantitative Risk Assessment Model (QRAM) involving frequency and threat degree based on value at risk (VaR). To quantify the threat degree as the elementary intrusion effort, we amend the influence coefficient of the risk indexes in the network security situation assessment model. To quantify the threat frequency as the intrusion trace effort, we make use of multiple behavior information fusion. Under the influence of the intrusion trace, we adapt the historical simulation method of value at risk to assess LoBSs' risk dynamically. Simulation based on existing data is used to select appropriate parameters for QRAM. Our simulation results show that the influence of duration on the elementary intrusion effort is reasonable when the normalization parameter is 1000. Likewise, the time window of the intrusion trace and the weight between objective risk and subjective risk can be set to 10 s and 0.5, respectively. While our focus is to develop QRAM for dynamically assessing the risk of LoBSs for the infrastructure of ESNs involving frequency and threat degree, we believe it is also appropriate for other scenarios in cloud computing.


• The efforts of both elementary intrusions and intrusion traces are quantified based on the evaluation of the security situation of networked systems.

• The subjective risk is determined based on the Shannon entropy of experts' scoring.

• QRAM involving frequency and threat degree is proposed to quantify LoBSs' risk based on VaR.
The rest of this paper is organized as follows: Section 2 discusses related works, followed by preliminaries in Section 3. The intrusion effort involving frequency and threat degree is assessed in Section 4. The quantitative risk assessment model involving frequency and threat degree is proposed in Section 5. Section 6 shows the simulation test and discussion, and Section 7 concludes the paper with a summary and some future research directions.

Related Works
In this section, we review related works on user behavior analysis and prediction, user behavior analysis and trust management, and user behavior risk assessment and trust management.

User Behavior Analysis and Prediction
Based on the analysis of prevalent network intrusions using multiple behavior information fusion, a new model of security threat evaluation [21] was presented with a set of quantitative indexes. In order to defend against application-layer distributed denial-of-service attacks, an anomaly detection scheme based on web user access behavior [24] was proposed, in which a web user's browsing behavior is observed at a web server by a hidden semi-Markov model. Concerning cloud computing, Tian et al. [25] mainly researched the importance of the evaluation strategy and user behavior trust, including the basic idea of evaluating user behavior trust, principles for evaluating user behavior trust, evaluation strategies of behavior trust for each access, trust object analysis and long access. In terms of incomplete-information multi-stage dynamic games, Chen et al. [26] proposed a behavior analysis model, in which both false negative and false positive factors are considered in network detection methods, and both current and historical actions improve the comprehensiveness and accuracy of the dynamic judgment of end-user trustworthiness. Chen et al. [27] investigated the characteristics of cloud computing requests received by cloud infrastructure operators, thoroughly studying the cluster usage datasets released by Google; this research addresses the self-similarity and non-stationary characteristics of the workload profile in a cloud computing system. Ashwini et al. [28] discussed user browsing behavior and interest using web mining technology on web log data.

User Behavior Analysis and Trust Management
A trust quantification algorithm [29] was presented based on grey fuzzy theory, and a new trust-based dynamic access control model [29] was proposed, which used the arcsine function to construct an algorithm that maps trust values to access permissions for effective access control. With the idea of trust levels for identity management, Parikshit et al. [30] proposed a fuzzy approach to trust-based access control, which used the linguistic information of devices to describe access control in the Internet of Things. Jaiganesh et al. [31] proposed a fuzzy logic technique called fuzzy ART, in which the consumption of resources is periodically scanned and virtual machine states are classified into categories from stable to attacker based on the traced-out behaviors. Kambiz and Mehdi [32] used not only a trust manager component but also machine learning so that the system could learn from users' behavior and recognize access patterns, which not only limited illegitimate access, but also predicted and prevented potential malicious events and questionable accesses.

User Behavior Risk Assessment and Trust Management
Zhang et al. [33] proposed a trust model based on behavior risk evaluation, which established a set of feature-matching rules based on asset identification, vulnerability identification and threat identification for the system, constructed a complex weighting function to compute the potential risk implied in the behaviors of entities, and designed a risk-based trust computation method. Xu and Dou [34] proposed a risk evaluation model based on asset evaluation, vulnerability evaluation and threat evaluation by identifying and quantifying the risk factors, in which the value, vulnerability and threat of an asset were combined to compute the system risk. Considering that the risk of a system is influenced by the behavior of external entities, they also presented a risk computation method merging the behavior trust of external entities, using the quantitative calculation of the information entropy weight of each factor to overcome the subjectivity of direct assignment. Jing et al. [14] proposed a user behavior assessment based dynamic access control model by introducing the user behavior risk value, user trust degree and other factors into role-based access control.
Three aspects of user behavior in network systems are reviewed above: user behavior analysis and prediction, user behavior analysis and trust management, and user behavior risk assessment and trust management. Under different conditions, user behavior has different characteristics. However, the frequency and threat degree of user behavior in cloud services are not involved in these works, and none of them addresses user behavior under LoBSs.

Preliminaries
In this section, we introduce the notations and related technologies deployed in our scheme.

Definitions
Definition 1. Elementary intrusion: any alert that is evoked by intrusion behavior and reported by an intrusion detection system, described by A[aid] = {src, dst, sp, dp, t, type, sensor, sig}, whose elements are described in Table 1 [21]. Table 1. Definitions of elements for elementary intrusion, intrusion trace, network packet and network session [21].

Shannon Entropy
Shannon entropy [22] is the expected value (average) of the information contained in each message. The entropy H(X) [38] of a discrete random variable X with n possible values {x_1, ..., x_n} and probability mass function Pr(X) is defined as:

H(X) = E[I(X)] = −Σ_{i=1}^{n} Pr(x_i) log_b Pr(x_i)

where I is the information content of X [39,40], I(X) is itself a random variable, and b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10; the unit of entropy is the shannon for b = 2, the nat for b = e, and the hartley for b = 10 [37]. When b = 2, the units of entropy are also commonly referred to as bits. Shannon entropy is characterized by a small number of criteria; any definition of entropy satisfying these assumptions has the form [41]:

H = −K Σ_{i=1}^{n} p_i log p_i

where K is a constant corresponding to a choice of measurement units, and p_i = Pr(X = x_i) is the probability of x_i.
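As a minimal illustration, the definition above can be sketched in Python; the distributions below are hypothetical, and the function directly implements H(X) = −Σ p_i log_b p_i:

```python
import math

def shannon_entropy(probs, base=2):
    """H(X) = -sum(p_i * log_b(p_i)) over the outcomes with p_i > 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin carries exactly 1 shannon (bit) of entropy for b = 2,
# and a certain outcome carries none.
coin = shannon_entropy([0.5, 0.5])   # 1.0
certain = shannon_entropy([1.0])     # 0.0
```

Terms with p_i = 0 are skipped, following the usual convention 0·log 0 = 0.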

Historical Simulation Method of Value at Risk
As a measure of the risk of investments, VaR assesses how much a group of investments might lose [42]. Suppose a confidence level (Definition 8) α ∈ (0, 1); the VaR of a portfolio at the confidence level α is given by the smallest number l such that the probability Pr(L > l) that the loss L exceeds l is at most (1 − α) [43]. Mathematically, if L is the loss of the portfolio, then VaR_α(L) [44] is the α-quantile level:

VaR_α(L) = inf{ l ∈ ℝ : Pr(L > l) ≤ 1 − α }

The keys to calculating VaR include the speculation of future changes in market factors and the relationship between portfolio value and market factors (linearity, non-linearity). The fundamental computation methods for speculating on future changes in market factors include the historical simulation method, the parametric method and the Monte Carlo method [45].
As a nonparametric method, the core of the historical simulation method is to simulate the future income distribution of the portfolio based on historical sample changes, and then use the quantile to calculate the VaR estimate under a certain confidence. The method calculates the full value of the portfolio rather than a local approximation of a small change in price. At the same time, this method avoids simulation risk by using real data; it does not need to make specific assumptions on the distribution, nor does it need to estimate parameters, so it can deal with asymmetric and fat tail problems. Because the historical data reflect the simultaneous changes of all risk factors in the market, the problems of volatility, correlation and fat tails, which often need to be considered separately, are reflected in the real historical data.
The general method makes no assumption about the shape of the distribution of returns. Define W_0 as the initial investment and R as its rate of return, which is random. Assuming that the position is fixed, or that there is no trading, the portfolio value at the end of the target horizon is W = W_0(1 + R) [46]. The expected return and volatility of R are defined as µ and σ. Define now the lowest portfolio value at the given confidence level c as W* = W_0(1 + R*) [46]. VaR measures the worst loss at some confidence level, so it is expressed as a positive number. The relative VaR (mean) [46] is defined as the dollar loss relative to the mean on the horizon:

VaR(mean) = E(W) − W* = −W_0(R* − µ)

Often, trading VaR is defined as the absolute VaR [46], that is, the dollar loss relative to zero, without reference to the expected value:

VaR(zero) = W_0 − W* = −W_0·R*

At a given confidence level c, we wish to find the worst possible realization W* such that the probability of exceeding this value is c [46], that is:

c = Pr(W > W*)

Equivalently, the probability of a value lower than W*, p = Pr(W ≤ W*), is 1 − c [46]:

1 − c = Pr(W ≤ W*) = p

The number W* is called the quantile of the distribution, which is the cutoff value with a fixed probability of being exceeded. Note that we did not use the standard deviation to find the VaR. The historical simulation methods have the above advantages, but there are still some limitations. One is that they are entirely dependent on specific historical data; that is, it is assumed that the future will perform the same as the historical data, but in fact some past loss events do not necessarily repeat themselves, and future events may never have occurred in the past. The other is that they are likely to be limited by the amount of data and may not fully reflect the risk of all situations, such as some extremes that are unlikely to happen.
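The historical simulation calculation above can be sketched as follows; the return series is hypothetical, and the empirical quantile is computed by linear interpolation (one of several common conventions):

```python
def historical_var(returns, w0=1.0, confidence=0.95):
    """Historical-simulation VaR: take the empirical (1 - c) quantile of the
    return history as the worst return R*, then report the loss it implies.

    Returns (absolute_var, relative_var): the dollar loss relative to zero,
    W0 - W* = -W0 * R*, and relative to the mean, E(W) - W* = -W0 * (R* - mu).
    """
    r = sorted(returns)
    # Linear-interpolation quantile at probability 1 - confidence.
    pos = (1.0 - confidence) * (len(r) - 1)
    lo, frac = int(pos), pos - int(pos)
    r_star = r[lo] + frac * (r[min(lo + 1, len(r) - 1)] - r[lo])
    mu = sum(returns) / len(returns)
    return -w0 * r_star, w0 * (mu - r_star)

# Hypothetical 5-day return history (illustrative only), 80% confidence.
abs_var, rel_var = historical_var([-0.02, -0.01, 0.0, 0.01, 0.02],
                                  w0=1.0, confidence=0.8)
```

With these symmetric returns the mean is zero, so the absolute and relative VaR coincide; on real data they differ by W_0·µ.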

The Intrusion Effort Involving Frequency and Threat Degree
Before assessing LoBSs' risk, the intrusion efforts of both elementary intrusions and intrusion traces should be calculated. The elementary intrusion effort under the threat degree is quantified based on the network security situation assessment model. The intrusion trace effort under frequency effects is quantified based on the fusion of multiple behavior information.

The Overall Framework to Assess the Intrusion Effort
To study the impact of malicious operations on LoBSs' risk, the intensity of an attack is described by the intrusion effort, which covers both the elementary intrusion and the intrusion trace. Situational awareness [47] is the ability to evaluate, process, and understand the information on the critical elements of what is happening to the team regarding the mission. Security situation assessment [48] is an effective means to quantify network security, which refers to perceiving and obtaining the security-related elements through technical means from the time and space dimensions, determining the security situation through integrated analysis of the data, and forecasting its future trends. Aiming at the deficiency that current security evaluation systems are unable to provide useful security information, the log database of an intrusion detection system is fed into the hierarchical and quantitative model, which is used to evaluate the security situation of a network system, and its corresponding computation methods are proposed based on the importance of the services, the hosts, and the structure of the network system [20].
Based on the thought of everything as a service [49], the resources of hardware, software and data are provided as a service, so the user behavior of LoBSs belongs to a service. Based on the hierarchical and quantitative model [20], the calculation process of the intrusion effort involving frequency and threat degree is proposed as shown in Figure 1.

Step 1 The threat level of a malicious act is graded and quantified by the threat index.
Step 2 Amending the network security situation assessment model, the elementary intrusion threat is calculated.
Step 3 By combining it with the normalized duration, the elementary intrusion effort is calculated.
Step 4 By combining it with the time window frequency, the intrusion trace effort is calculated.

Threat Degree of Elementary Intrusion
LoBSs have the characteristics of openness and sharing [3], so attacks against SPs' servers are becoming more and more common. Any attack is achieved through a series of malicious behaviors, which must pose a risk to the SP's server. The threat degree to LoBSs varies depending on the severity of the attack. In order to quantify LoBSs' risk, attacks should be graded by their harmful level. These attack classifications are listed in Table 2 [19]. It can be seen from Table 2 that attacks are currently graded with four default priorities, namely very low, low, medium, and high, in which a priority of 4 (very low) is the least severe and 1 (high) is the most severe. Ordered by severity from low to high, the most common attacks listed are host discovery, port scanning, privilege escalation, denial of service, and covert scanning [20]. The Snort attack classifications are divided into equidistant divisions, in which the priorities of attacks from low to high are quantized as 0.2, 0.4, 0.6, 0.8 and 1, as shown in Table 3. Assuming that a behavior with an undefined degree of harm is a threat with a very low level, its initial quantization level is 0.2. While quantifying LoBSs' risk, the threat degree should be updated in time based on the detected user behavior.
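The equidistant quantization of Table 3 can be sketched as a simple lookup; the priority labels are taken from the text, and the default of 0.2 for undefined behaviors follows the assumption stated above:

```python
# Equidistant quantization of attack priorities (Table 3): five levels,
# from "unknown" up to "high".
THREAT_DEGREE = {
    "high": 1.0,
    "medium": 0.8,
    "low": 0.6,
    "very low": 0.4,
    "unknown": 0.2,
}

def quantize(priority):
    """Return the quantized threat degree; behaviors with an undefined
    degree of harm default to the initial quantization level 0.2."""
    return THREAT_DEGREE.get(priority, 0.2)
```

The lookup is updated in place when new behaviors are detected and reclassified.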

Elementary Intrusion Effort under Threat Degree
Because the occurrence of malicious acts is random, dynamic and independent of the past, it has the Markov property [50]. The Markov property of malicious acts causes LoBSs' safety to change, so only the current state is involved in calculating the elementary intrusion effort. Since the current state corresponds to a time point, one cannot estimate the effect that the behavior at a single time point has on LoBSs' risk. In order to calculate the elementary intrusion effort, a very short period (such as 1 ms, 10 ms, etc.) is treated as a time point; that is, all the malicious acts in the set period are treated as one elementary intrusion. Based on the theory of integrals, the elementary intrusion effort is assessed over the set period (such as 1 ms, 10 ms, etc.).
The elementary intrusion effort is related to factors such as the threat degree, financial costs, duration, attacker's experience, practicability of the attack tools, attack time, computing ability, and so on [51]. Based on a principal component analysis [52], the factors of threat degree and frequency are mainly considered in this paper.
At the same time, an SP's server may be attacked by attacks of different priorities from different sources. Suppose that the evaluation time is t, the attack's priority is i ∈ {4, 3, 2, 1, 0}, which corresponds to {high, medium, low, very low, unknown} in Table 3, the number of attacks of priority i during evaluation time t is C_i, and the severity of the attacks is W_i ∈ {4, 3, 2, 1, 0}. In the network security situation assessment model, Hu et al. [53] proposed the network security situation threat_i [53] under the attack severity P_i as: According to practical experience, the risk indexes for an event occurring 100 times with severity 1, 10 times with severity 2 and once with severity 3 [20] are equivalent, so the influence coefficient of the risk indexes is 10^i, i ∈ {4, 3, 2, 1, 0}. In order to accurately quantify the impact of the threat degree on the risk index, the attack priority quantization is P_i ∈ {1, 0.8, 0.6, 0.4, 0.2} in Table 3. Chen et al. [20] put forward the threat degree C_i·10^{P_i} when optimizing W_i in [53], which does not reflect the influence coefficient of risk indexes 10^i, i ∈ {4, 3, 2, 1, 0}, so it should be amended to C_i·10^{(i+1)P_i}. Under the condition of C_i·10^{(i+1)P_i} and Equation (8), the network security situation threat_i' under a number C_i of attacks of severity P_i is optimized as: Because an SP's server in LoBSs may be attacked by attacks of different priorities from different sources at the same time, an elementary intrusion may comprise many attacks from different attackers at different threat levels. On the basis of Equation (9) and with reference to the mean intrusion effort approach [21], the threat degree threat of an elementary intrusion is improved as:

Sensors 2017, 17, 642 10 of 28

The network security situation assessment model [21] focuses on a qualitative analysis, which uses a numerical range to illustrate the risk degree and the probability of occurrence of an attack.
In Equation (10), multiple threats are combined using quantitative analysis rather than qualitative descriptions, so the calculated threat degree is more scientific and rigorous.
The dimensionless form of the attack duration last is defined as: where j represents the ratio order in the duration-affecting factor of an attack, and is set by experts.
As the main factors affecting the intrusion effort, the threat degree is independent of the attack duration. When calculating their effect on the elementary intrusion effort, the addition principle in combinatorics is suitable. Integrating Equations (10) and (11), the elementary intrusion effort ElemEffort is proposed as:

ElemEffort = threat + last

where ElemEffort represents the elementary intrusion effort, threat represents the threat degree, and last represents the normalized duration.
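Since Equations (9)-(12) are not reproduced in full in this excerpt, the following Python sketch only assumes their shape from the surrounding text: the amended terms C_i·10^{(i+1)P_i} are averaged over the attack count in the spirit of the mean intrusion effort approach, and the normalized duration is added per the addition principle. The exact published equations may differ.

```python
# Quantized threat degrees P_i for priorities i = 4 (high) ... 0 (unknown), Table 3.
P = {4: 1.0, 3: 0.8, 2: 0.6, 1: 0.4, 0: 0.2}

def threat_degree(counts):
    """Assumed form of Eq. (10): mean of the amended terms C_i * 10**((i+1)*P_i)
    over all attacks in the elementary intrusion, where counts[i] = C_i."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(c * 10 ** ((i + 1) * P[i]) for i, c in counts.items()) / total

def elem_effort(counts, last_norm):
    """Assumed form of Eq. (12), by the addition principle:
    ElemEffort = threat + last (normalized duration)."""
    return threat_degree(counts) + last_norm
```

For example, a single priority-4 (high) attack contributes 10^((4+1)·1) = 10^5 to the threat degree.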

Intrusion Trace Efforts under Frequency Conditions
The intrusion trace consists of a series of mutually independent elementary intrusions in a time window, which have the Markov property [50]. According to the addition principle in combinatorics, the intrusion trace effort Effort is proposed as:

Effort = Σ_{k=1}^{rate} ElemEffort_k

where Effort represents the intrusion trace effort, rate represents the total count of the elementary intrusions, and ElemEffort_k represents the effort of the k-th elementary intrusion. Malicious behavior may cause the Mean Time Between Failures (MTBF) to become shorter [54]. The exponential distribution [55] is used to model the time between the occurrences of events in an interval of time, or the distance between events in space. The exponential distribution has the property of being memoryless [56] and is often used to describe the MTBF distribution of large, complex systems. On the assumption that a potential hacker will eventually succeed in obtaining illegal privileges on an intrusion trace and is willing to expend enough effort to do so, the effort f(Effort) [21] has the nature of a negative exponential distribution described by:

f(Effort) = λ·e^{−λ·Effort}

where Effort represents the intrusion trace effort, λ is the parameter of the negative exponential distribution, which is the success probability assigned to the elementary intrusion, and e is the number 2.71828..., the base of the natural logarithm. By the cumulative distribution function, the probability Pr_i(Effort) [21] that the time between events is less than a specified time Effort is given as:

Pr_i(Effort) = 1 − e^{−λ_i·Effort}

The mean or expected value E_i(Effort) [21] of an exponentially distributed random variable Effort with rate parameter λ_i is given as:

E_i(Effort) = 1/λ_i

In general, the harder it is for the malicious behavior to happen, the lower the probability of a successful invasion is. The probability λ_i [21] of a successful intrusion can be represented by the inverse of the degree of difficulty d_i as:

λ_i = 1/d_i

The degree of difficulty for launching an elementary intrusion is divided into 10 levels, as listed in Table 4 [21].
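Under the definitions above, the trace effort and the exponential success model can be sketched as follows; the summed form of Effort is an assumption based on the addition principle, while the exponential formulas follow the standard distribution facts quoted in the text:

```python
import math

def trace_effort(elem_efforts):
    """Assumed form of Eq. (13), by the addition principle: the trace effort
    is the sum of the elementary intrusion efforts in the time window."""
    return sum(elem_efforts)

def success_probability(effort, difficulty):
    """Pr_i(Effort) = 1 - exp(-lambda_i * Effort), with lambda_i = 1/d_i
    (Eqs. (15) and (17)): harder intrusions succeed with lower probability."""
    lam = 1.0 / difficulty
    return 1.0 - math.exp(-lam * effort)
```

As expected, for the same effort, a difficulty-10 intrusion has a lower success probability than a difficulty-2 one.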

The Algorithms of Intrusion Effort
The activity diagram of intrusion effort is shown as Figure 2.

Step 1 The time window is set, and the duration of the elementary intrusions is obtained one by one according to the final data file in the Handel Data.
Step 2 The harm degree of a malicious act is graded based on the Snort user manual and quantified in equidistant divisions.
Step 3 The elementary intrusion effort under the threat degree is quantified based on the network security situation assessment model, in which the influence coefficient of the risk indexes is amended.
Step 4 Based on the elementary intrusion effort, the intrusion trace effort under frequency is quantified based on multiple behavior information fusion.
The algorithm of the intrusion effort can be described as follows (Algorithm 1). The algorithm for the occurrence number of attacks at each threat level, based on Table 3, can be described as follows (Algorithm 2). The algorithm for the threat degree of an elementary intrusion based on Equation (10)

A Quantitative Risk Assessment Model Involving Frequency and Threat Degree
Deployed on an SP's server over the Internet, services can be accessed by users. Each access event represents a user behavior, which affects LoBSs' risk. User behaviors with different frequencies and threat degrees impact LoBSs differently. An elementary intrusion is any alert that is evoked by intrusion behavior and reported by an intrusion detection system. An intrusion trace is a trace constituted by a series of elementary intrusions through which user privileges have been obtained illegally. On the basis of the intrusion trace effort, QRAM involving frequency and threat degree under LoBSs is proposed based on VaR.

Line-of-Business Services' Risk involving Frequency and Threat Degree
LoBSs' risk is impacted by many factors, so there are many methods for evaluating it. The main methods include the subjective risk evaluation method, based on subjective data supplied by experts' scoring, and the objective risk evaluation method, based on the data detected while running LoBSs. Integrating the advantages of both, a comprehensive risk evaluation method that is more suitable for LoBSs is proposed.

An Objective Risk Evaluation Method
The occurrence of malicious behavior is often driven by interest. The rate of weighted threat in an intrusion trace has different effects on LoBSs' risk. In general, the greater the rate of weighted threat in an intrusion trace, the greater its effect on LoBSs' risk.
The time window frequency (Definition 5) under different threat degrees is defined as:

highp = high/rate
mediump = medium/rate
lowp = low/rate
verylowp = verylow/rate
unknownp = unknown/rate    (18)

where highp, mediump, lowp, verylowp and unknownp separately represent the frequencies of the threat degrees in an intrusion trace, high, medium, low, verylow and unknown separately represent the generation numbers of the threat degrees in an intrusion trace, and rate represents the total generation number of elementary intrusions in an intrusion trace.
Based on Table 3 and Equation (18), the rate of weighted threat in an intrusion trace Tracep (Definition 6) is defined as:

Tracep = 1·highp + 0.8·mediump + 0.6·lowp + 0.4·verylowp + 0.2·unknownp

Corresponding to the degree of difficulty of an elementary intrusion being divided into 10 levels, the objective risk I_o is quantized as 1, 2, ..., 10 by the equidistant division of Tracep as follows:

I_o = ⌈10·Tracep⌉

The integer interval of the objective risk is [1, 10]. The objective risk I_o can be given by the rate of weighted threats in an intrusion trace. For example, suppose that the rate of weighted threat in the j-th intrusion trace Tracep_j is 0.75; then the objective risk I_oj is 8.
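A sketch of the objective risk computation, assuming Tracep is the threat-weighted mean of the Equation (18) frequencies and that the equidistant division is I_o = ⌈10·Tracep⌉ (which reproduces the Tracep_j = 0.75 → I_oj = 8 example):

```python
import math

# Quantized threat degrees (Table 3) used to weight the frequencies of Eq. (18).
WEIGHTS = {"high": 1.0, "medium": 0.8, "low": 0.6, "verylow": 0.4, "unknown": 0.2}

def objective_risk(counts):
    """Objective risk I_o on the integer interval [1, 10], from the rate of
    weighted threat Tracep (assumed: equidistant division I_o = ceil(10*Tracep))."""
    rate = sum(counts.values())
    tracep = sum(WEIGHTS[level] * c for level, c in counts.items()) / rate
    return max(1, math.ceil(10 * tracep))
```

A trace consisting only of high-threat intrusions maps to the maximum risk 10; one of only unknown-threat intrusions maps to 2.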

A Subjective Risk Evaluation Method
A subjective risk evaluation is calculated from experts' scores. In order to ensure consistency with the objective risk, the interval of the experts' scores is limited to [1, 10]. Supposing that the experts' score set is U = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, and there are m experts scoring the n elementary intrusions of an intrusion trace, then the expert score matrix for this intrusion trace is described as: where score_ij represents the score of the i-th expert for the j-th elementary intrusion of an intrusion trace. It can be seen from Equation (20) that the score set of the m experts for the j-th elementary intrusion of an intrusion trace is score_j = {score_1j, score_2j, ..., score_mj}. The expert score S_j of the j-th elementary intrusion of an intrusion trace is averaged over the m experts' scores as:

S_j = (1/m) Σ_{i=1}^{m} score_ij

Since the interval of score_ij (1 ≤ i ≤ m, 1 ≤ j ≤ n) is within [1, 10], it is normalized as:

p_ij = score_ij / Σ_{i=1}^{m} score_ij

The expert scoring matrix for the intrusion trace is transformed as: Since an intrusion trace comprises many elementary intrusions which are evaluated by experts, the subjective risk based on the Shannon entropy [37] of the expert score matrix H_j is proposed as: The subjective risk I_sj of the j-th intrusion trace is calculated as:
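Since Equations (20)-(25) are not reproduced in full in this excerpt, the following sketch only assumes a plausible shape consistent with the description: the m experts' scores for one trace are normalized to a distribution, the normalized Shannon entropy measures how much the experts agree, and the mean score is scaled by it so that strong disagreement weakens the subjective risk. The paper's actual combination may differ.

```python
import math

def subjective_risk(scores):
    """Entropy-based subjective risk for one intrusion trace (a sketch under
    the assumptions stated in the lead-in; not the paper's exact Eq. (25)).

    scores[i] is expert i's score in [1, 10]."""
    m = len(scores)
    total = sum(scores)
    mean_score = total / m
    if m < 2:
        return mean_score
    p = [s / total for s in scores]                     # normalization p_i
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)   # Shannon entropy (nats)
    h_norm = h / math.log(m)                            # in [0, 1]; 1 = full agreement
    return h_norm * mean_score
```

When all experts agree exactly, the normalized entropy is 1 and the subjective risk equals the mean score; disagreement pulls it down.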

A Comprehensive Risk Evaluation Method
Objective risk evaluation methods are susceptible to bias in the sample data. Subjective risk evaluation methods are susceptible to the experts' subjectivity. Integrating the advantages of subjective risk and objective risk, a comprehensive risk evaluation method is proposed, based on the intrusion trace probability and the proportion between subjective risk and objective risk, which is evaluated by experts. If the interval of the subjective risk p_1j, p_2j, ..., p_mj fluctuates violently, the experts' evaluations differ seriously. The effectiveness of the subjective risk is then weak, and the proportion of subjective risk in the comprehensive risk should be reduced. On the contrary, the proportion should be increased.
Supposing that the weight of the subjective risk of the j-th intrusion trace in the comprehensive risk is W_sj, the weight of the objective risk W_oj is calculated as:

W_oj = 1 − W_sj

The comprehensive risk I_cj of the j-th intrusion trace is calculated as:

I_cj = W_sj·I_sj + W_oj·I_oj

The comprehensive risk is limited to [1, 10], and is normalized as:

r_j = I_cj / 10

The change rate Q_j of the risk affecting function is defined as the probability Pr_j (Equation (15)) of the intrusion trace multiplied by the normalization r_j of the comprehensive risk, that is:

Q_j = Pr_j · r_j
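The comprehensive risk combination can be sketched directly; the weighted sum and the normalization by 10 are assumptions consistent with the definitions above:

```python
def comprehensive_risk(i_s, i_o, w_s):
    """Assumed combination: I_c = W_s * I_s + (1 - W_s) * I_o, where the
    objective weight is W_o = 1 - W_s."""
    return w_s * i_s + (1.0 - w_s) * i_o

def change_rate(pr, i_c):
    """Q_j: the intrusion trace probability Pr_j times the normalized
    comprehensive risk r_j = I_c / 10 (assumed normalization of [1, 10])."""
    return pr * (i_c / 10.0)
```

With the weight 0.5 selected in the simulation section, subjective and objective risk contribute equally.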

A Quantitative Risk Assessment Model
Just as financial assets or a portfolio may lose value due to market fluctuations, users' behavior may lead to risk for LoBSs. Based on VaR [18], which is commonly used in financial risk assessment, QRAM involving frequency and threat degree is proposed to quantify LoBSs' risk.
The keys to calculating VaR include the forecast of future market changes and the relationship between the portfolio and the market (linearity, non-linearity). The fundamental calculation methods for forecasting future market changes include the historical simulation method, the parametric method and the Monte Carlo method [45], which can be compared as follows. The implicit assumptions of the parametric method are a normal distribution and the invariance of volatility and correlation, but when the number of assets in the portfolio is large, it is difficult to estimate the variances and covariances [45].
Based on stochastic simulations, the Monte Carlo method has many shortcomings, such as the choice of models, the quality of the random numbers, reliance on a particular stochastic process, etc. [45].
The core of the historical simulation method is to simulate the future income distribution of the portfolio based on historical sample changes, and then use the quantile to calculate the VaR under a certain degree of confidence [45]. The historical simulation method calculates the total value of the portfolio, rather than a local approximation of small changes in price. At the same time, the historical simulation method avoids simulation risk by using real data; it does not need to make specific assumptions on the distribution, nor does it need to estimate parameters, so it can deal with asymmetric and fat tail problems. In addition, as the historical data reflect the simultaneous changes of all risk factors in the market, the problems of volatility, relevance and fat tails can be reflected by the data.
Based on a comprehensive analysis of the three methods, in this work LoBSs' risk is quantified by the historical simulation method of VaR. Supposing that the initial risk of LoBSs is R_0, and the change rate of the risk affecting function is Q_j (Equation (29)), the risk R after the j-th intrusion trace happens is calculated as:

R = R_0(1 − Q_j)

Supposing that the confidence level (Definition 8) is c, the highest risk is R* = R_0(1 − Q_j*), the expectation of Q_j (Equation (29)) is u_j, and the expectation of R is E(R), then LoBSs' VaR VaR_R based on Equation (4) is proposed as:

VaR_R = R* − E(R) = R_0(u_j − Q_j*)

In other words, calculating the VaR is equivalent to calculating the maximum risk R* or the minimum change rate of the risk affecting function Q* of LoBSs. The probability density function f(R) of LoBSs' risk change can be calculated based on R = R_0(1 − Q_j). Based on Equation (6), the maximum risk R* for LoBSs at a certain confidence level c (Definition 8) is defined as:

1 − c = Pr(R ≥ R*) = ∫_{R*}^{+∞} f(R) dR

The Algorithm to Assess Line-of-Business Services' Risk
The activity diagram of LoBSs' risk assessment is shown in Figure 3.

The Algorithm to Assess Line-of-Business Services' Risk
Step 1 The parameters of confidence degree, initial risk, operational difficulty degree, and number of experts are initialized.
Step 2 The objective risk is calculated according to the intrusion trace effort.
Step 3 The subjective risk is calculated according to the Shannon entropy of experts' scores.
Step 4 The comprehensive risk is obtained by combining the objective risk and the subjective risk.
Step 5 The rate of risk impact is calculated by combining the comprehensive risk with the probability of intrusion trace.
Step 6 LoBSs' risk is calculated by the historical simulation method of VaR.
The algorithm to assess LoBSs' risk can be described as follows (Algorithm 5):
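The six steps above can be condensed into a single Python routine. This is only a flow sketch: the paper's equations are not reproduced here, so every formula below is a simplified stand-in, and all inputs (trace weights, expert scores, trace probability) are hypothetical.

```python
import math

def assess_lobs_risk(trace, scores, r0=100.0, ratio=0.5, p_trace=0.9):
    """Sketch of the six-step assessment loop (Algorithm 5).
    trace: list of (weight, threat_degree) pairs for one intrusion trace.
    scores: experts' scores in [0, 1]. All formulas are simplified stand-ins."""
    # Step 2: objective risk from the intrusion trace effort
    # (here: a normalized weighted mean of threat degrees).
    objective = sum(w * t for w, t in trace) / sum(w for w, _ in trace)
    # Step 3: subjective risk from the Shannon entropy of experts' scores.
    total = sum(scores)
    probs = [s / total for s in scores]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    subjective = (total / len(scores)) * (entropy / math.log(len(scores)))
    # Step 4: comprehensive risk mixes objective and subjective risk.
    comprehensive = ratio * objective + (1 - ratio) * subjective
    # Step 5: rate of risk impact combines it with the trace probability.
    q = p_trace * comprehensive
    # Step 6: risk update in the style of R = R0 (1 - Q); the sign
    # convention of Q follows the paper's risk-affecting function.
    return r0 * (1 - q)

risk = assess_lobs_risk([(1.0, 0.8), (2.0, 0.5)], [0.6, 0.7, 0.8])
```

The routine is stateless on purpose; in the activity diagram of Figure 3 the same steps would run once per detected intrusion trace.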

Simulation Test and Discussion
In order to test QRAM, a prototype is designed based on the Unified Modeling Language and implemented in Java. Verifying QRAM would ideally require data from an SP's server, which we could not obtain. In the SP server of LoBSs, services are centrally provided to users by the multi-tenant model and used by tenants' users over the Internet, so users are outside the tenant entity's management domain when using an SP's service. The behavior characteristics of users are similar to those in a traditional network system, so it is suitable to simulate LoBSs using a traditional information system for testing. The test data comes from the simple data of the Windows NT attack data set (Sim-Data-NT) in the 2000 Defense Advanced Research Projects Agency (DARPA) intrusion detection evaluation data set of the Massachusetts Institute of Technology Lincoln Laboratory [57].

Simulation Data
There are 358 elementary intrusion data in Sim-Data-NT [57], whose threat levels include six high, 46 medium, and 306 unknown, with hardly any low and very low. Because there is hardly any data at the low and very low threat levels, the set is not consistent with reality and should be optimized. Supposing that SMTP packets are altered by an icmp-event attack and FTP packets are altered by a tcp-connection attack, according to the Snort default classification [19], there are 358 elementary intrusion data in the optimized Sim-Data-NT, whose threat levels include six high, 46 medium, 24 low, 18 very low, and 264 unknown. According to Table 3, the threat degree of the test data in the optimized Sim-Data-NT is quantified.
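The reported composition of the optimized set can be checked arithmetically, and the equidistant quantization of threat degree can be illustrated as follows. Since Table 3 is not reproduced here, the numeric threat-degree values are hypothetical and only show the idea of equidistant divisions.

```python
# Composition of the optimized Sim-Data-NT as reported in the text.
optimized = {"high": 6, "medium": 46, "low": 24, "very low": 18, "unknown": 264}
assert sum(optimized.values()) == 358  # matches the 358 elementary intrusions

# Hypothetical equidistant quantization of threat degree (Table 3 is not
# reproduced here; these values only illustrate equidistant divisions).
levels = ["very low", "low", "medium", "high"]
threat_degree = {lvl: (i + 1) / len(levels) for i, lvl in enumerate(levels)}
# e.g. "very low" -> 0.25, "low" -> 0.5, "medium" -> 0.75, "high" -> 1.0
```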
Because the information in the optimized Sim-Data-NT is imperfect and cannot determine the intrusion traces constituted by the elementary intrusions, it is assumed that the elementary intrusions within a time window constitute an intrusion trace. By testing the intrusion trace effort for different time windows (such as 1 s, 10 s, 20 s, etc.), suitable maximum and minimum time windows can be obtained. An intrusion trace is then simulated by randomly splitting the time windows between the maximum and the minimum.
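Under this assumption, forming intrusion traces is a matter of bucketing elementary intrusions by time window. A minimal sketch (event tuples and threat values are hypothetical):

```python
def split_into_traces(events, window):
    """Group elementary intrusions (time_s, threat_degree) into intrusion
    traces: all events falling in the same `window`-second slot form one
    trace, mirroring the assumption made for the optimized Sim-Data-NT."""
    traces = {}
    for t, threat in events:
        traces.setdefault(int(t // window), []).append(threat)
    return [traces[k] for k in sorted(traces)]

events = [(0.5, 0.2), (3.0, 0.8), (12.0, 0.5), (19.0, 1.0), (25.0, 0.4)]
# With a 10 s window the slots are [0,10), [10,20), [20,30):
print(split_into_traces(events, 10))  # [[0.2, 0.8], [0.5, 1.0], [0.4]]
```

Shrinking the window toward 1 s degenerates every trace to a single elementary intrusion, which is exactly the effect discussed for Figure 7 below.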
In order to facilitate post-processing, a three-digit coding system is adopted; for example, the number 1 is coded as 001. The source data, including the parameters of attack time, duration, and type, are selected as seen in Figures 4 and 5. By the Snort default classification [19], the threat degree of attack is quantified as shown in Table 5.

Testing and Results
Based on the optimized Sim-Data-NT, the simulation test items by our prototype system include: (1) elementary intrusion effort; (2) intrusion trace effort; (3) objective risk; (4) subjective risk; (5) comprehensive risk; (6) LoBSs' Quantitative Risk. The simulation testing and results are as follows:


Elementary Intrusion Effort: Suppose the parameter j of Equation (11) is assigned the values 2, 3, and 4 respectively; then the relationship between elementary intrusion effort and duration based on Equation (12) is as shown in Figure 6. It can be seen from Figure 6 that:



•
The curves of elementary intrusion effort deviate greatly between j = 2 and no duration; that is, the duration influences the elementary intrusion effort too much.

•
The curves of elementary intrusion effort almost coincide between j = 4 and no duration; that is, the duration influences the elementary intrusion effort next to nothing.

•
The curves of elementary intrusion effort are almost synchronized between j = 3 and no duration; that is, the duration moderately strengthens the elementary intrusion effort.
It can be concluded that j = 3 is suitable for the experiment because it takes into account the effects of both threat degree and duration, that is, the duration influence on the elementary intrusion effort is reasonable when the normalized parameter is 1000.

•
Intrusion Trace Effort: Suppose that the time window is assigned the values 1 s, 5 s, 10 s, 15 s, 20 s, 25 s, and 30 s respectively; then the relationship between intrusion trace effort and time window based on Equation (13) is shown in Figure 7.


In Figure 7, the abscissa represents time. The principal (left) ordinate shows the intrusion trace effort in units of 5, and the auxiliary (right) ordinate in units of 0.2; only the curve for the 1 s time window refers to the auxiliary right ordinate, while the others refer to the principal left ordinate.
It can be seen from Figure 7 that:

•
When the time window is 1 s, an intrusion trace only includes one elementary intrusion, that is, the intrusion trace degenerates to an elementary intrusion, and the intrusion trace effort fluctuates with high frequency.

•
When the time window is 5 s, the intrusion trace effort, like that of an elementary intrusion, fluctuates with high frequency.

•
When the time window is 30 s, the curve of intrusion trace effort is level and smooth, and many malicious attacks are smoothed over and therefore skipped.

•
When the time window is 10 s, the tendency of the intrusion trace effort coincides with that of the elementary intrusion effort.
It can be concluded that 10 s is a suitable time window for intrusion traces, as it avoids both the curve fluctuations of a small time window and the curve smoothing of a large one.

•


Objective Risk: The objective risk is calculated by the rate of weighted threat in an intrusion trace, and the relationship between intrusion trace effort and objective risk is shown in Figure 8. It can be seen from Figure 8 that the tendencies of intrusion trace effort and objective risk coincide in the overwhelming majority of cases. Only in a few individual time intervals is the objective risk inactive, since the intrusion trace effort fluctuates little there; it is therefore practical to estimate the objective risk by the rate of weighted threat in the intrusion trace.
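A minimal sketch of this estimate, assuming the rate of weighted threat is a normalized weighted mean (the paper's exact weighting is not reproduced here, so both the function and the sample trace are illustrative):

```python
def objective_risk(trace):
    """Objective risk as the rate of weighted threat in an intrusion trace.
    `trace` is a list of (weight, threat_degree) pairs; a normalized
    weighted mean is used as a plausible stand-in for the paper's formula."""
    total = sum(w for w, _ in trace)
    return sum(w * t for w, t in trace) / total if total else 0.0

trace = [(1.0, 0.75), (2.0, 0.5), (1.0, 1.0)]
print(objective_risk(trace))  # 0.6875
```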

•
Subjective Risk: The subjective risk is calculated by Shannon entropy based on the experts' scoring matrix; the relationship between intrusion trace effort and subjective risk is shown in Figure 9.


It can be seen from Figure 9 that the tendencies of intrusion trace effort and subjective risk coincide in the overwhelming majority of cases. Only in a few individual time intervals is the subjective risk inactive, since the intrusion trace effort fluctuates little there; it is therefore practical to estimate the subjective risk by Shannon entropy based on the experts' scoring matrix.
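One common way to realize an entropy-based estimate from a scoring matrix is the entropy-weight scheme sketched below. It only approximates the paper's construction, since the scoring matrix and the exact equations are not reproduced here; the sample scores are hypothetical.

```python
import math

def subjective_risk(score_matrix):
    """Subjective risk from an experts' scoring matrix via Shannon entropy.
    Rows are experts, columns are risk indexes. The entropy-weight scheme
    below is a common construction, used here as a stand-in."""
    n_experts = len(score_matrix)
    n_idx = len(score_matrix[0])
    weights = []
    for j in range(n_idx):
        col = [row[j] for row in score_matrix]
        s = sum(col)
        probs = [v / s for v in col]
        # Normalized Shannon entropy of the column (1 = scores fully spread).
        h = -sum(p * math.log(p) for p in probs if p > 0) / math.log(n_experts)
        weights.append(1 - h)  # lower entropy -> more informative index
    wsum = sum(weights) or 1.0
    weights = [w / wsum for w in weights]
    means = [sum(row[j] for row in score_matrix) / n_experts for j in range(n_idx)]
    return sum(w * m for w, m in zip(weights, means))

scores = [[0.6, 0.9], [0.7, 0.2], [0.8, 0.9]]
risk = subjective_risk(scores)
```

Indexes on which the experts disagree strongly (low column entropy) receive larger weight, which is the usual rationale for entropy weighting.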

•
Comprehensive Risk: The comprehensive risks under different ratios between objective risk and subjective risk are shown in Figure 10.
It can be seen from Figure 10 that when the ratio of objective risk is 0.5, the tendency of the comprehensive risk coincides with that of the intrusion trace effort, so it is practical to adopt a comprehensive risk combining objective risk and subjective risk with a ratio of 0.5.
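The combination itself is a convex mix of the two risks; a one-line sketch with the ratio the simulation recommends (the function name and inputs are illustrative):

```python
def comprehensive_risk(objective, subjective, ratio=0.5):
    """Comprehensive risk as a convex combination of objective and
    subjective risk; the simulation in this work suggests ratio = 0.5."""
    return ratio * objective + (1 - ratio) * subjective

mixed = comprehensive_risk(0.6875, 0.6681)  # equal weight to both parts
```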


•
Under the condition that the confidence level is 95%, the number of experts is 5, and the attack difficulty degree is 0.1, the relationship between initial risk and LoBSs' quantitative risk based on QRAM is shown in Figure 11. It can be seen from Figure 11 that there is a linearly-increasing relation between initial risk and LoBSs' quantitative risk based on QRAM.


Under the condition that the confidence level is 95%, the number of experts is 5, and the initial risk is 100, the relationship between attack difficulty degree and LoBSs' quantitative risk based on QRAM is shown in Figure 12.
It can be seen from Figure 12 that LoBSs' quantitative risk based on QRAM hardly changes when the attack difficulty degree is less than 0.3, and it can be fitted by the quadratic polynomial y = 0.0127x^2 + 0.018x + 45.108.


Under the condition that the number of experts is 5, the initial risk is 100, and the attack difficulty degree is 0.1, the relationship between confidence level and LoBSs' quantitative risk based on QRAM is shown in Figure 13.


It can be seen from Figure 13 that there is a linearly-decreasing relation between confidence level and LoBSs' quantitative risk based on QRAM. The lower the confidence level, the rougher LoBSs' quantitative risk; the higher the confidence level, the more accurate it is.


Under the condition that the number of experts is 5, the initial risk is 100, the attack difficulty degree is 0.1, and the confidence level is 95%, the relationship between time window and LoBSs' quantitative risk based on QRAM is shown in Figure 14. It can be seen from Figure 14 that LoBSs' quantitative risk based on QRAM hardly changes when the time window is larger than 15 s, and it can be fitted by the cubic polynomial y = −0.0146x^3 + 0.5356x^2 − 4.4012x + 51.543. The study selects only the data of elementary intrusions, rather than intrusion traces, from the optimized Sim-Data-NT; the intrusion traces are defined by the elementary intrusions within a splitting time window. The splitting time window influences the constitution of the intrusion traces, and thus their effort, but hardly affects LoBSs' quantitative risk based on QRAM.


Conclusions
As one of cloud computing's service models, SaaS is one of the development directions of software delivery. As one of SaaS's service types, LoBSs are often large, customizable business solutions offered to enterprises and organizations and aimed at facilitating business processes. A lot of valuable resources are accumulated on the SP's server, so access permission is a top priority in LoBSs' security and one of the biggest challenges to LoBSs. LoBSs' users are very diverse, as they may come from a wide range of locations with vastly different characteristics. The cost of joining could be low, and in many cases the intruders are eligible users conducting malicious actions. In order to dynamically adjust user access, LoBSs' risk must be dynamically assessed. Both the frequency and the threat degree of malicious operations have an important effect on LoBSs' risk: the higher the frequency of malicious operations, the higher the risk of LoBSs; the larger the threat degree of malicious actions, the greater the risk of LoBSs. In order to quantify LoBSs' risk more precisely, the impact of both the frequency and the threat degree of user behavior must be considered.
Based on VaR, QRAM involving frequency and threat degree is proposed under LoBSs for infrastructure of ESNs. The degree of harm of a malicious act is graded based on the Snort user manual [19] and quantified in equidistant divisions. The elementary intrusion effort under threat degree is quantified based on a network security situation assessment model [20], in which the influence coefficient of risk indexes is amended. The intrusion trace effort under frequency is quantified based on multiple behavior information fusion [21]. The objective risk of LoBSs is quantified based on the rate of weighted threat in intrusion traces. The subjective risk of LoBSs is quantified based on the Shannon entropy [38] of experts' scores. The comprehensive risk of LoBSs is quantified based on both the intrusion trace probability and the proportion between subjective risk and objective risk. Under the influence of intrusion traces, LoBSs' risk is assessed dynamically by the historical simulation method of VaR.
In order to perform a simulation test, a prototype is designed based on the Unified Modeling Language and implemented in Java. Simulation testing by the prototype on the optimized Sim-Data-NT shows that the duration influence on elementary intrusion effort is reasonable when the normalized parameter is 1000; that 10 s is a suitable time window for intrusion traces; and that the comprehensive risk is correctly reflected when the weight ratio between objective risk and subjective risk is 0.5. After the change tendency of LoBSs' quantitative risk is tested under varying confidence level, number of experts, attack difficulty degree, initial risk, and time window, LoBSs' risk can be assessed dynamically by QRAM involving frequency and threat degree.
QRAM involving frequency and threat degree focuses on LoBSs for infrastructure of ESNs and may generalize to other cloud computing scenarios. However, there are further factors, such as financial cost, the attacker's experience, the practicability of attack tools, and computing ability, which may influence LoBSs' risk and should be involved in any evaluation.