Article

The Expected Utility Insurance Premium Principle with Fourth-Order Statistics: Does It Make a Difference?

by Alessandro Mazzoccoli 1 and Maurizio Naldi 1,2,*
1 Department of Civil Engineering and Computer Science, University of Rome Tor Vergata, 00133 Rome, Italy
2 Department of Law, Economics, Politics and Modern Languages, LUMSA University, 00192 Rome, Italy
* Author to whom correspondence should be addressed.
Algorithms 2020, 13(5), 116; https://doi.org/10.3390/a13050116
Submission received: 7 March 2020 / Revised: 17 April 2020 / Accepted: 26 April 2020 / Published: 6 May 2020

Abstract: The expected utility principle is often used to compute the insurance premium through a second-order approximation of the expected value of the utility of losses. We investigate the impact of using a more accurate approximation based on the fourth-order statistics of the expected loss and derive the premium under this expectedly more accurate approximation. The comparison between the two approximation levels shows that the second-order-based premium is always lower (i.e., an underestimate of the correct one) for the commonest loss distributions encountered in insurance. The comparison is also carried out for real cases, considering the loss parameter values estimated in the literature. The increased risk of the insurer is assessed through the Value-at-Risk.

1. Introduction

Insurance has long been considered a major pillar of risk management [1,2]. It allows the transfer of risks to the insurer against the payment of a fee (the premium).
Besides its traditional domains (e.g., car and life insurance), insurance has spread to many new application contexts, e.g., communications [3,4], cloud services [5,6], critical infrastructures [7], and cyber-security [8,9,10,11,12,13].
A critical element in the application of insurance mechanisms to new contexts, where the statistics of claims may not be as well established, is how to set the insurance premium that customers are asked to pay. The expected utility paradigm has been shown to be a powerful approach to premium pricing [14,15], with the so-called non-expected utility approach being rather its generalization [16]. Though the expected utility in its original formulation provides an upper bound for the insured, i.e., the maximum premium the insured should pay, we consider it here as the actual price set by the insurer, assuming that the insurer wishes to set the premium as high as possible. If that is not the case, what follows applies anyway to the maximum premium. However, this approach requires knowledge of the expected value of the utility, i.e., of a generally nonlinear function of the random loss, which may not be exactly known or easy to compute [17]. That is the case when asymmetric information is present [18], or when we do not know enough about the probability distribution of losses, as in cyber-insurance. The textbook treatment, as in [14], is to approximate the expected utility through a function of the first- and second-order moments of the loss (which we refer to as the second-order approximation in the following). This may be too harsh an approximation when we think of the consequences for the insurer: setting the price too high will divert potential insureds from subscribing the policy, while the opposite mistake may lead to huge losses for the insurer.
In this paper, we wish to investigate the consequences of the second-order approximation. In particular, we compare it with a fourth-order approximation, which is based on the loss statistics up to the fourth order. Intuitively, we expect the fourth-order approximation to be closer to what the full knowledge of the loss distribution would tell us. However, computing the amount of premium misestimation when we stop at the second-order approximation matters: if the difference between the premiums returned by the two methods is low, estimating fourth-order statistics is not worthwhile. We build on a previous paper of ours, where the fourth-order approximation was computed for loss occurrences following a Poisson model [19]. Here, we make the following original contributions:
  • the pricing formula is derived under a fourth-order approximation for several choices of the risk-aversion coefficient (Section 2);
  • by comparing the premiums under the two approximation levels, we derive the conditions for the second-order approximation to lead to premium underestimation, which is the most dangerous case for the insurer (Section 3);
  • the second-order approximation is shown to lead to premium underestimation for the commonest loss distributions employed in insurance (Section 3.1);
  • an estimate is provided for the impact of an imperfect knowledge of the fourth-order loss statistics (Section 3.2);
  • the differences between second- and fourth-order approximations are analyzed for realistic values of the loss distribution parameters, as extracted from the literature (Section 3.3);
  • the risk of premium underestimation is assessed for the insurer by using the Value-at-Risk metrics (Section 4).

2. The Expected Utility Principle for Premium Computation

In this paper, we adopt the expected utility principle to compute the insurance premium. The principle is well rooted in the literature. In this section, starting from its general definition, as described in [14], we derive the premium. We largely follow the derivation reported in [6,19]. The list of symbols used throughout the paper is shown in Table 1 for convenience.
Under the expected utility principle, the relevance of the loss suffered by the insured is evaluated through the utility function u: \mathbb{R} \to \mathbb{R}, which is assumed to be a monotone non-decreasing function. In the absence of any event, the utility of the insured depends just on its assets w, so that it is u(w). In the absence of an insurance policy, the occurrence of a damaging event would provoke a monetary loss X, which would bring its utility down to u(w - X). Here, we do not make any assumption about the nature of the events leading to the loss, since what follows is derived under very general assumptions. An example of application of the expected utility principle to cloud storage is reported in [6] (where the loss is the compensation provided by the cloud service provider to its customers when service quality falls below what is stated in the Service Level Agreement). In Section 3.3, the pricing formulas are instantiated for some contexts. If the insured buys an insurance policy (assuming that the event falls fully under the coverage umbrella of the policy), it pays a (fixed) premium P, so that its utility decreases to the fixed quantity u(w - P). The insured can then compare the two alternatives: buying an insurance policy and ending up with utility u(w - P), or suffering the (random) monetary loss X and ending up with utility u(w - X).
A crucial element in deciding whether to buy an insurance policy is then the premium P. Under the expected utility principle, the fair premium is defined as that for which the two alternatives are utility-equivalent (on average), i.e., that resulting from the following equilibrium equation:
E[u(w - X)] = u(w - P)
We can solve that equation for P through an approximation provided by the Taylor series expansion of both sides, centered at w - E[X]:
u(w - P) \simeq u(w - E[X]) + u'(w - E[X])(E[X] - P)
u(w - X) \simeq u(w - E[X]) + u'(w - E[X])(E[X] - X) + \frac{1}{2} u''(w - E[X])(E[X] - X)^2 + \frac{1}{6} u'''(w - E[X])(E[X] - X)^3 + \frac{1}{24} u^{(4)}(w - E[X])(E[X] - X)^4 .
Note that, contrary to the standard treatment reported in [14] and applied, e.g., in [6], where the expansion stops at the second order, here we go through the fourth-order term, aiming at a more accurate, though more complex, premium computation. When we replace those approximate expressions into Equation (1), omitting the argument w - E[X] of the utility function for the sake of simplicity, we obtain
(E[X] - P)\, u' = \frac{1}{2} u'' V[X] - \frac{1}{6} u''' S[X] V^{3/2}[X] + \frac{1}{24} u^{(4)} K[X] V^2[X],
where we have introduced the third- and fourth-order statistics (skewness and kurtosis)
S[X] = \frac{E[(X - E[X])^3]}{V^{3/2}[X]} \qquad K[X] = \frac{E[(X - E[X])^4]}{V^2[X]}
Equation (3) can be solved for the premium (which we label P_4 := P, to make it clear that we stop at the fourth-order term of the Taylor expansion):
P_4 = E[X] - \frac{1}{2}\frac{u''}{u'} V[X] + \frac{1}{6}\frac{u'''}{u'} S[X] V^{3/2}[X] - \frac{1}{24}\frac{u^{(4)}}{u'} K[X] V^2[X].
This is to be compared with the standard second-order approximation, which would give us
P_2 = E[X] - \frac{1}{2}\frac{u''}{u'} V[X].
However, both expressions depend on the choice of the utility function.
We can now introduce the Arrow–Pratt measure of Absolute Risk Aversion (ARA) [20]:
A(x) := -\frac{u''(x)}{u'(x)},
where the normalization by u'(x) makes that measure of risk aversion independent of the unit of measurement adopted, so that A(x) is a dimensionless quantity (hence the absolute qualification). The second-order premium is then
P_2 = E[X] + \frac{1}{2} A(w - E[X])\, V[X].
A popular choice for the utility function is the exponential form
u(x) = 1 - e^{-\alpha x},
which results in the risk-aversion measure being constant, A(x) \equiv \alpha, where \alpha is called the risk-aversion coefficient. The exponential function is the only one possessing this Constant Absolute Risk Aversion (CARA) property. Due to the CARA property, the exponential utility function has been extensively employed in the literature; see, e.g., [8,10,21,22]. In addition to the immediate simplification of the second-order premium, we can also recognize that the ratios of derivatives of the utility function involved in the fourth-order premium become
\frac{u''}{u'} = \frac{-\alpha^2 e^{-\alpha x}}{\alpha e^{-\alpha x}} = -\alpha \qquad \frac{u'''}{u'} = \frac{\alpha^3 e^{-\alpha x}}{\alpha e^{-\alpha x}} = \alpha^2 \qquad \frac{u^{(4)}}{u'} = \frac{-\alpha^4 e^{-\alpha x}}{\alpha e^{-\alpha x}} = -\alpha^3,
so that the premium can then be rewritten as
P_4 = E[X] + \frac{1}{2}\alpha V[X] + \frac{1}{6}\alpha^2 S[X] V^{3/2}[X] + \frac{1}{24}\alpha^3 K[X] V^2[X]
for the fourth-order case and
P_2 = E[X] + \frac{1}{2}\alpha V[X]
for the second-order case.
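As a quick numeric illustration, Equations (11) and (12) can be evaluated directly once the four loss statistics are available. The sketch below uses purely illustrative values for \alpha and for the moments of X (they are not taken from the paper):

```python
def premiums(alpha, mean, var, skew, kurt):
    """Second- and fourth-order premiums under exponential (CARA) utility.

    `skew` and `kurt` are the skewness S[X] and kurtosis K[X] of the loss X.
    """
    p2 = mean + 0.5 * alpha * var
    p4 = (p2
          + alpha**2 / 6.0 * skew * var**1.5
          + alpha**3 / 24.0 * kurt * var**2)
    return p2, p4

# Hypothetical loss statistics: mean 100, variance 400,
# positive skewness and kurtosis, as for the distributions of Section 3.1.
p2, p4 = premiums(alpha=0.05, mean=100.0, var=400.0, skew=2.0, kurt=9.0)
print(p2, p4)  # P4 > P2 whenever skewness and kurtosis are positive
```

With positive skewness and kurtosis, the two higher-order terms can only push the premium upward, which anticipates the comparison of Section 3.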
As to the proper value to assign to the risk-aversion coefficient, it should be chosen to reflect the individual sensitivity towards risk: the higher \alpha, the more importance is attributed to risk. Some proposals have appeared in the literature to assign sensible values to \alpha. Böhme and Schwartz have considered the range of values \alpha \in [0.5, 4] [23]. Raskin et al. [24] and Thomas [25] have considered several alternatives for the risk-aversion coefficient. Pitacco et al. [26] proposed to set it as the inverse of the expected loss:
\alpha_{Pit} = \frac{1}{E[X]}.
Babcock et al. proposed to set it as proportional to the inverse of the expected loss through the following formula [27]:
\alpha_{Bab} = \frac{\ln\frac{1+2\eta}{1-2\eta}}{E[X]},
where \eta \in [0, 0.5] is the probability premium, i.e., the event probability in excess of 0.5 at which an individual gets the same utility as in the status quo (i.e., it sees its utility unchanged by the event). As can be seen, Equation (14) is a perturbation of Equation (13) through a logarithmic function of the probability premium.
In Figure 1, we can see that the perturbation factor \ln\frac{1+2\eta}{1-2\eta} is lower than 1 for \eta < \frac{1}{2}\frac{e-1}{e+1} \simeq 0.231 but can be significantly larger than 1 for higher probability premium values. We therefore have \alpha_{Bab} < \alpha_{Pit} for probability premiums below that threshold and \alpha_{Bab} > \alpha_{Pit} above it.
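The threshold for the perturbation factor can be checked numerically. This small sketch just evaluates the logarithmic factor of Equation (14); the values of \eta used below are illustrative:

```python
import math

def bab_factor(eta):
    """Perturbation factor ln((1+2*eta)/(1-2*eta)) multiplying 1/E[X]."""
    return math.log((1 + 2 * eta) / (1 - 2 * eta))

threshold = (math.e - 1) / (2 * (math.e + 1))  # about 0.231
print(bab_factor(threshold))                   # exactly 1 at the threshold
print(bab_factor(0.1) < 1 < bab_factor(0.4))   # True: below vs above threshold
```

At the threshold the argument of the logarithm equals e, so the factor is exactly 1, confirming the value 0.231 quoted above.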
The resulting premiums P 2 and P 4 using α Pit and α Bab for the specific risk-aversion coefficient are respectively
P_2^{Pit} = E[X] + \frac{1}{2}\frac{V[X]}{E[X]} \qquad P_2^{Bab} = E[X] + \frac{1}{2}\ln\frac{1+2\eta}{1-2\eta}\,\frac{V[X]}{E[X]}
P_4^{Pit} = E[X] + \frac{1}{2}\frac{V[X]}{E[X]} + \frac{1}{6}\frac{S[X] V^{3/2}[X]}{E^2[X]} + \frac{1}{24}\frac{K[X] V^2[X]}{E^3[X]} \qquad P_4^{Bab} = E[X] + \frac{1}{2}\ln\frac{1+2\eta}{1-2\eta}\,\frac{V[X]}{E[X]} + \frac{1}{6}\ln^2\frac{1+2\eta}{1-2\eta}\,\frac{S[X] V^{3/2}[X]}{E^2[X]} + \frac{1}{24}\ln^3\frac{1+2\eta}{1-2\eta}\,\frac{K[X] V^2[X]}{E^3[X]}

3. Comparison of Premiums

In Section 2, we have derived the premium under two different approximation levels. We wish to see if the more accurate fourth-order approximation entails a higher premium, i.e., if the use of the usual second-order approximation results in an underestimated premium, which may be dangerous for the insurer. Though overestimation has its drawbacks as well, since it may result in a reduced number of insurance subscriptions, we take here the viewpoint of the insurer, for which overestimation is the lesser problem. In the following, we therefore limit ourselves to considering underestimation. In this section, we compare the two expressions and look for the conditions under which underestimation occurs. Throughout this section, the number n of claims will be considered to be a fixed quantity; this limitation will be removed in Section 4.
From the two expressions (11) and (12), we can form their ratio
R = \frac{P_4}{P_2} = \frac{E[X] + \frac{1}{2}\alpha V[X] + \frac{1}{6}\alpha^2 S[X] V^{3/2}[X] + \frac{1}{24}\alpha^3 K[X] V^2[X]}{E[X] + \frac{1}{2}\alpha V[X]} = 1 + \frac{1}{12}\,\frac{4\alpha^2 S[X] V^{3/2}[X] + \alpha^3 K[X] V^2[X]}{2 E[X] + \alpha V[X]}
We consider the case when setting the premium through the usual second-order approximation leads to underestimating it, since underestimation is the more dangerous error of the two, leading to the possibility that premiums do not cover losses. Going back to Equation (17), we see that we have underestimation when R > 1. Therefore, whether the fourth-order premium is larger than its second-order counterpart depends just on the sign and relative values of the skewness and kurtosis of the loss. In Table 2, we report the general conditions for underestimation of the premium when we stop at the second-order approximation. In two out of four cases (same signs for skewness and kurtosis), the conclusions are general. In the other two cases, the possibility of underestimation depends on the specific distribution of losses. Since we must have R > 1 for underestimation, the underestimation condition can be formulated as
4\alpha^2 S[X] V^{3/2}[X] + \alpha^3 K[X] V^2[X] > 0 \iff S[X] > -\frac{1}{4}\alpha K[X] V^{1/2}[X]
Since the overall loss is determined by the accrual of losses over the single events, we can derive a condition on the statistics of any single event by recalling that
X = \sum_{i=1}^{n} L_i,
where L_i is the loss suffered during the i-th event and n is the overall number of events (and therefore claims), with L_1, \ldots, L_n being i.i.d. random variables.
Due to the i.i.d. nature of the L i variables, the skewness and the kurtosis of the random variable X may be derived from that of the loss in any single event (see [28]):
S[X] = \frac{1}{\sqrt{n}} S[L_i] \qquad K[X] = \frac{1}{n} K[L_i].
The underestimation condition can then be reformulated as follows, by replacing Equation (20) in Equation (18):
\frac{1}{\sqrt{n}} S[L_i] > -\frac{1}{4n}\alpha K[L_i]\sqrt{n V[L_i]} \iff S[L_i] > -\frac{1}{4}\alpha K[L_i] V^{1/2}[L_i]
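The per-event condition above is straightforward to evaluate once the single-event statistics are estimated. A minimal sketch, with hypothetical per-event values:

```python
def underestimated(alpha, skew_l, kurt_l, var_l):
    """Check the per-event underestimation condition
    S[L_i] > -(1/4) * alpha * K[L_i] * V^(1/2)[L_i]."""
    return skew_l > -0.25 * alpha * kurt_l * var_l ** 0.5

# With positive skewness (kurtosis is always positive), the condition
# holds whatever the value of alpha:
print(underestimated(alpha=0.05, skew_l=1.5, kurt_l=6.0, var_l=100.0))   # True
# A sufficiently negative skewness can violate it:
print(underestimated(alpha=0.05, skew_l=-2.0, kurt_l=6.0, var_l=100.0))  # False
```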

3.1. Premium Underestimation for Major Distributions

It is interesting to analyze whether the underestimation condition, as ascertained through Table 2 or Equation (18), is met for the commonest distributions of losses. In particular, we consider the following distributions: Generalized Pareto (GPD), Lognormal, Gamma, and Pareto. The main features of those distributions are shown in Table 3, while their skewness and kurtosis are reported in Table 4 [29,30,31].
We now examine the underestimation condition for each distribution.
For the Generalized Pareto distribution, we see from Table 4 that both the skewness and the kurtosis are positive under the same conditions for the existence of finite skewness and kurtosis (the shape parameter must be lower than 1/3 and 1/4 respectively), so that we fall into the underestimation condition of Table 2.
Similarly, for the lognormal distribution, we have positive skewness and kurtosis. In particular, the kurtosis is always larger than 3. The second-order premium is an underestimate again.
For the Gamma distribution, the positivity of the shape parameter a implies that both the skewness and the kurtosis are positive, ending up again under the underestimation case.
Finally, for the Pareto distribution, the conditions for the existence of finite skewness and kurtosis guarantee the positivity of both moments and therefore underestimation.
We can then conclude that, for all four of these common distributions, stopping at the second order in the premium computation leads us to underestimate the actual premium.

3.2. Premium Uncertainty

So far, we have assumed that the insurer can compute the premium if it knows the distribution of losses. However, such knowledge is typically approximate, since the insurer will typically estimate the parameters of the loss model based on observations. Consequently, the premium computed according to Equation (11) or (12) will be just an approximation of the correct value. It is important to quantify the uncertainty associated with the premium, since that is an additional source of error, which may even mask the error due to a lower-order approximation. In this section, we compute the uncertainty in the premium due to an approximate knowledge of the loss distribution parameters.
Since the computation of the premium is carried out through a complex function, we resort to a Taylor series approximation of that function by considering it as a function of the loss distribution parameters. The expansion is performed around the expected values of those parameters. All the distributions considered so far rely on two parameters, so that we can write the Taylor series expansion T(\cdot) of the premium by adopting the following generic notation:
T(x, y) = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\frac{(x - x_0)^n (y - y_0)^m}{n!\, m!}\,\frac{\partial^{n+m} f(x_0, y_0)}{\partial x^n \partial y^m}
where x and y are the estimates of the two parameters, and x 0 and y 0 are their expected values. If we stop at the first order, we get
T(x, y) \simeq f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)
In the following, the parameters of the distributions of interest will be considered to be random variables, so as to describe their uncertainty; the premium (from now on we use the generic symbol P) is then a random variable as well. To gauge the uncertainty in the premium, we will compute its variance. As a simplifying assumption, we consider the estimates of the two parameters as independent of each other, though they will typically be estimated on the same sample of observations.
The general expression for the variance of P is then
V[P] \simeq V\left[P(x_0, y_0) + P_x(x_0, y_0)(x - x_0) + P_y(x_0, y_0)(y - y_0)\right] = P_x^2(x_0, y_0)\, V[x] + P_y^2(x_0, y_0)\, V[y] + 2 P_x(x_0, y_0) P_y(x_0, y_0)\,\mathrm{Cov}[x, y] = P_x^2(x_0, y_0)\, V[x] + P_y^2(x_0, y_0)\, V[y],
since the covariance term vanishes under the independence assumption.
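The first-order (delta-method) variance propagation above can be checked against a Monte Carlo simulation. The sketch below does so with numerical partial derivatives, using the gamma-case second-order premium as an illustrative f(x, y) and hypothetical parameter values:

```python
import math
import random

def delta_var(P, x0, y0, var_x, var_y, h=1e-5):
    """First-order (delta-method) variance of P(x, y) around (x0, y0),
    with independent parameter estimates; central-difference derivatives."""
    dPdx = (P(x0 + h, y0) - P(x0 - h, y0)) / (2 * h)
    dPdy = (P(x0, y0 + h) - P(x0, y0 - h)) / (2 * h)
    return dPdx**2 * var_x + dPdy**2 * var_y

n, alpha = 10, 0.5
P2 = lambda a, b: n * (a / b + 0.5 * alpha * a / b**2)  # gamma-case P2

a0, b0 = 2.0, 1.0
var_a, var_b = (0.01 * a0)**2, (0.01 * b0)**2  # 1% standard deviation

analytic = delta_var(P2, a0, b0, var_a, var_b)

# Monte Carlo check with Gaussian parameter estimates
random.seed(1)
samples = [P2(random.gauss(a0, 0.01 * a0), random.gauss(b0, 0.01 * b0))
           for _ in range(200_000)]
m = sum(samples) / len(samples)
mc = sum((s - m)**2 for s in samples) / len(samples)
print(analytic, mc)  # the two estimates agree closely
```

The Gaussian model for the estimates and the 1% standard deviation are assumptions for the sake of the check; the close agreement simply illustrates that the first-order expansion is adequate for small parameter uncertainties.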
We can now compute the variance of the premium for the loss distributions we have reported in Table 3.
After recalling the premium expressions for the Pareto distribution
P_2 = n\left[\frac{a h}{a-1} + \frac{\alpha}{2}\frac{a h^2}{(a-1)^2 (a-2)}\right]
P_4 = n\left[\frac{a h}{a-1} + \frac{\alpha}{2}\frac{a h^2}{(a-1)^2 (a-2)} + \frac{\alpha^2}{3}\frac{a (a+1) h^3}{(a-1)^3 (a-2)(a-3)} + \frac{\alpha^3}{8}\frac{a (3a^2 + a + 2) h^4}{(a-1)^4 (a-2)(a-3)(a-4)}\right]
we can get the variances, where \bar{a} and \bar{h} denote the expected values of the two parameters:
V[P_2] = n^2\left[\frac{\bar{a}}{\bar{a}-1} + \alpha\frac{\bar{a}\bar{h}}{(\bar{a}-1)^2(\bar{a}-2)}\right]^2 V[h] + n^2\left[\frac{\bar{h}}{(\bar{a}-1)^2} + \alpha\frac{(\bar{a}^2-\bar{a}-1)\bar{h}^2}{(\bar{a}-1)^3(\bar{a}-2)^2}\right]^2 V[a]
V[P_4] = n^2\left[\frac{\bar{a}}{\bar{a}-1} + \alpha\frac{\bar{a}\bar{h}}{(\bar{a}-1)^2(\bar{a}-2)} + \alpha^2\frac{\bar{a}(\bar{a}+1)\bar{h}^2}{(\bar{a}-1)^3(\bar{a}-2)(\bar{a}-3)} + \frac{\alpha^3}{2}\frac{\bar{a}(3\bar{a}^2+\bar{a}+2)\bar{h}^3}{(\bar{a}-1)^4(\bar{a}-2)(\bar{a}-3)(\bar{a}-4)}\right]^2 V[h] + n^2\left[\frac{\bar{h}}{(\bar{a}-1)^2} + \alpha\frac{(\bar{a}^2-\bar{a}-1)\bar{h}^2}{(\bar{a}-1)^3(\bar{a}-2)^2} + \alpha^2\frac{(\bar{a}^4-2\bar{a}^3-5\bar{a}^2+8\bar{a}+2)\bar{h}^3}{(\bar{a}-1)^4(\bar{a}-2)^2(\bar{a}-3)^2} + \frac{\alpha^3}{2}\frac{(3\bar{a}^6-19\bar{a}^5+26\bar{a}^4+17\bar{a}^3-3\bar{a}^2-48\bar{a}-12)\bar{h}^4}{(\bar{a}-1)^5(\bar{a}-2)^2(\bar{a}-3)^2(\bar{a}-4)^2}\right]^2 V[a].
Similarly, for the Generalized Pareto distribution we get
P_2 = n\left[\frac{\beta}{1-\xi} + \frac{\alpha}{2}\frac{\beta^2}{(1-\xi)^2(1-2\xi)}\right]
P_4 = n\left[\frac{\beta}{1-\xi} + \frac{\alpha}{2}\frac{\beta^2}{(1-\xi)^2(1-2\xi)} + \frac{\alpha^2}{3}\frac{(1+\xi)\beta^3}{(1-\xi)^3(1-2\xi)(1-3\xi)} + \frac{\alpha^3}{8}\frac{(2\xi^2+\xi+3)\beta^4}{(1-\xi)^4(1-2\xi)(1-3\xi)(1-4\xi)}\right]
and consequently, we get the variances
V[P_2] = n^2\left[\frac{1}{1-\bar{\xi}} + \alpha\frac{\bar{\beta}}{(1-\bar{\xi})^2(1-2\bar{\xi})}\right]^2 V[\beta] + n^2\left[\frac{\bar{\beta}}{(1-\bar{\xi})^2} - \alpha\frac{(3\bar{\xi}-2)\bar{\beta}^2}{(1-\bar{\xi})^3(1-2\bar{\xi})^2}\right]^2 V[\xi]
V[P_4] = n^2\left[\frac{1}{1-\bar{\xi}} + \alpha\frac{\bar{\beta}}{(1-\bar{\xi})^2(1-2\bar{\xi})} + \alpha^2\frac{(1+\bar{\xi})\bar{\beta}^2}{(1-\bar{\xi})^3(1-2\bar{\xi})(1-3\bar{\xi})} + \frac{\alpha^3}{2}\frac{(2\bar{\xi}^2+\bar{\xi}+3)\bar{\beta}^3}{(1-\bar{\xi})^4(1-2\bar{\xi})(1-3\bar{\xi})(1-4\bar{\xi})}\right]^2 V[\beta] + n^2\left[\frac{\bar{\beta}}{(1-\bar{\xi})^2} - \alpha\frac{(3\bar{\xi}-2)\bar{\beta}^2}{(1-\bar{\xi})^3(1-2\bar{\xi})^2} + \alpha^2\frac{(8\bar{\xi}^3+3\bar{\xi}^2-10\bar{\xi}+3)\bar{\beta}^3}{(1-\bar{\xi})^4(1-2\bar{\xi})^2(1-3\bar{\xi})^2} - \frac{\alpha^3}{2}\frac{(60\bar{\xi}^5-28\bar{\xi}^4+95\bar{\xi}^3-152\bar{\xi}^2+71\bar{\xi}-10)\bar{\beta}^4}{(1-\bar{\xi})^5(1-2\bar{\xi})^2(1-3\bar{\xi})^2(1-4\bar{\xi})^2}\right]^2 V[\xi].
Under the gamma distribution, we have instead:
P_2 = n\left[\frac{a}{b} + \frac{\alpha}{2}\frac{a}{b^2}\right]
P_4 = n\left[\frac{a}{b} + \frac{\alpha}{2}\frac{a}{b^2} + \frac{\alpha^2}{3}\frac{a}{b^3} + \frac{\alpha^3}{8}\frac{a(a+2)}{b^4}\right]
so that the variances are
V[P_2] = n^2\left[\frac{1}{\bar{b}} + \frac{\alpha}{2}\frac{1}{\bar{b}^2}\right]^2 V[a] + n^2\left[\frac{\bar{a}}{\bar{b}^2} + \alpha\frac{\bar{a}}{\bar{b}^3}\right]^2 V[b]
V[P_4] = n^2\left[\frac{1}{\bar{b}} + \frac{\alpha}{2}\frac{1}{\bar{b}^2} + \frac{\alpha^2}{3}\frac{1}{\bar{b}^3} + \frac{\alpha^3}{4}\frac{\bar{a}+1}{\bar{b}^4}\right]^2 V[a] + n^2\left[\frac{\bar{a}}{\bar{b}^2} + \alpha\frac{\bar{a}}{\bar{b}^3} + \alpha^2\frac{\bar{a}}{\bar{b}^4} + \frac{\alpha^3}{2}\frac{\bar{a}(\bar{a}+2)}{\bar{b}^5}\right]^2 V[b].
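Being polynomial in the parameters, the gamma-case variance formulas above are the easiest to instantiate numerically. The sketch below plugs in hypothetical values of n, \alpha, \bar{a}, \bar{b} and of the estimator variances (1% standard deviation, as in Section 3.3):

```python
n, alpha = 10, 0.5
a, b = 2.0, 1.0                        # expected values of the gamma parameters
var_a, var_b = (0.01 * a)**2, (0.01 * b)**2  # 1% standard deviation (assumed)

v_p2 = (n**2 * (1/b + alpha/2 / b**2)**2 * var_a
        + n**2 * (a/b**2 + alpha * a / b**3)**2 * var_b)
v_p4 = (n**2 * (1/b + alpha/2 / b**2 + alpha**2/3 / b**3
                + alpha**3/4 * (a + 1) / b**4)**2 * var_a
        + n**2 * (a/b**2 + alpha * a / b**3 + alpha**2 * a / b**4
                  + alpha**3/2 * a * (a + 2) / b**5)**2 * var_b)
print(v_p2, v_p4)  # V[P4] > V[P2]: the fourth-order premium is more uncertain
```

The ordering V[P_4] > V[P_2] anticipates the observation made for all distributions in Section 3.3: the extra terms of the fourth-order premium add dispersion.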
Finally, for the lognormal distribution we have
P_2 = n\left[e^{\mu+\sigma^2/2} + \frac{\alpha}{2}(e^{\sigma^2}-1)e^{2\mu+\sigma^2}\right]
P_4 = n\left[e^{\mu+\sigma^2/2} + \frac{\alpha}{2}(e^{\sigma^2}-1)e^{2\mu+\sigma^2} + \frac{\alpha^2}{6}(e^{\sigma^2}-1)^2(e^{\sigma^2}+2)e^{3(\mu+\sigma^2/2)} + \frac{\alpha^3}{24}(e^{6\sigma^2}-4e^{3\sigma^2}+6e^{\sigma^2}-3)e^{4\mu+2\sigma^2}\right]
and the variances are
V[P_2] = n^2\left[e^{\bar{\mu}+\bar{\sigma}^2/2} + \alpha(e^{\bar{\sigma}^2}-1)e^{2\bar{\mu}+\bar{\sigma}^2}\right]^2 V[\mu] + n^2\left[\bar{\sigma}e^{\bar{\mu}+\bar{\sigma}^2/2} + \alpha\bar{\sigma}(2e^{\bar{\sigma}^2}-1)e^{2\bar{\mu}+\bar{\sigma}^2}\right]^2 V[\sigma]
V[P_4] = n^2\left[e^{\bar{\mu}+\bar{\sigma}^2/2} + \alpha(e^{\bar{\sigma}^2}-1)e^{2\bar{\mu}+\bar{\sigma}^2} + \frac{\alpha^2}{2}(e^{\bar{\sigma}^2}-1)^2(e^{\bar{\sigma}^2}+2)e^{3(\bar{\mu}+\bar{\sigma}^2/2)} + \frac{\alpha^3}{6}(e^{6\bar{\sigma}^2}-4e^{3\bar{\sigma}^2}+6e^{\bar{\sigma}^2}-3)e^{4\bar{\mu}+2\bar{\sigma}^2}\right]^2 V[\mu] + n^2\left[\bar{\sigma}e^{\bar{\mu}+\bar{\sigma}^2/2} + \alpha\bar{\sigma}(2e^{\bar{\sigma}^2}-1)e^{2\bar{\mu}+\bar{\sigma}^2} + \frac{\alpha^2}{2}\bar{\sigma}(3e^{3\bar{\sigma}^2}-5e^{\bar{\sigma}^2}+2)e^{3(\bar{\mu}+\bar{\sigma}^2/2)} + \frac{\alpha^3}{12}\bar{\sigma}(8e^{6\bar{\sigma}^2}-14e^{3\bar{\sigma}^2}+18e^{\bar{\sigma}^2}-6)e^{4\bar{\mu}+2\bar{\sigma}^2}\right]^2 V[\sigma].

3.3. An Application to Realistic Cases

In the previous sections, we have derived two metrics (the underestimation factor and the premium estimator variance) that allow us to compare the insurance premium as computed at two different approximation levels: second and fourth order. In this section, we get a feel for how the two approximation levels compare in realistic cases, i.e., when the loss distribution parameters take realistic values.
For this purpose, we consider the parameter values obtained in [6,30,31], which we report in Table 5. In particular, the Generalized Pareto distribution describes the duration (in minutes) of a cloud service outage, where the amount of economic losses is proportional to the duration of the outage. The lognormal distribution describes the severity of the losses due to a cyber-attack by a hacker. The Gamma distribution describes the severity of the losses due to a natural catastrophe (such as a hurricane). The Pareto distribution is one of the distributions most used in the literature to describe the severity of losses in many cases, for example, losses due to fire damage, car accidents, and ICT service failures.
All the cases we consider here refer to a policy duration of one year, and assume full compensation. In this section, we adopt the Babcock method with η = 0.4 to determine the risk-aversion coefficient. As assumed in Section 3.2, for the sake of computing the premium uncertainty, the loss distribution parameters are considered to be random variables. We plug their estimators, assumed to be unbiased and with a standard deviation equal to 1% of their expected value (this value is taken as a reference for convenience, but it should be assessed for the specific case at hand), in the premium computation formula.
To compare the two approximations, we first compute the underestimation factor through the percentage relative difference of the premiums, assuming that the distribution parameters are known exactly:
\mathrm{DP}(n) = \frac{P_4(n) - P_2(n)}{P_4(n)} \cdot 100
As a second metric for comparison, we compute the coefficient of variation (i.e., the ratio of standard deviation to expected value) for the two approximation levels. We obviously wish that coefficient to be as low as possible.
\mathrm{CV}_i(n) = \frac{\sqrt{V[P_i(n)]}}{E[P_i(n)]} \cdot 100 \qquad i \in \{2, 4\}
In Figure 2, Figure 3, Figure 4 and Figure 5, we show the two comparison metrics for the Pareto, GPD, Gamma, and Lognormal distributions, respectively.
Moreover, in Table 6 and Table 7, we report the underestimation DP and the coefficient of variation for two different values of the number of events n.
In all cases, we can see that V [ P 4 ] > V [ P 2 ] .
Naturally, when \alpha_{Bab} (and therefore also \alpha_{Pit}) is used, the difference DP between the two approximation levels tends to zero as the number of events grows:
\lim_{n\to\infty} \mathrm{DP}(n) = \lim_{n\to\infty} \frac{\frac{1}{6n}\ln^2\frac{1+2\eta}{1-2\eta}\frac{S[L_i] V^{3/2}[L_i]}{E^2[L_i]} + \frac{1}{24 n^2}\ln^3\frac{1+2\eta}{1-2\eta}\frac{K[L_i] V^2[L_i]}{E^3[L_i]}}{n E[L_i] + \frac{1}{2}\ln\frac{1+2\eta}{1-2\eta}\frac{V[L_i]}{E[L_i]} + \frac{1}{6n}\ln^2\frac{1+2\eta}{1-2\eta}\frac{S[L_i] V^{3/2}[L_i]}{E^2[L_i]} + \frac{1}{24 n^2}\ln^3\frac{1+2\eta}{1-2\eta}\frac{K[L_i] V^2[L_i]}{E^3[L_i]}} = 0.
In fact, for large values of n, P_4(n) \simeq P_2(n). Instead, if \alpha is a fixed real constant, the relative difference DP is always positive and does not depend on the number of events n:
\mathrm{DP}(n) = \frac{\frac{n}{6}\left[\alpha^2 S[L_i] V^{3/2}[L_i] + \frac{1}{4}\alpha^3 K[L_i] V^2[L_i]\right]}{n\left[E[L_i] + \frac{1}{2}\alpha V[L_i] + \frac{1}{6}\alpha^2 S[L_i] V^{3/2}[L_i] + \frac{1}{24}\alpha^3 K[L_i] V^2[L_i]\right]} = \frac{\alpha^2 S[L_i] V^{3/2}[L_i] + \frac{1}{4}\alpha^3 K[L_i] V^2[L_i]}{6 E[L_i] + 3\alpha V[L_i] + \alpha^2 S[L_i] V^{3/2}[L_i] + \frac{1}{4}\alpha^3 K[L_i] V^2[L_i]}.
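The independence of DP from n for a fixed \alpha can be verified directly from the definitions of P_2 and P_4 together with the scaling of the aggregate statistics with n. The per-event loss statistics below are hypothetical:

```python
def premiums(n, alpha, e, v, s, k):
    """P2 and P4 for n i.i.d. events with per-event mean e, variance v,
    skewness s and kurtosis k, using the aggregation rules of Section 3."""
    E, V = n * e, n * v
    S, K = s / n**0.5, k / n
    p2 = E + 0.5 * alpha * V
    p4 = p2 + alpha**2 / 6 * S * V**1.5 + alpha**3 / 24 * K * V**2
    return p2, p4

def dp(n, alpha, e, v, s, k):
    """Relative premium difference (as a fraction, not a percentage)."""
    p2, p4 = premiums(n, alpha, e, v, s, k)
    return (p4 - p2) / p4

args = (0.05, 100.0, 400.0, 2.0, 9.0)  # hypothetical per-event statistics
print(dp(10, *args), dp(1000, *args))  # equal: DP does not depend on n
```

Both the numerator and the denominator of DP scale linearly with n, so the ratio is constant, as the closed form above shows.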

4. Insurer’s Risk

As in any total-coverage insurance scheme, the insured is held indemnified against any damage by just paying the premium. On the other hand, the insurer must cover all the losses the insured suffers. Since the premium is a fixed quantity and the loss is a random variable, the insurer incurs the risk of paying out more than it cashes in. Of course, this risk is different for the two approximation levels we have considered so far: it is larger for the second-order approximation as long as its premium is lower than the fourth-order one. In this section, we quantify that risk by computing the Value-at-Risk (VaR), now considering the number n of events as a random variable.
The Value-at-Risk is a well-known measure of risk, defined first in the financial world and then extended to other fields. For example, it has been employed in the ICT sector in [3,32,33]. In our case, the Value-at-Risk at the confidence level θ is the smallest number x θ such that the probability that the loss X exceeds x θ is not larger than θ [34]:
\mathrm{VaR}_\theta = \inf\{x_\theta \in \mathbb{R} : P[X > x_\theta] \le 1 - \theta\}.
We can now compute the VaR for the four distributions we have considered so far. For the sake of simplicity, we use the notation P_i, i \in \{2, 4\}, to represent the insurance premium. In the following, the number of claims n is considered to be a Poisson random variable with parameter \lambda T, where \lambda is the frequency of the claims and T is the length of the observation period. The losses taking place over the sequence of events are assumed to be i.i.d. random variables.
Lognormal case: For the lognormal distribution, we use Wilkinson's method, as employed in [35], to estimate the tail of a sum of lognormal variables. The method assumes that the sum of lognormal variables is itself lognormal, so that the overall loss X = \sum_{i=1}^{n} L_i follows a lognormal distribution, X = e^Z, with Z being a Gaussian variable with mean m_Z = E[X] and variance \sigma_Z^2 = V[X], which we can compute using Wald's identity:
m_Z = E[X] = E[n] E[L_i] = \lambda T e^{\mu+\sigma^2/2} \qquad \sigma_Z^2 = V[X] = E[n] V[L_i] + E^2[L_i] V[n] = \lambda T e^{2\mu+2\sigma^2}
The Value-at-Risk can therefore be determined as follows
P(X > x_\theta) = 1 - \theta \Rightarrow P(e^Z > x_\theta) = 1 - \theta \Rightarrow P(Z > \ln x_\theta) = 1 - \theta \Rightarrow G\!\left(\frac{\ln x_\theta - m_Z}{\sigma_Z}\right) = \theta \Rightarrow x_\theta = e^{G^{-1}(\theta)\sigma_Z + m_Z} \Rightarrow \mathrm{VaR}_\theta = x_\theta = e^{G^{-1}(\theta)\sqrt{\lambda T}\, e^{\mu+\sigma^2} + \lambda T e^{\mu+\sigma^2/2}}
where G(\cdot) is the cumulative distribution function of a standard (zero-mean, unit-variance) Gaussian random variable. In Figure 6, we see how the extreme loss (i.e., the difference between the Value-at-Risk and the premium) grows with the expected number of claims \lambda T.
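The lognormal VaR expression above is easy to evaluate; the sketch below uses Python's statistics.NormalDist for G^{-1} and hypothetical loss parameters (under this approximation the VaR grows very quickly with \lambda T):

```python
import math
from statistics import NormalDist

def var_lognormal(theta, lam, T, mu, sigma):
    """Value-at-Risk under Wilkinson's lognormal approximation."""
    m_z = lam * T * math.exp(mu + sigma**2 / 2)          # m_Z = E[X]
    s_z = math.sqrt(lam * T) * math.exp(mu + sigma**2)   # sigma_Z = sqrt(V[X])
    return math.exp(NormalDist().inv_cdf(theta) * s_z + m_z)

# Hypothetical parameters: 5 expected claims over the observation period
print(var_lognormal(0.995, lam=5, T=1, mu=0.1, sigma=0.4))
```

The VaR is, as expected, increasing in the confidence level \theta.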
Pareto case: In the Pareto and GPD cases, we can use the Generalized Extreme Value Theory [36,37,38] to compute the VaR.
As can be seen, for example, in [39], we can approximate the sum of n i.i.d. Pareto-distributed variables through a Generalized Extreme Value (GEV) variable, so that the overall loss X exhibits the probability density function
f(x; \mu, \sigma, \eta) = \frac{1}{\sigma}\, r(x)^{\eta+1} e^{-r(x)}
where
r(x) = \left[1 + \eta\frac{x-\mu}{\sigma}\right]^{-1/\eta} \text{ if } \eta \ne 0, \qquad r(x) = e^{-\frac{x-\mu}{\sigma}} \text{ if } \eta = 0,
and \mu, \sigma, and \eta are the location, scale, and shape parameters of the GEV distribution. The shape parameter \eta governs the tail under consideration: the fatness of the tail depends on its exact value. We can observe that \eta is related to the number of events n: as n increases, the slope of the distribution changes and so does \eta. The range in which \eta is most commonly found is [-0.5, 0.5]. In our case, we estimate the location, scale, and shape parameters through the R package ismev. For a probability level p, the quantile function of the GEV distribution is
Q = \mu + \frac{\sigma}{\eta}\left[\left(\ln\frac{1}{p}\right)^{-\eta} - 1\right] \text{ if } \eta \ne 0, \qquad Q = \mu - \sigma\ln(-\ln p) \text{ if } \eta = 0.
Assuming \eta \ne 0, we can compute the VaR as follows:
P(X > x_\theta) = 1 - \theta \Rightarrow P(X < x_\theta) = \theta \Rightarrow x_\theta = \frac{\sigma}{\eta}\left[\left(\ln\frac{1}{\theta}\right)^{-\eta} - 1\right] + \mu \Rightarrow \mathrm{VaR}_\theta = x_\theta = \frac{\sigma}{\eta}\left[\left(\ln\frac{1}{\theta}\right)^{-\eta} - 1\right] + \mu
In Figure 7, we see how the extreme loss grows with the number of claims: the extreme loss is anyway lower for the fourth-order approximation.
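The GEV quantile function above translates directly into code. The parameter values below are hypothetical; in practice they would come from a fit, e.g., with the ismev R package mentioned above:

```python
import math

def var_gev(theta, mu, sigma, eta):
    """Value-at-Risk from the GEV quantile function, for eta != 0."""
    assert eta != 0
    return sigma / eta * (math.log(1 / theta) ** (-eta) - 1) + mu

# Hypothetical GEV parameters fitted to the aggregate loss
print(var_gev(0.99, mu=100.0, sigma=20.0, eta=0.2))
```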
GPD case:
Now, using the same arguments adopted for the Pareto case, we can find the VaR for the GPD case:
P(X > x_\theta) = 1 - \theta \Rightarrow P(X < x_\theta) = \theta \Rightarrow x_\theta = \frac{\sigma}{\eta}\left[\left(\ln\frac{1}{\theta}\right)^{-\eta} - 1\right] + \mu \Rightarrow \mathrm{VaR}_\theta = x_\theta = \frac{\sigma}{\eta}\left[\left(\ln\frac{1}{\theta}\right)^{-\eta} - 1\right] + \mu
In Figure 8, the curves are actually very close but not coincident, because the percentage difference between the two premiums, P_2 and P_4, is very low.
Gamma case: Since L_i \sim \mathrm{Gamma}(a, b), again using Wald's identity and the properties of the Gamma distribution, the overall loss X has mean m_G = \frac{a}{b}\lambda T and variance \sigma_G^2 = \frac{a(1+a)}{b^2}\lambda T. If \lambda T is sufficiently large, the loss X can be approximated by a normal distribution with the same mean and variance, so that X \sim N(m_G, \sigma_G^2). We can then estimate the Value-at-Risk as follows:
P(X > x_\theta) = 1 - \theta \Rightarrow P(X < x_\theta) = \theta \Rightarrow G\!\left(\frac{x_\theta - m_G}{\sigma_G}\right) = \theta \Rightarrow \mathrm{VaR}_\theta = x_\theta = G^{-1}(\theta)\frac{\sqrt{a(a+1)\lambda T}}{b} + \frac{a}{b}\lambda T
As can be seen in Figure 9, the extreme loss grows with the number of claims, but is lower for the fourth-order approximation.
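Under the normal approximation, the gamma-case VaR above again reduces to a one-liner; the parameters are hypothetical:

```python
import math
from statistics import NormalDist

def var_gamma(theta, a, b, lam, T):
    """Value-at-Risk under the normal approximation of the aggregate gamma loss."""
    m_g = a / b * lam * T
    s_g = math.sqrt(a * (1 + a) * lam * T) / b
    return NormalDist().inv_cdf(theta) * s_g + m_g

# Hypothetical parameters: 50 expected claims over the observation period
print(var_gamma(0.995, a=2.0, b=0.5, lam=50, T=1))  # exceeds the expected loss m_g
```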

5. Conclusions

In this work, we have focused our attention on the comparison of insurance premiums computed under the expected utility principle with two approximation levels for the loss statistics: second order (mean-variance) vs. fourth order (mean-variance-skewness-kurtosis).
We have shown that for the major cases of interest, computing the premium based on the second-order approximation is riskier for the insurer, since the premium is lower.
However, if we take into account the possibility of incorrectly estimating the higher-order statistics involved in the fourth-order approximation, the comparison outcome may be reversed. In fact, the dispersion of the fourth-order premium may be so large that the resulting premium is too high. A higher premium would probably divert prospective customers from subscribing an insurance policy.
A tentative conclusion is then to use the fourth-order approximation to compute the insurance premium as long as the fourth-order statistics can be estimated with sufficient accuracy.

Author Contributions

Conceptualization, A.M., and M.N.; Methodology, A.M., and M.N.; Software, A.M.; Validation, A.M., and M.N.; Formal analysis, A.M., and M.N.; Investigation, A.M., and M.N.; Resources, M.N.; Data curation, A.M.; Writing–original draft preparation, A.M., and M.N.; Writing–review and editing, A.M., and M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ARA: Absolute Risk Aversion
CARA: Constant Absolute Risk Aversion
CV: Coefficient of Variation between premiums
DP: Difference in Percentage between premiums
GEV: Generalized Extreme Value
GPD: Generalized Pareto Distribution
R: Ratio between premiums
VaR: Value-at-Risk

References

  1. Dorfman, M.S. Introduction to Risk Management and Insurance; Prentice Hall Inc.: Englewood Cliffs, NJ, USA, 1998. [Google Scholar]
  2. Oren, S. Market based risk mitigation: Risk management vs. risk avoidance. Proceedings of a White House OSTP/NSF Workshop on Critical Infrastructure Interdependencies, Washington, DC, USA, 14–15 June 2001; pp. 14–15. [Google Scholar]
  3. Chołda, P.; Følstad, E.L.; Helvik, B.E.; Kuusela, P.; Naldi, M.; Norros, I. Towards risk-aware communications networking. Reliab. Eng. Syst. Saf. 2013, 109, 160–174. [Google Scholar] [CrossRef] [Green Version]
  4. Mastroeni, L.; Naldi, M. Network protection through insurance: Premium computation for the on-off service model. In Proceedings of the 8th International Workshop on the Design of Reliable Communication Networks, Krakow, Poland, 10–12 October 2011; pp. 46–53. [Google Scholar]
  5. Mastroeni, L.; Naldi, M. Pricing of insurance policies against cloud storage price rises. Perform. Eval. Rev. 2012, 40, 42–45. [Google Scholar] [CrossRef] [Green Version]
  6. Mastroeni, L.; Mazzoccoli, A.; Naldi, M. Service Level Agreement Violations in Cloud Storage: Insurance and Compensation Sustainability. Future Internet 2019, 11, 142. [Google Scholar] [CrossRef] [Green Version]
  7. Young, D.; Lopez, J., Jr.; Rice, M.; Ramsey, B.; McTasney, R. A framework for incorporating insurance in critical infrastructure cyber risk strategies. Int. J. Crit. Infrastruct. Prot. 2016, 14, 43–57. [Google Scholar] [CrossRef]
  8. Böhme, R.; Schwartz, G. Modeling Cyber-Insurance: Towards a Unifying Framework. 2010. Available online: https://pdfs.semanticscholar.org/7768/84d844f406fbfd82ad67b85ebaabd2b0e360.pdf (accessed on 5 May 2020).
  9. Baer, W.S.; Parkinson, A. Cyberinsurance in it security management. IEEE Secur. Priv. 2007, 5, 50–56. [Google Scholar] [CrossRef]
  10. Marotta, A.; Martinelli, F.; Nanni, S.; Orlando, A.; Yautsiukhin, A. Cyber-insurance survey. Comput. Sci. Rev. 2017, 24, 35–61. [Google Scholar] [CrossRef]
  11. Meland, P.H.; Tondel, I.A.; Solhaug, B. Mitigating risk with cyberinsurance. IEEE Secur. Priv. 2015, 13, 38–43. [Google Scholar] [CrossRef]
  12. Bodin, L.D.; Gordon, L.A.; Loeb, M.P.; Wang, A. Cybersecurity insurance and risk-sharing. J. Account. Pub. Policy 2018, 37, 527–544. [Google Scholar] [CrossRef]
  13. Mazzoccoli, A.; Naldi, M. Robustness of Optimal Investment Decisions in Mixed Insurance/Investment Cyber Risk Management. Risk Anal. 2020, 40, 550–564. [Google Scholar] [CrossRef]
  14. Kaas, R.; Goovaerts, M.; Dhaene, J.; Denuit, M. Modern Actuarial Risk Theory; Kluwer: Dordrecht, The Netherlands, 2001. [Google Scholar]
  15. Gollier, C. Optimal insurance design: What can we do with and without expected utility? In Handbook of Insurance; Springer: Dordrecht, The Netherlands, 2000; pp. 97–115. [Google Scholar]
  16. Machina, M.J. Non-expected utility and the robustness of the classical insurance paradigm. Geneva Pap. Risk Insurance Theory 1995, 20, 9–50. [Google Scholar] [CrossRef]
  17. Jaquith, A. Security Metrics: Replacing Fear, Uncertainty, and Doubt; Pearson Education: Boston, MA, USA, 2007. [Google Scholar]
  18. Franke, U. The cyber insurance market in Sweden. Comput. Secur. 2017, 68, 130–144. [Google Scholar] [CrossRef]
  19. Naldi, M.; Mazzoccoli, A. Computation of the Insurance Premium for Cloud Services Based on Fourth-Order Statistics. Int. J. Simul. Syst. Sci. Tech. 2018, 19, 1–6. [Google Scholar] [CrossRef] [Green Version]
  20. Menezes, C.F.; Hanson, D.L. On the theory of risk aversion. Int. Econ. Rev. 1970, 481–487. [Google Scholar] [CrossRef]
  21. Martinelli, F.; Orlando, A.; Uuganbayar, G.; Yautsiukhin, A. Preventing the drop in security investments for non-competitive cyber-insurance market. In Proceedings of the International Conference on Risks and Security of Internet and Systems, Dinard, France, 19–21 September 2017; pp. 159–174. [Google Scholar]
  22. Brunello, G. Absolute risk aversion and the returns to education. Econ. Educ. Rev. 2002, 21, 635–640. [Google Scholar] [CrossRef] [Green Version]
  23. Böhme, R.; Kataria, G. Models and Measures for Correlation in Cyber-Insurance. 2007. Available online: https://archive.nyu.edu/bitstream/2451/14997/2/Infosec_ISR_Bohme+Kataria.pdf (accessed on 5 May 2020).
  24. Raskin, R.; Cochran, M.J. Interpretations and transformations of scale for the Pratt-Arrow absolute risk aversion coefficient: Implications for generalized stochastic dominance. West. J. Agric. Econ. 1986, 12, 204–210. [Google Scholar]
  25. Thomas, P. Measuring risk-aversion: The challenge. Measurement 2016, 79, 285–301. [Google Scholar] [CrossRef]
  26. Olivieri, A.; Pitacco, E. Introduction to Insurance Mathematics: Technical and Financial Features of Risk Transfers; Springer: Heidelberg, Germany, 2015. [Google Scholar]
  27. Babcock, B.A.; Choi, E.K.; Feinerman, E. Risk and probability premiums for CARA utility functions. J. Agric. Resourc. Econ. 1993, 18, 17–24. [Google Scholar]
  28. Held, M. Deriving Skewness and Excess Kurtosis of the Sum of iid Random Variables. SSRN Electron. J. 2011. [Google Scholar] [CrossRef]
  29. Bury, K.V. Statistical Models in Applied Science; Wiley: New York, NY, USA, 1975. [Google Scholar]
  30. Packová, V.; Brebera, D. Loss Distributions in Insurance Risk Management. 2015. Available online: http://www.inase.org/library/2015/barcelona/ECBAS.pdf#page=17 (accessed on 5 May 2020).
  31. Burnecki, K.; Kukla, G.; Weron, R. Property insurance loss distributions. Phys. Stat. Mech. Appl. 2000, 287, 269–278. [Google Scholar] [CrossRef] [Green Version]
  32. Mastroeni, L.; Naldi, M. Compensation policies and risk in service level agreements: A value-at-risk approach under the on-off service model. In Economics of Converged, Internet-Based Networks; Cohen, J., Maillé, P., Stiller, B., Eds.; Springer: Berlin, Germany, 24 October 2011; Volume 6995, pp. 2–13. [Google Scholar]
  33. Naldi, M. Evaluation of customer’s losses and value-at-risk under cloud outages. In Proceedings of the 2017 40th International Conference on Telecommunications and Signal Processing (TSP), Barcelona, Spain, 5–7 July 2017; pp. 12–15. [Google Scholar]
  34. McNeil, A.J.; Frey, R.; Embrechts, P. Quantitative Risk Management: Concepts, Techniques and Tools, revised ed.; Princeton University Press: Princeton, NJ, USA, 2015. [Google Scholar]
  35. Beaulieu, N.C.; Abu-Dayya, A.A.; McLane, P.J. Estimating the distribution of a sum of independent lognormal random variables. IEEE Trans. Commun. 1995, 43, 2869–2873. [Google Scholar] [CrossRef]
  36. Holmes, J.; Moriarty, W. Application of the generalized Pareto distribution to extreme value analysis in wind engineering. J. Wind. Eng. Ind. Aerod. 1999, 83, 1–10. [Google Scholar] [CrossRef]
  37. Bali, T.G. The generalized extreme value distribution. Econ. Lett. 2003, 79, 423–427. [Google Scholar] [CrossRef]
  38. McNeil, A.J. Estimating the tails of loss severity distributions using extreme value theory. ASTIN Bull. J. IAA 1997, 27, 117–137. [Google Scholar] [CrossRef] [Green Version]
  39. Neftci, S.N. Value at risk calculations, extreme events, and tail estimation. J. Derivatives 2000, 7, 23–37. [Google Scholar] [CrossRef]
Figure 1. Ratio of the Pitacco and Babcock risk-aversion coefficient.
Figure 2. Performance metrics for the Pareto case (distribution parameters as in Table 5).
Figure 3. Performance metrics for the GPD case (distribution parameters as in Table 5).
Figure 4. Performance metrics for the Gamma case (distribution parameters as in Table 5).
Figure 5. Performance metrics for the lognormal case (distribution parameters as in Table 5).
Figure 6. Insurer’s extreme loss in the lognormal case (distribution parameters as in Table 5).
Figure 7. Insurer’s extreme loss in the Pareto case (distribution parameters as in Table 5).
Figure 8. Insurer’s extreme loss in the GPD case (distribution parameters as in Table 5).
Figure 9. Insurer’s extreme loss in the Gamma case (distribution parameters as in Table 5).
Table 1. Variables and parameters used.

Parameters/Variables | Meaning
α | Risk-aversion coefficient
Γ(a, b) | Gamma distribution, with shape parameter a and scale parameter b
Gev(μ, σ, η) | Generalized Extreme Value distribution, with location (μ), scale (σ), and shape (η) parameters
GPD(β, ξ) | Generalized Pareto distribution, with its scale (β) and shape (ξ) parameters
η | Probability premium
λ | Frequency of claims
L_i | Loss suffered during the i-th event
Logn(μ, σ²) | Lognormal distribution, with its mean-log (μ) and variance-log (σ²)
p | Probability of an event
Par(a, h) | Pareto distribution, with its shape (a) and scale (h) parameters
T | Length of observation period
θ | Confidence level
X | Random loss
x_θ | Value at Risk
w | Wealth under no losses
Table 2. Conditions for underestimation of the premium.

Skewness | Kurtosis | Underestimation
+ | + | YES
− | + | Distribution dependent
+ | − | Distribution dependent
− | − | NO
Table 3. Some common distributions for losses.

Distribution | Parameters | PDF
GPD | β, ξ | $f(x) = \frac{1}{\beta}\left(1 + \frac{\xi x}{\beta}\right)^{-\frac{\xi+1}{\xi}}$
Lognormal | μ, σ² | $f(x) = \frac{1}{\sqrt{2\pi}\,\sigma x}\, e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}$
Gamma | a, b | $f(x) = \frac{b^a}{\Gamma(a)}\, x^{a-1} e^{-bx}$
Pareto | a, h | $f(x) = \frac{a h^a}{x^{a+1}}$
Table 4. Moments of common distributions and underestimation presence (the conditions on parameters ξ and a assure that skewness and kurtosis are not infinite).

Intensity of Losses Distribution | Skewness | Kurtosis | Underestimation
GPD | $\frac{2(1+\xi)\sqrt{1-2\xi}}{1-3\xi}$, ξ < 1/3 | $\frac{3(1-2\xi)(2\xi^2+\xi+3)}{(1-3\xi)(1-4\xi)}$, ξ < 1/4 | YES
Lognormal | $(e^{\sigma^2}+2)\sqrt{e^{\sigma^2}-1}$ | $e^{4\sigma^2}+2e^{3\sigma^2}+3e^{2\sigma^2}-3$ | YES
Gamma | $\frac{2}{\sqrt{a}}$ | $\frac{6}{a}+3$ | YES
Pareto | $\frac{2(1+a)}{a-3}\sqrt{\frac{a-2}{a}}$, a > 3 | $\frac{6(a^3+a^2-6a-2)}{a(a-3)(a-4)}+3$, a > 4 | YES
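The closed-form skewness and kurtosis in Table 4 can be cross-checked numerically. As a sketch, the snippet below verifies the lognormal row against central moments built from the raw moments $E[X^k] = e^{k\mu + k^2\sigma^2/2}$ (taking μ = 0 for convenience, since neither moment depends on μ):

```python
import math

def lognormal_skew_kurt(sigma):
    """Skewness and (full, non-excess) kurtosis of Lognormal(mu, sigma^2),
    as in Table 4; both are independent of mu."""
    s2 = sigma ** 2
    skew = (math.exp(s2) + 2.0) * math.sqrt(math.exp(s2) - 1.0)
    kurt = math.exp(4 * s2) + 2 * math.exp(3 * s2) + 3 * math.exp(2 * s2) - 3.0
    return skew, kurt

def raw(k, sigma):
    """Raw moment E[X^k] of Lognormal(0, sigma^2)."""
    return math.exp(0.5 * k * k * sigma ** 2)

sigma = 0.87  # the lognormal scale estimated in Table 5
m1 = raw(1, sigma)
var = raw(2, sigma) - m1 ** 2
mu3 = raw(3, sigma) - 3 * m1 * raw(2, sigma) + 2 * m1 ** 3
mu4 = raw(4, sigma) - 4 * m1 * raw(3, sigma) + 6 * m1 ** 2 * raw(2, sigma) - 3 * m1 ** 4

skew, kurt = lognormal_skew_kurt(sigma)
assert abs(skew - mu3 / var ** 1.5) < 1e-8
assert abs(kurt - mu4 / var ** 2) < 1e-8
```

Both moments are positive for any σ > 0, which is why the lognormal row of Table 4 always falls in the underestimation case of Table 2.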
Table 5. Parameter values.

Loss Distribution | Parameter 1 | Parameter 2
GPD | ξ̄ = 0.64 | β̄ = 192.47
Lognormal | μ̄ = 6.83 | σ̄ = 0.87
Gamma | ā = 0.78 | b̄ = 12.58
Pareto | ā = 4.1 | h̄ = 12
Table 6. Underestimation DP and coefficients of variation CV, in percentage, for the two premiums, for the values considered in Section 3.2 with n = 5.

Intensity of Losses Distribution | Underestimation [%] | CV_{P2} [%] | CV_{P4} [%]
GPD | 0.31 | 1.15 | 1.2
Lognormal | 23.89 | 8.31 | 13.85
Gamma | 11.6 | 0.06 | 31.4
Pareto | 4.3 | 1.14 | 23.29
Table 7. Underestimation DP and coefficients of variation CV, in percentage, for the two premiums, for the values considered in Section 3.2 with n = 10.

Intensity of Losses Distribution | Underestimation [%] | CV_{P2} [%] | CV_{P4} [%]
GPD | 0.08 | 1.17 | 1.135
Lognormal | 5.85 | 7.66 | 8.83
Gamma | 2.91 | 0.05 | 6.84
Pareto | 0.67 | 1.08 | 6.76
