Article

Optimal Retention Level for Infinite Time Horizons under MADM

Department of Actuarial Sciences, Hacettepe University, 06800 Ankara, Turkey
* Author to whom correspondence should be addressed.
Submission received: 26 September 2016 / Revised: 16 December 2016 / Accepted: 19 December 2016 / Published: 27 December 2016

Abstract

In this paper, we approximate the aggregate claims process by using the translated gamma process under the classical risk model assumptions, and we investigate the ultimate ruin probability. We consider optimal reinsurance under the minimum ultimate ruin probability, as well as the maximum benefit criteria: released capital, expected profit and exponential-fractional-logarithmic utility from the insurer's point of view. Numerical examples are presented to explain how the optimal initial surplus and retention level change according to the individual claim amounts, loading factors and weights of the criteria. In the decision making process, we use the Analytical Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) as the Multi-Attribute Decision Making (MADM) methods, and we compare our results considering different combinations of loading factors for both exponential and Pareto individual claims.

1. Introduction

In the last few years, there has been growing interest in minimizing the ruin probability, or equivalently maximizing the survival probability, of an insurance company. An insurance company has to control its ruin probability and keep it at a minimum level to sustain its existence.
Current research on ruin probability has focused in particular on minimizing the ruin probability. Various methods to minimize the ruin probability have been proposed, such as reinsurance arrangements, dividend payments or investment techniques. However, most of the literature on minimizing the ruin probability is based on reinsurance arrangements. One of the first examples of optimal reinsurance was given by De Finetti [1], who determines the optimal retention level for a non-life insurance portfolio under the minimum variance of the insurer's profit for a fixed expected profit constraint. De Finetti [1] indicates that the retention level is proportional to the insurance loading factor and inversely proportional to the variance of the risk. Buhlmann [2] provides further details and proofs of De Finetti's approach. Borch [3] shows that stop-loss reinsurance is the optimal reinsurance contract, since it minimizes the variance of the insurer's risk when the reinsurance premium is calculated by using the expected value premium principle. Arrow [4] shows that the same stop-loss reinsurance maximizes the expected utility of the terminal wealth of the insurer. Dickson and Waters [5] use the ruin probability instead of the variance criterion in De Finetti's approach. They investigate the optimal reinsurance levels that minimize the finite time ruin probability, for both discrete and continuous time, in a non-life insurance portfolio. They assume that the aggregate claims process is approximated by a translated gamma process. Ignatov et al. [6] define optimality as the levels that maximize the joint survival probability of the cedent and the reinsurer over a finite time horizon. They derive a formula for the expected profit under the probability of survival of the insurer. Kaluszka [7] proposes the optimal reinsurance that minimizes the ruin probability for truncated stop-loss reinsurance under different pricing rules, such as the economic principle, generalized zero-utility principle, Esscher principle and mean-variance principle. Dickson and Waters [8] focus on a dynamic reinsurance strategy to minimize the ruin probability. They derive a formula for the finite time ruin probability in discrete and continuous time by using the Bellman optimality principle. Moreover, they show how the optimal strategies are determined by approximating the compound Poisson aggregate claims distributions by translated gamma distributions and by approximating the compound Poisson process by a translated gamma process, respectively. Kaishev and Dimitrova [9] generalize a joint survival optimal reinsurance model for excess of loss reinsurance under the assumption that the individual claim amounts are modeled by continuous dependent random variables with a joint distribution. The optimal retention levels that maximize both the joint survival function and the premium income are determined. Nie et al. [10] propose a new kind of reinsurance arrangement in which the reinsurer's payments are bounded above by a fixed level: whenever the insurer's surplus falls between zero and this fixed level, the reinsurance company makes additional payments called capital injections. The optimal pair of initial surplus and fixed reinsurance level is determined so as to minimize the ultimate ruin probability. Centeno [11], Aase [12], Ignatov et al. [6], Balbas et al. [13] and Centeno and Simoes [14] summarize the research techniques that are used in optimal reinsurance and provide further references about optimal reinsurance studies.
Briefly, the findings of these studies indicate that optimal reinsurance levels are mostly determined by using a single criterion (e.g., minimizing the ruin probability). Furthermore, only a few studies in the literature focus on determining the optimal reinsurance level under different constraints. Dimitrova and Kaishev [15] and Hürlimann [16] have studied optimal reinsurance by considering different risks from the point of view of both the insurer and the reinsurer.
Karageyik and Dickson [17] suggest released capital, expected profit and expected utility of resulting wealth as optimal reinsurance criteria. They aim to find the pair of initial surplus and reinsurance level that maximizes these three quantities under the minimum finite time ruin probability, using the translated gamma process to approximate the compound Poisson process. In order to obtain the optimal reinsurance, they take advantage of decision theory and use the TOPSIS method with the Mahalanobis distance. Based on the approach introduced in Karageyik and Dickson [17], the purpose of this paper is to determine the optimal initial surplus and retention level that maximize the optimal reinsurance criteria under the minimum ultimate ruin probability constraint.
In contrast to Karageyik and Dickson [17], we investigate the optimal reinsurance level by using three utility functions, exponential, fractional and logarithmic, in addition to the expected profit and released capital criteria. Whereas Karageyik and Dickson [17] examine optimal reinsurance under the finite time ruin probability, we prefer to use the ultimate ruin probability constraint. In addition, in the decision analysis part, we use two multi-attribute decision making methods, AHP and TOPSIS, with four normalization and two distance measure techniques. We obtain and compare the optimal initial surplus and retention level for combinations of different loading factors.
The rest of the paper is structured as follows: Section 2 describes the classical risk model. Section 3 briefly introduces the ultimate ruin probability under the assumption of the aggregate claims amount approximated by the translated gamma process. Section 4 explains the optimal reinsurance criteria: released capital, expected profit, exponential, fractional and logarithmic utility functions. Section 5 presents two multi-attribute decision making methods: AHP and TOPSIS. Section 6 focuses on the application to determine the optimal initial surplus and retention level for the exponential and Pareto claims. Section 7 concludes the paper.

2. Classical Risk Model

The insurer's surplus process at time $t$, $t \geq 0$, is:

$$U(t) = u + ct - S(t),$$

where $u \geq 0$ is the initial surplus, $c$ is the constant premium rate with $c > 0$ and $S(t)$ is the aggregate claim amount up to time $t$.
The aggregate claim amount up to time $t$, $S(t)$, is:

$$S(t) = \sum_{i=1}^{N(t)} X_i,$$

where $N(t)$ denotes the number of claims that occur in the fixed time interval $[0, t]$. In the classical risk model, it is assumed that $N(t)$ is a Poisson process with parameter $\lambda$. The individual claim amounts are modeled as independent and identically distributed (i.i.d.) random variables $\{X_i\}_{i=1}^{\infty}$ with distribution function $F(x) = P(X_i \leq x)$, such that $F(0) = 0$, where $X_i$ is the amount of the $i$-th claim. The density function and the $k$-th moment of $X_1$ are denoted by $f$ and $m_k$, respectively.
The infinite time ruin probability in the continuous case (the ultimate ruin probability) is defined as:

$$\psi(u) = \Pr\left(U(t) < 0 \text{ for some } t > 0\right).$$

$\psi(u)$ is the probability that the insurer's surplus falls below zero at some time in the future, that is, that claims outgo exceeds the initial surplus plus premium income. It is usually assumed that the premium income per unit of time is greater than the expected aggregate claim amount per unit of time $(c > \lambda m_1)$; otherwise, $\psi(u) = 1$ for all $u > 0$.
Trufin et al. [18] deal with the infinite time ruin probability of an insurance portfolio in the framework of risk measures. They also point to the advantages of the infinite time approach over the finite time case: contrary to the finite time case, the infinite time analysis avoids the difficulties of planning for, and assuming, an appropriate operational time.
In this study, the expected value premium principle is applied, with $c = (1+\theta)\lambda m_1$, where $\theta > 0$ is the insurance loading factor.
Under an excess of loss reinsurance arrangement, the insurer's and the reinsurer's expected individual claim amounts are calculated according to a constant retention level $M$. When a claim $X$ occurs, the insurer pays $Y = \min(X, M)$, and the reinsurer pays $Z = \max(0, X - M)$, with $X = Y + Z$. Hence, the distribution function of $Y$, $F_Y(x)$, is:

$$F_Y(x) = \begin{cases} F_X(x) & \text{for } x < M, \\ 1 & \text{for } x \geq M, \end{cases}$$

and the moments of $Y$ are:

$$E[Y^n] = \int_0^M x^n f(x)\,dx + M^n\left(1 - F(M)\right).$$

Similarly, the moments of $Z$ are:

$$E[Z^n] = \int_M^{\infty} (x - M)^n f(x)\,dx.$$
The expected aggregate claim amount, denoted by $E[S]$, is calculated from the expected number of claims and the expected amount of each claim as:

$$E[S] = E[E(S|N)] = E[N m_1] = E[N]\,m_1.$$

The aggregate claim amount is shared by the insurance and reinsurance company irrespective of the type of reinsurance arrangement; the aggregate claim amount $S$ can be written as $S_I + S_R$, where $S_I$ denotes the insurer's net aggregate claims after the reinsurance arrangement and $S_R$ denotes the reinsurer's aggregate claim amount. $E[S_R]$, the expected total claim amount paid by the reinsurer, is calculated as $\lambda E[Z]$, whereas the expected net claim amount paid by the insurer, $E[S_I]$, is calculated as $\lambda E[Y]$.
According to the expected value premium principle with insurance loading factor $\theta$ and reinsurance loading factor $\xi$, the insurer's premium income per unit time after the reinsurance premium (i.e., net of reinsurance) is defined as:

$$c^* = (1+\theta)E[S] - (1+\xi)E[S_R] = (1+\theta)\lambda E[X] - (1+\xi)\lambda E[Z], \qquad (3)$$

where we assume that $\xi > \theta > 0$ and $c^* > \lambda E[Y]$ (see Dickson [19]).
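As a small illustration of these quantities, the R sketch below evaluates $E[Y^n]$, $E[Z^n]$ and $c^*$ numerically for exponential(1) claims; the function names and parameter values are ours, not from the paper.

```r
# Moments of the insurer's and reinsurer's shares under excess of loss
# reinsurance, evaluated numerically; f is the individual claim density.
moment_Y <- function(n, M, f) {
  # E[Y^n] = int_0^M x^n f(x) dx + M^n (1 - F(M))
  integrate(function(x) x^n * f(x), 0, M)$value +
    M^n * integrate(f, M, Inf)$value
}
moment_Z <- function(n, M, f) {
  # E[Z^n] = int_M^Inf (x - M)^n f(x) dx
  integrate(function(x) (x - M)^n * f(x), M, Inf)$value
}
net_premium <- function(M, f, lambda, theta, xi) {
  # c* = (1 + theta) lambda E[X] - (1 + xi) lambda E[Z]
  EX <- integrate(function(x) x * f(x), 0, Inf)$value
  (1 + theta) * lambda * EX - (1 + xi) * lambda * moment_Z(1, M, f)
}
f <- function(x) exp(-x)  # exponential(1) claims
net_premium(M = 2, f = f, lambda = 500, theta = 0.1, xi = 0.15)
```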

3. Ultimate Ruin Probability

The maximal aggregate loss is defined as:

$$L = \max_{t \geq 0}\,\{S(t) - ct\}.$$

Hence, the survival (non-ruin) probability can be obtained as the distribution function of $L$ (see Bowers et al. [20]):

$$1 - \psi(u) = \Pr(L \leq u), \quad u \geq 0.$$
It is well known that the maximal aggregate loss $L$ may be written as:

$$L = L_1 + L_2 + \cdots + L_N,$$

where $L_1, L_2, \ldots$ are the ladder heights of the process, which are independent and identically distributed (i.i.d.) with probability density function $h(x)$. This function is obtained from the individual claim amount distribution function $F(x)$ as:

$$h(x) = \frac{1 - F(x)}{\int_0^{\infty}\left[1 - F(y)\right]dy}, \quad x > 0.$$
In the compound Poisson model, $N$ has a geometric distribution with parameter $p$, having probability mass function:

$$\Pr(N = n) = p(1-p)^n, \quad n = 0, 1, 2, \ldots,$$

where $p$ can be expressed in terms of the probability of ruin given that the initial surplus is zero, as $p = 1 - \frac{\lambda m_1}{c} = 1 - \psi(0)$. The distribution function of the ladder heights is given by:

$$H(x) = \frac{1}{m_1}\int_0^x \left[1 - F(y)\right]dy. \qquad (5)$$
Thus, $L$ has a compound geometric distribution, and the distribution function of $L$, $\Pr(L \leq u)$, is:

$$\Pr(L \leq u) = \sum_{n=0}^{\infty} p(1-p)^n H^{*n}(u), \qquad (4)$$

where $H^{*n}(u)$ denotes the $n$-fold convolution of the function $H(u)$. This convolution formula for the ruin probability is called Beekman's convolution formula [21].

3.1. Ultimate Ruin Probability for the Gamma Process

Dufresne et al. [22] define processes with independent, stationary and nonnegative increments. Let $Q(x)$ be a non-negative and non-increasing function of $x$, $x > 0$, with the properties:

$$Q(x) \to 0 \quad \text{as } x \to \infty$$

and:

$$\int_0^{\infty} Q(x)\,dx < \infty.$$
Then, Dufresne et al. [22] suggest the calculation of the finite time ruin probability for the gamma and standardized gamma process. They show that the compound Poisson process can be approximated by a gamma$(\alpha, \beta)$ process $\{S_G(t)\}_{t>0}$. They assume that the function $Q(x)$ is differentiable, with $-Q'(x) = q(x)$ given by:

$$q(x) = \frac{\alpha}{x}\,e^{-\beta x}, \quad x > 0.$$
They point out that a gamma process with $\alpha = \beta = 1$ is called a standardized gamma process $\{S_{SG}(t)\}_{t>0}$, with the properties:

$$q(x) = \frac{e^{-x}}{x}, \quad x > 0$$

or:

$$Q(x) = \int_x^{\infty} \frac{e^{-y}}{y}\,dy, \quad x \geq 0.$$
Abramowitz and Stegun [23] denote by $E_1(x)$ the exponential integral $\int_x^{\infty}(e^{-y}/y)\,dy$. The common probability density function of the random variables $\{L_i\}$, $h(x)$, is then:

$$h(x) = Q(x) = E_1(x), \quad x > 0,$$

and the distribution function of the random variables $\{L_i\}$, $H(x)$, is obtained as:

$$H(x) = \int_0^x h(y)\,dy = 1 - e^{-x} + x\,E_1(x), \quad x \geq 0.$$
Abramowitz and Stegun [23] give a series expansion for this exponential integral:

$$E_1(x) = -\gamma - \ln x - \sum_{n=1}^{\infty} \frac{(-1)^n x^n}{n \cdot n!},$$

where $\gamma$ is Euler's constant, approximately 0.577216.
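The series is straightforward to evaluate; the small R sketch below is illustrative (the truncation point nmax is our choice and is adequate only for moderate $x$).

```r
# Truncated series for the exponential integral E_1(x).
E1_series <- function(x, nmax = 60) {
  n <- 1:nmax
  -0.577216 - log(x) - sum((-1)^n * x^n / (n * factorial(n)))
}
E1_series(1)  # approximately 0.2194
```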
Dufresne et al. [22] concentrate on the lower and upper bounds for the ultimate ruin probability when the aggregate claim process is the standardized gamma process. They use the method which is suggested by Dufresne and Gerber [24]. This method yields Beekman’s convolution formula for the probability of ruin as given in (4).

3.2. Ultimate Ruin Probability for the Translated Gamma Process

The aggregate claims process $\{S(t)\}_{t>0}$ can be approximated by using the translated gamma process $\{S_{TG}(t)\}_{t>0}$. The structure is similar to the gamma process except for the parameter $k$. For all $t > 0$, the translated gamma process can be defined in terms of the gamma process as follows:

$$S_{TG}(t) = S_G(t) + kt,$$

where $\{S_G(t)\}_{t>0}$ is a gamma$(\alpha, \beta)$ process and $k$ is a constant.
For all $t > 0$, the mean, variance and coefficient of skewness of $S(t)$ and $S_{TG}(t)$ are matched, and the parameters $\alpha$, $\beta$ and $k$ of the translated gamma process are obtained as follows:

$$E[S(t)] = \lambda t m_1 = \alpha t/\beta + kt = E[S_{TG}(t)],$$
$$V[S(t)] = \lambda t m_2 = \alpha t/\beta^2 = V[S_{TG}(t)],$$
$$Sk[S(t)] = \lambda t m_3 / (\lambda t m_2)^{3/2} = 2/(\alpha t)^{1/2} = Sk[S_{TG}(t)].$$

These identities give the parameter values as:

$$\alpha = 4\lambda m_2^3/m_3^2, \quad \beta = 2m_2/m_3, \quad k = \lambda\left(m_1 - 2m_2^2/m_3\right). \qquad (7)$$
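The moment matching translates directly into code; a minimal R sketch (the function name is ours):

```r
# Parameters of the translated gamma process matched to the compound
# Poisson process with individual claim moments m1, m2, m3.
tg_params <- function(lambda, m1, m2, m3) {
  list(alpha = 4 * lambda * m2^3 / m3^2,
       beta  = 2 * m2 / m3,
       k     = lambda * (m1 - 2 * m2^2 / m3))
}
# Exponential(1) claims, no reinsurance: m1 = 1, m2 = 2, m3 = 6
tg_params(lambda = 500, m1 = 1, m2 = 2, m3 = 6)
```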
Dickson and Waters [25] suggest a different way to calculate the ruin probability for a standardized gamma process $\{S_{SG}(t)\}_{t>0}$. They discretize $H(x)$ on $0, h, 2h, \ldots$ by the crude rounding method and then apply Panjer's recursion [26]. Let $\psi_{TG}(u, t)$ represent the finite time ruin probability when the aggregate claims process is a gamma$(\alpha, \beta)$ process and the premium loading factor is $\hat{\theta} = \theta(1 + k\beta/\alpha)$. Dickson and Waters [25] also show that the relationship between the gamma and standardized gamma process with respect to the finite time ruin probability is as follows:

$$\psi(u, t) = \psi_{SG}(\beta u, \alpha t),$$

where $\psi_{SG}(u, t)$ represents the finite time ruin probability when the aggregate claims process is a standardized gamma process. Hence, $\psi(u, t)$ is approximated by $\psi_{SG}(\beta u, \alpha t)$, using a premium loading factor $\hat{\theta} = \theta(1 + k\beta/\alpha)$, under the translated gamma process approximation.
In a similar manner, the ultimate ruin probability $\psi(u)$ for a compound Poisson process with premium loading factor $\theta$ is approximated by the ultimate ruin probability when the aggregate claims process is a gamma$(\alpha, \beta)$ process with loading factor $\hat{\theta}$.
The convolution formula for the probability of ruin under the standardized gamma process with premium loading factor $\theta$ is:

$$\psi_{SG}(u) = 1 - \sum_{n=0}^{\infty} \frac{\theta}{(1+\theta)^{n+1}}\,H^{*n}(u), \qquad (8)$$

where $H^{*n}(u)$ denotes the $n$-fold convolution of the function $H(u)$ in (5).
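A sketch of this calculation in R is given below: the ladder height distribution function $H$ is discretized on $0, h, 2h, \ldots$ by crude rounding, and the compound geometric distribution is evaluated by Panjer's recursion, in the spirit of Dickson and Waters [25]. The step size $h$ and the implementation details are our illustrative choices.

```r
# Exponential integral and ladder height d.f. of the standardized
# gamma process: H(x) = 1 - exp(-x) + x * E1(x).
E1 <- function(x) integrate(function(y) exp(-y) / y, x, Inf)$value
H  <- function(x) if (x <= 0) 0 else 1 - exp(-x) + x * E1(x)

psi_SG <- function(u, theta, h = 0.05) {
  q <- 1 / (1 + theta)                 # Pr(N = n) = p * q^n, p = 1 - q
  K <- floor(u / h)
  f <- sapply(0:K, function(k) H(k * h + h / 2)) -
       sapply(0:K, function(k) H(k * h - h / 2))   # crude rounding masses
  g <- numeric(K + 1)
  g[1] <- (1 - q) / (1 - q * f[1])     # Pr(L = 0)
  for (k in seq_len(K))                # Panjer recursion (geometric case)
    g[k + 1] <- q / (1 - q * f[1]) * sum(f[2:(k + 1)] * g[k:1])
  1 - sum(g)                           # psi(u) = 1 - Pr(L <= u)
}
psi_SG(u = 10, theta = 0.1)
```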
When a reinsurance arrangement exists, since the compound Poisson process is approximated by the translated gamma process, the parameters of the translated gamma process are calculated from the moments of the net of reinsurance process. The net of reinsurance loading factor, $\theta_{net}$, is defined through the net of reinsurance premium income:

$$c^* = (1 + \theta_{net})\lambda E[Y],$$

and hence a formula for $\theta_{net}$ can be derived as:

$$\theta_{net} = \frac{c^*}{\lambda E[Y]} - 1.$$

The insurance loading factor is obtained by using the parameters of the translated gamma process. Since $\hat{\theta} = \theta(1 + k\beta/\alpha)$, we have the following result for the net of reinsurance loading factor under the translated gamma process approximation:

$$\hat{\theta} = \theta_{net}\left(1 + \frac{k\beta}{\alpha}\right). \qquad (11)$$
Therefore, $\psi_{SG}(\beta u)$ in (8), evaluated with premium loading factor $\hat{\theta}$, approximates $\psi(u)$ under the translated gamma process approximation.
Dickson [19] describes a constraint on the retention level $M$ under excess of loss reinsurance: when the individual claim amount has an exponential distribution with parameter one, the retention level must satisfy $M > \log(\xi/\theta)$.
When the individual claim amount is exponentially distributed with parameter $\alpha$, the moments of the insurer's individual claim amount can be obtained in terms of the incomplete gamma function [27]:

$$m_k = E[Y^k] = \frac{k}{\alpha^k}\,\gamma(k, \alpha M) \quad \text{for } k = 1, 2, \ldots, \qquad (12)$$

where $\gamma(k, M)$ is the incomplete gamma function, defined as:

$$\gamma(k, M) = \int_0^M t^{k-1} e^{-t}\,dt.$$
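In R, the lower incomplete gamma function can be obtained from the regularized version as $\gamma(k, x) = \Gamma(k)\,\texttt{pgamma}(x, k)$, so these moments are one line of code; the cross-check with numerical integration below is illustrative.

```r
# k-th moment of Y = min(X, M) for exponential(alpha) claims, from (12).
mk_exp <- function(k, alpha, M) {
  k / alpha^k * gamma(k) * pgamma(alpha * M, shape = k)
}
mk_exp(1, alpha = 1, M = 2)
integrate(function(x) pmin(x, 2) * dexp(x, 1), 0, Inf)$value  # agrees
```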
When the individual claim amount has a Pareto distribution with parameters $\gamma$ and $\delta$, the moments of the insurer's individual claim amount cannot be expressed in as simple a closed form as in the exponential case [27]. The first three moments of the insurer's individual claim amount are obtained as:

$$m_1 = \left(1+\frac{M}{\delta}\right)^{-\gamma}\left[\frac{-\delta - \gamma M + \delta\left(\frac{\delta+M}{\delta}\right)^{\gamma}}{\gamma-1} + M\right], \qquad (13)$$

$$m_2 = \left(1+\frac{M}{\delta}\right)^{-\gamma}\left[\frac{-2\gamma\delta M - (\gamma-1)\gamma M^2 + 2\delta^2\left(-1+\left(\frac{\delta+M}{\delta}\right)^{\gamma}\right)}{(\gamma-2)(\gamma-1)} + M^2\right], \qquad (14)$$

and:

$$m_3 = \left(1+\frac{M}{\delta}\right)^{-\gamma}\left[\frac{-6\gamma\delta^2 M - 3(\gamma-1)\gamma\delta M^2 - (\gamma-2)(\gamma-1)\gamma M^3 + 6\delta^3\left(-1+\left(\frac{\delta+M}{\delta}\right)^{\gamma}\right)}{(\gamma-3)(\gamma-2)(\gamma-1)} + M^3\right]. \qquad (15)$$
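As a sanity check on the first of these expressions, the R sketch below codes (13) directly and compares it with numerical integration for $\gamma = 3$, $\delta = 4$; (14) and (15) can be checked in the same way. The function names are ours.

```r
# First moment of Y = min(X, M) for Pareto(gamma, delta) claims, from (13).
m1_pareto <- function(g, d, M) {
  (1 + M / d)^(-g) *
    ((-d - g * M + d * ((d + M) / d)^g) / (g - 1) + M)
}
f_pareto <- function(x, g, d) g * d^g / (d + x)^(g + 1)
m1_pareto(3, 4, 2)
integrate(function(x) pmin(x, 2) * f_pareto(x, 3, 4), 0, Inf)$value  # agrees
```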
In this study, the translated gamma approximation is preferred since it is easy to apply and fits the aggregate claims distribution better than the other approximations.

4. Optimal Reinsurance Criteria

Karageyik and Dickson [17] define the optimal reinsurance criteria as released capital, expected profit and expected utility. They apply the finite time translated gamma process approximation to the classical risk process and investigate the optimal reinsurance that maximizes the reinsurance criteria when the finite time ruin probability is at its minimum.
Although we build on the earlier work of Karageyik and Dickson [17], the focus of this paper is different. We explore five reinsurance criteria for the determination of the optimal reinsurance in the infinite time case. Contrary to the finite time case, only differences in the pairs of insurance and reinsurance loading factors cause significant changes in the values of these criteria.
We begin by investigating alternatives between defined smallest and largest points. The starting point is the minimum initial surplus ($u_S$) that makes the ruin probability equal to 0.01 when the reinsurance arrangement is in force; a ruin probability below this level cannot be attained while the reinsurance arrangement exists. The ending point is the largest initial surplus ($u_L$) such that the ruin probability equals 0.01 when there is no reinsurance. We begin with the smallest initial surplus $u_S$ and increase this value in steps of 0.1 or 0.05 up to the largest initial surplus $u_L$. Once we reach the largest initial surplus, the insurance company does not need a reinsurance arrangement to satisfy the 0.01 ruin probability. We calculate the corresponding retention level for each initial surplus. Hence, we obtain an outcome set that represents all possible combinations of the retention level and corresponding initial surplus between the starting and ending points.

4.1. Released Capital

Released capital, $RC$, is defined as the difference between the largest initial surplus and the required initial surplus of each alternative that makes the ruin probability equal to a certain level [17]. In general, there is a unique retention level that satisfies the 0.01 ruin probability for each initial surplus. However, in some cases, there is more than one retention level that satisfies the same ruin probability. In these cases, we use the higher retention level, since we aim to maximize the expected profit. The released capital is calculated as:

$$RC = u_L - u_i \quad \text{for } i = 1, 2, \ldots.$$

4.2. Expected Profit

The insurer's expected profit, $P$, is calculated as the difference between the net insurance premium income $c^*$ in (3) and the insurer's expected total claims, $E[S_I]$, as below:

$$P = c^* - E[S_I].$$

Under the classical risk model, according to the expected value premium principle with insurance loading factor $\theta$ and reinsurance loading factor $\xi$, the insurer's expected profit is obtained as:

$$P = (1+\theta)E[S] - (1+\xi)E[S_R] - E[S_I]. \qquad (16)$$

4.3. Expected Utility

The insurer's wealth $U$ is described from the insurer's point of view as:

$$U = u_0 + c^* - S_I.$$

To avoid notational confusion, the initial surplus is denoted by $u_0$ instead of $u$. Dickson [19] considers mathematical functions, such as the exponential, logarithmic, quadratic and fractional power functions, as suitable forms of utility functions.

4.3.1. Exponential Utility

The exponential utility function is defined as:

$$u(x) = 1 - \exp(-Bx), \quad B > 0,$$

where $B$ is the parameter of the utility function. The expected utility of the insurer's wealth, $E_eU$, is:

$$E_eU = E[u(U)] = E\left[u(u_0 + c^* - S_I)\right] = E\left[1 - \exp\left(-B(u_0 + c^* - S_I)\right)\right] = 1 - \exp(-Bu_0)\exp(-Bc^*)\,E\left[\exp(BS_I)\right]. \qquad (17)$$

$E[\exp(BS_I)]$ is the moment generating function of the (net of reinsurance) aggregate claim amount random variable, which under the translated gamma process is:

$$M_{S_I}(B) = \left(\frac{\beta}{\beta - B}\right)^{\alpha},$$

provided that $\beta$ defined in (7) is greater than $B$ [17].
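Putting these pieces together, a hedged R sketch of the expected exponential utility calculation follows, taking the moment generating function in the form given above; all numerical values are purely illustrative.

```r
# E_eU = 1 - exp(-B*u0) * exp(-B*cstar) * E[exp(B*S_I)], with
# E[exp(B*S_I)] = (beta / (beta - B))^alpha, valid only for B < beta.
exp_utility <- function(u0, cstar, B, alpha, beta) {
  stopifnot(B < beta)
  mgf <- (beta / (beta - B))^alpha
  1 - exp(-B * u0) * exp(-B * cstar) * mgf
}
# Illustrative values in the range of the exponential(1) example
exp_utility(u0 = 50, cstar = 550, B = 0.005, alpha = 444.4, beta = 2/3)
```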

4.3.2. Fractional Power Utility

The fractional power utility function is defined as:

$$u(x) = x^B, \quad x > 0 \text{ and } 0 < B < 1,$$

where $B$ is the parameter of the utility function. The expected fractional power utility of the insurer's wealth, denoted $E_fU$, becomes:

$$E_fU = E[u(U)] = E\left[u(u_0 + c^* - S_I)\right] = E\left[(u_0 + c^* - S_I)^B\right]. \qquad (18)$$

4.3.3. Logarithmic Utility

The logarithmic utility function is defined as:

$$u(x) = B\log(x), \quad x > 0 \text{ and } B > 0,$$

where $B$ is the parameter of the utility function. The expected logarithmic utility of the insurer's wealth, denoted $E_lU$, becomes:

$$E_lU = E[u(U)] = E\left[u(u_0 + c^* - S_I)\right] = E\left[B\log\left(u_0 + c^* - S_I\right)\right]. \qquad (19)$$
In this study, we cannot use the quadratic utility function, since its parabolic shape does not reflect the pattern of the optimal values.

5. Multi-Criteria Decision Making

Multi-Criteria Decision Making (MCDM) is used when making decisions under multiple, conflicting criteria. Hwang and Yoon [28] describe the common characteristics of MCDM problems: each problem has multiple objectives/attributes; the criteria conflict with each other; each objective/attribute has a different unit of measurement; and solutions are given as the best alternative among all possible alternatives.
MCDM is classified into two categories depending on whether the problem is a selection problem or a design problem. Multiple-Attribute Decision Making (MADM) is developed to select the best alternative from a set, whereas Multiple-Objective Decision Making (MODM) is developed to design the best alternative. MODM is related to designing a problem and depends on constructing the final decision rather than selecting the best alternative. Therefore, we focus on MADM methods in determining the optimal retention levels.

5.1. Multiple-Attribute Decision Making

MADM is an approach applied to solving problems with a finite number of alternatives. An MADM method indicates how attribute information is to be processed in order to arrive at a choice [29]. MADM is a widely used tool, and it has been investigated in different types of areas, such as business, academic, public or personnel settings [30]. Hwang and Yoon [28] classify MADM methods according to the different forms of preference information given by the decision maker.
Methods based on the cardinal preference of attributes require the decision maker's cardinal inter-attribute preference information, and they are the most commonly preferred way of expressing inter-attribute preferences. These methods can be categorized into six main methods: the Linear Assignment Method (LAM), the Simple Additive Weighting (SAW) method, the Hierarchical Additive Weighting Method (HAWM), the Analytical Hierarchy Process (AHP), the Elimination and Choice Translating Reality (ELECTRE) method and the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS).
The LAM was suggested by Bernardo and Blin [31] and depends on a set of attribute rankings and a set of attribute weights. In spite of its simplicity, it does not satisfy the basic linear requirement of finding an overall ranking that simultaneously uses all of the information rather than just the sum of the ranks. Furthermore, this method may not be practical in all applications because it requires attribute-wise rankings. The SAW method is probably the best known and most widely used method for MADM analysis; however, it requires that the values of the attributes be comparable and in numerical form. In the ELECTRE method, alternatives are compared pairwise based on the degree to which the evaluations of the alternatives and the preference weights confirm or contradict the pairwise dominance relationships between the alternatives (Hwang and Yoon [28]). In this study, we focus on the AHP and TOPSIS methods because they allow us to assess the optimal levels better.
Suppose that $D$ is a decision matrix with $m$ alternatives $A_1, A_2, \ldots, A_m$ and $n$ decision attributes (criteria) $C_1, C_2, \ldots, C_n$. Let $x_{ij}$ denote the attribute (criterion) value of $A_i$ on $C_j$ for $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$ in the matrix $D$. Table 1 presents the decision matrix.
In MADM, the importance of each attribute is described by the attribute weights. A set of weights for $n$ attributes is written as:

$$w^T = (w_1, w_2, \ldots, w_n),$$

where $\sum_{j=1}^{n} w_j = 1$.

5.2. The Analytical Hierarchy Process

The Analytical Hierarchy Process (AHP) decomposes the decision problem into a system of hierarchies of objectives, attributes (criteria) and alternatives. The decision maker evaluates the relative importance of the various elements by pairwise comparisons. Saaty [32] identifies the AHP as a method that converts evaluations to numerical values (weights or priorities), which are used to calculate a score for each alternative.
Saaty [33] introduces a method that depends on scaling the importance of attributes by using the principal eigenvector of a positive pairwise comparison matrix. Moreover, Saaty proposes using an intensity scale of importance for activities in the eigenvector method instead of the pairwise comparison matrix.
The scale and its description of Saaty’s pairwise comparison matrix is given in Table 2. This pairwise comparison matrix is used to compare the criteria according to their importance.
The main steps of AHP are as follows:
1  Make pairwise comparisons:
A pairwise comparison matrix is obtained by using Saaty's scale of relative importance. In this matrix, the principal diagonal contains entries of one, as each factor is as important as itself, and the remaining entries record the relative importance of each pair of factors. The pairwise comparison is carried out for each attribute.
2  Synthesize judgments:
Synthesization is the identification and calculation of the priority of each criterion according to its contribution to the aim of the decision. Synthesizing the judgments involves the following steps:
Step 1:
The summation of the values in each column of the pairwise comparison matrix is found,
Step 2:
Each element is divided by its column total value (the normalized pairwise comparison matrix),
Step 3:
The average of elements in each row is calculated (relative priorities-priority index).
3  Check for consistency:
In the consistency step, several pairs are compared by considering the consistency of the pairwise judgments. The degree of consistency is measured by the following steps:
Step 1:
The pairwise comparison matrix and its relative priorities are multiplied,
Step 2:
The elements of the weighted sum vector are divided by the associated priority values,
Step 3:
A consistency index ($CI$) is calculated from the average of the values in Step 2, that is, from the maximum eigenvalue $\lambda_{max}$, such that:

$$CI = \frac{\lambda_{max} - n}{n - 1}.$$
Step 4:
A consistency ratio ($CR$) is calculated as:

$$CR = \frac{CI}{RI},$$

where $RI$ denotes the random index, whose value depends on $n$, as given in Table 3.
In general, the consistency is considered acceptable if $CR \leq 0.1$; if the consistency is unacceptable, the pairwise comparisons should be revised.
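The synthesization and consistency steps are easy to reproduce; the R sketch below uses a made-up 3 × 3 pairwise comparison matrix and the standard random index value $RI = 0.58$ for $n = 3$.

```r
# AHP: priorities from a pairwise comparison matrix and consistency check.
A <- matrix(c(1,   3,   5,
              1/3, 1,   3,
              1/5, 1/3, 1), nrow = 3, byrow = TRUE)
norm <- sweep(A, 2, colSums(A), "/")  # Step 2: normalized columns
w    <- rowMeans(norm)                # relative priorities (priority index)
lmax <- mean((A %*% w) / w)           # approximate maximum eigenvalue
CI   <- (lmax - 3) / (3 - 1)          # consistency index
CR   <- CI / 0.58                     # RI = 0.58 for n = 3
w; CR                                 # acceptable if CR <= 0.1
```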
In the AHP method, the geometric mean method is also commonly used to determine the relatively normalized weights of the attributes instead of the arithmetic mean. The detailed information about this method can be found in Saaty [32], Saaty [34], Saaty [35] and Saaty [36].

5.3. TOPSIS Method with Euclidean Distance

The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was suggested by Hwang and Yoon [28] to determine the best alternative based on the concepts of a compromise solution. The compromise solution can be regarded as choosing the solution with the shortest Euclidean distance from the ideal solution and the farthest Euclidean distance from the negative ideal solution. The ranking of the alternatives is calculated according to the relative proximity to the ideal solution.
The TOPSIS method is one of the most widely preferred decision techniques, and in many studies it is described as the best alternative among the MADM methods. The TOPSIS methodology has been applied to a wide range of areas. Studies of the TOPSIS method since 2000 are classified in Behzadian et al. [37], whose findings suggest that the application areas and global interest in the TOPSIS method have gradually increased. The procedure of the TOPSIS method is described as follows.
Step 1:
The decision matrix is normalized by using the vector normalization technique:

$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}}, \qquad (20)$$

where $r_{ij}$ is the normalized value, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$.
Step 2:
Weighted normalized values are calculated by using the weight vector $w = (w_1, w_2, \ldots, w_n)$:

$$V_{ij} = w_j\,r_{ij}, \quad i = 1, \ldots, m; \; j = 1, \ldots, n.$$
Step 3:
Let the positive ideal points and negative ideal (anti-ideal) points be $S^+$ and $S^-$, respectively. The positive ideal points are equivalent to the maximum value under each benefit criterion:

$$S^+ = \left(S_1^+, S_2^+, \ldots, S_j^+, \ldots, S_n^+\right) = \left\{\left(\max_i V_{ij} \mid j \in J\right), \left(\min_i V_{ij} \mid j \in J'\right) \;\middle|\; i = 1, 2, \ldots, m\right\}$$

and the negative ideal points are equivalent to the minimum value under each benefit criterion:

$$S^- = \left(S_1^-, S_2^-, \ldots, S_j^-, \ldots, S_n^-\right) = \left\{\left(\min_i V_{ij} \mid j \in J\right), \left(\max_i V_{ij} \mid j \in J'\right) \;\middle|\; i = 1, 2, \ldots, m\right\},$$

where $J = \{j = 1, 2, \ldots, n \mid j \text{ is associated with benefit criteria}\}$ and $J' = \{j = 1, 2, \ldots, n \mid j \text{ is associated with cost criteria}\}$.
Step 4:
The distance of each alternative from the ideal and anti-ideal points is calculated by using the $n$-dimensional Euclidean distance. The distance between alternative $A_i$ and the ideal solution is:

$$d_i^+ = \sqrt{\sum_{j=1}^{n}\left(V_{ij} - S_j^+\right)^2}, \quad i = 1, 2, \ldots, m,$$

and the distance between alternative $A_i$ and the negative ideal solution is:

$$d_i^- = \sqrt{\sum_{j=1}^{n}\left(V_{ij} - S_j^-\right)^2}, \quad i = 1, 2, \ldots, m.$$
Step 5:
The relative closeness of each alternative to the ideal solution (the closeness index) is calculated as:

$$C_i = \frac{d_i^-}{d_i^+ + d_i^-}, \quad i = 1, 2, \ldots, m,$$

where $C_i \in [0, 1]$ for $i = 1, \ldots, m$. The results are sorted according to the value of $C_i$; a higher $C_i$ means that $A_i$ is a better solution.
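A compact, self-contained R sketch of these five steps follows; the decision matrix and weights are made-up numbers, with both columns treated as benefit criteria.

```r
# TOPSIS with vector normalization and Euclidean distances.
topsis <- function(x, w, benefit = rep(TRUE, ncol(x))) {
  r  <- sweep(x, 2, sqrt(colSums(x^2)), "/")   # Step 1: normalization
  v  <- sweep(r, 2, w, "*")                    # Step 2: weighting
  Sp <- ifelse(benefit, apply(v, 2, max), apply(v, 2, min))  # Step 3: ideal
  Sm <- ifelse(benefit, apply(v, 2, min), apply(v, 2, max))  # anti-ideal
  dp <- sqrt(rowSums(sweep(v, 2, Sp)^2))       # Step 4: distances
  dm <- sqrt(rowSums(sweep(v, 2, Sm)^2))
  dm / (dp + dm)                               # Step 5: closeness index
}
x <- cbind(RC = c(9, 6, 3, 0), P = c(10, 14, 16, 17))
topsis(x, w = c(0.5, 0.5))  # the largest value marks the best alternative
```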

5.4. TOPSIS Method with Mahalanobis Distance

The traditional TOPSIS method is based on the Euclidean distance measure, under the assumption that there is no relationship between the attributes. This approach suffers from information overlap and either overestimates or underestimates the attributes that carry slack information (Wang and Wang [38]).
When attributes are dependent and influence each other, application of the TOPSIS method based on Euclidean distances can lead to inaccurate estimation of the relative significances of alternatives and cause improper ranking results (Antucheviciene et al. [39]). For this reason, another distance measure technique, the Mahalanobis distance, is suggested instead of Euclidean distance in the TOPSIS method.
The Mahalanobis distance, also called the quadratic distance, was introduced by Mahalanobis [40]. For a multivariate vector $x = (x_1, x_2, x_3, \ldots, x_N)^T$ from a group of observations with mean $\mu = (\mu_1, \mu_2, \mu_3, \ldots, \mu_N)^T$ and covariance matrix $\Sigma$, the Mahalanobis distance is defined as follows:

$$D_M(x) = \sqrt{(x - \mu)^T\,\Sigma^{-1}\,(x - \mu)}.$$
The Mahalanobis distance is standardized by the inverse of the covariance matrix, $\Sigma^{-1}$. This distance measure depends on the covariance between variables, and it can give some information about the similarity of an unknown sample to a known one.
When the attributes are not related to one another, the weighted Mahalanobis distance and the weighted Euclidean distance will be equivalent (Wang and Wang [38]).
References to the TOPSIS method with Mahalanobis distance can be found in a number of papers, such as Wang and Wang [38], Garca et al. [41], Ching-Hui et al. [42] and Lahby et al. [43].
Let the ideal solution and negative ideal (anti-ideal) solution be $S^+$ and $S^-$, respectively, as in the case of TOPSIS, and let $A_i$ denote the $i$-th alternative. The Mahalanobis distance from $A_i$ to the ideal solution point is calculated as:

$$d(r_i, S^+) = \sqrt{\left(S^+ - r_i\right)^T \Omega^T\,\Sigma^{-1}\,\Omega\left(S^+ - r_i\right)}, \quad i = 1, 2, \ldots, m.$$

Similarly, the Mahalanobis distance from $A_i$ to the negative ideal solution point is calculated as:

$$d(r_i, S^-) = \sqrt{\left(S^- - r_i\right)^T \Omega^T\,\Sigma^{-1}\,\Omega\left(S^- - r_i\right)}, \quad i = 1, 2, \ldots, m,$$

where $w = (w_1, w_2, \ldots, w_n)$ is the weight vector and $\Omega = \mathrm{diag}(w_1, w_2, \ldots, w_n)$.
The closeness of each alternative is given as:

$$c_i = \frac{d(r_i, S^-)}{d(r_i, S^-) + d(r_i, S^+)}, \quad i = 1, 2, \ldots, m.$$

The results for each alternative are sorted according to the value of $c_i$; a higher $c_i$ indicates that $A_i$ is a better solution.
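The Mahalanobis variant differs from the earlier TOPSIS sketch only in the distance; a hedged illustration follows (made-up data, equal weights, all criteria treated as benefit criteria, with $\Sigma$ estimated from the vector-normalized outcomes).

```r
# TOPSIS with the Mahalanobis distance.
topsis_mahal <- function(x, w) {
  r  <- sweep(x, 2, sqrt(colSums(x^2)), "/")  # vector normalization
  Q  <- diag(w) %*% solve(cov(r)) %*% diag(w) # Omega^T Sigma^-1 Omega
  Sp <- apply(r, 2, max)                      # ideal point
  Sm <- apply(r, 2, min)                      # anti-ideal point
  d  <- function(a, b) sqrt(drop(t(a - b) %*% Q %*% (a - b)))
  dp <- apply(r, 1, d, b = Sp)
  dm <- apply(r, 1, d, b = Sm)
  dm / (dp + dm)
}
x <- cbind(RC = c(9, 6, 3, 0), P = c(10, 14, 16, 17),
           EU = c(0.20, 0.50, 0.58, 0.62))
topsis_mahal(x, w = rep(1, 3) / 3)
```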

6. Numerical Analysis

In this section, we present some numerical examples of determining the optimal initial surplus and retention level under the minimum ultimate ruin probability and maximum reinsurance criteria. We make the following assumptions: the individual claim amount has an exponential distribution with probability density function $f(x) = e^{-x}$ or a Pareto distribution with probability density function $f(x) = \frac{\alpha}{\beta}\left(1+\frac{x}{\beta}\right)^{-(\alpha+1)}$ with $\alpha = 3$ and $\beta = 4$; the number of claims per unit time has a Poisson distribution with parameter $\lambda = 500$; and the premium loading factor combinations $(\theta, \xi)$ are $(0.1, 0.15)$, $(0.1, 0.2)$, $(0.1, 0.3)$ and $(0.2, 0.3)$.
Individual claim amounts are assumed to have exponential and Pareto distributions, which have different tail structures: the Pareto is a heavy-tailed distribution, which is essential for modeling extreme losses, whereas the exponential is a light-tailed distribution. Hence, we can investigate the effects of different individual claim distributions on the optimal values. The expectations of the aggregate claims are the same; however, the variances are different.
A set of alternatives consisting of initial surplus and retention level pairs is calculated under the ultimate ruin probability constraint. The insurer's released capital, expected profit and exponential, logarithmic and fractional expected utilities are calculated by using these pairs. In the decision analysis part, we use both the AHP and TOPSIS methods to select the optimal initial surplus and retention level. The algorithm for excess of loss reinsurance can be summarized as follows.
Step 1: Calculation of the ultimate ruin probability under the translated gamma process approximation:
In the calculation of the ultimate ruin probability, we obtain the parameters of the translated gamma process and the loading factor $\hat{\theta}$ by using (7) and (11), respectively. Then, we use this loading factor in the calculation of the probability of ruin under the standardized gamma process, $\psi_{SG}(\beta u)$, with $\psi_{SG}$ defined in (8).
Step 2: Calculation of the largest initial surplus under the minimum ultimate ruin probability:
The largest initial surplus is obtained from (8) by requiring this probability to equal 0.01. The largest initial surplus depends only on $\theta$, since reinsurance is not involved in this case. When the individual claim amount has an exponential or a Pareto distribution, the largest initial surpluses, calculated for the two insurance loading factors $\theta = 0.1$ and $\theta = 0.2$, are given in Table 4.
The results show that when the individual claim amount has a Pareto distribution, the required largest initial surplus is higher than in the exponential case.
Step 3: Calculation of the smallest initial surplus under the minimum ultimate ruin probability:
When the reinsurance arrangement is involved, the moments of the insurer's net individual claims are calculated for exponential claims by using (12) and for Pareto claims by using (13)–(15). Then, the parameters of the translated gamma process are calculated according to (7). By using these parameters, the loading factor $\hat{\theta}$ is calculated by (11) and substituted into (8). The smallest initial surplus that makes the ruin probability equal to 0.01 is found by a one-dimensional optimization technique, which searches the interval from its lower to its upper bound for the relevant value of (8), in the R programming language. The corresponding retention levels $M$ for the smallest initial surplus are then calculated.
In the calculation of the smallest initial surplus u S and the corresponding retention level M, four different premium loading factors (θ, ξ) are used: (0.1, 0.15), (0.1, 0.2), (0.1, 0.3) and (0.2, 0.3). These loading factors are the same as in Dickson and Waters [25]. The smallest initial surpluses under the excess of loss reinsurance are given in Table 5. These smallest initial surpluses are used as the starting points of the alternative set.
As seen in Table 5, the required smallest initial surplus for the Pareto claims is higher than in the exponential case. The same situation is observed for the largest initial surplus.
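To make Step 2 concrete, the R sketch below finds the initial surplus with $\psi(u) = 0.01$ for exponential(1) claims with no reinsurance, reusing tg_params and psi_SG from the earlier sketches; uniroot stands in for the paper's one-dimensional optimization, and the search interval is our choice. Step 3 proceeds analogously, with the net-of-reinsurance moments and $\theta_{net}$ in place of the gross quantities.

```r
# Largest initial surplus u_L: solve psi_SG(beta * u, theta_hat) = 0.01.
p <- tg_params(lambda = 500, m1 = 1, m2 = 2, m3 = 6)
theta_hat <- 0.1 * (1 + p$k * p$beta / p$alpha)  # no reinsurance: theta_net = theta
u_L <- uniroot(function(u) psi_SG(p$beta * u, theta_hat) - 0.01,
               interval = c(1, 100))$root
u_L
```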
Step 4: Constitution of the alternative set that consists of the pair of initial surplus and retention level:
We design a set of alternatives consisting of pairs of initial surplus and retention level. We begin with the smallest initial surplus $u_S$ and increase this value in steps of 0.1 or 0.05 up to the largest initial surplus $u_L$. Then, we calculate the corresponding retention level for each initial surplus. Hence, we obtain an outcome set that represents all of the possible combinations of the retention level and the corresponding initial surplus. Each pair in the outcome set is denoted by $(u_M, M)$.
Step 5: Calculation of the optimal reinsurance criteria according to the initial surplus and the retention level:
The set of pairs $(u_M, M)$ is used in the calculation of the optimal reinsurance criteria. The released capital is calculated as $u_L - u_M$; the expected profit is calculated by (16); and the expected utilities are calculated by (17)–(19), respectively. For the calculation of the expected profit, the parameters $\alpha$, $\beta$ and $k$ of the translated gamma process are needed; these parameters are calculated according to the retention level $M$. However, in the calculation of the expected utility, not only the retention level but also the initial surplus $u_M$ is needed.
Table 6 and Table 7 illustrate each alternative initial surplus and retention level pair with their corresponding optimal reinsurance criteria for four different loading combinations under the exponential and Pareto claim case, respectively.
The analysis indicates that, because of the high variance of the Pareto distribution, the differences between the starting point (smallest initial surplus) and the ending point (largest initial surplus) under each criterion are larger than those in the exponential claims case. In particular, we can see a very wide range in the expected profit and released capital compared with the exponential claims.
Figure 1 shows the behavior of the five criteria when $\theta = 0.1$ and $\xi = 0.15$ for exponential claims under excess of loss reinsurance. The x-axis of this graph shows the index of the alternatives, the pairs $(u_M, M)$, while the results for the five criteria appear on the y-axes. We use a plot with five y-axes and one shared x-axis because of the different scales.
We can see that each criterion has a different pattern. In the excess of loss reinsurance, as the initial surplus increases, the corresponding retention level also increases in order to get a fixed ruin probability, such as 0.01. When the pair ( u M , M ) increases, the released capital decreases. However, both the expected profit and the expected utility functions increase with different slopes.
It can be clearly seen that the released capital declines steadily to zero as the initial surplus approaches $u_L$. The expected profit increases with a decreasing slope. The first alternative has the maximum released capital and the minimum expected profit and minimum expected utility. Conversely, the last alternative has the maximum expected profit and expected utility and the minimum released capital.
Step 6: Decision of the optimal pair of initial surplus and retention level under the AHP and TOPSIS methods:
In order to decide on the optimal pair of initial surplus and retention level $(u_M, M)$, we use the AHP and TOPSIS methods with four normalization and two distance measure techniques. In the TOPSIS method, we use the Euclidean and Mahalanobis distances. In the TOPSIS method with the Mahalanobis distance, we investigate the relationship between the criteria by using the vector-normalized covariance matrix. The relative proximity to the ideal solution for each alternative is used in deciding the optimal pairs $(u_M, M)$. In the decision process, it is necessary to state how the criteria or alternatives affect each other. In our study, the five criteria have different scales and maximum/minimum levels.
In order to obtain comparably scaled values and pairwise comparison matrices, we use four different normalization techniques: AHP-1, AHP-2, AHP-3 and AHP-4.
AHP-1
uses the linear scale transformation normalization technique. The normalized values are obtained by dividing the outcome of a criterion by its maximum value. Then, the maximum normalized value of each criterion is set equal to nine, as in Saaty's matrix (Table 2) (Hwang and Yoon [28]). Thus, the scale of measurement varies precisely from 1–9 for each criterion.
AHP-2
uses the vector normalization technique as used in the TOPSIS method in (20).
AHP-3
uses the min-max normalization technique, which is given below:
$$r_{ij} = \frac{x_{ij} - \min_i(x_{ij})}{\max_i(x_{ij}) - \min_i(x_{ij})}.$$
AHP-4
uses the automating pairwise comparison technique [32]. In this technique, a pairwise comparison matrix $B^{(j)}$ is constructed for each of the $m$ criteria, $j = 1, 2, \ldots, m$. The matrix $B^{(j)}$ is an $n \times n$ real matrix, where $n$ is the number of alternatives. Each element $b_{ih}^{(j)}$ of the matrix $B^{(j)}$ represents the evaluation of the $i$-th alternative compared to the $h$-th alternative with respect to the $j$-th criterion.
The $j$-th criterion takes values in the interval $[I_{j,min}, I_{j,max}]$, and $I_j(i)$ and $I_j(h)$ are the attribute values under the $i$-th and $h$-th control options. When $I_j(i) \geq I_j(h)$, the value $b_{ih}^{(j)}$ of $B^{(j)}$ is computed as:

$$b_{ih}^{(j)} = \frac{8\left(I_j(i) - I_j(h)\right)}{I_{j,max} - I_{j,min}} + 1.$$

When $I_j(i) \leq I_j(h)$, the value $b_{ih}^{(j)}$ is computed as the reciprocal:

$$b_{ih}^{(j)} = \left[\frac{8\left(I_j(h) - I_j(i)\right)}{I_{j,max} - I_{j,min}} + 1\right]^{-1}.$$
The key limitation of this technique is the assumption of a linear relationship between the difference of I j ( h ) and I j ( i ) .
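The two cases can be folded into a single R function, since for $I_j(i) < I_j(h)$ the entry is the reciprocal of the mirrored one; the data vector below is made up.

```r
# Automating pairwise comparisons for one criterion; I holds the
# attribute values of the alternatives.
pairwise_auto <- function(I) {
  B <- outer(I, I, function(a, b) 8 * (a - b) / (max(I) - min(I)) + 1)
  ifelse(B >= 1, B, 1 / (2 - B))  # reciprocal entries when I_i < I_h
}
pairwise_auto(c(9, 6, 3, 0))
```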
The variance of the outcomes has an important role in the determination of the optimal levels in the TOPSIS method. The covariance matrix of the vector-normalized outcomes when $\theta = 0.1$ and $\xi = 0.3$ is computed from the alternative set (the matrix itself is displayed as an image in the original article and is not reproduced here).
Since the variances and covariances of the expected utility criteria are small, the expected utility criteria appear less influential than the others. However, the variance of the outcomes for the released capital is higher than the variance of the corresponding outcomes for the other criteria, so it has a dominant impact on the decision making. First, we calculate the optimal initial surplus and retention level assuming that the five criteria have the same importance. Then, we observe the changes in the optimal pairs by varying the weight of the released capital in the range from 0–1.
Table 8 presents the optimal initial surplus and retention level for the exponential and Pareto claims with four different loading combinations under the two TOPSIS and four AHP methods. In this calculation, it is assumed that equal weights, namely 1/5, are allocated to each criterion. From this table, it can be seen that the optimal pairs in the Pareto case are higher than in the exponential case for all methods. In addition, the optimal levels increase when the reinsurance loading factors increase, since a higher reinsurance premium causes a decrease in the expected profit. Based on the results, the following conclusions can be drawn. First, the distance measure has a vital role in optimality. The TOPSIS method with the Mahalanobis distance gives smaller optimal pairs than the TOPSIS method with the Euclidean distance. The most likely explanation is the effect of the dependency between the criteria: when the criteria affect each other, the optimal pairs change. Hence, the TOPSIS method with the Mahalanobis distance gives more accurate and conservative results than the Euclidean case.
Second, Table 8 shows that the normalization technique plays an important role in optimality. AHP-1, AHP-2 and AHP-4 give slightly different optimal results than the TOPSIS methods, whereas AHP-3 provides quite different optimal levels from the other methods. The underlying reason for this discrepancy is the normalization technique. AHP-3 depends on the min-max normalization, which divides $(x_{ij} - \min(x_{ij}))$ by $(\max(x_{ij}) - \min(x_{ij}))$. The denominator is the range of the data set, the difference between the highest and lowest values, and the range of the released capital is the highest among the criteria. In addition, the numerator starts at zero for the expected profit and expected utility criteria, since the starting points of these criteria equal their minimum values. Hence, the normalized values of each criterion except the released capital increase towards the last points of the set of alternatives. Therefore, levels closer to the no-reinsurance case (high initial surplus and high retention level) are determined as the optimal strategy.
The AHP-4 method produces optimal results very close to those of the TOPSIS methods, since pairwise comparison techniques are used. This normalization technique is efficient especially when the difference between alternatives has a linear structure. In our analysis, since each alternative has a linear form, this normalization technique is more suitable than the other normalization techniques.
In order to verify the validity of the optimal pairs, we carry out different scenarios and examine the effect of the criteria weights on the optimal pairs. We observe the changes in the optimal levels when the weight of the released capital varies between zero and one. We choose the released capital since it has the highest variance among the criteria. The equal weight assumption does not enable us to compare the optimal results visually because of the small variance of the expected utility. Therefore, we assume that the weight of the expected profit equals the sum of the weights of the expected utility criteria. Figure 2 presents the optimal initial surplus $u^*$ and optimal retention level $M^*$ for the exponential claims when the weight of the released capital changes between zero and one.
Figure 2a presents the optimal retention levels for the six methods separately. The optimal levels are obtained as the maximum point of the closeness index in the TOPSIS methods and of the priority index in the AHP methods. When the weight of the released capital increases, the optimal levels move closer to the point where the released capital is maximal. When the weight of the released capital is zero, the optimal levels are obtained at the maximum points of the expected profit and expected utility. It can be clearly seen that the shapes of the optimal levels are different for each method. A significant limitation exists in AHP-4: beyond the level where each criterion has the same importance, the optimal retention levels are obtained at the points where the released capital is at its maximum.
Figure 2b presents changes in the optimal retention levels according to six methods in the same scale, whereas Figure 2c shows the changes in the optimal initial surplus.
Figure 3 presents the optimal initial surplus $u^*$ and optimal retention level $M^*$ for the Pareto claims when the weight of the released capital changes between zero and one.
It can be seen that the Pareto and exponential distributions give compatible results. When the weight of the released capital increases, the optimal pairs move towards the level where the released capital is at its maximum. The pattern of the optimal levels does not change under either individual claim assumption; however, the range of the optimal pairs for Pareto claims is wider than in the exponential case.
However, as can be seen from both figures, the TOPSIS method with the Mahalanobis distance gives different optimal pairs compared to the other methods. The results show that the dependency has a significant effect on the determination of optimal reinsurance levels. In order to verify our method, we compare the optimal pairs according to changes in the weights of the expected profit and expected utility functions. These results are consistent with the findings for the released capital case: when the weight of the expected profit or expected utility functions increases, the optimal pairs move towards their maximum levels as well.

7. Conclusions

In this paper, we have determined the optimal initial surplus and retention level under different constraints by using the translated gamma process approximation on infinite time horizons. From the research that has been performed, it is possible to conclude that the optimal initial surplus and retention level vary according to the optimal reinsurance criteria, released capital, expected profit and exponential-fractional-logarithmic utility functions, under the minimum ultimate ruin probability. Our study implies that the dependency between the criteria causes significant changes in the optimal levels. In the multi-attribute decision making process, we have compared the AHP and TOPSIS methods with different normalization and distance measure techniques. Based on the results, it can be concluded that the normalization techniques and distance measures play a vital role in determining the optimal levels. The proposed method can readily be used in practice. In addition, the approach is applicable to different individual claim distributions and reinsurance arrangements. The findings suggest that this approach can be useful in determining the optimal reinsurance level for different criteria under minimum ultimate ruin probabilities.

Author Contributions

These authors contributed equally to this work. The authors thank the anonymous referees for their helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. B. De Finetti. "Il Problema dei Pieni." Giornale dell'Istituto Italiano degli Attuari 11 (1940): 1–88.
2. H. Bühlmann. Mathematical Methods in Risk Theory. Grundlehren der Mathematischen Wissenschaften: A Series of Comprehensive Studies in Mathematics; Heidelberg, Germany: Springer, 1970.
3. K. Borch. "An attempt to determine the optimum amount of stop loss reinsurance." In Transactions of the 16th International Congress of Actuaries, Brussels, Belgium, 15–22 June 1960; Volume 1, pp. 597–610.
4. K.J. Arrow. "Uncertainty and the welfare economics of medical care." Am. Econ. Rev. 53 (1963): 941–973.
5. D.C.M. Dickson, and H.R. Waters. "Relative reinsurance retention levels." ASTIN Bull. 27 (1997): 207–227.
6. Z.G. Ignatov, V.K. Kaishev, and R.S. Krachunov. "Optimal retention levels, given the joint survival of cedent and reinsurer." Scand. Actuar. J. 6 (2004): 401–430.
7. M. Kaluszka. "Truncated stop loss as optimal reinsurance agreement in one-period models." ASTIN Bull. 35 (2005): 337–349.
8. D.C.M. Dickson, and H.R. Waters. "Optimal dynamic reinsurance." ASTIN Bull. 36 (2006): 415–432.
9. V.K. Kaishev, and D.S. Dimitrova. "Excess of loss reinsurance under joint survival optimality." Insur. Math. Econ. 39 (2006): 376–389.
10. C. Nie, D.C.M. Dickson, and S. Li. "Minimising the ruin probability through capital injections." Ann. Actuar. Sci. 5 (2011): 195–209.
11. M.L. Centeno. "Retention and Reinsurance Programmes." In Encyclopedia of Actuarial Science; edited by J. Teugels and B. Sundt. Hoboken, NJ, USA: John Wiley and Sons, 2004.
12. K. Aase. "Perspectives of risk sharing." Scand. Actuar. J. 2 (2002): 73–128.
13. A. Balbas, B. Balbas, and A. Heras. "Optimal reinsurance with general risk measures." Insur. Math. Econ. 44 (2009): 374–384.
14. M.L. Centeno, and O. Simoes. "Optimal reinsurance." RACSAM-Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales, Serie A: Matematicas 103 (2009): 387–404.
15. D.S. Dimitrova, and V.K. Kaishev. "Optimal joint survival reinsurance: An efficient frontier approach." Insur. Math. Econ. 47 (2010): 27–35.
16. W. Hürlimann. "Optimal reinsurance revisited – point of view of cedent and reinsurer." ASTIN Bull. 41 (2011): 547–574.
17. B.B. Karageyik, and D.C.M. Dickson. "Optimal reinsurance under multiple attribute decision making." Ann. Actuar. Sci. 10 (2016): 65–86.
18. J. Trufin, H. Albrecher, and M. Denuit. "Properties of a risk measure derived from ruin theory." Geneva Risk Insur. Rev. 36 (2011): 174–188.
19. D.C.M. Dickson. Insurance Risk and Ruin. Cambridge, UK: Cambridge University Press, 2005.
20. N.L. Bowers, H.U. Gerber, J.C. Hickman, D.A. Jones, and C.J. Nesbitt. Actuarial Mathematics. Schaumburg, IL, USA: Society of Actuaries, 1987.
21. J.A. Beekman. Two Stochastic Processes. A Halsted Press Book; Stockholm, Sweden: Almqvist and Wiksell International, 1974.
22. F.S. Dufresne, H.U. Gerber, and E.S.W. Shiu. "Risk theory with the gamma process." ASTIN Bull. 21 (1991): 177–192.
23. M. Abramowitz, and I. Stegun. Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series; New York, NY, USA: Dover Publications, 1964.
24. F.S. Dufresne, and H.U. Gerber. "Three methods to calculate the probability of ruin." ASTIN Bull. 19 (1989): 71–90.
25. D.C.M. Dickson, and H.R. Waters. "Reinsurance and ruin." Insur. Math. Econ. 19 (1996): 61–80.
26. H.H. Panjer. "Recursive evaluation of a family of compound distributions." ASTIN Bull. 12 (1981): 22–26.
27. B.B. Karageyik. "Optimal Reinsurance under Competing Benefit Criteria." Ph.D. Thesis, Department of Actuarial Sciences, Hacettepe University, Ankara, Turkey, 2015.
28. C.L. Hwang, and K. Yoon. Multiple Attribute Decision Making: Methods and Applications: A State-of-the-Art Survey. Lecture Notes in Economics and Mathematical Systems; New York, NY, USA: Springer, 1981.
29. R.V. Rao. Introduction to Multiple Attribute Decision-Making (MADM) Methods. Springer Series in Advanced Manufacturing; London, UK: Springer, 2007.
30. C.W. Churchman, R.L. Ackoff, and E.L. Arnoff. Introduction to Operations Research. Hoboken, NJ, USA: John Wiley and Sons, 1957.
31. J.J. Bernardo, and J.M. Blin. "A programming model of consumer choice among multi-attributed brands." J. Consum. Res. 4 (1977): 111–118.
32. T.L. Saaty. The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation (Decision Making Series). New York, NY, USA: McGraw-Hill, 1980.
33. T.L. Saaty. "A scaling method for priorities in hierarchical structures." J. Math. Psychol. 15 (1977): 234–281.
34. T.L. Saaty. Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World. The Analytic Hierarchy Process Series; Pittsburgh, PA, USA: RWS Publications, 1990.
35. T.L. Saaty. "How to make a decision: The analytic hierarchy process." Eur. J. Oper. Res. 48 (1990): 9–26.
36. T.L. Saaty. "Decision making with the analytic hierarchy process." Int. J. Serv. Sci. 1 (2008): 83–97.
37. M. Behzadian, S.K. Otaghsara, M. Yazdani, and J. Ignatius. "A state-of-the-art survey of TOPSIS applications." Expert Syst. Appl. 39 (2012): 13051–13069.
38. Z.X. Wang, and Y.Y. Wang. "Evaluation of the provincial competitiveness of the Chinese high-tech industry using an improved TOPSIS method." Expert Syst. Appl. 41 (2014): 2824–2831.
39. J. Antucheviciene, E.K. Zavadskas, and A. Zakarevicius. "Multiple criteria construction management decisions considering relations between criteria." Technol. Econ. Dev. Econ. 16 (2010): 109–125.
40. P.C. Mahalanobis. "On the generalised distance in statistics." Proc. Natl. Inst. Sci. India 2 (1936): 49–55.
41. A.J. García, M.G. Ibarra, and P.L. Rico. "Improvement of TOPSIS technique through integration of Mahalanobis distance: A case study." In Proceedings of the 14th Annual International Conference on Industrial Engineering Theory, Applications and Practice, Anaheim, CA, USA, 18–21 October 2009.
42. C.-H. Chang, J.-J. Lin, J.-H. Lin, and M.-C. Chiang. "Domestic open-end equity mutual fund performance evaluation using extended TOPSIS method with different distance approaches." Expert Syst. Appl. 37 (2010): 4642–4649.
43. M. Lahby, L. Cherkaoui, and A. Adib. "New multi access selection method based on Mahalanobis distance." Appl. Math. Sci. 6 (2012): 2745–2760.
Figure 1. The graph of the criteria when θ = 0.1 and ξ = 0.15 for the exponential claims.
Figure 2. The graph of the optimal initial surplus and optimal retention level when θ = 0.1 and ξ = 0.15 for the exponential claims.
Figure 3. The graph of the optimal initial surplus and optimal retention level when θ = 0.1 and ξ = 0.15 for the Pareto claims.
Table 1. Decision matrix for MADM methods.

                      Attributes (criteria) C_j
Alternatives A_i   C_1    C_2    C_3    ...   C_n
A_1                X_11   X_12   X_13   ...   X_1n
A_2                X_21   X_22   X_23   ...   X_2n
A_3                X_31   X_32   X_33   ...   X_3n
...
A_m                X_m1   X_m2   X_m3   ...   X_mn
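To make the decision matrix concrete, the sketch below builds a small matrix in the layout of Table 1 and ranks the alternatives with a basic TOPSIS routine. This is a minimal illustration rather than the implementation used in the paper: the matrix entries and equal weights are hypothetical, only Euclidean distances are supported, and the vector versus max normalization switch is there solely to show how the choice of normalization can move the closeness scores.

```python
import numpy as np

def topsis(X, weights, benefit, norm="vector"):
    """Rank m alternatives on n criteria with a basic TOPSIS routine.
    X: m x n decision matrix; weights: length-n criterion weights;
    benefit[j]: True if criterion j is to be maximized.
    Returns the relative closeness scores (larger = better)."""
    X = np.asarray(X, dtype=float)
    # Normalize the decision matrix (two common choices).
    R = X / np.sqrt((X ** 2).sum(axis=0)) if norm == "vector" else X / X.max(axis=0)
    V = R * weights                                           # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))   # positive ideal solution
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))    # negative ideal solution
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))           # Euclidean distance to ideal
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))            # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Hypothetical 3 x 4 decision matrix in the layout of Table 1.
X = [[7.0, 26.0, 0.60, 0.116],
     [9.0, 21.0, 0.71, 0.122],
     [4.0, 30.0, 0.66, 0.126]]
w = np.full(4, 0.25)                      # equal weights, for illustration only
benefit = np.array([True, True, True, True])

for norm in ("vector", "max"):
    print(norm, np.round(topsis(X, w, benefit, norm=norm), 3))
```

Running the same matrix through both normalizations can shift the closeness scores, which is exactly the kind of sensitivity the comparison in the paper is concerned with.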
Table 2. The scale of Saaty's pairwise comparison matrix and its description.

Definition                             Index
Equally important                      1
Equally to slightly more important     2
Slightly more important                3
Slightly to much more important        4
Much more important                    5
Much to far more important             6
Far more important                     7
Far to extremely more important        8
Extremely more important               9
Table 3. The random index (RI) values for a matrix of order n.

n    3     4     5     6     7     8
RI   0.58  0.90  1.12  1.24  1.32  1.41
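Tables 2 and 3 supply the two ingredients of an AHP consistency check: judgments on Saaty's 1–9 scale form a pairwise comparison matrix, and the random index RI(n) scales the consistency index CI = (λ_max − n)/(n − 1) into the consistency ratio CR = CI/RI. The sketch below illustrates this; the 3 × 3 comparison matrix is hypothetical, and the eigenvector method is one of several ways to extract the weights.

```python
import numpy as np

# Random index values from Table 3, keyed by matrix order n.
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def ahp_weights(A):
    """Priority weights and consistency ratio of a pairwise comparison
    matrix A whose entries come from Saaty's 1-9 scale (Table 2)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)               # principal eigenvalue lambda_max
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # normalized priority weights
    CI = (eigvals[k].real - n) / (n - 1)      # consistency index
    CR = CI / RI[n]                           # consistency ratio; CR <= 0.1 is
    return w, CR                              # conventionally acceptable

# Hypothetical judgments: criterion 1 slightly more important than 2, etc.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
w, CR = ahp_weights(A)
print(np.round(w, 3), round(CR, 3))  # weights roughly [0.64 0.26 0.10], CR about 0.03
```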
Table 4. The largest initial surplus, u_L, for the translated gamma process.

Individual Claims          θ = 0.1   θ = 0.2
Exponential Distribution   49.638    26.591
Pareto Distribution        79.774    45.090
Table 5. The smallest initial surplus, u_S, under the excess of loss reinsurance.

                     θ = 0.1, ξ = 0.15   θ = 0.1, ξ = 0.2   θ = 0.1, ξ = 0.3   θ = 0.2, ξ = 0.3
Exponential Claims        27.798              38.307             45.758             14.367
Pareto Claims             30.382              44.510             57.616             15.692
Table 6. Alternative sets for the exponential claims under the excess of loss reinsurance. Columns: alternative number (No.), initial surplus (u), retention level (M), released capital (RC), expected profit (EP), expected exponential utility (EeU), expected fractional utility (EfU) and expected logarithmic utility (ElU); the first and last three alternatives of each set are shown.

θ = 0.1 and ξ = 0.15
No.    u       M      RC      EP      EeU    EfU    ElU
1      27.763  0.852  21.875  18.000  0.582  1.123  0.116
2      27.863  0.917  21.775  20.022  0.598  1.124  0.117
3      27.963  0.946  21.675  20.865  0.604  1.124  0.117
...
217    49.363  7.300   0.275  49.949  0.832  1.136  0.128
218    49.463  7.780   0.175  49.968  0.832  1.136  0.128
219    49.563  8.800   0.075  49.989  0.832  1.136  0.128

θ = 0.1 and ξ = 0.2
No.    u       M      RC      EP      EeU    EfU    ElU
1      38.302  1.548  11.336  28.740  0.713  1.130  0.122
2      38.402  1.652  11.236  30.839  0.723  1.131  0.123
3      38.502  1.705  11.136  31.822  0.728  1.131  0.123
...
112    49.402  7.420   0.236  49.940  0.833  1.136  0.128
113    49.502  8.158   0.136  49.971  0.833  1.136  0.128
114    49.602  9.752   0.036  49.994  0.833  1.136  0.128

θ = 0.1 and ξ = 0.3
No.    u       M      RC      EP      EeU    EfU    ElU
1      45.758  2.669  3.880   39.602  0.789  1.134  0.126
2      45.858  2.907  3.780   41.808  0.797  1.134  0.126
3      45.958  3.017  3.680   42.661  0.800  1.135  0.127
...
37     49.358  7.050  0.280   49.870  0.832  1.136  0.128
38     49.458  7.570  0.180   49.923  0.832  1.136  0.128
39     49.558  8.631  0.080   49.973  0.832  1.136  0.128

θ = 0.2 and ξ = 0.3
No.    u       M      RC      EP      EeU    EfU    ElU
1      14.367  0.835  12.224  34.941  0.611  1.123  0.116
2      14.467  0.925  12.124  40.550  0.651  1.125  0.117
3      14.567  0.967  12.024  42.949  0.667  1.125  0.118
...
121    26.367  6.943   0.224  99.855  0.902  1.137  0.129
122    26.467  7.650   0.124  99.928  0.902  1.137  0.129
123    26.567  9.764   0.024  99.991  0.902  1.137  0.129
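A pattern worth noting in Table 6: in each block, the initial surplus and the released capital appear to sum to the corresponding largest initial surplus u_L of Table 4 (for example, 27.763 + 21.875 = 49.638 when θ = 0.1, and 14.367 + 12.224 = 26.591 when θ = 0.2), which is consistent with reading the released capital as RC = u_L − u. A quick numerical check, under that assumption:

```python
# Spot check of the apparent relation RC = u_L - u, using exponential-claims
# rows of Table 6 with theta = 0.1, where Table 4 gives u_L = 49.638.
u_L = 49.638
rows = [(27.763, 21.875), (27.963, 21.675), (49.563, 0.075)]  # (u, RC) pairs
for u, rc in rows:
    assert abs((u_L - u) - rc) < 1e-6, (u, rc)
print("all sampled rows satisfy RC = u_L - u")
```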
Table 7. Alternative sets for the Pareto claims under the excess of loss reinsurance. Columns as in Table 6; the first and last three alternatives of each set are shown.

θ = 0.1 and ξ = 0.15
No.    u       M         RC      EP      EeU    EfU    ElU
1      30.382     0.935  49.392  16.774  0.593  1.123  0.115
2      30.482     1.003  49.292  18.432  0.606  1.123  0.116
3      30.582     1.034  49.192  19.152  0.611  1.124  0.117
...
492    79.482   335.481   0.292  49.999  0.897  1.091  0.087
493    79.582   504.920   0.192  49.999  0.897  1.091  0.087
494    79.682  1041.301   0.092  50.000  0.897  1.091  0.087

θ = 0.1 and ξ = 0.2
No.    u       M         RC      EP      EeU    EfU    ElU
1      44.510     1.795  35.264  25.511  0.729  1.130  0.122
2      44.610     1.917  35.164  27.288  0.737  1.130  0.123
3      44.710     1.972  35.064  28.039  0.741  1.131  0.123
...
351    79.510   369.987   0.264  50.000  0.897  1.138  0.129
352    79.610   589.278   0.164  50.000  0.897  1.138  0.129
353    79.710  1490.600   0.064  50.000  0.897  1.138  0.129

θ = 0.1 and ξ = 0.3
No.    u       M         RC      EP      EeU    EfU    ElU
1      57.616     3.371  22.158  34.336  0.812  1.134  0.126
2      57.716     3.615  22.058  36.011  0.817  1.134  0.126
3      57.816     3.728  21.958  36.700  0.820  1.134  0.126
...
220    79.516   378.826   0.258  50.000  0.900  1.138  0.129
221    79.616   612.581   0.158  50.000  0.900  1.138  0.129
222    79.716  1653.219   0.058  50.000  0.900  1.138  0.129

θ = 0.2 and ξ = 0.3
No.    u       M         RC      EP       EeU    EfU    ElU
1      15.692    0.917   29.398   32.612  0.603  1.123  0.116
2      15.792    1.010   29.298   37.179  0.636  1.124  0.117
3      15.892    1.054   29.198   39.195  0.650  1.124  0.117
...
292    44.792  310.269    0.298  100.000  0.924  1.138  0.129
293    44.892  465.985    0.198  100.000  0.924  1.138  0.129
294    44.992  938.272    0.098  100.000  0.924  1.138  0.129
Table 8. Optimal pairs of initial surplus and retention level (u and M) under the TOPSIS and AHP methods.

Exponential Claims
Method     θ = 0.1, ξ = 0.15   θ = 0.1, ξ = 0.2   θ = 0.1, ξ = 0.3   θ = 0.2, ξ = 0.3
TOPSIS-E   30.663 and 1.326    39.202 and 1.932   45.858 and 2.907   16.067 and 1.334
AHP-1      35.363 and 1.883    39.802 and 2.086   45.858 and 2.907   18.367 and 1.829
AHP-2      31.463 and 1.420    38.902 and 1.846   45.858 and 2.907   16.167 and 1.355
AHP-3      48.363 and 5.392    49.202 and 6.675   49.558 and 8.631   25.067 and 4.445
AHP-4      31.363 and 1.407    38.302 and 1.548   45.958 and 3.017   16.167 and 1.355
TOPSIS-M   28.263 and 1.010    38.502 and 2.010   46.858 and 3.699   14.567 and 0.967

Pareto Claims
Method     θ = 0.1, ξ = 0.15   θ = 0.1, ξ = 0.2    θ = 0.1, ξ = 0.3      θ = 0.2, ξ = 0.3
TOPSIS-E   37.782 and 1.740    47.910 and 2.815    58.916 and 4.987      20.092 and 2.266
AHP-1      47.682 and 3.409    51.710 and 3.676    59.516 and 4.780      25.292 and 3.405
AHP-2      40.882 and 2.378    47.910 and 2.815    58.416 and 4.728      21.792 and 2.435
AHP-3      62.582 and 7.643    74.110 and 21.763   79.716 and 1653.219   34.992 and 8.805
AHP-4      33.882 and 1.513    47.710 and 2.770    58.116 and 3.973      15.992 and 1.090
TOPSIS-M   38.382 and 2.182    44.510 and 1.795    79.616 and 612.581    44.792 and 310.269
