Article

A Quantum-Type Approach to Non-Life Insurance Risk Modelling

Claude Lefèvre, Stéphane Loisel, Muhsin Tamturk and Sergey Utev
1 Département de Mathématique, Université Libre de Bruxelles, Campus de la Plaine C.P. 210, B-1050 Bruxelles, Belgium
2 ISFA, Université Lyon 1, LSAF EA2429, 50 Avenue Tony Garnier, F-69007 Lyon, France
3 Department of Mathematics, University of Leicester, University Road, Leicester LE1 7RH, UK
* Author to whom correspondence should be addressed.
Risks 2018, 6(3), 99; https://doi.org/10.3390/risks6030099
Submission received: 30 July 2018 / Revised: 24 August 2018 / Accepted: 11 September 2018 / Published: 14 September 2018

Abstract

A quantum mechanics approach is proposed to model non-life insurance risks and to compute future reserve amounts and ruin probabilities. The claim data, historical or simulated, are treated as coming from quantum observables and analyzed with traditional machine learning tools. They can then be used to forecast the evolution of the reserves of an insurance company. The proposed methodology relies on the Dirac matrix formalism and the Feynman path-integral method.

1. Introduction

The theory of non-life insurance risk is a major topic in actuarial sciences. The literature is wide and varied, and a comprehensive review can be found in the books Asmussen and Albrecher (2010); Dickson (2017); Schmidli (2018).
This paper proposes a quantum-type approach for the representation and analysis of non-life insurance data. Quantum mechanics methods have been successfully applied in various disciplines, including finance for option pricing (e.g., Baaquie 2007, 2010) and econophysics for risk management (e.g., Bouchaud and Potters 2003; Mantegna and Stanley 2000). Their application to insurance, however, is an emerging field of research, introduced recently in Tamturk and Utev (2018).
Overall, the present approach is new and consists in representing the observations of an insurance risk as quantum data, that is, as data arising from a quantum mechanical type model. This methodology is based on the Dirac matrix formalism (Dirac 1933) and the Feynman path-integral method (Feynman 1948). First, claim data obtained from the past or by simulation are analyzed with standard machine learning tools such as classification, maximum likelihood estimation and risk error function techniques. Then, these data can be used to determine the distribution of the reserve process and the associated finite-time ruin probabilities.
Data analysis plays a key role in many areas, and learning techniques provide a key tool for this purpose (e.g., Bishop 2006; Quinlan 1988). In actuarial sciences, practitioners often use such techniques to analyze data and to predict future losses. Taking missing data into account is also important in practice (Graham 2009). This arises in insurance with unreported claims and frauds; the topic will be briefly addressed. Political and economic changes are another risk factor for companies, due to possible inflation and trade restrictions; such a situation will be sketched too. An advantage of our framework pertains to handling unknown probabilities of repeated events which, in our experience, can be bypassed with an adapted quantum data representation.
The paper is organized as follows. Section 2 presents the compound Poisson risk process when repeated claims are reported or not, and the two corresponding quantum risk models. For simplicity, the claim amounts are assumed to have a two-point distribution. The data, however, will be treated as values observed with errors, which broadens somewhat the applicability of the analysis. In Section 3, the so-called quantum observables are constructed for the two quantum models. This amounts to determining the eigenvalues of a Hermitian operator. In Section 4, the existence of Maxwell-Boltzmann and Bose-Einstein statistics is explicitly indicated, and the associated likelihood functions are derived. Section 5 deals with the estimation of the claim amount distribution from a set of data, historical or simulated. As mentioned before, the method followed is rather simple and standard, and we then discuss several numerical examples. In Section 6, we show how to compute, in the quantum context, the distribution of the reserves of the company in the course of time. We then obtain the probabilities of ruin over a finite time horizon, and this is again illustrated numerically.

2. Quantum Risk Models

Consider the classical compound Poisson risk process (Asmussen and Albrecher 2010; Dickson 2017; Schmidli 2018). The reserve process $\{R(t), t \geq 0\}$ is defined by
$$R(t) = x_0 + ct - S(t),$$
where $x_0$ is the initial capital, $c$ is the constant premium rate and $S(t)$ denotes the total claim amount up to time $t$, defined by
$$S(t) = \sum_{j=1}^{N(t)} X_j,$$
where $\{N(t)\}$ is a Poisson process of rate $\lambda$ and the $X_j$ are the claim amounts, i.i.d. random variables ($X_j =_d X$).
For simplicity, we assume here that each claim has a two-point distribution given by
$$X = \begin{cases} d & \text{with probability } q, \\ u & \text{with probability } p. \end{cases}$$
Typically, d represents a small amount of claim and u a significant claim amount.
Complete data. In this case, the observed data are treated as coming from the classical model. These data are collected at regular times $\Delta t, 2\Delta t, \ldots$, and they provide us with the cumulative claim amounts during each interval. The periods $\Delta t$ are small enough to reasonably assume that there are at most two claims per period. Hence, we have
$$S(\Delta t) = \begin{cases} 0 & \text{with probability } \delta_0, \\ d & \text{with probability } q\,\delta_1, \\ u & \text{with probability } p\,\delta_1, \\ d+u & \text{with probability } 2qp\,\delta_2, \\ 2d & \text{with probability } q^2\delta_2, \\ 2u & \text{with probability } p^2\delta_2, \end{cases} \qquad (1)$$
where
$$\delta_1 = P[N(\Delta t) = 1] = e^{-\lambda \Delta t}\,\lambda \Delta t, \quad \delta_2 = P[N(\Delta t) = 2] = e^{-\lambda \Delta t}\,(\lambda \Delta t)^2/2, \quad \delta_0 = 1 - \delta_1 - \delta_2 \geq P[N(\Delta t) = 0] = e^{-\lambda \Delta t}.$$
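To make the per-period claim distribution concrete, here is a minimal Python sketch; the parameter values for λ, Δt, u, d, p are illustrative assumptions, not values taken from the numerical examples later in the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's example values)
lam, dt = 1.0, 1.0
u, d, p = 7.0, 3.0, 0.3
q = 1.0 - p

# Poisson probabilities of 1 and 2 claims in a period; delta0 absorbs the rest
delta1 = np.exp(-lam * dt) * lam * dt
delta2 = np.exp(-lam * dt) * (lam * dt) ** 2 / 2.0
delta0 = 1.0 - delta1 - delta2

# Possible values of S(dt) and their probabilities, as in (1)
values = np.array([0.0, d, u, d + u, 2 * d, 2 * u])
probs  = np.array([delta0, q * delta1, p * delta1,
                   2 * q * p * delta2, q ** 2 * delta2, p ** 2 * delta2])
assert abs(probs.sum() - 1.0) < 1e-12

# Simulate a few periods of aggregate claims from this distribution
rng = np.random.default_rng(0)
print(dict(zip(values, probs)))
print(rng.choice(values, size=10, p=probs))
```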
Quantum data. This time, the observed data are treated as a sample of eigenvalues of operators and are referred to as quantum data. Recall that, from the quantum mechanical point of view, the observables are eigenvalues of certain Hermitian operators / self-adjoint matrices. For a nice introduction to that theory, the reader is referred, e.g., to Griffiths and Schroeter (2018); Plenio (2002); a thorough analysis is provided in Parthasarathy (1992).
Thus, the different possible claim amounts $0, d, u, d+u, 2d, 2u$ are considered as energy levels of particles and are treated as the eigenvalues of an operator H which has to be modelled. This is a special choice that must be made with care.
Data with missing values. As before, data on cumulative claim amounts are collected at regular times $\Delta t, 2\Delta t, \ldots$ with small $\Delta t$. Now, however, we assume that the cases of repeated claims (i.e., $2d$ and $2u$) are not observed. Unreported claims of this kind can be viewed as a deliberate omission. We then have
$$S(\Delta t) = \begin{cases} 0 & \text{with some probability } p_0, \\ d & \text{with some probability } p_1, \\ u & \text{with some probability } p_2, \\ d+u & \text{with some probability } p_3. \end{cases} \qquad (2)$$
This raises the question of how to deal with the unknown probabilities.
Adjusted quantum data. Quantum data can be adjusted to handle missing values in several ways. Three cases are examined here.
Way 1. We use the same quantum observable operator H as in the classical model. The missing unknown probabilities are thus considered as 0.
Way 2. The values $2d$ and $2u$ are not eigenvalues of the observables. This requires deriving a different Hamiltonian.
Way 3. We consider as the only possible jumps either no jump or the 1-step jumps $d$, $u$, $d+u$. A new Hamiltonian is then obtained.
Data and simulation. We have assumed that each claim has only two possible values d and u. Nevertheless, the data obtained by simulation are observed values tainted by errors. For example, a simple dataset such as {4, 7, 2, 11, 3, 6} can be treated as generated by either (1) or (2) with $(u = 7, d = 3)$ observed with errors. In Section 5, we will discuss and illustrate different simulation procedures.

3. Quantum Observables

In this section, we will construct the Hermitian operator corresponding to the two quantum risk models presented above. We start with some usual notation and preliminaries.
Independent observables. Given two observables A, B, the tensor product $A \otimes B$ acts as a quantum product of two independent observables. So, $\ln(A \otimes B)$ acts as a quantum sum of two independent observables. In particular, $B \otimes B$ is the quantum product of two i.i.d. observables, and $\ln(B \otimes B)$ the quantum sum of two i.i.d. observables.
In our case, the basic 1-step quantum claim variable is a $2 \times 2$ matrix B with eigenvalues $\exp(u), \exp(d)$, interpreted as a 1-step jump geometric random walk, $B \otimes B$ as a 2-step jump geometric random walk, etc. To model the standard random walk, we first consider the geometric random walk and then take the logarithm.
An identity operator $I_n$ (in dimension n) is introduced that does not affect the dynamics. Indeed, $I_n \otimes B$ corresponds to multiplying by 1 at the first step, while $B \otimes I_n$ corresponds to multiplying by 1 at the second step. Note that, in general, a tensor product is not commutative.
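As a quick illustration of this construction (with an arbitrary rotation as the unitary V and illustrative values of u and d), one can check numerically that the eigenvalues of the quantum sum ln(B ⊗ B) are the pairwise sums of claim amounts:

```python
import numpy as np

u, d = 7.0, 3.0                          # illustrative claim amounts (assumption)
theta = 0.4                              # any rotation angle gives a 2x2 (real) unitary
V = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
B = V.T @ np.diag([np.exp(u), np.exp(d)]) @ V   # 1-step exponential jump claim operator

# eigenvalues of the quantum sum ln(B x B): the pairwise sums 2d, d+u, u+d, 2u
print(np.round(np.log(np.linalg.eigvalsh(np.kron(B, B))), 6))           # [ 6. 10. 10. 14.]
# eigenvalues of ln(B x I2): the 1-step amounts d, d, u, u
print(np.round(np.log(np.linalg.eigvalsh(np.kron(B, np.eye(2)))), 6))   # [3. 3. 7. 7.]
```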
Partitioned space. To deal with an event space which is partitioned into n events, we work with the n orthogonal projection (Hermitian) operators P i onto the eigenspace of observable A.

3.1. Quantum Data

The operator H is constructed as a projection on the three claim (jump) cases $i = 0, 1, 2$. Given that case i occurs, the claims are defined as a quantum-type random walk as described above. Applying the argument outlined above with standard quantum-type calculations (Griffiths and Schroeter 2018; Parthasarathy 1992; Plenio 2002), we derive our first observable operator
$$H = P_0 \otimes O_4 + P_1 \otimes \ln(B \otimes I_2) + P_2 \otimes \ln(B^{\otimes 2}). \qquad (3)$$
More explicitly, the matrices $B$ and $B^{\otimes 2}$ are the 1-step and 2-step exponential jump claim operators defined by
$$B = V^* D V = V^* \begin{pmatrix} e^u & 0 \\ 0 & e^d \end{pmatrix} V, \qquad B^{\otimes 2} = (V^* \otimes V^*)(D \otimes D)(V \otimes V),$$
where V is a $2 \times 2$ unitary matrix, $V^*$ is its adjoint and $I_2$ is the $2 \times 2$ identity matrix that corresponds to the absence of a second claim. Notice that $I_2 = V^* V$. So, the actual 1-step claim operator is $\ln(B \otimes I_2)$, computed as
$$\ln(B \otimes I_2) = (V^*)^{\otimes 2} \begin{pmatrix} u & 0 & 0 & 0 \\ 0 & u & 0 & 0 \\ 0 & 0 & d & 0 \\ 0 & 0 & 0 & d \end{pmatrix} V^{\otimes 2},$$
and the 2-step claim operator is $\ln(B^{\otimes 2})$, given by
$$\ln(B^{\otimes 2}) = (V^{\otimes 2})^* \begin{pmatrix} 2u & 0 & 0 & 0 \\ 0 & u+d & 0 & 0 \\ 0 & 0 & d+u & 0 \\ 0 & 0 & 0 & 2d \end{pmatrix} V^{\otimes 2}.$$
Moreover, let $D_{i|n}$, $1 \leq i \leq n$, be the $n \times n$ diagonal matrix which has a single non-zero element given by $(D_{i|n})_{i,i} = 1$, i.e., $(D_{i|n})_{k,m} = 0$ for $(k,m) \neq (i,i)$. The $3 \times 3$ matrices $P_0, P_1, P_2$ are the 0-, 1-, 2-claim occurrence operators (projections) defined by
$$P_i = W^* D_{i+1|3} W, \quad i = 0, 1, 2,$$
where W is a $3 \times 3$ unitary matrix. Finally, denote by $O_n$ an extra $n \times n$ matrix with all elements being 0. The matrix $O_4$ corresponds to a 0 claim size and is given by
$$O_4 = (V^*)^{\otimes 2} (O_2 \otimes O_2) V^{\otimes 2}.$$
Overall, we then obtain
$$H = U^*\,\mathrm{diag}\,(0, 0, 0, 0,\; u, u, d, d,\; 2u, u+d, d+u, 2d)\;U,$$
where U is a $12 \times 12$ unitary matrix such that
$$U = W \otimes V \otimes V, \qquad U^* = W^* \otimes V^* \otimes V^*.$$
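The following sketch assembles this operator numerically (the unitaries V and W are arbitrary choices, and the values of u and d are illustrative) and checks that its spectrum is exactly {0, u, d, u + d, 2u, 2d}:

```python
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def D(i, n):
    """D_{i|n}: n x n diagonal matrix with a single 1 at position (i, i) (1-based)."""
    M = np.zeros((n, n)); M[i - 1, i - 1] = 1.0; return M

u, d = 7.0, 3.0                                                       # illustrative claim amounts
V = rotation(0.4)                                                     # arbitrary 2x2 (real) unitary
W = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))[0]     # arbitrary 3x3 unitary
V2 = np.kron(V, V)

# projections P_0, P_1, P_2 on the 0-, 1-, 2-claim occurrences
P = [W.T @ D(i + 1, 3) @ W for i in range(3)]

# 1-step and 2-step claim operators, written directly in diagonalized form
O4    = np.zeros((4, 4))
lnBI2 = V2.T @ np.diag([u, u, d, d]) @ V2
lnBB  = V2.T @ np.diag([2 * u, u + d, d + u, 2 * d]) @ V2

H = np.kron(P[0], O4) + np.kron(P[1], lnBI2) + np.kron(P[2], lnBB)
print(np.round(np.sort(np.linalg.eigvalsh(H)), 6))
# four zeros, then d, d, 2d, u, u, u+d, u+d, 2u -- here [0 0 0 0 3 3 6 7 7 10 10 14]
```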

3.2. Adjusted Quantum Data

We consider the three ways indicated before to handle missing data.
Way 1. This is the same as the previous quantum observable H. However, the probabilities p 2 u of 2 u and p 2 d of 2 d are taken equal to 0.
Way 2. Now, the values $2u$ and $2d$ are not taken into consideration as eigenvalues. The new observable operator H is then
$$H = P_0 \otimes O_4 + P_1 \otimes \ln(B \otimes I_2) + P_2 \otimes \big(\ln(B^{\otimes 2})\, S\big), \qquad (4)$$
where $S = D_{2|4} + D_{3|4}$ is a projection-type operator in the previous notation $D_{i|n}$.
This operator S is applied because the capital movement can be exposed to an unusual change at the second step (it cannot reach $2u$ or $2d$). In this case, the probabilities of $2u$ and $2d$ are not equal to 0.
Way 3. This time, we consider $u + d$ as a first jump. The new observable operator H is
$$H = \tilde P_0 \otimes O_3 + \tilde P_1 \otimes \ln(B \otimes I_1). \qquad (5)$$
Here, $I_1 = [1]$ is the $1 \times 1$ identity matrix and
$$\ln(B \otimes I_1) = \ln(B) = \tilde V^* \begin{pmatrix} u & 0 & 0 \\ 0 & d & 0 \\ 0 & 0 & u+d \end{pmatrix} \tilde V,$$
where $\tilde V$ is a $3 \times 3$ unitary matrix. Moreover, the $2 \times 2$ matrices $\tilde P_0, \tilde P_1$ are the 0- and 1-claim occurrence operators defined by
$$\tilde P_i = \tilde W^* D_{i+1|2} \tilde W, \quad i = 0, 1,$$
where $\tilde W$ is a $2 \times 2$ unitary matrix.
Therefore, we have
$$H = \tilde U^*\,\mathrm{diag}\,(0, 0, 0,\; u, d, u+d)\;\tilde U,$$
where $\tilde U = \tilde W \otimes \tilde V$.

4. Quantum Likelihood

In the Dirac formalism, the so-called bra-ket notation has proven very useful and easy to handle, and has become standard in quantum mechanics. We recall it briefly; more detail can be found, e.g., in Griffiths and Schroeter (2018); Parthasarathy (1992); Plenio (2002). Consider a class of $n \times n$ matrices treated as a C*-algebra. A column vector x is represented as a ket-vector $|x\rangle$. The associated bra-vector $\langle x|$ is a row vector defined as its Hermitian conjugate. Then, $\langle x | y \rangle$ corresponds to the usual inner product. Moreover, $|x\rangle\langle y|$ is the outer product, i.e., an operator/matrix defined by
$$\big(|x\rangle\langle y|\big)\,|z\rangle = \langle y | z\rangle\,|x\rangle \qquad (\text{the } abc = (bc)\,a \text{ rule}).$$
In particular, for any unit vector e, $P_e = |e\rangle\langle e|$ defines a projection operator which acts as
$$P_e\,|x\rangle = |e\rangle\langle e|\,|x\rangle = \langle e | x\rangle\,|e\rangle.$$
Let ρ be the density operator which describes the statistical state of the system. The projection operator plays the role of an event, and the probability of finding the system in the state e is defined as the expectation
$$E(P_e) = E\big[\,|e\rangle\langle e|\,\big] = \mathrm{tr}(\rho P_e),$$
where $\mathrm{tr}$ denotes the trace. Now, an operator $A \in C^*$ is an observable if A is self-adjoint ($A = A^*$). Thanks to that property, A can be expanded over its spectrum $\{\beta_i\}$ by projection operators, i.e.,
$$A = \sum_i \beta_i P_{e_i} = \sum_i \beta_i\,|e_i\rangle\langle e_i|,$$
in which $e_i$ is the eigenvector for $\beta_i$. The probability of the measurement is extended by linearity of the expectation as
$$E(A) = E\Big[\sum_i \beta_i\,|e_i\rangle\langle e_i|\Big] = \sum_i \beta_i\,E\big[\,|e_i\rangle\langle e_i|\,\big] = \sum_i \beta_i\,\mathrm{tr}(\rho P_{e_i}).$$
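A small numerical illustration of these rules (the density ρ and the observable A below are arbitrary 2 × 2 examples, not objects from the insurance model):

```python
import numpy as np

rho = np.array([[0.7, 0.1],
                [0.1, 0.3]])          # an arbitrary density matrix: Hermitian, PSD, trace 1

A = np.array([[2.0, 1.0],
              [1.0, 0.0]])            # an arbitrary observable (self-adjoint)

beta, vecs = np.linalg.eigh(A)        # spectral decomposition A = sum_i beta_i |e_i><e_i|

# probability of each eigenvalue: tr(rho P_{e_i}) with P_{e_i} = |e_i><e_i|
probs = [np.trace(rho @ np.outer(vecs[:, i], vecs[:, i])).real for i in range(len(beta))]
print(probs, sum(probs))              # the probabilities sum to 1

# expectation of A: sum_i beta_i tr(rho P_{e_i}), which equals tr(rho A)
print(sum(b * pr for b, pr in zip(beta, probs)), np.trace(rho @ A).real)
```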
We are ready to go back to the insurance risk process. First, we examine the case of quantum data using two classical models developed in the literature.

4.1. Maxwell-Boltzmann Statistics

We consider the model (1) with the operator (3). The eigenvalues of the operator H are 0 , u , d , u + d , 2 u , 2 d . The probabilities of finding the system in the corresponding eigenstates are defined via the quantum method sketched before. Now, we assume that the eigenvalues are observed independently and that the density ρ is defined as
$$\rho = \rho_1 \otimes \rho_2 = \left(W^* \begin{pmatrix} \delta_0 & 0 & 0 \\ 0 & \delta_1 & 0 \\ 0 & 0 & \delta_2 \end{pmatrix} W\right) \otimes \left((V^*)^{\otimes 2} \begin{pmatrix} p^2 & 0 & 0 & 0 \\ 0 & pq & 0 & 0 \\ 0 & 0 & qp & 0 \\ 0 & 0 & 0 & q^2 \end{pmatrix} V^{\otimes 2}\right),$$
with $\rho_2$ itself having an independence-type tensor product representation given by
$$\rho_2 = \left(V^* \begin{pmatrix} p & 0 \\ 0 & q \end{pmatrix} V\right) \otimes \left(V^* \begin{pmatrix} p & 0 \\ 0 & q \end{pmatrix} V\right),$$
so as to satisfy the following restrictions:
$$\mathrm{tr}(\rho P_0) = \delta_0, \quad \mathrm{tr}(\rho P_u) = p\delta_1, \quad \mathrm{tr}(\rho P_d) = q\delta_1, \quad \mathrm{tr}(\rho P_{u+d}) = 2pq\delta_2, \quad \mathrm{tr}(\rho P_{2u}) = p^2\delta_2, \quad \mathrm{tr}(\rho P_{2d}) = q^2\delta_2, \qquad (6)$$
where $P_\beta$ is the projection operator on the eigenvalue β, given by
$$P_\beta = U^* D_\beta U,$$
where, using the notation $D_{i|n}$,
$$D_0 = D_{1|3} \otimes (D_{1|4} + \cdots + D_{4|4}), \quad D_u = D_{2|3} \otimes (D_{1|4} + D_{2|4}), \quad D_d = D_{2|3} \otimes (D_{3|4} + D_{4|4}),$$
$$D_{2u} = D_{3|3} \otimes D_{1|4}, \quad D_{2d} = D_{3|3} \otimes D_{4|4}, \quad D_{u+d} = D_{3|3} \otimes (D_{2|4} + D_{3|4}).$$
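Under the same illustrative choices of unitaries and parameters as in the earlier sketches, the restrictions (6) can be verified numerically:

```python
import numpy as np

def rotation(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def D(i, n):
    M = np.zeros((n, n)); M[i - 1, i - 1] = 1.0; return M   # D_{i|n}

# Illustrative parameters (assumptions)
lam, dt, u, d, p = 1.0, 1.0, 7.0, 3.0, 0.3
q = 1.0 - p
d1 = np.exp(-lam * dt) * lam * dt
d2 = np.exp(-lam * dt) * (lam * dt) ** 2 / 2
d0 = 1.0 - d1 - d2

V = rotation(0.4)                                                     # arbitrary 2x2 unitary
W = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))[0]     # arbitrary 3x3 unitary
V2 = np.kron(V, V)
U = np.kron(W, V2)                                                    # U = W (x) V (x) V

# Maxwell-Boltzmann density rho = rho1 (x) rho2
rho1 = W.T @ np.diag([d0, d1, d2]) @ W
rho2 = np.kron(V.T @ np.diag([p, q]) @ V, V.T @ np.diag([p, q]) @ V)
rho = np.kron(rho1, rho2)

# Projections P_beta = U* D_beta U on each eigenvalue of H
D_beta = {
    '0'  : np.kron(D(1, 3), np.eye(4)),
    'u'  : np.kron(D(2, 3), D(1, 4) + D(2, 4)),
    'd'  : np.kron(D(2, 3), D(3, 4) + D(4, 4)),
    '2u' : np.kron(D(3, 3), D(1, 4)),
    'u+d': np.kron(D(3, 3), D(2, 4) + D(3, 4)),
    '2d' : np.kron(D(3, 3), D(4, 4)),
}
for name, Db in D_beta.items():
    print(name, round(np.trace(rho @ (U.T @ Db @ U)), 6))
# expected values: d0, p*d1, q*d1, p^2*d2, 2*p*q*d2, q^2*d2, as in (6)
```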
After standard but relatively lengthy calculations, we obtain the following existence result.
Lemma 1 (Maxwell-Boltzmann density).
The set of densities satisfying the above restrictions (6) is not empty. Moreover, there exists a density of the form ρ = ρ 1 ρ 2 corresponding to Maxwell-Boltzmann statistics.

4.2. Bose-Einstein Statistics

We examine the same model (1) with the operator (3) but when the eigenvalues are not observed independently. More precisely, we assume that the eigenvalues u + d and d + u cannot be distinguished and that the density ρ has to satisfy the following restrictions
$$\mathrm{tr}(\rho P_0) = \delta_0, \quad \mathrm{tr}(\rho P_u) = p\delta_1, \quad \mathrm{tr}(\rho P_d) = q\delta_1, \quad \mathrm{tr}(\rho P_{u+d}) = Cpq\delta_2, \quad \mathrm{tr}(\rho P_{2u}) = Cp^2\delta_2, \quad \mathrm{tr}(\rho P_{2d}) = Cq^2\delta_2, \qquad (7)$$
where $P_\beta$ is defined as before and C is chosen to satisfy the normalization condition
$$C\,(p^2 + pq + q^2) = 1.$$
As before, an existence result is proved after lengthy calculations.
Lemma 2 (Bose-Einstein density).
The set of densities satisfying the above restrictions (7) is not empty.
We now move on to the case with missing data.

4.3. Adjusted Quantum Data

One possibility is to apply statistics of the Bose-Einstein type. Again the three previous methods with the observable operators (3)–(5) are considered. For each one, it can be shown that the set of densities satisfying the restrictions is not empty.
Way 1. Here, we choose for the probabilities
$$\mathrm{tr}(\rho P_0) = C\delta_0, \quad \mathrm{tr}(\rho P_u) = Cp\delta_1, \quad \mathrm{tr}(\rho P_d) = Cq\delta_1, \quad \mathrm{tr}(\rho P_{u+d}) = Cpq\delta_2,$$
where $P_\beta$ is the same as above and C is chosen to satisfy the normalization condition $C\,(\delta_0 + \delta_1(p+q) + pq\,\delta_2) = 1$.
Way 2. The eigenvalue $u + d$ is observed twice, and so a natural choice is
$$\mathrm{tr}(\rho P_0) = C\delta_0, \quad \mathrm{tr}(\rho P_u) = Cp\delta_1, \quad \mathrm{tr}(\rho P_d) = Cq\delta_1, \quad \mathrm{tr}(\rho P_{u+d}) = C\delta_2\,pq,$$
where $C\,(\delta_0 + \delta_1(p+q) + pq\,\delta_2) = 1$. Note that, using the Maxwell-Boltzmann case, we may also write $\mathrm{tr}(\rho P_{u+d}) = 2C\delta_2\,pq$.
Way 3. The choice of the probabilities is quite arbitrary and, to restrict it, we add the unobserved probabilities of $2d$ and $2u$ to the probabilities observed in the Bose-Einstein statistics. Thus, the resulting probabilities are
$$\mathrm{tr}(\rho P_0) = C\delta_0, \quad \mathrm{tr}(\rho P_u) = Cp\delta_1, \quad \mathrm{tr}(\rho P_d) = Cq\delta_1, \quad \mathrm{tr}(\rho P_{u+d}) = C\delta_2\,(pq + p^2 + q^2),$$
where $C\,(\delta_0 + \delta_1(p+q) + (pq + p^2 + q^2)\,\delta_2) = 1$.

4.4. Likelihood Functions

The corresponding likelihood functions are now straightforward. Denote by $\#x$ the number of observations equal to x in the data set. For the Maxwell-Boltzmann statistics, the likelihood is given by
$$L(p,q) = (p^2\delta_2)^{\#2u}\,(q^2\delta_2)^{\#2d}\,(2qp\delta_2)^{\#(u+d)}\,(p\delta_1)^{\#u}\,(q\delta_1)^{\#d}\,(\delta_0)^{\#0}. \qquad (11)$$
For the Bose-Einstein statistics,
$$L(p,q) = (Cp^2\delta_2)^{\#2u}\,(Cq^2\delta_2)^{\#2d}\,(Cqp\delta_2)^{\#(u+d)}\,(p\delta_1)^{\#u}\,(q\delta_1)^{\#d}\,(\delta_0)^{\#0}. \qquad (12)$$
For the adjusted quantum data, following the way 2 for example,
$$L(p,q) = (Cqp\delta_2)^{\#(u+d)}\,(Cp\delta_1)^{\#u}\,(Cq\delta_1)^{\#d}\,(C\delta_0)^{\#0}. \qquad (13)$$
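Given the class counts #x, these likelihoods are immediate to evaluate; here is a sketch for the Maxwell-Boltzmann case (11), with illustrative (assumed) counts and a simple grid maximization over p:

```python
import numpy as np

lam, dt = 1.0, 1.0
d1 = np.exp(-lam * dt) * lam * dt
d2 = np.exp(-lam * dt) * (lam * dt) ** 2 / 2
d0 = 1.0 - d1 - d2

# illustrative class counts #x (assumed, not taken from the paper's tables)
counts = {'0': 4, 'd': 3, 'u': 1, 'u+d': 1, '2d': 1, '2u': 0}

def log_lik_MB(p):
    q = 1.0 - p
    terms = {'0': d0, 'd': q * d1, 'u': p * d1,
             'u+d': 2 * p * q * d2, '2d': q**2 * d2, '2u': p**2 * d2}
    return sum(n * np.log(terms[k]) for k, n in counts.items() if n > 0)

grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmax([log_lik_MB(p) for p in grid])]
print(p_hat, 1.0 - p_hat)
```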

5. Data Analysis

We want to analyze the claim data via the non-traditional quantum representations (3)–(5) of the models (1) and (2). This can be done by applying supervised machine learning methods (Bishop 2006; Hastie and Tibshirani 1996; Hastie et al. 2009).
The method uses the cross-validation technique, which is based on dividing the dataset into test data (also known as validation data) and training data. Nearest neighbour algorithms are applied to classify the data, then maximum likelihood estimation and the risk error calculation are performed to find the optimal parameters. Finally, the information obtained from the training data is applied to analyze the test data.
In k-fold cross-validation, all the data are divided into k subsets of equal size. In each iteration, one subset is chosen as training data and the remaining subsets are used as test data. The process is repeated k times, each subset being chosen as the training set only once. Finally, the estimator is the average of all the iteration results. For illustration, a simple example with k = 2 is presented in Section 5.2.

5.1. Estimation Procedure

Our goal is to estimate the values $(u, d)$ of the claim amounts and their probabilities $(p, q = 1 - p)$. The dataset $V = \{v_1, v_2, \ldots, v_n\}$ consists of the claim amounts observed in successive time intervals $\Delta t = (t-1, t]$ ($t = 1, \ldots, m$, say). We assume that the likelihood function is defined by one of the functions given in (11)–(13).
The estimation method proposed is somewhat similar to the EM algorithm, and its successive steps are as follows.
-
Choose an initial estimate ( u 0 , d 0 ) .
-
Classify and label the data with respect to $(u = u_0, d = d_0)$ by using the nearest neighbour algorithm. This leads to the classes $G_{2u}, G_{2d}, G_{u+d}, G_u, G_d, G_0$ for the representations (3), (4), and $G_{u+d}, G_u, G_d, G_0$ for the representation (5).
-
Given u , d , estimate p , q = 1 p by maximizing the appropriate likelihood function L ( p , q ) given in (11)–(13).
-
Estimate u , d by minimizing the corresponding error risk function F ( u , d ) defined below in (14)–(16), for all possible u > d > 0 .
-
Iterate until $|F(u_{i+1}, d_{i+1}) - F(u_i, d_i)| < M$ for a prescribed small threshold M.
For the k fold cross-validation strategy, the steps are first applied to the training data and yield an estimate for ( u , d ) which is then used in the test data as the initial estimate ( u 0 , d 0 ) .
Estimating (u, d) by risk functions. Consider a dataset $\{\beta_i\}$. The target is a set of eigenvalues μ with associated probabilities $p_\mu$ and clusters $G_\mu$. The estimation can be done by minimizing the weighted $L_1$-norm risk function
$$F = \sum_\mu p_\mu \sum_{\beta_i \in G_\mu} |\beta_i - \mu|.$$
Note that an $L_2$-norm risk function is, of course, a possible alternative.
For the Maxwell-Boltzmann statistics, this risk function is
$$F(u,d) = p^2\delta_2 \sum_{\beta_i \in G_{2u}} |\beta_i - 2u| + q^2\delta_2 \sum_{\beta_i \in G_{2d}} |\beta_i - 2d| + 2pq\delta_2 \sum_{\beta_i \in G_{u+d}} |\beta_i - (u+d)| + p\delta_1 \sum_{\beta_i \in G_u} |\beta_i - u| + q\delta_1 \sum_{\beta_i \in G_d} |\beta_i - d| + \delta_0 \sum_{\beta_i \in G_0} |\beta_i - 0|. \qquad (14)$$
For the Bose-Einstein statistics,
$$F(u,d) = Cp^2\delta_2 \sum_{\beta_i \in G_{2u}} |\beta_i - 2u| + Cq^2\delta_2 \sum_{\beta_i \in G_{2d}} |\beta_i - 2d| + Cpq\delta_2 \sum_{\beta_i \in G_{u+d}} |\beta_i - (u+d)| + p\delta_1 \sum_{\beta_i \in G_u} |\beta_i - u| + q\delta_1 \sum_{\beta_i \in G_d} |\beta_i - d| + \delta_0 \sum_{\beta_i \in G_0} |\beta_i - 0|. \qquad (15)$$
For the adjusted quantum data, following the way 2,
$$F(u,d) = Cpq\delta_2 \sum_{\beta_i \in G_{u+d}} |\beta_i - (u+d)| + Cp\delta_1 \sum_{\beta_i \in G_u} |\beta_i - u| + Cq\delta_1 \sum_{\beta_i \in G_d} |\beta_i - d| + C\delta_0 \sum_{\beta_i \in G_0} |\beta_i - 0|. \qquad (16)$$
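A compact sketch of the whole estimation loop of Section 5.1 for the Maxwell-Boltzmann case, i.e., likelihood (11) and risk function (14). The grids, the stopping rule and the candidate set for (u, d) are our own illustrative choices, so the output need not reproduce the tables below exactly:

```python
import numpy as np

lam, dt = 1.0, 1.0
del1 = np.exp(-lam * dt) * lam * dt
del2 = np.exp(-lam * dt) * (lam * dt) ** 2 / 2
del0 = 1.0 - del1 - del2

def classify(data, u, d):
    """Nearest-neighbour labelling of each observation to an eigenvalue of H."""
    eig = np.array([0.0, d, u, u + d, 2 * d, 2 * u])
    return eig, np.argmin(np.abs(data[:, None] - eig[None, :]), axis=1)

def weights(p):
    """Maxwell-Boltzmann probabilities attached to the eigenvalues 0, d, u, u+d, 2d, 2u."""
    q = 1 - p
    return np.array([del0, q * del1, p * del1, 2 * p * q * del2, q**2 * del2, p**2 * del2])

def mle_p(data, u, d, grid=np.linspace(0.01, 0.99, 99)):
    """Maximize the likelihood (11) in p, given (u, d)."""
    counts = np.bincount(classify(data, u, d)[1], minlength=6)
    return grid[np.argmax([np.sum(counts * np.log(weights(p))) for p in grid])]

def risk(data, u, d, p):
    """Weighted L1 risk function (14)."""
    eig, labels = classify(data, u, d)
    w = weights(p)
    return sum(w[k] * np.sum(np.abs(data[labels == k] - eig[k])) for k in range(6))

def estimate(data, u0, d0, M=0.01, max_iter=50):
    """EM-like loop: classify, estimate p by MLE, re-estimate (u, d) by risk minimization."""
    u, d, F_old = u0, d0, np.inf
    # integer candidate grid with u > d > 0 (an illustrative choice)
    cand = [(uu, dd) for uu in range(2, int(data.max()) + 2) for dd in range(1, uu)]
    for _ in range(max_iter):
        p = mle_p(data, u, d)
        u, d = min(cand, key=lambda ud: risk(data, ud[0], ud[1], p))
        F_new = risk(data, u, d, p)
        if abs(F_new - F_old) < M:
            break
        F_old = F_new
    return (u, d), (p, 1 - p), F_new

V = np.array([20, 8, 1, 7, 15, 17, 11, 0, 19, 1], dtype=float)   # dataset of example (1)
print(estimate(V, u0=40, d0=25))
```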

5.2. Numerical Illustrations

We first examine a simple numerical example for quantum data, then a case of data with errors and finally a case of misreported claims. For all the cases, we take λ = 1 and Δ t = 1 , for instance.
Numerical example. Consider the following dataset
V = { 20 , 8 , 1 , 7 , 15 , 17 , 11 , 0 , 19 , 1 } ,
which is divided in two subsets
V 1 = { 20 , 8 , 1 , 7 , 15 } , V 2 = { 17 , 11 , 0 , 19 , 1 } .
The successive steps of the procedure described in Section 5.1 are applied from the estimate ( u 0 , d 0 ) = ( 40 , 25 ) .
(1) Maxwell-Boltzmann likelihood (11) with risk function (14). We obtain the following results (Table 1).
Choosing M = 0.01, we see that $(u, d) = (15, 9)$ and $(p, q) = (0.1, 0.9)$. The associated minimum risk is 2.987181 and the maximum likelihood value is $2.198608 \times 10^{-7}$. The loop takes 8 steps, i.e., it works very fast for a small dataset.
To reduce overfitting, we apply the k-fold cross-validation method with k = 2 . This gives the results below (Table 2).
When $V_1$ is the training set, we get $(\bar u, \bar d) = \tfrac12(u_1 + u_2, d_1 + d_2) = (13, 8.5)$ and $(\bar p, \bar q) = \tfrac12(p_1 + p_2, q_1 + q_2) = (0.3, 0.7)$, with $F(13, 8.5) = 1.9884$. Thus, there is a significant reduction in the risk function with a rather close $(u, d)$.
(2) Bose-Einstein likelihood (12) with risk function (15). Here are the numerical results (Table 3).
Observe that we obtain the same ( u , d ) = ( 15 , 9 ) but with probabilities ( p , q ) = ( 0.13 , 0.87 ) . Again it takes 8 steps to reach the level M = 0.01 .
A 2-fold cross-validation method improves the results as follows (Table 4).
With $V_1$ as training set, we get $(\bar u, \bar d) = (13, 8.5)$ and $(\bar p, \bar q) = (0.33, 0.67)$, with $F(13, 8.5) = 2.0047$ instead of the value 2.963947 obtained before.
(3) Bose-Einstein likelihood (13) with risk function (16). The results are in the following table (Table 5), again for M = 0.01 .
The results here are somewhat different since ( u , d ) = ( 17 , 9 ) and ( p , q ) = ( 0.57 , 0.43 ) . The loop now takes only six steps. For this dataset, the model which fits best, i.e., with the smallest risk function, is using Bose-Einstein statistics.
We also performed several numerical experiments with simulated data. In the examples (4)–(7) below, the simulations yield datasets of size n = 100 ( n = 1000 was used too), and the calculations are made with M = 0.1 .
(4) Uniform random data (Table 6). As in the examples (1), (2), we apply the usual Maxwell-Boltzmann and Bose-Einstein statistics.
We notice that the best fit is not always given by the Maxwell-Boltzmann statistics.
Data with errors. We wish to examine a dataset disturbed by an error. For that, we start with a set $\{j_1, j_2, \ldots, j_n\}$ of true observables taking values in $\{0, u, d, u+d, 2u, 2d\}$. Then, we add a special random error $\{e_1, e_2, \ldots, e_n\}$, so that the dataset generated is given by
$$V = \{v_1, v_2, \ldots, v_n\} = \{j_1, j_2, \ldots, j_n\} + \{e_1, e_2, \ldots, e_n\}. \qquad (17)$$
Below, we choose $e_i \in \{-\mu, 0, \mu\}$ where μ has three possible values 1, 2, 10.
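A minimal sketch of this perturbation scheme (17); the uniform sampling of the true observables matches example (5) below, and the parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
u, d, n, mu = 7.0, 3.0, 100, 2.0        # illustrative parameters

true_values = np.array([0.0, u, d, u + d, 2 * u, 2 * d])
J = rng.choice(true_values, size=n)      # non-perturbed observations
E = rng.choice([-mu, 0.0, mu], size=n)   # special random errors
V = J + E                                # dataset actually analysed, as in (17)
```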
(5) Random data with errors (Table 7). The non-perturbed data { j 1 , j 2 , , j n } come from a uniform sampling in { 0 , u , d , u + d , 2 u , 2 d } .
We see that, as before, the best model depends on the dataset. In the case of a small error μ , the results are of course very close.
(6) Adjusted random data with errors (Table 8). The non-perturbed dataset { j 1 , j 2 , , j n } is obtained by simulation according to the way 2.
The results are close when μ is small and slightly different when μ increases.
Misreported data. Data samples may not report or misreport claims, either by mistake or voluntarily. This can also occur because of a change of risk. Let V be a dataset with n reported claims and m misreported claims:
V = { v 1 , v 2 , , v n + m } ,
where m is known but the true claim amounts are unknown. To handle the missing data, we apply a nearest neighbour approach and approximate the missing quantity by the average of the k closest neighbours. Below, we choose k = n .
(7) Random data with misreports (Table 9). First, data $\{v_1, v_2, \ldots, v_n\}$ are generated according to the Maxwell-Boltzmann model perturbed by errors via (17). Then, random errors $\{e_1, e_2, \ldots, e_m\}$ are generated to replace the missing data, where m takes the values 0, 5, 20 (m = 0 meaning no missing data). Finally, the two datasets are combined by putting the errors at random positions.
As expected, a small value of m does not affect the results very much. What is a little surprising is that, for a relatively large value m = 20 (20%), the estimates of the probabilities change only slightly (about 2%) while the estimates of the claim amounts are significantly modified (about 20%).
In practice, the algorithm works well and quickly in most situations. We also performed numerical calculations with a grid size of Δ t = 0.1 , and it is essentially the value of the risk function that is affected.

6. Quantum Reserve Process

One of the main objectives of the risk theory is to forecast the evolution of the reserves of an insurance company. This problem has generated a great deal of research using probabilistic techniques. We present below some introductory elements for an alternative quantum approach.

6.1. Distribution of the Reserves

The future reserves of an insurance company can be computed by applying path integral methods (Feynman 1948; Feynman and Hibbs 2010). Let $x_0, x_1, \ldots, x_n$ be the capital values at times $0 = t_0, t_1, \ldots, t_n = t$. Arguing as in Tamturk and Utev (2018), we first obtain that
$$P(R(t_n) = x_n \mid x_0) = (1 + o(1)) \sum_{x_1} \langle x_0|\,e^{-\Delta t_1 H}\,|x_1\rangle \sum_{x_2} \langle x_1|\,e^{-\Delta t_2 H}\,|x_2\rangle \cdots \sum_{x_{n-1}} \langle x_{n-2}|\,e^{-\Delta t_{n-1} H}\,|x_{n-1}\rangle\,\langle x_{n-1}|\,e^{-\Delta t_n H}\,|x_n\rangle, \qquad (18)$$
with $\Delta t_i = t_{i+1} - t_i$. For simplicity, take $\Delta t_i \equiv \Delta t$. The error term o(1) depends on $\Delta t / t$ (the grid size relative to the observation time t), which is usually small. The propagator for each sub-interval, $\langle x_i|\,e^{-\Delta t H}\,|x_{i+1}\rangle$, plays the role of the transition probability $P(x_i \to x_{i+1})$. It is expressed in terms of a Markovian generator H called the Markovian Hamiltonian. Then, by the completeness property in Dirac's formalism, we find that
$$P(x_i \to x_{i+1}) = \langle x_i|\,e^{-\Delta t H}\,|x_{i+1}\rangle = \int_0^{2\pi} \frac{d\alpha}{2\pi}\, \langle x_i|\,e^{-\Delta t H}\,|\alpha\rangle \langle \alpha | x_{i+1}\rangle = \int_0^{2\pi} \frac{d\alpha}{2\pi}\, \langle x_i | \alpha\rangle \langle \alpha | x_{i+1}\rangle\, e^{-\Delta t K_\alpha} = \frac{1}{2\pi} \int_0^{2\pi} e^{i x_i \alpha}\, e^{-i x_{i+1} \alpha}\, e^{-\Delta t K_\alpha}\, d\alpha, \qquad (19)$$
where $\{|\alpha\rangle, K_\alpha\}$ is the set of eigenstates and corresponding eigenvalues in the spectral decomposition of the Hamiltonian operator H.
In the risk model discussed here, the reserve process is defined via the Hamiltonian whose eigenvalues $K_\alpha$ in the basis $|\alpha\rangle$ are given by
$$K_\alpha = -\ln\Big[ e^{i\alpha c} \big( e^{-\lambda} + e^{-i\alpha u}\,\delta_1 p + e^{-i\alpha d}\,\delta_1 q + e^{-i\alpha(2u)}\,\delta_2 p^2 + e^{-i\alpha(2d)}\,\delta_2 q^2 + e^{-i\alpha(u+d)}\,\delta_2\, 2pq \big) \Big].$$
For the Maxwell-Boltzmann statistics, the transition probabilities (19) become
$$\langle x_i|\,e^{-\Delta t H}\,|x_{i+1}\rangle = \int_0^{2\pi} \frac{d\alpha}{2\pi}\, \langle x_i|\,e^{-\Delta t H}\,|\alpha\rangle \langle \alpha | x_{i+1}\rangle = \begin{cases} e^{-\lambda} & \text{for } x_i - x_{i+1} + c = 0, \\ \delta_1 p & \text{for } x_i - x_{i+1} + c - u = 0, \\ \delta_1 q & \text{for } x_i - x_{i+1} + c - d = 0, \\ \delta_2 p^2 & \text{for } x_i - x_{i+1} + c - 2u = 0, \\ \delta_2 q^2 & \text{for } x_i - x_{i+1} + c - 2d = 0, \\ 2\delta_2 pq & \text{for } x_i - x_{i+1} + c - (u+d) = 0. \end{cases}$$
For the Bose-Einstein statistics, we have
$$\langle x_i|\,e^{-\Delta t H}\,|x_{i+1}\rangle = \begin{cases} e^{-\lambda} & \text{for } x_i - x_{i+1} + c = 0, \\ \delta_1 p & \text{for } x_i - x_{i+1} + c - u = 0, \\ \delta_1 q & \text{for } x_i - x_{i+1} + c - d = 0, \\ C\delta_2 p^2 & \text{for } x_i - x_{i+1} + c - 2u = 0, \\ C\delta_2 q^2 & \text{for } x_i - x_{i+1} + c - 2d = 0, \\ C\delta_2 pq & \text{for } x_i - x_{i+1} + c - (u+d) = 0. \end{cases}$$
For the adjusted quantum data, following the way 2,
$$\langle x_i|\,e^{-\Delta t H}\,|x_{i+1}\rangle = \begin{cases} C e^{-\lambda} & \text{for } x_i - x_{i+1} + c = 0, \\ C\delta_1 p & \text{for } x_i - x_{i+1} + c - u = 0, \\ C\delta_1 q & \text{for } x_i - x_{i+1} + c - d = 0, \\ C\delta_2 pq & \text{for } x_i - x_{i+1} + c - (u+d) = 0. \end{cases}$$
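These one-period weights can be iterated over a discrete capital grid to approximate the distribution of R(t_n). Here is a sketch for the Maxwell-Boltzmann case; integer-valued c, u, d are assumed so that the capital stays on a lattice, and the parameter values echo example (8) below:

```python
import numpy as np

lam, dt = 0.1, 1.0                        # illustrative rates (as in example (8))
del1 = np.exp(-lam * dt) * lam * dt
del2 = np.exp(-lam * dt) * (lam * dt) ** 2 / 2

def mb_steps(c, u, d, p):
    """One-period capital increments and their Maxwell-Boltzmann transition weights."""
    q = 1 - p
    return {c: np.exp(-lam * dt),                       # no claim during the period
            c - u: del1 * p,        c - d: del1 * q,    # one claim
            c - 2 * u: del2 * p**2, c - 2 * d: del2 * q**2,
            c - (u + d): 2 * del2 * p * q}              # two claims

def reserve_distribution(x0, n, c, u, d, p):
    """Approximate distribution of R(t_n) started from x0, on an integer lattice."""
    steps = mb_steps(c, u, d, p)
    dist = {x0: 1.0}
    for _ in range(n):
        new = {}
        for x, pr in dist.items():
            for dx, w in steps.items():
                new[x + dx] = new.get(x + dx, 0.0) + pr * w
        dist = new
    return dist

dist = reserve_distribution(x0=5, n=10, c=1, u=15, d=8, p=0.2)
print(sum(dist.values()))   # slightly below 1: periods with 3 or more claims are neglected
```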

6.2. Finite-Time Ruin Probability

Let T be the ruin time, i.e., the first instant when the reserves become negative or null. To obtain the probability of non-ruin up to time $t_n$, we just have to proceed as in (18) and delete the paths along which some $x_i$ is negative or null. This gives directly
$$P(T > t_n \mid x_0) = (1 + o(1)) \sum_{x_1 \geq 1} \langle x_0|\,e^{-\Delta t_1 H}\,|x_1\rangle \sum_{x_2 \geq 1} \langle x_1|\,e^{-\Delta t_2 H}\,|x_2\rangle \sum_{x_3 \geq 1} \langle x_2|\,e^{-\Delta t_3 H}\,|x_3\rangle \cdots \sum_{x_n \geq 1} \langle x_{n-1}|\,e^{-\Delta t_n H}\,|x_n\rangle.$$
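The same recursion gives the finite-time non-ruin probability once the states that are negative or null are discarded at each step. The sketch below reuses the transition weights defined in the previous code block; the figures it produces are only indicative of the approach, not the exact values reported in example (8):

```python
def non_ruin_probability(x0, n, c, u, d, p):
    """P(T > t_n | x0): iterate the one-period kernel, keeping only strictly positive capitals."""
    steps = mb_steps(c, u, d, p)            # transition weights from the previous sketch
    dist = {x0: 1.0}
    for _ in range(n):
        new = {}
        for x, pr in dist.items():
            for dx, w in steps.items():
                y = x + dx
                if y > 0:                   # ruin occurs when the reserve becomes negative or null
                    new[y] = new.get(y, 0.0) + pr * w
        dist = new
    return sum(dist.values())

# setting of example (8): lambda = 0.1, c = 1, (u, d) = (15, 8), (p, q) = (0.2, 0.8), x0 = 5
print(non_ruin_probability(x0=5, n=30, c=1, u=15, d=8, p=0.2))
```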
Extension. The method can be applied to more advanced risk models. For instance, suppose that a change in risk occurs at time t f so that the reserve process is modified as
$$R(t) = x_0 + \int_0^t c_f\,dt - \sum_{j=1}^{N(t)} X_j,$$
where
$$\begin{aligned} &\text{for } t \leq t_f: \; c_f = c_1, \text{ and } X_j = d_1 = d \text{ or } u_1 = u \text{ with probabilities } q_1 \text{ or } p_1, \\ &\text{for } t > t_f: \; c_f = c_2, \text{ and } X_j = d_2 = d + f_d \text{ or } u_2 = u + f_u \text{ with probabilities } q_2 \text{ or } p_2. \end{aligned}$$
In such a situation, the non-ruin probability when $t_n > t_f$ is given by
$$P(T > t_n \mid x_0) = E\big[ P_1(T > t_f \mid x_0)\, P_2(T > t_n - t_f \mid R(t_f)) \big],$$
where $P_1$ (resp. $P_2$) means that the computation is made with the parameters $(u_1, d_1), (p_1, q_1)$ (resp. $(u_2, d_2), (p_2, q_2)$).
(8) Change of risk. Consider the dataset of example (1), i.e., $V = \{20, 8, 1, 7, 15, 17, 11, 0, 19, 1\}$, and take $\lambda = 0.1$, $c = 1$, $\Delta t = 1$ and $M = 0.05$. When the analysis is made with the Maxwell-Boltzmann statistics, we find $(u, d) = (15, 8)$ and $(p, q) = (0.2, 0.8)$, with $L = 6.1720 \times 10^{-14}$ and $F = 2.1151$. Given an initial capital $x_0 = 5$, we compute the probability of non-ruin until time 30 and obtain $P(T > 30 \mid 5) = 0.4021$.
Suppose that, as in example (1), V is divided into the two subsets $V_1, V_2$. With the dataset $V_1$, we find similarly $(u_1, d_1) = (15, 8)$ and $(p_1, q_1) = (0.4, 0.6)$, with $L_1 = 2.0962 \times 10^{-7}$ and $F_1 = 0.9656$. With $V_2$, we have $(u_2, d_2) = (17, 11)$ and $(p_2, q_2) = (0.67, 0.33)$, with $L_2 = 8.9850 \times 10^{-5}$ and $F_2 = 1.0261$.
Now, let us examine a model with an unexpected risk which arises at time t f = 15 . The data sets before and after t f are precisely V 1 and V 2 . Given x 0 = 5 , the non-ruin probability until time 30 is defined by
$$P(T > 30 \mid 5) = E\big[ P_1(T > 15 \mid 5)\, P_2(T > 15 \mid R(15)) \big].$$
Below (Table 10), we calculate the probabilities of non-ruin when $c_1 = c_2 = c = 1$ and $f_u = f_d = f$, with possible values 0, 1, 2, 3, 4 for f.
Intuitively, increasing the economic burden f implies a larger risk. This is confirmed above since it yields a smaller non-ruin probability.
Discussion. The theory of insurance risk has attracted considerable interest in the actuarial field (see the books Asmussen and Albrecher 2010; Dickson 2017; Schmidli 2018). In particular, problems of ruin have been the subject of numerous investigations. Thus, different methods of calculating ruin probabilities have been proposed (e.g., Dufresne and Gerber 1989; Ignatov et al. 2001 and the Picard-Lefèvre formula (De Vylder 1999; Picard and Lefèvre 1997; Rullière and Loisel 2004)).
Risk theory has a long tradition as a branch of applied probability. In this paper, we present a quantum mechanics approach whose implementation in insurance is novel. This approach requires different techniques, including new representation and data processing in insurance. We have illustrated the methodology by various numerical examples. The advantages and the weaknesses of this approach remain a problem to be discussed in the future.

Author Contributions

All authors contributed equally to this work.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Asmussen, Søren, and Hansjörg Albrecher. 2010. Ruin Probabilities, 2nd ed. Singapore: World Scientific. [Google Scholar]
  2. Baaquie, Belal Ehsan. 2007. Quantum Finance: Path integrals and Hamiltonians for Options and Interest Rates. Cambridge: Cambridge University Press. [Google Scholar]
  3. Baaquie, Belal Ehsan. 2010. Interest Rates and Coupon Bonds in Quantum Finance. Cambridge: Cambridge University Press. [Google Scholar]
  4. Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Berlin: Springer. [Google Scholar]
  5. Bouchaud, Jean-Philippe, and Marc Potters. 2003. Theory of Financial Risk and Derivative Pricing: From Statistical Physics to Risk Management, 2nd ed. Cambridge: Cambridge University Press. [Google Scholar]
  6. De Vylder, F. Etienne. 1999. Numerical finite-time ruin probabilities by the Picard-Lefèvre formula. Scandinavian Actuarial Journal 2: 97–105. [Google Scholar] [CrossRef]
  7. Dickson, David C. M. 2017. Insurance Risk and Ruin, 2nd ed. Cambridge: Cambridge University Press. [Google Scholar]
  8. Dirac, Paul Adrien Maurice. 1933. The Lagrangian in quantum mechanics. Physikalische Zeitschrift der Sowjetunion 3: 64–72. [Google Scholar]
  9. Dufresne, François, and Hans U. Gerber. 1989. Three methods to calculate the probability of ruin. Astin Bulletin 19: 71–90. [Google Scholar] [CrossRef]
  10. Feynman, Richard P. 1948. Space-time approach to non-relativistic quantum mechanics. Reviews of Modern Physics 20: 367–87. [Google Scholar] [CrossRef]
  11. Feynman, Richard P., and Albert R. Hibbs. 2010. Quantum Mechanics and Path Integrals. Edited by Daniel F. Styer. New York: Dover Editions. [Google Scholar]
  12. Graham, John W. 2009. Missing data analysis: Making it work in the real world. Annual Review of Psychology 60: 549–76. [Google Scholar] [CrossRef] [PubMed]
  13. Griffiths, David J., and Darrell F. Schroeter. 2018. Introduction to Quantum Mechanics, 3rd ed. Cambridge: Cambridge University Press. [Google Scholar]
  14. Hastie, Trevor, and Robert Tibshirani. 1996. Discriminant adaptive nearest neighbor classification and regression. Advances in Neural Information Processing Systems 18: 409–15. [Google Scholar]
  15. Hastie, Trevor, Robert Tibshirani, and Jerome H. Friedman. 2009. The Elements of Statistical Learning, 2nd ed. New York: Springer. [Google Scholar]
  16. Ignatov, Zvetan G., Vladimir K. Kaishev, and Rossen S. Krachunov. 2001. An improved finite-time ruin probability formula and its Mathematica implementation. Insurance: Mathematics and Economics 29: 375–86. [Google Scholar] [CrossRef]
  17. Mantegna, Rosario N., and H. Eugene Stanley. 2000. An Introduction to Econophysics: Correlations and Complexity in Finance. Cambridge: Cambridge University Press. [Google Scholar]
  18. Parthasarathy, Kalyanapuram Rangachari. 1992. An Introduction to Quantum Stochastic Calculus. Basel: Springer. [Google Scholar]
  19. Picard, Philippe, and Claude Lefèvre. 1997. The probability of ruin in finite time with discrete claim size distribution. Scandinavian Actuarial Journal 1: 58–69. [Google Scholar] [CrossRef]
  20. Plenio, Martin. 2002. Quantum Mechanics. Ebook. London: Imperial College. [Google Scholar]
  21. Quinlan, Ross. 1988. C4.5: Programs for Machine Learning. San Mateo: Morgan Kaufmann. [Google Scholar]
  22. Rullière, Didier, and Stéphane Loisel. 2004. Another look at the Picard-Lefèvre formula for finite-time ruin probabilities. Insurance: Mathematics and Economics 35: 187–203. [Google Scholar] [CrossRef]
  23. Schmidli, Hanspeter. 2018. Risk Theory. Cham: Springer. [Google Scholar]
  24. Tamturk, Muhsin, and Sergey Utev. 2018. Ruin probability via quantum mechanics approach. Insurance: Mathematics and Economics 79: 69–74. [Google Scholar] [CrossRef]
Table 1. Maxwell-Boltzmann statistics for quantum data.

Given (u,d) | Maximum likelihood (L) | Optimum (p,q) | Optimum u | Optimum d | Risk F(u,d) | |F(u_i,d_i) − F(u_{i+1},d_{i+1})|
(40,25) | 4.361099 × 10^-5 | (0.01,0.99) | 18 | 17 | 12.8500 | 12.8500
(18,17) | 1.569022 × 10^-6 | (0.4,0.6) | 19 | 15 | 7.7255 | 5.1245
(19,15) | 9.962820 × 10^-7 | (0.33,0.67) | 19 | 11 | 6.6365 | 1.089
(19,11) | 3.810307 × 10^-7 | (0.43,0.57) | 19 | 10 | 3.5169 | 3.1196
(19,10) | 1.141128 × 10^-7 | (0.38,0.62) | 17 | 9 | 2.5768 | 0.9401
(17,9) | 9.649455 × 10^-8 | (0.22,0.78) | 15 | 9 | 2.668082 | 0.091282
(15,9) | 2.198608 × 10^-7 | (0.1,0.9) | 15 | 9 | 2.987181 | 0.319099
(15,9) | 2.198608 × 10^-7 | (0.1,0.9) | 15 | 9 | 2.987181 | 0
Table 2. Using k-fold cross-validation with k = 2.

Training set | Test set | Training (u,d) | Training (p,q) | Test (u,d) | Test (p,q)
V1 | V2 | (15,8) | (0.4,0.6) | (11,9) | (0.2,0.8)
V2 | V1 | (17,11) | (0.67,0.33) | (15,8) | (0.4,0.6)
Table 3. Bose-Einstein statistics for quantum data.

Given (u,d) | Maximum likelihood (L) | Optimum (p,q) | Optimum u | Optimum d | Risk F(u,d) | |F(u_i,d_i) − F(u_{i+1},d_{i+1})|
(40,25) | 4.361099 × 10^-5 | (0.01,0.99) | 18 | 17 | 12.8500 | 12.8500
(18,17) | 1.569022 × 10^-6 | (0.4,0.6) | 19 | 15 | 7.7255 | 5.1245
(19,15) | 9.962820 × 10^-7 | (0.33,0.67) | 19 | 11 | 6.6365 | 1.089
(19,11) | 3.810307 × 10^-7 | (0.43,0.57) | 19 | 10 | 3.5169 | 3.1196
(19,10) | 1.492842 × 10^-7 | (0.38,0.62) | 17 | 9 | 2.620360 | 0.89654
(17,9) | 1.434357 × 10^-7 | (0.25,0.75) | 15 | 9 | 2.681275 | 0.060915
(15,9) | 3.019659 × 10^-7 | (0.13,0.87) | 15 | 9 | 2.963947 | 0.282672
(15,9) | 3.019659 × 10^-7 | (0.13,0.87) | 15 | 9 | 2.963947 | 0
Table 4. Using k-fold cross-validation.

Training set | Test set | Training (u,d) | Training (p,q) | Test (u,d) | Test (p,q)
V1 | V2 | (15,8) | (0.41,0.59) | (11,9) | (0.25,0.75)
V2 | V1 | (17,11) | (0.67,0.33) | (15,8) | (0.41,0.59)
Table 5. Bose-Einstein statistics for adjusted quantum data.

Given (u,d) | Maximum likelihood (L) | Optimum (p,q) | Optimum u | Optimum d | Risk F(u,d) | |F(u_i,d_i) − F(u_{i+1},d_{i+1})|
(40,25) | 4.361099 × 10^-5 | (0.01,0.99) | 18 | 17 | 17.421881 | 17.421881
(18,17) | 1.569022 × 10^-6 | (0.4,0.6) | 19 | 15 | 9.905660 | 7.516221
(19,15) | 9.962820 × 10^-7 | (0.33,0.67) | 19 | 11 | 8.547535 | 1.358125
(19,11) | 3.810307 × 10^-7 | (0.43,0.57) | 19 | 10 | 4.504016 | 4.043519
(19,10) | 3.810307 × 10^-7 | (0.57,0.43) | 17 | 9 | 3.835010 | 0.669006
(17,9) | 3.810307 × 10^-7 | (0.57,0.43) | 17 | 9 | 3.835010 | 0
Table 6. Uniformly generated data.

Model | Maxwell-Boltzmann (n = 100) | Maxwell-Boltzmann (n = 1000) | Bose-Einstein (n = 100) | Bose-Einstein (n = 1000)
p | 0.30 | 0.99 | 0.33 | 0.99
q | 0.70 | 0.01 | 0.67 | 0.01
Likelihood | 5.5996 × 10^-84 | 0 | 3.1488 × 10^-86 | 0
u | 56 | 42 | 56 | 40
d | 35 | 23 | 35 | 21
Risk value | 132.2545 | 771.4357 | 129.8278 | 773.2864
Loop size | 7 | 16 | 7 | 16
Table 7. Random data with errors.

Model | Maxwell-Boltzmann (μ = 1) | Maxwell-Boltzmann (μ = 2) | Maxwell-Boltzmann (μ = 10) | Bose-Einstein (μ = 1) | Bose-Einstein (μ = 2) | Bose-Einstein (μ = 10)
p | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
q | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01
Likelihood | 0 | 0 | 0 | 0 | 0 | 0
u | 60 | 60 | 80 | 60 | 60 | 60
d | 40 | 40 | 50 | 40 | 40 | 40
Risk value | 79.5889 | 141.8641 | 821.6990 | 79.6845 | 141.9939 | 630.2925
Loop size | 2 | 2 | 7 | 2 | 2 | 2
Table 8. Adjusted random data with errors.

Model | Maxwell-Boltzmann (μ = 1) | Maxwell-Boltzmann (μ = 2) | Maxwell-Boltzmann (μ = 10) | Bose-Einstein (μ = 1) | Bose-Einstein (μ = 2) | Bose-Einstein (μ = 10)
p | 0.44 | 0.52 | 0.27 | 0.45 | 0.52 | 0.16
q | 0.56 | 0.48 | 0.73 | 0.55 | 0.48 | 0.84
Likelihood | 2.0197 × 10^-64 | 3.4390 × 10^-59 | 1.1716 × 10^-59 | 5.0452 × 10^-66 | 3.0095 × 10^-60 | 1.6389 × 10^-55
u | 60 | 60 | 70 | 60 | 60 | 70
d | 40 | 40 | 50 | 40 | 40 | 50
Risk value | 13.2531 | 30.1508 | 134.9512 | 13.0994 | 30.0282 | 156.9196
Loop size | 2 | 2 | 2 | 2 | 2 | 4
Table 9. Random data with misreports.

Model | Maxwell-Boltzmann (m = 0) | Maxwell-Boltzmann (m = 5) | Maxwell-Boltzmann (m = 20) | Bose-Einstein (m = 0) | Bose-Einstein (m = 5) | Bose-Einstein (m = 20)
p | 0.37 | 0.38 | 0.38 | 0.39 | 0.39 | 0.37
q | 0.63 | 0.62 | 0.62 | 0.61 | 0.61 | 0.63
Likelihood | 2.7343 × 10^-69 | 4.3868 × 10^-69 | 3.3073 × 10^-68 | 5.4464 × 10^-69 | 1.3609 × 10^-68 | 1.0870 × 10^-67
u | 60 | 60 | 68 | 60 | 60 | 68
d | 40 | 40 | 50 | 40 | 40 | 50
Risk value | 142.4854 | 142.8270 | 133.9227 | 141.6344 | 142.7307 | 134.5446
Loop size | 2 | 2 | 3 | 2 | 2 | 5
Table 10. Non-ruin probabilities under a change of risk.

f | 0 | 1 | 2 | 3 | 4
P(T > 30 | 5) | 0.1745 | 0.1676 | 0.1573 | 0.1470 | 0.1368
