Risks 2018, 6(3), 99; https://doi.org/10.3390/risks6030099
A Quantum-Type Approach to Non-Life Insurance Risk Modelling
Département de Mathématique, Université Libre de Bruxelles, Campus de la Plaine C.P. 210, B-1050 Bruxelles, Belgique
ISFA, Université Lyon 1, LSAF EA2429, 50 Avenue Tony Garnier, F-69007 Lyon, France
Department of Mathematics, University of Leicester, University Road, Leicester LE1 7RH, UK
Author to whom correspondence should be addressed.
Received: 30 July 2018 / Accepted: 11 September 2018 / Published: 14 September 2018
A quantum mechanics approach is proposed to model non-life insurance risks and to compute the future reserve amounts and the ruin probabilities. The claim data, historical or simulated, are treated as coming from quantum observables and analyzed with traditional machine learning tools. They can then be used to forecast the evolution of the reserves of an insurance company. The following methodology relies on the Dirac matrix formalism and the Feynman path-integral method.
Keywords: non-life insurance; reserve process; ruin probability; quantum mechanics; Hamiltonian; path-integral; econophysics; learning techniques; data analysis
The theory of non-life insurance risk is a major topic in actuarial sciences. The literature is wide and varied, and a comprehensive review can be found in the books Asmussen and Albrecher (2010); Dickson (2017); Schmidli (2018).
This paper proposes a quantum-type approach for the representation and analysis of non-life insurance data. Quantum mechanics methods are successfully applied in various disciplines, including finance for option pricing (e.g., Baaquie 2007, 2010) and econophysics for risk management (e.g., Bouchaud and Potters 2003; Mantegna and Stanley 2000). Their application to insurance, however, is an emerging field of research that has been introduced recently in Tamturk and Utev (2018).
Overall, the current approach is new and consists in representing the observations on an insurance risk in the form of quantum data, that is to say, as arising from a quantum-mechanical-type model. This methodology is based on the Dirac matrix formalism (Dirac 1933) and the Feynman path-integral method (Feynman 1948). First, claim data obtained from the past or by simulation are analyzed with standard machine learning tools such as classification, maximum likelihood estimation and risk-error function techniques. Then, these data can be used to determine the distribution of the reserve process and the associated finite-time ruin probabilities.
Data analysis plays a key role in many areas, and learning techniques provide a key tool for this purpose (e.g., Bishop 2006; Quinlan 1988). In actuarial sciences, practitioners often use such techniques to analyze data and to predict future losses. Taking missing data into account is also important in practice (Graham 2009). This arises in insurance with unreported claims and frauds; the topic will be briefly addressed. Political and economic changes are another risk factor for companies due to possible inflation and trade restrictions; such a situation will be sketched too. An advantage of our framework pertains to handling unknown probabilities of repeated events which, in our experience, can be bypassed with an adapted quantum data representation.
The paper is organized as follows. Section 2 presents the compound Poisson risk process when repeated claims are reported or not, and the two corresponding quantum risk models. For simplicity, the claim amounts are assumed to have a two-point distribution. The data, however, will be treated as values observed with errors, which broadens somewhat the applicability of the analysis. In Section 3, the so-called quantum observables are constructed for the two quantum models. This leads us to determine the eigenvalues of a suitable Hermitian operator. In Section 4, the existence of Maxwell-Boltzmann and Bose-Einstein statistics is explicitly indicated, and the associated likelihood functions are derived. Section 5 deals with the estimation of the claim amount distribution from a set of data, historical or simulated. As mentioned before, the method followed is rather simple and standard, and we then discuss several numerical examples. In Section 6, we show how to compute, in the quantum context, the distribution of the reserves of the company in the course of time. We then obtain the probabilities of ruin over a finite time horizon, and this is again illustrated numerically.
2. Quantum Risk Models
Consider the classical compound Poisson risk process (Asmussen and Albrecher 2010; Dickson 2017; Schmidli 2018). The reserve process is defined by $R(t) = u_0 + ct - S(t)$, where $u_0 \geq 0$ is the initial capital, $c$ is the constant premium rate and $S(t)$ denotes the total claim amount up to time $t$, defined by $S(t) = \sum_{i=1}^{N(t)} X_i$, where $N(t)$ is a Poisson process of rate $\lambda$ and the $X_i$ are i.i.d. claim amounts, independent of $N(t)$.
For simplicity, we assume here that each claim has a two-point distribution given by $P(X_i = d) = p$ and $P(X_i = u) = 1 - p$, with $0 < d < u$. Typically, $d$ represents a small claim amount and $u$ a significant claim amount.
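To make the model concrete, here is a minimal simulation sketch of the reserve process $R(t) = u_0 + ct - S(t)$ with two-point claims. The function name and the Bernoulli-thinning discretisation of the Poisson process are our own illustrative choices, not taken from the paper.

```python
import random

def simulate_reserve(u0, c, lam, d, u, p, n_steps, dt, rng=None):
    """One path of R(t) = u0 + c*t - S(t): in each small interval of length dt,
    a claim occurs with probability ~ lam*dt and equals d (prob. p) or u."""
    rng = rng or random.Random(0)
    claims = 0.0
    path = []
    for k in range(1, n_steps + 1):
        if rng.random() < lam * dt:
            claims += d if rng.random() < p else u
        path.append(u0 + c * k * dt - claims)
    return path
```

The path can only lie below the claim-free trajectory $u_0 + ct$, since claims are non-negative.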
Complete data. In this case, the observed data are treated as coming from the classical model. These data are collected at regular times $\Delta, 2\Delta, \ldots$, and they provide us with the cumulative claim amounts $Y_k = S(k\Delta) - S((k-1)\Delta)$ during the $k$-th interval. The periods are small enough to reasonably assume that there are at most two claims per period. Hence, $Y_k$ takes its values in the set $\{0, d, u, 2d, d+u, 2u\}$, with probabilities determined by the Poisson claim counts.
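The per-period distribution under the at-most-two-claims assumption can be sketched as follows; truncating the Poisson count at two (lumping all remaining mass into the two-claim case) is our own simplification for illustration.

```python
from math import exp

def period_claim_distribution(lam, delta, d, u, p):
    """Distribution of the cumulative claim amount Y over one period of
    length delta, truncating the Poisson count at two claims."""
    q0 = exp(-lam * delta)             # probability of no claim
    q1 = lam * delta * q0              # probability of exactly one claim
    q2 = 1.0 - q0 - q1                 # remaining mass: treated as two claims
    return {0.0: q0,
            d: q1 * p, u: q1 * (1 - p),
            2 * d: q2 * p ** 2,
            d + u: q2 * 2 * p * (1 - p),
            2 * u: q2 * (1 - p) ** 2}
```

By construction the probabilities sum to one, since the two-claim mass is split binomially between the values 2d, d+u and 2u.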
Quantum data. This time, the observed data are treated as a sample of eigenvalues of operators and are referred to as quantum data. Recall that from the mechanical quantum point of view, the observables are eigenvalues of certain Hermitian operators / self-adjoint matrices. For a nice introduction to that theory, the reader is referred e.g., to Griffiths and Schroeter (2018); Plenio (2002); a thorough analysis is provided in Parthasarathy (1992).
Thus, the different possible claim amounts are considered as energy levels of particles and they are treated as the eigenvalues of an operator $H$ which has to be modelled. This requires a careful modelling choice.
Data with missing values. As before, data on cumulative claim amounts are collected at regular times with a small period $\Delta$. Now, however, we assume that the cases of repeated claims (i.e., two claims within the same period) are not observed. Unreported claims of this kind can be viewed as a deliberate omission. The observable values are then restricted accordingly. This raises the question of how to deal with the unknown probabilities.
Adjusted quantum data. Quantum data can be adjusted to handle missing values in several ways. Three cases are examined here.
Way 1. We use the same quantum observable operator H as in the classical model. The missing unknown probabilities are thus considered as 0.
Way 2. The repeated-claim values are not eigenvalues of the observables. This requires deriving a different Hamiltonian.
Way 3. We allow only no jump or 1-step jumps as possible moves. A new Hamiltonian is then obtained.
Data and simulation. We have assumed that each claim has only two possible values $d$ and $u$. Nevertheless, the data obtained by simulation are observed values tainted by errors. For example, a simple dataset can be treated as generated by either (1) or (2) and observed with errors. In Section 5, we will discuss and illustrate different simulation procedures.
3. Quantum Observables
In this section, we will construct the Hermitian operator corresponding to the two quantum risk models presented above. We start with some usual notation and preliminaries.
Independent observables. Given two observables $A$, $B$, the tensor product $A \otimes B$ acts as a quantum product of two independent observables, while $A \otimes I + I \otimes B$ acts as a quantum sum of two independent observables. In particular, $A \otimes A$ is the quantum product of two i.i.d. observables, and $A \otimes I + I \otimes A$ the quantum sum of two i.i.d. observables.
In our case, the basic 1-step quantum claim variable is a matrix $B$, interpreted as a 1-step jump geometric random walk; $B \otimes B$ is then a 2-step jump geometric random walk, etc. To model the standard random walk, we first consider the geometric random walk and then take the logarithm.
An identity operator $I$ (in dimension $n$) is introduced that does not affect the dynamics. Indeed, $I \otimes B$ corresponds to multiplying by 1 at the first step, while $B \otimes I$ corresponds to multiplying by 1 at the second step. Note that, in general, the tensor product is not commutative.
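The tensor-product rules above can be checked numerically. A small sketch with diagonal observables (the matrices `A` and `B` below are arbitrary illustrative choices): eigenvalues of the quantum product are products of eigenvalues, eigenvalues of the quantum sum are sums, and the Kronecker product is not commutative.

```python
import numpy as np

# Two independent 2x2 observables with known eigenvalues.
A = np.diag([1.0, 2.0])
B = np.diag([3.0, 5.0])
I = np.eye(2)

# Quantum product of independent observables: eigenvalues multiply.
prod = np.kron(A, B)
# Quantum sum: A acts on the first factor, B on the second.
qsum = np.kron(A, I) + np.kron(I, B)

ev_prod = sorted(np.linalg.eigvalsh(prod))   # {1,2} x {3,5} -> {3,5,6,10}
ev_sum = sorted(np.linalg.eigvalsh(qsum))    # {1,2} + {3,5} -> {4,5,6,7}
```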
Partitioned space. To deal with an event space which is partitioned into $n$ events, we work with the $n$ orthogonal projection (Hermitian) operators onto the eigenspaces of an observable $A$.
3.1. Quantum Data
The operator $H$ is constructed as a projection on three claim (jump) cases: 0, 1 or 2 claims per period. Given that case $i$ occurs, the claims are defined as a quantum-type random walk as described above. Applying the argument outlined above with standard quantum-type calculations (Griffiths and Schroeter 2018; Parthasarathy 1992; Plenio 2002), we derive our first observable operator (3).
More explicitly, the matrices $B$ and $B \otimes B$ are the 1- and 2-step exponential jump claim operators, where $V$ is a unitary matrix, $V^\dagger$ is its adjoint and an identity matrix corresponds to the absence of the second claim. The actual 1-step and 2-step claim operators follow accordingly. Moreover, a diagonal matrix with a single non-zero element is introduced for each case, and the corresponding matrices are the claim occurrence operators (projections). Finally, an extra matrix with all elements equal to 0 corresponds to a 0 claim size.
Overall, we then obtain the observable operator (3), where $U$ is a suitable unitary matrix.
3.2. Adjusted Quantum Data
We consider the three ways indicated before to handle missing data.
Way 1. This is the same as the previous quantum observable $H$. However, the probabilities of the repeated-claim eigenvalues are taken equal to 0.
Way 2. Now, the repeated-claim values are not taken into consideration as eigenvalues. The new observable operator is then (4), where $S$ is a projection-type operator in the previous notation.
This operator $S$ is applied because the capital movement can be exposed to an unusual change at the second step (it cannot reach the repeated-claim levels). In this case, the corresponding probabilities are not equal to 0.
Way 3. This time, we consider 0 as a possible first jump. The new observable operator is (5), built from an identity matrix, unitary matrices and claim occurrence operators (projections) defined as before.
4. Quantum Likelihood
In the Dirac formalism, the so-called bra-ket notation has proven very useful and easy to handle, and has become standard in quantum mechanics. We recall it briefly; more details can be found, e.g., in Griffiths and Schroeter (2018); Parthasarathy (1992); Plenio (2002). Consider a class of matrices treated as an algebra. A column vector $x$ is represented as a ket-vector $|x\rangle$. The associated bra-vector $\langle x|$ is a row vector defined as its Hermitian conjugate. Then, $\langle x|y\rangle$ corresponds to the usual inner product. Moreover, $|x\rangle\langle y|$ is the outer product, i.e., an operator/matrix defined by $(|x\rangle\langle y|)\,|z\rangle = \langle y|z\rangle\,|x\rangle$. In particular, for any unit vector $e$, $|e\rangle\langle e|$ defines a projection operator which acts as $(|e\rangle\langle e|)\,|x\rangle = \langle e|x\rangle\,|e\rangle$.
Let $\rho$ be the density operator which describes the statistical state of the system. The projection operator plays the role of an event, and the probability of finding the system in the state $e$ is defined as the expectation $P(e) = \operatorname{tr}(\rho\,|e\rangle\langle e|)$, where $\operatorname{tr}$ denotes the trace. Now, an operator $A$ is an observable if $A$ is self-adjoint ($A = A^\dagger$). Thanks to that property, $A$ can be expanded over its spectrum by projection operators, i.e., $A = \sum_i \lambda_i\,|e_i\rangle\langle e_i|$, in which $e_i$ is the eigenvector for the eigenvalue $\lambda_i$. The probability of the measurement is extended by linearity of the expectation as $P(\lambda_i) = \operatorname{tr}(\rho\,|e_i\rangle\langle e_i|)$.
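The Born rule and the spectral expansion above can be illustrated numerically. A small sketch with a randomly generated density operator and Hermitian observable (all names and sizes are our own illustrative choices): the measurement probabilities are non-negative, sum to one, and reproduce the expectation $\operatorname{tr}(\rho A)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random density operator: positive semi-definite with unit trace.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# A Hermitian observable and its spectral decomposition A = sum_i lam_i |e_i><e_i|.
H = rng.normal(size=(3, 3))
A = (H + H.T) / 2
lam, vecs = np.linalg.eigh(A)

# Born rule: probability of measuring lam_i is tr(rho |e_i><e_i|).
probs = [np.trace(rho @ np.outer(vecs[:, i], vecs[:, i].conj())).real
         for i in range(3)]
```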
We are ready to go back to the insurance risk process. First, we examine the case of quantum data using two classical models developed in the literature.
4.1. Maxwell-Boltzmann Statistics
We consider the model (1) with the operator (3), whose eigenvalues correspond to the possible cumulative claim amounts. The probabilities of finding the system in the corresponding eigenstates are defined via the quantum method sketched before. Now, we assume that the eigenvalues are observed independently and that the density $\rho$ has an independence-type tensor-product representation chosen to satisfy the restrictions (6), in which projection operators on the eigenvalues appear.
After standard but relatively lengthy calculations, we obtain the following existence result.
Lemma 1 (Maxwell-Boltzmann density).
The set of densities satisfying the above restrictions (6) is not empty. Moreover, there exists a density corresponding to Maxwell-Boltzmann statistics.
4.2. Bose-Einstein Statistics
We examine the same model (1) with the operator (3), but when the eigenvalues are not observed independently. More precisely, we assume that certain eigenvalues cannot be distinguished and that the density has to satisfy the restrictions (7), where the projection operators are defined as before and $C$ is chosen to satisfy the normalization condition.
As before, an existence result is proved after lengthy calculations.
Lemma 2 (Bose-Einstein density).
The set of densities satisfying the above restrictions (7) is not empty.
We now move on to the case with missing data.
4.3. Adjusted Quantum Data
One possibility is to apply statistics of the Bose-Einstein type. Again the three previous methods with the observable operators (3)–(5) are considered. For each one, it can be shown that the set of densities satisfying the restrictions is not empty.
Way 1. Here, the probabilities are chosen of the same form as above, with a constant chosen to satisfy the normalization condition.
Way 2. One eigenvalue is observed twice, and so a natural choice follows. Note that, using the Maxwell-Boltzmann case, we may also write it in an equivalent form.
Way 3. The choice of the probabilities is quite arbitrary; to restrict it, we add the unobserved probabilities of repeated claims to the probabilities observed in the Bose-Einstein statistics. The resulting probabilities follow.
4.4. Likelihood Functions
The corresponding likelihood functions are now straightforward. Denote by $n_x$ the number of times the value $x$ is observed in the dataset. For the Maxwell-Boltzmann statistics, the likelihood is given by (11); for the Bose-Einstein statistics, by (12); and for the adjusted quantum data, following way 2 for example, by (13).
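Since the exact forms of (11)–(13) are built from the counts $n_x$ and the state probabilities, a plausible multinomial-type sketch is the following; the function name is our own, and the three likelihoods in the text would correspond to different choices of the probability dictionary.

```python
from math import prod

def quantum_likelihood(counts, probs):
    """Multinomial-type likelihood L = prod_x probs[x]**counts[x], where
    counts[x] is the number of times eigenvalue x appears in the dataset.
    Different statistics (Maxwell-Boltzmann, Bose-Einstein, adjusted)
    amount to plugging in different probability dictionaries."""
    return prod(probs[x] ** n for x, n in counts.items())
```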
5. Data Analysis
We want to analyze the claims data by the non-traditional quantum representation (3)–(5) of the models (1) and (2). This can be done by applying the supervised machine learning method (Bishop 2006; Hastie and Tibshirani 1996; Hastie et al. 2009).
The method uses the cross-validation technique, which is based on dividing the dataset into test data (also known as validation data) and training data. Nearest-neighbour algorithms are applied to classify the data; then, maximum likelihood estimation and risk-error calculations are performed to find the optimal parameters. Finally, the information obtained from the training part is applied to analyze the test data.
In $k$-fold cross-validation, the data are divided into $k$ subsets of equal size. In each iteration, one subset is chosen as training data and the remaining subsets are used as test data. The process is repeated $k$ times, each subset being chosen as the training piece exactly once. Finally, the estimator is an average of all the iteration results. For illustration, a simple example where $k = 2$ is presented in Section 5.2.
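The fold construction above can be sketched as follows. Note that the paper uses each fold once as the *training* set and the rest as test data (the reverse of the more common convention); the sketch follows the paper's convention, and the function name is our own.

```python
def k_fold_splits(data, k):
    """Split data into k equal-size folds; each fold serves once as the
    TRAINING set, with the remaining folds combined as TEST data."""
    size = len(data) // k
    folds = [data[i * size:(i + 1) * size] for i in range(k)]
    splits = []
    for i in range(k):
        train = folds[i]
        test = [x for j, f in enumerate(folds) if j != i for x in f]
        splits.append((train, test))
    return splits
```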
5.1. Estimation Procedure
Our goal is to estimate the values of the claim amounts $d, u$ and their probabilities. The dataset consists of claim amounts observed in successive time intervals. We assume that the likelihood function is defined by one of the functions given in (11)–(13).
The estimation method proposed is somewhat similar to the EM algorithm, and its successive steps are as follows.
- Choose an initial estimate.
- Classify and label the data with respect to the current estimate by using the nearest neighbour algorithm. This leads to the classes of labelled amounts, with an additional class for the representation (5).
- Update the estimate and loop until the change is small enough.
For the $k$-fold cross-validation strategy, the steps are first applied to the training data and yield an estimate which is then used on the test data as the initial estimate.
Estimating by risk functions. Consider a dataset $V$. The target to estimate is a set of eigenvalues with observed probabilities and clusters. This can be done by minimizing the weighted $L_2$-norm risk function. Note that an $L_1$-norm risk function is, of course, a possible alternative.
For the Maxwell-Boltzmann statistics, this risk function is given by (14); for the Bose-Einstein statistics, by (15); and for the adjusted quantum data, following way 2, by (16).
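As a rough sketch of the minimisation step, the following assigns each observation to its nearest candidate eigenvalue and averages the squared distances; the exact weighted forms (14)–(16) are not reproduced here, the weights are omitted for simplicity, and both function names and the brute-force grid search are our own illustrative choices.

```python
def l2_risk(data, d, u):
    """Unweighted L2-type risk: each observation is matched to its nearest
    target eigenvalue (here simply d or u) and squared distances averaged."""
    return sum(min((x - d) ** 2, (x - u) ** 2) for x in data) / len(data)

def fit_two_point(data, grid):
    """Brute-force minimisation of the risk over a grid of (d, u) pairs."""
    return min(((d, u) for d in grid for u in grid if d < u),
               key=lambda du: l2_risk(data, *du))
```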
5.2. Numerical Illustrations
We first examine a simple numerical example for quantum data, then a case of data with errors and finally a case of misreported claims. For all the cases, we take and , for instance.
Numerical example. Consider the following dataset $V$, which is divided in two subsets. The successive steps of the procedure described in Section 5.1 are applied from the initial estimate $(d, u) = (40, 25)$.
(1) Maxwell-Boltzmann likelihood (11) with risk function (14). We obtain the following results (Table 1).
Choosing , we see that and . The associated minimum risk is and the maximum likelihood value is 2.198608 × 10. The loop takes 8 steps, i.e., it works very fast for a small data set.
To reduce overfitting, we apply the k-fold cross-validation method with . This gives the results below (Table 2).
When is the training set, we get and , with . Thus, there is a significant reduction in the risk function with a somewhat close .
(2) Bose-Einstein likelihood (12) with risk function (15). Here are the numerical results (Table 3).
Observe that we obtain the same estimates but with different probabilities. Again, it takes 8 steps to reach the stopping level.
A 2-fold cross-validation method improves the results as follows (Table 4).
With as training set, we get and , with instead of obtained before.
(3) Bose-Einstein likelihood (13) with risk function (16). The results are in the following table (Table 5), again for .
The results here are somewhat different, and the loop now takes only six steps. For this dataset, the model which fits best, i.e., with the smallest risk function, is the one based on Bose-Einstein statistics.
We also performed several numerical experiments with simulated data. In the examples (4)–(7) below, the simulations yield datasets of size ( was used too), and the calculations are made with .
(4) Uniform random data (Table 6). As in the examples (1), (2), we apply the usual Maxwell-Boltzmann and Bose-Einstein statistics.
We notice that the best fit is not always given by the Maxwell-Boltzmann statistics.
Data with errors. We wish to examine a dataset disturbed by errors. For that, we start with a set of true observables. Then, we add a special random error so that the generated dataset is given by (17). Below, we choose an error term with three possible values.
(5) Random data with errors (Table 7). The non-perturbed data come from a uniform sampling in .
We see that, as before, the best model depends on the dataset. In the case of a small error, the results are of course very close.
(6) Adjusted random data with errors (Table 8). The non-perturbed dataset is obtained by simulation according to the way 2.
The results are close when the error is small and slightly different when it increases.
Misreported data. Data samples may not report or may misreport claims, either by mistake or voluntarily. This can also occur because of a change of risk. Let $V$ be a dataset with $n$ reported claims and $m$ misreported claims, where $m$ is known but the true claim amounts are unknown. To handle the missing data, we apply a nearest neighbour approach and approximate each missing quantity by the average of its $k$ closest neighbours. Below, we choose a small value of $k$.
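The nearest-neighbour imputation just described can be sketched in a few lines; the function name is our own, and we assume the misreported value itself is available to define "closest" among the correctly reported claims.

```python
def impute(value, reported, k=3):
    """Approximate a misreported claim by the average of its k closest
    correctly reported neighbours (closest in absolute distance)."""
    nearest = sorted(reported, key=lambda x: abs(x - value))[:k]
    return sum(nearest) / len(nearest)
```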
(7) Random data with misreports (Table 9). First, data are generated according to the Maxwell-Boltzmann model perturbed by errors via (17). Then, random errors are generated to replace missing data, where $m$ takes several values ($m = 0$ meaning no missing data). Finally, the two datasets are combined by putting the errors at random positions.
As expected, a small value of $m$ does not affect the results very much. What is a little surprising is that, for a relatively large value of $m$, the estimates of the probabilities change only slightly but the estimates of the claim amounts are significantly modified.
In practice, the algorithm works well and quickly in most situations. We also performed numerical calculations with a different grid size, and it is essentially the value of the risk function that is affected.
6. Quantum Reserve Process
One of the main objectives of risk theory is to forecast the evolution of the reserves of an insurance company. This problem has generated a great deal of research using probabilistic techniques. We present below some introductory elements for an alternative quantum approach.
6.1. Distribution of the Reserves
The future reserves of an insurance company can be computed by applying path-integral methods (Feynman 1948; Feynman and Hibbs 2010). Arguing as in Tamturk and Utev (2018), we first obtain a representation (18) of the transition probabilities of the capital as a product of propagators over sub-intervals, up to an error term which depends on the grid size relative to the observation time $t$ (usually small). The propagator for each sub-interval plays the role of the transition probability. It is expressed in terms of a Markovian generator called the Markovian Hamiltonian. Then, by the completeness property in Dirac's formalism, we find the expression (19), summed over the eigenvalues and eigenstates in the spectral decomposition of the Hamiltonian operator $H$.
In the risk model discussed here, the reserve process is defined via the Hamiltonian whose eigenvalues in the chosen basis are the possible claim amounts. For the Maxwell-Boltzmann statistics, the transition probabilities (19) take an explicit form; analogous expressions hold for the Bose-Einstein statistics and for the adjusted quantum data, following way 2.
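A discrete stand-in for the propagator product (18) is to push the reserve distribution forward period by period with a per-period claim distribution; all names, the integer grid and the Markov-chain formulation are our own illustrative simplifications.

```python
def reserve_distribution(u0, c, claim_probs, n_periods):
    """Distribution of the reserve after n_periods: per period the reserve
    gains the premium c and loses a random claim drawn from claim_probs
    (a dict value -> probability), mimicking the propagator product (18)."""
    dist = {u0: 1.0}
    for _ in range(n_periods):
        new = {}
        for level, pr in dist.items():
            for claim, q in claim_probs.items():
                nl = level + c - claim
                new[nl] = new.get(nl, 0.0) + pr * q
        dist = new
    return dist
```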
6.2. Finite-Time Ruin Probability
Let $T$ be the ruin time, i.e., the first instant when the reserves become negative or null. To obtain the probability of non-ruin up to a given time, we just have to proceed as in (18) and delete the paths where the reserve is negative or null. This gives the result directly.
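The path-deletion idea can be sketched in the same discrete setting as above (integer reserve grid, per-period claim distribution, all names our own): at each step, mass on non-positive levels is simply removed, and the surviving mass is the non-ruin probability.

```python
def non_ruin_probability(u0, c, claim_probs, n_periods):
    """Finite-time non-ruin probability: propagate the reserve distribution
    as in (18) but delete all mass on levels <= 0 at each step."""
    dist = {u0: 1.0}
    for _ in range(n_periods):
        new = {}
        for level, pr in dist.items():
            for claim, q in claim_probs.items():
                nl = level + c - claim
                if nl > 0:                    # ruined paths are removed
                    new[nl] = new.get(nl, 0.0) + pr * q
        dist = new
    return sum(dist.values())
```

By construction the result is non-increasing in the time horizon, as ruin can only accumulate.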
Extension. The method can be applied to more advanced risk models. For instance, suppose that a change in risk occurs at some time $t_0$ so that the reserve process is modified from that time on. In such a situation, the non-ruin probability is obtained by combining the two regimes: the computation is made with the original parameters before $t_0$ and with the modified parameters after $t_0$.
(8) Change of risk. Consider the dataset of example (1), i.e., , and take , , and . When the analysis is by the Maxwell-Boltzmann statistics, we find and , with and . Given an initial capital , we compute the probability of non-ruin until time 30 and obtain .
Suppose that, as in example (1), V is divided in two subsets , . With the dataset , we find similarly and , with and . With , we have and , with and .
Now, let us examine a model with an unexpected risk which arises at some time $t_0$. The datasets before and after $t_0$ are precisely the two subsets above. Given the initial capital, the non-ruin probability until time 30 is then defined as in the extension above. Below (Table 10), we calculated the probabilities of non-ruin for several values of the parameter $f$.
Intuitively, increasing the economic burden f implies a larger risk. This is confirmed above since it yields a smaller non-ruin probability.
Discussion. The theory of insurance risk has attracted considerable interest in the actuarial field (see the books Asmussen and Albrecher 2010; Dickson 2017; Schmidli 2018). In particular, problems of ruin have been the subject of numerous investigations. Thus, different methods of calculating ruin probabilities have been proposed (e.g., Dufresne and Gerber 1989; Ignatov et al. 2001 and the Picard-Lefèvre formula (De Vylder 1999; Picard and Lefèvre 1997; Rullière and Loisel 2004)).
Risk theory has a long tradition as a branch of applied probability. In this paper, we present a quantum mechanics approach whose implementation in insurance is novel. This approach requires different techniques, including new representation and data processing in insurance. We have illustrated the methodology by various numerical examples. The advantages and the weaknesses of this approach remain a problem to be discussed in the future.
All authors contributed equally to this work.
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
- Asmussen, Søren, and Hansjörg Albrecher. 2010. Ruin Probabilities, 2nd ed. Singapore: World Scientific. [Google Scholar]
- Baaquie, Belal Ehsan. 2007. Quantum Finance: Path integrals and Hamiltonians for Options and Interest Rates. Cambridge: Cambridge University Press. [Google Scholar]
- Baaquie, Belal Ehsan. 2010. Interest Rates and Coupon Bonds in Quantum Finance. Cambridge: Cambridge University Press. [Google Scholar]
- Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Berlin: Springer. [Google Scholar]
- Bouchaud, Jean-Philippe, and Marc Potters. 2003. Theory of Financial Risk and Derivative Pricing: From Statistical Physics to Risk Management, 2nd ed. Cambridge: Cambridge University Press. [Google Scholar]
- De Vylder, F. Etienne. 1999. Numerical finite-time ruin probabilities by the Picard-Lefèvre formula. Scandinavian Actuarial Journal 2: 97–105. [Google Scholar] [CrossRef]
- Dickson, David C. M. 2017. Insurance Risk and Ruin, 2nd ed. Cambridge: Cambridge University Press. [Google Scholar]
- Dirac, Paul Adrien Maurice. 1933. The Lagrangian in quantum mechanics. Physikalische Zeitschrift der Sowjetunion 3: 64–72. [Google Scholar]
- Dufresne, François, and Hans U. Gerber. 1989. Three methods to calculate the probability of ruin. Astin Bulletin 19: 71–90. [Google Scholar] [CrossRef]
- Feynman, Richard P. 1948. Space-time approach to non-relativistic quantum mechanics. Reviews of Modern Physics 20: 367–87. [Google Scholar] [CrossRef]
- Feynman, Richard P., and Albert R. Hibbs. 2010. Quantum Mechanics and Path Integrals. Edited by Daniel F. Styer. New York: Dover Editions. [Google Scholar]
- Graham, John W. 2009. Missing data analysis: Making it work in the real world. Annual Review of Psychology 60: 549–76. [Google Scholar] [CrossRef] [PubMed]
- Griffiths, David J., and Darrell F. Schroeter. 2018. Introduction to Quantum Mechanics, 3rd ed. Cambridge: Cambridge University Press. [Google Scholar]
- Hastie, Trevor, and Robert Tibshirani. 1996. Discriminant adaptive nearest neighbor classification and regression. Advances in Neural Information Processing Systems 18: 409–15. [Google Scholar]
- Hastie, Trevor, Robert Tibshirani, and Jerome H. Friedman. 2009. The Elements of Statistical Learning, 2nd ed. New York: Springer. [Google Scholar]
- Ignatov, Zvetan G., Vladimir K. Kaishev, and Rossen S. Krachunov. 2001. An improved finite-time ruin probability formula and its Mathematica implementation. Insurance: Mathematics and Economics 29: 375–86. [Google Scholar] [CrossRef]
- Mantegna, Rosario N., and H. Eugene Stanley. 2000. An Introduction to Econophysics: Correlations and Complexity in Finance. Cambridge: Cambridge University Press. [Google Scholar]
- Parthasarathy, Kalyanapuram Rangachari. 1992. An Introduction to Quantum Stochastic Calculus. Basel: Springer. [Google Scholar]
- Picard, Philippe, and Claude Lefèvre. 1997. The probability of ruin in finite time with discrete claim size distribution. Scandinavian Actuarial Journal 1: 58–69. [Google Scholar] [CrossRef]
- Plenio, Martin. 2002. Quantum Mechanics. Ebook. London: Imperial College. [Google Scholar]
- Quinlan, Ross. 1988. C4.5: Programs for Machine Learning. San Mateo: Morgan Kaufmann. [Google Scholar]
- Rullière, Didier, and Stéphane Loisel. 2004. Another look at the Picard-Lefèvre formula for finite-time ruin probabilities. Insurance: Mathematics and Economics 35: 187–203. [Google Scholar] [CrossRef]
- Schmidli, Hanspeter. 2018. Risk Theory. Cham: Springer. [Google Scholar]
- Tamturk, Muhsin, and Sergey Utev. 2018. Ruin probability via quantum mechanics approach. Insurance: Mathematics and Economics 79: 69–74. [Google Scholar] [CrossRef]
Table 1. Maxwell-Boltzmann statistics for quantum data.
| (d, u) | Likelihood | (p, 1 − p) | d′ | u′ | Risk | Δ Risk |
|---|---|---|---|---|---|---|
| (40, 25) | 4.361099 × 10 | (0.01, 0.99) | 18 | 17 | 12.8500 | 12.8500 |
| (18, 17) | 1.569022 × 10 | (0.4, 0.6) | 19 | 15 | 7.7255 | 5.1245 |
| (19, 15) | 9.962820 × 10 | (0.33, 0.67) | 19 | 11 | 6.6365 | 1.089 |
| (19, 11) | 3.810307 × 10 | (0.43, 0.57) | 19 | 10 | 3.5169 | 3.1196 |
| (19, 10) | 1.141128 × 10 | (0.38, 0.62) | 17 | 9 | 2.5768 | 0.9401 |
| (17, 9) | 9.649455 × 10 | (0.22, 0.78) | 15 | 9 | 2.668082 | 0.091282 |
| (15, 9) | 2.198608 × 10 | (0.1, 0.9) | 15 | 9 | 2.987181 | 0.319099 |
| (15, 9) | 2.198608 × 10 | (0.1, 0.9) | 15 | 9 | 2.987181 | 0 |
Table 2. Using k-fold cross-validation with .
|Training Data||Test Data|
|Training Set||Test Set|
Table 3. Bose-Einstein statistics for quantum data.
| (d, u) | Likelihood | (p, 1 − p) | d′ | u′ | Risk | Δ Risk |
|---|---|---|---|---|---|---|
| (40, 25) | 4.361099 × 10 | (0.01, 0.99) | 18 | 17 | 12.8500 | 12.8500 |
| (18, 17) | 1.569022 × 10 | (0.4, 0.6) | 19 | 15 | 7.7255 | 5.1245 |
| (19, 15) | 9.962820 × 10 | (0.33, 0.67) | 19 | 11 | 6.6365 | 1.089 |
| (19, 11) | 3.810307 × 10 | (0.43, 0.57) | 19 | 10 | 3.5169 | 3.1196 |
| (19, 10) | 1.492842 × 10 | (0.38, 0.62) | 17 | 9 | 2.620360 | 0.89654 |
| (17, 9) | 1.434357 × 10 | (0.25, 0.75) | 15 | 9 | 2.681275 | 0.060915 |
| (15, 9) | 3.019659 × 10 | (0.13, 0.87) | 15 | 9 | 2.963947 | 0.282672 |
| (15, 9) | 3.019659 × 10 | (0.13, 0.87) | 15 | 9 | 2.963947 | 0 |
Table 4. Using k-fold cross-validation.
|Training Data||Test Data|
|Training Set||Test Set|
Table 5. Bose-Einstein statistics for adjusted quantum data.
| (d, u) | Likelihood | (p, 1 − p) | d′ | u′ | Risk | Δ Risk |
|---|---|---|---|---|---|---|
| (40, 25) | 4.361099 × 10 | (0.01, 0.99) | 18 | 17 | 17.421881 | 17.421881 |
| (18, 17) | 1.569022 × 10 | (0.4, 0.6) | 19 | 15 | 9.905660 | 7.516221 |
| (19, 15) | 9.962820 × 10 | (0.33, 0.67) | 19 | 11 | 8.547535 | 1.358125 |
| (19, 11) | 3.810307 × 10 | (0.43, 0.57) | 19 | 10 | 4.504016 | 4.043519 |
| (19, 10) | 3.810307 × 10 | (0.57, 0.43) | 17 | 9 | 3.835010 | 0.669006 |
| (17, 9) | 3.810307 × 10 | (0.57, 0.43) | 17 | 9 | 3.835010 | 0 |
Table 6. Uniformly generated data.
|Likelihood||5.5996 × 10||0||3.1488 × 10||0|
Table 7. Random data with errors.
Table 8. Adjusted random data with errors.
|Likelihood||2.0197 × 10||3.4390 × 10||1.1716 × 10||5.0452 × 10||3.0095 × 10||1.6389 × 10|
Table 9. Random data with misreports.
|Likelihood||2.7343 × 10||4.3868 × 10||3.3073 × 10||5.4464 × 10||1.3609 × 10||1.0870 × 10|
Table 10. Non-ruin probabilities under a change of risk.
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).