Open Access. This article is freely available and re-usable.

*Risks* **2018**, *6*(3), 99; https://doi.org/10.3390/risks6030099

Article

A Quantum-Type Approach to Non-Life Insurance Risk Modelling

^{1} Département de Mathématique, Université Libre de Bruxelles, Campus de la Plaine C.P. 210, B-1050 Bruxelles, Belgique

^{2} ISFA, Université Lyon 1, LSAF EA2429, 50 Avenue Tony Garnier, F-69007 Lyon, France

^{3} Department of Mathematics, University of Leicester, University Road, Leicester LE1 7RH, UK

^{*} Author to whom correspondence should be addressed.

Received: 30 July 2018 / Accepted: 11 September 2018 / Published: 14 September 2018

## Abstract

A quantum mechanics approach is proposed to model non-life insurance risks and to compute future reserve amounts and ruin probabilities. The claim data, historical or simulated, are treated as coming from quantum observables and analyzed with traditional machine learning tools. They can then be used to forecast the evolution of the reserves of an insurance company. The methodology relies on the Dirac matrix formalism and the Feynman path-integral method.

Keywords: non-life insurance; reserve process; ruin probability; quantum mechanics; Hamiltonian; path-integral; econophysics; learning techniques; data analysis

## 1. Introduction

The theory of non-life insurance risk is a major topic in actuarial sciences. The literature is wide and varied, and a comprehensive review can be found in the books Asmussen and Albrecher (2010); Dickson (2017); Schmidli (2018).

This paper proposes a quantum-type approach for the representation and analysis of non-life insurance data. Quantum mechanics methods are successfully applied in various disciplines, including finance for option pricing (e.g., Baaquie 2007, 2010) and econophysics for risk management (e.g., Bouchaud and Potters 2003; Mantegna and Stanley 2000). Their application to insurance, however, is an emerging field of research that has been introduced recently in Tamturk and Utev (2018).

Overall, the current approach is new and consists in representing the observations on an insurance risk as quantum data, that is, within a quantum mechanical type model. This methodology is based on the Dirac matrix formalism (Dirac 1933) and the Feynman path integral method (Feynman 1948). First, claim data obtained from the past or by simulation are analyzed with standard machine learning tools such as classification, maximum likelihood estimation and risk error function techniques. Then, these data can be used to determine the distribution of the reserve process and the associated finite-time ruin probabilities.

Data analysis plays a key role in many areas, and learning techniques provide a key tool for this purpose (e.g., Bishop 2006; Quinlan 1988). In actuarial sciences, practitioners often use such techniques to analyze data and to predict future losses. Taking missing data into account is also important in practice (Graham 2009). This arises in insurance with unreported claims and frauds; the topic will be briefly addressed. Political and economic changes are another risk factor for companies due to possible inflation and trade restrictions; such a situation will be sketched too. An advantage of our framework pertains to handling unknown probabilities of repeated events which, in our experience, can be bypassed with an adapted quantum data representation.

The paper is organized as follows. Section 2 presents the compound Poisson risk process when repeated claims are reported or not, and the two corresponding quantum risk models. For simplicity, the claim amounts are assumed to have a two-point distribution. The data, however, will be treated as values observed with errors, which broadens somewhat the applicability of the analysis. In Section 3, the so-called quantum observables are constructed for the two quantum models. This amounts to determining the eigenvalues of a Hermitian operator to be constructed. In Section 4, the existence of Maxwell-Boltzmann and Bose-Einstein statistics is explicitly indicated, and the associated likelihood functions are derived. Section 5 deals with the estimation of the claim amount distribution from a set of data, historical or simulated. As mentioned before, the method followed is rather simple and standard, and we then discuss several numerical examples. In Section 6, we show how to compute, in the quantum context, the distribution of the reserves of the company in the course of time. We then obtain the probabilities of ruin over a finite time horizon, and this is again illustrated numerically.

## 2. Quantum Risk Models

Consider the classical compound Poisson risk process (Asmussen and Albrecher 2010; Dickson 2017; Schmidli 2018). The reserve process $\{R(t),t\ge 0\}$ is defined by

$$R\left(t\right)={x}_{0}+ct-S\left(t\right),$$

where ${x}_{0}$ is the initial capital, c is the constant premium rate and $S\left(t\right)$ denotes the total claim amount up to time t, defined by

$$S\left(t\right)=\sum _{j=1}^{N\left(t\right)}{X}_{j},$$

where $N\left(t\right)$ is a Poisson process of rate $\lambda$ and the ${X}_{j}$ are the claim amounts, i.i.d. random variables distributed as a generic variable X $({=}_{d}X)$.

For simplicity, we assume here that each claim has a two-point distribution given by

$$X=\left\{\begin{array}{c}d\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}q,\hfill \\ u\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}p.\hfill \end{array}\right.$$

Typically, d represents a small claim amount and u a significant claim amount.

**Complete data**. In this case, the observed data are treated as coming from the classical model. These data are collected at regular times $\Delta t,2\Delta t,\dots$, and they provide us with the cumulative claim amounts during each interval. The periods $\Delta t$ are small enough to reasonably assume that there are at most two claims per period. Hence, we have

$$S(\Delta t)=\left\{\begin{array}{c}0\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}{\delta}_{0},\hfill \\ d\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}q{\delta}_{1},\hfill \\ u\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}p{\delta}_{1},\hfill \\ d+u\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}2qp{\delta}_{2},\hfill \\ 2d\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}{q}^{2}{\delta}_{2},\hfill \\ 2u\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}{p}^{2}{\delta}_{2},\hfill \end{array}\right.$$

where

$$\begin{array}{c}{\delta}_{1}=P[N(\Delta t)=1]={e}^{-\lambda \Delta t}\lambda \Delta t,\hfill \\ {\delta}_{2}=P[N(\Delta t)=2]={e}^{-\lambda \Delta t}{(\lambda \Delta t)}^{2}/2,\hfill \\ {\delta}_{0}=1-{\delta}_{1}-{\delta}_{2}\approx P[N(\Delta t)=0]={e}^{-\lambda \Delta t}.\hfill \end{array}$$
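As a sanity check, the distribution of $S(\Delta t)$ above can be tabulated directly. In the sketch below, the function name and the values $d=3$, $u=7$, $q=0.9$ are illustrative choices of ours; the assertion verifies that the six probabilities sum to one.

```python
import math

def s_delta_distribution(lam, dt, d, u, q):
    """Probabilities of the cumulative claim S(dt), assuming at most
    two claims per period (the tail P[N > 2] is absorbed into delta0)."""
    p = 1.0 - q
    delta1 = math.exp(-lam * dt) * lam * dt              # P[N = 1]
    delta2 = math.exp(-lam * dt) * (lam * dt) ** 2 / 2   # P[N = 2]
    delta0 = 1.0 - delta1 - delta2                       # ~ P[N = 0]
    return {0: delta0,
            d: q * delta1,
            u: p * delta1,
            d + u: 2 * q * p * delta2,
            2 * d: q ** 2 * delta2,
            2 * u: p ** 2 * delta2}

dist = s_delta_distribution(lam=1.0, dt=1.0, d=3, u=7, q=0.9)
assert abs(sum(dist.values()) - 1.0) < 1e-12  # probabilities sum to one
```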

**Quantum data**. This time, the observed data are treated as a sample of eigenvalues of operators and are referred to as quantum data. Recall that, from the quantum mechanical point of view, the observables are eigenvalues of certain Hermitian operators / self-adjoint matrices. For a nice introduction to that theory, the reader is referred e.g., to Griffiths and Schroeter (2018); Plenio (2002); a thorough analysis is provided in Parthasarathy (1992).

Thus, the different possible claim amounts $0,d,u,d+u,2d,2u$ are considered as energy levels of particles and are treated as the eigenvalues of an operator H which has to be modelled. This modelling requires a careful choice of H.

**Data with missing values**. As before, data on cumulative claim amounts are collected at regular times $\Delta t,2\Delta t,\dots $ with small $\Delta t$. Now, however, we assume that the cases of repeated claims (i.e., $2d$ and $2u$) are not observed. Unreported claims of this kind can be viewed as a deliberate omission. We then have

$$S(\Delta t)=\left\{\begin{array}{c}0\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{some}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}{p}_{0},\hfill \\ d\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{some}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}{p}_{1},\hfill \\ u\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{some}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}{p}_{2},\hfill \\ d+u\phantom{\rule{1.em}{0ex}}\mathrm{with}\phantom{\rule{4.pt}{0ex}}\mathrm{some}\phantom{\rule{4.pt}{0ex}}\mathrm{probability}\phantom{\rule{1.em}{0ex}}{p}_{3}.\hfill \end{array}\right.$$

**Adjusted quantum data**. Quantum data can be adjusted to handle missing values in several ways. Three cases are examined here.

Way 1. We use the same quantum observable operator H as in the classical model. The probabilities of the missing values are thus set to 0.

Way 2. The values $2d$ and $2u$ are not eigenvalues of the observables. This requires deriving a different Hamiltonian.

Way 3. We consider as only possible jumps either 0 or 1-step jumps $d,u,d+u$. A new Hamiltonian is then obtained.

**Data and simulation**. We have assumed that each claim has only two possible values d and u. Nevertheless, data obtained by simulation are observed values tainted by errors. For example, a simple dataset such as $\{4,7,2,11,3,6\}$ can be treated as generated by either (1) or (2) with $(d=3,u=7)$ observed with errors. In Section 5, we will discuss and illustrate different simulation procedures.

## 3. Quantum Observables

In this section, we will construct the Hermitian operators corresponding to the two quantum risk models presented above. We start with some usual notation and preliminaries.

Independent observables. Given two observables A, B, the tensor product $A\otimes B$ acts as a quantum product of two independent observables. So, $ln(A\otimes B)$ acts as a quantum sum of two independent observables. In particular, $B\otimes B$ is the quantum product of two i.i.d. observables, and $ln(B\otimes B)$ the quantum sum of two i.i.d. observables.

In our case, the basic 1-step quantum claim variable is a $2\times 2$ matrix B with eigenvalues $exp\left(u\right),exp\left(d\right)$ and interpreted as a 1-step jump geometric random walk, $B\otimes B$ as a 2-step jump geometric random walk, etc. To model the standard random walk, we first consider the geometric random walk and then take the ln.

An identity operator ${I}_{n}$ (in dimension n) is introduced that does not affect the dynamics. Indeed, ${I}_{n}\otimes B$ corresponds to multiplying by 1 at the first step, while $B\otimes {I}_{n}$ corresponds to multiplying by 1 at the second step. Note that, in general, the tensor product is not commutative.
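These tensor-product rules can be illustrated numerically. The sketch below is ours: it takes $V={I}_{2}$ for simplicity (so every operator is diagonal) with the illustrative values $u=7$, $d=3$, and checks that $ln(B\otimes B)$ indeed has the pairwise sums of jumps as eigenvalues, and that the tensor product is not commutative.

```python
import numpy as np

u, d = 7.0, 3.0
# 1-step jump operator with eigenvalues e^u and e^d (taking V = I_2)
B = np.diag([np.exp(u), np.exp(d)])

# quantum product of two i.i.d. observables
BB = np.kron(B, B)

# ln(B x B) has eigenvalues 2u, u+d, d+u, 2d: the quantum sum of two jumps
eigs = np.sort(np.log(np.linalg.eigvalsh(BB)))
assert np.allclose(eigs, np.sort([2*d, d + u, u + d, 2*u]))

# I_2 x B and B x I_2 differ as matrices: the tensor product is not commutative
left, right = np.kron(np.eye(2), B), np.kron(B, np.eye(2))
assert not np.allclose(left, right)
```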

Partitioned space. To deal with an event space which is partitioned into n events, we work with the n orthogonal projection (Hermitian) operators ${P}_{i}$ onto the eigenspaces of an observable A.

#### 3.1. Quantum Data

The operator H is constructed as a projection on three claim (jump) cases $i=0,1,2$. Given that case i occurs, the claims are defined as a quantum type random walk as described above. Applying the argument outlined above with standard quantum-type calculations (Griffiths and Schroeter 2018; Parthasarathy 1992; Plenio 2002), we derive our first observable operator

$$H={P}_{0}\otimes {O}_{4}+{P}_{1}\otimes ln(B\otimes {I}_{2})+{P}_{2}\otimes ln\left({B}^{\otimes 2}\right).$$

More explicitly, the matrices B and ${B}^{\otimes 2}$ are the 1 and 2-step exponential jump claim operators defined by

$$B={V}^{*}DV={V}^{*}\left(\begin{array}{cc}{e}^{u}& 0\\ 0& {e}^{d}\end{array}\right)V,\phantom{\rule{2.em}{0ex}}{B}^{\otimes 2}=({V}^{*}\otimes {V}^{*})(D\otimes D)(V\otimes V),$$

where V is a $2\times 2$ unitary matrix, ${V}^{*}$ is its adjoint and ${I}_{2}$ is the $2\times 2$ identity matrix that corresponds to the absence of a second claim. Notice that ${I}_{2}={V}^{*}V$. So, the actual 1-step claim operator is $ln(B\otimes {I}_{2})$, computed as

$$\begin{array}{c}ln(B\otimes {I}_{2})={\left({V}^{*}\right)}^{\otimes 2}\left(\begin{array}{cccc}u& 0& 0& 0\\ 0& u& 0& 0\\ 0& 0& d& 0\\ 0& 0& 0& d\end{array}\right){V}^{\otimes 2},\end{array}$$

and the 2-step claim operator is $ln\left({B}^{\otimes 2}\right)$, given by

$$\begin{array}{c}ln\left({B}^{\otimes 2}\right)={\left({V}^{\otimes 2}\right)}^{*}\left(\begin{array}{cccc}2u& 0& 0& 0\\ 0& u+d& 0& 0\\ 0& 0& d+u& 0\\ 0& 0& 0& 2d\end{array}\right){V}^{\otimes 2}.\end{array}$$

Moreover, let ${D}_{i|n},\phantom{\rule{0.166667em}{0ex}}1\le i\le n$, be the $n\times n$ diagonal matrix with a single non-zero element ${\left({D}_{i|n}\right)}_{i,i}=1$, i.e., ${\left({D}_{i|n}\right)}_{k,m}=0$ for $(k,m)\ne (i,i)$. The $3\times 3$ matrices ${P}_{0},{P}_{1},{P}_{2}$ are the $0,1,2$ claim occurrence operators (projections) defined by

$$\begin{array}{c}{P}_{i}={W}^{*}{D}_{i+1|3}W,\phantom{\rule{1.em}{0ex}}i=0,1,2,\end{array}$$

where W is a $3\times 3$ unitary matrix. Finally, denote by ${O}_{n}$ the $n\times n$ matrix with all elements equal to 0. The matrix ${O}_{4}$ corresponds to a 0 claim size and is given by

$$\begin{array}{c}{O}_{4}={\left({V}^{*}\right)}^{\otimes 2}({O}_{2}\otimes {O}_{2}){V}^{\otimes 2}.\end{array}$$

Overall, we then obtain

$$H={U}^{*}\left(\begin{array}{cccccccccccc}\mathbf{0}& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& \mathbf{0}& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& \mathbf{0}& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& \mathbf{0}& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& \mathbf{u}& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& \mathbf{u}& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& \mathbf{d}& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& \mathbf{d}& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& \mathbf{2}\mathbf{u}& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& \mathbf{u}+\mathbf{d}& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& \mathbf{d}+\mathbf{u}& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& \mathbf{2}\mathbf{d}\end{array}\right)U,$$

where U is a $12\times 12$ unitary matrix such that

$$\begin{array}{c}U=W\otimes {V}^{\otimes 2},\phantom{\rule{2.em}{0ex}}{U}^{*}={W}^{*}\otimes {\left({V}^{*}\right)}^{\otimes 2}.\end{array}$$
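The construction of H can be checked numerically. The sketch below is ours; it takes $V={I}_{2}$ and $W={I}_{3}$ for simplicity (so U is the identity and every factor is diagonal), with illustrative values $u=7$, $d=3$, and verifies that the spectrum of the $12\times 12$ operator is the claimed multiset of claim amounts.

```python
import numpy as np

u, d = 7.0, 3.0

def D(i, n):
    """D_{i|n}: n x n diagonal matrix with a single 1 at position (i, i)."""
    M = np.zeros((n, n))
    M[i - 1, i - 1] = 1.0
    return M

# take V = I_2 and W = I_3 for illustration, so every factor is diagonal
P0, P1, P2 = D(1, 3), D(2, 3), D(3, 3)       # 0, 1, 2 claim-occurrence projections
O4 = np.zeros((4, 4))                        # zero claim size
ln_B_I2 = np.diag([u, u, d, d])              # 1-step claim operator ln(B x I_2)
ln_BB = np.diag([2*u, u + d, d + u, 2*d])    # 2-step claim operator ln(B x B)

# observable H of the quantum risk model, a 12 x 12 Hermitian matrix
H = np.kron(P0, O4) + np.kron(P1, ln_B_I2) + np.kron(P2, ln_BB)
spectrum = np.sort(np.linalg.eigvalsh(H))
assert np.allclose(spectrum,
                   np.sort([0, 0, 0, 0, u, u, d, d, 2*u, u + d, d + u, 2*d]))
```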

#### 3.2. Adjusted Quantum Data

We consider the three ways indicated before to handle missing data.

Way 1. This uses the same quantum observable H as previously. However, the probabilities ${p}_{2u}$ of $2u$ and ${p}_{2d}$ of $2d$ are set equal to 0.

Way 2. Now, the values $2u$ and $2d$ are not taken into consideration as eigenvalues. The new observable operator ${H}^{\prime}$ is then

$${H}^{\prime}={P}_{0}\otimes {O}_{4}+{P}_{1}\otimes ln(B\otimes {I}_{2})+{P}_{2}\otimes (ln\left({B}^{\otimes 2}\right)\otimes S),$$

where $S={D}_{2|4}+{D}_{3|4}$ is a projection type operator in the previous notation ${D}_{i|n}$. This operator S is applied because the capital movement can be exposed to an unusual change at the second step (it cannot go to $2u$ or $2d$). In this case, the probabilities of $2u$ and $2d$ are not equal to 0.

Way 3. This time, we consider $u+d$ as a possible first jump. The new observable operator ${H}^{\prime\prime}$ is

$${H}^{\prime\prime}={\tilde{P}}_{0}\otimes {O}_{3}+{\tilde{P}}_{1}\otimes ln(B\otimes {I}_{1}).$$

Here, ${I}_{1}=\left[1\right]$ is the $1\times 1$ identity matrix and

$$\begin{array}{c}ln(B\otimes {I}_{1})=ln\left(B\right)={\tilde{V}}^{*}\left(\begin{array}{ccc}u& 0& 0\\ 0& d& 0\\ 0& 0& u+d\end{array}\right)\tilde{V},\end{array}$$

where $\tilde{V}$ is a $3\times 3$ unitary matrix. Moreover, the $2\times 2$ matrices ${\tilde{P}}_{0},{\tilde{P}}_{1}$ are the $0,1$ claim occurrence operators defined by

$$\begin{array}{c}{\tilde{P}}_{i}={\tilde{W}}^{*}{D}_{i+1|2}\tilde{W},\phantom{\rule{1.em}{0ex}}i=0,1,\end{array}$$

where $\tilde{W}$ is a $2\times 2$ unitary matrix.

Therefore, we have

$${H}^{\prime\prime}={\tilde{U}}^{*}\left(\begin{array}{cccccc}\mathbf{0}& 0& 0& 0& 0& 0\\ 0& \mathbf{0}& 0& 0& 0& 0\\ 0& 0& \mathbf{0}& 0& 0& 0\\ 0& 0& 0& \mathbf{u}& 0& 0\\ 0& 0& 0& 0& \mathbf{d}& 0\\ 0& 0& 0& 0& 0& \mathbf{u}+\mathbf{d}\end{array}\right)\tilde{U},$$

where $\tilde{U}=\tilde{W}\otimes \tilde{V}$.

## 4. Quantum Likelihood

In the Dirac formalism, the so-called bra-ket notation has proven very useful and easy to handle, and has become standard in quantum mechanics. We recall it briefly; more details can be found e.g., in Griffiths and Schroeter (2018); Parthasarathy (1992); Plenio (2002). Consider a class of $n\times n$ matrices treated as a ${C}^{*}$ algebra. A column vector x is represented as a ket-vector $|x>$. The associated bra-vector $<x|$ is a row vector defined as its Hermitian conjugate. Then, $<x|y>$ corresponds to the usual inner product. Moreover, $|x><y|$ is the outer product, i.e., an operator/matrix defined by

$$|x><y||z>=<y|z>|x>\phantom{\rule{1.em}{0ex}}(abc=bca\phantom{\rule{4.pt}{0ex}}\mathrm{rule}).$$

In particular, for any unit vector e, ${P}_{e}=|e><e|$ defines a projection operator which acts as

$$\begin{array}{c}{P}_{e}|x>=|e><e||x>=<e|x>|e>.\end{array}$$

Let $\rho$ be the density operator which describes the statistical state of the system. The projection operator plays the role of an event, and the probability of finding the system in the state e is defined as the expectation

$$\begin{array}{c}E\left({P}_{e}\right)=E[|e><e|]=tr\left(\rho {P}_{e}\right),\end{array}$$

where $tr$ denotes the trace. Now, an operator $A\in {C}^{*}$ is an observable if A is self-adjoint ($A={A}^{*}$). Thanks to that property, A can be expanded over its spectrum $\left\{{\beta}_{i}\right\}$ by projection operators, i.e.,

$$A=\sum _{i}{\beta}_{i}{P}_{{e}_{i}}=\sum _{i}{\beta}_{i}|{e}_{i}><{e}_{i}|,$$

in which ${e}_{i}$ is the eigenvector for ${\beta}_{i}$. The probability of the measurement is extended by linearity of the expectation as

$$\begin{array}{c}E\left(A\right)=E\left[\sum _{i}{\beta}_{i}|{e}_{i}><{e}_{i}|\right]=\sum _{i}{\beta}_{i}\phantom{\rule{0.166667em}{0ex}}E[|{e}_{i}><{e}_{i}|]=\sum _{i}{\beta}_{i}\phantom{\rule{0.166667em}{0ex}}tr\left(\rho {P}_{{e}_{i}}\right).\end{array}$$
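A small numerical illustration of these rules may help; the weights and eigenvalues below are arbitrary illustrative choices of ours. The probability of a state e is $tr(\rho {P}_{e})$, and the expectation of an observable follows by linearity.

```python
import numpy as np

# density operator rho = sum_i w_i |e_i><e_i| over an orthonormal basis
w = np.array([0.5, 0.3, 0.2])                  # illustrative statistical weights
basis = np.eye(3)                              # e_1, e_2, e_3
rho = sum(wi * np.outer(e, e) for wi, e in zip(w, basis))

def prob(rho, e):
    """Probability of finding the system in state e: tr(rho P_e)."""
    return float(np.trace(rho @ np.outer(e, e)))

assert abs(prob(rho, basis[1]) - 0.3) < 1e-12

# expectation of an observable A = sum_i beta_i |e_i><e_i|
beta = np.array([0.0, 3.0, 7.0])               # illustrative eigenvalues
A = sum(b * np.outer(e, e) for b, e in zip(beta, basis))
expectation = float(np.trace(rho @ A))         # = sum_i beta_i tr(rho P_i)
assert abs(expectation - np.dot(w, beta)) < 1e-12
```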

We are ready to go back to the insurance risk process. First, we examine the case of quantum data using two classical models developed in the literature.

#### 4.1. Maxwell-Boltzmann Statistics

We consider the model (1) with the operator (3). The eigenvalues of the operator H are $0,u,d,u+d,2u,2d$. The probabilities of finding the system in the corresponding eigenstates are defined via the quantum method sketched before. Now, we assume that the eigenvalues are observed independently and that the density $\rho$ is defined as

$$\rho ={\rho}_{1}\otimes {\rho}_{2}={W}^{*}\left(\begin{array}{ccc}{\delta}_{0}& 0& 0\\ 0& {\delta}_{1}& 0\\ 0& 0& {\delta}_{2}\end{array}\right)W\phantom{\rule{0.166667em}{0ex}}\otimes \phantom{\rule{0.166667em}{0ex}}{\left({V}^{*}\right)}^{\otimes 2}\left(\begin{array}{cccc}{p}^{2}& 0& 0& 0\\ 0& pq& 0& 0\\ 0& 0& qp& 0\\ 0& 0& 0& {q}^{2}\end{array}\right){V}^{\otimes 2},$$

with ${\rho}_{2}$ itself having an independence type tensor product representation given by

$${\rho}_{2}={\rho}_{2}^{\prime}\phantom{\rule{0.166667em}{0ex}}\otimes \phantom{\rule{0.166667em}{0ex}}{\rho}_{2}^{\prime\prime}={V}^{*}\left(\begin{array}{cc}p& 0\\ 0& q\end{array}\right)V\otimes {V}^{*}\left(\begin{array}{cc}p& 0\\ 0& q\end{array}\right)V,$$

chosen to satisfy the following restrictions:

$$\begin{array}{c}tr(\rho {P}_{0})={\delta}_{0},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{u})=p{\delta}_{1},\hfill \\ tr(\rho {P}_{d})=q{\delta}_{1},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{u+d})=2pq{\delta}_{2},\hfill \\ tr(\rho {P}_{2u})={p}^{2}{\delta}_{2},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{2d})={q}^{2}{\delta}_{2},\hfill \end{array}$$

where ${P}_{\beta}$ is the projection operator on the eigenvalue $\beta$ given by

$$\begin{array}{c}{P}_{\beta}={U}^{*}{D}_{\beta}U,\end{array}$$

where, in the notation ${D}_{i|n}$,

$$\begin{array}{c}{D}_{0}={D}_{1|3}\otimes ({D}_{1|4}+\dots +{D}_{4|4}),\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{D}_{u}={D}_{2|3}\otimes ({D}_{1|4}+{D}_{2|4}),\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{D}_{d}={D}_{2|3}\otimes ({D}_{3|4}+{D}_{4|4}),\hfill \\ {D}_{2u}={D}_{3|3}\otimes {D}_{1|4},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{D}_{2d}={D}_{3|3}\otimes {D}_{4|4},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{D}_{u+d}={D}_{3|3}\otimes ({D}_{2|4}+{D}_{3|4}).\hfill \end{array}$$

After standard but relatively lengthy calculations, we obtain the following existence result.

**Lemma 1** (Maxwell-Boltzmann density)**.** The set of densities satisfying the above restrictions (6) is not empty. Moreover, there exists a density of the form $\rho ={\rho}_{1}\otimes {\rho}_{2}$ corresponding to Maxwell-Boltzmann statistics.

#### 4.2. Bose-Einstein Statistics

We examine the same model (1) with the operator (3) but when the eigenvalues are not observed independently. More precisely, we assume that the eigenvalues $u+d$ and $d+u$ cannot be distinguished and that the density $\rho$ has to satisfy the following restrictions:

$$\begin{array}{c}tr(\rho {P}_{0})={\delta}_{0},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{u})=p{\delta}_{1},\hfill \\ tr(\rho {P}_{d})=q{\delta}_{1},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{u+d})=Cpq{\delta}_{2},\hfill \\ tr(\rho {P}_{2u})=C{p}^{2}{\delta}_{2},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{2d})=C{q}^{2}{\delta}_{2},\hfill \end{array}$$

where ${P}_{\beta}$ is defined as before and C is chosen to satisfy the normalization condition

$$C({p}^{2}+pq+{q}^{2})=1.$$
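A quick check, with illustrative values $p=0.3$ and $\lambda =\Delta t=1$ of ours, that this choice of C makes the six Bose-Einstein probabilities sum to one:

```python
import math

lam, dt, p = 1.0, 1.0, 0.3           # illustrative values
q = 1.0 - p
C = 1.0 / (p**2 + p*q + q**2)        # normalization constant

delta1 = math.exp(-lam * dt) * lam * dt
delta2 = math.exp(-lam * dt) * (lam * dt) ** 2 / 2
delta0 = 1.0 - delta1 - delta2

# the six Bose-Einstein probabilities of (7) again sum to one
total = (delta0 + p * delta1 + q * delta1
         + C * p * q * delta2 + C * p**2 * delta2 + C * q**2 * delta2)
assert abs(total - 1.0) < 1e-12
```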

As before, an existence result is proved after lengthy calculations.

**Lemma 2** (Bose-Einstein density)**.** The set of densities satisfying the above restrictions (7) is not empty.

We now move on to the case with missing data.

#### 4.3. Adjusted Quantum Data

One possibility is to apply statistics of the Bose-Einstein type. Again, the three previous ways with the observable operators (3)–(5) are considered. For each one, it can be shown that the set of densities satisfying the restrictions is not empty.

Way 1. Here, we choose for the probabilities

$$\begin{array}{cc}tr(\rho {P}_{0})& ={C}^{\prime}{\delta}_{0},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{u})={C}^{\prime}p{\delta}_{1},\hfill \\ tr(\rho {P}_{d})& ={C}^{\prime}q{\delta}_{1},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{u+d})={C}^{\prime}pq{\delta}_{2},\hfill \end{array}$$

where ${P}_{\beta}$ is the same as above and ${C}^{\prime}$ is chosen to satisfy the normalization condition ${C}^{\prime}({\delta}_{0}+{\delta}_{1}(p+q)+pq{\delta}_{2})=1$.

Way 2. The eigenvalue $u+d$ is observed twice and so, a natural choice is

$$\begin{array}{cc}tr(\rho {P}_{0})& ={C}^{\prime\prime}{\delta}_{0},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{u})={C}^{\prime\prime}p{\delta}_{1},\hfill \\ tr(\rho {P}_{d})& ={C}^{\prime\prime}q{\delta}_{1},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{u+d})={C}^{\prime\prime}{\delta}_{2}pq,\hfill \end{array}$$

where ${C}^{\prime\prime}({\delta}_{0}+{\delta}_{1}(p+q)+pq{\delta}_{2})=1$. Note that, in the Maxwell-Boltzmann case, we may also write $tr(\rho {P}_{u+d})=2{C}^{\prime}{\delta}_{2}pq$.

Way 3. The choice of the probabilities is quite arbitrary; to restrict it, we add the unobserved probabilities of $2d$ and $2u$ to the probabilities observed in the Bose-Einstein statistics. Thus, the resulting probabilities are

$$\begin{array}{c}tr(\rho {P}_{0})={C}^{\prime\prime\prime}{\delta}_{0},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{u})={C}^{\prime\prime\prime}p{\delta}_{1},\hfill \\ tr(\rho {P}_{d})={C}^{\prime\prime\prime}q{\delta}_{1},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}tr(\rho {P}_{u+d})={C}^{\prime\prime\prime}{\delta}_{2}(pq+{p}^{2}+{q}^{2}),\hfill \end{array}$$

where ${C}^{\prime\prime\prime}({\delta}_{0}+{\delta}_{1}(p+q)+(pq+{p}^{2}+{q}^{2}){\delta}_{2})=1$.

#### 4.4. Likelihood Functions

The corresponding likelihood functions are now straightforward. Denote by $\#x$ the number of occurrences of x in the dataset. For the Maxwell-Boltzmann statistics, the likelihood is given by

$$L(p,q)={\left({p}^{2}{\delta}_{2}\right)}^{\#2u}{\left({q}^{2}{\delta}_{2}\right)}^{\#2d}{\left(2qp{\delta}_{2}\right)}^{\#(u+d)}{\left(p{\delta}_{1}\right)}^{\#u}{\left(q{\delta}_{1}\right)}^{\#d}{\left({\delta}_{0}\right)}^{\#0}.$$

For the Bose-Einstein statistics, in which $u+d$ and $d+u$ are indistinguishable,

$$L(p,q)={\left(C{p}^{2}{\delta}_{2}\right)}^{\#2u}{\left(C{q}^{2}{\delta}_{2}\right)}^{\#2d}{\left(Cqp{\delta}_{2}\right)}^{\#(u+d)}{\left(p{\delta}_{1}\right)}^{\#u}{\left(q{\delta}_{1}\right)}^{\#d}{\left({\delta}_{0}\right)}^{\#0}.$$

For the adjusted quantum data, following way 2 for example,

$$L(p,q)={\left({C}^{\prime\prime}qp{\delta}_{2}\right)}^{\#(u+d)}{\left({C}^{\prime\prime}p{\delta}_{1}\right)}^{\#u}{\left({C}^{\prime\prime}q{\delta}_{1}\right)}^{\#d}{\left({C}^{\prime\prime}{\delta}_{0}\right)}^{\#0}.$$
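For instance, the Maxwell-Boltzmann likelihood (11) can be maximized numerically over p (with $q=1-p$). In the sketch below, the counts are illustrative choices of ours; since p enters with total exponent $2\#2u+\#(u+d)+\#u$, the maximizer is the corresponding binomial proportion, which a plain grid search recovers.

```python
import math

def mb_loglik(p, counts, lam=1.0, dt=1.0):
    """Log of the Maxwell-Boltzmann likelihood (11), with q = 1 - p."""
    q = 1.0 - p
    d1 = math.exp(-lam * dt) * lam * dt
    d2 = math.exp(-lam * dt) * (lam * dt) ** 2 / 2
    d0 = 1.0 - d1 - d2
    return (counts['2u'] * math.log(p * p * d2)
            + counts['2d'] * math.log(q * q * d2)
            + counts['u+d'] * math.log(2 * p * q * d2)
            + counts['u'] * math.log(p * d1)
            + counts['d'] * math.log(q * d1)
            + counts['0'] * math.log(d0))

# illustrative counts #x of each observed value
counts = {'2u': 1, '2d': 2, 'u+d': 1, 'u': 2, 'd': 3, '0': 1}
p_hat = max((i / 1000 for i in range(1, 1000)),
            key=lambda p: mb_loglik(p, counts))

# p appears with total exponent n_p and q with n_q, so the MLE is n_p/(n_p+n_q)
n_p = 2 * counts['2u'] + counts['u+d'] + counts['u']
n_q = 2 * counts['2d'] + counts['u+d'] + counts['d']
assert abs(p_hat - n_p / (n_p + n_q)) < 1e-3
```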

## 5. Data Analysis

We want to analyze the claims data through the non-traditional quantum representations (3)–(5) of the models (1) and (2). This can be done by applying supervised machine learning methods (Bishop 2006; Hastie and Tibshirani 1996; Hastie et al. 2009).

The method uses the cross-validation technique, which is based on dividing the dataset into test data (also known as validation data) and training data. Nearest neighbour algorithms are applied to classify the data; then, maximum likelihood estimation and the risk error calculation are performed to find the optimal parameters. Finally, the information obtained from the training data is applied to analyze the test data.

In the k-fold cross-validation, the whole dataset is divided into k subsets of equal size. In each iteration, one subset is chosen as training data and the remaining subsets are used as test data. The process is repeated k times so that each subset is chosen exactly once as the training piece. Finally, the estimator is the average of the results over all iterations. For illustration, a simple example with $k=2$ is presented in Section 5.2.

#### 5.1. Estimation Procedure

Our goal is to estimate the values $(u,d)$ of the claim amounts and their probabilities $(p,q=1-p)$. The dataset $V=\{{v}_{1},{v}_{2},\dots ,{v}_{n}\}$ consists of claim amounts in successive time intervals $\Delta t=(t-1,t]$ ($t=1,\dots ,m$ say). We assume that the likelihood function is defined by one of the functions given in (11)–(13).

The estimation method proposed is somewhat similar to the EM algorithm, and its successive steps are as follows.

- Choose an initial estimate $({u}_{0},{d}_{0})$.
- Classify and label the data with respect to $(u={u}_{0},d={d}_{0})$ by using the nearest neighbour algorithm. This leads to the classes ${G}_{2u},{G}_{2d},{G}_{u+d},{G}_{u},{G}_{d},{G}_{0}$ for the representation (3), and ${G}_{u+d},{G}_{u},{G}_{d},{G}_{0}$ for the representations (4) and (5).
- Estimate the probabilities $(p,q)$ by maximizing the corresponding likelihood function.
- Compute an updated estimate $({u}_{i+1},{d}_{i+1})$ by minimizing the corresponding risk function F.
- Loop until $|F({u}_{i+1},{d}_{i+1})-F({u}_{i},{d}_{i})|<M$ for a threshold M small enough.
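The loop described above can be sketched compactly for the Maxwell-Boltzmann case. The function names, the integer search grid and the update rule for $(u,d)$ below are simplifying choices of ours, not the paper's exact optimization.

```python
import numpy as np

KEYS = ['0', 'd', 'u', 'u+d', '2d', '2u']

def classify(V, u, d):
    """Nearest-neighbour labelling of the observations into the clusters G."""
    centres = np.array([0.0, d, u, u + d, 2*d, 2*u])
    labels = np.abs(np.asarray(V, float)[:, None] - centres[None, :]).argmin(axis=1)
    return {k: [v for v, l in zip(V, labels) if KEYS[l] == k] for k in KEYS}

def mb_risk(u, d, p, G, lam=1.0, dt=1.0):
    """Weighted L1 risk (14) for the Maxwell-Boltzmann case."""
    q = 1.0 - p
    d1 = np.exp(-lam * dt) * lam * dt
    d2 = np.exp(-lam * dt) * (lam * dt) ** 2 / 2
    d0 = 1.0 - d1 - d2
    w = {'0': d0, 'd': q * d1, 'u': p * d1,
         'u+d': 2 * p * q * d2, '2d': q * q * d2, '2u': p * p * d2}
    c = {'0': 0.0, 'd': d, 'u': u, 'u+d': u + d, '2d': 2 * d, '2u': 2 * u}
    return sum(w[k] * sum(abs(b - c[k]) for b in G[k]) for k in KEYS)

def estimate(V, u0, d0, M=0.01, grid=range(1, 25), max_iter=50):
    """Iterate: classify -> estimate p by ML -> re-fit (u, d) by risk minimization."""
    u, d, F_old = u0, d0, float('inf')
    for _ in range(max_iter):
        G = classify(V, u, d)
        n_p = len(G['u']) + len(G['u+d']) + 2 * len(G['2u'])
        n_q = len(G['d']) + len(G['u+d']) + 2 * len(G['2d'])
        p = n_p / max(n_p + n_q, 1)
        u, d = min(((uu, dd) for uu in grid for dd in grid if dd < uu),
                   key=lambda t: mb_risk(t[0], t[1], p, G))
        F_new = mb_risk(u, d, p, G)
        if abs(F_new - F_old) < M:
            break
        F_old = F_new
    return u, d, p, 1.0 - p

u, d, p, q = estimate([20, 8, 1, 7, 15, 17, 11, 0, 19, 1], u0=15, d0=5)
```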

For the $k$-fold cross-validation strategy, the steps are first applied to the training data and yield an estimate for $(u,d)$ which is then used on the test data as the initial estimate $({u}_{0},{d}_{0})$.

**Estimating $(\mathit{u},\mathit{d})$ by risk functions**. Consider a dataset $\left\{{\beta}_{i}\right\}$. The target is a set of eigenvalues $\mu$ with associated probabilities ${p}_{\mu}$ and clusters ${G}_{\mu}$. The estimation can be done by minimizing the weighted ${L}_{1}$-norm risk function

$$\begin{array}{c}F\left(\mu \right)=\parallel \mu -\beta \parallel =\sum _{\mu}{p}_{\mu}\sum _{{\beta}_{i}\in {G}_{\mu}}|{\beta}_{i}-\mu |.\end{array}$$

For the Maxwell-Boltzmann statistics, this risk function is

$$\begin{array}{cc}F(u,d)& ={p}^{2}{\delta}_{2}\sum _{{\beta}_{i}\in {G}_{2u}}|{\beta}_{i}-2u|+{q}^{2}{\delta}_{2}\sum _{{\beta}_{i}\in {G}_{2d}}|{\beta}_{i}-2d|+2pq{\delta}_{2}\sum _{{\beta}_{i}\in {G}_{u+d}}|{\beta}_{i}-(u+d)|\hfill \\ & +p{\delta}_{1}\sum _{{\beta}_{i}\in {G}_{u}}|{\beta}_{i}-u|+q{\delta}_{1}\sum _{{\beta}_{i}\in {G}_{d}}|{\beta}_{i}-d|+{\delta}_{0}\sum _{{\beta}_{i}\in {G}_{0}}|{\beta}_{i}-0|.\hfill \end{array}$$

For the Bose-Einstein statistics,

$$\begin{array}{cc}F(u,d)& =C{p}^{2}{\delta}_{2}\sum _{{\beta}_{i}\in {G}_{2u}}|{\beta}_{i}-2u|+C{q}^{2}{\delta}_{2}\sum _{{\beta}_{i}\in {G}_{2d}}|{\beta}_{i}-2d|+Cpq{\delta}_{2}\sum _{{\beta}_{i}\in {G}_{u+d}}|{\beta}_{i}-(u+d)|\hfill \\ & +p{\delta}_{1}\sum _{{\beta}_{i}\in {G}_{u}}|{\beta}_{i}-u|+q{\delta}_{1}\sum _{{\beta}_{i}\in {G}_{d}}|{\beta}_{i}-d|+{\delta}_{0}\sum _{{\beta}_{i}\in {G}_{0}}|{\beta}_{i}-0|.\hfill \end{array}$$

For the adjusted quantum data, following way 2,

$$\begin{array}{cc}F(u,d)& ={C}^{\prime\prime}pq{\delta}_{2}\sum _{{\beta}_{i}\in {G}_{u+d}}|{\beta}_{i}-(u+d)|+{C}^{\prime\prime}p{\delta}_{1}\sum _{{\beta}_{i}\in {G}_{u}}|{\beta}_{i}-u|+{C}^{\prime\prime}q{\delta}_{1}\sum _{{\beta}_{i}\in {G}_{d}}|{\beta}_{i}-d|\hfill \\ & +{C}^{\prime\prime}{\delta}_{0}\sum _{{\beta}_{i}\in {G}_{0}}|{\beta}_{i}-0|.\hfill \end{array}$$

#### 5.2. Numerical Illustrations

We first examine a simple numerical example for quantum data, then a case of data with errors and finally a case of misreported claims. In all cases, we take $\lambda =1$ and $\Delta t=1$.

**Numerical example**. Consider the following dataset

$$V=\{20,8,1,7,15,17,11,0,19,1\},$$

which will be split into the two halves

$${V}_{1}=\{20,8,1,7,15\},\phantom{\rule{2.em}{0ex}}{V}_{2}=\{17,11,0,19,1\},$$

for the 2-fold cross-validation below.

(1) Maxwell-Boltzmann likelihood (11) with risk function (14). We obtain the following results (Table 1).

Choosing $M=0.01$, we see that $(u,d)=(15,9)$ and $(p,q)=(0.1,0.9)$. The associated minimum risk is $2.987181$ and the maximum likelihood value is $2.198608\times {10}^{-7}$. The loop takes 8 steps, i.e., it works very fast for a small dataset.

To reduce overfitting, we apply the k-fold cross-validation method with $k=2$. This gives the results below (Table 2).

When ${V}_{1}$ is the training set, we get $(\overline{u},\overline{d})=(1/2)({u}_{1}+{u}_{2},{d}_{1}+{d}_{2})=(13,8.5)$ and $(\overline{p},\overline{q})=(1/2)({p}_{1}+{p}_{2},{q}_{1}+{q}_{2})=(0.3,0.7)$, with $F(13,8.5)=1.9884$. Thus, there is a significant reduction in the risk function with a somewhat close $(u,d)$.

(2) Bose-Einstein likelihood (12) with risk function (15). Here are the numerical results (Table 3).

Observe that we obtain the same $(u,d)=(15,9)$ but with probabilities $(p,q)=(0.13,0.87)$. Again it takes 8 steps to reach the level $M=0.01$.

A 2-fold cross-validation method improves the results as follows (Table 4).

With ${V}_{1}$ as training set, we get $(\overline{u},\overline{d})=(13,8.5)$ and $(\overline{p},\overline{q})=(0.33,0.67)$, with $F(13,8.5)=2.0047$ instead of $2.963947$ obtained before.

(3) Bose-Einstein likelihood (13) with risk function (16). The results are in the following table (Table 5), again for $M=0.01$.

The results here are somewhat different since $(u,d)=(17,9)$ and $(p,q)=(0.57,0.43)$. The loop now takes only six steps. For this dataset, the best-fitting model, i.e., the one with the smallest risk function, is the one based on Bose-Einstein statistics.

We also performed several numerical experiments with simulated data. In the examples (4)–(7) below, the simulations yield datasets of size $n=100$ ($n=1000$ was used too), and the calculations are made with $M=0.1$.

(4) Uniform random data (Table 6). As in examples (1) and (2), we apply the usual Maxwell-Boltzmann and Bose-Einstein statistics.

We notice that the best fit is not always given by the Maxwell-Boltzmann statistics.

**Data with errors**. We now examine a dataset disturbed by errors. We start with a set $\{{j}_{1},{j}_{2},\dots ,{j}_{n}\}$ of true observables taking values in $\{0,u,d,u+d,2u,2d\}$. Then, we add random errors $\{{e}_{1},{e}_{2},\dots ,{e}_{n}\}$, so that the generated dataset is given by

$$V=\{{v}_{1},{v}_{2},\dots ,{v}_{n}\}\equiv \{{j}_{1},{j}_{2},\dots ,{j}_{n}\}+\{{e}_{1},{e}_{2},\dots ,{e}_{n}\}.$$

(5) Random data with errors (Table 7). The non-perturbed data $\{{j}_{1},{j}_{2},\dots ,{j}_{n}\}$ come from a uniform sampling in $\{0,u,d,u+d,2u,2d\}$.
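Such perturbed samples can be generated along the following lines. The error distribution is not specified above, so a centered Gaussian with standard deviation $\mu$, truncated at zero, is assumed here purely for illustration, as are the parameter values.

```python
import random

def perturbed_sample(n, u, d, mu, seed=0):
    """Uniform draws from the true observables, plus an additive error.

    The error model (centered Gaussian with std mu, truncated at zero)
    is an illustrative assumption, not the paper's 'special' error.
    """
    rng = random.Random(seed)
    levels = [0, u, d, u + d, 2 * u, 2 * d]
    true = [rng.choice(levels) for _ in range(n)]        # j_1, ..., j_n
    errs = [rng.gauss(0, mu) for _ in range(n)]          # e_1, ..., e_n
    return [max(0.0, j + e) for j, e in zip(true, errs)]

V = perturbed_sample(100, u=60, d=40, mu=2)
```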

We see that, as before, the best model depends on the dataset. For a small error level $\mu$, the results are of course very close.

(6) Adjusted random data with errors (Table 8). The non-perturbed dataset $\{{j}_{1},{j}_{2},\dots ,{j}_{n}\}$ is obtained by simulation according to way 2.

The results are close when $\mu $ is small and slightly different when $\mu $ increases.

**Misreported data**. Claims in a data sample may be unreported or misreported, either by mistake or deliberately. This can also occur because of a change of risk. Let V be a dataset with n reported claims and m misreported claims:

$$V=\{{v}_{1},{v}_{2},\dots ,{v}_{n+m}\}.$$

(7) Random data with misreports (Table 9). First, the data $\{{v}_{1},{v}_{2},\dots ,{v}_{n}\}$ are generated according to the Maxwell-Boltzmann model perturbed by errors via (17). Then, random errors $\{{e}_{1},{e}_{2},\dots ,{e}_{m}\}$ are generated to replace missing data, where m takes the values $0,5,20$ ($m=0$ meaning no missing data). Finally, the two datasets are combined by inserting the errors at random positions.
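The combination step can be sketched as follows; the misreport distribution (uniform on an interval) and the parameter values are illustrative assumptions, not those used for Table 9.

```python
import random

def with_misreports(data, m, low, high, seed=0):
    """Insert m random 'misreported' values at random positions in data.

    The misreport distribution (uniform on [low, high]) is an assumption.
    """
    rng = random.Random(seed)
    out = list(data)
    for _ in range(m):
        e = rng.uniform(low, high)
        out.insert(rng.randrange(len(out) + 1), e)  # random position
    return out

# n = 100 reported claims, m = 20 misreports mixed in at random positions.
sample = with_misreports(list(range(100)), m=20, low=0, high=120)
```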

As expected, a small value of m does not affect the results much. Somewhat surprisingly, for the relatively large value $m=20$ ($20\%$), the probability estimates change only slightly (about $2\%$) while the claim-amount estimates are significantly modified (about $20\%$).

In practice, the algorithm works well and quickly in most situations. We also performed numerical calculations with a grid size of $\Delta t=0.1$; essentially, only the value of the risk function is affected.

## 6. Quantum Reserve Process

One of the main objectives of risk theory is to forecast the evolution of the reserves of an insurance company. This problem has generated a great deal of research using probabilistic techniques. We present below some introductory elements for an alternative quantum approach.

#### 6.1. Distribution of the Reserves

The future reserves of an insurance company can be computed by applying path-integral methods (Feynman 1948; Feynman and Hibbs 2010). Let ${x}_{0},{x}_{1},\dots ,{x}_{n}$ be the capital values at times $0={t}_{0},{t}_{1},\dots ,{t}_{n}=t$. Arguing as in Tamturk and Utev (2018), we first obtain that

$$\begin{array}{rl}P(R\left({t}_{n}\right)={x}_{n}\,|\,{x}_{0})=&(1+o\left(1\right))\displaystyle\sum _{{x}_{1}}<{x}_{0}|{e}^{-\Delta {t}_{1}H}|{x}_{1}>\sum _{{x}_{2}}<{x}_{1}|{e}^{-\Delta {t}_{2}H}|{x}_{2}>\\ &\dots \displaystyle\sum _{{x}_{n-1}}<{x}_{n-2}|{e}^{-\Delta {t}_{n-1}H}|{x}_{n-1}><{x}_{n-1}|{e}^{-\Delta {t}_{n}H}|{x}_{n}>,\end{array}$$

with $\Delta {t}_{i}={t}_{i+1}-{t}_{i}$. For simplicity, take $\Delta {t}_{i}\equiv \Delta t$. The error term $o\left(1\right)$ depends on $\Delta t/t$ (the grid size relative to the observation time t, which is usually small). The propagator $<{x}_{i}|{e}^{-\Delta tH}|{x}_{i+1}>$ for each sub-interval plays the role of the transition probability $P({x}_{i}\to {x}_{i+1})$. It is expressed in terms of a Markovian generator $-H$, where H is called the Markovian Hamiltonian. Then, by the completeness property in Dirac's formalism, we find that

$$\begin{array}{rl}P({x}_{i}\to {x}_{i+1})=<{x}_{i}|{e}^{-\Delta tH}|{x}_{i+1}>&=\displaystyle{\int}_{0}^{2\pi}\frac{d\alpha}{2\pi}<{x}_{i}|{e}^{-\Delta tH}|\alpha ><\alpha |{x}_{i+1}>\\ &=\displaystyle{\int}_{0}^{2\pi}\frac{d\alpha}{2\pi}<{x}_{i}|\alpha ><\alpha |{x}_{i+1}>{e}^{-\Delta t{K}_{\alpha}}\\ &=\displaystyle\frac{1}{2\pi}{\int}_{0}^{2\pi}{e}^{i{x}_{i}\alpha}{e}^{-i{x}_{i+1}\alpha}\,{e}^{-\Delta t{K}_{\alpha}}\,d\alpha ,\end{array}$$

where $\{|\alpha >,{K}_{\alpha}\}$ is the set of eigenstates and eigenvalues in the spectral decomposition of the Hamiltonian operator H.

In the risk model discussed here, the reserve process is defined via the Hamiltonian whose eigenvalues ${K}_{\alpha}$ in the basis $|\alpha >$ are given by

$${K}_{\alpha}=-\ln\left[{e}^{i\alpha c}\left({e}^{-\lambda}+{e}^{-i\alpha u}{\delta}_{1}p+{e}^{-i\alpha d}{\delta}_{1}q+{e}^{-i\alpha \left(2u\right)}{\delta}_{2}{p}^{2}+{e}^{-i\alpha \left(2d\right)}{\delta}_{2}{q}^{2}+{e}^{-i\alpha (u+d)}{\delta}_{2}\,2pq\right)\right].$$

For the Maxwell-Boltzmann statistics, the transition probabilities (19) become

$$<{x}_{i}|{e}^{-\Delta tH}|{x}_{i+1}>={\int}_{0}^{2\pi}\frac{d\alpha}{2\pi}<{x}_{i}|{e}^{-\Delta tH}|\alpha ><\alpha |{x}_{i+1}>=\begin{cases}{e}^{-\lambda}&\text{for }{x}_{i}-{x}_{i+1}+c=0,\\ {\delta}_{1}p&\text{for }{x}_{i}-{x}_{i+1}+c-u=0,\\ {\delta}_{1}q&\text{for }{x}_{i}-{x}_{i+1}+c-d=0,\\ {\delta}_{2}{p}^{2}&\text{for }{x}_{i}-{x}_{i+1}+c-2u=0,\\ {\delta}_{2}{q}^{2}&\text{for }{x}_{i}-{x}_{i+1}+c-2d=0,\\ {\delta}_{2}\,2pq&\text{for }{x}_{i}-{x}_{i+1}+c-(u+d)=0.\end{cases}$$

For the Bose-Einstein statistics, we have

$$<{x}_{i}|{e}^{-\Delta tH}|{x}_{i+1}>=\begin{cases}{e}^{-\lambda}&\text{for }{x}_{i}-{x}_{i+1}+c=0,\\ {\delta}_{1}p&\text{for }{x}_{i}-{x}_{i+1}+c-u=0,\\ {\delta}_{1}q&\text{for }{x}_{i}-{x}_{i+1}+c-d=0,\\ C{\delta}_{2}{p}^{2}&\text{for }{x}_{i}-{x}_{i+1}+c-2u=0,\\ C{\delta}_{2}{q}^{2}&\text{for }{x}_{i}-{x}_{i+1}+c-2d=0,\\ C{\delta}_{2}pq&\text{for }{x}_{i}-{x}_{i+1}+c-(u+d)=0.\end{cases}$$

For the adjusted quantum data, following way 2,

$$<{x}_{i}|{e}^{-\Delta tH}|{x}_{i+1}>=\begin{cases}C''{e}^{-\lambda}&\text{for }{x}_{i}-{x}_{i+1}+c=0,\\ C''{\delta}_{1}p&\text{for }{x}_{i}-{x}_{i+1}+c-u=0,\\ C''{\delta}_{1}q&\text{for }{x}_{i}-{x}_{i+1}+c-d=0,\\ C''{\delta}_{2}pq&\text{for }{x}_{i}-{x}_{i+1}+c-(u+d)=0.\end{cases}$$
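For $\Delta t=1$, ${e}^{-\Delta t{K}_{\alpha}}$ is exactly the bracket in the definition of ${K}_{\alpha}$, so the Fourier integral defining the propagator can be checked numerically. A minimal sketch with illustrative parameter values, assuming the Poisson weights $\delta_k=e^{-\lambda}\lambda^k/k!$ (the parameter choices are not taken from the paper's data):

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper's examples):
lam, p, q, u, d, c = 1.0, 0.4, 0.6, 3, 2, 1
d1 = np.exp(-lam) * lam           # delta_1 (assumed Poisson weight)
d2 = np.exp(-lam) * lam**2 / 2    # delta_2

N = 256                            # grid; exact for integer frequencies < N
alpha = 2 * np.pi * np.arange(N) / N

# e^{-dt K_alpha} for dt = 1: the bracket in the definition of K_alpha.
bracket = np.exp(1j * alpha * c) * (
    np.exp(-lam)
    + np.exp(-1j * alpha * u) * d1 * p
    + np.exp(-1j * alpha * d) * d1 * q
    + np.exp(-1j * alpha * 2 * u) * d2 * p**2
    + np.exp(-1j * alpha * 2 * d) * d2 * q**2
    + np.exp(-1j * alpha * (u + d)) * d2 * 2 * p * q
)

def transition(xi, xj):
    """P(x_i -> x_j) as the Fourier coefficient (1/2pi) of the kernel."""
    return np.mean(np.exp(1j * alpha * (xi - xj)) * bracket).real
```

For instance, `transition(10, 8)` (one claim of size $u=3$ against a premium $c=1$) recovers the weight $\delta_1 p$ from the case table.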

#### 6.2. Finite-Time Ruin Probability

Let T be the ruin time, i.e., the first instant when the reserves become negative or zero. To obtain the probability of non-ruin up to time ${t}_{n}$, we just proceed as in (18) and delete the paths where some ${x}_{i}$ is negative or zero. This gives directly

$$\begin{array}{rl}P(T>{t}_{n}\,|\,{x}_{0})=&(1+o\left(1\right))\displaystyle\sum _{{x}_{1}\ge 1}<{x}_{0}|{e}^{-\Delta {t}_{1}H}|{x}_{1}>\sum _{{x}_{2}\ge 1}<{x}_{1}|{e}^{-\Delta {t}_{2}H}|{x}_{2}>\\ &\displaystyle\sum _{{x}_{3}\ge 1}<{x}_{2}|{e}^{-\Delta {t}_{3}H}|{x}_{3}>\dots \sum _{{x}_{n}\ge 1}<{x}_{n-1}|{e}^{-\Delta {t}_{n}H}|{x}_{n}>.\end{array}$$
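Numerically, this sum over surviving paths can be organized as a forward recursion on the (sub-probability) distribution of the capital, pruning every state that falls to zero or below. A minimal sketch for the Maxwell-Boltzmann kernel with $\Delta t=1$, illustrative parameters, and the assumed Poisson weights $\delta_k=e^{-\lambda}\lambda^k/k!$:

```python
import math

# Illustrative parameters (assumed, not from the paper's examples):
lam, p, q, u, d, c = 1.0, 0.4, 0.6, 3, 2, 1
e = math.exp(-lam)
d1, d2 = e * lam, e * lam**2 / 2
# One-step capital jump -> weight; at most two claims per interval,
# so the weights need not sum to one.
steps = {c: e, c - u: d1 * p, c - d: d1 * q, c - 2*u: d2 * p**2,
         c - 2*d: d2 * q**2, c - (u + d): d2 * 2 * p * q}

def non_ruin(x0, n):
    """P(T > t_n | x0): total weight of paths staying >= 1 at every step."""
    dist = {x0: 1.0}
    for _ in range(n):
        nxt = {}
        for x, w in dist.items():
            for jump, pr in steps.items():
                y = x + jump
                if y >= 1:                     # delete ruined paths
                    nxt[y] = nxt.get(y, 0.0) + w * pr
        dist = nxt
    return sum(dist.values())
```

The state space stays small here because all jumps are integers, so the recursion is fast even for moderately long horizons.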

**Extension**. The method can be applied to more advanced risk models. For instance, suppose that a change in risk occurs at time ${t}_{f}$, so that the reserve process becomes

$$R\left(t\right)={x}_{0}+{\int}_{0}^{t}{c}_{f}\,dt-\sum _{j=1}^{N\left(t\right)}{X}_{j},$$

where

$$\begin{array}{l}\text{for }t\le {t}_{f}:\ {c}_{f}={c}_{1},\ \text{and }{X}_{j}={d}_{1}=d\ \text{or}\ {u}_{1}=u\ \text{with probabilities}\ {q}_{1}\ \text{or}\ {p}_{1},\\ \text{for }t>{t}_{f}:\ {c}_{f}={c}_{2},\ \text{and }{X}_{j}={d}_{2}=d+{f}_{d}\ \text{or}\ {u}_{2}=u+{f}_{u}\ \text{with probabilities}\ {q}_{2}\ \text{or}\ {p}_{2}.\end{array}$$

By conditioning on the reserve level at time ${t}_{f}$, the non-ruin probability can then be written as

$$P(T>{t}_{n}\,|\,{x}_{0})=E\left[{P}_{1}(T>{t}_{f}\,|\,{x}_{0})\phantom{\rule{0.166667em}{0ex}}{P}_{2}(T>{t}_{n}-{t}_{f}\,|\,R\left({t}_{f}\right))\right],$$

where ${P}_{1}$ and ${P}_{2}$ denote the non-ruin probabilities under the first and second regimes.

(8) Change of risk. Consider the dataset of example (1), i.e., $V=\{20,8,1,7,15,17,11,0,19,1\}$, and take $\lambda =0.1$, $c=1$, $\Delta t=1$ and $M=0.05$. Analyzing the data with the Maxwell-Boltzmann statistics, we find $(u,d)=(15,8)$ and $(p,q)=(0.2,0.8)$, with $L=6.1720\times {10}^{-14}$ and $F=2.1151$. Given an initial capital ${x}_{0}=5$, we compute the probability of non-ruin until time 30 and obtain $P(T>30|5)=0.4021$.

Suppose that, as in example (1), V is divided into two subsets ${V}_{1}$ and ${V}_{2}$. With the dataset ${V}_{1}$, we find similarly $({u}_{1},{d}_{1})=(15,8)$ and $({p}_{1},{q}_{1})=(0.4,0.6)$, with ${L}_{1}=2.0962\times {10}^{-7}$ and ${F}_{1}=0.9656$. With ${V}_{2}$, we have $({u}_{2},{d}_{2})=(17,11)$ and $({p}_{2},{q}_{2})=(0.67,0.33)$, with ${L}_{2}=8.9850\times {10}^{-5}$ and ${F}_{2}=1.0261$.

Now, let us examine a model with an unexpected risk which arises at time ${t}_{f}=15$. The datasets before and after ${t}_{f}$ are precisely ${V}_{1}$ and ${V}_{2}$. Given ${x}_{0}=5$, the non-ruin probability until time 30 is given by

$$P(T>30|5)=E\left[{P}_{1}(T>15|5)\phantom{\rule{0.166667em}{0ex}}{P}_{2}(T>15|R\left(15\right))\right].$$

Table 10 reports the probabilities of non-ruin when ${c}_{1}={c}_{2}=c=1$ and ${f}_{u}={f}_{d}\equiv f$ takes the values $0,1,2,3,4$.
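The expectation in this formula can be evaluated by propagating the surviving capital distribution to ${t}_{f}$ under the first regime, then restarting the recursion from each surviving level under the second regime. A self-contained sketch with illustrative parameters and the assumed Poisson weights $\delta_k=e^{-\lambda}\lambda^k/k!$ (this is not an attempt to reproduce Table 10 exactly):

```python
import math

def kernel(lam, p, q, u, d, c):
    """One-step capital jumps and weights (at most two claims per step)."""
    e = math.exp(-lam)
    d1, d2 = e * lam, e * lam**2 / 2
    return {c: e, c - u: d1 * p, c - d: d1 * q,
            c - 2*u: d2 * p**2, c - 2*d: d2 * q**2,
            c - (u + d): d2 * 2 * p * q}

def surviving_dist(x0, n, steps):
    """Sub-probability distribution of the capital at step n, ruin removed."""
    dist = {x0: 1.0}
    for _ in range(n):
        nxt = {}
        for x, w in dist.items():
            for jump, pr in steps.items():
                y = x + jump
                if y >= 1:                     # keep only surviving paths
                    nxt[y] = nxt.get(y, 0.0) + w * pr
        dist = nxt
    return dist

# Regime 1 up to t_f = 15, regime 2 afterwards (parameters from example (8)):
k1 = kernel(0.1, 0.4, 0.6, 15, 8, 1)
k2 = kernel(0.1, 0.67, 0.33, 17, 11, 1)
at_tf = surviving_dist(5, 15, k1)
P = sum(w * sum(surviving_dist(y, 15, k2).values()) for y, w in at_tf.items())
```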

Intuitively, increasing the economic burden f implies a larger risk. This is confirmed by the results in Table 10, since a larger f yields a smaller non-ruin probability.

**Discussion**. The theory of insurance risk has attracted considerable interest in the actuarial field (see the books Asmussen and Albrecher 2010; Dickson 2017; Schmidli 2018). In particular, problems of ruin have been the subject of numerous investigations. Thus, different methods of calculating ruin probabilities have been proposed (e.g., Dufresne and Gerber 1989; Ignatov et al. 2001 and the Picard-Lefèvre formula (De Vylder 1999; Picard and Lefèvre 1997; Rullière and Loisel 2004)).

Risk theory has a long tradition as a branch of applied probability. In this paper, we present a quantum mechanics approach whose implementation in insurance is novel. This approach requires different techniques, including new representation and data processing in insurance. We have illustrated the methodology by various numerical examples. The advantages and weaknesses of this approach remain to be explored in future work.

## Author Contributions

All authors contributed equally to this work.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

- Asmussen, Søren, and Hansjörg Albrecher. 2010. Ruin Probabilities, 2nd ed. Singapore: World Scientific. [Google Scholar]
- Baaquie, Belal Ehsan. 2007. Quantum Finance: Path integrals and Hamiltonians for Options and Interest Rates. Cambridge: Cambridge University Press. [Google Scholar]
- Baaquie, Belal Ehsan. 2010. Interest Rates and Coupon Bonds in Quantum Finance. Cambridge: Cambridge University Press. [Google Scholar]
- Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Berlin: Springer. [Google Scholar]
- Bouchaud, Jean-Philippe, and Marc Potters. 2003. Theory of Financial Risk and Derivative Pricing: From Statistical Physics to Risk Management, 2nd ed. Cambridge: Cambridge University Press. [Google Scholar]
- De Vylder, F. Etienne. 1999. Numerical finite-time ruin probabilities by the Picard-Lefèvre formula. Scandinavian Actuarial Journal 2: 97–105. [Google Scholar] [CrossRef]
- Dickson, David C. M. 2017. Insurance Risk and Ruin, 2nd ed. Cambridge: Cambridge University Press. [Google Scholar]
- Dirac, Paul Adrien Maurice. 1933. The Lagrangian in quantum mechanics. Physikalische Zeitschrift der Sowjetunion 3: 64–72. [Google Scholar]
- Dufresne, François, and Hans U. Gerber. 1989. Three methods to calculate the probability of ruin. Astin Bulletin 19: 71–90. [Google Scholar] [CrossRef]
- Feynman, Richard P. 1948. Space-time approach to non-relativistic quantum mechanics. Reviews of Modern Physics 20: 367–87. [Google Scholar] [CrossRef]
- Feynman, Richard P., and Albert R. Hibbs. 2010. Quantum Mechanics and Path Integrals. Edited by Daniel F. Styer. New York: Dover Editions. [Google Scholar]
- Graham, John W. 2009. Missing data analysis: Making it work in the real world. Annual Review of Psychology 60: 549–76. [Google Scholar] [CrossRef] [PubMed]
- Griffiths, David J., and Darrell F. Schroeter. 2018. Introduction to Quantum Mechanics, 3rd ed. Cambridge: Cambridge University Press. [Google Scholar]
- Hastie, Trevor, and Robert Tibshirani. 1996. Discriminant adaptive nearest neighbor classification and regression. Advances in Neural Information Processing Systems 18: 409–15. [Google Scholar]
- Hastie, Trevor, Robert Tibshirani, and Jerome H. Friedman. 2009. The Elements of Statistical Learning, 2nd ed. New York: Springer. [Google Scholar]
- Ignatov, Zvetan G., Vladimir K. Kaishev, and Rossen S. Krachunov. 2001. An improved finite-time ruin probability formula and its Mathematica implementation. Insurance: Mathematics and Economics 29: 375–86. [Google Scholar] [CrossRef]
- Mantegna, Rosario N., and H. Eugene Stanley. 2000. An Introduction to Econophysics: Correlations and Complexity in Finance. Cambridge: Cambridge University Press. [Google Scholar]
- Parthasarathy, Kalyanapuram Rangachari. 1992. An Introduction to Quantum Stochastic Calculus. Basel: Springer. [Google Scholar]
- Picard, Philippe, and Claude Lefèvre. 1997. The probability of ruin in finite time with discrete claim size distribution. Scandinavian Actuarial Journal 1: 58–69. [Google Scholar] [CrossRef]
- Plenio, Martin. 2002. Quantum Mechanics. Ebook. London: Imperial College. [Google Scholar]
- Quinlan, Ross. 1988. C4.5: Programs for Machine Learning. San Mateo: Morgan Kaufmann. [Google Scholar]
- Rullière, Didier, and Stéphane Loisel. 2004. Another look at the Picard-Lefèvre formula for finite-time ruin probabilities. Insurance: Mathematics and Economics 35: 187–203. [Google Scholar] [CrossRef]
- Schmidli, Hanspeter. 2018. Risk Theory. Cham: Springer. [Google Scholar]
- Tamturk, Muhsin, and Sergey Utev. 2018. Ruin probability via quantum mechanics approach. Insurance: Mathematics and Economics 79: 69–74. [Google Scholar] [CrossRef]

**Table 1.** Maxwell-Boltzmann likelihood (11) with risk function (14).

| Given $(u,d)$ | Maximum Likelihood $L$ | Optimum $(p,q)$ | Optimum $u$ | Optimum $d$ | Risk $F(u,d)$ | $\vert F(u_i,d_i)-F(u_{i+1},d_{i+1})\vert$ |
|---|---|---|---|---|---|---|
| (40,25) | $4.361099\times 10^{-5}$ | (0.01,0.99) | 18 | 17 | 12.8500 | 12.8500 |
| (18,17) | $1.569022\times 10^{-6}$ | (0.4,0.6) | 19 | 15 | 7.7255 | 5.1245 |
| (19,15) | $9.962820\times 10^{-7}$ | (0.33,0.67) | 19 | 11 | 6.6365 | 1.089 |
| (19,11) | $3.810307\times 10^{-7}$ | (0.43,0.57) | 19 | 10 | 3.5169 | 3.1196 |
| (19,10) | $1.141128\times 10^{-7}$ | (0.38,0.62) | 17 | 9 | 2.5768 | 0.9401 |
| (17,9) | $9.649455\times 10^{-8}$ | (0.22,0.78) | 15 | 9 | 2.668082 | 0.091282 |
| (15,9) | $2.198608\times 10^{-7}$ | (0.1,0.9) | 15 | 9 | 2.987181 | 0.319099 |
| (15,9) | $2.198608\times 10^{-7}$ | (0.1,0.9) | 15 | 9 | 2.987181 | 0 |

**Table 2.** 2-fold cross-validation for the Maxwell-Boltzmann model.

| Training Set | Test Set | $(u,d)$ (training data) | $(p,q)$ (training data) | $(u,d)$ (test data) | $(p,q)$ (test data) |
|---|---|---|---|---|---|
| $V_1$ | $V_2$ | (15,8) | (0.4,0.6) | (11,9) | (0.2,0.8) |
| $V_2$ | $V_1$ | (17,11) | (0.67,0.33) | (15,8) | (0.4,0.6) |

**Table 3.** Bose-Einstein likelihood (12) with risk function (15).

| Given $(u,d)$ | Maximum Likelihood $L$ | Optimum $(p,q)$ | Optimum $u$ | Optimum $d$ | Risk $F(u,d)$ | $\vert F(u_i,d_i)-F(u_{i+1},d_{i+1})\vert$ |
|---|---|---|---|---|---|---|
| (40,25) | $4.361099\times 10^{-5}$ | (0.01,0.99) | 18 | 17 | 12.8500 | 12.8500 |
| (18,17) | $1.569022\times 10^{-6}$ | (0.4,0.6) | 19 | 15 | 7.7255 | 5.1245 |
| (19,15) | $9.962820\times 10^{-7}$ | (0.33,0.67) | 19 | 11 | 6.6365 | 1.089 |
| (19,11) | $3.810307\times 10^{-7}$ | (0.43,0.57) | 19 | 10 | 3.5169 | 3.1196 |
| (19,10) | $1.492842\times 10^{-7}$ | (0.38,0.62) | 17 | 9 | 2.620360 | 0.89654 |
| (17,9) | $1.434357\times 10^{-7}$ | (0.25,0.75) | 15 | 9 | 2.681275 | 0.060915 |
| (15,9) | $3.019659\times 10^{-7}$ | (0.13,0.87) | 15 | 9 | 2.963947 | 0.282672 |
| (15,9) | $3.019659\times 10^{-7}$ | (0.13,0.87) | 15 | 9 | 2.963947 | 0 |

**Table 4.** 2-fold cross-validation for the Bose-Einstein model.

| Training Set | Test Set | $(u,d)$ (training data) | $(p,q)$ (training data) | $(u,d)$ (test data) | $(p,q)$ (test data) |
|---|---|---|---|---|---|
| $V_1$ | $V_2$ | (15,8) | (0.41,0.59) | (11,9) | (0.25,0.75) |
| $V_2$ | $V_1$ | (17,11) | (0.67,0.33) | (15,8) | (0.41,0.59) |

**Table 5.** Bose-Einstein likelihood (13) with risk function (16).

| Given $(u,d)$ | Maximum Likelihood $L$ | Optimum $(p,q)$ | Optimum $u$ | Optimum $d$ | Risk $F(u,d)$ | $\vert F(u_i,d_i)-F(u_{i+1},d_{i+1})\vert$ |
|---|---|---|---|---|---|---|
| (40,25) | $4.361099\times 10^{-5}$ | (0.01,0.99) | 18 | 17 | 17.421881 | 17.421881 |
| (18,17) | $1.569022\times 10^{-6}$ | (0.4,0.6) | 19 | 15 | 9.905660 | 7.516221 |
| (19,15) | $9.962820\times 10^{-7}$ | (0.33,0.67) | 19 | 11 | 8.547535 | 1.358125 |
| (19,11) | $3.810307\times 10^{-7}$ | (0.43,0.57) | 19 | 10 | 4.504016 | 4.043519 |
| (19,10) | $3.810307\times 10^{-7}$ | (0.57,0.43) | 17 | 9 | 3.835010 | 0.669006 |
| (17,9) | $3.810307\times 10^{-7}$ | (0.57,0.43) | 17 | 9 | 3.835010 | 0 |

**Table 6.** Uniform random data.

| | Maxwell-Boltzmann, $n=100$ | Maxwell-Boltzmann, $n=1000$ | Bose-Einstein, $n=100$ | Bose-Einstein, $n=1000$ |
|---|---|---|---|---|
| $p$ | 0.30 | 0.99 | 0.33 | 0.99 |
| $q$ | 0.70 | 0.01 | 0.67 | 0.01 |
| Likelihood | $5.5996\times 10^{-84}$ | 0 | $3.1488\times 10^{-86}$ | 0 |
| $u$ | 56 | 42 | 56 | 40 |
| $d$ | 35 | 23 | 35 | 21 |
| Risk value | 132.2545 | 771.4357 | 129.8278 | 773.2864 |
| Loop size | 7 | 16 | 7 | 16 |

**Table 7.** Random data with errors.

| | Maxwell-Boltzmann, $\mu=1$ | Maxwell-Boltzmann, $\mu=2$ | Maxwell-Boltzmann, $\mu=10$ | Bose-Einstein, $\mu=1$ | Bose-Einstein, $\mu=2$ | Bose-Einstein, $\mu=10$ |
|---|---|---|---|---|---|---|
| $p$ | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
| $q$ | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
| Likelihood | 0 | 0 | 0 | 0 | 0 | 0 |
| $u$ | 60 | 60 | 80 | 60 | 60 | 60 |
| $d$ | 40 | 40 | 50 | 40 | 40 | 40 |
| Risk value | 79.5889 | 141.8641 | 821.6990 | 79.6845 | 141.9939 | 630.2925 |
| Loop size | 2 | 2 | 7 | 2 | 2 | 2 |

**Table 8.** Adjusted random data with errors.

| | Maxwell-Boltzmann, $\mu=1$ | Maxwell-Boltzmann, $\mu=2$ | Maxwell-Boltzmann, $\mu=10$ | Bose-Einstein, $\mu=1$ | Bose-Einstein, $\mu=2$ | Bose-Einstein, $\mu=10$ |
|---|---|---|---|---|---|---|
| $p$ | 0.44 | 0.52 | 0.27 | 0.45 | 0.52 | 0.16 |
| $q$ | 0.56 | 0.48 | 0.73 | 0.55 | 0.48 | 0.84 |
| Likelihood | $2.0197\times 10^{-64}$ | $3.4390\times 10^{-59}$ | $1.1716\times 10^{-59}$ | $5.0452\times 10^{-66}$ | $3.0095\times 10^{-60}$ | $1.6389\times 10^{-55}$ |
| $u$ | 60 | 60 | 70 | 60 | 60 | 70 |
| $d$ | 40 | 40 | 50 | 40 | 40 | 50 |
| Risk value | 13.2531 | 30.1508 | 134.9512 | 13.0994 | 30.0282 | 156.9196 |
| Loop size | 2 | 2 | 2 | 2 | 2 | 4 |

**Table 9.** Random data with misreports.

| | Maxwell-Boltzmann, $m=0$ | Maxwell-Boltzmann, $m=5$ | Maxwell-Boltzmann, $m=20$ | Bose-Einstein, $m=0$ | Bose-Einstein, $m=5$ | Bose-Einstein, $m=20$ |
|---|---|---|---|---|---|---|
| $p$ | 0.37 | 0.38 | 0.38 | 0.39 | 0.39 | 0.37 |
| $q$ | 0.63 | 0.62 | 0.62 | 0.61 | 0.61 | 0.63 |
| Likelihood | $2.7343\times 10^{-69}$ | $4.3868\times 10^{-69}$ | $3.3073\times 10^{-68}$ | $5.4464\times 10^{-69}$ | $1.3609\times 10^{-68}$ | $1.0870\times 10^{-67}$ |
| $u$ | 60 | 60 | 68 | 60 | 60 | 68 |
| $d$ | 40 | 40 | 50 | 40 | 40 | 50 |
| Risk value | 142.4854 | 142.8270 | 133.9227 | 141.6344 | 142.7307 | 134.5446 |
| Loop size | 2 | 2 | 3 | 2 | 2 | 5 |

**Table 10.** Non-ruin probabilities for increasing values of $f$.

| | $f=0$ | $f=1$ | $f=2$ | $f=3$ | $f=4$ |
|---|---|---|---|---|---|
| $P(T>30\,\vert\,5)$ | 0.1745 | 0.1676 | 0.1573 | 0.1470 | 0.1368 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).