#### 4.1. Gambler’s Ruin Problem

Although this procedure works in greater generality, as can be seen below, we consider as a first illustration the computation of the probability that our process $X$ hits a value $a>0$ before falling below zero. This is known as the gambler's ruin problem. We therefore denote the first exit time of the interval $[0,a)$ by $\tau_{0,a}=\inf\{t\ge 0 \,|\, X_t\notin [0,a)\}$. We can now immediately apply Theorem 2 and Lemma 1 with this special exit time. We obtain that a function $q$ which satisfies the requirements of Lemma 1 and, in addition, solves the equation

obeying the boundary conditions

admits the following representation:

Here $\tau_a=\inf\{t\ge 0\,|\,X_t\ge a\}$, so that $q:\mathbb{R}^{+}\times \mathbb{R}^{+}\to [0,1]$. Note that the preference rate $\delta$ is set to zero in order to obtain the pure probability of the considered event.

Our goal now is to determine such a function $q$ by solving the associated PIDE. Since this equation contains a non-local term, we have to apply a numerical approximation procedure for general parameter configurations. Namely, we first consider a sequence $\{q^n\}_{n\in\mathbb{N}}$ of solutions, where each $q^n$ is the respective expected value if we allow the process $X$ to face at most $n$ jumps. Since the inter-arrival times are a.s. positive, we have $\lim_{n\to\infty} q^n = q$ pointwise.

In order to allow $c$ to be non-constant, we use $c(x)=\kappa(b_1-x)(x-a_1)I_{\{a_1<x<b_1\}}$, where we assume $0\le a_1<a<b_1$ and $\kappa>0$. In Remark 2 we motivate this particular choice. As mentioned above, this only affects the boundary conditions and the initial value of the recursive procedure. To start, we set $q^0(x,t^{\prime}):=I_{\{x>a_1\}}$, since if there are no further jumps we arrive at the upper threshold $a$ with probability 1, provided we start above $a_1$. We now iterate over the number of remaining jumps $n\in\mathbb{N}$. For every $n$ we discretize the state space $[a_1,a]$ into $N\in\mathbb{N}$ equidistant points $\{x_i\}$ and use finite differences to approximate the state derivative, whereas we leave the $t^{\prime}$ direction untouched. Hence, Equation (8) transforms into the following discretized counterpart:

Consequently, on every grid line (along $t^{\prime}$) we have to solve the corresponding ordinary differential equation. We make use of $q^{n-1}$ here by inserting it into the non-local part. Hence, we start at $x_N=a$ with $q^n(x_N,t^{\prime})=1$, since if the initial surplus is equal to $a$, then the desired probability is already 1. Further, $q^n(x_i,t^{\prime})$, where $x_i=a_1+ih$ for fixed $i\in\{1,\dots,N-1\}$ and $h=\frac{a-a_1}{N}$, solves as a function of $t^{\prime}\in[0,t_{\mathrm{end}}]$ the ODE:

Due to the special choice of our drift function $c$, we have to fix a time horizon $t_{\mathrm{end}}$ so that we can solve the considered differential equations on a finite time interval. Note that the above equality at $t_{\mathrm{end}}$ holds only asymptotically; in fact we have $\lim_{t_{\mathrm{end}}\to\infty} q^n(x_i,t_{\mathrm{end}}) = \int_0^{x_i} q^{n-1}(x_i-y,0)\,f_Y(y)\,dy$. The corresponding imprecision stems from truncating the tail of the inter-arrival time distribution; namely, we use above $\overline{F}_T^{t_{\mathrm{end}}}(t^{\prime}) = \overline{F}_T(t^{\prime})\,I_{\{t^{\prime}<t_{\mathrm{end}}\}}$. The inhomogeneity for every $i$ has the form

This term is known at step $n$ and state $x_i$. Note that, due to the features of the function $c$, if the process arrives at a state smaller than or equal to $a_1$, then it remains at this state. Hence, we have the boundary condition $q^n(a_1,t^{\prime})=0$ for all $n\in\mathbb{N}$. Moreover, one can show that $q(x,t^{\prime})$ is right-continuous at $a_1$. Finally, we interpolate the numerically determined functions in $t^{\prime}$ across the grid points $\{x_i\}$ to obtain a function $q^n(x,t^{\prime})$.
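To make the non-local part concrete, the following sketch evaluates the integral $\int_0^{x_i} q^{n-1}(x_i-y,0)f_Y(y)\,dy$ for the first iteration step $n=1$. The exponential jump-size density and all parameter values are illustrative assumptions, not taken from the text. Since $q^0(x,\cdot)=I_{\{x>a_1\}}$, the integrand equals $f_Y(y)$ exactly for $y<x_i-a_1$, so the integral collapses to $F_Y(x_i-a_1)$, which the quadrature should reproduce.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not from the text)
a1, mu = 0.5, 2.0                          # threshold a_1, exponential jump rate

def f_Y(y):
    """Jump-size density of Y ~ Exp(mu)."""
    return mu * np.exp(-mu * y)

def F_Y(y):
    """Jump-size cdf of Y ~ Exp(mu)."""
    return 1.0 - np.exp(-mu * y)

def q0(x):
    """Initial iterate q^0(x, t') = I_{x > a_1} (independent of t')."""
    return np.asarray(x > a1, dtype=float)

def nonlocal_term(q_prev, x, m=20001):
    """Trapezoidal quadrature of int_0^x q_prev(x - y) * f_Y(y) dy."""
    y = np.linspace(0.0, x, m)
    vals = q_prev(x - y) * f_Y(y)
    return float(np.sum((vals[1:] + vals[:-1]) * 0.5 * np.diff(y)))

x_i = 1.5
approx = nonlocal_term(q0, x_i)
exact = float(F_Y(x_i - a1))               # closed form for the indicator iterate
```

The same routine, with $q^0$ replaced by the previous interpolated iterate, supplies the inhomogeneity at every grid point $x_i$.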

**Remark** **2.** In our numerical examples we use the drift function $c(x)=\kappa(b_1-x)(x-a_1)I_{\{a_1<x<b_1\}}$, where $0\le a_1<b_1$ and $\kappa>0$. Although this function appears quite specific, it turns out that, by modifying the parameters and dropping the indicator, it is able to cover various practical situations.

First of all, this function can be used to approximate a reflecting barrier at level $b_1$ in a continuous way. Such a feature is of interest if the insurance company is willing to pay out dividends, or if large cash holdings are penalized. These situations are quite realistic nowadays, since negative interest rates have become more and more common.

Furthermore, if we increase $\kappa$, the deterministic path approximates an indicator function. Hence, it can mimic deterministic jumps of the surplus process, which arise in problems with capital injections. If we allow $c$ (cancelling the indicator) to be negative, the resulting decreasing paths approach either the level $b_1$ or zero from above. This corresponds to a post-dividend surplus approaching the dividend barrier $b_1$ (especially if $\kappa$ is chosen large, this approximates a downward jump, i.e., a lump-sum dividend), or to a liquidation of the portfolio due to inefficiency of the insurance line.

Overall, if we combine such functions into a piecewise drift with an additional positive constant, we are able to reproduce continuous versions of a variety of common dividend strategies (barrier and band type). Another nice application arises for a small choice of $\kappa$. We can fix $b_1>0$ and $a_1<0$ such that

$\kappa(b_1-x)(x-a_1) = c + rx - \kappa x^2,$

and choose $\kappa$ small enough to obtain a local approximation of the classical drift rate with investment return $r\in\mathbb{R}$. Here $a_1=-\frac{-r+\sqrt{r^2+4c\kappa}}{2\kappa}<0$ and $b_1=\frac{r+\sqrt{r^2+4c\kappa}}{2\kappa}$; for $r>0$ they tend to $-c/r$ and to infinity, respectively, as $\kappa\searrow 0$, thereby capturing the natural boundaries of the surplus. Beyond the insurance context, such drift functions are frequently used to describe the growth of a population; see Alvarez and Shepp (1998).

#### 4.2. Extended Gerber-Shiu Functional
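As a sanity check of Remark 2 (with illustrative values of $c$, $r$, $\kappa$ chosen here, not from the text), the snippet below verifies that, with $a_1$ and $b_1$ as stated, $\kappa(b_1-x)(x-a_1)$ coincides with $c+rx-\kappa x^2$, i.e., with the classical drift $c+rx$ up to the term $\kappa x^2$.

```python
import math

def roots(c, r, kappa):
    """a_1 and b_1 from Remark 2, i.e., the roots of c + r*x - kappa*x^2."""
    s = math.sqrt(r * r + 4.0 * c * kappa)
    a1 = -(-r + s) / (2.0 * kappa)
    b1 = (r + s) / (2.0 * kappa)
    return a1, b1

c, r, kappa = 1.0, 0.05, 0.01              # illustrative premium, return, kappa
a1, b1 = roots(c, r, kappa)

# kappa*(b1 - x)*(x - a1) factors the parabola c + r*x - kappa*x^2
residuals = [kappa * (b1 - x) * (x - a1) - (c + r * x - kappa * x * x)
             for x in (0.0, 1.0, 5.0, 20.0)]

# for small kappa the upper root b1 grows without bound (upper natural boundary)
_, b1_small = roots(c, r, 1e-8)
```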

As another example, we want to compute (4) in a more general setting including a running reward and a Gerber-Shiu function. Here $q_{GS}^n(x,t^{\prime})$ denotes the functional comprising at most $n\in\mathbb{N}$ jumps. We proceed in a manner analogous to that above. For the sake of clarity we assume that $l(x)\equiv L$, whereas the function $c$ remains the same as in the previous case, namely $c(x)=\kappa(b_1-x)(x-a_1)I_{\{a_1<x<b_1\}}$, where $0\le a_1<a<b_1$ and $\kappa>0$. Note that we have chosen $a_1$ to be zero in our subsequent example. For bounding the state space we choose a cut-off value $a$. This ensures that the computations remain feasible; i.e., we have given boundary values and do not need to solve integral equations to obtain them. We denote by $t^*(a,x)$ the point in time at which we reach the value $a$ when starting in $x$ and following the deterministic ODE path. In fact, this function is just the inverse in $t^{\prime}$ of the deterministic path function $\varphi(x,t^{\prime})$.
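Since $\varphi$ solves the separable ODE $\frac{d}{dt}\varphi=\kappa(b_1-\varphi)(\varphi-a_1)$, the hitting time admits the closed form $t^*(a,x)=\frac{1}{\kappa(b_1-a_1)}\ln\frac{(a-a_1)(b_1-x)}{(b_1-a)(x-a_1)}$ for $a_1<x<a<b_1$. The following sketch, with illustrative parameter values of our own choosing, cross-checks this formula against a direct Runge-Kutta integration of the deterministic path.

```python
import math

# Illustrative parameters (hypothetical, chosen only for this check)
kappa, a1, b1 = 1.0, 0.0, 2.0
x0, a = 0.5, 1.5                 # start x0 and target level a, a1 < x0 < a < b1

def c_drift(x):
    """Drift c(x) = kappa*(b1 - x)*(x - a1) on (a1, b1)."""
    return kappa * (b1 - x) * (x - a1)

def t_star_closed(a, x):
    """Closed-form hitting time of level a from x (separation of variables)."""
    return math.log((a - a1) * (b1 - x) / ((b1 - a) * (x - a1))) / (kappa * (b1 - a1))

def t_star_rk4(a, x, dt=1e-4):
    """Classical RK4 integration of the path until it crosses level a."""
    t = 0.0
    while x < a:
        k1 = c_drift(x)
        k2 = c_drift(x + 0.5 * dt * k1)
        k3 = c_drift(x + 0.5 * dt * k2)
        k4 = c_drift(x + dt * k3)
        step = dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        if x + step >= a:                    # linear interpolation at the crossing
            return t + dt * (a - x) / step
        x += step
        t += dt
    return t

t_exact = t_star_closed(a, x0)               # equals ln(3) for these parameters
t_num = t_star_rk4(a, x0)
```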

The initial value of the recursion is

For the further iterative procedure we have at $x_N=a$ that $q_{GS}^n(x_N,t^{\prime})=\frac{L}{\delta}$, since if we start at the cut-off value we simply collect the discounted reward continuously and forever. Analogously to the above, $q_{GS}^n(x_i,t^{\prime})$, where $x_i=a_1+ih$ for fixed $i\in\{1,\dots,N-1\}$ and $h=\frac{a-a_1}{N}$, solves for $t^{\prime}\in[0,t_{\mathrm{end}}]$ the ODE:

As above, we consider a finite time interval and therefore make use of $t_{\mathrm{end}}$. In this case the inhomogeneity admits the following form for every $i$:

Doing this for every point $x_i$ results in a discretized approximation of $q_{GS}^n$, which one may denote by $q_{GS}^{n,h}$ to highlight the dependence on the step width $h>0$ (here $h=\frac{a-a_1}{N}$). In contrast to the previous problem, in this case the function $q_{GS}^n(x,t^{\prime})$ must be determined at $a_1$. In our numerical example we assume that $a_1=0$; hence, we obtain the boundary condition

which can be computed explicitly.

Again, interpolation leads to a function $q_{GS}^n(x,t^{\prime})$ on the whole domain which approximates (4). Note that in the case of a non-constant reward $l$, the boundary values need to be replaced by $\int_0^{\infty} e^{-\delta t}\, l(\varphi(t,x))\,dt$.
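For constant reward $l\equiv L$ this boundary integral reduces to $\int_0^{\infty}e^{-\delta t}L\,dt=L/\delta$, consistent with the boundary value $q_{GS}^n(x_N,t^{\prime})=\frac{L}{\delta}$ used above. A quick numerical confirmation with illustrative values of $L$ and $\delta$ (the truncation at horizon $T$ leaves a tail of size $\frac{L}{\delta}e^{-\delta T}$):

```python
import math

L, delta = 1.0, 0.05           # illustrative reward rate and preference rate

def discounted_reward(T, m=200000):
    """Trapezoidal approximation of int_0^T exp(-delta*t) * L dt."""
    h = T / m
    total = 0.5 * (L + L * math.exp(-delta * T))
    for k in range(1, m):
        total += L * math.exp(-delta * k * h)
    return total * h

T = 400.0                      # truncation horizon; tail = (L/delta)*exp(-delta*T)
val = discounted_reward(T)     # should be close to L/delta
```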

#### 4.3. Convergence of Numerical Scheme

Consider a PDMP $\tilde{X}^h=(X^h,t^{\prime})$ with state space $E^h=\{kh \,:\, k\in\mathbb{Z}\}\times\mathbb{R}_0^{+}\subset E=\mathbb{R}\times\mathbb{R}_0^{+}$ for some $h>0$. We identify $X_t^h=k$ with the actual position $kh$; i.e., the first component of $\tilde{X}^h$ describes an external discrete state. For suitable $g:E^h\to\mathbb{R}$, this process is described by its generator

where $p_{kl}^h = F_Y((k-l)h)-F_Y((k-l-1)h) = P\left[kh-Y\in [lh,(l+1)h)\right]$. Note that this process has its origins in the numerical procedure presented in the section above. In a next step we apply Theorem 5.16 from Kritzer et al. (2019), or directly Theorem 19.25 from Kallenberg (2002), to show that $\tilde{X}^h\stackrel{d}{\to}\tilde{X}=(X,t^{\prime})$, our original process. As a consequence, the expected values of certain functionals of $\tilde{X}^h$ converge to those of $\tilde{X}$. Lemma 5.14 of Kritzer et al. (2019) tells us that the relevant ingredients of $q^n$ and $q_{GS}^n$ are appropriately continuous if $\psi$ and $l$ are bounded.
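The weights $p_{kl}^h$ can be tabulated directly from the jump-size cdf $F_Y$. The sketch below assumes, purely for illustration, exponentially distributed jump sizes, and checks that one row of the transition matrix forms a probability distribution and is supported on downward moves (since $Y\ge 0$).

```python
import math

mu, h = 1.0, 0.01              # illustrative jump rate and grid width

def F_Y(y):
    """cdf of Y ~ Exp(mu), supported on [0, infinity)."""
    return 1.0 - math.exp(-mu * y) if y > 0 else 0.0

def p(k, l):
    """p_{kl}^h = F_Y((k - l) h) - F_Y((k - l - 1) h)."""
    return F_Y((k - l) * h) - F_Y((k - l - 1) * h)

k = 50                         # current grid state, actual position k*h
# jumps are downward, so only targets l < k carry mass (for continuous Y)
row = [p(k, l) for l in range(-100000, k)]
total = sum(row)               # telescopes to F_Y((k + 100000) h), essentially 1
```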

Fix $k_x(h):=\lfloor x/h \rfloor$ for $x\in\mathbb{R}$, so that $X_0^h=k_x(h)\,h\to x=X_0$ as $h\to 0$. Furthermore, let $f\in\mathcal{C}_b^{\infty}(E,\mathbb{R})$, which is certainly an element of $\mathcal{D}(\mathcal{A})$ and of $\mathcal{D}(\mathcal{A}^h)$. We need to focus on

where $L_c$ denotes the Lipschitz constant of $c(\cdot)$. The remaining terms, which bound the difference of the two generators, converge to zero uniformly in $(x,t^{\prime})$ if we assume a bounded jump intensity $\lambda$ and a differentiable, bounded and Lipschitz function $c$. Therefore, Kritzer et al. (2019, Theorem 5.16) tells us that $\tilde{X}^h$ converges weakly to $\tilde{X}$ as $h\to 0$, and the associated Gerber-Shiu and reward functions converge as well, provided $\psi$ and $l$ are bounded, as previously mentioned.

This is certainly a qualitative rather than a quantitative statement (it does not provide convergence rates in the discretization parameter $h$), but it shows that the schemes are correctly designed.

**Remark** **3.** Moreover, compare Fleming and Soner (1993, ch. IX), where techniques based on viscosity solutions are used to verify the convergence of numerical state and time discretization schemes. The basis for such results, as well as for ours, is provided by Kushner and Dupuis (2001), where Markov chain approximations of continuous-time stochastic processes are discussed extensively.