
# Fluctuation Theory for Upwards Skip-Free Lévy Chains

by
Matija Vidmar
Department of Mathematics, Faculty of Mathematics and Physics, University of Ljubljana, 1000 Ljubljana, Slovenia
Risks 2018, 6(3), 102; https://doi.org/10.3390/risks6030102
Submission received: 20 July 2018 / Revised: 13 September 2018 / Accepted: 16 September 2018 / Published: 18 September 2018

## Abstract

A fluctuation theory and, in particular, a theory of scale functions is developed for upwards skip-free Lévy chains, i.e., for right-continuous random walks embedded into continuous time as compound Poisson processes. This is done by analogy to the spectrally negative class of Lévy processes—several results, however, can be made more explicit/exhaustive in the compound Poisson setting. Importantly, the scale functions admit a linear recursion, of constant order when the support of the jump measure is bounded, by means of which they can be calculated—some examples are presented. An application to the modeling of an insurance company’s aggregate capital process is briefly considered.

## 1. Introduction

It was shown in Vidmar (2015) that precisely two types of Lévy processes exhibit the property of non-random overshoots: those with no positive jumps a.s., and compound Poisson processes, whose jump chain is (for some $h > 0$) a random walk on $Z_h := \{hk : k \in Z\}$, skip-free to the right. The latter class was then referred to as “upwards skip-free Lévy chains”. Also in the same paper it was remarked that this common property which the two classes share results in a more explicit fluctuation theory (including the Wiener-Hopf factorization) than for a general Lévy process, this being rarely the case (cf. (Kyprianou 2006, p. 172, sct. 6.5.4)).
Now, with reference to existing literature on fluctuation theory, the spectrally negative case (when there are no positive jumps, a.s.) is dealt with in detail in (Bertoin 1996, chp. VII); (Sato 1999, sct. 9.46) and especially (Kyprianou 2006, chp. 8). On the other hand, no equally exhaustive treatment of the right-continuous random walk seems to have been presented thus far, but see Brown et al. (2010); Marchal (2001); Quine (2004); (De Vylder and Goovaerts 1988, sct. 4); (Dickson and Waters 1991, sct. 7); (Doney 2007, sct. 9.3); (Spitzer 2001, passim). In particular, no such exposition appears forthcoming for the continuous-time analogue of such random walks, wherein the connection and analogy to the spectrally negative class of Lévy processes becomes most transparent and direct.
In the present paper, we proceed to do just that, i.e., we develop, by analogy to the spectrally negative case, a complete fluctuation theory (including theory of scale functions) for upwards skip-free Lévy chains. Indeed, the transposition of the results from the spectrally negative to the skip-free setting is mostly straightforward. Over and above this, however, and beyond what is purely analogous to the exposition of the spectrally negative case, (i) further specifics of the reflected process (Theorem 1-1), of the excursions from the supremum (Theorem 1-3) and of the inverse of the local time at the maximum (Theorem 1-4) are identified, (ii) the class of subordinators that are the descending ladder heights processes of such upwards skip-free Lévy chains is precisely characterized (Theorem 4), and (iii) a linear recursion is presented which allows us to directly compute the families of scale functions (Equations (20), (21), Proposition 9 and Corollary 1).
Application-wise, note that the classical continuous-time Bienaymé-Galton-Watson branching process is associated with upwards skip-free Lévy chains via a suitable time change (Kyprianou 2006, sct. 1.3.4). Besides, our chains feature as a natural continuous-time approximation of the more subtle spectrally negative Lévy family, that, because of its overall tractability, has been used extensively in applied probability (in particular to model the risk process of an insurance company; see the papers Avram et al. (2007); Chiu and Yin (2005); Yang and Zhang (2001) among others). This approximation point of view is developed in Mijatović et al. (2014, 2015). Finally, focusing on the insurance context, the chains may be used directly to model the aggregate capital process of an insurance company, in what is a continuous-time embedding of the discrete-time compound binomial risk model (for which see Avram and Vidmar (2017); Bao and Liu (2012); Wat et al. (2018); Xiao and Guo (2007) and the references therein). We elaborate on this latter point of view in Section 5.
The organisation of the rest of this paper is as follows. Section 2 introduces the setting and notation. Then Section 3 develops the relevant fluctuation theory, in particular details of the Wiener-Hopf factorization. Section 4 deals with the two-sided exit problem and the accompanying families of scale functions. Finally, Section 5 closes with an application to the risk process of an insurance company.

## 2. Setting and Notation

Let $( Ω , F , F = ( F t ) t ≥ 0 , P )$ be a filtered probability space supporting a Lévy process (Kyprianou 2006, p. 2, Definition 1.1) X (X is assumed to be $F$-adapted and to have independent increments relative to $F$). The Lévy measure (Sato 1999, p. 38, Definition 8.2) of X is denoted by $λ$. Next, recall from Vidmar (2015) (with $supp ( ν )$ denoting the support (Kallenberg 1997, p. 9) of a measure $ν$ defined on the Borel $σ$-field of some topological space):
Definition 1 (Upwards skip-free Lévy chain).
X is an upwards skip-free Lévy chain, if it is a compound Poisson process (Sato 1999, p. 18, Definition 4.2), viz. if $E[e^{izX_t}] = e^{t\int (e^{izx} - 1)\lambda(dx)}$ for $z \in R$ and $t \in [0, \infty)$, and if for some $h > 0$, $supp(\lambda) \subset Z_h$, whereas $supp(\lambda|_{B((0,\infty))}) = \{h\}$.
Remark 1.
Of course to say that X is a compound Poisson process means simply that it is a real-valued continuous-time Markov chain, vanishing a.s. at zero, with holding times exponentially distributed of rate $λ ( R )$ and the law of the jumps given by $λ / λ ( R )$ (Sato 1999, p. 18, Theorem 4.3).
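Remark 1 translates directly into a simulation recipe. The following sketch generates a sample path as such a continuous-time Markov chain; the jump measure used (atoms at $h$, $-h$ and $-2h$, with $h = 1$) is a hypothetical example, not taken from the paper.

```python
import random

def simulate_chain(lam, h, horizon, rng):
    """One path of a compound Poisson process whose Levy measure lam is a
    dict {k: mass}: an atom of size lam[k] at k*h (k = 1 is the single
    upward atom; k <= -1 are the downward atoms). Holding times are
    Exp(lam(R)), the jump law is lam/lam(R). Returns (time, state) pairs."""
    total = sum(lam.values())              # lam(R), the total jump rate
    ks = list(lam)
    ws = [lam[k] / total for k in ks]      # normalised jump-size law
    t, x, path = 0.0, 0.0, [(0.0, 0.0)]
    while True:
        t += rng.expovariate(total)        # exponential holding time
        if t > horizon:
            return path
        x += h * rng.choices(ks, weights=ws)[0]
        path.append((t, x))

# hypothetical jump measure: lam({h}) = 2, lam({-h}) = 1, lam({-2h}) = 0.5
path = simulate_chain({1: 2.0, -1: 1.0, -2: 0.5}, h=1.0, horizon=10.0,
                      rng=random.Random(1))
```

Every upward move of such a path is exactly one grid step $h$, which is the skip-free-to-the-right property of Definition 1.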
In the sequel, X will be assumed throughout an upwards skip-free Lévy chain, with $\lambda(\{h\}) > 0$ ($h > 0$) and characteristic exponent $\Psi(p) = \int (e^{ipx} - 1)\lambda(dx)$ ($p \in R$). In general, we insist on (i) every sample path of X being càdlàg (i.e., right-continuous, admitting left limits) and (ii) $(\Omega, F, F, P)$ satisfying the standard assumptions (i.e., the $\sigma$-field $F$ is $P$-complete, the filtration $F$ is right-continuous and $F_0$ contains all $P$-null sets). Nevertheless, we shall, sometimes and then only provisionally, relax assumption (ii), by transferring X as the coordinate process onto the canonical space $D_h := \{\omega \in Z_h^{[0,\infty)} : \omega \text{ is càdlàg}\}$ of càdlàg paths, mapping $[0, \infty) \to Z_h$, equipping $D_h$ with the $\sigma$-algebra and natural filtration of evaluation maps; this, however, will always be made explicit. We let $e_1$ be exponentially distributed, with mean one, and independent of X; then define $e_p := e_1/p$ ($p \in (0, \infty) \setminus \{1\}$).
Furthermore, for $x ∈ R$, introduce $T x : = inf { t ≥ 0 : X t ≥ x }$, the first entrance time of X into $[ x , ∞ )$. Please note that $T x$ is an $F$-stopping time (Kallenberg 1997, p. 101, Theorem 6.7). The supremum or maximum (respectively infimum or minimum) process of X is denoted $X ¯ t : = sup { X s : s ∈ [ 0 , t ] }$ (respectively $X ̲ t : = inf { X s : s ∈ [ 0 , t ] }$) ($t ≥ 0$). $X ̲ ∞ : = inf { X s : s ∈ [ 0 , ∞ ) }$ is the overall infimum.
With regard to miscellaneous general notation we have:
• The nonnegative, nonpositive, positive and negative real numbers are denoted by $R_+ := \{x \in R : x \geq 0\}$, $R_- := \{x \in R : x \leq 0\}$, $R^+ := R_+ \setminus \{0\}$ and $R^- := R_- \setminus \{0\}$, respectively. Then $Z_+ := R_+ \cap Z$, $Z_- := R_- \cap Z$, $Z^+ := R^+ \cap Z$ and $Z^- := R^- \cap Z$ are the nonnegative, nonpositive, positive and negative integers, respectively.
• Similarly, for $h > 0$: $Z_h^+ := Z_h \cap R_+$, $Z_h^{++} := Z_h \cap R^+$, $Z_h^- := Z_h \cap R_-$ and $Z_h^{--} := Z_h \cap R^-$ are the apposite elements of $Z_h$.
• The following introduces notation for the relevant half-planes of $C$; the arrow notation is meant to be suggestive of which half-plane is being considered: $C → : = { z ∈ C : ℜ z > 0 }$, $C ← : = { z ∈ C : ℜ z < 0 }$, $C ↓ : = { z ∈ C : ℑ z < 0 }$ and $C ↑ : = { z ∈ C : ℑ z > 0 }$. $C → ¯$, $C ← ¯$, $C ↓ ¯$ and $C ↑ ¯$ are then the respective closures of these sets.
• $N = { 1 , 2 , … }$ and $N 0 = N ∪ { 0 }$ are the positive and nonnegative integers, respectively. $⌈ x ⌉ : = inf { k ∈ Z : k ≥ x }$ ($x ∈ R$) is the ceiling function. For ${ a , b } ⊂ [ − ∞ , + ∞ ]$: $a ∧ b : = min { a , b }$ and $a ∨ b : = max { a , b }$.
• The Laplace transform of a measure $\mu$ on $R$, concentrated on $[0, \infty)$, is denoted by $\hat\mu$: $\hat\mu(\beta) = \int_{[0,\infty)} e^{-\beta x} \mu(dx)$ (for all $\beta \geq 0$ such that this integral is finite). To a nondecreasing right-continuous function $F : R \to R$, a measure $dF$ may be associated in the Lebesgue-Stieltjes sense.
The geometric law $geom(p)$ with success parameter $p \in (0, 1]$ has $geom(p)(\{k\}) = p(1-p)^k$ ($k \in N_0$); $1 - p$ is then the failure parameter. The exponential law $Exp(\beta)$ with parameter $\beta > 0$ is specified by the density $Exp(\beta)(dt) = \beta e^{-\beta t} 𝟙_{(0,\infty)}(t)\, dt$. A function $f : [0, \infty) \to [0, \infty)$ is said to be of exponential order, if there are $\{\alpha, A\} \subset R_+$, such that $f(x) \leq A e^{\alpha x}$ ($x \geq 0$); $f(+\infty) := \lim_{x \to \infty} f(x)$, when this limit exists. DCT (respectively MCT) stands for the dominated (respectively monotone) convergence theorem. Finally, increasing (respectively decreasing) will mean strictly increasing (respectively strictly decreasing), nondecreasing (respectively nonincreasing) being used for the weaker alternative; we will understand $a/0 = \pm\infty$ for $a \in \pm(0, \infty)$.

## 3. Fluctuation Theory

Throughout this section, to fully appreciate the similarity (and eventual differences) with the spectrally negative case, the reader is invited to compare the exposition directly with that of (Bertoin 1996, sct. VII.1) and (Kyprianou 2006, sct. 8.1).

#### 3.1. Laplace Exponent, the Reflected Process, Local Times and Excursions from the Supremum, Supremum Process and Long-Term Behaviour, Exponential Change of Measure

Since the Poisson process admits exponential moments of all orders, it follows that $E[e^{\beta \bar{X}_t}] < \infty$ and, in particular, $E[e^{\beta X_t}] < \infty$ for all $\{\beta, t\} \subset [0, \infty)$. Indeed, it may be seen by a direct computation that for $\beta \in \bar{C}^\to$, $t \geq 0$, $E[e^{\beta X_t}] = \exp\{t\psi(\beta)\}$, where $\psi(\beta) := \int_R (e^{\beta x} - 1)\lambda(dx)$ is the Laplace exponent of X. Moreover, $\psi$ is continuous (by the DCT) on $\bar{C}^\to$ and analytic in $C^\to$ (use the theorems of Cauchy (Rudin 1970, p. 206, 10.13 Cauchy’s theorem for triangle), Morera (Rudin 1970, p. 209, 10.17 Morera’s theorem) and Fubini).
Next, note that $ψ ( β )$ tends to $+ ∞$ as $β → ∞$ over the reals, due to the presence of the atom of $λ$ at h. Upon restriction to $[ 0 , ∞ )$, $ψ$ is strictly convex, as follows first on $( 0 , ∞ )$ by using differentiation under the integral sign and noting that the second derivative is strictly positive, and then extends to $[ 0 , ∞ )$ by continuity.
Denote then by $Φ ( 0 )$ the largest root of $ψ | [ 0 , ∞ )$. Indeed, 0 is always a root, and due to strict convexity, if $Φ ( 0 ) > 0$, then 0 and $Φ ( 0 )$ are the only two roots. The two cases occur, according as to whether $ψ ′ ( 0 + ) ≥ 0$ or $ψ ′ ( 0 + ) < 0$, which is clear. It is less obvious, but nevertheless true, that this right derivative at 0 actually exists, indeed $ψ ′ ( 0 + ) = ∫ R x λ ( d x ) ∈ [ − ∞ , ∞ )$. This follows from the fact that $( e β x − 1 ) / β$ is nonincreasing as $β ↓ 0$ for $x ∈ R −$ and hence the monotone convergence applies. Continuing from this, and with a similar justification, one also gets the equality $ψ ″ ( 0 + ) = ∫ x 2 λ ( d x ) ∈ ( 0 , + ∞ ]$ (where we agree $ψ ″ ( 0 + ) = + ∞$ if $ψ ′ ( 0 + ) = − ∞$). In any case, $ψ : [ Φ ( 0 ) , ∞ ) → [ 0 , ∞ )$ is continuous and increasing, it is a bijection and we let $Φ : [ 0 , ∞ ) → [ Φ ( 0 ) , ∞ )$ be the inverse bijection, so that $ψ ∘ Φ = id R +$.
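The quantities $\psi$, its minimiser and the inverse bijection $\Phi$ are straightforward to compute numerically. A minimal sketch, assuming a hypothetical two-atom jump measure $\lambda(\{h\}) = 1$, $\lambda(\{-h\}) = 2$ (so that $\psi'(0+) < 0$ and $\Phi(0) > 0$), using bisection on the strictly convex $\psi$:

```python
import math

# hypothetical jump measure on Z_h with h = 1: lam({h}) = 1, lam({-h}) = 2,
# so psi'(0+) = 1 - 2 < 0 and hence Phi(0) > 0
h = 1.0
lam = {1: 1.0, -1: 2.0}

def psi(beta):
    """Laplace exponent: sum over atoms of (e^{beta x} - 1) lam({x})."""
    return sum(m * (math.exp(beta * k * h) - 1.0) for k, m in lam.items())

def psi_prime(beta):
    return sum(m * k * h * math.exp(beta * k * h) for k, m in lam.items())

def argmin_psi(hi=64.0, iters=200):
    """Minimiser of the strictly convex psi on [0, inf): bisection on psi'."""
    lo = 0.0
    if psi_prime(lo) >= 0:
        return lo
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi_prime(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def Phi(q, iters=200):
    """The largest root of psi = q on [0, inf).  Starting the bracket at
    the minimiser of psi guarantees we pick the LARGEST root."""
    lo, hi = argmin_psi(), 64.0
    while psi(hi) < q:          # psi -> +inf thanks to the atom at h
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < q else (lo, mid)
    return 0.5 * (lo + hi)
```

For this $\lambda$ one can also solve by hand: with $u = e^\beta$, $\psi(\beta) = 0$ becomes $u + 2/u = 3$, so the two roots are $\beta = 0$ and $\beta = \log 2 = \Phi(0)$.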
With these preliminaries having been established, our first theorem identifies characteristics of the reflected process, the local time of X at the maximum (for a definition of which see e.g., (Kyprianou 2006, p. 140, Definition 6.1)), its inverse, as well as the expected length of excursions and the probability of an infinite excursion therefrom (for definitions of these terms see e.g., (Kyprianou 2006, pp. 140–47); we agree that an excursion (from the maximum) starts immediately after X leaves its running maximum and ends immediately after it returns to it; by its length we mean the amount of time between these two time points).
Theorem 1 (Reflected process; (inverse) local time; excursions).
Let $q n : = λ ( { − n h } ) / λ ( R )$ for $n ∈ N$ and $p : = λ ( { h } ) / λ ( R )$.
• The generator matrix $Q ˜$ of the Markov process $Y : = X ¯ − X$ on $Z h +$ is given by (with ${ s , s ′ } ⊂ Z h +$): $Q ˜ s s ′ = λ ( { s − s ′ } ) − δ s s ′ λ ( R )$, unless $s = s ′ = 0$, in which case we have $Q ˜ s s ′ = − λ ( ( − ∞ , 0 ) )$.
• For the reflected process Y, 0 is a holding point. The actual time spent at 0 by Y, which we shall denote L, is a local time at the maximum. Its right-continuous inverse $L − 1$, given by $L t − 1 : = inf { s ≥ 0 : L s > t }$ (for $0 ≤ t < L ∞$; $L t − 1 : = ∞$ otherwise), is then a (possibly killed) compound Poisson subordinator with unit positive drift.
• Assuming that $\lambda((-\infty, 0)) > 0$ to avoid the trivial case, the expected length of an excursion away from the supremum is equal to $\frac{\lambda(\{h\})h - \psi'(0+)}{(\psi'(0+) \vee 0)\,\lambda((-\infty, 0))}$; whereas the probability of such an excursion being infinite is $\frac{\lambda(\{h\})}{\lambda((-\infty, 0))}\left(e^{\Phi(0)h} - 1\right) =: p^*$.
• Assume again $λ ( ( − ∞ , 0 ) ) > 0$ to avoid the trivial case. Let N, taking values in $N ∪ { + ∞ }$, be the number of jumps the chain makes before returning to its running maximum, after it has first left it (it does so with probability 1). Then the law of $L − 1$ is given by (for $θ ∈ [ 0 , + ∞ )$):
$-\log E\left[\exp(-\theta L_1^{-1})\, 𝟙_{\{L_1^{-1} < +\infty\}}\right] = \theta + \lambda((-\infty, 0))\left(1 - \sum_{k=1}^{\infty} P(N = k)\left(\frac{\lambda(R)}{\lambda(R) + \theta}\right)^k\right).$
In particular, $L^{-1}$ has a killing rate of $\lambda((-\infty, 0))p^*$, Lévy mass $\lambda((-\infty, 0))(1 - p^*)$ and its jumps have the probability law on $(0, +\infty)$ given by the length of a generic excursion from the supremum, conditional on it being finite, i.e., that of an independent N-fold sum of independent $Exp(\lambda(R))$-distributed random variables, conditional on N being finite. Moreover, one has, for $k \in N$, $P(N = k) = \sum_{l=1}^{k} q_l p_{l,k}$, where the coefficients $(p_{l,k})_{l,k=1}^{\infty}$ satisfy the initial conditions:
$p_{l,1} = p\,\delta_{l1}, \quad l \in N;$
the recursions:
$p_{l,k+1} = \begin{cases} 0 & \text{if } l = k \text{ or } l > k + 1 \\ \sum_{m=1}^{k-1} q_m p_{m+1,k} & \text{if } l = 1 \\ p^{k+1} & \text{if } l = k + 1 \\ p\,p_{l-1,k} + \sum_{m=1}^{k-l} q_m p_{m+l,k} & \text{if } 1 < l < k \end{cases}, \quad \{l, k\} \subset N;$
and $p_{l,k}$ may be interpreted as the probability of X reaching level 0 starting from level $-lh$ for the first time on precisely the k-th jump ($\{l, k\} \subset N$).
Proof.
Theorem 1-1 is clear, since, e.g., Y transitions away from 0 at the rate at which X makes a negative jump; and from $s ∈ Z h + \ { 0 }$ to 0 at the rate at which X jumps up by s or more etc.
Theorem 1-2 is standard (Kyprianou 2006, p. 141, Example 6.3 & p. 149, Theorem 6.10).
We next establish Theorem 1-3. Denote, provisionally, by $\beta$ the expected excursion length. Furthermore, let the discrete-time Markov chain W (on the state space $N_0$) be endowed with the initial distribution $w_j := \frac{q_j}{1-p}$ for $j \in N$, $w_0 := 0$; and transition matrix P, given by $P_{0i} = \delta_{0i}$, whereas for $i \geq 1$: $P_{ij} = p$, if $j = i - 1$; $P_{ij} = q_{j-i}$, if $j > i$; and $P_{ij} = 0$ otherwise (W jumps down with probability p, up i steps with probability $q_i$, $i \geq 1$, until it reaches 0, where it gets stuck). Further let N be the first hitting time for W of $\{0\}$, so that a typical excursion length of X is equal in distribution to an independent sum of N (possibly infinite) $Exp(\lambda(R))$-random variables. By Wald’s identity, $\beta = E[N]/\lambda(R)$. Then (in the obvious notation, where $\underline{\infty}$ indicates the sum is inclusive of ∞), by Fubini: $E[N] = \sum_{n=1}^{\underline{\infty}} n \sum_{l=1}^{\infty} w_l P_l(N = n) = \sum_{l=1}^{\infty} w_l k_l$, where $k_l$ is the mean hitting time of $\{0\}$ for W, if it starts from $l \in N_0$, as in (Norris 1997, p. 12). From the skip-free property of the chain W it is moreover transparent that $k_i = \alpha i$, $i \in N_0$, for some $0 < \alpha \leq \infty$ (with the usual convention $0 \cdot \infty = 0$). Moreover we know (Norris 1997, p. 17, Theorem 1.3.5) that $(k_i : i \in N_0)$ is the minimal solution to $k_0 = 0$ and $k_i = 1 + \sum_{j=1}^{\infty} P_{ij} k_j$ ($i \in N$). Plugging in $k_i = \alpha i$, the last system of linear equations is equivalent to (provided $\alpha < \infty$) $0 = 1 - p\alpha + \alpha\zeta$, where $\zeta := \sum_{j=1}^{\infty} j q_j$. Thus, if $\zeta < p$, the minimal solution to the system is $k_i = i/(p - \zeta)$, $i \in N_0$, from which $\beta = \zeta/(\lambda((-\infty, 0))(p - \zeta))$ follows at once. If $\zeta \geq p$, clearly we must have $\alpha = +\infty$, hence $E[N] = +\infty$ and thus $\beta = +\infty$.
To establish the probability of an excursion being infinite, i.e., $\sum_{i=1}^{\infty} q_i (1 - \alpha_i) / \sum_{i=1}^{\infty} q_i$, where $\alpha_i := P_i(N < \infty) > 0$, we see that (by the skip-free property) $\alpha_i = \alpha_1^i$, $i \in N_0$, and by the strong Markov property, for $i \in N$, $\alpha_i = p\alpha_{i-1} + \sum_{j=1}^{\infty} q_j \alpha_{i+j}$. It follows that $1 = p\alpha_1^{-1} + \sum_{j=1}^{\infty} q_j \alpha_1^{j}$, i.e., $0 = \psi(\log(\alpha_1^{-1})/h)$. Hence, by Theorem 2-2, whose proof will be independent of this one, $\alpha_1 = e^{-\Phi(0)h}$ (since $\alpha_1 < 1$, if and only if X drifts to $-\infty$).
Finally, Theorem 1-4 is straightforward. ☐
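The recursion of Theorem 1-4 computes the coefficients $p_{l,k}$ directly. A sketch, for a hypothetical jump-chain law ($p = 0.8$, $q_1 = 0.15$, $q_2 = 0.05$, so $\zeta := \sum_m m q_m < p$ and upward first passage is certain); the function below re-indexes the recursion from $(l, k+1)$ to $(l, k)$:

```python
from functools import lru_cache

# hypothetical jump-chain law: one step up w.p. p, m steps down w.p. q[m]
p = 0.8
q = {1: 0.15, 2: 0.05}

@lru_cache(maxsize=None)
def plk(l, k):
    """p_{l,k}: probability that the chain, started at level -l*h, first
    reaches level 0 precisely on its k-th jump (Theorem 1-4, with the
    stated recursion re-indexed from (l, k+1) to (l, k))."""
    if l < 1 or k < 1:
        return 0.0
    if k == 1:
        return p if l == 1 else 0.0       # p_{l,1} = p * delta_{l1}
    if l == k - 1 or l > k:
        return 0.0                        # range/parity obstruction
    if l == k:
        return p ** k                     # k straight upward jumps
    if l == 1:
        return sum(q.get(m, 0.0) * plk(m + 1, k - 1) for m in range(1, k - 1))
    return p * plk(l - 1, k - 1) + sum(
        q.get(m, 0.0) * plk(m + l, k - 1) for m in range(1, k - l))

def fill(K):
    """Warm the cache bottom-up in k, keeping the recursion depth small."""
    for k in range(1, K + 1):
        for l in range(1, k + 1):
            plk(l, k)
```

Since this hypothetical chain drifts upwards, first passage from $-h$ is certain, so the $p_{1,k}$ sum to 1 over k; the law of N then combines the $p_{l,k}$ with the distribution of the initiating down-jump, as in the theorem.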
We turn our attention now to the supremum process $\bar{X}$. First, using the lack of memory property of the exponential law and the skip-free nature of X, we deduce from the strong Markov property applied at the time $T_a$, that for every $a, b \in Z_h^+$, $p > 0$: $P(T_{a+b} < e_p) = P(T_a < e_p) P(T_b < e_p)$. In particular, for any $n \in N_0$: $P(T_{nh} < e_p) = P(T_h < e_p)^n$. And since for $s \in Z_h^+$, $\{T_s < e_p\} = \{\bar{X}_{e_p} \geq s\}$ ($P$-a.s.) one has (for $n \in N_0$): $P(\bar{X}_{e_p} \geq nh) = P(\bar{X}_{e_p} \geq h)^n$. Therefore $\bar{X}_{e_p}/h \sim geom(1 - P(\bar{X}_{e_p} \geq h))$.
Next, to identify $P ( X ¯ e p ≥ h )$, $p > 0$, observe that (for $β ≥ 0$, $t ≥ 0$): $E [ exp { Φ ( β ) X t } ] = e t β$ and hence $( exp { Φ ( β ) X t − β t } ) t ≥ 0$ is an $( F , P )$-martingale by stationary independent increments of X, for each $β ≥ 0$. Then apply the optional sampling theorem at the bounded stopping time $T x ∧ t$ ($t , x ≥ 0$) to get:
$E\left[\exp\{\Phi(\beta) X_{T_x \wedge t} - \beta (T_x \wedge t)\}\right] = 1.$
Please note that $X ( T x ∧ t ) ≤ h ⌈ x / h ⌉$ and $Φ ( β ) X ( T x ∧ t ) − β ( T x ∧ t )$ converges to $Φ ( β ) h ⌈ x / h ⌉ − β T x$ ($P$-a.s.) as $t → ∞$ on ${ T x < ∞ }$. It converges to $− ∞$ on the complement of this event, $P$-a.s., provided $β + Φ ( β ) > 0$. Therefore we deduce by dominated convergence, first for $β > 0$ and then also for $β = 0$, by taking limits:
$E\left[\exp\{-\beta T_x\}\, 𝟙_{\{T_x < \infty\}}\right] = \exp\{-\Phi(\beta)\, h \lceil x/h \rceil\}. \quad (1)$
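The first-passage transform just derived is easy to sanity-check by Monte Carlo. A sketch under a hypothetical two-atom measure $\lambda(\{h\}) = 2$, $\lambda(\{-h\}) = 1$ (with $h = 1$, so X drifts to $+\infty$ and $T_h < \infty$ a.s.); for this $\lambda$, $\psi(\Phi(\beta)) = \beta$ reduces to a quadratic in $u = e^{\Phi(\beta)h}$:

```python
import math
import random

h, beta = 1.0, 1.0
up, down = 2.0, 1.0                 # hypothetical lam({h}), lam({-h})
total = up + down                   # lam(R)

def sample_T1(rng):
    """First passage time of level h: Exp(lam(R)) holding times, +-h jumps."""
    t, x = 0.0, 0
    while x < 1:
        t += rng.expovariate(total)
        x += 1 if rng.random() < up / total else -1
    return t

# psi(b) = 2(e^b - 1) + (e^{-b} - 1); psi(Phi) = beta gives, with
# u = e^{Phi h}, the quadratic 2u^2 - (3 + beta)u + 1 = 0 (largest root).
u = ((3.0 + beta) + math.sqrt((3.0 + beta) ** 2 - 8.0)) / 4.0
exact = 1.0 / u                     # exp(-Phi(beta) h), since ceil(h/h) = 1

rng = random.Random(7)
n = 20000
mc = sum(math.exp(-beta * sample_T1(rng)) for _ in range(n)) / n
```

With these parameters the exact value is $1/u \approx 0.586$, and `mc` should reproduce it to within Monte Carlo error.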
Before we formulate our next theorem, recall also that any non-zero Lévy process either drifts to $+ ∞$, oscillates or drifts to $− ∞$ (Sato 1999, pp. 255–56, Proposition 37.10 and Definition 37.11).
Theorem 2 (Supremum process and long-term behaviour).
• The failure probability for the geometrically distributed $X ¯ e p / h$ is $exp { − Φ ( p ) h }$ ($p > 0$).
• X drifts to $+ ∞$, oscillates or drifts to $− ∞$ according as to whether $ψ ′ ( 0 + )$ is positive, zero, or negative. In the latter case $X ¯ ∞ / h$ has a geometric distribution with failure probability $exp { − Φ ( 0 ) h }$.
• $( T n h ) n ∈ N 0$ is a discrete-time increasing stochastic process, vanishing at 0 and having stationary independent increments up to the explosion time, which is an independent geometric random variable; it is a killed random walk.
Remark 2.
Unlike in the spectrally negative case (Bertoin 1996, p. 189), the supremum process cannot be obtained from the reflected process, since the latter does not discern a point of increase in X when the latter is at its running maximum.
Proof.
We have for every $s ∈ Z h +$:
$P(\bar{X}_{e_p} \geq s) = P(T_s < e_p) = E\left[\exp\{-p T_s\}\, 𝟙_{\{T_s < \infty\}}\right] = \exp\{-\Phi(p) s\}. \quad (2)$
Thus Theorem 2-1 obtains.
For Theorem 2-2 note that letting $p ↓ 0$ in (2), we obtain $X ¯ ∞ < ∞$ ($P$-a.s.), if and only if $Φ ( 0 ) > 0$, which is equivalent to $ψ ′ ( 0 + ) < 0$. If so, $X ¯ ∞ / h$ is geometrically distributed with failure probability $exp { − Φ ( 0 ) h }$ and then (and only then) does X drift to $− ∞$.
It remains to consider the case of drifting to $+\infty$ (the cases being mutually exclusive and exhaustive). Indeed, X drifts to $+\infty$, if and only if $E[T_s]$ is finite for each $s \in Z_h^+$ (Bertoin 1996, p. 172, Proposition VI.17). Using again the nondecreasingness of $(e^{-\beta T_s} - 1)/\beta$ in $\beta \in [0, \infty)$, we deduce from (1), by the monotone convergence, that one may differentiate under the integral sign, to get $E[T_s 𝟙_{\{T_s < \infty\}}] = (\beta \mapsto -\exp\{-\Phi(\beta)s\})'(0+)$. So the $E[T_s]$ are finite, if and only if $\Phi(0) = 0$ (so that $T_s < \infty$ $P$-a.s.) and $\Phi'(0+) < \infty$. Since $\Phi$ is the inverse of $\psi|_{[\Phi(0), \infty)}$, this is equivalent to saying $\psi'(0+) > 0$.
Finally, Theorem 2-3 is clear. ☐
Table 1 briefly summarizes for the reader’s convenience some of our main findings thus far.
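Theorem 2-1 admits a simple numerical check: simulate the running maximum up to an independent $Exp(p)$ time and compare the tail $P(\bar{X}_{e_p} \geq nh)$ with $e^{-\Phi(p)nh}$. The two-atom measure $\lambda(\{h\}) = 2$, $\lambda(\{-h\}) = 1$ ($h = 1$) below is a hypothetical example, for which $\Phi(p)$ is available from a quadratic:

```python
import math
import random

h, p = 1.0, 1.0
up, down = 2.0, 1.0                  # hypothetical lam({h}), lam({-h})
total = up + down

def running_max_at_exp(rng):
    """bar{X}_{e_p}/h: run the chain up to an independent Exp(p) time."""
    e_p = rng.expovariate(p)
    t, x, m = 0.0, 0, 0
    while True:
        t += rng.expovariate(total)
        if t > e_p:
            return m
        x += 1 if rng.random() < up / total else -1
        m = max(m, x)

# failure probability exp(-Phi(p) h): with u = e^{Phi(p) h}, psi(Phi) = p
# reduces here to 2u^2 - (3 + p)u + 1 = 0 (largest root).
u = ((3.0 + p) + math.sqrt((3.0 + p) ** 2 - 8.0)) / 4.0
f = 1.0 / u

rng = random.Random(3)
n = 20000
maxima = [running_max_at_exp(rng) for _ in range(n)]
```

The empirical frequencies of $\{\bar{X}_{e_p} \geq nh\}$ then match $f^n$, i.e., $\bar{X}_{e_p}/h$ is geometric with failure probability $f = e^{-\Phi(p)h}$.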
We conclude this section by offering a way to reduce the general case of an upwards skip-free Lévy chain to one which necessarily drifts to $+ ∞$. This will prove useful in the sequel. First, there is a pathwise approximation of an oscillating X, by (what is again) an upwards skip-free Lévy chain, but drifting to infinity.
Remark 3.
Suppose X oscillates. Let (possibly by enlarging the probability space to accommodate for it) N be an independent Poisson process with intensity 1 and $N^\epsilon_t := N_{\epsilon t}$ ($t \geq 0$) so that $N^\epsilon$ is a Poisson process with intensity ϵ, independent of X. Define $X^\epsilon := X + h N^\epsilon$. Then, as $\epsilon \downarrow 0$, $X^\epsilon$ converges to X, uniformly on bounded time sets, almost surely, and is clearly an upwards skip-free Lévy chain drifting to $+\infty$.
The reduction of the case when X drifts to $-\infty$ is somewhat more involved and is done by a change of measure. For this purpose assume until the end of this subsection, that X is already the coordinate process on the canonical space $\Omega = D_h$, equipped with the $\sigma$-algebra $F$ and filtration $F$ of evaluation maps (so that $P$ coincides with the law of X on $D_h$ and $F = \sigma(pr_s : s \in [0, +\infty))$, while for $t \geq 0$, $F_t = \sigma(pr_s : s \in [0, t])$, where $pr_s(\omega) = \omega(s)$, for $(s, \omega) \in [0, +\infty) \times D_h$). We make this transition in order to be able to apply the Kolmogorov extension theorem in the proposition which follows. Note, however, that we are no longer able to assume the standard conditions on $(\Omega, F, F, P)$. Notwithstanding this, $(T_x)_{x \in R}$ remain $F$-stopping times, since by the nature of the space $D_h$, for $x \in R$, $t \geq 0$, $\{T_x \leq t\} = \{\bar{X}_t \geq x\} \in F_t$.
Proposition 1 (Exponential change of measure).
Let $c ≥ 0$. Then, demanding:
$P_c(\Lambda) = E\left[\exp\{c X_t - \psi(c) t\}\, 𝟙_\Lambda\right] \quad (\Lambda \in F_t,\ t \geq 0) \quad (3)$
this introduces a unique measure $P_c$ on $F$. Under the new measure, X remains an upwards skip-free Lévy chain with Laplace exponent $\psi_c = \psi(\cdot + c) - \psi(c)$, drifting to $+\infty$, if $c \geq \Phi(0)$, unless $c = \psi'(0+) = 0$. Moreover, if $\lambda_c$ is the new Lévy measure of X under $P_c$, then $\lambda_c \ll \lambda$ and $\frac{d\lambda_c}{d\lambda}(x) = e^{cx}$ λ-a.e. in $x \in R$. Finally, for every $F$-stopping time T, $P_c \ll P$ on restriction to $F_T' := \{A \cap \{T < \infty\} : A \in F_T\}$, and:
$\frac{dP_c|_{F_T'}}{dP|_{F_T'}} = \exp\{c X_T - \psi(c) T\}.$
Proof.
That $P c$ is introduced consistently as a probability measure on $F$ follows from the Kolmogorov extension theorem (Parthasarathy 1967, p. 143, Theorem 4.2). Indeed, $M : = ( exp { c X t − ψ ( c ) t } ) t ≥ 0$ is a nonnegative martingale (use independence and stationarity of increments of X and the definition of the Laplace exponent), equal identically to 1 at time 0.
Furthermore, for all $β ∈ C → ¯$, ${ t , s } ⊂ R +$ and $Λ ∈ F t$:
$\begin{aligned} E_c[\exp\{\beta(X_{t+s} - X_t)\}\, 𝟙_\Lambda] &= E[\exp\{c X_{t+s} - \psi(c)(t+s)\} \exp\{\beta(X_{t+s} - X_t)\}\, 𝟙_\Lambda] \\ &= E[\exp\{(c + \beta)(X_{t+s} - X_t) - \psi(c)s\}]\, E[\exp\{c X_t - \psi(c)t\}\, 𝟙_\Lambda] \\ &= \exp\{s(\psi(c + \beta) - \psi(c))\}\, P_c(\Lambda). \end{aligned}$
An application of the Functional Monotone Class Theorem then shows that X is indeed a Lévy process on $(\Omega, F, F, P_c)$ and its Laplace exponent under $P_c$ is as stipulated (that $X_0 = 0$ $P_c$-a.s. follows from the absolute continuity of $P_c$ with respect to $P$ on restriction to $F_0$).
Next, from the expression for $ψ c$, the claim regarding $λ c$ follows at once. Then clearly X remains an upwards skip-free Lévy chain under $P c$, drifting to $+ ∞$, if $ψ ′ ( c + ) > 0$.
Finally, let $A ∈ F T$ and $t ≥ 0$. Then $A ∩ { T ≤ t } ∈ F T ∧ t$, and by the Optional Sampling Theorem:
$P_c(A \cap \{T \leq t\}) = E[M_t 𝟙_{A \cap \{T \leq t\}}] = E[E[M_t 𝟙_{A \cap \{T \leq t\}} \mid F_{T \wedge t}]] = E[M_{T \wedge t} 𝟙_{A \cap \{T \leq t\}}] = E[M_T 𝟙_{A \cap \{T \leq t\}}].$
Using the MCT, letting $t → ∞$, we obtain the equality $P c ( A ∩ { T < ∞ } ) = E [ M T 𝟙 A ∩ { T < ∞ } ]$. ☐
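The consistency of the definition of $P_c$ rests on M being a mean-one martingale, which can itself be checked by simulation: the Monte Carlo average of $\exp\{c X_t - \psi(c) t\}$ should be close to 1 for any $c \geq 0$. A sketch, again for the hypothetical two-atom measure $\lambda(\{h\}) = 2$, $\lambda(\{-h\}) = 1$:

```python
import math
import random

h, c, t_end = 1.0, 0.5, 2.0
up, down = 2.0, 1.0                 # hypothetical lam({h}), lam({-h})
total = up + down                   # lam(R)

def psi(b):
    """Laplace exponent of this two-atom chain."""
    return up * (math.exp(b * h) - 1.0) + down * (math.exp(-b * h) - 1.0)

def sample_Xt(rng, t_end):
    """X_{t_end}: jump times from a rate-lam(R) Poisson clock, +-h jumps."""
    t, x = 0.0, 0.0
    while True:
        t += rng.expovariate(total)
        if t > t_end:
            return x
        x += h if rng.random() < up / total else -h

rng = random.Random(11)
n = 50000
avg = sum(math.exp(c * sample_Xt(rng, t_end) - psi(c) * t_end)
          for _ in range(n)) / n    # should be close to E[M_t] = 1
```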
Proposition 2 (Conditioning to drift to +∞).
Assume $Φ ( 0 ) > 0$ and denote $P ♮ : = P Φ ( 0 )$ (see (3)). We then have as follows.
• For every $Λ ∈ A : = ∪ t ≥ 0 F t$, $lim n → ∞ P ( Λ | X ¯ ∞ ≥ n h ) = P ♮ ( Λ )$.
• For every $x ≥ 0$, the stopped process $X T x = ( X t ∧ T x ) t ≥ 0$ is identical in law under the measures $P ♮$ and $P ( · | T x < ∞ )$ on the canonical space $D h$.
Proof.
With regard to Proposition 2-1, we have as follows. Let $t ≥ 0$. By the Markov property of X at time t, the process $X ▵ : = ( X t + s − X t ) s ≥ 0$ is identical in law with X on $D h$ and independent of $F t$ under $P$. Thus, letting $T ▵ y : = inf { t ≥ 0 : X ▵ t ≥ y }$ ($y ∈ R$), one has for $Λ ∈ F t$ and $n ∈ N 0$, by conditioning:
$P(\Lambda \cap \{t < T_{nh} < \infty\}) = E[E[𝟙_\Lambda 𝟙_{\{t < T_{nh}\}} 𝟙_{\{T^\triangle_{nh - X_t} < \infty\}} \mid F_t]] = E[e^{\Phi(0)(X_t - nh)} 𝟙_{\Lambda \cap \{t < T_{nh}\}}],$
since ${ Λ , { t < T n h } } ∪ σ ( X t ) ⊂ F t$. Next, noting that ${ X ¯ ∞ ≥ n h } = { T n h < ∞ }$:
$\begin{aligned} P(\Lambda \mid \bar{X}_\infty \geq nh) &= e^{\Phi(0)nh}\left(P(\Lambda \cap \{T_{nh} \leq t\}) + P(\Lambda \cap \{t < T_{nh} < \infty\})\right) \\ &= e^{\Phi(0)nh} P(\Lambda \cap \{T_{nh} \leq t\}) + e^{\Phi(0)nh} E[e^{\Phi(0)(X_t - nh)} 𝟙_{\Lambda \cap \{t < T_{nh}\}}] \\ &= e^{\Phi(0)nh} P(\Lambda \cap \{T_{nh} \leq t\}) + P^\natural(\Lambda \cap \{t < T_{nh}\}). \end{aligned}$
The second term clearly converges to $P ♮ ( Λ )$ as $n → ∞$. The first converges to 0, because by (2) $P ( X ¯ e 1 ≥ n h ) = e − n h Φ ( 1 ) = o ( e − n h Φ ( 0 ) )$, as $n → ∞$, and we have the estimate $P ( T n h ≤ t ) = P ( X ¯ t ≥ n h ) = P ( X ¯ t ≥ n h | e 1 ≥ t ) ≤ P ( X ¯ e 1 ≥ n h | e 1 ≥ t ) ≤ e t P ( X ¯ e 1 ≥ n h )$.
We next show Proposition 2-2. Note first that X is $F$-progressively measurable (in particular, measurable), hence the stopped process $X T x$ is measurable as a mapping into $D h$ (Karatzas and Shreve 1988, p. 5, Problem 1.16).
Furthermore, by the strong Markov property, conditionally on ${ T x < ∞ }$, $F T x$ is independent of the future increments of X after $T x$, hence also of ${ T x ′ < ∞ }$ for any $x ′ > x$. We deduce that the law of $X T x$ is the same under $P ( · | T x < ∞ )$ as it is under $P ( · | T x ′ < ∞ )$ for any $x ′ > x$. Proposition 2-2 then follows from Proposition 2-1 by letting $x ′$ tend to $+ ∞$, the algebra $A$ being sufficient to determine equality in law by a $π$/$λ$-argument. ☐

#### 3.2. Wiener-Hopf Factorization

Definition 2.
We define, for $t ≥ 0$, $G ¯ t * : = inf { s ∈ [ 0 , t ] : X s = X ¯ t }$, i.e., $P$-a.s., $G ¯ t *$ is the last time in the interval $[ 0 , t ]$ that X attains a new maximum. Similarly we let $G ̲ t : = sup { s ∈ [ 0 , t ] : X s = X ̲ s }$ be, $P$-a.s., the last time on $[ 0 , t ]$ of attaining the running infimum ($t ≥ 0$).
While the statements of the next proposition are given for the upwards skip-free Lévy chain X, they in fact hold true for the Wiener-Hopf factorization of any compound Poisson process. Moreover, they are (essentially) known in Kyprianou (2006). Nevertheless, we begin with these general observations, in order to (a) introduce further relevant notation and (b) provide the reader with the prerequisites needed to understand the remainder of this subsection. Immediately following Proposition 3, however, we particularize to the skip-free setting.
Proposition 3.
Let $p > 0$. Then:
• The pairs $( G ¯ e p * , X ¯ e p )$ and $( e p − G ¯ e p * , X ¯ e p − X e p )$ are independent and infinitely divisible, yielding the factorisation:
$\frac{p}{p - i\eta - \Psi(\theta)} = \Psi_p^+(\eta, \theta)\, \Psi_p^-(\eta, \theta),$
where for ${ θ , η } ⊂ R$,
$\Psi_p^+(\eta, \theta) := E[\exp\{i\eta \bar{G}^*_{e_p} + i\theta \bar{X}_{e_p}\}] \quad \text{and} \quad \Psi_p^-(\eta, \theta) := E[\exp\{i\eta \underline{G}_{e_p} + i\theta \underline{X}_{e_p}\}].$
Duality: $( e p − G ¯ e p * , X ¯ e p − X e p )$ is equal in distribution to $( G ̲ e p , − X ̲ e p )$. $Ψ p +$ and $Ψ p −$ are the Wiener-Hopf factors.
• The Wiener-Hopf factors may be identified as follows:
$E[\exp\{-\alpha \bar{G}^*_{e_p} - \beta \bar{X}_{e_p}\}] = \frac{\kappa^*(p, 0)}{\kappa^*(p + \alpha, \beta)}$
and
$E[\exp\{-\alpha \underline{G}_{e_p} + \beta \underline{X}_{e_p}\}] = \frac{\hat\kappa(p, 0)}{\hat\kappa(p + \alpha, \beta)}$
for ${ α , β } ⊂ C → ¯$.
• Here, in terms of the law of X,
$\kappa^*(\alpha, \beta) := k^* \exp\left(\int_0^\infty \int_{(0,\infty)} (e^{-t} - e^{-\alpha t - \beta x}) \frac{1}{t} P(X_t \in dx)\, dt\right)$
and
$\hat\kappa(\alpha, \beta) = \hat{k} \exp\left(\int_0^\infty \int_{(-\infty,0]} (e^{-t} - e^{-\alpha t + \beta x}) \frac{1}{t} P(X_t \in dx)\, dt\right)$
for $α ∈ C →$, $β ∈ C → ¯$ and some constants ${ k * , k ^ } ⊂ R +$.
Proof.
These claims are contained in the remarks regarding compound Poisson processes in (Kyprianou 2006, pp. 167–68) pursuant to the proof of Theorem 6.16 therein. Analytic continuations have been effected in part Proposition 3-3 using properties of zeros of holomorphic functions (Rudin 1970, p. 209, Theorem 10.18), the theorems of Cauchy, Morera and Fubini, and finally the finiteness/integrability properties of q-potential measures (Sato 1999, p. 203, Theorem 30.10(ii)). ☐
Remark 4.
• (Kyprianou 2006, pp. 157, 168) $κ ^$ is also the Laplace exponent of the (possibly killed) bivariate descending ladder subordinator $( L ^ − 1 , H ^ )$, where $L ^$ is a local time at the minimum, and the descending ladder heights process $H ^ = X L ^ − 1$ (on ${ L ^ − 1 < ∞ }$; $+ ∞$ otherwise) is X sampled at its right-continuous inverse $L ^ − 1$:
$E[e^{-\alpha \hat{L}_1^{-1} - \beta \hat{H}_1}\, 𝟙_{\{1 < \hat{L}_\infty\}}] = e^{-\hat\kappa(\alpha, \beta)}, \quad \{\alpha, \beta\} \subset \bar{C}^\to.$
• As for the strict ascending ladder heights subordinator $H * : = X L * − 1$ (on $L * − 1 < ∞$; $+ ∞$ otherwise), $L * − 1$ being the right-continuous inverse of $L *$, and $L *$ denoting the amount of time X has spent at a new maximum, we have, thanks to the skip-free property of X, as follows. Since $P ( T h < ∞ ) = e − Φ ( 0 ) h$, X stays at a newly achieved maximum each time for an $Exp ( λ ( R ) )$-distributed amount of time, departing it to achieve a new maximum later on with probability $e − Φ ( 0 ) h$, and departing it, never to achieve a new maximum thereafter, with probability $1 − e − Φ ( 0 ) h$. It follows that the Laplace exponent of $H *$ is given by:
$− log E [ e − β H 1 𝟙 ( H 1 < + ∞ ) ] = ( 1 − e − β h ) λ ( R ) e − Φ ( 0 ) h + λ ( R ) ( 1 − e − Φ ( 0 ) h ) = λ ( R ) ( 1 − e − ( β + Φ ( 0 ) ) h )$
(where $β ∈ R +$). In other words, $H * / h$ is a killed Poisson process of intensity $λ ( R ) e − Φ ( 0 ) h$ and with killing rate $λ ( R ) ( 1 − e − Φ ( 0 ) h )$.
Again thanks to the skip-free nature of X, we can expand on the contents of Proposition 3, by offering further details of the Wiener-Hopf factorization. Indeed, if we let $N_t := \bar{X}_t/h$ and $T_k := T_{kh}$ ($t \geq 0$, $k \in \mathbb{N}_0$), then clearly $T := (T_k)_{k \geq 0}$ are the arrival times of a renewal process (with a possibly defective inter-arrival time distribution) and $N := (N_t)_{t \geq 0}$ is the ‘number of arrivals’ process. One also has the relation: $\bar{G}^*_t = T_{N_t}$, $t \geq 0$ ($P$-a.s.). Thus the random variables entering the Wiener-Hopf factorization are determined in terms of the renewal process $(T, N)$.
Moreover, we can proceed to calculate explicitly the Wiener-Hopf factors as well as $\hat{\kappa}$ and $\kappa^*$. Let $p > 0$. First, since $\bar{X}_{e_p}/h$ is a geometrically distributed random variable, we have, for any $\beta \in \overline{\mathbb{C}^{\to}}$:
$E\left[e^{-\beta \bar{X}_{e_p}}\right] = \sum_{k=0}^\infty e^{-\beta hk}\left(1 - e^{-\Phi(p)h}\right)e^{-\Phi(p)hk} = \frac{1 - e^{-\Phi(p)h}}{1 - e^{-\beta h - \Phi(p)h}}.$
Note here that $Φ ( p ) > 0$ for all $p > 0$. On the other hand, using conditioning (for any $α ≥ 0$):
$\begin{aligned}E\left[e^{-\alpha \bar{G}^*_{e_p}}\right] &= E\left[\left((u,t)\mapsto \sum_{k=0}^\infty \mathbb{1}_{[0,\infty)}(t_k)\,e^{-\alpha t_k}\,\mathbb{1}_{[t_k,t_{k+1})}(u)\right)\circ (e_p, T)\right]\\ &= E\left[\left(t\mapsto \sum_{k=0}^\infty \mathbb{1}_{[0,\infty)}(t_k)\,e^{-\alpha t_k}\left(e^{-pt_k}-e^{-pt_{k+1}}\right)\right)\circ T\right], \quad\text{since } e_p \perp T\\ &= E\sum_{k=0}^\infty \mathbb{1}_{\{T_k<\infty\}}\left(e^{-(p+\alpha)T_k} - e^{-(p+\alpha)T_k}e^{-p(T_{k+1}-T_k)}\right)\\ &= E\sum_{k=0}^\infty e^{-(p+\alpha)T_k}\,\mathbb{1}_{\{T_k<\infty\}}\left(1 - e^{-p(T_{k+1}-T_k)}\right).\end{aligned}$
Now, conditionally on $T k < ∞$, $T k + 1 − T k$ is independent of $T k$ and has the same distribution as $T 1$. Therefore, by (1) and the theorem of Fubini:
$E\left[e^{-\alpha \bar{G}^*_{e_p}}\right] = \sum_{k=0}^\infty e^{-\Phi(p+\alpha)hk}\left(1 - e^{-\Phi(p)h}\right) = \frac{1 - e^{-\Phi(p)h}}{1 - e^{-\Phi(p+\alpha)h}}.$
We identify from (4), for any $\beta \in \overline{\mathbb{C}^{\to}}$: $\kappa^*(p,0)/\kappa^*(p,\beta) = \frac{1 - e^{-\Phi(p)h}}{1 - e^{-\beta h - \Phi(p)h}}$, and therefore, for any $\alpha \geq 0$: $\kappa^*(p+\alpha,0)/\kappa^*(p+\alpha,\beta) = \frac{1 - e^{-\Phi(p+\alpha)h}}{1 - e^{-\beta h - \Phi(p+\alpha)h}}$. We identify from (5), for any $\alpha \geq 0$: $\kappa^*(p,0)/\kappa^*(p+\alpha,0) = \frac{1 - e^{-h\Phi(p)}}{1 - e^{-\Phi(p+\alpha)h}}$. Therefore, multiplying the last two equalities, for $\alpha \geq 0$ and $\beta \in \overline{\mathbb{C}^{\to}}$, the equality:
$\frac{\kappa^*(p,0)}{\kappa^*(p+\alpha,\beta)} = \frac{1 - e^{-\Phi(p)h}}{1 - e^{-\beta h - \Phi(p+\alpha)h}}$
obtains. In particular, for $\alpha > 0$ and $\beta \in \overline{\mathbb{C}^{\to}}$, we recognize, for some constant $k^* \in (0,\infty)$: $\kappa^*(\alpha,\beta) = k^*\left(1 - e^{-(\beta+\Phi(\alpha))h}\right)$. Next, observe that by independence and duality (for $\alpha \geq 0$ and $\theta \in \mathbb{R}$):
$E\left[\exp\left\{-\alpha \bar{G}^*_{e_p} + i\theta \bar{X}_{e_p}\right\}\right]E\left[\exp\left\{-\alpha \underline{G}_{e_p} + i\theta \underline{X}_{e_p}\right\}\right] = \int_0^\infty dt\, pe^{-pt}\,E\left[\exp\{-\alpha t + i\theta X_t\}\right] = \int_0^\infty dt\, pe^{-pt-\alpha t+\Psi(\theta)t} = \frac{p}{p+\alpha-\Psi(\theta)}.$
Therefore:
$(p+\alpha-\psi(i\theta))\,\frac{\hat{\kappa}(p,0)}{\hat{\kappa}(p+\alpha,i\theta)} = p\,\frac{1 - e^{i\theta h - \Phi(p+\alpha)h}}{1 - e^{-\Phi(p)h}}.$
Both sides of this equality are continuous in $\theta \in \overline{\mathbb{C}^{\downarrow}}$ and analytic in $\theta \in \mathbb{C}^{\downarrow}$. They agree on $\mathbb{R}$, hence agree on $\overline{\mathbb{C}^{\downarrow}}$ by analytic continuation. Therefore (for all $\alpha \geq 0$, $\beta \in \overline{\mathbb{C}^{\to}}$):
$(p+\alpha-\psi(\beta))\,\frac{\hat{\kappa}(p,0)}{\hat{\kappa}(p+\alpha,\beta)} = p\,\frac{1 - e^{\beta h - \Phi(p+\alpha)h}}{1 - e^{-\Phi(p)h}},$
i.e., for all $\beta \in \overline{\mathbb{C}^{\to}}$ and $\alpha \geq 0$ for which $p+\alpha \neq \psi(\beta)$ one has:
$E\left[\exp\left\{-\alpha \underline{G}_{e_p} + \beta \underline{X}_{e_p}\right\}\right] = \frac{p}{p+\alpha-\psi(\beta)}\,\frac{1 - e^{(\beta-\Phi(p+\alpha))h}}{1 - e^{-\Phi(p)h}}.$
Moreover, for the unique $\beta_0 > 0$ for which $\psi(\beta_0) = p+\alpha$, one can take the limit $\beta \to \beta_0$ in the above to obtain: $E[\exp\{-\alpha \underline{G}_{e_p} + \beta_0 \underline{X}_{e_p}\}] = \frac{ph}{\psi'(\beta_0)\left(1 - e^{-\Phi(p)h}\right)} = \frac{ph\,\Phi'(p+\alpha)}{1 - e^{-\Phi(p)h}}$. We also recognize from (7), for $\alpha > 0$ and $\beta \in \overline{\mathbb{C}^{\to}}$ with $\alpha \neq \psi(\beta)$, and some constant $\hat{k} \in (0,\infty)$: $\hat{\kappa}(\alpha,\beta) = \hat{k}\,\frac{\alpha-\psi(\beta)}{1 - e^{(\beta-\Phi(\alpha))h}}$. With $\beta_0 = \Phi(\alpha)$ one can take the limit in the latter as $\beta \to \beta_0$ to obtain: $\hat{\kappa}(\alpha,\beta_0) = \hat{k}\,\psi'(\beta_0)/h = \frac{\hat{k}}{h\,\Phi'(\alpha)}$.
In summary:
Theorem 3 (Wiener-Hopf factorization for upwards skip-free Lévy chains).
We have the following identities in terms of ψ and Φ:
• For every $\alpha \geq 0$ and $\beta \in \overline{\mathbb{C}^{\to}}$:
$E\left[\exp\left\{-\alpha \bar{G}^*_{e_p} - \beta \bar{X}_{e_p}\right\}\right] = \frac{1 - e^{-\Phi(p)h}}{1 - e^{-(\beta+\Phi(p+\alpha))h}}$
and
$E\left[\exp\left\{-\alpha \underline{G}_{e_p} + \beta \underline{X}_{e_p}\right\}\right] = \frac{p}{p+\alpha-\psi(\beta)}\,\frac{1 - e^{(\beta-\Phi(p+\alpha))h}}{1 - e^{-\Phi(p)h}}$
(the latter whenever $p+\alpha \neq \psi(\beta)$; for the unique $\beta_0 > 0$ such that $\psi(\beta_0) = p+\alpha$, i.e., for $\beta_0 = \Phi(p+\alpha)$, the right-hand side is given by $\frac{ph}{\psi'(\beta_0)\left(1 - e^{-\Phi(p)h}\right)} = \frac{ph\,\Phi'(p+\alpha)}{1 - e^{-\Phi(p)h}}$).
• For some $\{k^*, \hat{k}\} \subset \mathbb{R}_+$ and then for every $\alpha > 0$ and $\beta \in \overline{\mathbb{C}^{\to}}$:
$\kappa^*(\alpha,\beta) = k^*\left(1 - e^{-(\beta+\Phi(\alpha))h}\right)$
and
$\hat{\kappa}(\alpha,\beta) = \hat{k}\,\frac{\alpha-\psi(\beta)}{1 - e^{(\beta-\Phi(\alpha))h}}$
(the latter whenever $\alpha \neq \psi(\beta)$; for the unique $\beta_0 > 0$ such that $\psi(\beta_0) = \alpha$, i.e., for $\beta_0 = \Phi(\alpha)$, the right-hand side is given by $\hat{k}\,\psi'(\beta_0)/h = \frac{\hat{k}}{h\,\Phi'(\alpha)}$).
☐
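As an illustration of the first Wiener-Hopf factor (and of the geometric law of $\bar{X}_{e_p}/h$ underlying it), the following sketch — not part of the original text, with an arbitrarily chosen Lévy measure — estimates $P(\bar{X}_{e_p} \geq h) = e^{-\Phi(p)h}$ by Monte Carlo, computing $\Phi(p)$ as the largest root of $\psi(\beta) = p$ by bisection:

```python
import math, random

random.seed(1)
h = 1.0
lam = {1: 1.0, -1: 0.6, -2: 0.4}   # Lévy measure: masses at h, -h, -2h (arbitrary example)
lam_R = sum(lam.values())          # total jump rate lambda(R)

def psi(b):
    """Laplace exponent psi(beta) = int (e^{beta*y} - 1) lambda(dy)."""
    return sum(m * (math.exp(b * k * h) - 1.0) for k, m in lam.items())

def Phi(p, tol=1e-12):
    """Largest root of psi(beta) = p; psi is convex with psi(0) = 0, so bisection works."""
    lo, hi = 0.0, 1.0
    while psi(hi) < p:
        lo, hi = hi, 2 * hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

p_rate = 1.0                       # parameter of the independent exponential time e_p
N, hits = 40000, 0
sizes = list(lam); probs = [lam[k] / lam_R for k in sizes]
for _ in range(N):
    T = random.expovariate(p_rate)          # sample e_p
    t, x, xmax = 0.0, 0, 0
    while True:
        t += random.expovariate(lam_R)      # next jump time of the compound Poisson path
        if t > T:
            break
        x += random.choices(sizes, probs)[0]
        xmax = max(xmax, x)
    hits += (xmax >= 1)

print(hits / N, math.exp(-Phi(p_rate) * h))  # empirical vs theoretical P(Xbar_{e_p} >= h)
```

More generally, $P(\bar{X}_{e_p} \geq kh) = e^{-\Phi(p)hk}$, which is the geometric law used in the derivation of (4).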
As a consequence of Theorem 3-1, we obtain the formula for the Laplace transform of the running infimum evaluated at an independent exponentially distributed random time:
$E\left[e^{\beta \underline{X}_{e_p}}\right] = \frac{p}{p-\psi(\beta)}\,\frac{1 - e^{(\beta-\Phi(p))h}}{1 - e^{-\Phi(p)h}} \quad (\beta \in \mathbb{R}_+\backslash\{\Phi(p)\})$
(and $E[e^{\Phi(p)\underline{X}_{e_p}}] = \frac{p\,\Phi'(p)h}{1 - e^{-\Phi(p)h}}$). In particular, if $\psi'(0+) > 0$, then letting $p \downarrow 0$ in (8), one obtains by the DCT:
$E\left[e^{\beta \underline{X}_\infty}\right] = \frac{e^{\beta h} - 1}{\Phi'(0+)\,h\,\psi(\beta)} \quad (\beta > 0).$
We obtain next from Theorem 3-2 (recall also Remark 4-1), by letting $\alpha \downarrow 0$ therein, the Laplace exponent $\phi(\beta) := -\log E[e^{-\beta \hat{H}_1}\mathbb{1}(\hat{H}_1 < \infty)]$ of the descending ladder heights process $\hat{H}$:
$\phi(\beta)\left(e^{\beta h} - e^{\Phi(0)h}\right) = \psi(\beta), \quad \beta \in \mathbb{R}_+,$
where we have set for simplicity $k ^ = e − Φ ( 0 ) h$, by insisting on a suitable choice of the local time at the minimum. This gives the following characterization of the class of Laplace exponents of the descending ladder heights processes of upwards skip-free Lévy chains (cf. (Hubalek and Kyprianou 2011, Theorem 1)):
Theorem 4.
Let $h \in (0,\infty)$, $\{\gamma, q\} \subset \mathbb{R}_+$, and $(\phi_k)_{k\in\mathbb{N}} \subset \mathbb{R}_+$, with $q + \sum_{k\in\mathbb{N}}\phi_k \in (0,\infty)$. Then:
There exists (in law) an upwards skip-free Lévy chain X with values in $\mathbb{Z}_h$ and with (i) γ being the killing rate of its strict ascending ladder heights process (see Remark 4-2), and (ii) $\phi(\beta) = q + \sum_{k=1}^\infty \phi_k\left(1 - e^{-\beta kh}\right)$, $\beta \in \mathbb{R}_+$, being the Laplace exponent of its descending ladder heights process.
if and only if the following conditions are satisfied:
• $γ q = 0$.
• Setting x equal to 1, when $γ = 0$, or to the unique solution of the equation:
$\gamma = (1 - 1/x)\left(\phi_1 + x\sum_{k\in\mathbb{N}}\phi_k\right)$
on the interval $(1,\infty)$, otherwise; and then defining $\lambda_1 := q + \sum_{k\in\mathbb{N}}\phi_k$, $\lambda_{-k} := x\phi_k - \phi_{k+1}$, $k \in \mathbb{N}$; it holds:
$\lambda_{-k} \geq 0, \quad k \in \mathbb{N}.$
Such an X is then unique (in law), is called the parent process, its Lévy measure is given by $\sum_{k\in\mathbb{N}}\lambda_{-k}\delta_{-kh} + \lambda_1\delta_h$, and $x = e^{\Phi(0)h}$.
Remark 5.
Condition Theorem 4-2 is actually quite explicit. When $γ = 0$ (equivalently, the parent process does not drift to $− ∞$), it simply says that the sequence $( ϕ k ) k ∈ N$ should be nonincreasing. In the case when the parent process X drifts to $− ∞$ (equivalently, $γ > 0$ (hence $q = 0$)), we might choose $x ∈ ( 1 , ∞ )$ first, then $( ϕ k ) k ≥ 1$, and finally γ.
Proof.
Please note that with $\phi(\beta) =: q + \sum_{k=1}^\infty \phi_k\left(1 - e^{-\beta kh}\right)$, $x := e^{\Phi(0)h}$, and comparing the respective Fourier components of the left- and the right-hand side, (10) is equivalent to:
• $q + \sum_{k\in\mathbb{N}}\phi_k = \lambda(\{h\})$.
• $x\left(q + \sum_{k\in\mathbb{N}}\phi_k\right) + \phi_1 = \lambda(\mathbb{R})$.
• $x\phi_k - \phi_{k+1} = \lambda(\{-kh\})$, $k \in \mathbb{N}$.
Moreover, the killing rate of the strict ascending ladder heights process expresses as $\lambda(\mathbb{R})(1 - 1/x)$, whereas (1) and (3) together imply $q + x\sum_{k\in\mathbb{N}}\phi_k + \phi_1 = \lambda(\mathbb{R})$.
Necessity of the conditions. Remark that the strict ascending ladder heights and the descending ladder heights processes cannot simultaneously have a strictly positive killing rate. Everything else is trivial from the above (in particular, we obtain that such an X, when it exists, is unique, and has the stipulated Lévy measure and $Φ ( 0 )$).
Sufficiency of the conditions. The compound Poisson process X whose Lévy measure is given by $\lambda = \sum_{k\in\mathbb{N}}\lambda_{-k}\delta_{-kh} + \lambda_1\delta_h$ (and whose Laplace exponent we shall denote ψ; likewise the largest zero of ψ will be denoted $\Phi(0)$) constitutes an upwards skip-free Lévy chain. Moreover, since $x = 1$ unless $q = 0$, we obtain either way that $\phi(\beta)\left(e^{\beta h} - x\right) = \psi(\beta)$ with $\phi(\beta) := q + \sum_{k=1}^\infty \phi_k\left(1 - e^{-\beta kh}\right)$, $\beta \geq 0$. Substituting $\beta := (\log x)/h$ in this relation, we obtain at once that if $\gamma > 0$ (so $q = 0$), then X drifts to $-\infty$, $x = e^{\Phi(0)h}$, and hence $\gamma = (1 - e^{-\Phi(0)h})\lambda(\mathbb{R})$ is the killing rate of the strict ascending ladder heights process. On the other hand, when $\gamma = 0$, then $x = 1$, and a direct computation reveals $\psi'(0+) = h\lambda_1 - \sum_{k\in\mathbb{N}}kh(\phi_k - \phi_{k+1}) = h(\lambda_1 - \sum_{k\in\mathbb{N}}\phi_k) = hq \geq 0$. So X does not drift to $-\infty$, and $\Phi(0) = 0$, whence (again) $x = e^{\Phi(0)h}$. Also in this case, the killing rate of the strict ascending ladder heights process is $0 = (1 - 1/x)\lambda(\mathbb{R})$. Finally, and regardless of whether γ is strictly positive or not, comparing with (10), we conclude that ϕ is indeed the Laplace exponent of the descending ladder heights process of X. ☐
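The construction in the proof of Theorem 4 is readily mechanized. The following sketch (an illustration with arbitrarily chosen, finitely supported data $(q, \gamma, (\phi_k))$; not part of the original text) computes $x$, $\lambda_1$ and $(\lambda_{-k})$ as in the theorem (solving for $x$ by bisection when $\gamma > 0$) and verifies the identity $\phi(\beta)(e^{\beta h} - x) = \psi(\beta)$ numerically:

```python
import math

h = 1.0

def parent_levy(q, gamma, phis):
    """Given killing rates q, gamma (with gamma*q == 0) and a finite (phi_k),
    return (x, lambda_1, [lambda_{-k}]) as in Theorem 4."""
    S = sum(phis)
    if gamma == 0:
        x = 1.0
    else:  # solve gamma = (1 - 1/x)(phi_1 + x*S) on (1, infinity); LHS is increasing in x
        lo, hi = 1.0, 2.0
        while (1 - 1 / hi) * (phis[0] + hi * S) < gamma:
            hi *= 2
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if (1 - 1 / mid) * (phis[0] + mid * S) < gamma:
                lo = mid
            else:
                hi = mid
        x = 0.5 * (lo + hi)
    lam1 = q + S
    lam_down = [x * phis[k] - (phis[k + 1] if k + 1 < len(phis) else 0.0)
                for k in range(len(phis))]
    assert all(l >= -1e-12 for l in lam_down)  # the nonnegativity condition of Theorem 4
    return x, lam1, lam_down

def check(q, gamma, phis, beta):
    """Residual of phi(beta)*(e^{beta h} - x) - psi(beta); should vanish."""
    x, lam1, lam_down = parent_levy(q, gamma, phis)
    phi = q + sum(p * (1 - math.exp(-beta * (k + 1) * h)) for k, p in enumerate(phis))
    psi = lam1 * (math.exp(beta * h) - 1) + sum(
        l * (math.exp(-beta * (k + 1) * h) - 1) for k, l in enumerate(lam_down))
    return phi * (math.exp(beta * h) - x) - psi

print(check(0.5, 0.0, [0.4, 0.3, 0.2], 0.7))  # gamma = 0 case (x = 1)
print(check(0.0, 1.0, [0.4, 0.3, 0.2], 0.7))  # gamma > 0 case (q = 0, x > 1)
```

In the first case the sequence is nonincreasing (cf. Remark 5), so the condition $\lambda_{-k}\geq 0$ holds automatically; both residuals are zero up to floating-point error.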

## 4. Theory of Scale Functions

Again the reader is invited to compare the exposition of the following section with that of (Bertoin 1996, sct. VII.2) and (Kyprianou 2006, sct. 8.2), which deal with the spectrally negative case.

#### 4.1. The Scale Function W

It will be convenient to consider in this subsection the times at which X attains a new maximum. We let $D_1$, $D_2$ and so on denote the depths (possibly zero, or infinity) of the excursions below these new maxima. For $k \in \mathbb{N}$, it is agreed that $D_k = +\infty$ if the process X never reaches the level $(k-1)h$. Then it is clear that for $y \in \mathbb{Z}_h^+$, $x \geq 0$ (cf. (Bühlmann 1970, p. 137, para. 6.2.4(a)) (Doney 2007, sct. 9.3)):
$P(\underline{X}_{T_y} \geq -x) = P\left(D_1 \leq x, D_2 \leq x+h, \ldots, D_{y/h} \leq x+y-h\right) = P(D_1 \leq x)\,P(D_1 \leq x+h)\cdots P(D_1 \leq x+y-h) = \frac{\prod_{r=1}^{\lfloor (y+x)/h\rfloor}P(D_1 \leq (r-1)h)}{\prod_{r=1}^{\lfloor x/h\rfloor}P(D_1 \leq (r-1)h)} = \frac{W(x)}{W(x+y)},$
where we have introduced (up to a multiplicative constant) the scale function:
$W(x) := 1\Big/\prod_{r=1}^{\lfloor x/h\rfloor}P(D_1 \leq (r-1)h) \quad (x \geq 0).$
(When convenient, we extend W by 0 on $( − ∞ , 0 )$.)
Remark 6.
If needed, we can of course express $P(D_1 \leq hk)$, $k \in \mathbb{N}_0$, in terms of the usual excursions away from the maximum. Thus, let $\tilde{D}_1$ be the depth of the first excursion away from the current maximum. By the time the process attains a new maximum (that is to say, h), conditionally on this event, it will make a total of N departures away from the maximum, where (with $J_1$ the first jump time of X, $p := \lambda(\{h\})/\lambda(\mathbb{R})$, $\tilde{p} := P(X_{J_1} = h \mid T_h < \infty) = p/P(T_h < \infty)$) $N \sim \mathrm{geom}(\tilde{p})$. So, denoting $\tilde{\theta}_k := P(\tilde{D}_1 \leq hk)$, one has $P(D_1 \leq hk) = P(T_h < \infty)\sum_{l=0}^\infty \tilde{p}(1-\tilde{p})^l\tilde{\theta}_k^l = \frac{p}{1 - (1 - e^{\Phi(0)h}p)\tilde{\theta}_k}$, $k \in \mathbb{N}_0$.
The following theorem characterizes the scale function in terms of its Laplace transform.
Theorem 5 (The scale function).
For every $y \in \mathbb{Z}_h^+$ and $x \geq 0$ one has:
$P(\underline{X}_{T_y} \geq -x) = \frac{W(x)}{W(x+y)}$
and $W : [0,\infty) \to [0,\infty)$ is (up to a multiplicative constant) the unique right-continuous and piecewise continuous function of exponential order with Laplace transform:
$\hat{W}(\beta) = \int_0^\infty e^{-\beta x}W(x)\,dx = \frac{e^{\beta h} - 1}{\beta h\,\psi(\beta)} \quad (\beta > \Phi(0)).$
Proof.
(For uniqueness see e.g., (Engelberg 2005, p. 14, Theorem 10). It is clear that W is of exponential order, simply from the definition (11).)
Suppose first that X tends to $+\infty$. Then, letting $y \to \infty$ in (12) above, we obtain $P(-\underline{X}_\infty \leq x) = W(x)/W(+\infty)$. Since the limit of the left-hand side exists by the DCT and is finite and non-zero at least for all large enough x, so does the limit of the right-hand side, and $W(+\infty) \in (0,\infty)$.
Therefore $W(x) = W(+\infty)P(-\underline{X}_\infty \leq x)$ and hence the Laplace-Stieltjes transform of W is given by (9)—here we consider W as being extended by 0 on $(-\infty,0)$:
$\int_{[0,\infty)} e^{-\beta x}\,dW(x) = W(+\infty)\,\frac{e^{\beta h} - 1}{\Phi'(0+)\,h\,\psi(\beta)} \quad (\beta > 0).$
Since (integration by parts (Revuz and Yor 1999, chp. 0, Proposition 4.5)) $\int_{[0,\infty)} e^{-\beta x}\,dW(x) = \beta\int_{(0,\infty)} e^{-\beta x}W(x)\,dx$,
$\int_0^\infty e^{-\beta x}W(x)\,dx = \frac{W(+\infty)}{\Phi'(0+)}\,\frac{e^{\beta h} - 1}{\beta h\,\psi(\beta)} \quad (\beta > 0).$
Suppose now that X oscillates. Via Remark 3, approximate X by the processes $X^\epsilon$, $\epsilon > 0$. In (14), fix β, carry over everything except for the constant $W(+\infty)/\Phi'(0+)$, divide both sides by $W(0)$, and then apply this equality to $X^\epsilon$. Then on the left-hand side, the quantities pertaining to $X^\epsilon$ will converge to the ones for the process X as $\epsilon \downarrow 0$ by the MCT. Indeed, for $y \in \mathbb{Z}_h^+$, $P(\underline{X}_{T_y} = 0) = W(0)/W(y)$ and (in the obvious notation): $1/P(\underline{X^\epsilon}_{T^\epsilon_y} = 0) \uparrow 1/P(\underline{X}_{T_y} = 0) = W(y)/W(0)$, since $X^\epsilon \downarrow X$, uniformly on bounded time sets, almost surely as $\epsilon \downarrow 0$. (It is enough to have convergence for $y \in \mathbb{Z}_h^+$, as this implies convergence for all $y \geq 0$, W being the right-continuous piecewise constant extension of $W|_{\mathbb{Z}_h^+}$.) Thus we obtain in the oscillating case, for some $\alpha \in (0,\infty)$ which is the limit of the right-hand side as $\epsilon \downarrow 0$:
$\int_0^\infty e^{-\beta x}W(x)\,dx = \alpha\,\frac{e^{\beta h} - 1}{\beta h\,\psi(\beta)} \quad (\beta > 0).$
Finally, we are left with the case when X drifts to $− ∞$. We treat this case by a change of measure (see Proposition 1 and the paragraph immediately preceding it). To this end assume, provisionally, that X is already the coordinate process on the canonical filtered space $D h$. Then we calculate by Proposition 2-2 (for $y ∈ Z h +$, $x ≥ 0$):
$P(\underline{X}_{T_y} \geq -x) = P(T_y < \infty)\,P(\underline{X}_{T_y} \geq -x \mid T_y < \infty) = e^{-\Phi(0)y}\,P\left(\underline{X^{T_y}}_\infty \geq -x \mid T_y < \infty\right) = e^{-\Phi(0)y}\,P^\natural\left(\underline{X^{T_y}}_\infty \geq -x\right) = e^{-\Phi(0)y}\,P^\natural\left(\underline{X}_{T_y} \geq -x\right) = e^{-\Phi(0)y}\,W^\natural(x)/W^\natural(x+y),$
where the third equality uses the fact that $(\omega \mapsto \inf\{\omega(s) : s \in [0,\infty)\}) : (D_h, \mathcal{F}) \to ([-\infty,\infty), \mathcal{B}([-\infty,\infty)))$ is a measurable transformation. Here $W^\natural$ is the scale function corresponding to X under the measure $P^\natural$, with Laplace transform:
$\int_0^\infty e^{-\beta x}W^\natural(x)\,dx = \frac{e^{\beta h} - 1}{\beta h\,\psi(\Phi(0)+\beta)} \quad (\beta > 0).$
Please note that the equality $P ( X ̲ T y ≥ − x ) = e − Φ ( 0 ) y W ♮ ( x ) / W ♮ ( x + y )$ remains true if we revert back to our original X (no longer assumed to be in its canonical guise). This is so because we can always go from X to its canonical counter-part by taking an image measure. Then the law of the process, hence the Laplace exponent and the probability $P ( X ̲ T y ≥ − x )$ do not change in this transformation.
Now define $\tilde{W}(x) := e^{\Phi(0)\lfloor 1+x/h\rfloor h}\,W^\natural(x)$ ($x \geq 0$). Then $\tilde{W}$ is the right-continuous piecewise-constant extension of $\tilde{W}|_{\mathbb{Z}_h^+}$. Moreover, for all $y \in \mathbb{Z}_h^+$ and $x \geq 0$, (12) obtains with W replaced by $\tilde{W}$. Plugging $x = 0$ into (12), $\tilde{W}|_{\mathbb{Z}_h}$ and $W|_{\mathbb{Z}_h}$ coincide up to a multiplicative constant, hence $\tilde{W}$ and W do as well. Moreover, for all $\beta > \Phi(0)$, by the MCT:
$\begin{aligned}\int_0^\infty e^{-\beta x}\tilde{W}(x)\,dx &= e^{\Phi(0)h}\sum_{k=0}^\infty \int_{kh}^{(k+1)h}e^{-\beta x}e^{\Phi(0)kh}W^\natural(kh)\,dx = e^{\Phi(0)h}\sum_{k=0}^\infty \frac{1}{\beta}e^{-\beta kh}\left(1 - e^{-\beta h}\right)e^{\Phi(0)kh}W^\natural(kh)\\ &= e^{\Phi(0)h}\,\frac{\beta-\Phi(0)}{\beta}\,\frac{1 - e^{-\beta h}}{1 - e^{-(\beta-\Phi(0))h}}\int_0^\infty e^{-(\beta-\Phi(0))x}W^\natural(x)\,dx\\ &= e^{\Phi(0)h}\,\frac{\beta-\Phi(0)}{\beta}\,\frac{1 - e^{-\beta h}}{1 - e^{-(\beta-\Phi(0))h}}\,\frac{e^{(\beta-\Phi(0))h} - 1}{(\beta-\Phi(0))\,h\,\psi(\beta)} = \frac{e^{\beta h} - 1}{\beta h\,\psi(\beta)}.\end{aligned}$
☐
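Identity (12) can be checked by simulation. For the skip-free chain $\lambda = p\delta_1 + (1-p)\delta_{-1}$ (see Section 4.4), W has a closed form and $P(\underline{X}_{T_y} \geq -x) = W(x)/W(x+y)$ reduces to the classical gambler's-ruin probability; the following sketch (an illustration, not part of the original text) compares it with a Monte Carlo estimate — only the embedded random walk matters here, the holding times being irrelevant when $q = 0$:

```python
import random

random.seed(2)
h, p = 1, 0.3                 # chain jumps +1 w.p. p and -1 w.p. 1-p (skip-free both ways)

def W(k):
    """Closed-form scale function for this chain (p != 1/2), normalized as in (13)."""
    r = (1 - p) / p
    return (r ** (k + 1) - 1) / (1 - 2 * p)

x, y = 2, 3
exact = W(x) / W(x + y)       # P(Xlower_{T_y} >= -x): reach y before going below -x

N, hit = 60000, 0
for _ in range(N):
    s = 0
    while -x <= s < y:        # stop at s = y (success) or s = -x-1 (ruin past -x)
        s += 1 if random.random() < p else -1
    hit += (s == y)
print(hit / N, exact)
```

Here $W(0) = 1/(h\lambda(\{h\})) = 1/p$ in agreement with Proposition 4 (taking $\lambda(\mathbb{R}) = 1$).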
Remark 7.
Henceforth the normalization of the scale function W will be understood so as to enforce the validity of (13).
Proposition 4.
$W(0) = 1/(h\lambda(\{h\}))$, and $W(+\infty) = 1/\psi'(0+)$ if $\Phi(0) = 0$. If $\Phi(0) > 0$, then $W(+\infty) = +\infty$.
Proof.
Integration by parts and the DCT yield $W ( 0 ) = lim β → ∞ β W ^ ( β )$. (13) and another application of the DCT then show that $W ( 0 ) = 1 / ( h λ ( { h } ) )$. Similarly, integration by parts and the MCT give the identity $W ( + ∞ ) = lim β ↓ 0 β W ^ ( β )$. The conclusion $W ( + ∞ ) = 1 / ψ ′ ( 0 + )$ is then immediate from (13) when $Φ ( 0 ) = 0$. If $Φ ( 0 ) > 0$, then the right-hand side of (13) tends to infinity as $β ↓ Φ ( 0 )$ and thus, by the MCT, necessarily $W ( + ∞ ) = + ∞$. ☐

#### 4.2. The Scale Functions $W ( q )$, $q ≥ 0$

Definition 3.
For $q \geq 0$, let $W^{(q)}(x) := e^{\Phi(q)\lfloor 1+x/h\rfloor h}\,W_{\Phi(q)}(x)$ ($x \geq 0$), where $W_c$ plays the role of W, but for the process $(X, P_c)$ ($c \geq 0$; see Proposition 1). Please note that $W^{(0)} = W$. When convenient we extend $W^{(q)}$ by 0 on $(-\infty,0)$.
Theorem 6.
For each $q \geq 0$, $W^{(q)} : [0,\infty) \to [0,\infty)$ is the unique right-continuous and piecewise continuous function of exponential order with Laplace transform:
$\widehat{W^{(q)}}(\beta) = \int_0^\infty e^{-\beta x}W^{(q)}(x)\,dx = \frac{e^{\beta h} - 1}{\beta h\,(\psi(\beta) - q)} \quad (\beta > \Phi(q)).$
Moreover, for all $y \in \mathbb{Z}_h^+$ and $x \geq 0$:
$E\left[e^{-qT_y}\mathbb{1}_{\{\underline{X}_{T_y} \geq -x\}}\right] = \frac{W^{(q)}(x)}{W^{(q)}(x+y)}.$
Proof.
The claim regarding the Laplace transform follows from Proposition 1, Theorem 5 and Definition 3 as it did in the case of the scale function W (cf. final paragraph of the proof of Theorem 5). For the second assertion, let us calculate (moving onto the canonical space $D h$ as usual, using Proposition 1 and noting that $X T y = y$ on ${ T y < ∞ }$):
$E\left[e^{-qT_y}\mathbb{1}_{\{\underline{X}_{T_y} \geq -x\}}\right] = E\left[e^{\Phi(q)X_{T_y} - qT_y}\mathbb{1}_{\{\underline{X}_{T_y} \geq -x\}}\right]e^{-\Phi(q)y} = e^{-\Phi(q)y}\,P^{\Phi(q)}\left(\underline{X}_{T_y} \geq -x\right) = e^{-\Phi(q)y}\,\frac{W_{\Phi(q)}(x)}{W_{\Phi(q)}(x+y)} = \frac{W^{(q)}(x)}{W^{(q)}(x+y)}.$
☐
Proposition 5.
For all $q > 0$: $W^{(q)}(0) = 1/(h\lambda(\{h\}))$ and $W^{(q)}(+\infty) = +\infty$.
Proof.
As in Proposition 4, $W^{(q)}(0) = \lim_{\beta\to\infty}\beta\,\widehat{W^{(q)}}(\beta) = 1/(h\lambda(\{h\}))$. Since $\Phi(q) > 0$, $W^{(q)}(+\infty) = +\infty$ also follows at once from the expression for $\widehat{W^{(q)}}$. ☐
Moreover:
Proposition 6.
For $q \geq 0$:
• If $\Phi(q) > 0$ or $\psi'(0+) > 0$, then $\lim_{x\to\infty}W^{(q)}(x)\,e^{-\Phi(q)\lfloor 1+x/h\rfloor h} = 1/\psi'(\Phi(q))$.
• If $\Phi(q) = \psi'(0+) = 0$ (hence $q = 0$), then $W^{(q)}(+\infty) = +\infty$, but $\limsup_{x\to\infty}W^{(q)}(x)/x < \infty$. Indeed, $\lim_{x\to\infty}W^{(q)}(x)/x = 2/m_2$, if $m_2 := \int y^2\,\lambda(dy) < \infty$, and $\lim_{x\to\infty}W^{(q)}(x)/x = 0$, if $m_2 = \infty$.
Proof.
The first claim is immediate from Proposition 4, Definition 3 and Proposition 1. To handle the second claim, let us calculate, for the Laplace transform $\widehat{dW}$ of the measure $dW$, the quantity (using integration by parts, Theorem 5 and the fact that (since $\psi'(0+) = 0$) $\int y\,\lambda(dy) = 0$):
$\lim_{\beta\downarrow 0}\beta\,\widehat{dW}(\beta) = \lim_{\beta\downarrow 0}\frac{\beta^2}{\psi(\beta)} = \frac{2}{m_2} \in [0,+\infty).$
For:
$\lim_{\beta\downarrow 0}\int\left(e^{\beta y} - 1\right)\lambda(dy)/\beta^2 = \lim_{\beta\downarrow 0}\int\frac{e^{\beta y} - \beta y - 1}{\beta^2y^2}\,y^2\,\lambda(dy) = \frac{m_2}{2},$
by the MCT, since $\left(u \mapsto \frac{e^{-u}+u-1}{u^2}\right)$ is nonincreasing on $(0,\infty)$ (the latter can be checked by comparing derivatives). The claim then follows by the Karamata Tauberian Theorem (Bingham et al. 1987, p. 37, Theorem 1.7.1 with ρ = 1). ☐
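For a concrete instance of the oscillating case (a sketch, not part of the original text): the symmetric chain $\lambda = \frac{1}{2}\delta_1 + \frac{1}{2}\delta_{-1}$ with $h = 1$ has $\psi'(0+) = 0$ and $m_2 = 1$, and the recursion for W derived in Section 4.4 below indeed exhibits $W(x)/x \to 2 = 2/m_2$:

```python
# Symmetric skip-free chain: lambda = 0.5*delta_1 + 0.5*delta_{-1}; gamma = 1, p = q1 = 1/2.
p = 0.5
W = [1 / p]                          # W(0) = 1/(gamma*p), with gamma = lambda(R) = 1
for k in range(2000):
    # recursion p*W(k+1) = W(k) - q1*W(k-1) (case q = 0), with W = 0 on the negatives
    prev = W[k - 1] if k >= 1 else 0.0
    W.append((W[k] - 0.5 * prev) / p)
print(W[:4], W[2000] / 2000)         # prints [2.0, 4.0, 6.0, 8.0] 2.001
```

The recursion reproduces the closed form $W(k) = 2(1+k)$ of Section 4.4, whence $W(k)/k \to 2$.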

#### 4.3. The Functions $Z ( q )$, $q ≥ 0$

Definition 4.
For each $q \geq 0$, let $Z^{(q)}(x) := 1 + q\int_0^{\lfloor x/h\rfloor h}W^{(q)}(z)\,dz$ ($x \geq 0$). When convenient we extend these functions by 1 on $(-\infty,0)$.
Definition 5.
For $x \geq 0$, let $T_x^- := \inf\{t \geq 0 : X_t < -x\}$.
Proposition 7.
In the sense of measures on the real line, for every $q > 0$:
$P_{-\underline{X}_{e_q}} = \frac{qh}{e^{\Phi(q)h} - 1}\,dW^{(q)} - q\,W^{(q)}(\cdot - h)\cdot\Delta,$
where $\Delta := h\sum_{k=1}^\infty \delta_{kh}$ is the normalized counting measure on $\mathbb{Z}_h^{++} \subset \mathbb{R}$, $P_{-\underline{X}_{e_q}}$ is the law of $-\underline{X}_{e_q}$ under $P$, and $\left(W^{(q)}(\cdot - h)\cdot\Delta\right)(A) = \int_A W^{(q)}(y - h)\,\Delta(dy)$ for Borel subsets A of $\mathbb{R}$.
Theorem 7.
For each $x \geq 0$,
$E\left[e^{-qT_x^-}\mathbb{1}_{\{T_x^- < \infty\}}\right] = Z^{(q)}(x) - \frac{qh}{e^{\Phi(q)h} - 1}\,W^{(q)}(x)$
when $q > 0$, and $P(T_x^- < \infty) = 1 - W(x)/W(+\infty)$. The Laplace transform of $Z^{(q)}$, $q \geq 0$, is given by:
$\widehat{Z^{(q)}}(\beta) = \int_0^\infty Z^{(q)}(x)e^{-\beta x}\,dx = \frac{1}{\beta}\left(1 + \frac{q}{\psi(\beta) - q}\right) \quad (\beta > \Phi(q)).$
Proofs of Proposition 7 and Theorem 7. First, with regard to the Laplace transform of $Z ( q )$, we have the following derivation (using integration by parts, for every $β > Φ ( q )$):
$\int_0^\infty Z^{(q)}(x)e^{-\beta x}\,dx = \int_{[0,\infty)}\frac{e^{-\beta x}}{\beta}\,dZ^{(q)}(x) = \frac{1}{\beta}\left(1 + q\sum_{k=1}^\infty e^{-\beta kh}W^{(q)}((k-1)h)\,h\right) = \frac{1}{\beta}\left(1 + q\,e^{-\beta h}\,\frac{\beta h}{1 - e^{-\beta h}}\sum_{k=1}^\infty \frac{1 - e^{-\beta h}}{\beta}\,e^{-\beta(k-1)h}W^{(q)}((k-1)h)\right) = \frac{1}{\beta}\left(1 + q\,\frac{\beta h}{e^{\beta h} - 1}\,\widehat{W^{(q)}}(\beta)\right) = \frac{1}{\beta}\left(1 + \frac{q}{\psi(\beta) - q}\right).$
Next, to prove Proposition 7, note that it will be sufficient to check the equality of the Laplace transforms (Bhattacharya and Waymire 2007, p. 109, Theorem 8.4). By what we have just shown, (8), integration by parts, and Theorem 6, we then only need to establish, for $β > Φ ( q )$:
$\frac{q}{\psi(\beta) - q}\,\frac{e^{(\beta-\Phi(q))h} - 1}{1 - e^{-\Phi(q)h}} = \frac{qh}{e^{\Phi(q)h} - 1}\,\frac{\beta\left(e^{\beta h} - 1\right)}{(\psi(\beta) - q)\,\beta h} - \frac{q}{\psi(\beta) - q},$
which is clear.
Finally, let $x \in \mathbb{Z}_h^+$. For $q > 0$, evaluate the measures in Proposition 7 at $[0,x]$, to obtain:
$E\left[e^{-qT_x^-}\mathbb{1}_{\{T_x^- < \infty\}}\right] = P(e_q \geq T_x^-) = P(\underline{X}_{e_q} < -x) = 1 - P(\underline{X}_{e_q} \geq -x) = 1 + q\int_0^x W^{(q)}(z)\,dz - \frac{qh}{e^{\Phi(q)h} - 1}\,W^{(q)}(x),$
whence the claim follows. On the other hand, when $q = 0$, the following calculation is straightforward: $P(T_x^- < \infty) = P(\underline{X}_\infty < -x) = 1 - P(\underline{X}_\infty \geq -x) = 1 - W(x)/W(+\infty)$ (we have passed to the limit $y \to \infty$ in (12) and used the DCT on the left-hand side of this equality). ☐
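Theorem 7 lends itself to a simulation check. In the following sketch (an illustration, not part of the original text) we take the skip-free chain $\lambda = p\delta_1 + (1-p)\delta_{-1}$ with $h = 1$, $\gamma = \lambda(\mathbb{R}) = 1$ and $p < 1/2$ (so that $T_x^- < \infty$ a.s.), compute $\Phi(q)$ by bisection and $W^{(q)}$, $Z^{(q)}$ by the recursions of Section 4.4, and compare the right-hand side of the first display of Theorem 7 with a Monte Carlo estimate of $E[e^{-qT_x^-}]$, using that, given the number M of jumps up to $T_x^-$, $E[e^{-qT_x^-} \mid M] = (\gamma/(\gamma+q))^M$:

```python
import math, random

random.seed(3)
p, q = 0.4, 0.2                    # up-probability and discount rate; h = 1, gamma = 1

def psi(b):
    return p * (math.exp(b) - 1) + (1 - p) * (math.exp(-b) - 1)

# Phi(q): largest root of psi(beta) = q (bisection; psi is convex with psi(0) = 0)
lo, hi = 0.0, 1.0
while psi(hi) < q:
    hi *= 2
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if psi(mid) < q else (lo, mid)
Phi_q = 0.5 * (lo + hi)

# W^{(q)} via the recursion of Section 4.4, then Z^{(q)} by summation
x = 3
W = [1 / p]                        # W^{(q)}(0) = 1/(gamma*p)
for k in range(x):
    prev = W[k - 1] if k >= 1 else 0.0
    W.append(((1 + q) * W[k] - (1 - p) * prev) / p)
Z = 1 + q * sum(W[:x])             # Z^{(q)}(x)
exact = Z - q / (math.exp(Phi_q) - 1) * W[x]

# Monte Carlo: holding times are Exp(1), so E[e^{-q T} | M jumps] = (1/(1+q))^M
N, acc = 40000, 0.0
for _ in range(N):
    s, m = 0, 0
    while s >= -x:                 # T_x^- occurs at the first jump taking the chain below -x
        s += 1 if random.random() < p else -1
        m += 1
    acc += (1 / (1 + q)) ** m
print(acc / N, exact)
```

The two printed numbers agree up to Monte Carlo error.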
Proposition 8.
Let $q \geq 0$, $x \geq 0$, $y \in \mathbb{Z}_h^+$. Then:
$E\left[e^{-qT_x^-}\mathbb{1}_{\{T_x^- < T_y\}}\right] = Z^{(q)}(x) - Z^{(q)}(x+y)\,\frac{W^{(q)}(x)}{W^{(q)}(x+y)}.$
Proof.
Observe that $\{T_x^- = T_y\} = \emptyset$, $P$-a.s. The case when $q = 0$ is immediate and indeed contained in Theorem 5, since, $P$-a.s., $\Omega\backslash\{T_x^- < T_y\} = \{T_x^- \geq T_y\} = \{\underline{X}_{T_y} \geq -x\}$. For $q > 0$ we observe that by the strong Markov property, Theorem 6 and Theorem 7:
$\begin{aligned}E\left[e^{-qT_x^-}\mathbb{1}_{\{T_x^- < T_y\}}\right] &= E\left[e^{-qT_x^-}\mathbb{1}_{\{T_x^- < \infty\}}\right] - E\left[e^{-qT_x^-}\mathbb{1}_{\{T_y < T_x^- < \infty\}}\right]\\ &= Z^{(q)}(x) - \frac{qh}{e^{\Phi(q)h} - 1}W^{(q)}(x) - E\left[e^{-qT_y}\mathbb{1}_{\{T_y < T_x^-\}}\right]E\left[e^{-qT_{x+y}^-}\mathbb{1}_{\{T_{x+y}^- < \infty\}}\right]\\ &= Z^{(q)}(x) - \frac{qh}{e^{\Phi(q)h} - 1}W^{(q)}(x) - \frac{W^{(q)}(x)}{W^{(q)}(x+y)}\left(Z^{(q)}(x+y) - \frac{qh}{e^{\Phi(q)h} - 1}W^{(q)}(x+y)\right)\\ &= Z^{(q)}(x) - Z^{(q)}(x+y)\,\frac{W^{(q)}(x)}{W^{(q)}(x+y)},\end{aligned}$
which completes the proof. ☐

#### 4.4. Calculating Scale Functions

In this subsection it will be assumed for notational convenience, but without loss of generality, that $h = 1$. We define:
$\gamma := \lambda(\mathbb{R}), \quad p := \lambda(\{1\})/\gamma, \quad q_k := \lambda(\{-k\})/\gamma, \quad k \geq 1.$
Fix $q \geq 0$. Then denote, provisionally, $e_{m,k} := E[e^{-qT_k}\mathbb{1}_{\{\underline{X}_{T_k} \geq -m\}}]$, and $e_k := e_{0,k}$, where $\{m,k\} \subset \mathbb{N}_0$, and note that, thanks to Theorem 6, $e_{m,k} = e_{m+k}/e_m$ for all $\{m,k\} \subset \mathbb{N}_0$. Now, $e_0 = 1$. Moreover, by the strong Markov property, for each $k \in \mathbb{N}_0$, by conditioning on $\mathcal{F}_{T_k}$ and then on $\mathcal{F}_J$, where J is the time of the first jump after $T_k$ (so that, conditionally on $T_k < \infty$, $J - T_k \sim \mathrm{Exp}(\gamma)$):
$\begin{aligned}e_{k+1} &= E\Big[e^{-qT_k}\mathbb{1}_{\{\underline{X}_{T_k} \geq 0\}}\,e^{-q(J-T_k)}\big(\mathbb{1}(\text{next jump after } T_k \text{ up}) + \mathbb{1}(\text{next jump after } T_k \text{ 1 down, then up 2 before down more than } k-1)\\ &\qquad + \cdots + \mathbb{1}(\text{next jump after } T_k\ k \text{ down, then up } k+1 \text{ before down more than } 0)\big)\,e^{-q(T_{k+1}-J)}\Big]\\ &= e_k\,\frac{\gamma}{\gamma+q}\left[p + q_1e_{k-1,2} + \cdots + q_ke_{0,k+1}\right] = e_k\,\frac{\gamma}{\gamma+q}\left[p + q_1\frac{e_{k+1}}{e_{k-1}} + \cdots + q_k\frac{e_{k+1}}{e_0}\right].\end{aligned}$
Upon division by $e_ke_{k+1}$, we obtain:
$W^{(q)}(k) = \frac{\gamma}{\gamma+q}\left[p\,W^{(q)}(k+1) + q_1W^{(q)}(k-1) + \cdots + q_kW^{(q)}(0)\right].$
Put another way, for all $k ∈ Z +$:
$p\,W^{(q)}(k+1) = \left(1 + \frac{q}{\gamma}\right)W^{(q)}(k) - \sum_{l=1}^k q_l\,W^{(q)}(k-l).$
Coupled with the initial condition $W^{(q)}(0) = 1/(\gamma p)$ (from Proposition 5 and Proposition 4), this is an explicit recursion scheme by which the values of $W^{(q)}$ may be computed (cf. (De Vylder and Goovaerts 1988, sct. 4, eq. (6) & (7)) (Dickson and Waters 1991, sct. 7, eq. (7.1) & (7.5)) (Marchal 2001, p. 255, Proposition 3.1)). We can also see the vector $W^{(q)} = (W^{(q)}(k))_{k\in\mathbb{Z}}$ as a suitable eigenvector of the transition matrix P associated with the jump chain of X. Namely, we have for all $k \in \mathbb{Z}_+$: $\left(1 + \frac{q}{\gamma}\right)W^{(q)}(k) = \sum_{l\in\mathbb{Z}}P_{kl}W^{(q)}(l)$.
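Recursion (20), together with this initial condition, is immediately implementable. The following sketch (an illustration, not part of the original text) computes $W^{(q)}$ for a general finitely supported Lévy measure and checks it against the closed form of the skip-free chain $\lambda = p\delta_1 + (1-p)\delta_{-1}$ discussed below:

```python
def scale_W(q, gamma, p, qs, n):
    """W^{(q)}(0..n) via p*W(k+1) = (1 + q/gamma)*W(k) - sum_{l=1}^k q_l*W(k-l),
    with W^{(q)}(0) = 1/(gamma*p); here h = 1, p = lambda({1})/gamma, qs[l-1] = q_l."""
    W = [1.0 / (gamma * p)]
    for k in range(n):
        s = sum(qs[l - 1] * W[k - l] for l in range(1, k + 1) if l - 1 < len(qs))
        W.append(((1 + q / gamma) * W[k] - s) / p)
    return W

# check against the skip-free closed form (lambda = p*delta_1 + (1-p)*delta_{-1}, q = 0)
p = 0.3
W = scale_W(0.0, 1.0, p, [1 - p], 10)
r = (1 - p) / p
closed = [((r ** (k + 1)) - 1) / (1 - 2 * p) for k in range(11)]
print(max(abs(a - b) for a, b in zip(W, closed)))  # agreement up to rounding
```

The same function computes $W^{(q)}$ for any $q \geq 0$; $Z^{(q)}$ then follows by the summation $Z^{(q)}(n) = 1 + q\sum_{k=0}^{n-1}W^{(q)}(k)$ noted below.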
Now, with regard to the function $Z^{(q)}$, its values can be computed directly from the values of $W^{(q)}$ by a straightforward summation, $Z^{(q)}(n) = 1 + q\sum_{k=0}^{n-1}W^{(q)}(k)$ ($n \in \mathbb{N}_0$). Alternatively, (20) yields immediately its analogue, valid for each $n \in \mathbb{Z}_+$ (make a summation $\sum_{k=0}^{n-1}$ and multiply by q, using Fubini's theorem for the last sum):
$p\,Z^{(q)}(n+1) - p - pq\,W^{(q)}(0) = \left(1 + \frac{q}{\gamma}\right)\left(Z^{(q)}(n) - 1\right) - \sum_{l=1}^{n-1}q_l\left(Z^{(q)}(n-l) - 1\right),$
i.e., for all $k ∈ Z +$:
$p\,Z^{(q)}(k+1) + 1 - p - \sum_{l=1}^{k-1}q_l = \left(1 + \frac{q}{\gamma}\right)Z^{(q)}(k) - \sum_{l=1}^{k-1}q_l\,Z^{(q)}(k-l).$
Again this can be seen as an eigenvalue problem. Namely, for all $k \in \mathbb{Z}_+$: $\left(1 + \frac{q}{\gamma}\right)Z^{(q)}(k) = \sum_{l\in\mathbb{Z}}P_{kl}Z^{(q)}(l)$. In summary:
Proposition 9 (Calculation of $W ( q )$ and $Z ( q )$).
Let $h = 1$ and $q \geq 0$. Seen as vectors, $W^{(q)} := (W^{(q)}(k))_{k\in\mathbb{Z}}$ and $Z^{(q)} := (Z^{(q)}(k))_{k\in\mathbb{Z}}$ satisfy, entry-by-entry (P being the transition matrix associated with the jump chain of X; $\lambda_q := 1 + q/\lambda(\mathbb{R})$):
$\left(PW^{(q)}\right)\big|_{\mathbb{Z}_+} = \lambda_q\,W^{(q)}\big|_{\mathbb{Z}_+} \quad\text{and}\quad \left(PZ^{(q)}\right)\big|_{\mathbb{Z}_+} = \lambda_q\,Z^{(q)}\big|_{\mathbb{Z}_+},$
i.e., (20) and (21) hold true for $k \in \mathbb{Z}_+$. Additionally, $W^{(q)}|_{\mathbb{Z}^-} = 0$ with $W^{(q)}(0) = 1/\lambda(\{1\})$, whereas $Z^{(q)}|_{\mathbb{Z}^-} = 1$.
An alternative form of recursions (20) and (21) is as follows:
Corollary 1.
We have for all $n \in \mathbb{N}_0$:
$W^{(q)}(n+1) = W^{(q)}(0) + \sum_{k=1}^{n+1}W^{(q)}(n+1-k)\,\frac{q + \lambda((-\infty,-k])}{\lambda(\{1\})}, \quad W^{(q)}(0) = 1/\lambda(\{1\}),$
and for $\widetilde{Z^{(q)}} := Z^{(q)} - 1$,
$\widetilde{Z^{(q)}}(n+1) = (n+1)\,\frac{q}{\lambda(\{1\})} + \sum_{k=1}^n\widetilde{Z^{(q)}}(n+1-k)\,\frac{q + \lambda((-\infty,-k])}{\lambda(\{1\})}, \quad \widetilde{Z^{(q)}}(0) = 0.$
Proof.
Recursion (23) obtains from (20) as follows (cf. also (Asmussen and Albrecher 2010, (proof of) Proposition XVI.1.2)):
$\begin{aligned}&p\,W^{(q)}(n+1) + \sum_{k=1}^n q_kW^{(q)}(n-k) = \lambda_q\,W^{(q)}(n), \quad \forall n\in\mathbb{N}_0\\ \Rightarrow\ &p\,W^{(q)}(k+1) + \sum_{m=0}^{k-1}q_{k-m}W^{(q)}(m) = \lambda_q\,W^{(q)}(k), \quad \forall k\in\mathbb{N}_0\\ \Rightarrow\ &\left(\text{making a summation } \textstyle\sum_{k=0}^n\right)\ p\sum_{k=0}^nW^{(q)}(k+1) + \sum_{k=0}^n\sum_{m=0}^{k-1}q_{k-m}W^{(q)}(m) = \lambda_q\sum_{k=0}^nW^{(q)}(k), \quad \forall n\in\mathbb{N}_0\\ \Rightarrow\ &(\text{Fubini})\ p\,W^{(q)}(n+1) + p\sum_{k=0}^nW^{(q)}(k) + \sum_{m=0}^{n-1}W^{(q)}(m)\sum_{k=m+1}^nq_{k-m} = p\,W^{(q)}(0) + \lambda_q\sum_{k=0}^nW^{(q)}(k), \quad \forall n\in\mathbb{N}_0\\ \Rightarrow\ &(\text{relabeling})\ p\,W^{(q)}(n+1) + p\sum_{k=0}^nW^{(q)}(k) + \sum_{k=0}^{n-1}W^{(q)}(k)\sum_{l=1}^{n-k}q_l = p\,W^{(q)}(0) + \left(1 + q/\gamma\right)\sum_{k=0}^nW^{(q)}(k), \quad \forall n\in\mathbb{N}_0\\ \Rightarrow\ &(\text{rearranging})\ W^{(q)}(n+1) = W^{(q)}(0) + \sum_{k=0}^nW^{(q)}(k)\,\frac{q + \gamma\sum_{l=n-k+1}^\infty q_l}{p\gamma}, \quad \forall n\in\mathbb{N}_0\\ \Rightarrow\ &(\text{relabeling})\ W^{(q)}(n+1) = W^{(q)}(0) + \sum_{k=1}^{n+1}W^{(q)}(n+1-k)\,\frac{q + \gamma\sum_{l=k}^\infty q_l}{p\gamma}, \quad \forall n\in\mathbb{N}_0.\end{aligned}$
Then (24) follows from (23) by another summation from $n = 0$ to $n = w − 1$, $w ∈ N 0$, say, and an interchange in the order of summation for the final sum. ☐
Now, given these explicit recursions for the calculation of the scale functions, the search for Laplace exponents of upwards skip-free Lévy chains (equivalently, of their descending ladder heights processes, cf. Theorem 4) that allow for an inversion of (16) in terms of one or another (more or less exotic) special function appears less important. This is in contrast to the spectrally negative case, see e.g., Hubalek and Kyprianou (2011).
That said, when the scale function(s) can be expressed in terms of elementary functions, this is certainly noteworthy. In particular, whenever the support of λ is bounded from below, (20) becomes a homogeneous linear difference equation with constant coefficients of some (finite) order, which can always be solved explicitly in terms of elementary functions (as long as one has control over the zeros of the characteristic polynomial). The minimal example of this situation is of course when X is also skip-free to the left. For simplicity let us only consider the case $q = 0$.
• Skip-free chain. Let $\lambda = p\delta_1 + (1-p)\delta_{-1}$. Then $W(k) = \frac{1}{1-2p}\left(\left(\frac{1-p}{p}\right)^{k+1} - 1\right)$, unless $p = 1/2$, in which case $W(k) = 2(1+k)$, $k \in \mathbb{N}_0$.
Indeed one can in general reverse-engineer the Lévy measure, so that the zeros of the characteristic polynomial of (20) (with $q = 0$) are known a priori, as follows. Choose $l ∈ N$ as being $− inf supp ( λ )$; $p ∈ ( 0 , 1 )$ as representing the probability of an up-jump; and then the numbers $λ 1$, …, $λ l + 1$ (real, or not), in such a way that the polynomial (in x) $p ( x − λ 1 ) ⋯ ( x − λ l + 1 )$ coincides with the characteristic polynomial of (20) (for $q = 0$):
$p\,x^{l+1} - x^l + q_1x^{l-1} + \cdots + q_l$
of some upwards skip-free Lévy chain, which can jump down by at most (and does jump down by) l units (this imposes some set of algebraic restrictions on the elements of ${ λ 1 , … , λ l + 1 }$). A priori one then has access to the zeros of the characteristic polynomial, and it remains to use the linear recursion in order to determine the first $l + 1$ values of W, thereby finding (via solving a set of linear equations of dimension $l + 1$) the sought-after particular solution of (20) (with $q = 0$), that is W. A particular parameter set for the zeros is depicted in Figure 1 and the following is a concrete example of this procedure.
• “Reverse-engineered” chain. Let $l = 2$, $p = \frac{1}{2}$ and, with reference to (the caption of) Figure 1, $\lambda_1 = 1$, $\lambda_2 = -\frac{1}{2}$, $\lambda_3 = \frac{3}{2}$. Then this corresponds (in the sense that has been made precise above) to an upwards skip-free Lévy chain with $\lambda/\lambda(\mathbb{R}) = \frac{1}{2}\delta_1 + \frac{1}{8}\delta_{-1} + \frac{3}{8}\delta_{-2}$ and with $W(n) = A + B\left(-\frac{1}{2}\right)^n + C\left(\frac{3}{2}\right)^n$, for all $n \in \mathbb{Z}_+$, for some $\{A,B,C\} \subset \mathbb{R}$. Choosing (say) $\lambda(\mathbb{R}) = 2$, we have from Proposition 4, $W(0) = 1$; and then from (20), $W(1) = 2$, $W(2) = \frac{15}{4}$. This renders $A = -\frac{4}{3}$, $B = \frac{1}{12}$, $C = \frac{9}{4}$.
An example in which the support of $λ$ is not bounded, but one can still obtain closed form expressions in terms of elementary functions, is the following.
• “Geometric” chain. Assume $p \in (0,1)$, take an $a \in (0,1)$, and let $q_l = (1-p)(1-a)a^{l-1}$ for $l \in \mathbb{N}$. Then (20) implies for $z(k) := W(k)/a^k$ that $pa\,z(k+1) = z(k) - \sum_{l=1}^k(1-p)(1-a)z(k-l)/a$, i.e., for the partial sums $s(k) := \sum_{l=0}^k z(l)$ (we write s so as not to clash with $\gamma = \lambda(\mathbb{R})$) the relation $pa^2s(k+1) - (a + pa^2)s(k) + (1 - p + pa)s(k-1) = 0$, a homogeneous second order linear difference equation with constant coefficients. Specialize now to $p = a = \frac{1}{2}$ and take $\gamma = \lambda(\mathbb{R}) = 2$. Solving the difference equation with the initial conditions that are got from the known values of $W(0)$ and $W(1)$ leads to $W(k) = 2\left(\frac{3}{2}\right)^k - 1$, $k \in \mathbb{Z}_+$. This example is further developed in Section 5, in the context of the modeling of the capital surplus process of an insurance company.
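Both of the last two closed forms can be confirmed against recursion (20); the following sketch (a numerical check, not part of the original text) does so for the “reverse-engineered” and “geometric” chains:

```python
# "Reverse-engineered" chain: lambda(R) = 2, lambda/lambda(R) = (1/2, 1/8, 3/8) on {1, -1, -2}
p, qs = 0.5, [1 / 8, 3 / 8]
W = [1.0 / (2 * p)]                       # W(0) = 1/(gamma*p), gamma = 2
for k in range(8):
    s = sum(qs[l - 1] * W[k - l] for l in range(1, min(k, len(qs)) + 1))
    W.append((W[k] - s) / p)              # recursion (20) with q = 0
closed = [-4 / 3 + (1 / 12) * (-0.5) ** n + (9 / 4) * 1.5 ** n for n in range(9)]
print(max(abs(a - b) for a, b in zip(W, closed)))

# "Geometric" chain: p = a = 1/2, q_l = (1/2)**(l+1); claim: W(k) = 2*(3/2)**k - 1
p2 = 0.5
V = [1.0 / (2 * p2)]
for k in range(8):
    s = sum((0.5 ** (l + 1)) * V[k - l] for l in range(1, k + 1))
    V.append((V[k] - s) / p2)
print(max(abs(a - (2 * 1.5 ** n - 1)) for n, a in enumerate(V)))
```

Both printed maxima are zero up to floating-point rounding, e.g. $W(2) = 15/4$ and, for the geometric chain, $W(2) = 2(3/2)^2 - 1 = 7/2$.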
Beyond this “geometric” case it seems difficult to come up with other Lévy measures for X that have unbounded support and for which W could be rendered explicit in terms of elementary functions.
We close this section with the following remark and corollary (cf. (Biffis and Kyprianou 2010, eq. (12)) and (Avram et al. 2004, Remark 5), respectively, for their spectrally negative analogues): for them we no longer assume that $h = 1$.
Remark 8.
Let L be the infinitesimal generator (Sato 1999, p. 208, Theorem 31.5) of X. It is seen from (22) that, for each $q ≥ 0$, $( ( L − q ) W ( q ) ) | R + = ( ( L − q ) Z ( q ) ) | R + = 0$.
Corollary 2.
For each $q ≥ 0$, the stopped processes Y and Z, defined by $Y t : = e − q ( t ∧ T 0 − ) W ( q ) ∘ X t ∧ T 0 −$ and $Z t : = e − q ( t ∧ T 0 − ) Z ( q ) ∘ X t ∧ T 0 −$, $t ≥ 0$, are nonnegative $P$-martingales with respect to the natural filtration $F X = ( F s X ) s ≥ 0$ of X.
Proof.
We argue for the case of the process Y, the justification for Z being similar. Let $( H k ) k ≥ 1$, $H 0 : = 0$, be the sequence of jump times of X (where, possibly by discarding a $P$-negligible set, we may insist on all of the $H k$, $k ∈ N 0$, being finite and increasing to $+ ∞$ as $k → ∞$). Let $0 ≤ s < t$, $A ∈ F s X$. By the MCT it will be sufficient to establish for ${ l , k } ⊂ N 0$, $l ≤ k$, that:
$E [ 𝟙 ( H l ≤ s < H l + 1 ) 𝟙 A Y t 𝟙 ( H k ≤ t < H k + 1 ) ] = E [ 𝟙 ( H l ≤ s < H l + 1 ) 𝟙 A Y s 𝟙 ( H k ≤ t < H k + 1 ) ] .$
On the left-hand (respectively right-hand) side of (25) we may now replace $Y t$ (respectively $Y s$) by $Y H k$ (respectively $Y H l$) and then harmlessly insist on $l < k$. Moreover, up to a completion, $F s X ⊂ σ ( ( H m ∧ s , X ( H m ∧ s ) ) m ≥ 0 )$. Therefore, by a $π$/$λ$-argument, we need only verify (25) for sets A of the form: $A = ⋂ m = 1 M { H m ∧ s ∈ A m } ∩ { X ( H m ∧ s ) ∈ B m }$, $A m$, $B m$ Borel subsets of $R$, $1 ≤ m ≤ M$, $M ∈ N$. Due to the presence of the indicator $𝟙 ( H l ≤ s < H l + 1 )$, we may also take, without loss of generality, $M = l$ and hence $A ∈ F H l X$. Furthermore, $H : = σ ( H l + 1 − H l , H k − H l , H k + 1 − H l )$ is independent of $F H l X ∨ σ ( Y H k )$ and then $E [ Y H k | F H l X ∨ H ] = E [ Y H k | F H l X ] = Y H l$, $P$-a.s. (as follows at once from (22) of Proposition 9), whence (25) obtains. ☐

## 5. Application to the Modeling of an Insurance Company’s Risk Process

Consider an insurance company receiving a steady but temporally somewhat uncertain stream of premia—the uncertainty stemming from fluctuations in the number of insurees and/or simply from the randomness of the times at which the premia are paid in—and which, independently, incurs random claims. For simplicity assume all the collected premia are of the same size $h > 0$ and that the claims incurred and the initial capital $x ≥ 0$ are all multiples of h. A possible, if somewhat simplistic, model for the aggregate capital process of such a company, net of initial capital, is then precisely the upwards skip-free Lévy chain X of Definition 1.
Fix now such an X. We retain the notation of the previous sections, in particular of Section 4.4, assuming still that $h = 1$ (of course, this just means that we are expressing all monetary sums in the unit of the size of the received premia).
As an illustration we may then consider the computation of the Laplace transform (and hence, by inversion, of the density) of the time until ruin of the insurance company, which is to say of the time $T x −$.
To make it concrete, let us take the parameters as follows. Masses of the Lévy measure on the down jumps: $λ ( { − k } ) = ( 1 2 ) k$, $k ∈ N$; mass of the Lévy measure on the up jump: $λ ( { 1 } ) = 1 2 + ∑ n = 1 ∞ n · ( 1 2 ) n = 5 2$ (giving a positive “safety loading” $s : = 1 2$); initial capital: $x = 10$. This is a special case of the “geometric” chain from Section 4.4 with $γ = 7 2$, $p = 5 7$ and $a = 1 2$ (see p. 20 for a). Setting, for $k ∈ N 0$, $γ ( q ) ( k ) : = ∑ l = 0 k W ( q ) ( l ) 2 l$ produces the following difference equation: $5 γ ( q ) ( k + 1 ) − ( 19 + 4 q ) γ ( q ) ( k ) + ( 18 + 4 q ) γ ( q ) ( k − 1 ) = 0$, $k ∈ N$, with initial conditions $γ ( q ) ( 0 ) = W ( q ) ( 0 ) = 2 5$ and $γ ( q ) ( 1 ) = γ ( q ) ( 0 ) + 2 W ( q ) ( 1 ) = 2 5 + ( 2 5 ) 2 ( 7 + 2 q )$. Finishing the tedious computation with the help of Mathematica produces the results reported in Figure 2.
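At $q = 0$ the outcome of this computation can be cross-checked without Mathematica: iterating the recursion for W directly and evaluating $1 − s W ( x )$ (as in the caption of Figure 2) recovers the ruin probability. A sketch (variable names ours):

```python
# Parameters of the insurance example: lambda({-k}) = (1/2)^k, lambda({1}) = 5/2.
lam = 3.5                                   # lambda(R) = 7/2
p = 2.5 / lam                               # up-jump mass fraction: p = 5/7
x, s = 10, 0.5                              # initial capital, safety loading

W = [1.0 / (lam * p)]                       # W(0) = 2/5
for k in range(x):
    down = sum((0.5 ** l / lam) * W[k - l] for l in range(1, k + 1))
    W.append((W[k] - down) / p)             # recursion (20) with q = 0

ruin = 1 - s * W[x]                         # P(T_x^- < infinity) = 1 - s W(x)
# ruin is approximately 0.2789, matching Figure 2
```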
On a final note, we should point out that the assumptions made above concerning the risk process are, strictly speaking, unrealistic. Indeed, (i) the collected premia will typically not all be of the same size and, moreover, (ii) the initial capital and the incurred claims will not be multiples thereof. Besides, there is no reason to believe (iii) that the times that elapse between the accruals of premia are (approximately) i.i.d. exponentially distributed. Nevertheless, these objections can to some extent be reasonably addressed. For (ii) one need only choose h small enough that the error committed in “rounding off” the initial capital and the claims is negligible (of course, even a priori the monetary units are not infinitely divisible, but, e.g., $h = 0.01 €$ may not be the most computationally efficient unit to consider in this context). Concerning (i) and (iii), we would typically prefer to see a premium drift (with slight stochastic deviations). This can be achieved by taking $λ ( { h } )$ sufficiently large: we will then be witnessing the arrival of premia with very high intensity, which, by the law of large numbers, will on a large enough time scale look essentially like a (slightly stochastic) premium drift, interspersed with the arrivals of claims. This is basically an approximation of the Cramér-Lundberg model in the spirit of Mijatović et al. (2015), which, however (because we are not ultimately effecting the limits $h ↓ 0$, $λ ( { h } ) → ∞$), retains some stochasticity in the premia. Keeping this in mind, it would be interesting to see how the upwards skip-free model behaves when fitted against real data, but this investigation lies beyond the intended scope of the present text.

## Funding

The support of the Slovene Human Resources Development and Scholarship Fund under contract number 11010-543/2011 is acknowledged.

## Acknowledgments

I thank Andreas Kyprianou for suggesting to me some of the investigations in this paper. I am also grateful to three anonymous Referees whose comments and suggestions have helped to improve the presentation as well as the content of this paper. Finally my thanks goes to Florin Avram for inviting me to contribute to this special issue of Risks.

## Conflicts of Interest

The author declares no conflict of interest.

## References

1. Asmussen, Søren, and Hansjörg Albrecher. 2010. Ruin Probabilities. Advanced Series on Statistical Science and Applied Probability. Singapore: World Scientific.
2. Avram, Florin, Andreas E. Kyprianou, and Martijn R. Pistorius. 2004. Exit Problems for Spectrally Negative Lévy Processes and Applications to (Canadized) Russian Options. The Annals of Applied Probability 14: 215–38.
3. Avram, Florin, Zbigniew Palmowski, and Martijn R. Pistorius. 2007. On the optimal dividend problem for a spectrally negative Lévy process. The Annals of Applied Probability 17: 156–80.
4. Avram, Florin, and Matija Vidmar. 2017. First passage problems for upwards skip-free random walks via the Φ, W, Z paradigm. arXiv:1708.0608.
5. Bao, Zhenhua, and He Liu. 2012. The compound binomial risk model with delayed claims and random income. Mathematical and Computer Modelling 55: 1315–23.
6. Bertoin, Jean. 1996. Lévy Processes. Cambridge Tracts in Mathematics. Cambridge: Cambridge University Press.
7. Bhattacharya, Rabindra Nath, and Edward C. Waymire. 2007. A Basic Course in Probability Theory. New York: Springer.
8. Biffis, Enrico, and Andreas E. Kyprianou. 2010. A note on scale functions and the time value of ruin for Lévy insurance risk processes. Insurance: Mathematics and Economics 46: 85–91.
9. Bingham, Nicholas Hugh, Charles M. Goldie, and Jozef L. Teugels. 1987. Regular Variation. Encyclopedia of Mathematics and Its Applications. Cambridge: Cambridge University Press.
10. Brown, Mark, Erol A. Peköz, and Sheldon M. Ross. 2010. Some results for skip-free random walk. Probability in the Engineering and Informational Sciences 24: 491–507.
11. Bühlmann, Hans. 1970. Mathematical Methods in Risk Theory. Grundlehren der mathematischen Wissenschaften. Berlin/Heidelberg: Springer.
12. Chiu, Sung Nok, and Chuancun Yin. 2005. Passage times for a spectrally negative Lévy process with applications to risk theory. Bernoulli 11: 511–22.
13. De Vylder, Florian, and Marc J. Goovaerts. 1988. Recursive calculation of finite-time ruin probabilities. Insurance: Mathematics and Economics 7: 1–7.
14. Dickson, David C. M., and Howard R. Waters. 1991. Recursive calculation of survival probabilities. ASTIN Bulletin 21: 199–221.
15. Doney, Ronald A. 2007. Fluctuation Theory for Lévy Processes: Ecole d’Eté de Probabilités de Saint-Flour XXXV-2005. Edited by Jean Picard. Number 1897 in Ecole d’Eté de Probabilités de Saint-Flour. Berlin/Heidelberg: Springer.
16. Engelberg, Shlomo. 2005. A Mathematical Introduction to Control Theory. Series in Electrical and Computer Engineering, vol. 2. London: Imperial College Press.
17. Hubalek, Friedrich, and Andreas E. Kyprianou. 2011. Old and New Examples of Scale Functions for Spectrally Negative Lévy Processes. In Seminar on Stochastic Analysis, Random Fields and Applications VI. Edited by Robert Dalang, Marco Dozzi and Francesco Russo. Basel: Springer, pp. 119–45.
18. Kallenberg, Olav. 1997. Foundations of Modern Probability. Probability and Its Applications. New York and Berlin/Heidelberg: Springer.
19. Karatzas, Ioannis, and Steven E. Shreve. 1988. Brownian Motion and Stochastic Calculus. Graduate Texts in Mathematics. New York: Springer.
20. Kyprianou, Andreas E. 2006. Introductory Lectures on Fluctuations of Lévy Processes with Applications. Berlin/Heidelberg: Springer.
21. Marchal, Philippe. 2001. A Combinatorial Approach to the Two-Sided Exit Problem for Left-Continuous Random Walks. Combinatorics, Probability and Computing 10: 251–66.
22. Mijatović, Aleksandar, Matija Vidmar, and Saul Jacka. 2014. Markov chain approximations for transition densities of Lévy processes. Electronic Journal of Probability 19: 1–37.
23. Mijatović, Aleksandar, Matija Vidmar, and Saul Jacka. 2015. Markov chain approximations to scale functions of Lévy processes. Stochastic Processes and Their Applications 125: 3932–57.
24. Norris, James R. 1997. Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press.
25. Parthasarathy, Kalyanapuram Rangachari. 1967. Probability Measures on Metric Spaces. New York and London: Academic Press.
26. Quine, Malcolm P. 2004. On the escape probability for a left or right continuous random walk. Annals of Combinatorics 8: 221–23.
27. Revuz, Daniel, and Marc Yor. 1999. Continuous Martingales and Brownian Motion. Berlin/Heidelberg: Springer.
28. Rudin, Walter. 1970. Real and Complex Analysis. International Student Edition. Maidenhead: McGraw-Hill.
29. Sato, Ken-iti. 1999. Lévy Processes and Infinitely Divisible Distributions. Cambridge Studies in Advanced Mathematics. Cambridge: Cambridge University Press.
30. Spitzer, Frank. 2001. Principles of Random Walk. Graduate Texts in Mathematics. New York: Springer.
31. Vidmar, Matija. 2015. Non-random overshoots of Lévy processes. Markov Processes and Related Fields 21: 39–56.
32. Wat, Kam Pui, Kam Chuen Yuen, Wai Keung Li, and Xueyuan Wu. 2018. On the compound binomial risk model with delayed claims and randomized dividends. Risks 6: 6.
33. Xiao, Yuntao, and Junyi Guo. 2007. The compound binomial risk model with time-correlated claims. Insurance: Mathematics and Economics 41: 124–33.
34. Yang, Hailiang, and Lianzeng Zhang. 2001. Spectrally negative Lévy processes with applications in risk theory. Advances in Applied Probability 33: 281–91.
1 However, such a treatment did eventually become available (several years after this manuscript was essentially completed, but before it was published), in the preprint Avram and Vidmar (2017).
2 It is part of the condition that such an x should exist (automatically, given the preceding assumptions, there is at most one).
Figure 1. Consider the possible zeros $λ 1$, $λ 2$ and $λ 3$ of the characteristic polynomial of (20) (with $q = 0$), when $l : = − inf supp ( λ ) = 2$ and $p = 1 / 2$. Straightforward computation shows they are precisely those that satisfy (o) $λ 3 = 2 − λ 1 − λ 2$; (i) $( λ 1 − 1 ) ( λ 2 − 1 ) ( λ 1 + λ 2 − 1 ) = 0$ and (ii) $λ 1 λ 2 + ( λ 1 + λ 2 ) ( 2 − λ 1 − λ 2 ) ≥ 0$ & $λ 1 λ 2 ( 2 − λ 1 − λ 2 ) < 0$. In the plot one has $λ 1$ as the abscissa, $λ 2$ as the ordinate. The shaded area (an ellipse missing the closed inner triangle) satisfies (ii), the black lines verify (i). Then $q 1 = ( λ 1 λ 2 + ( λ 1 + λ 2 ) ( 2 − λ 1 − λ 2 ) ) / 2$ and $q 2 = ( − λ 1 λ 2 ( 2 − λ 1 − λ 2 ) ) / 2$.
Figure 2. (a): The Laplace transform $l : = ( [ 0 , ∞ ) ∋ q ↦ E [ e − q T x − ; T x − < ∞ ] )$ for the parameter set described in the body of the text, on the interval $[ 0 , 0.2 ]$. The probability of ruin is $P ( T x − < ∞ ) = l ( 0 ) = 1 − W ( 10 ) / W ( ∞ ) = 1 − ψ ′ ( 0 + ) W ( x ) = 1 − s W ( x ) ≐ 0.28$ and the mean ruin time conditionally on ruin is $E [ T x − | T x − < ∞ ] = − l ′ ( 0 + ) / l ( 0 ) ≐ 21.8$ (graphically this is one over where the tangent to l at zero meets the abscissa); (b): Density of $T x −$ on ${ T x − < ∞ }$, plotted on the interval $[ 0 , 20 ]$, and obtained by means of numerically inverting the Laplace transform l (the Lebesgue integral of this density on $[ 0 , ∞ )$ is equal to $P ( T x − < ∞ )$).
Table 1. Connections between the quantities $ψ ′ ( 0 + )$, $Φ ( 0 )$, $Φ ′ ( 0 + )$, the behaviour of X at large times, and the behaviour of its excursions away from the running supremum (the latter when $λ ( ( − ∞ , 0 ) ) > 0$).
| $ψ ′ ( 0 + )$ | $Φ ( 0 )$ | $Φ ′ ( 0 + )$ | Long-Term Behaviour | Excursion Length |
| --- | --- | --- | --- | --- |
| $∈ ( 0 , ∞ )$ | 0 | $∈ ( 0 , ∞ )$ | drifts to $+ ∞$ | finite expectation |
| 0 | 0 | $+ ∞$ | oscillates | a.s. finite with infinite expectation |
| $∈ [ − ∞ , 0 )$ | $∈ ( 0 , ∞ )$ | $∈ ( 0 , ∞ )$ | drifts to $− ∞$ | infinite with a positive probability |
