
A Risk Model with an Observer in a Markov Environment

by Hansjörg Albrecher 1,2,* and Jevgenijs Ivanovs 1,*

1 Department of Actuarial Science, University of Lausanne, Lausanne CH-1015, Switzerland
2 Swiss Finance Institute, University of Lausanne, Lausanne CH-1015, Switzerland
* Authors to whom correspondence should be addressed.
Risks 2013, 1(3), 148-161; https://doi.org/10.3390/risks1030148
Submission received: 11 October 2013 / Revised: 5 November 2013 / Accepted: 5 November 2013 / Published: 11 November 2013
(This article belongs to the Special Issue Application of Stochastic Processes in Insurance)

Abstract

We consider a spectrally-negative Markov additive process as a model of a risk process in a random environment. Following recent interest in alternative ruin concepts, we assume that ruin occurs when an independent Poissonian observer sees the process as negative, where the observation rate may depend on the state of the environment. Using an approximation argument and spectral theory, we establish an explicit formula for the resulting survival probabilities in this general setting. We also discuss an efficient evaluation of the involved quantities and provide a numerical illustration.

1. Introduction

In classical risk theory, the ruin of an insurance portfolio is defined as the event that the surplus process becomes negative. In practice, it may be more reasonable to assume that the surplus value is not checked continuously, but at certain times only. If these times are not fixed deterministically, but are assumed to be epochs of a certain independent renewal process, then one often still has sufficient analytical structure to obtain explicit expressions for ruin probabilities and related quantities; see [1,2] for corresponding studies in the framework of the Cramér–Lundberg risk model and Erlang inter-observation times. An alternative ruin concept is studied in [3], where negative surplus does not necessarily lead to bankruptcy, but bankruptcy is declared at the first instance of an inhomogeneous Poisson process with a rate depending on the surplus value, whenever it is negative. When this rate is constant, this bankruptcy concept corresponds to the one in [1,2] for exponential inter-observation times. Yet another related concept is the one of Parisian ruin, where ruin is only reported if the surplus process stays negative for a certain amount of time (see, e.g., [4,5]). If this time is assumed to be an independent exponential random variable instead of a deterministic value, one recovers the former models with exponential inter-observation times and a constant bankruptcy rate function, respectively. Recently, simple expressions for the corresponding ruin probability have been derived when the surplus process follows a spectrally-negative Lévy process, see [6].
In this paper, we extend the above model and allow the surplus process to be a spectrally-negative Markov additive process. The dynamics of such a process change according to an external environment process, modeled by a Markov chain, and changes of the latter may also cause a jump in the surplus process. We assume that the value of the surplus process is only observed at epochs of a Poisson process, and ruin occurs when at any such observation time, the surplus process is negative. We also allow the rate of observations to depend on the current state of the environment (one possible interpretation being that if the environment states refer to different economic conditions, a regulator may increase the observation rates in states of distress). Using an approximation argument and the spectral theory for Markov additive processes, we explicitly calculate for any initial capital the survival probability and the probability to reach a given level before ruin in this model. The resulting formulas turn out to be quite simple. At the same time, these formulas provide information on certain occupation times of the process, which may be of independent theoretical interest.
In Section 2, we introduce the model and the considered quantities in more detail. Section 3 gives a brief summary of general fluctuation results for Markov additive processes that are needed later on. In Section 4, we state our main results and discuss their relation with previous results, and the proofs are given in Section 5. In Section 6, we reconsider the classical ruin concept and show how the present results implicitly extend the classical simple formula for the ruin probability with zero initial capital to the case of a Markov additive surplus process. Finally, in Section 7, we give a numerical illustration of the results for our relaxed ruin concept in a Markov-modulated Cramér–Lundberg model.

2. The Model

Let ( X ( t ) , J ( t ) ) , t ≥ 0 , be a Markov additive process (MAP), where X ( t ) is a surplus process and J ( t ) is an irreducible Markov chain on n states representing the environment; see, e.g., [7]. While J ( t ) = i , X ( t ) evolves as some Lévy process X i ( t ) , and X ( t ) has a jump distributed as U i j when J ( t ) switches from i to j. Consequently, X ( t ) has stationary and independent increments given the corresponding states of the environment. We assume that X ( t ) has no positive jumps and that none of the processes, X i ( t ) , is a non-increasing Lévy process. The latter assumption is not a real restriction, because one can always remove the corresponding states of J ( t ) and replace non-increasing processes by the appropriate negative jumps. Note that the Markov-modulated Cramér–Lundberg risk model with
$$X(t) = u + \int_0^t c_{J(v)}\,\mathrm{d}v - \sum_{j=1}^{N(t)} Y_j \qquad (1)$$
is a particular case of the present framework, where u is the initial capital of an insurance portfolio, c i > 0 is the premium density in state i, N ( t ) is an inhomogeneous Poisson process with claim arrival intensity β i in state i and Y j are independent claim sizes with distribution function F i , if, at the time of occurrence, the environment is in state i (in this case, U i j ≡ 0 for all i , j ); see [7].
Write E u [ Y ; J ( t ) ] for a matrix with i j th element E ( Y 1 { J ( t ) = j } | J ( 0 ) = i , X ( 0 ) = u ) , where Y is an arbitrary random variable, and P u [ A , J ( t ) ] = E u [ 1 A ; J ( t ) ] for the probability matrix corresponding to an event A. If u = 0 , then we simply drop the subscript. We write I , O , 1 , 0 for an identity matrix, a zero matrix, a column vector of ones and a column vector of zeros of dimension n, respectively. For x ≥ 0 , define the first passage time above x (below - x ) by:
$$\tau_x^\pm = \inf\{ t \ge 0 : \pm X(t) > x \}$$
As in [2], we assume that ruin occurs when an independent Poissonian observer sees negative X ( t ) , where, in our setup, the rate of observations depends on the state of J ( t ) , i.e., the rate is ω J ( t ) ≥ 0 for given ω 1 , … , ω n . Recall that a Poisson process of rate ω has no jumps (observations) in some Borel set B ⊂ [ 0 , ∞ ) with probability exp ( - ω ∫ B d t ) . Hence, the probability of survival (non-ruin) in our model with initial capital u is given by the column vector:
$$\phi(u) = \mathbb{E}_u\, e^{-\sum_j \omega_j A_j}, \quad \text{where} \quad A_j := \int_0^\infty \mathbf{1}\{X(t)<0,\, J(t)=j\}\,\mathrm{d}t \qquad (2)$$
which follows by conditioning on the A j s. The ith component of this vector refers to the probability of survival with initial state J ( 0 ) = i . Define for any u ≤ x the n × n matrix:
$$R(u,x) := \mathbb{E}_u\big[e^{-\sum_j \omega_j A_j(x)};\, J(\tau_x^+)\big], \quad \text{with} \quad A_j(x) := \int_0^{\tau_x^+} \mathbf{1}\{X(s)<0,\, J(s)=j\}\,\mathrm{d}s \qquad (3)$$
so R ( u , x ) is the matrix of probabilities of reaching level x without ruin, when starting at level u.
It is known that X ( t ) / t converges to a deterministic constant, μ (the asymptotic drift of X ( t ) ), a.s. as t → ∞ , independently of the initial state, J ( 0 ) . If μ < 0 , then X ( t ) → - ∞ a.s.; so A j → ∞ a.s. for all j, and consequently, ruin is certain (unless all ω j = 0 ). If μ ≥ 0 , then τ x + < ∞ a.s. for all x, and so:
$$\phi(u) = \lim_{x\to\infty} R(u,x)\,\mathbf{1}$$
Finally, note that R ( u , x ) can be interpreted as a joint transform of the occupation times, A j ( x ) . Moreover, with the definition R ( x ) : = R ( 0 , x ) , the strong Markov property and the absence of positive jumps give:
$$R(x)\, R(x,y) = R(y) \qquad (4)$$
for 0 ≤ x ≤ y (see, also, [8]). Hence, R ( x , y ) can be expressed in terms of R ( x ) and R ( y ) , given that these matrices are invertible. That is, it suffices to study the matrix-valued function, R ( x ) .
Remark 2.1. The present framework can be extended to include positive jumps of phase type, cf. [7]. One can convert a MAP with positive jumps of phase type into a spectrally-negative MAP using the so-called fluid embedding, which amounts to an expansion of the state space of J ( t ) and to putting X i ( t ) = t for each auxiliary state, i; see e.g., [9] and [10] (Section 2.7). Next, we set ω i = 0 for all the new auxiliary states, i, and compute the corresponding survival probability vector for the new model, which, when restricted to the original states, yields the survival probabilities of interest.

3. Review of Exit Theory for MAPs

Let us quickly recall the recently established exit theory for spectrally-negative MAPs, which is an extension of the one for scalar Lévy processes (see, e.g., [11], Section 8). A spectrally-negative MAP, ( X ( t ) , J ( t ) ) , is characterized by a matrix-valued function, F ( θ ) , via E [ e θ X ( t ) ; J ( t ) ] = e F ( θ ) t for θ ≥ 0 . We let π be the stationary distribution of J ( t ) . It is not hard to see that J ( τ x + ) , x ≥ 0 , is a Markov chain, and thus,
$$\mathbb{P}(J(\tau_x^+) = j \mid J(0) = i) = \big(e^{\Lambda x}\big)_{ij}$$
for a certain n × n transition rate matrix, Λ, which can be computed using an iterative procedure or a spectral method; see [12,13] and the references therein. It is easy to see that J ( τ x + ) , x ≥ 0 , is non-defective (with a stationary distribution, π Λ ), if and only if μ ≥ 0 .
The two-sided exit problem for MAPs without positive jumps was solved in [14], where it is shown that:
$$\mathbb{P}_u[\tau_x^+ < \tau_0^-,\, J(\tau_x^+)] = W(u)\, W(x)^{-1}$$
for 0 ≤ u ≤ x and x > 0 , where W ( x ) , x ≥ 0 , is a continuous matrix-valued function (called the scale function) characterized by the transform:
$$\int_0^\infty e^{-\theta x}\, W(x)\,\mathrm{d}x = F(\theta)^{-1} \qquad (5)$$
for sufficiently large θ. It is known that W ( x ) is non-singular for x > 0 and so is F ( θ ) in the domain of interest. In addition:
$$W(x) = e^{-\Lambda x} L(x) \qquad (6)$$
where L ( x ) is a positive matrix increasing (as x → ∞ ) to L, a matrix of expected occupation times at zero (note that in the case of the Markov-modulated Cramér–Lundberg model, Equation (1), c j L i j provides the expected number of times when the surplus is zero in state j given J ( 0 ) = i and X ( 0 ) = 0 ). If μ ≥ 0 , then L has finite entries and is invertible. Finally:
$$\mathbb{E}_u\big[e^{\theta X(\tau_0^-)};\, \tau_0^- < \tau_x^+,\, J(\tau_0^-)\big] = Z(\theta, u) - W(u)\, W(x)^{-1} Z(\theta, x) \qquad (7)$$
where
$$Z(\theta, x) = e^{\theta x}\Big( I - \int_0^x e^{-\theta y}\, W(y)\,\mathrm{d}y\, F(\theta) \Big)$$
is analytic in θ for fixed x ≥ 0 in the domain ℜ ( θ ) > 0 .
Importantly, all the above identities hold for defective (killed) MAPs, as well, i.e., when the state space of J ( t ) is complemented by an absorbing 'cemetery' state; the original states of J ( t ) then form a transient communicating class, and the (killing) rate from a state, i, into the absorbing state is ω i ≥ 0 . We refer to [15] for applications of the killing concept in risk theory.
Note that killed MAPs preserve the stationarity and independence of increments given the environment state. Furthermore, we get probabilistic identities of the following type:
$$e^{\hat\Lambda x} = \hat{\mathbb{P}}[J(\tau_x^+)] = \mathbb{E}\big[e^{-\sum_j \omega_j \int_0^{\tau_x^+} \mathbf{1}\{J(t)=j\}\,\mathrm{d}t};\, J(\tau_x^+)\big] \qquad (8)$$
where P ^ and Λ ^ refer to the killed process, and we are still concerned with the original n states only. The right-hand side of Equation (8) is similar to the definition of the matrix, R ( x ) , in Equation (3); it is also the joint transform of certain occupation times. However, R ( x ) is more complicated, as there, the killing is only applied when the surplus process is below zero; so with the setup of this paper, one leaves the class of defective MAPs (the increments now depend on the current value of X ( t ) ). Let us recall the relation between F ( θ ) and its killed analogue F ^ ( θ ) :
$$\hat F(\theta) = F(\theta) - \Delta, \qquad \Delta = \mathrm{diag}(\omega_1, \ldots, \omega_n) \qquad (9)$$
Letting Δ π be a diagonal matrix with the stationary distribution vector, π, of J on the diagonal, we note that $\tilde F(\theta) = \Delta_\pi^{-1} F(\theta)^T \Delta_\pi$ corresponds to a time-reversed process, which is, again, a spectrally-negative MAP (with no non-increasing Lévy processes as building blocks) with the same asymptotic drift, μ; see [7]. Using the characterization Equation (5), one can see that the corresponding scale function is given by $\tilde W(x) = \Delta_\pi^{-1} W(x)^T \Delta_\pi$.

4. Results

The following main result determines the matrix of probabilities of reaching a level, x, without ruin:
Theorem 4.1. For x ≥ 0 , we have:
$$R(x) = \mathbb{E}\big[e^{-\sum_j \omega_j A_j(x)};\, J(\tau_x^+)\big] = e^{\hat\Lambda x}\Big( I - \int_0^x W(y)\,\Delta\, e^{\hat\Lambda y}\,\mathrm{d}y \Big)^{-1}$$
where Λ ^ corresponds to the killed process with killing rates, ω i ≥ 0 , identified by F ^ ( θ ) in Equation (9).
The vector of survival probabilities according to our relaxed ruin concept has the following simple form:
Theorem 4.2. Assume that the asymptotic drift μ > 0 , all observation rates, ω i , are positive and Λ and Λ ^ do not have a common eigenvalue. Then, the vector of survival probabilities is given by:
$$\phi(0) = \lim_{x\to\infty} R(x)\,\mathbf{1} = U^{-1}\,\mathbf{1}$$
where U is the unique solution of:
$$\Lambda U - U \hat\Lambda = L\,\Delta \qquad (10)$$
Equation (4) then immediately gives:
Corollary 4.1. For 0 ≤ u ≤ x , it holds that
$$R(u,x) = \Big( I - \int_0^u W(y)\,\Delta\, e^{\hat\Lambda y}\,\mathrm{d}y \Big)\, e^{\hat\Lambda (x-u)} \Big( I - \int_0^x W(y)\,\Delta\, e^{\hat\Lambda y}\,\mathrm{d}y \Big)^{-1}$$
Under the conditions of Theorem 4.2, we have for every u ≥ 0 :
$$\phi(u) = R(u)^{-1}\,\phi(0)$$
Equation (10) is known as the Sylvester equation in control theory. Under the conditions of Theorem 4.2, it has a unique solution [16], which has full rank, because L Δ has full rank; see Theorem 2 in [17]. Moreover, the solution, U, can be found by solving a system of linear equations with n² unknowns. With regard to the coefficient matrices, there are two methods to compute Λ and Λ ^ ; see Section 3. In principle, the matrix, L, can be obtained from W ( x ) , cf. Equation (6). This method, however, is inefficient and numerically unstable. In the following, we give a more direct way of evaluating L.
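As an illustrative sketch of this vectorization step (the matrices below are hypothetical placeholders, not taken from the model), the Sylvester equation (10) can be turned into a linear system with n² unknowns via Kronecker products:

```python
import numpy as np

def solve_sylvester(A, B, C):
    """Solve A U - U B = C by vectorization:
    (I kron A - B^T kron I) vec(U) = vec(C), with vec stacking columns."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(n))
    u = np.linalg.solve(K, C.flatten(order="F"))
    return u.reshape((n, n), order="F")

# Hypothetical placeholder matrices (eigenvalues of A and B are disjoint,
# so the solution is unique):
A = np.array([[-2.0, 2.0], [1.0, -1.0]])
B = np.array([[-3.0, 1.5], [0.5, -2.0]])
C = np.array([[1.0, 0.3], [0.6, 0.5]])

U = solve_sylvester(A, B, C)
print(np.allclose(A @ U - U @ B, C))  # residual of the Sylvester equation
```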
Proposition 4.1. Let μ ≥ 0 . Then, for a left eigen pair ( γ , h ) of - Λ , i.e., - h Λ = γ h , it holds that:
$$h\, L = \lim_{q \downarrow 0} q\, h\, F(\gamma + q)^{-1}$$
More generally, if h 1 , … , h j is a left Jordan chain of - Λ corresponding to an eigenvalue γ, i.e., - h 1 Λ = γ h 1 and - h i Λ = γ h i + h i - 1 for i = 2 , … , j , then:
$$h_j\, L = \lim_{q \downarrow 0} q \sum_{i=0}^{j-1} \frac{1}{i!}\, h_{j-i}\, \big[ F(q+\gamma)^{-1} \big]^{(i)}$$
Remark 4.1. Consider the special case n = 1 , i.e., X ( t ) is a spectrally-negative Lévy process with Laplace exponent F ( θ ) = log E e θ X ( 1 ) , with observation rate ω. Then, Λ ^ = - Φ ( ω ) , where Φ ( · ) is the right-inverse of F ( θ ) , i.e., F ( Φ ( ω ) ) = ω . According to Theorem 4.1, we have:
$$R(x) = e^{-\Phi(\omega) x} \Big/ \Big( 1 - \omega \int_0^x e^{-\Phi(\omega) y}\, W(y)\,\mathrm{d}y \Big) = 1 / Z(\Phi(\omega), x) \qquad (11)$$
Note that 1 / Z ( θ , x ) is a certain transform corresponding to X ( t ) reflected at zero at the time of passage over level x; see [14], which may lead one to an alternative direct probabilistic derivation of Equation (11). Finally, if μ = E X ( 1 ) > 0 , then Λ = 0 , and hence, L = 1 / F ′ ( 0 ) = 1 / μ , according to Proposition 4.1. Accordingly, in this case, Theorem 4.2 reduces to:
$$\phi(0) = \mathbb{E} \exp\Big( -\omega \int_0^\infty \mathbf{1}\{X(t) < 0\}\,\mathrm{d}t \Big) = \frac{\Phi(\omega)}{\omega}\, \mu$$
which coincides with Theorem 1 in [6].
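As a small numerical check of this remark (the parameter values c = 1, β = 0.5, ω = 0.4 are our own illustrative choices, not from the paper), one can compute Φ(ω) by root finding on the Laplace exponent of a Cramér–Lundberg process with exponential claims of mean one, F(θ) = cθ − βθ/(1 + θ), and then evaluate ϕ(0) = Φ(ω)μ/ω:

```python
# Survival probability with zero initial capital for a spectrally-negative
# Levy (Cramer-Lundberg) process, following Remark 4.1.
# Illustrative parameters (our own choice): c = 1, beta = 0.5, omega = 0.4.
c, beta, omega = 1.0, 0.5, 0.4

def F(theta):
    """Laplace exponent: drift c, Poisson(beta) arrivals, Exp(1) claim sizes."""
    return c * theta - beta * theta / (1.0 + theta)

mu = c - beta  # asymptotic drift F'(0) > 0

def right_inverse(w, lo=0.0, hi=10.0, tol=1e-12):
    """Phi(w): the positive root of F(theta) = w.
    F is increasing on [0, inf) for these parameters, so bisection applies."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < w:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Phi = right_inverse(omega)
phi0 = Phi * mu / omega
print(round(phi0, 4))  # -> 0.7305
```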
Remark 4.2. In the case when X ( t ) has no jumps, i.e., it is a Markov-modulated Brownian motion (MMBM), the matrices, W ( x ) and L, can be given in an explicit form; see [10,18,19]. Furthermore, the same is true for a model with arbitrary phase-type jumps downwards, because such a model can be reduced to an MMBM using fluid embedding. Note that the fluid embedding is only used to determine the matrices, W ( x ) and L, corresponding to the original process (the setup of this paper would not allow for processes X i ( t ) = - t otherwise); compare this to Remark 2.1 discussing phase-type jumps upwards.

5. Proofs

The proofs rely on a spectral representation of the matrix, Λ ^ , which we quickly review in the following. Let v 1 , … , v k be a Jordan chain of - Λ ^ corresponding to an eigenvalue γ, i.e., - Λ ^ v 1 = γ v 1 and - Λ ^ v i = γ v i + v i - 1 for i = 2 , … , k . From the classical theory of Jordan chains, we know that:
$$e^{-\hat\Lambda x}\, v_j = \sum_{i=0}^{j-1} \frac{x^i}{i!}\, e^{\gamma x}\, v_{j-i} \qquad (12)$$
for any x ∈ ℝ and j = 1 , … , k and, in particular, e - Λ ^ x v 1 = e γ x v 1 . Moreover, this Jordan chain turns out to be a generalized Jordan chain of an analytic matrix function F ^ ( θ ) , ℜ ( θ ) > 0 , corresponding to a generalized eigenvalue, γ, i.e., for any j = 1 , … , k , it holds that:
$$\sum_{i=0}^{j-1} \frac{1}{i!}\, \hat F^{(i)}(\gamma)\, v_{j-i} = \sum_{i=0}^{j-1} \frac{1}{i!}\, F^{(i)}(\gamma)\, v_{j-i} - \Delta\, v_j = \mathbf{0} \qquad (13)$$
and, in particular, F ( γ ) v 1 = Δ v 1 ; see [13] for details.
Proof of Proposition 4.1. Observe that h e - Λ x = e γ x h , and so Equation (5) and Equation (6) yield:
$$h\, F(\theta)^{-1} = \int_0^\infty e^{-\theta x}\, e^{\gamma x}\, h\, L(x)\,\mathrm{d}x$$
for large enough θ. Since L ( x ) is bounded from above by L, this equation can be analytically continued to ℜ ( θ ) > ℜ ( γ ) with non-singular F ( θ ) . Hence, for small enough q > 0 , we can write:
$$q\, h\, F(q+\gamma)^{-1} = q \int_0^\infty e^{-q x}\, h\, L(x)\,\mathrm{d}x = h\, \mathbb{E}\, L(e_q)$$
where e q is an exponentially distributed r.v. with parameter q. Letting q ↓ 0 completes the proof of the first part.
According to Equation (12), we have $h_{j-i}\, e^{-\Lambda x} = \sum_{k=0}^{j-i-1} \frac{x^k}{k!}\, e^{\gamma x}\, h_{j-i-k}$. Next, consider:
$$h_{j-i} \big[ F(\theta)^{-1} \big]^{(i)} = \int_0^\infty (-x)^i e^{-\theta x}\, h_{j-i}\, e^{-\Lambda x} L(x)\,\mathrm{d}x = \sum_{k=i}^{j-1} \frac{(-1)^i}{(k-i)!}\, h_{j-k} \int_0^\infty x^k e^{-\theta x + \gamma x} L(x)\,\mathrm{d}x$$
where differentiation under the integral sign can be justified using standard arguments. Finally:
$$\sum_{i=0}^{j-1} \frac{1}{i!}\, h_{j-i} \big[ F(\theta)^{-1} \big]^{(i)} = \sum_{k=0}^{j-1} \sum_{i=0}^{k} \frac{(-1)^i}{i!\,(k-i)!}\, h_{j-k} \int_0^\infty x^k e^{-\theta x + \gamma x} L(x)\,\mathrm{d}x = h_j \int_0^\infty e^{-\theta x + \gamma x} L(x)\,\mathrm{d}x$$
because the second sum is $(1-1)^k = 0$ for k ≥ 1 . The final step of the proof is the same as in the case of j = 1 . ☐
The proof of Theorem 4.1 relies on an approximation idea, which has already appeared in various papers; see, e.g., [6,20,21]. We consider an approximation, R ϵ ( x ) , of the matrix, R ( x ) . When computing the occupation times, we start the clock when X ( t ) goes below - ϵ (rather than zero), but stop it when X ( t ) reaches the level of zero. Mathematically, we write, using the strong Markov property:
$$R_\epsilon(x) = \mathbb{P}[\tau_x^+ < \tau_\epsilon^-,\, J(\tau_x^+)] + \int_{-\infty}^{-\epsilon} \mathbb{P}[\tau_\epsilon^- < \tau_x^+,\, X(\tau_\epsilon^-) \in \mathrm{d}y,\, J(\tau_\epsilon^-)]\; \mathbb{E}_y\big[ e^{-\sum_j \omega_j \int_0^{\tau_0^+} \mathbf{1}\{J(t)=j\}\,\mathrm{d}t};\, J(\tau_0^+) \big]\, R_\epsilon(x)$$
Using the exit theory for MAPs discussed in Section 3, we note that the first term on the right is $W(\epsilon)\, W(x+\epsilon)^{-1}$, and the second, according to Equation (8), is:
$$\int_0^\infty \mathbb{P}_\epsilon[\tau_0^- < \tau_{x+\epsilon}^+,\, -X(\tau_0^-) \in \mathrm{d}y,\, J(\tau_0^-)]\; e^{\hat\Lambda (y+\epsilon)}\, R_\epsilon(x)$$
By the monotone convergence theorem, the approximating occupation times converge to A j ( x ) as ϵ ↓ 0 , and then, the dominated convergence theorem implies convergence of the transforms: R ϵ ( x ) → R ( x ) as ϵ ↓ 0 for any x > 0 . Hence, we have:
$$W(x) \lim_{\epsilon \downarrow 0} W(\epsilon)^{-1} \Big( I - \int_0^\infty \mathbb{P}_\epsilon[\tau_0^- < \tau_{x+\epsilon}^+,\, -X(\tau_0^-) \in \mathrm{d}y,\, J(\tau_0^-)]\; e^{\hat\Lambda (y+\epsilon)} \Big)\, R(x) = I \qquad (14)$$
where we also used the continuity of W ( x ) . We will need the following auxiliary result for the analysis of the above limit.
Lemma 5.1. Let f ( y ) , y ≥ 0 , be a Borel function bounded around zero. Then:
$$\lim_{\epsilon \downarrow 0} W(\epsilon)^{-1} \int_0^\epsilon f(y)\, W(y)\,\mathrm{d}y = O$$
Proof. Consider a scale function of the time-reversed process: $\tilde W(x) = \Delta_\pi^{-1} W(x)^T \Delta_\pi$. It is enough to show that $\lim_{\epsilon \downarrow 0} \int_0^\epsilon f(y)\, \tilde W(y)\,\mathrm{d}y\; \tilde W(\epsilon)^{-1} = O$, but:
$$\int_0^\epsilon f(y)\, \tilde W(y)\,\mathrm{d}y\; \tilde W(\epsilon)^{-1} = \int_0^\epsilon f(y)\, \tilde{\mathbb{P}}_y\big( \tau_\epsilon^+ < \tau_0^-;\, J(\tau_\epsilon^+) \big)\,\mathrm{d}y$$
which clearly converges to the zero matrix. ☐
Proof of Theorem 4.1. First, we provide a proof under a simplifying assumption, and then, we deal with the general case.
Part I: Assume that - Λ ^ has n linearly-independent eigenvectors v: - Λ ^ v = γ v . Considering Equation (14), we observe that the integral multiplied by v is given by:
$$\int_0^\infty e^{-\gamma(y+\epsilon)}\, \mathbb{P}_\epsilon[\tau_0^- < \tau_{x+\epsilon}^+,\, -X(\tau_0^-) \in \mathrm{d}y,\, J(\tau_0^-)]\, v = e^{-\gamma\epsilon}\, \mathbb{E}_\epsilon\big[ e^{\gamma X(\tau_0^-)};\, \tau_0^- < \tau_{x+\epsilon}^+,\, J(\tau_0^-) \big]\, v = e^{-\gamma\epsilon} \big( Z(\gamma, \epsilon) - W(\epsilon)\, W(x+\epsilon)^{-1} Z(\gamma, x+\epsilon) \big)\, v$$
according to Equation (7). Hence, the limit in Equation (14) multiplied by v is given by:
$$\lim_{\epsilon \downarrow 0} \Big( W(\epsilon)^{-1} \int_0^\epsilon e^{-\gamma y}\, W(y)\,\mathrm{d}y\, F(\gamma)\, v \Big) + W(x)^{-1} Z(\gamma, x)\, v = W(x)^{-1} Z(\gamma, x)\, v$$
according to the form of Z ( γ , ϵ ) and Lemma 5.1. Finally, from Equation (13), we have:
$$Z(\gamma, x)\, v = e^{\gamma x}\, v - \int_0^x W(y)\,\Delta\, e^{\gamma(x-y)}\, v\,\mathrm{d}y = \Big( e^{-\hat\Lambda x} - \int_0^x W(y)\,\Delta\, e^{\hat\Lambda(y-x)}\,\mathrm{d}y \Big)\, v$$
which, under the assumption that there are n linearly-independent eigenvectors, shows that:
$$\Big( e^{-\hat\Lambda x} - \int_0^x W(y)\,\Delta\, e^{\hat\Lambda(y-x)}\,\mathrm{d}y \Big)\, R(x) = I$$
completing the proof.
Part II: In general, we consider a Jordan chain, v 1 , … , v j , of - Λ ^ corresponding to an eigenvalue, γ. Using Equation (12), we see that the integral in Equation (14) multiplied by v j is given by:
$$\sum_{i=0}^{j-1} \frac{1}{i!}\, \mathbb{E}_\epsilon\big[ (X(\tau_0^-) - \epsilon)^i\, e^{\gamma(X(\tau_0^-) - \epsilon)};\, \tau_0^- < \tau_{x+\epsilon}^+,\, J(\tau_0^-) \big]\, v_{j-i}$$
where all the terms can be obtained by considering Equation (7) for θ = γ , multiplying it by e - ϵ γ and taking derivatives with respect to γ. Again, Lemma 5.1 allows us to show that:
$$W(\epsilon)^{-1}\, \frac{\partial^i}{\partial \gamma^i} \Big( I - e^{-\epsilon\gamma}\, Z(\gamma, \epsilon) \Big) = W(\epsilon)^{-1}\, \frac{\partial^i}{\partial \gamma^i} \Big( \int_0^\epsilon e^{-\gamma y}\, W(y)\,\mathrm{d}y\, F(\gamma) \Big)$$
converges to the zero matrix as ϵ ↓ 0 . Hence, the expression on the left of R ( x ) in Equation (14), when multiplied by v j , is equal to:
$$\sum_{i=0}^{j-1} \frac{1}{i!}\, Z^{(i)}(\gamma, x)\, v_{j-i} \qquad (15)$$
The definition of Z ( γ , x ) leads to:
$$Z^{(i)}(\gamma, x) = x^i e^{\gamma x} I - \sum_{k=0}^{i} \frac{i!}{k!\,(i-k)!} \int_0^x (x-y)^k e^{\gamma(x-y)}\, W(y)\,\mathrm{d}y\, F^{(i-k)}(\gamma)$$
Plugging this in Equation (15), interchanging summation and using Equation (13), we can rewrite Equation (15) in the following way:
$$\sum_{i=0}^{j-1} \frac{1}{i!}\, x^i e^{\gamma x}\, v_{j-i} - \sum_{k=0}^{j-1} \frac{1}{k!} \int_0^x (x-y)^k e^{\gamma(x-y)}\, W(y)\,\mathrm{d}y\, \Delta\, v_{j-k}$$
which is just:
$$\Big( e^{-\hat\Lambda x} - \int_0^x W(y)\,\Delta\, e^{\hat\Lambda(y-x)}\,\mathrm{d}y \Big)\, v_j$$
according to Equation (12). The proof is complete, since there are n linearly-independent vectors in the corresponding Jordan chains.
Proof of Theorem 4.2. First, we provide a proof under the assumption that both - Λ and - Λ ^ have semi-simple eigenvalues and that the real parts of the eigenvalues of - Λ ^ are large enough. Assume for a moment that every eigenvalue, γ, of - Λ ^ is such that the transform Equation (5) holds for θ = γ . In the following, we will study the limit of $M(x) = e^{\Lambda x} R(x)^{-1}$.
Consider an eigen pair ( γ , v ) of - Λ ^ and a left eigen pair ( γ * , h * ) of - Λ , i.e., - Λ ^ v = γ v and - h * Λ = γ * h * . Then, Theorem 4.1 implies:
$$h^* M(x)\, v = h^* \Big( I - \int_0^x e^{-\gamma y}\, W(y)\,\mathrm{d}y\, \Delta \Big) v\; e^{(\gamma - \gamma^*) x}$$
where ℜ ( γ ) > ℜ ( γ * ) by the above assumption. Note that the expression in brackets, applied to v, converges to zero, because of Equation (5) and Equation (13). Therefore, we can apply L’Hôpital’s rule to get:
$$\lim_{x\to\infty} h^* M(x)\, v = \frac{1}{\gamma - \gamma^*} \lim_{x\to\infty} e^{-\gamma^* x}\, h^*\, W(x)\,\Delta\, v = \frac{1}{\gamma - \gamma^*}\, h^*\, L\,\Delta\, v$$
where the second equality follows from Equation (6). Under the assumption that all the eigenvalues of Λ and Λ ^ are semi-simple (there are n eigenvectors in each case), this implies that M ( x ) converges to a finite limit, U, and:
$$h^*\, L\,\Delta\, v = (\gamma - \gamma^*)\, h^* U v = h^* (\Lambda U - U \hat\Lambda)\, v$$
which yields Equation (10). Since $M(x)^{-1}\mathbf{1} = R(x)\,\mathbf{1}$ is bounded and U is invertible, we see that the former converges to $U^{-1}\mathbf{1}$.
Jordan chains: When some eigenvalues are not semi-simple, the proof follows the same idea, but the calculus becomes rather tedious. Therefore, we only present the main steps. Consider an arbitrary Jordan chain, v 1 , … , v k , of - Λ ^ with eigenvalue γ and an arbitrary left Jordan chain, h 1 * , … , h m * , of - Λ , with eigenvalue γ * . We need to show that M ( x ) has a finite limit, U, as x → ∞ , and that this U satisfies:
$$h_m^* (\Lambda U - U \hat\Lambda)\, v_k = (\gamma - \gamma^*)\, h_m^* U v_k - h_{m-1}^* U v_k + h_m^* U v_{k-1} = h_m^*\, L\,\Delta\, v_k$$
where h 0 * = v 0 = 0 by convention. For this, we compute h i * M ( x ) v j using Equation (12) and its analogue for the left chain and take the limit using L’Hôpital’s rule, which is applicable because of Equation (13). This then confirms that:
$$(\gamma - \gamma^*)\, h_m^* M(x)\, v_k - h_{m-1}^* M(x)\, v_k + h_m^* M(x)\, v_{k-1} \;\longrightarrow\; h_m^*\, L\,\Delta\, v_k$$
and the result follows.
Analytic continuation: Finally, it remains to remove the assumption that the real part of every eigenvalue of - Λ ^ is large enough. For some q > 0 , we can define new killing rates by ω i ( q ) = ω i + q and consider the corresponding new matrices, Λ ^ ( q ) , Δ ( q ) (note that Λ and L stay unchanged). By choosing large enough q, we can ensure that the real parts of the zeros of det ( F ( θ ) - Δ ( q ) ) (in the right half complex plane) are arbitrarily large. These zeros are exactly the eigenvalues of - Λ ^ ( q ) , and so, the result of our Theorem holds for large enough q.
We now use analytic continuation in q in the domain ℜ ( q ) > - min { ω 1 , … , ω n } . In this domain, e Λ ^ ( q ) x is analytic for every x, which follows from its probabilistic interpretation. This and the invertibility of Λ ^ ( q ) can be used to show that Λ ^ ( q ) is also analytic. Furthermore, one can show that only for a finite number of different q’s, the matrices, Λ and Λ ^ ( q ) , can have common eigenvalues. Now, we express $U(q) = G(q)^{-1} L \Delta(q)$, where G ( q ) is formed from the elements of Λ and Λ ^ ( q ) ; see, e.g., [22]. Hence, U ( q ) can be analytically continued to the domain of interest excluding the above finite set of points. Hence, also, $\phi_q(0) = U(q)^{-1}\mathbf{1}$ in the latter domain, where U ( q ) is the unique solution of the corresponding Sylvester equation. In particular, this holds for q = 0 , and the proof is complete.

6. Remarks on Classical Ruin

Let us briefly return to the classical ruin concept, i.e., all ω i → ∞ . From Equation (7), the matrix of probabilities to reach level x before ruin is in this case given by:
$$\mathbb{P}_u[\tau_x^+ < \tau_0^-,\, J(\tau_x^+)] = I - Z(0, u) + W(u)\, W(x)^{-1} Z(0, x)$$
which for u = 0 reduces to $W(0)\, W(x)^{-1} Z(0, x)$. It is known that W ( 0 ) is a diagonal matrix with W i i ( 0 ) equal to zero or 1 / c i , according to X i having unbounded variation or bounded variation on compacts, and c i > 0 being the linear drift of X i (the premium density in the case of Equation (1)).
In order to obtain survival probabilities when μ > 0 , we need to compute:
$$t = \lim_{x\to\infty} W(x)^{-1} Z(0, x)\,\mathbf{1}$$
which, similarly to the proof of Theorem 4.2, is a non-trivial problem. Using recent results from [23], in particular Lemma 1, Proposition 1 and Lemma 3, we find that this limit is given by:
$$t = \mu\, \Delta_\pi^{-1}\, \pi_{\tilde\Lambda}^T$$
where $\pi_{\tilde\Lambda}$ is the stationary distribution associated with $\tilde\Lambda$ and the latter corresponds to the time-reversed process. Hence, the probability of survival according to the classical ruin concept with zero initial capital and J ( 0 ) = i is given by:
$$\mathbb{P}(\tau_0^- = \infty \mid X(0) = 0,\, J(0) = i) = \begin{cases} \dfrac{\mu}{c_i}\, \dfrac{(\pi_{\tilde\Lambda})_i}{\pi_i}, & \text{if } X_i \text{ is of bounded variation with linear drift } c_i, \\[1ex] 0, & \text{if } X_i \text{ is of unbounded variation} \end{cases} \qquad (16)$$
In the case of the classical Cramér–Lundberg model ( n = 1 ), this further simplifies to the well-known expression, μ / c .
The simplicity of all the terms in Equation (16) motivates a direct probabilistic argument, which we provide in the following. Assuming that μ > 0 and X i is a bounded variation process with linear drift c i , we consider $\mathbb{P}_i(\tau_0^- > e_q) = \mathbb{P}_i(\underline{X}(e_q) = 0)$ (with an independent exponentially distributed e q ), which provides the required vector of survival probabilities upon taking q ↓ 0 . According to a standard time-reversal argument, we write:
$$\mathbb{P}_i(\underline{X}(e_q) = 0 \mid J(e_q) = j) = \tilde{\mathbb{P}}_j(X(e_q) - \overline{X}(e_q) = 0 \mid J(e_q) = i)$$
which yields:
$$\mathbb{P}_i(\tau_0^- > e_q) = \sum_j \tilde{\mathbb{P}}_j\big( X(e_q) = \overline{X}(e_q),\, J(e_q) = i \big)\, \frac{\pi_j}{\pi_i} \qquad (17)$$
Moreover:
$$\tilde{\mathbb{P}}_j\big( X(e_q) = \overline{X}(e_q),\, J(e_q) = i \big) = q\, \tilde{\mathbb{E}}_j \int_0^{e_q} \mathbf{1}\{X(t) = \overline{X}(t),\, J(t) = i\}\,\mathrm{d}t = \frac{q}{c_i}\, \tilde{\mathbb{E}}_j \int_0^{\overline{X}(e_q)} \mathbf{1}\{J(\tau_x^+) = i\}\,\mathrm{d}x$$
where the last equality follows from the structure of the sample paths (or local time at the maximum). It is known that $\overline{X}(t)/t \to \mu$ as t → ∞ , which then shows that the above expression converges to $\frac{\mu}{c_i}(\pi_{\tilde\Lambda})_i$ as q ↓ 0 , where the interchange of the limit and integral can be made precise using the generalized dominated convergence theorem. Combining this with Equation (17) yields Equation (16).

7. A Numerical Example

Let us finally consider a numerical illustration of our results for a Markov-modulated Cramér–Lundberg model Equation (1) with two states, exponential claim sizes with a mean of one in both states, premium densities c 1 = c 2 = 1 , claim arrival rates β 1 = 1 , β 2 = 0.5 , observation rates ω 1 = 0.4 , ω 2 = 0.2 and the Markov chain, J ( t ) , having transition rates of 1, 1, which results in the asymptotic drift μ = 1 / 4 > 0 . For this model, we specify the matrix-valued functions, F ( θ ) (see [7] (Proposition 4.2)) and F ^ ( θ ) , cf. Equation (9). Using the spectral method, we determine the matrices Λ and Λ ^ , and then also the matrix L according to Proposition 4.1:
$$\Lambda = \begin{pmatrix} -1.39 & 1.39 \\ 1.16 & -1.16 \end{pmatrix}, \qquad \hat\Lambda = \begin{pmatrix} -1.99 & 1.20 \\ 1.09 & -1.45 \end{pmatrix} \qquad \text{and} \qquad L = \begin{pmatrix} 2.63 & 1.47 \\ 1.47 & 2.44 \end{pmatrix}$$
We use Theorem 4.2 to compute the vector of survival probabilities for zero initial capital:
$$U = \begin{pmatrix} 1.58 & 0.58 \\ 0.53 & 1.54 \end{pmatrix}, \qquad \phi(0) = U^{-1}\mathbf{1} = \begin{pmatrix} 0.45 \\ 0.49 \end{pmatrix}$$
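The numbers above can be reproduced numerically. The following sketch (our own code, using the matrices exactly as printed, i.e., rounded to two decimals) solves the Sylvester equation (10) by vectorization and evaluates ϕ(0) = U⁻¹1:

```python
import numpy as np

# Matrices as printed above (rounded to two decimals)
Lam   = np.array([[-1.39,  1.39], [1.16, -1.16]])
LamH  = np.array([[-1.99,  1.20], [1.09, -1.45]])
L     = np.array([[ 2.63,  1.47], [1.47,  2.44]])
Delta = np.diag([0.4, 0.2])            # observation rates omega_1, omega_2

# Solve Lam @ U - U @ LamH = L @ Delta via vec(U):
# (I kron Lam - LamH^T kron I) vec(U) = vec(L Delta), column-stacked vec
n = 2
K = np.kron(np.eye(n), Lam) - np.kron(LamH.T, np.eye(n))
U = np.linalg.solve(K, (L @ Delta).flatten(order="F")).reshape((n, n), order="F")

phi0 = np.linalg.solve(U, np.ones(n))  # survival probabilities at zero capital
print(np.round(U, 2))                  # close to [[1.58, 0.58], [0.53, 1.54]]
print(np.round(phi0, 2))               # close to [0.45, 0.49]
```

Small discrepancies against the printed U are to be expected, since the input matrices are themselves rounded.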
Furthermore, Corollary 4.1 yields the vector of survival probabilities for an arbitrary initial capital, u 0 , in terms of a matrix-valued function, W ( x ) . Due to the exponential jumps, the matrix, W ( x ) , has an explicit form; see Remark 4.2. Figure 1 depicts the survival probabilities as a function of the initial capital, u.
Figure 1. Survival probabilities, ϕ 1 ( u ) and ϕ 2 ( u ) .
Figure 2 confirms the correctness of our results. It depicts ( R ( x ) 1 ) 1 (i.e., the probability of reaching level x before being observed as being ruined when starting in State 1 with zero initial capital); the dots represent Monte Carlo simulation estimates of the same quantity based on 10,000 runs, and the horizontal line represents ϕ 1 ( 0 ) = 0.45 . One sees that for large values of x, the numerical determination of R ( x ) (as well as ϕ ( x ) ) becomes a challenge, which underlines the importance of our limiting result, i.e., Theorem 4.2.
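A Monte Carlo check in the spirit of Figure 2 can be sketched as follows (our own event-driven simulation of the model above; the target level x = 30 and the run count are illustrative choices, so the estimate carries sampling error):

```python
import random

def survive_once(rng, x_target=30.0):
    """One run: start at X = 0 in state 1; return True if X reaches
    x_target before an independent Poissonian observer sees X < 0."""
    beta  = [1.0, 0.5]   # claim arrival rates per state
    omega = [0.4, 0.2]   # observation rates per state
    switch = 1.0         # transition rate of J(t) in both states
    X, i = 0.0, 0
    while True:
        rates = [beta[i], switch, omega[i]]
        total = sum(rates)
        dt = rng.expovariate(total)
        if X + dt >= x_target:           # premium density c = 1:
            return True                  # level reached before any event
        X += dt
        u = rng.random() * total
        if u < rates[0]:                 # claim with Exp(1) size
            X -= rng.expovariate(1.0)
        elif u < rates[0] + rates[1]:    # environment switch
            i = 1 - i
        elif X < 0:                      # observed while negative: ruin
            return False

rng = random.Random(7)
runs = 20000
est = sum(survive_once(rng) for _ in range(runs)) / runs
print(round(est, 2))  # should be near phi_1(0) = 0.45, up to sampling error
```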
Figure 2. The probability of reaching level x before ruin for J ( 0 ) = 1 , X ( 0 ) = 0 .

Acknowledgments

Financial support by the Swiss National Science Foundation Project 200021-124635/1 is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. H. Albrecher, E.C.K. Cheung, and S. Thonhauser. “Randomized observation times for the compound Poisson risk model: Dividends.” Astin Bull. 41 (2011): 645–672. [Google Scholar]
  2. H. Albrecher, E.C.K. Cheung, and S. Thonhauser. “Randomized observation times for the compound Poisson risk model: The discounted penalty function.” Scand. Act. J. 6 (2013): 424–452. [Google Scholar] [CrossRef] [Green Version]
  3. H. Albrecher, and V. Lautscham. “From ruin to bankruptcy for compound Poisson surplus processes.” Astin Bull. 43 (2013): 213–243. [Google Scholar] [CrossRef]
  4. A. Dassios, and S. Wu. Parisian Ruin with Exponential Claims. Report; London, UK: Department of Statistics, London School of Economics and Political Science, 2008. [Google Scholar]
  5. R. Loeffen, I. Czarna, and Z. Palmowski. “Parisian ruin probability for spectrally negative Lévy processes.” Bernoulli 19 (2013): 599–609. [Google Scholar] [CrossRef]
  6. D. Landriault, J.F. Renaud, and X. Zhou. “Occupation times of spectrally negative Lévy processes with applications.” Stoch. Process. Appl. 121 (2011): 2629–2641. [Google Scholar] [CrossRef] [Green Version]
  7. S. Asmussen, and H. Albrecher. Ruin Probabilities, 2nd ed. Advanced Series on Statistical Science & Applied Probability, 14; Hackensack, NJ, USA: World Scientific Publishing Co. Pte. Ltd., 2010. [Google Scholar]
  8. H.U. Gerber, X.S. Lin, and H. Yang. “A note on the dividends-penalty identity and the optimal dividend barrier.” Astin Bull. 36 (2006): 489–503. [Google Scholar] [CrossRef]
  9. S. Asmussen, F. Avram, and M.R. Pistorius. “Russian and American put options under exponential phase-type Lévy models.” Stoch. Process. Appl. 109 (2004): 79–111. [Google Scholar] [CrossRef]
  10. J. Ivanovs. “One-Sided Markov Additive Processes and Related Exit Problems.” Ph.D. Dissertation, Uitgeverij BOXPress, Oisterwijk, The Netherlands, 2011. [Google Scholar]
  11. A.E. Kyprianou. Introductory Lectures on Fluctuations of Lévy Processes with Applications. Berlin, Germany: Springer-Verlag, 2006. [Google Scholar]
  12. L. Breuer. “First passage times for Markov additive processes with positive jumps of phase type.” J. Appl. Probab. 45 (2008): 779–799. [Google Scholar] [CrossRef]
  13. B. D’Auria, J. Ivanovs, O. Kella, and M. Mandjes. “First passage of a Markov additive process and generalized Jordan chains.” J. Appl. Probab. 47 (2010): 1048–1057. [Google Scholar] [CrossRef]
  14. J. Ivanovs, and Z. Palmowski. “Occupation densities in solving exit problems for Markov additive processes and their reflections.” Stoch. Process. Appl. 122 (2012): 3342–3360. [Google Scholar] [CrossRef]
  15. J. Ivanovs. “A note on killing with applications in risk theory.” Insur. Math. Econ. 52 (2013): 29–33. [Google Scholar] [CrossRef]
  16. D.E. Rutherford. “On the solution of the matrix equation AX + XB = C.” Ned. Akad. Wet. 35 (1932): 54–59. [Google Scholar]
  17. E. De Souza, and S.P. Bhattacharyya. “Controllability, observability and the solution of AX - XB = C.” Linear Algebra Appl. 39 (1981): 167–188. [Google Scholar] [CrossRef]
  18. Z. Jiang, and M.R. Pistorius. “On perpetual American put valuation and first-passage in a regime-switching model with jumps.” Financ. Stoch. 12 (2008): 331–355. [Google Scholar] [CrossRef]
  19. L. Breuer. “The resolvent and expected local times for Markov-modulated Brownian motion with phase-dependent termination rates.” J. Appl. Probab. 50 (2013): 430–438. [Google Scholar] [CrossRef]
  20. L. Breuer. “Threshold dividend strategies for a Markov-additive risk model.” Eur. Actuar. J. 1 (2011): 237–258. [Google Scholar] [CrossRef]
  21. L. Breuer. “Exit problems for reflected Markov-modulated Brownian motion.” J. Appl. Probab. 49 (2012): 697–709. [Google Scholar] [CrossRef]
  22. P. Lancaster. “Explicit solutions of linear matrix equations.” SIAM Rev. 12 (1970): 544–566. [Google Scholar] [CrossRef]
  23. J. Ivanovs. "Potential measures of one-sided Markov additive processes with reflecting and terminating barriers." arXiv e-print, 2013. Available online: http://arxiv.org/abs/1309.4987 (accessed on 11 October 2013). [Google Scholar]
