On the moments and the distribution of aggregate discounted claims in a Markovian environment

This paper studies the moments and the distribution of the aggregate discounted claims (ADC) in a Markovian environment, where the claim arrivals, claim amounts and forces of interest (for discounting) are influenced by an underlying Markov process. Specifically, we assume that claims occur according to a Markovian arrival process (MAP). We show that the joint Laplace transform of the ADC occurring in each state of the environment process by any specific time satisfies a first order partial differential equation, through which a recursive formula is derived for the moments of the ADC occurring in a given subset of states. We also study two types of covariances of the ADC: between the amounts occurring in any two subsets of the state space, and between the amounts accumulated over two different time lengths. The distribution of the ADC occurring in a given subset of states by any specific time is also investigated. Numerical results are presented for a two-state Markov-modulated model.


Introduction
Consider a line of business or an insurance portfolio insured by a property and casualty insurance company. Suppose that random claims arrive in the future according to a counting process {N(t)}_{t≥0}, i.e., N(t) is the random number of claims up to time t. Assume that {T_n}_{n≥1} is the sequence of random claim occurrence times, {X_n}_{n≥1} is the sequence of corresponding random positive claim amounts (also called claim severities), and δ(t) is the force of interest at time t, modeled by a stochastic process. Then S(t), defined by

S(t) = Σ_{n=1}^{N(t)} e^{-∫_0^{T_n} δ(s) ds} X_n,  t ≥ 0,   (1)

is the aggregate discounted claims (ADC) up to time t, or the present value of the total amount paid out by the company up to time t, which describes the random discounted liability of the insurer. Throughout, {J(t)}_{t≥0} denotes an underlying Markov (environment) process with finite state space E = {1, 2, ..., m} that governs the claim arrivals and the claim amounts. Moreover, we assume that the force of interest process {δ(t)}_{t≥0} in (1) is also governed by {J(t)}_{t≥0} and is constant within each state; that is, when J(t) = i, δ(t) = δ_i (> 0), for all i ∈ E. Since the force of interest used for valuation is mainly driven by local or global economic conditions, one might argue that its random fluctuations should be modeled by a stochastic process different from {J(t)}_{t≥0}. Technically, this can be accommodated by taking a two-dimensional Markov process as the environment (background) process, with the rest of the mathematical treatment unchanged, so we make the above assumption to simplify notation and presentation. We note that studies of the influence of economic conditions such as interest and inflation on classical risk theory can be found in Taylor (1979), Delbaen and Haezendonk (1987), Willmot (1989), and Garrido and Léveille (2004).
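Model (1) in the Markov-modulated Poisson special case is straightforward to simulate, which is useful as a sanity check on the formulae derived below. The following is a minimal Monte Carlo sketch; all parameter values (the generator Q, claim rates, claim-size means and forces of interest) are hypothetical, not those of the numerical section.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical two-state Markov-modulated Poisson example (all values illustrative)
Q = np.array([[-0.5, 0.5],
              [0.3, -0.3]])        # generator of the environment J(t)
lam = np.array([1.0, 2.0])          # claim arrival rate in each state
claim_mean = np.array([1.0, 0.5])   # exponential claim-size mean in each state
delta = np.array([0.03, 0.05])      # force of interest in each state

def simulate_adc(t_max, j0=0):
    """One sample of S(t_max) = sum_n exp(-int_0^{T_n} delta(s) ds) * X_n."""
    t, j, disc, s = 0.0, j0, 0.0, 0.0    # disc = accumulated integral of delta
    while True:
        rate = -Q[j, j] + lam[j]          # total event rate: phase change or claim
        w = rng.exponential(1.0 / rate)
        if t + w >= t_max:
            break
        t += w
        disc += delta[j] * w
        if rng.random() < lam[j] / rate:  # the event is a claim
            s += np.exp(-disc) * rng.exponential(claim_mean[j])
        else:                             # the event is a phase change
            j = 1 - j
    return s

samples = np.array([simulate_adc(10.0) for _ in range(2000)])
```

The sample mean of `samples` approximates E_1[S(10)] under these illustrative parameters and can be compared against the moment formulae of Section 2.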
The MAP has received considerable attention in recent decades due to its versatility and tractability in modeling the dynamics of insurance claims. The MAP includes the Poisson process, the renewal process with phase-type inter-arrival times, and the Markov-modulated Poisson process as special cases, all of which have been intensively studied in the actuarial science literature. Detailed characteristics and properties of MAPs can be found in Neuts (1979) and Asmussen (2003). Below we present a brief review of the literature related to model (1), including its special cases.
Most of the studies on model (1) assume that {δ(t)}_{t≥0} is deterministic. For the ADC, Léveille and Garrido (2001a) give explicit expressions for its first two moments in the compound renewal risk process by using renewal theory arguments, while Léveille and Garrido (2001b) further derive a recursive formula for computing the moments. Jang (2004) obtains the Laplace transform of the distribution of the ADC using a shot noise process. Woo and Cheung (2013) derive recursive formulae for the moments of the ADC, using the techniques in Léveille and Garrido (2001b), for a renewal risk process with a certain dependence structure between the claim arrivals and the claim amounts; the impact of the dependence on the ADC is illustrated numerically. Kim and Kim (2007) derive simple expressions for the first two moments of the ADC when the rates of claim arrivals and the claim sizes depend on the states of an underlying Markov process. Ren (2008) studies the Laplace transform and the first two moments of the ADC driven by a MAP, and Li (2008) further derives a recursive formula for the moments of the discounted claims in the same model. Bargés et al. (2011) study the moments of the ADC in a compound Poisson model with dependence introduced by a Farlie-Gumbel-Morgenstern (FGM) copula; Jang and Ramli (2014) further derive a Neumann series expression for the recursive moments by the method of successive approximations.
Only a few papers in the actuarial science literature study model (1) with a stochastic process {δ(t)}_{t≥0}. Adékambi (2011, 2012) studies the covariance and the joint moments of the discounted compound renewal sum at two different times with a stochastic interest rate, where the Ho-Lee-Merton and the Vasicek interest rate models are considered. The idea of studying the covariance and the joint moments is adopted and extended in this paper. Here, we assume that the components of the ADC process {S(t)}_{t≥0} described by (1) (the number of claims, the claim sizes and the force of interest for discounting) are all influenced by the same Markovian environment process, which enhances the flexibility of the model parameter settings. It follows that S(t) depends on the trajectory of this underlying process, whose states may represent different external conditions or circumstances affecting insurance claims. The main objective of this paper is to study the moments and the distribution of S(t) given in (1), occurring in certain states (i.e., under certain conditions) by time t.
In general, while the expectation of S(t) at any given time t can be used as a reference for the insurer's liability, the higher moments of S(t), which describe further characteristics of the random variable such as the variability around the mean and the potential magnitude of extreme outcomes, may be used to determine margins on reserves. Furthermore, distributional results for S(t) are useful for obtaining risk measures such as the value at risk and the conditional tail expectation, which may help insurers prevent or minimize losses from extreme cases.
Our work generalizes several of the aforementioned studies. We first obtain formulae for calculating the mean, variance and distribution of the ADC occurring in a subset of states by a certain time. The subset may represent a collection of similar conditions that the insurer considers as a whole. We then derive explicit matrix-analytic expressions for the covariance of the ADC occurring in two subsets of the state space at a certain time, and for the covariance of the ADC occurring in a given subset of states over two different time lengths. The motivation for studying these two types of covariance is that they reveal the correlation between the random discounted sums, either across different underlying conditions or across different time lengths; this information helps insurers set capital requirements against future losses and make strategic and contingency plans.
Moreover, we obtain a matrix form partial integro-differential equation satisfied by the distribution function of the ADC occurring in a given subset of states. The equation can be solved numerically to obtain the probability distribution of the ADC, which in turn is useful for measuring insurers' insolvency risks.
The rest of the paper is organized as follows. In Section 2, we study the joint Laplace transforms of the ADC occurring in each state by time t, with attention to some special cases, and obtain recursive formulae for the moments of the ADC occurring in a given subset of states. A formula for the covariance of the ADC occurring in two subsets of the state space is derived in Section 3, while the covariance of the ADC over two different time lengths is studied in Section 4. The distribution of the ADC occurring in a given subset of states is investigated in Section 5. Finally, some numerical illustrations are presented in Section 6.

The Laplace transforms and moments
We first decompose S(t) into m components:

S(t) = Σ_{j=1}^m S_j(t), where S_j(t) = Σ_{n=1}^{N(t)} e^{-∫_0^{T_n} δ(s) ds} X_n I(J(T_n) = j)

is the ADC occurring in state j ∈ E, with I(·) being the indicator function. For a given subset E_k = {l_1, l_2, ..., l_k} ⊆ E, define

S_{E_k}(t) = Σ_{j∈E_k} S_j(t)

to be the ADC occurring in the subset E_k of the state space. In particular, S_E(t) = S(t). Similarly, N_{E_k}(t) = Σ_{j∈E_k} N_j(t) is the number of claims occurring in the sub-state space E_k by time t.
Let P_i and E_i denote conditional probability and conditional expectation given J(0) = i, respectively, and define

_iL(ξ_1, ξ_2, ..., ξ_m; t) = E_i[exp(-Σ_{j=1}^m ξ_j S_j(t))]

to be the joint Laplace transform of S_1(t), S_2(t), ..., S_m(t), given that the initial state is i.
In particular, setting ξ_j = ξ for j ∈ E_k and ξ_j = 0 otherwise yields _iL_{E_k}(ξ; t) = E_i[e^{-ξ S_{E_k}(t)}], the Laplace transform of S_{E_k}(t). We define, for n ∈ N_+, the n-th moments of S(t), S_j(t), and S_{E_k}(t), respectively, as

_iV_n(t) = E_i[S(t)^n],  _iV_{n,j}(t) = E_i[S_j(t)^n],  _iV_{n,E_k}(t) = E_i[S_{E_k}(t)^n],

given that the initial state is i.
We write L(ξ_1, ξ_2, ..., ξ_m; t) and L_{E_k}(ξ; t) for the column vectors whose i-th entries are the Laplace transforms _iL(ξ_1, ξ_2, ..., ξ_m; t) and _iL_{E_k}(ξ; t), respectively. In this section, we first show that L(ξ_1, ξ_2, ..., ξ_m; t) satisfies a matrix form first order linear partial differential equation, and then derive recursive formulae for calculating the moments of the various ADC depending on the initial state of the underlying Markov process. We also consider some special cases.
Theorem 1 L(ξ_1, ξ_2, ..., ξ_m; t) satisfies the first order linear partial differential equation

∂L(ξ_1, ..., ξ_m; t)/∂t = -Δ_δ Σ_{j=1}^m ξ_j ∂L(ξ_1, ..., ξ_m; t)/∂ξ_j + (D_0 + D_1 f̂(ξ_1, ..., ξ_m)) L(ξ_1, ..., ξ_m; t),   (2)

where Δ_δ = diag(δ_1, ..., δ_m) and f̂(ξ_1, ..., ξ_m) = diag(f̂_1(ξ_1), ..., f̂_m(ξ_m)), with f̂_j the Laplace transform of the claim size density f_j in state j.

Proof: For an infinitesimal h > 0, condition on the three possible events which can occur in [0, h]: no change in the MAP phase (state), a change in the MAP phase with no claim, and a change in the MAP phase accompanied by a claim. Noting that claims occurring after time h are further discounted by e^{-δ_i h} when J(0) = i, this gives

_iL(ξ_1, ..., ξ_m; t) = (1 + [D_0]_{ii} h) _iL(e^{-δ_i h} ξ_1, ..., e^{-δ_i h} ξ_m; t - h) + h Σ_{j≠i} [D_0]_{ij} _jL(e^{-δ_i h} ξ_1, ..., e^{-δ_i h} ξ_m; t - h) + h Σ_{j=1}^m [D_1]_{ij} f̂_j(ξ_j) _jL(e^{-δ_i h} ξ_1, ..., e^{-δ_i h} ξ_m; t - h) + o(h).   (3)

Taylor's expansion gives

_iL(e^{-δ_i h} ξ_1, ..., e^{-δ_i h} ξ_m; t - h) = _iL(ξ_1, ..., ξ_m; t) - h ∂_t {}_iL - h δ_i Σ_{j=1}^m ξ_j ∂_{ξ_j} {}_iL + o(h),   (4)

where lim_{h→0}(o(h)/h) = 0. Substituting (4) into (3), dividing both sides by h, and letting h → 0 gives the equation for each i ∈ E; rewriting in matrix form gives (2). 2

Remark 1 Using the same argument, we have the following results.
(1) L_{E_k}(ξ; t) satisfies the first order linear partial differential equation

∂L_{E_k}(ξ; t)/∂t = -ξ Δ_δ ∂L_{E_k}(ξ; t)/∂ξ + (D_0 + D_1 f̂_{E_k}(ξ)) L_{E_k}(ξ; t),   (5)

where f̂_{E_k}(ξ) is an m × m diagonal matrix with the l_i-th entry being f̂_{l_i}(ξ), for i = 1, 2, ..., k, and all other diagonal entries being 1.
(2) L(ξ; t) = L_E(ξ; t) satisfies (5) with f̂_E(ξ) = diag(f̂_1(ξ), ..., f̂_m(ξ)).

We now study the moments of the ADC considered in Theorem 1. Denote by V_n(t), V_{n,j}(t) and V_{n,E_k}(t) the column vectors whose i-th entries are the n-th moments _iV_n(t), _iV_{n,j}(t) and _iV_{n,E_k}(t), respectively. From equation (5), we obtain in Theorem 2 a matrix form first order differential equation satisfied by the moments V_{n,E_k}(t) of S_{E_k}(t), and then in Theorem 3 obtain recursive formulae for calculating them.
Theorem 2 For n ∈ N_+, V_{n,E_k}(t) satisfies the matrix form first order linear differential equation

dV_{n,E_k}(t)/dt = (D_0 + D_1 - nΔ_δ) V_{n,E_k}(t) + Σ_{l=1}^n C(n,l) D_1 I_{E_k} Δ_{μ_l} V_{n-l,E_k}(t),   (6)

with initial conditions V_{n,E_k}(0) = 0 and V_{0,E_k}(t) = 1. Here C(n,l) denotes the binomial coefficient, Δ_{μ_l} = diag(μ_{1,l}, ..., μ_{m,l}) with μ_{j,l} the l-th moment of the claim size distribution in state j, 1 = (1, 1, ..., 1)^⊤ ∈ R^m, and I_{E_k} is an m × m diagonal matrix with the l_i-th entry being 1, for i = 1, 2, ..., k, and all other diagonal entries being 0.

Corollary 1
We have the following results for the moments of S(t) and S j (t).
(i) V_n(t) satisfies the matrix form first order linear differential equation:

dV_n(t)/dt = (D_0 + D_1 - nΔ_δ) V_n(t) + Σ_{l=1}^n C(n,l) D_1 Δ_{μ_l} V_{n-l}(t),

where V_n(0) = 0 and V_0(t) = 1. In particular, V_1(t) satisfies

dV_1(t)/dt = (D_0 + D_1 - Δ_δ) V_1(t) + D_1 Δ_{μ_1} 1.

(ii) V_{n,j}(t) satisfies the matrix form first order linear differential equation:

dV_{n,j}(t)/dt = (D_0 + D_1 - nΔ_δ) V_{n,j}(t) + Σ_{l=1}^n C(n,l) D_1 I_j Δ_{μ_l} V_{n-l,j}(t),

where I_j = I_{{j}} is a diagonal matrix with the j-th entry being 1, and 0 otherwise, V_{n,j}(0) = 0 and V_{0,j}(t) = 1. In particular, V_{1,j}(t) satisfies

dV_{1,j}(t)/dt = (D_0 + D_1 - Δ_δ) V_{1,j}(t) + D_1 I_j Δ_{μ_1} 1.

Solving differential equation (6) with V_{n,E_k}(0) = 0, we obtain the following recursive formulae for V_{n,E_k}(t).
Theorem 3 For t > 0 and n ∈ N_+, we have

V_{n,E_k}(t) = Σ_{l=1}^n C(n,l) ∫_0^t e^{(D_0+D_1-nΔ_δ)(t-s)} D_1 I_{E_k} Δ_{μ_l} V_{n-l,E_k}(s) ds.

In particular,

V_{1,E_k}(t) = ∫_0^t e^{(D_0+D_1-Δ_δ)(t-s)} ds D_1 I_{E_k} Δ_{μ_1} 1.

Corollary 2 If we set E_k = E and E_k = {j} in Theorem 3, we have the following recursive formulae for the moments of S(t) and S_j(t):

V_n(t) = Σ_{l=1}^n C(n,l) ∫_0^t e^{(D_0+D_1-nΔ_δ)(t-s)} D_1 Δ_{μ_l} V_{n-l}(s) ds,
V_{n,j}(t) = Σ_{l=1}^n C(n,l) ∫_0^t e^{(D_0+D_1-nΔ_δ)(t-s)} D_1 I_j Δ_{μ_l} V_{n-l,j}(s) ds.

In particular,

V_1(t) = ∫_0^t e^{(D_0+D_1-Δ_δ)(t-s)} ds D_1 Δ_{μ_1} 1,  V_{1,j}(t) = ∫_0^t e^{(D_0+D_1-Δ_δ)(t-s)} ds D_1 I_j Δ_{μ_1} 1.

Remark 2 When t → ∞, since all eigenvalues of D_0 + D_1 - nΔ_δ have negative real parts for δ_i > 0, we have the following asymptotic results for the moments of the ADC, for n ∈ N_+:

V_{n,E_k}(∞) = Σ_{l=1}^n C(n,l) (nΔ_δ - D_0 - D_1)^{-1} D_1 I_{E_k} Δ_{μ_l} V_{n-l,E_k}(∞).
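For the case E_k = E, the recursion of Theorem 3 can be implemented by numerical quadrature, evaluating the matrix exponential at each grid point. A minimal sketch under stated assumptions (the two-state MAP below is hypothetical, with exponential state-wise claim sizes so that the l-th claim moment is l! times the mean to the l-th power):

```python
import numpy as np
from scipy.linalg import expm
from math import comb, factorial

# Hypothetical two-state MAP (rows of D0 + D1 sum to zero)
D1 = np.diag([1.0, 2.0 / 3.0])
D0 = np.array([[-1.5, 0.5],
               [0.3, -29.0 / 30.0]])
delta = np.diag([0.03, 0.05])
means = np.array([1.0, 2.0])                                   # exponential claim means
mu = {l: np.diag(factorial(l) * means ** l) for l in (1, 2)}   # l-th claim moments

grid = np.linspace(0.0, 10.0, 101)
ds = grid[1] - grid[0]

def moments(n_max=2):
    """V[n][it] ~ V_n(grid[it]) from the recursion
       V_n(t) = sum_l C(n,l) int_0^t e^{(D0+D1-n*delta)(t-s)} D1 mu_l V_{n-l}(s) ds."""
    V = {0: np.ones((len(grid), 2))}
    for n in range(1, n_max + 1):
        A = D0 + D1 - n * delta
        Vn = np.zeros((len(grid), 2))
        for it, t in enumerate(grid):
            g = np.array([sum(comb(n, l) * expm(A * (t - s)) @ D1 @ mu[l] @ V[n - l][i]
                              for l in range(1, n + 1))
                          for i, s in enumerate(grid[:it + 1])])
            Vn[it] = ds * (g.sum(axis=0) - 0.5 * (g[0] + g[-1]))  # trapezoid rule
        V[n] = Vn
    return V

V = moments()
variance = V[2][-1] - V[1][-1] ** 2    # Var_i[S(10)] for each initial state i
```

For n = 1 the integral also has the closed form (D_0 + D_1 - Δ_δ)^{-1}(e^{(D_0+D_1-Δ_δ)t} - I) D_1 Δ_{μ_1} 1, which can serve as a check on the quadrature.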

The covariance of ADC occurred in two sub-state spaces
In this section, we first calculate the joint moment of the ADC occurring in two subsets of the state space, from which the covariance between them can be calculated.
For 1 ≤ l_1 < l_2 < ... < l_k ≤ m and 1 ≤ n_1 < n_2 < ... < n_j ≤ m, where 2 ≤ k + j ≤ m, denote E_k = {l_1, l_2, ..., l_k} and E_j = {n_1, n_2, ..., n_j}, two disjoint subsets of E, i.e., E_k ∩ E_j = ∅. The ADC occurring in E_k and E_j are S_{E_k}(t) and S_{E_j}(t), respectively. Define

_iL_{E_k,E_j}(ξ_k, ξ_j; t) = E_i[e^{-ξ_k S_{E_k}(t) - ξ_j S_{E_j}(t)}]

to be the joint Laplace transform of S_{E_k}(t) and S_{E_j}(t), and let L_{E_k,E_j}(ξ_k, ξ_j; t) be the column vector with the i-th entry being _iL_{E_k,E_j}(ξ_k, ξ_j; t). Moreover, let

_iV_{E_k,E_j}(t) = E_i[S_{E_k}(t) S_{E_j}(t)]

be the joint moment of S_{E_k}(t) and S_{E_j}(t), and denote by V_{E_k,E_j}(t) the m × 1 column vector with i-th entry _iV_{E_k,E_j}(t).

It follows from (2) that

∂L_{E_k,E_j}(ξ_k, ξ_j; t)/∂t = -Δ_δ (ξ_k ∂/∂ξ_k + ξ_j ∂/∂ξ_j) L_{E_k,E_j}(ξ_k, ξ_j; t) + (D_0 + D_1 f̂_{E_k,E_j}(ξ_k, ξ_j)) L_{E_k,E_j}(ξ_k, ξ_j; t),   (8)

where f̂_{E_k,E_j}(ξ_k, ξ_j) is a diagonal matrix with the l_i-th entry being f̂_{l_i}(ξ_k), for i = 1, 2, ..., k, the n_i-th entry being f̂_{n_i}(ξ_j), for i = 1, 2, ..., j, and all other diagonal entries being 1.
Taking partial derivatives with respect to ξ_k and ξ_j on both sides of equation (8), setting ξ_k = ξ_j = 0, and noting that ∂²L_{E_k,E_j}/∂ξ_k ∂ξ_j |_{ξ_k=ξ_j=0} = V_{E_k,E_j}(t), we obtain the following matrix form first order differential equation for V_{E_k,E_j}(t):

dV_{E_k,E_j}(t)/dt = (D_0 + D_1 - 2Δ_δ) V_{E_k,E_j}(t) + D_1 I_{E_k} Δ_{μ_1} V_{1,E_j}(t) + D_1 I_{E_j} Δ_{μ_1} V_{1,E_k}(t).   (9)

Solving it gives

V_{E_k,E_j}(t) = ∫_0^t e^{(D_0+D_1-2Δ_δ)(t-s)} [D_1 I_{E_k} Δ_{μ_1} V_{1,E_j}(s) + D_1 I_{E_j} Δ_{μ_1} V_{1,E_k}(s)] ds.   (10)

When t → ∞, we have the expression below for the joint moment of S_{E_k}(∞) and S_{E_j}(∞):

V_{E_k,E_j}(∞) = (2Δ_δ - D_0 - D_1)^{-1} [D_1 I_{E_k} Δ_{μ_1} V_{1,E_j}(∞) + D_1 I_{E_j} Δ_{μ_1} V_{1,E_k}(∞)].

In particular, when t → ∞, the joint moment of S_k(∞) and S_j(∞) is obtained by taking E_k = {k} and E_j = {j} above.
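Formula (10) lends itself to direct numerical evaluation once the first moments are available in closed form. Below is a quadrature sketch for the two singleton subsets E_k = {1} and E_j = {2} of a hypothetical two-state model (all parameter values illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state MMPP (illustrative parameters)
D1 = np.diag([1.0, 2.0 / 3.0])
D0 = np.array([[-1.5, 0.5],
               [0.3, -29.0 / 30.0]])
delta = np.diag([0.03, 0.05])
mu1 = np.diag([1.0, 2.0])                  # first claim moments per state
I1, I2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
ones, Id = np.ones(2), np.eye(2)
A1 = D0 + D1 - delta                        # governs the first moments
A2 = D0 + D1 - 2.0 * delta                  # governs the joint moment

def V1(t, Ik):
    # V_{1,Ek}(t) = A1^{-1} (e^{A1 t} - I) D1 I_Ek mu1 1  (closed form)
    return np.linalg.solve(A1, (expm(A1 * t) - Id) @ D1 @ Ik @ mu1 @ ones)

def V12(t, n=400):
    # (10): V_{Ek,Ej}(t) = int_0^t e^{A2(t-s)} [D1 I1 mu1 V_{1,{2}}(s)
    #                                           + D1 I2 mu1 V_{1,{1}}(s)] ds
    s = np.linspace(0.0, t, n + 1)
    g = np.array([expm(A2 * (t - si)) @ (D1 @ I1 @ mu1 @ V1(si, I2)
                                         + D1 @ I2 @ mu1 @ V1(si, I1))
                  for si in s])
    ds = s[1] - s[0]
    return ds * (g.sum(axis=0) - 0.5 * (g[0] + g[-1]))   # trapezoid rule

cov = V12(5.0) - V1(5.0, I1) * V1(5.0, I2)   # Cov_i(S_1(5), S_2(5)) per initial state
```

The last line computes the covariance elementwise over the initial state i, as in Section 6.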

Remark 4 If the two subsets of interest are not disjoint, the covariance of the corresponding ADC can still be computed: writing each subset as the union of the common part and the remaining disjoint parts and expanding the covariance by bilinearity, all the covariance terms in the resulting expression are for ADC occurring in two disjoint sets.

The covariance of the ADC with two different time lengths
In this section, we investigate the covariance of the ADC accumulated over two (overlapping) time periods, i.e., we want to evaluate Cov_i(S_{E_k}(t), S_{E_k}(t + h)) for t, h > 0 and E_k = {l_1, l_2, ..., l_k} with k ≤ m.
Define F_t = σ(S(v); 0 ≤ v ≤ t) to be the σ-algebra generated by the ADC process up to time t.
Using the law of iterated expectation, we can express the joint moment E_i[S_{E_k}(t) S_{E_k}(t + h)] through S_{E_k}(t, t + h), the present value at time t of the claims occurring in states within E_k over (t, t + h]. Conditioning on the events that may occur over an infinitesimal interval (0, Δt) yields a matrix form differential equation for M_{E_k}(t). It is easy to show that v(t) = e^{(D_0+D_1-Δ_δ)t}, where Δ_δ = diag(δ_1, ..., δ_m), with v(0) = I and v(∞) = 0; solving the differential equation then gives M_{E_k}(t). Let q_{i,j}(t) = P_i(J(t) = j) and Q(t) = (q_{i,j}(t))_{m×m} be the transition matrix of the underlying Markov process {J(t)}_{t≥0} at time t. It follows from Ren (2008) that Q(t) = e^{(D_0+D_1)t}.
Denote by R_{E_k}(t, t + h) the column vector with the i-th entry being the joint moment E_i[S_{E_k}(t) S_{E_k}(t, t + h)]. It follows from (9) and (10) that

R_{E_k}(t, t + h) = (M_{E_k} ∘ Q)(t) V_{1,E_k}(h),   (13)

where (M_{E_k} ∘ Q)(t) is the Hadamard product of M_{E_k}(t) and Q(t), i.e., the matrix whose (i, j)-th element is the product of the (i, j)-th elements of M_{E_k}(t) and Q(t). Remark 5 If E_k = E or E_k = {k}, formula (13) reduces to the joint moment of S(t) and S(t + h), or of S_k(t) and S_k(t + h), respectively.

The distributions of the ADC
In this section, we investigate the distribution of S_{E_k}(t) and its two special cases, S(t) and S_k(t), for E_k = {l_1, l_2, ..., l_k} ⊆ E. To proceed, we define, for x ≥ 0 and i ∈ E,

G_i^{E_k}(x, t) = P_i(S_{E_k}(t) ≤ x).

Here N_k(t) = Σ_{l=1}^{N(t)} I(J(T_l) = k) is the number of claims occurring in state k and N_{E_k}(t) = Σ_{j∈E_k} N_j(t) is the number of claims occurring in the subset E_k. Denote by G_{E_k}(x, t) the column vector with the i-th entry being G_i^{E_k}(x, t). We show in the theorem below that G_{E_k}(x, t) satisfies a first order partial integro-differential equation.
Theorem 4 G_{E_k}(x, t) satisfies the following matrix form first order partial integro-differential equation:

∂G_{E_k}(x, t)/∂t = Δ_δ x ∂G_{E_k}(x, t)/∂x + D_0 G_{E_k}(x, t) + D_1 (I - I_{E_k}) G_{E_k}(x, t) + D_1 I_{E_k} ∫_0^x Δ_f(y) G_{E_k}(x - y, t) dy,   (14)

where Δ_δ = diag(δ_1, ..., δ_m) and Δ_f(y) = diag(f_1(y), ..., f_m(y)), with initial conditions

G_{E_k}(x, 0) = 1 for x ≥ 0,   (15)

where G_{E_k}(0, t) is the solution of the differential equation obtained from (14) by setting x = 0. Proof: Using the same arguments as in Section 2, we condition on the events that may occur over (0, h], noting that, given no claim in E_k over (0, h] and J(0) = i, S_{E_k}(t) ≤ x is equivalent to the restarted ADC being at most xe^{δ_i h}. Applying Taylor's expansion, substituting, rearranging terms, dividing both sides by h, and taking the limit as h → 0 give the equations for i ∈ E_k and i ∉ E_k, which can be expressed in the matrix form (14). 2

Remark 6
If we set E_k = E and E_k = {k}, respectively, we have the following results:

∂G(x, t)/∂t = Δ_δ x ∂G(x, t)/∂x + D_0 G(x, t) + D_1 ∫_0^x Δ_f(y) G(x - y, t) dy,   (18)

∂G_k(x, t)/∂t = Δ_δ x ∂G_k(x, t)/∂x + D_0 G_k(x, t) + D_1 (I - I_k) G_k(x, t) + D_1 I_k ∫_0^x Δ_f(y) G_k(x - y, t) dy,   (19)

with initial conditions G(x, 0) = G_k(x, 0) = 1 for x ≥ 0. Here G_k(0, t) is the solution of the differential equation obtained from (19) by setting x = 0.

Remark 7
The matrix form partial integro-differential equation (14), with the corresponding initial condition (15), may be solved numerically, for example by discretizing x and t with finite differences and approximating the convolution integral by a quadrature rule.
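For instance, for the special case E_k = E of a two-state Markov-modulated Poisson model, an explicit upwind finite-difference scheme with a trapezoidal approximation of the convolution term might look as follows (the generator Q below is a hypothetical choice; the claim densities match the exponential forms used in the numerical section):

```python
import numpy as np

lam = np.array([1.0, 2.0 / 3.0])
delta = np.array([0.03, 0.05])
Q = np.array([[-0.5, 0.5],
              [0.3, -0.3]])          # hypothetical environment generator
D1 = np.diag(lam)                    # MMPP: claims arrive without a phase change
D0 = Q - D1

x_max, nx = 25.0, 251
x = np.linspace(0.0, x_max, nx)
dx = x[1] - x[0]
f = np.vstack([np.exp(-x), 0.5 * np.exp(-0.5 * x)])   # f_1, f_2 sampled on the grid

def solve_G(t_end, dt=0.002):
    """Explicit Euler in t, upwind in x, for
       dG/dt = delta_i * x * dG/dx + (D0 G)(x) + (D1 (f * G))(x)."""
    G = np.ones((2, nx))             # G_i(x, 0) = 1 since S(0) = 0
    for _ in range(int(round(t_end / dt))):
        # forward (upwind) difference for the transport term delta * x * dG/dx
        dGdx = np.empty_like(G)
        dGdx[:, :-1] = (G[:, 1:] - G[:, :-1]) / dx
        dGdx[:, -1] = 0.0
        # trapezoidal discretization of the convolution int_0^x f_j(y) G_j(x-y) dy
        conv = np.vstack([dx * (np.convolve(f[j], G[j])[:nx]
                                - 0.5 * (f[j, 0] * G[j] + G[j, 0] * f[j]))
                          for j in range(2)])
        G = G + dt * (delta[:, None] * x * dGdx + D0 @ G + D1 @ conv)
    return G

G = solve_G(1.0)    # G[i] approximates x -> G_{i+1}(x, 1)
```

Note that at x = 0 the convolution vanishes and the scheme reduces to dG(0, t)/dt = D_0 G(0, t), which is the x = 0 boundary equation mentioned above.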

Remark 9
Li et al. (2015) show that, when δ(s) = 0, G i (x, t) can be used to find an expression for the density of the time of ruin in a MAP risk model.

Numerical Illustrations
In this section, we consider a two-state Markov-modulated Poisson model. We assume that f_1(x) = e^{-x}, f_2(x) = 0.5e^{-0.5x}, x > 0, λ_1 = 1, λ_2 = 2/3, δ_1 = 0.03, δ_2 = 0.05.

Table 1 gives the first moments of S_1(t) and S_2(t) and their covariance for t = 1, 2, 5, 10, 20, 30, ∞, given J(0) = 1 and J(0) = 2, respectively, in which the covariances, for i = 1, 2, are calculated by

Cov_i(S_1(t), S_2(t)) = _iV_{{1},{2}}(t) - _iV_{1,1}(t) _iV_{1,2}(t).

It shows that, as expected, the expected values of S_1(t) and S_2(t) (and hence S(t)) are increasing in t, given J(0) = i for i = 1, 2. It is not surprising that S_1(t) and S_2(t) are negatively correlated for any t, as claims occurring in the two states compete with each other. Moreover, the larger the time t, the stronger the negative correlation between S_1(t) and S_2(t).

Table 1: Expected values and covariances of S_1(t) and S_2(t)

Figure 1 plots the variances of S(t), S_1(t), and S_2(t), given J(0) = 1. The variances all increase with time t. The variance of S(t) is bigger than those of S_1(t) and S_2(t) for fixed t. As t goes to ∞, the three variances converge.

Tables 2 and 3 display the covariances of the ADC at times t and t + h, given J(0) = 1, for selected values of t and for h = 1 and h = 5. It is shown that S(t) and S(t + h), S_1(t) and S_1(t + h), and S_2(t) and S_2(t + h) are all positively correlated. Moreover, when t increases, the covariances increase, and when h increases, the covariances decrease. When t → ∞, the covariances of the pairs S(t) and S(t + h), and S_i(t) and S_i(t + h), converge to the variances of S(∞) and S_i(∞), respectively. Similar patterns should be expected for J(0) = 2.
Table 2: Covariances Cov_1(S(t), S(t + 1)), Cov_1(S_1(t), S_1(t + 1)) and Cov_1(S_2(t), S_2(t + 1)) of discounted claims at t and t + 1, given J(0) = 1

Table 3: Covariances Cov_1(S(t), S(t + 5)), Cov_1(S_1(t), S_1(t + 5)) and Cov_1(S_2(t), S_2(t + 5)) of discounted claims at t and t + 5, given J(0) = 1

Finally, we display in Figure 2 the numerical values of the distribution function of S(t) with initial state i, G_i(x, t) = P_i(S(t) ≤ x), for t = 1 and 4, 0 ≤ x ≤ 25, and i = 1, 2. Note that G(x, t) = (G_1(x, t), G_2(x, t))^⊤ satisfies the partial integro-differential equation (18); its solution can be obtained numerically. The graph clearly shows that, as expected, the probability of S(t) exceeding a fixed x is smaller for small values of t. For most values of x, G_1(x, t) is bigger than G_2(x, t), due to the fact that the underlying Markov process in our example tends to stay in state 1 more often than in state 2.