
On the Moments and the Distribution of Aggregate Discounted Claims in a Markovian Environment

Shuanming Li 1 and Yi Lu 2,*
1 Centre for Actuarial Studies, Department of Economics, The University of Melbourne, Melbourne 3010, Australia
2 Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, BC V5A 1S6, Canada
* Author to whom correspondence should be addressed.
Risks 2018, 6(2), 59; https://doi.org/10.3390/risks6020059
Submission received: 7 May 2018 / Revised: 17 May 2018 / Accepted: 21 May 2018 / Published: 23 May 2018
(This article belongs to the Special Issue Risk, Ruin and Survival: Decision Making in Insurance and Finance)

Abstract

This paper studies the moments and the distribution of the aggregate discounted claims (ADC) in a Markovian environment, where the claim arrivals, claim amounts, and forces of interest (for discounting) are influenced by an underlying Markov process. Specifically, we assume that claims occur according to a Markovian arrival process (MAP). The paper shows that the vector of joint Laplace transforms of the ADC occurring in each state of the environment process by any specific time satisfies a matrix-form first-order partial differential equation, through which a recursive formula is derived for the moments of the ADC occurring in a certain subset of states. We also study two types of covariances of the ADC: between the amounts occurring in any two subsets of the state space, and between the amounts accumulated over two different time lengths. The distribution of the ADC occurring in certain states by any specific time is also investigated. Numerical results are presented for a two-state Markov-modulated model.

1. Introduction

Consider a line of business or an insurance portfolio insured by a property and casualty insurance company. Suppose that random claims arrive in the future according to a counting process, denoted by $\{N(t)\}_{t\geq 0}$; i.e., $N(t)$ is the random number of claims up to time $t$. Assume that $\{T_n\}_{n\geq 1}$ is the sequence of random claim occurrence times, $\{X_n\}_{n\geq 1}$ is the sequence of corresponding random positive claim amounts (also called claim severities), and $\delta(t)$ is the force of interest at time $t$, modeled by a stochastic process. Then $S(t)$, defined by
$$S(t) = \sum_{n=1}^{N(t)} X_n\, e^{-\int_0^{T_n}\delta(s)\,ds}, \qquad t\geq 0, \qquad\qquad (1)$$
is the aggregate discounted claims (ADC) up to time $t$, that is, the present value of the total amount paid out by the company up to time $t$; it describes the random evolution over time of the insurer's future liabilities valued at the present time. Accordingly, $\{S(t)\}_{t\geq 0}$ is the ADC process (compound discounted claims) for this business. At a fixed time $t$, the randomness of $S(t)$ comes from the number of claims up to time $t$, the claim occurrence times and the corresponding claim sizes, as well as the values of $\delta(s)$, $0\leq s\leq t$. It is an important quantity in the sense that, at the time of issue ($t = 0$), it helps insurers set a premium for this particular line of business, and predict and manage their future liabilities.
A simple case of model (1) is one in which the counting process $\{N(t)\}_{t\geq 0}$ is a homogeneous Poisson process, independent of the claim amounts, and the force of interest is deterministic. In this paper, we assume that the counting process $\{N(t)\}_{t\geq 0}$ is a Markovian arrival process (MAP) with representation $(\gamma, D_0, D_1)$, introduced by Neuts (1979). That is, claim arrivals are influenced by an underlying continuous-time Markov process $\{J(t)\}_{t\geq 0}$ on state space $E = \{1, 2, \ldots, m\}$ with an $m\times m$ intensity matrix $D$ and initial distribution $\gamma$, where $D = D_0 + D_1$ with $D_0 = (d_{0,ij})$ and $D_1 = (d_{1,ij})$, and $D$ is assumed to be irreducible. Precisely, $d_{0,ij}$ represents the intensity of transitions from state $i$ to state $j$ without claim arrivals, while $d_{1,ij}\,(\geq 0)$ represents the intensity of transitions from state $i$ to state $j$ with an accompanying claim, whose amount has cumulative distribution function $F_i$, density function $f_i$, $k$-th moment $\mu_i^{(k)}$, and Laplace transform $\hat f_i(s) = \int_0^\infty e^{-sx} f_i(x)\,dx$. Here, the process $\{J(t)\}_{t\geq 0}$ models the random environment, which affects the frequency and the severity of claims and thus the insurance business; for example, it is well known that weather and climate conditions have an impact on automobile, property, and casualty insurance claims.
Moreover, we assume that the force of interest process $\{\delta(t)\}_{t\geq 0}$ in (1) is governed by the same Markov process $\{J(t)\}_{t\geq 0}$ and is constant while the process stays in a given state; that is, when $J(t) = i$, $\delta(t) = \delta_i\,(>0)$, for all $i\in E$. As the force of interest used for valuation is mainly driven by local or global economic conditions, one could reasonably model its random fluctuations by a stochastic process different from $\{J(t)\}_{t\geq 0}$. Technically, one could then take a two-dimensional Markov process as the environment (or background) process, and the mathematical treatment would be the same as below. Hence, we make the above assumption in this paper to simplify notation and presentation. We note that studies of the influence of economic conditions such as interest and inflation on classical risk theory can be found in Taylor (1979), Delbaen and Haezendonck (1987), Willmot (1989), and Garrido and Léveillé (2004).
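For readers who wish to experiment with model (1), the following minimal sketch simulates paths of $S(t)$ for the two-state Markov-modulated special case used in the numerical illustration of Section 6 (claims arrive at rate $\lambda_i$ and carry exponential sizes while $J(t) = i$). It is our own illustration, not part of the original analysis; the function name `simulate_S` and the Monte Carlo check are hypothetical, and NumPy is assumed to be available.

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters of the two-state Markov-modulated example in Section 6:
A = np.array([[-0.25, 0.25], [0.75, -0.75]])  # environment intensity matrix
lam = np.array([1.0, 2.0 / 3.0])              # claim arrival rate in each state
mean = np.array([1.0, 2.0])                   # exponential claim-size means
delta = np.array([0.03, 0.05])                # force of interest in each state

def simulate_S(t_max, j0=0):
    """One path of the aggregate discounted claims S(t_max), starting in state j0."""
    t, state, int_delta, S = 0.0, j0, 0.0, 0.0
    while True:
        exit_rate = -A[state, state] + lam[state]   # total event rate in `state`
        dt = rng.exponential(1.0 / exit_rate)
        if t + dt > t_max:                          # no further claim before t_max
            return S
        t += dt
        int_delta += delta[state] * dt              # accumulate int_0^t delta(s) ds
        if rng.random() < lam[state] / exit_rate:   # the event is a claim
            S += rng.exponential(mean[state]) * np.exp(-int_delta)
        else:                                       # the event is a state switch
            state = 1 - state

# Crude Monte Carlo estimate of E_1[S(10)] = E_1[S_1(10)] + E_1[S_2(10)]:
print(np.mean([simulate_S(10.0, j0=0) for _ in range(20000)]))
```

The estimate can be set against the matrix-analytic first moments derived in Section 2 (cf. Table 1).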
The MAP has received considerable attention in recent decades due to its versatility in modeling the dynamics of stochastic insurance claims. MAPs include Poisson processes, renewal processes whose inter-arrival times follow phase-type distributions, and Markov-modulated Poisson processes as special cases, all of which are intensively studied in the actuarial science literature. Detailed characteristics and properties of MAPs can be found in Neuts (1979) and Asmussen (2003). Below, we present a brief review of the literature related to models given by Equation (1) (including its special cases).
Most studies of model (1) assume that $\{\delta(t)\}_{t\geq 0}$ is deterministic. For the ADC, Léveillé and Garrido (2001a) give explicit expressions for its first two moments in the compound renewal risk process by using renewal theory arguments, while Léveillé and Garrido (2001b) further derive a recursive formula for the moment calculation. Léveillé et al. (2010) study the moment generating function (mgf) of the ADC over finite and infinite time horizons under a renewal risk model or a delayed renewal risk model. Recently, Wang et al. (2018) studied the distribution of discounted compound phase-type renewal sums using the analytical results for the mgf obtained by Léveillé et al. (2010). Jang (2004) obtains the Laplace transform of the distribution of the ADC using a shot noise process. Woo and Cheung (2013) derive recursive formulas for the moments of the ADC, using techniques of Léveillé and Garrido (2001b), for a renewal risk process with a certain dependence between the claim inter-arrival times and the claim amounts; the impact of the dependence on the ADC is illustrated numerically. Kim and Kim (2007) derive simple expressions for the first two moments of the ADC when the rates of claim arrivals and the claim sizes depend on the states of an underlying Markov process. Ren (2008) studies the Laplace transform and the first two moments of the ADC when claims arrive according to a MAP, and Li (2008) further derives a recursive formula for the moments of the discounted claims for the same model. Bargès et al. (2011) study the moments of the ADC in a compound Poisson model with dependence introduced by a Farlie–Gumbel–Morgenstern (FGM) copula; Mohd Ramli and Jang (2014) further derive a Neumann series expression for the recursive moments by using the method of successive approximation.
Few papers in the actuarial science literature study models described by Equation (1) with a stochastic process $\{\delta(t)\}_{t\geq 0}$. Léveillé and Adékambi (2011, 2012) study the covariance and the joint moments of the discounted compound renewal sum at two different times with a stochastic interest rate, where the Ho–Lee–Merton and the Vasicek interest rate models are considered. Their idea of studying the covariance and the joint moments is adopted and extended in this paper. Here, we assume that the components of the ADC process $\{S(t)\}_{t\geq 0}$ described by Equation (1) (the number of claims, the sizes of the claims, and the force of interest for discounting) are all influenced by the same Markovian environment process, which enhances the flexibility of the model parameter settings. It follows that $S(t)$ depends on the trajectory of this underlying process, whose states may represent different external conditions or circumstances that affect insurance claims. The main objective of this paper is to study the moments and the distribution of $S(t)$ given in Equation (1), occurring in certain states (e.g., under certain conditions) by time $t$.
In general, while the expectation of $S(t)$ at any given time $t$ can be used as a reference for the insurer's liability, the higher moments of $S(t)$, which describe further characteristics of the random variable such as the variability around the mean and how extreme the outcomes could be, may be used to determine the margins on reserves. Furthermore, distributional results for $S(t)$ are useful for obtaining risk measures such as the value at risk and the conditional tail expectation, which may help insurers prevent or minimize losses from extreme cases.
Our work is essentially a generalization of some of the aforementioned studies. We first obtain formulas for calculating the mean, the variance, and the distribution of the ADC occurring in a subset of states by a certain time; the subset may represent a collection of similar conditions that the insurer considers as a whole. We then derive explicit matrix-analytic expressions for the covariance of the ADC occurring in two subsets of the state space by a certain time, and for the covariance of the ADC occurring in a certain subset of states with two different time lengths. The motivation for studying these two types of covariance is that they reveal the correlation between the random discounted sums, either across different underlying conditions or across different time lengths; this information can help insurers set capital requirements against future losses and make strategic and contingency plans. Moreover, we obtain a matrix-form partial integro-differential equation satisfied by the distribution function of the ADC occurring in a certain subset of states. The equation can be solved numerically to obtain the probability distribution function of the ADC, which again could be useful for measuring insurers' risks of insolvency.
The rest of the paper is organized as follows. In Section 2, we study the joint Laplace transforms of the ADC occurring in each state by time $t$ and pay attention to some special cases. Recursive formulas for calculating the moments of the ADC occurring in certain states are obtained. A formula for computing the covariance of the ADC occurring in two subsets of the state space is derived in Section 3, while the covariance of the ADC occurring in certain states with two different time lengths is studied in Section 4. The distribution of the ADC occurring in certain states is investigated in Section 5. Finally, some numerical illustrations are presented in Section 6.

2. The Laplace Transforms and Moments

We first decompose $S(t)$ into $m$ components as
$$S(t) = \sum_{j=1}^{m} S_j(t)$$
where
$$S_j(t) = \sum_{n=1}^{N(t)} X_n\, I\big(J(T_n) = j\big)\, e^{-\int_0^{T_n}\delta(s)\,ds}$$
is the ADC occurring in state $j\in E$, with $I(\cdot)$ being the indicator function. For a given $k$ ($1\leq k\leq m$) and $1\leq l_1 < l_2 < \cdots < l_k\leq m$, denote by $E_k = \{l_1, l_2, \ldots, l_k\}\subseteq E$ a sub-state space of $E$. We then define
$$S_{E_k}(t) = \sum_{j\in E_k} S_j(t)$$
to be the ADC occurring in the subset $E_k$ of the state space. In particular, $S_E(t) = S(t)$ and $S_{\{j\}}(t) = S_j(t)$. If $\delta(t)\equiv 0$ and $X_i\equiv 1$ for all $i\in\mathbb{N}^+$, then $S_{E_k}(t) = N_{E_k}(t)$, where $N_{E_k}(t)$ is the number of claims occurring in the sub-state space $E_k$ by time $t$.
Let $P_i$ and $E_i$ denote the conditional probability and the conditional expectation given $J(0) = i$, respectively. Define
$${}_iL(\xi_1, \xi_2, \ldots, \xi_m; t) = E_i\Big[e^{-\sum_{j=1}^{m}\xi_j S_j(t)}\Big], \qquad \xi_j\geq 0,\ t\geq 0,\ i\in E, \qquad\qquad (2)$$
to be the joint Laplace transform of $S_1(t), S_2(t), \ldots, S_m(t)$, given that the initial state is $i$. In particular, we have
$${}_iL(\xi; t) = E_i\big[e^{-\xi S(t)}\big] = {}_iL(\xi, \xi, \ldots, \xi; t),$$
$${}_iL_{E_k}(\xi; t) = E_i\big[e^{-\xi S_{E_k}(t)}\big] = {}_iL(\xi_1, \xi_2, \ldots, \xi_m; t)\big|_{\xi_j = \xi\, I(j\in E_k),\ j = 1, 2, \ldots, m},$$
$${}_iL_j(\xi_j; t) = E_i\big[e^{-\xi_j S_j(t)}\big] = {}_iL(\xi_1, \xi_2, \ldots, \xi_m; t)\big|_{\xi_k = 0,\ k\neq j}.$$
We define, for $n\in\mathbb{N}^+$, the $n$-th moments of $S(t)$, $S_j(t)$, and $S_{E_k}(t)$, respectively, as
$${}_iV^{(n)}(t) = E_i\big[S^n(t)\big],\ i\in E; \qquad {}_iV_j^{(n)}(t) = E_i\big[S_j^n(t)\big],\ i, j\in E; \qquad {}_iV_{E_k}^{(n)}(t) = E_i\big[S_{E_k}^n(t)\big],\ 1\leq k\leq m,$$
given that the initial state is $i$.
We write the following column vectors for the Laplace transforms:
$$L(\xi_1, \xi_2, \ldots, \xi_m; t) = \big({}_1L(\xi_1, \xi_2, \ldots, \xi_m; t), \ldots, {}_mL(\xi_1, \xi_2, \ldots, \xi_m; t)\big)^{\top},$$
$$L(\xi; t) = \big({}_1L(\xi; t), {}_2L(\xi; t), \ldots, {}_mL(\xi; t)\big)^{\top},$$
$$L_{E_k}(\xi; t) = \big({}_1L_{E_k}(\xi; t), {}_2L_{E_k}(\xi; t), \ldots, {}_mL_{E_k}(\xi; t)\big)^{\top},$$
$$L_j(\xi_j; t) = \big({}_1L_j(\xi_j; t), {}_2L_j(\xi_j; t), \ldots, {}_mL_j(\xi_j; t)\big)^{\top},$$
with $L(0; t) = L_{E_k}(0; t) = L_j(0; t) = \mathbf{1} = (1, 1, \ldots, 1)^{\top}$.
In this section, we first show that $L(\xi_1, \xi_2, \ldots, \xi_m; t)$ satisfies a matrix-form first-order partial differential equation, and then derive recursive formulas for calculating the moments of the various ADC quantities, depending on the initial state of the underlying Markov process. We also consider some special cases.
Theorem 1.
$L(\xi_1, \xi_2, \ldots, \xi_m; t)$ satisfies
$$\frac{\partial L(\xi_1, \ldots, \xi_m; t)}{\partial t} + \delta\sum_{j=1}^{m}\xi_j\,\frac{\partial L(\xi_1, \ldots, \xi_m; t)}{\partial \xi_j} = D_0\, L(\xi_1, \ldots, \xi_m; t) + \hat{\mathbf{f}}(\xi_1, \ldots, \xi_m)\, D_1\, L(\xi_1, \ldots, \xi_m; t), \qquad\qquad (3)$$
where $\delta = \operatorname{diag}(\delta_1, \delta_2, \ldots, \delta_m)$ and $\hat{\mathbf{f}}(\xi_1, \ldots, \xi_m) = \operatorname{diag}\big(\hat f_1(\xi_1), \hat f_2(\xi_2), \ldots, \hat f_m(\xi_m)\big)$.
Proof. 
For an infinitesimal $h > 0$, conditioning on the three possible events that can occur in $[0, h]$ (no change in the MAP phase (state); a change in the MAP phase with no accompanying claim; a change in the MAP phase accompanied by a claim), we have
$$\begin{aligned} {}_iL(\xi_1, \ldots, \xi_m; t) = {} & [1 + d_{0,ii} h]\; {}_iL\big(\xi_1 e^{-\delta_i h}, \ldots, \xi_m e^{-\delta_i h}; t - h\big) + \sum_{k=1, k\neq i}^{m} d_{0,ik}\, h\; {}_kL\big(\xi_1 e^{-\delta_i h}, \ldots, \xi_m e^{-\delta_i h}; t - h\big) \\ & + \sum_{k=1}^{m} d_{1,ik}\, h\,\hat f_i\big(\xi_i e^{-\delta_i h}\big)\; {}_kL\big(\xi_1 e^{-\delta_i h}, \ldots, \xi_m e^{-\delta_i h}; t - h\big). \qquad\qquad (4)\end{aligned}$$
As ${}_iL(\xi_1, \ldots, \xi_m; t)$ is differentiable with respect to $\xi_i$ ($i\in E$) and $t$ (the differentiability of ${}_iL$ with respect to $t$ is justified in Appendix A), we have
$${}_iL\big(\xi_1 e^{-\delta_i h}, \ldots, \xi_m e^{-\delta_i h}; t - h\big) = {}_iL(\xi_1, \ldots, \xi_m; t) - h\,\frac{\partial\, {}_iL(\xi_1, \ldots, \xi_m; t)}{\partial t} - \delta_i h\sum_{l=1}^{m}\xi_l\,\frac{\partial\, {}_iL(\xi_1, \ldots, \xi_m; t)}{\partial \xi_l} + o(h), \qquad\qquad (5)$$
where $\lim_{h\to 0}(o(h)/h) = 0$. Substituting the expression above into Equation (4), dividing both sides by $h$, and letting $h\to 0$, we have
$$\frac{\partial\, {}_iL(\xi_1, \ldots, \xi_m; t)}{\partial t} + \delta_i\sum_{l=1}^{m}\xi_l\,\frac{\partial\, {}_iL(\xi_1, \ldots, \xi_m; t)}{\partial \xi_l} = \sum_{k=1}^{m} d_{0,ik}\; {}_kL(\xi_1, \ldots, \xi_m; t) + \sum_{k=1}^{m} d_{1,ik}\,\hat f_i(\xi_i)\; {}_kL(\xi_1, \ldots, \xi_m; t). \qquad\qquad (6)$$
Rewriting Equation (6) in matrix form gives Equation (3). ☐
Remark 1.
Using the same argument, we have the following results.
(1) 
$L_{E_k}(\xi; t)$ satisfies the following matrix-form first-order partial differential equation:
$$\frac{\partial L_{E_k}(\xi; t)}{\partial t} + \delta\,\xi\,\frac{\partial L_{E_k}(\xi; t)}{\partial \xi} = D_0\, L_{E_k}(\xi; t) + \hat{\mathbf{f}}_{E_k}(\xi)\, D_1\, L_{E_k}(\xi; t), \qquad\qquad (7)$$
where $\hat{\mathbf{f}}_{E_k}(\xi)$ is an $m\times m$ diagonal matrix whose $l_i$-th diagonal entry is $\hat f_{l_i}(\xi)$, for $i = 1, 2, \ldots, k$, and whose other diagonal entries are 1.
(2) 
$L(\xi; t)$ satisfies
$$\frac{\partial L(\xi; t)}{\partial t} + \delta\,\xi\,\frac{\partial L(\xi; t)}{\partial \xi} = D_0\, L(\xi; t) + \hat{\mathbf{f}}(\xi)\, D_1\, L(\xi; t),$$
where $\hat{\mathbf{f}}(\xi) = \operatorname{diag}\big(\hat f_1(\xi), \hat f_2(\xi), \ldots, \hat f_m(\xi)\big)$.
(3) 
$L_j(\xi_j; t)$ satisfies
$$\frac{\partial L_j(\xi_j; t)}{\partial t} + \delta\,\xi_j\,\frac{\partial L_j(\xi_j; t)}{\partial \xi_j} = D_0\, L_j(\xi_j; t) + \hat{\mathbf{f}}_j(\xi_j)\, D_1\, L_j(\xi_j; t),$$
where $\hat{\mathbf{f}}_j(\xi_j) = \operatorname{diag}\big(1, 1, \ldots, \hat f_j(\xi_j), \ldots, 1\big)$, with $\hat f_j(\xi_j)$ in the $j$-th diagonal position.
We now study the moments of the ADC quantities considered in Theorem 1. Denote the vectors of the $n$-th moments of the corresponding ADC as
$$V_n(t) = \big({}_1V^{(n)}(t), {}_2V^{(n)}(t), \ldots, {}_mV^{(n)}(t)\big)^{\top},$$
$$V_{n,E_k}(t) = \big({}_1V_{E_k}^{(n)}(t), {}_2V_{E_k}^{(n)}(t), \ldots, {}_mV_{E_k}^{(n)}(t)\big)^{\top},$$
$$V_{n,j}(t) = \big({}_1V_j^{(n)}(t), {}_2V_j^{(n)}(t), \ldots, {}_mV_j^{(n)}(t)\big)^{\top}.$$
From Equation (7), we obtain in Theorem 2 a matrix-form first-order differential equation satisfied by $V_{n,E_k}(t)$, the vector of moments of $S_{E_k}(t)$, and then, in Theorem 3, recursive formulas for calculating them.
Theorem 2.
The moments of $S_{E_k}(t)$ satisfy
$$V_{n,E_k}'(t) + \big[n\delta - D_0 - D_1\big]\, V_{n,E_k}(t) = \sum_{r=1}^{n}\binom{n}{r}\, I_{E_k}\,\mu_r\, D_1\, V_{n-r,E_k}(t), \qquad n\in\mathbb{N}^+, \qquad\qquad (8)$$
with initial conditions $V_{n,E_k}(0) = \mathbf{0}$ and $V_{0,E_k}(t) = \mathbf{1}$. In particular,
$$V_{1,E_k}'(t) + \big[\delta - D_0 - D_1\big]\, V_{1,E_k}(t) = I_{E_k}\,\mu_1\, D_1\,\mathbf{1}, \qquad t\geq 0,$$
where $\mu_r = \operatorname{diag}\big(\mu_1^{(r)}, \mu_2^{(r)}, \ldots, \mu_m^{(r)}\big)$ and $I_{E_k}$ is an $m\times m$ diagonal matrix whose $l_i$-th diagonal entry is 1, for $i = 1, 2, \ldots, k$, and whose other diagonal entries are 0.
Proof. 
By Taylor's expansion (which exists since $f_i$ is assumed to have finite moments $\mu_i^{(n)}$ for all $n\in\mathbb{N}^+$), we have
$$\hat f_i(\xi) = 1 + \sum_{n=1}^{\infty}\frac{(-1)^n\xi^n}{n!}\,\mu_i^{(n)}.$$
In matrix notation,
$$\hat{\mathbf{f}}_{E_k}(\xi) = I + \sum_{n=1}^{\infty}\frac{(-1)^n\xi^n}{n!}\, I_{E_k}\,\mu_n. \qquad\qquad (9)$$
Substituting Equation (9), together with
$$L_{E_k}(\xi; t) = \sum_{n=0}^{\infty}\frac{(-1)^n\xi^n}{n!}\, V_{n,E_k}(t),$$
into Equation (7) and equating the coefficients of $\xi^n$ gives Equation (8). ☐
Corollary 1.
We have the following results for the moments of $S(t)$ and $S_j(t)$.
(i) 
$V_n(t)$ satisfies the matrix-form first-order differential equation
$$V_n'(t) + \big[n\delta - D_0 - D_1\big]\, V_n(t) = \sum_{r=1}^{n}\binom{n}{r}\,\mu_r\, D_1\, V_{n-r}(t), \qquad n\in\mathbb{N}^+,$$
where $V_n(0) = \mathbf{0}$ and $V_0(t) = \mathbf{1}$. In particular, $V_1(t)$ satisfies
$$V_1'(t) + \big[\delta - D_0 - D_1\big]\, V_1(t) = \mu_1\, D_1\,\mathbf{1}, \qquad t\geq 0.$$
(ii) 
$V_{n,j}(t)$ satisfies
$$V_{n,j}'(t) + \big[n\delta - D_0 - D_1\big]\, V_{n,j}(t) = \sum_{r=1}^{n}\binom{n}{r}\, I_j\,\mu_r\, D_1\, V_{n-r,j}(t),$$
where $I_j = I_{\{j\}}$ is a diagonal matrix with the $j$-th diagonal entry being 1 and all others 0, $V_{n,j}(0) = \mathbf{0}$, and $V_{0,j}(t) = \mathbf{1}$. In particular, $V_{1,j}(t)$ satisfies
$$V_{1,j}'(t) + \big[\delta - D_0 - D_1\big]\, V_{1,j}(t) = I_j\,\mu_1\, D_1\,\mathbf{1}, \qquad t\geq 0.$$
Solving differential Equation (8) with $V_{n,E_k}(0) = \mathbf{0}$, we obtain the following recursive formulas for $V_{n,E_k}(t)$.
Theorem 3.
For $t > 0$ and $n\in\mathbb{N}^+$, we have
$$V_{n,E_k}(t) = \sum_{r=1}^{n}\binom{n}{r}\int_0^t e^{-[n\delta - (D_0 + D_1)]x}\, I_{E_k}\,\mu_r\, D_1\, V_{n-r,E_k}(t - x)\,dx.$$
In particular,
$$V_{1,E_k}(t) = \int_0^t e^{-[\delta - (D_0 + D_1)]x}\,dx\; I_{E_k}\,\mu_1\, D_1\,\mathbf{1} = \big[\delta - (D_0 + D_1)\big]^{-1}\Big(I - e^{-[\delta - (D_0 + D_1)]t}\Big)\, I_{E_k}\,\mu_1\, D_1\,\mathbf{1}. \qquad\qquad (10)$$
Clearly, we have $V_{1,E_k}(t) + V_{1,E_k^c}(t) = V_1(t)$, where $E_k^c = E\setminus E_k$.
Corollary 2.
Setting $E_k = E$ and $E_k = \{j\}$ in Theorem 3 gives the following recursive formulas for the moments of $S(t)$ and $S_j(t)$:
$$V_n(t) = \sum_{k=1}^{n}\binom{n}{k}\int_0^t e^{-[n\delta - (D_0 + D_1)]x}\,\mu_k\, D_1\, V_{n-k}(t - x)\,dx,$$
$$V_{n,j}(t) = \sum_{k=1}^{n}\binom{n}{k}\int_0^t e^{-[n\delta - (D_0 + D_1)]x}\, I_j\,\mu_k\, D_1\, V_{n-k,j}(t - x)\,dx.$$
In particular,
$$V_1(t) = \big[\delta - (D_0 + D_1)\big]^{-1}\Big(I - e^{-[\delta - (D_0 + D_1)]t}\Big)\,\mu_1\, D_1\,\mathbf{1},$$
$$V_{1,j}(t) = \big[\delta - (D_0 + D_1)\big]^{-1}\Big(I - e^{-[\delta - (D_0 + D_1)]t}\Big)\, I_j\,\mu_1\, D_1\,\mathbf{1}.$$
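The closed-form expressions above involve only a matrix inverse and a matrix exponential, so they are straightforward to evaluate numerically. The following sketch is our own illustration (not from the paper); it assumes SciPy is available and uses the two-state parameters of Section 6, identifying the Markov-modulated model there as a MAP with $D_1 = \operatorname{diag}(\lambda_1, \lambda_2)$ and $D_0 = A - D_1$.

```python
import numpy as np
from scipy.linalg import expm

# Two-state example of Section 6, viewed as a MAP:
A = np.array([[-0.25, 0.25], [0.75, -0.75]])
D1 = np.diag([1.0, 2.0 / 3.0])    # transitions with claims (here: no state change)
D0 = A - D1                       # transitions without claims
Delta = np.diag([0.03, 0.05])     # delta = diag(delta_1, delta_2)
mu1 = np.diag([1.0, 2.0])         # first claim moments mu_i^{(1)}
one = np.ones(2)

def V1(t, I_Ek=np.eye(2)):
    """First-moment vector V_{1,E_k}(t) from Theorem 3 / Corollary 2."""
    B = Delta - (D0 + D1)
    return np.linalg.solve(B, (np.eye(2) - expm(-B * t)) @ I_Ek @ mu1 @ D1 @ one)

print(V1(10.0))                            # (E_1[S(10)], E_2[S(10)])
print(V1(10.0, I_Ek=np.diag([1.0, 0.0])))  # (E_1[S_1(10)], E_2[S_1(10)])
```

The second line, for $E_k = \{1\}$, should be comparable with the $t = 10$ entries $E_1[S_1(t)]$ and $E_2[S_1(t)]$ of Table 1.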
Remark 2.
When $t\to\infty$, we have the following asymptotic results for the moments of the ADC, for $n\in\mathbb{N}^+$:
$$V_{n,E_k}(\infty) = \big[n\delta - (D_0 + D_1)\big]^{-1}\sum_{r=1}^{n}\binom{n}{r}\, I_{E_k}\,\mu_r\, D_1\, V_{n-r,E_k}(\infty),$$
$$V_n(\infty) = \big[n\delta - (D_0 + D_1)\big]^{-1}\sum_{r=1}^{n}\binom{n}{r}\,\mu_r\, D_1\, V_{n-r}(\infty),$$
$$V_{n,j}(\infty) = \big[n\delta - (D_0 + D_1)\big]^{-1}\sum_{r=1}^{n}\binom{n}{r}\, I_j\,\mu_r\, D_1\, V_{n-r,j}(\infty),$$
where $V_{0,E_k}(\infty) = V_0(\infty) = V_{0,j}(\infty) = \mathbf{1}$.
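Because each $V_n(\infty)$ depends only on lower-order limits, the recursion above is a sequence of linear solves. The sketch below is our own, under the same assumptions and parameters as the previous sketch, with exponential claims so that $\mu_i^{(r)} = r!\,\big[\mu_i^{(1)}\big]^r$.

```python
import numpy as np
from math import comb, factorial

# Two-state example of Section 6, viewed as a MAP (as in the previous sketch):
A = np.array([[-0.25, 0.25], [0.75, -0.75]])
D1 = np.diag([1.0, 2.0 / 3.0])
D0 = A - D1
Delta = np.diag([0.03, 0.05])
means = np.array([1.0, 2.0])      # exponential claim-size means

def V_inf(n_max):
    """V_n(infinity), n = 0..n_max, by the recursion in Remark 2 (E_k = E)."""
    V = [np.ones(2)]              # V_0(infinity) = 1
    for n in range(1, n_max + 1):
        rhs = sum(comb(n, r) * np.diag(factorial(r) * means**r) @ D1 @ V[n - r]
                  for r in range(1, n + 1))
        V.append(np.linalg.solve(n * Delta - (D0 + D1), rhs))
    return V

V = V_inf(2)
print(V[1])               # limiting means E_i[S(inf)], i = 1, 2
print(V[2] - V[1]**2)     # limiting variances Var_i[S(inf)], componentwise
```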

3. The Covariance of ADC Occurring in Two Sub-State Spaces

In this section, we first calculate the joint moment of the ADC occurring in two subsets of the state space; the covariance between them then follows directly.
For $1\leq l_1 < l_2 < \cdots < l_k\leq m$ and $1\leq n_1 < n_2 < \cdots < n_j\leq m$, where $2\leq k + j\leq m$, denote by $E_k = \{l_1, l_2, \ldots, l_k\}$ and $E_j = \{n_1, n_2, \ldots, n_j\}$ two disjoint subsets of $E$, i.e., $E_k\cap E_j = \emptyset$. The aggregate discounted claim amounts occurring in $E_k$ and in $E_j$ are
$$S_{E_k}(t) = \sum_{i\in E_k} S_i(t), \qquad S_{E_j}(t) = \sum_{i\in E_j} S_i(t).$$
Define
$${}_iL_{E_k,E_j}(\xi_k, \xi_j; t) = E_i\Big[e^{-\xi_k S_{E_k}(t) - \xi_j S_{E_j}(t)}\Big]$$
to be the joint Laplace transform of $S_{E_k}(t)$ and $S_{E_j}(t)$, and let $L_{E_k,E_j}(\xi_k, \xi_j; t)$ be the column vector with $i$-th entry ${}_iL_{E_k,E_j}(\xi_k, \xi_j; t)$. Moreover, let
$${}_iV_{E_k,E_j}(t) = E_i\big[S_{E_k}(t)\, S_{E_j}(t)\big]$$
be the joint moment of $S_{E_k}(t)$ and $S_{E_j}(t)$, and let $V_{E_k,E_j}(t)$ be the $m\times 1$ column vector with $i$-th entry ${}_iV_{E_k,E_j}(t)$. A matrix-form integral expression for $V_{E_k,E_j}(t)$ and its asymptotic formula as $t\to\infty$ are presented in the theorem below.
Theorem 4.
For two disjoint subsets $E_k$ and $E_j$ of $E$, the joint moment of $S_{E_k}(t)$ and $S_{E_j}(t)$ satisfies
$$V_{E_k,E_j}(t) = \int_0^t e^{-[2\delta - (D_0 + D_1)]x}\, I_{E_k}\,\mu_1\, D_1\, V_{1,E_j}(t - x)\,dx + \int_0^t e^{-[2\delta - (D_0 + D_1)]x}\, I_{E_j}\,\mu_1\, D_1\, V_{1,E_k}(t - x)\,dx, \qquad\qquad (11)$$
where $V_{1,E_k}(t)$ is given by Equation (10) in Theorem 3. When $t\to\infty$, we have
$$V_{E_k,E_j}(\infty) = \big[2\delta - (D_0 + D_1)\big]^{-1}\Big(I_{E_k}\,\mu_1\, D_1\, V_{1,E_j}(\infty) + I_{E_j}\,\mu_1\, D_1\, V_{1,E_k}(\infty)\Big). \qquad\qquad (12)$$
Proof. 
It follows from Equation (3) that
$$\frac{\partial L_{E_k,E_j}(\xi_k, \xi_j; t)}{\partial t} + \delta\,\xi_k\,\frac{\partial L_{E_k,E_j}(\xi_k, \xi_j; t)}{\partial \xi_k} + \delta\,\xi_j\,\frac{\partial L_{E_k,E_j}(\xi_k, \xi_j; t)}{\partial \xi_j} = D_0\, L_{E_k,E_j}(\xi_k, \xi_j; t) + \hat{\mathbf{f}}_{E_k,E_j}(\xi_k, \xi_j)\, D_1\, L_{E_k,E_j}(\xi_k, \xi_j; t), \qquad\qquad (13)$$
where $\hat{\mathbf{f}}_{E_k,E_j}(\xi_k, \xi_j)$ is a diagonal matrix whose $l_i$-th diagonal entry is $\hat f_{l_i}(\xi_k)$, for $i = 1, 2, \ldots, k$, whose $n_i$-th diagonal entry is $\hat f_{n_i}(\xi_j)$, for $i = 1, 2, \ldots, j$, and whose other diagonal entries are 1.
Taking partial derivatives with respect to $\xi_k$ and $\xi_j$ on both sides of Equation (13), setting $\xi_k = \xi_j = 0$, and noting that
$${}_iV_{E_k,E_j}(t) = \frac{\partial^2\, {}_iL_{E_k,E_j}(\xi_k, \xi_j; t)}{\partial\xi_k\,\partial\xi_j}\bigg|_{\xi_k = \xi_j = 0},$$
we obtain the following matrix-form first-order differential equation for $V_{E_k,E_j}(t)$:
$$V_{E_k,E_j}'(t) + \big[2\delta - D_0 - D_1\big]\, V_{E_k,E_j}(t) = I_{E_k}\,\mu_1\, D_1\, V_{1,E_j}(t) + I_{E_j}\,\mu_1\, D_1\, V_{1,E_k}(t).$$
Solving it gives Equation (11). Letting $t\to\infty$ in Equation (11), we obtain expression (12) for the joint moment of $S_{E_k}(\infty)$ and $S_{E_j}(\infty)$. ☐
Remark 3.
If $E_k = \{k\}$ and $E_j = \{j\}$ with $k\neq j$, we have
$$V_{\{k\},\{j\}}(t) = \int_0^t e^{-[2\delta - (D_0 + D_1)]x}\, I_j\,\mu_1\, D_1\, V_{1,k}(t - x)\,dx + \int_0^t e^{-[2\delta - (D_0 + D_1)]x}\, I_k\,\mu_1\, D_1\, V_{1,j}(t - x)\,dx.$$
When $t\to\infty$, the joint moment of $S_k(\infty)$ and $S_j(\infty)$ can be expressed as
$$V_{\{k\},\{j\}}(\infty) = \big[2\delta - (D_0 + D_1)\big]^{-1}\Big(I_j\,\mu_1\, D_1\, V_{1,k}(\infty) + I_k\,\mu_1\, D_1\, V_{1,j}(\infty)\Big).$$
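As an illustration of Remark 3, the limiting covariance between $S_1(\infty)$ and $S_2(\infty)$ can be computed with a few linear solves. The sketch below is ours (same assumptions and parameters as the earlier sketches); the limiting first moments come from Theorem 3 with $t\to\infty$.

```python
import numpy as np

# Two-state example of Section 6, viewed as a MAP:
A = np.array([[-0.25, 0.25], [0.75, -0.75]])
D1 = np.diag([1.0, 2.0 / 3.0])
D0 = A - D1
D = D0 + D1
Delta = np.diag([0.03, 0.05])
mu1 = np.diag([1.0, 2.0])
one = np.ones(2)
I1, I2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

# V_{1,{k}}(inf) = [delta - (D0 + D1)]^{-1} I_k mu_1 D_1 1   (Theorem 3, t -> inf)
V1_1 = np.linalg.solve(Delta - D, I1 @ mu1 @ D1 @ one)
V1_2 = np.linalg.solve(Delta - D, I2 @ mu1 @ D1 @ one)

# Limiting joint moment of S_1(inf) and S_2(inf), from Remark 3:
V12 = np.linalg.solve(2 * Delta - D, I2 @ mu1 @ D1 @ V1_1 + I1 @ mu1 @ D1 @ V1_2)

print(V12 - V1_1 * V1_2)    # Cov_i(S_1(inf), S_2(inf)) for i = 1, 2
```

The output can be checked against the $t = \infty$ covariances in Table 1.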
Remark 4.
If the two subsets $E_k$ and $E_j$ are not disjoint, with $E_k\cap E_j = E_{kj}$, then
$$\begin{aligned}\operatorname{Cov}_i\big(S_{E_k}(t), S_{E_j}(t)\big) & = \operatorname{Cov}_i\big(S_{E_k\setminus E_{kj}}(t) + S_{E_{kj}}(t),\; S_{E_j\setminus E_{kj}}(t) + S_{E_{kj}}(t)\big) \\ & = \operatorname{Cov}_i\big(S_{E_k\setminus E_{kj}}(t), S_{E_{kj}}(t)\big) + \operatorname{Cov}_i\big(S_{E_j\setminus E_{kj}}(t), S_{E_{kj}}(t)\big) \\ & \quad + \operatorname{Cov}_i\big(S_{E_k\setminus E_{kj}}(t), S_{E_j\setminus E_{kj}}(t)\big) + \operatorname{Var}_i\big(S_{E_{kj}}(t)\big).\end{aligned}$$
All the covariance terms in the expression above are for ADCs occurring in two disjoint sets.

4. The Covariance of the ADC with Two Different Time Lengths

In this section, we investigate the covariance of the ADC over two overlapping time periods; i.e., we want to evaluate
$$\operatorname{Cov}_i\big(S_{E_k}(t), S_{E_k}(t+h)\big) \equiv \operatorname{Cov}\big(S_{E_k}(t), S_{E_k}(t+h)\,\big|\, J(0) = i\big) = E_i\big[S_{E_k}(t)\, S_{E_k}(t+h)\big] - E_i\big[S_{E_k}(t)\big]\, E_i\big[S_{E_k}(t+h)\big]$$
for $t, h > 0$ and $E_k = \{l_1, l_2, \ldots, l_k\}$ with $k\leq m$. Denote by $R_{E_k}(t, t+h)$ the column vector with $i$-th entry $E_i\big[S_{E_k}(t)\, S_{E_k}(t+h)\big]$. In the following, we first establish in a lemma a result needed for deriving the expression for $R_{E_k}(t, t+h)$; we then present an explicit formula for $R_{E_k}(t, t+h)$ in a theorem.
As $S_{E_k}(t+h) = \big[S_{E_k}(t+h) - S_{E_k}(t)\big] + S_{E_k}(t)$, we have
$$E_i\big[S_{E_k}(t)\, S_{E_k}(t+h)\big] = E_i\big[S_{E_k}^2(t)\big] + E_i\Big[S_{E_k}(t)\big(S_{E_k}(t+h) - S_{E_k}(t)\big)\Big]. \qquad\qquad (14)$$
Define $\mathcal{F}_t = \sigma\big(S(v);\, 0\leq v\leq t\big)$ to be the $\sigma$-algebra generated by the ADC process up to time $t$. Using the law of iterated expectations, we have
$$\begin{aligned} E_i\Big[S_{E_k}(t)\big(S_{E_k}(t+h) - S_{E_k}(t)\big)\Big] & = E_i\Big[E\big[S_{E_k}(t)\big(S_{E_k}(t+h) - S_{E_k}(t)\big)\,\big|\,\mathcal{F}_t\big]\Big] \\ & = E_i\Big[S_{E_k}(t)\, E\big[S_{E_k}(t+h) - S_{E_k}(t)\,\big|\,\mathcal{F}_t\big]\Big] \\ & = E_i\Big[S_{E_k}(t)\, e^{-\int_0^t\delta(s)\,ds}\, E\big[S_{E_k}(t, t+h)\,\big|\,\mathcal{F}_t\big]\Big] \\ & = E_i\Big[S_{E_k}(t)\, e^{-\int_0^t\delta(s)\,ds}\, E\big[S_{E_k}(t, t+h)\,\big|\, J(t)\big]\Big] \\ & = \sum_{j=1}^{m} E_i\Big[S_{E_k}(t)\, e^{-\int_0^t\delta(s)\,ds}\, E\big[S_{E_k}(t, t+h)\,\big|\, J(t) = j\big]\Big]\, P\big(J(t) = j\,\big|\, J(0) = i\big) \\ & = \sum_{j=1}^{m} E_i\Big[S_{E_k}(t)\, e^{-\int_0^t\delta(s)\,ds}\, I\big(J(t) = j\big)\Big]\, E_j\big[S_{E_k}(h)\big]\, P\big(J(t) = j\,\big|\, J(0) = i\big), \qquad\qquad (15)\end{aligned}$$
where $S_{E_k}(t, t+h)$ is the present value, at time $t$, of the claims occurring in states within $E_k$ over $(t, t+h]$.
Denote $M_{E_k}(t) = \big(M_{i,j,E_k}(t)\big)_{m\times m}$, where
$$M_{i,j,E_k}(t) = E_i\Big[S_{E_k}(t)\, e^{-\int_0^t\delta(s)\,ds}\, I\big(J(t) = j\big)\Big].$$
The following lemma gives a matrix-form integral expression for $M_{E_k}(t)$.
Lemma 1.
$M_{E_k}(t)$ is of the form
$$M_{E_k}(t) = \int_0^t e^{-[2\delta - (D_0 + D_1)]x}\, I_{E_k}\,\mu_1\, D_1\, v(t - x)\,dx, \qquad\qquad (16)$$
where $v(t)$ is a matrix with $(i,j)$-th element
$$v_{i,j}(t) = E_i\Big[e^{-\int_0^t\delta(s)\,ds}\, I\big(J(t) = j\big)\Big], \qquad i, j\in E.$$
Proof. 
Conditioning on the events that may occur over an infinitesimal interval $(0, \Delta t)$, we have
$$\begin{aligned} M_{i,j,E_k}(t) = {} & (1 + d_{0,ii}\,\Delta t)\, e^{-2\delta_i\Delta t}\, M_{i,j,E_k}(t - \Delta t) + \sum_{l\neq i} d_{0,il}\,\Delta t\, e^{-2\delta_i\Delta t}\, M_{l,j,E_k}(t - \Delta t) \\ & + \sum_{l=1}^{m} d_{1,il}\,\Delta t\, e^{-2\delta_i\Delta t}\Big[I(i\in E_k)\,\mu_i^{(1)}\, E_l\Big(e^{-\int_{\Delta t}^t\delta(s)\,ds}\, I\big(J(t) = j\big)\Big) + M_{l,j,E_k}(t - \Delta t)\Big]. \qquad\qquad (17)\end{aligned}$$
We can then obtain a matrix-form differential equation for $M_{E_k}(t)$ from Equation (17) as follows:
$$M_{E_k}'(t) = \big(D_0 + D_1 - 2\delta\big)\, M_{E_k}(t) + I_{E_k}\,\mu_1\, D_1\, v(t), \qquad\qquad (18)$$
with $M_{E_k}(0) = \mathbf{0}$. In fact, it is easy to show that $v(t) = e^{(D_0 + D_1 - \delta)t}$, with $v(0) = I$ and $v(\infty) = \mathbf{0}$. Solving Equation (18) gives Equation (16). ☐
Let $q_{i,j}(t) = P_i\big(J(t) = j\big)$. Then $Q(t) = \big(q_{i,j}(t)\big)_{m\times m}$ is the transition matrix of the underlying Markov process $\{J(t)\}_{t\geq 0}$ at time $t$. It follows from Ren (2008) that $Q(t) = e^{(D_0 + D_1)t}$.
Theorem 5.
$R_{E_k}(t, t+h)$ can be expressed as
$$R_{E_k}(t, t+h) = V_{2,E_k}(t) + \big(M_{E_k}\circ Q\big)(t)\, V_{1,E_k}(h), \qquad\qquad (19)$$
where $\big(M_{E_k}\circ Q\big)(t)$ is the Hadamard product of $M_{E_k}(t)$ and $Q(t)$, i.e., the $(i,j)$-th element of $\big(M_{E_k}\circ Q\big)(t)$ is $M_{i,j,E_k}(t)\times q_{i,j}(t)$, and $M_{E_k}(t)$ is given by Equation (16) in Lemma 1.
Proof. 
Equation (19) follows immediately from Equations (14) and (15). ☐
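Putting the pieces together, the sketch below (our own, under the same assumptions and parameters as before, with $E_k = E$) evaluates $R_E(t, t+h)$ as stated in Theorem 5: $V_{2,E}(t)$ comes from the recursion of Theorem 3 with $n = 2$, $M_E(t)$ from Lemma 1 with $v(t) = e^{(D_0 + D_1 - \delta)t}$, and the integrals are computed by numerical quadrature (SciPy's `quad_vec`).

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

# Two-state example of Section 6, viewed as a MAP; E_k = E, so I_Ek = identity.
A = np.array([[-0.25, 0.25], [0.75, -0.75]])
D1 = np.diag([1.0, 2.0 / 3.0])
D0 = A - D1
D = D0 + D1
Delta = np.diag([0.03, 0.05])
mu1 = np.diag([1.0, 2.0])     # first moments of the exponential claim sizes
mu2 = np.diag([2.0, 8.0])     # second moments: mu_i^{(2)} = 2 * mean_i**2
one = np.ones(2)

def V1(t):
    """V_1(t) from Corollary 2."""
    B = Delta - D
    return np.linalg.solve(B, (np.eye(2) - expm(-B * t)) @ mu1 @ D1 @ one)

def V2(t):
    """V_2(t) by the recursion of Theorem 3 with n = 2 and E_k = E."""
    g = lambda x: expm((D - 2 * Delta) * x) @ (2 * mu1 @ D1 @ V1(t - x) + mu2 @ D1 @ one)
    return quad_vec(g, 0.0, t)[0]

def M(t):
    """M_E(t) from Lemma 1, with v(t) = expm((D - Delta) t)."""
    g = lambda x: expm((D - 2 * Delta) * x) @ mu1 @ D1 @ expm((D - Delta) * (t - x))
    return quad_vec(g, 0.0, t)[0]

def R(t, h):
    """R_E(t, t+h) = V_2(t) + (M o Q)(t) V_1(h), as stated in Theorem 5."""
    return V2(t) + (M(t) * expm(D * t)) @ V1(h)   # '*' is the Hadamard product

t, h = 10.0, 1.0
print(R(t, h) - V1(t) * V1(t + h))   # Cov_i(S(t), S(t+h)), i = 1, 2
```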
Remark 5.
If $E_k = E$ or $E_k = \{k\}$, Equation (19) reduces to the joint moment of $S(t)$ and $S(t+h)$, or to that of $S_k(t)$ and $S_k(t+h)$, respectively.

5. The Distributions of the ADC

In this section, we investigate the distributions of $S_{E_k}(t)$ and its two special cases, $S(t)$ and $S_k(t)$, for $E_k = \{l_1, l_2, \ldots, l_k\}\subseteq E$. To proceed, we define, for $x\geq 0$ and $i\in E$,
$$G_i(x, t) = P_i\big(S(t)\leq x\big), \qquad G_{i,k}(x, t) = P_i\big(S_k(t)\leq x\big), \qquad G_{i,E_k}(x, t) = P_i\big(S_{E_k}(t)\leq x\big),$$
with the following conditions:
$$G_i(x, 0) = G_{i,k}(x, 0) = G_{i,E_k}(x, 0) = 1, \quad x\geq 0;$$
$$G_i(0, t) = P_i\big(N(t) = 0\big), \qquad G_{i,k}(0, t) = P_i\big(N_k(t) = 0\big), \qquad G_{i,E_k}(0, t) = P_i\big(N_{E_k}(t) = 0\big),$$
where $N_k(t) = \sum_{l=1}^{N(t)} I\big(J(T_l) = k\big)$ is the number of claims occurring in state $k$ and $N_{E_k}(t) = \sum_{j\in E_k} N_j(t)$ is the number of claims occurring in the subset $E_k$. Denote
$$G(x, t) = \big(G_1(x, t), G_2(x, t), \ldots, G_m(x, t)\big)^{\top},$$
$$G_k(x, t) = \big(G_{1,k}(x, t), G_{2,k}(x, t), \ldots, G_{m,k}(x, t)\big)^{\top},$$
$$G_{E_k}(x, t) = \big(G_{1,E_k}(x, t), G_{2,E_k}(x, t), \ldots, G_{m,E_k}(x, t)\big)^{\top}.$$
We show in the theorem below that $G_{E_k}(x, t)$ satisfies a matrix-form first-order partial integro-differential equation.
Theorem 6.
$G_{E_k}(x, t)$ satisfies
$$\frac{\partial G_{E_k}(x, t)}{\partial t} - x\,\delta\,\frac{\partial G_{E_k}(x, t)}{\partial x} = \big(D_0 + D_1 - I_{E_k} D_1\big)\, G_{E_k}(x, t) + \int_0^x I_{E_k}\,\mathbf{f}(y)\, D_1\, G_{E_k}(x - y, t)\,dy, \qquad\qquad (20)$$
with initial conditions
$$G_{E_k}(x, 0) = \mathbf{1}, \qquad G_{E_k}(0, t) = e^{(D_0 + D_1 - I_{E_k} D_1)t}\,\mathbf{1}, \qquad\qquad (21)$$
where $\mathbf{f}(y) = \operatorname{diag}\big(f_1(y), f_2(y), \ldots, f_m(y)\big)$ and $G_{E_k}(0, t)$ is the solution of the differential equation obtained from Equation (20) by setting $x = 0$.
Proof. 
Using the same arguments as in Section 2, by conditioning on the events that may occur over $(0, h]$, we have
$$G_{i,E_k}(x, t) = [1 + d_{0,ii} h]\, G_{i,E_k}\big(x e^{\delta_i h}, t - h\big) + \sum_{j=1, j\neq i}^{m} d_{0,ij}\, h\, G_{j,E_k}\big(x e^{\delta_i h}, t - h\big) + \sum_{j=1}^{m} d_{1,ij}\, h\, G_{j,E_k}\big(x e^{\delta_i h}, t - h\big), \qquad i\notin E_k. \qquad\qquad (22)$$
As $G_{i,E_k}(x, t)$ is differentiable with respect to $x$ and $t$, we have
$$G_{i,E_k}\big(x e^{\delta_i h}, t - h\big) = G_{i,E_k}(x, t) + \delta_i x h\,\frac{\partial G_{i,E_k}(x, t)}{\partial x} - h\,\frac{\partial G_{i,E_k}(x, t)}{\partial t} + o(h). \qquad\qquad (23)$$
The justification of Equation (23) is similar to that of Equation (5) (see Appendix A). Substituting Equation (23) into Equation (22), rearranging terms, dividing both sides by $h$, and taking the limit as $h\to 0$ give
$$\frac{\partial G_{i,E_k}(x, t)}{\partial t} - \delta_i x\,\frac{\partial G_{i,E_k}(x, t)}{\partial x} = \sum_{j=1}^{m} d_{0,ij}\, G_{j,E_k}(x, t) + \sum_{j=1}^{m} d_{1,ij}\, G_{j,E_k}(x, t), \qquad i\notin E_k.$$
For $i\in E_k = \{l_1, l_2, \ldots, l_k\}$, we have
$$G_{i,E_k}(x, t) = [1 + d_{0,ii} h]\, G_{i,E_k}\big(x e^{\delta_i h}, t - h\big) + \sum_{j=1, j\neq i}^{m} d_{0,ij}\, h\, G_{j,E_k}\big(x e^{\delta_i h}, t - h\big) + \sum_{j=1}^{m} d_{1,ij}\, h\int_0^{x e^{\delta_i h}} f_i(y)\, G_{j,E_k}\big(x e^{\delta_i h} - y, t - h\big)\,dy.$$
Taylor's expansion gives
$$\frac{\partial G_{i,E_k}(x, t)}{\partial t} - \delta_i x\,\frac{\partial G_{i,E_k}(x, t)}{\partial x} = \sum_{j=1}^{m} d_{0,ij}\, G_{j,E_k}(x, t) + \sum_{j=1}^{m} d_{1,ij}\int_0^x f_i(y)\, G_{j,E_k}(x - y, t)\,dy, \qquad i\in E_k.$$
The equations for $i\notin E_k$ and $i\in E_k$ can then be expressed in the matrix form (20). ☐
Remark 6.
If we set $E_k = E$ and $E_k = \{k\}$, respectively, we have the following results:
$$\frac{\partial G(x, t)}{\partial t} - x\,\delta\,\frac{\partial G(x, t)}{\partial x} = D_0\, G(x, t) + \int_0^x \mathbf{f}(y)\, D_1\, G(x - y, t)\,dy,$$
$$\frac{\partial G_k(x, t)}{\partial t} - x\,\delta\,\frac{\partial G_k(x, t)}{\partial x} = \big(D_0 + D_1 - I_k D_1\big)\, G_k(x, t) + \int_0^x I_k\,\mathbf{f}(y)\, D_1\, G_k(x - y, t)\,dy, \qquad\qquad (24)$$
with initial conditions
$$G(x, 0) = \mathbf{1}, \qquad G(0, t) = e^{D_0 t}\,\mathbf{1}; \qquad G_k(x, 0) = \mathbf{1}, \qquad G_k(0, t) = e^{(D_0 + D_1 - I_k D_1)t}\,\mathbf{1}.$$
Here, $G_k(0, t)$ is the solution of the differential equation obtained from Equation (24) by setting $x = 0$.
Remark 7.
The matrix-form partial integro-differential Equation (20) with the corresponding initial conditions given by Equation (21) may be solved numerically as follows.
(a) 
For two small step sizes $h_1$ and $h_2$, we set $G_{E_k}(l h_1, 0) = \mathbf{1}$, for $l = 1, 2, \ldots$, and we calculate $G_{E_k}(0, n h_2)$ using Equation (21) for $n = 1, 2, \ldots$.
(b) 
With Equation (20), $G_{E_k}(l h_1, n h_2)$ can be calculated recursively, for $n, l = 1, 2, \ldots$, by
$$\begin{aligned} G_{E_k}(l h_1, n h_2) = {} & \Big[I - l h_2\,\delta - h_2\big(D_0 + D_1 - I_{E_k} D_1\big)\Big]^{-1}\times\Big[G_{E_k}\big(l h_1, (n-1) h_2\big) - l h_2\,\delta\, G_{E_k}\big((l-1) h_1, n h_2\big) \\ & + h_2 h_1\sum_{j=0}^{l-1} I_{E_k}\,\mathbf{f}(j h_1)\, D_1\, G_{E_k}\big((l-1-j) h_1, n h_2\big)\Big].\end{aligned}$$
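The following sketch is our own implementation of this scheme for the two-state example of Section 6 (exponential claim densities), with $E_k = E$. The step sizes, the grid extent, and the function name `G_grid` are illustrative choices, not prescriptions from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Two-state example of Section 6, viewed as a MAP; exponential claim densities f_i.
A = np.array([[-0.25, 0.25], [0.75, -0.75]])
D1 = np.diag([1.0, 2.0 / 3.0])
D0 = A - D1
Delta = np.diag([0.03, 0.05])
betas = np.array([1.0, 0.5])                      # f_i(y) = beta_i * exp(-beta_i * y)
f = lambda y: np.diag(betas * np.exp(-betas * y))

def G_grid(x_max, t_max, h1, h2, I_Ek=np.eye(2)):
    """G_{E_k}(l*h1, n*h2) on a grid, by the recursion of Remark 7."""
    L, N, m = int(round(x_max / h1)), int(round(t_max / h2)), 2
    C = D0 + D1 - I_Ek @ D1
    G = np.ones((L + 1, N + 1, m))                # initial condition G(x, 0) = 1
    for n in range(1, N + 1):
        G[0, n] = expm(C * n * h2) @ np.ones(m)   # boundary G(0, t), Equation (21)
        for l in range(1, L + 1):
            conv = sum(I_Ek @ f(j * h1) @ D1 @ G[l - 1 - j, n] for j in range(l))
            rhs = G[l, n - 1] - l * h2 * Delta @ G[l - 1, n] + h2 * h1 * conv
            G[l, n] = np.linalg.solve(np.eye(m) - l * h2 * Delta - h2 * C, rhs)
    return G

G = G_grid(x_max=10.0, t_max=1.0, h1=0.1, h2=0.1)  # modest grid for illustration
print(G[-1, -1])   # approximations of G_i(10, 1) = P_i(S(1) <= 10), i = 1, 2
```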
Remark 8.
If $f_i(x) = \beta_i e^{-\beta_i x}$, $\beta_i > 0$, then $\mathbf{f}(x) = \beta\, e^{-\beta x}$, with $\mathbf{f}'(x) = -\beta\,\mathbf{f}(x)$, where $\beta = \operatorname{diag}(\beta_1, \beta_2, \ldots, \beta_m)$. Taking the partial derivative with respect to $x$ on both sides of Equation (20) and performing some manipulations, we obtain the following matrix-form second-order partial differential equation for $G_{E_k}(x, t)$:
$$\frac{\partial^2 G_{E_k}(x, t)}{\partial t\,\partial x} - x\,\delta\,\frac{\partial^2 G_{E_k}(x, t)}{\partial x^2} + I_{E_k}\,\beta\,\frac{\partial G_{E_k}(x, t)}{\partial t} - \Big[\delta + D_0 + D_1 + I_{E_k}\big(x\,\delta\,\beta - D_1\big)\Big]\frac{\partial G_{E_k}(x, t)}{\partial x} - I_{E_k}\,\beta\,\big(D_0 + D_1\big)\, G_{E_k}(x, t) = 0. \qquad\qquad (25)$$
This partial differential equation can also be solved numerically by using forward finite difference methods.
Remark 9.
Li et al. (2015) show that, when $\delta(s) = 0$, $G_i(x, t)$ can be used to find an expression for the density of the time of ruin in a MAP risk model.

6. Numerical Illustrations

In this section, we consider a two-state Markov-modulated model with intensity matrix
$$A = \begin{pmatrix} -1/4 & 1/4 \\ 3/4 & -3/4 \end{pmatrix}.$$
We also assume that $f_1(x) = e^{-x}$, $f_2(x) = 0.5\, e^{-0.5 x}$, $x > 0$, $\lambda_1 = 1$, $\lambda_2 = 2/3$ (the claim arrival rates in states 1 and 2), $\delta_1 = 0.03$, and $\delta_2 = 0.05$. Table 1 gives the first moments of $S_1(t)$ and $S_2(t)$ and their covariance for $t = 1, 2, 5, 10, 20, 30$, and $\infty$, given $J(0) = 1$ and $J(0) = 2$, respectively, where the covariances, for $i = 1, 2$, are calculated by
$$\operatorname{Cov}_i(t) \equiv \operatorname{Cov}\big(S_1(t), S_2(t)\,\big|\, J(0) = i\big) = E_i\big[S_1(t)\, S_2(t)\big] - E_i\big[S_1(t)\big]\, E_i\big[S_2(t)\big].$$
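For reproducibility, the sketch below (ours, with the same hedges as the earlier sketches) assembles this example as a MAP, taking $D_1 = \operatorname{diag}(\lambda_1, \lambda_2)$ and $D_0 = A - D_1$, and evaluates $E_1[S_1(t)]$, $E_1[S_2(t)]$, and $\operatorname{Cov}_1(t)$ via Corollary 2 and Theorem 4; the results should be comparable with the $J(0) = 1$ columns of Table 1.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

A = np.array([[-0.25, 0.25], [0.75, -0.75]])
D1 = np.diag([1.0, 2.0 / 3.0])     # D_1 = diag(lambda_1, lambda_2)
D0 = A - D1                        # D_0 = A - D_1
D = D0 + D1
Delta = np.diag([0.03, 0.05])
mu1 = np.diag([1.0, 2.0])          # means of f_1 and f_2
one = np.ones(2)
I1, I2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

def V1(t, Ij):
    """First-moment vector of S_j(t) (Corollary 2)."""
    B = Delta - D
    return np.linalg.solve(B, (np.eye(2) - expm(-B * t)) @ Ij @ mu1 @ D1 @ one)

def cov12(t):
    """Cov_i(S_1(t), S_2(t)), i = 1, 2, via the joint moment of Theorem 4."""
    g = lambda x: expm((D - 2 * Delta) * x) @ (I1 @ mu1 @ D1 @ V1(t - x, I2)
                                               + I2 @ mu1 @ D1 @ V1(t - x, I1))
    return quad_vec(g, 0.0, t)[0] - V1(t, I1) * V1(t, I2)

for t in [1.0, 2.0, 5.0, 10.0, 20.0, 30.0]:
    print(t, V1(t, I1)[0], V1(t, I2)[0], cov12(t)[0])
```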
Table 1 shows that, as expected, the expected values of $S_1(t)$ and $S_2(t)$ (and hence of $S(t)$) are increasing in $t$, given $J(0) = i$ for $i = 1, 2$. It is not surprising that $S_1(t)$ and $S_2(t)$ are negatively correlated for any $t$, as claims occurring in the two states compete with each other. Moreover, the larger the time $t$, the stronger the negative correlation between $S_1(t)$ and $S_2(t)$.
Figure 1 plots the variances of $S(t)$, $S_1(t)$, and $S_2(t)$, given $J(0) = 1$, for $0\leq t\leq 150$. The variances all increase with time $t$, and the variance of $S(t)$ is bigger than those of $S_1(t)$ and $S_2(t)$ for each fixed $t$. As $t\to\infty$, the three variances converge to their limiting values.
Table 2 and Table 3 display the covariances of the ADC at times $t$ and $t + h$, given $J(0) = 1$, for some selected $t$ values and for $h = 1$ and $h = 5$. They show that $S(t)$ and $S(t+h)$, $S_1(t)$ and $S_1(t+h)$, and $S_2(t)$ and $S_2(t+h)$ are all positively correlated. Moreover, the covariances increase as $t$ increases and decrease as $h$ increases. When $t\to\infty$, the covariances of the pairs $S(t)$ and $S(t+h)$, and $S_i(t)$ and $S_i(t+h)$, converge to the variances of $S(\infty)$ and $S_i(\infty)$, respectively. Similar patterns are expected for $J(0) = 2$.
Finally, we display in Figure 2 the numerical values of the distribution function of $S(t)$ with initial state $i$, $G_i(x, t) = P_i\big(S(t)\leq x\big)$, for $t = 1$ and $4$, $0\leq x\leq 25$, and $i = 1, 2$. Note that $G(x, t) = \big(G_1(x, t), G_2(x, t)\big)^{\top}$ satisfies the partial differential Equation (25), whose solution can be obtained numerically. The graphs clearly show that, for a fixed $x$, the probability of $S(t)$ exceeding $x$ is smaller for smaller values of $t$, as expected. For most $x$ values, $G_1(x, t)$ is bigger than $G_2(x, t)$, due to the fact that the underlying Markov process in our example tends to stay in state 1 more often than in state 2.

Author Contributions

The two authors contributed equally to this article.

Acknowledgments

The authors would like to thank two anonymous reviewers for providing helpful comments and suggestions that improved the presentation of this paper. The research of Dr. Yi Lu was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada (grant number 611467).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Justification of Equation (5)

It is easy to see that the partial derivative of ${}_iL(\xi_1, \xi_2, \ldots, \xi_m; t)$ in Equation (2) with respect to $\xi_i$ exists for $i\in E$. For its partial derivative with respect to $t$, we have
$$-\frac{\partial\, {}_iL(\xi_1, \ldots, \xi_m; t)}{\partial t} = \lim_{h\to 0}\frac{{}_iL(\xi_1, \ldots, \xi_m; t) - {}_iL(\xi_1, \ldots, \xi_m; t + h)}{h} = \lim_{h\to 0}\frac{1}{h}\, E_i\Big[e^{-\sum_{j=1}^{m}\xi_j S_j(t)}\Big(1 - e^{-\sum_{j=1}^{m}\xi_j (S_j(t+h) - S_j(t))}\Big)\Big].$$
Now, under certain regularity conditions, we obtain
$$\lim_{h\to 0} E_i\Bigg[e^{-\sum_{j=1}^{m}\xi_j S_j(t)}\,\frac{1 - e^{-\sum_{j=1}^{m}\xi_j (S_j(t+h) - S_j(t))}}{h}\Bigg] \leq \lim_{h\to 0} E_i\Bigg[\frac{1 - e^{-\sum_{j=1}^{m}\xi_j (S_j(t+h) - S_j(t))}}{h}\Bigg] = \lim_{h\to 0}\frac{1 - E_i\Big[e^{-\sum_{j=1}^{m}\xi_j (S_j(t+h) - S_j(t))}\Big]}{h}.$$
Let $\xi = \max(\xi_1, \xi_2, \ldots, \xi_m)$. Then
$$\lim_{h\to 0}\frac{1 - E_i\Big[e^{-\sum_{j=1}^{m}\xi_j (S_j(t+h) - S_j(t))}\Big]}{h} \leq \lim_{h\to 0}\frac{1 - E_i\Big[e^{-\xi\sum_{j=1}^{m}(S_j(t+h) - S_j(t))}\Big]}{h} = \lim_{h\to 0}\frac{1 - E_i\Big[e^{-\xi (S(t+h) - S(t))}\Big]}{h} = \lim_{h\to 0}\frac{1 - E_i\Big[e^{-\xi S(h)}\Big]}{h}.$$
Since
$$\begin{aligned} E_i\Big[e^{-\xi S(h)}\Big] & = [1 + d_{0,ii} h] + \sum_{k=1, k\neq i}^{m} d_{0,ik}\, h + \sum_{k=1}^{m} d_{1,ik}\, h\,\hat f_i\big(\xi e^{-\delta_i h}\big) + o(h) \\ & = 1 + \sum_{k=1}^{m} d_{0,ik}\, h + \sum_{k=1}^{m} d_{1,ik}\, h\,\hat f_i\big(\xi e^{-\delta_i h}\big) + o(h) \\ & = 1 + \sum_{k=1}^{m}\big(d_{0,ik} + d_{1,ik}\big) h + \sum_{k=1}^{m} d_{1,ik}\, h\,\Big(\hat f_i\big(\xi e^{-\delta_i h}\big) - 1\Big) + o(h) \\ & = 1 + \sum_{k=1}^{m} d_{1,ik}\, h\,\Big(\hat f_i\big(\xi e^{-\delta_i h}\big) - 1\Big) + o(h), \qquad\qquad (\mathrm{A}1)\end{aligned}$$
it follows from Equation (A1) that
$$\lim_{h\to 0}\frac{1 - E_i\Big[e^{-\sum_{j=1}^{m}\xi_j (S_j(t+h) - S_j(t))}\Big]}{h} \leq \lim_{h\to 0}\frac{1 - E_i\Big[e^{-\xi S(h)}\Big]}{h} = \sum_{k=1}^{m} d_{1,ik}\Big(1 - \hat f_i(\xi)\Big),$$
which justifies the existence of the partial derivative of ${}_iL(\xi_1, \xi_2, \ldots, \xi_m; t)$ with respect to $t$.

References

1. Asmussen, Søren. 2003. Applied Probability and Queues. New York: Springer.
2. Bargès, Mathieu, Hélène Cossette, Stéphane Loisel, and Étienne Marceau. 2011. On the moments of the aggregate discounted claims with dependence introduced by a FGM copula. ASTIN Bulletin 41: 215–38.
3. Delbaen, Freddy, and Jean Haezendonck. 1987. Classical risk theory in an economic environment. Insurance: Mathematics and Economics 6: 85–116.
4. Garrido, José, and Ghislain Léveillé. 2004. Inflation impact on aggregate claims. Encyclopedia of Actuarial Science 2: 875–78.
5. Kim, Bara, and Hwa-Sung Kim. 2007. Moments of claims in a Markov environment. Insurance: Mathematics and Economics 40: 485–97.
6. Jang, Ji-Wook. 2004. Martingale approach for moments of discounted aggregate claims. Journal of Risk and Insurance 71: 201–11.
7. Léveillé, Ghislain, and Franck Adékambi. 2011. Covariance of discounted compound renewal sums with stochastic interest. Scandinavian Actuarial Journal 2: 138–53.
8. Léveillé, Ghislain, and Franck Adékambi. 2012. Joint moments of discounted compound renewal sums. Scandinavian Actuarial Journal 1: 40–55.
9. Léveillé, Ghislain, and José Garrido. 2001a. Moments of compound renewal sums with discounted claims. Insurance: Mathematics and Economics 28: 217–31.
10. Léveillé, Ghislain, and José Garrido. 2001b. Recursive moments of compound renewal sums with discounted claims. Scandinavian Actuarial Journal 2: 98–110.
11. Léveillé, Ghislain, José Garrido, and Ya Fang Wang. 2010. Moment generating functions of compound renewal sums with discounted claims. Scandinavian Actuarial Journal 3: 165–84.
12. Li, Shuanming. 2008. Discussion of "On the Laplace transform of the aggregate discounted claims with Markovian arrivals". North American Actuarial Journal 12: 208–10.
13. Li, Jingchao, David C. M. Dickson, and Shuanming Li. 2015. Some ruin problems for the MAP risk model. Insurance: Mathematics and Economics 65: 1–8.
14. Mohd Ramli, Siti Norafidah, and Jiwook Jang. 2014. Neumann series on the recursive moments of copula-dependent aggregate discounted claims. Risks 2: 195–210.
15. Neuts, Marcel F. 1979. A versatile Markovian point process. Journal of Applied Probability 16: 764–79.
16. Ren, Jiandong. 2008. On the Laplace transform of the aggregate discounted claims with Markovian arrivals. North American Actuarial Journal 12: 198–206.
17. Taylor, Gregory Clive. 1979. Probability of ruin under inflationary conditions or under experience rating. ASTIN Bulletin 10: 149–62.
18. Wang, Ya Fang, José Garrido, and Ghislain Léveillé. 2018. The distribution of discounted compound PH-renewal processes. Methodology and Computing in Applied Probability 20: 69–96.
19. Willmot, Gordon E. 1989. The total claims distribution under inflationary conditions. Scandinavian Actuarial Journal 1: 1–12.
20. Woo, Jae-Kyung, and Eric C. K. Cheung. 2013. A note on discounted compound renewal sums under dependency. Insurance: Mathematics and Economics 52: 170–79.
Figure 1. Variances of $S(t)$, $S_1(t)$ and $S_2(t)$ with initial state $J(0) = 1$.
Figure 2. Distribution functions of $S(1)$ in (a) and $S(4)$ in (b) for $J(0) = 1, 2$.
Table 1. Expected values and covariances of $S_1(t)$ and $S_2(t)$ (first three columns: $J(0) = 1$; last three columns: $J(0) = 2$).

| $t$ | $E_1[S_1(t)]$ | $E_1[S_2(t)]$ | $\operatorname{Cov}_1(t)$ | $E_2[S_1(t)]$ | $E_2[S_2(t)]$ | $\operatorname{Cov}_2(t)$ |
|---|---|---|---|---|---|---|
| 1 | 0.8948 | 0.1196 | -0.0599 | 0.2690 | 0.9444 | -0.1412 |
| 2 | 1.6665 | 0.3607 | -0.2832 | 0.8117 | 1.4717 | -0.5475 |
| 5 | 3.7056 | 1.1998 | -1.3303 | 2.6996 | 2.4452 | -1.8361 |
| 10 | 6.6248 | 2.4695 | -2.9252 | 5.5563 | 3.6966 | -3.4208 |
| 20 | 11.1330 | 4.4336 | -5.0170 | 9.9757 | 5.6221 | -5.4630 |
| 30 | 14.3123 | 5.8188 | -6.1938 | 13.0922 | 6.9800 | -6.6142 |
| $\infty$ | 21.9178 | 9.1324 | -7.9012 | 20.5479 | 10.2283 | -8.2962 |
Table 2. Covariances of discounted claims at $t$ and $t + 1$, given $J(0) = 1$.

| $t$ | $\operatorname{Cov}_1(S(t), S(t+1))$ | $\operatorname{Cov}_1(S_1(t), S_1(t+1))$ | $\operatorname{Cov}_1(S_2(t), S_2(t+1))$ |
|---|---|---|---|
| 1 | 1.9327 | 1.7143 | 0.5328 |
| 2 | 3.9024 | 3.2169 | 1.7021 |
| 5 | 9.5545 | 7.3372 | 5.8448 |
| 10 | 17.0771 | 12.8965 | 11.4471 |
| 20 | 26.6637 | 20.2686 | 18.0782 |
| 30 | 31.9796 | 24.5961 | 21.2571 |
| $\infty$ | 40.3073 | 32.2449 | 23.8648 |
Table 3. Covariances of discounted claims at $t$ and $t + 5$, given $J(0) = 1$.

| $t$ | $\operatorname{Cov}_1(S(t), S(t+5))$ | $\operatorname{Cov}_1(S_1(t), S_1(t+5))$ | $\operatorname{Cov}_1(S_2(t), S_2(t+5))$ |
|---|---|---|---|
| 1 | 0.8651 | 1.2213 | 0.4437 |
| 2 | 1.3181 | 2.0228 | 1.4775 |
| 5 | 3.4481 | 4.5219 | 5.2676 |
| 10 | 7.5945 | 8.5121 | 10.5187 |
| 20 | 15.2651 | 14.9882 | 16.9435 |
| 30 | 21.6039 | 19.7868 | 20.2186 |
| $\infty$ | 40.3073 | 32.2449 | 23.8648 |
