Article

Logarithmic Asymptotics for Probability of Component-Wise Ruin in a Two-Dimensional Brownian Model

Krzysztof Dȩbicki 1, Lanpeng Ji 2 and Tomasz Rolski 1
1 Mathematical Institute, University of Wrocław, 50-137 Wrocław, Poland
2 School of Mathematics, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Risks 2019, 7(3), 83; https://doi.org/10.3390/risks7030083
Submission received: 14 June 2019 / Revised: 19 July 2019 / Accepted: 29 July 2019 / Published: 1 August 2019

Abstract: We consider a two-dimensional ruin problem where the surplus process of the business lines is modelled by a two-dimensional correlated Brownian motion with drift. We study the ruin function $P(u)$ for component-wise ruin (that is, both business lines are ruined over an infinite-time horizon), where $u$ is the same initial capital for each line. We measure the goodness of the business by analysing the adjustment coefficient, that is, the limit of $-\ln P(u)/u$ as $u$ tends to infinity, which depends essentially on the correlation $\rho$ of the two surplus processes. In order to work out the adjustment coefficient we solve a two-layer optimization problem.

1. Introduction

In classical risk theory, the surplus process of an insurance company is modelled by the compound Poisson risk model. For both applied and theoretical investigations, the calculation of ruin probabilities for such a model is of particular interest. In order to avoid technical calculations, a diffusion approximation is often considered, e.g., Asmussen and Albrecher (2010); Grandell (1991); Iglehart (1969); Klugman et al. (2012), which results in tractable approximations for the finite-time and infinite-time ruin probabilities of interest. The basic premise of the approximation is to let the number of claims in a unit time interval grow and to make the claim sizes smaller in such a way that the risk process converges to a Brownian motion with drift. Precisely, the Brownian motion risk process is defined by
\[
R(t) = x + pt - \sigma B(t), \quad t \ge 0,
\]
where $x>0$ is the initial capital, $p>0$ is the net profit rate and $\sigma B(t)$ models the net loss process, with $\sigma>0$ the volatility coefficient. Roughly speaking, $\sigma B(t)$ approximates the total claim amount process up to time $t$ minus its expectation; the latter is usually called the pure premium amount and is calculated to cover the average payments of claims. The net profit, also called the safety loading, is the component which protects the company from large deviations of the claims from their average and also allows an accumulation of capital. Ruin-related problems for Brownian models have been well studied; see, for example, Asmussen and Albrecher (2010); Gerber and Shiu (2004).
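As a point of reference (this classical one-dimensional fact is not restated in the paper, but it underlies the notion of adjustment coefficient used below): for the Brownian risk process above, the running maximum of the claim surplus is exponentially distributed, so that
\[
\sup_{t\ge0}\bigl(\sigma B(t)-pt\bigr)\ \sim\ \mathrm{Exp}\Bigl(\frac{2p}{\sigma^2}\Bigr),
\qquad
\mathbb{P}\Bigl\{\inf_{t\ge0}R(t)<0\Bigr\} = e^{-2px/\sigma^2},
\]
and the one-dimensional adjustment coefficient is $2p/\sigma^2$. In the standardized two-dimensional model of Section 2 (where $\sigma=1$) the individual lines thus have adjustment coefficients $2\mu_1$ and $2\mu_2$, which is consistent with the boundary value $2\mu_2$ appearing in Theorem 1.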
In recent years, multi-dimensional risk models have been introduced to model the surplus of multiple business lines of an insurance company or the surpluses of collaborating companies (e.g., insurance and reinsurance). We refer to Asmussen and Albrecher (2010) [Chapter XIII 9] and Avram and Loke (2018); Avram and Minca (2017); Avram et al. (2008a, 2008b); Albrecher et al. (2017); Azcue and Muler (2018); Azcue et al. (2019); Foss et al. (2017); Ji and Robert (2018) for relevant recent discussions. It is concluded in the literature that, in comparison with the well-understood one-dimensional risk models, the study of multi-dimensional risk models is much more challenging. It was shown recently in Delsing et al. (2019) that a multi-dimensional Brownian model can serve as an approximation of a multi-dimensional classical risk model in a Markovian environment. Therefore, results obtained for the multi-dimensional Brownian model can serve as approximations for multi-dimensional classical risk models in a Markovian environment; such a ruin probability approximation was used in the aforementioned paper. Indeed, multi-dimensional Brownian models have drawn a lot of attention owing to their tractability and practical relevance.
A d-dimensional Brownian model can be defined in a matrix form as
\[
\boldsymbol{R}(t) = \boldsymbol{x} + \boldsymbol{p}t - \boldsymbol{X}(t), \quad t \ge 0, \qquad \text{with } \boldsymbol{X}(t) = A\boldsymbol{B}(t),
\]
where $\boldsymbol{x}=(x_1,\ldots,x_d)^\top$ and $\boldsymbol{p}=(p_1,\ldots,p_d)^\top\in(0,\infty)^d$ are, respectively, (column) vectors representing the initial capital and the net profit rate, $A\in\mathbb{R}^{d\times d}$ is a non-singular matrix modelling the dependence between the different business lines, and $\boldsymbol{B}(t)=(B_1(t),\ldots,B_d(t))^\top$, $t\ge 0$, is a standard $d$-dimensional Brownian motion (BM) with independent coordinates. Here $\top$ is the transpose sign. In what follows, vectors are understood as column vectors written in bold letters.
Different types of ruin can be considered in multi-dimensional models; they all concern the probability that the surplus of one or more of the business lines drops below zero in a certain time interval $[0,T]$, with $T$ either a finite constant or infinity. One of the most commonly studied is the so-called simultaneous ruin probability defined as
\[
Q_T(\boldsymbol{x}) := \mathbb{P}\bigl\{\exists\, t\in[0,T]\ \text{such that}\ R_i(t)<0\ \text{for all}\ i=1,\ldots,d\bigr\},
\]
which is the probability that at a certain time $t\in[0,T]$ all the surpluses become negative simultaneously. For $T<\infty$, $Q_T(\boldsymbol{x})$ is called the finite-time simultaneous ruin probability, and $Q_\infty(\boldsymbol{x})$ is called the infinite-time simultaneous ruin probability. The simultaneous ruin probability, which is essentially the hitting probability of $\boldsymbol{R}(t)$ of the orthant $\{\boldsymbol{y}\in\mathbb{R}^d:\ y_i<0,\ i=1,\ldots,d\}$, has been discussed for multi-dimensional Brownian models in different contexts; see Dȩbicki et al. (2018); Garbit and Raschel (2014). In Garbit and Raschel (2014), for fixed $\boldsymbol{x}$ the asymptotic behaviour of $Q_T(\boldsymbol{x})$ as $T\to\infty$ has been discussed, whereas in Dȩbicki et al. (2018) the asymptotic behaviour, as $u\to\infty$, of the infinite-time ruin probability $Q_\infty(\boldsymbol{x})$, with $\boldsymbol{x}=\boldsymbol{\alpha}u=(\alpha_1u,\alpha_2u,\ldots,\alpha_du)^\top$, $\alpha_i>0$, $1\le i\le d$, has been obtained. Note that it is common in risk theory to derive the latter type of asymptotic results for ruin probabilities; see, for example, Avram et al. (2008a); Embrechts et al. (1997); Mikosch (2008).
Another type of ruin probability is the component-wise (or joint) ruin probability defined as
\[
P_T(\boldsymbol{x}) := \mathbb{P}\Bigl\{\bigcap_{i=1}^{d}\bigl\{\exists\, t\in[0,T]:\ R_i(t)<0\bigr\}\Bigr\}
= \mathbb{P}\Bigl\{\bigcap_{i=1}^{d}\Bigl\{\sup_{t_i\in[0,T]}\bigl(X_i(t_i)-p_it_i\bigr)>x_i\Bigr\}\Bigr\}, \qquad (1)
\]
which is the probability that all surpluses get below zero but possibly at different times. It is this possibility that makes the study of P T ( x ) more difficult.
The study of the joint distribution of the extrema of multi-dimensional BM over a finite-time interval has proved to be important in quantitative finance; see, for example, He et al. (1998); Kou and Zhong (2016). We refer to Delsing et al. (2019) for a comprehensive summary of related results. Due to the complexity of the problem, the two-dimensional case has been the focus in the literature, and for this case some explicit formulas can be obtained by using a PDE approach. Of particular relevance to the ruin probability $P_T(\boldsymbol{x})$ is a result derived in He et al. (1998) which shows that
\[
\mathbb{P}\Bigl\{\sup_{t\in[0,T]}\bigl(X_1(t)-p_1t\bigr)\le x_1,\ \sup_{s\in[0,T]}\bigl(X_2(s)-p_2s\bigr)\le x_2\Bigr\}
= e^{a_1x_1+a_2x_2+bT}\, f(x_1,x_2,T),
\]
where $a_1,a_2,b$ are known constants and $f$ is a function defined in terms of an infinite series, a double integral and Bessel functions. Using the above formula, one can derive an expression for $P_T(\boldsymbol{x})$ in the two-dimensional case as follows:
\[
P_T(\boldsymbol{x}) = 1 - \mathbb{P}\Bigl\{\sup_{t\in[0,T]}\bigl(X_1(t)-p_1t\bigr)\le x_1\Bigr\}
- \mathbb{P}\Bigl\{\sup_{s\in[0,T]}\bigl(X_2(s)-p_2s\bigr)\le x_2\Bigr\}
+ \mathbb{P}\Bigl\{\sup_{t\in[0,T]}\bigl(X_1(t)-p_1t\bigr)\le x_1,\ \sup_{s\in[0,T]}\bigl(X_2(s)-p_2s\bigr)\le x_2\Bigr\}, \qquad (2)
\]
where the expression for the distribution of a single supremum is also known; see He et al. (1998). Note that even though we have an explicit expression for $P_T(\boldsymbol{x})$ in (2) for the two-dimensional case, it seems difficult to derive an explicit form of the corresponding infinite-time ruin probability $P_\infty(\boldsymbol{x})$ by simply letting $T\to\infty$ in (2).
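For completeness, the expression (2) is nothing but inclusion–exclusion applied to the complements: writing $A_i=\{\sup_{t\in[0,T]}(X_i(t)-p_it)\le x_i\}$, $i=1,2$, we have
\[
P_T(\boldsymbol{x}) = \mathbb{P}\{A_1^c\cap A_2^c\} = 1-\mathbb{P}\{A_1\}-\mathbb{P}\{A_2\}+\mathbb{P}\{A_1\cap A_2\}.
\]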
By assuming $\boldsymbol{x}=\boldsymbol{\alpha}u=(\alpha_1u,\alpha_2u,\ldots,\alpha_du)^\top$, $\alpha_i>0$, $1\le i\le d$, we aim to analyse the asymptotic behaviour of the infinite-time ruin probability $P_\infty(\boldsymbol{x})$ as $u\to\infty$. Applying Theorem 1 in Dȩbicki et al. (2010) we arrive at the following logarithmic asymptotics:
\[
\frac{1}{u}\ln P_\infty(\boldsymbol{x}) \sim -\frac{1}{2}\,\inf_{\boldsymbol{t}>\boldsymbol{0}}\ \inf_{\boldsymbol{v}\ge\boldsymbol{\alpha}+\boldsymbol{p}\boldsymbol{t}}\ \boldsymbol{v}^\top \Sigma_{\boldsymbol{t}}^{-1}\boldsymbol{v}, \quad \text{as } u\to\infty, \qquad (3)
\]
provided that $\Sigma_{\boldsymbol{t}}$ is non-singular, where $\boldsymbol{p}\boldsymbol{t}:=(p_1t_1,\ldots,p_dt_d)^\top$, inequalities between vectors are meant component-wise, and $\Sigma_{\boldsymbol{t}}^{-1}$ is the inverse of the covariance matrix $\Sigma_{\boldsymbol{t}}$ of $(X_1(t_1),\ldots,X_d(t_d))^\top$, with $\boldsymbol{t}=(t_1,\ldots,t_d)^\top>\boldsymbol{0}=(0,\ldots,0)^\top\in\mathbb{R}^d$. Let us recall that, conventionally, for two given positive functions $f(\cdot)$ and $h(\cdot)$ we write $f(x)\sim h(x)$ if $\lim_{x\to\infty}f(x)/h(x)=1$.
For a more precise analysis of $P_\infty(\boldsymbol{x})$, it seems crucial to first solve the two-layer optimization problem in (3) and find the optimizing points $\boldsymbol{t}_0$. As will become apparent in what follows, already for the $d$-dimensional case with $d>2$ the calculations become highly nontrivial and complicated. Therefore, in this contribution we only discuss a tractable two-dimensional model and aim for an explicit logarithmic asymptotics by solving the minimization problem in (3).
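To make the two-layer structure of (3) concrete, the following brute-force sketch (not from the paper; the function names, grid and parameter values are ours) evaluates the inner constrained quadratic programme with a generic solver and then minimises over the outer time variables for $d=2$ with unit variances, correlation $\rho$ and $\boldsymbol{\alpha}=(1,1)^\top$:

```python
import numpy as np
from scipy.optimize import minimize

def inner_value(t, s, rho, mu1, mu2):
    """Inner minimisation in (3): min v' Sigma^{-1} v over v >= (1+mu1*t, 1+mu2*s)."""
    cov = rho * min(t, s)
    sigma = np.array([[t, cov], [cov, s]])
    sigma_inv = np.linalg.inv(sigma)
    b = np.array([1 + mu1 * t, 1 + mu2 * s])
    res = minimize(lambda v: v @ sigma_inv @ v, x0=b, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda v: v - b}])
    return res.fun

def adjustment_coefficient_numeric(rho, mu1, mu2, grid=np.linspace(0.05, 6.0, 80)):
    """Outer minimisation over (t, s) on a coarse grid; returns an approximation
    of -lim ln P(u)/u, i.e. one half of the double infimum in (3)."""
    best = min(inner_value(t, s, rho, mu1, mu2) for t in grid for s in grid)
    return 0.5 * best

# Illustrative parameters only (mu1 <= mu2 as assumed in Section 2).
print(adjustment_coefficient_numeric(0.3, 1.0, 2.0))
```

The value returned this way is only a sanity check on a coarse grid; it can be compared with the closed-form expression obtained in Theorem 1 below.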
In classical ruin theory, when analysing the compound Poisson model or the Sparre Andersen model, the so-called adjustment coefficient is used as a measure of goodness; see, for example, Asmussen and Albrecher (2010) or Rolski et al. (2009). It is also of interest, from a practical point of view, to obtain the solution of the minimization problem in (3), as it can be seen as an analogue of the adjustment coefficient and thus gives some insight into the risk that the company is facing. As discussed in Asmussen and Albrecher (2010) and Li et al. (2007), it is also of interest to know how the dependence between different risks influences the joint ruin probability; this can be easily analysed through the obtained logarithmic asymptotics; see Remark 2.
The rest of this paper is organised as follows. In Section 2, we formulate the two-dimensional Brownian model and give the main results of this paper. The main lines of proof with auxiliary lemmas are displayed in Section 3. In Section 4 we conclude the paper. All technical proofs of the lemmas in Section 3 are presented in Appendix A.

2. Model Formulation and Main Results

Due to the fact that the component-wise ruin probability $P_\infty(\boldsymbol{x})$ does not change under scaling, we can simply assume that the volatility coefficient of each business line is equal to 1. Furthermore, noting that the timelines of the different business lines should be distinguished, as shown in (1) and (3), we introduce a two-parameter extension of the correlated two-dimensional BM defined as
\[
\bigl(X_1(t),\ X_2(s)\bigr) = \Bigl(B_1(t),\ \rho B_1(s)+\sqrt{1-\rho^2}\,B_2(s)\Bigr), \quad t,s\ge 0,
\]
with $\rho\in(-1,1)$ and mutually independent Brownian motions $B_1,B_2$. We shall consider the following two dependent insurance risk processes
\[
R_i(t) = u + \mu_i t - X_i(t), \quad t\ge 0,\ i=1,2,
\]
where $\mu_1,\mu_2>0$ are the net profit rates and $u$ is the initial capital (which is assumed to be the same for both business lines, as otherwise the calculations become rather complicated). We shall assume without loss of generality that $\mu_1\le\mu_2$. Here, $\mu_i$ differs from $p_i$ (see (1)) in the sense that it corresponds to the (scaled) model with the volatility coefficient standardized to 1.
In this contribution, we shall focus on the logarithmic asymptotics of
\[
P(u) := P_\infty\bigl(u(1,1)^\top\bigr)
= \mathbb{P}\Bigl\{\{\exists\,t\ge0:\ R_1(t)<0\}\cap\{\exists\,s\ge0:\ R_2(s)<0\}\Bigr\}
= \mathbb{P}\Bigl\{\sup_{t\ge0}\bigl(X_1(t)-\mu_1t\bigr)>u,\ \sup_{s\ge0}\bigl(X_2(s)-\mu_2s\bigr)>u\Bigr\}, \quad \text{as } u\to\infty. \qquad (4)
\]
Define
\[
\hat\rho_1 = \frac{\mu_1+\mu_2-\sqrt{(\mu_1+\mu_2)^2-4\mu_1(\mu_2-\mu_1)}}{4\mu_1}\in\Bigl[0,\tfrac12\Bigr),
\qquad
\hat\rho_2 = \frac{\mu_1+\mu_2}{2\mu_2}, \qquad (5)
\]
and let
\[
t^* = t^*(\rho) = s^* = s^*(\rho) := \sqrt{\frac{2(1-\rho)}{\mu_1^2+\mu_2^2-2\rho\mu_1\mu_2}}. \qquad (6)
\]
The following theorem constitutes the main result of this contribution.
Theorem 1.
For the joint infinite-time ruin probability (4) we have, as u ,
\[
-\frac{\ln P(u)}{u} \sim
\begin{cases}
2\bigl(\mu_2+(1-2\rho)\mu_1\bigr), & \text{if } -1<\rho\le\hat\rho_1;\\[4pt]
\dfrac{\mu_1+\mu_2+2/t^*}{1+\rho}, & \text{if } \hat\rho_1<\rho<\hat\rho_2;\\[4pt]
2\mu_2, & \text{if } \hat\rho_2\le\rho<1.
\end{cases}
\]
Remark 2.
(a) Following classical one-dimensional risk theory, we may call the quantities on the right-hand side of Theorem 1 adjustment coefficients. They sometimes serve as a measure of the goodness of a risk business.
(b) One can easily check that the adjustment coefficient, as a function of $\rho$, is continuous, strictly decreasing on $(-1,\hat\rho_2]$ and constant, equal to $2\mu_2$, on $[\hat\rho_2,1)$. This means that as the two lines of business become more positively correlated, the risk of ruin becomes larger, which is consistent with intuition.
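The three regimes of Theorem 1 are easy to evaluate numerically. The short sketch below (our own illustration; the helper name and the example values $\mu_1=1$, $\mu_2=2$ are not from the paper) implements the closed-form adjustment coefficient and can be used to observe the monotone behaviour described in part (b):

```python
import numpy as np

def adjustment_coefficient(rho, mu1, mu2):
    """Closed-form -lim ln P(u)/u from Theorem 1; assumes 0 < mu1 <= mu2 and -1 < rho < 1."""
    rho1 = (mu1 + mu2 - np.sqrt((mu1 + mu2) ** 2 - 4 * mu1 * (mu2 - mu1))) / (4 * mu1)
    rho2 = (mu1 + mu2) / (2 * mu2)
    if rho <= rho1:
        return 2 * (mu2 + (1 - 2 * rho) * mu1)
    if rho < rho2:
        t_star = np.sqrt(2 * (1 - rho) / (mu1 ** 2 + mu2 ** 2 - 2 * rho * mu1 * mu2))
        return (mu1 + mu2 + 2 / t_star) / (1 + rho)
    return 2 * mu2

# The coefficient is non-increasing in rho, cf. Remark 2(b).
for r in np.linspace(-0.9, 0.9, 7):
    print(f"rho = {r:+.2f}, gamma = {adjustment_coefficient(r, 1.0, 2.0):.4f}")
```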
Define
\[
g(t,s) := \inf_{\substack{x\ge 1+\mu_1t\\ y\ge 1+\mu_2s}} (x,y)\,\Sigma_{ts}^{-1}\,(x,y)^\top, \quad t,s>0, \qquad (7)
\]
where $\Sigma_{ts}^{-1}$ is the inverse of the matrix $\Sigma_{ts}=\begin{pmatrix} t & \rho\,(t\wedge s)\\ \rho\,(t\wedge s) & s\end{pmatrix}$, with $t\wedge s=\min(t,s)$ and $\rho\in(-1,1)$.
The proof of Theorem 1 follows from (3), which implies that the logarithmic asymptotics for $P(u)$ is of the form
\[
\frac{1}{u}\ln P(u) \sim -\frac{g(\boldsymbol{t}_0)}{2}, \quad u\to\infty, \qquad (8)
\]
where
\[
g(\boldsymbol{t}_0) = \inf_{(t,s)\in(0,\infty)^2} g(t,s), \qquad (9)
\]
and from Proposition 3 below, wherein we list the dominating points $\boldsymbol{t}_0$ that optimize the function $g$ over $(0,\infty)^2$ and the corresponding optimal values $g(\boldsymbol{t}_0)$.
In order to solve the two-layer minimization problem in (9) (see also (7)), we define, for $t,s>0$, the following functions:
\[
g_1(t) = \frac{(1+\mu_1t)^2}{t}, \qquad
g_2(s) = \frac{(1+\mu_2s)^2}{s}, \qquad
g_3(t,s) = (1+\mu_1t,\ 1+\mu_2s)\,\Sigma_{ts}^{-1}\,(1+\mu_1t,\ 1+\mu_2s)^\top.
\]
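It is worth recording here (the computation is elementary and is used several times below) that $g_1$ and $g_2$ are convex on $(0,\infty)$ with
\[
g_2'(s) = \mu_2^2-\frac{1}{s^2} = 0 \iff s = \frac{1}{\mu_2}, \qquad g_2\Bigl(\frac{1}{\mu_2}\Bigr) = 4\mu_2,
\]
and, analogously, $\inf_{t>0}g_1(t)=g_1(1/\mu_1)=4\mu_1$.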
Since $t\wedge s$ appears in the definition of $g_3$, we shall consider a partition of the quadrant $(0,\infty)^2$, namely
\[
(0,\infty)^2 = A\cup L\cup B, \qquad A=\{s<t\},\quad L=\{s=t\},\quad B=\{s>t\}. \qquad (10)
\]
For convenience we denote $\bar A=\{s\le t\}=A\cup L$ and $\bar B=\{s\ge t\}=B\cup L$. Hereafter, all sets are defined in $(0,\infty)^2$, so the qualifier $(t,s)\in(0,\infty)^2$ will be omitted.
Note that $g_3(t,s)$ can be represented in the following form:
\[
g_3(t,s) = \begin{cases}
g_A(t,s) := \dfrac{(1+\mu_2s)^2}{s} + \dfrac{\bigl((1+\mu_1t)-\rho(1+\mu_2s)\bigr)^2}{t-\rho^2s}, & \text{if } (t,s)\in\bar A,\\[10pt]
g_B(t,s) := \dfrac{(1+\mu_1t)^2}{t} + \dfrac{\bigl((1+\mu_2s)-\rho(1+\mu_1t)\bigr)^2}{s-\rho^2t}, & \text{if } (t,s)\in\bar B.
\end{cases} \qquad (11)
\]
Denote further
\[
g_L(s) := g_A(s,s) = g_B(s,s) = \frac{(1+\mu_1s)^2+(1+\mu_2s)^2-2\rho(1+\mu_1s)(1+\mu_2s)}{(1-\rho^2)s}, \quad s>0. \qquad (12)
\]
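To see that (12) is consistent with (11), set $t=s$ in $g_A$ and clear the common denominator (the computation for $g_B$ is symmetric):
\[
g_A(s,s) = \frac{(1-\rho^2)(1+\mu_2s)^2+\bigl((1+\mu_1s)-\rho(1+\mu_2s)\bigr)^2}{(1-\rho^2)s}
= \frac{(1+\mu_1s)^2+(1+\mu_2s)^2-2\rho(1+\mu_1s)(1+\mu_2s)}{(1-\rho^2)s},
\]
since the $\rho^2(1+\mu_2s)^2$ terms cancel after expanding the square.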
In the next proposition we identify the so-called dominating points, that is, the points $\boldsymbol{t}_0$ at which the function defined in (7) attains its minimum. This identification might also be useful for deriving a more subtle asymptotics of $P(u)$.
Notation: In the following, in order to keep the notation consistent, $\rho\le\mu_1/\mu_2$ is understood as $\rho<1$ if $\mu_1=\mu_2$.
Proposition 3.
(i) 
Suppose that $-1<\rho<0$.
For $\mu_1<\mu_2$ we have
\[
g(\boldsymbol{t}_0) = g_A(t_A,s_A) = 4\bigl(\mu_2+(1-2\rho)\mu_1\bigr),
\]
where $(t_A,s_A)=(t_A(\rho),s_A(\rho)):=\Bigl(\dfrac{1-2\rho}{\mu_1},\,\dfrac{1}{\mu_2-2\mu_1\rho}\Bigr)$ is the unique minimizer of $g(t,s)$, $(t,s)\in(0,\infty)^2$.
For $\mu_1=\mu_2=:\mu$ we have
\[
g(\boldsymbol{t}_0) = g_A(t_A,s_A) = g_B(t_B,s_B) = 8(1-\rho)\mu,
\]
where $(t_A,s_A)=\Bigl(\dfrac{1-2\rho}{\mu},\,\dfrac{1}{(1-2\rho)\mu}\Bigr)\in A$ and $(t_B,s_B):=\Bigl(\dfrac{1}{(1-2\rho)\mu},\,\dfrac{1-2\rho}{\mu}\Bigr)\in B$ are the only two minimizers of $g(t,s)$, $(t,s)\in(0,\infty)^2$.
(ii) 
Suppose that $0\le\rho<\hat\rho_1$. We have
\[
g(\boldsymbol{t}_0) = g_A(t_A,s_A) = 4\bigl(\mu_2+(1-2\rho)\mu_1\bigr),
\]
where $(t_A,s_A)\in A$ is the unique minimizer of $g(t,s)$, $(t,s)\in(0,\infty)^2$.
(iii) 
Suppose that $\rho=\hat\rho_1$. We have
\[
g(\boldsymbol{t}_0) = g_A(t_A,s_A) = 4\bigl(\mu_2+(1-2\rho)\mu_1\bigr),
\]
where $(t_A,s_A)=(t_A(\hat\rho_1),s_A(\hat\rho_1))=(t^*(\hat\rho_1),s^*(\hat\rho_1))\in L$ is the unique minimizer of $g(t,s)$, $(t,s)\in(0,\infty)^2$, with $(t^*,s^*)$ defined in (6).
(iv) 
Suppose that $\hat\rho_1<\rho<\hat\rho_2$. We have
\[
g(\boldsymbol{t}_0) = g_A(t^*,s^*) = g_L(t^*) = \frac{2}{1+\rho}\bigl(\mu_1+\mu_2+2/t^*\bigr),
\]
where $(t^*,s^*)\in L$ is the unique minimizer of $g(t,s)$, $(t,s)\in(0,\infty)^2$.
(v) 
Suppose that $\rho=\hat\rho_2$. We have $t^*(\hat\rho_2)=s^*(\hat\rho_2)=1/\mu_2$ and
\[
g(\boldsymbol{t}_0) = g_A(1/\mu_2,1/\mu_2) = g_L(1/\mu_2) = g_2(1/\mu_2) = 4\mu_2,
\]
where the minimum of $g(t,s)$, $(t,s)\in(0,\infty)^2$, is attained at $(1/\mu_2,1/\mu_2)$, with $g_3(1/\mu_2,1/\mu_2)=g_2(1/\mu_2)$, and $1/\mu_2$ is the unique minimizer of $g_2(s)$, $s\in(0,\infty)$.
(vi) 
Suppose that $\hat\rho_2<\rho<1$. We have
\[
g(\boldsymbol{t}_0) = g_2(1/\mu_2) = 4\mu_2,
\]
where the minimum of $g(t,s)$, $(t,s)\in(0,\infty)^2$, is attained whenever $g(t,s)=g_2(s)$.
Remark 4.
In the case $\mu_1=\mu_2$, we have $\hat\rho_1=0$ and $\hat\rho_2=1$, and thus scenarios (ii) and (vi) do not apply.

3. Proofs of Main Results

As discussed in the previous section, Proposition 3, combined with (8), straightforwardly implies the statement of Theorem 1. In what follows, we shall focus on the proof of Proposition 3, for which we need to find the dominating points $\boldsymbol{t}_0$ by solving the two-layer minimization problem (9).
The solution of the quadratic programming problem of the form (7) (the inner minimization problem of (9)) is well understood; see, for example, Hashorva (2005); Hashorva and Hüsler (2002) (see also Lemma 2.1 of Dȩbicki et al. (2018)). For completeness and ease of reference, we present below Lemma 2.1 of Dȩbicki et al. (2018) for the case $d=2$.
We introduce some more notation. If $I\subseteq\{1,2\}$, then for a vector $\boldsymbol{a}\in\mathbb{R}^2$ we denote by $\boldsymbol{a}_I=(a_i,\ i\in I)$ the sub-block vector of $\boldsymbol{a}$ determined by $I$. Similarly, if further $J\subseteq\{1,2\}$, for a matrix $M=(m_{ij})_{i,j\in\{1,2\}}\in\mathbb{R}^{2\times2}$ we denote by $M_{IJ}=M_{I,J}=(m_{ij})_{i\in I,j\in J}$ the sub-block matrix of $M$ determined by $I$ and $J$. Further, we write $M_{II}^{-1}=(M_{II})^{-1}$ for the inverse of $M_{II}$ whenever it exists.
Lemma 5.
Let $M\in\mathbb{R}^{2\times2}$ be a positive definite matrix. If $\boldsymbol{b}\in\mathbb{R}^2\setminus(-\infty,0]^2$, then the quadratic programming problem
\[
P_M(\boldsymbol{b}):\quad \text{minimise } \boldsymbol{x}^\top M^{-1}\boldsymbol{x} \text{ under the linear constraint } \boldsymbol{x}\ge\boldsymbol{b}
\]
has a unique solution $\tilde{\boldsymbol{b}}$ and there exists a unique non-empty index set $I\subseteq\{1,2\}$ such that
\[
\tilde{\boldsymbol{b}}_I = \boldsymbol{b}_I \ne \boldsymbol{0}_I, \qquad M_{II}^{-1}\boldsymbol{b}_I > \boldsymbol{0}_I, \qquad \text{and, if } I^c=\{1,2\}\setminus I\ne\emptyset, \text{ then } \tilde{\boldsymbol{b}}_{I^c} = M_{I^cI}M_{II}^{-1}\boldsymbol{b}_I \ge \boldsymbol{b}_{I^c}.
\]
Furthermore,
\[
\min_{\boldsymbol{x}\ge\boldsymbol{b}} \boldsymbol{x}^\top M^{-1}\boldsymbol{x} = \tilde{\boldsymbol{b}}^\top M^{-1}\tilde{\boldsymbol{b}} = \boldsymbol{b}_I^\top M_{II}^{-1}\boldsymbol{b}_I > 0, \qquad
\boldsymbol{x}^\top M^{-1}\tilde{\boldsymbol{b}} = \boldsymbol{x}_I^\top M_{II}^{-1}\tilde{\boldsymbol{b}}_I = \boldsymbol{x}_I^\top M_{II}^{-1}\boldsymbol{b}_I, \quad \forall\,\boldsymbol{x}\in\mathbb{R}^2.
\]
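For $d=2$ the case analysis of Lemma 5 can be carried out directly; the following sketch (our own illustration, not code from the paper) returns the minimal value $\boldsymbol{b}_I^\top M_{II}^{-1}\boldsymbol{b}_I$ together with the minimiser $\tilde{\boldsymbol{b}}$ by checking the three possible index sets:

```python
import numpy as np

def quadratic_programming_2d(M, b):
    """Solve P_M(b): minimise x' M^{-1} x subject to x >= b, for 2x2 positive
    definite M and b outside (-inf, 0]^2, following the index sets of Lemma 5."""
    M_inv = np.linalg.inv(M)
    # Index set I = {1, 2}: both constraints active, b_tilde = b.
    if np.all(M_inv @ b > 0):
        return b @ M_inv @ b, b.copy()
    # Index sets I = {1} or I = {2}: one constraint active.
    for i, j in [(0, 1), (1, 0)]:
        if b[i] > 0:                        # M_II^{-1} b_I > 0 reduces to b_i > 0
            x_j = M[j, i] / M[i, i] * b[i]  # b_tilde on the inactive coordinate
            if x_j >= b[j]:
                x = np.empty(2)
                x[i], x[j] = b[i], x_j
                return b[i] ** 2 / M[i, i], x
    raise ValueError("no admissible index set found (check the assumptions on M, b)")

# Example with the matrix Sigma_{ts} from (7): t = 1.0, s = 0.8, rho = 0.5 (values ours).
rho, mu1, mu2, t, s = 0.5, 1.0, 2.0, 1.0, 0.8
Sigma = np.array([[t, rho * min(t, s)], [rho * min(t, s), s]])
barrier = np.array([1 + mu1 * t, 1 + mu2 * s])
print(quadratic_programming_2d(Sigma, barrier))
```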
For the solution of the quadratic programming problem (7) a suitable representation for g ( t , s ) is worked out in the following lemma.
For $1>\rho>\mu_1/\mu_2$, let $D_2=\{(t,s):\ w_1(s)\le t\le f_1(s)\}$ and $D_1=(0,\infty)^2\setminus D_2$, with boundary functions given by
\[
f_1(s) = \frac{\rho-1}{\mu_1}+\frac{\rho\mu_2}{\mu_1}s, \qquad
w_1(s) = \frac{s}{\rho+(\rho\mu_2-\mu_1)s}, \quad s\ge 0, \qquad (13)
\]
and the unique intersection point of $f_1(s)$ and $w_1(s)$, $s\ge 0$, given by
\[
s_1^* = s_1^*(\rho) := \frac{1-\rho}{\rho\mu_2-\mu_1}, \qquad (14)
\]
as depicted in Figure 1.
Lemma 6.
Let g ( t , s ) , t , s > 0 be given as in (7). We have:
(i) 
If $-1<\rho\le\mu_1/\mu_2$, then
\[
g(t,s) = g_3(t,s), \quad (t,s)\in(0,\infty)^2.
\]
(ii) 
If $1>\rho>\mu_1/\mu_2$, then
\[
g(t,s) = \begin{cases} g_3(t,s), & \text{if } (t,s)\in D_1,\\ g_2(s), & \text{if } (t,s)\in D_2. \end{cases}
\]
Moreover, we have $g_3(f_1(s),s)=g_3(w_1(s),s)=g_2(s)$ for all $s\ge s_1^*$.

3.1. Proof of Proposition 3

We shall discuss the case $-1<\rho<0$ and the case $0\le\rho<1$ in the following two subsections. In both scenarios we first derive the minimizers of the function $g(t,s)$ on the regions $\bar A$ and $\bar B$ (see (10)) separately and then look for a global minimizer by comparing the two minimum values. For clarity, some scenarios are analysed in the form of lemmas.

3.1.1. Case $-1<\rho<0$

By Lemma 6, we have that
\[
g(t,s) = g_3(t,s), \quad (t,s)\in(0,\infty)^2.
\]
We shall derive the minimizers of $g_3(t,s)$ on $\bar A$ and $\bar B$ separately.
Minimizers of $g_3(t,s)$ on $\bar A$. We have, for any fixed $s$,
\[
\frac{\partial g_3(t,s)}{\partial t} = \frac{\partial g_A(t,s)}{\partial t} = 0 \iff \bigl(\mu_1t+1-\rho-\rho\mu_2s\bigr)\bigl(\mu_1t-(2\mu_1\rho^2-\rho\mu_2)s+\rho-1\bigr) = 0,
\]
where the representation (11) is used. Two roots of the above equation are:
\[
t_1 = t_1(s) := \frac{\rho-1+\rho\mu_2s}{\mu_1}, \qquad
t_2 = t_2(s) := \frac{1-\rho+(2\mu_1\rho^2-\rho\mu_2)s}{\mu_1}.
\]
Note that, due to the form of the function $g_A(t,s)$ given in (11), for any fixed $s$ there exists a unique minimizer of $g_A(t,s)$ on $\bar A$, which is either an inner point $t_1$ or $t_2$ (the one that is larger than $s$) or the boundary point $s$. Next, we check whether either of $t_i$, $i=1,2$, is larger than $s$. Since $\rho<0$, we have $t_1<0<t_2$, so we check whether $t_2>s$. It can be shown that
\[
t_2 > s \iff \bigl(\mu_1+\rho\mu_2-2\mu_1\rho^2\bigr)s < 1-\rho. \qquad (16)
\]
Two scenarios, $\mu_1+\rho\mu_2-2\mu_1\rho^2\le 0$ and $\mu_1+\rho\mu_2-2\mu_1\rho^2>0$, will be distinguished.
Scenario $\mu_1+\rho\mu_2-2\mu_1\rho^2\le 0$. We have from (16) that
\[
t_1 < 0 < s < t_2,
\]
and thus
\[
\inf_{(t,s)\in\bar A} g_3(t,s) = \inf_{s>0} f_A(s),
\]
where
\[
f_A(s) := g_A(t_2(s),s) = \frac{(1+\mu_2s)^2}{s} + 4\mu_1\bigl((1-\rho)+(\rho^2\mu_1-\rho\mu_2)s\bigr).
\]
Next, since
\[
f_A'(s) = 0 \iff s_A = s_A(\rho) := \frac{1}{|\mu_2-2\rho\mu_1|} = \frac{1}{\mu_2-2\rho\mu_1} > 0, \qquad (17)
\]
the unique minimizer of $g_3(t,s)$ on $\bar A$ is given by $(t_A,s_A)$ with
\[
t_A := t_2(s_A) = \frac{1-2\rho}{\mu_1}.
\]
Scenario $\mu_1+\rho\mu_2-2\mu_1\rho^2>0$. We have from (16) that
\[
t_1 < 0 < s < t_2 \iff s < \frac{1-\rho}{\mu_1+\rho\mu_2-2\mu_1\rho^2} = \frac{1-\rho}{\rho(\mu_2-\mu_1\rho)+\mu_1(1-\rho^2)} =: s^{**}(\rho) = s^{**}, \qquad (18)
\]
and in this case,
\[
\inf_{(t,s)\in\bar A} g_3(t,s) = \min\Bigl(\inf_{0<s<s^{**}} f_A(s),\ \inf_{s\ge s^{**}} g_L(s)\Bigr), \qquad (19)
\]
where $g_L(s)$ is given in (12). Note that
\[
g_L'(s) = 0 \iff s = s^* = s^*(\rho) = \sqrt{\frac{2(1-\rho)}{\mu_1^2+\mu_2^2-2\rho\mu_1\mu_2}}. \qquad (20)
\]
Next, for $-1<\rho<0$ we have (recall $s^{**}$ given in (18))
\[
s^{**} \ge \frac{1-\rho}{\mu_1(1-\rho^2)} > \frac{1}{\mu_1} \ge \frac{1}{\mu_2} > s_A, \qquad s^{**} > \frac{1-\rho}{\mu_1} > s^*.
\]
Therefore, by (19) we conclude that the unique minimizer of $g_3(t,s)$ on $\bar A$ is again given by $(t_A,s_A)$. Consequently, for all $-1<\rho<0$, the unique minimizer of $g_3(t,s)$ on $\bar A$ is $(t_A,s_A)$, and
\[
\inf_{(t,s)\in\bar A} g_3(t,s) = g_A(t_A,s_A) = 4\bigl(\mu_2+(1-2\rho)\mu_1\bigr).
\]
Minimizers of $g_3(t,s)$ on $\bar B$. Similarly, we have, for any fixed $t$,
\[
\frac{\partial g_3(t,s)}{\partial s} = \frac{\partial g_B(t,s)}{\partial s} = 0 \iff \bigl(\mu_2s+1-\rho-\rho\mu_1t\bigr)\bigl(\mu_2s-(2\mu_2\rho^2-\rho\mu_1)t+\rho-1\bigr) = 0.
\]
Two roots of the above equation are:
\[
s_1 = s_1(t) := \frac{\rho-1+\rho\mu_1t}{\mu_2}, \qquad
s_2 = s_2(t) := \frac{1-\rho+(2\mu_2\rho^2-\rho\mu_1)t}{\mu_2}. \qquad (22)
\]
Next, we check whether either of $s_i$, $i=1,2$, is greater than $t$. Again $s_1<0<s_2$ since $\rho<0$, so we check whether $s_2>t$. It can be shown that
\[
s_2 > t \iff \bigl(\mu_2+\rho\mu_1-2\mu_2\rho^2\bigr)t < 1-\rho. \qquad (23)
\]
Thus, for the scenario $\mu_2+\rho\mu_1-2\mu_2\rho^2\le 0$ we have that
\[
s_1 < 0 < t < s_2
\]
and in this case
\[
\inf_{(t,s)\in\bar B} g_3(t,s) = \inf_{t>0} f_B(t),
\]
with
\[
f_B(t) := g_B(t,s_2(t)) = \frac{(1+\mu_1t)^2}{t} + 4\mu_2\bigl((1-\rho)+(\rho^2\mu_2-\rho\mu_1)t\bigr).
\]
Next, note that
\[
f_B'(t) = 0 \iff t_B = t_B(\rho) := \frac{1}{|\mu_1-2\rho\mu_2|} = \frac{1}{\mu_1-2\rho\mu_2} > 0. \qquad (24)
\]
Therefore, the unique minimizer of $g_3(t,s)$ on $\bar B$ is given by $(t_B,s_B)$ with
\[
s_B := s_2(t_B) = \frac{1-2\rho}{\mu_2}, \qquad
\inf_{(t,s)\in\bar B} g_3(t,s) = g_B(t_B,s_B) = 4\bigl(\mu_1+(1-2\rho)\mu_2\bigr).
\]
For the scenario $\mu_2+\rho\mu_1-2\mu_2\rho^2>0$ we have from (23) that
\[
s_1 < 0 < t < s_2 \iff t < \frac{1-\rho}{\mu_2+\rho\mu_1-2\mu_2\rho^2} = \frac{1-\rho}{\rho(\mu_1-\rho\mu_2)+\mu_2(1-\rho^2)} =: t^{**}(\rho) = t^{**}. \qquad (25)
\]
In this case,
\[
\inf_{(t,s)\in\bar B} g_3(t,s) = \min\Bigl(\inf_{0<t<t^{**}} f_B(t),\ \inf_{t\ge t^{**}} g_L(t)\Bigr).
\]
Though it is not easy to determine the optimizer explicitly in this case, we can conclude that the minimum is attained at $(t_B,s_B)$, $(t^*,t^*)$ or $(t^{**},t^{**})$, where $t^*=t^*(\rho)=s^*(\rho)$. Further, we have from the discussion around (19) that
\[
g_A(t_A,s_A) < g_L(s^*) = g_L(t^*) = \min\bigl(g_L(t^*),\,g_L(t^{**})\bigr),
\]
and
\[
g_B(t_B,s_B) = 4\bigl(\mu_1+(1-2\rho)\mu_2\bigr) \ge 4\bigl(\mu_2+(1-2\rho)\mu_1\bigr) = g_A(t_A,s_A).
\]
Combining the above discussions on $\bar A$ and $\bar B$, we conclude that Proposition 3 holds for $-1<\rho<0$.

3.1.2. Case $0\le\rho<1$

We shall derive the minimizers of $g(t,s)$ on $\bar A$ and $\bar B$ separately. We start with the discussion on $\bar B$, for which we give the following lemma. Recall $t^*(\rho)=s^*(\rho)$ defined in (20) (see also (6)), $t_B(\rho)$ defined in (24), $t^{**}(\rho)$ defined in (25) and $s_1^*(\rho)$ defined in (14) for $\mu_1/\mu_2<\rho<1$. Note that, where it applies, $1/0$ is understood as $+\infty$ and $1/\infty$ is understood as $0$.
Lemma 7.
We have:
(a) 
The function $t^*(\rho)$ is decreasing on $[0,1]$, and both $t_B(\rho)$ and $s_1^*(\rho)$ are decreasing on $(\mu_1/\mu_2,1)$.
(b) 
The function $t^{**}(\rho)$ decreases from $1/\mu_2$ at $\rho=0$ to some positive value, then increases back to $1/\mu_2$ at $\hat\rho_2$ (defined in (5)), and then increases further to $+\infty$ at the root $\hat\rho\in(0,1]$ of the equation $\mu_2+\rho\mu_1-2\mu_2\rho^2=0$.
(c) 
For $0\le\rho\le\mu_1/\mu_2$, we have
\[
t_B(\rho) \ge t^{**}(\rho), \qquad t^*(\rho) \ge t^{**}(\rho),
\]
where both equalities hold only when $\rho=0$ and $\mu_1=\mu_2$.
(d) 
It holds that
\[
t^*(\hat\rho_2) = t_B(\hat\rho_2) = s_1^*(\hat\rho_2) = t^{**}(\hat\rho_2) = \frac{1}{\mu_2}. \qquad (26)
\]
Moreover, for $\mu_1/\mu_2<\rho<1$ we have:
(i)
$t^*(\rho)<s_1^*(\rho)$ for all $\rho\in(\mu_1/\mu_2,\hat\rho_2)$, and $t^*(\rho)>s_1^*(\rho)$ for all $\rho\in(\hat\rho_2,1)$.
(ii)
$t_B(\rho)<s_1^*(\rho)$ for all $\rho\in(\mu_1/\mu_2,\hat\rho_2)$, and $t_B(\rho)>s_1^*(\rho)$ for all $\rho\in(\hat\rho_2,1)$.
(iii)
$t^{**}(\rho)<s_1^*(\rho)$ for all $\rho\in(\mu_1/\mu_2,\hat\rho_2)$, and $t^{**}(\rho)>s_1^*(\rho)$ for all $\rho\in(\hat\rho_2,\hat\rho)$.
(iv)
$t^{**}(\rho)<t^*(\rho)$ for all $\rho\in(\mu_1/\mu_2,\hat\rho_2)$, and $t^{**}(\rho)>t^*(\rho)$ for all $\rho\in(\hat\rho_2,\hat\rho)$.
(v)
$t^{**}(\rho)<t_B(\rho)$ for all $\rho\in(\mu_1/\mu_2,\hat\rho_2)$, and $t^{**}(\rho)>t_B(\rho)$ for all $\rho\in(\hat\rho_2,\hat\rho)$.
Recall that, by definition, $g_L(s)=g_A(s,s)=g_B(s,s)$, $s>0$ (cf. (12)). For the minimum of $g(t,s)$ on $\bar B$ we have the following lemma.
Lemma 8.
We have
(i) 
If $0\le\rho<\hat\rho_2$, then
\[
\inf_{(t,s)\in\bar B} g(t,s) = g_L(t^*) = \frac{2}{1+\rho}\bigl(\mu_1+\mu_2+2/t^*\bigr),
\]
where $(t^*,t^*)$ is the unique minimizer of $g(t,s)$ on $\bar B$.
(ii) 
If $\rho=\hat\rho_2$, then $t^*(\hat\rho_2)=s^*(\hat\rho_2)=1/\mu_2$ and
\[
\inf_{(t,s)\in\bar B} g(t,s) = g_L(1/\mu_2) = g_2(1/\mu_2) = 4\mu_2,
\]
where the minimum of $g(t,s)$ on $\bar B$ is attained at $(1/\mu_2,1/\mu_2)$, with $g_3(1/\mu_2,1/\mu_2)=g_2(1/\mu_2)$, and $1/\mu_2$ is the unique minimizer of $g_2(s)$, $s\in(0,\infty)$.
(iii) 
If $\hat\rho_2<\rho<1$, then
\[
\inf_{(t,s)\in\bar B} g(t,s) = \inf_{(t,s)\in D_2} g_2(s) = g_2(1/\mu_2) = 4\mu_2,
\]
where the minimum of $g(t,s)$ on $\bar B$ is attained whenever $g(t,s)=g_2(s)$ on $D_2$ (see Figure 1).
Next we consider the minimum of $g(t,s)$ on $\bar A$. Recall $s^*(\rho)$ defined in (20), $s_A(\rho)$ defined in (17) and $s^{**}(\rho)$ defined in (18). We first give the following lemma.
Lemma 9.
We have
(a) 
Both $s^*(\rho)$ and $s^{**}(\rho)$ are decreasing functions on $[0,1]$.
(b) 
The point $\hat\rho_1$ is the unique point in $[0,1)$ such that
\[
s_A(\hat\rho_1) = s^{**}(\hat\rho_1) = s^*(\hat\rho_1),
\]
and
(i)
$s_A(\rho)<s^{**}(\rho)$ for all $\rho\in[0,\hat\rho_1)$, and $s_A(\rho)>s^{**}(\rho)$ for all $\rho\in(\hat\rho_1,1)$;
(ii)
$s^*(\rho)<s^{**}(\rho)$ for all $\rho\in[0,\hat\rho_1)$, and $s^*(\rho)>s^{**}(\rho)$ for all $\rho\in(\hat\rho_1,1)$.
(c)
For all $\mu_1/\mu_2<\rho<1$, it holds that $s^{**}(\rho)<s_1^*(\rho)$.
For the minimum of g ( t , s ) on A ¯ we have the following lemma.
Lemma 10.
We have
(i) 
If $0\le\rho<\hat\rho_1$, then
\[
\inf_{(t,s)\in\bar A} g(t,s) = g_A(t_A,s_A) = 4\bigl(\mu_2+(1-2\rho)\mu_1\bigr),
\]
where $(t_A,s_A)\in A$ is the unique minimizer of $g(t,s)$ on $\bar A$.
(ii) 
If $\rho=\hat\rho_1$, then
\[
\inf_{(t,s)\in\bar A} g(t,s) = g_A(t_A,s_A) = 4\bigl(\mu_2+(1-2\rho)\mu_1\bigr),
\]
where $(t_A,s_A)=(t^*,s^*)\in L$ is the unique minimizer of $g(t,s)$ on $\bar A$.
(iii) 
If $\hat\rho_1<\rho<\hat\rho_2$, then
\[
\inf_{(t,s)\in\bar A} g(t,s) = g_L(s^*) = \frac{2}{1+\rho}\bigl(\mu_1+\mu_2+2/s^*\bigr),
\]
where $(s^*,s^*)$ is the unique minimizer of $g(t,s)$ on $\bar A$.
(iv) 
If $\rho=\hat\rho_2$, then $t^*(\hat\rho_2)=s^*(\hat\rho_2)=1/\mu_2$ and
\[
\inf_{(t,s)\in\bar A} g(t,s) = g_L(s^*) = g_2(1/\mu_2) = 4\mu_2,
\]
where the minimum of $g(t,s)$ on $\bar A$ is attained at $(1/\mu_2,1/\mu_2)$, with $g_3(1/\mu_2,1/\mu_2)=g_2(1/\mu_2)$.
(v) 
If $\hat\rho_2<\rho<1$, then
\[
\inf_{(t,s)\in\bar A} g(t,s) = g_2(1/\mu_2) = 4\mu_2,
\]
where the minimum of $g(t,s)$ on $\bar A$ is attained whenever $g(t,s)=g_2(s)$ on $D_2$ (see Figure 1).
Consequently, combining the results of Lemma 8 and Lemma 10, we conclude that Proposition 3 holds for $0\le\rho<1$. Thus, the proof is complete.

4. Conclusions and Discussions

In multi-dimensional risk theory, the notion of “ruin” can be defined in different ways. Motivated by the diffusion approximation approach, in this paper we modelled the risk process by a multi-dimensional BM with drift. We analysed the component-wise infinite-time ruin probability for dimension d = 2 by solving a two-layer optimization problem, which, by the use of Theorem 1 from Dȩbicki et al. (2010), led to the logarithmic asymptotics for P(u) as $u\to\infty$, given by an explicit form of the adjustment coefficient $\gamma=g(\boldsymbol{t}_0)/2$ (see (8)). An important tool here is Lemma 5 on quadratic programming, cited from Hashorva (2005). In this way we were also able to identify the dominating points by a careful analysis of the different regimes for $\rho$, and to specify three regimes with different formulas for $\gamma$ (see Theorem 1). An open and difficult problem is the derivation of exact asymptotics for P(u) in (4), for which finding the dominating points would be the first step. A refined double-sum method as in Dȩbicki et al. (2018) might be suitable for this purpose. A detailed analysis of the case of dimensions d > 2 seems technically very complicated, even for obtaining the logarithmic asymptotics. We also note that the more natural problem of considering $R_i(t)=\alpha_iu+\mu_it-X_i(t)$, with general $\alpha_i>0$, $i=1,2$, leads to much more involved technicalities in the analysis of $\gamma$.
Define the ruin time of component i, $1\le i\le d$, by $T_i=\min\{t:\ R_i(t)<0\}$ and let $T_{(1)}\le T_{(2)}\le\cdots\le T_{(d)}$ be the order statistics of the ruin times. Then the component-wise infinite-time ruin probability equals $\mathbb{P}\{T_{(d)}<\infty\}$, while the ruin time of at least one business line is $T_{\min}=T_{(1)}=\min_iT_i$. Other interesting problems, such as $\mathbb{P}\{T_{(j)}<\infty\}$, have not yet been analysed. For instance, it would be interesting for d = 3 to study the case of $T_{(2)}$. A general scheme for obtaining logarithmic asymptotics for such problems was discussed in Dȩbicki et al. (2010).
The random vector $\bar{\boldsymbol{X}}=\bigl(\sup_{t\ge0}(X_1(t)-p_1t),\ldots,\sup_{t\ge0}(X_d(t)-p_dt)\bigr)$ has exponential marginals and, if it is not concentrated on a subspace of dimension less than d, it defines a multivariate exponential distribution. In this paper, for dimension d = 2, we derived some asymptotic properties of such a distribution. Little is known about the properties of this multivariate distribution, and further studies of it would be of interest. For example, the correlation structure of $\bar{\boldsymbol{X}}$ is unknown. In particular, in the context of the findings presented in this contribution, it would be interesting to find the correlation between $\sup_{t\ge0}(X_1(t)-\mu_1t)$ and $\sup_{s\ge0}(X_2(s)-\mu_2s)$.

Author Contributions

Investigation, K.D., L.J., T.R.; writing—original draft preparation, L.J.; writing—review and editing, K.D., T.R.

Funding

T.R. & K.D. acknowledge partial support by NCN grant number 2015/17/B/ST1/01102 (2016-2019).

Acknowledgments

We are thankful to the referees for their careful reading and constructive suggestions, which significantly improved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
BM  Brownian motion

Appendix A

In this appendix, we present the proofs of the lemmas used in Section 3.
Proof of Lemma 6.
Referring to Lemma 5, we have that, for any fixed $t,s$, there exists a unique index set
\[
I(t,s) \subseteq \{1,2\}
\]
such that
\[
g(t,s) = (1+\mu_1t,\ 1+\mu_2s)_{I(t,s)}\,\bigl(\Sigma_{ts}\bigr)_{I(t,s),I(t,s)}^{-1}\,(1+\mu_1t,\ 1+\mu_2s)_{I(t,s)}^\top,
\]
and
\[
\bigl(\Sigma_{ts}\bigr)_{I(t,s),I(t,s)}^{-1}\,(1+\mu_1t,\ 1+\mu_2s)_{I(t,s)}^\top > \boldsymbol{0}_{I(t,s)}.
\]
Since $I(t,s)=\{1\}$, $\{2\}$ or $\{1,2\}$, we have the following three cases:
(S1)
On the set $E_1=\{(t,s):\ \rho(t\wedge s)s^{-1}(1+\mu_2s)\ge 1+\mu_1t\}$, $g(t,s)=g_2(s)$;
(S2)
On the set $E_2=\{(t,s):\ \rho(t\wedge s)t^{-1}(1+\mu_1t)\ge 1+\mu_2s\}$, $g(t,s)=g_1(t)$;
(S3)
On the set $E_3=(0,\infty)^2\setminus(E_1\cup E_2)$, $g(t,s)=g_3(t,s)$.
Clearly, if $\rho\le 0$ then
\[
E_1 = E_2 = \emptyset, \qquad E_3 = (0,\infty)^2.
\]
In this case,
\[
g(t,s) = g_3(t,s), \quad (t,s)\in(0,\infty)^2.
\]
Next, we focus on the case where ρ > 0 . We consider the regions A ¯ and B separately.
Analysis on $\bar A$. We have
\[
A_1 = \bar A\cap E_1 = \{s\le t\le f_1(s)\}, \quad f_1(s)=\frac{\rho-1}{\mu_1}+\frac{\rho\mu_2}{\mu_1}s, \qquad
A_2 = \bar A\cap E_2 = \{s\le t\le f_2(s)\}, \quad f_2(s)=\frac{\rho s}{1+(\mu_2-\rho\mu_1)s}, \qquad
A_3 = \bar A\cap E_3 = \{t\ge s,\ t>\max(f_1(s),f_2(s))\}.
\]
Next we analyse how the functions $f(s)=s$, $f_1(s)$ and $f_2(s)$ intersect on the region $\bar A$.
Clearly, for any $s>0$ we have $f_2(s)<s$. Furthermore, $f_1(s)=f_2(s)$ has a unique positive solution $s_1$ given by
\[
s_1 = \frac{1-\rho}{\rho(\mu_2-\rho\mu_1)}.
\]
Finally, for $\rho\mu_2\le\mu_1$ the function $f_1(s)$ does not intersect $f(s)$ on $(0,\infty)$, whereas for $\rho\mu_2>\mu_1$ the unique intersection point is $s_1^*>s_1$ (cf. (14)). To conclude, we have, for $\rho\le\mu_1/\mu_2$,
\[
g(t,s) = g_3(t,s), \quad (t,s)\in\bar A,
\]
and, for $\rho>\mu_1/\mu_2$,
\[
g(t,s) = \begin{cases} g_3(t,s), & \text{if } (t,s)\in\bar A\cap\{t\ge s,\ t>f_1(s)\},\\ g_2(s), & \text{if } (t,s)\in\bar A\cap\{s\le t\le f_1(s)\}. \end{cases}
\]
Additionally, we have from Lemma 5 that $g_3(f_1(s),s)=g_2(s)$ for all $s\ge s_1^*$.
Analysis on B. The two scenarios $\rho\le\mu_1/\mu_2$ and $\rho>\mu_1/\mu_2$ will be considered separately. For $\rho\le\mu_1/\mu_2$, we have
\[
B_1 = B\cap E_1 = \{t<s\le h_1(t)\}, \quad h_1(t)=\frac{\rho t}{1+(\mu_1-\rho\mu_2)t}, \qquad
B_2 = B\cap E_2 = \{t<s\le h_2(t)\}, \quad h_2(t)=\frac{\rho-1}{\mu_2}+\frac{\rho\mu_1}{\mu_2}t, \qquad
B_3 = B\cap E_3 = \{s>\max(t,h_1(t),h_2(t))\}.
\]
It is easy to check that
\[
t > h_1(t), \qquad t > h_2(t), \qquad t>0,
\]
and thus
\[
g(t,s) = g_3(t,s), \quad (t,s)\in B.
\]
For $\rho>\mu_1/\mu_2$, we have
\[
B_1 = B\cap E_1 = \{w_1(s)\le t<s\}, \quad w_1(s)=\frac{s}{\rho+(\rho\mu_2-\mu_1)s}, \qquad
B_2 = B\cap E_2 = \{w_2(s)\le t<s\}, \quad w_2(s)=\frac{1-\rho}{\mu_1\rho}+\frac{\mu_2}{\mu_1\rho}s, \qquad
B_3 = B\cap E_3 = \{t<\min(s,w_1(s),w_2(s))\}.
\]
Next we analyse how the functions $w(s)=s$, $w_1(s)$ and $w_2(s)$ intersect on the region B.
Clearly, for any $s>0$ we have $w_2(s)>s$; the curves $w_1(s)$ and $w_2(s)$ do not intersect on $(0,\infty)$, while $w(s)$ and $w_1(s)$ have a unique intersection point $s_1^*$ (cf. (14)).
To conclude, we have, for $\rho\le\mu_1/\mu_2$,
\[
g(t,s) = g_3(t,s), \quad (t,s)\in B,
\]
and, for $\rho>\mu_1/\mu_2$,
\[
g(t,s) = \begin{cases} g_3(t,s), & \text{if } (t,s)\in B\cap\{t<\min(s,w_1(s))\},\\ g_2(s), & \text{if } (t,s)\in B\cap\{w_1(s)\le t<s\}. \end{cases}
\]
Additionally, it follows from Lemma 5 that $g_3(w_1(s),s)=g_2(s)$ for all $s\ge s_1^*$.
Consequently, the claim follows by a combination of the above results. This completes the proof. □
Proof of Lemma 7.
(a)
The claim for $t^*(\rho)$ follows by noting the representation
\[
t^*(\rho) = s^*(\rho) = \sqrt{\frac{2(1-\rho)}{\mu_1^2+\mu_2^2-2\mu_1\mu_2+2\mu_1\mu_2-2\rho\mu_1\mu_2}} = \sqrt{\frac{2}{\dfrac{(\mu_1-\mu_2)^2}{1-\rho}+2\mu_1\mu_2}}.
\]
The claims for t B ( ρ ) and s 1 * ( ρ ) follow directly from their definition.
(b)
First note that
\[
t^{**}(0) = t^{**}(\hat\rho_2) = \frac{1}{\mu_2}.
\]
Next it is calculated that
\[
\frac{\partial t^{**}(\rho)}{\partial\rho} = \frac{-2\mu_2\rho^2+4\mu_2\rho-\mu_1-\mu_2}{(\mu_2+\rho\mu_1-2\mu_2\rho^2)^2}.
\]
Thus, the claim of (b) follows by analysing the sign of $\partial t^{**}(\rho)/\partial\rho$ over $(0,\hat\rho)$.
(c)
For any $0\le\rho\le\mu_1/\mu_2$ we have $|\mu_1-2\rho\mu_2|\le\mu_1$ and thus
\[
t_B(\rho) \ge \frac{1}{\mu_1} \ge \frac{1}{\mu_2} \ge \frac{1-\rho}{\mu_2(1-\rho^2)} \ge \frac{1-\rho}{\rho(\mu_1-\rho\mu_2)+\mu_2(1-\rho^2)} = t^{**}(\rho).
\]
Further, since
\[
\mu_1^2+\mu_2^2-2\rho\mu_1\mu_2 = \mu_1(\mu_1-\rho\mu_2)+\mu_2(\mu_2-\rho\mu_1) \le \mu_2(\mu_1-\rho\mu_2)+\mu_2(\mu_2-\rho\mu_1) \le 2\mu_2^2(1-\rho),
\]
it follows that
\[
t^*(\rho) \ge \frac{1}{\mu_2} \ge t^{**}(\rho).
\]
(d)
It is easy to check that (26) holds. For (i) we have
\[
t^*(\rho) - s_1^*(\rho) = (1-\rho)\Bigl(\frac{1}{f_1(\rho)}-\frac{1}{f_2(\rho)}\Bigr),
\]
where
\[
f_1(\rho) = \sqrt{\frac{(1-\rho)(\mu_1^2+\mu_2^2-2\rho\mu_1\mu_2)}{2}} = \sqrt{\mu_1\mu_2\rho^2-\frac{(\mu_1+\mu_2)^2}{2}\rho+\frac{\mu_1^2+\mu_2^2}{2}}, \qquad f_2(\rho) = \rho\mu_2-\mu_1.
\]
Analysing these two functions, we see that $f_1(\rho)$ is strictly decreasing on $[0,1]$ with
\[
f_1(0) = \sqrt{\frac{\mu_1^2+\mu_2^2}{2}} > -\mu_1 = f_2(0), \qquad f_1(1) = 0 \le \mu_2-\mu_1 = f_2(1),
\]
and thus there is a unique intersection point of the two curves $t^*(\rho)$ and $s_1^*(\rho)$, which is $\rho=\hat\rho_2$. Therefore, the claim of (i) follows. Similarly, the claim of (ii) follows since
\[
t_B(\rho) - s_1^*(\rho) = \frac{-(\mu_1+\mu_2)\rho+2\mu_2\rho^2}{(\rho\mu_2-\mu_1)(2\rho\mu_2-\mu_1)}.
\]
Finally, the claims of (iii), (iv) and (v) follow easily from (a), (b) and (26). This completes the proof. □
Proof of Lemma 8.
Consider first the case $0\le\rho\le\mu_1/\mu_2$. Recall (22). We check whether either of $s_i$, $i=1,2$, is greater than $t$. Clearly, $s_1\le t$. Next, we check whether $s_2>t$. It is easy to check that
\[
s_2 > t \iff t < t^{**},
\]
where (recall (25))
\[
t^{**} = t^{**}(\rho) = \frac{1-\rho}{\rho(\mu_1-\mu_2\rho)+\mu_2(1-\rho^2)} > 0.
\]
Then
\[
\inf_{(t,s)\in\bar B} g_3(t,s) = \min\Bigl(\inf_{0<t<t^{**}} g_B(t,s_2(t)),\ \inf_{t\ge t^{**}} g_B(t,t)\Bigr).
\]
Consequently, it follows from (c) of Lemma 7 that the claim of (i) holds for $0\le\rho\le\mu_1/\mu_2$.
Next, we consider $\mu_1/\mu_2<\rho<1$. Recall the function $w_1(s)$ defined in (13), and denote the inverse function of $w_1(s)$ by
\[
\hat w_1(t) = \frac{\rho t}{1-(\rho\mu_2-\mu_1)t}, \quad t\ge s_1^*.
\]
We have from Lemma 6 that
\[
g_B\bigl(t,\hat w_1(t)\bigr) = g_2(t), \quad t\ge s_1^*.
\]
Further, note that $1/\mu_2$ is the unique minimizer of $g_2(s)$, $s>0$. For $\mu_1/\mu_2<\rho<\hat\rho_2$, we have from (d) of Lemma 7 that
\[
\inf_{s\ge s_1^*} g_2(s) = g_2(s_1^*) = g_L(s_1^*) > g_L(t^*),
\]
and further
\[
\inf_{(t,s)\in\bar B} g(t,s) = \min\Bigl(\inf_{0<t<t^{**}} g_B(t,s_2(t)),\ \inf_{t^{**}\le t<s_1^*} g_B(t,t),\ \inf_{t\ge s_1^*} g_B(t,\hat w_1(t)),\ \inf_{s\ge s_1^*} g_2(s)\Bigr) = g_B(t^*,t^*) = g_L(t^*),
\]
where $(t^*,t^*)$ is the unique minimizer of $g(t,s)$ on $\bar B$. Therefore, the claim for $\mu_1/\mu_2<\rho<\hat\rho_2$ is established.
For $\rho=\hat\rho_2$, because of (26) we have
\[
\inf_{(t,s)\in\bar B} g(t,s) = \min\Bigl(\inf_{0<t<1/\mu_2} g_B(t,s_2(t)),\ \inf_{t\ge1/\mu_2} g_B(t,\hat w_1(t)),\ \inf_{s\ge1/\mu_2} g_2(s)\Bigr) = g_B(1/\mu_2,1/\mu_2) = g_L(1/\mu_2) = g_2(1/\mu_2),
\]
and the unique minimum of $g(t,s)$ on $\bar B$ is attained at $(1/\mu_2,1/\mu_2)$. Moreover, for all $\hat\rho_2<\rho<1$ we have
\[
s_2(t_B) = \hat w_1(t_B) = \frac{1}{\mu_2} > s_1^*.
\]
Thus,
\[
\inf_{(t,s)\in\bar B} g(t,s) = \min\Bigl(\inf_{0<t<t_B} g_B(t,s_2(t)),\ \inf_{t\ge t_B} g_B(t,\hat w_1(t)),\ \inf_{s\ge s_1^*} g_2(s)\Bigr) = g_B(t_B,1/\mu_2) = g_2(1/\mu_2),
\]
and the minimum of $g(t,s)$ on $\bar B$ is attained whenever $g(t,s)=g_2(s)$ on $D_2$. This completes the proof. □
Proof of Lemma 9.
(a)
The claim for $s^*(\rho)$ has been shown in the proof of (a) of Lemma 7. Next, we show the claim for $s^{**}(\rho)$, for which it is sufficient to show that $\partial s^{**}(\rho)/\partial\rho<0$ for all $\rho\in[0,1]$. In fact, we have
\[
\frac{\partial s^{**}(\rho)}{\partial\rho} = \frac{-2\mu_1\rho^2+4\mu_1\rho-\mu_1-\mu_2}{(\mu_1+\rho\mu_2-2\mu_1\rho^2)^2} < 0.
\]
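Spelling out the quotient rule behind the last display (an intermediate step not shown above), with $s^{**}(\rho)=(1-\rho)/(\mu_1+\rho\mu_2-2\mu_1\rho^2)$:
\[
\frac{\partial s^{**}(\rho)}{\partial\rho}
= \frac{-(\mu_1+\rho\mu_2-2\mu_1\rho^2)-(1-\rho)(\mu_2-4\mu_1\rho)}{(\mu_1+\rho\mu_2-2\mu_1\rho^2)^2}
= \frac{-2\mu_1\rho^2+4\mu_1\rho-\mu_1-\mu_2}{(\mu_1+\rho\mu_2-2\mu_1\rho^2)^2},
\]
and the numerator is negative on $[0,1)$ because $-2\mu_1\rho^2+4\mu_1\rho\le 2\mu_1\le\mu_1+\mu_2$, with equality in the first inequality only at $\rho=1$.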
(b)
In order to prove (i), the following two scenarios will be discussed separately:
\[
(\mathrm{S1})\ \ \mu_2<2\mu_1; \qquad (\mathrm{S2})\ \ \mu_2\ge 2\mu_1.
\]
First consider (S1). If $0\le\rho<\frac{\mu_2}{2\mu_1}$, then
\[
s_A(\rho)-s^{**}(\rho) = \frac{(\mu_1+\rho\mu_2-2\mu_1\rho^2)-(1-\rho)(\mu_2-2\rho\mu_1)}{(\mu_2-2\rho\mu_1)(\mu_1+\rho\mu_2-2\mu_1\rho^2)} = \frac{f(\rho)}{(\mu_2-2\rho\mu_1)(\mu_1+\rho\mu_2-2\mu_1\rho^2)},
\]
where
\[
f(\rho) = -4\mu_1\rho^2 + 2(\mu_2+\mu_1)\rho - \mu_2 + \mu_1.
\]
Analysing the function f, we conclude that
\[
f(\rho) < 0 \ \text{ for } \rho\in[0,\hat\rho_1), \qquad f(\rho) > 0 \ \text{ for } \rho\in\Bigl(\hat\rho_1,\frac{\mu_2}{2\mu_1}\Bigr).
\]
Further, for $\frac{\mu_2}{2\mu_1}\le\rho<1$ we have
\[
s_A(\rho)-s^{**}(\rho) = \frac{\mu_1+\mu_2-2\mu_1\rho}{(2\rho\mu_1-\mu_2)(\mu_1+\rho\mu_2-2\mu_1\rho^2)} > 0.
\]
Thus, the claim in (i) is established for (S1). Similarly, the claim in (i) is valid for (S2).
Next, note that
\[
s^*(\rho) - s^{**}(\rho) = (1-\rho)\Bigl(\frac{1}{f_1(\rho)}-\frac{1}{f_2(\rho)}\Bigr)
\]
with
\[
f_1(\rho) = \sqrt{\frac{(1-\rho)(\mu_1^2+\mu_2^2-2\rho\mu_1\mu_2)}{2}} = \sqrt{\mu_1\mu_2\rho^2-\frac{(\mu_1+\mu_2)^2}{2}\rho+\frac{\mu_1^2+\mu_2^2}{2}}, \qquad
f_2(\rho) = \mu_1+\rho\mu_2-2\mu_1\rho^2.
\]
Analysing these two functions, we see that $f_1(\rho)$ is strictly decreasing on $[0,1]$ with
\[
f_1(0) = \sqrt{\frac{\mu_1^2+\mu_2^2}{2}} \ge \mu_1 = f_2(0), \qquad f_1(1) = 0 \le \mu_2-\mu_1 = f_2(1),
\]
and thus there is a unique intersection point $\rho\in(0,1)$ of $s^*(\rho)$ and $s^{**}(\rho)$. It is not clear at this stage whether this unique point equals $\hat\rho_1$, since this would require solving a polynomial equation of order 4. Instead, it is sufficient to show that
\[
s_A(\hat\rho_1) = s^*(\hat\rho_1).
\]
In fact, basic calculations show that the above is equivalent to
\[
\bigl(2\mu_1\hat\rho_1-(\mu_1+\mu_2)\bigr)\, f(\hat\rho_1) = 0,
\]
which is valid due to the fact that $f(\hat\rho_1)=0$. Finally, the claim in (c) follows since
\[
\rho\mu_2-\mu_1 < \mu_1+\rho\mu_2-2\rho^2\mu_1.
\]
This completes the proof. □
Proof of Lemma 10.
Two cases, $\hat\rho_1\le\mu_1/\mu_2$ and $\hat\rho_1>\mu_1/\mu_2$, should be distinguished. Since the proofs for these two cases are similar, we give below only the proof for the more complicated case $\hat\rho_1\le\mu_1/\mu_2$.
Note that, for $0\le\rho\le\mu_1/\mu_2$, as in (19),
\[
\inf_{(t,s)\in\bar A} g(t,s) = \inf_{(t,s)\in\bar A} g_3(t,s) = \min\Bigl(\inf_{0<s<s^{**}} f_A(s),\ \inf_{s\ge s^{**}} g_L(s)\Bigr),
\]
and thus the claim for $0\le\rho\le\mu_1/\mu_2$ follows directly from (i)–(ii) of (b) in Lemma 9.
Next, we consider the case $\mu_1/\mu_2<\rho<\hat\rho_2$ (note that here $\hat\rho_1\le\mu_1/\mu_2<\rho$). We have, by (i) of (d) in Lemma 7 and (i)–(ii) of (b) in Lemma 9, that
\[
s^{**}(\rho) < s^*(\rho) = t^*(\rho) < s_1^*(\rho), \qquad s_1^*(\rho) > \frac{1}{\mu_2}, \qquad s_A(\rho) > s^{**}(\rho).
\]
Thus, it follows from Lemma 6 that
\[
\inf_{(t,s)\in\bar A} g(t,s) = \min\Bigl(\inf_{0<s<s^{**}} g_A(t_2(s),s),\ \inf_{s^{**}\le s\le s_1^*} g_A(s,s),\ \inf_{s>s_1^*} g_A(f_1(s),s),\ \inf_{s>s_1^*} g_2(s)\Bigr) = g_A(t^*,s^*) = g_L(s^*),
\]
and $(t^*,s^*)\in L$ is the unique minimizer of $g(t,s)$ on $\bar A$. Here we used the fact that
\[
\inf_{s>s_1^*} g_A(f_1(s),s) = \inf_{s>s_1^*} g_2(s) = g_A(f_1(s_1^*),s_1^*) = g_2(s_1^*) > g_L(s^*).
\]
Next, if $\rho=\hat\rho_2$, then
\[
s_1^*(\hat\rho_2) = s^*(\hat\rho_2) = \frac{1}{\mu_2},
\]
and thus
\[
\inf_{(t,s)\in\bar A} g(t,s) = \min\Bigl(\inf_{0<s<s^{**}} g_A(t_2(s),s),\ \inf_{s^{**}\le s\le 1/\mu_2} g_A(s,s),\ \inf_{s>1/\mu_2} g_A(f_1(s),s),\ \inf_{s>1/\mu_2} g_2(s)\Bigr) = g_A(1/\mu_2,1/\mu_2) = g_L(1/\mu_2) = g_2(1/\mu_2).
\]
Furthermore, the unique minimum of $g(t,s)$ on $\bar A$ is attained at $(1/\mu_2,1/\mu_2)$, with $g_3(1/\mu_2,1/\mu_2)=g_2(1/\mu_2)$.
Finally, for $\hat\rho_2<\rho<1$, we have
\[
s^{**}(\rho) < s_1^*(\rho) < s^*(\rho) < \frac{1}{\mu_2}, \qquad s_A(\rho) > s^{**}(\rho),
\]
and thus
\[
\inf_{(t,s)\in\bar A} g(t,s) = \min\Bigl(\inf_{0<s<s^{**}} g_A(t_2(s),s),\ \inf_{s^{**}\le s\le s_1^*} g_A(s,s),\ \inf_{s>s_1^*} g_A(f_1(s),s),\ \inf_{s>s_1^*} g_2(s)\Bigr) = g_2(1/\mu_2),
\]
where the minimum of $g(t,s)$ on $\bar A$ is attained whenever $g_3(t,s)=g_2(s)$ on $D_2$. This completes the proof. □

References

  1. Albrecher, Hansjörg, Pablo Azcue, and Nora Muler. 2017. Optimal dividend strategies for two collaborating insurance companies. Advances in Applied Probability 49: 515–48. [Google Scholar] [CrossRef]
  2. Asmussen, Søren, and Hansjörg Albrecher. 2010. Ruin Probabilities, 2nd ed. Advanced Series on Statistical Science & Applied Probability, 14; Hackensack: World Scientific Publishing Co. Pte. Ltd. [Google Scholar]
  3. Avram, Florin, and Sooie-Hoe Loke. 2018. On central branch/reinsurance risk networks: Exact results and heuristics. Risks 6: 35. [Google Scholar] [CrossRef]
  4. Avram, Florin, and Andreea Minca. 2017. On the central management of risk networks. Advances in Applied Probability 49: 221–37. [Google Scholar] [CrossRef]
  5. Avram, Florin, Zbigniew Palmowski, and Martijn R. Pistorius. 2008a. Exit problem of a two-dimensional risk process from the quadrant: Exact and asymptotic results. Annals of Applied Probability 19: 2421–49. [Google Scholar] [CrossRef]
  6. Avram, Florin, Zbigniew Palmowski, and Martijn R. Pistorius. 2008b. A two-dimensional ruin problem on the positive quadrant. Insurance: Mathematics and Economics 42: 227–34. [Google Scholar] [CrossRef]
  7. Azcue, Pablo, and Nora Muler. 2018. A multidimensional problem of optimal dividends with irreversible switching: A convergent numerical scheme. arXiv. [Google Scholar]
  8. Azcue, Pablo, Nora Muler, and Zbigniew Palmowski. 2019. Optimal dividend payments for a two-dimensional insurance risk process. European Actuarial Journal 9: 241–72. [Google Scholar] [CrossRef]
  9. Dȩbicki, Krzysztof, Enkelejd Hashorva, Lanpeng Ji, and Tomasz Rolski. 2018. Extremal behavior of hitting a cone by correlated Brownian motion with drift. Stochastic Processes and their Applications 12: 4171–206. [Google Scholar] [CrossRef]
  10. Dȩbicki, Krzysztof, Kamil Marcin Kosiński, Michel Mandjes, and Tomasz Rolski. 2010. Extremes of multidimensional Gaussian processes. Stochastic Processes and their Applications 120: 2289–301. [Google Scholar] [CrossRef]
  11. Delsing, Guusje, Michel Mandjes, Peter Spreij, and Erik Winands. 2018. Asymptotics and approximations of ruin probabilities for multivariate risk processes in a Markovian environment. arXiv. [Google Scholar]
  12. Embrechts, Paul, Claudia Klüppelberg, and Thomas Mikosch. 1997. Modelling Extremal Events for Insurance and Finance. Applications of Mathematics (New York). Berlin: Springer, vol. 33. [Google Scholar]
  13. Foss, Sergey, Dmitry Korshunov, Zbigniew Palmowski, and Tomasz Rolski. 2017. Two-dimensional ruin probability for subexponential claim size. Probability and Mathematical Statistics 2: 319–35. [Google Scholar]
  14. Garbit, Rodolphe, and Kilian Raschel. 2014. On the exit time from a cone for Brownian motion with drift. Electronic Journal of Probability 19: 1–27. [Google Scholar] [CrossRef]
  15. Gerber, Hans U., and Elias S. W. Shiu. 2004. Optimal Dividends: Analysis with Brownian Motion. North American Actuarial Journal 8: 1–20. [Google Scholar] [CrossRef]
  16. Grandell, Jan. 1991. Aspects of Risk Theory. New York: Springer. [Google Scholar]
  17. Hashorva, Enkelejd. 2005. Asymptotics and bounds for multivariate Gaussian tails. Journal of Theoretical Probability 18: 79–97. [Google Scholar] [CrossRef]
  18. Hashorva, Enkelejd, and Jürg Hüsler. 2002. On asymptotics of multivariate integrals with applications to records. Stochastic Models 18: 41–69. [Google Scholar] [CrossRef]
  19. He, Hua, William P. Keirstead, and Joachim Rebholz. 1998. Double lookbacks. Mathematical Finance 8: 201–28. [Google Scholar] [CrossRef]
  20. Iglehart, Donald L. 1969. Diffusion approximations in collective risk theory. Journal of Applied Probability 6: 285–92. [Google Scholar] [CrossRef]
  21. Ji, Lanpeng, and Stephan Robert. 2018. Ruin problem of a two-dimensional fractional Brownian motion risk process. Stochastic Models 34: 73–97. [Google Scholar] [CrossRef]
  22. Klugman, Stuart A., Harry H. Panjer, and Gordon E. Willmot. 2012. Loss Models: From Data to Decisions. Hoboken: John Wiley and Sons. [Google Scholar]
  23. Kou, Steven, and Haowen Zhong. 2016. First-passage times of two-dimensional Brownian motion. Advances in Applied Probability 48: 1045–60. [Google Scholar] [CrossRef]
  24. Li, Junhai, Zaiming Liu, and Qihe Tang. 2007. On the ruin probabilities of a bidimensional perturbed risk model. Insurance: Mathematics and Economics 41: 185–95. [Google Scholar] [CrossRef]
  25. Mikosch, Thomas. 2008. Non-life Insurance Mathematics. An Introduction with Stochastic Processes. Berlin: Springer. [Google Scholar]
  26. Rolski, Tomasz, Hanspeter Schmidli, Volker Schmidt, and Jozef Teugels. 2009. Stochastic Processes for Insurance and Finance. Hoboken: John Wiley & Sons, vol. 505. [Google Scholar]
Figure 1. Partition of $(0,\infty)^2$ into $D_1$, $D_2$.
