
Risks 2014, 2(3), 289-314; https://doi.org/10.3390/risks2030289

Article
Joint Asymptotic Distributions of Smallest and Largest Insurance Claims
1 Department of Actuarial Science, Faculty of Business and Economics, University of Lausanne, 1015 Lausanne, Switzerland
2 Swiss Finance Institute, Lausanne 1015, Switzerland
3 Université de Lyon, Université Lyon 1, Institut de Science Financière et d'Assurances, Lyon 69007, France
4 Department of Mathematics, University of Leuven, Leuven 3001, Belgium
* Author to whom correspondence should be addressed.
Received: 25 February 2014; in revised form: 15 June 2014 / Accepted: 15 July 2014 / Published: 31 July 2014

## Abstract

Assume that claims in a portfolio of insurance contracts are described by independent and identically distributed random variables with regularly varying tails and occur according to a near mixed Poisson process. We provide a collection of results pertaining to the joint asymptotic Laplace transforms of the normalised sums of the smallest and largest claims, when the length of the considered time interval tends to infinity. The results crucially depend on the value of the tail index of the claim distribution, as well as on the number of largest claims under consideration.
Keywords: aggregate claims; Ammeter problem; near mixed Poisson process; reinsurance; subexponential distributions; extremes

## 1. Introduction

When dealing with heavy-tailed insurance claims, it is a classical problem to consider and quantify the influence of the largest among the claims on their total sum; see e.g., Ammeter [1] for an early reference in the actuarial literature. This topic is particularly relevant in non-proportional reinsurance applications when a significant proportion of the sum of claims is consumed by a small number of claims. The influence of the maximum of a sample on the sum has in particular attracted considerable attention over the last fifty years (see Ladoucette and Teugels [2] for a recent overview of the existing literature on the subject). Different modes of convergence of the ratios sum over maximum or maximum over sum have been linked with conditions on additive domains of attraction of a stable law (see e.g., Darling [3], Bobrov [4], Chow and Teugels [5], O'Brien [6] and Bingham and Teugels [7]).
It is also of interest to study the joint distribution of the normalised smallest and largest claims. We address this question in this paper under the assumption that the number of claims over time is described by a rather general counting process. These considerations may be helpful for the design of possible reinsurance strategies and for risk management in general. In particular, extending the understanding of the influence of the largest claim on the aggregate sum (a classical topic in the theory of subexponential distributions) to the relative influence of several large claims together can help to assess the potential gain from reinsurance treaties of large claim reinsurance and ECOMOR type (see Example 1 below).
In this paper we consider a homogeneous insurance portfolio, where the distribution of the individual claims has a regularly varying tail. The number of claims is generated by a near mixed Poisson process. For this rather general situation we derive a number of limiting results for the joint Laplace transforms of the smallest and largest claims, as the time t tends to infinity. These turn out to be quite explicit and crucially depend on the rule of what is considered to be a large claim as well as on the value of the tail index.
Let $X 1 , X 2 , …$ be a sequence of independent positive random variables (representing claims) with common distribution function F. For $n ≥ 1$, denote by $X 1 * ≤ X 2 * ≤ … ≤ X n *$ the corresponding order statistics. We assume that the claim size distribution satisfies the condition
$1 − F ( x ) = F ¯ ( x ) = x − α ℓ ( x ) , x > 0$ (1)
where $α > 0$ and $ℓ$ is a slowly varying function at infinity. The tail index is defined as $γ = 1 / α$ and we will typically express our results in terms of γ. Denote by $U ( y ) = F ← ( 1 − 1 / y )$ the tail quantile function of F, where $F ← ( t ) = inf { x ∈ R : F ( x ) ≥ t }$. Under (1), $U ( y ) = y γ ℓ 1 ( y )$, where $ℓ 1$ is again a slowly varying function. For textbook treatments of regularly varying distributions and/or their applications in insurance modelling, see e.g., Bingham et al. [8], Embrechts et al. [9], Rolski et al. [10] and Asmussen and Albrecher [11].
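For a concrete instance of condition (1), consider the pure Pareto case $F ( x ) = 1 − x − α$, $x ≥ 1$, where the slowly varying part is constant and hence $U ( y ) = y γ$ exactly. A minimal numerical sketch (the Pareto choice and the function name are illustrative, not part of the paper):

```python
def pareto_U(y, alpha):
    """Tail quantile function U(y) = F^{<-}(1 - 1/y) for the Pareto
    distribution F(x) = 1 - x**(-alpha), x >= 1, so U(y) = y**gamma."""
    gamma = 1.0 / alpha  # tail index
    return y ** gamma

# With alpha = 2 (gamma = 1/2), U(100) = 10: only 1 in 100 claims exceeds 10.
print(pareto_U(100.0, 2.0))  # 10.0
```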
Denote the number of claims up to time t by $N ( t )$ with $p n ( t ) = P ( N ( t ) = n )$. The probability generating function of $N ( t )$ is given by
$Q t ( z ) = E z N ( t ) = ∑ n = 0 ∞ p n ( t ) z n$
which is defined for $| z | ≤ 1$. Let
$Q t ( r ) ( z ) = r ! E N ( t ) r z N ( t ) − r$
be its derivative of order r with respect to z. In this paper we assume that $N ( t )$ is a near mixed Poisson (NMP) process, i.e., the claim counting process satisfies the condition
$N ( t ) / t → D Θ , t ↑ ∞$ (2)
for some random variable Θ, where D denotes convergence in distribution. This condition implies that
$Q t ( 1 − w / t ) → E e − w Θ$
and
$1 t r Q t ( r ) ( 1 − w / t ) → E e − w Θ Θ r : = q r ( w ) , t ↑ ∞$
Note also that, for $β > 0$ and $r ∈ N$,
$∫ 0 ∞ w β − 1 q r ( w ) d w = Γ ( β ) E Θ r − β .$ (3)
A simple example of an NMP process is the homogeneous Poisson process, which is very popular in claims modelling and plays a crucial role in both the actuarial literature and in practice. The class of mixed Poisson processes (for which condition (2) holds not only in the limit, but for any t) has found numerous applications in (re)insurance modelling because of its flexibility, its success in actuarial data fitting and its property of being more dispersed than the Poisson process (see Grandell [12] for a general account and various characterisations of mixed Poisson processes). The mixing may, e.g., be interpreted as reflecting the heterogeneity of groups of policyholders or of contract specifications. The more general class of NMP processes is used here mainly because the results hold under this more general setting as well. The NMP distributions (for fixed t) contain the class of infinitely divisible distributions (if the component distribution has finite mean). Moreover, any renewal process generated by an interclaim distribution with finite mean ν is an NMP process (note that then, by the weak law of large numbers in renewal theory, $N ( t ) / t → D Θ$ where Θ is degenerate at the point $1 / ν$).
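For intuition on condition (2), the following sketch simulates a mixed Poisson count, with a Gamma mixing variable as an illustrative choice (making $N ( t )$ negative binomial), and illustrates that $N ( t ) / t$ settles near the realised value of Θ for large t:

```python
import math
import random

def poisson_sample(lam, rng):
    """Poisson(lam) sample: Knuth's product method, applied in chunks of
    mean <= 30 so that exp(-chunk) never underflows for large lam."""
    count = 0
    while lam > 0.0:
        chunk = min(lam, 30.0)
        lam -= chunk
        limit, p = math.exp(-chunk), rng.random()
        while p > limit:
            count += 1
            p *= rng.random()
    return count

def mixed_poisson_count(t, rng):
    """N(t) for a mixed Poisson process: given Theta = theta, N(t) is
    Poisson(theta * t); the Gamma mixing law (mean 1) is illustrative."""
    theta = rng.gammavariate(2.0, 0.5)
    return poisson_sample(theta * t, rng), theta

rng = random.Random(0)
n, theta = mixed_poisson_count(2000.0, rng)
print(n / 2000.0, theta)  # the two values are close for large t
```

For the homogeneous Poisson process, Θ is degenerate at the intensity, while for a renewal process it is degenerate at $1 / ν$, as noted above.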
The aggregate claim up to time t is given by
$S ( t ) = ∑ j = 1 N ( t ) X j$
where it is assumed that the counting process ${ N ( t ) , t ≥ 0 }$ is independent of the claims ${ X i , i ≥ 1 }$. For $s ∈ N$ and $N ( t ) ≥ s + 2$, we define the sum of the $N ( t ) − s − 1$ smallest and the sum of the s largest claims by
$Σ s ( t ) = ∑ j = 1 N ( t ) − s − 1 X j * , Λ s ( t ) = ∑ j = N ( t ) − s + 1 N ( t ) X j *$
so that $S ( t ) = Σ s ( t ) + X N ( t ) − s * + Λ s ( t )$. Here Σ refers to small while Λ refers to large.
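The decomposition $S ( t ) = Σ s ( t ) + X N ( t ) − s * + Λ s ( t )$ is straightforward to compute from a claim sample; a small helper (the function and variable names are ours, for illustration):

```python
def split_claims(claims, s):
    """Return (Lambda_s, X*_{n-s}, Sigma_s) for a list of n >= s + 2 claims:
    the sum of the s largest, the (s+1)-largest claim, and the sum of the
    remaining n - s - 1 smallest claims."""
    xs = sorted(claims)
    n = len(xs)
    lambda_s = sum(xs[n - s:])       # s largest claims
    x_star = xs[n - s - 1]           # the (s+1)-largest claim
    sigma_s = sum(xs[:n - s - 1])    # n - s - 1 smallest claims
    return lambda_s, x_star, sigma_s

# Sorted claims [1, 2, 5, 7, 40], s = 2: the three parts sum to S(t) = 55.
print(split_claims([1.0, 7.0, 2.0, 40.0, 5.0], 2))  # (47.0, 5.0, 3.0)
```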
In this paper we study the limiting behaviour of the triplet $( Λ s ( t ) , X N ( t ) − s * , Σ s ( t ) )$ with appropriate normalisation coefficients depending on γ, the tail index, and on s, the number of terms in the sum of the largest claims. We will consider three asymptotic cases, as they show a different behaviour: s is fixed, s tends to infinity but slower than the expected number of claims, and s tends to infinity and is asymptotically equal to a proportion of the number of claims.
Example 1. In large claim reinsurance for the s largest claims in a specified time interval $[ 0 , t ]$, the reinsured amount is given by $Λ s ( t )$, so the application of our results to the analysis of such reinsurance treaties is immediate. A variant of large claim reinsurance that also has a flavour of excess-of-loss treaties is the so-called ECOMOR (excédent du coût moyen relatif) treaty with reinsured amount
$∑ j = N ( t ) − s + 1 N ( t ) ( X j * − X N ( t ) − s * ) ,$
i.e., the deductible is itself an order statistic (in other words, the reinsurer pays all exceedances over the s-largest claim). This treaty has some interesting properties, for instance with respect to inflation, see e.g., Ladoucette and Teugels [13]. If the reinsurer agrees to cover only a proportion β ($0 < β < 1$) of this amount, the cedent's claim amount is given by
$Σ s ( t ) + ( 1 + β s ) X N ( t ) − s * + ( 1 − β ) Λ s ( t ) ,$
which is a weighted sum of the quantities $Σ s ( t )$, $X N ( t ) − s *$ and $Λ s ( t )$. The asymptotic results of this paper, expressed in terms of Laplace transforms, can then be used to approximate the distribution of the cedent's and the reinsurer's claim amounts in such a contract, and correspondingly the design of a suitable contract (including the choice of s) will depend quite substantially on the value of the extreme value index of the underlying claim distribution.
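A sketch of the ECOMOR quantities for a given claim sample (an illustrative helper; the cedent's amount is computed as the total claim amount minus the reinsured share):

```python
def ecomor(claims, s, beta=1.0):
    """Reinsured amount of an ECOMOR(s) treaty with proportional cover beta,
    and the cedent's retained amount (total minus the reinsured share)."""
    xs = sorted(claims)
    n = len(xs)
    deductible = xs[n - s - 1]                      # the (s+1)-largest claim
    reinsured = sum(x - deductible for x in xs[n - s:])
    cedent = sum(claims) - beta * reinsured
    return reinsured, cedent

# Sorted claims [1, 2, 5, 7, 40], s = 2: deductible 5,
# reinsured (7-5) + (40-5) = 37, cedent 55 - 37 = 18.
print(ecomor([1.0, 7.0, 2.0, 40.0, 5.0], 2))  # (37.0, 18.0)
```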
The paper is organised as follows. We first give the joint Laplace transform of the triplet $( Λ s ( t ) , X N ( t ) − s * , Σ s ( t ) )$ for a fixed t in Section 2. Section 3 deals with asymptotic joint Laplace transforms in the case $γ > 1$. We also discuss consequences for moments of ratios of the limiting quantities. The behaviour for $γ = 1$ depends on whether $E [ X i ]$ is finite or not. In the first case, the analysis for $γ < 1$ applies, whereas in the latter one has to adapt the analysis of Section 3 exploiting the slowly varying function $∫ 0 x y d F ( y )$, but we refrain from treating this very special case in detail (see e.g., Albrecher and Teugels [14] for a similar adaptation in another context). Section 4 and Section 5 treat the case $γ < 1$ without and with centering, respectively. The proofs of the results in Section 3, Section 4 and Section 5 are given in Section 6. Section 7 concludes.

## 2. Preliminaries

In this section, we state a versatile formula that will allow us later to derive almost all the desired asymptotic properties of the joint distributions of the triplet $( Λ s ( t ) , X N ( t ) − s * , Σ s ( t ) )$. We consider the joint Laplace transform of $( Λ s ( t ) , X N ( t ) − s * , Σ s ( t ) )$ to study their joint distribution in an easy fashion. For a fixed t, it is denoted by
$Ω s ( u , v , w ; t ) = E exp ( − u Λ s ( t ) − v X N ( t ) − s * − w Σ s ( t ) ) .$
Then the following representation holds:
Proposition 1. We have
$Ω s ( u , v , w ; t ) = ∑ n = 0 s p n ( t ) ( ∫ 0 ∞ e − u x d F ( x ) ) n + 1 s ! ∫ 0 ∞ ( E [ 1 { X > y } e − u X ] ) s e − v y Q t ( s + 1 ) ( E [ 1 { X < y } e − w X ] ) d F ( y ) .$
Proof. The proof is standard if we interpret $X r * = 0$ whenever $r ≤ 0$. Indeed, condition on the number of claims at the time epoch t and subdivide the requested expression into three parts:
$Ω s ( u , v , w ; t ) = ∑ n = 0 s p n ( t ) E [ exp ( − u ∑ j = 1 n X j ) | N ( t ) = n ] + p s + 1 ( t ) E [ exp ( − u ∑ j = 2 s + 1 X j * − v X 1 * ) | N ( t ) = s + 1 ] + ∑ n = s + 2 ∞ p n ( t ) E [ exp ( − u ∑ j = n − s + 1 n X j * − v X n − s * − w ∑ j = 1 n − s − 1 X j * ) | N ( t ) = n ]$
The conditional expectation in the first term on the right simplifies easily to the form $( ∫ 0 ∞ e − u x d F ( x ) ) n$. For the conditional expectations in the second and third term, we condition additionally on the value y of the order statistic $X n − s *$; the $n − s − 1$ order statistics $X 1 * , X 2 * , … , X n − s − 1 *$ are then distributed independently and identically on the interval $[ 0 , y ]$ yielding the factor $( ∫ 0 y e − w x d F ( x ) ) n − s − 1$. A similar argument works for the s order statistics $X n − s + 1 * , X n − s + 2 * , … , X n *$. Combining the two terms yields
$Ω s ( u , v , w ; t ) = ∑ n = 0 s p n ( t ) ( ∫ 0 ∞ e − u x d F ( x ) ) n + ∑ n = s + 1 ∞ p n ( t ) n ! s ! ( n − s − 1 ) ! ∫ 0 ∞ ( ∫ y ∞ e − u x d F ( x ) ) s e − v y ( ∫ 0 y e − w x d F ( x ) ) n − s − 1 d F ( y ) .$
A straightforward calculation finally shows
$Ω s ( u , v , w ; t ) = ∑ n = 0 s p n ( t ) ( ∫ 0 ∞ e − u x d F ( x ) ) n + 1 s ! ∫ 0 ∞ ( ∫ y ∞ e − u x d F ( x ) ) s e − v y Q t ( s + 1 ) ( ∫ 0 y e − w x d F ( x ) ) d F ( y ) .$
☐
Consequently, it is possible to easily derive the expectations of products (or ratios) of $Λ s ( t )$, $X N ( t ) − s *$, $Σ s ( t )$ and $S ( t )$ by differentiating (or integrating) the joint Laplace transform. For simplicity, we only write down the first moments.
Corollary 2. We have
$E Λ s ( t ) = ∑ n = 1 s n p n ( t ) E X 1 + 1 ( s − 1 ) ! ∫ 0 ∞ F ¯ ( y ) s − 1 ( ∫ y ∞ x d F ( x ) ) Q t ( s + 1 ) ( F ( y ) ) d F ( y )$
$E { X N ( t ) − s * } = 1 s ! ∫ 0 ∞ y F ¯ ( y ) s Q t ( s + 1 ) ( F ( y ) ) d F ( y )$
$E Σ s ( t ) = 1 s ! ∫ 0 ∞ F ¯ ( y ) s Q t ( s + 2 ) ( F ( y ) ) ( ∫ 0 y x d F ( x ) ) d F ( y )$
$E S ( t ) = E N ( t ) E X 1$
Proof. The individual Laplace transforms can be written in the following form:
$E exp ( − u Λ s ( t ) ) = ∑ n = 0 s p n ( t ) ( ∫ 0 ∞ e − u x d F ( x ) ) n + 1 s ! ∫ 0 ∞ ( ∫ y ∞ e − u x d F ( x ) ) s Q t ( s + 1 ) ( F ( y ) ) d F ( y )$
$E exp ( − v X N ( t ) − s * ) = Π s + 1 ( t ) + 1 s ! ∫ 0 ∞ F ¯ ( y ) s e − v y Q t ( s + 1 ) ( F ( y ) ) d F ( y )$
$E exp ( − w Σ s ( t ) ) = Π s + 1 ( t ) + 1 s ! ∫ 0 ∞ F ¯ ( y ) s Q t ( s + 1 ) ( ∫ 0 y e − w x d F ( x ) ) d F ( y )$
$E exp ( − u S ( t ) ) = Q t ( ∫ 0 ∞ e − u x d F ( x ) ) ,$
where $Π s + 1 ( t ) = ∑ n = 0 s p n ( t )$. By taking the first derivative, we arrive at the respective expectations.     ☐
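The last identity of Corollary 2 is Wald's identity, which is easy to verify by simulation; the following sketch uses Poisson claim counts and Pareto claims (both illustrative choices, not prescribed by the paper):

```python
import random

def sample_S(lam_t, alpha, rng):
    """One draw of S(t): a Poisson(lam_t) number of Pareto(alpha) claims
    (the Poisson count is generated from unit-rate exponential interarrivals)."""
    n, clock = 0, rng.expovariate(1.0)
    while clock < lam_t:
        n += 1
        clock += rng.expovariate(1.0)
    # Pareto(alpha) on [1, inf) by inversion: X = (1 - U)**(-1/alpha)
    return sum((1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n))

rng = random.Random(42)
lam_t, alpha = 50.0, 3.0
est = sum(sample_S(lam_t, alpha, rng) for _ in range(2000)) / 2000
# Wald: E[S(t)] = E[N(t)] E[X_1] = 50 * (3/2) = 75
print(est)
```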

## 3. Asymptotics for the Joint Laplace Transforms when $γ > 1$

Before giving the asymptotic joint Laplace transform of the sum of the smallest and the sum of the largest claims, we first recall an important result about convergence in distribution of order statistics and derive a characterisation of their asymptotic distribution. All proofs of this section are deferred to Section 6.
It is well-known that there exists a sequence $E 1 , E 2 , …$ of independent exponential random variables with unit mean such that
$( X n * , … , X 1 * ) = D ( U ( Γ n + 1 / Γ 1 ) , … , U ( Γ n + 1 / Γ n ) ) ,$
where $Γ k = E 1 + … + E k$. Let $Z n = ( X n * , … , X 1 * , 0 , … ) / U ( n )$. It may be shown that $Z n$ converges in distribution to $Z = ( Z 1 , Z 2 , … )$ in $R + N$, where $Z k = Γ k − γ$ (see Lemma 1 in LePage et al. [15]). For $γ > 1$, the series $∑ k = 1 ∞ Γ k − γ$ converges almost surely. Therefore, for a fixed s, we deduce that, as $n → ∞$,
$( ∑ j = n − s + 1 n X j * , X n − s * , ∑ j = 1 n − s − 1 X j * ) / U ( n ) → D ( ∑ k = 1 s Γ k − γ , Γ s + 1 − γ , ∑ k = s + 2 ∞ Γ k − γ ) .$
In particular, we derive by the Continuous Mapping Theorem that
$∑ j = 1 n − s X j * X n − s * → D R ( s ) = ∑ k = s + 1 ∞ Γ k − γ Γ s + 1 − γ .$
Note that the first moment of $R ( s )$ (but only the first moment) may be easily derived since
$E R ( s ) = 1 + ∑ j = s + 2 ∞ E { B j γ } = 1 + s + 1 γ − 1 ,$ (5)
where $B j = ∑ i = 1 s + 1 E i / ∑ i = 1 j E i$ has a Beta$( s + 1 , j − s − 1 )$ distribution. We also recall that F belongs to the (additive) domain of attraction of a stable law with index $1 / γ < 1$ (i.e., $γ > 1$) if and only if
$lim n → ∞ E ∑ j = 1 n X j * X n * = E R ( 0 ) = 1 + 1 γ − 1 = 1 1 − 1 / γ$
(see e.g., Theorem 1 in Ladoucette and Teugels [2]).
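The identity $E R ( s ) = 1 + ( s + 1 ) / ( γ − 1 )$ can be checked by Monte Carlo simulation of the LePage series; the sketch below truncates the series at a finite number of terms (truncation level and sample size are illustrative):

```python
import random

def simulate_R(s, gamma, n_terms, rng):
    """One draw of the truncated LePage ratio
    R(s) = sum_{k >= s+1} Gamma_k**(-gamma) / Gamma_{s+1}**(-gamma),
    with Gamma_k the partial sums of unit-mean exponentials."""
    g, total, ref = 0.0, 0.0, None
    for k in range(1, n_terms + 1):
        g += rng.expovariate(1.0)       # Gamma_k = E_1 + ... + E_k
        if k == s + 1:
            ref = g ** (-gamma)         # Gamma_{s+1}**(-gamma)
        if k >= s + 1:
            total += g ** (-gamma)
    return total / ref

rng = random.Random(1)
est = sum(simulate_R(0, 2.0, 500, rng) for _ in range(3000)) / 3000
print(est)  # close to E[R(0)] = 1 + 1/(gamma - 1) = 2 for gamma = 2
```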
When ${ N ( t ) , t ≥ 0 }$ is an NMP process, we also have, as $t → ∞$,
$( ∑ j = N ( t ) − s + 1 N ( t ) X j * , X N ( t ) − s * , ∑ j = 1 N ( t ) − s − 1 X j * ) / U ( N ( t ) ) → D ( ∑ k = 1 s Γ k − γ , Γ s + 1 − γ , ∑ k = s + 2 ∞ Γ k − γ )$ (6)
and
$∑ j = 1 N ( t ) − s X j * X N ( t ) − s * → D R ( s ) = ∑ k = s + 1 ∞ Γ k − γ Γ s + 1 − γ$
(see e.g., Lemma 2.5.6 in Embrechts et al. [9]). However, note that if the triplet $( Λ s ( t ) , X N ( t ) − s * , Σ s ( t ) )$ is normalised by $U ( t )$ instead of $U ( N ( t ) )$ in (6), then the asymptotic distribution will differ due to the randomness brought in by the counting process ${ N ( t ) , t ≥ 0 }$.
The following proposition gives the asymptotic Laplace transform when the triplet $( Λ s ( t ) , X N ( t ) − s * , Σ s ( t ) )$ is normalised by $U ( t )$.
Proposition 3. For a fixed s $∈ N$, as $t → ∞$, we have $Λ s ( t ) / U ( t ) , X N ( t ) − s * / U ( t ) , Σ s ( t ) / U ( t ) → D ( Λ s , Ξ s , Σ s )$ where
$E exp ( − u Λ s − v Ξ s − w Σ s ) = 1 s ! ∫ 0 ∞ z γ ∫ 1 ∞ e − u z − γ η η 1 + 1 / γ d η s e − v z − γ q s + 1 z 1 + 1 γ ∫ 0 1 1 − e − w z − γ η η 1 + 1 / γ d η d z .$ (7)
If $Θ = 1$ a.s., this expression simplifies to
$E exp ( − u Λ s − v Ξ s − w Σ s ) = 1 s ! ∫ 0 ∞ z γ ∫ 1 ∞ e − u z − γ η η 1 + 1 / γ d η s e − v z − γ exp − z 1 + 1 γ ∫ 0 1 1 − e − w z − γ η η 1 + 1 / γ d η d z .$
We observe that the counting process modifies the asymptotic Laplace transform by introducing $q s + 1$ into the integral (7). However, the moments of $R ( s )$ do not depend on the law of Θ:
Corollary 4. For $k ∈ N *$, we have
$E R ( s ) k = 1 + ∑ i = 1 k k i ∑ j = 1 i ( s + j ) ! s ! C i , j γ ,$ (8)
where
$C i , j γ = ∑ m 1 + … + m i − j + 1 = j 1 m 1 + 2 m 2 + … + ( i − j + 1 ) m i − j + 1 = i i ! m 1 ! m 2 ! … m i − j + 1 ! ∏ l = 1 i − j + 1 1 l ! l γ − 1 m l .$ (9)
Note that this corollary only provides the moments of $R ( s )$. In order to have moment convergence results for the ratios, it is necessary to assume uniform integrability of $( { ∑ j = 1 N ( t ) − s X j * / X N ( t ) − s * } k ) t ≥ 0$. It is also possible to use the Laplace transform of the triplet with a fixed t to characterise the moments of the ratios ${ ∑ j = 1 N ( t ) − s X j * / X N ( t ) − s * }$ (see Corollary 2), and then to follow the same approach as proposed by Ladoucette [16] for the ratio of the random sum of squares to the square of the random sum under the condition that $E Θ ε < ∞$ and $E Θ − ε < ∞$ for some $ε > 0$.
Remark 1. For $k = 1$, (8) reduces again to $E R ( s ) = 1 + ( s + 1 ) C 1 , 1 ( γ ) = 1 + s + 1 γ − 1 ,$ which is (5). Furthermore, for all $s ≥ 0$
$Var R ( s ) = ( s + 1 ) γ 2 ( γ − 1 ) 2 ( 2 γ − 1 ) = ( s + 1 ) / γ ( 2 − 1 / γ ) ( 1 − 1 / γ ) 2 .$
Remark 2. $R ( s )$ is the ratio of the sum $Ξ s + Σ s$ over $Ξ s$. By taking the derivative of (7), it may be shown that, for $1 < γ < s + 1$ and $E Θ γ < ∞$,
$E Ξ s + Σ s = Γ s − γ + 1 ( γ − 1 ) Γ s E Θ γ .$
Therefore the mean of $Ξ s + Σ s$ will only be finite for sufficiently small γ. An alternative interpretation is that, for a given value of γ, the number s of removed maximal terms in the sum has to be sufficiently large to make the mean of the remaining sum finite. The normalisation of the sum by $Ξ s$, on the other hand, ensures the existence of the moments of the ratio $R ( s )$ for all values of s and $γ > 1$.
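For $Θ = 1$ the mean in Remark 2 can be cross-checked term by term: $Ξ s + Σ s = ∑ k ≥ s + 1 Γ k − γ$ and $E Γ k − γ = Γ ( k − γ ) / Γ ( k )$, since $Γ k$ is Gamma$( k , 1 )$-distributed. A numerical sketch (log-gamma avoids overflow; the truncation level is illustrative):

```python
import math

def mean_trimmed_sum(s, gamma, n_terms=10**6):
    """Truncated series sum_{k >= s+1} Gamma(k - gamma) / Gamma(k), which
    equals E[Xi_s + Sigma_s] when Theta = 1; terms decay like k**(-gamma),
    so convergence is slow and a large truncation level is used."""
    return sum(math.exp(math.lgamma(k - gamma) - math.lgamma(k))
               for k in range(s + 1, n_terms + 1))

s, gamma = 2, 1.5
closed_form = math.gamma(s - gamma + 1) / ((gamma - 1) * math.gamma(s))
print(round(mean_trimmed_sum(s, gamma), 3), round(closed_form, 3))  # both near 1.77
```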
Remark 3. It is interesting to compare Formula (8) with the limiting moment of the statistic
$T N ( t ) = X 1 2 + ⋯ + X N ( t ) 2 ( X 1 + … + X N ( t ) ) 2 .$
For instance, $lim t → ∞ E T N ( t ) = 1 − 1 / γ$, $lim t → ∞ Var T N ( t ) = ( 1 − 1 / γ ) / ( 3 γ )$ and the limit of the nth moment can be expressed as an nth-order polynomial in $1 / γ$, see Albrecher and Teugels [14], Ladoucette [16] and Albrecher et al. [17]. Motivated by this similarity, let us study the link in some more detail. By using once again Lemma 1 in LePage et al. [15], we deduce that
$T N ( t ) → D T ∞ = ∑ k = 1 ∞ Γ k − 2 γ ( ∑ j = 1 ∞ Γ j − γ ) 2 .$ (10)
Recall that $R ( 0 )$ is the weak limit of the ratio $( ∑ j = 1 N ( t ) X j * ) / X N ( t ) *$ and $E { R ( 0 ) } = 1 + 1 / ( γ − 1 )$. Using Equation (10) and $E { R ( 0 ) 2 T ∞ } = 2 γ / ( 2 γ − 1 )$ (which is a straightforward consequence of the fact that $X i 2$ has regularly varying tail with tail index $2 γ$), one then obtains a simple formula for the covariance between $R ( 0 ) 2$ and $T ∞$:
$Cov R ( 0 ) 2 , T ∞ = − 2 γ 1 − 3 γ + 2 γ 2 .$
Determining $Var { R ( 0 ) 2 }$ by exploiting Equation (8) for $k = 4$, we then arrive at the linear correlation coefficient
$ρ ( R ( 0 ) 2 , T ∞ ) = − 3 ( γ − 1 ) ( 3 γ − 1 ) ( 4 γ − 1 ) γ ( 43 γ 2 − 7 γ − 6 ) .$
Figure 1 depicts $ρ ( R ( 0 ) 2 , T ∞ )$ as a function of $α = 1 / γ$. Note that $lim γ → ∞ ρ ( R ( 0 ) 2 , T ∞ ) = − 6 / 43$. The correlation coefficient allows one to quantify the negative linear dependence between the two ratios (the dependence becomes weaker when α increases, as the maximum term will then typically be less dominant in the sum).
Figure 1. $ρ ( R ( 0 ) 2 , T ∞ )$ as a function of α.
Next, let us consider the case when the number of largest terms also increases as $t → ∞$, but slower than the expected number of claims. It is now necessary to change the normalisation coefficients of $X N ( t ) − s *$ and $Σ s ( t )$.
Proposition 5. Let $s = p ( t ) N ( t ) → ∞$ for a function $p ( t )$ with $p ( t ) → 0$ and $t p ( t ) → ∞$. Then $Λ s ( t ) / U ( t ) , X N ( t ) − s * / U ( p − 1 ( t ) ) , Σ s ( t ) / ( t p ( t ) U ( p − 1 ( t ) ) ) → D ( Λ , Ξ , Σ )$ where
$E exp ( − u Λ − v Ξ − w Σ ) = e − v q 0 ( ∫ 0 ∞ ( 1 − e − u z − γ η ) η 1 + 1 / γ d η + w γ − 1 ) .$ (11)
If $Θ = 1$ a.s.
$E exp ( − u Λ − v Ξ − w Σ ) = exp − ∫ 0 ∞ ( 1 − e − u z − γ η ) η 1 + 1 / γ d η − v − w γ − 1 .$
Several messages may be derived from (11). First note that the asymptotic distribution of $X N ( t ) − s *$ is degenerate for $s = p ( t ) N ( t )$, since $X N ( t ) − s * / U ( p − 1 ( t ) ) → D 1$ as $t → ∞$. Second, the asymptotic distribution of the sum of the smallest claims is the distribution of Θ up to a scaling factor, since $Σ s ( t ) / ( t p ( t ) U ( p − 1 ( t ) ) ) → D Θ / ( γ − 1 )$ as $t → ∞$.
Finally, for a fixed proportion of maximum terms, it is also necessary to change the normalisation coefficients of $X N ( t ) − s *$ and $Σ s ( t )$. We have
Proposition 6. Let $s = p N ( t )$ for a fixed $0 < p < 1$. Then $Λ s ( t ) / U ( t ) , X N ( t ) − s * , Σ s ( t ) / t → D ( Λ p , Ξ p , Σ p )$ where
$E exp ( − u Λ p − v Ξ p − w Σ p ) = e − v x p q 0 ( u 1 / γ Γ ( 1 − 1 / γ ) 1 − p + w E [ X | X ≤ x p ] )$
and $x p = F − 1 ( p )$. If $Θ = 1$ a.s.,
$E exp ( − u Λ p − v Ξ p − w Σ p ) = exp − u 1 / γ Γ ( 1 − 1 / γ ) 1 − p − v x p − w E X | X ≤ x p .$
As expected, $X N ( t ) − s * → D x p$ and $Σ s ( t ) / t → D Θ E X | X ≤ x p$ as $t → ∞$. If $Θ = 1$ a.s. and $γ = 2$, then $Λ p$ has an inverse Gamma distribution with shape parameter equal to $1 / 2$.
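The closing remark (an inverse Gamma distribution with shape parameter $1 / 2$ when $Θ = 1$ and $γ = 2$) rests on the fact that $exp ( − a u 1 / 2 )$ is the Laplace transform of the Lévy law, i.e., of $( a 2 / 2 ) / Z 2$ for a standard normal Z. A Monte Carlo sketch (the constant $a = Γ ( 1 / 2 ) = π 1 / 2$ and all sample sizes are illustrative):

```python
import math
import random

def laplace_mc(u, a, n=200_000, seed=7):
    """Monte Carlo Laplace transform of X = (a**2 / 2) / Z**2 with Z standard
    normal; X is inverse Gamma with shape 1/2 (a Levy law), for which
    E[exp(-u X)] = exp(-a * sqrt(u)) holds exactly."""
    rng = random.Random(seed)
    c = a * a / 2.0
    return sum(math.exp(-u * c / rng.gauss(0.0, 1.0) ** 2)
               for _ in range(n)) / n

a, u = math.sqrt(math.pi), 1.0        # a = Gamma(1/2)
print(laplace_mc(u, a))               # Monte Carlo estimate
print(math.exp(-a * math.sqrt(u)))    # exact value, approx 0.170
```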

## 4. Asymptotics for the Joint Laplace Transforms when $γ < 1$

In this section, we assume that $γ < 1$ and hence the expectation of the claim distribution is finite. We let $μ = E X 1$. The normalisation coefficient of the sum of the smallest claims, $Σ s ( t )$, will therefore be $t − 1$, as is the case for $S ( t )$ by the Law of Large Numbers. In Section 5, we will then consider the sum of the smallest centered claims with another normalisation coefficient.
Again, consider a fixed s $∈ N$ first. The normalisation coefficients of $Λ s ( t )$ and $X N ( t ) − s *$ are the same as for the case $γ > 1$, but the normalisation coefficient of $Σ s$ is now $t − 1$.
Proposition 7. For fixed s $∈ N$, we have $Λ s ( t ) / U ( t ) , X N ( t ) − s * / U ( t ) , Σ s ( t ) / t → D ( Λ s , Ξ s , Σ s )$ where
$E exp ( − u Λ s − v Ξ s − w Σ s ) = 1 s ! ∫ 0 ∞ z γ ∫ 1 ∞ e − u z − γ η η 1 + 1 / γ d η s e − v z − γ q s + 1 z + w μ d z .$
If $Θ = 1$ a.s.,
$E exp ( − u Λ s − v Ξ s − w Σ s ) = e − w μ 1 s ! ∫ 0 ∞ z γ ∫ 1 ∞ e − u z − γ η η 1 + 1 / γ d η s e − v z − γ − z d z .$
Corollary 8. We have
$E Σ s Ξ s = μ Γ ( s + γ + 1 ) s ! E Θ 1 − γ$
and
$E Ξ 0 Λ s + Ξ s + Σ s = 1 − μ ∫ 0 ∞ ∫ 0 ∞ e − u z − γ q 2 z + u μ d u d z .$
We first note that
$E exp ( − w Σ s ) = 1 s ! ∫ 0 ∞ z s q s + 1 ( z + w μ ) d z = E 1 s ! ∫ 0 ∞ z s e − ( z + w μ ) Θ Θ s + 1 d z = E e − w μ Θ$
and therefore $Σ s ( t ) / t → D μ Θ$ as $t → ∞$ for any fixed s $∈ N$. The influence of the largest claims on the sum becomes less and less important as t grows and is asymptotically negligible. This is very different from the case $γ > 1$. In Theorem 1 in Downey and Wright [18], it is moreover shown that, as $n → ∞$,
$E X n * ∑ j = 1 n X j * = E X n * E { ∑ j = 1 n X j * } 1 + o ( 1 ) .$
This result is no longer true in our framework when Θ is not degenerate at 1. Assume that $E Θ γ < ∞$. Using (3) and under a uniform integrability condition, one has
$lim t → ∞ E X N ( t ) * ∑ j = 1 N ( t ) X j * t U ( t ) ≠ lim t → ∞ E X N ( t ) * / U ( t ) lim t → ∞ E ∑ j = 1 N ( t ) X j * / t = Γ ( 1 − γ ) E Θ γ μ E Θ .$
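The asymptotic negligibility of the largest claim for $γ < 1$ is easy to see in simulation; the sketch below uses Pareto claims with $α = 3$ (so $γ = 1 / 3$ and $μ = 3 / 2$) and a Poisson number of claims, both illustrative choices:

```python
import random

def trimmed_ratio(t, alpha, seed=3):
    """Simulate one portfolio with a Poisson(t) number of Pareto(alpha)
    claims (Theta = 1); return (Sigma_0(t)/t, largest claim / total)."""
    rng = random.Random(seed)
    n, clock = 0, rng.expovariate(1.0)   # Poisson(t) via interarrivals
    while clock < t:
        n += 1
        clock += rng.expovariate(1.0)
    claims = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]
    total, biggest = sum(claims), max(claims)
    return (total - biggest) / t, biggest / total

sigma0, share = trimmed_ratio(2000.0, 3.0)
print(sigma0)  # close to mu = 3/2
print(share)   # small: the maximum is asymptotically negligible here
```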
Next, we consider the case with varying number of maximum terms. The normalisation coefficients of $Λ s ( t )$ and $X N ( t ) − s *$ now differ.
Proposition 9. Let $s = p ( t ) N ( t ) → ∞$ with $p ( t ) → 0$ and $t p ( t ) → ∞$. Then $Λ s ( t ) / ( t p ( t ) U ( p − 1 ( t ) ) ) , X N ( t ) − s * / U ( p − 1 ( t ) ) , Σ s ( t ) / t → D ( Λ , Ξ , Σ )$ where
$E exp ( − u Λ − v Ξ − w Σ ) = e − v q 0 u 1 − γ + w μ .$
If $Θ = 1$ a.s.,
$E exp ( − u Λ − v Ξ − w Σ ) = e − u / ( 1 − γ ) e − v e − w μ .$
As for the case $γ > 1$, $X N ( t ) − s * / U ( p − 1 ( t ) ) → P 1$ as $t → ∞$. Moreover, the asymptotic distribution of the sum of the largest claims is the distribution of Θ up to a scaling factor, since $Λ s ( t ) / ( t p ( t ) U ( p − 1 ( t ) ) ) → D Θ / ( 1 − γ )$ as $t → ∞$. Finally, note that $Σ s ( t ) / t → D μ Θ$ as $t → ∞$, as in the case where s was fixed.
Finally we fix p. Only the normalisation coefficient of $Λ s ( t )$ and its asymptotic distribution differ from the case $γ > 1$.
Proposition 10. Let $s = p N ( t )$ and $0 < p < 1$. Then $Λ s ( t ) / t , X N ( t ) − s * , Σ s ( t ) / t → D ( Λ p , Ξ p , Σ p )$ where
$E exp ( − u Λ p − v Ξ p − w Σ p ) = e − v x p q 0 ( u E [ X | X > x p ] + w E [ X | X ≤ x p ] )$
and $x p = F − 1 ( p )$. If $Θ = 1$ a.s.,
$E exp ( − u Λ p − v Ξ p − w Σ p ) = e − u E [ X | X > x p ] e − v x p e − w E [ X | X ≤ x p ] .$
We note that the normalisation of $Λ s ( t )$ is the same as for $Σ s ( t )$ and that $Λ s ( t ) / t → D Θ E X | X > x p$ as $t → ∞$.
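For Pareto claims the limit constants of Proposition 10 are explicit: with $F ( x ) = 1 − x − α$, one has $E [ X | X > y ] = α y / ( α − 1 ) = y / ( 1 − γ )$ for any threshold $y ≥ 1$. A quick empirical check (sample size and parameters are illustrative):

```python
import random

def cond_mean_above(alpha, y, n=200_000, seed=11):
    """Empirical E[X | X > y] for Pareto(alpha) claims X = (1-U)**(-1/alpha)."""
    rng = random.Random(seed)
    tail = [x for x in ((1.0 - rng.random()) ** (-1.0 / alpha)
                        for _ in range(n)) if x > y]
    return sum(tail) / len(tail)

alpha, p = 3.0, 0.9
x_p = (1.0 - p) ** (-1.0 / alpha)        # F^{-1}(p) for Pareto(alpha)
closed = alpha / (alpha - 1.0) * x_p     # E[X | X > x_p] = x_p / (1 - gamma)
print(cond_mean_above(alpha, x_p), closed)  # the two values agree closely
```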

## 5. Asymptotics for the Joint Laplace Transform for $γ < 1$ with Centered Claims when s is Fixed

In this section, we consider the sum of the smallest centered claims
$Σ s ( μ ) ( t ) = ∑ j = 1 N ( t ) − s − 1 X j * − μ$
instead of the sum of the smallest claims $Σ s ( t )$. As in the Central Limit Theorem, we have to consider two sub-cases: $1 / 2 < γ < 1$ and $γ < 1 / 2$.
For the sub-case $1 / 2 < γ < 1$, the normalisation coefficient of $Σ s ( μ ) ( t )$ is now $U − 1 ( t )$.
Proposition 11. For fixed s $∈ N$ and $1 / 2 < γ < 1$, we have
$Λ s ( t ) / U ( t ) , X N ( t ) − s * / U ( t ) , Σ s ( μ ) ( t ) / U ( t ) → D ( Λ s , Ξ s , Σ s ( μ ) ) ,$
where
$E exp ( − u Λ s − v Ξ s − w Σ s ( μ ) ) = 1 s ! ∫ 0 ∞ z γ ∫ 1 ∞ e − u z − γ η η 1 + 1 / γ d η s e − v z − γ q s + 1 z 1 + 1 γ ∫ 0 1 1 − w z − γ η − e − w z − γ η η 1 + 1 / γ d η − z − γ 1 − γ w d z .$
If $Θ = 1$ a.s.,
$E exp ( − u Λ s − v Ξ s − w Σ s ( μ ) ) = 1 s ! ∫ 0 ∞ z γ ∫ 1 ∞ e − u z − γ η η 1 + 1 / γ d η s e − v z − γ exp − z 1 + 1 γ ∫ 0 1 1 − w z − γ η − e − w z − γ η η 1 + 1 / γ d η − z − γ 1 − γ w d z .$
If $s = 0$, then
$E exp ( − v Ξ 0 − w Σ 0 ( μ ) ) = ∫ 0 ∞ e − v z − γ exp − z 1 + 1 γ ∫ 0 1 1 − w z − γ η − e − w z − γ η η 1 + 1 / γ d η − z − γ 1 − γ w d z$
and we see that $Ξ 0$ and $Σ 0 ( μ )$ are not independent.
Corollary 12. We have
$E 1 + Σ s ( μ ) Ξ s = 1 + s + 1 γ − 1 .$
This result should be compared with the one obtained by Bingham and Teugels [7] for $s = 0$ (see also Ladoucette and Teugels [2]).
For the sub-case $γ < 1 / 2$, let $σ 2 = Var X 1$. The normalisation coefficient of $Σ s ( μ ) ( t )$ becomes $t − 1 / 2$.
Proposition 13. For s $∈ N$ fixed and $γ < 1 / 2$, we have $Λ s ( t ) / U ( t ) , X N ( t ) − s * / U ( t ) , Σ s ( μ ) ( t ) / t 1 / 2 → D ( Λ s , Ξ s , Σ s ( μ ) )$ where
$E exp ( − u Λ s − v Ξ s − w Σ s ( μ ) ) = 1 s ! ∫ 0 ∞ z γ ∫ 1 ∞ e − u z − γ η η 1 + 1 / γ d η s e − v z − γ q s + 1 z − 1 2 w 2 σ 2 d z .$
If $Θ = 1$ a.s.,
$E exp ( − u Λ s − v Ξ s − w Σ s ( μ ) ) = 1 s ! exp 1 2 w 2 σ 2 ∫ 0 ∞ z γ ∫ 1 ∞ e − u z − γ η η 1 + 1 / γ d η s e − v z − γ − z d z .$
If $s = 0$ and $Θ = 1$ a.s., we note that the maximum, $Ξ 0$, and the centred sum, $Σ 0 ( μ )$, are independent. If $s > 0$ and $Θ = 1$ a.s., $( Λ s , Ξ s )$ is independent of $Σ s ( μ )$.

## 6. Proofs

Proof of Proposition 3. In the representation of Proposition 1, we first use the substitution $F ¯ ( y ) = z / t$, i.e., $y = U ( t / z )$:
$Ω s ( u / U ( t ) , v / U ( t ) , w / U ( t ) ) = ∑ n = 0 s p n ( t ) ∫ 0 ∞ e − u x / U ( t ) d F ( x ) n + 1 s ! ∫ 0 t ∫ U ( t / z ) ∞ e − u x / U ( t ) d F ( x ) s e − v U ( t / z ) / U ( t ) Q t ( s + 1 ) ∫ 0 U ( t / z ) e − w x / U ( t ) d F ( x ) d z t$
$= ∑ n = 0 s p n ( t ) ∫ 0 ∞ e − u x / U ( t ) d F ( x ) n + 1 s ! ∫ 0 t t ∫ U ( t / z ) ∞ e − u x / U ( t ) d F ( x ) s e − v U ( t / z ) / U ( t ) × 1 t s + 1 Q t ( s + 1 ) 1 − 1 t t − t ∫ 0 U ( t / z ) e − w x / U ( t ) d F ( x ) d z .$
Next, the substitution $F ¯ ( x ) = ρ z / t$, i.e., $x = U ( t / ( z ρ ) )$ leads to
$t ∫ U ( t / z ) ∞ e − u x / U ( t ) d F ( x ) = z ∫ 0 1 e − u U ( t / ( z ρ ) ) / U ( t ) d ρ → z ∫ 0 1 e − u ( z ρ ) − γ d ρ = z γ ∫ 1 ∞ e − u z − γ η η 1 + 1 / γ d η$
as $t → ∞$ and also
$t − t ∫ 0 U ( t / z ) e − w x / U ( t ) d F ( x ) = t ( 1 − F ( U ( t / z ) ) ) + t ∫ 0 U ( t / z ) ( 1 − e − w x / U ( t ) ) d F ( x ) = z + z ∫ 1 ∞ ( 1 − e − w U ( t / ( z ρ ) ) / U ( t ) ) d ρ → z 1 + ∫ 1 ∞ ( 1 − e − w ( z ρ ) − γ ) d ρ = z 1 + 1 γ ∫ 0 1 1 − e − w z − γ η η 1 + 1 / γ d η .$
Note that the integral is well defined since $γ > 1$. Moreover $e − v U ( t / z ) / U ( t ) → e − v z − γ$ and
$p n ( t ) ∫ 0 ∞ e − u x / U ( t ) d F ( x ) n ≤ p n ( t ) → 0 as t → ∞ .$
☐
Proof of Corollary 4. From Proposition 3 we have
$E exp ( − ( u + v ) Ξ s − u Σ s ) = 1 s ! ∫ 0 ∞ z s e − u z − γ e − v z − γ q s + 1 z 1 + 1 γ ∫ 0 1 1 − e − u z − γ η η 1 + 1 / γ d η d z .$
Hence
$∂ ∂ u E exp ( − ( u + v ) Ξ s − u Σ s ) u = 0 = − 1 s ! ∫ 0 ∞ z s z − γ e − v z − γ q s + 1 z d z − 1 s ! γ − 1 ∫ 0 ∞ z s + 1 z − γ e − v z − γ q s + 2 z d z .$
This gives indeed, using (3),
$E Ξ s + Σ s Ξ s = − ∫ 0 ∞ ∂ ∂ u E exp ( − ( u + v ) Ξ s − u Σ s ) u = 0 d v = 1 s ! ∫ 0 ∞ z s q s + 1 z d z + 1 s ! γ − 1 ∫ 0 ∞ z s + 1 q s + 2 z d z = 1 + s + 1 γ − 1$
which extends (5) to the case of NMP processes. Next, we focus on (8) for general k. We first consider the case $s = 0$. We have
$E 1 + Σ 0 Ξ 0 k = ∑ i = 0 k k i E Σ 0 Ξ 0 i .$
Let
$θ ( z , w ) = z 1 + 1 γ ∫ 0 1 1 − e − w z − γ η η 1 + 1 / γ d η .$
By Proposition 3
$E exp ( − v Ξ 0 − w Σ 0 ) = ∫ 0 ∞ e − v z − γ q 1 ( θ ( z , w ) ) d z$ (12)
and clearly
$E Σ 0 Ξ 0 i = ( − 1 ) i Γ ( i ) ∫ 0 ∞ v i − 1 ∂ i ∂ w i E ( exp ( − v Ξ 0 − w Σ 0 ) ) w = 0 d v .$
Note that
$θ 1 ( z , w ) : = ∂ ∂ w θ ( z , w ) = z − γ + 1 γ ∫ 0 1 η − 1 / γ e − w z − γ η d η θ n ( z , w ) : = ∂ n ∂ w n θ ( z , w ) = − 1 n + 1 z − n γ + 1 γ ∫ 0 1 η − 1 / γ + ( n − 1 ) e − w z − γ η d η ,$
so
$θ n ( z , 0 ) = − 1 n + 1 z − n γ + 1 1 n γ − 1 .$
By Faà di Bruno's formula
$∂ n ∂ w n q 1 θ ( z , w ) = ∑ k = 0 n q 1 ( k ) θ ( z , w ) B n , k θ 1 ( z , w ) , … , θ n − k + 1 ( z , w )$
where
$B n , k x 1 , … , x n − k + 1 = ∑ m 1 + … + m n − k + 1 = k 1 m 1 + 2 m 2 + … + ( n − k + 1 ) m n − k + 1 = n n ! m 1 ! m 2 ! … m n − k + 1 ! ∏ j = 1 n − k + 1 x j j ! m j .$
Therefore
$∂ n ∂ w n q 1 θ ( z , w ) w = 0 = ∑ k = 1 n − 1 k q k + 1 z B n , k z − γ + 1 γ − 1 , … , − 1 n − k z − ( n − k + 1 ) γ + 1 ( n − k + 1 ) γ − 1 .$
Subsequently,
$B n , k z − γ + 1 γ − 1 , … , − 1 n − k z − ( n − k + 1 ) γ + 1 ( n − k + 1 ) γ − 1 = ∑ m 1 + … + m n − k + 1 = k 1 m 1 + 2 m 2 + … + ( n − k + 1 ) m n − k + 1 = n n ! m 1 ! m 2 ! … m n − k + 1 ! ∏ j = 1 n − k + 1 − 1 j + 1 j ! z − j γ + 1 1 j γ − 1 m j = z − n γ + k − 1 n + k ∑ m 1 + … + m n − k + 1 = k 1 m 1 + 2 m 2 + … + ( n − k + 1 ) m n − k + 1 = n n ! m 1 ! m 2 ! … m n − k + 1 ! ∏ j = 1 n − k + 1 1 j ! j γ − 1 m j = z − n γ + k − 1 n + k C n , k γ$
with definition (9). This gives
$∂ n ∂ w n q 1 θ ( z , w ) w = 0 = z − n γ + k − 1 n ∑ k = 1 n q k + 1 z C n , k γ$
and
$∂ i ∂ w i E ( exp ( − v Ξ 0 − w Σ 0 ) ) w = 0 = ∫ 0 ∞ e − v z − γ z − i γ + k − 1 i ∑ k = 1 i q k + 1 z C i , k γ d z ,$
so that
$E Σ 0 Ξ 0 i = 1 Γ ( i ) ∫ 0 ∞ v i − 1 ∫ 0 ∞ e − v z − γ z − i γ + k ∑ k = 1 i q k + 1 z C i , k γ d z d v = 1 Γ ( i ) ∑ k = 1 i C i , k γ ∫ 0 ∞ q k + 1 z z − i γ + k ∫ 0 ∞ v i − 1 e − v z − γ d v d z = 1 Γ ( i ) ∑ k = 1 i C i , k γ ∫ 0 ∞ q k + 1 z z − i γ + k Γ ( i ) z − γ i d z = ∑ k = 1 i C i , k γ ∫ 0 ∞ q k + 1 z z k d z = ∑ k = 1 i k ! C i , k γ$
cf. (3), and the result follows.
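The key analytic step in the chain above is the Gamma integral $\int_0^\infty v^{i-1} e^{-v z^{-\gamma}}\, dv = \Gamma(i)\, z^{\gamma i}$. A crude numerical sketch (illustrative parameter values only, simple midpoint rule) confirms it:

```python
from math import exp, gamma

def gamma_integral(i, a, upper=120.0, nsteps=400_000):
    """Midpoint-rule approximation of ∫_0^∞ v^{i-1} e^{-a v} dv,
    truncated at `upper` (the tail beyond is negligible here)."""
    h = upper / nsteps
    total = 0.0
    for k in range(nsteps):
        v = (k + 0.5) * h
        total += v ** (i - 1) * exp(-a * v)
    return total * h

# the step used above, with a = z^{-gamma}:  ∫_0^∞ v^{i-1} e^{-v z^{-γ}} dv = Γ(i) z^{γ i}
i, z, gam = 3, 1.7, 0.8            # illustrative values only
a = z ** (-gam)
print(gamma_integral(i, a))        # numerical value of the integral
print(gamma(i) * z ** (gam * i))   # Γ(i) z^{γ i}
```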
For the case $s > 0$, we proceed in an analogous way. Equation (12) becomes
$E \exp(-v\,\Xi_s - w\,\Sigma_s) = \frac{1}{s!} \int_0^\infty z^s\, e^{-v z^{-\gamma}}\, q_{s+1}\big(\theta(z,w)\big)\, dz.$
Then $\Sigma_0/\Xi_0$ is replaced by $\Sigma_s/\Xi_s$, $q_1(z)$ by $q_{s+1}(z)$, $q_1^{(k)}$ by $q_{s+1}^{(k)}$ and, following the same path as for $s=0$, we get
$E\Big(\frac{\Sigma_s}{\Xi_s}\Big)^i = \frac{1}{s!} \sum_{k=1}^i C_{i,k}(\gamma) \int_0^\infty q_{s+k+1}(z)\, z^{s+k}\, dz = \sum_{k=1}^i \frac{(s+k)!}{s!}\, C_{i,k}(\gamma).$
☐
Proof of Proposition 5. The proof is similar to the previous one, so we only highlight the differences here. Conditioning on $N(t) = n$, we have
$E_{N(t)=n} \exp\big(-u\,\Lambda_s(t)/U(t) - v\, X^*_{n(1-p(t))}/U(p^{-1}(t)) - w\,\Sigma_s(t)/(t\, p(t)\, U(p^{-1}(t)))\big) = \frac{n!}{(n p(t))!\, (n(1-p(t)))!} \int_0^\infty \Big( \int_y^\infty e^{-u x/U(t)}\, dF(x) \Big)^{n p(t)} e^{-v y/U(p^{-1}(t))} \times \Big( \int_0^y e^{-w x/(t\, p(t)\, U(p^{-1}(t)))}\, dF(x) \Big)^{n(1-p(t))-1}\, dF(y).$
We first apply the substitution $\bar F(y) = p(t)\, z$, i.e., $y = U(1/(p(t)z))$:
$\Omega_s\big(u/U(t),\, v/U(p^{-1}(t)),\, w/(t\, p(t)\, U(p^{-1}(t)));\, t\big)_{N(t)=n} = \frac{n!}{(n p(t))!\, (n(1-p(t)))!} \int_0^\infty \Big( \int_{U(1/(p(t)z))}^\infty e^{-u x/U(t)}\, dF(x) \Big)^{n p(t)} e^{-v\, U(1/(p(t)z))/U(p^{-1}(t))} \times \Big( \int_0^{U(1/(p(t)z))} e^{-w x/(t\, p(t)\, U(p^{-1}(t)))}\, dF(x) \Big)^{n(1-p(t))-1} p(t)\, dz.$
The factor involving v converges to
$e^{-v\, U(1/(p(t)z))/U(p^{-1}(t))} \to e^{-v z^{-\gamma}}.$
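This convergence rests on the regular variation of $U$. For the exact Pareto case $\bar F(x) = x^{-1/\gamma}$, i.e., $U(y) = y^{\gamma}$, the ratio $U(1/(p(t)z))/U(p^{-1}(t))$ equals $z^{-\gamma}$ exactly, which the sketch below illustrates; for a general regularly varying $U$ the equality holds only in the limit. The choices of $\gamma$ and $p(t)$ are purely illustrative, and $p^{-1}(t)$ is read as $1/p(t)$.

```python
from math import exp

gamma_ = 2.0                      # tail index (illustrative)
U = lambda y: y ** gamma_         # tail quantile function for F̄(x) = x^(-1/γ)

def v_factor(v, z, t, p):
    # the factor exp(-v U(1/(p(t) z)) / U(1/p(t))) from the substitution F̄(y) = p(t) z
    return exp(-v * U(1.0 / (p(t) * z)) / U(1.0 / p(t)))

p = lambda t: t ** -0.5           # an intermediate sequence with p(t) → 0 and t p(t) → ∞
for t in (1e2, 1e4, 1e6):
    print(v_factor(1.0, 2.0, t, p), exp(-1.0 * 2.0 ** -gamma_))  # both equal exp(-z^{-γ} v)
```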
The factor containing w behaves as
$\int_0^{U(1/(p(t)z))} e^{-w x/(t\, p(t)\, U(p^{-1}(t)))}\, dF(x) = 1 - p(t)z - \int_0^{U(1/(p(t)z))} \big( 1 - e^{-w x/(t\, p(t)\, U(p^{-1}(t)))} \big)\, dF(x) = 1 - p(t)z - w \int_0^{U(1/(p(t)z))} \frac{x}{t\, p(t)\, U(p^{-1}(t))}\, dF(x) + \ldots = 1 - p(t)z - w\, \frac{r\big(U(1/(p(t)z))\big)}{t\, p(t)\, U(p^{-1}(t))} + \ldots = 1 - p(t)z - \frac{w}{t}\, \frac{z^{1-\gamma}}{\gamma-1} + \ldots$
and hence for the power
$\Big( \int_0^{U(1/(p(t)z))} e^{-w x/(t\, p(t)\, U(p^{-1}(t)))}\, dF(x) \Big)^{n(1-p(t))-1} = \exp\Big( n(1-p(t)) \ln\Big( 1 - p(t)z - \frac{w}{t}\, \frac{z^{1-\gamma}}{\gamma-1} + \ldots \Big)\Big) = \exp\Big( -n p(t) z - \frac{n}{t}\, \frac{w z^{1-\gamma}}{\gamma-1} - n p(t) \ln\big(1 - p(t)z\big) + \ldots \Big).$
Finally, for the factor containing u, apply the substitution $\bar F(x) = \rho z/t$, i.e., $x = U(t/(z\rho))$:
$\int_{U(1/(p(t)z))}^\infty e^{-u x/U(t)}\, dF(x) \sim \frac{z}{t} \int_0^{t p(t)} e^{-u\, U(t/(z\rho))/U(t)}\, d\rho \sim \frac{z}{t} \int_0^{t p(t)} \big( e^{-u\, U(t/(z\rho))/U(t)} - 1 \big)\, d\rho + p(t)\, z \sim p(t)\, z \Big( 1 - \frac{1}{t\, p(t)} \int_0^\infty \big( 1 - e^{-u (z\rho)^{-\gamma}} \big)\, d\rho \Big) \sim p(t)\, z \Big( 1 - \frac{1}{t\, p(t)} \int_0^\infty \frac{1 - e^{-u z^{-\gamma} \eta}}{\eta^{1+1/\gamma}}\, d\eta \Big),$
so that, as $t → ∞$,
$\Big( \int_{U(1/(p(t)z))}^\infty e^{-u x/U(t)}\, dF(x) \Big)^{n p(t)} \sim \exp\Big( n p(t) \ln z + n p(t) \ln p(t) - \frac{n}{t} \int_0^\infty \frac{1 - e^{-u z^{-\gamma} \eta}}{\eta^{1+1/\gamma}}\, d\eta \Big).$
For the factor with the factorials, we have by Stirling’s formula
$\frac{n!}{(n p(t))!\, (n(1-p(t))-1)!} \sim \frac{n^{1/2}}{\sqrt{2\pi}}\, e^{-(n p(t)+1/2)\ln(p(t))}\, e^{-(n(1-p(t))+1/2)\ln(1-p(t))}.$
For the integral in z, we determine an asymptotic equivalent. Let
$g(z) = \ln(z) - z, \qquad g'(z) = \frac{1}{z} - 1, \quad g'(1) = 0, \qquad g''(z) = -\frac{1}{z^2}, \quad g''(1) = -1.$
By Laplace’s method, we deduce that
$\int_0^\infty \exp\big( n p(t)\, (\ln(z) - z) \big)\, dz \sim \frac{\sqrt{2\pi}}{n^{1/2}\, p^{1/2}(t)}\, \exp\big(-n p(t)\big).$
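A quick numerical check of this Laplace-method equivalence (illustrative only, not part of the proof): for integer $m = np(t)$ the integral is available in closed form, $\int_0^\infty e^{m(\ln z - z)}\,dz = \Gamma(m+1)/m^{m+1}$, and by Stirling's series the log-error of the approximation decays like $1/(12m)$.

```python
from math import lgamma, log, pi

def laplace_exact(m):
    # log of ∫_0^∞ exp(m (ln z - z)) dz = ∫_0^∞ z^m e^{-m z} dz = Γ(m+1) / m^{m+1}
    return lgamma(m + 1) - (m + 1) * log(m)

def laplace_approx(m):
    # Laplace's method around the maximum of g(z) = ln z - z at z = 1:
    # g(1) = -1, g''(1) = -1, so the integral ≈ sqrt(2π/m) e^{-m}
    return 0.5 * log(2 * pi / m) - m

for m in (10, 100, 1000):
    print(m, laplace_exact(m) - laplace_approx(m))  # shrinks roughly like 1/(12 m)
```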
Altogether
$\Omega_s\big(u/U(t),\, v/U(p^{-1}(t)),\, w/(t\, p(t)\, U(p^{-1}(t)));\, t\big)_{N(t)=n} \sim e^{-v} \exp\Big( -\frac{n}{t}\, \frac{w}{\gamma-1} + o\Big(\frac{n}{t}\Big) + n p(t) \ln(1-p(t)) + n p(t) \ln p(t) \Big) \times \exp\Big( -\frac{n}{t} \int_0^\infty \frac{1 - e^{-u z^{-\gamma} \eta}}{\eta^{1+1/\gamma}}\, d\eta - n p(t) \ln p(t) - n p(t) - \frac{1}{2}\ln(p(t)) + \ln(p(t)) \Big) \times \exp\Big( -(n p(t)+1/2)\ln(p(t)) - (n(1-p(t))+1/2)\ln(1-p(t)) \Big) \sim \exp\Big( -\frac{n}{t} \int_0^\infty \frac{1 - e^{-u z^{-\gamma} \eta}}{\eta^{1+1/\gamma}}\, d\eta - v - \frac{n}{t}\, \frac{w}{\gamma-1} \Big).$
☐
Proof of Proposition 6. Again, we condition on $N ( t ) = n$:
$E_{N(t)=n} \exp\big(-u\,\Lambda_s(t)/U(t) - v\, X^*_{n(1-p)} - w\,\Sigma_s(t)/t\big) = \frac{n!}{(np)!\, (n(1-p)-1)!} \int_0^\infty \Big( \int_y^\infty e^{-u x/U(t)}\, dF(x) \Big)^{np} e^{-v y} \Big( \int_0^y e^{-w x/t}\, dF(x) \Big)^{n(1-p)-1}\, dF(y).$
For the factor containing w, we have
$\int_0^y e^{-w x/t}\, dF(x) = P(X \le y) - \frac{w}{t} \int_0^y x\, dF(x) + o(t^{-1}) = P(X \le y)\Big( 1 - \frac{w}{t}\, E[X \mid X \le y] \Big) + o(t^{-1}) = F(y)\Big( 1 - \frac{w}{t}\, E[X \mid X \le y] \Big) + o(t^{-1}).$
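This first-order expansion can be checked numerically for a concrete claim distribution. The sketch below uses a unit-rate exponential (purely illustrative), for which both sides are available in closed form, and confirms that the error is $o(s)$ with $s = w/t$:

```python
from math import exp

def lhs(s, y):
    # ∫_0^y e^{-s x} dF(x) for a unit-rate exponential F
    return (1 - exp(-(1 + s) * y)) / (1 + s)

def rhs(s, y):
    # first-order expansion F(y) (1 - s E[X | X <= y])
    F = 1 - exp(-y)
    cond_mean = (1 - exp(-y) * (1 + y)) / F
    return F * (1 - s * cond_mean)

for s in (1e-2, 1e-4, 1e-6):
    print(s, abs(lhs(s, 2.0) - rhs(s, 2.0)) / s)   # ratio → 0, i.e. the error is o(s)
```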
For the factor involving u one can write
$\int_y^\infty e^{-u x/U(t)}\, dF(x) = \Big[ -e^{-u x/U(t)}\, \bar F(x) \Big]_y^\infty - \frac{u}{U(t)} \int_y^\infty e^{-u x/U(t)}\, \bar F(x)\, dx = e^{-u y/U(t)}\, \bar F(y) - \int_{u y/U(t)}^\infty e^{-w}\, \bar F\Big( \frac{w\, U(t)}{u} \Big)\, dw = \bar F(y) - \frac{1}{t}\, u^{1/\gamma} \int_0^\infty e^{-w} w^{-1/\gamma}\, dw\, (1+o(1)) = \bar F(y) - \frac{1}{t}\, u^{1/\gamma}\, \Gamma(1-1/\gamma)\, (1+o(1)).$
The ratio with factorials behaves, by Stirling’s formula, as
$\frac{n!}{(np)!\, (n(1-p)-1)!} \sim \frac{\sqrt{2\pi}\, n^{n+1/2}\, e^{-n}}{\sqrt{2\pi}\, (np)^{np+1/2}\, e^{-np}\, \sqrt{2\pi}\, (n(1-p))^{n(1-p)+1/2}\, e^{-n(1-p)}}\, n(1-p) \sim \frac{n^{1/2}}{\sqrt{2\pi}\, p^{np+1/2}\, (1-p)^{n(1-p)+1/2}}.$
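The Stirling asymptotics can be verified numerically via the log-Gamma function. The sketch below checks the plain multinomial-coefficient version $n!/\big((np)!\,(n(1-p))!\big) \sim 1/\big(\sqrt{2\pi n}\, p^{np+1/2}(1-p)^{n(1-p)+1/2}\big)$; the ratio in the proof differs only by the bounded factor $n(1-p)$ coming from the $-1$ in the second factorial. The values of $n$ and $p$ are illustrative.

```python
from math import lgamma, log, pi

def log_ratio_exact(n, p):
    # log of n! / ((np)! (n(1-p))!)  -- np is assumed to be an integer
    return lgamma(n + 1) - lgamma(n * p + 1) - lgamma(n * (1 - p) + 1)

def log_ratio_stirling(n, p):
    # log of 1 / ( sqrt(2*pi*n) p^{np+1/2} (1-p)^{n(1-p)+1/2} )
    return -0.5 * log(2 * pi * n) \
        - (n * p + 0.5) * log(p) - (n * (1 - p) + 0.5) * log(1 - p)

for n in (100, 10_000, 1_000_000):
    print(n, log_ratio_exact(n, 0.2) - log_ratio_stirling(n, 0.2))  # → 0 like O(1/n)
```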
For the integral in y we have the equivalence
$\Big( \int_y^\infty e^{-u x/t}\, dF(x) \Big)^{np} \Big( \int_0^y e^{-w x/t}\, dF(x) \Big)^{n(1-p)-1} = \exp\Big( n\big( p \ln(F(y)) + (1-p)\ln(\bar F(y)) \big) \Big)\, \exp\Big( -\frac{w n}{t}\, p\, E[X \mid X \le y] - \frac{u n}{t}\, E[X \mid X > y] \Big).$
Let
$g(y) = p \ln(F(y)) + (1-p)\ln(\bar F(y)), \qquad g'(y) = p\,\frac{f(y)}{F(y)} - (1-p)\,\frac{f(y)}{\bar F(y)}, \qquad F(y_p) = p, \quad y_p = F^{-1}(p), \qquad g(y_p) = p\ln(p) + (1-p)\ln(1-p), \qquad g''(y) = p\, \frac{f'(y)\, F(y) - f^2(y)}{F^2(y)} - (1-p)\, \frac{f'(y)\, \bar F(y) + f^2(y)}{\bar F^2(y)}, \qquad g''(y_p) = f'(y_p) - \frac{f^2(y_p)}{p} - f'(y_p) - \frac{f^2(y_p)}{1-p} = -\frac{f^2(y_p)}{p(1-p)}.$
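The stationarity condition $F(y_p) = p$ can be confirmed numerically by maximising $g$ on a grid. The sketch below uses a unit-rate exponential claim distribution and $p = 0.7$ (both choices purely illustrative):

```python
from math import exp, log

p = 0.7
F = lambda y: 1 - exp(-y)          # exponential claim distribution (illustrative)
g = lambda y: p * log(F(y)) + (1 - p) * log(1 - F(y))

ys = [i / 1000 for i in range(1, 8000)]  # grid on (0, 8)
y_star = max(ys, key=g)            # grid maximiser of g
y_p = -log(1 - p)                  # F^{-1}(p) for the exponential distribution
print(y_star, y_p)                 # the two agree up to the grid resolution
```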
By Laplace’s method, we deduce that
$\int_0^\infty \exp\Big( n\big( p \ln(F(y)) + (1-p)\ln(\bar F(y)) \big) \Big)\, dF(y) \sim \frac{\sqrt{2\pi\, p(1-p)}}{n^{1/2}}\, e^{n\left( p\ln(p) + (1-p)\ln(1-p) \right)}$
and then
$\int_0^\infty \Big( \int_y^\infty e^{-u x/t}\, dF(x) \Big)^{np} e^{-v y} \Big( \int_0^y e^{-w x/t}\, dF(x) \Big)^{n(1-p)-1}\, dF(y) \sim \frac{\sqrt{2\pi\, p(1-p)}}{n^{1/2}}\, e^{n\left( p\ln(p) + (1-p)\ln(1-p) \right)}\, \exp\Big( -u\,\Theta\, E[X \mid X > y_p] - v\, y_p - w\,\Theta\, p\, E[X \mid X \le y_p] \Big).$
Altogether
$\Omega_s(u/t,\, v,\, w/t;\, t) \to e^{-v y_p}\, q_0\big( u\, E[X \mid X > y_p] + w\, E[X \mid X \le y_p] \big).$
☐
Proof of Proposition 7. We first apply the substitution $\bar F(y) = z/t$, i.e., $y = U(t/z)$:
$\Omega_s(u/U(t),\, v/U(t),\, w/t;\, t) = \sum_{n=0}^s p_n(t) \Big( \int_0^\infty e^{-u x/U(t)}\, dF(x) \Big)^n + \frac{1}{s!} \int_0^t \Big( t \int_{U(t/z)}^\infty e^{-u x/U(t)}\, dF(x) \Big)^s e^{-v\, U(t/z)/U(t)}\, \frac{1}{t^{s+1}}\, Q_t^{(s+1)}\Big( \int_0^{U(t/z)} e^{-w x/t}\, dF(x) \Big)\, dz.$
Then we have
$\int_0^{U(t/z)} e^{-w x/t}\, dF(x) = F(U(t/z)) - \frac{w}{t} \int_0^{U(t/z)} x\, dF(x) + o\Big(\frac{1}{t}\Big) = 1 - \frac{z}{t} - \frac{w}{t}\,\mu + o\Big(\frac{1}{t}\Big) = 1 - \frac{1}{t}\big( z + w\mu + o(1) \big).$
Now use arguments analogous to those in the proof of Proposition 3 and note that
$E \exp(-u\, S(t)/t) = Q_t\big( E \exp(-u X/t) \big) = Q_t\Big( \int_0^\infty e^{-u x/t}\, dF(x) \Big) = Q_t\Big( 1 - \frac{u}{t} \int_0^\infty x\, dF(x) + o\Big(\frac{1}{t}\Big) \Big) \to q_0(u\mu).$
☐
Proof of Corollary 8. By Proposition 7
$E \exp(-v\,\Xi_s - u\,\Sigma_s) = \frac{1}{s!} \int_0^\infty z^s\, e^{-v z^{-\gamma}}\, q_{s+1}(z + u\mu)\, dz.$
Hence
$\frac{\partial}{\partial u} E \exp(-v\,\Xi_s - u\,\Sigma_s)\Big|_{u=0} = -\frac{\mu}{s!} \int_0^\infty z^s\, e^{-v z^{-\gamma}}\, q_{s+2}(z)\, dz$
and therefore
$-\int_0^\infty \frac{\partial}{\partial u} E \exp\big(-(u+v)\,\Xi_s - u\,\Sigma_s\big)\Big|_{u=0}\, dv = \frac{\mu}{s!} \int_0^\infty z^{s-\gamma}\, q_{s+2}(z)\, dz = \frac{\mu\, \Gamma(s-\gamma+1)}{s!}\, E\,\Theta^{1+\gamma}.$
By Proposition 7
$E \exp\big(-(u+v)\,\Xi_0 - u\,\Sigma_0\big) = \int_0^\infty e^{-(u+v) z^{-\gamma}}\, q_1(z + u\mu)\, dz,$
$\frac{\partial}{\partial v} E \exp\big(-(u+v)\,\Xi_0 - u\,\Sigma_0\big)\Big|_{v=0} = -\int_0^\infty z^{-\gamma}\, e^{-u z^{-\gamma}}\, q_1(z + u\mu)\, dz$
$-\int_0^\infty \frac{\partial}{\partial v} E \exp\big(-(u+v)\,\Xi_0 - u\,\Sigma_0\big)\Big|_{v=0}\, du = \int_0^\infty \int_0^\infty z^{-\gamma}\, e^{-u z^{-\gamma}}\, q_1(z + u\mu)\, du\, dz = \int_0^\infty \Big( \Big[ -e^{-u z^{-\gamma}}\, q_1(z + u\mu) \Big]_0^\infty - \mu \int_0^\infty e^{-u z^{-\gamma}}\, q_2(z + u\mu)\, du \Big)\, dz = \int_0^\infty q_1(z)\, dz - \mu \int_0^\infty \int_0^\infty e^{-u z^{-\gamma}}\, q_2(z + u\mu)\, du\, dz = 1 - \mu \int_0^\infty \int_0^\infty e^{-u z^{-\gamma}}\, q_2(z + u\mu)\, du\, dz.$
☐
Proof of Proposition 9. Condition on $N ( t ) = n$ to see that
$E_{N(t)=n} \exp\big(-u\,\Lambda_s(t)/(t\, p(t)\, U(p^{-1}(t))) - v\, X^*_{n(1-p(t))}/U(p^{-1}(t)) - w\,\Sigma_s(t)/t\big) = \frac{n!}{(n p(t))!\, (n(1-p(t)))!} \int_0^\infty \Big( \int_y^\infty e^{-u x/(t\, p(t)\, U(p^{-1}(t)))}\, dF(x) \Big)^{n p(t)} e^{-v y/U(p^{-1}(t))} \times \Big( \int_0^y e^{-w x/t}\, dF(x) \Big)^{n(1-p(t))-1}\, dF(y).$
Now apply the substitution $\bar F(y) = p(t)\, z$, i.e., $y = U(1/(p(t)z))$:
$\Omega_s\big(u/(t\, p(t)\, U(p^{-1}(t))),\, v/U(p^{-1}(t)),\, w/t;\, t\big)_{N(t)=n} = \frac{n!}{(n p(t))!\, (n(1-p(t)))!} \int_0^\infty \Big( \int_{U(1/(p(t)z))}^\infty e^{-u x/(t\, p(t)\, U(p^{-1}(t)))}\, dF(x) \Big)^{n p(t)} \times e^{-v\, U(1/(p(t)z))/U(p^{-1}(t))} \Big( \int_0^{U(1/(p(t)z))} e^{-w x/t}\, dF(x) \Big)^{n(1-p(t))-1} p(t)\, dz.$
As before,
$e^{-v\, U(1/(p(t)z))/U(p^{-1}(t))} \to e^{-v z^{-\gamma}}.$
For the factor with w, one sees that
$\int_0^{U(1/(p(t)z))} e^{-w x/t}\, dF(x) = F\big(U(1/(p(t)z))\big) - \frac{w}{t} \int_0^{U(1/(p(t)z))} x\, dF(x) + o\Big(\frac{1}{t}\Big) = 1 - p(t)z - \frac{w}{t}\,\mu + o\Big(\frac{1}{t}\Big)$
and then
$\Big( \int_0^{U(1/(p(t)z))} e^{-w x/t}\, dF(x) \Big)^{n(1-p(t))-1} = \exp\Big( n(1-p(t)) \ln\Big( 1 - p(t)z - \frac{w}{t}\,\mu + o\Big(\frac{1}{t}\Big) \Big)\Big) = \exp\Big( -n p(t) z - \frac{n}{t}\, w\mu + o\Big(\frac{n}{t}\Big) - n p(t)\ln\big(1 - p(t)z\big) \Big).$
For the factor with u, apply the substitution $\bar F(x) = p(t)\, \rho z$, i.e., $x = U(1/(p(t) z \rho))$:
$t \int_{U(t/z)}^\infty e^{-u x/U(t)}\, dF(x) = z \int_0^1 e^{-u\, U(t/(z\rho))/U(t)}\, d\rho \to z \int_0^1 e^{-u (z\rho)^{-\gamma}}\, d\rho = \frac{z}{\gamma} \int_1^\infty \frac{e^{-u z^{-\gamma} \eta}}{\eta^{1+1/\gamma}}\, d\eta$
$\int_{U(1/(p(t)z))}^\infty e^{-u x/(t\, p(t)\, U(p^{-1}(t)))}\, dF(x) = p(t)\, z \int_0^1 e^{-u\, U(p^{-1}(t)/(z\rho))/(t\, p(t)\, U(p^{-1}(t)))}\, d\rho = p(t)\, z \Big( 1 - \frac{1}{t\, p(t)} \int_0^1 u\, (z\rho)^{-\gamma}\, d\rho + \ldots \Big) = p(t)\, z \Big( 1 - \frac{1}{1-\gamma}\, \frac{z^{-\gamma}}{t\, p(t)}\, u + \ldots \Big)$
and then
$\Big( \int_{U(1/(p(t)z))}^\infty e^{-u x/(t\, p(t)\, U(p^{-1}(t)))}\, dF(x) \Big)^{n p(t)} = \exp\Big( n p(t) \ln p(t) + n p(t) \ln(z) - \frac{n}{t}\, \frac{u\, z^{-\gamma}}{1-\gamma} + \ldots \Big).$
The ratio of factorials coincides with (13). Also (14) applies here. Altogether
$\frac{n!}{(n p(t))!\, (n(1-p(t)))!} \int_0^\infty \Big( \int_{U(1/(p(t)z))}^\infty e^{-u x/(t\, p(t)\, U(p^{-1}(t)))}\, dF(x) \Big)^{n p(t)} e^{-v\, U(1/(p(t)z))/U(p^{-1}(t))} \times \Big( \int_0^{U(1/(p(t)z))} e^{-w x/t}\, dF(x) \Big)^{n(1-p(t))-1} p(t)\, dz \sim e^{-v}\, \exp\Big( -\frac{n}{t}\, w\mu + o\Big(\frac{n}{t}\Big) + n p(t)\ln(1-p(t)) + n p(t)\ln p(t) - \frac{n}{t}\, \frac{u}{1-\gamma} - n p(t) - \frac{1}{2}\ln p(t) + \ln p(t) \Big) \times \exp\Big( -(n p(t)+1/2)\ln p(t) - (n(1-p(t))+1/2)\ln(1-p(t)) \Big) \sim \exp\Big( -\frac{n}{t}\, \frac{u}{1-\gamma} - v - \frac{n}{t}\, w\mu \Big).$
☐
Proof of Proposition 10. Given $N ( t ) = n$
$E_{N(t)=n} \exp\big(-u\,\Lambda_s(t)/t - v\, X^*_{n(1-p)} - w\,\Sigma_s(t)/t\big) = \frac{n!}{(np)!\, (n(1-p)-1)!} \int_0^\infty \Big( \int_y^\infty e^{-u x/t}\, dF(x) \Big)^{np} e^{-v y} \Big( \int_0^y e^{-w x/t}\, dF(x) \Big)^{n(1-p)-1}\, dF(y).$
The part involving w coincides with the one in the proof of Proposition 6. For the factor involving u, we have
$\int_y^\infty e^{-u x/t}\, dF(x) = P(X > y) - \frac{u}{t} \int_y^\infty x\, dF(x) + o(t^{-1}) = \bar F(y)\Big( 1 - \frac{u}{t}\, E[X \mid X > y] \Big) + o(t^{-1}).$
The rest of the proof is completely analogous to the one for Proposition 6.     ☐
Proof of Proposition 11. Use the substitution $\bar F(y) = z/t$, i.e., $y = U(t/z)$:
$\Omega_s^{(\mu)}\big(u/U(t),\, v/U(t),\, w/U(t);\, t\big) = \sum_{n=0}^s p_n(t) \Big( \int_0^\infty e^{-u x/U(t)}\, dF(x) \Big)^n + \frac{1}{s!} \int_0^t \Big( t \int_{U(t/z)}^\infty e^{-u x/U(t)}\, dF(x) \Big)^s e^{-v\, U(t/z)/U(t)} \times \frac{1}{t^{s+1}}\, Q_t^{(s+1)}\Big( 1 - \frac{1}{t}\Big( t - t\, e^{w\mu/U(t)} \int_0^{U(t/z)} e^{-w x/U(t)}\, dF(x) \Big)\Big)\, dz.$
We then apply the substitution $\bar F(x) = \rho z/t$, i.e., $x = U(t/(z\rho))$:
$t - t\, e^{w\mu/U(t)} \int_0^{U(t/z)} e^{-w x/U(t)}\, dF(x) = t + t\, e^{w\mu/U(t)} \int_0^{U(t/z)} \Big( 1 - \frac{w x}{U(t)} - e^{-w x/U(t)} \Big)\, dF(x) - t\, e^{w\mu/U(t)}\, F(U(t/z)) + t\, e^{w\mu/U(t)} \int_0^{U(t/z)} \frac{w x}{U(t)}\, dF(x) = t\Big( 1 - e^{w\mu/U(t)}\Big( 1 - \frac{z}{t} \Big)\Big) + z \int_1^\infty \Big( 1 - \frac{w\, U(t/(z\rho))}{U(t)} - e^{-w\, U(t/(z\rho))/U(t)} \Big)\, d\rho + t\, \frac{w}{U(t)}\, e^{w\mu/U(t)} \Big( \mu - \frac{z^{1-\gamma}}{1-\gamma}\, \frac{U(t)}{t}\, (1+o(1)) \Big) = t\Big( 1 - \Big( 1 + \frac{w\mu}{U(t)} + \frac{1}{2}\Big(\frac{w\mu}{U(t)}\Big)^2 + o\Big(\frac{1}{U^2(t)}\Big) \Big)\Big( 1 - \frac{z}{t} \Big)\Big) + z \int_1^\infty \Big( 1 - \frac{w\, U(t/(z\rho))}{U(t)} - e^{-w\, U(t/(z\rho))/U(t)} \Big)\, d\rho + t\, \frac{w\mu}{U(t)} \Big( 1 + \frac{w\mu}{U(t)} + o\Big(\frac{1}{U(t)}\Big) \Big)\big( 1 + O(1/t) \big)$
$= z + z \int_1^\infty \Big( 1 - \frac{w\, U(t/(z\rho))}{U(t)} - e^{-w\, U(t/(z\rho))/U(t)} \Big)\, d\rho - \frac{z^{1-\gamma}}{1-\gamma}\, w + O\Big(\frac{1}{U(t)}\Big) + O\Big(\frac{t}{U^2(t)}\Big) \to z\Big( 1 + \int_1^\infty \big( 1 - w(z\rho)^{-\gamma} - e^{-w(z\rho)^{-\gamma}} \big)\, d\rho - \frac{z^{-\gamma}}{1-\gamma}\, w \Big) = z\Big( 1 + \frac{1}{\gamma} \int_0^1 \frac{1 - w z^{-\gamma}\eta - e^{-w z^{-\gamma}\eta}}{\eta^{1+1/\gamma}}\, d\eta - \frac{z^{-\gamma}}{1-\gamma}\, w \Big).$
Now use the same arguments as in the proof of Proposition     ☐
Proof of Corollary 12. Note that
$E \exp\big(-(u+v)\,\Xi_s - u\,\Sigma_s^{(\mu)}\big) = \frac{1}{s!} \int_0^\infty z^s\, e^{-v z^{-\gamma}}\, e^{-u z^{-\gamma}}\, q_{s+1}\Big( z\Big( 1 + \frac{1}{\gamma} \int_0^1 \frac{1 - u z^{-\gamma}\eta - e^{-u z^{-\gamma}\eta}}{\eta^{1+1/\gamma}}\, d\eta - \frac{z^{-\gamma}}{1-\gamma}\, u \Big)\Big)\, dz$
$\frac{\partial}{\partial u} E \exp\big(-(u+v)\,\Xi_s - u\,\Sigma_s^{(\mu)}\big)\Big|_{u=0} = -\frac{1}{s!} \int_0^\infty z^s\, z^{-\gamma}\, e^{-v z^{-\gamma}}\, q_{s+1}(z)\, dz + \frac{1}{(1-\gamma)\, s!} \int_0^\infty z^{1+s}\, z^{-\gamma}\, e^{-v z^{-\gamma}}\, q_{s+2}(z)\, dz, \qquad -\int_0^\infty \frac{\partial}{\partial u} E \exp\big(-(u+v)\,\Xi_s - u\,\Sigma_s^{(\mu)}\big)\Big|_{u=0}\, dv = 1 + \frac{s+1}{\gamma-1}.$
☐
Proof of Proposition 13. Use the substitution $F ¯ ( y ) = z / t$, i.e., $y = U ( t / z )$:
$\Omega_s^{(\mu)}\big(u/U(t),\, v/U(t),\, w/t^{1/2};\, t\big) = \sum_{n=0}^s p_n(t) \Big( \int_0^\infty e^{-u x/U(t)}\, dF(x) \Big)^n + \frac{1}{s!} \int_0^t \Big( t \int_{U(t/z)}^\infty e^{-u x/U(t)}\, dF(x) \Big)^s e^{-v\, U(t/z)/U(t)} \times \frac{1}{t^{s+1}}\, Q_t^{(s+1)}\Big( 1 - \frac{1}{t}\Big( t - t\, e^{w\mu/t^{1/2}} \int_0^{U(t/z)} e^{-w x/t^{1/2}}\, dF(x) \Big)\Big)\, dz.$
Then we have
$t - t\, e^{w\mu/t^{1/2}} \int_0^{U(t/z)} e^{-w x/t^{1/2}}\, dF(x) = t + t\, e^{w\mu/t^{1/2}} \int_0^{U(t/z)} \Big( 1 - \frac{w x}{t^{1/2}} + \frac{1}{2}\,\frac{(w x)^2}{t} - e^{-w x/t^{1/2}} \Big)\, dF(x) - t\, e^{w\mu/t^{1/2}}\, F(U(t/z)) + t\, e^{w\mu/t^{1/2}} \int_0^{U(t/z)} \frac{w x}{t^{1/2}}\, dF(x) - \frac{1}{2}\, e^{w\mu/t^{1/2}} \int_0^{U(t/z)} (w x)^2\, dF(x).$
First note that
$t\, e^{w\mu/t^{1/2}} \int_0^{U(t/z)} \Big( 1 - \frac{w x}{t^{1/2}} + \frac{1}{2}\,\frac{(w x)^2}{t} - e^{-w x/t^{1/2}} \Big)\, dF(x) = z\, e^{w\mu/t^{1/2}} \int_1^\infty \Big( 1 - \frac{w\, U(t/(z\rho))}{t^{1/2}} + \frac{1}{2}\Big( \frac{w\, U(t/(z\rho))}{t^{1/2}} \Big)^2 - e^{-w\, U(t/(z\rho))/t^{1/2}} \Big)\, d\rho \to 0.$
Secondly,
$t - t\, e^{w\mu/t^{1/2}}\, F(U(t/z)) + t\, e^{w\mu/t^{1/2}} \int_0^{U(t/z)} \frac{w x}{t^{1/2}}\, dF(x) - \frac{1}{2}\, e^{w\mu/t^{1/2}} \int_0^{U(t/z)} (w x)^2\, dF(x) = t\Big( 1 - e^{w\mu/t^{1/2}}\Big( 1 - \frac{z}{t} \Big)\Big) + t\, \frac{w}{t^{1/2}}\, e^{w\mu/t^{1/2}} \Big( \mu - U(t/z)\,\frac{z}{t} - \int_{U(t/z)}^\infty \bar F(x)\, dx \Big) - \frac{1}{2}\, w^2\, e^{w\mu/t^{1/2}} \Big( E X_1^2 - \int_{U(t/z)}^\infty x^2\, dF(x) \Big) = t\Big( 1 - e^{w\mu/t^{1/2}}\Big( 1 - \frac{z}{t} \Big)\Big) + t\, \frac{w}{t^{1/2}}\, e^{w\mu/t^{1/2}} \Big( \mu - \frac{1}{1-\gamma}\, U(t/z)\,\frac{z}{t}\,(1+o(1)) \Big) - \frac{1}{2}\, w^2\, e^{w\mu/t^{1/2}} \Big( E X_1^2 - \frac{1}{1-2\gamma}\, U(t/z)^2\,\frac{z}{t}\,(1+o(1)) \Big) = t\Big( 1 - \Big( 1 + \frac{w\mu}{t^{1/2}} + \frac{1}{2}\Big(\frac{w\mu}{t^{1/2}}\Big)^2 + o\Big(\frac{1}{t}\Big) \Big)\Big( 1 - \frac{z}{t} \Big)\Big) + t^{1/2}\,\mu w \Big( 1 + \frac{w\mu}{t^{1/2}} + \frac{1}{2}\Big(\frac{w\mu}{t^{1/2}}\Big)^2 + o\Big(\frac{1}{t}\Big) \Big)\Big( 1 - \frac{1}{\mu(1-\gamma)}\, U(t/z)\,\frac{z}{t}\,(1+o(1)) \Big) - \frac{1}{2}\, w^2 \Big( 1 + \frac{w\mu}{t^{1/2}} + \frac{1}{2}\Big(\frac{w\mu}{t^{1/2}}\Big)^2 + o\Big(\frac{1}{t}\Big) \Big)\Big( E X_1^2 - \frac{1}{1-2\gamma}\, U(t/z)^2\,\frac{z}{t}\,(1+o(1)) \Big)$
and it follows that
$t - t\, e^{w\mu/t^{1/2}}\, F(U(t/z)) + t\, e^{w\mu/t^{1/2}} \int_0^{U(t/z)} \frac{w x}{t^{1/2}}\, dF(x) - \frac{1}{2}\, e^{w\mu/t^{1/2}} \int_0^{U(t/z)} (w x)^2\, dF(x) = z - t^{1/2}\,\mu w - \frac{1}{2}(w\mu)^2 + o(1) + t^{1/2}\,\mu w + (w\mu)^2 + O\Big(\frac{U(t)}{t^{1/2}}\Big) - \frac{1}{2}\, w^2\, E X_1^2 + O\Big( \Big(\frac{U(t)}{t^{1/2}}\Big)^2 \Big) \to z - \frac{1}{2}\, w^2 \sigma^2,$
which completes the proof. Note that
$E \exp\big(-u\, (S(t) - N(t)\mu)/t^{1/2}\big) = Q_t\big( E \exp(-u (X-\mu)/t^{1/2}) \big) = Q_t\Big( \int_0^\infty e^{-u (x-\mu)/t^{1/2}}\, dF(x) \Big) = Q_t\Big( 1 + \frac{u^2}{2t}\,\sigma^2 + o\Big(\frac{1}{t}\Big) \Big) \to q_0\Big( -\frac{u^2}{2}\,\sigma^2 \Big) = E\, e^{u^2 \sigma^2 \Theta/2}.$
☐

## 7. Conclusions

In this paper we provided a fairly general collection of results on the joint asymptotic Laplace transforms of the normalised sums of the smallest and largest among regularly varying claims, when the length of the considered time interval tends to infinity. This extends several classical results in the field. The appropriate scaling of the different quantities is essential. We showed to what extent the type of the near mixed Poisson process counting the claim occurrences influences the limit results, and also identified quantities for which this influence is asymptotically negligible. We further related the dominance of the maximum term in such a random sum to another quantity that exhibits the effect of the tail index on the aggregate claim rather explicitly, namely the ratio of the sum of the squares of the claims to the square of the sum of the claims. The results make it possible to further quantify the effect of large claims on the total claim amount in an insurance portfolio, and could hence be helpful in the design of appropriate reinsurance programs for heavy-tailed claims with regularly varying tails. Particular emphasis is given to the case where the tail index exceeds 1, which corresponds to infinite-mean claims, a situation that is particularly relevant for catastrophe modelling.

## Acknowledgments

H.A. acknowledges support from the Swiss National Science Foundation Project 200021-124635/1.

## Author Contributions

All three authors contributed to all aspects of this work.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. H. Ammeter. “Note concerning the distribution function of the total loss excluding the largest individual claims.” Astin Bull. 3 (1964): 132–143.
2. S.A. Ladoucette, and J.L. Teugels. “Asymptotics for ratios with applications to reinsurance.” Methodol. Comput. Appl. Probab. 9 (2007): 225–242.
3. D.A. Darling. “The influence of the maximum term in the addition of independent random variables.” Trans. Am. Math. Soc. 73 (1952): 95–107.
4. A.A. Bobrov. “The growth of the maximal summand in sums of independent random variables.” Math. Sb. Kiev. Gosuniv. 5 (1954): 15–38.
5. T.L. Chow, and J.L. Teugels. “The sum and the maximum of i.i.d. random variables.” In Proceedings of the Second Prague Symposium on Asymptotic Statistics, Hradec Kralove, Czech Republic, 21–25 August 1978; Amsterdam, The Netherlands: North-Holland, 1979, pp. 81–92.
6. G.L. O’Brien. “A limit theorem for sample maxima and heavy branches in Galton-Watson trees.” J. Appl. Probab. 17 (1980): 539–545.
7. N.H. Bingham, and J.L. Teugels. “Conditions implying domains of attraction.” In Proceedings of the Sixth Conference on Probability Theory, Brasov, Romania, 10–15 September 1979; pp. 23–34.
8. N.H. Bingham, C.M. Goldie, and J.L. Teugels. Regular Variation. Cambridge, UK: Cambridge University Press, 1987.
9. P. Embrechts, C. Klüppelberg, and T. Mikosch. Modelling Extremal Events for Insurance and Finance. Berlin, Germany: Springer-Verlag, 1997.
10. T. Rolski, H. Schmidli, V. Schmidt, and J.L. Teugels. Stochastic Processes for Insurance and Finance. Chichester, UK: John Wiley & Sons, 1999.
11. S. Asmussen, and H. Albrecher. Ruin Probabilities, 2nd ed. Hackensack, NJ, USA: World Scientific, 2010.
12. J. Grandell. Mixed Poisson Processes, Monographs on Statistics and Applied Probability 77. London, UK: Chapman & Hall, 1997.
13. S.A. Ladoucette, and J.L. Teugels. “Reinsurance of large claims.” J. Comput. Appl. Math. 186 (2006): 163–190.
14. H. Albrecher, and J.L. Teugels. “Asymptotic analysis of measures of variation.” Theory Prob. Stat. 74 (2006): 1–9.
15. R. LePage, M. Woodroofe, and J. Zin. “Convergence to a stable distribution via order statistics.” Ann. Probab. 9 (1981): 624–632.
16. S.A. Ladoucette. “Asymptotic behavior of the moments of the ratio of the random sum of squares to the square of the random sum.” Stat. Probab. Lett. 77 (2007): 1021–1033.
17. H. Albrecher, K. Scheicher, and J.L. Teugels. “A combinatorial identity for a problem in asymptotic statistics.” Appl. Anal. Discret. Math. 3 (2009): 64–68.
18. P.J. Downey, and P.E. Wright. “The ratio of the extreme to the sum in a random sequence.” Extremes 10 (2007): 249–266.