Article

Extreme Behavior of Competing Risks with Random Sample Size

1 Department of Financial and Actuarial Mathematics, School of Mathematics and Physics, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
2 Academy of Pharmacy, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
3 Department of Mathematical Sciences, University of Liverpool, Liverpool L69 3BX, UK
4 College of Data Science, Jiaxing University, Jiaxing 314001, China
5 The Key Lab of Jiangsu Higher Education Institutions (under Construction), Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2024, 13(8), 568; https://doi.org/10.3390/axioms13080568
Submission received: 5 July 2024 / Revised: 17 August 2024 / Accepted: 20 August 2024 / Published: 21 August 2024
(This article belongs to the Special Issue Advances in Financial Mathematics)

Abstract

The advances in science and technology have led to vast amounts of complex and heterogeneous data from multiple sources of random sample length. This paper aims to investigate the extreme behavior of competing risks with random sample sizes. Two accelerated mixed types of stable distributions are obtained as the extreme limit laws of random sampling competing risks under linear and power normalizations, respectively. The theoretical findings are well illustrated by typical examples and numerical studies. The developed methodology and models provide new insights into modeling complex data across numerous fields.

1. Introduction

Extreme value theory (EVT) focuses on modeling extreme events within a sequence of a large number of independent and identically distributed (i.i.d.) random variables. Its applications span diverse fields such as finance, insurance, environmental science, and engineering [1,2]. Let $X_1, X_2, \ldots, X_n$ be a sequence of i.i.d. random variables with a common distribution function (d.f.) $F$, and denote by $M_n = \max(X_1, X_2, \ldots, X_n)$ the sample maxima. The risk $X \sim F$ is said to belong to the max-domain of attraction of $G$ (cf. Definition 3.3.1 in [2]) if there exist normalization constants $a_n > 0$, $b_n \in \mathbb{R}$ and a non-degenerate d.f. $G$ such that (with $\xrightarrow{d}$ denoting convergence in distribution, i.e., weak convergence)
$P\big(a_n (M_n - b_n) \le x\big) \xrightarrow{d} G(x) \quad \text{as } n \to \infty.$  (1)
The limit distribution $G$ is the so-called generalized extreme value (GEV) distribution, satisfying the stability relation $G^n(a_n x + b_n) = G(x)$, $x \in \mathbb{R}$, for every integer $n \ge 1$, where $a_n > 0$ and $b_n \in \mathbb{R}$ are suitable constants. The GEV distribution $G$ is thus one of the l-max stable laws, written as
$G(x; \gamma, \mu, \sigma) = \exp\Big\{-\Big[1 + \gamma\,\frac{x-\mu}{\sigma}\Big]_+^{-1/\gamma}\Big\},$  (2)
where $(x)_+ = \max(x, 0)$ denotes the positive part of $x \in \mathbb{R}$. We denote this by $F \in D_l(G)$. Here, the three parameters $\gamma, \mu \in \mathbb{R}$ and $\sigma > 0$ are called the shape, location, and scale parameters. In addition, the tail behavior of the potential risk $X$ is classified into the Fréchet, Gumbel, and Weibull domains, corresponding to $\gamma > 0$, $\gamma = 0$, and $\gamma < 0$, respectively [3].
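To make the convergence in Equation (1) concrete, the following minimal Python sketch (our own illustration, not part of the original study) checks the Fréchet case numerically: for Pareto($\alpha$) risks, the choices $a_n = n^{-1/\alpha}$ and $b_n = 0$ give $P(n^{-1/\alpha} M_n \le x) \to \Phi(x;\alpha) = \exp(-x^{-\alpha})$; the sample size, number of replications, and $\alpha = 2$ are arbitrary.

```python
import numpy as np

# Sketch: normalized maxima of Pareto(alpha) samples versus the Frechet limit
# Phi(x; alpha) = exp(-x^(-alpha)); parameters are illustrative only.
rng = np.random.default_rng(0)
alpha, n, reps = 2.0, 500, 10000

u = rng.uniform(size=(reps, n))
pareto = u ** (-1.0 / alpha)                  # Pareto(alpha) samples on (1, inf)
m_norm = pareto.max(axis=1) / n ** (1.0 / alpha)

for x in (0.5, 1.0, 2.0):
    print(f"x={x}: empirical={np.mean(m_norm <= x):.3f}, "
          f"Frechet={np.exp(-x ** (-alpha)):.3f}")
```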
Given the wide applications of EVT, numerous studies have extensively explored limit theory similar to Equation (1). Pantcheva [4] first extended the GEV distributions under linear normalization in Equation (1) to the power limit laws $D_p(H)$; that is, $X \sim F \in D_p(H)$ if there exist power normalization constants $\alpha_n, \beta_n > 0$ and a non-degenerate d.f. $H$ such that
$P\big(\alpha_n |M_n|^{\beta_n}\operatorname{sign}(M_n) \le x\big) \xrightarrow{d} H(x),$  (3)
with the sign function $\operatorname{sign}(x)$ equal to $1$, $-1$, and $0$ for $x$ positive, negative, and zero, respectively. It is well known that $H$ in Equation (3) is a p-max stable distribution (that is, for any integer $n \ge 1$, there exist suitable constants $\alpha_n, \beta_n > 0$ such that $H^n(\alpha_n |x|^{\beta_n}\operatorname{sign}(x)) = H(x)$). The limit distribution $H$ consists of six types of distributions, which can be written uniformly in the form of Equation (4) below [5]: for some constants $\mu \in \mathbb{R}$, $\sigma > 0$, and $\gamma \in \mathbb{R}$ (recall that $G$ is the GEV defined in Equation (2)),
$H(x; \gamma, \mu, \sigma) = \begin{cases} G(\log x; \gamma, \mu, \sigma), & \text{if the support is included in } (0, \infty), \\ G(-\log(-x); \gamma, \mu, \sigma), & \text{otherwise}. \end{cases}$  (4)
In what follows, we denote this by $F \in D_p(H)$. We refer to [6] for exponential normalization, with generalized Pareto families of asymmetric distributions as its limit laws, which further extends the p-max stable laws under power normalization.
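Analogously, a small sketch of the power normalization in Equation (3) (our own illustration, with Pareto($\alpha$) risks and arbitrary parameter choices): taking $\alpha_n = 1/n$ and $\beta_n = \alpha$, the power-normalized maxima $M_n^{\alpha}/n$ converge to the unit Fréchet law $\Phi(x; 1) = \exp(-1/x)$.

```python
import numpy as np

# Sketch: power-normalized Pareto(alpha) maxima M_n^alpha / n versus the unit
# Frechet limit exp(-1/x); parameters are illustrative only.
rng = np.random.default_rng(1)
alpha, n, reps = 3.0, 500, 10000

pareto = rng.uniform(size=(reps, n)) ** (-1.0 / alpha)
p_norm = pareto.max(axis=1) ** alpha / n

for x in (0.5, 1.0, 3.0):
    print(f"x={x}: empirical={np.mean(p_norm <= x):.3f}, "
          f"limit={np.exp(-1.0 / x):.3f}")
```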
Recently, Cao and Zhang [7] and Hu et al. [8] explored the limit behavior of extremes under linear and power normalization in the scenario of competing risks, motivated by the practical need to aggregate multiple heterogeneous sources of information in terms of geography, environment, and socioeconomics [9]. Namely, the studied sample maxima $M_n$ are obtained from $k$ heterogeneous subsamples $X_{j,i}$, $i = 1, \ldots, n_j$, from sources/populations $X_j \sim F_j$, $j = 1, \ldots, k$. Such modeling is desirable in the big data era due to the complexity of real applications [10,11]. The limit behavior of $M_n$ obtained from multiple sources is the so-called limit theory of competing risks since
$M_n = \max_{1 \le j \le k} M_{j, n_j} \quad \text{with} \quad M_{j, n_j} = \max_{1 \le i \le n_j} X_{j,i}.$  (5)
Clearly, the limit laws obtained for competing risks specified in Equation (5), which extend the classical extreme value theory in Equations (1) and (3), are the so-called accelerated l-max stable and accelerated p-max stable distributions (see Theorem 2.1 in Cao and Zhang [7] and Theorem 2.1 in Hu et al. [8]). Note that the key condition determining the accelerated limit theory is the interplay between the sample lengths and the tail behavior of the multiple competing risks.
As advances in science and technology have led to vast amounts of complex and heterogeneous data from multiple sources with random sample lengths, a natural question is how the extreme law of competing risks varies under uncertainty in the sample sizes involved. This is very common in environmental and financial fields; for instance, the potential extreme claim size among $k$ insured policyholders, each holding insurance for an $n_j$-day period with heterogeneous claim risk and a total of $\nu_{n_j}$ claims [12]. Another example is the extreme daily precipitation among $k$ regions, where each region is exposed to a $\nu_{n_j}$-day wet period with different extreme precipitation risks [13]. Although the study of such heterogeneous risks under a random sampling scenario is key to risk management for relevant decision-makers, their extreme behavior remains a significant and unresolved issue. This paper aims to establish the limit theory of competing risks under both linear and power normalization when the sample sizes are random rather than deterministic.
Many authors have refined the extreme limit theory of sample maxima from a single risk $X$ under linear/power normalization with random sample size $\nu_n$, for two common cases:
  • Case (I): the random sample size is independent of the basic risk. The random sample $X_1, \ldots, X_n \sim X$ is supposed to be independent of the sample size index $\nu_n$, and $\nu_n / n$ is assumed to converge weakly to a non-degenerate distribution function [14,15,16,17];
  • Case (II): the random sample size is not independent of the basic risk. There exists a positive-valued random variable $V$ such that $\nu_n / n$ converges to $V$ in probability, allowing interdependence between the basic risk and the sample size index $\nu_n$ [14,18,19].
The limit behaviors of extremes with random sample sizes have been well investigated for a range of extensions, including sample minima [20], extreme order statistics under power normalization [19,21], stationary Gaussian processes [16], stationary chi-processes [17], and recent contributions on multivariate extreme behavior [22].
This paper focuses on the limit behavior of competing extremes (maxima of maxima defined by Equation (5) and minima of minima in Equation (14)) under linear and power normalization, extending the accelerated l-max and p-max stable distributions to a mixed form, as we allow the sample size sequence $\{\nu_n, n \ge 1\}$ to be random, satisfying the conditions indicated in Cases (I) and (II) above. The theoretical results are illustrated by typical examples with random sample sizes $\nu_n$ following time-shifted Poisson and (negative) binomial distributions (cf. Section 3), as well as by numerical studies (cf. Section 4). Our theoretical findings are expected to find applications in finance, insurance, and hydrology [23,24]. The developed methodology and models provide new insights into modeling complex data across numerous fields.
The remainder of the paper is organized as follows. Section 2 presents the main results for maxima of maxima under both linear and power normalization with random sample sizes. Extensions to competing minima and typical examples are given in Section 3. Numerical studies illustrating our theoretical findings are presented in Section 4. The proofs of all theoretical results are deferred to Appendix A.

2. Main Results

Notation. Recall that the competing risks defined in Equation (5) are generated from $k$ independent samples of sizes $n_j$ from the risks $X_j \sim F_j$, $1 \le j \le k$. Let $\{\nu_{n_j}, n_j \ge 1\}$, $1 \le j \le k$, be mutually independent sequences of positive integer-valued random variables standing for the random sample sizes. Similar to Equation (5), we write
$M_{\nu_n} = \max_{1 \le j \le k} M_{j, \nu_{n_j}} \quad \text{with} \quad M_{j, \nu_{n_j}} = \max_{1 \le i \le \nu_{n_j}} X_{j,i}.$  (6)
Here, $\nu_n := \sum_{1 \le j \le k} \nu_{n_j}$ and $n = \sum_{j=1}^{k} n_j$. Throughout this paper, for any risk $X$ with cumulative distribution function (cdf) $F$, we write $\underline{F}(x) := 1 - \lim_{t \uparrow -x} F(t)$, the cdf of $-X$. Further, we denote by $\xrightarrow{p}$ convergence in probability, and all limits are taken as $\min_{1 \le j \le k} n_j \to \infty$.
To simplify the notation, in what follows, we consider competing risks from two sources, namely k = 2 . We will present the limit behavior of M ν n for Cases I and II in Section 2.1 and Section 2.2, respectively.

2.1. Limit Theorem for Case (I) with Independent Sample Size

In this section, we present our main results concerning the limit behavior of competing risks under linear and power normalization, as detailed in Theorems 1 and 2, respectively. We focus on the following random sample size scenario: assume that there exist $k$ positive random variables $V_j$, $1 \le j \le k$, such that
$\frac{\nu_{n_j}}{n_j} \xrightarrow{d} V_j.$  (7)
Condition (7) is commonly used for the limit behavior of extremes with random sample sizes [16,21,25]. In general, one may consider a random stopping sampling process [26]. Typical examples of random sample sizes satisfying Equation (7) are given in Section 3.2 (cf. Examples 1–3); see also Peng et al. [15] for more examples. In addition, it is worth noting that, under condition (7), the $V_j$'s inherit the mutual independence of the random sample size sequences $\{\nu_{n_j}, n_j \ge 1\}$, $j = 1, 2, \ldots, k$.
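As a quick numerical illustration of condition (7) (our own sketch, with an unshifted geometric sample size as an assumed example; cf. Example 2 below): if $\nu_n$ is geometric with success probability $1/n$, then $\nu_n/n$ converges in distribution to a standard exponential variable $V$.

```python
import numpy as np

# Sketch: for a geometric sample size with mean n, the ratio nu_n / n is
# approximately standard exponential, illustrating condition (7).
rng = np.random.default_rng(2)
n, reps = 500, 50000

nu = rng.geometric(p=1.0 / n, size=reps)      # support {1, 2, ...}, mean n
ratio = nu / n

for z in (0.5, 1.0, 2.0):
    print(f"z={z}: P(nu_n/n <= z)={np.mean(ratio <= z):.3f}, "
          f"Exp(1) cdf={1.0 - np.exp(-z):.3f}")
```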
1. Limit behavior of $M_{\nu_n}$ under linear normalization. Clearly, for $F_j \in D_l(G_j)$, there exist constants $a_{j,n_j} > 0$ and $b_{j,n_j} \in \mathbb{R}$ such that $M_{j,n_j}$ satisfies Equation (1) as $n_j \to \infty$. It follows further from Theorem 6.2.1 in Galambos [18] (Ch. 6, p. 330) that, under condition (7), we can find a subsequence along which $\nu_{n_j}/n_j \xrightarrow{p} V_j$ by the Skorokhod representation theorem, and $a_{j,n_j}(M_{j,\nu_{n_j}} - b_{j,n_j}) \xrightarrow{d} L_j$, a mixed GEV distribution defined by
$L_j(x) = \int_0^\infty G_j^z(x)\, dP(V_j \le z).$  (8)
Below, we will show in Theorem 1 that, under condition (7), the limit theorem for the competing extremes $M_{\nu_n}$ holds with an accelerated mixed GEV distribution, which can be written as a product of the mixed GEV distributions $L_j$.
Theorem 1.
Let $M_{\nu_n}$ be given by Equation (6) with basic risks $X_j \sim F_j$, $j = 1, 2$, and mutually independent random sample sizes $\nu_{n_1}, \nu_{n_2}$. Assume that conditions (1) and (7) hold for the sample maxima $M_{j,n_j}$, with suitable constants $a_{j,n_j}, b_{j,n_j}$, and for the random sample sizes $\nu_{n_j}$, $j = 1, 2$. Suppose further that there exist two constants $a \in [0, \infty]$ and $b \in [-\infty, \infty]$ such that
$a_n := \frac{a_{1,n_1}}{a_{2,n_2}} \to a, \qquad b_n := a_{1,n_1}\,(b_{2,n_2} - b_{1,n_1}) \to b$  (9)
as $\min(n_1, n_2) \to \infty$.
(i). 
If Equation (9) holds with $a > 0$ and $b < \infty$, then
$P\big(a_{2,n_2}(M_{\nu_n} - b_{2,n_2}) \le x\big) \xrightarrow{d} L_1(a x + b)\, L_2(x).$
(ii). 
If Equation (9) holds with $a = 0$ and $b = \infty$, then
$P\big(a_{2,n_2}(M_{\nu_n} - b_{2,n_2}) \le x\big) \xrightarrow{d} L_2(x).$
Here,  L j , j = 1 , 2  are given by Equation (8).
Remark 1. (a) Theorem 1 reduces to Theorem 2.1 of Cao and Zhang [7], the limit theorem for competing maxima with deterministic sample sizes, if all $V_j$'s are degenerate at one (that is, $\nu_{n_j}/n_j \to 1$ in probability).
(b) In addition, the two results in (i) and (ii) correspond, respectively, to the case of two competing risks with comparable tails under a balanced sampling process and to the dominated case.
(c) In general, our results introduce a fairly large class of accelerated mixed GEV distributions, namely products of mixed GEV distributions of the form in Equation (8).
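To illustrate the mixed GEV distribution in Equation (8), the following sketch (our own, assuming $V \sim \mathrm{Exp}(1)$ and arbitrary GEV parameters) evaluates $L(x) = \int_0^\infty G^z(x)\,dP(V \le z)$ by Monte Carlo and compares it with the closed form $1/(1 - \log G(x))$, which reappears in Example 2.

```python
import numpy as np

# Sketch: Monte Carlo evaluation of the exponential mixture of a GEV cdf,
# L(x) = E[G(x)^V] with V ~ Exp(1), against the closed form 1/(1 - log G(x)).
def gev_cdf(x, gamma=0.5, mu=0.0, sigma=1.0):
    t = np.maximum(1.0 + gamma * (x - mu) / sigma, 0.0) ** (-1.0 / gamma)
    return np.exp(-t)

rng = np.random.default_rng(3)
v = rng.exponential(size=200000)              # Monte Carlo draws of V

for x in (0.5, 1.0, 3.0):
    g = gev_cdf(x)
    print(f"x={x}: Monte Carlo={np.mean(g ** v):.4f}, "
          f"closed form={1.0 / (1.0 - np.log(g)):.4f}")
```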
2. Limit behavior of $M_{\nu_n}$ under power normalization. Clearly, for $F_j \in D_p(H_j)$, there exist $\alpha_{j,n_j}, \beta_{j,n_j} > 0$ such that $M_{j,n_j}$ satisfies Equation (3) as $n_j \to \infty$. We will show in Theorem 2 that, under condition (7), the limit theorem for the competing extremes $M_{\nu_n}$ holds with an accelerated mixed distribution $P$, which can be written as the product of the $P_j$'s given below:
$P_j(x) = \int_0^\infty (H_j(x))^z\, dP(V_j \le z).$  (10)
Here, $H_j$, $j = 1, 2$, are of the same p-type as $H$ defined in Equation (4).
Theorem 2.
Let $M_{\nu_n}$ be given by Equation (6) with basic risks $X_j \sim F_j$, $j = 1, 2$, and mutually independent random sample sizes $\nu_{n_1}, \nu_{n_2}$. Assume that conditions (3) and (7) hold for the sample maxima $M_{j,n_j}$, with suitable constants $\alpha_{j,n_j}, \beta_{j,n_j} > 0$, and for the random sample sizes $\nu_{n_j}$, $j = 1, 2$. Suppose further that there exist two non-negative constants $\alpha$ and $\beta$ such that
$\alpha_n := \alpha_{1,n_1}\,\alpha_{2,n_2}^{-\beta_{1,n_1}/\beta_{2,n_2}} \to \alpha, \qquad \beta_n := \frac{\beta_{1,n_1}}{\beta_{2,n_2}} \to \beta$  (11)
as $\min(n_1, n_2) \to \infty$. The following claims hold with the mixed p-max stable distributions $P_j$, $j = 1, 2$, defined by Equation (10).
(i). 
If condition (11) holds with two positive constants $\alpha$ and $\beta$, then
$P\big(\alpha_{2,n_2}|M_{\nu_n}|^{\beta_{2,n_2}}\operatorname{sign}(M_{\nu_n}) \le x\big) \xrightarrow{d} P_1\big(\alpha |x|^{\beta}\operatorname{sign}(x)\big)\, P_2(x).$  (12)
(ii)
The following limit distribution holds:
$P\big(\alpha_{2,n_2}|M_{\nu_n}|^{\beta_{2,n_2}}\operatorname{sign}(M_{\nu_n}) \le x\big) \xrightarrow{d} P_2(x),$
provided that one of the following four conditions is satisfied (notation: $x_1^* := \sup\{x : H_1(x) < 1\}$, the right endpoint of $H_1$):
(a). 
When $H_2$ is of the same p-type as $G(\log x; \gamma, \mu, \sigma)$, and $H_1$ is of the same p-type as $G(-\log(-x); \gamma, \mu, \sigma)$.
(b). 
When $H_2$ is of the same p-type as $G(\log x; \gamma, \mu, \sigma)$, and $H_1$ is of the same p-type as $G(\log x; \gamma, \mu, \sigma)$ with $\gamma \ge 0$. In addition, Equation (11) holds with $\alpha = \infty$ and $0 \le \beta < \infty$.
(c). 
When $H_2$ is of the same p-type as $G(\log x; \gamma, \mu, \sigma)$, and $H_1$ is of the same p-type as $G(\log x; \gamma, \mu, \sigma)$ with $\gamma < 0$. In addition, Equation (11) holds with $x_1^* \le \alpha < \infty$ and $\beta = 0$, or $\alpha = \infty$ and $0 \le \beta < \infty$.
(d). 
When both $H_1$ and $H_2$ are of the same p-type as $G(-\log(-x); \gamma, \mu, \sigma)$. In addition, Equation (11) holds with $0 \le \alpha \le x_1^*$ and $\beta = 0$, or $\alpha = 0$ and $0 \le \beta < \infty$.
Remark 2. (a) Theorem 2 reduces to the extreme limit behavior of competing risks with deterministic sample sizes, which was extensively discussed in Hu et al. [8], if all $V_j$'s are degenerate at one. Random sample size scenarios are very common in practice. For instance, in the physics and insurance fields, $\nu_n$ may follow a shifted Poisson distribution with mean $\lambda_n$ such that $\lambda_n / n \to 1$. For more examples, see Example 1 below and Remark 2.2 in Abd Elgawad et al. [27].
(b) In addition, the two results in (i) and (ii) correspond to the two cases $\alpha\beta > 0$ and $\alpha\beta = 0$ in condition (11). These results illustrate the limit behavior of two competing risks with comparable tails under a balanced sampling process and the dominated case, respectively.
(c) Theorem 2 extends Theorem 2.1 of Barakat and Nigm [19], which concerns a non-competing risk scenario where the extremes come from a single source. In general, the family of accelerated mixed p-max stable distributions is a larger class including those of the form in Equation (10).

2.2. Limit Theorem for Case (II) with Non-Independent Sample Size

In this section, we focus on Case (II), where we relax the independence condition between the basic risk and the random sample size. On the other hand, we need to strengthen the convergence in distribution to convergence in probability, as stated below. Assume that there exist positive random variables $V_j$, $j = 1, 2$, such that
$\frac{\nu_{n_j}}{n_j} \xrightarrow{p} V_j, \quad j = 1, 2.$  (13)
Theorem 3.
Let $M_{\nu_n}$ be given by Equation (6) with two random sampling maxima from two independent pairs of basic risk and sample size $(X_j, \nu_{n_j})$, $j = 1, 2$. Suppose that conditions (9) and (13) hold for $X_j \sim F_j \in D_l(G_j)$, with Equation (1) satisfied for $M_{j,n_j}$ and $a_{j,n_j}, b_{j,n_j}$, $j = 1, 2$. Then the claims of Theorem 1 hold.
Theorem 4.
Let $M_{\nu_n}$ be given by Equation (6) with two random sampling maxima from two independent pairs of basic risk and sample size $(X_j, \nu_{n_j})$, $j = 1, 2$. Suppose that conditions (11) and (13) hold for $X_j \sim F_j \in D_p(H_j)$, with Equation (3) satisfied for $M_{j,n_j}$ and $\alpha_{j,n_j}, \beta_{j,n_j}$, $j = 1, 2$. Then the claims of Theorem 2 hold.
Remark 3.
Recalling that $G$ and $H$ given by Equations (2) and (4) are the so-called l-max stable and p-max stable distributions, we call $L$ and $P$ the accelerated mixed l-max stable and accelerated mixed p-max stable distributions if they can be written as products of the $L_j$'s and the $P_j$'s, respectively. Thus, the limit laws obtained in Theorems 3 and 4 correspond to fairly large families of accelerated mixed l-max stable and accelerated mixed p-max stable distributions, respectively.

3. Extension and Examples

In this section, we first extend our results to competing minima risks in Section 3.1 and then present typical examples of random sample sizes with specific mixed extreme value distributions in Section 3.2.

3.1. Extreme Limit Theory for Competing Minima Risks

In some practical applications, such as lifetime analysis in reliability studies or race times of athletes in sports studies, extreme minima play an important role. As we will see in Corollaries 1 and 2 below, analogous claims hold for competing risks with random sample sizes in terms of minima of minima. Essentially, while the right tail behavior of $X_j \sim F_j$ is captured by its sample maxima $M_{j,n_j}$, the left tail behavior of $X_j$ (equivalently, the right tail of $-X_j \sim \underline{F}_j$) is captured by the sample minima $\underline{m}_{j,n_j} = \min(X_{j,1}, \ldots, X_{j,n_j})$. Indeed, the left tail of $X$ can be obtained through the study of the right tail of $-X$ since (cf. Theorem 1.8.3 in Leadbetter et al. [1] and Grigelionis [28])
$\underline{m}_n = \min_{1 \le j \le k} \underline{m}_{j, n_j} = -\max_{1 \le j \le k} M_{j, n_j} = -M_n,$  (14)
where the maxima on the right-hand side are formed from the negated risks $-X_{j,i}$.
Note that the condition $\underline{F}_j \in D_l(G_j)$, i.e., that there exist constants $a_{j,n_j} > 0$, $b_{j,n_j} \in \mathbb{R}$ such that the maxima of the negated sample satisfy Equation (1) as $n_j \to \infty$, is equivalent to
$P\big(a_{j,n_j}(\underline{m}_{j,n_j} + b_{j,n_j}) \le x\big) \xrightarrow{d} \underline{G}_j(x),$
where $\underline{G}_j(x) = 1 - G_j(-x)$ and $G_j$ is of the same l-type as the GEV distribution given in Equation (2).
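As a small numerical illustration of the min-max duality behind Equation (14) (our own sketch, with standard exponential risks and a single source $k = 1$ as assumed examples): the sample minimum equals minus the maximum of the negated sample, and for $X \sim \mathrm{Exp}(1)$ the normalized minima satisfy $P(n\,\underline{m}_n \le x) \to 1 - e^{-x}$.

```python
import numpy as np

# Sketch: min-max duality and the minima limit for i.i.d. Exp(1) risks:
# min(X_1,...,X_n) = -max(-X_1,...,-X_n) and n * min is Exp(1) distributed.
rng = np.random.default_rng(4)
n, reps = 500, 10000

x_sample = rng.exponential(size=(reps, n))
mins = x_sample.min(axis=1)
assert np.allclose(mins, -np.max(-x_sample, axis=1))   # duality, k = 1 case

for x in (0.5, 1.0, 2.0):
    print(f"x={x}: P(n*min <= x)={np.mean(n * mins <= x):.3f}, "
          f"limit={1.0 - np.exp(-x):.3f}")
```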
Corollary 1.
Suppose the same conditions as for Theorems 1 or 3 are satisfied.
(i). 
If Equation (9) holds with $a > 0$ and $b < \infty$, then
$P\big(a_{2,n_2}(\underline{m}_{\nu_n} + b_{2,n_2}) \le x\big) \xrightarrow{d} 1 - L_1(-a x + b)\, L_2(-x).$
(ii). 
If Equation (9) holds with $a = 0$ and $b = \infty$, then
$P\big(a_{2,n_2}(\underline{m}_{\nu_n} + b_{2,n_2}) \le x\big) \xrightarrow{d} 1 - L_2(-x).$
Here,  L j , j = 1 , 2  are given by Equation (8).
Noting that $\alpha_n |\underline{m}_n|^{\beta_n}\operatorname{sign}(\underline{m}_n) = -\alpha_n |M_n|^{\beta_n}\operatorname{sign}(M_n)$, with $M_n$ the maxima of the negated risks, the following corollary holds for the power normalized minima of minima specified in Equation (14).
Corollary 2.
Suppose that the same conditions as for Theorem 2 or Theorem 4 are satisfied. The following claims hold, where $P_j$, $j = 1, 2$, are the mixed p-max stable distributions defined in Equation (10).
(i). 
If condition (11) holds with two positive constants $\alpha$ and $\beta$, then
$P\big(\alpha_{2,n_2}|\underline{m}_{\nu_n}|^{\beta_{2,n_2}}\operatorname{sign}(\underline{m}_{\nu_n}) \le x\big) \xrightarrow{d} 1 - P_1\big(-\alpha |x|^{\beta}\operatorname{sign}(x)\big)\, P_2(-x).$
(ii). 
The following limit distribution holds:
$P\big(\alpha_{2,n_2}|\underline{m}_{\nu_n}|^{\beta_{2,n_2}}\operatorname{sign}(\underline{m}_{\nu_n}) \le x\big) \xrightarrow{d} 1 - P_2(-x),$
provided that one of the conditions (a)∼(d) in Theorem 2 holds.
Remark 4.
Recalling that $\underline{G}$ and $\underline{H}$ are the so-called l-min stable and p-min stable distributions [28] (Corollary 1), where $G$ and $H$ are given by Equations (2) and (4), we call $\underline{L}$ and $\underline{P}$ the mixed l-min stable and mixed p-min stable distributions if they can be written as products of the $\underline{L}_j$'s and the $\underline{P}_j$'s, respectively.

3.2. Examples

Below, we give three examples (Examples 1–3) to illustrate our main results obtained in Theorems 1 and 2. Specifically, we consider random sample sizes following, respectively, time-shifted versions of the Poisson and binomial distributions, as well as geometric and negative binomial distributions, with the relevant parameters satisfying certain average stability conditions [15].
Example 1
(Time-shifted binomial/Poisson distributed random sample size). Let $\nu_n$ follow a time-shifted binomial distribution with probability mass function (pmf) given by
$P(\nu_n = k + m) = \binom{l_n}{k}\, p_n^k\, q_n^{\,l_n - k}, \qquad k + m = m, m + 1, \ldots, l_n + m,$
where $q_n = 1 - p_n$.
If $l_n p_n / n \to 1$, then $\nu_n / n$ converges in probability to one. Similarly, for a time-shifted Poisson distributed $\nu_n \stackrel{d}{=} m + \mathrm{Poisson}(\lambda_n)$ with $\lambda_n / n \to 1$, $\nu_n / n$ converges in probability to 1 (Lemma 4.3 in [15]). For the random sample sizes above, the claims of Theorems 1 and 2 reduce to the deterministic sample size cases; see Remarks 1(a) and 2(a).
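The concentration of the shifted Poisson sample size used in Example 1 can be checked numerically; the following sketch (ours, with shift $m = 5$ and $\lambda_n = n$ as assumed choices) shows that $\nu_n/n$ concentrates around one as $n$ grows.

```python
import numpy as np

# Sketch: for nu_n = m + Poisson(n), the ratio nu_n / n concentrates at 1,
# so the random-sample-size limits reduce to the deterministic ones.
rng = np.random.default_rng(5)
m = 5
for n in (100, 1000, 10000):
    nu = m + rng.poisson(lam=n, size=50000)
    dev = np.mean(np.abs(nu / n - 1.0) > 0.05)
    print(f"n={n}: P(|nu_n/n - 1| > 0.05) = {dev:.4f}")
```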
Example 2
(Time-shifted geometric distributed sample size). In the case of linear normalization, let $G$ be one of the three l-types of distributions, say $G(x; \gamma, \mu, \sigma)$ as specified in Equation (2). Suppose that the random sample size $\nu_{n_j}$ follows a geometric distribution with mean $n_j$. We then have $\nu_{n_j}/n_j \to V$ in distribution, with the random scale $V$ following a standard exponential distribution. Consequently, Theorem 1 holds with an accelerated mixed l-max stable distribution, namely the product of mixed l-max stable distributions of the form $L$ described below:
$L(x; \gamma, \mu, \sigma) = \int_0^\infty (G(x; \gamma, \mu, \sigma))^z\, d(1 - e^{-z}) = \int_0^\infty \exp\Big\{-z\Big[\Big(1 + \gamma\,\frac{x - \mu}{\sigma}\Big)_+^{-1/\gamma} + 1\Big]\Big\}\, dz = \Big[1 + \Big(1 + \gamma\,\frac{x - \mu}{\sigma}\Big)_+^{-1/\gamma}\Big]^{-1},$
which is interpreted as its limit $\big[1 + \exp\big(-\frac{x - \mu}{\sigma}\big)\big]^{-1}$ for $\gamma = 0$.
Similarly, for the power normalization case, recalling that $H$ specified in Equation (4) is the p-max type of limit distribution, Theorem 2 follows with an accelerated mixed p-max stable distribution, the product of mixed p-max stable distributions of the form $P$ described below (cf. Example 2.1 in Barakat and Nigm [19]).
$P(x; \gamma, \mu, \sigma) = \int_0^\infty (H(x; \gamma, \mu, \sigma))^z\, d(1 - e^{-z}) = \begin{cases} \int_0^\infty G^z(\log x; \gamma, \mu, \sigma)\, e^{-z}\, dz, & \text{if the support is included in } (0, \infty), \\ \int_0^\infty G^z(-\log(-x); \gamma, \mu, \sigma)\, e^{-z}\, dz, & \text{otherwise}, \end{cases}$
$\phantom{P(x; \gamma, \mu, \sigma)} = \begin{cases} \Big[1 + \Big(1 + \gamma\,\frac{\log x - \mu}{\sigma}\Big)_+^{-1/\gamma}\Big]^{-1}, & \text{if the support is included in } (0, \infty), \\ \Big[1 + \Big(1 + \gamma\,\frac{-\log(-x) - \mu}{\sigma}\Big)_+^{-1/\gamma}\Big]^{-1}, & \text{otherwise}. \end{cases}$
Example 3
(Time-shifted negative binomial distributed sample size). As an extension of the $m$-shifted geometric distribution, we consider a time-shifted negative binomial distributed sample size $\nu_n$ with $r \ge 1$, given by
$P(\nu_n = k) = \binom{k - m - 1}{r - 1}\, p_n^{r}\, (1 - p_n)^{k - r - m}, \qquad k = r + m, r + m + 1, \ldots$
It follows from Lemma 4.1 of Peng et al. [15] that, as $n p_n \to 1$, $\nu_n / n$ converges in distribution to $V$, a gamma random variable with shape parameter $r$ and scale parameter 1; i.e., the cdf of $V$ is given by
$F_V(z) = P(V \le z) = \int_0^z \frac{1}{\Gamma(r)}\, t^{r-1} e^{-t}\, dt, \qquad z > 0,$
where Γ ( · ) denotes the gamma function. It follows by Theorem 1 that
$L(x; \gamma, \mu, \sigma) = \int_0^\infty (G(x; \gamma, \mu, \sigma))^z\, dF_V(z) = \int_0^\infty \frac{1}{\Gamma(r)}\, z^{r-1} \exp\Big\{-z\Big[\Big(1 + \gamma\,\frac{x - \mu}{\sigma}\Big)_+^{-1/\gamma} + 1\Big]\Big\}\, dz = \Big[1 + \Big(1 + \gamma\,\frac{x - \mu}{\sigma}\Big)_+^{-1/\gamma}\Big]^{-r}.$
Similarly, Theorem 2 follows with the accelerated mixed p-max stable distributions, which are products of cdfs of form P given below.
$P(x; \gamma, \mu, \sigma) = \begin{cases} \Big[1 + \Big(1 + \gamma\,\frac{\log x - \mu}{\sigma}\Big)_+^{-1/\gamma}\Big]^{-r}, & \text{if the support is included in } (0, \infty), \\ \Big[1 + \Big(1 + \gamma\,\frac{-\log(-x) - \mu}{\sigma}\Big)_+^{-1/\gamma}\Big]^{-r}, & \text{otherwise}. \end{cases}$
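A single-risk sanity check of Example 3 can be run as follows (our own sketch; the Pareto tail index, the nominal size $n$, the shift $m = 5$, and $r = 2$ are assumed choices): with $X \sim \mathrm{Pareto}(\alpha)$ and an $m$-shifted negative binomial sample size with success probability $1/n$, the normalized maximum $n^{-1/\alpha} M_{\nu_n}$ is approximately distributed as $\tilde{L}(x;\alpha) = (1 + x^{-\alpha})^{-r}$, the mixed limit appearing again in Section 4.

```python
import numpy as np

# Sketch: Pareto maxima with a negative binomial (r = 2) random sample size;
# the normalized maxima should follow (1 + x^(-alpha))^(-r) approximately.
rng = np.random.default_rng(6)
alpha, n, m, r, reps = 2.0, 200, 5, 2, 10000

# nu_n = m + (number of trials until the r-th success), success prob 1/n
nu = m + r + rng.negative_binomial(n=r, p=1.0 / n, size=reps)

maxima = np.empty(reps)
for i, k in enumerate(nu):
    maxima[i] = rng.uniform(size=k).min() ** (-1.0 / alpha)  # max of k Pareto draws

m_norm = maxima / n ** (1.0 / alpha)
for x in (0.5, 1.0, 2.0):
    print(f"x={x}: empirical={np.mean(m_norm <= x):.3f}, "
          f"limit={(1.0 + x ** (-alpha)) ** (-r):.3f}")
```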

4. Numerical Studies

We conduct a Monte Carlo simulation to illustrate Theorems 1 and 2 with the $m$-shifted random sample sizes given in Examples 1 and 2. In the following simulations, we set the shift parameter $m = 5$ for all time-shifted random sample size distributions. The basic risks $X_1, X_2$ are drawn from Pareto($\alpha_1$) and Pareto($\alpha_2$) distributions (recall that the cdf of Pareto($\alpha$) is $P(X \le x) = 1 - x^{-\alpha}$, $x > 1$), and the random sample sizes $\nu_{n_1}, \nu_{n_2}$ are assumed to be mutually independent. Additionally, the number of replications is $R = 10{,}000$. We illustrate our main results specified in Theorems 1 and 2 using the three examples given in Section 3.2 above.
1. Comparison of Pareto competing extremes with deterministic sample size and Poisson distributed random sample size. In Figure 1, we demonstrate that the competing extremes with Poisson distributed sample sizes behave similarly to the case with non-random sample sizes. Let $\nu_{n_j}$ follow an $m$-shifted Poisson distribution with mean parameter $n_j$, $j = 1, 2$. We then generate competing Pareto extremes with basic risks following Pareto($\alpha_j$) for given $\alpha_j > 0$, $j = 1, 2$. It follows from Theorems 1 and 2, Example 1, and Example 4.6 in Hu et al. [8] that (recall $\Phi(x; \alpha) = \exp(-x^{-\alpha})$, $x > 0$, $\alpha > 0$, the Fréchet distribution):
  • For $n_2 = n_1$ or $n_2 = n_1^c$ with $c > \alpha_2 / \alpha_1$, we have
    $n_2^{-1/\alpha_2} M_{\nu_n} \xrightarrow{d} \Phi(\cdot\,; \alpha_2), \qquad n_2^{-1} M_{\nu_n}^{\alpha_2} \xrightarrow{d} \Phi(\cdot\,; 1).$
  • For $n_2 = n_1^c$ with $c = \alpha_2 / \alpha_1$, we have
    $n_2^{-1/\alpha_2} M_{\nu_n} \xrightarrow{d} \Phi(\cdot\,; \alpha_1)\,\Phi(\cdot\,; \alpha_2), \qquad n_2^{-1} M_{\nu_n}^{\alpha_2} \xrightarrow{d} \Phi(\cdot\,; \alpha_1/\alpha_2)\,\Phi(\cdot\,; 1).$
Note that the power normalized extremes behave similarly to the linear normalized ones, up to a power transformation. Therefore, we will focus on the behavior of linear normalization in the numerical studies presented below.
In Figure 1, we take $\alpha_1 = 2$, $\alpha_2 = 4$, and $n_1 = 100$, $n_2 = n_1^c$ with $c = 2, 2.5$ to illustrate the above two cases. Overall, the competing Pareto extremes are well fitted by the accelerated GEV distribution for the non-random sample size, with a slightly better fit compared with the random sample size cases. Furthermore, the accelerated GEV approximation (Figure 1a,c) is relatively closer to the empirical competing extremes than in the dominated case.
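For readers who wish to reproduce a Figure 1-type comparison, the following reduced Monte Carlo sketch (ours; fewer replications than the paper's $R = 10{,}000$, with $m = 5$, $\alpha_1 = 2$, $\alpha_2 = 4$, $n_1 = 100$, and the balanced case $c = 2$) compares the normalized competing maxima with the product limit $\Phi(x;\alpha_1)\Phi(x;\alpha_2)$.

```python
import numpy as np

# Sketch: competing Pareto maxima with 5-shifted Poisson sample sizes,
# normalized by n_2^(-1/alpha_2), versus Phi(x; a1) * Phi(x; a2).
rng = np.random.default_rng(7)
a1, a2, n1, m, reps = 2.0, 4.0, 100, 5, 2000
n2 = n1 ** 2                                   # balanced case c = alpha_2 / alpha_1

stat = np.empty(reps)
for i in range(reps):
    k1 = m + rng.poisson(n1)
    k2 = m + rng.poisson(n2)
    m1 = rng.uniform(size=k1).min() ** (-1.0 / a1)   # max of k1 Pareto(a1) draws
    m2 = rng.uniform(size=k2).min() ** (-1.0 / a2)   # max of k2 Pareto(a2) draws
    stat[i] = max(m1, m2) / n2 ** (1.0 / a2)

for x in (1.0, 1.5, 2.5):
    limit = np.exp(-x ** (-a1)) * np.exp(-x ** (-a2))
    print(f"x={x}: empirical={np.mean(stat <= x):.3f}, limit={limit:.3f}")
```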
2. Comparison of Pareto competing extremes with geometric and negative binomial distributed sample sizes. We consider the maxima of maxima $M_{\nu_n}$ with basic risks $X_j \sim \mathrm{Pareto}(\alpha_j)$ and random sample sizes $\nu_{n_j}$ following an $m$-shifted negative binomial distribution with success probability $1/n_j$, $j = 1, 2$, and $r \ge 1$. It follows from Example 4.6 in Hu et al. [8], Theorem 1(i)–(ii), and Example 3 that, with $\tilde{L}(x; \alpha) = [1 + x^{-\alpha}]^{-r}$, $x > 0$:
  • For $n_2 = n_1$ or $n_2 = n_1^c$ with $c > \alpha_2/\alpha_1$, we have $n_2^{-1/\alpha_2} M_{\nu_n} \xrightarrow{d} \tilde{L}(\cdot\,; \alpha_2)$.
  • For $n_2 = n_1^c$ with $c = \alpha_2/\alpha_1$, we have $n_2^{-1/\alpha_2} M_{\nu_n} \xrightarrow{d} \tilde{L}(\cdot\,; \alpha_1)\,\tilde{L}(\cdot\,; \alpha_2)$.
Thus, the corresponding limit density functions are given by
$l(x; \alpha_2) = \frac{d\tilde{L}(x; \alpha_2)}{dx} = \frac{r\,\alpha_2\, x^{-\alpha_2 - 1}}{(1 + x^{-\alpha_2})^{r+1}}, \qquad l(x; \alpha_1, \alpha_2) = \frac{d\big[\tilde{L}(x; \alpha_1)\tilde{L}(x; \alpha_2)\big]}{dx} = \frac{r\,\alpha_1\, x^{-\alpha_1 - 1}}{(1 + x^{-\alpha_1})^{r+1}\,(1 + x^{-\alpha_2})^{r}} + \frac{r\,\alpha_2\, x^{-\alpha_2 - 1}}{(1 + x^{-\alpha_2})^{r+1}\,(1 + x^{-\alpha_1})^{r}}.$  (16)
In Figure 2, we set $n_1 = 100$ and $n_2 = n_1^c$ with $c = 2$ and $2.5$ in panels (a,c) and (b,d), respectively. The random sample sizes follow a 5-shifted negative binomial distribution with $r = 1$ in (a,b) (namely, the geometric distribution) and $r = 2$ in (c,d), with success probability $1/n_j$, $j = 1, 2$. The Pareto basic risks are set with $\alpha_1 = 2$ and $\alpha_2 = 4$. Consequently, the sub-maxima are fully competing when $n_2 = n_1^2$, resulting in the accelerated mixed extreme limit distributions shown in Figure 2a,c. In contrast, the dominated limit behavior is shown in Figure 2b,d for $n_2 = n_1^{2.5}$.
In general, our theoretical density curve given by Equation (16) approximates the histogram very well (Figure 2). Further, the approximation with geometric distributed random sizes is slightly better than in the negative binomial case. In addition, the approximation for the dominated case (Figure 2d) is slightly better than for the accelerated case when the negative binomial random size applies.
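The density formula in Equation (16) can be verified mechanically; the sketch below (our own, with $\alpha_1 = 2$, $\alpha_2 = 4$, $r = 2$) compares the stated derivative of $\tilde{L}(x;\alpha_1)\tilde{L}(x;\alpha_2)$ with a central-difference numerical derivative.

```python
import numpy as np

# Sketch: check the analytic density in Equation (16) against numerical
# differentiation of the product Ltilde(x; a1) * Ltilde(x; a2).
def ltilde(x, a, r=2):
    return (1.0 + x ** (-a)) ** (-r)

def density_eq16(x, a1=2.0, a2=4.0, r=2):
    return (r * a1 * x ** (-a1 - 1) * (1 + x ** (-a1)) ** (-r - 1) * (1 + x ** (-a2)) ** (-r)
            + r * a2 * x ** (-a2 - 1) * (1 + x ** (-a2)) ** (-r - 1) * (1 + x ** (-a1)) ** (-r))

x = np.linspace(0.5, 4.0, 8)
h = 1e-5
numeric = (ltilde(x + h, 2.0) * ltilde(x + h, 4.0)
           - ltilde(x - h, 2.0) * ltilde(x - h, 4.0)) / (2.0 * h)
print(np.max(np.abs(numeric - density_eq16(x))))   # should be of order 1e-8
```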

Author Contributions

Conceptualization, L.B., K.H. and C.L.; methodology, K.H. and C.L.; software, L.B.; validation, L.B., Z.T. and C.L.; formal analysis, L.B., K.H. and C.L.; investigation, K.H.; writing—original draft preparation, C.L. and K.H.; writing—review and editing, C.W., Z.T. and C.L.; visualization, C.L.; supervision, C.L.; project administration, C.L.; funding acquisition, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

Long Bai is supported by National Natural Science Foundation of China Grant no. 11901469, Natural Science Foundation of the Jiangsu Higher Education Institutions of China grant no. 19KJB110022, and University Research Development Fund no. RDF-21-02-071. Chengxiu Ling is supported by the Research Development Fund [RDF1912017] and the Post-graduate Research Fund [PGRS2112022] at Xi’an Jiaotong-Liverpool University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Acknowledgments

We thank the editors and all reviewers for their constructive suggestions and comments that greatly helped to improve the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proofs of Theorems 1∼4

We will first present a lemma, followed by the proofs of each theorem established in Section 2.
Lemma A1
(Theorem 2.1 of Cao and Zhang [7]). If $M_{1,n_1}$ and $M_{2,n_2}$ satisfy Equation (1) with $G_j$, $j = 1, 2$, then the limit distribution of $M_n$ as $n \to \infty$ can be determined in the following cases:
Case (i). If condition (9) holds with $a > 0$ and $b < +\infty$, then
$P\big(a_{2,n_2}(M_n - b_{2,n_2}) \le x\big) \to G_1(a x + b)\, G_2(x).$
Case (ii). If condition (9) holds with $a = 0$ and $b = \infty$, then
$P\big(a_{2,n_2}(M_n - b_{2,n_2}) \le x\big) \to G_2(x).$
The proof of Lemma A1 is omitted; a detailed proof can be found in Cao and Zhang [7] (p. 250).
Below, we present the proofs of Theorems 1–4 in turn.
Proof of Theorem 1.
In view of Theorem 6.2.1 of Galambos [18], it follows from the independence of the basic risk $X_j$ and the random sample size $\nu_{n_j}$, together with conditions (1) and (7), that
$P\big(a_{j,n_j}(M_{j,\nu_{n_j}} - b_{j,n_j}) \le x\big) \xrightarrow{d} L_j(x) = \int_0^\infty (G_j(x))^z\, dP(V_j \le z), \qquad j = 1, 2.$  (A1)
Further, it follows from the mutual independence of $(X_1, \nu_{n_1})$ and $(X_2, \nu_{n_2})$ that
$P\big(a_{2,n_2}(M_{\nu_n} - b_{2,n_2}) \le x\big) = P\big(\max\big(a_{2,n_2}(M_{1,\nu_{n_1}} - b_{2,n_2}),\, a_{2,n_2}(M_{2,\nu_{n_2}} - b_{2,n_2})\big) \le x\big) = P\big(a_{2,n_2}(M_{1,\nu_{n_1}} - b_{2,n_2}) \le x\big)\, P\big(a_{2,n_2}(M_{2,\nu_{n_2}} - b_{2,n_2}) \le x\big) =: I_n \cdot II_n.$
A straightforward application of Equation (A1) gives
$II_n \to L_2(x), \qquad n_2 \to \infty.$
Next, we turn to the limit behavior of $I_n$. First, we rewrite $I_n$ as
$I_n = P\big(a_{2,n_2}(M_{1,\nu_{n_1}} - b_{2,n_2}) \le x\big) = P\Big(a_{1,n_1}(M_{1,\nu_{n_1}} - b_{1,n_1}) \le a_{1,n_1}\Big(\frac{x}{a_{2,n_2}} + b_{2,n_2} - b_{1,n_1}\Big)\Big).$
Case (i). It follows from conditions (1) and (9) with $a > 0$ and $b \in \mathbb{R}$ (see also the proof of Theorem 2.1 in Cao and Zhang [7], p. 250) that
$P\Big(a_{1,n_1}(M_{1,n_1} - b_{1,n_1}) \le a_{1,n_1}\Big(\frac{x}{a_{2,n_2}} + b_{2,n_2} - b_{1,n_1}\Big)\Big) \to G_1(a x + b).$
Therefore, in view of Theorem 6.2.1 in Galambos [18], condition (7) and the dominated convergence theorem yield
$I_n \to L_1(a x + b).$
Case (ii). Noting that condition (9) holds with $a = 0$ and $b = \infty$, we have
$P\Big(a_{1,n_1}(M_{1,n_1} - b_{1,n_1}) \le a_{1,n_1}\Big(\frac{x}{a_{2,n_2}} + b_{2,n_2} - b_{1,n_1}\Big)\Big) \to 1.$
Arguments similar to those in Case (i) imply that $I_n \to 1$.
Consequently, we complete the proof of Theorem 1. □
Proof of Theorem 2.
Noting that $(X_1, \nu_{n_1})$ and $(X_2, \nu_{n_2})$ are independent, we rewrite the left-hand side of Equation (12) as follows:
$P\big(\alpha_{2,n_2}|M_{\nu_n}|^{\beta_{2,n_2}}\operatorname{sign}(M_{\nu_n}) \le x\big) = P\big(M_{\nu_n} \le |x/\alpha_{2,n_2}|^{1/\beta_{2,n_2}}\operatorname{sign}(x)\big) = P\big(\alpha_{2,n_2}|M_{1,\nu_{n_1}}|^{\beta_{2,n_2}}\operatorname{sign}(M_{1,\nu_{n_1}}) \le x\big)\, P\big(\alpha_{2,n_2}|M_{2,\nu_{n_2}}|^{\beta_{2,n_2}}\operatorname{sign}(M_{2,\nu_{n_2}}) \le x\big) =: I_n \cdot II_n.$
Since condition (3) holds for X 2 F 2 and condition (7) is satisfied for ν n 2 , it follows by Theorem 2.1 in Barakat and Nigm [19] that
$II_n \to P_2(x), \qquad n_2 \to \infty.$
Next, we derive the limit of $I_n$. We rewrite $I_n$ as
$I_n = P\big(M_{1,\nu_{n_1}} \le (x_n / \alpha_{1,n_1})^{1/\beta_{1,n_1}}\operatorname{sign}(x)\big),$
where $x_n = \alpha_{1,n_1}\big(|x| / \alpha_{2,n_2}\big)^{\beta_{1,n_1}/\beta_{2,n_2}} =: \alpha_n |x|^{\beta_n}$ with $\alpha_n, \beta_n$ given by Equation (11).
Case (i). Noting that $x_n \to \alpha|x|^{\beta}$ as $\min(n_1, n_2) \to \infty$, condition (3) holds uniformly at $x_n$, and thus
$P\big(M_{1,n_1} \le (x_n / \alpha_{1,n_1})^{1/\beta_{1,n_1}}\operatorname{sign}(x)\big) \to H_1\big(\alpha|x|^{\beta}\operatorname{sign}(x)\big).$
Therefore, using again Theorem 2.1 in Barakat and Nigm [19], the dominated convergence theorem gives
$I_n \to P_1\big(\alpha|x|^{\beta}\operatorname{sign}(x)\big).$
Case (ii). It remains to show that $I_n \to 1$, which follows from Theorem 2.1 in Barakat and Nigm [19] together with the dominated convergence theorem and condition (7), provided we can show that
$I_n^* := P\big(M_{1,n_1} \le (x_n / \alpha_{1,n_1})^{1/\beta_{1,n_1}}\operatorname{sign}(x)\big) \to 1.$  (A6)
In what follows, we show that Equation (A6) holds if one of the four conditions specified in (a)–(d) is satisfied.
(a).
Since $H_2$ and $H_1$ are of the same p-types as $G(\log x; \gamma, \mu, \sigma)$ and $G(-\log(-x); \gamma, \mu, \sigma)$, respectively, we have, for $x > 0$,
$H_1\big(x_n \operatorname{sign}(x)\big) = H_1\big(\alpha_n |x|^{\beta_n}\operatorname{sign}(x)\big) \ge H_1(0) = 1.$
We thus obtain Equation (A6).
(b).
For $H_2$ of the same p-type as $G(\log x; \gamma, \mu, \sigma)$ and $H_1$ of the same p-type as $G(\log x; \gamma, \mu, \sigma)$ with $\gamma \ge 0$, we have, for $\alpha = \infty$ and $0 \le \beta < \infty$,
$\log x_n = \log \alpha_n + \beta_n \log x \to \infty \quad \text{as } \min(n_1, n_2) \to \infty$
holds for all x > 0 . Therefore, Equation (A6) follows.
(c).
For $H_2$ of the same p-type as $G(\log x; \gamma, \mu, \sigma)$ and $H_1$ of the same p-type as $G(\log x; \gamma, \mu, \sigma)$ with $\gamma < 0$, we have, for $\alpha = \infty$ and $0 \le \beta < \infty$, or $x_1^* \le \alpha < \infty$ and $\beta = 0$, and any $x > 0$,
$\log x_n = \log \alpha_n + \beta_n \log x \to \log \alpha \ge \log x_1^* \quad \text{as } \min(n_1, n_2) \to \infty.$
We thus have $I_n^* \ge H_1(x_n) \to 1$.
(d).
For $H_1, H_2$ both of the same p-type as $G(-\log(-x); \gamma, \mu, \sigma)$, we have, for $0 \le \alpha \le x_1^*$ and $\beta = 0$, or $\alpha = 0$ and $0 \le \beta < \infty$, and any $x < 0$,
$x_n \operatorname{sign}(x) = -\alpha_n |x|^{\beta_n} \to -\alpha \ge x_1^*,$
indicating that $I_n^* \ge H_1\big(x_n \operatorname{sign}(x)\big) \to 1$.
We complete the proof of Theorem 2. □
Proof of Theorem 3.
It follows from Theorem 6.2.1 of Galambos [18] (see Equation (8)) that, for the $j$th sample maxima $M_{j,\nu_{n_j}}$, when the constant sequences $a_{j,n_j} > 0$, $b_{j,n_j} \in \mathbb{R}$ are such that Equation (1) holds and $\nu_{n_j}$ satisfies Equation (7), the claim in Equation (A1) holds. Consequently, the claim follows from Lemma A1 and the dominated convergence theorem. □
Proof of Theorem 4.
We first show that the claim holds for the $j$th sample maxima $M_{j,\nu_{n_j}}$ with normalizing constants $\alpha_{j,n_j}, \beta_{j,n_j} > 0$, i.e.,
$P\big(\alpha_{j,n_j}|M_{j,\nu_{n_j}}|^{\beta_{j,n_j}}\operatorname{sign}(M_{j,\nu_{n_j}}) \le x\big) \xrightarrow{d} P_j(x).$  (A7)
Denote by $\{p_{n_j}(k), k \ge 0\}$ the probability mass function of $\nu_{n_j}$. We have
$p_{n_j}(k) \ge 0, \qquad \sum_{k=0}^{\infty} p_{n_j}(k) = 1.$
It follows from the law of total probability and the independence between the basic risks and the sample sizes that
$P\big(\alpha_{j,n_j}|M_{j,\nu_{n_j}}|^{\beta_{j,n_j}}\operatorname{sign}(M_{j,\nu_{n_j}}) \le x\big) = p_{n_j}(0) + \sum_{k=1}^{\infty} p_{n_j}(k)\, \Big[F_j\big(|x/\alpha_{j,n_j}|^{1/\beta_{j,n_j}}\operatorname{sign}(x)\big)\Big]^{k}.$
Since $\nu_{n_j} \xrightarrow{p} \infty$ as $n_j \to \infty$, we have $\lim_{n_j \to \infty} p_{n_j}(0) = 0$. Therefore,
$P\big(\alpha_{j,n_j}|M_{j,\nu_{n_j}}|^{\beta_{j,n_j}}\operatorname{sign}(M_{j,\nu_{n_j}}) \le x\big) = \mathbb{E}\Big[\exp\Big(\frac{\nu_{n_j}}{n_j}\, n_j \log F_j\big(|x/\alpha_{j,n_j}|^{1/\beta_{j,n_j}}\operatorname{sign}(x)\big)\Big)\Big] + o(1).$
Note that condition (7) implies that there exists a subsequence along which $\nu_{n_j}/n_j \xrightarrow{p} V_j$. It thus follows from Theorem 2.1 in Berman [25] that, for every $s > 0$,
$\lim_{n_j \to \infty} \mathbb{E}\Big[\exp\Big(-s\,\frac{\nu_{n_j}}{n_j}\Big)\Big] = \int_0^\infty e^{-s z}\, dP(V_j \le z).$  (A8)
This, together with condition (3) for $M_{j,n_j}, \alpha_{j,n_j}, \beta_{j,n_j}$ and Equation (A8), implies that
$\lim_{n_j \to \infty} \mathbb{E}\Big[\exp\Big(\frac{\nu_{n_j}}{n_j}\, n_j \log F_j\big(|x/\alpha_{j,n_j}|^{1/\beta_{j,n_j}}\operatorname{sign}(x)\big)\Big)\Big] = \int_0^\infty \exp\big(z \log H_j(x)\big)\, dP(V_j \le z).$
Consequently, we obtain the claim in Equation (A7). Finally, the proof is completed by combining the verified Equation (A7) with the arguments used for Theorem 2 and the dominated convergence theorem. □
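The Laplace-transform step in Equation (A8) can also be checked numerically for the negative binomial sample size of Example 3; in the sketch below (ours, with $n = 2000$, $m = 5$, $r = 2$, $s = 1.5$ as assumed values), $\mathbb{E}[\exp(-s\,\nu_n/n)]$ is close to the Gamma$(r, 1)$ transform $(1+s)^{-r}$.

```python
import numpy as np

# Sketch: E[exp(-s * nu_n / n)] for an m-shifted negative binomial sample size
# versus the Laplace transform (1 + s)^(-r) of its Gamma(r, 1) limit.
rng = np.random.default_rng(8)
n, m, r, s = 2000, 5, 2, 1.5

nu = m + r + rng.negative_binomial(n=r, p=1.0 / n, size=200000)
print(np.mean(np.exp(-s * nu / n)), (1.0 + s) ** (-r))
```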

References

  1. Leadbetter, M.R.; Lindgren, G.; Rootzén, H. Extremes and Related Properties of Random Sequences and Processes; Springer: New York, NY, USA, 1983. [Google Scholar]
  2. Embrechts, P.; Kluppelberg, C.; Mikosch, T. Modelling Extremal Events; Stochastic Modelling and Applied Probability; Springer: Berlin/Heidelberg, Germany, 1997. [Google Scholar]
  3. Beirlant, J.; Teugels, J.L. Limit distributions for compounded sums of extreme order statistics. J. Appl. Probab. 1992, 29, 557–574. [Google Scholar] [CrossRef]
  4. Pantcheva, E. Limit Theorems for Extreme Order Statistics under Nonlinear Normalization; Springer: Berlin/Heidelberg, Germany, 1985; pp. 284–309. [Google Scholar]
  5. Nasri-Roudsari, D. Limit distributions of generalized order statistics under power normalization. Commun. Stat.-Theory Methods 1999, 28, 1379–1389. [Google Scholar] [CrossRef]
  6. Barakat, H.M.; Khaled, O.M.; Rakha, N.K. Modeling of extreme values via exponential normalization compared with linear and power normalization. Symmetry 2020, 12, 1876. [Google Scholar] [CrossRef]
  7. Cao, W.; Zhang, Z. New extreme value theory for maxima of maxima. Stat. Theory Relat. Fields 2021, 5, 232–252. [Google Scholar] [CrossRef]
  8. Hu, K.; Wang, K.; Constantinescu, C.; Zhang, Z.; Ling, C. Extreme Limit Theory of Competing Risks under Power Normalization. arXiv 2023, arXiv:2305.02742. [Google Scholar]
  9. Chen, Y.; Guo, K.; Ji, Q.; Zhang, D. “Not all climate risks are alike”: Heterogeneous responses of financial firms to natural disasters in China. Financ. Res. Lett. 2023, 52, 103538. [Google Scholar] [CrossRef]
  10. Cui, Q.; Xu, Y.; Zhang, Z.; Chan, V. Max-linear regression models with regularization. J. Econom. 2021, 222, 579–600. [Google Scholar] [CrossRef]
  11. Zhang, Z. Five critical genes related to seven COVID-19 subtypes: A data science discovery. J. Data Sci. 2021, 19, 142–150. [Google Scholar] [CrossRef]
  12. Soliman, A.A. Bayes Prediction in a Pareto Lifetime Model with Random Sample Size. J. R. Stat. Soc. Ser. 2000, 49, 51–62. [Google Scholar] [CrossRef]
  13. Korolev, V.; Gorshenin, A. Probability models and statistical tests for extreme precipitation based on generalized negative binomial distributions. Mathematics 2020, 8, 604. [Google Scholar] [CrossRef]
  14. Barakat, H.; Nigm, E. Convergence of random extremal quotient and product. J. Stat. Plan. Inference 1999, 81, 209–221. [Google Scholar] [CrossRef]
  15. Peng, Z.; Jiang, Q.; Nadarajah, S. Limiting distributions of extreme order statistics under power normalization and random index. Stochastics 2012, 84, 553–560. [Google Scholar] [CrossRef]
  16. Tan, Z.Q. The limit theorems for maxima of stationary Gaussian processes with random index. Acta Math. Sin. 2014, 30, 1021–1032. [Google Scholar] [CrossRef]
  17. Tan, Z.; Wu, C. Limit laws for the maxima of stationary chi-processes under random index. Test 2014, 23, 769–786. [Google Scholar] [CrossRef]
  18. Galambos, J. The Asymptotic Theory of Extreme Order Statistics; Wiley Series in Probability and Mathematical Statistics; Wiley: New York, NY, USA, 1978. [Google Scholar]
  19. Barakat, H.; Nigm, E. Extreme order statistics under power normalization and random sample size. Kuwait J. Sci. Eng. 2002, 29, 27–41. [Google Scholar]
  20. Dorea, C.C.; GonÇalves, C.R. Asymptotic distribution of extremes of randomly indexed random variables. Extremes 1999, 2, 95–109. [Google Scholar] [CrossRef]
  21. Peng, Z.; Shuai, Y.; Nadarajah, S. On convergence of extremes under power normalization. Extremes 2013, 16, 285–301. [Google Scholar] [CrossRef]
  22. Hashorva, E.; Padoan, S.A.; Rizzelli, S. Multivariate extremes over a random number of observations. Scand. J. Stat. 2021, 48, 845–880. [Google Scholar] [CrossRef]
  23. Shi, P.; Valdez, E.A. Multivariate negative binomial models for insurance claim counts. Insur. Math. Econ. 2014, 55, 18–29. [Google Scholar] [CrossRef]
  24. Ribereau, P.; Masiello, E.; Naveau, P. Skew generalized extreme value distribution: Probability-weighted moments estimation and application to block maxima procedure. Commun. Stat.-Theory Methods 2016, 45, 5037–5052. [Google Scholar] [CrossRef]
  25. Berman, S.M. Limiting distribution of the maximum term in sequences of dependent random variables. Ann. Math. Stat. 1962, 33, 894–908. [Google Scholar] [CrossRef]
  26. Freitas, A.; Hüsler, J.; Temido, M.G. Limit laws for maxima of a stationary random sequence with random sample size. Test 2012, 21, 116–131. [Google Scholar] [CrossRef]
  27. Abd Elgawad, M.; Barakat, H.; Qin, H.; Yan, T. Limit theory of bivariate dual generalized order statistics with random index. Statistics 2017, 51, 572–590. [Google Scholar] [CrossRef]
  28. Grigelionis, B. On the extreme-value theory for stationary diffusions under power normalization. Lith. Math. J. 2004, 44, 36–46. [Google Scholar] [CrossRef]
Figure 1. Distribution approximation of the linear normalized $M_{\nu_n} = \max(M_{1,\nu_{n_1}}, M_{2,\nu_{n_2}})$ (a,b) and $M_n = \max(M_{1,n_1}, M_{2,n_2})$ (c,d), with the $M_{j,n_j}$'s from Pareto($\alpha_j$) and Poisson distributed sample sizes $\nu_{n_j}$ with mean $n_j$. Here, $(\alpha_1, \alpha_2) = (2, 4)$ and $n_1 = 100$, $n_2 = n_1^c$ with $c = 2, 2.5$ in (b,d) and (a,c), approximated by $\Phi(x; \alpha_1)\Phi(x; \alpha_2)$ and $\Phi(x; \alpha_2)$, respectively.
Figure 2. Distribution approximation of the linear normalized $M_{\nu_n} = \max(M_{1,\nu_{n_1}}, M_{2,\nu_{n_2}})$ with the $M_{j,n_j}$'s from Pareto($\alpha_j$). The random sample sizes follow a negative binomial distribution with $r = 1$ (the geometric distribution) (a,b) and $r = 2$ (c,d), with success probability $1/n_j$, $j = 1, 2$. Here, $(\alpha_1, \alpha_2) = (2, 4)$ and $n_1 = 100$, $n_2 = n_1^c$ with $c = 2, 2.5$ in (a,c) and (b,d), with pdf curves of $\tilde{L}(x; \alpha_1)\tilde{L}(x; \alpha_2)$ and $\tilde{L}(x; \alpha_2)$, respectively.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
