Limiting Distributions for the Minimum-Maximum Models

Abstract: We consider the extreme value problem for minimum-maximum models built from an independent and identically distributed (i.i.d.) random sequence and from a stationary random sequence, respectively. By invoking some probability formulas and Taylor expansions of the distribution functions, the limiting distributions for these two kinds of sequences are obtained. Moreover, a convergence analysis is carried out for these extreme value distributions. Several numerical experiments are conducted to validate our theoretical results.


Introduction
Consider a collection of random variables {X_{ij}}, i = 1, . . . , n, j = 1, . . . , m, following a common distribution function F. We define the minimum-maximum model as

M_n = max_{1≤i≤n} min_{1≤j≤m} X_{ij},  (1)

where m = An with A a fixed positive constant.
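Computationally, model (1) is just a row-wise minimum followed by a maximum over rows. A minimal sketch (the asset/time-interval interpretation and the sample sizes below are our own illustration, not from the paper):

```python
import numpy as np

def min_max_statistic(X):
    """M_n = max_i min_j X_ij: take the minimum along each row, then the maximum."""
    return X.min(axis=1).max()

# Hypothetical example: n = 4 assets observed over m = A*n time intervals.
rng = np.random.default_rng(0)
n, A = 4, 2
m = A * n
X = rng.standard_normal((n, m))
print(min_max_statistic(X))
```

With the risk-management reading of Section 1, each row minimum is the lowest risk level of one asset, and the outer maximum is the largest such "bottom" risk over the asset pool.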
The minimum-maximum model (1) provides a novel framework for a variety of applications. For instance, consider the risk management strategies by a risk-averse investor. Let i denote the i-th asset in the asset pool, and j denote the time interval. Assume that X ij is the risk measurement of i-th asset at time interval j. Suppose the risk-averse investor always chooses to buy an asset when its risk reaches the bottom, then M n implies the largest risk the investor would bear, and the limiting distribution is applicable for controlling the risk management processes.
Note that if we fix j = j_0, then M_n reduces to the partial maximum of the random sequence {X_{i j_0}, i = 1, . . . , n}. If we further assume that {X_{i j_0}, i = 1, . . . , n} is independent and identically distributed (i.i.d.), then, according to classical extreme value theory [1][2][3][4][5], there exists a non-degenerate distribution G(x) with some normalizing constants a_n > 0 and b_n ∈ R such that

lim_{n→∞} P((M_n − b_n)/a_n ≤ x) = lim_{n→∞} F^n(a_n x + b_n) = G(x)

for every x. Here, depending on the domain of attraction of the distribution F, G(x) belongs to one of three fundamental classes:

Type I (Gumbel): G(x) = exp(−e^{−x}), x ∈ R;
Type II (Fréchet): G(x) = exp(−x^{−α}) for x > 0, and G(x) = 0 for x ≤ 0;
Type III (Weibull): G(x) = exp(−(−x)^{α}) for x ≤ 0, and G(x) = 1 for x > 0,

with parameter α > 0. Methods for determining the normalizing constants a_n and b_n are provided in [4,5]. Similar conclusions have been drawn for random sequences under strong or weak mixing conditions in [6][7][8]. Moreover, the convergence rate of F^n(a_n x + b_n) for each of the three types can be found in [9][10][11][12][13][14][15]; in particular, the authors of [10,12,13,14] obtained uniform convergence rates for extremes. Although extreme value theory has been studied extensively in a large variety of models, the existing literature gives little insight into the limiting distribution of the minimum-maximum model. To fill this gap, we present theoretical results on the limiting distributions for the minimum-maximum model (1), which may also offer some insights for applications.
In this paper, we focus on obtaining the limiting distributions for the minimum-maximum model (1), as well as the convergence rate of P(M_n ≤ a_n x + b_n) to its extreme value limit. Motivated by [1,4,5], we first provide methods for selecting the normalizing constants a_n, b_n. Then, combining their properties with Taylor expansions of the distribution functions, we obtain the limiting distributions for i.i.d. and stationary random sequences, respectively. Our results show that P(M_n ≤ a_n x + b_n) converges to a non-standard Gumbel distribution as long as the distribution function has a continuous and bounded first derivative. To obtain the convergence rate of the extremes, we take advantage of an important inequality and a classical probability result adopted from [12]. A closer examination of the convergence results reveals a uniform convergence rate of O(n^{−1}) for some probability distributions. Finally, numerical examples are provided to verify our theoretical results.
The rest of the paper is organized as follows: in Sections 2 and 3, we derive the extreme value distribution for model (1) with i.i.d. and stationary sequences, respectively. The convergence analysis for the limiting functions is presented in Section 4. Section 5 is devoted to numerical experiments illustrating the asymptotic behavior and uniform convergence for different distribution functions.

Extreme Value Distribution for i.i.d. Sequences
Suppose X_{11}, X_{12}, . . . , X_{nm} are i.i.d. random variables with common distribution function F. Note that model (1) can be rewritten as

M_n = max_{1≤i≤n} Y_i,  where Y_i = min_{1≤j≤m} X_{ij}.  (4)

The following theorem implies that the limiting distribution of (4) for an i.i.d. random sequence belongs to the Gumbel class.
Theorem 1. Suppose {X_{ij}, 1 ≤ i ≤ n, 1 ≤ j ≤ m} are i.i.d. random variables with common distribution function F having a continuous and bounded first derivative F'. Then there exist normalizing constants

a_n = (1 − F(b_n)) / (n F'(b_n)),  with b_n determined by 1 − F(b_n) = n^{−1/(An)},  (5)

such that

lim_{n→∞} P((M_n − b_n)/a_n ≤ x) = exp(−e^{−Ax}) =: G_A(x).

Proof. Since X_{11}, X_{12}, . . . , X_{nm} are i.i.d., we have

P(M_n ≤ a_n x + b_n) = [1 − (1 − F(a_n x + b_n))^{An}]^n.  (6)

According to the expressions of a_n and b_n, it follows that

(1 − F(b_n))^{An} = 1/n  and  a_n/(1 − F(b_n)) = 1/(n F'(b_n)).  (7)

Combining Taylor's expansion of F(a_n x + b_n) with (7) yields

F(a_n x + b_n) = F(b_n) + a_n x F'(b_n + θ_1 a_n x),  (8)

where θ_1 ∈ (0, 1). Taking the limit of (6) as n goes to infinity and substituting (8) into it, we have

lim_{n→∞} P(M_n ≤ a_n x + b_n)
= lim_{n→∞} [1 − (1 − F(b_n) − a_n x F'(b_n + θ_1 a_n x))^{An}]^n
= lim_{n→∞} [1 − (1 − F(b_n))^{An} (1 − a_n x F'(b_n + θ_1 a_n x)/(1 − F(b_n)))^{An}]^n
= lim_{n→∞} [1 − (1/n)(1 − x F'(b_n + θ_1 a_n x)/(n F'(b_n)))^{An}]^n.  (9)

The right-hand side of the first line in (9) uses the assumption m = An; on the right-hand side of the third line in (9), we applied (7). Note that

lim_{n→∞} a_n = 0  and  lim_{n→∞} F'(b_n + θ_1 a_n x)/F'(b_n) = 1  (10)

by the continuity of F'; thus we obtain the following limit:

lim_{n→∞} (1 − x F'(b_n + θ_1 a_n x)/(n F'(b_n)))^{An} = e^{−Ax}.  (11)

Combining (9) with (11), we finish the proof of the theorem.
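To make Theorem 1 concrete, take the standard exponential distribution F(x) = 1 − e^{−x}: solving 1 − F(b_n) = n^{−1/(An)} gives b_n = ln(n)/(An), and a_n = (1 − F(b_n))/(n F'(b_n)) = 1/n. The sketch below (our own illustration, with these derived constants as assumptions) evaluates the exact probability in (6) at these constants and compares it with G_A(x) = exp(−e^{−Ax}):

```python
import numpy as np

def G_A(x, A):
    """Limiting distribution exp(-exp(-A x)) from Theorem 1."""
    return np.exp(-np.exp(-A * x))

def exact_cdf_exp(x, n, A):
    """Exact P(M_n <= a_n x + b_n) = [1 - (1 - F(u))^{A n}]^n for F(x) = 1 - e^{-x},
    using the derived constants a_n = 1/n and b_n = ln(n)/(A n)."""
    u = x / n + np.log(n) / (A * n)           # u = a_n x + b_n
    return (1.0 - np.exp(-u) ** (A * n)) ** n

A, x = 1.0, 0.5
for n in (10, 100, 1000):
    print(n, abs(exact_cdf_exp(x, n, A) - G_A(x, A)))  # error shrinks roughly like 1/n
```

For this distribution, (1 − F(a_n x + b_n))^{An} = e^{−Ax}/n exactly, so the probability reduces to (1 − e^{−Ax}/n)^n and its convergence to G_A(x) is visible already for moderate n.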
Remark 1. It follows readily that the probability density function of (M_n − b_n)/a_n converges to the density of G_A(x), which will be verified by numerical experiments in Section 5.

Extreme Value Distribution for Stationary Sequences
In this section, we turn to the case of a strictly stationary sequence {X_{ij}}, i = 1, . . . , N, j = 1, . . . , L. Similarly, the minimum-maximum model for {X_{ij}} can be defined as

M̄_N = max_{1≤i≤N} min_{1≤j≤L} X_{ij},  (12)

where N = ĀL with Ā a fixed positive constant. In order to obtain asymptotic results, it is necessary to place some restrictions on the distribution functions. We assume that, for any fixed i, the joint distribution of the random sequence {X_{ij}, j = 1, . . . , L}, denoted F_{i,1···L}(x_{i1}, · · · , x_{iL}), satisfies the following strong mixing condition D(u_L): for any integers 1 ≤ j_1 < · · · < j_p < k_1 < · · · < k_q ≤ L with k_1 − j_p ≥ r,

|F_{i, j_1···j_p k_1···k_q}(u_L) − F_{i, j_1···j_p}(u_L) F_{i, k_1···k_q}(u_L)| ≤ α_{L,r},  (13)

where F_{i, j_1···j_p}(u_L) abbreviates the joint distribution evaluated with all arguments equal to u_L, and α_{L,r} → 0 as L → ∞ for any fixed r.

Now, we consider the limiting distribution of M̄_N. Similar to (4), model (12) can be rewritten as

M̄_N = max_{1≤i≤N} Ȳ_i,  where Ȳ_i = min_{1≤j≤L} X_{ij}.  (14)

Motivated by [16], under the condition D(u_L) stated above, we can prove that 1 − P(Ȳ_i ≤ u_L) can be approximated by P^L(X_{i1} > u_L) = (1 − F(u_L))^L. This is made precise by the following theorem.

Theorem 2. Suppose {X_{ij}, i = 1, . . . , N, j = 1, . . . , L} is a strictly stationary sequence whose joint distributions F_{i,1···L}(x_{i1}, · · · , x_{iL}), i = 1, 2, · · · , N, satisfy condition (13). Then, for any sequence of levels u_L,

|1 − P(Ȳ_i ≤ u_L) − (1 − F(u_L))^L| → 0 as L → ∞.  (15)

Proof. Under the condition D(u_L) and by the definition of Ȳ_i, the difference in (15) can be bounded by a telescoping sum of mixing errors α_{L,r}; this bound is denoted (17). Note that, for any fixed r, lim_{L→∞} α_{L,r} = 0. Therefore, for any sufficiently small ε > 0, there exists an integer L_0 such that α_{L,r} < ε whenever L > L_0; moreover, if L ≤ L_0, then α_{L,r} < α_0, where α_0 is a positive constant. We now proceed by dividing (17) into two parts accordingly, as in (18). To deal with the first term on the right-hand side of (18), we use Taylor's expansion of F_{i1}(u_L) to obtain the bound (19) as N goes to infinity, where the parameter θ_2 ∈ (0, 1).
In the same manner, we bound the second term on the right-hand side of (18), obtaining (20). Substituting (19) and (20) into (18) finishes the proof of the theorem.
With the help of Theorem 2, we can now proceed to obtain the limiting distribution of (M̄_N − b_L)/a_L.

Theorem 3. Suppose {X_{ij}} satisfies the conditions of Theorem 2, with rows independent of one another, and suppose F has a continuous and bounded first derivative F'. Then there exist normalizing constants a_L > 0 and b_L, chosen analogously to Theorem 1, such that (M̄_N − b_L)/a_L converges in distribution to a non-standard Gumbel law of the same form as in Theorem 1.  (21)

Proof. By the definitions of M̄_N and Ȳ_i and the independence across i, we have

P(M̄_N ≤ u_L) = ∏_{i=1}^{N} P(Ȳ_i ≤ u_L).  (22)

According to the asymptotic results implied by Theorem 2, it follows easily that

lim P(M̄_N ≤ a_L x + b_L) = lim [1 − (1 − F(a_L x + b_L))^L]^N.  (23)

Note that the resulting limit (23) shares the same form as (6). We can now proceed analogously to the proof of Theorem 1 to conclude the limiting distribution (21). The detailed proof is omitted here.

Rate of Convergence of the Minimum-Maximum Model
In this section, we are concerned with the convergence rate of |P((M_n − b_n)/a_n ≤ x) − G_A(x)|. Here, we present the proof for the i.i.d. random sequence; the result for the strictly stationary sequence can be proved in much the same way. Our convergence analysis needs the following classical result, given as Proposition 1 in [12]:

0 ≤ e^{−x} − (1 − x/n)^n ≤ x² e^{−x}/n,  0 ≤ x ≤ n.  (24)

Theorem 4. Suppose the random sequence {X_{ij}, 1 ≤ i ≤ n, 1 ≤ j ≤ m} satisfies the conditions of Theorem 1, with the normalizing constants a_n and b_n as defined in Theorem 1. Then the convergence rate (25) is achieved for n ≥ max{1/A, 1} and x ∈ D_n, where γ_n(x) = F'(θ a_n x + b_n)/F'(b_n) with θ ∈ (0, 1), and D_n := {x : x γ_n(x) < n} ∩ {x : 0 < x < n}.
Proof. Combining (6) with (8), we have

|P(M_n ≤ a_n x + b_n) − G_A(x)| ≤ |[1 − (1/n)(1 − x γ_n(x)/n)^{An}]^n − [1 − (1/n) e^{−A x γ_n(x)}]^n| + |[1 − (1/n) e^{−A x γ_n(x)}]^n − G_A(x)|,  (26)

where γ_n(x) := F'(θ a_n x + b_n)/F'(b_n). According to (24), we can bound the second term on the right-hand side of the "≤" sign in (26) as in (27). We now turn to the first term: it is easy to check (28), where we have used the following inequality:

|a^n − b^n| ≤ n |a − b|,  0 ≤ a, b ≤ 1.  (29)

Combining Theorem 1 with the monotonicity of (1 − x/n)^n in n, the first part on the right-hand side of the "≤" sign in (27) can be controlled by G_A(x). Meanwhile, since we assume m = An in our model, the rest of the term can be handled as in (30), by writing (1 − x γ_n(x)/n)^{An} = (1 − A x γ_n(x)/(An))^{An} and applying (24) with An in place of n. Combining (29) with the monotonicity of (1 − x/n)^n in n, we obtain (31), where x must satisfy both x γ_n(x) < n and 0 < x < n. For the second term on the right-hand side of the "≤" sign in (30), applying (24) again yields (32). A combination of (26), (31) and (32) completes the proof of the theorem.
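The classical bound borrowed from [12], 0 ≤ e^{−x} − (1 − x/n)^n ≤ x² e^{−x}/n for 0 ≤ x ≤ n (our reading of the elided display), is easy to probe numerically. A minimal check over a grid:

```python
import numpy as np

# Probe 0 <= e^{-x} - (1 - x/n)^n <= x^2 e^{-x}/n on a grid of 0 <= x <= n.
for n in (10, 50, 200):
    x = np.linspace(0.0, float(n), 2001)
    gap = np.exp(-x) - (1.0 - x / n) ** n
    upper = x**2 * np.exp(-x) / n
    assert np.all(gap >= -1e-12) and np.all(gap <= upper + 1e-12)
    print(n, gap.max())
```

The lower bound reflects ln(1 − y) ≤ −y, so (1 − x/n)^n never exceeds e^{−x}; the upper bound is what drives the 1/n factors throughout the proof above.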
Note that Theorem 4 shows point-wise convergence of the distribution of (1) to the extreme value distribution G_A(x). However, a closer examination of this theorem yields a uniform rate of convergence for some probability distributions. With the help of Taylor's expansion, we are now in a position to establish the uniform convergence rate.

Corollary 1. Suppose the random sequence {X_{ij}, 1 ≤ i ≤ n, 1 ≤ j ≤ m} satisfies the conditions of Theorem 4 with a distribution F having continuous and bounded first and second derivatives. Then there exist constants K, K' independent of n such that the following uniform convergence rate holds for n ≥ max{1/A, 1}:

sup_{x ∈ D_n} |P((M_n − b_n)/a_n ≤ x) − G_A(x)| ≤ K'/n.

Proof. According to the proof of Theorem 4, it remains to verify |1 − γ_n(x)| = O(n^{−1}). Combining Taylor's expansion of F'(θ a_n x + b_n) at b_n with (7) yields

γ_n(x) = 1 + θ a_n x F''(b_n + θ θ_1 a_n x)/F'(b_n),  θ, θ_1 ∈ (0, 1).

Therefore,

|1 − γ_n(x)| ≤ a_n x |F''(b_n + θ θ_1 a_n x)|/F'(b_n).

Since F' and F'' are bounded on D_n and 0 < θ_1 < 1, we can assert that |1 − γ_n(x)| ≤ K x/n, where K is a positive constant that does not depend on n. It can easily be shown that the function x² e^{−Ax} is bounded by 4(Ae)^{−2} for x ∈ D_n, which implies that the convergence is uniform on D_n.
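Two quantities in this corollary are easy to check numerically: the envelope x² e^{−Ax} attains its maximum 4(Ae)^{−2} at x = 2/A, and for the exponential example (with the constants a_n = 1/n, b_n = ln(n)/(An) derived earlier, our assumption) the sup-error over x does decay like O(n^{−1}). A sketch:

```python
import numpy as np

# Envelope bound used in Corollary 1: x^2 * exp(-A x) <= 4/(A e)^2 for x > 0.
A = 0.5
x = np.linspace(0.0, 80.0, 200001)
envelope = x**2 * np.exp(-A * x)
print(envelope.max(), 4.0 / (A * np.e) ** 2)  # maximum is attained at x = 2/A

def sup_error(n, A=1.0):
    """sup_x |P(M_n <= a_n x + b_n) - G_A(x)| for F(x) = 1 - e^{-x}.
    Here the probability equals (1 - e^{-Ax}/n)^n exactly (exponential case)."""
    xs = np.linspace(0.0, 20.0, 4001)
    t = np.exp(-A * xs)
    return np.max(np.abs((1.0 - t / n) ** n - np.exp(-t)))

e1, e2 = sup_error(100), sup_error(1000)
print(e1, e2, e1 / e2)  # ratio close to 10, consistent with a uniform O(1/n) rate
```

Multiplying n by 10 divides the sup-error by roughly 10, which is exactly the uniform O(n^{−1}) behavior the corollary predicts.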

Numerical Experiments
In this section, numerical experiments are conducted to validate the approximation capability of G A (x) and the uniform convergence rate provided by Theorem 1 and Corollary 1, respectively.
We begin by generating random sequences following three types of distributions (i.e., the uniform distribution, the exponential distribution and the Cauchy distribution); Theorem 1 then leads to explicit expressions for the corresponding normalizing constants a_n and b_n, which are given in Table 1.

Table 1. Normalized constants a_n and b_n for the corresponding distributions.

Distribution Function F | a_n | b_n
F(x) = x, x ∈ (0, 1) | n^{−1−1/(An)} | 1 − n^{−1/(An)}
F(x) = 1 − e^{−x}, x > 0 | n^{−1} | (An)^{−1} ln n
F(x) = (1/π) arctan x + 1/2, x ∈ R | n^{−1−1/(An)} π(1 + cot²(n^{−1/(An)} π)) | cot(n^{−1/(An)} π)

Here, we choose A = 1/2, 1, 20, respectively. The results are presented in Tables 2 and 3 and Figures 1–3. Note that the second derivative of the Cauchy distribution function is unbounded on D_n; consequently, this kind of distribution does not converge to G_A(x) uniformly on this domain. For this reason, we only examine the asymptotic behavior of its extremes graphically; the results are presented in Figure 3. These results suggest that the uniform convergence rate is of order O(1/n) for the different choices of A, as predicted by Corollary 1. Figures 1–3 show that the asymptotic density essentially coincides with the density of G_A(x), as stated in Remark 1.
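A minimal Monte Carlo version of this experiment for the uniform case (our own sketch, not the paper's code, using the Table 1 constants a_n = n^{−1−1/(An)} and b_n = 1 − n^{−1/(An)}): sample M_n repeatedly, normalize, and compare the empirical CDF with G_A at a few points.

```python
import numpy as np

def empirical_vs_limit(n=100, A=1, reps=2000, seed=1):
    """Max discrepancy between the empirical CDF of (M_n - b_n)/a_n and G_A(x)
    for Uniform(0,1) entries, with the Table 1 normalizing constants."""
    rng = np.random.default_rng(seed)
    m = int(A * n)
    b_n = 1.0 - n ** (-1.0 / (A * n))
    a_n = n ** (-1.0 - 1.0 / (A * n))
    samples = np.empty(reps)
    for r in range(reps):
        X = rng.random((n, m))
        samples[r] = X.min(axis=1).max()   # one realization of M_n
    z = (samples - b_n) / a_n
    xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0])
    emp = np.array([(z <= v).mean() for v in xs])
    return np.max(np.abs(emp - np.exp(-np.exp(-A * xs))))

print(empirical_vs_limit())
```

For moderate n the discrepancy is dominated by Monte Carlo noise of order reps^{−1/2} plus the O(1/n) bias from Corollary 1, so it is already small at n = 100.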

Conclusions
In this paper, we have derived the limiting distributions and convergence rates for the minimum-maximum model. Under the framework of this model, we considered an i.i.d. random sequence and a stationary random sequence satisfying a strong mixing condition, respectively. Without specifying the form of the distribution functions of the two sequences, our results show that the distribution of the normalized M_n converges to a non-standard Gumbel distribution whenever the distribution function has a continuous and bounded first derivative; when the second derivative is also continuous and bounded, the convergence is uniform with rate O(n^{−1}).