# Limiting Distributions for the Minimum-Maximum Models

School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
\* Author to whom correspondence should be addressed.
Mathematics 2019, 7(8), 719; https://doi.org/10.3390/math7080719
Received: 9 July 2019 / Revised: 2 August 2019 / Accepted: 3 August 2019 / Published: 7 August 2019

## Abstract

We consider the extreme value problem for the minimum-maximum model with an independent and identically distributed (i.i.d.) random sequence and with a stationary random sequence, respectively. By invoking some probability formulas and Taylor's expansions of the distribution functions, the limiting distributions for these two kinds of sequences are obtained. Moreover, convergence analysis is carried out for these extreme value distributions. Several numerical experiments are conducted to validate our theoretical results.

## 1. Introduction

Consider a collection of random variables $\{X_{ij}\}$, $i = 1, \dots, n$, $j = 1, \dots, m$, with common distribution function $F$. We define the minimum-maximum model as
$$M_n = \max_{1 \le i \le n} \min_{1 \le j \le m} \{X_{ij}\}, \tag{1}$$
where $m = An$ with $A$ a fixed positive constant.
The minimum-maximum model (1) provides a novel framework for a variety of applications. For instance, consider the risk management strategy of a risk-averse investor. Let $i$ denote the $i$-th asset in the asset pool and $j$ the time interval, and assume that $X_{ij}$ is the risk measurement of the $i$-th asset at time interval $j$. Suppose the investor always buys an asset when its risk reaches its lowest point; then $M_n$ represents the largest risk the investor would bear, and its limiting distribution is applicable to controlling the risk management process.
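As a quick illustration of model (1), $M_n$ can be simulated directly. The sketch below is illustrative only: it assumes i.i.d. Exp(1) entries and a pure-Python sampler, neither of which is prescribed by the paper.

```python
import random

def min_max_sample(n, A, trials=200, draw=lambda: random.expovariate(1.0)):
    """Simulate M_n = max_{1<=i<=n} min_{1<=j<=m} X_ij with m = A*n."""
    m = max(int(A * n), 1)          # inner dimension m = An, rounded to an integer
    return [max(min(draw() for _ in range(m)) for _ in range(n))
            for _ in range(trials)]

random.seed(1)
samples = min_max_sample(n=50, A=1.0)
print(len(samples), sum(samples) / len(samples))  # number of draws, empirical mean of M_n
```

With `draw` swapped for any other sampler, the same two-level min/max structure applies.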
Note that if we fix $j = j_0$, then $M_n$ is the partial maximum of the random sequence $\{X_{ij_0}, i = 1, \dots, n\}$. If we further assume that $\{X_{ij_0}, i = 1, \dots, n\}$ are independent and identically distributed (i.i.d.), then, according to classical extreme value theory [1,2,3,4,5], there exist a non-degenerate distribution $G(x)$ and normalizing constants $a_n > 0$ and $b_n \in \mathbb{R}$ such that
$$\lim_{n \to \infty} P\left(\frac{M_n - b_n}{a_n} \le x\right) = \lim_{n \to \infty} F^n(a_n x + b_n) = G(x) \tag{2}$$
for every $x$. Here, depending on the domain of attraction of the distribution $F$, $G(x)$ belongs to one of the three fundamental classes:
$$\text{I (Gumbel):} \quad \Lambda(x) = \exp\{-e^{-x}\}, \quad x \in \mathbb{R};$$
$$\text{II (Fréchet):} \quad \Phi_\alpha(x) = \begin{cases} 0, & x < 0, \\ \exp\{-x^{-\alpha}\}, & x \ge 0; \end{cases}$$
$$\text{III (Weibull):} \quad \Psi_\alpha(x) = \begin{cases} \exp\{-(-x)^{\alpha}\}, & x < 0, \\ 1, & x \ge 0, \end{cases} \tag{3}$$
with parameter $\alpha > 0$.
The methods for determining the normalizing constants $a_n$ and $b_n$ are provided in [4,5]. Similar conclusions have been drawn for random sequences under strong or weak mixing conditions in [6,7,8]. Moreover, the convergence rate of $F^n(a_n x + b_n)$ for each of the three types can be found in [9,10,11,12,13,14,15]; in particular, the authors of [10,12,13,14] obtained uniform convergence rates of extremes. Although extreme value theory has been studied extensively for a large variety of models, the existing literature gives little insight into the limiting distribution of the minimum-maximum model. To fill this gap, we present theoretical results on the limiting distributions for the minimum-maximum model (1), which may also offer some insight for applications.
In this paper, we focus on obtaining the limiting distributions for the minimum-maximum model (1), as well as the convergence rate of $P(M_n \le a_n x + b_n)$ to its extreme value limit. Motivated by [1,4,5], we first provide methods for selecting the normalizing constants $a_n$ and $b_n$. Then, combining their properties with Taylor's expansions of the distribution functions, we obtain the limiting distributions for i.i.d. and stationary random sequences, respectively. Our results show that $P(M_n \le a_n x + b_n)$ converges to a non-standard Gumbel distribution as long as the distribution functions have continuous and bounded first derivatives. To obtain the convergence rate of extremes, we take advantage of an important inequality and a classical probability result adopted from . A closer examination of the convergence results reveals a uniform convergence rate of $O(n^{-1})$ for some probability distributions. Finally, numerical examples are provided to verify the theoretical results.
The rest of the paper is organized as follows. In Section 2 and Section 3, we derive the extreme value distribution for model (1) with an i.i.d. and a stationary sequence, respectively. The convergence analysis for the limiting functions is presented in Section 4. Section 5 is devoted to numerical experiments illustrating the asymptotic behavior and uniform convergence for different distribution functions.

## 2. Extreme Value Distribution for i.i.d. Sequences

Suppose $X_{11}, X_{12}, \dots, X_{nm}$ is a sequence of i.i.d. random variables with common distribution function $F$. Note that model (1) can be rewritten as
$$M_n = \max\{Y_1, Y_2, \dots, Y_n\}, \qquad Y_i = \min\{X_{i1}, X_{i2}, \dots, X_{im}\}. \tag{4}$$
The following theorem shows that the limiting distribution of (4) for an i.i.d. random sequence belongs to the Gumbel class.
Theorem 1.
Suppose $\{X_{ij}, 1 \le i \le n, 1 \le j \le m\}$ are i.i.d. random variables whose common distribution function $F$ has a continuous and bounded first derivative $F'$. Then there exist normalizing constants $a_n = \frac{1 - F(b_n)}{n F'(b_n)} > 0$ and $b_n = \left(\frac{1}{1 - F}\right)^{-1}\left(n^{\frac{1}{An}}\right)$ such that
$$\lim_{n \to \infty} P(M_n \le a_n x + b_n) = \lim_{n \to \infty} \left\{1 - [1 - F(a_n x + b_n)]^m\right\}^n = \exp\{-e^{-Ax}\} =: G_A(x). \tag{5}$$
Proof.
Since $X_{11}, X_{12}, \dots, X_{nm}$ are i.i.d., we have
$$P(M_n \le a_n x + b_n) = P^n(Y_i \le a_n x + b_n) = \left[1 - P\left(\min\{X_{i1}, X_{i2}, \dots, X_{im}\} > a_n x + b_n\right)\right]^n = \left\{1 - P^m(X_{ij} > a_n x + b_n)\right\}^n = \left\{1 - [1 - F(a_n x + b_n)]^m\right\}^n. \tag{6}$$
According to the expressions for $a_n$ and $b_n$, it follows that
$$1 - F(b_n) = n^{-\frac{1}{An}} \qquad \text{and} \qquad a_n = \frac{1}{n^{1 + \frac{1}{An}} F'(b_n)}. \tag{7}$$
Combining Taylor's expansion of $F(a_n x + b_n)$ with (7) yields
$$F(a_n x + b_n) = F(b_n) + a_n x F'(b_n + \theta_1 a_n x) = F(b_n) + \frac{x F'(b_n + \theta_1 a_n x)}{n^{1 + \frac{1}{An}} F'(b_n)}, \tag{8}$$
where $\theta_1 \in (0, 1)$.
Taking the limit of (6) as $n$ goes to infinity and substituting (8) into it, we have
$$\begin{aligned} \lim_{n \to \infty} P(M_n \le a_n x + b_n) &= \lim_{n \to \infty} \left\{1 - [1 - F(a_n x + b_n)]^{An}\right\}^n \\ &= \lim_{n \to \infty} \left\{1 - \left[1 - F(b_n) - \frac{x F'(b_n + \theta_1 a_n x)}{n^{1 + \frac{1}{An}} F'(b_n)}\right]^{An}\right\}^n \\ &= \lim_{n \to \infty} \left\{1 - \frac{1}{n} \left[1 - \frac{x F'(b_n + \theta_1 a_n x)}{n F'(b_n)}\right]^{An}\right\}^n. \end{aligned} \tag{9}$$
The first equality in (9) uses the assumption $m = An$. In the last equality, we applied (7) to replace $1 - F(b_n)$ by $n^{-\frac{1}{An}}$.
Note that
$$\lim_{n \to \infty} a_n = \lim_{n \to \infty} \frac{1}{n^{1 + \frac{1}{An}} F'(b_n)} = 0, \tag{10}$$
thus we obtain the following limit:
$$\lim_{n \to \infty} \frac{F'(b_n + \theta_1 a_n x)}{F'(b_n)} = 1. \tag{11}$$
Combining (9) with (11), we finish the proof of the theorem. □
Remark 1.
It can easily be seen that the probability density function of $M_n$ converges to $G_A'(x)$, which will be verified by the numerical experiments in Section 5.
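Theorem 1 can also be probed empirically. The following sketch is illustrative only: it assumes i.i.d. Exp(1) entries, for which $a_n = 1/n$ and $b_n = \ln n/(An)$ (see Table 1 in Section 5), and compares the empirical distribution of $(M_n - b_n)/a_n$ with $G_A(x) = \exp\{-e^{-Ax}\}$ at one test point.

```python
import math
import random

def G_A(x, A):
    """Limiting distribution G_A(x) = exp(-e^(-A x)) from Theorem 1."""
    return math.exp(-math.exp(-A * x))

def normalized_Mn(n, A, trials=300):
    """Draw (M_n - b_n)/a_n for i.i.d. Exp(1) entries with m = A*n."""
    m = max(int(A * n), 1)
    a_n, b_n = 1.0 / n, math.log(n) / (A * n)
    return [(max(min(random.expovariate(1.0) for _ in range(m))
                 for _ in range(n)) - b_n) / a_n
            for _ in range(trials)]

random.seed(0)
z = normalized_Mn(n=80, A=1.0)
x0 = 1.0
empirical = sum(v <= x0 for v in z) / len(z)
print(empirical, G_A(x0, 1.0))   # the two values should be close
```

Larger `n` and more `trials` tighten the agreement, in line with the rates derived in Section 4.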

## 3. Extreme Value Distribution for Stationary Sequences

In this section, we turn to the case of a strictly stationary sequence $\{\bar{X}_{ij}\}$, $i = 1, \dots, N$, $j = 1, \dots, L$. Similarly, the minimum-maximum model for $\{\bar{X}_{ij}\}$ is defined as
$$\bar{M}_N = \max_{1 \le i \le N} \min_{1 \le j \le L} \{\bar{X}_{ij}\}, \tag{12}$$
where $L = \bar{A} N$ with $\bar{A}$ a fixed positive constant.
In order to obtain asymptotic results, it is necessary to impose some restrictions on the distribution functions. We further assume that the random sequence satisfies the following condition:
For any fixed $i$, the joint distribution of the random sequence $\{\bar{X}_{ij}, j = 1, \dots, L\}$, denoted $\bar{F}_{i, 1 \cdots L}(\bar{x}_{i1}, \dots, \bar{x}_{iL})$, satisfies the strong mixing condition $D(u_L)$ defined by
$$D(u_L):\quad \left|\bar{F}_{i, i_1 \cdots i_p j_1 \cdots j_q}(u_L) - \bar{F}_{i, i_1 \cdots i_p}(u_L)\, \bar{F}_{i, j_1 \cdots j_q}(u_L)\right| \le \alpha_{L,r} \tag{13}$$
for $1 \le i_1 < \cdots < i_p < j_1 < \cdots < j_q \le L$ with $j_1 - i_p \ge r$. Here, $\{u_L\}$ is a sequence of real numbers and
$$\lim_{L \to \infty} \lim_{r \to \infty} \alpha_{L,r} = 0. \tag{14}$$
Now, we consider the limiting distribution of $\bar{M}_N$. Similar to (4), model (12) can be rewritten as
$$\bar{M}_N = \max\{\bar{Y}_1, \bar{Y}_2, \dots, \bar{Y}_N\}, \qquad \bar{Y}_i = \min\{\bar{X}_{i1}, \bar{X}_{i2}, \dots, \bar{X}_{iL}\}. \tag{15}$$
Motivated by , under the condition $D(u_L)$ stated above, we can prove that $1 - P(\bar{Y}_i \le u_L)$ can be approximated by $P^L(\bar{X}_{i1} > u_L)$. The result is given in the following theorem.
Theorem 2.
Suppose $\{\bar{X}_{ij}, i = 1, \dots, N, j = 1, \dots, L\}$ is a strictly stationary sequence whose joint distributions $\bar{F}_{i, 1 \cdots L}(\bar{x}_{i1}, \dots, \bar{x}_{iL})$, $i = 1, \dots, N$, satisfy condition (13). Then, for any $x \ge 0$, there exists a sequence $u_L = a_L x + b_L$ with $a_L = \frac{1 - \bar{F}_{i1}(b_L)}{N \bar{F}_{i1}'(b_L)}$ and $b_L = \left(\frac{1}{1 - \bar{F}_{i1}}\right)^{-1}\left(N^{\frac{1}{L}}\right)$ such that the following approximation holds:
$$\lim_{N \to \infty} \left|P(\bar{Y}_i > u_L) - P^L(\bar{X}_{i1} > u_L)\right| = 0. \tag{16}$$
Proof.
Under the condition $D(u_L)$ and by the definition of $\bar{Y}_i$, it follows that
$$\begin{aligned} \left|P(\bar{Y}_i > u_L) - P^L(\bar{X}_{i1} > u_L)\right| &= \left|P(\bar{X}_{i1} > u_L, \bar{X}_{i2} > u_L, \dots, \bar{X}_{iL} > u_L) - P^L(\bar{X}_{i1} > u_L)\right| \\ &\le \left|P(\bar{X}_{i1} > u_L, \dots, \bar{X}_{iL} > u_L) - P(\bar{X}_{i1} > u_L, \dots, \bar{X}_{i,L-1} > u_L)\, P(\bar{X}_{iL} > u_L)\right| \\ &\quad + P(\bar{X}_{iL} > u_L) \left|P(\bar{X}_{i1} > u_L, \dots, \bar{X}_{i,L-1} > u_L) - P(\bar{X}_{i1} > u_L, \dots, \bar{X}_{i,L-2} > u_L)\, P(\bar{X}_{i,L-1} > u_L)\right| \\ &\quad + \cdots \\ &\quad + P(\bar{X}_{iL} > u_L) \cdots P(\bar{X}_{i3} > u_L) \left|P(\bar{X}_{i1} > u_L, \bar{X}_{i2} > u_L) - P(\bar{X}_{i1} > u_L)\, P(\bar{X}_{i2} > u_L)\right| \\ &\le \alpha_{L,1} + P(\bar{X}_{i1} > u_L)\, \alpha_{L-1,1} + \cdots + P^{L-2}(\bar{X}_{i1} > u_L)\, \alpha_{2,1} \\ &= \sum_{j=0}^{L-2} \alpha_{L-j,1} \left(1 - \bar{F}_{i1}(u_L)\right)^j = \sum_{j=2}^{L} \alpha_{j,1} \left(1 - \bar{F}_{i1}(u_L)\right)^{L-j}. \end{aligned} \tag{17}$$
Note that, for any $r \in \{1, 2, \dots, N\}$, we have $\lim_{L \to \infty} \alpha_{L,r} = 0$. Therefore, for any sufficiently small $\varepsilon > 0$, there exists an integer $L_0$ such that $\alpha_{L,r} < \varepsilon$ whenever $L > L_0$. On the other hand, for $L \le L_0$ we have $\alpha_{L,r} < \alpha_0$, where $\alpha_0$ is a positive constant. We now proceed by dividing (17) into two parts:
$$\sum_{j=2}^{L} \alpha_{j,1} \left(1 - \bar{F}_{i1}(u_L)\right)^{L-j} = \sum_{j=2}^{L_0} \alpha_{j,1} \left(1 - \bar{F}_{i1}(u_L)\right)^{L-j} + \sum_{j=L_0+1}^{L} \alpha_{j,1} \left(1 - \bar{F}_{i1}(u_L)\right)^{L-j}. \tag{18}$$
To deal with the first term on the right-hand side of (18), we use Taylor's expansion of $\bar{F}_{i1}(u_L)$ to get
$$\begin{aligned} \sum_{j=2}^{L_0} \alpha_{j,1} \left(1 - \bar{F}_{i1}(u_L)\right)^{L-j} &\le \alpha_0 \left(1 - \bar{F}_{i1}(u_L)\right)^{L-L_0} \left[1 + \left(1 - \bar{F}_{i1}(u_L)\right) + \cdots + \left(1 - \bar{F}_{i1}(u_L)\right)^{L_0-2}\right] \\ &\le \alpha_0 (L_0 - 1) \left(1 - \bar{F}_{i1}(u_L)\right)^{L-L_0} \\ &= \alpha_0 (L_0 - 1) \left(1 - \bar{F}_{i1}(b_L) - a_L x \bar{F}_{i1}'(b_L + \theta_2 a_L x)\right)^{L-L_0} \\ &= \alpha_0 (L_0 - 1)\, N^{-\frac{L-L_0}{L}} \left(1 - \frac{x \bar{F}_{i1}'(b_L + \theta_2 a_L x)}{N \bar{F}_{i1}'(b_L)}\right)^{L-L_0} \to 0 \end{aligned} \tag{19}$$
as $N$ goes to infinity, where the parameter $\theta_2 \in (0, 1)$.
In the same manner, we bound the second term on the right-hand side of (18) by
$$\begin{aligned} \lim_{N \to \infty} \sum_{j=L_0+1}^{L} \alpha_{j,1} \left(1 - \bar{F}_{i1}(u_L)\right)^{L-j} &< \varepsilon \lim_{N \to \infty} \left[1 + \left(1 - \bar{F}_{i1}(u_L)\right) + \cdots + \left(1 - \bar{F}_{i1}(u_L)\right)^{L-L_0-1}\right] \\ &= \varepsilon \lim_{N \to \infty} \frac{1 - \left(1 - \bar{F}_{i1}(u_L)\right)^{L-L_0}}{\bar{F}_{i1}(u_L)} \\ &= \varepsilon \lim_{N \to \infty} (L - L_0) \left(1 - \bar{F}_{i1}(u_L)\right)^{L-L_0-1} \\ &= \varepsilon \lim_{N \to \infty} (L - L_0) \left(1 - \bar{F}_{i1}(b_L) - a_L x \bar{F}_{i1}'(b_L + \theta_2 a_L x)\right)^{L-L_0-1} \\ &= \varepsilon \lim_{N \to \infty} (L - L_0)\, N^{-\frac{L-L_0-1}{L}} \left(1 - \frac{x \bar{F}_{i1}'(b_L + \theta_2 a_L x)}{N \bar{F}_{i1}'(b_L)}\right)^{L-L_0-1} \\ &= \varepsilon \bar{A} e^{-\bar{A} x}. \end{aligned} \tag{20}$$
Substituting (19) and (20) into (18), we finish the proof of the theorem. □
With the help of Theorem 2, we can now proceed to obtain the limiting distribution of $(\bar{M}_N - b_L)/a_L$.
Theorem 3.
Suppose the strictly stationary sequence $\{\bar{X}_{ij}, i = 1, \dots, N, j = 1, \dots, L\}$ satisfies the conditions of Theorem 2 and, moreover, that the marginal distribution $\bar{F}_{i1}$ has a continuous first derivative. Then, with the normalizing constants $a_L$ and $b_L$ defined in Theorem 2,
$$\lim_{N \to \infty} P(\bar{M}_N \le a_L x + b_L) = \exp\{-e^{-\bar{A} x}\} =: G_{\bar{A}}(x). \tag{21}$$
Proof.
By the definitions of $\bar{M}_N$ and $\bar{Y}_i$, we have
$$P(\bar{M}_N \le a_L x + b_L) = P^N(\bar{Y}_i \le a_L x + b_L) = \left[1 - P(\bar{Y}_i > a_L x + b_L)\right]^N. \tag{22}$$
According to the asymptotic result of Theorem 2, it follows easily that
$$\lim_{N \to \infty} P(\bar{M}_N \le a_L x + b_L) = \lim_{N \to \infty} \left[1 - P(\bar{Y}_i > a_L x + b_L)\right]^N = \lim_{N \to \infty} \left[1 - P^L(\bar{X}_{i1} > a_L x + b_L)\right]^N = \lim_{N \to \infty} \left[1 - \left(1 - \bar{F}_{i1}(a_L x + b_L)\right)^L\right]^N. \tag{23}$$
Note that the resulting limit (23) has the same form as (6). We can now proceed analogously to the proof of Theorem 1 to obtain the limiting distribution (21); the details are omitted here. □

## 4. Rate of Convergence of the Minimum-Maximum Model

In this section, we are concerned with the convergence rate of $\left|P\left(\frac{M_n - b_n}{a_n} \le x\right) - G_A(x)\right|$. We present the proof for the i.i.d. random sequence; the strictly stationary case can be treated in much the same way. Our convergence analysis relies on the following inequality (Proposition 1 in ):
$$\sup_{0 \le x \le n} \left|\left(1 - \frac{x}{n}\right)^n - e^{-x}\right| < n^{-1}(2 + n^{-1}) e^{-2}, \qquad n \ge 1. \tag{24}$$
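The inequality (24) is straightforward to sanity-check numerically; the sketch below (illustrative, grid-based, stdlib only) evaluates both sides for a few values of $n$:

```python
import math

def max_gap(n, grid=2000):
    """Approximate sup over 0 <= x <= n of |(1 - x/n)^n - e^(-x)| on a grid."""
    return max(abs((1.0 - x / n) ** n - math.exp(-x))
               for x in (n * k / grid for k in range(grid + 1)))

for n in (1, 5, 50):
    bound = (2.0 + 1.0 / n) * math.exp(-2.0) / n   # right-hand side of (24)
    print(n, max_gap(n), bound)                     # the gap stays below the bound
```

The bound is quite tight: for $n = 50$ the observed supremum is already within a few percent of the right-hand side.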
Theorem 4.
Suppose the random sequence $\{X_{ij}, 1 \le i \le n, 1 \le j \le m\}$ satisfies the conditions of Theorem 1, with the normalizing constants $a_n$ and $b_n$ defined there. Then the following bound holds for $n \ge \max\{\frac{1}{A}, 1\}$ and $x \in D_n$:
$$\left|P\left(\frac{M_n - b_n}{a_n} \le x\right) - G_A(x)\right| < \frac{1}{n}\left(2 + \frac{1}{n}\right)\frac{1}{e^2} + G_A(x)\left\{A e^{-Ax} \left|x(1 - \gamma_n(x))\right| + \frac{1}{n}\left(2 + \frac{1}{An}\right)\frac{1}{A e^2}\right\}, \tag{25}$$
where $\gamma_n(x) = F'(\theta a_n x + b_n)/F'(b_n)$ with $\theta \in (0, 1)$, and $D_n := \{x : x \gamma_n(x) < n\} \cap \{x : 0 < x < n\}$.
Proof.
Combining (6) with (8), we have
$$\begin{aligned} \left|P\left(\frac{M_n - b_n}{a_n} \le x\right) - G_A(x)\right| &= \left|\left\{1 - \frac{1}{n}\left[1 - \frac{x F'(\theta a_n x + b_n)}{n F'(b_n)}\right]^{An}\right\}^n - G_A(x)\right| \\ &\le \left|\left\{1 - \frac{1}{n}\left[1 - \frac{x \gamma_n(x)}{n}\right]^{An}\right\}^n - \left(1 - \frac{e^{-Ax}}{n}\right)^n\right| + \left|\left(1 - \frac{e^{-Ax}}{n}\right)^n - G_A(x)\right|, \end{aligned} \tag{26}$$
where $\gamma_n(x) := F'(\theta a_n x + b_n)/F'(b_n)$.
According to (24), we can bound the second term on the right-hand side of the "≤" sign in (26) as
$$\left|\left(1 - \frac{e^{-Ax}}{n}\right)^n - G_A(x)\right| < n^{-1}(2 + n^{-1}) e^{-2}, \qquad x > -\frac{\ln n}{A},\ n \ge 1. \tag{27}$$
We now turn to the first term; it is easy to check that
$$\left|\left\{1 - \frac{1}{n}\left(1 - \frac{x \gamma_n(x)}{n}\right)^{An}\right\}^n - \left(1 - \frac{e^{-Ax}}{n}\right)^n\right| \le \left\{1 - \frac{1}{n}\left(1 - \frac{x \gamma_n(x)}{n}\right)^{An}\right\}^{n-1} \left|\left(1 - \frac{x \gamma_n(x)}{n}\right)^{An} - e^{-Ax}\right|, \tag{28}$$
where we have used the following inequality:
$$u^n - v^n \le n u^{n-1} (u - v), \qquad u > v > 0. \tag{29}$$
Combining Theorem 1 with the monotonicity of $(1 - \frac{x}{n})^n$ in $n$, the first factor on the right-hand side of the "≤" sign in (28) can be controlled by $G_A(x)$. Meanwhile, since $m = An$ in our model, the remaining factor can be handled as
$$\left|\left(1 - \frac{A x \gamma_n(x)}{m}\right)^m - e^{-Ax}\right| \le \left|\left(1 - \frac{A x \gamma_n(x)}{m}\right)^m - \left(1 - \frac{Ax}{m}\right)^m\right| + \left|\left(1 - \frac{Ax}{m}\right)^m - e^{-Ax}\right|. \tag{30}$$
Combining (29) with the monotonicity of $(1 - \frac{x}{n})^n$ in $n$, we have
$$\left|\left(1 - \frac{A x \gamma_n(x)}{m}\right)^m - \left(1 - \frac{Ax}{m}\right)^m\right| \le A \max\left\{\left(1 - \frac{A x \gamma_n(x)}{m}\right)^{m-1}, \left(1 - \frac{Ax}{m}\right)^{m-1}\right\} \left|x(1 - \gamma_n(x))\right| < A e^{-Ax} \left|x(1 - \gamma_n(x))\right|, \tag{31}$$
where $x$ must satisfy both $x \gamma_n(x) < n$ and $0 < x < n$. For the second term on the right-hand side of the "≤" sign in (30), applying (24) again yields
$$\left|\left(1 - \frac{Ax}{m}\right)^m - e^{-Ax}\right| < m^{-1}(2 + m^{-1}) e^{-2}, \qquad n \ge \frac{1}{A}. \tag{32}$$
Combining (26)–(28) with (31) and (32) completes the proof of the theorem. □
Remark 2.
Note that $\gamma_n(x) = F'(\theta a_n x + b_n)/F'(b_n)$; according to (11), $\lim_{n \to \infty} \gamma_n(x) = 1$. Hence, as $n \to \infty$, the domain $D_n := \{x : x \gamma_n(x) < n\} \cap \{x : 0 < x < n\}$ expands to $(0, \infty)$, so for $n$ large enough we may regard $D_n$ as essentially free of $n$.
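As a concrete illustration of this remark (an example added here, not from the original), take the exponential distribution $F(x) = 1 - e^{-x}$, for which $F'(x) = e^{-x}$ and $a_n = 1/n$; then

```latex
\gamma_n(x) \;=\; \frac{F'(\theta a_n x + b_n)}{F'(b_n)}
           \;=\; \frac{e^{-\theta a_n x - b_n}}{e^{-b_n}}
           \;=\; e^{-\theta a_n x},
\qquad
|1 - \gamma_n(x)| \;=\; 1 - e^{-\theta a_n x} \;\le\; \theta a_n x \;\le\; \frac{x}{n},
```

using $1 - e^{-t} \le t$. Since $\gamma_n(x) \le 1$ here, $x \gamma_n(x) \le x < n$, so $D_n$ is all of $(0, n)$ in this case.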
Note that Theorem 4 shows the point-wise convergence of the distribution of (1) to the extreme value distribution $G_A(x)$. However, a closer examination of this theorem yields a uniform rate of convergence for some probability distributions. With the help of Taylor's expansion, we are now in a position to establish the uniform convergence rate.
Corollary 1.
Suppose the random sequence $\{X_{ij}, 1 \le i \le n, 1 \le j \le m\}$ satisfies the conditions of Theorem 4 and the distribution $F$ has continuous and bounded first and second derivatives. Then there exist constants $K$ and $K'$ independent of $n$ such that the following uniform convergence rate holds for $n \ge \max\{\frac{1}{A}, 1\}$:
$$\sup_{x \in D_n} \left|P\left(\frac{M_n - b_n}{a_n} \le x\right) - G_A(x)\right| < K' n^{-1}. \tag{33}$$
Proof.
According to the proof of Theorem 4, it remains to verify that $|1 - \gamma_n(x)| = O(n^{-1})$. Combining Taylor's expansion of $F'(\theta a_n x + b_n)$ at $b_n$ with (7) yields
$$F'(\theta a_n x + b_n) = F'(b_n) + \theta a_n x F''(\theta_1 \theta a_n x + b_n) = F'(b_n) + n^{-1 - \frac{1}{An}}\, \frac{\theta x F''(\theta_1 \theta a_n x + b_n)}{F'(b_n)}, \qquad 0 < \theta_1 < 1. \tag{34}$$
Therefore,
$$|1 - \gamma_n(x)| = n^{-1 - \frac{1}{An}}\, \theta x \left|\frac{F''(\theta_1 \theta a_n x + b_n)}{(F'(b_n))^2}\right|. \tag{35}$$
Note that $F'$ and $F''$ are bounded on $D_n$ and $0 < \theta_1 < 1$; thus, we can assert that
$$|1 - \gamma_n(x)| < K x\, n^{-1 - \frac{1}{An}}, \tag{36}$$
where $K$ is a positive constant independent of $n$ satisfying
$$\left|\frac{F''(\theta_1 \theta a_n x + b_n)}{(F'(b_n))^2}\right| \le K. \tag{37}$$
Substituting (36) into (25), we obtain
$$\left|P\left(\frac{M_n - b_n}{a_n} \le x\right) - G_A(x)\right| < n^{-1} \left\{\left(2 + \frac{1}{n}\right)\frac{1}{e^2} + \left(2 + \frac{1}{An}\right)\frac{1}{A e^2} + K A x^2 e^{-Ax}\, n^{-\frac{1}{An}}\right\}. \tag{38}$$
It is easily shown that the function $x^2 e^{-Ax}$ is bounded by $4(Ae)^{-2}$ for $x \in D_n$, which implies that the convergence is uniform on $D_n$. □

## 5. Numerical Experiments

In this section, numerical experiments are conducted to validate the approximation capability of $G_A(x)$ and the uniform convergence rate given by Theorem 1 and Corollary 1, respectively.
We begin by generating random sequences from three types of distributions (uniform, exponential and Cauchy); Theorem 1 then gives explicit expressions for the corresponding normalizing constants $a_n$ and $b_n$, which are listed in Table 1.
Here, we choose $A = \frac{1}{2}, 1, 20$, respectively. The results are presented in Table 2 and Table 3 and in Figure 1, Figure 2 and Figure 3. Note that the second derivative of the Cauchy distribution function is unbounded on $D_n$; consequently, this distribution need not converge to $G_A(x)$ uniformly on this domain. For this reason, we only examine the asymptotic behavior of its extremes graphically; the results are presented in Figure 3. For the other distributions, the results suggest that the uniform convergence rate is of order $O(\frac{1}{n})$ for different $A$, as predicted by Corollary 1. Figure 1, Figure 2 and Figure 3 show that the asymptotic density essentially coincides with the extreme value density $G_A'(x)$, as noted in Remark 1.
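For the exponential row of Table 1, the distribution of $M_n$ has the closed form $\{1 - [1 - F(a_n x + b_n)]^m\}^n$, so the supremum error and its empirical order can be computed without simulation. The sketch below is illustrative only: $A = 1$ and a finite grid over $x$ are assumptions, not choices made in the paper.

```python
import math

def sup_error(n, A=1.0, grid=2000):
    """sup_x |{1 - [1 - F(a_n x + b_n)]^m}^n - G_A(x)| for F = Exp(1), on a grid."""
    m = max(int(A * n), 1)
    a_n, b_n = 1.0 / n, math.log(n) / (A * n)
    worst = 0.0
    for k in range(1, grid + 1):
        x = 20.0 * k / grid                    # grid over (0, 20]
        Fu = 1.0 - math.exp(-(a_n * x + b_n))  # exponential CDF at a_n x + b_n
        exact = (1.0 - (1.0 - Fu) ** m) ** n
        worst = max(worst, abs(exact - math.exp(-math.exp(-A * x))))
    return worst

r10, r100 = sup_error(10), sup_error(100)
print(r10, r100, math.log(r100 / r10) / math.log(10.0))  # fitted order close to -1
```

Fitting the order between successive $n$ in this way mirrors the "Rate" and "Order" columns of Table 2 and Table 3.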

## 6. Conclusions

In this paper, we have derived the limiting distributions and convergence rates for the minimum-maximum model. Under this framework, we considered an i.i.d. random sequence and a stationary random sequence satisfying a strong mixing condition, respectively. Without specifying the form of the distribution functions of the two sequences, our results show that the distribution of $M_n$ converges to a non-standard Gumbel distribution. For distribution functions with continuous and bounded first derivatives, the limiting distribution has the convergence properties stated in Theorem 4; in particular, for distribution functions with continuous and bounded second derivatives, our result reveals a uniform convergence rate of $O(n^{-1})$.

## Author Contributions

Conceptualization, L.P. and L.G.; methodology, L.P. and L.G.; software, L.P.; validation, L.P.; formal analysis, L.P.; investigation, L.P.; resources, L.P.; data curation, L.P.; writing—original draft preparation, L.P.; writing—review and editing, L.P.; visualization, L.P.; supervision, L.P. and L.G.; project administration, L.P.

## Funding

This research is supported by the Major State Basic Research Development Program of China under Grant No. 2017YFA60700602.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Fisher, R.A.; Tippett, L.H.C. Limiting forms of the frequency distribution of the largest or smallest member of a sample. Math. Proc. Camb. Philos. Soc. 1928, 24, 180–190. [Google Scholar] [CrossRef]
2. Gnedenko, B. Sur la distribution limite du terme maximum d’une serie aleatoire. Ann. Math. 1943, 44, 423–453. [Google Scholar] [CrossRef]
3. Haan, L.D. Sample extremes: An elementary introduction. Stat. Neerl. 1976, 30, 161–172. [Google Scholar] [CrossRef]
4. Leadbetter, M.R.; Lindgren, G.; Rootzén, H. Extremes and Related Properties of Random Sequences and Processes; Springer: Berlin/Heidelberg, Germany, 1983. [Google Scholar]
5. Resnick, S.I. Extreme Values, Regular Variation and Point Processes; World Book Inc.: Chicago, IL, USA, 2011. [Google Scholar]
6. Watson, G.S. Extreme Values in Samples from m-Dependent Stationary Stochastic Processes. Ann. Math. Stat. 1954, 25, 798–800. [Google Scholar] [CrossRef]
7. Loynes, R.M. Extreme Values in Uniformly Mixing Stationary Stochastic Processes. Ann. Math. Stat. 1965, 36, 993–999. [Google Scholar] [CrossRef]
8. Deo, C.M. A Note on Strong-Mixing Gaussian Sequences. Ann. Probab. 1973, 1, 186–187. [Google Scholar] [CrossRef]
9. Anderson, C. Contributions to the Asymptotic Theory of Extreme Values. Ph.D. Thesis, University of London, London, UK, 1971. [Google Scholar]
10. Cohen, J.P. Convergence rates for the ultimate and pentultimate approximations in extreme-value theory. Adv. Appl. Probab. 1982, 14, 833–854. [Google Scholar] [CrossRef]
11. Hall, P. On the rate of convergence of normal extremes. J. Appl. Probab. 1979, 16, 433–439. [Google Scholar] [CrossRef]
12. Hall, W.; Wellner, J.A. The rate of convergence in law of the maximum of an exponential sample. Stat. Neerl. 1979, 33, 151–154. [Google Scholar] [CrossRef]
13. Peng, Z.; Nadarajah, S.; Lin, F. Convergence rate of extremes for the general error distribution. J. Appl. Probab. 2010, 47, 668–679. [Google Scholar] [CrossRef]
14. Smith, R.L. Uniform rates of convergence in extreme-value theory. Adv. Appl. Probab. 1982, 14, 600–622. [Google Scholar] [CrossRef]
15. Welsch, R.E. A Weak Convergence Theorem for Order Statistics From Strong-Mixing Processes. Ann. Math. Stat. 1971, 42, 1637–1646. [Google Scholar] [CrossRef]
16. Leadbetter, M.R. On extreme values in stationary sequences. Z. Wahrscheinlichkeitstheorie Verw. Geb. 1974, 28, 289–303. [Google Scholar] [CrossRef]
Figure 1. The sketches above show $G_A'(x)$ (red star) and $f_n(x)$ (blue line) for the i.i.d. uniform sequence when $A = \frac{1}{2}$, 1 and 20, respectively.
Figure 2. The sketches above show $G_A'(x)$ (red star) and $f_n(x)$ (blue line) for the i.i.d. exponential sequence when $A = \frac{1}{2}$, 1 and 20, respectively.
Figure 3. The sketches above show $G_A'(x)$ (red star) and $f_n(x)$ (blue line) for the i.i.d. Cauchy sequence when $A = \frac{1}{2}$, 1 and 20, respectively.
Table 1. Normalized constants $a_n$ and $b_n$ for the corresponding distributions.

| Distribution Function $F$ | Normalized Constants $a_n$ and $b_n$ |
| --- | --- |
| $F(x) = x$ for $x \in [0, 1]$; $0$ for $x < 0$; $1$ for $x > 1$ | $a_n = n^{-1-\frac{1}{An}}$, $\quad b_n = 1 - n^{-\frac{1}{An}}$ |
| $F(x) = 1 - e^{-x}$ for $x \ge 0$; $0$ for $x < 0$ | $a_n = \frac{1}{n}$, $\quad b_n = \frac{\ln n}{An}$ |
| $F(x) = \frac{1}{\pi}\arctan x + \frac{1}{2}$, $x \in \mathbb{R}$ | $a_n = n^{-1-\frac{1}{An}}\, \pi \left(1 + \cot^2\left(n^{-\frac{1}{An}} \pi\right)\right)$, $\quad b_n = \cot\left(n^{-\frac{1}{An}} \pi\right)$ |
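The constants in Table 1 follow from solving $1 - F(b_n) = n^{-\frac{1}{An}}$ of (7), together with $a_n = (1 - F(b_n))/(n F'(b_n))$. A small stdlib-only sketch (illustrative, not part of the paper's experiments) checks this defining relation for each row:

```python
import math

def constants_uniform(n, A):
    """a_n, b_n for F(x) = x on [0, 1]."""
    t = n ** (-1.0 / (A * n))
    return n ** (-1.0 - 1.0 / (A * n)), 1.0 - t

def constants_exponential(n, A):
    """a_n, b_n for F(x) = 1 - e^(-x)."""
    return 1.0 / n, math.log(n) / (A * n)

def constants_cauchy(n, A):
    """a_n, b_n for the standard Cauchy distribution."""
    t = n ** (-1.0 / (A * n)) * math.pi
    c = 1.0 / math.tan(t)                  # cot(n^(-1/(An)) * pi)
    return n ** (-1.0 - 1.0 / (A * n)) * math.pi * (1.0 + c * c), c

# verify 1 - F(b_n) = n^(-1/(A n)) for each row of Table 1
n, A = 1000, 1.0
target = n ** (-1.0 / (A * n))
_, b_u = constants_uniform(n, A)
_, b_e = constants_exponential(n, A)
_, b_c = constants_cauchy(n, A)
print(abs((1.0 - b_u) - target))                           # uniform: ~0
print(abs(math.exp(-b_e) - target))                        # exponential: ~0
print(abs((0.5 - math.atan(b_c) / math.pi) - target))      # Cauchy: ~0
```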
Table 2. Uniform convergence rate and convergence order for the uniform distribution.

| $n$ | Rate ($A = \frac{1}{2}$) | Order | $n$ | Rate ($A = 1$) | Order | $n$ | Rate ($A = 20$) | Order |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2 | 0.3078 | | 1 | 0.3679 | | 1 | 0.3679 | |
| 10 | 0.0516 | $-1.1097$ | 10 | 0.0246 | $-1.1748$ | 10 | 0.0192 | $-1.2824$ |
| $10^2$ | 0.0048 | $-1.0314$ | $10^2$ | 0.0024 | $-1.0107$ | $10^2$ | 0.0018 | $-1.0280$ |
| $10^3$ | $4.7561 \times 10^{-4}$ | $-1.0040$ | $10^3$ | $2.3545 \times 10^{-4}$ | $-1.0083$ | $10^3$ | $1.8402 \times 10^{-4}$ | $-0.9904$ |
| $10^4$ | $4.7526 \times 10^{-5}$ | $-1.0003$ | $10^4$ | $2.3535 \times 10^{-5}$ | $-1.0002$ | $10^4$ | $1.8395 \times 10^{-5}$ | $-1.0002$ |
Table 3. Uniform convergence rate and convergence order for the exponential distribution.

| $n$ | Rate ($A = \frac{1}{2}$) | Order | $n$ | Rate ($A = 1$) | Order | $n$ | Rate ($A = 20$) | Order |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2 | 0.1617 | | 1 | 0.3679 | | 1 | 0.3679 | |
| 10 | 0.0280 | $-1.0896$ | 10 | 0.0279 | $-1.1195$ | 10 | 0.0248 | $-1.1718$ |
| $10^2$ | 0.0027 | $-1.0136$ | $10^2$ | 0.0027 | $-1.0107$ | $10^2$ | 0.0027 | $-1.0280$ |
| $10^3$ | $2.7070 \times 10^{-4}$ | $-1.0013$ | $10^3$ | $2.7070 \times 10^{-4}$ | $-1.0083$ | $10^3$ | $2.5762 \times 10^{-4}$ | $-0.9904$ |
| $10^4$ | $2.7060 \times 10^{-5}$ | $-1.0002$ | $10^4$ | $2.7060 \times 10^{-5}$ | $-1.0002$ | $10^4$ | $1.9649 \times 10^{-5}$ | $-1.0002$ |

Peng, L.; Gao, L. Limiting Distributions for the Minimum-Maximum Models. Mathematics 2019, 7, 719. https://doi.org/10.3390/math7080719