Article

Large Deviations for the Maximum of the Absolute Value of Partial Sums of Random Variable Sequences

Faculty of Science, College of Statistics and Data Science, Beijing University of Technology, Beijing 100124, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(5), 758; https://doi.org/10.3390/math10050758
Submission received: 14 January 2022 / Revised: 25 February 2022 / Accepted: 25 February 2022 / Published: 27 February 2022
(This article belongs to the Special Issue Limit Theorems of Probability Theory)

Abstract

Let $\{\xi_i : i \ge 1\}$ be a sequence of independent, identically distributed (i.i.d. for short) centered random variables, and let $S_n = \xi_1 + \cdots + \xi_n$ denote the partial sums of $\{\xi_i\}$. We show that the sequence $\{\frac{1}{n}\max_{1\le k\le n}|S_k| : n \ge 1\}$ satisfies the large deviation principle (LDP, for short) with a good rate function under the assumption that $P(\xi_1 \ge x)$ and $P(\xi_1 \le -x)$ have the same exponential decrease.

1. Introduction

Throughout this paper, on a probability space $(\Omega, \mathcal{F}, P)$, let $\{\xi_i : i \ge 1\}$ be a sequence of independent, identically distributed (i.i.d.) centered real-valued random variables. Denote the partial sums of $\{\xi_i : i \ge 1\}$ by $S_n := \sum_{i=1}^{n}\xi_i$.
The seminal paper of Cramér [1] motivates our work. Cramér showed that $\{\frac{1}{n}S_n : n \ge 1\}$ satisfies the large deviation principle (LDP) with rate function $\Lambda^*(x)$ (see Theorem 1) under the famous Cramér condition, i.e., that there exists $\delta > 0$ such that $Ee^{\lambda|\xi_1|} < \infty$ for all $|\lambda| < \delta$. Cramér's theorem takes the following form: for any measurable set $B \subset \mathbb{R}$,
$$-\inf_{x \in B^{\circ}} \Lambda^*(x) \le \liminf_{n\to\infty} \frac{1}{n}\log P\Big(\frac{S_n}{n} \in B\Big), \tag{1}$$
$$\limsup_{n\to\infty} \frac{1}{n}\log P\Big(\frac{S_n}{n} \in B\Big) \le -\inf_{x \in \bar{B}} \Lambda^*(x), \tag{2}$$
where $B^{\circ}$ denotes the interior of $B$ and $\bar{B}$ denotes its closure. We call inequality (1) the large deviations lower bound and inequality (2) the large deviations upper bound. If both hold, then the sequence $\{\frac{1}{n}S_n : n \ge 1\}$ satisfies the LDP with rate function $\Lambda^*(x)$. In other words, the theory of large deviations deals with large fluctuations whose probabilities decay exponentially.
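To make the exponential decay in (1) and (2) concrete, here is a small numerical sketch (our illustration, not part of the original paper). For i.i.d. standard normal summands, $\Lambda^*(x) = x^2/2$, and since $S_n \sim N(0,n)$ the tail probability $P(S_n \ge nx)$ can be evaluated exactly via the complementary error function; the normalized log-probability then approaches $-\Lambda^*(x)$ as $n$ grows.

```python
import math

def gaussian_ldp_rate(n: int, x: float) -> float:
    """-(1/n) log P(S_n/n >= x) for i.i.d. standard normal summands.

    S_n ~ N(0, n), so P(S_n >= n*x) = 0.5 * erfc(x * sqrt(n) / sqrt(2)),
    which can be evaluated exactly, with no simulation needed.
    """
    tail = 0.5 * math.erfc(x * math.sqrt(n) / math.sqrt(2.0))
    return -math.log(tail) / n

# Cramer's rate function for N(0, 1) is Lambda*(x) = x^2 / 2.
# For x = 1.0 the rates decrease toward Lambda*(1.0) = 0.5 as n grows:
rates = [gaussian_ldp_rate(n, 1.0) for n in (10, 100, 1000)]
```

The sub-exponential prefactor of the Gaussian tail makes the finite-$n$ rate slightly larger than $\Lambda^*(x)$, which is exactly why the LDP is stated on the logarithmic scale.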
The tail probability $P(S_n \ge nx)$ for independent random variables has been studied in detail in many papers. Nagaev [2] showed that the partial sums $\{\frac{1}{n}S_n : n \ge 1\}$ of i.i.d. random variables satisfy an LDP under the assumption that $P(\xi_1 \ge x)$ decreases like a power function. Later, Nagaev [3] obtained bounds for probabilities of partial sums of independent random variables under the weaker hypothesis that generalized and ordinary moments are finite. Under the Cramér condition, Kiesel and Stadtmüller [4] extended Cramér's theorem to weighted sums of i.i.d. random variables. Moreover, Gantert, Ramanan and Rembart [5] studied the LDP for weighted sums of i.i.d. random variables with stretched exponential tails.
The tail probability $P(\max_{1\le k\le n}S_k \ge nx)$ has also been investigated in depth. Under the Cramér condition, Borovkov and Korshunov [6] treated time-homogeneous Markov chains and Shklyaev [7] treated i.i.d. random variables; both obtained LDPs. Soon after, Kozlov [8] obtained LDP results by applying a direct probabilistic approach to $P(\max_{1\le k\le n}S_k \ge nx)$ for i.i.d. non-degenerate random variables satisfying the Cramér condition. Recently, Fan, Grama and Liu [9] established the LDP for the sequence $\{\frac{1}{n}\max_{1\le k\le n}S_k : n \ge 1\}$ of martingale differences under a finite subexponential moment condition.
Feller [10] noted the importance of estimating the tail probability $P(\max_{1\le k\le n}|S_k| \ge nx)$, which has attracted broad attention in recent decades. Li [11] established an upper bound for $P(\max_{1\le k\le n}|S_k| \ge nx)$ for martingale differences bounded in $L^p$. For strictly stationary, negatively associated random variables, Xing and Yang [12] obtained exponential inequalities for the maximum of the absolute value of partial sums via classical blocking and truncation techniques. Moreover, Fan, Grama and Liu [13] obtained an upper bound for $P(\max_{1\le k\le n}|S_k| \ge nx)$ for martingale differences with bounded conditional subexponential moments.
The results above show that, for the sequence $\{\frac{1}{n}\max_{1\le k\le n}|S_k| : n \ge 1\}$, only the large deviations upper bound has been obtained so far. To fill this gap, we show that this sequence, built from i.i.d. random variables, satisfies the full LDP, i.e., both the large deviations lower bound and the large deviations upper bound, under the assumption that $P(\xi_1 \ge x)$ and $P(\xi_1 \le -x)$ have the same exponential decrease (see Corollary 1).
This article is organized as follows. Section 2 introduces the necessary definitions and theorems. Section 3 presents the main theorems and corollaries. Section 4 provides the lemmas needed to prove our conclusions as well as the proofs of the main results.

2. Preliminaries

Before we present our results and proofs, we introduce some definitions and theorems, which can be found in [14,15].
Definition 1.
(1) A function $I: \mathbb{R} \to [0,\infty]$ is called a rate function if it is non-negative and lower semicontinuous, i.e., the level sets $\{x : I(x) \le \alpha\}$ are closed for every $\alpha \in \mathbb{R}$. (2) A rate function $I$ is said to be good if, in addition, its level sets are compact.
Definition 2.
We say that a sequence of random variables $\{\xi_n : n \ge 1\}$ satisfies the LDP in $\mathbb{R}$ with rate function $I$ if $I$ is a rate function and, for any measurable set $B \in \mathcal{B}(\mathbb{R})$,
$$-\inf_{x\in B^{\circ}}I(x) \le \liminf_{n\to\infty}\frac{1}{n}\log P(\xi_n \in B) \le \limsup_{n\to\infty}\frac{1}{n}\log P(\xi_n \in B) \le -\inf_{x\in\bar{B}}I(x),$$
where $B^{\circ}$ denotes the interior of $B$, and $\bar{B}$ denotes its closure.
Theorem 1.
(Cramér's theorem) Let $\{\xi_i : i \ge 1\}$ be a sequence of i.i.d. real-valued random variables on $(\Omega, \mathcal{F}, P)$. Let $S_n = \sum_{i=1}^{n}\xi_i$, let $\Lambda(\theta) = \log Ee^{\theta\xi_1}$ be the log moment generating function of $\xi_1$, and let $\Lambda^*(x) = \sup_{\theta\in\mathbb{R}}\{\theta x - \Lambda(\theta)\}$ be the convex conjugate of $\Lambda$. If $\Lambda$ is finite in a neighborhood of zero, then $\{\frac{S_n}{n} : n \ge 1\}$ satisfies the LDP in $\mathbb{R}$ with rate function $\Lambda^*(x)$, i.e., for any measurable set $B \subset \mathbb{R}$,
$$-\inf_{x\in B^{\circ}}\Lambda^*(x) \le \liminf_{n\to\infty}\frac{1}{n}\log P\Big(\frac{S_n}{n}\in B\Big) \le \limsup_{n\to\infty}\frac{1}{n}\log P\Big(\frac{S_n}{n}\in B\Big) \le -\inf_{x\in\bar{B}}\Lambda^*(x).$$
Theorem 2.
(Principle of the largest term) Let $(a_n)$ and $(b_n)$ be sequences in $\mathbb{R}^{+}$. Then,
$$\limsup_{n\to\infty}\frac{1}{n}\log(a_n+b_n) \le \limsup_{n\to\infty}\frac{1}{n}\log a_n \vee \limsup_{n\to\infty}\frac{1}{n}\log b_n,$$
and
$$\liminf_{n\to\infty}\frac{1}{n}\log(a_n+b_n) \ge \liminf_{n\to\infty}\frac{1}{n}\log a_n \vee \liminf_{n\to\infty}\frac{1}{n}\log b_n.$$
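As a quick numerical illustration of the principle of the largest term (ours, not part of the original text): for $a_n = e^{-2n}$ and $b_n = e^{-5n}$, the slower-decaying term dominates the sum, so $\frac{1}{n}\log(a_n+b_n)$ converges to $-2 = (-2)\vee(-5)$.

```python
import math

def log_rate_of_sum(n: int, r_a: float, r_b: float) -> float:
    """(1/n) * log(a_n + b_n) for a_n = exp(-r_a * n), b_n = exp(-r_b * n)."""
    return math.log(math.exp(-r_a * n) + math.exp(-r_b * n)) / n

# The sum inherits the exponential rate of the larger (slower-decaying) term:
val = log_rate_of_sum(50, 2.0, 5.0)
# val is numerically indistinguishable from -2.0, i.e. max(-2, -5).
```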

3. Main Results

Let $\{\xi_i : i \ge 1\}$ be a sequence of i.i.d. centered random variables and denote $S_n := \sum_{i=1}^{n}\xi_i$. We investigate the LDP for the sequence $\{\frac{1}{n}\max_{1\le k\le n}|S_k| : n \ge 1\}$. The main results of this paper are as follows.
Theorem 3.
Let $\{\xi_i : i \ge 1\}$ be a sequence of i.i.d. random variables. If $E\xi_1 = 0$, $E\xi_1^2 < \infty$ and, for some constants $\alpha \in (0,1)$ and $0 < C_1 \le C_2$,
$$-C_2 \le \liminf_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x) \le \limsup_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x) \le -C_1,$$
then, for all $x > 0$, we have
$$-C_2 x^{\alpha} \le \liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \le \limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \le -C_1 x^{\alpha}.$$
Theorem 4.
Let $\{\xi_i : i \ge 1\}$ be a sequence of i.i.d. random variables. If $E\xi_1 = 0$ and, for some constants $\alpha \in (0,1)$, $0 < C_1 \le C_2$ and $0 < C_3 \le C_4$,
$$-C_2 \le \liminf_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x) \le \limsup_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x) \le -C_1,$$
$$-C_4 \le \liminf_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \le -x) \le \limsup_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \le -x) \le -C_3,$$
then, for all $x > 0$, we have
$$-(C_2\wedge C_4)x^{\alpha} \le \liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}|S_k| \ge nx\Big) \le \limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}|S_k| \ge nx\Big) \le -(C_1\wedge C_3)x^{\alpha}.$$
Corollary 1.
Let $\{\xi_i : i \ge 1\}$ be a sequence of i.i.d. random variables. If $E\xi_1 = 0$ and, for some constants $\alpha \in (0,1)$ and $C > 0$,
$$\lim_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x) = -C,$$
$$\lim_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \le -x) = -C,$$
then, for all $x > 0$,
$$\lim_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}|S_k| \ge nx\Big) = -Cx^{\alpha}.$$
Hence, $\{\frac{1}{n}\max_{1\le k\le n}|S_k| : n \ge 1\}$ satisfies the LDP with the good rate function $I(x) = Cx^{\alpha}$.
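As a concrete example of a distribution satisfying the hypotheses of Corollary 1 (our own illustration, not taken from the original), consider a symmetric Weibull-type random variable:

```latex
% Let \xi_1 be symmetric with P(\xi_1 \ge x) = P(\xi_1 \le -x) = \tfrac{1}{2}e^{-x^\alpha}
% for x \ge 0 and some \alpha \in (0,1). Then E\xi_1 = 0 by symmetry, and
\lim_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x)
  = \lim_{x\to\infty}\frac{\log\tfrac{1}{2} - x^{\alpha}}{x^{\alpha}} = -1,
% and likewise for the left tail, so the hypotheses hold with C = 1.
% Corollary 1 then gives, for every x > 0,
\lim_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}|S_k| \ge nx\Big)
  = -x^{\alpha}.
```

Note that the constant prefactor $\tfrac12$ is washed out on the $x^{\alpha}$ scale, which is typical of large deviation statements.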

4. Proofs of Main Results

To prove our main results, we need the following lemmas, whose proofs we also provide.
Lemma 1.
For a random variable $\xi_1$ with $E\xi_1 = 0$, assume $E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\}) < \infty$ for some constant $\alpha \in (0,1)$. Set $\eta_1 = \xi_1 1_{\{\xi_1 \le y\}}$ for $y > 0$. Then,
$$Ee^{y^{\alpha-1}\eta_1} \le 1 + \frac{y^{2\alpha-2}}{2}E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\}).$$
Proof of Lemma 1.
By Taylor's expansion, we can obtain
$$e^{y^{\alpha-1}\eta_1} \le 1 + y^{\alpha-1}\eta_1 + \frac{y^{2\alpha-2}\eta_1^2}{2}e^{y^{\alpha-1}\eta_1^{+}}.$$
Moreover,
$$\eta_1^{+} = \xi_1 1_{\{0\le\xi_1\le y\}} \le y^{1-\alpha}\xi_1^{\alpha}1_{\{0\le\xi_1\le y\}} \le y^{1-\alpha}(\xi_1^{+})^{\alpha},$$
and $\eta_1^2 \le \xi_1^2$. Then, since $E\eta_1 \le E\xi_1 = 0$, we obtain
$$Ee^{y^{\alpha-1}\eta_1} \le 1 + y^{\alpha-1}E\eta_1 + \frac{y^{2\alpha-2}}{2}E(\eta_1^2 e^{y^{\alpha-1}\eta_1^{+}}) \le 1 + y^{\alpha-1}E\xi_1 + \frac{y^{2\alpha-2}}{2}E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\}) = 1 + \frac{y^{2\alpha-2}}{2}E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\}).$$
Thus, we complete the proof of Lemma 1.  □
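The inequality of Lemma 1 can be sanity-checked numerically. The sketch below (our illustration; the uniform distribution, $\alpha = 0.5$, $y = 2$ and the quadrature grid are all our own choices) evaluates both sides by midpoint quadrature for $\xi_1$ uniform on $[-1,1]$, for which $E\xi_1 = 0$ and the exponential moment is trivially finite.

```python
import math

# Numerical check of Lemma 1 for xi ~ Uniform[-1, 1], alpha = 0.5, y = 2.
# Here eta = xi * 1{xi <= y} equals xi, since xi <= 1 < y almost surely.
alpha, y, N = 0.5, 2.0, 100_000

def expect(f):
    """Midpoint-rule approximation of E[f(xi)] for xi ~ Uniform[-1, 1]."""
    h = 2.0 / N
    return sum(f(-1.0 + (i + 0.5) * h) for i in range(N)) * h / 2.0

lam = y ** (alpha - 1.0)                      # lambda = y^(alpha - 1) = 1/sqrt(2)
lhs = expect(lambda u: math.exp(lam * u))     # E exp(y^{alpha-1} * eta)
moment = expect(lambda u: u * u * math.exp(max(u, 0.0) ** alpha))
rhs = 1.0 + y ** (2.0 * alpha - 2.0) / 2.0 * moment

# Lemma 1 asserts lhs <= rhs; here lhs is about 1.085 and rhs about 1.141.
```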
Lemma 2.
Assume $\{\xi_i : i \ge 1\}$ is a sequence of i.i.d. random variables. If $E\xi_1 = 0$ and, for some constants $\alpha \in (0,1)$ and $C > 0$, $E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\}) < \infty$ and
$$\limsup_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x) \le -C,$$
then, for all $x > 0$,
$$\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \le -x^{\alpha}.$$
Proof of Lemma 2.
Set $\eta_i = \xi_i 1_{\{\xi_i \le y\}}$ for $y > 0$. Then,
$$P\Big(\max_{1\le k\le n}S_k \ge x\Big) \le P\Big(\max_{1\le k\le n}\sum_{i=1}^{k}\eta_i \ge x\Big) + P\Big(\max_{1\le k\le n}\sum_{i=1}^{k}\xi_i 1_{\{\xi_i > y\}} > 0\Big) = P\Big(\exists\,k\in[1,n]: \sum_{i=1}^{k}\eta_i \ge x\Big) + P\Big(\max_{1\le i\le n}\xi_i > y\Big) := P_1 + P_2. \tag{3}$$
For all $x > 0$, define the stopping time
$$T(x) = \min\Big\{k\in[1,n] : \sum_{i=1}^{k}\eta_i \ge x\Big\}, \quad \min\emptyset = 0.$$
We easily obtain
$$1_{\{\exists\,k\in[1,n]:\ \sum_{i=1}^{k}\eta_i \ge x\}} = \sum_{k=1}^{n}1_{\{T(x)=k\}}.$$
In order to obtain the upper bound for $P_1$, we consider the martingale $Z(\lambda) = \{(Z_k(\lambda), \mathcal{F}_k) : k \ge 0\}$, where $\mathcal{F}_k = \sigma(\xi_1,\xi_2,\ldots,\xi_k)$, $k \ge 0$, and
$$Z_k(\lambda) = \prod_{i=1}^{k}\frac{\exp\{\lambda\eta_i\}}{E\exp\{\lambda\eta_i\}}, \quad Z_0(\lambda) = 1.$$
Let
$$Z_{T(x)\wedge k}(\lambda) = \prod_{i=1}^{T(x)\wedge k}\frac{\exp\{\lambda\eta_i\}}{E\exp\{\lambda\eta_i\}}.$$
By the optional stopping theorem, $\{(Z_{T(x)\wedge k}(\lambda), \mathcal{F}_k) : k \ge 0\}$ is also a martingale. Since $E(Z_{T(x)\wedge n}(\lambda)) = E(Z_0(\lambda)) = 1$, we may define the probability measure $dP_\lambda := Z_{T(x)\wedge n}(\lambda)\,dP$ and denote expectation with respect to $P_\lambda$ by $E_\lambda$. Then,
$$P_1 = E_\lambda\big[Z_{T(x)\wedge n}(\lambda)^{-1}1_{\{\exists\,k\in[1,n]:\ \sum_{i=1}^{k}\eta_i \ge x\}}\big] = \sum_{k=1}^{n}E_\lambda\Big[\Big(\prod_{i=1}^{T(x)\wedge n}\frac{\exp\{\lambda\eta_i\}}{E\exp\{\lambda\eta_i\}}\Big)^{-1}1_{\{T(x)=k\}}\Big] = \sum_{k=1}^{n}E_\lambda\Big[\Big(\prod_{i=1}^{k}\frac{\exp\{\lambda\eta_i\}}{E\exp\{\lambda\eta_i\}}\Big)^{-1}1_{\{T(x)=k\}}\Big] = \sum_{k=1}^{n}E_\lambda\Big[\exp\Big\{-\lambda\sum_{i=1}^{k}\eta_i + k\log Ee^{\lambda\eta_1}\Big\}1_{\{T(x)=k\}}\Big]. \tag{4}$$
Under the conditions of Lemma 2, we take $\lambda = y^{\alpha-1}$; by Lemma 1 and the inequality $\log(1+t) \le t$ for $t \ge 0$, we obtain
$$\log Ee^{y^{\alpha-1}\eta_1} \le \log\Big(1 + \frac{y^{2\alpha-2}}{2}E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\})\Big) \le \frac{y^{2\alpha-2}}{2}E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\}). \tag{5}$$
On the set $\{T(x)=k\}$, we have $\sum_{i=1}^{k}\eta_i \ge x$. Combining this fact with (4) and (5), we obtain, for all $x > 0$,
$$P_1 \le \sum_{k=1}^{n}E_\lambda\Big(\exp\Big\{-\lambda x + \frac{ny^{2\alpha-2}}{2}E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\})\Big\}1_{\{T(x)=k\}}\Big) \le \exp\Big\{-y^{\alpha-1}x + \frac{ny^{2\alpha-2}}{2}E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\})\Big\}E_\lambda\Big(\sum_{k=1}^{n}1_{\{T(x)=k\}}\Big) \le \exp\Big\{-y^{\alpha-1}x + \frac{ny^{2\alpha-2}}{2}E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\})\Big\}. \tag{6}$$
Next, using the Markov inequality, we obtain
$$P_2 = P\Big(\bigcup_{i=1}^{n}\{\xi_i > y\}\Big) \le nP(\xi_1 > y) \le nP\big(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\} > y^2 e^{y^{\alpha}}\big) \le \frac{n}{y^2 e^{y^{\alpha}}}E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\}). \tag{7}$$
Let $y = x$. Combining (3), (6) and (7), we obtain
$$P\Big(\max_{1\le k\le n}S_k \ge x\Big) \le \exp\Big\{-x^{\alpha} + \frac{nE(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\})}{2x^{2-2\alpha}}\Big\} + \frac{nE(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\})}{x^2 e^{x^{\alpha}}} = e^{-x^{\alpha}}\Big(\exp\Big\{\frac{nE(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\})}{2x^{2-2\alpha}}\Big\} + \frac{nE(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\})}{x^2}\Big).$$
Replacing $x$ by $nx$ in the above inequality yields
$$P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \le e^{-n^{\alpha}x^{\alpha}}\Big(\exp\Big\{\frac{E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\})}{2n^{1-2\alpha}x^{2-2\alpha}}\Big\} + \frac{E(\xi_1^2\exp\{(\xi_1^{+})^{\alpha}\})}{nx^2}\Big).$$
Taking logarithms and the limit superior on both sides and using the principle of the largest term, we obtain the LDP upper bound
$$\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \le -x^{\alpha}.$$
Now we end the proof of Lemma 2.  □
In the following, we prove Theorem 3.
Proof of Theorem 3.
(i) First, we prove the upper bound. Fix $\varepsilon \in (0,1)$ and set $\xi_1' = C_1^{1/\alpha}\big(\frac{C_1-\varepsilon}{C_1}\big)^{\beta}\xi_1$, with $\beta > \frac{1}{\alpha}$. By the condition given in Theorem 3,
$$\limsup_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x) \le -C_1,$$
for every $\varepsilon > 0$ there exists $x_0$ such that for $x > x_0$,
$$\frac{\log P(\xi_1 \ge x)}{x^{\alpha}} \le -C_1 + \varepsilon;$$
that is,
$$P(\xi_1 \ge x) \le \exp\{-(C_1-\varepsilon)x^{\alpha}\}.$$
Thus, for every $\varepsilon > 0$ there exists $x_0$ such that for $x > x_0$,
$$P(\xi_1' \ge x) \le \exp\{-(C_1-\varepsilon)^{1-\alpha\beta}C_1^{\alpha\beta-1}x^{\alpha}\}.$$
Then,
$$E\{(\xi_1'^{+})^2\exp\{(\xi_1'^{+})^{\alpha}\}\} = \int_0^{\infty}P(\xi_1' \ge x)(2x+\alpha x^{\alpha+1})e^{x^{\alpha}}\,dx \le 2\int_0^{\infty}xe^{-\theta x^{\alpha}}\,dx + \alpha\int_0^{\infty}x^{\alpha+1}e^{-\theta x^{\alpha}}\,dx < \infty,$$
where $\theta = (C_1-\varepsilon)^{1-\alpha\beta}C_1^{\alpha\beta-1} - 1 > 0$.
Since $E\xi_1^2 < \infty$, one easily obtains $E(\xi_1')^2 = C_1^{2/\alpha}\big(\frac{C_1-\varepsilon}{C_1}\big)^{2\beta}E\xi_1^2 < \infty$. Then we obtain
$$E(\xi_1')^2\exp\{(\xi_1'^{+})^{\alpha}\} = E\big((\xi_1')^2\exp\{(\xi_1'^{+})^{\alpha}\}1_{\{\xi_1' \ge 0\}}\big) + E\big((\xi_1')^2 1_{\{\xi_1' < 0\}}\big) \le E(\xi_1'^{+})^2\exp\{(\xi_1'^{+})^{\alpha}\} + E(\xi_1')^2 < \infty.$$
Thus, the sequence $\{\xi_i'\}$ satisfies the conditions of Lemma 2. Denote $S_k' = \sum_{i=1}^{k}\xi_i'$. Then we obtain, for all $x > 0$,
$$\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k' \ge nx\Big) = \limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(C_1^{1/\alpha}\Big(1-\frac{\varepsilon}{C_1}\Big)^{\beta}\max_{1\le k\le n}S_k \ge nx\Big) \le -x^{\alpha}.$$
Thus, we obtain
$$\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \le -C_1\Big(1-\frac{\varepsilon}{C_1}\Big)^{\alpha\beta}x^{\alpha}.$$
Letting $\varepsilon \to 0$, we obtain
$$\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \le -C_1 x^{\alpha}. \tag{8}$$
(ii) Next, we prove the lower bound.
Since $\{\xi_i : i \ge 1\}$ is an i.i.d. sequence,
$$P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \ge P(S_n \ge nx) = P\Big(\xi_1 + \sum_{i=2}^{n}\xi_i \ge n(\varepsilon+x) - n\varepsilon\Big) \ge P\Big(\Big\{\sum_{i=2}^{n}\xi_i \ge -n\varepsilon\Big\}\cap\{\xi_1 \ge n(\varepsilon+x)\}\Big) = P\Big(\sum_{i=2}^{n}\xi_i \ge -n\varepsilon\Big)P(\xi_1 \ge n(\varepsilon+x)). \tag{9}$$
By the weak law of large numbers and the fact that
$$\Big\{\sum_{i=2}^{n}\xi_i \ge -(n-1)\varepsilon\Big\} \subset \Big\{\sum_{i=2}^{n}\xi_i \ge -n\varepsilon\Big\},$$
we know that
$$\lim_{n\to\infty}P\Big(\sum_{i=2}^{n}\xi_i \ge -n\varepsilon\Big) = 1. \tag{10}$$
Then, by the condition $\liminf_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x) \ge -C_2$ in Theorem 3, for every $\varepsilon > 0$ there exists $x_0$ such that for $x > x_0$,
$$\frac{\log P(\xi_1 \ge x)}{x^{\alpha}} \ge -C_2 - \varepsilon.$$
Then,
$$P(\xi_1 \ge x) \ge \exp\{-(C_2+\varepsilon)x^{\alpha}\}.$$
Thus, we obtain
$$P(\xi_1 \ge n(x+\varepsilon)) \ge \exp\{-(C_2+\varepsilon)[(x+\varepsilon)n]^{\alpha}\}. \tag{11}$$
Combining (9), (10) and (11), we easily obtain
$$\liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \ge \liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\sum_{i=2}^{n}\xi_i \ge -n\varepsilon\Big) + \liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P(\xi_1 \ge n(\varepsilon+x)) \ge -(C_2+\varepsilon)(x+\varepsilon)^{\alpha}.$$
Letting $\varepsilon \to 0$, we obtain
$$\liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \ge -C_2 x^{\alpha}. \tag{12}$$
Finally, by (8) and (12), we obtain, for all $x > 0$,
$$-C_2 x^{\alpha} \le \liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \le \limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \le -C_1 x^{\alpha}.$$
Thus, we complete the proof of Theorem 3.  □
In the following, we prove Theorem 4.
Proof of Theorem 4.
By the condition $\limsup_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x) \le -C_1$, for every $\varepsilon > 0$ there exists $x_0$ such that for $x > x_0$,
$$P(\xi_1 \ge x) \le \exp\{-(C_1-\varepsilon)x^{\alpha}\}.$$
By the condition $\limsup_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \le -x) \le -C_3$, for every $\varepsilon > 0$ there exists $x_0$ such that for $x > x_0$, $P(\xi_1 \le -x) \le \exp\{-(C_3-\varepsilon)x^{\alpha}\}$.
Thus, we obtain
$$E\xi_1^2 = \int_0^{\infty}2xP(|\xi_1| \ge x)\,dx = \int_0^{x_0}2xP(|\xi_1| \ge x)\,dx + \int_{x_0}^{\infty}2xP(|\xi_1| \ge x)\,dx \le x_0^2 + \int_{x_0}^{\infty}2xP(\xi_1 \ge x)\,dx + \int_{x_0}^{\infty}2xP(\xi_1 \le -x)\,dx \le x_0^2 + \int_{x_0}^{\infty}2x\exp\{-(C_1-\varepsilon)x^{\alpha}\}\,dx + \int_{x_0}^{\infty}2x\exp\{-(C_3-\varepsilon)x^{\alpha}\}\,dx < \infty.$$
Thus, applying Theorem 3 to $\{\xi_i\}$ and to $\{-\xi_i\}$, we obtain
$$\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \le -C_1 x^{\alpha},$$
$$\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}(-S_k) \ge nx\Big) \le -C_3 x^{\alpha},$$
and we know that
$$P\Big(\max_{1\le k\le n}|S_k| \ge nx\Big) \le P\Big(\max_{1\le k\le n}S_k \ge nx\Big) + P\Big(\max_{1\le k\le n}(-S_k) \ge nx\Big).$$
Thus, we obtain the following inequality by the principle of the largest term:
$$\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}|S_k| \ge nx\Big) \le \limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \vee \limsup_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}(-S_k) \ge nx\Big) \le (-C_1 x^{\alpha})\vee(-C_3 x^{\alpha}) = -(C_1\wedge C_3)x^{\alpha}. \tag{13}$$
By the conditions $\liminf_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \ge x) \ge -C_2$ and $\liminf_{x\to\infty}\frac{1}{x^{\alpha}}\log P(\xi_1 \le -x) \ge -C_4$ in Theorem 4, we obtain
$$\liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \ge -C_2 x^{\alpha},$$
$$\liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}(-S_k) \ge nx\Big) \ge -C_4 x^{\alpha}.$$
Since
$$P\Big(\max_{1\le k\le n}|S_k| \ge nx\Big) \ge P\Big(\max_{1\le k\le n}S_k \ge nx\Big)\vee P\Big(\max_{1\le k\le n}(-S_k) \ge nx\Big),$$
we obtain
$$\liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}|S_k| \ge nx\Big) \ge \liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}S_k \ge nx\Big) \vee \liminf_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}(-S_k) \ge nx\Big) \ge (-C_2 x^{\alpha})\vee(-C_4 x^{\alpha}) = -(C_2\wedge C_4)x^{\alpha}. \tag{14}$$
Combining (13) and (14), we complete the proof of Theorem 4.  □
Proof of Corollary 1.
Taking $C_1 = C_2 = C_3 = C_4 = C$ in Theorem 4, we easily obtain, for all $x > 0$,
$$\lim_{n\to\infty}\frac{1}{n^{\alpha}}\log P\Big(\max_{1\le k\le n}|S_k| \ge nx\Big) = -Cx^{\alpha}.$$
Since the upper and lower bounds coincide, $\{\frac{1}{n}\max_{1\le k\le n}|S_k| : n \ge 1\}$ satisfies the LDP with good rate function $I(x) = Cx^{\alpha}$.  □

5. Conclusions

We obtained the LDP for the maximum of the absolute value of partial sums of i.i.d. centered random variables under the assumption that $P(\xi_1 \ge x)$ and $P(\xi_1 \le -x)$ have the same exponential decrease. In further research, we will consider the LDP for the maximum of the absolute value of partial sums of other types of dependent random variables, such as martingale differences and acceptable random variables.

Author Contributions

X.W. is mainly responsible for providing funding acquisition and scientific research. M.Z. is mainly responsible for writing the original draft and scientific research. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 62072044) and Beijing Natural Science Foundation (No. 1202001).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the referees for their very helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cramér, H. Sur un nouveau théorème-limite de la théorie des probabilités. Actual. Sci. Ind. 1937, 736, 5–23.
  2. Nagaev, A.V. Integral Limit Theorems Taking Large Deviations into Account When Cramér's Condition Does Not Hold. I. Theory Probab. Appl. 1969, 14, 51–64.
  3. Nagaev, S.V. Large deviations of sums of independent random variables. Ann. Probab. 1979, 7, 745–789.
  4. Kiesel, R.; Stadtmüller, U. A Large Deviation Principle for Weighted Sums of Independent Identically Distributed Random Variables. J. Math. Anal. Appl. 2000, 251, 929–939.
  5. Gantert, N.; Ramanan, K.; Rembart, F. Large Deviations for Weighted Sums of Stretched Exponential Random Variables. Electron. Commun. Probab. 2014, 19, 1–14.
  6. Borovkov, A.A.; Korshunov, D.A. Large-deviation probabilities for one-dimensional Markov chains. Part 2: Prestationary distributions in the exponential case. Theory Probab. Appl. 2001, 45, 379–405.
  7. Shklyaev, A.V. Limit theorems for random walk under the assumption of maxima large deviation. Theory Probab. Appl. 2011, 55, 517–525.
  8. Kozlov, M.V. On Large Deviations of Maximum of a Cramér Random Walk and the Queueing Process. Theory Probab. Appl. 2014, 58, 76–106.
  9. Fan, X.; Grama, I.; Liu, Q. Deviation inequalities for martingales with applications. J. Math. Anal. Appl. 2017, 448, 538–566.
  10. Feller, W. The fundamental limit theorems in probability. Bull. Am. Math. Soc. 1945, 51, 800–832.
  11. Li, Y. A martingale inequality and large deviations. Stat. Probab. Lett. 2003, 62, 317–321.
  12. Xing, G.; Yang, S. An exponential inequality for strictly stationary and negatively associated random variables. Commun. Stat. Theory Methods 2009, 39, 340–349.
  13. Fan, X.; Grama, I.; Liu, Q. Large deviation exponential inequalities for supermartingales. Electron. Commun. Probab. 2012, 17, 1–8.
  14. Ganesh, A.; O'Connell, N.; Wischik, D. Big Queues; Springer: Berlin/Heidelberg, Germany, 2004.
  15. Dembo, A.; Zeitouni, O. Large Deviations Techniques and Applications, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1998.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
