1. Introduction
Throughout this paper, on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, let $(X_i)_{i\ge 1}$ be a sequence of independent, identically distributed (i.i.d.) centered random variables that take real values. Denote the partial sums of the sequence by $S_n=\sum_{i=1}^{n}X_i$, $n\ge 1$.
The seminal paper of Cramér [1] motivates our work. Cramér proved that $(S_n/n)_{n\ge 1}$ satisfies the large deviation principle (LDP) with rate function
$$\Lambda^*(x)=\sup_{\lambda\in\mathbb{R}}\{\lambda x-\log\mathbb{E}e^{\lambda X_1}\}$$
(see Theorem 1) under a finite exponential moment assumption, the famous Cramér condition; i.e., there exists a $\lambda_0>0$ such that $\mathbb{E}e^{\lambda X_1}<\infty$ for all $|\lambda|\le\lambda_0$. Cramér's theorem has the following form: for any measurable set $B$,
$$\liminf_{n\to\infty}\frac{1}{n}\log\mathbb{P}\Big(\frac{S_n}{n}\in B\Big)\ge-\inf_{x\in B^{\circ}}\Lambda^*(x),\qquad(1)$$
$$\limsup_{n\to\infty}\frac{1}{n}\log\mathbb{P}\Big(\frac{S_n}{n}\in B\Big)\le-\inf_{x\in\bar{B}}\Lambda^*(x),\qquad(2)$$
where $B^{\circ}$ denotes the interior of $B$ and $\bar{B}$ denotes its closure. We call inequality (1) the large deviations lower bound and inequality (2) the large deviations upper bound. If both hold, then the sequence $(S_n/n)_{n\ge 1}$ satisfies the LDP with rate function $\Lambda^*$. In other words, the theory of large deviations deals with large fluctuations, whose probabilities decay exponentially.
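As a purely numerical aside (our own illustration, not part of the paper's argument): for i.i.d. standard normal increments the tail of $S_n/n$ is known in closed form, and the Cramér rate function is $\Lambda^*(x)=x^2/2$, so $-\frac{1}{n}\log\mathbb{P}(S_n/n\ge x)$ should approach $x^2/2$ as $n$ grows.

```python
# Illustration of the exponential decay in Cramér's theorem for i.i.d.
# standard normal steps: S_n/n has the exact Gaussian tail
# P(S_n/n >= x) = Q(x * sqrt(n)), where Q is the standard normal tail,
# and the rate function is Lambda*(x) = x^2 / 2.
import math

def tail_rate(x: float, n: int) -> float:
    """Return -(1/n) * log P(S_n/n >= x) for standard normal increments."""
    # Q(z) = 0.5 * erfc(z / sqrt(2)) is the exact standard normal tail.
    p = 0.5 * math.erfc(x * math.sqrt(n) / math.sqrt(2.0))
    return -math.log(p) / n

x = 0.5
for n in (100, 400, 1600):
    print(n, tail_rate(x, n))  # decreases toward Lambda*(0.5) = 0.125
```

The polynomial prefactor of the Gaussian tail explains why the convergence to $x^2/2$ is slow but monotone from above.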
The tail probability $\mathbb{P}(S_n\ge x)$ for independent random variables has been studied in detail in many papers. Nagaev [2] considered the partial sums of i.i.d. random variables and showed that they satisfy an LDP under the assumption that the tail of the distribution decreases like a power function. Soon after, Nagaev [3] obtained bounds for the probabilities of partial sums of independent random variables under the weaker hypothesis that generalized and ordinary moments are finite. Under the Cramér condition, Kiesel and Stadtmüller [4] extended Cramér's theorem to weighted sums of i.i.d. random variables. Moreover, Gantert, Ramanan and Rembart [5] studied the LDP for weighted sums of i.i.d. random variables with stretched exponential tails.
The tail probability $\mathbb{P}(\max_{1\le k\le n}S_k\ge x)$ has also been studied in depth. Under the Cramér condition, Borovkov and Korshunov [6] treated time-homogeneous Markov chains and Shklyaev [7] treated i.i.d. random variables; both obtained an LDP. Soon after, Kozlov [8] obtained LDP results by applying a direct probabilistic approach to i.i.d. non-degenerate random variables obeying the Cramér condition. Lately, Fan, Grama and Liu [9] established the LDP for sequences of martingale difference random variables under a finite subexponential moment condition.
Feller [10] pointed out the importance of estimating this tail probability, and the problem has attracted broad attention in recent decades. Recently, Li [11] established an upper bound estimate for this probability for bounded martingale difference random variables. For strictly stationary and negatively associated random variables, Xing and Yang [12] obtained exponential inequalities for the maximum of the absolute value of partial sums via classical techniques based on blocking and truncation. Moreover, an upper bound estimate for the tail probability for martingale difference random variables was obtained by Fan, Grama and Liu [13] in situations where the conditional subexponential moments are bounded.
The results above show that, for this probability, only the large deviations upper bound has been obtained. To fill this gap, we primarily prove that a sequence of i.i.d. random variables satisfies the LDP under the assumption that the two relevant tail probabilities decrease at the same exponential rate (see Corollary 1); that is, we obtain both the large deviations lower bound and the large deviations upper bound.
This article is organized as follows. In Section 2, we first introduce the necessary definitions and theorems. The main theorems and corollaries are presented in Section 3. In Section 4, we provide the lemmas needed to prove our conclusions, together with the proofs of our main results.
4. Proofs of Main Results
To prove our main results, we need the following lemmas, whose proofs we also provide.
Lemma 1. For a random variable with , assume , for some constant . Set , for y > 0. Then, the following holds.

Proof of Lemma 1. By Taylor’s expansion, we can obtain
Moreover, we have
and
. Then, we obtain the following.
Thus, we complete the proof of Lemma 1. □
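A standard bound obtained from Taylor's expansion with Lagrange remainder is $e^{x}-1-x=\tfrac{x^{2}}{2}e^{\xi}$ for some $\xi$ between $0$ and $x$, hence $e^{x}-1-x\le\tfrac{x^{2}}{2}e^{|x|}$; estimates of this type underlie exponential moment bounds such as Lemma 1. Below is a quick numerical check of this standard inequality (illustrative only; it is not the exact statement of Lemma 1).

```python
# Numerical sanity check of the standard Taylor bound
#   e^x - 1 - x <= (x^2 / 2) * e^{|x|},
# which follows from Taylor's theorem with Lagrange remainder.
import math

def remainder(x: float) -> float:
    """Second-order Taylor remainder of e^x at 0."""
    return math.exp(x) - 1.0 - x

def bound(x: float) -> float:
    """Upper bound (x^2 / 2) * e^{|x|} on the remainder."""
    return 0.5 * x * x * math.exp(abs(x))

for x in (-3.0, -0.5, 0.0, 0.5, 3.0):
    assert remainder(x) <= bound(x) + 1e-12
```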
Lemma 2. Assume that is a sequence of i.i.d. random variables. If , and for some constants , , , then for all x > 0, the following holds.

Proof of Lemma 2. Set
for
. Then, the following holds.
For all
, define the stopping time
We easily obtain
To obtain an upper bound for
, we consider the martingale
, where
and the following holds.
Define the following.
By the martingale property,
is also a martingale. Because
we may define the probability measure
and denote the expectation with respect to
by
.
Under the conditions of Lemma 2, we take
. By Lemma 1 and the fact that
, for
, we obtain the following.
On the set
, we obtain
. Combining this fact with (4) and (5), we obtain that, for all
, the following holds.
Next, using Markov’s inequality, we obtain the following.
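For reference, the exponential form of Markov's inequality used at this step is the standard one: for any random variable $Y$, any $a\in\mathbb{R}$ and any $\lambda>0$,

```latex
\mathbb{P}(Y\ge a)=\mathbb{P}\big(e^{\lambda Y}\ge e^{\lambda a}\big)\le e^{-\lambda a}\,\mathbb{E}\,e^{\lambda Y}.
```

Optimizing over $\lambda>0$ then yields the Chernoff-type bound.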
Let
Combining (3), (6) and (7), we obtain the following.
Now, replacing x by nx in the above inequality, we obtain the following.
Taking logarithms and the upper limit on both sides and applying the principle of the largest term, we obtain the LDP upper bound.
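The principle of the largest term is the standard fact that, on the exponential scale, a finite sum is governed by its largest summand: for nonnegative sequences $(a_n)$ and $(b_n)$,

```latex
\limsup_{n\to\infty}\frac{1}{n}\log(a_n+b_n)
=\max\Big\{\limsup_{n\to\infty}\frac{1}{n}\log a_n,\ \limsup_{n\to\infty}\frac{1}{n}\log b_n\Big\}.
```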
This completes the proof of Lemma 2. □
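As a rough numerical aside (our own sketch with Rademacher steps; the quantity and constants here are illustrative and are not those of Lemma 2), one can estimate by simulation a tail probability of the partial-sum process, such as $\mathbb{P}(\max_{1\le k\le n}S_k\ge nx)$, and watch its exponential decay in $n$:

```python
# Monte Carlo illustration: P(max_{k<=n} S_k >= n*x) shrinks rapidly in n
# for i.i.d. centered steps (here Rademacher: +1 or -1 with probability 1/2).
import random

def tail_estimate(n: int, x: float, trials: int = 20000, seed: int = 0) -> float:
    """Estimate P(max_{1<=k<=n} S_k >= n*x) by simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s, m = 0, 0
        for _ in range(n):
            s += 1 if rng.random() < 0.5 else -1
            m = max(m, s)  # running maximum of the partial sums
        if m >= n * x:
            hits += 1
    return hits / trials

for n in (10, 20, 40):
    print(n, tail_estimate(n, 0.5))
```

With $x=0.5$ the estimated probability drops by roughly an order of magnitude each time $n$ doubles, consistent with exponential decay.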
In the following, we prove Theorem 3.
Proof of Theorem 3. (i) First, we prove the upper bound. Let
be fixed,
,
By the condition given in Theorem 3, we have the following:
hence, for
, there exists
such that when
,
that is,
Thus, for
, there exists
such that when
,
Then, we have the following:
where
,
Because
, one can easily obtain
Then, we obtain the following.
Thus, the sequence
satisfies the conditions of Lemma 2, and we denote
. Then, for all
we obtain the following.
Thus, we obtain the following.
Letting
, we obtain the following.
(ii) Next, we prove the lower bound.
Because
is an i.i.d. sequence, we have the following.
By using the weak law of large numbers and the following fact:
we know the following.
Then, by the condition in Theorem 3,
we obtain
,
Then, we have the following.
Thus, we obtain the following.
Combining (9), (10) and (11), we easily obtain the following.
Letting
, we obtain the following.
Finally, by (8) and (12), we obtain the following for all x > 0.
This completes the proof of Theorem 3. □
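Schematically, the final step uses the elementary fact that matching one-sided exponential bounds force the limit to exist: writing $p_n$ for the tail probability in question and $I(x)$ for the common rate,

```latex
\limsup_{n\to\infty}\frac{1}{n}\log p_n\le -I(x)
\quad\text{and}\quad
\liminf_{n\to\infty}\frac{1}{n}\log p_n\ge -I(x)
\ \Longrightarrow\
\lim_{n\to\infty}\frac{1}{n}\log p_n=-I(x).
```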
In the following, we prove Theorem 4.
Proof of Theorem 4. By the condition
we obtain for
such that when
the following.
Similarly, by the given condition, we obtain for such that when
Thus, we obtain the following.
Thus, by Theorem 3, we obtain the following:
and we know the following.
Thus, by the principle of the largest term, we obtain the following inequality.
By the given conditions in Theorem 4,
we obtain the following.
Since the following holds:
we obtain the following.
Combining (13) and (14), we complete the proof of Theorem 4. □
Proof of Corollary 1. Taking
in Theorem 4, for all x > 0, we easily obtain the following.
Because the upper bound and the lower bound coincide, we conclude that
satisfies the LDP with good rate function
. □