Article

Large Deviation Results and Applications to the Generalized Cramér Model †
Rita Giuliano 1 and Claudio Macci 2
1 Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, I-56127 Pisa, Italy
2 Dipartimento di Matematica, Università di Roma Tor Vergata, Via della Ricerca Scientifica, I-00133 Rome, Italy
* Author to whom correspondence should be addressed.
† The support of INdAM (Fondi GNAMPA) and Università di Pisa (Fondi di Ateneo) is acknowledged. The first version of this paper was written during the stay of the first author at the University Jean Monnet (St. Etienne).
Mathematics 2018, 6(4), 49; https://doi.org/10.3390/math6040049
Submission received: 2 March 2018 / Revised: 25 March 2018 / Accepted: 27 March 2018 / Published: 2 April 2018
(This article belongs to the Special Issue Stochastic Processes with Applications)

Abstract: In this paper, we prove large deviation results for some sequences of weighted sums of random variables. These sequences have applications to the probabilistic generalized Cramér model for products of primes in arithmetic progressions; they could lead to new conjectures concerning the (non-random) set of products of primes in arithmetic progressions, a relevant topic in number theory.

1. Introduction

The aim of this paper is to prove asymptotic results for a class of sequences of random variables, i.e.,
$$\left\{\frac{\sum_{k=1}^{n}L_kX_k}{b_n}:n\geq 1\right\}\qquad\qquad(1)$$
for suitable sequences of real numbers $\{b_n:n\geq 1\}$ and $\{L_n:n\geq 1\}$ (see Condition 1 in Section 3) and suitable independent random variables $\{X_n:n\geq 1\}$ defined on the same probability space $(\Omega,\mathcal{F},P)$. We also present analogous results for the slightly different sequence
$$\left\{\frac{L_n\sum_{k=1}^{n}X_k}{b_n}:n\geq 1\right\}.\qquad\qquad(2)$$
More precisely, we refer to the theory of large deviations, which gives an asymptotic computation of small probabilities on an exponential scale (see, e.g., [1] as a reference on this topic). We recall [2] as a recent reference on large deviations for models of interest in number theory.
The origin and the motivation of our research lie in the study of some random models similar in nature to the celebrated Cramér model for prime numbers: namely, what we call the generalized Cramér model (for products of prime numbers in arithmetic progressions). We are not aware of any work where these probabilistic models have been studied. Details on these structures will be given in Section 2. Here we only point out that, just as the classical probabilistic model invented by Cramér has been used to formulate conjectures on the (non-random) set of primes (see [3] for details), in a similar way we can draw conjectures also for the non-random sets of products of primes or of products of primes in arithmetic progressions. The large deviation results for the sequences concerning these structures will be given in Corollary 1.
We also remark that the particular form of the sequence (1) is motivated by analogy with the first Chebyshev function, as will be explained in Section 2.
It is worth noting that some moderate deviation properties can also be proved (in terms of suitable bounds on cumulants and central moments) for the centered sequences
$$\left\{\frac{\sum_{k=1}^{n}L_k(X_k-E[X_k])}{b_n}:n\geq 1\right\}\quad\text{and}\quad\left\{\frac{L_n\sum_{k=1}^{n}(X_k-E[X_k])}{b_n}:n\geq 1\right\}.$$
Such propositions will not be dealt with in the sequel since, though some specific assumptions must be made in the present setting, these results are in the same direction as those of the paper [4], where moderate deviations from the point of view of cumulants and central moments are fully investigated.
It should be noted that our results are a contribution to the recent literature on limit theorems of interest in probability and number theory; here, we recall [5], where the results are formulated in terms of mod-$\varphi$ convergence (see also [6], where the simpler mod-Gaussian convergence is studied).
We here introduce some terminology and notation. We always set $0\log 0=0$, $\frac{c}{\infty}=0$ for $c\geq 0$, and $\lfloor x\rfloor:=\max\{k\in\mathbb{Z}:k\leq x<k+1\}$ for all $x\in\mathbb{R}$. Moreover, we write
  • $a_n\sim b_n$ to mean that $\lim_{n\to\infty}\frac{a_n}{b_n}=1$;
  • $Z\overset{\mathrm{law}}{=}\mathcal{B}(p)$, for $p\in[0,1]$, to mean that $P(Z=1)=p=1-P(Z=0)$;
  • $Z\overset{\mathrm{law}}{=}\mathcal{P}(\lambda)$, for $\lambda>0$, to mean that $P(Z=k)=\frac{\lambda^k}{k!}e^{-\lambda}$ for all integers $k\geq 0$.
The outline of this paper is as follows: We start with some preliminaries in Section 2, and we present the results in Section 3. The results for the generalized Cramér model (for products of primes in arithmetic progressions) are presented in Corollary 1.

2. Preliminaries

On large deviations.
We refer to [1] (pages 4–5). Let $\mathcal{Z}$ be a topological space equipped with its completed Borel $\sigma$-field. A sequence of $\mathcal{Z}$-valued random variables $\{Z_n:n\geq 1\}$ satisfies the large deviation principle (LDP) with speed function $v_n$ and rate function $I$ if the following holds: $\lim_{n\to\infty}v_n=\infty$, the function $I:\mathcal{Z}\to[0,\infty]$ is lower semi-continuous,
$$\limsup_{n\to\infty}\frac{1}{v_n}\log P(Z_n\in F)\leq-\inf_{z\in F}I(z)\quad\text{for all closed sets }F,$$
and
$$\liminf_{n\to\infty}\frac{1}{v_n}\log P(Z_n\in G)\geq-\inf_{z\in G}I(z)\quad\text{for all open sets }G.$$
A rate function $I$ is said to be good if its level sets $\{\{z\in\mathcal{Z}:I(z)\leq\eta\}:\eta\geq 0\}$ are compact.
Throughout this paper, we prove LDPs with $\mathcal{Z}=\mathbb{R}$. We recall the following known result for future use.
Theorem 1 (Gärtner–Ellis Theorem).
Let $\{Z_n:n\geq 1\}$ be a sequence of real-valued random variables. Assume that the function $\Lambda:\mathbb{R}\to(-\infty,\infty]$ defined by
$$\Lambda(\theta):=\lim_{n\to\infty}\frac{1}{v_n}\log E\left[e^{v_n\theta Z_n}\right]\quad(\textit{for all }\theta\in\mathbb{R})\qquad\qquad(3)$$
exists; assume, moreover, that $\Lambda$ is essentially smooth (see, e.g., Definition 2.3.5 in [1]) and lower semi-continuous. Then $\{Z_n:n\geq 1\}$ satisfies the LDP with speed function $v_n$ and good rate function $\Lambda^*:\mathbb{R}\to[0,\infty]$ defined by
$$\Lambda^*(z):=\sup_{\theta\in\mathbb{R}}\{\theta z-\Lambda(\theta)\}.$$
Proof. 
See, e.g., Theorem 2.3.6 in [1]. ☐
The main application of Theorem 1 in this paper concerns Theorem 2, where we have
$$\Lambda(\theta)=e^{\theta}-1,\quad\text{which yields}\quad\Lambda^*(x)=\begin{cases}x\log x-x+1&\text{if }x\geq 0\\\infty&\text{if }x<0.\end{cases}\qquad\qquad(4)$$
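Indeed, for $x>0$ the supremum in the definition of $\Lambda^*$ is attained at $\theta=\log x$ (setting the derivative $x-e^{\theta}$ equal to zero), which gives
$$\Lambda^*(x)=\sup_{\theta\in\mathbb{R}}\{\theta x-(e^{\theta}-1)\}=x\log x-(x-1)=x\log x-x+1;$$
for $x=0$ the supremum equals $\lim_{\theta\to-\infty}(1-e^{\theta})=1$ (consistently with the convention $0\log 0=0$), while for $x<0$ the map $\theta\mapsto\theta x-e^{\theta}+1$ is unbounded as $\theta\to-\infty$, so that $\Lambda^*(x)=\infty$.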
The LDP in Theorem 3 will instead be proved by combining Theorem 4.2.13 in [1] with Theorem 2, i.e., by checking the exponential equivalence (see, e.g., Definition 4.2.10 in [1]) of the involved sequences.
On the generalized Cramér model (for products of primes in arithmetic progressions).
The Cramér model for prime numbers consists in a sequence of independent random variables $\{X_n:n\geq 1\}$ such that, for every $n\geq 2$,
$$X_n\overset{\mathrm{law}}{=}\mathcal{B}(1/\log n).\qquad\qquad(5)$$
This model can be justified by the prime number theorem (PNT), which roughly asserts that the expected density of primes around $x$ is $\frac{1}{\log x}$: the number of primes $\leq n$ is
$$\pi(n):=\sum_{p\leq n}1\sim\mathrm{li}(n):=\int_2^n\frac{1}{\log t}\,dt,$$
and, in the words of [7] (see footnote on p. 6), “the quantity $\frac{1}{\log n}$ appears here naturally as the derivative of $\mathrm{li}(x)$ evaluated at $x=n$”. Since $\int_2^n\frac{1}{\log t}\,dt\sim\frac{n}{\log n}$, another way of stating the PNT is
$$\pi(n)\sim\frac{n}{\log n}.\qquad\qquad(6)$$
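This approximation is easy to inspect numerically. The following minimal Python sketch (using SymPy's prime-counting function and offset logarithmic integral; the chosen values of $n$ are merely illustrative) compares $\pi(n)$ with $n/\log n$ and with $\int_2^n dt/\log t$:

```python
# Compare pi(n) with n/log(n) and with the offset logarithmic integral Li(n) = int_2^n dt/log t.
from math import log
from sympy import primepi, Li

for n in [10**3, 10**4, 10**5, 10**6]:
    print(f"n = {n:>7}   pi(n) = {int(primepi(n)):>6}   "
          f"n/log n = {n / log(n):>9.1f}   Li(n) = {float(Li(n)):>9.1f}")
```

As the PNT predicts, both approximations capture the correct order of magnitude, with the logarithmic integral noticeably closer to $\pi(n)$.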
A first extension of this formula concerns the case of integers which are products of exactly $r$ prime factors ($r\geq 2$). More precisely, we consider the sets
$$A_r(n):=\{k\leq n:\Omega(k)=r\}\quad\text{and}\quad B_r(n):=\{k\leq n:\omega(k)=r\},$$
where $\omega(n)$ is the number of distinct prime factors of $n$, and $\Omega(n)$ counts the number of prime factors of $n$ with multiplicity; this means that, writing the canonical prime factorization of $n$ as $n=\prod_{i=1}^{\omega(n)}p_i^{\alpha_i}$, where $p_1,\ldots,p_{\omega(n)}$ are the distinct prime factors of $n$, we have
$$\Omega(n):=\sum_{i=1}^{\omega(n)}\alpha_i.$$
A result proved by Landau in 1909 (see, e.g., [8]) states that the cardinalities $\tau_r(n)$ and $\pi_r(n)$ of $A_r(n)$ and $B_r(n)$, respectively, verify
$$\tau_r(n):=\sum_{k\in A_r(n)}1\sim\frac{n(\log\log n)^{r-1}}{(r-1)!\,\log n}\quad\text{and}\quad\pi_r(n):=\sum_{k\in B_r(n)}1\sim\frac{n(\log\log n)^{r-1}}{(r-1)!\,\log n};$$
see also, e.g., Theorem 437 in [9] (Section 22.18, page 368) or [10] (II.6, Theorems 4 and 5). Note that this formula for $\pi_r(n)$ reduces to Equation (6) when $r=1$.
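These counts are straightforward to tabulate by brute force for moderate $n$, which gives a feeling for the (very slow) convergence in Landau's formula. A minimal Python sketch, assuming SymPy is available and with illustrative choices of $n$ and $r$:

```python
# Count integers k <= n with exactly r prime factors, with multiplicity (Omega) and
# without (omega), and compare with Landau's term n (log log n)^(r-1) / ((r-1)! log n).
from math import log, factorial
from sympy import factorint

def landau_counts(n, r):
    tau = pi = 0
    for k in range(2, n + 1):
        f = factorint(k)                 # {prime: exponent} factorization of k
        if sum(f.values()) == r:         # Omega(k) = r
            tau += 1
        if len(f) == r:                  # omega(k) = r
            pi += 1
    return tau, pi

n, r = 10**5, 2
tau, pi = landau_counts(n, r)
landau = n * log(log(n)) ** (r - 1) / (factorial(r - 1) * log(n))
print(f"tau_{r}({n}) = {tau},  pi_{r}({n}) = {pi},  Landau term ~ {landau:.0f}")
```

Because of the iterated logarithm, the agreement at such small values of $n$ is only rough; the asymptotic regime sets in extremely slowly.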
Going a little further, for fixed integers $a$ and $q$, we can consider the sets of products of primes in arithmetic progressions
$$A_r^{(q)}(n):=\{k\leq n:\Omega(k)=r,\ k\equiv a\ \mathrm{mod}\ q\}\quad\text{and}\quad B_r^{(q)}(n):=\{k\leq n:\omega(k)=r,\ k\equiv a\ \mathrm{mod}\ q\}.$$
One can prove (by similar methods as in [10,11]) that, for any $a$ and $q$ with $(a,q)=1$, the cardinalities $\tau_r^{(q)}(n)$ and $\pi_r^{(q)}(n)$ of $A_r^{(q)}(n)$ and $B_r^{(q)}(n)$, respectively, verify
$$\tau_r^{(q)}(n):=\sum_{k\in A_r^{(q)}(n)}1\sim\frac{1}{\phi(q)}\cdot\frac{n(\log\log n)^{r-1}}{(r-1)!\,\log n}\quad\text{and}\quad\pi_r^{(q)}(n):=\sum_{k\in B_r^{(q)}(n)}1\sim\frac{1}{\phi(q)}\cdot\frac{n(\log\log n)^{r-1}}{(r-1)!\,\log n},$$
where $\phi$ is Euler's totient function. Notice that, for $r=1$, we recover the sets of primes in arithmetic progressions, considered for instance in [8], [10] (II.8), or [11]; the case $r=2$ is studied in [12]; the general case $r\geq 1$ is considered in the recent preprint [13]; for $q=1$, we recover the sets and the formulas for the model described above.
Therefore, following Cramér’s heuristic, Equation (5), we can define the generalized Cramér model for products of r prime numbers (or products of r prime numbers in arithmetic progression) as a sequence of independent random variables { X n : n 1 } such that
X n law B ( λ n ) , where λ n : = n log n and n : = 1 ϕ ( q ) · ( log log n ) r 1 ( r 1 ) ! .
Obviously in Equation (7) we take n n 0 , where n 0 is an integer, such that λ n ( 0 , 1 ] for n n 0 ; the definition of λ n for n < n 0 is arbitrary.
Large deviation results for this model will be presented in Corollary 1 as a consequence of Theorem 3 and Remark 2, with
$$L_n:=\log n\quad\text{and}\quad b_n:=n\,\ell_n;\qquad\qquad(8)$$
thus, the sequences in Equations (1) and (2) become
$$\frac{\sum_{k=1}^{n}(\log k)\,X_k}{n\,\ell_n}\quad\text{and}\quad\frac{(\log n)\sum_{k=1}^{n}X_k}{n\,\ell_n},\qquad\qquad(9)$$
respectively. Moreover, by taking into account Remark 3 presented below, the sequences in Equation (9) converge almost surely to 1 (as $n\to\infty$).
On the first Chebyshev function.
The first Chebyshev function is defined by
$$\theta(x):=\sum_{p\leq x}\log p,$$
where the sum is extended over all prime numbers $p\leq x$.
Therefore, when considering the classical Cramér model, this function is naturally modeled with $\sum_{k=1}^{n}(\log k)\,X_k$ (and we obtain the numerator of the first fraction in Equation (9)).
It must be noted that T. Tao, in his blog (see [14]), considers the same random variable $\sum_{k\leq x}(\log k)\,X_k$ and proves that almost surely one has
$$\sum_{k\leq x}(\log k)\,X_k=x+O_{\varepsilon}(x^{1/2+\varepsilon})$$
for all $\varepsilon>0$ (where the implied constant in the $O_{\varepsilon}(\cdot)$ notation is allowed to be random). In particular, almost surely one has
$$\lim_{n\to\infty}\frac{\sum_{k\leq n}(\log k)\,X_k}{n}=1.$$
It is thus clear that in this setting we have a sequence of the form of Equation (1), with the particular choices $L_n=\log n$ and $b_n=n$. What we investigate in the sequel is how the sequence of random variables $\{X_n:n\geq 1\}$ and the two sequences of numbers $\{L_n:n\geq 1\}$ and $\{b_n:n\geq 1\}$ must be related in order to obtain large deviation and convergence results (see also Equations (8) and (9) above).
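As an illustration of Tao's almost sure relation, one can simulate the classical Cramér model directly; the following minimal Python sketch (starting the sum at $k=3$ so that $1/\log k$ is a valid probability, and with illustrative sample sizes) computes $\frac{1}{n}\sum_{k\leq n}(\log k)\,X_k$ for a single realization:

```python
# Simulation of the classical Cramer model: independent X_k ~ Bernoulli(1/log k),
# printing (1/n) * sum_{k<=n} (log k) X_k, which should be close to 1 for large n.
import random
from math import log

def weighted_cramer_sum(n, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for k in range(3, n + 1):            # start at k = 3 so that 1/log(k) < 1
        if rng.random() < 1.0 / log(k):  # X_k ~ B(1/log k)
            total += log(k)
    return total / n

for n in (10**4, 10**5, 10**6):
    print(n, round(weighted_cramer_sum(n), 4))
```

The printed ratios settle near 1, in line with the almost sure limit displayed above.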
On slowly and regularly varying functions (at infinity).
Here we recall the following basic definitions. A positive measurable function $H$ defined on some neighborhood $[x_0,\infty)$ of infinity is said to be slowly varying at infinity (see, e.g., [15], page 6) if
$$\lim_{t\to\infty}\frac{H(tx)}{H(t)}=1\quad\text{for all }x>0.$$
Similarly, a positive measurable function $M$ defined on some neighborhood $[x_0,\infty)$ of infinity is said to be regularly varying at infinity of index $\rho$ (see, e.g., [15], page 18) if
$$\lim_{t\to\infty}\frac{M(tx)}{M(t)}=x^{\rho}\quad\text{for all }x>0.$$
Obviously, we recover the slowly varying case if $\rho=0$. Recall the following well-known result for slowly varying functions.
Lemma 1 (Karamata’s representation of slowly varying functions).
A function $H$ is slowly varying at infinity if and only if
$$H(x)=c(x)\exp\left(\int_{x_0}^{x}\frac{\phi(t)}{t}\,dt\right),$$
where $\phi(x)\to 0$ and $c(x)\to c$ for some $c>0$ (as $x\to\infty$).
Proof. 
See, e.g., Theorem 1.3.1 in [15]. ☐
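For instance, $H(x)=\log x$ admits such a representation on $[e,\infty)$: since $\frac{d}{dt}\log\log t=\frac{1}{t\log t}$, we can write
$$\log x=\exp(\log\log x)=\exp\left(\int_{e}^{x}\frac{1/\log t}{t}\,dt\right),$$
i.e., we may take $x_0=e$, $c(x)\equiv 1$ and $\phi(t)=1/\log t\to 0$.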
In view of what follows, we also present the following results. They are more or less known, but we prefer to give detailed proofs in order to ensure that the paper is self-contained.
Lemma 2.
Let $M$ be a regularly varying function (at infinity) of index $\rho\geq 0$. Then,
$$\lim_{t\to\infty}\frac{M(\lfloor tx\rfloor)}{M(\lfloor t\rfloor)}=x^{\rho}\quad\textit{for all }x>0.$$
Proof. 
It is well known (see, e.g., Theorem 1.4.1 in [15]) that we have $M(x)=x^{\rho}H(x)$ for a suitable slowly varying function $H$. Thus, it is easy to check that it suffices to prove the result for the case $\rho=0$ (namely for a slowly varying function $H$), i.e.,
$$\lim_{t\to\infty}\frac{H(\lfloor tx\rfloor)}{H(\lfloor t\rfloor)}=1\quad\text{for all }x>0.\qquad\qquad(10)$$
By Lemma 1, for all $x>0$, we have
$$\frac{H(\lfloor tx\rfloor)}{H(\lfloor t\rfloor)}=\frac{c(\lfloor tx\rfloor)}{c(\lfloor t\rfloor)}\exp\left(\int_{\lfloor t\rfloor}^{\lfloor tx\rfloor}\frac{\phi(v)}{v}\,dv\right)$$
for $t$ large enough. Obviously, $\frac{c(\lfloor tx\rfloor)}{c(\lfloor t\rfloor)}\to 1$ (as $t\to\infty$). Moreover, for all $\varepsilon>0$, we have
$$\left|\int_{\lfloor t\rfloor}^{\lfloor tx\rfloor}\frac{\phi(v)}{v}\,dv\right|\leq\varepsilon\left|\log(\lfloor tx\rfloor/\lfloor t\rfloor)\right|$$
for $t$ large enough, and $\log(\lfloor tx\rfloor/\lfloor t\rfloor)\to\log x$ (as $t\to\infty$); thus,
$$\int_{\lfloor t\rfloor}^{\lfloor tx\rfloor}\frac{\phi(v)}{v}\,dv\to 0\quad(\text{as }t\to\infty)$$
by the arbitrariness of $\varepsilon>0$. Thus, Equation (10) holds, and the proof is complete. ☐
Lemma 3.
Let $H$ be a slowly varying function (at infinity). Then,
$$\lim_{x\to\infty}\frac{x\,H(x)}{\sum_{k=1}^{\lfloor x\rfloor}H(k)}=1.$$
Proof. 
By the representation of $H$ in Lemma 1, for all $\varepsilon>0$ there is an integer $n_0\geq 1$ such that, for all $x>n_0$, we have $c-\varepsilon<c(x)<c+\varepsilon$ and $-\varepsilon<\phi(x)<\varepsilon$. Then, we take $x\geq n_0+1$, and
$$\frac{\sum_{k=1}^{\lfloor x\rfloor}H(k)}{x\,H(x)}=\frac{\sum_{k=1}^{n_0}H(k)}{x\,H(x)}+\frac{\sum_{k=n_0+1}^{\lfloor x\rfloor}H(k)}{x\,H(x)}.$$
The first summand on the right hand side can be ignored since, if we take $\varepsilon\in(0,1)$, for sufficiently large $x$ we have
$$H(x)>\frac{c}{2}\exp\left(-\varepsilon\int_{x_0}^{x}\frac{1}{t}\,dt\right)=\frac{c}{2}\left(\frac{x}{x_0}\right)^{-\varepsilon},$$
which yields $x\,H(x)>c_1x^{1-\varepsilon}$ for a suitable constant $c_1>0$ (and $x^{1-\varepsilon}\to\infty$ as $x\to\infty$). Therefore, we concentrate our attention on the second summand and, by taking into account again the representation of $H$ in Lemma 1, for sufficiently large $x$ we have
$$\frac{\sum_{k=n_0+1}^{\lfloor x\rfloor}H(k)}{x\,H(x)}=\frac{\sum_{k=n_0+1}^{\lfloor x\rfloor}c(k)\exp\left(\int_{x_0}^{k}\frac{\phi(t)}{t}\,dt\right)}{x\,c(x)\exp\left(\int_{x_0}^{x}\frac{\phi(t)}{t}\,dt\right)}=\frac{\sum_{k=n_0+1}^{\lfloor x\rfloor}\frac{c(k)}{c(x)}\exp\left(-\int_{k}^{x}\frac{\phi(t)}{t}\,dt\right)}{x}.$$
Moreover,
$$\frac{\sum_{k=n_0+1}^{\lfloor x\rfloor}\frac{c(k)}{c(x)}\exp\left(-\int_{k}^{x}\frac{\phi(t)}{t}\,dt\right)}{x}\leq\frac{c+\varepsilon}{c-\varepsilon}\cdot\frac{\sum_{k=n_0+1}^{\lfloor x\rfloor}k^{-\varepsilon}}{x^{1-\varepsilon}}\to\frac{c+\varepsilon}{c-\varepsilon}\cdot\frac{1}{1-\varepsilon}\quad(\text{as }x\to\infty)$$
and
$$\frac{\sum_{k=n_0+1}^{\lfloor x\rfloor}\frac{c(k)}{c(x)}\exp\left(-\int_{k}^{x}\frac{\phi(t)}{t}\,dt\right)}{x}\geq\frac{c-\varepsilon}{c+\varepsilon}\cdot\frac{\sum_{k=n_0+1}^{\lfloor x\rfloor}k^{\varepsilon}}{x^{1+\varepsilon}}\to\frac{c-\varepsilon}{c+\varepsilon}\cdot\frac{1}{1+\varepsilon}\quad(\text{as }x\to\infty),$$
and the proof is complete by the arbitrariness of $\varepsilon$. ☐
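For instance, with $H(x)=\log x$, Stirling's formula gives $\sum_{k=1}^{\lfloor x\rfloor}\log k=\log\lfloor x\rfloor!=x\log x-x+O(\log x)$, so that
$$\frac{x\,H(x)}{\sum_{k=1}^{\lfloor x\rfloor}H(k)}=\frac{x\log x}{x\log x-x+O(\log x)}\to 1,$$
in accordance with Lemma 3.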

3. Results

In this section we present large deviation results for Equations (1) and (2). We start with the case of Poisson distributed random variables (see Theorem 2 and Remark 1), and later we consider the case of Bernoulli distributed random variables (see Theorem 3 and Remark 2). Our large deviation results yield the almost sure convergence to 1 (as $n\to\infty$) of the involved random variables (see Remark 3 for details). In particular, the results for Bernoulli distributed random variables can be applied to the sequences of the generalized Cramér model in Equation (9) (see Corollary 1).
In all our results, we assume the following condition.
Condition 1.
The sequence $\{b_n:n\geq 1\}$ is eventually positive; $\{L_n:n\geq 1\}$ is eventually positive and non-decreasing.
In general, we can ignore the definition of $\{b_n:n\geq 1\}$ and $\{L_n:n\geq 1\}$ for a finite number of indices; therefore, in order to simplify the proofs, we assume that $\{b_n:n\geq 1\}$ and $\{L_n:n\geq 1\}$ are positive sequences and that $\{L_n:n\geq 1\}$ is non-decreasing.
We start with the case where $\{X_n:n\geq 1\}$ are (independent) Poisson distributed random variables.
Theorem 2 (The Poisson case; the sequence in Equation (1)).
Let $\{b_n:n\geq 1\}$ and $\{L_n:n\geq 1\}$ be two sequences as in Condition 1. Assume that
$$\{L_n:n\geq 1\}\ \textit{is the restriction (on }\mathbb{N}\textit{) of a slowly varying function (at infinity)};\qquad\qquad(11)$$
$$\textit{for all }c\in(0,1),\ \alpha(c):=\lim_{n\to\infty}\frac{b_{\lfloor cn\rfloor}}{b_n}\ \textit{exists, and }\lim_{c\to 0}\alpha(c)=0;\qquad\qquad(12)$$
$$\lim_{n\to\infty}\frac{L_n}{b_n}=0.\qquad\qquad(13)$$
Moreover, assume that $\{X_n:n\geq 1\}$ are independent and $X_n\overset{\mathrm{law}}{=}\mathcal{P}(\lambda_n)$ for all $n\geq 1$, where $\{\lambda_n:n\geq 1\}$ are positive numbers such that
$$\sum_{k=1}^{n}\lambda_k\sim\frac{b_n}{L_n}.\qquad\qquad(14)$$
The sequence in Equation (1) then satisfies the LDP with speed function $v_n=\frac{b_n}{L_n}$ and good rate function $\Lambda^*$ defined by Equation (4).
We point out that Equation (12) is satisfied if the sequence $\{b_n:n\geq 1\}$ is nondecreasing and is the restriction (on $\mathbb{N}$) of a regularly varying function with positive index (at infinity); this is a consequence of Lemma 2.
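Indeed, if $b$ is (the restriction of) a regularly varying function of index $\rho>0$, Lemma 2 applied with $t=n$ and $x=c$ gives
$$\alpha(c)=\lim_{n\to\infty}\frac{b_{\lfloor cn\rfloor}}{b_n}=c^{\rho},$$
and $c^{\rho}\to 0$ as $c\to 0$ because $\rho>0$.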
Proof. 
We apply Theorem 1, i.e., we check that Equation (3) holds with $Z_n=\frac{\sum_{k=1}^{n}L_kX_k}{b_n}$, $v_n=\frac{b_n}{L_n}$, and $\Lambda$ as in Equation (4) (in fact, Equation (3) holds even without assuming Equation (13); however, Equation (13) must be required in order that $v_n=\frac{b_n}{L_n}$ be a speed function). We remark that
$$\frac{L_n}{b_n}\log E\left[e^{\frac{b_n}{L_n}\theta\frac{\sum_{k=1}^{n}L_kX_k}{b_n}}\right]=\frac{L_n}{b_n}\log E\left[e^{\theta\frac{\sum_{k=1}^{n}L_kX_k}{L_n}}\right]=\frac{L_n}{b_n}\sum_{k=1}^{n}\log E\left[e^{(\theta L_k/L_n)X_k}\right]=\frac{L_n}{b_n}\sum_{k=1}^{n}\log\left(e^{\lambda_k(e^{\theta L_k/L_n}-1)}\right)=\frac{L_n}{b_n}\sum_{k=1}^{n}\lambda_k\left(e^{\theta L_k/L_n}-1\right)\quad\text{for all }\theta\in\mathbb{R}.$$
Equation (3) trivially holds for $\theta=0$. The proof is divided in two parts: the proof of the upper bound,
$$\limsup_{n\to\infty}\frac{L_n}{b_n}\log E\left[e^{\frac{b_n}{L_n}\theta\frac{\sum_{k=1}^{n}L_kX_k}{b_n}}\right]\leq e^{\theta}-1\quad\text{for all }\theta\in\mathbb{R},\qquad\qquad(15)$$
and that of the lower bound,
$$\liminf_{n\to\infty}\frac{L_n}{b_n}\log E\left[e^{\frac{b_n}{L_n}\theta\frac{\sum_{k=1}^{n}L_kX_k}{b_n}}\right]\geq e^{\theta}-1\quad\text{for all }\theta\in\mathbb{R}.\qquad\qquad(16)$$
We start with the proof of Equation (15). For $\theta>0$, we have
$$\frac{L_n}{b_n}\log E\left[e^{\frac{b_n}{L_n}\theta\frac{\sum_{k=1}^{n}L_kX_k}{b_n}}\right]=\frac{L_n}{b_n}\sum_{k=1}^{n}\lambda_k\left(e^{\theta L_k/L_n}-1\right)\leq\frac{L_n}{b_n}\sum_{k=1}^{n}\lambda_k\left(e^{\theta}-1\right)$$
since $\{L_n:n\geq 1\}$ is nondecreasing, and we obtain Equation (15) by letting $n$ go to infinity and by taking into account Equation (14). For $\theta<0$, we take $c\in(0,1)$ and
$$\gamma:=\sup\{L_n:n\geq 1\}$$
(possibly infinite). Recalling that $\{L_n:n\geq 1\}$ is nondecreasing and that $\frac{L_{\lfloor cn\rfloor}}{L_n}\to 1$ (it is a consequence of Lemma 2), we have
$$\frac{L_n}{b_n}\log E\left[e^{\frac{b_n}{L_n}\theta\frac{\sum_{k=1}^{n}L_kX_k}{b_n}}\right]=\frac{L_n}{b_n}\sum_{k=1}^{n}\lambda_k\left(e^{\theta L_k/L_n}-1\right)\leq\frac{L_n}{b_n}\sum_{k=1}^{\lfloor cn\rfloor}\lambda_k\left(e^{\theta L_1/\gamma}-1\right)+\frac{L_n}{b_n}\sum_{k=\lfloor cn\rfloor+1}^{n}\lambda_k\left(e^{\theta L_{\lfloor cn\rfloor}/L_n}-1\right)$$
$$=\frac{L_n}{L_{\lfloor cn\rfloor}}\cdot\frac{b_{\lfloor cn\rfloor}}{b_n}\cdot\frac{L_{\lfloor cn\rfloor}}{b_{\lfloor cn\rfloor}}\sum_{k=1}^{\lfloor cn\rfloor}\lambda_k\left(e^{\theta L_1/\gamma}-1\right)+\left(\frac{L_n}{b_n}\sum_{k=1}^{n}\lambda_k-\frac{L_n}{L_{\lfloor cn\rfloor}}\cdot\frac{b_{\lfloor cn\rfloor}}{b_n}\cdot\frac{L_{\lfloor cn\rfloor}}{b_{\lfloor cn\rfloor}}\sum_{k=1}^{\lfloor cn\rfloor}\lambda_k\right)\left(e^{\theta L_{\lfloor cn\rfloor}/L_n}-1\right).$$
Then, by Equation (11) (and Lemma 2 with $\rho=0$), (12) and (14), we obtain
$$\limsup_{n\to\infty}\frac{L_n}{b_n}\log E\left[e^{\frac{b_n}{L_n}\theta\frac{\sum_{k=1}^{n}L_kX_k}{b_n}}\right]\leq\alpha(c)\left(e^{\theta L_1/\gamma}-1\right)+(1-\alpha(c))\left(e^{\theta}-1\right).$$
Using Equation (12), we conclude by letting $c\to 0$.
The proof of Equation (16) is similar with reversed inequalities; hence, we only sketch it here. For $\theta<0$, we have
$$\frac{L_n}{b_n}\log E\left[e^{\frac{b_n}{L_n}\theta\frac{\sum_{k=1}^{n}L_kX_k}{b_n}}\right]=\frac{L_n}{b_n}\sum_{k=1}^{n}\lambda_k\left(e^{\theta L_k/L_n}-1\right)\geq\frac{L_n}{b_n}\sum_{k=1}^{n}\lambda_k\left(e^{\theta}-1\right),$$
and we obtain Equation (16) by letting $n$ go to infinity and by taking into account Equation (14). For $\theta>0$, we take $c\in(0,1)$ and, for $\gamma$ defined as above, after some manipulations, we obtain
$$\liminf_{n\to\infty}\frac{L_n}{b_n}\log E\left[e^{\frac{b_n}{L_n}\theta\frac{\sum_{k=1}^{n}L_kX_k}{b_n}}\right]\geq\alpha(c)\left(e^{\theta L_1/\gamma}-1\right)+(1-\alpha(c))\left(e^{\theta}-1\right).$$
We conclude by letting $c\to 0$ (by Equation (12)). ☐
Remark 1 (The Poisson case; the sequence in Equation (2)).
The LDP in Theorem 2 holds also for the sequence in Equation (2) in place of the sequence in Equation (1). In this case we only need to use Condition 1 and to assume Equations (13) and (14), whereas Equations (11) and (12) (which were required in the proof of Theorem 2) can be ignored. For the proof, we still apply Theorem 1, so we have to check that Equation (3) holds with $Z_n=\frac{L_n\sum_{k=1}^{n}X_k}{b_n}$, $v_n=\frac{b_n}{L_n}$, and $\Lambda$ as in Equation (4). This can be easily checked noting that
$$\frac{L_n}{b_n}\log E\left[e^{\frac{b_n}{L_n}\theta\frac{L_n\sum_{k=1}^{n}X_k}{b_n}}\right]=\frac{L_n}{b_n}\log E\left[e^{\theta\sum_{k=1}^{n}X_k}\right]=\frac{L_n}{b_n}\sum_{k=1}^{n}\log E\left[e^{\theta X_k}\right]=\frac{L_n}{b_n}\sum_{k=1}^{n}\log\left(e^{\lambda_k(e^{\theta}-1)}\right)=\frac{L_n}{b_n}\sum_{k=1}^{n}\lambda_k\left(e^{\theta}-1\right)\to e^{\theta}-1\quad\textit{for all }\theta\in\mathbb{R},$$
where the limit relation holds by Equation (14).
The next result is for Bernoulli distributed random variables $\{X_n:n\geq 1\}$. Here we shall use the concept of exponential equivalence (see, e.g., Definition 4.2.10 in [1]). The proof is similar to the one of Proposition 3.5 in [16] (see also Remark 3.6 in the same reference). We point out that it is not unusual to prove a convergence result for Bernoulli random variables $\{X_n:n\geq 1\}$ starting from a similar one for Poisson random variables $\{Y_n:n\geq 1\}$ and by setting $X_n:=Y_n\wedge 1$ for all $n\geq 1$; see, for instance, Lemmas 1 and 2 in [17].
Theorem 3 (The Bernoulli case; the sequence in Equation (1)).
Let $\{b_n:n\geq 1\}$ and $\{L_n:n\geq 1\}$ be as in Theorem 2 (thus, Condition 1 together with Equations (11)–(13) hold). Moreover, assume that $\{X_n:n\geq 1\}$ are independent and $X_n\overset{\mathrm{law}}{=}\mathcal{B}(\lambda_n)$ for all $n\geq 1$, and that Equation (14) and $\lim_{n\to\infty}\lambda_n=0$ hold. The sequence in Equation (1) satisfies the LDP with speed function $v_n=\frac{b_n}{L_n}$ and the good rate function $\Lambda^*$ defined by Equation (4).
Proof. 
Let $n_0$ be such that $\lambda_n\in[0,1)$ for all $n\geq n_0$ (recall that $\lambda_n\to 0$ as $n\to\infty$), and let $\{X_n^*:n\geq 1\}$ be independent random variables such that $X_n^*\overset{\mathrm{law}}{=}\mathcal{P}(\hat\lambda_n)$ (for all $n\geq 1$), where $\hat\lambda_n:=\log\frac{1}{1-\lambda_n}$ for $n\geq n_0$ (the definition of $\hat\lambda_n$ for $n<n_0$ is arbitrary). Notice that
$$\sum_{k=1}^{n}\hat\lambda_k\sim\sum_{k=1}^{n}\lambda_k$$
because $\sum_{k=1}^{n}\lambda_k\to\infty$ (as $n\to\infty$) by Equations (13) and (14) and, by the Cesàro theorem,
$$\lim_{n\to\infty}\frac{\sum_{k=1}^{n}\hat\lambda_k}{\sum_{k=1}^{n}\lambda_k}=\lim_{n\to\infty}\frac{\hat\lambda_n}{\lambda_n}=\lim_{n\to\infty}\frac{\log\frac{1}{1-\lambda_n}}{\lambda_n}=1.$$
Hence, the assumption of Equation (14) and Theorem 2 are in force for the sequence $\{X_n^*:n\geq 1\}$ (in fact, we have Equation (14) with $\{\hat\lambda_n:n\geq 1\}$ in place of $\{\lambda_n:n\geq 1\}$) and, if we set $X_n:=X_n^*\wedge 1$ (for all $n\geq 1$), the sequence $\{X_n:n\geq 1\}$ is indeed an instance of the sequence appearing in the statement of the present theorem since, by construction, $X_n\overset{\mathrm{law}}{=}\mathcal{B}(1-e^{-\hat\lambda_n})$ and $1-e^{-\hat\lambda_n}=\lambda_n$.
The statement will be proved by combining Theorem 4.2.13 in [1] and Theorem 2 (for the sequence $\{X_n^*:n\geq 1\}$). This means that we have to check the exponential equivalence condition
$$\limsup_{n\to\infty}\frac{L_n}{b_n}\log P(\Delta_n>\delta)=-\infty\quad(\text{for all }\delta>0),\qquad\qquad(17)$$
where
$$\Delta_n:=\left|\frac{1}{b_n}\sum_{k=1}^{n}L_kX_k-\frac{1}{b_n}\sum_{k=1}^{n}L_kX_k^*\right|.\qquad\qquad(18)$$
We remark that
$$\Delta_n\leq\frac{L_n}{b_n}\sum_{k=1}^{n}|X_k-X_k^*|\qquad\qquad(19)$$
by the monotonicity and the nonnegativity of $\{L_n:n\geq 1\}$; therefore, if we combine Equation (19) and the Chernoff bound, for each arbitrarily fixed $\theta\geq 0$, we obtain
$$P(\Delta_n>\delta)\leq P\left(\frac{L_n}{b_n}\sum_{k=1}^{n}|X_k-X_k^*|>\delta\right)\leq E\left[e^{\theta\sum_{k=1}^{n}|X_k-X_k^*|}\right]e^{-\theta\delta b_n/L_n}.$$
Therefore,
$$\frac{L_n}{b_n}\log P(\Delta_n>\delta)\leq\frac{L_n}{b_n}\sum_{k=1}^{n}\log E\left[e^{\theta|X_k-X_k^*|}\right]-\theta\delta.$$
Moreover, if we set
$$\rho_k(\theta):=\frac{e^{\lambda_ke^{\theta}}-1}{\lambda_ke^{\theta}},$$
we have
$$E\left[e^{\theta|X_k-X_k^*|}\right]=P(X_k^*=0)+P(X_k^*=1)+\sum_{h=2}^{\infty}e^{\theta|1-h|}P(X_k^*=h)=e^{-\lambda_k}+\lambda_ke^{-\lambda_k}+\sum_{h=2}^{\infty}e^{\theta(h-1)}\frac{\lambda_k^h}{h!}e^{-\lambda_k}$$
$$=e^{-\lambda_k}+\lambda_ke^{-\lambda_k}+e^{-\theta}e^{-\lambda_k}\left(e^{\lambda_ke^{\theta}}-1-\lambda_ke^{\theta}\right)=e^{-\lambda_k}+e^{-\theta}e^{-\lambda_k}\left(e^{\lambda_ke^{\theta}}-1\right)=e^{-\lambda_k}\left(1+e^{-\theta}\left(e^{\lambda_ke^{\theta}}-1\right)\right)=e^{-\lambda_k}\left(1+\lambda_k\rho_k(\theta)\right).$$
Therefore,
$$\frac{L_n}{b_n}\log P(\Delta_n>\delta)\leq-\frac{L_n}{b_n}\sum_{k=1}^{n}\lambda_k+\frac{L_n}{b_n}\sum_{k=1}^{n}\log\left(1+\lambda_k\rho_k(\theta)\right)-\theta\delta.\qquad\qquad(20)$$
The proof will be complete if we show that, for all $\theta>0$,
$$\limsup_{n\to\infty}\frac{L_n}{b_n}\sum_{k=1}^{n}\log\left(1+\lambda_k\rho_k(\theta)\right)\leq 1.\qquad\qquad(21)$$
In fact, by Equations (14) and (21), we deduce from Equation (20) that
$$\limsup_{n\to\infty}\frac{L_n}{b_n}\log P(\Delta_n>\delta)\leq-\theta\delta,$$
and we obtain Equation (17) by letting $\theta$ go to infinity.
Thus, we prove Equation (21). We remark that $\rho_n(\theta)\to 1$ because $\lambda_n\to 0$ (as $n\to\infty$). Hence, for all $\varepsilon\in(0,1)$, there exists $n_0$ such that, for all $n>n_0$, we have $\rho_n(\theta)<1+\varepsilon$ and
$$\frac{L_n}{b_n}\sum_{k=1}^{n}\log\left(1+\lambda_k\rho_k(\theta)\right)=\frac{L_n}{b_n}\sum_{k=1}^{n_0}\log\left(1+\lambda_k\rho_k(\theta)\right)+\frac{L_n}{b_n}\sum_{k=n_0+1}^{n}\log\left(1+\lambda_k\rho_k(\theta)\right)\leq\frac{L_n}{b_n}\sum_{k=1}^{n_0}\log\left(1+\lambda_k\rho_k(\theta)\right)+\frac{L_n}{b_n}\sum_{k=n_0+1}^{n}\log\left(1+\lambda_k(1+\varepsilon)\right).$$
Moreover, $\frac{L_n}{b_n}\sum_{k=1}^{n_0}\log\left(1+\lambda_k\rho_k(\theta)\right)\to 0$ (as $n\to\infty$) by Equation (13), and
$$\frac{L_n}{b_n}\sum_{k=n_0+1}^{n}\log\left(1+\lambda_k(1+\varepsilon)\right)\leq(1+\varepsilon)\frac{L_n}{b_n}\sum_{k=n_0+1}^{n}\lambda_k=(1+\varepsilon)\left(\frac{L_n}{b_n}\sum_{k=1}^{n}\lambda_k-\frac{L_n}{b_n}\sum_{k=1}^{n_0}\lambda_k\right).$$
Hence, Equation (21) follows from Equations (13) and (14), and the arbitrariness of $\varepsilon$. ☐
Remark 2 (The Bernoulli case; the sequence in Equation (2)).
The LDP in Theorem 3 holds also for the sequence in Equation (2) in place of the sequence in Equation (1). The proof is almost identical to the one of Theorem 3: in this case, we have
$$\Delta_n:=\left|\frac{L_n}{b_n}\sum_{k=1}^{n}X_k-\frac{L_n}{b_n}\sum_{k=1}^{n}X_k^*\right|$$
in place of Equation (18), and Inequality (19) still holds (even without the monotonicity of $\{L_n:n\geq 1\}$).
Remark 3 (Almost sure convergence to 1 of the sequences in Theorems 2 and 3).
Let $\{Z_n:n\geq 1\}$ be either the sequence in Equation (1) or the sequence in Equation (2), where $\{X_n:n\geq 1\}$ is as in Theorem 2 or as in Theorem 3 (so we also consider Remarks 1 and 2). Then, by a straightforward consequence of the Borel–Cantelli lemma, the sequence $\{Z_n:n\geq 1\}$ converges to 1 almost surely (as $n\to\infty$) if
$$\sum_{n\geq 1}P(Z_n\in C)<\infty\quad\textit{for every closed set }C\textit{ such that }1\notin C.$$
Obviously this condition holds if $C\subset(-\infty,0)$ because $\{Z_n:n\geq 1\}$ are nonnegative random variables. On the other hand, if $C\cap[0,\infty)$ is not empty, $\Lambda^*(C):=\inf_{x\in C}\Lambda^*(x)$ is finite; moreover, $\Lambda^*(C)\in(0,\infty)$ because $1\notin C$. Then, by the large deviation upper bound for the closed set $C$, for all $\delta>0$, there exists $n_\delta$ such that, for all $n>n_\delta$, we have
$$P(Z_n\in C)\leq e^{-(\Lambda^*(C)-\delta)\,b_n/L_n}.$$
Thus, again by the Borel–Cantelli lemma, $\{Z_n:n\geq 1\}$ converges almost surely to 1 (as $n\to\infty$) if, for all $\kappa>0$, we have
$$\sum_{n\geq 1}e^{-\kappa\,b_n/L_n}<\infty.\qquad\qquad(22)$$
Then, by the Cauchy condensation test, Equation (22) holds if and only if $\sum_{n\geq 1}2^ne^{-\kappa\,b_{2^n}/L_{2^n}}<\infty$ and, as we see below, the convergence of the condensed series is a consequence of the ratio test and of some hypotheses of Theorems 2 and 3. In fact,
$$\frac{2^{n+1}e^{-\kappa\,b_{2^{n+1}}/L_{2^{n+1}}}}{2^{n}e^{-\kappa\,b_{2^{n}}/L_{2^{n}}}}=2\exp\left(-\kappa\frac{b_{2^{n+1}}}{L_{2^{n+1}}}\left(1-\frac{b_{2^n}}{b_{2^{n+1}}}\cdot\frac{L_{2^{n+1}}}{L_{2^n}}\right)\right)\to 0\quad(\textit{as }n\to\infty)$$
because $\frac{b_{2^n}}{b_{2^{n+1}}}\to\alpha(1/2)$ by Equation (12), $\frac{L_{2^{n+1}}}{L_{2^n}}\to 1$ by Equation (11), and $\frac{b_{2^{n+1}}}{L_{2^{n+1}}}\to+\infty$ by Equation (13).
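For instance, with the choices in Equation (8), one has $\frac{b_n}{L_n}=\frac{n\,\ell_n}{\log n}\geq\sqrt{n}$ for all sufficiently large $n$, so that $\sum_{n\geq 1}e^{-\kappa b_n/L_n}\leq C+\sum_{n\geq 1}e^{-\kappa\sqrt{n}}<\infty$ for every $\kappa>0$ (where $C$ accounts for the finitely many initial terms), which gives Equation (22) directly in that case.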
We conclude with the results for the generalized Cramér model (the sequences in Equation (9)).
Corollary 1 (Application to the sequences in Equation (9)).
Let $\{X_n:n\geq 1\}$ be the random variables in Equation (7), and let $\{b_n:n\geq 1\}$ and $\{L_n:n\geq 1\}$ be defined by Equation (8). Then, the sequences $\left\{\frac{\sum_{k=1}^{n}(\log k)X_k}{n\ell_n}:n\geq 1\right\}$ and $\left\{\frac{(\log n)\sum_{k=1}^{n}X_k}{n\ell_n}:n\geq 1\right\}$ in Equation (9) satisfy the LDP with speed function $v_n=\frac{b_n}{L_n}=\frac{n\ell_n}{\log n}$ and the good rate function $\Lambda^*$ defined by Equation (4).
Proof. 
In this proof, the sequences in Equation (9) play the roles of the sequences in Equations (1) and (2) in Theorem 3 and Remark 2, respectively. Therefore, we have to check that the hypotheses of Theorem 3 are satisfied. Condition 1, Equations (11) and (13), and $\lim_{n\to\infty}\lambda_n=0$ can be easily checked. Moreover, one can also check Equation (12) with $\alpha(c)=c$; note that in this case $b_n=n\ell_n$ is the restriction (on $\mathbb{N}$) of a regularly varying function with index $\rho=1$, and $\{b_n:n\geq 1\}$ is eventually nondecreasing. Finally, Equation (14), which here reads
$$\lim_{n\to\infty}\frac{(\log n)\sum_{k=1}^{n}\frac{\ell_k}{\log k}}{n\,\ell_n}=1,$$
can be obtained as a consequence of Lemma 3; in fact, $\{\ell_n:n\geq 1\}$ and $\{\ell_n/\log n:n\geq 1\}$ are restrictions (on $\mathbb{N}$) of slowly varying functions at infinity. ☐
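The behaviour described by Corollary 1 and Remark 3 is easy to visualize by simulation. The following minimal Python sketch samples the generalized Cramér model of Equation (7) and prints the two ratios in Equation (9); the choices $r=2$, $q=3$, the starting index, and the sample sizes are illustrative only:

```python
# Simulate the generalized Cramer model X_k ~ Bernoulli(l_k / log k), with
# l_k = (log log k)^(r-1) / (phi(q) (r-1)!), and print the two ratios of Equation (9),
# which should approach 1 as n grows.
import random
from math import log, factorial

R, Q = 2, 3                                  # illustrative choices; Q is prime, so phi(Q) = Q - 1

def ell(k):
    return (log(log(k))) ** (R - 1) / ((Q - 1) * factorial(R - 1))

def ratios(n, seed=0):
    rng = random.Random(seed)
    weighted, plain = 0.0, 0
    for k in range(3, n + 1):                # start at k = 3 so that log log k >= 0
        if rng.random() < ell(k) / log(k):   # X_k ~ B(lambda_k), lambda_k = l_k / log k
            weighted += log(k)
            plain += 1
    denom = n * ell(n)
    return weighted / denom, log(n) * plain / denom

for n in (10**4, 10**5, 10**6):
    w, p = ratios(n)
    print(f"n = {n:>7}   weighted ratio = {w:.3f}   plain ratio = {p:.3f}")
```

Both ratios drift slowly towards 1, in agreement with the almost sure convergence stated after Equation (9); the convergence is slow because of the slowly varying factors involved.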
In conclusion, we can say that, roughly speaking, for any Borel set $A$ such that $1\notin\bar{A}$ (where $\bar{A}$ is the closure of $A$), the probabilities $P\left(\frac{\sum_{k=1}^{n}(\log k)X_k}{n\ell_n}\in A\right)$ and $P\left(\frac{(\log n)\sum_{k=1}^{n}X_k}{n\ell_n}\in A\right)$ decay exponentially as $e^{-\frac{n\ell_n}{\log n}\inf_{x\in A}\Lambda^*(x)}$ (as $n\to\infty$). Thus, in the spirit of Tao's remark, we are able to suggest estimates concerning a sort of “generalized” Chebyshev function, defined by $\frac{\sum_{p_1\cdots p_r\leq x}\log(p_1\cdots p_r)}{x\,\ell_x}$ or by $\frac{(\log x)\sum_{p_1\cdots p_r\leq x}1}{x\,\ell_x}$. To our knowledge, such estimates are not available for $r>1$.

Author Contributions

Rita Giuliano and Claudio Macci equally contributed to the proofs of the results. The paper was also written and reviewed cooperatively.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dembo, A.; Zeitouni, O. Large Deviations Techniques and Applications, 2nd ed.; Springer: New York, NY, USA, 1998.
  2. Fang, L. Large and moderate deviation principles for alternating Engel expansions. J. Number Theory 2015, 156, 263–276.
  3. Granville, A. Harald Cramér and the distribution of prime numbers. Scand. Actuar. J. 1995, 1995, 12–28.
  4. Döring, H.; Eichelsbacher, P. Moderate deviations via cumulants. J. Theor. Probab. 2013, 26, 360–385.
  5. Féray, V.; Méliot, P.L.; Nikeghbali, A. Mod-φ Convergence, I: Normality Zones and Precise Deviations. Unpublished Manuscript. 2015. Available online: http://arxiv.org/pdf/1304.2934.pdf (accessed on 23 November 2015).
  6. Jacod, J.; Kowalski, E.; Nikeghbali, A. Mod-Gaussian convergence: New limit theorems in probability and number theory. Forum Math. 2011, 23, 835–873.
  7. Tenenbaum, G.; Mendès France, M. The Prime Numbers and Their Distribution (Translated from the 1997 French original by P.G. Spain); American Mathematical Society: Providence, RI, USA, 2000.
  8. Landau, E. Handbuch der Lehre von der Verteilung der Primzahlen (2 Volumes), 3rd ed.; Chelsea Publishing: New York, NY, USA, 1974.
  9. Hardy, G.H.; Wright, E.M. An Introduction to the Theory of Numbers, 4th ed.; Oxford University Press: London, UK, 1975.
  10. Tenenbaum, G. Introduction to Analytic and Probabilistic Number Theory, 3rd ed. (Translated from the 2008 French edition by P.D.F. Ion); American Mathematical Society: Providence, RI, USA, 2015.
  11. Davenport, H. Multiplicative Number Theory, 3rd ed.; Springer: New York, NY, USA; Berlin, Germany, 2000.
  12. Ford, K.; Sneed, J. Chebyshev's bias for products of two primes. Exp. Math. 2010, 19, 385–398.
  13. Meng, X. Chebyshev's Bias for Products of k Primes. Unpublished Manuscript. 2016. Available online: http://arxiv.org/pdf/1606.04877v2.pdf (accessed on 16 August 2016).
  14. Tao, T. Probabilistic Models and Heuristics for the Primes (Optional). In Terence Tao Blog, 2015. Available online: https://terrytao.wordpress.com/2015/01/04/254a-supplement-4-probabilistic-models-and-heuristics-for-the-primes-optional/ (accessed on 4 January 2015).
  15. Bingham, N.H.; Goldie, C.M.; Teugels, J.L. Regular Variation; Encyclopedia of Mathematics and its Applications, Volume 27; Cambridge University Press: Cambridge, UK, 1987.
  16. Giuliano, R.; Macci, C. Asymptotic results for weighted means of random variables which converge to a Dickman distribution, and some number theoretical applications. ESAIM Probab. Stat. 2015, 19, 395–413.
  17. Arratia, R.; Tavaré, S. Independent processes approximations for random combinatorial structures. Adv. Math. 1994, 104, 90–154.
