Open Access
*Mathematics* **2018**, *6*(4), 49; doi:10.3390/math6040049

Article

Large Deviation Results and Applications to the Generalized Cramér Model †

^{1} Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, I-56127 Pisa, Italy

^{2} Dipartimento di Matematica, Università di Roma Tor Vergata, Via della Ricerca Scientifica, I-00133 Rome, Italy

* Correspondence: [email protected]

^{†} The support of INdAM (Fondi GNAMPA) and Università di Pisa (Fondi di Ateneo) is acknowledged. The first version of this paper was written during the stay of the first author at the University Jean Monnet (St. Etienne).

Received: 2 March 2018 / Accepted: 27 March 2018 / Published: 2 April 2018

## Abstract

In this paper, we prove large deviation results for some sequences of weighted sums of random variables. These sequences have applications to the probabilistic generalized Cramér model for products of primes in arithmetic progressions; they could lead to new conjectures concerning the (non-random) set of products of primes in arithmetic progressions, a relevant topic in number theory.

Keywords: arithmetic progressions; first Chebyshev function; products of primes; regularly varying functions; slowly varying functions

## 1. Introduction

The aim of this paper is to prove asymptotic results for a class of sequences of random variables, i.e.,

$$\left\{\frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{b}_{n}}:n\ge 1\right\}$$

for suitable sequences of real numbers $\{{b}_{n}:n\ge 1\}$ and $\{{L}_{n}:n\ge 1\}$ (see Condition 1 in Section 3) and suitable independent random variables $\{{X}_{n}:n\ge 1\}$ defined on the same probability space $(\Omega ,\mathcal{F},P)$. We also present analogous results for the slightly different sequence

$$\left\{\frac{{L}_{n}{\sum}_{k=1}^{n}{X}_{k}}{{b}_{n}}:n\ge 1\right\}.$$

More precisely, we refer to the theory of large deviations, which provides asymptotic computations of small probabilities on an exponential scale (see, e.g., [1] as a reference on this topic). We recall [2] as a recent reference on large deviations for models of interest in number theory.

The origin and motivation of our research lie in the study of some random models similar in nature to the celebrated Cramér model for prime numbers: namely, what we have called the generalized model (for products of prime numbers in arithmetic progressions). We are not aware of any work where these probabilistic models are studied. Details on these structures will be given in Section 2. Here we only point out that, just as the classical probabilistic model introduced by Cramér has been used to formulate conjectures on the (non-random) set of primes (see [3] for details), we can likewise derive conjectures for the non-random sets of products of primes or products of primes in arithmetic progressions. The large deviation results for the sequences concerning these structures will be given in Corollary 1.

We also remark that the particular form of the sequence (1) is motivated by analogy with the first Chebyshev function, as will be explained in Section 2.

It is worth noting that some moderate deviation properties can also be proved (in terms of suitable bounds on cumulants and central moments) for the centered sequences

$$\left\{\frac{{\sum}_{k=1}^{n}{L}_{k}({X}_{k}-\mathbb{E}\left[{X}_{k}\right])}{{b}_{n}}:n\ge 1\right\}\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}\left\{\frac{{L}_{n}{\sum}_{k=1}^{n}({X}_{k}-\mathbb{E}\left[{X}_{k}\right])}{{b}_{n}}:n\ge 1\right\}.$$

We do not pursue such statements in the sequel: although some specific assumptions would be needed in the present setting, these results go in the same direction as those of [4], where moderate deviations from the point of view of cumulants and central moments are fully investigated.

It should be noted that our results are a contribution to the recent literature on limit theorems of interest in probability and number theory; here, we recall [5], where the results are formulated in terms of the mod-$\phi $ convergence (see also [6] where the simpler mod-Gaussian convergence is studied).

We now introduce some terminology and notation. We always set $0log0=0$, $\frac{c}{\infty}=0$ for $c\ne 0$, and $\lfloor x\rfloor :=max\{k\in \mathbb{Z}:k\le x<k+1\}$ for all $x\in \mathbb{R}$. Moreover, we write

- ${a}_{n}\sim {b}_{n}$ to mean that ${lim}_{n\to \infty}\frac{{a}_{n}}{{b}_{n}}=1$;
- $Z\stackrel{\mathrm{law}}{\sim}\mathcal{B}\left(p\right)$, for $p\in [0,1]$, to mean that $P(Z=1)=p=1-P(Z=0)$;
- $Z\stackrel{\mathrm{law}}{\sim}\mathcal{P}\left(\lambda \right)$, for $\lambda >0$, to mean that $P(Z=k)=\frac{{\lambda}^{k}}{k!}{e}^{-\lambda}$ for all integers $k\ge 0$.

## 2. Preliminaries

**On large deviations.**

We refer to [1] (pages 4–5). Let $\mathcal{Z}$ be a topological space equipped with its completed Borel $\sigma $-field. A sequence of $\mathcal{Z}$-valued random variables $\{{Z}_{n}:n\ge 1\}$ satisfies the large deviation principle (LDP) with speed function ${v}_{n}$ and rate function I if the following holds: ${lim}_{n\to \infty}{v}_{n}=\infty $; the function $I:\mathcal{Z}\to [0,\infty ]$ is lower semi-continuous;

$$\underset{n\to \infty}{lim\; sup}\frac{1}{{v}_{n}}logP({Z}_{n}\in F)\le -\underset{z\in F}{inf}I\left(z\right)\phantom{\rule{4pt}{0ex}}\mathrm{for}\text{}\mathrm{all}\text{}\mathrm{closed}\text{}\mathrm{sets}\phantom{\rule{4pt}{0ex}}F;$$

and

$$\underset{n\to \infty}{lim\; inf}\frac{1}{{v}_{n}}logP({Z}_{n}\in G)\ge -\underset{z\in G}{inf}I\left(z\right)\phantom{\rule{4pt}{0ex}}\mathrm{for}\text{}\mathrm{all}\text{}\mathrm{open}\text{}\mathrm{sets}\phantom{\rule{4pt}{0ex}}G.$$

A rate function I is said to be good if its level sets $\{\{z\in \mathcal{Z}:I\left(z\right)\le \eta \}:\eta \ge 0\}$ are compact.

Throughout this paper, we prove LDPs with $\mathcal{Z}=\mathbb{R}$. We recall the following known result for future use.

**Theorem 1** (Gärtner–Ellis Theorem)**.**

Let $\{{Z}_{n}:n\ge 1\}$ be a sequence of real valued random variables. Assume that the function $\mathsf{\Lambda}:\mathbb{R}\to (-\infty ,\infty ]$ defined by

$$\mathsf{\Lambda}\left(\theta \right):=\underset{n\to \infty}{lim}\frac{1}{{v}_{n}}log\mathbb{E}\left[{e}^{{v}_{n}\theta {Z}_{n}}\right]\phantom{\rule{4pt}{0ex}}(for\text{}all\phantom{\rule{4pt}{0ex}}\theta \in \mathbb{R})$$

exists; assume, moreover, that Λ is essentially smooth (see e.g., Definition 2.3.5 in [1]) and lower semi-continuous. Then $\{{Z}_{n}:n\ge 1\}$ satisfies the LDP with speed function ${v}_{n}$ and good rate function ${\mathsf{\Lambda}}^{*}:\mathbb{R}\to [0,\infty ]$ defined by

$${\mathsf{\Lambda}}^{*}\left(z\right):=\underset{\theta \in \mathbb{R}}{sup}\{\theta z-\mathsf{\Lambda}\left(\theta \right)\}.$$

**Proof.** See, e.g., Theorem 2.3.6 in [1]. ☐

The main application of Theorem 1 in this paper concerns Theorem 2, where we have

$$\mathsf{\Lambda}\left(\theta \right)={e}^{\theta}-1,\phantom{\rule{4pt}{0ex}}\mathrm{which}\text{}\mathrm{yields}\phantom{\rule{4pt}{0ex}}{\mathsf{\Lambda}}^{*}\left(x\right)=\left\{\begin{array}{cc}xlogx-x+1\hfill & \phantom{\rule{4pt}{0ex}}\mathrm{if}\phantom{\rule{4pt}{0ex}}x\ge 0\hfill \\ \infty \hfill & \phantom{\rule{4pt}{0ex}}\mathrm{if}\phantom{\rule{4pt}{0ex}}x<0.\hfill \end{array}\right.$$
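As an informal numerical check (not part of the original argument), the Legendre transform above can be evaluated on a grid of values of $\theta$ and compared with the closed form $xlogx-x+1$; the Python sketch below assumes nothing beyond the formula for $\mathsf{\Lambda}$.

```python
import math

def legendre_transform(x, theta_grid):
    """Numerically evaluate sup_theta { theta * x - (e^theta - 1) } on a grid."""
    return max(theta * x - (math.exp(theta) - 1.0) for theta in theta_grid)

thetas = [i / 1000.0 for i in range(-5000, 5001)]   # grid on [-5, 5], step 0.001
for x in (0.5, 1.0, 2.0, 3.0):
    closed_form = x * math.log(x) - x + 1.0         # the rate function above
    assert abs(legendre_transform(x, thetas) - closed_form) < 1e-3
```

The supremum is attained at $\theta =logx$, which lies inside the grid for the values of x tested here.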

The LDP in Theorem 3 will instead be proved by combining Theorem 4.2.13 in [1] with Theorem 2, i.e., by checking the exponential equivalence (see, e.g., Definition 4.2.10 in [1]) of the involved sequences.

**On the generalized Cramér model (for products of primes in arithmetic progressions).**

The Cramér model for prime numbers consists in a sequence of independent random variables $\{{X}_{n}:n\ge 1\}$ such that, for every $n\ge 2$,

$${X}_{n}\stackrel{\mathrm{law}}{\sim}\mathcal{B}(1/logn).$$

This model can be justified by the prime number theorem (PNT), which roughly asserts that the expected density of primes around x is $\frac{1}{logx}$: the cardinality of prime numbers $\le n$ is

$$\pi \left(n\right):=\sum _{p\le n}1\sim \mathrm{li}\left(n\right):={\int}_{2}^{n}\frac{1}{logt}dt,$$

and, with the words of [7] (see footnote on p. 6), “the quantity $\frac{1}{logn}$ appears here naturally as the derivative of $\mathrm{li}\left(x\right)$ evaluated at $x=n$”. Since ${\int}_{2}^{n}\frac{1}{logt}dt\sim \frac{n}{logn}$, another way of stating the PNT is

$$\frac{\pi \left(n\right)}{n}\sim \frac{1}{logn}.$$
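As a rough illustration of these two asymptotic statements (an informal check, not part of the paper), one can compare $\pi \left(n\right)$ with $\mathrm{li}\left(n\right)$ and with $\frac{n}{logn}$ for a moderate n; the sketch below uses a simple sieve and a midpoint-rule integral.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning the list of primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [k for k in range(2, n + 1) if sieve[k]]

def li(x, steps=200000):
    """Midpoint-rule approximation of the logarithmic integral from 2 to x."""
    h = (x - 2.0) / steps
    return h * sum(1.0 / math.log(2.0 + (i + 0.5) * h) for i in range(steps))

n = 100000
pi_n = len(primes_up_to(n))
assert abs(pi_n / li(n) - 1.0) < 0.01              # li(n) is already very accurate
assert abs(pi_n / (n / math.log(n)) - 1.0) < 0.15  # n/log n converges much more slowly
```

The gap between the two tolerances reflects the fact that $\mathrm{li}\left(n\right)$ approximates $\pi \left(n\right)$ far better than $\frac{n}{logn}$ at this range.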

A first extension of this formula concerns the case of integers n which are products of exactly r prime factors ($r\ge 2$). More precisely, we consider the sets

$${A}_{r}\left(n\right):=\{k\le n:\Omega \left(k\right)=r\}\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}{B}_{r}\left(n\right):=\{k\le n:\omega \left(k\right)=r\},$$

where $\omega \left(n\right)$ is the number of distinct prime factors of n, and $\Omega \left(n\right)$ counts the number of prime factors of n (with multiplicity); this means that, letting (by the canonical prime factorization of n) $n={\prod}_{i=1}^{\omega \left(n\right)}{p}_{i}^{{\alpha}_{i}}$, where ${p}_{1},\dots ,{p}_{\omega \left(n\right)}$ are the distinct prime factors of n, we have

$$\Omega \left(n\right):=\sum _{i=1}^{\omega \left(n\right)}{\alpha}_{i}.$$

A result proved by Landau in 1909 (see, e.g., [8]) states that the cardinalities ${\tau}_{r}\left(n\right)$ and ${\pi}_{r}\left(n\right)$ of ${A}_{r}\left(n\right)$ and ${B}_{r}\left(n\right)$ respectively verify

$${\tau}_{r}\left(n\right):=\sum _{k\in {A}_{r}\left(n\right)}1\sim \frac{n{(loglogn)}^{r-1}}{(r-1)!logn}\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}{\pi}_{r}\left(n\right):=\sum _{k\in {B}_{r}\left(n\right)}1\sim \frac{n{(loglogn)}^{r-1}}{(r-1)!logn};$$

see also, e.g., Theorem 437 in [9] (Section 22.18, page 368) or [10] (II.6, Theorems 4 and 5). Note that this formula for ${\pi}_{r}\left(n\right)$ reduces to Equation (6) when $r=1$.

Going a little further, for fixed integers a and q, we can consider the sets of products of primes in arithmetic progressions

$${A}_{r}^{\left(q\right)}\left(n\right):=\{k\le n:\Omega \left(k\right)=r,\phantom{\rule{4pt}{0ex}}k\equiv a\phantom{\rule{4.pt}{0ex}}\mathrm{mod}\phantom{\rule{4.pt}{0ex}}q\}\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}{B}_{r}^{\left(q\right)}\left(n\right):=\{k\le n:\omega \left(k\right)=r,\phantom{\rule{4pt}{0ex}}k\equiv a\phantom{\rule{4.pt}{0ex}}\mathrm{mod}\phantom{\rule{4.pt}{0ex}}q\}.$$

One can prove (by methods similar to those in [10,11]) that, for any a and q with $(a,q)=1$, the cardinalities ${\tau}_{r}^{\left(q\right)}\left(n\right)$ and ${\pi}_{r}^{\left(q\right)}\left(n\right)$ of ${A}_{r}^{\left(q\right)}\left(n\right)$ and ${B}_{r}^{\left(q\right)}\left(n\right)$ respectively verify

$${\tau}_{r}^{\left(q\right)}\left(n\right):=\sum _{k\in {A}_{r}^{\left(q\right)}\left(n\right)}1\sim \frac{1}{\varphi \left(q\right)}\cdot \frac{n{(loglogn)}^{r-1}}{(r-1)!logn}\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}{\pi}_{r}^{\left(q\right)}\left(n\right):=\sum _{k\in {B}_{r}^{\left(q\right)}\left(n\right)}1\sim \frac{1}{\varphi \left(q\right)}\cdot \frac{n{(loglogn)}^{r-1}}{(r-1)!logn},$$

where $\varphi $ is Euler’s totient function. Notice that, for $r=1$, we recover the sets of primes in arithmetic progressions, considered for instance in [8,10] (II.8), or [11]; the case $r=2$ is studied in [12]; the general case $r\ge 1$ is considered in the recent preprint [13]; for $q=1$, we recover the sets and the formulas for the model described above.

Therefore, following Cramér’s heuristic, Equation (5), we can define the generalized Cramér model for products of r prime numbers (or products of r prime numbers in arithmetic progression) as a sequence of independent random variables $\{{X}_{n}:n\ge 1\}$ such that

$${X}_{n}\stackrel{\mathrm{law}}{\sim}\mathcal{B}\left({\lambda}_{n}\right),\phantom{\rule{4pt}{0ex}}\mathrm{where}\phantom{\rule{4pt}{0ex}}{\lambda}_{n}:=\frac{{\ell}_{n}}{logn}\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}{\ell}_{n}:=\frac{1}{\varphi \left(q\right)}\xb7\frac{{(loglogn)}^{r-1}}{(r-1)!}.$$

Obviously, in Equation (7) we take $n\ge {n}_{0}$, where ${n}_{0}$ is an integer such that ${\lambda}_{n}\in (0,1]$ for all $n\ge {n}_{0}$; the definition of ${\lambda}_{n}$ for $n<{n}_{0}$ is arbitrary.
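The existence of such an ${n}_{0}$ is easy to verify in concrete cases; the sketch below (an informal check with the hypothetical choices $r=2$ and $q=4$, which are ours and not from the paper) computes ${\lambda}_{n}$ and confirms it is a valid Bernoulli parameter from ${n}_{0}=3$ on.

```python
import math

def euler_phi(q):
    """Euler's totient function via trial division."""
    result, m, p = q, q, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def lam(n, r, q):
    """lambda_n = ell_n / log n with ell_n = (log log n)^(r-1) / (phi(q) (r-1)!)."""
    ell = math.log(math.log(n)) ** (r - 1) / (euler_phi(q) * math.factorial(r - 1))
    return ell / math.log(n)

# for r = 2, q = 4, the parameter lies in (0, 1] for every n >= 3
# (n = 2 fails, since log log 2 < 0)
r, q, n0 = 2, 4, 3
assert all(0 < lam(n, r, q) <= 1 for n in range(n0, 2000))
```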

Large deviation results for this model will be presented in Corollary 1 as a consequence of Theorem 3 and Remark 2, with

$${L}_{n}:=logn\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}{b}_{n}:=n{\ell}_{n};$$

thus, the sequences in Equations (1) and (2) become

$$\frac{{\sum}_{k=1}^{n}(logk){X}_{k}}{n{\ell}_{n}}\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}\frac{(logn){\sum}_{k=1}^{n}{X}_{k}}{n{\ell}_{n}},$$

respectively. Moreover, by taking into account Remark 3 presented below, the sequences in Equation (9) converge almost surely to 1 (as $n\to \infty $).

**On the first Chebyshev function.**

The first Chebyshev function is defined by

$$\theta \left(x\right):=\sum _{p\le x}logp,$$

where the sum is extended over all prime numbers $p\le x$.

Therefore, when considering the classical Cramér model, this function is naturally modeled with ${\sum}_{k=1}^{n}(logk){X}_{k}$ (and we obtain the numerator of the first fraction in Equation (9)).

It must be noted that T. Tao, in his blog (see [14]), considers the same random variable ${\sum}_{k\le x}(logk){X}_{k}$ and proves that almost surely one has

$$\sum _{k\le x}(logk){X}_{k}=x+{O}_{\epsilon}\left({x}^{1/2+\epsilon}\right)$$

for all $\epsilon >0$ (where the implied constant in the ${O}_{\epsilon}(\cdot )$ notation is allowed to be random). In particular, almost surely one has

$$\underset{n\to \infty}{lim}\frac{{\sum}_{k\le n}(logk){X}_{k}}{n}=1.$$
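This almost sure limit is easy to observe in simulation (an informal illustration, not part of the paper): a single draw of the normalized sum for a large n already lies close to 1, since the expectation of the numerator is $n+O\left(1\right)$ and its standard deviation is of order $\sqrt{nlogn}$.

```python
import math
import random

random.seed(20180402)   # fixed seed so that the run is reproducible

def sample_ratio(n):
    """One draw of (sum_{3<=k<=n} log(k) * X_k) / n with X_k ~ Bernoulli(1/log k);
    the sum starts at k = 3 so that 1/log k is a valid probability."""
    total = 0.0
    for k in range(3, n + 1):
        if random.random() < 1.0 / math.log(k):
            total += math.log(k)
    return total / n

ratio = sample_ratio(200000)
assert abs(ratio - 1.0) < 0.05, ratio   # well within a few standard deviations
```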

It appears clearly that in this setting we have a sequence of the form of Equation (1), with the particular choices ${L}_{n}=logn$ and ${b}_{n}=n$. What we are going to investigate in the sequel is how the sequence of random variables $\{{X}_{n}:n\ge 1\}$ and the two sequences of numbers $\{{L}_{n}:n\ge 1\}$ and $\{{b}_{n}:n\ge 1\}$ must be connected in order to obtain large deviations and convergence results (see also Equations (8) and (9) above).

**On slowly and regularly varying functions (at infinity).**

Here we recall the following basic definitions. A positive measurable function H defined on some neighborhood $[{x}_{0},\infty )$ of infinity is said to be slowly varying at infinity (see, e.g., [15], page 6) if

$$\underset{t\to \infty}{lim}\frac{H\left(tx\right)}{H\left(t\right)}=1\phantom{\rule{4pt}{0ex}}\mathrm{for}\text{}\mathrm{all}\phantom{\rule{4pt}{0ex}}x>0.$$

Similarly, a positive measurable function M defined on some neighborhood $[{x}_{0},\infty )$ of infinity is said to be regularly varying at infinity of index $\rho $ (see, e.g., [15], page 18) if

$$\underset{t\to \infty}{lim}\frac{M\left(tx\right)}{M\left(t\right)}={x}^{\rho}\phantom{\rule{4pt}{0ex}}\mathrm{for}\text{}\mathrm{all}\phantom{\rule{4pt}{0ex}}x>0.$$

Obviously, we recover the slowly varying case if $\rho =0$. Recall the following well-known result for slowly varying functions.

**Lemma 1** (Karamata’s representation of slowly varying functions)**.**

A function H is slowly varying at infinity if and only if

$$H\left(x\right)=c\left(x\right)exp\left({\int}_{{x}_{0}}^{x}\frac{\varphi \left(t\right)}{t}dt\right),$$

where $\varphi \left(x\right)\to 0$ and $c\left(x\right)\to {c}_{\infty}$ for some ${c}_{\infty}>0$ (as $x\to \infty $).

**Proof.** See, e.g., Theorem 1.3.1 in [15]. ☐

In view of what follows, we also present some auxiliary results. They are more or less known, but we give detailed proofs in order to keep the paper self-contained.

**Lemma 2.**

Let M be a regularly varying function (at infinity) of index $\rho \ge 0$. Then,

$$\underset{t\to \infty}{lim}\frac{M(\lfloor tx\rfloor )}{M\left(t\right)}={x}^{\rho}\phantom{\rule{4pt}{0ex}}for\text{}all\phantom{\rule{4pt}{0ex}}x>0.$$

**Proof.**

It is well-known (see, e.g., Theorem 1.4.1 in [15]) that we have $M\left(x\right)={x}^{\rho}H\left(x\right)$ for a suitable slowly varying function H. Thus, it is easy to check that it suffices to prove the result for the case $\rho =0$ (namely for a slowly varying function H), i.e.,

$$\underset{t\to \infty}{lim}\frac{H(\lfloor tx\rfloor )}{H\left(t\right)}=1\phantom{\rule{4pt}{0ex}}\mathrm{for}\text{}\mathrm{all}\phantom{\rule{4pt}{0ex}}x>0.$$

By Lemma 1, for all $x>0$, we have

$$\frac{H(\lfloor tx\rfloor )}{H\left(t\right)}=\frac{c(\lfloor tx\rfloor )}{c\left(t\right)}exp\left({\int}_{t}^{\lfloor tx\rfloor}\frac{\varphi \left(v\right)}{v}dv\right)$$

for $t>0$. Obviously, $\frac{c(\lfloor tx\rfloor )}{c\left(t\right)}\to 1$ (as $t\to \infty $). Moreover, for all $\epsilon >0$, we have

$$\left|{\int}_{t}^{\lfloor tx\rfloor}\frac{\varphi \left(v\right)}{v}dv\right|\le \epsilon |log(\lfloor tx\rfloor /t)|$$

for t large enough, and $log(\lfloor tx\rfloor /t)\to logx$ (as $t\to \infty $); thus,

$${\int}_{t}^{\lfloor tx\rfloor}\frac{\varphi \left(v\right)}{v}dv\to 0\phantom{\rule{4pt}{0ex}}(\mathrm{as}\phantom{\rule{4pt}{0ex}}t\to \infty )$$

by the arbitrariness of $\epsilon >0$. Thus, Equation (10) holds, and the proof is complete. ☐

**Lemma 3.**

Let H be a slowly varying function (at infinity). Then,

$$\underset{x\to \infty}{lim}\frac{xH\left(x\right)}{{\sum}_{k=1}^{\lfloor x\rfloor}H\left(k\right)}=1.$$
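Lemma 3 can be illustrated numerically (an informal check, not part of the proof) with the slowly varying choice $H=log$: here ${\sum}_{k=2}^{\lfloor x\rfloor}logk=log\lfloor x\rfloor !\approx xlogx-x$, so the ratio approaches 1 with an error of order $\frac{1}{logx}$.

```python
import math

def ratio(x, H):
    """x * H(x) / sum_{k=2}^{x} H(k) for an integer x (note H(1) = 0 for H = log)."""
    return (x * H(x)) / sum(H(k) for k in range(2, x + 1))

# for H = log the ratio tends to 1, but only at rate 1/log x, hence the loose bound
assert abs(ratio(10**6, math.log) - 1.0) < 0.1
```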

**Proof.**

By the representation of H in Lemma 1, for all $\epsilon >0$ there is an integer ${n}_{0}\ge 1$ such that, for all $x>{n}_{0}$, we have ${c}_{\infty}-\epsilon <c\left(x\right)<{c}_{\infty}+\epsilon $ and $-\epsilon <\varphi \left(x\right)<\epsilon $. Then, we take $x\ge {n}_{0}+1$, and

$$\frac{{\sum}_{k=1}^{\lfloor x\rfloor}H\left(k\right)}{xH\left(x\right)}=\frac{{\sum}_{k=1}^{{n}_{0}}H\left(k\right)}{xH\left(x\right)}+\frac{{\sum}_{k={n}_{0}+1}^{\lfloor x\rfloor}H\left(k\right)}{xH\left(x\right)}.$$

The first summand on the right hand side can be ignored since, if we take $\epsilon \in (0,1)$, for sufficiently large x, we have

$$H\left(x\right)>\frac{{c}_{\infty}}{2}exp\left(-\epsilon {\int}_{{x}_{0}}^{x}\frac{1}{t}dt\right)=\frac{{c}_{\infty}}{2}{\left(\frac{x}{{x}_{0}}\right)}^{-\epsilon},$$

which yields $xH\left(x\right)>{c}_{1}{x}^{1-\epsilon}$ for a suitable constant ${c}_{1}>0$ (and ${x}^{1-\epsilon}\to \infty $ as $x\to \infty $). Therefore, we concentrate our attention on the second summand and, by taking into account again the representation of H in Lemma 1, for sufficiently large x, we have

$$\frac{{\sum}_{k={n}_{0}+1}^{\lfloor x\rfloor}H\left(k\right)}{xH\left(x\right)}=\frac{{\sum}_{k={n}_{0}+1}^{\lfloor x\rfloor}c\left(k\right)exp\left({\int}_{{x}_{0}}^{k}\frac{\varphi \left(t\right)}{t}dt\right)}{xc\left(x\right)exp\left({\int}_{{x}_{0}}^{x}\frac{\varphi \left(t\right)}{t}dt\right)}=\frac{{\sum}_{k={n}_{0}+1}^{\lfloor x\rfloor}\frac{c\left(k\right)}{c\left(x\right)}exp\left(-{\int}_{k}^{x}\frac{\varphi \left(t\right)}{t}dt\right)}{x}.$$

Moreover,

$$\frac{{\sum}_{k={n}_{0}+1}^{\lfloor x\rfloor}\frac{c\left(k\right)}{c\left(x\right)}exp\left(-{\int}_{k}^{x}\frac{\varphi \left(t\right)}{t}dt\right)}{x}\le \frac{{c}_{\infty}+\epsilon}{{c}_{\infty}-\epsilon}\frac{{\sum}_{k={n}_{0}+1}^{\lfloor x\rfloor}{k}^{-\epsilon}}{{x}^{1-\epsilon}}\to \frac{{c}_{\infty}+\epsilon}{{c}_{\infty}-\epsilon}\frac{1}{1-\epsilon}\phantom{\rule{4pt}{0ex}}(\mathrm{as}\phantom{\rule{4pt}{0ex}}x\to \infty )$$

and

$$\frac{{\sum}_{k={n}_{0}+1}^{\lfloor x\rfloor}\frac{c\left(k\right)}{c\left(x\right)}exp\left(-{\int}_{k}^{x}\frac{\varphi \left(t\right)}{t}dt\right)}{x}\ge \frac{{c}_{\infty}-\epsilon}{{c}_{\infty}+\epsilon}\frac{{\sum}_{k={n}_{0}+1}^{\lfloor x\rfloor}{k}^{\epsilon}}{{x}^{1+\epsilon}}\to \frac{{c}_{\infty}-\epsilon}{{c}_{\infty}+\epsilon}\frac{1}{1+\epsilon}\phantom{\rule{4pt}{0ex}}(\mathrm{as}\phantom{\rule{4pt}{0ex}}x\to \infty ),$$

and the proof is complete by the arbitrariness of $\epsilon $. ☐

## 3. Results

In this section we present large deviation results for Equations (1) and (2). We start with the case of Poisson distributed random variables (see Theorem 2 and Remark 1), and later we consider the case of Bernoulli distributed random variables (see Theorem 3 and Remark 2). Our large deviation results yield the almost sure convergence to 1 (as $n\to \infty $) of the involved random variables (see Remark 3 for details). In particular, the results for Bernoulli distributed random variables can be applied to the sequences of the generalized Cramér model in Equation (9) (see Corollary 1).

In all our results, we assume the following condition.

**Condition 1.**

The sequence $\{{b}_{n}:n\ge 1\}$ is eventually positive; $\{{L}_{n}:n\ge 1\}$ is eventually positive and non-decreasing.

In general, we can ignore the definition of $\{{b}_{n}:n\ge 1\}$ and $\{{L}_{n}:n\ge 1\}$ for a finite number of indices; therefore, in order to simplify the proofs, we assume that $\{{b}_{n}:n\ge 1\}$ and $\{{L}_{n}:n\ge 1\}$ are positive sequences and that $\{{L}_{n}:n\ge 1\}$ is non-decreasing.

We start with the case where $\{{X}_{n}:n\ge 1\}$ are (independent) Poisson distributed random variables.

**Theorem 2** (The Poisson case; the sequence in Equation (1))**.**

Let $\{{b}_{n}:n\ge 1\}$ and $\{{L}_{n}:n\ge 1\}$ be two sequences as in Condition 1. Assume that

$$\{{L}_{n}:n\ge 1\}\phantom{\rule{4pt}{0ex}}\text{is the restriction (on}\phantom{\rule{4pt}{0ex}}\mathbb{N}\text{) of a slowly varying function (at infinity).}$$

$$\text{For all}\phantom{\rule{4pt}{0ex}}c\in (0,1),\phantom{\rule{4pt}{0ex}}\alpha \left(c\right):=\underset{n\to \infty}{lim}\frac{{b}_{\lfloor cn\rfloor}}{{b}_{n}}\phantom{\rule{4pt}{0ex}}\text{exists, and}\phantom{\rule{4pt}{0ex}}\underset{c\downarrow 0}{lim}\alpha \left(c\right)=0.$$

$$\underset{n\to \infty}{lim}\frac{{L}_{n}}{{b}_{n}}=0.$$

Moreover, assume that $\{{X}_{n}:n\ge 1\}$ are independent and ${X}_{n}\stackrel{\mathrm{law}}{\sim}\mathcal{P}\left({\lambda}_{n}\right)$ for all $n\ge 1$, where $\{{\lambda}_{n}:n\ge 1\}$ are positive numbers such that

$$\sum _{k=1}^{n}{\lambda}_{k}\sim \frac{{b}_{n}}{{L}_{n}}.$$

Then, the sequence in Equation (1) satisfies the LDP with speed function ${v}_{n}=\frac{{b}_{n}}{{L}_{n}}$ and the good rate function ${\mathsf{\Lambda}}^{*}$ defined by Equation (4).

We point out that Equation (12) is satisfied if the sequence $\{{b}_{n}:n\ge 1\}$ is nondecreasing and is the restriction (on $\mathbb{N}$) of a regularly varying function with positive index (at infinity); this is a consequence of Lemma 2.
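The limit underlying Theorem 2, namely $\frac{{L}_{n}}{{b}_{n}}{\sum}_{k=1}^{n}{\lambda}_{k}({e}^{\theta {L}_{k}/{L}_{n}}-1)\to {e}^{\theta}-1$, can be observed numerically (an informal illustration, not part of the paper) under the hypothetical concrete choices ${L}_{n}=logn$, ${b}_{n}=n$, ${\lambda}_{k}=\frac{1}{logk}$ (for $k\ge 3$), which satisfy Equations (11)–(14) since ${\sum}_{k=3}^{n}\frac{1}{logk}\sim \frac{n}{logn}$.

```python
import math

def scaled_log_mgf(theta, n):
    """(L_n/b_n) * sum_{k=3}^n lambda_k * (exp(theta * L_k / L_n) - 1)
    for the concrete choices L_n = log n, b_n = n, lambda_k = 1/log k."""
    log_n = math.log(n)
    total = sum((1.0 / math.log(k)) * (math.exp(theta * math.log(k) / log_n) - 1.0)
                for k in range(3, n + 1))
    return (log_n / n) * total

n = 10**6
for theta in (1.0, -1.0):
    value = scaled_log_mgf(theta, n)
    target = math.exp(theta) - 1.0
    # convergence is only logarithmic in n, so the check is necessarily loose
    assert abs(value - target) < 0.15, (theta, value, target)
```

The residual error is of order $\frac{1}{logn}$, which is why even $n={10}^{6}$ only gives agreement to within a few percent.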

**Proof.**

We apply Theorem 1, i.e., we check that Equation (3) holds with ${Z}_{n}=\frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{b}_{n}}$, ${v}_{n}=\frac{{b}_{n}}{{L}_{n}}$, and $\mathsf{\Lambda}$ as in Equation (4) (in fact, Equation (3) holds even without assuming (13); however, Equation (13) must be required in order that ${v}_{n}=\frac{{b}_{n}}{{L}_{n}}$ be a speed function). We remark that

$$\begin{array}{c}\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\frac{{b}_{n}}{{L}_{n}}\theta \frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{b}_{n}}}\right]=\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\theta \frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{L}_{n}}}\right]=\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}log\mathbb{E}\left[{e}^{(\theta {L}_{k}/{L}_{n}){X}_{k}}\right]\hfill \\ \hfill =\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}log\left({e}^{{\lambda}_{k}({e}^{\theta {L}_{k}/{L}_{n}}-1)}\right)=\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{\lambda}_{k}({e}^{\theta {L}_{k}/{L}_{n}}-1)\phantom{\rule{4pt}{0ex}}\mathrm{for}\mathrm{all}\phantom{\rule{4pt}{0ex}}\theta \in \mathbb{R}.\end{array}$$

Equation (3) trivially holds for $\theta =0$. The proof is divided in two parts: the proof of the upper bound,
and that of the lower bound,

$$\underset{n\to \infty}{lim\; sup}\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\frac{{b}_{n}}{{L}_{n}}\theta \frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{b}_{n}}}\right]\le {e}^{\theta}-1\phantom{\rule{4pt}{0ex}}\mathrm{for}\mathrm{all}\phantom{\rule{4pt}{0ex}}\theta \in \mathbb{R},$$

$$\underset{n\to \infty}{lim\; inf}\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\frac{{b}_{n}}{{L}_{n}}\theta \frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{b}_{n}}}\right]\ge {e}^{\theta}-1\phantom{\rule{4pt}{0ex}}\mathrm{for}\mathrm{all}\phantom{\rule{4pt}{0ex}}\theta \in \mathbb{R}.$$

We start with the proof of Equation (15). For $\theta >0$, we have

$$\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\frac{{b}_{n}}{{L}_{n}}\theta \frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{b}_{n}}}\right]=\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{\lambda}_{k}({e}^{\theta {L}_{k}/{L}_{n}}-1)\le \frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{\lambda}_{k}({e}^{\theta}-1)$$

since $\{{L}_{n}:n\ge 1\}$ is nondecreasing, and we obtain Equation (15) by letting n go to infinity and by taking into account Equation (14). For $\theta <0$, we take $c\in (0,1)$ and

$$\gamma :=sup\{{L}_{n}:n\ge 1\}$$

(possibly infinite). Recalling that $\{{L}_{n}:n\ge 1\}$ is nondecreasing and that $\frac{{L}_{\lfloor cn\rfloor}}{{L}_{n}}\to 1$ (as a consequence of Lemma 2), we have

$$\begin{array}{c}\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\frac{{b}_{n}}{{L}_{n}}\theta \frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{b}_{n}}}\right]=\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{\lambda}_{k}({e}^{\theta {L}_{k}/{L}_{n}}-1)\hfill \\ \le \frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{\lfloor cn\rfloor}{\lambda}_{k}({e}^{\theta {L}_{1}/\gamma}-1)+\frac{{L}_{n}}{{b}_{n}}\sum _{k=\lfloor cn\rfloor +1}^{n}{\lambda}_{k}({e}^{\theta {L}_{\lfloor cn\rfloor}/{L}_{n}}-1)\\ =\frac{{L}_{n}}{{L}_{\lfloor cn\rfloor}}\frac{{b}_{\lfloor cn\rfloor}}{{b}_{n}}\left\{\frac{{L}_{\lfloor cn\rfloor}}{{b}_{\lfloor cn\rfloor}}\sum _{k=1}^{\lfloor cn\rfloor}{\lambda}_{k}\right\}({e}^{\theta {L}_{1}/\gamma}-1)\\ \hfill +\left(\left\{\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{\lambda}_{k}\right\}-\frac{{L}_{n}}{{L}_{\lfloor cn\rfloor}}\frac{{b}_{\lfloor cn\rfloor}}{{b}_{n}}\left\{\frac{{L}_{\lfloor cn\rfloor}}{{b}_{\lfloor cn\rfloor}}\sum _{k=1}^{\lfloor cn\rfloor}{\lambda}_{k}\right\}\right)({e}^{\theta {L}_{\lfloor cn\rfloor}/{L}_{n}}-1).\end{array}$$

Then, by Equation (11) (and Lemma 2 with $\rho =0$), (12) and (14), we obtain

$$\underset{n\to \infty}{lim\; sup}\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\frac{{b}_{n}}{{L}_{n}}\theta \frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{b}_{n}}}\right]\le \alpha \left(c\right)({e}^{\theta {L}_{1}/\gamma}-1)+(1-\alpha \left(c\right))({e}^{\theta}-1).$$

Using Equation (12), we conclude by letting $c\downarrow 0$.

The proof of Equation (16) is similar with reversed inequalities; hence, we only sketch it here. For $\theta <0$, we have

$$\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\frac{{b}_{n}}{{L}_{n}}\theta \frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{b}_{n}}}\right]=\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{\lambda}_{k}({e}^{\theta {L}_{k}/{L}_{n}}-1)\ge \frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{\lambda}_{k}({e}^{\theta}-1),$$

and we obtain Equation (16) by letting n go to infinity and by taking into account (14). For $\theta \ge 0$, we take $c\in (0,1)$ and, for $\gamma $ defined as above, after some manipulations, we obtain

$$\underset{n\to \infty}{lim\; inf}\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\frac{{b}_{n}}{{L}_{n}}\theta \frac{{\sum}_{k=1}^{n}{L}_{k}{X}_{k}}{{b}_{n}}}\right]\ge \alpha \left(c\right)({e}^{\theta {L}_{1}/\gamma}-1)+(1-\alpha \left(c\right))({e}^{\theta}-1).$$

We conclude by letting $c\downarrow 0$ (by Equation (12)). ☐

**Remark 1** (The Poisson case; the sequence in Equation (2))**.**

The LDP in Theorem 2 holds also for the sequence in Equation (2) in place of the sequence in Equation (1). In this case we only need to use Condition 1 and to assume Equations (13) and (14), whereas Equations (11) and (12) (which were required in the proof of Theorem 2) can be ignored. For the proof, we still apply Theorem 1, so we have to check that Equation (3) holds with ${Z}_{n}=\frac{{L}_{n}{\sum}_{k=1}^{n}{X}_{k}}{{b}_{n}}$, ${v}_{n}=\frac{{b}_{n}}{{L}_{n}}$, and Λ as in Equation (4). This can be easily checked noting that

$$\begin{array}{c}\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\frac{{b}_{n}}{{L}_{n}}\theta \frac{{L}_{n}{\sum}_{k=1}^{n}{X}_{k}}{{b}_{n}}}\right]=\frac{{L}_{n}}{{b}_{n}}log\mathbb{E}\left[{e}^{\theta {\sum}_{k=1}^{n}{X}_{k}}\right]=\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}log\mathbb{E}\left[{e}^{\theta {X}_{k}}\right]\hfill \\ \hfill =\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}log\left({e}^{{\lambda}_{k}({e}^{\theta}-1)}\right)=\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{\lambda}_{k}({e}^{\theta}-1)\to {e}^{\theta}-1\phantom{\rule{4pt}{0ex}}\text{for all}\phantom{\rule{4pt}{0ex}}\theta \in \mathbb{R},\end{array}$$

where the limit relation holds by Equation (14).

The next result is for Bernoulli distributed random variables $\{{X}_{n}:n\ge 1\}$. Here we shall use the concept of exponential equivalence (see, e.g., Definition 4.2.10 in [1]). The proof is similar to that of Proposition 3.5 in [16] (see also Remark 3.6 in the same reference). We point out that it is not unusual to prove a convergence result for Bernoulli random variables $\{{X}_{n}:n\ge 1\}$ starting from a similar one for Poisson random variables $\{{Y}_{n}:n\ge 1\}$ and by setting ${X}_{n}:={Y}_{n}\wedge 1$ for all $n\ge 1$; see, for instance, Lemmas 1 and 2 in [17].

**Theorem 3** (The Bernoulli case; the sequence in Equation (1))**.**

Let $\{{b}_{n}:n\ge 1\}$ and $\{{L}_{n}:n\ge 1\}$ be as in Theorem 2 (thus, Condition 1 together with Equations (11)–(13) hold). Moreover, assume that $\{{X}_{n}:n\ge 1\}$ are independent and ${X}_{n}\stackrel{\mathrm{law}}{\sim}\mathcal{B}\left({\lambda}_{n}\right)$ for all $n\ge 1$ and that Equation (14) and ${lim}_{n\to \infty}{\lambda}_{n}=0$ hold. The sequence in Equation (1) satisfies the LDP with speed function ${v}_{n}=\frac{{b}_{n}}{{L}_{n}}$ and the good rate function ${\mathsf{\Lambda}}^{*}$ defined by Equation (4).

**Proof.**

Let ${n}_{0}$ be such that ${\lambda}_{n}\in [0,1)$ for all $n\ge {n}_{0}$ (recall that ${\lambda}_{n}\to 0$ as $n\to \infty $), and let $\{{X}_{n}^{*}:n\ge 1\}$ be independent random variables such that ${X}_{n}^{*}\stackrel{\mathrm{law}}{\sim}\mathcal{P}\left({\widehat{\lambda}}_{n}\right)$ (for all $n\ge 1$), where ${\widehat{\lambda}}_{n}:=log\frac{1}{1-{\lambda}_{n}}$ for $n\ge {n}_{0}$ (the definition of ${\widehat{\lambda}}_{n}$ for $n<{n}_{0}$ is arbitrary). Notice that

$$\sum _{k=1}^{n}{\widehat{\lambda}}_{k}\sim \sum _{k=1}^{n}{\lambda}_{k}$$

because ${\sum}_{k=1}^{n}{\lambda}_{k}\to \infty $ (as $n\to \infty $) by Equations (13) and (14) and, by the Cesàro theorem,

$$\underset{n\to \infty}{lim}\frac{{\sum}_{k=1}^{n}{\widehat{\lambda}}_{k}}{{\sum}_{k=1}^{n}{\lambda}_{k}}=\underset{n\to \infty}{lim}\frac{{\widehat{\lambda}}_{n}}{{\lambda}_{n}}=\underset{n\to \infty}{lim}\frac{log\frac{1}{1-{\lambda}_{n}}}{{\lambda}_{n}}=1.$$

Hence, the assumption of Equation (14) and Theorem 2 are in force for the sequence $\{{X}_{n}^{*}:n\ge 1\}$ (in fact, we have Equation (14) with $\{{\widehat{\lambda}}_{n}:n\ge 1\}$ in place of $\{{\lambda}_{n}:n\ge 1\}$) and, if we set ${X}_{n}:={X}_{n}^{*}\wedge 1$ (for all $n\ge 1$), the sequence $\{{X}_{n}:n\ge 1\}$ is indeed an instance of the sequence appearing in the statement of the present theorem since, by construction, ${X}_{n}\stackrel{\mathrm{law}}{\sim}\mathcal{B}(1-{e}^{-{\widehat{\lambda}}_{n}})$ and $1-{e}^{-{\widehat{\lambda}}_{n}}={\lambda}_{n}$.
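The coupling ${X}_{n}={X}_{n}^{*}\wedge 1$ is easy to verify by simulation (an informal illustration, not part of the proof): truncating a Poisson variable with parameter ${\widehat{\lambda}}_{n}=log\frac{1}{1-{\lambda}_{n}}$ at 1 produces a Bernoulli variable with parameter exactly ${\lambda}_{n}$, since $P({X}_{n}^{*}=0)={e}^{-{\widehat{\lambda}}_{n}}=1-{\lambda}_{n}$.

```python
import math
import random

random.seed(2018)   # fixed seed for reproducibility

def sample_poisson(mean):
    """Sample a Poisson random variable by CDF inversion."""
    u = random.random()
    k, p = 0, math.exp(-mean)
    cdf = p
    while u > cdf:
        k += 1
        p *= mean / k
        cdf += p
    return k

def coupled_bernoulli_freq(lam, trials):
    """Empirical frequency of {min(Y, 1) = 1} for Y ~ Poisson(log(1/(1 - lam)));
    by construction P(Y = 0) = 1 - lam, so min(Y, 1) ~ Bernoulli(lam)."""
    hat_lam = math.log(1.0 / (1.0 - lam))
    hits = sum(1 for _ in range(trials) if min(sample_poisson(hat_lam), 1) == 1)
    return hits / trials

freq = coupled_bernoulli_freq(0.3, 200000)
assert abs(freq - 0.3) < 0.01, freq
```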

The statement will be proved by combining Theorem 4.2.13 in [1] and Theorem 2 (for the sequence $\{{X}_{n}^{*}:n\ge 1\}$). This means that we have to check the exponential equivalence condition

$$\underset{n\to \infty}{lim\; sup}\frac{{L}_{n}}{{b}_{n}}logP({\Delta}_{n}>\delta )=-\infty \phantom{\rule{4pt}{0ex}}(\mathrm{for}\text{}\mathrm{all}\phantom{\rule{4pt}{0ex}}\delta >0),$$

where

$${\Delta}_{n}:=\left|\frac{1}{{b}_{n}}\sum _{k=1}^{n}{L}_{k}{X}_{k}-\frac{1}{{b}_{n}}\sum _{k=1}^{n}{L}_{k}{X}_{k}^{*}\right|.$$

We remark that

$${\Delta}_{n}\le \frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}|{X}_{k}-{X}_{k}^{*}|$$

by the monotonicity and the nonnegativeness of $\{{L}_{n}:n\ge 1\}$; therefore, if we combine Equation (19) and the Chernoff bound, for each arbitrarily fixed $\theta \ge 0$, we obtain

$$P({\Delta}_{n}>\delta )\le P\left(\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}|{X}_{k}-{X}_{k}^{*}|>\delta \right)\le \frac{\mathbb{E}\left[{e}^{\theta {\sum}_{k=1}^{n}|{X}_{k}-{X}_{k}^{*}|}\right]}{{e}^{\theta \delta {b}_{n}/{L}_{n}}}.$$

Therefore,

$$\frac{{L}_{n}}{{b}_{n}}logP({\Delta}_{n}>\delta )\le \frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}log\mathbb{E}\left[{e}^{\theta |{X}_{k}-{X}_{k}^{*}|}\right]-\theta \delta .$$

Moreover, if we set

$${\rho}_{k}^{\left(\theta \right)}:=\frac{{e}^{{\lambda}_{k}{e}^{\theta}}-1}{{\lambda}_{k}{e}^{\theta}},$$

we have

$$\begin{array}{c}\mathbb{E}\left[{e}^{\theta |{X}_{k}-{X}_{k}^{*}|}\right]=P({X}_{k}^{*}=0)+P({X}_{k}^{*}=1)+\sum _{h=2}^{\infty}{e}^{\theta |1-h|}P({X}_{k}^{*}=h)\hfill \\ ={e}^{-{\lambda}_{k}}+{\lambda}_{k}{e}^{-{\lambda}_{k}}+\sum _{h=2}^{\infty}{e}^{\theta (h-1)}\frac{{\lambda}_{k}^{h}}{h!}{e}^{-{\lambda}_{k}}={e}^{-{\lambda}_{k}}+{\lambda}_{k}{e}^{-{\lambda}_{k}}+{e}^{-\theta}{e}^{-{\lambda}_{k}}\left({e}^{{\lambda}_{k}{e}^{\theta}}-1-{\lambda}_{k}{e}^{\theta}\right)\\ \hfill ={e}^{-{\lambda}_{k}}+{e}^{-\theta}{e}^{-{\lambda}_{k}}\left({e}^{{\lambda}_{k}{e}^{\theta}}-1\right)={e}^{-{\lambda}_{k}}\left(1+{e}^{-\theta}\left({e}^{{\lambda}_{k}{e}^{\theta}}-1\right)\right)={e}^{-{\lambda}_{k}}\left(1+{\lambda}_{k}{\rho}_{k}^{\left(\theta \right)}\right).\end{array}$$
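The closed form just obtained can be checked numerically; the following sketch (with arbitrary illustrative values of $\lambda $ and $\theta $) compares a truncated series for $\mathbb{E}\left[{e}^{\theta |{X}_{k}-{X}_{k}^{*}|}\right]$ with ${e}^{-{\lambda}_{k}}\left(1+{\lambda}_{k}{\rho}_{k}^{\left(\theta \right)}\right)$:

```python
import math

def mgf_abs_diff(lam, theta, terms=60):
    # E[e^{theta|X - X*|}] with X* ~ Poisson(lam) and X = min(X*, 1):
    # |X - X*| = 0 for X* in {0, 1} and |X - X*| = X* - 1 for X* >= 2.
    p = math.exp(-lam)  # Poisson pmf at h = 0
    total = p           # the h = 0 term contributes e^{theta*0} * p
    for h in range(1, terms):
        p *= lam / h    # update pmf iteratively: P(X* = h)
        total += p * math.exp(theta * max(h - 1, 0))
    return total

for lam, theta in [(0.4, 0.7), (0.05, 2.0)]:
    rho = (math.exp(lam * math.exp(theta)) - 1) / (lam * math.exp(theta))
    closed_form = math.exp(-lam) * (1 + lam * rho)
    assert abs(mgf_abs_diff(lam, theta) - closed_form) < 1e-9
```

Note that at $\theta =0$ the closed form collapses to 1, as it must for the expectation of ${e}^{0}$.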

Therefore,

$$\frac{{L}_{n}}{{b}_{n}}logP({\Delta}_{n}>\delta )\le -\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{\lambda}_{k}+\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}log\left(1+{\lambda}_{k}{\rho}_{k}^{\left(\theta \right)}\right)-\theta \delta .$$

The proof will be complete if we show that, for all $\theta >0$,

$$\underset{n\to \infty}{lim\; sup}\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}log\left(1+{\lambda}_{k}{\rho}_{k}^{\left(\theta \right)}\right)\le 1.$$

In fact, by Equations (14) and (21), we deduce from Equation (20) that

$$\underset{n\to \infty}{lim\; sup}\frac{{L}_{n}}{{b}_{n}}logP({\Delta}_{n}>\delta )\le -\theta \delta ,$$

and we obtain Equation (17) by letting $\theta $ go to infinity.
It remains to prove Equation (21). We remark that ${\rho}_{n}^{\left(\theta \right)}\to 1$ because ${\lambda}_{n}\to 0$ (as $n\to \infty $). Hence, for all $\epsilon \in (0,1)$, there exists ${n}_{0}$ (possibly larger than the ${n}_{0}$ chosen above) such that, for all $n>{n}_{0}$, we have ${\rho}_{n}^{\left(\theta \right)}<1+\epsilon $ and

$$\begin{array}{c}\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}log\left(1+{\lambda}_{k}{\rho}_{k}^{\left(\theta \right)}\right)=\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{{n}_{0}}log\left(1+{\lambda}_{k}{\rho}_{k}^{\left(\theta \right)}\right)+\frac{{L}_{n}}{{b}_{n}}\sum _{k={n}_{0}+1}^{n}log\left(1+{\lambda}_{k}{\rho}_{k}^{\left(\theta \right)}\right)\hfill \\ \hfill \le \frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{{n}_{0}}log\left(1+{\lambda}_{k}{\rho}_{k}^{\left(\theta \right)}\right)+\frac{{L}_{n}}{{b}_{n}}\sum _{k={n}_{0}+1}^{n}log\left(1+{\lambda}_{k}(1+\epsilon )\right).\end{array}$$

Moreover, $\frac{{L}_{n}}{{b}_{n}}{\sum}_{k=1}^{{n}_{0}}log\left(1+{\lambda}_{k}{\rho}_{k}^{\left(\theta \right)}\right)\to 0$ (as $n\to \infty $) by Equation (13) and

$$\frac{{L}_{n}}{{b}_{n}}\sum _{k={n}_{0}+1}^{n}log\left(1+{\lambda}_{k}(1+\epsilon )\right)\le (1+\epsilon )\frac{{L}_{n}}{{b}_{n}}\sum _{k={n}_{0}+1}^{n}{\lambda}_{k}=(1+\epsilon )\left(\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{\lambda}_{k}-\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{{n}_{0}}{\lambda}_{k}\right).$$

The right-hand side converges to $1+\epsilon $ by Equations (13) and (14); since $\epsilon \in (0,1)$ is arbitrary, Equation (21) follows. ☐

**Remark 2** (The Bernoulli case; the sequence in Equation (2))**.**

The LDP in Theorem 3 also holds for the sequence in Equation (2) in place of the sequence in Equation (1). The proof is almost identical to that of Theorem 3: in this case, we have

$${\Delta}_{n}:=\left|\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{X}_{k}-\frac{{L}_{n}}{{b}_{n}}\sum _{k=1}^{n}{X}_{k}^{*}\right|$$

in place of Equation (18), and Inequality (19) still holds (even without the monotonicity of $\{{L}_{n}:n\ge 1\}$).

**Remark 3** (Almost sure convergence to 1 of the sequences in Theorems 2 and 3)**.**

Let $\{{Z}_{n}:n\ge 1\}$ be either the sequence in Equation (1) or the sequence in Equation (2), where $\{{X}_{n}:n\ge 1\}$ is as in Theorem 2 or as in Theorem 3 (so we also consider Remarks 1 and 2). Then, by a straightforward consequence of the Borel–Cantelli lemma, the sequence $\{{Z}_{n}:n\ge 1\}$ converges to 1 almost surely (as $n\to \infty $) if

$$\sum _{n\ge 1}P({Z}_{n}\in C)<\infty \phantom{\rule{4pt}{0ex}}\mathrm{for\; every\; closed\; set}\phantom{\rule{4pt}{0ex}}C\phantom{\rule{4pt}{0ex}}\mathrm{such\; that}\phantom{\rule{4pt}{0ex}}1\notin C.$$

Obviously, this condition holds if $C\subset (-\infty ,0)$ because $\{{Z}_{n}:n\ge 1\}$ are nonnegative random variables. On the other hand, if $C\cap [0,\infty )$ is nonempty, ${\mathsf{\Lambda}}^{*}\left(C\right):={inf}_{x\in C}{\mathsf{\Lambda}}^{*}\left(x\right)$ is finite; moreover, ${\mathsf{\Lambda}}^{*}\left(C\right)\in (0,\infty )$ because $1\notin C$. Then, by the large deviation upper bound for the closed set C, for all $\delta >0$, there exists ${n}_{\delta}$ such that, for all $n>{n}_{\delta}$, we have

$$P({Z}_{n}\in C)\le {e}^{-({\mathsf{\Lambda}}^{*}\left(C\right)-\delta ){b}_{n}/{L}_{n}}.$$

Thus, again by the Borel–Cantelli lemma, $\{{Z}_{n}:n\ge 1\}$ converges almost surely to 1 (as $n\to \infty $) if, for all $\kappa >0$, we have

$$\sum _{n\ge 1}{e}^{-\kappa {b}_{n}/{L}_{n}}<\infty .$$

Then, by the Cauchy condensation test, Equation (22) holds if and only if ${\sum}_{n\ge 1}{2}^{n}{e}^{-\kappa {b}_{{2}^{n}}/{L}_{{2}^{n}}}<\infty $ and, as we see below, the convergence of the condensed series is a consequence of the ratio test and of some hypotheses of Theorems 2 and 3. In fact,

$$\frac{{2}^{n+1}{e}^{-\kappa {b}_{{2}^{n+1}}/{L}_{{2}^{n+1}}}}{{2}^{n}{e}^{-\kappa {b}_{{2}^{n}}/{L}_{{2}^{n}}}}=2exp\left(-\kappa \frac{{b}_{{2}^{n+1}}}{{L}_{{2}^{n+1}}}\left(1-\frac{{b}_{{2}^{n}}}{{b}_{{2}^{n+1}}}\cdot \frac{{L}_{{2}^{n+1}}}{{L}_{{2}^{n}}}\right)\right)\to 0\phantom{\rule{4pt}{0ex}}(\mathrm{as}\phantom{\rule{4pt}{0ex}}n\to \infty )$$

because $\frac{{b}_{{2}^{n}}}{{b}_{{2}^{n+1}}}\to \alpha (1/2)$ by Equation (12), $\frac{{L}_{{2}^{n+1}}}{{L}_{{2}^{n}}}\to 1$ by Equation (11), and $\frac{{b}_{{2}^{n+1}}}{{L}_{{2}^{n+1}}}\to +\infty $ by Equation (13).
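The ratio-test argument can be illustrated numerically. The sketch below takes ${b}_{n}=n$ and ${L}_{n}=logn$ (the choices of Equation (8) with ${\ell}_{n}\equiv 1$, an illustrative assumption) and $\kappa =1$, and checks that the logarithm of the ratio of consecutive condensed terms tends to $-\infty $:

```python
import math

KAPPA = 1.0

def log_term(n):
    # log of the condensed term 2^n * exp(-KAPPA * b_{2^n} / L_{2^n})
    # with b_m = m and L_m = log m, so b_{2^n} / L_{2^n} = 2^n / (n log 2).
    return n * math.log(2.0) - KAPPA * 2.0**n / (n * math.log(2.0))

log_ratios = [log_term(n + 1) - log_term(n) for n in range(2, 12)]
assert all(b < a for a, b in zip(log_ratios, log_ratios[1:]))  # decreasing
assert log_ratios[-1] < -50  # consecutive terms already differ enormously
```

Working with logarithms avoids the floating-point underflow that the tiny condensed terms themselves would cause.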

We conclude with the results for the generalized Cramér model (the sequences in Equation (9)).

**Corollary 1** (Application to the sequences in Equation (9))**.**

Let $\{{X}_{n}:n\ge 1\}$ be the random variables in Equation (7), and let $\{{b}_{n}:n\ge 1\}$ and $\{{L}_{n}:n\ge 1\}$ be defined by Equation (8). Then, the sequences $\left\{\frac{{\sum}_{k=1}^{n}(logk){X}_{k}}{n{\ell}_{n}}:n\ge 1\right\}$ and $\left\{\frac{(logn){\sum}_{k=1}^{n}{X}_{k}}{n{\ell}_{n}}:n\ge 1\right\}$ in Equation (9) satisfy the LDP with speed function ${v}_{n}=\frac{{b}_{n}}{{L}_{n}}=\frac{n{\ell}_{n}}{logn}$ and the good rate function ${\mathsf{\Lambda}}^{*}$ defined by Equation (4).

**Proof.**

In this proof, the sequences in Equation (9) play the roles of the sequences in Equations (1) and (2) in Theorem 3 and Remark 2, respectively. Therefore, we have to check that the hypotheses of Theorem 3 are satisfied. Condition 1, Equations (11) and (13), and ${lim}_{n\to \infty}{\lambda}_{n}=0$ can be easily checked. Moreover, one can also check Equation (12) with $\alpha \left(c\right)=c$; note that, in this case, we have a regularly varying function with index $\rho =1$ (as $n\to \infty $), and $\{{b}_{n}:n\ge 1\}$ is eventually nondecreasing. Finally, Equation (14), which is

$$\underset{n\to \infty}{lim}\frac{(logn){\sum}_{k=1}^{n}\frac{{\ell}_{k}}{logk}}{n{\ell}_{n}}=1,$$

can be obtained as a consequence of Lemma 3; in fact, $\{{\ell}_{n}:n\ge 1\}$ and $\{{\ell}_{n}/(logn):n\ge 1\}$ are restrictions (on $\mathbb{N}$) of slowly varying functions at infinity. ☐
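The limit in Equation (14) for this model can also be probed numerically. The sketch below takes ${\ell}_{n}\equiv 1$ (an illustrative choice of the slowly varying sequence) and checks that $(logn){\sum}_{k=2}^{n}1/logk$ approaches $n$, albeit slowly:

```python
import math

def ratio(n):
    # (log n) * sum_{k=2}^{n} 1/log k, divided by n * ell_n (here ell_n = 1)
    s = sum(1.0 / math.log(k) for k in range(2, n + 1))
    return math.log(n) * s / n

# Convergence to 1 is slow (the relative error is roughly 1/log n),
# so only a loose tolerance is asserted here.
assert abs(ratio(200_000) - 1.0) < 0.15
assert abs(ratio(200_000) - 1.0) < abs(ratio(2_000) - 1.0)
```

The slow, logarithmic rate of convergence mirrors the behavior of the classical Chebyshev function, whose ratio to $x$ also approaches 1 with error of order $1/logx$.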

In conclusion, we can say that, roughly speaking, for any Borel set A such that $1\notin \overline{A}$ (where $\overline{A}$ is the closure of A), the probabilities $P\left(\frac{{\sum}_{k=1}^{n}(logk){X}_{k}}{n{\ell}_{n}}\in A\right)$ and $P\left(\frac{(logn){\sum}_{k=1}^{n}{X}_{k}}{n{\ell}_{n}}\in A\right)$ decay exponentially as ${e}^{-\frac{n{\ell}_{n}}{logn}{inf}_{x\in A}{\mathsf{\Lambda}}^{*}\left(x\right)}$ (as $n\to \infty $). Thus, in the spirit of Tao’s remark, we are able to suggest estimates concerning a sort of “generalized” Chebyshev function defined by $\frac{{\sum}_{{p}_{1}\cdots {p}_{r}\le x}log({p}_{1}\cdots {p}_{r})}{x{\ell}_{x}}$ or by $\frac{(logx){\sum}_{{p}_{1}\cdots {p}_{r}\le x}1}{x{\ell}_{x}}$. To our knowledge, such estimates are not available for $r>1$.

## Author Contributions

Rita Giuliano and Claudio Macci equally contributed to the proofs of the results. The paper was also written and reviewed cooperatively.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

- Dembo, A.; Zeitouni, O. *Large Deviations Techniques and Applications*, 2nd ed.; Springer: New York, NY, USA, 1998.
- Fang, L. Large and moderate deviation principles for alternating Engel expansions. *J. Number Theory* **2015**, *156*, 263–276.
- Granville, A. Harald Cramér and the distribution of prime numbers. *Scand. Actuar. J.* **1995**, *1995*, 12–28.
- Döring, H.; Eichelsbacher, P. Moderate deviations via cumulants. *J. Theor. Probab.* **2013**, *26*, 360–385.
- Féray, V.; Méliot, P.L.; Nikeghbali, A. Mod-φ Convergence, I: Normality Zones and Precise Deviations. Unpublished Manuscript. 2015. Available online: http://arxiv.org/pdf/1304.2934.pdf (accessed on 23 November 2015).
- Jacod, J.; Kowalski, E.; Nikeghbali, A. Mod-Gaussian convergence: new limit theorems in probability and number theory. *Forum Math.* **2011**, *23*, 835–873.
- Tenenbaum, G.; Mendès France, M. *The Prime Numbers and Their Distribution*; (Translated from the 1997 French original by P.G. Spain); American Mathematical Society: Providence, RI, USA, 2000.
- Landau, E. *Handbuch der Lehre von der Verteilung der Primzahlen* (2 Volumes), 3rd ed.; Chelsea Publishing: New York, NY, USA, 1974.
- Hardy, G.H.; Wright, E.M. *An Introduction to the Theory of Numbers*, 4th ed.; Oxford University Press: London, UK, 1975.
- Tenenbaum, G. *Introduction to Analytic and Probabilistic Number Theory*, 3rd ed.; (Translated from the 2008 French edition by P.D.F. Ion); American Mathematical Society: Providence, RI, USA, 2015.
- Davenport, H. *Multiplicative Number Theory*, 3rd ed.; Springer: New York, NY, USA; Berlin, Germany, 2000.
- Ford, K.; Sneed, J. Chebyshev’s bias for products of two primes. *Exp. Math.* **2010**, *19*, 385–398.
- Meng, X. Chebyshev’s Bias for Products of k Primes. Unpublished Manuscript. 2016. Available online: http://arxiv.org/pdf/1606.04877v2.pdf (accessed on 16 August 2016).
- Tao, T. Probabilistic Models and Heuristics for the Primes (Optional). In *Terence Tao Blog*. 2015. Available online: https://terrytao.wordpress.com/2015/01/04/254a-supplement-4-probabilistic-models-and-heuristics-for-the-primes-optional/ (accessed on 4 January 2015).
- Bingham, N.H.; Goldie, C.M.; Teugels, J.L. *Regular Variation*; Encyclopedia of Mathematics and its Applications; Cambridge University Press: Cambridge, UK, 1987; Volume 27.
- Arratia, R.; Tavaré, S. Independent processes approximations for random combinatorial structures. *Adv. Math.* **1994**, *104*, 90–154.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).