Article

On Approximation of the Tails of the Binomial Distribution with These of the Poisson Law

by Sergei Nagaev 1,† and Vladimir Chebotarev 2,*,†
1 Sobolev Institute of Mathematics, 630090 Novosibirsk, Russia
2 Computing Center, Far Eastern Branch of the Russian Academy of Sciences, 680000 Khabarovsk, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(8), 845; https://doi.org/10.3390/math9080845
Submission received: 11 March 2021 / Revised: 8 April 2021 / Accepted: 9 April 2021 / Published: 13 April 2021
(This article belongs to the Special Issue Analytical Methods and Convergence in Probability with Applications)

Abstract:
The subject of this study is the behavior of the tail of the binomial distribution under Poisson approximation. We estimate the deviation from unity of the ratio of the tail of the binomial distribution to that of the Poisson distribution, multiplied by a correction factor. A new type of approximation is introduced, in which the parameter of the approximating Poisson law depends on the point at which the approximation is performed. The transition to approximation by the Poisson law with parameter equal to the mathematical expectation of the approximated binomial law is then carried out. In both cases error estimates are obtained. A number of conjectures are made about the refinement of the known estimates for the Kolmogorov distance between binomial and Poisson distributions.

1. Introduction and Main Results

The subject of this study is upper and lower bounds for probabilities of the form $\mathbf{P}\bigl(\sum_{i=1}^{n}X_i\ge nx\bigr)$, where $X_1,\dots,X_n$ are independent identically distributed Bernoulli random variables. In other words, we estimate tail probabilities of the binomial distribution. To this end we use the Poisson approximation.
It should be noted that although the binomial distribution is very special from a formal point of view, it is of great importance in applications. Moreover, owing to its simplicity, more exact bounds are attainable for the binomial distribution than in the general case.
Let us start with the well-known Hoeffding inequality. Assuming that the independent random variables $X_1,\dots,X_n$ satisfy the condition $0\le X_i\le 1$, $i=1,\dots,n$, W. Hoeffding [1] deduced the inequality
$$\mathbf{P}\left(\sum_{i=1}^{n}X_i\ge n(\mu+t)\right)\le\left(\frac{\mu}{\mu+t}\right)^{n(\mu+t)}\left(\frac{1-\mu}{1-\mu-t}\right)^{n(1-\mu-t)},$$
where $\mu=\frac1n\sum_{i=1}^{n}\mathbf{E}X_i$ and $0<t<1-\mu$. In the case of identically distributed random variables $X_j$ we have $\mu=\mathbf{E}X_1$, and inequality (1) remains the same. Making in (1) the change of variable $n(\mu+t)=y$, we get
$$\mathbf{P}\left(\sum_{i=1}^{n}X_i\ge y\right)\le\left(\frac{n\mu}{y}\right)^{y}\left(\frac{1-\mu}{1-y/n}\right)^{n(1-y/n)}.$$
In turn, this inequality can be written in the following form,
$$\mathbf{P}\left(\sum_{i=1}^{n}X_i\ge y\right)\le e^{-nH(y/n,\,\mu)},$$
where
$$H(t,p)=t\ln\frac{t}{p}+(1-t)\ln\frac{1-t}{1-p}$$
is the so-called relative entropy, or Kullback–Leibler distance, between the two-point distributions $(t,1-t)$ and $(p,1-p)$ concentrated at the same pair of points.
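As a quick plausibility check of the entropy bound above, the following sketch (not part of the paper; all function names are ours) evaluates $H(t,p)$ and compares the bound $e^{-nH(y/n,\mu)}$ with the exact binomial tail for one choice of parameters.

```python
# Minimal sketch (our own code): evaluate H(t, p) and compare the entropy bound
# exp(-n*H(y/n, mu)) with the exact binomial tail.
from math import log, exp, comb

def H(t, p):
    """Relative entropy between the two-point laws (t, 1-t) and (p, 1-p)."""
    return t * log(t / p) + (1 - t) * log((1 - t) / (1 - p))

def binomial_tail(n, p, y):
    """P(X_1 + ... + X_n >= y) for i.i.d. Bernoulli(p) summands, integer y."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(y, n + 1))

n, p, y = 100, 0.1, 20
print(binomial_tail(n, p, y))     # exact tail probability
print(exp(-n * H(y / n, p)))      # Hoeffding-type entropy bound, always at least as large
```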
Apparently, I. Sanov [2] was the first to state probability inequalities in terms of functions of the type $\sum_{j=1}^{m}t_j\ln\frac{p_j}{t_j}$, where $\sum_{j=1}^{m}p_j=\sum_{j=1}^{m}t_j=1$, $p_j,t_j>0$, $j=1,\dots,m$.
The starting point in proving (1) and many other probability inequalities for independent random variables is the following bound.
Let there exist $H_0>0$ such that
$$\int e^{H_0u}\,dV_j(u)<\infty,\qquad j=1,\dots,n,$$
where $V_j$ are the distribution functions of $X_j$, $j=1,\dots,n$. Then for every $0<h\le H_0$ we have
$$\mathbf{P}\left(\sum_{j=1}^{n}X_j\ge y\right)\le e^{-hy}\prod_{j=1}^{n}R(h;V_j),$$
where
$$R(h;V_j):=\int e^{hu}\,dV_j(u),\qquad j=1,\dots,n.$$
Thus,
$$\mathbf{P}\left(\sum_{j=1}^{n}X_j\ge y\right)\le\min_{h>0}\,e^{-hy}\prod_{j=1}^{n}R(h;V_j).$$
In the case of i.i.d. random variables, inequality (5) can be written in the following form,
$$1-G_n(y)\le\min_{h>0}\,e^{-hy}R^n(h;G),$$
where $G_n(y)=\mathbf{P}\bigl(\sum_{j=1}^{n}X_j<y\bigr)$ and $G$ is the distribution of $X_1$. On the other hand, for each $0<h\le H_0$ the following identity holds,
$$1-G_n(y)=R^n(h;G)\int_{y}^{\infty}e^{-hu}\,dG_n^{(h)}(u),$$
where
$$G_n^{(h)}(y)=R^{-n}(h;G)\int_{-\infty}^{y}e^{hu}\,dG_n(u)$$
is the Esscher transform of the distribution function $G_n(y)$ (see [3]). Note that, starting with the classic work of Cramér [4], Esscher's transform has been used repeatedly in the theory of large deviations.
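To make the Esscher (exponential tilting) identity above concrete, here is a minimal numerical sketch for the Bernoulli case, where the tilted sum is again binomial; the script and its helper names are ours, not the paper's.

```python
# Sketch (our own code) verifying the Esscher-transform identity for i.i.d.
# Bernoulli(p) summands: under tilting by h the sum becomes Binomial(n, p_h).
from math import exp, comb

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p, h, y = 30, 0.2, 0.7, 12
q = 1 - p
r_h = q + p * exp(h)                     # R(h; F) for the Bernoulli law
p_h = p * exp(h) / r_h                   # success probability after tilting

direct = sum(binom_pmf(n, p, k) for k in range(y, n + 1))
via_tilt = r_h**n * sum(exp(-h * k) * binom_pmf(n, p_h, k) for k in range(y, n + 1))
print(direct, via_tilt)                  # the two numbers coincide
```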
Let $h_0$ be such that
$$h_0\frac{y}{n}-\ln R(h_0;G)=\max_{h>0}\left(h\frac{y}{n}-\ln R(h;G)\right),$$
and denote
$$\omega_n(y;G)=\int_{y}^{\infty}e^{-h_0u}\,dG_n^{(h_0)}(u).$$
It follows from (7) and (10) that
$$1-G_n(y)=R^n(h_0;G)\,\omega_n(y;G).$$
Notice that although the method used in this work essentially coincides with the method of our previous article on estimates of large deviations in the case of normal approximation [5], the function $\omega_n(y;G)$ used here differs from its counterpart in [5] by the absence of the factor $e^{h_0y}$. The nuance is that in this work we are dealing with one-sided distributions, and direct copying of the previous approach would make the reasoning unnecessarily cumbersome.
Taking into account (9), it is easily seen that $h_0$ satisfies the equality $m(h_0;G)=\frac{y}{n}$, where
$$m(h;G)=\int u\,dG^{(h)}(u)\equiv R^{-1}(h;G)\int u\,e^{hu}\,dG(u).$$
For any nondegenerate random variable $\xi$ we have $\mathbf{P}(\xi<\mathbf{E}\xi)>0$. Therefore, $\omega_n(y;G)<1$. Estimating $\omega_n(y;G)$, we can sharpen the Hoeffding inequality.
Note that in the case $\mathbf{E}X_1=0$ the asymptotics of $\omega_n(y;G)$ is found in [4] (p. 172) under the condition $\int e^{hy}\,dG(y)<\infty$, $0<h\le H_0$, namely,
$$\omega_n(y;G)\sim e^{\frac{y^2}{2n\sigma^2}}\left(1-\Phi\left(\frac{y}{\sigma\sqrt{n}}\right)\right)=\frac{1}{\sqrt{2\pi}}\,M\!\left(\frac{y}{\sigma\sqrt{n}}\right),$$
where $\sigma^2=\mathbf{E}X_1^2$, the restriction $y=o(n)$ being imposed (see also [6]), and
$$M(t)=\frac{1-\Phi(t)}{\varphi(t)}$$
is the so-called Mills ratio ($\Phi(t)$ and $\varphi(t)$ are the distribution function and density function, respectively, of the standard normal law).
Let $\lambda$ be an arbitrary positive number and $\Pi_\lambda(y)$ the distribution function of the Poisson law with mean $\lambda$. We will also use the notation $\pi_\lambda(j)=\frac{\lambda^j}{j!}e^{-\lambda}$. Note that we consider distribution functions to be left-continuous.
In connection with (12), note that in the present work we define and use the following analogue of the Mills ratio for the Poisson distribution with an arbitrary parameter $\lambda$: for every integer $k\ge0$,
$$M(k;\lambda):=\frac{1-\Pi_\lambda(k)}{\pi_\lambda(k)}\equiv\frac{1}{\pi_\lambda(k)}\sum_{j=k}^{\infty}\pi_\lambda(j)=1+\sum_{m=1}^{\infty}\frac{\lambda^m\,k!}{(m+k)!}=1+\sum_{m=1}^{\infty}\lambda^m\prod_{j=1}^{m}\frac{1}{k+j}.$$
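The Poisson Mills ratio $M(k;\lambda)$ can be computed either from its definition as tail over mass or from the series above; the following sketch (our own helper code, not from the paper) checks that the two computations agree and reproduces a value from Table 3.

```python
# Sketch: two equivalent ways of computing the Poisson analogue of the Mills ratio.
from math import exp, factorial

def poisson_pmf(lam, j):
    return lam**j * exp(-lam) / factorial(j)

def M_tail(k, lam, terms=200):
    """M(k; lambda) as (sum_{j>=k} pi_lambda(j)) / pi_lambda(k)."""
    return sum(poisson_pmf(lam, j) for j in range(k, k + terms)) / poisson_pmf(lam, k)

def M_series(k, lam, terms=200):
    """M(k; lambda) from the series 1 + sum_m lambda^m * k!/(m+k)!."""
    s, prod = 1.0, 1.0
    for m in range(1, terms):
        prod *= lam / (k + m)            # equals lambda^m * prod_{j=1}^m 1/(k+j)
        s += prod
    return s

print(M_tail(2, 0.88888), M_series(2, 0.88888))   # both about 1.3758, cf. Table 3
```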
M. Talagrand [7] sharpened the Hoeffding inequalities for $y<\frac{n\sigma^2}{Kb}$, where $K$ is a constant about which it is known only that it exists. The bounds obtained in [7] are stated in terms of $K$ as well. Note that Talagrand, like Hoeffding, considers the case of non-identically distributed random variables.
In the present work we estimate $\omega_n(y;G)$ in the case of Bernoulli trials with explicit values of the constants, without imposing any restriction on $y$.
In what follows we use the following notation: $F$ is the distribution function of the Bernoulli random variable with parameter $p$, $0<p\le\frac12$, and $F_{n,p}=F^{*n}$ is the $n$-fold convolution of $F$; we write $q=1-p$.
Obviously,
$$R(h;F)=r(h):=q+pe^{h}.$$
In what follows we will assume $x$ to satisfy the condition
$$0<p<x<1.$$
It is not hard to verify that $h_0$ satisfying (9) in the case $G=F$ and $y=nx$ has the form
$$h_0=\ln\frac{qx}{p(1-x)}.$$
Notice that $h_0>0$ under condition (15). In what follows, $h=h_0$.
We get from (14) and (16) that
$$r(h)=\frac{q}{1-x}.$$
Hence, by (11),
$$1-F_{n,p}(nx)=r^{n}(h)\,\omega_n(nx;F)=\left(\frac{q}{1-x}\right)^{n}\omega_n(nx;F),$$
where
$$\omega_n(nx;F)=\int_{nx}^{\infty}e^{-hu}\,dF_{n,p}^{(h)}(u).$$
Denote by $\Pi_\lambda(t)$ the distribution function of the Poisson law with a parameter $\lambda>0$. If the variable $x$ from (18) approaches 0, it is natural to take $\Pi_\lambda$ with $\lambda=np$ as the approximating distribution for $F_{n,p}$. Exactly this distribution is used in Theorem 2. However, first we need another approximating Poisson distribution, with mean
$$\lambda_1=\lambda_1(n,p,x)=\frac{np(1-x)}{q},$$
depending not only on the parameters $n$ and $p$, but also on the variable $x$ from formula (15). We shall call this distribution the variable Poisson distribution.
Let us formulate the first statement about the connection between the behavior of the tails $1-F_{n,p}(nx)$ and $1-\Pi_{\lambda_1}(nx)$. First introduce the function
$$A(x,n,p)=\left(\frac{q}{1-x}\right)^{n}e^{-\frac{n(x-p)}{q}}.$$
We have
$$A(x,n,p)=\left(1-\frac{x-p}{q}\right)^{-n}e^{-\frac{n(x-p)}{q}}=e^{-n\left[\ln\left(1-\frac{x-p}{q}\right)+\frac{x-p}{q}\right]}=e^{-n[\ln(1-u)+u]},$$
where $u=\frac{x-p}{q}$. The function $-\ln(1-u)-u$ is represented as a series:
$$-\ln(1-u)-u=\sum_{k=2}^{\infty}\frac{u^k}{k}=:\Lambda_2(u).$$
Thus,
$$A(x,n,p)=e^{n\Lambda_2(u)}.$$
Note that the series $\Lambda_2(u)$ converges since, by condition (15), we have $0<\frac{x-p}{q}<1$.
Proposition 1.
If condition (15) is fulfilled, then
$$1-F_{n,p}(nx)=\bigl(1-\Pi_{\lambda_1}(nx)\bigr)A(x,n,p)+R_1=\bigl(1-\Pi_{\lambda_1}(nx)\bigr)e^{n\Lambda_2(u)}+R_1,$$
where
$$|R_1|\le 2e^{-nH(x,p)}\max_{y\ge nx}\bigl|F_{n,x}(y)-\Pi_{nx}(y)\bigr|\le 2x\,e^{-nH(x,p)}.$$
The following theorem gives another form of the dependence of the tails of the binomial distribution on the tails $1-\Pi_{\lambda_1}(nx)$ of the variable Poisson distribution. It is a consequence of Proposition 1, but by no means a trivial one, and it requires the proof of a number of additional statements, which are given in Section 3.
Theorem 1.
If condition (15) is fulfilled, then
$$1-F_{n,p}(nx)=\bigl(1-\Pi_{\lambda_1}(nx)\bigr)A(x,n,p)(1+r_1)=\bigl(1-\Pi_{\lambda_1}(nx)\bigr)e^{n\Lambda_2(u)}\bigl(1+r_1(x)\bigr),$$
where $u=\frac{x-p}{q}$,
$$|r_1(x)|\le c_1\sqrt{nx^3},\qquad c_1=2\sqrt{2\pi}\,e^{1/12}=5.4489\ldots$$
Example 1.
Let $n=500$, $p=0.002$, $x=kp$, $k=\overline{2,5}$. Table 1 shows the corresponding values of the function $c_1\sqrt{nx^3}$.
Table 1, in accordance with Theorem 1, shows that with increasing x the approximation deteriorates.
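For the setting of Table 1 one can also compute the actual remainder $r_1(x)$ of Theorem 1 and compare it with the bound $c_1\sqrt{nx^3}$; the rough sketch below is our own code (helper names are ours, not the paper's).

```python
# Rough numerical illustration of Theorem 1 in the setting of Table 1
# (n = 500, p = 0.002); all helper names are ours.
from math import exp, comb, factorial, sqrt, pi

def binom_tail(n, p, m):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

def poisson_tail(lam, m):
    return 1.0 - sum(lam**j * exp(-lam) / factorial(j) for j in range(m))

c1 = 2 * sqrt(2 * pi) * exp(1 / 12)              # = 5.4489...
n, p = 500, 0.002
q = 1 - p
for k in range(2, 6):
    x = k * p
    m = round(n * x)                             # nx is an integer here
    lam1 = n * p * (1 - x) / q                   # "variable" Poisson parameter
    A = (q / (1 - x))**n * exp(-n * (x - p) / q) # correction factor A(x, n, p)
    r1 = binom_tail(n, p, m) / (poisson_tail(lam1, m) * A) - 1
    print(x, abs(r1), c1 * sqrt(n * x**3))       # actual remainder vs. the bound of Table 1
```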
Remark 1.
It is known that the binomial distribution with parameters $n$, $p$ is well approximated by the Poisson one with parameter $np$ if $p$ is small enough [8]. The Poisson distribution from the equalities (24) and (26) has a different parameter. However, $\lambda_1=\frac{np(1-x)}{q}\approx np$ when $x$ is close to 0 and $p<x$. In the next claims we consider the Poisson approximation with parameter $np$. Note also that the Poisson distribution with parameter $\lambda_1$ degenerates when $x$ is close to 1. See also Table 2.
Remark 2.
A necessary condition for good approximation in (26) is the smallness of $x$, namely, $x<\theta n^{-1/3}$. This agrees with the result of Yu.V. Prokhorov [9], according to which in the case $x<\theta n^{-1/3}$ ($\theta=0.637$) the Poisson approximation to the binomial distribution is more precise than the normal approximation. However, as $x$ is close to 0, $\lambda_1/\lambda\approx1$ ($\lambda=np$). In this case, $\lambda$ can be both large and small. The same applies to the values of $nx$. Note that $\frac{d}{d\lambda}\bigl(1-\Pi_\lambda(k)\bigr)>0$ for any $k\ge1$. Indeed, it is easy to see that $\frac{d}{d\lambda}\Pi_\lambda(k)=\Pi_\lambda(k-1)-\Pi_\lambda(k)=-\pi_\lambda(k-1)<0$. Therefore, $\frac{d}{d\lambda}\bigl(1-\Pi_\lambda(k)\bigr)>0$. This means that $1-\Pi_{\lambda_1}(k)<1-\Pi_{np}(k)$ for all $k\ge1$.
Theorem 2.
If condition (15) is fulfilled, then the following equality holds,
$$\frac{1-F_{n,p}(nx)}{1-\Pi_{np}(nx)}=\frac{M(nx;\lambda_1)}{M(nx;np)}\,e^{-nq\Lambda_3\left(\frac{x-p}{q}\right)}\bigl(1+r_1(x)\bigr),$$
where
$$\Lambda_3(u)=\sum_{k=2}^{\infty}\frac{u^k}{k(k-1)},$$
and $r_1(x)$ is the function from Theorem 1.
Remark 3.
It follows from Remark 2 that if in the representation (26) the difference $1-\Pi_{\lambda_1}(nx)$ is replaced by $1-\Pi_\lambda(nx)$, where $\lambda=np$, then instead of the function $A(x,n,p)$ it is necessary to insert another correction factor, which is less than $A(x,n,p)$. The form of this factor is indicated in Theorem 2. In this connection, we note that the exponential function on the right-hand side of (28) has a negative exponent, in contrast to the exponential function in (26).
The following table (Table 2) gives an idea of the relationship between the tails of the approximating distributions under consideration, $\Pi_{\lambda_1}$ and $\Pi_\lambda$.
By $\theta$ we will denote quantities, possibly different in different places, satisfying the bound $|\theta|\le1$.
Rewrite (28) in the form
$$1-F_{n,p}(nx)=\bigl(1-\Pi_{np}(nx)\bigr)\,\Omega_1(x,n,p)\,e^{-nq\Lambda_3\left(\frac{x-p}{q}\right)}\bigl(1+r_1(x)\bigr),$$
where $\Omega_1(x,n,p)=\frac{M(nx;\lambda_1)}{M(nx;np)}$.
Let us give a table of values of the functions $M(nx;\lambda_1)$, $M(nx;np)$ and $\Omega_1(x,n,p)\equiv\frac{M(nx;\lambda_1)}{M(nx;np)}$. Let $n=10$, $p=0.1$, $x=\frac{k}{n}$, $k=\overline{2,9}$. The calculations lead to Table 3.
Taking into account that $\Omega_1(x,n,p)$ does not differ much from 1 (see Table 3), write $\Omega_1(x,n,p)\bigl(1+r_1(x)\bigr)=1+r_2(x)$. We will use the elementary identity $b(1+a)-1=a-(1-b)(1+a)$. Putting $a=r_1(x)$, $b=\Omega_1(x,n,p)$, we obtain
$$r_2(x)\equiv\Omega_1(x,n,p)\bigl(1+r_1(x)\bigr)-1=r_1(x)-\bigl(1-\Omega_1(x,n,p)\bigr)\bigl(1+r_1(x)\bigr),$$
and
$$1-F_{n,p}(nx)=\bigl(1-\Pi_{np}(nx)\bigr)e^{-nq\Lambda_3\left(\frac{x-p}{q}\right)}\bigl(1+r_2(x)\bigr).$$
Note that Equality (31) is another form of Theorem 2.
The following inequalities hold: $1-\Omega_1(x,n,p)>0$ and $1+r_1(x)>0$. Hence, by (30), $r_2(x)\le r_1(x)$ if $r_1(x)>0$, while $|r_2(x)|\ge|r_1(x)|$ if $r_1(x)<0$.
In the next theorem an estimate of $r_2(x)$ is obtained.
Theorem 3.
If condition (15) is fulfilled, then
$$\frac{1-F_{n,p}(nx)}{1-\Pi_{np}(nx)}=e^{-nq\Lambda_3\left(\frac{x-p}{q}\right)}\bigl(1+r_2(x)\bigr),$$
where
$$r_2(x)=\theta\left(c_1\sqrt{nx^3}+\frac{p}{x}\right).$$
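A direct numerical check of the representation in Theorem 3 is easy to carry out; the sketch below (our own code, with $\Lambda_3$ summed numerically) recovers $r_2(x)$ for $n=10$, $p=0.1$, matching Table 8, and compares it with the stated bound.

```python
# Sketch (our own code) checking Theorem 3 for n = 10, p = 0.1.
from math import exp, comb, factorial, sqrt, pi

def binom_tail(n, p, m):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

def poisson_tail(lam, m):
    return 1.0 - sum(lam**j * exp(-lam) / factorial(j) for j in range(m))

def Lambda3(u, terms=200):
    return sum(u**k / (k * (k - 1)) for k in range(2, terms))

c1 = 2 * sqrt(2 * pi) * exp(1 / 12)
n, p = 10, 0.1
q = 1 - p
for m in range(2, 6):                      # nx = 2, ..., 5 as in Table 8
    x = m / n
    u = (x - p) / q
    r2 = binom_tail(n, p, m) / (poisson_tail(n * p, m) * exp(-n * q * Lambda3(u))) - 1
    print(x, r2, c1 * sqrt(n * x**3) + p / x)   # r2 matches Table 8; bound of Theorem 3
```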
Remark 4.
 
1. The closeness of $xn^{1/3}$ and $\frac{p}{x}$ to 0 ensures the closeness of $r_2(x)$ to zero. Moreover, as was said in Remark 2, the closeness of $xn^{1/3}$ to 0 agrees with [9].
2. Under the condition $x=o(n^{-1/3})$ the quantity $nx^2$ need not tend to zero.
Remark 5.
Let us discuss the connection between $x$, $n$ and $p$ under which the function $r(x)=c_1\sqrt{nx^3}+\frac{p}{x}$ approaches zero. Obviously, $r(x)$ can tend to zero only if $x\to0$ and $\frac{p}{x}\to0$.
Let the parameters $n$ and $p$ be fixed, and let us find $\min_{p<x<1}r(x)$. We write $r(x)$ for brevity as $r(x)=ax^{3/2}+\frac{p}{x}$, where $a=c_1\sqrt{n}$. Obviously, $\frac{dr(x)}{dx}=\frac{3a}{2}x^{1/2}-\frac{p}{x^2}$. Therefore, the minimum of $r(x)$ is attained at the point $x_0=\left(\frac{2p}{3a}\right)^{2/5}$, and $\frac{dr(x)}{dx}<0$ for $0<x<x_0$, while $\frac{dr(x)}{dx}>0$ for $x>x_0$.
As a result of calculations, we make sure that $p<x_0$ if $p<\left(\frac{2}{3c_1}\right)^{2/3}n^{-1/3}$. This condition can be considered fulfilled. From here,
$$\min_{p<x<1}r(x)=r(x_0)=a^{2/5}p^{3/5}\left[\left(\frac{2}{3}\right)^{3/5}+\left(\frac{2}{3}\right)^{-2/5}\right]=(np^3)^{1/5}c_2,$$
where $c_2=c_1^{2/5}\,\frac{5}{2}\left(\frac{2}{3}\right)^{3/5}=3.8619\ldots$ Thus, $\min_{p<x<1}r(x)\to0$ if and only if $np^3\to0$.
Indeed, let $f(x)$ and $g(x)$ be the left and right branches of the function $r(x)$ with respect to the line $x=x_0$. In this case, the domain of $f(x)$ is $(0,x_0]$, and that of $g(x)$ is $[x_0,\infty)$. On the other hand, for each $\varepsilon>0$ one can specify an interval $(x_1,x_2)$ containing $x_0$ such that for $x\in(x_1,x_2)$ the inequality $r(x)-r(x_0)<\varepsilon$ holds.
These functions are strictly monotone and therefore have inverses, $f^{-1}(\cdot)$ and $g^{-1}(\cdot)$, respectively. Then the required interval has the form $\bigl(f^{-1}(\varepsilon),g^{-1}(\varepsilon)\bigr)$. Note that the domain of these inverse functions is the same: $[r(x_0),\infty)$.
Example 2.
Let $n=100$, $p=4.75\times10^{-4}$. Then $a=c_1\sqrt{n}=54.4893$, $x_0=\left(\frac{2p}{3a}\right)^{2/5}=0.00804853$, $r(x_0)=0.0983617$. The graph of the function $r(x)=ax^{3/2}+\frac{p}{x}$ is shown in Figure 1.
Take $\varepsilon=0.05$, for example. Finding the roots of the equation $r(x)-r(x_0)=\varepsilon$, we get $x_1=0.0034603$, $x_2=0.0169604$, $x_2-x_1=0.0135001$. Note that $\varepsilon$ can be chosen arbitrarily small only if $np^3$ is sufficiently close to 0.
Table 4 shows the behavior of the interval $(x_1,x_2)\equiv\bigl(f^{-1}(\varepsilon),g^{-1}(\varepsilon)\bigr)$ with decreasing $\varepsilon$.
Figure 1 illustrates the column "$\varepsilon=\frac18$" of Table 4.
Note that near the point $x_0$ both functions $ax^{3/2}$ and $\frac{p}{x}$ that form $r(x)$ make approximately the same contribution to $r(x)$. For instance, $r(0.008)=0.098$, where $ax^{3/2}\big|_{x=0.008}=0.0389$ and $\frac{p}{x}\big|_{x=0.008}=0.059$.
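The quantities of Example 2 are easy to recompute; the following sketch (our own code, with a simple bisection in place of a root-finding library) reproduces $x_0$, $r(x_0)$ and the roots $x_1$, $x_2$ for $\varepsilon=0.05$.

```python
# Reproducing the numbers of Example 2 (n = 100, p = 4.75e-4); a rough sketch.
from math import exp, sqrt, pi

c1 = 2 * sqrt(2 * pi) * exp(1 / 12)
n, p = 100, 4.75e-4
a = c1 * sqrt(n)
x0 = (2 * p / (3 * a))**0.4          # minimiser of r(x) = a x^{3/2} + p/x
r0 = a * x0**1.5 + p / x0
print(x0, r0)                        # about 0.00805 and 0.0984

def r(x):
    return a * x**1.5 + p / x

def bisect(f, lo, hi, it=80):
    """Plain bisection; assumes f changes sign on [lo, hi]."""
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

eps = 0.05
x1 = bisect(lambda x: r(x) - r0 - eps, 1e-6, x0)   # left branch
x2 = bisect(lambda x: r(x) - r0 - eps, x0, 1.0)    # right branch
print(x1, x2)                        # about 0.00346 and 0.01696
```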
Corollary 1.
Let condition (15) be fulfilled and $c_1\sqrt{nx^3}<1$. Then
$$\frac{1-F_{n,p}(nx)}{1-\Pi_{np}(nx)}=e^{-\frac{n(x-p)^2}{2}}\left(1+\theta\left(5.74\sqrt{nx^3}+\frac{p}{x}\right)\right).$$
Remark 6.
Note that the behavior of the series $nq\Lambda_3\left(\frac{x-p}{q}\right)$ is determined by its first summand, in contrast to the Cramér series in the case of Gaussian approximation [4].

2. Proof of Proposition 1

We will use one result from [8], which is formulated as follows.
Let $X_1,\dots,X_n$ be independent Bernoulli random variables. Denote $S_n=\sum_{j=1}^{n}X_j$, let $F_{S_n}$ be the distribution of the sum $S_n$, $p_j=\mathbf{P}(X_j=1)$, $\lambda=\sum_{j=1}^{n}p_j$, and let $\Pi_\lambda$ be the Poisson distribution with parameter $\lambda$. In the paper [8], the following estimate for the total variation distance $d_{TV}(F_{S_n},\Pi_\lambda)$ between $F_{S_n}$ and $\Pi_\lambda$ is obtained,
$$d_{TV}(F_{S_n},\Pi_\lambda)\le\frac{1-e^{-\lambda}}{\lambda}\sum_{j=1}^{n}p_j^{2}.$$
In the particular case when
$$p_1=p_2=\dots=p_n=p,$$
we have $\lambda=np$. Then it follows from (34) that
$$d_{TV}(F_{S_n},\Pi_\lambda)\le(1-e^{-\lambda})\,p,$$
whence
$$d_{TV}(F_{S_n},\Pi_\lambda)\le p.$$
In the case (35) we will use the notation
$$d_{n,p}=d_K(F_{S_n},\Pi_\lambda),$$
where $d_K(F_{S_n},\Pi_\lambda)$ is the Kolmogorov distance between the distributions $F_{S_n}$ and $\Pi_\lambda$.
Since
$$d_{n,p}\le d_{TV}(F_{S_n},\Pi_\lambda),$$
it follows from (37) that
$$d_{n,p}\le p.$$
In what follows we will use the estimate (39), although there is reason to believe that the coefficient 1 in front of $p$ on the right-hand side of (39) can be replaced by a smaller number, namely, $2\left(e^{-1}-\frac14\right)\approx0.236$ (see Section 6, Supplement).
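Both distances appearing here are straightforward to compute numerically for moderate $n$; the sketch below (our own helper code) evaluates $d_{TV}(F_{S_n},\Pi_{np})$ and the Kolmogorov distance $d_{n,p}$, which can be checked against Table 5.

```python
# Sketch (our own helpers) computing the total variation distance and the
# Kolmogorov distance d_{n,p} between Binomial(n, p) and Poisson(np).
from math import exp, comb, factorial

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    return lam**k * exp(-lam) / factorial(k)

def distances(n, p, kmax=200):
    lam = n * p
    diffs = [binom_pmf(n, p, k) - poisson_pmf(lam, k) for k in range(kmax)]
    d_tv = 0.5 * sum(abs(d) for d in diffs)
    csum, d_k = 0.0, 0.0          # Kolmogorov distance via running partial sums
    for d in diffs:
        csum += d
        d_k = max(d_k, abs(csum))
    return d_tv, d_k

d_tv, d_k = distances(10, 0.2)
print(d_tv, d_k)                  # d_k is about 0.0302 (cf. Table 5); d_k <= d_tv <= p
```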
Proof of Proposition 1.
Note that $F^{(h)}(t)$ is the Bernoulli distribution function with parameter $p_h=\frac{pe^{h}}{r(h)}$, and $F_{n,p}^{(h)}(t)$ is the binomial distribution function with parameters $n$ and $p_h$. It is easily seen that, by (16),
$$p_h=x,\qquad q_h=1-p_h=1-x.$$
Therefore, instead of $F_{n,p}^{(h)}(t)$ we can write $F_{n,x}(t)$.
The approximating Poisson distribution for $F_{n,p}^{(h)}$ has the parameter $nx$. Taking this and (19) into account, we have
$$\omega_n(nx;F)=\int_{nx}^{\infty}e^{-hy}\,dF_{n,p}^{(h)}(y)=I_1+I_2,$$
where
$$I_1=\int_{nx}^{\infty}e^{-hy}\,d\Pi_{nx}(y),\qquad I_2=\int_{nx}^{\infty}e^{-hy}\,d\bigl(F_{n,x}(y)-\Pi_{nx}(y)\bigr).$$
It is easily seen that
$$I_1=e^{-nx}\sum_{k\ge nx}e^{-hk}\frac{(nx)^k}{k!}=e^{-nx(1-e^{-h})}\sum_{k\ge nx}\frac{(nxe^{-h})^k}{k!}e^{-nxe^{-h}}=e^{-nx(1-e^{-h})}\bigl(1-\Pi_{\lambda^{(h)}}(nx)\bigr),$$
where $\lambda^{(h)}=nxe^{-h}$. It follows from (16) and (20) that
$$\lambda^{(h)}=nx\,\frac{(1-x)p}{xq}=np\,\frac{1-x}{q}=\lambda_1.$$
Moreover, by (16),
$$nx(1-e^{-h})=nx\left(1-\frac{(1-x)p}{xq}\right)=\frac{n(x-p)}{q}.$$
Then we get from (42) that
$$I_1=e^{-\frac{n(x-p)}{q}}\bigl(1-\Pi_{\lambda_1}(nx)\bigr).$$
Further, integrating by parts, we get
$$|I_2|=\left|h\int_{nx}^{\infty}e^{-hy}\bigl(F_{n,x}(y)-\Pi_{nx}(y)\bigr)\,dy-e^{-hnx}\bigl(F_{n,x}(nx)-\Pi_{nx}(nx)\bigr)\right|\le 2e^{-nhx}\max_{y\ge nx}\bigl|F_{n,x}(y)-\Pi_{nx}(y)\bigr|.$$
It follows from (41), (44) and (45) that
$$\omega_n(nx;F)=e^{-\frac{n(x-p)}{q}}\bigl(1-\Pi_{\lambda_1}(nx)\bigr)+R,\qquad |R|\le 2e^{-nhx}\max_{y\ge nx}\bigl|F_{n,x}(y)-\Pi_{nx}(y)\bigr|.$$
Using (16) and the definition of $H(t,p)$, we get
$$e^{-nhx}\left(\frac{q}{1-x}\right)^{n}=\left(\frac{p(1-x)}{qx}\right)^{nx}\left(\frac{q}{1-x}\right)^{n}=\left(\frac{x}{p}\right)^{-nx}\left(\frac{1-x}{q}\right)^{-n(1-x)}=e^{-nH(x,p)}.$$
It follows from (18), (21) and (46) that
$$1-F_{n,p}(nx)=\bigl(1-\Pi_{\lambda_1}(nx)\bigr)A(x,n,p)+\left(\frac{q}{1-x}\right)^{n}R,$$
where $R$ satisfies the inequality in (46).
Now applying (39) and (46)–(48), one after the other, we obtain the statement of Proposition 1. □

3. Proof of Theorems 1 and 2

Throughout Section 3, condition (15) is assumed to be fulfilled.
Lemma 1.
If condition (15) is satisfied and $nx$ is an integer, then
$$\frac{|I_2|}{I_1}\le\frac{2\sqrt{2\pi nx^3}\,e^{1/(12nx)}}{M(nx;\lambda_1)}\le\frac{c_1\sqrt{nx^3}}{M(nx;\lambda_1)}.$$
Proof. 
First of all, we write (44) as
$$I_1=M(nx;\lambda_1)\exp\left(-\frac{n(x-p)}{q}-\lambda_1\right)\frac{\lambda_1^{nx}}{(nx)!}.$$
In view of (43),
$$\frac{n(x-p)}{q}+\lambda_1=\frac{nx-nxp}{q}=nx.$$
Consequently,
$$I_1=M(nx;\lambda_1)\,e^{-nx}\,\frac{\lambda_1^{nx}}{(nx)!}.$$
By (39), (45) and (50),
$$\frac{|I_2|}{I_1}\le\frac{2x\,(nx)!\,e^{nx-nhx}}{M(nx;\lambda_1)\,\lambda_1^{nx}}.$$
In accordance with (16),
$$e^{-nhx}=\left(\frac{p(1-x)}{qx}\right)^{nx}.$$
Further, by the Stirling formula,
$$(nx)!=\sqrt{2\pi nx}\,(nx)^{nx}e^{-nx+\theta},$$
where $\frac{1}{12nx+1}<\theta<\frac{1}{12nx}$. Finally, according to (43),
$$\lambda_1^{nx}=(nx)^{nx}\left(\frac{p(1-x)}{qx}\right)^{nx}.$$
Substituting the expressions (52)–(54) into the right-hand side of (51), we obtain (49). □
Proof of Theorem 1.
It follows from (18), (41) and (49) that
$$1-F_{n,p}(nx)=I_1\left(\frac{q}{1-x}\right)^{n}(1+r_1),$$
where $|r_1|\le c_1\sqrt{nx^3}$. Note that by (55) we have $1+r_1>0$.
Taking into account (21) and (44), we get
$$I_1\left(\frac{q}{1-x}\right)^{n}=A(x,n,p)\bigl(1-\Pi_{\lambda_1}(nx)\bigr).$$
Theorem 1 follows from (23), (55) and (56). □
Proof of Theorem 2.
Using (55) and (56), we obtain
$$1-F_{n,p}(nx)=\bigl(1-\Pi_{\lambda_1}(nx)\bigr)A(x,n,p)(1+r_1)=\bigl(1-\Pi_{np}(nx)\bigr)\frac{M(nx;\lambda_1)\,\pi_{\lambda_1}(nx)}{M(nx;np)\,\pi_{np}(nx)}\,A(x,n,p)(1+r_1).$$
We have
$$\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}=(1-u)^{nx}e^{npu}=e^{npu+nx\ln(1-u)},$$
where $u=\frac{x-p}{q}$. Hence, taking into account (22), we get
$$\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}\,A(x,n,p)=e^{-nq\left[\frac{1-x}{q}\ln(1-u)+u\right]}=e^{-nq[(1-u)\ln(1-u)+u]}.$$
Expanding $\ln(1-u)$ in powers of $u$, we have
$$u+(1-u)\ln(1-u)=u-(1-u)\sum_{k=1}^{\infty}\frac{u^k}{k}=\Lambda_3(u).$$
Thus,
$$\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}\,A(x,n,p)=e^{-nq\Lambda_3(u)}.$$
The equality (28) follows from (57) and (59). □

4. Proof of Theorem 3 and Corollary 1

Lemma 2.
Let $nx$ be an integer. Then
$$M(nx;\lambda_1)\le\frac{1}{1-p/x}.$$
Proof. 
By formula (13),
$$M(nx;\lambda_1)\le 1+\sum_{m=1}^{\infty}\left(\frac{\lambda_1}{nx}\right)^{m}.$$
It follows from (15) that
$$\lambda_1\le np.$$
These inequalities imply Lemma 2. □
Lemma 3.
The following inequality holds,
$$\frac{1-\Pi_{\lambda_1}(nx)}{1-\Pi_{np}(nx)}>\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}\left(1-\frac{p}{x}\right).$$
Proof. 
Obviously,
$$\frac{1-\Pi_{\lambda_1}(nx)}{1-\Pi_{np}(nx)}>\frac{\pi_{\lambda_1}(nx)}{1-\Pi_{np}(nx)}.$$
Since for every $k\ge0$, $\lambda>0$,
$$\frac{\pi_\lambda(k+1)}{\pi_\lambda(k)}=\frac{\lambda}{k+1},$$
then for $k\ge nx$
$$\frac{\pi_{np}(k+1)}{\pi_{np}(k)}=\frac{np}{k+1}<\frac{p}{x}$$
and, consequently,
$$1-\Pi_{np}(nx)=\pi_{np}(nx)\sum_{k=0}^{\infty}\frac{\pi_{np}(nx+k)}{\pi_{np}(nx)}<\pi_{np}(nx)\sum_{k=0}^{\infty}\frac{p^k}{x^k}=\frac{\pi_{np}(nx)}{1-\frac{p}{x}}.$$
The statement of the lemma follows from (60) and (61). □
Lemma 4.
The following inequality holds,
$$\frac{1-\Pi_{\lambda_1}(nx)}{1-\Pi_{np}(nx)}\le\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}.$$
Proof. 
We have
$$\frac{1-\Pi_{\lambda_1}(nx)}{1-\Pi_{np}(nx)}\le\max_{k\ge nx}\frac{\pi_{\lambda_1}(k)}{\pi_{np}(k)}=\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}.$$
The last equality is true because
$$\frac{\pi_{\lambda_1}(k)}{\pi_{np}(k)}=\frac{\lambda_1^{k}}{(np)^{k}}\,e^{-\lambda_1+np}$$
and, by (43),
$$\frac{\lambda_1}{np}=\frac{1-x}{q}<1.$$
Lemma 5.
The following equality holds,
$$\frac{1-\Pi_{\lambda_1}(nx)}{1-\Pi_{np}(nx)}=\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}\,(1-r_3),$$
where
$$r_3=\theta_1\frac{p}{x},\qquad 0<\theta_1<1.$$
Proof. 
By Lemmas 3 and 4,
$$\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}\left(1-\frac{p}{x}\right)<\frac{1-\Pi_{\lambda_1}(nx)}{1-\Pi_{np}(nx)}\le\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}.$$
This implies the statement of Lemma 5. □
Proof of Theorem 3.
Using (55) and (56) and Lemma 5, we have
$$1-F_{n,p}(nx)=\bigl(1-\Pi_{np}(nx)\bigr)e^{-n[u+\ln(1-u)]}\,\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}\,(1+r_1)(1-r_3).$$
By (62) and (63),
$$\frac{\pi_{\lambda_1}(nx)}{\pi_{np}(nx)}=(1-u)^{nx}e^{npu}=e^{npu+nx\ln(1-u)}.$$
It follows from (64) and (65) that
$$1-F_{n,p}(nx)=\bigl(1-\Pi_{np}(nx)\bigr)e^{-n[uq+(1-x)\ln(1-u)]}\,(1+r_1)(1-r_3).$$
Taking into account that $1-x=q(1-u)$ and expanding $\ln(1-u)$ in powers of $u$, we have
$$uq+(1-x)\ln(1-u)=q\bigl[u+(1-u)\ln(1-u)\bigr]=q\left[u-(1-u)\sum_{k=1}^{\infty}\frac{u^k}{k}\right]=q\Lambda_3(u).$$
The statement of the theorem follows from (66), (67) and the inequality
$$|r_1-r_3-r_1r_3|=|r_1(1-r_3)-r_3|\le|r_1|(1-r_3)+r_3<|r_1|+r_3,$$
which holds because $0<r_3<1$. □
Proof of Corollary 1.
As before, we use the notation $u=\frac{x-p}{q}$. Write
$$nq\Lambda_3(u)=nq\left[\frac{u^2}{2}+\sum_{k=3}^{\infty}\frac{u^k}{k(k-1)}\right]=A+B,$$
where $A=\frac{n(x-p)^2}{2q}$, $B=nq\sum_{k=3}^{\infty}\frac{u^k}{k(k-1)}$. Firstly,
$$A=\frac{n(x-p)^2}{2}+\frac{n(x-p)^2}{2}\left(\frac1q-1\right)=\frac{n(x-p)^2}{2}+\theta_1\frac{nx^3}{2(1-x)},$$
where $0<\theta_1<1$. Now let us estimate $B$. We have
$$B=nq\left(\frac{x-p}{q}\right)^{3}\sum_{k=3}^{\infty}\frac{u^{k-3}}{k(k-1)}.$$
Using (58) we can write
$$\sum_{k=3}^{\infty}\frac{u^{k-3}}{k(k-1)}=\frac{1}{u^3}\left(\Lambda_3(u)-\frac{u^2}{2}\right)=\frac{1}{u^3}\left((1-u)\ln(1-u)+u-\frac{u^2}{2}\right)=:g(u).$$
The condition $c_1\sqrt{nx^3}<1$ implies that $x<n^{-1/3}c_1^{-2/3}\le c_1^{-2/3}$. Hence,
$$u<\frac{x}{1-x}<\frac{x}{1-x}\bigg|_{x=c_1^{-2/3}}<0.477.$$
Then
$$\sum_{k=3}^{\infty}\frac{u^{k-3}}{k(k-1)}<g(0.477)<0.224.$$
By (70) and (73),
$$B<nq\left(\frac{x-p}{q}\right)^{3}\cdot0.224<\frac{0.224\,nx^3}{(1-x)^2}.$$
Thus,
$$B=\theta_2\,\frac{0.224\,nx^3}{(1-x)^2},$$
where $0<\theta_2<1$.
Collecting (68), (69) and (74), we arrive at
$$nq\Lambda_3\left(\frac{x-p}{q}\right)=\frac{n(x-p)^2}{2}+\theta_1\frac{nx^3}{2(1-x)}+\theta_2\frac{0.224\,nx^3}{(1-x)^2}=\frac{n(x-p)^2}{2}+\theta_3\,\frac{0.724\,nx^3}{(1-x)^2},$$
where $0<\theta_3<1$. Hence,
$$e^{-nq\Lambda_3\left(\frac{x-p}{q}\right)}=e^{-\frac{n(x-p)^2}{2}-\theta_3\frac{0.724\,nx^3}{(1-x)^2}}=e^{-\frac{n(x-p)^2}{2}}(1-r_4),$$
where $r_4=\theta_4\,\frac{0.724\,nx^3}{(1-x)^2}$, $0<\theta_4<1$. Using the condition $c_1\sqrt{nx^3}<1$ again, we obtain
$$\frac{nx^3}{(1-x)^2}<\frac{\sqrt{nx^3}}{c_1\left(1-c_1^{-2/3}\right)^{2}}<0.4004\,\sqrt{nx^3}.$$
Since $0.724\cdot0.4004<0.29=:c_3$, we have
$$r_4=\theta_5\,c_3\sqrt{nx^3},$$
where $0<\theta_5<1$.
From (76) we obtain
$$e^{-nq\Lambda_3\left(\frac{x-p}{q}\right)}(1+r)=e^{-\frac{n(x-p)^2}{2}}(1-r_4)(1+r),$$
where $r=r_2(x)$ is the remainder from Theorem 3. Write
$$(1-r_4)(1+r)=1+r_5,$$
where
$$r_5=r-r_4-rr_4.$$
Denote for brevity $a_1=\sqrt{nx^3}$, $a_2=\frac{p}{x}$. Then $r=\theta(c_1a_1+a_2)$, $|\theta|\le1$, and $r_4=\theta_5c_3a_1$, $0<\theta_5<1$.
Let us consider two cases: $r\ge0$ and $r<0$. Consider the case $r\ge0$. Then
$$|r_5|\le\max\{r,\ r_4+rr_4\}.$$
Note that $r<2$. Since
$$r\le c_1a_1+a_2,\qquad r_4+rr_4<3r_4\le3c_3a_1,\qquad c_1a_1+a_2>3c_3a_1,$$
then
$$|r_5|\le c_1a_1+a_2.$$
If $r<0$, then
$$|r_5|\le\max\{|r|+r_4,\ |r|\,r_4\}.$$
Since
$$|r|+r_4\le a_1(c_1+c_3)+a_2,\qquad |r|\,r_4<2c_3a_1,\qquad a_1(c_1+c_3)+a_2>2c_3a_1,$$
then
$$|r_5|\le(c_1+c_3)a_1+a_2.$$
Taking into account that $c_1+c_3<5.74$, we conclude that Corollary 1 is proved. □

5. Numerical Experiments

Further we will use the following Table 5 and Table 6.
Let us represent $r_1(x)$ from the equality (26) in Theorem 1 in the form
$$r_1(x)=g(x,n,p)\sqrt{nx^3}.$$
In view of (27) the following estimate holds,
$$|g(x,n,p)|\le c_1=5.4489\ldots$$
The question arises: how accurate is this estimate? In other words, we need to find
$$c_0=\inf\bigl\{c:\ |g(x,n,p)|\le c,\ n\ge1,\ 0<p<x<1\bigr\}$$
and check how much $c_1$ differs from $c_0$.
Let us rewrite (26) as
$$\frac{1-F_{n,p}(nx)}{\bigl(1-\Pi_{\lambda_1}(nx)\bigr)A(x,n,p)}=1+g(x,n,p)\sqrt{nx^3},$$
where the parameter $\lambda_1$ is defined by formula (43) and $A(x,n,p)$ by formula (21). It follows from (80) that
$$g(x,n,p)=\frac{1}{\sqrt{nx^3}}\left[\frac{1-F_{n,p}(nx)}{\bigl(1-\Pi_{\lambda_1}(nx)\bigr)A(x,n,p)}-1\right].$$
Let us carry out numerical experiments to get closer to understanding the answer to the question posed.
Note that if $x=j/n$, where $j=0,1,\dots,n-1$, then $\lambda_1=\frac{p}{q}(n-j)$.
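The function $g(x,n,p)$ defined above is easy to evaluate directly; the following sketch (our own code) computes it for $n=10$, $p=0.1$ and $x=j/n$, which can be compared with the last row of Table 7.

```python
# Sketch (our own code) computing g(x, n, p) for n = 10, p = 0.1 and x = j/n.
from math import exp, comb, factorial, sqrt

def binom_tail(n, p, m):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

def poisson_tail(lam, m):
    return 1.0 - sum(lam**j * exp(-lam) / factorial(j) for j in range(m))

def g(n, p, j):
    x, q = j / n, 1 - p
    lam1 = p / q * (n - j)                            # lambda_1 for x = j/n
    A = (q / (1 - x))**n * exp(-n * (x - p) / q)      # correction factor A(x, n, p)
    return (binom_tail(n, p, j) / (poisson_tail(lam1, j) * A) - 1) / sqrt(n * x**3)

print([round(g(10, 0.1, j), 4) for j in range(2, 10)])
# approximately 0.3704, 0.356, 0.3503, 0.358, 0.38, 0.426, 0.51, 0.718 -- cf. Table 7
```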

5.1. Illustration to Theorem 1

Let $n=10$, $p=0.1$. Table 7 contains the values of the function $g(x,n,p)$ found for $x=\frac{j}{n}$, $j=\overline{2,9}$, as well as intermediate results.
On the last row, the maximum value is shown in bold.
The third row from the bottom of Table 7 shows the real value of the remainder in (26) for various $x$. These values are acceptable for $x\le0.6$, taking into account the small value of the parameter $n$. The last row contains the values of the coefficient at $\sqrt{nx^3}$ in $r_1=r_1(x)$ for which the latter coincides with the real remainder for the given value of $x$. For example, for $x=0.2$ the coefficient at $\sqrt{nx^3}$ must be equal to 0.26764 (column "$x=0.2$", last row), i.e., $r_1(0.2)=0.26764\sqrt{0.08}=0.0757$. If for $d_K(F_{10,1/5},\Pi_2)$ we take the bound $0.150981\,x$ with $x=0.2$, which is found with the help of Table 6 (column "$5d_{n,1/5}$", row "$n=10$"), we obtain $0.822685\equiv0.150981\,c_1$ as the coefficient at $\sqrt{nx^3}$ in $r_1(0.2)$. The latter is more than two times the number 0.370456 from Table 7. Accordingly, we obtain for $r_1(0.2)$ the estimate $0.822685\sqrt{0.08}=0.232689$. At the same time, the real value of $r_1(0.2)$ is 0.104781 (see Table 7 or Table 8).
We observe that the product $\bigl(1-\Pi_{\lambda_1}(nx)\bigr)A(x,n,p)$ approximates $1-F_{n,p}(nx)$ from below, which is compensated by multiplying by $1+r_1(0.2)$.

5.2. Illustration to Theorems 2 and 3

Denote
$$V(x,n,p)=\frac{1-F_{n,p}(nx)}{\bigl(1-\Pi_{np}(nx)\bigr)e^{-nq\Lambda_3\left(\frac{x-p}{q}\right)}},\qquad \Omega_2(x,n,p)=\frac{M(nx;np)}{M(nx;\lambda_1)}\equiv\frac{1}{\Omega_1(x,n,p)}.$$
Put $n=10$, $p=0.1$. Using Table 8 we can compare the remainders $r_1(x)$ and $r_2(x)$.
Let us do more detailed calculations. We calculate, for $x=\frac{j}{n}$, $j=\overline{2,9}$:
$$V(x,n,p)-1,\qquad \frac{V(x,n,p)-1}{\sqrt{nx^3}},\qquad \Omega_2(x,n,p)V(x,n,p)-1,\qquad \frac{\Omega_2(x,n,p)V(x,n,p)-1}{\sqrt{nx^3}}.$$
Table 9 is similar to Table 7. The difference is that in Table 9 the tail $1-F_{n,p}(nx)$ is compared with two products: $\frac{M(nx;\lambda_1)}{M(nx;np)}\bigl(1-\Pi_{np}(nx)\bigr)e^{-nq\Lambda_3\left(\frac{x-p}{q}\right)}$ and $\bigl(1-\Pi_{np}(nx)\bigr)e^{-nq\Lambda_3\left(\frac{x-p}{q}\right)}$. It turns out that the second of them approximates $1-F_{n,p}(nx)$ more accurately (compare the sixth and second rows from the bottom in Table 9; see also Table 8).
To compare the remainders $r_2(x)$ (see (31)) and $r_1(x)$ (Theorems 1 and 2), one can select in Table 9 the rows "$V(x,n,p)-1$" and "$\Omega_2(x,n,p)V(x,n,p)-1$".
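The comparison just described can be reproduced directly; the sketch below (our own code) recomputes $r_2(x)=V(x,n,p)-1$ and $r_1(x)=\Omega_2(x,n,p)V(x,n,p)-1$ for $n=10$, $p=0.1$, matching Table 8.

```python
# Sketch (our own code) reproducing the comparison of r_1(x) and r_2(x) in Table 8.
from math import exp, comb, factorial

def binom_tail(n, p, m):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

def poisson_pmf(lam, j):
    return lam**j * exp(-lam) / factorial(j)

def poisson_tail(lam, m, terms=200):
    return sum(poisson_pmf(lam, j) for j in range(m, m + terms))

def M(k, lam):                                   # Poisson analogue of the Mills ratio
    return poisson_tail(lam, k) / poisson_pmf(lam, k)

def Lambda3(u, terms=200):
    return sum(u**k / (k * (k - 1)) for k in range(2, terms))

n, p = 10, 0.1
q = 1 - p
for j in range(2, 6):
    x = j / n
    u = (x - p) / q
    lam1 = n * p * (1 - x) / q
    V = binom_tail(n, p, j) / (poisson_tail(n * p, j) * exp(-n * q * Lambda3(u)))
    Omega2 = M(j, n * p) / M(j, lam1)
    print(x, round(V - 1, 5), round(Omega2 * V - 1, 5))   # r_2(x) and r_1(x)
```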

6. Supplement

In this section, we offer the reader some conjectures regarding the behavior of d K ( F n , p , Π λ ) .
Due to the cumbersomeness of the table, we did not place columns corresponding to $11\le k\le20$.
Nevertheless, we made sure that for each $2\le k\le20$ the equality
$$\max_{1\le n\le20}d_{n,1/k}=d_{k,1/k}$$
holds. Our conjecture is as follows: for every $k\ge2$,
$$\max_{n\ge1}d_{n,1/k}=d_{k,1/k}.$$
Remark that the sequence $d_{k,1/k}$ decreases monotonically for $2\le k\le20$. This property also holds for the sequences $d_{k+j,1/k}$ for every fixed $j\ge1$ and $d_{jk,1/k}$ for fixed $j\ge2$. According to the CLT, $F_{n,p}(np+x\sqrt{n})$ converges in the uniform metric to the normal law $\Phi(x/\sqrt{pq})$. On the other hand, $\Pi_{np}(np+x\sqrt{n})$ approaches $\Phi(x/\sqrt{p})$. It means that
$$\lim_{n\to\infty}d_{n,p}=\max_{x>0}\bigl(\Phi(x/\sqrt{q})-\Phi(x)\bigr).$$
Hence,
$$\lim_{p\to0}\lim_{n\to\infty}d_{n,p}=0.$$
Using formula (83), we get the elements of the last row of Table 5 and, hence, the elements of the last row of Table 6.
The next conjecture concerns the existence and the value of the limit of $d_{k,1/k}$ as $k\to\infty$. Calculations for $k=\overline{2,20}$ suggest that
$$d_{k,1/k}\equiv\max_{0\le j\le k}\bigl|F_{k,1/k}(j+)-\Pi_1(j+)\bigr|=\bigl|F_{k,1/k}(0+)-\Pi_1(0+)\bigr|\equiv e^{-1}-(1-1/k)^{k}.$$
Indeed, according to Table 5,
$$d_{2,1/2}=0.11787\ldots=e^{-1}-\tfrac14,\qquad d_{3,1/3}=0.07158\ldots=e^{-1}-\left(\tfrac23\right)^{3},\ \ldots$$
In connection with this, we remark that the behaviour of the differences $\binom{n}{k}p^{k}(1-p)^{n-k}-\pi_{np}(k)$ under the condition $np\le2-\sqrt{2}$ is investigated in [10].
Note that (84) is equivalent to the assumption
$$d_K(F_{n,1/n},\Pi_1)=\bigl|F_{n,1/n}(0+)-\Pi_1(0+)\bigr|,$$
i.e., $\max_{0\le k\le n}\bigl|F_{n,1/n}(k+)-\Pi_1(k+)\bigr|$ is attained at $k=0$. This fact is fairly easy to prove in the case $\lambda\le2-\sqrt{2}$, using the results of the paper [10], in which this case is considered. In the case $\lambda=1$ it is more difficult to find a proof, but it certainly exists.
After that we can assert that formula (84) is valid for all $k$ and, moreover, that there exists $\lim_{k\to\infty}k\,d_{k,1/k}$. Indeed, according to [10],
$$k\left(e^{-1}-\left(1-\frac1k\right)^{k}\right)=k\left(e^{-1}-e^{k\ln(1-1/k)}\right)=e^{-1}k\left(1-e^{-\frac{1}{2k}+O(k^{-2})}\right)\ \underset{k\to\infty}{\longrightarrow}\ \frac{1}{2e},$$
whence
$$\lim_{k\to\infty}k\,d_{k,1/k}=\frac{1}{2e}.$$
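The conjectured equality (84) and the limit $\frac{1}{2e}$ are easy to probe numerically; the sketch below (our own code) compares $d_{k,1/k}$ with $e^{-1}-(1-1/k)^k$ and tracks $k\,d_{k,1/k}$.

```python
# Sketch checking the conjecture (84) numerically and the limit k*d_{k,1/k} -> 1/(2e).
from math import exp, comb, factorial

def d_K(n, p, kmax=200):
    """Kolmogorov distance between Binomial(n, p) and Poisson(np)."""
    lam, csum, d = n * p, 0.0, 0.0
    for k in range(kmax):
        csum += comb(n, k) * p**k * (1 - p)**(n - k) - lam**k * exp(-lam) / factorial(k)
        d = max(d, abs(csum))
    return d

for k in (2, 3, 5, 10, 20, 50):
    # according to the conjecture, the first two numbers should coincide
    print(k, d_K(k, 1 / k), exp(-1) - (1 - 1 / k)**k, k * d_K(k, 1 / k))
print(1 / (2 * exp(1)))   # limiting value, 0.1839...
```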
According to Table 6, the constant $c_0$ in the inequality
$$\sup_{n,p}\frac{d_{n,p}}{p}\le c_0$$
cannot be less than $2\left(e^{-1}-\frac14\right)\approx0.235759$.
If we impose the constraint $p\le\frac14$, then the lower bound for $c_0$ is not less than $4\left(e^{-1}-\left(\frac34\right)^{4}\right)\approx0.205892$ (see Table 6). As for the upper bound for $c_0$, it is equal to $2\left(e^{-1}-\frac14\right)$ if in (85) the supremum with respect to $p$ is taken over all $p$ of the form $p=1/k$, $k=2,3,\ldots$
If we adhere to the principle of incomplete induction, then the available information is sufficient to assert that $c_0=2\left(e^{-1}-\frac14\right)$.
Note that in the case $p>\frac12$ it suffices to swap the roles of $p$ and $1-p$.
Table 6 demonstrates the following remarkable property:
$$\max_{1\le n\le20}\ \max_{2\le k\le9}k\,d_{n,1/k}=\max_{2\le k\le9}k\,d_{k,1/k}=2d_{2,1/2}=2\left(e^{-1}-\tfrac14\right)=0.235759\ldots$$
Therefore, it is highly plausible that
$$\max_{1\le n\le20}\ \sup_{0<p\le1/2}\frac{d_{n,p}}{p}=\max_{1\le n\le20}\ \max_{2\le k\le9}k\,d_{n,1/k}=2d_{2,1/2}.$$
Moreover, the following equality is highly plausible,
$$\sup_{n\ge1}\ \sup_{0<p\le1/2}\frac{d_{n,p}}{p}=\max_{1\le n\le20}\ \sup_{0<p\le1/2}\frac{d_{n,p}}{p}=2d_{2,1/2}.$$
The equality (86) is another of our conjectures. If this assumption is true, then instead of (39) we have the more precise estimate
$$d_{n,p}\le2\left(e^{-1}-\tfrac14\right)p<0.23576\,p.$$
If the hypothetical estimate (87) is correct, the main statements of the present work can be sharpened.
Since $2\cdot0.23576<0.472$, on the right-hand side of inequality (25) in Proposition 1 the product $2x\,e^{-nH(x,p)}$ can be replaced by $0.472\,x\,e^{-nH(x,p)}$.
Taking into account the inequality $0.23576\,c_1<1.285$, the constant $c_1=5.4489\ldots$ can everywhere be replaced by $\tilde{c}_1=1.285$. In particular, in the formulations of Theorems 1–3, $c_1$ can be replaced by $\tilde{c}_1$.
Taking into account that $d_{k,1/k}=e^{-1}-(1-1/k)^{k}$, and using Table 6, we arrive at the conclusion that in the case $k\ge2$ and $p=\frac1k$ (see the row "$n=2$", the column "$2d_{n,1/2}$"),
$$d_K(F_{n,1/k},\Pi_{n/k})\le k\left(e^{-1}-(1-1/k)^{k}\right)\Big|_{k=2}\,p<0.236\,p.$$
As $k$ grows, the coefficient at $p$ in (88) decreases, but it cannot become less than $\frac{1}{2e}=0.18\ldots$

7. Conclusions

The main results of this paper are Theorems 1–3. In them, estimates are obtained for the errors arising when the tails of binomial distributions are replaced by the tails of the corresponding Poisson distributions. The constant $c_1\approx5.5$ entering the error estimates is rather large, which limits the practical use of the estimates. However, a further improvement of the estimates would allow our results to be applied, in particular, to the construction of confidence intervals for the parameter $p$. The basis for the hope of obtaining such an improvement is the inequality (88).

Author Contributions

Investigation, S.N. and V.C.; Writing—original draft, S.N. and V.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We are grateful to the reviewers for a number of useful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hoeffding, W. Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc. 1963, 58, 13–30. [Google Scholar] [CrossRef]
  2. Sanov, I. On probability of large deviations of random variables. Mat. Sb. 1957, 42, 11–44. [Google Scholar]
  3. Esscher, F. On the probability function in the collective theory of risk. Skand. Aktuarietidskr. 1932, 15, 175–195. [Google Scholar]
  4. Cramér, H. On a new limit theorem of probability theory. Russ. Math. Surv. 1944, 10, 166–178. [Google Scholar]
  5. Nagaev, S.V.; Chebotarev, V.I. On large deviations for sums of i.i.d. Bernoulli random variables. J. Math. Sci. 2018, 234, 816–828. [Google Scholar] [CrossRef]
  6. Petrov, V.V. A generalization of Cramer’s limit theorem. Uspekhi Mat. Nauk 1954, 9, 195–202. [Google Scholar]
  7. Talagrand, M. The missing factor in Hoeffding inequalities. Ann. Inst. Henri Poincaré 1995, 31, 689–702. [Google Scholar]
  8. Barbour, A.D.; Hall, P. On the rate of Poisson convergence. Math. Proc. Camb. Philos. Soc. 1984, 95, 473–480. [Google Scholar] [CrossRef] [Green Version]
  9. Prokhorov, Y.V. Asymptotic behaviour of binomial distribution. Russ. Math. Surv. 1953, 55, 135–142. [Google Scholar]
  10. Kennedy, J.E.; Quine, M.P. The total variation distance between the binomial and Poisson distributions. Ann. Probab. 1989, 17, 396–400. [Google Scholar] [CrossRef]
Figure 1. Graph of the function $r(x)$ from Example 2. The points $x_1=f^{-1}(1/8)=0.00218$ and $x_2=g^{-1}(1/8)=0.024$ are indicated.
Table 1. Values of the function $c_1\sqrt{nx^3}$ for $n=500$, $p=0.002$, $x=kp$, $k=\overline{2,5}$.
x: 0.004 | 0.006 | 0.008 | 0.01
nx: 2 | 3 | 4 | 5
$c_1\sqrt{nx^3}$: 0.0308 | 0.0566 | 0.0871 | 0.1218
Table 2. Values of the ratio $\frac{1-\Pi_{\lambda_1}(nx)}{1-\Pi_{\lambda}(nx)}$ for $n=10$, $p=0.1$, $x=kp$, $k=\overline{2,8}$.
x: 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8
nx: 2 | 3 | 4 | 5 | 6 | 7 | 8
$1-\Pi_{\lambda_1}(nx)$: 0.223 | 0.044 | 0.004 | 0.0002 | $7.32\times10^{-6}$ | $6.78\times10^{-8}$ | $1.2\times10^{-10}$
$1-\Pi_{\lambda}(nx)$: 0.264 | 0.08 | 0.018 | 0.003 | 0.0005 | 0.00008 | 0.00001
$\frac{1-\Pi_{\lambda_1}(nx)}{1-\Pi_{\lambda}(nx)}$: 0.845 | 0.551 | 0.255 | 0.076 | 0.0123 | 0.0008 | $1.18\times10^{-5}$
Table 3. Values of $M(nx;\lambda_1)$, $M(nx;np)$ and $\Omega_1(x,n,p)$ for $n=10$, $p=0.1$, $x=\frac{k}{n}$, $k=\overline{2,9}$.
x | nx | $\lambda_1$ | $M(nx;\lambda_1)$ | $M(nx;np)$ | $\Omega_1(x,n,p)$
0.2 | 2 | 0.88888 | 1.37583 | 1.43656 | 0.95772
0.3 | 3 | 0.77777 | 1.22909 | 1.30969 | 0.93846
0.4 | 4 | 0.66666 | 1.14969 | 1.23876 | 0.92809
0.5 | 5 | 0.55555 | 1.10048 | 1.19382 | 0.92181
0.6 | 6 | 0.44444 | 1.0672 | 1.16292 | 0.91769
0.7 | 7 | 0.33333 | 1.04326 | 1.14042 | 0.91481
0.8 | 8 | 0.22222 | 1.02525 | 1.12332 | 0.91269
0.9 | 9 | 0.11111 | 1.01167 | 1.10991 | 0.91148
Table 4. Behavior of the interval $(x_1,x_2)$ depending on $\varepsilon$.
$\varepsilon$: 1/2 | 1/4 | 1/8 | 1/16 | 1/32 | 1/1024
$x_1$: 0.0007 | 0.001 | 0.002 | 0.003 | 0.004 | 0.007
$x_2$: 0.048 | 0.033 | 0.024 | 0.018 | 0.014 | 0.009
$x_2-x_1$: 0.048 | 0.032 | 0.021 | 0.015 | 0.01 | 0.001
Table 5. Values of $d_{n,p}=d_K(F_{S_n},\Pi_{np})$ for $p=1/k$, $k=\overline{2,10}$, $n=\overline{1,20}$.
n | $d_{n,1/2}$ | $d_{n,1/3}$ | $d_{n,1/4}$ | $d_{n,1/5}$ | $d_{n,1/6}$ | $d_{n,1/7}$ | $d_{n,1/8}$ | $d_{n,1/9}$ | $d_{n,1/10}$
1 | 0.10653 | 0.0498 | 0.0288 | 0.01873 | 0.0131 | 0.00973 | 0.00749 | 0.00595 | 0.0048
2 | 0.11787 | 0.06897 | 0.044 | 0.03022 | 0.022 | 0.01678 | 0.01317 | 0.01061 | 0.0087
3 | 0.09813 | 0.07158 | 0.050 | 0.03681 | 0.0278 | 0.0217 | 0.01736 | 0.01419 | 0.011
4 | 0.0935 | 0.06606 | 0.05147 | 0.03972 | 0.031 | 0.02494 | 0.02034 | 0.01688 | 0.0142
5 | 0.09979 | 0.05718 | 0.049 | 0.04019 | 0.0327 | 0.02687 | 0.02235 | 0.01882 | 0.016
6 | 0.08977 | 0.05483 | 0.045 | 0.03905 | 0.03298 | 0.02780 | 0.02357 | 0.02014 | 0.0173
7 | 0.09428 | 0.05986 | 0.040 | 0.03688 | 0.032 | 0.02796 | 0.02416 | 0.02096 | 0.0182
8 | 0.09357 | 0.05968 | 0.0389 | 0.03412 | 0.031 | 0.02754 | 0.02427 | 0.02136 | 0.0188
9 | 0.08838 | 0.05608 | 0.042 | 0.03108 | 0.029 | 0.02671 | 0.02399 | 0.02144 | 0.0191
10 | 0.09315 | 0.05363 | 0.043 | 0.03019 | 0.027 | 0.02559 | 0.02342 | 0.02124 | 0.0192
11 | 0.08841 | 0.057 | 0.0426 | 0.03244 | 0.025 | 0.02426 | 0.0226 | 0.02084 | 0.0191
12 | 0.0912 | 0.05698 | 0.04076 | 0.03356 | 0.02467 | 0.02282 | 0.0217 | 0.02028 | 0.0187
13 | 0.09024 | 0.05447 | 0.03808 | 0.03373 | 0.026 | 0.02131 | 0.02067 | 0.01959 | 0.0183
14 | 0.08873 | 0.05376 | 0.0397 | 0.03316 | 0.02727 | 0.02085 | 0.0195 | 0.01882 | 0.0178
15 | 0.09055 | 0.05578 | 0.04098 | 0.03202 | 0.02767 | 0.02208 | 0.0184 | 0.01798 | 0.0172
16 | 0.08617 | 0.05536 | 0.04099 | 0.03046 | 0.02760 | 0.02290 | 0.01807 | 0.01711 | 0.0165
17 | 0.09002 | 0.05309 | 0.04008 | 0.03011 | 0.02714 | 0.02335 | 0.0190 | 0.01621 | 0.0152
18 | 0.08783 | 0.05398 | 0.03827 | 0.03140 | 0.02636 | 0.02348 | 0.0197 | 0.01594 | 0.0159
19 | 0.08902 | 0.05497 | 0.03879 | 0.03200 | 0.025 | 0.02333 | 0.0201 | 0.01670 | 0.0144
20 | 0.08863 | 0.05411 | 0.0398 | 0.03201 | 0.024 | 0.02296 | 0.02034 | 0.01727 | 0.0137
∞ | 0.08303 | 0.04888 | 0.03474 | 0.02696 | 0.02204 | 0.01864 | 0.0161 | 0.01424 | 0.0127
Table 6. Values of the product $k\,d_{n,1/k}$ for $k=\overline{2,10}$, $n=\overline{1,20}$.
n | $2d_{n,1/2}$ | $3d_{n,1/3}$ | $4d_{n,1/4}$ | $5d_{n,1/5}$ | $6d_{n,1/6}$ | $7d_{n,1/7}$ | $8d_{n,1/8}$ | $9d_{n,1/9}$ | $10d_{n,1/10}$
1 | 0.21306 | 0.14959 | 0.11520 | 0.09365 | 0.07889 | 0.06814 | 0.05997 | 0.05355 | 0.0483
2 | 0.23575 | 0.20691 | 0.17612 | 0.1516 | 0.13252 | 0.11748 | 0.10540 | 0.09552 | 0.087
3 | 0.19626 | 0.21474 | 0.20196 | 0.18405 | 0.16696 | 0.1519 | 0.13893 | 0.12779 | 0.11
4 | 0.18701 | 0.19819 | 0.20589 | 0.19864 | 0.18698 | 0.17460 | 0.16279 | 0.15196 | 0.142
5 | 0.19959 | 0.17156 | 0.1968 | 0.20099 | 0.19632 | 0.18814 | 0.17882 | 0.16942 | 0.16
6 | 0.17954 | 0.16452 | 0.18060 | 0.19525 | 0.19788 | 0.19462 | 0.18857 | 0.18132 | 0.173
7 | 0.18856 | 0.17959 | 0.16116 | 0.18440 | 0.19392 | 0.19573 | 0.19332 | 0.18867 | 0.182
8 | 0.18714 | 0.17904 | 0.15570 | 0.17062 | 0.18617 | 0.19284 | 0.19416 | 0.19231 | 0.188
9 | 0.17677 | 0.16824 | 0.16883 | 0.15540 | 0.17594 | 0.18702 | 0.19195 | 0.1929 | 0.191
10 | 0.18630 | 0.16090 | 0.17308 | 0.15098 | 0.1642 | 0.17914 | 0.18743 | 0.19122 | 0.1920
11 | 0.17683 | 0.17102 | 0.17052 | 0.16223 | 0.15175 | 0.16988 | 0.18118 | 0.18760 | 0.1906
12 | 0.18241 | 0.17094 | 0.16306 | 0.16781 | 0.14803 | 0.15977 | 0.17370 | 0.18253 | 0.187
13 | 0.18049 | 0.16343 | 0.15234 | 0.16869 | 0.15779 | 0.14922 | 0.16537 | 0.1763 | 0.183
14 | 0.17746 | 0.16130 | 0.15887 | 0.16583 | 0.16362 | 0.14603 | 0.15651 | 0.16940 | 0.178
15 | 0.18111 | 0.16735 | 0.16392 | 0.16011 | 0.16605 | 0.15461 | 0.14736 | 0.16188 | 0.172
16 | 0.17235 | 0.16609 | 0.16396 | 0.15231 | 0.16562 | 0.16031 | 0.14456 | 0.15401 | 0.165
17 | 0.18005 | 0.15928 | 0.16003 | 0.15058 | 0.16284 | 0.16345 | 0.15221 | 0.14595 | 0.1529
18 | 0.17567 | 0.16195 | 0.15309 | 0.15702 | 0.15820 | 0.16436 | 0.15766 | 0.14346 | 0.1520
19 | 0.17804 | 0.16492 | 0.15517 | 0.16003 | 0.15213 | 0.16337 | 0.16111 | 0.15034 | 0.144
20 | 0.17726 | 0.16234 | 0.15947 | 0.16009 | 0.1450 | 0.16077 | 0.16279 | 0.1555 | 0.137
∞ | 0.166 | 0.14666 | 0.13898 | 0.13484 | 0.13225 | 0.13048 | 0.12919 | 0.12821 | 0.1274
Table 7. Values of the function $g(x,n,p)$ for $n=10$, $p=0.1$ and $x=\frac{j}{n}$, $j=\overline{2,9}$.
x: 0.2 | 0.3 | 0.4 | 0.5
nx: 2 | 3 | 4 | 5
$1-F_{n,p}(nx)$: 0.263 | 0.07 | 0.012 | 0.001
$1-\Pi_{\lambda_1}(nx)$: 0.223 | 0.044 | 0.004 | 0.0002
$\frac{1-F_{n,p}(nx)}{1-\Pi_{\lambda_1}(nx)}$: 1.181 | 1.585 | 2.633 | 5.871
$\frac{1}{A(x,n,p)}$: 0.935 | 0.747 | 0.486 | 0.238
$\frac{1-F_{n,p}(nx)}{(1-\Pi_{\lambda_1}(nx))A(x,n,p)}$: 1.104 | 1.185 | 1.28 | 1.400
$\frac{1-F_{n,p}(nx)}{(1-\Pi_{\lambda_1}(nx))A(x,n,p)}-1$: 0.104 | 0.185 | 0.28 | 0.4002
$\sqrt{nx^3}$: 0.282 | 0.519 | 0.8 | 1.118
$g(x,n,p)$: 0.3704 | 0.356 | 0.3503 | 0.358

x: 0.6 | 0.7 | 0.8 | 0.9
nx: 6 | 7 | 8 | 9
$1-F_{n,p}(nx)$: 0.00014 | $9.12\times10^{-6}$ | $3.7\times10^{-7}$ | $9.1\times10^{-9}$
$1-\Pi_{\lambda_1}(nx)$: $7.3\times10^{-6}$ | $6.78\times10^{-8}$ | $1.21\times10^{-10}$ | $6.4\times10^{-15}$
$\frac{1-F_{n,p}(nx)}{1-\Pi_{\lambda_1}(nx)}$: 20.05 | 134.5 | 3085.37 | $1.4\times10^{6}$
$\frac{1}{A(x,n,p)}$: 0.0777 | 0.0133 | 0.0007 | $2\times10^{-6}$
$\frac{1-F_{n,p}(nx)}{(1-\Pi_{\lambda_1}(nx))A(x,n,p)}$: 1.56 | 1.789 | 2.16 | 2.9
$\frac{1-F_{n,p}(nx)}{(1-\Pi_{\lambda_1}(nx))A(x,n,p)}-1$: 0.56 | 0.789 | 1.16 | 1.9
$\sqrt{nx^3}$: 1.469 | 1.852 | 2.262 | 2.7
$g(x,n,p)$: 0.38 | 0.426 | 0.51 | 0.718
Table 8. Values of the remainders $r_2(x)=V(x,n,p)-1$ (see (31)) and $r_1(x)=\Omega_2(x,n,p)V(x,n,p)-1$ (Theorems 1 and 2) for $n=10$, $p=0.1$ and $x=\frac{j}{10}$, $j=\overline{2,9}$.
x: 0.2 | 0.3 | 0.4 | 0.5
nx: 2 | 3 | 4 | 5
$r_2(x)=V(x,n,p)-1$: 0.05807 | 0.11207 | 0.18822 | 0.29078
$r_1(x)=\Omega_2(x,n,p)V(x,n,p)-1$: 0.10478 | 0.185 | 0.28029 | 0.4002
Table 9. Values of the functions from (81) for $n=10$, $p=0.1$.
x: 0.2 | 0.3 | 0.4 | 0.5
nx: 2 | 3 | 4 | 5
$1-F_{n,p}(nx)$: 0.2639 | 0.07019 | 0.01279 | 0.0016
$1-\Pi_{np}(nx)$: 0.2642 | 0.0803 | 0.01898 | 0.0036
$\frac{1-F_{n,p}(nx)}{1-\Pi_{np}(nx)}$: 0.9987 | 0.87409 | 0.6738 | 0.4467
$\frac{x-p}{q}$: 0.1111 | 0.2222 | 0.3333 | 0.4444
$\Lambda_3\left(\frac{x-p}{q}\right)$: 0.0064 | 0.0267 | 0.063 | 0.1178
$nq\Lambda_3\left(\frac{x-p}{q}\right)$: 0.0577 | 0.24079 | 0.5672 | 1.061
$e^{nq\Lambda_3\left(\frac{x-p}{q}\right)}$: 1.0594 | 1.2722 | 1.7633 | 2.8894
$V(x,n,p)$: 1.058 | 1.112 | 1.188 | 1.29
$V(x,n,p)-1$: 0.058 | 0.112 | 0.188 | 0.29
$\frac{V(x,n,p)-1}{\sqrt{nx^3}}$: 0.2053 | 0.2156 | 0.2352 | 0.26
$\Omega_2(x,n,p)$: 1.0441 | 1.0655 | 1.0774 | 1.0848
$\Omega_2(x,n,p)V(x,n,p)$: 1.1047 | 1.185 | 1.2802 | 1.4002
$\Omega_2(x,n,p)V(x,n,p)-1$: 0.1047 | 0.185 | 0.2802 | 0.4002
$\frac{\Omega_2(x,n,p)V(x,n,p)-1}{\sqrt{nx^3}}$: 0.37 | 0.356 | 0.350 | 0.358

x: 0.6 | 0.7 | 0.8 | 0.9
nx: 6 | 7 | 8 | 9
$1-F_{n,p}(nx)$: 0.0001469 | $9.12\times10^{-6}$ | $3.7\times10^{-7}$ | $9.1\times10^{-9}$
$1-\Pi_{np}(nx)$: 0.000594 | $8\times10^{-5}$ | $10^{-5}$ | $1.12\times10^{-6}$
$\frac{1-F_{n,p}(nx)}{1-\Pi_{np}(nx)}$: 0.24723 | 0.10958 | 0.03645 | 0.00808
$\frac{x-p}{q}$: 0.5555 | 0.6666 | 0.7777 | 0.8888
$\Lambda_3\left(\frac{x-p}{q}\right)$: 0.1951 | 0.3004 | 0.4435 | 0.6447
$nq\Lambda_3\left(\frac{x-p}{q}\right)$: 1.7562 | 2.7041 | 3.9918 | 5.8027
$e^{nq\Lambda_3\left(\frac{x-p}{q}\right)}$: 5.7908 | 14.9418 | 54.1547 | 331.218
$V(x,n,p)$: 1.43169 | 1.63733 | 1.97403 | 2.6787
$V(x,n,p)-1$: 0.43169 | 0.63733 | 0.97403 | 1.6787
$\frac{V(x,n,p)-1}{\sqrt{nx^3}}$: 0.2937 | 0.3441 | 0.4304 | 0.6217
$\Omega_2(x,n,p)$: 1.0896 | 1.0931 | 1.0956 | 1.0971
$\Omega_2(x,n,p)V(x,n,p)$: 1.5601 | 1.7898 | 2.1628 | 2.9388
$\Omega_2(x,n,p)V(x,n,p)-1$: 0.5601 | 0.7898 | 1.1628 | 1.9388
$\frac{\Omega_2(x,n,p)V(x,n,p)-1}{\sqrt{nx^3}}$: 0.381 | 0.426 | 0.513 | 0.718

