On the Accuracy of the Exponential Approximation to Random Sums of Alternating Random Variables

Abstract: Using the generalized stationary renewal distribution (also called the equilibrium transform) for arbitrary distributions with a finite non-zero first moment, we prove moment-type error bounds in the Kantorovich distance for the exponential approximation to random sums of possibly dependent random variables with positive finite expectations, in particular, to geometric random sums, generalizing previous results to alternating and dependent random summands. We also extend the notions of new better than used in expectation (NBUE) and new worse than used in expectation (NWUE) distributions to alternating random variables in terms of the corresponding distribution functions and provide a criterion in terms of conditional expectations similar to the classical one. As a corollary, we provide simplified error bounds in the case of NBUE/NWUE conditional distributions of random summands.


Introduction
According to the generalized Rényi theorem, a geometric random sum of independent identically distributed (i.i.d.) nonnegative random variables (r.v.'s), normalized by its mean, converges in distribution to the exponential law when the expectation of the geometric number of summands tends to infinity. Some numerical bounds for the exponential approximation to geometric random sums, as well as their various applications, can be found in the classical monograph of Kalashnikov [1]. Peköz and Röllin [2] developed Stein's method for the exponential distribution and obtained moment-type estimates for the exponential approximation to geometric and non-geometric random sums with non-negative summands, complementing Kalashnikov's bounds in the Kantorovich distance. Their method was substantially based on the equilibrium transform (stationary renewal distribution) of non-negative random variables, hence yielding the technical restriction on the support of the random summands under consideration. Moreover, Peköz and Röllin considered dependent random summands with constant conditional expectations and presented some error bounds in this case. The present authors [3] extended Stein's method to alternating (i.e., taking values of both signs) random summands by generalizing the equilibrium transform to distributions with arbitrary support, and obtained moment-type estimates of the accuracy of the exponential approximation for geometric and non-geometric random sums of independent alternating random variables. The same paper [3] contains a detailed overview of the estimates of the exponential approximation to geometric random sums.
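Although the subject of the present work is the rate of this convergence, the limit theorem itself is easy to observe numerically. The following Monte Carlo sketch is ours, not from the paper (the summand law U[0, 1], the parameter p = 0.01, and all names are our illustrative choices): a geometric random sum of i.i.d. uniform summands, normalized by its mean, is compared with the Exp(1) law.

```python
import math
import random

def normalized_geometric_sum(p: float, rng: random.Random) -> float:
    """Draw S_N / E[S_N] for N ~ Geom(p) on {1, 2, ...} and i.i.d. X_i ~ U[0, 1]."""
    n = 1
    while rng.random() >= p:   # P(N = n) = (1 - p)^(n - 1) * p
        n += 1
    s = sum(rng.random() for _ in range(n))
    return s / (0.5 / p)       # E[S_N] = E[N] * E[X_1] = (1/p) * (1/2)

rng = random.Random(2024)
p = 0.01                       # E[N] = 100, so the exponential limit is nearly reached
sample = [normalized_geometric_sum(p, rng) for _ in range(20000)]

mean = sum(sample) / len(sample)                          # Exp(1) has mean 1
ecdf_at_1 = sum(s <= 1.0 for s in sample) / len(sample)   # Exp(1) d.f. at 1 is 1 - e^{-1}
print(mean, ecdf_at_1, 1 - math.exp(-1))
```

With p = 0.01, both the sample mean and the empirical d.f. at the point 1 agree with the Exp(1) values to within a few standard errors; the bounds discussed below quantify how fast this agreement improves as p decreases.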
The aim of the present work is to extend the results of [3] to dependent random summands with constant conditional expectations, also generalizing the results of [2] to alternating random summands.
Recall that the Kantorovich distance ζ_1 between probability distributions of r.v.'s X and Y with distribution functions (d.f.'s) F and G is defined as a simple probability metric with ζ-structure (see [1,4]) as

$$ \zeta_1(X, Y) := \sup\big\{\, |Eh(X) - Eh(Y)| : h \in \mathrm{Lip}_1^\infty \,\big\}, \tag{1} $$

where Lip_c := {h : ℝ → ℝ | |h(x) − h(y)| ≤ c|x − y| for all x, y ∈ ℝ} and Lip_c^∞ := {h ∈ Lip_c | h is bounded}, c > 0. If both X and Y are integrable, then ζ_1(X, Y) < ∞ and the supremum in (1) can be taken over the wider class Lip_1 of Lipschitz functions. In this case, according to the Kantorovich–Rubinshtein theorem, ζ_1 allows several alternative representations:

$$ \zeta_1(X, Y) = \sup_{h \in \mathrm{Lip}_1} |Eh(X) - Eh(Y)| = \int_0^1 \big|F^{-1}(t) - G^{-1}(t)\big|\,dt = \int_{\mathbb{R}} |F(x) - G(x)|\,dx, \tag{2} $$

where F^{−1} and G^{−1} are the generalized inverse functions of F and G, respectively. We will use the generalized equilibrium transform that was introduced and studied in [3]. Given a probability distribution of a r.v. X with d.f. F and finite a := EX ≠ 0, its equilibrium transform is defined as a (signed) measure L^e(X) on (ℝ, 𝔅) with the d.f.

$$ F^e(x) := \frac{1}{a} \int_{-\infty}^{x} \big( \mathbb{1}(t \ge 0) - F(t) \big)\,dt, \quad x \in \mathbb{R}. \tag{3} $$
Observe that L^e(X) is absolutely continuous (a.c.) with respect to (w.r.t.) the Lebesgue measure with the density

$$ p^e(x) = \frac{\mathbb{1}(x \ge 0) - F(x)}{a}, \quad x \in \mathbb{R}. \tag{4} $$

The characteristic function (ch.f.) of L^e(X) can be expressed in terms of the original ch.f. f of the r.v. X as

$$ f^e(t) = \frac{f(t) - 1}{iat}, \quad t \ne 0, \qquad f^e(0) = 1. \tag{5} $$

If X is nonnegative or nonpositive almost surely (a.s.), then L^e(X) is a probability distribution and it is possible to construct a r.v. X^e ∼ L^e(X).
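For a concrete feel of the signed case, here is a small numerical sketch of ours, assuming the standard form p_e(x) = (1{x ≥ 0} − F(x))/EX of the equilibrium density: for X uniform on [−1, 2], an alternating r.v. with EX = 1/2, the signed density integrates to one, and its first moment equals EX²/(2EX), in accordance with the moment properties of the transform.

```python
# Equilibrium transform of X ~ U[-1, 2]: an alternating r.v. with EX = 1/2.
# Assumed density of the (signed) transform: p_e(x) = (1{x >= 0} - F(x)) / EX.
a, b = -1.0, 2.0
EX = (a + b) / 2.0                       # = 0.5
EX2 = (b**3 - a**3) / (3.0 * (b - a))    # = 1.0, second moment of U[-1, 2]

def F(x: float) -> float:                # d.f. of U[-1, 2]
    return min(1.0, max(0.0, (x - a) / (b - a)))

def p_e(x: float) -> float:              # signed equilibrium density
    return ((1.0 if x >= 0.0 else 0.0) - F(x)) / EX

# Trapezoidal quadrature on [a, b]; p_e vanishes outside this interval.
n = 30000
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]
w = [0.5 if i in (0, n) else 1.0 for i in range(n + 1)]
mass = h * sum(wi * p_e(x) for wi, x in zip(w, xs))        # total (signed) mass
mom1 = h * sum(wi * x * p_e(x) for wi, x in zip(w, xs))    # first moment
print(mass, mom1, EX2 / (2.0 * EX))
```

Note that the density is negative on (−1, 0) and exceeds one on part of the positive half-line, yet the total mass is still one, which is exactly what makes L^e(X) a signed measure rather than a probability distribution here.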
Below, we list some other properties of the equilibrium transform (see ([3], Theorem 1) for details and proofs) which will be used in the present work.

Homogeneity. For any r.v. X with finite EX ≠ 0 and d.f. F_X, and any c > 0,

$$ F^e_{cX}(x) = F^e_X(x/c), \quad x \in \mathbb{R}. \tag{6} $$

Moments. If E|X|^{r+1} < ∞ for some r > 0, then for all k ∈ ℕ ∩ [1, r] we have

$$ \int_{\mathbb{R}} x^k\,dF^e(x) = \frac{EX^{k+1}}{(k+1)\,EX}, \qquad \int_{\mathbb{R}} |x|^r\,dF^e(x) = \frac{EX|X|^r}{(r+1)\,EX}. \tag{7} $$
We will also use the following inequality from ([3], Theorem 3), which states that the Kantorovich distance to the exponential law is at most twice the distance to the equilibrium transform.

Lemma 1. Let X be a square-integrable r.v. with EX = 1 and E ∼ Exp(1). Then

$$ \zeta_1(X, E) \le 2\,\zeta_1\big(\mathcal{L}(X), \mathcal{L}^e(X)\big), \tag{8} $$

where L^e(X) is the equilibrium transform of L(X).
The r.-h.s. of (8), in turn, can be bounded from above in terms of the second moment EX^2 in the following way.
Lemma 2 (see Theorem 2 and Remark 2 in [3]). For any square-integrable r.v. X with EX ≠ 0,

$$ \zeta_1\big(\mathcal{L}(X), \mathcal{L}^e(X)\big) \le \frac{EX^2}{|EX|}. \tag{9} $$

Note the presence of the Kantorovich distance between L(X) and the possibly signed measure L^e(X) on the r.-h.s.'s of (8) and (9), which requires some extra explanation. As described in [3], it is defined in terms of d.f.'s in the same way as for probability measures in (1) and allows an alternative representation as the area between the d.f.'s of its arguments (similar to the last expression in (2)). Moreover, this generalization retains the property of homogeneity of order 1 (see ([3], Lemma 1)). Namely, if F and G are d.f.'s of (signed) Borel measures on ℝ with F(+∞) = G(+∞) and F_c(x) := F(x/c), G_c(x) := G(x/c) for some c > 0, then

$$ \zeta_1(F_c, G_c) = c\,\zeta_1(F, G). \tag{10} $$

Using the above notation and techniques, we prove moment-type error bounds in the Kantorovich distance for the exponential approximation to random sums of possibly dependent r.v.'s with positive finite expectations (Theorem 1), which generalize the results of [2] to alternating random summands and the results of [3] to dependent random summands.
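The area representation is also convenient computationally. A small sketch of ours: for two exponential laws with rates 1 and 2, which are stochastically ordered, the area between the d.f.'s must equal the difference of the means, 1 − 1/2 = 1/2.

```python
import math

# zeta_1 between Exp(1) and Exp(2) computed as the area between their d.f.'s.
F = lambda x: 1.0 - math.exp(-x)         # d.f. of Exp(1), mean 1
G = lambda x: 1.0 - math.exp(-2.0 * x)   # d.f. of Exp(2), mean 1/2

T, n = 40.0, 100000                      # the tail mass beyond T is ~ e^{-40}, negligible
h = T / n
area = h * (sum(abs(F(i * h) - G(i * h)) for i in range(1, n))
            + 0.5 * (abs(F(0.0) - G(0.0)) + abs(F(T) - G(T))))
print(area)   # close to |EX - EY| = 1/2, since the two laws are stochastically ordered
```

For stochastically ordered arguments the integrand |F − G| keeps a constant sign, so the area collapses to the absolute difference of the means; for laws whose d.f.'s cross, the quadrature above still computes ζ_1, but no such closed form is available.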
Moreover, we extend the definitions of new better than used in expectation (NBUE) and new worse than used in expectation (NWUE) distributions to alternating random variables in terms of the corresponding d.f.'s and provide a criterion in terms of conditional expectations similar to the classical one (Theorem 2). Finally, we provide simplified error bounds in the cases of NBUE/NWUE conditional distributions of random summands, generalizing those obtained in [2].

Main Results
Lemma 3. Let X_1, X_2, … be a sequence of random variables such that for every n ≥ 2 there exists a regular conditional probability L(X_n | X_1, …, X_{n−1}) with the constant conditional expectation a_n := E(X_n | X_1, …, X_{n−1}) ∈ (0, +∞). Let S_n := ∑_{i=1}^{n} X_i for n ∈ ℕ, S_0 := 0, and let N be an ℕ_0-valued r.v., independent of {X_1, X_2, …}, with A := ES_N = ∑_{n=1}^{∞} a_n P(N ≥ n) < +∞. Then

$$ f^e_{S_N}(t) = E\, e^{itS_{M-1}} f^e_M(t \mid X_1, \dots, X_{M-1}), \quad t \in \mathbb{R}, \tag{11} $$

where M is an ℕ-valued r.v., independent of {X_1, X_2, …}, with

$$ P(M = m) = \frac{a_m P(N \ge m)}{A}, \quad m \in \mathbb{N}. $$

Or, in terms of (conditional) distribution functions,

$$ F^e_{S_N}(x) = E\, F^e_M(x - S_{M-1} \mid X_1, \dots, X_{M-1}), \quad x \in \mathbb{R}. \tag{12} $$

Here, and in what follows, we assume that f^e_m(t | X_1, …, X_{m−1}) and F^e_m(x | x_1, …, x_{m−1}) for m = 1 denote the unconditional ch.f. and d.f. of L^e(X_1). A similar notation will be used for other characteristics of distributions.

Remark 1. If X_1, X_2, … are independent, then (11) and (12) reduce to the single summand property of the equilibrium transform (see ([3], Equation (30))).
Proof. According to ([3], Lemma 2), for every t ∈ ℝ and n ∈ ℕ we have the corresponding representation of the ch.f. f^e_{S_n}, in which ∏_{j=1}^{0} ⋯ ≡ 1. By applying (5) twice, we obtain (11) for t ≠ 0; for t = 0 both sides of (11) are equal to one.

Theorem 1. Let X_1, X_2, … be a sequence of random variables such that for every n ≥ 2 there exists a regular conditional probability L(X_n | X_1, …, X_{n−1}) with the constant conditional expectation a_n := E(X_n | X_1, …, X_{n−1}) ∈ (0, +∞). Let S_n := ∑_{i=1}^{n} X_i for n ∈ ℕ, S_0 := 0, and let N be an ℕ_0-valued r.v., independent of {X_1, X_2, …}, with

$$ A := ES_N = \sum_{n=1}^{\infty} a_n P(N \ge n) < +\infty. $$

Then, for any joint distribution of N and M, we have the bound (13), in which the first term vanishes in the case N =_d M and the second term is denoted by D.

Proof. By Lemma 1 and the homogeneity of both the Kantorovich distance and the equilibrium transform (see (6) and (10)), we have (14). Let us bound ζ_1(L(S_N), L^e(S_N)) from above. For a given joint distribution L(N, M), let p_{nm} := P(N = n, M = m), n ∈ ℕ_0, m ∈ ℕ. Denoting S_{j,k} := ∑_{i=j}^{k} X_i for j ≤ k, writing F_m(x | x_1, …, x_{m−1}) and F^e_m(x | x_1, …, x_{m−1}) as short forms of the conditional d.f.'s of L(X_m | X_1 = x_1, …, X_{m−1} = x_{m−1}) and L^e(X_m | X_1 = x_1, …, X_{m−1} = x_{m−1}), respectively, m ∈ ℕ, and using Lemma 3 together with the representation of the Kantorovich distance between (signed) measures as the area between their distribution functions, we obtain a decomposition of ζ_1(L(S_N), L^e(S_N)) over the pairs (n, m) with the weights p_{nm}. For the summands with n < m, by Tonelli's theorem we obtain an iterated-integral representation in which F(x_{n+1}, …, x_{m−1} | x_1, …, x_n) stands for the conditional joint d.f. of (X_{n+1}, …, X_{m−1}) given that X_1 = x_1, …, X_n = x_n; adding and subtracting under the modulus sign and using the triangle inequality then yields a bound involving the Dirac measure δ_0 concentrated at zero. For the case n ≥ m, by Tonelli's theorem we obtain a similar representation in which F_{S_{m,n}}(x | x_1, …, x_{m−1}) stands for the conditional d.f. F_{S_{m,n}}(x | X_1 = x_1, …, X_{m−1} = x_{m−1}); adding and subtracting F_m(x | x_1, …, x_{m−1}) in the integrand under the modulus sign and using the triangle inequality once more yields the corresponding bound. Combining the cases n < m and n ≥ m and using the fact that ζ_1(δ_0, L(X)) = E|X|, we get (15), whose first sum admits a further upper bound. Substituting the latter bound into (14) yields (13).

If N =_d M, then we may take the comonotonic pair (N, N) as (N, M), which eliminates the first term on the r.-h.s. of (15). Moreover, if N and M are stochastically ordered (that is, F_N(x) ≤ F_M(x) for all x ∈ ℝ, or vice versa), then, by (2), ζ_1(N, M) = |EN − EM|. If, in addition, all EX_n = a, then the distribution of M is determined by that of N alone, and the first term on the r.-h.s. of (13) can be bounded from above accordingly. Hence, we arrive at the following.

Corollary 1.
Let, in addition to the conditions of Theorem 1, EX n = a for all n ∈ N and the r.v.'s N and M be stochastically ordered with finite expectations. Then

Remark 4.
If N ∼ Geom(p), p ∈ (0, 1), that is, P(N = n) = (1 − p)^{n−1} p for n ∈ ℕ, then the distribution of M takes an explicit form. In this case, for every h ∈ Lip_1 with E|h(M)| < ∞, the difference Eh(N) − Eh(M) can be estimated and, by the Cauchy–Bunyakovsky–Schwarz inequality, the first term on the r.-h.s. of (13) can be bounded from above in terms of Var a_N. This means that, if sup_n E|X_n| < ∞ and inf_n a_n > 0, the first term on the r.-h.s. of (13) is, at most, of order O(√(Var a_N)) as p → 0+. If N ∼ Geom(p), p ∈ (0, 1), and EX_n = a for all n ∈ ℕ, then M ∼ Geom(p), and thus, ζ_1(N, M) = 0. Therefore, if sup_n E|X_n| < ∞, then the first term on the r.-h.s. of (13) vanishes.
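The claim that M inherits the geometric law when all conditional means coincide can be checked directly. A short numerical sketch of ours, assuming the size-bias form P(M = m) = a_m P(N ≥ m)/ES_N of the index distribution (the values p = 0.2, a = 3 and the truncation level are our illustrative choices):

```python
# P(M = m) = a_m * P(N >= m) / A with A = sum_m a_m * P(N >= m) = E[S_N].
# For N ~ Geom(p) on {1, 2, ...} and constant a_m = a, M has the same law as N:
# the size-biasing is invisible when all conditional means coincide.
p, a = 0.2, 3.0
K = 500                                  # truncation level; (1 - p)^K is ~ 1e-49 here

tail_N = [(1.0 - p) ** (m - 1) for m in range(1, K + 1)]       # P(N >= m)
A = a * sum(tail_N)                                            # = a * E[N], truncated
p_M = [a * t / A for t in tail_N]                              # P(M = m)
p_N = [p * (1.0 - p) ** (m - 1) for m in range(1, K + 1)]      # P(N = m)

total = sum(p_M)                                               # should be ~ 1
max_diff = max(abs(x - y) for x, y in zip(p_M, p_N))           # should be ~ 0
print(total, max_diff)
```

Replacing the constant sequence a_m = a by a non-constant one makes p_M and p_N differ, which is precisely the discrepancy that the ζ_1(N, M)-type term in the bounds controls.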
If N + 1 ∼ Geom(p), p ∈ (0, 1), and EX_n = a for all n ∈ ℕ, then M ∼ Geom(p) as well, and thus, ζ_1(N, M) = |EM − EN| = 1.

Next, let us simplify the second term D in (13).

Corollary 2.
Let, in addition to the conditions of Theorem 1, b_n := EX_n^2 < ∞ for every n ∈ ℕ and the r.v. M be independent of {X_1, X_2, …}. Then

$$ D \le \frac{1}{A} \sum_{m=1}^{\infty} b_m P(N \ge m). $$

Proof. By Lemma 2, applied conditionally, we have

$$ \zeta_1\big(\mathcal{L}(X_m \mid X_1, \dots, X_{m-1}),\ \mathcal{L}^e(X_m \mid X_1, \dots, X_{m-1})\big) \le \frac{E(X_m^2 \mid X_1, \dots, X_{m-1})}{a_m}, $$

and taking expectations with the weights P(M = m) = a_m P(N ≥ m)/A proves the statement of the corollary.
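Before turning to NBUE/NWUE distributions, here is a simulation sketch of ours (not from the paper) of the phenomenon that the above bounds control: a geometric random sum of alternating summands, here i.i.d. U[−1, 2] with mean 1/2, normalized by its mean, is close to the Exp(1) law even though the individual summands take values of both signs. The parameter choices are ours.

```python
import random

def normalized_alternating_sum(p: float, rng: random.Random) -> float:
    """S_N / E[S_N] for N ~ Geom(p) and i.i.d. alternating X_i ~ U[-1, 2], EX = 1/2."""
    n = 1
    while rng.random() >= p:    # P(N = n) = (1 - p)^(n - 1) * p
        n += 1
    s = sum(rng.uniform(-1.0, 2.0) for _ in range(n))
    return s / (0.5 / p)        # E[S_N] = (1/p) * (1/2)

rng = random.Random(7)
p = 0.01
sample = [normalized_alternating_sum(p, rng) for _ in range(20000)]

mean = sum(sample) / len(sample)               # Exp(1) has mean 1
m2 = sum(s * s for s in sample) / len(sample)  # Exp(1) has second moment 2
print(mean, m2)
```

The empirical first and second moments match those of Exp(1) up to sampling noise and an approximation error that vanishes as p → 0; this is the alternating-summand setting that the support restriction of the classical equilibrium transform used to exclude.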
Recall that a nonnegative r.v. X with finite EX > 0 is said to be new better than used in expectation (NBUE) if EX ≥ E(X − t | X > t) for all t ≥ 0 with P(X > t) > 0, and new worse than used in expectation (NWUE) if EX ≤ E(X − t | X > t) for all such t. Using Tonelli's theorem, it can be ascertained that X is NBUE if and only if X stochastically dominates its equilibrium transform X^e, that is, F(x) ≤ F^e(x) for all x ≥ 0. Similarly, X is NWUE if and only if X^e stochastically dominates X. We will show that the same results hold true if we extend both the NBUE and NWUE notions to the case of r.v.'s without support constraints.

Definition 1. We say that a (possibly alternating) r.v. X with d.f. F and EX ∈ (0, +∞) is NBUE if F(x) ≤ F^e(x) for all x ∈ ℝ, where F^e is the d.f. of the equilibrium transform of F. Similarly, we say that the r.v. X with d.f. F and EX ∈ (0, +∞) is NWUE if F(x) ≥ F^e(x) for all x ∈ ℝ.
Let X be NBUE, i.e., F(x) ≤ F^e(x) for all x ∈ ℝ. This implies that 1 − F(t + 0) ≥ 1 − F^e(t), due to the absolute continuity of F^e, and hence, taking (19) into account, we obtain (16).

Conversely, let (17) hold true. For t ≤ 0 we have F(t) ≥ 0 ≥ F^e(t), since L^e(X) has a nonpositive density on the negative half-line. Finally, (17) and (19) yield F(t) ≥ F^e(t) for positive t.
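Definition 1 is straightforward to check numerically. A sketch of ours for the classical example X ∼ U[0, 1], which is NBUE: on a grid, F never exceeds the equilibrium d.f., whose closed form F^e(x) = 2x − x² on [0, 1] follows from the nonnegative-case formula with EX = 1/2.

```python
# NBUE check per Definition 1: X ~ U[0, 1] satisfies F(x) <= F^e(x) for all x.
EX = 0.5

def F(x: float) -> float:        # d.f. of U[0, 1]
    return min(1.0, max(0.0, x))

def F_e(x: float) -> float:      # equilibrium d.f.: (1/EX) * int_0^x (1 - F(t)) dt
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return (x - x * x / 2.0) / EX   # = 2x - x^2 on (0, 1)

grid = [i / 1000.0 for i in range(-200, 1501)]
max_violation = max(F(x) - F_e(x) for x in grid)   # NBUE iff this is <= 0
print(max_violation)
```

The same check with the inequality reversed detects NWUE laws; for the exponential distribution, F and F^e coincide, which is the boundary case of both classes.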
If X is NBUE or NWUE with EX > 0 and EX^2 < ∞, then

$$ \zeta_1\big(\mathcal{L}(X), \mathcal{L}^e(X)\big) = \left| \int_{\mathbb{R}} \big(F^e(x) - F(x)\big)\,dx \right| = \left| \frac{EX^2}{2\,EX} - EX \right|, $$

where the first equality holds because F − F^e does not change sign, and the last equality follows from (7). Hence, if for all m ∈ ℕ and x_1, x_2, … ∈ ℝ the conditional distribution L(X_m | x_1, …, x_{m−1}) is NBUE or NWUE, then the second term on the r.-h.s. of (13) takes the form