A Two-Moment Inequality with Applications to Rényi Entropy and Mutual Information

This paper explores some applications of a two-moment inequality for the integral of the rth power of a function, where 0<r<1. The first contribution is an upper bound on the Rényi entropy of a random vector in terms of two different moments. When one of the moments is the zeroth moment, these bounds recover previous results based on maximum entropy distributions under a single moment constraint. More generally, evaluation of the bound with two carefully chosen nonzero moments can lead to significant improvements with a modest increase in complexity. The second contribution is a method for upper bounding mutual information in terms of certain integrals involving the variance of the conditional density. The bounds have a number of useful properties arising from the connection with variance decompositions.


Introduction
The interplay between inequalities and information theory has a rich history, with notable examples including the relationship between the Brunn-Minkowski inequality and the entropy power inequality as well as the matrix determinant inequalities obtained from differential entropy [1]. In this paper, the focus is on a "two-moment" inequality that provides an upper bound on the integral of the rth power of a function. Specifically, if f is a nonnegative function defined on R^n and p, q, r are real numbers satisfying 0 < r < 1 and p < 1/r − 1 < q, then

\[
\left( \int f(x)^r \, dx \right)^{\frac{1}{r}} \le C_{n,p,q,r} \left( \int \|x\|^{np} f(x)\, dx \right)^{\frac{qr+r-1}{(q-p)r}} \left( \int \|x\|^{nq} f(x)\, dx \right)^{\frac{1-r-pr}{(q-p)r}}, \tag{1}
\]

where the best possible constant C_{n,p,q,r} is given exactly; see Propositions 2 and 3 ahead. The one-dimensional version of this inequality is a special case of the classical Carlson-Levin inequality [2-4], and the multidimensional version is a special case of a result presented by Barza et al. [5]. The particular formulation of the inequality used in this paper was derived independently in [6], where the proof follows from a direct application of Hölder's inequality and Jensen's inequality.
In the context of information theory and statistics, a useful property of the two-moment inequality is that it provides a bound on a nonlinear functional, namely the r-quasi-norm · r , in terms of integrals that are linear in f . Consequently, this inequality is well suited to settings where f is a mixture of simple functions whose moments can be evaluated. We note that this reliance on moments to bound a nonlinear functional is closely related to bounds obtained from variational characterizations such as the Donsker-Varadhan representation of Kullback divergence [7] and its generalizations to Rényi divergence [8,9].
The first application considered in this paper concerns the relationship between the entropy of a probability measure and its moments. This relationship is fundamental to the principle of maximum entropy, which originated in statistical physics and has since been applied to statistical inference problems [10]. It also plays a prominent role in information theory and estimation theory, where the Gaussian distribution maximizes differential entropy under second moment constraints ([11], Theorem 8.6.5). Moment-entropy inequalities for Rényi entropy were studied in a series of works by Lutwak et al. [12-14], as well as related works by Costa et al. [15,16] and Johnson and Vignat [17], in which it is shown that, under a single moment constraint, Rényi entropy is maximized by a family of generalized Gaussian distributions. The connection between these moment-entropy inequalities and the Carlson-Levin inequality was noted recently by Nguyen [18].
In this direction, one of the contributions of this paper is a new family of moment-entropy inequalities. This family of inequalities follows from applying Inequality (1) in the setting where f is a probability density function, in which case there is a one-to-one correspondence between the integral of the rth power and the Rényi entropy of order r. In the special case where one of the moments is the zeroth moment, this approach recovers the moment-entropy inequalities given in previous work. More generally, the additional flexibility provided by considering two different moments can lead to stronger results. For example, in Proposition 6, it is shown that if f is the standard Gaussian density function defined on R^n, then the difference between the Rényi entropy and the upper bound given by the two-moment inequality (equivalently, the ratio between the left- and right-hand sides of (1)) is bounded uniformly with respect to n under the specification of the moments given later in Equation (21). Conversely, if one of the moments is restricted to be equal to zero, as is the case in the usual moment-entropy inequalities, then the difference between the Rényi entropy and the upper bound diverges with n.
The second application considered in this paper is the problem of bounding mutual information. In conjunction with Fano's inequality and its extensions, bounds on mutual information play a prominent role in establishing minimax rates of statistical estimation [19] as well as the information-theoretic limits of detection in high-dimensional settings [20]. In many cases, one of the technical challenges is to provide conditions under which the dependence between the observations and an underlying signal or model parameters converges to zero in the limit of high dimension. This paper introduces a new method for bounding mutual information, which can be described as follows. Let P_{X,Y} be a probability measure on X × Y such that P_{Y|X=x} and P_Y have densities f(y | x) and f(y) with respect to the Lebesgue measure on R^n. We begin by showing that the mutual information between X and Y satisfies an upper bound in terms of the variance of the conditional density,

\[
\mathrm{Var}(f(y \mid X)) = \int \left( f(y \mid x) - f(y) \right)^2 dP_X(x);
\]

see Proposition 8 ahead. In view of this bound, an application of the two-moment Inequality (1) with r = 1/2 leads to an upper bound in terms of the moments of the variance of the density,

\[
\int \|y\|^{ns} \, \mathrm{Var}(f(y \mid X)) \, dy, \tag{4}
\]

evaluated at s ∈ {p, q} with p < 1 < q. A useful property of this bound is that the integrated variance is quadratic in P_X, and thus Expression (4) can be evaluated by swapping the integration over y with the expectation over two independent copies of X. For example, when P_{X,Y} is a Gaussian scale mixture, this approach provides closed-form upper bounds in terms of the moments of the Gaussian density. An early version of this technique was used to prove Gaussian approximations for random projections [21] arising in the analysis of a random linear estimation problem appearing in wireless communications and compressed sensing [22,23].

Moment Inequalities
Let L_p(S) be the space of Lebesgue measurable functions from S to R whose pth power is absolutely integrable and, for p ≠ 0, define

\[
\|f\|_p = \left( \int_S |f(x)|^p \, dx \right)^{1/p}.
\]

Recall that ‖·‖_p is a norm for p ≥ 1 but only a quasi-norm for 0 < p < 1 because it does not satisfy the triangle inequality. The sth moment of f is defined as

\[
\mu_s(f) = \int \|x\|^s f(x) \, dx,
\]

where ‖·‖ denotes the standard Euclidean norm on vectors.
The two-moment Inequality (1) can be derived straightforwardly using the following argument. For r ∈ (0, 1), the mapping f ↦ ‖f‖_r is concave on the subset of nonnegative functions and admits the variational representation

\[
\|f\|_r = \min_{g > 0} \frac{\int f(x)\, g(x)\, dx}{\|g\|_{r^*}}, \tag{5}
\]

where r* = r/(r − 1) ∈ (−∞, 0) is the Hölder conjugate of r. Consequently, each g ∈ L_{r*} leads to an upper bound on ‖f‖_r. For example, if f has bounded support S, choosing g to be the indicator function of S leads to the basic inequality ‖f‖_r ≤ (Vol(S))^{(1−r)/r} ‖f‖_1. The upper bound on ‖f‖_r given in Inequality (1) can be obtained by restricting the minimum in Expression (5) to the parametric class of functions of the form g(x) = ν_1 ‖x‖^{np} + ν_2 ‖x‖^{nq} with ν_1, ν_2 > 0 and then optimizing over the parameters (ν_1, ν_2). Here, the constraints on p, q are necessary and sufficient to ensure that g ∈ L_{r*}(R^n).
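As a quick numerical sanity check (not part of the paper), the following sketch verifies the indicator-function bound ‖f‖_r ≤ (Vol(S))^{(1−r)/r} ‖f‖_1 on a grid, for an arbitrarily chosen nonnegative f with support S = [0, 2] and r = 1/2.

```python
import numpy as np

# Discretize the support S = [0, 2]
x = np.linspace(0.0, 2.0, 100001)
dx = x[1] - x[0]
f = x ** 2 * np.exp(-x)     # arbitrary nonnegative function supported on S
r = 0.5

norm_r = (np.sum(f ** r) * dx) ** (1.0 / r)   # ||f||_r via Riemann sum
norm_1 = np.sum(f) * dx                       # ||f||_1
vol_S = 2.0
bound = vol_S ** ((1.0 - r) / r) * norm_1     # Vol(S)^((1-r)/r) * ||f||_1
```

Here the bound evaluates to roughly 1.29 while ‖f‖_{1/2} is roughly 1.12, consistent with the inequality.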
In the following sections, we provide a more detailed derivation, starting with the problem of maximizing f r under multiple moment constraints and then specializing to the case of two moments. For a detailed account of the history of the Carlson type inequalities as well as some further extensions, see [4].

Multiple Moments
Consider the problem of maximizing ‖f‖_r^r over nonnegative functions f subject to the moment constraints μ_{s_i}(f) ≤ m_i for i = 1, …, k. For r ∈ (0, 1), this is a convex optimization problem because ‖·‖_r^r is concave and the moment constraints are linear. By standard theory in convex optimization (e.g., [24]), it can be shown that if the problem is feasible and the maximum is finite, then the maximizer has the form

\[
f^*(x) = \left( \sum_{i=1}^k \nu_i^* \, x^{s_i} \right)^{-\frac{1}{1-r}},
\]

up to a positive scale factor. The parameters ν*_1, …, ν*_k are nonnegative, and the ith moment constraint holds with equality for all i such that ν*_i is strictly positive; that is, ν*_i > 0 ⟹ μ_{s_i}(f*) = m_i. Consequently, the maximum can be expressed in terms of a linear combination of the moments. For the purposes of this paper, it is useful to consider a relative inequality in terms of the moments of the function itself. Given a number 0 < r < 1 and vectors s ∈ R^k and ν ∈ R^k_+, the function c_r(ν, s) is defined according to

\[
c_r(\nu, s) = \int_0^\infty \left( \sum_{i=1}^k \nu_i \, x^{s_i} \right)^{-\frac{r}{1-r}} dx
\]

if the integral exists. Otherwise, c_r(ν, s) is defined to be positive infinity. It can be verified that c_r(ν, s) is finite provided that there exist i, j such that ν_i and ν_j are strictly positive and s_i < (1 − r)/r < s_j.
The following result can be viewed as a consequence of the constrained optimization problem described above. We provide a different and very simple proof that depends only on Hölder's inequality.
Proposition 1. Let f be a nonnegative Lebesgue measurable function defined on the positive reals R_+. For any number 0 < r < 1 and vectors s ∈ R^k and ν ∈ R^k_+, we have

\[
\|f\|_r \le \left( c_r(\nu, s) \right)^{\frac{1-r}{r}} \sum_{i=1}^k \nu_i \, \mu_{s_i}(f).
\]

Proof. Writing g(x) = Σ_i ν_i x^{s_i}, we have

\[
\int f^r \, dx = \int (f g)^r \, g^{-r} \, dx \le \left( \int f g \, dx \right)^r \left( \int g^{-\frac{r}{1-r}} \, dx \right)^{1-r},
\]

where the second step is Hölder's inequality with conjugate exponents 1/(1 − r) and 1/r. Raising both sides to the power 1/r and recalling the definitions of c_r(ν, s) and μ_s(f) completes the proof.
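Proposition 1 can be checked numerically. The sketch below (an illustration, not from the paper) uses f(x) = e^{−x} on R_+, moments s = (0, 2), weights ν = (1, 1), and r = 1/2, for which c_r(ν, s) = ∫_0^∞ (1 + x²)^{−1} dx = π/2, ‖f‖_{1/2} = 4, μ_0(f) = 1, and μ_2(f) = 2.

```python
import numpy as np

r = 0.5
x = np.linspace(0.0, 60.0, 2_000_001)   # truncated grid approximating R_+
dx = x[1] - x[0]
f = np.exp(-x)

# c_r(nu, s) with nu = (1, 1), s = (0, 2): integrand (1 + x^2)^(-r/(1-r))
c_r = np.sum((1.0 + x ** 2) ** (-r / (1.0 - r))) * dx   # close to pi/2

norm_r = (np.sum(f ** r) * dx) ** (1.0 / r)   # ||f||_{1/2} = 4
mu_0 = np.sum(f) * dx                         # zeroth moment = 1
mu_2 = np.sum(x ** 2 * f) * dx                # second moment = 2
bound = c_r ** ((1.0 - r) / r) * (mu_0 + mu_2)
```

The condition s_1 < (1 − r)/r < s_2 reads 0 < 1 < 2 here, so c_r is finite and the bound (about 4.7) dominates ‖f‖_{1/2} = 4.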

Two Moments
For a, b > 0, the beta function B(a, b) and gamma function Γ(a) are given by

\[
B(a, b) = \int_0^1 t^{a-1} (1-t)^{b-1} \, dt, \qquad \Gamma(a) = \int_0^\infty t^{a-1} e^{-t} \, dt,
\]

and satisfy the relation B(a, b) = Γ(a)Γ(b)/Γ(a + b), a, b > 0. To lighten the notation, we define the normalized beta function

\[
\bar{B}(a, b) = B(a, b)\, (a + b)^{a+b}\, a^{-a}\, b^{-b}.
\]

Properties of these functions are provided in Appendix A. The next result follows from Proposition 1 in the case of two moments.
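The relation B(a, b) = Γ(a)Γ(b)/Γ(a + b) and the normalized beta function are easy to check numerically; a short sketch (not from the paper):

```python
import numpy as np
from math import gamma, exp, log

def beta(a, b):
    # B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return gamma(a) * gamma(b) / gamma(a + b)

def beta_bar(a, b):
    # Normalized beta function: B(a,b) (a+b)^(a+b) a^(-a) b^(-b),
    # with the power factors computed in log-space for stability
    return beta(a, b) * exp((a + b) * log(a + b) - a * log(a) - b * log(b))

# Check B(a, b) against its integral representation on a grid
a, b = 1.5, 2.5
t = np.linspace(1e-9, 1.0 - 1e-9, 1_000_001)
integral = np.sum(t ** (a - 1) * (1 - t) ** (b - 1)) * (t[1] - t[0])
```

For example, B(1/2, 1/2) = π, which is the reflection-formula identity used later for the case r = 1/2.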
Proof. Let s = (p, q) and ν = (ν_1, ν_2). Making the change of variable x → γu in the integral defining c_r(ν, s) yields an expression with a = (r/(1−r))λ and b = (r/(1−r))(1 − λ), where the second step follows from recognizing the integral representation of the beta function given in Equation (A3). Therefore, by Proposition 1, the inequality holds for all γ > 0. Evaluating this inequality at the minimizing choice of γ leads to the stated result.
The special case r = 1/2 admits a simplified expression for ψ_{1/2}(p, q), obtained using Euler's reflection formula for the beta function ([25], Theorem 1.2.1). Next, we consider an extension of Proposition 2 for functions defined on R^n. Given any measurable subset S of R^n, we define ω(S) according to Equation (9), where B_n = {u ∈ R^n : ‖u‖ ≤ 1} is the n-dimensional Euclidean ball of radius one. The function ω(S) is proportional to the surface measure of the projection of S onto the Euclidean sphere. Note that ω(R_+) = 1 and ω(R) = 2.
Proposition 3. Let f be a nonnegative Lebesgue measurable function defined on a subset S of R^n. For any numbers p, q, r with 0 < r < 1 and p < 1/r − 1 < q, the two-moment bound stated in Inequality (10) holds, where λ = (q + 1 − 1/r)/(q − p) and ψ_r(p, q) is given by Equation (7).
Proof. Let f be extended to R^n using the rule f(x) = 0 for all x outside of S, and let g : R_+ → R_+ be defined as a suitably normalized integral of f over spheres, where S^{n−1} = {u ∈ R^n : ‖u‖ = 1} is the Euclidean sphere of radius one and σ is the surface measure on the sphere. In the following, we will show that Inequality (11) and Equation (12) hold. The stated inequality then follows from applying Proposition 2 to the function g.
To prove Inequality (11), we begin with a transformation into polar coordinates. Letting 1_{cone(S)}(x) denote the indicator function of the set cone(S), the integral over the sphere can be bounded using a chain of inequalities in which (a) follows from Hölder's inequality with conjugate exponents 1/(1−r) and 1/r, and (b) follows from the definition of g. Plugging Inequality (14) back into Equation (13) and then making the change of variable t → y^{1/n} yields Inequality (11). The proof of Equation (12) follows along similar lines: (a) follows from a transformation into polar coordinates, and (b) follows from the change of variable t → y^{1/n}. Having established Inequality (11) and Equation (12), an application of Proposition 2 completes the proof.

Rényi Entropy Bounds
Let X be a random vector that has a density f(x) with respect to the Lebesgue measure on R^n. The differential Rényi entropy of order r ∈ (0, 1) ∪ (1, ∞) is defined according to [11]:

\[
h_r(X) = \frac{1}{1-r} \log \int f(x)^r \, dx.
\]

Throughout this paper, it is assumed that the logarithm is defined with respect to the natural base and entropy is measured in nats. The Rényi entropy is continuous and nonincreasing in r. If the support set S = {x ∈ R^n : f(x) > 0} has finite measure, then the limit as r converges to zero is given by h_0(X) = log Vol(S). If the support does not have finite measure, then h_r(X) increases to infinity as r decreases to zero. The case r = 1 is given by the Shannon differential entropy:

\[
h(X) = -\int f(x) \log f(x) \, dx.
\]

Given a random variable X that is not identically zero and numbers p, q, r with 0 < r < 1 and p < 1/r − 1 < q, the functional L_r(X; p, q) is defined as a weighted combination of the logarithms of the pth and qth moments of |X|, with weights determined by λ = (q + 1 − 1/r)/(q − p). The next result, which follows directly from Proposition 3, provides an upper bound on the Rényi entropy.

Proposition 4. Let X be a random vector with a density on R^n. For any numbers p, q, r with 0 < r < 1 and p < 1/r − 1 < q, the Rényi entropy satisfies the bound stated in Inequality (15), where ω(S) is defined in Equation (9) and ψ_r(p, q) is defined in Equation (7).
Proof. This result follows immediately from Proposition 3 and the definition of Rényi entropy.
The relationship between Proposition 4 and previous results depends on whether the moment p is equal to zero:

• One-moment inequalities: If p = 0, then there exists a distribution such that Inequality (15) holds with equality. This is because the zero-moment constraint ensures that the function that maximizes the Rényi entropy integrates to one. In this case, Proposition 4 is equivalent to previous results that focused on distributions that maximize Rényi entropy subject to a single moment constraint [12,13,15]. With some abuse of terminology, we refer to these bounds as one-moment inequalities. (A more accurate name would be two-moment inequalities under the constraint that one of the moments is the zeroth moment.)

• Two-moment inequalities: If p ≠ 0, then the right-hand side of Inequality (15) corresponds to the Rényi entropy of a nonnegative function that might not integrate to one. Nevertheless, the expression provides an upper bound on the Rényi entropy for any density with the same moments. We refer to the bounds obtained using a general pair (p, q) as two-moment inequalities.
The contribution of two-moment inequalities is that they lead to tighter bounds. To quantify the tightness, we define ∆_r(X; p, q) to be the gap between the right-hand side and the left-hand side of Inequality (15) corresponding to the pair (p, q). The gaps corresponding to the optimal two-moment and one-moment inequalities are then defined by minimizing ∆_r(X; p, q) over all admissible pairs (p, q), and over admissible pairs with p = 0, respectively.

Some Consequences of These Bounds
By Lyapunov's inequality, the mapping s ↦ (1/s) log E[|X|^s] is nondecreasing. In other words, the case p = 0 provides an upper bound on L_r(X; p, q) for nonnegative p. Alternatively, we also have a lower bound, which follows from the convexity of s ↦ log E[|X|^s].
A useful property of L r (X; p, q) is that it is additive with respect to the product of independent random variables. Specifically, if X and Y are independent, then L r (XY; p, q) = L r (X; p, q) + L r (Y; p, q).
One consequence is that multiplication by a bounded random variable cannot increase the Rényi entropy by an amount that exceeds the gap of the two-moment inequality with nonnegative moments.
Proposition 5. Let Y be a random vector on R^n with finite Rényi entropy of order 0 < r < 1, and let X be an independent random variable that satisfies 0 < X ≤ t. Then, for any admissible pair (p, q) with p nonnegative, h_r(XY) exceeds h_r(Y) by at most n log t plus the gap ∆_r(Y; p, q).

Proof. Let Z = XY and let S_Z and S_Y denote the support sets of Z and Y, respectively. The assumption that X is positive means that cone(S_Z) = cone(S_Y). We have a chain of inequalities in which (a) follows from Proposition 4, (b) follows from Equation (18) and the definition of ∆_r(Y; p, q), and (c) follows from Inequality (16) and the assumption X ≤ t. Finally, recalling that h_r(tY) = h_r(Y) + n log t completes the proof.

Example with Log-Normal Distribution
If W ∼ N(μ, σ²), then the random variable X = exp(W) has a log-normal distribution with parameters (μ, σ²). The Rényi entropy is given by

\[
h_r(X) = \mu + \frac{1}{2} \log(2\pi\sigma^2) + \frac{(1-r)\,\sigma^2}{2r} + \frac{\log(1/r)}{2(1-r)},
\]

and the logarithm of the sth moment is given by

\[
\log \mathbb{E}[X^s] = s\mu + \frac{s^2 \sigma^2}{2}.
\]

With a bit of work, it can be shown that the gap of the optimal two-moment inequality does not depend on the parameters (μ, σ²). The details of this derivation are given in Appendix B.1. Meanwhile, the gap of the optimal one-moment inequality does depend on σ². Both gap functions are illustrated in Figure 1 as functions of r for various σ². The two-moment gap is bounded uniformly with respect to r and converges to zero as r increases to one. The tightness of the two-moment inequality in this regime follows from the fact that the log-normal distribution maximizes Shannon entropy subject to constraints on the mean and variance of log X. By contrast, the one-moment gap varies with the parameter σ²: for any fixed r ∈ (0, 1), it can be shown that it increases to infinity as σ² converges to zero or to infinity.
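The closed forms for the log-normal distribution, log E[X^s] = sμ + s²σ²/2 and h_r(X) = μ + ½ log(2πσ²) + (1−r)σ²/(2r) + log(1/r)/(2(1−r)), can be cross-checked by direct numerical integration of the density; the parameter values below are arbitrary, and this check is an illustration rather than part of the paper.

```python
import numpy as np

mu, sigma, s, r = 0.3, 0.8, 1.7, 0.5

# Log-normal density f(x) = exp(-(log x - mu)^2 / (2 sigma^2)) / (x sigma sqrt(2 pi))
x = np.linspace(1e-6, 200.0, 4_000_001)
dx = x[1] - x[0]
f = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))

moment = np.sum(x ** s * f) * dx                       # E[X^s] numerically
log_moment_formula = s * mu + s ** 2 * sigma ** 2 / 2  # closed form

renyi = np.log(np.sum(f ** r) * dx) / (1.0 - r)        # h_r(X) numerically
h_formula = (mu + 0.5 * np.log(2 * np.pi * sigma ** 2)
             + (1 - r) * sigma ** 2 / (2 * r)
             + np.log(1 / r) / (2 * (1 - r)))
```

As r increases to one, h_formula recovers the Shannon entropy μ + ½ log(2πeσ²) of the log-normal distribution.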

Example with Multivariate Gaussian Distribution
Next, we consider the case where Y ∼ N(0, I_n) is an n-dimensional Gaussian vector with mean zero and identity covariance. The Rényi entropy is given by

\[
h_r(Y) = \frac{n}{2} \log(2\pi) + \frac{n}{2}\, \frac{\log(1/r)}{1-r},
\]

and the sth moment of the magnitude ‖Y‖ is given by

\[
\mathbb{E}[\|Y\|^s] = \frac{2^{s/2}\, \Gamma\!\left( \frac{n+s}{2} \right)}{\Gamma\!\left( \frac{n}{2} \right)}.
\]
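The moment formula E[‖Y‖^s] = 2^{s/2} Γ((n+s)/2)/Γ(n/2) is the standard chi-distribution moment and can be checked by Monte Carlo simulation; a small sketch (not from the paper):

```python
import numpy as np
from math import gamma

def norm_moment(n, s):
    # E[||Y||^s] for Y ~ N(0, I_n): 2^(s/2) * Gamma((n+s)/2) / Gamma(n/2)
    return 2 ** (s / 2) * gamma((n + s) / 2) / gamma(n / 2)

rng = np.random.default_rng(0)
n, s = 5, 1.3
samples = np.linalg.norm(rng.standard_normal((400_000, n)), axis=1)
mc = np.mean(samples ** s)   # Monte Carlo estimate of E[||Y||^s]
```

For s = 2 the formula reduces to E[‖Y‖²] = n, as expected.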
The next result shows that, as the dimension n increases, the gap of the optimal two-moment inequality converges to the gap for the log-normal distribution. Moreover, for each r ∈ (0, 1), the choice of moments (p_n, q_n) given in Equation (21) is optimal in the large-n limit. The proof is given in Appendix B.3.

Proposition 6.
If Y ∼ N(0, I_n), then, for each r ∈ (0, 1), the optimal two-moment gap of Y and the gap ∆_r(Y; p_n, q_n) both converge, as n → ∞, to the optimal two-moment gap of X, where X has a log-normal distribution and (p_n, q_n) are given by (21). Figure 2 provides a comparison of the optimal two-moment gap, ∆_r(Y; p_n, q_n), and the optimal one-moment gap as a function of n for r = 0.1. Here, we see that both the optimal two-moment gap and ∆_r(Y; p_n, q_n) converge rapidly to the asymptotic limit given by the gap of the log-normal distribution. By contrast, the gap of the optimal one-moment inequality increases without bound.

Inequalities for Differential Entropy
Proposition 4 can also be used to recover some known inequalities for differential entropy by considering the limiting behavior as r converges to one. For example, it is well known that the differential entropy of an n-dimensional random vector X with finite second moment satisfies

\[
h(X) \le \frac{n}{2} \log\left( \frac{2\pi e \, \mathbb{E}[\|X\|^2]}{n} \right), \tag{22}
\]

with equality if and only if the entries of X are i.i.d. zero-mean Gaussian. A generalization of this result in terms of an arbitrary positive moment s > 0 is given by Inequality (23); note that Inequality (22) corresponds to the case s = 2.
Inequality (23) can be proved as an immediate consequence of Proposition 4 and the fact that h_r(X) is nonincreasing in r. Using properties of the beta function given in Appendix A, it is straightforward to verify the limiting behavior of the constant as r increases to one. Combining this result with Proposition 4 and Inequality (16), then using Inequality (10) and making the substitution s = nq, leads to Inequality (23). Another example follows from the fact that the log-normal distribution maximizes the differential entropy of a positive random variable X subject to constraints on the mean and variance of log(X), and hence

\[
h(X) \le \mathbb{E}[\log X] + \frac{1}{2} \log\left( 2\pi e \, \mathrm{Var}(\log X) \right), \tag{24}
\]

with equality if and only if X is log-normal. In Appendix B.4, it is shown how this inequality can be proved using our two-moment inequalities by studying the behavior as both p and q converge to zero while r increases to one.

Relative Entropy and Chi-Squared Divergence
Let P and Q be distributions defined on a common probability space that have densities p and q with respect to a dominating measure μ. The relative entropy (or Kullback-Leibler divergence) is defined according to

\[
D(P \,\|\, Q) = \int p \log \frac{p}{q} \, d\mu,
\]

and the chi-squared divergence is defined as

\[
\chi^2(P \,\|\, Q) = \int \frac{(p - q)^2}{q} \, d\mu.
\]

Both of these divergences can be seen as special cases of the general class of f-divergence measures, and there exists a rich literature on comparisons between different divergences [8,26-32]. The chi-squared divergence can also be viewed as the squared L² distance between p/√q and √q. The chi-square can also be interpreted as the first non-zero term in the power series expansion of the relative entropy ([26], Lemma 4). More generally, the chi-squared divergence provides an upper bound on the relative entropy via

\[
D(P \,\|\, Q) \le \log\left( 1 + \chi^2(P \,\|\, Q) \right). \tag{25}
\]

The proof of this inequality follows straightforwardly from Jensen's inequality and the concavity of the logarithm; see [27,31,32] for further refinements. Given a random pair (X, Y), the mutual information between X and Y is defined according to

\[
I(X; Y) = D(P_{X,Y} \,\|\, P_X P_Y).
\]

From Inequality (25), we see that the mutual information can always be upper bounded using

\[
I(X; Y) \le \log\left( 1 + \chi^2(P_{X,Y} \,\|\, P_X P_Y) \right). \tag{26}
\]

The next section provides bounds on the mutual information that can improve upon this inequality.
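The chain D(P‖Q) ≤ log(1 + χ²(P‖Q)) ≤ χ²(P‖Q) is easy to check numerically on discrete distributions; a small sketch (not from the paper):

```python
import numpy as np

def kl(p, q):
    # Relative entropy D(P||Q) in nats
    return float(np.sum(p * np.log(p / q)))

def chi_sq(p, q):
    # Chi-squared divergence: sum of (p - q)^2 / q
    return float(np.sum((p - q) ** 2 / q))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])

d = kl(p, q)       # about 0.275
c = chi_sq(p, q)   # exactly 0.63 for these distributions
```

Here log(1 + 0.63) ≈ 0.489, which sits between the two divergences as Inequality (25) requires.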

Mutual Information and Variance of Conditional Density
Let (X, Y) be a random pair such that the conditional distribution of Y given X has a density f_{Y|X}(y|x) with respect to the Lebesgue measure on R^n. Note that the marginal density of Y is given by f_Y(y) = E[f_{Y|X}(y|X)]. To simplify notation, we write f(y|x) and f(y), leaving the subscripts implicit. The support set of Y is denoted by S_Y.
The measure of the dependence between X and Y that is used in our bounds can be understood in terms of the variance of the conditional density. For each y, the conditional density f(y|X) evaluated at a random realization of X is a random variable. The variance of this random variable is given by

\[
\mathrm{Var}(f(y \mid X)) = \mathbb{E}\left[ \left( f(y \mid X) - f(y) \right)^2 \right],
\]

where we have used the fact that the marginal density f(y) is the expectation of f(y|X). The sth moment of the variance of the conditional density is defined according to

\[
V_s(Y \mid X) = \int \|y\|^{ns} \, \mathrm{Var}(f(y \mid X)) \, dy.
\]

The variance moment V_s(Y|X) is nonnegative and equal to zero if and only if X and Y are independent. The function κ(t) is defined according to Equation (29); its behavior is illustrated in Figure 3. The proof of the following result is given in Appendix C.

Proposition 7. The function κ(t) defined in Equation (29) can be expressed in closed form in terms of Lambert's W-function, where W(z) denotes the unique solution to the equation z = w exp(w) on the interval [−1, ∞). Furthermore, the function g(t) = tκ(t) is strictly increasing on (0, 1] with lim_{t→0} g(t) = 1/e and g(1) = 1, and thus

\[
\frac{1}{e\,t} \le \kappa(t) \le \frac{1}{t}, \qquad t \in (0, 1],
\]

where the lower bound 1/(et) is tight for small values of t ∈ (0, 1) and the upper bound 1/t is tight for values of t close to 1.
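Equation (29) is not reproduced above. A candidate consistent with every property listed in Proposition 7 is κ(t) = sup_{z>0} log(1 + z)/z^t, the smallest constant for which log(1 + z) ≤ κ(t) z^t holds for all z > 0; treating this identification as our assumption, the sketch below computes κ numerically and checks the bounds 1/(et) ≤ κ(t) ≤ 1/t and the monotonicity of g(t) = tκ(t).

```python
import numpy as np

def kappa(t):
    # Assumed form (see lead-in): kappa(t) = sup_{z>0} log(1+z) / z^t,
    # approximated by a maximum over a wide log-spaced grid of z values
    z = np.exp(np.linspace(-20.0, 40.0, 400001))
    return float(np.max(np.log1p(z) / z ** t))

ts = np.linspace(0.05, 1.0, 20)
g = np.array([t * kappa(t) for t in ts])   # g(t) = t * kappa(t)
```

With this form, κ(1) = 1 (the supremum is approached as z → 0), matching the fact that t = 1 recovers the chi-squared bound.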
We are now ready to give the main results of this section, which are bounds on the mutual information. We begin with a general upper bound in terms of the variance of the conditional density.

Proposition 8. For any 0 < t ≤ 1, the mutual information satisfies

\[
I(X; Y) \le \kappa(t) \int_{S_Y} \left( \mathrm{Var}(f(y \mid X)) \right)^t \left( f(y) \right)^{1-2t} dy.
\]

Proof. We use the following series of inequalities:

\[
I(X; Y) \overset{(a)}{=} \int D(P_{X|Y=y} \,\|\, P_X)\, f(y)\, dy
\overset{(b)}{\le} \int \log\left( 1 + \chi^2(P_{X|Y=y} \,\|\, P_X) \right) f(y)\, dy
\overset{(c)}{=} \int \log\left( 1 + \frac{\mathrm{Var}(f(y \mid X))}{f(y)^2} \right) f(y)\, dy
\overset{(d)}{\le} \kappa(t) \int \left( \mathrm{Var}(f(y \mid X)) \right)^t \left( f(y) \right)^{1-2t} dy,
\]

where (a) follows from the definition of mutual information, (b) follows from Inequality (25), and (c) follows from Bayes' rule, which allows us to write the chi-square in terms of the variance of the conditional density:

\[
\chi^2(P_{X|Y=y} \,\|\, P_X) = \frac{\mathrm{Var}(f(y \mid X))}{f(y)^2}.
\]

Inequality (d) follows from the nonnegativity of the variance and the definition of κ(t).
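The Bayes-rule identity relating the chi-square to the variance of the conditional density, together with the resulting t = 1 chain I(X; Y) ≤ log(1 + χ²) ≤ χ², can be verified numerically. The sketch below (an illustration, not from the paper) uses a two-point X and a unit-variance Gaussian channel.

```python
import numpy as np

# Two-point X in {-1, +1} with equal probabilities; Y | X = x ~ N(x, 1)
y = np.linspace(-12.0, 12.0, 200001)
dy = y[1] - y[0]
phi = lambda u: np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)

p_x = np.array([0.5, 0.5])
f_cond = np.stack([phi(y + 1.0), phi(y - 1.0)])   # f(y|x) for x = -1, +1
f_y = p_x @ f_cond                                # marginal density f(y)

# Integrated variance form: integral of Var(f(y|X)) / f(y)
var = p_x @ f_cond ** 2 - f_y ** 2
rhs = np.sum(var / f_y) * dy

# Chi-squared divergence chi^2(P_XY || P_X P_Y) from its definition
chi2 = np.sum(p_x[:, None] * (f_cond - f_y) ** 2 / f_y) * dy

# Mutual information by direct numerical integration
mi = np.sum(p_x[:, None] * f_cond * np.log(f_cond / f_y)) * dy
```

The two expressions for the chi-square agree to machine precision, and the mutual information sits below log(1 + χ²), as the proposition requires.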
Evaluating Proposition 8 with t = 1 recovers the well-known inequality I(X; Y) ≤ χ²(P_{X,Y} ‖ P_X P_Y). The next two results follow from the cases 0 < t < 1/2 and t = 1/2, respectively.

Proof. Evaluating Proposition 8 with t = 1/2 gives a bound proportional to ∫ √(Var(f(y|X))) dy. Evaluating Proposition 3 with r = 1/2 then bounds this integral in terms of the variance moments V_p(Y|X) and V_q(Y|X). Combining these inequalities with the expression for ψ_{1/2}(p, q) given in Equation (8) completes the proof.
The contribution of Propositions 9 and 10 is that they provide bounds on the mutual information in terms of quantities that can be easy to characterize. One application of these bounds is to establish conditions under which the mutual information corresponding to a sequence of random pairs (X_k, Y_k) converges to zero. In this case, Proposition 9 provides a sufficient condition in terms of the Rényi entropy of Y_k and the variance moment V_0(Y_k|X_k), while Proposition 10 provides a sufficient condition in terms of V_s(Y_k|X_k) evaluated at two different values of s. These conditions are summarized in the following result.

Proposition 11. Let (X_k, Y_k) be a sequence of random pairs such that the conditional distribution of Y_k given X_k has a density on R^n. Each of the following is a sufficient condition under which the mutual information I(X_k; Y_k) converges to zero as k increases to infinity:
1. There exists 0 < r < 1 such that the corresponding condition from Proposition 9 holds.
2. There exists p < 1 < q such that the corresponding condition from Proposition 10 holds.

Properties of the Bounds
The variance moment V_s(Y|X) has a number of interesting properties. The variance of the conditional density can be expressed in terms of an expectation with respect to two independent random variables X_1 and X_2 with the same distribution as X via the decomposition

\[
\mathrm{Var}(f(y \mid X)) = \frac{1}{2}\, \mathbb{E}\left[ \left( f(y \mid X_1) - f(y \mid X_2) \right)^2 \right].
\]

Consequently, by swapping the order of the integration and expectation, we obtain

\[
V_s(Y \mid X) = \mathbb{E}\left[ K_s(X_1, X_1) \right] - \mathbb{E}\left[ K_s(X_1, X_2) \right], \qquad K_s(x_1, x_2) = \int \|y\|^{ns} f(y \mid x_1)\, f(y \mid x_2) \, dy.
\]

The function K_s(x_1, x_2) is a positive definite kernel that does not depend on the distribution of X. For s = 0, this kernel has been studied previously in the machine learning literature [33], where it is referred to as the expected likelihood kernel. The variance of the conditional density also satisfies a data processing inequality. Suppose that U → X → Y forms a Markov chain. Then, the square of the conditional density of Y given U can be expressed as

\[
f(y \mid u)^2 = \mathbb{E}\left[ f(y \mid X_1)\, f(y \mid X_2) \mid U = u \right],
\]

where (U, X_1, X_2) ∼ P_U P_{X_1|U} P_{X_2|U}. Combining this expression with Equation (30) yields the data processing inequality V_s(Y|U) ≤ V_s(Y|X), where we recall that (X_1, X_2) are independent copies of X. Finally, it is easy to verify that the variance moment satisfies the scaling relation V_s(aY|X) = |a|^{n(s−1)} V_s(Y|X) for all a ≠ 0.
Using this scaling relationship, we see that the sufficient conditions in Proposition 11 are invariant to scaling of Y.

Example with Additive Gaussian Noise
We now provide a specific example of our bounds on the mutual information. Let X ∈ R^n be a random vector with distribution P_X and let Y be the output of a Gaussian noise channel,

\[
Y = X + W, \tag{32}
\]

where W ∼ N(0, I_n) is independent of X. If X has finite second moment, then the mutual information satisfies

\[
I(X; Y) \le \frac{n}{2} \log\left( 1 + \frac{\mathbb{E}[\|X\|^2]}{n} \right),
\]

where equality is attained if and only if X has a zero-mean isotropic Gaussian distribution. This inequality follows straightforwardly from the fact that the Gaussian distribution maximizes differential entropy subject to a second moment constraint [11]. One of the limitations of this bound is that it can be loose when the second moment is dominated by events that have small probability. In fact, it is easy to construct examples for which X does not have a finite second moment, and yet I(X; Y) is arbitrarily close to zero. Our results provide bounds on I(X; Y) that are less sensitive to the effects of rare events. Let φ_n(x) = (2π)^{−n/2} exp(−‖x‖²/2) denote the density of the standard Gaussian distribution on R^n. The product of the conditional densities can be factored according to

\[
\phi_n(y - x_1)\, \phi_n(y - x_2) = \phi_n\!\left( \sqrt{2}\, y - \tfrac{x_1 + x_2}{\sqrt{2}} \right) \phi_n\!\left( \tfrac{x_1 - x_2}{\sqrt{2}} \right),
\]

where the second step follows because φ_{2n}(·) is invariant to orthogonal transformations.
Integrating with respect to y leads to

\[
K_s(x_1, x_2) = 2^{-n/2}\, \phi_n\!\left( \tfrac{x_1 - x_2}{\sqrt{2}} \right) \mathbb{E}\left[ \left\| \tfrac{x_1 + x_2}{2} + \tfrac{W}{\sqrt{2}} \right\|^{ns} \right],
\]

where we recall that W ∼ N(0, I_n). For the case s = 0, we see that K_0(x_1, x_2) is a Gaussian kernel,

\[
K_0(x_1, x_2) = (4\pi)^{-n/2} \exp\left( -\tfrac{\|x_1 - x_2\|^2}{4} \right),
\]

and thus

\[
V_0(Y \mid X) = (4\pi)^{-n/2}\, \mathbb{E}\left[ 1 - \exp\left( -\tfrac{\|X_1 - X_2\|^2}{4} \right) \right]. \tag{34}
\]

A useful property of V_0(Y|X) is that the conditions under which it converges to zero are weaker than the conditions needed for other measures of dependence. Observe that the expectation in Equation (34) is bounded uniformly with respect to (X_1, X_2). In particular, using the inequality 1 − e^{−x} ≤ x, it follows that V_0(Y|X) converges to zero whenever X converges to a constant value in probability.
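For n = 1, the Gaussian-kernel identity K_0(x_1, x_2) = (4π)^{−n/2} exp(−‖x_1 − x_2‖²/4) can be checked against direct numerical integration of ∫ φ_n(y − x_1) φ_n(y − x_2) dy; a small sketch (not from the paper):

```python
import numpy as np

phi = lambda u: np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)   # standard normal density, n = 1

def K0_closed_form(x1, x2):
    # (4 pi)^(-1/2) * exp(-(x1 - x2)^2 / 4) for n = 1
    return (4 * np.pi) ** -0.5 * np.exp(-(x1 - x2) ** 2 / 4)

y = np.linspace(-20.0, 20.0, 400001)
dy = y[1] - y[0]
x1, x2 = 0.7, -1.2
numeric = np.sum(phi(y - x1) * phi(y - x2)) * dy   # expected likelihood kernel at s = 0
```

Because the integrand decays rapidly, the Riemann sum matches the closed form essentially to machine precision.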
To study some further properties of these bounds, we now focus on the case where X is a Gaussian scale mixture generated according to Equation (35), with A and U independent. In this case, the expectations with respect to the kernel K_s(x_1, x_2) can be computed explicitly, leading to an expression in which (U_1, U_2) are independent copies of U. It can be shown that this expression depends primarily on the magnitude of U. This is not surprising given that X converges to a constant if and only if U converges to zero. Our results can also be used to bound the mutual information I(U; Y) by noting that U → X → Y forms a Markov chain and taking advantage of the characterization provided in Equation (31).
In this case, V_s(Y|U) is a measure of the variation in U. To study its behavior, we consider a simple upper bound that follows from noting that the term inside the expectation in Equation (37) is zero on the event U_1 = U_2. This bound shows that if s ≤ 1, then V_s(Y|U) is bounded uniformly with respect to the distribution of U, and if s > 1, then V_s(Y|U) is bounded in terms of the ((s−1)/2)th moment of U. In conjunction with Propositions 9 and 10, the function V_s(Y|U) provides bounds on the mutual information I(U; Y) that can be expressed in terms of simple expectations involving two independent copies of U. Figure 4 provides an illustration of the upper bound in Proposition 10 for the case where U is a discrete random variable supported on two points, and X and Y are generated according to Equations (32) and (35); the bound from Proposition 10 is evaluated with p = 0 and q = 2. This example shows that there exist sequences of distributions for which our upper bounds on the mutual information converge to zero while the chi-squared divergence between P_{XY} and P_X P_Y is bounded away from zero.

Conclusions
This paper provides bounds on Rényi entropy and mutual information that are based on a relatively simple two-moment inequality. Extensions to inequalities with more moments are worth exploring. Another potential application is to provide a refined characterization of the "all-or-nothing" behavior seen in a sparse linear regression problem [34,35], where the current methods of analysis depend on a complicated conditional second moment method.

Conflicts of Interest:
The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. The Gamma and Beta Functions
This section reviews some properties of the gamma and beta functions. For x > 0, the gamma function is defined according to

\[
\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \, dt.
\]

Binet's formula for the logarithm of the gamma function ([25], Theorem 1.6.3) gives

\[
\log \Gamma(x) = \left( x - \tfrac{1}{2} \right) \log x - x + \tfrac{1}{2} \log(2\pi) + \theta(x), \tag{A1}
\]

where the remainder term θ(x) is convex and nonincreasing with lim_{x→0} θ(x) = ∞ and lim_{x→∞} θ(x) = 0. Euler's reflection formula ([25], Theorem 1.2.1) gives

\[
\Gamma(x)\, \Gamma(1 - x) = \frac{\pi}{\sin(\pi x)}, \qquad 0 < x < 1. \tag{A2}
\]

For x, y > 0, the beta function can be expressed as follows:

\[
B(x, y) = \int_0^1 t^{x-1} (1 - t)^{y-1} \, dt = \int_0^\infty \frac{u^{x-1}}{(1 + u)^{x+y}} \, du, \tag{A3}
\]

where the second integral expression follows from the change of variables t → u/(1 + u). Recall that B̄(x, y) = B(x, y)(x + y)^{x+y} x^{−x} y^{−y}. Using Equation (A1) leads to

\[
\log \bar{B}(x, y) = \frac{1}{2} \log\left( \frac{2\pi (x + y)}{x y} \right) + \theta(x) + \theta(y) - \theta(x + y). \tag{A4}
\]

A further bound, Inequality (A5), can also be shown ([36], Equation (2), p. 2).

Appendix B. Details for Rényi Entropy Examples
This appendix studies properties of the two-moment inequalities for Rényi entropy described in Section 3.

Appendix B.1. Log-Normal Distribution
Let X be a log-normal random variable with parameters (μ, σ²) and consider the parametrization of (p, q) in terms of λ ∈ (0, 1) and u ∈ (0, ∞). Combining the resulting expressions with Equation (A4) leads to Equation (A6). We now characterize the minimum with respect to the parameters (λ, u). Note that the mapping in λ is convex and symmetric about the point λ = 1/2; therefore, the minimum with respect to λ is attained at λ = 1/2. Meanwhile, the mapping u ↦ uσ² − log(uσ²) is convex and attains its minimum at u = 1/σ². Evaluating Equation (A6) with these values, we see that the optimal two-moment gap does not depend on (μ, σ²). Moreover, the fact that this gap decreases to zero as r increases to one follows from the fact that θ(x) decreases to zero as x increases to infinity.
Next, we express the gap in terms of the pair (p, q). Comparing ∆_r(X; p, q) with the optimal two-moment gap gives an expression for their difference. In particular, if p = 0, then we obtain a simplified expression for the one-moment gap. This characterization shows that the gap of the optimal one-moment inequality increases to infinity in the limit as either σ² → 0 or σ² → ∞.

Appendix B.2. Multivariate Gaussian Distribution
Let Y ∼ N(0, I_n) be an n-dimensional Gaussian vector and consider the parametrization of (p, q) in terms of λ ∈ (0, 1) and z ∈ (0, ∞). Provided that the condition in Inequality (A7) holds, L_r(Y; p, q) is finite and admits an explicit expression. Here, we note that the scaling in Equation (21) corresponds to λ = 1/2 and z = n/(n + 1), and thus the condition in Inequality (A7) is satisfied for all n ≥ 1. Combining the above expressions and then using Equations (A1) and (A4) leads to an expression involving Q_{r,n}(λ, z). Next, we study some properties of Q_{r,n}(λ, z). By Equation (A1), the logarithm of the gamma function can be expressed as the sum of convex functions. Here, the last equality follows from the analysis in Appendix B.1, which shows that the minimum of G(λ, z) is attained at λ = 1/2 and z = 1.
To complete the proof, we will show that, for any sequence λ_n that converges to one as n increases to infinity, we have

\[
\liminf_{n \to \infty} \, \inf_{z \in (0, \infty)} G_n(\lambda_n, z) = \infty. \tag{A14}
\]
To see why this is the case, note that, by Equation (A4) and Inequality (A5), we can write an expression in which c_n is bounded uniformly for all n. Making the substitution u = λ(1 − λ)z, we obtain

\[
\inf_{z > 0} G_n(\lambda, z) \ge \inf_{u > 0} \left\{ Q_n\!\left( \lambda, \frac{u}{\lambda(1 - \lambda)} \right) - \frac{1}{2} \log u \right\} + c_n.
\]
Next, let b_n = 2(1 − r)/(9n). The lower bound in Inequality (A10) leads to a lower bound on inf_{u>0} Q_n(λ, u). The limiting behavior in Equation (A14) can now be seen as a consequence of Inequality (A15) and the fact that, for any sequence λ_n converging to one, the right-hand side of Inequality (A16) increases without bound as n increases. Combining Inequalities (A12) and (A13) with Equation (A14) establishes that the large-n limit of the optimal two-moment gap of Y exists and is equal to the corresponding gap for the log-normal distribution. This concludes the proof of Proposition 6.
Appendix B.4. Proof of Inequality (24)

Given any λ ∈ (0, 1) and u ∈ (0, ∞), let p(r) and q(r) be defined accordingly. We need the following results, which characterize the terms in Proposition 4 in the limit as r increases to one.
Proof. Starting with Equation (A4), we can write the constant as an exponential whose terms converge to zero as r converges to one. Noting that q(r) − p(r) = rλ(1 − λ)/(1 − r) completes the proof.
Lemma A2. If X is a random variable such that s ↦ E[|X|^s] is finite in a neighborhood of zero, then E[log |X|] and Var(log |X|) are finite, and

\[
\lim_{r \to 1} L_r(X; p(r), q(r)) = \mathbb{E}[\log |X|] + \frac{u}{2}\, \mathrm{Var}(\log |X|).
\]
Taking the limit as r increases to one completes the proof.
The stated inequality follows from evaluating the right-hand side with u = 1/ Var(log X), recalling that h(X) corresponds to the limit of h r (X) as r increases to one.