A Lower Bound on the Differential Entropy of Log-Concave Random Vectors with Applications

We derive a lower bound on the differential entropy of a log-concave random variable X in terms of the p-th absolute moment of X. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity. Specifically, we study the rate-distortion function for log-concave sources and the distortion measure d(x, x̂) = |x − x̂|^r, with r ≥ 1, and we establish that the difference between the rate-distortion function and the Shannon lower bound is at most log(√(πe)) ≈ 1.5 bits, independently of r and the target distortion d. For mean-square error distortion, the difference is at most log(√(πe/2)) ≈ 1 bit, regardless of d. We also provide bounds on the capacity of memoryless additive noise channels when the noise is log-concave. We show that the difference between the capacity of such channels and the capacity of the Gaussian channel with the same noise power is at most log(√(πe/2)) ≈ 1 bit. Our results generalize to the case of a random vector X with possibly dependent coordinates. Our proof technique leverages tools from convex geometry.


Introduction
It is well known that, among all zero-mean random variables with the same second moment, the differential entropy is maximized by the Gaussian distribution:

h(X) ≤ (1/2) log(2πe E[X²]). (1)

More generally, the differential entropy under a p-th moment constraint is upper bounded as (see e.g., [1] (Appendix 2)), for p > 0,

h(X) ≤ log(α_p ||X||_p), (2)

where ||X||_p = (E|X|^p)^{1/p} and

α_p = 2 (ep)^{1/p} Γ(1 + 1/p). (3)

Here, Γ denotes the Gamma function. Of course, if p = 2, α_p = √(2πe), and Equation (2) reduces to Equation (1). A natural question to ask is whether a matching lower bound on h(X) can be found in terms of the p-norm of X, ||X||_p. The quest is meaningless without additional assumptions on the density of X, as h(X) = −∞ is possible even if ||X||_p is finite. In this paper, we show that if the density f_X(x) of X is log-concave (that is, log f_X(x) is concave), then h(X) stays within a constant of the upper bound in Equation (2) (see Theorem 3 in Section 2 below):

h(X) ≥ log(2 ||X − E[X]||_p) − (1/p) log Γ(p + 1), (4)

where p ≥ 1. Moreover, the bound (4) tightens for p = 2, where we have

h(X) ≥ (1/2) log(4 Var[X]). (5)

The bound (4) actually holds for p > −1 if, in addition to being log-concave, X is symmetric (that is, f_X(x) = f_X(−x)); see Theorem 1 in Section 2 below.
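As a quick sanity check on the maximum-entropy constant in Equation (2), the snippet below (a numerical sketch of ours, not part of the paper, using the standard constant α_p = 2(ep)^{1/p} Γ(1 + 1/p)) verifies that α_2 = √(2πe) and that the standard Gaussian satisfies the upper bound (2) for several values of p, with equality at p = 2:

```python
import math

def alpha(p):
    # alpha_p = 2 * (e*p)^(1/p) * Gamma(1 + 1/p): constant in the max-entropy bound (2)
    return 2.0 * (math.e * p) ** (1.0 / p) * math.gamma(1.0 + 1.0 / p)

def gaussian_abs_moment(p, sigma=1.0):
    # E|G|^p for G ~ N(0, sigma^2): sigma^p * 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)
    return sigma ** p * 2.0 ** (p / 2.0) * math.gamma((p + 1.0) / 2.0) / math.sqrt(math.pi)

h_gauss = 0.5 * math.log(2.0 * math.pi * math.e)   # entropy of N(0,1), in nats

# alpha_2 reduces to sqrt(2*pi*e), so (2) recovers the classical bound (1) at p = 2
assert abs(alpha(2.0) - math.sqrt(2.0 * math.pi * math.e)) < 1e-12

# the upper bound h(X) <= log(alpha_p * ||X||_p) holds for N(0,1) at every p tested,
# with equality at p = 2
for p in [0.5, 1.0, 2.0, 3.0, 8.0]:
    bound = math.log(alpha(p) * gaussian_abs_moment(p) ** (1.0 / p))
    assert h_gauss <= bound + 1e-12
```

The equality at p = 2 reflects the fact that the Gaussian itself is the maximum-entropy distribution under a second-moment constraint.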
The class of log-concave distributions is rich and contains important distributions in probability, statistics and analysis. The Gaussian, Laplace and chi distributions, as well as the uniform distribution on a convex set, are all log-concave. The class of log-concave random vectors is well behaved under natural probabilistic operations: namely, a famous result of Prékopa [2] states that sums of independent log-concave random vectors, as well as marginals of log-concave random vectors, are log-concave. Furthermore, log-concave distributions have moments of all orders.
Together with the classical bound in Equation (2), the bound in (4) tells us that entropy and moments of log-concave random variables are comparable.
Using a different proof technique, Bobkov and Madiman [3] recently showed that the differential entropy of a log-concave X satisfies

h(X) ≥ (1/2) log Var[X]. (6)

Our results in (4) and (5) tighten (6), in addition to providing a comparison with other moments. Furthermore, this paper generalizes the lower bound on differential entropy in (4) to random vectors. If the random vector X = (X_1, ..., X_n) consists of independent random variables, then the differential entropy of X is equal to the sum of the differential entropies of the component random variables, and one can trivially apply (4) component-wise to obtain a lower bound on h(X). In this paper, we show that, even for non-independent components, as long as the density of the random vector X is log-concave and satisfies a symmetry condition, its differential entropy is bounded from below in terms of the covariance matrix of X (see Theorem 4 in Section 2 below). As noted in [4], such a generalization is related to the famous hyperplane conjecture in convex geometry. We also extend our results to a more general class of random variables, namely, the class of γ-concave random variables, with γ < 0.
The bound (4) on the differential entropy allows us to derive reverse entropy power inequalities with explicit constants. The fundamental entropy power inequality of Shannon [5] and Stam [6] states that for all independent continuous random vectors X and Y in R^n,

N(X + Y) ≥ N(X) + N(Y), (7)

where

N(X) = (1/(2πe)) e^{(2/n) h(X)} (8)

denotes the entropy power of X. It is of interest to characterize distributions for which a reverse form of (7) holds. In this direction, it was shown by Bobkov and Madiman [7] that, given any continuous log-concave random vectors X and Y in R^n, there exist affine volume-preserving maps u_1, u_2 such that a reverse entropy power inequality holds for u_1(X) and u_2(Y):

N(u_1(X) + u_2(Y)) ≤ c (N(X) + N(Y)), (9)

for some universal constant c ≥ 1 (independent of the dimension). In applications, it is important to know the precise value of the constant c that appears in (9). It was shown by Cover and Zhang [8] that, if X and Y are identically distributed (possibly dependent) log-concave random variables, then

h(X + Y) ≤ h(2X). (10)

Inequality (10) easily extends to random vectors (see [9]). A similar bound for the difference of i.i.d. log-concave random vectors was obtained in [10], and reads as

h(X − Y) ≤ h(2X). (11)

Recently, a new form of reverse entropy power inequality was investigated in [11], and a general reverse entropy power-type inequality was developed in [12]. For further details, we refer to the survey paper [13]. In Section 5, we provide explicit constants for non-identically distributed and uncorrelated log-concave random vectors (possibly dependent). In particular, we prove that, as long as the log-concave random variables X and Y are uncorrelated,

N(X + Y) ≤ (πe/2) (N(X) + N(Y)). (12)

A generalization of (12) to arbitrary dimension is stated in Theorem 8 in Section 2 below. The bound (4) on the differential entropy is essential in the study of the difference between the rate-distortion function and the Shannon lower bound, which we describe next.
Given a nonnegative number d, the rate-distortion function R_X(d) under the r-th moment distortion measure is given by

R_X(d) = inf_{P_{X̂|X} : E|X − X̂|^r ≤ d} I(X; X̂), (13)

where the infimum is over all transition probability kernels R → R satisfying the moment constraint. The celebrated Shannon lower bound [14] states that the rate-distortion function is lower bounded by

R_X(d) ≥ R̲_X(d) ≜ h(X) − log(α_r d^{1/r}), (14)

where α_r is defined in (3). For mean-square distortion (r = 2), (14) simplifies to

R̲_X(d) = h(X) − (1/2) log(2πe d). (15)

The Shannon lower bound states that the rate-distortion function is lower bounded by the difference between the differential entropy of the source and a term that increases with the target distortion d, explicitly linking the storage requirements for X to the information content of X (measured by h(X)) and the desired reproduction distortion d. As shown in [15][16][17] under progressively less stringent assumptions (Koch [17] showed that (16) holds as long as H(⌊X⌋) < ∞), the Shannon lower bound is tight in the limit of low distortion:

lim_{d↓0} (R_X(d) − R̲_X(d)) = 0. (16)

The speed of convergence in (16) and its finite blocklength refinement were recently explored in [18]. Due to its simplicity and its tightness in the high resolution/low distortion limit, the Shannon lower bound can serve as a proxy for the rate-distortion function R_X(d), which rarely has an explicit representation. Furthermore, the tightness of the Shannon lower bound at low d is linked to the optimality of simple lattice quantizers [18], an insight which has evident practical significance. Gish and Pierce [19] showed that, for mean-square error distortion, the difference between the entropy rate of a scalar quantizer, H_1, and the rate-distortion function R_X(d) converges to (1/2) log(2πe/12) ≈ 0.254 bit/sample in the limit d ↓ 0. Ziv [20] proved that H_1 − R_X(d) is bounded by (1/2) log(2πe/6) ≈ 0.754 bit/sample, universally in d, where H_1 is the entropy rate of a dithered scalar quantizer.
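For a Gaussian source under mean-square error distortion, the Shannon lower bound is not merely a bound but is achieved exactly whenever d ≤ σ²; the short check below (our own illustration, not part of the paper) confirms that it coincides with the exact Gaussian rate-distortion function (1/2) log(σ²/d):

```python
import math

sigma2, d = 4.0, 0.25
h_gauss = 0.5 * math.log(2.0 * math.pi * math.e * sigma2)   # h(X) for X ~ N(0, sigma2)
slb = h_gauss - 0.5 * math.log(2.0 * math.pi * math.e * d)  # Shannon lower bound, r = 2
rate = 0.5 * math.log(sigma2 / d)                           # exact Gaussian rate-distortion
assert abs(slb - rate) < 1e-12
```

The 2πe terms cancel, which is why the Gaussian source is the reference point against which the log-concave gap bounds of this paper are measured.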
In this paper, we show that the gap between R_X(d) and R̲_X(d) is bounded universally in d, provided that the source density is log-concave: for mean-square error distortion (r = 2 in (13)), we have

R_X(d) − R̲_X(d) ≤ (1/2) log(πe/2) ≈ 1 bit. (17)

Besides leading to the reverse entropy power inequality and the reverse Shannon lower bound, the new bounds on the differential entropy allow us to bound the capacity of additive noise memoryless channels, provided that the noise follows a log-concave distribution.
The capacity of a channel that adds a memoryless noise Z is given by (see e.g., [21] (Chapter 9))

C_Z(P) = sup_{E[X²] ≤ P} I(X; X + Z), (18)

where P is the power allotted for the transmission. As a consequence of the entropy power inequality (7) (or, more elementarily, of the worst additive noise lemma, see [22,23]), it holds for arbitrary noise Z that

C_Z(P) ≥ (1/2) log(1 + P/Var[Z]), (19)

where the right side is the capacity of the additive white Gaussian noise channel with noise variance Var[Z]. This fact is well known (see e.g., [21] (Chapter 9)), and is referred to as the saddle-point condition.
In this paper, we show that, whenever the noise Z is log-concave, the difference between the capacity C_Z(P) and the capacity of a Gaussian channel with the same noise power satisfies

C_Z(P) − (1/2) log(1 + P/Var[Z]) ≤ (1/2) log(πe/2). (20)

Let us mention a similar result by Zamir and Erez [24], who showed that the capacity of an arbitrary memoryless additive noise channel is well approximated by the mutual information between the Gaussian input and the output of the channel:

C_Z(P) ≤ I(X*; X* + Z) + (1/2) log 2, (21)

where X* is a Gaussian input satisfying the power constraint. The bounds (20) and (21) are not directly comparable. The rest of the paper is organized as follows. Section 2 presents and discusses our main results: the lower bounds on differential entropy in Theorems 1, 3 and 4, the reverse entropy power inequalities with explicit constants in Theorems 7 and 8, the upper bounds on R_X(d) − R̲_X(d) in Theorems 9 and 10, and the bounds on the capacity of memoryless additive channels in Theorems 12 and 13. The convex geometry tools used to prove the bounds on differential entropy in Theorems 1, 3 and 4 are presented in Section 3. In Section 4, we extend our results to the class of γ-concave random variables. The reverse entropy power inequalities in Theorems 7 and 8 are proven in Section 5. The bounds on the rate-distortion function in Theorems 9 and 10 are proven in Section 6. The bounds on the channel capacity in Theorems 12 and 13 are proven in Section 7.

Lower Bounds on the Differential Entropy
A function f : R^n → [0, ∞) is said to be log-concave if log f is concave on the support of f. Equivalently, f is log-concave if for every λ ∈ [0, 1] and for every x, y ∈ R^n, one has

f(λx + (1 − λ)y) ≥ f(x)^λ f(y)^{1−λ}. (22)

We say that a random vector X in R^n is log-concave if it has a probability density function f_X with respect to Lebesgue measure in R^n such that f_X is log-concave.
Our first result is a lower bound on the differential entropy of a symmetric log-concave random variable in terms of its moments. Theorem 1. Let X be a symmetric log-concave random variable. Then, for every p > −1,

h(X) ≥ log(2 ||X||_p) − (1/p) log Γ(p + 1). (23)

Moreover, (23) holds with equality for the uniform distribution in the limit p ↓ −1.
As we will see in Theorem 3, for p = 2, the bound (23) tightens to h(X) ≥ (1/2) log(4 Var[X]). The difference between the upper bound in (2) and the lower bound in (23) grows as log(p) as p → +∞ and as log(1/√p) as p → 0+, and reaches its minimum value of log(e) ≈ 1.4 bits at p = 1. The next theorem, due to Karlin, Proschan and Barlow [25], shows that the moments of a symmetric log-concave random variable are comparable, and demonstrates that the bound in Theorem 1 tightens as p ↓ −1. Theorem 2. Let X be a symmetric log-concave random variable. Then, for every −1 < p < q,

||X||_q / Γ(q + 1)^{1/q} ≤ ||X||_p / Γ(p + 1)^{1/p}.
Combining Theorem 2 with the well-known fact that ||X||_p is non-decreasing in p, we deduce that, for every symmetric log-concave random variable X and every −1 < p < q,

||X||_p ≤ ||X||_q ≤ (Γ(q + 1)^{1/q} / Γ(p + 1)^{1/p}) ||X||_p. (24)

Using Theorem 1 and (24), we immediately obtain the following upper bound for the relative entropy D(X||G_X) between a symmetric log-concave random variable X and a Gaussian G_X with the same variance as that of X. Corollary 1. Let X be a symmetric log-concave random variable. Then, for every p > −1,

D(X||G_X) ≤ Δ_p, (27)

where G_X ∼ N(0, ||X||₂²), and Δ_p is an explicit constant depending only on p, defined in (28). Remark 1. The uniform distribution achieves equality in (27) in the limit p ↓ −1. Indeed, if U is uniformly distributed on a symmetric interval, then D(U||G_U) = (1/2) log(πe/6), and so, in the limit p ↓ −1, the upper bound in Corollary 1 coincides with the true value of D(U||G_U). We next provide a lower bound for the differential entropy of log-concave random variables that are not necessarily symmetric.
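The normalized moments appearing in Theorem 2 and (24) can be tabulated in closed form for specific laws; the sketch below (our own check, with the Laplace density included as a presumed extremal case) illustrates that ||X||_p / Γ(p+1)^{1/p} is non-increasing in p for the symmetric uniform distribution and constant for the Laplace distribution:

```python
import math

def uniform_norm(p, a=2.0):
    # ||U||_p for U uniform on [-a/2, a/2]: (a/2) * (p+1)^(-1/p)
    return (a / 2.0) * (p + 1.0) ** (-1.0 / p)

def laplace_norm(p):
    # ||X||_p for the standard Laplace density (1/2) e^{-|x|}: E|X|^p = Gamma(p+1)
    return math.gamma(p + 1.0) ** (1.0 / p)

ps = [0.5, 1.0, 2.0, 4.0, 8.0]
u = [uniform_norm(p) / math.gamma(p + 1.0) ** (1.0 / p) for p in ps]
l = [laplace_norm(p) / math.gamma(p + 1.0) ** (1.0 / p) for p in ps]

# normalized moments are non-increasing in p for the uniform distribution...
assert all(u[i] >= u[i + 1] - 1e-12 for i in range(len(u) - 1))
# ...and identically 1 for the Laplace distribution
assert all(abs(x - 1.0) < 1e-12 for x in l)
```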
Theorem 3. Let X be a log-concave random variable. Then, for every p ≥ 1,

h(X) ≥ log(2 ||X − E[X]||_p) − (1/p) log Γ(p + 1). (31)

Moreover, for p = 2, the bound (31) tightens to

h(X) ≥ (1/2) log(4 Var[X]). (32)

The next proposition is an analog of Theorem 2 for log-concave random variables that are not necessarily symmetric. Proposition 1. Let X be a log-concave random variable. Then, for every 1 ≤ p ≤ q,

||X − E[X]||_q ≤ (2 Γ(q + 1)^{1/q} / Γ(p + 1)^{1/p}) ||X − E[X]||_p. (33)

Remark 2. Contrary to Theorem 2, we do not know whether there exists a distribution that realizes equality in (33).
Using Theorem 3, we immediately obtain the following upper bound for the relative entropy D(X||G X ) between an arbitrary log-concave random variable X and a Gaussian G X with same variance as that of X. Recall the definition of ∆ p in (28).

Corollary 2.
Let X be a zero-mean, log-concave random variable. Then, for every p ≥ 1,

D(X||G_X) ≤ Δ_p + log 2, (34)

where G_X ∼ N(0, ||X||₂²). In particular, by taking p = 2, we necessarily have

D(X||G_X) ≤ (1/2) log(πe/2). (35)

For a given distribution of X, one can optimize over p to further tighten (35), as seen in (29) for the uniform distribution.
We now present a generalization of the bound in Theorem 1 to random vectors satisfying a symmetry condition. A function f : R n → R is called unconditional if, for every (x 1 , . . . , x n ) ∈ R n and every (ε 1 , . . . , ε n ) ∈ {−1, 1} n , one has For example, the probability density function of the standard Gaussian distribution is unconditional. We say that a random vector X in R n is unconditional if it has a probability density function f X with respect to Lebesgue measure in R n such that f X is unconditional.
Theorem 4. Let X be a symmetric log-concave random vector in R^n, n ≥ 2. Then,

h(X) ≥ (n/2) log(|K_X|^{1/n} / c(n)), (37)

where |K_X| denotes the determinant of the covariance matrix of X, and c(n) = e²n²/(4√2 (n + 2)). If, in addition, X is unconditional, then c(n) = e²/2.
By combining Theorem 4 with the well-known upper bound on the differential entropy,

h(X) ≤ (n/2) log(2πe |K_X|^{1/n}), (38)

we deduce that, for every symmetric log-concave random vector X in R^n, the relative entropy to the matched Gaussian is at most (n/2) log(2πe c(n)), where c(n) = e²n²/(4√2 (n + 2)) in general, and c(n) = e²/2 if, in addition, X is unconditional. Using Theorem 4, we immediately obtain the following upper bound for the relative entropy D(X||G_X) between a symmetric log-concave random vector X and a Gaussian G_X with the same covariance matrix as that of X. Corollary 3. Let X be a symmetric log-concave random vector in R^n. Then,

D(X||G_X) ≤ (n/2) log(2πe c(n)), (39)

with c(n) = e²n²/(4√2 (n + 2)) in general, and c(n) = e²/2 when X is unconditional.
For isotropic unconditional log-concave random vectors (whose definition we recall in Section 3.3 below), we extend Theorem 4 to other moments. Theorem 5. Let X = (X_1, ..., X_n) be an isotropic unconditional log-concave random vector. Then, for every p > −1, the bound of Theorem 1 holds coordinate-wise up to a factor c, where c = e√6. If, in addition, f_X is invariant under permutations of coordinates, then c = e.

Extension to γ-Concave Random Variables
The bound in Theorem 1 can be extended to a larger class of random variables than log-concave, namely the class of γ-concave random variables that we describe next.
Let γ < 0. We say that a probability density function f : R^n → [0, ∞) is γ-concave if for every λ ∈ [0, 1] and every x, y ∈ R^n with f(x) f(y) > 0, one has

f(λx + (1 − λ)y) ≥ (λ f(x)^γ + (1 − λ) f(y)^γ)^{1/γ}. (41)

As γ → 0, (41) agrees with (22), and thus 0-concave distributions correspond to log-concave distributions. The class of γ-concave distributions has been deeply studied in [26,27].
For example, extended Cauchy distributions, that is, distributions of the form

f(x) = C_γ (1 + x²)^{1/(2γ)}, x ∈ R,

where C_γ is the normalization constant, are γ-concave distributions (but are not log-concave). We say that a random vector X in R^n is γ-concave if it has a probability density function f_X with respect to Lebesgue measure in R^n such that f_X is γ-concave.
We derive the following lower bound on the differential entropy for one-dimensional symmetric γ-concave random variables, with γ ∈ (−1, 0).

Reverse Entropy Power Inequality with an Explicit Constant
As an application of Theorems 3 and 4, we establish in Theorems 7 and 8 below a reverse form of the entropy power inequality (7) with explicit constants, for uncorrelated log-concave random vectors. Recall the definition of the entropy power (8).
Theorem 7. Let X and Y be uncorrelated log-concave random variables. Then,

N(X + Y) ≤ (πe/2) (N(X) + N(Y)). (46)

As a consequence of Corollary 4, reverse entropy power inequalities for more general distributions can be obtained; in particular, an inequality of the form (46), with a constant depending only on γ, holds for any uncorrelated symmetric γ-concave random variables X and Y, with γ ∈ (−1/3, 0). One cannot have a reverse entropy power inequality in higher dimensions for arbitrary log-concave random vectors. Indeed, just consider X uniformly distributed on [0, ε] × [0, 1/ε] and Y uniformly distributed on [0, 1/ε] × [0, ε], with ε > 0 small enough so that N(X) and N(Y) are arbitrarily small compared to N(X + Y). Hence, we need to put X and Y in a certain position so that a reverse form of (7) is possible. While the isotropic position (discussed in Section 3) will work, it can be relaxed to the weaker condition that the covariance matrices are proportional. Recall that we denote by K_X the covariance matrix of X. Theorem 8. Let X and Y be uncorrelated symmetric log-concave random vectors in R^n such that K_X and K_Y are proportional. Then,

N(X + Y) ≤ 2πe c(n) (N(X) + N(Y)), (47)

where c(n) = e²n²/(4√2 (n + 2)). If, in addition, X and Y are unconditional, then c(n) = e²/2.
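The thin-rectangle counterexample can be made quantitative. In the sketch below (our own numerical illustration; the specific rectangles X ~ Unif([0, ε] × [0, 1/ε]) and Y ~ Unif([0, 1/ε] × [0, ε]) are an assumed instance of the construction), entropy powers are computed from (8), and the lower bound h(X + Y) ≥ 2 log(1/ε), which holds because each coordinate of X + Y contains an independent U[0, 1/ε] summand, shows the ratio N(X + Y)/(N(X) + N(Y)) growing like 1/(2ε²):

```python
import math

def entropy_power(h, n):
    # N(X) = exp(2 h(X) / n) / (2 pi e), cf. (8)
    return math.exp(2.0 * h / n) / (2.0 * math.pi * math.e)

n = 2
for eps in [0.1, 0.01, 0.001]:
    h_X = 0.0                       # both rectangles have volume 1, so h(X) = h(Y) = 0
    h_Y = 0.0
    h_sum_lb = 2.0 * math.log(1.0 / eps)   # lower bound on h(X + Y)
    ratio = entropy_power(h_sum_lb, n) / (entropy_power(h_X, n) + entropy_power(h_Y, n))
    # the ratio N(X+Y)/(N(X)+N(Y)) is at least 1/(2 eps^2), which blows up as eps -> 0
    assert ratio >= (1.0 / (2.0 * eps ** 2)) * (1.0 - 1e-9)
```

This is why some positioning assumption (isotropy, or proportional covariance matrices as in Theorem 8) is unavoidable.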

New Bounds on the Rate-distortion Function
As an application of Theorems 1 and 3, we show in Corollary 5 below that, in the class of one-dimensional log-concave distributions, the rate-distortion function does not exceed the Shannon lower bound by more than log(√(πe)) ≈ 1.55 bits (which can be refined to log(e) ≈ 1.44 bits when the source is symmetric), independently of d and r ≥ 1. Denote for brevity

R̲_X(d) ≜ h(X) − log(α_r d^{1/r}),

and recall the definition of α_r in (3). We start by giving a bound on the difference between the rate-distortion function and the Shannon lower bound, which applies to general, not necessarily log-concave, random variables. Theorem 9. Let d ≥ 0 and r ≥ 1. Let X be an arbitrary random variable.
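The gap constants appearing in this section, together with the mean-square error constant log(√(πe/2)) quoted in the abstract, evaluate in bits as follows (a small numerical check of ours, not part of the paper):

```python
import math

LOG2 = math.log(2.0)

gap_general   = math.log(math.sqrt(math.pi * math.e)) / LOG2        # arbitrary log-concave, any r >= 1
gap_symmetric = math.log(math.e) / LOG2                             # symmetric log-concave, any r >= 1
gap_mse       = math.log(math.sqrt(math.pi * math.e / 2.0)) / LOG2  # mean-square error (r = 2)

assert abs(gap_general - 1.547) < 0.001    # ~1.55 bits
assert abs(gap_symmetric - 1.4427) < 0.001 # ~1.44 bits (= log2(e))
assert abs(gap_mse - 1.047) < 0.001        # ~1.05 bits
```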

Remark 3.
For Gaussian X and r = 2, the upper bound in (50) is 0, as expected.
The next result refines the bounds in Theorem 9 for symmetric log-concave random variables when r > 2.
Theorem 10. Let d ≥ 0 and r > 2. Let X be a symmetric log-concave random variable. To bound R_X(d) − R̲_X(d) independently of the distribution of X, we apply the bound (35) on D(X||G_X) to Theorems 9 and 10: Corollary 5. Let X be a log-concave random variable. Then R_X(d) − R̲_X(d) is bounded by an explicit function of r alone: one bound for r ∈ [1, 2], another for r > 2, and a tighter one for r > 2 when X is, in addition, symmetric.

Figure 1. Upper bounds of Corollary 5 on R_X(d) − R̲_X(d) as a function of r: (a) arbitrary log-concave source; (b) symmetric log-concave source.

One can see that the graph in Figure 1b is continuous at r = 2, contrary to the graph in Figure 1a. This is because Theorem 2, which applies to symmetric log-concave random variables, is strong enough to imply the tightening of (51) given in (52), while Proposition 1, which provides a counterpart of Theorem 2 applicable to all log-concave random variables, is insufficient to derive a similar tightening in that more general setting.

Remark 4.
While Corollary 5 bounds the difference R_X(d) − R̲_X(d) by a universal constant independent of the distribution of X, tighter bounds can be obtained if one is willing to relinquish such universality. For example, for mean-square distortion (r = 2) and a uniformly distributed source U, using Remark 1, we obtain

R_U(d) − R̲_U(d) ≤ (1/2) log(πe/6) ≈ 0.254 bit.

Theorem 9 easily extends to a random vector X in R^n, n ≥ 2, with a similar proof. The only difference is an extra term of (n/2) log((1/n) ||X||₂² / |K_X|^{1/n}) that will appear on the right-hand side of (50) and (51), and will come from the upper bound on the differential entropy (38). Here, ||X||₂² = E[Σ_{i=1}^n X_i²]. As a result, the bound on R_X(d) − R̲_X(d) can be arbitrarily large in higher dimensions because of the term (1/n) ||X||₂² / |K_X|^{1/n}. However, for isotropic random vectors (whose definition we recall in Section 3.3 below), one has (1/n) ||X||₂² = |K_X|^{1/n}. Hence, using the bound (39) on D(X||G_X), we can bound R_X(d) − R̲_X(d) independently of the distribution of an isotropic log-concave random vector X in R^n, n ≥ 2. Corollary 6. Let X be an isotropic log-concave random vector in R^n, n ≥ 2. Then, R_X(d) − R̲_X(d) is bounded by an explicit constant involving c(n), where c(n) = n²e²/(4√2 (n + 2)) in general, and c(n) = e²/2 if, in addition, X is unconditional.
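Remark 4's uniform-source constant can be computed directly (our own check, not part of the paper): for U uniform on [−a/2, a/2] and a Gaussian G_U matched in mean and variance a²/12, one has D(U||G_U) = h(G_U) − h(U) = (1/2) log(πe/6), which is exactly the Gish-Pierce constant ≈ 0.254 bit:

```python
import math

a = 1.0
h_U = math.log(a)                                    # entropy of Unif[-a/2, a/2]
var_U = a * a / 12.0
h_G = 0.5 * math.log(2.0 * math.pi * math.e * var_U) # entropy of the matched Gaussian
d_kl = h_G - h_U                                     # D(U||G_U), valid for matched moments

assert abs(d_kl - 0.5 * math.log(math.pi * math.e / 6.0)) < 1e-12
assert abs(d_kl / math.log(2.0) - 0.254) < 0.001     # ~0.254 bit, independent of a
```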
Let us consider the rate-distortion function under the determinant constraint for random vectors in R n , n ≥ 2: where the infimum is taken over all joint distributions satisfying the determinant constraint |K X−X | 1 n ≤ d. For this distortion measure, we have the following bound.

New Bounds on the Capacity of Memoryless Additive Channels
As another application of Theorem 3, we compare the capacity C_Z of a channel with log-concave additive noise Z with the capacity of the Gaussian channel. Recall that the capacity of the Gaussian channel with noise variance Var[Z] is

(1/2) log(1 + P/Var[Z]). (62)
Theorem 12. Let Z be a log-concave random variable. Then,

(1/2) log(1 + P/Var[Z]) ≤ C_Z(P) ≤ (1/2) log(1 + P/Var[Z]) + (1/2) log(πe/2). (63)

Remark 5. Theorem 12 tells us that the capacity of a channel with log-concave additive noise exceeds the capacity of a Gaussian channel with the same noise power by no more than (1/2) log₂(πe/2) ≈ 1.05 bits.
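Numerically, the two-sided bound of Theorem 12 pins down C_Z(P) to within about one bit whatever the log-concave noise; the helper below (our own sketch) evaluates both sides for a given power budget and confirms that the gap does not depend on P:

```python
import math

def capacity_bounds(P, var_z):
    # Theorem 12: C_gauss <= C_Z(P) <= C_gauss + 0.5*log(pi*e/2), in nats
    c_gauss = 0.5 * math.log(1.0 + P / var_z)
    return c_gauss, c_gauss + 0.5 * math.log(math.pi * math.e / 2.0)

gap = 0.5 * math.log(math.pi * math.e / 2.0)
for P in [0.1, 1.0, 10.0, 100.0]:
    lo, hi = capacity_bounds(P, var_z=1.0)
    assert abs((hi - lo) - gap) < 1e-12          # gap independent of P
assert gap / math.log(2.0) < 1.05                # ~1.047 bits
```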
As an application of Theorem 4, we can provide bounds for the capacity of a channel with log-concave additive noise Z in R^n, n ≥ 1. The formula for capacity (18) generalizes to dimension n as

C_Z(P) = sup_{(1/n) E||X||₂² ≤ P} I(X; X + Z). (64)

Theorem 13. Let Z be a symmetric log-concave random vector in R^n. Then, the difference between C_Z(P) and the capacity of the Gaussian channel with the same noise power is bounded in terms of the ratio (1/n) ||Z||₂² / |K_Z|^{1/n} and the constant c(n) = n²e²/(4√2 (n + 2)). If, in addition, Z is unconditional, then c(n) = e²/2.
The upper bound in Theorem 13 can be arbitrarily large because of the ratio (1/n) ||Z||₂² / |K_Z|^{1/n}. For isotropic random vectors (whose definition is recalled in Section 3.3 below), one has (1/n) ||Z||₂² = |K_Z|^{1/n}, and the following corollary follows.

Proof of Theorem 1
The key to our development is the following result for one-dimensional log-concave distributions, well known in convex geometry. It can be found in [28], in a slightly different form. Lemma 1 ([28]). Let f : [0, +∞) → [0, +∞) be a log-concave function. Then, the function

F(r) = (1/Γ(r + 1)) ∫₀^∞ x^r f(x) dx

is log-concave on (−1, +∞).
Proof of Theorem 1. Let p > 0. Applying Lemma 1 to the values −1, 0, p, we have

F(0)^{p+1} ≥ F(−1)^p F(p). (68)

The bound in Theorem 1 follows by computing the values F(−1), F(0) and F(p) for f = f_X. One has

F(0) = ∫₀^∞ f_X(x) dx = 1/2,  F(p) = (1/Γ(p + 1)) ∫₀^∞ x^p f_X(x) dx = E[|X|^p] / (2 Γ(p + 1)). (69)

To compute F(−1), we first provide a different expression for F(r). Since f_X is symmetric and log-concave, f_X is non-increasing on [0, +∞); denote its generalized inverse by f_X^{−1}. We deduce that

F(−1) = f_X(0). (73)

Plugging (69) and (73) into (68), we obtain

(2 f_X(0))^p E[|X|^p] ≤ Γ(p + 1). (74)

Since f_X attains its maximum at 0, h(X) = −E[log f_X(X)] ≥ −log f_X(0), and it follows immediately that

h(X) ≥ −log f_X(0) ≥ log(2 ||X||_p) − (1/p) log Γ(p + 1).

For p ∈ (−1, 0), the bound is obtained similarly by applying Lemma 1 to the values −1, p, 0. We now show that equality is attained, by letting p ↓ −1, by U uniformly distributed on a symmetric interval [−a/2, a/2], for some a > 0. In this case, we have f_U(0) = 1/a and E[|U|^p] = (a/2)^p / (p + 1). Hence, the lower bound equals log(a (p + 1)^{−1/p} / Γ(p + 1)^{1/p}), which tends to log a = h(U) as p ↓ −1. Remark 6. From (74), we see that the following statement holds: for every symmetric log-concave random variable X ∼ f_X and for every p > −1,

(2 f_X(0))^p E[|X|^p] ≤ Γ(p + 1). (78)

Inequality (78) is the main ingredient in the proof of Theorem 1. It is instructive to provide a direct proof of inequality (78) without appealing to Lemma 1, the ideas going back to [25]: Proof of inequality (78). By considering X|X ≥ 0, where X is symmetric log-concave, it is enough to show that for every log-concave density f supported on [0, +∞), one has

∫₀^∞ x^p f(x) dx ≤ Γ(p + 1) / f(0)^p. (79)

By a scaling argument, one may assume that f(0) = 1. Take g(x) = e^{−x}. If f = g, then the result follows by a straightforward computation. Assume that f ≠ g. Since ∫f = ∫g and f ≠ g, the function f − g changes sign at least once. However, since f(0) = g(0), f is log-concave and g is log-affine, the function f − g changes sign exactly once. It follows that there exists a unique point x₀ > 0 such that for every 0 < x < x₀, f(x) ≥ g(x), and for every x > x₀, f(x) ≤ g(x). We deduce that for every x > 0 and p > 0,

x^p (f(x) − g(x)) ≤ x₀^p (f(x) − g(x)). (80)

Integrating over x > 0, we arrive at

∫₀^∞ x^p f(x) dx − ∫₀^∞ x^p g(x) dx ≤ x₀^p ∫₀^∞ (f(x) − g(x)) dx = 0, (81)

which yields the desired result.
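Inequality (79) is easy to probe numerically; the sketch below (our own check, using the half-Gaussian |N(0,1)| as a test density with closed-form moments) verifies E[X^p] ≤ Γ(p+1)/f(0)^p, the exponential density e^{−x} being the equality case:

```python
import math

def halfgauss_moment(p):
    # closed form for E[X^p] with X = |N(0,1)|: 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)
    return 2.0 ** (p / 2.0) * math.gamma((p + 1.0) / 2.0) / math.sqrt(math.pi)

f0 = math.sqrt(2.0 / math.pi)      # half-Gaussian density evaluated at 0
for p in [0.5, 1.0, 2.0, 4.0]:
    # inequality (79): E[X^p] <= Gamma(p+1) / f(0)^p for a log-concave density on [0, inf)
    assert halfgauss_moment(p) <= math.gamma(p + 1.0) / f0 ** p + 1e-12
```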
Actually, the powerful and versatile result of Lemma 1, which implies (78), is also proved using the technique in (79)-(81). In the context of information theory, Lemma 1 has been previously applied to obtain reverse entropy power inequalities [7], as well as to establish optimal concentration of the information content [29]. In this paper, we make use of Lemma 1 to prove Theorem 1. Moreover, Lemma 1 immediately implies Theorem 2. Below, we recall the argument for completeness.
Proof of Theorem 2. The result follows by applying Lemma 1 to the values 0, p, q. If 0 < p < q, then

F(p) ≥ F(0)^{1 − p/q} F(q)^{p/q}.

Hence,

E[|X|^p] / (2 Γ(p + 1)) ≥ (1/2)^{1 − p/q} (E[|X|^q] / (2 Γ(q + 1)))^{p/q},

which yields the desired result. The bound is obtained similarly if p < q < 0 or if p < 0 < q.

Proof of Theorem 3 and Proposition 1
The proof leverages the ideas from [10].
Proof of Theorem 3. Let Y be an independent copy of X. Jensen's inequality yields

−h(X) = E[log f_X(X)] ≤ log E[f_X(X)] = log f_{X−Y}(0), (84)

where we used f_{X−Y}(0) = ∫ f_X(x)² dx = E[f_X(X)]. Since X − Y is symmetric and log-concave, we can apply inequality (74) to X − Y to obtain

−log f_{X−Y}(0) ≥ log(2 ||X − Y||_p) − (1/p) log Γ(p + 1) ≥ log(2 ||X − E[X]||_p) − (1/p) log Γ(p + 1), (85)

where the last inequality again follows from Jensen's inequality. Combining (84) and (85) leads to the desired result. For p = 2, one may tighten (85) by noticing that

||X − Y||₂² = 2 Var[X].

Hence,

h(X) ≥ log(2 √(2 Var[X])) − (1/2) log Γ(3) = (1/2) log(4 Var[X]).

Proof of Proposition 1. Let Y be an independent copy of X. Since X − Y is symmetric and log-concave, we can apply Theorem 2 to X − Y. Jensen's inequality and the triangle inequality yield:

||X − E[X]||_q ≤ ||X − Y||_q ≤ (Γ(q + 1)^{1/q} / Γ(p + 1)^{1/p}) ||X − Y||_p ≤ (2 Γ(q + 1)^{1/q} / Γ(p + 1)^{1/p}) ||X − E[X]||_p.
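The p = 2 bound h(X) ≥ (1/2) log(4 Var[X]) obtained above can be spot-checked against standard log-concave laws with known entropies (our own sanity check; all values in nats):

```python
import math

# (name, differential entropy h, variance) for three standard log-concave laws
cases = [
    ("exponential(1)", 1.0, 1.0),                    # h = 1, Var = 1
    ("uniform[0,1]",   0.0, 1.0 / 12.0),             # h = 0, Var = 1/12
    ("laplace(1)",     1.0 + math.log(2.0), 2.0),    # h = 1 + log 2, Var = 2
]
for name, h, var in cases:
    # Theorem 3 at p = 2: h(X) >= 0.5 * log(4 * Var[X])
    assert h >= 0.5 * math.log(4.0 * var) - 1e-12, name
```

The bound is within about a third of a nat for the exponential law, illustrating that log-concave entropies are pinned down by the variance alone up to a modest constant.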

Proof of Theorem 4
We say that a random vector X ∼ f_X is isotropic if X is symmetric and, for all unit vectors θ ∈ R^n,

E[⟨X, θ⟩²] = m_X²

for some constant m_X > 0. Equivalently, X is isotropic if its covariance matrix K_X is a multiple of the identity matrix: K_X = m_X² I_n, for some constant m_X > 0. The constant

L_X = ||f_X||_∞^{1/n} m_X

is called the isotropic constant of X.
It is well known that L_X is bounded from below by a positive constant independent of the dimension [30]. A long-standing conjecture in convex geometry, the hyperplane conjecture, asks whether the isotropic constant of an isotropic log-concave random vector is also bounded from above by a universal constant (independent of the dimension). This conjecture holds under additional assumptions but, in full generality, L_X is known to be bounded only by a constant that depends on the dimension. For further details, we refer the reader to [31]. We will use the following upper bounds on L_X (see [32] for the best dependence on the dimension to date).

Lemma 2.
Let X be an isotropic log-concave random vector in R^n, with n ≥ 2. Then, L_X² ≤ n²e²/(4√2 (n + 2)). If, in addition, X is unconditional, then L_X² ≤ e²/2. If X is uniformly distributed on a convex set, these bounds hold without the factor e².
Even though the bounds in Lemma 2 are well known, we could not find a reference in the literature. We thus include a short proof for completeness.
Since f_X is unconditional and log-concave, it follows that f_{X_i} is symmetric and log-concave, so inequality (74) applies to f_{X_i}:

(2 f_{X_i}(0))² E[X_i²] ≤ Γ(3) = 2. (100)

We apply Lemma 3 to pass from f_{X_i} to f_X on the right side of (100), and the claimed bound follows.

Extension to γ-Concave Random Variables
In this section, we prove Theorem 6, which extends Theorem 1 to the class of γ-concave random variables, with γ < 0. First, we need the following key lemma, which extends Lemma 1.
One can recover Lemma 1 from Lemma 4 by letting γ tend to 0 from below.
Proof of Theorem 6. Let us first consider the case p ∈ (−1, 0). Let us denote by f_X the probability density function of X. By applying Lemma 4 to the values −1, p, 0, we obtain the analog of inequality (68). From the proof of Theorem 1, we deduce that F(−1) = f_X(0). Computing the remaining values of F for γ ∈ (−1, 0), the bound on the differential entropy follows. For the case p ∈ (0, −1 − 1/γ), the bound is obtained similarly by applying Lemma 4 to the values −1, 0, p.

Proof of Theorem 7
Proof. Using the upper bound on the differential entropy (1), we have

N(X + Y) = (1/(2πe)) e^{2h(X+Y)} ≤ Var[X + Y] = Var[X] + Var[Y],

the last equality being valid since X and Y are uncorrelated. Using inequality (32), N(X) ≥ (2/(πe)) Var[X] and N(Y) ≥ (2/(πe)) Var[Y]. We conclude that

N(X + Y) ≤ (πe/2) (N(X) + N(Y)).

Proof of Theorem 8
Proof. Since X and Y are uncorrelated and K_X and K_Y are proportional,

|K_{X+Y}|^{1/n} = |K_X|^{1/n} + |K_Y|^{1/n}. (110)

Using (110) and the upper bound on the differential entropy (38), we obtain

N(X + Y) ≤ |K_{X+Y}|^{1/n} = |K_X|^{1/n} + |K_Y|^{1/n}.

Using Theorem 4, we conclude that

N(X + Y) ≤ 2πe c(n) (N(X) + N(Y)),

where c(n) = e²n²/(4√2 (n + 2)) in general, and c(n) = e²/2 if X and Y are unconditional.
Proof of Theorem 9

(1) Let r ∈ [1, 2]. Assume that σ > d^{1/r}. We take X̂ = X + W, where W ∼ N(0, d^{2/r}) is independent of X. This choice of X̂ is admissible since

E|X − X̂|^r = E|W|^r ≤ (E[W²])^{r/2} = d,

where we used r ≤ 2 and the left-hand side of inequality (26). Upper-bounding the rate-distortion function by the mutual information between X and X̂, we obtain the desired bound.

We now compare the capacity C_Z of a channel with log-concave additive noise with the capacity of the Gaussian channel.
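The admissibility step can be verified in closed form for the Gaussian test channel (our own check; the choice W ∼ N(0, d^{2/r}) is an assumption consistent with the proof): for a Gaussian W, E|W|^r has an explicit expression, and Jensen's inequality E|W|^r ≤ (E[W²])^{r/2} holds for r ≤ 2:

```python
import math

def gauss_abs_moment(r, var):
    # E|W|^r for W ~ N(0, var): var^(r/2) * 2^(r/2) * Gamma((r+1)/2) / sqrt(pi)
    return var ** (r / 2.0) * 2.0 ** (r / 2.0) * math.gamma((r + 1.0) / 2.0) / math.sqrt(math.pi)

d = 0.3
for r in [1.0, 1.5, 2.0]:
    var = d ** (2.0 / r)                       # variance chosen so that (E W^2)^(r/2) = d
    # admissibility: E|X - Xhat|^r = E|W|^r <= d, with equality at r = 2
    assert gauss_abs_moment(r, var) <= d + 1e-12
```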

Proof of Theorem 12
Proof. The lower bound is well known, as mentioned in (19). To obtain the upper bound, we first use the upper bound on the differential entropy (1) to conclude that h(X + Z) ≤ (1/2) log(2πe (P + Var[Z])) for every random variable X such that E[X²] ≤ P. By combining (140), (141) and (32), we obtain

C_Z(P) ≤ (1/2) log(2πe (P + Var[Z])) − h(Z) ≤ (1/2) log(1 + P/Var[Z]) + (1/2) log(πe/2),

which is the desired result.

Proof of Theorem 13
Proof. The lower bound is well known, as mentioned in (19). To obtain the upper bound, we write the chain of inequalities (143), with c(n) = n²e²/(4√2 (n + 2)) in general, and c(n) = e²/2 if Z is unconditional. The first inequality in (143) is obtained from the upper bound on the differential entropy (38). The last inequality in (143) is obtained by applying the arithmetic-geometric mean inequality and Theorem 4.

Conclusions
Several recent results show that the entropy of log-concave probability densities has nice properties. For example, reverse, strengthened and stable versions of the entropy power inequality were recently obtained for log-concave random vectors (see e.g., [3,11,35-38]). This line of developments suggests that, in some sense, log-concave random vectors behave like Gaussians.
Our work follows this line of results, by establishing a new lower bound on differential entropy for log-concave random variables in (4), for log-concave random vectors with possibly dependent coordinates in (37), and for γ-concave random variables in (43). We made use of the new lower bounds in several applications. First, we derived reverse entropy power inequalities with explicit constants for uncorrelated, possibly dependent log-concave random vectors in (12) and (47). We also showed a universal bound on the difference between the rate-distortion function and the Shannon lower bound for log-concave random variables in Figure 1a and Figure 1b, and for log-concave random vectors in (59). Finally, we established an upper bound on the capacity of memoryless additive noise channels when the noise is a log-concave random vector in (20) and (66).
Under the Gaussian assumption, information-theoretic limits in many communication scenarios admit simple closed-form expressions. Our work demonstrates that, at least in three such scenarios (source coding, channel coding and joint source-channel coding), the information-theoretic limits admit a closed-form approximation with at most 1 bit of error if the Gaussian assumption is relaxed to the log-concave one. We hope that the approach will be useful in gaining insights into those communication and data processing scenarios in which the Gaussianity of the observed distributions is violated but the log-concavity is preserved.