
A New Tight Upper Bound on the Entropy of Sums

Department of Electrical and Computer Engineering, American University of Beirut, Beirut 1107 2020, Lebanon
* Author to whom correspondence should be addressed.
Entropy 2015, 17(12), 8312-8324; https://doi.org/10.3390/e17127881
Submission received: 18 August 2015 / Revised: 8 December 2015 / Accepted: 14 December 2015 / Published: 19 December 2015
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

We consider the independent sum of a given random variable with a Gaussian variable and an infinitely divisible one. We find a novel tight upper bound on the entropy of the sum which still holds when the variable possibly has an infinite second moment. The proven bound has several implications on both information theoretic problems and infinitely divisible noise channels’ transmission rates.

1. Introduction

Information inequalities have been investigated since the foundation of information theory. Two such important ones are due to Shannon [1]:
  • The first one is an upper bound on the entropy of Random Variables (RVs) having a finite second moment, by virtue of the fact that Gaussian distributions maximize entropy under a second moment constraint (the (differential) entropy h(Y) of a random variable Y having a probability density function p(y) is defined as:
    $$h(Y) = -\int_{-\infty}^{+\infty} p(y) \ln p(y)\, dy,$$
    whenever the integral exists). For any independent RVs X and Z having respectively finite variances $\sigma_X^2$ and $\sigma_Z^2$, we have
    $$h(X+Z) \leq \frac{1}{2} \ln\left(2\pi e \left(\sigma_X^2 + \sigma_Z^2\right)\right).$$
  • The second one is a lower bound on the entropy of independent sums of RVs and is commonly known as the Entropy Power Inequality (EPI). The EPI states that given two real independent RVs X, Z such that h(X), h(Z) and h(X+Z) exist, then (Corollary 3, [2])
    $$N(X+Z) \geq N(X) + N(Z),$$
    where N(X) is the entropy power of X and is equal to
    $$N(X) = \frac{1}{2\pi e}\, e^{2 h(X)}.$$
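As a quick sanity check (ours, not part of the article), the following minimal Python sketch evaluates the Gaussian closed forms h(N(0, v)) = ½ ln(2πev) and N(X) = e^{2h(X)}/(2πe) and confirms that both Equation (1) and the EPI in Equation (2) are met with equality for independent Gaussians; the variances are illustrative values.

```python
# Sketch (not from the paper): for independent Gaussians X ~ N(0, vx) and
# Z ~ N(0, vz), the sum is N(0, vx + vz), so Shannon's bound (1) and the
# EPI (2) should both be met with equality.
import math

def h_gauss(var):
    # differential entropy of N(0, var), in nats
    return 0.5 * math.log(2 * math.pi * math.e * var)

def entropy_power(h):
    # N(X) = exp(2 h(X)) / (2 pi e)
    return math.exp(2 * h) / (2 * math.pi * math.e)

vx, vz = 1.7, 0.4                                  # illustrative variances
h_sum = h_gauss(vx + vz)                           # exact h(X + Z)
shannon_bound = 0.5 * math.log(2 * math.pi * math.e * (vx + vz))   # Eq. (1)
epi_lhs = entropy_power(h_sum)                                     # N(X + Z)
epi_rhs = entropy_power(h_gauss(vx)) + entropy_power(h_gauss(vz))  # N(X) + N(Z)

print(h_sum, shannon_bound)    # identical: (1) is tight for Gaussians
print(epi_lhs, epi_rhs)        # identical: (2) is tight for Gaussians
```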
While Shannon proposed Equation (2) and proved it locally around the normal distribution, Stam [3] was the first to prove this result in general, followed by Blachman [4] in what is considered to be a simplified proof. The proof was carried out using two information-theoretic results:
1-
The Fisher Information Inequality (FII): Let X and Z be two independent RVs such that the respective Fisher informations J(X) and J(Z) exist (the Fisher information J(Y) of a random variable Y having a probability density function p(y) is defined as:
$$J(Y) = \int_{-\infty}^{+\infty} \frac{p'^2(y)}{p(y)}\, dy,$$
whenever the derivative and the integral exist). Then
$$\frac{1}{J(X+Z)} \geq \frac{1}{J(X)} + \frac{1}{J(Z)}.$$
2-
De Bruijn's identity: For any ϵ > 0,
$$\frac{d}{d\epsilon}\, h\left(X + \sqrt{\epsilon}\, Z\right) = \frac{\sigma^2}{2}\, J\left(X + \sqrt{\epsilon}\, Z\right),$$
where Z is a Gaussian RV with mean zero and variance σ² that is independent of X.
Rioul proved that de Bruijn's identity holds at ϵ = 0⁺ for any finite-variance RV Z (Proposition 7, p. 39, [5]).
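The following minimal sketch (ours, not the authors') illustrates the two tools in the Gaussian case, where J(N(0, v)) = 1/v: the FII in Equation (3) holds with equality, and de Bruijn's identity in Equation (4) is reproduced by a finite difference; all numerical values are illustrative.

```python
# Sketch (not from the paper): Gaussian closed-form check of the FII (3)
# and a finite-difference check of de Bruijn's identity (4).
import math

def h_gauss(var):
    return 0.5 * math.log(2 * math.pi * math.e * var)

def J_gauss(var):
    # Fisher information of N(0, var)
    return 1.0 / var

vx, sigma_sq = 2.0, 0.5          # Var(X) and the variance of the perturbation Z

# FII: for independent Gaussians, 1/J(X + Z) = 1/J(X) + 1/J(Z) exactly
print(1 / J_gauss(vx + sigma_sq), 1 / J_gauss(vx) + 1 / J_gauss(sigma_sq))

# de Bruijn: d/d(eps) h(X + sqrt(eps) Z) = (sigma^2 / 2) J(X + sqrt(eps) Z)
eps, d = 0.3, 1e-7
lhs = (h_gauss(vx + (eps + d) * sigma_sq) - h_gauss(vx + eps * sigma_sq)) / d
rhs = 0.5 * sigma_sq * J_gauss(vx + eps * sigma_sq)
print(lhs, rhs)                  # agree up to the finite-difference error
```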
The remarkable similarity between Equations (2) and (3) was pointed out by Stam [3], who in addition related the entropy power and the Fisher information by an “uncertainty principle-type” relation:
$$N(X)\, J(X) \geq 1,$$
which is commonly known as the Isoperimetric Inequality for Entropies (IIE) (Theorem 16, [6]). Interestingly, equality holds in Equation (5) whenever X is Gaussian distributed, and in Equations (1)–(3) whenever X and Z are independent Gaussian.
When it comes to upper bounds, a bound on the discrete entropy of the sum exists [7]:
$$H(X+Z) \leq H(X) + H(Z).$$
In addition, several identities involving the discrete entropy of sums were shown in [8,9] using the Plünnecke-Ruzsa sumset theory and its analogy to Shannon entropy. Except for Equation (1), which holds for finite-variance RVs, the differential entropy inequalities above provide in some sense a lower bound on the entropy of sums of independent RVs. Equation (6) does not always hold for differential entropies and, unless the variance is finite, if we start with two RVs X and Z having respectively finite differential entropies h(X) and h(Z), one does not have a clear idea of how much h(X+Z) can grow. The authors in [10] attributed this to the fact that discrete entropy has a functional submodularity property, which is not the case for differential entropy. Nevertheless, they were able to derive various useful inequalities. Madiman [11] used basic information theoretic relations to prove the submodularity of the entropy of independent sums and accordingly found upper bounds on the discrete and differential entropy of sums. Though, in its general form, upper bounding the differential entropy of independent sums is not always possible (Proposition 4, [2]), several results are known in particular settings. Cover et al. [12] solved the problem of maximizing the differential entropy of the sum of dependent RVs having the same marginal log-concave densities. In [13], Ordentlich found the maximizing probability distribution for the differential entropy of the independent sum of n finitely supported symmetric RVs. For “sufficiently convex” probability distributions, an interesting reverse EPI was proven to hold in (Theorem 1.1, p. 63, [14]).
In this study, we find a tight upper bound on the (differential) entropy of the independent sum of a RV X, not necessarily having a finite variance, with an infinitely divisible variable having a Gaussian component. The proof is based on the infinite divisibility property together with the FII, de Bruijn's identity, a proven concavity result for the differential entropy and a novel “de Bruijn type” identity. We use convolutions along small perturbations to upper bound some relevant information theoretic quantities, as done in [15], where some moment constraints were imposed on X, which is not the case here. The novel bound presented in this paper is, for example, useful when studying Gaussian channels or when the additive noise is modeled as a combination of Gaussian and Poisson variables [16]. It has several implications, which are listed in Section 2, and it can possibly be used for variables with infinite second moments. Even when the second moment of X is finite, in some cases our bound can be tighter than Equation (1).

2. Main Result

We consider the following scalar RV,
Y = X + Z ,
where Z = Z 1 + Z 2 is a composite infinitely divisible RV that is independent of X and where:
  • $Z_1 \sim \mathcal{N}(\mu_1, \sigma_1^2)$ is a Gaussian RV with mean $\mu_1$ and positive variance $\sigma_1^2$.
  • $Z_2$ is an infinitely divisible RV with mean $\mu_2$ and finite (possibly zero) variance $\sigma_2^2$ that is independent of $Z_1$.
We note that, since $Z_1$ is absolutely continuous with a bounded Probability Density Function (PDF), so are $Z = Z_1 + Z_2$ and $Y = X + Z$ for any RV X [17]. In addition, we define the set $\mathcal{L}$ of distribution functions F(x) that have a finite logarithmic moment:
$$\mathcal{L} = \left\{ F : \int \ln\left(1+|x|\right) dF(x) < +\infty \right\}.$$
Let E[·] denote the expectation operator. Using the identity $\ln\left(1+|x+n|\right) \leq \ln\left(1+|x|\right) + \ln\left(1+|n|\right)$,
$$E\left[\ln\left(1+|Y|\right)\right] = E\left[\ln\left(1+|X+Z|\right)\right] \leq E\left[\ln\left(1+|X|\right)\right] + E\left[\ln\left(1+|Z|\right)\right].$$
Since Z has a bounded PDF with finite variance, it necessarily has a finite logarithmic moment and, by virtue of Equation (8), if $X \in \mathcal{L}$ then $Y \in \mathcal{L}$.
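As a small numerical sanity check (ours, not part of the paper) of the pointwise inequality used above, the sketch below samples a heavy-tailed X (a Cauchy variable, an arbitrary illustrative choice with infinite variance but finite logarithmic moment) and a Gaussian n and verifies the inequality on every sample.

```python
# Sketch (not from the paper): ln(1 + |x + n|) <= ln(1 + |x|) + ln(1 + |n|)
# holds pointwise; checked here on random samples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_cauchy(100000)          # heavy tailed, finite log-moment
n = rng.normal(size=100000)
lhs = np.log1p(np.abs(x + n))
rhs = np.log1p(np.abs(x)) + np.log1p(np.abs(n))
print(bool(np.all(lhs <= rhs + 1e-12)))  # True
```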
Under this finite logarithmic moment constraint, the differential entropy of X is well defined and is such that $-\infty \leq h(X) < +\infty$ (Proposition 1, [5]), and that of Y exists and is finite (Lemma 1, [5]). Also, when $X \in \mathcal{L}$, the identity $I(X+Z;Z) = h(X+Z) - h(X)$ always holds (Lemma 1, [5]).
The main result of this work, stated in Theorem 1, is a novel upper bound on h(Y) whenever $X \in \mathcal{L}$ with finite differential entropy h(X) and finite Fisher information J(X).
Theorem 1. 
Let $X \in \mathcal{L}$ having finite h(X) and J(X). The differential entropy of X + Z is upper bounded by:
$$h(X+Z) \leq h(X) + \frac{1}{2}\ln\left(1+\sigma_1^2 J(X)\right) + \sigma_2^2\, \min\left\{ \sup_{x \neq 0} \frac{D\left(p_X(u-x)\,\|\,p_X(u)\right)}{x^2}\, ;\; \frac{1}{2\sigma_1^2} \right\},$$
where the Kullback-Leibler divergence between probability distributions p and q is denoted D(p‖q). In the case where $Z \sim \mathcal{N}(\mu_1, \sigma_1^2)$, i.e., $\sigma_2^2 = 0$, we have
$$h(X+Z) \leq h(X) + \frac{1}{2}\ln\left(1+\sigma_1^2 J(X)\right),$$
and equality holds if and only if both X and Z are Gaussian distributed.
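As a minimal closed-form illustration (ours, not part of the paper) of the equality case: when σ₂² = 0 and both X and Z are Gaussian, the two sides of Equation (10) coincide exactly; the variances s and σ₁² below are illustrative.

```python
# Sketch (not from the paper): equality in Theorem 1 when sigma_2^2 = 0 and
# both X ~ N(0, s) and Z ~ N(0, sigma1_sq) are Gaussian.
import math

def h_gauss(var):
    return 0.5 * math.log(2 * math.pi * math.e * var)

s, sigma1_sq = 3.0, 1.2
J_X = 1.0 / s                                     # Fisher information of N(0, s)
lhs = h_gauss(s + sigma1_sq)                      # exact h(X + Z)
rhs = h_gauss(s) + 0.5 * math.log(1 + sigma1_sq * J_X)
print(lhs, rhs)                                   # identical, as stated
```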
We defer the proof of Theorem 1 to Section 3. The rest of this section is dedicated to the implications of the bound in Equation (10), which are fivefold:
1-
While the usefulness of this upper bound is clear for RVs X having an infinite second moment, for which Equation (1) fails, it can in some cases provide a tighter upper bound than Shannon's for finite second moment variables X. This is the case, for example, when $Z \sim \mathcal{N}(\mu_1, \sigma_1^2)$ and X is a RV having the following PDF:
$$p_X(x) = \begin{cases} f(x+a) & -1-a \leq x \leq 1-a \\ f(x-a) & -1+a \leq x \leq 1+a, \end{cases}$$
for some a > 0 and where
$$f(x) = \begin{cases} \frac{3}{4}(1+x)^2 & -1 \leq x \leq 0 \\ \frac{3}{4}(1-x)^2 & 0 < x \leq 1 \\ 0 & \text{otherwise.} \end{cases}$$
The involved quantities related to X are easily computed and evaluate to the following: $E[X] = 0$, $E[X^2] = a^2 + \frac{1}{10}$, $J(X) = 12$ and $h(X) = \ln\frac{4}{3} + \frac{2}{3}$, for which Equation (10) becomes
$$h(X+Z) \leq h(X) + \frac{1}{2}\ln\left(1 + \sigma_1^2 J(X)\right) = \ln\frac{4}{3} + \frac{2}{3} + \frac{1}{2}\ln\left(1+12\,\sigma_1^2\right),$$
and Equation (1) becomes
$$h(X+Z) \leq \frac{1}{2}\ln\left(2\pi e\left(\sigma_X^2 + \sigma_1^2\right)\right) = \frac{1}{2}\ln(2\pi e) + \frac{1}{2}\ln\left(a^2 + \frac{1}{10} + \sigma_1^2\right).$$
Comparing Equations (11) and (12), it can be seen that our upper bound is independent of a, whereas the Shannon bound increases to infinity and gets looser as a increases. This is explained by the fact that our bound is location independent and depends only on the PDF of X, whereas the Shannon bound is location dependent via the variance of X. A numerical comparison of the two bounds is sketched right after this list of implications.
2-
Theorem 1 gives an analytical bound on the change in the transmission rates of the linear Gaussian channel under an input scaling operation. In fact, let X be a RV satisfying the conditions of Theorem 1 and $Z \sim \mathcal{N}(\mu_1, \sigma_1^2)$. Then aX satisfies similar conditions for a positive scalar a. Hence
$$h(aX+Z) \leq h(aX) + \frac{1}{2}\ln\left(1+\sigma_1^2 J(aX)\right) = h(X) + \ln a + \frac{1}{2}\ln\left(1+\frac{\sigma_1^2}{a^2}\, J(X)\right),$$
where we used the fact that $h(aX) = h(X) + \ln a$ and $J(aX) = \frac{1}{a^2} J(X)$. Subtracting h(Z) from both sides and noting that $I(X+Z;X) = h(X+Z) - h(Z) \geq h(X) - h(Z)$ gives
$$I(aX+Z;X) - I(X+Z;X) \leq \frac{1}{2}\ln\left(a^2 + \sigma_1^2 J(X)\right).$$
It can be seen that the variation in the transmission rates is bounded by a logarithmically growing function of a. This is a known behavior of the optimal transmission rates, which are achieved by Gaussian inputs. A similar behavior is observed whenever $Z = Z_1 + Z_2$.
3-
If the EPI is regarded as a lower bound on the entropy of sums, Equation (10) can be considered as its upper bound counterpart whenever one of the variables is Gaussian. In fact, using both of these inequalities gives:
$$N(X) + N(Z) \leq N(Y) \leq N(X) + N(Z)\, N(X)\, J(X).$$
It can be seen that the sandwich bound is efficient whenever the IIE in Equation (5) evaluated for the variable X is close to its lower bound of 1.
4-
The result of Theorem 1 is more powerful than the IIE in Equation (5). Indeed, using the fact that $h(Z) \leq h(X+Z)$, inequality Equation (10) gives the looser inequality:
$$h(Z) \leq h(X) + \frac{1}{2}\ln\left(1+\sigma^2 J(X)\right),$$
which implies that
$$N(X)\, J(X) \geq \frac{\sigma^2 J(X)}{1+\sigma^2 J(X)},$$
where we used the fact that $h(Z) = \frac{1}{2}\ln\left(2\pi e \sigma^2\right)$. Since the left hand side of Equation (14) is scale invariant,
$$N(aX)\, J(aX) = N(X)\, J(X) \geq \frac{\sigma^2 J(X)}{a^2+\sigma^2 J(X)},$$
for any positive scalar a. Now, letting a → 0 yields Equation (5).
5-
Finally, in the context of communicating over a channel, it is well known that, under a second moment constraint, the best way to “fight” Gaussian noise is to use Gaussian inputs. This follows from the fact that Gaussian variables maximize entropy under a second moment constraint. Conversely, when using a Gaussian input, the worst noise in terms of minimizing the transmission rates is also Gaussian. This is a direct result of the EPI and is also due to the fact that Gaussian distributions have the highest entropy and are therefore the worst noise to deal with. If one were to make a similar statement where, instead of the second moment, the Fisher information is constrained, i.e., if the input X is subject to a Fisher information constraint $J(X) \leq A$ for some A > 0, then the input minimizing the mutual information of the additive white Gaussian channel is Gaussian distributed. This is a result of the EPI in Equation (2) and the IIE in Equation (5). They both reduce in this setting to
$$\arg\min_{X:\, J(X) \leq A} h(X+Z) \sim \mathcal{N}\left(0;\, \tfrac{1}{A}\right).$$
Reciprocally, for a Gaussian input, what is the noise that maximizes the mutual information subject to a Fisher information constraint? This problem can be formally stated as follows: if $X \sim \mathcal{N}(0; p)$, find
$$\arg\max_{Z:\, J(Z) \leq A}\; I(X+Z;\, X).$$
An intuitive answer would be Gaussian since it has the minimum entropy for a given Fisher information. Indeed, Equation (10) provides the answer:
$$I(Y;X) \leq \frac{1}{2}\ln\left(1 + p\, J(Z)\right),$$
which is maximized whenever $Z \sim \mathcal{N}\left(0;\, \tfrac{1}{A}\right)$.
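The sketch below (ours, not the authors') numerically reproduces the comparison made in the first implication: it integrates the example density on a grid, checks that it is a valid PDF with E[X²] ≈ a² + 1/10, and evaluates the bounds (11) and (12) for a few values of a; the grid resolution, σ₁² = 1 and the chosen values of a ≥ 1 are illustrative.

```python
# Sketch (not from the paper): compare the bound (11) with Shannon's bound (12)
# for the example density p_X(x) = f(x + a) + f(x - a) of the first implication.
import numpy as np

def f(x):
    # the piece used in the example: (3/4)(1 - |x|)^2 on [-1, 1], 0 elsewhere
    return np.where(np.abs(x) <= 1, 0.75 * (1 - np.abs(x)) ** 2, 0.0)

def bounds(a, sigma1_sq):
    # take a >= 1 so the two shifted copies of f have disjoint supports
    xs, dx = np.linspace(-1 - a, 1 + a, 400001, retstep=True)
    p = f(xs + a) + f(xs - a)
    assert abs(np.sum(p) * dx - 1) < 1e-3         # p integrates to 1
    m2 = np.sum(xs ** 2 * p) * dx                 # ~ a^2 + 1/10
    plogp = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    h = -np.sum(plogp) * dx                       # ~ ln(4/3) + 2/3
    new_bound = h + 0.5 * np.log(1 + 12 * sigma1_sq)             # Eq. (11), J(X) = 12
    shannon = 0.5 * np.log(2 * np.pi * np.e * (m2 + sigma1_sq))  # Eq. (12)
    return new_bound, shannon

for a in (1.0, 2.0, 5.0, 10.0):
    nb, sh = bounds(a, sigma1_sq=1.0)
    print(a, round(nb, 3), round(sh, 3))          # (11) is flat in a, (12) grows
```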

3. Proof of the Upper Bound

3.1. Concavity of Differential Entropy

Let U be an infinitely divisible RV with characteristic function $\phi_U(\omega)$ (the characteristic function $\phi_U(\omega)$ of a probability distribution function $F_U(u)$ is defined by:
$$\phi_U(\omega) = \int_{\mathbb{R}} e^{i\omega u}\, dF_U(u), \qquad \omega \in \mathbb{R},$$
which is the Fourier transform of $F_U(u)$ at $-\omega$). For each real $t \geq 0$, denote by $F_t(\cdot)$ the unique probability distribution (Theorem 2.3.9, p. 65, [18]) with characteristic function:
$$\phi_t(\omega) = e^{t \ln \phi_U(\omega)},$$
where ln(·) is the principal branch of the logarithm. For the rest of this paper, we denote by $U_t$ a RV with characteristic function $\phi_t(\omega)$ as defined in Equation (15). Note that $U_0$ is deterministically equal to 0 (i.e., distributed according to the Dirac delta distribution) and $U_1$ is distributed according to U. The family of probability distributions $\{F_t(\cdot)\}_{t \geq 0}$ forms a continuous convolution semi-group in the space of probability measures on $\mathbb{R}$ (see Definition 2.3.8 and Theorem 2.3.9, [18]) and hence one can write:
$$U_s + U_t \sim U_{s+t}, \qquad s, t \geq 0,$$
where U s and U t are independent.
Lemma 1. 
Let U be an infinitely divisible RV and $\{U_t\}_{t \geq 0}$ an associated family of RVs distributed according to Equation (15) and independent of X. The differential entropy $h(X+U_t)$ is a concave function of $t \geq 0$.
In the case of a Gaussian-distributed U, the family $\{U_t\}_{t \geq 0}$ has the same distribution as $\{\sqrt{t}\, U\}_{t \geq 0}$, and it is already known that the entropy (and actually even the entropy power) of Y is concave in t ((Section VII, p. 51, [5]) and [19]).
Proof. 
We start by noting that $h(X+U_t)$ is non-decreasing in t. For $0 \leq s < t$,
$$h(X+U_t) = h(X+U_s+U_{t-s}) \geq h(X+U_s),$$
where $U_t$, $U_s$ and $U_{t-s}$ are three independent instances of RVs in the family $\{U_t\}_{t \geq 0}$. Next we show that $h(X+U_t)$ is midpoint concave: let $U_t$, $U_s$, $U_{(t+s)/2}$ and $U_{(t-s)/2}$ be independent RVs in the family $\{U_t\}_{t \geq 0}$. For $0 \leq s < t$,
$$\begin{aligned}
h(X+U_t) - h\left(X+U_{(t+s)/2}\right) &= h\left(X+U_{(t+s)/2}+U_{(t-s)/2}\right) - h\left(X+U_{(t+s)/2}\right) \\
&= I\left(X+U_{(t+s)/2}+U_{(t-s)/2}\,;\, U_{(t-s)/2}\right) && (16) \\
&\leq I\left(X+U_s+U_{(t-s)/2}\,;\, U_{(t-s)/2}\right) && (17) \\
&= h\left(X+U_{(t+s)/2}\right) - h\left(X+U_s\right),
\end{aligned}$$
where Equation (16) is the definition of the mutual information and Equation (17) is the application of the data processing inequality to the Markov chain $U_{(t-s)/2} - \left(X+U_s+U_{(t-s)/2}\right) - \left(X+U_{(t+s)/2}+U_{(t-s)/2}\right)$. Therefore,
$$h\left(X+U_{(t+s)/2}\right) \geq \frac{1}{2}\left[h(X+U_t) + h(X+U_s)\right],$$
and the function is midpoint concave for $t \geq 0$. Since the function is non-decreasing, it is Lebesgue measurable and midpoint concavity guarantees its concavity. ☐
An interesting implication of Lemma 1 is that $h(X+U_t)$, as a function of t, is below any of its tangents. Particularly,
$$h(X+U_t) \leq h(X) + t\, \left.\frac{d\, h(X+U_t)}{dt}\right|_{t=0^+}.$$
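As an illustration (ours, not part of the paper) of Lemma 1 and the tangent bound in Equation (18), the Gaussian case admits the closed form h(X + U_t) = ½ ln(2πe(S + t·Var(U))), for which midpoint concavity and the tangent bound can be checked directly; S and Var(U) are illustrative values.

```python
# Sketch (not from the paper): Lemma 1 (concavity in t) and the tangent
# bound (18) in the Gaussian case, where h(X + U_t) has a closed form.
import math

S, VAR_U = 1.5, 0.8          # Var(X) and Var(U), illustrative

def h_sum(t):
    # h(X + U_t) for Gaussian X and U, with U_t ~ N(0, t * VAR_U)
    return 0.5 * math.log(2 * math.pi * math.e * (S + t * VAR_U))

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
for t1, t2 in zip(ts, ts[2:]):   # midpoint concavity on evenly spaced points
    assert h_sum((t1 + t2) / 2) >= 0.5 * (h_sum(t1) + h_sum(t2))
for t in ts:                     # tangent at t = 0 has slope VAR_U / (2 S)
    assert h_sum(t) <= h_sum(0.0) + t * VAR_U / (2 * S) + 1e-12
print("concavity and the tangent bound (18) hold in the Gaussian closed form")
```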

3.2. Perturbations along U_t: An Identity of the de Bruijn Type

Define the non-negative quantity
$$C_X = \sup_{x \neq 0} \frac{D\left(p_X(u-x)\,\|\,p_X(u)\right)}{x^2},$$
which is possibly infinite if the supremum is not finite. We first note that C X has the following interesting properties:
  • It was found by Verdú [20] to be equal to the channel capacity per unit cost of the linear average power constrained additive noise channel, where the noise is independent of the input and is distributed according to X.
  • Using the above interpretation, one can infer that for independent RVs X and W,
    $$C_{X+W} \leq C_X.$$
    This inequality may be formally established using the convexity of relative entropy (Theorem 2.7.2, [7]): since D(p‖q) is convex in the pair (p, q),
    $$D\left(p_{X+W}(u-x)\,\|\,p_{X+W}(u)\right) \leq D\left(p_X(u-x)\,\|\,p_X(u)\right),$$
    for independent RVs X and W.
  • Using Kullback's well-known result on the divergence (Section 2.6, [21]),
    $$C_X \geq \lim_{x \to 0^+} \frac{D\left(p_X(u-x)\,\|\,p_X(u)\right)}{x^2} = \frac{1}{2} J(X).$$
  • Whenever the supremum is at “0”,
    $$C_X = \frac{1}{2} J(X),$$
    which is the case for RVs X whose density functions have heavier tails than the Gaussian (p. 1020, [20]).
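A small numerical sketch (ours, not part of the paper) illustrating these properties for a Gaussian X with variance s: the ratio D(p_X(u−x) ‖ p_X(u))/x² is constant in x and equals 1/(2s) = J(X)/2, so C_X = J(X)/2 in this case (this value of C for a Gaussian component also appears in the proof of Theorem 1); s and the shifts are illustrative.

```python
# Sketch (not from the paper): for X ~ N(0, s), D(p_X(u - x) || p_X(u)) / x^2
# is constant and equal to 1/(2 s) = J(X)/2, so C_X = J(X)/2.
import numpy as np

s = 2.0                                            # variance of X, illustrative
u, du = np.linspace(-40, 40, 400001, retstep=True)

def kl_shift(x):
    # D(p_X(u - x) || p_X(u)) by numerical integration
    q = np.exp(-(u - x) ** 2 / (2 * s)) / np.sqrt(2 * np.pi * s)
    log_ratio = (u ** 2 - (u - x) ** 2) / (2 * s)  # ln p_X(u - x) - ln p_X(u)
    return np.sum(q * log_ratio) * du

for x in (0.1, 1.0, 3.0):
    print(x, kl_shift(x) / x ** 2, 1 / (2 * s))    # ratio is flat at 1/(2 s)
```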
Next we prove the following lemma,
Lemma 2. 
Let U be an infinitely divisible real RV with finite variance $\sigma_U^2$ and $\{U_t\}_{t \geq 0}$ the associated family of RVs distributed according to Equation (15). Let X be an independent real variable satisfying the following:
  • X has a positive PDF $p_X(x)$.
  • The integrals $\int_{\mathbb{R}} |\omega|^k \left|\phi_X(\omega)\right| d\omega$ are finite for all $k \in \mathbb{N} \setminus \{0\}$.
  • $C_X = \sup_{x \neq 0} \frac{D\left(p_X(u-x)\,\|\,p_X(u)\right)}{x^2}$ is finite.
For $\tau \geq 0$, the right derivative of $h(X+U_t)$ satisfies
$$\left.\frac{d^+}{dt} h(X+U_t)\right|_{t=\tau} = \lim_{t \to 0^+} \frac{h(X+U_{\tau+t}) - h(X+U_\tau)}{t} \leq \sigma_U^2\, C_X.$$
Before we proceed to the proof, we note that by Lemma 1, h ( X + U τ ) is concave in the real-valued τ and hence it is everywhere right and left differentiable.
Proof. 
Using Equation (15), the characteristic function of the independent sum $X+U_{\tau+t}$, for $\tau \geq 0$ and small $t > 0$, is:
$$\phi_{X+U_{\tau+t}}(\omega) = \phi_{X+U_\tau}(\omega)\, \phi_{U_t}(\omega) = \phi_{X+U_\tau}(\omega)\, e^{t \ln \phi_U(\omega)} = \phi_{X+U_\tau}(\omega) + \phi_{X+U_\tau}(\omega) \ln \phi_U(\omega)\, t + o(t).$$
Taking the inverse Fourier transform yields,
$$p_{X+U_{\tau+t}}(y) = p_{X+U_\tau}(y) + t\, \mathcal{F}^{-1}\left[\phi_{X+U_\tau}(\omega) \ln \phi_U(\omega)\right](-y) + o(t),$$
where $\mathcal{F}^{-1}$ denotes the inverse distributional Fourier transform operator. Equation (22) holds true when $\mathcal{F}^{-1}\left[\phi_{X+U_\tau}(\omega) \ln^k \phi_U(\omega)\right](y)$ exists for all $k \in \mathbb{N} \setminus \{0\}$, which is the case. Indeed, using the Kolmogorov representation of the characteristic function of an infinitely divisible RV (Theorem 7.7, [22]), we know that there exists a unique distribution function $H_U(x)$ associated with U such that
$$\ln \phi_U(\omega) = \sigma_U^2 \int_{\mathbb{R}} \left(e^{i\omega x} - 1 - i\omega x\right) \frac{1}{x^2}\, dH_U(x).$$
Furthermore, since $\left|e^{i\omega x} - 1 - i\omega x\right| \leq \frac{\omega^2 x^2}{2}$ (p. 179, [22]),
$$\left|\ln \phi_U(\omega)\right| \leq \sigma_U^2 \int_{\mathbb{R}} \left|e^{i\omega x} - 1 - i\omega x\right| \frac{1}{x^2}\, dH_U(x) \leq \frac{\sigma_U^2}{2}\, \omega^2,$$
which implies that
$$\int_{\mathbb{R}} \left|\phi_{X+U_\tau}(\omega) \ln^k \phi_U(\omega)\right| d\omega \leq \int_{\mathbb{R}} \left|\phi_X(\omega)\right| \left|\phi_{U_\tau}(\omega)\right| \left|\ln \phi_U(\omega)\right|^k d\omega \leq \left(\frac{\sigma_U^2}{2}\right)^k \int_{\mathbb{R}} \left|\phi_X(\omega)\right| \omega^{2k}\, d\omega,$$
which is finite under the conditions of the lemma, and hence $\mathcal{F}^{-1}\left[\phi_{X+U_\tau}(\omega) \ln^k \phi_U(\omega)\right](y)$ exists. Using the definition of the derivative, Equation (22) implies that:
$$\left.\frac{d\, p_{X+U_t}(y)}{dt}\right|_{t=\tau} = \mathcal{F}^{-1}\left[\phi_{X+U_\tau}(\omega) \ln \phi_U(\omega)\right](-y).$$
Using the Mean Value Theorem, for some $0 < h(t) < t$,
$$\begin{aligned}
\frac{h(X+U_{\tau+t}) - h(X+U_\tau)}{t} &= -\int_{\mathbb{R}} \frac{p_{X+U_{\tau+t}}(y) \ln p_{X+U_{\tau+t}}(y) - p_{X+U_\tau}(y) \ln p_{X+U_\tau}(y)}{t}\, dy \\
&= -\int_{\mathbb{R}} \left(1 + \ln p_{X+U_{\tau+h(t)}}(y)\right) \left.\frac{d\, p_{X+U_t}(y)}{dt}\right|_{t=\tau+h(t)} dy \\
&= -\int_{\mathbb{R}} \left(1 + \ln p_{X+U_{\tau+h(t)}}(y)\right) \mathcal{F}^{-1}\left[\phi_{X+U_{\tau+h(t)}}(\omega) \ln \phi_U(\omega)\right](-y)\, dy \\
&= -\int_{\mathbb{R}} \ln p_{X+U_{\tau+h(t)}}(y)\, \mathcal{F}^{-1}\left[\phi_{X+U_{\tau+h(t)}}(\omega) \ln \phi_U(\omega)\right](-y)\, dy, && (24)
\end{aligned}$$
where Equation (24) is justified by the fact that $\phi_X(0) = \phi_U(0) = 1$. We proceed next to evaluate the inverse Fourier transform in the integrand of Equation (24):
$$\begin{aligned}
\mathcal{F}^{-1}\left[\phi_{X+U_{\tau+h(t)}}(\omega) \ln \phi_U(\omega)\right](-y) &= \frac{1}{2\pi} \int_{\mathbb{R}} \phi_{X+U_{\tau+h(t)}}(\omega) \ln \phi_U(\omega)\, e^{-i\omega y}\, d\omega \\
&= \frac{\sigma_U^2}{2\pi} \int_{\mathbb{R}} \phi_{X+U_{\tau+h(t)}}(\omega) \left[\int_{\mathbb{R}} \left(e^{i\omega x} - 1 - i\omega x\right) \frac{1}{x^2}\, dH_U(x)\right] e^{-i\omega y}\, d\omega && (25) \\
&= \frac{\sigma_U^2}{2\pi} \int_{\mathbb{R}} \left[\int_{\mathbb{R}} \phi_{X+U_{\tau+h(t)}}(\omega) \left(e^{i\omega x} - 1 - i\omega x\right) e^{-i\omega y}\, d\omega\right] \frac{1}{x^2}\, dH_U(x) && (26) \\
&= \sigma_U^2 \int_{\mathbb{R}} \left[p_{X+U_{\tau+h(t)}}(y-x) - p_{X+U_{\tau+h(t)}}(y) - x\, p'_{X+U_{\tau+h(t)}}(y)\right] \frac{1}{x^2}\, dH_U(x),
\end{aligned}$$
where the last equation is due to standard properties of the Fourier transform and Equation (25) is due to Equation (23). The interchange in Equation (26) is justified by Fubini. In fact, $\left|e^{i\omega x} - 1 - i\omega x\right| \leq \frac{\omega^2 x^2}{2}$ and
$$\int_{\mathbb{R}} \int_{\mathbb{R}} \left|\phi_{X+U_{\tau+h(t)}}(\omega) \left(e^{i\omega x} - 1 - i\omega x\right) e^{-i\omega y}\right| \frac{1}{x^2}\, d\omega\, dH_U(x) \leq \int_{\mathbb{R}} \int_{\mathbb{R}} \left|\phi_X(\omega)\right| \frac{\omega^2}{2}\, d\omega\, dH_U(x) \leq \int_{\mathbb{R}} \left|\phi_X(\omega)\right| \frac{\omega^2}{2}\, d\omega,$$
which is finite by assumption. Back to Equation (24),
$$\begin{aligned}
\frac{h(X+U_{\tau+t}) - h(X+U_\tau)}{t} &= -\int_{\mathbb{R}} \ln p_{X+U_{\tau+h(t)}}(y)\, \mathcal{F}^{-1}\left[\phi_{X+U_{\tau+h(t)}}(\omega) \ln \phi_U(\omega)\right](-y)\, dy \\
&= -\sigma_U^2 \int_{\mathbb{R}} \int_{\mathbb{R}} \ln p_{X+U_{\tau+h(t)}}(y) \left[p_{X+U_{\tau+h(t)}}(y-x) - p_{X+U_{\tau+h(t)}}(y) - x\, p'_{X+U_{\tau+h(t)}}(y)\right] \frac{1}{x^2}\, dH_U(x)\, dy \\
&= -\sigma_U^2 \int_{\mathbb{R}} \left[\int_{\mathbb{R}} \ln p_{X+U_{\tau+h(t)}}(y) \left(p_{X+U_{\tau+h(t)}}(y-x) - p_{X+U_{\tau+h(t)}}(y) - x\, p'_{X+U_{\tau+h(t)}}(y)\right) dy\right] \frac{1}{x^2}\, dH_U(x), && (27)
\end{aligned}$$
where the interchange in the order of integration in Equation (27) will be validated next by Fubini. Considering the inner integral,
$$\begin{aligned}
-\int_{\mathbb{R}} \ln p_{X+U_{\tau+h(t)}}(y) &\left[p_{X+U_{\tau+h(t)}}(y-x) - p_{X+U_{\tau+h(t)}}(y) - x\, p'_{X+U_{\tau+h(t)}}(y)\right] dy \\
&= D\left(p_{X+U_{\tau+h(t)}}(u-x)\,\|\,p_{X+U_{\tau+h(t)}}(u)\right) + x \int_{\mathbb{R}} p'_{X+U_{\tau+h(t)}}(y)\, dy \\
&= D\left(p_{X+U_{\tau+h(t)}}(u-x)\,\|\,p_{X+U_{\tau+h(t)}}(u)\right).
\end{aligned}$$
Finally, Equation (27) gives
$$\begin{aligned}
\frac{h(X+U_{\tau+t}) - h(X+U_\tau)}{t} &= \sigma_U^2 \int_{\mathbb{R}} D\left(p_{X+U_{\tau+h(t)}}(u-x)\,\|\,p_{X+U_{\tau+h(t)}}(u)\right) \frac{1}{x^2}\, dH_U(x) && (28) \\
&\leq \sigma_U^2\, C_{X+U_{\tau+h(t)}} && (29) \\
&\leq \sigma_U^2\, C_X, && (30)
\end{aligned}$$
which is finite. Equation (29) is due to the definition in Equation (19) and Equation (30) is due to Equation (20). The finiteness of the end result justifies the interchange of the order of integration in Equation (27) by Fubini. ☐
We point out that the result of this lemma is sufficient for the purpose of the main result of this paper. Nevertheless, it is worth noting that Equation (21) could be strengthened to:
$$\left.\frac{d}{dt} h(X+U_t)\right|_{t=\tau} = \sigma_U^2 \int_{\mathbb{R}} D\left(p_{X+U_\tau}(u-x)\,\|\,p_{X+U_\tau}(u)\right) \frac{1}{x^2}\, dH_U(x).$$
In fact, the set of points where the left and right derivatives of the concave function $h(X+U_t)$ differ is of measure zero. One can therefore state that the derivative exists almost everywhere and is upper bounded almost everywhere by Equation (30). Furthermore, considering Equation (28), one can see that taking the limit as $t \to 0^+$ will yield
$$\left.\frac{d^+}{dt} h(X+U_t)\right|_{t=\tau} = \sigma_U^2 \int_{\mathbb{R}} \lim_{t \to 0^+} D\left(p_{X+U_{\tau+h(t)}}(u-x)\,\|\,p_{X+U_{\tau+h(t)}}(u)\right) \frac{1}{x^2}\, dH_U(x),$$
by the Monotone Convergence Theorem. The continuity of relative entropy may be established using techniques similar to those in [23] when appropriate conditions on p X hold.
Finally, when U is purely Gaussian, $U_t \sim \sqrt{t}\, U$, $H_U(x)$ is the unit step function and Equation (31) boils down to de Bruijn's identity for Gaussian perturbations (Equation (4)).
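A closed-form illustration (ours, not part of the paper) of this Gaussian reduction: for Gaussian X with variance s and Gaussian U with variance σ_U², the exact derivative of h(X + U_t) coincides with de Bruijn's expression (σ_U²/2) J(X + U_t) and is dominated by the Lemma 2 bound σ_U² C_X = σ_U²/(2s); the numbers are illustrative.

```python
# Sketch (not from the paper): Gaussian-U reduction of Lemma 2 and Eq. (31).
# For X ~ N(0, s) and U ~ N(0, var_u), U_t ~ N(0, t * var_u), so
# h(X + U_t) = 0.5 * ln(2*pi*e*(s + t*var_u)) and everything is explicit.
import math

s, var_u, tau = 1.0, 0.6, 0.7

exact_derivative = var_u / (2 * (s + tau * var_u))       # d/dt h(X + U_t) at t = tau
de_bruijn_rhs = 0.5 * var_u * (1.0 / (s + tau * var_u))  # (var_u / 2) * J(X + U_tau)
lemma2_bound = var_u / (2 * s)                           # var_u * C_X with C_X = J(X)/2

print(exact_derivative, de_bruijn_rhs)    # identical (de Bruijn's identity)
print(exact_derivative <= lemma2_bound)   # True (the bound of Lemma 2)
```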

3.3. Proof of Theorem 1

Proof. 
We assume without loss of generality that X and Z have zero mean. Otherwise, define $Y' = Y - (\mu_X + \mu_1 + \mu_2)$ and $X' = X - \mu_X$, for which $h(Y') = h(Y)$, $h(X') = h(X)$ and $J(X') = J(X)$, since differential entropy and Fisher information are translation invariant. We divide the proof into two steps and start by proving the theorem when Z is purely Gaussian.
Z is purely Gaussian:
We decompose Z as follows: let $n \in \mathbb{N}^* \setminus \{1\}$ and $\epsilon = \frac{1}{n}$; then
$$Z = \sqrt{\epsilon} \sum_{i=1}^{n} Z_i,$$
where the $\{Z_i\}_{1 \leq i \leq n}$ are IID with the same law as Z. We write Equation (7) in an incremental formulation as follows:
$$Y_0 = X, \qquad Y_l = Y_{l-1} + \sqrt{\epsilon}\, Z_l = X + \sqrt{\epsilon} \sum_{i=1}^{l} Z_i,$$
for $l \in \{1, \dots, n\}$. Note that $Y_n$ has the same statistics as Y. Using de Bruijn's identity (Equation (4)), we write
$$h(Y_l) = h(Y_{l-1}) + \epsilon\, \frac{\sigma_1^2}{2}\, J(Y_{l-1}) + o(\epsilon),$$
and
$$h(Y_l) \leq h(Y_{l-1}) + \epsilon\, \frac{\sigma_1^2}{2}\, J(Y_{l-1}), \qquad l \in \{1, \dots, n\},$$
by virtue of the fact that $h(Y_l)$ is concave in $\epsilon \geq 0$ (Section VII, p. 51, [5]). Using the FII in Equation (3) on $Y_l$, we obtain
$$\frac{1}{J(Y_l)} \geq \frac{1}{J(Y_{l-1})} + \frac{1}{J\left(\sqrt{\epsilon}\, Z_l\right)} = \frac{1}{J(Y_{l-1})} + \frac{\epsilon}{J(Z_l)} \geq \frac{1}{J(X)} + \frac{l\,\epsilon}{J(Z_1)}, \qquad l \in \{1, \dots, n\}.$$
Examining now h ( Y ) = h ( Y n ) ,
$$\begin{aligned}
h(Y_n) &\leq h(Y_{n-1}) + \epsilon\, \frac{\sigma_1^2}{2}\, J(Y_{n-1}) \\
&\leq h(X) + \epsilon\, \frac{\sigma_1^2}{2} \sum_{l=0}^{n-1} J(Y_l) \\
&\leq h(X) + \epsilon\, \frac{\sigma_1^2}{2} \left[ J(X) + \sum_{l=1}^{n-1} \frac{J(X)\, J(Z_1)}{J(Z_1) + l\,\epsilon\, J(X)} \right] && (34) \\
&\leq h(X) + \epsilon\, \frac{\sigma_1^2}{2}\, J(X) \left[ 1 + \int_{0}^{n-1} \frac{J(Z_1)}{J(Z_1) + u\,\epsilon\, J(X)}\, du \right] && (35) \\
&= h(X) + \epsilon\, \frac{\sigma_1^2}{2}\, J(X) + \frac{\sigma_1^2}{2}\, J(Z_1) \ln\left(1 + (n-1)\,\epsilon\, \frac{J(X)}{J(Z_1)}\right) \\
&= h(X) + \epsilon\, \frac{\sigma_1^2}{2}\, J(X) + \frac{1}{2} \ln\left(1 + (1-\epsilon)\, \sigma_1^2 J(X)\right),
\end{aligned}$$
where, in order to write the last equality, we used the fact that $J(Z_1) = \frac{1}{\sigma_1^2}$, since $Z_1 \sim \mathcal{N}(0, \sigma_1^2)$. Equation (34) is due to the bounds in Equation (33) and Equation (35) is justified since the function $\frac{J(Z_1)}{J(Z_1) + u\,\epsilon\, J(X)}$ is decreasing in u. Since the upper bound is true for any small-enough ϵ, necessarily
$$h(Y) \leq h(X) + \frac{1}{2} \ln\left(1 + \sigma_1^2 J(X)\right),$$
which completes the proof of the second part of the theorem. When X and Z are both Gaussian, evaluating the quantities shows that equality holds. In addition, equality only holds whenever the FII is satisfied with equality, that is, whenever X and Z are Gaussian.
Z is a sum of a Gaussian variable with a non-Gaussian infinitely divisible one:
Let $\{U_t\}_{t \geq 0}$ be a family of RVs associated with the non-Gaussian infinitely divisible RV $Z_2$ and distributed according to Equation (15). Using the concavity result of Equation (18) in t,
$$\begin{aligned}
h(Y) = h(X+Z_1+Z_2) &= h(X+Z_1+U_1) \\
&\leq h(X+Z_1) + \left.\frac{d\, h(X+Z_1+U_t)}{dt}\right|_{t=0^+} \\
&\leq h(X+Z_1) + \sigma_2^2\, C_{X+Z_1} && (36) \\
&\leq h(X) + \frac{1}{2}\ln\left(1+\sigma_1^2 J(X)\right) + \sigma_2^2\, C_{X+Z_1} && (37) \\
&\leq h(X) + \frac{1}{2}\ln\left(1+\sigma_1^2 J(X)\right) + \sigma_2^2\, \min\left\{ C_X\, ;\, C_{Z_1} \right\} && (38) \\
&= h(X) + \frac{1}{2}\ln\left(1+\sigma_1^2 J(X)\right) + \sigma_2^2\, \min\left\{ \sup_{x \neq 0} \frac{D\left(p_X(u-x)\,\|\,p_X(u)\right)}{x^2}\, ;\, \frac{1}{2\sigma_1^2} \right\}.
\end{aligned}$$
Equation (36) is an application of Lemma 2, since $(X+Z_1)$ has a positive PDF and satisfies all the required technical conditions. The upper bound proven in the previous paragraph gives Equation (37), and Equation (38) is due to Equation (20). ☐
On a final note, using Lemmas 1 and 2, one could have instead applied the Plünnecke-Ruzsa inequality (Theorem 3.11, [10]), which yields
$$h(Y) \leq h(X) + \frac{\sigma_1^2}{2}\, J(X) + \sigma_2^2\, C_{X+Z_1},$$
which is looser than Equation (37).

4. Extension

The bound in Equation (10) may be extended to the n-dimensional vector Gaussian case, $Y = X + Z$, where Z is an n-dimensional Gaussian vector. In this case, if $\mathbf{J}(X)$ denotes the Fisher information matrix, the (scalar) Fisher information and the entropy power are defined as
$$J(X) = \mathrm{Tr}\left(\mathbf{J}(X)\right), \qquad N(X) = \frac{1}{2\pi e}\, e^{\frac{2}{n} h(X)}.$$
  • When Z has n IID Gaussian components, i.e., with covariance matrix $\Lambda_Z = \sigma^2 I$, following similar steps leads to:
    $$N(X+Z) \leq N(X) + N(Z)\, \frac{N(X)\, J(X)}{n},$$
    with equality if and only if X is Gaussian with IID components.
  • In general, for any positive-definite matrix $\Lambda_Z$ with singular value decomposition $U D U^T$, if we denote $B = U D^{-\frac{1}{2}} U^T$, then
    $$B\,Y = B(X+Z) = B\,X + B\,Z = B\,X + \tilde{Z},$$
    where $\tilde{Z}$ is Gaussian distributed with an identity covariance matrix. Equation (39) gives
    $$\begin{aligned}
    N\left(B(X+Z)\right) &\leq N(BX) + N(\tilde{Z})\, \frac{N(BX)\, J(BX)}{n} \\
    N(X+Z) &\leq N(X) + \frac{N(X)\, J(BX)}{n} \\
    N(X+Z) &\leq N(X) + \frac{N(X)\, \mathrm{Tr}\left(\mathbf{J}(X)\, \Lambda_Z\right)}{n},
    \end{aligned}$$
    where we used $N(BX) = \det(B)^{\frac{2}{n}}\, N(X)$.
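A minimal numpy sketch (ours, not the authors') of the last inequality for a Gaussian X, where the closed forms N(X) = det(Λ_X)^{1/n} and J(X) = Λ_X^{-1} apply, so the bound reduces to a determinant inequality; the random covariance matrices are illustrative.

```python
# Sketch (not from the paper): check N(X + Z) <= N(X) + N(X) Tr(J(X) Lambda_Z) / n
# for a Gaussian X, using N(Gaussian) = det(cov)^(1/n) and J(X) = inv(cov_x).
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
cov_x = A @ A.T + n * np.eye(n)      # random positive-definite covariances
cov_z = B @ B.T + np.eye(n)

def entropy_power(cov):
    # N of a Gaussian vector: exp(2 h / n) / (2 pi e) = det(cov)^(1/n)
    return np.linalg.det(cov) ** (1.0 / n)

lhs = entropy_power(cov_x + cov_z)
rhs = entropy_power(cov_x) * (1 + np.trace(np.linalg.inv(cov_x) @ cov_z) / n)
print(lhs, rhs, bool(lhs <= rhs))    # holds; tight when cov_z is proportional to cov_x
```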

5. Conclusions

We have derived a novel tight upper bound on the entropy of the sum of two independent random variables, where one of them is infinitely divisible with a Gaussian component. The bound is shown to be tighter than previously known ones and holds for variables with possibly infinite second moment. With the isoperimetric inequality in mind, the “symmetry” that this bound provides with the known lower bounds is remarkable and hints at possible generalizations to scenarios where no Gaussian component is present.

Acknowledgments

The authors would like to thank the Assistant Editor for his patience and the anonymous reviewers for their helpful comments. This work was supported by AUB’s University Research Board and the Lebanese National Council for Scientific Research (CNRS-L).

Author Contributions

Both authors contributed equally on all stages of this work: formulated and solved the problem and prepared the manuscript. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A mathematical theory of communication, part I. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Bobkov, S.G.; Chistyakov, G.P. Entropy power inequality for the Renyi entropy. IEEE Trans. Inf. Theory 2015, 61, 708–714. [Google Scholar] [CrossRef]
  3. Stam, A.J. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. Control 1959, 2, 101–112. [Google Scholar] [CrossRef]
  4. Blachman, N.M. The convolution inequality for entropy powers. IEEE Trans. Inf. Theory 1965, 11, 267–271. [Google Scholar] [CrossRef]
  5. Rioul, O. Information theoretic proofs of entropy power inequality. IEEE Trans. Inf. Theory 2011, 57, 33–55. [Google Scholar] [CrossRef]
  6. Dembo, A.; Cover, T.M.; Thomas, J.A. Information Theoretic Inequalities. IEEE Trans. Inf. Theory 1991, 37, 1501–1518. [Google Scholar] [CrossRef]
  7. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: New York, NY, USA, 2006. [Google Scholar]
  8. Ruzsa, I.Z. Sumsets and entropy. Random Struct. Algorithms 2009, 34, 1–10. [Google Scholar] [CrossRef]
  9. Tao, T. Sumset and inverse sumset theory for Shannon entropy. Comb. Probab. Comput. 2010, 19, 603–639. [Google Scholar] [CrossRef]
  10. Kontoyiannis, I.; Madiman, M. Sumset and inverse sumset inequalities for differential entropy and mutual information. IEEE Trans. Inf. Theory 2014, 60, 4503–4514. [Google Scholar] [CrossRef]
  11. Madiman, M. On the entropy of sums. In Proceedings of the 2008 IEEE Information Theory Workshop, Oporto, Portugal, 5–9 May 2008.
  12. Cover, T.M.; Zhang, Z. On the maximum entropy of the sum of two dependent random variables. IEEE Trans. Inf. Theory 1994, 40, 1244–1246. [Google Scholar] [CrossRef]
  13. Ordentlich, E. Maximizing the entropy of a sum of independent bounded random variables. IEEE Trans. Inf. Theory 2006, 52, 2176–2181. [Google Scholar] [CrossRef]
  14. Bobkov, S.; Madiman, M. On the Problem of Reversibility of the Entropy Power Inequality. In Limit Theorems in Probability, Statistics and Number Theory; Springer-Verlag: Berlin/Heidelberg, Germany, 2013; pp. 61–74. [Google Scholar]
  15. Miclo, L. Notes on the speed of entropic convergence in the central limit theorem. Progr. Probab. 2003, 56, 129–156. [Google Scholar]
  16. Luisier, F.; Blu, T.; Unser, M. Image denoising in mixed Poisson-Gaussian noise. IEEE Trans. Image Process. 2011, 20, 696–708. [Google Scholar] [CrossRef] [PubMed]
  17. Fahs, J.; Abou-Faycal, I. Using Hermite bases in studying capacity-achieving distributions over AWGN channels. IEEE Trans. Inf. Theory 2012, 58, 5302–5322. [Google Scholar] [CrossRef]
  18. Heyer, H. Structural Aspects in the Theory of Probability: A Primer in Probabilities on Algebraic-Topological Structures; World Scientific: Singapore, Singapore, 2004; Volume 7. [Google Scholar]
  19. Costa, M.H.M. A new entropy power inequality. IEEE Trans. Inf. Theory 1985, 31, 751–760. [Google Scholar] [CrossRef]
  20. Verdú, S. On channel capacity per unit cost. IEEE Trans. Inf. Theory 1990, 36, 1019–1030. [Google Scholar] [CrossRef]
  21. Kullback, S. Information Theory and Statistics; Dover Publications: Mineola, NY, USA, 1968. [Google Scholar]
  22. Steutel, F.W.; Harn, K.V. Infinite Divisibility of Probability Distributions on the Real Line; Marcel Dekker Inc.: New York, NY, USA, 2006. [Google Scholar]
  23. Fahs, J.; Abou-Faycal, I. On the finiteness of the capacity of continuous channels. IEEE Trans. Commun. 2015. [Google Scholar] [CrossRef]
