- The first one is an upper bound on the entropy of Random Variables (RVs) having a finite second moment, which holds because Gaussian distributions maximize entropy under a second moment constraint (the (differential) entropy of a random variable Y having a probability density function $p_Y$ is defined as $h(Y) = -\int p_Y(y)\log p_Y(y)\,dy$):
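As a quick numerical illustration of this maximum-entropy property (not part of the paper; the helper names are ours), the closed-form entropy of a Gaussian can be compared with that of a uniform RV matched to the same variance:

```python
import math

def gaussian_entropy(var):
    """Differential entropy of N(mu, var): 0.5 * log(2*pi*e*var) nats."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

def uniform_entropy_matched_variance(var):
    """Entropy of a uniform RV with the same variance.
    A uniform RV on an interval of width w has variance w**2/12 and entropy log(w)."""
    w = math.sqrt(12 * var)
    return math.log(w)

var = 1.0
h_gauss = gaussian_entropy(var)
h_unif = uniform_entropy_matched_variance(var)
# The Gaussian dominates any other distribution with the same second moment.
print(h_gauss, h_unif)  # ≈ 1.4189 vs ≈ 1.2425
```

The gap between the two values is exactly the divergence of the uniform law from the Gaussian with matched moments.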
- The second one is a lower bound on the entropy of independent sums of RVs, commonly known as the Entropy Power Inequality (EPI). The EPI states that given two real independent RVs X, Z such that $h(X)$, $h(Z)$ and $h(X+Z)$ exist, then $e^{2h(X+Z)} \geq e^{2h(X)} + e^{2h(Z)}$ (Corollary 3, ).
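A minimal numerical sanity check of the EPI (ours, not the paper's): for independent Gaussians the variances add and the inequality holds with equality, while for two uniform RVs on [0, 1] the sum is triangular on [0, 2] with entropy 1/2 nat, giving a strict inequality:

```python
import math

def N(h):
    """Entropy-power-style quantity exp(2h); the usual 1/(2*pi*e) factor cancels."""
    return math.exp(2 * h)

def h_gauss(var):
    """Differential entropy of N(0, var)."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

# Gaussian case: variances of independent summands add, so the EPI is tight.
s1, s2 = 2.0, 3.0
lhs = N(h_gauss(s1 + s2))
rhs = N(h_gauss(s1)) + N(h_gauss(s2))
assert abs(lhs - rhs) < 1e-9  # equality iff both summands are Gaussian

# Non-Gaussian case: X, Z uniform on [0, 1] have h = 0; X + Z is triangular
# on [0, 2] with h = 1/2 nat, so exp(2 * 0.5) = e > 2 = exp(0) + exp(0).
assert N(0.5) > N(0.0) + N(0.0)
```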
- The Fisher Information Inequality (FII): Let X and Z be two independent RVs such that the respective Fisher informations $J(X)$ and $J(Z)$ exist; then $\frac{1}{J(X+Z)} \geq \frac{1}{J(X)} + \frac{1}{J(Z)}$ (the Fisher information of a random variable Y having a probability density function $p_Y$ is defined as $J(Y) = \int \frac{[p_Y'(y)]^2}{p_Y(y)}\,dy$).
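The integral defining the Fisher information can be evaluated numerically; the sketch below (ours, with assumed helper names) checks that a Gaussian N(0, var) has J = 1/var, which also makes the FII an equality for independent Gaussians since their variances add:

```python
import math

def fisher_information(pdf, dpdf, lo=-10.0, hi=10.0, n=200001):
    """Riemann-sum approximation of J(Y) = integral of (p'(y))^2 / p(y) dy."""
    dy = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        y = lo + i * dy
        p = pdf(y)
        if p > 1e-300:  # avoid dividing by (numerically) zero density
            total += dpdf(y) ** 2 / p * dy
    return total

VAR = 2.0

def gauss_pdf(y):
    return math.exp(-y * y / (2 * VAR)) / math.sqrt(2 * math.pi * VAR)

def gauss_dpdf(y):
    return -y / VAR * gauss_pdf(y)

J = fisher_information(gauss_pdf, gauss_dpdf)  # should be close to 1/VAR = 0.5

# FII equality case: for independent Gaussians, 1/J(X+Z) = 1/J(X) + 1/J(Z)
# because 1/J is just the variance and variances add.
v1, v2 = 2.0, 3.0
assert abs((v1 + v2) - (v1 + v2)) < 1e-12
```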
- The de Bruijn identity: for any $t > 0$, $\frac{d}{dt}\, h(X + \sqrt{t}\,Z) = \frac{1}{2} J(X + \sqrt{t}\,Z)$ whenever Z is a standard Gaussian independent of X. Rioul proved that the de Bruijn identity holds at $t = 0$ for any finite-variance RV Z (Proposition 7, p. 39, ).
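For Gaussian X the identity can be verified in closed form: if X ~ N(0, s2) and Z is standard Gaussian, then X + sqrt(t) Z has variance s2 + t, entropy 0.5 log(2πe(s2 + t)) and Fisher information 1/(s2 + t). A finite-difference check (ours, purely illustrative):

```python
import math

def h(var):
    """Entropy of a zero-mean Gaussian with the given variance."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

s2, t, eps = 1.5, 0.7, 1e-6
# d/dt h(X + sqrt(t) Z) via central finite difference on the variance s2 + t.
lhs = (h(s2 + t + eps) - h(s2 + t - eps)) / (2 * eps)
# (1/2) J(X + sqrt(t) Z) = 1 / (2 * (s2 + t)) for Gaussian arguments.
rhs = 0.5 * (1.0 / (s2 + t))
assert abs(lhs - rhs) < 1e-6
```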
2. Main Result
- The first is a Gaussian RV with arbitrary mean and positive variance.
- The second is an infinitely divisible RV with finite (possibly zero) variance, independent of the first.
- While the usefulness of this upper bound is clear for RVs X having an infinite second moment, for which Equation (1) fails, it can in some cases provide a tighter upper bound than the one given by Shannon for finite-second-moment variables X. This is the case, for example, when  and X is a RV having the following PDF:
- Theorem 1 gives an analytical bound on the change in the transmission rates of the linear Gaussian channel as a function of an input scaling operation. In fact, let X be a RV satisfying the conditions of Theorem 1. Then, for any positive scalar a, the scaled variable aX satisfies similar conditions. Hence
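The scaling behavior underlying this bullet is the standard identity h(aX) = h(X) + log|a| for differential entropy; a one-line check for a Gaussian X (ours, not the paper's derivation):

```python
import math

def h_gauss(var):
    """Differential entropy of a zero-mean Gaussian with the given variance."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

a, var = 3.0, 2.0
# Scaling X by a multiplies the variance by a**2 and shifts entropy by log(a).
assert abs(h_gauss(a * a * var) - (h_gauss(var) + math.log(a))) < 1e-12
```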
- If the EPI is regarded as a lower bound on the entropy of sums, Equation (10) can be considered its upper-bound counterpart whenever one of the variables is Gaussian. In fact, using both inequalities together gives:
- The result of Theorem 1 is more powerful than the IIE in Equation (5). Indeed, using the fact that , inequality Equation (10) yields the looser inequality:
- Finally, in the context of communicating over a channel, it is well known that, under a second moment constraint, the best way to “fight” Gaussian noise is to use Gaussian inputs; this follows from the fact that Gaussian variables maximize entropy under a second moment constraint. Conversely, when using a Gaussian input, the worst noise in terms of minimizing the transmission rates is also Gaussian, a direct consequence of the EPI. If one were to make a similar statement where the Fisher information is constrained instead of the second moment, i.e., if the input X is subject to a Fisher information constraint for some , then the input minimizing the mutual information of the additive white Gaussian channel is Gaussian distributed. This is a result of the EPI in Equation (2) and the IIE in Equation (5), which in this setting both reduce to
3. Proof of the Upper Bound
3.1. Concavity of Differential Entropy
3.2. Perturbations along : An Identity of the de Bruijn Type
- It was found by Verdú  to be equal to the channel capacity per unit cost of the linear average-power-constrained additive noise channel, where the noise is independent of the input and distributed according to X.
- Using the above interpretation, one can infer that for independent RVs X and W,
- Using Kullback’s well-known result on the divergence (Section 2.6, ),
- Whenever the supremum is at “0”,
- X has a positive PDF .
- The integrals are finite for all .
- is finite.
3.3. Proof of Theorem 1
- When the Gaussian vector has n IID components, i.e., with covariance matrix , following similar steps leads to:
- In general, for any positive-definite matrix with a singular value decomposition , if we denote by , then
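Whatever the exact displayed identity, the key fact behind the multivariate case is that the entropy of an n-dimensional Gaussian with covariance K is (1/2) log((2πe)^n det K), which for K = s2·I reduces to (n/2) log(2πe·s2) and is invariant under orthogonal conjugation since det(U K Uᵀ) = det K. A small sketch for n = 2 (ours, stdlib only):

```python
import math

def h_mvgauss(K):
    """Entropy of a 2-D Gaussian with covariance K: 0.5 * log((2*pi*e)^2 * det K)."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return 0.5 * math.log((2 * math.pi * math.e) ** 2 * det)

# IID case: K = s2 * I_2 gives h = (n/2) * log(2*pi*e*s2) with n = 2.
s2 = 2.0
K_iid = [[s2, 0.0], [0.0, s2]]
assert abs(h_mvgauss(K_iid) - 2 * 0.5 * math.log(2 * math.pi * math.e * s2)) < 1e-12

# Rotating by an orthogonal U leaves det K, and hence the entropy, unchanged.
c, s = math.cos(0.3), math.sin(0.3)   # U = [[c, -s], [s, c]]
K = [[2.0, 0.5], [0.5, 1.0]]
UK = [[c * K[0][0] - s * K[1][0], c * K[0][1] - s * K[1][1]],
      [s * K[0][0] + c * K[1][0], s * K[0][1] + c * K[1][1]]]
UKUt = [[c * UK[0][0] - s * UK[0][1], s * UK[0][0] + c * UK[0][1]],
        [c * UK[1][0] - s * UK[1][1], s * UK[1][0] + c * UK[1][1]]]
assert abs(h_mvgauss(K) - h_mvgauss(UKUt)) < 1e-9
```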
Conflicts of Interest
- Shannon, C.E. A mathematical theory of communication, part I. Bell Syst. Tech. J. 1948, 27, 379–423.
- Bobkov, S.G.; Chistyakov, G.P. Entropy power inequality for the Rényi entropy. IEEE Trans. Inf. Theory 2015, 61, 708–714.
- Stam, A.J. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. Control 1959, 2, 101–112.
- Blachman, N.M. The convolution inequality for entropy powers. IEEE Trans. Inf. Theory 1965, 11, 267–271.
- Rioul, O. Information theoretic proofs of entropy power inequality. IEEE Trans. Inf. Theory 2011, 57, 33–55.
- Dembo, A.; Cover, T.M.; Thomas, J.A. Information theoretic inequalities. IEEE Trans. Inf. Theory 1991, 37, 1501–1518.
- Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: New York, NY, USA, 2006.
- Ruzsa, I.Z. Sumsets and entropy. Random Struct. Algorithms 2009, 34, 1–10.
- Tao, T. Sumset and inverse sumset theory for Shannon entropy. Comb. Probab. Comput. 2010, 19, 603–639.
- Kontoyiannis, I.; Madiman, M. Sumset and inverse sumset inequalities for differential entropy and mutual information. IEEE Trans. Inf. Theory 2014, 60, 4503–4514.
- Madiman, M. On the entropy of sums. In Proceedings of the 2008 IEEE Information Theory Workshop, Oporto, Portugal, 5–9 May 2008.
- Cover, T.M.; Zhang, Z. On the maximum entropy of the sum of two dependent random variables. IEEE Trans. Inf. Theory 1994, 40, 1244–1246.
- Ordentlich, E. Maximizing the entropy of a sum of independent bounded random variables. IEEE Trans. Inf. Theory 2006, 52, 2176–2181.
- Bobkov, S.; Madiman, M. On the problem of reversibility of the entropy power inequality. In Limit Theorems in Probability, Statistics and Number Theory; Springer-Verlag: Berlin/Heidelberg, Germany, 2013; pp. 61–74.
- Miclo, L. Notes on the speed of entropic convergence in the central limit theorem. Progr. Probab. 2003, 56, 129–156.
- Luisier, F.; Blu, T.; Unser, M. Image denoising in mixed Poisson–Gaussian noise. IEEE Trans. Image Process. 2011, 20, 696–708.
- Fahs, J.; Abou-Faycal, I. Using Hermite bases in studying capacity-achieving distributions over AWGN channels. IEEE Trans. Inf. Theory 2012, 58, 5302–5322.
- Heyer, H. Structural Aspects in the Theory of Probability: A Primer in Probabilities on Algebraic-Topological Structures; World Scientific: Singapore, 2004; Volume 7.
- Costa, M.H.M. A new entropy power inequality. IEEE Trans. Inf. Theory 1985, 31, 751–760.
- Verdú, S. On channel capacity per unit cost. IEEE Trans. Inf. Theory 1990, 36, 1019–1030.
- Kullback, S. Information Theory and Statistics; Dover Publications: Mineola, NY, USA, 1968.
- Steutel, F.W.; van Harn, K. Infinite Divisibility of Probability Distributions on the Real Line; Marcel Dekker: New York, NY, USA, 2006.
- Fahs, J.; Abou-Faycal, I. On the finiteness of the capacity of continuous channels. IEEE Trans. Commun. 2015.
© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).