An Information Theoretic Condition for Perfect Reconstruction
Abstract
A movement is accomplished in six stages 
And the seventh brings return. 
The seven is the number of the young light 
It forms when darkness is increased by one. 
Change returns success 
Going and coming without error. 
Action brings good fortune. 
Sunset, sunrise. 
Syd Barrett, Chapter 24 (Pink Floyd). 
1. Introduction
2. What Is Information? A Detailed Study of Shannon’s Information Lattice
2.1. Definition of the “True” Information
 Reflexivity: $X=\mathrm{Id}\left(X\right)$ so $X\ge X$.
 Antisymmetry: If $X\ge Y$ and $Y\ge X$, $X=f\left(Y\right)$ a.s., and $Y=g\left(X\right)$ a.s. for deterministic functions f and g, so $X\equiv Y$.
 Transitivity: If $X\ge Y$ and $Y\ge Z$, then there exist two deterministic functions f and g such that $Z=g\left(Y\right)$ a.s. and $Y=f\left(X\right)$ a.s. Then, $Z=g\left(f\left(X\right)\right)$ a.s.; hence, $X\ge Z$. □
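On finite data, the ordering can be checked mechanically: $X\ge Y$ holds exactly when every observed value of $X$ co-occurs with a single value of $Y$, i.e., $Y$ is a deterministic function of $X$ on the joint support. A minimal sketch (the function name `dominates` is ours, not from the paper):

```python
from collections import defaultdict

def dominates(samples_x, samples_y):
    """Check X >= Y in the information order: Y must be a
    deterministic function of X on the observed joint support."""
    image = defaultdict(set)
    for x, y in zip(samples_x, samples_y):
        image[x].add(y)
    return all(len(ys) == 1 for ys in image.values())

# Y = X mod 2 is a function of X, but not conversely:
X = [0, 1, 2, 3, 0, 1, 2, 3]
Y = [x % 2 for x in X]
print(dominates(X, Y))  # True:  X >= Y
print(dominates(Y, X))  # False: Y does not determine X
```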
2.2. Structure of the Information Lattice: Joint Information; Common Information
2.3. Computing Common Information
Algorithm 1: Algorithm to compute the common information. 
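Algorithm 1 itself is not reproduced in this excerpt. For reference, the classical Gács–Körner construction computes the common information $X\wedge Y$ as the connected-component label of the bipartite graph joining the values of $X$ and $Y$ that co-occur with positive probability. The sketch below (function names are ours) implements that construction with a union–find structure; whether it matches the paper's Algorithm 1 step for step is an assumption:

```python
def common_information_labels(support):
    """Given the support of (X, Y) as a set of pairs, label each pair
    by the connected component of the bipartite graph linking x-values
    to y-values.  The component label is the common information X ∧ Y:
    the finest variable computable from X alone and from Y alone."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a
    for x, y in support:
        parent.setdefault(('x', x), ('x', x))
        parent.setdefault(('y', y), ('y', y))
        parent[find(('x', x))] = find(('y', y))  # union
    return {(x, y): find(('x', x)) for x, y in support}

# Two blocks {0,1} and {2,3}: the block index is the common information.
support = {(0, 0), (0, 1), (1, 1), (2, 2), (3, 2), (3, 3)}
labels = common_information_labels(support)
print(len(set(labels.values())))  # 2 components
```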

2.4. Boundedness and Complementedness: Null, Total, and Complementary Information
 The minimal element 0 (“null information”) is the equivalence class of all deterministic variables. Thus, $X=0$ means that X is a deterministic variable.
 The maximal element 1 (“total information”) of the lattice is the equivalence class of the identity function $\mathrm{Id}$ on Ω.
2.5. Computing the Complementary Information
Algorithm 2: Algorithm for computing the complementary information. 
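Algorithm 2 is likewise not reproduced here. One standard way to realize a complement of $Y=f(X)$ inside $X$ is to number the values of $X$ within each fiber of $f$: the pair $(Y,Z)$ then determines $X$ and conversely. The sketch below (names are ours) implements that numbering; it realizes the functional representation of Section 2.6, but minimality of $H(Z)$ and the condition $Y\wedge Z=0$ must be checked separately:

```python
from collections import defaultdict

def complement_within(x_values, f):
    """Given samples of X and a function f (so Y = f(X) lies below X
    in the lattice), build Z by numbering the values of X inside each
    fiber of f.  Then (Y, Z) determines X and conversely: X ≡ (Y, Z)."""
    fibers = defaultdict(list)
    for x in sorted(set(x_values)):
        fibers[f(x)].append(x)
    rank = {x: i for ys in fibers.values() for i, x in enumerate(ys)}
    return [rank[x] for x in x_values]

X = [-2, -1, 0, 1, 2]
Y = [abs(x) for x in X]          # Y = |X|
Z = complement_within(X, abs)    # Z: index within each fiber of |.|
print(list(zip(Y, Z)))           # each (|x|, rank) pair pins down x
```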

2.6. Relationship between Complementary Information and Functional Representation
2.7. Is the Information Lattice a Boolean Algebra?
3. Metric Properties of the Information Lattice
3.1. Information and Information Measures
 Entropy: If $X\equiv Y$, there exist functions f and g such that $Y=f\left(X\right)$ a.s. (hence, $H\left(Y\right)\le H\left(X\right)$) and $X=g\left(Y\right)$ a.s. (hence, $H\left(X\right)\le H\left(Y\right)$). Thus, $H\left(X\right)=H\left(Y\right)$.
 Conditional entropy: Let ${X}_{1}\equiv {X}_{2}$, with f and g two functions such that ${X}_{1}=f({X}_{2})$ and ${X}_{2}=g({X}_{1})$ a.s. Then, $H({X}_{1}\mid Y)=H(f({X}_{2})\mid Y)\le H({X}_{2}\mid Y)$. Similarly, $H({X}_{2}\mid Y)=H(g({X}_{1})\mid Y)\le H({X}_{1}\mid Y)$. Therefore, $H({X}_{1}\mid Y)=H({X}_{2}\mid Y)$. Finally, if ${Y}_{1}\equiv {Y}_{2}$ with two functions h and k such that ${Y}_{1}=h({Y}_{2})$ and ${Y}_{2}=k({Y}_{1})$ a.s., then $H(X\mid {Y}_{1})=H(X\mid h({Y}_{2}))\ge H(X\mid {Y}_{2},h({Y}_{2}))=H(X\mid {Y}_{2})$ and likewise $H(X\mid {Y}_{2})=H(X\mid k({Y}_{1}))\ge H(X\mid {Y}_{1},k({Y}_{1}))=H(X\mid {Y}_{1})$. Therefore, $H(X\mid {Y}_{1})=H(X\mid {Y}_{2})$.
 Mutual information: Since $I(X;Y)=H(X)-H(X\mid Y)$, compatibility follows from the two previous cases. □
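The compatibility just proved can be observed numerically: equivalent variables (relabelings of one another) have identical empirical entropies. A small sketch, with the helper name `entropy` ours:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Empirical Shannon entropy in bits."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

X = [0, 0, 1, 2, 2, 2]
# A bijective relabeling of X: an equivalent variable.
relabeled = ['a' if x == 0 else 'b' if x == 1 else 'c' for x in X]
assert abs(entropy(X) - entropy(relabeled)) < 1e-12  # H is class-invariant
```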
3.2. Common Information vs. Mutual Information
3.3. Submodularity of Entropy on the Information Lattice
3.4. Two Entropic Metrics: Shannon Distance; Rajski Distance
 Positivity: As noted above, $D(X,Y)\ge 0$, with equality if and only if $X=Y$ as elements of the lattice (i.e., $X\equiv Y$).
 Symmetry: $D(X,Y)=D(Y,X)$ is obvious by the commutativity of addition.
 Triangular inequality: First note that $H(X\mid Z)\le H(X,Y\mid Z)=H(X\mid Y,Z)+H(Y\mid Z)\le H(X\mid Y)+H(Y\mid Z)$. By permuting X and Z, we also obtain $H(Z\mid X)\le H(Z\mid Y)+H(Y\mid X)$. Summing the two inequalities, we obtain the triangular inequality $D(X,Z)=H(X\mid Z)+H(Z\mid X)\le H(X\mid Y)+H(Y\mid X)+H(Y\mid Z)+H(Z\mid Y)=D(X,Y)+D(Y,Z)$. □
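The identity $D(X,Y)=H(X\mid Y)+H(Y\mid X)=2H(X,Y)-H(X)-H(Y)$ makes the Shannon distance easy to compute for variables sampled on a common probability space; a sketch (helper names ours):

```python
from collections import Counter
from math import log2

def H(samples):
    """Empirical entropy in bits."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def D(xs, ys):
    """Shannon distance D(X,Y) = H(X|Y) + H(Y|X) = 2H(X,Y) - H(X) - H(Y)."""
    return 2 * H(list(zip(xs, ys))) - H(xs) - H(ys)

X = [0, 0, 1, 1]
Y = [0, 1, 0, 1]
Z = [0, 1, 1, 0]
assert abs(D(X, X)) < 1e-12                  # D vanishes on equivalent variables
assert D(X, Z) <= D(X, Y) + D(Y, Z) + 1e-12  # triangle inequality
```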
3.5. Dependency Coefficient
3.6. Discontinuity and Continuity Properties
 (i)
 $|H(X)-H(Y)|\le D(X,Y)$.
 (ii)
 $|H(X,Y)-H({X}^{\prime},{Y}^{\prime})|\le D(X,{X}^{\prime})+D(Y,{Y}^{\prime})$.
 (iii)
 $|H(X\mid Y)-H({X}^{\prime}\mid {Y}^{\prime})|\le D(X,{X}^{\prime})+2D(Y,{Y}^{\prime})$.
 (iv)
 $|I(X;Y)-I({X}^{\prime};{Y}^{\prime})|\le 2(D(X,{X}^{\prime})+D(Y,{Y}^{\prime}))$.
 (i)
 By the chain rule: $H(X)+H(Y\mid X)=H(X,Y)=H(Y)+H(X\mid Y)$; hence $|H(X)-H(Y)|=|H(X\mid Y)-H(Y\mid X)|\le H(X\mid Y)+H(Y\mid X)=D(X,Y)$.
 (ii)
 Applying inequality (i) to the variables $(X,Y)$ and $({X}^{\prime},{Y}^{\prime})$, we obtain $|H(X,Y)-H({X}^{\prime},{Y}^{\prime})|\le D((X,Y),({X}^{\prime},{Y}^{\prime}))$. From the continuity of joint information (Proposition 20), one can further bound $D((X,Y),({X}^{\prime},{Y}^{\prime}))\le D(X,{X}^{\prime})+D(Y,{Y}^{\prime})$.
 (iii)
 By the chain rule, $H(X\mid Y)-H({X}^{\prime}\mid {Y}^{\prime})=H(X,Y)-H(Y)-(H({X}^{\prime},{Y}^{\prime})-H({Y}^{\prime}))\le |H(X,Y)-H({X}^{\prime},{Y}^{\prime})|+|H({Y}^{\prime})-H(Y)|$. The conclusion now follows from (i) and (ii).
 (iv)
 By the chain rule, $I(X;Y)-I({X}^{\prime};{Y}^{\prime})=H(X)-H({X}^{\prime})+H(Y)-H({Y}^{\prime})+H({X}^{\prime},{Y}^{\prime})-H(X,Y)\le |H(X)-H({X}^{\prime})|+|H(Y)-H({Y}^{\prime})|+|H({X}^{\prime},{Y}^{\prime})-H(X,Y)|$. The conclusion follows from bounding each of the three terms in the sum using (i) and (ii). □
4. Geometric Properties of the Information Lattice
4.1. Alignments of Random Variables
4.2. Convex Sets of Random Variables in the Information Lattice
4.3. The Lattice Generated by a Random Variable
4.4. Properties of Rajski and Shannon Distances in the Lattice Generated by a Random Variable
4.5. Triangle Properties of the Shannon Distance
5. The Perfect Reconstruction Problem
5.1. Problem Statement
5.2. A Necessary Condition for Perfect Reconstruction
5.3. A Sufficient Condition for Perfect Reconstruction
 Either ${\sum}_{i=1}^{n}d(X,{X}_{i})=n-1$, and perfect reconstruction is possible;
 Or ${\sum}_{i=1}^{n}d(X,{X}_{i})>n-1$, and perfect reconstruction is impossible.
 Either ${\sum}_{i=1}^{n}\rho (X,{X}_{i})=1$, and perfect reconstruction is possible;
 Or ${\sum}_{i=1}^{n}\rho (X,{X}_{i})<1$, and perfect reconstruction is impossible.
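The criterion can be evaluated directly with the Rajski distance $d(X,Y)=D(X,Y)/H(X\vee Y)$. As a sanity check on the sign/absolute-value example of Section 6.1 (our numerical setup, not the paper's): with $X$ uniform on $\{-2,-1,1,2\}$ and $n=2$ observations ${X}_{1}=\mathrm{sign}(X)$, ${X}_{2}=|X|$, each distance is $1/2$, so the sum equals $n-1=1$ and perfect reconstruction is possible:

```python
from collections import Counter
from math import log2

def H(samples):
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def rajski(xs, ys):
    """Rajski distance d(X,Y) = D(X,Y)/H(X,Y), taken as 0 when H(X,Y)=0."""
    hj = H(list(zip(xs, ys)))
    return 0.0 if hj == 0 else (2 * hj - H(xs) - H(ys)) / hj

X  = [-2, -1, 1, 2]                   # uniform on four values
X1 = [1 if x > 0 else -1 for x in X]  # sign
X2 = [abs(x) for x in X]              # absolute value
total = rajski(X, X1) + rajski(X, X2)
print(total)  # 1.0 = n - 1: perfect reconstruction is possible
```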
5.4. Approximate Reconstruction
6. Examples and Applications
6.1. Reconstruction from Sign and Absolute Value
6.2. Linear Transformation over a Finite Field
6.3. Integer Prime Factorization
6.4. Chinese Remainder Theorem
6.5. Optimal Sort
7. Conclusions and Perspectives
Author Contributions
Funding
Conflicts of Interest
References
 Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
 Shannon, C.E. The lattice theory of information. Trans. IRE Prof. Group Inf. Theory 1953, 1, 105–107.
 Fano, R.M. Interview by Aftab, Cheung, Kim, Thkkar, Yeddanapudi, 6.933 Project History, Massachusetts Institute of Technology. November 2001.
 Fano, R.M. Class Notes for Course 6.574: Transmission of Information; MIT: Cambridge, MA, USA, 1952.
 Cherry, E.C. A history of the theory of information. Proc. Inst. Electr. Eng. 1951, 98, 383–393.
 Shannon, C.E. The bandwagon (editorial). IRE Trans. Inf. Theory 1956, 2, 3.
 Shannon, C.E. Some Topics on Information Theory. In Proceedings of the International Congress of Mathematicians, Cambridge, MA, USA, 30 August–6 September 1950; Volume II, pp. 262–263.
 Rioul, O.; Béguinot, J.; Rabiet, V.; Souloumiac, A. La véritable (et méconnue) théorie de l’information de Shannon. In Proceedings of the 28e Colloque GRETSI 2022, Nancy, France, 6–9 September 2022.
 Rajski, C. A metric space of discrete probability distributions. Inf. Control 1961, 4, 371–377.
 Gács, P.; Körner, J. Common information is far less than mutual information. Probl. Control Inf. Theory 1973, 2, 149–162.
 El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011.
 Wyner, A.D. The common information of two dependent random variables. IEEE Trans. Inf. Theory 1975, 21, 163–179.
 Nakamura, Y. Entropy and semivaluations on semilattices. Kodai Math. Semin. Rep. 1970, 22, 443–468.
 Yeung, R.W. Information Theory and Network Coding; Springer: Berlin/Heidelberg, Germany, 2008.
 Horibe, Y. A note on entropy metrics. Inf. Control 1973, 22, 403.
 Jaccard, P. Distribution de la flore alpine dans le bassin des Dranses et dans quelques régions voisines. Bull. Société Vaudoise des Sci. Nat. 1901, 37, 241–272.
 Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed.; Cambridge University Press: Cambridge, UK, 2011.
 Donderi, D.C. Information measurement of distinctiveness and similarity. Percept. Psychophys. 1988, 44, 576–584.
 Donderi, D.C. An information theory analysis of visual complexity and dissimilarity. Perception 2006, 35, 823–835.
 Rioul, O. Théorie de l’information et du codage; Hermes Science-Lavoisier: London, UK, 2007.
 Pierce, J.R. The early days of information theory. IEEE Trans. Inf. Theory 1973, 19, 3–8.
 Malacaria, P. Algebraic foundations for quantitative information flow. Math. Struct. Comput. Sci. 2015, 25, 404–428.
$\mathit{\omega}$  0  1  2  3 
X  0  1  0  1 
${Z}_{1}$  1  1  2  2 
${Z}_{2}$  2  1  1  2 
${Z}_{1}\vee {Z}_{2}$  $(1,2)$  $(1,1)$  $(2,1)$  $(2,2)$ 
$X\wedge ({Z}_{1}\vee {Z}_{2})$  0  1  0  1 
$X\wedge {Z}_{1}$  0  0  0  0 
$X\wedge {Z}_{2}$  0  0  0  0 
$(X\wedge {Z}_{1})\vee (X\wedge {Z}_{2})$  $(0,0)$  $(0,0)$  $(0,0)$  $(0,0)$ 
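The table above is the counterexample to distributivity discussed in Section 2.7: $X\wedge ({Z}_{1}\vee {Z}_{2})\equiv X$, whereas $(X\wedge {Z}_{1})\vee (X\wedge {Z}_{2})\equiv 0$. This can be checked mechanically by computing meets as connected components of the bipartite value graph (the Gács–Körner construction); a sketch with names of our choosing:

```python
def meet(u, v):
    """Gács–Körner meet of two variables given as value lists over a
    uniform finite Ω: label each ω by the connected component of the
    bipartite graph linking u-values to v-values."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in zip(u, v):
        parent.setdefault(('u', a), ('u', a))
        parent.setdefault(('v', b), ('v', b))
        parent[find(('u', a))] = find(('v', b))
    return [find(('u', a)) for a in u]

def classes(var):
    """Partition of Ω = {0,...,n-1} induced by a variable."""
    return {frozenset(i for i, v in enumerate(var) if v == w) for w in set(var)}

X  = [0, 1, 0, 1]
Z1 = [1, 1, 2, 2]
Z2 = [2, 1, 1, 2]
J  = list(zip(Z1, Z2))                     # join Z1 ∨ Z2 (identity on Ω here)
lhs = meet(X, J)                           # X ∧ (Z1 ∨ Z2)
rhs = list(zip(meet(X, Z1), meet(X, Z2)))  # (X ∧ Z1) ∨ (X ∧ Z2)
print(classes(lhs) == classes(X))          # True:  lhs ≡ X
print(len(classes(rhs)) == 1)              # True:  rhs is deterministic (≡ 0)
```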
Delsol, I.; Rioul, O.; Béguinot, J.; Rabiet, V.; Souloumiac, A. An Information Theoretic Condition for Perfect Reconstruction. Entropy 2024, 26, 86. https://doi.org/10.3390/e26010086