Upper Bound on the Joint Entropy of Correlated Sources Encoded by Good Lattices
Abstract
1. Introduction
1.1. Contributions
 A class of upper bounds on the conditional entropy rates of appropriately designed lattice encodings of Gaussian signals.
 An application of the bounds to the problem of point-to-point communication through a many-help-one network in the presence of interference. This strategy takes advantage of the lattice structure of a specially designed transmitter codebook.
 A numerical experiment demonstrating the behavior of these bounds. It shows that a joint-compression stage can partially alleviate inefficiencies in lattice encoder design.
1.2. Background
1.3. Outline
2. Main Results
 Choose some ${\vec{\alpha}}_{0}\in {\mathbb{Z}}^{K-1}$, ${n}_{0}\in \mathbb{N}$. Apply Lemma 1 to ${U}_{K}$. Call ${\tilde{Y}}_{\perp}$ a ‘residual.’
 Choose some $\vec{\alpha}\in {\mathbb{Z}}^{K-1}$. Apply Lemma 2 to the residual to break the residual ${\tilde{Y}}_{\perp}$ up into the sum of a lattice part due to ${\vec{\alpha}}^{\dagger}{\vec{Y}}_{[K-1]}$ and a new residual, whatever is left over.
 Repeat the previous step until the residual vanishes (up to $K-1$ times). Notice that this process gives several different ways of writing ${U}_{K}$: stopping after any number of steps expresses ${U}_{K}$ as the modulo sum of several lattice components and a residual.
 Design the lattice ensemble for the encoders such that the log-volume contributed to the support of ${U}_{K}$ by each component can be estimated. The discrete parts each contribute log-volume $\frac{1}{2}\log{\delta}^{2}$ and the residuals log-volume ${r}_{K}+\frac{1}{2}\log{\sigma}^{2}$.
 Recognize that the entropy of ${U}_{K}$ is no greater than the log-volume of its support. Choose the lowest support log-volume estimate among those just found.
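The accounting in the steps above can be sketched numerically. This is a minimal sketch with hypothetical inputs: `deltas_sq`, `sigmas_sq`, and `r_K` stand in for the per-step quantities $\delta_s^2$, $\sigma_s^2$, and $r_K$, which in the paper come from the lattice ensemble design.

```python
import math

def entropy_bound(deltas_sq, sigmas_sq, r_K):
    """Upper-bound the entropy of U_K by the smallest support log-volume
    over all stopping points s: after s steps the support consists of s
    discrete (lattice) components plus one residual component.

    deltas_sq[i] : variance delta_i^2 of the i-th discrete component
    sigmas_sq[s] : residual variance sigma_s^2 after s steps
    r_K          : nesting ratio (rate) of the K-th encoder
    """
    best = float("inf")
    discrete = 0.0  # running sum of discrete-part log-volumes
    for s in range(len(sigmas_sq)):
        # log-volume estimate with s discrete parts and the step-s residual
        vol = discrete + r_K + 0.5 * math.log(sigmas_sq[s])
        best = min(best, vol)
        if s < len(deltas_sq):
            discrete += 0.5 * math.log(deltas_sq[s])
    return best
```

Stopping at $s=0$ corresponds to bounding with the initial residual alone; a decomposition step only helps when the drop in residual variance outweighs the added discrete log-volume.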
3. Lattice-Based Strategy for Communication via Decentralized Processing
3.1. Description of the Communication Scheme
 ${\sigma}_{\mathit{noise}}^{2}:=\mathrm{var}\,{Y}_{\mathit{noise}}=\mathrm{var}\left({({\mathbf{a}}_{\mathbb{Z}}+{\mathbf{a}}_{\mathbb{R}})}^{\dagger}{\vec{Y}}_{\left[K\right]}-{X}_{\mathit{msg}}^{n}\right),$
 ${Y}_{\mathit{noise}}\perp ({X}_{\mathit{msg}},M,{W}_{\mathit{msg}}),$
 ${Y}_{\mathit{noise}}$ is, with high probability, in the base cell of any lattice good for coding against semi norm-ergodic noise up to power ${\sigma}_{\mathit{noise}}^{2}+\epsilon$.
3.2. Numerical Results
3.2.1. Communications Schemes
 Fix some $c\in (0,3)$. If helper $k\in \left[4\right]$ in the channel from Figure 4 observes ${X}_{\mathrm{raw},k}^{n}$, then it encodes a normalized version of the signal:$${X}_{k}^{n}:=\frac{c}{\sqrt{var{X}_{\mathrm{raw},k}^{n}}}{X}_{\mathrm{raw},k}^{n}.$$
 Fix equal lattice encoding rates per helper $r={r}_{1}={r}_{2}={r}_{3}={r}_{4}$, and take lattice encoders as described in Theorem 1. Note that these rates may be distinct from the helper-to-base rates ${R}_{1},\dots ,{R}_{4}$ if post-processing of the encodings is involved.
 Upper Bound: An upper bound on the achievable transmitter-to-decoder communications rate, corresponding to helpers that forward with infinite rate. This bound is given by the formula $I({X}_{\mathrm{msg}};{\left({X}_{\mathrm{raw},k}\right)}_{k\in \left[4\right]})$.
 Corollary 1: The achievable communications rate from Corollary 1, where each helper computes the lattice encoding described above, then employs a joint-compression stage to reduce its messaging rate. The sum helpers-to-decoder rate for this scheme is given by Equation (2), taking $S=\left[4\right]$. The achieved messaging rate is given by the right-hand side of Equation (3).
 Uncompressed Lattice: The achievable communications rate from Corollary 1, with each helper forwarding its entire lattice encoding to the decoder without joint compression. The sum helpers-to-decoder rate for this scheme is $4r$, since each helper forwards to the base at rate ${R}_{k}=r$. The achieved messaging rate is given by the right-hand side of Equation (3).
 Quantize & Forward: An achievable communications rate where the helper-to-decoder rates ${R}_{k},\ k\in \left[4\right]$ are chosen so that ${R}_{1}+{R}_{2}+{R}_{3}+{R}_{4}={R}_{\mathrm{sum}}$ and each helper forwards a rate-distortion-optimal quantization of its observation to the decoder. The decoder processes these quantizations into an estimate of ${X}_{\mathrm{msg}}$ and decodes. This is discussed in more detail in [13]. The sum helpers-to-decoder rate for this scheme is ${R}_{\mathrm{sum}}$. The achieved messaging rate is $I({X}_{\mathrm{msg}};{({X}_{\mathrm{raw},k}+{Z}_{k})}_{k\in \left[4\right]})$, where ${Z}_{k}\sim \mathcal{N}(0,\mathrm{var}\left({X}_{\mathrm{raw},k}\right)\cdot {2}^{-2{R}_{k}})$.
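For intuition about the Quantize & Forward rate, here is a single-helper scalar version (the actual quantity in the text is the vector mutual information over all four helpers; the function name is mine, and the formula uses the Gaussian rate-distortion noise variance $\mathrm{var}(X)\cdot 2^{-2R_k}$ given above):

```python
import math

def qf_rate_scalar(R_k):
    """Messaging rate I(X; X + Z) in bits for one Gaussian helper, where
    the quantization noise Z has variance var(X) * 2**(-2*R_k)."""
    snr = 2.0 ** (2.0 * R_k)          # var(X) / (var(X) * 2^{-2 R_k})
    return 0.5 * math.log2(1.0 + snr)
```

Note that for large $R_k$ this approaches $R_k$: an infinitely fine quantization forwards essentially everything the helper knows, recovering the Upper Bound behavior.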
4. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A. Subroutines
 Stages*(·) is a slight modification of an algorithm from [3], reproduced here in Algorithm 1. The original algorithm characterizes the integral combinations ${\mathit{A}}^{\dagger}\vec{Y}$ that are recoverable with high probability from lattice messages $\vec{U}$ and dithers $\vec{W}$, excluding those with zero power. The exclusion is due to the algorithm’s use of $\mathrm{SLVC}\left(\cdot\right)$ as defined next. Such linear combinations never arose in the context of [3], although that work justifies their recoverability; there, the algorithm’s argument is always full-rank, which is not true in the present context. The version here includes these zero-power subspaces by adding a call to $\mathrm{LatticeKernel}\left(\cdot\right)$ before returning.
 $\mathrm{SLVC}\left(\mathit{B}\right)$, ‘Shortest Lattice Vector Coordinates,’ returns the nonzero integer vector $\vec{a}$ that minimizes the norm of $\mathit{B}\vec{a}$ subject to $\mathit{B}\vec{a}\ne 0$, or the zero vector if no such vector exists. $\mathrm{SLVC}\left(\cdot\right)$ can be implemented using a lattice enumeration algorithm such as the one in [15], together with the LLL algorithm to convert a set of spanning lattice vectors into a basis [16].
 $\mathrm{LatticeKernel}(\mathit{B},\mathit{A})$, for $\mathit{B}\in {\mathbb{R}}^{K\times d}$, $\mathit{A}\in {\mathbb{Z}}^{d\times a}$, returns an integer matrix ${\mathit{A}}_{\perp}\in {\mathbb{Z}}^{d\times b}$ whose columns span the collection of all $\vec{a}\in {\mathbb{Z}}^{d}$ with $\mathit{B}\vec{a}=0$ and ${\mathit{A}}^{\dagger}\vec{a}={0}_{a}$. In other words, it returns a basis for the integer lattice in $\ker \mathit{B}$ whose components are orthogonal to the lattice spanned by $\mathit{A}$. This can be implemented using an algorithm for finding ‘simultaneous integer relations’ as described in [17].
 $\mathrm{ICQM}\left(\mathit{M},\vec{v},c\right)$ is an “Integer Convex Quadratic Minimizer.” It solves the NP-hard problem: “Minimize $({\vec{x}}^{\dagger}\mathit{M}\vec{x}+2{\vec{v}}^{\dagger}\vec{x}+c)$ over $\vec{x}$ with integer components.” Although finding the optimal solution is exponentially difficult in the input size, algorithms are tractable in low dimension [18] (Algorithm 5, Figure 2).
 CVarComponents$\left({\mathsf{\Sigma}}_{Q},\mathit{A}\right)$ returns the quantities $\{\mathit{M},\vec{v},c\}$ involved in computing$$\mathrm{var}\left({Y}_{K}-{\vec{\alpha}}^{\dagger}{\vec{Y}}_{[K-1]}\;\middle|\;{\mathit{A}}^{\dagger}{\vec{Y}}_{[K-1]}\right)$$as a quadratic in $\vec{\alpha}$. Write$$\begin{array}{rl}{\mathsf{\Sigma}}_{Q}&=\begin{bmatrix}{\mathit{M}}_{1}&{\vec{v}}_{1}\\ {\vec{v}}_{1}^{\dagger}&{\varsigma}_{1}^{2}\end{bmatrix},\\ {\mathsf{\Sigma}}_{Q}\begin{bmatrix}\mathit{A}\\ 0\end{bmatrix}{\left({\begin{bmatrix}\mathit{A}\\ 0\end{bmatrix}}^{\dagger}{\mathsf{\Sigma}}_{Q}\begin{bmatrix}\mathit{A}\\ 0\end{bmatrix}\right)}^{-1}{\begin{bmatrix}\mathit{A}\\ 0\end{bmatrix}}^{\dagger}{\mathsf{\Sigma}}_{Q}&=\begin{bmatrix}{\mathit{M}}_{2}&{\vec{v}}_{2}\\ {\vec{v}}_{2}^{\dagger}&{\varsigma}_{2}^{2}\end{bmatrix}.\end{array}$$Then, taking $\mathit{M}=({\mathit{M}}_{1}-{\mathit{M}}_{2})$, $\vec{v}=-({\vec{v}}_{1}-{\vec{v}}_{2})$, $c=({\varsigma}_{1}^{2}-{\varsigma}_{2}^{2})$, one can check that:$$\mathrm{var}\left({Y}_{K}-{\vec{\alpha}}^{\dagger}{\vec{Y}}_{[K-1]}\;\middle|\;{\mathit{A}}^{\dagger}{\vec{Y}}_{[K-1]}\right)={\vec{\alpha}}^{\dagger}\mathit{M}\vec{\alpha}+2{\vec{v}}^{\dagger}\vec{\alpha}+c.$$
 CVar$\left({\mathit{M}}_{1}\mid{\mathit{M}}_{2};\mathsf{\Sigma}\right)$ computes the conditional covariance matrix of ${\mathit{M}}_{1}^{\dagger}\vec{Z}$ given ${\mathit{M}}_{2}^{\dagger}\vec{Z}$ for $\vec{Z}\sim \mathcal{N}(0,\mathsf{\Sigma})$. This is given by the formula:$$\mathrm{CVar}\left({\mathit{M}}_{1}\mid{\mathit{M}}_{2};\mathsf{\Sigma}\right):={\mathit{M}}_{1}^{\dagger}\mathsf{\Sigma}{\mathit{M}}_{1}-{\mathit{M}}_{1}^{\dagger}\mathsf{\Sigma}{\mathit{M}}_{2}\,\mathrm{pinv}\left({\mathit{M}}_{2}^{\dagger}\mathsf{\Sigma}{\mathit{M}}_{2}\right){\mathit{M}}_{2}^{\dagger}\mathsf{\Sigma}{\mathit{M}}_{1}.$$
 $\mathrm{ALPHA0}\left({\mathsf{\Sigma}}_{Q},\mathit{A}\right)$ in Algorithm 2 implements a strategy for choosing ${\vec{\alpha}}_{0}$ in Theorems 1, 2.
 $\mathrm{ALPHA}\left(\mathsf{\Sigma},\mathit{A}\right)$ in Algorithm 3 implements a strategy for choosing ${\vec{\alpha}}_{s}$ in Theorems 1, 2.
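A brute-force stand-in for SLVC in low dimension can make the subroutine concrete. This is illustrative only: real implementations use lattice enumeration [15] plus LLL [16], and the search box `bound` is my assumption, not part of the algorithm.

```python
import itertools

def slvc_bruteforce(B, bound=5):
    """Return integer coordinates a minimizing ||B a|| subject to B a != 0,
    searching the box [-bound, bound]^d. Returns the zero vector if every
    integer vector in the box maps to (numerically) zero."""
    d = len(B[0])
    best, best_a = float("inf"), [0] * d
    for a in itertools.product(range(-bound, bound + 1), repeat=d):
        # image of the integer coordinates under the generator matrix B
        v = [sum(B[i][j] * a[j] for j in range(d)) for i in range(len(B))]
        norm_sq = sum(x * x for x in v)
        if 1e-12 < norm_sq < best:
            best, best_a = norm_sq, list(a)
    return best_a
```

For the generator matrix with columns $(1,0)$ and $(0.5,0.5)$, the shortest nonzero lattice vector has squared norm $0.5$, which the search finds.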
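The search LatticeKernel performs can likewise be illustrated by brute force. A real implementation uses simultaneous integer relation algorithms [17]; this sketch merely enumerates the solutions in a small box (my assumption) rather than returning a lattice basis.

```python
import itertools

def lattice_kernel_bruteforce(B, A, bound=3, tol=1e-9):
    """Enumerate nonzero integer vectors a in [-bound, bound]^d with
    B a = 0 and A^T a = 0, i.e., integer kernel vectors of B orthogonal
    to the columns of A."""
    d = len(B[0])
    out = []
    for a in itertools.product(range(-bound, bound + 1), repeat=d):
        if all(x == 0 for x in a):
            continue
        Ba = [sum(B[i][j] * a[j] for j in range(d)) for i in range(len(B))]
        Aa = [sum(A[j][i] * a[j] for j in range(d)) for i in range(len(A[0]))]
        if all(abs(x) < tol for x in Ba) and all(abs(x) < tol for x in Aa):
            out.append(list(a))
    return out
```

With $B = [1\ 1\ 0]$ and $A$ the single column $e_3$, the qualifying vectors are exactly the multiples of $(1,-1,0)$.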
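ICQM can also be sketched by exhaustive search, which is viable in the low dimensions used here. The practical solver is the one cited in [18]; the box search below assumes the minimizer lies within `bound` of the origin (true for positive-definite $M$ with a suitable bound).

```python
import itertools

def icqm_bruteforce(M, v, c, bound=5):
    """Minimize x^T M x + 2 v^T x + c over integer x in [-bound, bound]^d.
    Exponential in d; intended only as a reference implementation."""
    d = len(v)

    def q(x):
        quad = sum(x[i] * M[i][j] * x[j] for i in range(d) for j in range(d))
        return quad + 2 * sum(v[i] * x[i] for i in range(d)) + c

    return min(itertools.product(range(-bound, bound + 1), repeat=d), key=q)
```

For $M = 2I$, $\vec{v} = (-2, 4)$, $c = 0$, the continuous minimizer $-M^{-1}\vec{v} = (1, -2)$ happens to be integral, so the search returns it exactly.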
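CVar(·) is the standard Schur-complement form of a Gaussian conditional covariance, so it translates directly into a few matrix products. The function name mirrors the subroutine, but the numpy dependency is my choice for the sketch.

```python
import numpy as np

def cvar(M1, M2, Sigma):
    """Conditional covariance of M1^T Z given M2^T Z for Z ~ N(0, Sigma):
    S11 - S12 pinv(S22) S21, with Sij the cross-covariances of Mi^T Z."""
    S11 = M1.T @ Sigma @ M1
    S12 = M1.T @ Sigma @ M2
    S22 = M2.T @ Sigma @ M2
    return S11 - S12 @ np.linalg.pinv(S22) @ S12.T
```

For a bivariate $\Sigma$ with unit cross-covariance and variances 2, conditioning one coordinate on the other leaves variance $2 - 1^2/2 = 1.5$.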
Algorithm 1 Compute recoverable linear combinations $\mathit{A}\in {\mathbb{R}}^{K\times m}$ from modulos of lattice encodings with covariance ${\mathsf{\Sigma}}_{Q}\in {\mathbb{R}}^{K\times K}$. 

Algorithm 2 Strategy for choosing ${\overrightarrow{\alpha}}_{0}$ for Theorems 1, 2 

Algorithm 3 Strategy for picking ${\overrightarrow{\alpha}}_{s}$ for Theorems 1, 2. 

Appendix B. Proof of Lemmas 1, 2, Theorem 1
Appendix B.1. Upper Bound for Singleton S
 Coarse and fine encoding lattices ${L}_{c},{L}_{1},\dots ,{L}_{K}$ (base regions ${B}_{c},{B}_{1},\dots ,{B}_{K}$), where for each $k$, ${L}_{c}\subset {L}_{k}$ is designed with nesting ratio $\frac{1}{n}\log|{B}_{c}\cap {L}_{k}|\to {r}_{k}$.
 Discrete-part auxiliary lattices ${\widehat{L}}_{1},\dots ,{\widehat{L}}_{K}$ (base regions ${\widehat{B}}_{1},\dots ,{\widehat{B}}_{K}$), with each ${\widehat{L}}_{k}\subset {L}_{c}$ having nesting ratio $\frac{1}{n}\log|{\widehat{B}}_{k}\cap {L}_{c}|\to \frac{1}{2}\log{\delta}_{k}^{2}$.
 Initial residual-part auxiliary lattice ${\widehat{L}}_{0}^{\prime}$ (base region ${\widehat{B}}_{0}^{\prime}$) with ${\widehat{L}}_{0}^{\prime}\subset {L}_{K}$, nesting ratio $\frac{1}{n}\log|{\widehat{B}}_{0}^{\prime}\cap {L}_{K}|\to \frac{1}{2}\log{\sigma}_{0}^{2}$.
 Residual-part auxiliary lattices ${\widehat{L}}_{1}^{\prime},\dots ,{\widehat{L}}_{K}^{\prime}$ (base regions ${\widehat{B}}_{1}^{\prime},\dots ,{\widehat{B}}_{K}^{\prime}$), with each ${\widehat{L}}_{k}^{\prime}\subset {L}_{K}$ having nesting ratio $\frac{1}{n}\log|{\widehat{B}}_{k}^{\prime}\cap {L}_{K}|\to \frac{1}{2}\log{\sigma}_{k}^{2}$.
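With these nesting ratios in hand, the support log-volume accounting described in Section 2 can be restated compactly (a sketch of the resulting bound, with $s$ indexing the stopping step):$$H({U}_{K})\ \le\ \min_{0\le s\le K-1}\left[\sum_{i=1}^{s}\tfrac{1}{2}\log{\delta}_{i}^{2}\;+\;{r}_{K}+\tfrac{1}{2}\log{\sigma}_{s}^{2}\right],$$where each discrete component contributes its $\tfrac{1}{2}\log{\delta}_{i}^{2}$ term and the step-$s$ residual contributes ${r}_{K}+\tfrac{1}{2}\log{\sigma}_{s}^{2}$.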
Appendix C. Sketch of Theorem 2 for Upper Bound on Entropy Rates of Decentralized Processing Messages
Appendix D. Proof of Lemma 3 for Recombination of Decentralized Processing Lattice Modulos
Appendix E. Proof of Corollary 1 for Achievability of the Decentralized Processing Rate
References
 Zamir, R. Lattice Coding for Signals and Networks: A Structured Coding Approach to Quantization, Modulation, and Multiuser Information Theory; Cambridge University Press: Cambridge, UK, 2014.
 Ordentlich, O.; Erez, U. A simple proof for the existence of “good” pairs of nested lattices. IEEE Trans. Inf. Theory 2016, 62, 4439–4453.
 Chapman, C.; Kinsinger, M.; Agaskar, A.; Bliss, D.W. Distributed Recovery of a Gaussian Source in Interference with Successive Lattice Processing. Entropy 2019, 21, 845.
 Erez, U.; Zamir, R. Achieving $\frac{1}{2}$log(1+SNR) on the AWGN Channel With Lattice Encoding and Decoding. IEEE Trans. Inf. Theory 2004, 50, 1.
 Ordentlich, O.; Erez, U.; Nazer, B. Successive integer-forcing and its sum-rate optimality. In Proceedings of the 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 2–4 October 2013; pp. 282–292.
 Ordentlich, O.; Erez, U. Precoded integer-forcing universally achieves the MIMO capacity to within a constant gap. IEEE Trans. Inf. Theory 2014, 61, 323–340.
 Wagner, A.B. On Distributed Compression of Linear Functions. IEEE Trans. Inf. Theory 2011, 57, 79–94.
 Yang, Y.; Xiong, Z. An improved lattice-based scheme for lossy distributed compression of linear functions. In Proceedings of the 2011 Information Theory and Applications Workshop, La Jolla, CA, USA, 6–11 February 2011.
 Yang, Y.; Xiong, Z. Distributed compression of linear functions: Partial sum-rate tightness and gap to optimal sum-rate. IEEE Trans. Inf. Theory 2014, 60, 2835–2855.
 Cheng, H.; Yuan, X.; Tan, Y. Generalized compute-compress-and-forward. IEEE Trans. Inf. Theory 2018, 65, 462–481.
 Saurabha, T.; Viswanath, P.; Wagner, A.B. The Gaussian Many-help-one Distributed Source Coding Problem. IEEE Trans. Inf. Theory 2009, 56, 564–581.
 Sanderovich, A.; Shamai, S.; Steinberg, Y.; Kramer, G. Communication via Decentralized Processing. IEEE Trans. Inf. Theory 2008, 54, 3008–3023.
 Chapman, C.D.; Mittelmann, H.; Margetts, A.R.; Bliss, D.W. A Decentralized Receiver in Gaussian Interference. Entropy 2018, 20, 269.
 El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011.
 Schnorr, C.P.; Euchner, M. Lattice Basis Reduction: Improved Practical Algorithms and Solving Subset Sum Problems. Math. Program. 1994, 66, 181–199.
 Buchmann, J.; Pohst, M. Computing a Lattice Basis from a System of Generating Vectors. In Proceedings of the European Conference on Computer Algebra, Leipzig, Germany, 2–5 June 1987; pp. 54–63.
 Hastad, J.; Just, B.; Lagarias, J.C.; Schnorr, C.P. Polynomial time algorithms for finding integer relations among real numbers. SIAM J. Comput. 1989, 18, 859–881.
 Ghasemmehdi, A.; Agrell, E. Faster recursions in sphere decoding. IEEE Trans. Inf. Theory 2011, 57, 3530–3536.
 Krithivasan, D.; Pradhan, S.S. Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function. In International Symposium on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes; Springer: Berlin/Heidelberg, Germany, 2007; pp. 178–187.
 Erez, U.; Litsyn, S.; Zamir, R. Lattices Which are Good for (Almost) Everything. IEEE Trans. Inf. Theory 2005, 51, 3401–3416.
$a:=b$  Define a to equal b 
$\left[n\right]$  Integers from 1 to n 
$\mathit{A},\overrightarrow{a},\overrightarrow{A}$  Matrix, column vector, random vector 
${\mathit{A}}^{\u2020},{\overrightarrow{a}}^{\u2020}$  Transpose (All matrices involved are real) 
${\left[\mathit{A}\right]}_{S,T}$  Submatrix corresponding to rows S, columns T of $\mathit{A}$ 
${\overrightarrow{Y}}_{S}$  An $|S|$-vector, the subvector of $\overrightarrow{Y}$ including components with indices in S. If S has an order then this vector respects S’s order. 
${\mathbf{I}}_{K}$  $K\times K$ identity matrix 
${0}_{K}$  $K\times 1$ zero vector 
$diag\overrightarrow{a}$  Square diagonal matrix with diagonals $\overrightarrow{a}$ 
$\mathrm{pinv}(\cdot)$  Moore-Penrose pseudoinverse 
$\mathcal{N}(0,\mathsf{\Sigma})$  Normal distribution with zero mean, covariance $\mathsf{\Sigma}$ 
$X\sim f$  X is a random variable distributed like f 
${X}^{n},f\left({x}^{n}\right)$  Vector of n independent trials of a random variable distributed like X, a function whose input is intended to be such a variable 
$var\left(a\right)$  Variance (or covariance matrix) of (components of) a, averaged over time index. 
$var\left(a\mid b\right)$  Conditional variance (or covariance matrix) of (components of) a given observation b, averaged over time index. 
$cov(a,b),cov\left(a,b\mid c\right)$  Covariance between a and b; covariance between a and b conditioned on c, averaged over time index. 
$\mathcal{E}\left(a\mid b\right)$  Linear MMSE estimate of a given observations b 
${\mathcal{E}}_{\perp}\left(a\mid b\right)$  Complement of $\mathcal{E}\left(a\mid b\right)$, i.e., ${\mathcal{E}}_{\perp}\left(a\mid b\right):=a-\mathcal{E}\left(a\mid b\right)$. An important property is that $\mathcal{E}\left(a\mid b\right)$ and ${\mathcal{E}}_{\perp}\left(a\mid b\right)$ are uncorrelated. 
${round}_{L}(\xb7),{mod}_{L}(\xb7)$  Lattice round, modulo to a lattice L (when it is clear what base region is associated with L). 
K  Number of lattice encodings in current context. 
n  Scheme blocklength 
${X}_{k}^{n}$  Observation at receiver k 
${W}_{k}$  Lattice dither k 
${U}_{k}$  Lattice encoding k 
${Y}_{k}$  Quantization of ${X}_{k}^{n}$ 
${\overrightarrow{Y}}_{c}$  Ensemble of lattice quantizations, sans modulo 
$\mathsf{\Sigma}$  $K\times K$ timeaveraged covariance between observations ${X}_{1}^{n},\dots ,{X}_{K}^{n}$ 
${\mathsf{\Sigma}}_{Q}$  $K\times K$ timeaveraged covariance between quantizations ${Y}_{1},\dots ,{Y}_{K}$ 
${r}_{1},\dots ,{r}_{K}$  Nesting ratios for coarse lattice ${L}_{c}$ in the fine lattices ${L}_{1},\dots ,{L}_{K}$, equivalent to the encoding rates of lattice codes when joint compression is not used 
${R}_{1},\dots ,{R}_{K}$  Messaging rates for helpers in the Section 3 communications scenario 
${r}_{\mathrm{msg}}$  Nesting ratio for codebook coarse lattice ${L}_{c,\mathrm{msg}}$ in codebook fine lattice ${L}_{f,\mathrm{msg}}$ in Section 3, equivalent to codebook rate 
${\overrightarrow{h}}_{\mathrm{msg}}$  Covariance between codeword and quantizations in Section 3 
${\overrightarrow{\alpha}}_{s}$  Integer combination of ${\overrightarrow{Y}}_{c}$ to analyze in step s of Appendix B 
${\delta}_{s}^{2}$  Variance of ${\overrightarrow{\alpha}}_{s}^{\u2020}{\overrightarrow{Y}}_{c}$ after removing prior knowledge in Appendix B 
${\sigma}_{s}^{2}$  Variance of ${Y}_{K}$ uncorrelated with prior knowledge and ${\overrightarrow{\alpha}}_{s}^{\u2020}{\overrightarrow{Y}}_{c}$ in Appendix B 
${\beta}_{s}$  Regression coefficient for ${\overrightarrow{\alpha}}_{s}^{\u2020}{\overrightarrow{Y}}_{c}$ in ${Y}_{K}$ after including prior knowledge at step s in Appendix B 
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chapman, C.; Bliss, D.W. Upper Bound on the Joint Entropy of Correlated Sources Encoded by Good Lattices. Entropy 2019, 21, 957. https://doi.org/10.3390/e21100957