# Rate-Distortion Region of a Gray–Wyner Model with Side Information


## Abstract


## 1. Introduction

#### 1.1. Main Contributions

#### 1.2. Related Works

#### 1.3. Outline

#### Notation

## 2. Problem Setup and Formal Definitions

**Definition 1.**
- Three sets of messages ${\mathcal{W}}_{0}\triangleq [1:{M}_{0,n}]$, ${\mathcal{W}}_{1}\triangleq [1:{M}_{1,n}]$, and ${\mathcal{W}}_{2}\triangleq [1:{M}_{2,n}]$.
- Three encoding functions ${f}_{0}$, ${f}_{1}$, and ${f}_{2}$, defined for $j\in \{0,1,2\}$ as $${f}_{j}:{\mathcal{S}}_{1}^{n}\times {\mathcal{S}}_{2}^{n}\to {\mathcal{W}}_{j},\qquad ({S}_{1}^{n},{S}_{2}^{n})\mapsto {W}_{j}={f}_{j}({S}_{1}^{n},{S}_{2}^{n})\,.$$
- Two decoding functions ${g}_{1}$ and ${g}_{2}$, one at each user: $${g}_{1}:{\mathcal{W}}_{0}\times {\mathcal{W}}_{1}\times {\mathcal{Y}}_{1}^{n}\to {\widehat{\mathcal{S}}}_{2}^{n}\times {\widehat{\mathcal{S}}}_{1}^{n},\qquad ({W}_{0},{W}_{1},{Y}_{1}^{n})\mapsto ({\widehat{S}}_{2,1}^{n},{\widehat{S}}_{1}^{n})={g}_{1}({W}_{0},{W}_{1},{Y}_{1}^{n})\,,$$ $${g}_{2}:{\mathcal{W}}_{0}\times {\mathcal{W}}_{2}\times {\mathcal{Y}}_{2}^{n}\to {\widehat{\mathcal{S}}}_{2}^{n},\qquad ({W}_{0},{W}_{2},{Y}_{2}^{n})\mapsto {\widehat{S}}_{2,2}^{n}={g}_{2}({W}_{0},{W}_{2},{Y}_{2}^{n})\,.$$

The expected distortion of this code is given by $$\mathbb{E}\left({d}_{1}^{(n)}({S}_{1}^{n},{\widehat{S}}_{1}^{n})\right)\triangleq \mathbb{E}\,\frac{1}{n}\sum_{i=1}^{n}{d}_{1}({S}_{1,i},{\widehat{S}}_{1,i})\,.$$ The probability of error is defined as $${P}_{e}^{(n)}\triangleq \mathbb{P}\left({\widehat{S}}_{2,1}^{n}\ne {S}_{2}^{n}\ \text{or}\ {\widehat{S}}_{2,2}^{n}\ne {S}_{2}^{n}\right)\,.$$
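The per-letter averaging in the distortion definition above can be illustrated numerically. The sketch below assumes a Hamming distortion measure for concreteness (hypothetical; the definition leaves $d_1$ general):

```python
def hamming(a, b):
    """Per-letter Hamming distortion d_1(s, s_hat): 0 on a match, 1 otherwise."""
    return 0 if a == b else 1

def avg_distortion(s, s_hat, d=hamming):
    """Empirical per-letter distortion d_1^(n)(s^n, s_hat^n) = (1/n) * sum_i d_1(s_i, s_hat_i)."""
    assert len(s) == len(s_hat)
    n = len(s)
    return sum(d(si, shi) for si, shi in zip(s, s_hat)) / n

# n = 4 with a single mismatch gives a per-letter distortion of 1/4.
print(avg_distortion([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.25
```

Averaging this quantity over the source statistics yields the expected distortion $\mathbb{E}\big(d_1^{(n)}(S_1^n, \widehat{S}_1^n)\big)$ used in the definition.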

**Definition 2.**

## 3. Gray–Wyner Model with Side Information and Degraded Reconstruction Sets

**Theorem 1.**

- (1) The following Markov chain holds: $$({Y}_{1},{Y}_{2})\,-\!\!\circ\!\!-\,({S}_{1},{S}_{2})\,-\!\!\circ\!\!-\,({U}_{0},{U}_{1})\,.$$
- (2) There exists a function $\varphi :{\mathcal{Y}}_{1}\times {\mathcal{U}}_{0}\times {\mathcal{U}}_{1}\times {\mathcal{S}}_{2}\to {\widehat{\mathcal{S}}}_{1}$ such that, with ${\widehat{S}}_{1}=\varphi ({Y}_{1},{U}_{0},{U}_{1},{S}_{2})$, $$\mathbb{E}\,{d}_{1}({S}_{1},{\widehat{S}}_{1})\le {D}_{1}\,.$$
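The Markov chain in condition (1) states that the side information pair $(Y_1,Y_2)$ and the auxiliary pair $(U_0,U_1)$ are conditionally independent given the sources; equivalently, the joint pmf factorizes as (a standard restatement, using the variables of Theorem 1):

```latex
P_{S_1 S_2 Y_1 Y_2 U_0 U_1}
  = P_{S_1 S_2}\; P_{Y_1 Y_2 \mid S_1 S_2}\; P_{U_0 U_1 \mid S_1 S_2}\,.
```

This is the usual way the chain is enforced in achievability proofs: the auxiliary codewords are generated from the sources through a test channel $P_{U_0 U_1 \mid S_1 S_2}$, without access to $(Y_1, Y_2)$.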

**Proof.**

**Remark 1.**

**Remark 2.**

**Remark 3.**

## 4. The Heegard–Berger Problem with Successive Refinement

#### 4.1. Rate-Distortion Region

**Corollary 1.**

- (1) The following Markov chain holds: $$({U}_{0},{U}_{1})\,-\!\!\circ\!\!-\,({S}_{1},{S}_{2})\,-\!\!\circ\!\!-\,({Y}_{1},{Y}_{2})\,.$$
- (2) There exists a function $\varphi :{\mathcal{Y}}_{1}\times {\mathcal{U}}_{0}\times {\mathcal{U}}_{1}\times {\mathcal{S}}_{2}\to {\widehat{\mathcal{S}}}_{1}$ such that, with ${\widehat{S}}_{1}=\varphi ({Y}_{1},{U}_{0},{U}_{1},{S}_{2})$, $$\mathbb{E}\,{d}_{1}({S}_{1},{\widehat{S}}_{1})\le {D}_{1}\,.$$

**Proof.**

**Remark 4.**

**Remark 5.**

**Remark 6.**

**Remark 7.**

#### 4.2. Binary Example

**Claim 1.**

**Proof.**

## 5. The Heegard–Berger Problem with Scalable Coding

#### 5.1. Rate-Distortion Region

**Corollary 2.**

- (1) The following Markov chain holds: $$({U}_{0},{U}_{1})\,-\!\!\circ\!\!-\,({S}_{1},{S}_{2})\,-\!\!\circ\!\!-\,({Y}_{1},{Y}_{2})\,.$$
- (2) There exists a function $\varphi :{\mathcal{Y}}_{1}\times {\mathcal{U}}_{0}\times {\mathcal{U}}_{1}\times {\mathcal{S}}_{2}\to {\widehat{\mathcal{S}}}_{1}$ such that, with ${\widehat{S}}_{1}=\varphi ({Y}_{1},{U}_{0},{U}_{1},{S}_{2})$, $$\mathbb{E}\,{d}_{1}({S}_{1},{\widehat{S}}_{1})\le {D}_{1}\,.$$

**Proof.**

**Remark 8.**

**Remark 9.**

**Remark 10.**

#### 5.2. Binary Example

**Claim 2.**

**Proof.**

## 6. Proof of Theorem 1

#### 6.1. Proof of Converse Part

#### 6.2. Proof of Direct Part

**Proposition 1.**

**Proof of Proposition 1.**

#### 6.2.1. Codebook Generation

- (1) Randomly and independently generate ${2}^{n{T}_{0}}$ length-n codewords ${v}_{0}^{n}({k}_{0})$ indexed with the pair of indices ${k}_{0}=({k}_{0,0},{k}_{0,p})$, where ${k}_{0,0}\in [1:{2}^{n{T}_{0,0}}]$ and ${k}_{0,p}\in [1:{2}^{n{T}_{0,p}}]$. Each codeword ${v}_{0}^{n}({k}_{0})$ has i.i.d. entries drawn according to $\prod _{i=1}^{n}{P}_{{V}_{0}}({v}_{0,i}({k}_{0}))$. The codewords $\{{v}_{0}^{n}({k}_{0})\}$ are partitioned into superbins whose indices will be relevant for both receivers; and each superbin is partitioned in two different ways, each into subbins whose indices will be relevant for a distinct receiver (i.e., double-binning). This is obtained by partitioning the indices $\{({k}_{0,0},{k}_{0,p})\}$ as follows. We partition the ${2}^{n{T}_{0,0}}$ indices $\{{k}_{0,0}\}$ into ${2}^{n{\tilde{R}}_{0,0}}$ bins by randomly and independently assigning each index ${k}_{0,0}$ to an index ${\tilde{w}}_{0,0}({k}_{0,0})$ according to a uniform pmf over $[1:{2}^{n{\tilde{R}}_{0,0}}]$. We refer to each subset of indices $\{{k}_{0,0}\}$ with the same index ${\tilde{w}}_{0,0}$ as a bin ${\mathcal{B}}_{00}({\tilde{w}}_{0,0})$, ${\tilde{w}}_{0,0}\in [1:{2}^{n{\tilde{R}}_{0,0}}]$. In addition, we make two distinct partitions of the ${2}^{n{T}_{0,p}}$ indices $\{{k}_{0,p}\}$, each relevant for a distinct receiver. In the first partition, which is relevant for Receiver 1, each of the indices $\{{k}_{0,p}\}$ is randomly and independently assigned to an index ${\tilde{w}}_{0,1}({k}_{0,p})$ according to a uniform pmf over $[1:{2}^{n{\tilde{R}}_{0,1}}]$. We refer to each subset of indices $\{{k}_{0,p}\}$ with the same index ${\tilde{w}}_{0,1}$ as a bin ${\mathcal{B}}_{01}({\tilde{w}}_{0,1})$, ${\tilde{w}}_{0,1}\in [1:{2}^{n{\tilde{R}}_{0,1}}]$. Similarly, in the second partition, which is relevant for Receiver 2, each of the indices $\{{k}_{0,p}\}$ is randomly and independently assigned to an index ${\tilde{w}}_{0,2}({k}_{0,p})$ according to a uniform pmf over $[1:{2}^{n{\tilde{R}}_{0,2}}]$; and we refer to each subset of indices $\{{k}_{0,p}\}$ with the same index ${\tilde{w}}_{0,2}$ as a bin ${\mathcal{B}}_{02}({\tilde{w}}_{0,2})$, ${\tilde{w}}_{0,2}\in [1:{2}^{n{\tilde{R}}_{0,2}}]$.
- (2) For each ${k}_{0}\in [1:{2}^{n{T}_{0}}]$, randomly and independently generate ${2}^{n{T}_{1}}$ length-n codewords ${u}_{1}^{n}({k}_{1},{k}_{0})$ indexed with the pair of indices ${k}_{1}=({k}_{1,0},{k}_{1,1})$, where ${k}_{1,0}\in [1:{2}^{n{T}_{1,0}}]$ and ${k}_{1,1}\in [1:{2}^{n{T}_{1,1}}]$. Each codeword ${u}_{1}^{n}({k}_{1},{k}_{0})$ has i.i.d. elements drawn according to $\prod _{i=1}^{n}{P}_{{U}_{1}|{V}_{0}}({u}_{1,i}({k}_{1},{k}_{0})|{v}_{0,i}({k}_{0}))$. We partition the ${2}^{n{T}_{1,0}}$ indices $\{{k}_{1,0}\}$ into ${2}^{n{\tilde{R}}_{1,0}}$ bins by randomly and independently assigning each index ${k}_{1,0}$ to an index ${\tilde{w}}_{1,0}({k}_{1,0})$ according to a uniform pmf over $[1:{2}^{n{\tilde{R}}_{1,0}}]$. We refer to each subset of indices $\{{k}_{1,0}\}$ with the same index ${\tilde{w}}_{1,0}$ as a bin ${\mathcal{B}}_{10}({\tilde{w}}_{1,0})$, ${\tilde{w}}_{1,0}\in [1:{2}^{n{\tilde{R}}_{1,0}}]$. Similarly, we partition the ${2}^{n{T}_{1,1}}$ indices $\{{k}_{1,1}\}$ into ${2}^{n{\tilde{R}}_{1,1}}$ bins by randomly and independently assigning each index ${k}_{1,1}$ to an index ${\tilde{w}}_{1,1}({k}_{1,1})$ according to a uniform pmf over $[1:{2}^{n{\tilde{R}}_{1,1}}]$; and we refer to each subset of indices $\{{k}_{1,1}\}$ with the same index ${\tilde{w}}_{1,1}$ as a bin ${\mathcal{B}}_{11}({\tilde{w}}_{1,1})$, ${\tilde{w}}_{1,1}\in [1:{2}^{n{\tilde{R}}_{1,1}}]$.
- (3) Reveal all codebooks and their partitions to the encoder, the codebook of $\{{v}_{0}^{n}({k}_{0})\}$ and its partitions to both receivers, and the codebook of $\{{u}_{1}^{n}({k}_{1},{k}_{0})\}$ and its partitions only to Receiver 1.
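The random binning steps above can be sketched programmatically. The following is a minimal illustration of uniform, independent bin assignment and of the double binning of the indices $\{k_{0,p}\}$; it is not the paper's construction itself, and the index-set and bin sizes are toy stand-ins for $2^{nT_{0,p}}$, $2^{n\tilde{R}_{0,1}}$, and $2^{n\tilde{R}_{0,2}}$:

```python
import random

def random_binning(num_indices, num_bins, seed=None):
    """Randomly and independently assign each index k in [0, num_indices)
    to a bin index w drawn uniformly from [0, num_bins), and collect the
    resulting bins B(w) = {k : w(k) = w}."""
    rng = random.Random(seed)
    assignment = [rng.randrange(num_bins) for _ in range(num_indices)]
    bins = {w: [] for w in range(num_bins)}
    for k, w in enumerate(assignment):
        bins[w].append(k)
    return assignment, bins

# Double binning: the same index set {k_0p} is partitioned twice,
# independently -- one partition per receiver.
num_k0p = 256                                          # toy stand-in for 2^{n T_{0,p}}
w01, bins_rx1 = random_binning(num_k0p, 8, seed=1)     # partition for Receiver 1
w02, bins_rx2 = random_binning(num_k0p, 4, seed=2)     # partition for Receiver 2
```

Because each index is assigned to exactly one bin per partition, every partition covers the whole index set; a receiver given its own bin index only needs to search within that bin, which is what makes transmitting the bin index, rather than the full index, sufficient.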

#### 6.2.2. Encoding

#### 6.2.3. Decoding

## Acknowledgments

## Author Contributions

## Conflicts of Interest


**Figure 2.** Gray–Wyner model with side information at both receivers and degraded reconstruction sets.

**Figure 3.** Two classes of Heegard–Berger models (HB models): (**a**) HB model with successive refinement; and (**b**) HB model with scalable coding.

**Figure 4.** Comparison of coding schemes for the Gray–Wyner network with side information, the Gray–Wyner network, and the Heegard–Berger problem: (**a**) coding scheme for the Gray–Wyner network; (**b**) coding scheme for the Heegard–Berger problem; and (**c**) coding scheme for the Gray–Wyner network with side information.

**Figure 6.** Rate region of the binary example of Figure 5. The choices ${U}_{0}=({X}_{2},{X}_{3})$ or ${U}_{0}={X}_{2}$ or ${U}_{0}={X}_{3}$ are optimal irrespective of the value of ${R}_{1}$, while the degenerate choice ${U}_{0}=\emptyset$ is optimal only in some slices of the region.

**Figure 8.** The optimal rate region for the setting of Figure 7, given by $({R}_{0}\ge 2,{R}_{2}\ge 0)$. The choice ${U}_{0}=\emptyset$ is optimal only in a slice of the region.

| | ${\mathcal{T}}_{0}$ | ${\mathcal{T}}_{1}$ | ${\mathcal{T}}_{2}$ |
|---|---|---|---|
| ${\mathcal{A}}_{{\mathcal{T}}_{j}}^{-}$ | $\emptyset$ | $\emptyset$ | ${U}_{1}$ |
| ${\mathcal{A}}_{{\mathcal{T}}_{j}}^{\supset}$ | $\emptyset$ | ${U}_{12}$ | ${U}_{12}$ |
| ${\mathcal{A}}_{{\mathcal{T}}_{j}}^{+}$ | $\{{U}_{1},{U}_{2}\}$ | $\emptyset$ | $\emptyset$ |
| ${\mathcal{A}}_{{\mathcal{T}}_{j}}^{\dagger}$ | $\emptyset$ | $\emptyset$ | $\emptyset$ |
| ${\mathcal{A}}_{{\mathcal{T}}_{j},1}^{\ddagger}$ | $\emptyset$ | $\emptyset$ | $\emptyset$ |
| ${\mathcal{A}}_{{\mathcal{T}}_{j},2}^{\ddagger}$ | $\emptyset$ | $\emptyset$ | $\emptyset$ |

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
