# Finite-Length Analyses for Source and Channel Coding on Markov Chains


## Abstract


## 1. Introduction

#### 1.1. Two Aspects of Finite-Length Analysis

- **(A1)** Computational complexity of the bound; and
- **(A2)** Asymptotic tightness of the bound.

- A large deviation regime in which the error probability $\epsilon $ asymptotically behaves as ${e}^{-nr}$ for some $r>0$;
- A moderate deviation regime in which $\epsilon $ asymptotically behaves as ${e}^{-{n}^{1-2t}r}$ for some $r>0$ and $t\in (0,1/2)$; and
- A second-order regime in which $\epsilon $ is a constant.
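The three regimes differ in how the error probability scales with the block length n. The following toy sketch contrasts them numerically; the rate r and exponent t are illustrative values, not taken from the paper:

```python
import math

# Illustrative parameters (not from the paper): rate r and moderate-deviation exponent t.
r, t = 0.1, 0.25

for n in [100, 1000, 10000]:
    eps_large = math.exp(-n * r)                       # large deviation: eps ~ e^{-nr}
    eps_moderate = math.exp(-(n ** (1 - 2 * t)) * r)   # moderate deviation: eps ~ e^{-n^{1-2t} r}
    eps_second = 0.01                                  # second-order: eps held constant
    print(f"n={n:>6}: large={eps_large:.3e}  moderate={eps_moderate:.3e}  "
          f"second-order={eps_second}")
```

The large-deviation error decays exponentially in n, the moderate-deviation error decays subexponentially (since 1 - 2t < 1), and the second-order error stays fixed while the coding rate approaches the first-order limit.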

#### 1.2. Main Contribution for Finite-Length Analysis

#### 1.3. Main Contribution for Channel Coding

#### 1.4. Asymptotic Bounds and Asymptotic Tightness for Finite-Length Bounds

#### 1.5. Related Work on Markov Chains

#### 1.6. Organization of the Paper

#### 1.7. Notations

## 2. Information Measures

#### 2.1. Information Measures for the Single-Shot Setting

**Lemma 1.**

**Proof.**

**Lemma 2.**

**Lemma 3.**

1. For fixed ${Q}_{Y}$, $\theta {H}_{1+\theta}({P}_{XY}|{Q}_{Y})$ is a concave function of θ, and it is strictly concave iff $\mathrm{Var}\left[\log \frac{{Q}_{Y}(Y)}{{P}_{XY}(X,Y)}\right]>0$.
2. For fixed ${Q}_{Y}$, ${H}_{1+\theta}({P}_{XY}|{Q}_{Y})$ is a monotonically decreasing (Technically, ${H}_{1+\theta}({P}_{XY}|{Q}_{Y})$ is always non-increasing, and it is monotonically decreasing iff strict concavity holds in Statement 1. Similar remarks also apply to the other information measures throughout the paper.) function of θ.
3. The function $\theta {H}_{1+\theta}^{\downarrow}(X|Y)$ is a concave function of θ, and it is strictly concave iff $\mathsf{V}(X|Y)>0$.
4. ${H}_{1+\theta}^{\downarrow}(X|Y)$ is a monotonically decreasing function of θ.
5. The function $\theta {H}_{1+\theta}^{\uparrow}(X|Y)$ is a concave function of θ, and it is strictly concave iff $\mathsf{V}(X|Y)>0$.
6. ${H}_{1+\theta}^{\uparrow}(X|Y)$ is a monotonically decreasing function of θ.
7. For every $\theta \in (-1,0)\cup (0,\infty )$, we have ${H}_{1+\theta}^{\downarrow}(X|Y)\le {H}_{1+\theta}^{\uparrow}(X|Y)$.
8. For fixed ${\theta}^{\prime}$, the function $\theta {H}_{1+\theta ,1+{\theta}^{\prime}}(X|Y)$ is a concave function of θ, and it is strictly concave iff $\mathsf{V}(X|Y)>0$.
9. For fixed ${\theta}^{\prime}$, ${H}_{1+\theta ,1+{\theta}^{\prime}}(X|Y)$ is a monotonically decreasing function of θ.
10. We have: $${H}_{1+\theta ,1}(X|Y)={H}_{1+\theta}^{\downarrow}(X|Y).$$
11. We have: $${H}_{1+\theta ,1+\theta}(X|Y)={H}_{1+\theta}^{\uparrow}(X|Y).$$
12. For every $\theta \in (-1,0)\cup (0,\infty )$, ${H}_{1+\theta ,1+{\theta}^{\prime}}(X|Y)$ is maximized at ${\theta}^{\prime}=\theta $.

**Lemma 4.**

**Proof.**

**Remark 1.**

#### 2.2. Information Measures for the Transition Matrix

**Assumption 1**

**Assumption 2**

**Lemma 5.**

**Proof.**

**Example 1.**

**Example 2.**

**Example 3.**

**Remark 2.**

**Lemma 6.**

**Remark 3.**

**Lemma 7.**

1. The function $\theta {H}_{1+\theta}^{\downarrow ,W}(X|Y)$ is a concave function of θ, and it is strictly concave iff ${\mathsf{V}}^{W}(X|Y)>0$.
2. ${H}_{1+\theta}^{\downarrow ,W}(X|Y)$ is a monotonically decreasing function of θ.
3. The function $\theta {H}_{1+\theta}^{\uparrow ,W}(X|Y)$ is a concave function of θ, and it is strictly concave iff ${\mathsf{V}}^{W}(X|Y)>0$.
4. ${H}_{1+\theta}^{\uparrow ,W}(X|Y)$ is a monotonically decreasing function of θ.
5. For every $\theta \in (-1,0)\cup (0,\infty )$, we have ${H}_{1+\theta}^{\downarrow ,W}(X|Y)\le {H}_{1+\theta}^{\uparrow ,W}(X|Y)$.
6. For fixed ${\theta}^{\prime}$, the function $\theta {H}_{1+\theta ,1+{\theta}^{\prime}}^{W}(X|Y)$ is a concave function of θ, and it is strictly concave iff ${\mathsf{V}}^{W}(X|Y)>0$.
7. For fixed ${\theta}^{\prime}$, ${H}_{1+\theta ,1+{\theta}^{\prime}}^{W}(X|Y)$ is a monotonically decreasing function of θ.
8. We have: $${H}_{1+\theta ,1}^{W}(X|Y)={H}_{1+\theta}^{\downarrow ,W}(X|Y).$$
9. We have: $${H}_{1+\theta ,1+\theta}^{W}(X|Y)={H}_{1+\theta}^{\uparrow ,W}(X|Y).$$
10. For every $\theta \in (-1,0)\cup (0,\infty )$, ${H}_{1+\theta ,1+{\theta}^{\prime}}^{W}(X|Y)$ is maximized at ${\theta}^{\prime}=\theta $, i.e., $$\frac{d\,{H}_{1+\theta ,1+{\theta}^{\prime}}^{W}(X|Y)}{d{\theta}^{\prime}}\Big|_{{\theta}^{\prime}=\theta}=0.$$

**Lemma 8.**

**Lemma 9.**

**Proof.**

**Remark 4.**

#### 2.3. Information Measures for the Markov Chain

**Lemma 10.**

**Proof.**

**Theorem 1.**

**Theorem 2.**

**Lemma 11.**

**Proof.**

**Theorem 3.**

**Lemma 12.**

**Proof.**

**Theorem 4.**

## 3. Source Coding with Full Side-Information

#### 3.1. Problem Formulation

#### 3.2. Single-Shot Bounds

**Lemma 13**

**Lemma 14**

**Lemma 15.**

**Proof.**

**Lemma 16**

**Lemma 17**

**Lemma 18**

**Theorem 5.**

**Proof.**

**Corollary 1.**

**Remark 5.**

#### 3.3. Finite-Length Bounds for Markov Source

**Theorem 6**

**Theorem 7**

**Theorem 8**

**Proof.**

**Theorem 9**

**Theorem 10**

**Proof.**

#### 3.4. Second-Order

**Theorem 11.**

**Proof.**

#### 3.5. Moderate Deviation

**Theorem 12.**

**Proof.**

**Remark 6.**

#### 3.6. Large Deviation

**Theorem 13.**

**Proof.**

**Theorem 14.**

**Proof.**

**Remark 7.**

**Remark 8.**

#### 3.7. Numerical Example

#### 3.8. Summary of the Results

## 4. Channel Coding

#### 4.1. Formulation for the Conditional Additive Channel

#### 4.1.1. Single-Shot Case

#### 4.1.2. n-Fold Extension

#### 4.2. Conversion from the Regular Channel to the Conditional Additive Channel

**Theorem 15.**

**Example 4**

**Example 5**

#### 4.3. Conversion of the BPSK-AWGN Channel into the Conditional Additive Channel

#### 4.4. Achievability Bound Derived by Source Coding with Side-Information

**Lemma 19**

**Lemma 20**

**Lemma 21**

**Lemma 22**

**Lemma 23**

#### 4.5. Converse Bound

**Lemma 24**

**Lemma 25.**

**Proof.**

**Theorem 16.**

**Proof.**

#### 4.6. Finite-Length Bound for the Markov Noise Channel

**Example 6**

**Theorem 17**

**Theorem 18**