# Coarsely Quantized Decoding and Construction of Polar Codes Using the Information Bottleneck Method


## Abstract


## 1. Introduction

#### Notations

## 2. Prerequisites

#### 2.1. Polar Codes

The elements of the input vector **u** whose indices lie in the information set $\mathcal{A}$ carry the information bits.

#### 2.2. Successive Cancellation List Decoder

#### 2.3. Polar Code Construction

#### 2.4. Information Bottleneck Method

## 3. Polar Code Construction Using the Information Bottleneck Method

#### 3.1. Information Bottleneck Construction

#### 3.2. Tal and Vardy’s Construction

#### 3.3. Information Bottleneck vs. Tal and Vardy Construction

## 4. Information Bottleneck Polar Decoders

#### 4.1. Lookup Tables for Decoding on a Building Block

1. Use the decoding table $p\left({t}_{0}|{\mathbf{y}}_{0}\right)$ of Figure 8a to determine the cluster index to which the observed channel output ${\mathbf{y}}_{0}$ belongs. For example, ${t}_{0}=\left|\mathcal{T}\right|-1$ when ${y}_{0}={y}_{1}=0$.
2. Use ${t}_{0}$ from step 1 for a hard decision on ${u}_{0}$, or translate it into an LLR value using the translation table of Figure 8b. For the example of ${t}_{0}=\left|\mathcal{T}\right|-1$, ${\widehat{u}}_{0}=0$ and ${L}_{{u}_{0}}\left({t}_{0}\right)=2.19$.
#### 4.2. Information Bottleneck Successive Cancellation List Decoder

## 5. Space-Efficient Information Bottleneck Successive Cancellation List Decoder

#### 5.1. The Role of Translation Tables

#### 5.2. Message Alignment for Successive Cancellation List Decoder

## 6. Numerical Results

#### 6.1. Code Construction

#### 6.2. Information Bottleneck Decoders

## 7. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
| --- | --- |
| SC | Successive cancellation |
| SCL | Successive cancellation list |
| CRC | Cyclic redundancy check |
| GA | Gaussian approximation |
| TV | Tal and Vardy |

## References

- Lewandowsky, J.; Bauch, G. Trellis based node operations for LDPC decoders from the Information Bottleneck method. In Proceedings of the 9th International Conference on Signal Processing and Communication Systems (ICSPCS), Cairns, Australia, 14–16 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–10.
- Lewandowsky, J.; Bauch, G. Information-Optimum LDPC Decoders Based on the Information Bottleneck Method. IEEE Access **2018**, 6, 4054–4071.
- Tishby, N.; Pereira, F.C.; Bialek, W. The Information Bottleneck Method. In Proceedings of the 37th Allerton Conference on Communication and Computation, Monticello, IL, USA, 22–24 September 1999.
- Slonim, N. The Information Bottleneck: Theory and Applications. Ph.D. Thesis, Hebrew University of Jerusalem, Jerusalem, Israel, 2002.
- Kurkoski, B.M.; Yamaguchi, K.; Kobayashi, K. Noise Thresholds for Discrete LDPC Decoding Mappings. In Proceedings of the 2008 IEEE Global Telecommunications Conference (GLOBECOM), New Orleans, LA, USA, 30 November–4 December 2008; pp. 1–5.
- Richardson, T.; Urbanke, R. Modern Coding Theory; Cambridge University Press: New York, NY, USA, 2008.
- Stark, M.; Lewandowsky, J.; Bauch, G. Information-Bottleneck Decoding of High-Rate Irregular LDPC Codes for Optical Communication Using Message Alignment. Appl. Sci. **2018**, 8, 1884.
- Stark, M.; Lewandowsky, J.; Bauch, G. Information-Optimum LDPC Decoders with Message Alignment for Irregular Codes. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, UAE, 9–13 December 2018; pp. 1–6.
- Balatsoukas-Stimming, A.; Meidlinger, M.; Ghanaatian, R.; Matz, G.; Burg, A. A fully-unrolled LDPC decoder based on quantized message passing. In Proceedings of the 2015 IEEE Workshop on Signal Processing Systems (SiPS), Hangzhou, China, 14–16 October 2015; pp. 1–6.
- Meidlinger, M.; Balatsoukas-Stimming, A.; Burg, A.; Matz, G. Quantized message passing for LDPC codes. In Proceedings of the 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 8–11 November 2015; pp. 1606–1610.
- Meidlinger, M.; Matz, G. On irregular LDPC codes with quantized message passing decoding. In Proceedings of the 2017 IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Sapporo, Japan, 3–6 July 2017; pp. 1–5.
- Ghanaatian, R.; Balatsoukas-Stimming, A.; Müller, T.C.; Meidlinger, M.; Matz, G.; Teman, A.; Burg, A. A 588-Gb/s LDPC Decoder Based on Finite-Alphabet Message Passing. IEEE Trans. Very Large Scale Integr. Syst. **2018**, 26, 329–340.
- Arikan, E. Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels. IEEE Trans. Inf. Theory **2009**, 55, 3051–3073.
- Hussami, N.; Korada, S.B.; Urbanke, R. Performance of polar codes for channel and source coding. In Proceedings of the 2009 IEEE International Symposium on Information Theory, Seoul, Korea, 28 June–3 July 2009; pp. 1488–1492.
- Bakshi, M.; Jaggi, S.; Effros, M. Concatenated Polar codes. In Proceedings of the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, 13–18 June 2010; pp. 918–922.
- Tal, I.; Vardy, A. List Decoding of Polar Codes. IEEE Trans. Inf. Theory **2015**, 61, 2213–2226.
- Li, B.; Shen, H.; Tse, D. An Adaptive Successive Cancellation List Decoder for Polar Codes with Cyclic Redundancy Check. IEEE Commun. Lett. **2012**, 16, 2044–2047.
- Wang, T.; Qu, D.; Jiang, T. Parity-Check-Concatenated Polar Codes. IEEE Commun. Lett. **2016**, 20, 2342–2345.
- Nokia. Chairman’s notes of AI 7.1.5 on channel coding and modulation for NR. In Proceedings of the Meeting 87, 3GPP TSG RAN WG1, Reno, NV, USA, 14–19 November 2016.
- Arikan, E. A performance comparison of polar codes and Reed-Muller codes. IEEE Commun. Lett. **2008**, 12, 447–449.
- ETSI. 5G; NR; Multiplexing and Channel Coding (Release 15); Version 15.6.0; Technical Specification (TS) 38.212, 3rd Generation Partnership Project (3GPP); ETSI: Sophia Antipolis, France, 2019.
- Mori, R.; Tanaka, T. Performance of Polar Codes with the Construction using Density Evolution. IEEE Commun. Lett. **2009**, 13, 519–521.
- Tal, I.; Vardy, A. How to Construct Polar Codes. IEEE Trans. Inf. Theory **2013**, 59, 6562–6582.
- Trifonov, P. Efficient Design and Decoding of Polar Codes. IEEE Trans. Commun. **2012**, 60, 3221–3227.
- Stark, M.; Shah, S.A.A.; Bauch, G. Polar code construction using the information bottleneck method. In Proceedings of the 2018 IEEE Wireless Communications and Networking Conference Workshops, Barcelona, Spain, 15–18 April 2018; pp. 7–12.
- Shah, S.A.A.; Stark, M.; Bauch, G. Design of Quantized Decoders for Polar Codes using the Information Bottleneck Method. In Proceedings of the 12th International ITG Conference on Systems, Communications and Coding (SCC 2019), Rostock, Germany, 11–14 February 2019.
- Balatsoukas-Stimming, A.; Parizi, M.B.; Burg, A. LLR-based successive cancellation list decoding of polar codes. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 3903–3907.
- Hassani, S.H.; Urbanke, R. Polar codes: Robustness of the successive cancellation decoder with respect to quantization. In Proceedings of the 2012 IEEE International Symposium on Information Theory, Cambridge, MA, USA, 1–6 July 2012; pp. 1962–1966.
- Shi, Z.; Chen, K.; Niu, K. On Optimized Uniform Quantization for SC Decoder of Polar Codes. In Proceedings of the 2014 IEEE 80th Vehicular Technology Conference (VTC2014-Fall), Vancouver, BC, Canada, 14–17 September 2014; pp. 1–5.
- Giard, P.; Sarkis, G.; Balatsoukas-Stimming, A.; Fan, Y.; Tsui, C.; Burg, A.; Thibeault, C.; Gross, W.J. Hardware decoders for polar codes: An overview. In Proceedings of the 2016 IEEE International Symposium on Circuits and Systems (ISCAS), Montreal, QC, Canada, 22–25 May 2016; pp. 149–152.
- Neu, J. Quantized Polar Code Decoders: Analysis and Design. arXiv **2019**, arXiv:1902.10395.
- Hagenauer, J.; Offer, E.; Papke, L. Iterative decoding of binary block and convolutional codes. IEEE Trans. Inf. Theory **1996**, 42, 429–445.
- Lewandowsky, J.; Stark, M.; Bauch, G. Information Bottleneck Graphs for receiver design. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 2888–2892.
- Stark, M.; Lewandowsky, J. Information Bottleneck Algorithms in Python. Available online: https://goo.gl/QjBTZf (accessed on 25 August 2019).
- Lewandowsky, J.; Stark, M.; Bauch, G. A Discrete Information Bottleneck Receiver with Iterative Decision Feedback Channel Estimation. In Proceedings of the 2018 IEEE 10th International Symposium on Turbo Codes & Iterative Information Processing (ISTC), Hong Kong, China, 3–7 December 2018; pp. 1–5.
- Hassanpour, S.; Wuebben, D.; Dekorsy, A. Overview and Investigation of Algorithms for the Information Bottleneck Method. In Proceedings of the 11th International ITG Conference on Systems, Communications and Coding (SCC 2017), Hamburg, Germany, 6–9 February 2017; pp. 1–6.
- Lewandowsky, J.; Stark, M.; Bauch, G. Message alignment for discrete LDPC decoders with quadrature amplitude modulation. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 2925–2929.
- Elkelesh, A.; Ebada, M.; Cammerer, S.; ten Brink, S. Decoder-Tailored Polar Code Design Using the Genetic Algorithm. IEEE Trans. Commun. **2019**, 67, 4521–4534.
- Wu, D.; Li, Y.; Sun, Y. Construction and Block Error Rate Analysis of Polar Codes Over AWGN Channel Based on Gaussian Approximation. IEEE Commun. Lett. **2014**, 18, 1099–1102.

**Figure 1.** Factor graph of the building block (dashed rectangle) of a polar code along with the transmission channel.

**Figure 2.** Structure of a polar code with $N=4$. The graph is partitioned into $n={\log}_{2}N=2$ levels. The node labels ${v}_{i,j}$ indicate the vertical stage, $i=0,1,\dots ,N-1$, and the horizontal level, $j=0,\dots ,n$. For all i, ${v}_{i,0}={y}_{i}$, i.e., the channel output at level $j=0$, while ${v}_{i,2}={u}_{i}$, i.e., the encoder input bits at level $j=2$.

**Figure 3.** (**a**) Information bottleneck setup, where $I(X;T)$ is the relevant information, $I(X;Y)$ is the original mutual information, and $I(Y;T)$ is the compression information. The goal is to determine the mapping $p\left(t|y\right)$ which maximizes $I(X;T)$ and minimizes $I(Y;T)$. (**b**) Information bottleneck graph for the elementary setup of (**a**). The realizations of Y are mapped onto those of T such that their relevance to X is preserved, i.e., $I(X;T)\approx I(X;Y)$, and $\left|\mathcal{T}\right|<\left|\mathcal{Y}\right|$.

**Figure 4.** Transition probability of an AWGN channel whose continuous output $\tilde{y}$ is clustered into $\left|\mathcal{Y}\right|=8$ bins or clusters for ${\sigma}_{N}^{2}=0.5$. (**a**) Randomly initialized symmetric cluster boundaries. (**b**) Cluster boundaries optimized such that $I(X;Y)$ is maximized for $\left|\mathcal{Y}\right|=8$.
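The effect illustrated in Figure 4, namely how the choice of cluster boundaries determines the mutual information $I(X;Y)$ preserved by the quantizer, can be sketched numerically. The snippet below assumes equiprobable BPSK inputs mapped to $\pm 1$ over a real AWGN channel with ${\sigma}_{N}^{2}=0.5$; the function names and the symmetric boundary values are illustrative placeholders, not the optimized boundaries of the paper.

```python
import numpy as np
from math import erf, sqrt, inf

def gauss_cdf(x, mean, sigma):
    """CDF of N(mean, sigma^2); tolerates +/- infinity at the outer edges."""
    if x == inf:
        return 1.0
    if x == -inf:
        return 0.0
    return 0.5 * (1.0 + erf((x - mean) / (sigma * sqrt(2.0))))

def quantized_bpsk_awgn_joint(boundaries, sigma2=0.5):
    """Joint distribution p(x, bin) for equiprobable BPSK (x -> +/-1) over a
    real AWGN channel whose output is clustered into len(boundaries)+1 bins."""
    edges = [-inf] + list(boundaries) + [inf]
    sigma = sqrt(sigma2)
    p_xy = np.zeros((2, len(edges) - 1))
    for i, mean in enumerate((+1.0, -1.0)):   # x = 0 -> +1, x = 1 -> -1
        cdf = np.array([gauss_cdf(e, mean, sigma) for e in edges])
        p_xy[i] = 0.5 * np.diff(cdf)          # p(x) * p(bin | x)
    return p_xy

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint distribution p(x, y)."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])).sum())

# |Y| = 8 bins need 7 boundaries; these symmetric values are an illustrative
# starting point, as in Figure 4a, before any optimization.
boundaries = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
mi = mutual_information(quantized_bpsk_awgn_joint(boundaries))
print(round(mi, 3))
```

Sweeping the boundary positions and keeping the set that maximizes `mi` reproduces, in spirit, the optimization of Figure 4b: more bins and better-placed boundaries preserve more of the soft-decision mutual information than a single hard-decision threshold at 0.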

**Figure 5.** Information bottleneck graph for (**a**) the bit channel of ${u}_{0}$, which maps the outputs ${y}_{0},{y}_{1}$ onto $\left|\mathcal{T}\right|$ clusters, labeled ${t}_{0}$, treating ${u}_{0}$ as the relevant variable; (**b**) the bit channel of ${u}_{1}$, which maps the outputs ${y}_{0},{y}_{1},{u}_{0}$ onto $\left|\mathcal{T}\right|$ clusters, labeled ${t}_{1}$, treating ${u}_{1}$ as the relevant variable.

**Figure 6.** Information bottleneck graph of a polar code with length $N=4$. The outputs of the bit channel experienced by each node ${v}_{i,j}$ are clustered into a compressed random variable ${T}_{i,j}={t}_{i,j}\in \{0,\dots ,|\mathcal{T}|-1\}$, where $i=0,1,\dots ,N-1$ indicates the stage, while $j=0,\dots ,n$ represents the level in the code structure.

**Figure 7.** Plot of the cumulative product ${R}_{\mathrm{cum}}$ indicating the amount of mutual information preserved over the levels of a half-rate polar code with $N=512$, depending on the number of clusters $\left|\mathcal{T}\right|$.

**Figure 8.** (**a**) Clustering $p\left({t}_{0}|{\mathbf{y}}_{0}\right)$ that maps the $|{\mathcal{Y}}_{0}|\cdot |{\mathcal{Y}}_{1}|$ outputs of the bit channel of ${u}_{0}$ to $\left|\mathcal{T}\right|$ clusters, written as a lookup table. (**b**) Translation table of ${u}_{0}$ obtained from the conditional distribution $p\left({u}_{0}|{t}_{0}\right)$ as in Equation (21).
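Equation (21) itself is not reproduced in this excerpt. Assuming it is the standard LLR definition, ${L}_{{u}_{0}}({t}_{0})=\ln\left(p({u}_{0}=0|{t}_{0})/p({u}_{0}=1|{t}_{0})\right)$, a translation table like the one in Figure 8b can be derived from $p\left({u}_{0}|{t}_{0}\right)$ as in the sketch below; the distribution values are made-up placeholders for a small $\left|\mathcal{T}\right|=4$ example.

```python
import numpy as np

# Hypothetical conditional distribution p(u0 | t0) for |T| = 4 clusters;
# columns are u0 = 0 and u0 = 1. The actual values result from the
# information bottleneck design of the bit channel.
p_u0_given_t0 = np.array([[0.10, 0.90],   # t0 = 0
                          [0.32, 0.68],
                          [0.68, 0.32],
                          [0.90, 0.10]])  # t0 = |T| - 1

# Translation table entry per cluster: L(t0) = ln( p(u0=0 | t0) / p(u0=1 | t0) )
translation_table = np.log(p_u0_given_t0[:, 0] / p_u0_given_t0[:, 1])
print(np.round(translation_table, 2))  # symmetric, increasing with t0
```

Because the channel is output-symmetric, the resulting table is antisymmetric around the middle cluster index, which is why only the magnitudes need to be stored in practice.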

**Figure 9.** Translation tables of an information bottleneck decoder with $\left|\mathcal{T}\right|=16$ for different bit channels of a half-rate polar code with $N=128$ and design ${E}_{b}/{N}_{0}=3$ dB. ${u}_{14}$ is a frozen bit, while ${u}_{83}$ and ${u}_{124}$ are information bits.

**Figure 10.** Information bottleneck graph for the alignment of the translation tables of a polar code. The decision-level cluster indices ${t}_{i}={[t,i]}^{T}$ are clustered into aligned indices ${t}^{\ast}$ such that the relevant information $I(U;{T}^{\ast})$ is maximized.

**Figure 11.** Translation table for aligned decoding tables at the decision level with $|{\mathcal{T}}^{\ast}|=16$ for the polar code with $N=128$, rate 0.5, and design ${E}_{b}/{N}_{0}=3$ dB.

**Figure 12.** Frozen charts for a half-rate polar code with $N=128$ and design ${E}_{b}/{N}_{0}=3$ dB, constructed using (**a**) Tal and Vardy’s method with $\left|\mathcal{T}\right|=512$; (**b**) Tal and Vardy’s method with $\left|\mathcal{T}\right|=16$; (**c**) the information bottleneck method with $\left|\mathcal{T}\right|=16$; (**d**) the information bottleneck method with $\left|\mathcal{T}\right|=32$; and (**e**) the Gaussian approximation.

**Figure 13.** Frozen charts for a half-rate polar code with $N=1024$ and design ${E}_{b}/{N}_{0}=3$ dB, constructed using (**a**) Tal and Vardy’s method with $\left|\mathcal{T}\right|=512$; (**b**) Tal and Vardy’s method with $\left|\mathcal{T}\right|=16$; (**c**) the information bottleneck method with $\left|\mathcal{T}\right|=16$; (**d**) the information bottleneck method with $\left|\mathcal{T}\right|=32$; and (**e**) the Gaussian approximation.

**Figure 14.** Block error rate of polar codes constructed using the Gaussian approximation, Tal and Vardy’s method, and the information bottleneck method using a conventional successive cancellation (SC) and successive cancellation list (SCL) decoder with ${N}_{L}=8,32$, a 16-bit cyclic redundancy check (CRC), $N=1024$, rate 0.5, and design ${E}_{b}/{N}_{0}=3$ dB.

**Figure 15.** Block error rate of a double-precision floating-point SCL decoder with channel quantizers of different resolutions designed using the information bottleneck method. ${N}_{L}=32$, ${N}_{crc}=16$, $N=1024$, code rate 0.5, and design ${E}_{b}/{N}_{0}=3$ dB.

**Figure 16.** Block error rate of a double-precision floating-point SCL decoder with channel quantizers of different resolutions designed using the information bottleneck method. ${N}_{L}=32$ without the outer CRC code, $N=128$, code rate 0.5, and design ${E}_{b}/{N}_{0}=3$ dB.

**Figure 17.** Block error rate comparison between the conventional decoder (SCL) and information bottleneck decoders (IB-SCL) constructed for $\left|\mathcal{T}\right|=8,16$, or 32. $N=1024$, ${N}_{L}=32$, 16-bit CRC, code rate 0.5, and design ${E}_{b}/{N}_{0}=3$ dB.

**Figure 18.** Block error rate comparison between the conventional decoder (SCL) and information bottleneck decoders (IB-SCL) constructed for $\left|\mathcal{T}\right|=4,8$, or 16. $N=128$, ${N}_{L}=32$, no CRC, code rate 0.5, and design ${E}_{b}/{N}_{0}=3$ dB.

**Figure 19.** Block error rate comparison between the conventional decoder (SCL) and information bottleneck decoders (IB-SCL) constructed for $\left|\mathcal{T}\right|=4,8$, or 16. $N=256$, ${N}_{L}=32$, no CRC, code rate $37/256$, and design ${E}_{b}/{N}_{0}=4$ dB.

**Figure 20.** Effect of the design ${E}_{b}/{N}_{0}$ in the mismatched use of the information bottleneck decoders. $N=128$ or 1024, $\left|\mathcal{T}\right|=16$, ${N}_{L}=32$, ${N}_{crc}=16$, and code rate 0.5.

**Figure 21.** Block error rate of conventional (SCL) and 4-bit information bottleneck (IB-SCL) decoders for $N=1024$, 256, and 128, rate 0.5, 16-bit CRC, and design ${E}_{b}/{N}_{0}=3$ dB. (**a**) ${N}_{L}=8$; (**b**) ${N}_{L}=32$.

**Figure 22.** Effect of using the approximate path metric update rule of Equation (10) on the block error rate of a 4-bit information bottleneck decoder with ${N}_{L}=2,8$, or 32, 16-bit CRC, $N=128$, rate 0.5, and design ${E}_{b}/{N}_{0}=3$ dB.

**Figure 23.** Block error rate of a 4-bit aligned information bottleneck SCL decoder with alignment cardinality $|{\mathcal{T}}^{\ast}|=16$, 16-bit CRC, $N=128$, and ${N}_{L}=2,8$, or 32.

**Figure 24.** Effect of the alignment cardinality $|{\mathcal{T}}^{\ast}|$ on the block error rate of a 4-bit aligned information bottleneck SCL decoder with ${N}_{L}=8$ or 32 and 16-bit CRC for $N=128$, rate 0.5, and design ${E}_{b}/{N}_{0}=3$ dB.

**Figure 25.** Path metric increments computed for the aligned translation table of Figure 11 according to Equation (9). $|{\mathcal{T}}^{\ast}|=16$ for the polar code with $N=128$, rate 0.5, and design ${E}_{b}/{N}_{0}=3$ dB.
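Equations (9) and (10) are not included in this excerpt. Assuming they correspond to the standard exact and approximate LLR-based path metric updates of the SCL decoder of Balatsoukas-Stimming et al. (cited in the references), the difference between the two rules, and why the approximation is attractive for coarsely quantized hardware, can be sketched as follows; smaller metrics indicate more likely paths.

```python
import math

def pm_update_exact(pm, llr, u):
    """Exact LLR-based path metric update (assumed form of Equation (9)):
    pm + ln(1 + exp(-(1 - 2u) * llr))."""
    return pm + math.log1p(math.exp(-(1 - 2 * u) * llr))

def pm_update_approx(pm, llr, u):
    """Hardware-friendly approximation (assumed form of Equation (10)):
    penalize by |llr| only when the decision contradicts the LLR sign."""
    return pm + (abs(llr) if (1 - 2 * u) * llr < 0 else 0.0)

# A decision u = 0 agreeing with a positive LLR costs almost nothing;
# contradicting it costs about |llr|.
print(round(pm_update_exact(0.0, 2.19, 0), 3), pm_update_approx(0.0, 2.19, 0))
print(round(pm_update_exact(0.0, 2.19, 1), 3), pm_update_approx(0.0, 2.19, 1))
```

In an information bottleneck decoder, the LLR fed into such an update is not computed from channel values but read from a translation table like the one in Figure 11, which is what makes precomputed path metric increments per cluster index (Figure 25) possible.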

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Shah, S.A.A.; Stark, M.; Bauch, G.
Coarsely Quantized Decoding and Construction of Polar Codes Using the Information Bottleneck Method. *Algorithms* **2019**, *12*, 192.
https://doi.org/10.3390/a12090192
