# Deep Ensemble of Weighted Viterbi Decoders for Tail-Biting Convolutional Codes


## Abstract


## 1. Introduction

- **WCVA**: a parameterized CVA decoder that combines the optimality of the VA with a data-driven approach (Section 3.1). Viterbi selections in the WCVA are based on sums of weighted path metrics and the relevant branch metrics; the magnitude of a weight reflects the contribution of the corresponding path or branch to successful decoding of a noisy word.
- **Partition of the channel-words space**: we exploit domain knowledge of the TBCC problem and partition the input space into subsets by termination state (Section 3.3); each expert specializes in codewords that belong to a single subset.
- **Gating function**: we reinforce the practical aspect of this scheme by introducing a low-complexity gating that acts as a filter, reducing the number of calls to each expert. The gating maps noisy words to a subgroup of experts based on the CRC checksum (see Section 3.4).
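To make the weighted selection concrete, the following is a minimal sketch of one weighted add-compare-select (ACS) step, the core recursion the WCVA parameterizes. The function name, array layout, and per-stage scalar weights are our illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def wcva_acs_step(path_metrics, branch_metrics, predecessors, w_path, w_branch):
    """One weighted add-compare-select (ACS) trellis step.

    path_metrics:   (S,) accumulated metric per trellis state
    branch_metrics: (S, P) metric of each incoming branch per state
    predecessors:   (S, P) predecessor state index of each incoming branch
    w_path, w_branch: weights for this trellis stage (learned offline);
                      w_path = w_branch = 1 recovers the classical VA step
    """
    # Candidate metric of each incoming branch: weighted sum of the
    # predecessor's path metric and the branch metric.
    candidates = w_path * path_metrics[predecessors] + w_branch * branch_metrics
    survivors = candidates.argmin(axis=1)  # compare-select: keep best branch
    new_metrics = candidates[np.arange(len(path_metrics)), survivors]
    return new_metrics, survivors
```

In a full decoder this step is applied stage by stage along the circular trellis, with the survivors array used for traceback.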

## 2. Background

#### 2.1. Notation

#### 2.2. Problem Formalization

#### 2.3. Viterbi Decoding of CC

#### 2.4. Circular Viterbi Decoding of TBCC

## 3. A Data-Driven Approach to TBCC Decoding

#### 3.1. Weighted Circular Viterbi Algorithm

#### 3.2. Ensembles in Decoding

#### 3.3. Specialized-Experts Ensemble

#### 3.4. Gating

## 4. Results

#### 4.1. Performance and Complexity Comparisons

- **3-repetition CVA**: a fixed-repetitions CVA [7].
- **List circular Viterbi algorithm (LCVA)**: an LVA that runs a CVA instead of a VA; all other details are as explained in Section 1.
- **List genie VA (LGVA)**: an LVA decoder with a list of size $\alpha$ that runs from a known ground-truth state; the optimal decoded codeword is chosen by the CRC criterion. The FER of the gated and non-gated WCVAE is lower bounded by this genie-empowered decoder.
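All list decoders above share the same final step: scan the list from most to least likely and output the first candidate whose CRC verifies. A minimal sketch of that CRC-criterion selection follows; the CRC-16 polynomial (CCITT, 0x1021) and the helper names are our illustrative assumptions, not necessarily the code used in the paper.

```python
def crc16(bits, poly=0x1021):
    """Bitwise MSB-first CRC-16 over a list of 0/1 bits (CCITT polynomial
    assumed here for illustration)."""
    reg = 0
    for b in bits:
        reg ^= (b & 1) << 15
        if reg & 0x8000:
            reg = ((reg << 1) ^ poly) & 0xFFFF
        else:
            reg = (reg << 1) & 0xFFFF
    return reg

def select_by_crc(candidates):
    """candidates: decoded bit sequences ordered from most to least likely,
    each ending with 16 CRC bits. Return the first candidate whose checksum
    verifies; fall back to the most likely one if none does."""
    for cand in candidates:
        msg, checksum = cand[:-16], cand[-16:]
        if crc16(msg) == int("".join(map(str, checksum)), 2):
            return cand
    return candidates[0]
```

The LGVA bound arises because the genie's list is started from the true state, so its first CRC-verified candidate is at least as good as any non-genie decoder's.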

#### 4.2. Generalization to Longer Lengths

#### 4.3. Training Analysis

Codewords terminating at **only this state** were simulated until 250 errors were accumulated.

#### 4.4. Ensemble Size Evaluation

## 5. Discussion

## Supplementary Materials

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

1. Cisco. Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2017–2022. Cisco White Paper. Available online: https://davidellis.ca/wp-content/uploads/2019/12/cisco-vni-mobile-data-traffic-feb-2019.pdf (accessed on 6 January 2021).
2. Ma, H.; Wolf, J. On tail biting convolutional codes. IEEE Trans. Commun. **1986**, 34, 104–111.
3. LTE. Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and Channel Coding (3GPP TS 36.212 Version 13.10.0 Release 13). Available online: https://www.etsi.org/deliver/etsi_ts/136200_136299/136212/13.10.00_60/ts_136212v131000p.pdf (accessed on 6 January 2021).
4. Maunder, R.G. A Vision for 5G Channel Coding. Available online: https://eprints.soton.ac.uk/401809/ (accessed on 6 January 2021).
5. Stahl, P.; Anderson, J.B.; Johannesson, R. Optimal and near-optimal encoders for short and moderate-length tail-biting trellises. IEEE Trans. Inf. Theory **1999**, 45, 2562–2571.
6. Viterbi, A. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inf. Theory **1967**, 13, 260–269.
7. Cox, R.V.; Sundberg, C.E.W. An efficient adaptive circular Viterbi algorithm for decoding generalized tailbiting convolutional codes. IEEE Trans. Veh. Technol. **1994**, 43, 57–68.
8. Shao, R.Y.; Lin, S.; Fossorier, M.P. Two decoding algorithms for tailbiting codes. IEEE Trans. Commun. **2003**, 51, 1658–1665.
9. Seshadri, N.; Sundberg, C.E. List Viterbi decoding algorithms with applications. IEEE Trans. Commun. **1994**, 42, 313–323.
10. Chen, B.; Sundberg, C.E. List Viterbi algorithms for continuous transmission. IEEE Trans. Commun. **2001**, 49, 784–792.
11. Kim, J.W.; Tak, J.W.; Kwak, H.Y.; No, J.S. A new list decoding algorithm for short-length TBCCs with CRC. IEEE Access **2018**, 6, 35105–35111.
12. Gruber, T.; Cammerer, S.; Hoydis, J.; ten Brink, S. On deep learning-based channel decoding. In Proceedings of the 51st Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 22–24 March 2017.
13. Kim, H.; Jiang, Y.; Rana, R.; Kannan, S.; Oh, S.; Viswanath, P. Communication algorithms via deep learning. arXiv **2018**, arXiv:1805.09317.
14. Tandler, D.; Dörner, S.; Cammerer, S.; Brink, S.T. On Recurrent Neural Networks for Sequence-based Processing in Communications. arXiv **2019**, arXiv:1905.09983.
15. Nachmani, E.; Be’ery, Y.; Burshtein, D. Learning to decode linear codes using deep learning. In Proceedings of the Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 27–30 September 2016.
16. Nachmani, E.; Marciano, E.; Burshtein, D.; Be’ery, Y. RNN decoding of linear block codes. arXiv **2017**, arXiv:1702.07560.
17. Nachmani, E.; Marciano, E.; Lugosch, L.; Gross, W.J.; Burshtein, D.; Be’ery, Y. Deep learning methods for improved decoding of linear codes. IEEE J. Sel. Top. Signal Process. **2018**, 12, 119–131.
18. Be’ery, I.; Raviv, N.; Raviv, T.; Be’ery, Y. Active deep decoding of linear codes. IEEE Trans. Commun. **2019**, 68, 728–736.
19. Shlezinger, N.; Farsad, N.; Eldar, Y.C.; Goldsmith, A.J. ViterbiNet: A deep learning based Viterbi algorithm for symbol detection. IEEE Trans. Wirel. Commun. **2020**, 19, 3319–3331.
20. Raviv, T.; Raviv, N.; Be’ery, Y. Data-Driven Ensembles for Deep and Hard-Decision Hybrid Decoding. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 321–326.
21. Moon, T.K. Error Correction Coding: Mathematical Methods and Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2005.
22. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Elsevier: Hoboken, NJ, USA, 2014.
23. Hershey, J.R.; Roux, J.L.; Weninger, F. Deep unfolding: Model-based inspiration of novel deep architectures. arXiv **2014**, arXiv:1409.2574.
24. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
25. Rokach, L. Pattern Classification Using Ensemble Methods; World Scientific: Singapore, 2010; Volume 75.
26. Rokach, L. Ensemble-based classifiers. Artif. Intell. Rev. **2010**, 33, 1–39.
27. Fedorenko, S.V.; Trefilov, M.; Wei, Y. Improved list decoding of tail-biting convolutional codes. In Proceedings of the 2014 XIV International Symposium on Problems of Redundancy in Information and Control Systems, St. Petersburg, Russia, 1–5 June 2014; pp. 35–38.

Symbol | Definition | Value
---|---|---
$\nu$ | CC memory size | 6
- | CC polynomials | $(133,171,165)$
${R}_{CC}$ | CC rate | $1/3$
- | CRC length | 16

Symbol | Definition | Value
---|---|---
$\alpha$ | Ensemble size | 8
$I$ | Repetitions per decoder | 3
${\lambda}_{max}$ | LLR clipping | 20
lr | Learning rate | ${10}^{-3}$
- | Optimizer | RMSprop
- | Loss | Cross entropy
- | Training SNR range [dB] | (−2)–0
- | Mini-batch size | 450
- | Number of mini-batches | 50
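For concreteness, the training hyperparameters above can be collected into a single configuration, with the ${\lambda}_{max}$ row corresponding to clipping the channel LLRs before decoding. The dictionary keys and helper name below are our own illustrative choices, not identifiers from the paper.

```python
import numpy as np

# Training configuration mirroring the hyperparameter table above.
CONFIG = {
    "ensemble_size": 8,          # alpha
    "repetitions": 3,            # I, CVA repetitions per decoder
    "llr_clip": 20.0,            # lambda_max
    "lr": 1e-3,                  # learning rate
    "optimizer": "RMSprop",
    "loss": "cross_entropy",
    "snr_range_db": (-2.0, 0.0),
    "batch_size": 450,
    "num_batches": 50,
}

def clip_llrs(llrs, lam=CONFIG["llr_clip"]):
    """Saturate channel LLRs to [-lambda_max, lambda_max] before decoding,
    keeping accumulated path metrics numerically bounded."""
    return np.clip(llrs, -lam, lam)
```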

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Raviv, T.; Schwartz, A.; Be’ery, Y.
Deep Ensemble of Weighted Viterbi Decoders for Tail-Biting Convolutional Codes. *Entropy* **2021**, *23*, 93.
https://doi.org/10.3390/e23010093
