# Encoding Individual Source Sequences for the Wiretap Channel

## Abstract


## 1. Introduction

## 2. Notation, Definitions, and Problem Setting

#### 2.1. Notation

#### 2.2. Definitions and Problem Setting

The source sequence **u** is first divided into chunks of length k, ${\tilde{u}}_{i}={u}_{ik+1}^{ik+k}\in {\mathcal{U}}^{k}$, $i=0,1,2,\dots$, which are fed into a stochastic finite-state encoder, defined by the following equations:

(The state designates the memory of the encoder with respect to past inputs.) Finally, $P(\tilde{x}|\tilde{u},s)$, $\tilde{u}\in {\mathcal{U}}^{k}$, $s\in {\mathcal{S}}^{e}$, $\tilde{x}\in {\mathcal{X}}^{m}$, is a conditional probability distribution function, i.e., $\{P(\tilde{x}|\tilde{u},s)\}$ are all non-negative and $\sum_{\tilde{x}}P(\tilde{x}|\tilde{u},s)=1$ for all $(\tilde{u},s)\in {\mathcal{U}}^{k}\times {\mathcal{S}}^{e}$. The vector ${\tilde{x}}_{i}$ designates the current output vector from the encoder in response to the current input source vector, ${\tilde{u}}_{i}$, and the current state, ${s}_{i}^{e}$. Without loss of generality, we assume that the initial state of the encoder, ${s}_{0}^{e}$, is some fixed member of ${\mathcal{S}}^{e}$. The ratio
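The chunking and state-machine operation described above can be sketched as follows. This is an illustrative toy, not the paper's construction: `next_state` and `emit_dist` are hypothetical stand-ins for the encoder's next-state function and the conditional distribution $P(\tilde{x}|\tilde{u},s)$.

```python
import random

def fsm_encode(u, k, next_state, emit_dist, s0=0):
    """Toy sketch of a stochastic finite-state encoder.

    u is split into chunks of length k (the vectors u~_i); for each chunk,
    an output block x~_i is drawn from the conditional distribution
    P(x~ | u~, s) and the state is updated.  `next_state` and `emit_dist`
    are hypothetical callables, not part of the paper.
    """
    s, out = s0, []
    for i in range(len(u) // k):
        chunk = tuple(u[i * k:(i + 1) * k])   # current input vector u~_i
        blocks, probs = emit_dist(chunk, s)   # support and probabilities of P(. | u~, s)
        out.append(random.choices(blocks, weights=probs)[0])
        s = next_state(chunk, s)              # the state summarizes past inputs
    return out
```

For instance, a deterministic emission rule `emit = lambda chunk, s: ([tuple(chunk)], [1.0])` makes the encoder simply copy each chunk, regardless of the state.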

- For a given ${\epsilon}_{\mathrm{r}}>0$, the system satisfies the following reliability requirement: the bit error probability is guaranteed to be less than ${\epsilon}_{\mathrm{r}}$, i.e., $${P}_{\mathrm{b}}\triangleq \frac{1}{k}\sum_{i=1}^{k}\mathrm{Pr}\{{V}_{i}\ne {u}_{i}\}\le {\epsilon}_{\mathrm{r}}.$$
- For a given ${\epsilon}_{\mathrm{s}}>0$, the system satisfies the following security requirement: for every sufficiently large positive integer n, $$\max_{\mu}{I}_{\mu}({U}^{n};{Z}^{N})\le n{\epsilon}_{\mathrm{s}},$$
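As a concrete reading of the reliability criterion, the empirical counterpart of ${P}_{\mathrm{b}}$ is simply the fraction of mismatched positions between a source block and its reconstruction. A minimal sketch (note that the paper's ${P}_{\mathrm{b}}$ averages over the randomness of the encoder and the channel, which this sample version does not capture):

```python
def bit_error_rate(u, v):
    """Empirical fraction of positions where the reconstruction v disagrees
    with the source block u -- the sample analogue of
    P_b = (1/k) * sum_{i=1}^{k} Pr{V_i != u_i}."""
    assert len(u) == len(v)
    return sum(ui != vi for ui, vi in zip(u, v)) / len(u)
```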

#### 2.3. Preliminaries and Background

## 3. Results

**Theorem 1.**

**Discussion.**

- 1. Irrelevance of ${q}_{\mathrm{e}}$. It is interesting to note that as far as the encoding and decoding resources are concerned, the lower bound depends on k and ${q}_{\mathrm{d}}$, but not on the number of states of the encoder, ${q}_{\mathrm{e}}$. This means that the same lower bound continues to hold even if the encoder has an unlimited number of states. Pushing this to the extreme, even if the encoder had room to store the entire past, the lower bound of Theorem 1 would remain unaltered. The crucial bottleneck is, therefore, the finite memory resources associated with the decoder, where the memory may help to reconstruct the source by exploiting empirical dependencies with the past. The dependence on ${q}_{\mathrm{e}}$, however, appears later, when we discuss local randomness resources, as well as in the extension to the case of decoder side information.
- 2. The redundancy term ${\zeta}_{n}({q}_{\mathrm{d}},k)$. A technical comment is in order concerning the term ${\zeta}_{n}({q}_{\mathrm{d}},k)$, which involves minimization over all divisors of $n/k$, where we have already assumed that $n/k$ is an integer. Strictly speaking, if $n/k$ happens to be a prime, this minimization is not very meaningful, as ${\zeta}_{n}({q}_{\mathrm{d}},k)$ would be relatively large. If this is the case, a better bound is obtained if one omits some of the last symbols of ${u}^{n}$, thereby reducing n to, say, ${n}^{\prime}$, so that ${n}^{\prime}/k$ has a richer set of factors. Consider, for example, the choice $\ell={\ell}_{n}=\lfloor\sqrt{\log n}\rfloor$ (instead of minimizing over ℓ) and replace $n/k$ by $n/k-(n/k \bmod {\ell}_{n})$, without essential loss of tightness. This way, ${\zeta}_{n}({q}_{\mathrm{d}},k)$ tends to zero as $n\to\infty$, for fixed k and ${q}_{\mathrm{d}}$.
- 3. Achievability. Having established that ${\zeta}_{n}({q}_{\mathrm{d}},k)\to 0$, and given that ${\epsilon}_{\mathrm{r}}$ and ${\epsilon}_{\mathrm{s}}$ are small, it is clear that the main term in the numerator of the lower bound of Theorem 1 is ${\rho}_{\mathrm{LZ}}\left({u}^{n}\right)$, which is, as mentioned earlier, the individual-sequence analogue of the entropy of the source [22]. In other words, $\lambda$ cannot be much smaller than ${\lambda}_{\mathrm{L}}\left({u}^{n}\right)={\rho}_{\mathrm{LZ}}\left({u}^{n}\right)/{C}_{\mathrm{s}}$. A matching achievability scheme would most naturally be based on separation: first apply variable-rate compression of ${u}^{n}$ down to about $n{\rho}_{\mathrm{LZ}}\left({u}^{n}\right)$ bits using the LZ algorithm [22], and then feed the resulting compressed bit-stream into a good code for the wiretap channel [1] with codewords of length about $$N=n{\lambda}_{\mathrm{L}}\left({u}^{n}\right)\sim\frac{n{\rho}_{\mathrm{LZ}}\left({u}^{n}\right)}{{C}_{\mathrm{s}}(1-\delta)},$$ so that $$\lambda=\frac{N}{n}=\frac{{\rho}_{\mathrm{LZ}}\left({u}^{n}\right)}{{C}_{\mathrm{s}}(1-\delta)}.$$ Taking into account also the additional $\log(n\log\alpha)$ bits needed to describe the length of the variable-length compressed bit-stream, this becomes $$\lambda\approx\frac{n{\rho}_{\mathrm{LZ}}\left({u}^{n}\right)+\log(n\log\alpha)}{n{C}_{\mathrm{s}}(1-\delta)}=\frac{{\rho}_{\mathrm{LZ}}\left({u}^{n}\right)}{{C}_{\mathrm{s}}(1-\delta)}+O\left(\frac{\log n}{n}\right).$$

**Theorem 2.**

## 4. Side Information at the Decoder with Partial Leakage to the Wiretapper

**Theorem 3.**

`ACK`. Each chunk of r bits is fed into a good channel code for the wiretap channel, at a rate slightly less than ${C}_{\mathrm{s}}$. At the decoder side, this channel code is decoded (correctly, with high probability, for large r). Then, for each i ($i=1,2,\dots$), after having decoded the i-th chunk of r bits of $b\left({u}^{n}\right)$, the decoder creates the list ${\mathcal{A}}_{i}\left({u}^{n}\right)=\{{\dot{u}}^{n}:\ {[b({\dot{u}}^{n})]}^{ir}={[b({u}^{n})]}^{ir}\}$, where ${[b({\dot{u}}^{n})]}^{l}$ denotes the string formed by the first l bits of $b({\dot{u}}^{n})$. For each ${\dot{u}}^{n}\in{\mathcal{A}}_{i}({u}^{n})$, the decoder calculates ${\rho}_{\mathrm{LZ}}({\dot{u}}^{n}|{w}^{n})$. We fix an arbitrarily small $\delta>0$, which controls the trade-off between the error probability and the compression rate. If $n{\rho}_{\mathrm{LZ}}({\dot{u}}^{n}|{w}^{n})\le i\cdot r-n\delta$ for some ${\dot{u}}^{n}\in{\mathcal{A}}_{i}({u}^{n})$, the decoder sends `ACK` on the feedback channel and outputs, as the reconstruction, the ${\dot{u}}^{n}$ with the smallest ${\rho}_{\mathrm{LZ}}({\dot{u}}^{n}|{w}^{n})$ among all members of ${\mathcal{A}}_{i}({u}^{n})$. If no member of ${\mathcal{A}}_{i}({u}^{n})$ satisfies $n{\rho}_{\mathrm{LZ}}({\dot{u}}^{n}|{w}^{n})\le i\cdot r-n\delta$, the receiver waits for the next chunk of r compressed bits and does not send `ACK`. The probability of source-coding error after the i-th chunk is upper bounded by

`ACK` is received at the encoder (and hence the transmission stops) no later than after the transmission of chunk no. ${i}^{\ast}$, where ${i}^{\ast}$ is the smallest integer i such that $i\cdot r\ge n{\rho}_{\mathrm{LZ}}({u}^{n}|{w}^{n})+n\delta$, namely, ${i}^{\ast}=\lceil[n{\rho}_{\mathrm{LZ}}({u}^{n}|{w}^{n})+n\delta]/r\rceil$, which is the stage at which at least the correct source sequence satisfies the condition $n{\rho}_{\mathrm{LZ}}({u}^{n}|{w}^{n})\le i\cdot r-n\delta$. Therefore, the compression ratio is no worse than ${i}^{\ast}\cdot r/n=\lceil n[{\rho}_{\mathrm{LZ}}({u}^{n}|{w}^{n})+\delta]/r\rceil\cdot r/n\le{\rho}_{\mathrm{LZ}}({u}^{n}|{w}^{n})+\delta+r/n$. The overall probability of source-coding error is then upper bounded by
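The stopping rule and the resulting rate bound above amount to simple arithmetic, sketched below (function names are ours, not the paper's):

```python
import math

def stopping_chunk(n, r, rho_cond, delta):
    """i* = ceil((n*rho + n*delta)/r): the smallest chunk index i with
    i*r >= n*rho_LZ(u^n|w^n) + n*delta, i.e., the first stage at which
    the correct source sequence passes the decoder's threshold test and
    an ACK is sent."""
    return math.ceil((n * rho_cond + n * delta) / r)

def rate_bound(n, r, rho_cond, delta):
    """Resulting compression ratio i*·r/n, bounded by rho + delta + r/n."""
    return stopping_chunk(n, r, rho_cond, delta) * r / n
```

For example, with $n=1000$, $r=100$, ${\rho}_{\mathrm{LZ}}({u}^{n}|{w}^{n})=0.5$, and $\delta=0.25$, transmission stops after chunk ${i}^{\ast}=8$, for a compression ratio of $0.8\le 0.5+0.25+0.1$.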

## 5. Proofs

#### 5.1. Proof of Theorem 1

#### 5.2. Proof of Theorem 2

#### 5.3. Outline of the Proof of Theorem 3

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Wyner, A.D. The wire-tap channel. Bell Syst. Tech. J. **1975**, 54, 1355–1387.
- Csiszár, I.; Körner, J. Broadcast channels with confidential messages. IEEE Trans. Inform. Theory **1978**, 24, 339–348.
- Leung-Yan-Cheong, S.K.; Hellman, M.E. The Gaussian wire-tap channel. IEEE Trans. Inform. Theory **1978**, 24, 451–456.
- Ozarow, L.H.; Wyner, A.D. Wire-tap channel II. In Proceedings of the Eurocrypt 84, Workshop on Advances in Cryptology: Theory and Applications of Cryptographic Techniques, Paris, France, 9–11 April 1985; pp. 33–51.
- Yamamoto, H. Coding theorems for secret sharing communication systems with two noisy channels. IEEE Trans. Inform. Theory **1989**, 35, 572–578.
- Yamamoto, H. Rate-distortion theory for the Shannon cipher system. IEEE Trans. Inform. Theory **1997**, 43, 827–835.
- Merhav, N. Shannon’s secrecy system with informed receivers and its application to systematic coding for wiretapped channels. IEEE Trans. Inform. Theory **2008**, 54, 2723–2734.
- Tekin, E.; Yener, A. The Gaussian multiple access wire-tap channel. IEEE Trans. Inform. Theory **2008**, 54, 5747–5755.
- Mitrpant, C. Information Hiding—An Application of Wiretap Channels with Side Information. Ph.D. Thesis, University of Duisburg-Essen, Essen, Germany, 2003.
- Mitrpant, C.; Vinck, A.J.H.; Luo, Y. An achievable region for the Gaussian wiretap channel with side information. IEEE Trans. Inform. Theory **2006**, 52, 2181–2190.
- Ardestanizadeh, E.; Franceschetti, M.; Javidi, T.; Kim, Y.-H. Wiretap channel with secure rate-limited feedback. IEEE Trans. Inform. Theory **2009**, 55, 5353–5361.
- Hayashi, M. Upper bounds of eavesdropper’s performances in finite-length code with the decoy method. Phys. Rev. A **2007**, 76, 012329; Erratum in Phys. Rev. A **2009**, 79, 019901E.
- Hayashi, M.; Matsumoto, R. Secure multiplex coding with dependent and non-uniform multiple messages. IEEE Trans. Inform. Theory **2016**, 62, 2355–2409.
- Bloch, M.; Barros, J. Physical-Layer Security: From Information Theory to Security Engineering; Cambridge University Press: New York, NY, USA, 2011.
- Bellare, M.; Tessaro, S.; Vardy, A. Semantic security for the wiretap channel. In Advances in Cryptology—CRYPTO 2012; Safavi-Naini, R., Canetti, R., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7417.
- Goldfeld, Z.; Cuff, P.; Permuter, H.H. Semantic security capacity for wiretap channels of type II. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT 2016), Barcelona, Spain, 10–15 July 2016; pp. 2799–2803.
- Ziv, J. Coding theorems for individual sequences. IEEE Trans. Inform. Theory **1978**, 24, 405–412.
- Ziv, J. Distortion-rate theory for individual sequences. IEEE Trans. Inform. Theory **1980**, 26, 137–143.
- Ziv, J. Fixed-rate encoding of individual sequences with side information. IEEE Trans. Inform. Theory **1984**, 30, 348–352.
- Merhav, N. Perfectly secure encryption of individual sequences. IEEE Trans. Inform. Theory **2013**, 58, 1302–1310.
- Merhav, N. On the data processing theorem in the semi-deterministic setting. IEEE Trans. Inform. Theory **2014**, 60, 6032–6040.
- Ziv, J.; Lempel, A. Compression of individual sequences via variable-rate coding. IEEE Trans. Inform. Theory **1978**, 24, 530–536.
- Ziv, J. Universal decoding for finite-state channels. IEEE Trans. Inform. Theory **1985**, 31, 453–460.
- Uyematsu, T.; Kuzuoka, S. Conditional Lempel–Ziv complexity and its application to source coding theorem with side information. IEICE Trans. Fundam. **2003**, E86-A, 2615–2617.
- Draper, S. Universal incremental Slepian–Wolf coding. In Proceedings of the 43rd Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 29 September–1 October 2004; pp. 1332–1341.
- Merhav, N. Universal detection of messages via finite-state channels. IEEE Trans. Inform. Theory **2000**, 46, 2242–2246.
- Merhav, N. Guessing individual sequences: Generating randomized guesses using finite-state machines. IEEE Trans. Inform. Theory **2020**, 66, 2912–2920.

**Figure 1.** Wiretap channel model. Since the source and the channel may operate at different rates ($\lambda$ channel symbols per source symbol), the time variables associated with source-related sequences and channel-related sequences are denoted differently, i.e., i and t, respectively.


© 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Merhav, N. Encoding Individual Source Sequences for the Wiretap Channel. *Entropy* **2021**, *23*, 1694.
https://doi.org/10.3390/e23121694
