Encoding Individual Source Sequences for the Wiretap Channel

We consider the problem of encoding a deterministic source sequence (i.e., individual sequence) for the degraded wiretap channel by means of an encoder and decoder that can both be implemented as finite-state machines. Our first main result is a necessary condition for both reliable and secure transmission in terms of the given source sequence, the bandwidth expansion factor, the secrecy capacity, the number of states of the encoder and the number of states of the decoder. Equivalently, this necessary condition can be presented as a converse bound (i.e., a lower bound) on the smallest achievable bandwidth expansion factor. The bound is asymptotically achievable by Lempel–Ziv compression followed by good channel coding for the wiretap channel. For the case where the lower bound is saturated, we also derive a lower bound on the minimum necessary rate of purely random bits needed for local randomness at the encoder in order to meet the security constraint. This bound too is achieved by the same achievability scheme. Finally, we extend the main results to the case where the legitimate decoder has access to a side information sequence, which is another individual sequence that may be related to the source sequence, and a noisy version of the side information sequence leaks to the wiretapper.


1 Introduction
In his seminal paper, Wyner [1] introduced the wiretap channel as a model of secure communication over a degraded broadcast channel, without using a secret key, where the legitimate receiver has access to the output of the good channel and the wiretapper receives the output of the bad channel. The main idea is that the excess noise at the output of the wiretap channel is utilized to secure the message intended for the legitimate receiver. Wyner fully characterized the best achievable trade-off between reliable communication to the legitimate receiver and the equivocation rate at the wiretapper, which was quantified in terms of the conditional entropy of the source given the output of the wiretap channel. One of the most important concepts introduced by Wyner was the secrecy capacity, that is, the supremum of all coding rates that allow both reliable decoding at the legitimate receiver and full secrecy, where the equivocation rate saturates at the (unconditional) entropy rate of the source, or equivalently, the normalized mutual information between the source and the wiretap channel output is vanishingly small for large block-length. The idea behind the construction of a good code for the wiretap channel is basically that of binning: one designs a big code, which can be reliably decoded at the legitimate receiver, and subdivides it into smaller codes that are fed by purely random bits, unrelated to the secret message. Each such sub-code can be reliably decoded individually by the wiretapper to its full capacity, thus leaving no further decoding capability for the remaining bits, which all belong to the real secret message.
During the nearly five decades that have passed since [1] was published, the wiretap channel model has been extended and further developed in many respects; we mention here just a few. Three years after Wyner, Csiszár and Körner [2] extended the wiretap channel to a general broadcast channel that is not necessarily degraded, allowing also a common message intended for both receivers. In the same year, Leung-Yan-Cheong and Hellman [3] studied the Gaussian wiretap channel and proved, among other things, that its secrecy capacity is equal to the difference between the capacity of the legitimate channel and that of the wiretap channel. In [4], Ozarow and Wyner considered a somewhat different model, known as the type II wiretap channel, where the channel to the legitimate receiver is clean (noiseless) and the wiretapper can access a subset of the coded bits. In [5], Yamamoto extended the wiretap channel to include two parallel broadcast channels, connecting one encoder and one legitimate decoder, with both channels wiretapped by wiretappers that do not cooperate with each other. A few years later, the same author [6] broadened the scope of [1] in two ways: first, by allowing a private secret key to be shared between the encoder and the legitimate receiver, and second, by allowing a given distortion in the reproduction of the source at the legitimate receiver. The main coding theorem of [6] suggests a three-fold separation principle, which asserts that no asymptotic optimality is lost if the encoder first applies a good lossy source code, then encrypts the compressed bits, and finally applies a good channel code for the wiretap channel. In [7], this model in turn was generalized to allow source side information at the decoder and at the wiretapper in a degraded structure, with application to systematic coding for the wiretap channel. The Gaussian wiretap channel model of [3] was also extended in two ways: the first is the Gaussian multiple-access
wiretap channel of [8], and the second is the Gaussian interference wiretap channel of [9], [10], where the encoder has access to the interference signal as side information. Wiretap channels with feedback were considered in [11], where it was shown that feedback is best used for the purpose of sharing a secret key as in [6] and [7]. More recent research efforts were dedicated to strengthening the secrecy metric from weak secrecy to strong secrecy, where the mutual information between the source and the wiretap channel output vanishes even without normalization by the block-length, as well as to semantic security, which is similar but refers even to the worst-case message source distribution; see, e.g., [12, Section 3.3], [13], [14].
In this work, we look at Wyner's wiretap channel model from a different perspective. Following the individual-sequence approach pioneered by Ziv in [15], [16] and [17], and continued in later works, such as [18] and [19], we consider the problem of encoding a deterministic source sequence (a.k.a. an individual sequence) for the degraded wiretap channel using finite-state encoders and finite-state decoders. One of the non-trivial issues associated with individual sequences, in the context of the wiretap channel, is how to define the security metric: since no probability distribution is assigned to the source, the equivocation, or the mutual information between the source and the wiretap channel output, cannot be well defined. In [18], a similar dilemma was encountered in the context of private-key encryption of individual sequences, and in the converse theorem therein, it was assumed that the system is perfectly secure in the sense that the probability distribution of the cryptogram does not depend on the source sequence. In principle, it is possible to apply the same approach here, with the word 'cryptogram' replaced by the 'wiretap channel output'. But in order to handle residual dependencies, which will always exist, it is better to use a security metric that quantifies those small dependencies. To this end, it makes sense to adopt the above-mentioned maximum-mutual-information security metric (or, equivalently, the semantic security metric), where the maximum is over all input assignments.
After this maximization, this quantity depends only on the 'channel' between the source and the wiretap channel output.
Our first main result is a necessary condition (i.e., a converse to a coding theorem) for both reliable and secure transmission, which depends on: (i) the given individual source sequence, (ii) the bandwidth expansion factor, (iii) the secrecy capacity, (iv) the number of states of the encoder, (v) the number of states of the decoder, (vi) the allowed bit error probability at the legitimate decoder, and (vii) the allowed maximum-mutual-information secrecy. Equivalently, this necessary condition can be presented as a converse bound (i.e., a lower bound) on the smallest achievable bandwidth expansion factor. The bound is asymptotically achievable by Lempel-Ziv (LZ) compression followed by a good channel coding scheme for the wiretap channel. For the case where this lower bound is saturated, we then also derive a lower bound on the minimum necessary rate of purely random bits needed for adequate local randomness at the encoder, in order to meet the security constraint. This bound too is achieved by the same achievability scheme, a fact which may be of independent interest regardless of individual sequences and finite-state encoders and decoders (i.e., also for ordinary block codes in the traditional probabilistic setting). Finally, we extend the main results to the case where the legitimate decoder has access to a side information sequence, which is another individual sequence that may be related to the source sequence, and where a noisy version of the side information sequence leaks to the wiretapper. It turns out that in this case, the best strategy is the same as if one assumes that the wiretapper sees the clean side information sequence. While this may not be surprising as far as sufficiency is concerned (i.e., as an achievability result), it is less obvious in the context of necessity (i.e., a converse theorem).
The remaining part of this article is organized as follows. In Section 2, we establish the notation, provide some definitions and formalize the problem setting. In Section 3, we provide the main results of this article and discuss them in detail. In Section 4, the extension that incorporates side information is presented. Finally, in Section 5, the proofs of the main theorems are given.
2 Notation, Definitions, and Problem Setting

Notation
Throughout this paper, random variables will be denoted by capital letters, specific values they may take will be denoted by the corresponding lower case letters, and their alphabets will be denoted by calligraphic letters. Random vectors, their realizations, and their alphabets will be denoted, respectively, by capital letters, the corresponding lower case letters, and calligraphic letters, all superscripted by their dimensions. For example, the random vector $X^n = (X_1, \ldots, X_n)$ ($n$ a positive integer) may take a specific vector value $x^n = (x_1, \ldots, x_n)$ in $\mathcal{X}^n$, the $n$-th order Cartesian power of $\mathcal{X}$, which is the alphabet of each component of this vector. Infinite sequences will be denoted using the bold face font, e.g., $\mathbf{x} = (x_1, x_2, \ldots)$. Segments of vectors will be denoted by subscripts and superscripts that correspond to the start and end locations; for example, $x_i^j$, for $i < j$ integers, will denote $(x_i, x_{i+1}, \ldots, x_j)$. When $i = 1$, the subscript will be omitted.
Sources and channels will be denoted by the letter $P$ or $Q$, subscripted by the names of the relevant random variables/vectors and their conditionings, if applicable, following the standard notation conventions, e.g., $Q_X$, $P_{Y|X}$, and so on, or by abbreviated names that describe their functionality. When there is no room for ambiguity, these subscripts will be omitted. The probability of an event $E$ will be denoted by $\Pr\{E\}$, and the expectation operator with respect to (w.r.t.) a probability distribution $P$ will be denoted by $E_P\{\cdot\}$. Again, the subscript will be omitted if the underlying probability distribution is clear from the context or explicitly explained in the following text. The indicator function of an event $E$ will be denoted by $1\{E\}$, that is, $1\{E\} = 1$ if $E$ occurs, and $1\{E\} = 0$ otherwise.
Throughout considerably large parts of the paper, the analysis will be carried out w.r.t. joint distributions that involve several random variables. Some of these random variables will be induced from empirical distributions of deterministic sequences, while others will be ordinary random variables.
Random variables of the former kind will be denoted with 'hats'. As a simple example, consider a deterministic sequence, $x^n$, that is fed as an input to a memoryless channel defined by a single-letter transition matrix, $\{P_{Y|X}(y|x),\ x \in \mathcal{X},\ y \in \mathcal{Y}\}$, and let $y^n$ denote a realization of the corresponding channel output. Let
$$\hat{P}_{\hat{X}\hat{Y}}(x, y) = \frac{1}{n} \sum_{i=1}^{n} 1\{x_i = x,\ y_i = y\}$$
denote the joint empirical distribution induced from $(x^n, y^n)$. In addition to $\hat{P}_{\hat{X}\hat{Y}}(x, y)$, we also define $P_{\hat{X}Y}(x, y) = E\{\hat{P}_{\hat{X}\hat{Y}}(x, y)\}$, where now $Y$ is an ordinary random variable. Clearly, the relation between the two distributions is given by $P_{\hat{X}Y}(x, y) = \hat{P}_{\hat{X}}(x) \cdot P_{Y|X}(y|x)$, where $\hat{P}_{\hat{X}}(x) = \sum_y \hat{P}_{\hat{X}\hat{Y}}(x, y)$ is the empirical marginal of $\hat{X}$.
Such mixed joint distributions will underlie certain information-theoretic quantities; for example, $I(\hat{X}; Y)$ and $H(Y|\hat{X})$ will denote, respectively, the mutual information between $\hat{X}$ and $Y$ and the conditional entropy of $Y$ given $\hat{X}$, both induced by $P_{\hat{X}Y}$. The same notation rules will apply in more involved situations too.
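To make the 'hat' conventions concrete, the following Python sketch (our own illustration, not part of the paper's formalism) computes the joint empirical distribution of a deterministic input sequence and a channel-output realization, and the mutual information induced by a joint pmf:

```python
import math
from collections import Counter

def empirical_joint(xs, ys):
    """Joint empirical distribution: (1/n) * #{i: x_i = x, y_i = y}."""
    n = len(xs)
    counts = Counter(zip(xs, ys))
    return {xy: c / n for xy, c in counts.items()}

def mutual_information(pxy):
    """Mutual information in bits, from a joint pmf given as {(x, y): p}."""
    px, py = Counter(), Counter()
    for (x, y), p in pxy.items():
        px[x] += p
        py[y] += p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

# Deterministic input sequence and one channel-output realization
x_seq = [0, 0, 1, 1, 0, 1, 0, 0]
y_seq = [0, 1, 1, 1, 0, 0, 0, 0]
pxy = empirical_joint(x_seq, y_seq)
```

The same two helpers can be reused for any of the mixed distributions discussed above, since they only assume a finite joint pmf.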

Definitions and Problem Setting
Let $u = (u_1, u_2, \ldots)$ be a deterministic source sequence (a.k.a. an individual sequence), whose symbols take values in a finite alphabet, $\mathcal{U}$, of size $\alpha$. This source sequence is divided into chunks of length $k$, $\tilde{u}_i = u_{ik+1}^{ik+k} \in \mathcal{U}^k$, $i = 0, 1, 2, \ldots$, which are fed into a stochastic finite-state encoder, defined by the following equations:
$$\Pr\{\tilde{X}_i = x\,|\,\tilde{u}_i = \tilde{u},\ s_i^e = s\} = P(x|\tilde{u}, s), \quad i = 0, 1, 2, \ldots$$
$$s_{i+1}^e = h(\tilde{u}_i, s_i^e), \quad i = 0, 1, 2, \ldots,$$
where the variables of these equations are defined as follows: $\tilde{X}_i$ is a random vector taking values in $\mathcal{X}^m$, $\mathcal{X}$ being the $\beta$-ary input alphabet of the channel and $m$ being a positive integer; $x \in \mathcal{X}^m$ is a realization of $\tilde{X}_i$; $s_i^e$ is the state of the encoder at time $i$, which takes on values in a finite set of states, $\mathcal{S}_e$, of size $q_e$. The variable $\tilde{u}$ is an arbitrary member of $\mathcal{U}^k$. The function $h: \mathcal{U}^k \times \mathcal{S}_e \to \mathcal{S}_e$ is called the next-state function of the encoder.¹ Finally, $P(x|\tilde{u}, s)$, $\tilde{u} \in \mathcal{U}^k$, $s \in \mathcal{S}_e$, $x \in \mathcal{X}^m$, is a conditional probability distribution function, i.e., $\{P(x|\tilde{u}, s)\}$ are all non-negative and $\sum_x P(x|\tilde{u}, s) = 1$ for all $(\tilde{u}, s) \in \mathcal{U}^k \times \mathcal{S}_e$. Without loss of generality, we assume that the initial state of the encoder, $s_0^e$, is some fixed member of $\mathcal{S}_e$. The ratio $\lambda = m/k$ is referred to as the bandwidth expansion factor. The sequence of encoder outputs, $x_1, x_2, \ldots$, is fed into a discrete memoryless channel (DMC), henceforth referred to as the main channel, whose corresponding outputs, $y_1, y_2, \ldots$, are generated according to
$$P(y^N|x^N) = \prod_{i=1}^{N} P(y_i|x_i)$$
for every positive integer $N$ and every $x^N \in \mathcal{X}^N$ and $y^N \in \mathcal{Y}^N$. The channel output symbols, $\{y_i\}$, take values in a finite alphabet, $\mathcal{Y}$, of size $\gamma$.
¹ More generally, we could have defined both $s_{i+1}^e$ and $\tilde{x}_i$ to be random functions of $(\tilde{u}_i, s_i^e)$ through a conditional joint distribution, $\Pr\{\tilde{X}_i = x,\ s_{i+1}^e = s\,|\,\tilde{u}_i = \tilde{u},\ s_i^e = s'\}$. However, it makes sense to let the encoder state sequence evolve deterministically in response to the input $u$, since the state designates the memory of the encoder with respect to past inputs.
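The encoder model above can be sketched in a few lines of Python; the class and its parameter names are ours, purely for illustration (here the channel-input block is drawn from $P(x|\tilde{u}, s)$ and the state evolves deterministically via $h$):

```python
import random

class FiniteStateEncoder:
    """Sketch of the stochastic finite-state encoder defined above.
    P maps (u_chunk, state) -> {x_block: probability}; h is the
    deterministic next-state function; s0 is the fixed initial state."""

    def __init__(self, P, h, s0):
        self.P, self.h, self.s = P, h, s0

    def encode_chunk(self, u_chunk, rng=random):
        dist = self.P[(u_chunk, self.s)]
        xs, ps = zip(*sorted(dist.items()))       # fixed order, reproducible
        x = rng.choices(xs, weights=ps, k=1)[0]   # stochastic channel input
        self.s = self.h(u_chunk, self.s)          # deterministic state update
        return x

# Toy instance: k = m = 1, two states, output = (input + state) mod 2
P = {((u,), s): {((u + s) % 2,): 1.0} for u in (0, 1) for s in (0, 1)}

def h(u_chunk, s):
    return (s + u_chunk[0]) % 2

enc = FiniteStateEncoder(P, h, 0)
out = [enc.encode_chunk((u,)) for u in (1, 0, 1, 1)]
```

With the deterministic toy kernel above, the output sequence is fully determined by the input and the state trajectory, which makes the state-dependence of the encoder easy to inspect.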
The sequence of channel outputs, $y_1, y_2, \ldots$, is divided into chunks of length $m$, $\tilde{y}_i = y_{im+1}^{im+m}$, $i = 0, 1, 2, \ldots$, which are fed into a deterministic finite-state decoder, defined according to the following recursive equations:
$$\tilde{v}_i = f(\tilde{y}_i, s_i^d), \quad i = 0, 1, 2, \ldots$$
$$s_{i+1}^d = g(\tilde{y}_i, s_i^d), \quad i = 0, 1, 2, \ldots,$$
where the variables in the equations are defined as follows: $\tilde{v}_i \in \mathcal{U}^k$ is the reconstruction of $\tilde{u}_i$, and $\{s_i^d\}$ is the sequence of states of the decoder, which takes values in a finite set, $\mathcal{S}_d$, of size $q_d$. The output of the main channel, $y_1, y_2, \ldots$, is fed into another DMC, henceforth referred to as the wiretap channel, which generates in response a corresponding sequence, $z_1, z_2, \ldots$, according to
$$P(z^N|y^N) = \prod_{i=1}^{N} P(z_i|y_i),$$
where $\{Z_i\}$ and $\{z_i\}$ take values in a finite alphabet $\mathcal{Z}$. We denote the cascade of the two channels, from $x^N$ to $z^N$, by $P(z^N|x^N)$. We seek a communication system $(P, h, f, g)$ which satisfies two requirements:

1. For a given $\epsilon_r > 0$, the system satisfies the following reliability requirement: the bit error probability is guaranteed to be less than $\epsilon_r$, i.e.,
$$\Pr\{V_i \neq u_i\} \leq \epsilon_r, \quad i = 1, 2, \ldots \qquad (9)$$
for every $(u_1, \ldots, u_k)$ and every combination of initial states of the encoder and the decoder, where $\Pr\{\cdot\}$ is defined w.r.t. the randomness of the encoder and the main channel.
2. For a given $\epsilon_s > 0$, the system satisfies the following security requirement: for every sufficiently large positive integer $n$,
$$I_\mu(U^n; Z^N) \leq n\epsilon_s, \qquad (10)$$
where $N = n\lambda$ and $I_\mu(U^n; Z^N)$ is the mutual information between $U^n$ and $Z^N$, induced by an arbitrary probability assignment, $\mu = \{\mu(u^n),\ u^n \in \mathcal{U}^n\}$, on the source, together with the system. As for the reliability requirement, note that the larger $k$ is, the less stringent the requirement becomes. Concerning the security requirement, ideally we would like to have perfect secrecy, which means that $P(z^N|u^n)$ would be independent of $u^n$ (see also [18]), but it is more realistic to allow a small deviation from this idealization. This security metric is actually the maximum mutual information metric, or, equivalently (see [13]), semantic security, as mentioned in the Introduction.

3 Results
We begin with definitions of two more quantities. The first is the secrecy capacity [1], [12], which is the supremum of all coding rates for which there exist block codes that maintain both an arbitrarily small error probability at the legitimate decoder and an equivocation arbitrarily close to the unconditional entropy of the source. The secrecy capacity is given by
$$C_s = \max_{P_X}\,[I(X; Y) - I(X; Z)],$$
with the joint distribution of $(X, Y, Z)$ being $P_X(x)P(y|x)P(z|y)$. The second quantity we need to define is the LZ complexity [20]. Consider the process of incrementally parsing the source vector, $u^n$, that is, sequentially parsing this sequence into distinct phrases, such that each new parsed phrase is the shortest string that has not been obtained before as a phrase, with a possible exception of the last phrase, which might be incomplete. Let $c(u^n)$ denote the number of resulting phrases. For example, if $n = 10$ and $u^{10} = (0000110110)$, then incremental parsing (from left to right) yields $(0, 00, 01, 1, 011, 0)$, and so $c(u^{10}) = 6$. We define the LZ complexity of the individual sequence, $u^n$, as
$$\rho_{\mathrm{LZ}}(u^n) = \frac{c(u^n) \log c(u^n)}{n}.$$
As was shown by Ziv and Lempel in their seminal paper [20], for large $n$, the LZ complexity, $\rho_{\mathrm{LZ}}(u^n)$, is essentially the best compression ratio that can be achieved by any information lossless, finite-state encoder (up to some negligibly small terms, for large $n$), and it can be viewed as the individual-sequence analogue of the entropy rate.
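The incremental parsing procedure is easy to state in code. The following Python sketch (our own illustration, using base-2 logarithms and the per-symbol normalization $c \log_2 c / n$) reproduces the example above:

```python
import math

def incremental_parse(s):
    """Lempel-Ziv incremental parsing: each new phrase is the shortest
    string not seen before; the last phrase may be an incomplete repeat."""
    phrases, seen, i = [], set(), 0
    while i < len(s):
        j = i + 1
        while j <= len(s) and s[i:j] in seen:
            j += 1
        phrase = s[i:min(j, len(s))]
        phrases.append(phrase)
        seen.add(phrase)
        i = j
    return phrases

def rho_lz(s):
    """LZ complexity c(u^n) * log2 c(u^n) / n, in bits per symbol."""
    c = len(incremental_parse(s))
    return c * math.log2(c) / len(s)

print(incremental_parse("0000110110"))  # ['0', '00', '01', '1', '011', '0']
```

Note that the last phrase ('0') repeats an earlier one, which is exactly the "possible exception of the last phrase" in the definition.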
Before moving on to present our first main result, a simple comment is in order. Even in the traditional probabilistic setting, given a source with entropy $H$ and a channel with capacity $C$, reliable communication cannot be accomplished unless $H \leq \lambda C$, where $\lambda$ is the bandwidth expansion factor. Since both $H$ and $C$ are given and only $\lambda$ is under the control of the system designer, it is natural to state this condition as a lower bound on the bandwidth expansion factor, i.e., $\lambda \geq H/C$. By the same token, in the presence of a secrecy constraint, $\lambda$ must not fall below $H/C_s$.
Our converse theorems for individual sequences will be presented in the same spirit, where the entropy H at the numerator will be replaced by an expression whose main term is the Lempel-Ziv compressibility.
We assume, without essential loss of generality, that $k$ divides $n$ (otherwise, omit the last ($n \bmod k$) symbols of $u^n$ and replace $n$ by $k \cdot \lfloor n/k \rfloor$, without affecting the asymptotic behavior as $n \to \infty$). Our first main result is the following.

Theorem 1 Consider the problem setting defined in Section 2. If there exists a stochastic encoder with $q_e$ states and a decoder with $q_d$ states that together satisfy the reliability constraint (9) and the security constraint (10), then the bandwidth expansion factor $\lambda$ must be lower bounded as follows:
$$\lambda \geq \frac{\rho_{\mathrm{LZ}}(u^n) - h(\epsilon_r) - \epsilon_r \log(\alpha - 1) - \zeta_n(q_d, k)}{C_s + \epsilon_s},$$
where
$$h(x) = -x \log x - (1 - x) \log(1 - x), \quad x \in [0, 1],$$
is the binary entropy function, and $\zeta_n(q_d, k)$ is a redundancy term, defined in the proof, which involves a minimization over the divisors $\ell$ of $n/k$ and which tends to zero as $n \to \infty$ for fixed $q_d$ and $k$. The proof of Theorem 1, like all other proofs in this article, is deferred to Section 5.
Discussion.A few comments are in order with regard to Theorem 1.
1. Irrelevance of $q_e$. It is interesting to note that as far as the encoding and decoding resources are concerned, the lower bound depends on $k$ and $q_d$, but not on the number of states of the encoder, $q_e$. This means that the same lower bound continues to hold even if the encoder has an unlimited number of states. Pushing this to the extreme, even if the encoder has room to store the entire past, the lower bound of Theorem 1 would remain unaltered. The crucial bottleneck is therefore in the finite memory resources associated with the decoder, where the memory may help to reconstruct the source by exploiting empirical dependencies with the past. The dependence on $q_e$, however, will appear later, when we discuss local randomness resources, as well as in the extension to the case of decoder side information.
2. The redundancy term $\zeta_n(q_d, k)$. A technical comment is in order concerning the term $\zeta_n(q_d, k)$, which involves a minimization over all divisors of $n/k$, where we have already assumed that $n/k$ is an integer. Strictly speaking, if $n/k$ happens to be a prime, this minimization is not very meaningful, as $\zeta_n(q_d, k)$ would be relatively large. If this is the case, a better bound will be obtained if one omits some of the last symbols of $u^n$ and thereby reduces $n$ to, say, $n'$, so that $n'/k$ has a richer set of factors. Consider, for example, the choice $\ell = \ell_n = \lfloor \sqrt{\log n} \rfloor$ (instead of minimizing over $\ell$) and replace $n/k$ by $n/k - (n/k \bmod \ell_n)$, without essential loss of tightness. This way, $\zeta_n(q_d, k)$ would tend to zero as $n \to \infty$, for fixed $k$ and $q_d$.

Achievability.
Having established that $\zeta_n(q_d, k) \to 0$, and given that $\epsilon_r$ and $\epsilon_s$ are small, it is clear that the main term in the numerator of the lower bound of Theorem 1 is $\rho_{\mathrm{LZ}}(u^n)$, which is, as mentioned earlier, the individual-sequence analogue of the entropy of the source [20].
In other words, $\lambda$ cannot be much smaller than $\lambda_L(u^n) = \rho_{\mathrm{LZ}}(u^n)/C_s$. A matching achievability scheme would most naturally be based on separation: first apply variable-rate compression² of $u^n$ to about $n\rho_{\mathrm{LZ}}(u^n)$ bits using the LZ algorithm [20], and then feed the resulting compressed bit-stream into a good code for the wiretap channel [1] with codewords of length about
$$N = \frac{n\rho_{\mathrm{LZ}}(u^n)}{C_s(1 - \delta)},$$
where $\delta$ is an arbitrarily small (but positive) margin that keeps the coding rate strictly smaller than $C_s$.
But to this end, the decoder must know $N$. One possible solution is that before the actual encoding of each $u^n$, one would use a separate, auxiliary fixed code that encodes the value of the number of compressed bits, $n\rho_{\mathrm{LZ}}(u^n)$, using $\log(n \log \alpha)$ bits (as $n \log \alpha$ is about the number of possible values that $n\rho_{\mathrm{LZ}}(u^n)$ can take) and protects it using a channel code of rate less than $C_s(1 - \delta)$. Since the length of this auxiliary code grows only logarithmically with $n$ (as opposed to the 'linear' growth of $n\rho_{\mathrm{LZ}}(u^n)$), the overhead in using the auxiliary code is asymptotically negligible. The auxiliary code and the main code will be used alternately: first the auxiliary code, and then the main code, for each $n$-tuple of the source. The main channel code is actually an array of codes, one for each possible value of $n\rho_{\mathrm{LZ}}(u^n)$. Once the auxiliary decoder has decoded this number, the corresponding main decoder is used. Overall, the resulting bandwidth expansion factor is about $\rho_{\mathrm{LZ}}(u^n)/[C_s(1 - \delta)]$. Another, perhaps simpler and better, approach is to use the LZ algorithm in the mode of a variable-to-fixed length code: let the length of the channel codeword, $N$, be fixed, and compress the source sequence until about $NC_s(1 - \delta)$ bits have been produced, which are then encoded by the channel code. Of course, these coding schemes require decoder memory that grows exponentially in $n$, and not just a fixed number, $q_d$, of states, and therefore, strictly speaking, there is a gap between the achievability and the converse result of Theorem 1. However, this gap is closed asymptotically once we take the limit $q_d \to \infty$ after the limit $n \to \infty$, and we consider successive application of these codes over many blocks. The same approach appears also in [15], [16], [17], [20], as well as in later related work.
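To make the length accounting of the first (auxiliary-code) variant concrete, here is a small Python sketch of the bookkeeping; the function and its parameters are ours, and the numerical values below are purely illustrative:

```python
import math

def expansion_factor(n, rho_lz, alpha, Cs, delta):
    """Codeword-length bookkeeping for LZ compression followed by
    wiretap channel coding.  rho_lz: per-symbol LZ compression rate
    (bits/symbol); Cs: secrecy capacity (bits/channel use); delta:
    small back-off keeping the rate strictly below Cs."""
    rate = Cs * (1 - delta)                      # operating rate < Cs
    N_main = math.ceil(n * rho_lz / rate)        # main codeword length
    # Auxiliary codeword: describes which of the ~n*log2(alpha) possible
    # values the compressed length n*rho_lz took.
    N_aux = math.ceil(math.log2(n * math.log2(alpha)) / rate)
    return N_main, N_aux, (N_main + N_aux) / n   # overall lambda

N_main, N_aux, lam = expansion_factor(10**6, 0.5, 2, 0.25, 0.01)
```

For these illustrative numbers, the auxiliary code contributes a vanishing fraction of the total length, and the overall expansion factor sits just above $\rho_{\mathrm{LZ}}/C_s = 2$, as the asymptotic analysis predicts.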
This concludes the discussion on Theorem 1.
We next focus on the local randomness resources that are necessary when the full secrecy capacity is exploited. Specifically, suppose that the stochastic encoder is implemented as a deterministic encoder with an additional input of purely random bits, i.e.,
$$\tilde{x}_i = a(\tilde{u}_i, s_i^e, \tilde{b}_i), \qquad (19)$$
where $\tilde{b}_i = b_{ij+1}^{ij+j}$ is a string of $j$ purely random bits. The question is the following: how large must $j$ be in order to achieve full secrecy? Equivalently, what is the minimum necessary rate of random bits for local randomness at the encoder for secure coding at the maximum reliable rate? In fact, this question may be interesting in its own right, regardless of the individual-sequence setting and finite-state encoders and decoders, i.e., even for ordinary block coding (which is the special case of $q_e = q_d = 1$) in the traditional probabilistic setting. The following theorem answers this question.
Theorem 2 Consider the problem setting defined in Section 2 and let $\lambda$ meet the lower bound of Theorem 1. If there exists an encoder (19) with $q_e$ states and a decoder with $q_d$ states that jointly satisfy the reliability constraint (9) and the security constraint (10), then $j$ must be essentially lower bounded by $mI(X^*; Z^*)$, up to terms that depend on $\epsilon_s$, $q_e$ and $\ell$ and that become negligible as $\epsilon_s \to 0$ and $\ell \to \infty$, where $X^*$ is the random variable that achieves $C_s$, $Z^*$ is the corresponding wiretap channel output, and $\ell$ is the achiever of $\zeta_n(q_d, k)$.
Note that the lower bound of Theorem 2 depends on $q_e$, as opposed to that of Theorem 1, which depended only on $q_d$. Since $\epsilon_s$ is assumed small and $\ell \to \infty$, it is clear that the main term is $mI(X^*; Z^*)$, i.e., the bit rate must be essentially at least as large as $I(X^*; Z^*)$ random bits per channel use, or equivalently, $\lambda I(X^*; Z^*)$ bits per source symbol. It is interesting to note that Wyner's code [1] asymptotically achieves this bound when the coding rate saturates the secrecy capacity, because the sub-code that can be decoded by the wiretapper (within each given bin) is of rate about $I(X^*; Z^*)$, and it encodes just the bits of the local randomness. So, when working at the full secrecy capacity, Wyner's code is optimal not only in terms of the optimal trade-off between reliability and security, but also in terms of the minimum consumption of local, purely random bits.
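The rate accounting behind this observation can be summarized as follows (a heuristic sketch of the standard binning construction, in our notation, with $Y^*$ and $Z^*$ denoting the channel outputs induced by $X^*$):

```latex
\begin{align*}
  R_{\mathrm{total}}  &\approx I(X^*;Y^*)
     && \text{(total rate, reliably decodable by the legitimate receiver)}\\
  R_{\mathrm{bin}}    &\approx I(X^*;Z^*)
     && \text{(rate within each bin, indexed by the local random bits)}\\
  R_{\mathrm{secret}} &= R_{\mathrm{total}} - R_{\mathrm{bin}}
     \approx I(X^*;Y^*) - I(X^*;Z^*) = C_s .
\end{align*}
```

In words: the local random bits must index the sub-code within each bin, so their rate cannot fall below the bin rate $I(X^*;Z^*)$ when the secret rate saturates $C_s$.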

4 Side Information at the Decoder with Partial Leakage to the Wiretapper
Consider next an extension of our model to the case where there are side information sequences, $w^n = (w_1, \ldots, w_n)$ and $\dot{w}^n = (\dot{w}_1, \ldots, \dot{w}_n)$, available to the decoder and the wiretapper, respectively.
For the purpose of a converse theorem, we assume that $w^n$ is available to the encoder too, whereas in the achievability part, we will comment also on the case where it is not. We will assume that $w^n$ is a deterministic sequence, but $\dot{w}^n$ is a realization of a random vector $\dot{W}^n = (\dot{W}_1, \ldots, \dot{W}_n)$, which is a noisy version of $w^n$. In other words, it is generated from $w^n$ by another memoryless channel,
$$Q(\dot{w}^n|w^n) = \prod_{i=1}^{n} Q_{\dot{W}|W}(\dot{w}_i|w_i).$$
The symbols of $\{w_i\}$ and $\{\dot{w}_i\}$ take values in finite alphabets, $\mathcal{W}$ and $\dot{\mathcal{W}}$, respectively. There are two important extreme special cases: (i) $\dot{W}^n = w^n$ almost surely, which is the case of totally insecure side information that fully leaks to the wiretapper, and (ii) $\dot{W}^n$ is degenerate (or independent of $w^n$), which is the case of secure side information with no leakage to the wiretapper. Every intermediate situation between these two extremes is a situation of partial leakage. The finite-state encoder model is now re-defined according to:
$$\Pr\{\tilde{X}_i = x\,|\,\tilde{u}_i = \tilde{u},\ \tilde{w}_i = w,\ s_i^e = s\} = P(x|\tilde{u}, w, s), \quad i = 0, 1, 2, \ldots$$
$$s_{i+1}^e = h(\tilde{u}_i, \tilde{w}_i, s_i^e), \quad i = 0, 1, 2, \ldots,$$
where $\tilde{w}_i = w_{ik+1}^{ik+k}$, $i = 0, 1, \ldots, n/k - 1$. Likewise, the decoder is now given by $\tilde{v}_i = f(\tilde{y}_i, \tilde{w}_i, s_i^d)$ and $s_{i+1}^d = g(\tilde{y}_i, \tilde{w}_i, s_i^d)$, and the wiretapper has access to $Z^N$ and $\dot{W}^n$. Accordingly, the security constraint is modified as follows: for a given $\epsilon_s > 0$ and for every sufficiently large $n$,
$$I_\mu(U^n; Z^N|\dot{W}^n) \leq n\epsilon_s, \qquad (25)$$
where $I_\mu(U^n; Z^N|\dot{W}^n)$ is the conditional mutual information between $U^n$ and $Z^N$ given $\dot{W}^n$, induced by an arbitrary probability assignment, $\mu = \{\mu(u^n, \dot{w}^n),\ u^n \in \mathcal{U}^n,\ \dot{w}^n \in \dot{\mathcal{W}}^n\}$, together with the system. In order to present the extension of Theorem 1 that incorporates side information, we first need to extend the LZ complexity to include side information, namely, to define the conditional LZ complexity (see also [21]). Given $u^n$ and $w^n$, let us apply the incremental parsing procedure of the LZ algorithm to the sequence of pairs $((u_1, w_1), (u_2, w_2), \ldots, (u_n, w_n))$. According to this procedure, all phrases are distinct, with a possible exception of the last phrase, which might be incomplete. Let $c(u^n, w^n)$ denote the number of distinct phrases. For example,³ if $u^6 = (0, 1, 0, 0, 0, 1)$ and $w^6 = (0, 1, 0, 1, 0, 1)$, then the joint parsing yields the phrases $((0,0), (1,1), (00,01), (01,01))$, and so $c(u^6, w^6) = 4$. Let $c(w^n)$ denote the resulting number of distinct phrases of $w^n$, and let $w(l)$ denote the $l$-th distinct $w$-phrase, $l = 1, 2, \ldots, c(w^n)$. In the above example, $c(w^6) = 3$. Denote by $c_l(u^n|w^n)$ the number of occurrences of $w(l)$ in the parsing of $w^n$, or, equivalently, the number of distinct $u$-phrases that jointly appear with $w(l)$. Clearly,
$$\sum_{l=1}^{c(w^n)} c_l(u^n|w^n) = c(u^n, w^n).$$
In the above example, $w(1) = 0$, $w(2) = 1$, $w(3) = 01$, $c_1(u^6|w^6) = c_2(u^6|w^6) = 1$, and $c_3(u^6|w^6) = 2$. Now, the conditional LZ complexity of $u^n$ given $w^n$ is defined as
$$\rho_{\mathrm{LZ}}(u^n|w^n) = \frac{1}{n} \sum_{l=1}^{c(w^n)} c_l(u^n|w^n) \log c_l(u^n|w^n).$$
We are now ready to present the main result of this section.
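The joint parsing and the conditional complexity can be sketched in Python as follows (our own illustration, using the per-symbol, base-2 normalization; the two six-symbol sequences below are hypothetical example inputs, chosen only to exercise the definitions):

```python
import math
from collections import Counter

def joint_incremental_parse(u, w):
    """Incremental parsing of the pair sequence ((u1,w1),...,(un,wn)):
    each new phrase is the shortest pair-string not seen before."""
    pairs = list(zip(u, w))
    n = len(pairs)
    phrases, seen, i = [], set(), 0
    while i < n:
        j = i + 1
        while j <= n and tuple(pairs[i:j]) in seen:
            j += 1
        ph = tuple(pairs[i:min(j, n)])
        phrases.append(ph)
        seen.add(ph)
        i = j
    return phrases

def rho_lz_conditional(u, w):
    """Conditional LZ complexity (1/n) * sum_l c_l * log2 c_l, where c_l
    counts the joint phrases whose w-part equals the l-th distinct w-phrase."""
    c_l = Counter(tuple(p[1] for p in ph)
                  for ph in joint_incremental_parse(u, w))
    return sum(c * math.log2(c) for c in c_l.values()) / len(u)

u6 = (0, 1, 0, 0, 0, 1)   # hypothetical example sequences
w6 = (0, 1, 0, 1, 0, 1)
```

For this toy pair, the joint parsing produces four phrases whose $w$-parts fall into three distinct classes, one of which occurs twice, so the conditional complexity is $2 \log_2 2 / 6 = 1/3$ bit per symbol.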
Theorem 3 Consider the problem setting defined in Section 2, along with the above-mentioned modifications that incorporate side information. If there exists a stochastic encoder with $q_e$ states and a decoder with $q_d$ states that together satisfy the reliability constraint (9) and the security constraint (25), then its bandwidth expansion factor $\lambda$ must be lower bounded as follows:
$$\lambda \geq \frac{\rho_{\mathrm{LZ}}(u^n|w^n) - h(\epsilon_r) - \epsilon_r \log(\alpha - 1) - \zeta_n(q_e, q_d, k)}{C_s + \epsilon_s},$$
where $\zeta_n(q_e, q_d, k)$ is a redundancy term, defined in the proof analogously to $\zeta_n(q_d, k)$ of Theorem 1, which depends also on $q_e$ and on $\omega$, $\omega$ being the size of $\mathcal{W}$, and which tends to zero as $n \to \infty$ for fixed $q_e$, $q_d$ and $k$.
Note that the lower bound of Theorem 3 does not depend on the noisy side information at the wiretapper, nor on the channel $Q_{\dot{W}|W}$ that generates it from $w^n$; it depends only on $u^n$ and $w^n$ in terms of the data available in the system. Clearly, since it is a converse theorem, if it holds when the side information is available also at the encoder, then it certainly applies also to the case where the encoder does not have access to $w^n$. Interestingly, the encoder and the legitimate decoder act as if the wiretapper had the clean side information, $w^n$. While it is quite obvious that protection against availability of $w^n$ at the wiretapper is sufficient for protection against availability of $\dot{W}^n$ (as $\dot{W}^n$ is a degraded version of $w^n$), it is not quite trivial that this should also be necessary, as the above converse theorem asserts. It is also interesting to note that here the bound depends also on $q_e$, and not only on $q_d$, as in Theorem 1. However, this dependence on $q_e$ disappears in the special case where $\dot{W}^n = w^n$ with probability one.
We next discuss the achievability of the lower bound of Theorem 3. If the encoder has access to $w^n$, then the first step would be to apply the conditional LZ algorithm (see [21, proof of Lemma 2], [22]), thus compressing $u^n$ to about $n\rho_{\mathrm{LZ}}(u^n|w^n)$ bits, and the second step would be good channel coding for the wiretap channel, using the same methods as described in the previous section. If, however, the encoder does not have access to $w^n$, the channel coding part is still as before, but the situation with the source coding part is somewhat more involved, since neither the encoder nor the decoder can calculate the target bit rate, $\rho_{\mathrm{LZ}}(u^n|w^n)$, as neither party has access to both $u^n$ and $w^n$. However, this source coding rate can essentially be achieved, provided that there is a low-rate noiseless feedback channel from the legitimate decoder to the encoder. The following scheme is in the spirit of the one proposed by Draper [23], but with a few modifications.
The encoder implements random binning for all source sequences in $\mathcal{U}^n$; that is, for each member of $\mathcal{U}^n$, an index is drawn independently, under the uniform distribution over $\{0, 1, 2, \ldots, \alpha^n - 1\}$, and is represented by its binary expansion, $b(u^n)$, of length $n \log \alpha$ bits. We select a large positive integer $r$, but keep $r \ll n$ (say, $r = \sqrt{n}$ or $r = \log^2 n$). The encoder transmits the bits of $b(u^n)$ incrementally, $r$ bits at a time, until it receives an ACK from the decoder. Each chunk of $r$ bits is fed into a good channel code for the wiretap channel, at a rate slightly less than $C_s$. At the decoder side, this channel code is decoded (correctly, with high probability, for large $r$). Then, for each $i$ ($i = 1, 2, \ldots$), after having decoded the $i$-th chunk of $r$ bits of $b(u^n)$, the decoder creates the list
$$\mathcal{A}_i(u^n) = \{\hat{u}^n:\ [b(\hat{u}^n)]_{ir} = [b(u^n)]_{ir}\},$$
where $[b(\hat{u}^n)]_l$ denotes the string formed by the first $l$ bits of $b(\hat{u}^n)$. For each $\hat{u}^n \in \mathcal{A}_i(u^n)$, the decoder calculates $\rho_{\mathrm{LZ}}(\hat{u}^n|w^n)$. Fix an arbitrarily small $\delta > 0$, which controls the trade-off between error probability and compression rate. If $n\rho_{\mathrm{LZ}}(\hat{u}^n|w^n) \leq i \cdot r - n\delta$ for some $\hat{u}^n \in \mathcal{A}_i(u^n)$, the decoder sends an ACK on the feedback channel and outputs the reconstruction, $\hat{u}^n$, with the smallest $\rho_{\mathrm{LZ}}(\hat{u}^n|w^n)$ among all members of $\mathcal{A}_i(u^n)$.
If no member of A_i(u^n) satisfies nρ_LZ(ũ^n|w^n) ≤ i·r − nδ, the receiver waits for the next chunk of r compressed bits, and it does not send an ACK. The probability of a source-coding error after the i-th chunk is upper bounded by

P_e(i) ≤(a) |{ũ^n ≠ u^n : nρ_LZ(ũ^n|w^n) ≤ i·r − nδ}| · 2^{−i·r} ≤(b) 2^{i·r − nδ + o(n)} · 2^{−i·r} = 2^{−nδ + o(n)},

where in (a), the factor 2^{−i·r} is the probability that [b(ũ^n)]_{i·r} = [b(u^n)]_{i·r} for each member of the set {ũ^n ≠ u^n : nρ_LZ(ũ^n|w^n) ≤ i·r − nδ}, and (b) is based on [21, eq. (A.13)], which bounds the cardinality of that set by 2^{i·r − nδ + o(n)}. Clearly, it is guaranteed that an ACK will be received at the encoder (and hence the transmission will stop) no later than after the transmission of chunk no. i* = ⌈(nρ_LZ(u^n|w^n) + nδ)/r⌉, which is the stage at which at least the correct source sequence begins to satisfy the condition nρ_LZ(u^n|w^n) ≤ i·r − nδ. The overall probability of source-coding error is then upper bounded by

Σ_{i=1}^{i*} P_e(i) ≤ i* · 2^{−nδ + o(n)},

which still tends to zero as n → ∞. As for channel-coding errors, the probability that at least one chunk will be decoded incorrectly is upper bounded by (n log α / r + 1) · e^{−rE}, where E is an achievable error exponent of channel coding at the given rate. Thus, if r grows at any rate faster than logarithmic, but sub-linear in n, then the overall channel-coding error probability tends to zero and, at the same time, the compression redundancy, r/n, tends to zero too.
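The prefix-matching/ACK logic above can be exercised end-to-end on a toy example. The sketch below (all names hypothetical) replaces ρ_LZ(·|·) with a crude Hamming-distance proxy for the conditional description length, assumes the channel-coded chunks arrive error-free, and enumerates all of U^n for a tiny n; it illustrates only the decision rule, not the actual performance of the scheme.

```python
import math
import random

def hamming(a, b):
    """Hamming distance between two n-bit integers."""
    return bin(a ^ b).count("1")

def run_scheme(n, u, w, r, guard, seed=1):
    """Toy version of the incremental random-binning scheme with decoder feedback.

    u, w are n-bit integers (source and side information); r is the chunk size
    in bits; guard plays the role of n*delta.  The 'complexity' cost(x) =
    4*d_H(x, w) is a crude stand-in for n*rho_LZ(x|w).  Returns the number of
    chunks sent before the ACK and the decoder's reconstruction.
    """
    rng = random.Random(seed)
    bins = [rng.getrandbits(n) for _ in range(1 << n)]   # b(u^n): random n-bit bin index
    cost = lambda x: 4 * hamming(x, w)

    for i in range(1, math.ceil(n / r) + 1):
        l = min(i * r, n)                    # bits of b(u^n) decoded so far
        # A_i(u^n): all sequences whose bin index agrees on the first l bits
        cands = [x for x in range(1 << n)
                 if bins[x] >> (n - l) == bins[u] >> (n - l)]
        # ACK as soon as some candidate is 'compressible enough'
        ok = [x for x in cands if cost(x) <= i * r - guard]
        if ok:
            return i, min(ok, key=cost)      # output the minimum-cost candidate
    return math.ceil(n / r), u               # fallback: all bits of b(u^n) sent
```

By construction, the true u always qualifies by chunk ⌈(cost(u) + guard)/r⌉, and the decoder's output never has larger cost than u itself; a decoding error corresponds to a lower-cost impostor colliding with u's bin-index prefix.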
To show that the security constraint (25) is satisfied too, consider an arbitrary assignment µ of random vectors (U^n, W^n), and let us denote by B the string of I(X^N; Z^N) − Nǫ bits of local randomness in Wyner's code [1]. Then, one obtains a chain of inequalities in which (e) holds since conditioning reduces entropy, and (f) holds since, in Wyner coding, B can be reliably decoded given (Z^N, U^n) (δ_n is understood to be small, and recall that W^n is not needed in the channel-decoding phase, but only in the Slepian–Wolf decoding phase), and since the length of B is chosen to be I(X^N; Z^N) − Nǫ. Comparing the right-most side to the left-most side, we readily obtain the desired bound, which can be made arbitrarily small.

Proofs
We begin this section by establishing more notation conventions to be used throughout all proofs.
Let n ≫ k be a positive integer and let ℓ be such that K = kℓ. For the sake of brevity, we henceforth denote (ũ_{iℓ}, …, ũ_{(i+1)ℓ−1}) by ũ_{iℓ}^{(i+1)ℓ−1}, and use the same notation rule for all other sequences. Next, define the joint empirical distribution, where the expectation is w.r.t. both the randomness of the encoder and the randomness of both channels. Note that the channel-input chunks are generated according to

∏_{j=0}^{ℓ−1} P(x_j | ũ_j, s^e_j),  s^e_0 = s^e.  (38)

Note also that the bit error probability (in the absence of side information) under this distribution is defined via f(Y^M, S^d), which is induced by ℓ successive applications of the decoder output function with inputs Y^m, Y_{m+1}^{2m}, …, Y_{M−m+1}^{M} and the initial state S^d.
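As a small illustration of the chunked statistics defined above, the following sketch (a hypothetical helper, not from the paper) tabulates the empirical distribution of non-overlapping length-ℓ blocks of a single sequence:

```python
from collections import Counter

def block_empirical(seq, ell):
    """Empirical distribution of the non-overlapping length-ell blocks of seq."""
    blocks = [tuple(seq[i:i + ell]) for i in range(0, len(seq) - ell + 1, ell)]
    counts = Counter(blocks)
    total = len(blocks)
    return {b: c / total for b, c in counts.items()}
```

In the proofs, the analogous object is a joint distribution over source, channel-input, and state chunks, further averaged over the encoder and channel randomness; the single-sequence version above conveys only the counting aspect.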

Proof of Theorem 1
Beginning with the reliability constraint, we have the following chain of inequalities. On the other hand, a matching bound holds, and so the two can be combined. Following [1], we define the function Γ(·), which is monotonically non-increasing and concave [1, Lemma 1]. Regarding the security constraint, we obtain a chain of inequalities, or, equivalently, a bound on the rate. Finally, we apply the inequality of [18, eq. (18)] to obtain the desired bound, which completes the proof of Theorem 1.

Proof of Theorem 2
Consider the following extension of the joint distribution to include a random variable that represents {b_i}, where J = jℓ and b_{iℓ}^{(i+1)ℓ−1} = (b_{iℓ}, b_{iℓ+1}, …, b_{(i+1)ℓ−1}). Next, consider the following chain of inequalities, where (a) is due to the fact that, on the one hand, X^M is a deterministic function of (Û^K, B^J, S^e), which implies that I(Û^K, B^J, S^e; Z^M) ≥ I(X^M; Z^M), while on the other hand, (Û^K, B^J, S^e) → X^M → Z^M is a Markov chain, so that I(Û^K, B^J, S^e; Z^M) ≤ I(X^M; Z^M); hence the equality. The meaning of this result is the following: once one finds a communication system that complies with both the security constraint and the reliability constraint, the amount of local randomization is lower bounded in terms of the induced mutual information, I(X^M; Z^M), as above. By the hypothesis of Theorem 2, the secrecy capacity is saturated, and hence P_{X^M} must coincide with the product distribution, [P_{X*}]^M, yielding I(X^M; Z^M)/M = I(X*; Z*). This completes the proof of Theorem 2.

Outline of the Proof of Theorem 3
The proof follows essentially the same steps as those of the proof of Theorem 1, except that everything should be conditioned on the side information, but there are also some small twists.We will therefore only provide a proof outline and highlight the differences.
The auxiliary joint distribution is now extended accordingly. It follows that Ẇ^K → Ŵ^K → (Û^K, Ŝ^e) → Z^M is a Markov chain under P_{Û^K Ŵ^K Ẇ^K Z^M Ŝ^e}. In other words, the legitimate decoder has side information of better quality than that of the wiretapper.
First, observe the following. The reliability constraint is handled exactly as in the proof of Theorem 1, except that everything should be conditioned on Ŵ^K; the result is the conditioned version of the corresponding bound. Regarding the security constraint, we begin with the following manipulation.
where in (a) we have used eq. (62), in (b) we used Fano's inequality, and in (c) we used the data processing inequality, as (Û^K, Ŝ^e) → X^M → Y^M is a Markov chain (also conditioned on (Ŵ^K, Z^M)).
The next step is to further upper bound the term I(X^M; Y^M | Z^M, Ŵ^K). This is carried out very similarly to the proof of Theorem 1, except that everything is also conditioned on Ŵ^K. We then obtain

I(X^M; Y^M | Z^M, Ŵ^K) ≤ I(Û^K; Z^M | Ẇ^K) + log(q_e q_d) + K∆(ǫ_r) ≤ Kǫ_s + log(q_e q_d) + K∆(ǫ_r),

or

R(u^n, w^n, q_e·q_d, ǫ_r) ≤ λ·Γ(R(u^n, w^n, q_d, ǫ_r)/λ) ≤ λ·Γ(R(u^n, w^n, q_e·q_d, ǫ_r)/λ),  (67)

which is the same as R(u^n, w^n, q_e·q_d, ǫ_r) ≤ λ·C_s. The proof is completed by combining the last inequality with the following inequality [24, eqs.
The ratio λ = m/k is referred to as the bandwidth expansion factor. It should be pointed out that the parameters k and m are fixed integers, which are not necessarily large (e.g., k = 2 and m = 3 are valid values). The concatenation of output vectors from the encoder, x̃_0, x̃_1, …, is viewed as a sequence of chunks of channel input symbols, x_1, x_2, …, with x̃_i = x_{im+1}^{im+m}, similarly to the above-defined partition of the source sequence.
the output function of the decoder, and the function g : Y^m × S_d → S_d is the next-state function of the decoder. The concatenation of the decoder output vectors, ṽ_0, ṽ_1, …, forms the entire stream of reconstruction symbols, v_1, v_2, …
(a) is due to the security constraint, (b) follows from Fano's inequality and the fact that I(S^d; Û^K | Y^M, Z^M) ≤ H(S^d) ≤ log q_d, (c) is by the data processing inequality and the fact that Û^K → X^M → Y^M is a Markov chain given Z^M, (d) is as in [1, eq. (37)], (e) is by the definition of Wyner's function Γ(·), (f) is by the concavity of this function, and (g) is by (45) and the decreasing monotonicity of the function Γ(·). Thus,