Variable-Length Resolvability for General Sources and Channels

We introduce the problem of variable-length (VL) source resolvability, in which a given target probability distribution is approximated by encoding a VL uniform random number, and the asymptotically minimum average length rate of the uniform random number, called the VL resolvability, is investigated. We first analyze the VL resolvability with the variational distance as an approximation measure. Next, we investigate the case with the divergence as an approximation measure. When asymptotically exact approximation is required, it is shown that the resolvabilities under the two kinds of approximation measures coincide. We then extend the analysis to the case of channel resolvability, where the target distribution is the output distribution induced via a general channel by a fixed general source as input. The obtained characterization of channel resolvability is fully general in the sense that, when the channel is just an identity mapping, it reduces to the general formulas for source resolvability. We also analyze the second-order VL resolvability.


Introduction
Generating a random number subject to a given probability distribution has a number of applications, such as in information security, statistical machine learning, and computer science. From the viewpoint of information theory, random number generation may be considered to be a transformation (encoding) of sequences emitted from a given source with a coin distribution into other sequences with a target distribution via a deterministic mapping [1][2][3]. Among others, there have been two major types of problems of random number generation: intrinsic randomness [4,5] and (source) resolvability [6,7]. In the former case, a fixed-length (FL) uniform random number is extracted from an arbitrary coin distribution, and we want to find the maximum achievable rate of such uniform random numbers. In the latter case, in contrast, an FL uniform random number used as a coin distribution is encoded to approximate a given target distribution, and we want to find the minimum achievable rate of such uniform random numbers. Thus, there is a duality between these two problems.
The problem of intrinsic randomness has been extended to the case of variable-length (VL) uniform random numbers, for which the length of the random numbers may vary. This problem, referred to as the VL intrinsic randomness, was first introduced by Vembu and Verdú [5] for a finite source alphabet and later extended by Han [4] to a countably infinite alphabet. This problem was motivated by the fact that, in many practical situations, FL uniform random numbers are not available and VL uniform random numbers must be used instead; typically, this is the case when we work with Elias' universal random numbers [1]. The use of such uniform random numbers is generally expected to increase the achievable average length rate for intrinsic randomness. Then, the following natural question may be raised: Can we indeed lower the average length rate needed in the "resolvability" problem by using VL random numbers? The answer is "yes". Despite the duality between these two kinds of random number generation problems, the VL counterpart of the resolvability problem has not been discussed, and it is this problem that we focus on.
We introduce the problem of VL source/channel resolvability, where a given target probability distribution is to be approximated by encoding a VL uniform random number. Distance measures between the target distribution and an approximated distribution are used to measure the fineness of the approximation. We first analyze the fundamental limit on the VL source resolvability with the variational distance as an approximation measure in Section 3. We use the smooth Shannon entropy, which is a version of the smooth Rényi entropy [8], to characterize the δ-source resolvability, defined as the minimum achievable length rate of uniform random numbers with an asymptotic distance of less than or equal to δ ∈ [0, 1). In the proof of the direct part, we develop a simple version of information spectrum slicing [2], in which each "sliced" information density, quantized to an integer, is approximated by an FL uniform random number. The simplicity of this method is what first makes the analysis with the variational distance tractable. As an important implication of the general formulas for the δ-source resolvability, it is shown that the minimum resolvability rate of VL resolvability is equal to (1 − δ) times that of FL resolvability when the source is stationary and memoryless, or more generally has a one-point spectrum (cf. Corollary 1). This result indicates an advantage of using a VL uniform random number when δ > 0, because the VL resolvability rate can be made strictly smaller than the FL one. We then extend these analyses to the case with the (unnormalized) divergence as an approximation measure in Section 4. When δ = 0, that is, when asymptotically exact approximation is required, it is shown that the 0-source resolvabilities under the two kinds of approximation measures coincide with each other.
In Section 5, we then consider the problem of channel resolvability [6,9,10], in which not only a source but also a channel is fixed, and the output distribution via the channel is now the target of approximation. This problem, also referred to as the problem of output approximation, provides a powerful tool for analyzing the fundamental limits of various problems in information theory and information security. Some examples include identification codes [11][12][13], distributed hypothesis testing [14], message authentication [15], secret key generation [16], and coding for secure communication [17][18][19]. We consider two types of problems in which either a general source (mean-channel resolvability) or a VL uniform random number (VL channel resolvability) is used as a coin distribution. It is shown that the formulas established are equal for both coin distributions. In the special case that the channel is the identity mapping, the established formulas reduce to those for source resolvability established in Sections 3 and 4.
From Sections 3-5, the so-called first-order resolvability rates are analyzed, and the next important step may be the second-order analysis. Second-order analyses for various coding problems were initiated by Strassen [20] and have been studied in the past decade or so (cf. [21][22][23][24][25][26][27][28][29][30][31]). We also analyze the second-order fundamental limits of the VL channel/source resolvability in Section 6. In this paper, it is shown that the VL δ-source resolvability under the variational distance is equal to the minimum achievable rate of fixed-to-variable-length source codes with an error probability of less than or equal to δ. It is demonstrated that this close relationship provides a single-letter characterization of the first- and second-order source resolvability under the variational distance when the source is stationary and memoryless. It is worth noting that second-order analyses for the VL setting are relatively few compared to those in the FL setting. The second-order formulas established in this paper are of importance from this perspective, too.
The remainder of the paper is organized as follows: Section 2 reviews the problem of FL source resolvability and the relations between the minimum resolvability rate and the minimum coding rate of FL source codes. Section 3 formally introduces the problem of VL source resolvability with the variational distance as an approximation measure. Then, Section 4 discusses VL source resolvability with the divergence as an approximation measure, and Section 5 generalizes the settings to channel resolvability. Section 6 investigates the second-order fundamental limits of the VL channel/source resolvability. Section 7 concludes the paper with a discussion of possible extensions.

FL Resolvability: Review
Let U = {1, 2, . . . , K} be a finite alphabet of size K, and let X be a finite or countably infinite alphabet. Let X = {X^n}_{n=1}^∞ be a general source [2], where P_{X^n} is a probability distribution on X^n. We do not impose any assumptions on X, such as stationarity or ergodicity. In this paper, we identify X^n with its probability distribution P_{X^n}, and these symbols are used interchangeably.
We first review the problem of FL (source) resolvability [2] using the variational distance as an approximation measure. Let U_{M_n} denote the uniform random number, which is a random variable uniformly distributed over U_{M_n} := {1, . . . , M_n}. Consider the problem of approximating the target distribution P_{X^n} by using U_{M_n} as a coin distribution via a deterministic mapping ϕ_n : {1, . . . , M_n} → X^n. Denoting X̃^n = ϕ_n(U_{M_n}), we want to make P_{X̃^n} approximate P_{X^n} (cf. Figure 1). A standard choice of the performance measure for approximation is

d(P_{X^n}, P_{X̃^n}) := (1/2) ∑_{x ∈ X^n} |P_{X^n}(x) − P_{X̃^n}(x)|,

which is referred to as the variational distance between P_{X^n} and P_{X̃^n}. It is easily seen that 0 ≤ d(P_{X^n}, P_{X̃^n}) ≤ 1. Let us now review the problem of source resolvability. Throughout this paper, logarithms are of base K.
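As a concrete companion to this definition, the variational distance can be computed directly from finite pmfs. The sketch below is purely illustrative (the two distributions are made-up, with the approximation's masses being multiples of 1/4, as produced by encoding a uniform random number over M = 4 points):

```python
def variational_distance(p, q):
    """d(P, Q) = (1/2) * sum over x of |P(x) - Q(x)|; always lies in [0, 1]."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# A target P_X and an approximation whose masses are multiples of 1/4.
p_target = {"a": 0.5, "b": 0.3, "c": 0.2}
p_approx = {"a": 0.5, "b": 0.25, "c": 0.25}
print(round(variational_distance(p_target, p_approx), 6))  # 0.05
```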

Definition 1 (FL resolvability).
A resolution rate R ≥ 0 is said to be FL achievable or simply f-achievable (under the variational distance) if there exists a deterministic mapping ϕ_n : {1, . . . , M_n} → X^n satisfying

limsup_{n→∞} (1/n) log M_n ≤ R, (3)
lim_{n→∞} d(P_{X^n}, P_{X̃^n}) = 0, (4)

where X̃^n = ϕ_n(U_{M_n}) and U_{M_n} is the uniform random number over U_{M_n}. The infimum of f-achievable rates, i.e.,

S_f(X) := inf{R : R is f-achievable},

is called the FL resolvability or simply the f-resolvability.
Then, we have the following theorem:

Theorem 1 (Han and Verdú [6]). For any general target source X, S_f(X) = H̄(X), where H̄(X) denotes the sup-entropy rate

H̄(X) := p-limsup_{n→∞} (1/n) log ( 1 / P_{X^n}(X^n) ). (7)

Remark 1. As a dual counterpart of (7), we may define the inf-entropy rate

H̲(X) := p-liminf_{n→∞} (1/n) log ( 1 / P_{X^n}(X^n) ).

Sources such that H̄(X) = H̲(X) are called one-point spectrum sources (or, equivalently, are said to satisfy the strong converse property (cf. Han [2])); this class includes stationary memoryless sources, stationary ergodic sources, etc. This class of sources is discussed later in Corollary 1.
The following problem is called the δ-resolvability problem [2,7], which relaxes the condition on the variational distance, compared to (4).

Definition 2 (FL δ-resolvability).
For a fixed δ ∈ [0, 1), a resolution rate R ≥ 0 is said to be FL δ-achievable or simply f(δ)-achievable (under the variational distance) if there exists a deterministic mapping ϕ_n : {1, . . . , M_n} → X^n satisfying limsup_{n→∞} (1/n) log M_n ≤ R and

limsup_{n→∞} d(P_{X^n}, P_{X̃^n}) ≤ δ, (10)

where X̃^n = ϕ_n(U_{M_n}) and U_{M_n} is the uniform random number over U_{M_n}. The infimum of all f(δ)-achievable rates, i.e.,

S_f(δ|X) := inf{R : R is f(δ)-achievable},

is referred to as the FL δ-resolvability or simply the f(δ)-resolvability.
Then, a characterization of S_f(δ|X) is given by Theorem 2 (Steinberg and Verdú [7]) for any general target source X.

Remark 2. The FL resolvability problem is deeply related to the FL source coding problem allowing a decoding error probability of up to ε. Denoting by R_f(ε|X) the minimum achievable rate for the source X, there is the relationship [7]

S_f(δ|X) = R_f(δ|X) (δ ∈ [0, 1)), (13)

and, hence, by Theorem 2, Formula (14) can also be shown with a smooth Rényi entropy of order zero [32].

VL Resolvability: Variational Distance
In this section, we introduce the problem of variable-length (VL) resolvability, where the target probability distribution is approximated by encoding a VL uniform random number.As an initial step, we analyze the fundamental limit on the VL resolvability with the variational distance as an approximation measure.

Definitions
Let U* denote the set of all sequences u ∈ U^m over m = 0, 1, 2, . . . , where U^0 = {λ} (λ is the null string). Let L_n denote a random variable that takes values in {0, 1, 2, . . .}. We define the VL uniform random number U^(L_n) so that U^(m) is uniformly distributed over U^m given L_n = m. In other words,

P_{U^(L_n)}(u, m) = Pr{L_n = m} · K^{−m}  (u ∈ U^m; m = 0, 1, 2, . . .),

where K = |U|. It should be noticed that the VL sequence u ∈ U^m is generated with the joint probability P_{U^(L_n)}(u, m).
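The two-stage structure of U^(L_n), first draw the length m with probability Pr{L_n = m} and then draw u uniformly over U^m, can be sketched as follows (the particular length distribution is an arbitrary made-up example, not taken from the paper):

```python
import random

K = 2  # alphabet size |U|; U = {1, ..., K}

def sample_vl_uniform(length_pmf, rng):
    """Sample (u, m) with joint probability Pr{L_n = m} * K**(-m).

    length_pmf maps a length m to Pr{L_n = m}; it may be any discrete
    distribution over {0, 1, 2, ...} (no prefix/Kraft-type constraint).
    """
    lengths, weights = zip(*sorted(length_pmf.items()))
    m = rng.choices(lengths, weights=weights)[0]   # draw the length L_n
    u = tuple(rng.randrange(1, K + 1) for _ in range(m))  # uniform on U^m
    return u, m

rng = random.Random(0)
u, m = sample_vl_uniform({0: 0.1, 2: 0.5, 3: 0.4}, rng)
print(m, u)  # a length in {0, 2, 3} and a uniform string of that length
```

Averaging the sampled lengths recovers E[L_n] (here 0.1·0 + 0.5·2 + 0.4·3 = 2.2), the quantity whose normalized version (1/n)E[L_n] is rated in the definitions below.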
We formally define the δ-resolvability problem under the variational distance using the VL random number, called the VL δ-resolvability or simply v(δ)-resolvability.

Definition 3 (VL δ-resolvability: variational distance).
A resolution rate R ≥ 0 is said to be VL δ-achievable or simply v(δ)-achievable (under the variational distance) with δ ∈ [0, 1) if there exists a VL uniform random number U^(L_n) and a deterministic mapping ϕ_n : U* → X^n satisfying

limsup_{n→∞} (1/n) E[L_n] ≤ R, (18)
limsup_{n→∞} d(P_{X^n}, P_{X̃^n}) ≤ δ, (19)

where E[·] denotes the expected value and X̃^n = ϕ_n(U^(L_n)). The infimum of all v(δ)-achievable rates, i.e.,

S_v(δ|X) := inf{R : R is v(δ)-achievable},

is referred to as the VL δ-resolvability or simply the v(δ)-resolvability.
If δ = 0, a v(0)-achievable rate is said to be VL achievable or simply v-achievable (under the variational distance). The infimum of all v-achievable rates, i.e.,

S_v(X) := inf{R : R is v-achievable}, (21)

is called the VL resolvability or simply the v-resolvability.

Smooth Shannon Entropy
To establish a general formula for S_v(δ|X), we introduce the following quantity for a general source X. Let P(X^n) denote the set of all probability distributions on X^n. For δ ∈ [0, 1), by defining the δ-ball using the variational distance

B_δ(X^n) := {V^n ∈ P(X^n) : d(P_{X^n}, P_{V^n}) ≤ δ}, (22)

we introduce the smooth Shannon entropy:

H_[δ](X^n) := min_{V^n ∈ B_δ(X^n)} H(V^n), (23)

where H(V^n) denotes the Shannon entropy of V^n. Based on this quantity for a general source X = {X^n}_{n=1}^∞, we define

H_[δ](X) := limsup_{n→∞} (1/n) H_[δ](X^n). (25)

Remark 4. Renner and Wolf [8] have defined the smooth Rényi entropy of order α ∈ (0, 1) ∪ (1, ∞). By letting α ↑ 1, we obtain the smooth Shannon entropy above as a limiting case. As for the proof, see Appendix A.
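Since entropy is Schur-concave, one feasible point of the minimization defining H_[δ](X^n) is obtained by moving up to δ of probability mass from the least likely outcomes onto the most likely one; the resulting distribution is at variational distance at most δ from the original, so its entropy upper-bounds H_[δ]. The sketch below computes this bound for a toy distribution; it is an illustration of the definition, not the paper's construction:

```python
from math import log

def shannon_entropy(probs, base=2):
    return -sum(v * log(v, base) for v in probs if v > 0)

def smooth_entropy_upper_bound(probs, delta, base=2):
    """Upper-bound H_[delta] via a feasible V in the variational delta-ball:
    strip up to delta of total mass from the smallest probabilities and
    add the stripped mass to the largest one."""
    p = sorted(probs)              # ascending; the last entry is the mode
    removed, q = 0.0, []
    for v in p:
        take = min(v, max(delta - removed, 0.0))
        removed += take
        q.append(v - take)
    q[-1] += removed               # reassign the stripped mass to the mode
    return shannon_entropy(q, base)

p = [0.4, 0.3, 0.2, 0.1]
print(round(shannon_entropy(p), 4))                  # 1.8464
print(round(smooth_entropy_upper_bound(p, 0.1), 4))  # 1.4855, strictly smaller
```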
3.3. General Formula for General δ ∈ [0, 1)

The following main theorem indicates that the v(δ)-resolvability S_v(δ|X) can be characterized by the smooth Shannon entropy for X.

Theorem 3. For any general target source X,

S_v(δ|X) = lim_{γ↓0} H_[δ+γ](X)  (δ ∈ [0, 1)). (32)

Remark 5. In Formula (32), the limit lim_{γ↓0} of the offset term +γ appears in the characterization of S_v(δ|X). This is because the smooth entropy H_[δ](X^n) for X^n involves the infimum over the nonasymptotic δ-ball B_δ(X^n) for a given length n. Alternatively, we may consider the asymptotic δ-ball defined as

B_δ(X) := {V = {V^n}_{n=1}^∞ : limsup_{n→∞} d(P_{X^n}, P_{V^n}) ≤ δ},

and then we obtain the alternative formula without an offset term,

S_v(δ|X) = inf_{V ∈ B_δ(X)} H̄(V), (34)

where H̄(V) := limsup_{n→∞} (1/n) H(V^n) is the sup-entropy rate for V defined with the Shannon entropy H(V^n). The proof of (34) is given in Appendix B.
The same remark also applies to general formulas to be established in the subsequent sections.
Remark 6. Independently of this work, Tomita, Uyematsu, and Matsumoto [34] have investigated the following problem: the coin distribution is given by fair coin tossing, and the average number of coin tosses should be asymptotically minimized as in [3], while the variational distance between the target and approximated distributions should satisfy (19). In this case, the asymptotically minimum average number of coin tosses is also characterized by the right-hand side (r.h.s.) of (32) (cf. [34]).
Since the coin distribution in that setting is restricted to fair coin tossing with a stopping algorithm, the realizations of L_n must satisfy the Kraft inequality (for prefix codes), whereas the problem addressed in this paper allows the probability distribution of L_n to be an arbitrary discrete one, not necessarily corresponding to a prefix code. In this sense, our problem is more relaxed, while the coin is constrained to be conditionally independent given L_n. Theorem 3 indicates that the v(δ)-resolvability is the same in both problems. Later, we shall show that, even in the case where the coin distribution may be any general source X̃, the δ-resolvability remains the same (cf. Theorem 5 and Remark 14).
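The contrast with the Kraft-constrained setting can be made concrete: whenever Pr{L_n = m} > 0, the VL uniform random number uses all K^m strings of length m, so supporting even two distinct lengths already makes the realized set of strings violate the Kraft inequality. A small illustrative check (the specific lengths are made up):

```python
def kraft_sum(lengths, K=2):
    """Sum of K**(-l) over a multiset of string lengths; a prefix-free
    set of strings over a K-ary alphabet must have this sum <= 1."""
    return sum(K ** (-l) for l in lengths)

# Supporting lengths 1 and 2 with K = 2 uses all of 0, 1, 00, 01, 10, 11:
used = [1, 1] + [2, 2, 2, 2]
print(kraft_sum(used))  # 2.0 > 1, so no prefix-free stopping rule realizes it
```

By contrast, lengths realizable by a prefix code, e.g. {1, 2, 2} for the binary code {0, 10, 11}, give a Kraft sum of exactly 1.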
On the other hand, we now define the following information quantity (to be used in Remark 7 below) to discuss the relationship with VL source coding. For δ ∈ [0, 1), we define the quantity G_[δ](X^n); G_[δ](X^n) is a nonincreasing function of δ. Based on this quantity, we define

G_[δ](X) := limsup_{n→∞} (1/n) G_[δ](X^n). (37)

Then, we have the following:

Remark 7. There is a deep relation between the δ-resolvability problem and VL δ-source coding with the error probability asymptotically not exceeding δ. Koga and Yamamoto [35] (see also Han [36]) showed that the minimum average length rate R*_v(δ|X) of VL δ-source codes is characterized in terms of G_[δ](X). Theorem 3 and Proposition 1 (to be shown just below) reveal that S_v(δ|X) = R*_v(δ|X). The following proposition shows a general relationship between G_[δ](X) and H_[δ](X).
Proposition 1. For any general source X, the bounds (40) and (41) hold. By plugging γ = δ into (40), a looser but sometimes useful bound can be obtained. Equation (40) has been derived in [21], which improves a bound established in [35,37]. In view of Theorems 2 and 3, (41) in Proposition 1 implies

S_v(δ|X) ≤ S_f(δ|X) for all δ ∈ [0, 1),

where S_f(δ|X) denotes the f(δ)-resolvability. This general relationship elucidates the advantage of using VL uniform random numbers to lower the average length rate. The proposition also claims that G_[δ](X) coincides with H_[δ](X) for all δ ∈ [0, 1) for any general source X.
A consequence of Theorem 3 is the following corollary:

Corollary 1. For any one-point spectrum source X (cf. Remark 1),

S_v(δ|X) = (1 − δ) H*(X) for every δ ∈ [0, 1),

where H*(X) denotes the common value of the sup- and inf-entropy rates of X, and where we notice that H*(X) = H(X_1) (the entropy) for stationary memoryless sources.
(Proof) See Appendix D.
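Corollary 1's factor (1 − δ) can be probed numerically for a stationary memoryless source: a feasible point of the smooth-entropy minimization (strip δ of mass from the least likely length-n sequences and move it onto the most likely one) upper-bounds H_[δ](X^n)/n, and this bound already drops well below H(X_1) at moderate n; the corollary says the limit is (1 − δ)H(X_1). The following sketch uses an assumed binary source with P(0) = 0.8 (made-up numbers, illustration only):

```python
import math
from itertools import product

def entropy_bits(probs):
    return -sum(v * math.log2(v) for v in probs if v > 0)

def truncated_rate(p1, n, delta):
    """(1/n) * H(V^n) for a feasible V^n in the variational delta-ball
    around the i.i.d. product of p1 (upper-bounds H_[delta](X^n)/n)."""
    # Probabilities of all length-n i.i.d. sequences, ascending.
    seq_probs = sorted(
        math.prod(p1[s] for s in seq)
        for seq in product(range(len(p1)), repeat=n)
    )
    removed, q = 0.0, []
    for v in seq_probs:
        take = min(v, max(delta - removed, 0.0))
        removed += take
        q.append(v - take)
    q[-1] += removed        # stripped mass goes to the most likely sequence
    return entropy_bits(q) / n

p1 = [0.8, 0.2]             # H(X_1) ~ 0.7219 bits
for delta in (0.0, 0.2, 0.4):
    print(delta, round(truncated_rate(p1, 12, delta), 4))
```

At δ = 0 the printed rate equals H(X_1) exactly (the product distribution itself), and the rate is strictly decreasing in δ, as Corollary 1 predicts.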
Now, we are ready to give the proof of Theorem 3: Proof of Theorem 3.
(1) Converse Part: Let R be v(δ)-achievable. Then, there exist U^(L_n) and ϕ_n satisfying (18) and

limsup_{n→∞} δ_n ≤ δ, (46)

where we define δ_n = d(P_{X^n}, P_{X̃^n}) with X̃^n = ϕ_n(U^(L_n)). Equation (46) implies that, for any given γ > 0, it holds that δ_n ≤ δ + γ for all n ≥ n_0 with some n_0 > 0, and thus we have P_{X̃^n} ∈ B_{δ+γ}(X^n), because d(P_{X^n}, P_{X̃^n}) ≤ δ + γ. On the other hand, it follows from (23) that

H_[δ+γ](X^n) ≤ H(X̃^n) ≤ H(U^(L_n)),

where the second inequality is due to the fact that ϕ_n is a deterministic mapping and X̃^n = ϕ_n(U^(L_n)).
Combining (47)-(49) yields the desired bound, where we have used (18) and (25) for the last inequality. Since γ > 0 is arbitrary, the converse part follows.

(2) Direct Part: Without loss of generality, we assume that H+ := lim_{γ↓0} H_[δ+γ](X) is finite. Letting R = H+ + 3γ, where γ > 0 is an arbitrary constant, we shall show that R is v(δ)-achievable. In what follows, we use a simpler form of information spectrum slicing [2], where each piece of sliced information, quantized to a positive integer, is approximated by the uniform random number U^(ℓ) of length ℓ. First, we note that H_[δ+γ](X) ≤ H+ (53) because of the monotonicity of H_[δ](X) in δ. Let V^n be a random variable subject to H(V^n) = H_[δ+γ](X^n) with P_{V^n} ∈ B_{δ+γ}(X^n). For γ > 0, we can choose a c_n > 0 so large that where We also define For m = 0, 1, . . . , these sets form a partition of X^n, i.e., We set L_n so that where it is obvious that ∑_{m=0}^{β_n} Pr{L_n = m} = 1, and, hence, the probability distribution of the VL uniform random number U^(L_n) is given as since for x ∈ S_n(m) and, therefore, If then add a for i = 1, 2, . . . in order, until it holds that with some 1 ≤ c ≤ |S_n(m)|, where such a c always exists. For simplicity, we set for which defines the random variable X̃^n with values in X^n such that that is, X̃^n = ϕ_n(U^(L_n)), where, if X^n \ T_n ≠ ∅, we choose some x_0 ∈ X^n \ T_n and set Notice that, by this construction, we have (b) Evaluation of Average Length: The average length is evaluated as follows: where we have used (72). For x_i ∈ S_n(m), we obtain from (74) where, to derive the last inequality, we have used (62). Plugging this inequality and (77) into (76), we obtain which yields lim sup where the second inequality follows from (53) and the last one is due to (52). (c) Evaluation of Variational Distance: From (61) and (74), we have which, in view of (58), leads to where we have used (75) to obtain the leftmost inequality in (82). By the triangle inequality, we obtain where the last inequality follows because P_{V^n} ∈ B_{δ+γ}(X^n). Thus, we obtain from (83) Since γ > 0 is arbitrary and we have (80), we conclude that R is v(δ)-achievable.
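The slicing construction of the direct part can be sketched numerically: group the sequences by the quantized information density m(x) = ceil(log_K(1/P(x))), set L_n = m on the slice S_n(m), and allocate the K^m outcomes of U^(m) to the sequences of the slice so that each approximated mass is a multiple of Pr{L_n = m}·K^(−m). The following simplified sketch (it omits the tail set T_n and the γ-slack bookkeeping of the actual proof, and the i.i.d. binary source is a made-up example) reports the resulting average length and variational distance:

```python
import math
from itertools import product

K = 2  # size of the coin alphabet; logarithms are base K

def slice_and_approximate(pmf):
    """Return (approx_pmf, E[L_n]) for the simplified slicing scheme."""
    # 1. Slice by quantized information density m(x) = ceil(log_K 1/P(x)),
    #    so that K**(-m) <= P(x) for every x in slice m.
    slices = {}
    for x, p in pmf.items():
        m = math.ceil(-math.log(p, K))
        slices.setdefault(m, {})[x] = p
    approx, avg_len = {}, 0.0
    for m, sl in slices.items():
        mass = sum(sl.values())        # Pr{L_n = m}
        avg_len += m * mass
        # 2. Give each x about P(x | slice) * K**m of the K**m uniform
        #    outcomes; hand the remainder to the most likely sequences.
        counts = {x: int(p / mass * K ** m) for x, p in sl.items()}
        leftover = K ** m - sum(counts.values())
        for x in sorted(sl, key=sl.get, reverse=True):
            if leftover == 0:
                break
            counts[x] += 1
            leftover -= 1
        for x, c in counts.items():
            approx[x] = mass * c * K ** (-m)
    return approx, avg_len

n, p1 = 8, {"0": 0.7, "1": 0.3}
pmf = {"".join(s): math.prod(p1[c] for c in s) for s in product("01", repeat=n)}
approx, avg_len = slice_and_approximate(pmf)
dist = 0.5 * sum(abs(pmf[x] - approx.get(x, 0.0)) for x in pmf)
print(f"E[L]/n = {avg_len / n:.3f}, variational distance = {dist:.4f}")
```

Even at n = 8, the average length rate E[L]/n stays within 1/n of the entropy rate (since m(x) ≤ log_K(1/P(x)) + 1), while the variational distance is already small; both effects sharpen as n grows.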
3.4. General Formula for δ = 0

In this subsection, we consider the special case with δ = 0. In this case, we can elucidate the relationship between the minimum achievable rates for VL source codes with an asymptotically vanishing decoding error probability and for FL source codes.
We obtain the following corollary from Theorem 3 and Proposition 1:

Corollary 2. For any general target source X,

S_v(X) = lim_{γ↓0} G_[γ](X), (85)

where G_[γ](X) is defined in (37).
It has been shown by Han [2] that any source X = {X^n}_{n=1}^∞ satisfying the uniform integrability condition (cf. Han [2]) satisfies (86), where H̄(X) is called the sup-entropy rate. Notice here, in particular, that the finiteness of the alphabet implies the uniform integrability [2]. Thus, we obtain the following corollary:

Corollary 3. For any finite-alphabet target source X, S_v(X) = H̄(X).

Remark 8. As in the case of the FL resolvability and FL source coding problems, S_v(X) is tightly related to VL source codes with vanishing decoding error probabilities. Denoting by R*_v(X) the minimum error-vanishing VL achievable rate for a source X, Han [36] has shown that (88) holds, and, hence, from Corollary 3, it is concluded that S_v(X) = R*_v(X). In addition, if a general source X satisfies the uniform integrability and the strong converse property (cf. Han [2]), then Equation (86) holds, and hence it follows from ([2], Theorem 1.7.1) that S_v(X) = R_f(X) = R_v(X), where R_f(X) := R_f(0|X) and R_v(X) denotes the minimum achievable rate of VL source codes with zero error probability for all n = 1, 2, . . . .
Remark 9. Han and Verdú [6] have discussed the problem of mean-resolvability for the target distribution P_{X^n}. In this problem, the coin distribution may be a general source X̃ = {X̃^n}_{n=1}^∞, where X̃^n is a random variable that takes values in X^n, with the average length rate (1/n)E[L_n] in (18) replaced by the entropy rate (1/n)H(X̃^n). Denoting by S̃_v(X) the mean-resolvability, which is defined as the infimum of v-achievable rates with a general coin source X̃ (over the countably infinite alphabet), we can easily verify that any mean-resolution rate R must satisfy R ≥ lim_{γ↓0} G_[γ](X), so that S_v(X) ≤ S̃_v(X). On the other hand, S̃_v(X) ≤ S_v(X) by definition. Thus, in view of Corollary 2, we have:

Corollary 4. For any general target source X, S̃_v(X) = S_v(X). (91)

VL Resolvability: Divergence
So far, we have considered the problem of VL resolvability, in which the approximation level is measured by the variational distance between X^n and X̃^n. It is sometimes useful to deal with another quantity as an approximation measure. In this section, we use the (unnormalized) divergence as the approximation measure.

Definitions
In this subsection, we address the following problem.

Definition 4 (VL δ-resolvability: divergence).
A resolution rate R ≥ 0 is said to be VL δ-achievable or simply v_D(δ)-achievable (under the divergence) with δ ≥ 0 if there exists a VL uniform random number U^(L_n) and a deterministic mapping ϕ_n : U* → X^n satisfying

limsup_{n→∞} (1/n) E[L_n] ≤ R, (92)
limsup_{n→∞} D(X̃^n || X^n) ≤ δ,

where X̃^n = ϕ_n(U^(L_n)) and D(X̃^n || X^n) denotes the divergence between P_{X̃^n} and P_{X^n}:

D(X̃^n || X^n) := ∑_{x ∈ X^n} P_{X̃^n}(x) log ( P_{X̃^n}(x) / P_{X^n}(x) ).

The infimum of all v_D(δ)-achievable rates, i.e.,

S^D_v(δ|X) := inf{R : R is v_D(δ)-achievable},

is called the VL δ-resolvability or simply the v_D(δ)-resolvability.
Remark 10.The measure of approximation is now the divergence D( Xn ||X n ) but not its reversed version D(X n || Xn ).In the context of resolvability, divergence D( Xn ||X n ) (and its counterpart in the case of channel resolvability) is usually employed as in [6,7,9].We also use this type of divergence in the subsequent sections.
To establish the general formula for S^D_v(δ|X), we introduce the following quantity for a general source X = {X^n}_{n=1}^∞. Recall that P(X^n) denotes the set of all probability distributions on X^n. For δ ≥ 0, defining the δ-ball using the divergence as

B^D_δ(X^n) := {V^n ∈ P(X^n) : D(V^n || X^n) ≤ δ},

we introduce the following quantity, referred to as the smooth entropy using the divergence:

H^D_[δ](X^n) := min_{V^n ∈ B^D_δ(X^n)} H(V^n),

where H(V^n) denotes the Shannon entropy of V^n. Based on this quantity, we also define H^D_[δ](X) := limsup_{n→∞} (1/n) H^D_[δ](X^n). The following lemma is used to derive Corollary 5 of Theorem 4 in the next subsection.
Lemma 1. For any general source X,

B^D_{g(δ)}(X^n) ⊆ B_δ(X^n), and hence H^D_[g(δ)](X^n) ≥ H_[δ](X^n)  (δ ∈ [0, 1)),

where we define g(δ) = 2δ²/ln K.

(Proof) See Appendix E.
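The function g(δ) = 2δ²/ln K is Pinsker's inequality read with base-K logarithms: D(V^n||X^n) ≥ (2/ln K)·d(P_{V^n}, P_{X^n})², so a divergence ball of radius g(δ) sits inside the variational-distance ball of radius δ. A quick numerical sanity check on random distributions (illustrative only):

```python
import math
import random

def variational(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def divergence(p, q, K=2):
    """D(P||Q) with base-K logarithms (assumes Q has full support)."""
    return sum(a * math.log(a / b, K) for a, b in zip(p, q) if a > 0)

def pinsker_gap(p, q, K=2):
    """D(P||Q) - 2 d(P,Q)^2 / ln K, which Pinsker's inequality makes >= 0."""
    d = variational(p, q)
    return divergence(p, q, K) - 2.0 * d * d / math.log(K)

rng = random.Random(1)

def rand_dist(k=4):
    v = [rng.random() + 1e-9 for _ in range(k)]
    s = sum(v)
    return [x / s for x in v]

min_gap = min(pinsker_gap(rand_dist(), rand_dist()) for _ in range(1000))
print(min_gap >= -1e-12)  # True: the base-K Pinsker bound holds on every pair
```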

General Formula
Here, we establish another main theorem, which characterizes S^D_v(δ|X) for all δ ≥ 0 in terms of the smooth entropy using the divergence.

Theorem 4. For any general target source X,

S^D_v(δ|X) = lim_{γ↓0} H^D_[δ+γ](X)  (δ ≥ 0). (101)

Remark 11. It should be noticed that the approximation measure considered here is not the normalized divergence (1/n)D(X̃^n || X^n), which has been used in the problem of FL δ-resolvability [7]. The achievability scheme given in the proof of the direct part of Theorem 4 can also be used in the case of this relaxed measure. Indeed, denoting the VL δ-resolvability with the normalized divergence by S̃^D_v(δ|X), the general formula for S̃^D_v(δ|X) is given in the same form as (101) if the radius of the δ-ball B^D_δ(X^n) in the definition of H^D_[δ](X^n) is replaced with one defined by the normalized divergence. It generally holds that S^D_v(δ|X) ≥ S̃^D_v(δ|X) for all δ ≥ 0 because the normalized divergence is smaller than the unnormalized divergence.
As we have seen in Lemma 1, we generally have S^D_v(g(δ)|X) ≥ S_v(δ|X) for any δ ∈ [0, 1) with g(δ) = 2δ²/ln K. In particular, in the case that δ = 0, we obtain the following corollary of Theorems 3 and 4.
Corollary 5. For any general target source X, S^D_v(0|X) = S_v(X).

Corollary 5 indicates that the v_D(0)-resolvability S^D_v(0|X) coincides with the v-resolvability S_v(X) and is also characterized by the r.h.s. of (85). By (88), it also implies that S^D_v(0|X) = R*_v(X), where R*_v(X) denotes the minimum error-vanishing achievable rate with VL source codes for X.

Proof of Theorem 4.
(1) Converse Part: Let R be v_D(δ)-achievable. Then, there exist U^(L_n) and ϕ_n satisfying (92) and

limsup_{n→∞} δ_n ≤ δ, (104)

where we define δ_n = D(X̃^n || X^n) with X̃^n = ϕ_n(U^(L_n)). Equation (104) implies that, for any given γ > 0, it holds that δ_n ≤ δ + γ for all n ≥ n_0 with some n_0 > 0, and therefore P_{X̃^n} ∈ B^D_{δ+γ}(X^n). On the other hand, it follows from (23) that

H^D_[δ+γ](X^n) ≤ H(X̃^n) ≤ H(U^(L_n)),

where the second inequality is due to the fact that ϕ_n is a deterministic mapping and X̃^n = ϕ_n(U^(L_n)).
Combining (105)-(107) yields the desired chain of inequalities, in which we used (25) and (92) for the last step. Since γ > 0 is arbitrary, the converse bound follows.

(2) Direct Part: We modify the achievability scheme used in the proof of the direct part of Theorem 3.
Although the proof of this part is quite similar to that of the direct part of Theorem 3, we give the full proof here in order to avoid possible subtle confusion. We may assume that H+ := lim_{γ↓0} H^D_[δ+γ](X) is finite (H+ < +∞). Letting R = H+ + µ, where µ > 0 is an arbitrary constant, we shall show that R is v_D(δ)-achievable. Let V^n be a random variable subject to P_{V^n} ∈ B^D_{δ+γ}(X^n) for any fixed γ ∈ (0, 1/2]. We can choose a c_n > 0 so large that where We also define Letting, for m = 1, 2, . . . , these sets form a partition of T_n: We set L_n so that which satisfies and, hence, the probability distribution of U^(L_n) is given as then add a for i = 1, 2, . . . in order, until it holds that with some 1 ≤ c ≤ |S_n(m)|, where such a c always exists. For simplicity, we set for and which defines the random variable X̃^n with values in X^n such that that is, X̃^n = ϕ_n(U^(L_n)). Notice that, by this construction, we have where we have used (128). For x_i ∈ S_n(m), we obtain from (129) and the right inequality of (130) where, to derive the last inequality, we have used the fact that 0 ≤ γ_0 ≤ γ ≤ 1/2 and It should be noticed that (132) also implies that Plugging this and (132) into (131), we obtain Thus, we obtain from (136) where the second inequality follows from (110). Since we have assumed that H+ is finite and γ ∈ (0, 1/2] is arbitrary, the r.h.s. of (137) can be made as close to H+ as desired. Therefore, for all sufficiently small γ > 0, we obtain lim sup (c) Evaluation of Divergence: The divergence D(X̃^n || X^n) can be rewritten as In view of (132), we obtain and where, to obtain the last inequality, we used the fact that P_{V^n} ∈ B^D_{δ+γ}(X^n). Plugging (140) and (141) into (139) yields where we have used the fact that 2γ/ln K ≤ 3γ for all K ≥ 2 and the assumption 0 < γ ≤ 1/2 to derive the last inequality. Since γ ∈ (0, 1/2] is arbitrary and we have (138), R is v_D(δ)-achievable.

Mean and VL Channel Resolvability
So far, we have studied the problem of source resolvability, whereas the problem of channel resolvability has been introduced by Han and Verdú [6] to investigate the capacity of identification codes [11].In a conventional problem of this kind, a target output distribution P Y n via a channel W n due to an input X n is approximated by encoding the FL uniform random number U M n as a channel input.In this section, we generalize the problem of such channel resolvability to that in the variable-length setting.

Definitions
Let X and Y be finite or countably infinite alphabets. Let W = {W^n}_{n=1}^∞ be a general channel, where W^n : X^n → Y^n denotes a stochastic mapping. We denote by Y = {Y^n}_{n=1}^∞ the output process via W due to an input process X = {X^n}_{n=1}^∞, where X^n and Y^n take values in X^n and Y^n, respectively. Again, we do not impose any assumptions, such as stationarity or ergodicity, on either X or W. As in the previous sections, we identify X^n and Y^n with their probability distributions P_{X^n} and P_{Y^n}, respectively, and these symbols are used interchangeably.
In this section, we consider several types of problems of approximating a target output distribution P Y n .The first one is the problem of mean-resolvability [6], in which the channel input is allowed to be an arbitrary general source.
Definition 5 (mean δ-channel resolvability: variational distance). Let δ ∈ [0, 1) be fixed arbitrarily. A resolution rate R ≥ 0 is said to be mean δ-achievable for X (under the variational distance) if there exists a general source X̃ = {X̃^n}_{n=1}^∞ satisfying

limsup_{n→∞} (1/n) H(X̃^n) ≤ R, (143)
limsup_{n→∞} d(P_{Y^n}, P_{Ỹ^n}) ≤ δ,

where Ỹ^n denotes the output via W^n due to the input X̃^n. The infimum of all mean δ-achievable rates for X, i.e.,

S̃_v(δ|X, W) := inf{R : R is mean δ-achievable for X}, (145)

is referred to as the mean δ-resolvability for X. We also define the mean δ-resolvability for the worst input as S̃_v(δ|W) := sup_X S̃_v(δ|X, W).

On the other hand, we may also consider the problem of VL channel resolvability. Here, the VL uniform random number U^(L_n) is defined as in the foregoing sections. Consider the problem of approximating the target output distribution P_{Y^n} via W^n due to X^n by using another input X̃^n = ϕ_n(U^(L_n)) with a deterministic mapping ϕ_n : U* → X^n.

Definition 6 (VL δ-channel resolvability: variational distance). Let δ ∈ [0, 1) be fixed arbitrarily. A resolution rate R ≥ 0 is said to be VL δ-achievable or simply v(δ)-achievable for X (under the variational distance) if there exists a VL uniform random number U^(L_n) and a deterministic mapping ϕ_n : U* → X^n satisfying

limsup_{n→∞} (1/n) E[L_n] ≤ R and limsup_{n→∞} d(P_{Y^n}, P_{Ỹ^n}) ≤ δ,

where E[·] denotes the expected value and Ỹ^n denotes the output via W^n due to the input X̃^n = ϕ_n(U^(L_n)). The infimum of all v(δ)-achievable rates for X, i.e.,

S_v(δ|X, W) := inf{R : R is v(δ)-achievable for X}, (149)

is called the VL δ-channel resolvability or simply the v(δ)-channel resolvability for X. We also define the v(δ)-channel resolvability for the worst input as S_v(δ|W) := sup_X S_v(δ|X, W).

When W^n is the identity mapping, the problem of channel resolvability reduces to that of source resolvability, which has been investigated in the foregoing sections. In this sense, the problem of channel resolvability is a generalization of the problem of source resolvability.
Similarly to the problem of source resolvability, we may also use the divergence between the target output distribution P_{Y^n} and the approximated output distribution P_{Ỹ^n} as an approximation measure.

Definition 7 (mean δ-channel resolvability: divergence). Let δ ≥ 0 be fixed arbitrarily. A resolution rate R ≥ 0 is said to be mean δ-achievable for X (under the divergence) if there exists a general source X̃ = {X̃^n}_{n=1}^∞ satisfying

limsup_{n→∞} (1/n) H(X̃^n) ≤ R and limsup_{n→∞} D(Ỹ^n || Y^n) ≤ δ,

where Ỹ^n denotes the output via W^n due to the input X̃^n. The infimum of all mean δ-achievable rates for X, i.e.,

S̃^D_v(δ|X, W) := inf{R : R is mean δ-achievable for X}, (153)

is referred to as the mean δ-channel resolvability for X. We also define the mean δ-channel resolvability for the worst input as S̃^D_v(δ|W) := sup_X S̃^D_v(δ|X, W).

Definition 8 (VL δ-channel resolvability: divergence). Let δ ≥ 0 be fixed arbitrarily. A resolution rate R ≥ 0 is said to be VL δ-achievable or simply v_D(δ)-achievable for X (under the divergence) if there exists a VL uniform random number U^(L_n) and a deterministic mapping ϕ_n : U* → X^n satisfying

limsup_{n→∞} (1/n) E[L_n] ≤ R and limsup_{n→∞} D(Ỹ^n || Y^n) ≤ δ,

where E[·] denotes the expected value and Ỹ^n denotes the output via W^n due to the input X̃^n = ϕ_n(U^(L_n)). The infimum of all v_D(δ)-achievable rates for X, i.e., S^D_v(δ|X, W) := inf{R : R is v_D(δ)-achievable for X}, is called the VL δ-channel resolvability or simply the v_D(δ)-channel resolvability for X. We also define the v_D(δ)-channel resolvability for the worst input as S^D_v(δ|W) := sup_X S^D_v(δ|X, W).

Remark 12. Since the output of a deterministic mapping X̃^n = ϕ_n(U^(L_n)) forms a general source X̃, it holds that

S̃_v(δ|X, W) ≤ S_v(δ|X, W) and S̃^D_v(δ|X, W) ≤ S^D_v(δ|X, W)

for any general source X and general channel W. These relations lead to the analogous relations for the mean/VL δ-channel resolvability for the worst input:

S̃_v(δ|W) ≤ S_v(δ|W) and S̃^D_v(δ|W) ≤ S^D_v(δ|W). (159)

General Formulas
For a given general source X = {X^n}∞_{n=1} and a general channel W = {W^n}∞_{n=1}, let Y = {Y^n}∞_{n=1} be the channel output via W due to input X. We define

B_δ(X^n, W^n) := {P_{V^n} : d(P_{Y^n}, P_{Z^n}) ≤ δ},
B^D_δ(X^n, W^n) := {P_{V^n} : D(P_{Z^n}||P_{Y^n}) ≤ δ},

respectively, with Z^n defined as the output via W^n due to input V^n. The associated smoothed entropies

H_[δ],W(X) := lim sup_{n→∞} inf_{P_{V^n} ∈ B_δ(X^n, W^n)} (1/n) H(V^n),
H^D_[δ],W(X) := lim sup_{n→∞} inf_{P_{V^n} ∈ B^D_δ(X^n, W^n)} (1/n) H(V^n)

are nonincreasing functions of δ and play an important role in characterizing the mean/VL δ-channel resolvability. We now show the general formulas for the mean/VL δ-channel resolvability.
Theorem 5 (with variational distance). For any input process X and any general channel W,

S̄_v(δ|X, W) = S_v(δ|X, W) = lim_{γ↓0} H_[δ+γ],W(X) for any δ ∈ [0, 1).    (169)

In particular, S̄_v(δ|W) = S_v(δ|W) = sup_X lim_{γ↓0} H_[δ+γ],W(X).

Theorem 6 (with divergence). For any input process X and any general channel W,

S̄^D_v(δ|X, W) = S^D_v(δ|X, W) = lim_{γ↓0} H^D_[δ+γ],W(X) for any δ ≥ 0.    (171)

In particular, S̄^D_v(δ|W) = S^D_v(δ|W) = sup_X lim_{γ↓0} H^D_[δ+γ],W(X).

Remark 13. It can easily be verified that the variational distance satisfies the data processing inequality d(P_{Ỹ^n}, P_{Y^n}) ≤ d(P_{X̃^n}, P_{X^n}) and, therefore, we have B_δ(X^n) ⊆ B_δ(X^n, W^n). This relation and formulas (32) and (169) indicate that

S_v(δ|X, W) ≤ S_v(δ|X)

for any given channel W. Likewise, it is well known that the divergence satisfies the data processing inequality D(Ỹ^n||Y^n) ≤ D(X̃^n||X^n) [33], and formulas (101) and (171) lead to

S^D_v(δ|X, W) ≤ S^D_v(δ|X)

regardless of the channel W.
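The data processing inequality for the variational distance invoked in Remark 13 can be checked numerically. The sketch below (illustrative names, not from the paper) draws random input distributions and a random channel and verifies that passing both distributions through the channel never increases their variational distance:

```python
import random

def variational_distance(p, q):
    """d(P, Q) = (1/2) * sum_x |P(x) - Q(x)| on a common alphabet."""
    return sum(abs(a - b) for a, b in zip(p, q)) / 2

def push_through(p, W):
    """Output distribution P_Y(y) = sum_x P_X(x) W(y|x); W is row-stochastic."""
    return [sum(p[x] * W[x][y] for x in range(len(p))) for y in range(len(W[0]))]

def random_distribution(n, rng):
    v = [rng.random() for _ in range(n)]
    s = sum(v)
    return [x / s for x in v]

rng = random.Random(0)
for _ in range(1000):
    p, q = random_distribution(3, rng), random_distribution(3, rng)
    W = [random_distribution(3, rng) for _ in range(3)]  # random 3x3 channel
    # data processing: the channel can only shrink the variational distance
    assert variational_distance(push_through(p, W), push_through(q, W)) \
        <= variational_distance(p, q) + 1e-12
print("d(P_Y~, P_Y) <= d(P_X~, P_X) held on 1000 random instances")
```

This is exactly the mechanism behind the inclusion B_δ(X^n) ⊆ B_δ(X^n, W^n): any input distribution that is δ-close to P_{X^n} produces an output that is δ-close to P_{Y^n}.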
Remark 14. It is obvious that Theorems 5 and 6 reduce to Theorems 3 and 4, respectively, when the channel W is the identity mapping. Precisely, for the identity mapping W = I, the mean δ-resolvability and the v(δ)-channel resolvability for X coincide with S_v(δ|X), where S_v(δ|X) denotes the mean δ-resolvability S̄_v(δ|X, W) for the identity mapping W. The analogous relationship holds under the divergence.

We now prove the converse part of Theorem 5, i.e., the inequality

S̄_v(δ|X, W) ≥ lim_{γ↓0} H_[δ+γ],W(X).    (178)

Let R be mean δ-achievable for X under the variational distance. Then, there exists a general source X̃ = {X̃^n}∞_{n=1} satisfying (143) and lim sup_{n→∞} δ_n ≤ δ, where δ_n := d(P_{Y^n}, P_{Ỹ^n}). Fixing γ > 0 arbitrarily, we have δ_n ≤ δ + γ for all n ≥ n_0 with some n_0 > 0, and then P_{X̃^n} ∈ B_{δ+γ}(X^n, W^n) for all n ≥ n_0, since δ_n ≤ δ + γ. Thus, we obtain from (143)

R ≥ lim sup_{n→∞} (1/n) H(X̃^n) ≥ lim sup_{n→∞} inf_{P_{V^n} ∈ B_{δ+γ}(X^n, W^n)} (1/n) H(V^n) = H_[δ+γ],W(X).

Since γ > 0 is an arbitrary constant, this implies (178). The converse part of Theorem 6 can be proven in an analogous way with due modifications.

(2) Direct Part: Because of the general relationship (159), to prove the direct part (achievability) of Theorem 5, it suffices to show that, for any fixed γ > 0, the resolution rate R := H_[δ+γ],W(X) + γ is v(δ)-achievable for X under the variational distance.
Let P_{V^n} ∈ B_{δ+γ}(X^n, W^n) be a source satisfying

(1/n) H(V^n) ≤ inf_{P_{Ṽ^n} ∈ B_{δ+γ}(X^n, W^n)} (1/n) H(Ṽ^n) + γ.

Then, by the same argument used to derive (80) and (82) in the proof of the direct part of Theorem 3, we can construct a VL uniform random number U^{(L_n)} and a deterministic mapping φ_n : U* → X^n satisfying

lim sup_{n→∞} (1/n) E[L_n] log K ≤ R and d(P_{X̃^n}, P_{V^n}) → 0 (n → ∞),

where X̃^n = φ_n(U^{(L_n)}). Let Z^n denote the output random variable via W^n due to input V^n. Then, letting Ỹ^n be the output via channel W^n due to input X̃^n, we can evaluate

d(P_{Ỹ^n}, P_{Y^n}) ≤ d(P_{Ỹ^n}, P_{Z^n}) + d(P_{Z^n}, P_{Y^n}) ≤ d(P_{X̃^n}, P_{V^n}) + δ + γ,

where we have used the data processing inequality and the fact P_{V^n} ∈ B_{δ+γ}(X^n, W^n) to derive the last inequality. Since γ > 0 is an arbitrary constant, we can conclude that R is v(δ)-achievable for X.
The direct part of Theorem 6 can be proven in the same way as Theorem 4 with due modifications. Fixing P_{V^n} ∈ B^D_{δ+γ}(X^n, W^n) and using the encoding scheme developed in the proof of Theorem 4, the evaluation of the average length rate is exactly the same, and we can obtain (138). A key step is to evaluate the divergence D(Ỹ^n||Y^n), which can be decomposed into two terms. The first term on the r.h.s. can be bounded as in (140), where the left inequality is due to the data processing inequality. Similarly to the derivation of (141), the second term can be bounded by using (134). The rest of the steps are the same as in the proof of Theorem 4.
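The divergence counterpart of the data processing inequality, D(Ỹ^n||Y^n) ≤ D(X̃^n||X^n), used in the bound above, admits the same kind of numerical sanity check (illustrative names, not from the paper):

```python
import math
import random

def divergence(p, q):
    """Unnormalized divergence D(P||Q) = sum_x P(x) log(P(x)/Q(x)), in nats."""
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

def push_through(p, W):
    """Output distribution P_Y(y) = sum_x P_X(x) W(y|x)."""
    return [sum(p[x] * W[x][y] for x in range(len(p))) for y in range(len(W[0]))]

def random_distribution(n, rng):
    v = [rng.random() + 1e-9 for _ in range(n)]  # keep strictly positive
    s = sum(v)
    return [x / s for x in v]

rng = random.Random(1)
for _ in range(1000):
    p, q = random_distribution(4, rng), random_distribution(4, rng)
    W = [random_distribution(4, rng) for _ in range(4)]  # random 4x4 channel
    # data processing: the channel can only shrink the divergence
    assert divergence(push_through(p, W), push_through(q, W)) <= divergence(p, q) + 1e-12
print("D(P_Y~ || P_Y) <= D(P_X~ || P_X) held on 1000 random instances")
```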

Second-Order VL Channel Resolvability
So far, we have analyzed the first-order VL resolvabilities and established various first-order resolvability theorems. One of the next important steps is the second-order analysis, and so, in this section, we generalize the VL resolvabilities to the second-order setting.

Definitions
We now turn to considering second-order resolution rates [26,27,29]. First, we consider the VL resolvability based on the variational distance.

Definition 9 (VL (δ, R)-channel resolvability: variational distance). A second-order resolution rate L ∈ (−∞, +∞) is said to be VL (δ, R)-achievable (under the variational distance) for X with δ ∈ [0, 1) if there exist a VL uniform random number U^{(L_n)} and a deterministic mapping φ_n : U* → X^n satisfying

lim sup_{n→∞} (1/√n)(E[L_n] log K − nR) ≤ L and lim sup_{n→∞} d(P_{Y^n}, P_{Ỹ^n}) ≤ δ,

where Ỹ^n denotes the output via W^n due to input X̃^n = φ_n(U^{(L_n)}). The infimum of all VL (δ, R)-achievable rates for X is denoted by T_v(δ, R|X, W). When W is the identity mapping I, T_v(δ, R|X, W) is simply denoted by T_v(δ, R|X) (source resolvability).
Next, we may consider the VL resolvability with the divergence instead of the variational distance.
Definition 10 (VL (δ, R)-channel resolvability: divergence). A second-order resolution rate L ∈ (−∞, +∞) is said to be VL (δ, R)-achievable for X (with the divergence), where δ ≥ 0, if there exist a VL uniform random number U^{(L_n)} and a deterministic mapping φ_n : U* → X^n satisfying

lim sup_{n→∞} (1/√n)(E[L_n] log K − nR) ≤ L and lim sup_{n→∞} D(P_{Ỹ^n}||P_{Y^n}) ≤ δ,

where Ỹ^n denotes the output via W^n due to input X̃^n = φ_n(U^{(L_n)}). The infimum of all VL (δ, R)-achievable rates for X is denoted by T^D_v(δ, R|X, W). When W is the identity mapping I, T^D_v(δ, R|X, W) is simply denoted by T^D_v(δ, R|X) (source resolvability).

Remark 15. It is easily verified that T_v(δ, R|X, W) = +∞ for R < S_v(δ|X, W) and T_v(δ, R|X, W) = −∞ for R > S_v(δ|X, W). Hence, only the case R = S_v(δ|X, W) is of interest to us. The same remark also applies to T^D_v(δ, R|X, W).

General Formulas
We establish general formulas for the second-order resolvability. The proofs of the following theorems are given after Remark 17.
Theorem 7 (with variational distance). For any input process X and general channel W, In particular, in the case where W is the identity mapping I,

Theorem 8 (with divergence). For any input process X and general channel W, In particular, in the case where W is the identity mapping I,

Remark 16. As discussed in Section 5, we may also consider using a general source X̃ as an input to channel W, and we can define L to be a mean (δ, R)-achievable rate for X by replacing (191) and (194) with the corresponding conditions for X̃. Let T̄_v(δ, R|X, W) and T̄^D_v(δ, R|X, W) denote the infima of all mean (δ, R)-achievable rates for X under the variational distance and the divergence, respectively. Then, it is not difficult to verify that

T̄_v(δ, R|X, W) = T_v(δ, R|X, W) and T̄^D_v(δ, R|X, W) = T^D_v(δ, R|X, W).

Thus, there is no loss in the (δ, R)-achievable resolution rate even if the channel input X̃ is restricted to be generated from the VL uniform random number U^{(L_n)}.

Remark 17.
As in the first-order case, when the channel W is the identity mapping I, T_v(δ, R|X) coincides with the minimum second-order length rate of VL source codes. More precisely, we denote by R*_v(δ, R|X) the minimum second-order length rate of a sequence of VL source codes with first-order average length rate R and an average error probability asymptotically not exceeding δ. Yagi and Nomura [31] have characterized this rate. Modifying the proof of Proposition 1 (cf. Appendix C), we can show that the r.h.s. of (199) coincides with that of (205), and, therefore, it generally holds that

T_v(δ, R|X) = R*_v(δ, R|X).    (206)

As a special case, suppose that X is a stationary and memoryless source X with a finite third absolute moment of log(1/P_X(X)). In this case, Kostina et al. [25] have recently given a single-letter characterization for R*_v(δ, R|X), where V(X) denotes the variance of log(1/P_X(X)) (the varentropy) and Q^{-1} is the inverse of the complementary cumulative distribution function of the standard Gaussian distribution. In view of the general relation (206), we can also obtain a single-letter characterization for T_v(δ, R|X). It has not yet been made clear whether we can also have a single-letter formula for T_v(δ, R|X, W) when the channel W is memoryless but not necessarily the identity mapping.

We here define β_n = (nc_n + √n γ). Arguing similarly to the proofs of Theorems 3 and 5, we can show that there exist φ_n : U* → X^n and U^{(L_n)} satisfying the required second-order length and approximation conditions. Since γ > 0 is arbitrary, L is VL (δ, R)-achievable for X.
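The two single-letter ingredients appearing in Remark 17, the varentropy V(X) and the function Q^{-1}, can be computed directly with the standard library. The sketch below (illustrative names, not from the paper) evaluates both for a small pmf:

```python
import math
from statistics import NormalDist

def entropy_and_varentropy(pmf):
    """H(X) = E[log(1/P_X(X))] and varentropy V(X) = Var[log(1/P_X(X))], in nats."""
    h = sum(p * math.log(1 / p) for p in pmf if p > 0)
    v = sum(p * (math.log(1 / p) - h) ** 2 for p in pmf if p > 0)
    return h, v

def q_inv(delta):
    """Q^{-1}: inverse of the complementary CDF of the standard Gaussian,
    i.e. Q^{-1}(delta) = Phi^{-1}(1 - delta)."""
    return NormalDist().inv_cdf(1 - delta)

pmf = [0.5, 0.25, 0.125, 0.125]
h, v = entropy_and_varentropy(pmf)
print(h, v, q_inv(0.05))
```

For a dyadic pmf such as this one, H and V are rational multiples of log 2 and (log 2)^2, which makes the computation easy to verify by hand.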

Conclusions
We have investigated the problem of VL source/channel resolvability, in which a given target probability distribution is approximated by transforming VL uniform random numbers. Table 1 summarizes various first-order resolvabilities and their characterizations in terms of information quantities. In this table, the theorem numbers that contain the corresponding characterization are also indicated.
In this paper, we have first analyzed the fundamental limits on the VL δ-source resolvability with the variational distance in Theorem 3. The VL δ-source resolvability is essentially characterized in terms of smooth Shannon entropies. In the proof of the direct part, we have developed a simple method for information spectrum slicing, in which sliced information densities quantized to the same integer are approximated by an FL uniform random number of the same length. Next, we have extended the analysis to the δ-source resolvability under the unnormalized divergence in Theorem 4. The smoothed entropy with the divergence again plays an important role in characterizing the δ-source resolvability. Then, we have addressed the problem of δ-channel resolvability. It has been revealed in Theorems 5 and 6 that using an arbitrary general source as a coin distribution (the mean-resolvability problem) cannot go beyond the fundamental limits of the VL resolvability, in which only VL uniform random numbers are allowed as a coin distribution. As in the case of source resolvability, we have discussed the δ-channel resolvability under the variational distance and the unnormalized divergence. The second-order channel resolvability has been characterized in Theorems 7 and 8, as in the first-order case. We notice here that a counterpart of the VL uniform random number is the problem of VL source coding, for which the general treatment, focused on overflow/underflow probabilities, is found in [38]. Indeed, when the variational distance is used as an approximation measure, it turns out that the δ-source resolvability is equal to the minimum achievable rate of VL source codes with an error probability of at most δ. This parallels the relationship between FL source resolvability and the minimum achievable rate of FL source codes [6,7]. It is of interest to investigate whether there is a coding problem to which the δ-channel resolvability is closely related.
When δ = 0, asymptotically exact approximation is required. In the case where the channel W is the identity mapping I, it turned out that the source resolvability under the variational distance and the unnormalized divergence coincides and is given by lim_{γ↓0} H_[γ](X), where X is the general target source. This result is analogous to the dual problem of VL intrinsic randomness [5,36], in which the maximum achievable rates of VL uniform random numbers extracted from a given source X are the same under the two kinds of approximation measures. It should be emphasized that, in the case of VL intrinsic randomness, the use of the normalized divergence as an approximation measure results in the same general formula as with the variational distance and the unnormalized divergence, which does not necessarily hold in the case of mean/VL resolvability (cf. Remark 11). It is also noteworthy that, whereas only the case δ = 0 has been completely solved for the VL intrinsic randomness, we have also dealt with the case δ > 0 for the VL source/channel resolvability.
When X is a stationary and memoryless source, or more generally a source with a one-point spectrum (cf. Corollary 1), the established formulas reduce to single-letter characterizations of the first- and second-order source resolvability under the variational distance. In the case where the divergence is the approximation measure and/or the channel W is a non-identity mapping, however, it has not yet been made clear whether we can derive a single-letter characterization for the δ-source/channel resolvability. This question remains to be studied.
As noted in Remark 10, the order of the arguments in the divergence D(X̃^n||X^n) is important, and it seems difficult to extend the analyses in this paper to the case where the reversed divergence D(X^n||X̃^n) is used as an approximation measure. In the context of intrinsic randomness, the reversed divergence has also been discussed [23]. Investigating the problem of source/channel resolvability under such a divergence is an interesting research topic.

Figure 1. Illustration of the problem of FL resolvability.

S^D_v(δ|X) denotes the mean δ-resolvability S̄^D_v(δ|X, W) for the identity mapping W = I. Thus, it turns out that Theorems 5 and 6 are indeed generalizations of Theorems 3 and 4.

Proof of Theorems 5 and 6. (1) Converse Part: Because of the general relationship (159), to prove the converse part of Theorem 5, it suffices to show that S̄_v(δ|X, W) ≥ lim_{γ↓0} H_[δ+γ],W(X).

Table 1. Summary of First-Order Resolvability and Information Quantities.