Article

Continuity of Channel Parameters and Operations under Various DMC Topologies †

École Polytechnique Fédérale de Lausanne, Route Cantonale, 1015 Lausanne, Switzerland
This paper is an extended version of our paper that was published in the International Symposium on Information Theory 2017 (ISIT 2017).
Entropy 2018, 20(5), 330; https://doi.org/10.3390/e20050330
Submission received: 27 March 2018 / Revised: 17 April 2018 / Accepted: 27 April 2018 / Published: 30 April 2018
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

We study the continuity of many channel parameters and operations under various topologies on the space of equivalent discrete memoryless channels (DMC). We show that mutual information, channel capacity, Bhattacharyya parameter, probability of error of a fixed code and optimal probability of error for a given code rate and block length are continuous under various DMC topologies. We also show that channel operations such as sums, products, interpolations and Arıkan-style transformations are continuous.

1. Introduction

This paper is an extended version of our paper that was published in the International Symposium on Information Theory 2017 (ISIT 2017) [1].
Let X and Y be two finite sets, and let W be a fixed channel with input alphabet X and output alphabet Y . It is well known that the input-output mutual information is continuous on the simplex of input probability distributions. Many other parameters that depend on the input probability distribution were shown to be continuous on the simplex in [2].
Polyanskiy studied in [3] the continuity of the Neyman–Pearson function for a binary hypothesis test that arises in the analysis of channel codes. He showed that for arbitrary input and output alphabets, this function is continuous in the input distribution in the total variation topology. He also showed that under some regularity assumptions, this function is continuous in the weak-∗ topology.
If X and Y are finite sets, the space of channels with input alphabet X and output alphabet Y can naturally be endowed with the topology of the Euclidean metric, or any other equivalent metric. It is well known that the channel capacity is continuous in this topology. If X and Y are arbitrary, one can construct a topology on the space of channels using the weak-∗ topology on the output alphabet. It was shown in [4] that the capacity is lower semi-continuous in this topology.
The continuity results that are mentioned in the previous paragraph do not take into account “equivalence” between channels. Two channels are said to be equivalent if they are degraded from each other. This means that each channel can be simulated from the other by local operations at the receiver. Two channels that are degraded from each other are completely equivalent from an operational point of view: both channels have exactly the same probability of error under optimal decoding for any fixed code. Moreover, any sub-optimal decoder for one channel can be transformed to a sub-optimal decoder for the other channel with the same probability of error and essentially the same computational complexity. This is why it makes sense, from an information-theoretic point of view, to identify equivalent channels and consider them as one point in the space of “equivalent channels”.
In [5], equivalent binary-input channels were identified with their L-density (i.e., the density of log-likelihood ratios). The space of equivalent binary-input channels was endowed with the topology of convergence in distribution of L-densities. Since the symmetric capacity (the symmetric capacity is the input-output mutual information with uniformly-distributed input) and the Bhattacharyya parameter can be written as an integral of a continuous function with respect to the L-density [5], it immediately follows that these parameters are continuous in the L-density topology.
In [6], many topologies were constructed for the space of equivalent channels sharing a fixed input alphabet. In this paper, we study the continuity of many channel parameters and operations under these topologies. The continuity of channel parameters and operations might be helpful in the following two problems:
  • If a parameter (such as the optimal probability of error of a given code) is difficult to compute for a channel W, one can approximate it by computing the same parameter for a sequence of channels (W_n)_{n≥0} that converges to W in some topology where the parameter is continuous.
  • The study of the robustness of a communication system against the imperfect specification of the channel.
In Section 2, we introduce the preliminaries for this paper. In Section 3, we recall the main results of [6] that we need here. In Section 4, we introduce the channel parameters and operations that we investigate in this paper. In Section 5, we study the continuity of these parameters and operations in the quotient topology of the space of equivalent channels with fixed input and output alphabets. The continuity in the strong topology of the space of equivalent channels sharing the same input alphabet is studied in Section 6. Finally, the continuity in the noisiness/weak-∗ and the total variation topologies is studied in Section 7.

2. Preliminaries

We assume that the reader is familiar with the basic concepts of General Topology. The main concepts and theorems that we need can be found in the Preliminaries Section of [6].

2.1. Set-Theoretic Notations

For every integer n ≥ 1, we denote the set {1, …, n} as [n].
The set of mappings from a set A to a set B is denoted as B^A.
Let A be a subset of B. The indicator mapping 𝟙_{A,B} : B → {0, 1} of A in B is defined as:
𝟙_{A,B}(x) = 𝟙_{x∈A} = 1 if x ∈ A, and 0 otherwise.
If the superset B is clear from the context, we simply write 𝟙 A to denote the indicator mapping of A in B.
The power set of B is the set of subsets of B. Since every subset of B can be identified with its indicator mapping, we denote the power set of B as {0, 1}^B = 2^B.
Let (A_i)_{i∈I} be a collection of arbitrary sets indexed by I. The disjoint union of (A_i)_{i∈I} is defined as ⨆_{i∈I} A_i = ⋃_{i∈I} (A_i × {i}). For every i ∈ I, the i-th canonical injection is the mapping ϕ_i : A_i → ⨆_{j∈I} A_j defined as ϕ_i(x_i) = (x_i, i). If no confusion can arise, we identify A_i with A_i × {i} through the canonical injection. Therefore, we can see A_i as a subset of ⨆_{j∈I} A_j for every i ∈ I.
Let R be an equivalence relation on a set T. For every x ∈ T, the set x̂ = {y ∈ T : x R y} is the R-equivalence class of x. The collection of R-equivalence classes, which we denote as T/R, forms a partition of T, and it is called the quotient space of T by R. The mapping Proj_R : T → T/R defined as Proj_R(x) = x̂ for every x ∈ T is the projection mapping onto T/R.

2.2. Topological Notations

A topological space (T, U) is said to be contractible to x₀ ∈ T if there exists a continuous mapping H : T × [0, 1] → T such that H(x, 0) = x and H(x, 1) = x₀ for every x ∈ T, where [0, 1] is endowed with the Euclidean topology. (T, U) is strongly contractible to x₀ ∈ T if we also have H(x₀, t) = x₀ for every t ∈ [0, 1].
Intuitively, T is contractible if it can be “continuously shrunk” to a single point x₀. If this “continuous shrinking” can be done without moving x₀, T is strongly contractible.
Note that contractibility is a very strong notion of connectedness: every contractible space is path-connected and simply connected. Moreover, all its homotopy, homology and cohomology groups of order ≥ 1 are zero.
Let {(T_i, U_i)}_{i∈I} be a collection of topological spaces indexed by I. The product topology on ∏_{i∈I} T_i is denoted by ⊗_{i∈I} U_i. The disjoint union topology on ⨆_{i∈I} T_i is denoted by ⊕_{i∈I} U_i.
The following lemma is useful to show the continuity of many functions.
Lemma 1.
Let (S, V) and (T, U) be two compact topological spaces, and let f : S × T → ℝ be a continuous function on S × T. For every s ∈ S and every ϵ > 0, there exists a neighborhood V_s of s such that for every s′ ∈ V_s, we have:
sup_{t∈T} |f(s′, t) − f(s, t)| ≤ ϵ.
Proof. 
See Appendix A. ☐

2.3. Quotient Topology

Let (T, U) be a topological space, and let R be an equivalence relation on T. The quotient topology on T/R is the finest topology that makes the projection mapping Proj_R continuous. It is given by:
U/R = {Û ⊆ T/R : Proj_R^{-1}(Û) ∈ U}.
Lemma 2.
Let f : T → S be a continuous mapping from (T, U) to (S, V). If f(x) = f(x′) for every x, x′ ∈ T satisfying x R x′, then we can define a transcendent mapping f′ : T/R → S such that f′(x̂) = f(x) for any x ∈ x̂. f′ is well defined on T/R. Moreover, f′ is a continuous mapping from (T/R, U/R) to (S, V).
Let (T, U) and (S, V) be two topological spaces, and let R be an equivalence relation on T. Consider the equivalence relation R′ on T × S defined as (x₁, y₁) R′ (x₂, y₂) if and only if x₁ R x₂ and y₁ = y₂. A natural question to ask is whether the canonical bijection between ((T/R) × S, (U/R) ⊗ V) and ((T × S)/R′, (U ⊗ V)/R′) is a homeomorphism. It turns out that this is not the case in general. The following theorem, which is widely used in Algebraic Topology, provides a sufficient condition:
Theorem 1.
[7] If (S, V) is locally compact and Hausdorff, then the canonical bijection between ((T/R) × S, (U/R) ⊗ V) and ((T × S)/R′, (U ⊗ V)/R′) is a homeomorphism.
Corollary 1.
Let (T, U) and (S, V) be two topological spaces, and let R_T and R_S be two equivalence relations on T and S, respectively. Define the equivalence relation R on T × S as (x₁, y₁) R (x₂, y₂) if and only if x₁ R_T x₂ and y₁ R_S y₂. If (S, V) and (T/R_T, U/R_T) are locally compact and Hausdorff, then the canonical bijection between ((T/R_T) × (S/R_S), (U/R_T) ⊗ (V/R_S)) and ((T × S)/R, (U ⊗ V)/R) is a homeomorphism.
Proof. 
We just need to apply Theorem 1 twice. Define the equivalence relation R_T′ on T × S as follows: (x₁, y₁) R_T′ (x₂, y₂) if and only if x₁ R_T x₂ and y₁ = y₂. Since (S, V) is locally compact and Hausdorff, Theorem 1 implies that the canonical bijection from ((T/R_T) × S, (U/R_T) ⊗ V) to ((T × S)/R_T′, (U ⊗ V)/R_T′) is a homeomorphism. Let us identify these two spaces through the canonical bijection.
Now, define the equivalence relation R_S′ on (T/R_T) × S as follows: (x̂₁, y₁) R_S′ (x̂₂, y₂) if and only if x̂₁ = x̂₂ and y₁ R_S y₂. Since (T/R_T, U/R_T) is locally compact and Hausdorff, Theorem 1 implies that the canonical bijection from ((T/R_T) × (S/R_S), (U/R_T) ⊗ (V/R_S)) to (((T/R_T) × S)/R_S′, ((U/R_T) ⊗ V)/R_S′) is a homeomorphism.
Since we identified ((T/R_T) × S, (U/R_T) ⊗ V) and ((T × S)/R_T′, (U ⊗ V)/R_T′) through the canonical bijection (which is a homeomorphism), R_S′ can be seen as an equivalence relation on (T × S)/R_T′. It is easy to see that the canonical bijection from (((T × S)/R_T′)/R_S′, ((U ⊗ V)/R_T′)/R_S′) to ((T × S)/R, (U ⊗ V)/R) is a homeomorphism. We conclude that the canonical bijection from ((T/R_T) × (S/R_S), (U/R_T) ⊗ (V/R_S)) to ((T × S)/R, (U ⊗ V)/R) is a homeomorphism. ☐

2.4. Measure-Theoretic Notations

If ( M , Σ ) is a measurable space, we denote the set of probability measures on ( M , Σ ) as P ( M , Σ ) . If the σ -algebra Σ is known from the context, we simply write P ( M ) to denote the set of probability measures.
If P ∈ P(M, Σ) and {x} is a measurable singleton, we simply write P(x) to denote P({x}).
For every P₁, P₂ ∈ P(M, Σ), the total variation distance between P₁ and P₂ is defined as:
‖P₁ − P₂‖_TV = sup_{A∈Σ} |P₁(A) − P₂(A)|.
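For finite spaces, the supremum in this definition is achieved and equals half the L1 distance between the probability vectors. The following Python sketch (illustrative only; `tv_distance` and `tv_by_sup` are hypothetical helper names, not from the paper) checks the two formulas against each other on a small example:

```python
import itertools

def tv_distance(p1, p2):
    """Half the L1 distance between two distributions on a finite set."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p1, p2))

def tv_by_sup(p1, p2):
    """Direct definition: sup over all events A of |P1(A) - P2(A)|."""
    n = len(p1)
    return max(abs(sum(p1[i] for i in A) - sum(p2[i] for i in A))
               for r in range(n + 1)
               for A in itertools.combinations(range(n), r))

p1 = [0.5, 0.3, 0.2]
p2 = [0.2, 0.4, 0.4]
# The two formulas agree on finite spaces.
assert abs(tv_distance(p1, p2) - tv_by_sup(p1, p2)) < 1e-12
```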
  • The push-forward probability measure:
    Let P be a probability measure on (M, Σ), and let f : M → M′ be a measurable mapping from (M, Σ) to another measurable space (M′, Σ′). The push-forward probability measure of P by f is the probability measure f#P on (M′, Σ′) defined as (f#P)(A′) = P(f^{-1}(A′)) for every A′ ∈ Σ′.
    A measurable mapping g : M′ → ℝ is integrable with respect to f#P if and only if g∘f is integrable with respect to P. Moreover,
    ∫_{M′} g · d(f#P) = ∫_M (g∘f) · dP.
    The mapping f# from P(M, Σ) to P(M′, Σ′) is continuous if these spaces are endowed with the total variation topology:
    ‖f#P − f#P′‖_TV ≤ ‖P − P′‖_TV,   (a)
    where (a) follows from Property 1 of [8].
  • Probability measures on finite sets:
    We always endow finite sets with their finest σ-algebra, i.e., the power set. In this case, every probability measure is completely determined by its values on singletons, i.e., if P is a probability measure on a finite set X, then for every A ⊆ X, we have:
    P(A) = ∑_{x∈A} P(x).
    If X is a finite set, we denote the set of probability distributions on X as Δ_X. Note that Δ_X is an (|X| − 1)-dimensional simplex in ℝ^X. We always endow Δ_X with the total variation distance and its induced topology. For every p₁, p₂ ∈ Δ_X, we have:
    ‖p₁ − p₂‖_TV = ½ ∑_{x∈X} |p₁(x) − p₂(x)| = ½ ‖p₁ − p₂‖₁.
  • Products of probability measures:
    We denote the product of two measurable spaces (M₁, Σ₁) and (M₂, Σ₂) as (M₁ × M₂, Σ₁ ⊗ Σ₂). If P₁ ∈ P(M₁, Σ₁) and P₂ ∈ P(M₂, Σ₂), we denote the product of P₁ and P₂ as P₁ × P₂.
    If P(M₁, Σ₁), P(M₂, Σ₂) and P(M₁ × M₂, Σ₁ ⊗ Σ₂) are endowed with the total variation topology, the mapping (P₁, P₂) → P₁ × P₂ is a continuous mapping (see Appendix B).
  • Borel sets and the support of a probability measure:
    Let ( T , U ) be a Hausdorff topological space. The Borel σ -algebra of ( T , U ) is the σ -algebra generated by U . We denote the Borel σ -algebra of ( T , U ) as B ( T , U ) . If the topology U is known from the context, we simply write B ( T ) to denote the Borel σ -algebra. The sets in B ( T ) are called the Borel sets of T.
The support of a measure P ∈ P(T, B(T)) is the set of all points x ∈ T for which every neighborhood has strictly positive measure:
supp(P) = {x ∈ T : P(O) > 0 for every neighborhood O of x}.
If P is a probability measure on a Polish space, then P(T \ supp(P)) = 0.

2.5. Random Mappings

Let M and M′ be two arbitrary sets, and let Σ′ be a σ-algebra on M′. A random mapping from M to (M′, Σ′) is a mapping R from M to P(M′, Σ′). For every x ∈ M, R(x) can be interpreted as the probability distribution of the random output given that the input is x.
Let Σ be a σ-algebra on M. We say that R is a measurable random mapping from (M, Σ) to (M′, Σ′) if the mapping R_B : M → ℝ defined as R_B(x) = (R(x))(B) is measurable for every B ∈ Σ′.
Note that this definition of measurability is consistent with the measurability of ordinary mappings: let f be a mapping from M to M′, and let D_f : M → P(M′, Σ′) be the random mapping defined as D_f(x) = δ_{f(x)} for every x ∈ M, where δ_{f(x)} ∈ P(M′, Σ′) is a Dirac measure centered at f(x). We have:
D_f is measurable ⇔ (D_f)_B is measurable for every B ∈ Σ′
⇔ ((D_f)_B)^{-1}(B′) ∈ Σ for every B′ ∈ B(ℝ) and every B ∈ Σ′
⇔ ((D_f)_B)^{-1}({1}) ∈ Σ for every B ∈ Σ′   (a)
⇔ f^{-1}(B) ∈ Σ for every B ∈ Σ′   (b)
⇔ f is measurable,
where (a) and (b) follow from the fact that ((D_f)_B)(x) is either one or zero, depending on whether f(x) ∈ B or not.
Let P be a probability measure on (M, Σ), and let R be a measurable random mapping from (M, Σ) to (M′, Σ′). The push-forward probability measure of P by R is the probability measure R#P on (M′, Σ′) defined as:
(R#P)(B) = ∫_M R_B · dP, ∀B ∈ Σ′.
Note that this definition is consistent with the push-forward of ordinary mappings: if f and D_f are as above, then for every B ∈ Σ′, we have:
((D_f)#P)(B) = ∫_M (D_f)_B · dP = ∫_M (𝟙_B ∘ f) · dP = ∫_{M′} 𝟙_B · d(f#P) = (f#P)(B).
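On finite spaces, this push-forward can be made concrete: a measurable random mapping is a row-stochastic matrix, and R#P is a vector-matrix product. The Python sketch below (an illustration under that finite-space assumption, with variable names of our choosing) also checks the consistency with ordinary mappings noted above:

```python
import numpy as np

# On finite spaces, a measurable random mapping R from M to M' is a
# row-stochastic matrix: R[x, y] = (R(x))({y}).  The push-forward R#P is
# then the row vector P times this matrix.
R = np.array([[0.7, 0.3, 0.0],
              [0.1, 0.6, 0.3]])   # random mapping from {0, 1} to {0, 1, 2}
P = np.array([0.4, 0.6])          # probability measure on {0, 1}
push = P @ R                      # (R#P)({y}) = sum_x (R(x))({y}) P(x)
assert np.isclose(push.sum(), 1.0)

# Consistency with ordinary mappings: a function f corresponds to a 0/1
# matrix D_f whose rows are Dirac measures, and R# reduces to the usual f#.
f = [2, 0]                        # f(0) = 2, f(1) = 0
Df = np.zeros((2, 3))
Df[range(2), f] = 1.0
assert np.allclose(P @ Df, [0.6, 0.0, 0.4])
```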
Proposition 1.
Let R be a measurable random mapping from (M, Σ) to (M′, Σ′). If g : M′ → ℝ⁺ ∪ {+∞} is a Σ′-measurable mapping, then the mapping x → ∫_{M′} g(y) · d(R(x))(y) is a measurable mapping from (M, Σ) to ℝ⁺ ∪ {+∞}. Moreover, for every P ∈ P(M, Σ), we have:
∫_{M′} g · d(R#P) = ∫_M ( ∫_{M′} g(y) · d(R(x))(y) ) dP(x).
Proof. 
See Appendix C. ☐
Corollary 2.
If g : M′ → ℝ is bounded and Σ′-measurable, then the mapping:
x → ∫_{M′} g(y) · d(R(x))(y)
is bounded and Σ-measurable. Moreover, for every P ∈ P(M, Σ), we have:
∫_{M′} g · d(R#P) = ∫_M ( ∫_{M′} g(y) · d(R(x))(y) ) dP(x).
Proof. 
Write g = g⁺ − g⁻ (where g⁺ = max{g, 0} and g⁻ = max{−g, 0}), and use the fact that every bounded measurable function is integrable with respect to any probability measure. ☐
Lemma 3.
For every measurable random mapping R from (M, Σ) to (M′, Σ′), the push-forward mapping R# is continuous from P(M, Σ) to P(M′, Σ′) under the total variation topology.
Proof. 
See Appendix D. ☐
Lemma 4.
Let U be a Polish topology on M (this assumption can be dropped; we assume that U is Polish merely to avoid working with Moore–Smith nets), and let U′ be an arbitrary topology on M′. Let R be a measurable random mapping from (M, B(M)) to (M′, B(M′)). Moreover, assume that R is a continuous mapping from (M, U) to P(M′, B(M′)) when the latter space is endowed with the weak-∗ topology. Under these assumptions, the push-forward mapping R# is continuous from P(M, B(M)) to P(M′, B(M′)) under the weak-∗ topology.
Proof. 
See Appendix D. ☐

2.6. Meta-Probability Measures

Let X be a finite set. A meta-probability measure on X is a probability measure on the Borel sets of Δ X . It is called a meta-probability measure because it is a probability measure on the space of probability distributions on X .
We denote the set of meta-probability measures on X as MP ( X ) . Clearly, MP ( X ) = P ( Δ X ) .
A meta-probability measure MP on X is said to be balanced if it satisfies:
∫_{Δ_X} p · dMP(p) = π_X,
where π X is the uniform probability distribution on X .
We denote the set of all balanced meta-probability measures on X as MP b ( X ) . The set of all balanced and finitely-supported meta-probability measures on X is denoted as MP b f ( X ) .
The following lemma is useful to show the continuity of functions defined on MP ( X ) .
Lemma 5.
Let (S, V) be a compact topological space, and let f : S × Δ_X → ℝ be a continuous function on S × Δ_X. The mapping F : S × MP(X) → ℝ defined as:
F(s, MP) = ∫_{Δ_X} f(s, p) · dMP(p)
is continuous, where MP ( X ) is endowed with the weak-∗ topology.
Proof. 
See Appendix E. ☐
Let f be a mapping from a finite set X to another finite set X′. f induces a push-forward mapping f# taking probability distributions in Δ_X to probability distributions in Δ_{X′}. Since Δ_X and Δ_{X′} are endowed with the total variation distance, f# is continuous. f# in turn induces another push-forward mapping taking meta-probability measures in MP(X) to meta-probability measures in MP(X′). We denote this mapping as f##, and we call it the meta-push-forward mapping induced by f. Since f# is a continuous mapping from Δ_X to Δ_{X′}, f## is a continuous mapping from MP(X) to MP(X′) under both the weak-∗ and the total variation topologies.
Let X₁ and X₂ be two finite sets. Let Mul : Δ_{X₁} × Δ_{X₂} → Δ_{X₁×X₂} be defined as Mul(p₁, p₂) = p₁ × p₂. For every MP₁ ∈ MP(X₁) and MP₂ ∈ MP(X₂), we define the tensor product of MP₁ and MP₂ as MP₁ ⊗ MP₂ = Mul#(MP₁ × MP₂) ∈ MP(X₁ × X₂).
Note that since Δ_{X₁}, Δ_{X₂} and Δ_{X₁×X₂} are endowed with the total variation topology, Mul(p₁, p₂) = p₁ × p₂ is a continuous mapping from Δ_{X₁} × Δ_{X₂} to Δ_{X₁×X₂}. Therefore, Mul# is a continuous mapping from P(Δ_{X₁} × Δ_{X₂}) to P(Δ_{X₁×X₂}) = MP(X₁ × X₂) under both the weak-∗ and the total variation topologies. On the other hand, Appendix B and Appendix F imply that the mapping (MP₁, MP₂) → MP₁ × MP₂ from MP(X₁) × MP(X₂) to P(Δ_{X₁} × Δ_{X₂}) is continuous under both the weak-∗ and the total variation topologies. We conclude that the tensor product is continuous under both of these topologies.

3. The Space of Equivalent Channels

In this section, we summarize the main results of [6].

3.1. Space of Channels from X to Y

A discrete memoryless channel W is a 3-tuple W = (X, Y, p_W), where X is a finite set called the input alphabet of W, Y is a finite set called the output alphabet of W, and p_W : X × Y → [0, 1] is a function satisfying ∑_{y∈Y} p_W(x, y) = 1 for every x ∈ X.
For every ( x , y ) X × Y , we denote p W ( x , y ) as W ( y | x ) , which we interpret as the conditional probability of receiving y at the output, given that x is the input.
Let DMC X , Y be the set of all channels having X as the input alphabet and Y as the output alphabet.
For every W, W′ ∈ DMC_{X,Y}, define the distance between W and W′ as follows:
d_{X,Y}(W, W′) = ½ max_{x∈X} ∑_{y∈Y} |W(y|x) − W′(y|x)|.
We always endow DMC X , Y with the metric distance d X , Y . This metric makes DMC X , Y a compact path-connected metric space. The metric topology on DMC X , Y that is induced by d X , Y is denoted as T X , Y .
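As a quick illustration, the metric d_{X,Y} is straightforward to evaluate numerically; the Python sketch below (with a hypothetical helper `bsc` for binary symmetric channels, not notation from the paper) computes it for two BSCs, where it reduces to the gap between the crossover probabilities:

```python
import numpy as np

def d_channel(W1, W2):
    """d_{X,Y}(W1, W2): half the maximum over inputs x of the L1 distance
    between the output distributions W1(.|x) and W2(.|x) (rows)."""
    return 0.5 * np.abs(W1 - W2).sum(axis=1).max()

def bsc(p):
    """Binary symmetric channel with crossover probability p (rows = inputs)."""
    return np.array([[1 - p, p], [p, 1 - p]])

# For two BSCs, the distance is the gap between the crossover probabilities.
assert np.isclose(d_channel(bsc(0.1), bsc(0.25)), 0.15)
```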

3.2. Equivalence between Channels

Let W ∈ DMC_{X,Y} and W′ ∈ DMC_{X,Z} be two channels having the same input alphabet. We say that W′ is degraded from W if there exists a channel V ∈ DMC_{Y,Z} such that:
W′(z|x) = ∑_{y∈Y} V(z|y) W(y|x), ∀x ∈ X, ∀z ∈ Z.
W and W′ are said to be equivalent if each one is degraded from the other.
Let Δ_X and Δ_Y be the spaces of probability distributions on X and Y, respectively. Define P_W^o ∈ Δ_Y as P_W^o(y) = (1/|X|) ∑_{x∈X} W(y|x) for every y ∈ Y. The image of W is the set of output symbols y ∈ Y having strictly positive probabilities:
Im(W) = {y ∈ Y : P_W^o(y) > 0}.
For every y ∈ Im(W), define W_y^{-1} ∈ Δ_X as follows:
W_y^{-1}(x) = W(y|x) / (|X| · P_W^o(y)), ∀x ∈ X.
For every (x, y) ∈ X × Im(W), we have W(y|x) = |X| · P_W^o(y) · W_y^{-1}(x). On the other hand, if x ∈ X and y ∈ Y \ Im(W), we have W(y|x) = 0. This shows that P_W^o and the collection {W_y^{-1}}_{y∈Im(W)} uniquely determine W.
The Blackwell measure (denoted MP_W) of W is a meta-probability measure on X defined as:
MP_W = ∑_{y∈Im(W)} P_W^o(y) · δ_{W_y^{-1}},
where δ_{W_y^{-1}} is a Dirac measure centered at W_y^{-1}. In an earlier version of this work, I called MP_W the posterior meta-probability distribution of W; Maxim Raginsky kindly brought to my attention the fact that MP_W is known as the Blackwell measure.
It is known that a meta-probability measure MP on X is the Blackwell measure of some discrete memoryless channel (DMC) with input alphabet X if and only if it is balanced and finitely supported [9].
It is also known that two channels W ∈ DMC_{X,Y} and W′ ∈ DMC_{X,Z} are equivalent if and only if MP_W = MP_{W′} [9].
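These two facts can be checked numerically on a small example. The Python sketch below (illustrative only; `blackwell_measure` is a hypothetical helper, and posteriors are rounded so that equal columns merge) computes the Blackwell measure of a channel and of an equivalent channel obtained by splitting one output into two proportional copies, and verifies that the two measures coincide and are balanced:

```python
import numpy as np

def blackwell_measure(W):
    """Blackwell measure of W as a sorted list of (posterior, weight) pairs:
    weight P_W^o(y) on the Dirac measure at the posterior W_y^{-1}."""
    nx = W.shape[0]
    pairs = {}
    for y in range(W.shape[1]):
        po = W[:, y].sum() / nx                      # P_W^o(y)
        if po > 0:
            post = tuple(np.round(W[:, y] / (nx * po), 9))
            pairs[post] = pairs.get(post, 0.0) + po  # merge equal posteriors
    return sorted(pairs.items())

W = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Splitting an output into two proportional copies yields an equivalent
# channel, so it must have the same Blackwell measure.
W_split = np.array([[0.9, 0.05, 0.05],
                    [0.2, 0.40, 0.40]])
mp1, mp2 = blackwell_measure(W), blackwell_measure(W_split)
assert len(mp1) == len(mp2)
assert all(np.allclose(p, q) and abs(u - v) < 1e-9
           for (p, u), (q, v) in zip(mp1, mp2))

# The Blackwell measure is balanced: the posteriors average to the uniform law.
avg = sum(w * np.array(p) for p, w in mp1)
assert np.allclose(avg, [0.5, 0.5])
```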

3.3. Space of Equivalent Channels from X to Y

Let X and Y be two finite sets. Define the equivalence relation R_{X,Y}^{(o)} on DMC_{X,Y} as follows:
∀W, W′ ∈ DMC_{X,Y}, W R_{X,Y}^{(o)} W′ ⇔ W is equivalent to W′.
The space of equivalent channels with input alphabet X and output alphabet Y is the quotient of DMC X , Y by the equivalence relation:
DMC X , Y ( o ) = DMC X , Y / R X , Y ( o ) .
Quotient topology:
We define the topology T X , Y ( o ) on DMC X , Y ( o ) as the quotient topology T X , Y / R X , Y ( o ) . We always associate DMC X , Y ( o ) with the quotient topology T X , Y ( o ) .
We have shown in [6] that DMC X , Y ( o ) is a compact, path-connected and metrizable space.
If Y 1 and Y 2 are two finite sets of the same size, there exists a canonical homeomorphism between DMC X , Y 1 ( o ) and DMC X , Y 2 ( o ) [6]. This allows us to identify DMC X , Y ( o ) with DMC X , [ n ] ( o ) , where n = | Y | and [ n ] = { 1 , , n } .
Moreover, for every 1 ≤ n ≤ m, there exists a canonical subspace of DMC_{X,[m]}^{(o)} that is homeomorphic to DMC_{X,[n]}^{(o)} [6]. Therefore, we can consider DMC_{X,[n]}^{(o)} as a compact subspace of DMC_{X,[m]}^{(o)}.
Noisiness metric:
For every m ≥ 1, let Δ_{[m]×X} be the space of probability distributions on [m] × X. Let Y be a finite set, and let W ∈ DMC_{X,Y}. For every p ∈ Δ_{[m]×X}, define P_c(p, W) as follows:
P_c(p, W) = sup_{D∈DMC_{Y,[m]}} ∑_{u∈[m], x∈X, y∈Y} p(u, x) W(y|x) D(u|y).
The quantity P_c(p, W) depends only on the R_{X,Y}^{(o)}-equivalence class of W (see [6]). Therefore, if Ŵ ∈ DMC_{X,Y}^{(o)}, we can define P_c(p, Ŵ) := P_c(p, W) for any W ∈ Ŵ.
Define the noisiness distance d_{X,Y}^{(o)} : DMC_{X,Y}^{(o)} × DMC_{X,Y}^{(o)} → ℝ⁺ as follows:
d_{X,Y}^{(o)}(Ŵ₁, Ŵ₂) = sup_{m≥1, p∈Δ_{[m]×X}} |P_c(p, Ŵ₁) − P_c(p, Ŵ₂)|.
We have shown in [6] that ( DMC X , Y ( o ) , T X , Y ( o ) ) is topologically equivalent to ( DMC X , Y ( o ) , d X , Y ( o ) ) .

3.4. Space of Equivalent Channels with Input Alphabet X

The space of channels with input alphabet X is defined as:
DMC_{X,∗} = ⨆_{n≥1} DMC_{X,[n]}.
We define the equivalence relation R_{X,∗}^{(o)} on DMC_{X,∗} as follows:
∀W, W′ ∈ DMC_{X,∗}, W R_{X,∗}^{(o)} W′ ⇔ W is equivalent to W′.
The space of equivalent channels with input alphabet X is the quotient of DMC_{X,∗} by the equivalence relation:
DMC_{X,∗}^{(o)} = DMC_{X,∗} / R_{X,∗}^{(o)}.
For every n ≥ 1 and every W ∈ DMC_{X,[n]}, we identify the R_{X,[n]}^{(o)}-equivalence class of W with its R_{X,∗}^{(o)}-equivalence class. This allows us to consider DMC_{X,[n]}^{(o)} as a subspace of DMC_{X,∗}^{(o)}. Moreover,
DMC_{X,∗}^{(o)} = ⋃_{n≥1} DMC_{X,[n]}^{(o)}.
Since any two equivalent channels have the same Blackwell measure, we can define the Blackwell measure of Ŵ ∈ DMC_{X,∗}^{(o)} as MP_Ŵ = MP_W for any W ∈ Ŵ. The rank of Ŵ ∈ DMC_{X,∗}^{(o)} is the size of the support of its Blackwell measure:
rank(Ŵ) = |supp(MP_Ŵ)|.
We have:
DMC_{X,[n]}^{(o)} = {Ŵ ∈ DMC_{X,∗}^{(o)} : rank(Ŵ) ≤ n}.
A topology T on DMC_{X,∗}^{(o)} is said to be natural if and only if it induces the quotient topology T_{X,[n]}^{(o)} on DMC_{X,[n]}^{(o)} for every n ≥ 1.
Every natural topology is σ-compact, separable and path-connected [6]. On the other hand, if |X| ≥ 2, a Hausdorff natural topology is neither Baire nor locally compact anywhere [6]. This implies that no natural topology can be completely metrized if |X| ≥ 2.
Strong topology on DMC X , ( o ) :
We associate DMC_{X,∗} with the disjoint union topology T_{s,X,∗} := ⊕_{n≥1} T_{X,[n]}. The space (DMC_{X,∗}, T_{s,X,∗}) is disconnected, metrizable and σ-compact [6].
The strong topology T_{s,X,∗}^{(o)} on DMC_{X,∗}^{(o)} is the quotient of T_{s,X,∗} by R_{X,∗}^{(o)}:
T_{s,X,∗}^{(o)} = T_{s,X,∗} / R_{X,∗}^{(o)}.
We call open and closed sets in (DMC_{X,∗}^{(o)}, T_{s,X,∗}^{(o)}) strongly-open and strongly-closed sets, respectively. If A is a subset of DMC_{X,∗}^{(o)}, then A is strongly open if and only if A ∩ DMC_{X,[n]}^{(o)} is open in DMC_{X,[n]}^{(o)} for every n ≥ 1. Similarly, A is strongly closed if and only if A ∩ DMC_{X,[n]}^{(o)} is closed in DMC_{X,[n]}^{(o)} for every n ≥ 1.
We have shown in [6] that T_{s,X,∗}^{(o)} is the finest natural topology. The strong topology is sequential, compactly generated and T₄ [6]. On the other hand, if |X| ≥ 2, the strong topology is not first-countable anywhere [6]; hence, it is not metrizable.
Noisiness metric:
Define the noisiness metric on DMC_{X,∗}^{(o)} as follows:
d_{X,∗}^{(o)}(Ŵ, Ŵ′) := d_{X,[n]}^{(o)}(Ŵ, Ŵ′), where n ≥ 1 satisfies Ŵ, Ŵ′ ∈ DMC_{X,[n]}^{(o)}.
d_{X,∗}^{(o)}(Ŵ, Ŵ′) is well defined because d_{X,[n]}^{(o)}(Ŵ, Ŵ′) does not depend on n ≥ 1 as long as Ŵ, Ŵ′ ∈ DMC_{X,[n]}^{(o)}. We can also express d_{X,∗}^{(o)} as follows:
d_{X,∗}^{(o)}(Ŵ, Ŵ′) = sup_{m≥1, p∈Δ_{[m]×X}} |P_c(p, Ŵ) − P_c(p, Ŵ′)|.
The metric topology on DMC_{X,∗}^{(o)} induced by d_{X,∗}^{(o)} is called the noisiness topology on DMC_{X,∗}^{(o)}, and it is denoted as T_{X,∗}^{(o)}. We have shown in [6] that T_{X,∗}^{(o)} is a natural topology that is strictly coarser than T_{s,X,∗}^{(o)}.
Topologies from Blackwell measures:
The mapping Ŵ → MP_Ŵ is a bijection from DMC_{X,∗}^{(o)} to MP_{bf}(X). We call this mapping the canonical bijection from DMC_{X,∗}^{(o)} to MP_{bf}(X).
Since Δ X is a metric space, there are many standard ways to construct topologies on MP ( X ) . If we choose any of these standard topologies on MP ( X ) and then relativize it to the subspace MP b f ( X ) , we can construct topologies on DMC X , ( o ) through the canonical bijection.
In [6], we studied the weak-∗ and the total variation topologies. We showed that the weak-∗ topology is exactly the same as the noisiness topology.
The total-variation metric distance d_{TV,X,∗}^{(o)} on DMC_{X,∗}^{(o)} is defined as:
d_{TV,X,∗}^{(o)}(Ŵ, Ŵ′) = ‖MP_Ŵ − MP_{Ŵ′}‖_TV.
The total-variation topology T_{TV,X,∗}^{(o)} is the metric topology induced by d_{TV,X,∗}^{(o)} on DMC_{X,∗}^{(o)}. We proved in [6] that if |X| ≥ 2, we have:
  • T_{TV,X,∗}^{(o)} is neither natural nor Baire; hence, it is not completely metrizable.
  • T_{TV,X,∗}^{(o)} is not locally compact anywhere.

4. Channel Parameters and Operations

4.1. Useful Parameters

Let Δ_X be the space of probability distributions on X. For every p ∈ Δ_X and every W ∈ DMC_{X,Y}, define I(p, W) as the mutual information I(X; Y), where X is distributed as p and Y is the output of W when X is the input. The mutual information is computed using the natural logarithm. The capacity of W is defined as C(W) = sup_{p∈Δ_X} I(p, W).
For every p ∈ Δ_X, the error probability of the MAP decoder of W under prior p is defined as:
P_e(p, W) = 1 − ∑_{y∈Y} max_{x∈X} {p(x) W(y|x)}.
Clearly, 0 ≤ P_e(p, W) ≤ 1.
For every W ∈ DMC_{X,Y}, define the Bhattacharyya parameter of W as:
Z(W) = (1 / (|X| (|X| − 1))) ∑_{x₁,x₂∈X, x₁≠x₂} ∑_{y∈Y} √(W(y|x₁) W(y|x₂)) if |X| ≥ 2, and Z(W) = 0 if |X| = 1.
It is easy to see that 0 ≤ Z(W) ≤ 1.
It was shown in [10,11] that ¼ Z(W)² ≤ P_e(π_X, W) ≤ (|X| − 1) Z(W), where π_X is the uniform distribution on X.
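The parameters P_e and Z, and the bounds relating them, are easy to evaluate numerically. The following Python sketch (with illustrative helper names of our choosing, not code from the paper) computes both for a small channel and checks the inequality from [10,11]:

```python
import numpy as np

def p_error_map(p, W):
    """P_e(p, W): error probability of the MAP decoder under prior p."""
    return 1.0 - sum(max(p[x] * W[x, y] for x in range(W.shape[0]))
                     for y in range(W.shape[1]))

def bhattacharyya(W):
    """Bhattacharyya parameter Z(W) for |X| >= 2."""
    nx = W.shape[0]
    s = sum(np.sqrt(W[x1] * W[x2]).sum()
            for x1 in range(nx) for x2 in range(nx) if x1 != x2)
    return s / (nx * (nx - 1))

W = np.array([[0.8, 0.15, 0.05],
              [0.1, 0.60, 0.30]])
uniform = np.array([0.5, 0.5])
pe = p_error_map(uniform, W)
z = bhattacharyya(W)
# Bounds from [10,11]: (1/4) Z(W)^2 <= P_e(pi_X, W) <= (|X| - 1) Z(W)
assert 0.25 * z**2 <= pe <= (W.shape[0] - 1) * z
```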
An (n, M)-code C on the alphabet X is a subset of Xⁿ such that |C| = M. The integer n is the block length of C, and M is the size of the code. The rate of C is (1/n) log M, and it is measured in nats. The error probability of the ML decoder for the code C when it is used for a channel W ∈ DMC_{X,Y} is given by:
P_{e,C}(W) = 1 − (1/|C|) ∑_{y₁ⁿ∈Yⁿ} max_{x₁ⁿ∈C} ∏_{i=1}ⁿ W(y_i|x_i).
The optimal error probability of (n, M)-codes for a channel W is given by:
P_{e,n,M}(W) = min_{C⊆Xⁿ, |C|=M} P_{e,C}(W).
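For very small block lengths and alphabets, P_{e,n,M} can be computed by brute force directly from the definition. The Python sketch below (illustrative only; the exhaustive search over all (n, M)-codes is feasible just for tiny n and M) does this for a binary-input channel:

```python
import itertools
import numpy as np

def p_error_code(C, W):
    """P_{e,C}(W): ML-decoding error probability of the code C
    (a list of length-n input tuples) over the channel W."""
    n = len(C[0])
    total = sum(max(np.prod([W[x, y] for x, y in zip(cw, yv)]) for cw in C)
                for yv in itertools.product(range(W.shape[1]), repeat=n))
    return 1.0 - total / len(C)

def p_error_opt(n, M, W):
    """Brute-force P_{e,n,M}(W): minimum of P_{e,C}(W) over all (n, M)-codes."""
    words = itertools.product(range(W.shape[0]), repeat=n)
    return min(p_error_code(list(C), W)
               for C in itertools.combinations(words, M))

W = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# For n = 2, M = 2, the optimum is achieved by maximally separated codewords.
assert np.isclose(p_error_opt(2, 2, W), p_error_code([(0, 0), (1, 1)], W))
```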
The following proposition shows that all the above parameters are continuous:
Proposition 2.
We have:
  • I : Δ_X × DMC_{X,Y} → ℝ⁺ is continuous, concave in p and convex in W.
  • C : DMC_{X,Y} → ℝ⁺ is continuous and convex.
  • P_e : Δ_X × DMC_{X,Y} → [0, 1] is continuous, concave in p and concave in W.
  • Z : DMC_{X,Y} → [0, 1] is continuous.
  • For every code C on X, P_{e,C} : DMC_{X,Y} → [0, 1] is continuous.
  • For every n > 0 and every 1 ≤ M ≤ |X|ⁿ, the mapping P_{e,n,M} : DMC_{X,Y} → [0, 1] is continuous.
Proof. 
These facts are well known, especially the continuity of I, its concavity in p and its convexity in W [12]. Since C is the supremum of a family of mappings that are convex in W, it is also convex in W. For a proof of the continuity of C, see Appendix G. The continuity of Z, P e and P e , C follows immediately from their definitions. Moreover, since P e , n , M is the minimum of a finite number of continuous mappings, it is continuous. The concavity of P e in p and in W can also be easily seen from the definition. ☐

4.2. Channel Operations

If $W \in \mathrm{DMC}_{\mathcal{X},\mathcal{Y}}$ and $V \in \mathrm{DMC}_{\mathcal{Y},\mathcal{Z}}$, we define the composition $V \circ W \in \mathrm{DMC}_{\mathcal{X},\mathcal{Z}}$ of $W$ and $V$ as follows:
$$(V \circ W)(z|x) = \sum_{y \in \mathcal{Y}} V(z|y) W(y|x), \quad \forall x \in \mathcal{X},\ \forall z \in \mathcal{Z}.$$
For every function $f : \mathcal{X} \to \mathcal{Y}$, define the deterministic channel $D_f \in \mathrm{DMC}_{\mathcal{X},\mathcal{Y}}$ as follows:
$$D_f(y|x) = \begin{cases} 1 & \text{if } y = f(x), \\ 0 & \text{otherwise}. \end{cases}$$
It is easy to see that if $f : \mathcal{X} \to \mathcal{Y}$ and $g : \mathcal{Y} \to \mathcal{Z}$, then $D_g \circ D_f = D_{g \circ f}$.
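With channels stored as row-stochastic matrices ($W[x, y] = W(y|x)$), composition is just a matrix product, and the identity $D_g \circ D_f = D_{g \circ f}$ can be checked directly. A small sketch (our own helper names; not from the paper):

```python
import numpy as np

def compose(V, W):
    """(V o W)(z|x) = sum_y V(z|y) W(y|x); with rows as inputs this is W @ V."""
    return W @ V

def deterministic(f, q_out):
    """D_f(y|x) = 1 if y = f(x) else 0; f given as a list of output indices."""
    D = np.zeros((len(f), q_out))
    for x, y in enumerate(f):
        D[x, y] = 1.0
    return D

f = [1, 0, 2]          # f : {0,1,2} -> {0,1,2}
g = [2, 2, 0]          # g : {0,1,2} -> {0,1,2}
Dg_of_Df = compose(deterministic(g, 3), deterministic(f, 3))
D_gf = deterministic([g[y] for y in f], 3)   # g o f
assert np.array_equal(Dg_of_Df, D_gf)        # D_g o D_f = D_{g o f}
```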
For every two channels $W_1 \in \mathrm{DMC}_{\mathcal{X}_1,\mathcal{Y}_1}$ and $W_2 \in \mathrm{DMC}_{\mathcal{X}_2,\mathcal{Y}_2}$, define the channel sum $W_1 \oplus W_2 \in \mathrm{DMC}_{\mathcal{X}_1 \oplus \mathcal{X}_2, \mathcal{Y}_1 \oplus \mathcal{Y}_2}$ of $W_1$ and $W_2$ as:
$$(W_1 \oplus W_2)(y, i | x, j) = \begin{cases} W_i(y|x) & \text{if } i = j, \\ 0 & \text{otherwise}. \end{cases}$$
$W_1 \oplus W_2$ arises when the transmitter has two channels $W_1$ and $W_2$ at its disposal, and it can use exactly one of them at each channel use. It is an easy exercise to check that $e^{C(W_1 \oplus W_2)} = e^{C(W_1)} + e^{C(W_2)}$ (remember that we compute the mutual information using the natural logarithm).
We define the channel product $W_1 \otimes W_2 \in \mathrm{DMC}_{\mathcal{X}_1 \times \mathcal{X}_2, \mathcal{Y}_1 \times \mathcal{Y}_2}$ of $W_1$ and $W_2$ as:
$$(W_1 \otimes W_2)(y_1, y_2 | x_1, x_2) = W_1(y_1|x_1) W_2(y_2|x_2).$$
$W_1 \otimes W_2$ arises when the transmitter has two channels $W_1$ and $W_2$ at its disposal, and it uses both of them at each channel use. It is an easy exercise to check that $C(W_1 \otimes W_2) = C(W_1) + C(W_2)$, or equivalently $e^{C(W_1 \otimes W_2)} = e^{C(W_1)} \cdot e^{C(W_2)}$. Channel sums and products were first introduced by Shannon in [13].
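Shannon's two identities can be verified numerically: the channel sum is a block-diagonal transition matrix, the channel product is a Kronecker product, and capacity can be estimated by the Blahut–Arimoto iteration. A sketch under our own conventions (the helper names and the choice of two binary symmetric channels are illustrative, not from the paper):

```python
import numpy as np

def channel_sum(W1, W2):
    """Channel sum: block-diagonal matrix -- exactly one channel per use."""
    (a1, b1), (a2, b2) = W1.shape, W2.shape
    S = np.zeros((a1 + a2, b1 + b2))
    S[:a1, :b1] = W1
    S[a1:, b1:] = W2
    return S

def channel_product(W1, W2):
    """Channel product: (W1 x W2)(y1,y2|x1,x2) = W1(y1|x1) W2(y2|x2)."""
    return np.kron(W1, W2)

def capacity(W, iters=5000):
    """Blahut-Arimoto iteration; capacity in nats (natural logarithm)."""
    p = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        q = p @ W  # output distribution induced by p
        with np.errstate(divide="ignore", invalid="ignore"):
            D = np.where(W > 0, W * np.log(W / q), 0.0).sum(axis=1)
        p = p * np.exp(D)
        p /= p.sum()
    q = p @ W
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(W > 0, p[:, None] * W * np.log(W / q), 0.0).sum()

W1 = np.array([[0.9, 0.1], [0.1, 0.9]])  # BSC(0.1)
W2 = np.array([[0.8, 0.2], [0.2, 0.8]])  # BSC(0.2)
C1, C2 = capacity(W1), capacity(W2)
# Shannon's identities for the sum and the product:
assert abs(np.exp(capacity(channel_sum(W1, W2))) - (np.exp(C1) + np.exp(C2))) < 1e-4
assert abs(capacity(channel_product(W1, W2)) - (C1 + C2)) < 1e-4
```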
For every $W_1 \in \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_1}$, $W_2 \in \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2}$ and every $0 \leq \alpha \leq 1$, we define the $\alpha$-interpolation $[\alpha W_1, (1-\alpha) W_2] \in \mathrm{DMC}_{\mathcal{X}, \mathcal{Y}_1 \oplus \mathcal{Y}_2}$ between $W_1$ and $W_2$ as:
$$[\alpha W_1, (1-\alpha) W_2](y, i | x) = \begin{cases} \alpha W_1(y|x) & \text{if } i = 1, \\ (1-\alpha) W_2(y|x) & \text{if } i = 2. \end{cases}$$
Channel interpolation arises when a channel behaves as $W_1$ with probability $\alpha$ and as $W_2$ with probability $1 - \alpha$. The transmitter has no control over which behavior the channel chooses, but on the other hand, the receiver knows which one was chosen. Channel interpolations were used in [14] to construct interpolations between polar codes and Reed–Muller codes.
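Concretely, the interpolation stacks the two scaled matrices side by side, since the receiver observes both the output and which behavior occurred. A minimal sketch (our own helper name; the channels are illustrative):

```python
import numpy as np

def interpolate(alpha, W1, W2):
    """[aW1, (1-a)W2]: channel acts as W1 w.p. alpha and as W2 w.p. 1-alpha;
    the receiver observes which behavior occurred (outputs are disjoint)."""
    return np.hstack([alpha * W1, (1 - alpha) * W2])

W1 = np.array([[0.9, 0.1], [0.1, 0.9]])
W2 = np.array([[0.8, 0.2], [0.2, 0.8]])
V = interpolate(0.3, W1, W2)
assert np.allclose(V.sum(axis=1), 1.0)                   # rows are distributions
assert np.allclose(interpolate(0.0, W1, W2)[:, 2:], W2)  # alpha = 0 recovers W2
```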
Now, fix a binary operation ∗ on $\mathcal{X}$. For every $W \in \mathrm{DMC}_{\mathcal{X},\mathcal{Y}}$, define $W^- \in \mathrm{DMC}_{\mathcal{X},\mathcal{Y}^2}$ and $W^+ \in \mathrm{DMC}_{\mathcal{X},\mathcal{Y}^2 \times \mathcal{X}}$ as:
$$W^-(y_1, y_2 | u_1) = \frac{1}{|\mathcal{X}|} \sum_{u_2 \in \mathcal{X}} W(y_1 | u_1 * u_2) W(y_2 | u_2),$$
and:
$$W^+(y_1, y_2, u_1 | u_2) = \frac{1}{|\mathcal{X}|} W(y_1 | u_1 * u_2) W(y_2 | u_2).$$
These operations generalize Arıkan’s polarization transformations [15].
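For ∗ = XOR on $\{0,1\}$, these are exactly Arıkan's original transformations, and the conservation and polarization properties of the uniform-input mutual information can be checked numerically. A sketch (our own helper names; channels as row-stochastic numpy arrays):

```python
import numpy as np

def minus_transform(W, op):
    """W^-(y1,y2|u1) = (1/|X|) sum_{u2} W(y1|u1*u2) W(y2|u2)."""
    q, m = W.shape
    Wm = np.zeros((q, m * m))
    for u1 in range(q):
        for u2 in range(q):
            Wm[u1] += np.kron(W[op(u1, u2)], W[u2]) / q
    return Wm

def plus_transform(W, op):
    """W^+(y1,y2,u1|u2) = (1/|X|) W(y1|u1*u2) W(y2|u2); output index (u1,y1,y2)."""
    q, m = W.shape
    Wp = np.zeros((q, q * m * m))
    for u2 in range(q):
        for u1 in range(q):
            Wp[u2, u1 * m * m:(u1 + 1) * m * m] = np.kron(W[op(u1, u2)], W[u2]) / q
    return Wp

def sym_info(W):
    """Mutual information I(X;Y) with a uniform input (symmetric capacity), nats."""
    q = W.shape[0]
    out = W.mean(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(W > 0, W * np.log(W / out), 0.0).sum() / q

xor = lambda a, b: a ^ b  # XOR is uniformity preserving on {0, 1}
W = np.array([[0.9, 0.1], [0.1, 0.9]])
Wm, Wp = minus_transform(W, xor), plus_transform(W, xor)
# Conservation: I(W^-) + I(W^+) = 2 I(W); polarization: I(W^-) < I(W) < I(W^+).
assert abs(sym_info(Wm) + sym_info(Wp) - 2 * sym_info(W)) < 1e-9
assert sym_info(Wm) < sym_info(W) < sym_info(Wp)
```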
Proposition 3.
We have:
  • The mapping $(W, V) \mapsto V \circ W$ from $\mathrm{DMC}_{\mathcal{X},\mathcal{Y}} \times \mathrm{DMC}_{\mathcal{Y},\mathcal{Z}}$ to $\mathrm{DMC}_{\mathcal{X},\mathcal{Z}}$ is continuous.
  • The mapping $(W_1, W_2) \mapsto W_1 \oplus W_2$ from $\mathrm{DMC}_{\mathcal{X}_1,\mathcal{Y}_1} \times \mathrm{DMC}_{\mathcal{X}_2,\mathcal{Y}_2}$ to the space $\mathrm{DMC}_{\mathcal{X}_1 \oplus \mathcal{X}_2, \mathcal{Y}_1 \oplus \mathcal{Y}_2}$ is continuous.
  • The mapping $(W_1, W_2) \mapsto W_1 \otimes W_2$ from $\mathrm{DMC}_{\mathcal{X}_1,\mathcal{Y}_1} \times \mathrm{DMC}_{\mathcal{X}_2,\mathcal{Y}_2}$ to $\mathrm{DMC}_{\mathcal{X}_1 \times \mathcal{X}_2, \mathcal{Y}_1 \times \mathcal{Y}_2}$ is continuous.
  • The mapping $(W_1, W_2, \alpha) \mapsto [\alpha W_1, (1-\alpha) W_2]$ from $\mathrm{DMC}_{\mathcal{X},\mathcal{Y}_1} \times \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2} \times [0, 1]$ to $\mathrm{DMC}_{\mathcal{X},\mathcal{Y}_1 \oplus \mathcal{Y}_2}$ is continuous.
  • For any binary operation ∗ on $\mathcal{X}$, the mapping $W \mapsto W^-$ from $\mathrm{DMC}_{\mathcal{X},\mathcal{Y}}$ to $\mathrm{DMC}_{\mathcal{X},\mathcal{Y}^2}$ is continuous.
  • For any binary operation ∗ on $\mathcal{X}$, the mapping $W \mapsto W^+$ from $\mathrm{DMC}_{\mathcal{X},\mathcal{Y}}$ to $\mathrm{DMC}_{\mathcal{X},\mathcal{Y}^2 \times \mathcal{X}}$ is continuous.
Proof. 
The continuity immediately follows from the definitions. ☐

5. Continuity on DMC X , Y ( o )

It is well known that the parameters defined in Section 4.1 depend only on the R X , Y ( o ) -equivalence class of W. Therefore, we can define those parameters for any W ^ DMC X , Y ( o ) through the transcendent mapping (defined in Lemma 2). The following proposition shows that those parameters are continuous on DMC X , Y ( o ) :
Proposition 4.
We have:
  • $I : \Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}} \to \mathbb{R}_+$ is continuous and concave in $p$.
  • $C : \mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}} \to \mathbb{R}_+$ is continuous.
  • $P_e : \Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}} \to [0, 1]$ is continuous and concave in $p$.
  • $Z : \mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}} \to [0, 1]$ is continuous.
  • For every code $\mathcal{C}$ on $\mathcal{X}$, $P_{e,\mathcal{C}} : \mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}} \to [0, 1]$ is continuous.
  • For every $n > 0$ and every $1 \leq M \leq |\mathcal{X}|^n$, the mapping $P_{e,n,M} : \mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}} \to [0, 1]$ is continuous.
Proof. 
Since the corresponding parameters are continuous on DMC X , Y (Proposition 2), Lemma 2 implies that they are continuous on DMC X , Y ( o ) . The only cases that need a special treatment are those of I and P e , whose domains involve Δ X . We will only prove the continuity of I since the proof of continuity of P e is similar.
Define the relation $R$ on $\Delta_{\mathcal{X}} \times \mathrm{DMC}_{\mathcal{X},\mathcal{Y}}$ as:
$$(p_1, W_1)\, R\, (p_2, W_2) \iff p_1 = p_2 \text{ and } W_1\, R^{(o)}_{\mathcal{X},\mathcal{Y}}\, W_2.$$
It is easy to see that I ( p , W ) depends only on the R-equivalence class of ( p , W ) . Since I is continuous on Δ X × DMC X , Y , Lemma 2 implies that the transcendent mapping of I is continuous on ( Δ X × DMC X , Y ) / R . On the other hand, since Δ X is locally compact, Theorem 1 implies that ( Δ X × DMC X , Y ) / R can be identified with Δ X × ( DMC X , Y / R X , Y ( o ) ) = Δ X × DMC X , Y ( o ) , and the two spaces have the same topology. Therefore, I is continuous on Δ X × DMC X , Y ( o ) . ☐
With the exception of channel composition, all the channel operations that were defined in Section 4.2 can also be “quotiented”. We just need to realize that the equivalence class of the resulting channel depends only on the equivalence classes of the channels that were used in the operation. Let us illustrate this in the case of channel sums:
Let $W_1, W_1' \in \mathrm{DMC}_{\mathcal{X}_1,\mathcal{Y}_1}$ and $W_2, W_2' \in \mathrm{DMC}_{\mathcal{X}_2,\mathcal{Y}_2}$, and assume that $W_1'$ is degraded from $W_1$ and $W_2'$ is degraded from $W_2$. There exist $V_1 \in \mathrm{DMC}_{\mathcal{Y}_1,\mathcal{Y}_1}$ and $V_2 \in \mathrm{DMC}_{\mathcal{Y}_2,\mathcal{Y}_2}$ such that $W_1' = V_1 \circ W_1$ and $W_2' = V_2 \circ W_2$. It is easy to see that $W_1' \oplus W_2' = (V_1 \oplus V_2) \circ (W_1 \oplus W_2)$, which shows that $W_1' \oplus W_2'$ is degraded from $W_1 \oplus W_2$. This was proven by Shannon in [16].
Therefore, if $W_1'$ is equivalent to $W_1$ and $W_2'$ is equivalent to $W_2$, then $W_1' \oplus W_2'$ is equivalent to $W_1 \oplus W_2$. This allows us to define the channel sum for every $\hat{W}_1 \in \mathrm{DMC}^{(o)}_{\mathcal{X}_1,\mathcal{Y}_1}$ and every $\bar{W}_2 \in \mathrm{DMC}^{(o)}_{\mathcal{X}_2,\mathcal{Y}_2}$ as $\hat{W}_1 \oplus \bar{W}_2 = \widetilde{W_1 \oplus W_2} \in \mathrm{DMC}^{(o)}_{\mathcal{X}_1 \oplus \mathcal{X}_2, \mathcal{Y}_1 \oplus \mathcal{Y}_2}$ for any $W_1 \in \hat{W}_1$ and any $W_2 \in \bar{W}_2$, where $\widetilde{W_1 \oplus W_2}$ is the $R^{(o)}_{\mathcal{X}_1 \oplus \mathcal{X}_2, \mathcal{Y}_1 \oplus \mathcal{Y}_2}$-equivalence class of $W_1 \oplus W_2$.
With the exception of channel composition, we can “quotient” all the channel operations of Section 4.2 in a similar fashion. Moreover, we can show that they are continuous:
Proposition 5.
We have:
  • The mapping $(\hat{W}_1, \bar{W}_2) \mapsto \hat{W}_1 \oplus \bar{W}_2$ from $\mathrm{DMC}^{(o)}_{\mathcal{X}_1,\mathcal{Y}_1} \times \mathrm{DMC}^{(o)}_{\mathcal{X}_2,\mathcal{Y}_2}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X}_1 \oplus \mathcal{X}_2, \mathcal{Y}_1 \oplus \mathcal{Y}_2}$ is continuous.
  • The mapping $(\hat{W}_1, \bar{W}_2) \mapsto \hat{W}_1 \otimes \bar{W}_2$ from $\mathrm{DMC}^{(o)}_{\mathcal{X}_1,\mathcal{Y}_1} \times \mathrm{DMC}^{(o)}_{\mathcal{X}_2,\mathcal{Y}_2}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X}_1 \times \mathcal{X}_2, \mathcal{Y}_1 \times \mathcal{Y}_2}$ is continuous.
  • The mapping $(\hat{W}_1, \bar{W}_2, \alpha) \mapsto [\alpha \hat{W}_1, (1-\alpha) \bar{W}_2]$ from $\mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}_1} \times \mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}_2} \times [0, 1]$ to $\mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}_1 \oplus \mathcal{Y}_2}$ is continuous.
  • For any binary operation ∗ on $\mathcal{X}$, the mapping $\hat{W} \mapsto \hat{W}^-$ from $\mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}^2}$ is continuous.
  • For any binary operation ∗ on $\mathcal{X}$, the mapping $\hat{W} \mapsto \hat{W}^+$ from $\mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}^2 \times \mathcal{X}}$ is continuous.
Proof. 
We only prove the continuity of the channel sum because the proof of continuity of the other operations is similar.
Let Proj : DMC X 1 X 2 , Y 1 Y 2 DMC X 1 X 2 , Y 1 Y 2 ( o ) be the projection onto the R X 1 X 2 , Y 1 Y 2 ( o ) -equivalence classes. Define the mapping f : DMC X 1 , Y 1 × DMC X 2 , Y 2 DMC X 1 X 2 , Y 1 Y 2 ( o ) as f ( W 1 , W 2 ) = Proj ( W 1 W 2 ) . Clearly, f is continuous.
Now, define the equivalence relation $R$ on $\mathrm{DMC}_{\mathcal{X}_1,\mathcal{Y}_1} \times \mathrm{DMC}_{\mathcal{X}_2,\mathcal{Y}_2}$ as:
$$(W_1, W_2)\, R\, (W_1', W_2') \iff W_1\, R^{(o)}_{\mathcal{X}_1,\mathcal{Y}_1}\, W_1' \text{ and } W_2\, R^{(o)}_{\mathcal{X}_2,\mathcal{Y}_2}\, W_2'.$$
The discussion before the proposition shows that f ( W 1 , W 2 ) = Proj ( W 1 W 2 ) depends only on the R-equivalence class of ( W 1 , W 2 ) . Lemma 2 now shows that the transcendent map of f defined on ( DMC X 1 , Y 1 × DMC X 2 , Y 2 ) / R is continuous.
Notice that $(\mathrm{DMC}_{\mathcal{X}_1,\mathcal{Y}_1} \times \mathrm{DMC}_{\mathcal{X}_2,\mathcal{Y}_2})/R$ can be identified with $\mathrm{DMC}^{(o)}_{\mathcal{X}_1,\mathcal{Y}_1} \times \mathrm{DMC}^{(o)}_{\mathcal{X}_2,\mathcal{Y}_2}$. Therefore, we can define $f$ on $\mathrm{DMC}^{(o)}_{\mathcal{X}_1,\mathcal{Y}_1} \times \mathrm{DMC}^{(o)}_{\mathcal{X}_2,\mathcal{Y}_2}$ through this identification. Moreover, since $\mathrm{DMC}^{(o)}_{\mathcal{X}_1,\mathcal{Y}_1}$ and $\mathrm{DMC}^{(o)}_{\mathcal{X}_2,\mathcal{Y}_2}$ are locally compact and Hausdorff, Corollary 1 implies that the canonical bijection between $(\mathrm{DMC}_{\mathcal{X}_1,\mathcal{Y}_1} \times \mathrm{DMC}_{\mathcal{X}_2,\mathcal{Y}_2})/R$ and $\mathrm{DMC}^{(o)}_{\mathcal{X}_1,\mathcal{Y}_1} \times \mathrm{DMC}^{(o)}_{\mathcal{X}_2,\mathcal{Y}_2}$ is a homeomorphism.
Now, since the mapping f on DMC X 1 , Y 1 ( o ) × DMC X 2 , Y 2 ( o ) is just the channel sum, we conclude that the mapping ( W ^ 1 , W ¯ 2 ) W ^ 1 W ¯ 2 from DMC X 1 , Y 1 ( o ) × DMC X 2 , Y 2 ( o ) to DMC X 1 X 2 , Y 1 Y 2 ( o ) is continuous. ☐

6. Continuity in the Strong Topology

The following lemma provides a way to check whether a mapping defined on ( DMC X , ( o ) , T s , X , ( o ) ) is continuous:
Lemma 6.
Let ( S , V ) be an arbitrary topological space. A mapping f : DMC X , ( o ) S is continuous on ( DMC X , ( o ) , T s , X , ( o ) ) if and only if it is continuous on ( DMC X , [ n ] ( o ) , T X , [ n ] ( o ) ) for every n 1 .
Proof. 
$$\begin{aligned} f \text{ is continuous on } (\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{s,\mathcal{X},*}) &\iff f^{-1}(V) \in \mathcal{T}^{(o)}_{s,\mathcal{X},*}\ \ \forall V \in \mathcal{V} \\ &\iff f^{-1}(V) \cap \mathrm{DMC}^{(o)}_{\mathcal{X},[n]} \in \mathcal{T}^{(o)}_{\mathcal{X},[n]}\ \ \forall n \geq 1,\ \forall V \in \mathcal{V} \\ &\iff f \text{ is continuous on } (\mathrm{DMC}^{(o)}_{\mathcal{X},[n]}, \mathcal{T}^{(o)}_{\mathcal{X},[n]})\ \ \forall n \geq 1. \end{aligned}$$
Since the channel parameters I, C, P e , Z, P e , C and P e , n , M are defined on DMC X , [ n ] ( o ) for every n 1 (see Section 5), they are also defined on DMC X , ( o ) = n 1 DMC X , [ n ] ( o ) . The following proposition shows that those parameters are continuous in the strong topology:
Proposition 6.
Let U X be the standard topology on Δ X . We have:
  • $I : \Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to \mathbb{R}_+$ is continuous on $(\Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{U}_{\mathcal{X}} \otimes \mathcal{T}^{(o)}_{s,\mathcal{X},*})$ and concave in $p$.
  • $C : \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to \mathbb{R}_+$ is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{s,\mathcal{X},*})$.
  • $P_e : \Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to [0, 1]$ is continuous on $(\Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{U}_{\mathcal{X}} \otimes \mathcal{T}^{(o)}_{s,\mathcal{X},*})$ and concave in $p$.
  • $Z : \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to [0, 1]$ is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{s,\mathcal{X},*})$.
  • For every code $\mathcal{C}$ on $\mathcal{X}$, $P_{e,\mathcal{C}} : \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to [0, 1]$ is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{s,\mathcal{X},*})$.
  • For every $n > 0$ and every $1 \leq M \leq |\mathcal{X}|^n$, the mapping $P_{e,n,M} : \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to [0, 1]$ is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{s,\mathcal{X},*})$.
Proof. 
The continuity of C , Z , P e , C and P e , n , M immediately follows from Proposition 4 and Lemma 6. Since the proofs of the continuity of I and P e are similar, we only prove the continuity for I.
Due to the distributivity of the product with respect to disjoint unions, we have:
$$\Delta_{\mathcal{X}} \times \mathrm{DMC}_{\mathcal{X},*} = \coprod_{n \geq 1} \big( \Delta_{\mathcal{X}} \times \mathrm{DMC}_{\mathcal{X},[n]} \big),$$
and:
$$\mathcal{U}_{\mathcal{X}} \otimes \mathcal{T}_{s,\mathcal{X},*} = \coprod_{n \geq 1} \big( \mathcal{U}_{\mathcal{X}} \otimes \mathcal{T}_{\mathcal{X},[n]} \big).$$
Therefore, ( Δ X × DMC X , , U X T s , X , ) is the disjoint union of the spaces ( Δ X × DMC X , [ n ] ) n 1 . Moreover, I is continuous on Δ X × DMC X , [ n ] for every n 1 . We conclude that I is continuous on ( Δ X × DMC X , , U X T s , X , ) .
Define the relation $R$ on $\Delta_{\mathcal{X}} \times \mathrm{DMC}_{\mathcal{X},*}$ as follows: $(p_1, W_1)\, R\, (p_2, W_2)$ if and only if $p_1 = p_2$ and $W_1\, R^{(o)}_{\mathcal{X},*}\, W_2$. Since $I(p, W)$ depends only on the $R$-equivalence class of $(p, W)$, Lemma 2 shows that the transcendent map of $I$ is a continuous mapping from $\big( (\Delta_{\mathcal{X}} \times \mathrm{DMC}_{\mathcal{X},*})/R, (\mathcal{U}_{\mathcal{X}} \otimes \mathcal{T}_{s,\mathcal{X},*})/R \big)$ to $\mathbb{R}_+$. On the other hand, since $\Delta_{\mathcal{X}}$ is locally compact, Theorem 1 implies that $\big( (\Delta_{\mathcal{X}} \times \mathrm{DMC}_{\mathcal{X},*})/R, (\mathcal{U}_{\mathcal{X}} \otimes \mathcal{T}_{s,\mathcal{X},*})/R \big)$ can be identified with $\big( \Delta_{\mathcal{X}} \times (\mathrm{DMC}_{\mathcal{X},*}/R^{(o)}_{\mathcal{X},*}), \mathcal{U}_{\mathcal{X}} \otimes (\mathcal{T}_{s,\mathcal{X},*}/R^{(o)}_{\mathcal{X},*}) \big) = (\Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{U}_{\mathcal{X}} \otimes \mathcal{T}^{(o)}_{s,\mathcal{X},*})$. Therefore, $I$ is continuous on $(\Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{U}_{\mathcal{X}} \otimes \mathcal{T}^{(o)}_{s,\mathcal{X},*})$. ☐
It is also possible to extend the definition of all the channel operations that were defined in Section 5 to DMC X , ( o ) . Moreover, it is possible to show that many channel operations are continuous in the strong topology:
Proposition 7.
Assume that all equivalent channel spaces are endowed with the strong topology. We have:
  • The mapping $(\hat{W}_1, \bar{W}_2) \mapsto \hat{W}_1 \oplus \bar{W}_2$ from $\mathrm{DMC}^{(o)}_{\mathcal{X}_1,*} \times \mathrm{DMC}^{(o)}_{\mathcal{X}_2,\mathcal{Y}_2}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X}_1 \oplus \mathcal{X}_2,*}$ is continuous.
  • The mapping $(\hat{W}_1, \bar{W}_2) \mapsto \hat{W}_1 \otimes \bar{W}_2$ from $\mathrm{DMC}^{(o)}_{\mathcal{X}_1,*} \times \mathrm{DMC}^{(o)}_{\mathcal{X}_2,\mathcal{Y}_2}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X}_1 \times \mathcal{X}_2,*}$ is continuous.
  • The mapping $(\hat{W}_1, \bar{W}_2, \alpha) \mapsto [\alpha \hat{W}_1, (1-\alpha) \bar{W}_2]$ from $\mathrm{DMC}^{(o)}_{\mathcal{X},*} \times \mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}_2} \times [0, 1]$ to $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$ is continuous.
  • For any binary operation ∗ on $\mathcal{X}$, the mapping $\hat{W} \mapsto \hat{W}^-$ from $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$ is continuous.
  • For any binary operation ∗ on $\mathcal{X}$, the mapping $\hat{W} \mapsto \hat{W}^+$ from $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$ is continuous.
Proof. 
We only prove the continuity of the channel interpolation because the proof of the continuity of other operations is similar.
Let U be the standard topology on [ 0 , 1 ] . Due to the distributivity of the product with respect to disjoint unions, we have:
$$\mathrm{DMC}_{\mathcal{X},*} \times \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2} \times [0, 1] = \coprod_{n \geq 1} \big( \mathrm{DMC}_{\mathcal{X},[n]} \times \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2} \times [0, 1] \big),$$
and:
$$\mathcal{T}_{s,\mathcal{X},*} \otimes \mathcal{T}_{\mathcal{X},\mathcal{Y}_2} \otimes \mathcal{U} = \coprod_{n \geq 1} \big( \mathcal{T}_{\mathcal{X},[n]} \otimes \mathcal{T}_{\mathcal{X},\mathcal{Y}_2} \otimes \mathcal{U} \big).$$
Therefore, the space DMC X , × DMC X , Y 2 × [ 0 , 1 ] is the topological disjoint union of the spaces ( DMC X , [ n ] × DMC X , Y 2 × [ 0 , 1 ] ) n 1 .
For every $n \geq 1$, let $\mathrm{Proj}_n$ be the projection onto the $R^{(o)}_{\mathcal{X},[n] \oplus \mathcal{Y}_2}$-equivalence classes, and let $i_n$ be the canonical injection from $\mathrm{DMC}^{(o)}_{\mathcal{X},[n] \oplus \mathcal{Y}_2}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$.
Define the mapping $f : \mathrm{DMC}_{\mathcal{X},*} \times \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2} \times [0, 1] \to \mathrm{DMC}^{(o)}_{\mathcal{X},*}$ as:
$$f(W_1, W_2, \alpha) = i_n\big( \mathrm{Proj}_n([\alpha W_1, (1-\alpha) W_2]) \big) = [\alpha \hat{W}_1, (1-\alpha) \bar{W}_2],$$
where $n$ is the unique integer satisfying $W_1 \in \mathrm{DMC}_{\mathcal{X},[n]}$, and $\hat{W}_1$ and $\bar{W}_2$ are the $R^{(o)}_{\mathcal{X},[n]}$- and $R^{(o)}_{\mathcal{X},\mathcal{Y}_2}$-equivalence classes of $W_1$ and $W_2$, respectively.
Due to Proposition 3 and due to the continuity of Proj n and i n , the mapping f is continuous on DMC X , [ n ] × DMC X , Y 2 × [ 0 , 1 ] for every n 1 . Therefore, f is continuous on ( DMC X , × DMC X , Y 2 × [ 0 , 1 ] , T s , X , T X , Y 2 U ) .
Let $R'$ be the equivalence relation defined on $\mathrm{DMC}_{\mathcal{X},*} \times \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2}$ as follows: $(W_1, W_2)\, R'\, (W_1', W_2')$ if and only if $W_1\, R^{(o)}_{\mathcal{X},*}\, W_1'$ and $W_2\, R^{(o)}_{\mathcal{X},\mathcal{Y}_2}\, W_2'$. Furthermore, define the equivalence relation $R$ on $\mathrm{DMC}_{\mathcal{X},*} \times \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2} \times [0, 1]$ as follows: $(W_1, W_2, \alpha)\, R\, (W_1', W_2', \alpha')$ if and only if $(W_1, W_2)\, R'\, (W_1', W_2')$ and $\alpha = \alpha'$.
Since f ( W 1 , W 2 , α ) depends only on the R-equivalence class of ( W 1 , W 2 , α ) , Lemma 2 implies that the transcendent mapping of f is continuous on ( DMC X , × DMC X , Y 2 × [ 0 , 1 ] ) / R .
Since $[0, 1]$ is Hausdorff and locally compact, Theorem 1 implies that the canonical bijection from $(\mathrm{DMC}_{\mathcal{X},*} \times \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2} \times [0, 1])/R$ to $\big( (\mathrm{DMC}_{\mathcal{X},*} \times \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2})/R' \big) \times [0, 1]$ is a homeomorphism. On the other hand, since $(\mathrm{DMC}_{\mathcal{X},*}, \mathcal{T}_{s,\mathcal{X},*})$ and $\mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}_2} = \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2}/R^{(o)}_{\mathcal{X},\mathcal{Y}_2}$ are Hausdorff and locally compact, Corollary 1 implies that the canonical bijection from $\mathrm{DMC}^{(o)}_{\mathcal{X},*} \times \mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}_2}$ to $(\mathrm{DMC}_{\mathcal{X},*} \times \mathrm{DMC}_{\mathcal{X},\mathcal{Y}_2})/R'$ is a homeomorphism. We conclude that the channel interpolation is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*} \times \mathrm{DMC}^{(o)}_{\mathcal{X},\mathcal{Y}_2} \times [0, 1], \mathcal{T}^{(o)}_{s,\mathcal{X},*} \otimes \mathcal{T}^{(o)}_{\mathcal{X},\mathcal{Y}_2} \otimes \mathcal{U})$. ☐
Corollary 3.
( DMC X , ( o ) , T s , X , ( o ) ) is strongly contractible to every point in DMC X , ( o ) .
Proof. 
Fix $\hat{W}_0 \in \mathrm{DMC}^{(o)}_{\mathcal{X},*}$. Define the mapping $H : \mathrm{DMC}^{(o)}_{\mathcal{X},*} \times [0, 1] \to \mathrm{DMC}^{(o)}_{\mathcal{X},*}$ as $H(\hat{W}, \alpha) = [\alpha \hat{W}_0, (1-\alpha) \hat{W}]$. $H$ is continuous by Proposition 7. We also have $H(\hat{W}, 0) = \hat{W}$ and $H(\hat{W}, 1) = \hat{W}_0$ for every $\hat{W} \in \mathrm{DMC}^{(o)}_{\mathcal{X},*}$. Moreover, $H(\hat{W}_0, \alpha) = \hat{W}_0$ for every $0 \leq \alpha \leq 1$. Therefore, $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{s,\mathcal{X},*})$ is strongly contractible to every point in $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$. ☐
The reader might be wondering why channel operations such as the channel sum were not shown to be continuous on the whole space DMC X 1 , ( o ) × DMC X 2 , ( o ) instead of the smaller space DMC X 1 , ( o ) × DMC X 2 , Y 2 ( o ) . The reason is because we cannot apply Corollary 1 to DMC X 1 , × DMC X 2 , and DMC X 1 , ( o ) × DMC X 2 , ( o ) since neither DMC X 1 , ( o ) , nor DMC X 2 , ( o ) is locally compact (under the strong topology).
One potential method to show the continuity of the channel sum on ( DMC X 1 , ( o ) × DMC X 2 , ( o ) , T s , X 1 , ( o ) T s , X 2 , ( o ) ) is as follows: let R be the equivalence relation on DMC X 1 , × DMC X 2 , defined as ( W 1 , W 2 ) R ( W 1 , W 2 ) if and only if W 1 R X 1 , ( o ) W 1 and W 2 R X 2 , ( o ) W 2 . We can identify ( DMC X 1 , × DMC X 2 , ) / R with DMC X 1 , ( o ) × DMC X 2 , ( o ) through the canonical bijection. Using Lemma 2, it is easy to see that the mapping ( W ^ 1 , W ¯ 2 ) W ^ 1 W ¯ 2 is continuous from DMC X 1 , ( o ) × DMC X 2 , ( o ) , ( T s , X 1 , T s , X 2 , ) / R to ( DMC X 1 X 2 , ( o ) , T s , X 1 X 2 , ( o ) ) .
It was shown in [17] that the topology ( T s , X 1 , T s , X 2 , ) / R is homeomorphic to κ ( T s , X 1 , ( o ) T s , X 2 , ( o ) ) through the canonical bijection, where κ ( T s , X 1 , ( o ) T s , X 2 , ( o ) ) is the coarsest topology that is both compactly generated and finer than T s , X 1 , ( o ) T s , X 2 , ( o ) . Therefore, the mapping ( W ^ 1 , W ¯ 2 ) W ^ 1 W ¯ 2 is continuous on DMC X 1 , ( o ) × DMC X 2 , ( o ) , κ ( T s , X 1 , ( o ) T s , X 2 , ( o ) ) . This means that if T s , X 1 , ( o ) T s , X 2 , ( o ) is compactly generated, we will have T s , X 1 , ( o ) T s , X 2 , ( o ) = κ ( T s , X 1 , ( o ) T s , X 2 , ( o ) ) , and so, the channel sum will be continuous on ( DMC X 1 , ( o ) × DMC X 2 , ( o ) , T s , X 1 , ( o ) T s , X 2 , ( o ) ) . Note that although T s , X 1 , ( o ) and T s , X 2 , ( o ) are compactly generated, their product T s , X 1 , ( o ) T s , X 2 , ( o ) might not be compactly generated.

7. Continuity in the Noisiness/Weak-∗ and the Total Variation Topologies

We need to express the channel parameters and operations in terms of the Blackwell measures.

7.1. Channel Parameters

The following proposition shows that many channel parameters can be expressed as an integral of a continuous function with respect to the Blackwell measure:
Proposition 8.
For every $\hat{W} \in \mathrm{DMC}^{(o)}_{\mathcal{X},*}$, we have:
$$\forall p \in \Delta_{\mathcal{X}}, \quad I(p, \hat{W}) = H(p) + |\mathcal{X}| \cdot \int_{\Delta_{\mathcal{X}}} \sum_{x \in \mathcal{X}} p(x) p'(x) \log \frac{p(x) p'(x)}{\sum_{x' \in \mathcal{X}} p(x') p'(x')} \; d\mathrm{MP}_{\hat{W}}(p'),$$
$$\forall p \in \Delta_{\mathcal{X}}, \quad P_e(p, \hat{W}) = 1 - |\mathcal{X}| \int_{\Delta_{\mathcal{X}}} \max_{x \in \mathcal{X}} \big\{ p(x) p'(x) \big\} \; d\mathrm{MP}_{\hat{W}}(p'),$$
$$\text{if } |\mathcal{X}| \geq 2, \quad Z(\hat{W}) = \frac{1}{|\mathcal{X}| - 1} \sum_{\substack{x_1, x_2 \in \mathcal{X} \\ x_1 \neq x_2}} \int_{\Delta_{\mathcal{X}}} \sqrt{p'(x_1) p'(x_2)} \; d\mathrm{MP}_{\hat{W}}(p'),$$
$$\text{for every code } \mathcal{C} \subseteq \mathcal{X}^n, \quad P_{e,\mathcal{C}}(\hat{W}) = 1 - \frac{|\mathcal{X}|^n}{|\mathcal{C}|} \int_{\Delta_{\mathcal{X}}^n} \max_{x_1^n \in \mathcal{C}} \prod_{i=1}^n p_i'(x_i) \; d\mathrm{MP}^n_{\hat{W}}(p_1'^{\,n}),$$
where $H(p)$ is the entropy of $p$, $p'$ denotes the integration variable, and $\mathrm{MP}^n_{\hat{W}}$ is the product measure on $\Delta_{\mathcal{X}}^n$ obtained by multiplying $\mathrm{MP}_{\hat{W}}$ with itself $n$ times. Note that we adopt the standard convention that $0 \log \frac{0}{0} = 0$.
Proof. 
By choosing any representative channel $W \in \hat{W}$ and replacing $W(y|x)$ by $|\mathcal{X}| P_W^o(y) W_y^{-1}(x)$ in the definitions of the channel parameters, all the above formulas immediately follow. Let us show how this works for $P_e$:
$$\begin{aligned} P_e(p, \hat{W}) = P_e(p, W) &\overset{(a)}{=} 1 - \sum_{y \in \mathrm{Im}(W)} \max_{x \in \mathcal{X}} \{ p(x) W(y|x) \} \\ &= 1 - \sum_{y \in \mathrm{Im}(W)} \max_{x \in \mathcal{X}} \big\{ p(x) \cdot |\mathcal{X}| \cdot P_W^o(y) W_y^{-1}(x) \big\} \\ &= 1 - |\mathcal{X}| \sum_{y \in \mathrm{Im}(W)} \max_{x \in \mathcal{X}} \big\{ p(x) W_y^{-1}(x) \big\} \cdot P_W^o(y) \\ &= 1 - |\mathcal{X}| \int_{\Delta_{\mathcal{X}}} \max_{x \in \mathcal{X}} \{ p(x) p'(x) \} \cdot d\mathrm{MP}_W(p') \\ &= 1 - |\mathcal{X}| \int_{\Delta_{\mathcal{X}}} \max_{x \in \mathcal{X}} \{ p(x) p'(x) \} \cdot d\mathrm{MP}_{\hat{W}}(p'), \end{aligned}$$
where (a) is true because $W(y|x) = 0$ for $y \notin \mathrm{Im}(W)$. ☐
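The Blackwell-measure formula for $P_e$ can be verified numerically by building the atoms of $\mathrm{MP}_W$ (mass $P_W^o(y)$ at the posterior $W_y^{-1}$) and comparing against the direct definition. A sketch with our own helper names (the channel matrix is illustrative):

```python
import numpy as np

def blackwell(W):
    """Atoms of MP_W: mass P_W^o(y) at the posterior W_y^{-1} (uniform prior)."""
    q = W.shape[0]
    weights = W.mean(axis=0)                # P_W^o(y) = (1/|X|) sum_x W(y|x)
    posteriors = (W / (q * weights)).T      # W_y^{-1}(x) = W(y|x) / (|X| P_W^o(y))
    return weights, posteriors

def pe_direct(p, W):
    """P_e(p, W) = 1 - sum_y max_x p(x) W(y|x)."""
    return 1.0 - np.sum(np.max(p[:, None] * W, axis=0))

def pe_blackwell(p, W):
    """P_e(p, W) = 1 - |X| * integral of max_x p(x) p'(x) against MP_W."""
    q = W.shape[0]
    weights, posteriors = blackwell(W)
    integral = np.sum(weights * np.max(p[None, :] * posteriors, axis=1))
    return 1.0 - q * integral

W = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3]])
p = np.array([0.4, 0.6])
assert abs(pe_direct(p, W) - pe_blackwell(p, W)) < 1e-12
```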
Proposition 9.
Let U X be the standard topology on Δ X . We have:
  • $I : \Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to \mathbb{R}_+$ is continuous on $(\Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{U}_{\mathcal{X}} \otimes \mathcal{T}^{(o)}_{\mathcal{X},*})$ and concave in $p$.
  • $C : \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to \mathbb{R}_+$ is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{\mathcal{X},*})$.
  • $P_e : \Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to [0, 1]$ is continuous on $(\Delta_{\mathcal{X}} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{U}_{\mathcal{X}} \otimes \mathcal{T}^{(o)}_{\mathcal{X},*})$ and concave in $p$.
  • $Z : \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to [0, 1]$ is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{\mathcal{X},*})$.
  • For every code $\mathcal{C}$ on $\mathcal{X}$, $P_{e,\mathcal{C}} : \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to [0, 1]$ is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{\mathcal{X},*})$.
  • For every $n > 0$ and every $1 \leq M \leq |\mathcal{X}|^n$, the mapping $P_{e,n,M} : \mathrm{DMC}^{(o)}_{\mathcal{X},*} \to [0, 1]$ is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{\mathcal{X},*})$.
Proof. 
We associate the space $\mathcal{MP}(\mathcal{X})$ with the weak-∗ topology. Define the mapping:
$$\bar{I} : \Delta_{\mathcal{X}} \times \mathcal{MP}(\mathcal{X}) \to \mathbb{R}_+$$
as follows:
$$\bar{I}(p, \mathrm{MP}) = H(p) + |\mathcal{X}| \cdot \int_{\Delta_{\mathcal{X}}} \sum_{x \in \mathcal{X}} p(x) p'(x) \log \frac{p(x) p'(x)}{\sum_{x' \in \mathcal{X}} p(x') p'(x')} \; d\mathrm{MP}(p').$$
Lemma 5 implies that I ¯ is continuous. On the other hand, Proposition 8 shows that I ( p , W ^ ) = I ¯ ( p , MP W ^ ) . Therefore, I is continuous on ( Δ X × DMC X , ( o ) , U X T X , ( o ) ) . We can prove the continuity of P e and Z similarly.
Now, define the mapping $\bar{C} : \mathcal{MP}(\mathcal{X}) \to \mathbb{R}$ as:
$$\bar{C}(\mathrm{MP}) = \sup_{p \in \Delta_{\mathcal{X}}} \bar{I}(p, \mathrm{MP}).$$
Fix $\mathrm{MP} \in \mathcal{MP}(\mathcal{X})$, and let $\epsilon > 0$. Since $\mathcal{MP}(\mathcal{X})$ is compact (under the weak-∗ topology), Lemma 1 implies the existence of a weakly-∗ open neighborhood $U_{\mathrm{MP}}$ of $\mathrm{MP}$ such that $|\bar{I}(p, \mathrm{MP}') - \bar{I}(p, \mathrm{MP})| < \epsilon$ for every $\mathrm{MP}' \in U_{\mathrm{MP}}$ and every $p \in \Delta_{\mathcal{X}}$. Therefore, for every $\mathrm{MP}' \in U_{\mathrm{MP}}$ and every $p \in \Delta_{\mathcal{X}}$, we have:
$$\bar{I}(p, \mathrm{MP}') < \bar{I}(p, \mathrm{MP}) + \epsilon \leq \bar{C}(\mathrm{MP}) + \epsilon,$$
hence,
$$\bar{C}(\mathrm{MP}') = \sup_{p \in \Delta_{\mathcal{X}}} \bar{I}(p, \mathrm{MP}') \leq \bar{C}(\mathrm{MP}) + \epsilon.$$
Similarly, we can show that $\bar{C}(\mathrm{MP}) \leq \bar{C}(\mathrm{MP}') + \epsilon$. This shows that $|\bar{C}(\mathrm{MP}') - \bar{C}(\mathrm{MP})| \leq \epsilon$ for every $\mathrm{MP}' \in U_{\mathrm{MP}}$. Therefore, $\bar{C}$ is continuous. However, $C(\hat{W}) = \bar{C}(\mathrm{MP}_{\hat{W}})$, so $C$ is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{\mathcal{X},*})$.
Now for every $0 \leq i \leq n$, define the mapping $f_i : \Delta_{\mathcal{X}}^i \times \mathcal{MP}(\mathcal{X}) \to \mathbb{R}$ backward-recursively as follows:
  • $f_n(p_1^n, \mathrm{MP}) = \max_{x_1^n \in \mathcal{C}} \prod_{i=1}^n p_i(x_i)$.
  • For every $0 \leq i < n$, define:
    $$f_i(p_1^i, \mathrm{MP}) = \int_{\Delta_{\mathcal{X}}} f_{i+1}(p_1^{i+1}, \mathrm{MP}) \cdot d\mathrm{MP}(p_{i+1}).$$
Clearly $f_n$ is continuous. Now, let $0 \leq i < n$, and assume that $f_{i+1}$ is continuous. If we let $S = \Delta_{\mathcal{X}}^i \times \mathcal{MP}(\mathcal{X})$, Lemma 5 implies that the mapping $F_i : \Delta_{\mathcal{X}}^i \times \mathcal{MP}(\mathcal{X}) \times \mathcal{MP}(\mathcal{X}) \to \mathbb{R}$ defined as:
$$F_i(p_1^i, \mathrm{MP}, \mathrm{MP}') = \int_{\Delta_{\mathcal{X}}} f_{i+1}(p_1^{i+1}, \mathrm{MP}) \cdot d\mathrm{MP}'(p_{i+1})$$
is continuous. However, $f_i(p_1^i, \mathrm{MP}) = F_i(p_1^i, \mathrm{MP}, \mathrm{MP})$, so $f_i$ is also continuous. Therefore, $f_0$ is continuous. By noticing that $P_{e,\mathcal{C}}(\hat{W}) = 1 - \frac{|\mathcal{X}|^n}{|\mathcal{C}|} f_0(\mathrm{MP}_{\hat{W}})$, we conclude that $P_{e,\mathcal{C}}$ is continuous on $(\mathrm{DMC}^{(o)}_{\mathcal{X},*}, \mathcal{T}^{(o)}_{\mathcal{X},*})$. Moreover, since $P_{e,n,M}$ is the minimum of a finite family of continuous mappings, it is continuous. ☐
It is worth mentioning that Proposition 6 can be shown from Proposition 9 because the noisiness topology is coarser than the strong topology.
Corollary 4.
All the mappings in Proposition 9 are also continuous if we replace the noisiness topology T X , ( o ) with the total variation topology T T V , X , ( o ) .
Proof. 
This is true because T T V , X , ( o ) is finer than T X , ( o ) . ☐

7.2. Channel Operations

In the following, we show that we can express the channel operations in terms of Blackwell measures. We have all the tools to achieve this for the channel sum, channel product and channel interpolation. In order to express the channel polarization transformations in terms of the Blackwell measures, we need to introduce new definitions.
Let $\mathcal{X}$ be a finite set, and let ∗ be a binary operation on $\mathcal{X}$. We say that ∗ is uniformity preserving if the mapping $(a, b) \mapsto (a * b, b)$ is a bijection from $\mathcal{X}^2$ to itself [18]. For every $a, b \in \mathcal{X}$, we denote the unique element $c \in \mathcal{X}$ satisfying $c * b = a$ as $c = a / b$. Note that $/$ is a binary operation, and it is uniformity preserving; $/$ is called the right-inverse of ∗. It was shown in [11] that a binary operation is polarizing if and only if it is uniformity preserving and its right-inverse is strongly ergodic.
Binary operations that are not uniformity preserving are not interesting for polarization theory because they do not preserve the symmetric capacity [11]. Therefore, we will only focus on polarization transformations that are based on uniformity preserving operations.
Let ∗ be a fixed uniformity preserving operation on $\mathcal{X}$. Define the mapping $C^{-,*} : \Delta_{\mathcal{X}} \times \Delta_{\mathcal{X}} \to \Delta_{\mathcal{X}}$ as
$$\big( C^{-,*}(p_1, p_2) \big)(u_1) = \sum_{u_2 \in \mathcal{X}} p_1(u_1 * u_2) p_2(u_2).$$
The probability distribution C , ( p 1 , p 2 ) can be interpreted as follows: let X 1 and X 2 be two independent random variables in X that are distributed as p 1 and p 2 , respectively, and let ( U 1 , U 2 ) be the random pair in X 2 defined as ( U 1 , U 2 ) = ( X 1 / X 2 , X 2 ) , or equivalently ( X 1 , X 2 ) = ( U 1 U 2 , U 2 ) . C , ( p 1 , p 2 ) is the probability distribution of U 1 .
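This convolution of distributions is easy to compute directly. A minimal sketch (our own helper name `c_minus`; addition modulo 3 is used as an example of a uniformity preserving operation):

```python
import numpy as np

def c_minus(p1, p2, op):
    """(C^{-,*}(p1, p2))(u1) = sum_{u2} p1(u1 * u2) p2(u2): the law of U1."""
    q = len(p1)
    return np.array([sum(p1[op(u1, u2)] * p2[u2] for u2 in range(q))
                     for u1 in range(q)])

add = lambda a, b: (a + b) % 3   # addition mod 3 is uniformity preserving
p1 = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.2, 0.2, 0.6])
r = c_minus(p1, p2, add)
# Since u1 -> u1 * u2 is a bijection for fixed u2, the result is a distribution.
assert abs(r.sum() - 1.0) < 1e-12
```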
Clearly, $C^{-,*}$ is continuous. Therefore, the push-forward mapping $C^{-,*}_{\#}$ is continuous from $\mathcal{P}(\Delta_{\mathcal{X}} \times \Delta_{\mathcal{X}})$ to $\mathcal{P}(\Delta_{\mathcal{X}}) = \mathcal{MP}(\mathcal{X})$ under both the weak-∗ and the total variation topologies (see Section 2.6). For every $\mathrm{MP}_1, \mathrm{MP}_2 \in \mathcal{MP}(\mathcal{X})$, we define the $(-, *)$-convolution of $\mathrm{MP}_1$ and $\mathrm{MP}_2$ as:
$$(\mathrm{MP}_1, \mathrm{MP}_2)^{-,*} = C^{-,*}_{\#}(\mathrm{MP}_1 \times \mathrm{MP}_2) \in \mathcal{MP}(\mathcal{X}).$$
Since the product of meta-probability measures is continuous under both the weak-∗ and the total variation topologies (Appendix B and Appendix F), the ( , ) -convolution is also continuous under these topologies.
For every $p_1, p_2 \in \Delta_{\mathcal{X}}$ and every $u_1 \in \mathrm{supp}(C^{-,*}(p_1, p_2))$, define $C^{+,*}_{u_1}(p_1, p_2) \in \Delta_{\mathcal{X}}$ as:
$$\big( C^{+,*}_{u_1}(p_1, p_2) \big)(u_2) = \frac{p_1(u_1 * u_2) p_2(u_2)}{\big( C^{-,*}(p_1, p_2) \big)(u_1)}.$$
The probability distribution C + , u 1 , ( p 1 , p 2 ) can be interpreted as follows: if X 1 , X 2 , U 1 and U 2 are as above, C + , u 1 , ( p 1 , p 2 ) is the conditional probability distribution of U 2 given U 1 = u 1 .
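The conditional distributions $C^{+,*}_{u_1}$ are the normalized slices of the joint law of $(U_1, U_2)$, and averaging them with weights $C^{-,*}(p_1, p_2)$ recovers the marginal of $U_2$, which is $p_2$. A sketch continuing the previous conventions (helper names are ours):

```python
import numpy as np

def c_minus(p1, p2, op):
    q = len(p1)
    return np.array([sum(p1[op(u1, u2)] * p2[u2] for u2 in range(q))
                     for u1 in range(q)])

def c_plus(p1, p2, u1, op):
    """(C^{+,*}_{u1}(p1, p2))(u2): conditional law of U2 given U1 = u1."""
    q = len(p1)
    joint = np.array([p1[op(u1, u2)] * p2[u2] for u2 in range(q)])
    return joint / c_minus(p1, p2, op)[u1]

add = lambda a, b: (a + b) % 3
p1 = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.2, 0.2, 0.6])
weights = c_minus(p1, p2, add)
conds = [c_plus(p1, p2, u1, add) for u1 in range(3)]
for cond in conds:
    assert abs(cond.sum() - 1.0) < 1e-12           # each slice is a distribution
mix = sum(w * c for w, c in zip(weights, conds))
assert np.allclose(mix, p2)                        # mixture recovers the law of U2
```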
Define the mapping $C^{+,*} : \Delta_{\mathcal{X}} \times \Delta_{\mathcal{X}} \to \mathcal{P}(\Delta_{\mathcal{X}}) = \mathcal{MP}(\mathcal{X})$ as follows:
$$C^{+,*}(p_1, p_2) = \sum_{u_1 \in \mathrm{supp}(C^{-,*}(p_1, p_2))} \big( C^{-,*}(p_1, p_2) \big)(u_1) \cdot \delta_{C^{+,*}_{u_1}(p_1, p_2)},$$
where $\delta_{C^{+,*}_{u_1}(p_1, p_2)}$ is a Dirac measure centered at $C^{+,*}_{u_1}(p_1, p_2)$.
If X 1 , X 2 , U 1 and U 2 are as above, C + , ( p 1 , p 2 ) is the meta-probability measure that describes the possible conditional probability distributions of U 2 that are seen by someone having knowledge of U 1 . Clearly, C + , is a random mapping from Δ X × Δ X to Δ X . In Appendix H, we show that C + , is a measurable random mapping. We also show in Appendix H that C + , is a continuous mapping from Δ X × Δ X to MP ( X ) when the latter space is endowed with the weak-∗ topology. Lemmas 3 and 4 now imply that the push-forward mapping C # + , is continuous under both the weak-∗ and the total variation topologies.
For every $\mathrm{MP}_1, \mathrm{MP}_2 \in \mathcal{MP}(\mathcal{X})$, we define the $(+, *)$-convolution of $\mathrm{MP}_1$ and $\mathrm{MP}_2$ as:
$$(\mathrm{MP}_1, \mathrm{MP}_2)^{+,*} = C^{+,*}_{\#}(\mathrm{MP}_1 \times \mathrm{MP}_2) \in \mathcal{MP}(\mathcal{X}).$$
Since the product of meta-probability measures is continuous under both the weak-∗ and the total variation topologies (Appendix B and Appendix F), the ( + , ) -convolution is also continuous under these topologies.
Proposition 10.
We have:
  • For every $\hat{W}_1 \in \mathrm{DMC}^{(o)}_{\mathcal{X}_1,*}$ and $\bar{W}_2 \in \mathrm{DMC}^{(o)}_{\mathcal{X}_2,*}$, we have:
    $$\mathrm{MP}_{\hat{W}_1 \oplus \bar{W}_2} = \frac{|\mathcal{X}_1|}{|\mathcal{X}_1| + |\mathcal{X}_2|} \mathrm{MP}'_{\hat{W}_1} + \frac{|\mathcal{X}_2|}{|\mathcal{X}_1| + |\mathcal{X}_2|} \mathrm{MP}'_{\bar{W}_2},$$
    where $\mathrm{MP}'_{\hat{W}_1}$ (respectively $\mathrm{MP}'_{\bar{W}_2}$) is the meta-push-forward of $\mathrm{MP}_{\hat{W}_1}$ (respectively $\mathrm{MP}_{\bar{W}_2}$) by the canonical injection from $\mathcal{X}_1$ (respectively $\mathcal{X}_2$) to $\mathcal{X}_1 \oplus \mathcal{X}_2$.
  • For every $\hat{W}_1 \in \mathrm{DMC}^{(o)}_{\mathcal{X}_1,*}$ and $\bar{W}_2 \in \mathrm{DMC}^{(o)}_{\mathcal{X}_2,*}$, we have:
    $$\mathrm{MP}_{\hat{W}_1 \otimes \bar{W}_2} = \mathrm{MP}_{\hat{W}_1} \otimes \mathrm{MP}_{\bar{W}_2},$$
    where the right-hand side is the product of the meta-probability measures $\mathrm{MP}_{\hat{W}_1}$ and $\mathrm{MP}_{\bar{W}_2}$.
  • For every $\alpha \in [0, 1]$ and every $\hat{W}_1, \hat{W}_2 \in \mathrm{DMC}^{(o)}_{\mathcal{X},*}$, we have:
    $$\mathrm{MP}_{[\alpha \hat{W}_1, (1-\alpha) \hat{W}_2]} = \alpha \mathrm{MP}_{\hat{W}_1} + (1-\alpha) \mathrm{MP}_{\hat{W}_2}.$$
  • For every uniformity preserving binary operation ∗ on $\mathcal{X}$ and every $\hat{W} \in \mathrm{DMC}^{(o)}_{\mathcal{X},*}$, we have:
    $$\mathrm{MP}_{\hat{W}^-} = (\mathrm{MP}_{\hat{W}}, \mathrm{MP}_{\hat{W}})^{-,*}.$$
  • For every uniformity preserving binary operation ∗ on $\mathcal{X}$ and every $\hat{W} \in \mathrm{DMC}^{(o)}_{\mathcal{X},*}$, we have:
    $$\mathrm{MP}_{\hat{W}^+} = (\mathrm{MP}_{\hat{W}}, \mathrm{MP}_{\hat{W}})^{+,*}.$$
Proof. 
See Appendix I. ☐
Note that the polarization transformation formulas in Proposition 10 generalize the formulas given by Raginsky in [19] for binary-input channels.
Proposition 11.
Assume that all equivalent channel spaces are endowed with the noisiness/weak-∗ or the total variation topology. We have:
  • The mapping $(\hat{W}_1, \bar{W}_2) \mapsto \hat{W}_1 \oplus \bar{W}_2$ from $\mathrm{DMC}^{(o)}_{\mathcal{X}_1,*} \times \mathrm{DMC}^{(o)}_{\mathcal{X}_2,*}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X}_1 \oplus \mathcal{X}_2,*}$ is continuous.
  • The mapping $(\hat{W}_1, \bar{W}_2) \mapsto \hat{W}_1 \otimes \bar{W}_2$ from $\mathrm{DMC}^{(o)}_{\mathcal{X}_1,*} \times \mathrm{DMC}^{(o)}_{\mathcal{X}_2,*}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X}_1 \times \mathcal{X}_2,*}$ is continuous.
  • The mapping $(\hat{W}_1, \bar{W}_2, \alpha) \mapsto [\alpha \hat{W}_1, (1-\alpha) \bar{W}_2]$ from $\mathrm{DMC}^{(o)}_{\mathcal{X},*} \times \mathrm{DMC}^{(o)}_{\mathcal{X},*} \times [0, 1]$ to $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$ is continuous.
  • For every uniformity preserving binary operation ∗ on $\mathcal{X}$, the mapping $\hat{W} \mapsto \hat{W}^-$ from $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$ is continuous.
  • For every uniformity preserving binary operation ∗ on $\mathcal{X}$, the mapping $\hat{W} \mapsto \hat{W}^+$ from $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$ to $\mathrm{DMC}^{(o)}_{\mathcal{X},*}$ is continuous.
Proof. 
The proposition directly follows from Proposition 10 and the fact that all the meta-probability measure operations that are involved in the formulas are continuous under both the weak-∗ and the total variation topologies. ☐
Corollary 5.
Both ( DMC X , ( o ) , T X , ( o ) ) and ( DMC X , ( o ) , T T V , X , ( o ) ) are strongly contractible to every point in DMC X , ( o ) .
Proof. 
We can use the same proof of Corollary 3. ☐

8. Discussion and Conclusions

Section 5 and Section 6 show that the quotient topology is relatively easy to work with. If one is interested in the space of equivalent channels sharing the same input and output alphabets, then using the quotient formulation of the topology seems to be the easiest way to prove theorems.
The continuity of the channel sum and the channel product on the whole product space ( DMC X 1 , ( o ) × DMC X 2 , ( o ) , T s , X 1 , ( o ) T s , X 2 , ( o ) ) remains an open problem. As we mentioned in Section 6, it is sufficient to prove that the product topology T s , X 1 , ( o ) T s , X 2 , ( o ) is compactly generated.

Acknowledgments

I would like to thank Emre Telatar and Mohammad Bazzi for helpful discussions. I am also grateful to Maxim Raginsky for his comments.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DMC — Discrete memoryless channel
TV — Total variation

Appendix A. Proof of Lemma 1

Fix $\epsilon > 0$, and let $(s, t) \in S \times T$. Since $f$ is continuous, there exists a neighborhood $O_{s,t}$ of $(s, t)$ in $S \times T$ such that for every $(s', t') \in O_{s,t}$, we have $|f(s', t') - f(s, t)| < \frac{\epsilon}{2}$. Moreover, since products of open sets form a base for the product topology, there exists an open neighborhood $V_{s,t}$ of $s$ in $(S, \mathcal{V})$ and an open neighborhood $U_{s,t}$ of $t$ in $T$ such that $V_{s,t} \times U_{s,t} \subseteq O_{s,t}$.
Since $(S, \mathcal{V})$ and $(T, \mathcal{U})$ are compact, the product space is also compact. On the other hand, we have $\bigcup_{(s,t) \in S \times T} V_{s,t} \times U_{s,t} = S \times T$, so $\{ V_{s,t} \times U_{s,t} \}_{(s,t) \in S \times T}$ is an open cover of $S \times T$. Therefore, there exist $s_1, \ldots, s_n \in S$ and $t_1, \ldots, t_n \in T$ such that $\bigcup_{i=1}^n V_{s_i,t_i} \times U_{s_i,t_i} = S \times T$.
Now, fix $s \in S$, and define $V_s = \bigcap_{1 \leq i \leq n,\ s \in V_{s_i,t_i}} V_{s_i,t_i}$. Since $V_s$ is the intersection of finitely many open sets containing $s$, $V_s$ is an open neighborhood of $s$ in $(S, \mathcal{V})$. Let $s' \in V_s$ and $t \in T$. Since $\bigcup_{i=1}^n V_{s_i,t_i} \times U_{s_i,t_i} = S \times T$, there exists $1 \leq i \leq n$ such that $(s, t) \in V_{s_i,t_i} \times U_{s_i,t_i} \subseteq O_{s_i,t_i}$. Since $s \in V_{s_i,t_i}$, we have $V_s \subseteq V_{s_i,t_i}$, and so, $s' \in V_{s_i,t_i}$. Therefore, $(s', t) \in V_{s_i,t_i} \times U_{s_i,t_i} \subseteq O_{s_i,t_i}$, hence:
$$|f(s', t) - f(s, t)| \leq |f(s', t) - f(s_i, t_i)| + |f(s_i, t_i) - f(s, t)| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.$$
However, this is true for every $t \in T$. Therefore,
$$\sup_{t \in T} |f(s', t) - f(s, t)| \leq \epsilon.$$

Appendix B. Continuity of the Product of Measures

For every subset $A$ of $M_1 \times M_2$ and every $x_1 \in M_1$, define $A_2^{x_1} = \{ x_2 \in M_2 : (x_1, x_2) \in A \}$. Similarly, for every $x_2 \in M_2$, define $A_1^{x_2} = \{ x_1 \in M_1 : (x_1, x_2) \in A \}$. Let $P_1, P_1' \in \mathcal{P}(M_1, \Sigma_1)$ and $P_2, P_2' \in \mathcal{P}(M_2, \Sigma_2)$. We have:
$$\begin{aligned} \| P_1' \times P_2' - P_1 \times P_2 \|_{TV} &= \sup_{A \in \Sigma_1 \otimes \Sigma_2} \big| (P_1' \times P_2')(A) - (P_1 \times P_2)(A) \big| \\ &\leq \sup_{A \in \Sigma_1 \otimes \Sigma_2} \Big[ \big| (P_1' \times P_2')(A) - (P_1 \times P_2')(A) \big| + \big| (P_1 \times P_2')(A) - (P_1 \times P_2)(A) \big| \Big] \\ &= \sup_{A \in \Sigma_1 \otimes \Sigma_2} \bigg[ \Big| \int_{M_2} P_1'(A_1^{x_2}) \, dP_2'(x_2) - \int_{M_2} P_1(A_1^{x_2}) \, dP_2'(x_2) \Big| \\ &\qquad\qquad + \Big| \int_{M_1} P_2'(A_2^{x_1}) \, dP_1(x_1) - \int_{M_1} P_2(A_2^{x_1}) \, dP_1(x_1) \Big| \bigg] \\ &\leq \sup_{A \in \Sigma_1 \otimes \Sigma_2} \bigg[ \int_{M_2} \big| P_1'(A_1^{x_2}) - P_1(A_1^{x_2}) \big| \, dP_2'(x_2) + \int_{M_1} \big| P_2'(A_2^{x_1}) - P_2(A_2^{x_1}) \big| \, dP_1(x_1) \bigg] \\ &\leq \int_{M_2} \sup_{A_1 \in \Sigma_1} \big| P_1'(A_1) - P_1(A_1) \big| \, dP_2' + \int_{M_1} \sup_{A_2 \in \Sigma_2} \big| P_2'(A_2) - P_2(A_2) \big| \, dP_1 \\ &= \| P_1' - P_1 \|_{TV} + \| P_2' - P_2 \|_{TV}. \end{aligned}$$
This shows that the product of measures is continuous under the total variation topology.

Appendix C. Proof of Proposition 1

Define the mapping $G: M \to \mathbb{R}^+ \cup \{+\infty\}$ as follows:
$$G(x) = \int_{M'} g(y) \, d(R(x))(y).$$
For every $n \ge 0$, define the mapping $g_n: M' \to \mathbb{R}^+$ as follows:
$$g_n(y) = \frac{1}{2^n} \big\lfloor 2^n \min\{n, g(y)\} \big\rfloor.$$
Clearly, for every $y \in M'$ we have:
  • $g_n(y) \le g(y)$ for all $n \ge 0$.
  • $g_n(y) \le g_{n+1}(y)$ for all $n \ge 0$.
  • $\lim_{n \to \infty} g_n(y) = g(y)$.
Moreover, for every fixed $n \ge 0$, we have:
  • $g_n$ is $\Sigma'$-measurable.
  • $g_n$ takes values in $\left\{ \frac{i}{2^n}:\; 0 \le i \le n 2^n \right\}$.
For every $0 \le i \le n 2^n$, let $B_{i,n} = \{y \in M':\; g_n(y) = \frac{i}{2^n}\}$. Since $g_n$ is $\Sigma'$-measurable, we have $B_{i,n} \in \Sigma'$ for every $0 \le i \le n 2^n$. Now, for every $n \ge 0$, define the mapping $G_n: M \to \mathbb{R} \cup \{+\infty\}$ as follows:
$$G_n(x) = \int_{M'} g_n(y) \, d(R(x))(y) = \int_{M'} \sum_{i=0}^{n 2^n} \frac{i}{2^n} \mathbb{1}_{B_{i,n}}(y) \, d(R(x))(y) = \sum_{i=0}^{n 2^n} \frac{i}{2^n} (R(x))(B_{i,n}) = \sum_{i=0}^{n 2^n} \frac{i}{2^n} R_{B_{i,n}}(x).$$
Since the random mapping $R$ is measurable and since $B_{i,n} \in \Sigma'$, the mapping $R_{B_{i,n}}$ is $\Sigma$-measurable for every $0 \le i \le n 2^n$. Therefore, $G_n$ is $\Sigma$-measurable for every $n \ge 0$. Moreover, for every $x \in M$, we have:
$$\lim_{n \to \infty} G_n(x) = \lim_{n \to \infty} \int_{M'} g_n(y) \, d(R(x))(y) \stackrel{(a)}{=} \int_{M'} g(y) \, d(R(x))(y) = G(x),$$
where (a) follows from the monotone convergence theorem. We conclude that G is Σ -measurable because it is the point-wise limit of Σ -measurable functions. On the other hand, we have:
$$\int_{M'} g_n \cdot d(R \# P) = \sum_{i=0}^{n 2^n} \frac{i}{2^n} (R \# P)(B_{i,n}) = \sum_{i=0}^{n 2^n} \frac{i}{2^n} \int_M R_{B_{i,n}}(x) \cdot dP(x) = \sum_{i=0}^{n 2^n} \frac{i}{2^n} \int_M (R(x))(B_{i,n}) \cdot dP(x) = \sum_{i=0}^{n 2^n} \frac{i}{2^n} \int_M \left( \int_{M'} \mathbb{1}_{B_{i,n}}(y) \cdot d(R(x))(y) \right) dP(x) = \int_M \left( \int_{M'} \sum_{i=0}^{n 2^n} \frac{i}{2^n} \mathbb{1}_{B_{i,n}}(y) \, d(R(x))(y) \right) dP(x) = \int_M \left( \int_{M'} g_n(y) \, d(R(x))(y) \right) dP(x) = \int_M G_n \cdot dP.$$
Therefore,
$$\int_{M'} g \cdot d(R \# P) \stackrel{(a)}{=} \lim_{n \to \infty} \int_{M'} g_n \cdot d(R \# P) = \lim_{n \to \infty} \int_M G_n \cdot dP \stackrel{(b)}{=} \int_M G \cdot dP,$$
where (a) and (b) follow from the monotone convergence theorem.
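The dyadic approximations $g_n$ used above can be illustrated concretely. The following Python sketch (with $g(y) = e^y$ as an arbitrary example of a non-negative function) checks the three listed properties on a grid of sample points:

```python
import math

def g(y):
    # an arbitrary non-negative function, for illustration
    return math.exp(y)

def g_n(n, y):
    # the dyadic approximation g_n(y) = floor(2^n * min(n, g(y))) / 2^n
    return math.floor((2 ** n) * min(n, g(y))) / (2 ** n)

ys = [i / 10 for i in range(31)]  # sample points in [0, 3]
violations = 0
for y in ys:
    vals = [g_n(n, y) for n in range(1, 25)]
    if any(v > g(y) for v in vals):                 # g_n <= g
        violations += 1
    if any(a > b for a, b in zip(vals, vals[1:])):  # g_n <= g_{n+1}
        violations += 1

# pointwise convergence: at n = 24 the error is at most 2^{-24} here
err = max(g(y) - g_n(24, y) for y in ys)
```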

Appendix D. Continuity of the Push-Forward by a Random Mapping

Let $R$ be a measurable random mapping from $(M,\Sigma)$ to $(M',\Sigma')$. Let $P_1, P_2 \in \mathcal{P}(M,\Sigma)$. Define the signed measure $\mu = P_1 - P_2$, and let $\{\mu^+, \mu^-\}$ be the Jordan measure decomposition of $\mu$. It is easy to see that $\|P_1 - P_2\|_{TV} = \mu^+(M) = \mu^-(M)$. For every $B \in \Sigma'$, we have:
$$(R \# P_1)(B) - (R \# P_2)(B) = \int_M R_B \cdot dP_1 - \int_M R_B \cdot dP_2 = \int_M R_B \cdot d(P_1 - P_2) = \int_M R_B \cdot d(\mu^+ - \mu^-) \le \int_M R_B \cdot d\mu^+ \le \|R_B\|_\infty \cdot \mu^+(M) \stackrel{(a)}{\le} \mu^+(M) = \|P_1 - P_2\|_{TV},$$
where (a) follows from the fact that $|R_B(x)| = |(R(x))(B)| \le 1$ for every $x \in M$. We can similarly show that:
$$(R \# P_2)(B) - (R \# P_1)(B) \le \|R_B\|_\infty \cdot \mu^-(M) \le \|P_1 - P_2\|_{TV}.$$
Therefore,
$$\|R \# P_1 - R \# P_2\|_{TV} = \sup_{B \in \Sigma'} |(R \# P_1)(B) - (R \# P_2)(B)| \le \|P_1 - P_2\|_{TV}.$$
This shows that the push-forward mapping $R\#$ from $\mathcal{P}(M,\Sigma)$ to $\mathcal{P}(M',\Sigma')$ is continuous under the total variation topology. This concludes the proof of Lemma 3.
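In the finite case, a random mapping is simply a row-stochastic matrix, and the push-forward is multiplication by it. The following Python sketch (randomly generated instances, for illustration only) verifies the total variation contraction property just proved:

```python
import random

def tv(p, q):
    # total variation distance on a finite space
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def rand_dist(n, rng):
    w = [rng.random() for _ in range(n)]
    s = sum(w)
    return [v / s for v in w]

def push_forward(p, R):
    # (R#P)(b) = sum_x P(x) * (R(x))({b}); R[x][b] is the row-stochastic matrix
    m = len(R[0])
    return [sum(p[x] * R[x][b] for x in range(len(p))) for b in range(m)]

rng = random.Random(1)
violations = 0
for _ in range(500):
    R = [rand_dist(4, rng) for _ in range(5)]  # 5 inputs, 4 outputs
    p1, p2 = rand_dist(5, rng), rand_dist(5, rng)
    # check ||R#P1 - R#P2||_TV <= ||P1 - P2||_TV
    if tv(push_forward(p1, R), push_forward(p2, R)) > tv(p1, p2) + 1e-12:
        violations += 1
```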
Now, assume that $\mathcal{U}$ is a Polish topology on $M$ and $\mathcal{U}'$ is an arbitrary topology on $M'$. Let $R$ be a measurable random mapping from $(M, \mathcal{B}(M))$ to $(M', \mathcal{B}(M'))$. Moreover, assume that $R$ is a continuous mapping from $(M, \mathcal{U})$ to $\mathcal{P}(M', \mathcal{B}(M'))$ when the latter space is endowed with the weak-∗ topology. Let $(P_n)_{n \ge 0}$ be a sequence of probability measures in $\mathcal{P}(M, \mathcal{B}(M))$ that weakly-∗ converges to $P \in \mathcal{P}(M, \mathcal{B}(M))$.
Let $g: M' \to \mathbb{R}$ be a bounded and continuous mapping. Define the mapping $G: M \to \mathbb{R}$ as follows:
$$G(x) = \int_{M'} g(y) \cdot d(R(x))(y).$$
For every sequence $(x_n)_{n \ge 0}$ converging to $x$ in $M$, the sequence $(R(x_n))_{n \ge 0}$ weakly-∗ converges to $R(x)$ in $\mathcal{P}(M', \mathcal{B}(M'))$ because of the continuity of $R$. This implies that the sequence $(G(x_n))_{n \ge 0}$ converges to $G(x)$. Since $\mathcal{U}$ is a Polish topology (hence, metrizable and sequential [20]), this shows that $G$ is a bounded and continuous mapping from $(M, \mathcal{U})$ to $\mathbb{R}$. Therefore, we have:
$$\lim_{n \to \infty} \int_{M'} g \cdot d(R \# P_n) \stackrel{(a)}{=} \lim_{n \to \infty} \int_M G \cdot dP_n \stackrel{(b)}{=} \int_M G \cdot dP \stackrel{(c)}{=} \int_{M'} g \cdot d(R \# P),$$
where (a) and (c) follow from Corollary 2, and (b) follows from the fact that $(P_n)_{n \ge 0}$ weakly-∗ converges to $P$. This shows that $(R \# P_n)_{n \ge 0}$ weakly-∗ converges to $R \# P$. Now, since $\mathcal{U}$ is Polish, the weak-∗ topology on $\mathcal{P}(M, \mathcal{B}(M))$ is metrizable [21]; hence, it is sequential [20]. This shows that the push-forward mapping $R\#$ from $\mathcal{P}(M, \mathcal{B}(M))$ to $\mathcal{P}(M', \mathcal{B}(M'))$ is continuous under the weak-∗ topology.

Appendix E. Proof of Lemma 5

For every $s \in S$, define the mapping $f_s: \Delta_X \to \mathbb{R}$ as $f_s(p) = f(s,p)$. Clearly, $f_s$ is continuous for every $s \in S$. Therefore, the mapping $F_s: MP(X) \to \mathbb{R}$ defined as:
$$F_s(\mathrm{MP}) = \int_{\Delta_X} f_s \cdot d\mathrm{MP}$$
is continuous in the weak-∗ topology of $MP(X)$.
Fix $\epsilon > 0$, and let $(s, \mathrm{MP}) \in S \times MP(X)$. Since $F_s$ is continuous, there exists a weakly-∗ open neighborhood $U_{s,\mathrm{MP}}$ of $\mathrm{MP}$ such that $|F_s(\mathrm{MP}') - F_s(\mathrm{MP})| < \frac{\epsilon}{2}$ for every $\mathrm{MP}' \in U_{s,\mathrm{MP}}$. On the other hand, Lemma 1 implies the existence of an open neighborhood $V_s$ of $s$ in $(S, \mathcal{V})$ such that for every $s' \in V_s$, we have:
$$\sup_{p \in \Delta_X} |f(s',p) - f(s,p)| \le \frac{\epsilon}{2}.$$
Clearly, $V_s \times U_{s,\mathrm{MP}}$ is an open neighborhood of $(s, \mathrm{MP})$ in $S \times MP(X)$. For every $(s', \mathrm{MP}') \in V_s \times U_{s,\mathrm{MP}}$, we have:
$$|F(s',\mathrm{MP}') - F(s,\mathrm{MP})| \le |F(s',\mathrm{MP}') - F(s,\mathrm{MP}')| + |F(s,\mathrm{MP}') - F(s,\mathrm{MP})| = \left| \int_{\Delta_X} \big( f(s',p) - f(s,p) \big) \cdot d\mathrm{MP}'(p) \right| + |F_s(\mathrm{MP}') - F_s(\mathrm{MP})| < \int_{\Delta_X} |f(s',p) - f(s,p)| \cdot d\mathrm{MP}'(p) + \frac{\epsilon}{2} \stackrel{(a)}{\le} \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon,$$
where (a) follows from the fact that $\mathrm{MP}'$ is a meta-probability measure and $|f(s',p) - f(s,p)| \le \frac{\epsilon}{2}$ for every $p \in \Delta_X$. We conclude that $F$ is continuous.

Appendix F. Weak-∗ Continuity of the Product of Meta-Probability Measures

Let $(\mathrm{MP}_{1,n})_{n \ge 0}$ and $(\mathrm{MP}_{2,n})_{n \ge 0}$ be two sequences that weakly-∗ converge to $\mathrm{MP}_1$ and $\mathrm{MP}_2$ in $MP(X_1)$ and $MP(X_2)$, respectively. Let $f: \Delta_{X_1} \times \Delta_{X_2} \to \mathbb{R}$ be a continuous and bounded mapping. Define the mapping $F: \Delta_{X_1} \times MP(X_2) \to \mathbb{R}$ as follows:
$$F(p_1, \mathrm{MP}') = \int_{\Delta_{X_2}} f(p_1, p_2) \, d\mathrm{MP}'(p_2).$$
Fix $\epsilon > 0$. Since $f$ is continuous, Lemma 5 implies that $F$ is continuous. Therefore, the mapping $p_1 \mapsto F(p_1, \mathrm{MP}_2)$ is continuous on $\Delta_{X_1}$, which implies that it is also bounded because $\Delta_{X_1}$ is compact. Therefore,
$$\lim_{n \to \infty} \int_{\Delta_{X_1}} F(p_1, \mathrm{MP}_2) \, d\mathrm{MP}_{1,n}(p_1) = \int_{\Delta_{X_1}} F(p_1, \mathrm{MP}_2) \, d\mathrm{MP}_1(p_1)$$
because $(\mathrm{MP}_{1,n})_{n \ge 0}$ weakly-∗ converges to $\mathrm{MP}_1$. This means that there exists $n_1 \ge 0$ such that for every $n \ge n_1$, we have:
$$\left| \int_{\Delta_{X_1}} F(p_1, \mathrm{MP}_2) \, d\mathrm{MP}_{1,n}(p_1) - \int_{\Delta_{X_1}} F(p_1, \mathrm{MP}_2) \, d\mathrm{MP}_1(p_1) \right| < \frac{\epsilon}{2}.$$
On the other hand, since $F$ is continuous and since $MP(X_2)$ is compact under the weak-∗ topology [21], Lemma 1 implies the existence of a weakly-∗ open neighborhood $U_{\mathrm{MP}_2}$ of $\mathrm{MP}_2$ such that $|F(p_1, \mathrm{MP}_2') - F(p_1, \mathrm{MP}_2)| \le \frac{\epsilon}{2}$ for every $\mathrm{MP}_2' \in U_{\mathrm{MP}_2}$ and every $p_1 \in \Delta_{X_1}$. Moreover, since $(\mathrm{MP}_{2,n})_{n \ge 0}$ weakly-∗ converges to $\mathrm{MP}_2$, there exists $n_2 \ge 0$ such that $\mathrm{MP}_{2,n} \in U_{\mathrm{MP}_2}$ for every $n \ge n_2$.
Therefore, for every $n \ge \max\{n_1, n_2\}$, we have:
$$\begin{aligned}
&\left| \int_{\Delta_{X_1}} \int_{\Delta_{X_2}} f(p_1,p_2) \, d\mathrm{MP}_{2,n}(p_2) \, d\mathrm{MP}_{1,n}(p_1) - \int_{\Delta_{X_1}} \int_{\Delta_{X_2}} f(p_1,p_2) \, d\mathrm{MP}_2(p_2) \, d\mathrm{MP}_1(p_1) \right| \\
&\le \left| \int_{\Delta_{X_1}} \int_{\Delta_{X_2}} f(p_1,p_2) \, d\mathrm{MP}_{2,n}(p_2) \, d\mathrm{MP}_{1,n}(p_1) - \int_{\Delta_{X_1}} \int_{\Delta_{X_2}} f(p_1,p_2) \, d\mathrm{MP}_2(p_2) \, d\mathrm{MP}_{1,n}(p_1) \right| + \left| \int_{\Delta_{X_1}} \int_{\Delta_{X_2}} f(p_1,p_2) \, d\mathrm{MP}_2(p_2) \, d\mathrm{MP}_{1,n}(p_1) - \int_{\Delta_{X_1}} \int_{\Delta_{X_2}} f(p_1,p_2) \, d\mathrm{MP}_2(p_2) \, d\mathrm{MP}_1(p_1) \right| \\
&= \left| \int_{\Delta_{X_1}} \big( F(p_1, \mathrm{MP}_{2,n}) - F(p_1, \mathrm{MP}_2) \big) \, d\mathrm{MP}_{1,n}(p_1) \right| + \left| \int_{\Delta_{X_1}} F(p_1, \mathrm{MP}_2) \, d\mathrm{MP}_{1,n}(p_1) - \int_{\Delta_{X_1}} F(p_1, \mathrm{MP}_2) \, d\mathrm{MP}_1(p_1) \right| \\
&< \int_{\Delta_{X_1}} \left| F(p_1, \mathrm{MP}_{2,n}) - F(p_1, \mathrm{MP}_2) \right| d\mathrm{MP}_{1,n}(p_1) + \frac{\epsilon}{2} \stackrel{(a)}{\le} \int_{\Delta_{X_1}} \frac{\epsilon}{2} \cdot d\mathrm{MP}_{1,n}(p_1) + \frac{\epsilon}{2} = \epsilon,
\end{aligned}$$
where (a) follows from the fact that $\mathrm{MP}_{2,n} \in U_{\mathrm{MP}_2}$ for every $n \ge n_2$. Therefore,
$$\lim_{n \to \infty} \int_{\Delta_{X_1} \times \Delta_{X_2}} f \cdot d(\mathrm{MP}_{1,n} \times \mathrm{MP}_{2,n}) \stackrel{(a)}{=} \lim_{n \to \infty} \int_{\Delta_{X_1}} \int_{\Delta_{X_2}} f(p_1,p_2) \, d\mathrm{MP}_{2,n}(p_2) \, d\mathrm{MP}_{1,n}(p_1) = \int_{\Delta_{X_1}} \int_{\Delta_{X_2}} f(p_1,p_2) \, d\mathrm{MP}_2(p_2) \, d\mathrm{MP}_1(p_1) \stackrel{(b)}{=} \int_{\Delta_{X_1} \times \Delta_{X_2}} f \cdot d(\mathrm{MP}_1 \times \mathrm{MP}_2),$$
where (a) and (b) follow from Fubini’s theorem. We conclude that $(\mathrm{MP}_{1,n} \times \mathrm{MP}_{2,n})_{n \ge 0}$ weakly-∗ converges to $\mathrm{MP}_1 \times \mathrm{MP}_2$. Therefore, the product of meta-probability measures is weakly-∗ continuous.

Appendix G. Continuity of the Capacity

Since the mapping $I$ is continuous and since the space $\Delta_X \times DMC_{X,Y}$ is compact, the mapping $I$ is uniformly continuous, i.e., for every $\epsilon > 0$, there exists $\delta(\epsilon) > 0$ such that for every $(p_1, W_1), (p_2, W_2) \in \Delta_X \times DMC_{X,Y}$, if $\|p_1 - p_2\|_1 := \sum_{x \in X} |p_1(x) - p_2(x)| < \delta(\epsilon)$ and $d_{X,Y}(W_1, W_2) < \delta(\epsilon)$, then
$$|I(p_1, W_1) - I(p_2, W_2)| < \epsilon.$$
Let $W_1, W_2 \in DMC_{X,Y}$ be such that $d_{X,Y}(W_1, W_2) < \delta(\epsilon)$. For every $p \in \Delta_X$, we have $\|p - p\|_1 = 0 < \delta(\epsilon)$, so we must have $|I(p, W_1) - I(p, W_2)| < \epsilon$. Therefore,
$$I(p, W_1) < I(p, W_2) + \epsilon \le \sup_{p' \in \Delta_X} I(p', W_2) + \epsilon = C(W_2) + \epsilon.$$
Therefore,
$$C(W_1) = \sup_{p \in \Delta_X} I(p, W_1) \le C(W_2) + \epsilon.$$
Similarly, we can show that $C(W_2) \le C(W_1) + \epsilon$. This implies that $|C(W_1) - C(W_2)| \le \epsilon$; hence, $C$ is continuous.
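The continuity of $C$ can be observed numerically. The following Python sketch computes the capacities of two nearby binary symmetric channels with the Blahut–Arimoto algorithm (our choice of method for the illustration; it plays no role in the proof above) and confirms that the capacities are close when the channels are close:

```python
import math

def capacity(W, iters=200):
    # W[x][y] = W(y|x); Blahut-Arimoto iteration; returns capacity in nats
    n, m = len(W), len(W[0])
    p = [1.0 / n] * n
    for _ in range(iters):
        q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
        # D(W(.|x) || q) for each input x
        d = [sum(W[x][y] * math.log(W[x][y] / q[y])
                 for y in range(m) if W[x][y] > 0) for x in range(n)]
        w = [p[x] * math.exp(d[x]) for x in range(n)]
        s = sum(w)
        p = [v / s for v in w]
    q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
    return sum(p[x] * W[x][y] * math.log(W[x][y] / q[y])
               for x in range(n) for y in range(m) if W[x][y] > 0)

W1 = [[0.90, 0.10], [0.10, 0.90]]  # BSC with crossover 0.10
W2 = [[0.89, 0.11], [0.11, 0.89]]  # a nearby channel
cap1, cap2 = capacity(W1), capacity(W2)
gap = abs(cap1 - cap2)  # small, since the two channels are close
```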

Appendix H. Measurability and Continuity of C+,∗

Let us first show that the random mapping $C^{+,\ast}$ is measurable. We need to show that the mapping $C_B^{+,\ast}: \Delta_X \times \Delta_X \to \mathbb{R}$ is measurable for every $B \in \mathcal{B}(\Delta_X)$, where:
$$C_B^{+,\ast}(p_1, p_2) = \big( C^{+,\ast}(p_1, p_2) \big)(B), \quad \forall p_1, p_2 \in \Delta_X.$$
For every $u_1 \in X$, define the set:
$$A_{u_1} = \big\{ (p_1, p_2) \in \Delta_X \times \Delta_X:\; \big( C^{-,\ast}(p_1, p_2) \big)(u_1) > 0 \big\}.$$
Clearly, $A_{u_1}$ is open in $\Delta_X \times \Delta_X$ (and so it is measurable). The mapping $C_{u_1}^{+,\ast}$ is defined on $A_{u_1}$, and it is clearly continuous. Therefore, for every $B \in \mathcal{B}(\Delta_X)$, $(C_{u_1}^{+,\ast})^{-1}(B)$ is measurable. We have:
$$C_B^{+,\ast}(p_1, p_2) = \big( C^{+,\ast}(p_1, p_2) \big)(B) = \sum_{\substack{u_1 \in \mathrm{supp}(C^{-,\ast}(p_1, p_2)),\\ C_{u_1}^{+,\ast}(p_1, p_2) \in B}} \big( C^{-,\ast}(p_1, p_2) \big)(u_1) = \sum_{\substack{u_1 \in X,\; (p_1, p_2) \in A_{u_1},\\ C_{u_1}^{+,\ast}(p_1, p_2) \in B}} \big( C^{-,\ast}(p_1, p_2) \big)(u_1) \stackrel{(a)}{=} \sum_{u_1 \in X} \big( C^{-,\ast}(p_1, p_2) \big)(u_1) \cdot \mathbb{1}_{(C_{u_1}^{+,\ast})^{-1}(B)}(p_1, p_2),$$
where (a) follows from the fact that $(p_1, p_2) \in (C_{u_1}^{+,\ast})^{-1}(B)$ if and only if $(p_1, p_2) \in A_{u_1}$ and $C_{u_1}^{+,\ast}(p_1, p_2) \in B$. This shows that $C_B^{+,\ast}$ is measurable for every $B \in \mathcal{B}(\Delta_X)$. Therefore, $C^{+,\ast}$ is a measurable random mapping.
Let $(p_{1,n}, p_{2,n})_{n \ge 0}$ be a sequence converging to $(p_1, p_2)$ in $\Delta_X \times \Delta_X$. Since $C^{-,\ast}$ is continuous, we have $\lim_{n \to \infty} \big( C^{-,\ast}(p_{1,n}, p_{2,n}) \big)(u_1) = \big( C^{-,\ast}(p_1, p_2) \big)(u_1)$ for every $u_1 \in X$. Therefore, for every $u_1 \in \mathrm{supp}(C^{-,\ast}(p_1, p_2))$, there exists $n_{u_1} \ge 0$ such that for every $n \ge n_{u_1}$, we have $\big( C^{-,\ast}(p_{1,n}, p_{2,n}) \big)(u_1) > 0$. Let $n_0 = \max\{n_{u_1}:\; u_1 \in \mathrm{supp}(C^{-,\ast}(p_1, p_2))\}$. For every $n \ge n_0$, we have $\mathrm{supp}(C^{-,\ast}(p_1, p_2)) \subseteq \mathrm{supp}(C^{-,\ast}(p_{1,n}, p_{2,n}))$. Therefore, for every continuous and bounded mapping $g: \Delta_X \to \mathbb{R}$, we have:
$$\lim_{n \to \infty} \int_{\Delta_X} g \cdot d\big( C^{+,\ast}(p_{1,n}, p_{2,n}) \big) = \lim_{n \to \infty} \sum_{u_1 \in \mathrm{supp}(C^{-,\ast}(p_{1,n}, p_{2,n}))} g\big( C_{u_1}^{+,\ast}(p_{1,n}, p_{2,n}) \big) \cdot \big( C^{-,\ast}(p_{1,n}, p_{2,n}) \big)(u_1) \stackrel{(a)}{=} \lim_{n \to \infty} \sum_{u_1 \in \mathrm{supp}(C^{-,\ast}(p_1, p_2))} g\big( C_{u_1}^{+,\ast}(p_{1,n}, p_{2,n}) \big) \cdot \big( C^{-,\ast}(p_{1,n}, p_{2,n}) \big)(u_1) \stackrel{(b)}{=} \sum_{u_1 \in \mathrm{supp}(C^{-,\ast}(p_1, p_2))} g\big( C_{u_1}^{+,\ast}(p_1, p_2) \big) \cdot \big( C^{-,\ast}(p_1, p_2) \big)(u_1) = \int_{\Delta_X} g \cdot d\big( C^{+,\ast}(p_1, p_2) \big),$$
where (b) follows from the continuity of $g$ and $C^{-,\ast}$ and the continuity of $C_{u_1}^{+,\ast}$ on $A_{u_1}$ for every $u_1 \in X$, and (a) follows from the fact that:
$$\lim_{n \to \infty} \sum_{\substack{u_1 \in \mathrm{supp}(C^{-,\ast}(p_{1,n}, p_{2,n})),\\ u_1 \notin \mathrm{supp}(C^{-,\ast}(p_1, p_2))}} \left| g\big( C_{u_1}^{+,\ast}(p_{1,n}, p_{2,n}) \big) \cdot \big( C^{-,\ast}(p_{1,n}, p_{2,n}) \big)(u_1) \right| \le \|g\|_\infty \lim_{n \to \infty} \sum_{\substack{u_1 \in \mathrm{supp}(C^{-,\ast}(p_{1,n}, p_{2,n})),\\ u_1 \notin \mathrm{supp}(C^{-,\ast}(p_1, p_2))}} \big( C^{-,\ast}(p_{1,n}, p_{2,n}) \big)(u_1) = \|g\|_\infty \lim_{n \to \infty} \left( 1 - \sum_{u_1 \in \mathrm{supp}(C^{-,\ast}(p_1, p_2))} \big( C^{-,\ast}(p_{1,n}, p_{2,n}) \big)(u_1) \right) = \|g\|_\infty \left( 1 - \sum_{u_1 \in \mathrm{supp}(C^{-,\ast}(p_1, p_2))} \big( C^{-,\ast}(p_1, p_2) \big)(u_1) \right) = 0.$$
We conclude that the mapping $C^{+,\ast}$ is a continuous mapping from $\Delta_X \times \Delta_X$ to $MP(X)$ when the latter space is endowed with the weak-∗ topology.
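The transforms $C^{-,\ast}$ and $C^{+,\ast}$ can be made concrete in the simplest case. The following Python sketch (taking $X = \{0,1\}$ and $\ast$ to be addition modulo 2, an illustrative choice of ours) checks that $C^{-,\ast}(p_1,p_2)$ is a probability distribution on $X$, and that $C_{u_1}^{+,\ast}(p_1,p_2)$ is one whenever $\big(C^{-,\ast}(p_1,p_2)\big)(u_1) > 0$:

```python
import random

def c_minus(p1, p2):
    # C^{-,*}(p1, p2)(u1) = sum_{u2} p1(u1 * u2) p2(u2), with * = XOR
    return [sum(p1[u1 ^ u2] * p2[u2] for u2 in range(2)) for u1 in range(2)]

def c_plus(u1, p1, p2):
    # C^{+,*}_{u1}(p1, p2)(u2) = p1(u1 * u2) p2(u2) / C^{-,*}(p1, p2)(u1)
    denom = c_minus(p1, p2)[u1]
    return [p1[u1 ^ u2] * p2[u2] / denom for u2 in range(2)]

rng = random.Random(2)
violations = 0
for _ in range(200):
    a, b = rng.random(), rng.random()
    p1, p2 = [a, 1 - a], [b, 1 - b]
    cm = c_minus(p1, p2)
    if abs(sum(cm) - 1.0) > 1e-12:          # C^{-,*} is a distribution
        violations += 1
    for u1 in range(2):
        if cm[u1] > 0 and abs(sum(c_plus(u1, p1, p2)) - 1.0) > 1e-12:
            violations += 1                 # each C^{+,*}_{u1} is too
```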

Appendix I. Proof of Proposition 10

Let $\hat{W}_1 \in DMC_{X_1,\ast}^{(o)}$ and $\bar{W}_2 \in DMC_{X_2,\ast}^{(o)}$. Fix $W_1 \in \hat{W}_1$ and $W_2 \in \bar{W}_2$, and let $Y_1$ and $Y_2$ be the output alphabets of $W_1$ and $W_2$, respectively. We may assume without loss of generality that $\mathrm{Im}(W_1) = Y_1$ and $\mathrm{Im}(W_2) = Y_2$.
Let $y \in Y_1$. We have:
$$P_{W_1 \oplus W_2}^o(y) = \frac{1}{|X_1 \sqcup X_2|} \sum_{x \in X_1 \sqcup X_2} (W_1 \oplus W_2)(y|x) = \frac{1}{|X_1| + |X_2|} \sum_{x \in X_1} W_1(y|x) = \frac{|X_1|}{|X_1| + |X_2|} P_{W_1}^o(y) > 0.$$
For every $x \in X_1$, we have:
$$(W_1 \oplus W_2)_y^{-1}(x) = \frac{(W_1 \oplus W_2)(y|x)}{(|X_1| + |X_2|) \, P_{W_1 \oplus W_2}^o(y)} = \frac{W_1(y|x)}{|X_1| \, P_{W_1}^o(y)} = (W_1)_y^{-1}(x).$$
On the other hand, for every $x \in X_2$, we have:
$$(W_1 \oplus W_2)_y^{-1}(x) = \frac{(W_1 \oplus W_2)(y|x)}{(|X_1| + |X_2|) \, P_{W_1 \oplus W_2}^o(y)} = 0.$$
Therefore, $(W_1 \oplus W_2)_y^{-1} = \phi_1 \# (W_1)_y^{-1}$, where $\phi_1$ is the canonical injection from $X_1$ to $X_1 \sqcup X_2$.
Similarly, for every $y \in Y_2$, we have $P_{W_1 \oplus W_2}^o(y) = \frac{|X_2|}{|X_1| + |X_2|} P_{W_2}^o(y) > 0$ and $(W_1 \oplus W_2)_y^{-1} = \phi_2 \# (W_2)_y^{-1}$, where $\phi_2$ is the canonical injection from $X_2$ to $X_1 \sqcup X_2$. For every $B \in \mathcal{B}(\Delta_{X_1 \sqcup X_2})$, we have:
$$MP_{W_1 \oplus W_2}(B) = \sum_{\substack{y \in Y_1 \sqcup Y_2,\\ (W_1 \oplus W_2)_y^{-1} \in B}} P_{W_1 \oplus W_2}^o(y) = \sum_{\substack{y \in Y_1,\\ \phi_1 \# (W_1)_y^{-1} \in B}} \frac{|X_1|}{|X_1| + |X_2|} P_{W_1}^o(y) + \sum_{\substack{y \in Y_2,\\ \phi_2 \# (W_2)_y^{-1} \in B}} \frac{|X_2|}{|X_1| + |X_2|} P_{W_2}^o(y) = \frac{|X_1|}{|X_1| + |X_2|} MP_{W_1}\big( (\phi_1 \#)^{-1}(B) \big) + \frac{|X_2|}{|X_1| + |X_2|} MP_{W_2}\big( (\phi_2 \#)^{-1}(B) \big) = \frac{|X_1|}{|X_1| + |X_2|} (\phi_1 \# \# MP_{W_1})(B) + \frac{|X_2|}{|X_1| + |X_2|} (\phi_2 \# \# MP_{W_2})(B).$$
Therefore,
$$MP_{\hat{W}_1 \oplus \bar{W}_2} = \frac{|X_1|}{|X_1| + |X_2|} \phi_1 \# \# MP_{\hat{W}_1} + \frac{|X_2|}{|X_1| + |X_2|} \phi_2 \# \# MP_{\bar{W}_2}.$$
This shows the first formula of Proposition 10.
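The output-distribution computation above can be checked on a small example. The following Python sketch (the two channel matrices are arbitrary illustrative choices) builds the channel sum as a block-diagonal matrix with a uniform input on the disjoint union of the input alphabets, and verifies the scaling of the output probabilities on $Y_1$:

```python
# two arbitrary channels W[x][y] = W(y|x) with disjoint alphabets
W1 = [[0.7, 0.3], [0.2, 0.8]]                               # |X1| = 2
W2 = [[0.5, 0.25, 0.25], [0.1, 0.6, 0.3], [0.3, 0.3, 0.4]]  # |X2| = 3

n1, n2 = len(W1), len(W2)
m1, m2 = len(W1[0]), len(W2[0])

# channel sum: inputs X1 u X2, outputs Y1 u Y2 (block-diagonal matrix)
W_sum = [row + [0.0] * m2 for row in W1] + [[0.0] * m1 + row for row in W2]

def out_dist(W):
    # output distribution P_W^o under the uniform input distribution
    n = len(W)
    return [sum(row[y] for row in W) / n for y in range(len(W[0]))]

po_sum, po_1 = out_dist(W_sum), out_dist(W1)
# check P^o_{W1 (+) W2}(y) = |X1| / (|X1| + |X2|) * P^o_{W1}(y) on Y1
err = max(abs(po_sum[y] - n1 / (n1 + n2) * po_1[y]) for y in range(m1))
```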
For every $y = (y_1, y_2) \in Y_1 \times Y_2$, we have:
$$P_{W_1 \otimes W_2}^o(y) = \sum_{(x_1, x_2) \in X_1 \times X_2} \frac{1}{|X_1 \times X_2|} (W_1 \otimes W_2)(y_1, y_2 | x_1, x_2) = \sum_{x_1 \in X_1,\; x_2 \in X_2} \frac{W_1(y_1|x_1)}{|X_1|} \cdot \frac{W_2(y_2|x_2)}{|X_2|} = P_{W_1}^o(y_1) \, P_{W_2}^o(y_2) > 0.$$
For every $x = (x_1, x_2) \in X_1 \times X_2$, we have:
$$(W_1 \otimes W_2)_y^{-1}(x) = \frac{(W_1 \otimes W_2)(y|x)}{|X_1 \times X_2| \, P_{W_1 \otimes W_2}^o(y)} = \frac{W_1(y_1|x_1)}{|X_1| \, P_{W_1}^o(y_1)} \cdot \frac{W_2(y_2|x_2)}{|X_2| \, P_{W_2}^o(y_2)} = (W_1)_{y_1}^{-1}(x_1) \cdot (W_2)_{y_2}^{-1}(x_2) = \big( (W_1)_{y_1}^{-1} \times (W_2)_{y_2}^{-1} \big)(x).$$
For every $B \in \mathcal{B}(\Delta_{X_1 \times X_2})$, we have:
$$MP_{W_1 \otimes W_2}(B) = \sum_{\substack{y \in Y_1 \times Y_2,\\ (W_1 \otimes W_2)_y^{-1} \in B}} P_{W_1 \otimes W_2}^o(y) = \sum_{\substack{y \in Y_1 \times Y_2,\\ (W_1)_{y_1}^{-1} \times (W_2)_{y_2}^{-1} \in B}} P_{W_1}^o(y_1) \, P_{W_2}^o(y_2) = \sum_{\substack{y \in Y_1 \times Y_2,\\ \mathrm{Mul}\big( (W_1)_{y_1}^{-1}, (W_2)_{y_2}^{-1} \big) \in B}} P_{W_1}^o(y_1) \, P_{W_2}^o(y_2) = (MP_{W_1} \times MP_{W_2})\big( \mathrm{Mul}^{-1}(B) \big) = \mathrm{Mul} \# (MP_{W_1} \times MP_{W_2})(B) = (MP_{W_1} \otimes MP_{W_2})(B).$$
Therefore,
$$MP_{\hat{W}_1 \otimes \bar{W}_2} = MP_{\hat{W}_1} \otimes MP_{\bar{W}_2}.$$
This shows the second formula of Proposition 10.
Now, let $\alpha \in [0,1]$ and $\hat{W}_1, \hat{W}_2 \in DMC_{X,\ast}^{(o)}$. Fix $W_1 \in \hat{W}_1$ and $W_2 \in \hat{W}_2$, and let $Y_1$ and $Y_2$ be the output alphabets of $W_1$ and $W_2$, respectively. We may assume without loss of generality that $\mathrm{Im}(W_1) = Y_1$ and $\mathrm{Im}(W_2) = Y_2$. Let $W = [\alpha W_1, (1-\alpha) W_2]$. If $\alpha = 0$, then $W$ is equivalent to $W_2$ and $MP_W = MP_{W_2} = \alpha MP_{W_1} + (1-\alpha) MP_{W_2}$. If $\alpha = 1$, then $W$ is equivalent to $W_1$ and $MP_W = MP_{W_1} = \alpha MP_{W_1} + (1-\alpha) MP_{W_2}$.
Assume now that $0 < \alpha < 1$. For every $y \in Y_1$, we have:
$$P_W^o(y) = \frac{1}{|X|} \sum_{x \in X} W(y|x) = \frac{1}{|X|} \sum_{x \in X} \alpha \, W_1(y|x) = \alpha \, P_{W_1}^o(y) > 0.$$
For every $x \in X$, we have:
$$W_y^{-1}(x) = \frac{W(y|x)}{|X| \, P_W^o(y)} = \frac{\alpha \, W_1(y|x)}{|X| \, \alpha \, P_{W_1}^o(y)} = (W_1)_y^{-1}(x).$$
Similarly, for every $y \in Y_2$, we have $P_W^o(y) = (1-\alpha) P_{W_2}^o(y) > 0$ and $W_y^{-1} = (W_2)_y^{-1}$. Therefore,
$$MP_W = \sum_{y \in Y_1 \sqcup Y_2} P_W^o(y) \cdot \delta_{W_y^{-1}} = \sum_{y \in Y_1} \alpha \, P_{W_1}^o(y) \cdot \delta_{(W_1)_y^{-1}} + \sum_{y \in Y_2} (1-\alpha) \, P_{W_2}^o(y) \cdot \delta_{(W_2)_y^{-1}} = \alpha \, MP_{W_1} + (1-\alpha) \, MP_{W_2}.$$
Therefore,
$$MP_{[\alpha \hat{W}_1, (1-\alpha) \hat{W}_2]} = \alpha \, MP_{\hat{W}_1} + (1-\alpha) \, MP_{\hat{W}_2}.$$
This shows the third formula of Proposition 10.
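The key step of the third formula can likewise be checked numerically. The following Python sketch (arbitrary illustrative matrices and $\alpha$) builds the interpolated channel on the disjoint union of the output alphabets and verifies $P_W^o(y) = \alpha P_{W_1}^o(y)$ on $Y_1$ and $P_W^o(y) = (1-\alpha) P_{W_2}^o(y)$ on $Y_2$:

```python
alpha = 0.3
# two arbitrary channels with the same input alphabet X = {0, 1}
W1 = [[0.7, 0.3], [0.2, 0.8]]
W2 = [[0.6, 0.2, 0.2], [0.1, 0.5, 0.4]]
m1, m2 = len(W1[0]), len(W2[0])

# interpolated channel [alpha W1, (1 - alpha) W2] on the disjoint union Y1 u Y2
W = [[alpha * p for p in r1] + [(1 - alpha) * p for p in r2]
     for r1, r2 in zip(W1, W2)]

def out_dist(M):
    # output distribution under the uniform input distribution
    n = len(M)
    return [sum(row[y] for row in M) / n for y in range(len(M[0]))]

po, po1, po2 = out_dist(W), out_dist(W1), out_dist(W2)
err1 = max(abs(po[y] - alpha * po1[y]) for y in range(m1))
err2 = max(abs(po[m1 + y] - (1 - alpha) * po2[y]) for y in range(m2))
```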
Now, let $\hat{W} \in DMC_{X,\ast}^{(o)}$, and let $\ast$ be a uniformity-preserving binary operation on $X$. Fix $W \in \hat{W}$, and let $Y$ be the output alphabet of $W$. We may assume without loss of generality that $\mathrm{Im}(W) = Y$.
Let $U_1, U_2$ be two independent random variables uniformly distributed in $X$. Let $X_1 = U_1 \ast U_2$ and $X_2 = U_2$. Send $X_1$ and $X_2$ through two independent copies of $W$, and let $Y_1$ and $Y_2$ be the outputs, respectively.
For every $(y_1, y_2) \in Y^2$, we have:
$$P_{W^-}^o(y_1, y_2) = P_{Y_1, Y_2}(y_1, y_2) = P_{Y_1}(y_1) \, P_{Y_2}(y_2) = P_W^o(y_1) \, P_W^o(y_2) > 0.$$
For every $u_1 \in X$, we have:
$$(W^-)_{y_1, y_2}^{-1}(u_1) = P_{U_1 | Y_1, Y_2}(u_1 | y_1, y_2) = \sum_{u_2 \in X} P_{U_1, U_2 | Y_1, Y_2}(u_1, u_2 | y_1, y_2) = \sum_{u_2 \in X} P_{X_1, X_2 | Y_1, Y_2}(u_1 \ast u_2, u_2 | y_1, y_2) = \sum_{u_2 \in X} P_{X_1 | Y_1}(u_1 \ast u_2 | y_1) \, P_{X_2 | Y_2}(u_2 | y_2) = \sum_{u_2 \in X} W_{y_1}^{-1}(u_1 \ast u_2) \, W_{y_2}^{-1}(u_2) = C^{-,\ast}\big( W_{y_1}^{-1}, W_{y_2}^{-1} \big)(u_1).$$
For every $B \in \mathcal{B}(\Delta_X)$, we have:
$$MP_{W^-}(B) = \sum_{\substack{y \in Y^2,\\ (W^-)_y^{-1} \in B}} P_{W^-}^o(y) = \sum_{\substack{(y_1, y_2) \in Y^2,\\ C^{-,\ast}(W_{y_1}^{-1}, W_{y_2}^{-1}) \in B}} P_W^o(y_1) \, P_W^o(y_2) = (MP_W \times MP_W)\big( (C^{-,\ast})^{-1}(B) \big) = C^{-,\ast} \# (MP_W \times MP_W)(B) = (MP_W, MP_W)^{-,\ast}(B).$$
Therefore,
$$MP_{\hat{W}^-} = (MP_{\hat{W}}, MP_{\hat{W}})^{-,\ast}.$$
This shows the fourth formula of Proposition 10.
For every $(y_1, y_2, u_1) \in Y^2 \times X$, we have:
$$P_{W^+}^o(y_1, y_2, u_1) = P_{Y_1, Y_2, U_1}(y_1, y_2, u_1) = P_{Y_1, Y_2}(y_1, y_2) \, P_{U_1 | Y_1, Y_2}(u_1 | y_1, y_2) = P_W^o(y_1) \, P_W^o(y_2) \cdot C^{-,\ast}\big( W_{y_1}^{-1}, W_{y_2}^{-1} \big)(u_1).$$
Therefore,
$$\mathrm{Im}(W^+) = \bigcup_{(y_1, y_2) \in Y^2} \{(y_1, y_2)\} \times \mathrm{supp}\Big( C^{-,\ast}\big( W_{y_1}^{-1}, W_{y_2}^{-1} \big) \Big).$$
For every $(y_1, y_2, u_1) \in \mathrm{Im}(W^+)$, we have:
$$(W^+)_{y_1, y_2, u_1}^{-1}(u_2) = P_{U_2 | Y_1, Y_2, U_1}(u_2 | y_1, y_2, u_1) = \frac{P_{U_1, U_2 | Y_1, Y_2}(u_1, u_2 | y_1, y_2)}{P_{U_1 | Y_1, Y_2}(u_1 | y_1, y_2)} = \frac{P_{X_1 | Y_1}(u_1 \ast u_2 | y_1) \, P_{X_2 | Y_2}(u_2 | y_2)}{C^{-,\ast}\big( W_{y_1}^{-1}, W_{y_2}^{-1} \big)(u_1)} = \frac{W_{y_1}^{-1}(u_1 \ast u_2) \, W_{y_2}^{-1}(u_2)}{C^{-,\ast}\big( W_{y_1}^{-1}, W_{y_2}^{-1} \big)(u_1)} = C_{u_1}^{+,\ast}\big( W_{y_1}^{-1}, W_{y_2}^{-1} \big)(u_2).$$
For every $B \in \mathcal{B}(\Delta_X)$, we have:
$$MP_{W^+}(B) = \sum_{(y_1, y_2) \in Y^2} \; \sum_{\substack{u_1 \in \mathrm{supp}(C^{-,\ast}(W_{y_1}^{-1}, W_{y_2}^{-1})),\\ C_{u_1}^{+,\ast}(W_{y_1}^{-1}, W_{y_2}^{-1}) \in B}} P_W^o(y_1) \, P_W^o(y_2) \cdot C^{-,\ast}\big( W_{y_1}^{-1}, W_{y_2}^{-1} \big)(u_1) = \sum_{(y_1, y_2) \in Y^2} P_W^o(y_1) \, P_W^o(y_2) \sum_{\substack{u_1 \in \mathrm{supp}(C^{-,\ast}(W_{y_1}^{-1}, W_{y_2}^{-1})),\\ C_{u_1}^{+,\ast}(W_{y_1}^{-1}, W_{y_2}^{-1}) \in B}} C^{-,\ast}\big( W_{y_1}^{-1}, W_{y_2}^{-1} \big)(u_1) = \sum_{(y_1, y_2) \in Y^2} P_W^o(y_1) \, P_W^o(y_2) \, \big( C^{+,\ast}(W_{y_1}^{-1}, W_{y_2}^{-1}) \big)(B) = \sum_{(y_1, y_2) \in Y^2} P_W^o(y_1) \, P_W^o(y_2) \, C_B^{+,\ast}\big( W_{y_1}^{-1}, W_{y_2}^{-1} \big) = \int_{\Delta_X \times \Delta_X} C_B^{+,\ast}(p_1, p_2) \cdot d(MP_W \times MP_W)(p_1, p_2) = C^{+,\ast} \# (MP_W \times MP_W)(B) = (MP_W, MP_W)^{+,\ast}(B).$$
Therefore,
$$MP_{\hat{W}^+} = (MP_{\hat{W}}, MP_{\hat{W}})^{+,\ast}.$$
This shows the fifth and last formula of Proposition 10.

References

  1. Nasser, R. Continuity of Channel Parameters and Operations under Various DMC Topologies. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 3185–3189. [Google Scholar]
  2. Polyanskiy, Y.; Poor, H.V.; Verdu, S. Channel Coding Rate in the Finite Blocklength Regime. IEEE Trans. Inf. Theory 2010, 56, 2307–2359. [Google Scholar] [CrossRef]
  3. Polyanskiy, Y. Saddle Point in the Minimax Converse for Channel Coding. IEEE Trans. Inf. Theory 2013, 59, 2576–2595. [Google Scholar] [CrossRef]
  4. Schwarte, H. On weak convergence of probability measures, channel capacity and code error probabilities. IEEE Trans. Inf. Theory 1996, 42, 1549–1551. [Google Scholar] [CrossRef]
  5. Richardson, T.; Urbanke, R. Modern Coding Theory; Cambridge University Press: New York, NY, USA, 2008. [Google Scholar]
  6. Nasser, R. Topological Structures on DMC spaces. arXiv, 2017; arXiv:1701.04467. [Google Scholar]
  7. Engelking, R. General Topology; Monografie Matematyczne: Warsaw, Poland, 1977. [Google Scholar]
  8. Schieler, C.; Cuff, P. The Henchman Problem: Measuring Secrecy by the Minimum Distortion in a List. IEEE Trans. Inf. Theory 2016, 62, 3436–3450. [Google Scholar] [CrossRef]
  9. Torgersen, E. Comparison of Statistical Experiments; Encyclopedia of Mathematics and its Applications, Cambridge University Press: Cambridge, UK, 1991. [Google Scholar]
  10. Şaşoğlu, E.; Telatar, E.; Arıkan, E. Polarization for Arbitrary Discrete Memoryless Channels. In Proceedings of the IEEE Information Theory Workshop, Taormina, Italy, 11–16 October 2009; pp. 144–148. [Google Scholar]
  11. Nasser, R. An Ergodic Theory of Binary Operations, Part II: Applications to Polarization. IEEE Trans. Inf. Theory 2017, 63, 1063–1083. [Google Scholar] [CrossRef]
  12. Cover, T.; Thomas, J. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  13. Shannon, C. The zero error capacity of a noisy channel. IRE Trans. Inf. Theory 1956, 2, 8–19. [Google Scholar] [CrossRef]
  14. Mondelli, M.; Hassani, S.H.; Urbanke, R.L. From Polar to Reed-Muller Codes: A Technique to Improve the Finite-Length Performance. IEEE Trans. Commun. 2014, 62, 3084–3091. [Google Scholar]
  15. Arıkan, E. Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073. [Google Scholar] [CrossRef] [Green Version]
  16. Shannon, C. A Note on a Partial Ordering for Communication Channels. Inform. Contr. 1958, 1, 390–397. [Google Scholar] [CrossRef]
  17. Steenrod, N.E. A convenient category of topological spaces. Michigan Math. J. 1967, 14, 133–152. [Google Scholar] [CrossRef]
  18. Nasser, R. An Ergodic Theory of Binary Operations, Part I: Key Properties. IEEE Trans. Inf. Theory 2016, 62, 6931–6952. [Google Scholar] [CrossRef]
  19. Raginsky, M. Channel Polarization and Blackwell Measures. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 56–60. [Google Scholar]
  20. Franklin, S. Spaces in which sequences suffice. Fundam. Math. 1965, 57, 107–115. [Google Scholar] [CrossRef]
  21. Villani, C. Topics in Optimal Transportation; Graduate Studies in Mathematics, American Mathematical Society: Providence, RI, USA, 2003. [Google Scholar]
