Article

Computability of the Zero-Error Capacity of Noisy Channels †

by Holger Boche 1,‡,§,‖,¶ and Christian Deppe 2,*,§
1 Theoretical Information Technology, Technical University of Munich, 80333 Munich, Germany
2 Institute for Communications Technology, Technische Universität Braunschweig, 38106 Brunswick, Germany
* Author to whom correspondence should be addressed.
† This article is a revised and expanded version of a paper entitled "Computability of the zero-error capacity of noisy channels", which was presented at the 2021 IEEE Information Theory Workshop (ITW), Kanazawa, Japan, 17–21 October 2021.
‡ Current address: Cyber Security in the Age of Large-Scale Adversaries (Exzellenzcluster), Ruhr-Universität Bochum, 44801 Bochum, Germany.
§ Current address: BMBF Research Hub 6G-Life, 80333 Munich, Germany.
‖ Current address: Munich Center for Quantum Science and Technology (MCQST), 80799 Munich, Germany.
¶ Current address: Munich Quantum Valley (MQV), 80799 Munich, Germany.
Information 2025, 16(7), 571; https://doi.org/10.3390/info16070571
Submission received: 28 March 2025 / Revised: 19 June 2025 / Accepted: 1 July 2025 / Published: 3 July 2025
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)

Abstract

The zero-error capacity of discrete memoryless channels (DMCs), introduced by Shannon, is a fundamental concept in information theory with significant operational relevance, particularly in settings where even a single transmission error is unacceptable. Despite its importance, no general closed-form expression or algorithm is known for computing this capacity. In this work, we investigate the computability-theoretic boundaries of the zero-error capacity and establish several fundamental limitations. Our main result shows that the zero-error capacity of noisy channels is not Banach–Mazur-computable and therefore is also not Borel–Turing-computable. This provides a strong form of non-computability that goes beyond classical undecidability, capturing the inherent discontinuity of the capacity function. As a further contribution, we analyze the deep connections between (i) the zero-error capacity of DMCs, (ii) the Shannon capacity of graphs, and (iii) Ahlswede’s operational characterization via the maximum-error capacity of 0–1 arbitrarily varying channels (AVCs). We prove that key semi-decidability questions are equivalent for all three capacities, thus unifying these problems into a common algorithmic framework. While the computability status of the Shannon capacity of graphs remains unresolved, our equivalence result clarifies what makes this problem so challenging and identifies the logical barriers that must be overcome to resolve it. Together, these results chart the computational landscape of zero-error information theory and provide a foundation for further investigations into the algorithmic intractability of exact capacity computations.

1. Introduction

The zero-error capacity of discrete memoryless channels (DMCs) was introduced by Shannon in 1956 [1]. Since then, numerous works have examined this capacity across various channel classes. From the outset, determining C_0 for DMCs has been recognized as highly challenging. In response, Shannon posed a key question: can the zero-error capacity of a DMC be expressed through the capacities, under other error criteria, of suitably chosen channels? A major breakthrough came from Ahlswede, who proved that the value C_0 of a DMC equals the maximum-error capacity of a related 0–1 arbitrarily varying channel (AVC) [2].
This paper investigates the algorithmic computability of the zero-error capacity of DMCs and explores the broader computational implications of the Shannon and Ahlswede characterizations. We adopt Turing machine theory as our model of computability, which accurately reflects the capabilities of real-world digital computers.
Shannon’s original theory also provided a graph-theoretic interpretation: each channel corresponds to a simple graph whose Shannon capacity coincides with the channel’s zero-error capacity. In practice, however, channel descriptions are usually given directly by a transition mapping
W : 𝒳 → 𝒫(𝒴),
where 𝒳 and 𝒴 are finite alphabets and 𝒫(𝒴) denotes the set of probability distributions over 𝒴. A notable application of this formulation is remote state estimation and stabilization [3].
The zero-error capacity is also central in quantum channels and entanglement-assisted classical channels. Research has focused on superactivation effects [4,5], entanglement-assisted gains [6], and connections to noncommutative graph theory [7]. Further studies have explored nonlocal correlations [8], no-signaling assistance [9], and noiseless feedback [10]. Surveys of quantum channel capacities provide broader context [11,12], while recent advances in quantum graph theory offer fresh insights into zero-error communication [13].
In general, one seeks the numerical value of C_0(W), which is typically irrational, and strives for reliable approximation algorithms that compute it to any specified precision.
Shannon's use of graph theory involved defining the confusability graph G_W of a DMC and using its Shannon capacity [14,15,16,17,18,19]. Since then, information theory has vastly expanded to cover multi-user systems, feedback channels, and advanced coding theory. Significant progress has been made on the zero-error capacity of relay, multiple-access, broadcast, and interference channels [20] and of specific models like binary adder and duplication channels [21,22,23]. Further studies have addressed list decoding [24,25], variable-length coding [26], and adversarial multiple-access channels [27].
Recent work [28] has determined the Shannon capacity for two infinite subclasses of strongly regular graphs and analyzed novel graph-join types, strengthening earlier results.
Today, two main algorithmic strategies exist for approximating the zero-error capacity: Shannon's graph-theoretic method and Ahlswede's 0–1 AVC-based method. We show that both approaches are non-recursive: there is no Turing machine that, given W, produces the confusability graph G_W, nor one that constructs the corresponding 0–1 AVC.
Moreover, the zero-error capacity plays a significant role in analyzing the ε-capacity of compound channels under the average decoding error, even when the compound set has only |S| = 2 elements [29].
This paper is structured as follows:
  • Section 2 introduces computability concepts and the zero-error capacity of noisy channels and clarifies its links to the Shannon graph capacity and Ahlswede’s AVC framework;
  • Section 3 presents our main results: the non-computability of the zero-error capacity and the unresolved computability status of the Shannon graph capacity and the maximum-error AVC capacity;
  • Section 4 analyzes 0–1 AVCs under average error constraints, establishes the computability of their capacity, and shows that the Shannon capacity Θ is Borel–Turing-computable if and only if the corresponding 0–1 AVC capacity is;
  • Section 6 summarizes our conclusions and discusses future directions.
Some findings were previously presented at the IEEE Information Theory Workshop 2021 in Kanazawa [30], and related results from ISIT 2020 [31] are revisited in Section 5.

2. Basic Definitions and Results

We apply the theory of Turing machines [32] and recursive functions [33] to investigate the computability of the zero-error capacity. For brevity, we restrict ourselves to an informal description and refer to [34,35,36,37] for a detailed treatment.
Table 1 gives an overview of the main definitions and notations.
Turing machines provide a mathematical idealization of real-world computational machines. Any algorithm that can be executed by a real-world computer can, in theory, be simulated by a Turing machine, and vice versa. However, unlike real-world computers, Turing machines are not constrained by factors such as energy consumption, computation time, or memory size. Furthermore, all computation steps on a Turing machine are assumed to be executed without error.
Recursive functions form a special subset of the set ⋃_{n=0}^∞ {f : ℕ^n ↪ ℕ}, where the symbol "↪" denotes a partial mapping. Turing machines and recursive functions are equivalent in the following sense: a function f : ℕ^n ↪ ℕ is computable by a Turing machine if and only if it is a recursive function.
Definition 1. 
A sequence of rational numbers (r_n)_{n∈ℕ} is said to be computable if recursive functions f_si, f_nu, f_de : ℕ → ℕ exist such that
r_n = (−1)^{f_si(n)} · f_nu(n) / f_de(n)
holds true for all n ∈ ℕ. Likewise, a double sequence of rational numbers (r_{n,m})_{n,m∈ℕ} is said to be computable if recursive functions f_si, f_nu, f_de : ℕ × ℕ → ℕ exist such that
r_{n,m} = (−1)^{f_si(n,m)} · f_nu(n,m) / f_de(n,m)
holds true for all n, m ∈ ℕ.
Definition 2. 
A sequence (x_n)_{n∈ℕ} of real numbers is said to converge effectively towards a number x* ∈ ℝ if a recursive function κ : ℕ → ℕ exists such that |x* − x_n| < 2^{−N} holds true for all n, N ∈ ℕ that satisfy n ≥ κ(N).
Definition 3. 
A real number x is said to be computable if a computable sequence of rational numbers exists that converges effectively towards x.
We denote the set of computable real numbers as ℝ_c.
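Definitions 1–3 can be made concrete with a small sketch. The following Python fragment (our illustration, not part of the original text) represents the computable real √2 by a computable sequence of rationals together with a recursive modulus of effective convergence; the Newton iteration and the crude modulus κ(N) = N are assumptions chosen for simplicity.

from fractions import Fraction

# A minimal sketch, assuming Newton iterates as the rational sequence:
# a computable real is a pair (r, kappa) with |x - r(n)| < 2**-N for n >= kappa(N).

def r(n: int) -> Fraction:
    """Rational Newton iterates converging to sqrt(2), starting at 2."""
    x = Fraction(2)
    for _ in range(n):
        x = (x + 2 / x) / 2
    return x

def kappa(N: int) -> int:
    # Newton's method converges quadratically here, so n = N is a valid
    # (very crude) recursive modulus of effective convergence.
    return N

def approximate(N: int) -> Fraction:
    """Return a rational q with |sqrt(2) - q| < 2**-N."""
    return r(kappa(N))

print(float(approximate(20)))  # 1.4142135623730951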
Definition 4. 
A sequence (x_n)_{n∈ℕ} of computable numbers is called computable if a computable double sequence (r_{n,m})_{n,m∈ℕ} of rational numbers, as well as a recursive function κ : ℕ × ℕ → ℕ, exists such that
|x_n − r_{n,m}| < 2^{−M}
holds true for all n, m, M ∈ ℕ that satisfy m ≥ κ(n, M).
Definition 5. 
A sequence of functions {F_n}_{n∈ℕ} with F_n : X → ℝ_c is computable if the mapping (i, x) ↦ F_i(x) is computable.
Definition 6. 
A computable sequence of computable functions {F_N}_{N∈ℕ} is called computably convergent to F if a partial recursive function φ : ℕ × X → ℕ exists such that
|F(x) − F_N(x)| < 2^{−M}
holds true for all M ∈ ℕ, all N ≥ φ(M, x), and all x ∈ X.
In the following, we consider Turing machines with only one output state. We interpret this output state as the stopping of the Turing machine. This means that for an input x ∈ ℝ_c, the Turing machine TM(x) ends its computation after an unknown but finite number of computation steps, or it computes forever.
Definition 7. 
We call a set M ⊆ ℝ_c semi-decidable if there is a Turing machine TM_M that stops for the input x ∈ ℝ_c if and only if x ∈ M applies.
In [38], Specker constructed a monotonically increasing computable sequence {r_n}_{n∈ℕ} of rational numbers that is bounded by 1 but whose limit x*, which naturally exists, is not a computable number. For all M ∈ ℕ, an n_0 = n_0(M) exists such that for all n ≥ n_0, 0 ≤ x* − r_n < 2^{−M} always holds, but the function n_0 : ℕ → ℕ is not partial recursive. This means there are computable monotonically increasing sequences of rational numbers, each converging to a finite limit value, for which the limit values are not computable numbers and the convergence is therefore not effective. Of course, the set of computable numbers is countable.
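The shape of Specker's construction can be illustrated by a toy sketch. The predicate halts_within below is a hypothetical stand-in for a step-bounded halting test; with a genuinely non-recursive recursively enumerable set in place of the toy predicate, the limit of the sequence encodes the halting problem, so the convergence admits no recursive modulus.

from fractions import Fraction

def halts_within(k: int, steps: int) -> bool:
    # Toy stand-in for "program k halts within `steps` steps" (hypothetical).
    return k % 3 == 0 and steps >= k

def r(n: int) -> Fraction:
    """Monotone computable sequence: add 2**-(k+1) once k is seen to halt."""
    return sum((Fraction(1, 2 ** (k + 1))
                for k in range(n + 1) if halts_within(k, n)), Fraction(0))

# r(n) is computable, monotonically increasing, and bounded by 1, yet for a
# non-recursive halting set its limit would not be a computable number.
for n in (1, 5, 10, 20):
    print(n, float(r(n)))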
We will later examine the zero-error capacity C_0(·) as a function of computable DMCs. To do this, we need to define computable functions in general.
Definition 8. 
A function f : ℝ_c → ℝ_c is called Banach–Mazur-computable if f maps any given computable sequence {x_n}_{n=1}^∞ of computable numbers into a computable sequence {f(x_n)}_{n=1}^∞ of real numbers.
Definition 9. 
A function f : ℝ_c → ℝ_c is called Borel–Turing-computable if there is an algorithm that transforms each given representation of a computable real x into a corresponding representation for f(x).
We note that Turing's original definition of computability conforms to the definition of Borel–Turing computability above. Banach–Mazur computability (see Definition 8) is the weakest form of computability. For an overview of the logical relations between different notions of computability, we again refer to [39].
Now, we want to define the zero-error capacity. For this, we need the definition of a discrete memoryless channel. In the theory of transmission, the receiver must be in a position to successfully decode all of the messages transmitted by the sender.
Let 𝒳 be a finite alphabet. We denote the set of probability distributions on 𝒳 as 𝒫(𝒳). We define the set of computable probability distributions 𝒫_c(𝒳) as the set of all probability distributions P ∈ 𝒫(𝒳) such that P(x) ∈ ℝ_c for all x ∈ 𝒳. Furthermore, for finite alphabets 𝒳 and 𝒴, let CH(𝒳, 𝒴) be the set of all conditional probability distributions (or channels) P_{Y|X} : 𝒳 → 𝒫(𝒴). CH_c(𝒳, 𝒴) denotes the set of all computable conditional probability distributions, i.e., those with P_{Y|X}(·|x) ∈ 𝒫_c(𝒴) for every x ∈ 𝒳.
Let M ⊆ CH_c(𝒳, 𝒴). We call M semi-decidable (see Definition 7) if and only if there is a Turing machine TM_M that either stops or computes forever, depending on whether W ∈ M is true. That means TM_M accepts exactly the elements of M and computes forever for an input W ∈ M^c = CH_c(𝒳, 𝒴) \ M.
Definition 10. 
A discrete memoryless channel (DMC) is a triple (𝒳, 𝒴, W), where 𝒳 is the finite input alphabet, 𝒴 is the finite output alphabet, and W ∈ CH(𝒳, 𝒴), i.e., W(·|x) ∈ 𝒫(𝒴) for x ∈ 𝒳, y ∈ 𝒴. The probability of a sequence y^n ∈ 𝒴^n being received if x^n ∈ 𝒳^n was sent is defined by
W^n(y^n | x^n) = ∏_{j=1}^n W(y_j | x_j).
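A minimal sketch of Definition 10 in Python, using exact rational entries; the 2×2 channel matrix below is an arbitrary example, not one from the text.

from fractions import Fraction
from math import prod

# A DMC as a row-stochastic matrix W[x][y]; memoryless extension to blocks.
W = [[Fraction(2, 3), Fraction(1, 3)],   # W(. | x = 0)
     [Fraction(0),    Fraction(1)]]      # W(. | x = 1)

def W_n(y_seq, x_seq):
    """W^n(y^n | x^n) = product over j of W(y_j | x_j)."""
    return prod(W[x][y] for x, y in zip(x_seq, y_seq))

print(W_n((0, 1, 1), (0, 0, 1)))  # (2/3) * (1/3) * 1 = 2/9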
Definition 11. 
A block code C with the rate R and the block length n consists of
  • A message set ℳ = {1, 2, …, M} with M = ⌈2^{nR}⌉ ∈ ℕ;
  • An encoding function e : ℳ → 𝒳^n;
  • A decoding function d : 𝒴^n → ℳ.
We call such a code an (R, n)-code.
Definition 12. 
1.
The individual message probability of error is defined by the conditional probability of error given that the message m is transmitted:
P_m(C) = Pr{d(Y^n) ≠ m | X^n = e(m)}.
2.
We define the maximal probability of error as P_max(C) = max_{m∈ℳ} P_m(C).
3.
A rate R is said to be achievable if a sequence of (R, n)-codes {C_n} exists with a probability of error P_max(C_n) → 0 as n → ∞.
Two sequences x^n and x̃^n of length n of input variables are distinguishable by the receiver if the vectors W^n(·|x^n) and W^n(·|x̃^n) are orthogonal. That means if W^n(y^n|x^n) > 0, then W^n(y^n|x̃^n) = 0, and if W^n(y^n|x̃^n) > 0, then W^n(y^n|x^n) = 0. We denote by M(W, n) the maximum cardinality of a set of mutually orthogonal vectors among {W^n(·|x^n) : x^n ∈ 𝒳^n}.
There are different ways to define the capacity of a channel. The so-called pessimistic capacity is defined as lim inf_{n→∞} (log_2 M(W, n))/n, and the optimistic capacity is defined as lim sup_{n→∞} (log_2 M(W, n))/n. A discussion of these quantities can be found in [40]. We define the zero-error capacity of W as follows:
C_0(W) = lim inf_{n→∞} (log_2 M(W, n))/n.
For the zero-error capacity, the pessimistic capacity and the optimistic capacity are equal.
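For very small alphabets and blocklengths, M(W, n) can be evaluated by exhaustive search. The following sketch (our illustration; exponential in n) uses the fact that, for a memoryless channel, W^n(·|u) and W^n(·|v) are orthogonal if and only if some coordinate pair of letters is already orthogonal; the pentagon channel at the end is Shannon's classical example with M(W, 1) = 2 and M(W, 2) = 5.

from itertools import product, combinations
from fractions import Fraction

def orthogonal(W, u, v):
    # W^n(.|u) and W^n(.|v) are orthogonal iff the letter rows W(.|u_j)
    # and W(.|v_j) are orthogonal at some position j.
    return any(all(W[a][y] * W[b][y] == 0 for y in range(len(W[0])))
               for a, b in zip(u, v))

def M(W, n):
    # Largest set of pairwise orthogonal input words (tiny cases only).
    words = list(product(range(len(W)), repeat=n))
    best = 1
    for size in range(2, len(words) + 1):
        found = any(all(orthogonal(W, u, v) for u, v in combinations(S, 2))
                    for S in combinations(words, size))
        if found:
            best = size
        else:
            break
    return best

q = 5
W5 = [[Fraction(1, 2) if y in (x, (x + 1) % q) else Fraction(0) for y in range(q)]
      for x in range(q)]
print(M(W5, 1))  # 2; M(W5, 2) would give 5 (Shannon's pentagon), but is slower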
First, we want to introduce Ahlswede's representation of the zero-error capacity. For this, we need the arbitrarily varying channel (AVC). This was introduced under a different name by Blackwell, Breiman, and Thomasian [41], and considerable progress has been made in the study of these channels.
Definition 13. 
Let 𝒳 and 𝒴 be finite sets. A (discrete) arbitrarily varying channel (AVC) is determined by a family of channels with a common input alphabet 𝒳 and output alphabet 𝒴,
𝒲 = {W(·|·, s) ∈ CH(𝒳, 𝒴) : s ∈ 𝒮}.
The index s is called the state, and the set 𝒮 is called the state set. Now, an AVC is defined by a family of sequences of channels
W^n(y^n | x^n, s^n) = ∏_{t=1}^n W(y_t | x_t, s_t)
for all x^n ∈ 𝒳^n, y^n ∈ 𝒴^n, s^n ∈ 𝒮^n, n ∈ ℕ.
Definition 14. 
An (n, M) code is a system {(u_i, D_i)}_{i=1}^M with u_i ∈ 𝒳^n, D_i ⊆ 𝒴^n, and D_i ∩ D_j = ∅ for i ≠ j.
Definition 15. 
1.
The maximal probability of error of the code for an AVC 𝒲 is
λ = max_{s^n∈𝒮^n} max_{1≤i≤M} W^n(D_i^c | u_i, s^n).
2.
The average probability of error of the code for an AVC 𝒲 is
λ̄ = max_{s^n∈𝒮^n} M^{−1} ∑_{i=1}^M W^n(D_i^c | u_i, s^n).
Definition 16. 
1.
The capacity of an AVC 𝒲 with the maximal probability of error is the maximal number C_max(𝒲) such that for all ε, λ > 0, an (n, M) code of the AVC 𝒲 exists for all large n with a maximal probability of error smaller than λ and (1/n) log M > C_max(𝒲) − ε;
2.
The capacity of an AVC 𝒲 with an average probability of error is the maximal number C_av(𝒲) such that for all ε, λ̄ > 0, an (n, M) code of the AVC exists for all large n with an average probability of error smaller than λ̄ and (1/n) log M > C_av(𝒲) − ε.
In the following, we denote by AVC_{0,1} the set of AVCs 𝒲 that satisfy W(y|x, s) ∈ {0, 1} for all y ∈ 𝒴, all x ∈ 𝒳, and all s ∈ 𝒮.
Theorem 1 
(Ahlswede [2]). Let 𝒳 and 𝒴 be finite alphabets with |𝒳| ≥ 2 and |𝒴| ≥ 2.
(i)
For all DMCs W* ∈ CH(𝒳, 𝒴), a 𝒲 ∈ AVC_{0,1} exists such that for the zero-error capacity of W*,
C_0(W*) = C_max(𝒲).   (6)
(ii)
Conversely, for each 𝒲 ∈ AVC_{0,1}, a DMC W* ∈ CH(𝒳, 𝒴) exists such that (6) holds.
The construction is interesting. Therefore, we cite it from [42]:
(i)
For a given W*, we let 𝒲 be the set of stochastic matrices with the index (state) set 𝒮 such that for all x ∈ 𝒳, y ∈ 𝒴, and s ∈ 𝒮, W(y|x, s) = 1 implies W*(y|x) > 0. Then, for all n, x^n ∈ 𝒳^n, and y^n ∈ 𝒴^n, we have W*^n(y^n|x^n) > 0 if and only if an s^n ∈ 𝒮^n exists such that
W^n(y^n | x^n, s^n) = 1.   (7)
Notice that for all λ < 1, a code for 𝒲 with the maximal probability of error λ is a zero-error code for W*. Thus, it follows from (7) that a code is a zero-error code for W* if and only if it is a code for 𝒲 with the maximal probability of error λ < 1.
(ii)
For a given 0–1 type AVC 𝒲 (with the state set 𝒮) and any probability distribution π ∈ 𝒫(𝒮) with π(s) > 0 for all s, let W* = ∑_{s∈𝒮} π(s) W(·|·, s). Then, (7) holds.
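Part (ii) of the construction is directly algorithmic: averaging the 0–1 AVC over any strictly positive state distribution yields a DMC with the same support structure. A small sketch, with a toy two-state AVC as an assumed example:

from fractions import Fraction

def avc_to_dmc(avc, pi):
    """avc[s][x][y] in {0, 1}; pi a strictly positive distribution on states."""
    S, X, Y = len(avc), len(avc[0]), len(avc[0][0])
    return [[sum(pi[s] * avc[s][x][y] for s in range(S)) for y in range(Y)]
            for x in range(X)]

avc = [
    [[1, 0], [0, 1]],  # state 0: identity channel
    [[1, 0], [1, 0]],  # state 1: maps both inputs to output 0
]
pi = [Fraction(1, 2), Fraction(1, 2)]
print(avc_to_dmc(avc, pi))  # [[1, 0], [1/2, 1/2]] -- supports match the AVC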
The zero-error capacity can be characterized in graph-theoretic terms as well. Let W ∈ CH(𝒳, 𝒴) be given with |𝒳| = q. Shannon [1] introduced the confusability graph G_W with q = |G_W| vertices. In this graph, two letters/vertices x and x̃ are connected if they can be confused with one another due to the channel noise (i.e., a y exists such that W(y|x) > 0 and W(y|x̃) > 0). Therefore, a maximum independent set yields the maximum number of single-letter messages which can be sent without danger of confusion. In other words, the receiver knows whether the received message is correct or not. It follows that α(G_W) is the maximum number of messages which can be sent without danger of confusion. Furthermore, the definition is extended to words of length n by α(G_W^n), where G_W^n denotes the n-fold strong product of G_W. Therefore, we can give the following graph-theoretic definition of the Shannon capacity.
Definition 17. 
The Shannon capacity of a graph G ∈ 𝒢 is defined by
Θ(G) := lim sup_{n→∞} α(G^n)^{1/n}.
Shannon discovered the following.
Theorem 2 
(Shannon [1]). Let (𝒳, 𝒴, W) be a DMC. Then,
2^{C_0(W)} = Θ(G_W) = lim_{n→∞} α(G_W^n)^{1/n}.
This limit exists and equals the supremum
Θ(G_W) = sup_{n∈ℕ} α(G_W^n)^{1/n}
according to Fekete's lemma.
Observe that Theorem 2 yields no further information on whether C_0(W) and Θ(G) are computable real numbers.
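Numerically, the content of Theorem 2 can be illustrated on the pentagon C_5, the smallest graph for which α(G ⊠ G) > α(G)². The brute-force sketch below (our illustration; feasible only for tiny graphs) confirms α(C_5) = 2 and α(C_5 ⊠ C_5) = 5, hence Θ(C_5) ≥ √5; Lovász proved equality.

from itertools import combinations, product

def strong_product_power(edges, n_vertices, k):
    # Vertices of G^k are k-tuples; two distinct tuples are adjacent iff every
    # coordinate pair is equal or adjacent in G (the strong product rule).
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    verts = list(product(range(n_vertices), repeat=k))
    def connected(u, v):
        return u != v and all(a == b or (a, b) in adj for a, b in zip(u, v))
    return verts, connected

def alpha(verts, connected):
    # Independence number by exhaustive search (tiny graphs only).
    best = 1
    for size in range(2, len(verts) + 1):
        if any(not any(connected(u, v) for u, v in combinations(S, 2))
               for S in combinations(verts, size)):
            best = size
        else:
            break
    return best

C5 = [(i, (i + 1) % 5) for i in range(5)]
for k in (1, 2):
    verts, conn = strong_product_power(C5, 5, k)
    print(k, alpha(verts, conn))  # k=1 -> 2, k=2 -> 5, so Theta(C5) >= 5**0.5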

3. The Algorithmic Computability of the Zero-Error Capacity

In this section, we investigate the algorithmic computability of the zero-error capacity C_0(W) for discrete memoryless channels (DMCs), since no closed-form expression for C_0(W) is known to date. Furthermore, we analyze the algorithmic relationship between Shannon's and Ahlswede's characterizations of the zero-error capacity.
We show that the function C_0 : CH_c(𝒳, 𝒴) → ℝ and, for every blocklength n, the cardinality M*(W, n) of a maximum-size zero-error code are not Banach–Mazur-computable. Alon and Lubetzky raised the question of whether the set {G : Θ(G) < μ} is semi-decidable. We provide three equivalent conditions under which the answer is affirmative.
Moreover, we demonstrate that the set of channels with zero-error capacity 0 (channels that are useless in this context) is not decidable, though it is semi-decidable. To prove this result, we rely on the following auxiliary lemmas.
Lemma 1. 
  • There is a Turing machine TM_{>0} that stops for x ∈ ℝ_c if and only if x > 0 applies. Hence, the set ℝ_c^+ := {x ∈ ℝ_c : x > 0} is semi-decidable.
  • There is a Turing machine TM_{<0} that stops for x ∈ ℝ_c if and only if x < 0 applies. Hence, the set ℝ_c^− := {x ∈ ℝ_c : x < 0} is semi-decidable.
  • There is no Turing machine TM_{=0} that stops for x ∈ ℝ_c if and only if x = 0 applies.
Proof. 
Let x ∈ ℝ_c be given by the quadruple (a, b, s, ζ), with v_k := (−1)^{s(k)} a(k)/b(k). Then, ã_1, ã_2, ã_3, …, with ã_k := max{v_{ζ(l)} − 2^{−l} : 1 ≤ l ≤ k}, is a computable monotonically increasing sequence and converges to x. The Turing machine TM_{>0} sequentially computes the sequence ã_1, ã_2, ã_3, …. Obviously, a k ∈ ℕ with ã_k > 0 exists if and only if x > 0. Since ã_k is always a rational number, TM_{>0} can directly check algorithmically whether ã_k > 0 applies. We set
TM_{>0}(x) := STOP if it finds a k_0 ∈ ℕ with ã_{k_0} > 0; otherwise, the Turing machine computes forever.
Then, TM_{>0}(x) = STOP applies if and only if x > 0 applies.
The construction of TM_{<0} is analogous, with the computable sequence b̃_1, b̃_2, b̃_3, …, where b̃_k := min{v_{ζ(l)} + 2^{−l} : 1 ≤ l ≤ k} converges monotonically to x. Consequently,
TM_{<0}(x) := STOP if it finds a k_0 ∈ ℕ with b̃_{k_0} < 0; otherwise, the Turing machine computes forever.
We now want to prove the last statement of this lemma. We provide the proof indirectly. Assume that the corresponding Turing machine TM_{=0} exists. Let n ∈ ℕ be arbitrary. We consider an arbitrary Turing machine TM and the computable sequence {λ_m}_{m∈ℕ} of computable numbers:
λ_m := 2^{−l} if TM stops for the input n after l ≤ m steps; λ_m := 2^{−m} if TM does not stop for the input n within m steps.
Obviously, for all m ∈ ℕ, it holds that λ_m ≥ λ_{m+1}, and lim_{m→∞} λ_m =: x ≥ 0, where lim_{m→∞} λ_m = 0 if and only if TM for the input n does not stop in a finite number of steps. For all m, N ∈ ℕ with m ≥ N,
|λ_m − x| ≤ 2^{−N}
holds, as we will show by considering the following cases:
  • Assume that TM stops for the input n after l ≤ N steps. For all m ≥ N, λ_m = λ_N then applies, and thus |λ_m − x| = 0.
  • Assume that TM does not stop for the input n after l ≤ N steps. For all m ≥ N, then 2^{−N} = λ_N ≥ λ_m, and thus |λ_m − x| ≤ |2^{−N} − x| ≤ |2^{−N} − 0| = 2^{−N}.
Hence, from the pair (TM, n) we obtain a computable real number x with the representation ((λ_m)_{m∈ℕ}, η), where the partial recursive function η : ℕ → ℕ is an effective modulus of convergence for x, i.e., x ∼ ((λ_m)_{m∈ℕ}, η). This representation can be passed to a potential Turing machine TM_{=0} as input. Consequently, TM_{=0} stops for the input x if and only if TM for the input n does not stop in a finite number of steps. Thus, every Turing machine TM_{=0} solves the halting problem for every input n. The halting problem cannot be solved by a Turing machine [32]. This proves the lemma. □
In the following lemma, we give an example of a function that is not Banach–Mazur-computable.
Lemma 2. 
Let x ∈ [0, ∞) ∩ ℝ_c be arbitrary. We consider the following function:
f_1(x) := 1 for x > 0, x ∈ ℝ_c, and f_1(x) := 0 for x = 0.
The function f_1 is not Banach–Mazur-computable.
Proof. 
For all x ∈ [0, ∞) ∩ ℝ_c, it holds that f_1(x) ∈ ℝ_c. We assume that f_1 is Banach–Mazur-computable. Let {x_n}_{n∈ℕ} be an arbitrary computable sequence of computable numbers with {x_n}_{n∈ℕ} ⊂ [0, ∞).
Then, (f_1(x_n))_{n∈ℕ} is a computable sequence of computable numbers. We take a set A ⊂ ℕ that is recursively enumerable but not recursive. Then, let TM_A be a Turing machine that stops for the input n if and only if n ∈ A holds. TM_A accepts exactly the elements of A. Let n ∈ ℕ be arbitrary. We now define
λ_{n,m} := 2^{−l} if TM_A stops for the input n after l ≤ m steps; λ_{n,m} := 2^{−m} if TM_A does not stop within m steps for the input n.
Then, (λ_{n,m})_{n,m∈ℕ} is a computable (double) sequence of computable numbers. For n ∈ ℕ, m ≥ M, and M ∈ ℕ, this implies
|λ_{n,m} − λ_{n,M}| < 2^{−M}.
This means that there is effective convergence for every n ∈ ℕ. Consequently, as in the proof of Lemma 1, for every n ∈ ℕ, a λ_n* ∈ ℝ_c with lim_{m→∞} |λ_n* − λ_{n,m}| = 0 exists, and the sequence (λ_n*)_{n∈ℕ} is a computable sequence of computable numbers. This means that (f_1(λ_n*))_{n∈ℕ} is a computable sequence of computable numbers, where
f_1(λ_n*) = 1 if λ_n* > 0, and f_1(λ_n*) = 0 if λ_n* = 0
applies. Hence, the following Turing machine TM* : ℕ → {yes, no} exists: TM* computes the value f_1(λ_n*) for the input n. If f_1(λ_n*) = 1, then TM*(n) = yes, i.e., n ∈ A. If f_1(λ_n*) = 0, then TM*(n) = no, i.e., n ∉ A. This applies to every n ∈ ℕ, and therefore A is recursive, which contradicts the assumption. This means that f_1 is not Banach–Mazur-computable. □
Theorem 3. 
Let 𝒳, 𝒴 be finite alphabets with |𝒳| ≥ 2 and |𝒴| ≥ 2. Then, C_0 : CH_c(𝒳, 𝒴) → ℝ is not Banach–Mazur-computable.
Proof. 
Let |𝒳| = |𝒴| = 2; we show that C_0 : CH_c(𝒳, 𝒴) → ℝ is not Banach–Mazur-computable. For 0 ≤ δ < 1/2, we choose W_δ(·|1) = (1 − δ, δ) and W_δ(·|0) = (δ, 1 − δ). Then, we have
C_0(W_δ) = 1 if δ = 0, and C_0(W_δ) = 0 if 0 < δ < 1/2.
We consider the function ξ : [0, 1/2) ∩ ℝ_c → {0, 1} with ξ(δ) = C_0(W_δ). It follows from Lemma 2 that ξ is not Banach–Mazur-computable. □
Therefore, the zero-error capacity cannot be computed algorithmically.
Remark 1. 
There are still some questions that we would like to discuss.
1.
It is not clear whether C_0(W) ∈ ℝ_c applies to all channels W ∈ CH_c(𝒳, 𝒴).
2.
In addition, it is not clear whether Θ is Borel–Turing-computable. Theorem 3 shows that the corresponding statement does not hold for the zero-error capacity of DMCs: C_0 is not even Banach–Mazur-computable.
In the following, we want to investigate the semi-decidability of the set {W ∈ CH_c(𝒳, 𝒴) : C_0(W) > λ}.
Theorem 4. 
Let 𝒳, 𝒴 be finite alphabets with |𝒳| ≥ 2 and |𝒴| ≥ 2. For all λ ∈ ℝ_c with 0 ≤ λ < log_2 min{|𝒳|, |𝒴|}, the sets {W ∈ CH_c(𝒳, 𝒴) : C_0(W) > λ} are not semi-decidable.
Proof. 
Let 𝒳 = {1, 2, …, |𝒳|} ⊂ ℕ and 𝒴 = {1, 2, …, |𝒴|} ⊂ ℕ be arbitrary finite alphabets with |𝒳| ≥ 2 and |𝒴| ≥ 2, and let D = min{|𝒳|, |𝒴|}. First, we consider the case |𝒳| = D. Let us consider the channel
W*(y|x) = 1 if y = x, and W*(y|x) = 0 if y ≠ x.
It holds that C_0(W*) = log_2 D. For 0 < δ < 1/(|𝒴| − 1), we define the channel
W_{δ,*}(y|x) = 1 − δ(|𝒴| − 1) if y = x, and W_{δ,*}(y|x) = δ if y ≠ x.
It holds that C_0(W_{δ,*}) = 0 for 0 < δ < 1/(|𝒴| − 1). Let us now assume that a λ̂ ∈ ℝ_c with 0 ≤ λ̂ < log_2 D exists such that the set {W ∈ CH_c(𝒳, 𝒴) : C_0(W) > λ̂} is semi-decidable. Then, we consider the Turing machine TM_{>λ̂} which accepts this set. Furthermore, we consider for 0 ≤ δ < 1/(|𝒴| − 1) the following Turing machine TM*:
  • TM* simulates two Turing machines TM_1 := TM_{>0} and TM_2 := TM_{>λ̂};
  • In parallel, TM_{>0} receives the input δ and tests whether δ > 0;
  • TM_{>0} stops if and only if δ > 0.
It is shown in Lemma 1 that such a Turing machine exists. For the input δ = 0, TM_{>0} computes forever. The second Turing machine is defined by
  • TM_2(δ) := TM_{>λ̂}(W_{δ,*});
  • For δ > 0, it holds that C_0(W_{δ,*}) = 0, while C_0(W_{0,*}) = C_0(W*) = log_2 D > λ̂;
  • Therefore, TM_2 stops for 0 ≤ δ < 1/(|𝒴| − 1) if and only if δ = 0.
We now let TM* stop for the input δ if and only if one of the two Turing machines TM_1 or TM_2 stops. Exactly one of the two Turing machines stops for every 0 ≤ δ < 1/(|𝒴| − 1).
If the Turing machine TM_1 stops for the input δ, we set TM*(δ) = 1. If the Turing machine TM_2 stops for the input δ, we set TM*(δ) = 0. Therefore, we have
TM*(δ) = 0 if δ = 0, and TM*(δ) = 1 if 0 < δ < 1/(|𝒴| − 1).
We have shown in Lemma 1 that such a Turing machine cannot exist. This proves the theorem for D = |𝒳|. The proof for D = |𝒴| is very similar. □
For W ∈ CH_c(𝒳, 𝒴) and n ∈ ℕ, let M*(W, n) be the cardinality of a maximum code with a decoding error of 0. This maximum code always exists because we only have a finite set of possible codes for the blocklength n. Of course, a well-defined function M*(·, n) : CH_c(𝒳, 𝒴) → ℕ exists for every n ∈ ℕ. Because of Fekete's lemma, we have
C_0(W) = lim_{n→∞} (1/n) log_2 M*(W, n) = sup_{n∈ℕ} (1/n) log_2 M*(W, n).   (14)
We now have the following theorem regarding the Banach–Mazur computability of the function M*.
Theorem 5. 
Let 𝒳, 𝒴 be finite alphabets with |𝒳| ≥ 2 and |𝒴| ≥ 2. The function
M*(·, n) : CH_c(𝒳, 𝒴) → ℕ
is not Banach–Mazur-computable for all n ∈ ℕ.
Proof. 
Let 𝒳 and 𝒴 be finite alphabets with |𝒳| ≥ 2 and |𝒴| ≥ 2, and let N ∈ ℕ be arbitrary. Consider the "ideal channel" W_1 ∈ CH_c(𝒳, 𝒴) with M*(W_1, N) = min{|𝒳|, |𝒴|}^N. Furthermore, consider any channel W_2 ∈ CH_c(𝒳, 𝒴) with W_2(y|x) > 0 for all y ∈ 𝒴 and all x ∈ 𝒳. Then, M*(W_2, N) = 1 for all N ∈ ℕ, and consequently, because of (14), C_0(W_2) = 0. Now, we can directly apply the proof of Theorem 3 to the function M*(·, N) : CH_c(𝒳, 𝒴) → ℝ. M*(·, N) is therefore not Banach–Mazur-computable. □
We now want to examine the question of whether a computable sequence of Banach–Mazur-computable lower bounds can be found for C_0(·). We set
F_N(W) := max_{1≤n≤N} (1/n) log_2 M*(W, n).
For all W ∈ CH_c(𝒳, 𝒴) and for all N ∈ ℕ, we have F_N(W) ≤ F_{N+1}(W) and lim_{N→∞} F_N(W) = C_0(W). However, this cannot be exploited algorithmically because, due to Theorem 5, the functions F_N are not Banach–Mazur-computable. We next want to show that this is a general phenomenon for C_0.
Theorem 6. 
Let 𝒳, 𝒴 be finite alphabets with |𝒳| ≥ 2 and |𝒴| ≥ 2. No computable sequence {F_N}_{N∈ℕ} of Banach–Mazur-computable functions exists that simultaneously satisfies the following:
1.
For all N ∈ ℕ, it holds that F_N(W) ≤ C_0(W) for all W ∈ CH_c(𝒳, 𝒴);
2.
For all W ∈ CH_c(𝒳, 𝒴), it holds that lim_{N→∞} F_N(W) = C_0(W).
Proof. 
Assume to the contrary that finite alphabets 𝒳̂ and 𝒴̂ exist with |𝒳̂| ≥ 2 and |𝒴̂| ≥ 2, as well as a computable sequence {F_N} of Banach–Mazur-computable functions, such that the following holds true:
  • For all N ∈ ℕ and all W ∈ CH_c(𝒳̂, 𝒴̂), we have F_N(W) ≤ C_0(W);
  • For all W ∈ CH_c(𝒳̂, 𝒴̂), we have lim_{N→∞} F_N(W) = C_0(W).
We consider for N ∈ ℕ the function
F̄_N(W) = max_{1≤n≤N} F_n(W), W ∈ CH_c(𝒳̂, 𝒴̂).
The function F̄_N is Banach–Mazur-computable (see [37]). The sequence {F̄_N}_{N∈ℕ} is a computable sequence of Banach–Mazur-computable functions. For all N ∈ ℕ, it holds that F̄_N(W) ≤ F̄_{N+1}(W) and lim_{N→∞} F̄_N(W) = C_0(W) for W ∈ CH_c(𝒳̂, 𝒴̂). Since the sequence {F̄_N} is computable, we can find a Turing machine TM̲ such that for all W ∈ CH_c(𝒳̂, 𝒴̂) and all N ∈ ℕ, F̄_N(W) = TM̲(W, N) applies (according to the s-m-n theorem [43]). Let λ be given arbitrarily with 0 < λ < log_2(min{|𝒳̂|, |𝒴̂|}). In addition, we also use the Turing machine TM_{>λ}, which stops for the input x if and only if x > λ (see the proof of Theorem 4). Just as in the proof of Theorem 4, we use the two Turing machines TM_{>λ} and TM̲ to build a Turing machine TM*. The Turing machine TM* stops exactly when an N_0 ∈ ℕ exists such that F̄_{N_0}(W) > λ holds. Such an N_0 exists for W ∈ CH_c(𝒳̂, 𝒴̂) if and only if C_0(W) > λ. The set {W ∈ CH_c(𝒳̂, 𝒴̂) : C_0(W) > λ} would then be semi-decidable, which is a contradiction to Theorem 4. □
We make the following observation: for all μ ∈ ℝ_c with μ ≥ 1, the sets {G ∈ 𝒢 : Θ(G) > μ} are semi-decidable. It holds that 2^{C_0(W)} = Θ(G_W).
Theorem 7. 
The following three statements A, B, and C are equivalent:
A 
For all finite alphabets 𝒳 and 𝒴 with |𝒳| ≥ 2 and |𝒴| ≥ 2 and for all λ ∈ ℝ_c with λ > 0, the sets
{W ∈ CH_c(𝒳, 𝒴) : C_0(W) < λ}
are semi-decidable.
B 
For all μ ∈ ℝ_c with μ > 1, the sets
{G ∈ 𝒢 : Θ(G) < μ}
are semi-decidable.
C 
For all finite alphabets 𝒳 and 𝒴 with |𝒳| ≥ 2 and |𝒴| ≥ 2 and for all λ ∈ ℝ_c with λ > 0, the sets
{{W(·|·, s)}_{s∈𝒮} ∈ AVC_{0,1} : C_max({W(·|·, s)}_{s∈𝒮}) < λ}
are semi-decidable.
Proof. 
First, we show A ⇒ B. Let μ ∈ ℝ_c with μ > 1. Then, μ = 2^λ with λ ∈ ℝ_c, λ > 0. Let 𝒳, 𝒴 be finite sets with |𝒳| ≥ 2 and |𝒴| ≥ 2. Then, the set
{W ∈ CH_c(𝒳, 𝒴) : C_0(W) < λ}
is semi-decidable by assumption. Let TM_{<λ} be the associated Turing machine. Let Ĝ ∈ 𝒢 be chosen arbitrarily. From Ĝ = (V̂, Ê), we algorithmically construct a channel W_Ĝ ∈ CH_c(𝒳, 𝒴) with |𝒳| = |Ĝ| as follows. We consider the set Z := {{v} : v ∈ V̂} ∪ Ê and an arbitrary output alphabet 𝒴 with a bijection f : Z → 𝒴. Therefore, it is obvious that |𝒴| = |Z|. For v ∈ V̂, we set Y(v) := {f(z) : z ∈ Z and v ∈ z}. We define
W_Ĝ(y|v) = 1/|Y(v)| if y ∈ Y(v), and 0 otherwise.
Of course, Ĝ is the confusability graph of W_Ĝ, i.e., G_{W_Ĝ} = Ĝ. It holds that C_0(W_Ĝ) = log_2 Θ(Ĝ). Therefore, if Θ(Ĝ) < μ, then W_Ĝ ∈ {W ∈ CH_c(𝒳, 𝒴) : C_0(W) < λ}, and the Turing machine TM_{<μ}(Ĝ) := TM_{<λ}(W_Ĝ) stops. Conversely, if TM_{<λ}(W_Ĝ) stops for Ĝ ∈ 𝒢, then C_0(W_Ĝ) < λ and therefore Θ(Ĝ) < μ. Thus, we have shown A ⇒ B.
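The construction of W_Ĝ from Ĝ used in this step is effective and easy to implement. A sketch under the assumption of a graph given as a vertex count and an edge list:

from fractions import Fraction

def graph_to_channel(n_vertices, edges):
    # Outputs are "v alone" (singletons) or "v confused along edge e" (edges);
    # the confusability graph of the resulting channel is the input graph.
    Z = [frozenset([v]) for v in range(n_vertices)] + \
        [frozenset(e) for e in edges]
    W = []
    for v in range(n_vertices):
        Yv = [z for z in Z if v in z]
        W.append([Fraction(1, len(Yv)) if v in z else Fraction(0) for z in Z])
    return W

# Path 0-1-2: vertices 0 and 2 are non-adjacent, hence distinguishable.
for row in graph_to_channel(3, [(0, 1), (1, 2)]):
    print(row)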
Now, we show B ⇒ A. Let 𝒳 and 𝒴 be finite alphabets with |𝒳| ≥ 2 and |𝒴| ≥ 2, and let W ∈ CH_c(𝒳, 𝒴) be given. We construct a sequence of confusability graphs as follows.
For all pairs x, x̃ ∈ 𝒳 with x ≠ x̃, we consider the number d(x, x̃) := ∑_{y=1}^{|𝒴|} W(y|x) W(y|x̃) and start, in parallel, the computation of TM_{>0}(d(x, x̃)). For N = 1, we compute the first step of each of these computations. If the Turing machine TM_{>0} stops for the input d(x, x̃) in the first step, then G_1 has the edge {x, x̃}. If for x, x̃ ∈ 𝒳 with x ≠ x̃ the Turing machine TM_{>0} does not stop after the first step, then {x, x̃} ∉ E(G_1).
For N = 2, we construct G_2 as follows. For all x, x̃ that have no edge in G_1, we let TM_{>0} carry out the second computation step for the input d(x, x̃). If TM_{>0} stops, then G_2 has the edge {x, x̃}; G_2 also receives all edges of G_1. If for x, x̃ ∈ 𝒳 with x ≠ x̃ the Turing machine TM_{>0} does not stop after the second step, then {x, x̃} ∉ E(G_2). We continue this process iteratively, generating a sequence of graphs G_1, G_2, G_3, …, all sharing the same vertex set, with the edges satisfying E(G_1) ⊆ E(G_2) ⊆ E(G_3) ⊆ ⋯. The Turing machine TM_{>0} stops for the input d(x, x̃) if and only if d(x, x̃) > 0. We have a number of tests in each step that falls monotonically in N (generally not strictly). It holds that
Θ(G_1) ≥ Θ(G_2) ≥ ⋯ ≥ Θ(G_n) ≥ ⋯.
An n_0 exists such that
G_W = G_{n_0},
where G_W is the confusability graph of W. Note that we do not have a computable upper bound for n_0. However, the latter is not required for the proof. Therefore,
Θ(G_{n_0}) = 2^{C_0(W)}.
Let TM_{𝒢,λ} be the Turing machine which accepts the set {G ∈ 𝒢 : Θ(G) < 2^λ}. We have already shown that Ŵ ∈ {W ∈ CH_c(𝒳, 𝒴) : C_0(W) < λ} holds if and only if the sequence {Ĝ_n}_{n∈ℕ} with E(Ĝ_n) ⊆ E(Ĝ_{n+1}) (all graphs sharing the same vertex set) contains an n_0 with Θ(Ĝ_{n_0}) < 2^λ. Furthermore, the sequence is computable. We only have to test, for the sequence {Ĝ_n}_{n∈ℕ} generated algorithmically from Ŵ, whether Ĝ_n ∈ {G ∈ 𝒢 : Θ(G) < 2^λ} applies for some n. This means that we have to test whether TM_{𝒢,λ}(Ĝ_n) stops for a certain n. We compute the first step of TM_{𝒢,λ}(Ĝ_1). If the Turing machine stops, then C_0(Ŵ) < λ. Otherwise, we compute the second step of TM_{𝒢,λ}(Ĝ_1) and the first step of TM_{𝒢,λ}(Ĝ_2). We continue this dovetailing recursively, and it is clear that the computation stops if and only if C_0(Ŵ) < λ. Otherwise, the Turing machine computes forever.
Now, we show A ⇒ C. Let λ ∈ ℝ_c with λ > 0, and let {W(·|·, s)}_{s∈𝒮} ∈ AVC_{0,1} be arbitrarily chosen. From {W(·|·, s)}_{s∈𝒮}, we can effectively construct a DMC W* ∈ CH_c(𝒳, 𝒴) according to Ahlswede's approach (Theorem 1), so that C_0(W*) = C_max({W(·|·, s)}_{s∈𝒮}). This means that C_0(W*) < λ if and only if C_max({W(·|·, s)}_{s∈𝒮}) < λ. By assumption, the set {W ∈ CH_c(𝒳, 𝒴) : C_0(W) < λ} is semi-decidable. We use this to construct a Turing machine TM_{c,<λ} that stops when C_max({W(·|·, s)}_{s∈𝒮}) < λ applies; otherwise, TM_{c,<λ} computes forever. Therefore, C holds.
Now, we show C ⇒ A. The idea of this part of the proof is similar to that of part B ⇒ A. Let W ∈ CH_c(𝒳, 𝒴) be arbitrary. Similar to the case B ⇒ A, we construct a suitable computable sequence {{W_k(·|·, s)}_{s∈𝒮_k}}_{k∈ℕ} of 0–1 AVCs on 𝒳 and 𝒴, such that the following assertions are satisfied:
  • For all k ∈ ℕ, we have 𝒮_k ⊆ 𝒮_{k+1}, as well as
    W_k(y|x, s) = W_{k+1}(y|x, s)
    for all x ∈ 𝒳, all y ∈ 𝒴, and all s ∈ 𝒮_k.
  • A k_0 ∈ ℕ exists such that 𝒮_{k_0} = 𝒮_k for all k ≥ k_0 and
    W(y|x) > 0 ⟺ ∃ s ∈ 𝒮_{k_0} : W_{k_0}(y|x, s) = 1
    for all x ∈ 𝒳 and all y ∈ 𝒴.
The AVC {W_{k_0}(·|·, s)}_{s∈𝒮_{k_0}} then satisfies the requirements of Theorem 1.
In general, k_0 cannot be computed effectively from W ∈ CH_c(𝒳, 𝒴), but this is not a problem for the semi-decidability for all finite 𝒳, 𝒴 with |𝒳| ≥ 2 and |𝒴| ≥ 2.
So, we have for k ∈ ℕ
C_max({W_{k+1}(·|·, s)}_{s∈𝒮_{k+1}}) ≤ C_max({W_k(·|·, s)}_{s∈𝒮_k}),
and it holds that
C_0(W) < λ ⟺ ∃ k_0 : C_max({W_{k_0}(·|·, s)}_{s∈𝒮_{k_0}}) < λ.
We can use this property and the semi-decidability required in C, just like in the proof of B ⇒ A, to construct a Turing machine TM_{<λ} that stops for W ∈ CH_c(𝒳, 𝒴) exactly if C_0(W) < λ applies, and computes forever otherwise.
This proves the theorem. □
Remark 2
(See also Section 1). Alon and Lubetzky asked whether the set {G : Θ(G) < μ} is semi-decidable (see [44]). We see that the answer to their question is positive if and only if Assertion A from Theorem 7 holds true. This is interesting for the following reason: on the one hand, the set {G ∈ 𝒢 : Θ(G) > μ} is semi-decidable for μ ∈ ℝ_c with μ ≥ 1, but on the other hand, even for |𝒳| = |𝒴| = 2 and λ ∈ ℝ_c with 0 < λ < 1, the set {W ∈ CH_c(𝒳, 𝒴) : C_0(W) > λ} is not semi-decidable. So, there is no equivalence regarding the semi-decidability of these sets.
In the next theorem, we look at useless channels in terms of the zero-error capacity. The set of useless channels is defined by
N_0(𝒳, 𝒴) := {W ∈ CH_c(𝒳, 𝒴) : C_0(W) = 0},
where 𝒳 and 𝒴 are finite alphabets with |𝒳| ≥ 2 and |𝒴| ≥ 2. It is clear from Theorem 4 that the complement of N_0(𝒳, 𝒴) is not semi-decidable; in particular, N_0(𝒳, 𝒴) is not decidable.
Theorem 8. 
Let 𝒳 and 𝒴 be finite alphabets with |𝒳| ≥ 2 and |𝒴| ≥ 2. Then, the set N_0(𝒳, 𝒴) is semi-decidable.
Proof. 
For the proof of this theorem, we use the proof of Theorem 7. We have to construct a Turing machine TM_0 as follows. TM_0 is defined on CH_c(𝒳, 𝒴) and stops for an input W if and only if W ∈ N_0(𝒳, 𝒴); otherwise, it computes forever. For the input W, we start the Turing machine TM_{>0} in parallel for all x, x̃ ∈ 𝒳 with x ≠ x̃ and all y ∈ 𝒴 and test TM_{>0}(W(y|x) · W(y|x̃)). We let all |𝒴| · |𝒳| · (|𝒳| − 1) Turing machines TM_{>0} compute one step of the computation in parallel. As soon as a Turing machine stops, it is not continued. The Turing machine TM_0(W) stops if and only if for every pair x, x̃ ∈ 𝒳 with x ≠ x̃, a y exists such that TM_{>0}(W(y|x) · W(y|x̃)) stops. Then, the confusability graph G_W is a complete graph, and consequently, Θ(G_W) = 1 and C_0(W) = 0. □
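For channels with exact rational entries, the test behind Theorem 8 even becomes fully decidable, since W(y|x) · W(y|x̃) > 0 can be checked directly; the semi-decidability subtlety arises only because, for general computable reals, positivity can merely be confirmed in the limit. A sketch of the rational case:

from fractions import Fraction

def is_useless(W):
    # C_0(W) = 0 iff every pair of inputs shares an output of positive
    # probability, i.e. the confusability graph is complete.
    X, Y = len(W), len(W[0])
    return all(any(W[x][y] * W[xp][y] > 0 for y in range(Y))
               for x in range(X) for xp in range(X) if x != xp)

BSC = [[Fraction(2, 3), Fraction(1, 3)], [Fraction(1, 3), Fraction(2, 3)]]
ID  = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
print(is_useless(BSC), is_useless(ID))  # True False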
In [45], on the other hand, it was shown that the zero-error capacity for fixed input and output alphabets can be computed on a Blum–Shub–Smale machine.

4. The Computability of Θ and 0–1 AVCs

We know that the set {G ∈ 𝒢 : Θ(G) = 1} is decidable, and we know that the set {{W(·|·, s)}_{s∈𝒮} ∈ AVC_{0,1} : C_max({W(·|·, s)}_{s∈𝒮}) = 0} is decidable. However, we have shown nothing about the computability of the above quantities so far. If we look at 0–1 AVCs under the average error criterion, it holds that C_av : AVC_{0,1} → ℝ_c is computable and the set {{W(·|·, s)}_{s∈𝒮} ∈ AVC_{0,1} : C_av({W(·|·, s)}_{s∈𝒮}) = 0} is decidable. It holds that N_{0,av} ⊊ N_{0,max}, where N_{0,av} and N_{0,max} denote the sets of 0–1 AVCs with C_av = 0 and C_max = 0, respectively. For an AVC {W(·|·, s)}_{s∈𝒮}, let M(x) := {y ∈ 𝒴 : ∃ s ∈ 𝒮 with W(y|x, s) = 1}; we have
C_max({W(·|·, s)}_{s∈𝒮}) = 0 ⟺ for all x, x̂ ∈ 𝒳, M(x) ∩ M(x̂) ≠ ∅.
In general, it is unclear whether Θ(G) and C_max({W(·|·, s)}_{s∈𝒮}) are computable. The computability of both capacities is open, but we can show the computability of the average-error capacity of 0–1 AVCs. For a comprehensive survey of the general theory of AVCs, see [42].
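The zero-test for C_max on 0–1 AVCs stated above is a finite combinatorial check. A sketch, with a toy identity/flip AVC as an assumed example:

def c_max_is_zero(avc):
    # M(x): outputs reachable from x under some state; C_max = 0 iff all
    # pairs M(x), M(x') intersect (every input pair remains confusable).
    S, X, Y = len(avc), len(avc[0]), len(avc[0][0])
    M = [{y for y in range(Y) for s in range(S) if avc[s][x][y] == 1}
         for x in range(X)]
    return all(M[x] & M[xp] for x in range(X) for xp in range(x + 1, X))

avc = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]   # identity / flip on {0, 1}
print(c_max_is_zero(avc))  # True: M(0) = M(1) = {0, 1}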
Theorem 9. 
The function C_av : AVC_{0,1} → ℝ_c is Borel–Turing-computable.
Remark 3. 
It is important that in Theorem 9 we restrict C_av, as a function, to the set of all 0–1 AVCs and examine Borel–Turing computability on this restricted set. This is because it was shown in [46] that for all |𝒳| ≥ 2, |𝒴| ≥ 3, and a fixed |𝒮| ≥ 2, the capacity C_av is, as a function of general computable AVCs, not Banach–Mazur-computable.
Proof of Theorem 9. 
We want to design a Turing machine that solves the above task. We use x, y, and s as variables with 1 ≤ x ≤ |𝒳|, 1 ≤ y ≤ |𝒴|, and 1 ≤ s ≤ |𝒮|. Let {W(·|·, s)}_{s∈𝒮} ∈ AVC_{0,1} be an arbitrary 0–1 AVC. A set of vectors in ℝ_+^{|𝒴|} is given by
v_{x,s} = (W(1|x, s), …, W(|𝒴| | x, s))
with x ∈ 𝒳 and s ∈ 𝒮. Each of these vectors is a 0–1 vector with only one non-zero element. Let E := {e_i}_{i=1}^{|𝒴|} be the standard basis of ℝ^{|𝒴|}. Then, E forms the set of extreme points of the probability vectors in ℝ^{|𝒴|}. We can identify the set of probability vectors with the set 𝒫(𝒴). We now want to show that the set
{{W(·|·, s)}_{s∈𝒮} ∈ AVC_{0,1} : C_av({W(·|·, s)}_{s∈𝒮}) = 0}
is decidable by constructing a Turing machine that decides for each channel {W(·|·, s)}_{s∈𝒮} whether it is symmetrizable or not. An AVC {W(·|·, s)}_{s∈𝒮} is called symmetrizable if and only if a DMC U ∈ CH(𝒳, 𝒮) exists such that
∑_{s∈𝒮} v_{x̃,s} U(s|x) = ∑_{s∈𝒮} v_{x,s} U(s|x̃)   (17)
holds true for all x, x̃ ∈ 𝒳. If a general AVC {W(·|·, s)}_{s∈𝒮} is symmetrizable, then C_av({W(·|·, s)}_{s∈𝒮}) = 0. First, we will show that we can algorithmically decide whether an AVC {W(·|·, s)}_{s∈𝒮} ∈ AVC_{0,1} is symmetrizable or not. Let {W(·|·, s)}_{s∈𝒮} ∈ AVC_{0,1} be symmetrizable. Define for all x ∈ 𝒳
I_U(x) := {s ∈ 𝒮 : U(s|x) > 0}.
If s ∈ I_U(x̃) holds true, then the vector v_{x,s} appears with positive weight on the right-hand side of (17). Observe that for all x ∈ 𝒳 and all s ∈ I_U(x), the vector v_{x,s} is an element of E. Due to (17), for all x, x̃ ∈ 𝒳 and s ∈ I_U(x), an s̃ must exist such that v_{x,s} = v_{x̃,s̃}. Then, it follows that s̃ belongs to the set I_U(x̃). We can now swap the roles of x and x̃ and have thus shown that |I_U(x)| = |I_U(x̃)|. Since both x and x̃ were arbitrary, we have
|I_U(1)| = |I_U(2)| = ⋯ = |I_U(|𝒳|)| = ν.   (18)
Let V_x := {v_{x,s} : s ∈ 𝒮, U(s|x) > 0} ⊆ E with x ∈ 𝒳. Then, for all x, x̃ ∈ 𝒳 with x ≠ x̃, it holds that V_x ∩ V_x̃ = V(x, x̃), and because of (18), it holds that |V(x, x̃)| = ν. Let
V(x, x̃) = {v_1(x, x̃), …, v_ν(x, x̃)}
be a list of the elements. For 1 ≤ x ≤ |𝒳| and 1 ≤ x̃ ≤ |𝒳|, let s_{x,x̃} : {1, …, ν} → 𝒮 be the function with
v_{x, s_{x,x̃}(1)} = v_1(x, x̃), …, v_{x, s_{x,x̃}(ν)} = v_ν(x, x̃).
Let f_x̃(s) := U(s|x̃) with x̃ ∈ 𝒳 and s ∈ 𝒮; then, it holds that
∑_{s∈𝒮} v_{x,s} f_x̃(s) = ∑_{s∈I_U(x̃)} v_{x,s} f_x̃(s) = ∑_{t=1}^ν v_t(x, x̃) f_x̃(s_{x,x̃}(t)) = ∑_{t=1}^ν v_t(x, x̃) f_x(s_{x,x̃}(t)).
Because the v_t(x, x̃) with 1 ≤ t ≤ ν are extreme points of the set 𝒫(𝒴), the following applies for 1 ≤ t ≤ ν:
0 ≤ f_x̃(s_{x,x̃}(t)) = f_x(s_{x,x̃}(t)).
We can now define a new function f* as follows: for 1 ≤ t ≤ ν, set
f*_x̃(s_{x,x̃}(t)) := (1/ν) · f_x̃(s_{x,x̃}(t)) / f_x̃(s_{x,x̃}(t)).
It holds that
f*_x̃(s_{x,x̃}(t)) = 1/ν for 1 ≤ t ≤ ν, and f*_x̃(s) = 0 otherwise.
Then, a channel is given by U*(s|x̃) = f*_x̃(s) with s ∈ 𝒮 and x̃ ∈ 𝒳. This channel fulfills the following:
∑_{t=1}^ν v_t(x, x̃) f*_x̃(s_{x,x̃}(t)) = ∑_{t=1}^ν v_t(x, x̃) f*_x(s_{x,x̃}(t)).
So, U* is a symmetrizing channel. With this, we can specify an algorithm for testing symmetrizability as follows (see also Algorithm 1):
  • Input {W(·|·, s)}_{s∈𝒮}.
  • Compute V̲_x := {v_{x,s}}_{s∈𝒮} for all x ∈ 𝒳.
  • Compute min_{x̃≠x} |V̲_x ∩ V̲_x̃| =: ν̲.
    If ν̲ = 0, then the channel is not symmetrizable.
    If ν̲ ≥ 1, then test, for all ν with 1 ≤ ν ≤ ν̲, all pairs 1 ≤ x, x̃ ≤ |𝒳| with x ≠ x̃, all subsets V* ⊆ (V̲_x ∩ V̲_x̃) of cardinality ν, and all functions of the form f*_x̃, whether they fulfill the following symmetrizability condition for all 1 ≤ x, x̃ ≤ |𝒳| with x ≠ x̃:
    f*_x̃(s_{x,x̃}(t)) = f*_x(s_{x,x̃}(t)) for 1 ≤ t ≤ ν.
Algorithm 1 Check symmetrizability from the transition family {W(·|·, s)}_{s∈𝒮}
Require: the transition matrices W(·|·, s) for each state s ∈ 𝒮
 1: Let 𝒳 be the input alphabet and 𝒮 the set of states
 2: for all x ∈ 𝒳 do
 3:     V̲_x ← {v_{x,s} : s ∈ 𝒮}
 4: end for
 5: ν̲ ← min_{x,x̃∈𝒳, x≠x̃} |V̲_x ∩ V̲_x̃|
 6: if ν̲ = 0 then
 7:     return false             ▹ The channel is not symmetrizable
 8: end if
 9: for ν ← 1 to ν̲ do
10:     for all x, x̃ ∈ 𝒳 with x ≠ x̃ do
11:         C ← V̲_x ∩ V̲_x̃
12:         for all subsets V* ⊆ C with |V*| = ν do
13:             for all functions f*_x, f*_x̃ : V* → some co-domain do
14:                 sym ← true
15:                 for t ← 1 to ν do
16:                     let s_t be the t-th element of V*
17:                     if f*_x̃(s_t) ≠ f*_x(s_t) then
18:                         sym ← false             ▹ Symmetry is broken
19:                         break
20:                     end if
21:                 end for
22:                 if sym = true then
23:                     return true             ▹ A symmetrizing assignment is found
24:                 end if
25:             end for
26:         end for
27:     end for
28: end for
29: return false             ▹ No symmetrizing assignment exists
Clearly, there are only a finite number of options to test. Functions f*_x̃ with x̃ ∈ 𝒳 can be found if and only if the channel is symmetrizable. Using the described subroutine, we can now fully specify an algorithm that computes C_av : AVC_{0,1} → ℝ_c:
  • If we can prove algorithmically that {W(·|·, s)}_{s∈𝒮} is symmetrizable, then we set C_av({W(·|·, s)}_{s∈𝒮}) = 0.
  • If we can prove algorithmically that {W(·|·, s)}_{s∈𝒮} is not symmetrizable, then we compute C_av({W(·|·, s)}_{s∈𝒮}) as follows [42]: for q ∈ 𝒫(𝒮), let W_q(·|·) = ∑_{s∈𝒮} q(s) W(·|·, s). Then, it holds that
    C_av({W(·|·, s)}_{s∈𝒮}) = min_{q∈𝒫(𝒮)} C(W_q) = min {C(W_q) : q ∈ ℝ^{|𝒮|}, q(s) ≥ 0, ∑_s q(s) = 1}.
Here, C denotes the capacity of a DMC, which is a computable continuous function (this follows from Shannon's theorem and the continuity of the mutual information). Thus, C_av({W(·|·, s)}_{s∈𝒮}) is a computable number, and we have constructed an algorithm which transforms an algorithmic description of {W(·|·, s)}_{s∈𝒮} into an algorithmic description of the number C_av({W(·|·, s)}_{s∈𝒮}) (see Definition 9). □
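As an alternative to the combinatorial enumeration of Algorithm 1, symmetrizability (condition (17)) is a linear feasibility problem in the entries U(s|x) and can be checked with an LP solver. The following sketch assumes SciPy is available; it is not the algorithm from the proof, only an equivalent test:

import numpy as np
from scipy.optimize import linprog

def is_symmetrizable(avc):
    # avc[s][x][y] are the 0-1 transition probabilities. We look for a DMC
    # U(s|x) >= 0 with sum_s U(s|x) = 1 and, for all x, x~, y,
    # sum_s W(y|x~,s) U(s|x) = sum_s W(y|x,s) U(s|x~)   (condition (17)).
    S, X, Y = len(avc), len(avc[0]), len(avc[0][0])
    idx = lambda s, x: s * X + x          # position of U(s|x) in the variables
    n = S * X
    A_eq, b_eq = [], []
    for x in range(X):                    # each column of U is a distribution
        row = np.zeros(n)
        row[[idx(s, x) for s in range(S)]] = 1
        A_eq.append(row); b_eq.append(1.0)
    for x in range(X):
        for xt in range(x + 1, X):
            for y in range(Y):            # the symmetrizability equations
                row = np.zeros(n)
                for s in range(S):
                    row[idx(s, x)] += avc[s][xt][y]
                    row[idx(s, xt)] -= avc[s][x][y]
                A_eq.append(row); b_eq.append(0.0)
    res = linprog(np.zeros(n), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * n)
    return res.status == 0                # 0 = a feasible U was found

avc = [[[1, 0], [0, 1]],                  # state 0: identity
       [[0, 1], [1, 0]]]                  # state 1: flip
print(is_symmetrizable(avc))              # True: the uniform U symmetrizes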
We have now shown that C_av is Borel–Turing-computable. Although this says nothing about C_max, C_av is similar in structure to C_max. For example, if local randomness is available at the encoder, the maximum-error capacity coincides with the average-error capacity [42]. We now want to look at the computability of Θ and C_max. We want to show the following.
Theorem 10. 
Θ is Borel–Turing-computable if and only if C_max is Borel–Turing-computable.
Proof. 
From the proof of Theorem 7, it follows that two Turing machines TM_1 and TM_2 exist with the following properties:
  • For all G ∈ 𝒢, it holds that
    Θ(G) = C_max(TM_1(G)).
  • For all {W(·|·, s)}_{s∈𝒮} ∈ AVC_{0,1}, it holds that
    C_max({W(·|·, s)}_{s∈𝒮}) = Θ(TM_2({W(·|·, s)}_{s∈𝒮})).
So, if C_max is Borel–Turing-computable, then for any input G for Θ, we can effectively find a suitable input {W(·|·, s)}_{s∈𝒮} for C_max and then use it as an oracle. A similar line of reasoning applies if Θ is Borel–Turing-computable. □

5. The Computability of the Zero-Error Capacity with the Kolmogorov Oracle

We have shown that the zero-error capacity C_0 is not Banach–Mazur-computable as a function of the channel. The question now arises as to whether a Turing machine with additional input can be found so that, for example, upper bounds for the zero-error capacity can be computed. This question is briefly discussed in this section. In [31], we showed that the zero-error capacity is semi-computable if we allow for the Kolmogorov oracle.
To define the Kolmogorov oracle, we need a special enumeration for
  • The set ℕ;
  • The set of partial recursive functions Φ : Domain(Φ) ⊆ ℕ → ℕ.
The problem is that the natural listing of the set of natural numbers is inappropriate because many numbers in ℕ are too large for the natural enumerations. We start with the set of partial recursive functions from ℕ to ℕ. A listing M_opt = {Φ_l : Φ_l is a partial recursive function, l ∈ ℕ} of the partial recursive functions is called an optimal listing if for any other recursive listing {g_l : l ∈ ℕ} of the set of partial recursive functions, there is a constant C_1 such that for all l ∈ ℕ, the following holds: a t(l) ∈ ℕ exists with t(l) ≤ C_1 · l and Φ_{t(l)} = g_l. This means that all partial recursive functions Φ have a small Gödel number with respect to the system M_opt. Schnorr [34] has shown that such an optimal recursive listing of the set of partial recursive functions exists. The same holds true for the set of natural numbers ℕ.
For ℕ, let u_ℕ be an optimal listing, and let η : ℕ → 𝒢 be a numbering of graphs.
For the set 𝒢, we define C_u^𝒢(G) := min{k : η(u_ℕ(k)) = G}. This is the Kolmogorov complexity generated by u_ℕ and η.
Definition 18. 
The Kolmogorov oracle O_{K,𝒢}(·) is a function from ℕ into the power set of the set of graphs that produces the list
O_{K,𝒢}(n) := {G : C_u^𝒢(G) ≤ n}
for each n ∈ ℕ, where the graphs G are listed by size.
Let TM be a Turing machine. We say that TM can use the oracle O_{K,𝒢} if for every n ∈ ℕ, for the input n, the Turing machine acquires the list O_{K,𝒢}(n). With TM(O_{K,𝒢}), we denote a Turing machine that has access to the oracle O_{K,𝒢}. We now consider for λ ∈ ℝ_c, λ ≥ 0, the set L(λ) = {G : Θ(G) ≤ λ}, i.e., the λ-level set of the zero-error capacity. We have the following theorem:
Theorem 11 
([31]). Let λ ∈ ℝ_c with λ > 0. Then, the set L(λ) is decidable with a Turing machine TM*(O_{K,𝒢}). This means a Turing machine TM*(O_{K,𝒢}) exists such that the set L(λ) is computable with this Turing machine with the oracle.
Corollary 1 
([31]). Let λ ∈ ℝ_c, λ ≥ 0. Then, the set L(λ) is semi-decidable for Turing machines with the oracle O_{K,ℕ}.
Alon and Lubetzky have asked whether the set {G : Θ(G) ≤ λ} is semi-decidable. In [31], we gave a positive answer to this question, provided that the oracle may be included. We do not know whether C_0 is computable with respect to TM(O_{K,𝒢}).
Let M ∈ ℕ be a number with 2^M > |𝒳|. We set I_{k,M} = [k · 2^{−M}, (k + 1) · 2^{−M}] for k = 0, 1, …, 2^{2M} − 1. We have the following theorem:
Theorem 12. 
A Turing machine TM^{(1)}(·, O_{K,ℕ}) exists with TM^{(1)}(·, O_{K,ℕ}) : 𝒢 → {0, 1, …, 2^{2M} − 1} such that for all G ∈ 𝒢, it holds that
TM^{(1)}(G, O_{K,ℕ}) = r ⟹ Θ(G) ∈ I_{r,M}.
Thus, this approach does not directly provide the computability of C_0 through a Turing machine with the oracle O_{K,ℕ}. However, we can compute C_0 to any given accuracy.
We have seen that in order to prove the computability of C_0 or Θ, we need computable converses. In this sense, the recent characterization by Zuiddam [47] using functions from the asymptotic spectrum of graphs is interesting.

6. Conclusions and Discussion

This paper revisited Ahlswede's foundational approach in [2] to characterizing the zero-error capacity using arbitrarily varying channels (AVCs). Although the theoretical connection remains intriguing, it has not yet yielded practical methods for calculating C_0(W) for discrete memoryless channels (DMCs). Obstacles include the absence of explicit formulas for the maximum-error AVC capacities and the impossibility of algorithmically transforming any DMC into a finite 0–1 AVC, as shown by Theorems 3, 4, and 6. These results prove that no Turing machine can realize the map
TM* : CH_c(𝒳, 𝒴) → {{W_s}_{s∈𝒮} : {W_s}_{s∈𝒮} is a finite 0–1 AVC}.
Table 2 gives an overview of the main results of this paper.
Our focus has been on the computability of the zero-error capacity as a function of a DMC W; we did not address the computability questions arising from a graph-based perspective via the confusability graph G_W. Whether the Shannon capacity Θ(G) is computable in that representation remains open.
This paper shows that the confusability graph derived from a channel's transition matrix is not computable in general. This means that one cannot simply calculate the graph from the matrix data. Consequently, knowing the capacity of this confusability graph does not provide a concrete tool for evaluating the performance of the channel.
Furthermore, the capacity of the confusability graph is defined as a regularized limit, which makes it intrinsically difficult to evaluate in practice. In other words, this capacity is not given by a single, computable expression; instead, it emerges only in the limit of increasingly long codes or repeated graph operations. This regularization step complicates any attempt to actually compute or approximate the capacity, rendering it of limited utility in practical scenarios.
Nevertheless, because the descriptions of the DMC are standard in practical settings, our negative computability results are of broad significance.
Beyond coding theory, the zero-error capacity is relevant in areas like remote state estimation and quantum communication (see [3,48]). Our findings are part of a broader narrative in information theory: many core problems—calculating the finite-state channel capacity [49,50,51], optimizing mutual information, and even constructing capacity-achieving codes [52,53]—have been proven to be non–Turing-computable in general.
A compelling open question is whether computable channels W exist for which C_0(W) itself is a non-computable real. If so, this would establish that exact capacity statements require more than algorithmic effort; they confront the fundamental limits of computability. Similar phenomena have been observed in compound channels [54], colored Gaussian channels [55], and Wiener prediction problems [56], suggesting that rich, non-computable structures may also appear in zero-error contexts.
Moving forward, research should probe the algorithmic frontiers of zero-error information theory, especially in connection with automated systems and software-defined communications (see [57,58]). Though Turing computability hits a wall, other computational frameworks, such as Blum–Shub–Smale machines, may offer new possibilities [45]. Understanding these alternative models may be key to effectively navigating the computability landscape of the zero-error capacity.
In summary, while the zero-error capacity remains a cornerstone of information theory, our results clarify that its algorithmic determination is blocked by deep, non-computable obstructions. Characterizing or circumventing these obstructions should be a key priority in future studies.

Author Contributions

Conceptualization, Data curation, Investigation, Methodology, Writing—review & editing, H.B. and C.D. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support from the Federal Ministry of Education and Research of Germany (BMBF) through the program “Souverän. Digital. Vernetzt.” as part of the joint project 6G-life (Project IDs: 16KISK002 and 16KISK263). H. Boche and C. Deppe also gratefully acknowledge the support from the BMBF quantum program QuaPhySI (Grants 16KIS1598K and 16KIS2234), QUIET (Grants 16KISQ093 and 16KISQ0170), and the QD-CamNetz project (Grants 16KISQ077 and 16KISQ169). Their research was further supported by the German Research Foundation (DFG) under the project “Post Shannon Theory and Implementation” (Grants BO 1734/38-1 and DE 1915/2-1). Additionally, the DFG supported H. Boche under Grant BO 1734/20-1. The authors also express their gratitude to the BMBF for supporting H. Boche through the national initiative under Grant 16KIS1003K and C. Deppe under Grant 16KIS1005.

Data Availability Statement

Data are contained within the article.

Acknowledgments

This work was initiated following discussions with Martin Bossert and Vince Poor at the IEEE International Symposium on Information Theory 2019 in Paris. Holger Boche extends his gratitude to Martin Bossert and Vince Poor for their valuable insights on the significance of the zero-error capacity in various areas of information theory. Finally, we extend our thanks to Yannik Böck for his helpful and insightful comments.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Shannon, C.E. The zero-error capacity of a noisy channel. Inst. Radio Eng. Trans. Inf. Theory 1956, IT-2, 8–19.
2. Ahlswede, R. A note on the existence of the weak capacity for channels with arbitrarily varying channel probability functions and its relation to Shannon’s zero-error capacity. Ann. Math. Stat. 1970, 41, 1027–1033.
3. Matveev, A.S.; Savkin, A.V. Shannon zero error capacity in the problems of state estimation and stabilization via noisy communication channels. Int. J. Control 2007, 80, 241–255.
4. Cubitt, T.S.; Chen, J.; Harrow, A.W. Superactivation of the asymptotic zero-error classical capacity of a quantum channel. IEEE Trans. Inf. Theory 2011, 57, 8114–8126.
5. Cubitt, T.S.; Smith, G. An extreme form of superactivation for quantum zero-error capacities. IEEE Trans. Inf. Theory 2012, 58, 1953–1961.
6. Cubitt, T.S.; Leung, D.; Matthews, W.; Winter, A. Improving zero-error classical communication with entanglement. Phys. Rev. Lett. 2010, 104, 230503.
7. Duan, R.; Severini, S.; Winter, A. Zero-Error Communication via Quantum Channels, Noncommutative Graphs, and a Quantum Lovász Number. IEEE Trans. Inf. Theory 2013, 59, 1164–1174.
8. Cubitt, T.S.; Leung, D.; Matthews, W.; Winter, A. Zero-error channel capacity and simulation assisted by non-local correlations. IEEE Trans. Inf. Theory 2011, 57, 5509–5523.
9. Duan, R.; Winter, A. No-Signalling-Assisted Zero-Error Capacity of Quantum Channels and an Information Theoretic Interpretation of the Lovász Number. IEEE Trans. Inf. Theory 2016, 62, 891–914.
10. Duan, R.; Severini, S.; Winter, A. On zero-error communication via quantum channels in the presence of noiseless feedback. IEEE Trans. Inf. Theory 2016, 62, 5260–5277.
11. Koudia, S.; Cacciapuoti, A.S.; Simonov, K.; Caleffi, M. How Deep the Theory of Quantum Communications Goes: Superadditivity, Superactivation and Causal Activation. IEEE Commun. Surv. Tutor. 2022, 24, 1926–1956.
12. Gyongyosi, L.; Imre, S.; Nguyen, H.V. A survey on quantum channel capacities. IEEE Commun. Surv. Tutor. 2018, 20, 1149–1205.
13. Daws, M. Quantum graphs: Different perspectives, homomorphisms and quantum automorphisms. Commun. Am. Math. Soc. 2024, 2, 1–35.
14. Aigner, M.; Ziegler, G.M. Proofs from THE BOOK; Springer: Berlin/Heidelberg, Germany, 2010.
15. Haemers, W. On some problems of Lovász concerning the Shannon capacity of a graph. IEEE Trans. Inf. Theory 1979, 25, 231–232.
16. Körner, J.; Orlitsky, A. Zero-error information theory. IEEE Trans. Inf. Theory 1998, 44, 2207–2229.
17. Lovász, L. On the Shannon capacity of a graph. IEEE Trans. Inf. Theory 1979, 25, 1–7.
18. Schrijver, A. Combinatorial Optimization; Springer: Berlin/Heidelberg, Germany, 2003.
19. West, D.B. Introduction to Graph Theory, 2nd ed.; Prentice Hall: Hoboken, NJ, USA, 2001.
20. Devroye, N. When is the zero-error capacity positive in the relay, multiple-access, broadcast and interference channels? In Proceedings of the 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 27–30 September 2016; pp. 935–942.
21. Mattas, M.; Östergård, P.R.J. A new bound for the zero-error capacity region of the two-user binary adder channel. IEEE Trans. Inf. Theory 2005, 51, 3305–3308.
22. Gu, Y. Zero-error communication over adder MAC. arXiv 2018, arXiv:1809.07364.
23. Kovačević, M. Zero-error capacity of duplication channels. IEEE Trans. Commun. 2019, 67, 7623–7630.
24. Dalai, M.; Guruswami, V. An improved bound on the zero-error list-decoding capacity of the 4/3 channel. IEEE Trans. Inf. Theory 2019, 65, 5635–5647.
25. Bhandari, S.; Radhakrishnan, J. Bounds on the zero-error list-decoding capacity of the q/(q–1) channel. IEEE Trans. Inf. Theory 2021, 68, 238–247.
26. Charpenay, N.; Treust, M.L. Variable-length coding for zero-error channel capacity. arXiv 2020, arXiv:2001.03523.
27. Zhang, Y. Zero-error communication over adversarial MACs. IEEE Trans. Inf. Theory 2023, 69, 4532–4547.
28. Sason, I. Observations on graph invariants with the Lovász ϑ-function. AIMS Math. 2024, 9, 15385–15468.
29. Ahlswede, A.; Althöfer, I.; Deppe, C.; Tamm, U. (Eds.) Transmitting and Gaining Data: Rudolf Ahlswede’s Lectures on Information Theory 2, 1st ed.; Foundations in Signal Processing, Communications and Networking; Springer: Berlin/Heidelberg, Germany, 2015; Volume 11.
30. Boche, H.; Deppe, C. Computability of the zero-error capacity of noisy channels. In Proceedings of the 2021 IEEE Information Theory Workshop (ITW), Kanazawa, Japan, 17–21 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6.
31. Boche, H.; Deppe, C. Computability of the Zero-Error Capacity with Kolmogorov Oracle. In Proceedings of the 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 2020–2025.
32. Turing, A.M. On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 1936, 2, 230–265.
33. Kleene, S.C. General recursive functions of natural numbers. Math. Ann. 1936, 112, 727–742.
34. Schnorr, C.P. Rekursive Funktionen und ihre Komplexität, 1st ed.; Vieweg+Teubner: Berlin, Germany, 1974.
35. Weihrauch, K. Computable Analysis—An Introduction; Springer: Berlin/Heidelberg, Germany, 2000.
36. Soare, R.I. Recursively Enumerable Sets and Degrees; Springer: Berlin/Heidelberg, Germany, 1987.
37. Pour-El, M.B.; Richards, J.I. Computability in Analysis and Physics; Cambridge University Press: Cambridge, UK, 2017.
38. Specker, E. Nicht konstruktiv beweisbare Sätze der Analysis. J. Symb. Log. 1949, 14, 145–158.
39. Avigad, J.; Brattka, V. Computability and analysis: The legacy of Alan Turing. In Turing’s Legacy: Developments from Turing’s Ideas in Logic; Downey, R., Ed.; Cambridge University Press: Cambridge, UK, 2014.
40. Ahlswede, R. On concepts of performance parameters for channels. In General Theory of Information Transfer and Combinatorics; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4123, pp. 639–663.
41. Blackwell, D.; Breiman, L.; Thomasian, A.J. The capacities of certain channel classes under random coding. Ann. Math. Stat. 1960, 31, 558–567.
42. Ahlswede, A.; Althöfer, I.; Deppe, C.; Tamm, U. (Eds.) Probabilistic Methods and Distributed Information: Rudolf Ahlswede’s Lectures on Information Theory 5, 1st ed.; Foundations in Signal Processing, Communications and Networking; Springer: Berlin/Heidelberg, Germany, 2019; Volume 14.
43. Kleene, S.C. Recursive Predicates and Quantifiers. Trans. Am. Math. Soc. 1943, 53, 41–73.
44. Alon, N.; Lubetzky, E. The Shannon capacity of a graph and the independence numbers of its powers. IEEE Trans. Inf. Theory 2006, 52, 2172–2176.
45. Boche, H.; Böck, Y.; Deppe, C. Deciding the Problem of Remote State Estimation via Noisy Communication Channels on Real Number Signal Processing Hardware. In Proceedings of the ICC 2022—IEEE International Conference on Communications, Seoul, Republic of Korea, 16–20 May 2022; pp. 4510–4515.
46. Boche, H.; Schaefer, R.F.; Poor, H.V. Secure Communication and Identification Systems—Effective Performance Evaluation on Turing Machines. IEEE Trans. Inf. Forensics Secur. 2020, 15, 1013–1025.
47. Zuiddam, J. The asymptotic spectrum of graphs and the Shannon capacity. Combinatorica 2019, 39, 1173–1184.
48. Wiese, M.; Oechtering, T.J.; Johansson, K.H.; Papadimitratos, P.; Sandberg, H.; Skoglund, M. Secure Estimation and Zero-Error Secrecy Capacity. IEEE Trans. Autom. Control 2019, 64, 1047–1062.
49. Elkouss, D.; Pérez-García, D. Memory effects can make the transmission capability of a communication channel uncomputable. Nat. Commun. 2018, 9, 1149.
50. Boche, H.; Schaefer, R.F.; Poor, H.V. Shannon meets Turing: Non-computability and non-approximability of the finite state channel capacity. Commun. Inf. Syst. 2020, 20, 81–116.
51. Grigorescu, A.; Boche, H.; Schaefer, R.F.; Poor, H.V. Capacity of Finite State Channels with Feedback: Algorithmic and Optimization Theoretic Properties. In Proceedings of the 2022 IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022; pp. 498–503.
52. Lee, Y.; Boche, H.; Kutyniok, G. Computability of Optimizers. IEEE Trans. Inf. Theory 2024, 70, 2967–2983.
53. Boche, H.; Schaefer, R.F.; Poor, H.V. Turing Meets Shannon: On the Algorithmic Construction of Channel-Aware Codes. IEEE Trans. Commun. 2022, 70, 2256–2267.
54. Boche, H.; Schaefer, R.F.; Poor, H.V. Communication Under Channel Uncertainty: An Algorithmic Perspective and Effective Construction. IEEE Trans. Signal Process. 2020, 68, 6224–6239.
55. Boche, H.; Grigorescu, A.; Schaefer, R.F.; Poor, H.V. Algorithmic Computability of the Capacity of Additive Colored Gaussian Noise Channels. In Proceedings of the GLOBECOM 2023—2023 IEEE Global Communications Conference, Kuala Lumpur, Malaysia, 4–8 December 2023; pp. 4375–4380.
56. Boche, H.; Pohl, V.; Poor, H.V. The Wiener Theory of Causal Linear Prediction Is Not Effective. In Proceedings of the 2023 62nd IEEE Conference on Decision and Control (CDC), Singapore, 13–15 December 2023; pp. 8229–8234.
57. Boche, H.; Böck, Y.; Deppe, C. On the Semi-Decidability of Remote State Estimation and Stabilization via Noisy Communication Channels. In Proceedings of the 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 13–17 December 2021; pp. 3428–3435.
58. Boche, H.; Deppe, C. Computability of the channel reliability function and related bounds. In Proceedings of the 2022 IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022; IEEE: Piscataway, NJ, USA, 2022.
Table 1. An overview of the main definitions and notations.

$\mathbb{R}_c$: the set of computable real numbers.
$TM(x)$: the Turing machine for the input $x \in \mathbb{R}_c$.
$TM_M$: the Turing machine that halts for the input $x \in \mathbb{R}_c$ if and only if $x \in M$.
$\mathcal{P}(\mathcal{X})$: the set of all probability distributions over a finite alphabet $\mathcal{X}$.
$\mathcal{P}_c(\mathcal{X})$: the set of all computable probability distributions $P \in \mathcal{P}(\mathcal{X})$ such that $P(x) \in \mathbb{R}_c$ for all $x \in \mathcal{X}$.
$\mathcal{CH}$: the set of all conditional probability distributions (channels) $P_{Y|X}\colon \mathcal{X} \to \mathcal{P}(\mathcal{Y})$ for finite alphabets $\mathcal{X}$ and $\mathcal{Y}$.
$\mathcal{CH}_c$: the set of all computable channels, i.e., $P_{Y|X}(\cdot\,|\,x) \in \mathcal{P}_c(\mathcal{Y})$ for all $x \in \mathcal{X}$.
$C_0(W)$: the zero-error capacity of a channel $W$.
$C_{av}(W)$: the capacity of an arbitrarily varying channel (AVC) $W$ under the average error probability.
$C_{\max}(W)$: the capacity of an AVC $W$ under the maximal error probability.
$\Theta(G)$: the Shannon capacity of a graph $G$, defined by $\Theta(G) := \limsup_{n\to\infty} \alpha(G^n)^{1/n}$, where $\alpha$ denotes the independence number.
Table 2. Overview of main results.

Theorem 3: For finite alphabets $\mathcal{X}, \mathcal{Y}$ with $|\mathcal{X}| \geq 2$ and $|\mathcal{Y}| \geq 2$, the function $C_0\colon \mathcal{CH}_c(\mathcal{X},\mathcal{Y}) \to \mathbb{R}$ is not Banach–Mazur-computable.
Theorem 4: For all $\lambda \in \mathbb{R}_c$ with $0 \leq \lambda < \log_2 \min\{|\mathcal{X}|,|\mathcal{Y}|\}$, the set $\{W \in \mathcal{CH}_c(\mathcal{X},\mathcal{Y}) : C_0(W) > \lambda\}$ is not semi-decidable.
Theorem 5: For all $n \in \mathbb{N}$, the function $M^*(\cdot,n)\colon \mathcal{CH}_c(\mathcal{X},\mathcal{Y}) \to \mathbb{N}$ is not Banach–Mazur-computable.
Theorem 8: The set $N_0(\mathcal{X},\mathcal{Y})$ is semi-decidable for finite alphabets $\mathcal{X}, \mathcal{Y}$ with $|\mathcal{X}| \geq 2$ and $|\mathcal{Y}| \geq 2$.
Theorem 9: The function $C_{av}\colon \mathcal{AVC}_{0\text{--}1} \to \mathbb{R}_c$ is Borel–Turing-computable.
Theorem 10: $\Theta$ is Borel–Turing-computable if and only if $C_{\max}$ is Borel–Turing-computable.