Article

The Computability of the Channel Reliability Function and Related Bounds

1
Theoretical Information Technology, Technical University of Munich, 80333 Munich, Germany
2
Institute for Communications Technology, Technische Universität Braunschweig, 38106 Brunswick, Germany
*
Author to whom correspondence should be addressed.
Algorithms 2025, 18(6), 361; https://doi.org/10.3390/a18060361
Submission received: 28 March 2025 / Revised: 21 May 2025 / Accepted: 6 June 2025 / Published: 11 June 2025
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)

Abstract: The channel reliability function is a crucial tool for characterizing the dependable transmission of messages across communication channels. In many cases, only upper and lower bounds on this function are known. We investigate the computability of the reliability function and its associated functions, demonstrating that the reliability function is not Turing computable. This also holds true for the functions related to the sphere packing bound and the expurgation bound. Additionally, we examine the $R_\infty$ function and the zero-error feedback capacity, as they are vital in the context of the reliability function. Neither the $R_\infty$ function nor the zero-error feedback capacity is Banach–Mazur computable.

1. Introduction

In [1], C. Shannon established the foundations of information theory by characterizing the key mathematical properties of communication channels. For a transmission rate R that is less than the channel capacity C, the probability of erroneous decoding with respect to an optimal code decreases exponentially as the code length $n \in \mathbb{N}$ increases. Shannon introduced the channel reliability function E(R) as the exponent governing this exponential decrease in relation to the transmission rate R.
A major goal in information theory is to find a closed-form expression for the channel reliability function. This expression should be computable and fully determined by the parameters of the communication task. Naturally, we must define what constitutes a closed-form expression. Chow in [2] and Borwein and Crandall in [3] discuss different approaches to defining closed-form expressions. All of these representations satisfy the requirement that the corresponding functions can be computed algorithmically using a digital computer, to any desired precision, for inputs within their domain of definition.
Shannon’s characterization of the capacity for message transmission via the discrete memoryless channel (DMC) in [1], Ahlswede’s characterization of the capacity for message transmission via the multiple access channel in [4], and Ahlswede and Dueck’s characterization of the identification capacity for DMCs in [5] are all significant examples of closed-form solutions using elementary functions. These provide important instances of the computability of the corresponding performance functions, as defined in the previous context. The precise definition of computability, as outlined by Turing, is presented in Section 2.
Lovász’s characterization of the zero-error capacity of the pentagon also represents a closed-form number according to Chow’s definition in [2], which can be computed algorithmically, a desirable outcome. However, the characterization of the zero-error capacity of the heptagon (the seven-cycle) remains an open problem. Moreover, it is still unclear whether the zero-error capacities of DMCs always take computable values for computable channels. Additionally, the algorithmic computability of the broadcast capacity region is still uncertain.
In the age of artificial intelligence, it is increasingly important to determine whether a digital computer can solve a given problem or compute a given function. Since every function that can be computed by a digital computer can also be computed by a Turing machine (as will be discussed in more detail below), this question is reduced to asking whether a function is computable. It is therefore crucial to distinguish between determining how to compute the zero-error capacity and whether it is computable at all. In this work, we focus on the latter: the computability of the zero-error capacity.
The Lovász ϑ -function for graphs was analyzed in [6] from three distinct research perspectives related to various graph invariants. This investigation resulted in new insights into the Shannon capacity of graphs, observations on cospectral and nonisomorphic graphs, and bounds on graph invariants while also serving as a tutorial in zero-error information theory and algebraic graph theory. Further observations on the Lovász ϑ -function are provided by the author in [7].
In this paper, we provide a negative answer to the question of whether the channel reliability function and several related bounds are algorithmically computable by Turing machines.
Significant research has been conducted on the channel reliability function, but many aspects of its behavior remain unresolved (see the surveys [8,9]). In fact, a complete characterization of the channel reliability function is still unknown even for binary-input, binary-output channels. As a result, considerable efforts have been made to derive computable lower and upper bounds for the function (see [10,11,12]).
Determining the behavior of the channel reliability function across the entire interval ( 0 , C ) is a challenging problem. Various approaches have attempted to compute the reliability function algorithmically by constructing sequences of upper and lower bounds. The first significant contribution in this direction was made by Shannon, Gallager, and Berlekamp in [13].
A fundamental question that arises is whether the reliability function can be computed in this manner. To investigate this, we employ the framework of Turing computability [14]. In general, a function is considered Turing computable if there exists an algorithm capable of computing it. The Turing machine serves as the most fundamental and powerful model of computation, underpinning theoretical computer science. Unlike physical computers, which have practical constraints, a Turing machine is an abstract mathematical construct that can be rigorously analyzed using formal mathematical methods.
It is important to note that the Turing machine represents the ultimate performance limit of current digital computers, including supercomputers. A Turing machine models an algorithm or a program, where computation consists of step-by-step manipulation of symbols or characters that are read from and written to a memory tape according to a set of rules. These symbols can be interpreted in various ways, including as numbers. To perform computations on abstract sets, the elements of the set must be encoded as strings of symbols on the tape. This approach allows Turing computability to be defined for real and complex numbers.
The use of digital computers to compute approximations of channel capacities or channel reliability functions has been a prominent topic in information theory. The computation of channel capacity for discrete memoryless channels (DMCs) is a convex optimization problem, and in 1972, an algorithm for approximating the capacity of a DMC on digital computers was independently published in [15] and [16].
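As an illustration of the alternating-optimization idea published in [15,16] (today known as the Blahut–Arimoto algorithm), the following Python sketch approximates the capacity of a DMC; it is our own illustrative implementation, not code from the cited works, and the function name, tolerance, and iteration cap are arbitrary choices:

```python
import numpy as np

def blahut_arimoto(W, tol=1e-9, max_iter=10_000):
    """Approximate the capacity (in bits) of a DMC with transition matrix
    W[x, y] = W(y|x) via Blahut-Arimoto alternating optimization."""
    m = W.shape[0]
    p = np.full(m, 1.0 / m)                 # input distribution, start uniform
    for _ in range(max_iter):
        q = p @ W                           # induced output distribution q(y)
        # D[x] = KL divergence D( W(.|x) || q ) in bits, with 0*log(0/0) := 0
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(W > 0, W / q, 1.0)
        D = np.sum(np.where(W > 0, W * np.log2(ratio), 0.0), axis=1)
        I_low, I_up = p @ D, D.max()        # these sandwich the capacity
        if I_up - I_low < tol:
            break
        p = p * np.exp2(D)                  # multiplicative update
        p /= p.sum()
    return I_low

bsc = np.array([[0.9, 0.1], [0.1, 0.9]])    # binary symmetric channel, p = 0.1
print(blahut_arimoto(bsc))                  # approx. 1 - h2(0.1), about 0.531 bits
```

The gap between the lower and upper estimates bounds the approximation error at each step, which connects directly to the finite-precision theme discussed next.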
Even for binary symmetric channels with rational crossover probabilities (excluding the case $p = \frac{1}{2}$), the channel capacity is a transcendental number. As a result, despite the relative simplicity of these channels, their capacity can only be approximated with finite precision by digital computers. In contrast to the problem of computing channel capacity, determining the behavior of the channel reliability function over the entire interval $(0, C) \subset \mathbb{R}$ is a significantly more complex task. A common approach to this challenge involves considering sequences of upper and lower bounds for E(R) (see [13]).
In general, the channel reliability function is a well-studied topic in information theory. Originally introduced and analyzed for discrete memoryless channels (DMCs), the concept has since been significantly extended to various other scenarios and channel models. In [17], the reliability function of a DMC was studied at rates above capacity. Subsequent refinements and theoretical bounds were proposed, such as the Poor–Verdú upper bound addressed in [18]. Extensions beyond DMCs include continuous channels and channels with feedback or secrecy constraints. For instance, upper bounds for Gaussian channels were developed in [19], while the role of feedback in Poisson and Gaussian channels was explored in [20,21]. The impact of signal constraints was analyzed in [22], and improved Gaussian channel bounds were proposed in [23]. Secrecy considerations and cost constraints were incorporated into the analysis of the reliability and secrecy functions in [24]. The reliability function in the presence of side information, as in the Gelfand–Pinsker channel, was considered in [25]. More recently, a new upper bound for DMCs was given in [26], and noisy feedback for binary symmetric channels was studied in [27]. These developments culminated in the analysis of reliability functions in quantum communication settings. Foundational work includes [28,29], and recent advancements include [30].
In this work, we explore whether it is possible to compute the channel reliability function in this manner using a mathematically rigorous formalization of computability. Specifically, our analysis is based on the theory of Turing machines and recursive functions.
In many cases, there is no direct characterization of the behavior of a general function over an abstract set in terms of an algorithm on a Turing machine. Consequently, a common strategy is to approximate the function successively using a sequence of computable upper and lower bounds, for which an algorithm is available. One can then ask the weaker question of whether it is possible to approximate the function in a computable manner. This requires computable sequences of computable upper and lower bounds. This approach is also necessary for the reliability function, and we carry out this analysis here. Unfortunately, our results show that the channel reliability function is not a Turing computable performance function when the channel is considered as input.
We also examine several other closely related functions, namely the $R_\infty$ function, the sphere packing bound function, the expurgation bound function, and the zero-error feedback capacity, all of which are closely tied to the reliability function. We treat all of these as functions of the channel.
As envisioned, the sixth generation (6G) of mobile networks will introduce a wide range of new features [31]. These innovations bring new challenges to the design of wireless communication systems. Specifically, the Tactile Internet will enable not only the control of data but also the manipulation of physical and virtual objects [31]. With such applications, there arises an increased need to ensure the trustworthiness of the system and its services [32,33].
6G will impose more diverse and demanding quality-of-service (QoS) requirements on network resilience, reliability, service availability, and delay [31]. The channel reliability function plays a vital role in the reliability and delay performance analysis of communication systems. It is therefore of interest to explore whether the reliability and delay performance of communication systems can be verified automatically on digital hardware [33]. Analyzing the channel reliability function with respect to Turing computability becomes crucial in this context. The question of Turing computability for performance functions is a central issue in information theory, as closed-form expressions are only known for a few performance functions. It is therefore important to compute corresponding performance functions on available computers with provable performance, ensuring the strict requirements for future communication systems [31,33].
The structure of this paper is as follows. In Section 2, we begin by presenting the basic definitions and known results that will be used throughout the paper. Section 3 focuses on the $R_\infty$ function. We examine the decidability of sets connected with the $R_\infty$ function and demonstrate that only an approximation from below is possible. This has implications for the sphere packing bound, and we show that it is not a Turing computable performance function.
In Section 4, we analyze the reliability function and prove that it is also not Turing computable. The same result holds for the expurgation bound. In Section 5, we investigate the zero-error feedback capacity, which is closely related to the $R_\infty$ function. We first address a question posed by Alon and Lubetzky in [34] regarding the zero-error capacity with feedback, specifically for the case without feedback (which was examined in [35]). We then show that the zero-error feedback capacity is not Banach–Mazur computable and cannot be approximated by computable increasing sequences of computable functions. Additionally, we characterize the superadditivity of the zero-error feedback capacity and demonstrate that the $R_\infty$ function is additive.
In Section 6, we analyze the behavior of the expurgation bound rates. Finally, we conclude by summarizing the implications of our results for the channel reliability function. Our findings indicate that, in general, there cannot be a simple recursive closed-form expression for the channel reliability function on the precise rate interval where it is of interest.
Some of the results in this paper were presented at the IEEE International Symposium on Information Theory in Espoo, as noted in [36].

2. Definitions and Basic Results

2.1. Basic Concepts of Computability Theory

In this section, we present the basic definitions and results from computability theory that are necessary for this work. We begin with the fundamental definitions of computability, starting with the concept of a Turing machine [14].
A Turing machine serves as a mathematical model for what we intuitively understand as a computation machine. In this sense, it provides an abstract idealization of modern-day computers. Any algorithm that can be executed by a real-world computer can, in principle, be simulated by a Turing machine, and vice versa. However, unlike real-world computers, Turing machines are not constrained by limitations such as energy consumption, computation time, or memory size. Furthermore, all computation steps on a Turing machine are assumed to be executed flawlessly, with no possibility of error.
Recursive functions, more specifically known as μ-recursive functions, form a special subset of the set $\bigcup_{n=0}^{\infty} \{ f : \mathbb{N}^n \hookrightarrow \mathbb{N} \}$, where the symbol “↪” denotes a partial mapping. The set of recursive functions provides an alternative characterization of the notion of computability. Turing machines and recursive functions are equivalent in the following sense: a function $f : \mathbb{N}^n \hookrightarrow \mathbb{N}$ is computable by a Turing machine if and only if it is a partial recursive function.
Next, we introduce several key definitions from computable analysis [37,38,39], which we will apply in the subsequent sections.
Definition 1.
A sequence of rational numbers $\{r_n\}_{n \in \mathbb{N}}$ is called a computable sequence if there exist recursive functions $a, b, s : \mathbb{N} \to \mathbb{N}$ with $b(n) \ne 0$ for all $n \in \mathbb{N}$ and
$r_n = (-1)^{s(n)} \frac{a(n)}{b(n)}, \quad n \in \mathbb{N}.$
Definition 2.
We say that a computable sequence $\{r_n\}_{n \in \mathbb{N}}$ of rational numbers converges effectively, i.e., computably, to a number x if a recursive function $a : \mathbb{N} \to \mathbb{N}$ exists such that $|x - r_n| < \frac{1}{2^N}$ holds for all $N \in \mathbb{N}$ and all $n \in \mathbb{N}$ with $n \ge a(N)$.
We can now introduce computable numbers.
Definition 3.
A real number x is said to be computable if there exists a computable sequence of rational numbers $\{r_n\}_{n \in \mathbb{N}}$ such that $|x - r_n| < 2^{-n}$ for all $n \in \mathbb{N}$. We denote the set of computable real numbers by $\mathbb{R}_c$.
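As a concrete illustration of Definitions 1 and 3 (our own example, not part of the original text), the rationals $r_n = \lfloor 2^n \sqrt{2} \rfloor / 2^n$ form a computable sequence with $|\sqrt{2} - r_n| < 2^{-n}$, witnessing that $\sqrt{2} \in \mathbb{R}_c$; here $s(n) = 0$, $a(n) = \lfloor 2^n \sqrt{2} \rfloor$, and $b(n) = 2^n$ are recursive:

```python
from fractions import Fraction
from math import isqrt

def r(n):
    """n-th term of a computable rational sequence for sqrt(2):
    r_n = floor(2**n * sqrt(2)) / 2**n, so |sqrt(2) - r_n| < 2**-n."""
    return Fraction(isqrt(2 * 4**n), 2**n)   # isqrt(2*4**n) = floor(2**n * sqrt(2))

for n in (0, 5, 20):
    print(n, r(n), float(r(n)))
```

All arithmetic is exact integer arithmetic, so each term is produced by a recursive function of n, exactly as Definition 1 requires.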
Next, we need suitable subsets of the natural numbers.
Definition 4.
A set $A \subseteq \mathbb{N}$ is called recursive if there exists a recursive function f such that $f(x) = 1$ if $x \in A$ and $f(x) = 0$ if $x \in A^c$, where $A^c$ denotes the complement of A.
Definition 5.
A set A N is recursively enumerable if there exists a recursive function whose domain is exactly A.
Remark 1.
For the definition of recursive and partial recursive functions, see [37]. Recursive functions $f : \mathbb{N} \to \mathbb{N}$ are the building blocks for developing the framework of computability theory on rational numbers, on real numbers, and on related functions defined over these number fields. This theory captures exactly what can be achieved in principle with perfect digital computers in these number fields. We next introduce the concept of computable performance functions on the basis of computability theory.

2.2. Basic Concepts of Information Theory

To define the reliability function and its related functions, we first need the definition of a discrete memoryless channel. In the theory of transmission, the receiver must be in a position to successfully decode all the messages transmitted by the sender.
Let $\mathcal{X}$ be a finite alphabet, and denote the set of all probability distributions on $\mathcal{X}$ by $\mathcal{P}(\mathcal{X})$. We define the set of computable probability distributions, $\mathcal{P}_c(\mathcal{X})$, as the subset of $\mathcal{P}(\mathcal{X})$ consisting of all distributions $P \in \mathcal{P}(\mathcal{X})$ for which $P(x) \in \mathbb{R}_c$ holds for all $x \in \mathcal{X}$.
Furthermore, for finite alphabets $\mathcal{X}$ and $\mathcal{Y}$, let CH denote the set of all conditional probability distributions (or channels) $P_{Y|X} : \mathcal{X} \to \mathcal{P}(\mathcal{Y})$. We define $CH_c$ as the set of computable conditional probability distributions, i.e., those for which $P_{Y|X}(\cdot | x) \in \mathcal{P}_c(\mathcal{Y})$ holds for every $x \in \mathcal{X}$.
Let $M \subseteq CH_c(\mathcal{X}, \mathcal{Y})$. We call M semi-decidable if and only if there is a Turing machine $TM_M$ that, given an input W, either stops or computes forever, depending on whether $W \in M$ holds. This means that $TM_M$ accepts exactly the elements of M and, for an input $W \in M^c = CH_c(\mathcal{X}, \mathcal{Y}) \setminus M$, computes forever.
Definition 6.
A discrete memoryless channel (DMC) is a triple $(\mathcal{X}, \mathcal{Y}, W)$, where $\mathcal{X}$ is the finite input alphabet, $\mathcal{Y}$ is the finite output alphabet, and $W \in CH(\mathcal{X}, \mathcal{Y})$ with values $W(y|x)$ for $x \in \mathcal{X}$, $y \in \mathcal{Y}$. The probability that a sequence $y^n \in \mathcal{Y}^n$ is received if $x^n \in \mathcal{X}^n$ was sent is given by
$W^n(y^n | x^n) = \prod_{j=1}^{n} W(y_j | x_j).$
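The memorylessness formula translates directly into code; a minimal sketch (our own illustration, with the nested-dictionary encoding of W being an arbitrary choice):

```python
from math import prod

def W_n(W, y_seq, x_seq):
    """Probability that y_seq is received when x_seq is sent over a DMC
    with single-letter transition probabilities W[x][y] (memorylessness)."""
    return prod(W[x][y] for x, y in zip(x_seq, y_seq))

# Binary symmetric channel with crossover probability 0.1.
W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
print(W_n(W, (0, 1, 1), (0, 1, 0)))   # 0.9 * 0.9 * 0.1
```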
Definition 7.
A (deterministic) block code C ( n ) with rate R and block length n consists of
  • A message set $\mathcal{M} = \{1, 2, \dots, M\}$ with $M = \lceil 2^{nR} \rceil \in \mathbb{N}$;
  • An encoding function $e : \mathcal{M} \to \mathcal{X}^n$;
  • A decoding function $d : \mathcal{Y}^n \to \mathcal{M}$.
We call such a code an ( R , n ) -code.
Definition 8.
Let $(\mathcal{X}, \mathcal{Y}, W)$ be a DMC. By $C^{(n)}$, we denote a block code with block length n and message set $\mathcal{M}$.
1.
The individual message probability of error is defined as the conditional probability of error given that message $m \in \mathcal{M}$ is transmitted:
$P_e(C^{(n)}, W, m) = \Pr\{ d(Y^n) \ne m \mid X^n = e(m) \}.$
2.
We define the average probability of error by
$P_{e,\mathrm{av}}(C^{(n)}, W) = \frac{1}{|\mathcal{M}|} \sum_{m \in \mathcal{M}} P_e(C^{(n)}, W, m).$
$P_{e,\mathrm{av}}(W, R, n)$ denotes the minimum error probability $P_{e,\mathrm{av}}(C^{(n)}, W)$ over all block codes $C^{(n)}$ of block length n and with message-set size $|\mathcal{M}| = \lceil 2^{nR} \rceil$.
3.
We define the maximal probability of error by
$P_{e,\max}(C^{(n)}, W) = \max_{m \in \mathcal{M}} P_e(C^{(n)}, W, m).$
$P_{e,\max}(W, R, n)$ denotes the minimum error probability $P_{e,\max}(C^{(n)}, W)$ over all block codes $C^{(n)}$ of block length n and with message-set size $|\mathcal{M}| = \lceil 2^{nR} \rceil$.
4.
The Shannon capacity of a channel $W \in CH(\mathcal{X}, \mathcal{Y})$ is defined by
$C(W) := \sup \{ R : \lim_{n \to \infty} P_{e,\max}(W, R, n) = 0 \}.$
5.
The zero-error capacity of a channel $W \in CH(\mathcal{X}, \mathcal{Y})$ is defined by
$C_0(W) := \sup \{ R : P_{e,\max}(W, R, n) = 0 \text{ for some } n \}.$
Remark 2.
For R with $C_0(W) < R < C(W)$, there exist $A(W, R), B(W, R) \in \mathbb{R}_+$ such that
$2^{-n(A(W,R) + o(1))} \le P_{e,\max}(W, R, n) \le 2^{-n(B(W,R) + o(1))}.$
We also define the discrete memoryless channel with noiseless feedback (DMCF). By this, we mean that, in addition to the DMC, there exists a return channel that sends the element of Y actually received back from the receiving point to the transmitting point. It is assumed that this information is received at the transmitting point before the next letter is sent and can therefore be used to choose the next letter to be sent. We assume that this feedback is noiseless. We denote the feedback capacity of a channel W by C F B ( W ) and the zero-error feedback capacity by C 0 F B ( W ) . Shannon proved in [40] that C ( W ) = C F B ( W ) . This is, in general, not true for the zero-error capacity. We see that the zero-error (feedback) capacity is related to the reliability function, which we analyze in this paper. It is defined as follows.
Definition 9.
The channel reliability function (error exponent) is defined by
$E(W, R) = \limsup_{n \to \infty} - \frac{1}{n} \log_2 P_{e,\max}(W, R, n).$
Remark 3.
We make use of the common convention that $\log_2 0 := -\infty$.
Remark 4.
We need the lim sup in (1) because it is not known whether the limit on the right-hand side of (1) exists.
The first simple observation is that for $R > C(W)$, we have $E(W, R) = 0$, and if $C_0(W) > 0$, then for $0 \le R < C_0(W)$ we have $E(W, R) = +\infty$. One well-known upper bound is the sphere packing bound, which can be defined as follows (see [10]).
Definition 10.
Let $\mathcal{X}, \mathcal{Y}$ be finite alphabets, and $(\mathcal{X}, \mathcal{Y}, W)$ be a DMC. Then, for all $R \in (0, C(W))$, we define the sphere packing bound function:
$E_{SP}(W, R) = \sup_{\rho > 0} \left[ \max_{P \in \mathcal{P}(\mathcal{X})} \left( - \log \sum_{y} \Big( \sum_{x} P(x) W(y|x)^{\frac{1}{1+\rho}} \Big)^{1+\rho} \right) - \rho R \right].$
Theorem 1
(Fano 1961, Shannon, Gallager, Berlekamp 1967). For any DMC W and for all $R \in (0, C(W))$, it holds that
$E(W, R) \le E_{SP}(W, R).$
The sphere packing upper bound is an important upper bound. The following two lower bounds of the reliability function are also very important. In [41], the random coding bound was defined as follows:
Definition 11.
Let $\mathcal{X}, \mathcal{Y}$ be finite alphabets, and $(\mathcal{X}, \mathcal{Y}, W)$ be a DMC. Then, for all $R \in (0, C(W))$, we define the random coding bound function as
$E_r(W, R) = \max_{0 \le \rho \le 1} \left[ E_0(W, \rho) - \rho R \right],$ where
$E_0(W, \rho) = \max_{P \in \mathcal{P}(\mathcal{X})} \left( - \log \sum_{y} \Big( \sum_{x} P(x) W(y|x)^{1/(1+\rho)} \Big)^{1+\rho} \right).$
Theorem 2.
Let $\mathcal{X}, \mathcal{Y}$ be finite alphabets and $(\mathcal{X}, \mathcal{Y}, W)$ be a DMC; then,
$E(W, R) \ge E_r(W, R).$
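For small channels, $E_0$ and the random coding bound can be evaluated numerically by brute force. The sketch below is our own illustration (the grid resolutions are arbitrary, and the input-distribution search is restricted to binary inputs); the sphere packing bound of Definition 10 uses the same $E_0$ expression, with the supremum taken over all $\rho > 0$:

```python
import numpy as np

def E0(W, rho, P):
    """Gallager's E_0 in bits for input distribution P:
    -log2 sum_y ( sum_x P(x) W(y|x)^(1/(1+rho)) )^(1+rho)."""
    inner = P @ W ** (1.0 / (1.0 + rho))     # sum over x, one entry per y
    return -np.log2(np.sum(inner ** (1.0 + rho)))

def random_coding_bound(W, R):
    """E_r(W, R) = max_{0 <= rho <= 1} [ max_P E_0(rho, P) - rho*R ],
    with both maximizations approximated by grid search (binary input)."""
    rhos = np.linspace(0.0, 1.0, 101)
    Ps = [np.array([p, 1.0 - p]) for p in np.linspace(0.0, 1.0, 101)]
    return max(E0(W, rho, P) - rho * R for rho in rhos for P in Ps)

W = np.array([[0.9, 0.1], [0.1, 0.9]])       # BSC with crossover 0.1
print(random_coding_bound(W, 0.4))           # positive, since R < C(W)
```

The bound is positive for all rates below capacity and decreases as R grows, matching its role as a lower bound on the error exponent.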
Gallager also defined in [41] the k-letter expurgation bound as follows:
Definition 12.
Let $\mathcal{X}, \mathcal{Y}$ be finite alphabets and $(\mathcal{X}, \mathcal{Y}, W)$ be a DMC; then, for all $R \in (0, C(W))$, we define the k-letter expurgation bound function:
$E_{\mathrm{ex}}(W, R, k) = \sup_{\rho \ge 1} \left[ E_x(\rho, k) - \rho R \right],$
$E_x(\rho, k) = -\frac{\rho}{k} \log \min_{P_{X^k} \in \mathcal{P}(\mathcal{X}^k)} Q_k(\rho, P_{X^k}),$
$Q_k(\rho, P_{X^k}) = \sum_{x^k, \bar{x}^k} P_{X^k}(x^k) P_{X^k}(\bar{x}^k) \, g_k(x^k, \bar{x}^k)^{\frac{1}{\rho}},$
$g_k(x^k, \bar{x}^k) = \sum_{y^k} \sqrt{ W^k(y^k | x^k) \, W^k(y^k | \bar{x}^k) }.$
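For k = 1, the quantities above reduce to the pairwise Bhattacharyya coefficients $g_1(x, \bar{x})$, and the bound can be evaluated numerically. The following sketch is our own illustration; the inner minimization over the input distribution is replaced by the uniform input, which in general only yields an estimate (it happens to be optimal for the symmetric channel used here):

```python
import numpy as np

def Ex1(W, rho, P):
    """Single-letter (k = 1) exponent term E_x(rho, 1) for a fixed input
    distribution P: -rho * log2 sum_{x,x'} P(x) P(x') g(x,x')^(1/rho),
    with g(x,x') = sum_y sqrt(W(y|x) W(y|x')) (Bhattacharyya coefficient)."""
    g = np.sqrt(W) @ np.sqrt(W).T            # g[x, x']
    return -rho * np.log2(P @ (g ** (1.0 / rho)) @ P)

def expurgation_bound(W, R):
    """E_ex(W, R, 1) = sup_{rho >= 1} [ E_x(rho, 1) - rho*R ], with the
    supremum approximated on a finite grid of rho values."""
    m = W.shape[0]
    P = np.full(m, 1.0 / m)                  # uniform input (see lead-in)
    return max(Ex1(W, rho, P) - rho * R
               for rho in np.geomspace(1.0, 50.0, 400))

W = np.array([[0.9, 0.1], [0.1, 0.9]])       # BSC with crossover 0.1
print(expurgation_bound(W, 0.05))
```

Since the grid and the fixed input distribution can only underestimate the supremum, the computed value remains a valid lower estimate of the exponent.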
Theorem 3.
Let $\mathcal{X}, \mathcal{Y}$ be finite alphabets and $(\mathcal{X}, \mathcal{Y}, W)$ be a DMC. Then, for all $R \in (0, C(W))$, we have
$E(W, R) \ge \lim_{k \to \infty} E_{\mathrm{ex}}(W, R, k).$
The inequality in (9) follows from Fekete’s lemma.
The smallest value of R at which the convex curve $E_{SP}(W, R)$ meets its supporting line of slope −1 is called the critical rate and is denoted by $R_{crit}$ [9]. On the interval $[R_{crit}, C]$, the random coding lower bound coincides with the sphere packing upper bound, so the channel reliability function is known on this interval. On the interval $[0, R_{crit}]$, the channel reliability function is generally not known; there, lower bounds better than the random coding bound also exist. $R_\infty(W)$ is the infimum of all rates $\underline{R}$ such that $E_{SP}(W, \cdot)$ is finite on the open interval $(\underline{R}, C(W))$. If $C_0(W) > 0$, then $C_0(W) \le R_\infty(W)$. The following representation of $R_\infty$ exists (see [9]):
$R_\infty(W) = \min_{Q \in \mathcal{P}(\mathcal{Y})} \max_{x \in \mathcal{X}} \log_2 \frac{1}{\sum_{y : W(y|x) > 0} Q(y)}.$
There exist alphabets $\mathcal{X}, \mathcal{Y}$ and channels $W \in CH$ such that $C_0(W) = 0$ while $R_\infty(W) > 0$.
Moreover, for the zero-error feedback capacity $C_{0FB}$, it holds that $C_{0FB}(W) = R_\infty(W)$ whenever $C_0(W) > 0$. However, if $C_0(W) = 0$, there exists a channel W for which $C_{0FB}(W) = 0$ while $R_\infty(W) > 0$ (see [9]).
For the zero-error feedback capacity, the following is known.
Theorem 4
(Shannon 1956, [40]). Let W C H ( X , Y ) ; then,
$C_{0FB}(W) = \begin{cases} 0 & \text{if } C_0(W) = 0, \\ \max_{P \in \mathcal{P}(\mathcal{X})} \min_{y \in \mathcal{Y}} \log_2 \frac{1}{\sum_{x : W(y|x) > 0} P(x)} & \text{otherwise.} \end{cases}$
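Both the $R_\infty$ representation given above and Shannon's feedback formula are finite min-max/max-min problems over probability simplices, so for small alphabets they can be approximated by brute-force grid search. The sketch below is our own illustration (grid resolution arbitrary); it evaluates both quantities for the q = 4 typewriter channel of Section 2.3, where $C_0(W) = \log 2 > 0$ and hence $C_{0FB}(W) = R_\infty(W)$ should hold, with both equal to 1 bit:

```python
import numpy as np
from itertools import product

def simplex_grid(dim, steps):
    """All grid points of the probability simplex with resolution 1/steps."""
    for c in product(range(steps + 1), repeat=dim - 1):
        if sum(c) <= steps:
            yield np.array(list(c) + [steps - sum(c)]) / steps

def R_inf(W, steps=40):
    """min over Q of max over x of log2(1 / sum_{y: W(y|x)>0} Q(y))."""
    S = (W > 0).astype(float)                # support indicator, S[x, y]
    with np.errstate(divide="ignore"):
        return min(np.log2(1.0 / (S @ Q)).max()
                   for Q in simplex_grid(W.shape[1], steps))

def C0_feedback(W, steps=40):
    """Shannon's formula (assuming C_0(W) > 0):
    max over P of min over y of log2(1 / sum_{x: W(y|x)>0} P(x))."""
    S = (W > 0).astype(float)
    with np.errstate(divide="ignore"):
        return max(np.log2(1.0 / (P @ S)).min()
                   for P in simplex_grid(W.shape[0], steps))

# Noisy typewriter channel with q = 4 and crossover 0.2 (C_0 = log 2 > 0).
q, eps = 4, 0.2
W = np.array([[1 - eps if y == x else eps if y == (x + 1) % q else 0.0
               for y in range(q)] for x in range(q)])
print(R_inf(W), C0_feedback(W))              # both approx. 1 bit
```

Note that only the supports of W enter either formula; the actual transition probabilities are irrelevant, which is characteristic of zero-error quantities.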

2.3. Lower and Upper Bounds on the Reliability Function for the Typewriter Channel

As mentioned before, Shannon, Gallager, and Berlekamp conjectured in [13] that the expurgation bound is tight. Katsman, Tsfasman, and Vladut gave a counterexample in [42] for the symmetric q-ary channel when $q \ge 49$. Dalai and Polyanskiy found a simpler counterexample in [43], showing that the conjecture already fails for the q-ary typewriter channel with $q \ge 4$. We briefly present their results here.
Definition 13.
Let $\mathcal{X} = \mathcal{Y} = \mathbb{Z}_q$ and $0 \le \epsilon \le \frac{1}{2}$. The typewriter channel $W_\epsilon$ is defined by
$W_\epsilon(y|x) = \begin{cases} 1 - \epsilon & y = x, \\ \epsilon & y = x + 1 \mod q. \end{cases}$
The n-fold extension $W_\epsilon^n$ of the channel is defined by
$W_\epsilon^n(y^n | x^n) = \prod_{i=1}^{n} W_\epsilon(y_i | x_i).$
For the reliability function of this channel, the interval $(C_0(W_\epsilon), C(W_\epsilon))$ is of interest. The capacity of a typewriter channel $W_\epsilon$ is given by
$C(W_\epsilon) = \log(q) - h_2(\epsilon),$
where $h_2$ is the binary entropy function. Shannon showed in [40] that $C_0(W_\epsilon)$ is positive if $q \ge 4$. He showed that for even q, it holds that $C_0(W_\epsilon) = \log \frac{q}{2}$. It is difficult to obtain a formula for odd q. Lovász proved in [44] that Shannon's lower bound for $q = 5$, namely $C_0(W_\epsilon) = \frac{1}{2} \log 5$, is tight. For general odd q, Lovász proved
$C_0(W_\epsilon) \le \log \frac{q \cos(\pi/q)}{1 + \cos(\pi/q)}.$
It is only known for q = 5 that this bound is tight. In general, this is not true. For special q, there are special results outlined in [44,45,46,47].
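The closed-form expressions in this subsection are directly computable from q and ε; a small numerical sketch (our own illustration) evaluates the typewriter capacity and the Lovász bound:

```python
from math import log2, cos, pi

def h2(eps):
    """Binary entropy in bits, with h2(0) = h2(1) = 0."""
    return 0.0 if eps in (0.0, 1.0) else -eps * log2(eps) - (1 - eps) * log2(1 - eps)

def typewriter_capacity(q, eps):
    """C(W_eps) = log2(q) - h2(eps), in bits."""
    return log2(q) - h2(eps)

def lovasz_bound(q):
    """Lovász's upper bound on C_0(W_eps) for odd q:
    log2( q * cos(pi/q) / (1 + cos(pi/q)) )."""
    c = cos(pi / q)
    return log2(q * c / (1 + c))

print(typewriter_capacity(5, 0.1))           # log2(5) - h2(0.1)
print(lovasz_bound(5), log2(5) / 2)          # equal: the bound is tight for q = 5
```

For q = 5, the bound evaluates to $\log_2 \sqrt{5}$, matching the zero-error capacity of the pentagon.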
Dalai and Polyanskiy provide upper and lower bounds on the reliability function in [43]. They observed that the zero-error capacity of the pentagon can be determined by a careful study of the expurgated bound.
They present an improved lower bound for both even and odd q, showing that it is again a precisely shifted version of the expurgated bound for the BSC. Their result also provides a new elementary disproof of the conjecture suggested in [13] that the expurgated bound is asymptotically tight when computed on arbitrarily large blocks. Furthermore, in [43], Dalai and Polyanskiy present a new upper bound for the case of odd q based on the minimum distance of codes. They use Delsarte's linear programming method [48] (see also [49]), combining the construction used by Lovász [44] for bounding the graph capacity with the construction used by McEliece–Rodemich–Rumsey–Welch [50] for bounding the minimum distance of codes in Hamming spaces. In the special case $\epsilon = 1/2$, they give another improved upper bound for odd q, following the ideas of Litsyn [51] and Barg–McGregor [52], which in turn are based on estimates for the spectra of codes originated by Kalai–Linial [53].

2.4. Computable Channels and Computable Performance Functions

We need further basic concepts for computability. We want to investigate the function E(W, R) and upper bounds such as $E_{SP}(W, R)$ and $E_{\mathrm{ex}}(W, R, k)$ for $k \in \mathbb{N}$ as functions of W and R. These functions are generally only well defined, for a fixed channel W, on sub-intervals of $[0, C(W)]$ as functions of R. For example, for $W \in CH(\mathcal{X}, \mathcal{Y})$ with $C_0(W) > 0$, E(W, R) is infinite for $R < C_0(W)$. Hence, E(W, R) must be examined and computed as a function of R on the interval $(C_0(W), C(W)]$. Similar statements also apply to the other functions introduced above. We now fix non-trivial alphabets $\mathcal{X}, \mathcal{Y}$, the corresponding set $CH_c(\mathcal{X}, \mathcal{Y})$ of computable channels, and $R \in \mathbb{R}_c$.
Definition 14
(Turing computable channel function). We call a function f : C H c ( X , Y ) R c a Turing computable channel function if there is a Turing machine that converts any program for the representation of W C H c ( X , Y ) into a program for the computation of f ( W ) —that is, f ( W ) = T M f ( W ) , W C H c ( X , Y ) .
We want to determine whether there is a closed form for the channel reliability function. For this, we need the following definition, which we discuss in more detail in Remark 5 below.
Definition 15
(Turing computable performance function). Let ∞ be a symbol. We call a function $F : CH_c(\mathcal{X}, \mathcal{Y}) \times \mathbb{R}_c^+ \to \mathbb{R}_c \cup \{\infty\}$ a Turing computable performance function if there are two Turing computable channel functions $\underline{f}$ and $\overline{f}$ with $\underline{f}(W) \le \overline{f}(W)$ for $W \in CH_c(\mathcal{X}, \mathcal{Y})$, and a Turing machine $TM_F$ that is defined for inputs $R \in \mathbb{R}_c^+$ and $W \in CH_c(\mathcal{X}, \mathcal{Y})$. The Turing machine $TM_F$ stops for the variables $R \in \mathbb{R}_c^+$ and $W \in CH_c(\mathcal{X}, \mathcal{Y})$, and any representations of W and R as input, if and only if $R \in (\underline{f}(W), \overline{f}(W))$, in which case it delivers $F(W, R) = TM_F(W, R)$. If $R \notin (\underline{f}(W), \overline{f}(W))$, then $TM_F$ does not stop.
Remark 5.
The requirement for a function $F : CH_c(\mathcal{X}, \mathcal{Y}) \times \mathbb{R}_c^+ \to \mathbb{R}_c \cup \{\infty\}$ to be a Turing computable performance function is relatively weak. For example, take W and R as inputs. The interval $(\underline{f}(W), \overline{f}(W))$ is computed first. If R lies in the interval $(\underline{f}(W), \overline{f}(W))$, then the Turing machine $TM_F$ must stop for the input (W, R) and deliver the result F(W, R). We impose no requirements on the behavior of the Turing machine for inputs W and $R \notin (\underline{f}(W), \overline{f}(W))$. In particular, the Turing machine $TM_F$ does not have to stop for the input (W, R) in this case.
Take, for example, any Turing computable function $G : CH_c(\mathcal{X}, \mathcal{Y}) \times \mathbb{R}_c^+ \to \mathbb{R}_c \cup \{\infty\}$ with corresponding Turing machine $TM_G$. Furthermore, let $\underline{TM} : CH_c(\mathcal{X}, \mathcal{Y}) \to \mathbb{R}_c$ and $\overline{TM} : CH_c(\mathcal{X}, \mathcal{Y}) \to \mathbb{R}_c$ be any two Turing machines such that $\underline{TM}(W) \le \overline{TM}(W)$ holds for all $W \in CH_c(\mathcal{X}, \mathcal{Y})$. Then, the following Turing machine $TM : CH_c(\mathcal{X}, \mathcal{Y}) \times \mathbb{R}_c \to \mathbb{R}_c \cup \{\infty\}$ defines a Turing computable performance function.
1.
For any input W C H c ( X , Y ) and R R c , first compute f ̲ ( W ) = T M ̲ ( W ) and f ¯ ( W ) = T M ¯ ( W ) .
2.
Compute the following two tests in parallel:
(a) 
Test $R > \underline{f}(W)$ using the Turing machine $TM_{> \underline{f}(W)}$ for input $R \in \mathbb{R}_c$.
(b) 
Test $R < \overline{f}(W)$ using the Turing machine $TM_{< \overline{f}(W)}$ for input $R \in \mathbb{R}_c$.
Let these two tests run until both Turing machines stop. If both Turing machines stop in Step 2, then compute G(W, R) and set TM(W, R) = G(W, R).
TM indeed defines a Turing computable performance function, and the Turing machine TM stops for the input (W, R) if and only if $R \in (\underline{f}(W), \overline{f}(W))$; in this case, it outputs the value G(W, R). This follows from the fact that the Turing machine $TM_{> \underline{f}(W)}$ stops for input $R \in \mathbb{R}_c$ if and only if $R > \underline{f}(W)$, and the second Turing machine $TM_{< \overline{f}(W)}$ from Step 2 stops exactly when $R < \overline{f}(W)$; that is, the Turing machine TM in Step 2, which simulates $TM_{> \underline{f}(W)}$ and $TM_{< \overline{f}(W)}$ in parallel, stops exactly when $R \in (\underline{f}(W), \overline{f}(W))$.
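The parallel test in Step 2 is a dovetailing construction. The following toy Python sketch of it is our own illustration, under the assumption that computable reals are given as oracles returning rational $2^{-n}$-approximations; the step cap merely truncates what would otherwise be a non-halting computation:

```python
from fractions import Fraction

def semidecide_greater(r, a):
    """Generator modeling a TM that halts iff the computable real behind r
    exceeds the one behind a; r(n) and a(n) are rationals within 2**-n of
    their targets.  A gap of more than 2**(1-n) certifies r > a."""
    n = 0
    while True:
        if r(n) - a(n) > Fraction(2, 2 ** n):
            return True                       # halt: r > a is certified
        n += 1
        yield                                 # one simulation step

def stops_in_interval(R, lo, hi, max_steps=1000):
    """Dovetail the tests R > lo and R < hi in parallel; return True iff
    both halt within max_steps steps (a real TM would just run forever)."""
    t1, t2 = semidecide_greater(R, lo), semidecide_greater(hi, R)
    done1 = done2 = False
    for _ in range(max_steps):
        if not done1:
            try:
                next(t1)
            except StopIteration:
                done1 = True
        if not done2:
            try:
                next(t2)
            except StopIteration:
                done2 = True
        if done1 and done2:
            return True
    return False

# Computable reals as exact-rational approximation oracles.
R = lambda n: Fraction(1, 2)
lo = lambda n: Fraction(1, 4)
hi = lambda n: Fraction(3, 4)
print(stops_in_interval(R, lo, hi))           # 1/2 lies in (1/4, 3/4)
```

The combined procedure halts exactly when both semi-decision tests halt, i.e., exactly when R lies in the open interval, mirroring the argument above.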
Remark 6.
Using the above approach, we can, for example, try to find upper and lower bounds for the channel reliability function by allowing general Turing computable functions $G : CH_c(\mathcal{X}, \mathcal{Y}) \times \mathbb{R}_c^+ \to \mathbb{R}_c \cup \{\infty\}$ and algorithmically determining the interval in $\mathbb{R}_c^+$ on which the function G(W, ·) delivers lower or upper bounds for the channel reliability function.
Definition 16
(Banach–Mazur computable channel function). We call f : C H c ( X , Y ) R c a Banach–Mazur computable channel function if every computable sequence { W r } r N from C H c ( X , Y ) is mapped by f into a computable sequence from R c .
For practical applications, it is necessary to have performance functions that are Turing computable. Depending on W, the channel reliability function or bounds for this function should be computed. This computation is carried out by an algorithm that also receives W as input. This means that the algorithm should depend recursively on W; otherwise, a special algorithm (one that depends on W, but not recursively) would have to be developed for each individual W in order to compute the channel reliability function for this channel, or a bound for this function.
It is now clear that when defining the Turing computable performance function, the Turing computable channel functions f ̲ ,   f ¯ cannot be dispensed with, because the channel reliability function depends on the specific channel and on the permissible rate region for which the function can be computed. For f ¯ , one often has the representation f ¯ ( W ) = C ( W ) with W C H c ( X , Y ) . For f ̲ , the choice f ̲ ( W ) = C 0 ( W ) with W C H c ( X , Y ) is natural, because the channel reliability function is only of interest on this interval. (We note that we showed in [35] that C 0 ( W ) is not Turing computable in general.)
For the Turing computability of the channel reliability function or corresponding upper and lower bounds, it is therefore a necessary condition that the dependency of the relevant rate intervals on W be Turing computable—that is, recursive.
Remark 7.
As noted in the Introduction, very few closed-form expressions for performance functions are known in information theory. Even for relatively simple scenarios, such as secure message transmission over a wiretap channel with an active jammer, closed-form solutions are not available (see [54,55,56]). Existing methods in information theory provide convergent multi-letter sequences for determining capacity. While these sequences enable the investigation of important properties of the capacity (see [54,57,58]), they are not yet suitable for direct numerical computation of the capacity. This is due to the reliance on Fekete’s lemma to prove the existence of the limit of these sequences. However, it was shown in [59] that Fekete’s lemma is not constructive, meaning no algorithm can effectively compute the associated limit values.
Moreover, the problem of finding simple optimizers for performance functions is generally not algorithmically solvable [60,61]. For instance, the Blahut–Arimoto algorithm can be used to compute an infinite sequence of input distributions that converge to an optimal distribution. However, there is no way to halt the process based on a reliable approximation error, making it impossible to stop the computation at a specific point (see [60,61]).
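The Blahut–Arimoto iteration mentioned above can be sketched in a few lines. This is a standard textbook version, not code from the paper: each step produces Blahut's lower and upper capacity bounds, log2 of the weighted and of the maximal per-letter information gain. The bounds converge, but, as discussed above, no general algorithm can certify in advance how many iterations are needed for a prescribed accuracy. The Z-channel example and the iteration count are our choices.

```python
import math

def blahut_arimoto(W, iters):
    """W[x][y]: row-stochastic channel matrix. Returns (lower, upper)
    capacity bounds in bits after `iters` iterations."""
    nx, ny = len(W), len(W[0])
    p = [1.0 / nx] * nx          # start from the uniform input distribution
    lo = up = 0.0
    for _ in range(iters):
        # output distribution induced by the current input distribution p
        q = [sum(p[x] * W[x][y] for x in range(nx)) for y in range(ny)]
        # c_x = exp(D(W(.|x) || q)), the per-letter information gain
        c = [math.exp(sum(W[x][y] * math.log(W[x][y] / q[y])
                          for y in range(ny) if W[x][y] > 0))
             for x in range(nx)]
        s = sum(p[x] * c[x] for x in range(nx))
        lo, up = math.log2(s), math.log2(max(c))   # Blahut's bounds on C
        p = [p[x] * c[x] / s for x in range(nx)]   # multiplicative update
    return lo, up

# Z-channel with crossover 1/2; its capacity is log2(1.25), about 0.3219 bits.
lo, up = blahut_arimoto([[1.0, 0.0], [0.5, 0.5]], iters=2000)
```

At every iteration the true capacity provably lies between `lo` and `up`; what is missing, in general, is a computable modulus telling us when the gap falls below a given 2^(−M).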

3. Results for the Rate Function R and Applications to the Sphere Packing Bound

In this section, we analyze the function R and its implications for the sphere packing bound. Specifically, we demonstrate that R is not a Turing computable performance function.
We begin by expressing R ( W ) as
R ( W ) = min Q P ( Y ) max x X log 2 1 y : W ( y | x ) > 0 Q ( y ) .
From this, we derive the equivalent representations:
R ( W ) = min Q P ( Y ) max x X log 2 1 y : W ( y | x ) > 0 Q ( y ) = min Q P ( Y ) log 2 1 min x X y : W ( y | x ) > 0 Q ( y ) = log 2 min Q P ( Y ) 1 min x X y : W ( y | x ) > 0 Q ( y ) = log 2 1 max Q P ( Y ) min x X y : W ( y | x ) > 0 Q ( y ) = log 2 1 Ψ ( W ) ,
where
Ψ ( W ) = max Q P ( Y ) min x X y : W ( y | x ) > 0 Q ( y ) .
In summary, the following holds true: let X , Y be arbitrary non-trivial finite alphabets; then, for W C H c ( X , Y )
R ( W ) = log 2 1 Ψ ( W ) .
Lemma 1.
It holds that
R : C H c ( X , Y ) R c .
Proof. 
Let W be fixed. We consider the vectors ( Q ( 1 ) , , Q ( | Y | ) ) T of the convex set
M P r o b = { u R | Y | : u = ( u 1 , , u | Y | ) T , u l ≥ 0 , l = 1 , , | Y | , l u l = 1 } .
G ( u ) : = min x y : W ( y | x ) > 0 u y is a computable continuous function on M P r o b . Thus, for Ψ ( W ) = max u M P r o b G ( u ) , we always have Ψ ( W ) R c with Ψ ( W ) > 0 , and thus R ( W ) R c . □
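Since Ψ ( W ) is the maximum of a concave, piecewise linear function over the simplex, it can be bounded from below by exact rational search on a grid, and the value is exact whenever an optimal Q lies on the grid. The following Python sketch is ours, not from the paper; it evaluates Ψ and R for the ternary typewriter channel, whose optimizer is the uniform output distribution.

```python
from fractions import Fraction
import math

def simplex(D, parts):
    """All tuples of `parts` nonnegative integers summing to D (grid Q = q/D)."""
    if parts == 1:
        yield (D,)
    else:
        for i in range(D + 1):
            for rest in simplex(D - i, parts - 1):
                yield (i,) + rest

def psi_grid(W, D):
    """max over the grid of min_x sum_{y: W(y|x)>0} Q(y); a lower bound for
    Psi(W) that is exact when an optimal Q lies on the grid."""
    supports = [[y for y, w in enumerate(row) if w > 0] for row in W]
    best = 0
    for q in simplex(D, len(W[0])):
        best = max(best, min(sum(q[y] for y in s) for s in supports))
    return Fraction(best, D)

e = Fraction(1, 4)
typewriter = [[1 - e, e, 0], [0, 1 - e, e], [e, 0, 1 - e]]  # rows = inputs
psi = psi_grid(typewriter, 60)        # 3 divides 60, so uniform Q is on the grid
r_inf = math.log2(1 / psi)            # R(W) = log2(1 / Psi(W))
```

Only the support pattern of W enters the computation, which is why Ψ and R are insensitive to the value of ε.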
Remark 8.
We do not know whether C 0 : C H c ( X , Y ) R c holds for any finite X , Y . This statement holds for max { | X | , | Y | } ≤ 5 , but the general case is open.
For finite alphabets X , Y and λ R c with λ > 0 , we want to analyze the set
{ W C H c ( X , Y ) : R ( W ) > λ } .
To accomplish this, we refer to the proof of Theorem 23 in [35]. Along the same lines, one can show that the following holds true:
Theorem 5.
Let X , Y be non-trivial finite alphabets. For all λ R c with 0 < λ < log 2 ( min { | X | , | Y | } ) , the set
{ W C H c ( X , Y ) : R ( W ) > λ }
is not semi-decidable.
The following theorem can be derived from a combination of the proof of Theorem 5 and Theorem 24 in [35]. The proof is carried out in the same way as the proof of Theorem 24 in [35].
Theorem 6.
Let X , Y be non-trivial finite alphabets. The function R : C H c ( X , Y ) R is not Banach–Mazur computable.
We now prove a stronger result than what we were able to show for C 0 in [35]. We show that the analogue, for the function R , of the question posed in [34] for C 0 can be answered positively.
We need a concept of distance for W 1 , W 2 C H ( X , Y ) . To this end, for fixed finite alphabets X , Y , we define the distance between W 1 and W 2 based on the total variation distance
d C ( W 1 , W 2 ) = max x X y Y | W 1 ( y | x ) W 2 ( y | x ) | .
Definition 17.
A function f : C H ( X , Y ) R is called computable continuously if the following are true:
1.
f is sequentially computable, i.e., f maps every computable sequence { W n } n N with W n C H c ( X , Y ) into a computable sequence { f ( W n ) } n N of computable numbers,
2.
f is effectively uniformly continuous, i.e., there is a recursive function d : N N such that for all W 1 , W 2 C H c ( X , Y ) and all N N with d C ( W 1 , W 2 ) 1 d ( N ) , it holds that | f ( W 1 ) f ( W 2 ) | 1 2 N .
Theorem 7.
Let X , Y be finite alphabets with | X | 2 and | Y | 2 . There exists a computable sequence of computable continuous functions { F N } N N on C H c ( X , Y ) with
1.
F N ( W ) ≥ F N + 1 ( W ) with W C H ( X , Y ) and N N ,
2.
lim N F N ( W ) = R ( W ) for all W C H ( X , Y ) .
Proof. 
We consider the function
Φ N ( W ) = max Q P ( Y ) min x X y Y N W ( y | x ) 1 + N W ( y | x ) Q ( y )
for N N . For all x X we have for all Q P ( Y )
y Y N W ( y | x ) 1 + N W ( y | x ) Q ( y ) ≤ y Y : W ( y | x ) > 0 Q ( y ) ,
and for all N N , we have for all x X and Q P ( Y )
y Y N W ( y | x ) 1 + N W ( y | x ) Q ( y ) ≤ y Y : W ( y | x ) > 0 ( N + 1 ) W ( y | x ) 1 + ( N + 1 ) W ( y | x ) Q ( y ) .
Φ N is a computable continuous function, and { Φ N } N N is a computable sequence of computable continuous functions. So,
F N ( W ) = log 2 1 Φ N ( W ) ,
for N N and W C H ( X , Y ) . F N satisfies all properties of the theorem, and point 1 is shown.
It holds
y Y : W ( y | x ) > 0 Q ( y ) − y Y N W ( y | x ) 1 + N W ( y | x ) Q ( y ) = y Y : W ( y | x ) > 0 1 1 + N W ( y | x ) Q ( y ) ≤ 1 1 + N min y Y : W ( y | x ) > 0 W ( y | x ) .
Therefore, we have
y Y : W ( y | x ) > 0 Q ( y ) ≤ 1 1 + N min y Y : W ( y | x ) > 0 W ( y | x ) + y Y N W ( y | x ) 1 + N W ( y | x ) Q ( y ) .
Because of (18), we have
Φ N ( W ) ≤ Ψ ( W )
for all W C H c ( X , Y ) . (20) yields
y Y : W ( y | x ) > 0 Q ( y ) ≤ 1 1 + N min x X min y Y : W ( y | x ) > 0 W ( y | x ) + y Y N W ( y | x ) 1 + N W ( y | x ) Q ( y ) .
So,
min x X y Y : W ( y | x ) > 0 Q ( y ) ≤ 1 1 + N min x X min y Y : W ( y | x ) > 0 W ( y | x ) + min x X y Y N W ( y | x ) 1 + N W ( y | x ) Q ( y )
and
Ψ ( W ) ≤ 1 1 + N min x X min y Y : W ( y | x ) > 0 W ( y | x ) + Φ N ( W )
holds. So, we have
0 ≤ Ψ ( W ) − Φ N ( W ) ≤ 1 1 + N min x X min y Y : W ( y | x ) > 0 W ( y | x ) .
 □
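The sandwich 0 ≤ Ψ ( W ) − Φ N ( W ) ≤ 1 / ( 1 + N min W ) just derived can be checked numerically with exact rational arithmetic. The following Python sketch is ours: it evaluates both quantities on a simplex grid for the ternary typewriter channel with ε = 1/4. The grid value of Ψ is exact here, while the grid value of Φ N is a lower bound, so the asserted inequalities are conservative.

```python
from fractions import Fraction

def simplex(D, parts):
    """All tuples of `parts` nonnegative integers summing to D."""
    if parts == 1:
        yield (D,)
    else:
        for i in range(D + 1):
            for rest in simplex(D - i, parts - 1):
                yield (i,) + rest

def grid_max_min(coef, D):
    """max over grid distributions Q of min_x sum_y coef[x][y] * Q(y),
    computed exactly in rational arithmetic."""
    ny = len(coef[0])
    best = Fraction(0)
    for q in simplex(D, ny):
        best = max(best, min(sum(c[y] * q[y] for y in range(ny)) for c in coef))
    return Fraction(best) / D

def psi(W, D):
    """Psi(W): indicator coefficients select the support of each row."""
    ind = [[1 if w > 0 else 0 for w in row] for row in W]
    return grid_max_min(ind, D)

def phi(W, N, D):
    """Phi_N(W): the smoothed coefficients N W / (1 + N W)."""
    coef = [[Fraction(N) * w / (1 + Fraction(N) * w) for w in row] for row in W]
    return grid_max_min(coef, D)

e = Fraction(1, 4)
W = [[1 - e, e, 0], [0, 1 - e, e], [e, 0, 1 - e]]   # ternary typewriter
psi_W = psi(W, 60)
phi5, phi10 = phi(W, 5, 60), phi(W, 10, 60)
w_min = min(w for row in W for w in row if w > 0)    # smallest positive entry
bound = 1 / (1 + 10 * w_min)                         # the bound above, N = 10
```

The monotonicity Φ 5 ≤ Φ 10 ≤ Ψ mirrors point 1 of Theorem 7; the smoothed coefficients are what make each Φ N continuous in W, while the indicator in Ψ is not.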
We now want to prove that the corresponding question in [34] can be answered positively for R .
Theorem 8.
Let X , Y be finite alphabets with | X | 2 and | Y | 2 . For all λ R c with 0 < λ < log 2 ( min { | X | , | Y | } ) , the set
{ W C H c ( X , Y ) : R ( W ) < λ }
is semi-decidable.
Proof. 
We use the computable sequences of computable continuous functions F N from Theorem 7. It holds that
W { W C H c ( X , Y ) : R ( W ) < λ }
if and only if there is an N 0 such that F N 0 ( W ) < λ holds. As in the proof of Theorem 28 from [35], we can construct a Turing machine T M R , < λ that accepts exactly the set
{ W C H c ( X , Y ) : R ( W ) < λ } .
 □
We now consider the approximability “from below” (this can be seen as a kind of reachability). We have shown that R ( · ) can always be represented as a limit value of monotonically decreasing computable sequences of computable continuous functions. From this, it can be concluded that the sequence is then also a computable sequence of Banach–Mazur computable functions. We now have the following:
Theorem 9.
Let X , Y be finite alphabets with | X | 2 and | Y | 2 . There does not exist a sequence of Banach–Mazur computable functions { F N } N N with
1.
F N ( W ) ≤ F N + 1 ( W ) with W C H c ( X , Y ) and N N ;
2.
lim N F N ( W ) = R ( W ) for all W C H ( X , Y ) .
Proof. 
We assume that such a sequence { F N } N N does exist. Then, from Theorem 7 and the assumptions from this theorem, it can be concluded that R is a Banach–Mazur computable function. This has created a contradiction.  □
With this, we immediately get the following:
Corollary 1.
Consider finite alphabets X , Y with | X | 2 , | Y | 2 , and let { F N } N N be a sequence of Banach–Mazur computable functions that satisfies the following:
1.
F N ( W ) ≤ F N + 1 ( W ) with W C H c ( X , Y ) and N N ,
2.
lim N F N ( W ) ≤ R ( W ) for all W C H ( X , Y ) .
Then, there exists W ^ C H c ( X , Y ) such that lim N F N ( W ^ ) < R ( W ^ ) holds true.
We now want to apply the results for R to the sphere packing bound as an application. With the results via the rate function, we immediately get
Theorem 10.
Let X , Y be finite alphabets with | X | 2 and | Y | 2 . The sphere packing bound E S P ( · , · ) is not a Turing computable performance function for C H c ( X , Y ) × R c + .
Proof. 
Assume that the statement of the theorem is incorrect; then, E S P ( · , · ) is a Turing computable performance function on C H c ( X , Y ) × R c + . But then the channel functions f ̲ ( W ) = R ( W ) for W C H c ( X , Y ) and f ¯ ( W ) = C ( W ) for W C H c ( X , Y ) must be Turing computable channel functions. As was already shown, however, R is not Banach–Mazur computable, and thus not Turing computable. We have thus created a contradiction.  □

4. Computability of the Channel Reliability Function and the Sequence of Expurgation Bound Functions

In this section, we consider the reliability function and the expurgation bound and show that these functions are not Turing computable performance functions.
With the help of the results from [35] for C 0 for noisy channels, we immediately get the following theorem:
Theorem 11.
Let X , Y be finite alphabets with | X | 2 and | Y | 2 . The channel reliability function E ( · , · ) is not a Turing computable performance function for C H c ( X , Y ) × R c .
Proof. 
Here, f ̲ ( W ) = C 0 ( W ) for W C H c ( X , Y ) would have to be a Turing computable channel function, according to Definition 14. We already know that C 0 is not Banach–Mazur computable on C H c ( X , Y ) . This gives the proof in the same way as for the sphere packing bound, i.e., the proof of Theorem 10.  □
Now, we consider the rate function for the expurgation bound. The k-letter expurgation bound E e x ( W , R , k ) , as a function of W and R, is a lower bound for the channel reliability function. It can only be finite on certain intervals ( R k e x ( W ) , C ( W ) ) , so we want to compute the function on these intervals. In their famous paper [13], Shannon, Gallager, and Berlekamp examined the sequence of functions { E e x ( · , · , k ) } k N and analyzed its relationship to the channel reliability function. They conjectured that for all W C H ( X , Y ) and all R with E ( W , R ) < + , one would have convergence (and also E e x ( W , R , k ) < + ), i.e., that the relation
lim k E e x ( W , R , k ) = E ( W , R )
holds. This conjecture was first refuted in [42] and later refuted by a simpler example in [43].
It was already clear with the introduction of the channel reliability function that it has a complicated behavior. A closed-form formula for the channel reliability function is not yet known, and the results of this paper show that such a formula cannot exist. Shannon, Gallager, and Berlekamp tried in [13] in 1967 to find sequences of seemingly simple formulas for the approximation of the channel reliability function. It seems that they considered the sequence of the k-letter expurgation bounds to be a very good candidate for this approximation. It was hoped that these sequences could be computed more easily with the use of new powerful digital computers.
Let us now examine the sequence { E e x ( · , · , k ) } k N . We have already introduced the concept of computable sequences of computable continuous channel functions. We now introduce the concept of computable sequences of Turing computable performance functions.
Definition 18.
A sequence { F k } k N of Turing computable performance functions is called a computable sequence if there is a Turing machine that, for input k, generates the description of F k , according to the definition of the function F k , for the values for which the function is defined.
In the following theorem, we prove that the sequence of the k-letter expurgation bounds is not a computable sequence of computable performance functions. So, the hope mentioned above cannot be fulfilled.
Theorem 12.
Let X , Y be finite alphabets with | X | 2 and | Y | 2 . The sequence of the expurgation lower bounds { E e x ( · , · , k ) } k N is not a computable sequence of Turing computable performance functions.
Proof. 
We prove the theorem by contradiction, assuming that there exists a Turing machine T M * that generates a description of the function E e x ( · , · , k ) for a given input k, as defined in its formulation. This implies that the sequence { R k e x } k N is computable, since we have an algorithm that can generate each function in the sequence.
Notably, we can express f ̲ k ( · ) as R k e x ( · ) . Given an input k, the Turing machine T M * produces the description of E e x ( · , · , k ) , from which R k e x can be directly obtained via projection (in the sense of primitive recursive functions).
According to Shannon, Gallager, and Berlekamp [13], the following limit holds:
lim k R k e x ( W ) = C 0 ( W )
for all W C H ( X , Y ) . Furthermore, the sequence { R k e x ( W ) } k N is monotonically increasing, i.e.,
R k e x ( W ) ≤ R k + 1 e x ( W ) for all k N and W C H ( X , Y ) .
Let us consider the set
{ W C H c ( X , Y ) : C 0 ( W ) > λ }
for λ R c with 0 < λ < log 2 ( min { | X | , | Y | } ) . We now construct a Turing machine T M * * with only one halting state, “stop”, which means that it either stops or computes forever. T M * * should stop for input W C H c ( X , Y ) if and only if C 0 ( W ) > λ applies, that is, T M * * stops if and only if W is in the above set. According to the assumption, { R k e x ( · ) } k N is a computable sequence of Turing computable channel functions. For the input W, we can generate the computable sequence { R k e x ( W ) } k N of computable numbers. We now use the Turing machine T M λ 1 , which receives an arbitrary computable number x as input and stops if and only if x > λ , i.e., T M λ 1 has only one halting state and accepts exactly the computable numbers x for which x > λ holds. We now use this machine in the following algorithm.
  • We start with l = 1 and let T M λ 1 compute one step for input R 1 e x ( W ) . If T M λ 1 ( R 1 e x ( W ) ) stops, then we stop the algorithm.
  • If T M λ 1 ( R 1 e x ( W ) ) does not stop, we set l = l + 1 and compute l + 1 steps of T M λ 1 ( R r e x ( W ) ) for 1 ≤ r ≤ l + 1 . If one of these Turing machines stops, then the algorithm stops; if not, we set l = l + 1 and repeat the second step.
The above algorithm stops if and only if there is a k ^ N such that R k ^ e x ( W ) > λ . But this is the case (because of the monotonicity of the sequence { R k e x ( W ) } k N ) if and only if C 0 ( W ) > λ . But with this, the set
{ W C H c ( X , Y ) : C 0 ( W ) > λ }
is semi-decidable. However, we know from the results in [35] for C 0 that this is not the case. We have thus created a contradiction.  □

5. Computability of the Zero-Error Capacity of Noisy Channels with Feedback

In this section, we consider the zero-error capacity for noisy channels with feedback. In our paper [35], we examined the properties of the zero-error capacity without feedback. Let W C H ( X , Y ) . We already noted that Shannon showed in [40] that
C 0 F B ( W ) = 0 if C 0 ( W ) = 0 , and C 0 F B ( W ) = max P P ( X ) min y Y log 2 1 x : W ( y | x ) > 0 P ( x ) otherwise .
From (15), recall that
Ψ ( W ) = max Q P ( Y ) min x X y : W ( y | x ) > 0 Q ( y ) .
Then, we have for W with C 0 ( W ) > 0 ,
C 0 F B ( W ) = log 2 1 Ψ ( W ) .
We know that C 0 F B ( W ) = R ( W ) if C 0 ( W ) > 0 . In the case C 0 ( W ) = 0 , there are channels W with C 0 F B ( W ) = 0 and R ( W ) > 0 . As in Lemma 1, we can show the following:
Lemma 2.
Let X , Y be finite non-trivial alphabets. It holds that
C 0 F B : C H c ( X , Y ) R c .
From Theorem 5 and the relationship between C 0 and C 0 F B , we get the following results for C 0 F B , which we have already proved for C 0 in [35].
Theorem 13.
Let X , Y be finite alphabets with | X | 2 and | Y | 2 . For all λ R c with 0 ≤ λ < log 2 min { | X | , | Y | } , the sets { W C H c ( X , Y ) : C 0 F B ( W ) > λ } are not semi-decidable.
Theorem 14.
Let X , Y be finite alphabets with | X | 2 and | Y | 2 . Then, C 0 F B : C H c ( X , Y ) R is not Banach–Mazur computable.
Now, we will prove the following:
Theorem 15.
Let X , Y be finite alphabets with | X | 2 and | Y | 2 . There is a computable sequence of computable continuous functions { G N } N N with
1.
G N ( W ) ≥ G N + 1 ( W ) for W C H ( X , Y ) and N N ;
2.
lim N G N ( W ) = C 0 F B ( W ) for W C H ( X , Y ) .
Proof. 
We use for N N , y Y and P P ( X ) the function
x X N W ( y | x ) 1 + N W ( y | x ) P ( x ) .
Then, for
Φ N ( W ) = min P P ( X ) max y Y x X N W ( y | x ) 1 + N W ( y | x ) P ( x ) ,
we have the same properties as in Theorem 7 and
U N ( W ) = log 2 1 Φ N ( W )
is an upper bound for C 0 F B , which is monotonically decreasing. Now, the relation C 0 F B ( W ) > 0 holds for W C H ( X , Y ) if and only if there are two letters x 1 , x 2 X with x 1 ≠ x 2 so that
y Y W ( y | x 1 ) W ( y | x 2 ) = 0
holds. We now set g ( x ^ , x ) = y Y W ( y | x ^ ) W ( y | x ) = g ( W , x ^ , x ) and have 0 g ( x ^ , x ) 1 for x , x ^ X . g is a computable continuous function with respect to W C H ( X , Y ) . Now, we set
V N ( W ) = ( 1 − ∏ x , x ^ g ( W , x ^ , x ) ) N U N ( W )
for N N . { V N } N N is thus a computable sequence of computable continuous functions. Obviously, V N ( W ) ≥ V N + 1 ( W ) for W C H ( X , Y ) and N N is satisfied. Furthermore, it holds that
( 1 − ∏ x , x ^ g ( W , x , x ^ ) ) N = 1
if and only if C 0 F B ( W ) > 0 . So, for C 0 F B ( W ) = 0 , we always have
lim N V N ( W ) = 0 .
For W with C 0 F B ( W ) > 0 ,
lim N V N ( W ) = lim N U N ( W ) = C 0 F B ( W ) .
This is shown in the proof of Theorem 7.  □
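The role of the prefactor ( 1 − ∏ g ) N in the definition of V N can be made concrete. The following Python sketch is ours: it computes the row overlaps g and the prefactor for the 3×3 identity channel ( C 0 F B = log 2 3 > 0 ) and for the ternary typewriter channel ( C 0 = C 0 F B = 0 but R > 0 ). In the first case the prefactor is identically 1, so V N = U N ; in the second it is strictly below 1, so V N → 0 even though U N stays near R ( W ) . Computing U N itself would require the minimax optimization and is omitted here.

```python
from fractions import Fraction
from itertools import product

def g(W, x1, x2):
    """Row overlap g(W, x1, x2) = sum_y W(y|x1) * W(y|x2)."""
    return sum(a * b for a, b in zip(W[x1], W[x2]))

def prefactor(W, N):
    """(1 - product over all pairs (x, x_hat) of g(W, x_hat, x)) ** N."""
    p = Fraction(1)
    for x1, x2 in product(range(len(W)), repeat=2):
        p *= g(W, x1, x2)
    return (1 - p) ** N

e = Fraction(1, 4)
typewriter = [[1 - e, e, 0], [0, 1 - e, e], [e, 0, 1 - e]]   # C0 = 0, R > 0
identity3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]                # C0FB = log2(3)

id_factor = prefactor(identity3, 100)      # disjoint rows: factor stays 1
tw_base = float(prefactor(typewriter, 1))  # strictly below 1, so the
# N-th power tends to 0 as N grows, albeit very slowly in this example.
```

This is exactly the dichotomy used in the proof: a single disjoint row pair forces the product of overlaps to 0 and the prefactor to 1 for every N.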
This immediately gives us the following theorem.
Theorem 16.
Let X , Y be finite alphabets with | X | 2 and | Y | 2 . For all λ R c with 0 ≤ λ < log 2 min { | X | , | Y | } , the sets { W C H c ( X , Y ) : C 0 F B ( W ) < λ } are semi-decidable.
Now, we want to look at the consequences of the results above for C 0 F B . The same statements apply here as in Section 3 for R with regard to the approximation from below: C 0 F B cannot be approximated by monotonically increasing sequences of Banach–Mazur computable functions.
There is an elementary relationship between R and C 0 F B , which we use in the following. Again, we assume that X , Y are finite non-trivial alphabets. We remember the following functions:
R ( W ) = log 2 1 Ψ ( W ) ,
where Ψ ( W ) = max Q P ( Y ) min x X y : W ( y | x ) > 0 Q ( y ) .
C 0 F B ( W ) = 0 if C 0 ( W ) = 0 , and C 0 F B ( W ) = G ( W ) if C 0 ( W ) > 0 ,
where G ( W ) = log 2 1 Ψ ( W ) and
Ψ ( W ) = min P P ( X ) max y Y x : W ( y | x ) > 0 P ( x ) .
Let A ( W ) be the | Y | × | X | matrix with ( A ( W ) ) k l { 0 , 1 } for 1 k | Y | and 1 l | X | , such that ( A ( W ) ) k l = 1 if and only if W ( k | l ) > 0 . Furthermore, let
M X = { u R | X | : u = ( u 1 , , u | X | ) , u l ≥ 0 , l = 1 | X | u l = 1 }
and
M Y = { v R | Y | : v = ( v 1 , , v | Y | ) , v l ≥ 0 , l = 1 | Y | v l = 1 } .
For v R | Y | and u R | X | , we consider the function F ( v , u ) = v T A ( W ) u . The function F is concave in v M Y and convex in u M X . M Y and M X are closed convex and compact sets, and F ( v , u ) is continuous in both variables. So,
max v M Y min u M X F ( v , u ) = min u M X max v M Y F ( v , u ) .
Let v M Y be fixed. Then,
F ( v , u ) = l = 1 | X | k = 1 | Y | v k A k l ( W ) u l
F ( v , u ) = l = 1 | X | d l ( v ) u l ,
with d l ( v ) = k = 1 | Y | v k A k l ( W ) . Now, d l ( v ) 0 for 1 l | X | . Hence,
min u M X F ( v , u ) = min 1 l | X | d l ( v ) = min 1 l | X | k : A k l ( W ) > 0 v k = min x X y : W ( y | x ) > 0 Q v ( y ) ,
with Q v ( y ) = v y for y { 1 , , | Y | } . So,
max v M Y min u M X F ( v , u ) = max Q P ( Y ) min x X y : W ( y | x ) > 0 Q v ( y ) = Ψ ( W ) .
Furthermore, for u M X fixed,
F ( v , u ) = k = 1 | Y | l = 1 | X | u l A k l ( W ) v k = k = 1 | Y | β k ( u ) v k ,
with β k ( u ) = l = 1 | X | u l A k l ( W ) 0 and 1 k | Y | . Therefore,
max v M Y F ( v , u ) = max 1 k | Y | β k ( u ) = max 1 k | Y | l : A k l ( W ) > 0 u l = max y Y x : W ( y | x ) > 0 p u ( x )
with p u ( x ) = u x for 1 x | X | . It follows that
min u M X max v M Y F ( v , u ) = min p P ( X ) max y Y x : W ( y | x ) > 0 P ( x ) = Ψ ( W ) .
We get the following lemma.
Lemma 3.
Let W C H ( X , Y ) ; then,
R ( W ) = G ( W ) .
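The minimax identity behind Lemma 3 can be certified exactly for a small example. The following Python sketch is ours: for the ternary typewriter pattern, both game values of the 0/1 matrix A ( W ) are attained at uniform distributions, which lie on a grid with denominator divisible by 3, so exact grid search over both simplices pins max min = min max = 2/3.

```python
from fractions import Fraction

def simplex(D, parts):
    """All tuples of `parts` nonnegative integers summing to D."""
    if parts == 1:
        yield (D,)
    else:
        for i in range(D + 1):
            for rest in simplex(D - i, parts - 1):
                yield (i,) + rest

A = [[1, 0, 1],   # A[k][l] = 1 iff W(k|l) > 0; ternary typewriter pattern
     [1, 1, 0],
     [0, 1, 1]]
ny, nx = len(A), len(A[0])
D = 6

# max over v of min_l (A^T v)_l  -- grid search, a lower bound, exact here
maxmin = max(min(sum(v[k] * A[k][l] for k in range(ny)) for l in range(nx))
             for v in simplex(D, ny))
# min over u of max_k (A u)_k    -- grid search, an upper bound, exact here
minmax = min(max(sum(A[k][l] * u[l] for l in range(nx)) for k in range(ny))
             for u in simplex(D, nx))
psi_low, psi_high = Fraction(maxmin, D), Fraction(minmax, D)
```

Since the grid max-min never exceeds the true game value and the grid min-max never falls below it, their coincidence at 2/3 certifies the common value without any LP solver.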
We want to investigate the behavior of R for the input W 1 W 2 , where W 1 W 2 denotes the Kronecker product of the matrices W 1 and W 2 , compared to R ( W 1 ) and R ( W 2 ) . For this purpose, let X 1 , Y 1 , X 2 , Y 2 be arbitrary finite non-trivial alphabets, and we consider W l C H ( X l , Y l ) for l = 1 , 2 .
Theorem 17.
Let X 1 , Y 1 , X 2 , Y 2 be arbitrary finite non-trivial alphabets, and W l C H ( X l , Y l ) for l = 1 , 2 . Then, we have
R ( W 1 W 2 ) = R ( W 1 ) + R ( W 2 ) .
Proof. 
We use the Ψ function. For product distributions Q = Q 1 · Q 2 with Q 1 P ( Y 1 ) and Q 2 P ( Y 2 ) , it holds that
min x 1 X 1 , x 2 X 2 y 1 : W 1 ( y 1 | x 1 ) > 0 y 2 : W 2 ( y 2 | x 2 ) > 0 Q 1 ( y 1 ) Q 2 ( y 2 ) = min x 1 X 1 y 1 : W 1 ( y 1 | x 1 ) > 0 Q 1 ( y 1 ) min x 2 X 2 y 2 : W 2 ( y 2 | x 2 ) > 0 Q 2 ( y 2 ) .
Since Q 1 P ( Y 1 ) and Q 2 P ( Y 2 ) were arbitrary, we obtain
Ψ ( W 1 W 2 ) ≥ Ψ ( W 1 ) · Ψ ( W 2 ) .
Also, we have
Ψ ( W 1 W 2 ) = min P P ( X 1 × X 2 ) max ( y 1 , y 2 ) Y 1 × Y 2 x 1 : W 1 ( y 1 | x 1 ) > 0 x 2 : W 2 ( y 2 | x 2 ) > 0 P ( x 1 , x 2 ) ≤ Ψ ( W 1 ) · Ψ ( W 2 )
as well. So,
Ψ ( W 1 W 2 ) = Ψ ( W 1 ) · Ψ ( W 2 )
and the theorem is proven.  □
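The multiplicativity Ψ ( W 1 W 2 ) = Ψ ( W 1 ) Ψ ( W 2 ) just proven can be checked exactly on a small pair. This Python sketch is ours: it pairs the binary identity channel (Ψ = 1/2) with the ternary typewriter channel (Ψ = 2/3); all three optimizers lie on the grid with denominator 6, so the grid values are exact.

```python
from fractions import Fraction

def simplex(D, parts):
    """All tuples of `parts` nonnegative integers summing to D."""
    if parts == 1:
        yield (D,)
    else:
        for i in range(D + 1):
            for rest in simplex(D - i, parts - 1):
                yield (i,) + rest

def psi_grid(W, D):
    """Exact grid evaluation of max_Q min_x sum_{y: W(y|x)>0} Q(y)."""
    supports = [[y for y, w in enumerate(row) if w > 0] for row in W]
    return Fraction(max(min(sum(q[y] for y in s) for s in supports)
                        for q in simplex(D, len(W[0]))), D)

def kron(W1, W2):
    """Kronecker product channel: row (x1, x2), column (y1, y2)."""
    return [[a * b for a in r1 for b in r2] for r1 in W1 for r2 in W2]

e = Fraction(1, 4)
typewriter = [[1 - e, e, 0], [0, 1 - e, e], [e, 0, 1 - e]]
identity2 = [[1, 0], [0, 1]]

p1 = psi_grid(identity2, 6)                      # 1/2
p2 = psi_grid(typewriter, 6)                     # 2/3
p12 = psi_grid(kron(identity2, typewriter), 6)   # their product, 1/3
```

Via R ( W ) = log 2 ( 1 / Ψ ( W ) ) , this is the additivity R ( W 1 W 2 ) = R ( W 1 ) + R ( W 2 ) of Theorem 17 in a concrete instance.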
We want to investigate the behavior of C 0 F B for the input W 1 W 2 compared to C 0 F B ( W 1 ) and C 0 F B ( W 2 ) . For this purpose, let X 1 , Y 1 , X 2 , Y 2 be arbitrary finite non-trivial alphabets and consider W l C H ( X l , Y l ) for l = 1 , 2 .
Theorem 18.
Let X 1 , Y 1 , X 2 , Y 2 be arbitrary finite non-trivial alphabets, and W l C H ( X l , Y l ) for l = 1 , 2 . Then, we have
1.
C 0 F B ( W 1 W 2 ) ≥ C 0 F B ( W 1 ) + C 0 F B ( W 2 )
2.
C 0 F B ( W 1 W 2 ) > C 0 F B ( W 1 ) + C 0 F B ( W 2 )
if and only if
min 1 l 2 C 0 F B ( W l ) = 0 and max 1 l 2 C 0 F B ( W l ) > 0 and min 1 l 2 R ( W l ) > 0 .
Remark 9.
The condition (33) is equivalent to
min 1 l 2 C 0 ( W l ) = 0 and max 1 l 2 C 0 ( W l ) > 0 and min 1 l 2 R ( W l ) > 0 .
Proof. 
(31) follows directly from the operational definition of C 0 F B . Let (33) now be fulfilled. Then, C 0 F B ( W 1 W 2 ) > 0 must be fulfilled. Without loss of generality, we assume C 0 F B ( W 1 ) = 0 , C 0 F B ( W 2 ) > 0 and R ( W 1 ) > 0 , R ( W 2 ) > 0 . Since C 0 F B ( W 1 W 2 ) > 0 ,
C 0 F B ( W 1 W 2 ) = R ( W 1 W 2 ) = R ( W 1 ) + R ( W 2 ) = R ( W 1 ) + C 0 F B ( W 2 ) > 0 + C 0 F B ( W 2 ) = C 0 F B ( W 1 ) + C 0 F B ( W 2 ) .
If (32) is fulfilled, then C 0 F B ( W 1 W 2 ) > 0 . Then, max 1 l 2 C 0 F B ( W l ) > 0 must hold, because if max 1 l 2 C 0 F B ( W l ) = 0 , then max 1 l 2 C 0 ( W l ) = 0 , and thus C 0 ( W 1 W 2 ) = 0 also (since the C 0 capacity has no super-activation). This means that C 0 F B ( W 1 W 2 ) = 0 , which would be a contradiction.
If min 1 l 2 C 0 F B ( W l ) > 0 , then
C 0 F B ( W 1 W 2 ) = R ( W 1 W 2 ) = R ( W 1 ) + R ( W 2 ) = C 0 F B ( W 1 ) + C 0 F B ( W 2 ) .
This is a contradiction, and thus min 1 l 2 C 0 F B ( W l ) = 0 . Furthermore, min 1 l 2 R ( W l ) > 0 must apply, because if min 1 l 2 R ( W l ) = 0 , then R ( W 1 ) = 0 without loss of generality. Then,
C 0 F B ( W 1 W 2 ) = R ( W 1 W 2 ) = R ( W 1 ) + R ( W 2 ) = 0 + R ( W 2 ) = 0 + C 0 F B ( W 2 ) = C 0 F B ( W 1 ) + C 0 F B ( W 2 ) ,
because C 0 F B ( W 1 ) = 0 when R ( W 1 ) = 0 . This is again a contradiction. With this, we have proven the theorem.  □
It remains to show for which alphabet sizes the behavior described in Theorem 18 can occur.
Theorem 19.
1.
If | X 1 | = | X 2 | = | Y 1 | = | Y 2 | = 2 , then for all W l C H ( X l , Y l ) with l = 1 , 2 , we have
C 0 F B ( W 1 W 2 ) = C 0 F B ( W 1 ) + C 0 F B ( W 2 ) .
2.
If X 1 , X 2 , Y 1 , Y 2 are non-trivial alphabets with
max { min { | X 1 | , | Y 1 | } , min { | X 2 | , | Y 2 | } } 3 ,
then there exists W ^ l C H ( X l , Y l ) with l = 1 , 2 , such that
C 0 F B ( W ^ 1 W ^ 2 ) > C 0 F B ( W ^ 1 ) + C 0 F B ( W ^ 2 ) .
Proof. 
  • If C 0 ( W 1 ) = C 0 ( W 2 ) = 0 , then (35) holds, since C 0 ( W 1 W 2 ) = 0 .
    If max { C 0 ( W 1 ) , C 0 ( W 2 ) } > 0 , we can assume without loss of generality that C 0 ( W 1 ) = 0 and C 0 ( W 2 ) > 0 . In this case, W 2 must be either
    W 2 = 1 0 0 1 or W 2 = 0 1 1 0 ,
    which implies that C 0 ( W 2 ) = 1 , and consequently, C 0 F B ( W 2 ) = 1 . Furthermore, if R ( W 1 ) > 0 , then W 1 must also be one of the two matrices above, ensuring that (35) holds. If instead R ( W 1 ) = 0 , Theorem 17 guarantees that (35) remains valid.
  • We now prove (36) under the assumption that | X 1 | = | Y 1 | = 2 and | X 2 | = | Y 2 | = 3 . If we have found channels W ^ 1 , W ^ 2 for this case such that (36) holds, then it is also clear how the general case 2 can be proved. We set W ^ 1 = 1 0 0 1 , which means C 0 ( W ^ 1 ) = C 0 F B ( W ^ 1 ) = R ( W ^ 1 ) = 1 . For W ^ 2 , we take the ternary typewriter channel W ^ 2 ( ϵ ) with X 2 = Y 2 = { 0 , 1 , 2 } (see [43]):
    W ^ 2 ( ϵ ) ( y | x ) = 1 − ϵ if y = x , ϵ if y = x + 1 mod 3 , and 0 otherwise .
    Let ϵ ( 0 , 1 2 ) be arbitrary; then, C ( W ^ 2 ( ϵ ) ) = log 2 ( 3 ) − H 2 ( ϵ ) . We have R ( W ^ 2 ( ϵ ) ) = log 2 ( 3 2 ) and C 0 ( W ^ 2 ( ϵ ) ) = 0 . This means that C 0 F B ( W ^ 2 ( ϵ ) ) = 0 . Thus, because C 0 ( W ^ 1 W ^ 2 ( ϵ ) ) ≥ C 0 ( W ^ 1 ) = 1 ,
    C 0 F B ( W ^ 1 W ^ 2 ( ϵ ) ) = R ( W ^ 1 ) + R ( W ^ 2 ( ϵ ) ) = 1 + log 2 ( 3 2 ) > C 0 F B ( W ^ 1 ) + C 0 F B ( W ^ 2 ( ϵ ) )
    and we have proven case 2.
 □

6. Behavior of the Expurgation-Bound Rates

In this section, we consider the behavior of the expurgation-bound rate R k e x , where k is the parameter of the k-letter description. R k e x occurs in the expurgation bound, which is a lower bound for the channel reliability function. Let X 1 , Y 1 , X 2 , Y 2 be arbitrary finite non-trivial alphabets, and W l C H ( X l , Y l ) for l = 1 , 2 . We want to examine R k e x .
Theorem 20.
There exist non-trivial alphabets X 1 , Y 1 , X 2 , Y 2 and channels W l C H ( X l , Y l ) for l = 1 , 2 , such that for all k ^ , there exists k k ^ with
R e x k ( W 1 W 2 ) ≠ R e x k ( W 1 ) + R e x k ( W 2 ) .
Proof. 
Assume that the statement of the theorem is false. Then, for all X 1 , Y 1 , X 2 , Y 2 and all W l C H ( X l , Y l ) with l = 1 , 2 , there is a k ^ such that for all k ≥ k ^ ,
R e x k ( W 1 W 2 ) = R e x k ( W 1 ) + R e x k ( W 2 ) .
We now take X 1 , Y 1 , X 2 , Y 2 such that C 0 is superadditive. Then, we have for certain W 1 , W 2 with W l C H ( X l , Y l ) ,
C 0 ( W 1 W 2 ) > C 0 ( W 1 ) + C 0 ( W 2 ) .
Then,
C 0 ( W 1 W 2 ) = lim k R e x k ( W 1 W 2 ) = lim k R e x k ( W 1 ) + R e x k ( W 2 ) = C 0 ( W 1 ) + C 0 ( W 2 ) .
This is a contradiction, and thus the theorem is proven.  □
We improve the statement of Theorem 20 with the following theorem.
Theorem 21.
There exist non-trivial alphabets X 1 , Y 1 , X 2 , Y 2 and channels W l C H ( X l , Y l ) for l = 1 , 2 and a k ^ , such that for all k ≥ k ^ ,
R e x k ( W 1 W 2 ) > R e x k ( W 1 ) + R e x k ( W 2 )
holds true.
Proof. 
Assume the statement of the theorem is false. This means that for all channels W l C H ( X l , Y l ) with l = 1 , 2 , the following applies: there exists a sequence { k j } j N with lim j k j = + , such that
R e x k j ( W 1 W 2 ) ≤ R e x k j ( W 1 ) + R e x k j ( W 2 )
for all j N . We now take X ^ 1 , Y ^ 1 , X ^ 2 , Y ^ 2 so that C 0 is superadditive for these alphabets. Then, we have for certain W ^ 1 , W ^ 2 with W ^ l C H ( X ^ l , Y ^ l ) for l = 1 , 2 ,
C 0 ( W ^ 1 W ^ 2 ) > C 0 ( W ^ 1 ) + C 0 ( W ^ 2 ) .
Then,
C 0 ( W ^ 1 W ^ 2 ) = lim j R e x k j ( W ^ 1 W ^ 2 ) ≤ lim j R e x k j ( W ^ 1 ) + R e x k j ( W ^ 2 ) = C 0 ( W ^ 1 ) + C 0 ( W ^ 2 ) .
This is a contradiction to (38), and thus the theorem is proven.  □
We have already observed that the function E ( W , · ) exhibits significantly different behavior over certain rate intervals [ R , R ^ ] . In particular, we have analyzed the impact of the channel product W 1 W 2 on the intervals ( R ( W 1 W 2 ) , C ( W 1 W 2 ) ) and ( R e x k ( W 1 W 2 ) , C ( W 1 W 2 ) ) for k N .
For the first interval, we established the relation
( R ( W 1 W 2 ) , C ( W 1 W 2 ) ) = ( R ( W 1 ) + R ( W 2 ) , C ( W 1 ) + C ( W 2 ) ) .
However, for the second interval, we have shown that such a simple additive behavior does not hold. By Theorem 21, we conclude that there exist channels W 1 , W 2 for which
R e x k ( W 1 W 2 ) > R e x k ( W 1 ) + R e x k ( W 2 )
is satisfied for all k ≥ k ^ .
Another important aspect is understanding under which conditions there is an interval [ 0 , R ^ ) on which E ( W , · ) becomes infinite. This occurs if and only if C 0 ( W ) > 0 , in which case the interval is given by [ 0 , C 0 ( W ) ) . Consequently, there exist channels W 1 , W 2 such that for the function E ( W 1 W 2 , · ) , this interval extends beyond [ 0 , C 0 ( W 1 ) + C 0 ( W 2 ) ] .
Thus, we conclude that C 0 is in general not additive, but can be strictly superadditive.

7. Conclusions

We have shown that the channel reliability function is not a Turing computable performance function. The same conclusion holds for the functions associated with the sphere packing bound and the expurgation bound.
An interesting aspect of our work is that the constraints we impose on Turing computable performance functions are strictly weaker than those typically required for Turing computable functions. Specifically, we do not require that the Turing machine halt for all inputs ( W , R ) C H c ( X , Y ) × R c + . This means we allow the Turing machine to compute indefinitely for certain inputs, i.e., it may never halt for some inputs. Consequently, we permit performance functions that are not defined for all ( W , R ) C H c ( X , Y ) × R c + . However, we do require the Turing machine to halt for inputs ( W , R ) C H c ( X , Y ) × R c + whenever the performance function F is defined, and in such cases, the machine must return the computable value F ( W , R ) as output. This ensures that the algorithm generated corresponds to the number F ( W , R ) according to Definition 15.
Additionally, we considered the R function and the zero-error feedback capacity, both of which play a critical role in the context of the channel reliability function. We demonstrated that neither the R function nor the zero-error feedback capacity is Banach–Mazur computable. Furthermore, we proved that the R function is additive.
We also established that for all finite alphabets X, Y with |X| ≥ 2 and |Y| ≥ 2, the channel reliability function itself is not a Turing computable performance function. Moreover, we showed that the commonly studied bounds, which have been examined extensively in the literature, are likewise not Turing computable performance functions. It remains open whether there exist any non-trivial upper bounds on the channel reliability function that are Turing computable.
In [13], the sequence of k-letter expurgation bounds was regarded as an effective method for approximating the channel reliability function, and it was hoped that these sequences could be computed efficiently on modern digital computers. However, we have shown that this is not the case. Table 1 gives an overview of the main results of the paper.
As mentioned in the Introduction, future communication systems, such as 6G, will face stringent requirements for trustworthiness. Ultra-reliability, along with the corresponding performance functions, is central to 6G, and this paper addresses that challenge. It is currently unclear how the non-Turing computability of performance functions will impact the system evaluation and certification of future communication systems. A recent study [62] showed that the non-Turing computability of performance functions in artificial intelligence (AI) leads to digital AI algorithms being unable to meet essential legal requirements. It is an intriguing research question whether similar issues might arise in the context of communication systems.
This work does not claim that machine learning or artificial intelligence (AI) approaches are useless for computing capacity functions. Rather, it demonstrates that certain solutions cannot be found by such methods, or that a computer may not be able to assess how close a given result is to the optimum. Nevertheless, employing machine learning tools remains valuable; one must simply be aware that these approaches do not always guarantee optimality. In such cases, alternative theoretical frameworks may be necessary.
Turing computability and Banach–Mazur computability are two central notions in the theory of computation. Every function that is Turing computable is also Banach–Mazur computable, meaning that Banach–Mazur computability subsumes Turing computability. However, the converse does not hold: not every Banach–Mazur computable function is Turing computable. In fact, if a function is not Banach–Mazur computable, then it cannot be computable under any other standard notion of computability. This underscores the foundational and maximal character of Banach–Mazur computability within the hierarchy of computability concepts. Moreover, as shown in [63], there exist even total functions—functions defined on all computable real numbers—that are Banach–Mazur computable but not Turing computable. For readers interested in a deeper understanding of computability theory—how to determine whether a function is computable, along with illustrative examples and detailed explanations—we recommend the comprehensive work by Soare [64] and Cooper’s New Computational Paradigms [65]. Practical implications of these theoretical analyses, especially their relevance to real-world applications, are further explored in [66], which may be of particular interest to those seeking connections between theory and practice.
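To make the notion of computing with real inputs concrete: in computable analysis (see [37,39]), a computable real is typically represented by an algorithm emitting rationals q_n with |x − q_n| ≤ 2^−n, and a Turing computable function must transform such representations effectively. The following is a minimal sketch under our own modeling choices, not the paper's formal definitions:

```python
from fractions import Fraction

# Model a computable real as a function n -> Fraction q_n
# with |x - q_n| <= 2**-n (a rapidly converging Cauchy name).

def const(q):
    q = Fraction(q)
    return lambda n: q

def add(x, y):
    # Error budget: 2**-(n+1) + 2**-(n+1) = 2**-n, so the output is
    # again a valid name -- addition is Turing computable.
    return lambda n: x(n + 1) + y(n + 1)

def sqrt2(n):
    # Binary search yields sqrt(2) to within 2**-n.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid <= 2 else (lo, mid)
    return lo

x = add(sqrt2, const(1))
print(float(x(20)))  # within 2**-20 of 1 + sqrt(2) ~ 2.41421356...
```

Non-computability results such as those in this paper assert that no algorithm of this kind exists for the function in question, no matter how the machine processes the approximations of its input.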

Author Contributions

The contributions of both authors are equal across all categories. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge the financial support provided by the Federal Ministry of Education and Research (BMBF) of Germany under the “Souverän. Digital. Vernetzt.” program, specifically through the joint project 6G-life, project identification numbers 16KISK002 and 16KISK263. H. Boche and C. Deppe also acknowledge the financial support from the BMBF’s quantum program QuaPhySI under grants 16KIS1598K and 16KIS2234, as well as from the QUIET project under grants 16KISQ093 and 16KISQ0170. Additionally, they were supported by the QC-CamNetz project under grants 16KISQ077 and 16KISQ169. Furthermore, they received funding from the DFG through the project “Post Shannon Theorie und Implementierung,” under grants BO 1734/38-1 and DE 1915/2-1. Special thanks also go to the BMBF within the national initiative for their support of H. Boche under grant 16KIS1003K and of C. Deppe under grant 16KIS1005.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Acknowledgments

Holger Boche would like to thank Martin Bossert for insightful discussions and questions regarding the theory of the channel reliability function and the trustworthiness of numerical simulations of this function on digital computers. He also expresses his gratitude to Vince Poor and Martin Bossert for their discussions at ISIT 2019 in Paris, which sparked the research leading to the results presented in this paper. Finally, we express our appreciation to Yannik Böck for his helpful and insightful comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
2. Chow, T.Y. What is a Closed-Form Number? Amer. Math. Mon. 1999, 106, 440–448.
3. Borwein, J.; Crandall, R. Closed Forms: What They Are and Why We Care. Not. Am. Math. Soc. 2013, 60, 50–65.
4. Ahlswede, R. Multi-way communication channels. In Proceedings of the Second International Symposium on Information Theory, Tsahkadsor, Armenia, 2–8 September 1971.
5. Ahlswede, R.; Dueck, G. Identification via channels. IEEE Trans. Inform. Theory 1989, 35, 15–29.
6. Sason, I. Observations on graph invariants with the Lovász ϑ-function. AIMS Math. 2024, 9, 15385–15468.
7. Sason, I. Observations on the Lovász θ-Function, Graph Capacity, Eigenvalues, and Strong Products. Entropy 2023, 25, 104.
8. Gallager, R.G. Information Theory and Reliable Communication; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1968.
9. Haroutunian, E.; Haroutunian, M.; Harutyunyan, A. Reliability Criteria in Information Theory and in Statistical Hypothesis Testing. Found. Trends Commun. Inf. Theory 2008, 4, 97–263.
10. Blahut, R.E. Principles and Practice of Information Theory; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1987.
11. Elias, P. Coding for noisy channels. IRE Conv. Rec. 1955, 4, 37–46.
12. Fano, R.M. Transmission of information: A statistical theory of communications. Am. J. Phys. 1961, 29, 793–794.
13. Shannon, C.; Gallager, R.; Berlekamp, E. Lower Bounds to Error Probability for Coding in Discrete Memoryless Channels. Inf. Control 1967, 10, 65–103.
14. Turing, A. On Computable Numbers, with an Application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 1936, 42, 230–265.
15. Arimoto, S. An algorithm for computing the capacity of arbitrary discrete memoryless channels. IEEE Trans. Inf. Theory 1972, 18, 14–20.
16. Blahut, R. Computation of channel capacity and rate-distortion functions. IEEE Trans. Inf. Theory 1972, 18, 460–473.
17. Dueck, G.; Körner, J. Reliability function of a discrete memoryless channel at rates above capacity (corresp.). IEEE Trans. Inf. Theory 1979, 25, 82–85.
18. Alajaji, F.; Chen, P.; Rached, Z. A note on the Poor-Verdú upper bound for the channel reliability function. IEEE Trans. Inf. Theory 2002, 48, 309–313.
19. Ashikhmin, A.; Barg, A.; Litsyn, S. A new upper bound on the reliability function of the Gaussian channel. IEEE Trans. Inf. Theory 2000, 46, 1945–1961.
20. Lapidoth, A. On the reliability function of the ideal Poisson channel with noiseless feedback. IEEE Trans. Inf. Theory 2002, 39, 491–503.
21. Burnashev, M.; Yamamoto, H. Noisy feedback improves the Gaussian channel reliability function. In Proceedings of the IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014.
22. Hajek, B.; Subramanian, V. Capacity and reliability function for small peak signal constraints. IEEE Trans. Inf. Theory 2002, 48, 828–839.
23. Ben-Haim, Y.; Litsyn, S. Improved upper bounds on the reliability function of the Gaussian channel. In Proceedings of the IEEE International Symposium on Information Theory, Seattle, WA, USA, 9–14 July 2006.
24. Endo, H.; Sasaki, M. Reliability and secrecy functions of the wiretap channel under cost constraint. IEEE Trans. Inf. Theory 2014, 60, 6819–6843.
25. Tyagi, H.; Narayan, P. The Gelfand-Pinsker channel: Strong converse and upper bound for the reliability function. In Proceedings of the IEEE International Symposium on Information Theory, Seoul, Republic of Korea, 28 June–3 July 2009.
26. Somekh-Baruch, A. An upper bound on the reliability function of discrete memoryless channels. IEEE Trans. Inf. Theory 2024, 70, 3059–3081.
27. Burnashev, M.; Yamamoto, H. On the reliability function for a BSC with noisy feedback. Probl. Inf. Transm. 2010, 46, 103–121.
28. Burnashev, M.; Holevo, A. On the reliability function for a quantum communication channel. Probl. Peredachi Informatsii 1998, 34, 3–15.
29. Holevo, A. Reliability function of general classical-quantum channel. IEEE Trans. Inf. Theory 2002, 46, 2256–2261.
30. Li, K.; Yang, D. Reliability function of classical-quantum channels. Phys. Rev. Lett. 2025, 134, 010802.
31. Fettweis, G.P.; Boche, H. 6G: The Personal Tactile Internet—And Open Questions for Information Theory. IEEE BITS Inf. Theory Mag. 2021, 1, 71–82.
32. Boche, H.; Schaefer, R.; Poor, H.; Fettweis, G. Trustworthiness Verification and Integrity Testing for Wireless Communication Systems. In Proceedings of the IEEE International Conference on Communications, Seoul, South Korea and Virtual, 16–20 May 2022.
33. Fettweis, G.P.; Boche, H. On 6G and trustworthiness. Commun. ACM 2022, 65, 48–49.
34. Alon, N.; Lubetzky, E. The Shannon capacity of a graph and the independence numbers of its powers. IEEE Trans. Inf. Theory 2006, 52, 2172–2176.
35. Boche, H.; Deppe, C. Computability of the zero-error capacity of noisy channels. In Proceedings of the 2021 IEEE Information Theory Workshop (ITW), Kanazawa, Japan, 17–21 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6.
36. Boche, H.; Deppe, C. Computability of the channel reliability function and related bounds. In Proceedings of the 2022 IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1530–1535.
37. Weihrauch, K. Computable Analysis: An Introduction, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2013.
38. Soare, R.I. Recursively Enumerable Sets and Degrees, 1st ed.; Springer: Berlin/Heidelberg, Germany, 1987.
39. Pour-El, M.B.; Richards, J.I. Computability in Analysis and Physics; Perspectives in Logic; Cambridge University Press: Cambridge, UK, 2017.
40. Shannon, C. The zero error capacity of a noisy channel. IRE Trans. Inf. Theory 1956, 2, 8–19.
41. Gallager, R. A simple derivation of the coding theorem and some applications. IEEE Trans. Inf. Theory 1965, 11, 3–18.
42. Katsman, G.L.; Tsfasman, M.A.; Vladuţ, S.G. Spectra of linear codes and error probability of decoding. In Coding Theory and Algebraic Geometry (Luminy, 1991); Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1992; Volume 1518, pp. 82–98.
43. Dalai, M.; Polyanskiy, Y. Bounds on the Reliability Function of Typewriter Channels. IEEE Trans. Inf. Theory 2018, 64, 6208–6222.
44. Lovász, L. On the Shannon capacity of a graph. IEEE Trans. Inf. Theory 1979, 25, 1–7.
45. Baumert, L.D.; McEliece, R.J.; Rodemich, E.; Rumsey, H.C., Jr.; Stanley, R.; Taylor, H. A combinatorial packing problem. In Proceedings of the Computers in Algebra and Number Theory, New York, NY, USA, 25–26 March 1970; Volume IV, pp. 97–108.
46. Bohman, T. A limit theorem for the Shannon capacities of odd cycles. I. Proc. Am. Math. Soc. 2003, 131, 3559–3569.
47. Polak, S.C.; Schrijver, A. New lower bound on the Shannon capacity of C7 from circular graphs. Inf. Process. Lett. 2019, 143, 37–40.
48. Delsarte, P. An algebraic approach to the association schemes of coding theory. Philips Res. Rep. Suppl. 1973, 10, vi+97.
49. Schrijver, A. A comparison of the Delsarte and Lovász bounds. IEEE Trans. Inform. Theory 1979, 25, 425–429.
50. McEliece, R.J.; Rodemich, E.R.; Rumsey, H., Jr.; Welch, L.R. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities. IEEE Trans. Inform. Theory 1977, IT-23, 157–166.
51. Litsyn, S. New upper bounds on error exponents. IEEE Trans. Inform. Theory 1999, 45, 385–398.
52. Barg, A.; McGregor, A. Distance distribution of binary codes and the error probability of decoding. IEEE Trans. Inform. Theory 2005, 51, 4237–4246.
53. Kalai, G.; Linial, N. On the distance distribution of codes. IEEE Trans. Inform. Theory 1995, 41, 1467–1472.
54. Nötzel, J.; Wiese, M.; Boche, H. The arbitrarily varying wiretap channel—Secret randomness, stability, and super-activation. IEEE Trans. Inf. Theory 2016, 62, 3504–3531.
55. Schaefer, R.F.; Boche, H.; Poor, H.V. Secure communication under channel uncertainty and adversarial attacks. Proc. IEEE 2015, 103, 1796–1813.
56. Wiese, M.; Nötzel, J.; Boche, H. A channel under simultaneous jamming and eavesdropping attack—Correlated random coding capacities under strong secrecy criteria. IEEE Trans. Inf. Theory 2016, 62, 3844–3862.
57. Boche, H.; Deppe, C. Secure identification for wiretap channels; robustness, super-additivity and continuity. IEEE Trans. Inf. Forensics Secur. 2018, 13, 1641–1655.
58. Boche, H.; Schaefer, R.F. Capacity results and super-activation for wiretap channels with active wiretappers. IEEE Trans. Inf. Forensics Secur. 2013, 8, 1482–1496.
59. Boche, H.; Böck, Y.; Deppe, C. On Effective Convergence in Fekete’s Lemma and Related Combinatorial Problems in Information Theory. In Festschrift in Memory of Ning Cai: Information Theory and Related Fields; Springer: Cham, Switzerland, 2025; pp. 289–318.
60. Boche, H.; Schaefer, R.F.; Poor, H.V. Algorithmic Computability and Approximability of Capacity-Achieving Input Distributions. IEEE Trans. Inf. Theory 2023, 69, 5449–5462.
61. Lee, Y.; Boche, H.; Kutyniok, G. Computability of Optimizers. IEEE Trans. Inf. Theory 2024, 70, 2967–2983.
62. Boche, H.; Fono, A.; Kutyniok, G. A Mathematical Framework for Computability Aspects of Algorithmic Transparency. In Proceedings of the IEEE International Symposium on Information Theory, Athens, Greece, 7–12 July 2024; IEEE: Piscataway, NJ, USA, 2024.
63. Hertling, P. A Banach–Mazur computable but not Markov computable function on the computable real numbers. Ann. Pure Appl. Log. 2005, 132, 227–246.
64. Soare, R.I. Turing Computability: Theory and Applications, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2016.
65. Cooper, S.B.; Löwe, B.; Sorbi, A. New Computational Paradigms: Changing Conceptions of What Is Computable; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007.
66. Boche, H.; Fojtik, V.; Fono, A.; Kutyniok, G. Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization. J. Fourier Anal. Appl. 2025, 31, 35.
Table 1. Overview of results.

Problem — Result
Computability of capacity — R: Yes; R_0: Unknown; R_0 for alphabet size < 5: Yes
Semi-decidability of capacity > λ — R: No; C_0^FB: No
Semi-decidability of capacity < λ — R: Yes; C_0^FB: Yes
Computability of capacity function — R: No; C_0^FB: No
Computability of performance function — E(W, R): No; {E_ex(·, ·, k)}_{k∈N}: No
Additivity — R: Yes; C_0^FB: Unknown; C_0^FB for alphabet size 2: Yes; R_ex^k: No

