Abstract
The paradigm shift from an exclusive allocation of frequency bands, one for each system, to a shared use of frequencies comes along with the need for new concepts, since interference will be a ubiquitous phenomenon. In this paper, we use the concept of arbitrarily varying channels to model the impact of unknown interference caused by coexisting wireless systems which operate on the same frequencies. Within this framework, the capacity can be zero if pre-specified encoders and decoders are used. This necessitates more sophisticated coordination schemes in which the choice of encoders and decoders is additionally coordinated based on common randomness. As an application we study the arbitrarily varying bidirectional broadcast channel and derive its capacity regions for different coordination strategies. This problem is motivated by decode-and-forward bidirectional or two-way relaying, where a relay establishes a bidirectional communication between two other nodes while sharing its resources with other coexisting wireless networks.
Notation
In this paper we denote discrete random variables by non-italic capital letters and their corresponding realizations and ranges by lower case italic letters and script letters, e.g., X, x, and 𝒳, respectively; the notation X^n stands for the sequence X_1, X_2, ..., X_n of length n; ℕ and ℝ+ denote the sets of positive integers and non-negative real numbers; all logarithms, exponentials, and information quantities are taken to base 2; I(·;·), H(·), and D(·‖·) are the mutual information, entropy, and Kullback–Leibler (information) divergence; E[·] and P{·} denote the expectation and probability; ⟨·,·⟩ is the inner product and ‖·‖ the corresponding norm; 𝒫(·) is the set of all probability distributions and (·)^c is the complement of a set; W^n is the n-th memoryless extension of the stochastic matrix W; ≔ means the value of the right hand side (rhs) is assigned to the left hand side (lhs); ≕ is defined accordingly.
1. Introduction
The ongoing research progress reveals a paradigm shift from an exclusive allocation of certain frequency bands to a shared use of frequencies. While most current systems, such as conventional cellular systems, usually operate on exclusive frequency bands, several future systems, such as ad-hoc or sensor networks, will operate on shared resources in an uncoordinated and self-organizing way. The main issue that comes along with this development is that interference becomes a ubiquitous phenomenon and will be one of the major impairments in future wireless networks. Since the induced interference can no longer be coordinated between the coexisting networks, new concepts are needed, especially for frequency usage.
As an example, Figure 1 depicts a wireless network that consists of several uncoordinated transmitter-receiver pairs or links, where each receiver receives the signal it is interested in but is also confronted with interfering signals from other transmitting nodes. If there is no a priori knowledge about the transmit strategies applied by the other transmitting nodes, such as their coding or modulation schemes, there is also no knowledge about the induced interference. Thus, users are confronted with channels that may vary from symbol to symbol in an unknown and arbitrary manner. The concept of arbitrarily varying channels (AVC) [1,2,3,4] provides a suitable and robust model for such communication scenarios.
Figure 1.
Wireless network with several transmitter-receiver pairs. Each receiver receives a desired signal (solid) and simultaneously receives interference from all other transmitters (dashed).
Interestingly, it is shown for the single-user AVC that its capacity depends strongly on how encoder and decoder are coordinated within one transmitter-receiver link: the deterministic code capacity, i.e., the traditional approach with pre-specified encoder and decoder, either equals the random code capacity, i.e., the one with additional encoder-decoder coordination based on common randomness, or is otherwise zero [2]. It is shown that symmetrizable AVCs prevent reliable communication for the traditional approach without additional coordination. Roughly speaking, in this case a symmetrizable AVC can emulate a valid input, which makes it impossible for the decoder to decide on the correct codeword. Unfortunately, many channels of practical importance fall into the category of symmetrizable channels [4].
The situation changes significantly if constraints on the permissible codewords and channel states are imposed. Such restrictions are motivated by the fact that in real communication systems the transmitter as well as possible interferers are usually limited in their transmit powers. For the single-user AVC under input and state constraints, it is shown that due to the imposed constraints the deterministic code capacity may be positive even for symmetrizable channels, but it may be less than the random code capacity [4,5].
Besides the single-user AVC there are several important extensions to multi-user settings as well. The arbitrarily varying wiretap channel is analyzed in [6,7]. The arbitrarily varying multiple access channel (AVMAC) is analyzed in [8,9,10], where its deterministic code and random code capacity regions are established. The AVMAC with constraints on input and states is considered in [11,12], where in the latter it is shown that the random code capacity region is non-convex in general. The AVMAC with conferencing encoders is analyzed in detail in [13,14]. While the AVMAC is well understood, there are only partial results known so far for the arbitrarily varying general broadcast channel. Achievable deterministic code rate regions are analyzed in [8,15], where the latter further imposes the assumption of degraded message sets. But unfortunately, no converses or outer bounds on the capacity region are given.
In this paper we analyze bidirectional relaying, or two-way relaying, for arbitrarily varying channels. The concept of bidirectional relaying has the potential to significantly improve the overall performance and coverage in wireless networks such as ad-hoc, sensor, and even cellular systems. This is mainly based on the fact that it advantageously exploits the bidirectional information flow of the communication to reduce the inherent loss in spectral efficiency induced by half-duplex relays [16,17,18,19].
Bidirectional relaying applies to three-node networks, where a half-duplex relay node establishes a bidirectional communication between two other nodes using a two-phase decode-and-forward protocol. There, in the initial multiple access (MAC) phase two nodes transmit their messages to the relay node which decodes them. In the succeeding broadcast phase the relay re-encodes and transmits both messages in such a way that both receiving nodes can decode their intended message using their own message from the previous phase as side information. Note that due to the complementary side information at the receiving nodes this scenario differs from the classical broadcast channel and is therefore known as bidirectional broadcast channel (BBC). It is shown in [20,21,22,23] for discrete memoryless channels and in [24] for MIMO Gaussian channels that capacity is achieved by a single data stream that combines both messages based on the network coding idea. Optimal transmit strategies for the multi-antenna BBC are then analyzed in [25,26]. Bidirectional relaying for compound channels is studied in [27,28], while [29] discusses adaptive bidirectional relaying with quantized channel state information. Besides the decode-and-forward protocol [20,21,22,23,24,25,26,27,28,29,30,31,32] there are also amplify-and-forward [32,33,34,35,36] or compress-and-forward [37,38,39] approaches similarly as for the classical relay channel. A newer approach is compute-and-forward [40,41,42,43,44,45,46], where the relay decodes a certain function of both individual messages. Another approach is given in [47] which is based on the noisy network coding idea [48,49,50].
Here, we use the concept of arbitrarily varying channels to study bidirectional relaying that operates on the same (frequency) resources as other coexisting wireless networks. Then the initial MAC phase is specified by the AVMAC and is therefore well understood [8,9,10,11,12]. Thus, it remains to study the BBC phase for arbitrarily varying channels. The arbitrarily varying bidirectional broadcast channel (AVBBC) is analyzed in [51,52,53], where it is shown that the AVBBC displays a dichotomy behavior similar to the single-user AVC: its deterministic code capacity region either equals its random code capacity region or else has an empty interior. Having practical limitations on transmit powers in mind, in this paper we impose constraints on the permissible codewords and state sequences and derive the corresponding deterministic code and random code capacity regions of the AVBBC under input and state constraints.
The rest of this paper is organized as follows. In Section 2 we briefly review the concept of types from Csiszár and Körner and state some information-theoretic and combinatorial preliminaries. In Section 3 we introduce the concept of arbitrarily varying channels as a suitable model for communication in wireless networks which share the resources with other coexisting systems in an uncoordinated way, and review the impact of coordination within one transmitter-receiver link on the capacity. As an application for this framework we then study bidirectional relaying under such conditions. We impose constraints on the permissible input and state sequences and analyze bidirectional relaying for arbitrarily varying channels in Section 4. This requires the study of the AVBBC under input and state constraints, for which we derive its deterministic code and random code capacity regions. Finally, we conclude the paper in Section 5.
2. Preliminaries
We denote the mutual information between the input random variable X and the output random variable Y by I(X;Y). To emphasize the dependency of the mutual information on the input distribution p and the channel W, we also write I(p;W) interchangeably.
Furthermore, we extensively use the concept of types from Csiszár and Körner [3], which is briefly reviewed in the following. The type of a sequence x^n ∈ 𝒳^n of length n is a distribution P_{x^n} ∈ 𝒫(𝒳) defined by P_{x^n}(a) := (1/n) N(a|x^n) for every a ∈ 𝒳. Thereby, N(a|x^n) denotes the number of indices i such that x_i = a, i = 1, ..., n. The set of all types of sequences in 𝒳^n is denoted by 𝒫_n(𝒳). The notation extends to joint types in a natural way. For example, the joint type of sequences x^n ∈ 𝒳^n and y^n ∈ 𝒴^n is the distribution P_{x^n,y^n} ∈ 𝒫(𝒳 × 𝒴) with P_{x^n,y^n}(a,b) := (1/n) N(a,b|x^n,y^n) for every a ∈ 𝒳 and b ∈ 𝒴, where N(a,b|x^n,y^n) is the number of indices i such that (x_i, y_i) = (a, b), i = 1, ..., n.
For notational convenience, we represent (joint) types of sequences of length n by (joint) distributions of dummy variables. For instance, the random variables X and Y represent a joint type, e.g., P_{X,Y} = P_{x^n,y^n} for some x^n ∈ 𝒳^n and y^n ∈ 𝒴^n. The set of all sequences of type P_X is denoted by T_X^n. Of course, this notation extends to joint types and sections in a self-explanatory way, e.g., T_{XY}^n or T_{Y|X}^n(x^n).
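As a small illustration of this notation, the following Python sketch computes the type of a sequence and the joint type of two sequences over finite alphabets (the function names and the toy sequences are ours, not taken from the paper).

```python
from collections import Counter
from itertools import product

def type_of(seq, alphabet):
    """Empirical distribution (type) of a sequence over a finite alphabet."""
    n = len(seq)
    counts = Counter(seq)
    return {a: counts[a] / n for a in alphabet}

def joint_type(seq_x, seq_y, alphabet_x, alphabet_y):
    """Joint type of two sequences of equal length."""
    assert len(seq_x) == len(seq_y)
    n = len(seq_x)
    counts = Counter(zip(seq_x, seq_y))
    return {(a, b): counts[(a, b)] / n for a, b in product(alphabet_x, alphabet_y)}

x = [0, 1, 1, 0, 1, 1]
s = [0, 0, 1, 1, 0, 1]
print(type_of(x, [0, 1]))                    # {0: 0.333..., 1: 0.666...}
print(joint_type(x, s, [0, 1], [0, 1]))      # joint empirical distribution of (x, s)
```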
Remark 1.
To avoid notational ambiguity we usually use small letters to denote arbitrary probability distributions, e.g., p and q, and capital letters to highlight types, e.g., P and Q.
Next, we state as facts some bounds on types which we will need for our proofs, cf. for example Csiszár and Körner ([3], Section 1.2).
Fact 1: The number of possible types of sequences of length n is a polynomial in n, i.e., |𝒫_n(𝒳)| ≤ (n+1)^{|𝒳|}.
Fact 2: We have (n+1)^{-|𝒳|} 2^{nH(P)} ≤ |T_P^n| ≤ 2^{nH(P)} for every type P ∈ 𝒫_n(𝒳).
Fact 3: For any channel W : 𝒳 → 𝒫(𝒴) and every x^n ∈ T_P^n,
W^n(T_V^n(x^n) | x^n) ≤ 2^{-n D(V‖W|P)},
where D(V‖W|P) := ∑_{a∈𝒳} P(a) D(V(·|a)‖W(·|a)) and P·V denotes the distribution on 𝒳 × 𝒴 with probability mass function P(a)V(b|a).
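The following snippet is our own numerical sanity check of Facts 1 and 2 for a small binary example (block length, type, and alphabet are arbitrary choices): the size of a type class, |T_P^n| = n!/∏_a (nP(a))!, indeed lies between (n+1)^{-|𝒳|} 2^{nH(P)} and 2^{nH(P)}.

```python
import math

def entropy(P):
    """Entropy in bits of a distribution given as a dict {symbol: prob}."""
    return -sum(p * math.log2(p) for p in P.values() if p > 0)

def type_class_size(P, n):
    """|T_P^n| = n! / prod_a (n*P(a))!, assuming n*P(a) is an integer for all a."""
    counts = [round(n * p) for p in P.values()]
    assert sum(counts) == n
    size = math.factorial(n)
    for c in counts:
        size //= math.factorial(c)
    return size

n, X = 12, 2                      # block length and alphabet size
P = {0: 1/3, 1: 2/3}              # a valid type for n = 12 (counts 4 and 8)
size = type_class_size(P, n)
H = entropy(P)
lower = (n + 1) ** (-X) * 2 ** (n * H)
upper = 2 ** (n * H)
print(lower <= size <= upper)     # True: Fact 2 holds for this example
print((n + 1) ** X)               # polynomial bound of Fact 1 on the number of types
```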
3. Modeling of Communication in Coexisting Wireless Networks
Here we introduce the concept of arbitrarily varying channels as a suitable model for communication in coexisting wireless networks. To highlight the crucial points we consider the simplest interference scenario with two transmitter-receiver pairs (or links) as shown in Figure 2. Here, each receiver receives signals from both transmitters, but is only interested in the information from its own transmitter.
Figure 2.
Interference channel with two transmitters and receivers. Each receiver receives the desired signal (solid) from the intended transmitter but simultaneously also receives interference (dashed) from the other transmitter.
Since in practical systems a transmitter usually uses a finite modulation scheme and a receiver quantizes the received signal before further processing, it is reasonable to assume finite input and output alphabets denoted by 𝒳_i and 𝒴_i for link i, i = 1, 2, respectively. Then, for input and output sequences of length n, the transmission over the discrete memoryless channel is completely characterized by a stochastic matrix
Thereby, the additive noise at the receivers is taken into account by considering stochastic matrices and not deterministic ones. Interestingly, the transmission model in Equation (1) looks like a multiple access channel, since the received signal depends on both the codeword of the intended message and the codeword of the interfering message from the other link.
Remark 2.
If we treat the received signal from the other transmitter as additional noise, we end up with a modified stochastic matrix for each link i, where the received signal depends only on the codeword of the intended message.
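As a sketch of what such a modified matrix could look like, assume for illustration that the interfering input were drawn i.i.d. from some distribution q (this averaging is our assumption; Remark 2 itself leaves the modification unspecified). The toy example also shows why treating interference as noise can be too pessimistic: here the averaged channel becomes useless.

```python
import numpy as np

def interference_as_noise(W, q):
    """Average a two-input channel W[x1, x2, y] = W(y|x1,x2) over an assumed
    distribution q of the interfering input x2, yielding W_tilde[x1, y]."""
    return np.einsum('aby,b->ay', W, q)

# toy binary example: y = x1 XOR x2 with crossover probability 0.1
W = np.zeros((2, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        W[x1, x2, x1 ^ x2] = 0.9
        W[x1, x2, 1 - (x1 ^ x2)] = 0.1

q = np.array([0.5, 0.5])                 # assumed interferer distribution
print(interference_as_noise(W, q))       # every row is [0.5, 0.5]: the averaged channel is useless
```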
We consider the standard model with block codes of arbitrary but fixed length n. Let , , be the set of messages to transmit. The traditional coding strategy for each transmitter-receiver pair is specified by the following definition of deterministic codes.
Definition 1.
A deterministic -code or codebook for transmitter-receiver pair i is a family
with codewords , one for each message , and decoding sets for all with for , .
When and have been sent according to fixed codebooks and , and and have been received, the decoder of receiver i is in error if , . With this, we can define the probability of error at receiver 1 for given messages and as
and the average probability of error at receiver 1 as
with similar expressions and for receiver 2.
It is important to note that the probability of error depends on the codebooks used by both transmitter-receiver pairs as well as on the specific message the interfering transmitter sends.
Definition 2.
A rate is said to be deterministically achievable if for any there exists an and a sequence of deterministic -codes , , such that for all we have
while the average probability of error tends to zero as n → ∞. The deterministic code capacity is the largest deterministically achievable rate.
If we assume no coordination between both transmitter-receiver pairs, there is no a priori knowledge about the used codebooks and codewords that are chosen by the interfering transmitter. Consequently, the receiver can be confronted with arbitrary interfering sequences. This corresponds to the concept of arbitrarily varying channels (AVC) [1,2,3,4] and the only way to guarantee a successful transmission is to find a universal strategy that works for all possible codebooks and interfering codewords simultaneously.
To model the appearance of arbitrary interfering sequences, we introduce a finite state set 𝒮. Then, for a fixed state sequence s^n ∈ 𝒮^n of length n and input and output sequences x^n ∈ 𝒳^n and y^n ∈ 𝒴^n, the discrete memoryless channel is given by W^n(y^n|x^n,s^n) = ∏_{k=1}^n W(y_k|x_k,s_k). (In the following we drop the index indicating the transmitter-receiver pair, since the argumentation obviously holds for all i.)
Note that the input sequence and interfering sequence originate from different and, in particular, uncoordinated transmitters, so that they are independent of each other. But of course, the codebook has to be designed in such a way that each codeword works for all possible interfering sequences simultaneously.
Definition 3.
The discrete memoryless arbitrarily varying channel (AVC) is the family
Further, for any probability distribution q ∈ 𝒫(𝒮) we denote the averaged channel by W̄_q(y|x) := ∑_{s∈𝒮} q(s) W(y|x,s).
3.1. Impact of Coordination within Transmitter-Receiver Pair
In the following we analyze and review different approaches to coordination within one transmitter-receiver pair and specify their impact on the transmission. To this end, we characterize all achievable rates at which reliable communication is possible for three different types of coordination: the traditional approach as well as additional encoder-decoder coordination based on either common randomness or correlated side information.
3.1.1. No Additional Coordination
The system design of the traditional or conventional approach without additional coordination is defined by a deterministic coding strategy, where transmitter and receiver use a pre-specified encoder and decoder as given in Definition 1. We further need the concept of symmetrizability to state the main result for this approach.
Definition 4.
An AVC is symmetrizable if for some channel σ : 𝒳 → 𝒫(𝒮) the condition
∑_{s∈𝒮} W(y|x,s) σ(s|x′) = ∑_{s∈𝒮} W(y|x′,s) σ(s|x)
holds for every x, x′ ∈ 𝒳 and y ∈ 𝒴. This means the channel averaged over σ is symmetric in x and x′ for all y ∈ 𝒴.
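Symmetrizability in the sense of Definition 4 can be checked numerically as a linear feasibility problem in the entries of σ(s|x). The following sketch is our own (it assumes the AVC is given as an array W[x, s, y] = W(y|x,s) and uses scipy's linear programming solver); the binary additive example at the end is symmetrizable via σ(s|x) = 1{s = x}.

```python
import numpy as np
from scipy.optimize import linprog

def is_symmetrizable(W):
    """Check whether the AVC W[x, s, y] = W(y|x,s) is symmetrizable, i.e., whether
    some channel sigma(s|x) satisfies
        sum_s W(y|x,s) sigma(s|x') = sum_s W(y|x',s) sigma(s|x)   for all x, x', y.
    This is a linear feasibility problem in the |X|*|S| unknowns sigma(s|x)."""
    nx, ns, ny = W.shape
    idx = lambda x, s: x * ns + s
    A_eq, b_eq = [], []
    for x in range(nx):                       # symmetry conditions for every pair x < x'
        for xp in range(x + 1, nx):
            for y in range(ny):
                row = np.zeros(nx * ns)
                for s in range(ns):
                    row[idx(xp, s)] += W[x, s, y]
                    row[idx(x, s)] -= W[xp, s, y]
                A_eq.append(row)
                b_eq.append(0.0)
    for x in range(nx):                       # each sigma(.|x) must be a distribution
        row = np.zeros(nx * ns)
        row[[idx(x, s) for s in range(ns)]] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)
    res = linprog(c=np.zeros(nx * ns), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * (nx * ns), method="highs")
    return res.status == 0                    # status 0: a feasible sigma was found

# the binary additive AVC y = x XOR s is symmetrizable (take sigma(s|x) = 1{s = x})
W = np.zeros((2, 2, 2))
for x in range(2):
    for s in range(2):
        W[x, s, x ^ s] = 1.0
print(is_symmetrizable(W))                    # True
```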
For the traditional approach the capacity is known [2,3,4] and summarized in the following theorem.
Theorem 1.
The deterministic code capacity of the AVC is max_p min_q I(p; W̄_q), where the maximum is over all input distributions p ∈ 𝒫(𝒳) and the minimum over all state distributions q ∈ 𝒫(𝒮), if the AVC is non-symmetrizable, and it is zero if the AVC is symmetrizable.
The complete proof can be found for example in [4]. In the following we only highlight the key insight into why the capacity is zero if the AVC is symmetrizable.
Let , with be arbitrary codewords. For a symmetrizable AVC , we can consider interfering sequences that look like valid codewords, more precisely we set , . Now, for each pair of codewords with we have for the probability of error
where the second equality follows from the fact that the AVC is symmetrizable, cf. Definition 4. Hence, for the average probability of error this leads to
which implies that the probability of error cannot be made small for at least one message. Since the average probability of error is bounded from below by a positive constant, a reliable transmission is not possible, so that the deterministic code capacity is zero if the AVC is symmetrizable.
This becomes intuitively clear if one realizes the following. Since the AVC is symmetrizable, cf. (2), it can happen that the interfering sequence looks like another valid codeword. Then, the receiver observes a superimposed version of two valid codewords and cannot distinguish which one comes from the intended transmitter and which one is the interfering sequence. Thus, reliable communication can no longer be guaranteed.
3.1.2. Encoder-Decoder Coordination Based on Common Randomness
Since the traditional interference coordination with predetermined encoder and decoder fails in the case of symmetrizable channels, we are interested in strategies that also work in this case. To this end, we consider in the following a strategy with more involved coordination, where we additionally allow transmitter and receiver to coordinate their choice of encoder and decoder based on access to a common resource that is independent of the current message. This leads directly to the following definition.
Definition 5.
A random -code for the AVC is given by a family of deterministic -codes
together with a random variable distributed according to .
This means that codewords and decoding sets are chosen according to a common random experiment, realized in Definition 5 by the random variable , whose outcome has to be known to the transmitter and receiver in advance. The definitions of probability of error, a randomly achievable rate, and the random code capacity follow accordingly as in Section 3.1.1.
The access to the common resource can be realized for example by an external source such as a satellite signal. Moreover, we know from [2] that if we transmit at rate R with exponentially many messages, i.e., 2^{nR} messages, it suffices to use a random code which consists of only n² encoder-decoder pairs and a uniformly distributed random variable whose value indicates which encoder and decoder the transmitter and receiver have to use.
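A minimal sketch of this kind of coordination (our own illustration; the codebooks themselves are abstract placeholders): a shared random index, derived from a common seed known to both ends before transmission, selects one of the polynomially many pre-agreed encoder-decoder pairs.

```python
import random

n = 100                                   # block length
num_codes = n ** 2                        # polynomially many encoder-decoder pairs suffice [2]
codebooks = [f"codebook_{k}" for k in range(num_codes)]   # abstract placeholders

shared_seed = 12345                       # common randomness, e.g., from an external source
gamma = random.Random(shared_seed).randrange(num_codes)

# transmitter and receiver independently derive the same index from the shared seed
encoder_choice = codebooks[gamma]
decoder_choice = codebooks[gamma]
assert encoder_choice == decoder_choice   # both ends use the same pre-specified code
```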
Due to the additional coordination within one transmitter-receiver pair, we expect an improvement in the performance compared to the traditional approach especially for symmetrizable channels. The following result confirms our intuition [1,3].
Theorem 2.
The random code capacity of the AVC is max_p min_q I(p; W̄_q), where the maximum is over all input distributions p ∈ 𝒫(𝒳) and the minimum over all state distributions q ∈ 𝒫(𝒮).
This shows that the random code capacity is given by the same expression as the deterministic code capacity of the traditional approach, but, in contrast, it is achieved also in the case of symmetrizable channels.
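The quantity behind Theorems 1 and 2 is the mutual information of the input distribution through the averaged channel. The following sketch is our own helper (channel given as W[x, s, y]); it computes I(p; W̄_q) for fixed p and q, and the max-min could then be approximated by a grid search or by convex optimization over p and q.

```python
import numpy as np

def averaged_channel(W, q):
    """W_bar_q(y|x) = sum_s q(s) W(y|x,s) for W given as W[x, s, y]."""
    return np.einsum('xsy,s->xy', W, q)

def mutual_information(p, V):
    """I(p; V) in bits for input distribution p and stochastic matrix V[x, y]."""
    joint = p[:, None] * V                     # joint distribution of (X, Y)
    py = joint.sum(axis=0)                     # output distribution
    mask = joint > 0
    ref = (p[:, None] * py[None, :])[mask]     # product of the marginals
    return float(np.sum(joint[mask] * np.log2(joint[mask] / ref)))

# toy example: binary input, binary state, binary output
W = np.zeros((2, 2, 2))
for x in range(2):
    for s in range(2):
        W[x, s, x] = 0.9 if s == 0 else 0.6    # the state s = 1 degrades the channel
        W[x, s, 1 - x] = 0.1 if s == 0 else 0.4

p = np.array([0.5, 0.5])
q = np.array([0.5, 0.5])
print(mutual_information(p, averaged_channel(W, q)))
```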
3.1.3. Encoder-Decoder Coordination Based on Correlated Side Information
For the previous additional encoder-decoder coordination we assumed that both transmitter and receiver have access to a common random experiment. This seems to be a hard condition, and one can think of a weaker version. Therefore, we now allow transmitter and receiver each to have access to its own random experiment, where the two experiments are correlated. In more detail, the correlated side information strategy is given by the following definition.
Definition 6.
A correlated -code for the AVC is given by a family of deterministic -codes
together with random variables and distributed according to and with .
Thereby, the fact that the random variables and are correlated is guaranteed by the (weak) condition . Note that in contrast to the additional encoder-decoder coordination based on common randomness, the codewords and decoding sets now depend on a whole sequence of the random variables.
The next result states the capacity for the case of additional encoder-decoder coordination based on correlated side information at transmitter and receiver [54].
Theorem 3.
The correlated side information capacity of the AVC equals its random code capacity as given in Theorem 2.
The theorem shows that even if transmitter and receiver only have access to correlated versions of a random experiment, such side information is already sufficient to achieve the same rates as for the encoder-decoder coordination based on common randomness. Thus, correlated side information suffices to overcome symmetrizable channel conditions.
4. Bidirectional Relaying under Arbitrarily Varying Channels
In the previous section we established the concept of arbitrarily varying channels as a suitable model for communication in wireless networks which operate on the same resources as other coexisting systems. Here we use this framework and apply it to bidirectional relaying. There, a relay node establishes a bidirectional communication between two other nodes using a two-phase decode-and-forward protocol as shown in Figure 3. The initial MAC phase for arbitrarily varying channels is characterized by the AVMAC and therefore well understood, cf. [8,9,10,11,12,14]. Thus, it remains to study the succeeding BBC phase. Since in practical systems transmitters are usually limited in their transmit power, this requires the study of the AVBBC under input and state constraints, which is the main contribution of this paper.
Figure 3.
Bidirectional relaying in a three-node network, where nodes 1 and 2 exchange their messages and with the help of the relay node using a decode-and-forward protocol.
4.1. Arbitrarily Varying Bidirectional Broadcast Channel
For the bidirectional broadcast phase we assume that the relay has successfully decoded both messages from the previous MAC phase. Now, the relay broadcasts an optimal re-encoded message in such a way that both nodes can decode the intended message using their own message from the previous phase as side information. The transmission is affected by a channel which varies arbitrarily in an unknown manner from symbol to symbol during the whole transmission of a codeword. We model this behavior with the help of a finite state set 𝒮. Further, let 𝒳 and 𝒴_i, i = 1, 2, be finite input and output sets. Then, for a fixed state sequence s^n ∈ 𝒮^n of length n and input and output sequences x^n ∈ 𝒳^n and y_i^n ∈ 𝒴_i^n, i = 1, 2, the discrete memoryless broadcast channel is given by W^n(y_1^n, y_2^n | x^n, s^n) = ∏_{k=1}^n W(y_{1,k}, y_{2,k} | x_k, s_k).
Definition 7.
The discrete memoryless arbitrarily varying broadcast channel is the family
Since we do not allow any cooperation between the receiving nodes, it is sufficient to consider the marginal transition probabilities , , only. Further, for any probability distribution we denote the averaged broadcast channel by
and the corresponding averaged marginal channels by and .
Further, we will need the concept of symmetrizability for the AVBBC, which is an extension of the one for the single-user AVC introduced in [4], cf. also Definition 4.
Definition 8.
An AVBBC is -symmetrizable if for some channel
holds for every and , .
4.1.1. Input and State Constraints
Since the transmitter and possible interferers are usually limited in their transmit powers, we impose constraints on the permissible input and state sequences. We follow [4] and define cost functions g and l on 𝒳 and 𝒮, respectively. For convenience, we assume that min_{x∈𝒳} g(x) = min_{s∈𝒮} l(s) = 0 and define g_max := max_{x∈𝒳} g(x) and l_max := max_{s∈𝒮} l(s). For given x^n ∈ 𝒳^n and s^n ∈ 𝒮^n we set
g(x^n) := (1/n) ∑_{k=1}^n g(x_k) and l(s^n) := (1/n) ∑_{k=1}^n l(s_k).
Further, for notational convenience we define the costs caused by given probability distributions p ∈ 𝒫(𝒳) and q ∈ 𝒫(𝒮) as
g(p) := ∑_{x∈𝒳} p(x) g(x) and l(q) := ∑_{s∈𝒮} q(s) l(s)
and observe that, if we consider types, these definitions immediately yield g(P_{x^n}) = g(x^n) and l(P_{s^n}) = l(s^n) for every x^n ∈ 𝒳^n and every s^n ∈ 𝒮^n, respectively, cf. also [4].
This allows us to define the set of all state sequences of length n that satisfy a given state constraint Λ by 𝒮_Λ^n := {s^n ∈ 𝒮^n : l(s^n) ≤ Λ}. Furthermore, the set of all probability distributions on 𝒮 that satisfy the state constraint Λ is given by 𝒫(𝒮, Λ) := {q ∈ 𝒫(𝒮) : l(q) ≤ Λ}.
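A short sketch of these definitions (the cost values and sequences are our own examples): the average cost of a state sequence coincides with the cost of its type, and the state constraint simply decides which sequences are admissible.

```python
import numpy as np

l = np.array([0.0, 1.0, 4.0])          # example cost l(s) for states s = 0, 1, 2
Lambda = 1.0                           # state constraint

def seq_cost(seq, cost):
    """Average cost (1/n) * sum_k cost(s_k) of a state sequence."""
    return float(np.mean(cost[np.asarray(seq)]))

def type_cost(Q, cost):
    """Cost of a distribution/type: sum_s Q(s) * cost(s)."""
    return float(np.dot(Q, cost))

s = [0, 1, 0, 2, 1, 0]                                  # a candidate state sequence
Q = np.bincount(s, minlength=len(l)) / len(s)           # its type
print(seq_cost(s, l), type_cost(Q, l))                  # identical, as noted above
print(seq_cost(s, l) <= Lambda)                         # does s satisfy the state constraint?
```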
In [52] it is shown that an AVBBC (without state constraint) has a capacity region whose interior is empty if the AVBBC is symmetrizable with respect to either receiver. If we impose a state constraint, the situation changes significantly: now the interior of the capacity region can be non-empty even if the AVBBC is symmetrizable in the sense of Definition 8. Rather, symmetrizability enters the picture via the symmetrizability costs, which indicate whether the symmetrization violates the imposed state constraint or not. Thereby, the minimization defining these costs runs over the set of all channels which satisfy (4). For a given type, the symmetrizability costs can be interpreted as the minimum costs that are needed to symmetrize the AVBBC. Clearly, if the AVBBC is symmetrizable, this minimum exists and the symmetrizability costs are finite. Further, if the AVBBC is non-symmetrizable, no such channel exists, and we set the symmetrizability costs to infinity for convenience.
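To make the role of the symmetrizability costs concrete, the following short sketch (our own; the numbers are illustrative) evaluates the state cost incurred by one particular symmetrizing channel σ for a given input type. The symmetrizability cost itself is the minimum of this quantity over all symmetrizing σ, so a single σ only gives an upper bound; positive deterministic rates require these minima to exceed the permissible cost Λ.

```python
import numpy as np

def symmetrization_cost(P, sigma, l):
    """State cost sum_x P(x) sum_s sigma(s|x) l(s) incurred when the jammer
    symmetrizes the channel with sigma(s|x); sigma has shape (|X|, |S|)."""
    return float(P @ sigma @ l)

P = np.array([0.5, 0.5])               # input type of the codewords
sigma = np.array([[1.0, 0.0],          # sigma(s|x): here the jammer mimics the input
                  [0.0, 1.0]])
l = np.array([0.0, 1.0])               # state cost function
Lambda = 0.25                          # permissible state cost

cost = symmetrization_cost(P, sigma, l)
print(cost, cost > Lambda)             # 0.5, True: this symmetrization is too expensive
```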
4.1.2. Coordination Strategies
We consider the standard model with a block code of arbitrary but fixed length n. Let be the message set of node i, , which is also known at the relay node. Further, we use the abbreviation .
First, we introduce the traditional approach without additional coordination which is based on a deterministic coding strategy with pre-specified encoder and decoders at the relay and receivers.
Definition 9.
A deterministic -code of length n for the AVBBC under input constraint Γ and state constraint Λ is a family
with codewords
one for each message , satisfying the input constraint Γ, and decoding sets at nodes 1 and 2
for all and . For given at node 1 the decoding sets must be disjoint, i.e., for , and similarly for given at node 2 the decoding sets must satisfy for .
When with and has been sent, and and have been received at nodes 1 and 2, the decoder at node 1 is in error if is not in . Accordingly, the decoder at node 2 is in error if is not in . This allows us to define the probability of error for the deterministic code for given message and state sequence , i.e., one that satisfies the state constraint Λ, as
and the corresponding marginal probabilities of error at nodes 1 and 2 as and , respectively. Thus, the average probability of error for state sequence is given by
and the corresponding marginal average probability of error at node i by , . Clearly, we always have .
For given , the code is called a -code (with average probability of error ) for the AVBBC under input constraint Γ and state constraint Λ if
Definition 10.
A rate pair is said to be deterministically achievable for the AVBBC under input constraint Γ and state constraint Λ if for any there exists an and a sequence of deterministic -codes with codewords , , , each satisfying , such that for all we have
while
with as . The set of all achievable rate pairs is the deterministic code capacity region of the AVBBC under input constraint Γ and state constraint Λ and is denoted by .
If or , then the input or state sequences are not restricted by the corresponding constraint, respectively. Consequently, we denote the capacity region with state constraint and no input constraint by and the capacity region with input constraint and no state constraint by .
Remark 3.
The definitions above require us to find codes such that the average probability of error goes to zero, as the block length tends to infinity, simultaneously for all state sequences that fulfill the state constraint. This means the codes are universal with respect to the state sequence.
Next, we introduce the encoder-decoder coordination based on common randomness which is specified by a random code, where the encoder and the decoders are chosen according to a common random experiment whose outcome has to be known at all nodes in advance.
Definition 11.
A random -code of length n for the AVBBC under input constraint Γ and state constraint Λ is given by a family of deterministic -codes
together with a random variable distributed according to . Thereby, each is a deterministic code in the sense of Definition 9, which means that each satisfies the input and state constraints individually.
Then, the average probability of error of the random code for given state sequence is given by
and accordingly the corresponding marginal average probability of error at node i by , . For given , the random code is called a -code (with average probability of error ) for the AVBBC under input constraint Γ and state constraint Λ if
The definitions of a randomly achievable rate pair under input and state constraints and the random code capacity region under input and state constraints follow accordingly.
4.2. Encoder-Decoder Coordination Based on Common Randomness
Here, we derive the random code capacity region of the AVBBC under input constraint Γ and state constraint Λ. This characterizes the scenario, where transmitter and receivers can coordinate their choice of encoder and decoders based on common randomness. For this purpose we define the region
for joint probability distributions .
Theorem 4.
The random code capacity region of the AVBBC under input constraint Γ and state constraint Λ is
In the following we give the proof of the random code capacity region where the achievability part is mainly based on an extension of Ahlswede’s robustification technique [55,56].
4.2.1. Compound Bidirectional Broadcast Channel
As in [51] for the AVBBC without constraints on input and states, we start with a construction of a suitable compound BBC, where the key idea is to restrict it in an appropriate way. Having the state constraint Λ in mind, it is reasonable to restrict our attention to all probability distributions . Let us consider the family of averaged broadcast channels, cf. (3),
and observe that this already corresponds to a compound BBC where each permissible probability distribution parametrizes one element of the compound channel, which we denote by in the following. The capacity region of the compound BBC is known and can be found in [27]. It is shown that for given input distribution all rate pairs satisfying , cf. (7), are deterministically achievable. In particular, this is valid for an input distribution that satisfies the input constraint .
In more detail, in [27] it is shown that there exists a deterministic code for the compound BBC such that all rate pairs are achievable while the average probability of error can be bounded from above by
with where is the average probability of error at node i, . Moreover, for n large enough, we have
which decreases exponentially fast for increasing block length n. Thereby, , , and are constants independent of n, cf. [27].
Together with (3) this immediately implies that for the average probability of a successful transmission over the compound BBC is bounded from below by
or equivalently by
for all and .
4.2.2. Robustification
As in [51] for the AVBBC without state constraints, we use the deterministic code for the compound BBC to construct a random code for the AVBBC under input constraint Γ and state constraint Λ.
Let Π_n be the group of permutations acting on the n coordinates of a sequence. For a given sequence s^n ∈ 𝒮^n and permutation π ∈ Π_n, we denote by π(s^n) the sequence obtained by rearranging the entries of s^n according to π. Further, we denote the inverse permutation by π^{-1}, so that π^{-1}(π(s^n)) = s^n since π is bijective.
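The following sketch (our own illustration of the mechanics only, not of the full random code) shows how a shared random permutation is applied at the transmitter and undone at the receiver: after un-permuting, the receiver effectively sees the original codeword against a permuted state sequence, which has the same type as the actual one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
pi = rng.permutation(n)                  # shared random permutation (common randomness)
pi_inv = np.argsort(pi)                  # inverse permutation

x = np.arange(n)                         # codeword of the underlying compound-BBC code
s = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # arbitrary state sequence chosen by the channel

x_sent = x[pi]                           # transmit the permuted codeword
# noiseless toy "channel": the receiver observes (input symbol, state) pairs
y = list(zip(x_sent, s))
y_unpermuted = [y[k] for k in pi_inv]    # receiver undoes the permutation

# after un-permuting, the receiver sees the original codeword against a permuted
# state sequence, which has the same type as s
print([sym for sym, _ in y_unpermuted] == list(x))         # True
print(sorted(st for _, st in y_unpermuted) == sorted(s))   # True: same type
```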
Theorem 5
(Robustification technique). Let be a function such that for some the inequality
holds where . Then it also holds
Proof. The proof is a modification of the corresponding proof in [56], where a similar result is given without constraints on the sequences of states. First, we observe that (9) is equivalent to
Since each is bijective and because for all , we obtain from (11)
Therefore, averaging (12) over yields
Since , restricting the state sequences to we get from (13)
which is equivalent to
because for , the term does not depend on . Since , cf. [3], Equation (14) implies
Obviously, we have so that (15) shows that
which completes the proof of the theorem. ☐
With the robustification technique and
we immediately obtain a random -code for the AVBBC under input constraint Γ and state constraint Λ, which is given by the family
where the permutations π are uniformly distributed on and
Since Π_n is the group of permutations on n elements, its cardinality is n!, so that the random code consists of n! deterministic codes.
From the robustification technique it follows that the average probability of error of the constructed random code is bounded from above by
Moreover, from the construction it is clear that for given input the random code achieves for the AVBBC the same rate pairs as for the compound BBC, as specified in (7). Finally, taking the union over all input distributions that satisfy the input constraint establishes the achievability part of the random code capacity region as stated in Theorem 4.
4.2.3. Converse
It remains to show that the presented random coding strategy is optimal in the sense that no other rate pairs are achievable.
As a first step, it is easy to show that the average probability of error of the random code for the AVBBC equals the average probability of error of the random code for the compound BBC. Hence, it is clear that we cannot achieve higher rates than for the constructed compound BBC with random codes. The deterministic code rates of the compound channel can be found in [27]. Additionally, as in [57] for the single-user compound channel, it can easily be shown that for the compound BBC the achievable rates for deterministic and random codes are equal. Since the constructed random code for the AVBBC already achieves these rates, the converse is established.
This finishes the proof of Theorem 4 and therewith the random code capacity region of the AVBBC under input constraint Γ and state constraint Λ.
4.3. No Additional Coordination
A random coding strategy as constructed in the previous section requires common randomness between all nodes, since the encoder and the decoders all depend on the same random permutation, which has to be known at all nodes in advance. If this kind of resource is not available, one is interested in deterministic strategies. In this section, we derive the deterministic code capacity region of the AVBBC with constraints on input and states.
Theorem 6.
If , , then the deterministic code capacity region of the AVBBC under input constraint Γ and state constraint Λ is
If or , then .
From the theorem we immediately obtain the deterministic code capacity region of the AVBBC with state constraint Λ and no input constraint, i.e., .
Corollary 1.
If , , then the deterministic code capacity region of the AVBBC with state constraint Λ and no input constraint is given by
If or , then .
We observe that the deterministic code capacity region of the AVBBC under input constraint Γ and state constraint Λ displays a dichotomy behavior similar to the unconstrained case [51]: it either equals a non-empty region or has an empty interior. Unfortunately, this knowledge cannot be exploited to prove the corresponding deterministic code capacity region since, as already observed in [4] for the single-user AVC, Ahlswede's elimination technique [2] no longer works if constraints are imposed on the permissible codewords and sequences of states. Consequently, to prove Theorem 6 we need a proof idea which does not rely on this technique. In the following subsections we present the proof, which is therefore mainly based on an extension of [4].
4.3.1. Symmetrizability
The following lemma shows that under state constraint Λ no code with codewords of type satisfying or can be good.
Lemma 1.
For a -symmetrizable AVBBC any deterministic code of block length n with codewords , , , each of type with , and has
Similarly, for a -symmetrizable AVBBC any deterministic code of block length n with codewords , , , each of type with , and has
Proof.
The proof can be found in Appendix Section A.1. ☐
Remark 4.
The lemma indicates that for a successful transmission using codewords of type the symmetrizability costs , , have to exceed the permissible (or available) costs Λ, since otherwise the AVBBC can be symmetrized, which prohibits any reliable or error-free communication. This already establishes the second part of Theorem 6 and therewith characterizes when .
4.3.2. Positive Rates
Next, we present a coding strategy with codewords of type that achieves the desired rates as specified in Theorem 6 if the symmetrizability costs exceed the permissible costs, i.e., and . Fortunately, we are in the same position as for the single-user AVC [4]: the coding strategy for the AVBBC without constraints [52] need only be slightly modified to apply also to the AVBBC with constraints.
We need codewords , , with the following properties.
Lemma 2.
For any , , , , and given type , there exist codewords , , such that for every , , and every joint type , with and , we have for each fixed the following properties
where , and further for each fixed
Proof. The proof can be found in Appendix A.2. ☐
We follow [4] and define the decoding sets similarly to the single-user AVC under input and state constraints. To this end, we define the set
Then, the decoding sets at node 1 are specified as follows.
Definition 12.
For given codewords , , , and we have if and only if
- (i)
- there exists an such that
- (ii)
- for each codeword with which satisfies for some , we have where are dummy random variables such that equals the joint type of .
The decoding sets at node 2 are defined accordingly with . A key part is now to ensure that these decoding sets are unambiguously defined, i.e., that they are disjoint for small enough and , which can be shown analogously to the single-user case [4]. This is where the conditions on the symmetrizability costs come in.
Lemma 3.
Let and , then for a sufficiently small , , no quintuple of random variables , , , , and can simultaneously satisfy with
and
Proof. The proof can be found in Appendix A.3. ☐
So far we have defined the coding and decoding rules. Next, we show that codewords of type with properties as given in Lemma 2 and decoding sets as given in Definition 12 suffice to achieve all rate pairs as specified by the region , cf. (7).
Lemma 4.
Given and arbitrarily small , , and , for any type satisfying
there exist a code of block length with codewords , , , such that
while
where and depend only on α, β, δ, and the AVBBC .
Proof.
The proof follows [4] (Lemma 5) where a similar result is shown for the single-user AVC.
Let , , , each satisfying the input constraint , be codewords with properties as specified in Lemma 2 (ϵ will be chosen later) and , satisfying
Let the decoding sets and be as given in Definition 12. Then Lemma 3 ensures that and can be chosen small enough to ensure that the decoding sets are well defined.
Furthermore, is uniformly continuous in and divergence dominates the variational distance [3] so that we can choose small enough to ensure that implies
In the following we carry out the analysis for the probability of error at node 1. Then the analysis for node 2 follows accordingly using the same arguments. Now, we establish an exponentially decreasing upper bound on the probability of error as postulated in (20) for node 1 for a fixed state sequence .
For each we first observe by Definition 12 of the decoding sets that is erroneously decoded if decoding rule (i) or decoding rule (ii) is violated. More precisely, when message has been sent, then the decoder makes an error if or there exists a joint type with for some such that (a) ; (b) for some ; and (c) . Let denote the set of all types which satisfy the aforementioned conditions (a)–(c). Consequently, the probability of error for message m and state sequence is bounded by
where
Next, for given we define the set
and use the trivial bound for all such . With this and (23) we get for the average probability of error
Property (18b) of the codewords and Fact 1 from Section 2 imply for the first term that
where the last inequality holds for sufficiently large n.
To bound the second term we observe that for any
where the second inequality follows from Fact 3 and the third inequality from Fact 1 and
It remains to bound for the term
Before we proceed to bound (27) we observe that if , then by (18c),
Consequently, it suffices to proceed when satisfies
From (24) we may write
Since is constant for and the inner term in (29) is bounded from above by
where the last inequality follows immediately from Fact 2. Next, using (18a), it follows from (29) together with (30) that
Since
is obviously fulfilled, we can substitute this into (31) and obtain
Since for some , it follows from (21) and (22) that
and therewith
Now, we choose so that (25), (26), (28), and (32) imply that the average probability of error decreases exponentially fast for sufficiently large n. Since the derived bounds hold uniformly for all , the first part of the proof is complete. Similarly, we can now bound the average probability of error at node 2 using the same argumentation. ☐
4.3.3. Converse
It remains to show that no rate pairs other than those already characterized by Theorem 6 are achievable. If , , the converse is already established by Lemma 1. Consequently, we only need to consider the case where , , in the following.
Lemma 5.
For any , , and , there exists such that for any deterministic code of block length with codewords, each of type , satisfying
implies
And similarly, if the codewords satisfy , then .
Proof.
The proof follows [4] (Lemma 2) where a similar converse result is shown for the single-user case. We carry out the analysis for receiving node 1, then the result for receiving node 2 follows accordingly using the same argumentation.
Let us consider a joint probability distribution
If some probability distribution satisfies
for some which depends on δ but not on , then
To prove (35) let be a probability distribution which achieves the infimum in so that we have for as given in (33) with . Next, we use to construct a new probability distribution with slightly smaller costs than Λ as required in (34). Therefore, let with and define
Clearly, satisfies (34), and therefore (35) holds for sufficiently small η, since is uniformly continuous in if is given as in (33).
Similarly as in [4] (Lemma 2), we consider now any deterministic code with codewords , , , and decoding sets and for all and , cf. Definition 9. Further, let be a sequence, where each element is independent and identically distributed according to q as constructed above. Then for receiving node 1 we get for each fixed for the probability of error
Next, we set
which is, in fact, a discrete memoryless channel (DMC). For each , (36) yields that where is the average probability of error when the deterministic code is used on the DMC . Next, observe that
which follows from (34), (5b), and Chebyshev’s inequality so that we get
Now, we are almost done. We observe that the definition of as given in (35) implies that is connected with by the channel as defined in (37). For such a DMC a strong converse in terms of the maximal error can be found in [3], which immediately also yields a strong converse for the DMC in terms of the average probability of error as needed here. In more detail, (36) implies, by the strong converse for a DMC with codewords of type , that if all codewords , , , each of type , then, for each , the average probability of error is arbitrarily close to 1 if and n is sufficiently large. Finally, this together with (38) completes the first part of the proof.
The result for receiving node 2 follows accordingly using the same argumentation which completes the proof of the lemma. ☐
4.3.4. Capacity Region
Now we are in the position to finally establish the deterministic code capacity region, which is one of the main contributions of this work. Thus, summarizing the results obtained so far, we see that for given input distribution the achievable rates for the AVBBC under input constraint Γ and state constraint Λ are given by if , . Taking the union over all such valid inputs we finally obtain
On the other hand, we have if or , which follows immediately from Lemma 1. This, indeed, establishes the deterministic code capacity region of the AVBBC under input constraint Γ and state constraint Λ as stated in Theorem 6.
Remark 5.
The case where , , remains unsolved, similarly to the corresponding case for the single-user AVC [4]. Likewise, we expect that in that case.
4.4. Unknown Varying Additive Interference
So far we considered discrete memoryless channels and analyzed the corresponding arbitrarily varying bidirectional broadcast channel. Here, we assume channels with additive white Gaussian noise, where the transmission in the bidirectional broadcast phase is further corrupted by unknown varying additive interference. Therefore, we also call this a BBC with unknown varying interference. Clearly, the interference at both receivers may differ so that we introduce two artificial interferers or jammers, one for each receiver, to model this scenario. Then the BBC with unknown varying interference is specified by the flat fading input-output relation between the relay node and node i, , which is given by
Here, denotes the output at node i, the input, the additive interference, and the additive Gaussian noise distributed according to .
The transmit powers of the relay and of the artificial jammers are restricted by average power constraints Γ and , , respectively. This means that all permissible input sequences of length n must satisfy
and all permissible jamming sequences , , of length n must satisfy
From conditions (39) and (40) it follows that all permissible codewords and interfering sequences lie on or within an n-dimensional sphere of radius √(nΓ) or √(nΛ_i), i = 1, 2, respectively.
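A small sketch of the power constraint in (39) (our own helper functions): a codeword is admissible exactly when its average power does not exceed Γ, and any candidate vector can be rescaled onto the sphere of radius √(nΓ).

```python
import numpy as np

def satisfies_power_constraint(x, Gamma):
    """Check the average power constraint (1/n) * sum_k x_k^2 <= Gamma."""
    return float(np.mean(x ** 2)) <= Gamma

def project_to_sphere(x, Gamma):
    """Scale x onto the sphere of radius sqrt(n * Gamma), i.e., average power Gamma."""
    n = len(x)
    return x * np.sqrt(n * Gamma) / np.linalg.norm(x)

rng = np.random.default_rng(1)
Gamma, n = 2.0, 1000
x = rng.normal(0, 2.0, n)                        # a candidate codeword
x_proj = project_to_sphere(x, Gamma)
print(satisfies_power_constraint(x, Gamma))      # typically False for this variance
print(np.isclose(np.mean(x_proj ** 2), Gamma))   # True after projection
```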
Similarly to the discrete memoryless AVBBC, it makes a difference for the BBC with unknown varying interference whether we consider deterministic or random coding strategies. Hence, we want to specify their different impact on the transmission in the following.
4.4.1. No Additional Coordination
The traditional approach without additional coordination is in general based on a system design which ensures that the interference at the receivers does not exceed a certain threshold. In current cellular networks, for example, this is realized by spatially separating cells that operate at the same frequency.
Theorem 7.
The deterministic code capacity region of the BBC with unknown varying interference with input constraint Γ and jamming constraints and is the set of all rate pairs that satisfy
This means that the interior of the deterministic code capacity region is non-empty if and only if Γ > Λ_1 and Γ > Λ_2.
Sketch of Proof. First, we consider the case when or . Let , , with and be arbitrary codewords satisfying the input constraint (39). For we can consider the jamming sequences , , . Then for each at node 1 the following holds. For each pair with we have for the probability of error at node 1
Hence, for a fixed this leads for the average probability of error to
This implies that for at least one . Since the average probability of error is bounded from below by a positive constant, a reliable transmission from the relay to node 1 is not possible so that we end up with . The case similarly leads to .
Remark 6.
Interestingly, Theorem 7 shows that the existence of positive rates depends only on the interference and is completely independent of the noise. Consequently, the goal of the traditional approach is to ensure that the received interference is small enough. Otherwise, no communication is possible, not even at very low rates.
Now, we turn to the case when and . To show that the rates given in (41) are actually achievable, we follow [58] where a similar result is proved for the corresponding single-user scenario. The strategy is outlined in the following.
Without loss of generality we assume that and further , . Then it suffices to show that for every small and sufficiently large n there exist codewords (on the unit sphere) with and and with , , cf. (41), such that the average probability of error is arbitrarily small for all satisfying (40). To ensure that the probability of error gets arbitrarily small, the codewords must possess certain properties, which are guaranteed by the following lemma. This is a straightforward extension of the single-user case [58] (Lemma 1) to the BBC with unknown varying interference.
Lemma 6.
For every , , , and , with , , for there exist unit vectors , , such that for every unit vector and constants α, β in , we have for each
and, if ,
and similarly for each
and, if ,
Proof. The proof is a straightforward extension of the corresponding single-user result given in [58] (Lemma 1) and is therefore omitted for brevity. ☐
At the receiving nodes it suffices to use a minimum-distance decoder. Then for each the decoding sets at node 1 and for each at node 2 are given by
With the presented coding and decoding rule, the probability of error gets arbitrarily small for increasing block length, which can be shown analogously to [58]. The details are omitted for brevity.
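As a sketch of the decoding rule just described (our own minimal implementation with a random toy codebook), node 1 knows its own message m1 and therefore only searches over the codewords consistent with m1, picking the one closest in Euclidean distance to the received sequence.

```python
import numpy as np

rng = np.random.default_rng(2)
n, M1, M2, Gamma = 64, 4, 4, 1.0
sigma1, Lambda1 = 0.1, 0.2                          # noise variance and jamming power at node 1

# random codebook on the sphere of radius sqrt(n*Gamma), one codeword per message pair
X = rng.normal(size=(M1, M2, n))
X *= np.sqrt(n * Gamma) / np.linalg.norm(X, axis=2, keepdims=True)

def decode_at_node1(y, own_m1):
    """Minimum-distance decoding of m2 at node 1, using m1 as side information."""
    dists = np.linalg.norm(X[own_m1] - y, axis=1)   # only codewords consistent with m1
    return int(np.argmin(dists))

m1, m2 = 1, 3
jam = rng.normal(size=n)
jam *= np.sqrt(n * Lambda1) / np.linalg.norm(jam)   # jamming sequence with average power Lambda1
y1 = X[m1, m2] + jam + rng.normal(0, np.sqrt(sigma1), n)   # received sequence at node 1
print(decode_at_node1(y1, m1) == m2)                # correct with high probability here
```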
It remains to show that the described strategy is optimal, which means that no other rate pairs are achievable. From the previous discussion, we already know that the deterministic code capacity region is included in the random code capacity region. In the next subsection, from Theorem 8, we see that for , , the maximal achievable rates for both strategies are equal. Since the described strategy already achieves these rates, the optimality is proved.
4.4.2. Encoder-Decoder Coordination Based on Common Randomness
Next, we study a more involved coordination scheme. We assume that the relay and the receivers are synchronized in such a manner that they can coordinate their choice of the encoder and decoders based on access to a common resource that is independent of the current message.
This can be realized by using a random code. If we transmit at rates and with exponentially many messages, i.e., and , we know from [2] that it suffices to use a random code which consists of only n² encoder-decoder sets and a uniformly distributed random variable whose value indicates which of them all nodes have to use. The access to the common random variable can be realized by an external source, e.g., a satellite signal, or a preamble prior to the transmission. Clearly, for sufficiently large block length the (polynomial) costs for this coordination are negligible. We call this additional encoder-decoder coordination based on common randomness. Due to the more involved coordination we expect an improvement in performance compared to the traditional approach, especially for high interference.
Theorem 8.
The random code capacity region of the BBC with unknown varying interference with input constraint Γ and jamming constraints and is the set of all rate pairs that satisfy
Sketch of Proof. The theorem can be proved analogously to [59] where a similar result is proved for the single-user case. The random strategy which achieves the rates given in (43) is outlined in the following.
The codewords are uniformly distributed on the n-dimensional sphere of radius √(nΓ). Similarly to the traditional approach, a minimum-distance decoder as given in (42) at the receiving nodes is sufficient. It remains to show that for all rate pairs satisfying (43) the probability of error gets arbitrarily small for increasing block length. This can be done similarly to [59].
The optimality of the presented random strategy, which means that no other rate pairs are achievable, follows immediately from [59] and can be shown by standard arguments.
Remark 7.
The capacity region is identical to the one that would result if the interfering sequences consisted of i.i.d. Gaussian symbols distributed according to , . This means that the arbitrary, possibly non-Gaussian, unknown interference does not affect the achievable rates more than Gaussian noise of the same power.
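A small numerical illustration of this remark, assuming that the per-receiver bound in (43) takes the single-user form (1/2)·log2(1 + Γ/(σ_i² + Λ_i)) suggested by [59] (this explicit form is our reading, not quoted from the text): the unknown interference enters exactly like additional Gaussian noise of power Λ_i, while Theorem 7 ties positive deterministic rates to Γ > Λ_i.

```python
import math

def random_code_rate(Gamma, sigma2, Lambda):
    """Assumed per-receiver bound: unknown interference of power Lambda acts like
    extra Gaussian noise of the same power (cf. Remark 7)."""
    return 0.5 * math.log2(1 + Gamma / (sigma2 + Lambda))

Gamma, sigma2 = 1.0, 0.1
for Lambda in (0.5, 2.0):
    deterministic_ok = Gamma > Lambda          # Theorem 7: positive rates iff Gamma > Lambda_i
    print(Lambda, round(random_code_rate(Gamma, sigma2, Lambda), 3), deterministic_ok)
```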
5. Discussion
The concept of arbitrarily varying channels has been shown to be a suitable and robust model for communication in wireless networks which share their resources with other coexisting systems in an uncoordinated way. The main issue that comes along with this development is that interference becomes a ubiquitous phenomenon and will be one of the limiting factors in future wireless networks.
It has been shown that unknown varying interference has a dramatic impact on the communication in such wireless systems. If the traditional approach without additional coordination is applied, unknown varying interference can lead to situations that completely prohibit any reliable communication. This is mainly because the traditional approach treats the interference as some kind of additional noise. As we have seen, this is in general too imprecise and leads to a performance loss, especially if the interference is caused by other transmitters that use the same or a similar codebook. Then, interference can look like other valid codewords and receivers can no longer reliably distinguish between the intended signal and the interference. Consequently, a traditional approach based on a deterministic coding strategy is only reasonable if the interference can be made small enough. For Gaussian channels this means that the power of the interference signal must be ensured to be smaller than the power of the transmit signal. Thus, especially in the high interference case, where the interference power exceeds the transmit power, a more sophisticated coordination based on a random coding strategy is needed for reliable communication. It has been shown that additional coordination of the encoder and decoder based on a common resource, such as common randomness or correlated side information, is sufficient to handle the interference even if it is stronger than the desired signal.
To date, only the single-user AVC has been analyzed under additional encoder-decoder coordination based on correlated side information [54]. It would be interesting to extend this analysis to other (multi-user) settings.
In this paper we used the concept of arbitrarily varying channels to analyze bidirectional relaying in coexistence with other wireless networks. This required the study of the arbitrarily varying bidirectional broadcast channel (AVBBC). Based on Ahlswede’s elimination technique [2] the following dichotomy of the deterministic code capacity region of an AVBBC was revealed in [51,52]: it either equals its random code capacity region or else has an empty interior. Unfortunately, many channels of practical interest are symmetrizable, which results in an ambiguity of the codewords at the receivers. Such channels prohibit any reliable communication and therewith fall in the latter category.
Imposing constraints on the permissible sequences of channel states reveals further phenomena. Now, even when the channel is symmetrizable, the deterministic code capacity region of the AVBBC under input and state constraints may be non-empty but strictly smaller than its random code capacity region. Thereby, we observed that the constraints on the state sequences may reduce the deterministic code capacity region so that it is in general strictly smaller than the corresponding random code capacity region, but they preserve the general dichotomy behavior of the deterministic code capacity region: it still either equals a non-empty region or else has an empty interior. Although the deterministic code capacity region displays a dichotomy behavior, this cannot be exploited to prove the corresponding capacity region since Ahlswede's elimination technique [2] no longer works in the presence of constraints on inputs and states, cf. also [60]. This necessitated a proof technique which does not rely on the dichotomy behavior and is based on an idea of Csiszár and Narayan [4].
Besides the concept of arbitrarily varying channels, there are also other approaches to tackle the problem of interference or channel uncertainty in wireless networks. One approach to model the interference is based on the framework of interference functions, cf. for example [61] or [62,63]. In this axiomatic approach the interference functions are assumed to have some basic properties such as non-negativity, scale-invariance, and monotonicity. It is shown that under these assumptions the performance of wireless systems depends continuously on the interference functions. These assumptions are valid and reasonable for conventional cellular systems which are coordinated in a centralized way. But if such systems compete with other coexisting systems on the same wireless resources, the concept of arbitrarily varying channels shows that these assumptions are no longer valid.
In the signal processing community, a common approach to tackle the problem of channel uncertainty is the robust design of wireless systems based on robust optimization techniques. There are statistical approaches which assume the channel to be random but distributed according to a known statistic. For example, heuristics have been developed for the multi-antenna downlink scenario from a signal processing point of view in [64,65]. These approaches are designed for conventional cellular systems, and it would be interesting for future work to analyze whether they can be extended to the case with unknown interference from other coexisting wireless networks.
Another approach is based on worst-case noise analysis, as studied in [66,67,68,69]. Here, the impact of interference and channel uncertainty is analyzed for conventional single-cell systems and, again, it would be interesting to analyze whether this approach can be extended to scenarios with interference from coexisting wireless networks.
Acknowledgments
The authors would like to thank Igor Bjelaković for his insightful comments and fruitful discussions. This work was partly supported by the German Ministry of Education and Research (BMBF) under Grant 01BQ1050 and by the German Research Foundation (DFG) under Grant BO 1734/25-1.
References
- Blackwell, D.; Breiman, L.; Thomasian, A.J. The capacities of certain channel classes under random coding. Ann. Math. Stat. 1960, 31, 558–567. [Google Scholar] [CrossRef]
- Ahlswede, R. Elimination of correlation in random codes for arbitrarily varying channels. Z. Wahrscheinlichkeitstheorie verw. Gebiete 1978, 44, 159–175. [Google Scholar] [CrossRef]
- Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed.; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
- Csiszár, I.; Narayan, P. The capacity of the arbitrarily varying channel revisited: Positivity, constraints. IEEE Trans. Inf. Theory 1988, 34, 181–193. [Google Scholar] [CrossRef]
- Csiszár, I.; Narayan, P. Arbitrarily varying channels with constrained inputs and states. IEEE Trans. Inf. Theory 1988, 34, 27–34. [Google Scholar] [CrossRef]
- MolavianJazi, E.; Bloch, M.; Laneman, J.N. Arbitrary jamming can preclude secure communication. In Proceedings of the Allerton Conference Communication, Control, Computing, Urbana-Champaign, IL, USA, 30 September–02 October 2009; pp. 1069–1075.
- Bjelaković, I.; Boche, H.; Sommerfeld, J. Strong secrecy in arbitrarily varying wiretap channels. In Proceedings of the IEEE Information Theory Workshop, Lausanne, Switzerland, 3–7 September 2012.
- Jahn, J.H. Coding of arbitrarily varying multiuser channels. IEEE Trans. Inf. Theory 1981, 27, 212–226. [Google Scholar] [CrossRef]
- Gubner, J.A. On the deterministic-code capacity of the multiple-access arbitrarily varying channel. IEEE Trans. Inf. Theory 1990, 36, 262–275. [Google Scholar] [CrossRef]
- Ahlswede, R.; Cai, N. Arbitrarily varying multiple-access channels—Part I: Ericson’s symmetrizability is adequate, Gubner’s conjecture is true. IEEE Trans. Inf. Theory 1999, 45, 742–749. [Google Scholar] [CrossRef]
- Gubner, J.A. State constraints for the multiple-access arbitrarily varying channel. IEEE Trans. Inf. Theory 1991, 37, 27–35. [Google Scholar] [CrossRef]
- Gubner, J.A.; Hughes, B.L. Nonconvexity of the capacity region of the multiple-access arbitrarily varying channel subject to constraints. IEEE Trans. Inf. Theory 1995, 41, 3–13. [Google Scholar] [CrossRef]
- Wiese, M.; Boche, H.; Bjelaković, I.; Jungnickel, V. The compound multiple access channel with partially cooperating encoders. IEEE Trans. Inf. Theory 2011, 57, 3045–3066. [Google Scholar] [CrossRef]
- Wiese, M.; Boche, H. The arbitrarily varying multiple-access channel with conferencing encoders. In Proceedings of the IEEE International Symposium Information Theory, Saint Petersburg, Russia, 31 July–5 August 2011; pp. 993–997.
- Hof, E.; Bross, S.I. On the deterministic-code capacity of the two-user discrete memoryless arbitrarily varying general broadcast channel with degraded message sets. IEEE Trans. Inf. Theory 2006, 52, 5023–5044. [Google Scholar] [CrossRef]
- Rankov, B.; Wittneben, A. Spectral efficient protocols for half-duplex fading relay channels. IEEE J. Sel. Areas Commun. 2007, 25, 379–389. [Google Scholar] [CrossRef]
- Larsson, P.; Johansson, N.; Sunell, K.E. Coded bi-directional relaying. In Proceedings of the 5th Scandinavian Workshop on Ad Hoc Networks, Stockholm, Sweden, 3–4 May 2005; pp. 851–855.
- Wu, Y.; Chou, P.; Kung, S.Y. Information exchange in wireless networks with network coding and physical-layer broadcast. In Proceedings of the Conference Information Sciences and Systems, Baltimore, MD, USA, March 2005; pp. 1–6.
- Knopp, R. Two-way radio networks with a star topology. In Proceedings of the International Zurich Seminar on Communication, Zurich, Switzerland, February 2006; pp. 154–157.
- Oechtering, T.J.; Schnurr, C.; Bjelaković, I.; Boche, H. Broadcast capacity region of two-phase bidirectional relaying. IEEE Trans. Inf. Theory 2008, 54, 454–458. [Google Scholar] [CrossRef]
- Kim, S.J.; Mitran, P.; Tarokh, V. Performance bounds for bidirectional coded cooperation protocols. IEEE Trans. Inf. Theory 2008, 54, 5235–5241. [Google Scholar] [CrossRef]
- Kramer, G.; Shamai (Shitz), S. Capacity for classes of broadcast channels with receiver side information. In Proceedings of the IEEE Information Theory Workshop, Tahoe City, CA, USA, 2–6 September 2007; pp. 313–318.
- Xie, L.L. Network coding and random binning for multi-user channels. In Proceedings of the Canadian Workshop on Information Theory, 6–8 June 2007; pp. 85–88.
- Wyrembelski, R.F.; Oechtering, T.J.; Bjelaković, I.; Schnurr, C.; Boche, H. Capacity of Gaussian MIMO bidirectional broadcast channels. In Proceedings of the IEEE International Symposium Information Theory, Toronto, Canada, 6–11 July 2008; pp. 584–588.
- Oechtering, T.J.; Wyrembelski, R.F.; Boche, H. Multiantenna bidirectional broadcast channels–optimal transmit strategies. IEEE Trans. Signal Process. 2009, 57, 1948–1958. [Google Scholar] [CrossRef]
- Oechtering, T.J.; Jorswieck, E.A.; Wyrembelski, R.F.; Boche, H. On the optimal transmit strategy for the MIMO bidirectional broadcast channel. IEEE Trans. Commun. 2009, 57, 3817–3826. [Google Scholar] [CrossRef]
- Wyrembelski, R.F.; Bjelaković, I.; Oechtering, T.J.; Boche, H. Optimal coding strategies for bidirectional broadcast channels under channel uncertainty. IEEE Trans. Commun. 2010, 58, 2984–2994. [Google Scholar] [CrossRef]
- Wyrembelski, R.F.; Oechtering, T.J.; Boche, H.; Skoglund, M. Robust transmit strategies for multiantenna bidirectional broadcast channels. In Proceedings of the ITG Workshop Smart Antennas, Dresden, Germany, 7–8 March 2012; pp. 46–53.
- Kim, T.T.; Poor, H.V. Diversity-multiplexing trade-off in adaptive two-way relaying. IEEE Trans. Inf. Theory 2011, 57, 4235–4254. [Google Scholar] [CrossRef]
- Zhao, J.; Kuhn, M.; Wittneben, A.; Bauch, G. Optimum time-division in MIMO two-way decode-and-forward relaying systems. In Proceedings of the Asilomar Conference Signals, Systems, Computers, Pacific Grove, CA, USA, 26–29 November 2008; pp. 1494–1500.
- Sezgin, A.; Khajehnejad, M.A.; Avestimehr, A.S.; Hassibi, B. Approximate capacity region of the two-pair bidirectional Gaussian relay network. In Proceedings of the IEEE International Symposium Information Theory, Seoul, Korea, 28 June–3 July 2009; pp. 2018–2022.
- Popovski, P.; Koike-Akino, T. Coded bidirectional relaying in wireless networks. In New Directions in Wireless Communications Research; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
- Zhang, R.; Liang, Y.C.; Chai, C.C.; Cui, S. Optimal beamforming for two-way multi-antenna relay channel with analogue network coding. IEEE J. Sel. Areas Commun. 2009, 27, 699–712. [Google Scholar] [CrossRef]
- Ngo, H.Q.; Quek, T.Q.S.; Shin, H. Amplify-and-forward two-way relay networks: Error exponents and resource allocation. IEEE Trans. Commun. 2009, 58, 2653–2666. [Google Scholar] [CrossRef]
- Roemer, F.; Haardt, M. Tensor-based channel estimation and iterative refinements for two-way relaying with multiple antennas and spatial reuse. IEEE Trans. Signal Process. 2010, 58, 5720–5735. [Google Scholar] [CrossRef]
- Yilmaz, E.; Zakhour, R.; Gesbert, D.; Knopp, R. Multi-pair two-way relay channel with multiple Antenna relay station. In Proceedings of the IEEE International Conference Communication, Cape Town, South Africa, 23–27 May 2010; pp. 1–5.
- Schnurr, C.; Oechtering, T.J.; Stańczak, S. Achievable rates for the restricted half-duplex two-way relay channel. In Proceedings of the Asilomar Conference Signals, Systems, Computers, Pacific Grove, CA, USA, 4–7 November 2007; pp. 1468–1472.
- Gündüz, D.; Tuncel, E.; Nayak, J. Rate regions for the separated two-way relay channel. In Proceedings of the Allerton Conference Communication, Control, Computing, Urbana-Champaign, IL, USA, 23–26 September 2008; pp. 1333–1340.
- Zhong, P.; Vu, M. Compress-forward without Wyner-Ziv binning for the one-way and two-way relay channels. In Proceedings of the Allerton Conference Communication, Control, Computing, Urbana-Champaign, IL, USA, 28–30 September 2011; pp. 426–433.
- Ong, L.; Kellett, C.M.; Johnson, S.J. Functional-decode-forward for the general discrete memoryless two-way relay channel. In Proceedings of the IEEE International Conference Communication Systems, Singapore, 17–19 November 2010; pp. 351–355.
- Wilson, M.P.; Narayanan, K.; Pfister, H.D.; Sprintson, A. Joint physical layer coding and network coding for bidirectional relaying. IEEE Trans. Inf. Theory 2010, 56, 5641–5654. [Google Scholar] [CrossRef]
- Nam, W.; Chung, S.Y.; Lee, Y.H. Capacity of the Gaussian two-way relay channel to within 1/2 bit. IEEE Trans. Inf. Theory 2010, 56, 5488–5494. [Google Scholar] [CrossRef]
- Baik, I.J.; Chung, S.Y. Network coding for two-way relay channels using lattices. Telecommun. Rev. 2007, 17, 1009–1021. [Google Scholar]
- Nazer, B.; Gastpar, M. Compute-and-forward: Harnessing interference through structured codes. IEEE Trans. Inf. Theory 2011, 57, 6463–6486. [Google Scholar] [CrossRef]
- Song, Y.; Devroye, N. A lattice compress-and-forward scheme. In Proceedings of the IEEE Information Theory Workshop, Paraty, Brazil, 16–20 October 2011; pp. 110–114.
- Kim, S.J.; Smida, B.; Devroye, N. Lattice strategies for a multi-pair bi-directional relay network. In Proceedings of the IEEE International Symposium Information Theory, Saint Petersburg, Russia, 31 July–5 August 2011; pp. 2243–2247.
- Lim, S.H.; Kim, Y.H.; El Gamal, A.; Chung, S.Y. Layered noisy network coding. In Proceedings of the IEEE Wireless Network Coding Conference, Boston, MA, USA, 21 June 2010; pp. 1–6.
- Lim, S.H.; Kim, Y.H.; El Gamal, A.; Chung, S.Y. Noisy network coding. IEEE Trans. Inf. Theory 2011, 57, 3132–3152. [Google Scholar] [CrossRef]
- Kramer, G.; Hou, J. Short-message quantize-forward network coding. In Proceedings of the 8th International Workshop on Multi-Carrier Systems & Solutions, Herrsching, Germany, 3–4 May 2011; pp. 1–3.
- Kramer, G.; Hou, J. On message lengths for noisy network coding. In Proceedings of the IEEE Information Theory Workshop, Paraty, Brazil, 16–20 October 2011; pp. 430–431.
- Wyrembelski, R.F.; Bjelaković, I.; Boche, H. Coding strategies for bidirectional relaying for arbitrarily varying channels. In Proceedings of the IEEE Global Communication Conference, Honolulu, HI, USA, 30 November–4 December 2009; pp. 1–6.
- Wyrembelski, R.F.; Bjelaković, I.; Boche, H. On the capacity of bidirectional relaying with unknown varying channels. In Proceedings of the IEEE Workshop Computational Advances Multi-Sensor Adaptive Processing, Aruba, Dutch Antilles, 13–16 December 2009; pp. 269–272.
- Wyrembelski, R.F.; Bjelaković, I.; Boche, H. List decoding for bidirectional broadcast channels with unknown varying channels. In Proceedings of the IEEE International Conference Communication, Cape Town, South Africa, 23–27 May 2010; pp. 1–6.
- Ahlswede, R.; Cai, N. Correlated sources help transmission over an arbitrarily varying channel. IEEE Trans. Inf. Theory 1997, 43, 1254–1255. [Google Scholar] [CrossRef]
- Ahlswede, R. Coloring hypergraphs: A new approach to multi-user source coding—II. J. Comb. Inform. Syst. Sci. 1980, 5, 220–268. [Google Scholar]
- Ahlswede, R. Arbitrarily varying channels with states sequence known to the sender. IEEE Trans. Inf. Theory 1986, 32, 621–629. [Google Scholar] [CrossRef]
- Ahlswede, R.; Wolfowitz, J. The structure of capacity functions for compound channels. In Proceedings of the International Symposium on Probability and Information Theory, McMaster University, Hamilton, Canada, April 1969; pp. 12–54.
- Csiszár, I.; Narayan, P. Capacity of the Gaussian arbitrarily varying channel. IEEE Trans. Inf. Theory 1991, 37, 18–26. [Google Scholar] [CrossRef]
- Hughes, B.; Narayan, P. Gaussian arbitrarily varying channels. IEEE Trans. Inf. Theory 1987, 33, 267–284. [Google Scholar] [CrossRef]
- Lapidoth, A.; Narayan, P. Reliable communication under channel uncertainty. IEEE Trans. Inf. Theory 1998, 44, 2148–2177. [Google Scholar] [CrossRef]
- Yates, R.D. A framework for uplink power control in cellular radio systems. IEEE J. Sel. Areas Commun. 1995, 13, 1341–1347. [Google Scholar] [CrossRef]
- Boche, H.; Schubert, M. A unifying approach to interference modeling for wireless networks. IEEE Trans. Signal Process. 2010, 58, 3282–3297. [Google Scholar] [CrossRef]
- Boche, H.; Schubert, M. Concave and convex interference functions—General characterizations and applications. IEEE Trans. Signal Process. 2008, 56, 4951–4965. [Google Scholar] [CrossRef]
- Vucic, N.; Boche, H. Robust QoS-constrained optimization of downlink multiuser MISO systems. IEEE Trans. Signal Process. 2009, 57, 714–725. [Google Scholar] [CrossRef]
- Vucic, N.; Boche, H.; Shi, S. Robust transceiver optimization in downlink multiuser MIMO systems. IEEE Trans. Signal Process. 2009, 57, 3576–3587. [Google Scholar] [CrossRef]
- Jorswieck, E.A.; Boche, H. Majorization and matrix-monotone functions in wireless communications. Found. Trends Commun. Inf. Theory 2007, 3, 553–701. [Google Scholar] [CrossRef]
- Boche, H.; Jorswieck, E.A. Outage probability of multiple antenna systems: Optimal transmission and impact of correlation. In Proceedings of the International Zurich Seminar on Communication, Zurich, Switzerland, 18–20 February 2004; pp. 116–119.
- Jorswieck, E.A.; Boche, H. Optimal transmission strategies and impact of correlation in multiantenna systems with different types of channel state information. IEEE Trans. Signal Process. 2004, 52, 3440–3453. [Google Scholar] [CrossRef]
- Jorswieck, E.A.; Boche, H. Channel capacity and capacity-range of beamforming in MIMO wireless systems under correlated fading with covariance feedback. IEEE Trans. Wireless Commun. 2004, 3, 1543–1553. [Google Scholar] [CrossRef]
- Pinsker, M.S. Information and Information Stability of Random Variables and Processes; Holden-Day: San Francisco, CA, USA, 1964. [Google Scholar]
Appendices
A. Additional Proofs
A.1. Proof of Lemma 1
The lemma follows immediately from [4] (Lemma 1), where a similar result for the single-user AVC is proved. Using the same ideas we are able to extend the proof to the AVBBC under input constraint Γ and state constraint Λ. Thereby, we carry out the analysis for the first of the two cases for the given type; the other case then follows accordingly.
We consider any deterministic code for the AVBBC with codewords , , , and the corresponding decoding sets at node 1. Next, for any channel which symmetrizes the AVBBC in the sense of Definition 8, we define random variables , , with statistically independent elements and
Then for each the following holds. For each pair and every we have
where the equalities follow from the memoryless property of the channel, the definition of the expectation, and (44). Since the AVBBC is -symmetrizable, i.e., (4) holds, it follows that
so that we finally end up with
For the probability of error at node 1 this implies the following. For we have
where the second equality follows from (45). For a fixed this leads to
Thus we obtain
which implies that there exists at least one and such that
Next, we restrict to codewords of type , i.e., , , , with . Further, we choose such that it attains the minimum in (6). Then, with (5b) we get for the expectation
and the variance
From Chebyshev’s inequality we obtain
Finally, since , we get from (46) and (47)
which proves the first part of the lemma. Clearly, the second part, for the corresponding type, follows accordingly using the same arguments.
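As an aside, the symmetrizability condition at the heart of this proof can be tested numerically for small alphabets. The following sketch is a minimal illustration and not part of the original argument: it treats the simpler single-user AVC of [4], for which a channel U(s|x) symmetrizes W(y|x,s) if the averaged channels obtained by randomizing the state according to U are invariant under swapping the two inputs; the AVBBC condition (4) in Definition 8 is analogous but involves both receivers. Finding such a U is a linear feasibility problem.

```python
# Hedged sketch (illustration only): numerical symmetrizability test for a
# single-user AVC W(y|x,s), following Csiszar and Narayan [4].  A channel
# U(s|x) symmetrizes W if
#     sum_s W(y|x,s) U(s|x') = sum_s W(y|x',s) U(s|x)   for all x, x', y,
# which is a linear feasibility problem in the entries of U.

import numpy as np
from scipy.optimize import linprog


def is_symmetrizable(W):
    """W has shape (|Y|, |X|, |S|) with W[y, x, s] = W(y|x,s)."""
    Y, X, S = W.shape
    nvar = X * S  # decision variables U(s|x), flattened as index x * S + s

    A_eq, b_eq = [], []

    # Each conditional distribution U(.|x) must sum to one.
    for x in range(X):
        row = np.zeros(nvar)
        row[x * S:(x + 1) * S] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)

    # Symmetrizability equations for all pairs x < x' and all outputs y.
    for x in range(X):
        for xp in range(x + 1, X):
            for y in range(Y):
                row = np.zeros(nvar)
                row[xp * S:(xp + 1) * S] += W[y, x, :]   # + sum_s W(y|x,s) U(s|x')
                row[x * S:(x + 1) * S] -= W[y, xp, :]    # - sum_s W(y|x',s) U(s|x)
                A_eq.append(row)
                b_eq.append(0.0)

    res = linprog(c=np.zeros(nvar), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, 1.0)] * nvar, method="highs")
    return res.status == 0  # feasible <=> a symmetrizing U exists


if __name__ == "__main__":
    # Toy example: binary channel y = x XOR s.  The state can emulate a valid
    # input, and the test confirms that this channel is symmetrizable.
    W = np.zeros((2, 2, 2))
    for x in range(2):
        for s in range(2):
            W[x ^ s, x, s] = 1.0
    print("symmetrizable:", is_symmetrizable(W))
```

For the toy XOR channel the test returns True, matching the intuition that the state sequence can emulate another valid codeword, so that no deterministic code can achieve a positive rate.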
A.2. Proof of Lemma 2
In the following we show that if we randomly select codewords with and , then these codewords will possess, with probability close to 1, the properties (18a)–(18f) stated in Lemma 2. Thereby, we follow [4] (Lemma 3), where a similar result is proved for the single-user case. Further, an analogous version of the lemma for the arbitrarily varying MAC can be found in [10]. But first, we restate a lemma which will be essential for proving the desired properties of the codewords.
Lemma 7.
Let be arbitrary random variables, and let be arbitrary with , . Then the condition
, implies that
Proof. The proof can be found in [4] (Lemma A1) or [10]. ☐
Now, we turn to the proof of Lemma 2. As in [4] (Lemma 3) let , , be independent random variables, each uniformly distributed on . Further, we fix an , , and a joint type with and .
First, we show that for each the properties (18a)–(18c) are satisfied. To this end, we fix an arbitrary for the following analysis. We define
and apply Lemma 7. Now, the condition (48) of Lemma 7 is fulfilled with
where the inequality follows from Fact 2, cf. Section 2, and the last equality holds because . For we choose
so that if , where
Then (49) yields
The same reasoning holds if we replace by in (50). Consequently, we similarly obtain
Moreover, if (and remember that since and as assumed) we obtain by replacing ϵ with for from (53) that
Equations (52) and (54) allow us to establish the first two properties of the codewords, i.e., (18a) and (18b). To obtain the third property (18c) we define the set as the set of indices such that . If , we set . Let
If we replace ϵ with , it follows from (53) that for
Then, we get from (55)
which follows from the independence of the and Fact 2. Next, we assume that so that (48) of Lemma 7 is satisfied with
With and for , cf. also (51), Lemma 7 yields
where the last inequality follows from the assumption . Combining this with (56), we get
If we replace “for some " by “for some " in (56), we obtain the same by symmetry. Consequently, we end up with
if and .
Now we are in a position to complete the first part of the proof. The number of all possible sequences , states , and joint types grows exponentially with n. Since the bounds (52), (54) and (57) are doubly exponential probability bounds, the inequalities
hold simultaneously with probability arbitrarily close to 1 if n is sufficiently large and . This establishes the properties (18a)–(18c).
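Spelled out, the union-bound reasoning behind this step is the following; the precise exponents are those of the elided bounds (52), (54) and (57), so the doubly exponential form shown here is only indicative:

\[
\Pr\Big\{\bigcup_{k=1}^{K_n} A_k\Big\} \;\le\; \sum_{k=1}^{K_n} \Pr\{A_k\} \;\le\; 2^{cn} \cdot 2^{-2^{n\epsilon'}} \;\longrightarrow\; 0 \qquad (n \to \infty),
\]

where \(A_k\) denotes the event that the \(k\)-th of the at most \(K_n \le 2^{cn}\) required inequalities is violated. Since the exponential growth of the number of constraints is dominated by the doubly exponential decay of each violation probability, all inequalities hold simultaneously with probability arbitrarily close to 1.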
It remains to show that for each fixed the properties (18d)–(18f) simultaneously hold for n large enough. This can be done analogously to the first three properties and is therefore omitted for brevity.
A.3. Proof of Lemma 3
The lemma is proved by contradiction as done in [4] (Lemma 4) for the single-user AVC. For receiving node i, , suppose that the quintuple satisfies the conditions given in (19). Since and , we have
where and the last inequality follows from the log-sum inequality.
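For reference, the log-sum inequality invoked here states that for non-negative numbers \(a_1,\dots,a_m\) and \(b_1,\dots,b_m\),

\[
\sum_{i=1}^{m} a_i \log\frac{a_i}{b_i} \;\ge\; \Big(\sum_{i=1}^{m} a_i\Big) \log\frac{\sum_{i=1}^{m} a_i}{\sum_{i=1}^{m} b_i},
\]

with equality if and only if \(a_i/b_i\) is constant in \(i\).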
From [3] we know that the variational distance between two probability distributions can be bounded from above by an absolute constant times the square root of their divergence. (This bound with a worse constant was first given by Pinsker [70] and is therefore also known as Pinsker’s inequality; a numerical illustration is given at the end of this proof.) With this and (58) we get
with . Similarly, since and , cf. (19), we obtain
with and . Next, (59) and (60) together imply
Since , it immediately follows that
Lemma 8.
For any AVBBC with state constraint Λ and any input with , , , for which each pair and satisfies
there exists some such that
Proof. The proof can be found in Appendix A.4. ☐
If we choose and , we obtain from (63)
Finally, (61) and (64) yield
which contradicts the assumption that can be chosen arbitrarily small, proving the lemma.
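Regarding the use of Pinsker’s inequality above, the following minimal sketch checks the bound numerically for randomly drawn distributions. It assumes the base-2 convention of this paper, in which the inequality reads \(\|P-Q\|_1 \le \sqrt{2\ln 2 \cdot D(P\|Q)}\); the proof only needs some absolute constant, and the constant \(\sqrt{2\ln 2}\) is the one obtained from the sharpened form in [3].

```python
# Hedged sketch: numerical check of Pinsker's inequality in the base-2
# convention used in this paper,  ||P - Q||_1 <= sqrt(2 ln2 * D(P||Q)),
# for randomly drawn distributions on a small alphabet.

import numpy as np

rng = np.random.default_rng(0)

def kl_bits(p, q):
    """Kullback-Leibler divergence D(p||q) in bits (assumes p, q > 0)."""
    return float(np.sum(p * np.log2(p / q)))

def variational_distance(p, q):
    """L1 distance between the distributions p and q."""
    return float(np.sum(np.abs(p - q)))

for _ in range(5):
    p = rng.dirichlet(np.ones(4))
    q = rng.dirichlet(np.ones(4))
    lhs = variational_distance(p, q)
    rhs = np.sqrt(2.0 * np.log(2.0) * kl_bits(p, q))
    print(f"||P-Q||_1 = {lhs:.4f} <= {rhs:.4f} = sqrt(2 ln2 * D(P||Q))")
```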
A.4. Proof of Lemma 8
As in [4] (Lemma A2) we can interchange the two sums and then x and without changing the maximum in (63). Thus we can write for all
as
so that we get
with . Further, since and satisfy (8) for some , then also U satisfy
Since (65) can be considered as a continuous function of the pair on the compact set of all channels , it attains its minimum for some , where the minimization is taken over all channels U that satisfy (66). Additionally, since satisfies (66), it cannot satisfy (4), which in turn implies that , completing the proof.
© 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).