Article

MIMO Gaussian State-Dependent Channels with a State-Cognitive Helper

1 Department of Electrical Engineering, Technion-Israel Institute of Technology, Haifa 32000, Israel
2 Samsung Semiconductor Inc., San Jose, CA 95134, USA
3 Department of ECE, The Ohio State University, Columbus, OH 43210, USA
* Authors to whom correspondence should be addressed.
Entropy 2019, 21(3), 273; https://doi.org/10.3390/e21030273
Received: 2 January 2019 / Revised: 3 March 2019 / Accepted: 5 March 2019 / Published: 12 March 2019
(This article belongs to the Special Issue Multiuser Information Theory II)

Abstract

We consider the problem of channel coding over multiterminal state-dependent channels in which neither the transmitters nor the receivers, but only a helper node, has non-causal knowledge of the state. Such channel models arise in many emerging communication schemes. We start by investigating the parallel state-dependent channel with the same but differently scaled state corrupting the receivers. A cognitive helper knows the state in a non-causal manner and wishes to mitigate the interference that impacts the transmission between two transmit–receive pairs. Outer and inner bounds are derived. In our analysis, the channel parameters are partitioned into various cases, and segments on the capacity region boundary are characterized for each case. Furthermore, we show that for a particular set of channel parameters, the capacity region is entirely characterized. In the second part of this work, we address a similar scenario, but now each channel is corrupted by an independent state. We derive an inner bound using a coding scheme that integrates single-bin Gel’fand–Pinsker coding and Marton’s coding for the broadcast channel. We also derive an outer bound and further partition the channel parameters into several cases for which parts of the capacity region boundary are characterized.
Keywords: dirty paper coding; Gel’fand–Pinsker scheme; non-causal channel state information; network information theory

1. Introduction

Cellular communication systems are designed to allow multiple users to share the same communication medium. Traditionally, mobile networks have enabled this feature by dividing the physical resources (such as time, frequency, code, and space) in an orthogonal manner between users. An illustration of these typical methods, called Orthogonal Multiuser Access (OMA), is shown in Figure 1.
The future of cellular communications faces exponential growth in bandwidth demand. Furthermore, the increasing popularity of Internet of Things (IoT) applications and the emergence of Vehicle-to-Vehicle (V2V) connectivity will further increase the number of network consumers. Hence, fifth-generation (5G) wireless networks are required to support extensive connectivity, low latency, and higher data rates. Such requirements cannot be satisfied using the traditional OMA methods; thus, to sustain more users and higher transmission rates, non-orthogonal multiuser access (NOMA) has been intensively investigated, where interference mitigation is the key issue for non-orthogonal transmission. A comprehensive survey on NOMA from an information-theoretic perspective is given in [1].
In this work, we study a particular communication model that can be used in future NOMA techniques. Specifically, we investigate a type of state-dependent channel with a helper, illustrated in Figure 2, in which two transmitters wish to send messages to their corresponding receivers over a parallel state-dependent channel. The state is not known to either transmitter or receiver but is known non-causally (i.e., the entire state sequence is given to the helper's encoder before the block transmission) to a state-cognitive helper, who tries to assist each receiver in mitigating the interference caused by the state. This model captures interference cancelation in various practical scenarios. For example, users in multi-cell systems may be interfered with by a base station located in other cells. Such a base station, being the source that causes the interference, clearly knows the interference (modeled by the state) and can serve as a helper to mitigate it. Alternatively, that base station can also convey the interference information to other base stations via the backhaul network, so that those base stations can serve as helpers to reduce the interference. As another example, consider a situation where two Device-to-Device (D2D) links are located in two distinct cells, and a downlink signal is sent from each base station to some conventional mobile user in its cell. In addition, some central unit (the helper in our model) knows, in a non-causal manner, the signal to be sent by each base station and tries to assist the D2D communication links by mitigating the interference (see Figure 3). As a comparison, this type of state-dependent model differs from the original state-dependent channels studied in, e.g., [2,3], in that the state-cognitive helper is not informed of the transmitters’ messages, and hence its state cancelation strategies are necessarily independent of message encoding at the transmitters.
The study of channel coding in the presence of channel side information (CSI) was initiated by Shannon [4], who considered a discrete memoryless channel (DMC) with random parameters and side information provided causally to the transmitter. The single-letter expression for the capacity of the point-to-point DMC with non-causal CSI at the encoder (the G-P channel) was derived in the seminal work of Gel’fand and Pinsker [2]. One of the most interesting special cases of the G-P channel is the Gaussian additive noise and interference setting, in which the additive interference plays the role of the state sequence, which is known non-causally to the transmitter. Costa showed in [3] that the capacity of this channel is equal to the capacity of the same channel without additive interference. The capacity-achieving scheme of [3] (which is that of [2] applied to the Gaussian case) is termed "writing on dirty paper" (WDP), and consequently, the property of the channel whereby the known interference can be completely removed is dubbed "the WDP property". Cohen and Lapidoth [5] showed that any interference sequence can be removed entirely when the channel noise is ergodic and Gaussian.
The models we study in this work all have a broadcasting node. The discrete memoryless broadcast channel (DM-BC) was introduced by Cover [6]. The capacity region of the DM-BC is still an open problem. The largest known inner bound on the capacity region of the DM-BC with private messages was derived by Marton [7]. Liang [8] derived an inner bound on the capacity region of the DM-BC with an additional common message. The best outer bound for DM-BC with a common message is due to Nair and El Gamal [9]. There are, however, some special cases where the capacity region is fully characterized. For example, the capacity region of the degraded DM-BC was established by Gallager [10]. The capacity region of the Gaussian BC was derived by Bergmans [11]. An interesting result is the capacity region of the Gaussian MIMO BC which was established by Weingarten et al. [12]. The authors introduced a new notion of an enhanced channel and used it jointly with the Entropy Power Inequality (EPI) to show their result. The capacity achieving scheme relies on the dirty paper coding technique. Liu and Viswanath [13] developed an extremal inequality proof technique and showed that it can be used to establish a converse result in various Gaussian MIMO multiterminal networks, including the Gaussian MIMO BC with private messages. Recently, Geng and Nair [14] developed a different technique to characterize the capacity region of Gaussian MIMO BC with common and private messages.
Degraded DM-BC with causal and non-causal side information was introduced by Steinberg [15]. Inner and outer bounds on the capacity region were derived. For the particular case in which the nondegraded user is informed about the channel parameters, it was shown that the bounds are tight, thus obtaining the capacity region for that case. The general DM-BC with non-causal CSI at the encoder was studied by Steinberg and Shamai [16]. An inner bound was derived, and it was shown to be tight for the Gaussian BC with private messages and independent additive interference at both channels. The latter setting was recently extended to the case of common and private messages in the Gaussian framework with K users in [17]. The special case where the transmitter sends only a common message to all receivers over an additive BC has been initially studied in [18] and has been recently extended to the compound setting in [19]. Outer bounds for DM-BC with CSI at the encoder were derived in [20].
The models addressed in this paper have a mismatch property; that is, the state sequence is known only to some nodes, which differs from the classical studies of state-dependent channels. Channels with this mismatch property have been addressed in the past for various models; for example, in [21,22,23,24,25], the state-dependent multiple access channel (MAC) is studied with the state known at only one transmitter. The best outer bound for the Gaussian MAC setting was recently reported in [26]. The point-to-point helper channel studied in [27,28] can be considered a special case of [25], where the cognitive transmitter does not send any message. Further, in [28], the state-dependent MAC with an additional helper was studied, and the partial/full capacity region was characterized under various channel parameters. Moreover, some state-dependent relay channel models can also be viewed as extensions of the state-dependent channel with a helper, where the relay serves the role of the helper by knowing the state information. In [29], the state-dependent relay channel with the state non-causally available at the relay is considered. An achievable rate was derived using a combination of decode-and-forward, Gel’fand–Pinsker (GP) binning and codeword splitting. Also, in [30], additional noiseless cooperation links with finite capacity were assumed between the transmitter and the relay, and various coding techniques were explored. The authors of [31] have recently considered a different scenario with a state-cognitive relay. The state-dependent Z-IC with a common state known in a non-causal manner only to the primary user was studied in [32]. A good tutorial on channel coding in the presence of CSI can be found in [33].
The basic state-dependent Gaussian channel with a helper is illustrated in Figure 4. It was first introduced in [27], where the capacity in the infinite power regime was characterized and was shown to be achievable by lattice coding. The capacity under arbitrary state power was established for some special cases in [28]. Based on a single-bin GP binning scheme, the following lower bound was derived for the discrete memoryless case:
$$R \le \max_{P_{UX_0|S}P_X} \min\big\{ I(X;Y|U),\; I(U,X;Y) - I(U;S) \big\}.$$
This lower bound was further evaluated for the Gaussian channel by an appropriate choice of the maximizing input distribution. The surprising result of that study was that when the helper power is above some threshold, the interference caused by the state is entirely canceled, and the capacity of the channel without the state can be achieved. This threshold does not depend on the state power, and hence it was shown that this channel also has the WDP property; that is, the capacity of the channel is the same as the capacity of the similar channel without the interference (which is modeled as the state).
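For the scalar version of this helper channel, the lower bound above can be explored numerically rather than in closed form: fix the jointly Gaussian choice $W = X_0' + \alpha S$, $X_0 = X_0' + \beta S$ (the parameterization used later in the paper), compute the mutual-information terms from log-determinants of the induced covariance, and grid-search over $(\alpha, \beta)$. The sketch below is illustrative only; the function names, the grid, and the parameter values are our own, and rates are in nats.

```python
import itertools
import math
import numpy as np

def gauss_mi(D, Ma, Mb):
    """I(a;b) in nats for jointly Gaussian a = Ma v, b = Mb v, v ~ N(0, D)."""
    Sa = Ma @ D @ Ma.T
    Sb = Mb @ D @ Mb.T
    Mab = np.vstack([Ma, Mb])
    Sab = Mab @ D @ Mab.T
    return 0.5 * (np.log(np.linalg.det(Sa)) + np.log(np.linalg.det(Sb))
                  - np.log(np.linalg.det(Sab)))

def helper_rate(P0, P, Q, alphas, betas):
    """Best min{I(X;Y|U), I(U,X;Y) - I(U;S)} over the (alpha, beta) grid
    for the scalar helper channel Y = X0 + X + S + Z."""
    best = 0.0                        # rate 0 is always achievable
    for a, b in itertools.product(alphas, betas):
        P0p = P0 - b**2 * Q           # helper power left after direct subtraction
        if P0p <= 0:
            continue
        # base vector v = (X0', S, X, Z), independent components
        D = np.diag([P0p, Q, P, 1.0])
        U = np.array([[1.0, a, 0.0, 0.0]])          # U = W = X0' + alpha*S
        X = np.array([[0.0, 0.0, 1.0, 0.0]])
        S = np.array([[0.0, 1.0, 0.0, 0.0]])
        Y = np.array([[1.0, 1.0 + b, 1.0, 1.0]])    # Y = X0' + (1+beta)S + X + Z
        # I(X;Y|U) = I(X; (Y,U)) because X is independent of U
        r1 = gauss_mi(D, X, np.vstack([Y, U]))
        r2 = gauss_mi(D, np.vstack([U, X]), Y) - gauss_mi(D, U, S)
        best = max(best, min(r1, r2))
    return best
```

With a large helper power (e.g., $P_0 = 50$, $P = Q = 5$), the search saturates at the no-state capacity $\frac{1}{2}\log(1+P)$, matching the WDP behavior described above, while a small helper power (e.g., $P_0 = 0.5$) falls visibly short of it.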
The most relevant work to this study is [34], in which the state-dependent parallel channel with a helper was studied in the regime of infinite state power, with the two receivers corrupted by two independent states. A time-sharing scheme was proved to be capacity-achieving under certain channel parameters. In contrast, in this study we extend those results to the arbitrary state power regime. We also consider two extreme cases: first, we address the problem where the two receivers of the parallel channel are corrupted by the same but differently scaled states, and in the second part, those states are independent. For both cases, we show that the time-sharing scheme is no longer optimal. Our main contribution in this work is the derivation of an inner bound, which extends the Marton coding scheme for the discrete broadcast channel to the current model. We apply this bound to the MIMO Gaussian setting and characterize segments of the capacity region boundary for various channel parameters. The material in this paper was presented in part in [35,36].

2. Preliminaries

2.1. Notation Conventions

Throughout the paper, random variables are denoted using a sans-serif font, e.g., $\mathsf{X}$, their realizations are denoted by the respective lower-case letters, e.g., $x$, and their alphabets are denoted by the respective calligraphic letters, e.g., $\mathcal{X}$. Let $\mathcal{X}^n$ stand for the set of all $n$-tuples of elements from $\mathcal{X}$. An element of $\mathcal{X}^n$ is denoted by $x^n = (x_1, x_2, \dots, x_n)$, and substrings are denoted by $x_i^j = (x_i, x_{i+1}, \dots, x_j)$. The cardinality of a finite set, say $\mathcal{X}$, is denoted by $|\mathcal{X}|$. The probability distribution function of $\mathsf{X}$, the joint distribution function of $\mathsf{X}$ and $\mathsf{Y}$, and the conditional distribution of $\mathsf{X}$ given $\mathsf{Y}$ are denoted by $P_X$, $P_{X,Y}$ and $P_{X|Y}$, respectively. The expectation of $\mathsf{X}$ is denoted by $E[X]$. The probability of an event $E$ is denoted by $P\{E\}$. The set of jointly $\epsilon$-typical $n$-tuples $(x^n, y^n)$ is denoted by $T_\epsilon^{(n)}(P_{XY})$ [37]. The set of consecutive integers starting at 1 and ending at $2^{nR}$ is denoted by $I_R^{(n)} \triangleq \{1, 2, \dots, 2^{nR}\}$. We assume throughout this paper that $2^{nR}$ is an integer, for any $R$ and $n$.
We denote the covariance matrix of a zero-mean vector $X$ by $\Sigma_X \triangleq E[XX^T]$, the cross-correlation matrix of $X$ and $Y$ by $\Sigma_{XY} \triangleq E[XY^T]$, and the conditional covariance matrix of $X$ given $Y$ by $M_{X|Y} \triangleq \Sigma_X - \Sigma_{XY}\Sigma_Y^{-1}\Sigma_{YX}$.
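The conditional covariance $M_{X|Y}$ is the Schur complement of $\Sigma_Y$ in the joint covariance matrix of $(X, Y)$; equivalently, it is the inverse of the $X$-block of the joint precision matrix $\Sigma^{-1}$. A small numpy sketch (with an arbitrary example covariance of our own choosing) confirms the identity:

```python
import numpy as np

# Joint covariance of (X, Y): an arbitrary positive-definite example.
Sxx = np.array([[2.0, 0.3], [0.3, 1.5]])
Sxy = np.array([[0.4, 0.1], [0.2, 0.5]])
Syy = np.array([[1.0, 0.2], [0.2, 2.0]])
Sigma = np.block([[Sxx, Sxy], [Sxy.T, Syy]])

# M_{X|Y} = Sigma_X - Sigma_XY Sigma_Y^{-1} Sigma_YX (Schur complement of Syy)
M = Sxx - Sxy @ np.linalg.inv(Syy) @ Sxy.T

# It equals the inverse of the X-block of the joint precision matrix.
P = np.linalg.inv(Sigma)
assert np.allclose(M, np.linalg.inv(P[:2, :2]))
```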

2.2. Definitions

Definition 1.
Random variables $X, Y, Z$ are said to form a Markov chain in that order (denoted by $X \to Y \to Z$) if the conditional distribution of $Z$ depends only on $Y$ and is conditionally independent of $X$. Specifically, $X$, $Y$ and $Z$ form a Markov chain $X \to Y \to Z$ if the joint probability mass function can be written as
$$P_{XYZ} = P_X P_{Y|X} P_{Z|Y}.$$

2.3. Auxiliary Results

This section introduces some auxiliary results that are relevant to the analysis in this work [37].
Lemma 1
(Data-processing inequality). If $X \to Y \to Z$, then
$$I(X;Y) \ge I(X;Z).$$
The following inequality will be frequently used in the proofs of outer bounds on the capacity regions.
Lemma 2
(Fano’s Inequality). Let $(X, Y) \sim P_{XY}$ and $P_e = \Pr(X \ne Y)$. Then
$$H(X|Y) \le H(P_e) + P_e \log|\mathcal{X}| \le 1 + P_e \log|\mathcal{X}|.$$
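Fano's inequality can be sanity-checked numerically on a small example. Here entropies are computed in bits (log base 2), so that $H(P_e) \le 1$ and the relaxed bound $1 + P_e\log|\mathcal{X}|$ applies; the joint pmf below is an arbitrary example of ours, with $Y$ on the same alphabet as $X$ acting as the estimate:

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a pmf given as a flat array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Arbitrary joint pmf P_XY on a common 3-symbol alphabet (rows: x, cols: y);
# Y doubles as the estimate of X, so P_e = Pr(X != Y) = 1 - sum of the diagonal.
Pxy = np.array([[0.30, 0.05, 0.05],
                [0.05, 0.25, 0.05],
                [0.02, 0.03, 0.20]])
assert abs(Pxy.sum() - 1.0) < 1e-12

H_XgY = entropy_bits(Pxy.ravel()) - entropy_bits(Pxy.sum(axis=0))  # H(X|Y)
Pe = 1.0 - np.trace(Pxy)

fano = entropy_bits(np.array([Pe, 1.0 - Pe])) + Pe * np.log2(3)  # H(Pe) + Pe log|X|
relaxed = 1.0 + Pe * np.log2(3)

assert H_XgY <= fano <= relaxed
```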
The covering lemma and the packing lemma will be used in the achievability proofs throughout this paper.
Lemma 3
(Covering Lemma). Let $(U, X, \hat{X}) \sim P_{UX\hat{X}}$ and $\epsilon' < \epsilon$. Let $(U^n, X^n) \sim P_{U^nX^n}$ be a pair of random sequences with
$$\lim_{n\to\infty} P\{(U^n, X^n) \in T_{\epsilon'}^{(n)}(P_{UX})\} = 1,$$
and let $\hat{X}^n(m)$, $m \in \mathcal{A}$, where $|\mathcal{A}| \ge 2^{nR}$, be random sequences, conditionally independent of each other and of $X^n$ given $U^n$, each distributed according to $\prod_{i=1}^n P_{\hat{X}|U}(\hat{x}_i|u_i)$. Then, there exists $\delta(\epsilon)$ that approaches zero as $\epsilon \to 0$ such that
$$\lim_{n\to\infty} P\{(U^n, X^n, \hat{X}^n(m)) \notin T_\epsilon^{(n)} \text{ for all } m \in \mathcal{A}\} = 0,$$
if $R > I(X;\hat{X}|U) + \delta(\epsilon)$.
Lemma 4
(Packing Lemma). Let $(U, X, Y) \sim P_{UXY}$. Let $(\tilde{U}^n, \tilde{Y}^n) \sim P_{\tilde{U}^n\tilde{Y}^n}$ be a pair of arbitrarily distributed random sequences, not necessarily distributed according to $\prod_{i=1}^n P_{UY}(\tilde{u}_i, \tilde{y}_i)$. Let $X^n(m)$, $m \in \mathcal{A}$, where $|\mathcal{A}| \le 2^{nR}$, be random sequences, each distributed according to $\prod_{i=1}^n P_{X|U}(x_i|\tilde{u}_i)$. Further assume that $X^n(m)$, $m \in \mathcal{A}$, is pairwise conditionally independent of $\tilde{Y}^n$ given $\tilde{U}^n$, but is arbitrarily dependent on the other $X^n(m)$ sequences. Then, there exists $\delta(\epsilon)$ that approaches zero as $\epsilon \to 0$ such that
$$\lim_{n\to\infty} P\{(\tilde{U}^n, X^n(m), \tilde{Y}^n) \in T_\epsilon^{(n)} \text{ for some } m \in \mathcal{A}\} = 0,$$
if $R < I(X;Y|U) - \delta(\epsilon)$.

3. The MIMO Gaussian Channel with Same but Differently Scaled States

3.1. Channel Model

In this section, we study the state-dependent parallel network with a state-cognitive helper, in which two transmitters communicate with two corresponding receivers over a state-dependent parallel channel. The two receivers are corrupted by the same but differently scaled state, respectively. The state information is not known to either the transmitters or the receivers, but is known to a helper non-causally. Hence, the helper assists these receivers in canceling the state interference (see Figure 5).
More specifically, the encoder at transmitter $l$, $f_l: I_{R_l}^{(n)} \to \mathcal{X}_l^n$, maps a message $m_l \in I_{R_l}^{(n)}$ to a codeword $x_l^n$, for $l = 1, 2$. The inputs $x_1^n$ and $x_2^n$ are sent respectively over the two subchannels of the parallel channel. The two receivers are corrupted by the same but differently scaled independent and identically distributed (i.i.d.) state sequence $s^n \in \mathcal{S}^n$, which is known to a common helper non-causally. Hence, the encoder at the helper, $f_0: \mathcal{S}^n \to \mathcal{X}_0^n$, maps the state sequence $s^n \in \mathcal{S}^n$ into a codeword $x_0^n \in \mathcal{X}_0^n$. The channel transition probability is given by $P_{Y_1|X_0X_1S} \cdot P_{Y_2|X_0X_2S}$. The decoder at receiver $l$, $g_l: \mathcal{Y}_l^n \to I_{R_l}^{(n)}$, maps a received sequence $y_l^n$ into a message $\hat{m}_l \in I_{R_l}^{(n)}$, for $l = 1, 2$. We assume that the messages are uniformly distributed over the sets $I_{R_1}^{(n)}$ and $I_{R_2}^{(n)}$. We define the average probability of error for a length-$n$ code as
$$P_e^{(n)} = \frac{1}{2^{n(R_1+R_2)}} \sum_{m_1=1}^{2^{nR_1}} \sum_{m_2=1}^{2^{nR_2}} P\{(\hat{m}_1, \hat{m}_2) \ne (m_1, m_2)\}.$$
Definition 2.
A rate pair $(R_1, R_2)$ is said to be achievable if there exists a sequence of message sets $I_{R_1}^{(n)}$ and $I_{R_2}^{(n)}$, and encoder–decoder tuples $(f_0^{(n)}, f_1^{(n)}, f_2^{(n)}, g_1^{(n)}, g_2^{(n)})$, such that the average probability of error $P_e^{(n)} \to 0$ as $n \to \infty$.
Definition 3.
We define the capacity region of the channel as the closure of the set of all achievable rate pairs ( R 1 , R 2 ) .
In this section, we focus on the MIMO Gaussian channel, with the outputs at the two receivers for one channel use given by
$$Y_l = G_l X_0 + X_l + G_{sl} S + Z_l, \qquad l \in \{1, 2\},$$
where $X_0, X_1, X_2, S, Z_1$ and $Z_2$ are all real vectors of size $t \times 1$, and
  • $X_0, X_1, X_2$ are the input vectors, subject to the covariance matrix constraints $\frac{1}{n}\sum_{i=1}^n x_{li}x_{li}^T \preceq K_l$, $l \in \{0, 1, 2\}$,
  • $Y_l$ is the output vector, $l \in \{1, 2\}$,
  • $S$ is a real Gaussian random vector with zero mean and covariance matrix $K_S = E[SS^T] \succ 0$,
  • $Z_l$ is a real Gaussian random vector with zero mean and an identity covariance matrix $K_{Z_l} = I$, for $l \in \{1, 2\}$.
Both the noise variables and the state variable are i.i.d. over channel uses. $G_{s1}$ ($G_{s2}$) is a $t \times t$ real matrix that represents the channel connecting the state source to the first (second) user. Similarly, $G_1$ ($G_2$) is a $t \times t$ real channel matrix connecting the helper to the first (second) user. Thus, our model captures a general scenario, where the helper's power and the state power can be arbitrary.
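As a toy illustration of the model, the following numpy sketch (with arbitrary illustrative matrices of our own) simulates one channel use at receiver 1 and lets the helper spend its signal purely on direct state subtraction, $X_0 = -G_1^{-1}G_{s1}S$, so that the state term vanishes at receiver 1. The power constraint and receiver 2 are deliberately ignored here; this is only the direct state-subtraction component of the schemes studied below.

```python
import numpy as np

rng = np.random.default_rng(0)
t = 2  # vector dimension

# Channel matrices and state covariance (illustrative values).
G1 = np.array([[1.0, 0.2], [0.0, 1.0]])
Gs1 = np.array([[0.5, 0.0], [0.1, 0.7]])
Ks = np.eye(t)

# One channel use: draw state, noise, and an arbitrary input X1.
S = rng.multivariate_normal(np.zeros(t), Ks)
Z1 = rng.standard_normal(t)
X1 = rng.standard_normal(t)

# Helper performs pure state subtraction (power constraint ignored here).
X0 = -np.linalg.solve(G1, Gs1 @ S)

# Y1 = G1 X0 + X1 + Gs1 S + Z1 then reduces to X1 + Z1.
Y1 = G1 @ X0 + X1 + Gs1 @ S + Z1
assert np.allclose(Y1, X1 + Z1)
```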
Our goal is to characterize the capacity region of the Gaussian channel under various channel parameters ( G 1 , G 2 , G s 1 , G s 2 , K 0 , K 1 , K 2 , K S ) .

3.2. Inner and Outer Bounds

In this section, we first derive inner and outer bounds on the capacity region for the state-dependent parallel channel with a helper. Then by comparing the inner and outer bounds, we characterize the segments on the capacity region boundary under various channel parameters.
We start by deriving an inner bound on the capacity region for the DMC based on the single-bin GP scheme.
Proposition 1.
For the discrete memoryless state-dependent parallel channel with a helper under the same but differently scaled states at the two receivers, an inner bound on the capacity region consists of rate pairs ( R 1 , R 2 ) satisfying:
$$R_1 \le \min\big\{ I(W, X_1; Y_1) - I(W; S),\; I(X_1; Y_1 | W) \big\},$$
$$R_2 \le \min\big\{ I(W, X_2; Y_2) - I(W; S),\; I(X_2; Y_2 | W) \big\},$$
for some distribution $P_{W|S}P_{X_0|WS}P_{X_1}P_{X_2}$.
Proof. 
The proof is relegated to Appendix A. □
We evaluate the inner bound for the Gaussian channel by choosing the joint Gaussian distribution for random variables as follows:
$$W = X_0' + AS, \qquad X_0 = X_0' + BS, \qquad X_0' \sim \mathcal{N}(0, K_0'), \quad X_1 \sim \mathcal{N}(0, K_1), \quad X_2 \sim \mathcal{N}(0, K_2),$$
where $X_0', X_1, X_2, S$ are independent and $K_0' \preceq K_0$.
Let f 1 ( · ) , g 1 ( · ) , f 2 ( · ) and g 2 ( · ) be defined as
$$f_1(A, B, K_0') = I(W, X_1; Y_1) - I(W; S), \qquad g_1(A, B, K_0') = I(X_1; Y_1 | W),$$
$$f_2(A, B, K_0') = I(W, X_2; Y_2) - I(W; S), \qquad g_2(A, B, K_0') = I(X_2; Y_2 | W),$$
where the mutual information terms are evaluated using the joint Gaussian distribution chosen in (8). Based on those definitions, we obtain an achievable region for the Gaussian channel.
Proposition 2.
An inner bound on the capacity region of the parallel state-dependent MIMO Gaussian channel with the same but differently scaled states and a state-cognitive helper consists of rate pairs $(R_1, R_2)$ satisfying:
$$R_1 \le \min\{f_1(A, B, K_0'),\, g_1(A, B, K_0')\},$$
$$R_2 \le \min\{f_2(A, B, K_0'),\, g_2(A, B, K_0')\},$$
for some real matrices $A$, $B$ and $K_0'$ satisfying $K_0' \succeq 0$ and $K_0' + BK_SB^T \preceq K_0$.
We note that the above choice of the helper's signal incorporates two parts, with $X_0'$ designed using single-bin dirty paper coding and $BS$ acting as direct state subtraction.
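The mutual-information terms defining $f_l$ and $g_l$ need not be expanded in closed form in order to experiment with them: for the scalar case ($t = 1$), they can be computed directly from log-determinants of the joint Gaussian covariance induced by (8). The following sketch is our own illustrative code (rates in nats); the parameter values follow the numerical example of Section 3.4, where $G_1 = G_{s1} = 1$, $G_2 = b$ and $G_{s2} = a$.

```python
import math
import numpy as np

def gauss_mi(D, Ma, Mb):
    """I(a;b) in nats for jointly Gaussian a = Ma v, b = Mb v, v ~ N(0, D)."""
    Sa = Ma @ D @ Ma.T
    Sb = Mb @ D @ Mb.T
    Mab = np.vstack([Ma, Mb])
    Sab = Mab @ D @ Mab.T
    return 0.5 * (np.log(np.linalg.det(Sa)) + np.log(np.linalg.det(Sb))
                  - np.log(np.linalg.det(Sab)))

def inner_bound(alpha, beta, P0, P1, P2, Q, a, b):
    """(R1, R2) of Proposition 2 for a fixed (alpha, beta), scalar channels
    Y1 = X0 + X1 + S + Z1 and Y2 = b X0 + X2 + a S + Z2."""
    P0p = P0 - beta**2 * Q            # power left for the DPC part X0'
    assert P0p > 0
    # base vector v = (X0', S, X1, X2, Z1, Z2), independent components
    D = np.diag([P0p, Q, P1, P2, 1.0, 1.0])
    W  = np.array([[1.0, alpha, 0, 0, 0, 0]])
    S  = np.array([[0.0, 1.0, 0, 0, 0, 0]])
    X1 = np.array([[0.0, 0, 1.0, 0, 0, 0]])
    X2 = np.array([[0.0, 0, 0, 1.0, 0, 0]])
    Y1 = np.array([[1.0, beta + 1.0, 1.0, 0, 1.0, 0]])
    Y2 = np.array([[b, b * beta + a, 0, 1.0, 0, 1.0]])
    f1 = gauss_mi(D, np.vstack([W, X1]), Y1) - gauss_mi(D, W, S)
    g1 = gauss_mi(D, X1, np.vstack([Y1, W]))   # I(X1;Y1|W), since X1 is indep. of W
    f2 = gauss_mi(D, np.vstack([W, X2]), Y2) - gauss_mi(D, W, S)
    g2 = gauss_mi(D, X2, np.vstack([Y2, W]))
    return min(f1, g1), min(f2, g2)
```

For instance, with $a = b = 0.8$, $\beta = 0$ and the alignment $\alpha = \beta + 1$, user 1 attains the no-state capacity $\frac{1}{2}\log(1 + P_1)$.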
We next present an outer bound which applies the point-to-point channel capacity and the upper bound derived for the point-to-point channel with a helper in [27].
Denote
$$R_l^{\mathrm{ub1}}(\Sigma_{X_0S}) \triangleq \frac{1}{2}\log\frac{\big|G_lK_0G_l^T + K_l + G_l\Sigma_{X_0S}G_{sl}^T + G_{sl}\Sigma_{X_0S}^TG_l^T + G_{sl}K_SG_{sl}^T + I\big|}{\big|G_lK_0G_l^T + G_l\Sigma_{X_0S}G_{sl}^T + G_{sl}\Sigma_{X_0S}^TG_l^T + G_{sl}K_SG_{sl}^T + I\big|} + \frac{1}{2}\log\big|G_l(K_0 - \Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T)G_l^T + I\big|.$$
Proposition 3.
An outer bound on the capacity region of the state-dependent parallel MIMO Gaussian channel with a helper consists of rate pairs $(R_1, R_2)$ satisfying:
$$R_l \le \min\Big\{R_l^{\mathrm{ub1}}(\Sigma_{X_0S}),\ \tfrac{1}{2}\log|K_l + I|\Big\},$$
for every $l \in \{1, 2\}$ and some $\Sigma_{X_0S}$ that satisfies $\Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T \preceq K_0$.
Proof. 
The second term in (11) is simply the capacity of a point-to-point channel without state. The first term is derived in Appendix B. □

3.3. Capacity Region Characterization

In this section, we optimize A and B in Proposition 2, and compare the rate bounds with the outer bounds in Proposition 3 to characterize the points or segments on the capacity region boundary.
Since the inner bound in Proposition 2 is not convex, it is difficult to provide a closed form for the jointly optimized bounds. Therefore, we first optimize the bounds for R 1 and R 2 respectively, and then provide conditions on channel parameters such that these bounds match the outer bound. Based on the conditions, we partition the channel parameters into the sets, in which different segments of the capacity region boundary can be obtained.
We first consider the rate bound for $R_1$ in (9a). By setting
$$A_a \triangleq (G_1K_0'G_1^T + I)^{-1}K_0'G_1^T(G_1B + G_{s1}), \qquad B_a \triangleq \Sigma_{X_0S}^*G_{s1}K_S^{-1},$$
$f_1(A, B, K_0')$ takes the following form:
$$f_1(A_a, B_a, K_0') = \frac{1}{2}\log\frac{\big|G_1K_0G_1^T + K_1 + G_1\Sigma_{X_0S}^*G_{s1}^T + G_{s1}\Sigma_{X_0S}^{*T}G_1^T + G_{s1}K_SG_{s1}^T + I\big|}{\big|G_1K_0G_1^T + G_1\Sigma_{X_0S}^*G_{s1}^T + G_{s1}\Sigma_{X_0S}^{*T}G_1^T + G_{s1}K_SG_{s1}^T + I\big|} + \frac{1}{2}\log\big|G_1K_0'G_1^T + I\big|,$$
where $\Sigma_{X_0S}^*$ maximizes $f_1(A_a, B(\Sigma_{X_0S}), K_0')$. In fact, $A_a$ maximizes $f_1(A, B, K_0')$ for fixed $B$, and $B_a$ maximizes the function with $A = A_a$.
If $f_1(A_a, B_a, K_0') \le g_1(A_a, B_a, K_0')$, then $R_1 = f_1(A_a, B_a, K_0')$ is achievable, and this matches the outer bound in (11). Thus, one segment of the capacity region boundary is specified by
$$R_1 = f_1(A_a, B_a, K_0'),$$
$$R_2 \le \min\{f_2(A_a, B_a, K_0'),\, g_2(A_a, B_a, K_0')\}.$$
We further observe that the second term $g_1(A, B, K_0')$ in (9a) is optimized by setting $A_b = B + G_1^{-1}G_{s1}$, and hence
$$g_1(A_b, B, K_0') = \frac{1}{2}\log|K_1 + I|.$$
If $g_1(B + G_1^{-1}G_{s1}, B, K_0') \le f_1(B + G_1^{-1}G_{s1}, B, K_0')$, i.e.,
$$K_0'G_1K_0'G_1^T \succeq AK_SA^T(K_1 + I) - K_0'G_1AK_SA^TG_1^T,$$
then the inner bound for $R_1$ becomes $R_1 = \frac{1}{2}\log|K_1 + I|$, which is the capacity of the point-to-point channel without state and matches the outer bound in (11). Thus, another segment of the capacity region boundary is specified by
$$R_1 = \frac{1}{2}\log|K_1 + I|,$$
$$R_2 \le \min\{f_2(A_b, B, K_0'),\, g_2(A_b, B, K_0')\}.$$
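The claim that the alignment $A_b = B + G_1^{-1}G_{s1}$ turns $g_1$ into the no-state capacity can be checked numerically in the MIMO case: with this choice, $Y_1 - G_1W = X_1 + Z_1$, so $I(X_1; Y_1|W) = \frac{1}{2}\log|K_1 + I|$. A small numpy sketch with arbitrary illustrative matrices of our own (rates in nats):

```python
import numpy as np

def gauss_mi(D, Ma, Mb):
    """I(a;b) in nats for jointly Gaussian a = Ma v, b = Mb v, v ~ N(0, D)."""
    Sa = Ma @ D @ Ma.T
    Sb = Mb @ D @ Mb.T
    Mab = np.vstack([Ma, Mb])
    Sab = Mab @ D @ Mab.T
    return 0.5 * (np.log(np.linalg.det(Sa)) + np.log(np.linalg.det(Sb))
                  - np.log(np.linalg.det(Sab)))

t = 2
I2, O2 = np.eye(t), np.zeros((t, t))

# Illustrative (arbitrary) positive-definite parameters.
K0p = np.array([[3.0, 0.5], [0.5, 2.0]])   # K_0'
Ks  = np.array([[2.0, 0.3], [0.3, 1.0]])
K1  = np.array([[1.5, 0.2], [0.2, 1.0]])
G1  = np.array([[1.0, 0.1], [0.0, 0.9]])
Gs1 = np.array([[0.6, 0.0], [0.2, 0.8]])
B   = np.array([[0.3, 0.0], [0.1, 0.2]])
A   = B + np.linalg.solve(G1, Gs1)         # the alignment A_b = B + G1^{-1} Gs1

# base vector v = (X0', S, X1, Z1); block-diagonal covariance
D = np.zeros((4 * t, 4 * t))
for k, K in enumerate([K0p, Ks, K1, I2]):
    D[k*t:(k+1)*t, k*t:(k+1)*t] = K

W   = np.hstack([I2, A, O2, O2])            # W  = X0' + A S
X1m = np.hstack([O2, O2, I2, O2])
Y1  = np.hstack([G1, G1 @ B + Gs1, I2, I2])  # Y1 = G1 X0 + X1 + Gs1 S + Z1

g1 = gauss_mi(D, X1m, np.vstack([Y1, W]))   # I(X1;Y1|W), since X1 is indep. of W
target = 0.5 * np.log(np.linalg.det(K1 + I2))
assert abs(g1 - target) < 1e-9
```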
We then consider the rate bound for $R_2$. Similarly, the following segments on the capacity region boundary can be obtained. If $f_2(A_c, B_c, K_0') \le g_2(A_c, B_c, K_0')$, one segment of the capacity region boundary is specified by
$$R_1 \le \min\{f_1(A_c, B_c, K_0'),\, g_1(A_c, B_c, K_0')\},$$
$$R_2 = \frac{1}{2}\log\frac{\big|G_2K_0G_2^T + K_2 + G_2\Sigma_{X_0S}^{**}G_{s2}^T + G_{s2}\Sigma_{X_0S}^{**T}G_2^T + G_{s2}K_SG_{s2}^T + I\big|}{\big|G_2K_0G_2^T + G_2\Sigma_{X_0S}^{**}G_{s2}^T + G_{s2}\Sigma_{X_0S}^{**T}G_2^T + G_{s2}K_SG_{s2}^T + I\big|} + \frac{1}{2}\log\big|G_2K_0'G_2^T + I\big|,$$
where
$$A_c \triangleq (G_2K_0'G_2^T + I)^{-1}K_0'G_2^T(G_2B + G_{s2}), \qquad B_c \triangleq \Sigma_{X_0S}^{**}G_{s2}K_S^{-1},$$
and $\Sigma_{X_0S}^{**}$ maximizes $f_2(A_c, B_c, K_0')$.
Furthermore, if $g_2(A, A - G_2^{-1}G_{s2}, K_0') \le f_2(A, A - G_2^{-1}G_{s2}, K_0')$, one segment of the capacity region boundary is specified by
$$R_1 \le \min\big\{f_1(A, A - G_2^{-1}G_{s2}, K_0'),\, g_1(A, A - G_2^{-1}G_{s2}, K_0')\big\},$$
$$R_2 = \frac{1}{2}\log|K_2 + I|.$$
Appendix C describes how ( A a , B a ) , ( A b , B b ) , ( A c , B c ) and ( A d , B d ) were chosen.
Summarizing the above analysis, we obtain the following characterization of segments of the capacity region boundary.
Theorem 1.
The channel parameters ( G 1 , G 2 , G s 1 , G s 2 , K 0 , K 1 , K 2 , K S ) can be partitioned into the sets A 1 , B 1 , C 1 , where
$$\mathcal{A}_1 = \big\{(G_1, G_2, G_{s1}, G_{s2}, K_0, K_1, K_2, K_S) : f_1(A_a, B_a, K_0') \le g_1(A_a, B_a, K_0')\big\},$$
$$\mathcal{C}_1 = \big\{(G_1, G_2, G_{s1}, G_{s2}, K_0, K_1, K_2, K_S) : K_0'G_1K_0'G_1^T \succeq AK_SA^T(K_1 + I) - K_0'G_1AK_SA^TG_1^T, \text{ where } K_0' = K_0 - (A - G_1^{-1}G_{s1})K_S(A - G_1^{-1}G_{s1})^T, \text{ for some } A \in \Omega_A\big\},$$
$$\mathcal{B}_1 = (\mathcal{A}_1 \cup \mathcal{C}_1)^c.$$
If ( G 1 , G 2 , G s 1 , G s 2 , K 0 , K 1 , K 2 , K S ) A 1 , then (12a)–(12b) captures one segment of the capacity region boundary, where the state cannot be fully canceled. If ( G 1 , G 2 , G s 1 , G s 2 , K 0 , K 1 , K 2 , K S ) C 1 , then (14a)–(14b) captures one segment of the capacity region boundary where the state is fully canceled. If ( G 1 , G 2 , G s 1 , G s 2 , K 0 , K 1 , K 2 , K S ) B 1 , then the R 1 segment of the capacity region boundary is not characterized.
The channel parameters ( G 1 , G 2 , G s 1 , G s 2 , K 0 , K 1 , K 2 , K S ) can also be partitioned into the sets A 2 , B 2 , C 2 , where
$$\mathcal{A}_2 = \big\{(G_1, G_2, G_{s1}, G_{s2}, K_0, K_1, K_2, K_S) : f_2(A_c, B_c, K_0') \le g_2(A_c, B_c, K_0')\big\},$$
$$\mathcal{C}_2 = \big\{(G_1, G_2, G_{s1}, G_{s2}, K_0, K_1, K_2, K_S) : K_0'G_2K_0'G_2^T \succeq AK_SA^T(K_2 + I) - K_0'G_2AK_SA^TG_2^T, \text{ where } K_0' = K_0 - (A - G_2^{-1}G_{s2})K_S(A - G_2^{-1}G_{s2})^T, \text{ for some } A \in \Omega_A\big\},$$
$$\mathcal{B}_2 = (\mathcal{A}_2 \cup \mathcal{C}_2)^c.$$
If ( G 1 , G 2 , G s 1 , G s 2 , K 0 , K 1 , K 2 , K S ) A 2 , then (15a)–(15b) captures one segment of the capacity region boundary, where the state cannot be fully canceled. If ( G 1 , G 2 , G s 1 , G s 2 , K 0 , K 1 , K 2 , K S ) C 2 , then (16a)–(16b) captures one segment of the capacity boundary where the state is fully canceled. If ( G 1 , G 2 , G s 1 , G s 2 , K 0 , K 1 , K 2 , K S ) B 2 , then the R 2 segment of the capacity region boundary is not characterized.
The above theorem describes two partitions of the channel parameters, under which segments of the capacity region boundary corresponding to $R_1$ and $R_2$, respectively, can be characterized. Intersecting two sets, one from each partition, collectively characterizes all the segments on the capacity region boundary.
Figure 6 lists all possible intersections of sets to which the channel parameters can belong. For each case in Figure 6, we use a red solid line to represent the segments of the capacity region boundary that are characterized in Theorem 1, and we also mark the value of the capacity that each segment corresponds to, as characterized in Theorem 1. Please note that the case $\mathcal{B}_1 \cap \mathcal{B}_2$ is not illustrated in Figure 6, since no segments are characterized in this case.
One interesting example in Theorem 1 is the case $G_1^{-1}G_{s1} = G_2^{-1}G_{s2}$, in which $R_1$ and $R_2$ are optimized by the same pair of coefficients $A$ and $B$ when $(G_1, G_2, G_{s1}, G_{s2}, K_0, K_1, K_2, K_S) \in \mathcal{C}_1 \cap \mathcal{C}_2$. Thus, the point-to-point channel capacity is simultaneously obtained for both $R_1$ and $R_2$, with the state being fully canceled. We state this result in the following theorem.
Theorem 2.
If $G_1^{-1}G_{s1} = G_2^{-1}G_{s2}$, $K_0'G_1K_0'G_1^T \succeq AK_SA^T(K_1 + I) - K_0'G_1AK_SA^TG_1^T$ and $K_0'G_2K_0'G_2^T \succeq AK_SA^T(K_2 + I) - K_0'G_2AK_SA^TG_2^T$, where $K_0' = K_0 - (A - G_1^{-1}G_{s1})K_S(A - G_1^{-1}G_{s1})^T$, for some $A \in \Omega_A$, then the capacity region of the state-dependent parallel Gaussian channel with a helper and under the same but differently scaled states contains all rate pairs $(R_1, R_2)$ satisfying
$$R_1 \le \tfrac{1}{2}\log|K_1 + I|, \qquad R_2 \le \tfrac{1}{2}\log|K_2 + I|.$$
The channel conditions of Theorem 2 are not only of mathematical importance but also of practical utility. Consider, for example, a scenario where the helper is also the interferer (see Figure 3); in such a case it is reasonable to assume that $G_{s1} = G_1$ and $G_{s2} = G_2$, and thus the aforementioned conditions are satisfied.

3.4. Numerical Example

We now examine our results via simulations. In particular, we focus on the scalar channel case, i.e., $G_1 \to 1$, $G_2 \to b$, $G_{s1} \to 1$, $G_{s2} \to a$, $K_0 \to P_0$, $K_0' \to P_0'$, $K_1 \to P_1$, $K_2 \to P_2$ and $K_S \to Q$. Furthermore, we denote $A \to \alpha$, $B \to \beta$ and $\rho_{0S} \triangleq \beta\sqrt{Q/P_0}$.
We set $P_0 = 6$, $P_1 = P_2 = 5$, $Q = 12$ and $b = 0.8$, and plot the inner and outer bounds on the capacity region $(R_1, R_2)$ for two values of $a$. It can be observed from Figure 7 that the outer bound is defined by the rectangular region of the channel without state. The inner bound, on the contrary, is sensitive to the value of $a$: in the case $a = b$, our inner and outer bounds coincide everywhere, while in the case $a \ne b$ they coincide only on some segments. Both observations corroborate the characterization of the capacity in Theorems 1 and 2.
It is also interesting to illustrate how the channel parameters ( a , b ) affect our ability to characterize the capacity region boundary. For this we propose the following setup:
  • we choose $\alpha$ and $\beta$ such that $R_1$ lies on the capacity region boundary;
  • we further choose the $\rho_{0S}$ that maximizes the achievable $R_2$, denoted $R_2^I$;
  • we compare it to the outer bound on $R_2$, denoted $R_2^O$, and plot the gap $\Delta \triangleq R_2^O - R_2^I$.
Figure 8 shows the results of such a simulation for two values of $P_0$: $P_0 = 1$, for which the state is not fully canceled for user 1, and $P_0 = 6$, for which the state is canceled. We fix the other parameters as before, that is, $P_1 = P_2 = 5$ and $Q = 12$. The right figure shows that the capacity gap is small around the line $a = b$; this result is not surprising, as it follows from Theorem 2. The left figure is also interesting: it shows that there is a curve with $a \ne b$ along which the capacity gap is also near zero. The reason for this phenomenon is explained as follows.
  • The chosen channel parameters satisfy $(a, b, P_0, P_1, P_2, Q) \in \mathcal{A}_1$, and hence
    $$\alpha_1 = \frac{(1 + \beta_1)P_0}{P_0 + 1}, \qquad \beta_1 = \rho_{0S}\sqrt{P_0/Q}$$
    optimize $R_1$.
  • Thus, if $a \neq b$ satisfies
    $$\frac{a}{b} = \alpha_1 - \beta_1,$$
    and $b^2P_0^2 \geq \alpha_1^2Q(P_2 + 1 - b^2P_0)$, then $(a, b, P_0, P_1, P_2, Q) \in \mathcal{C}_2$, i.e., $R_2 = \frac{1}{2}\log(1 + P_2)$ is achievable.
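The relation above can be turned into a small helper-design sketch: given fixed $(b, P_0, Q)$ and a chosen correlation $\rho_{0S}$, it returns the state gain $a$ for which the $R_2$ segment is also tight. All numerical values and the function name below are illustrative assumptions, not taken from the simulations of this section.

```python
import math

def matching_state_gain(b, P0, Q, rho0s):
    """Return the state gain a satisfying a/b = alpha_1 - beta_1, with
    beta_1 = rho0s*sqrt(P0/Q) and alpha_1 = (1 + beta_1)*P0/(P0 + 1)."""
    beta1 = rho0s * math.sqrt(P0 / Q)
    alpha1 = (1.0 + beta1) * P0 / (P0 + 1.0)
    return b * (alpha1 - beta1)

# Illustrative parameters (same orders of magnitude as the simulation: Q = 12, P0 = 6).
a = matching_state_gain(b=1.0, P0=6.0, Q=12.0, rho0s=-0.5)
```

In practice one would sweep $\rho_{0S}$ over $[-1, 1]$ and keep the value maximizing the achievable $R_2$, as in the setup described above.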
We illustrate this result in Figure 9, where we fix the channel parameters $b = 1$, $P_1 = P_2 = 5$, $Q = 12$, and calculate the capacity gap for various values of $a$ and $P_0$. The shaded area is the region of $P_0$ where the capacity of the point-to-point helper channel is not characterized.
In practical situations the channel parameters $a$ and $b$ are fixed, but the helper can control $P_0$. The results here imply that for a fixed $(a, b)$ we can choose $P_0$ such that the capacity gap is close to zero. We emphasize this in Figure 10, where we plot the inner and outer bounds on the achievable $(R_1, R_2)$ with the following channel parameters
( a , b , P 0 , P 1 , P 2 , Q ) = ( 3.5 , 5 , 2.17 , 5 , 5 , 12 ) .

4. MIMO Gaussian Channel with Independent States

In this section, we consider the problem of channel coding over a MIMO Gaussian parallel state-dependent channel with a cognitive helper, where the states are independent. We start by deriving an achievable region for the general discrete memoryless case. We then evaluate this region for the Gaussian setting by choosing an appropriate jointly Gaussian input distribution.

4.1. Problem Formulation

Consider the 3-transmitter, 2-receiver state-dependent parallel DMC depicted in Figure 11, where Transmitter 1 wishes to communicate a message $M_1$ to Receiver 1, and similarly Transmitter 2 wishes to transmit a message $M_2$ to its corresponding Receiver 2. The messages $M_1$ and $M_2$ are independent. The communication takes place over a parallel state-dependent channel characterized by a probability transition matrix $p(y_1, y_2|x_0, x_1, x_2, s)$. The transmitter at the helper has non-causal knowledge of the state and tries to mitigate the interference caused in both channels. The state variable $S$ is a random variable taking values in $\mathcal{S}$ and drawn from a discrete memoryless source (DMS)
P S n ( s n ) = i = 1 n P S ( s i ) .
A ( 2 n R 1 , 2 n R 2 , n ) code for the parallel state-dependent channel with state known non-causally at the helper consists of
  • two message sets I R 1 ( n ) and I R 2 ( n ) ,
  • three encoders, where the encoder at the helper assigns a codeword $x_0^n(s^n)$ to each state sequence $s^n \in \mathcal{S}^n$, encoder 1 assigns a codeword $x_1^n(m_1)$ to each message $m_1 \in \mathcal{I}_{R_1}^{(n)}$ and encoder 2 assigns a codeword $x_2^n(m_2)$ to each message $m_2 \in \mathcal{I}_{R_2}^{(n)}$, and
  • two decoders, where decoder 1 assigns an estimate $\hat{m}_1 \in \mathcal{I}_{R_1}^{(n)}$ or an error message e to each received sequence $y_1^n$, and decoder 2 assigns an estimate $\hat{m}_2 \in \mathcal{I}_{R_2}^{(n)}$ or an error message e to each received sequence $y_2^n$.
We assume that the message pair ( M 1 , M 2 ) is uniformly distributed over I R 1 ( n ) × I R 2 ( n ) . The average probability of error for a length-n code is defined as
P e ( n ) = P { M ^ 1 M 1 or M ^ 2 M 2 } .
A rate pair ( R 1 , R 2 ) is said to be achievable if there exists a sequence of ( 2 n R 1 , 2 n R 2 , n ) codes such that lim n P e ( n ) = 0 . The capacity region C is the closure of the set of all achievable rate pairs ( R 1 , R 2 ) .
We observe that, due to the lack of cooperation between the receivers, the capacity region of this channel depends on $p(y_1, y_2|x_0, x_1, x_2, s)$ only through the conditional marginal PMFs $p(y_1|x_0, x_1, s)$ and $p(y_2|x_0, x_2, s)$. This observation is similar to that for the DM-BC ([37], Lemma 5.1).
Our goal is to characterize the capacity region C for the state-dependent Gaussian parallel channel with additive state known at the helper. Here, the state S = ( S 1 , S 2 ) T . The channel is modeled by a Gaussian vector parallel state-dependent channel
Y l = G l X 0 + X l + S l + Z l , l = 1 , 2 ,
where $G_1, G_2$ are $t \times t$ channel gain matrices, and $X_0, X_1, X_2$ are the channel input signals of the helper and the two non-cognitive transmitters, each subject to an average matrix power constraint
$$\frac{1}{n}\sum_{i=1}^n X_{l,i}X_{l,i}^T \preceq K_l, \quad l = 0, 1, 2.$$
The additive state variables $S_l$ and the noises $Z_l$ are independent and identically distributed (i.i.d.) Gaussian with zero mean and covariance matrices $K_{S_l}$ (strictly positive definite) and $I$, respectively.
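As a concrete illustration of the channel model, the sketch below simulates a scalar instance of $Y_l = G_lX_0 + X_l + S_l + Z_l$ with independent states. The gains, powers, and the naive helper strategy are all hypothetical choices made for illustration; they are not the coding scheme of this paper.

```python
import math
import random

random.seed(0)

# Scalar instantiation of Y_l = G_l*X_0 + X_l + S_l + Z_l with independent states.
g1, g2 = 1.0, 0.8                        # illustrative channel gains G_1, G_2
n = 8                                    # toy block length
gauss = lambda var: [random.gauss(0.0, math.sqrt(var)) for _ in range(n)]

s1, s2 = gauss(12.0), gauss(12.0)        # independent states, power Q_1 = Q_2 = 12
x1, x2 = gauss(5.0), gauss(5.0)          # user signals, power P_1 = P_2 = 5
z1, z2 = gauss(1.0), gauss(1.0)          # unit-variance noises

# A naive helper strategy: partial direct cancelation of both states
# (the coefficient -0.3 is arbitrary, not optimized).
x0 = [-0.3 * (a + b) for a, b in zip(s1, s2)]

y1 = [g1 * h + a + s + z for h, a, s, z in zip(x0, x1, s1, z1)]
y2 = [g2 * h + a + s + z for h, a, s, z in zip(x0, x2, s2, z2)]
```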

4.2. Outer and Inner Bounds

To characterize the capacity region of this channel, we first consider the following outer bound on the capacity region for the Gaussian setting.
Let,
$$K_S \triangleq \begin{bmatrix} K_{S_1} & 0 \\ 0 & K_{S_2} \end{bmatrix},$$
and
$$R_l^{\mathrm{ub2}}(\Sigma_{X_0S}) \triangleq \frac{1}{2}\log\frac{|G_lK_0G_l^T + K_l + G_l\Sigma_{X_0S_l} + \Sigma_{X_0S_l}^TG_l^T + K_{S_l} + I|}{|G_lK_0G_l^T + G_l\Sigma_{X_0S_l} + \Sigma_{X_0S_l}^TG_l^T + K_{S_l} + I|} + \frac{1}{2}\log\left|G_l\left(K_0 - \Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T\right)G_l^T + I\right|.$$
Proposition 4.
Every achievable rate pair ( R 1 , R 2 ) of the state-dependent parallel Gaussian channel with a helper must satisfy the following inequalities
$$R_l \leq \min\left\{R_l^{\mathrm{ub2}}(\Sigma_{X_0S}),\ \frac{1}{2}\log|K_l + I|\right\},$$
for $l \in \{1, 2\}$ and some covariance matrices $(\Sigma_{X_0S_1}, \Sigma_{X_0S_2})$ such that $\Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T \preceq K_0$, where
$$\Sigma_{X_0S} \triangleq \begin{bmatrix}\Sigma_{X_0S_1} & \Sigma_{X_0S_2}\end{bmatrix}.$$
The proof of this outer bound is quite similar to the proof of the outer bound in Proposition 3 and is given in Appendix D.
The upper bound on each rate consists of two terms: the first reflects the scenario in which the interference cannot be completely canceled, and the second is simply the point-to-point capacity of the channel without the state. Furthermore, the individual rate bounds are coupled through the choice of $\Sigma_{X_0S_1}$ and $\Sigma_{X_0S_2}$.
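The scalar case makes the structure of $R_l^{\mathrm{ub2}}$ transparent, since every determinant collapses to a scalar. A minimal sketch, with illustrative parameter values of our own choosing:

```python
import math

def r_ub2_scalar(g, P0, Pl, Ql, sig_l, sig1, sig2, Q1, Q2):
    """Scalar version of R_l^ub2(Sigma_X0S): all determinants become scalars.
    sig_l is the cross-covariance with the state of channel l; (sig1, sig2)
    enter the residual helper power K0 - Sigma K_S^{-1} Sigma^T."""
    num = g * g * P0 + Pl + 2.0 * g * sig_l + Ql + 1.0
    den = g * g * P0 + 2.0 * g * sig_l + Ql + 1.0
    resid = P0 - sig1 * sig1 / Q1 - sig2 * sig2 / Q2
    return 0.5 * math.log(num / den) + 0.5 * math.log(g * g * resid + 1.0)

# With zero correlation the first term keeps the full state penalty, and the
# second term reduces to 0.5*log(g^2*P0 + 1) since no helper power is spent.
r0 = r_ub2_scalar(1.0, 6.0, 5.0, 12.0, 0.0, 0.0, 0.0, 12.0, 12.0)
```

Note how increasing $|\sigma_l|$ trades the first term (state mitigation) against the second (residual helper power), mirroring the coupling discussed above.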
We next derive an achievable region for the channel based on a scheme that integrates Marton's coding, single-bin dirty paper coding, and state cancelation. More specifically, we generate two auxiliary random variables, $U$ and $V$, to incorporate the state information, so that Receiver 1 (respectively, Receiver 2) decodes $U$ (respectively, $V$) and then decodes its own transmitter's information. Based on this scheme, we derive the following inner bound on the capacity region for the DM case.
Proposition 5.
An inner bound on the capacity region of the discrete memoryless parallel state-dependent channel with a helper consists of rate pairs ( R 1 , R 2 ) satisfying:
$$R_1 \leq \min\{I(U, X_1; Y_1) - I(U; S),\ I(X_1; Y_1|U)\},$$
$$R_2 \leq \min\{I(V, X_2; Y_2) - I(V; S),\ I(X_2; Y_2|V)\},$$
$$R_1 + R_2 \leq \min\{I(U, X_1; Y_1) - I(U; S) + I(V, X_2; Y_2) - I(V; S) - I(V; U|S),\ I(X_1; Y_1|U) + I(X_2; Y_2|V)\},$$
for some PMF P U V X 0 | S P X 1 P X 2 .
Remark 1.
The achievable region in Proposition 5 is equivalent to the following region
$$R_1 \leq \min\{I(U, X_1; Y_1) - I(U; S),\ I(X_1; Y_1|U)\},$$
$$R_2 \leq \min\{I(V, X_2; Y_2) - I(V; U, S),\ I(X_2; Y_2|V)\},$$
for some PMF P U V X 0 | S P X 1 P X 2 .
Proof. 
The proof of the inner bound is relegated to Appendix E. □
We evaluate the latter inner bound for the Gaussian channel by choosing the joint Gaussian distribution for random variables as follows:
$$U = X_{01} + A_{11}S_1 + A_{12}S_2, \quad V = X_{02} + A_{20}X_{01} + A_{21}S_1 + A_{22}S_2, \quad X_0 = X_{01} + B_1S_1 + X_{02} + B_2S_2,$$
$$X_{01} \sim \mathcal{N}(0, K_{01}), \quad X_{02} \sim \mathcal{N}(0, K_{02}), \quad X_1 \sim \mathcal{N}(0, K_1), \quad X_2 \sim \mathcal{N}(0, K_2),$$
where $X_{01}, X_{02}, X_1, X_2, S_1, S_2$ are independent. For simplicity of presentation, denote $\bar{A}_1 = (A_{11}, A_{12})$, $\bar{A}_2 = (A_{20}, A_{21}, A_{22})$ and $\bar{B} = (B_1, B_2)$. Let $f_1(\cdot)$, $g_1(\cdot)$, $f_2(\cdot)$ and $g_2(\cdot)$ be defined as
$$f_1(\bar{A}_1, \bar{B}, K_{01}, K_{02}) \triangleq I(U, X_1; Y_1) - I(U; S), \quad g_1(\bar{A}_1, \bar{B}, K_{01}, K_{02}) \triangleq I(X_1; Y_1|U),$$
$$f_2(\bar{A}_2, \bar{B}, K_{01}, K_{02}) \triangleq I(V, X_2; Y_2) - I(V; U, S), \quad g_2(\bar{A}_2, \bar{B}, K_{01}, K_{02}) \triangleq I(X_2; Y_2|V),$$
where the mutual information terms are evaluated using the joint Gaussian distribution set at (29). Based on those definitions we obtain an achievable region for the Gaussian channel.
Proposition 6.
An inner bound on the capacity region of the parallel state-dependent Gaussian channel with a helper and independent states consists of rate pairs $(R_1, R_2)$ satisfying:
$$R_1 \leq \min\{f_1(\bar{A}_1, \bar{B}, K_{01}, K_{02}),\ g_1(\bar{A}_1, \bar{B}, K_{01}, K_{02})\},$$
$$R_2 \leq \min\{f_2(\bar{A}_2, \bar{B}, K_{01}, K_{02}),\ g_2(\bar{A}_2, \bar{B}, K_{01}, K_{02})\},$$
for some real matrices $A_{11}, A_{12}, A_{20}, A_{21}, A_{22}, B_1, B_2, K_{01}$ and $K_{02}$ satisfying $K_{01}, K_{02} \succeq 0$ and $K_{01} + K_{02} + B_1K_{S_1}B_1^T + B_2K_{S_2}B_2^T \preceq K_0$.
We now provide the intuition behind this construction of the RVs in Proposition 6. $X_0$ consists of two parts: the part with $B_l$, $l = 1, 2$, performs direct cancelation of each state, while the part $X_{0l}$, $l = 1, 2$, is used for dirty paper coding via the generation of the state-correlated auxiliary RVs $U$ and $V$.

4.3. Capacity Region Characterization

In this section, we characterize segments of the capacity region boundary for various channel parameters using the inner and outer bounds derived in Section 4.2. Consider the inner bounds in (30a)–(30b). Each bound has two terms in the argument of min. We optimize each term independently, compare it to the outer bounds in (25), and finally state the conditions under which those terms are valid. Our technique is to choose $(A_{11}, A_{12}, A_{20}, A_{21}, A_{22})$ such that the respective interfering terms are canceled from the mutual information quantities. We explain how these matrices were chosen in Appendix F.
We begin by considering what choice of ( A 11 , A 12 ) can maximize f 1 ( A ¯ 1 , B ¯ , K 01 , K 02 ) . Let
$$A_{11}^a = (G_1(K_{01} + K_{02})G_1^T + I)^{-1}K_{01}G_1^T(G_1B_1 + I), \quad A_{12}^a = (G_1(K_{01} + K_{02})G_1^T + I)^{-1}K_{01}G_1^TG_1B_2.$$
Then f 1 ( A ¯ 1 , B ¯ , K 01 , K 02 ) takes the following form
$$f_1(\bar{A}_1^a, \bar{B}, K_{01}, K_{02}) = \frac{1}{2}\log\frac{|G_1K_0G_1^T + K_1 + G_1B_1K_{S_1} + K_{S_1}B_1^TG_1^T + K_{S_1} + I|}{|G_1K_0G_1^T + G_1B_1K_{S_1} + K_{S_1}B_1^TG_1^T + K_{S_1} + I|} + \frac{1}{2}\log\frac{|G_1(K_{01} + K_{02})G_1^T + I|}{|G_1K_{02}G_1^T + I|}.$$
If $f_1(\bar{A}_1^a, \bar{B}^a, K_{01}, K_{02}) \leq g_1(\bar{A}_1^a, \bar{B}^a, K_{01}, K_{02})$, then $R_1 = f_1(\bar{A}_1^a, \bar{B}^a, K_{01}, K_{02})$ is achievable. Moreover, if we choose $K_{02} = 0$, then $R_1 = f_1(A_{11}^a, A_{12}^a, B_1^a, B_2^a, K_0, 0)$ meets the outer bound (the first term in “min” in (25)) with $B_1K_{S_1} = \Sigma_{X_0S_1}$ and $B_2K_{S_2} = \Sigma_{X_0S_2}$. Furthermore, by setting
$$A_{11}^b = B_1 + G_1^{-1}, \quad A_{12}^b = B_2,$$
we obtain
$$g_1(\bar{A}_1^b, \bar{B}, K_{01}, K_{02}) = \frac{1}{2}\log\frac{|G_1K_{02}G_1^T + K_1 + I|}{|G_1K_{02}G_1^T + I|}.$$
If $g_1(\bar{A}_1^b, \bar{B}, K_{01}, K_{02}) \leq f_1(\bar{A}_1^b, \bar{B}, K_{01}, K_{02})$, then
$$R_1 = \frac{1}{2}\log\frac{|G_1K_{02}G_1^T + K_1 + I|}{|G_1K_{02}G_1^T + I|}$$
is achievable. Similarly, by choosing $K_{02} = 0$, $R_1 = \frac{1}{2}\log|K_1 + I|$ is achievable, and this meets the outer bound (the second term in “min” in (25)). Next we consider the bound on $R_2$. Let
$$A_{20}^a = (G_2K_{02}G_2^T + I)^{-1}K_{02}G_2^TG_2, \quad A_{21}^a = (G_2K_{02}G_2^T + I)^{-1}K_{02}G_2^TG_2B_1, \quad A_{22}^a = (G_2K_{02}G_2^T + I)^{-1}K_{02}G_2^T(G_2B_2 + I).$$
Then f 2 ( A ¯ 2 , B ¯ , K 01 , K 02 ) takes the following form
$$f_2(\bar{A}_2^a, \bar{B}, K_{01}, K_{02}) = \frac{1}{2}\log\frac{|G_2K_0G_2^T + K_2 + G_2B_2K_{S_2} + K_{S_2}B_2^TG_2^T + K_{S_2} + I|}{|G_2K_0G_2^T + G_2B_2K_{S_2} + K_{S_2}B_2^TG_2^T + K_{S_2} + I|} + \frac{1}{2}\log|G_2K_{02}G_2^T + I|.$$
If $f_2(\bar{A}_2^a, \bar{B}, K_{01}, K_{02}) \leq g_2(\bar{A}_2^a, \bar{B}, K_{01}, K_{02})$, then $R_2 = f_2(\bar{A}_2^a, \bar{B}, K_{01}, K_{02})$ is achievable. Moreover, if we choose $K_{01} = 0$, then $R_2 = f_2(\bar{A}_2^a, \bar{B}, 0, K_0)$ meets the outer bound (the first term in “min” in (25)).
Furthermore, we set
$$A_{20}^b = I, \quad A_{21}^b = B_1, \quad A_{22}^b = B_2 + G_2^{-1},$$
and then obtain
g 2 ( A ¯ 2 b , B ¯ , K 01 , K 02 ) = 1 2 log | K 2 + I | .
If $g_2(\bar{A}_2^b, \bar{B}, K_{01}, K_{02}) \leq f_2(\bar{A}_2^b, \bar{B}, K_{01}, K_{02})$, then $R_2 = \frac{1}{2}\log|K_2 + I|$ is achievable, and this meets the outer bound. This also equals the maximum rate for $R_2$ when the channel is not corrupted by the state.
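The cancelation behind this choice can be checked symbolically in the scalar case ($t = 1$): writing each signal as a coefficient vector over the independent components, $Y_2 - G_2V$ retains only $X_2$ and $Z_2$, so Receiver 2's channel given $V$ is interference free. The gains below are arbitrary illustrative values.

```python
# Symbolic check that (A20, A21, A22) = (1, B1, B2 + 1/G2) reduces receiver 2's
# channel, given V, to X2 + Z2.  Basis order for the coefficient vectors:
# (X01, X02, X1, X2, S1, S2, Z1, Z2).
g2 = 0.7                      # hypothetical scalar gain G_2
b1, b2 = -0.4, 0.25           # arbitrary direct-cancelation coefficients B_1, B_2

x0 = [1.0, 1.0, 0.0, 0.0, b1, b2, 0.0, 0.0]   # X0 = X01 + X02 + B1*S1 + B2*S2
y2 = [g2 * c for c in x0]                      # G2*X0 ...
y2[3] += 1.0                                   # ... + X2
y2[5] += 1.0                                   # ... + S2
y2[7] += 1.0                                   # ... + Z2

a20, a21, a22 = 1.0, b1, b2 + 1.0 / g2         # scalar form of the choice above
v = [a20, 1.0, 0.0, 0.0, a21, a22, 0.0, 0.0]   # V = X02 + A20*X01 + A21*S1 + A22*S2

resid = [yc - g2 * vc for yc, vc in zip(y2, v)]
# Only the X2 and Z2 coordinates survive in Y2 - G2*V.
```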
Summarizing the above analysis, we obtain the following characterization of segments of the capacity region boundary.
Theorem 3.
The channel parameters ( G 1 , G 2 , K 0 , K 1 , K 2 , K S 1 , K S 2 ) can be partitioned into the sets A 1 , B 1 , C 1 , where
$$\mathcal{A}_1 = \{(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) : f_1(\bar{A}_1^a, \bar{B}^a, K_{01}, K_{02}) \leq g_1(\bar{A}_1^a, \bar{B}^a, K_{01}, K_{02})\},$$
$$\mathcal{C}_1 = \{(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) : f_1(\bar{A}_1^b, \bar{B}, K_{01}, K_{02}) \geq g_1(\bar{A}_1^b, \bar{B}, K_{01}, K_{02})\},$$
$$\mathcal{B}_1 = (\mathcal{A}_1 \cup \mathcal{C}_1)^c.$$
If $(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) \in \mathcal{A}_1$, then $R_1 = f_1(\bar{A}_1^a, \bar{B}, K_0, 0)$ captures one segment of the capacity region boundary, where the state cannot be fully canceled. If $(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) \in \mathcal{C}_1$, then $R_1 = \frac{1}{2}\log|K_1 + I|$ captures one segment of the capacity region boundary, where the state is fully canceled. If $(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) \in \mathcal{B}_1$, then the $R_1$ segment of the capacity region boundary is not characterized.
The channel parameters ( G 1 , G 2 , K 0 , K 1 , K 2 , K S 1 , K S 2 ) can also be partitioned into the sets A 2 , B 2 , C 2 , where
$$\mathcal{A}_2 = \{(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) : f_2(\bar{A}_2^a, \bar{B}, K_{01}, K_{02}) \leq g_2(\bar{A}_2^a, \bar{B}, K_{01}, K_{02})\},$$
$$\mathcal{C}_2 = \{(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) : f_2(\bar{A}_2^b, \bar{B}, K_{01}, K_{02}) \geq g_2(\bar{A}_2^b, \bar{B}, K_{01}, K_{02})\},$$
$$\mathcal{B}_2 = (\mathcal{A}_2 \cup \mathcal{C}_2)^c.$$
If $(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) \in \mathcal{A}_2$, then $R_2 = f_2(\bar{A}_2^a, \bar{B}, 0, K_0)$ captures one segment of the capacity region boundary, where the state cannot be fully canceled. If $(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) \in \mathcal{C}_2$, then $R_2 = \frac{1}{2}\log|K_2 + I|$ captures one segment of the capacity region boundary, where the state is fully canceled. If $(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) \in \mathcal{B}_2$, then the $R_2$ segment of the capacity region boundary is not characterized.
The above theorem describes two partitions of the channel parameters, under which segments of the capacity region boundary corresponding to $R_1$ and $R_2$, respectively, can be characterized. The intersection of two sets, one from each partition, collectively characterizes all the known segments of the capacity region boundary.
We note that our inner bound can be tight for some sets of channel parameters. As an example, assume that $(G_1, G_2, K_0, K_1, K_2, K_{S_1}, K_{S_2}) \in \mathcal{C}_1 \cap \mathcal{C}_2$. In this case, $R_1 = \frac{1}{2}\log\frac{|G_1K_{02}G_1^T + K_1 + I|}{|G_1K_{02}G_1^T + I|}$ and $R_2 = \frac{1}{2}\log|K_2 + I|$ are achievable. For the point-to-point helper channel [28], it was shown that if the helper power is above some threshold, the state is completely canceled, whereas in our model we have two parallel channels. If the helper power is high enough, the helper can split its signal, similarly to the Gaussian BC: the part intended for Receiver 2 uses dirty paper coding to completely eliminate the interference caused by the state and by the part of the signal intended for Receiver 1. At the same time, the part of the helper signal intended for Receiver 1 can only cancel the interference caused by the state, while the part intended for Receiver 2 is treated as noise.

4.4. Numerical Results

In this section, we provide specific numerical examples to illustrate the bounds obtained in the previous sections. In particular, we focus on the scalar Gaussian setting, i.e., $G_1 = \eta_1$, $G_2 = \eta_2$, $K_0 = P_0$, $K_{01} = P_{01}$, $K_{02} = P_{02}$, $K_1 = P_1$, $K_2 = P_2$, $K_{S_1} = Q_1$, $K_{S_2} = Q_2$. We also denote $(A_{11}, A_{12}, A_{20}, A_{21}, A_{22}, B_1, B_2) = (\alpha_{11}, \alpha_{12}, \alpha_{20}, \alpha_{21}, \alpha_{22}, \beta_1, \beta_2)$. We plot the inner and outer bounds for various values of the helper power $P_0$, the channel gains $\eta_1$ and $\eta_2$, and different state powers. The results are shown in Figure 12. The outer bound is based on Proposition 4. The inner bound is the convex hull of all the achievable regions, with the roles of the decoders interchanged. The time-sharing inner bound follows the point-to-point helper channel achievable region [28]. The scenario where the helper power is lower than the users' power is depicted in Figure 12a,b; the channel gains are equal in Figure 12a and mismatched in Figure 12b. Note that in both cases our inner bound outperforms the time-sharing bound, especially in the mismatched case, and some segments of the capacity region are characterized.
The scenario with the helper power higher than the users' power, with matched and mismatched channel gains, is depicted in Figure 12c,d, respectively. Similarly to the low-helper-power regime, our proposed achievability scheme performs better than time-sharing.

5. Conclusions

In the first part of this paper, we have studied the parallel state-dependent Gaussian channel with a state-cognitive helper and with the same but differently scaled state. An inner bound was derived and compared to an outer bound, and segments of the capacity region boundary were characterized for various channel parameters. We have shown that if the channel gain matrices satisfy a certain symmetry property, the full rectangular capacity region of the two point-to-point channels without the state can be achieved. Furthermore, for the scalar channel case, we have shown that for a given ratio $a/b$ of the state gain to the helper signal gain, one can find a value of the helper power $P_0$ such that the capacity region is fully characterized.
A different model of the parallel state-dependent Gaussian channel with a state-cognitive helper and independent states was considered in the second part of this study. Inner and outer bounds were derived, and segments of the capacity region boundary were characterized for various channel parameters. We have also demonstrated our results using numerical simulation and have shown that our achievability scheme outperforms time-sharing that was shown to be optimal for the infinite state power regime in [34].
These two models represent special cases of a more general scenario with correlated states, and our results for both imply that as the states become more correlated, it is easier to mitigate the interference. Furthermore, the gap between the inner and outer bounds in this work suggests that a new technique for outer bound derivation is needed, as we believe that the inner bound consisting of the pairs $(R_1, R_2) = (f_1(\bar{A}_1^a, \bar{B}, K_{01}, K_{02}), f_2(\bar{A}_2^a, \bar{B}, K_{01}, K_{02}))$ is indeed tight for some sets of channel parameters.

Author Contributions

Conceptualization, Y.L.; Formal analysis, M.D.; Supervision, S.S.; Validation, R.D.; Writing—original draft, M.D.; Writing—review & editing, M.D.

Funding

The work of M.D. and S.S. has been supported by the European Union’s Horizon 2020 Research And Innovation Programme, grant agreement no. 694630, and by the Heron consortium via the Israel minister of economy and science. The work of Y.L. was supported in part by the U.S. National Science Foundation under Grant CCF-1801846.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Proposition 1

Fix the following joint PMF
P S W X 0 X 1 X 2 Y 1 Y 2 = P S P W | S P X 0 | S W P X 1 P X 2 P Y 1 | S X 0 X 1 P Y 2 | S X 0 X 2 .

Appendix A.1. Codebook Generation

Randomly and independently generate 2 n R ˜ sequences w n ( m ˜ ) , m ˜ I R ˜ ( n ) , each according to i = 1 n P W ( w i ) . Similarly, for l = { 1 , 2 } , generate 2 n R l sequences x l n ( m l ) , m l I R l ( n ) , each according to i = 1 n P X l ( x l i ) . These sequences constitute the codebook, which is revealed to the encoders and the decoders.

Appendix A.2. Encoding

Appendix A.2.1. Encoder at the Helper

Fix $\epsilon' > 0$. Given $s^n$, find $\tilde{m}$ such that $(w^n(\tilde{m}), s^n) \in T_{\epsilon'}^{(n)}(P_{SW})$. If no such sequence exists, declare an error. Then, given $(w^n(\tilde{m}), s^n)$, generate $x_0^n$ according to $\prod_{i=1}^n P_{X_0|SW}(x_{0i}|s_i, w_i(\tilde{m}))$. The encoder at the helper then transmits $x_{0i}$ at time $i \in [1:n]$.

Appendix A.2.2. Encoder at Transmitter l

To send message m l , encoder l transmits x l n ( m l ) , for l = { 1 , 2 } .

Appendix A.3. Decoding

Let $\epsilon > \epsilon'$ and $l \in \{1, 2\}$. Upon receiving $y_l^n$, the decoder at Receiver $l$ declares that $\hat{m}_l \in \mathcal{I}_{R_l}^{(n)}$ was sent if it is the unique message such that $(w^n(\hat{m}), x_l^n(\hat{m}_l), y_l^n) \in T_{\epsilon}^{(n)}(P_{WX_lY_l})$ for some $\hat{m} \in \mathcal{I}_{\tilde{R}}^{(n)}$; otherwise it declares an error.

Appendix A.4. Analysis of the Probability of Error

The encoder at the helper declares an error if the following event occurs
$$\mathcal{E}_0 = \{(S^n, W^n(\tilde{m})) \notin T_{\epsilon'}^{(n)}(P_{SW}) \text{ for all } \tilde{m} \in \mathcal{I}_{\tilde{R}}^{(n)}\}.$$
By the covering lemma (Section 2.3), setting the random variables $(U, X, \hat{X})$ to $(\emptyset, S, W)$, respectively, and $\mathcal{A} = \mathcal{I}_{\tilde{R}}^{(n)}$, $P\{\mathcal{E}_0\}$ tends to zero as $n \to \infty$ if
R ˜ > I ( W ; S ) .
Assume without loss of generality that ( M 1 , M 2 ) = ( 1 , 1 ) , condition (A3) holds, and let M ˜ denote the index of the chosen w n sequence for s n . The decoder at Receiver l makes an error only if one or more of the following events occur:
$$\mathcal{E}_{l1} = \{(W^n(\tilde{M}), X_l^n(1), Y_l^n) \notin T_{\epsilon}^{(n)}(P_{WX_lY_l})\},$$
$$\mathcal{E}_{l2} = \{(W^n(\tilde{M}), X_l^n(m_l), Y_l^n) \in T_{\epsilon}^{(n)}(P_{WX_lY_l}) \text{ for some } m_l \neq 1\},$$
$$\mathcal{E}_{l3} = \{(W^n(\tilde{m}), X_l^n(m_l), Y_l^n) \in T_{\epsilon}^{(n)}(P_{WX_lY_l}) \text{ for some } m_l \neq 1 \text{ and } \tilde{m} \neq \tilde{M}\}.$$
Thus, by the union of events bound,
P { E l } = P { E l 1 E l 2 E l 3 } P { E l 1 } + P { E l 2 } + P { E l 3 } .
By the LLN, the first term $P\{\mathcal{E}_{l1}\}$ tends to zero as $n \to \infty$. For the second term, note that for $m_l \neq 1$,
p ( w n ( M ˜ ) , x l n ( m l ) , y l n ) = x l n ( 1 ) , x 0 n ( M ˜ ) , s n p ( w n ( M ˜ ) , x 0 n ( M ˜ ) , x l n ( 1 ) , x l n ( m l ) , s n , y l n ) = p ( x l n ( m l ) ) x l n ( 1 ) , x 0 n ( M ˜ ) , s n p ( w n ( M ˜ ) , x 0 n ( M ˜ ) , s n ) p ( x l n ( 1 ) ) p ( y l n | x 0 n ( M ˜ ) , x l n ( 1 ) , s n ) = i = 1 n p ( x l i ( m l ) ) x l n ( 1 ) , x 0 n ( M ˜ ) , s n i = 1 n p ( w i ( M ˜ ) , x 0 i ( M ˜ ) , s i ) p ( x l i ( 1 ) ) p ( y l i | x 0 i ( M ˜ ) , x l i ( 1 ) , s i ) = i = 1 n p ( x l i ( m l ) ) p ( w i ( M ˜ ) , y l i ) .
Hence, by the packing lemma, choosing the random variables $(U, X, Y)$ as $(\emptyset, X_l, (W, Y_l))$, respectively, and $\mathcal{A} = \mathcal{I}_{R_l}^{(n)}$, $P\{\mathcal{E}_{l2}\}$ tends to zero as $n \to \infty$ if $R_l < I(X_l; Y_l, W)$. Since $X_l$ and $W$ are independent, this reduces to $R_l < I(X_l; Y_l|W)$. Finally, for the third term, note that for $m_l \neq 1$ and $\tilde{m} \neq \tilde{M}$,
p ( w n ( m ˜ ) , x l n ( m l ) , y l n ) = w n ( M ˜ ) , x l n ( 1 ) , x 0 n ( M ˜ ) , s n p ( w n ( m ˜ ) , w n ( M ˜ ) , x 0 n ( M ˜ ) , x l n ( 1 ) , x l n ( m l ) , s n , y l n ) = p ( x l n ( m l ) ) p ( w n ( m ˜ ) ) w n ( M ˜ ) , x l n ( 1 ) , x 0 n ( M ˜ ) , s n p ( w n ( M ˜ ) , x 0 n ( M ˜ ) , s n ) p ( x l n ( 1 ) ) p ( y l n | x 0 n ( M ˜ ) , x l n ( 1 ) , s n ) = i = 1 n p ( x l i ( m l ) ) p ( w i ( m ˜ ) ) p ( y l i ) .
Again, by the packing lemma, choosing the random variables $(U, X, Y)$ as $(\emptyset, (W, X_l), Y_l)$, respectively, and $\mathcal{A} = \mathcal{I}_{R_l}^{(n)} \times \mathcal{I}_{\tilde{R}}^{(n)}$, $P\{\mathcal{E}_{l3}\}$ tends to zero as $n \to \infty$ if $\tilde{R} + R_l < I(W, X_l; Y_l)$.
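Eliminating the bin rate $\tilde{R}$ from the covering constraint $\tilde{R} > I(W;S)$ and the two packing constraints yields the min-form bound of Proposition 1, $R_l < \min\{I(X_l;Y_l|W),\ I(W,X_l;Y_l) - I(W;S)\}$. A minimal sketch of this elimination, with hypothetical mutual-information values:

```python
def achievable_rate(i_w_s, i_x_y_given_w, i_wx_y):
    """Eliminate the bin rate R~ from the constraints
       R~ > I(W;S),  R_l < I(X_l;Y_l|W),  R~ + R_l < I(W,X_l;Y_l):
    taking R~ arbitrarily close to I(W;S) gives the min of the two bounds."""
    return min(i_x_y_given_w, i_wx_y - i_w_s)

# Hypothetical values (in bits), for illustration only.
r_l = achievable_rate(i_w_s=0.3, i_x_y_given_w=1.1, i_wx_y=1.2)
```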

Appendix B. Proof of Proposition 3

We prove the bound for a general $l \in \{1, 2\}$. By Fano's inequality (Lemma 2),
$$H(M_l|Y_l^n) \leq nR_lP_e^{(n)} + 1 \triangleq n\epsilon_n,$$
where $\epsilon_n$ tends to zero as $n \to \infty$ by the assumption that $\lim_{n\to\infty} P_e^{(n)} = 0$.
Now consider
$$\begin{aligned} nR_l + I(S^n; Y_l^n|M_l) &= H(M_l) - H(M_l|Y_l^n) + H(M_l|Y_l^n) + I(S^n; Y_l^n|M_l) \\ &\leq I(M_l, S^n; Y_l^n) + n\epsilon_n \\ &= h(Y_l^n) - h(Y_l^n|M_l, S^n) + n\epsilon_n \\ &\leq h(Y_l^n) - h(Y_l^n|M_l, X_l^n, X_0^n, S^n) + n\epsilon_n \\ &= h(Y_l^n) - h(Z_l^n) + n\epsilon_n \\ &\leq \frac{n}{2}\log|G_lK_0G_l^T + K_l + G_l\Sigma_{X_0S}G_{sl}^T + G_{sl}\Sigma_{X_0S}^TG_l^T + G_{sl}K_SG_{sl}^T + I| + n\epsilon_n. \end{aligned}$$
I ( S n ; Y l n | M l ) can be lower bounded as follows:
$$I(S^n; Y_l^n|M_l) = h(S^n) - h(S^n|\tilde{Y}_l^n),$$
where $\tilde{Y}_l^n \triangleq G_lX_0^n + G_{sl}S^n + Z_l^n$. The conditional differential entropy can be upper bounded as follows:
$$h(S^n|\tilde{Y}_l^n) \leq \sum_{i=1}^n h(S_i|\tilde{Y}_{l,i}) \leq \frac{n}{2}\log(2\pi e)^t\left|K_S - \Sigma_{S\tilde{Y}_l}\Sigma_{\tilde{Y}_l}^{-1}\Sigma_{S\tilde{Y}_l}^T\right|,$$
where Σ S Y ˜ l = E S Y ˜ l T = K S G s l T + Σ X 0 S T G l T , and
Σ Y ˜ l = G l K 0 G l T + G l Σ X 0 S G s l T + G s l Σ X 0 S T G l T + G s l K S G s l T + I .
Now we apply Sylvester's determinant theorem [38] to obtain
$$\left|K_S - \Sigma_{S\tilde{Y}_l}\Sigma_{\tilde{Y}_l}^{-1}\Sigma_{S\tilde{Y}_l}^T\right| = |K_S|\left|I - K_S^{-1}\Sigma_{S\tilde{Y}_l}\Sigma_{\tilde{Y}_l}^{-1}\Sigma_{S\tilde{Y}_l}^T\right| = |K_S|\left|I - \Sigma_{S\tilde{Y}_l}^TK_S^{-1}\Sigma_{S\tilde{Y}_l}\Sigma_{\tilde{Y}_l}^{-1}\right| = |K_S|\left|\Sigma_{\tilde{Y}_l} - \Sigma_{S\tilde{Y}_l}^TK_S^{-1}\Sigma_{S\tilde{Y}_l}\right|\left|\Sigma_{\tilde{Y}_l}^{-1}\right|.$$
Consider the argument of the middle determinant. Since $K_S^{-1}\Sigma_{S\tilde{Y}_l} = G_{sl}^T + K_S^{-1}\Sigma_{X_0S}^TG_l^T$, it follows that
$$\Sigma_{S\tilde{Y}_l}^TK_S^{-1}\Sigma_{S\tilde{Y}_l} = (G_{sl}K_S + G_l\Sigma_{X_0S})(G_{sl}^T + K_S^{-1}\Sigma_{X_0S}^TG_l^T) = G_{sl}K_SG_{sl}^T + G_{sl}\Sigma_{X_0S}^TG_l^T + G_l\Sigma_{X_0S}G_{sl}^T + G_l\Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^TG_l^T,$$
and
$$\Sigma_{\tilde{Y}_l} - \Sigma_{S\tilde{Y}_l}^TK_S^{-1}\Sigma_{S\tilde{Y}_l} = G_l(K_0 - \Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T)G_l^T + I.$$
Finally, by collecting terms,
$$I(S^n; Y_l^n|M_l) \geq \frac{n}{2}\log\frac{|G_lK_0G_l^T + G_l\Sigma_{X_0S}G_{sl}^T + G_{sl}\Sigma_{X_0S}^TG_l^T + G_{sl}K_SG_{sl}^T + I|}{|G_l(K_0 - \Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T)G_l^T + I|}.$$
Thus, the bound in (11) is satisfied.
It remains to show that $\Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T \preceq K_0$. We use the positive semidefiniteness of the covariance matrix of the vector $(X_0, S)^T$:
$$\det \mathbb{E}\left[(X_0, S)^T(X_0, S)\right] = \det\begin{bmatrix} K_0 & \Sigma_{X_0S} \\ \Sigma_{X_0S}^T & K_S \end{bmatrix} = |K_S| \cdot \left|K_0 - \Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T\right| \geq 0,$$
where the inequality follows since any covariance matrix is by definition positive semidefinite. Rearranging yields
$$\Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T \preceq K_0.$$
This completes the proof of Proposition 3.

Appendix C. Optimal Coefficients for the MIMO Gaussian with Differently Scaled States Channel

We first consider the bound on $R_1$. Consider the first argument of the min in (7a):
$$I(W, X_1; Y_1) - I(W; S) = I(X_1; Y_1) + I(W; Y_1|X_1) - I(W; S|X_1) = I(X_1; Y_1) + h(W|S, X_1) - h(W|X_1, Y_1).$$
It is straightforward to show that
$$I(X_1; Y_1) = \frac{1}{2}\log\frac{|G_1K_0G_1^T + K_1 + G_1BK_SG_{s1}^T + G_{s1}K_SB^TG_1^T + G_{s1}K_SG_{s1}^T + I|}{|G_1K_0G_1^T + G_1BK_SG_{s1}^T + G_{s1}K_SB^TG_1^T + G_{s1}K_SG_{s1}^T + I|},$$
and
h ( W | S , X 1 ) = h ( X 0 ) .
As for the third term, denote $\tilde{Y}_1 = Y_1 - X_1$; thus
$$h(W|X_1, Y_1) = h(W|\tilde{Y}_1) = h(W - M_{W|\tilde{Y}_1}\tilde{Y}_1|\tilde{Y}_1) = h\left(X_0 + AS - M_{W|\tilde{Y}_1}\left(G_1(X_0 + BS) + G_{s1}S + Z_1\right)\,\middle|\,\tilde{Y}_1\right).$$
We require that the term $S$ in the argument of the differential entropy be completely canceled; therefore we choose
$$A^a = M_{W|\tilde{Y}_1}(G_1B + G_{s1}).$$
With the above choice of A, we have
$$h(W|X_1, Y_1) = h\left(X_0 - M_{W|\tilde{Y}_1}(G_1X_0 + Z_1)\right).$$
Finally, we demand that $M_{W|\tilde{Y}_1}$ be the MMSE estimation matrix of $X_0$ given $G_1X_0 + Z_1$, i.e.,
$$M_{W|\tilde{Y}_1} = (G_1K_0G_1^T + I)^{-1}K_0G_1^T.$$
In such case
$$A^a = (G_1K_0G_1^T + I)^{-1}K_0G_1^T(G_1B + G_{s1}).$$
Hence
h ( W | X 1 , Y 1 ) = h ( X 0 | G 1 X 0 + Z 1 ) ,
and thus
h ( W | S , X 1 ) h ( W | X 1 , Y 1 ) = h ( X 0 ) h ( X 0 | G 1 X 0 + Z 1 ) = I ( X 0 ; G 1 X 0 + Z 1 ) = 1 2 log | G 1 K 0 G 1 T + I | .
Furthermore, if we choose $A^b = B + G_1^{-1}G_{s1}$, then
$$I(X_1; Y_1|W) = I(X_1; G_1(X_0 + BS) + G_{s1}S + X_1 + Z_1|X_0 + (B + G_1^{-1}G_{s1})S) = I(X_1; X_1 + Z_1) = \frac{1}{2}\log|K_1 + I|.$$
With this choice of A, h ( W | Y ˜ 1 ) is equal to
$$\begin{aligned} h(W|\tilde{Y}_1) &= h(X_0 + AS\,|\,G_1(X_0 + (A - G_1^{-1}G_{s1})S) + G_{s1}S + Z_1) \\ &= h(X_0 + AS\,|\,G_1(X_0 + AS) + Z_1) \\ &= \frac{1}{2}\log(2\pi e)^t\left|K_0 + AK_SA^T - (K_0G_1^T + AK_SA^TG_1^T)(G_1(K_0 + AK_SA^T)G_1^T + I)^{-1}(K_0G_1^T + AK_SA^TG_1^T)^T\right| \\ &= \frac{1}{2}\log(2\pi e)^t\left|K_0 + AK_SA^T - (K_0 + AK_SA^T)G_1^T(G_1(K_0 + AK_SA^T)G_1^T + I)^{-1}G_1(K_0 + AK_SA^T)^T\right| \\ &= \frac{1}{2}\log(2\pi e)^t\left|K_0 + AK_SA^T\right|\left|I - G_1^T(G_1(K_0 + AK_SA^T)G_1^T + I)^{-1}G_1(K_0 + AK_SA^T)^T\right| \\ &= \frac{1}{2}\log(2\pi e)^t\left|K_0 + AK_SA^T\right|\left|I - (G_1(K_0 + AK_SA^T)G_1^T + I)^{-1}G_1(K_0 + AK_SA^T)^TG_1^T\right| \\ &= \frac{1}{2}\log(2\pi e)^t\left|K_0 + AK_SA^T\right|\frac{\left|G_1(K_0 + AK_SA^T)G_1^T + I - G_1(K_0 + AK_SA^T)^TG_1^T\right|}{\left|G_1(K_0 + AK_SA^T)G_1^T + I\right|} \\ &= \frac{1}{2}\log(2\pi e)^t\frac{\left|K_0 + AK_SA^T\right|}{\left|G_1(K_0 + AK_SA^T)G_1^T + I\right|}. \end{aligned}$$
Thus
$$I(W, X_1; Y_1) - I(W; S) = \frac{1}{2}\log\frac{|K_0|\,|G_1(K_0 + AK_SA^T)G_1^T + K_1 + I|}{|K_0 + AK_SA^T|}.$$
We would like to obtain a condition under which $g_1(A^b, B, K_0) \leq f_1(A^b, B, K_0)$, i.e.,
$$\frac{|K_0|\,|G_1(K_0 + AK_SA^T)G_1^T + K_1 + I|}{|K_0 + AK_SA^T|} \geq |K_1 + I|,$$
which is equivalent to
$$K_0G_1(K_0 + AK_SA^T)G_1^T + K_0K_1 + K_0 \succeq K_0K_1 + K_0 + AK_SA^TK_1 + AK_SA^T.$$
Furthermore, after rearranging terms, we have
$$K_0G_1K_0G_1^T \succeq AK_SA^T(K_1 + I) - K_0G_1AK_SA^TG_1^T.$$
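In the scalar case, the equivalence between the determinant inequality and its rearranged form can be verified numerically. The sketch below compares the two margins (with $AK_SA^T$ collapsed into a single positive scalar) over random, purely illustrative parameter values.

```python
import random

random.seed(1)

def margin_det(K0, G1, AKA, K1):
    """g1 <= f1, cleared of the positive factor |K0 + A K_S A^T| (scalar case)."""
    Kw = K0 + AKA
    return K0 * (G1 * G1 * Kw + K1 + 1.0) - Kw * (K1 + 1.0)

def margin_rearranged(K0, G1, AKA, K1):
    """Rearranged form: K0 G1 K0 G1 - (AKA*(K1+1) - K0 G1 AKA G1)."""
    return K0 * G1 * G1 * K0 - (AKA * (K1 + 1.0) - K0 * G1 * G1 * AKA)

# The two margins agree identically, so the two inequalities are equivalent.
max_diff = 0.0
for _ in range(1000):
    K0, G1, AKA, K1 = (random.uniform(0.01, 10.0) for _ in range(4))
    d1, d2 = margin_det(K0, G1, AKA, K1), margin_rearranged(K0, G1, AKA, K1)
    max_diff = max(max_diff, abs(d1 - d2) / max(1.0, abs(d1)))
```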
The choices of $A^c$ and $A^d$ for the achievability proof of $R_2$ follow from similar steps by interchanging the indices $1 \leftrightarrow 2$.

Appendix D. Proof of Proposition 4

This proof relies greatly on the proof of the outer bound in the differently scaled states scenario in Appendix B. The main differences are:
  • S = ( S 1 , S 2 ) T . Since S 1 and S 2 are independent, the covariance matrix of S is block-diagonal
$$K_S = \begin{bmatrix} K_{S_1} & 0 \\ 0 & K_{S_2} \end{bmatrix},$$
  • the helper signal $X_0$ is correlated with both $S_1$ and $S_2$, and this correlation is characterized by the cross-covariance matrices $\Sigma_{X_0S_1}$ and $\Sigma_{X_0S_2}$, respectively,
  • the state gain matrices $G_{s1}$ and $G_{s2}$ are identity matrices.
Hence, in the independent-states case, we have the following upper bound on n R 1 + I ( S n ; Y 1 n | M 1 )
n R 1 + I ( S n ; Y 1 n | M 1 ) n 2 log | G 1 K 0 G 1 T + K 1 + G 1 Σ X 0 S 1 + Σ X 0 S 1 T G 1 T + K S 1 + I | + n ϵ n .
Let $\tilde{Y}_1^n \triangleq G_1X_0^n + S_1^n + Z_1^n$. We proceed to lower bound $I(S^n; Y_1^n|M_1) = h(S^n) - h(S^n|\tilde{Y}_1^n)$. In a similar fashion to (A6), the conditional differential entropy can be upper bounded as follows:
$$h(S^n|\tilde{Y}_1^n) \leq \frac{n}{2}\log(2\pi e)^{2t}|K_S|\left|\Sigma_{\tilde{Y}_1} - \Sigma_{S\tilde{Y}_1}^T\Sigma_S^{-1}\Sigma_{S\tilde{Y}_1}\right|\left|\Sigma_{\tilde{Y}_1}^{-1}\right|,$$
where
$$\Sigma_{S\tilde{Y}_1} = \mathbb{E}\left[S\tilde{Y}_1^T\right] = \begin{bmatrix} K_{S_1} + \Sigma_{X_0S_1}^TG_1^T \\ \Sigma_{X_0S_2}^TG_1^T \end{bmatrix},$$
and Σ Y ˜ 1 = G 1 K 0 G 1 T + G 1 Σ X 0 S 1 + Σ X 0 S 1 T G 1 T + K S 1 + I . The power 2 t in (A8) is due to the size of the vector ( S 1 , S 2 ) T . The argument of the inner determinant in (A8) can be further evaluated as follows,
$$\Sigma_S^{-1}\Sigma_{S\tilde{Y}_1} = \begin{bmatrix} K_{S_1}^{-1} & 0 \\ 0 & K_{S_2}^{-1} \end{bmatrix}\begin{bmatrix} K_{S_1} + \Sigma_{X_0S_1}^TG_1^T \\ \Sigma_{X_0S_2}^TG_1^T \end{bmatrix} = \begin{bmatrix} I + K_{S_1}^{-1}\Sigma_{X_0S_1}^TG_1^T \\ K_{S_2}^{-1}\Sigma_{X_0S_2}^TG_1^T \end{bmatrix},$$
and
$$\Sigma_{S\tilde{Y}_1}^T\Sigma_S^{-1}\Sigma_{S\tilde{Y}_1} = \begin{bmatrix} K_{S_1} + G_1\Sigma_{X_0S_1} & G_1\Sigma_{X_0S_2} \end{bmatrix}\begin{bmatrix} I + K_{S_1}^{-1}\Sigma_{X_0S_1}^TG_1^T \\ K_{S_2}^{-1}\Sigma_{X_0S_2}^TG_1^T \end{bmatrix} = K_{S_1} + G_1\Sigma_{X_0S_1} + \Sigma_{X_0S_1}^TG_1^T + G_1\Sigma_{X_0S_1}K_{S_1}^{-1}\Sigma_{X_0S_1}^TG_1^T + G_1\Sigma_{X_0S_2}K_{S_2}^{-1}\Sigma_{X_0S_2}^TG_1^T.$$
Thus,
$$\Sigma_{\tilde{Y}_1} - \Sigma_{S\tilde{Y}_1}^T\Sigma_S^{-1}\Sigma_{S\tilde{Y}_1} = G_1(K_0 - \Sigma_{X_0S_1}K_{S_1}^{-1}\Sigma_{X_0S_1}^T - \Sigma_{X_0S_2}K_{S_2}^{-1}\Sigma_{X_0S_2}^T)G_1^T + I = G_1(K_0 - \Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T)G_1^T + I,$$
where in the last equality we used the definition of Σ X 0 S from (26). Consequently, we established an upper bound on I ( S n ; Y 1 n | M 1 ) :
$$I(S^n; Y_1^n|M_1) \geq \frac{n}{2}\log\frac{|G_1K_0G_1^T + G_1\Sigma_{X_0S_1} + \Sigma_{X_0S_1}^TG_1^T + K_{S_1} + I|}{|G_1(K_0 - \Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T)G_1^T + I|}.$$
Finally, combining (A7) and (A9), the bound in (25) with $l = 1$ is satisfied. The bound in (25) for $l = 2$ follows from similar considerations. It remains to show that $\Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T \preceq K_0$. We use the positive semidefiniteness of the covariance matrix of the vector $(X_0, S_1, S_2)^T$:
$$\begin{aligned} \det \mathbb{E}\left[(X_0, S_1, S_2)^T(X_0, S_1, S_2)\right] &= \det\begin{bmatrix} K_0 & \Sigma_{X_0S_1} & \Sigma_{X_0S_2} \\ \Sigma_{X_0S_1}^T & K_{S_1} & 0 \\ \Sigma_{X_0S_2}^T & 0 & K_{S_2} \end{bmatrix} \\ &= \left|\begin{bmatrix} K_{S_1} & 0 \\ 0 & K_{S_2} \end{bmatrix}\right| \cdot \left|K_0 - \begin{bmatrix} \Sigma_{X_0S_1} & \Sigma_{X_0S_2} \end{bmatrix}\begin{bmatrix} K_{S_1}^{-1} & 0 \\ 0 & K_{S_2}^{-1} \end{bmatrix}\begin{bmatrix} \Sigma_{X_0S_1}^T \\ \Sigma_{X_0S_2}^T \end{bmatrix}\right| \\ &= \left|\begin{bmatrix} K_{S_1} & 0 \\ 0 & K_{S_2} \end{bmatrix}\right| \cdot \left|K_0 - \Sigma_{X_0S_1}K_{S_1}^{-1}\Sigma_{X_0S_1}^T - \Sigma_{X_0S_2}K_{S_2}^{-1}\Sigma_{X_0S_2}^T\right| \geq 0, \end{aligned}$$
where the last inequality follows from the positive semidefiniteness of the covariance matrix. Rearranging yields
$$\Sigma_{X_0S_1}K_{S_1}^{-1}\Sigma_{X_0S_1}^T + \Sigma_{X_0S_2}K_{S_2}^{-1}\Sigma_{X_0S_2}^T = \Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T \preceq K_0.$$
This completes the proof of Proposition 4.
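As a numerical sanity check of the final matrix inequality (ours, not part of the original proof), one can draw a random joint distribution in which the helper input takes the form $X_0=B_1S_1+B_2S_2+W$ with $S_1$, $S_2$, $W$ independent, and verify that the Schur complement $K_0-\Sigma_{X_0S}K_S^{-1}\Sigma_{X_0S}^T$ is positive semi-definite. The dimension $t$ and all random matrices below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
t = 3  # ambient dimension (illustrative choice)

def rand_psd(n):
    a = rng.standard_normal((n, n))
    return a @ a.T + 1e-6 * np.eye(n)

# Helper input of the form X0 = B1 S1 + B2 S2 + W, with S1, S2, W independent.
KS1, KS2, KW = rand_psd(t), rand_psd(t), rand_psd(t)
B1, B2 = rng.standard_normal((t, t)), rng.standard_normal((t, t))

K0 = B1 @ KS1 @ B1.T + B2 @ KS2 @ B2.T + KW      # cov(X0)
Sig1, Sig2 = B1 @ KS1, B2 @ KS2                  # Sigma_{X0 S1}, Sigma_{X0 S2}

# Sigma_{X0 S} K_S^{-1} Sigma_{X0 S}^T with block-diagonal K_S.
quad = (Sig1 @ np.linalg.inv(KS1) @ Sig1.T
        + Sig2 @ np.linalg.inv(KS2) @ Sig2.T)

# The Schur complement K0 - quad must be PSD (here it equals KW exactly).
eigs = np.linalg.eigvalsh(K0 - quad)
is_psd = bool(eigs.min() >= -1e-8)
```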

Appendix E. Proof of Proposition 5

We use random codes and fix the following joint distribution:
$$P_{SUVX_0X_1X_2Y_1Y_2}=P_{SUV}\,P_{X_0\mid SUV}\,P_{X_1}\,P_{X_2}\,P_{Y_1\mid SX_0X_1}\,P_{Y_2\mid SX_0X_2}.$$

Appendix E.1. Codebook Generation

Randomly and independently generate $2^{n\tilde R_U}$ sequences $u^n(r)$, $r\in\mathcal{I}_{\tilde R_U}^{(n)}$, each according to $\prod_{i=1}^nP_U(u_i)$. Similarly, randomly and independently generate $2^{n\tilde R_V}$ sequences $v^n(t)$, $t\in\mathcal{I}_{\tilde R_V}^{(n)}$, each according to $\prod_{i=1}^nP_V(v_i)$.
Let $l\in\{1,2\}$. Randomly and independently generate $2^{nR_l}$ sequences $x_l^n(m_l)$, $m_l\in\mathcal{I}_{R_l}^{(n)}$, each according to $\prod_{i=1}^nP_{X_l}(x_{li})$.
These sequences constitute the codebook, which is revealed to the encoders and the decoders.
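The codebook construction above can be sketched numerically. The block length, rates, and unit per-letter powers below are illustrative stand-ins (chosen small enough that the codebooks can be enumerated); the paper's inputs carry covariance constraints $K_l$ rather than unit power.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: short block and small rates.
n = 16
R_U, R_V, R1, R2 = 0.5, 0.5, 0.25, 0.25

def gaussian_codebook(rate, n, power=1.0):
    """2^{nR} rows, each an i.i.d. N(0, power) sequence of length n."""
    size = int(round(2 ** (n * rate)))
    return rng.normal(0.0, np.sqrt(power), size=(size, n))

U_cb = gaussian_codebook(R_U, n)    # u^n(r)
V_cb = gaussian_codebook(R_V, n)    # v^n(t)
X1_cb = gaussian_codebook(R1, n)    # x_1^n(m_1)
X2_cb = gaussian_codebook(R2, n)    # x_2^n(m_2)
```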

Appendix E.2. Encoding

Fix $\epsilon>\epsilon'>0$. The encoder at the helper, given $s^n$, finds $\tilde r$ such that
$$(s^n,u^n(\tilde r))\in\mathcal{T}_{\epsilon'}^{(n)}(P_{SU}).$$
If there is more than one such $\tilde r$, it chooses the smallest one; if no such $\tilde r$ can be found, it declares an error. Next, given $(s^n,u^n(\tilde r))$, it finds $\tilde t$ such that
$$(s^n,u^n(\tilde r),v^n(\tilde t))\in\mathcal{T}_{\epsilon'}^{(n)}(P_{SUV}).$$
If there is more than one such $\tilde t$, it chooses the smallest one; if no such $\tilde t$ can be found, it declares an error. Then, given $s^n$, $u^n(\tilde r)$ and $v^n(\tilde t)$, the helper generates $x_0^n$ with independent components, the $i$-th component distributed according to $P_{X_0\mid SUV}(x_{0i}\mid s_i,u_i,v_i)$. Let $(m_1,m_2)$ be the messages to be sent. The encoder at transmitter $l$ transmits $x_l^n(m_l)$.
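A toy scalar-Gaussian sketch of the helper's codeword search, with the empirical correlation $\hat{\mathbb{E}}[SU]$ standing in for a full joint-typicality test; the parameters, the auxiliary decomposition $U=X_{00}+\alpha S$, and the acceptance threshold are our own simplifications, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scalar setting (assumed): S ~ N(0, Q) and U = X00 + alpha*S,
# with X00 ~ N(0, P) independent of S.
n, Q, P, alpha, eps = 16, 1.0, 1.0, 0.7, 0.3
R_U = 0.75
num_u = int(round(2 ** (n * R_U)))   # 4096 candidate codewords

s = rng.normal(0.0, np.sqrt(Q), n)

# Codewords are drawn from the *marginal* of U (with fresh states), so they
# are independent of the realized s^n, as in the random-coding argument.
U_cb = (rng.normal(0.0, np.sqrt(P), (num_u, n))
        + alpha * rng.normal(0.0, np.sqrt(Q), (num_u, n)))

# Stand-in for joint typicality: the empirical correlation (1/n) sum s_i u_i
# should lie within eps of E[S U] = alpha*Q.
target = alpha * Q
emp = U_cb @ s / n
typical = np.flatnonzero(np.abs(emp - target) < eps)
r_tilde = int(typical[0]) if typical.size else None  # smallest index, else error
```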

Appendix E.3. Decoding

Given $y_1^n$, decoder 1 declares that $\hat m_1$ was sent if it is the unique message for which
$$(u^n(\hat r),x_1^n(\hat m_1),y_1^n)\in\mathcal{T}_\epsilon^{(n)}(P_{UX_1Y_1})$$
for some $\hat r\in\mathcal{I}_{\tilde R_U}^{(n)}$. If none, or more than one, such $\hat m_1$ can be found, it declares an error.
Similarly, given $y_2^n$, decoder 2 finds the unique message $\hat m_2$ such that
$$(v^n(\hat t),x_2^n(\hat m_2),y_2^n)\in\mathcal{T}_\epsilon^{(n)}(P_{VX_2Y_2})$$
for some $\hat t\in\mathcal{I}_{\tilde R_V}^{(n)}$. If none, or more than one, such $\hat m_2$ can be found, it declares an error.

Appendix E.4. Analysis of the Probability of Error

Assume without loss of generality that the message pair ( M 1 , M 2 ) = ( 1 , 1 ) was sent and let r 0 be the chosen index for u n and t 0 be the chosen index for v n . The encoder at the helper makes an error only if one or both of the following errors occur:
$$\begin{aligned}\mathcal{E}_{01}&=\big\{(S^n,U^n(r))\notin\mathcal{T}_{\epsilon'}^{(n)}(P_{SU})\ \text{for all}\ r\in\mathcal{I}_{\tilde R_U}^{(n)}\big\},\\ \mathcal{E}_{02}&=\big\{(S^n,U^n(r_0),V^n(t))\notin\mathcal{T}_{\epsilon'}^{(n)}(P_{SUV})\ \text{for all}\ t\in\mathcal{I}_{\tilde R_V}^{(n)}\big\}.\end{aligned}$$
Thus, by the union-of-events bound, the probability that the encoder at the helper makes an error can be upper bounded as
$$\Pr(\mathcal{E}_0)=\Pr(\mathcal{E}_{01}\cup\mathcal{E}_{02})\le\Pr(\mathcal{E}_{01})+\Pr(\mathcal{E}_{01}^c\cap\mathcal{E}_{02}).$$
By the covering lemma, with $U\leftarrow\emptyset$, $X\leftarrow S$, $\hat X\leftarrow U$, and $\mathcal{A}=\mathcal{I}_{\tilde R_U}^{(n)}$, $\Pr(\mathcal{E}_{01})$ tends to zero as $n\to\infty$ if $\tilde R_U>I(U;S)+\delta(\epsilon')$.
Similarly, using the covering lemma with $U\leftarrow\emptyset$, $X\leftarrow(S,U)$, $\hat X\leftarrow V$, and $\mathcal{A}=\mathcal{I}_{\tilde R_V}^{(n)}$, $\Pr(\mathcal{E}_{01}^c\cap\mathcal{E}_{02})$ tends to zero as $n\to\infty$ if $\tilde R_V>I(V;S,U)+\delta(\epsilon')$.
The decoder at receiver 1 makes an error only if one or more of the following events occur:
$$\begin{aligned}\mathcal{E}_{11}&=\big\{(U^n(r_0),X_1^n(1),Y_1^n)\notin\mathcal{T}_\epsilon^{(n)}(P_{UX_1Y_1})\big\},\\ \mathcal{E}_{12}&=\big\{(U^n(r_0),X_1^n(m_1),Y_1^n)\in\mathcal{T}_\epsilon^{(n)}(P_{UX_1Y_1})\ \text{for some}\ m_1\neq1\big\},\\ \mathcal{E}_{13}&=\big\{(U^n(r),X_1^n(m_1),Y_1^n)\in\mathcal{T}_\epsilon^{(n)}(P_{UX_1Y_1})\ \text{for some}\ r\neq r_0\ \text{and}\ m_1\neq1\big\}.\end{aligned}$$
Again, by the union-of-events bound, the probability that the decoder at receiver 1 makes an error can be upper bounded as
$$\Pr(\mathcal{E}_1)=\Pr(\mathcal{E}_{11}\cup\mathcal{E}_{12}\cup\mathcal{E}_{13})\le\Pr(\mathcal{E}_{01}\cup\mathcal{E}_{11}\cup\mathcal{E}_{12}\cup\mathcal{E}_{13})\le\Pr(\mathcal{E}_{01})+\Pr(\mathcal{E}_{01}^c\cap\mathcal{E}_{11})+\Pr(\mathcal{E}_{01}^c\cap\mathcal{E}_{12})+\Pr(\mathcal{E}_{13}).\tag{A10}$$
We have already shown that $\Pr(\mathcal{E}_{01})$ tends to zero as $n\to\infty$ if $\tilde R_U>I(U;S)+\delta(\epsilon')$. Next, note that
$$\mathcal{E}_{01}^c=\big\{(S^n,U^n(r_0))\in\mathcal{T}_{\epsilon'}^{(n)}(P_{SU})\big\}=\big\{(S^n,U^n(r_0),X_0^n)\in\mathcal{T}_{\epsilon'}^{(n)}(P_{SUX_0})\big\},$$
and
$$P_{Y_1^n\mid S^nU^n(r_0)X_0^nX_1^n(1)}(y_1^n\mid s^n,u^n,x_0^n,x_1^n)=\prod_{i=1}^nP_{Y_1\mid SUX_0X_1}(y_{1i}\mid s_i,u_i,x_{0i},x_{1i})=\prod_{i=1}^nP_{Y_1\mid SX_0X_1}(y_{1i}\mid s_i,x_{0i},x_{1i}).$$
Hence, by the conditional typicality lemma, $\Pr(\mathcal{E}_{01}^c\cap\mathcal{E}_{11})$ tends to zero as $n\to\infty$.
As for the probability of the event $\mathcal{E}_{01}^c\cap\mathcal{E}_{12}$: for $m_1\neq1$, $X_1^n(m_1)$ is independent of $(U^n(r_0),Y_1^n)\sim\prod_{i=1}^nP_{UY_1}(u_i,y_{1i})$. Hence, by the packing lemma, with $U\leftarrow\emptyset$, $X\leftarrow X_1$, $Y\leftarrow(U,Y_1)$ and $\mathcal{A}=[2:2^{nR_1}]$, $\Pr(\mathcal{E}_{01}^c\cap\mathcal{E}_{12})$ tends to zero as $n\to\infty$ if $R_1<I(X_1;U,Y_1)-\delta(\epsilon)$. Since $X_1$ and $U$ are mutually independent, the latter condition is equivalent to $R_1<I(X_1;Y_1\mid U)-\delta(\epsilon)$.
Finally, since for $m_1\neq1$ and $r\neq r_0$ the pair $(X_1^n(m_1),U^n(r))$ is independent of $(X_1^n(1),U^n(r_0),Y_1^n)$, again by the packing lemma, with $U\leftarrow\emptyset$, $X\leftarrow(U,X_1)$, $Y\leftarrow Y_1$ and $\mathcal{A}=[2:2^{nR_1}]\times\big([1:2^{n\tilde R_U}]\setminus\{r_0\}\big)$, $\Pr(\mathcal{E}_{13})$ tends to zero as $n\to\infty$ if $\tilde R_U+R_1<I(U,X_1;Y_1)-\delta(\epsilon)$.
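The role of the packing-lemma rate conditions can be illustrated with a minimal point-to-point decoding experiment (our own toy setup, not from the paper): at a rate well below capacity, a randomly drawn Gaussian codebook with minimum-distance decoding, a proxy for joint-typicality decoding, recovers the transmitted message with high probability.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative numbers: rate far below the AWGN capacity
# 0.5*log2(1 + P/N0) ≈ 1.16 bits/use, so decoding succeeds w.h.p.
n, R1, P, N0 = 128, 0.05, 4.0, 1.0
num_msgs = int(round(2 ** (n * R1)))

# Random Gaussian codebook; transmit message m_sent over the AWGN channel.
X_cb = rng.normal(0.0, np.sqrt(P), (num_msgs, n))
m_sent = 17
y = X_cb[m_sent] + rng.normal(0.0, np.sqrt(N0), n)

# Minimum-distance decoding as a proxy for joint-typicality decoding.
m_hat = int(np.argmin(((X_cb - y) ** 2).sum(axis=1)))
```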
Next, consider the average probability of error for decoder 2. The decoder at receiver 2 makes an error only if one or more of the following events occur:
$$\begin{aligned}\mathcal{E}_{21}&=\big\{(V^n(t_0),X_2^n(1),Y_2^n)\notin\mathcal{T}_\epsilon^{(n)}(P_{VX_2Y_2})\big\},\\ \mathcal{E}_{22}&=\big\{(V^n(t_0),X_2^n(m_2),Y_2^n)\in\mathcal{T}_\epsilon^{(n)}(P_{VX_2Y_2})\ \text{for some}\ m_2\neq1\big\},\\ \mathcal{E}_{23}&=\big\{(V^n(t),X_2^n(m_2),Y_2^n)\in\mathcal{T}_\epsilon^{(n)}(P_{VX_2Y_2})\ \text{for some}\ t\neq t_0\ \text{and}\ m_2\neq1\big\}.\end{aligned}$$
Similar to (A10), the probability that the decoder at receiver 2 makes an error can be upper bounded as
$$\Pr(\mathcal{E}_2)\le\Pr(\mathcal{E}_0)+\Pr(\mathcal{E}_0^c\cap\mathcal{E}_{21})+\Pr(\mathcal{E}_0^c\cap\mathcal{E}_{22})+\Pr(\mathcal{E}_{23}).$$
In a very similar fashion to the analysis for decoder 1, it can be shown that $\Pr(\mathcal{E}_2)$ tends to zero as $n\to\infty$ if
$$\tilde R_V>I(V;S,U)+\delta(\epsilon'),\qquad R_2<I(X_2;Y_2\mid V)-\delta(\epsilon),\qquad R_2+\tilde R_V<I(V,X_2;Y_2)-\delta(\epsilon).$$
Finally, combining the aforementioned bounds and eliminating $\tilde R_U$ and $\tilde R_V$ yields the following achievable region:
$$R_1\le\min\big\{I(U,X_1;Y_1)-I(U;S),\;I(X_1;Y_1\mid U)\big\},\qquad R_2\le\min\big\{I(V,X_2;Y_2)-I(V;U,S),\;I(X_2;Y_2\mid V)\big\}.$$
This completes the proof of achievability.

Appendix F. Optimal Coefficients for the MIMO Gaussian with Independent States Channel

We first consider the bound on $R_1$. Consider the first term in the minimum in (28a):
$$I(U,X_1;Y_1)-I(U;S)=I(X_1;Y_1)+I(U;Y_1\mid X_1)-I(U;S\mid X_1)=I(X_1;Y_1)+h(U\mid S,X_1)-h(U\mid X_1,Y_1),\tag{A11}$$
where the first equality uses the independence of $X_1$ and $(U,S)$.
It is straightforward to show that
$$I(X_1;Y_1)=\frac12\log\frac{\big|G_1K_0G_1^T+K_1+G_1B_1K_{S_1}+K_{S_1}B_1^TG_1^T+K_{S_1}+I\big|}{\big|G_1K_0G_1^T+G_1B_1K_{S_1}+K_{S_1}B_1^TG_1^T+K_{S_1}+I\big|},$$
and
$$h(U\mid S,X_1)=h(X_{01})=\frac12\log\big((2\pi e)^t|K_{01}|\big).$$
As for the third term in (A11), denote $\tilde Y_1=Y_1-X_1$. Then
$$\begin{aligned}h(U\mid X_1,Y_1)&=h(U\mid\tilde Y_1)=h\big(U-M_{U\mid\tilde Y_1}\tilde Y_1\,\big|\,\tilde Y_1\big)\\&=h\Big(X_{01}+A_{11}S_1+A_{12}S_2-M_{U\mid\tilde Y_1}\big(G_1(X_{01}+X_{02}+B_1S_1+B_2S_2)+S_1+Z_1\big)\,\Big|\,\tilde Y_1\Big).\end{aligned}$$
We require that the terms $S_1$ and $S_2$ in the argument of the differential entropy be completely canceled; hence we choose
$$A_{11}^a=M_{U\mid\tilde Y_1}(G_1B_1+I),\qquad A_{12}^a=M_{U\mid\tilde Y_1}G_1B_2.$$
With the above choice of $(A_{11},A_{12})$, we have
$$h(U\mid X_1,Y_1)=h\big(X_{01}-M_{U\mid\tilde Y_1}(G_1(X_{01}+X_{02})+Z_1)\,\big|\,\tilde Y_1\big).$$
Finally, we require that $M_{U\mid\tilde Y_1}$ be the MMSE estimator of $X_{01}$ given $G_1(X_{01}+X_{02})+Z_1$, i.e.,
$$M_{U\mid\tilde Y_1}=\big(G_1(K_{01}+K_{02})G_1^T+I\big)^{-1}K_{01}G_1^T.$$
In this case,
$$A_{11}^a=\big(G_1(K_{01}+K_{02})G_1^T+I\big)^{-1}K_{01}G_1^T(G_1B_1+I),\qquad A_{12}^a=\big(G_1(K_{01}+K_{02})G_1^T+I\big)^{-1}K_{01}G_1^TG_1B_2.$$
Hence
$$h(U\mid X_1,Y_1)=h\big(X_{01}\mid G_1(X_{01}+X_{02})+Z_1\big),$$
and thus
$$h(U\mid S,X_1)-h(U\mid X_1,Y_1)=h(X_{01})-h\big(X_{01}\mid G_1(X_{01}+X_{02})+Z_1\big)=I\big(X_{01};G_1(X_{01}+X_{02})+Z_1\big)=\frac12\log\frac{\big|G_1(K_{01}+K_{02})G_1^T+I\big|}{\big|G_1K_{02}G_1^T+I\big|}.$$
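The two properties used above, orthogonality of the MMSE error and the equality of the two ways of computing $I\big(X_{01};G_1(X_{01}+X_{02})+Z_1\big)$, can be checked numerically. The sketch below uses the standard column-vector LMMSE form $K_{01}G_1^T\Sigma^{-1}$ (which coincides with the expression above in the scalar case); the dimension and random matrices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
t = 3  # illustrative dimension

def rand_psd(n):
    a = rng.standard_normal((n, n))
    return a @ a.T + 0.1 * np.eye(n)

def logdet(a):
    return np.linalg.slogdet(a)[1]

K01, K02 = rand_psd(t), rand_psd(t)
G1 = rng.standard_normal((t, t))

# Observation W = G1 (X01 + X02) + Z1, with X01, X02, Z1 independent.
Sigma_W = G1 @ (K01 + K02) @ G1.T + np.eye(t)

# Column-vector LMMSE estimator of X01 from W; by the orthogonality
# principle the error X01 - M W is uncorrelated with W.
M = K01 @ G1.T @ np.linalg.inv(Sigma_W)
cross = K01 @ G1.T - M @ Sigma_W            # Sigma_{X01,W} - M Sigma_W
ortho_ok = bool(np.allclose(cross, 0.0, atol=1e-8))

# I(X01; W) via h(X01) - h(X01 | W) ...
cond_cov = K01 - M @ G1 @ K01               # cov(X01 | W)
I_a = 0.5 * (logdet(K01) - logdet(cond_cov))
# ... and via h(W) - h(W | X01), the closed form derived above.
I_b = 0.5 * (logdet(Sigma_W) - logdet(G1 @ K02 @ G1.T + np.eye(t)))
match = bool(np.isclose(I_a, I_b, atol=1e-8))
```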
Furthermore, if we choose $A_{11}^b=B_1+G_1^{-1}$ and $A_{12}^b=B_2$, then
$$\begin{aligned}I(X_1;Y_1\mid U)&=I\big(X_1;G_1(X_{01}+X_{02}+B_1S_1+B_2S_2)+X_1+S_1+Z_1\,\big|\,X_{01}+A_{11}^bS_1+A_{12}^bS_2\big)\\&=I(X_1;G_1X_{02}+X_1+Z_1)=\frac12\log\frac{\big|G_1K_{02}G_1^T+K_1+I\big|}{\big|G_1K_{02}G_1^T+I\big|}.\end{aligned}$$
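The state cancellation behind this step can be verified mechanically: with $A_{11}^b=B_1+G_1^{-1}$ and $A_{12}^b=B_2$, the coefficients of $S_1$ and $S_2$ in $Y_1-G_1U$ vanish identically. A small numerical check (illustrative dimension and random matrices):

```python
import numpy as np

rng = np.random.default_rng(4)
t = 3  # illustrative dimension

G1 = rng.standard_normal((t, t)) + 5 * np.eye(t)   # keep G1 well-conditioned
B1, B2 = rng.standard_normal((t, t)), rng.standard_normal((t, t))

A11b = B1 + np.linalg.inv(G1)    # proposed A_11^b
A12b = B2                        # proposed A_12^b

# In Y1 - G1*U the coefficient of S1 is G1 B1 + I - G1 A11b and the
# coefficient of S2 is G1 B2 - G1 A12b; both must vanish, leaving
# G1 X02 + X1 + Z1, hence I(X1; Y1 | U) = I(X1; G1 X02 + X1 + Z1).
c_S1 = G1 @ B1 + np.eye(t) - G1 @ A11b
c_S2 = G1 @ B2 - G1 @ A12b
cancelled = bool(np.allclose(c_S1, 0, atol=1e-9) and
                 np.allclose(c_S2, 0, atol=1e-9))
```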
We next consider the bound on $R_2$. Consider the first term in the minimum in (28b):
$$I(V,X_2;Y_2)-I(V;U,S)=I(X_2;Y_2)+I(V;Y_2\mid X_2)-I(V;U,S\mid X_2)=I(X_2;Y_2)+h(V\mid U,S,X_2)-h(V\mid Y_2,X_2),\tag{A12}$$
where the first equality uses the independence of $X_2$ and $(V,U,S)$.
It is straightforward to show that
$$I(X_2;Y_2)=\frac12\log\frac{\big|G_2K_0G_2^T+K_2+G_2B_2K_{S_2}+K_{S_2}B_2^TG_2^T+K_{S_2}+I\big|}{\big|G_2K_0G_2^T+G_2B_2K_{S_2}+K_{S_2}B_2^TG_2^T+K_{S_2}+I\big|},$$
and $h(V\mid U,S,X_2)=h(X_{02})$. As for the third term in (A12), denote $\tilde Y_2=Y_2-X_2$. Then
$$\begin{aligned}h(V\mid Y_2,X_2)&=h(V\mid\tilde Y_2)=h\big(V-M_{V\mid\tilde Y_2}\tilde Y_2\,\big|\,\tilde Y_2\big)\\&=h\Big(X_{02}+A_{20}X_{01}+A_{21}S_1+A_{22}S_2-M_{V\mid\tilde Y_2}\big(G_2(X_{01}+X_{02}+B_1S_1+B_2S_2)+S_2+Z_2\big)\,\Big|\,\tilde Y_2\Big).\end{aligned}$$
We require that the terms $X_{01}$, $S_1$ and $S_2$ in the argument of the differential entropy be completely canceled; hence we choose
$$A_{20}^a=M_{V\mid\tilde Y_2}G_2,\qquad A_{21}^a=M_{V\mid\tilde Y_2}G_2B_1,\qquad A_{22}^a=M_{V\mid\tilde Y_2}(G_2B_2+I).$$
With the above choice of $(A_{20},A_{21},A_{22})$, we have
$$h(V\mid Y_2,X_2)=h\big(X_{02}-M_{V\mid\tilde Y_2}(G_2X_{02}+Z_2)\,\big|\,\tilde Y_2\big).$$
Similarly, we require that $M_{V\mid\tilde Y_2}$ be the MMSE estimator of $X_{02}$ given $G_2X_{02}+Z_2$, i.e.,
$$M_{V\mid\tilde Y_2}=\big(G_2K_{02}G_2^T+I\big)^{-1}K_{02}G_2^T.$$
In this case,
$$A_{20}^a=\big(G_2K_{02}G_2^T+I\big)^{-1}K_{02}G_2^TG_2,\qquad A_{21}^a=\big(G_2K_{02}G_2^T+I\big)^{-1}K_{02}G_2^TG_2B_1,\qquad A_{22}^a=\big(G_2K_{02}G_2^T+I\big)^{-1}K_{02}G_2^T(G_2B_2+I).$$
Hence
$$h(V\mid Y_2,X_2)=h(X_{02}\mid G_2X_{02}+Z_2),$$
and thus
$$h(V\mid U,S,X_2)-h(V\mid Y_2,X_2)=h(X_{02})-h(X_{02}\mid G_2X_{02}+Z_2)=I(X_{02};G_2X_{02}+Z_2)=\frac12\log\big|G_2K_{02}G_2^T+I\big|.$$
Furthermore, if we choose $A_{20}^b=I$, $A_{21}^b=B_1$ and $A_{22}^b=B_2+G_2^{-1}$, then
$$I(X_2;Y_2\mid V)=I(X_2;X_2+Z_2)=\frac12\log|K_2+I|.$$
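As with decoder 1, this choice can be checked mechanically: with $A_{20}^b=I$, $A_{21}^b=B_1$, $A_{22}^b=B_2+G_2^{-1}$, the coefficients of $X_{01}$, $S_1$ and $S_2$ in $Y_2-G_2V$ vanish, leaving $X_2+Z_2$. A small numerical check (illustrative dimension and random matrices):

```python
import numpy as np

rng = np.random.default_rng(6)
t = 3  # illustrative dimension

G2 = rng.standard_normal((t, t)) + 5 * np.eye(t)   # keep G2 well-conditioned
B1, B2 = rng.standard_normal((t, t)), rng.standard_normal((t, t))

A20b = np.eye(t)                 # proposed A_20^b
A21b = B1                        # proposed A_21^b
A22b = B2 + np.linalg.inv(G2)    # proposed A_22^b

# Residual coefficients of X01, S1, S2 in Y2 - G2*V; all must vanish,
# leaving Y2 - G2*V = X2 + Z2, i.e. I(X2; Y2 | V) = I(X2; X2 + Z2).
c_X01 = G2 @ (np.eye(t) - A20b)
c_S1 = G2 @ B1 - G2 @ A21b
c_S2 = G2 @ B2 + np.eye(t) - G2 @ A22b
clean = bool(np.allclose(c_X01, 0, atol=1e-9) and
             np.allclose(c_S1, 0, atol=1e-9) and
             np.allclose(c_S2, 0, atol=1e-9))
```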

Figure 1. Orthogonal Multiple Access Techniques.
Figure 2. General State-Dependent Parallel Channel with a helper.
Figure 3. Particular NOMA configuration.
Figure 4. Point-to-Point Helper Channel.
Figure 5. The state-dependent parallel channel with same but differently scaled states and a state-cognitive helper.
Figure 6. Segments of the capacity region for all cases of channel parameters.
Figure 7. Capacity bounds for channel parameters $P_0=6$, $P_1=P_2=5$, $Q=12$, $b=0.8$ and various state gains $a$.
Figure 8. Capacity gap for fixed $P_0$.
Figure 9. Capacity gap for fixed b.
Figure 10. Inner and outer bounds for $(a,b,P_0,P_1,P_2,Q)=(3.5,5,2,5,5,12)$.
Figure 11. MIMO State-Dependent Parallel Channel with a Helper.
Figure 12. Numerical Results.