Information Bottleneck for a Rayleigh Fading MIMO Channel with an Oblivious Relay

This paper considers the information bottleneck (IB) problem of a Rayleigh fading multiple-input multiple-output (MIMO) channel with an oblivious relay. The relay is constrained to operate without knowledge of the codebooks, i.e., it performs oblivious processing. Moreover, due to the bottleneck constraint, it is impossible for the relay to inform the destination node of the perfect channel state information (CSI) in each channel realization. To evaluate the bottleneck rate, we first provide an upper bound by assuming that the destination node can get the perfect CSI at no cost. Then, we provide four achievable schemes, each of which satisfies the bottleneck constraint and gives a lower bound to the bottleneck rate. In the first and second schemes, the relay splits the capacity of the relay-destination link into two parts and conveys both the CSI and its observation to the destination node. Due to the CSI transmission, the performance of these two schemes is sensitive to the MIMO channel dimension, especially the channel input dimension. To ensure good performance when the channel dimension grows large, in the third and fourth achievable schemes the relay transmits only a compressed observation to the destination node. Numerical results show that with simple symbol-by-symbol oblivious relay processing and compression, the proposed achievable schemes work well, yielding lower bounds that come quite close to the upper bound over a wide range of relevant system parameters.


I. INTRODUCTION
For a Markov chain X → Y → Z and an assigned joint probability distribution p_{X,Y}, consider the following information bottleneck (IB) problem:

max_{p_{Z|Y}} I(X; Z)  s.t. I(Y; Z) ≤ C,   (1)

where C is the bottleneck constraint parameter and the optimization is with respect to the conditional probability distribution p_{Z|Y} of Z given Y. Formulation (1) was introduced by Tishby in [1], and has found remarkable applications in supervised and unsupervised learning problems such as classification, clustering, and prediction.
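As a small numerical illustration of formulation (1) (a sketch, not taken from the paper), the quantities involved can be computed directly for a toy discrete chain X → Y → Z with a randomly chosen compression kernel p(z|y). The data processing inequality guarantees that the bottleneck rate I(X;Z) can exceed neither the compression rate I(Y;Z) nor the relevance ceiling I(X;Y):

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits for a joint pmf given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)

# Random joint pmf p(x, y) and a random compression kernel p(z | y).
pxy = rng.random((4, 4)); pxy /= pxy.sum()
pz_y = rng.random((4, 3)); pz_y /= pz_y.sum(axis=1, keepdims=True)

# Markov chain X -> Y -> Z: p(x, z) = sum_y p(x, y) p(z | y).
pxz = pxy @ pz_y
# p(y, z) = p(y) p(z | y).
pyz = np.diag(pxy.sum(axis=0)) @ pz_y

I_xz = mutual_information(pxz)
I_yz = mutual_information(pyz)
I_xy = mutual_information(pxy)
# Data processing: I(X;Z) <= min{I(X;Y), I(Y;Z)}.
print(I_xz <= I_yz and I_xz <= I_xy)
```

The IB problem (1) searches over all kernels p(z|y) for the one maximizing I(X;Z) subject to I(Y;Z) ≤ C; the random kernel above is just one feasible point.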
An interesting application of the IB problem in communications consists of a source node, an oblivious relay, and a destination node, which is connected to the relay via an error-free link with capacity C. The source node sends codewords over a communication channel and an observation is made at the relay. X and Y are respectively the channel input from the source node and the output at the relay. The relay is oblivious in the sense that it cannot itself decode the information message of the source node. This feature can be modeled rigorously by assuming that the source and destination nodes make use of a codebook selected at random from a library, while the relay is unaware of this random selection. For example, in a cloud radio access network (C-RAN), each remote radio head (RRH) acts as a relay and is usually constrained to implement only radio functionalities, while the baseband functionalities are migrated to the cloud central processor [12]. Considering the relatively simple structure of the RRHs, it is usually prohibitive to let them know the codebooks and random encoding operations, particularly as the network size gets large. The fact that the relay cannot decode is also supported by secrecy demands, which means that the codebooks known to the source and destination nodes are to be considered completely random, as done here.
Due to the oblivious feature, relaying strategies that require the codebooks to be known at the relay, e.g., decode-and-forward, compute-and-forward, etc. [13]-[15], cannot be applied. Instead, the relay has to perform oblivious processing, i.e., employ strategies in the form of compress-and-forward [16]-[19]. In particular, the relay must treat X as a random process with a distribution induced by the random selection over the codebook library (see [12] and references therein), and has to produce some useful representation Z by simple signal processing and convey it to the destination node subject to the link constraint C. It then makes sense to find Z such that I(X; Z) is maximized.
The IB problem for this kind of communication scenario has been studied in [20]-[26] and [12]. In [20], the IB method was applied to reduce the fronthaul data rate of a C-RAN network. References [21] and [22] respectively considered Gaussian scalar and vector channels with an IB constraint, and investigated the optimal trade-off between the compression rate and the relevant information. In [23], the bottleneck rate of a frequency-selective scalar Gaussian primitive diamond relay channel was examined. In [24] and [25], the rate-distortion region of a vector Gaussian system with multiple relays was characterized under the logarithmic loss distortion measure. Reference [12] further extended the work in [25] to a C-RAN network with multiple transmitters and multiple relays, and studied the capacity region of this network. However, references [20]-[25] and [12] all considered block fading channels and assumed that perfect channel state information (CSI) was known at both the relay and the destination node. In [26], the IB problem of a scalar Rayleigh fading channel was studied. Due to the bottleneck constraint, it is impossible to inform the destination node of the perfect CSI in each channel realization.
An upper bound and two achievable schemes were provided in [26] to investigate the bottleneck rate.
In this paper, we extend the work in [26] to the multiple-input multiple-output (MIMO) channel with independent and identically distributed (i.i.d.) Rayleigh fading. This model is relevant for the practical setting of the uplink of a wireless multiuser system where K users send coded uplink signals to a base station. The base station is formed by an RRH with M antennas, connected to a cloud central processor via a digital link of rate C (bottleneck link). The RRH is oblivious of the user codebooks and can apply only simple localized signal processing corresponding to the low-level physical layer functions (i.e., it is an oblivious relay). In current implementations, the RRH quantizes both the uplink pilot symbols and the data-bearing symbols received from the users on each "resource block" and sends the quantization bits to the cloud processor via the digital link. Here we simplify the problem: instead of considering a specific pilot-based channel estimation scheme, we assume that the channel matrix is given perfectly to the relay (RRH), i.e., that the CSI is perfect, but local at the relay. Then, we consider an upper bound and specific achievability strategies to maximize the mutual information between the user transmitted signals and the message delivered to the cloud processor, where we allow the relay to operate local oblivious processing as an alternative to direct quantization of both the CSI and the received data-bearing signal.
Intuitively, the relay can split the capacity of the relay-destination link into two parts, and convey both the CSI and its observation to the destination node. Hence, in the first and second achievable schemes, the relay transmits compressed CSI and observation to the destination node.
Specifically, in the first scheme, the relay simply compresses the channel matrix as well as its observation and then forwards them to the destination node. Roughly speaking, this is what happens today in 'naive' implementations of RRH systems; this scheme can therefore be seen as a baseline. However, the capacity allocated for conveying the CSI to the destination in this scheme is proportional to both the channel input dimension and the number of antennas at the relay. To reduce the number of channel uses required for CSI transmission, in the second achievable scheme, the relay first gets an estimate of the channel input using channel inversion and then transmits the quantized noise levels as well as the compressed noisy signal to the destination node. In contrast to the first scheme, the capacity allocated to CSI transmission in this scheme is only proportional to the channel input dimension.
II. SYSTEM MODEL

Consider the MIMO channel

y = Hx + n,   (2)

where x ∈ C^{K×1} and n ∈ C^{M×1} are respectively zero-mean circularly symmetric complex Gaussian input and noise with covariance matrices I_K and σ² I_M, i.e., x ∼ CN(0, I_K) and n ∼ CN(0, σ² I_M). H ∈ C^{M×K} is a random matrix independent of both x and n, and the elements of H are i.i.d. zero-mean unit-variance complex Gaussian random variables, i.e., H ∼ CN(0, I_K ⊗ I_M). Let ρ = 1/σ² denote the signal-to-noise ratio (SNR). Let z denote a useful representation of y produced by the relay for the destination node; x → (y, H) → z thus forms a Markov chain. We assume that the relay node has a direct observation of the channel matrix H, while the destination node does not, since we consider a Rayleigh fading channel and a capacity-constrained relay-destination link. Then, the IB problem can be formulated as follows:

max_{p(z|y,H)} I(x; z)   (3a)
s.t. I(y, H; z) ≤ C,   (3b)

where C is the bottleneck constraint, i.e., the link capacity of the relay-destination link (Channel 2); accordingly, the source-relay channel in (2) is referred to as Channel 1. In this paper, we call I(x; z) the bottleneck rate and I(y, H; z) the compression rate. Obviously, for a joint probability distribution p(x, y, H) determined by (2), problem (3) is a slightly augmented version of IB problem (1). In our problem, we aim to find a conditional distribution p(z|y, H) such that bottleneck constraint (3b) is satisfied and the bottleneck rate is maximized, i.e., such that as much information about x as possible can be extracted from the representation z.
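The second-order statistics of the model can be spot-checked numerically (a sketch, not from the paper): averaging over H, x, and n, the received covariance is E[y y^H] = E[H H^H] + σ² I_M = (K + σ²) I_M.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, sigma2, N = 4, 6, 0.5, 100_000

def crandn(*shape):
    # Circularly symmetric complex Gaussian CN(0, 1) samples.
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

x = crandn(N, K)                       # x ~ CN(0, I_K)
n = np.sqrt(sigma2) * crandn(N, M)     # n ~ CN(0, sigma2 I_M)
H = crandn(N, M, K)                    # i.i.d. Rayleigh fading channel matrices

y = np.einsum('nmk,nk->nm', H, x) + n  # y = H x + n for each realization

# Sample estimate of E[y y^H]; should approach (K + sigma2) I_M.
cov = np.einsum('nm,nl->ml', y, y.conj()) / N
print(np.round(cov.real, 2))
```

This also confirms the normalization used throughout: the total received signal power per antenna is K (one unit per user) plus the noise power σ².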

III. INFORMED RECEIVER UPPER BOUND
As stated in [26], an obvious upper bound to problem (3) can be obtained by letting both the relay and the destination node know the channel matrix H. We call the bound in this case the informed receiver upper bound. The IB problem in this case takes on the following form:

max_{p(z|y,H)} I(x; z|H)  s.t. I(y; z|H) ≤ C.   (4)

In [21], the IB problem for a scalar Gaussian channel with block fading has been studied. In the following theorem, we show that for the considered MIMO channel with Rayleigh fading, (4) can be decomposed into a set of parallel scalar IB problems, and the informed receiver upper bound can be obtained based on the result in [21].
Theorem 1. For the considered MIMO channel with Rayleigh fading, the informed receiver upper bound, i.e., the optimal objective function of IB problem (4), is

R_ub = T E_λ { [ log(1 + ρλ) − log(1 + ν) ]^+ },

where T = min{K, M}, λ is identically distributed as the unordered positive eigenvalues of HH^H, its probability density function (pdf) f_λ(λ) is given in (103), and ν is chosen such that the following bottleneck constraint is met:

T E_λ { [ log(ρλ/ν) ]^+ } = C.

Proof: See Appendix A.
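The upper bound can be estimated by Monte Carlo (a sketch, not code from the paper), assuming the standard water-filling form of the scalar Gaussian IB solution: each eigen-channel with SNR ρλ is allocated c(λ) = [log₂(ρλ/ν)]⁺ bits and contributes [log₂(1+ρλ) − log₂(1+ν)]⁺ bits, with ν set so the average allocation meets C.

```python
import numpy as np

rng = np.random.default_rng(2)
K, M, rho, C = 2, 4, 10.0, 6.0   # rho = 1/sigma^2; bottleneck capacity C in bits
T = min(K, M)

# Monte Carlo samples of the unordered positive eigenvalues of H H^H.
H = (rng.standard_normal((50_000, M, K)) + 1j * rng.standard_normal((50_000, M, K))) / np.sqrt(2)
lam = np.linalg.eigvalsh(np.transpose(H.conj(), (0, 2, 1)) @ H).reshape(-1)

def used_capacity(nu):
    # Average bits spent, T * E[(log2(rho*lam/nu))^+], for water level nu.
    return T * np.maximum(np.log2(rho * lam / nu), 0).mean()

# Bisection (in log domain) on nu so the bottleneck constraint holds with equality.
lo, hi = 1e-9, 1e9
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if used_capacity(mid) > C else (lo, mid)
nu = lo

R_ub = T * np.maximum(np.log2(1 + rho * lam) - np.log2(1 + nu), 0).mean()
capacity = T * np.log2(1 + rho * lam).mean()   # capacity of Channel 1
print(R_ub <= min(C, capacity))
```

By construction the bound can never exceed either the link capacity C or the capacity of Channel 1, which is a useful sanity check on the ν search.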

IV. ACHIEVABLE SCHEMES
In this section, we provide four achievable schemes where each scheme satisfies the bottleneck constraint and gives a lower bound to the bottleneck rate. In the first and second schemes, the relay transmits both its observation and partial CSI to the destination node. In the third and fourth schemes, to avoid transmitting CSI, the relay first estimates x and then sends a representation of the estimate to the destination node.

A. Non-decoding transmission (NDT) scheme
In our first achievable scheme, the relay, without decoding x, simply source-encodes both y and H and then sends the encoded representations to the destination node. It should be noticed that this scheme is actually reminiscent of the current state of the art in remote antenna head technology, where both the pilot field (corresponding to H) and the data field (corresponding to y) are quantized and sent to the central processing unit.
Let h denote the vectorization of matrix H, and let z₁ and z₂ denote the representations of h and y, respectively. According to rate-distortion theory, the relay can describe h with

R(D) = KM log (1/D)   (8)

bits per channel use at per-element distortion D, where 0 < D ≤ 1 and d(h, z₁) = (h − z₁)^H (h − z₁) is the squared-error distortion measure. Let e₁ denote the error vector of quantizing h, i.e., e₁ = h − z₁. z₁ and e₁ are the vectorizations of Z₁ and E₁, respectively, and z₁ is independent of e₁. Hence, h = z₁ + e₁ with z₁ ∼ CN(0, (1 − D) I_{KM}) and e₁ ∼ CN(0, D I_{KM}). In [27, Theorem 10.3.3], the achievability of an information rate for a given distortion, e.g., (8), is proven by considering a backward Gaussian test channel. However, the backward Gaussian test channel does not provide an explicit expression of z₁ or e₁. Though the specific forms of z₁ and e₁ are not necessary for the analysis in this section, since we are providing an achievable scheme, we give a feasible z₁ satisfying (8) here to make the exposition self-contained.
By adding an independent Gaussian noise vector r ∼ CN(0, ε I_{KM}) with ε = D/(1 − D) to h, we get

h̄ = h + r.   (10)

Obviously, h̄ ∼ CN(0, (1/(1 − D)) I_{KM}). A representation of h can then be obtained as follows:

z₁ = (1 − D) h̄,   (11)

which is actually the MMSE estimate of h obtained from (10). The error vector is then given by

e₁ = h − z₁ = D h − (1 − D) r.   (12)

It can be readily verified that z₁ provided in (11) satisfies (8). To meet the bottleneck constraint, we have to ensure that

I(h, y; z₁, z₂) ≤ C.   (13)

Using the chain rule of mutual information,

I(h, y; z₁, z₂) = I(h, y; z₁) + I(h, y; z₂|z₁) = I(h; z₁) + I(y; z₁|h) + I(y; z₂|z₁) + I(h; z₂|z₁, y).   (14)
Since z₁ is a representation of h, y and z₁ are conditionally independent given h. Similarly, since z₂ is a representation of y, h and z₂ are conditionally independent given y. Hence,

I(y; z₁|h) = 0,  I(h; z₂|z₁, y) = 0.   (15)

From (8), (14), and (15), it is known that to have constraint (13) guaranteed, I(y; z₂|z₁), which is the information rate at which the relay quantizes y (given z₁), should satisfy

I(y; z₂|z₁) ≤ C − R(D).   (16)

Obviously, C − R(D) > 0 has to be guaranteed, which yields D > 2^{−C/(KM)}. Hence, in this section, we always assume 2^{−C/(KM)} < D ≤ 1.
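The capacity-splitting trade-off can be made concrete with a short sketch (an illustration, not from the paper), assuming the Gaussian rate-distortion cost R(D) = KM log₂(1/D), which is consistent with the feasibility condition D > 2^{−C/(KM)} stated above:

```python
import numpy as np

K, M, C = 2, 4, 20.0   # example dimensions and bottleneck capacity (bits)

def rd_rate(D, K, M):
    # Bits needed to describe h ~ CN(0, I_{KM}) at per-entry distortion D.
    return K * M * np.log2(1.0 / D)

# Feasibility threshold: C - R(D) > 0 iff D > 2^(-C/(K*M)).
D_min = 2 ** (-C / (K * M))

for D in (0.5 * D_min, 1.01 * D_min, 0.5, 1.0):
    leftover = C - rd_rate(D, K, M)
    print(f"D = {D:.4f}: capacity left for quantizing y = {leftover:.3f} bits")
```

Small D buys an accurate channel description but starves the observation; the NDT scheme therefore has to search over D, as done in Section V.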
We then evaluate I(y; z₂|z₁). Since H = Z₁ + E₁, y in (2) can be rewritten as

y = Z₁ x + E₁ x + n.   (17)

For a given Z₁, the second moment of y is E[y y^H | Z₁] = Z₁ Z₁^H + (KD + σ²) I_M. Denote the eigendecomposition of Z₁ Z₁^H by Ũ Ω Ũ^H and let

ỹ = Ũ^H y.   (18)

The second moment of ỹ is E[ỹ ỹ^H | Z₁] = Ω + (KD + σ²) I_M. Since E₁ is unknown, ỹ is not a Gaussian vector. To evaluate I(y; z₂|z₁), we define a new Gaussian vector

y_g = Ũ^H Z₁ x + n_g,   (19)

where n_g ∼ CN(0, (KD + σ²) I_M). For a given Z₁, y_g ∼ CN(0, Ω + (KD + σ²) I_M). The channel in (19) can thus be seen as a set of parallel sub-channels. Let z_g denote a representation of y_g and consider the following IB problem:

max_{p(z_g|y_g)} I(x; z_g)  s.t. I(y_g; z_g) ≤ C − R(D).   (20)

Obviously, for a given feasible D, problem (20) can be solved similarly to (4) by following the steps in Appendix A. We thus have the following theorem.
Theorem 2. For a given feasible D, the optimal objective function of IB problem (20) is

R_lb1 = T E_λ { [ log(1 + γλ) − log(1 + ν) ]^+ },   (21)

where γ = (1 − D)/(KD + σ²), the pdf of λ, i.e., f_λ(λ), is given by (103), and ν is chosen such that the following bottleneck constraint is met:

T E_λ { [ log(γλ/ν) ]^+ } = C − R(D).   (22)

Proof: See Appendix C.
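The statistics behind the effective SNR γλ in Theorem 2 can be spot-checked numerically (a sketch, not from the paper): building z₁ as the MMSE-style estimate in (11), the quantization error e₁ should have per-entry variance D and be uncorrelated with z₁, so the residual term E₁x + n carries power KD + σ² per antenna, which is exactly where γ = (1−D)/(KD+σ²) comes from.

```python
import numpy as np

rng = np.random.default_rng(7)
K, M, D, N = 2, 3, 0.3, 100_000
eps = D / (1 - D)

# Quantize h per (10)-(11): z1 = (1-D)(h + r) with r ~ CN(0, eps I).
dim = M * K
h = (rng.standard_normal((N, dim)) + 1j * rng.standard_normal((N, dim))) / np.sqrt(2)
r = np.sqrt(eps) * (rng.standard_normal((N, dim)) + 1j * rng.standard_normal((N, dim))) / np.sqrt(2)
z1 = (1 - D) * (h + r)
e1 = h - z1

err_var = np.mean(np.abs(e1) ** 2)      # should be close to D
cross = np.mean(z1 * e1.conj())         # should be close to 0 (uncorrelated)
print(round(err_var, 3), abs(cross) < 1e-2)
```

With E[|e₁|²] = D per entry and K independent unit-power inputs, each antenna sees K·D of self-interference from E₁x on top of the thermal noise σ².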
Since for a given Z₁, (19) can be seen as a set of parallel scalar Gaussian sub-channels, according to [21, (16)], the representation of y_g, i.e., z_g, can be constructed by adding independent fading and Gaussian noise to each element of y_g. Denote

z_g = Ψ y_g + n̄_g,   (23)

where Ψ is a diagonal matrix with non-negative and real diagonal entries, and n̄_g ∼ CN(0, I_M).
Note that y_g in (19) and its representation z_g in (23) are only auxiliary variables. What we are really interested in is the representation of y and the corresponding bottleneck rate. Hence, we also add fading Ψ and Gaussian noise n̄_g to ỹ in (18) and get the following representation:

z₂ = Ψ ỹ + n̄_g.   (24)

In the following lemma we show that by transmitting representations z₁ and z₂ to the destination node, R_lb1 is an achievable lower bound to the bottleneck rate and the bottleneck constraint is satisfied.
Lemma 2. If the representation z₁ of h resulting from (8) is forwarded to the destination node for each channel realization, then with ỹ and y_g in (18) and (19), and representations z₂ and z_g in (24) and (23), we have

I(y; z₂|Z₁) ≤ I(y_g; z_g|Z₁),   (25)
I(x; z₂|Z₁) ≥ I(x; z_g|Z₁),   (26)

where (25) indicates that I(y; z₂|Z₁) ≤ C − R(D) and (26) gives I(x; z₂|Z₁) ≥ R_lb1.
Proof: See Appendix D.
Lemma 2 shows that by representing h and ỹ using z₁ and z₂ in (11) and (24), respectively, lower bound R_lb1 is achievable and the bottleneck constraint is satisfied.
Lemma 3. When ρ → +∞, R_lb1 tends to a constant, which can be obtained by letting γ = (1 − D)/(KD) and using (21). In addition, when C → +∞, there exists a small D such that R_lb1 approaches the capacity of Channel 1, i.e., I(x; y, H).

Proof: See Appendix E.
Denote this constant by R_lb1^0 for convenience. It can be readily verified that 0 ≤ R_lb1^0 ≤ C. From (8) it is known that R(D) is also a function of M. Besides, as stated after (16), we always assume 2^{−C/(KM)} < D ≤ 1 in this section such that C − R(D) > 0. Hence, when M → +∞, D approaches 1 and γ tends to 0. All this makes it difficult to obtain a more concise expression of R_lb1^0. We instead investigate the effect of M on R_lb1 in Section V by simulation.

B. Quantized channel inversion (QCI) scheme when K ≤ M
In our second scheme, the relay first gets an estimate of the channel input using channel inversion and then transmits the quantized noise levels as well as the compressed noisy signal to the destination node.
In particular, we apply the pseudo-inverse matrix of H, i.e., (H^H H)^{−1} H^H, to y, and get the zero-forcing estimate of x as follows:

x̂ = (H^H H)^{−1} H^H y = x + (H^H H)^{−1} H^H n,  with  A ≜ σ² (H^H H)^{−1} = A₁ + A₂,   (29)

where A denotes the covariance matrix of the estimation noise for a given channel matrix H, and A₁ and A₂ respectively consist of the diagonal and off-diagonal elements of A, i.e., A₁ = diag(a₁, · · · , a_K) with a_k denoting the k-th diagonal entry of A. If H could be perfectly transmitted to the destination node, the bottleneck rate could be obtained by following similar steps as in Appendix A. However, since H follows a non-degenerate continuous distribution and the bottleneck constraint is finite, as shown in the previous subsection, this is not possible. To reduce the number of bits per channel use required for informing the destination node of the channel information, we only convey a compressed version of A₁ and consider a set of independent scalar Gaussian sub-channels.
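The two facts used here, exact interference nulling and the noise covariance σ²(H^H H)^{−1}, can be verified with a short Monte Carlo sketch (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
K, M, sigma2, N = 3, 5, 0.1, 50_000

H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
pinv = np.linalg.inv(H.conj().T @ H) @ H.conj().T   # (H^H H)^{-1} H^H

x = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
n = np.sqrt(sigma2) * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
y = x @ H.T + n
x_hat = y @ pinv.T            # zero-forcing estimate, row-vector convention

# Residual error e = x_hat - x is pure amplified noise; its covariance is
# A = sigma2 * (H^H H)^{-1}, whose diagonal/off-diagonal parts are A1 and A2.
e = x_hat - x
A_mc = np.einsum('nk,nl->kl', e, e.conj()) / N
A = sigma2 * np.linalg.inv(H.conj().T @ H)
print(np.max(np.abs(A_mc - A)))
```

The off-diagonal part A₂ is generally nonzero, which is exactly the noise correlation the QCI scheme later chooses to ignore at a small rate cost.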
Specifically, we force each diagonal entry of A₁ to belong to a finite set of quantized levels by adding artificial noise, i.e., by introducing physical degradation. We fix a finite grid of J quantization levels B = {b₁, · · · , b_J} and define the following ceiling operation:

⌈a⌉_B = min { b ∈ B : b ≥ a }.

Then, by adding a Gaussian noise vector ñ, which is independent of everything else, to (29), a degraded version of x̂ can be obtained as

x̄ = x̂ + ñ = x + n̄,   (31)

where n̄ ∼ CN(0, Ā₁ + A₂) for a given H, and Ā₁ ≜ diag(⌈a₁⌉_B, · · · , ⌈a_K⌉_B). Obviously, due to A₂, the elements of the noise vector n̄ are correlated.
To evaluate the bottleneck rate, we consider a new variable

x̂_g = x + n̂_g,   (32)

where n̂_g ∼ CN(0, Ā₁). Obviously, (32) can be seen as K parallel scalar Gaussian sub-channels with noise power ⌈a_k⌉_B for each sub-channel. Since each quantized noise level ⌈a_k⌉_B only has J possible values, it is possible for the relay to inform the destination node of the channel information via the constrained link. Note that from the definition of A in (29), it is known that a_k, ∀ k ∈ K ≜ {1, · · · , K}, are correlated. The quantized noise levels ⌈a_k⌉_B, ∀ k ∈ K, are thus also correlated. Hence, we can jointly source-encode ⌈a_k⌉_B, ∀ k ∈ K, to further reduce the number of bits used for CSI transmission. For convenience, we define the space Ξ = {1, · · · , J}^K; it is obvious that there are a total of J^K points in this space. Let ξ = (j₁, · · · , j_K) denote a point in space Ξ and define the following probability mass function (pmf):

P_ξ = Pr { ⌈a₁⌉_B = b_{j₁}, · · · , ⌈a_K⌉_B = b_{j_K} }.   (33)

The joint entropy of ⌈a_k⌉_B, ∀ k ∈ K, i.e., the number of bits used for jointly source-encoding ⌈a_k⌉_B, ∀ k ∈ K, is thus given by

H_joint = − Σ_{ξ ∈ Ξ} P_ξ log P_ξ.   (34)

Then, the IB problem for (32) takes on the following form:

max_{p(ẑ_g|x̂_g)} I(x; ẑ_g)   (35a)
s.t. I(x̂_g; ẑ_g) ≤ C − H_joint,   (35b)

where ẑ_g is a representation of x̂_g.
Note that, as stated above, there are a total of J^K points in space Ξ. The pmf P_ξ thus has J^K possible values, and it becomes difficult to obtain the joint entropy H_joint from (34) (even numerically) when J or K is large. To reduce the computational complexity, we consider the (slightly) suboptimal, but far more practical, entropy coding of each noise level ⌈a_k⌉_B separately, and get the following sum of individual entropies:

H_sum = Σ_{k=1}^{K} H_k,

where H_k denotes the entropy of ⌈a_k⌉_B, i.e., the number of bits used for informing the destination node of noise level ⌈a_k⌉_B. In Appendix F, we show that a_k, ∀ k ∈ K, are marginally identically inverse chi-squared distributed with M − K + 1 degrees of freedom, and their pdf is given in (130). Hence,

H_k = H₀ ≜ − Σ_{j=1}^{J} P_j log P_j,  ∀ k ∈ K,

where P_j = Pr{⌈a⌉_B = b_j} can be obtained from (131) and a follows the same distribution as the a_k. Since P_j only has J possible values, the computational complexity of calculating H_sum = K H₀ is proportional to J. Using the chain rule of entropy and the fact that conditioning reduces entropy, we know that H_joint ≤ H_sum. In Section V, the gap between H_joint and H_sum is investigated by simulation. Replacing H_joint in (35b) with H_sum, we get the following IB problem:

max_{p(ẑ_g|x̂_g)} I(x; ẑ_g)  s.t. I(x̂_g; ẑ_g) ≤ C − K H₀.   (38)

The optimal solution of this problem is given in the following theorem.
Theorem 3. If Ā₁ is conveyed to the destination node for each channel realization, the optimal objective function of IB problem (38) is

R_lb2 = K Σ_{j=1}^{J} P_j [ log(1 + ρ_j) − log(1 + ν) ]^+,   (39)

where ρ_j = 1/b_j, c_j = [log(ρ_j/ν)]^+, and ν is chosen such that the following bottleneck constraint is met:

K Σ_{j=1}^{J} P_j c_j = C − K H₀.   (40)

Proof: See Appendix F.
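The chain-rule inequality H_joint ≤ H_sum used in arriving at (38) can be checked empirically (a sketch with assumed parameters, not code from the paper); here the quantization levels are chosen as empirical quantiles, as also done in Section V:

```python
import numpy as np

rng = np.random.default_rng(4)
K, M, sigma2, J, N = 2, 4, 1.0, 4, 50_000

# Sample the ZF noise powers a_k = diagonal of sigma2 * (H^H H)^{-1}.
H = (rng.standard_normal((N, M, K)) + 1j * rng.standard_normal((N, M, K))) / np.sqrt(2)
G = np.linalg.inv(np.transpose(H.conj(), (0, 2, 1)) @ H)
a = sigma2 * np.real(np.einsum('nkk->nk', G))          # shape (N, K)

# Ceiling-quantize onto J levels; edges are empirical quantiles (last bin open).
edges = np.quantile(a, np.arange(1, J) / J)
idx = np.searchsorted(edges, a)                        # level index in {0, ..., J-1}

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Plug-in estimates of the individual and joint entropies of the level indices.
H_sum = sum(entropy(np.bincount(idx[:, k], minlength=J) / N) for k in range(K))
joint = idx[:, 0] * J + idx[:, 1]                      # K = 2 here
H_joint = entropy(np.bincount(joint, minlength=J ** K) / N)
print(H_joint <= H_sum)
```

Subadditivity holds for any joint distribution, including the empirical one, so the printed comparison is always true; the size of the gap is what Figs. 3 and 4 quantify.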
Since (32) can be seen as K parallel scalar Gaussian sub-channels, according to [21, (16)], the representation of x̂_g, i.e., ẑ_g, can be constructed by adding independent fading and Gaussian noise to each element of x̂_g. Denote

ẑ_g = Φ x̂_g + n̄_g,   (41)

where Φ is a diagonal matrix with positive and real diagonal entries, and n̄_g ∼ CN(0, I_K).
Note that, similar to y_g and z_g in the previous subsection, x̂_g in (32) and its representation ẑ_g in (41) are also auxiliary variables. What we are really interested in is the representation of x̄ and the corresponding bottleneck rate. Hence, we also add fading Φ and Gaussian noise n̄_g to x̄ in (31) and get its representation as follows:

z̄ = Φ x̄ + n̄_g.   (42)

In the following lemma we show that by transmitting the quantized noise levels ⌈a_k⌉_B, ∀ k ∈ K, and representation z̄ to the destination node, R_lb2 is an achievable lower bound to the bottleneck rate and the bottleneck constraint is satisfied.

Lemma 4. If Ā₁ is forwarded to the destination node for each channel realization, then with signal vectors x̄ and x̂_g in (31) and (32), and their representations z̄ and ẑ_g in (42) and (41), we have

I(x̄; z̄|Ā₁) ≤ I(x̂_g; ẑ_g|Ā₁),   (43)
I(x; z̄|Ā₁) ≥ I(x; ẑ_g|Ā₁),   (44)

where (43) indicates that I(x̄; z̄|Ā₁) ≤ C − K H₀ and (44) gives I(x; z̄|Ā₁) ≥ R_lb2.
Proof: See Appendix G.
Lemma 5. When M → +∞ or ρ → +∞, we can always find a sequence of quantization points where the expectation can be calculated by using the pdf of a in (130) and I(x; y, H) is the capacity of Channel 1.
Proof: See Appendix H.
For the sake of simplicity, we may choose the quantization levels as quantiles such that we obtain the uniform pmf P_j = 1/J. The lower bound (39) can thus be simplified as

R_lb2 = (K/J) Σ_{j=1}^{J} [ log(1 + ρ_j) − log(1 + ν) ]^+,   (47)

and the bottleneck constraint (40) becomes

Σ_{j=1}^{J} c_j = JC/K − JB,

where B = log J can be seen as the number of bits required for quantizing each diagonal entry of A₁. Since ρ₁ ≥ · · · ≥ ρ_{J−1}, from the strict convexity of the problem, we know that there must exist a unique integer 1 ≤ l ≤ J − 1 such that c_j > 0 for j ≤ l and c_j = 0 for j > l [28]. Hence, ν can be obtained from

Σ_{j=1}^{l} log (ρ_j/ν) = JC/K − JB,   (48)

and R_lb2 can then be calculated from (47). We only need to test the above condition for l = 1, 2, 3, · · · till (48) is satisfied with ν < ρ_l. Note that to ensure R_lb2 > 0, JC/K − JB has to be positive, i.e., B < C/K. Moreover, though choosing the quantization levels as quantiles makes it easier to calculate R_lb2, the results in Lemma 5 may not hold in this case since the choice of quantization points B = {b₁, · · · , b_J} is restricted.
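A small worked sketch of this quantile case (illustrative; the SNR levels ρ_j below are made-up example values, and the allocation form c_j = [log₂(ρ_j/ν)]⁺ with uniform pmf P_j = 1/J follows Theorem 3):

```python
import numpy as np

K, C = 2, 12.0
# Example quantized SNR levels rho_j = 1/b_j in decreasing order; the open
# last bin corresponds to rho_J = 0 and never receives any allocation.
rho_levels = np.array([50.0, 10.0, 2.0, 0.0])
J = len(rho_levels)
B = np.log2(J)                    # bits per quantized noise level
budget = J * C / K - J * B        # total allocation: sum_j c_j = J*C/K - J*B

# Water-filling: find l and nu with c_j = log2(rho_j/nu) > 0 exactly for j <= l.
for l in range(1, J):
    nu = (np.prod(rho_levels[:l]) / 2 ** budget) ** (1.0 / l)
    if nu < rho_levels[l - 1] and (l == J - 1 or nu >= rho_levels[l]):
        break

c = np.log2(rho_levels[:l] / nu)  # positive allocations for the active levels
R_lb2 = (K / J) * np.maximum(np.log2(1 + rho_levels) - np.log2(1 + nu), 0.0).sum()
print(l, float(c.sum()), round(R_lb2, 3))
```

The allocations always sum to the budget by construction, and R_lb2 > 0 requires B < C/K, i.e., a positive budget, matching the condition in the text.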

C. Truncated channel inversion (TCI) scheme when K ≤ M
Both the NDT and QCI schemes proposed in the preceding two subsections require the relay to transmit partial CSI to the destination node. Specifically, in the NDT scheme, the channel matrix H is compressed and conveyed to the destination node; the number of bits required for transmitting the compressed H is proportional to both K and M. In contrast, the number of bits required for transmitting the quantized noise levels in the QCI scheme is proportional to K and B.

Due to the bottleneck constraint, the performance of the NDT and QCI schemes is thus sensitive to the MIMO channel dimension, especially K. To ensure good performance when the channel dimension is large, in this subsection the relay first estimates x using channel inversion and then transmits a truncated representation of the estimate to the destination node.
In particular, as in the previous subsection, we first get the zero-forcing estimate of x using channel inversion, i.e.,

x̂ = (H^H H)^{−1} H^H y = x + (H^H H)^{−1} H^H n.   (51)

As given in Appendix A, the unordered positive eigenvalues of HH^H are λ₁, · · · , λ_T; let λ_min denote the smallest of them. Note that though the interfering terms can be nulled out by the zero-forcing equalizer, the noise may be greatly amplified when the channel is noisy. Therefore, we put a threshold λ_th on λ_min such that zero capacity is allocated to states with λ_min < λ_th.
Specifically, when λ_min < λ_th, the relay does not transmit the observation, while when λ_min ≥ λ_th, the relay takes x̂ as the new observation and transmits a compressed version of x̂ to the destination node. The information about whether the observation is transmitted or not is encoded into a 0−1 sequence and is also sent to the destination node. Then, we need to solve the source coding problem at the relay, i.e., encoding blocks of x̂ when λ_min ≥ λ_th. For convenience, we use ∆ to denote the event 'λ_min ≥ λ_th'. Here we choose p(z|x̂, ∆) to be a conditionally Gaussian distribution, i.e.,

z = x̂ + q,   (52)

where q ∼ CN(0, D I_K) is independent of the other variables. It can be easily found from (52) that I(x; z|λ_min < λ_th) = 0 and I(x̂; z|λ_min < λ_th) = 0. Hence, we consider the following modified IB problem:

max_{p(z|x̂,∆)} P_th I(x; z|∆)  s.t. P_th I(x̂; z|∆) + H_th ≤ C,   (53)

where P_th = Pr{∆} and H_th is the binary entropy function with parameter P_th.
Since we assume K ≤ M in this subsection, as stated in Appendix A, λ_min is positive almost surely. Then, according to [29, Proposition 2.6] and [29, Proposition 4.7], P_th is given in closed form in (54). When K = M, using [30, Theorem 3.2], a more concise expression of P_th can be obtained as (56). Note that in (56), the lower limit of the integral is 2λ_th rather than λ_th. This is because in this paper the elements of H are assumed to be i.i.d. zero-mean unit-variance complex Gaussian random variables, while in [30] the real and imaginary parts of the elements of H are independent standard normal variables.
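P_th is also straightforward to estimate by Monte Carlo under the paper's CN(0,1)-entry convention, which avoids the factor-of-2 rescaling needed when reusing real-valued results such as [30] (a sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
K, M, N = 3, 4, 100_000

# Sample H with i.i.d. CN(0, 1) entries and take the smallest eigenvalue
# of H^H H for each realization.
H = (rng.standard_normal((N, M, K)) + 1j * rng.standard_normal((N, M, K))) / np.sqrt(2)
lam_min = np.linalg.eigvalsh(np.transpose(H.conj(), (0, 2, 1)) @ H)[:, 0]

# P_th = Pr{lambda_min >= lambda_th} for a few thresholds.
for lam_th in (0.0, 0.1, 0.5, 1.0):
    print(f"lambda_th = {lam_th}: P_th ~ {(lam_min >= lam_th).mean():.3f}")
```

By definition P_th is 1 at λ_th = 0 and decreases monotonically in λ_th, which the estimates reproduce exactly since they are computed on the same samples.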
Given condition ∆, let x̂_g denote a zero-mean circularly symmetric complex Gaussian random vector with the same second moment as x̂, i.e., x̂_g ∼ CN(0, E[x̂ x̂^H | ∆]), and let z̃_g = x̂_g + q.
P_th I(x̂_g; z̃_g|∆) is then achievable if P_th I(x̂_g; z̃_g|∆) ≤ C − H_th. Hence, let

P_th I(x̂_g; z̃_g|∆) = C − H_th.   (57)

To calculate D from (57), we denote the eigendecomposition of H^H H by V Λ̃ V^H, where V is a unitary matrix whose columns are the eigenvectors of H^H H, Λ̃ is a diagonal matrix whose diagonal elements are the unordered eigenvalues λ_k, ∀ k ∈ K, and V and Λ̃ are independent. Then, from (51),

E[x̂ x̂^H | ∆] = I_K + σ² E[(H^H H)^{−1} | ∆] = (1 + σ² E[1/λ | ∆]) I_K.   (58)

Based on [31], the joint pdf of the unordered eigenvalues λ_k, ∀ k ∈ K, under condition ∆ is given in (59). The marginal pdf of one of the eigenvalues can thus be obtained by integrating out all the other eigenvalues; taking λ₁ for example, we denote the resulting conditional pdf in (60). Then,

E[1/λ | ∆] = ∫ (1/λ₁) f(λ₁|∆) dλ₁.   (61)

Combining (57), (58), and (61), D can be calculated as follows:

D = (1 + σ² E[1/λ | ∆]) / ( 2^{(C − H_th)/(K P_th)} − 1 ).   (62)
With (57), rate P_th I(x̂_g; z̃_g|∆) is achievable. Due to the fact that Gaussian input maximizes the mutual information of a Gaussian additive noise channel, we have I(x̂; z|∆) ≤ I(x̂_g; z̃_g|∆). P_th I(x; z|∆) is thus also achievable. The next step is to evaluate the resulting achievable bottleneck rate, i.e., I(x; z). To this end, we first obtain the following lower bound on I(x; z|∆) from the fact that conditioning reduces differential entropy:

I(x; z|∆) = h(z|∆) − h(z|x, ∆) ≥ h(z|H, ∆) − h(z|x, ∆).   (63)

Then, we evaluate the differential entropies h(z|H, ∆) and h(z|x, ∆), respectively. From (51) and (52), it is known that z is conditionally Gaussian given H and ∆. Hence,

h(z|H, ∆) = E[ log( (πe)^K det( (1 + D) I_K + σ² (H^H H)^{−1} ) ) | ∆ ].   (64)

On the other hand, using the fact that the Gaussian distribution maximizes the entropy over all distributions with the same variance [27, Theorem 8.6.5], we have

h(z|x, ∆) ≤ K log( πe ( D + σ² E[1/λ | ∆] ) ).   (65)

Substituting (64) and (65) into (63), we can get a lower bound to I(x; z) as shown in the following theorem.
Theorem 4. With the TCI scheme, a lower bound to I(x; z) can be obtained as

R_lb3 = P_th K ( E[ log( 1 + D + σ²/λ ) | ∆ ] − log( D + σ² E[1/λ | ∆] ) ),

where P_th and D are respectively given in (54) and (62), and the expectations can be calculated by using pdf (60).
Lemma 6. Using Jensen's inequality on the convex function log(1 + 1/x) and the concave function log x, we can get a lower bound Ř_lb3 to R_lb3 and an upper bound R̂_lb3 to R_lb3.

Remark 3. Obviously, Ř_lb3 is also a lower bound to I(x; z). As for R̂_lb3, it is not an upper bound to I(x; z) since it is derived from the lower bound R_lb3. However, we can assess how good the lower bounds R_lb3 and Ř_lb3 are by comparing them with R̂_lb3.
Lemma 8. When K < M and λ_th = 0, R_lb3, Ř_lb3, and R̂_lb3 admit more concise expressions; see (69) and (70).

Remark 4. When K < M, λ_th = 0, and σ²/(M − K) is small (e.g., when ρ is large, i.e., σ² is small, or when M − K is large), R̂_lb3 − Ř_lb3 ≈ 0. In this case, Ř_lb3 is close to R̂_lb3, and is thus also close to R_lb3. Then, we can use Ř_lb3 instead of R_lb3 to lower bound I(x; z) since it has a more concise expression.

D. MMSE estimate at the relay
In this subsection, we assume that the relay first produces the MMSE estimate of x given (y, H), and then source-encodes this estimate.
The MMSE estimate of x is given by

x̄ = (H^H H + σ² I_K)^{−1} H^H y.   (73)

Then, we consider the following modified IB problem:

max_{p(z|x̄)} I(x; z)  s.t. I(x̄; z) ≤ C.   (74)

Note that since the matrix H^H H + σ² I_K in (73) is always invertible, the results obtained in this subsection hold no matter whether K ≤ M or K > M.
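The invertibility remark can be made concrete (a sketch, not from the paper): by the matrix push-through identity, (H^H H + σ²I_K)^{−1}H^H = H^H(HH^H + σ²I_M)^{−1}, and either form yields a strictly smaller estimation error than zero forcing.

```python
import numpy as np

rng = np.random.default_rng(6)
K, M, sigma2 = 3, 5, 0.5

H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Two equivalent forms of the LMMSE filter for x ~ CN(0, I_K), n ~ CN(0, sigma2 I_M).
W1 = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(K)) @ H.conj().T
W2 = H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(M))
print(np.allclose(W1, W2))

# Empirical MSE comparison against zero forcing (possible here since K <= M).
N = 50_000
x = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
n = np.sqrt(sigma2) * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
y = x @ H.T + n
mse_mmse = np.mean(np.abs(y @ W1.T - x) ** 2)
zf = np.linalg.inv(H.conj().T @ H) @ H.conj().T
mse_zf = np.mean(np.abs(y @ zf.T - x) ** 2)
print(mse_mmse <= mse_zf)
```

The K×K form is the one used in (73); unlike the zero-forcing inverse it exists for K > M as well, which is why the MMSE scheme is not restricted to K ≤ M.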
Analogous to the previous subsection, we define z = x̄ + q, where q has the same definition as in (52), and let x̄_g denote a zero-mean circularly symmetric complex Gaussian vector with the same second moment as x̄, with z̄_g = x̄_g + q. Let

I(x̄_g; z̄_g) = C.   (78)

Then, rate I(x̄_g; z̄_g) is achievable and D can be calculated from (78). Since I(x̄; z) ≤ I(x̄_g; z̄_g), rate I(x; z) is thus also achievable.
In the following, we obtain a lower bound to I(x; z) by evaluating h(z|H) and h(z|x) separately, and then using

I(x; z) = h(z) − h(z|x) ≥ h(z|H) − h(z|x).   (79)

First, since z is conditionally Gaussian given H, h(z|H) can be evaluated directly as in (80). Next, based on the fact that conditioning reduces differential entropy and the Gaussian distribution maximizes the entropy over all distributions with the same variance [32], h(z|x) can be upper bounded as in (81). Combining (79), (80), and (81), we can get a lower bound to I(x; z) as shown in the following theorem.
Theorem 5. With the MMSE estimate at the relay, a lower bound to I(x; z) can be obtained in closed form, where the expectations involved can be calculated by using the pdf of λ in (103).
Proof: See Appendix K.

V. NUMERICAL RESULTS
In this section, we evaluate the lower bounds obtained by different achievable schemes proposed in Section IV and compare them with the upper bound derived in Section III. Before showing the numerical results, we first give the following lemma, which compares the bottleneck rate of the NDT scheme with those of the other three schemes in the C → +∞ case.
Lemma 10. When C → +∞, the NDT scheme outperforms the other three schemes.

Proof: See Appendix M.
Remark 5. Besides the proof in Appendix M, we can also explain Lemma 10 from a more intuitive perspective. When C → +∞, the destination node can get perfect y and H from the relay by using the NDT scheme. The bottleneck rate is thus determined by the capacity of Channel 1. In the QCI scheme, though the destination node can get the perfect signal vector and noise power of each sub-channel, the correlation between the elements of the noise vector is neglected since the off-diagonal entries of A are not considered. The bottleneck rate obtained by the QCI scheme is thus upper bounded by the capacity of Channel 1. As for the TCI and MMSE schemes, the destination node can get perfect x̂ or x̄ from the relay. However, the bottleneck rate in these two cases is not only affected by the capacity of Channel 1, but is also limited by the performance of the zero-forcing or MMSE estimation, since the estimation inevitably incurs a loss of information. Hence, the NDT scheme has better performance when C → +∞.
In the following we give the numerical results. Note that when performing the QCI scheme, we choose the quantization levels as quantiles for the sake of convenience. When implementing the NDT scheme in the simulations, we vary D, calculate R_lb1 using (21), and take the maximum over D as the reported R_lb1.
In Fig. 3 and Fig. 4, we perform Monte Carlo simulations to obtain the joint entropy H joint in (34) and compare it with H sum. Fig. 4 shows that there exists an obvious increase in the gap between H joint and H sum. Hence, when M = K and K increases, the correlation between a k B , ∀ k ∈ K, is enhanced. We will thus get a gain to R lb2 if we use H joint instead of H sum.
However, we would like to point out the following. First, it can be found from Fig. 4 that when M > K, this trend becomes less evident. Second, as shown in the following results, when K ≥ 4, since the QCI scheme spends a large fraction of the capacity C on quantizing a k B , ∀ k ∈ K, its performance is not as good as that of the TCI or MMSE scheme. Third, when K or B is large, it becomes difficult to compute H joint. Therefore, when implementing the QCI scheme in the following, we obtain R lb2 by using H sum, i.e., by quantizing a k B , ∀ k ∈ K, separately.
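The quantile-based quantization and the gap between H sum and H joint can be illustrated with a short simulation. The sketch below is our own (arbitrary K, M, σ², and number of levels J), not code from the paper: it samples the diagonal entries of A = σ²(H^H H)^{-1}, quantizes them into J quantile bins, and estimates both entropies from the bin indices.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, sigma2, J, N = 2, 2, 0.1, 4, 20000  # arbitrary illustrative parameters

# Sample the diagonal of A = sigma^2 (H^H H)^{-1} over N Rayleigh realizations.
diags = np.empty((N, K))
for n in range(N):
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    diags[n] = np.real(np.diag(sigma2 * np.linalg.inv(H.conj().T @ H)))

# Quantile-based quantization: the J levels are chosen as quantiles of the samples.
edges = np.quantile(diags.ravel(), np.linspace(0, 1, J + 1))
idx = np.clip(np.searchsorted(edges, diags, side="right") - 1, 0, J - 1)

def entropy_bits(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# H_sum quantizes each diagonal entry separately; H_joint uses their joint law.
H_sum = sum(entropy_bits(np.bincount(idx[:, k], minlength=J)) for k in range(K))
H_joint = entropy_bits(np.bincount(idx @ (J ** np.arange(K)), minlength=J ** K))
print(H_sum, H_joint)  # H_joint <= H_sum since the entries are correlated
```

Because the empirical joint distribution always has entropy at most the sum of its marginals, H_joint ≤ H_sum, and the gap grows with the correlation between the diagonal entries.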
In Fig. 5 and Fig. 6, we investigate the effect of threshold λ th on R lb3 for the cases with K = M and K < M , respectively. From these two figures, several observations can be made.
First, when K = M , and ρ or K is small, R lb3 first increases greatly and then decreases with λ th , indicating that the choice of λ th has a significant impact on R lb3 . It is thus important to look for a good λ th to maximize R lb3 in these cases. Second, when K = M and both K and ρ are large, or when K < M , R lb3 first remains unchanged and then monotonically decreases with λ th . In these cases, a small λ th is good enough to guarantee a large R lb3 , and the search for λ th can thus be avoided. For example, when K < M , we can set λ th = 0, based on which a simpler expression of R lb3 is given in (69). As for the case with K = M , since E[1/λ] does not exist when λ th = 0, we can set λ th to be a fixed small number.
In Fig. 7 and Fig. 8, we compare R lb3 with its upper bound R̄ lb3 and lower bound Ř lb3 . As expected, R lb3 , R̄ lb3 , and Ř lb3 all increase with M and ρ. When M or ρ is small, there is a small gap between R lb3 and R̄ lb3 , and a small gap between R lb3 and Ř lb3 . As M and ρ increase, these gaps narrow rapidly and the curves almost coincide, which verifies Remark 2. As a result, when M − K or ρ is large, we can set λ th = 0 and use Ř lb3 in (70) to lower bound I(x; z) since it has a more concise expression.

In Fig. 9 and Fig. 10, the upper bound R ub and the lower bounds obtained by the different schemes are depicted versus the SNR ρ. Several observations can be made from these two figures. First, as expected, all bounds increase with ρ. Second, when K, M , and ρ are small, the NDT scheme outperforms the other achievable schemes. However, as these parameters increase, the performance of the NDT scheme deteriorates rapidly. This is because when K, M , and ρ are small, the performance of the considered system is mainly limited by the capacity of Channel 1, and the NDT scheme works well since the destination node can extract more information from the compressed observation of the relay and the CSI. However, when K and M increase, the NDT scheme requires too many channel uses for CSI transmission. Third, the QCI scheme achieves good performance when K is small. Of course, as stated at the beginning of Subsection IV-C, the number of bits required for transmitting the quantized noise levels in the QCI scheme is proportional to K and B. Hence, the performance of the QCI scheme varies significantly when K and B change. Moreover, the TCI scheme performs worse than the MMSE scheme in the low-SNR regime, while getting quite close to it in the high-SNR regime. When ρ grows large, the lower bounds obtained by the TCI and MMSE schemes both approach C and are larger than those obtained by the NDT and QCI schemes.
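The low-SNR gap and high-SNR convergence between the TCI (zero-forcing) and MMSE schemes can be sanity-checked with a small Monte Carlo experiment. The sketch below is our own, not the paper's simulation code; it uses the standard per-realization error covariances σ²(H^H H)^{-1} for zero-forcing and (I_K + H^H H/σ²)^{-1} for MMSE, with arbitrary dimensions.

```python
import numpy as np

K, M, N = 2, 4, 2000  # arbitrary dimensions; M > K keeps the ZF MSE finite

def avg_mse(sigma2, seed=0):
    """Average per-stream estimation MSE of ZF and MMSE over N Rayleigh channels."""
    rng = np.random.default_rng(seed)
    zf = mmse = 0.0
    for _ in range(N):
        H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        G = H.conj().T @ H
        zf += np.real(np.trace(sigma2 * np.linalg.inv(G))) / K          # ZF error
        mmse += np.real(np.trace(np.linalg.inv(np.eye(K) + G / sigma2))) / K  # MMSE error
    return zf / N, mmse / N

zf_lo, mmse_lo = avg_mse(1.0)    # low SNR (0 dB): MMSE clearly better than ZF
zf_hi, mmse_hi = avg_mse(0.01)   # high SNR (20 dB): the two nearly coincide
print(zf_lo, mmse_lo, zf_hi, mmse_hi)
```

Per realization the MMSE error covariance is dominated by the ZF one, so MMSE is never worse; the relative gap shrinks as the SNR grows, matching the behavior observed in Fig. 9 and Fig. 10.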
In Fig. 11 and Fig. 12, the effect of the bottleneck constraint C is investigated. From Fig. 11, it can be found that as C increases, all bounds grow and converge to different constants, which can be calculated based on Lemma 1, Lemma 3, Lemma 5, Lemma 7, and Lemma 9, respectively. Fig. 11 also shows that thanks to CSI transmission, the NDT and QCI schemes outperform the TCI and MMSE schemes when C is large. By comparing these two figures, it can be found that in Fig. 11, no bound approaches C, even for the case with C = 20, while in Fig. 12, it is possible for R ub , R lb3 , and R lb4 to approach C. For example, when K = M = 4 and C ≤ 30, R ub , R lb3 , and R lb4 all approach C. This is because the bottleneck rate is limited by both the capacity of Channel 1 and C. In Fig. 11, since K and M are small, the capacity of Channel 1 is smaller than C.
Hence, the bounds of course will not approach C. In Fig. 12, more multi-antenna gains can be obtained due to larger K and M . The capacity of Channel 1 is thus larger than C in some cases (e.g., K = M = 4 and C ≤ 30). Hence, R ub , R lb3 , and R lb4 may approach C in these cases.
Note that as shown in Fig. 11, since B < C/K is not satisfied, R lb4 = 0 when C ≤ 30.

In Fig. 13 and Fig. 14, the effect of M is investigated for different configurations of ρ. These two figures show that R ub , R lb2 , R lb3 , and R lb4 all increase monotonically with M , and as M grows, R lb3 as well as R lb4 gets very close to R ub . As for R lb1 , except for the M = 3 case in Fig. 13, R lb1 monotonically decreases with M since the relay has to transmit more channel information to the destination node.
In Fig. 15 and Fig. 16, we set K = M and depict the upper and lower bounds versus K (or, equivalently, M ). In Fig. 15, we fix C to 50, while in Fig. 16, we set C = 8K, which makes sense since the bottleneck constraint should scale with the number of degrees of freedom of the input signal x. Since we choose the quantization levels as quantiles when performing the QCI scheme, as stated at the end of Subsection IV-B, B < C/K should be satisfied. Hence, in Fig. 15 and Fig. 16, we only consider B = 1, 2, 4 bits when performing the QCI scheme. When K = M and they grow simultaneously, the capacity of Channel 1 increases due to the multi-antenna gains. Hence, for a fixed C, Fig. 15 shows that all bounds increase at first. When K or M grows large, R lb3 and R lb4 approach the bottleneck constraint C, while R lb2 decreases for all values of B. This is because the number of bits per channel use required for informing the destination node of A 1 in the QCI scheme is proportional to K, while CSI transmission is unnecessary for the TCI and MMSE schemes. As for the NDT scheme, since the number of bits required for quantizing H is proportional to both K and M , there is only an increase when K grows from 1 to 2; after that, R lb1 decreases monotonically and has the worst performance. In contrast, when C = 8K, the bottleneck rate of the system is mainly limited by C. Hence, Fig. 16 shows that all bounds except R lb1 increase almost linearly with K, and R ub , R lb3 , and R lb4 are quite close to C.

VI. CONCLUSIONS
This work extends the IB problem of the scalar case in [26] to the case of MIMO Rayleigh fading channels. Due to the bottleneck constraint, the destination node cannot get the perfect CSI from the relay. Hence, we provide an upper bound to the bottleneck rate by assuming that the destination node can get the perfect CSI at no cost. Besides, we provide four achievable schemes, where each scheme satisfies the bottleneck constraint and gives a lower bound to the bottleneck rate. Our results show that with simple symbol-by-symbol relay processing and compression, we can obtain bottleneck rates close to the upper bound over a wide range of relevant system parameters. Although we have focused on a MIMO channel with one relay, we plan to extend the problem to the case of multiple parallel relays, which is particularly relevant to the centralized processing of signals from multiple remote antennas, as in so-called C-RAN architectures.
APPENDIX A PROOF OF THEOREM 1
Before proving Theorem 1, we first consider the scalar Gaussian channel in (87), where x ∼ CN (0, 1), n ∼ CN (0, σ 2 ), and s ∈ C is the deterministic channel gain. With bottleneck constraint C, the IB problem for (87) has been studied in [21], and the optimal bottleneck rate is given by (88). In the following, we show that (4) can be decomposed into a set of parallel scalar IB problems, and (88) can then be applied to obtain the upper bound R ub in Theorem 1.
According to the definition of conditional entropy, problem (4) can be rewritten as in (89), where t ∈ T and T = {1, · · · , T }. Then, for a given channel realization H = H, ŷ is conditionally Gaussian, as stated in (91). Since ŷ is obtained from y through an invertible transformation, we work with ŷ instead of y in the following.
Based on (89) and (91), the MIMO channel p(ŷ|x, H) can first be divided into a set of parallel channels for different realizations of H, and each channel p(ŷ|x, H = H) can be further divided into T independent scalar Gaussian channels with SNRs ρλ t , ∀t ∈ T .
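This decomposition can be checked numerically: the positive eigenvalues λ_t of HH^H define T parallel scalar channels, and the sum of their capacities equals log det(I_M + ρHH^H). The sketch below is our own illustration (arbitrary K, M, and ρ), not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, rho = 2, 3, 4.0  # arbitrary illustrative parameters
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# lambda_t are the positive eigenvalues of HH^H (equivalently of H^H H).
lam = np.linalg.svd(H, compute_uv=False) ** 2

# Capacity of the T parallel scalar channels with SNRs rho * lambda_t ...
cap_parallel = np.log2(1.0 + rho * lam).sum()
# ... equals the MIMO capacity log det(I_M + rho * H H^H).
cap_mimo = np.linalg.slogdet(np.eye(M) + rho * H @ H.conj().T)[1] / np.log(2)
print(cap_parallel, cap_mimo)
```

The two quantities agree up to floating-point error because det(I_M + ρHH^H) factors as the product of (1 + ρλ_t) over the T positive eigenvalues.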
Accordingly, problem (4) can be decomposed into a set of parallel IB problems. For a scalar Gaussian channel with SNR ρλ t , let c ub t denote the allocation of the bottleneck constraint C, and let R ub t denote the corresponding rate, which follows from (88). The solution of problem (4) can then be obtained by solving problem (94). Assume that λ t , ∀t ∈ T , are the unordered positive eigenvalues of HH H . Then, they are identically distributed. For convenience, define a new variable λ which follows the same distribution as λ t .
The subscript 't' in c ub t and R ub t can thus be omitted. In order to distinguish from R ub in (5), we use R ub 0 to denote the bottleneck rate corresponding to c ub . Problem (94) is thus equivalent to problem (97), which can be solved by the water-filling method. Consider the Lagrangian in (98), where α is the Lagrange multiplier. From the Karush-Kuhn-Tucker (KKT) conditions for optimality, the optimal allocation is c ub = [log (ρλ/ν)]^+ , where ν = α/(1 − α) is chosen such that the bottleneck constraint in (101) is met. The informed receiver upper bound is thus given by (102).

From the definition of H in (2), it is known that when K ≤ M (resp., when K > M ), H H H (resp., HH H ) is a central complex Wishart matrix with M (resp., K) degrees of freedom and covariance matrix I K (resp., I M ), i.e., H H H ∼ CW K (M, I K ) (resp., HH H ∼ CW M (K, I M )) [33]. Since λ can be seen as one of the unordered positive eigenvalues of H H H or HH H , its pdf is given by (103) [33, Theorem 2.17], [31], where S = max{K, M } and the Laguerre polynomials are defined in (104). Substituting (103) and (104) into (102) and (101) then yields (5) and (6).

When K = 1, (103) shows that λ follows the Erlang distribution with shape parameter M and rate parameter 1, i.e., λ ∼ Erlang(M, 1). The expectation of λ is thus M . As M → +∞, f λ (λ) becomes a delta function [34]. Hence, for a sufficiently small positive real number ε, the bottleneck constraint (6) simplifies when M → +∞, based on which we obtain (108). Using (5), (106), and (108), the limit of R ub as M → +∞ follows, and the same argument also holds for the general case.

Now we prove that R ub approaches C as ρ → +∞. From (6), it can be seen that ∫_{ν/ρ}^{+∞} log (ρλ/ν) f λ (λ) dλ decreases with ν. Therefore, when ρ → +∞, to ensure that constraint (6) holds, ν becomes large.
Then, we have (109). In addition, when C → +∞, it can be found from (6) that ν → 0. Using (5), we can get (7), which is the capacity of Channel 1. This completes the proof.

Since the entries of H are i.i.d. CN (0, 1), and λ is the unordered positive eigenvalue of HH H as defined in Appendix A, ω is identically distributed as (1 − D)λ. Then, the pdf of ω is given by (114), where f λ is the pdf of λ given in (103).
For a given feasible D, problem (20) can be solved similarly to (4) by following the steps in Appendix A, and the optimal solution is given by (115), where ν is chosen such that the bottleneck constraint is met. Using (114), (115) can be reformulated with γ = (1 − D)/(KD + σ 2 ). Analogously, the bottleneck constraint (116) can be transformed accordingly. Theorem 2 is thus proven.
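The water-filling step used in these proofs can be sketched numerically: allocate c_t = [log(snr_t/ν)]^+ to each sub-channel and choose ν by bisection so that the allocations sum to C. The per-sub-channel rate formula below is the known scalar Gaussian IB rate log(1 + snr) − log(1 + snr · 2^{−c}) (cf. (88) and [21]); the SNR values and C are arbitrary illustrative assumptions.

```python
import numpy as np

def waterfill(snrs, C, tol=1e-10):
    """Find nu so that the allocations c_t = max(log2(snr_t / nu), 0) sum to C."""
    snrs = np.asarray(snrs, dtype=float)
    used = lambda nu: np.maximum(np.log2(snrs / nu), 0.0).sum()
    # used(nu) decreases in nu; bracket so used(lo) >= C and used(hi) = 0.
    lo, hi = min(snrs) * 2.0 ** (-C), max(snrs)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if used(mid) > C else (lo, mid)
    nu = 0.5 * (lo + hi)
    c = np.maximum(np.log2(snrs / nu), 0.0)
    # Scalar Gaussian IB rate per sub-channel: log(1+snr) - log(1+snr*2^-c).
    rates = np.log2(1.0 + snrs) - np.log2(1.0 + snrs * 2.0 ** (-c))
    return nu, c, rates

nu, c, rates = waterfill([8.0, 4.0, 1.0, 0.25], C=4.0)
print(c, rates)  # stronger sub-channels receive larger allocations
```

For these inputs the water level activates only the two strongest sub-channels (ν = √2), and each achieved rate is at most its allocation c_t, as the IB rate formula guarantees.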

APPENDIX D PROOF OF LEMMA 2
We first prove inequality (25).

APPENDIX F PROOF OF THEOREM 3
Since n g ∼ CN (0, A 1 ) and a k B has J possible values, i.e., b 1 , · · · , b J , the channel in (32) can be divided into KJ independent scalar Gaussian sub-channels with noise power a k B = b j for each sub-channel. For the sub-channel with noise power a k B = b j , let c k,j denote the allocation of the bottleneck constraint C and R k,j denote the corresponding rate, which follows from (88) with ρ j = 1/b j . Since b J = +∞, we let R k,J = 0 and c k,J = 0. Note that based on [21, (16)], the representation of x g , i.e., ẑ g , can be constructed by adding independent fading and Gaussian noise to each element of x g in (32). Denote by P k,j the probability that a k B = b j . Then, the optimal I(x; ẑ g |A 1 ) is equal to the objective function of problem (128), where H k = − Σ_{j=1}^{J} P k,j log P k,j . Since K ≤ M , as stated in Appendix A, H H H ∼ CW K (M, I K ). Matrix (H H H) −1 thus follows the complex inverse Wishart distribution, and its diagonal elements are identically inverse chi-squared distributed with M − K + 1 degrees of freedom [35]. Let η denote one of the diagonal elements of (H H H) −1 , whose pdf follows from [35]. Since A = σ 2 (H H H) −1 , the diagonal entries of A, i.e., a k , ∀k ∈ K, are marginally identically distributed. Let a denote a new variable with the same distribution as a k ; then a follows the same distribution as σ 2 η, and its pdf is given in (130). In addition, P k,j , R k,j , and c k,j can be simplified to P j , R j , and c j by dropping the subscript 'k'.
Using (130), P j can be calculated accordingly. Problem (128) thus becomes problem (132). Analogous to problem (97), (132) can be optimally solved by the water-filling method. The optimal I(x; ẑ g |A 1 ) is given by (134), where c j = [log (ρ j /ν)]^+ and ν is chosen such that the bottleneck constraint is met. Theorem 3 is then proven.
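The distributional fact used above can be checked by simulation: for M > K, the diagonal entries of σ²(H^H H)^{-1} have mean σ²/(M − K), a standard property of the complex inverse Wishart distribution. The following is our own sketch with arbitrary parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
K, M, sigma2, N = 2, 4, 0.5, 20000  # arbitrary; M > K so that E[a] exists
a = np.empty(N)
for n in range(N):
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    # One diagonal entry of A = sigma^2 (H^H H)^{-1} per channel realization.
    a[n] = sigma2 * np.real(np.linalg.inv(H.conj().T @ H)[0, 0])
print(a.mean(), sigma2 / (M - K))  # sample mean vs. E[a] = sigma^2/(M - K)
```

The sample mean settles near σ²/(M − K) = 0.25 here, consistent with the inverse chi-squared characterization of η with M − K + 1 (complex) degrees of freedom.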

APPENDIX G PROOF OF LEMMA 4
Since Φ is a diagonal matrix with positive and real diagonal entries, it is invertible. Denote the resulting noise vector as in (135). For a given A 1 , each element of this noise vector is Gaussian distributed with zero mean and variance a k B . However, it is not a Gaussian vector since H is unknown. Hence, z is not a Gaussian vector.
where (a) holds since the Gaussian distribution maximizes entropy over all distributions with the same variance, and (b) follows from Hadamard's inequality.
Then, we prove inequality (44). Using the chain rule of mutual information, we obtain the bound below, where (a) holds since, for a given A 1 , both z k and ẑ g,k follow CN (0, 1 + a k B + ϕ k −2 ), and (b) follows since the elements in x and ẑ g are independent.
where ε is a sufficiently small positive real number. Since A − (σ 2 /M ) I K → 0, we have P 1 → 1 and H 0 → 0. Then, the claim follows from (39) and (40). When ρ → +∞, we have σ 2 → 0 and A → 0. By setting J = 2 and b 1 small enough, it can be proven as above that R lb2 → C.
When C → +∞, we can choose quantization points B = {b 1 , · · · , b J } with sufficiently large J such that the diagonal entries of A 1 , which are continuous-valued, can be represented precisely by the discrete points in B, and the representation indexes of all diagonal entries can be transmitted to the destination node since C is large enough. On the other hand, as shown in (41), a representation of x g can be constructed, where Φ is a diagonal matrix with positive and real diagonal entries, and n g ∼ CN (0, I K ). As C → +∞, according to [21, (17) and (20)], the diagonal entries of Φ grow without bound. Since Φ is a diagonal matrix with positive and real diagonal entries, as in (136), we can get (142). From (142), it is known that the elements of the noise vector Φ −1 n g have zero mean and vanishing power as C → +∞. Hence, (x, ẑ g ) → (x, x g ) in distribution. Then, based on [36], we have (144). In addition, since the Gaussian noise vector n g (defined in (32)) is independent of x, and Φ −1 n g in (143) is independent of both x and n g , x → x g → ẑ g forms a Markov chain. Then, according to the data-processing inequality, we have (145). Combining (145) and (144), we obtain (146), showing that the limit lim inf C→+∞ I(x; ẑ g |A 1 ) exists and equals I(x; x g |A 1 ). Then, when C → +∞, (147) holds. On the other hand, the capacity of Channel 1 is given by (148). To prove that (147) is upper bounded by (148), we first give and prove the following lemma.
The lemma involves a function g 1 (x) defined for x ≥ 0. By taking the first-order derivative of g 1 (x), we have (150). To prove g 1 (x) ≤ 0, we show in the following that for any positive definite matrix O, inequality (151) always holds. Define a real vector u = (u 1 , · · · , u K ) T with u k > 0, ∀ k ∈ K, and the function g 2 (u) = Σ_{k=1}^{K} 1/u k . It is obvious that g 2 (u) is convex and symmetric. Hence, g 2 (u) is a Schur-convex function. Therefore, (154) holds. Using (154), we have (155), based on which we get g 1 (x) ≤ 0, and (149) can then be proven.
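The majorization step can be verified numerically. By the Schur-Horn theorem, the eigenvalues of a Hermitian matrix majorize its diagonal; since g₂ is Schur-convex, Σ_k 1/O_kk ≤ Σ_k 1/λ_k(O) for every positive definite O. A quick self-contained check (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(200):
    B = rng.standard_normal((5, 5))
    O = B @ B.T + 5.0 * np.eye(5)  # random symmetric positive definite matrix
    lhs = (1.0 / np.diag(O)).sum()             # g2 applied to the diagonal
    rhs = (1.0 / np.linalg.eigvalsh(O)).sum()  # g2 applied to the eigenvalues
    assert lhs <= rhs + 1e-12  # eigenvalues majorize the diagonal (Schur-Horn)
print("ok")
```

Equality is attained exactly when O is already diagonal, in which case the diagonal and the eigenvalue vector coincide.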
Then, from (147), (148), and Lemma 11, it is known that when C → +∞, R lb2 → E [log det (I K + A 1 ) − log det (A 1 )] = K E[log (1 + 1/a)] ≤ I(x; y, H), where the expectation can be calculated by using the pdf of a in (130). Lemma 5 is thus proven.
APPENDIX I PROOF OF REMARK 2
In this appendix, we show that when K = M and λ th = 0, E[1/λ] does not exist. When K = M , f λ (λ) is given in (103). From (104), it is known that for any 0 ≤ i ≤ K − 1, L 0 i (λ) can always be expressed as in (157), where ς i,j is a constant. Accordingly, from (103), we obtain (158), where τ j is a constant. Let ε denote a sufficiently small positive real number. Then, when λ th = 0, we obtain (159), where we used ∫ 0 +∞ e −λ λ j−1 dλ = (j − 1)! and Ei(·) denotes the exponential integral. As is well known, −Ei(−x) → +∞ as x → 0 + . Hence, the integral in (159) diverges, i.e., E[1/λ] does not exist.

Combining (160) with (66), (67), and (68), we obtain the desired limits. When ρ → +∞, σ 2 → 0. When C → +∞, it can be found from (62) that D → 0. Then, from (66), (67), and (68), it is known that R lb3 , Ř lb3 , and R̄ lb3 all approach constants, which can be respectively obtained from these expressions.

In (163), 0 K−T is a (K − T )-dimensional all-zero column vector. Based on (163), we obtain (164), where 1 K−T is a (K − T )-dimensional all-one column vector. Since Λ is independent of U , L is independent of U as well as V , and λ t , ∀t ∈ T , are unordered, we have (165). I(x g ; z g ) in (78) can thus be calculated in closed form.

As shown in Lemma 3 and Lemma 5, when C → +∞, R lb1 approaches the capacity of Channel 1, while R lb2 is upper bounded by the capacity of Channel 1. When event ∆ happens, the relay compresses x̂ and gets its representation z. When C → +∞, it is known from (62) that D → 0. Hence, (x, z) → (x, x̂) in distribution, and it can be proven similarly to (147) that R lb3 ≤ P th I(x; z|∆) → P th I(x; x̂|∆),
where x̂ is the MMSE estimate of x at the relay, i.e., (74). This completes the proof.