Article

On Achievable Distortion in Sending Gaussian Sources over a Bandwidth-Matched Gaussian MAC with No Transmitter CSI

by
Chathura Illangakoon
* and
Pradeepa Yahampath
*
Department of ECE, University of Manitoba, Winnipeg, MB R3T 5V6, Canada
*
Authors to whom correspondence should be addressed.
Entropy 2019, 21(10), 992; https://doi.org/10.3390/e21100992
Submission received: 2 August 2019 / Revised: 14 September 2019 / Accepted: 9 October 2019 / Published: 11 October 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

This paper investigates the minimum mean square error (MMSE) of communicating a pair of Gaussian sources over a bandwidth-matched Gaussian multiple-access channel with block Rayleigh fading in the absence of channel state information (CSI) at the transmitters. The minimum achievable MMSE in this setting is not known. We therefore derive several upper bounds on the minimum achievable average MMSE as a function of the transmitter powers, the average channel fading power-to-noise ratio, and the correlation coefficient of the two sources. To obtain nontrivial upper bounds which improve on those of separate source-channel coding and uncoded transmission, we incorporate ideas from joint source-channel coding and hybrid digital–analog coding to construct specific coding schemes for which the achievable MMSE can be determined.

1. Introduction

An important problem in wireless communication is the design of systems that are robust against random variations in the channel signal-to-noise ratio (CSNR) caused by fading. If the channel response can be measured at both transmitter and receiver prior to transmitting each codeword and the channel remains stationary during the transmission of a codeword, adaptive transmitters and receivers can be used to achieve optimal communication. Although the receiver adaptation is feasible in most cases, the transmitter adaptation can be impractical in some cases. An obvious case is a single transmitter communicating with multiple receivers over a broadcast channel (BC). Another case is multiple transmitters communicating with a common receiver over a multiple-access channel (MAC) where the individual transmitters have no access to the respective CSNRs observed at the receiver. An important practical application of the latter case is a wireless sensor network (WSN) [1], where possibly correlated, sampled analog signals sensed at multiple locations are transmitted to a single receiver over a MAC [2]. The work presented in this paper is an attempt to characterize the theoretical limits to performance achievable in transmitting a pair of sampled Gaussian sources over a Gaussian MAC (GMAC) to a common receiver. More specifically, the basic question of interest is, “what is the total minimum mean square error (MMSE) with which we can reconstruct at a common receiver, a pair of Gaussian sources transmitted over a two-user power-limited GMAC with block fading, when the receiver knows the channel state information (CSI) but the transmitters only have prior knowledge of the distribution of CSI?” The problem involves averaging the achievable MMSE for a given channel state over the CSI distribution. A complete answer to this question remains an open problem. In this paper, we partially answer this question by considering certain coding schemes for which the MMSE can be computed. 
The answer to our question partly depends on whether or not the two sources are correlated. Both cases are considered in this paper. We limit attention to the particular case where transmission rate is one source sample per channel use, or in other words, the bandwidth of each source is identical to the GMAC bandwidth (“bandwidth-matched”).

1.1. Related Work

Asymptotically optimal (achieves the MMSE as the codeword length approaches infinity) communication of a Gaussian source over a point-to-point block fading channel whose CSI is known to both the transmitter and the receiver can be achieved by separate source-channel (SSC) coding, i.e., by cascading an optimal vector quantizer (VQ) for the Gaussian source with a capacity achieving channel code for the Gaussian channel [3]. Even if CSI is available at the transmitters, the source-channel separation is not in general optimal for communication over MACs ([4], Ch. 15) [5]. General conditions under which the optimality of separation holds for Gaussian sources and a GMAC are not completely known. Some special cases are however known. It is known that separation is optimal for orthogonalized transmission over a GMAC if the CSI is available at both the transmitters and the receiver [6]. When the sources are memoryless and mutually independent, SSC coding is also known to be optimal for the so-called two-to-one GMAC with no fading (NF-GMAC) [7]. In both cases, the MMSEs achievable for a set of Gaussian sources at given rates can be obtained by combining the rate-distortion functions of the sources and the capacity region of the GMAC. For a block-fading MAC (BF-MAC), the same optimality result applies if the CSI is available at both the transmitters and the receiver. In this case, the optimality can be achieved by adaptive coding at each transmitter. This is, however, not possible if CSI is not available at the transmitters.
For the transmission of mutually correlated Gaussian sources over a NF-GMAC, SSC coding requires, at each transmitter, a cascade of an optimal distributed VQ [8] and a capacity achieving channel code for the GMAC. It is, however, known that this approach is not optimal even if the transmitters know CSI [9]. This follows from a simple observation regarding the channel capacity. When the sources at the inputs are correlated, the maximum mutual information between the inputs and the output of a two-to-one MAC can be made higher than that with uncorrelated inputs, which implies that the achievable rate region of a MAC for correlated sources is larger than that for uncorrelated sources. However, realizing the rates in the enlarged region necessitates joint source-channel (JSC) codes capable of creating mutually correlated inputs to the GMAC. For example, if the source sequences themselves are used as the channel codewords, the source correlation is directly transferred to the GMAC inputs. This is the simplest possible JSC coding scheme and is commonly referred to as “uncoded” or “amplify-and-forward” transmission. Despite the simplicity of this scheme, it is shown in [10] that uncoded transmission is optimal for transmitting two memoryless and correlated Gaussian sources with equal bandwidths over a two-to-one NF-GMAC with the same bandwidth, if the CSNR is below a threshold that is determined by the correlation coefficient between the two sources. Furthermore, the MMSE of uncoded transmission remains below that of SSC coding for a wide range of CSNRs even above this threshold. This is in sharp contrast to orthogonal multiple access over a NF-GMAC, in which separation is strictly optimal at all CSNRs [6]. However, as the CSNR approaches infinity, SSC coding on a two-to-one NF-GMAC outperforms uncoded transmission.
Intuitively, if the MAC is noisy, the dependence between the channel inputs allows better estimation of the individual inputs from their noisy sum observed at the channel output; whereas, if the MAC is almost noise-free, there is little to be gained by having dependent channel inputs. In the latter case, proper coding allows recovering the individual GMAC inputs error-free, from which each source can be reconstructed within the quantization error. This is not possible when the GMAC output is the direct sum of the two sources, as in the case of uncoded transmission. Another instance of the NF-GMAC in which uncoded transmission is optimal is the so-called “CEO problem” [11]. In the simplest instance of the CEO problem, the transmitters at the inputs of a GMAC observe noisy versions of the same memoryless Gaussian source, and the objective is to estimate this source from the GMAC output. For this set-up, if the sources and the channel are bandwidth matched, uncoded transmission is optimal regardless of the CSNR [12], whereas SSC coding is suboptimal. This result is not surprising if one considers the fact that, for a single memoryless Gaussian source and a memoryless Gaussian channel with identical bandwidths, uncoded transmission is optimal [13].
The trade-off between the energy per channel use and the achievable distortion in transmission of two correlated Gaussian sources over a NF-GMAC is studied in [14]. The focus in [14] is the minimum transmit energy pairs, which can achieve a given distortion pair with no restriction on the source-channel bandwidth ratio. Interestingly, the analysis in [14] reveals that, with no feedback from the receiver to the transmitters, uncoded transmission is more energy efficient than SSC coding for sufficiently large distortion targets.

1.2. Main Contributions

This paper derives a number of upper bounds on the average MMSE, referred to as the fading-averaged MMSE (FA-MMSE), achievable in sending a pair of Gaussian sources over a GMAC with Rayleigh fading and no transmit-side CSI, as a function of the transmitter powers, the average channel fading power-to-noise ratio, and the source correlation coefficient. We refer to the minimum FA-MMSE in this case as the distortion power function (DPF) for Gaussian sources and a GMAC. What we derive here are upper bounds on this unknown DPF.
  • The obvious and relatively straightforward-to-determine upper bounds on the DPF are the FA-MMSEs achievable with uncoded transmission and SSC coding. In this paper, we derive an alternative bound by considering conventional hybrid digital–analog (HDA) coding, wherein the vector quantization error of conventional digital coding is transmitted in analog form by superposition. As expected, the numerical results show that, for uncorrelated sources, HDA coding improves on both SSC coding and uncoded transmission. For correlated sources, uncoded transmission has an advantage over HDA coding at low CSNRs, where digital coding frequently suffers receiver outages due to the lack of CSI at the transmitters.
  • It is shown in [10] that, when the sources are correlated and the GMAC is fixed (no fading), uncoded transmission of the sources over the GMAC is optimal at CSNRs below a threshold determined by the source correlation, while uncoded transmission of vector-quantized sources directly over the GMAC (JSC-VQ) is asymptotically (as the CSNR approaches infinity) optimal. Furthermore, it has been shown that this scheme, when enhanced with a superimposed uncoded transmission of the sources (HDA-JSC-VQ), is nearly optimal at all CSNRs. Based on these observations, we derive two upper bounds to the DPF for the fading GMAC, referred to as the JSC-VQ bound and the HDA-JSC-VQ bound, respectively. Although these bounds do not have expressions that can be readily interpreted, they can be numerically computed. It is observed that the JSC-VQ and HDA-JSC-VQ bounds are not significantly different, regardless of the source correlation. However, the comparison of these bounds with the distortion bound for SSC coding shows a gap that grows with source correlation and CSNR. Although there exists a gap even when the sources are uncorrelated, this gap is relatively much smaller. The HDA-JSC-VQ bound established here is the lowest known upper bound to the unknown DPF. It is shown that, for highly correlated sources and under low average CSNRs, uncoded transmission can achieve performance approaching the HDA-JSC-VQ bound.

2. Problem Definition

We begin with a formal statement of the basic problem addressed in this paper. Suppose we observe two continuous-valued information sources, $S_1$ and $S_2$, at different locations and there is no communication link between the two locations. We wish to communicate and reproduce these two sources at a central location, where the communication takes place over a wireless channel modeled by a two-to-one GMAC with block Rayleigh fading (BF-GMAC). The fading gains of the GMAC are known to the receiver, but are not known to the respective transmitters. Each source is a circularly symmetric complex-valued Gaussian variable, $S_i \in \mathbb{C}$, with mean zero, variance $E\{|S_i|^2\} = \sigma^2$, and correlation $E\{S_1^* S_2\} = \rho\sigma^2$, where $|\rho| < 1$. A sequence of $n$ samples from the source $S_i$, denoted by $\mathbf{S}_i = (S_{i,1}, \ldots, S_{i,n})$, is assumed to be independent and identically distributed (iid). Our interest in this paper is the transmission of a sequence of $n$ source samples in $n$ uses of the GMAC. The encoder for source $S_i$ is therefore a mapping $f_i^{(n)}: \mathbb{C}^n \to \mathbb{C}^n$, where the channel codeword $\mathbf{X}_i = (X_{i,1}, \ldots, X_{i,n})$ is given by
$$\mathbf{X}_i = f_i^{(n)}(\mathbf{S}_i),$$
and $X_{i,k} \in \mathbb{C}$ is the channel input for $S_i$ at time $k = 1, \ldots, n$ (the superscript in $f_i^{(n)}$ emphasizes the fact that each encoder is a block-encoder for $n$ consecutive source samples). The transmitter for $S_i$ has an average power constraint, $P_i$, so that
$$\frac{1}{n}\sum_{k=1}^{n} E\{|X_{i,k}|^2\} \leq P_i, \quad i = 1, 2.$$
Let the GMAC output for the input codeword pair ( X 1 , X 2 ) be the sequence Y = ( Y 1 , , Y n ) , where Y k C is the GMAC output at time k given by
$$Y_k = h_{1,k} X_{1,k} + h_{2,k} X_{2,k} + W_k,$$
where $h_{i,k} \in \mathbb{C}$ is the gain of the channel between $S_i$ and the receiver at time $k$, and $W_k \in \mathbb{C}$ is complex-valued channel noise. As usual, it is assumed that $(h_{1,k}, h_{2,k})$ are iid complex Gaussian random variables with mean zero and independent real and imaginary parts. In a BF-GMAC, $h_{1,k}$ and $h_{2,k}$ remain constant during the transmission of a length-$n$ codeword. Therefore, henceforth we will drop the time index $k$, denote the channel gains by $h_1$ and $h_2$, and denote $(h_1, h_2)$ by $\mathbf{h}$. The channel noise, $W_k$, is assumed to be a circularly symmetric Gaussian random variable with mean zero and variance $N$. The noise $\mathbf{W} = (W_1, \ldots, W_n)$ is assumed to be an iid sequence. For convenience, define the CSNR $\Gamma_i = |h_i|^2/N$ and $\gamma_i = |h_i|^2$, which is the exponentially distributed power gain of channel $i = 1, 2$. Let $E\{\gamma_i\} = \bar{\gamma}$. The total output CSNR in the channel state $\mathbf{h}$ is
$$\Gamma = \frac{\gamma_1 P_1 + \gamma_2 P_2 + 2\rho_x \gamma_{12}\sqrt{P_1 P_2}}{N} = \Gamma_1 P_1 + \Gamma_2 P_2 + 2\rho_x \Gamma_{12}\sqrt{P_1 P_2},$$
where $\gamma_{12} = \mathrm{Re}\{h_1 h_2^*\}$, $\rho_x = \frac{E\{X_1 X_2^*\}}{\sqrt{P_1 P_2}}$, and $\Gamma_{12} = \gamma_{12}/N$. We will refer to $\bar{\Gamma} = \bar{\gamma}/N$ as the “fading power-to-noise ratio” (FPNR) of the channel, which is a figure-of-merit for the BF-GMAC. Note that $\rho_x$ depends on the encoding scheme. For example, if SSC coding is used, the GMAC inputs are independent and we will have $\rho_x = 0$ regardless of the source correlation $\rho$. On the other hand, if uncoded transmission is used, $\rho_x = \rho$.
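To make the roles of $\gamma_{12}$ and $\rho_x$ concrete, the fading-averaged value of $\Gamma$ can be estimated numerically. The following Python sketch (function and parameter names are ours, not from the paper) samples unit-mean Rayleigh power gains and averages the expression for $\Gamma$ above. Since $E\{\gamma_{12}\} = 0$ for independent zero-mean fades, the fading-averaged total CSNR is $(P_1 + P_2)/N$ regardless of $\rho_x$, even though $\rho_x$ changes $\Gamma$ in each individual channel state.

```python
import math
import random

def sample_total_csnr(P1, P2, rho_x, N, n_trials=20000, seed=1):
    """Monte Carlo estimate of the fading-averaged total output CSNR
    E{Gamma}, with h_i ~ CN(0,1) (unit-mean Rayleigh power gains).
    Illustrative sketch; names are ours, not from the paper."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_trials):
        # h_i ~ CN(0,1): real/imag parts iid N(0, 1/2)
        h1 = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
        h2 = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
        g1, g2 = abs(h1) ** 2, abs(h2) ** 2      # gamma_1, gamma_2
        g12 = (h1 * h2.conjugate()).real         # gamma_12 = Re{h1 h2*}
        acc += (g1 * P1 + g2 * P2 + 2 * rho_x * g12 * math.sqrt(P1 * P2)) / N
    return acc / n_trials
```

Running this with $\rho_x = 0$ and $\rho_x = 1$ yields nearly identical averages, illustrating that the benefit of correlated inputs appears per channel state (exploitable at the receiver, which knows $\mathbf{h}$), not in the average CSNR itself.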
The receiver observes the channel output $\mathbf{Y}$ and the channel state $\mathbf{h} = (h_1, h_2)$ and reconstructs the sequences $\mathbf{S}_1$ and $\mathbf{S}_2$. This decoder can be described by a pair of mappings $\phi_i^{(n)}: \mathbb{C}^n \times \mathbb{C}^2 \to \mathbb{C}^n$, such that the decoded source sequences are given by
$$\hat{\mathbf{S}}_i = \phi_i^{(n)}(\mathbf{Y}, \mathbf{h}), \quad i = 1, 2.$$
We will measure the distortion between S i and S ^ i using the average MSE, given by
$$d_i = \frac{1}{n}\sum_{k=1}^{n} E\{|S_{i,k} - \hat{S}_{i,k}|^2\}.$$
For notational simplicity, we denote the minimum achievable d i for a fixed channel state h by d i ( h ) and let
$$d_{1,2}(\mathbf{h}) = d_1(\mathbf{h}) + d_2(\mathbf{h}).$$
Our main goal in this paper is to determine the distortion power function (DPF) for two Gaussian sources and a BF-GMAC, given by
$$D(P_1, P_2) = \inf_{f_1, f_2, \phi_1, \phi_2} \frac{1}{2}\int d_{1,2}(\mathbf{h})\, p(\mathbf{h})\, d\mathbf{h},$$
where $p(\mathbf{h}) = p_1(h_1) p_2(h_2)$ and $p_i(h)$ is the pdf of $h_i$, $i = 1, 2$. Note that $D(P_1, P_2)$ is defined for a given source correlation $\rho$ and FPNR $\bar{\Gamma}$ of a bandwidth-matched BF-GMAC. An alternative description of $D(P_1, P_2)$ is the achievable power region $(P_1, P_2)$ for a given target $D$. Note that the DPF for a single Gaussian source and a point-to-point AWGN channel can be found by evaluating the distortion-rate function of the source at the rate equal to the channel capacity, i.e., $D(P) = \sigma^2 2^{-\log_2(1+P/N)} = \sigma^2/(1+P/N)$ [4].
Finding D ( P 1 , P 2 ) in general is difficult. Therefore, our end goal is to find useful upper-bounds to D ( P 1 , P 2 ) , by considering certain coding schemes for which d 1 ( h ) and d 2 ( h ) can be found in closed-form, and therefore Equation (2) can be at least numerically evaluated.
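As an illustration of how such fading-averaged distortions are evaluated numerically, the following Python sketch computes the FA-MMSE of the single-user baseline: uncoded point-to-point transmission over a Rayleigh block-fading channel with receiver-only CSI, whose per-state MMSE is $\sigma^2/(1 + \gamma P/N)$. This is a sketch under our own naming conventions, not part of the paper's analysis.

```python
import random

def fa_mmse_uncoded_p2p(P, N, sigma2=1.0, n_trials=50000, seed=7):
    """Fading-averaged MMSE of uncoded point-to-point transmission over a
    Rayleigh block-fading AWGN channel (receiver CSI only): in channel
    state h the MMSE is sigma^2 / (1 + |h|^2 P / N); average it over the
    exponentially distributed unit-mean power gain |h|^2.
    Illustrative numerical-integration sketch; names are ours."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_trials):
        gamma = rng.expovariate(1.0)    # power gain |h|^2 ~ Exp(1)
        acc += sigma2 / (1.0 + gamma * P / N)
    return acc / n_trials
```

The same Monte Carlo approach, applied to the two-user $d_{1,2}(\mathbf{h})$ of a specific coding scheme, is how the upper bounds in this paper can be evaluated numerically.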

Notation and Terminology

For simplicity of presentation, throughout the paper we define the index variable $i \in \{1, 2\}$. The index $j \in \{1, 2\}$ is always defined in relation to $i$ as follows:
$$j = \begin{cases} 2 & \text{if } i = 1, \\ 1 & \text{if } i = 2. \end{cases}$$
The complex conjugate of $X$ is denoted by $X^*$. The transpose and conjugate transpose (Hermitian) of a matrix $\mathbf{X}$ are denoted by $\mathbf{X}^T$ and $\mathbf{X}^H$, respectively. The time-averaged expectation of a length-$n$ sequence $X_1, \ldots, X_n$ will be denoted by
$$\frac{1}{n}\sum_{k=1}^{n} E\{X_k\} = \overline{E\{X\}}_n.$$

3. Separate Source-Channel Coding

As a benchmark, we consider ubiquitous SSC coding. In this case, the encoder mapping $\mathbf{X}_i = f_i^{(n)}(\mathbf{S}_i)$ is a concatenation of two stages. In the first stage, the sequence $\mathbf{S}_i$ is vector-quantized to produce a “digital” index $I_i = \Pi_i^{(n)}(\mathbf{S}_i)$. In the second stage, the index $I_i$ is encoded into a channel codeword $\mathbf{X}_i = \Lambda_i^{(n)}(I_i)$. The first stage (VQ) is a mapping $\Pi_i^{(n)}: \mathbb{C}^n \to \{0, \ldots, 2^{nR_i} - 1\}$, and the second stage (channel encoder) is a mapping $\Lambda_i^{(n)}: \{0, \ldots, 2^{nR_i} - 1\} \to \mathbb{C}^n$, where $R_i$ is the rate of encoder $i$ in bits/channel-use, $i = 1, 2$. If $S_1$ and $S_2$ are uncorrelated, $\Pi_i^{(n)}$ is a rate-distortion optimal VQ for $S_i$. If $S_1$ and $S_2$ are correlated, the pair $(\Pi_1^{(n)}, \Pi_2^{(n)})$ is an optimal distributed VQ for $(S_1, S_2)$ [15]. The decoder $\phi_i^{(n)}$ also consists of two stages. The first stage (channel decoder) decodes the index $I_i$ using $\mathbf{Y}$ and $\mathbf{h}$. As usual, we will say that a transmitted channel codeword is “correctly decodable”, or simply decodable, if the codeword can be recovered from the channel output with an arbitrarily small error probability by letting $n \to \infty$. The second stage (source decoder) optimally estimates $\mathbf{S}_i$ using the recovered $(I_1, I_2)$, i.e., with MMSE estimation, $\hat{\mathbf{S}}_i = E\{\mathbf{S}_i | I_1, I_2\}$. However, as the transmitters do not observe $\mathbf{h}$, the source and channel codes cannot be chosen adaptively to guarantee the error-free recovery of $(I_1, I_2)$. With fixed $f_i^{(n)}$, $i = 1, 2$, depending on the realization of $\mathbf{h}$, the received channel codewords may or may not be decodable. The event where only a single codeword, either $\mathbf{X}_1$ or $\mathbf{X}_2$, can be decoded is referred to as a “partial outage” and that where both codewords are undecodable is referred to as a “total outage”.
Let $E_i$ denote the event that $I_i$ is decoded correctly and $E_{12}$ denote the no-outage event that both codewords are decoded correctly. Further, let $\bar{E}_i$ denote the partial-outage event that only the codeword $I_i$ is correctly decoded and $\bar{E}_{12}$ denote the total-outage event. The probabilities of the outage events for uncorrelated sources and transmitters with the same power have been determined in [16]. In general, $P(E | R_1, R_2)$, the probability of an event $E$ given $(R_1, R_2)$, can be found as shown in Appendix A. Let $d(E | R_1, R_2)$ denote the conditional MMSE under the outage event $E$, given $(R_1, R_2)$. If the conditional fading-averaged MMSE (FA-MMSE), given $(R_1, R_2)$, is $\bar{d}(R_1, R_2)$, then the minimum FA-MMSE achievable with SSC coding is
$$D_{SSC}(P_1, P_2) = \min_{R_1, R_2} \bar{d}(R_1, R_2).$$

3.1. Uncorrelated Sources

In this case, the reconstruction of $\mathbf{S}_i$ only requires $I_i$, and straightforwardly
$$\bar{d}(R_1, R_2) = \frac{\sigma^2}{2}\sum_{i=1}^{2}\left\{2^{-2R_i}\,\mathrm{Pr}(E_i | R_1, R_2) + \left[1 - \mathrm{Pr}(E_i | R_1, R_2)\right]\right\}.$$
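The displayed formula is straightforward to evaluate once the decoding probabilities are known (e.g., from the outage analysis in Appendix A). A minimal Python transcription follows; the function name is ours and the probabilities $\mathrm{Pr}(E_i|R_1,R_2)$ must be supplied externally.

```python
def fa_mmse_ssc_uncorrelated(R1, R2, p1, p2, sigma2=1.0):
    """Conditional FA-MMSE of SSC coding for uncorrelated sources:
    source i contributes sigma^2 * 2^(-2 R_i) when its codeword is
    decodable (probability p_i = Pr(E_i | R1, R2)) and sigma^2 otherwise.
    Direct transcription of the displayed formula; names are ours."""
    d = 0.0
    for R, p in ((R1, p1), (R2, p2)):
        d += (2.0 ** (-2.0 * R)) * p + (1.0 - p)
    return 0.5 * sigma2 * d
```

With $p_1 = p_2 = 1$ (no outages) this reduces to the usual distortion-rate values $\frac{\sigma^2}{2}(2^{-2R_1} + 2^{-2R_2})$, and with $p_1 = p_2 = 0$ it returns $\sigma^2$, as expected.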

3.2. Correlated Sources

When the sources are correlated, the source encoders must constitute a distributed VQ. One issue with an optimal distributed VQ is that, due to the mutual dependence of quantizers for the two sources, the reconstruction of neither source is possible unless the channel codewords from both transmitters can be correctly decoded. Therefore,
$$\bar{d}(R_1, R_2) = d(E_{12} | R_1, R_2)\, p_{12} + \sigma^2 (1 - p_{12}),$$
where $p_{12} = \mathrm{Pr}(E_{12} | R_1, R_2)$, and $d(E_{12} | R_1, R_2)$ in this case is the minimum achievable MSE $D_{DVQ}(R_1, R_2)$ of a distributed VQ with rates $(R_1, R_2)$, which is given by the following lemma.
Lemma 1.
Let
$$\Delta(x, y) = 2^{-2x}\left(1 - \rho^2 + \rho^2 2^{-2y}\right)$$
and $R_{sum} = R_1 + R_2$. The MMSE of a distributed VQ for a pair of mean-zero, variance-$\sigma^2$ Gaussian sources with correlation coefficient $\rho$ is given by
$$D_{DVQ}(R_1, R_2) = \sigma^2\left(\Delta^* + \frac{\Delta(R_{sum}, R_{sum})}{\Delta^*}\right),$$
where
$$\Delta^* = \max\left\{\Delta(R_1, R_2),\, \Delta(R_2, R_1),\, \Delta(R_{sum}, R_{sum})\right\}.$$
Proof. 
See Appendix B. □
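Lemma 1 is directly computable. A small Python sketch (names ours) is given below; note that at $\rho = 0$ it recovers the expected uncorrelated-case sum distortion $\sigma^2(2^{-2R_1} + 2^{-2R_2})$.

```python
def delta(x, y, rho):
    """Delta(x, y) = 2^(-2x) * (1 - rho^2 + rho^2 * 2^(-2y))."""
    return 2.0 ** (-2.0 * x) * (1.0 - rho ** 2 + rho ** 2 * 2.0 ** (-2.0 * y))

def d_dvq(R1, R2, rho, sigma2=1.0):
    """MMSE D_DVQ(R1, R2) of an optimal distributed VQ per Lemma 1.
    Illustrative transcription; function names are ours."""
    Rsum = R1 + R2
    d_star = max(delta(R1, R2, rho), delta(R2, R1, rho),
                 delta(Rsum, Rsum, rho))
    return sigma2 * (d_star + delta(Rsum, Rsum, rho) / d_star)
```

For fixed rates, increasing $|\rho|$ decreases $D_{DVQ}$, reflecting the distributed coding gain available from the source correlation.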

4. Conventional HDA Coding

Both fully analog (uncoded) transmission and fully digital SSC coding are special cases of more general HDA coding, where the total power and/or channel bandwidth is split between an analog encoder and a conventional digital encoder. When CSI is not available at the transmitters, an HDA system with a power allocation optimized for the channel state distribution can always outperform (in the FA-MMSE sense) both uncoded and SSC-coded transmission. In this section, we analyze an HDA scheme that uses a conventional SSC encoder (optimal VQ in cascade with a channel coder for the GMAC) as the digital part and transmits the quantization error as the analog part [17]. The conventional approach to combining the analog and digital channel signals is by superposition, as considered below. Later, in Section 5.1, we will consider an alternative approach where a vector quantizer is used as a JSC code in the digital part.
A conventional HDA system is shown in Figure 1. The analog and digital components of each transmitter output share the same channel bandwidth via superposition, whereas the total available transmitter power is split between the two components via digital and analog scaling factors $(\alpha_{d_i}, \alpha_{a_i})$. The decoding at the GMAC output relies on the principle of successive interference cancellation (SIC). Each encoder is parameterized by $(R_i, t_i)$, where $R_i$ is the VQ rate and $0 \leq t_i \leq 1$ is the digital–analog power allocation factor to be introduced below, $i = 1, 2$. Note that in this HDA system, each source encoder is a rate-distortion optimal VQ for the respective source, regardless of whether the sources are correlated or not. Therefore, unlike in Section 3, source reconstruction remains possible even under partial-outage events. Furthermore, if the sources are correlated, the quantization errors are also correlated (which is not the case if a distributed VQ is used), allowing the analog components of the HDA transmission to interfere, on average, in a constructive manner over the GMAC.
Let the quantized value and quantization error for source sequence S i be S ˜ i and Z i , respectively. For rate-distortion optimal VQ of a Gaussian source, the quantization error variance is [4]
$$\overline{E\{|Z_i|^2\}}_n = \sigma_{z_i}^2 = \sigma^2 2^{-2R_i},$$
where $\mathbf{Z}_i = (Z_{i,1}, \ldots, Z_{i,n})$. Furthermore, $\tilde{\mathbf{S}}_i$ and $\mathbf{Z}_i$ are uncorrelated, and therefore
$$\overline{E\{|\tilde{S}_i|^2\}}_n = \tilde{\sigma}_i^2 = \sigma^2 (1 - 2^{-2R_i}),$$
where $\tilde{\mathbf{S}}_i = (\tilde{S}_{i,1}, \ldots, \tilde{S}_{i,n})$. For correlated Gaussian sources, the following results regarding the time-averaged asymptotic cross-correlations hold [10]:
$$\overline{E\{S_i^* \tilde{S}_i\}}_n = \sigma^2 (1 - 2^{-2R_i}),$$
$$\overline{E\{S_i^* \tilde{S}_j\}}_n = \sigma^2 \rho (1 - 2^{-2R_j}),$$
$$\overline{E\{\tilde{S}_1^* \tilde{S}_2\}}_n = \sigma^2 \rho (1 - 2^{-2R_1})(1 - 2^{-2R_2}),$$
$$\overline{E\{Z_1^* Z_2\}}_n = \rho \sigma^2 2^{-2(R_1 + R_2)}.$$
Further, we define the correlation coefficients
$$\tilde{\rho} = \frac{\overline{E\{\tilde{S}_1^* \tilde{S}_2\}}_n}{\tilde{\sigma}_1 \tilde{\sigma}_2} = \rho \sqrt{(1 - 2^{-2R_1})(1 - 2^{-2R_2})},$$
$$\rho_z = \frac{\overline{E\{Z_1^* Z_2\}}_n}{\sigma_{z_1} \sigma_{z_2}}.$$
The VQ codeword S ˜ i (specifically, an index identifying it) is encoded into a channel codeword C i of a capacity achieving channel code for the GMAC. The channel input for source sequence S i is given by
$$\mathbf{X}_i = \tilde{\mathbf{X}}_i + \tilde{\mathbf{Z}}_i,$$
where $\tilde{\mathbf{X}}_i = \alpha_{d_i} \mathbf{C}_i$ and $\tilde{\mathbf{Z}}_i = \alpha_{a_i} \mathbf{Z}_i$, and $\alpha_{d_i} \geq 0$ and $\alpha_{a_i} \geq 0$ are chosen such that
$$\alpha_{d_i}^2 \overline{E\{|C_i|^2\}}_n + \alpha_{a_i}^2 \overline{E\{|Z_i|^2\}}_n = P_i, \quad i = 1, 2.$$
We define the digital–analog power allocation factors as $t_i = \alpha_{a_i}^2 \overline{E\{|Z_i|^2\}}_n / P_i$, $i = 1, 2$, so that $\alpha_{d_i}^2 \overline{E\{|C_i|^2\}}_n = (1 - t_i) P_i$. The resulting GMAC output is given by
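The power split defined by $t_i$ can be sanity-checked numerically. The following sketch (function and variable names are ours; the codeword energy $E_c$ and VQ rate are illustrative values) computes the scaling factors from $t_i$ and verifies that the scaled digital and analog components together meet the power constraint $P_i$.

```python
import math

def hda_scaling(P, t, R, Ec=1.0, sigma2=1.0):
    """Digital/analog scaling factors for one HDA transmitter.
    t in [0, 1] is the analog power fraction; Ec is the per-symbol energy
    of the unscaled channel codeword C_i and sigma2 * 2^(-2R) is the
    quantization-error variance.  Names are ours."""
    var_z = sigma2 * 2.0 ** (-2.0 * R)
    alpha_a = math.sqrt(t * P / var_z)        # analog scale: power t*P
    alpha_d = math.sqrt((1.0 - t) * P / Ec)   # digital scale: power (1-t)*P
    return alpha_d, alpha_a

# The two scaled components together use exactly the total power P:
ad, aa = hda_scaling(P=4.0, t=0.3, R=1.5)
total = ad ** 2 * 1.0 + aa ** 2 * (1.0 * 2.0 ** (-2 * 1.5))
```

Here `total` evaluates (up to floating-point error) to the full power budget $P = 4$, confirming the parameterization.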
$$Y_k = h_1 (\tilde{X}_{1,k} + \tilde{Z}_{1,k}) + h_2 (\tilde{X}_{2,k} + \tilde{Z}_{2,k}) + W_k, \quad k = 1, \ldots, n,$$
where $\tilde{\mathbf{X}}_i = (\tilde{X}_{i,1}, \ldots, \tilde{X}_{i,n})$, $\mathbf{W} = (W_1, \ldots, W_n)$, and $\mathbf{Y} = (Y_1, \ldots, Y_n)$.
The analog channel inputs act as noise to the digital channel decoder, which jointly decodes the two codewords $\mathbf{C}_1$ and $\mathbf{C}_2$. Recall that, with asymptotically optimal VQ, the $\tilde{Z}_{i,k}$ are iid Gaussian variables, and therefore the total noise $h_1\tilde{\mathbf{Z}}_1 + h_2\tilde{\mathbf{Z}}_2 + \mathbf{W}$ at the input of the channel decoder is also iid Gaussian. The digital codewords are decoded first, and the correctly decoded codewords are then used to cancel out the corresponding digital channel inputs from the observed channel output. The source sequences are then linearly estimated from the correctly decoded channel codewords and the residual channel output. The achievable MMSE of the HDA system in any given channel state $\mathbf{h}$ depends on the decodability of the digital codewords $\mathbf{C}_1$ and $\mathbf{C}_2$. Let $d_i(E | R_1, R_2, t_1, t_2)$ be the conditional MMSE for source $i$ under the event $E$, given $R_1$, $R_2$, $t_1$, and $t_2$, $i = 1, 2$. The necessary and sufficient conditions for each event can be found using the basic achievable rate region for a GMAC [4].
  • No outage event E 12 : Both C 1 and C 2 are decodable if and only if ([4], Equations 15.147–15.149)
    $$R_i < \frac{1}{2}\log\left(1 + \frac{(1 - t_i) P_i \Gamma_i}{t_i P_i \Gamma_i + t_j P_j \Gamma_j + 2\Gamma_{12}\rho_z\sqrt{t_1 t_2 P_1 P_2} + 1}\right), \quad i = 1, 2,$$
    $$R_1 + R_2 < \frac{1}{2}\log\left(1 + \frac{(1 - t_1) P_1 \Gamma_1 + (1 - t_2) P_2 \Gamma_2}{t_1 P_1 \Gamma_1 + t_2 P_2 \Gamma_2 + 2\Gamma_{12}\rho_z\sqrt{t_1 t_2 P_1 P_2} + 1}\right).$$
  • Partial outage event $\bar{E}_i$ (either $\mathbf{C}_1$ or $\mathbf{C}_2$ is decodable, but not both): $\mathbf{C}_i$ is decodable while $\mathbf{C}_j$ is not decodable if and only if
    $$R_i < \frac{1}{2}\log\left(1 + \frac{(1 - t_i) P_i \Gamma_i}{t_i P_i \Gamma_i + P_j \Gamma_j + 2\Gamma_{12}\rho_z\sqrt{t_1 t_2 P_1 P_2} + 1}\right),$$
    $$R_j > \frac{1}{2}\log\left(1 + \frac{(1 - t_j) P_j \Gamma_j}{t_i P_i \Gamma_i + t_j P_j \Gamma_j + 2\Gamma_{12}\rho_z\sqrt{t_1 t_2 P_1 P_2} + 1}\right).$$
  • Total outage event $\bar{E}_{12}$: Neither $\mathbf{C}_1$ nor $\mathbf{C}_2$ is decodable if and only if Equation (11) is violated for $i = 1, 2$ and Equation (12) is violated.
It can be verified that, when the sources are uncorrelated, Equations (11)–(14) describe regions of ( γ 1 , γ 2 ) for these decoding events as shown in Figure 2. For correlated sources, these regions not only depend on γ 1 and γ 2 , but also on γ 12 . We now proceed to evaluate MMSE under each event.
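For the uncorrelated case ($\rho_z = 0$), the classification of a channel state $(\gamma_1, \gamma_2)$ into the decoding events can be transcribed directly from conditions (11)–(14). The following Python sketch (names and structure are ours) classifies one channel state; sweeping it over sampled fades traces out the regions shown in Figure 2.

```python
import math

def classify_event(g1, g2, R1, R2, t1, t2, P1, P2, N=1.0):
    """Classify the HDA decoding event for one channel state, assuming
    uncorrelated sources (rho_z = 0), per conditions (11)-(14).
    Returns 'both', 'only1', 'only2', or 'none'.  Sketch; names ours."""
    G1, G2 = g1 / N, g2 / N

    def C(sig, noise):                        # (1/2) log2(1 + SINR)
        return 0.5 * math.log2(1.0 + sig / noise)

    noise_full = t1 * P1 * G1 + t2 * P2 * G2 + 1.0
    # No outage: both individual and sum-rate conditions hold.
    if (R1 < C((1 - t1) * P1 * G1, noise_full)
            and R2 < C((1 - t2) * P2 * G2, noise_full)
            and R1 + R2 < C((1 - t1) * P1 * G1 + (1 - t2) * P2 * G2,
                            noise_full)):
        return 'both'
    # Partial outage: C_i decodable with user j's whole signal as noise.
    if (R1 < C((1 - t1) * P1 * G1, t1 * P1 * G1 + P2 * G2 + 1.0)
            and R2 > C((1 - t2) * P2 * G2, noise_full)):
        return 'only1'
    if (R2 < C((1 - t2) * P2 * G2, t2 * P2 * G2 + P1 * G1 + 1.0)
            and R1 > C((1 - t1) * P1 * G1, noise_full)):
        return 'only2'
    return 'none'
```

For correlated sources the cross term $2\Gamma_{12}\rho_z\sqrt{t_1 t_2 P_1 P_2}$ would be added to each denominator, making the regions depend on $\gamma_{12}$ as well.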

4.1. No Outage Event ( E 12 )

In this case, the digital channel codewords $\mathbf{C}_i$, $i = 1, 2$, are correctly decoded at the receiver. Therefore, $h_1 \tilde{\mathbf{X}}_1 + h_2 \tilde{\mathbf{X}}_2$ can be perfectly canceled from the received signal $\mathbf{Y}$ to obtain the residual $\tilde{\mathbf{Y}} = (\tilde{Y}_1, \ldots, \tilde{Y}_n)$, where
$$\tilde{Y}_k = h_1 \tilde{Z}_{1,k} + h_2 \tilde{Z}_{2,k} + W_k, \quad k = 1, \ldots, n,$$
and the source components $(\mathbf{S}_1, \mathbf{S}_2)$ can be estimated from the recovered source codewords $\tilde{\mathbf{S}}_1$ and $\tilde{\mathbf{S}}_2$ and the residual sequence $\tilde{\mathbf{Y}}$. The asymptotically optimal estimator is linear, and therefore the estimated source sequence $\hat{\mathbf{S}}_i$ is given by
$$\hat{S}_{i,k} = q_{i1} \tilde{S}_{1,k} + q_{i2} \tilde{S}_{2,k} + q_{i3} \tilde{Y}_k, \quad k = 1, \ldots, n,$$
where $q_{i1}$, $q_{i2}$, and $q_{i3}$ are the coefficients of the optimal linear estimator. The MSE of the optimal estimator is
$$d_i(E_{12} | R_1, R_2, t_1, t_2) = \sigma^2 - q_{i1} c_1 - q_{i2} c_2 - q_{i3} c_3, \quad i = 1, 2,$$
where $q_{il}$ and $c_l$, $l = 1, 2, 3$, are found in Appendix C.1. Now, for given $(R_1, R_2, t_1, t_2)$, the corresponding contribution to the total FA-MMSE can be found by evaluating
$$\bar{d}(E_{12} | R_1, R_2, t_1, t_2) = \frac{1}{2}\int_{E_{12}} \sum_{i=1}^{2} d_i(E_{12} | R_1, R_2, t_1, t_2)\, p_1(h_1) p_2(h_2)\, dh_1\, dh_2.$$
Remark 1.
1. 
For the special case of uncoded transmission, we can set $R_1 = R_2 = 0$ and $t_1 = t_2 = 1$ (in which case $\rho_z = \rho$).
2. 
On the other hand, by setting $t_1 = t_2 = 0$, we obtain the achievable MMSE of a purely digital SSC coding system. However, for correlated sources, this MMSE is larger than that given by Lemma 1. This is because the digital encoder in Figure 1 does not achieve a distributed coding gain, as does the digital encoder in Section 3. To achieve a coding gain, a distributed VQ must be used, but this would render the quantization errors of the two sources uncorrelated, preventing us from exploiting the correlation between the GMAC inputs to our advantage. The advantage of the HDA scheme in Figure 1 is its robustness against unknown CSI. In particular, when the two sources are correlated, so are their quantization errors. This correlation allows for a form of statistical cooperation at the GMAC output, as reflected by the appearance of $\rho_z$ in Equation (16); see Appendix C.

4.2. Partial Outage Event ($\bar{E}_i$)

Suppose only $\mathbf{C}_i$ is decodable ($\mathbf{C}_j$ is undecodable), $i \in \{1, 2\}$. Upon decoding $\mathbf{C}_i$, the decoder computes $\tilde{\mathbf{Y}} = \mathbf{Y} - h_i \tilde{\mathbf{X}}_i$ to obtain the residuals
$$\tilde{Y}_k = h_i \tilde{Z}_{i,k} + h_j (\tilde{X}_{j,k} + \tilde{Z}_{j,k}) + W_k, \quad k = 1, \ldots, n.$$
The optimal estimates of the source sequences are
$$\hat{S}_{i,k} = q_{i1} \tilde{S}_{i,k} + q_{i2} \tilde{Y}_k, \qquad \hat{S}_{j,k} = q'_{i1} \tilde{S}_{i,k} + q'_{i2} \tilde{Y}_k,$$
and the corresponding MSEs are
$$d_i(\bar{E}_i | R_1, R_2, t_1, t_2) = \sigma^2 - q_{i1} c_1 - q_{i2} c_2, \qquad d_j(\bar{E}_i | R_1, R_2, t_1, t_2) = \sigma^2 - q'_{i1} c'_1 - q'_{i2} c'_2,$$
where $q_{il}$, $q'_{il}$, $c_l$, and $c'_l$, $l = 1, 2$, are found in Appendix C.2. The corresponding contribution to the total FA-MMSE can be found by evaluating
$$\bar{d}(\bar{E}_i | R_1, R_2, t_1, t_2) = \frac{1}{2}\sum_{i=1}^{2}\int_{\bar{E}_i} \left[d_i(\bar{E}_i | R_1, R_2, t_1, t_2) + d_j(\bar{E}_i | R_1, R_2, t_1, t_2)\right] p_1(h_1) p_2(h_2)\, dh_1\, dh_2.$$

4.3. Total Outage Event ($\bar{E}_{12}$)

In this case, neither digital codeword is decodable, and the source sequences are reconstructed as
$$\hat{S}_{i,k} = q_i Y_k, \quad k = 1, \ldots, n, \quad i = 1, 2.$$
The MSE of the optimal estimator is
$$d_i(\bar{E}_{12} | R_1, R_2, t_1, t_2) = \sigma^2 - q_i c_i^*,$$
where $q_i$ and $c_i^*$ are found in Appendix C.3. For given $(R_1, R_2, t_1, t_2)$, the corresponding contribution to the total FA-MMSE can be found by evaluating
$$\bar{d}(\bar{E}_{12} | R_1, R_2, t_1, t_2) = \frac{1}{2}\sum_{i=1}^{2}\int_{\bar{E}_{12}} d_i(\bar{E}_{12} | R_1, R_2, t_1, t_2)\, p_1(h_1) p_2(h_2)\, dh_1\, dh_2.$$
Finally, the total FA-MMSE of the superposition-based HDA scheme can be obtained by solving
$$D_{HDA}^{(sup)}(P_1, P_2) = \min_{R_1, R_2, t_1, t_2} \bar{d}(R_1, R_2, t_1, t_2),$$
where
$$\bar{d}(R_1, R_2, t_1, t_2) = \bar{d}(E_{12} | R_1, R_2, t_1, t_2) + \bar{d}(\bar{E}_1 | R_1, R_2, t_1, t_2) + \bar{d}(\bar{E}_2 | R_1, R_2, t_1, t_2) + \bar{d}(\bar{E}_{12} | R_1, R_2, t_1, t_2).$$
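The outer minimization over $(R_1, R_2, t_1, t_2)$ has no closed form and can be carried out by a grid search, with each conditional term evaluated numerically (e.g., by Monte Carlo over the fading states). The following generic sketch of this outer loop is ours; `d_bar` stands in for the numerically evaluated objective above.

```python
import itertools

def minimize_fa_mmse(d_bar, R_grid, t_grid):
    """Exhaustive grid search for the outer minimization in D_HDA:
    d_bar(R1, R2, t1, t2) must return the total FA-MMSE (the sum of the
    four conditional terms).  Returns (best value, best parameters).
    Generic sketch; names are ours."""
    best_val, best_params = float('inf'), None
    for R1, R2, t1, t2 in itertools.product(R_grid, R_grid, t_grid, t_grid):
        d = d_bar(R1, R2, t1, t2)
        if d < best_val:
            best_val, best_params = d, (R1, R2, t1, t2)
    return best_val, best_params
```

In practice, the grids for the rates and power-allocation factors would be refined around the coarse minimizer, since each evaluation of `d_bar` is itself a (potentially expensive) numerical integration.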

5. JSC Coding

5.1. JSC-VQ

One problem of digital coding with no knowledge of CSI at the transmitter is the unavoidable outage condition, which also exists in HDA coding schemes that use the quantization error of the digital encoder as the analog component. Although this problem does not exist in uncoded transmission, on a GMAC, uncoded transmission becomes inferior to SSC coding as the CSNR increases (for example, see Figure 3 and Figure 4). A simple way to improve the performance of uncoded transmission at high CSNR is suggested in [10]. Rate-distortion optimal VQ is applied to each source to be transmitted over the GMAC; the VQ codewords are scaled to meet the individual power constraints and directly transmitted over the channel. As the source codewords and channel codewords are the same in this case, optimal detection at the receiver can be used for joint source-channel decoding. More importantly, even if detection of the VQ codewords fails, some estimate of the sources can still be obtained from the observed channel output. The interest in [10] is the transmission of correlated sources over a NF-GMAC, or equivalently a BF-GMAC with CSI available at the transmitters. However, as we will demonstrate here, when the transmitters do not observe instantaneous CSI, this approach can outperform HDA coding even when the sources are uncorrelated. In the following, we determine the FA-MMSE for this joint source-channel VQ (JSC-VQ) scheme. Our analysis considers the general case of two correlated Gaussian sources with correlation coefficient $\rho$ (the result for uncorrelated sources can be obtained by setting $\rho = 0$). The JSC-VQ scheme has an additional advantage with correlated sources, since it allows the two correlated sources to statistically cooperate over a GMAC, i.e., to create, on average, constructive interference at the channel output. Furthermore, whenever one of the codewords is decoded correctly, the effective CSNR for the other codeword increases.
As in HDA coding, the achievable MMSE is determined by three possible outage conditions at the decoder. The advantage here, however, is that even when neither of the two codewords can be decoded correctly, some estimates of the two sources can still be obtained from the observed channel output.
The JSC-VQ encoder i vector-quantizes the source sequence S i using a rate- R i codebook, scales the resulting codeword U i o to satisfy its power constraint P i , and transmits the scaled codeword X i = α i U i o over the BF-GMAC, where
$\alpha_i = \sqrt{\dfrac{P_i}{\sigma^2(1 - 2^{-2R_i})}},$
$i = 1, 2$. Note that when $S_1$ and $S_2$ are correlated, so are the channel inputs $X_1$ and $X_2$; this correlation is precisely what the scheme exploits. Upon observing the resulting channel output Y, the decoder first uses the same VQ codebooks used by the encoders to jointly detect the transmitted codewords $(U_1^o, U_2^o)$, taking their correlation into account (detection step). In the second step, the detected VQ codewords are used to estimate the source sequences $S_1$ and $S_2$ (estimation step). Note that $(S_1, S_2, U_1^o, U_2^o, Y)$ are asymptotically jointly Gaussian, and therefore the optimal (MMSE) estimator is linear. Let the codeword pair found in the detection step be $(\hat{U}_1, \hat{U}_2)$. In general, the estimated source sequences are given by
$\hat{S}_i = q_{i1}\hat{U}_1 + q_{i2}\hat{U}_2 + q_{i3}Y, \quad i = 1, 2,$
where the coefficients $q_{i1}$, $q_{i2}$, and $q_{i3}$ of the optimal linear estimator are to be determined. For given $(R_1, R_2, \alpha_1, \alpha_2)$ used by the encoders, it is not guaranteed that $(U_1^o, U_2^o)$ can be correctly decoded in all channel states, and therefore $q_{i1}$, $q_{i2}$, and $q_{i3}$ will depend on the state (outage event) of the decoder. Let $\mathcal{H}_{12}$, $\mathcal{H}_i$, and $\mathcal{H}_{\overline{12}}$, respectively, be the sets of $(h_1, h_2)$ for which the outage events $E_{12}$, $E_i$, and $\overline{E}_{12}$ occur for given (fixed) $(R_1, R_2, \alpha_1, \alpha_2)$. Define the encoder parameter vector $\mathbf{t} = (R_1, R_2, \alpha_1, \alpha_2)$. The FA-MMSE for a given $\mathbf{t}$ is then given by
$\bar{d}(\mathbf{t}) = \frac{1}{2}\sum_{i=1}^{2}\left[\int_{\mathcal{H}_{12}} d_i(E_{12}\mid\mathbf{t})\,d\mathbf{h} + \int_{\mathcal{H}_1} d_i(E_1\mid\mathbf{t})\,d\mathbf{h} + \int_{\mathcal{H}_2} d_i(E_2\mid\mathbf{t})\,d\mathbf{h} + \int_{\mathcal{H}_{\overline{12}}} d_i(\overline{E}_{12}\mid\mathbf{t})\,d\mathbf{h}\right].$
The minimum achievable FA-MMSE can be found by
$D_{JSC\text{-}VQ}(P_1, P_2) = \min_{\mathbf{t}} \bar{d}(\mathbf{t}).$
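As a quick numerical sanity check on the codeword scaling above, the short Python sketch below (function name and test values are ours, not from the paper) computes $\alpha_i$ and verifies that the scaled VQ codeword meets the transmit-power budget:

```python
import numpy as np

def jscvq_scale(P, R, sigma2=1.0):
    """Scaling factor alpha = sqrt(P / (sigma^2 (1 - 2^(-2R)))).

    Chosen so that X = alpha * U_o meets E|X|^2 = P, using the fact that an
    optimal rate-R VQ codeword for a variance-sigma^2 Gaussian source has
    power sigma^2 (1 - 2^(-2R)).
    """
    return np.sqrt(P / (sigma2 * (1.0 - 2.0 ** (-2.0 * R))))

# alpha^2 * E|U_o|^2 recovers the power budget P.
alpha = jscvq_scale(P=2.0, R=1.5)
codeword_power = 1.0 * (1.0 - 2.0 ** (-2.0 * 1.5))
print(alpha ** 2 * codeword_power)  # ≈ 2.0
```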
We next consider all possible outage events to determine the conditional MMSEs in Equation (22).
  • No-outage event $E_{12}$: The set of all rate pairs for which $E_{12}$ occurs is given by the following lemma.
    Lemma 2.
    For given ( P 1 , P 2 ) , ( h 1 , h 2 ) , ( α 1 , α 2 ) , and ρ, both of the source-channel VQ codewords can be detected with an asymptotically vanishing error probability, if ( R 1 , R 2 ) satisfy
    $R_i < \frac{1}{2}\log_2\left(\frac{(1-\tilde\rho^2)P_i\Gamma_i + 1}{1-\tilde\rho^2}\right), \quad i = 1, 2$
    $R_1 + R_2 < \frac{1}{2}\log_2\left(\frac{P_1\Gamma_1 + P_2\Gamma_2 + 2\tilde\rho\,\Gamma_{12}\sqrt{P_1P_2} + 1}{1-\tilde\rho^2}\right),$
    where ρ ˜ is given by Equation (9).
    Proof .
    See Appendix D. □
    Denote all $(R_1, R_2)$ pairs that satisfy the above constraints by $\mathcal{R}(E_{12})$. Let the codeword pair found in the detection step be $(\hat{U}_1, \hat{U}_2)$. If $(R_1, R_2) \in \mathcal{R}(E_{12})$, then $\hat{U}_1 = U_1^o$ and $\hat{U}_2 = U_2^o$. In this case, the linear estimator need not use the channel output Y ($q_{i3} = 0$), and the source sequences can be reconstructed as
    S ^ i = q i 1 U 1 o + q i 2 U 2 o , i = 1 , 2
    The MMSE of the linear estimator for ( R 1 , R 2 ) R ( E 12 ) is given by Equation (A24).
  • Partial outage event $E_i$: The set of all rate pairs for which $E_i$ ($i = 1, 2$) occurs is given by the following theorem.
    Theorem 1.
    For given ( P 1 , P 2 ) , ( h 1 , h 2 ) , ( α 1 , α 2 ) , and ρ, the codeword U i o is decodable and U j o is undecodable if and only if
    $R_i < \frac{1}{2}\log_2\left(\frac{P_1\Gamma_1 + P_2\Gamma_2 + 2\Gamma_{12}\tilde\rho\sqrt{P_1P_2} + 1}{\Gamma_jP_j(1-\tilde\rho^2) + 1}\right)$
    $R_j > \frac{1}{2}\log_2\left(\frac{(1-\tilde\rho^2)P_j\Gamma_j + 1}{1-\tilde\rho^2}\right)$
    for $i, j \in \{1, 2\}$, $i \ne j$.
    Proof .
    The proof, considering the case i = 1 and j = 2 , is given in Appendix D. □
    Denote all $(R_1, R_2)$ pairs which satisfy the above constraints by $\mathcal{R}(E_i)$. If $(R_1, R_2) \in \mathcal{R}(E_i)$, then $\hat{U}_i = U_i^o$ and $\hat{U}_j = 0$. The source sequences are reconstructed as
    S ^ i = q i 1 U i o + q i 3 Y , i = 1 , 2 .
    The expressions for the MMSE of the linear estimator for ( R 1 , R 2 ) R ( E i ) are given by Equations (A25) and (A26).
  • Total outage event $\overline{E}_{12}$: The set of all rate pairs for which $\overline{E}_{12}$ occurs is given by the following lemma.
    Lemma 3.
    For given ( P 1 , P 2 ) , ( h 1 , h 2 ) , ( α 1 , α 2 ) , and ρ, neither codeword can be decoded if
    $R_1 > \frac{1}{2}\log_2\left(\frac{\Gamma_1P_1 + \Gamma_2P_2 + 2\Gamma_{12}\tilde\rho\sqrt{P_1P_2} + 1}{\Gamma_2P_2(1-\tilde\rho^2) + 1}\right)$
    $R_2 > \frac{1}{2}\log_2\left(\frac{\Gamma_1P_1 + \Gamma_2P_2 + 2\Gamma_{12}\tilde\rho\sqrt{P_1P_2} + 1}{\Gamma_1P_1(1-\tilde\rho^2) + 1}\right)$
    $R_1 + R_2 > \frac{1}{2}\log_2\left(\frac{\Gamma_1P_1 + \Gamma_2P_2 + 2\Gamma_{12}\tilde\rho\sqrt{P_1P_2} + 1}{1-\tilde\rho^2}\right).$
    Proof .
    Follows from Equations (24) and (25) in the previous two lemmas. □
    Denote all $(R_1, R_2)$ pairs which satisfy the above constraints by $\mathcal{R}(\overline{E}_{12})$. The source sequences are reconstructed as
    S ^ i = q i 3 Y , i = 1 , 2 .
    The expressions for the MMSE of the linear estimator for $(R_1, R_2) \in \mathcal{R}(\overline{E}_{12})$ are given by Equation (A27).
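To make the decodability conditions concrete, the following Python sketch classifies the decoder state for a given channel realization, implementing the inequalities of Lemma 2, Theorem 1, and Lemma 3 as reconstructed above (function and label names are ours; boundary ties are assigned to total outage):

```python
import numpy as np

def classify_outage(R1, R2, P1, P2, G1, G2, G12=0.0, rho=0.0):
    """Classify the JSC-VQ decoder state per Lemma 2 / Theorem 1 / Lemma 3.

    G1, G2 are the instantaneous fading-gain-to-noise ratios (Gamma_i), G12
    the cross term (Gamma_12), and rho the codeword correlation rho~.
    Returns 'E12' (both decodable), 'E1' or 'E2' (one decodable), or
    'E12_bar' (total outage).
    """
    r2 = 1.0 - rho ** 2
    def C(x):                      # 0.5 * log2(x)
        return 0.5 * np.log2(x)
    sum_num = P1 * G1 + P2 * G2 + 2.0 * rho * G12 * np.sqrt(P1 * P2) + 1.0
    single = [C((r2 * P1 * G1 + 1.0) / r2), C((r2 * P2 * G2 + 1.0) / r2)]
    if R1 < single[0] and R2 < single[1] and R1 + R2 < C(sum_num / r2):
        return 'E12'
    # Theorem 1: codeword i decodable while codeword j is not.
    if R1 < C(sum_num / (G2 * P2 * r2 + 1.0)) and R2 > single[1]:
        return 'E1'
    if R2 < C(sum_num / (G1 * P1 * r2 + 1.0)) and R1 > single[0]:
        return 'E2'
    return 'E12_bar'

# With rho~ = 0 and P = Gamma = 1: single-user bound 0.5*log2(2) = 0.5,
# sum bound 0.5*log2(3) ≈ 0.79.
print(classify_outage(0.3, 0.3, 1, 1, 1, 1))  # 'E12'
print(classify_outage(0.2, 1.0, 1, 1, 1, 1))  # 'E1'
```

With $\tilde\rho = 0$ the conditions reduce to the familiar Gaussian MAC thresholds, which is what the example values exercise.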

5.2. HDA-JSC-VQ Coding

Finally, we consider an HDA scheme based on the aforementioned JSC-VQ scheme, which provides what is possibly the lowest known upper bound to the distortion power function $D(P_1, P_2)$ for a pair of correlated Gaussian sources and a fading GMAC. In this scheme, a scaled (analog) version of each source is superimposed on the JSC-VQ codewords of the scheme discussed in Section 5.1. The resulting HDA scheme (which we will refer to as HDA-JSC-VQ coding) can be shown to outperform JSC-VQ at all CSNRs on a non-fading GMAC [10]. This improvement can be attributed to the optimality of analog transmission as SNR $\to 0$. In this section, we determine the minimum achievable FA-MMSE of HDA-JSC-VQ coding over the BF-GMAC. In this case, there is an additional gain from combining an analog transmission with JSC-VQ, as this prevents the complete outages that would otherwise occur with JSC-VQ. We will also demonstrate that this scheme achieves a better FA-MMSE than any other known scheme, even for uncorrelated Gaussian sources.
Using the same notation as in the previous section, the channel codeword generated by the encoder i of our HDA-JSC-VQ system can be given by
X i = α i S i + β i U i o ,
where U i o is the VQ codeword for S i , and α i and β i are constants. Using the transmit power constraint, we have
$\beta_i = \sqrt{\dfrac{P_i - \alpha_i^2\sigma^2 2^{-2R_i}}{\sigma^2(1 - 2^{-2R_i})}} - \alpha_i,$
and the digital–analog power allocation factor α i must be chosen to minimize the FA-MMSE. Given the observed channel output Y = h 1 X 1 + h 2 X 2 + W , the receiver first decodes ( U 1 o , U 2 o ) as ( U ^ 1 , U ^ 2 ) and then linearly estimates the source sequence S i as
S ^ i = q i 1 U ^ 1 + q i 2 U ^ 2 + q i 3 Y ,
where the coefficients $q_{il}$, $l = 1, 2, 3$, are chosen to minimize the reconstruction MMSE for given $(h_1, h_2)$. The evaluation of the FA-MMSE can proceed as in the case of the JSC-VQ system. Specifically, by redefining the encoder parameters as $\mathbf{t} = (R_1, R_2, \alpha_1, \beta_1, \alpha_2, \beta_2)$, the conditional FA-MMSE can be found by Equation (22). The minimum achievable FA-MMSE in this case is given by
$D_{HDA\text{-}JSC\text{-}VQ}(P_1, P_2) = \min_{\mathbf{t}} \bar{d}(\mathbf{t}).$
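The analog–digital power split can be checked numerically. The sketch below assumes the standard orthogonal decomposition $S_i = U_i^o + E_i$ for an optimal VQ (quantization error of variance $\sigma^2 2^{-2R_i}$, orthogonal to the codeword); under that assumption, the $\beta_i$ above makes the transmit power exactly $P_i$:

```python
import numpy as np

def hda_beta(P, R, alpha, sigma2=1.0):
    """Digital gain beta for the HDA codeword X = alpha*S + beta*U_o.

    Follows from the power constraint E|X|^2 = P, writing S = U_o + E with an
    optimal-VQ quantization error E of variance sigma^2 * 2^(-2R) that is
    orthogonal to U_o (backward test-channel assumption, ours).
    """
    d = 2.0 ** (-2.0 * R)
    return np.sqrt((P - alpha ** 2 * sigma2 * d) / (sigma2 * (1.0 - d))) - alpha

P, R, alpha, sigma2 = 1.0, 1.0, 0.5, 1.0
beta = hda_beta(P, R, alpha, sigma2)
d = 2.0 ** (-2.0 * R)
# Transmit power: (alpha+beta)^2 on the codeword part + alpha^2 on the error part.
tx_power = (alpha + beta) ** 2 * sigma2 * (1.0 - d) + alpha ** 2 * sigma2 * d
print(tx_power)  # ≈ 1.0
```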
What remains is to determine the conditional MMSEs in Equation (22), by considering every possible outage event. We obtain the required MMSEs by proving a set of lemmas.
  • No-outage event $E_{12}$: For fixed $(h_1, h_2)$, the bounds on the VQ rates required to guarantee error-free decoding of $(U_1^o, U_2^o)$ can be found through a slight generalization of the results in ([10], Theorem IV.6) to account for channel gains and complex-valued random variables. In particular, we can prove the following lemma.
    Lemma 4.
    For given ( P 1 , P 2 ) , ( α 1 , β 1 ) , ( α 2 , β 2 ) , ( h 1 , h 2 ) , ρ, the VQ codeword-pair ( U 1 o , U 2 o ) can be decoded error-free whenever ( R 1 , R 2 ) satisfies
    $R_1 < \frac{1}{2}\log_2\left(\frac{|\tilde\beta_1|^2k_{11}(1-\tilde\rho^2) + \tilde{N}}{\tilde{N}(1-\tilde\rho^2)}\right)$
    $R_2 < \frac{1}{2}\log_2\left(\frac{|\tilde\beta_2|^2k_{22}(1-\tilde\rho^2) + \tilde{N}}{\tilde{N}(1-\tilde\rho^2)}\right)$
    $R_1 + R_2 < \frac{1}{2}\log_2\left(\frac{|\tilde\beta_1|^2k_{11} + |\tilde\beta_2|^2k_{22} + 2\mathrm{Re}\{\tilde\beta_1\tilde\beta_2^*\}\tilde\rho\sqrt{k_{11}k_{22}} + \tilde{N}}{\tilde{N}(1-\tilde\rho^2)}\right),$
    where
    $\tilde{N} = |h_1|^2\alpha_1^2\nu_1 + |h_2|^2\alpha_2^2\nu_2 + 2\mathrm{Re}\{h_1^*h_2\}\alpha_1\alpha_2\nu_3 + N,$
    and $\nu_1$, $\nu_2$, and $\nu_3$ are given by the set of expressions following Equation (45) in [10], but with
    $\tilde\beta_1 = h_1\alpha_1(1 - a_1\tilde\rho) + h_1\beta_1 + h_2\alpha_2a_2$
    $\tilde\beta_2 = h_2\alpha_2(1 - a_2\tilde\rho) + h_2\beta_2 + h_1\alpha_1a_1,$
    for $a_1$ and $a_2$ given by Equations (48)–(50) in [10].
    For ( R 1 , R 2 ) satisfying the bounds in Lemma 4, the minimum achievable MMSE is given by Equation (A28).
  • Partial outage event $E_i$: The rate pairs $(R_1, R_2)$ for which only one codeword can be decoded are given by the following lemma.
    Lemma 5.
    For given $(P_1, P_2)$, $(\alpha_1, \beta_1)$, $(\alpha_2, \beta_2)$, $(h_1, h_2)$, and ρ, the necessary and sufficient conditions under which only codeword $U_i^o$, but not $U_j^o$, is correctly decodable are
    $R_i < \frac{1}{2}\log_2\left(\frac{|\tilde\beta_1|^2k_{11} + |\tilde\beta_2|^2k_{22} + 2\mathrm{Re}\{\tilde\beta_1\tilde\beta_2^*\}\tilde\rho\sqrt{k_{11}k_{22}} + \tilde{N}}{|\tilde\beta_j|^2k_{jj}(1-\tilde\rho^2) + \tilde{N}}\right)$
    $R_j > \frac{1}{2}\log_2\left(\frac{|\tilde\beta_j|^2k_{jj}(1-\tilde\rho^2) + \tilde{N}}{\tilde{N}(1-\tilde\rho^2)}\right).$
    For ( R 1 , R 2 ) satisfying the bounds in Lemma 5, the minimum achievable MMSE is given by Equation (A29).
  • Total outage event $\overline{E}_{12}$: The rate pairs $(R_1, R_2)$ for which neither codeword can be decoded are given by the following lemma.
    Lemma 6.
    For given ( P 1 , P 2 ) , ( α 1 , β 1 ) , ( α 2 , β 2 ) , ( h 1 , h 2 ) , and ρ, neither U i o nor U j o can be decoded if
    $R_1 > \frac{1}{2}\log_2\left(\frac{|\tilde\beta_1|^2k_{11} + |\tilde\beta_2|^2k_{22} + 2\mathrm{Re}\{\tilde\beta_1\tilde\beta_2^*\}\tilde\rho\sqrt{k_{11}k_{22}} + \tilde{N}}{|\tilde\beta_2|^2k_{22}(1-\tilde\rho^2) + \tilde{N}}\right)$
    $R_2 > \frac{1}{2}\log_2\left(\frac{|\tilde\beta_1|^2k_{11} + |\tilde\beta_2|^2k_{22} + 2\mathrm{Re}\{\tilde\beta_1\tilde\beta_2^*\}\tilde\rho\sqrt{k_{11}k_{22}} + \tilde{N}}{|\tilde\beta_1|^2k_{11}(1-\tilde\rho^2) + \tilde{N}}\right)$
    $R_1 + R_2 > \frac{1}{2}\log_2\left(\frac{|\tilde\beta_1|^2k_{11} + |\tilde\beta_2|^2k_{22} + 2\mathrm{Re}\{\tilde\beta_1\tilde\beta_2^*\}\tilde\rho\sqrt{k_{11}k_{22}} + \tilde{N}}{\tilde{N}(1-\tilde\rho^2)}\right).$
    For ( R 1 , R 2 ) satisfying the bounds in Lemma 6, the minimum achievable MMSE is given by Equation (A30).
Lemmas 4–6 can be proven (details omitted for brevity) by considering the “genie-aided” decoder argument in ([10], Appendix F), in conjunction with the rate conditions for the three types of decoding events established in Appendix D of this paper. In particular, the decoding events of the HDA-JSC-VQ scheme can be mapped to those of the JSC-VQ scheme in Section 5.1 by re-expressing the channel output in the form $Y = \tilde\beta_1U_1^o + \tilde\beta_2U_2^o + \tilde{W}$, such that the additive noise $\tilde{W}$ satisfies the properties required by the proofs in Appendix D. It can be verified (see [10], Lemma F.1) that the desired representation for Y is obtained by choosing $\tilde\beta_1$ and $\tilde\beta_2$ as in Equations (28) and (29), respectively.

6. Comparison of Bounds and Discussion

In summary, Equations (3), (19), (23), and (27) are all computable upper bounds to the (unknown) distortion power function D ( P 1 , P 2 ) of Gaussian sources and a Rayleigh fading GMAC, under the constraint that CSI is not observable at the transmitters (the function minimizations required to evaluate these bounds have been carried out by using global optimization software). These bounds have been numerically evaluated for certain examples and the results are presented in Figure 3, Figure 4 and Figure 5. For simplicity of presentation, we consider the symmetric case of P 1 = P 2 and Γ ¯ 1 = Γ ¯ 2 = 1 (CSNR is thus the same as P). We consider sources with variance σ 2 = 1 .
Recall that if CSI is available at the transmitters as well, SSC coding is optimal for uncorrelated sources, whereas uncoded transmission is not. The performance curves in Figure 3 confirm that this is no longer the case when CSI is unavailable at the transmitters. The lack of CSI forces the coded transmitters to choose an encoding rate (based on prior knowledge of the CSI distribution) that minimizes the MSE in view of the unavoidable receiver outages. In the low-power regime, where the outage probability is very high, uncoded transmission therefore achieves a better distortion than coded transmission. In the high-power regime, however, the two sources cannot be completely separated from the sum created by the MAC if transmitted completely uncoded, and hence the MMSE of the uncoded system saturates at a constant (in this case $\sigma^2/2 = 0.5$, i.e., −3 dB). As seen in Figure 3 (right), conventional HDA coding essentially reduces to uncoded transmission up to about P = 6.5 dB (all power allocated to the analog part), and diverges from it thereafter due to the increasing power allocation to the digital part. While the exact DPF is not known, HDA-JSC-VQ provides the lowest known upper bound to the DPF. Note that the JSC-VQ bound coincides with the HDA-JSC-VQ bound at high P, where all available power is allocated to the VQ codewords.
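The rate-versus-outage trade-off described above can be illustrated with a deliberately simplified, single-source stand-in (this toy objective is ours and is not the paper's two-user FA-MMSE): the transmitter picks a fixed VQ rate R knowing only the fading distribution, and the average distortion balances outage probability against quantization accuracy.

```python
import numpy as np

def avg_distortion(R, snr_bar=10.0):
    """Toy stand-in for min_t d_bar(t): one unit-variance Gaussian source over
    a block-Rayleigh channel with mean CSNR snr_bar. If the chosen rate R is
    decodable (gamma >= (2^{2R}-1)/snr_bar), the distortion is 2^{-2R};
    otherwise the receiver is in outage and the MMSE estimate gives 1.
    (Hypothetical single-user simplification.)"""
    thr = (2.0 ** (2.0 * R) - 1.0) / snr_bar   # outage threshold on gamma
    p_out = 1.0 - np.exp(-thr)                 # gamma ~ Exp(1) (Rayleigh power)
    return p_out * 1.0 + (1.0 - p_out) * 2.0 ** (-2.0 * R)

rates = np.linspace(0.01, 4.0, 400)
vals = [avg_distortion(R) for R in rates]
R_star = rates[int(np.argmin(vals))]
print(R_star, min(vals))   # an interior optimum: neither R -> 0 nor R -> inf
```

The optimum is interior: a very low rate wastes the good channel states, while a very high rate is almost always in outage, which mirrors the behavior of the coded curves in Figure 3.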
For correlated sources, SSC coding would not be optimal even if CSI were known to the transmitters. Figure 4 shows that at ρ = 0.9, SSC coding has a significant gap (~3–6 dB) to the JSC-VQ and HDA-JSC-VQ bounds. Figure 5 (left and right) shows the achievable FA-MMSE of each system as a function of the source correlation coefficient, at low and high transmitter powers, respectively. It is known that on a fixed GMAC, uncoded transmission is optimal for power-to-noise ratios $P/N \le \rho/(1-\rho^2)$ [10]. For example, if ρ = 0.9, uncoded transmission must be optimal for the fixed channel if $P/N \le 6.75$ dB. Clearly, uncoded transmission can only be optimal for a fraction of the time in a system with fading and fixed (non-adaptive) transmitters, and therefore cannot be optimal in a FA-MMSE sense. As Figure 4 shows, the uncoded system performs identically to the JSC-VQ system in the low-power regime (P less than about 15 dB in this example). This should be expected, as the limiting optimal JSC-VQ system (as $P \to 0$) is the uncoded system, i.e., the receiver operates in outage nearly all the time, and the optimal VQ rate therefore approaches infinity. The HDA-JSC-VQ system, on the other hand, exhibits a different behavior. The analog–digital power allocation and VQ rates in the HDA-JSC-VQ system ensure that the receiver achieves an optimal operating point with respect to all four outage events. This allows the HDA-JSC-VQ system to achieve a lower FA-MMSE at a given P, compared to both the JSC-VQ and uncoded systems in the low-power regime, as is evident from Figure 4.
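The uncoded-optimality threshold quoted above is easy to reproduce:

```python
import math

def uncoded_threshold_db(rho):
    # Uncoded transmission is optimal on a fixed GMAC for P/N <= rho/(1-rho^2) [10].
    return 10.0 * math.log10(rho / (1.0 - rho ** 2))

print(uncoded_threshold_db(0.9))  # ≈ 6.76 dB, matching the ~6.75 dB in the text
```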
So far, we have considered the symmetric case, that is, $P_1 = P_2$. As an example of an asymmetric case, Figure 6 shows the FA-MMSE of the JSC-VQ and HDA-JSC-VQ schemes for $P_2 = P_1/10$ (i.e., $P_2$ is 10 dB below $P_1$) when ρ = 0.9 and $\bar\Gamma = 1$. Note that at high CSNR, the high correlation between the two sources allows the receiver to achieve $D_2 \approx D_1$, despite the transmitter powers being very different. When comparing this figure to Figure 4, note that the total transmitter output power ($P_1 + P_2$) is lower here ($1.1P_1$ compared to $2P_1$), which explains the higher total FA-MMSE compared to Figure 4.
Although practical code construction methods related to our problem have been reported in previous work, e.g., [18,19,20], those methods perform well only when CSI is available at both the transmitter and the receiver. Furthermore, [18,19] also suffer an additional performance loss due to being zero-delay coding schemes. The results presented in this paper serve as a guide to developing good practical multiple-access block codes for transmitting Gaussian-like sources in systems with no CSI at the transmitters. If the average CSNR is low, uncoded transmission can achieve nearly the same performance as an HDA-JSC-VQ system. The relative performance of the uncoded system improves with the source correlation. However, at moderate to high CSNRs, HDA-JSC-VQ will have a definite advantage, regardless of source correlation. Optimal VQ and typical-sequence detection, as considered here to analyze HDA-JSC-VQ, are obviously not practically realizable. A potential approach to practically realizing an HDA-JSC-VQ system is to use trellis-coded quantization (TCQ) [21] at the transmitters (with optimal rates and power allocations found as described in this paper) and joint maximum likelihood (ML) sequence detection at the receiver [22]. On the one hand, TCQ allows a computationally efficient way of quantizing long source sequences with distortion close to the distortion-rate bound of optimal VQ; on the other hand, the joint detection of a pair of long VQ codewords can be efficiently implemented using a suitable variant of the Viterbi algorithm operating on the combined trellis of the two TCQs. Our preliminary experimental results suggest that this approach can achieve performance very close to the FA-MMSE bound derived in this paper. A complete set of experimental results will be reported in a future paper.

Author Contributions

Formal analysis, C.I.; Methodology, C.I.; Supervision, P.Y.; Writing—original draft, C.I.; Writing—review and editing, P.Y.

Funding

Chathura Illangakoon has been funded by a University of Manitoba Graduate Fellowship (UMGF).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Outage Probabilities in SSC Coding of Gaussian Sources

From ([4], Equations (15.147)–(15.149)), it directly follows that the necessary and sufficient conditions for the various outage events are as follows.
  • E 12 (both codewords decodable):
    $P_i\Gamma_i \ge 2^{2R_i} - 1, \quad i = 1, 2; \qquad P_1\Gamma_1 + P_2\Gamma_2 \ge 2^{2(R_1+R_2)} - 1$
  • E i (only one codeword decodable):
    $\dfrac{P_i\Gamma_i}{1 + P_j\Gamma_j} \ge 2^{2R_i} - 1, \qquad P_j\Gamma_j < 2^{2R_j} - 1, \qquad i, j \in \{1, 2\},\ j \ne i$
  • E 12 (neither codeword decodable):
    $\dfrac{P_i\Gamma_i}{1 + P_j\Gamma_j} < 2^{2R_i} - 1, \quad i = 1, 2; \qquad P_1\Gamma_1 + P_2\Gamma_2 < 2^{2(R_1+R_2)} - 1$
It is straightforward to verify that ( γ 1 , γ 2 ) pairs corresponding to these events are as shown in Figure A1, where we define μ i = P i / N .
Figure A1. ( γ 1 , γ 2 ) pairs corresponding to outage events in separate source-channel (SSC) coding of Gaussian sources.
The probability of the event E is given by
$\Pr(E \mid R_1, R_2) = \iint_{E} p_1(\gamma_1)p_2(\gamma_2)\,d\gamma_1\,d\gamma_2,$
where, for Rayleigh fading, the pdf of $\gamma_i$ is $p_i(\gamma_i) = \frac{1}{\bar\gamma}\exp\left(-\frac{\gamma_i}{\bar\gamma}\right)$. By evaluating the integral over each region, both $\Pr(E_{12} \mid R_1, R_2)$ and $\Pr(E_i \mid R_1, R_2)$ can be obtained in closed form. The former has a cumbersome expression (not shown here), but the latter is given by
$\Pr(E_i \mid R_1, R_2) = e^{-\frac{2^{2R_i}(2^{2R_j}-1)}{\mu_i}}\left(1 - e^{-\frac{2^{2R_i}-1}{\mu_j}}\right), \quad i = 1, 2.$
Straightforwardly, $\Pr(\overline{E}_{12} \mid R_1, R_2) = 1 - \Pr(E_{12} \mid R_1, R_2) - \Pr(E_1 \mid R_1, R_2) - \Pr(E_2 \mid R_1, R_2)$.
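The outage probabilities of this appendix are straightforward to estimate by Monte Carlo simulation, which gives a useful cross-check on the closed-form expressions. The sketch below (names and parameter values are ours) draws exponential fading powers and classifies each realization by the decodability conditions above, with N absorbed into $\mu_i$:

```python
import numpy as np

rng = np.random.default_rng(0)

def ssc_outage_probs(R1, R2, P1, P2, gbar=1.0, n=200_000):
    """Monte Carlo estimate of the SSC outage-event probabilities.

    gamma_1, gamma_2 are i.i.d. exponential (Rayleigh power fading) with mean
    gbar; here P_i plays the role of mu_i = P_i / N (N normalized to 1), so
    P_i * gamma_i is the received SNR P_i * Gamma_i.
    """
    g1 = rng.exponential(gbar, n)
    g2 = rng.exponential(gbar, n)
    a1, a2 = 2.0 ** (2 * R1) - 1.0, 2.0 ** (2 * R2) - 1.0
    s1, s2 = P1 * g1, P2 * g2
    both = (s1 >= a1) & (s2 >= a2) & (s1 + s2 >= 2.0 ** (2 * (R1 + R2)) - 1.0)
    e1 = (s1 / (1.0 + s2) >= a1) & (s2 < a2)     # only codeword 1 decodable
    e2 = (s2 / (1.0 + s1) >= a2) & (s1 < a1)     # only codeword 2 decodable
    p12, p1, p2 = both.mean(), e1.mean(), e2.mean()
    return p12, p1, p2, 1.0 - p12 - p1 - p2      # remainder = total outage

print(ssc_outage_probs(0.5, 0.5, 2.0, 2.0))
```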

Appendix B. Proof of Lemma 1

We first state a lemma that is required to prove Lemma 1.
Lemma A1.
Let
$\Delta(x, y) = 2^{-2x}\left(1 - \rho^2 + \rho^2 2^{-2y}\right)$
and R s u m = R 1 + R 2 . Given a distributed VQ (for a pair of zero-mean, variance σ 2 Gaussian sources whose correlation coefficient is ρ) with rates R 1 and R 2 , the distortion pair ( D 1 , D 2 ) is achievable if and only if
$(D_1, D_2) \in \mathcal{D}_1(R_1, R_2) \cap \mathcal{D}_2(R_1, R_2) \cap \mathcal{D}_{\mathrm{prod}}(R_1, R_2)$
where
$\mathcal{D}_1(R_1, R_2) = \{D_1 : D_1 \ge \sigma^2\Delta(R_1, R_2)\}$
$\mathcal{D}_2(R_1, R_2) = \{D_2 : D_2 \ge \sigma^2\Delta(R_2, R_1)\}$
$\mathcal{D}_{\mathrm{prod}}(R_1, R_2) = \{(D_1, D_2) : D_1D_2 \ge \sigma^4\Delta(R_{sum}, R_{sum})\}.$
Proof. 
This lemma follows from the achievable rate region given by ([8], Theorem 1). □
Proof of Lemma 1: Let ( d 1 , d 2 ) be all distortion pairs achievable by a distributed VQ with rates ( R 1 , R 2 ) . From Lemma A1, it follows that all achievable distortion pairs satisfy
$d_1 \ge \Delta_1 = \sigma^2\Delta(R_1, R_2)$
$d_2 \ge \Delta_2 = \sigma^2\Delta(R_2, R_1)$
$d_1d_2 \ge \Delta_{12} = \sigma^4\Delta(R_{sum}, R_{sum}).$
For a given ( R 1 , R 2 ) pair, all ( d 1 , d 2 ) that satisfy these constraints are above the curve shown in Figure A2 [achievable distortion region for ( R 1 , R 2 ) ]. The minimum achievable total MSE is found by minimizing d 1 + d 2 subject to the constraints Equations (A2)–(A4) (feasibility set). It should be clear that the optimal solution occurs when δ is such that the line d 1 + d 2 = δ touches the boundary of the feasibility set. Depending on the relative values of Δ 1 , Δ 2 and Δ 12 , this will occur at the point A, B, or C as follows,
A if $\Delta_1 < \sqrt{\Delta_{12}} < \Delta_2$, and $\delta = \Delta_2 + \Delta_{12}/\Delta_2$ [Figure A2a],
B if $\Delta_2 < \sqrt{\Delta_{12}} < \Delta_1$, and $\delta = \Delta_1 + \Delta_{12}/\Delta_1$ [Figure A2c],
C if $\sqrt{\Delta_{12}} > \max\{\Delta_1, \Delta_2\}$, and $\delta = 2\sqrt{\Delta_{12}}$ [Figure A2b].
The inequalities in the first two statements are equivalent to $\min\{\Delta_1, \Delta_2\} < \sqrt{\Delta_{12}} < \max\{\Delta_1, \Delta_2\}$. Therefore, the minimum $d_1 + d_2$ is
$\delta^* = \Delta^* + \frac{\Delta_{12}}{\Delta^*},$
where $\Delta^*$ is as given by Equation (4).
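The closed-form minimum in the proof above can be cross-checked numerically. The sketch below assumes $\Delta^* = \max\{\Delta_1, \Delta_2\}$ in Equation (4) (an assumption on our part, since Equation (4) is not reproduced here) and compares $\delta^*$ against a brute-force search over the feasible distortion region:

```python
import numpy as np

def Delta(x, y, rho):
    # Delta(x, y) = 2^(-2x) * (1 - rho^2 + rho^2 * 2^(-2y))
    return 2.0 ** (-2.0 * x) * (1.0 - rho ** 2 + rho ** 2 * 2.0 ** (-2.0 * y))

def min_total_mse(R1, R2, rho, sigma2=1.0):
    """Closed-form minimum of d1 + d2: delta* = D* + D12/D* with
    D* = max(D1, D2), or 2*sqrt(D12) when the tangent point of the
    product constraint is feasible (case C)."""
    D1 = sigma2 * Delta(R1, R2, rho)
    D2 = sigma2 * Delta(R2, R1, rho)
    D12 = sigma2 ** 2 * Delta(R1 + R2, R1 + R2, rho)
    Ds = max(D1, D2)
    if np.sqrt(D12) > Ds:
        return 2.0 * np.sqrt(D12)
    return Ds + D12 / Ds

def brute_force(R1, R2, rho, sigma2=1.0):
    """Grid search over the feasible region d1 >= D1, d2 >= D2, d1*d2 >= D12."""
    D1 = sigma2 * Delta(R1, R2, rho)
    D2 = sigma2 * Delta(R2, R1, rho)
    D12 = sigma2 ** 2 * Delta(R1 + R2, R1 + R2, rho)
    d1 = np.linspace(D1, 1.0, 200_000)
    d2 = np.maximum(D2, D12 / d1)      # cheapest feasible d2 for each d1
    return float(np.min(d1 + d2))

args = (0.5, 1.5, 0.9)
print(min_total_mse(*args), brute_force(*args))  # the two should agree
```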
Figure A2. Distortion pairs achievable in distributed VQ of two correlated Gaussian sources at a fixed rate pair ( R 1 , R 2 ) .

Appendix C. Optimal Linear Estimators for HDA Coding

In this appendix, we determine the MSE of the optimal linear estimator in conventional HDA coding.
  • No-outage event ($E_{12}$): The linear estimator used in the HDA system estimates the source sample $S_{i,k}$, $k = 1, \dots, n$, using the observation vector $\mathbf{Y}_o = (\tilde{S}_{1,k}\ \tilde{S}_{2,k}\ \tilde{Y}_k)^T$. Define the asymptotic autocovariance matrix $\mathbf{K} = \overline{E\{\mathbf{Y}_o\mathbf{Y}_o^H\}}_n$ and the cross-covariance vector $\mathbf{c}_i = \overline{E\{S_{i,k}^*\mathbf{Y}_o\}}_n$. For optimal VQ of Gaussian sources, Equations (5)–(8) hold, and the following can be verified.
    $k_{11} = \overline{E\{|\tilde{S}_1|^2\}}_n = \sigma^2(1 - 2^{-2R_1})$
    $k_{12} = k_{21} = \overline{E\{\tilde{S}_1^*\tilde{S}_2\}}_n = \sigma^2\rho(1 - 2^{-2R_1})(1 - 2^{-2R_2})$
    $k_{13} = k_{31}^* = \overline{E\{\tilde{S}_1^*\tilde{Y}\}}_n = \alpha a_2h_2\rho 2^{-2R_2}k_{11}$
    $k_{22} = \overline{E\{|\tilde{S}_2|^2\}}_n = \sigma^2(1 - 2^{-2R_2})$
    $k_{23} = k_{32}^* = \overline{E\{\tilde{S}_2^*\tilde{Y}\}}_n = \alpha a_1h_1\rho 2^{-2R_1}k_{22}$
    $k_{33} = \overline{E\{|\tilde{Y}|^2\}}_n = \gamma_1t_1P_1 + \gamma_2t_2P_2 + 2\gamma_{12}\rho_z\sqrt{t_1t_2P_1P_2} + N$
    $c_{11} = \overline{E\{S_1^*\tilde{S}_1\}}_n = k_{11}$
    $c_{12} = \overline{E\{S_1^*\tilde{S}_2\}}_n = \rho k_{22}$
    $c_{13} = \overline{E\{S_1^*\tilde{Y}\}}_n = (\alpha a_1h_1 2^{-2R_1} + \alpha a_2h_2\rho 2^{-2R_2})\sigma^2$
    $c_{21} = \overline{E\{S_2^*\tilde{S}_1\}}_n = \rho k_{11}$
    $c_{22} = \overline{E\{S_2^*\tilde{S}_2\}}_n = k_{22}$
    $c_{23} = \overline{E\{S_2^*\tilde{Y}\}}_n = (\alpha a_2h_2 2^{-2R_2} + \alpha a_1h_1\rho 2^{-2R_1})\sigma^2.$
    Let the optimal linear estimator coefficients be $\mathbf{q}_i = (q_{i1}\ q_{i2}\ q_{i3})^T$. Then, we have $\mathbf{q}_i = \mathbf{K}^{-1}\mathbf{c}_i^*$, $i = 1, 2$.
  • Partial outage event ( E i ): In this case, define Y o = ( S ˜ i , k Y ˜ k ) T , i { 1 , 2 } , and K i = E { Y o Y o H } ¯ n , which is a 2 × 2 matrix whose elements are
    $k_{11} = \overline{E\{|\tilde{S}_i|^2\}}_n = \sigma^2(1 - 2^{-2R_i})$
    $k_{12} = k_{21}^* = \overline{E\{\tilde{S}_i^*\tilde{Y}\}}_n = \alpha a_jh_j\rho 2^{-2R_j}k_{11}$
    $k_{22} = \overline{E\{|\tilde{Y}|^2\}}_n = \gamma_it_iP_i + \gamma_jP_j + 2\gamma_{12}\rho_z\sqrt{t_1t_2P_1P_2} + N.$
    Let the optimal linear estimators for $S_{i,k}$ and $S_{j,k}$, $k = 1, \dots, n$, be $\mathbf{q}_i = (q_{i1}\ q_{i2})^T$ and $\mathbf{q}_j = (q_{j1}\ q_{j2})^T$, respectively. Then, we have $\mathbf{q}_i = \mathbf{K}_i^{-1}\mathbf{c}_i^*$ and $\mathbf{q}_j = \mathbf{K}_i^{-1}(\mathbf{c}_i')^*$, where $\mathbf{c}_i = (c_{i1}\ c_{i2})^T$ and $\mathbf{c}_i' = (c_{i1}'\ c_{i2}')^T$ with
    $c_{i1} = \overline{E\{S_i^*\tilde{S}_i\}}_n = k_{11}$
    $c_{i2} = \overline{E\{S_i^*\tilde{Y}\}}_n = \alpha a_ih_i\sigma^2 2^{-2R_i} + \alpha a_jh_j\rho\sigma^2 2^{-2R_j}$
    $c_{i1}' = \overline{E\{S_j^*\tilde{S}_i\}}_n = \rho k_{11}$
    $c_{i2}' = \overline{E\{S_j^*\tilde{Y}\}}_n = \alpha a_ih_i\rho\sigma^2 2^{-2R_i} + \alpha a_jh_j\sigma^2 2^{-2R_j}.$
  • Total outage event ($\overline{E}_{12}$): The optimal source estimates are given by $\hat{S}_{i,k} = q_iY_k$, $i = 1, 2$, for $k = 1, \dots, n$, where $q_i = k^{-1}c_i$ and
    $k = \overline{E\{|Y|^2\}}_n = \gamma_1P_1 + \gamma_2P_2 + 2\gamma_{12}\rho_z\sqrt{t_1t_2P_1P_2} + N$
    $c_i = \overline{E\{S_i^*Y\}}_n = \alpha a_ih_i\sigma^2 2^{-2R_i} + \alpha a_jh_j\rho\sigma^2 2^{-2R_j}.$
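The estimator structure used throughout this appendix, $\mathbf{q} = \mathbf{K}^{-1}\mathbf{c}$, can be sanity-checked on synthetic data. The sketch below uses a real-valued example (the complex case is analogous with conjugate transposes; all names and dimensions are ours) and confirms that the covariance-based coefficients coincide with a direct least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(1)

# S is observed through a noisy 3-dimensional observation vector Y_o; the
# linear-MMSE coefficients q = K^{-1} c reproduce the least-squares solution
# computed directly from samples.
n = 100_000
S = rng.normal(size=n)
A = rng.normal(size=(3, 1))
Yo = A @ S[None, :] + 0.3 * rng.normal(size=(3, n))   # observation vectors

K = (Yo @ Yo.T) / n          # sample autocovariance  E{Y_o Y_o^T}
c = (Yo @ S) / n             # sample cross-covariance E{S Y_o}
q = np.linalg.solve(K, c)    # linear-MMSE coefficients

q_ls, *_ = np.linalg.lstsq(Yo.T, S, rcond=None)
print(np.max(np.abs(q - q_ls)))   # ≈ 0 (identical up to numerics)
```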

Appendix D. Proof of Lemma 2 and Theorem 1

We start by summarizing the proof of ([10], Theorem IV.4). The code construction, encoding, and decoding in the JSC-VQ scheme are as follows. Let $\epsilon > 0$ be a fixed constant and let the rates $R_1$ and $R_2$ be fixed. The VQ codebook $\mathcal{C}_i \subset \mathbb{C}^n$, $i = 1, 2$, is generated by independently drawing $2^{nR_i}$ vectors of length n from the surface of the origin-centered sphere of radius $r_i = \sqrt{n\sigma^2(1 - 2^{-2R_i})}$ in $\mathbb{C}^n$. The encoder for source i uses $\mathcal{C}_i$ and vector-quantizes the source sequence $\mathbf{s}_i$ to generate a codeword $\mathbf{u}_i^o \in \mathcal{C}_i$. The code vector is then scaled to meet the power constraint and transmitted over the GMAC without any further encoding. Crucial to the proof given in [10] is the geometric view of the VQ encoder. To this end, consider the cosine angle between any pair of non-zero vectors $\mathbf{w}$ and $\mathbf{v}$, defined by
$\cos(\mathbf{w}, \mathbf{v}) = \frac{\mathrm{Re}\{\langle\mathbf{w}, \mathbf{v}\rangle\}}{\|\mathbf{w}\|\,\|\mathbf{v}\|}.$
Let $F(\mathbf{s}_i, \mathcal{C}_i)$ be the set of all $\mathbf{u}_i \in \mathcal{C}_i$ for which $\cos(\mathbf{s}_i, \mathbf{u}_i)$ lies between $\sqrt{1 - 2^{-2R_i}}(1 \pm \epsilon)$. The VQ encoder for source i quantizes the sequence $\mathbf{s}_i$ into the codeword $\mathbf{u}_i^o$ as follows. If $F(\mathbf{s}_i, \mathcal{C}_i) = \emptyset$, then set $\mathbf{u}_i^o = \mathbf{0}$; otherwise, $\mathbf{u}_i^o$ is the codevector $\mathbf{u}_i \in F(\mathbf{s}_i, \mathcal{C}_i)$ with the smallest $|\cos(\mathbf{s}_i, \mathbf{u}_i) - \sqrt{1 - 2^{-2R_i}}|$. The channel input is then formed as $\mathbf{x}_i = \alpha_i\mathbf{u}_i^o$, where $\alpha_i$ is given by Equation (20). Upon reception of the GMAC output $\mathbf{y}$ due to both transmitters, the receiver derives the source estimates $(\hat{\mathbf{s}}_1, \hat{\mathbf{s}}_2)$ in two steps. First, the receiver obtains a guess $(\hat{\mathbf{u}}_1, \hat{\mathbf{u}}_2)$ for the channel-input codeword pair $(\mathbf{u}_1^o, \mathbf{u}_2^o)$ by finding the jointly typical codeword pair $(\mathbf{u}_1, \mathbf{u}_2) \in \mathcal{C}_1\times\mathcal{C}_2$ such that $\alpha_1\mathbf{u}_1 + \alpha_2\mathbf{u}_2$ has the smallest Euclidean distance to the channel output $\mathbf{y}$. A jointly typical pair is defined as $(\mathbf{u}_1, \mathbf{u}_2)$ for which $|\tilde\rho - \cos(\mathbf{u}_1, \mathbf{u}_2)| \le 7\epsilon$, where $\epsilon > 0$ and $\tilde\rho$ is given by Equation (9); $\tilde\rho$ is the correlation between the transmitted VQ codewords $(\mathbf{u}_1^o, \mathbf{u}_2^o)$. Note that guessing the channel inputs based on the output $\mathbf{y}$ in this case is akin to channel decoding in SSC coding, but the use of the correlation $\tilde\rho$ to define a jointly typical set amounts to JSC decoding. In the second step, the source estimates are improved by computing the MMSE linear estimates of the source sequences, given $\mathbf{y}$ and the already-decoded VQ codewords $(\hat{\mathbf{u}}_1, \hat{\mathbf{u}}_2)$.
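The cosine-angle rule of the encoder can be prototyped directly. The following sketch (toy dimensions and parameter choices are ours — the guarantees in the text are asymptotic in n) draws a codebook on the sphere of radius $r_i$ and applies the selection rule:

```python
import numpy as np

rng = np.random.default_rng(2)

def cosangle(w, v):
    # cos(w, v) = Re{<w, v>} / (||w|| ||v||)
    return np.real(np.vdot(w, v)) / (np.linalg.norm(w) * np.linalg.norm(v))

def vq_encode(s, codebook, R, eps=0.1):
    """Cosine-rule VQ encoder sketch: pick the codeword whose cosine with s is
    closest to sqrt(1 - 2^(-2R)), provided it lies in the (1 +/- eps) band;
    return the all-zero word if no codeword qualifies."""
    target = np.sqrt(1.0 - 2.0 ** (-2.0 * R))
    cos = np.array([cosangle(s, u) for u in codebook])
    ok = np.abs(cos - target) <= eps * target
    if not ok.any():
        return np.zeros_like(codebook[0])
    cands = np.where(ok)[0]
    return codebook[cands[np.argmin(np.abs(cos[cands] - target))]]

n, R, sigma2 = 8, 1.0, 1.0
r = np.sqrt(n * sigma2 * (1.0 - 2.0 ** (-2.0 * R)))          # sphere radius
cw = rng.normal(size=(2 ** n, n)) + 1j * rng.normal(size=(2 ** n, n))
codebook = r * cw / np.linalg.norm(cw, axis=1, keepdims=True)  # on the sphere
s = rng.normal(size=n) + 1j * rng.normal(size=n)
u = vq_encode(s, codebook, R)
print(np.linalg.norm(u))  # either 0 (empty F) or the sphere radius r
```

At these toy dimensions the band is often empty, in which case the encoder correctly falls back to the all-zero codeword; the nontrivial behavior emerges only as n grows.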
Given the channel output $\mathbf{y}$, let $E_{\hat U}$ be the error event that there exists a jointly typical codeword pair $(\hat{\mathbf{u}}_1, \hat{\mathbf{u}}_2) \ne (\mathbf{u}_1^o, \mathbf{u}_2^o)$ for which
$\|\mathbf{y} - \alpha_1\hat{\mathbf{u}}_1 - \alpha_2\hat{\mathbf{u}}_2\| \le \|\mathbf{y} - \alpha_1\mathbf{u}_1^o - \alpha_2\mathbf{u}_2^o\|.$
It can be shown that, for sufficiently large n, the probability of joint decoding error $\Pr(E_{\hat U}) \to 0$ as $n \to \infty$ if the rates $(R_1, R_2)$ satisfy the constraints ([10], Lemma D.1)
$R_i < \frac{1}{2}\log_2\frac{P_i(1-\tilde\rho^2) + N}{N(1-\tilde\rho^2)}, \quad i = 1, 2$
$R_1 + R_2 < \frac{1}{2}\log_2\frac{P_1 + P_2 + 2\tilde\rho\sqrt{P_1P_2} + N}{N(1-\tilde\rho^2)}.$

Appendix D.1. Proof of Lemma 2

Consider the JSC-VQ decoder in a system where the GMAC exhibits block fading, with gain $h_i$ applied to the transmitted codeword $\alpha_i\mathbf{u}_i^o$. The channel output is given by $\mathbf{y} = \alpha_1h_1\mathbf{u}_1^o + \alpha_2h_2\mathbf{u}_2^o + \mathbf{w}$, where $\mathbf{u}_i^o \in \mathbb{C}^n$, $h_i \in \mathbb{C}$, $i = 1, 2$, and $\mathbf{w} \in \mathbb{C}^n$. The set of $(R_1, R_2)$ pairs for which both VQ codewords are decodable can be obtained straightforwardly by replacing $\mathbf{u}_1^o$ and $\mathbf{u}_2^o$ with their scaled versions $h_1\mathbf{u}_1^o$ and $h_2\mathbf{u}_2^o$ in the proof of ([10], Lemma D.1).

Appendix D.2. Proof of Theorem 1

Without loss of generality, we prove Theorem 1 in the following for the case $i = 1$ and $j = 2$. Suppose we wish to determine the set of all rate pairs $(R_1, R_2)$ for which only $\mathbf{u}_1^o$ can be correctly decoded for a given channel state $\mathbf{h}$. In particular, given $\mathbf{h}$,
(i)
what is the largest $R_1$ for which the joint decoding procedure described above can guarantee that $\Pr[\hat{\mathbf{u}}_1 = \mathbf{u}_1^o] \to 1$ as $n \to \infty$, regardless of $R_2$?
(ii)
if $\mathbf{u}_1^o$ is provided to the decoder, what is the lowest $R_2$ above which correct decoding of $\mathbf{u}_2^o$ cannot be guaranteed?
The answer to question (ii) can already be found in ([10], Lemma D.5); i.e., the necessary and sufficient condition for incorrect decoding of $\mathbf{u}_2^o$ given $\mathbf{u}_1^o$ is
$R_2 > \frac{1}{2}\log_2\frac{|h_2|^2P_2(1-\tilde\rho^2) + N}{N(1-\tilde\rho^2)} = \frac{1}{2}\log_2\frac{(1-\tilde\rho^2)P_2\Gamma_2 + 1}{1-\tilde\rho^2},$
which establishes Equation (26). (Note that for uncorrelated sources, as expected, the RHS of the above inequality is the capacity of the AWGN channel for $\mathbf{u}_2^o$ obtained by canceling $\alpha_1h_1\mathbf{u}_1^o$ from the GMAC output.) We next answer question (i) posed above to establish Equation (25).
We start by defining $E_{\hat U_1}^*$ as the event consisting of all tuples $(\mathbf{s}_1, \mathbf{s}_2, \mathcal{C}_1, \mathcal{C}_2, \mathbf{z})$ for which there exists a VQ codeword pair $(\tilde{\mathbf{u}}_1, \tilde{\mathbf{u}}_2) \in \mathcal{C}_1\times\mathcal{C}_2$ such that $\tilde{\mathbf{u}}_1 \ne \mathbf{u}_1^o$. More precisely,
$E_{\hat U_1}^* = \{(\mathbf{s}_1, \mathbf{s}_2, \mathcal{C}_1, \mathcal{C}_2, \mathbf{z}) : \exists\,\tilde{\mathbf{u}}_1 \in \mathcal{C}_1\setminus\{\mathbf{u}_1^o\}\ \text{and}\ \tilde{\mathbf{u}}_2 \in \mathcal{C}_2\ \text{s.t.}\ |\tilde\rho - \cos(\tilde{\mathbf{u}}_1, \tilde{\mathbf{u}}_2)| \le 7\epsilon\ \text{and}\ \|\mathbf{y} - (h_1\alpha_1\tilde{\mathbf{u}}_1 + h_2\alpha_2\tilde{\mathbf{u}}_2)\|^2 \le \|\mathbf{y} - (h_1\alpha_1\mathbf{u}_1^o + h_2\alpha_2\mathbf{u}_2^o)\|^2\}.$
Now, observing that $\Pr[\hat{\mathbf{u}}_1 = \mathbf{u}_1^o] \to 1$ is equivalent to $\Pr[E_{\hat U_1}^*] \to 0$, we establish Equation (25) in Theorem 1 by proving the following lemma.
Lemma A2.
For every $\delta > 0$ and $0 < \epsilon < 0.3$, there exists an $n_4(\delta, \epsilon) \in \mathbb{N}$ such that, for all $n > n_4(\delta, \epsilon)$,
$\Pr[E_{\hat U_1}^*] < 9\delta \quad \text{whenever}\ R_1 \in \mathcal{R}_1(\epsilon),$
where
$\mathcal{R}_1(\epsilon) = \left\{R_1 : R_1 < \frac{1}{2}\log_2\frac{|h_1|^2P_1 + |h_2|^2P_2 + 2\mathrm{Re}\{h_1h_2^*\}\tilde\rho\sqrt{P_1P_2} + N}{|h_2|^2P_2(1-\tilde\rho^2) + N} - \xi_{15}\epsilon\right\},$
and ξ 15 is a positive constant determined by P 1 , P 2 , h 1 , h 2 , and N.
Proof. 
Consider the following three auxiliary events related to source sequences, encoder output sequences, and channel noise sequences. The first auxiliary error event, $E_S$, is the same as ([10], (83)) with the exception that, in our case, $(\mathbf{s}_1, \mathbf{s}_2) \in \mathbb{C}^n\times\mathbb{C}^n$; it corresponds to an atypical source output sequence. The second auxiliary event, $E_Z$, is the same as ([10], (84)) but with $\mathbf{Z} \in \mathbb{C}^n$; it corresponds to an atypical additive-noise sequence. The third auxiliary event, $E_X$, is given by the union of the three events in Equations (85)–(87) of [10]. Now, defining $E_\nu^c$ as the complement of $E_\nu$ and using $\Pr[E_\nu]$ to denote $\Pr[(\mathbf{S}_1, \mathbf{S}_2, \mathcal{C}_1, \mathcal{C}_2, \mathbf{Z}) \in E_\nu]$, we can write
$\Pr[E_{\hat U_1}^*] = \Pr[E_{\hat U_1}^* \cap E_S^c \cap E_X^c \cap E_Z^c] + \Pr[E_{\hat U_1}^* \mid E_S \cup E_X \cup E_Z]\Pr[E_S \cup E_X \cup E_Z]$
$\le \Pr[E_{\hat U_1}^* \cap E_S^c \cap E_X^c \cap E_Z^c] + \Pr[E_S] + \Pr[E_X] + \Pr[E_Z]$
$\le \Pr[E_{\hat U_1}^* \cap E_S^c \cap E_X^c \cap E_Z^c] + 8\delta \qquad \text{(A7)}$
$\le 9\delta, \qquad \text{(A8)}$
where Equation (A7) follows from Lemmas D.2, D.3, and D.4 in [10], whereas Equation (A8) is due to Lemma A3 proven below. □
Lemma A3.
For every $\delta > 0$ and every $\epsilon > 0$, there exists $n_4(\delta, \epsilon) \in \mathbb{N}$ such that, for all $n > n_4(\delta, \epsilon)$,
$\Pr[E_{\hat U_1}^* \cap E_S^c \cap E_X^c \cap E_Z^c] \le \delta, \quad \text{if}\ R_1 < \frac{1}{2}\log_2\frac{|h_1|^2P_1 + |h_2|^2P_2 + 2\mathrm{Re}\{h_1h_2^*\}\tilde\rho\sqrt{P_1P_2} + N}{|h_2|^2P_2(1-\tilde\rho^2) + N} - \xi_{15}\epsilon,$
where ξ 15 is a positive constant determined by P 1 , P 2 , h 1 , h 2 , and N.
Proof. 
We can prove that
$\Pr\left[E_{\hat U_1}^* \cap E_Z^c \cap E_S^c \cap E_X^c\right] \le \Pr\left[E_{\hat U_1}^{*\prime} \cap E_Z^c \cap E_S^c \cap E_X^c\right]$
$\le \Pr\left[E_{\hat U_1}^{*\prime} \cap E_{X1}^c\right],$
where Equation (A10) is due to the new Lemma A4 proven below, and Equation (A11) is due to the fact that $E_X^c \subseteq E_{X1}^c$. Now, Equation (A9) is obtained by combining Equation (A11) with Lemma D.7 in [10], applied with $\mathbf{w} = \mathbf{y}$ and the codewords scaled by $h_1\alpha_1$. Specifically, it then follows that, for every $\delta > 0$ and every $\epsilon > 0$, there exists some $n_4(\delta, \epsilon)$ such that, for $n > n_4(\delta, \epsilon)$, $\Pr[E_{\hat U_1}^{*\prime} \cap E_Z^c \cap E_S^c \cap E_X^c] < \delta$ whenever
$R_1 < -\frac{1}{2}\log_2\left[\frac{|h_2|^2P_2(1-\tilde\rho^2) + N}{|h_1|^2P_1 + |h_2|^2P_2 + 2\mathrm{Re}\{h_1h_2^*\}\tilde\rho\sqrt{P_1P_2} + N} + \xi_{14}\epsilon\right].$
Since the right-hand side above is lower-bounded by
$\frac{1}{2}\log_2\frac{|h_1|^2P_1 + |h_2|^2P_2 + 2\mathrm{Re}\{h_1h_2^*\}\tilde\rho\sqrt{P_1P_2} + N}{|h_2|^2P_2(1-\tilde\rho^2) + N} - \xi_{15}\epsilon$
for a positive constant $\xi_{15}$ determined by $P_1$, $P_2$, $h_1$, $h_2$, and N, the rate condition in Equation (A9) is sufficient. □
where ξ 15 is a positive constant determined by P 1 , P 2 , h 1 , h 2 , and N. □
Lemma A4.
Let $\varphi_j \in [0, \pi]$ be the angle between $\mathbf{y}$ and $\mathbf{u}_1^{(j)}$, and let the set $E_{\hat U_1}^{*\prime}$ be defined as
$E_{\hat U_1}^{*\prime} \triangleq \{(\mathbf{s}_1, \mathbf{s}_2, \mathcal{C}_1, \mathcal{C}_2, \mathbf{z}) : \exists\,\mathbf{u}_1^{(j)} \in \mathcal{C}_1\setminus\{\mathbf{u}_1^o\}\ \text{and}\ \mathbf{u}_2^{(l)} \in \mathcal{C}_2\ \text{s.t.}\ \cos\varphi_j \ge \frac{|h_1|^2P_1 + \mathrm{Re}\{h_1h_2^*\}\tilde\rho\sqrt{P_1P_2} - \xi\epsilon}{\sqrt{|h_1|^2P_1\left(|h_1|^2P_1 + |h_2|^2P_2 + 2\mathrm{Re}\{h_1h_2^*\}\tilde\rho\sqrt{P_1P_2} + N\right)} + \xi_2\epsilon}\ \text{and}\ |\cos(\mathbf{u}_1^{(j)}, \mathbf{u}_2^{(l)}) - \tilde\rho| \le 7\epsilon\},$
where $\xi$ and $\xi_2$ depend only on $P_1$, $P_2$, and N. Then, for every sufficiently small $\epsilon > 0$,
$\Pr\left[E_{\hat U_1}^* \cap E_Z^c \cap E_S^c \cap E_X^c\right] \le \Pr\left[E_{\hat U_1}^{*\prime} \cap E_Z^c \cap E_S^c \cap E_X^c\right].$
Proof. 
First we note that, for the error event E U ^ 1 * to occur, there must exist codewords u 1 ( k ) C 1 { u 1 o } , and codeword u 2 ( l ) C 2 (whose decodability is unknown) such that
| ρ ˜ cos ( u ˜ 1 , u ˜ 2 ) | 7 ϵ ( see Lemma A5 )
and
$$\|y-(h_1\alpha_1\tilde u_1+h_2\alpha_2\tilde u_2)\|^2\le\|y-(h_1\alpha_1u_1^o+h_2\alpha_2u_2^o)\|^2.\qquad\text{(A13)}$$
Now consider the following series of statements related to Equations (A12) and (A13).
Statement A: 
For every $(s_1,s_2,\mathcal C_1,\mathcal C_2,z)\in E_X^c\cap E_Z^c$, it holds that
$$\|y-(h_1\alpha_1\tilde u_1+h_2\alpha_2\tilde u_2)\|^2\le\|y-(h_1\alpha_1u_1^o+h_2\alpha_2u_2^o)\|^2\ \Longrightarrow\ \mathrm{Re}\{\langle y,\,\alpha_1h_1u_1(j)\rangle\}\ge n\big(|h_1|^2P_1+\mathrm{Re}\{h_1h_2^*\}\tilde\rho\sqrt{P_1P_2}-\xi_{13}\epsilon\big),\qquad\text{(A14)}$$
where $\xi_{13}$ only depends on $P_1$, $P_2$, $h_1$, $h_2$, and $N$.
First, by rewriting the LHS of Equation (A14), we have
$$\begin{aligned}
\mathrm{Re}\{\langle y,\,\alpha_1h_1u_1(j)+\alpha_2h_2u_2(l)\rangle\}&\ge\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2+\mathrm{Re}\{\langle z,\,h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\rangle\}\\
&\quad+\tfrac{1}{2}\Big(\|h_1\alpha_1u_1(j)+h_2\alpha_2u_2(l)\|^2-\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2\Big)\\
&\ge\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2-n\big(|h_1|\sqrt{P_1N}+|h_2|\sqrt{P_2N}\big)\epsilon\\
&\quad+\mathrm{Re}\{\langle h_1\alpha_1u_1(j),\,h_2\alpha_2u_2(l)\rangle\}-\mathrm{Re}\{\langle h_1\alpha_1u_1^o,\,h_2\alpha_2u_2^o\rangle\}\\
&\ge\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2-n\xi_1\epsilon+n\,\mathrm{Re}\{h_1^*h_2\}\big(\tilde\rho\sqrt{P_1P_2}(1-7\epsilon)-\tilde\rho\sqrt{P_1P_2}(1+7\epsilon)\big)\\
&\ge\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2-n\xi_2\epsilon.\qquad\text{(A15)}
\end{aligned}$$
Next, by rewriting the LHS of the above inequality, Equation (A15),
$$\begin{aligned}
\mathrm{Re}\{\langle h_1\alpha_1u_1^o+h_2\alpha_2u_2^o,\,\alpha_1h_1u_1(j)+\alpha_2h_2u_2(l)\rangle\}&\ge\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2-\mathrm{Re}\{\langle z,\,\alpha_1h_1u_1(j)+\alpha_2h_2u_2(l)\rangle\}-n\xi_2\epsilon\\
&\ge\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2-n\big(|h_1|\sqrt{P_1N}+|h_2|\sqrt{P_2N}\big)\epsilon-n\xi_2\epsilon\\
&\ge\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2-n\xi_3\epsilon.\qquad\text{(A16)}
\end{aligned}$$
Figure A3 illustrates an example of the vectors $h_1\alpha_1u_1(j)$, $h_2\alpha_2u_2(l)$, and $h_1\alpha_1u_1^o+h_2\alpha_2u_2^o$ in the complex vector space $\mathbb{C}^n$. Let the angles $\phi$, $\theta$, and $\gamma$ be defined as $\phi=\angle(h_1\alpha_1u_1(j),\,h_1\alpha_1u_1^o+h_2\alpha_2u_2^o)$, $\theta=\angle(h_1\alpha_1u_1(j)+h_2\alpha_2u_2(l),\,h_1\alpha_1u_1^o+h_2\alpha_2u_2^o)$, and $\gamma=\angle(h_1\alpha_1u_1(j),\,h_1\alpha_1u_1(j)+h_2\alpha_2u_2(l))$. The respective cosines of $\phi$, $\theta$, and $\gamma$ are given by
$$\begin{aligned}
\cos\phi&=\frac{\mathrm{Re}\{\langle h_1\alpha_1u_1^o+h_2\alpha_2u_2^o,\,\alpha_1h_1u_1(j)\rangle\}}{\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|\cdot\|\alpha_1h_1u_1(j)\|}\\
\cos\theta&=\frac{\mathrm{Re}\{\langle h_1\alpha_1u_1^o+h_2\alpha_2u_2^o,\,\alpha_1h_1u_1(j)+\alpha_2h_2u_2(l)\rangle\}}{\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|\cdot\|\alpha_1h_1u_1(j)+\alpha_2h_2u_2(l)\|}\\
\cos\gamma&=\frac{\mathrm{Re}\{\langle\alpha_1h_1u_1(j),\,\alpha_1h_1u_1(j)+\alpha_2h_2u_2(l)\rangle\}}{\|\alpha_1h_1u_1(j)\|\cdot\|\alpha_1h_1u_1(j)+\alpha_2h_2u_2(l)\|}.\qquad\text{(A17)}
\end{aligned}$$
Recalling that $\|\alpha_iu_i(\cdot)\|=\sqrt{nP_i}$ and $\|\alpha_iu_i^o\|=\sqrt{nP_i}$ for $i\in\{1,2\}$, $|\tilde\rho-\cos\angle(u_1(j),u_2(l))|<7\epsilon$, and $|\tilde\rho-\cos\angle(u_1^o,u_2^o)|<7\epsilon$ (see Lemma A5), it can be shown that
$$\Big|\,\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2-\|h_1\alpha_1u_1(j)+h_2\alpha_2u_2(l)\|^2\,\Big|\le n\xi_4\epsilon.\qquad\text{(A18)}$$
Figure A3. The definition of asymptotic angles used to prove Lemma A4.
By substituting Equations (A16) and (A18) in Equation (A17), we can write
$$\cos\theta\ge\frac{\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2-n\xi_3\epsilon}{\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|\sqrt{\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2+n\xi_4\epsilon}}\ge\frac{1-\xi_5\epsilon}{1+\xi_6\epsilon}.$$
For $\epsilon\to 0$, choose $\xi_7$ such that $\frac{1}{1+\xi_6\epsilon}>1-\xi_7\epsilon$; it then follows that
$$\cos\theta\ge(1-\xi_5\epsilon)(1-\xi_7\epsilon)\ge 1-\xi_8\epsilon.$$
Now, $\phi\le\gamma+\theta$, where the equality holds when the vectors $h_1\alpha_1u_1(j)$, $h_1\alpha_1u_1(j)+h_2\alpha_2u_2(l)$, and $h_1\alpha_1u_1^o+h_2\alpha_2u_2^o$ lie in the same plane. Noting that $0\le\gamma\le\pi$ and $0\le\theta<\frac{\pi}{2}$, it follows that
$$\cos\phi\ge\cos(\gamma+\theta)=\cos\gamma\cos\theta-\sin\gamma\sin\theta=\cos\gamma\cos\theta-\sin\gamma\sqrt{1-\cos^2\theta}$$
$$\ge\cos\gamma\,(1-\xi_8\epsilon)-\sin\gamma\sqrt{2\xi_8\epsilon}.\qquad\text{(A19)}$$
As $\epsilon\to 0$, let $\cos\gamma\,\xi_8\epsilon+\sin\gamma\sqrt{2\xi_8\epsilon}\le\xi_9\epsilon$, where $\xi_9$ is only a function of $P_1$, $P_2$, $h_1$, $h_2$, and $N$. Now Equation (A19) can be written as
$$\cos\phi\ge\cos\gamma-\xi_9\epsilon.\qquad\text{(A20)}$$
By revisiting the definitions of $\phi$ and $\gamma$, Equation (A20) can be rewritten as
$$\frac{\mathrm{Re}\{\langle h_1\alpha_1u_1(j),\,h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\rangle\}}{\|h_1\alpha_1u_1(j)\|\cdot\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|}\ge\frac{\mathrm{Re}\{\langle h_1\alpha_1u_1(j),\,h_1\alpha_1u_1(j)+h_2\alpha_2u_2(l)\rangle\}}{\|h_1\alpha_1u_1(j)\|\cdot\|h_1\alpha_1u_1(j)+h_2\alpha_2u_2(l)\|}-\xi_9\epsilon.\qquad\text{(A21)}$$
Recalling that $\big|\,\|h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\|^2-\|h_1\alpha_1u_1(j)+h_2\alpha_2u_2(l)\|^2\,\big|\le n\xi_4\epsilon$, Equation (A21) can be written as
$$\begin{aligned}
\mathrm{Re}\{\langle h_1\alpha_1u_1(j),\,h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\rangle\}&\ge\mathrm{Re}\{\langle h_1\alpha_1u_1(j),\,h_1\alpha_1u_1(j)+h_2\alpha_2u_2(l)\rangle\}-n\xi_{10}\epsilon\\
&\ge n\big(|h_1|^2P_1+\mathrm{Re}\{h_1^*h_2\}\tilde\rho\sqrt{P_1P_2}(1-7\epsilon)\big)-n\xi_{10}\epsilon\\
&=n\big(|h_1|^2P_1+\mathrm{Re}\{h_1^*h_2\}\tilde\rho\sqrt{P_1P_2}\big)-n\xi_{11}\epsilon.
\end{aligned}$$
As $z\in E_Z^c$, $\mathrm{Re}\{\langle z,\,h_1\alpha_1u_1(j)\rangle\}\ge-n|h_1|\sqrt{P_1N}\,\epsilon$, and we can bound the real component of the inner product between the received signal vector $y$ and $h_1\alpha_1u_1(j)$ as follows:
$$\begin{aligned}
\mathrm{Re}\{\langle h_1\alpha_1u_1(j),\,y\rangle\}&=\mathrm{Re}\{\langle h_1\alpha_1u_1(j),\,h_1\alpha_1u_1^o+h_2\alpha_2u_2^o\rangle\}+\mathrm{Re}\{\langle h_1\alpha_1u_1(j),\,z\rangle\}\\
&\ge n\big(|h_1|^2P_1+\mathrm{Re}\{h_1^*h_2\}\tilde\rho\sqrt{P_1P_2}\big)-n\xi_{11}\epsilon-n|h_1|\sqrt{P_1N}\,\epsilon\\
&=n\big(|h_1|^2P_1+\mathrm{Re}\{h_1^*h_2\}\tilde\rho\sqrt{P_1P_2}\big)-n\xi_{12}\epsilon.
\end{aligned}$$
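The step $\phi\le\gamma+\theta$ invoked above is the triangle inequality for angles between vectors (the angular metric on the sphere). A quick numerical sanity check of this geometric fact, purely illustrative and not part of the proof:

```python
import numpy as np

# Sanity check (illustration only): the angle between vectors is a metric,
# so ang(a, b) <= ang(a, c) + ang(c, b) for any third vector c.
rng = np.random.default_rng(0)

def ang(u, v):
    """Angle in [0, pi] between real vectors u and v."""
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

for _ in range(1000):
    a, b, c = rng.standard_normal((3, 8))
    assert ang(a, b) <= ang(a, c) + ang(c, b) + 1e-9
```

Equality holds when the three vectors are coplanar with $c$ "between" $a$ and $b$, which is exactly the condition stated for $\phi=\gamma+\theta$.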
Statement B: 
For every ( s 1 , s 2 , C 1 , C 2 , z ) E X c E Z c , we have the straightforward relation
$$\|y\|^2\le n\big(|h_1|^2P_1+|h_2|^2P_2+2\tilde\rho\,\mathrm{Re}\{h_1^*h_2\}\sqrt{P_1P_2}+N+\xi_{13}\epsilon\big),$$
where ξ 13 is only a function of P 1 , P 2 , h 1 , and h 2 .
Statement C: 
For every ( s 1 , s 2 , C 1 , C 2 , z ) E X c , it follows from the power constraint that
$$\|h_1\alpha_1u_1(j)\|\le|h_1|\sqrt{nP_1}.$$
Statement D: 
For every ( s 1 , s 2 , C 1 , C 2 , z ) E X c E Z c , the following implication holds
$$\big|\cos\angle(u_1(j),u_2(l))-\tilde\rho\big|<7\epsilon\ \text{ and }\ \|y-(\alpha_1h_1u_1(j)+\alpha_2h_2u_2(l))\|^2\le\|y-(\alpha_1h_1u_1^o+\alpha_2h_2u_2^o)\|^2\ \Longrightarrow\ \cos\angle(y,\,h_1\alpha_1u_1(j))\ge\Delta(\epsilon),$$
where
$$\Delta(\epsilon)\triangleq\frac{|h_1|^2P_1+\mathrm{Re}\{h_1h_2^*\}\tilde\rho\sqrt{P_1P_2}-\xi\epsilon}{\sqrt{|h_1|^2P_1\big(|h_1|^2P_1+|h_2|^2P_2+2\,\mathrm{Re}\{h_1h_2^*\}\tilde\rho\sqrt{P_1P_2}+N\big)}+\xi_2\epsilon}.$$
This statement follows by rewriting $\cos\angle(y,\,h_1\alpha_1u_1(j))$ as
$$\cos\angle(y,\,h_1\alpha_1u_1(j))=\frac{\mathrm{Re}\{\langle y,\,h_1\alpha_1u_1(j)\rangle\}}{\|y\|\,\|h_1\alpha_1u_1(j)\|}$$
and then lower-bounding $\mathrm{Re}\{\langle y,\,h_1\alpha_1u_1(j)\rangle\}$ using Statement A, and upper-bounding $\|y\|$ and $\|h_1\alpha_1u_1(j)\|$ using Statements B and C, respectively.
Now, by Statement D and the definition of $E'_{\hat U_1^*}$, we conclude that
$$E_{\hat U_1^*}\cap E_Z^c\cap E_S^c\cap E_X^c\subseteq E'_{\hat U_1^*}\cap E_Z^c\cap E_S^c\cap E_X^c$$
and therefore
$$\Pr\big[E_{\hat U_1^*}\cap E_Z^c\cap E_S^c\cap E_X^c\big]\le\Pr\big[E'_{\hat U_1^*}\cap E_Z^c\cap E_S^c\cap E_X^c\big].$$
 □
Lemma A5.
Let $(\tilde u_1,\tilde u_2)\in\mathcal C_1\times\mathcal C_2$ be the observed VQ codewords for any tuple $(s_1,s_2,\mathcal C_1,\mathcal C_2)$. Then, for every $\delta>0$ and $\epsilon>0$, there exists an $n_0(\delta,\epsilon)\in\mathbb{N}$ such that for all $n>n_0(\delta,\epsilon)$, with probability at least $1-\delta$,
$$\big|\tilde\rho-\cos\angle(\tilde u_1,\tilde u_2)\big|\le 7\epsilon,$$
where $\tilde\rho=\rho\sqrt{(1-2^{-2R_1})(1-2^{-2R_2})}$.
Proof .
Let $\tilde u_i=\nu_is_i+v_i$, $i\in\{1,2\}$, where $\nu_i$ is chosen such that $\langle s_i,v_i\rangle=0$, i.e.,
$$\nu_i=\frac{\|\tilde u_i\|}{\|s_i\|}\cos\angle(s_i,\tilde u_i),\quad i\in\{1,2\}.$$
Then, we can write
$$\frac{\mathrm{Re}\{\langle\tilde u_1,\tilde u_2\rangle\}}{\|\tilde u_1\|\,\|\tilde u_2\|}=\frac{1}{\|\tilde u_1\|\,\|\tilde u_2\|}\,\mathrm{Re}\big\{\nu_1\nu_2\langle s_1,s_2\rangle+\nu_1\langle s_1,v_2\rangle+\nu_2\langle v_1,s_2\rangle+\langle v_1,v_2\rangle\big\}.\qquad\text{(A22)}$$
Define the set of events
$$\begin{aligned}
A_1&=\Big\{(s_1,s_2,\mathcal C_1,\mathcal C_2):\ \Big|\tilde\rho-\tfrac{\nu_1\nu_2}{\|\tilde u_1\|\|\tilde u_2\|}\,\mathrm{Re}\{\langle s_1,s_2\rangle\}\Big|>4\epsilon\Big\}\\
A_2&=\Big\{(s_1,s_2,\mathcal C_1,\mathcal C_2):\ \Big|\tfrac{\nu_1}{\|\tilde u_1\|\|\tilde u_2\|}\,\mathrm{Re}\{\langle s_1,v_2\rangle\}\Big|>\epsilon\Big\}\\
A_3&=\Big\{(s_1,s_2,\mathcal C_1,\mathcal C_2):\ \Big|\tfrac{\nu_2}{\|\tilde u_1\|\|\tilde u_2\|}\,\mathrm{Re}\{\langle s_2,v_1\rangle\}\Big|>\epsilon\Big\}\\
A_4&=\Big\{(s_1,s_2,\mathcal C_1,\mathcal C_2):\ \Big|\tfrac{1}{\|\tilde u_1\|\|\tilde u_2\|}\,\mathrm{Re}\{\langle v_1,v_2\rangle\}\Big|>\epsilon\Big\}.
\end{aligned}$$
Let $E(X_1,X_2)$ be the event that the two channel-input sequences $X_1$ and $X_2$ are not jointly typical, i.e., that $|\tilde\rho-\cos\angle(\tilde u_1,\tilde u_2)|>7\epsilon$. From Equation (A22), we observe that $E(X_1,X_2)\subseteq A_1\cup A_2\cup A_3\cup A_4$ and therefore
$$\Pr\big[E(X_1,X_2)\,\big|\,E_S^c\cap E_{X_1}^c\cap E_{X_2}^c\big]\le\Pr\big[A_1\,\big|\,E_S^c\cap E_{X_1}^c\cap E_{X_2}^c\big]+\Pr\big[A_2\,\big|\,E_S^c\big]+\Pr\big[A_3\,\big|\,E_S^c\big]+\Pr\big[A_4\,\big|\,E_S^c\big].\qquad\text{(A23)}$$
Each term on the RHS of Equation (A23) can be upper-bounded by an arbitrarily small positive number $\delta$; see Lemmas D.20 and D.21 in [10]. Thus, $\Pr\big[E(X_1,X_2)\,\big|\,E_S^c\cap E_{X_1}^c\cap E_{X_2}^c\big]\le 4\delta$. Furthermore, from Lemmas D.2 and D.4 in [10], we have $\Pr[E_S]<\delta$, $\Pr[E_{X_1}]<6\delta$, and $\Pr[E_{X_2}]<6\delta$. It thus follows that, as $n\to\infty$, $\Pr[E(X_1,X_2)]\to 0$ and hence
$$\Pr\big[\,\big|\tilde\rho-\cos\angle(\tilde u_1,\tilde u_2)\big|\ge 7\epsilon\,\big]\to 0,$$
which establishes the desired result. □
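As a numerical illustration of Lemma A5 (parameter values here are arbitrary, chosen only for illustration), the asymptotic codeword correlation $\tilde\rho$ is always smaller in magnitude than the source correlation $\rho$ and approaches $\rho$ as both coding rates grow:

```python
import math

def rho_tilde(rho, R1, R2):
    # Asymptotic correlation of the VQ codewords, per Lemma A5:
    # rho_tilde = rho * sqrt((1 - 2^{-2 R1}) (1 - 2^{-2 R2})).
    return rho * math.sqrt((1 - 2**(-2 * R1)) * (1 - 2**(-2 * R2)))

rho = 0.9
assert abs(rho_tilde(rho, 1.0, 1.5)) < rho           # quantization shrinks the correlation
assert abs(rho_tilde(rho, 10.0, 10.0) - rho) < 1e-5  # recovered in the high-rate limit
```

This is the correlation value that governs the decoding conditions of Lemmas A3 and A4 above.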

Appendix E. MMSE of Linear Estimation Step in JSC-VQ and HDA-JSC-VQ Decoders

Appendix E.1. JSC-VQ System

The MMSE of the system in Section 5.1 can be obtained by using asymptotic arguments similar to those used in ([10], Appendix D).
Let S ˜ i be the vector-quantized value of S i , and Y k = α 1 h 1 S ˜ 1 , k + α 2 h 2 S ˜ 2 , k + W k , k = 1 , , n . For optimal VQ of Gaussian sources, Equations (5)–(8) hold and the following asymptotic covariances can be verified.
$$\begin{aligned}
k_{11}&=\tfrac{1}{n}\overline{E\{\|\tilde S_1\|^2\}}=\sigma^2(1-2^{-2R_1}) & k_{12}&=k_{21}=\tfrac{1}{n}\overline{E\{\langle\tilde S_1,\tilde S_2\rangle\}}=\sigma^2\rho(1-2^{-2R_1})(1-2^{-2R_2})\\
k_{13}&=k_{31}^*=\tfrac{1}{n}\overline{E\{\langle\tilde S_1,Y\rangle\}}=(\alpha_1h_1+\alpha_2h_2\rho)\,k_{11} & k_{22}&=\tfrac{1}{n}\overline{E\{\|\tilde S_2\|^2\}}=\sigma^2(1-2^{-2R_2})\\
k_{23}&=k_{32}^*=\tfrac{1}{n}\overline{E\{\langle\tilde S_2,Y\rangle\}}=(\alpha_2h_2+\alpha_1h_1\rho)\,k_{22} & k_{33}&=\tfrac{1}{n}\overline{E\{\|Y\|^2\}}=\alpha_1^2\gamma_1\sigma^2+2\alpha_1\alpha_2\gamma_{12}\rho\sigma^2+\alpha_2^2\gamma_2\sigma^2+N\\
c_{11}&=\tfrac{1}{n}\overline{E\{\langle S_1,\tilde S_1\rangle\}}=k_{11} & c_{12}&=\tfrac{1}{n}\overline{E\{\langle S_1,\tilde S_2\rangle\}}=\rho k_{22}\\
c_{13}&=\tfrac{1}{n}\overline{E\{\langle S_1,Y\rangle\}}=(\alpha_1h_1+\alpha_2h_2\rho)\sigma^2 & c_{21}&=\tfrac{1}{n}\overline{E\{\langle S_2,\tilde S_1\rangle\}}=\rho k_{11}\\
c_{22}&=\tfrac{1}{n}\overline{E\{\langle S_2,\tilde S_2\rangle\}}=k_{22} & c_{23}&=\tfrac{1}{n}\overline{E\{\langle S_2,Y\rangle\}}=(\alpha_2h_2+\alpha_1h_1\rho)\sigma^2.
\end{aligned}$$
Using these results, the MMSE of the linear estimator for different outage events can be determined as follows.
  • Outage event E 12 : Define the column vector y ˜ k = ( S ˜ 1 , k S ˜ 2 , k ) T whose covariance matrix is
    $$K_{12}=\begin{pmatrix}k_{11}&k_{12}\\k_{21}&k_{22}\end{pmatrix}$$
    and let c i = ( c i 1 c i 2 ) T . The optimal linear estimator is S ^ i , k = ( q i 1 q i 2 ) T y ˜ k whose coefficients are given by ( q i 1 q i 2 ) T = K 12 1 c i . The MMSE of this estimator is
    $$d_i(E_{12}\,|\,R_1,R_2,\alpha_1,\alpha_2)=\sigma^2-q_{i1}c_{i1}-q_{i2}c_{i2}=\frac{\sigma^2\,2^{-2R_i}\big(1-\rho^2(1-2^{-2R_j})\big)}{1-\tilde\rho^2},\quad i=1,2,\ j\ne i.\qquad\text{(A24)}$$
  • Partial outage event E 1 : Define the column vector y ˜ k = ( S ˜ 1 , k Y k ) T whose covariance matrix is
    $$K_1=\begin{pmatrix}k_{11}&k_{13}\\k_{31}&k_{33}\end{pmatrix}$$
    and let c i = ( c i 1 c i 3 ) T . The optimal linear estimator is S ^ i , k = ( q i 1 q i 3 ) T y ˜ k whose coefficients are given by ( q i 1 q i 3 ) T = ( K 1 ) 1 c i . The MMSE of this estimator is
    $$d_i(E_1\,|\,R_1,R_2,\alpha_1,\alpha_2)=\sigma^2-q_{i1}c_{i1}-q_{i3}c_{i3},\quad i=1,2.\qquad\text{(A25)}$$
  • Partial outage event E 2 : Define the column vector y ˜ k = ( S ˜ 2 , k Y k ) T whose covariance matrix is
    $$K_2=\begin{pmatrix}k_{22}&k_{23}\\k_{32}&k_{33}\end{pmatrix}$$
    and let c i = ( c i 2 c i 3 ) T . The optimal linear estimator is S ^ i , k = ( q i 2 q i 3 ) T y ˜ k whose coefficients are given by ( q i 2 q i 3 ) T = ( K 2 ) 1 c i . The MMSE of this estimator is
    $$d_i(E_2\,|\,R_1,R_2,\alpha_1,\alpha_2)=\sigma^2-q_{i2}c_{i2}-q_{i3}c_{i3},\quad i=1,2.\qquad\text{(A26)}$$
  • Total outage event $\bar E_{12}$: The optimal linear estimator is $\hat S_{i,k}=q_{i3}Y_k$, where $q_{i3}=c_{i3}/k_{33}$, and the MMSE is
    $$d_i(\bar E_{12}\,|\,R_1,R_2,\alpha_1,\alpha_2)=\sigma^2-\frac{|c_{i3}|^2}{k_{33}},\quad i=1,2.\qquad\text{(A27)}$$
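The closed form in Equation (A24) can be checked numerically against the two-dimensional linear estimator it summarizes. In the sketch below (the parameter values are arbitrary, chosen only for illustration), $K_{12}$ and $c_1$ are built from the covariances listed above, and $\sigma^2-c_1^{\mathsf T}K_{12}^{-1}c_1$ is compared with the closed form:

```python
import numpy as np

# Illustrative parameters (not taken from the paper's experiments).
sigma2, rho, R1, R2 = 1.0, 0.9, 1.0, 1.5
t1, t2 = 1 - 2**(-2 * R1), 1 - 2**(-2 * R2)

k11, k22 = sigma2 * t1, sigma2 * t2        # E{|S~i|^2}/n
k12 = sigma2 * rho * t1 * t2               # E{<S~1, S~2>}/n
rho_t = rho * np.sqrt(t1 * t2)             # codeword correlation rho-tilde

K12 = np.array([[k11, k12], [k12, k22]])
c1 = np.array([k11, rho * k22])            # (c11, c12)

q = np.linalg.solve(K12, c1)               # optimal coefficients K12^{-1} c1
d1 = sigma2 - q @ c1                       # MMSE of the linear estimate
d1_closed = sigma2 * 2**(-2 * R1) * (1 - rho**2 * (1 - 2**(-2 * R2))) / (1 - rho_t**2)
assert np.isclose(d1, d1_closed)
```

The two values agree, confirming that the closed form is simply the Gaussian linear-MMSE expression specialized to the no-outage observation vector.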

Appendix E.2. HDA-JSC-VQ System

The MMSE of the HDA-JSC-VQ system can be determined in the same way as in the case of JSC-VQ system by modifying the covariances to account for the difference in the channel output. For the HDA-JSC-VQ system, we have
Y k = α 1 h 1 S ˜ 1 , k + α 2 h 2 S ˜ 2 , k + β 1 h 1 S 1 , k + β 2 h 2 S 2 , k + W k , k = 1 , , n .
Therefore, k 11 , k 12 , k 22 , c 11 , c 12 , c 21 , and c 22 will be the same as in the previous section, but the channel output dependent quantities now become
$$\begin{aligned}
k_{13}&=k_{31}^*=\big[(\alpha_1+\beta_1)h_1+\alpha_2h_2\rho\big]k_{11}+\beta_2h_2k_{12}\\
k_{23}&=k_{32}^*=\big[(\alpha_2+\beta_2)h_2+\alpha_1h_1\rho\big]k_{22}+\beta_1h_1k_{12}\\
k_{33}&=\alpha_1^2\gamma_1\sigma^2+2\alpha_1\beta_1\gamma_1k_{11}+2\alpha_1\alpha_2\gamma_{12}\rho\sigma^2+2\alpha_1\beta_2\gamma_{12}\rho k_{22}+\beta_1^2\gamma_1k_{11}+2\beta_1\alpha_2\gamma_{12}\rho k_{11}\\
&\quad+2\beta_1\beta_2\gamma_{12}k_{12}+2\alpha_2\beta_2\gamma_{12}k_{22}+\alpha_2^2\gamma_2\sigma^2+\beta_2^2\gamma_2k_{22}+N\\
c_{13}&=(\alpha_1h_1+\alpha_2h_2\rho)\sigma^2+\beta_1h_1k_{11}+\beta_2h_2\rho k_{22}\\
c_{23}&=(\alpha_2h_2+\alpha_1h_1\rho)\sigma^2+\beta_1h_1\rho k_{11}+\beta_2h_2k_{22}.
\end{aligned}$$
Parallel to Equations (A24)–(A27), the MMSE for each outage event can be obtained as follows.
  • Outage event $E_{12}$: Both $\tilde S_1$ and $\tilde S_2$ are decoded correctly. Unlike in the JSC-VQ system, the linear estimator is $\hat S_{i,k}=(q_{i1}\ q_{i2}\ q_{i3})^{\mathsf T}\tilde y_k$, where $\tilde y_k=(\tilde S_{1,k}\ \tilde S_{2,k}\ Y_k)^{\mathsf T}$, the optimal coefficients are given by $(q_{i1}\ q_{i2}\ q_{i3})^{\mathsf T}=K^{-1}c_i$, $K$ is the $3\times 3$ matrix whose $(l,m)$-th element is $k_{lm}$, and $c_i=(c_{i1}\ c_{i2}\ c_{i3})^{\mathsf T}$. The MMSE is thus
    $$d_i(E_{12}\,|\,R_1,R_2,\alpha_1,\alpha_2,\beta_1,\beta_2)=\sigma^2-q_{i1}c_{i1}-q_{i2}c_{i2}-q_{i3}c_{i3},\quad i=1,2.$$
  • Partial outage event E 1 : Only S ˜ 1 is decoded correctly and hence
    $$d_i(E_1\,|\,R_1,R_2,\alpha_1,\alpha_2,\beta_1,\beta_2)=\sigma^2-q_{i1}c_{i1}-q_{i3}c_{i3},\quad i=1,2.$$
  • Partial outage event E 2 : Only S ˜ 2 is decoded correctly and hence
    $$d_i(E_2\,|\,R_1,R_2,\alpha_1,\alpha_2,\beta_1,\beta_2)=\sigma^2-q_{i2}c_{i2}-q_{i3}c_{i3},\quad i=1,2.$$
  • Total outage event $\bar E_{12}$: Neither $\tilde S_1$ nor $\tilde S_2$ is decoded correctly, and therefore
    $$d_i(\bar E_{12}\,|\,R_1,R_2,\alpha_1,\alpha_2,\beta_1,\beta_2)=\sigma^2-q_{i3}c_{i3},\quad i=1,2.$$
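Every outage case above instantiates the same computation: restrict the observation vector to the quantities decodable under that event, then evaluate $d=\sigma^2-c^{\mathsf T}K^{-1}c$ with the correspondingly restricted covariance matrix $K$ and cross-covariance vector $c$. A generic helper, sketched here with hypothetical names and real-valued inputs for simplicity:

```python
import numpy as np

def lmmse_distortion(sigma2, K, c):
    """MMSE of the optimal linear estimate with coefficients q = K^{-1} c:
    d = sigma^2 - c^T K^{-1} c, the pattern behind Equations (A24)-(A27)
    and their HDA-JSC-VQ analogues."""
    K = np.atleast_2d(np.asarray(K, dtype=float))
    c = np.atleast_1d(np.asarray(c, dtype=float))
    q = np.linalg.solve(K, c)   # optimal estimator coefficients
    return sigma2 - c @ q

# Total-outage case reduces to the scalar formula sigma^2 - c13^2 / k33.
sigma2, k33, c13 = 1.0, 2.5, 0.8
assert np.isclose(lmmse_distortion(sigma2, [[k33]], [c13]),
                  sigma2 - c13**2 / k33)
```

For the fading-averaged MMSE, these per-event distortions are then weighted by the probabilities of the corresponding outage events.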

References

  1. Chong, C.Y.; Kumar, S.P. Sensor networks: Evolution, opportunities, and challenges. Proc. IEEE 2003, 91, 1247–1256. [Google Scholar] [CrossRef]
  2. Gastpar, M.; Vetterli, M. Source-channel communication in sensor networks. In Information Processing in Sensor Networks; Springer: Berlin/Heidelberg, Germany, 2003; pp. 162–176. [Google Scholar]
  3. Goldsmith, A.J.; Varaiya, P. Capacity of fading channels with channel side information. IEEE Trans. Inf. Theory 1997, 43, 1986–1992. [Google Scholar] [CrossRef] [Green Version]
  4. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley: Hoboken, NJ, USA, 2006. [Google Scholar]
  5. Cover, T.M.; Gamal, A.E.; Salehi, M. Multiple access channels with arbitrarily correlated sources. IEEE Trans. Inf. Theory 1980, 26, 648–657. [Google Scholar] [CrossRef] [Green Version]
  6. Xiao, J.J.; Luo, Z.Q. Multiterminal Source–Channel Communication Over an Orthogonal Multiple-Access Channel. IEEE Trans. Inf. Theory 2007, 53, 3255–3264. [Google Scholar] [CrossRef]
  7. Tian, C.; Chen, J.; Diggavi, S.H.; Shitz, S.S. Optimality and Approximate Optimality of Source-Channel Separation in Networks. IEEE Trans. Inf. Theory 2014, 60, 904–917. [Google Scholar] [CrossRef]
  8. Wagner, A.B.; Tavildar, S.; Viswanath, P. Rate region of the quadratic Gaussian two-encoder source-coding problem. IEEE Trans. Inf. Theory 2008, 54, 1938–1961. [Google Scholar] [CrossRef]
  9. Ray, S.; Médard, M.; Effros, M.; Koetter, R. On Separation for multiple-access channels. In Proceedings of the 2006 IEEE Information Theory Workshop—ITW ’06, Punta del Este, Uruguay, 22–26 October 2006; pp. 399–403. [Google Scholar]
  10. Lapidoth, A.; Tinguely, S. Sending a bi-variate Gaussian source over a Gaussian MAC. IEEE Trans. Inf. Theory 2010, 56, 2714–2752. [Google Scholar] [CrossRef]
  11. Viswanathan, H.; Berger, T. The quadratic Gaussian CEO problem. IEEE Trans. Inf. Theory 1997, 43, 1549–1559. [Google Scholar] [CrossRef]
  12. Gastpar, M. Uncoded transmission is exactly optimal for a simple Gaussian network. IEEE Trans. Inf. Theory 2008, 54, 5247–5251. [Google Scholar] [CrossRef]
  13. Goblick, T.J. Theoretical Limitations on the Transmission of Data From Analog Sources. IEEE Trans. Inf. Theory 1965, 11, 558–567. [Google Scholar] [CrossRef]
  14. Jain, A.; Gündüz, D.; Kulkarni, S.R.; Poor, H.V.; Verdú, S. Energy-Distortion Tradeoffs in Gaussian Joint Source-Channel Coding Problems. IEEE Trans. Inf. Theory 2012, 58, 3153–3167. [Google Scholar] [CrossRef]
  15. Berger, T. Multiterminal source coding. In The Information Theory Approach to Communication; CSIM Courses and Lecture Notes No. 229; Springer: Berlin/Heidelberg, Germany, 1977; pp. 171–231. [Google Scholar]
  16. Narasimhan, R. Individual Outage Rate Regions for Fading Multiple Access Channels. In Proceedings of the 2007 IEEE International Symposium on Information Theory, Nice, France, 24–29 June 2007; pp. 1571–1575. [Google Scholar]
  17. Mittal, U.; Phamdo, N. Hybrid digital–analog (HDA) joint source-channel codes for broadcasting and robust communication. IEEE Trans. Inf. Theory 2002, 48, 1082–1102. [Google Scholar] [CrossRef]
  18. Floor, P.A.; Kim, A.N.; Wernersson, N.; Ramstad, T.A.; Skoglund, M.; Balasingham, I. Zero-delay joint source-channel coding for a bivariate Gaussian on a Gaussian MAC. IEEE Trans. Commun. 2012, 60, 3091–3102. [Google Scholar] [CrossRef]
  19. Kron, J.; Alajaji, F.; Skoglund, M. Low-delay joint source-channel mappings for the Gaussian MAC. IEEE Comm. Lett. 2018, 18, 249–252. [Google Scholar] [CrossRef]
  20. Floor, P.A.; Kim, A.N.; Ramstad, T.A.; Balasingham, I.; Wernersson, N.; Skoglund, M. On joint source-channel coding for a multivariate Gaussian on a Gaussian MAC. IEEE Trans. Commun. 2015, 63, 1824–1936. [Google Scholar] [CrossRef]
  21. Fischer, T.R.; Marcellin, M.W.; Wang, M. Trellis-coded vector quantization. IEEE Trans. Inf. Theory 1992, 38, 1551–1566. [Google Scholar] [CrossRef]
  22. Yahampath, P.; Samarawickrama, U. Joint source-channel decoding of convolutionally encoded multiple descriptions. In Proceedings of the IEEE Global Telecommunications Conference, St. Louis, MO, USA, 28 November–2 December 2005; pp. 1363–1367. [Google Scholar]
Figure 1. Hybrid digital–analog (HDA) transmission of Gaussian source over GMAC. VQ: vector quantizer; CC: channel encoder.
Figure 2. ( γ 1 , γ 2 ) pairs corresponding to outage events in HDA coding of uncorrelated Gaussian sources; τ = 1 t 2 1 t 1 .
Figure 3. Fading-averaged (FA)-mean minimum square error (MMSE) for uncorrelated unit-variance Gaussian sources and Rayleigh-fading GMAC with Γ ¯ = 1.0 .
Figure 4. FA-MMSE for correlated unit-variance Gaussian sources with ρ = 0.9 and Rayleigh-fading GMAC with Γ ¯ = 1.0 .
Figure 5. P = 10 dB (left) and P = 30 dB (right).
Figure 6. FA-MMSE for correlated unit-variance Gaussian sources with ρ = 0.9 and Γ ¯ = 1.0 , when the output power of the transmitter for source 2 has 10 dB lower than that for source 1 (i.e., P 1 = 10 P 2 ).
