
On the Energy-Distortion Tradeoff of Gaussian Broadcast Channels with Feedback

1. Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
2. Goldman Sachs, New York, NY 10282, USA
3. Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Be’er-Sheva 8410501, Israel
4. Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
* Author to whom correspondence should be addressed.
Entropy 2017, 19(6), 243; https://doi.org/10.3390/e19060243
Submission received: 25 April 2017 / Revised: 17 May 2017 / Accepted: 19 May 2017 / Published: 24 May 2017
(This article belongs to the Special Issue Network Information Theory)

Abstract
This work studies the relationship between the energy allocated for transmitting a pair of correlated Gaussian sources over a two-user Gaussian broadcast channel with noiseless channel output feedback (GBCF) and the resulting distortion at the receivers. Our goal is to characterize the minimum transmission energy required for broadcasting a pair of source samples, such that each source can be reconstructed at its respective receiver to within a target distortion, when the source-channel bandwidth ratio is not restricted. This minimum transmission energy is defined as the energy-distortion tradeoff (EDT). We derive a lower bound and three upper bounds on the optimal EDT. For the upper bounds, we analyze the EDT of three transmission schemes: two schemes are based on separate source-channel coding and apply encoding over multiple samples of source pairs, and the third scheme is a joint source-channel coding scheme that applies uncoded linear transmission on a single source-sample pair and is obtained by extending the Ozarow–Leung (OL) scheme. Numerical simulations show that the EDT of the OL-based scheme is close to that of the better of the two separation-based schemes, which makes the OL scheme attractive for energy-efficient, low-latency and low-complexity source transmission over GBCFs.

1. Introduction

This work studies the energy-distortion tradeoff (EDT) for the transmission of a pair of correlated Gaussian sources over a two-user Gaussian broadcast channel (GBC) with noiseless, causal feedback, referred to as the GBCF. The EDT was originally proposed in [1] to characterize the minimum energy per source sample required to achieve a target distortion at the receiver, without constraining the source-channel bandwidth ratio. In many practical scenarios, e.g., satellite broadcasting [2], sensor networks measuring physical processes [3,4] and wireless body-area sensor networks [5,6,7], correlated observations need to be transmitted over noisy channels. Moreover, in various emerging applications, particularly in the context of the Internet of Things, the sampling rates are low, and hence, the channel bandwidth available for transmission is much larger than the rate of the sources. Consequently, the main fundamental limitation for the communication system is the available energy per source sample. For example, in wireless body-area sensor networks, wireless computing devices located on, or inside, the human body measure physiological parameters, which typically exhibit correlations as they originate from the same source. These devices commonly have a limited energy supply due to their size and are also subject to transmission power constraints due to safety restrictions, while bandwidth can be relatively large as communication takes place over short distances [8,9,10]. In this application, the transmission of correlated parameters measured by a single sensor to different devices can be modeled as a BC with correlated sources.
As an example of such a setting, consider a sensor measuring heart rate, as well as cardiac output (the volume of blood output by the heart per unit time), which are correlated parameters (see, e.g., [10] (Section 2.6)), where the heart rate measurements are communicated to a smart watch (e.g., for the purpose of fitness tracking), while the cardiac output is communicated to a smart phone (e.g., for health monitoring and reporting purposes).
It is well known that for lossy transmission of a Gaussian source over a Gaussian memoryless point-to-point channel, either with or without feedback, when the source-channel bandwidth ratio is fixed and the average power is finite, separate source and channel coding (SSCC) achieves the minimum possible average mean square error (MSE) distortion [11] (Theorem 3). In [1] (Cor. 1), it is further shown that SSCC is also optimal in the sense of EDT: for any target MSE distortion level, the minimal transmission energy is achieved by optimal lossy compression [12] (Chapter 13) followed by the most energy-efficient channel code [13]. While [1] (Cor. 1) considered an unbounded number of source samples, more recent works [14] (Theorem 9) and [15] showed that similar observations hold also for the point-to-point channel with a finite number of source samples. Except for a few special scenarios, e.g., [16,17,18] and the references therein, the optimality of SSCC does not generalize to multiuser networks, and a joint design of the source and channel codes can improve the performance.
An example of a setting in which SSCC is sub-optimal is the transmission of a pair of correlated Gaussian sources over a GBC where the bandwidths of the source and the channel match (i.e., on average, a single source sample pair is transmitted over a single use of the channel). The complete characterization of the achievable distortion pairs for this problem was given in [19], which also showed that a joint source-channel coding (JSCC) transmission scheme is optimal while separation-based schemes cannot achieve the optimal performance. JSCC for the transmission of correlated sources over GBCs with a source-channel bandwidth mismatch was recently studied in [20], where novel hybrid digital/analog coding schemes were proposed and shown to be superior to other schemes known in the literature. It should be noted that the transmission of correlated sources over GBCs is an important communications scenario, which applies to a vast number of practical applications, including broadcasting video [21,22], images [23] and physical measurements [24].
The impact of feedback on lossy JSCC over multiuser channels was considered in relatively few works. Several achievability schemes and a set of necessary conditions for losslessly transmitting a pair of discrete and memoryless correlated sources over a multiple-access channel (MAC) with feedback were presented in [25]. Lossy transmission of correlated Gaussian sources over a two-user Gaussian MAC with feedback was studied in [26], in which sufficient conditions, as well as necessary conditions for the achievability of an MSE distortion pair were derived for the case in which the source and channel bandwidths match. The work [26] also showed that for the symmetric setting, if the channel signal-to-noise ratio (SNR) is low enough, then uncoded transmission is optimal. While [26] considered source-channel coding with a unit bandwidth ratio, [1] studied the EDT for the transmission of correlated Gaussian sources over a two-user Gaussian MAC with and without feedback, when the bandwidth ratio is not restricted. Lastly, [27] improved the lower bound derived in [1] for the two-user Gaussian MAC without feedback and extended the results to more than two users.
While EDT analysis has gained some attention in recent years, the EDT of broadcast channels was considered only for GBCs without feedback. In particular, the work [15] studied the transmission of Gaussian sources over a GBC and characterized the energy-distortion exponents, namely, the exponential rate of decay of the square-error distortion as the available energy-to-noise ratio increases without bound. For GBCFs, the existing literature mainly focused on channel coding aspects, considering independent and uniformly distributed messages. A key work in this context is the work of Ozarow and Leung (OL) [28], which obtained inner and outer bounds on the capacity region of the two-user GBCF, by extending the point-to-point transmission strategy of Schalkwijk–Kailath (SK) [29]. The work [30] extended the OL scheme for two-user GBCFs by using estimators with memory (at the receivers) instead of the memoryless estimators used in the original OL scheme of [28]. In contrast to the point-to-point case [29], for GBCFs, both the scheme of [28] and the scheme of [30] are generally suboptimal. While the analysis and construction of the OL scheme [28] are carried out in an estimation theoretic framework, the works [31,32] approached the problem of channel coding for the GBCF within a control theoretic framework. Specifically, [32] proposed a transmission scheme based on linear quadratic Gaussian (LQG) control theory that achieves rate pairs outside the achievable rate region of the OL code developed in [28]. Recently, it was shown in [33,34] that, for the two-user GBCF when the noise components at the receivers are mutually independent with equal variances, the LQG scheme of [32] achieves the maximal sum-rate among all possible linear-feedback schemes. Finally, it was shown in [35] that the capacity of GBCFs with independent noises at the receivers and only a common message cannot be achieved using a linear feedback scheme. Instead, the work [35] presented a capacity-achieving non-linear feedback scheme.
JSCC for the transmission of correlated Gaussian sources over GBCFs when the number of transmitted symbols is finite (referred to as the finite horizon regime) was previously considered in [36], which studied the minimal number of channel uses required to achieve a target MSE distortion pair. Three linear encoding schemes based on uncoded transmission were considered: the first scheme was a JSCC scheme based on the coding scheme of [28], to which we shall refer as the OL scheme; the second scheme was a JSCC scheme based on the scheme of [32], to which we shall refer as the LQG scheme; and the third scheme was a JSCC scheme whose parameters are obtained using dynamic programming (DP) (in the present work we discuss only the former OL and LQG schemes since the scheme based on DP becomes analytically and computationally infeasible as the number of channel uses goes to infinity). We note that linear and uncoded transmission, as implemented in the OL and in the LQG schemes, has important advantages, including low computational complexity, short coding delays and small storage requirements, which make this type of coding very desirable. We further note that although the LQG channel coding scheme of [32] for the two-user GBCF (with two messages) achieves the largest rate region out of all known channel coding schemes, in [36], it was shown that when the time horizon is finite, JSCC based on the OL scheme can achieve MSE distortion pairs lower than the JSCC based on the LQG scheme. In the present work, we analyze lossy source coding over GBCFs using SSCC and JSCC schemes based on a different performance metric: the EDT.
We note here that, as discussed above, noiseless feedback has been studied extensively in wireless Gaussian networks. An immediate benefit of this analysis is that the performance obtained for noiseless feedback serves as an upper bound on the performance for channels with noisy feedback. The analysis of noiseless feedback scenarios also leads to guidelines and motivation, which can then be applied to channels with noisy feedback. Indeed, the works [37,38], which studied channel coding for point-to-point Gaussian channels with noisy feedback and for GBCs with noisy feedback, respectively, considered transmission schemes based on the SK scheme [29] and on the OL scheme [28], respectively, both originally developed for noiseless feedback scenarios. In [37,38], the noise in the feedback links was handled by applying modulo-lattice precoding in both the direct and feedback links. It is shown in [37,38] that, while having noise in the feedback links results in a performance degradation compared to the case of noiseless feedback [37] (Section V.D), many of the benefits of noiseless feedback can be carried over to the more practical setup of noisy feedback, thereby further motivating the current work. It follows that the analysis of noiseless feedback models provides practically relevant insights while facilitating simpler analysis.
Main contributions: In this work, the EDT for GBCFs is studied for the first time. We derive lower and upper bounds on the minimum energy per source pair required to achieve a target MSE distortion at each receiver, for the problem of transmitting a pair of Gaussian sources over a two-user GBCF, without constraining the number of channel uses per source sample. The new lower bound is based on cut-set arguments, while the upper bounds are obtained using three transmission schemes: two SSCC schemes and an uncoded JSCC scheme. The first SSCC scheme jointly compresses the two source sequences into a single bit stream, and transmits this stream to both receivers as a common message. The second SSCC scheme separately encodes each source sequence into two distinct bit streams, and broadcasts them via the LQG channel code of [32]. It is shown that in terms of the minimum energy-per-bit, the LQG code provides no gain compared to orthogonal transmission, from which we conclude that the first SSCC scheme, which jointly compresses the sequences into a single stream, is more energy efficient. As both SSCC schemes apply coding over multiple samples of the source pairs, they require high computational complexity, long delays and large storage space. We then consider the uncoded JSCC OL scheme presented in [36]. For this scheme, we first consider the case of fixed SNR and derive an upper bound on the number of channel uses required to achieve a target distortion pair. When the SNR approaches zero, the required number of channel uses grows, and the derived bound becomes tight. In the limit of SNR $\to 0$, this provides a simple upper bound on the EDT. While our primary focus in this work is on the analysis of the three schemes mentioned above, such an analysis is a first step towards identifying schemes that would achieve improved EDT performance in GBCFs.
Numerical results indicate that the SSCC scheme based on joint compression achieves better EDT compared to the JSCC OL scheme; yet, the gap is quite small. Moreover, in delay-sensitive applications, there is a constraint on the maximal allowed latency in transmitting each source sample to the destination. In such scenarios, coding over large blocks of independent and identically distributed (i.i.d.) pairs of source samples is not possible, and instantaneous transmission of each observed pair of source samples via the JSCC-OL scheme may be preferable in order to satisfy the latency requirement, while maintaining high energy efficiency.
The rest of this paper is organized as follows: The problem formulation is detailed in Section 2. The lower bound on the minimum energy per source sample is derived in Section 3. Upper bounds on the minimum energy per source sample are derived in Section 4 and Section 5. Numerical results are detailed in Section 6, and concluding remarks are provided in Section 7.

2. Problem Definition

2.1. Notation

We use capital letters to denote random variables, e.g., $X$, and boldface letters to denote column random vectors, e.g., $\mathbf{X}$; the $k$-th element of a vector $\mathbf{X}$ is denoted by $X_k$, $k \geq 1$, and we use $X_k^j$, with $j \geq k$, to denote $(X_k, X_{k+1}, \ldots, X_j)$. We use sans-serif fonts to denote matrices, e.g., $\mathsf{Q}$. We use $h(\cdot)$ to denote differential entropy, $I(\cdot\,;\cdot)$ to denote mutual information, and $X \to Y \to Z$ to denote a Markov chain, as defined in [12] (Chapters 9 and 2). We use $\mathbb{E}\{\cdot\}$, $(\cdot)^T$, $\log(\cdot)$, $\mathbb{R}$ and $\mathbb{N}$ to denote expectation, transpose, natural-base logarithm, the set of real numbers and the set of non-negative integers, respectively. We let $O(g_1(P))$ denote the set of functions $g_2(P)$ such that $\limsup_{P \to 0} g_2(P)/g_1(P) < \infty$. Finally, we define $\mathrm{sgn}(x)$ as the sign of $x \in \mathbb{R}$, with $\mathrm{sgn}(0) \triangleq 1$.

2.2. Problem Setup

The two-user GBCF is depicted in Figure 1, with all of the signals being real. In this work, we consider the symmetric setting in which the sources have the same variance and the noises have the same variance. The encoder observes $m$ i.i.d. realizations of a correlated and jointly Gaussian pair of sources, $(S_{1,j}, S_{2,j}) \sim \mathcal{N}(0, \mathsf{Q}_s)$, $j = 1, \ldots, m$, where $\mathsf{Q}_s \triangleq \sigma_s^2 \cdot \begin{pmatrix} 1 & \rho_s \\ \rho_s & 1 \end{pmatrix}$, $|\rho_s| < 1$. The task of the encoder (transmitter) is to generate a transmitted signal that will facilitate decoding of the sequence of the $i$-th source, $S_{i,1}^m$, $i = 1, 2$, at the $i$-th decoder (receiver), denoted by $\mathrm{Rx}_i$, whose channel output at time $k$ is given by:
$$Y_{i,k} = X_k + Z_{i,k}, \qquad i = 1, 2, \tag{1}$$
for $k = 1, \ldots, n$. The noise sequences $\{Z_{1,k}, Z_{2,k}\}_{k=1}^{n}$ are i.i.d. over $k = 1, 2, \ldots, n$, with $(Z_{1,k}, Z_{2,k}) \sim \mathcal{N}(0, \mathsf{Q}_z)$, where $\mathsf{Q}_z \triangleq \sigma_z^2 \cdot \begin{pmatrix} 1 & \rho_z \\ \rho_z & 1 \end{pmatrix}$, $|\rho_z| < 1$.
Let $\mathbf{Y}_k \triangleq (Y_{1,k}, Y_{2,k})$. The encoder maps the observed pair of source sequences and the noiseless causal channel outputs obtained through the feedback links into a channel input via $X_k = f_k(S_{1,1}^m, S_{2,1}^m, \mathbf{Y}_1, \mathbf{Y}_2, \ldots, \mathbf{Y}_{k-1})$, $f_k: \mathbb{R}^{2(m+k-1)} \to \mathbb{R}$. $\mathrm{Rx}_i$, $i = 1, 2$, uses its channel output sequence $Y_{i,1}^n$ to estimate $S_{i,1}^m$ via $\hat{S}_{i,1}^m = g_i(Y_{i,1}^n)$, $g_i: \mathbb{R}^n \to \mathbb{R}^m$.
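To make the covariance structure above concrete, the following short sketch draws i.i.d. source pairs and noise pairs with the covariances $\mathsf{Q}_s$ and $\mathsf{Q}_z$; the numeric parameter values are illustrative choices of ours, not taken from the paper.

```python
# Illustrative sketch: sampling i.i.d. pairs with covariance
# sigma^2 * [[1, rho], [rho, 1]], as in Q_s and Q_z above.
# Parameter values are arbitrary examples, not from the paper.
import numpy as np

def sample_pairs(m, sigma2, rho, rng):
    """Draw m i.i.d. pairs from N(0, sigma2 * [[1, rho], [rho, 1]])."""
    cov = sigma2 * np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal(np.zeros(2), cov, size=m)

rng = np.random.default_rng(0)
S = sample_pairs(100_000, sigma2=1.0, rho=0.8, rng=rng)  # source pairs (S_{1,j}, S_{2,j})
Z = sample_pairs(100_000, sigma2=0.5, rho=0.3, rng=rng)  # noise pairs (Z_{1,k}, Z_{2,k})

emp_rho_s = np.corrcoef(S[:, 0], S[:, 1])[0, 1]  # empirical estimate of rho_s
```

With $10^5$ samples, the empirical correlation and variances match $\rho_s$, $\sigma_s^2$ closely, which is a convenient sanity check when simulating the schemes discussed later.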
We study the symmetric GBCF with parameters $(\sigma_s^2, \rho_s, \sigma_z^2, \rho_z)$, and define a $(D, E, m, n)$ code to be a collection of $n$ encoding functions $\{f_k\}_{k=1}^{n}$ and two decoding functions $g_1, g_2$, such that the MSE distortion satisfies:
$$\sum_{j=1}^{m} \mathbb{E}\big\{(S_{i,j} - \hat{S}_{i,j})^2\big\} \leq m D, \qquad 0 < D \leq \sigma_s^2, \quad i = 1, 2, \tag{2}$$
and the energy of the transmitted signals satisfies:
$$\sum_{k=1}^{n} \mathbb{E}\big\{X_k^2\big\} \leq m E. \tag{3}$$
Our objective is to characterize the minimal $E$, for a given target MSE $D$ at each user, such that for all $\epsilon > 0$, there exist $m, n$ and a $(D+\epsilon, E+\epsilon, m, n)$ code. We call this minimal value the EDT and denote it by $E(D)$.
Remark 1 (Energy constraint vs. power constraint).
The constraint (3) reflects the energy per source sample rather than per channel use. Note that by defining $P \triangleq \frac{m}{n} E$, the constraint (3) can be equivalently stated as $\frac{1}{n}\sum_{k=1}^{n} \mathbb{E}\{X_k^2\} \leq P$, which is the well-known average power constraint. Yet, since there is no constraint on the ratio between $m$ and $n$, given a finite energy $E$, when the number of channel uses per source sample $\frac{n}{m}$ goes to infinity, the classical average power constraint goes to zero. We also note that $E(D)$ can be obtained by evaluating the power-distortion tradeoff, namely, the minimal power required to achieve a given distortion at each receiver (see, e.g., [39] (Section II) for the definition of achievable distortion and power for a GBC with a given set of scenario parameters), in the limit $\frac{n}{m} \to \infty$. This approach was indeed used in [15] to derive energy-distortion exponents for GBCs without feedback. However, to the best of our knowledge, there are no tight bounds on the power-distortion tradeoff for GBCFs. Moreover, for the GBCF, we show next that directly characterizing $E(D)$ leads to significantly simpler results.

3. Lower Bound on $E(D)$

Our first result is a lower bound on $E(D)$. First, we define $R_{S_1}(D)$ as the rate-distortion function for the source variable $S_1$, and $R_{S_1,S_2}(D)$ as the rate-distortion function for jointly compressing the pair of sources $(S_1, S_2)$. Using [40] (Section III.B), we can write these functions explicitly as:
$$R_{S_1}(D) \triangleq \frac{1}{2}\log_2\frac{\sigma_s^2}{D} \tag{4a}$$
$$R_{S_1,S_2}(D) \triangleq \begin{cases} \dfrac{1}{2}\log_2\dfrac{\sigma_s^2(1+|\rho_s|)}{2D - \sigma_s^2(1-|\rho_s|)}, & D > \sigma_s^2(1-|\rho_s|) \\[6pt] \dfrac{1}{2}\log_2\dfrac{\sigma_s^4(1-\rho_s^2)}{D^2}, & D \leq \sigma_s^2(1-|\rho_s|). \end{cases} \tag{4b}$$
Note that [40] (Section III.B) uses the function $R_{S_1,S_2}(D_1, D_2)$, as it allows for a different distortion constraint for each source. For the present setup, in which the same distortion constraint is applied to both sources, $R_{S_1,S_2}(D)$ can be obtained by setting $D_1 = D_2 = D$ in [40] (Equation (8)); thus, we use the simplified notation $R_{S_1,S_2}(D)$. Next, define:
$$E_{\mathrm{lb}}(D) = \sigma_z^2 \cdot \log_e 2 \cdot \max\big\{ 2 R_{S_1}(D),\; (1+\rho_z) R_{S_1,S_2}(D) \big\}. \tag{5}$$
The lower bound on the EDT is stated in the following theorem:
Theorem 1.
The EDT $E(D)$ satisfies $E(D) \geq E_{\mathrm{lb}}(D)$.
Remark 2 (Different approaches for deriving a lower bound).
The work [27] presented a novel technique for lower bounding the EDT in a Gaussian MAC. Applying this technique to the symmetric GBCF results in the lower bound reported in Theorem 1. The work [39] presented a lower bound on the distortion achievable in sending correlated Gaussian sources over a GBC (without feedback). This bound uses the entropy power inequality while relying on the fact that GBCs are degraded. As GBCFs are not degraded, it is not clear if the technique used in [39] can be used for deriving lower bounds on the EDT for GBCFs.
Proof of Theorem 1.
As we consider a symmetric setting, in the following, we focus on the distortion at $\mathrm{Rx}_1$, and derive two different lower bounds. The first lower bound is obtained by identifying the minimal energy required in order to achieve an MSE distortion of $D$ at $\mathrm{Rx}_1$, while ignoring $\mathrm{Rx}_2$. The second lower bound is obtained by considering the transmission of both sources over a point-to-point channel with two outputs, $Y_1$ and $Y_2$. We begin with the following lemma:
Lemma 1.
If for any $\epsilon > 0$, a $(D+\epsilon, E+\epsilon, m, n)$ code exists, then the rate-distortion functions in (4) are upper bounded by:
$$R_{S_1}(D) \leq \frac{1}{m}\sum_{k=1}^{n} I(X_k; Y_{1,k}) \tag{6a}$$
$$R_{S_1,S_2}(D) \leq \frac{1}{m}\sum_{k=1}^{n} I(X_k; Y_{1,k}, Y_{2,k}). \tag{6b}$$
Proof. 
The proof is provided in Appendix A. ☐
Now, for achievable $(D, E, m, n)$, fix $\epsilon > 0$ and consider a $(D+\epsilon, E+\epsilon, m, n)$ code. For the right-hand side of (6a), we write:
$$\frac{1}{m}\sum_{k=1}^{n} I(X_k; Y_{1,k}) \stackrel{(a)}{\leq} \frac{1}{m}\sum_{k=1}^{n} \frac{1}{2}\log_2\left(1+\frac{\mathrm{var}\{X_k\}}{\sigma_z^2}\right) \stackrel{(b)}{\leq} \frac{1}{m}\sum_{k=1}^{n} \frac{1}{2}\cdot\frac{\mathrm{var}\{X_k\}}{\sigma_z^2 \cdot \log_e 2} \stackrel{(c)}{\leq} \frac{E+\epsilon}{2\sigma_z^2 \cdot \log_e 2}, \tag{7}$$
where (a) follows by considering the point-to-point channel from $X_k$ to $Y_{1,k}$ and noting that the capacity of this additive white Gaussian noise channel, subject to an input variance constraint $P_k$, is $\frac{1}{2}\log_2\big(1 + \frac{P_k}{\sigma_z^2}\big)$; thus, given $X_k$ with variance $\mathrm{var}\{X_k\}$, setting $P_k = \mathrm{var}\{X_k\}$ yields $I(X_k; Y_{1,k}) \leq \frac{1}{2}\log_2\big(1 + \frac{\mathrm{var}\{X_k\}}{\sigma_z^2}\big)$; (b) follows from changing the logarithm base and from the inequality $\log_e(1+x) \leq x$, $x \geq 0$; and (c) follows by noting that (3) implies $\sum_{k=1}^{n} \mathrm{var}\{X_k\} \leq m(E+\epsilon)$. Combining with (6a), we obtain $R_{S_1}(D+\epsilon) \leq \frac{E+\epsilon}{2\sigma_z^2 \cdot \log_e 2}$, which implies that $2\sigma_z^2 \cdot \log_e 2 \cdot R_{S_1}(D) \leq E+\epsilon$. Since this holds for every $\epsilon > 0$, we arrive at the first term on the right-hand side (RHS) of (5).
Next, the RHS of (6b) can be upper bounded by considering a Gaussian single-input multiple-output channel with two receive antennas. Then, the mutual information $I(X_k; Y_{1,k}, Y_{2,k})$ is upper bounded by the capacity of this channel subject to the variance of $X_k$:
$$\frac{1}{m}\sum_{k=1}^{n} I(X_k; Y_{1,k}, Y_{2,k}) \leq \frac{1}{m}\sum_{k=1}^{n} \frac{1}{2}\log_2\frac{|\mathsf{Q}_{\mathbf{Y}_k}|}{|\mathsf{Q}_{\mathbf{Z}_k}|}, \tag{8}$$
where (8) follows from [12] (Theorem 9.6.5), combined with [12] (Theorem 9.4.1) for jointly Gaussian random variables, and by defining $\mathbf{Z}_k = (Z_{1,k}, Z_{2,k})$ and the covariance matrices $\mathsf{Q}_{\mathbf{Y}_k} \triangleq \mathbb{E}\{\mathbf{Y}_k \mathbf{Y}_k^T\}$ and $\mathsf{Q}_{\mathbf{Z}_k} \triangleq \mathbb{E}\{\mathbf{Z}_k \mathbf{Z}_k^T\}$. To explicitly write $\mathsf{Q}_{\mathbf{Y}_k}$, we note that $\mathbb{E}\{Y_{i,k}^2\} = \mathbb{E}\{(X_k + Z_{i,k})^2\} = \mathbb{E}\{X_k^2\} + \sigma_z^2$ for $i = 1, 2$, and similarly, $\mathbb{E}\{Y_{1,k} Y_{2,k}\} = \mathbb{E}\{X_k^2\} + \rho_z \sigma_z^2$. We also have $\mathbb{E}\{Z_{i,k}^2\} = \sigma_z^2$ and $\mathbb{E}\{Z_{1,k} Z_{2,k}\} = \rho_z \sigma_z^2$. Thus, we obtain $|\mathsf{Q}_{\mathbf{Y}_k}| = 2\,\mathbb{E}\{X_k^2\}\,\sigma_z^2(1-\rho_z) + \sigma_z^4(1-\rho_z^2)$ and $|\mathsf{Q}_{\mathbf{Z}_k}| = \sigma_z^4(1-\rho_z^2)$. Plugging these expressions into (8) results in:
$$\frac{1}{m}\sum_{k=1}^{n} \frac{1}{2}\log_2\frac{|\mathsf{Q}_{\mathbf{Y}_k}|}{|\mathsf{Q}_{\mathbf{Z}_k}|} \leq \frac{1}{m}\sum_{k=1}^{n} \frac{\mathbb{E}\{X_k^2\}}{\sigma_z^2(1+\rho_z)\log_e 2} \leq \frac{E+\epsilon}{\sigma_z^2(1+\rho_z)\log_e 2}, \tag{9}$$
where the inequalities follow the same arguments as those leading to (7). Combining with (6b), we obtain $R_{S_1,S_2}(D) \leq \frac{E+\epsilon}{\sigma_z^2(1+\rho_z)\log_e 2}$, which implies that $\sigma_z^2(1+\rho_z)\log_e 2 \cdot R_{S_1,S_2}(D) \leq E+\epsilon$. Since this holds for every $\epsilon > 0$, we obtain the second term on the RHS of (5). This concludes the proof. ☐
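The bound in (5) is straightforward to evaluate. The sketch below implements the rate-distortion functions (4a) and (4b) and the lower bound $E_{\mathrm{lb}}(D)$; all numeric parameter values are illustrative choices of ours, not from the paper.

```python
# Numerical sketch of the rate-distortion functions (4a)-(4b) and the lower
# bound E_lb(D) in (5). Rates are in bits; the factor log_e(2) converts to
# the natural-logarithm energy expressions used in the text.
import numpy as np

def R_S1(D, sigma_s2):
    # (4a): 0.5 * log2(sigma_s^2 / D)
    return 0.5 * np.log2(sigma_s2 / D)

def R_S1S2(D, sigma_s2, rho_s):
    # (4b): joint rate-distortion function for the symmetric source pair
    thr = sigma_s2 * (1.0 - abs(rho_s))
    if D > thr:
        return 0.5 * np.log2(sigma_s2 * (1.0 + abs(rho_s)) / (2.0 * D - thr))
    return 0.5 * np.log2(sigma_s2**2 * (1.0 - rho_s**2) / D**2)

def E_lb(D, sigma_s2, rho_s, sigma_z2, rho_z):
    # (5): sigma_z^2 * log_e(2) * max{2 R_S1(D), (1 + rho_z) R_S1S2(D)}
    return sigma_z2 * np.log(2.0) * max(2.0 * R_S1(D, sigma_s2),
                                        (1.0 + rho_z) * R_S1S2(D, sigma_s2, rho_s))
```

For instance, with $\sigma_s^2 = \sigma_z^2 = 1$ and $\rho_s = \rho_z = 0$, the two terms inside the max coincide and $E_{\mathrm{lb}}(0.5) = \sigma_z^2 \log_e 2$.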
In the next sections, we study three achievability schemes which lead to upper bounds on $E(D)$. While these schemes have simple constructions, analyzing their achievable EDT is novel and challenging.

4. Upper Bounds on E(D) via SSCC

SSCC in multiuser scenarios carries the advantages of modularity and ease of integration with the layered architecture, which is the fundamental design architecture in many practical communications systems. In this section, we analyze the EDT of two SSCC schemes. The first scheme takes advantage of the correlation between the sources and ignores the correlation between the noise components, while the second scheme ignores the correlation between the sources and aims at utilizing the correlation between the noise components.

4.1. The SSCC-$\rho_s$ Scheme

This scheme utilizes the correlation between the sources by first jointly encoding both source sequences into a single bit stream via the source coding scheme proposed in [41] (Theorem 6); see also [40] (Theorem III.1). For a given distortion $D$, the minimum required compression bit rate is given by the rate-distortion function stated in (4b). The bit stream generated through compression is then encoded via a channel code designed for sending a common message over the GBC (without feedback), and the corresponding codeword is transmitted to both receivers. Note that the optimal code for transmitting a common message over GBCFs with $\rho_z \neq 0$ is not known; however, when $\rho_z = 0$, the optimal code for sending a common message over the GBCF is known to be the optimal point-to-point channel code, which ignores the feedback [35] (Equation (13)). Thus, SSCC-$\rho_s$ uses the correlation between the sources, but ignores the correlation between the noises at the receivers. The following theorem characterizes the minimum energy per source sample achieved by this scheme.
Theorem 2.
The SSCC-$\rho_s$ scheme achieves the following EDT:
$$E_{\mathrm{sep}}^{(\rho_s)}(D) = \begin{cases} \sigma_z^2 \log_e\dfrac{\sigma_s^2(1+|\rho_s|)}{2D - \sigma_s^2(1-|\rho_s|)}, & D > \sigma_s^2(1-|\rho_s|) \\[6pt] \sigma_z^2 \log_e\dfrac{\sigma_s^4(1-\rho_s^2)}{D^2}, & D \leq \sigma_s^2(1-|\rho_s|). \end{cases} \tag{10}$$
Proof. 
The optimal rate for jointly encoding the source sequences into a single bit stream is $R_{S_1,S_2}(D)$, given in (4b) [40] (Section III.B). Note that from this stream both source sequences can be recovered to within a distortion $D$. The encoded bit stream is then transmitted to both receivers via a capacity-achieving point-to-point channel code [12] (Theorem 10.1.1) (note that this code does not exploit the causal feedback [12] (Theorem 8.12.1)). Let $E_b^{\min,\mathrm{common}}$ denote the minimum energy per bit required for reliable transmission over the Gaussian point-to-point channel [13]. From [13] (p. 1025), we have $E_b^{\min,\mathrm{common}} = 2\sigma_z^2 \log_e 2$. As the considered scheme is based on source-channel separation, the achievable EDT is given by $E(D) = E_b^{\min,\mathrm{common}} \times R_{S_1,S_2}(D)$, where $R_{S_1,S_2}(D)$ is stated in (4b). This results in the EDT in (10). ☐
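As a sanity check on Theorem 2, the following sketch evaluates (10) and verifies the separation identity $E_{\mathrm{sep}}^{(\rho_s)}(D) = E_b^{\min,\mathrm{common}} \times R_{S_1,S_2}(D)$ used in the proof; parameter values are illustrative.

```python
# Sketch: the SSCC-rho_s EDT (10) coincides with the minimum energy per bit,
# 2 * sigma_z^2 * ln(2), times the joint rate-distortion function (4b) in bits.
import numpy as np

def R_S1S2(D, sigma_s2, rho_s):
    # (4b), in bits per source pair
    thr = sigma_s2 * (1.0 - abs(rho_s))
    if D > thr:
        return 0.5 * np.log2(sigma_s2 * (1.0 + abs(rho_s)) / (2.0 * D - thr))
    return 0.5 * np.log2(sigma_s2**2 * (1.0 - rho_s**2) / D**2)

def E_sep_rho_s(D, sigma_s2, rho_s, sigma_z2):
    # (10), written with natural logarithms as in the text
    thr = sigma_s2 * (1.0 - abs(rho_s))
    if D > thr:
        return sigma_z2 * np.log(sigma_s2 * (1.0 + abs(rho_s)) / (2.0 * D - thr))
    return sigma_z2 * np.log(sigma_s2**2 * (1.0 - rho_s**2) / D**2)

Eb_min_common = 2.0 * 1.0 * np.log(2.0)  # 2 * sigma_z^2 * log_e(2), with sigma_z^2 = 1
```

Both branches of (10) are exactly $E_b^{\min,\mathrm{common}}$ times the corresponding branch of (4b), since $2\sigma_z^2 \log_e 2 \cdot \frac{1}{2}\log_2 x = \sigma_z^2 \log_e x$.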
Remark 3 (EDT of GBC without feedback).
A basic question that may arise is about the EDT for transmitting a pair of correlated Gaussian sources over the GBC without feedback. The work [15] studied asymmetric GBCs, namely, when the noises have different variances, and used bounds derived in [39] to characterize the energy-distortion exponents. It is not clear whether the techniques used to derive the bounds in [39] can be used for the symmetric setting discussed in the current work. For the symmetric setting, the transmission of correlated Gaussian sources over the GBC has been studied in [42]. Applying the results of [42] (Footnote 2) to the current case leads to the EDT of the SSCC-$\rho_s$ scheme, which indeed does not exploit feedback.

4.2. The SSCC-$\rho_z$ Scheme

This scheme aims at utilizing the correlation between the noises at the receivers, which is available at the encoder through the feedback links, for generating the channel symbols, while avoiding the use of the correlation between the sources for compression. As in this section we focus on separation-based schemes, the correlation between the noises at the receivers can be utilized only via the channel code. Our results show that in terms of EDT (or the minimum required energy per pair of encoded bits), even the best known channel code cannot utilize the correlation between the noises at the receivers.
In the SSCC-$\rho_z$ scheme, each of the source sequences is first compressed using the optimal rate-distortion source code for scalar Gaussian sources [12] (Theorem 13.3.2). Then, the resulting compressed bit streams are sent over the GBCF using the best known channel code for transmission over the GBCF, namely the LQG channel coding scheme of [32], which generally utilizes the correlation between the noises at the receivers, as is evident from [32] (Section IV.B) and in particular from [32] (Equations (23) and (24)). The following theorem characterizes the minimum energy per source sample required by this scheme.
Theorem 3.
The SSCC-$\rho_z$ scheme achieves the EDT:
$$E_{\mathrm{sep}}^{(\rho_z)}(D) = 2\sigma_z^2 \log_e\frac{\sigma_s^2}{D}. \tag{11}$$
Proof. 
The encoder separately compresses each source sequence at rate $R_{S_1}(D)$, where $R_{S_1}(D)$ is given in (4a). Thus, from each encoded stream, the corresponding source sequence can be recovered to within a distortion $D$. Next, the two compressed bit streams are broadcast to their corresponding receivers using the LQG scheme of [32]. Let $E_b^{\min,\mathrm{LQG}}$ denote the minimum energy per pair of encoded bits required by the LQG scheme. $E_b^{\min,\mathrm{LQG}}$ is given in the following lemma:
Lemma 2.
For the symmetric setting, the minimum energy per pair of encoded bits required by the LQG scheme is given by:
$$E_b^{\min,\mathrm{LQG}} = 2\sigma_z^2 \log_e 2. \tag{12}$$
Proof. 
The proof is provided in Appendix B. ☐
Since two bit streams are transmitted, the achievable EDT is given by $E_{\mathrm{sep}}^{(\rho_z)}(D) = E_b^{\min,\mathrm{LQG}} \times 2 R_{S_1}(D)$, yielding the EDT in (11). ☐
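Similarly, the EDT of Theorem 3 can be checked against the identity $E_{\mathrm{sep}}^{(\rho_z)}(D) = E_b^{\min,\mathrm{LQG}} \times 2 R_{S_1}(D)$ from the proof; the sketch below uses illustrative parameter values.

```python
# Sketch: the SSCC-rho_z EDT (11) equals the LQG minimum energy per pair of
# encoded bits (12) times twice the marginal rate-distortion function (4a).
import numpy as np

def R_S1(D, sigma_s2):
    # (4a), in bits per source sample
    return 0.5 * np.log2(sigma_s2 / D)

def E_sep_rho_z(D, sigma_s2, sigma_z2):
    # (11): 2 * sigma_z^2 * log_e(sigma_s^2 / D); note it is independent of rho_z
    return 2.0 * sigma_z2 * np.log(sigma_s2 / D)

Eb_min_LQG = 2.0 * 1.0 * np.log(2.0)  # (12), with sigma_z^2 = 1
```

The identity holds for every $D$ because $2\sigma_z^2 \log_e 2 \cdot 2 \cdot \frac{1}{2}\log_2\frac{\sigma_s^2}{D} = 2\sigma_z^2 \log_e\frac{\sigma_s^2}{D}$.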
Remark 4 (SSCC-$\rho_z$ vs. time-sharing).
Note that $E_b^{\min,\mathrm{LQG}}$ in (12) is independent of $\rho_z$; therefore, even though in general the LQG scheme is capable of utilizing the correlation between the noises at the receivers, in terms of the minimum energy per pair of encoded bits it cannot (recall that the LQG scheme is the best known channel coding scheme for the GBCF). Therefore, $E_{\mathrm{sep}}^{(\rho_z)}(D)$ is also independent of $\rho_z$, and the SSCC-$\rho_z$ scheme does not take advantage of the correlation between the noises at the receivers to improve the minimum energy per source sample needed in the symmetric setting. Indeed, an EDT of $E_{\mathrm{sep}}^{(\rho_z)}(D)$ can also be achieved by transmitting the two bit streams via time sharing over the GBCF without using the feedback. In this context, we recall that [43] (Prop. 1) also stated that in Gaussian broadcast channels without feedback, time sharing is asymptotically optimal as the power tends to zero.
Remark 5 (Relationship between $E_{\mathrm{sep}}^{(\rho_s)}(D)$, $E_{\mathrm{sep}}^{(\rho_z)}(D)$ and $E_{\mathrm{lb}}(D)$).
We observe that $E_{\mathrm{sep}}^{(\rho_s)}(D) \leq E_{\mathrm{sep}}^{(\rho_z)}(D)$. For $D \leq \sigma_s^2(1-|\rho_s|)$, this relationship directly follows from the expressions of $E_{\mathrm{sep}}^{(\rho_s)}(D)$ and $E_{\mathrm{sep}}^{(\rho_z)}(D)$. For $D > \sigma_s^2(1-|\rho_s|)$, the above relationship holds if the polynomial $q(D) = D^2(1+|\rho_s|) - 2\sigma_s^2 D + \sigma_s^4(1-|\rho_s|)$ is non-positive; this is satisfied since the roots of $q(D)$ are $\sigma_s^2\frac{1-|\rho_s|}{1+|\rho_s|}$ and $\sigma_s^2$, and hence $q(D) \leq 0$ over the range $\sigma_s^2(1-|\rho_s|) < D \leq \sigma_s^2$. We thus conclude that it is preferable to use the correlation between the sources rather than the correlation between the noise components. We further note that as $D \to 0$, the gap between $E_{\mathrm{sep}}^{(\rho_s)}(D)$ and $E_{\mathrm{sep}}^{(\rho_z)}(D)$ is bounded. On the other hand, as $D \to 0$, the gap between $E_{\mathrm{sep}}^{(\rho_s)}(D)$ and $E_{\mathrm{lb}}(D)$ is not bounded (note that when $\rho_z = 0$, the RHS of (5) is given by $2\sigma_z^2 \cdot \log_e 2 \cdot R_{S_1}(D)$).
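The ordering claimed in this remark can also be verified numerically; the sketch below sweeps the distortion and checks that the lower bound (5), the SSCC-$\rho_s$ EDT (10) and the SSCC-$\rho_z$ EDT (11) are ordered as stated. Parameter values are illustrative choices.

```python
# Sketch: numerical check of Remark 5, E_lb <= E_sep_rho_s <= E_sep_rho_z.
import numpy as np

def R_S1(D, s2):
    return 0.5 * np.log2(s2 / D)                        # (4a)

def R_S1S2(D, s2, rho_s):
    thr = s2 * (1.0 - abs(rho_s))                       # (4b)
    if D > thr:
        return 0.5 * np.log2(s2 * (1.0 + abs(rho_s)) / (2.0 * D - thr))
    return 0.5 * np.log2(s2**2 * (1.0 - rho_s**2) / D**2)

def E_sep_rho_s(D, s2, rho_s, z2):
    return 2.0 * z2 * np.log(2.0) * R_S1S2(D, s2, rho_s)  # equivalent form of (10)

def E_sep_rho_z(D, s2, z2):
    return 2.0 * z2 * np.log(s2 / D)                      # (11)

def E_lb(D, s2, rho_s, z2, rho_z):
    return z2 * np.log(2.0) * max(2.0 * R_S1(D, s2),      # (5)
                                  (1.0 + rho_z) * R_S1S2(D, s2, rho_s))

s2, z2, rho_s, rho_z = 1.0, 1.0, 0.7, 0.4
Ds = np.linspace(0.01, s2, 200)
ok = all(E_lb(D, s2, rho_s, z2, rho_z)
         <= E_sep_rho_s(D, s2, rho_s, z2) + 1e-9
         <= E_sep_rho_z(D, s2, z2) + 2e-9
         for D in Ds)
```

The check passes over the whole sweep; the inequality $E_{\mathrm{sep}}^{(\rho_s)}(D) \leq E_{\mathrm{sep}}^{(\rho_z)}(D)$ reflects the fact that the joint rate $R_{S_1,S_2}(D)$ never exceeds the sum rate $2R_{S_1}(D)$.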
Remark 6 (Relevance to more than two users).
The lower bound presented in Theorem 1 can be extended to the case of K > 2 sources using the results of [41] (Theorem 1) and [44]. The upper bound of Theorem 2 can also be extended in a relatively simple manner to K > 2 sources, again, using [41] (Theorem 1). The upper bound in Theorem 3 can be extended to K > 2 sources by using the LQG scheme for K > 2 [32] (Theorem 1), or by using time-sharing.

5. Upper Bound on E(D) via the OL Scheme

Next, we derive a third upper bound on E(D) by applying uncoded JSCC transmission based on the OL scheme [36] (Section 3). This scheme sequentially transmits the source pairs (S_{1,j}, S_{2,j}), j = 1, 2, ..., m, without source coding. Thus, the delay introduced by the OL scheme is significantly lower than the delay introduced by the schemes discussed in Section 4. We note that the OL scheme is designed for a fixed P = E/n, and from condition (3) we obtain P = E/n ≥ (1/n)Σ_{k=1}^{n} E{X_k²}. An upper bound on E(D) can now be obtained by first calculating the minimal number of channel uses required by the OL scheme to achieve the target distortion D, which we denote by K_OL(P, D), and then determining the required energy via Σ_{k=1}^{K_OL(P,D)} E{X_k²}.

5.1. JSCC Based on the OL Scheme

In the OL scheme, each receiver recursively estimates its intended source samples. At each time index, the transmitter uses the feedback to compute the estimation errors at the receivers at the previous time index, and transmits a linear combination of these errors. The scheme is terminated after K OL ( P , D ) channel uses, when the target MSE D is achieved at each receiver.
Setup and Initialization: Let S^_{i,k} be the estimate of S_i at Rx_i after receiving the k-th channel output Y_{i,k}, let ϵ_{i,k} ≜ S^_{i,k} - S_i be the estimation error after k transmissions, and define ϵ^_{i,k-1} ≜ S^_{i,k-1} - S^_{i,k}. It follows that ϵ_{i,k} = ϵ_{i,k-1} - ϵ^_{i,k-1}. Next, define α_{i,k} ≜ E{ϵ_{i,k}²} to be the MSE at Rx_i after k transmissions, ρ_k ≜ E{ϵ_{1,k}ϵ_{2,k}}/√(α_{1,k}α_{2,k}) to be the correlation between the estimation errors after k transmissions, and Ψ_k ≜ √(P/(2(1 + |ρ_k|))). For initialization, set S^_{i,0} = 0 and ϵ_{i,0} = -S_i, i = 1, 2; thus, ρ_0 = ρ_s. Note that for this setup and initialization, we have α_{1,k} = α_{2,k} ≜ α_k.
Encoding: At the k-th channel use the transmitter sends X_k = (Ψ_{k-1}/√(α_{k-1}))(ϵ_{1,k-1} + ϵ_{2,k-1} · sgn(ρ_{k-1})), and the corresponding channel outputs are given by (1).
Decoding: Each receiver computes ϵ^_{i,k-1}, i = 1, 2, based only on Y_{i,k}, via the linear MMSE estimate ϵ^_{i,k-1} = (E{ϵ_{i,k-1}Y_{i,k}}/E{Y_{i,k}²}) · Y_{i,k}, which can be explicitly computed as in [28] (p. 669). Then, similarly to [45] (Equation (7)), the estimate of the source S_i is given by S^_{i,k} = -Σ_{m=1}^{k} ϵ^_{i,m-1}. Let Θ ≜ P + σ_z²(2 - ρ_z) and ν_z ≜ σ_z⁴(1 - ρ_z)². The instantaneous MSE α_k is given by the recursive expression [28] (Equation (5)):
α_k = α_{k-1} · (σ_z² + Ψ_{k-1}²(1 - ρ_{k-1}²))/(P + σ_z²),
where the recursive expression for ρ k is given by [28] (Equation (7)):
ρ_k = [(ρ_z σ_z² Θ + ν_z)ρ_{k-1} - Ψ_{k-1}² Θ (1 - ρ_{k-1}²) sgn(ρ_{k-1})] / [(P + σ_z²)(σ_z² + Ψ_{k-1}²(1 - ρ_{k-1}²))].
Remark 7 (Initialization of the OL scheme).
Note that in the above OL scheme we do not apply the initialization procedure described in [28] (p. 669), as it optimizes the achievable rate rather than the distortion. Instead, we set ϵ_{i,0} = -S_i and ρ_0 = ρ_s, thereby taking advantage of the correlation between the sources. Moreover, in Appendix C it is explicitly shown that, in the low SNR regime, the impact of the correlation between the sources on the distortion at the receivers persists over a large number of channel transmissions; thus, the proposed initialization clearly exploits the correlation between the sources. We further note that [36] (Section III.B) considered several initialization methods for the OL scheme and showed that setting ϵ_{i,0} = -S_i and ρ_0 = ρ_s outperforms the other studied initialization approaches.
Let E_OL,min(D) denote the minimal energy per source pair required to achieve MSE D at each receiver using the OL scheme. Since in the OL scheme E{X_k²} = P for every k, we have E_OL,min(D) = min_P P · K_OL(P, D). From (13) one observes that the MSE at time instant k depends on ρ_{k-1} and on the MSE at time k - 1. Due to the non-linear recursive expression for ρ_k in (14), it is very complicated to obtain an explicit analytical characterization of K_OL(P, D). For any fixed P, we can upper bound E_OL,min(D), and therefore E(D), by upper bounding P · K_OL(P, D). In [36] (Theorem 1) we showed that K_OL(P, D) ≤ 2((P + σ_z²)/P) log(σ_s²/D), which leads to the upper bound E_OL,min(D) ≤ min_P 2(P + σ_z²) log(σ_s²/D); as P → 0, this bound approaches E_sep^(ρ_z)(D). However, when P → 0, the upper bound K_OL(P, D) ≤ 2((P + σ_z²)/P) log(σ_s²/D) is not tight. This can be seen by considering a numerical example: let σ_s² = 1, ρ_s = 0.9, σ_z² = 1, ρ_z = 0.7, D = 0.1, and consider two possible values for P: P_1 = 10^-4 and P_2 = 10^-6. Via numerical simulations one can find that K_OL(P_1, D) = 38,311, while the upper bound is 46,058. For P_2 we have K_OL(P_2, D) = 3,830,913, while the upper bound is 4,605,176. Thus, the gap between K_OL(P, D) and the above bound increases as P decreases. For this reason, in the next subsection we derive a tighter upper bound on K_OL(P, D) whose ratio to K_OL(P, D) approaches 1 as P → 0. This bound is then used to derive a tighter upper bound on E_OL,min(D).
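The recursions (13) and (14) are straightforward to iterate numerically. The following sketch counts the channel uses the OL scheme needs to reach a target MSE; the sign conventions and the readings of Θ and ν_z below are our reconstruction of the display equations above (all function and variable names are ours), and we take D = 0.1, the value consistent with the reported bounds 46,058 and 4,605,176:

```python
import math

def ol_channel_uses(P, D, sigs2, sigz2, rho_s, rho_z, max_iter=10**8):
    # Iterate the MSE recursion (13) and the correlation recursion (14)
    # until the MSE alpha_k drops to the target distortion D.
    alpha, rho = sigs2, rho_s                 # initialization: eps_{i,0} = -S_i
    Theta = P + sigz2 * (2.0 - rho_z)         # our reading of Theta
    nu_z = sigz2**2 * (1.0 - rho_z)**2        # our reading of nu_z
    k = 0
    while alpha > D and k < max_iter:
        Psi2 = P / (2.0 * (1.0 + abs(rho)))   # Psi_{k-1}^2
        denom = sigz2 + Psi2 * (1.0 - rho**2)
        rho = ((rho_z * sigz2 * Theta + nu_z) * rho
               - Psi2 * Theta * (1.0 - rho**2) * math.copysign(1.0, rho)) \
              / ((P + sigz2) * denom)         # Equation (14)
        alpha = alpha * denom / (P + sigz2)   # Equation (13)
        k += 1
    return k

K1 = ol_channel_uses(1e-4, 0.1, sigs2=1.0, sigz2=1.0, rho_s=0.9, rho_z=0.7)
bound1 = 2 * (1e-4 + 1.0) / 1e-4 * math.log(1.0 / 0.1)  # [36] (Theorem 1)
print(K1, bound1)
```

For P = 10^-4 this should reproduce a count near K_OL(P_1, D) = 38,311, visibly below the bound of roughly 4.6 × 10^4, in line with the gap discussed above.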

5.2. A New Upper Bound on K OL ( P , D )

Following ideas from [1] (Theorem 7), we assume a fixed σ z 2 and approximate the recursive relationships for ρ k and α k given in (13) and (14) for small values of P σ z 2 . We note that while [1] (Theorem 7) obtained only asymptotic expressions for ρ k and α k for P σ z 2 0 , in the following we derive tight bounds for these quantities and obtain an upper bound on K OL ( P , D ) which is valid for small values of P σ z 2 > 0 . Then, letting P σ z 2 0 , the derived upper bound on K OL ( P , D ) yields an upper bound on E OL , min ( D ) , and therefore on E ( D ) .
First, define ψ_1 ≜ 2|ρ_z| + 5(1 - ρ_z), ψ_2 ≜ min{2 - ρ_z, 2(1 - ρ_z)}/(2σ_z²), and ψ_3 ≜ max{(1 - ρ_z)/(2 - ρ_z)², (1 + ρ_z)/(4(1 - ρ_z)²)}. We further define the positive quantities B_1(P) and B_2(P):
B_1(P) ≜ [(8 + ψ_1)P³ + 24σ_z²P² + 12σ_z⁴ψ_1P + 4σ_z⁶(4σ_z²ψ_1 + 8)]/(8σ_z^10) · P²,
B_2(P) ≜ ((P + 2σ_z²)/(2σ_z⁶)) · P²,
and finally, we define the quantities:
ρ̄(P) ≜ P(3 - ρ_z)²/(8σ_z²) + B_1(P),
F_1(P) ≜ [ρ_s/(Pψ_2 - B_1(P))] · ψ_3 · [P(3 - ρ_z)²/(8σ_z²) + B_1(P)]²,
F 2 ( P ) ρ s P ψ 2 B 1 ( P ) B 1 ( P ) ψ 2 2 σ z 2 ,
F 3 ( P ) ρ s P ψ 2 B 1 ( P ) · ( 3 ρ z ) 2 P 8 σ z 2 + B 1 ( P ) 2 ( 1 ρ z ) 2 + B 1 ( P ) 1 ρ z + B 2 ( P ) ,
F_4(P) ≜ (P/(2σ_z²))(1 + ρ̄(P)) + (2σ_z²/P)B_2(P),
ρ_lb(P, D) ≜ 2 - ρ_z + (σ_s²/D)(ρ_z + |ρ_s| - 2)e^{F_3(P)},
D_th^ub ≜ σ_s²(2 - ρ_z - |ρ_s|)e^{F_3(P)}/(2 - ρ_z),
D_th^lb ≜ σ_s²(2 - ρ_z - |ρ_s|)e^{-F_3(P)}/(2 - ρ_z).
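To make the asymptotics used later concrete, the following sketch evaluates B_1(P), B_2(P) and ρ̄(P) as we read them from the definitions above and checks numerically that B_1, B_2 ∈ O(P²) while ρ̄ ∈ O(P); the function names are ours:

```python
def psi1(rho_z):
    # psi_1 = 2|rho_z| + 5(1 - rho_z)
    return 2.0 * abs(rho_z) + 5.0 * (1.0 - rho_z)

def B1(P, sigz2, rho_z):
    # Our reading of (15); note the overall P^2 factor.
    p1 = psi1(rho_z)
    num = ((8.0 + p1) * P**3 + 24.0 * sigz2 * P**2
           + 12.0 * sigz2**2 * p1 * P + 4.0 * sigz2**3 * (4.0 * sigz2 * p1 + 8.0))
    return num / (8.0 * sigz2**5) * P**2

def B2(P, sigz2):
    return (P + 2.0 * sigz2) / (2.0 * sigz2**3) * P**2

def rho_bar(P, sigz2, rho_z):
    return P * (3.0 - rho_z)**2 / (8.0 * sigz2) + B1(P, sigz2, rho_z)

# Shrinking P by 10x should shrink B1 and B2 by ~100x, but rho_bar by only ~10x.
sigz2, rho_z = 1.0, 0.7
print(B1(1e-4, sigz2, rho_z) / B1(1e-5, sigz2, rho_z))            # ~100
print(rho_bar(1e-4, sigz2, rho_z) / rho_bar(1e-5, sigz2, rho_z))  # ~10
```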
For small values of P σ z 2 , the following theorem provides a tight upper bound on K OL ( P , D ) :
Theorem 4.
Let P satisfy the conditions ρ̄(P) + (2σ_z²/P)B_2(P) < 1 and B_1(P) < Pψ_2. Then the OL scheme achieves MSE D at each receiver within K_OL(P, D) ≤ K_OL^ub(P, D) channel uses, where K_OL^ub(P, D) is given by:
K OL ub ( P , D ) = (17a) 2 σ z 2 P ( 3 ρ z ) log ( 2 ρ z ρ lb ( P , D ) ) ( 1 + | ρ s | ) ( 2 ρ z | ρ s | ) ( 1 + ρ lb ( P , D ) ) + 2 σ z 2 P F 1 ( P ) + F 2 ( P ) , D > D th ub , log D ( 2 ρ z ρ ¯ ( P ) ) σ s 2 ( 2 ρ z | ρ s | ) F 3 ( P ) 1 F 4 ( P ) (17b) + 2 σ z 2 P ( 3 ρ z ) log ( 2 ρ z ) ( 1 + | ρ s | ) 2 ρ z | ρ s | + 2 σ z 2 P F 1 ( P ) + F 2 ( P ) , D < D th lb .
Proof outline.
Let ρ s 0 (otherwise replace S 1 with S 1 ). From [28] (p. 669) it follows that ρ k monotonically decreases with k until it crosses zero. Let K th min { k N : ρ k + 1 < 0 } be the largest time index k for which ρ k 0 . In the proof of Theorem 4 we show that, for sufficiently small P σ z 2 , | ρ k | ρ ¯ ( P ) , k K th . Hence, ρ k decreases until time K th and then it has a bounded magnitude (larger than zero). This implies that the behavior of α k is different in the regions k K th and k > K th . Let D ˜ th be the MSE after K th channel uses. We first derive upper and lower bounds on D ˜ th , denoted by D th ub and D th lb , respectively. Consequently, we arrive at the two cases in Theorem 4: (17a) corresponds to the case of K OL ( P , D ) < K th , while (17b) corresponds to the case K OL ( P , D ) > K th . The detailed proof is provided in Appendix C. ☐
Remark 8 (Bandwidth used by the OL scheme).
Note that as P → 0, K_OL^ub increases to infinity. Since K_OL/K_OL^ub → 1 as P → 0, it follows that K_OL → ∞ as well. Assuming the source samples are generated at a fixed rate, this implies that the bandwidth used by the OL scheme increases to infinity as P → 0.
Remark 9 (Theorem 4 holds for non-asymptotic values of P).
Note that the conditions on P in Theorem 4 can be written as P < P_th, with P_th depending explicitly on σ_z² and ρ_z. Plugging B_1(P) from (15) into the condition B_1(P) < Pψ_2, we obtain the condition: (8 + ψ_1)P⁴ + 24σ_z²P³ + 12σ_z⁴ψ_1P² + 4σ_z⁶(4σ_z²ψ_1 + 8)P < 8ψ_2σ_z^10. In this formulation, the coefficients of P^m, m = 1, 2, 3, 4, are all positive. Therefore, the left-hand side (LHS) is monotonically increasing in P, and since 8ψ_2σ_z^10 is constant, the condition B_1(P) < Pψ_2 is satisfied if P < P_th,2, for some threshold P_th,2. Following similar arguments, the same conclusion holds for the condition ρ̄(P) + (2σ_z²/P)B_2(P) < 1, with some threshold P_th,1 instead of P_th,2. Thus, setting P_th = min{P_th,1, P_th,2}, the conditions in Theorem 4 hold for all P < P_th.
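Since the LHS of the polynomial condition is monotonically increasing in P, the threshold P_th,2 can be located by bisection. A sketch under our reading of B_1(P) and ψ_2 (all function names are ours, and the numerical threshold below is illustrative, not a value from the paper):

```python
def psi1(rho_z):
    return 2.0 * abs(rho_z) + 5.0 * (1.0 - rho_z)

def psi2(rho_z, sigz2):
    return min(2.0 - rho_z, 2.0 * (1.0 - rho_z)) / (2.0 * sigz2)

def B1(P, sigz2, rho_z):
    p1 = psi1(rho_z)
    num = ((8.0 + p1) * P**3 + 24.0 * sigz2 * P**2
           + 12.0 * sigz2**2 * p1 * P + 4.0 * sigz2**3 * (4.0 * sigz2 * p1 + 8.0))
    return num / (8.0 * sigz2**5) * P**2

def p_th2(sigz2, rho_z, iters=100):
    # g(P) = B1(P) - psi2 * P is negative for small P (B1 is O(P^2)) and,
    # since all polynomial coefficients are positive, crosses zero exactly once.
    g = lambda P: B1(P, sigz2, rho_z) - psi2(rho_z, sigz2) * P
    lo, hi = 1e-12, 1.0
    while g(hi) < 0.0:
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(p_th2(1.0, 0.7))   # threshold below which B1(P) < P * psi2 holds
```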

5.3. An Upper Bound on E OL , min ( D )

Next, we let P 0 , and use K OL ub ( P , D ) derived in Theorem 4 to obtain an upper bound on E OL , min ( D ) , and therefore on E ( D ) . This upper bound is stated in the following theorem.
Theorem 5.
Let D_th ≜ σ_s²(2 - ρ_z - |ρ_s|)/(2 - ρ_z). Then, E_OL,min(D) ≤ E_OL(D), where
E_OL(D) = (2σ_z²/(3 - ρ_z)) log[σ_s²(1 + |ρ_s|)/(D + (2 - ρ_z)(D - σ_s²) + σ_s²|ρ_s|)], for D ≥ D_th;
E_OL(D) = 2σ_z²{log[(2 - ρ_z - |ρ_s|)σ_s²/((2 - ρ_z)D)] + (1/(3 - ρ_z)) log[(2 - ρ_z)(1 + |ρ_s|)/(2 - ρ_z - |ρ_s|)]}, for D < D_th.
Proof. 
We evaluate P · K_OL^ub(P, D) as P → 0. Note that B_i(P) ∈ O(P²), i = 1, 2, which implies that F_j(P) ∈ O(P), j = 1, 2, 3, 4. To see why this holds, consider, for example, F_1(P):
F_1(P) = [ρ_sψ_3/(Pψ_2 - B_1(P))] · [P(3 - ρ_z)²/(8σ_z²) + B_1(P)]², where we denote the first factor by F_1^(a)(P) and the second by F_1^(b)(P).
Since ρ_s, ψ_2, and ψ_3 are constants, and since B_1(P) ∈ O(P²), we have F_1^(a)(P) ∈ O(1/P). Since (3 - ρ_z)²/(8σ_z²) is constant, we have F_1^(b)(P) ∈ O(P²). Taking the product of these two asymptotics we conclude that F_1(P) ∈ O(P).
Now, for D D th we bound the minimum E ( D ) as follows: First, for D D th ub defined in (16g), we multiply both sides of (17a) by P. As F 1 ( P ) , F 2 ( P ) O ( P ) , then, as P 0 , we obtain:
P · K OL ub ( P , D ) = 2 σ z 2 3 ρ z log ( 2 ρ z ρ lb ( P , D ) ) ( 1 + | ρ s | ) ( 2 ρ z | ρ s | ) ( 1 + ρ lb ( P , D ) ) + O ( P ) ( a ) P 0 2 σ z 2 3 ρ z log σ s 2 ( 1 + | ρ s | ) D + ( 2 ρ z ) ( D σ s 2 ) + σ s 2 · | ρ s | ,
where (a) follows from (16f) by noting that F 3 ( P ) O ( P ) , and therefore, when P 0 , F 3 ( P ) 0 . This implies that as P 0 we have ρ lb ( P , D ) 2 ρ z + σ s 2 D ρ z + | ρ s | 2 . Finally, note that for P 0 we have D th ub D th .
Next, for D < D th we bound the minimum E ( D ) by first noting that since ρ ¯ ( P ) O ( P ) and 2 σ z 2 P B 2 ( P ) O ( P ) , then F 4 ( P ) O ( P ) . Now, for D < D th lb defined in (16h), multiplying both sides of (17b) by P, we obtain:
P · K OL ub ( P , D ) = 2 σ z 2 log D ( 2 ρ z ρ ¯ ( P ) ) σ s 2 ( 2 ρ z | ρ s | ) + O ( P ) · 1 1 + O ( P ) + 2 σ z 2 3 ρ z log ( 2 ρ z ) ( 1 + | ρ s | ) 2 ρ z | ρ s | + O ( P ) ( a ) P 0 2 σ z 2 log ( 2 ρ z | ρ s | ) σ s 2 ( 2 ρ z ) D + 1 3 ρ z log ( 2 ρ z ) ( 1 + | ρ s | ) 2 ρ z | ρ s | ,
where (a) follows from the fact that ρ ¯ ( P ) O ( P ) , see (16a). This concludes the proof. ☐
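The piecewise limit (18) can be evaluated directly; the sketch below implements our reading of the two branches (natural logarithms assumed; function names are ours). Continuity at D_th is a useful sanity check, and for σ_s² = σ_z² = 1, ρ_s = 0.9, ρ_z = 0.7 we get E_OL(0.1) ≈ 3.83, consistent with P · K_OL ≈ 3.83 in the numerical example of Section 5.1:

```python
import math

def E_OL(D, sigs2, sigz2, rho_s, rho_z):
    # Our reading of the closed-form upper bound (18); natural logs.
    r = abs(rho_s)
    D_th = sigs2 * (2.0 - rho_z - r) / (2.0 - rho_z)
    if D >= D_th:
        return (2.0 * sigz2 / (3.0 - rho_z)) * math.log(
            sigs2 * (1.0 + r) / (D + (2.0 - rho_z) * (D - sigs2) + sigs2 * r))
    return 2.0 * sigz2 * (
        math.log((2.0 - rho_z - r) * sigs2 / ((2.0 - rho_z) * D))
        + math.log((2.0 - rho_z) * (1.0 + r) / (2.0 - rho_z - r)) / (3.0 - rho_z))

print(E_OL(0.1, 1.0, 1.0, 0.9, 0.7))   # ~3.83 energy units per source pair
```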
Remark 10 (Performance for extreme correlation values).
Similarly to Remark 5, as D → 0, the gap between E_OL(D) and E_lb(D) is not bounded, which is in contrast to the situation for the OL-based JSCC for the Gaussian MAC with feedback, cf. [1] (Remark 6). When ρ_s = 0 we obtain E_OL(D) = E_sep^(ρ_s)(D) = E_sep^(ρ_z)(D) for all 0 ≤ D ≤ σ_s², which follows as the sources are independent. When |ρ_s| → 1 and ρ_z → 1, then E_OL(D) ≈ E_lb(D) ≈ σ_z² log(σ_s²/D); in this case we also have E_sep^(ρ_s)(D) ≈ E_lb(D) and E_sep^(ρ_z)(D) ≈ 2E_OL(D).
Remark 11 (Comparison of the OL scheme and the separation-based schemes).
From (10) and (18), it follows that if D < σ s 2 ( 1 | ρ s | ) then E OL ( D ) E sep ( ρ s ) ( D ) is given by:
E OL ( D ) E sep ( ρ s ) ( D ) = 2 σ z 2 log ( 2 ρ z | ρ s | ) σ s 2 2 ρ z + 1 3 ρ z log ( 2 ρ z ) ( 1 + | ρ s | ) 2 ρ z | ρ s | 1 2 log σ s 4 ( 1 ρ s 2 ) .
Note that E OL ( D ) E sep ( ρ s ) ( D ) is independent of D in this range. Similarly, from (11) and (18) it follows that if D < D th then E sep ( ρ z ) ( D ) E OL ( D ) is independent of D and is given by:
E sep ( ρ z ) ( D ) E OL ( D ) = 2 σ z 2 log 2 ρ z 2 ρ z | ρ s | + 1 3 ρ z log 2 ρ z | ρ s | ( 2 ρ z ) ( 1 + | ρ s | ) .
Note that in both cases the gap decreases as | ρ s | decreases, which follows as the scenario approaches the transmission of independent sources. The gap also increases as ρ z decreases.
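Both gaps are independent of D in the stated ranges, so each can be evaluated once per pair (ρ_s, ρ_z). The sketch below implements our reading of (19) and (20) (function names are ours) and reproduces the values quoted in Section 6 (≈3.173 and ≈0.744 at ρ_s = 0.99, ρ_z = 0.5, σ_s² = σ_z² = 1):

```python
import math

def gap_ol_minus_sscc_rhos(sigs2, sigz2, rho_s, rho_z):
    # Our reading of (19): E_OL(D) - E_sep^(rho_s)(D), for D < sigs2*(1 - |rho_s|).
    r = abs(rho_s)
    return 2.0 * sigz2 * (
        math.log((2.0 - rho_z - r) * sigs2 / (2.0 - rho_z))
        + math.log((2.0 - rho_z) * (1.0 + r) / (2.0 - rho_z - r)) / (3.0 - rho_z)
        - 0.5 * math.log(sigs2**2 * (1.0 - rho_s**2)))

def gap_sscc_rhoz_minus_ol(sigz2, rho_s, rho_z):
    # Our reading of (20): E_sep^(rho_z)(D) - E_OL(D), for D < D_th.
    r = abs(rho_s)
    return 2.0 * sigz2 * (
        math.log((2.0 - rho_z) / (2.0 - rho_z - r))
        + math.log((2.0 - rho_z - r) / ((2.0 - rho_z) * (1.0 + r))) / (3.0 - rho_z))

print(gap_ol_minus_sscc_rhos(1.0, 1.0, 0.99, 0.5))  # ~3.173
print(gap_sscc_rhoz_minus_ol(1.0, 0.99, 0.5))       # ~0.744
```

Both functions vanish as ρ_s → 0, matching the observation that the schemes coincide for independent sources.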
Remark 12 (Uncoded JSCC via the LQG scheme).
In this work, we do not include an analysis of the EDT of JSCC using the LQG scheme, E LQG ( D ) , because JSCC-LQG does not lend itself to a concise analytical treatment, and, moreover, our numerical study demonstrated that, in terms of EDT, JSCC-LQG is generally inferior to JSCC-OL. To elaborate on these aspects, we first recall that the LQG scheme of [32] was already applied to the transmission of correlated Gaussian sources over GBCFs in [36] (Section IV). It follows from the derivations in [36] that E LQG ( D ) is expressed as the solution of an optimization problem which does not have an explicit analytic solution. It is also shown in [36] that, for a finite duration of transmission and low transmission power, when the covariance matrix of the sources is different from the covariance matrix of the steady-state of the LQG scheme, then the JSCC-OL scheme outperforms the JSCC-LQG scheme, which stands in contrast to the results of [33] for the channel coding problem. This surprising conclusion carries over to the EDT as well. Indeed, using the results of [36] we carried out an extensive numerical study of JSCC-LQG, the outcome of which was that the JSCC-LQG scheme of [36] (Section IV) achieves roughly the same minimum energy as the SSCC- ρ z scheme. Since in Section 6 we show that the JSCC-OL scheme outperforms the SSCC- ρ z scheme in terms of the EDT, we decided to exclude the JSCC-LQG scheme from the numerical comparisons reported in Section 6.

6. Numerical Results

In the following, we numerically compare E lb ( D ) , E sep ( ρ s ) ( D ) , E sep ( ρ z ) ( D ) and E OL ( D ) . We set σ s 2 = σ z 2 = 1 and consider several values of ρ z and ρ s . Figure 2a depicts E lb ( D ) , E sep ( ρ s ) ( D ) , E sep ( ρ z ) ( D ) and E OL ( D ) for ρ z = 0 . 5 , and for two values of ρ s : ρ s = 0 . 2 and ρ s = 0 . 9 . As E sep ( ρ z ) ( D ) is not a function of ρ s , it is plotted only once. It can be observed that when ρ s = 0 . 2 , then E sep ( ρ s ) ( D ) , E sep ( ρ z ) ( D ) and E OL ( D ) are almost the same. This follows because when the correlation between the sources is low, the gain from utilizing this correlation is also low. Furthermore, when ρ s = 0 . 2 the gap between the lower bound and the upper bounds is evident. On the other hand, when ρ s = 0 . 9 , both SSCC- ρ s and OL significantly improve upon SSCC- ρ z . This follows as SSCC- ρ z does not take advantage of the correlation among the sources. It can further be observed that when the distortion is low, there is a small gap between OL and SSCC- ρ s , while when the distortion is high, OL and SSCC- ρ s require roughly the same amount of energy per source-pair sample. This is also supported by Figure 2c. We conclude that as the SSCC- ρ s scheme encodes over long sequences of source samples, it better exploits the correlation among the sources compared to the OL scheme.
Figure 2b depicts E_lb(D), E_sep^(ρ_s)(D), E_sep^(ρ_z)(D) and E_OL(D) vs. D, for ρ_s = 0.8, and for ρ_z ∈ {-0.9, 0.9}. As E_sep^(ρ_s)(D) and E_sep^(ρ_z)(D) are not functions of ρ_z, we plot them only once. It can be observed that when ρ_z = 0.9, E_lb(D), E_sep^(ρ_s)(D) and E_OL(D) are very close to each other, as was analytically concluded in Remark 10. On the other hand, for ρ_z = -0.9 the gap between the bounds is large.
Note that while analytically comparing E sep ( ρ s ) ( D ) , E sep ( ρ z ) ( D ) and E OL ( D ) for any D is difficult, our numerical simulations suggest the relationship E sep ( ρ s ) ( D ) E OL ( D ) E sep ( ρ z ) ( D ) , for all values of D , ρ s , ρ z . For example, Figure 2c depicts the difference E OL ( D ) E sep ( ρ s ) ( D ) for ρ z = 0 . 5 , and for all values of D and | ρ s | . It can be observed that for low values of | ρ s | , or for high values of D, E sep ( ρ s ) ( D ) E OL ( D ) . On the other hand, when the correlation among the sources is high and the distortion is low, then the SSCC- ρ s scheme improves upon the OL scheme. When D < σ s 2 ( 1 | ρ s | ) we can use (19) to analytically compute the gap between the energy requirements of the two schemes. For instance, at ρ s = 0 . 99 and D < 0 . 02 the gap is approximately 3.173. Figure 2d depicts the difference E sep ( ρ z ) ( D ) E OL ( D ) for ρ z = 0 . 5 . It can be observed that larger | ρ s | results in a larger gap. Again we can use (20) to analytically compute the gap between the energy requirements of the two schemes for a certain range of distortion values: At ρ s = 0 . 99 and D < 0 . 34 , the gap is approximately 0.744. Finally, as stated in Remark 12, the LQG scheme achieves approximately the same minimum energy as the SSCC- ρ z scheme, hence, OL is expected to outperform LQG. This is in accordance with [36] (Section VI), which shows that for low values of P, OL outperforms LQG, but, is in contrast to the channel coding problem in which the LQG scheme of [32] is known to achieve higher rates compared to the OL scheme of [28].

7. Conclusions and Future Work

In this work, we studied the EDT for sending correlated Gaussian sources over GBCFs, without constraining the source-channel bandwidth ratio. In particular, we first derived a lower bound on the minimum energy per source pair sample using information theoretic tools and then presented upper bounds on the minimum energy per source pair sample by analyzing three transmission schemes. The first scheme, SSCC- ρ s , jointly encodes the source sequences into a single bit stream, while the second scheme, SSCC- ρ z , separately encodes each of the sequences, thus, it does not exploit the correlation among the sources. We further showed that the LQG channel coding scheme of [32] achieves the same minimum energy-per-bit as orthogonal transmission, and therefore, in terms of the minimum energy-per-bit, it does not take advantage of the correlation between the noises at the receivers. We also concluded that SSCC- ρ s outperforms SSCC- ρ z .
The third scheme analyzed is the OL scheme for which we first derived an upper bound on the number of channel uses required to achieve a target distortion pair, which, in the limit P 0 , leads to an upper bound on the minimum energy per source pair sample. Numerical results indicate that SSCC- ρ s outperforms the OL scheme, as well. On the other hand, the gap between the energy requirements of the two schemes is rather small. We note that the SSCC- ρ s scheme implements coding over blocks of samples of source pairs, which introduces high computational complexity, large delays and requires a large amount of storage space. On the other hand, the OL scheme applies linear and uncoded transmission to each source pair sample separately, which requires low computational complexity, short delays and limited storage space. Our results demonstrate that the OL scheme provides an attractive alternative for energy efficient transmission over GBCFs.
Finally, we note that for the Gaussian MAC with feedback, OL-based JSCC is very close to the lower bound, cf. [1] (Figure 4), while, as indicated in Section 6, for the GBCF, the gap between the OL-JSCC and the lower bound is larger. This difference is also apparent in the channel coding problem for GBCFs, namely between the achievable rate region of the OL scheme and the tightest outer bound (note that while the OL strategy achieves the capacity of the Gaussian MAC with feedback [32] (Section V.A), for the GBCF the OL strategy is sub-optimal [28]). Therefore, it is interesting to see if the duality results between the Gaussian MAC with feedback and the GBCF, presented in [33,34] for the channel coding problem, can be extended to JSCC and if the approach of [33,34] facilitates a tractable EDT analysis. We consider this as a direction for future work.

Acknowledgments

This work was supported in part by the Israel Science Foundation under Grant 396/11 and by the European Research Council (ERC) through Starting Grant BEACON (Agreement #677854). Parts of this work were presented at the IEEE Information Theory Workshop (ITW), April 2015, Jerusalem, Israel [46], and at the IEEE International Symposium on Information Theory (ISIT), July 2016, Barcelona, Spain [47].

Author Contributions

Yonathan Murin developed this work in discussion with Yonatan Kaspi, Ron Dabora, and Deniz Gündüz. Yonathan Murin wrote the paper with comments from Yonatan Kaspi, Ron Dabora, and Deniz Gündüz. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EDT: Energy-distortion tradeoff
GBC: Gaussian broadcast channel
GBCF: Gaussian broadcast channel with feedback
JSCC: Joint source-channel coding
LHS: Left-hand side
LQG: Linear quadratic Gaussian
MAC: Multiple access channel
MSE: Mean square error
OL: Ozarow–Leung
RHS: Right-hand side
SK: Schalkwijk–Kailath
SNR: Signal-to-noise ratio
SSCC: Separate source-channel coding

Appendix A. Proof of Lemma 1

We begin with the proof of (6a). From [12] (Theorem 13.2.1) we have:
R S 1 ( D ) = inf P S ^ 1 | S 1 : E ( S ^ 1 S 1 ) 2 D I ( S ^ 1 ; S 1 ) .
Now, for any ϵ > 0 we write:
m · R S 1 ( D + ϵ ) ( a ) inf P S ^ 1 , 1 m | S 1 , 1 m : j = 1 m E ( S ^ 1 , j S 1 , j ) 2 m ( D + ϵ ) j = 1 m I ( S ^ 1 , j ; S 1 , j | S 1 , 1 j 1 ) ( b ) I ( S ^ 1 , 1 m ; S 1 , 1 m ) ,
where (a) follows from the convexity of the mutual information I ( S ^ 1 ; S 1 ) in the conditional distribution P S ^ 1 | S 1 , and from the assumption that the sources are memoryless; (b) is due to the non-negativity of mutual information combined with the chain rule for mutual information. Next, we upper bound I ( S ^ 1 , 1 m ; S 1 , 1 m ) as follows:
I ( S ^ 1 , 1 m ; S 1 , 1 m ) ( a ) I ( Y 1 , 1 n ; S 1 , 1 m ) ( b ) k = 1 n I ( X k ; Y 1 , k ) ,
where (a) follows from the data processing inequality [12] (Section 2.8), by noting that S 1 m Y 1 n S ^ 1 m ; (b) follows from the fact that conditioning reduces entropy, and from the fact that since the channel is memoryless, then Y 1 , k depends on ( S 1 m , X k , Y 1 , 1 k 1 ) only through the channel input X k , see (1). By combining (A1)–(A3) we obtain (6a).
Next, we prove (6b). From [40] (Theorem III.1) we have:
R S 1 , S 2 ( D ) = inf P S ^ 1 , S ^ 2 | S 1 , S 2 : E ( S ^ i S i ) 2 D , i = 1 , 2 I ( S ^ 1 , S ^ 2 ; S 1 , S 2 ) .
Again, for any ϵ > 0 , we write:
m · R S 1 , S 2 ( D + ϵ ) ( a ) inf P S ^ 1 , 1 m , S ^ 2 , 1 m | S 1 , 1 m , S 2 , 1 m : j = 1 m E ( S ^ j , i S j , i ) 2 m ( D + ϵ ) , i = 1 , 2 j = 1 m I ( S ^ 1 , j , S ^ 2 , j ; S 1 , j , S 2 , j ) ( b ) I ( S ^ 1 , 1 m , S ^ 2 , 1 m ; S 1 , 1 m , S 2 , 1 m ) ,
where (a) is due to the convexity of the mutual information I ( S ^ 1 , S ^ 2 ; S 1 , S 2 ) in the conditional distribution P S ^ 1 , S ^ 2 | S 1 , S 2 , and (b) follows from the memorylessness of the sources, the chain rule for mutual information, and from the fact that it is non-negative. Next, we upper bound I ( S ^ 1 , 1 m , S ^ 2 , 1 m ; S 1 , 1 m , S 2 , 1 m ) as follows:
I ( S ^ 1 m , S ^ 2 m ; S 1 m , S 2 m ) ( a ) I ( Y 1 n , Y 2 n ; S 1 m , S 2 m ) ( b ) k = 1 n I ( X k ; Y 1 , k , Y 2 , k ) ,
where (a) follows from the data processing inequality [12] (Section 2.8), by noting that we have ( S 1 m , S 2 m ) ( Y 1 n , Y 2 n ) ( S ^ 1 m , S ^ 2 m ) ; (b) follows from the fact that conditioning reduces entropy, and from the fact that the channel is memoryless, thus, Y 1 , k and Y 2 , k depend on ( S 1 m , S 2 m , X k , Y 1 , 1 k 1 , Y 2 , 1 k 1 ) only through the channel input X k , see (1). By combining (A4)–(A6) we obtain (6b). This concludes the proof of the lemma.

Appendix B. Proof of Lemma 2: Minimum Energy-Per-Bit for the LQG Scheme

We first note that by following the approach taken in the achievability part of [48] (Theorem 1) it can be shown that for the symmetric GBCF with symmetric rates, the minimum energy-per-bit is given by:
E b min LQG = lim P 0 P R LQG sum ( P ) ,
where R_LQG^sum(P) is the sum rate achievable by the LQG scheme. Let x_0 be the unique positive real root of the third-order polynomial p(x) = (1 + ρ_z)x³ + (1 - ρ_z)x² - (1 + ρ_z + 2P/σ_z²)x - (1 - ρ_z). From [32] (Equation (26)), for the symmetric GBCF, the achievable per-user rate of the LQG scheme is R_LQG(P) = (1/2)log_2(x_0) bits. We now follow the approach taken in [36] (Appendix A.3) and bound x_0 using Budan's theorem [49].
Explicitly writing the derivatives of p ( x ) and evaluating the sequence p ( i ) ( 1 ) , i = 0 , 1 , 2 , 3 , we have V ( 1 ) = 1 . Next, we let χ = 2 P α σ z 2 where α > 0 is a real constant. Setting x = 1 + χ we obtain p ( 1 + χ ) = ( 1 + ρ z ) χ 3 + ( 4 + 2 ρ z α ) χ 2 + ( 4 α ) χ , p ( 1 ) ( 1 + χ ) = 3 ( 1 + ρ z ) χ 2 + ( 8 + 4 ρ z α ) χ + 4 , and p ( 2 ) ( 1 + χ ) , p ( 3 ) ( 1 + χ ) > 0 . Note that we are interested in the regime P 0 which implies that χ 0 . Now, for χ small enough we have p ( 1 ) ( 1 + χ ) 4 > 0 . Furthermore, when χ 0 we have p ( 0 ) ( 1 + χ ) = p ( 0 ) 1 + 2 P α σ z 2 ( 4 α ) 2 P α σ z 2 . Clearly, for any 0 < α < 4 , lim P 0 p ( 0 ) ( 1 + 2 P α σ z 2 ) > 0 , and when α > 4 , lim P 0 p ( 0 ) ( 1 + 2 P α σ z 2 ) < 0 . Thus, letting 0 < δ < 4 , Budan’s theorem implies that when P 0 , the number of roots of p ( x ) in the interval 1 + 2 P ( 4 + δ ) σ z 2 , 1 + 2 P ( 4 δ ) σ z 2 is 1. From Descartes’ rule [50] (Section 1.6.3), we know that there is a unique positive root, thus, as this holds for any 0 < δ < 4 , we conclude that lim P 0 x 0 = 1 + P 2 σ z 2 . Plugging the value of x 0 into (A7), and considering the sum-rate, we obtain:
E b min LQG = lim P 0 P log 2 1 + P 2 σ z 2 = 2 σ z 2 log e 2 .
This concludes the proof.
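The limit in (A7)-(A8) can also be checked numerically: bisection locates the unique positive root x_0 of p(x) (unique by Descartes' rule), and P divided by the sum rate log_2(x_0) approaches 2σ_z² ln 2 as P → 0. A sketch, with our reading of the coefficients of p(x) (function names are ours):

```python
import math

def lqg_x0(P, sigz2, rho_z, iters=200):
    # p(x) = (1+rho_z)x^3 + (1-rho_z)x^2 - (1+rho_z+2P/sigz2)x - (1-rho_z);
    # its unique positive real root is bracketed in (0, 2) for small P.
    p = lambda x: ((1.0 + rho_z) * x**3 + (1.0 - rho_z) * x**2
                   - (1.0 + rho_z + 2.0 * P / sigz2) * x - (1.0 - rho_z))
    lo, hi = 0.0, 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if p(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigz2, rho_z, P = 1.0, 0.3, 1e-6
x0 = lqg_x0(P, sigz2, rho_z)
eb = P / math.log2(x0)   # energy per bit at sum rate log2(x0)
print(x0, eb)            # x0 ~ 1 + P/(2*sigz2); eb ~ 2*sigz2*ln(2)
```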

Appendix C. Proof of Theorem 4

First, note that if ρ s < 0 , we can replace S 1 with S 1 , which changes only the sign of ρ s in the joint distribution of the sources. Note that changing the sign of ρ k 1 in (14) only changes the sign of ρ k while | ρ k | remains unchanged. Hence, α k in (13) is not affected by changing the sign of ρ s . Therefore, in the following we assume that 0 ρ s < 1 . To simplify the notation we also omit the dependence of K OL ( P , D ) on P and D, and write K OL . For characterizing the termination time of the OL scheme we first characterize the temporal evolution of ρ k . From [28] (p. 669), ρ k decreases (with k) until it crosses zero. Let K th min { k : ρ k + 1 < 0 } , regardless of whether the target MSE was achieved or not. We begin our analysis with the case K OL K th .

Appendix C.1. The Case of KOL ≤ Kth

From (14) we write the (first order) Maclaurin series expansion [50] (Chapter 7.3.3.3) of ρ k + 1 ρ k in the parameter P:
ρ_{k+1} - ρ_k = -(P/(2σ_z²))[(1 - ρ_k²)sgn(ρ_k) + (1 - ρ_z)(sgn(ρ_k) + ρ_k)] + Res_1(P, k),
where Res 1 ( P , k ) is the remainder of the first order Maclaurin series expansion. The following lemma upper bounds | Res 1 ( P , k ) | :
Lemma A1.
For any k, we have | Res 1 ( P , k ) | B 1 ( P ) , where B 1 ( P ) is defined in (15).
Proof. 
Let φ ( P , k ) ρ k + 1 ρ k . From Taylor’s Theorem [50] (Subsection 6.1.4.5) it follows that Res 1 ( P , k ) = 2 φ ( x , k ) 2 x 2 · P 2 , for some 0 x P . In the following we upper bound 2 φ ( x , k ) x 2 , for 0 x P : Let b 2 ( 1 ρ k 2 ) ( sgn ( ρ k ) + ρ k ) , b 1 ρ z σ z 2 ( 1 ρ k 2 ) ( sgn ( ρ k ) + ρ k ) + σ z 2 ( 1 ρ z ) ( 2 ( sgn ( ρ k ) + ρ k ) + ρ k ( 1 ρ k 2 ) ) , a 2 ( 1 ρ k 2 ) , a 1 σ z 2 2 ( 1 + | ρ k | ) + 1 ρ k 2 , and a 0 2 σ z 4 ( 1 + | ρ k | ) (note that in order to simplify the expressions we ignore the dependence of b 2 , b 2 , a 2 , a 1 , and a 0 on k). Using (14), the expression ρ k + 1 ρ k can now be explicitly written as φ ( P , k ) = b 2 P 2 b 1 P a 2 P 2 + a 1 P + a 0 , from which we obtain:
2 φ ( x , k ) x 2 = 2 ( a 1 a 2 b 2 a 2 2 b 1 ) x 3 + 3 a 0 a 2 b 2 x 2 + 3 a 0 a 2 b 1 x + a 0 a 1 b 1 a 0 2 b 2 ( a 2 x 2 + a 1 x + a 0 ) 3 .
Since a 1 , a 2 > 0 , we lower bound the denominator of 2 φ ( x , k ) x 2 in the range 0 x P by ( a 2 x 2 + a 1 x + a 0 ) 3 a 0 3 = 8 σ z 12 . Next, we upper bound each of the terms in the numerator of 2 φ ( x , k ) x 2 . For the coefficient of x 3 we write | a 1 a 2 b 2 a 2 2 b 1 | 4 σ z 2 · 2 + | ρ z | σ z 2 · 2 + σ z 2 ( 1 ρ z ) · 5 = σ z 2 8 + ψ 1 , where the inequality follows from the fact that 3 + 2 | ρ k | ρ k 2 4 . For the coefficient of x 2 we write | 3 a 0 a 2 b 2 | 24 σ z 4 . For the coefficient of x we write | 3 a 0 a 2 b 1 | 12 σ z 6 2 | ρ z | + 5 ( 1 ρ z ) = 12 σ z 6 ψ 1 . Finally, for the constant term we write | a 0 a 1 b 1 a 0 2 b 2 | 4 σ z 8 4 σ z 2 ψ 1 + 8 . Collecting the above bounds on the terms of the numerator, and the bound on the denominator, we obtain | Res 1 ( P , k ) | B 1 ( P ) , concluding the proof of the lemma. ☐
Note that for k K th we have ρ k > 0 . Hence, (A9) together with Lemma A1 imply that, for k K th we have:
|ρ_{k+1} - ρ_k|/P ≤ (1 + ρ_k)(2 - ρ_z - ρ_k)/(2σ_z²) + B_1(P)/P.
Next, note that the function f(x) ≜ (1 + x)(2 - ρ_z - x), 0 ≤ x < 1, satisfies:
min{2 - ρ_z, 2(1 - ρ_z)} ≤ f(x) ≤ (3 - ρ_z)²/4, for 0 ≤ x < 1.
The lower bound on f(x) follows from the concavity of f(x), and the upper bound is obtained by maximizing f(x) over all x ∈ R. When B_1(P) < ψ_2P we have min{2 - ρ_z, 2(1 - ρ_z)}/(2σ_z²) > B_1(P)/P, hence min{2 - ρ_z, 2(1 - ρ_z)}/(2σ_z²) - B_1(P)/P > 0. Thus, we can combine the lower and upper bounds on Res_1(P, k) with the bounds on (1 + ρ_k)(2 - ρ_z - ρ_k)/(2σ_z²) to obtain the following lower and upper bounds on |ρ_{k+1} - ρ_k|/P:
min{2 - ρ_z, 2(1 - ρ_z)}/(2σ_z²) - B_1(P)/P ≤ |ρ_{k+1} - ρ_k|/P ≤ (3 - ρ_z)²/(8σ_z²) + B_1(P)/P.
Now, recalling that ρ 0 = ρ s , the fact that the bound in (A11) does not depend on k results in the following upper bound on K th :
K_th ≤ ρ_s/(P · min{2 - ρ_z, 2(1 - ρ_z)}/(2σ_z²) - B_1(P)) = ρ_s/(Pψ_2 - B_1(P)).
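The two-sided drift bound (A11) and the resulting bound (A12) on K_th can be checked by iterating recursion (14) directly, using our reading of its sign conventions, for the parameters of the earlier numerical example (all variable names are ours):

```python
def psi1(rho_z):
    return 2.0 * abs(rho_z) + 5.0 * (1.0 - rho_z)

def B1(P, sigz2, rho_z):
    p1 = psi1(rho_z)
    return ((8.0 + p1) * P**3 + 24.0 * sigz2 * P**2 + 12.0 * sigz2**2 * p1 * P
            + 4.0 * sigz2**3 * (4.0 * sigz2 * p1 + 8.0)) / (8.0 * sigz2**5) * P**2

P, sigz2, rho_z, rho_s = 1e-4, 1.0, 0.7, 0.9
Theta = P + sigz2 * (2.0 - rho_z)
nu_z = sigz2**2 * (1.0 - rho_z)**2
lb = min(2.0 - rho_z, 2.0 * (1.0 - rho_z)) / (2.0 * sigz2) - B1(P, sigz2, rho_z) / P
ub = (3.0 - rho_z)**2 / (8.0 * sigz2) + B1(P, sigz2, rho_z) / P

rho, K_th, drift_ok = rho_s, 0, True
while rho >= 0.0:                      # iterate (14) until rho crosses zero
    Psi2 = P / (2.0 * (1.0 + abs(rho)))
    denom = sigz2 + Psi2 * (1.0 - rho**2)
    rho_next = ((rho_z * sigz2 * Theta + nu_z) * rho
                - Psi2 * Theta * (1.0 - rho**2)) / ((P + sigz2) * denom)
    drift_ok = drift_ok and (lb <= abs(rho_next - rho) / P <= ub)   # check (A11)
    rho, K_th = rho_next, K_th + 1

bound = rho_s / (P * min(2.0 - rho_z, 2.0 * (1.0 - rho_z)) / (2.0 * sigz2)
                 - B1(P, sigz2, rho_z))            # the bound (A12)
print(K_th, bound, drift_ok)
```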
Next, using the fact that ρ k 0 for k < K th , we rewrite (A9) as follows:
ρ k + 1 ρ k ( 1 + ρ k ) ( 2 ρ z ρ k ) = P 2 σ z 2 + Res 1 ( P , k ) ( 1 + ρ k ) ( 2 ρ z ρ k ) ,
which implies that for K OL K th we have:
$$\sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{\rho_{k+1}-\rho_k}{(1+\rho_k)(2-\rho_z-\rho_k)} = -\frac{K_{\mathrm{OL}} P}{2\sigma_z^2} + \sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{\mathrm{Res}_1(P,k)}{(1+\rho_k)(2-\rho_z-\rho_k)}. \tag{A13}$$
Observe that $\frac{\mathrm{Res}_1(P,k)}{(1+\rho_k)(2-\rho_z-\rho_k)} \in O(P^2)$, which follows from the fact that $0 < (1+\rho_k)(2-\rho_z-\rho_k)$ is lower and upper bounded independently of $P$ and $\rho_k$ (see (A10)), and from the fact that $|\mathrm{Res}_1(P,k)| \in O(P^2)$. Next, we focus on the LHS of (A13) and write:
$$\sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{\rho_{k+1}-\rho_k}{(1+\rho_k)(2-\rho_z-\rho_k)} = \sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{1}{(1+\rho_k)(2-\rho_z-\rho_k)} \int_{\rho_k}^{\rho_{k+1}} d\rho. \tag{A14}$$
Since $|\rho_z| < 1$, it follows that $\frac{1}{f(x)} = \frac{1}{(1+x)(2-x-\rho_z)}$ is continuous, differentiable and bounded over $0 \le x < 1$, which implies that there exists a constant $c_0$ such that:
$$\max_{x \in [\rho_{k+1},\, \rho_k]} \left| \frac{1}{f(x)} - \frac{1}{f(\rho_k)} \right| \le c_0 |\rho_{k+1} - \rho_k|. \tag{A15}$$
The constant $c_0$ is upper bounded in the following Lemma A2. Note that (A15) constitutes an upper bound on the maximal magnitude of the difference between $\frac{1}{f(\rho_{k+1})}$ and $\frac{1}{f(\rho_k)}$.
Lemma A2.
The constant $c_0$ in (A15) satisfies: $c_0 \le \max\left\{ \dfrac{|\rho_z - 1|}{(2-\rho_z)^2},\ \dfrac{1+\rho_z}{4(1-\rho_z)^2} \right\} \triangleq \psi_3$.
Proof. 
Since $0 \le \rho_k, \rho_{k+1} < 1$, the mean-value theorem [50] (Section 6.1.4) implies: $c_0 \le \max_{x \in [0,1]} \left| \left( \frac{1}{f(x)} \right)' \right|$. Writing $\left( \frac{1}{f(x)} \right)'$ explicitly we have: $\left( \frac{1}{f(x)} \right)' = \frac{2x - 1 + \rho_z}{\left( (1+x)(2-x-\rho_z) \right)^2} \triangleq g_0(x)$. To maximize $|g_0(x)|$ over $x \in [0,1]$, we compute $g_0'(x) = \frac{2\left( 3 - 3\rho_z + \rho_z^2 - 3(1-\rho_z)x + 3x^2 \right)}{\left( (1+x)(2-x-\rho_z) \right)^3}$. Setting $g_0'(x) = 0$ requires $g_1(x) = x^2 - (1-\rho_z)x + 1 - \rho_z + \frac{\rho_z^2}{3} = 0$. Since for all $|\rho_z| < 1$ the roots of $g_1(x)$ are complex (the discriminant of $g_1(x)$ equals $-\frac{\rho_z^2}{3} + 2\rho_z - 3 < 0$ for $|\rho_z| < 1$), $g_0'(x)$ does not vanish in the interval $x \in [0,1]$, and hence the maximal value of $|g_0(x)|$ is achieved at one of the boundaries of the interval $[0,1]$. This concludes the proof of the lemma. ☐
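The endpoint argument can be checked numerically: on a fine grid of $[0,1]$, $|g_0(x)|$ never exceeds the larger of its two boundary values, i.e., $\psi_3$. The values of $\rho_z$ below are illustrative:

```python
import numpy as np

# Check Lemma A2: |g_0(x)| = |2x - 1 + rho_z| / ((1 + x)(2 - x - rho_z))^2 attains its
# maximum over [0, 1] at an endpoint, so c_0 <= psi_3. Values of rho_z are illustrative.
for rho_z in (-0.9, -0.3, 0.0, 0.4, 0.9):
    x = np.linspace(0.0, 1.0, 200001)
    g0 = (2.0 * x - 1.0 + rho_z) / ((1.0 + x) * (2.0 - x - rho_z)) ** 2
    psi_3 = max(abs(rho_z - 1.0) / (2.0 - rho_z) ** 2,        # |g_0(0)|
                (1.0 + rho_z) / (4.0 * (1.0 - rho_z) ** 2))   # |g_0(1)|
    assert float(np.abs(g0).max()) <= psi_3 + 1e-9
```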
Next, we write the LHS of (A14) as follows:
$$\begin{aligned} \sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{\rho_{k+1}-\rho_k}{(1+\rho_k)(2-\rho_z-\rho_k)} &\overset{(a)}{=} \sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{1}{(1+\rho_k)(2-\rho_z-\rho_k)} \int_{\rho_k}^{\rho_{k+1}} d\rho \\ &\overset{(b)}{\le} \sum_{k=0}^{K_{\mathrm{OL}}-1} \int_{\rho_k}^{\rho_{k+1}} \frac{d\rho}{(1+\rho)(2-\rho_z-\rho)} + \sum_{k=0}^{K_{\mathrm{OL}}-1} \int_{\rho_{k+1}}^{\rho_k} \psi_3 \cdot |\rho_{k+1}-\rho_k|\, d\rho \\ &\le \int_{\rho_s}^{\rho_{K_{\mathrm{OL}}}} \frac{d\rho}{(1+\rho)(2-\rho_z-\rho)} + \sum_{k=0}^{K_{\mathrm{OL}}-1} \psi_3 \cdot |\rho_{k+1}-\rho_k|^2 \\ &\overset{(c)}{\le} -\frac{1}{3-\rho_z}\log\frac{(2-\rho_z-\rho_{K_{\mathrm{OL}}})(1+\rho_s)}{(2-\rho_z-\rho_s)(1+\rho_{K_{\mathrm{OL}}})} + F_1(P), \end{aligned} \tag{A16}$$
where (a) follows from (A14); (b) follows from (A15), which implies that for $x \in [\rho_{k+1}, \rho_k]$: $\frac{1}{f(\rho_k)} \le \frac{1}{f(x)} + c_0 |\rho_{k+1}-\rho_k|$, and from Lemma A2; (c) follows from explicitly calculating the integral, and by multiplying (A12) by the RHS of (A11) to bound the summation, and then using the upper bounds (A11) and (A12), which leads to an upper bound on the second summation by $F_1(P)$, defined in (16b). By following arguments similar to those leading to (A16), the summation on the LHS of (A14) can be lower bounded via:
$$\sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{\rho_{k+1}-\rho_k}{(1+\rho_k)(2-\rho_z-\rho_k)} \ge -\frac{1}{3-\rho_z}\log\frac{(2-\rho_z-\rho_{K_{\mathrm{OL}}})(1+\rho_s)}{(2-\rho_z-\rho_s)(1+\rho_{K_{\mathrm{OL}}})} - F_1(P). \tag{A17}$$
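The closed form used in step (c) of (A16) (and again in (A19)) follows from the partial-fraction decomposition $\frac{1}{(1+\rho)(2-\rho_z-\rho)} = \frac{1}{3-\rho_z}\big( \frac{1}{1+\rho} + \frac{1}{2-\rho_z-\rho} \big)$. A numerical comparison against the trapezoidal rule, using illustrative values of $\rho_z$, $\rho_s$ and $\rho_{K_{\mathrm{OL}}}$:

```python
import numpy as np

# Verify:  -int_{rho_s}^{rho_K} d rho / ((1 + rho)(2 - rho_z - rho))
#   = (1/(3 - rho_z)) log[(2 - rho_z - rho_K)(1 + rho_s) / ((2 - rho_z - rho_s)(1 + rho_K))].
rho_z, rho_s, rho_K = 0.5, 0.9, 0.1

grid = np.linspace(rho_K, rho_s, 200001)   # integrate from rho_K up to rho_s (sign flip)
y = 1.0 / ((1.0 + grid) * (2.0 - rho_z - grid))
numeric = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid)))  # trapezoidal rule
closed = (1.0 / (3.0 - rho_z)) * np.log(
    (2.0 - rho_z - rho_K) * (1.0 + rho_s) / ((2.0 - rho_z - rho_s) * (1.0 + rho_K)))
assert abs(numeric - closed) < 1e-8
```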
Next, consider again the RHS of (A13). Using the bound (A10) and Lemma A1, we can write:
$$-\frac{K_{\mathrm{OL}} P}{2\sigma_z^2} + \sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{\mathrm{Res}_1(P,k)}{(1+\rho_k)(2-\rho_z-\rho_k)} \le -\frac{K_{\mathrm{OL}} P}{2\sigma_z^2} + \sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{B_1(P)}{\min\{2-\rho_z,\,2(1-\rho_z)\}} \overset{(a)}{\le} -\frac{K_{\mathrm{OL}} P}{2\sigma_z^2} + F_2(P), \tag{A18}$$
where the first inequality follows from Lemma A1 and the LHS of (A10), and (a) follows from (A12) and from the definitions of $\psi_2$ and $F_2(P)$ in Section 5.2. Plugging the lower bound (A17) and the upper bound (A18) into (A13), we arrive at an upper bound on $K_{\mathrm{OL}}$ when $K_{\mathrm{OL}} \le K_{\mathrm{th}}$:
$$K_{\mathrm{OL}} \le \frac{2\sigma_z^2}{P} \cdot \frac{1}{3-\rho_z}\log\frac{(2-\rho_z-\rho_{K_{\mathrm{OL}}})(1+\rho_s)}{(2-\rho_z-\rho_s)(1+\rho_{K_{\mathrm{OL}}})} + \frac{2\sigma_z^2}{P}\big( F_1(P) + F_2(P) \big). \tag{A19}$$
We emphasize that the above expressions hold only for $K_{\mathrm{OL}} \le K_{\mathrm{th}}$, and we note that these expressions depend on $\rho_{K_{\mathrm{OL}}}$. As $\rho_{K_{\mathrm{OL}}}$ is unknown, in the following we bound its value. For this purpose, we set $\alpha_{K_{\mathrm{OL}}} = D$ in (13) and write:
$$\log\frac{D}{\sigma_s^2} = \sum_{k=0}^{K_{\mathrm{OL}}-1} \log\frac{2\sigma_z^2(1+|\rho_k|) + P(1-\rho_k^2)}{2(P+\sigma_z^2)(1+|\rho_k|)} \overset{(a)}{=} -\sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{P(1+|\rho_k|)}{2\sigma_z^2} + \sum_{k=0}^{K_{\mathrm{OL}}-1} \mathrm{Res}_2(P,k), \tag{A20}$$
where (a) follows from the first-order Maclaurin series expansion of $\log\frac{2\sigma_z^2(1+|\rho_k|)+P(1-\rho_k^2)}{2(P+\sigma_z^2)(1+|\rho_k|)}$ in the parameter $P$, and $\mathrm{Res}_2(P,k)$ is the remainder term. Note that this holds for any $K_{\mathrm{OL}}$, irrespective of whether it is smaller or larger than $K_{\mathrm{th}}$. The following lemma upper bounds $|\mathrm{Res}_2(P,k)|$:
Lemma A3.
For any $k$ we have $|\mathrm{Res}_2(P,k)| \le B_2(P)$, where $B_2(P)$ is defined in (15).
Proof outline.
We follow the technique used in the proof of Lemma A1. We let $\varphi(P,k) \triangleq \log\frac{2\sigma_z^2(1+|\rho_k|)+P(1-\rho_k^2)}{2(P+\sigma_z^2)(1+|\rho_k|)}$, and use Taylor's theorem to write $\mathrm{Res}_2(P,k) = \frac{\partial^2 \varphi(x,k)}{\partial x^2} \cdot \frac{P^2}{2}$ for some $0 \le x \le P$. Then, we upper bound $\frac{\partial^2 \varphi(x,k)}{\partial x^2}$ in the range $0 \le x \le P$. ☐
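The behavior of $\mathrm{Res}_2(P,k)$ can be illustrated numerically: the gap between $\varphi(P,k)$ and its first-order Maclaurin approximation $-P(1+|\rho_k|)/(2\sigma_z^2)$ shrinks like $P^2$. The envelope constant $2$ below is an ad hoc choice for these illustrative parameters, and is not the bound $B_2(P)$ of Lemma A3:

```python
import math

# phi(P) = log[(2 sigma_z^2 (1 + |rho|) + P (1 - rho^2)) / (2 (P + sigma_z^2)(1 + |rho|))];
# its first-order Maclaurin term in P is -P (1 + |rho|) / (2 sigma_z^2).
sigma_z2, rho = 1.0, 0.6

def phi(P):
    num = 2.0 * sigma_z2 * (1.0 + abs(rho)) + P * (1.0 - rho ** 2)
    den = 2.0 * (P + sigma_z2) * (1.0 + abs(rho))
    return math.log(num / den)

for P in (1e-1, 1e-2, 1e-3):
    res2 = phi(P) + P * (1.0 + abs(rho)) / (2.0 * sigma_z2)   # remainder Res_2
    assert abs(res2) <= 2.0 * P ** 2   # O(P^2) decay; the constant 2 is ad hoc
```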
Next, we focus on the first summation on the RHS of (A20): From (A9), and for $k \le K_{\mathrm{th}}$, we have $\frac{\rho_{k+1}-\rho_k}{2-\rho_z-\rho_k} = -\frac{P(1+\rho_k)}{2\sigma_z^2} + \frac{\mathrm{Res}_1(P,k)}{2-\rho_z-\rho_k}$. Hence, we write the first summation on the RHS of (A20), for $K_{\mathrm{OL}} \le K_{\mathrm{th}}$, as:
$$-\sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{P(1+|\rho_k|)}{2\sigma_z^2} = \sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{\rho_{k+1}-\rho_k}{2-\rho_z-\rho_k} - \sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{\mathrm{Res}_1(P,k)}{2-\rho_z-\rho_k}. \tag{A21}$$
Similarly to (A16) we write:
$$\sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{\rho_{k+1}-\rho_k}{2-\rho_z-\rho_k} \le \int_{\rho_s}^{\rho_{K_{\mathrm{OL}}}} \frac{d\rho}{2-\rho_z-\rho} + F_{3,1}(P) = \log\frac{2-\rho_z-\rho_s}{2-\rho_z-\rho_{K_{\mathrm{OL}}}} + F_{3,1}(P), \tag{A22}$$
where
$$F_{3,1}(P) \overset{(a)}{=} \underbrace{\frac{\rho_s}{P\psi_2 - B_1(P)}}_{(\star)} \times \underbrace{\max_{x \in [0,1]} \frac{1}{(2-\rho_z-x)^2} \cdot \left( \frac{(3-\rho_z)^2 P}{8\sigma_z^2} + B_1(P) \right)^2}_{(\star\star)} \overset{(b)}{=} \frac{\rho_s}{P\psi_2 - B_1(P)} \cdot \frac{1}{(1-\rho_z)^2} \cdot \left( \frac{(3-\rho_z)^2 P}{8\sigma_z^2} + B_1(P) \right)^2.$$
Here, in step (a), $(\star)$ is obtained as $K_{\mathrm{OL}} \le K_{\mathrm{th}}$, where $K_{\mathrm{th}}$ is upper bounded as in (A12), and $(\star\star)$ follows from bounding $\left| \frac{1}{2-\rho_z-\rho} - \frac{1}{2-\rho_z-\rho_k} \right| \le d_0 |\rho_{k+1}-\rho_k|$, where $d_0$ is found using a similar approach to the one in the proof of Lemma A2. Then, applying arguments similar to those leading to (A16), we plug the upper bound on $|\rho_{k+1}-\rho_k|$ stated in the RHS of (A11), and combine it with the bound on $d_0$ to obtain $(\star\star)$. Step (b) follows from the fact that $\frac{d}{dx}\frac{1}{(2-\rho_z-x)^2} = \frac{2}{(2-\rho_z-x)^3} > 0$ for all $x \in [0,1]$, which implies that $x \mapsto \frac{1}{(2-\rho_z-x)^2}$ is increasing on $[0,1]$, and therefore its maximal value is achieved at $x = 1$.
For the second term on the RHS of (A21), noting that for $|\rho_k| < 1$ we have $0 < \frac{1}{2-\rho_z-\rho_k} < \frac{1}{1-\rho_z}$, we write:
$$\left| \sum_{k=0}^{K_{\mathrm{OL}}-1} \frac{\mathrm{Res}_1(P,k)}{2-\rho_z-\rho_k} \right| \le \frac{\rho_s}{P\psi_2 - B_1(P)} \cdot \frac{B_1(P)}{1-\rho_z} \triangleq F_{3,2}(P). \tag{A23}$$
Now, we consider the second term on the RHS of (A20). From (A12) and Lemma A3 we obtain:
$$\left| \sum_{k=0}^{K_{\mathrm{OL}}-1} \mathrm{Res}_2(P,k) \right| \le K_{\mathrm{OL}} \cdot B_2(P) \le \frac{\rho_s B_2(P)}{P\psi_2 - B_1(P)} \triangleq F_{3,3}(P). \tag{A24}$$
Therefore, from (A20)–(A24), using the definition of $F_3(P)$ in (16d), we obtain:
$$\log\frac{D}{\sigma_s^2} \le \log\frac{2-\rho_z-\rho_s}{2-\rho_z-\rho_{K_{\mathrm{OL}}}} + F_3(P). \tag{A25a}$$
By following similar arguments for lower bounding $\log\frac{D}{\sigma_s^2}$, we also obtain:
$$\log\frac{D}{\sigma_s^2} \ge \log\frac{2-\rho_z-\rho_s}{2-\rho_z-\rho_{K_{\mathrm{OL}}}} - F_3(P). \tag{A25b}$$
From (A25a), we can extract the following lower bound on $\rho_{K_{\mathrm{OL}}}$: $\rho_{K_{\mathrm{OL}}} \ge 2-\rho_z + \frac{\sigma_s^2}{D}(\rho_z+\rho_s-2)e^{F_3(P)} \triangleq \rho_{\mathrm{lb}}(P,D)$. Similarly, from (A25b), we can extract the following upper bound on $\rho_{K_{\mathrm{OL}}}$: $\rho_{K_{\mathrm{OL}}} \le 2-\rho_z + \frac{\sigma_s^2}{D}(\rho_z+\rho_s-2)e^{-F_3(P)} \triangleq \rho_{\mathrm{ub}}(P,D)$. Up to this point we assumed that $K_{\mathrm{OL}} \le K_{\mathrm{th}}$ and therefore $\rho_{K_{\mathrm{OL}}} \ge 0$. Hence, we restricted our attention only to values of $D$ for which $\rho_{\mathrm{lb}}(P,D) \ge 0$, which is satisfied for $D \ge \frac{\sigma_s^2(2-\rho_z-\rho_s)e^{F_3(P)}}{2-\rho_z} \triangleq D_{\mathrm{th}}^{\mathrm{ub}}$. We conclude that if $D \ge D_{\mathrm{th}}^{\mathrm{ub}}$, we can obtain an upper bound on $K_{\mathrm{OL}}$ by plugging $\rho_{\mathrm{lb}}(P,D)$ into (A19):
$$K_{\mathrm{OL}} \le \frac{2\sigma_z^2}{P} \cdot \frac{1}{3-\rho_z}\log\frac{(2-\rho_z-\rho_{\mathrm{lb}}(P,D))(1+\rho_s)}{(2-\rho_z-\rho_s)(1+\rho_{\mathrm{lb}}(P,D))} + \frac{2\sigma_z^2}{P}\big( F_1(P) + F_2(P) \big). \tag{A26}$$
This corresponds to the bound in (17a). In the next subsection, we consider the case of $K_{\mathrm{OL}} > K_{\mathrm{th}}$.

Appendix C.2. The Case of $K_{\mathrm{OL}} > K_{\mathrm{th}}$

For upper bounding $K_{\mathrm{OL}}$ when $K_{\mathrm{OL}} > K_{\mathrm{th}}$, we first derive an upper bound on $|\rho_k|$ for $k \ge K_{\mathrm{th}}$. From (A9) we have, for any $k$:
$$|\rho_{k+1} - \rho_k| \le \frac{P}{2\sigma_z^2}\left| (1-\rho_k^2)\,\mathrm{sgn}(\rho_k) + (1-\rho_z)(\mathrm{sgn}(\rho_k) + \rho_k) \right| + |\mathrm{Res}_1(P,k)| \overset{(a)}{\le} \frac{P}{2\sigma_z^2}\left( (1-|\rho_k|^2) + (1-\rho_z)(1+|\rho_k|) \right) + B_1(P) \overset{(b)}{=} \frac{P}{2\sigma_z^2}(1+|\rho_k|)(2-\rho_z-|\rho_k|) + B_1(P),$$
where (a) follows from Lemma A1, and (b) follows since $|\rho_k|$ is non-negative. Thus, we can use the upper bound in (A10) to further bound:
$$\frac{P}{2\sigma_z^2}(1+|\rho_k|)(2-\rho_z-|\rho_k|) + B_1(P) \le \frac{P(3-\rho_z)^2}{8\sigma_z^2} + B_1(P) \triangleq \bar{\rho}(P). \tag{A27}$$
Note that this bound holds for every $k$, regardless of the value of $K_{\mathrm{OL}}$. Further note that the condition $\bar{\rho}(P) + \frac{2\sigma_z^2}{P}B_2(P) < 1$ implies that $\bar{\rho}(P) < 1$. The following lemma uses (A27) to bound $|\rho_k|$ for $k \ge K_{\mathrm{th}}$.
Lemma A4.
For $k \ge K_{\mathrm{th}}$ it holds that $|\rho_k| \le \bar{\rho}(P)$.
Proof. 
We first recall that $\rho_{K_{\mathrm{th}}} > 0$ while $\rho_{K_{\mathrm{th}}+1} < 0$. Therefore, the bound $|\rho_{k+1} - \rho_k| \le \bar{\rho}(P)$ combined with $|\rho_{K_{\mathrm{th}}+1} - \rho_{K_{\mathrm{th}}}| = |\rho_{K_{\mathrm{th}}}| + |\rho_{K_{\mathrm{th}}+1}|$ implies that $|\rho_{K_{\mathrm{th}}}| \le \bar{\rho}(P)$ as well as $|\rho_{K_{\mathrm{th}}+1}| \le \bar{\rho}(P)$. From [28] (p. 669) we have that if $\rho_k > 0$ then $\rho_{k+1} < \rho_k$, and if $\rho_k < 0$ then $\rho_{k+1} > \rho_k$. Note that these statements hold for every $k$. We now prove by induction the statement: Suppose $|\rho_{K_{\mathrm{th}}+\Delta}| \le \bar{\rho}(P)$ for $\Delta > 0$; then $|\rho_{K_{\mathrm{th}}+\Delta+1}| \le \bar{\rho}(P)$. Note that the induction assumption is satisfied for $\Delta = 1$. If $\rho_{K_{\mathrm{th}}+\Delta} < 0$, then $\rho_{K_{\mathrm{th}}+\Delta} < \rho_{K_{\mathrm{th}}+\Delta+1}$, which implies that $|\rho_{K_{\mathrm{th}}+\Delta+1}| \le \bar{\rho}(P)$ since $|\rho_{k+1} - \rho_k| \le \bar{\rho}(P)$. If $\rho_{K_{\mathrm{th}}+\Delta} > 0$, then $\rho_{K_{\mathrm{th}}+\Delta} > \rho_{K_{\mathrm{th}}+\Delta+1}$, which again implies that $|\rho_{K_{\mathrm{th}}+\Delta+1}| \le \bar{\rho}(P)$ since $|\rho_{k+1} - \rho_k| \le \bar{\rho}(P)$. Thus, by induction we conclude that $|\rho_k| \le \bar{\rho}(P)$ for all $k \ge K_{\mathrm{th}}$. ☐
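Lemma A4's oscillation argument can be illustrated by simulating the first-order recursion implied by (A9) with the remainder dropped, so that $B_1(P) = 0$ and $\bar{\rho}(P)$ reduces to $P(3-\rho_z)^2/(8\sigma_z^2)$. Once the sign of $\rho_k$ changes, $|\rho_k|$ stays within one step size of zero. Parameter values are illustrative:

```python
# First-order recursion from (A9) with Res_1 dropped:
# rho_{k+1} = rho_k - (P/(2 sigma_z^2)) [(1 - rho_k^2) sgn(rho_k)
#                                        + (1 - rho_z)(sgn(rho_k) + rho_k)].
P, sigma_z2, rho_z = 0.01, 1.0, 0.5
rho_bar = P * (3.0 - rho_z) ** 2 / (8.0 * sigma_z2)   # (A27) with B_1(P) = 0

sgn = lambda v: (v > 0) - (v < 0)
rho, hist = 0.9, []
for _ in range(2000):
    rho -= P / (2.0 * sigma_z2) * ((1.0 - rho ** 2) * sgn(rho)
                                   + (1.0 - rho_z) * (sgn(rho) + rho))
    hist.append(rho)

K_th = next(k for k, r in enumerate(hist) if r <= 0.0)       # first sign change
assert all(abs(r) <= rho_bar + 1e-12 for r in hist[K_th:])   # Lemma A4 behavior
```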
Next, we characterize a lower bound on the distortion achieved after $K_{\mathrm{th}}$ time steps. Recall that for $K_{\mathrm{OL}} \le K_{\mathrm{th}}$ we have $\rho_{K_{\mathrm{OL}}} \le \rho_{\mathrm{ub}}(P,D)$, where $\rho_{\mathrm{ub}}(P,D)$ is defined in Appendix C.1. By setting $\rho_{\mathrm{ub}}(P,D) = 0$, we obtain $D = \frac{\sigma_s^2(2-\rho_z-\rho_s)e^{-F_3(P)}}{2-\rho_z} \triangleq D_{\mathrm{th}}^{\mathrm{lb}}$. Thus, $D_{\mathrm{th}}^{\mathrm{lb}}$ constitutes a lower bound on $D_{\mathrm{th}}$.
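That $\rho_{\mathrm{lb}}(P,D)$ vanishes at $D_{\mathrm{th}}^{\mathrm{ub}}$ and $\rho_{\mathrm{ub}}(P,D)$ vanishes at $D_{\mathrm{th}}^{\mathrm{lb}}$ is a one-line algebraic identity; the check below treats $F_3(P)$ as a fixed number, with an arbitrary illustrative value:

```python
import math

# rho_lb / rho_ub from Appendix C.1 and the thresholds D_th^ub / D_th^lb;
# F3 stands in for F_3(P) and is an arbitrary illustrative value.
sigma_s2, rho_z, rho_s, F3 = 1.0, 0.5, 0.8, 0.05

rho_lb = lambda D: 2.0 - rho_z + (sigma_s2 / D) * (rho_z + rho_s - 2.0) * math.exp(F3)
rho_ub = lambda D: 2.0 - rho_z + (sigma_s2 / D) * (rho_z + rho_s - 2.0) * math.exp(-F3)

D_th_ub = sigma_s2 * (2.0 - rho_z - rho_s) * math.exp(F3) / (2.0 - rho_z)
D_th_lb = sigma_s2 * (2.0 - rho_z - rho_s) * math.exp(-F3) / (2.0 - rho_z)

assert abs(rho_lb(D_th_ub)) < 1e-12 and abs(rho_ub(D_th_lb)) < 1e-12
assert D_th_lb <= D_th_ub   # consistent ordering of the two thresholds
```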
Now, we are ready to analyze the case of $K_{\mathrm{OL}} > K_{\mathrm{th}}$. We first note that (A20) holds for any value of $K_{\mathrm{OL}}$. Hence, we write:
$$\log\frac{D}{\sigma_s^2} = \sum_{k=0}^{K_{\mathrm{th}}-1} \left( -\frac{P(1+|\rho_k|)}{2\sigma_z^2} + \mathrm{Res}_2(P,k) \right) + \sum_{k=K_{\mathrm{th}}}^{K_{\mathrm{OL}}-1} \left( -\frac{P(1+|\rho_k|)}{2\sigma_z^2} + \mathrm{Res}_2(P,k) \right). \tag{A28}$$
For the second term on the RHS of (A28), we write:
$$\sum_{k=K_{\mathrm{th}}}^{K_{\mathrm{OL}}-1} \left( -\frac{P(1+|\rho_k|)}{2\sigma_z^2} + \mathrm{Res}_2(P,k) \right) \overset{(a)}{\le} -(K_{\mathrm{OL}} - K_{\mathrm{th}}) \frac{P}{2\sigma_z^2}\left( 1 - \bar{\rho}(P) - \frac{2\sigma_z^2}{P}B_2(P) \right) = (K_{\mathrm{OL}} - K_{\mathrm{th}}) F_4(P), \tag{A29}$$
where (a) follows from Lemma A3, as the lemma holds for any $k$, and from the fact that $|\rho_k| \le \bar{\rho}(P)$ for all $k \ge K_{\mathrm{th}}$. Since the sum in (A20) is negative, we require $F_4(P) < 0$, which results in the condition $\bar{\rho}(P) + \frac{2\sigma_z^2}{P}B_2(P) < 1$. Now, we write (A28) as:
$$\sum_{k=K_{\mathrm{th}}}^{K_{\mathrm{OL}}-1} \left( -\frac{P(1+|\rho_k|)}{2\sigma_z^2} + \mathrm{Res}_2(P,k) \right) = \log\frac{D}{\sigma_s^2} - \sum_{k=0}^{K_{\mathrm{th}}-1} \left( -\frac{P(1+|\rho_k|)}{2\sigma_z^2} + \mathrm{Res}_2(P,k) \right),$$
and note that since (A20)–(A25) hold for $K_{\mathrm{OL}} \le K_{\mathrm{th}}$, then replacing $K_{\mathrm{OL}}$ with $K_{\mathrm{th}}$ and $\rho_{K_{\mathrm{OL}}}$ with $\rho_{K_{\mathrm{th}}}$ in (A20)–(A25), we can bound:
$$\sum_{k=0}^{K_{\mathrm{th}}-1} \left( -\frac{P(1+|\rho_k|)}{2\sigma_z^2} + \mathrm{Res}_2(P,k) \right) \le \log\frac{2-\rho_z-\rho_s}{2-\rho_z-\bar{\rho}(P)} + F_3(P),$$
where we used the fact that $0 < \rho_{K_{\mathrm{th}}} \le \bar{\rho}(P)$. Thus, to obtain an upper bound on $K_{\mathrm{OL}}$ we write:
$$(K_{\mathrm{OL}} - K_{\mathrm{th}}) F_4(P) \ge \log\frac{D}{\sigma_s^2} - \log\frac{2-\rho_z-\rho_s}{2-\rho_z-\bar{\rho}(P)} - F_3(P). \tag{A30}$$
Finally, plugging $\rho_{K_{\mathrm{th}}}$ instead of $\rho_{K_{\mathrm{OL}}}$ into (A19), we obtain an upper bound on $K_{\mathrm{th}}$. Since the function $\frac{(2-\rho_z-x)(1+\rho_s)}{(2-\rho_z-\rho_s)(1+x)}$ in (A19) monotonically decreases with $x$, using the lower bound $\rho_{K_{\mathrm{th}}} \ge 0$, we obtain an explicit upper bound on $K_{\mathrm{th}}$. Combining this upper bound on $K_{\mathrm{th}}$ with (A30), we obtain the following upper bound on $K_{\mathrm{OL}}$:
$$K_{\mathrm{OL}} \le \left( \log\frac{D(2-\rho_z-\bar{\rho}(P))}{\sigma_s^2(2-\rho_z-\rho_s)} - F_3(P) \right)\frac{1}{F_4(P)} + \frac{2\sigma_z^2}{P} \cdot \frac{1}{3-\rho_z}\log\frac{(2-\rho_z)(1+\rho_s)}{2-\rho_z-\rho_s} + \frac{2\sigma_z^2}{P}\big( F_1(P) + F_2(P) \big), \tag{A31}$$
where, since $F_4(P) < 0$, dividing by $F_4(P)$ reverses the direction of the inequality. This concludes the proof.

References

  1. Jain, A.; Gündüz, D.; Kulkarni, S.R.; Poor, H.V.; Verdú, S. Energy-distortion tradeoffs in Gaussian joint source-channel coding problems. IEEE Trans. Inf. Theory 2012, 58, 3153–3168. [Google Scholar] [CrossRef]
  2. Alagöz, F.; Gür, G. Energy efficiency and satellite networking: A holistic overview. Proc. IEEE 2011, 99, 1954–1979. [Google Scholar] [CrossRef]
  3. Rault, T.; Bouabdallah, A.; Challal, Y. Energy efficiency in wireless sensor networks: A top-down survey. Comput. Netw. 2014, 67, 104–122. [Google Scholar] [CrossRef]
  4. He, T.; Krishnamurthy, S.; Stankovic, J.A.; Abdelzaher, T.; Luo, L.; Stoleru, R.; Yan, T.; Gu, L. Energy-efficient surveillance system using wireless sensor networks. In Proceedings of the International Conference on Mobile Systems, Applications, and Services (MobiSys), Boston, MA, USA, 6–9 June 2004. [Google Scholar]
  5. Blessy, J.; Anpalagan, A. Body area sensor networks: Requirements, operations, and challenges. IEEE Potentials 2014, 33, 21–25. [Google Scholar]
  6. Hanson, M.A.; Powell, H.C., Jr.; Barth, A.T.; Ringgenberg, K.; Calhoun, B.H.; Aylor, J.H.; Lach, J. Body area sensor networks: Challenges and opportunities. Computer 2009, 42, 58–65. [Google Scholar] [CrossRef]
  7. Ntouni, G.D.; Lioumpas, A.S.; Nikita, K.S. Reliable and energy-efficient communications for wireless biomedical implant systems. IEEE J. Biomed. Health Inform. 2014, 18, 1848–1856. [Google Scholar] [CrossRef] [PubMed]
  8. Guha, A.N. Joint Network-Channel-Correlation Decoding in Wireless Body Area Networks. Ph.D. Thesis, University of Nebraska, Lincoln, NE, USA, May 2014. [Google Scholar]
  9. Deligiannis, N.; Zimos, E.; Ofrim, D.M.; Andreopoulos, Y.; Munteanu, A. Distributed joint source-channel coding with raptor codes for correlated data gathering in wireless sensor networks. In Proceedings of the International Conference on Body Area Networks (BodyNets), London, UK, 29 September–1 October 2014. [Google Scholar]
  10. Brandão, P. Abstracting Information on Body Area Networks; Technical Report; University of Cambridge: Cambridge, UK, 2012. [Google Scholar]
  11. Shannon, C.E. Coding theorems for a discrete source with a fidelity criterion. IRE Nat. Conv. Rec. 1959, 4, 142–163. [Google Scholar]
  12. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 1st ed.; John Wiley & Sons: New York, NY, USA, 1991. [Google Scholar]
  13. Verdú, S. On channel capacity per unit cost. IEEE Trans. Inf. Theory 1990, 36, 1019–1030. [Google Scholar] [CrossRef]
  14. Kostina, V.; Polyanskiy, Y.; Verdú, S. Joint source-channel coding with feedback. In Proceedings of the 2015 IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015. [Google Scholar]
  15. Köken, E.; Tuncel, E.; Gündüz, D. Energy-distortion exponents in lossy transmission of Gaussian sources over Gaussian channels. IEEE Trans. Inf. Theory 2017, 63, 1227–1236. [Google Scholar] [CrossRef]
  16. Murin, Y.; Dabora, R.; Gündüz, D. Source-channel coding theorems for the multiple-access relay channel. IEEE Trans. Inf. Theory 2013, 59, 5446–5465. [Google Scholar] [CrossRef]
  17. Gündüz, D.; Erkip, E.; Goldsmith, A.; Poor, H.V. Source and channel coding for correlated sources over multiuser channels. IEEE Trans. Inf. Theory 2009, 55, 3927–3944. [Google Scholar] [CrossRef]
  18. Tian, C.; Chen, J.; Diggavi, S.N.; Shamai, S. Optimality and approximate optimality of source-channel separation in networks. IEEE Trans. Inf. Theory 2014, 60, 904–918. [Google Scholar] [CrossRef]
  19. Tian, C.; Diggavi, S.N.; Shamai, S. The achievable distortion region of sending a bivariate Gaussian source on the Gaussian broadcast channel. IEEE Trans. Inf. Theory 2011, 57, 6419–6427. [Google Scholar] [CrossRef]
  20. Koken, E.; Tuncel, E. Joint source-channel coding for broadcasting correlated sources. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016. [Google Scholar]
  21. Ji, W.; Li, Z.; Chen, Y. Joint source-channel coding and optimization for layered video broadcasting to heterogeneous devices. IEEE Trans. Multimed. 2012, 14, 443–455. [Google Scholar] [CrossRef]
  22. Wu, J.; Shang, Y.; Huang, J.; Zhang, X.; Cheng, B.; Chen, J. Joint source-channel coding and optimization for mobile video streaming in heterogeneous wireless networks. J. Wirel. Commun. Netw. 2013, 2013. [Google Scholar] [CrossRef]
  23. Bovik, A. Handbook of Image and Video Processing, 2nd ed.; Elsevier Academic Press: San Diego, CA, USA, 2005. [Google Scholar]
  24. Mohassel, R.R.; Fung, A.S.; Mohammadi, F.; Raahemifar, K. A survey on advanced metering infrastructure and its application in smart grids. In Proceedings of the IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), Toronto, ON, Canada, 4–7 May 2014. [Google Scholar]
  25. Ong, L.; Motani, M. Coding strategies for multiple-access channels with feedback and correlated sources. IEEE Trans. Inf. Theory 2007, 53, 3476–3497. [Google Scholar] [CrossRef]
  26. Lapidoth, A.; Tinguely, S. Sending a bivariate Gaussian source over a Gaussian MAC with feedback. IEEE Trans. Inf. Theory 2010, 56, 1852–1864. [Google Scholar] [CrossRef]
  27. Jiang, N.; Yang, Y.; Høst-Madsen, A.; Xiong, Z. On the minimum energy of sending correlated sources over the Gaussian MAC. IEEE Trans. Inf. Theory 2014, 60, 6254–6275. [Google Scholar] [CrossRef]
  28. Ozarow, L.H.; Leung-Yan-Cheong, S. An achievable region and outer bound for the Gaussian broadcast channel with feedback. IEEE Trans. Inf. Theory 1984, 30, 667–671. [Google Scholar] [CrossRef]
  29. Schalkwijk, J.P.M.; Kailath, T. A coding scheme for additive white noise channels with feedback—Part I: No bandwidth constraint. IEEE Trans. Inf. Theory 1966, 12, 172–182. [Google Scholar] [CrossRef]
  30. Murin, Y.; Kaspi, Y.; Dabora, R. On the Ozarow-Leung scheme for the Gaussian broadcast channel with feedback. IEEE Signal Proc. Lett. 2015, 22, 948–952. [Google Scholar] [CrossRef]
  31. Elia, N. When Bode meets Shannon: Control oriented feedback communication schemes. IEEE Trans. Automat. Control 2004, 49, 1477–1488. [Google Scholar] [CrossRef]
  32. Ardestanizadeh, E.; Minero, P.; Franceschetti, M. LQG control approach to Gaussian broadcast channels with feedback. IEEE Trans. Inf. Theory 2012, 58, 5267–5278. [Google Scholar] [CrossRef]
  33. Amor, S.B.; Steinberg, Y.; Wigger, M. MIMO MAC-BC duality with linear-feedback coding schemes. IEEE Trans. Inf. Theory 2015, 61, 5976–5998. [Google Scholar] [CrossRef]
  34. Amor, S.B.; Wigger, M. Linear-feedback MAC-BC duality for correlated BC-noises, and iterative coding. In Proceedings of the 53rd Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 29 September–2 October 2015. [Google Scholar]
  35. Wu, Y.; Minero, P.; Wigger, M. Insufficiency of linear-feedback schemes in Gaussian broadcast channels with common message. IEEE Trans. Inf. Theory 2014, 60, 4553–4566. [Google Scholar] [CrossRef]
  36. Murin, Y.; Kaspi, Y.; Dabora, R.; Gündüz, D. Finite-length linear schemes for joint source-channel coding over Gaussian broadcast channels with feedback. IEEE Trans. Inf. Theory 2017, 63, 2737–2772. [Google Scholar] [CrossRef]
  37. Ben-Yishai, A.; Shayevitz, O. Interactive schemes for the AWGN channel with noisy feedback. IEEE Trans. Inf. Theory 2017, 63, 2409–2427. [Google Scholar] [CrossRef]
  38. Ben-Yishai, A.; Shayevitz, O. The AWGN BC with MAC feedback: a reduction to noiseless feedback via interaction. In Proceedings of the 2015 IEEE Information Theory Workshop, Jerusalem, Israel, 26 April–1 May 2015. [Google Scholar]
  39. Behroozi, H.; Alajaji, F.; Linder, T. On the performance of hybrid digital/analog coding for broadcasting correlated Gaussian sources. IEEE Trans. Commun. 2011, 59, 3335–3343. [Google Scholar] [CrossRef]
  40. Lapidoth, A.; Tinguely, S. Sending a bivariate Gaussian source over a Gaussian MAC. IEEE Trans. Inf. Theory 2010, 56, 2714–2752. [Google Scholar] [CrossRef]
  41. Xiao, J.J.; Luo, Z.Q. Compression of correlated Gaussian sources under individual distortion criteria. In Proceedings of the 43rd Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 28–30 September 2005. [Google Scholar]
  42. Bross, S.I.; Lapidoth, A.; Tinguely, S. Broadcasting correlated Gaussians. IEEE Trans. Inf. Theory 2010, 56, 3057–3068. [Google Scholar] [CrossRef]
  43. Lapidoth, A.; Telatar, İ.E.; Urbanke, R. On wide-band broadcast channels. IEEE Trans. Inf. Theory 2003, 49, 3250–3258. [Google Scholar] [CrossRef]
  44. Gastpar, M. Cut-set arguments for source-channel networks. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Chicago, IL, USA, 27 June–2 July 2004. [Google Scholar]
  45. Gallager, R.G.; Nakiboğlu, B. Variations on a theme by Schalkwijk and Kailath. IEEE Trans. Inf. Theory 2010, 56, 6–17. [Google Scholar] [CrossRef]
  46. Murin, Y.; Kaspi, Y.; Dabora, R.; Gündüz, D. On the transmission of a bivariate Gaussian source over the Gaussian broadcast channel with feedback. In Proceedings of the 2015 IEEE Information Theory Workshop, Jerusalem, Israel, 26 April–1 May 2015. [Google Scholar]
  47. Murin, Y.; Kaspi, Y.; Dabora, R.; Gündüz, D. Energy-distortion tradeoff for the Gaussian broadcast channel with feedback. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016. [Google Scholar]
  48. El Gamal, A.; Mohseni, M.; Zahedi, S. Bounds on capacity and minimum energy-per-bit for AWGN relay channels. IEEE Trans. Inf. Theory 2006, 52, 1545–1561. [Google Scholar] [CrossRef]
  49. Wilf, H.S. Budan’s theorem for a class of entire functions. Proc. Am. Math. Soc. 1962, 13, 122–125. [Google Scholar]
  50. Bronshtein, I.N.; Semendyayev, K.A.; Musiol, G.; Muehlig, H. Handbook of Mathematics, 5th ed.; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
Figure 1. Gaussian broadcast channel with correlated sources and feedback links. $\hat{S}_{1,1}^m$ and $\hat{S}_{2,1}^m$ are the reconstructions of $S_{1,1}^m$ and $S_{2,1}^m$, respectively.
Figure 2. Numerical results. (a) Upper and lower bounds on $E(D)$ for $\sigma_s^2 = \sigma_z^2 = 1$ and $\rho_z = 0.5$. Solid lines correspond to $\rho_s = 0.9$, while dashed lines correspond to $\rho_s = 0.2$. (b) Upper and lower bounds on $E(D)$ for $\sigma_s^2 = \sigma_z^2 = 1$, $\rho_s = 0.8$. Solid lines correspond to $\rho_z = 0.9$, while dashed lines correspond to $\rho_z = -0.9$. (c) Normalized excess energy requirement of the OL scheme over the SSCC-$\rho_s$ scheme, $\rho_z = 0.5$. (d) Normalized excess energy requirement of the SSCC-$\rho_z$ scheme over the OL scheme, $\rho_z = 0.5$.
