Article

A Capacity-Achieving Feedback Scheme of the Gaussian Multiple-Access Channel with Degraded Message Sets

1 School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China
2 The State Key Laboratory of Integrated Services Networks, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2021, 23(6), 756; https://doi.org/10.3390/e23060756
Submission received: 28 April 2021 / Revised: 4 June 2021 / Accepted: 11 June 2021 / Published: 16 June 2021

Abstract

The classical Schalkwijk–Kailath (SK) scheme for the point-to-point white Gaussian channel with noiseless feedback plays an important role in information theory because it is capacity-achieving and its encoding–decoding complexity is extremely low. In recent years, it has been shown that an extended SK feedback scheme also achieves the capacity region of the two-user Gaussian multiple-access channel with noiseless feedback (GMAC-NF), where two independent messages are encoded by their respective transmitters. However, for the two-user GMAC-NF with degraded message sets (one common message for both users and one private message for an intended user), a capacity-achieving feedback scheme has remained open. In this paper, we propose a novel two-step SK-type feedback scheme for the two-user GMAC-NF with degraded message sets and show that this scheme is capacity-achieving.

1. Introduction

Shannon showed that feedback does not increase the capacity of the discrete memoryless channel (DMC) [1]. Later, Schalkwijk and Kailath [2] proposed a feedback coding scheme, called the SK scheme, and showed that it is capacity-achieving and greatly improves the encoding–decoding performance of the point-to-point white Gaussian channel. Following the landmark paper [2], Ozarow [3] extended the classic SK scheme to a multiple-access situation, namely, the two-user Gaussian multiple-access channel (GMAC) with noiseless feedback (GMAC-NF), where two independent messages are encoded by their respective transmitters. Similar to [2], Ozarow showed that this extended scheme is also capacity-achieving, i.e., it achieves the capacity region of the two-user GMAC-NF. In work parallel to [3], the classic SK scheme [2] was extended to a broadcast situation, namely, the Gaussian broadcast channel with noiseless feedback (GBC-NF): Ref. [4] proposed an SK-type feedback scheme for the GBC-NF, but unfortunately this scheme is not capacity-achieving. Other applications of SK schemes include the following:
  • Weissman and Merhav [5] proposed an SK-type feedback scheme for the dirty paper channel with noiseless feedback and showed that this scheme is capacity-achieving. Subsequently, Rosenzweig [6] extended the SK-type scheme of [5] to multiple-access and broadcast situations.
  • Kim extended the SK scheme to colored (non-white) Gaussian channels with noiseless feedback [7,8] and showed that these extended schemes are also capacity-achieving for some special cases.
  • Bross [9] extended the SK-type scheme to the Gaussian relay channel with noiseless feedback and showed that the proposed scheme is better than the ones already existing in the literature.
  • Ben-Yishai and Shayevitz [10] studied the SK-type scheme for the white Gaussian channel with noisy feedback and showed that a variation of the SK scheme achieves rates approaching the capacity in some special cases.
In this paper, we revisit the two-user GMAC-NF by considering the case in which one common message for both users and one private message for a single intended user are transmitted through the channel (see Figure 1); this model is also called the two-user GMAC with degraded message sets and noiseless feedback (GMAC-DMS-NF). Note that the two-user GMAC with degraded message sets is especially useful when considering partial cooperation between the encoders of the GMAC. Although it has already been shown that feedback does not increase the capacity region of the two-user GMAC with degraded message sets, a capacity-achieving SK-type feedback scheme for this model has remained open. In this paper, a novel SK-type feedback scheme is proposed for the model of Figure 1, and it is proven to be capacity-achieving.
The rest of this paper is organized as follows. Section 2 introduces some preliminary results for the SK scheme and the GMAC with degraded message sets. Model formulation and the main results are given in Section 3. Conclusions and future work are given in Section 4.

2. Preliminaries

In this section, we introduce the SK schemes for the point-to-point white Gaussian channel with feedback and the two-user GMAC-NF.
Basic notation: A random variable (RV) is denoted by an upper-case letter (e.g., X), its value by a lower-case letter (e.g., x), the finite alphabet of the RV by a calligraphic letter (e.g., $\mathcal{X}$), and the probability of an event $\{X = x\}$ by $P(x)$. A random vector and its value are denoted by a similar convention. For example, $X^N$ represents an N-dimensional random vector $(X_1,\ldots,X_N)$, and $x^N = (x_1,\ldots,x_N)$ represents a vector value in $\mathcal{X}^N$ (the N-th Cartesian power of the finite alphabet $\mathcal{X}$). In addition, define $A_j^N = (A_{j,1},A_{j,2},\ldots,A_{j,N})$ and $a_j^N = (a_{j,1},a_{j,2},\ldots,a_{j,N})$. Throughout this paper, the base of the log function is 2.

2.1. The SK Scheme for the Point-to-Point White Gaussian Channel with Feedback

For the white Gaussian channel with feedback (see Figure 2), at time instant i ($i \in \{1,2,\ldots,N\}$), the channel input and output are given by
$$Y_i = X_i + \eta_i,$$
where $Y_i$ is the channel output, $X_i$ is the channel input subject to the average power constraint $\frac{1}{N}\sum_{i=1}^{N}E[X_i^2] \leq P$, and $\eta_i \sim \mathcal{N}(0,\sigma^2)$ is the white Gaussian noise, i.i.d. across the time index i. The message W is uniformly chosen from the set $\mathcal{W} = \{1,2,\ldots,|\mathcal{W}|\}$, and the i-th channel input $X_i$ is a function of the message W and the feedback $Y^{i-1}$, namely,
$$X_i = f_i(W, Y^{i-1}).$$
The receiver obtains an estimation $\hat{W} = \psi(Y^N)$, where $\psi$ is the receiver's decoding function, and the average decoding error probability is given by
$$P_e = \frac{1}{|\mathcal{W}|}\sum_{w\in\mathcal{W}}\Pr\{\psi(Y^N) \neq w \mid W = w\}.$$
For a given positive rate R, if for arbitrarily small $\epsilon > 0$ and sufficiently large N there exists a channel encoder–decoder pair such that
$$\frac{\log|\mathcal{W}|}{N} \geq R - \epsilon,\qquad P_e \leq \epsilon,$$
we say that R is achievable. The channel capacity is the maximum over all achievable rates. Denote the capacity of the white Gaussian channel with feedback by $C_{gf}$. Since feedback does not increase the capacity of the memoryless Gaussian channel, it is easy to see that
$$C_{gf} = C_g = \frac{1}{2}\log\left(1+\frac{P}{\sigma^2}\right),$$
where $C_g$ is the capacity of the white Gaussian channel (the same model without feedback).
In [2], it has been shown that the classical SK scheme achieves $C_{gf}$, and it has the following two properties:
  • At time 1, the channel input is a deterministic function of the transmitted message.
  • From time 2 to N, the channel input is a scaled version of the receiver's current estimation error, i.e., a linear combination of the previous channel noise.
The details of the SK scheme are described below.
Let $\mathcal{W} = \{1,2,\ldots,2^{NR}\}$ be the message set of W, divide the interval $[-0.5, 0.5]$ into $2^{NR}$ equally spaced sub-intervals, and let each sub-interval center correspond to a value in $\mathcal{W}$. The center of the sub-interval with respect to (w.r.t.) W is denoted by $\theta$, where the variance of $\theta$ approximately equals $\frac{1}{12}$.
At time 1, the transmitter transmits
$$X_1 = \sqrt{12P}\,\theta.$$
The receiver receives $Y_1 = X_1 + \eta_1$ and obtains an estimation of $\theta$ by computing
$$\hat{\theta}_1 = \frac{Y_1}{\sqrt{12P}} = \theta + \frac{\eta_1}{\sqrt{12P}} = \theta + \epsilon_1,$$
where $\epsilon_1 = \hat{\theta}_1 - \theta = \frac{\eta_1}{\sqrt{12P}}$ and $\alpha_1 \triangleq \mathrm{Var}(\epsilon_1) = \frac{\sigma^2}{12P}$.
At time k ($2 \leq k \leq N$), the receiver obtains $Y_k = X_k + \eta_k$ and updates its estimation of $\theta$ by computing
$$\hat{\theta}_k = \hat{\theta}_{k-1} - \frac{E[Y_k\epsilon_{k-1}]}{E[Y_k^2]}\,Y_k,$$
where
$$\epsilon_k = \hat{\theta}_k - \theta.$$
Combining the above two equations, (9) yields
$$\epsilon_k = \epsilon_{k-1} - \frac{E[Y_k\epsilon_{k-1}]}{E[Y_k^2]}\,Y_k.$$
At time k ($k \in \{2,3,\ldots,N\}$), the transmitter sends
$$X_k = \sqrt{\frac{P}{\alpha_{k-1}}}\,\epsilon_{k-1},$$
where $\alpha_{k-1} \triangleq \mathrm{Var}(\epsilon_{k-1})$.
In [2], it has been shown that the decoding error probability $P_e$ of the above coding scheme is upper-bounded by
$$P_e \leq \Pr\left\{|\epsilon_N| > \frac{1}{2(|\mathcal{W}|-1)}\right\} \leq 2Q\left(\frac{1}{2\cdot 2^{NR}}\cdot\frac{1}{\sqrt{\alpha_N}}\right),$$
where $Q(x)$ is the tail of the unit Gaussian distribution evaluated at x, and
$$\alpha_N = \frac{\sigma^2}{12P}\left(\frac{\sigma^2}{P+\sigma^2}\right)^{N-1}.$$
From (12) and (13), we conclude that if
$$R < \frac{1}{2}\log\left(1+\frac{P}{\sigma^2}\right) = C_g,$$
then $P_e \to 0$ as $N \to \infty$.
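For readers who wish to experiment with the recursion above, the following Python sketch simulates the classical SK scheme; the function name and interface are our own illustration, not part of [2]. The error variance is tracked with the closed-form recursion (13).

```python
import math
import random

def sk_scheme(theta, P, sigma2, N, rng):
    """Simulate one run of the classical SK scheme for a message point
    theta in [-0.5, 0.5]; returns the final error eps_N and variance alpha_N."""
    # Time 1: transmit the scaled message point.
    y = math.sqrt(12 * P) * theta + rng.gauss(0.0, math.sqrt(sigma2))
    eps = y / math.sqrt(12 * P) - theta          # epsilon_1
    alpha = sigma2 / (12 * P)                    # alpha_1 = Var(epsilon_1)
    # Times 2..N: retransmit the scaled estimation error (known via feedback).
    for _ in range(2, N + 1):
        y = math.sqrt(P / alpha) * eps + rng.gauss(0.0, math.sqrt(sigma2))
        # MMSE update: E[Y_k eps_{k-1}] = sqrt(P alpha), E[Y_k^2] = P + sigma2.
        eps -= math.sqrt(P * alpha) / (P + sigma2) * y
        alpha *= sigma2 / (P + sigma2)           # variance recursion (13)
    return eps, alpha

rng = random.Random(1)
eps_N, alpha_N = sk_scheme(0.3, P=1.0, sigma2=1.0, N=20, rng=rng)
```

With $P = \sigma^2$, the error variance is halved at every channel use, so after 20 uses the estimate of $\theta$ is already accurate to a few parts in ten thousand.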

2.2. The Two-User GMAC with Degraded Message Sets

The channel model consisting of two inputs, one output, and additive Gaussian channel noise is called the GMAC. If, in addition, the message $W_1$ is known by both encoder 1 and encoder 2 while the message $W_2$ is known only by encoder 2, the model is called the GMAC-DMS.
The GMAC with degraded message sets is shown in Figure 3. At time i ($i\in\{1,2,\ldots,N\}$), the channel inputs and output are given by
$$Y_i = X_{1,i} + X_{2,i} + \eta_i,$$
where $X_{1,i}$ and $X_{2,i}$ are the channel inputs, subject to the average power constraints $\frac{1}{N}\sum_{i=1}^{N}E[X_{1,i}^2] \leq P_1$ and $\frac{1}{N}\sum_{i=1}^{N}E[X_{2,i}^2] \leq P_2$, respectively, $Y_i$ is the channel output, and $\eta_i \sim \mathcal{N}(0,\sigma^2)$ is Gaussian noise, i.i.d. across i. The message $W_j$ (j = 1, 2) is uniformly drawn from the set $\mathcal{W}_j = \{1,2,\ldots,|\mathcal{W}_j|\}$. The input $X_1^N$ is a function of the message $W_1$ only, while the input $X_2^N$ is a function of both messages $W_1$ and $W_2$. After receiving the channel output, the receiver computes $(\hat{W}_1,\hat{W}_2) = \psi(Y^N)$ for decoding, where $\psi$ is the receiver's decoding function. The average decoding error probability is defined as
$$P_e = \frac{1}{|\mathcal{W}_1|\,|\mathcal{W}_2|}\sum_{w_1\in\mathcal{W}_1,\,w_2\in\mathcal{W}_2}\Pr\{\psi(Y^N) \neq (w_1,w_2) \mid (w_1,w_2)\}.$$
A rate pair $(R_1,R_2)$ is said to be achievable if, for arbitrarily small $\epsilon > 0$ and sufficiently large N, there exist channel encoders and a decoder such that
$$\frac{\log|\mathcal{W}_1|}{N} \geq R_1 - \epsilon,\qquad \frac{\log|\mathcal{W}_2|}{N} \geq R_2 - \epsilon,\qquad P_e \leq \epsilon.$$
The capacity region of the GMAC-DMS, denoted by $\mathcal{C}_{gmac\text{-}dms}$, is composed of all such achievable rate pairs.
Theorem 1.
The capacity region $\mathcal{C}_{gmac\text{-}dms}$ is given by
$$\mathcal{C}_{gmac\text{-}dms} = \bigcup_{0\leq\rho\leq1}\left\{(R_1\geq0,\,R_2\geq0):\; R_2 \leq \frac{1}{2}\log\left(1+\frac{P_2(1-\rho^2)}{\sigma^2}\right),\; R_1+R_2 \leq \frac{1}{2}\log\left(1+\frac{P_1+P_2+2\sqrt{P_1P_2}\,\rho}{\sigma^2}\right)\right\}.$$
Proof. 
Achievability proof of $\mathcal{C}_{gmac\text{-}dms}$: From [11], the capacity region $\mathcal{C}_{mac\text{-}dms}$ of the discrete memoryless multiple-access channel with degraded message sets is given by
$$\mathcal{C}_{mac\text{-}dms} = \{(R_1,R_2):\; R_2 \leq I(X_2;Y|X_1),\; R_1+R_2 \leq I(X_1,X_2;Y)\}$$
for some joint distribution $P(x_1,x_2)$. Then, substituting $X_1 \sim \mathcal{N}(0,P_1)$, $X_2 \sim \mathcal{N}(0,P_2)$ and (15) into (19), defining
$$\rho = \frac{E[X_1X_2]}{\sqrt{P_1P_2}},$$
and following the idea of the encoding–decoding scheme of [11], the achievability of $\mathcal{C}_{gmac\text{-}dms}$ is proved.
Converse proof of C g m a c d m s : The converse proof of C g m a c d m s follows the converse part of GMAC with feedback [3] (see the converse proof of the bounds on R 2 and R 1 + R 2 ), and hence we omit the details here. The proof of Theorem 1 is completed. □
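As a quick numerical companion to Theorem 1, the sketch below evaluates the two bounds of the region for a fixed correlation coefficient $\rho$; the function name is our own illustration, not from the paper.

```python
import math

def gmac_dms_bounds(P1, P2, sigma2, rho):
    """Bounds of Theorem 1 for a fixed rho in [0, 1]:
    returns (R2_max, Rsum_max) in bits per channel use."""
    r2_max = 0.5 * math.log2(1 + P2 * (1 - rho ** 2) / sigma2)
    rsum_max = 0.5 * math.log2(1 + (P1 + P2 + 2 * math.sqrt(P1 * P2) * rho) / sigma2)
    return r2_max, rsum_max
```

At $\rho = 0$ the two inputs are uncorrelated and the private-message bound is largest; at $\rho = 1$ all of user 2's power coherently boosts the sum rate while $R_2 = 0$.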

3. Model Formulation and the Main Results

In this section, we first give a formal definition of the two-user GMAC-DMS-NF and then present the main results of this paper.

3.1. The Two-User GMAC-DMS-NF

The two-user GMAC-DMS-NF is shown in Figure 1. At time i ($i\in\{1,2,\ldots,N\}$), the channel inputs and output are given by
$$Y_i = X_{1,i} + X_{2,i} + \eta_i,$$
where $X_{1,i}$ and $X_{2,i}$ are the channel inputs, subject to the average power constraints $\frac{1}{N}\sum_{i=1}^{N}E[X_{1,i}^2] \leq P_1$ and $\frac{1}{N}\sum_{i=1}^{N}E[X_{2,i}^2] \leq P_2$, respectively, $Y_i$ is the channel output, and $\eta_i \sim \mathcal{N}(0,\sigma^2)$ is Gaussian noise, i.i.d. across i. The message $W_j$ (j = 1, 2) is uniformly drawn from the set $\mathcal{W}_j = \{1,2,\ldots,|\mathcal{W}_j|\}$. At time i, the input $X_{1,i}$ is a function of the common message $W_1$ and the feedback $Y^{i-1}$, and the input $X_{2,i}$ is a function of the common message $W_1$, the private message $W_2$, and the feedback $Y^{i-1}$. After receiving the channel output, the receiver generates an estimation pair $(\hat{W}_1,\hat{W}_2) = \psi(Y^N)$, where $\psi$ is the receiver's decoding function. This model's average decoding error probability equals
$$P_e = \frac{1}{|\mathcal{W}_1|\,|\mathcal{W}_2|}\sum_{w_1\in\mathcal{W}_1,\,w_2\in\mathcal{W}_2}\Pr\{\psi(Y^N) \neq (w_1,w_2) \mid (w_1,w_2)\}.$$
The capacity region of the two-user GMAC-DMS-NF is denoted by $\mathcal{C}_{gmac\text{-}dms\text{-}f}$, and it is characterized in the following Theorem 2.
Theorem 2.
$\mathcal{C}_{gmac\text{-}dms\text{-}f} = \mathcal{C}_{gmac\text{-}dms}$, where $\mathcal{C}_{gmac\text{-}dms}$ is given in Theorem 1.
Proof. 
From the converse proof of the bounds on $R_2$ and $R_1+R_2$ in [3], we conclude that
$$\mathcal{C}_{gmac\text{-}dms\text{-}f} \subseteq \mathcal{C}_{gmac\text{-}dms}.$$
On the other hand, since the non-feedback model is a special case of the feedback model, we have
$$\mathcal{C}_{gmac\text{-}dms} \subseteq \mathcal{C}_{gmac\text{-}dms\text{-}f}.$$
The proof of Theorem 2 is completed. □

3.2. A Capacity-Achieving SK-Type Scheme for the Two-User GMAC-DMS-NF

In this subsection, we propose a two-step SK-type scheme for the two-user GMAC-DMS-NF and show that this scheme is capacity-achieving, i.e., it achieves $\mathcal{C}_{gmac\text{-}dms\text{-}f}$. The new scheme is briefly depicted in Figure 4.
The common message $W_1$ is encoded by both transmitters, and the private message $W_2$ is only available at Transmitter 2. Transmitter 1 uses power $P_1$ to encode $W_1$ and the feedback as $X_1^N$. Transmitter 2 uses power $(1-\rho^2)P_2$ to encode $W_2$ and the feedback as $V^N$, and power $\rho^2P_2$ to encode $W_1$ and the feedback as $U^N$, where $0\leq\rho\leq1$. Here, note that since $W_1$ is known by Transmitter 2, the codewords $X_1^N$ and $U^N$ can be subtracted when applying the SK scheme to $W_2$; i.e., for the SK scheme of $W_2$, the equivalent channel model has input $V^N$, output $Y'^N = Y^N - X_1^N - U^N$, and channel noise $\eta^N$.
In addition, since $W_1$ is known by both transmitters and $W_2$ is only available at Transmitter 2, for the SK scheme of $W_1$, the equivalent channel model has inputs $X_1^N$ and $U^N$, output $Y^N$, and channel noise $\eta^N + V^N$, which is non-white Gaussian noise since $V^N$ is not i.i.d. generated. Furthermore, we observe that
$$Y_i = X_{1,i} + U_i + V_i + \eta_i = X_i^* + V_i + \eta_i,$$
where $X_i^* = X_{1,i} + U_i$ is Gaussian-distributed with zero mean and variance $P_i^*$,
$$P_i^* = E[(X_i^*)^2] = P_1 + \rho^2P_2 + 2\sqrt{P_1P_2}\,\rho\rho_i \leq P_1 + \rho^2P_2 + 2\sqrt{P_1P_2}\,\rho = P^*,$$
where
$$\rho_i = \frac{E[X_{1,i}U_i]}{\rho\sqrt{P_1P_2}},$$
and $0 \leq \rho_i \leq 1$. Hence, for the SK scheme of $W_1$, the input of the equivalent channel model can be viewed as $X_i^* \sim \mathcal{N}(0,P_i^*)$. Define
$$U_i = \rho\sqrt{\frac{P_2}{P_1}}\,X_{1,i}.$$
Then we have $\rho_i = 1$, which leads to
$$P_i^* = P^* = P_1 + \rho^2P_2 + 2\sqrt{P_1P_2}\,\rho,$$
so that $X_i^* \sim \mathcal{N}(0,P^*)$. The encoding and decoding procedure of Figure 4 is described below.
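With the choice $U_i = \rho\sqrt{P_2/P_1}\,X_{1,i}$, the combined power $P^*$ takes the coherent-combining form above. The small sketch below (function name ours, for illustration only) confirms that $P^*$ equals $(\sqrt{P_1}+\rho\sqrt{P_2})^2$, the power of two fully aligned signals.

```python
import math

def p_star(P1, P2, rho):
    """Combined power of X* = X1 + U when U = rho*sqrt(P2/P1)*X1 (rho_i = 1)."""
    return P1 + rho ** 2 * P2 + 2 * math.sqrt(P1 * P2) * rho
```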

3.2.1. Encoding Procedure for the Two-Step SK-Type Scheme

Define $\mathcal{W}_j = \{1,2,\ldots,2^{NR_j}\}$ and divide the interval $[-0.5, 0.5]$ into $2^{NR_j}$ equally spaced sub-intervals. The center of each sub-interval is mapped to a message value in $\mathcal{W}_j$ (j = 1, 2). Let $\theta_j$ be the center of the sub-interval w.r.t. the message $W_j$ (the variance of $\theta_j$ approximately equals $\frac{1}{12}$).
At time 1, Transmitter 2 sends
$$V_1 = \sqrt{12(1-\rho^2)P_2}\,\theta_2.$$
Transmitter 1 and Transmitter 2, respectively, send $X_{1,1}$ and $U_1 = \rho\sqrt{P_2/P_1}\,X_{1,1}$ such that
$$X_1^* = U_1 + X_{1,1} = \sqrt{12P^*}\,\theta_1.$$
The receiver obtains $Y_1 = V_1 + X_{1,1} + U_1 + \eta_1$ and sends $Y_1$ back to the transmitters. Subtracting $X_{1,1}$ and $U_1$ from $Y_1$ and letting $Y_1' = V_1 + \eta_1$, Transmitter 2 computes
$$\frac{Y_1'}{\sqrt{12(1-\rho^2)P_2}} = \theta_2 + \frac{\eta_1}{\sqrt{12(1-\rho^2)P_2}} = \theta_2 + \epsilon_1',$$
and defines $\alpha_1' \triangleq \mathrm{Var}(\epsilon_1') = \frac{\sigma^2}{12(1-\rho^2)P_2}$. Since $Y_1 = X_1^* + V_1 + \eta_1$, Transmitter 1 computes
$$\frac{Y_1}{\sqrt{12P^*}} = \frac{U_1 + X_{1,1} + V_1 + \eta_1}{\sqrt{12P^*}} = \theta_1 + \frac{V_1+\eta_1}{\sqrt{12P^*}} = \theta_1 + \epsilon_1,$$
and defines
$$\alpha_1 \triangleq \mathrm{Var}(\epsilon_1) = \frac{\sigma^2 + (1-\rho^2)P_2}{12P^*}.$$
At time 2, Transmitter 2 sends
$$V_2 = \sqrt{\frac{(1-\rho^2)P_2}{\alpha_1'}}\,\epsilon_1'.$$
In the meantime, Transmitter 1 and Transmitter 2, respectively, send $X_{1,2}$ and $U_2 = \rho\sqrt{P_2/P_1}\,X_{1,2}$ such that
$$X_2^* = U_2 + X_{1,2} = \sqrt{\frac{P^*}{\alpha_1}}\,\epsilon_1.$$
At time k ($3 \leq k \leq N$), once it has received $Y_{k-1} = X_{1,k-1} + U_{k-1} + V_{k-1} + \eta_{k-1}$, Transmitter 2 computes
$$\epsilon_{k-1}' = \epsilon_{k-2}' - \frac{E[(Y_{k-1}-X_{1,k-1}-U_{k-1})\,\epsilon_{k-2}']}{E[(Y_{k-1}-X_{1,k-1}-U_{k-1})^2]}\,(Y_{k-1}-X_{1,k-1}-U_{k-1}),$$
and sends
$$V_k = \sqrt{\frac{(1-\rho^2)P_2}{\alpha_{k-1}'}}\,\epsilon_{k-1}',$$
where $\alpha_{k-1}' \triangleq \mathrm{Var}(\epsilon_{k-1}')$. In the meantime, Transmitters 1 and 2, respectively, send $X_{1,k}$ and $U_k = \rho\sqrt{P_2/P_1}\,X_{1,k}$ such that
$$X_k^* = U_k + X_{1,k} = \sqrt{\frac{P^*}{\alpha_{k-1}}}\,\epsilon_{k-1},$$
where
$$\epsilon_{k-1} = \epsilon_{k-2} - \frac{E[Y_{k-1}\epsilon_{k-2}]}{E[Y_{k-1}^2]}\,Y_{k-1},$$
and $\alpha_{k-1} \triangleq \mathrm{Var}(\epsilon_{k-1})$.

3.2.2. Decoding Procedure for the Two-Step SK-Type Scheme

The decoding procedure for the receiver consists of two steps. First, from (8), we see that at time k ($2 \leq k \leq N$), the receiver's estimation $\hat{\theta}_{1,k}$ of $W_1$ ($\theta_1$) is given by
$$\hat{\theta}_{1,k} = \hat{\theta}_{1,k-1} - \frac{E[Y_k\epsilon_{k-1}]}{E[Y_k^2]}\,Y_k,$$
where $\epsilon_{k-1} = \hat{\theta}_{1,k-1} - \theta_1$ and
$$\hat{\theta}_{1,1} = \frac{Y_1}{\sqrt{12P^*}} = \frac{U_1+X_{1,1}+V_1+\eta_1}{\sqrt{12P^*}} = \theta_1 + \frac{V_1+\eta_1}{\sqrt{12P^*}} = \theta_1 + \epsilon_1.$$
Second, after decoding $W_1$ and the corresponding codewords $X_{1,k}$ and $U_k$ for all $1 \leq k \leq N$, the receiver subtracts $X_{1,k}$ and $U_k$ from $Y_k$ and obtains $Y_k' = V_k + \eta_k$. At time k ($2 \leq k \leq N$), the receiver computes its estimation $\hat{\theta}_{2,k}$ of $W_2$ ($\theta_2$) by
$$\hat{\theta}_{2,k} = \hat{\theta}_{2,k-1} - \frac{E[Y_k'\epsilon_{k-1}']}{E[(Y_k')^2]}\,Y_k',$$
where $\epsilon_{k-1}' = \hat{\theta}_{2,k-1} - \theta_2$ and
$$\hat{\theta}_{2,1} = \frac{Y_1'}{\sqrt{12(1-\rho^2)P_2}} = \theta_2 + \frac{\eta_1}{\sqrt{12(1-\rho^2)P_2}} = \theta_2 + \epsilon_1'.$$
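The complete encoding–decoding loop of Sections 3.2.1 and 3.2.2 can be sketched as a short Monte Carlo simulation. This is our own illustration, not code from the paper: the function name and the moment-tracking bookkeeping (which uses the ensemble moments of the appendix, (A2), (A11), and the exact second-moment update behind (A14)) are assumptions of the sketch. Both estimation errors contract to zero as N grows.

```python
import math
import random

def two_step_sk(theta1, theta2, P1, P2, sigma2, rho, N, rng):
    """One run of the two-step SK-type scheme; returns the receiver's final
    estimation errors (eps_N for theta1, epsp_N for theta2)."""
    r2 = (1 - rho ** 2) * P2 + sigma2                          # power of V + eta
    p_st = P1 + rho ** 2 * P2 + 2 * math.sqrt(P1 * P2) * rho   # P*
    sd = math.sqrt(sigma2)
    # Time 1: message points are sent directly.
    v = math.sqrt(12 * (1 - rho ** 2) * P2) * theta2
    xs = math.sqrt(12 * p_st) * theta1                         # X*_1 = X_{1,1} + U_1
    eta = rng.gauss(0.0, sd)
    y = xs + v + eta
    eps = y / math.sqrt(12 * p_st) - theta1                    # W1-stream error
    alpha = r2 / (12 * p_st)                                   # its variance (34)
    epsp = eta / math.sqrt(12 * (1 - rho ** 2) * P2)           # W2-stream error
    alphap = sigma2 / (12 * (1 - rho ** 2) * P2)
    A = sd * math.sqrt((1 - rho ** 2) * P2 / (12 * p_st))      # E[eps_1 eta'_2]
    for _ in range(2, N + 1):
        xs = math.sqrt(p_st / alpha) * eps
        v = math.sqrt((1 - rho ** 2) * P2 / alphap) * epsp
        eta = rng.gauss(0.0, sd)
        y = xs + v + eta
        # W1 stream: MMSE update against the colored noise eta' = V + eta.
        D = p_st + 2 * math.sqrt(p_st / alpha) * A + r2
        c1 = (math.sqrt(p_st / alpha) * A + r2) / D
        c2 = (math.sqrt(p_st * alpha) + A) / D
        eps = eps - c2 * y
        alpha, A = (c1 ** 2 * alpha - 2 * c1 * c2 * A + c2 ** 2 * r2,
                    c1 * (sd / math.sqrt(r2)) * A)
        # W2 stream: classical SK on the subtracted channel Y' = V + eta.
        yp = v + eta
        epsp = epsp - math.sqrt((1 - rho ** 2) * P2 * alphap) / r2 * yp
        alphap = alphap * sigma2 / r2
    return eps, epsp
```

In a typical run with moderate N, both returned errors are already orders of magnitude smaller than the sub-interval width, so both message points are decoded correctly.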
Let $P_{ej}$ (j = 1, 2) denote the receiver's decoding error probability for $W_j$. The overall error probability is then upper-bounded by
$$P_e \leq P_{e1} + P_{e2}.$$
From the classical SK scheme [2] (also introduced in Section 2.1), we know that the decoding error probability $P_{e2}$ of $W_2$ tends to 0 as $N\to\infty$ if
$$R_2 < \frac{1}{2}\log\left(1+\frac{(1-\rho^2)P_2}{\sigma^2}\right),$$
and hence we omit the derivation here. The decoding error probability $P_{e1}$ is upper-bounded by the following Lemma 1.
Lemma 1.
$P_{e1} \to 0$ as $N\to\infty$ if $R_1 < \frac{1}{2}\log\left(1+\frac{P^*}{(1-\rho^2)P_2+\sigma^2}\right)$ is satisfied.
Proof. 
See Appendix A. □
From (46) and Lemma 1, we can conclude that if
$$R_1 < \frac{1}{2}\log\left(1+\frac{P_1+\rho^2P_2+2\sqrt{P_1P_2}\,\rho}{(1-\rho^2)P_2+\sigma^2}\right),\qquad R_2 < \frac{1}{2}\log\left(1+\frac{(1-\rho^2)P_2}{\sigma^2}\right),$$
the decoding error probability $P_e$ of the receiver tends to 0 as $N\to\infty$. In other words, the rate pair
$$R_1 = \frac{1}{2}\log\left(1+\frac{P_1+\rho^2P_2+2\sqrt{P_1P_2}\,\rho}{(1-\rho^2)P_2+\sigma^2}\right),\qquad R_2 = \frac{1}{2}\log\left(1+\frac{(1-\rho^2)P_2}{\sigma^2}\right)$$
is achievable for all $0\leq\rho\leq1$. Note that this pair meets the bound on $R_2$ in Theorem 1 with equality and satisfies $R_1+R_2 = \frac{1}{2}\log\left(1+\frac{P_1+P_2+2\sqrt{P_1P_2}\,\rho}{\sigma^2}\right)$, i.e., it lies on the boundary of the region of Theorem 1, which indicates that all rate pairs $(R_1,R_2)$ in $\mathcal{C}_{gmac\text{-}dms\text{-}f}$ are achievable. Hence, this two-step SK-type feedback scheme achieves the capacity region $\mathcal{C}_{gmac\text{-}dms\text{-}f}$ of the two-user GMAC-DMS-NF.
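The capacity-achieving claim can be double-checked numerically: for every $\rho$, the achieved pair sums exactly to the sum-rate bound of Theorem 1. The sketch below (function names are ours) performs this check.

```python
import math

def achieved_pair(P1, P2, sigma2, rho):
    """Rate pair achieved by the two-step SK-type scheme for a given rho."""
    p_st = P1 + rho ** 2 * P2 + 2 * math.sqrt(P1 * P2) * rho
    R1 = 0.5 * math.log2(1 + p_st / ((1 - rho ** 2) * P2 + sigma2))
    R2 = 0.5 * math.log2(1 + (1 - rho ** 2) * P2 / sigma2)
    return R1, R2

def sum_rate_bound(P1, P2, sigma2, rho):
    """Sum-rate bound of Theorem 1 for the same rho."""
    return 0.5 * math.log2(1 + (P1 + P2 + 2 * math.sqrt(P1 * P2) * rho) / sigma2)
```

The identity $R_1 + R_2 = \frac{1}{2}\log\left(1+\frac{P_1+P_2+2\sqrt{P_1P_2}\,\rho}{\sigma^2}\right)$ holds because the two logarithms telescope: the denominator of the $R_1$ term is the numerator increment of the $R_2$ term.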

4. Conclusions

In this paper, we have proposed a capacity-achieving SK-type feedback scheme for the two-user GMAC-DMS-NF, a problem that had remained open in the literature. The proposed scheme adopts a two-step encoding–decoding procedure, where the common message is encoded as the codeword $X_1^N$ and one part of the codeword $X_2^N$ (namely, $U^N$), the private message is encoded as the other part of the codeword $X_2^N$ (namely, $V^N$), and the SK scheme is applied to the encoding of both the common and the private messages. In the decoding procedure, the receiver first decodes the common message by using the SK decoding scheme while viewing $V^N$ as part of the channel noise. Then, after successfully decoding the common message, the receiver subtracts the corresponding codewords $X_1^N$ and $U^N$ from its received signal $Y^N$ and uses the SK decoding scheme to decode the private message. Note that the proposed two-step SK-type scheme is not a trivial extension of the existing feedback scheme [3] for the two-user GMAC, in which two independent encoders apply the SK scheme to encode their independent messages. In fact, a direct application of Ozarow's SK-type scheme [3] cannot achieve the capacity region of the two-user GMAC-DMS-NF, and hence it is not an optimal choice for this model. Possible future work includes the following:
  • Capacity-achieving SK-type feedback schemes for the fading GMAC.
  • SK-type feedback schemes for the GMAC with noisy feedback.

Author Contributions

H.Y. performed the theoretical work and the experiments, analyzed the data and drafted the work; B.D. designed the work, performed the theoretical work, analyzed the data, interpreted data for the work and revised the work. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key R&D Program of China under Grant 2019YFB1803400; in part by the National Natural Science Foundation of China under Grant 62071392; in part by the Open Research Fund of the State Key Laboratory of Integrated Services Networks, Xidian University, under Grant ISN21-12; and in part by the 111 Project No.111-2-14.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this work are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank the anonymous reviewers for their careful reading of the manuscript and their constructive comments and suggestions, which have helped improve the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SK: Schalkwijk–Kailath
GMAC: Gaussian multiple-access channel
GMAC-NF: Gaussian multiple-access channel with noiseless feedback
DMC: Discrete memoryless channel
GBC-NF: Gaussian broadcast channel with noiseless feedback
GMAC-DMS: Two-user GMAC with degraded message sets
GMAC-DMS-NF: Two-user GMAC with degraded message sets and noiseless feedback

Appendix A

Appendix A proves Lemma 1 described in Section 3. First, we determine the channel noise of the equivalent channel model. For the SK scheme of $W_1$, the equivalent channel model has input $X^{*N} = X_1^N + U^N$, output $Y^N$, and channel noise $\eta^N + V^N$, where $\eta^N + V^N$ is non-white Gaussian because $V^N$ is generated by the classical SK scheme and is a linear combination of the previous channel noise. For $1\leq k\leq N$, define
$$\eta_k' = \eta_k + V_k.$$
Note that
$$E[(\eta_k')^2] = E[(\eta_k+V_k)^2] \overset{(a)}{=} E[\eta_k^2] + E[V_k^2] \overset{(b)}{=} \sigma^2 + (1-\rho^2)P_2,$$
where (a) follows from the fact that $V_k$ is independent of $\eta_k$, since $V_1$ is a function of $\theta_2$ and $V_k$ ($2\leq k\leq N$) is a function of $\eta_1,\ldots,\eta_{k-1}$, and (b) follows from (38). Furthermore, from (37) and (38), $V_k$ can be re-written as
$$\begin{aligned} V_k &= \sqrt{\frac{(1-\rho^2)P_2}{\alpha_{k-1}'}}\,\epsilon_{k-1}' \\ &= \sqrt{\frac{(1-\rho^2)P_2}{\alpha_{k-1}'}}\left(\epsilon_{k-2}' - \frac{E[(V_{k-1}+\eta_{k-1})\epsilon_{k-2}']}{E[(V_{k-1}+\eta_{k-1})^2]}\,(V_{k-1}+\eta_{k-1})\right) \\ &\overset{(c)}{=} \sqrt{\frac{(1-\rho^2)P_2}{\alpha_{k-1}'}}\left(\epsilon_{k-2}' - \frac{\sqrt{(1-\rho^2)P_2\,\alpha_{k-2}'}}{(1-\rho^2)P_2+\sigma^2}\,(V_{k-1}+\eta_{k-1})\right) \\ &= \sqrt{\frac{(1-\rho^2)P_2}{\alpha_{k-2}'}}\,\sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\left(\epsilon_{k-2}' - \frac{\sqrt{(1-\rho^2)P_2\,\alpha_{k-2}'}}{(1-\rho^2)P_2+\sigma^2}\,(V_{k-1}+\eta_{k-1})\right) \\ &\overset{(d)}{=} \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,V_{k-1} - \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{(1-\rho^2)P_2}{(1-\rho^2)P_2+\sigma^2}\,(V_{k-1}+\eta_{k-1}) \\ &= \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{\sigma^2}{(1-\rho^2)P_2+\sigma^2}\,V_{k-1} - \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{(1-\rho^2)P_2}{(1-\rho^2)P_2+\sigma^2}\,\eta_{k-1}, \end{aligned}$$
where (c) follows from the facts that $\epsilon_{k-2}'$ is independent of $\eta_{k-1}$, $V_{k-1} = \sqrt{(1-\rho^2)P_2/\alpha_{k-2}'}\,\epsilon_{k-2}'$, and $\alpha_{k-2}' \triangleq \mathrm{Var}(\epsilon_{k-2}')$, and (d) follows from $V_{k-1} = \sqrt{(1-\rho^2)P_2/\alpha_{k-2}'}\,\epsilon_{k-2}'$. Substituting (A3) into (A1), we have
$$\begin{aligned} \eta_k' &= \eta_k + V_k \\ &= \eta_k + \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{\sigma^2}{(1-\rho^2)P_2+\sigma^2}\,V_{k-1} - \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{(1-\rho^2)P_2}{(1-\rho^2)P_2+\sigma^2}\,\eta_{k-1} \\ &= \eta_k + \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{\sigma^2}{(1-\rho^2)P_2+\sigma^2}\,V_{k-1} - \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{(1-\rho^2)P_2}{(1-\rho^2)P_2+\sigma^2}\,\eta_{k-1} + \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{\sigma^2}{(1-\rho^2)P_2+\sigma^2}\,\eta_{k-1} - \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{\sigma^2}{(1-\rho^2)P_2+\sigma^2}\,\eta_{k-1} \\ &= \eta_k + \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{\sigma^2}{(1-\rho^2)P_2+\sigma^2}\,(V_{k-1}+\eta_{k-1}) - \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\eta_{k-1} \\ &= \eta_k + \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\frac{\sigma^2}{(1-\rho^2)P_2+\sigma^2}\,\eta_{k-1}' - \sqrt{\frac{\alpha_{k-2}'}{\alpha_{k-1}'}}\,\eta_{k-1}. \end{aligned}$$
From the classical SK scheme [2] (see (13)), we know that
$$\frac{\alpha_k'}{\alpha_{k-1}'} = \frac{\sigma^2}{(1-\rho^2)P_2+\sigma^2}.$$
Substituting (A5) into (A4), we obtain
$$\eta_k' = \frac{\sigma}{\sqrt{(1-\rho^2)P_2+\sigma^2}}\,\eta_{k-1}' + \eta_k - \frac{\sqrt{(1-\rho^2)P_2+\sigma^2}}{\sigma}\,\eta_{k-1}.$$
Here, note that (A6) holds for $3\leq k\leq N$, and
$$\eta_1' = \eta_1 + \sqrt{12(1-\rho^2)P_2}\,\theta_2,\qquad \eta_2' = \eta_2 + \frac{\sqrt{(1-\rho^2)P_2}}{\sigma}\,\eta_1.$$
After determining the noise expression $\eta_k'$ of the equivalent channel model, we still need to determine the decoding error $\epsilon_k$. According to (40), we have
$$E[Y_{k-1}\epsilon_{k-2}] = E[(X_{k-1}^*+\eta_{k-1}')\epsilon_{k-2}] \overset{(e)}{=} E\!\left[\left(\sqrt{\frac{P^*}{\alpha_{k-2}}}\,\epsilon_{k-2}+\eta_{k-1}'\right)\epsilon_{k-2}\right] = \sqrt{P^*\alpha_{k-2}} + E[\eta_{k-1}'\epsilon_{k-2}],$$
and
$$E[Y_{k-1}^2] = E[(X_{k-1}^*+\eta_{k-1}')^2] = E\!\left[\left(\sqrt{\frac{P^*}{\alpha_{k-2}}}\,\epsilon_{k-2}+\eta_{k-1}'\right)^{\!2}\right] \overset{(f)}{=} P^* + 2\sqrt{\frac{P^*}{\alpha_{k-2}}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2 + \sigma^2,$$
where (e) follows from (39), and (f) follows from (A2). Substituting (A8) and (A9) into (40), $\epsilon_{k-1}$ can be re-written as
$$\begin{aligned} \epsilon_{k-1} &= \epsilon_{k-2} - \frac{E[Y_{k-1}\epsilon_{k-2}]}{E[Y_{k-1}^2]}\,Y_{k-1} \\ &= \epsilon_{k-2} - \frac{\sqrt{P^*\alpha_{k-2}} + E[\eta_{k-1}'\epsilon_{k-2}]}{P^* + 2\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}\left(\sqrt{\frac{P^*}{\alpha_{k-2}}}\,\epsilon_{k-2}+\eta_{k-1}'\right) \\ &= \epsilon_{k-2}\,\frac{\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}{P^* + 2\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2} - \eta_{k-1}'\,\frac{\sqrt{P^*\alpha_{k-2}} + E[\epsilon_{k-2}\eta_{k-1}']}{P^* + 2\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}. \end{aligned}$$
From (A10), we see that $\epsilon_{k-1}$ depends on $E[\epsilon_{k-2}\eta_{k-1}']$. Combining (A6) with (A10), we can conclude that
$$\begin{aligned} E[\epsilon_{k-1}\eta_k'] &= E\!\left[\left(\frac{\sigma}{\sqrt{(1-\rho^2)P_2+\sigma^2}}\,\eta_{k-1}' + \eta_k - \frac{\sqrt{(1-\rho^2)P_2+\sigma^2}}{\sigma}\,\eta_{k-1}\right)\right. \\ &\qquad\left.\cdot\left(\epsilon_{k-2}\,\frac{\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}{P^* + 2\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2} - \eta_{k-1}'\,\frac{\sqrt{P^*\alpha_{k-2}} + E[\epsilon_{k-2}\eta_{k-1}']}{P^* + 2\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}\right)\right] \\ &\overset{(g)}{=} \frac{\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}{P^* + 2\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}\cdot\frac{\sigma\,E[\epsilon_{k-2}\eta_{k-1}']}{\sqrt{(1-\rho^2)P_2+\sigma^2}} - \frac{\sqrt{P^*\alpha_{k-2}} + E[\epsilon_{k-2}\eta_{k-1}']}{P^* + 2\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}\cdot\frac{\sigma\,E[(\eta_{k-1}')^2]}{\sqrt{(1-\rho^2)P_2+\sigma^2}} \\ &\qquad + \frac{\sqrt{P^*\alpha_{k-2}} + E[\epsilon_{k-2}\eta_{k-1}']}{P^* + 2\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}\cdot\frac{\sqrt{(1-\rho^2)P_2+\sigma^2}}{\sigma}\,E[\eta_{k-1}'\eta_{k-1}] \\ &\overset{(h)}{=} \frac{\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}{P^* + 2\sqrt{P^*/\alpha_{k-2}}\,E[\epsilon_{k-2}\eta_{k-1}'] + (1-\rho^2)P_2+\sigma^2}\cdot\frac{\sigma}{\sqrt{(1-\rho^2)P_2+\sigma^2}}\,E[\epsilon_{k-2}\eta_{k-1}'], \end{aligned}$$
where (g) follows from $E[\epsilon_{k-2}\eta_{k-1}] = E[\epsilon_{k-2}\eta_k] = E[\eta_{k-1}'\eta_k] = 0$, and (h) follows from (A2) and from (A6), which indicates that
$$E[\eta_{k-1}'\eta_{k-1}] = E\!\left[\left(\frac{\sigma}{\sqrt{(1-\rho^2)P_2+\sigma^2}}\,\eta_{k-2}' + \eta_{k-1} - \frac{\sqrt{(1-\rho^2)P_2+\sigma^2}}{\sigma}\,\eta_{k-2}\right)\eta_{k-1}\right] \overset{(i)}{=} E[\eta_{k-1}^2] = \sigma^2,$$
where (i) follows from $E[\eta_{k-2}'\eta_{k-1}] = E[\eta_{k-2}\eta_{k-1}] = 0$, so that the last two terms after (g) cancel. From (A11) and the fact that
$$E[\epsilon_1\eta_2'] = \sqrt{\frac{(1-\rho^2)P_2\,\sigma^2}{12P^*}} \geq 0,$$
it is easy to see that
$$E[\epsilon_{k-1}\eta_k'] \geq 0$$
for all $2\leq k\leq N$. The final step before we bound $P_{e1}$ is the determination of $\alpha_k$, which is defined as $\alpha_k = \mathrm{Var}(\epsilon_k) = E[\epsilon_k^2]$. Using (A10), we have
$$\begin{aligned} \alpha_k &\overset{(j)}{=} \alpha_{k-1}\left(\frac{\sqrt{P^*/\alpha_{k-1}}\,A_k + r^2}{P^* + 2\sqrt{P^*/\alpha_{k-1}}\,A_k + r^2}\right)^{\!2} - 2A_k\,\frac{\left(\sqrt{P^*/\alpha_{k-1}}\,A_k + r^2\right)\left(\sqrt{P^*\alpha_{k-1}} + A_k\right)}{\left(P^* + 2\sqrt{P^*/\alpha_{k-1}}\,A_k + r^2\right)^2} + r^2\left(\frac{\sqrt{P^*\alpha_{k-1}} + A_k}{P^* + 2\sqrt{P^*/\alpha_{k-1}}\,A_k + r^2}\right)^{\!2} \\ &\overset{(k)}{=} \frac{\alpha_{k-1}r^2(r^2+P^*) + 2\sqrt{P^*\alpha_{k-1}}\,r^2A_k - A_k^2(P^*+r^2) - 2\sqrt{P^*/\alpha_{k-1}}\,A_k^3}{\left(P^*+r^2+2\sqrt{P^*/\alpha_{k-1}}\,A_k\right)^2} \\ &\overset{(l)}{\leq} \frac{r^2\left[\alpha_{k-1}(r^2+P^*) + 2\sqrt{P^*\alpha_{k-1}}\,A_k\right]}{\left(P^*+r^2+2\sqrt{P^*/\alpha_{k-1}}\,A_k\right)^2} \\ &= \frac{r^2\left[(r^2+P^*)\left(\sqrt{\alpha_{k-1}} + \frac{\sqrt{P^*}A_k}{r^2+P^*}\right)^{\!2} - \frac{P^*A_k^2}{r^2+P^*}\right]}{\left(P^*+r^2+2\sqrt{P^*/\alpha_{k-1}}\,A_k\right)^2} \leq \frac{r^2(r^2+P^*)\left(\sqrt{\alpha_{k-1}} + \frac{\sqrt{P^*}A_k}{r^2+P^*}\right)^{\!2}}{\left(P^*+r^2+2\sqrt{P^*/\alpha_{k-1}}\,A_k\right)^2}, \end{aligned}$$
where (j) follows from (A2), (k) follows from the definitions
$$r = \sqrt{(1-\rho^2)P_2+\sigma^2},\qquad A_k = E[\epsilon_{k-1}\eta_k'],$$
and (l) follows from (A13), i.e., $A_k = E[\epsilon_{k-1}\eta_k'] \geq 0$. Since $A_k \geq 0$ and $r \geq 0$, (A14) can be re-written as
$$\sqrt{\alpha_k} \leq \frac{r\sqrt{r^2+P^*}\left(\sqrt{\alpha_{k-1}} + \frac{\sqrt{P^*}A_k}{r^2+P^*}\right)}{P^*+r^2+2\sqrt{P^*/\alpha_{k-1}}\,A_k} = \frac{r\left(\sqrt{\alpha_{k-1}}\,(r^2+P^*) + \sqrt{P^*}A_k\right)}{\sqrt{r^2+P^*}\left(P^*+r^2+2\sqrt{P^*/\alpha_{k-1}}\,A_k\right)} \overset{(m)}{\leq} \frac{r\sqrt{\alpha_{k-1}}}{\sqrt{r^2+P^*}},$$
where (m) follows from (A13), i.e., $A_k \geq 0$. From (A16), we can conclude that
$$\alpha_N \leq \frac{r^2}{r^2+P^*}\,\alpha_{N-1} \leq \cdots \leq \left(\frac{r^2}{r^2+P^*}\right)^{\!N-1}\alpha_1 \overset{(n)}{=} \left(\frac{r^2}{r^2+P^*}\right)^{\!N-1}\frac{\sigma^2+(1-\rho^2)P_2}{12P^*},$$
where (n) follows from (34). Finally, we bound $P_{e1}$ as follows:
$$\begin{aligned} P_{e1} &\leq \Pr\left\{|\epsilon_N| > \frac{1}{2(|\mathcal{W}_1|-1)}\right\} \overset{(o)}{\leq} 2Q\left(\frac{1}{2\cdot 2^{NR_1}}\cdot\frac{1}{\sqrt{\alpha_N}}\right) \\ &\overset{(p)}{\leq} 2Q\left(\frac{1}{2\cdot 2^{NR_1}}\left(\frac{\sqrt{r^2+P^*}}{r}\right)^{\!N-1}\sqrt{\frac{12P^*}{\sigma^2+(1-\rho^2)P_2}}\right) \\ &= 2Q\left(\frac{1}{2}\sqrt{\frac{12P^*}{\sigma^2+(1-\rho^2)P_2}}\; 2^{-NR_1}\, 2^{(N-1)\log\frac{\sqrt{r^2+P^*}}{r}}\right) \\ &= 2Q\left(\frac{1}{2}\sqrt{\frac{12P^*}{\sigma^2+(1-\rho^2)P_2}}\; 2^{-\log\frac{\sqrt{r^2+P^*}}{r}}\, 2^{N\left(\log\frac{\sqrt{r^2+P^*}}{r}-R_1\right)}\right), \end{aligned}$$
where (o) follows from the fact that $Q(x)$ is the tail of the unit Gaussian distribution evaluated at x, and (p) follows from (A17) and the fact that $Q(x)$ is decreasing in x. From (A18), we can conclude that if
$$R_1 < \log\frac{\sqrt{r^2+P^*}}{r} = \frac{1}{2}\log\left(1+\frac{P^*}{r^2}\right) \overset{(q)}{=} \frac{1}{2}\log\left(1+\frac{P_1+\rho^2P_2+2\sqrt{P_1P_2}\,\rho}{(1-\rho^2)P_2+\sigma^2}\right),$$
where (q) follows from (29) and (A15), then $P_{e1} \to 0$ as $N\to\infty$. The proof of Lemma 1 is completed.
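As a numerical sanity check on (A17) and (A19), not part of the original proof (function names are our own), the bound on $\alpha_N$ decays geometrically at rate $r^2/(r^2+P^*)$, and the resulting rate threshold coincides with the $R_1$ expression of Section 3.2.2.

```python
import math

def alpha_bound(P1, P2, sigma2, rho, N):
    """Upper bound (A17) on the error variance alpha_N of the W1 estimate."""
    r2 = (1 - rho ** 2) * P2 + sigma2
    p_st = P1 + rho ** 2 * P2 + 2 * math.sqrt(P1 * P2) * rho
    alpha1 = r2 / (12 * p_st)                  # alpha_1 from (34)
    return (r2 / (r2 + p_st)) ** (N - 1) * alpha1

def r1_threshold(P1, P2, sigma2, rho):
    """Rate threshold of Lemma 1: (1/2) log2(1 + P*/r^2)."""
    r2 = (1 - rho ** 2) * P2 + sigma2
    p_st = P1 + rho ** 2 * P2 + 2 * math.sqrt(P1 * P2) * rho
    return 0.5 * math.log2(1 + p_st / r2)
```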

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  2. Schalkwijk, J.; Kailath, T. A coding scheme for additive noise channels with feedback–I: No bandwidth constraint. IEEE Trans. Inf. Theory 1966, 12, 172–182. [Google Scholar] [CrossRef]
  3. Ozarow, L. The capacity of the white Gaussian multiple access channel with feedback. IEEE Trans. Inf. Theory 1984, 30, 623–629. [Google Scholar] [CrossRef]
  4. Ozarow, L.; Leung-Yan-Cheong, S. An achievable region and outer bound for the Gaussian broadcast channel with feedback (corresp.). IEEE Trans. Inf. Theory 1984, 30, 667–671. [Google Scholar] [CrossRef]
  5. Weissman, T.; Merhav, N. Coding for the feedback Gel’fand-Pinsker channel and the feedforward Wyner-Ziv source. IEEE Trans. Inf. Theory 2006, 52, 4207–4211. [Google Scholar]
  6. Rosenzweig, A. The capacity of Gaussian multi-user channels with state and feedback. IEEE Trans. Inf. Theory 2007, 53, 4349–4355. [Google Scholar] [CrossRef]
  7. Kim, Y.H. Feedback capacity of the first-order moving average Gaussian channel. IEEE Trans. Inf. Theory 2006, 52, 3063–3079. [Google Scholar]
  8. Kim, Y.H. Feedback capacity of stationary Gaussian channels. IEEE Trans. Inf. Theory 2010, 56, 57–85. [Google Scholar] [CrossRef]
  9. Bross, S.I.; Wigger, M.A. A Schalkwijk-Kailath type encoding scheme for the Gaussian relay channel with receiver-transmitter feedback. In Proceedings of the IEEE International Symposium on Information Theory, Nice, France, 24–29 June 2007; pp. 1051–1055. [Google Scholar]
  10. Ben-Yishai, A.; Shayevitz, O. Interactive schemes for the AWGN channel with noisy feedback. IEEE Trans. Inf. Theory 2017, 63, 2409–2427. [Google Scholar] [CrossRef] [Green Version]
  11. Slepian, D.; Wolf, J. A coding theorem for multiple access channels with correlated sources. Bell Syst. Tech. J. 1973, 52, 1037–1076. [Google Scholar] [CrossRef]
Figure 1. The two-user GMAC with degraded message sets and noiseless feedback.
Figure 2. The point-to-point white Gaussian channel with feedback.
Figure 3. The GMAC with degraded message sets.
Figure 4. A two-step SK-type scheme for the two-user GMAC-DMS-NF.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Yuan, H.; Dai, B. A Capacity-Achieving Feedback Scheme of the Gaussian Multiple-Access Channel with Degraded Message Sets. Entropy 2021, 23, 756. https://doi.org/10.3390/e23060756

