Lossy State Communication over Fading Multiple Access Channels

Joint communications and sensing functionalities integrated into the same communication network have become increasingly relevant due to the large bandwidth requirements of next-generation wireless communication systems and the impending spectral shortage. While there exist system-level guidelines and waveform design specifications for such systems, an information-theoretic analysis of the absolute performance capabilities of joint sensing and communication systems that takes into account practical limitations such as fading has not been addressed in the literature. Motivated by this, we undertake a network information-theoretic analysis of a typical joint communications and sensing system in this paper. Towards this end, we consider a state-dependent fading Gaussian multiple access channel (GMAC) setup with an additive state. The state process is assumed to be independent and identically distributed (i.i.d.) Gaussian, and non-causally available to all the transmitting nodes. The fading gains on the respective links are assumed to be stationary and ergodic and available only at the receiver. In this setting, with no knowledge of fading gains at the transmitters, we are interested in joint message communication and estimation of the state at the receiver to meet a target distortion in the mean-squared error sense. Our main contribution here is a complete characterization of the distortion-rate trade-off region between the communication rates and the state estimation distortion for a two-sender GMAC. Our results show that the optimal strategy is based on static power allocation and involves uncoded transmissions to amplify the state, along with the superposition of the digital message streams using appropriate Gaussian codebooks and dirty paper coding (DPC).
This acts as a design directive for realistic systems using joint sensing and transmission in next-generation wireless standards and points to the relative benefits of uncoded communications and joint source-channel coding in such systems.


Introduction
The scarcity of spectrum, together with the bandwidth requirements of key emerging applications envisioned for 6G, necessitates a rethinking of resource consumption. In such systems, it appears prudent to co-design sensing and communication functionalities. Such co-design enables significant gains in spectral, energy, hardware, and cost efficiency. This approach is known as joint sensing and communication, and it represents a paradigm shift in which sensing and communication operations can be jointly optimized by utilizing a single hardware platform and a joint signal processing framework. These ideas have already been used in a number of novel applications, including vehicular networks, indoor positioning, and covert communications. Joint sensing and communication scenarios have recently received considerable attention from the signal processing community (see, for instance, [1][2][3][4]), the communications community (see [5][6][7][8][9][10][11]), and the information theory community (see [12][13][14][15][16]). This work belongs to the final category: we take an information-theoretic view of joint sensing and communication in a multi-terminal setting.
Joint sensing and communication also arises in multi-user networks with several sensor nodes observing a common analog source phenomenon, and communicating to a base station (destination) over a wireless fading medium, see Figure 1. In this setting, the sensor nodes must convey a description of the source process to the base station, which then tries to estimate the source process subject to a fidelity criterion. Some of the sensor nodes might also have additional digital data to convey to the base station, which must reliably recover them as well. Since the source process, as well as the data from each node, are of interest to the base station, a tension naturally arises between the rates of data communication and source estimation fidelity. The trade-off between these objectives is of particular interest in such systems, which is among the primary motivations for this work. The analog phenomenon in this example can be thought of as a channel state that affects the digital communication of messages, with the receiver being required to reliably estimate this channel state while also recovering the transmitted messages. As far as the fading process is concerned, it is reasonable in practice to assume that the receiver can track the channel variations, for example, via the use of pilot transmission sequences.
In this work, we consider an information-theoretic abstraction of the communication setting in Figure 1. In particular, we focus on joint communication and state estimation over a state-dependent fading Gaussian multiple access channel with no fading knowledge at the transmitters. At each encoder, the state process is assumed to be known non-causally. The fading processes encountered on the respective links are assumed to be stationary and ergodic and to be known only at the receiver. The dual goals of message communication and state estimation at the receiver must be met with a distortion tolerance with respect to a squared-error metric. The trade-off between the average message communication rates and the average distortion in receiver state estimation is of interest. We completely characterize the optimal trade-off region between the communication rates of the different transmitters and the state estimation distortion at the receiver. The details of the setting, as well as the motivation for investigating it, will be elucidated in the section that follows.
Having introduced the general problem framework, we now discuss the other relevant contributions in the literature in the following section, emphasizing how they differ from the setting considered here.

Literature Review
In this section, we discuss the related literature and place our contributions in the context of the state of the art. In particular, we list prior works on joint communication and channel state estimation in both point-to-point and network information-theoretic settings and identify several knowledge gaps.
Systems such as the one in Figure 1 can be modeled as state-dependent channels, where the channel state typically refers to a variable used to model unknown parameters of the channel statistics. A canonical form of such state-dependent channel models consisting of an additive state over an additive white Gaussian noise (AWGN) channel was investigated in [17], popularly known as the dirty paper coding (DPC) setting. Surprisingly, ref. [17] demonstrated that, regardless of the presence of the state, the capacity of this setting remains the same as that of a state-free AWGN channel, independent of the variance of the channel state. This phenomenon later found widespread applications in settings such as digital watermarking [18] and multiple-input-multiple-output wireless broadcast channels [19].
In certain state-dependent channels, in addition to communicating messages, the transmitter may wish to assist the receiver in estimating the channel state (as in the sensor network scenario described in Figure 1). Splitting the average available power between the dual tasks of uncoded transmission of the state and DPC for the message was found to be optimal for the mean squared error distortion measure [20] in a point-to-point (single-user) AWGN channel. In [21], joint communication and state estimation were considered in a different scenario where the transmitters were unaware of the channel state. In an interesting variation of [20], Tian et al. [22] characterized the distortion-transmit power trade-off in a point-to-point Gaussian model with noisy state observations at the transmitter in the absence of messages. However, in the presence of messages [22], a complete characterization of the rate-distortion trade-off region remains unknown for the case of noisy state observations at the transmitter.
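To build intuition for this power-splitting result, the following sketch traces the single-user, non-fading rate-distortion trade-off as the split varies. It is our own illustration of the DPC-plus-uncoded-amplification scheme described above, not code from [20]; the closed-form expressions and all numerical values are assumptions chosen for illustration.

```python
import numpy as np

def rate_distortion_point(gamma, P, sigma_s2, sigma_z2):
    """Rate-distortion pair for the single-user AWGN power-splitting scheme:
    a fraction gamma of the power P carries the DPC-encoded message, and the
    remaining (1 - gamma) amplifies the state uncoded (sketch, not from [20])."""
    gbar = 1.0 - gamma
    # DPC cancels the (amplified) state, so the message sees a clean AWGN channel.
    R = 0.5 * np.log2(1.0 + gamma * P / sigma_z2)
    # Linear MMSE of S from Y = (1 + sqrt(gbar*P/sigma_s2)) S + X_m + Z.
    a = 1.0 + np.sqrt(gbar * P / sigma_s2)
    D = sigma_s2 * (gamma * P + sigma_z2) / (a**2 * sigma_s2 + gamma * P + sigma_z2)
    return R, D

# Sweep the power split to trace the trade-off between rate and distortion.
for g in (0.0, 0.5, 1.0):
    R, D = rate_distortion_point(g, P=10.0, sigma_s2=1.0, sigma_z2=1.0)
    print(f"gamma={g:.1f}: R={R:.3f} bits, D={D:.3f}")
```

As expected, the split parameter moves the operating point between the best estimate (zero rate) and the highest rate (worst estimate).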
In the literature, channel state estimation has been studied in two different frameworks, each of them being motivated by different real-world problems. These frameworks correspond to (a) state estimation performed at the receiver and (b) state estimation performed at the transmitter side. We first consider the problem of channel state estimation at the receiver, which has been investigated before in certain information-theoretic settings. Relevant works include [23] (see also [24]), which investigated joint estimation and message communication over a Gaussian broadcast channel (BC) without state-dependence and derived a complete characterization of the trade-off between achievable rates and estimation errors. In [25], communication and state estimation were studied in a multiple access setting without fading, and the distortion-rate trade-off region was characterized. In [26], a state-dependent Gaussian BC with the dual goals of amplifying the channel state at one of the receivers while masking it [27] from the other receiver (with no message transmissions) was investigated, and achievable coding schemes, as well as outer bounds, were derived. More recently, ref. [28] addressed state estimation for a discrete memoryless BC with causal state information at the transmitter, also taking into account any possible feedback signals from the strong receiver to the sender, and gave a characterization of the capacity region.
As far as channel state estimation at the transmitter is concerned, this line of work originated in [12], where a point-to-point channel with generalized feedback signals to aid the state estimation was investigated. Such models are motivated by joint radar and communications systems where the radar, as well as data communication, share the same frequency band. Following this, ref. [13] considered a multiple access channel extension of the same model, where both senders obtain generalized feedback, and derived an achievable trade-off region. An improved achievable scheme for the multiple access setting was derived recently in [15]. Most recently, a broadcast channel variant was investigated in [16] (see also [14]), where inner and outer bounds were given for general broadcast channels while a complete characterization was obtained for the special case of physically degraded broadcast channels.
Fading multiple access channels without state (or state estimation requirements) and different degrees of channel state information have been explored in the literature. For instance, the ergodic capacity region for fast-fading Gaussian multiple access channels (GMAC) with perfect channel state information at the transmitters and the receiver was characterized in [29]. More general configurations for transmitter channel state availability were analyzed in [30], where the capacity region for time-varying models was determined via optimization over appropriate power control laws. Slow fading multiple access channels with distributed channel state information at the transmitters were studied in [31].
State-dependent point-to-point fading channels with no state estimation have received some attention in the literature. Vaze and Varanasi [32], for example, investigated a model with full state knowledge and partial knowledge of the fading process at the transmitter, and characterized the achievable rate in the high-SNR regime. Rini and Shamai [33] examined the impact of phase fading in the DPC setting [17] when the receiver was informed of the fading process. We also note that [34] has addressed point-to-point fading channels (without any state process or state estimation requirements) with channel gains known at both the sender and the receiver.
Having reviewed the related literature, we now identify the key knowledge gaps in prior work and the necessity of our work.
Analysis of the state of the art and research gaps: We identify the following crucial aspects.

• We note that none of the works above consider joint state estimation along with message communication over state-dependent multi-terminal settings with noncausal transmitter state information, which is highly relevant in applications like the joint sensing and communication setting in Figure 1. This is addressed in this paper.
• While there exist system-level guidelines and waveform design specifications for such systems, a network information-theoretic analysis of the absolute performance capabilities of joint sensing and communication systems that takes into account practical limitations has not been addressed in the literature, which we undertake here.
• Moreover, none of the works on joint communication and estimation mentioned above take fading links into account. Fading is an impairment that must be accounted for in practical wireless communication channel models, such as the joint sensing and communication application shown in Figure 1. This is another gap in the literature that this paper seeks to fill by investigating joint communication and estimation over state-dependent multi-user fading channels, the point-to-point counterpart of which was addressed by the author in [35].
Novelty and relevance: In this paper, we address the problem of joint communication and state estimation over a state-dependent fading GMAC with no fading knowledge at the transmitters. The key scientific question we address here is: what is the best possible trade-off between the competing goals of message communication from multiple senders and the fidelity in state estimation at the receiver?
• The key novelty of our work is that it is the first instance where joint communication and estimation have been considered in a multi-user setting that also accounts for fading links, as opposed to previous works, which focused only on non-fading links.
• Moreover, it is the first work that considers non-causal state information (as opposed to causal or strictly causal) at the transmitter in a fading multi-user scenario, which is practically relevant, as described in the sensor network example from Figure 1.
• Furthermore, we undertake a comprehensive network information-theoretic study of the fundamental performance limits of such joint communication and estimation settings, which is lacking in the literature.
Please refer to Table 1, which highlights our contributions in this paper with respect to the existing works.
The key relevance of our study is that it serves as a design guideline for practical systems employing joint sensing and communication envisioned in future 6G wireless standards and broadly applies to systems that involve joint compression and communication/rate-distortion trade-offs. It also points to the relative benefits of uncoded transmission versus joint source-channel coding in such systems. The progress embodied herein builds up towards a better understanding of joint state estimation and communication problems in multi-terminal settings (such as multiple access channels), which is relatively less explored in the literature (with or without the fading aspect).
Summary of contributions: We list them below. See also the contribution summary Table 1, which emphasizes the novelty of our work with respect to the existing works.
• One of our main contributions in the paper is a complete characterization (Theorem 1) of the rate-distortion trade-off region for joint state estimation and communication over a two-user fading GMAC with additive state. The key non-trivial part is the proof of the converse, which is given in Section 5.
• We prove that the optimal strategy for the setting under consideration involves uncoded transmissions to amplify the state, along with the superposition of the digital message streams using appropriate Gaussian codebooks and DPC.
• We prove the optimality of uncoded state amplification in the special case where there are no messages to communicate; please refer to the discussion of the implications of our result, given after the statement of Theorem 1, for the details.
• Our framework naturally generalizes the results of [35] to multiple users, those of [25] to fading links, and the work of [20] to fading links with multiple users, thereby providing a unified framework that encompasses all these prior works on joint communication and estimation.
• Our study gives a network information-theoretic analysis of the fundamental performance limits of joint sensing and communication systems that takes into account practical limitations such as fading. This acts as a design directive for realistic systems using joint sensing and transmission anticipated in upcoming wireless standards and points to the relative merits of uncoded communications and joint source-channel coding in such systems.

Notations: Random variables are denoted by capital letters, while their realizations are denoted by the corresponding lower-case letters. We use P(·) to denote the probability of an event. The joint probability distribution of two random variables (X, Y) is denoted by p_{X,Y}(x, y). Let E[·] denote the expected value of a random variable. At times, we will use an explicit subscript in the expectation operation, E_X[·], to denote that the expectation is taken with respect to the probability distribution of the random variable X. All logarithms are in base 2, unless mentioned otherwise. We denote random sequences of length n with a superscript notation, i.e., U^n := U_1, U_2, · · · , U_n.
An indexed set of random sequences, each of length n, is denoted with a subscript for the random variable and a superscript for the length, i.e., U_j^n := U_{j1}, U_{j2}, · · · , U_{jn}, where U_{ji} stands for the i-th component of U_j^n. The covariance of a random vector X^n is denoted by Cov(X^n). Calligraphic letters represent alphabets of random variables. ||·|| denotes the Euclidean norm of a vector, while Conv(·) denotes the convex closure of a set. The absolute value of a number is denoted by |·|, while the transpose of a matrix A is denoted by A^T. The notation A ⊥⊥ B is used to denote independent random variables (A, B). The Gaussian (normal) distribution with mean µ and variance σ² is denoted by N(µ, σ²). The set of real numbers is denoted by R, while the set of n-tuples of positive real numbers is denoted by R_+^n. The Shannon entropy of a discrete-valued random variable X is denoted by H(X), while the differential entropy of a continuous-valued random variable Y is denoted by h(Y). The mutual information between any two random variables V and W is denoted by I(V; W). The corresponding conditional quantities given a random variable Z are the conditional entropy H(X|Z), the conditional differential entropy h(Y|Z), and the conditional mutual information I(V; W|Z). If n is an integer variable, φ(n) is a positive function, and f(n) is an arbitrary function, we say that f = o(φ) provided that lim_{n→∞} f(n)/φ(n) = 0. For any three random variables (A, B, C), we say that A → B → C is a Markov chain if A and C are conditionally independent given B.
The rest of the paper is organized as follows: in Section 3, we introduce the system model and state our main results. Sections 4 and 5 contain the achievable coding scheme and the converse to the rate-distortion trade-off region, respectively. Section 6 contains concluding remarks. The Appendices A.1, A.2, and A.3 contain all the details of the proofs that are skipped in the main discussion, to maintain the readability of the paper.

System Model and Results
Consider the fading multiple access channel shown in Figure 2. The observed samples at the receiver at time instant i ∈ {1, 2, . . . , n} are given by

Y_i = Θ_{1i} X_{1i} + Θ_{2i} X_{2i} + S_i + Z_i. (1)

Here, the samples of the additive noise process are independent and identically distributed (i.i.d.) according to Z_i ∼ N(0, σ_Z²), while the samples of the state process are i.i.d. according to S_i ∼ N(0, σ_S²). The state process is assumed to be known noncausally at each encoder. The fading processes encountered on the respective links are represented by Θ_j^n, j ∈ {1, 2}, with these fading gains being known only at the receiver. The fading processes encountered on both links are assumed to be stationary and ergodic. The codeword lengths can be chosen arbitrarily long to average over the fading of the channel. The given model represents a fast-fading multiple access channel with no knowledge of the fading processes at the transmitters. The state, fading, and additive noise processes are assumed to be independent of each other. In our model, the power constraint on the inputs is assumed to be across blocks, averaged over the random state and the codebook. The dual goals of message ((W_1, W_2) in Figure 2) communication and estimation of the state S^n at the receiver, to meet a distortion tolerance with respect to a squared error metric, must be met. The trade-off between the average message communication rates (R_1, R_2) and the average distortion incurred in state estimation (D) at the receiver is sought. We take W_j to be uniformly drawn from the set W_j := {1, 2, · · · , 2^{nR_j}} for j ∈ {1, 2}, with W_1 and W_2 independent of each other. The messages (W_1, W_2) are assumed to be independent of the state process S^n. The power constraint on the transmissions is:

E[||X_j^n||²] ≤ nP_j, j ∈ {1, 2}.

After n observations, the decoder estimates Ŝ^n = φ(Y^n, Θ_1^n, Θ_2^n) using a (state) reconstruction map φ(·) : Y^n × ∏_{j=1}^2 Θ_j^n → R^n, and also decodes the messages (W_1, W_2).
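To make the model concrete, the following sketch simulates the channel law above with both encoders performing pure uncoded state amplification (no messages), and forms the per-symbol linear MMSE estimate of the state at the receiver. The Rayleigh fading distribution and all numerical values are illustrative assumptions, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sigma_s2, sigma_z2 = 1.0, 1.0     # state and noise variances (illustrative)
P1, P2 = 4.0, 4.0                 # per-user power budgets (illustrative)

S = rng.normal(0.0, np.sqrt(sigma_s2), n)        # i.i.d. Gaussian state
Z = rng.normal(0.0, np.sqrt(sigma_z2), n)        # additive noise
Theta1 = rng.rayleigh(1.0 / np.sqrt(2.0), n)     # unit-power Rayleigh fading (assumed)
Theta2 = rng.rayleigh(1.0 / np.sqrt(2.0), n)

# Both encoders spend their whole budget on uncoded state amplification.
X1 = np.sqrt(P1 / sigma_s2) * S
X2 = np.sqrt(P2 / sigma_s2) * S

# Channel law: faded inputs plus additive state plus noise.
Y = Theta1 * X1 + Theta2 * X2 + S + Z

# Per-symbol linear MMSE estimate of S_i given (Y_i, Theta_1i, Theta_2i).
a = 1.0 + Theta1 * np.sqrt(P1 / sigma_s2) + Theta2 * np.sqrt(P2 / sigma_s2)
S_hat = (a * sigma_s2 / (a**2 * sigma_s2 + sigma_z2)) * Y
D_emp = np.mean((S - S_hat) ** 2)
print(f"empirical distortion: {D_emp:.4f} (state variance {sigma_s2})")
```

The empirical distortion falls well below the state variance, illustrating how uncoded amplification aids state estimation.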
The (message) decoding map is given by ψ : Y^n × ∏_{j=1}^2 Θ_j^n → W_1 × W_2. Our aim here is to maintain the average squared-error distortion incurred in state estimation below a given threshold, while also ensuring that the average error probability of decoding the messages is small enough.
A triple (R_1, R_2, D) is said to be achievable if there exists a sequence of (n, R_1, R_2, P_1, P_2) communication-estimation schemes such that

lim sup_{n→∞} P(ψ(Y^n, Θ_1^n, Θ_2^n) ≠ (W_1, W_2)) = 0 and lim sup_{n→∞} (1/n) E[||S^n − Ŝ^n||²] ≤ D.

Let C_est^{fad-mac}(P_1, P_2) be the closure of the set of all achievable (R_1, R_2, D) triples, with 0 ≤ D ≤ σ_S². The main result of the paper is stated below.
Theorem 1. The trade-off region C_est^{fad-mac}(P_1, P_2) is the convex closure of the set of all triples (R_1, R_2, D) satisfying

R_1 ≤ E[(1/2) log(1 + Θ_1² γP_1/σ_Z²)], (5)
R_2 ≤ E[(1/2) log(1 + Θ_2² βP_2/σ_Z²)], (6)
R_1 + R_2 ≤ E[(1/2) log(1 + (Θ_1² γP_1 + Θ_2² βP_2)/σ_Z²)], (7)
D ≥ E[σ_S² (Θ_1² γP_1 + Θ_2² βP_2 + σ_Z²) / ((1 + Θ_1 √(γ̄P_1/σ_S²) + Θ_2 √(β̄P_2/σ_S²))² σ_S² + Θ_1² γP_1 + Θ_2² βP_2 + σ_Z²)], (8)

for some γ ∈ [0, 1] and β ∈ [0, 1], where γ̄ := 1 − γ, β̄ := 1 − β, and the expectations are taken with respect to the stationary distribution of the fading pair (Θ_1, Θ_2).
Proof. The proof is given in the following two sections, wherein Section 4 contains the achievability proof while the converse is proved in Section 5.
Implications of our result: We now discuss the main consequences of our Theorem 1 for the sensor network scenario outlined earlier in Figure 1. If a given transmitter (sensor node) has a message to communicate to the receiver (base station), then the optimal strategy involves splitting its available power budget into two parts: one part is used to send a scaled version of the state (uncoded state amplification), while the other part is used to communicate the message using dirty paper coding. The parameters γ and β in Theorem 1 precisely perform this role of power-sharing between the dual goals of communication and estimation. This will be elaborated upon in the proof of achievability in Section 4.
On the other hand, if a given transmitter (sensor node) has no messages to communicate to the receiver (base station), then the optimal strategy simply involves utilizing its entire power budget for uncoded state amplification, i.e., sending the scaled version √(P_j/σ_S²) S^n of the state. In other words, uncoded transmission is optimal for such nodes. This phenomenon resembles that of [38], albeit the latter is in the context of non-fading links with no message communication and no state-dependence. We close this section with a series of remarks that shed further light on the implications of our Theorem 1 and its connection with existing results in the literature.

Remark 2.
When the fading gains are constant, i.e., Θ_1 = Θ_2 = 1 almost surely, our Theorem 1 recovers the characterization of [25] for the non-fading multiple access scenario as a special case.

Remark 3.
When γ = β = 0, we obtain an extreme point of the region with zero rates, i.e., R_1 = R_2 = 0, and the best state estimate, i.e., the minimum possible distortion. This corresponds to each encoder utilizing its entire power budget for uncoded state amplification, and therefore no message communication is possible.

Remark 4.
On the other hand, when γ = β = 1, we obtain the other extreme point of the region, with the maximum possible rates for a fading Gaussian MAC and the worst state estimate, i.e., the maximum possible distortion. This corresponds to each encoder utilizing its entire power budget for message communication; therefore, no state amplification is possible, and maximum distortion is incurred in state estimation.
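The two extreme points in the remarks above, and the trade-off between them, can be visualized numerically using the achievable-scheme expressions (ergodic sum rate of the DPC scheme and the averaged linear-MMSE distortion). The sketch below is our own illustration; the Rayleigh fading law, power budgets, and variances are assumed values.

```python
import numpy as np

rng = np.random.default_rng(3)
th1 = rng.rayleigh(1.0 / np.sqrt(2.0), 200_000)   # unit-power Rayleigh fading (assumed)
th2 = rng.rayleigh(1.0 / np.sqrt(2.0), 200_000)
P1 = P2 = 4.0
s2 = z2 = 1.0

def operating_point(g, b):
    """Ergodic sum rate and averaged linear-MMSE distortion of the
    power-splitting scheme, for message-power fractions (g, b) = (gamma, beta)."""
    r_sum = np.mean(0.5 * np.log2(1 + (th1**2 * g * P1 + th2**2 * b * P2) / z2))
    a = 1 + th1 * np.sqrt((1 - g) * P1 / s2) + th2 * np.sqrt((1 - b) * P2 / s2)
    interf = th1**2 * g * P1 + th2**2 * b * P2 + z2
    D = np.mean(s2 * interf / (a**2 * s2 + interf))
    return r_sum, D

for g in (0.0, 0.5, 1.0):
    r_sum, D = operating_point(g, g)
    print(f"gamma = beta = {g:.1f}: sum rate ~ {r_sum:.3f} bits/use, distortion ~ {D:.3f}")
```

The sweep moves the operating point from the zero-rate/minimum-distortion corner (γ = β = 0) to the maximum-rate/maximum-distortion corner (γ = β = 1).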

Achievability
The achievability builds upon well-known techniques like dirty paper coding and successive cancellation, along with appropriate power splitting. The power P_1 available at encoder 1 is split into two parts: γP_1 for message transmission and γ̄P_1 := (1 − γ)P_1 for state amplification, for some γ ∈ [0, 1]. Similarly, the power P_2 available at the second encoder is split into βP_2 (message transmission) and β̄P_2 := (1 − β)P_2 (state amplification) for some β ∈ [0, 1]. Then, the following state amplification signals are generated at the respective encoders:

X_{1s}^n = √(γ̄P_1/σ_S²) S^n and X_{2s}^n = √(β̄P_2/σ_S²) S^n.

In other words, the power fractions γ̄P_1 and β̄P_2 at encoders 1 and 2, respectively, are used for uncoded state amplification. Hence, (1) can be rewritten as

Y_i = Θ_{1i} X_{1m,i} + Θ_{2i} X_{2m,i} + (1 + Θ_{1i} √(γ̄P_1/σ_S²) + Θ_{2i} √(β̄P_2/σ_S²)) S_i + Z_i, (16)

where E[||X_{1m}^n||²] ≤ nγP_1 and E[||X_{2m}^n||²] ≤ nβP_2, with both X_{1m}^n and X_{2m}^n being independent of S^n. The subscript m in (16) indicates that the corresponding signals are intended for message transmission, whereas the subscript s indicates state amplification signals. To communicate the messages across to the receiver, we invoke the writing on dirty paper result for a Gaussian MAC [37].
From the DPC result [17], we recall that a known state process over an AWGN channel can be completely canceled. In particular, a rate R that satisfies

R ≤ I(U; Y) − I(U; S),

when evaluated for some feasible joint probability distribution p_{U,S,X}(u, s, x) p_{Y|X,S}(y|x, s), can be achieved by Gelfand-Pinsker coding [39] for a point-to-point channel with a noncausally known state. In order to prove the achievability of the rates (5)-(7), we first consider a dirty paper channel with input Θ_{1j} X_{1m,j}, known state

S̃_j = (1 + Θ_{1j} √(γ̄P_1/σ_S²) + Θ_{2j} √(β̄P_2/σ_S²)) S_j,

and unknown noise Θ_{2j} X_{2m,j} + Z_j. We choose U_{1j} = Θ_{1j} X_{1m,j} + α_{1j} S̃_j, with X_{1m,j} ⊥⊥ S̃_j, X_{1m,j} ∼ N(0, γP_1), and

α_{1j} = Θ_{1j}² γP_1 / (Θ_{1j}² γP_1 + Θ_{2j}² βP_2 + σ_Z²).

This yields the following rate for user 1 at time instant j, with the error probability approaching zero:

R_{1j} = (1/2) log(1 + Θ_{1j}² γP_1 / (Θ_{2j}² βP_2 + σ_Z²)).

The achievable rate for user 1 averaged over a time interval {1, 2, . . . , n} is (1/n) ∑_{j=1}^n R_{1j}, which converges as n → ∞ to

E[(1/2) log(1 + Θ_1² γP_1 / (Θ_2² βP_2 + σ_Z²))], (20)

due to the stationarity and ergodicity of the fading processes. The decoded codeword U_{1j} is then subtracted from the channel output to obtain another dirty paper channel with input Θ_{2j} X_{2m,j}, known state S̄_j = (1 − α_{1j}) S̃_j, and unknown noise Z_j. We pick U_{2j} = Θ_{2j} X_{2m,j} + α_{2j} S̄_j, with X_{2m,j} ⊥⊥ S̄_j, X_{2m,j} ∼ N(0, βP_2), and

α_{2j} = Θ_{2j}² βP_2 / (Θ_{2j}² βP_2 + σ_Z²).

This yields the following rate for user 2 at time instant j, with the error probability approaching zero:

R_{2j} = (1/2) log(1 + Θ_{2j}² βP_2 / σ_Z²).

The achievable rate for user 2 averaged over a time interval {1, 2, . . . , n} is (1/n) ∑_{j=1}^n R_{2j}, which converges as n → ∞ to

E[(1/2) log(1 + Θ_2² βP_2 / σ_Z²)], (23)

due to the stationarity and ergodicity of the fading processes. By reversing the decoding order and using time-sharing, the region in expressions (5) through (7) can be achieved. Note that the right-hand sides of expressions (20) and (23) add up to the right-hand side of the sum rate expression in (7). For the state estimate, the receiver forms the linear minimum mean-squared error (MMSE) estimate Ŝ_j(Y_j) of S_j based on Y_j:

Ŝ_j(Y_j) = (E[S_j Y_j] / E[Y_j²]) Y_j.

For the achievable distortion at time instant j, we evaluate the expected squared error between S_j and Ŝ_j(Y_j), above.
The resulting MMSE can be easily verified to be

D_j = σ_S² (Θ_{1j}² γP_1 + Θ_{2j}² βP_2 + σ_Z²) / ((1 + Θ_{1j} √(γ̄P_1/σ_S²) + Θ_{2j} √(β̄P_2/σ_S²))² σ_S² + Θ_{1j}² γP_1 + Θ_{2j}² βP_2 + σ_Z²).

The achievable distortion averaged over a time interval {1, 2, . . . , n} is (1/n) ∑_{j=1}^n D_j, which converges to the expectation of D_j over the fading distribution as n → ∞, due to the stationarity and ergodicity of the fading processes. This concludes the proof of achievability.
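The two computational ingredients of the achievability proof, the per-symbol MMSE expression and the fact that the two DPC rates add up to the MAC sum rate, can be sanity-checked numerically. The sketch below is our own illustration, with fixed illustrative fading gains and unit state/noise variances as assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
s2 = z2 = 1.0
P1 = P2 = 4.0
gamma = beta = 0.5
gbar, bbar = 1.0 - gamma, 1.0 - beta

# --- Check the per-symbol MMSE expression for a fixed fading pair (th1, th2) ---
th1, th2 = 0.8, 1.3   # an arbitrary fading realization (illustrative values)
n = 400_000
S = rng.normal(0.0, np.sqrt(s2), n)
X1m = rng.normal(0.0, np.sqrt(gamma * P1), n)   # message parts, independent of S
X2m = rng.normal(0.0, np.sqrt(beta * P2), n)
Y = (th1 * (X1m + np.sqrt(gbar * P1 / s2) * S)
     + th2 * (X2m + np.sqrt(bbar * P2 / s2) * S)
     + S + rng.normal(0.0, np.sqrt(z2), n))

coef = np.mean(S * Y) / np.mean(Y * Y)          # empirical linear-MMSE coefficient
D_emp = np.mean((S - coef * Y) ** 2)

a = 1.0 + th1 * np.sqrt(gbar * P1 / s2) + th2 * np.sqrt(bbar * P2 / s2)
interf = th1**2 * gamma * P1 + th2**2 * beta * P2 + z2
D_closed = s2 * interf / (a**2 * s2 + interf)   # closed-form per-symbol MMSE

# --- Check that the two per-symbol DPC rates add up to the MAC sum rate ---
r1 = 0.5 * np.log2(1 + th1**2 * gamma * P1 / (th2**2 * beta * P2 + z2))
r2 = 0.5 * np.log2(1 + th2**2 * beta * P2 / z2)
r_sum = 0.5 * np.log2(1 + (th1**2 * gamma * P1 + th2**2 * beta * P2) / z2)

print(f"D_emp={D_emp:.4f}  D_closed={D_closed:.4f}  r1+r2={r1+r2:.4f}  r_sum={r_sum:.4f}")
```

The empirical distortion matches the closed form, and the rate identity holds exactly, mirroring the successive-cancellation argument in the proof.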

Converse
In this section, we prove that any successful scheme (one with a vanishing probability of error that operates within the distortion tolerance) must satisfy the rate-distortion constraints of Theorem 1. This is done in two subsections: in Section 5.1, we construct an outer bound on the rate-distortion trade-off region; subsequently, in Section 5.2, we prove that this outer bound is achievable, thereby establishing Theorem 1.

Outer Bound
The proof of our outer bound is aided by the following lemma, adapted from (Equation (2), [20]).

Lemma 1. Any communication-estimation scheme achieving a distortion D_n := (1/n) E[||S^n − Ŝ^n||²] over blocklength n satisfies

(n/2) log(σ_S²/D_n) ≤ I(S^n; Y^n, Θ_1^n, Θ_2^n). (25)

Proof. The proof is given in Appendix A.1.
Another useful property is the differential entropy maximizing property of the Gaussian distribution [40]: for any random vector X^n with covariance matrix K and X_g^n ∼ N(0, K),

h(X^n) ≤ h(X_g^n) = (1/2) log((2πe)^n det(K)).

The above facts will be extensively used in our proofs.
The outer bound is established by bounding, for non-negative weights (η_1, η_2, λ), the maximum of the weighted objective η_1 R_1 + η_2 R_2 + (λ/2) log(σ_S²/D), where the maximum is over all (R_1, R_2, D) obeying (5)-(8). We note that it suffices to restrict attention to η_i ≥ 0, since η_i < 0 will trivially correspond to R_i = 0 in the maximization, a case already accounted for by η_i = 0. Likewise, since D ≤ σ_S², only λ ≥ 0 need be considered. Therefore, we only consider non-negative weighting coefficients in the sequel. The converse is established by showing that if (R_1, R_2, D_n) is achievable using block length n, then, for all η_1, η_2, λ ≥ 0,

η_1 R_1 + η_2 R_2 + (λ/2) log(σ_S²/D_n) ≤ max [η_1 R_1' + η_2 R_2' + (λ/2) log(σ_S²/D')] + o(1), (27)

where the maximum is over all triples (R_1', R_2', D') obeying (5)-(8), and o(1) has the usual meaning in standard Landau notation. We note that since the random variables in the tuple (W_1, W_2, S^n, Θ_1^n, Θ_2^n) are mutually independent, the Markov chain X_1^n → S^n → X_2^n holds. Denoting by P_{ji} := E[X_{ji}²], j ∈ {1, 2}, the power expended by user j in the i-th entry of a block of transmissions, the power constraints read (1/n) ∑_{i=1}^n P_{ji} ≤ P_j, j ∈ {1, 2}. We define the empirical covariance matrix K_i of the vector (X_{1i}, X_{2i}, S_i), with K_i(p, l) denoting its entries. Next, we introduce two parameters γ_i ∈ [0, 1] and β_i ∈ [0, 1] capturing the fractions of the per-symbol powers P_{1i} and P_{2i} that are uncorrelated with the state S_i, together with their block-averaged counterparts γ ∈ [0, 1] and β ∈ [0, 1]. With this, we are ready to prove (27). Firstly, considering the case where η_1 ≥ η_2 is sufficient, as a simple renaming of the indices will give us the other case. For η_2 > 0, since λ is an arbitrary positive number, maximizing the left-hand side of (27) is equivalent to maximizing η_1 R_1 + η_2 R_2 + η_2 λ (1/2) log(σ_S²/D_n). Dividing by η_2, and then renaming η_1/η_2 as η, the maximization becomes, for all η ≥ 1 and λ ≥ 0, that of η R_1 + R_2 + (λ/2) log(σ_S²/D_n). For a given η > 1, three regimes of λ arise, as shown in Figure 3. Let R_1(γ), R_2(β), R_sum(γ, β), and D(γ, β), respectively, denote the right-hand sides of Equations (5)-(8).
The following two lemmas are crucial to our proofs.
We now consider the different regimes for λ (see Figure 3). Case 1 (λ ≤ 1 and η ≥ 1): In this regime, Lemma 2 directly gives a bound on the weighted sum rate.
Case 2 (λ ≥ η and η ≥ 1): Since η ≥ 1, we have a chain of inequalities in which (a) follows from an application of Lemma 2 followed by Lemma 3, and (b) follows from the fact that uncoded transmission of the state by the two users, acting as a super-user with power (√P_1 + √P_2)², results in the minimal distortion possible [20].

Equivalence of Inner and Outer Bounds
We now show that the regions defined by the inner and outer bounds in Sections 4 and 5.1 coincide, thereby establishing the capacity region. We will consider three regimes of λ ≥ 0 for each η ≥ 1, and prove that the maximal value of ηR_1 + R_2 + (λ/2) log(σ_S²/D) in the outer bound specified by (32)-(34) can be achieved.
While maximizing ηR_1 + R_2 + (λ/2) log(σ_S²/D_n), we already proved that λ ≥ η corresponds to an extreme point with zero sum-rate (Case 2 in Section 5.1). Clearly, the corresponding distortion lower bound D(0, 0) for this case, specified by (33), can be achieved by uncoded state transmission by both transmitters using all the available power. As a result, the condition λ = η encompasses all λ ≥ η. Moreover, the regime 1 ≤ λ ≤ η (Case 3 of Section 5.1) corresponds to the case where R_2 = 0. This implies that we only need to consider λ = 1 rather than λ ∈ [1, η). Clearly, the region with R_2 = 0 follows from the single-user results of [35], but for a state process with variance (√P_2 + σ_S)². This proves the achievability of the bound specified by (34). This leaves us with proving the achievability for those cases in which 0 < λ < 1 holds, corresponding to (32). In this regime, the following lemma is crucial.
Proof. The proof is given in Appendix A.3.
Since the weighted objective ηR_1 + R_2 + (λ/2) log(σ_S²/D_n) is to be maximized over (γ, β) ∈ [0, 1] × [0, 1], the strict concavity of k(·) implies the existence of a unique pair (γ*, β*) attaining the maximum for the given η > 1 and 0 ≤ λ ≤ 1. Evidently, choosing these maximizing parameters (γ*, β*) in our achievability result gives us the same operating point. Reversing the roles of R_1 and R_2, we cover the whole region. Thus, we have established the achievability of the outer bound specified by (32)-(34). This completes the proof of Theorem 1.
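The uniqueness of the maximizing power split can be illustrated numerically: the sketch below grid-searches the weighted objective over (γ, β), using the achievable-scheme expressions for the rates and distortion. The Rayleigh fading law and the specific (η, λ) pair are illustrative assumptions, not values from the proof.

```python
import numpy as np

rng = np.random.default_rng(4)
th1 = rng.rayleigh(1.0 / np.sqrt(2.0), 50_000)   # assumed Rayleigh fading samples
th2 = rng.rayleigh(1.0 / np.sqrt(2.0), 50_000)
P1 = P2 = 4.0
s2 = z2 = 1.0
eta, lam = 2.0, 0.5     # an illustrative point in the regime eta > 1, 0 < lambda < 1

def objective(g, b):
    """Weighted objective eta*R1 + R2 + (lam/2) log2(sigma_S^2 / D), evaluated
    with the achievable-scheme expressions for message-power fractions (g, b)."""
    R1 = np.mean(0.5 * np.log2(1 + th1**2 * g * P1 / (th2**2 * b * P2 + z2)))
    R2 = np.mean(0.5 * np.log2(1 + th2**2 * b * P2 / z2))
    a = 1 + th1 * np.sqrt((1 - g) * P1 / s2) + th2 * np.sqrt((1 - b) * P2 / s2)
    interf = th1**2 * g * P1 + th2**2 * b * P2 + z2
    D = np.mean(s2 * interf / (a**2 * s2 + interf))
    return eta * R1 + R2 + 0.5 * lam * np.log2(s2 / D)

grid = np.linspace(0.0, 1.0, 21)
vals = np.array([[objective(g, b) for b in grid] for g in grid])
gi, bi = np.unravel_index(np.argmax(vals), vals.shape)
print(f"approximate maximizer: gamma* ~ {grid[gi]:.2f}, beta* ~ {grid[bi]:.2f}")
```

A finer grid or a concave solver would locate (γ*, β*) more precisely; the coarse search already exhibits a single peak consistent with the strict-concavity argument.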

Conclusions
In this paper, we investigated joint message transmission and state estimation over a state-dependent fading Gaussian multiple access channel and characterized the trade-off region between the message rates and the state estimation distortion. It was shown that the optimal strategy involves static power allocation and uncoded state amplification, combined with Gaussian signaling and dirty paper coding. While the role of uncoded communications has been examined before for non-fading settings without state dependence, ours is the first result that brings out its significance in the context of state-dependent fading systems.
Our framework naturally generalizes previous results concerning state estimation on point-to-point fading channels to multiple users, as well as point-to-point non-fading settings to fading links with multiple users. Our results contribute to a better understanding of joint state estimation and communication problems in multi-terminal settings. They can be used as design guidelines for practical systems employing joint sensing and communication envisioned in future 6G wireless standards, and broadly apply to systems that involve joint compression and communication/rate-distortion trade-offs.
However, we assumed perfect state observation at the transmitters in this work. A long-standing open problem is that of communicating state and message streams in a fading GMAC with noisy state observations at the transmitters, which is left for future work. Moreover, there could be settings where the receiver cannot track the channel fading gains either, unlike in this work. Thus, another interesting avenue for further research is an investigation of the current setup when the encoders, as well as the decoder, are totally uninformed about the fading coefficients on the links.

Appendix A.1. Proof of Lemma 1

The proof is along the lines of (Equation (2), [20]), with appropriate modifications to incorporate the fading coefficients (Θ_1^n, Θ_2^n). We provide a full proof for completeness. Consider the following chain of inequalities, starting with the right-hand side in Lemma 1, where (a) follows since Ŝ^n(Y^n, Θ_1^n, Θ_2^n) is a function of (Y^n, Θ_1^n, Θ_2^n), (b) follows since S^n is i.i.d., (c) follows since the Gaussian distribution maximizes differential entropy for a given variance, and (d) follows from Jensen's inequality.