Abstract
In this paper, we determine the capacity of the Gaussian arbitrarily-varying channel with a (possibly stochastic) encoder and a deterministic list-decoder under the average probability of error criterion. We assume that both the legitimate and the adversarial signals are restricted by their power constraints. We also assume that there is no path between the adversary and the legitimate user, but the adversary knows the legitimate user's code. We show that for any list size L, the capacity is equal to the capacity of a point-to-point Gaussian channel with noise variance increased by the adversary's power, if the adversary has less power than L times the transmitter's power; otherwise, the capacity is zero. In the converse proof, we show that if the adversary has enough power, then the decoder can be confounded by the adversarial superposition of several codewords while satisfying its power constraint with positive probability. The achievability proof benefits from a novel variant of the Csiszár-Narayan method for the arbitrarily-varying channel.
1. Introduction
An arbitrarily-varying channel (AVC) is a memoryless channel with unknown parameters that change arbitrarily from channel use to channel use. Because these parameters (the state) can change arbitrarily, we consider this a model for an active adversarial jammer. This adversary sends its signal to prevent the legitimate transmitter and receiver from maintaining reliable communication. Since adversaries can easily inject signals into wireless channels, it is of great importance to study the adversary's effect on the channel capacity. The capacity of the AVC depends on the coding method (such as randomized coding, a stochastic encoder or deterministic coding), the performance criterion (such as average or maximum error probability) and the extent of the adversary's knowledge about the transmitted signal (omniscient, myopic or oblivious adversary). Table 1 provides a summary of various models appearing in the literature, including the one considered here.
Table 1. Summary of the references.
Blackwell et al. introduced the AVC in Reference [1] and, under the average error probability criterion, found the capacity of the discrete memoryless AVC to be given by a min-max expression over the mutual information between input and output. They employed randomized coding, that is, common randomness between the encoder and the decoder, and assumed the jammer to be oblivious, that is, the jammer has no information about the transmitted signal except the code. In Reference [2], it is shown that the min-max expression is equivalent to the corresponding max-min one. Further, in Reference [3], the authors showed that this capacity remains the same even for the maximum error probability criterion, again provided access to common randomness. The case without common randomness between the transmitter and the receiver is referred to as the deterministic code setting. Ahlswede, in a notable paper [4], characterized the deterministic code capacity of a discrete AVC under the average probability of error through a dichotomy theorem. He proved that the capacity either equals the AVC randomized code capacity or else it equals zero, but he did not state any necessary or sufficient condition determining which of the two cases prevails. Ericson, in Reference [5], found a necessary condition for the positive alternative by defining symmetrizability. A symmetrizable AVC is one in which the adversary can mimic the transmitted signal in order to prevent the decoder from distinguishing between the true message and an adversarial imitation. Thus, he showed that if the deterministic code capacity of an AVC is positive, then the channel must be nonsymmetrizable. Later, in Reference [6], Csiszár and Narayan provided a sufficient condition, stating that if the AVC is nonsymmetrizable, then the deterministic code capacity is not zero. Therefore, combining both conditions, the deterministic code capacity of an AVC is positive if and only if the channel is nonsymmetrizable.
The capacity of the discrete AVC has also been investigated under input and state (or adversarial signal) constraints. Under peak or average input and state cost constraints, the random code capacity of the discrete AVC is studied in Reference [7] using the average probability of error as the performance criterion. Furthermore, the second part of Csiszár and Narayan's work in Reference [6] focuses on the deterministic code capacity of the AVC under input and state constraints for the same performance criterion. They proved that in this case, if the capacity is positive, then it is less than or equal to the corresponding random code capacity. In particular, with input and state constraints, the capacity can be positive but strictly less than the random code capacity; this does not occur without cost constraints. Csiszár, in Reference [8], extended this result from finite alphabets and state sets to general input and output alphabets and state sets.
There is a wide variety of research on different versions of AVCs under various adversary models, including References [9,10,11]. Sarwate, in Reference [9], considered a myopic adversary, in which there is a discrete memoryless channel (DMC) between the legitimate user and the jammer, and the jammer chooses its signal based on this noisy version of the user's codeword. He found the capacity by minimizing over all DMCs that the jammer can induce with its worst-case strategies. In Reference [10], a single-letter characterization of the capacity is obtained in the presence of a delayed adversary, which can observe the transmitted signal only after a delay. Assuming randomization at the encoder, the capacity corresponds to the randomized code capacity. Dey et al., in Reference [11], obtained upper bounds on the capacity of the binary channel in the presence of a causal adversary for both maximal and average error probabilities.
On the other hand, AVCs have also been studied in network settings [12,13,14,15,16,17,18], such as multiple-access and broadcast channels. References [12,13,14] investigated the capacity of arbitrarily varying multiple-access channels (AVMACs). In Reference [13], it is proved that symmetrizability, as defined for the two-user AVC, is a sufficient condition for an AVMAC to have a deterministic code capacity region with empty interior. Moreover, References [15,16,17,18] determine the capacity of arbitrarily varying wiretap channels (AVWCs).
This paper focuses on the Gaussian AVC (GAVC), wherein all alphabets are continuous rather than discrete. Initially, Ahlswede, in Reference [23], studied the capacity of a GAVC in which the adversary chooses the noise variance rather than an additive signal. Hughes and Narayan, in Reference [21], determined the randomized code capacity of the GAVC under peak input and state power constraints. They further extended their result in Reference [24] to a vector GAVC. The deterministic code capacity of the GAVC, for the average probability of error, was found in Reference [22]. The authors showed that if the adversary's power is greater than the legitimate transmitter's power, then symmetrizability occurs, causing the capacity to drop to zero. Note that for a discrete AVC with no cost constraints, non-symmetrizability makes the deterministic capacity positive and equal to the randomized capacity [6] (Theorem 1). It is further proved in [6] (Theorem 3) that under input and state cost constraints, non-symmetrizability still results in a positive deterministic capacity, but one that is sometimes strictly less than the randomized capacity. In the Gaussian case, even though there are input and state cost constraints, the behavior is like that of a discrete AVC with no cost constraints: if the channel is non-symmetrizable, then its deterministic capacity is positive and equal to the randomized capacity [22].
Hughes, in Reference [19], first showed that list-decoding, in which the decoder may output a small (and bounded) list rather than a unique message estimate, yields positive capacity for most symmetrizable discrete-memoryless AVCs. Intuitively, list-decoding combats the symmetrizing attack by allowing the list to contain the true message as well as the counterfeit(s) generated by the adversary; thus, the receiver can successfully decode even if it cannot single out the correct message. Furthermore, the authors in Reference [20] extended the list-decoding result to discrete-memoryless AVCs with state constraints. They determined upper and lower bounds on the capacity by introducing two notions of symmetrizability for this channel.
The capacity of the AVMAC has also been studied with list-decoding in References [25,26]. Nitinawarat, in Reference [25], introduced the symmetrizability of an AVMAC and showed that the capacity region for deterministic codes with fixed list size is empty if the list size is less than the symmetrizability; he also showed that the capacity corresponds to the random code capacity if the list size is greater than the symmetrizability. Boche and Schaefer, in Reference [26], obtained the list capacity region of the AVMAC with conferencing encoders, which for large enough list size is proved to equal the common-randomness-assisted capacity region. Moreover, in Reference [27], the authors found the deterministic code and random code capacity regions of the arbitrarily varying broadcast channel with receiver side information. By defining a concept of symmetrizability for this channel, they characterized the deterministic list-code capacity region as either identical to the random code capacity region or empty.
In this paper, we characterize the capacity of the GAVC of Reference [22] under list-decoding, for any list size and almost all power values of the transmitter and adversary; this parallels the result of Hughes in Reference [19], which obtained the list capacity for the discrete-memoryless AVC whose capacity was determined in Reference [6]. We assume that the encoder may be stochastic (that is, the encoder has access to private randomness) and that the list decoder is deterministic with constant list size L. Under the average probability of error criterion and without common randomness, we show that the list-decoding capacity of the GAVC equals the corresponding randomized code capacity if the list size is greater than the power ratio of the jammer to the legitimate user; otherwise, the capacity is zero. Our problem generalizes the multiple packing of spherical caps problem of Reference [28] to include Gaussian noise, although that work assumed the maximal probability of error as the performance criterion. Their upper bound, which is calculated only for the noiseless case, depends on the list size L even in the asymptotic regime. In contrast, the list size enters only our symmetrizability condition rather than the capacity expression itself.
In our converse proof (in Section 4), the adversary focuses on two possible strategies, one of which is simply sending Gaussian noise, causing the channel to act as a standard Gaussian channel with increased noise variance. The second strategy is to transmit the superposition of several random (counterfeit) codewords, which is shown to be possible with positive probability if the adversary's power is large enough. In our achievability proof (in Section 5), we employ Gaussian codewords with a particular version of minimum-distance list-decoding based on typicality. We extend the scheme of Reference [22] to show that, with high probability, a random Gaussian codebook has the properties needed to drive the probability of error to zero. However, our achievability proof originates from the idea of Reference [6] based on typical sets, rather than the geometric approach of Reference [22]. This scheme allows for simpler characterizations of the codebook constraints. It is worth mentioning that we prove achievability for a deterministic encoder, since it suffices to achieve a rate with a deterministic code; that is, any deterministic code is a realization of a stochastic code. Our converse and achievability proofs apply for any list size; our previous work [29] provided proofs only for a specific list size.
Notations: Upper case letters denote random variables, while lower case letters specify a deterministic value or the realization of a random variable. Bold letters denote n-length vectors. We indicate the inner product and the 2-norm by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. We use $|\cdot|^+$ and $\mathbb{E}[\cdot]$ to denote the positive-part function and the expectation, respectively. Also, for an integer N, the notation $[N]$ represents the set $\{1,2,\ldots,N\}$. The notations $\mathbf{I}_n$ and $\mathbf{1}_n$ stand for the identity matrix of size n and the all-ones vector of length n, respectively. For a vector $\mathbf{a}$, we use the superscript $T$, as in $\mathbf{a}^T$, to denote its transpose. Both the $\log$ and $\exp$ functions have base 2, and we define the Gaussian channel capacity function $C(x) = \frac{1}{2}\log(1+x)$.
2. Problem Statement
A GAVC is a standard point-to-point Gaussian channel modified by the presence of an additive, arbitrarily chosen adversary signal. Neither the transmitter nor the receiver knows anything about the adversary's signal, and the adversary has no information about the transmitted signal except the codebooks. The received signal is given by
$$\mathbf{Y} = \mathbf{x} + \mathbf{s} + \mathbf{V},$$
where the n-length vector $\mathbf{x}$ is the legitimate transmitter's signal, $\mathbf{s}$ represents the independent adversary signal and the noise $\mathbf{V}$ is an n-length vector of i.i.d. zero-mean Gaussian random variables with variance $\sigma^2$, independent of $\mathbf{x}$ and $\mathbf{s}$.
The transmitter and adversary signals are subject to peak power constraints, respectively $\|\mathbf{x}\|^2 \le nP$ and $\|\mathbf{s}\|^2 \le n\Lambda$. In addition, the transmitter and receiver are assumed to know the three parameters P, $\Lambda$ and $\sigma^2$.
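The following Python snippet is a minimal simulation sketch of this channel model and the peak power checks; the function names, parameter values and random seed are our own illustrative choices, not part of the paper.

```python
import numpy as np

def gavc_channel(x, s, sigma2, rng):
    """Received word Y = x + s + V, with V i.i.d. zero-mean Gaussian of variance sigma2."""
    v = rng.normal(0.0, np.sqrt(sigma2), size=x.size)
    return x + s + v

def satisfies_peak_power(w, power):
    """Peak power constraint: ||w||^2 <= n * power."""
    return np.dot(w, w) <= w.size * power

rng = np.random.default_rng(0)
n, P, Lam, sigma2 = 1000, 1.0, 2.0, 0.5
x = rng.normal(0.0, np.sqrt(P), n)    # one legitimate (Gaussian) codeword
s = rng.normal(0.0, np.sqrt(Lam), n)  # one possible adversary signal
y = gavc_channel(x, s, sigma2, rng)   # received sequence
```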
A stochastic list code for the GAVC is given by:
- Message set $[2^{nR}]$ and encoder private randomness set $[2^{nr}]$,
- Stochastic encoding function $\mathbf{x}: [2^{nR}] \times [2^{nr}] \to \mathbb{R}^n$,
- List decoding function $\phi: \mathbb{R}^n \to \mathcal{M}_L$,
where R is the rate of the code. The transmitter encodes the message M and its private randomness K to $\mathbf{x}(M, K)$, where M and K are chosen uniformly from the sets $[2^{nR}]$ and $[2^{nr}]$, respectively. At the receiver, the signal $\mathbf{Y}$ is decoded by the deterministic function $\phi$ to the set $\mathcal{M}_L$, which is the collection of all subsets of $[2^{nR}]$ with cardinality at most L. In other words, L is the size of the list decoder.
First, define the probability of error for a specific message m in the presence of a specific adversary signal $\mathbf{s}$ as the probability that the decoded list misses the true message, i.e., $e(m, \mathbf{s}) = \mathbb{P}\{m \notin \phi(\mathbf{Y}) \mid M = m\}$. Therefore, the average probability of error for $\mathbf{s}$ is given by
$$P_e^{(n)}(\mathbf{s}) = 2^{-nR} \sum_{m \in [2^{nR}]} e(m, \mathbf{s}).$$
Finally, the overall probability of error $P_e^{(n)}$ is obtained by maximizing $P_e^{(n)}(\mathbf{s})$ over all possible choices of jammer sequences satisfying the peak power constraint $\|\mathbf{s}\|^2 \le n\Lambda$. Suppose r is the rate of private randomness. Given L and r, a rate R is achievable if there exists a sequence of codes such that $P_e^{(n)} \to 0$ as $n \to \infty$. The list-code capacity is the supremum of all achievable rates given L and r.
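As an illustration of this error criterion, the following hedged Monte Carlo sketch estimates the average probability of error for a fixed jammer sequence; `encode` and `list_decode` are hypothetical stand-ins for the code components defined above, and all names are ours.

```python
import numpy as np

def avg_error_prob(encode, list_decode, s, sigma2, num_messages, num_keys, trials, rng):
    """Estimate P_e(s): fraction of trials where the true message is not in the list."""
    errors = 0
    for _ in range(trials):
        m = int(rng.integers(num_messages))   # M uniform on the message set
        k = int(rng.integers(num_keys))       # K: encoder private randomness
        x = encode(m, k)
        y = x + s + rng.normal(0.0, np.sqrt(sigma2), x.size)
        if m not in list_decode(y):           # error event: list misses the true message
            errors += 1
    return errors / trials
```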
3. Main Results
Theorem 1.
The list-code capacity of the GAVC is given by
$$C = \begin{cases} C\!\left(\dfrac{P}{\Lambda + \sigma^2}\right), & \text{if } \Lambda < LP, \\ 0, & \text{if } \Lambda > LP. \end{cases}$$
Note that the capacity for $\Lambda = LP$ is unsolved.
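For concreteness, the dichotomy of Theorem 1 can be evaluated numerically; the sketch below is our own illustration, using the capacity function $C(x) = \frac{1}{2}\log(1+x)$ from the Notations section.

```python
import numpy as np

def list_code_capacity(P, Lam, sigma2, L):
    """Theorem 1: C(P / (Lam + sigma2)) if Lam < L*P; 0 if Lam > L*P; open otherwise."""
    if Lam < L * P:
        return 0.5 * np.log2(1.0 + P / (Lam + sigma2))
    if Lam > L * P:
        return 0.0
    return None  # the boundary case Lam == L*P is unsolved

print(list_code_capacity(1.0, 3.0, 1.0, 4))  # ~0.161 bits/use: Lam < L*P
print(list_code_capacity(1.0, 3.0, 1.0, 2))  # 0.0: Lam > L*P (symmetrizable)
```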
Remark 1.
Note that this result holds for all r, including $r = 0$, which corresponds to a deterministic encoder. That is, the capacity does not depend on the amount of private randomness.
Remark 2.
The condition on the ratio $\Lambda/P$ determines whether it is possible for the adversary to launch a symmetrizing attack, wherein it transmits a superposition of codewords. Since each codeword has power at most P, the most codewords that the adversary can superpose while obeying its power constraint of Λ is $\lfloor \Lambda/P \rfloor$. Thus, if the allowable list size is greater than $\Lambda/P$, then even under this attack the decoder can output a list made up of the true message and the superposed codewords selected by the adversary. Of course, the decoder does not know which is which, but it can still guarantee that the true message is in the list. Thus, the worst the adversary can do is act as extra additive Gaussian noise with variance Λ, so the capacity equals that of a standard Gaussian channel with increased noise variance, as in Theorem 1. However, if the allowable list size is less than $\Lambda/P$, then there are too many possibilities for the decoder to decode correctly, so the capacity is zero. Note that none of this depends on the channel noise, so $\sigma^2$ does not come into play in the condition on L.
Remark 3.
For the achievability proof, we make no assumptions about what the adversary does. However, for the converse proof, it is allowable to weaken the adversary by making certain assumptions about its behavior, because doing so can only increase the achievable rates. Since the converse is an upper bound on achievable rates, weakening the adversary in this manner still yields a valid upper bound.
4. Converse Proof
Without loss of generality, suppose . For the first case, where $\Lambda < LP$, we assume that we have a sequence of codes with vanishing probability of error. Since these codes must function for arbitrary jamming signals, we may derive an outer bound by assuming the adversary transmits i.i.d. Gaussian noise with variance $\Lambda - \delta$ for any $\delta > 0$, or $\mathbf{0}$ if the Gaussian realization has power greater than $n\Lambda$. By the law of large numbers, with high probability the resulting channel is equivalent to a standard Gaussian channel with noise variance $\sigma^2 + \Lambda - \delta$. Thus, since $\delta$ can be chosen arbitrarily small, from the capacity of a non-adversarial Gaussian channel,
$$R \le C\!\left(\frac{P}{\Lambda + \sigma^2}\right).$$
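A minimal sketch of this first jamming strategy follows, assuming the truncate-to-zero behavior on power overflow described above; the names and the choice of delta are ours.

```python
import numpy as np

def gaussian_jammer(n, Lam, delta, rng):
    """i.i.d. N(0, Lam - delta) jamming noise, zeroed if it violates the peak power."""
    s = rng.normal(0.0, np.sqrt(Lam - delta), n)
    if np.dot(s, s) > n * Lam:   # overflow event; its probability vanishes by the LLN
        return np.zeros(n)
    return s
```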
Now, assume the symmetrizable case where $\Lambda > LP$. In order to show that the capacity is zero, first consider a sequence of stochastic codes with probabilities of error $P_e^{(n)}$. We claim that if $R > 0$ and the jammer employs the following strategy, then $P_e^{(n)}$ is bounded away from zero for sufficiently large n: The jammer randomly and uniformly chooses L messages $\tilde M_1, \ldots, \tilde M_L$ from $[2^{nR}]$ and also L private keys $\tilde K_1, \ldots, \tilde K_L$ from $[2^{nr}]$, where all of these choices are mutually independent. Note that the jammer knows the transmitted codebook. The jammer then constructs
$$\mathbf{s} = \sum_{i=1}^{L} \mathbf{x}(\tilde M_i, \tilde K_i) + \mathbf{a}, \tag{5}$$
where $\mathbf{a}$ is a constant vector that we will choose later. The jammer transmits $\mathbf{s}$ if $\|\mathbf{s}\|^2 \le n\Lambda$, or else transmits $\mathbf{0}$. In the former case, the received signal is
$$\mathbf{Y} = \mathbf{x}(m, k) + \sum_{i=1}^{L} \mathbf{x}(\tilde M_i, \tilde K_i) + \mathbf{a} + \mathbf{V},$$
where m is the true message. If $m, \tilde M_1, \ldots, \tilde M_L$ are all different and the corresponding L+1 codewords enter the received signal symmetrically for all sets of L of them,
then from the decoder's perspective, any L of the L+1 messages might have been forged by the adversary. Therefore, the list decoder with list size at most L has a probability of error of at least $\frac{1}{L+1}$ on this event; that is, with probability at least $\frac{1}{L+1}$, the decoder chooses a list of L messages from the L+1 candidates that does not contain the true message m. That is,
$$P_e^{(n)} \ge \frac{1}{L+1}\,\mathbb{P}\!\left\{\|\mathbf{s}\|^2 \le n\Lambda\right\} - \binom{L+1}{2} 2^{-nR}, \tag{10}$$
where the second term in (10) accounts for the probability of the messages not all being distinct, which tends to zero as $n \to \infty$. Note that $\mathbf{x}(\tilde M_1, \tilde K_1), \ldots, \mathbf{x}(\tilde M_L, \tilde K_L)$ are independent and each is distributed as a transmitted sequence from the code. We proceed to show that there exists a choice of $\mathbf{a}$, based only on the codebook, such that (10) is bounded away from zero for sufficiently large n if $R > 0$.
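The superposition attack can be sketched as follows. This is only an illustration: the proof chooses the constant vector a carefully from the codebook, whereas here it is an arbitrary argument, and the data layout and names are our own.

```python
import numpy as np

def superposition_jammer(codebook, L, Lam, a, rng):
    """Superpose L independently, uniformly chosen codewords (plus a), with a power check.

    codebook: dict mapping (message, key) pairs to n-length numpy arrays.
    """
    pairs = list(codebook.keys())
    picks = rng.integers(len(pairs), size=L)   # independent choices; collisions are rare
    s = a + sum(codebook[pairs[i]] for i in picks)
    if np.dot(s, s) <= s.size * Lam:  # holds with positive probability when Lam > L*P
        return s
    return np.zeros(s.size)
```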
Let
Note that , since by the power constraint we always have . Fix any and let . Let
Since we have . Thus for n sufficiently large, there exists such that
This is the one to be used by the jammer in (5). Let be the set of all that satisfy . Note that .
Since , by the definition of ,
Specifically, there exists n sufficiently large so that for all ,
Fix any and consider those such that
which implies
Thus, we obtain the following by applying (15) with as
Moreover, if satisfy for all , then
where (23) holds since and (24) holds for sufficiently small and by assumption . Now we have
where (25) follows from the analysis leading to (24), (26) follows from the union bound and the fact that the choices $(\tilde M_i, \tilde K_i)$ are independent, and (27) follows from (20) and the lower bound on the probability derived above. For sufficiently small $\epsilon$, (27) is bounded away from zero, so by (10), $P_e^{(n)}$ is also bounded away from zero for sufficiently large n if $R > 0$.
5. Achievability Proof
Before proceeding to the proof, we define the typical set for Gaussian random variables: for zero-mean jointly Gaussian $(X_1, \ldots, X_k)$, a tuple of n-length vectors $(\mathbf{x}_1, \ldots, \mathbf{x}_k)$ is $\epsilon$-typical if
$$\left| \frac{1}{n} \langle \mathbf{x}_i, \mathbf{x}_j \rangle - \mathbb{E}[X_i X_j] \right| \le \epsilon \quad \text{for all } i, j \in [k].$$
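A sketch of this typicality test, under the empirical-correlation definition above (function and argument names are ours):

```python
import numpy as np

def is_typical(vectors, cov, eps):
    """Check |(1/n)<x_i, x_j> - E[X_i X_j]| <= eps for all pairs i, j.

    vectors: list of k n-length numpy arrays; cov: k x k nominal second moments.
    """
    n = vectors[0].size
    k = len(vectors)
    for i in range(k):
        for j in range(k):
            if abs(np.dot(vectors[i], vectors[j]) / n - cov[i][j]) > eps:
                return False
    return True
```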
Without loss of generality, assume $P = 1$, $r = 0$ and that the rate R satisfies $R < C\!\left(\frac{1}{\Lambda + \sigma^2}\right)$.
Note that assuming $r = 0$ makes the code deterministic. Thus, it suffices to achieve the list-code capacity by the following deterministic code construction:
Codebook generation: Fix $\epsilon > 0$. We generate i.i.d. zero-mean Gaussian sequences $\mathbf{x}(m)$ with variance $1 - \epsilon$ for each $m \in [2^{nR}]$.
Encoding: To send message m, the transmitter sends $\mathbf{x}(m)$ if its power $\frac{1}{n}\|\mathbf{x}(m)\|^2$ is less than 1; otherwise, it sends the zero vector.
Decoding: First, given , for let set be the set of ℓ-tuple messages that for any set of zero-mean Gaussian variables such that
and .
Now we define the decoding function as
where we choose arbitrarily among multiple minimizers.
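For intuition only, the sketch below replaces the typicality-based rule above, which we do not reproduce in full, with a simple minimum-distance surrogate that shows the list-output interface; all names are ours.

```python
import numpy as np

def list_decode(y, codebook, L):
    """Return the L messages whose codewords are closest to y in 2-norm."""
    dists = {m: float(np.linalg.norm(y - x)) for m, x in codebook.items()}
    return set(sorted(dists, key=dists.get)[:L])
```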
Analysis of the probability of error: To prove that the constructed code is achievable, meaning that the probability of error tends to zero as $n \to \infty$, we utilize several lemmas, including the following. Lemma 1 states some necessary codebook properties that hold with high probability; its proof is given in Appendix A.
Lemma 1.
Let $\mathbf{x}(m)$, for $m \in [2^{nR}]$, be the Gaussian codebook described above. With probability approaching 1 as $n \to \infty$, the codebook satisfies the following, for any , where , and any zero-mean jointly Gaussian random vector with positive definite covariance matrices with diagonals at most for all :
The following lemmas can be easily generalized to Gaussian random variables by following the corresponding lemmas in Reference [30] for discrete memoryless random variables.
Conditional Typicality Lemma: Let be jointly Gaussian random variables. Suppose that and . Then, for every ,
Joint Typicality Lemma: Let be jointly Gaussian random variables. If is a pair of arbitrary sequences and then there exists that tends to zero as such that
Assume that the legitimate user transmits message M. Then, the probability of error is upper bounded by the sum of the probabilities of the following L error events:
By Lemma 1, we may assume we have a deterministic codebook that satisfies (33)–(37). Consider any state sequence $\mathbf{s}$. By (33), with high probability the transmitted codeword and the state are jointly typical, where the associated random variables are independent. Thus, by the conditional typicality lemma, for every $\mathbf{s}$, with high probability the codeword, the state and the noise are jointly typical, where the associated random variables are mutually independent.
We first bound the probability of error event for . Define the shorthand . Let denote a finite set of Gaussian distributions of that is -dense in the set of all Gaussian distributions of with variances at most . Note that the cardinality of does not depend on n. We may upper bound by
where
We will show that for all Gaussian vectors (whether or not they are in ). We may restrict ourselves to
where
where (44) holds since the legitimate transmitter, the jammer and the noise are independently generated; (45) follows from the assumptions for ; (46) follows from (31) and the assumption that ; and (47) is obtained from (31) as follows:
Now, suppose
Then we would have for all . Thus, are mutually independent since
and
where (53) follows from (44), (55) and (56) follow from (31) and . Hence, if the message satisfies the conditions in the probability in (43), then . This implies that takes on the value in the minimum, so and so we must have
However, this contradicts the assumption that X is independent from , since
Therefore, the assumption in (52) is false and there exists such that
Now, consider the following two cases.
Case (a): . By (37), we only need to consider distributions where
Then for any
where in (62) the sum is over all , in (63) we use (36), the joint typicality lemma and , and (65) follows from (60) and (61). Thus, (65) tends to zero exponentially fast for sufficiently small .
Case (b): . We may assume without loss of generality that . Now, we upper bound (43) by
Note that by (35), we may narrow the distributions to those with
Therefore,
where (68) follows from (34) and the joint typicality lemma, (69) follows since by and (67) we have
Using this result in (70), we obtain
meaning that is exponentially vanishing if is sufficiently small and the rate condition in (30) holds.
Now, we consider error probability . Define . Let denote a finite -dense subset of Gaussian vectors with variances at most . Thus, can be upper bounded by
where
Thus, we need to show that vanishes as for all Gaussian vectors that satisfy (44)–(47) for all and
where (80) follows from (31) in which for all .
Since for all and , the covariance matrix of is equal to
which has determinant . This determinant must be positive since the covariance matrix is positive definite. However, since , this contradicts the assumption that in (29). Thus, there exists such that
where we have used the fact that .
Now, we may consider the two cases and . Then, using an argument identical to that in cases (a) and (b) for , it follows that is also exponentially vanishing.
Author Contributions
Conceptualization, F.H. and O.K.; methodology, F.H. and O.K.; formal analysis, F.H. and O.K.; investigation, F.H.; resources, O.K.; writing—original draft preparation, F.H.; writing—review and editing, O.K.; supervision, O.K.; project administration, O.K.; funding acquisition, O.K.
Funding
This research was funded by the National Science Foundation under grant number CCF-1453718.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Appendix A. Proof of Lemma 1
In order to prove (33), we use our proof in Reference [31] (Lemma 6) for one codebook. Moreover, to obtain (34)–(37), we apply the corresponding proof of these four equations in Reference [19] (Lemma 1) for Gaussian distributions. Note that Reference [19] focuses on discrete alphabets but the same proofs can be extended to Gaussian distributions by quantization of the set of continuous random variables in the following way.
Let be i.i.d. Gaussian n-length random vectors (the codebook), independent from each other, with . Fix and a covariance matrix such that is a -dense subset of for such that and is a -dense subset of for positive definite covariance matrices with diagonals at most .
Following a proof similar to that of Reference [19] (Lemma 1), we obtain, for given and covariance matrix , that the complement of each event in (34)–(37) occurs with doubly exponentially decreasing probability for sufficiently large n, meaning that
Then, in order to complete the proof, note that for any fixed the cardinality of the finite set grows only exponentially in n, and the set is finite; together with the doubly exponentially decreasing probabilities in (A1)–(A4), this implies that, with probability approaching 1, all inequalities in (34)–(37) hold simultaneously for sufficiently large n. Since these inequalities hold for every element of the finite sets and , for any vector and any given covariance matrix (with ) that is not in its corresponding -dense subset, there exists a point of the -dense subset close enough to it (within its distance neighborhood). Then, using the continuity properties of all the sets involved, we conclude that (34)–(37) also hold for any point that is not in the -dense subset.
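As a sketch of this quantization step, the following builds a finite epsilon-dense grid; the actual proof works with covariance matrices (and would filter for positive definiteness), so this simplified construction and its names are our own.

```python
import itertools
import numpy as np

def eps_dense_variances(vmax, eps):
    """Grid such that every variance in [0, vmax] is within eps of a grid point."""
    return np.arange(0.0, vmax + eps, eps)

def eps_dense_covariances(k, vmax, eps):
    """Coarse grid over symmetric k x k matrices (PSD filtering omitted)."""
    grid = eps_dense_variances(vmax, eps)
    mats = []
    for entries in itertools.product(grid, repeat=k * (k + 1) // 2):
        m = np.zeros((k, k))
        m[np.triu_indices(k)] = entries
        m = m + m.T - np.diag(np.diag(m))  # symmetrize
        mats.append(m)
    # The cardinality depends on eps and k but not on the blocklength n,
    # which is the property used in the appendix.
    return mats
```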
References
- Blackwell, D.; Breiman, L.; Thomasian, A.J. The Capacities of Certain Channel Classes under Random Coding. Ann. Math. Statist. 1960, 31, 558–567. [Google Scholar] [CrossRef]
- Stiglitz, I. Coding for a class of unknown channels. IEEE Trans. Inf. Theory 1966, 12, 189–195. [Google Scholar] [CrossRef]
- Ahlswede, R.; Wolfowitz, J. Correlated decoding for channels with arbitrarily varying channel probability functions. Inf. Control 1969, 14, 457–473. [Google Scholar] [CrossRef]
- Ahlswede, R. Elimination of correlation in random codes for arbitrarily varying channels. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 1978, 44, 159–175. [Google Scholar] [CrossRef]
- Ericson, T. Exponential error bounds for random codes in the arbitrarily varying channel. IEEE Trans. Inf. Theory 1985, 31, 42–48. [Google Scholar] [CrossRef]
- Csiszar, I.; Narayan, P. The capacity of the arbitrarily varying channel revisited: Positivity, constraints. IEEE Trans. Inf. Theory 1988, 34, 181–193. [Google Scholar] [CrossRef]
- Csiszar, I.; Narayan, P. Arbitrarily varying channels with constrained inputs and states. IEEE Trans. Inf. Theory 1988, 34, 27–34. [Google Scholar] [CrossRef]
- Csiszar, I. Arbitrarily varying channels with general alphabets and states. IEEE Trans. Inf. Theory 1992, 38, 1725–1742. [Google Scholar] [CrossRef]
- Sarwate, A.D. Coding against myopic adversaries. In Proceedings of the 2010 IEEE Information Theory Workshop, Cairo, Egypt, 6–8 January 2010; pp. 1–5. [Google Scholar] [CrossRef]
- Dey, B.K.; Jaggi, S.; Langberg, M.; Sarwate, A.D. Coding against delayed adversaries. In Proceedings of the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, 12–18 June 2010; pp. 285–289. [Google Scholar] [CrossRef]
- Dey, B.K.; Jaggi, S.; Langberg, M.; Sarwate, A.D. Upper Bounds on the Capacity of Binary Channels with Causal Adversaries. IEEE Trans. Inf. Theory 2013, 59, 3753–3763. [Google Scholar] [CrossRef]
- Jahn, J. Coding of arbitrarily varying multiuser channels. IEEE Trans. Inf. Theory 1981, 27, 212–226. [Google Scholar] [CrossRef]
- Gubner, J.A. On the deterministic-code capacity of the multiple-access arbitrarily varying channel. IEEE Trans. Inf. Theory 1990, 36, 262–275. [Google Scholar] [CrossRef]
- Ahlswede, R.; Cai, N. Arbitrarily varying multiple-access channels. I. Ericson’s symmetrizability is adequate, Gubner’s conjecture is true. IEEE Trans. Inf. Theory 1999, 45, 742–749. [Google Scholar] [CrossRef]
- MolavianJazi, E. Secure Communications over Arbitrarily Varying Wiretap Channels. Ph.D. Thesis, University of Notre Dame, Notre Dame, IN, USA, 2009. [Google Scholar]
- Bjelaković, I.; Boche, H.; Sommerfeld, J. Strong secrecy in arbitrarily varying wiretap channels. In Proceedings of the 2012 IEEE Information Theory Workshop, Lausanne, Switzerland, 3–7 September 2012; pp. 617–621. [Google Scholar] [CrossRef]
- Nötzel, J.; Wiese, M.; Boche, H. The Arbitrarily Varying Wiretap Channel—Secret Randomness, Stability, and Super-Activation. IEEE Trans. Inf. Theory 2016, 62, 3504–3531. [Google Scholar] [CrossRef]
- Goldfeld, Z.; Cuff, P.; Permuter, H.H. Arbitrarily Varying Wiretap Channels with Type Constrained States. IEEE Trans. Inf. Theory 2016, 62, 7216–7244. [Google Scholar] [CrossRef]
- Hughes, B.L. The smallest list for the arbitrarily varying channel. IEEE Trans. Inf. Theory 1997, 43, 803–815. [Google Scholar] [CrossRef]
- Sarwate, A.D.; Gastpar, M. List-Decoding for the Arbitrarily Varying Channel under State Constraints. IEEE Trans. Inf. Theory 2012, 58, 1372–1384. [Google Scholar] [CrossRef]
- Hughes, B.; Narayan, P. Gaussian arbitrarily varying channels. IEEE Trans. Inf. Theory 1987, 33, 267–284. [Google Scholar] [CrossRef]
- Csiszar, I.; Narayan, P. Capacity of the Gaussian arbitrarily varying channel. IEEE Trans. Inf. Theory 1991, 37, 18–26. [Google Scholar] [CrossRef]
- Ahlswede, R. The capacity of a channel with arbitrarily varying additive Gaussian channel probability functions. In Proceedings of the Transactions of the Sixth Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, Prague, Czech Republic, 19–25 September 1971; pp. 13–21. [Google Scholar]
- Hughes, B.; Narayan, P. The capacity of a vector Gaussian arbitrarily varying channel. IEEE Trans. Inf. Theory 1988, 34, 995–1003. [Google Scholar] [CrossRef]
- Nitinawarat, S. On the Deterministic Code Capacity Region of an Arbitrarily Varying Multiple-Access Channel under List Decoding. IEEE Trans. Inf. Theory 2013, 59, 2683–2693. [Google Scholar] [CrossRef]
- Boche, H.; Schaefer, R.F. List decoding for arbitrarily varying multiple access channels with conferencing encoders. In Proceedings of the 2014 IEEE International Conference on Communications (ICC), Sydney, Australia, 10–14 June 2014; pp. 1934–1940. [Google Scholar] [CrossRef]
- Schaefer, R.F.; Boche, H. List Decoding for Arbitrarily Varying Broadcast Channels with Receiver Side Information. IEEE Trans. Inf. Theory 2014, 60, 4472–4487. [Google Scholar] [CrossRef]
- Blachman, N.M.; Few, L. Multiple packing of spherical caps. Mathematika 1963, 10, 84–88. [Google Scholar] [CrossRef]
- Hosseinigoki, F.; Kosut, O. Capacity of the Gaussian Arbitrarily-Varying Channel with List Decoding. In Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 471–475. [Google Scholar] [CrossRef]
- El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
- Hosseinigoki, F.; Kosut, O. The Gaussian Interference Channel in the Presence of Malicious Jammers. arXiv 2017, arXiv:1712.04133. [Google Scholar]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).