Probabilistic Methods in Information Theory, Hypothesis Testing, and Coding

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 August 2019) | Viewed by 19667

Special Issue Editor


Prof. Dr. Igal Sason
Guest Editor
Andrew and Erna Viterbi Faculty of Electrical Engineering, Technion–Israel Institute of Technology, Haifa 32000, Israel
Interests: information theory; coding theory

Special Issue Information

Dear Colleagues,

Probabilistic methods play a key role in establishing direct and converse results in information theory, statistical hypothesis testing, and coding. In this Special Issue, we welcome unpublished contributions related to such probabilistic tools and their information- and coding-theoretic applications. Examples of such methods, used to establish results in channel coding and in lossless and lossy source coding, include concentration-of-measure inequalities, large deviations, the method of types, martingales, majorization theory, coupling, and Stein's method.

Prof. Dr. Igal Sason
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • channel coding
  • data compression
  • universal coding
  • hypothesis testing
  • information measures
  • method of types
  • concentration of measures
  • martingales
  • Stein's method

Published Papers (6 papers)


Research

26 pages, 544 KiB  
Article
Guessing with a Bit of Help
by Nir Weinberger and Ofer Shayevitz
Entropy 2020, 22(1), 39; https://doi.org/10.3390/e22010039 - 26 Dec 2019
Cited by 5 | Viewed by 2899
Abstract
What is the value of just a few bits to a guesser? We study this problem in a setup where Alice wishes to guess an independent and identically distributed (i.i.d.) random vector and can procure a fixed number of k information bits from Bob, who has observed this vector through a memoryless channel. We are interested in the guessing ratio, which we define as the ratio of Alice’s guessing moments with and without observing Bob’s bits. For the case of a uniform binary vector observed through a binary symmetric channel, we provide two upper bounds on the guessing ratio by analyzing the performance of the dictator (for general k ≥ 1) and majority functions (for k = 1). We further provide a lower bound via maximum entropy (for general k ≥ 1) and a lower bound based on Fourier-analytic/hypercontractivity arguments (for k = 1). We then extend our maximum entropy argument to give a lower bound on the guessing ratio for a general channel with a binary uniform input that is expressed using the strong data-processing inequality constant of the reverse channel. We compute this bound for the binary erasure channel and conjecture that greedy dictator functions achieve the optimal guessing ratio.
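To make the guessing-moment comparison concrete, here is a minimal numerical sketch (an illustration under assumed parameters, not code accompanying the paper): X^n is uniform on {0,1}^n, Bob observes it through a BSC(p), and he forwards the single dictator bit f(Y^n) = Y_1 to Alice. All function names and parameter values below are hypothetical.

```python
# Hypothetical sketch (not the paper's code): guessing moments for a uniform
# binary vector X^n observed by Bob through a BSC(p), when Bob forwards the
# single "dictator" bit f(Y^n) = Y_1.  Alice guesses in decreasing order of the
# posterior P(x | bit); without help she guesses a uniform X^n blindly.
from itertools import product

def guessing_moment(probs, rho=1.0):
    """E[G^rho] for the optimal guesser, which sorts probabilities in decreasing order."""
    ordered = sorted(probs, reverse=True)
    return sum((i + 1) ** rho * q for i, q in enumerate(ordered))

def dictator_helped_moment(n=3, p=0.1, rho=1.0):
    xs = list(product([0, 1], repeat=n))
    prior = 1.0 / len(xs)                      # X^n is uniform
    total = 0.0
    for b in (0, 1):                           # helper bit b = Y_1
        # P(Y_1 = b | x) depends only on x_1 through the BSC crossover probability p
        joint = [prior * ((1 - p) if x[0] == b else p) for x in xs]
        pb = sum(joint)
        posterior = [q / pb for q in joint]
        total += pb * guessing_moment(posterior, rho)
    return total

n, p, rho = 3, 0.1, 1.0
unaided = guessing_moment([1.0 / 2 ** n] * 2 ** n, rho)
helped = dictator_helped_moment(n, p, rho)
print("unaided:", unaided, "helped:", helped, "ratio:", helped / unaided)
```

For p close to zero the single bit essentially halves the expected number of guesses; the paper's bounds quantify this effect for general k and for general guessing moments.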

25 pages, 903 KiB  
Article
Conditional Rényi Divergence Saddlepoint and the Maximization of α-Mutual Information
by Changxiao Cai and Sergio Verdú
Entropy 2019, 21(10), 969; https://doi.org/10.3390/e21100969 - 04 Oct 2019
Cited by 11 | Viewed by 3082
Abstract
Rényi-type generalizations of entropy, relative entropy and mutual information have found numerous applications throughout information theory and beyond. While there is consensus that the ways A. Rényi generalized entropy and relative entropy in 1961 are the “right” ones, several candidates have been put forth as possible mutual informations of order α. In this paper we lend further evidence to the notion that a Bayesian measure of statistical distinctness introduced by R. Sibson in 1969 (closely related to Gallager’s E₀ function) is the most natural generalization, lending itself to explicit computation and maximization, as well as closed-form formulas. This paper considers general (not necessarily discrete) alphabets and extends the major analytical results on the saddle-point and saddle-level of the conditional relative entropy to the conditional Rényi divergence. Several examples illustrate the main application of these results, namely, the maximization of α-mutual information with and without constraints.
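As background for readers less familiar with Sibson's proposal, the following standard discrete-alphabet expressions (textbook definitions, not reproduced from the paper) give the α-mutual information and its link to Gallager's E₀ function:

```latex
% Standard background definitions (discrete alphabets), not taken from the paper.
\[
  I_\alpha(X;Y) \;=\; \frac{\alpha}{\alpha-1}\,
  \log \sum_{y} \Bigl( \sum_{x} P_X(x)\, P_{Y|X}(y \mid x)^{\alpha} \Bigr)^{1/\alpha},
  \qquad \alpha \in (0,1) \cup (1,\infty),
\]
\[
  E_0(\rho, P_X) \;=\; \rho \, I_{1/(1+\rho)}(X;Y), \qquad \rho > 0 .
\]
```

The paper's contribution lies in extending the saddle-point analysis behind such maximizations to general, not necessarily discrete, alphabets.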
27 pages, 432 KiB  
Article
Distributed Hypothesis Testing with Privacy Constraints
by Atefeh Gilani, Selma Belhadj Amor, Sadaf Salehkalaibar and Vincent Y. F. Tan
Entropy 2019, 21(5), 478; https://doi.org/10.3390/e21050478 - 07 May 2019
Cited by 13 | Viewed by 3392
Abstract
We revisit the distributed hypothesis testing (or hypothesis testing with communication constraints) problem from the viewpoint of privacy. Instead of observing the raw data directly, the transmitter observes a sanitized or randomized version of it. We impose an upper bound on the mutual information between the raw and randomized data. Under this scenario, the receiver, which is also provided with side information, is required to make a decision on whether the null or alternative hypothesis is in effect. We first provide a general lower bound on the type-II exponent for an arbitrary pair of hypotheses. Next, we show that if the distribution under the alternative hypothesis is the product of the marginals of the distribution under the null (i.e., testing against independence), then the exponent is known exactly. Moreover, we show that the strong converse property holds. Using ideas from Euclidean information theory, we also provide an approximate expression for the exponent when the communication rate is low and the privacy level is high. Finally, we illustrate our results with a binary and a Gaussian example.
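The privacy constraint above is an upper bound on the mutual information between the raw data and its randomized version. The toy computation below (an illustration with a hypothetical mechanism and names, not taken from the paper) evaluates that leakage when each raw bit is sanitized by independent flipping with probability q:

```python
# Hypothetical illustration (not the paper's code): the privacy constraint bounds
# I(X; X~), the mutual information between a raw bit X and its sanitized version
# X~, produced here by randomized response (flip with probability q).
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def leakage(px1=0.5, q=0.2):
    """I(X; X~) for X ~ Bernoulli(px1) passed through a BSC(q) sanitizer."""
    p_out1 = px1 * (1 - q) + (1 - px1) * q   # P(X~ = 1)
    return h2(p_out1) - h2(q)                # I(X; X~) = H(X~) - H(X~ | X)

for q in (0.0, 0.1, 0.2, 0.5):
    print(f"flip probability q = {q}: leakage I(X; X~) = {leakage(0.5, q):.3f} bits")
```

At q = 0 the sanitizer leaks the full bit (1 bit of mutual information), while at q = 0.5 the leakage is zero; the privacy level studied in the paper interpolates between these extremes.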

15 pages, 361 KiB  
Article
Guessing with Distributed Encoders
by Annina Bracher, Amos Lapidoth and Christoph Pfister
Entropy 2019, 21(3), 298; https://doi.org/10.3390/e21030298 - 19 Mar 2019
Cited by 7 | Viewed by 2552
Abstract
Two correlated sources emit a pair of sequences, each of which is observed by a different encoder. Each encoder produces a rate-limited description of the sequence it observes, and the two descriptions are presented to a guessing device that repeatedly produces sequence pairs until correct. The number of guesses until correct is random, and it is required that it have a moment (of some prespecified order) that tends to one as the length of the sequences tends to infinity. The description rate pairs that allow this are characterized in terms of the Rényi entropy and the Arimoto–Rényi conditional entropy of the joint law of the sources. This solves the guessing analog of the Slepian–Wolf distributed source-coding problem. The achievability is based on random binning, which is analyzed using a technique by Rosenthal.
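For reference, the two information measures that characterize the rate region are the Rényi entropy and Arimoto's conditional Rényi entropy; their standard discrete-alphabet definitions (background, not reproduced from the paper) are:

```latex
% Standard definitions for discrete alphabets (background, not from the paper).
\[
  H_\alpha(X) \;=\; \frac{1}{1-\alpha}\,\log \sum_{x} P_X(x)^{\alpha},
  \qquad \alpha \in (0,1) \cup (1,\infty),
\]
\[
  H_\alpha(X \mid Y) \;=\; \frac{\alpha}{1-\alpha}\,
  \log \sum_{y} \Bigl( \sum_{x} P_{XY}(x,y)^{\alpha} \Bigr)^{1/\alpha}.
\]
```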

31 pages, 995 KiB  
Article
Detection Games under Fully Active Adversaries
by Benedetta Tondi, Neri Merhav and Mauro Barni
Entropy 2019, 21(1), 23; https://doi.org/10.3390/e21010023 - 29 Dec 2018
Cited by 8 | Viewed by 2624
Abstract
We study a binary hypothesis testing problem in which a defender must decide whether a test sequence has been drawn from a given memoryless source P₀, while an attacker strives to impede the correct detection. With respect to previous works, the adversarial setup addressed in this paper considers an attacker who is active under both hypotheses, namely, a fully active attacker, as opposed to a partially active attacker who is active under one hypothesis only. In the fully active setup, the attacker distorts sequences drawn both from P₀ and from an alternative memoryless source P₁, up to a certain distortion level, which is possibly different under the two hypotheses, to maximize the confusion in distinguishing between the two sources, i.e., to induce both false positive and false negative errors at the detector, also referred to as the defender. We model the defender–attacker interaction as a game and study two versions of this game, the Neyman–Pearson game and the Bayesian game. Our main result is in the characterization of an attack strategy that is asymptotically both dominant (i.e., optimal no matter what the defender’s strategy is) and universal, i.e., independent of P₀ and P₁. From the analysis of the equilibrium payoff, we also derive the best achievable performance of the defender, by relaxing the requirement on the exponential decay rate of the false positive error probability in the Neyman–Pearson setup and the tradeoff between the error exponents in the Bayesian setup. Such analysis permits characterizing the conditions for the distinguishability of the two sources given the distortion levels.
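As generic background for the detection side of such games (an illustrative sketch only, not the equilibrium strategy derived in the paper), a Hoeffding-style test accepts H₀ whenever the KL divergence between the empirical type of the test sequence and P₀ stays below a threshold that sets the false-positive exponent:

```python
# Generic background sketch (an assumption for illustration, not the paper's
# equilibrium strategy): a type-based test that accepts H0 iff the KL divergence
# between the empirical distribution of the test sequence and P0 is at most lam.
import math
from collections import Counter

def kl_divergence(q, p):
    """D(q || p) in nats, for distributions given as dicts over a finite alphabet."""
    return sum(qx * math.log(qx / p[x]) for x, qx in q.items() if qx > 0)

def type_based_test(seq, p0, lam):
    """Return True (accept H0) iff D(empirical type || P0) <= lam."""
    n = len(seq)
    empirical = {x: c / n for x, c in Counter(seq).items()}
    return kl_divergence(empirical, p0) <= lam

p0 = {0: 0.7, 1: 0.3}
print(type_based_test([0, 0, 1, 0, 0, 1, 0, 0], p0, lam=0.05))   # True for this sample
```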

25 pages, 452 KiB  
Article
Tight Bounds on the Rényi Entropy via Majorization with Applications to Guessing and Compression
by Igal Sason
Entropy 2018, 20(12), 896; https://doi.org/10.3390/e20120896 - 22 Nov 2018
Cited by 26 | Viewed by 4551
Abstract
This paper provides tight bounds on the Rényi entropy of a function of a discrete random variable with a finite number of possible values, where the considered function is not one-to-one. To that end, a tight lower bound on the Rényi entropy of a discrete random variable with a finite support is derived as a function of the size of the support, and the ratio of the maximal to minimal probability masses. This work was inspired by the recently published paper by Cicalese et al., which is focused on the Shannon entropy, and it strengthens and generalizes the results of that paper to Rényi entropies of arbitrary positive orders. In view of these generalized bounds and the works by Arikan and Campbell, non-asymptotic bounds are derived for guessing moments and lossless data compression of discrete memoryless sources.
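For context on the guessing application (a background sketch under standard assumptions, not code or results from the paper), Arikan's classical bound ties the optimal ρ-th guessing moment to the Rényi entropy of order 1/(1+ρ); the short numerical check below uses hypothetical numbers:

```python
# Background sketch (not from the paper): Arikan's bound
#   E[G(X)^rho] <= 2**(rho * H_{1/(1+rho)}(X)),
# where the optimal guesser queries the values of X in decreasing order of probability.
import math

def renyi_entropy(p, alpha):
    """H_alpha(X) in bits for a probability vector p (alpha > 0, alpha != 1)."""
    return math.log2(sum(q ** alpha for q in p)) / (1 - alpha)

def optimal_guessing_moment(p, rho):
    """E[G(X)^rho] when guesses are made in decreasing order of probability."""
    return sum((i + 1) ** rho * q for i, q in enumerate(sorted(p, reverse=True)))

p, rho = [0.5, 0.25, 0.125, 0.125], 1.0
lhs = optimal_guessing_moment(p, rho)
rhs = 2 ** (rho * renyi_entropy(p, 1 / (1 + rho)))
print(f"E[G^rho] = {lhs:.3f}  <=  {rhs:.3f} = 2^(rho * H)")
```

The paper combines bounds of this type, together with Campbell's source-coding results, with its majorization-based Rényi-entropy bounds to obtain non-asymptotic results for guessing and compression.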
