Special Issue "Rate-Distortion Theory and Information Theory"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (30 June 2018).

Special Issue Editor

Prof. Dr. Jerry D. Gibson
Guest Editor
Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106-9560, USA
Interests: information theoretic techniques for signal processing and machine learning; lossy source coding; rate distortion theory

Special Issue Information

Dear Colleagues,

Rate distortion theory has historically received less attention than channel capacity, primarily due to the difficulty of crafting physically meaningful, mathematically tractable source models and fidelity criteria. The goal of this Special Issue is to reemphasize the critical accomplishments of rate distortion theory and to highlight new directions in rate distortion theoretic and information theoretic research. Toward this end, we solicit papers on:

  • source models and fidelity criteria for physical sources, and the resulting rate distortion bounds;
  • historical perspectives on rate distortion theory;
  • information theoretic techniques in machine learning;
  • information theoretic approaches to biological signal processing;
  • the impact of rate distortion theory on lossy source coding;
  • the rate distortion theory of multiple correlated sources;
  • new directions in rate distortion theoretic and information theoretic research that move beyond the standard independent and identically distributed, Gaussian, and Bernoulli source models.

Prof. Dr. Jerry D. Gibson
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • source modeling
  • fidelity criteria
  • rate distortion bounds
  • information theoretic signal processing
  • information theoretic algorithms for machine learning

Published Papers (11 papers)


Editorial


Editorial
Special Issue on Rate Distortion Theory and Information Theory
Entropy 2018, 20(11), 825; https://doi.org/10.3390/e20110825 - 27 Oct 2018
Abstract
Shannon introduced the fields of information theory and rate distortion theory in his landmark 1948 paper [...]
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)

Research


Article
Entropy Power, Autoregressive Models, and Mutual Information
Entropy 2018, 20(10), 750; https://doi.org/10.3390/e20100750 - 30 Sep 2018
Abstract
Autoregressive processes play a major role in speech processing (linear prediction), seismic signal processing, biological signal processing, and many other applications. We consider the entropy rate power, a quantity defined by Shannon in 1948, and show that the log ratio of entropy powers equals the difference in the differential entropy of the two processes. Furthermore, we use the log ratio of entropy powers to analyze the change in mutual information as the model order is increased for autoregressive processes. We examine when the minimum mean squared prediction error can be substituted for the entropy power in the log ratio of entropy powers, which greatly simplifies the calculation of the differential entropy and the change in mutual information, and therefore increases the utility of the approach. Applications to speech processing and coding are given, and potential applications to seismic signal processing, EEG classification, and ECG classification are described.
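For a Gaussian process the entropy power reduces to the variance, so the log-ratio identity quoted above can be checked numerically. The sketch below is a stdlib-only illustration of that identity, not code from the paper; the factor of one-half reflects the natural-log convention used here.

```python
import math

def gaussian_diff_entropy(var):
    """Differential entropy (nats) of a zero-mean Gaussian with variance var."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

def entropy_power(h):
    """Shannon entropy power: variance of the Gaussian whose differential entropy is h."""
    return math.exp(2 * h) / (2 * math.pi * math.e)

var1, var2 = 4.0, 1.0
h1, h2 = gaussian_diff_entropy(var1), gaussian_diff_entropy(var2)
Q1, Q2 = entropy_power(h1), entropy_power(h2)

# Half the log ratio of entropy powers equals the difference in differential entropies.
lhs = 0.5 * math.log(Q1 / Q2)
rhs = h1 - h2
print(abs(lhs - rhs) < 1e-12)  # True
```

For Gaussian sources `Q1` and `Q2` simply recover the variances, which is what makes substituting the prediction error for the entropy power attractive in the linear-prediction setting the abstract describes.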

Article
Fixed-Rate Universal Lossy Source Coding and Model Identification: Connection with Zero-Rate Density Estimation and the Skeleton Estimator
Entropy 2018, 20(9), 640; https://doi.org/10.3390/e20090640 - 25 Aug 2018
Abstract
This work demonstrates a formal connection between density estimation with a data-rate constraint and the joint objective of fixed-rate universal lossy source coding and model identification introduced by Raginsky in 2008 (IEEE Trans. Inf. Theory 2008, 54, 3059–3077). Using an equivalent learning formulation, we derive a necessary and sufficient condition over the class of densities for the achievability of the joint objective. The learning framework used here is the skeleton estimator, a rate-constrained learning scheme that offers achievable results for the joint coding and modeling problem by optimally adapting its learning parameters to the specific conditions of the problem. The results obtained with the skeleton estimator significantly extend the context in which universal lossy source coding and model identification can be achieved, moving from the known case of a parametric collection of densities satisfying some smoothness and learnability conditions to the rich family of non-parametric L1-totally bounded densities. In addition, in the parametric case, we are able to remove one of the assumptions that constrain the applicability of the original result, obtaining similar performance in terms of the distortion redundancy and per-letter rate overhead.

Article
Rate-Distortion Function Upper Bounds for Gaussian Vectors and Their Applications in Coding AR Sources
Entropy 2018, 20(6), 399; https://doi.org/10.3390/e20060399 - 23 May 2018
Abstract
In this paper, we give upper bounds for the rate-distortion function (RDF) of any Gaussian vector, and we propose coding strategies to achieve such bounds. We use these strategies to reduce the computational complexity of coding Gaussian asymptotically wide sense stationary (AWSS) autoregressive (AR) sources. Furthermore, we give sufficient conditions for AR processes to be AWSS.
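For context, the exact RDF that such upper bounds are compared against is given, for a Gaussian vector with independent components, by reverse water-filling over the covariance eigenvalues. The sketch below is the textbook computation under that assumption, not the coding strategy proposed in the paper:

```python
import math

def gaussian_vector_rdf(eigenvalues, D):
    """Exact R(D) (nats/sample) for a Gaussian vector with the given
    covariance eigenvalues under per-sample MSE distortion D, via
    reverse water-filling: D_i = min(theta, lambda_i)."""
    n = len(eigenvalues)
    lo, hi = 0.0, max(eigenvalues)
    for _ in range(200):  # bisect on the water level theta
        theta = (lo + hi) / 2
        if sum(min(theta, lam) for lam in eigenvalues) / n < D:
            lo = theta
        else:
            hi = theta
    theta = (lo + hi) / 2
    return sum(0.5 * math.log(lam / min(theta, lam))
               for lam in eigenvalues if lam > theta) / n

# Two independent components with variances 4 and 1, target distortion 0.5:
print(round(gaussian_vector_rdf([4.0, 1.0], 0.5), 4))  # 0.6931 (= ln 2)
```

Components whose variance falls below the water level are not coded at all, which is why upper bounds with simple structure can be close to this curve at moderate distortions.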

Article
Exponential Strong Converse for Source Coding with Side Information at the Decoder
Entropy 2018, 20(5), 352; https://doi.org/10.3390/e20050352 - 08 May 2018
Abstract
We consider the rate distortion problem with side information at the decoder posed and investigated by Wyner and Ziv. Using the side information and the encoded original data, the decoder must reconstruct the original data to within an arbitrary prescribed distortion level. The rate distortion region indicating the trade-off between the data compression rate R and the prescribed distortion level Δ was determined by Wyner and Ziv. In this paper, we study the error probability of decoding for pairs (R, Δ) outside the rate distortion region. We evaluate the probability that the decoder's estimate of the source outputs has a distortion not exceeding the prescribed level Δ. We prove that, when (R, Δ) is outside the rate distortion region, this probability goes to zero exponentially, and we derive an explicit lower bound on the exponent. The strong converse coding theorem for the Wyner–Ziv source coding problem had not been established previously; we prove it as a simple corollary of our result.
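For orientation, the one setting where the Wyner–Ziv rate distortion function has a simple closed form is the quadratic Gaussian case, where it coincides with the conditional RDF. The sketch below assumes jointly Gaussian source and side information with MSE distortion; the paper itself treats the general problem and its converse:

```python
import math

def wyner_ziv_gaussian(var_x, rho, D):
    """Wyner-Ziv RDF (nats) for a Gaussian source X with variance var_x and
    side information Y having correlation coefficient rho with X, under MSE
    distortion D: R(D) = (1/2) log+ (sigma^2_{X|Y} / D)."""
    cond_var = var_x * (1 - rho ** 2)  # residual variance of X given Y
    return max(0.0, 0.5 * math.log(cond_var / D))

# Strong side information (rho = 0.9) cuts the rate relative to no side info:
print(wyner_ziv_gaussian(1.0, 0.9, 0.1) < wyner_ziv_gaussian(1.0, 0.0, 0.1))  # True
```

Pairs (R, Δ) below this curve are exactly the region where the paper shows the correct-decoding probability decays exponentially.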

Article
Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding
Entropy 2018, 20(3), 181; https://doi.org/10.3390/e20030181 - 08 Mar 2018
Abstract
Rate-distortion optimization (RDO) plays an essential role in substantially enhancing coding efficiency. Rate-distortion optimized mode decision is currently widely used in scalable video coding (SVC): among all possible coding modes, it selects the one with the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, its underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available, and it takes no account of the content features of the input signal. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method that takes into account the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm delivers more satisfactory video quality with negligible additional computational complexity.
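The mode decision described above minimizes the Lagrangian cost J = D + λR over the candidate modes. A generic sketch of that selection rule, with hypothetical mode names and numbers chosen purely for illustration:

```python
def select_mode(candidates, lam):
    """Pick the coding mode minimizing J = D + lambda * R.
    candidates: list of (name, distortion, rate_in_bits) tuples."""
    return min(candidates, key=lambda m: m[1] + lam * m[2])[0]

# Hypothetical modes. A small lambda favors low distortion;
# a large lambda favors low rate.
modes = [("intra", 10.0, 400), ("inter", 25.0, 120), ("skip", 90.0, 5)]
print(select_mode(modes, 0.01))  # intra: distortion dominates the cost
print(select_mode(modes, 1.0))   # skip: rate dominates the cost
```

The paper's contribution sits in choosing λ per temporal subband from content features, rather than from a quantization step as in hybrid coders.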

Article
A Coding Theorem for f-Separable Distortion Measures
Entropy 2018, 20(2), 111; https://doi.org/10.3390/e20020111 - 08 Feb 2018
Abstract
In this work, we relax the usual separability assumption made in the rate-distortion literature and propose f-separable distortion measures, which are well suited to model non-linear penalties. The main insight behind f-separable distortion measures is to define the n-letter distortion measure as an f-mean of single-letter distortions. We prove a rate-distortion coding theorem for stationary ergodic sources with f-separable distortion measures, and provide illustrative examples of the resulting rate-distortion functions. Finally, we discuss connections between f-separable distortion measures and the subadditive distortion measure previously proposed in the literature.
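Concretely, the n-letter distortion is the f-mean d = f^{-1}((1/n) sum_i f(d_i)) of the single-letter distortions d_i: choosing f(x) = x recovers the usual average, while a convex f such as exp(t*x) penalizes large per-letter distortions more heavily. A small sketch of this definition (the example values are ours, not the paper's):

```python
import math

def f_separable_distortion(dists, f, f_inv):
    """n-letter distortion as the f-mean of single-letter distortions."""
    return f_inv(sum(f(d) for d in dists) / len(dists))

dists = [0.1, 0.1, 0.9]  # one badly distorted letter

identity = f_separable_distortion(dists, lambda x: x, lambda y: y)
exp_mean = f_separable_distortion(dists,
                                  lambda x: math.exp(5 * x),
                                  lambda y: math.log(y) / 5)

# The exponential f-mean sits between the plain average and the max,
# weighting the worst letter more heavily than the average does.
print(identity < exp_mean < max(dists))  # True
```

As t grows, the exponential f-mean approaches the worst-case (max) distortion, which is the kind of non-linear penalty the separable average cannot express.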

Article
Rate-Distortion Region of a Gray–Wyner Model with Side Information
Entropy 2018, 20(1), 2; https://doi.org/10.3390/e20010002 - 22 Dec 2017
Abstract
In this work, we establish a full single-letter characterization of the rate-distortion region of an instance of the Gray–Wyner model with side information at the decoders. Specifically, in this model, an encoder observes a pair of memoryless, arbitrarily correlated sources (S1^n, S2^n) and communicates with two receivers over an error-free rate-limited common link of capacity R0, as well as error-free rate-limited individual links of capacities R1 to the first receiver and R2 to the second receiver. Both receivers reproduce the source component S2^n losslessly, and Receiver 1 also reproduces the source component S1^n lossily, to within a prescribed fidelity level D1. In addition, Receiver 1 and Receiver 2 are equipped, respectively, with memoryless side information sequences Y1^n and Y2^n. Importantly, in this setup the side information sequences are arbitrarily correlated with each other and with the source pair (S1^n, S2^n), and are not assumed to exhibit any particular ordering. Furthermore, by specializing the main result to two Heegard–Berger models with successive refinement and scalable coding, we shed light on the roles of the common and private descriptions that the encoder should produce, and on the role of each of the common and private links. We develop intuition by analyzing the single-letter rate-distortion regions of these models, and discuss some insightful binary examples.

Article
Rate Distortion Functions and Rate Distortion Function Lower Bounds for Real-World Sources
Entropy 2017, 19(11), 604; https://doi.org/10.3390/e19110604 - 11 Nov 2017
Abstract
Although Shannon introduced the concept of a rate distortion function in 1948, only in the last decade has the methodology for developing rate distortion function lower bounds for real-world sources been established. However, these recent results have not been fully exploited, due to some confusion about how these new rate distortion bounds, once obtained, should be interpreted and used in source codec performance analysis and design. We present the relevant rate distortion theory and show how this theory can be used for practical codec design and performance prediction and evaluation. Examples for speech and video indicate exactly how the new rate distortion functions can be calculated, interpreted, and extended. These examples illustrate the interplay between source models for rate distortion theoretic studies and the source models underlying video and speech codec design. Key concepts include the development of composite source models per source realization and the application of conditional rate distortion theory.
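A composite source switches among subsources ("modes"), and conditional rate distortion theory evaluates R(D) by optimally allocating distortion across the modes. If the subsources are modeled as Gaussian, that allocation is again reverse water-filling weighted by the mode probabilities; the sketch below makes that Gaussian assumption and uses made-up mode statistics, purely for illustration:

```python
import math

def composite_gaussian_rdf(modes, D):
    """Conditional RDF (nats) of a composite source with Gaussian subsources.
    modes: list of (probability, variance). Distortion D is allocated across
    modes by a common water level theta, with D_i = min(theta, var_i)."""
    lo, hi = 0.0, max(v for _, v in modes)
    for _ in range(200):  # bisect on the water level theta
        theta = (lo + hi) / 2
        if sum(p * min(theta, v) for p, v in modes) < D:
            lo = theta
        else:
            hi = theta
    theta = (lo + hi) / 2
    return sum(p * 0.5 * math.log(v / min(theta, v))
               for p, v in modes if v > theta)

# Hypothetical two-mode speech-like source: active (high variance) vs. quiet.
print(round(composite_gaussian_rdf([(0.6, 4.0), (0.4, 0.2)], 0.5), 4))
```

Low-variance modes drop out below the water level, so a composite model can yield a markedly lower R(D) than a single Gaussian fit to the overall variance.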

Article
Upper Bounds for the Rate Distortion Function of Finite-Length Data Blocks of Gaussian WSS Sources
Entropy 2017, 19(10), 554; https://doi.org/10.3390/e19100554 - 19 Oct 2017
Abstract
In this paper, we present upper bounds for the rate distortion function (RDF) of finite-length data blocks of Gaussian wide sense stationary (WSS) sources, and we propose coding strategies to achieve such bounds. To obtain those bounds, we first derive new results on the discrete Fourier transform (DFT) of WSS processes.

Review


Review
Information Theory and Cognition: A Review
Entropy 2018, 20(9), 706; https://doi.org/10.3390/e20090706 - 14 Sep 2018
Abstract
We examine how information theory has been used to study cognition over the last seven decades. After an initial burst of activity in the 1950s, the backlash that followed stopped most work in this area. The last couple of decades have seen both a revival of interest and a more firmly grounded, experimentally justified use of information theory. We can view cognition as the process of transforming perceptions into information, where we use information in the colloquial sense of the word. This last clarification points to one of the problems we run into when trying to use information theoretic principles to understand or analyze cognition: information theory is mathematical, while cognition is a subjective phenomenon. It is relatively easy to discern a subjective connection between cognition and information; it is a different matter altogether to apply the rigor of information theory to the process of cognition. In this paper, we look at the many ways in which people have tried to alleviate this problem. These approaches range from narrowing the focus to only the quantifiable aspects of cognition, to borrowing conceptual machinery from information theory to address issues of cognition. We describe applications of information theory across a range of cognition research, from neural coding to cognitive control and predictive coding.
