Information Theory for Communication Systems

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (15 December 2020) | Viewed by 25230

Special Issue Editors


Guest Editor: Dr. Tobias Koch
Signal Theory and Communications Department, Universidad Carlos III de Madrid, Avenida de la Universidad, 30, 28911 Leganés, Spain
Interests: fading channels; information theory at finite blocklength; quantization and sampling; rate–distortion theory

Guest Editor: Dr. Stefan M. Moser
1. Signal and Information Processing Lab, ETH Zürich, 8092 Zürich, Switzerland
2. Institute of Communications Engineering, National Chiao Tung University, Hsinchu 30010, Taiwan
Interests: optical communication; molecular communication; performance analysis of communication systems; connections between biology and information theory

Special Issue Information

Dear Colleagues,

The founding work of the field of information theory, Claude Shannon's 1948 article "A Mathematical Theory of Communication", concerns the fundamental limits of communication systems. It is, therefore, not surprising that, since its origins, information theory has been very successful in providing performance benchmarks and design guidelines for numerous communication scenarios. This Special Issue aims to bring together recent research efforts that apply information theory to characterize and study the fundamental limits of communication systems. Possible topics include, but are not limited to, the following:

  • Asymptotic performance characterizations, such as channel capacity, second-order rates, or error exponents, of communication channels
  • Nonasymptotic performance bounds for communication systems
  • Information-theoretic limits of delay- and energy-limited communication systems
  • Information-theoretic analyses of signal constellations and low-precision decoders
  • Error-correcting codes for communication systems

Dr. Tobias Koch
Dr. Stefan M. Moser
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • channel capacity
  • communication systems
  • energy-limited communications
  • error-correcting codes
  • error exponents
  • low-latency communications
  • low-precision decoders
  • performance bounds
  • second-order rates
  • signal constellations

Published Papers (9 papers)

Research

23 pages, 8444 KiB  
Article
Subjective Information and Survival in a Simulated Biological System
by Tyler S. Barker, Massimiliano Pierobon and Peter J. Thomas
Entropy 2022, 24(5), 639; https://doi.org/10.3390/e24050639 - 2 May 2022
Cited by 3 | Viewed by 1613
Abstract
Information transmission and storage have gained traction as unifying concepts to characterize biological systems and their chances of survival and evolution at multiple scales. Despite the potential for an information-based mathematical framework to offer new insights into life processes and ways to interact with and control them, the main legacy is Shannon's: a purely syntactic characterization of information that scores systems on the basis of their maximum information efficiency. Such metrics are not entirely suitable for biological systems, where the transmission and storage of different pieces of information (carrying different semantics) can result in different chances of survival. Based on an abstract mathematical model that captures the parameters and behaviors of a population of single-celled organisms whose survival is correlated with information retrieval from the environment, this paper explores this disconnect between classical information theory and biology. We present a model, specified as a computational state machine, which is then utilized in a simulation framework constructed specifically to reveal the emergence of "subjective information", i.e., a trade-off between a living system's capability to maximize the acquisition of information from the environment and the maximization of its growth and survival over time. Simulations clearly show that a strategy that maximizes information efficiency results in a lower growth rate than a strategy that gains less information but whose information carries more meaning for survival.
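The tension the abstract highlights between syntactic information efficiency and survival-relevant meaning can be reproduced in a much smaller setting. The following toy simulation is not the authors' state-machine model; it is a hypothetical two-signal example in which sensing the higher-entropy signal acquires more Shannon information per step, yet sensing the lower-entropy, hazard-indicating signal yields the higher growth rate (all probabilities and growth factors are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
STEPS = 100_000

# Two binary environmental signals (probabilities invented for illustration):
# signal A is a fair coin (entropy 1 bit) with no bearing on survival;
# signal B indicates a hazard (entropy ~0.47 bit) and is what matters.
p_hazard = 0.1
hazard = rng.random(STEPS) < p_hazard

# An organism tracking B hides during hazards (growth factor 1.0 instead of
# 1.5) and never dies; one tracking only A keeps growing blindly and is hit
# hard (factor 0.1) whenever the unobserved hazard strikes.
growth_tracking_B = np.where(hazard, 1.0, 1.5)
growth_tracking_A = np.where(hazard, 0.1, 1.5)

print("log-growth rate, tracks A (more bits):   ", np.log(growth_tracking_A).mean())
print("log-growth rate, tracks B (more meaning):", np.log(growth_tracking_B).mean())
```

Despite acquiring less information in the Shannon sense, the B-tracking strategy achieves the higher long-run growth rate, mirroring the trade-off the paper's simulations exhibit.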

25 pages, 1724 KiB  
Article
Zero-Delay Joint Source Channel Coding for a Bivariate Gaussian Source over the Broadcast Channel with One-Bit ADC Front Ends
by Weijie Zhao and Xuechen Chen
Entropy 2021, 23(12), 1679; https://doi.org/10.3390/e23121679 - 14 Dec 2021
Cited by 2 | Viewed by 1868
Abstract
In this work, we consider the zero-delay transmission of bivariate Gaussian sources over a Gaussian broadcast channel with one-bit analog-to-digital converter (ADC) front ends. An outer bound on the conditional distortion region is derived. Focusing on the minimization of the average distortion, two types of methods are proposed to design nonparametric mappings. The first is based on jointly optimizing the encoder and decoder with an iterative algorithm. In the second, we derive necessary conditions for the optimal encoder and, using these conditions, design an algorithm based on gradient-descent search. Subsequently, the structure of the optimized encoding mappings is discussed and, inspired by this structure, several parametric mappings are proposed. Numerical results show that the proposed parametric mappings outperform both the uncoded scheme and previous parametric mappings designed for broadcast channels with infinite-resolution ADC front ends, and that the nonparametric mappings outperform the parametric ones. The causes of the performance differences between the two nonparametric mappings are analyzed. In low channel signal-to-noise ratio regions, the average distortions of the parametric and nonparametric mappings proposed here are close to the derived bound for the one-bit ADC cases.
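To make the role of the one-bit ADC front end concrete, here is a minimal Monte Carlo sketch of a point-to-point scalar analogue of this setting (not the paper's bivariate broadcast problem): a unit-variance Gaussian sample is scaled and transmitted uncoded, the receiver observes only the sign of the noisy channel output, and the minimum mean-square error (MMSE) estimate given a sign observation reduces to a closed form. The SNR value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
snr = 4.0  # illustrative channel SNR, not a value from the paper

# Zero-delay uncoded scheme: scale each unit-variance Gaussian source
# sample and send it in a single channel use.
x = rng.standard_normal(N)
z = np.sqrt(snr) * x + rng.standard_normal(N)   # AWGN channel output
y = np.sign(z)                                   # one-bit ADC front end

# MMSE decoder for a sign observation: for jointly Gaussian (X, Z) with
# correlation rho, E[X | sign(Z) = s] = s * rho * sqrt(2/pi).
rho = np.sqrt(snr / (snr + 1.0))
x_hat = y * rho * np.sqrt(2.0 / np.pi)

print("Monte Carlo distortion:   ", np.mean((x - x_hat) ** 2))
print("analytical 1-(2/pi)*rho^2:", 1.0 - (2.0 / np.pi) * rho**2)
```

Even at high SNR the distortion saturates at 1 - 2/π, which is the hard penalty a one-bit front end imposes on uncoded transmission and the motivation for optimizing the mappings.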

45 pages, 640 KiB  
Article
Expected Logarithm and Negative Integer Moments of a Noncentral χ²-Distributed Random Variable
by Stefan M. Moser
Entropy 2020, 22(9), 1048; https://doi.org/10.3390/e22091048 - 19 Sep 2020
Cited by 1 | Viewed by 2051
Abstract
Closed-form expressions for the expected logarithm and for arbitrary negative integer moments of a noncentral χ²-distributed random variable are presented in the cases of both even and odd degrees of freedom. Moreover, some basic properties of these expectations are derived, and tight upper and lower bounds on them are proposed.
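While the paper's contribution is the closed-form expressions themselves, the quantities are easy to evaluate and cross-check numerically. The sketch below (example parameter values, not taken from the paper) compares SciPy's numerical integration against Monte Carlo sampling for the expected logarithm and the first negative integer moment; note that E[1/V] is finite only for more than two degrees of freedom.

```python
import numpy as np
from scipy.stats import ncx2

df, nc = 4, 2.5  # degrees of freedom and noncentrality (example values)

# Numerical integration via SciPy's generic expectation operator.
e_log = ncx2.expect(np.log, args=(df, nc))
e_inv = ncx2.expect(lambda v: 1.0 / v, args=(df, nc))

# Monte Carlo cross-check.
rng = np.random.default_rng(2)
samples = ncx2.rvs(df, nc, size=1_000_000, random_state=rng)
print("E[ln V] integration:", e_log, " Monte Carlo:", np.log(samples).mean())
print("E[1/V]  integration:", e_inv, " Monte Carlo:", (1.0 / samples).mean())
```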

33 pages, 1096 KiB  
Article
Nearest Neighbor Decoding and Pilot-Aided Channel Estimation for Fading Channels
by A. Taufiq Asyhari, Tobias Koch and Albert Guillén i Fàbregas
Entropy 2020, 22(9), 971; https://doi.org/10.3390/e22090971 - 31 Aug 2020
Viewed by 2003
Abstract
We study the information rates of noncoherent, stationary, Gaussian, multiple-input multiple-output (MIMO) flat-fading channels that are achievable with nearest neighbor decoding and pilot-aided channel estimation. In particular, we investigate the behavior of these achievable rates in the limit as the signal-to-noise ratio (SNR) tends to infinity by analyzing the capacity pre-log, defined as the limiting ratio of the capacity to the logarithm of the SNR as the SNR tends to infinity. We demonstrate that a scheme that estimates the channel using pilot symbols and detects the message using nearest neighbor decoding (while assuming that the channel estimate is perfect) essentially achieves the capacity pre-log of noncoherent multiple-input single-output flat-fading channels, and essentially achieves the best lower bound known so far on the capacity pre-log of noncoherent MIMO flat-fading channels. Extending the analysis to fading multiple-access channels reveals interesting relationships between the number of antennas and the Doppler bandwidth in the comparative performance of joint transmission and time-division multiple access.
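For reference, the capacity pre-log admits a one-line definition (notation ours, with C(SNR) denoting the channel capacity at signal-to-noise ratio SNR):

$$\Pi \;\triangleq\; \lim_{\mathsf{SNR}\to\infty} \frac{C(\mathsf{SNR})}{\log \mathsf{SNR}},$$

so a pre-log of Π means that the capacity grows like Π log SNR at high SNR. For coherent fading channels with n_t transmit and n_r receive antennas, the pre-log is the familiar min(n_t, n_r); the noncoherent pre-logs studied here are smaller, because the channel must in effect be estimated from the received signal.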

18 pages, 1260 KiB  
Article
Asymptotic Capacity Results on the Discrete-Time Poisson Channel and the Noiseless Binary Channel with Detector Dead Time
by Ligong Wang
Entropy 2020, 22(8), 846; https://doi.org/10.3390/e22080846 - 30 Jul 2020
Viewed by 1879
Abstract
This paper studies the discrete-time Poisson channel and the noiseless binary channel where, after recording a 1, the channel output is stuck at 0 for a certain period; this period is called the "dead time." The communication capacities of these channels are analyzed, with the main focus on the regime where the allowed average input power is close to zero, either because the bandwidth is large or because the available continuous-time input power is low.
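For context, the discrete-time Poisson channel referred to above is conventionally defined by the conditional law

$$W(y \mid x) \;=\; e^{-(x+\lambda)}\,\frac{(x+\lambda)^{y}}{y!}, \qquad y = 0, 1, 2, \dots,$$

where x ≥ 0 is the input intensity and λ ≥ 0 is a dark-current parameter (notation ours; the paper's dead-time variant additionally forces the output to 0 for a fixed period after each recorded count). The low-power regime in the abstract corresponds to an average-input-power constraint of the form $\frac{1}{n}\sum_{k=1}^{n} x_k \le \mathcal{E}$ with small $\mathcal{E}$.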

23 pages, 452 KiB  
Article
Achievable Information Rates for Probabilistic Amplitude Shaping: An Alternative Approach via Random Sign-Coding Arguments
by Yunus Can Gültekin, Alex Alvarado and Frans M. J. Willems
Entropy 2020, 22(7), 762; https://doi.org/10.3390/e22070762 - 11 Jul 2020
Cited by 6 | Viewed by 2859
Abstract
Probabilistic amplitude shaping (PAS) is a coded modulation strategy in which constellation shaping and channel coding are combined. PAS has attracted considerable attention in both wireless and optical communications. Achievable information rates (AIRs) of PAS have been investigated in the literature using Gallager's error exponent approach. In particular, it has been shown that PAS achieves the capacity of the additive white Gaussian noise channel (Böcherer, 2018). In this work, we revisit the capacity-achieving property of PAS and derive AIRs using weak typicality. Our objective is to provide alternative proofs based on random sign-coding arguments that are as constructive as possible. Accordingly, in our proofs, only some signs of the channel inputs are drawn from a random code, while the remaining signs and amplitudes are produced constructively. We consider both symbol-metric and bit-metric decoding.
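For readers less familiar with PAS, the transmitter structure formalized by the sign-coding arguments can be sketched in a few lines. The toy below (block size, amplitude alphabet, and shaping distribution all invented for illustration) pairs nonuniformly distributed amplitudes with signs that are uniform data bits, except for one sign per block that carries parity; the single parity check stands in for the systematic LDPC or polar code of a real PAS system.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8  # amplitudes per block (toy size)

# 1) Shaping: draw amplitudes from a nonuniform distribution, a stand-in
#    for the output of a distribution matcher.
amps = rng.choice([1, 3], size=n, p=[0.75, 0.25])

# 2) Systematic FEC stand-in: a single parity check over the amplitude
#    label bits (amplitude 1 -> bit 0, amplitude 3 -> bit 1).
labels = (amps == 3).astype(int)
parity_bit = labels.sum() % 2

# 3) Signs: n-1 signs carry fresh uniform data bits; the last sign carries
#    the parity bit, so the sign sequence stays (nearly) uniform.
data_bits = rng.integers(0, 2, n - 1)
sign_bits = np.append(data_bits, parity_bit)
signs = 1 - 2 * sign_bits  # bit 0 -> +1, bit 1 -> -1

x = signs * amps  # transmitted amplitude-shift-keying symbols
print("amplitudes:   ", amps)
print("channel input:", x)
```

The key design point, which the paper's proofs make rigorous, is that shaping acts only on the amplitudes while the (approximately uniform) signs absorb the code's redundancy.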

29 pages, 440 KiB  
Article
Some Useful Integral Representations for Information-Theoretic Analyses
by Neri Merhav and Igal Sason
Entropy 2020, 22(6), 707; https://doi.org/10.3390/e22060707 - 26 Jun 2020
Cited by 2 | Viewed by 2686
Abstract
This work is an extension of our earlier article, where a well-known integral representation of the logarithmic function was explored and its usefulness demonstrated in obtaining compact, easily calculable, exact formulas for quantities that involve expectations of the logarithm of a positive random variable. Here, in the same spirit, we derive an exact integral representation (in one or two dimensions) of the moment of a nonnegative random variable, or of the sum of such independent random variables, where the moment order is a general positive noninteger real number (also known as a fractional moment). The proposed formula is applied to a variety of examples with an information-theoretic motivation, and it is shown how it facilitates their numerical evaluation. In particular, when applied to the calculation of a moment of the sum of a large number, n, of nonnegative random variables, integration over one or two dimensions, as suggested by our proposed integral representation, is significantly easier than the alternative of integrating over n dimensions, as needed in the direct calculation of the desired moment.
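The earlier article's "well-known integral representation of the logarithmic function" is the Frullani-type identity

$$\ln x \;=\; \int_0^\infty \frac{e^{-u} - e^{-ux}}{u}\, \mathrm{d}u, \qquad x > 0,$$

which, upon taking expectations and interchanging expectation and integration, gives $\mathbb{E}[\ln X] = \int_0^\infty \frac{e^{-u} - \mathbb{E}\!\left[e^{-uX}\right]}{u}\,\mathrm{d}u$, i.e., a one-dimensional integral involving only the Laplace transform of X. The present paper extends this device from the logarithm to fractional moments.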

31 pages, 795 KiB  
Article
Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping
by Yunus Can Gültekin, Tobias Fehenberger, Alex Alvarado and Frans M. J. Willems
Entropy 2020, 22(5), 581; https://doi.org/10.3390/e22050581 - 21 May 2020
Cited by 41 | Viewed by 5416
Abstract
In this paper, we provide a systematic comparison of distribution matching (DM) and sphere shaping (SpSh) algorithms for short-blocklength probabilistic amplitude shaping. For asymptotically large blocklengths, constant composition distribution matching (CCDM) is known to generate the target capacity-achieving distribution. However, as the blocklength decreases, the resulting rate loss diminishes the efficiency of CCDM. We claim that for such short blocklengths over the additive white Gaussian noise (AWGN) channel, the objective of shaping should be reformulated as obtaining the most energy-efficient signal space for a given rate (rather than matching distributions). In light of this interpretation, multiset-partition DM (MPDM) and SpSh are reviewed as energy-efficient shaping techniques. Numerical results show that both have smaller rate losses than CCDM. SpSh, whose sole objective is to maximize energy efficiency, is shown to have the minimum rate loss among all techniques considered, which is particularly apparent for ultrashort blocklengths. We provide simulation results of the end-to-end decoding performance showing that up to 1 dB improvement in power efficiency over uniform signaling can be obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a discussion on the complexity of these algorithms from the perspectives of latency, storage, and computations.
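The rate loss driving this comparison can be computed directly. The sketch below is a minimal illustration with an invented two-amplitude alphabet and shaping distribution (not the paper's setup): it counts the sequences of a fixed composition, takes the number of addressable input bits as k = ⌊log₂ M⌋, and reports the per-symbol rate loss H(A) − k/n, which visibly shrinks as the blocklength grows.

```python
import numpy as np
from scipy.special import gammaln

def ccdm_rate_loss(n: int, p=(0.75, 0.25)) -> float:
    """Per-symbol rate loss of a fixed-composition matcher at blocklength n.

    The composition rounds n*p; the matcher can address floor(log2 M)
    input bits, where M is the number of sequences with that composition.
    """
    counts = np.round(np.array(p) * n).astype(int)
    counts[0] += n - counts.sum()  # keep the total equal to n
    log2_M = (gammaln(n + 1) - gammaln(counts + 1).sum()) / np.log(2)
    k = np.floor(log2_M)
    q = counts / n
    entropy = -(q[q > 0] * np.log2(q[q > 0])).sum()
    return entropy - k / n

for n in (16, 64, 256, 1024, 4096):
    print(f"n = {n:5d}: rate loss = {ccdm_rate_loss(n):.4f} bits/symbol")
```

A real CCDM additionally needs an arithmetic-coding-style algorithm to index the sequences; the computation above only quantifies how much rate the fixed-composition constraint sacrifices at each blocklength.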

13 pages, 410 KiB  
Article
On Training Neural Network Decoders of Rate Compatible Polar Codes via Transfer Learning
by Hyunjae Lee, Eun Young Seo, Hyosang Ju and Sang-Hyo Kim
Entropy 2020, 22(5), 496; https://doi.org/10.3390/e22050496 - 25 Apr 2020
Cited by 1 | Viewed by 2892
Abstract
Neural network decoders (NNDs) for rate-compatible polar codes are studied in this paper. We consider a family of rate-compatible polar codes constructed from a single polar coding sequence, as defined in 5G New Radio. We propose a transfer learning technique for training multiple NNDs of the rate-compatible polar codes that utilizes their inclusion property: the trained NND for a low-rate code is taken as the initial state for training the NND of the code with the next rate in the family. Numerical results show that the proposed method provides faster training than training the NNDs separately. We additionally show that the underfitting problem of NND training caused by low model complexity can be mitigated by transfer learning techniques.
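The transfer-learning idea, warm-starting the decoder for one rate from the trained decoder of a neighboring rate, can be sketched compactly. The toy below is not the authors' architecture: random linear codes stand in for the rate-compatible polar family, a small fully connected network stands in for the NND, and only the hidden layers are transferred (the output layer must be re-initialized because the message length changes).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N = 32  # toy blocklength

def make_decoder(k: int) -> nn.Sequential:
    """Toy fully connected decoder: noisy BPSK codeword in, k message bits out."""
    return nn.Sequential(
        nn.Linear(N, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, k), nn.Sigmoid(),
    )

def train(decoder: nn.Sequential, G: torch.Tensor, steps: int = 1000) -> float:
    """Train against a random linear toy code with generator matrix G (k x N)."""
    k = G.shape[0]
    opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    loss = torch.tensor(0.0)
    for _ in range(steps):
        m = torch.randint(0, 2, (256, k)).float()        # random messages
        c = (m @ G) % 2                                   # toy linear encoding
        y = (1 - 2 * c) + 0.7 * torch.randn_like(c)       # BPSK + AWGN
        loss = nn.functional.binary_cross_entropy(decoder(y), m)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

G_low = torch.randint(0, 2, (8, N)).float()    # low-rate toy code, k = 8
G_high = torch.randint(0, 2, (16, N)).float()  # higher-rate toy code, k = 16

# Train the low-rate decoder from scratch.
dec_low = make_decoder(8)
print("low-rate loss:", train(dec_low, G_low))

# Transfer: initialize the hidden layers of the higher-rate decoder from the
# trained low-rate decoder; only the output layer starts fresh.
dec_high = make_decoder(16)
dec_high[0].load_state_dict(dec_low[0].state_dict())
dec_high[2].load_state_dict(dec_low[2].state_dict())
print("high-rate loss with transfer:", train(dec_high, G_high))
```

In the paper, the inclusion property of the polar code family makes the transferred features far more reusable than in this random-code toy, which is what yields the reported training speed-up.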
