Special Issue "Information Theory for Communication Systems"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (15 December 2020).

Special Issue Editors

Dr. Tobias Koch
Guest Editor
Signal Theory and Communications Department, Universidad Carlos III de Madrid, Avenida de la Universidad, 30, 28911 Leganés, Spain
Interests: fading channels; information theory at finite blocklength; quantization and sampling; rate–distortion theory
Dr. Stefan M. Moser
Guest Editor
ETH Zurich, Switzerland
Interests: optical communication; molecular communication; performance analysis of communication systems; connections between biology and information theory

Special Issue Information

Dear Colleagues,

The founding work of the field of information theory, Claude Shannon's 1948 article "A Mathematical Theory of Communication", concerns the fundamental limits of communication systems. It is therefore not surprising that, since its origins, information theory has been very successful in providing performance benchmarks and design guidelines for numerous communication scenarios. This Special Issue aims to bring together recent research efforts that apply information theory to characterize and study the fundamental limits of communication systems. Possible topics include, but are not limited to, the following:

  • Asymptotic performance characterizations, such as channel capacity, second-order rates, or error exponents, of communication channels
  • Nonasymptotic performance bounds for communication systems
  • Information-theoretic limits of delay- and energy-limited communication systems
  • Information-theoretic analyses of signal constellations and low-precision decoders
  • Error-correcting codes for communication systems

Dr. Tobias Koch
Dr. Stefan M. Moser
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • channel capacity
  • communication systems
  • energy-limited communications
  • error-correcting codes
  • error exponents
  • low-latency communications
  • low-precision decoders
  • performance bounds
  • second-order rates
  • signal constellations

Published Papers (7 papers)


Research

Open Access Article
Expected Logarithm and Negative Integer Moments of a Noncentral χ²-Distributed Random Variable
Entropy 2020, 22(9), 1048; https://doi.org/10.3390/e22091048 - 19 Sep 2020
Viewed by 626
Abstract
Closed-form expressions for the expected logarithm and for arbitrary negative integer moments of a noncentral χ²-distributed random variable are presented in the cases of both even and odd degrees of freedom. Moreover, some basic properties of these expectations are derived and tight upper and lower bounds on them are proposed. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)
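For orientation, the *central* special case of the expected logarithm has the classical closed form E[ln X] = ln 2 + ψ(k/2) for X chi-square-distributed with k degrees of freedom, ψ being the digamma function. A minimal Monte Carlo sanity check of that central case (standard library only; the paper's noncentral formulas are not reproduced here):

```python
import math
import random

# Central special case: E[ln X] = ln 2 + psi(k/2) for X ~ chi-square(k).
# psi(2) = 1 - Euler-Mascheroni gamma is hard-coded to stay stdlib-only.
random.seed(0)
k = 4                                # degrees of freedom
n = 200_000                          # Monte Carlo samples
psi_2 = 1.0 - 0.5772156649015329     # psi(2)
exact = math.log(2.0) + psi_2

est = 0.0
for _ in range(n):
    x = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))
    est += math.log(x)
est /= n

print(f"Monte Carlo: {est:.4f}  vs  closed form: {exact:.4f}")
```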

Open Access Article
Nearest Neighbor Decoding and Pilot-Aided Channel Estimation for Fading Channels
Entropy 2020, 22(9), 971; https://doi.org/10.3390/e22090971 - 31 Aug 2020
Viewed by 706
Abstract
We study the information rates of noncoherent, stationary, Gaussian, and multiple-input multiple-output (MIMO) flat-fading channels that are achievable with nearest neighbor decoding and pilot-aided channel estimation. In particular, we investigate the behavior of these achievable rates in the limit as the signal-to-noise ratio (SNR) tends to infinity by analyzing the capacity pre-log, which is defined as the limiting ratio of the capacity to the logarithm of the SNR as the SNR tends to infinity. We demonstrate that a scheme estimating the channel using pilot symbols and detecting the message using nearest neighbor decoding (while assuming that the channel estimation is perfect) essentially achieves the capacity pre-log of noncoherent multiple-input single-output flat-fading channels, and it essentially achieves the best lower bound known so far on the capacity pre-log of noncoherent MIMO flat-fading channels. Extending the analysis to fading multiple-access channels reveals interesting relationships between the number of antennas and the Doppler bandwidth in the comparative performance of joint transmission and time-division multiple access. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)
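The capacity pre-log in the abstract is lim_{SNR→∞} C(SNR)/log SNR. As a hedged illustration (using the coherent AWGN capacity C = log(1 + SNR), not the noncoherent fading model of the paper), the ratio visibly approaches the pre-log value 1:

```python
import math

# Pre-log of the coherent AWGN capacity C = log(1 + SNR):
# the ratio C / log(SNR) tends to 1 as the SNR grows.
for snr_db in (20, 60, 100):
    snr = 10.0 ** (snr_db / 10.0)
    ratio = math.log(1.0 + snr) / math.log(snr)
    print(f"{snr_db:3d} dB: C/log(SNR) = {ratio:.6f}")
```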

Open Access Feature Paper Article
Asymptotic Capacity Results on the Discrete-Time Poisson Channel and the Noiseless Binary Channel with Detector Dead Time
Entropy 2020, 22(8), 846; https://doi.org/10.3390/e22080846 - 30 Jul 2020
Viewed by 695
Abstract
This paper studies the discrete-time Poisson channel and the noiseless binary channel where, after recording a 1, the channel output is stuck at 0 for a certain period; this period is called the “dead time.” The communication capacities of these channels are analyzed, with main focus on the regime where the allowed average input power is close to zero, either because the bandwidth is large, or because the available continuous-time input power is low. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)
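In the noiseless binary case, a dead time of d slots means any two 1s must be separated by at least d zeros, i.e. a (d, ∞) runlength constraint; the noiseless capacity is then the exponential growth rate of the number of admissible sequences. A small counting sketch (the paper's actual focus, the low-power asymptotics, is not reproduced):

```python
import math

def count_admissible(n, d):
    """Number of length-n binary strings with >= d zeros between 1s."""
    a = [m + 1 for m in range(d + 1)]   # a[m] = m + 1 for m <= d
    for m in range(d + 1, n + 1):
        a.append(a[m - 1] + a[m - d - 1])
    return a[n]

# Growth rate -> noiseless capacity in bits per channel use.
# For d = 1 this is log2 of the golden ratio (~0.6942).
n, d = 400, 1
cap = math.log2(count_admissible(n, d) / count_admissible(n - 1, d))
print(f"d={d}: capacity ~ {cap:.4f} bits/use")
```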

Open Access Article
Achievable Information Rates for Probabilistic Amplitude Shaping: An Alternative Approach via Random Sign-Coding Arguments
Entropy 2020, 22(7), 762; https://doi.org/10.3390/e22070762 - 11 Jul 2020
Viewed by 784
Abstract
Probabilistic amplitude shaping (PAS) is a coded modulation strategy in which constellation shaping and channel coding are combined. PAS has attracted considerable attention in both wireless and optical communications. Achievable information rates (AIRs) of PAS have been investigated in the literature using Gallager’s error exponent approach. In particular, it has been shown that PAS achieves the capacity of the additive white Gaussian noise channel (Böcherer, 2018). In this work, we revisit the capacity-achieving property of PAS and derive AIRs using weak typicality. Our objective is to provide alternative proofs based on random sign-coding arguments that are as constructive as possible. Accordingly, in our proofs, only some signs of the channel inputs are drawn from a random code, while the remaining signs and amplitudes are produced constructively. We consider both symbol-metric and bit-metric decoding. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)
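In PAS, amplitudes are typically shaped toward a Maxwell-Boltzmann distribution P(a) ∝ exp(−ν a²) while the signs stay uniform. A tiny illustrative computation of the resulting transmission rate for 8-ASK (the shaping parameter ν is a hypothetical value, and the paper's random sign-coding proofs are not reproduced):

```python
import math

# Maxwell-Boltzmann shaping over the 8-ASK amplitude set {1, 3, 5, 7};
# "nu" trades rate against average symbol energy.
amps = [1, 3, 5, 7]
nu = 0.05                    # hypothetical shaping parameter
w = [math.exp(-nu * a * a) for a in amps]
p = [x / sum(w) for x in w]

h_amp = -sum(x * math.log2(x) for x in p)   # amplitude entropy
rate = h_amp + 1                            # + 1 uniform sign bit
print(f"shaped rate: {rate:.3f} bits/symbol (uniform 8-ASK: 3.000)")
```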

Open Access Article
Some Useful Integral Representations for Information-Theoretic Analyses
Entropy 2020, 22(6), 707; https://doi.org/10.3390/e22060707 - 26 Jun 2020
Cited by 1 | Viewed by 921
Abstract
This work is an extension of our earlier article, where a well-known integral representation of the logarithmic function was explored and was accompanied with demonstrations of its usefulness in obtaining compact, easily-calculable, exact formulas for quantities that involve expectations of the logarithm of a positive random variable. Here, in the same spirit, we derive an exact integral representation (in one or two dimensions) of the moment of a nonnegative random variable, or the sum of such independent random variables, where the moment order is a general positive non-integer real (also known as fractional moments). The proposed formula is applied to a variety of examples with an information-theoretic motivation, and it is shown how it facilitates their numerical evaluations. In particular, when applied to the calculation of a moment of the sum of a large number, n, of nonnegative random variables, it is clear that integration over one or two dimensions, as suggested by our proposed integral representation, is significantly easier than the alternative of integrating over n dimensions, as needed in the direct calculation of the desired moment. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)
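The well-known integral representation referred to is ln x = ∫₀^∞ (e^{−t} − e^{−tx})/t dt for x > 0 (the integrand tends to x − 1 as t → 0). A quick trapezoidal check of this identity, illustrative only:

```python
import math

# Numerically verify  ln x = \int_0^inf (e^{-t} - e^{-t x}) / t dt.
def log_via_integral(x, t_max=50.0, steps=200_000):
    h = t_max / steps
    def f(t):
        if t == 0.0:
            return x - 1.0          # limit of the integrand at t = 0
        return (math.exp(-t) - math.exp(-t * x)) / t
    s = 0.5 * (f(0.0) + f(t_max))
    s += sum(f(i * h) for i in range(1, steps))
    return s * h

approx = log_via_integral(5.0)
print(f"integral: {approx:.6f}  vs  math.log(5): {math.log(5.0):.6f}")
```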

Open Access Feature Paper Article
Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping
Entropy 2020, 22(5), 581; https://doi.org/10.3390/e22050581 - 21 May 2020
Cited by 8 | Viewed by 1232
Abstract
In this paper, we provide a systematic comparison of distribution matching (DM) and sphere shaping (SpSh) algorithms for short blocklength probabilistic amplitude shaping. For asymptotically large blocklengths, constant composition distribution matching (CCDM) is known to generate the target capacity-achieving distribution. However, as the blocklength decreases, the resulting rate loss diminishes the efficiency of CCDM. We claim that for such short blocklengths over the additive white Gaussian noise (AWGN) channel, the objective of shaping should be reformulated as obtaining the most energy-efficient signal space for a given rate (rather than matching distributions). In light of this interpretation, multiset-partition DM (MPDM) and SpSh are reviewed as energy-efficient shaping techniques. Numerical results show that both have smaller rate losses than CCDM. SpSh—whose sole objective is to maximize the energy efficiency—is shown to have the minimum rate loss amongst all, which is particularly apparent for ultra short blocklengths. We provide simulation results of the end-to-end decoding performance showing that up to 1 dB improvement in power efficiency over uniform signaling can be obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a discussion on the complexity of these algorithms from the perspectives of latency, storage and computations. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)
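The rate loss of CCDM can be made concrete: a constant-composition code over n symbols carries log2(multinomial)/n bits per symbol, which falls short of the target entropy H(P) at finite n and closes the gap as n grows. A short sketch with a hypothetical 4-level amplitude distribution (chosen so the compositions are exact at every blocklength shown):

```python
import math

# Rate loss of constant-composition distribution matching:
#   loss(n) = H(P) - log2(multinomial coefficient) / n.
p = [0.5, 0.25, 0.125, 0.125]
h = -sum(x * math.log2(x) for x in p)      # H(P) = 1.75 bits/symbol

losses = []
for n in (16, 64, 256, 1024):
    counts = [round(x * n) for x in p]     # exact composition here
    multinom = math.factorial(n)
    for c in counts:
        multinom //= math.factorial(c)
    rate = math.log2(multinom) / n
    losses.append(h - rate)
    print(f"n={n:5d}: rate loss = {losses[-1]:.4f} bits/symbol")
```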

Open Access Article
On Training Neural Network Decoders of Rate Compatible Polar Codes via Transfer Learning
Entropy 2020, 22(5), 496; https://doi.org/10.3390/e22050496 - 25 Apr 2020
Viewed by 1067
Abstract
Neural network decoders (NNDs) for rate-compatible polar codes are studied in this paper. We consider a family of rate-compatible polar codes constructed from a single polar coding sequence as defined by 5G New Radio. We propose a transfer learning technique for training multiple NNDs of the rate-compatible polar codes that exploits their inclusion property. The trained NND for a low-rate code is taken as the initial state when training the NND for the code with the next-smallest rate. According to numerical results, the proposed method trains faster than learning the NNDs separately. We additionally show that an underfitting problem in NND training due to low model complexity can be mitigated by transfer learning. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)
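The warm-start pattern the abstract describes (reuse the trained low-rate decoder's weights to initialize training for the next code) can be sketched in miniature. This is a single logistic neuron on hypothetical toy data, not a polar decoder; it only illustrates the transfer mechanism:

```python
import math
import random

# Plain stochastic gradient descent on one logistic neuron.
def train(data, w, b, lr=0.5, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            z = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (z - y) * x
            b -= lr * (z - y)
    return w, b

random.seed(1)
task_a = [(x, 1.0 if x > 0.0 else 0.0)
          for x in (random.uniform(-2, 2) for _ in range(50))]
task_b = [(x, 1.0 if x > 0.2 else 0.0)
          for x in (random.uniform(-2, 2) for _ in range(50))]

w, b = train(task_a, 0.0, 0.0)            # "low-rate" model, from scratch
w2, b2 = train(task_b, w, b, epochs=20)   # warm start on the next task
print(f"task A weights: w={w:.2f}, b={b:.2f}; transferred: w={w2:.2f}")
```

The key line is the second `train` call: it starts from the previously learned `(w, b)` rather than zeros, mirroring how the paper seeds each NND from the one trained before it.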
