Measures of Information

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (14 May 2021) | Viewed by 25655

Special Issue Editor


Prof. Dr. Maria Longobardi
Guest Editor
Dipartimento di Biologia, Università di Napoli Federico II, 80126 Naples, NA, Italy
Interests: stochastic orders; reliability theory; measures of discrimination (in particular entropy, extropies, inaccuracy, Kullback-Leibler); coherent systems; inference

Special Issue Information

Dear Colleagues,

How important is uncertainty in the life of a human being? Certainly, an existence in which everything is deterministic would hardly be worth living.

In 1948, while working at Bell Telephone Laboratories, Claude Shannon developed the general concept of entropy, a “measure of uncertainty” that became a fundamental cornerstone of information theory, arising from the idea of quantifying how much information a message contains. In his paper “A Mathematical Theory of Communication”, he set out to mathematically quantify the statistical nature of “lost information” in phone-line signals.

Entropy in information theory is directly analogous to entropy in statistical thermodynamics.

In information theory, the entropy of a random variable is the average level of “information”, “uncertainty”, or “surprise” inherent in the variable’s possible outcomes.
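A minimal sketch of the discrete case, H(X) = −Σ p(x) log2 p(x), using an arbitrary toy distribution:

```python
import math

def shannon_entropy(pmf, base=2.0):
    """Average surprise -sum p(x) log p(x) of a discrete distribution."""
    return -sum(p * math.log(p, base) for p in pmf if p > 0)

# A fair coin is maximally uncertain (1 bit); a heavily biased coin much less so.
print(shannon_entropy([0.5, 0.5]))    # 1.0
print(shannon_entropy([0.99, 0.01]))  # ~0.08
```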

Entropy was originally part of Shannon’s theory of communication, in which a data communication system is composed of three elements: a source of data, a communication channel, and a receiver. In Shannon’s theory, the “fundamental problem of communication” is for the receiver to be able to identify what data were generated by the source, based on the signal it receives through the channel. Thus, the basic idea is that the “informational value” of a communicated message depends on the degree to which its content is surprising.

Entropy is also relevant to other areas of mathematics. Its definition can be derived from a set of axioms establishing that entropy should measure how “surprising” the average outcome of a variable is. For a continuous random variable, the analogous quantity is the differential entropy.
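For a density f, the differential entropy is h(X) = −∫ f(x) log f(x) dx. A minimal illustrative check, assuming a normal density, comparing a Monte Carlo estimate with the closed form ½ log(2πeσ²):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0

# Differential entropy h(X) = -E[log f(X)], estimated by Monte Carlo for N(0, sigma^2).
samples = rng.normal(0.0, sigma, size=200_000)
log_density = -0.5 * np.log(2 * np.pi * sigma**2) - samples**2 / (2 * sigma**2)
mc_estimate = -log_density.mean()

closed_form = 0.5 * np.log(2 * np.pi * np.e * sigma**2)  # exact value in nats
print(mc_estimate, closed_form)  # both ~2.11 nats
```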

If an event is very probable, it is unsurprising (and generally uninteresting) when it happens as expected; hence, transmission of such a message carries very little new information. However, if an event is unlikely, learning that it has occurred or will occur is much more informative.
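Quantitatively, the surprisal (self-information) of an outcome with probability p is −log2 p, so near-certain events carry almost no information while rare events carry a great deal; a tiny illustration with arbitrary probabilities:

```python
import math

def surprisal(p):
    """Self-information -log2 p, in bits, of an outcome with probability p."""
    return -math.log2(p)

print(surprisal(0.999))  # ~0.0014 bits: the expected happens, little is learned
print(surprisal(0.001))  # ~9.97 bits: a rare event is highly informative
```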

In recent decades, several new measures of information and of discrimination have been defined and studied, and it is clear that many others (with applications in different fields) will be introduced. The aim of this Special Issue is to enrich the set of notions related to measures of discrimination.

Prof. Dr. Maria Longobardi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

measures of discrimination; stochastic orders; ordered statistical data; goodness of fit testing; robust inference

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

22 pages, 2348 KiB  
Article
Using the Semantic Information G Measure to Explain and Extend Rate-Distortion Functions and Maximum Entropy Distributions
by Chenguang Lu
Entropy 2021, 23(8), 1050; https://doi.org/10.3390/e23081050 - 15 Aug 2021
Cited by 3 | Viewed by 2964
Abstract
In the rate-distortion function and the Maximum Entropy (ME) method, Minimum Mutual Information (MMI) distributions and ME distributions are expressed by Bayes-like formulas, including Negative Exponential Functions (NEFs) and partition functions. Why do these non-probability functions exist in Bayes-like formulas? On the other hand, the rate-distortion function has three disadvantages: (1) the distortion function is subjectively defined; (2) the definition of the distortion function between instances and labels is often difficult; (3) it cannot be used for data compression according to the labels’ semantic meanings. The author has proposed using the semantic information G measure with both statistical probability and logical probability before. We can now explain NEFs as truth functions, partition functions as logical probabilities, Bayes-like formulas as semantic Bayes’ formulas, MMI as Semantic Mutual Information (SMI), and ME as extreme ME minus SMI. In overcoming the above disadvantages, this paper sets up the relationship between truth functions and distortion functions, obtains truth functions from samples by machine learning, and constructs constraint conditions with truth functions to extend rate-distortion functions. Two examples are used to help readers understand the MMI iteration and to support the theoretical results. Using truth functions and the semantic information G measure, we can combine machine learning and data compression, including semantic compression. We need further studies to explore general data compression and recovery, according to the semantic meaning. Full article
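For background on where the negative exponential functions and partition functions arise, the sketch below implements the classical Blahut–Arimoto iteration for the rate-distortion function on a toy binary source (a generic textbook procedure, not the author's semantic-information G method): the minimum-mutual-information conditional takes the Bayes-like form q(y) exp(−β d(x, y)) / Z(x).

```python
import numpy as np

def blahut_arimoto_rd(p_x, d, beta, n_iter=200):
    """Classic Blahut-Arimoto iteration for the rate-distortion function.

    p_x : source distribution over x, shape (nx,)
    d   : distortion matrix d[x, y], shape (nx, ny)
    beta: Lagrange multiplier trading rate against distortion
    Returns (rate in bits, expected distortion).
    """
    ny = d.shape[1]
    q_y = np.full(ny, 1.0 / ny)                      # output marginal, init uniform
    for _ in range(n_iter):
        # Bayes-like update: NEF exp(-beta*d), normalised by a partition function Z(x)
        w = q_y[None, :] * np.exp(-beta * d)
        q_y_given_x = w / w.sum(axis=1, keepdims=True)
        q_y = p_x @ q_y_given_x                      # new output marginal
    joint = p_x[:, None] * q_y_given_x
    rate = np.sum(joint * np.log2(q_y_given_x / q_y[None, :]))
    distortion = np.sum(joint * d)
    return rate, distortion

# Toy binary source with Hamming distortion (illustrative values only).
p_x = np.array([0.6, 0.4])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto_rd(p_x, d, beta=3.0))
```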

13 pages, 309 KiB  
Article
Upper Bounds for the Capacity for Severely Fading MIMO Channels under a Scale Mixture Assumption
by Johannes T. Ferreira
Entropy 2021, 23(7), 845; https://doi.org/10.3390/e23070845 - 30 Jun 2021
Cited by 1 | Viewed by 2083
Abstract
A cornerstone in the modeling of wireless communication is MIMO systems, where a complex matrix variate normal assumption is often made for the underlying distribution of the propagation matrix. A popular measure of information, namely capacity, is often investigated for the performance of MIMO designs. This paper derives upper bounds for this measure of information for the case of two transmitting antennae and an arbitrary number of receiving antennae when the propagation matrix is assumed to follow a scale mixture of complex matrix variate normal distribution. Furthermore, noncentrality is assumed to account for LOS scenarios within the MIMO environment. The insight of this paper illustrates the theoretical form of capacity under these key assumptions and paves the way for considerations of alternative distributional choices for the channel propagation matrix in potential cases of severe fading, when the assumption of normality may not be realistic. Full article
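As a point of reference for the capacity measure itself, a sketch under the standard complex Gaussian (Rayleigh-fading) assumption rather than the scale-mixture model studied in the paper, estimating the ergodic capacity of a 2-transmit, 4-receive link by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

def mimo_capacity(H, snr):
    """Instantaneous capacity log2 det(I + (snr/nt) H H^H) in bits/s/Hz."""
    nr, nt = H.shape
    gram = H @ H.conj().T
    return np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * gram).real)

# Ergodic capacity under an iid complex Gaussian fading assumption (illustrative values).
nt, nr, snr, n_draws = 2, 4, 10.0, 5_000
caps = []
for _ in range(n_draws):
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    caps.append(mimo_capacity(H, snr))
print(np.mean(caps))  # Monte Carlo estimate of the ergodic capacity
```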
16 pages, 670 KiB  
Article
Designing Bivariate Auto-Regressive Timeseries with Controlled Granger Causality
by Shohei Hidaka and Takuma Torii
Entropy 2021, 23(6), 742; https://doi.org/10.3390/e23060742 - 12 Jun 2021
Cited by 1 | Viewed by 2083
Abstract
In this manuscript, we analyze a bivariate vector auto-regressive (VAR) model in order to draw the design principle of a timeseries with a controlled statistical inter-relationship. We show how to generate bivariate timeseries with given covariance and Granger causality (or, equivalently, transfer entropy), and show the trade-off relationship between these two types of statistical interaction. In principle, covariance and Granger causality are independently controllable, but the feasible ranges of their values which allow the VAR to be proper and have a stationary distribution are constrained by each other. Thus, our analysis identifies the essential tri-lemma structure among the stability and properness of VAR, the controllability of covariance, and that of Granger causality. Full article
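A rough illustration of the quantities involved (not the authors' design procedure): simulate a bivariate VAR(1) with one-directional coupling and estimate the Granger causality from y to x as the log ratio of restricted to full one-step prediction error variances, which under Gaussian assumptions equals twice the transfer entropy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a bivariate VAR(1): z_t = A z_{t-1} + eps_t (illustrative coefficients).
A = np.array([[0.5, 0.3],     # the 0.3 term makes y Granger-cause x
              [0.0, 0.5]])    # ...while x does not Granger-cause y
T = 50_000
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = A @ z[t - 1] + rng.standard_normal(2)
x, y = z[:, 0], z[:, 1]

def residual_var(target, predictors):
    """Variance of one-step-ahead residuals from a least-squares regression."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ beta)

full = np.column_stack([x[:-1], y[:-1]])      # past of both series
restricted = x[:-1, None]                     # past of x only
gc_y_to_x = np.log(residual_var(x[1:], restricted) / residual_var(x[1:], full))
print(gc_y_to_x)  # > 0: y helps predict x; transfer entropy = gc / 2 (Gaussian case)
```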

14 pages, 690 KiB  
Article
On Representations of Divergence Measures and Related Quantities in Exponential Families
by Stefan Bedbur and Udo Kamps
Entropy 2021, 23(6), 726; https://doi.org/10.3390/e23060726 - 8 Jun 2021
Cited by 1 | Viewed by 2353
Abstract
Within exponential families, which may consist of multi-parameter and multivariate distributions, a variety of divergence measures, such as the Kullback–Leibler divergence, the Cressie–Read divergence, the Rényi divergence, and the Hellinger metric, can be explicitly expressed in terms of the respective cumulant function and mean value function. Moreover, the same applies to related entropy and affinity measures. We compile representations scattered in the literature and present a unified approach to the derivation in exponential families. As a statistical application, we highlight their use in the construction of confidence regions in a multi-sample setup. Full article
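The flavour of such representations can be seen in a small sketch (a generic check on the Poisson family, chosen here purely for illustration): for natural parameters θ1, θ2 with cumulant function ψ and mean value function ψ', the Kullback–Leibler divergence is ψ(θ2) − ψ(θ1) − (θ2 − θ1) ψ'(θ1).

```python
import numpy as np

# Exponential family in natural form: p_theta(x) = h(x) exp(theta*T(x) - psi(theta)).
# KL(p_theta1 || p_theta2) = psi(theta2) - psi(theta1) - (theta2 - theta1) * psi'(theta1),
# i.e. it is determined by the cumulant function psi and the mean value function psi'.

def kl_from_cumulant(theta1, theta2, psi, psi_prime):
    return psi(theta2) - psi(theta1) - (theta2 - theta1) * psi_prime(theta1)

# Check on the Poisson family: theta = log(lambda), psi(theta) = exp(theta) = psi'(theta).
lam1, lam2 = 3.0, 5.0
kl_cumulant = kl_from_cumulant(np.log(lam1), np.log(lam2), np.exp, np.exp)
kl_direct = lam1 * np.log(lam1 / lam2) - lam1 + lam2   # textbook Poisson KL divergence
print(kl_cumulant, kl_direct)  # both ~0.467
```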

15 pages, 346 KiB  
Article
Information Geometry of the Exponential Family of Distributions with Progressive Type-II Censoring
by Fode Zhang, Xiaolin Shi and Hon Keung Tony Ng
Entropy 2021, 23(6), 687; https://doi.org/10.3390/e23060687 - 28 May 2021
Cited by 1 | Viewed by 2281
Abstract
In geometry and topology, a family of probability distributions can be analyzed as the points on a manifold, known as statistical manifold, with intrinsic coordinates corresponding to the parameters of the distribution. Consider the exponential family of distributions with progressive Type-II censoring as the manifold of a statistical model, we use the information geometry methods to investigate the geometric quantities such as the tangent space, the Fisher metric tensors, the affine connection and the α-connection of the manifold. As an application of the geometric quantities, the asymptotic expansions of the posterior density function and the posterior Bayesian predictive density function of the manifold are discussed. The results show that the asymptotic expansions are related to the coefficients of the α-connections and metric tensors, and the predictive density function is the estimated density function in an asymptotic sense. The main results are illustrated by considering the Rayleigh distribution. Full article
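Leaving the censoring scheme aside, a minimal numerical check of the basic ingredient, the Fisher metric, for the Rayleigh distribution used as the paper's illustration: the expected squared score with respect to the scale σ should equal 4/σ².

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.5

# Rayleigh density f(x; sigma) = (x / sigma^2) * exp(-x^2 / (2 sigma^2)), x > 0.
# Score d/d(sigma) log f = x^2 / sigma^3 - 2 / sigma; Fisher information = E[score^2].
x = rng.rayleigh(scale=sigma, size=500_000)
score = x**2 / sigma**3 - 2.0 / sigma
print(np.mean(score**2))   # Monte Carlo estimate of the Fisher information
print(4.0 / sigma**2)      # closed-form Fisher information for the Rayleigh scale
```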

19 pages, 366 KiB  
Article
Stochastic Order and Generalized Weighted Mean Invariance
by Mateu Sbert, Jordi Poch, Shuning Chen and Víctor Elvira
Entropy 2021, 23(6), 662; https://doi.org/10.3390/e23060662 - 25 May 2021
Viewed by 1977
Abstract
In this paper, we present order invariance theoretical results for weighted quasi-arithmetic means of a monotonic series of numbers. The quasi-arithmetic mean, or Kolmogorov–Nagumo mean, generalizes the classical mean and appears in many disciplines, from information theory to physics, from economics to traffic flow. Stochastic orders are defined on weights (or equivalently, discrete probability distributions). They were introduced to study risk in economics and decision theory, and recently have found utility in Monte Carlo techniques and in image processing. We show in this paper that, if two distributions of weights are ordered under first stochastic order, then for any monotonic series of numbers their weighted quasi-arithmetic means share the same order. This means for instance that arithmetic and harmonic mean for two different distributions of weights always have to be aligned if the weights are stochastically ordered, this is, either both means increase or both decrease. We explore the invariance properties when convex (concave) functions define both the quasi-arithmetic mean and the series of numbers, we show its relationship with increasing concave order and increasing convex order, and we observe the important role played by a new defined mirror property of stochastic orders. We also give some applications to entropy and cross-entropy and present an example of multiple importance sampling Monte Carlo technique that illustrates the usefulness and transversality of our approach. Invariance theorems are useful when a system is represented by a set of quasi-arithmetic means and we want to change the distribution of weights so that all means evolve in the same direction. Full article
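The invariance can be seen on a toy example (illustrative numbers only): when the weights are shifted upward in the first stochastic order along an increasing series, the weighted arithmetic and harmonic means, both quasi-arithmetic means, move in the same direction.

```python
import numpy as np

def quasi_arithmetic_mean(w, x, phi, phi_inv):
    """Weighted quasi-arithmetic (Kolmogorov-Nagumo) mean phi^{-1}(sum w_i phi(x_i))."""
    return phi_inv(np.dot(w, phi(x)))

x = np.array([1.0, 2.0, 4.0, 8.0])            # an increasing series of numbers
w_low  = np.array([0.4, 0.3, 0.2, 0.1])       # weights favouring the small x values
w_high = np.array([0.1, 0.2, 0.3, 0.4])       # dominates w_low in first stochastic order

identity = lambda t: t
inverse = lambda t: 1.0 / t
for name, phi, phi_inv in [("arithmetic", identity, identity),
                           ("harmonic", inverse, inverse)]:
    m_low = quasi_arithmetic_mean(w_low, x, phi, phi_inv)
    m_high = quasi_arithmetic_mean(w_high, x, phi, phi_inv)
    print(name, m_low, m_high)   # both means increase when the weights shift upward
```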
13 pages, 274 KiB  
Article
Fractional Deng Entropy and Extropy and Some Applications
by Mohammad Reza Kazemi, Saeid Tahmasebi, Francesco Buono and Maria Longobardi
Entropy 2021, 23(5), 623; https://doi.org/10.3390/e23050623 - 17 May 2021
Cited by 38 | Viewed by 3023
Abstract
Deng entropy and extropy are two measures useful in the Dempster–Shafer evidence theory (DST) to study uncertainty, following the idea that extropy is the dual concept of entropy. In this paper, we present their fractional versions named fractional Deng entropy and extropy and compare them to other measures in the framework of DST. Here, we study the maximum for both of them and give several examples. Finally, we analyze a problem of classification in pattern recognition in order to highlight the importance of these new measures. Full article
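For readers unfamiliar with the classical (non-fractional) quantity, a minimal sketch of Deng entropy for a body of evidence, using an arbitrary basic probability assignment: each focal element A with mass m(A) contributes −m(A) log2(m(A)/(2^|A| − 1)).

```python
import math

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment (DST body of evidence).

    bpa maps each focal element (a frozenset of singletons) to its mass m(A):
    Ed(m) = -sum m(A) * log2( m(A) / (2**|A| - 1) ).
    """
    return -sum(m * math.log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

# Illustrative body of evidence on the frame of discernment {a, b, c}.
bpa = {frozenset({"a"}): 0.5,
       frozenset({"a", "b"}): 0.3,
       frozenset({"a", "b", "c"}): 0.2}
print(deng_entropy(bpa))
```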

14 pages, 284 KiB  
Article
Bounds on the Lifetime Expectations of Series Systems with IFR Component Lifetimes
by Tomasz Rychlik and Magdalena Szymkowiak
Entropy 2021, 23(4), 385; https://doi.org/10.3390/e23040385 - 24 Mar 2021
Cited by 3 | Viewed by 1785
Abstract
We consider series systems built of components which have independent identically distributed (iid) lifetimes with an increasing failure rate (IFR). We determine sharp upper bounds for the expectations of the system lifetimes expressed in terms of the mean, and various scale units based on absolute central moments of component lifetimes. We further establish analogous bounds under a more stringent assumption that the component lifetimes have an increasing density (ID) function. We also indicate the relationship between the IFR property of the components and the generalized cumulative residual entropy of the series system lifetime. Full article
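A small simulation conveys the setting (it does not reproduce the paper's sharp bounds): the lifetime of a series system is the minimum of its component lifetimes, and for iid Weibull components with shape k > 1 (an IFR family) its expectation has a simple closed form to check against.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(4)

# A series system fails as soon as its first component fails, so its lifetime
# is the minimum of the component lifetimes. Weibull with shape k > 1 is IFR.
n, k, lam = 5, 2.0, 1.0                                    # illustrative values
draws = rng.weibull(k, size=(200_000, n)) * lam
system_life = draws.min(axis=1)

mc_mean = system_life.mean()
exact_mean = lam * n ** (-1.0 / k) * gamma(1.0 + 1.0 / k)  # min of iid Weibulls is Weibull
print(mc_mean, exact_mean)                                 # both ~0.396 for these values
```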
10 pages, 777 KiB  
Article
Discrete Versions of Jensen–Fisher, Fisher and Bayes–Fisher Information Measures of Finite Mixture Distributions
by Omid Kharazmi and Narayanaswamy Balakrishnan
Entropy 2021, 23(3), 363; https://doi.org/10.3390/e23030363 - 18 Mar 2021
Viewed by 2205
Abstract
In this work, we first consider the discrete version of Fisher information measure and then propose Jensen–Fisher information, to develop some associated results. Next, we consider Fisher information and Bayes–Fisher information measures for mixing parameter vector of a finite mixture probability mass function and establish some results. We provide some connections between these measures with some known informational measures such as chi-square divergence, Shannon entropy, Kullback–Leibler, Jeffreys and Jensen–Shannon divergences. Full article
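One of the basic objects involved can be sketched directly (a generic two-component example, not the paper's notation): for a mixture pmf p = π p1 + (1 − π) p2, the score with respect to π is (p1 − p2)/p, so the Fisher information about the mixing weight is Σ (p1(x) − p2(x))² / p(x).

```python
import numpy as np

def mixture_fisher_information(p1, p2, pi):
    """Fisher information about the mixing weight pi of the pmf pi*p1 + (1-pi)*p2.

    The score is (p1 - p2) / p_mix, so I(pi) = sum (p1 - p2)^2 / p_mix.
    """
    p_mix = pi * p1 + (1 - pi) * p2
    return np.sum((p1 - p2) ** 2 / p_mix)

# Two illustrative pmfs on the support {0, 1, 2, 3}.
p1 = np.array([0.7, 0.1, 0.1, 0.1])
p2 = np.array([0.1, 0.2, 0.3, 0.4])
print(mixture_fisher_information(p1, p2, pi=0.3))
```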

14 pages, 315 KiB  
Article
Results on Varextropy Measure of Random Variables
by Nastaran Marzban Vaselabadi, Saeid Tahmasebi, Mohammad Reza Kazemi and Francesco Buono
Entropy 2021, 23(3), 356; https://doi.org/10.3390/e23030356 - 17 Mar 2021
Cited by 10 | Viewed by 2267
Abstract
In 2015, Lad, Sanfilippo and Agrò proposed an alternative measure of uncertainty dual to the entropy known as extropy. This paper provides some results on a dispersion measure of extropy of random variables which is called varextropy and studies several properties of this concept. Especially, the varextropy measure of residual and past lifetimes, order statistics, record values and proportional hazard rate models are discussed. Moreover, the conditional varextropy is considered and some properties of this measure are studied. Finally, a new stochastic comparison method, named varextropy ordering, is introduced and some of its properties are presented. Full article
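A quick numerical illustration, assuming the commonly used normalization J(X) = E[−f(X)/2] for extropy and, for its dispersion counterpart, VJ(X) = Var(−f(X)/2) (the exact conventions of the paper may differ): for an exponential distribution these evaluate to −λ/4 and λ²/48.

```python
import numpy as np

rng = np.random.default_rng(5)
lam = 2.0

# For X ~ Exp(lam): f(X) = lam * exp(-lam * X).
# Extropy J(X) = E[-f(X)/2]; varextropy VJ(X) = Var(-f(X)/2) (normalization assumed here).
x = rng.exponential(scale=1.0 / lam, size=1_000_000)
half_density = -0.5 * lam * np.exp(-lam * x)

print(half_density.mean(), -lam / 4)      # extropy: both ~ -0.5
print(half_density.var(), lam**2 / 48)    # varextropy: both ~ 0.083
```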
