Special Issue "Advances in Information Theory"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory".

Deadline for manuscript submissions: closed (30 October 2013)

Special Issue Editor

Guest Editor
Dr. Paola Zizzi

Department of Pure and Applied Mathematics, University of Padova, Via Belzoni 7, 35131 Padova, Italy
Fax: +39 049 8275892
Interests: quantum gravity; quantum cosmology; quantum information

Special Issue Information

Entropy is a key concept in information theory. We have decided to expand the journal Entropy to cover all areas of information theory, including its applications. In the future, a section "Information Theory" will be set up.

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs).

Published Papers (17 papers)


Research

Jump to: Review

Open Access Article Measuring Instantaneous and Spectral Information Entropies by Shannon Entropy of Choi-Williams Distribution in the Context of Electroencephalography
Entropy 2014, 16(5), 2530-2548; doi:10.3390/e16052530
Received: 6 December 2013 / Revised: 30 April 2014 / Accepted: 5 May 2014 / Published: 9 May 2014
Cited by 1 | PDF Full-text (1220 KB) | HTML Full-text | XML Full-text
Abstract
The theory of Shannon entropy was applied to the Choi-Williams time-frequency distribution (CWD) of time series in order to extract entropy information in both time and frequency domains. In this way, four novel indexes were defined: (1) partial instantaneous entropy, calculated as the entropy of the CWD with respect to time by using the probability mass function at each time instant taken independently; (2) partial spectral information entropy, calculated as the entropy of the CWD with respect to frequency by using the probability mass function of each frequency value taken independently; (3) complete instantaneous entropy, calculated as the entropy of the CWD with respect to time by using the probability mass function of the entire CWD; (4) complete spectral information entropy, calculated as the entropy of the CWD with respect to frequency by using the probability mass function of the entire CWD. These indexes were tested on synthetic time series with different behaviors (periodic, chaotic and random) and on a dataset of electroencephalographic (EEG) signals recorded in different states (eyes-open, eyes-closed, ictal and non-ictal activity). The results show that the values of these indexes tend to decrease, in different proportions, when the behavior of the synthetic signals evolves from chaos or randomness to periodicity. Statistical differences (p-value < 0.0005) were found between the values of these measures when comparing eyes-open and eyes-closed states, and between ictal and non-ictal states, in the traditional EEG frequency bands. Finally, this paper demonstrates that the proposed measures can be useful tools to quantify the different periodic, chaotic and random components in EEG signals. Full article
(This article belongs to the Special Issue Advances in Information Theory)
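The four indexes above are Shannon entropies computed from differently normalized probability mass functions of a time-frequency distribution. A minimal sketch of indexes (1) and (3), assuming any nonnegative time-frequency matrix (rows are frequency bins, columns are time instants) stands in for the CWD; the function names are ours, not the paper's:

```python
import numpy as np

def partial_instantaneous_entropy(tfd):
    """Index (1): entropy over frequency at each time instant, with the
    probability mass function of each column taken independently."""
    p = tfd / tfd.sum(axis=0, keepdims=True)
    p = np.where(p > 0, p, 1.0)            # convention: 0 * log(0) = 0
    return -(p * np.log2(p)).sum(axis=0)   # bits, one value per time instant

def complete_instantaneous_entropy(tfd):
    """Index (3): entropy contributions over frequency using the pmf of the
    entire distribution, so the per-instant values sum to the total entropy."""
    p = tfd / tfd.sum()
    p = np.where(p > 0, p, 1.0)
    return -(p * np.log2(p)).sum(axis=0)

# toy distribution: 4 frequency bins x 3 time instants
tfd = np.array([[4.0, 1.0, 1.0],
                [0.0, 1.0, 1.0],
                [0.0, 1.0, 1.0],
                [0.0, 1.0, 1.0]])
# first instant: 0 bits (all mass in one bin); others: 2 bits (uniform over 4)
print(partial_instantaneous_entropy(tfd))
```

The spectral variants, indexes (2) and (4), follow by exchanging the roles of the two axes.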
Open Access Article Cross Layer Interference Management in Wireless Biomedical Networks
Entropy 2014, 16(4), 2085-2104; doi:10.3390/e16042085
Received: 10 November 2013 / Revised: 4 March 2014 / Accepted: 4 April 2014 / Published: 14 April 2014
Cited by 4 | PDF Full-text (1706 KB) | HTML Full-text | XML Full-text
Abstract
Interference, in wireless networks, is a central phenomenon when multiple uncoordinated links share a common communication medium. The study of the interference channel was initiated by Shannon in 1961, and since then the problem has been thoroughly elaborated at the information-theoretic level, but its characterization still remains an open issue. When multiple uncoordinated links share a common medium, the effect of interference is a crucial limiting factor for network performance. In this work, using cross-layer cooperative communication techniques, we study how to compensate for interference in the context of wireless biomedical networks, where many links transferring biomedical or other health-related data may be formed and suffer from all other interfering transmissions, in order to allow successful receptions and improve the overall network performance. We define the interference-limited communication range as the critical communication region around a receiver, with a number of surrounding interfering nodes, within which a successful communication link can be formed. Our results indicate that we can achieve more successful transmissions by adapting the transmission rate and power to the path loss exponent and to the selected mode of the underlying communication technique, allowing interference mitigation and, when possible, lower power consumption and higher achievable transmission rates. Full article
(This article belongs to the Special Issue Advances in Information Theory)
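The interference-limited communication range can be illustrated with a standard SINR model: a link succeeds only if the received signal-to-interference-plus-noise ratio clears a decoding threshold. This is a generic 1/d^alpha path-loss sketch, not the paper's exact channel model; all parameter values and function names are illustrative:

```python
def sinr(d_signal, interferer_dists, p_tx=1.0, p_int=1.0, alpha=3.0, noise=0.1):
    """Received SINR for a link of length d_signal, with interferers at the
    given distances, under a generic 1/d^alpha path-loss model."""
    signal = p_tx * d_signal ** (-alpha)
    interference = sum(p_int * d ** (-alpha) for d in interferer_dists)
    return signal / (noise + interference)

def link_succeeds(d_signal, interferer_dists, threshold=2.0, **kw):
    """A successful link forms only if SINR clears the decoding threshold."""
    return sinr(d_signal, interferer_dists, **kw) >= threshold

# the same link length that works in the noise-limited case fails once a
# nearby interferer is active:
print(link_succeeds(1.5, []))       # True: noise-limited
print(link_succeeds(1.5, [2.0]))    # False: interference shrinks the range
```

Sweeping d_signal until link_succeeds turns false traces out the critical communication region around the receiver for a given interferer layout and path loss exponent.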
Open Access Article Localization of Discrete Time Quantum Walks on the Glued Trees
Entropy 2014, 16(3), 1501-1514; doi:10.3390/e16031501
Received: 4 December 2013 / Revised: 15 January 2014 / Accepted: 10 March 2014 / Published: 18 March 2014
Cited by 2 | PDF Full-text (101 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we consider the time averaged distribution of discrete time quantum walks on the glued trees. In order to analyze the walks on the glued trees, we consider a reduction to the walks on path graphs. Using a spectral analysis of the Jacobi matrices defined by the corresponding random walks on the path graphs, we have a spectral decomposition of the time evolution operator of the quantum walks. We find significant contributions of the eigenvalues, ±1, of the Jacobi matrices to the time averaged limit distribution of the quantum walks. As a consequence, we obtain the lower bounds of the time averaged distribution. Full article
(This article belongs to the Special Issue Advances in Information Theory)
Open Access Article The Maximum Error Probability Criterion, Random Encoder, and Feedback, in Multiple Input Channels
Entropy 2014, 16(3), 1211-1242; doi:10.3390/e16031211
Received: 30 October 2013 / Revised: 10 December 2013 / Accepted: 24 December 2013 / Published: 25 February 2014
Cited by 2 | PDF Full-text (338 KB) | HTML Full-text | XML Full-text
Abstract
For a multiple input channel, one may define different capacity regions according to the criteria of error, types of codes, and presence of feedback. In this paper, we aim to draw a complete picture of the relations among these different capacity regions. To this end, we first prove that the average-error-probability capacity region of a multiple input channel can be achieved by a random code under the criterion of maximum error probability. Moreover, we show that for a non-deterministic multiple input channel with feedback, the capacity regions are the same under the two different error criteria. In addition, we discuss two special classes of channels to shed light on the relation between different capacity regions. In particular, to illustrate the role of feedback, we provide a class of MACs for which feedback may enlarge maximum-error-probability capacity regions, but not average-error-probability capacity regions. We also present a class of MACs as an example for which the maximum-error-probability capacity regions are strictly smaller than the average-error-probability capacity regions (the first example showing this was due to G. Dueck). Unlike G. Dueck's enlightening example, in which a deterministic MAC was considered, our example includes and further generalizes G. Dueck's example by taking both deterministic and non-deterministic MACs into account. Finally, we extend our results for a discrete memoryless two-input channel to compound MACs, arbitrarily varying MACs, and MACs with more than two inputs. Full article
(This article belongs to the Special Issue Advances in Information Theory)
Open Access Article Prediction Method for Image Coding Quality Based on Differential Information Entropy
Entropy 2014, 16(2), 990-1001; doi:10.3390/e16020990
Received: 27 October 2013 / Revised: 19 December 2013 / Accepted: 26 January 2014 / Published: 17 February 2014
PDF Full-text (2324 KB) | HTML Full-text | XML Full-text
Abstract
To meet the requirements of quality-based image coding, an approach to predicting the quality of image coding based on differential information entropy is proposed. First, some typical prediction approaches are introduced, and then the differential information entropy is reviewed. Taking JPEG2000 as an example, the relationship between differential information entropy and the objective assessment indicator PSNR at a fixed compression ratio is established via data fitting, with the constraint of minimizing the average error. Next, the relationship among differential information entropy, compression ratio and PSNR at various compression ratios is constructed, and this relationship is used as an indicator to predict the image coding quality. Finally, the proposed approach is compared with some traditional approaches. The experiments show that differential information entropy has a better linear relationship with image coding quality than image activity does. Therefore, the proposed approach is capable of predicting image coding quality at low compression ratios with small errors and, owing to its simplicity, can be widely applied in a variety of real-time space image coding systems. Full article
(This article belongs to the Special Issue Advances in Information Theory)
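The abstract does not define "differential information entropy" precisely; one common reading is the Shannon entropy of the pixel-difference histogram, which the sketch below pairs with the standard PSNR formula. The helper names and the horizontal-difference construction are our assumptions, not the paper's exact definitions:

```python
import numpy as np

def psnr(orig, coded, peak=255.0):
    """Standard objective quality index, in dB (higher is better)."""
    mse = np.mean((orig.astype(float) - coded.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def differential_entropy_feature(img):
    """Shannon entropy (bits) of the horizontal pixel-difference histogram --
    one plausible reading of a 'differential information entropy' predictor."""
    d = np.diff(img.astype(int), axis=1).ravel()
    _, counts = np.unique(d, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# a smooth gradient image has low differential entropy (easy to code well);
# a noise image has high differential entropy (hard to code well)
smooth = np.tile(np.arange(64), (64, 1))
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64))
print(differential_entropy_feature(smooth), differential_entropy_feature(noisy))
```

Fitting a curve of PSNR against this feature at a fixed compression ratio, as the abstract describes, then turns the feature into a quality predictor.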
Open Access Article New Methods of Entropy-Robust Estimation for Randomized Models under Limited Data
Entropy 2014, 16(2), 675-698; doi:10.3390/e16020675
Received: 17 October 2013 / Revised: 17 December 2013 / Accepted: 14 January 2014 / Published: 23 January 2014
Cited by 5 | PDF Full-text (525 KB) | HTML Full-text | XML Full-text
Abstract
The paper presents a new approach to restoring the characteristics of randomized models under small amounts of input and output data. The approach involves randomized static and dynamic models and estimates the probabilistic characteristics of their parameters. We consider static and dynamic models described by Volterra polynomials. The procedures of robust parametric and non-parametric estimation are constructed by exploiting the entropy concept based on generalized informational Boltzmann and Fermi entropies. Full article
(This article belongs to the Special Issue Advances in Information Theory)
Open Access Article Some Convex Functions Based Measures of Independence and Their Application to Strange Attractor Reconstruction
Entropy 2011, 13(4), 820-840; doi:10.3390/e13040820
Received: 8 February 2011 / Revised: 28 March 2011 / Accepted: 28 March 2011 / Published: 8 April 2011
PDF Full-text (514 KB) | HTML Full-text | XML Full-text
Abstract
The classical information-theoretic measures such as the entropy and the mutual information (MI) are widely applicable to many areas in science and engineering. Csiszar generalized the entropy and the MI by using the convex functions. Recently, we proposed the grid occupancy (GO) and the quasientropy (QE) as measures of independence. The QE explicitly includes a convex function in its definition, while the expectation of GO is a subclass of QE. In this paper, we study the effect of different convex functions on GO, QE, and Csiszar’s generalized mutual information (GMI). A quality factor (QF) is proposed to quantify the sharpness of their minima. Using the QF, it is shown that these measures can have sharper minima than the classical MI. Besides, a recursive algorithm for computing GMI, which is a generalization of Fraser and Swinney’s algorithm for computing MI, is proposed. Moreover, we apply GO, QE, and GMI to chaotic time series analysis. It is shown that these measures are good criteria for determining the optimum delay in strange attractor reconstruction. Full article
(This article belongs to the Special Issue Advances in Information Theory)
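A classic use of such measures for delay selection is Fraser and Swinney's rule: take the embedding delay at the first local minimum of the mutual information between x(t) and x(t + tau). Below is a plain histogram MI estimator, not the paper's recursive GMI algorithm; the function names, bin count, and test signal are our illustrative choices:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plain histogram estimate of the mutual information (bits)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def first_minimum_delay(x, max_lag=40, bins=16):
    """Embedding delay at the first local minimum of MI(x_t, x_{t+tau})."""
    mi = [mutual_information(x[:-t], x[t:], bins) for t in range(1, max_lag + 1)]
    for i in range(1, len(mi) - 1):
        if mi[i] < mi[i - 1] and mi[i] < mi[i + 1]:
            return i + 1          # lags are 1-indexed
    return int(np.argmin(mi)) + 1  # fall back if no interior minimum

# sampled sinusoid with period ~27 samples: MI first dips near a quarter period
x = np.sin(0.23 * np.arange(4000))
print(first_minimum_delay(x))
```

The same loop applied to a chaotic series (e.g., a Lorenz coordinate) is the standard way to pick the delay for strange attractor reconstruction, which is the application the abstract evaluates.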
Open Access Article Analysis of Resource and Emission Impacts: An Emergy-Based Multiple Spatial Scale Framework for Urban Ecological and Economic Evaluation
Entropy 2011, 13(3), 720-743; doi:10.3390/e13030720
Received: 3 December 2010 / Revised: 17 January 2011 / Accepted: 1 March 2011 / Published: 23 March 2011
Cited by 9 | PDF Full-text (261 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The development of the complex and multi-dimensional urban socio-economic system creates impacts on natural capital and human capital, which range from a local to a global scale. An emergy-based multiple spatial scale analysis framework and a rigorous accounting method that can quantify the values of human-made and natural capital losses were proposed in this study. With the intent of comparing the trajectory of Beijing over time, the characteristics of the interface between different scales are considered to explain the resource trade and the impacts of emissions. In addition, we examined our improved emergy analysis and acceptable management options that are in agreement with Beijing's overall sustainability strategy. The results showed that Beijing's economy was closely correlated with the consumption of nonrenewable resources and exerted rising pressure on the environment. Of the total emergy used by the economic system, nonrenewable resources imported from other provinces contribute the most, and the multi-scale environmental impacts of waterborne and airborne pollution continued to increase from 1999 to 2006. Given the input structure, Beijing was chiefly making greater profits by shifting resources from other provinces in China and transferring the emissions outside. The results of our study should enable urban policy planners to better understand the multi-scale policy planning and development design of an urban ecological economic system. Full article
(This article belongs to the Special Issue Advances in Information Theory)
Open Access Article Information Theory and Dynamical System Predictability
Entropy 2011, 13(3), 612-649; doi:10.3390/e13030612
Received: 25 January 2011 / Revised: 14 February 2011 / Accepted: 20 February 2011 / Published: 7 March 2011
Cited by 12 | PDF Full-text (1206 KB) | HTML Full-text | XML Full-text
Abstract
Predicting the future state of a turbulent dynamical system such as the atmosphere has been recognized for several decades to be an essentially statistical undertaking. Uncertainties from a variety of sources are magnified by dynamical mechanisms and given sufficient time, compromise any prediction. In the last decade or so this process of uncertainty evolution has been studied using a variety of tools from information theory. These provide both a conceptually general view of the problem as well as a way of probing its non-linearity. Here we review these advances from both a theoretical and practical perspective. Connections with other theoretical areas such as statistical mechanics are emphasized. The importance of obtaining practical results for prediction also guides the development presented. Full article
(This article belongs to the Special Issue Advances in Information Theory)
Open Access Article Information Theoretic Hierarchical Clustering
Entropy 2011, 13(2), 450-465; doi:10.3390/e13020450
Received: 8 December 2010 / Revised: 31 December 2010 / Accepted: 27 January 2011 / Published: 10 February 2011
Cited by 6 | PDF Full-text (298 KB) | HTML Full-text | XML Full-text
Abstract
Hierarchical clustering has been extensively used in practice, where clusters can be assigned and analyzed simultaneously, especially when estimating the number of clusters is challenging. However, due to the conventional proximity measures employed in these algorithms, they are only capable of detecting mass-shaped clusters and encounter problems in identifying complex data structures. Here, we introduce two bottom-up hierarchical approaches that exploit an information theoretic proximity measure to explore the nonlinear boundaries between clusters and to extract data structures beyond second-order statistics. Experimental results on both artificial and real datasets demonstrate the superiority of the proposed algorithm compared to conventional and information theoretic clustering algorithms reported in the literature, especially in detecting the true number of clusters. Full article
(This article belongs to the Special Issue Advances in Information Theory)
Open Access Article Estimating Neuronal Information: Logarithmic Binning of Neuronal Inter-Spike Intervals
Entropy 2011, 13(2), 485-501; doi:10.3390/e13020485
Received: 9 December 2010 / Revised: 14 January 2011 / Accepted: 26 January 2011 / Published: 10 February 2011
Cited by 6 | PDF Full-text (143 KB) | HTML Full-text | XML Full-text
Abstract
Neurons communicate via the relative timing of all-or-none biophysical signals called spikes. For statistical analysis, the time between spikes can be accumulated into inter-spike interval histograms. Information theoretic measures have been estimated from these histograms to assess how information varies across organisms, neural systems, and disease conditions. Because neurons are computational units that, to the extent they process time, work not by discrete clock ticks but by the exponential decays of numerous intrinsic variables, we propose that neuronal information measures scale more naturally with the logarithm of time. For the types of inter-spike interval distributions that best describe neuronal activity, the logarithm of time enables fewer bins to capture the salient features of the distributions. Thus, discretizing the logarithm of inter-spike intervals, as compared to the inter-spike intervals themselves, yields histograms that enable more accurate entropy and information estimates for fewer bins and less data. Additionally, as distribution parameters vary, the entropy and information calculated from the logarithm of the inter-spike intervals are substantially better behaved, e.g., entropy is independent of mean rate, and information is equally affected by rate gains and divisions. Thus, when compiling neuronal data for subsequent information analysis, the logarithm of the inter-spike intervals is preferred, over the untransformed inter-spike intervals, because it yields better information estimates and is likely more similar to the construction used by nature herself. Full article
(This article belongs to the Special Issue Advances in Information Theory)
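The core claim — that a pure rate change only translates log(ISI), leaving log-binned entropy nearly invariant while linear-binned entropy shifts strongly — can be checked numerically. A sketch with exponential inter-spike intervals; the bin widths and sample sizes are arbitrary illustrative choices, not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def binned_entropy(samples, edges):
    """Shannon entropy (bits) of samples discretized by the given bin edges."""
    counts, _ = np.histogram(samples, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def linear_bin_entropy(isi, width=0.05):
    """Entropy of the raw inter-spike intervals, fixed-width linear bins."""
    edges = np.arange(0.0, isi.max() + 2 * width, width)
    return binned_entropy(isi, edges)

def log_bin_entropy(isi, width=0.25):
    """Entropy after log-transforming the intervals, fixed width in log units."""
    z = np.log(isi)
    edges = np.arange(z.min() - width, z.max() + 2 * width, width)
    return binned_entropy(z, edges)

# exponential ISIs whose mean rates differ by a factor of 10
slow = rng.exponential(scale=1.0, size=200_000)
fast = rng.exponential(scale=0.1, size=200_000)

# linear bins: the entropy shifts strongly with the rate
print(linear_bin_entropy(slow), linear_bin_entropy(fast))
# log bins: a rate change only translates log(ISI), so the entropy barely moves
print(log_bin_entropy(slow), log_bin_entropy(fast))
```

This is the rate-independence property the abstract describes: in log time the two distributions have the same shape up to a shift, which fixed-width discrete entropy ignores.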
Open Access Article Information Theory in Scientific Visualization
Entropy 2011, 13(1), 254-273; doi:10.3390/e13010254
Received: 25 November 2010 / Revised: 30 December 2010 / Accepted: 31 December 2010 / Published: 21 January 2011
Cited by 25 | PDF Full-text (261 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, an emerging research direction has leveraged information theory to solve many challenging problems in scientific data analysis and visualization. In this article, we review the key concepts in information theory, discuss how its principles can be useful for visualization, and provide specific examples to draw connections between data communication and data visualization in terms of how information can be measured quantitatively. As the amount of digital data available to us increases at an astounding speed, the goal of this article is to introduce interested readers to this new direction of data analysis research, and to inspire them to identify new applications and seek solutions using information theory. Full article
(This article belongs to the Special Issue Advances in Information Theory)
Open Access Article Information Storage in Liquids with Ordered Molecular Assemblies
Entropy 2011, 13(1), 1-10; doi:10.3390/e13010001
Received: 31 October 2010 / Revised: 20 December 2010 / Accepted: 21 December 2010 / Published: 23 December 2010
PDF Full-text (242 KB) | HTML Full-text | XML Full-text
Abstract
In some unique cases, liquids can divert from pure isotropy due to the formation of ordered molecular assemblies with acquired “negative entropy” and information storage. The energy stored in such ordered domains can be combined with an independent quantitative parameter related to the degree of order, which can then translate the dormant information to the quantitative energetic term “information capacity”. Information storage in liquids can be thus expressed in absolute energy units. Three liquid systems are analyzed in some detail. The first is a solution of a chiral substance, e.g., amino acid in water, where the degree of optical rotation provides the measure for order while the heat liberated upon racemization is the energy corresponding to the negative entropy. The second is a neat chiral fluid, e.g., 2-butanol, complying with the same parameters as those of chiral solutions. The third is electronically excited fluorescent solute, where the shift in the emission spectrum corresponds to the energy acquired by the transiently oriented solvent envelopes. Other, yet unexplored, possibilities are also suggested. Full article
(This article belongs to the Special Issue Advances in Information Theory)
Open Access Article Incorporating Spatial Structures in Ecological Inference: An Information Theory Approach
Entropy 2010, 12(10), 2171-2185; doi:10.3390/e12102171
Received: 30 August 2010 / Revised: 12 October 2010 / Accepted: 12 October 2010 / Published: 14 October 2010
Cited by 2 | PDF Full-text (229 KB) | HTML Full-text | XML Full-text
Abstract
This paper introduces an Information Theory-based method for modeling economic aggregates and estimating their sub-group (sub-area) decomposition when no individual or sub-group data are available. This method offers a flexible framework for modeling the underlying variation in sub-group indicators, by addressing the spatial dependency problem. A basic ecological inference problem, which allows for spatial heterogeneity and dependence, is presented with the aim of first estimating the model at the aggregate level, and then of employing the estimated coefficients to obtain the sub-group level indicators. Full article
(This article belongs to the Special Issue Advances in Information Theory)
Open Access Article Increasing and Decreasing Returns and Losses in Mutual Information Feature Subset Selection
Entropy 2010, 12(10), 2144-2170; doi:10.3390/e12102144
Received: 28 August 2010 / Accepted: 19 September 2010 / Published: 11 October 2010
Cited by 5 | PDF Full-text (853 KB) | HTML Full-text | XML Full-text
Abstract
Mutual information between a target variable and a feature subset is extensively used as a feature subset selection criterion. This work contributes to a more thorough understanding of the evolution of the mutual information as a function of the number of features selected. We describe decreasing returns and increasing returns behavior in sequential forward search and increasing losses and decreasing losses behavior in sequential backward search. We derive conditions under which the decreasing returns and the increasing losses behavior hold and prove the occurrence of this behavior in some Bayesian networks. The decreasing returns behavior implies that the mutual information is concave as a function of the number of features selected, whereas the increasing returns behavior implies this function is convex. The increasing returns and decreasing losses behavior are proven to occur in an XOR hypercube. Full article
(This article belongs to the Special Issue Advances in Information Theory)
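The XOR case mentioned above can be reproduced directly: under a greedy sequential forward search driven by mutual information, each XOR input alone contributes zero bits while the pair jointly contributes one full bit — the increasing-returns behavior. A discrete-variable sketch (the function names are ours, not the paper's):

```python
import numpy as np
from itertools import product

def entropy(v):
    """Shannon entropy (bits) of a discrete sequence."""
    _, counts = np.unique(v, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_info(X_subset, y):
    """I(S; y) for a set of discrete features, via a joint encoding of S."""
    if X_subset.shape[1] == 0:
        return 0.0
    _, s = np.unique(X_subset, axis=0, return_inverse=True)
    s = s.ravel()
    _, sy = np.unique(np.column_stack([s, y]), axis=0, return_inverse=True)
    return entropy(s) + entropy(y) - entropy(sy)

def forward_select(X, y, k):
    """Sequential forward search: greedily add the feature whose inclusion
    maximizes the mutual information with the target."""
    selected = []
    for _ in range(k):
        rest = [j for j in range(X.shape[1]) if j not in selected]
        gains = [mutual_info(X[:, selected + [j]], y) for j in rest]
        selected.append(rest[int(np.argmax(gains))])
    return selected

# full truth table over 3 binary features; the target is the XOR of the
# first two, and the third feature is irrelevant
X = np.array(list(product([0, 1], repeat=3)))
y = X[:, 0] ^ X[:, 1]
print(forward_select(X, y, 2))  # each XOR bit alone: 0 bits; the pair: 1 bit
```

Because every single feature ties at zero gain, the first pick is arbitrary; the convexity the abstract proves shows up as the full bit of information arriving only with the second feature.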

Review

Jump to: Research

Open Access Review On a Connection between Information and Group Lattices
Entropy 2011, 13(3), 683-708; doi:10.3390/e13030683
Received: 19 January 2011 / Revised: 14 March 2011 / Accepted: 14 March 2011 / Published: 18 March 2011
Cited by 4 | PDF Full-text (220 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we review a particular connection between information theory and group theory. We formalize the notions of information elements and information lattices, first proposed by Shannon. Exploiting this formalization, we expose a comprehensive parallelism between information lattices and subgroup lattices. Qualitatively, isomorphisms between information lattices and subgroup lattices are demonstrated. Quantitatively, a decisive approximation relation between the entropy structures of information lattices and the log-index structures of the corresponding subgroup lattices, first discovered by Chan and Yeung, is highlighted. This approximation, addressing both joint and common entropies, extends the work of Chan and Yeung on joint entropy. A consequence of this approximation result is that any continuous law holds in general for the entropies of information elements if and only if the same law holds in general for the log-indices of subgroups. As an application, by constructing subgroup counterexamples, we find surprisingly that common information, unlike joint information, obeys neither the submodularity nor the supermodularity law. We emphasize that the notion of information elements is conceptually significant—formalizing it helps to reveal the deep connection between information theory and group theory. The parallelism established in this paper admits an appealing group-action explanation and provides useful insights into the intrinsic structure among information elements from a group-theoretic perspective. Full article
(This article belongs to the Special Issue Advances in Information Theory)
Open Access Review Extreme Fisher Information, Non-Equilibrium Thermodynamics and Reciprocity Relations
Entropy 2011, 13(1), 184-194; doi:10.3390/e13010184
Received: 12 November 2010 / Revised: 10 January 2011 / Accepted: 12 January 2011 / Published: 14 January 2011
Cited by 5 | PDF Full-text (113 KB) | HTML Full-text | XML Full-text
Abstract
In employing MaxEnt, a crucial role is assigned to the reciprocity relations that relate the quantifier to be extremized (Shannon’s entropy S), the Lagrange multipliers that arise during the variational process, and the expectation values that constitute the a priori input information. We review here just how these ingredients relate to each other when the information quantifier S is replaced by Fisher’s information measure I. The connection of these proceedings with thermodynamics constitutes our physical background. Full article
(This article belongs to the Special Issue Advances in Information Theory)

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
entropy@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18