Special Issue "Maximum Entropy"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (30 September 2009)

Special Issue Editor

Editor-in-Chief
Dr. Peter Harremoës

Copenhagen Business College, Rønne Alle 1, st., DK-2860 Søborg, Denmark
Interests: symmetry; information divergence; cause and effect; Maxwell's demon; probability and statistics

Published Papers (10 papers)


Research

Open Access Article: Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model
Entropy 2010, 12(1), 14-33; doi:10.3390/e12010014
Received: 9 October 2009 / Accepted: 11 December 2009 / Published: 28 December 2009
Cited by 3
Abstract
This paper proposes a new method for estimating seismic wavelets. Assuming the wavelet can be modeled by a formula with three free parameters (scale, frequency and phase), estimating the wavelet reduces to determining these three parameters. The phase is estimated by applying a constant-phase rotation to the seismic signal, while the other two parameters are obtained by a higher-order statistics (HOS, fourth-order cumulant) matching method. To derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulate the multivariate joint probability density function (PDF) of the seismic signal. In this way, the HOS can be represented as a polynomial function of second-order statistics, which improves the anti-noise performance and accuracy. In addition, the proposed method works well for short time series.
(This article belongs to the Special Issue Maximum Entropy)
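The abstract above does not reproduce the three-parameter wavelet formula. Purely as an illustration, and not necessarily the parametrization used in the paper, a Morlet-type wavelet with scale σ, frequency f and phase φ has the form
\[
w(t) = \exp\!\left(-\frac{t^{2}}{2\sigma^{2}}\right)\cos\!\left(2\pi f t + \phi\right),
\]
so that estimating the wavelet amounts to estimating the triple (σ, f, φ).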
Open Access Article: Entropy-Based Wavelet De-noising Method for Time Series Analysis
Entropy 2009, 11(4), 1123-1147; doi:10.3390/e11041123
Received: 9 October 2009 / Accepted: 11 December 2009 / Published: 22 December 2009
Cited by 23
Abstract
Noise strongly distorts the real features of observed time series, so noise reduction is a necessary and significant task in many practical applications. Traditional de-noising methods often cannot meet practical needs because of their inherent shortcomings. In the present paper, a set of key but difficult wavelet de-noising problems is discussed first, and a new entropy-based wavelet de-noising method is then proposed by applying information entropy theory to the de-noising process: the principle of maximum entropy (POME) is used to describe the random character of the noise, and wavelet energy entropy is used to describe the complexity of the main series within the original data. Analyses of several synthetic series and of typical observed time series verify the performance of the new method. A comprehensive discussion of the results indicates that, compared with traditional wavelet de-noising methods, the proposed method is more effective and more universally applicable. Furthermore, because information entropy is used to characterize the clearly different behavior of the noise and of the main series before de-noising, the analysis has a more reliable physical basis, and the results are more reasonable and globally optimal. The method is also simple and easy to implement, making it useful in the applied sciences and in practical engineering work.
(This article belongs to the Special Issue Maximum Entropy)
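The entropy-based criterion itself is not spelled out in the abstract above. The following is only a minimal sketch of a generic wavelet-thresholding de-noising pipeline, written with the PyWavelets package and a conventional universal threshold; the paper's POME and wavelet-energy-entropy criterion would replace the threshold choice shown here.

import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4, threshold=None):
    # Decompose, shrink the detail coefficients, reconstruct.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    if threshold is None:
        # Conventional universal threshold (not the entropy-based rule of the paper)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: len(signal)]

# Example: de-noise a noisy sine wave
t = np.linspace(0.0, 1.0, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
clean = wavelet_denoise(noisy)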
Open Access Article: Best Probability Density Function for Random Sampled Data
Entropy 2009, 11(4), 1001-1024; doi:10.3390/e11041001
Received: 9 October 2009 / Accepted: 2 December 2009 / Published: 4 December 2009
Abstract
The maximum entropy method is a theoretically sound approach for constructing an analytical form of the probability density function (pdf) from a sample of random events. In practice, the numerical methods used to determine the Lagrange multipliers associated with a set of moments are generally unstable in the presence of noise due to limited sampling. A robust method is presented that always returns the best pdf, in which the tradeoff made when smoothing a highly varying, noise-affected function can be controlled. An unconventional adaptive simulated annealing technique, called funnel diffusion, determines the expansion coefficients of Chebyshev polynomials in the exponential function.
(This article belongs to the Special Issue Maximum Entropy)
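As a sketch of the construction described above (notation mine), the density that maximizes the Shannon entropy subject to a finite set of moment constraints takes an exponential form, here written with Chebyshev polynomials T_k as in the abstract:
\[
p(x) = \exp\!\Big(\lambda_{0} + \sum_{k=1}^{K} \lambda_{k} T_{k}(x)\Big),
\qquad
\int T_{k}(x)\,p(x)\,dx = \mu_{k}, \quad k = 1,\dots,K,
\]
where the coefficients \lambda_k play the role of Lagrange multipliers; in the paper they are determined by the funnel diffusion annealing scheme rather than by solving the moment equations directly.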
Open Access Article: A Weighted Generalized Maximum Entropy Estimator with a Data-driven Weight
Entropy 2009, 11(4), 917-930; doi:10.3390/e11040917
Received: 24 September 2009 / Accepted: 16 November 2009 / Published: 26 November 2009
Cited by 4
Abstract
The method of Generalized Maximum Entropy (GME), proposed by Golan, Judge and Miller (1996), is an information-theoretic approach that is robust to the multicollinearity problem. It uses an objective function that is the sum of the entropies of the coefficient distributions and the disturbance distributions. The method can be generalized to the weighted GME (W-GME), in which different weights are assigned to the two entropies in the objective function. We propose a data-driven method for selecting these weights, using least squares cross-validation to derive the optimal values. Monte Carlo simulations demonstrate that the proposed W-GME estimator is comparable to, and often outperforms, the conventional GME estimator, which places equal weights on the entropies of the coefficient and disturbance distributions.
(This article belongs to the Special Issue Maximum Entropy)
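In schematic form (notation mine, following the description above), the W-GME objective re-weights the two entropy terms of the standard GME problem:
\[
\max_{p,\,w}\;\; \gamma\,H(p) + (1-\gamma)\,H(w), \qquad 0 \le \gamma \le 1,
\]
subject to the usual GME model and adding-up constraints, where H(p) is the entropy of the coefficient (support-point) distributions and H(w) that of the disturbance distributions. The data-driven weight \gamma is chosen by least squares cross-validation; \gamma = 1/2 reproduces the conventional equal-weight GME objective up to a constant factor.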
Open Access Article: Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains
Entropy 2009, 11(4), 867-887; doi:10.3390/e11040867
Received: 21 September 2009 / Accepted: 10 November 2009 / Published: 17 November 2009
Cited by 5
Abstract
In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are the Ising model, the Potts model, and the Blume-Emery-Griffiths model.
(This article belongs to the Special Issue Maximum Entropy)
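A minimal statement of the estimation problem, in notation of my choosing: among transition matrices p_{ij} with stationary distribution \pi_i that satisfy detailed balance (reversibility) and reproduce the imposed observables, select the one maximizing the entropy rate,
\[
\max_{p}\; -\sum_{i}\pi_{i}\sum_{j} p_{ij}\ln p_{ij}
\quad\text{subject to}\quad
\pi_{i}p_{ij} = \pi_{j}p_{ji}, \qquad \sum_{j} p_{ij} = 1,
\]
together with whatever model-specific constraints (for instance, fixed energies or correlations of the spin system) are imposed.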
Open Access Article: What is Fair Pay for Executives? An Information Theoretic Analysis of Wage Distributions
Entropy 2009, 11(4), 766-781; doi:10.3390/e11040766
Received: 31 August 2009 / Accepted: 26 October 2009 / Published: 3 November 2009
Cited by 5
Abstract
The high pay packages of U.S. CEOs have raised serious concerns about what constitutes fair pay. Since present economic models do not adequately address this fundamental question, we propose a new theory based on statistical mechanics and information theory. We use the principle of maximum entropy to show that the maximally fair pay distribution is lognormal under ideal conditions. This prediction agrees with observed data for the bottom 90%–95% of the working population. The theory estimates that the top 35 U.S. CEOs were overpaid by about 129 times their ideal salaries in 2008. We also offer an interpretation of entropy as a measure of fairness in an economic system, a quantity that is maximized at equilibrium.
(This article belongs to the Special Issue Maximum Entropy)
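As a brief sketch of why maximum entropy leads to a lognormal (a standard argument, not quoted from the paper): if only the mean and variance of the logarithm of salary s are constrained, the maximum entropy density of s is lognormal,
\[
\max_{f}\; -\int_{0}^{\infty} f(s)\ln f(s)\,ds
\;\;\text{s.t.}\;\; \mathrm{E}[\ln s]=\mu,\;\mathrm{Var}[\ln s]=\sigma^{2}
\;\;\Longrightarrow\;\;
f(s)=\frac{1}{s\sigma\sqrt{2\pi}}\exp\!\Big(-\frac{(\ln s-\mu)^{2}}{2\sigma^{2}}\Big).
\]
The paper's own derivation and choice of constraints may differ in detail.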
Open Access Article: The Maximum Entropy Rate Description of a Thermodynamic System in a Stationary Non-Equilibrium State
Entropy 2009, 11(4), 675-687; doi:10.3390/e11040675
Received: 14 September 2009 / Accepted: 27 October 2009 / Published: 29 October 2009
Cited by 3
Abstract
In this paper we present a simple model of a rather general system in a stationary non-equilibrium state: an open system traversed by a stationary flux. The probabilistic description is provided by a non-homogeneous Markov chain, which is not assumed on the basis of a model of the microscopic interactions but is instead derived, through a maximum entropy rate principle, from knowledge of the macroscopic fluxes traversing the system.
(This article belongs to the Special Issue Maximum Entropy)
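Schematically (notation mine, consistent with the description above), the transition probabilities are chosen to maximize the entropy rate subject to the prescribed macroscopic flux rather than to any microscopic model:
\[
\max_{p}\; -\sum_{i,j}\pi_{i}\,p_{ij}\ln p_{ij}
\quad\text{subject to}\quad
\sum_{i,j}\pi_{i}\,p_{ij}\,\phi_{ij} = J, \qquad \sum_{j} p_{ij} = 1,
\]
where \phi_{ij} is the elementary contribution of the transition i \to j to the flux and J its prescribed macroscopic value; since the chain in the paper is non-homogeneous, the multipliers may vary along the chain.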

Review

Open Access Review: Fisher Information and Semiclassical Treatments
Entropy 2009, 11(4), 972-992; doi:10.3390/e11040972
Received: 30 October 2009 / Accepted: 27 November 2009 / Published: 3 December 2009
Cited by 12
Abstract
We review here the difference between quantum statistical treatments and semiclassical ones, using as the main tool a semiclassical, shift-invariant Fisher information measure built up with Husimi distributions. Its semiclassical character notwithstanding, this measure also contains abundant information of a purely quantal nature. Such a tool allows us to refine the celebrated Lieb bound for Wehrl entropies and to discover thermodynamic-like relations that involve the degree of delocalization. Fisher-related thermal uncertainty relations are developed, and the degree of purity of canonical distributions, regarded as mixed states, is connected to this Fisher measure as well.
(This article belongs to the Special Issue Maximum Entropy)
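For reference, the standard shift-invariant (translation-family) Fisher information of a probability density f is
\[
I[f] = \int f(x)\left[\frac{\partial \ln f(x)}{\partial x}\right]^{2} dx;
\]
in the semiclassical treatment reviewed here, the density entering this measure is the Husimi distribution, so the quantity is evaluated on phase space (the precise construction is given in the paper).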
Open Access Review: Use of Maximum Entropy Modeling in Wildlife Research
Entropy 2009, 11(4), 854-866; doi:10.3390/e11040854
Received: 4 September 2009 / Accepted: 11 November 2009 / Published: 16 November 2009
Cited by 96
Abstract
Maximum entropy (Maxent) modeling has great potential for identifying distributions and habitat selection of wildlife given its reliance on only presence locations. Recent studies indicate Maxent is relatively insensitive to spatial errors associated with location data, requires few locations to construct useful models, and performs better than other presence-only modeling approaches. Further advances are needed to better define model thresholds, to test model significance, and to address model selection. Additionally, development of modeling approaches is needed when using repeated sampling of known individuals to assess habitat selection. These advancements would strengthen the utility of Maxent for wildlife research and management.
(This article belongs to the Special Issue Maximum Entropy)
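For orientation, the widely used Maxent species-distribution model (shown here as a general sketch, not as the formulation of this particular review) is a Gibbs distribution over sites x with environmental features f_j(x):
\[
P(x) = \frac{\exp\!\big(\sum_{j}\lambda_{j} f_{j}(x)\big)}{\sum_{x'}\exp\!\big(\sum_{j}\lambda_{j} f_{j}(x')\big)},
\]
with the weights \lambda_j fitted, with regularization, so that the feature expectations under P match their averages over the presence locations.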
Open Access Review: The Maximum Entropy Formalism and the Prediction of Liquid Spray Drop-Size Distribution
Entropy 2009, 11(4), 713-747; doi:10.3390/e11040713
Received: 27 August 2009 / Accepted: 26 October 2009 / Published: 2 November 2009
Cited by 11
Abstract
The efficiency of any application involving a liquid spray is known to depend strongly on the spray characteristics and, mainly, on the drop-diameter distribution. There is therefore a crucial need for models that allow this distribution to be predicted. However, atomization processes are only partially understood, and a universal model is not yet available. For almost thirty years, models based on the Maximum Entropy Formalism have been proposed to fulfill this task. This paper presents a review of these models, emphasizing their similarities and differences, and discusses the prospects of using this formalism to model spray drop-size distributions.
(This article belongs to the Special Issue Maximum Entropy)
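In outline (a generic MEF formulation, notation mine), the drop-size distribution f(D) is the one that maximizes the Shannon entropy subject to normalization and to constraints expressing conservation of quantities such as mass, momentum and energy, which yields an exponential-family form:
\[
\max_{f}\; -\int f(D)\ln f(D)\,dD
\;\;\text{s.t.}\;\; \int f(D)\,g_{k}(D)\,dD = \bar{g}_{k}
\;\;\Longrightarrow\;\;
f(D) = \exp\!\Big(-\lambda_{0} - \sum_{k}\lambda_{k} g_{k}(D)\Big),
\]
where the functions g_k(D) encode the conserved quantities (for example, D^3 for liquid mass) and the various MEF models typically differ in the constraint set they impose.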
