Special Issue "Maximum Entropy and Bayes Theorem"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (30 June 2013)

Special Issue Editor

Guest Editor
Dr. Adom Giffin

Mathematics and Computer Science, Clarkson University, Potsdam, NY 13699, USA
Interests: Bayesian inference and maximum entropy methods; data fusion; multi-scale dynamical systems; machine learning; probability theory

Special Issue Information

Dear Colleagues,

The probabilistic version of entropy originated with Boltzmann. Much later, Shannon (1948) created his information “entropy” using an axiomatic approach. Finally, it was Jaynes (1957) who put the two together and showed that by maximizing Shannon’s entropy, along with specific information from physics in the form of expectation values, one obtains Boltzmann’s and, more importantly, Gibbs’ statistical mechanics.
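
For readers who want the formal statement behind this remark, the standard constrained maximization reads as follows (a textbook sketch, not tied to any particular contribution in this issue):

\max_{p} \; S[p] = -\sum_i p_i \ln p_i \quad \text{subject to} \quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E \rangle .

Introducing Lagrange multipliers for the two constraints yields the canonical (Gibbs) distribution

p_i = \frac{e^{-\beta E_i}}{Z(\beta)}, \qquad Z(\beta) = \sum_i e^{-\beta E_i},

where \beta is fixed by the prescribed expectation value \langle E \rangle.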

Before Shannon, Jeffreys (1939) revitalized the Bayesian (Laplacian) view of probabilities (among others) and examined the ratio of a posterior to a prior, specifically the logarithm of this ratio. He used this ratio as a kind of measure for comparing distributions, with a focus on finding an invariant prior. This concept was expanded by others, such as Kullback and Leibler (1951), Shore and Johnson (1980), Skilling (1988) and Caticha and Giffin (2006), who used the mean of this ratio as a measure of the “distance” between distributions. This mean has a form similar to Shannon’s entropy and is sometimes called the cross entropy or relative entropy.
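
In modern notation (a standard definition rather than any one author's), the relative entropy of a distribution p with respect to a prior q is

D(p \,\|\, q) = \sum_i p_i \ln \frac{p_i}{q_i},

that is, the mean under p of Jeffreys’ log-ratio \ln (p_i / q_i). The maximum relative entropy methods discussed in this issue maximize its negative, S[p \,|\, q] = -D(p \,\|\, q), subject to constraints expressing the available information.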

For the purposes of inference, the goal of both Bayes’ Theorem and Maximum Entropy is to determine a probability distribution based on certain information. However, even though the two are indubitably linked, the use of each is controversial, particularly Maximum Entropy. In this volume we seek contributions that will shed light on their use, their connections and their controversies. The scope of the contributions is very broad, including philosophical discussions, theoretical derivations and explicit, practical solutions to current problems. Such problems may come from, but are by no means limited to, fields such as data mining, dynamical systems, ecology, economics, image analysis and computer vision, machine learning, signal analysis, physics and other scientific areas of inquiry. Papers demonstrating that Bayesian or Maximum Entropy methods yield unique solutions that cannot be produced otherwise are especially welcome.

Dr. Adom Giffin
Guest Editor

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs).

Keywords

  • cross entropy
  • relative entropy
  • information theory
  • inference
  • maximum entropy
  • maximum relative entropy
  • Bayes theorem
  • Bayes rule
  • Bayesian
  • machine learning
  • data mining
  • image analysis
  • computer vision
  • signal analysis
  • data fusion
  • multi-scale dynamical systems
  • distributed computing
  • probability theory

Published Papers (7 papers)


Research

Open Access Article: The Kalman Filter Revisited Using Maximum Relative Entropy
Entropy 2014, 16(2), 1047-1069; doi:10.3390/e16021047
Received: 12 September 2013 / Revised: 5 February 2014 / Accepted: 7 February 2014 / Published: 19 February 2014
Abstract
In 1960, Rudolf E. Kalman created what is known as the Kalman filter, which is a way to estimate unknown variables from noisy measurements. The algorithm follows the logic that if the previous state of the system is known, it could be used as the best guess for the current state. This information is first applied a priori to any measurement by using it in the underlying dynamics of the system. Second, measurements of the unknown variables are taken. These two pieces of information are taken into account to determine the current state of the system. Bayesian inference is specifically designed to accommodate the problem of updating what we think of the world based on partial or uncertain information. In this paper, we present a derivation of the general Bayesian filter, then adapt it for Markov systems. A simple example is shown for pedagogical purposes. We also show that by using the Kalman assumptions or “constraints”, we can arrive at the Kalman filter using the method of maximum (relative) entropy (MrE), which goes beyond Bayesian methods. Finally, we derive a generalized, nonlinear filter using MrE, where the original Kalman Filter is a special case. We further show that the variable relationship can be any function, and thus, approximations, such as the extended Kalman filter, the unscented Kalman filter and other Kalman variants are special cases as well.
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)
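
As a point of reference for readers unfamiliar with the algorithm, the classical one-dimensional predict/update cycle can be sketched as follows. This is an illustrative sketch of the textbook filter only, not the maximum relative entropy derivation given in the paper; all variable names and noise parameters are hypothetical.

# Minimal 1-D Kalman filter sketch (illustrative only; not the paper's MrE derivation).
import random

def kalman_step(x_prev, p_prev, z, a=1.0, q=0.01, h=1.0, r=0.1):
    """One predict/update cycle.
    x_prev, p_prev : previous state estimate and its variance
    z              : new noisy measurement
    a, q           : state-transition coefficient and process-noise variance
    h, r           : measurement coefficient and measurement-noise variance
    """
    # Predict: propagate the previous estimate through the assumed dynamics.
    x_pred = a * x_prev
    p_pred = a * p_prev * a + q

    # Update: blend the prediction with the measurement via the Kalman gain.
    k = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new

# Example: track a constant signal of 1.0 from noisy readings.
x_est, p_est = 0.0, 1.0
for _ in range(50):
    z = 1.0 + random.gauss(0.0, 0.3)
    x_est, p_est = kalman_step(x_est, p_est, z)
print(round(x_est, 2))  # should be close to 1.0

The predict step propagates the previous estimate through the assumed dynamics; the update step pulls that prediction toward the new measurement in proportion to the Kalman gain.
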
Open Access Article: Consistency and Generalization Bounds for Maximum Entropy Density Estimation
Entropy 2013, 15(12), 5439-5463; doi:10.3390/e15125439
Received: 9 July 2013 / Revised: 13 November 2013 / Accepted: 3 December 2013 / Published: 9 December 2013
Abstract
We investigate the statistical properties of maximum entropy density estimation, both for the complete data case and the incomplete data case. We show that under certain assumptions, the generalization error can be bounded in terms of the complexity of the underlying feature functions. This allows us to establish the universal consistency of maximum entropy density estimation.
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)
Open Access Article: Objective Bayesianism and the Maximum Entropy Principle
Entropy 2013, 15(9), 3528-3591; doi:10.3390/e15093528
Received: 28 June 2013 / Revised: 21 August 2013 / Accepted: 21 August 2013 / Published: 4 September 2013
Abstract
Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)
Open Access Article: Relative Entropy Derivative Bounds
Entropy 2013, 15(7), 2861-2873; doi:10.3390/e15072861
Received: 24 May 2013 / Revised: 12 July 2013 / Accepted: 16 July 2013 / Published: 23 July 2013
Abstract
We show that the derivative of the relative entropy with respect to its parameters is lower and upper bounded. We characterize the conditions under which this derivative can reach zero. We use these results to explain when the minimum relative entropy and the maximum log likelihood approaches can be valid. We show that these approaches naturally activate in the presence of large data sets and that they are inherent properties of any density estimation process involving large numbers of random variables.
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)
Open Access Article: Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion
Entropy 2013, 15(4), 1375-1387; doi:10.3390/e15041375
Received: 4 February 2013 / Revised: 8 April 2013 / Accepted: 15 April 2013 / Published: 17 April 2013
Abstract
Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed using Fisher’s linear discriminant analysis, a support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion provided a total classification accuracy of 86.67% and an area (Az) of 0.9096 under the receiver operating characteristic curve, which were superior to the results obtained by either Fisher’s linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). Such results demonstrate the merits of bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for the analysis of knee joint VAG signals.
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)
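
For orientation, the decision rule named in the abstract can be illustrated with a generic sketch: kernel density estimates stand in for the class-conditional distributions of a bivariate feature vector, and the class with the larger posterior probability is selected. The data, priors, and feature values below are hypothetical and are not taken from the paper.

# Generic sketch of a two-class maximal-posterior-probability decision rule
# built on kernel density estimates of bivariate features (illustrative only).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical bivariate features (e.g., two signal statistics) for two groups.
normal_feats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
abnormal_feats = rng.normal(loc=[1.0, 1.0], scale=0.7, size=(100, 2))

# Kernel density estimates of p(x | class); gaussian_kde expects shape (d, n).
kde_normal = gaussian_kde(normal_feats.T)
kde_abnormal = gaussian_kde(abnormal_feats.T)

# Class priors p(class); equal here for simplicity.
prior_normal, prior_abnormal = 0.5, 0.5

def classify(x):
    """Return the class with the maximal posterior probability for feature vector x."""
    post_normal = kde_normal(x)[0] * prior_normal
    post_abnormal = kde_abnormal(x)[0] * prior_abnormal
    return "normal" if post_normal >= post_abnormal else "abnormal"

print(classify(np.array([0.1, -0.2])))  # expected: normal
print(classify(np.array([1.2, 0.9])))   # expected: abnormal
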
Open Access Article: Experimental Assessment of a 2-D Entropy-Based Model for Velocity Distribution in Open Channel Flow
Entropy 2013, 15(3), 988-998; doi:10.3390/e15030988
Received: 24 December 2012 / Revised: 18 February 2013 / Accepted: 1 March 2013 / Published: 6 March 2013
Abstract
The velocity distribution in an open channel flow can be very useful for modeling many hydraulic phenomena. Among others, several 1D models based on the concept of entropy are available in the literature, which allow the velocity distribution to be estimated by measuring velocities at only a few points. Nevertheless, since 1D models often have limited practical use, a 2D entropy-based model was recently developed. The model provides a reliable estimate of the velocity distribution for open channel flow with a rectangular cross section, if the maximum velocity and the average velocity are known. In this paper, results from the proposed model were compared with velocities measured in laboratory experiments. Calculated values were also compared with results inferred from a 2D model available in the literature, showing greater ease of use and a more reliable estimate of the velocity profile.
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)

Review

Open Access Review: Quantum Models of Classical World
Entropy 2013, 15(3), 789-925; doi:10.3390/e15030789
Received: 21 December 2012 / Revised: 11 February 2013 / Accepted: 17 February 2013 / Published: 27 February 2013
Abstract
This paper is a review of our recent work on three notorious problems of non-relativistic quantum mechanics: realist interpretation, the quantum theory of classical properties, and the problem of quantum measurement. Considerable progress has been achieved, based on four distinct new ideas. First, objective properties are associated with states rather than with values of observables. Second, all classical properties are selected properties of certain high-entropy quantum states of macroscopic systems. Third, registration of a quantum system is strongly disturbed by systems of the same type in the environment. Fourth, detectors must be distinguished from ancillas, and the states of registered systems are partially dissipated and lost in the detectors. The paper has two aims: a clear explanation of all new results, and a coherent and contradiction-free account of the whole of quantum mechanics, including all necessary changes to its current textbook version.
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer review.

Type of Paper: Philosophical Foundations
Title: Objective Bayesianism and the Maximum Entropy Principle
Authors: Juergen Landes and Jon Williamson
Affiliation: Philosophy Department, University of Kent, Canterbury, CT2 7NF, UK; E-Mail: J.Williamson@kent.ac.uk
Abstract: The standard argument for the Bayesian view that rational degrees of belief are probabilities is that they have to be in order to avoid the possibility of sure loss (the Dutch Book Argument). But this can be put another way: in order to minimise worst-case expected loss, rational degrees of belief must be probabilities. Now the goal of minimising worst-case expected loss can be used to argue for other norms of rational belief: that rational degrees of belief must be directly calibrated to known physical probabilities (explicated via the Principal Principle), and that rational degrees of belief must otherwise equivocate as far as possible between the basic possibilities under consideration (explicated via the Maximum Entropy Principle). So the standard motivation for Bayesianism motivates a strong, ‘objective’ Bayesianism that invokes the Maximum Entropy Principle. Interestingly, however, conditional probabilities play a less central part in this account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes Theorem.

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
entropy@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18