Open Access Article
Entropy 2017, 19(8), 402; https://doi.org/10.3390/e19080402

Optimal Belief Approximation

1 Max-Planck-Institut für Astrophysik, Karl-Schwarzschildstr. 1, 85748 Garching, Germany
2 Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
* Author to whom correspondence should be addressed.
Received: 18 April 2017 / Revised: 4 July 2017 / Accepted: 5 July 2017 / Published: 4 August 2017
(This article belongs to the Section Information Theory)

Abstract

In Bayesian statistics, probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically, and approximations of beliefs are needed. We seek a loss function that quantifies how “embarrassing” it is to communicate a given approximation. We reproduce and discuss an old proof showing that there is only one such ranking under the requirements that (1) the best-ranked approximation is the non-approximated belief and (2) the ranking judges approximations only by their predictions for actual outcomes. The loss function obtained in the derivation is equal to the Kullback-Leibler divergence when normalized. This loss function is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. The correct order ensures that the recipient of a communication is deprived of only the minimal amount of information. We hope that this elementary derivation settles the apparent confusion. For example, when approximating beliefs with Gaussian distributions, the optimal approximation is given by moment matching. This is in contrast to many suggested computational schemes.
Keywords: information theory; Bayesian inference; loss function; axiomatic derivation; machine learning
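The following is a minimal numerical sketch (not taken from the article) of the claim stated in the abstract: with the non-approximated belief p as the first argument of the Kullback-Leibler divergence, the best Gaussian approximation of p is the moment-matched one. The toy bimodal belief, the grid, and the comparison Gaussian are illustrative assumptions chosen only for demonstration.

```python
# Illustrative sketch (assumed setup, not the article's code): compare a
# moment-matched Gaussian against a mode-centred Gaussian as approximations
# of a bimodal belief p, using D_KL(p || q) with p as the first argument.

import numpy as np

x = np.linspace(-10.0, 10.0, 4001)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Non-approximated belief p: a toy mixture of two Gaussians.
p = 0.5 * gaussian(x, -2.0, 1.0) + 0.5 * gaussian(x, 3.0, 1.5)
p /= np.trapz(p, x)  # normalize on the grid

def kl(p, q):
    """Numerical D_KL(p || q) on the grid."""
    mask = p > 0
    return np.trapz(p[mask] * np.log(p[mask] / q[mask]), x[mask])

# Moment-matched Gaussian: same mean and variance as p.
mean_p = np.trapz(x * p, x)
var_p = np.trapz((x - mean_p) ** 2 * p, x)
q_moment = gaussian(x, mean_p, np.sqrt(var_p))

# A mode-centred alternative, typical of schemes that effectively
# minimize the reversed divergence D_KL(q || p).
q_mode = gaussian(x, 3.0, 1.5)

print("D_KL(p || q_moment_matched) =", kl(p, q_moment))
print("D_KL(p || q_mode_centred)   =", kl(p, q_mode))
# With the argument order D_KL(p || q), the moment-matched Gaussian attains
# the smaller loss, consistent with the abstract's statement.
```

Running the sketch shows the moment-matched Gaussian winning under D_KL(p || q); reversing the argument order would instead favour mode-seeking approximations, which is the confusion the article addresses.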

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Leike, R.H.; Enßlin, T.A. Optimal Belief Approximation. Entropy 2017, 19, 402.


