Divergence and Sufficiency for Convex Optimization
Abstract

Logarithmic score and information divergence appear in information theory, statistics, statistical mechanics, and portfolio theory. We demonstrate that each of these topics involves an optimization problem that leads directly to a regret function, and that such regret functions are often given by Bregman divergences. If a regret function also fulfills a sufficiency condition, it must be proportional to information divergence. We demonstrate that sufficiency is equivalent to the apparently weaker notion of locality and also to the apparently stronger notion of monotonicity. These sufficiency conditions have quite different relevance in the different areas of application, and often they are not fulfilled. Sufficiency conditions can therefore be used to explain when results from one area transfer directly to another and when one should expect differences.
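A minimal illustration of the objects named in the abstract, assuming only the standard definitions (the notation here is ours, not quoted from the paper): for a differentiable convex function $F$, the Bregman divergence of $P$ from $Q$ is

\[
D_F(P,Q) \;=\; F(P) - F(Q) - \langle \nabla F(Q),\, P - Q \rangle .
\]

Taking $F$ to be negative Shannon entropy, $F(P) = \sum_i p_i \ln p_i$, and restricting to probability vectors $P, Q$ gives

\[
D_F(P,Q) \;=\; \sum_i p_i \ln \frac{p_i}{q_i},
\]

which is exactly information (Kullback--Leibler) divergence, the case that the sufficiency condition singles out among Bregman divergences.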
Cite This Article
Harremoës, P. Divergence and Sufficiency for Convex Optimization. Entropy 2017, 19, 206.