Open Access Article
Entropy 2015, 17(3), 1146-1164; doi:10.3390/e17031146

Maximum Relative Entropy Updating and the Value of Learning

Faculty of Philosophy, University of Groningen, Oude Boteringestraat 52, Groningen, 9712 GL, The Netherlands
Academic Editors: Juergen Landes and Jon Williamson
Received: 7 January 2015 / Revised: 16 February 2015 / Accepted: 4 March 2015 / Published: 11 March 2015
(This article belongs to the Special Issue Maximum Entropy Applied to Inductive Logic and Reasoning)

Abstract

We examine the possibility of justifying the principle of maximum relative entropy (MRE), considered as an updating rule, by looking at the value of learning theorem established in classical decision theory. This theorem captures an intuitive requirement for learning: learning should lead to new degrees of belief that are expected to be helpful and never harmful in making decisions. We call this requirement the value of learning. We consider the extent to which updating by MRE satisfies this requirement and so can serve as a rational means for pursuing practical goals. First, by representing MRE updating as a conditioning model, we show that MRE satisfies the value of learning in cases where learning prompts a complete redistribution of one’s degrees of belief over a partition of propositions. Second, we show that the value of learning may not be generally satisfied by MRE updates in cases of updating on a change in one’s conditional degrees of belief. We explain that this is so because, contrary to what the value of learning requires, one’s prior degrees of belief might not be equal to the expectation of one’s posterior degrees of belief. This, in turn, points towards a more general moral: that the justification of MRE updating in terms of the value of learning may be sensitive to the context of a given learning experience. Moreover, this lends support to the idea that MRE is neither a universal nor a mechanical updating rule, but rather a rule whose application and justification may be context-sensitive.
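The partition case discussed in the abstract — where an MRE update on a constraint fixing the probabilities of partition cells coincides with Jeffrey conditioning — can be illustrated with a small numerical sketch. The numbers, function names, and the rival distribution below are illustrative assumptions, not taken from the paper:

```python
import math

def kl(p, q):
    # Relative entropy D(p || q) = sum_i p_i * log(p_i / q_i)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mre_partition_update(prior, cells, new_cell_probs):
    """MRE posterior when learning fixes the probability of each
    partition cell: within each cell the ratios of prior probabilities
    are preserved, which is exactly Jeffrey conditioning."""
    post = list(prior)
    for cell, q in zip(cells, new_cell_probs):
        mass = sum(prior[i] for i in cell)
        for i in cell:
            post[i] = prior[i] * q / mass
    return post

# Prior over four states; the partition is {s0, s1} vs. {s2, s3},
# and learning sets their probabilities to 0.7 and 0.3.
prior = [0.4, 0.2, 0.3, 0.1]
cells = [(0, 1), (2, 3)]
posterior = mre_partition_update(prior, cells, [0.7, 0.3])
# posterior ≈ [0.4667, 0.2333, 0.225, 0.075]

# Any rival distribution meeting the same cell constraint sits at a
# strictly greater relative entropy from the prior:
rival = [0.5, 0.2, 0.225, 0.075]
assert kl(posterior, prior) < kl(rival, prior)
```

The MRE posterior minimizes D(posterior || prior) subject to the cell constraint, which is why shifting any mass within a cell (as in `rival`) can only increase the relative entropy.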
Keywords: maximum relative entropy; probabilistic updating; the value of learning theorem; decision theory; Skyrms’s condition M; the Judy Benjamin problem; context-sensitivity
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Share & Cite This Article

MDPI and ACS Style

Dziurosz-Serafinowicz, P. Maximum Relative Entropy Updating and the Value of Learning. Entropy 2015, 17, 1146-1164.


Entropy EISSN 1099-4300, Published by MDPI AG, Basel, Switzerland