Entropy 2015, 17(5), 3458-3460; doi:10.3390/e17053458
Abstract: This editorial explains the scope of the special issue and provides a thematic introduction to the contributed papers.
With this special issue, we wanted to provide a platform for practitioners, proponents and opponents of Maximum Entropy methods (MaxEnt) applied to inductive logic and reasoning. Unfortunately, we received no papers arguing against MaxEnt. However, we did receive an exciting array of positive contributions, which are described below.
There has been much debate in the inductive logic and reasoning literature on the justificatory status of MaxEnt. Jeff Paris, in his paper [1], defends MaxEnt against the charge of language dependence, and warns against mistaking subjective degrees of belief for estimates of objective probabilities and against mis-applying the Principle of Insufficient Reason.
The other contributed papers focus on extensions of the standard framework for MaxEnt in inductive logic and reasoning. The standard framework considers a single agent equipped with a finite propositional language (or a finite domain of propositions) and a propositional knowledge base, seeking to determine probabilities from these ingredients. Papers in this volume extend the standard framework in four different directions: the multi-agent setting, the dynamic setting, more elaborate knowledge bases, and richer underlying languages.
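The standard framework can be sketched in a few lines of code. The following is a minimal illustration, not drawn from any of the contributed papers: a toy propositional language with two atoms A and B (hence four possible worlds) and a knowledge base containing the single constraint P(A) = 0.7. MaxEnt selects, among all probability functions satisfying the knowledge base, the one of maximal entropy; all variable names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Worlds ordered as: A&B, A&~B, ~A&B, ~A&~B
def neg_entropy(p):
    """Negative Shannon entropy (we minimise this to maximise entropy)."""
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},    # probabilities sum to 1
    {"type": "eq", "fun": lambda p: p[0] + p[1] - 0.7},  # knowledge base: P(A) = 0.7
]

res = minimize(neg_entropy, np.full(4, 0.25), method="SLSQP",
               bounds=[(1e-9, 1.0)] * 4, constraints=constraints)
p_maxent = res.x
# MaxEnt spreads probability as evenly as the constraint allows:
# approximately [0.35, 0.35, 0.15, 0.15]
```

As expected, the 0.7 mass on A-worlds and the 0.3 mass on non-A-worlds are each split uniformly, since the knowledge base says nothing to distinguish the worlds within each region.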
George Wilmers extends the notion of an inference process, which determines probabilities from a knowledge base, to the multi-agent setting, and studies the resulting social inference processes and their logical foundations [2]. The social entropy process, which naturally generalises the maximum entropy inference process and the multi-agent normalised geometric mean pooling operator, emerges as uniquely justified on the basis of an information theoretic argument. The information geometry of pooling operators and their connections to Bregman divergences are studied by Martin Adamčík [3]. The Kullback-Leibler divergence and Wilmers’ social inference process appear as natural special cases in Adamčík’s analysis.
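The sense in which Kullback-Leibler divergence is a special case of a Bregman divergence can be checked numerically. A Bregman divergence is generated by a convex function F via D_F(p, q) = F(p) − F(q) − ⟨∇F(q), p − q⟩; taking F to be negative entropy yields KL divergence on probability vectors. This is a standard fact, sketched here with made-up numbers rather than anything from Adamčík's paper.

```python
import numpy as np

def bregman(F, gradF, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <gradF(q), p - q>."""
    return F(p) - F(q) - np.dot(gradF(q), p - q)

neg_entropy = lambda x: np.sum(x * np.log(x))      # generator F
grad_neg_entropy = lambda x: np.log(x) + 1.0       # its gradient

p = np.array([0.5, 0.3, 0.2])   # arbitrary probability vectors
q = np.array([0.25, 0.25, 0.5])

d_bregman = bregman(neg_entropy, grad_neg_entropy, p, q)
d_kl = np.sum(p * np.log(p / q))  # KL(p || q)
# d_bregman and d_kl agree (the linear terms cancel because both vectors sum to 1)
```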
Two papers apply MaxEnt to dynamical belief formation. Patryk Dziurosz-Serafinowicz investigates Maximum Relative Entropy (MRE) updating and the value of learning theorem [4]. His analysis of the Judy Benjamin problem highlights the language dependence of MaxEnt discussed by Jeff Paris in [1]. In [5], Stefan Lukits shows that MaxEnt generalises both Jeffrey conditioning and Wagner conditioning. He concludes that MaxEnt provides a more integrated approach to probability updating.
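The relationship between MRE updating and Jeffrey conditioning can be illustrated concretely. In the hypothetical example below (numbers chosen for illustration only), an agent with a prior over four worlds learns that her degree of belief in an event E should become 0.8; Jeffrey conditioning rescales the prior inside and outside E, and minimising relative entropy to the prior under the same constraint recovers exactly the same posterior.

```python
import numpy as np
from scipy.optimize import minimize

prior = np.array([0.4, 0.2, 0.3, 0.1])  # prior over four worlds
in_E = np.array([1.0, 1.0, 0.0, 0.0])   # E holds in worlds 0 and 1
q = 0.8                                 # new degree of belief in E

# Jeffrey conditioning: rescale probabilities inside and outside E.
pE = prior @ in_E
jeffrey = np.where(in_E == 1.0, prior * q / pE, prior * (1 - q) / (1 - pE))

# MRE updating: minimise KL(p || prior) subject to P(E) = q.
def kl_to_prior(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p / prior))

cons = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
        {"type": "eq", "fun": lambda p: p @ in_E - q}]
res = minimize(kl_to_prior, np.full(4, 0.25), method="SLSQP",
               bounds=[(1e-9, 1.0)] * 4, constraints=cons)
mre = res.x
# jeffrey and mre coincide: MRE reproduces Jeffrey conditioning
```

The coincidence is no accident: when the evidence constrains only the total probability of a partition cell, the entropy-minimising update preserves the probability ratios within each cell, which is precisely what Jeffrey conditioning does.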
Christoph Beierle, Marc Finthammer and Gabriele Kern-Isberner apply MaxEnt to knowledge bases containing conditionals in which relations occur [6]. They develop a logical framework, PCI, which captures two kinds of semantics: grounding semantics and aggregation semantics. In a number of cases, PCI-grounding and PCI-aggregation semantics coincide when employing MaxEnt.
In the last paper of this special issue, Jürgen Landes and Jon Williamson seek to justify MaxEnt when the underlying language is a first-order predicate language [7]. We previously argued in the setting of a propositional language that if an agent is to avoid avoidable losses then her degrees of belief need to be obtained by MaxEnt [8]. Here, we extend that line of justification to the richer setting of a first-order language.
The depth and breadth of the papers in this volume suggest the presence of a mature and progressive research programme. The future for MaxEnt applied to inductive logic and reasoning appears very bright indeed.
1. Paris, J.B. What you see is what you get. Entropy 2014, 16, 6186–6194.
2. Wilmers, G. A Foundational Approach to Generalising the Maximum Entropy Inference Process to the Multi-Agent Context. Entropy 2015, 17, 594–645.
3. Adamčík, M. The Information Geometry of Bregman Divergences and Some Applications in Multi-Expert Reasoning. Entropy 2014, 16, 6338–6381.
4. Dziurosz-Serafinowicz, P. Maximum Relative Entropy Updating and the Value of Learning. Entropy 2015, 17, 1146–1164.
5. Lukits, S. Maximum Entropy and Probability Kinematics Constrained by Conditionals. Entropy 2015, 17, 1690–1700.
6. Beierle, C.; Finthammer, M.; Kern-Isberner, G. Relational Probabilistic Conditionals and Their Instantiations under Maximum Entropy Semantics for First-Order Knowledge Bases. Entropy 2015, 17, 852–865.
7. Landes, J.; Williamson, J. Justifying objective Bayesianism on predicate languages. Entropy 2015, 17, 2459–2543.
8. Landes, J.; Williamson, J. Objective Bayesianism and the maximum entropy principle. Entropy 2013, 15, 3528–3591.
© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).