Special Issue "Maximum Entropy Applied to Inductive Logic and Reasoning"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (1 December 2014)

Special Issue Editors

Guest Editor
Dr. Juergen Landes

Department of Philosophy, School of European Culture and Languages, University of Kent, Canterbury CT2 7NF, UK
Interests: mathematical logics and applications; imperfect information in all its varieties and forms; rationality; decision science; quantum computing
Guest Editor
Prof. Dr. Jon Williamson

Department of Philosophy, School of European Culture and Languages, University of Kent, Canterbury CT2 7NF, UK
Interests: causality; probability; logics and reasoning; their application to science, mathematics and AI

Special Issue Information

Dear Colleagues,

Since E.T. Jaynes showed how maximizing Shannon entropy can be applied to rational belief formation, Maximum Entropy (MaxEnt) methods have played an important role in inductive reasoning. This special issue provides a forum for proponents, opponents, and practitioners to discuss and advance the current state of the art. We explicitly welcome contributions arguing for or against MaxEnt methods.
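
For readers new to the area, the principle at issue can be stated compactly (our formulation, in standard notation, not part of the call itself): given evidence E imposing constraints on a probability function P over finitely many states w_1, ..., w_n, adopt

    P^\dagger = \arg\max_{P \models E} H(P), \qquad H(P) = -\sum_{i=1}^{n} P(w_i) \log P(w_i),

i.e., the most equivocal belief function compatible with the evidence.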

Specific areas of interest include (but are not limited to):

  • Formal applications of MaxEnt to inductive logic or inductive reasoning.
  • Philosophical accounts of MaxEnt methods for inductive logic or inductive reasoning (including contributions arguing for or against MaxEnt methods).
  • MaxEnt methods for rational agents (in a single agent, multi-agent or autonomous agent setting).
  • Connections between MaxEnt and scoring rules (one standard such connection is sketched after this list).
  • Surveys of the state of the art in one of the above areas.
  • Historical perspectives on MaxEnt and inductive logic with a focus on where we stand today.
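
For orientation, one standard link between entropy maximisation and scoring rules (our summary, not part of the call): under the logarithmic score, an agent's expected loss by her own lights is exactly the Shannon entropy of her belief function,

    S(P, w) = -\log P(w), \qquad \mathbb{E}_P[S(P, w)] = -\sum_w P(w) \log P(w) = H(P),

so maximising entropy subject to evidence amounts to adopting the belief function that minimises worst-case expected logarithmic loss among those compatible with the evidence.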

Dr. Juergen Landes
Prof. Dr. Jon Williamson
Guest Editors

Submission

Manuscripts should be submitted online at www.mdpi.com after registering and logging in. Registered authors can then complete the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs).

Keywords

  • maximum entropy principle
  • maxent
  • inductive logic
  • inductive reasoning
  • inductive inference
  • objective Bayesianism
  • scoring rules

Published Papers (8 papers)

Editorial

Open Access Editorial: Maximum Entropy Applied to Inductive Logic and Reasoning
Entropy 2015, 17(5), 3458-3460; doi:10.3390/e17053458
Received: 8 May 2015 / Accepted: 13 May 2015 / Published: 18 May 2015
Abstract: This editorial explains the scope of the special issue and provides a thematic introduction to the contributed papers.

Research

Open Access Article: Justifying Objective Bayesianism on Predicate Languages
Entropy 2015, 17(4), 2459-2543; doi:10.3390/e17042459
Received: 11 February 2015 / Revised: 27 March 2015 / Accepted: 9 April 2015 / Published: 22 April 2015
Cited by 2
Abstract: Objective Bayesianism says that the strengths of one's beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting formalism to inductive logic. We show that the maximum entropy principle can be motivated largely in terms of minimising worst-case expected loss.
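
The kind of optimisation behind these norms can be illustrated numerically. The sketch below is our own minimal example, not the authors' code: it computes the maximum entropy distribution over four states under an assumed expectation constraint, using scipy.

    # Minimal sketch (our illustration): find the probability distribution of
    # maximum Shannon entropy over 4 states, subject to an assumed linear
    # evidence constraint E[X] = 2.0.
    import numpy as np
    from scipy.optimize import minimize

    x = np.array([0.0, 1.0, 2.0, 3.0])   # value of X at each state (assumed)
    target_mean = 2.0                     # assumed evidence constraint

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)        # avoid log(0)
        return np.sum(p * np.log(p))      # minimising this maximises H(p)

    constraints = [
        {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},      # normalisation
        {"type": "eq", "fun": lambda p: p @ x - target_mean},  # evidence
    ]
    p0 = np.full(4, 0.25)                 # start from the uniform distribution
    res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * 4,
                   constraints=constraints)
    print(res.x)   # the most equivocal distribution meeting the constraints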
Open Access Article: Maximum Entropy and Probability Kinematics Constrained by Conditionals
Entropy 2015, 17(4), 1690-1700; doi:10.3390/e17041690
Received: 15 November 2014 / Revised: 23 March 2015 / Accepted: 25 March 2015 / Published: 27 March 2015
Cited by 2
Abstract: Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey's updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner, PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.
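
Jeffrey's updating principle, the JUP referred to above, has the standard form (our notation): if experience changes the probabilities of a partition {E_1, ..., E_n} to q_1, ..., q_n, then the new belief in any proposition A is

    P_{\mathrm{new}}(A) = \sum_{i=1}^{n} P(A \mid E_i)\, q_i.

A well-known special case connects the two principles: minimising relative entropy subject only to the constraints P_{\mathrm{new}}(E_i) = q_i yields exactly this update.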
Open Access Article: Maximum Relative Entropy Updating and the Value of Learning
Entropy 2015, 17(3), 1146-1164; doi:10.3390/e17031146
Received: 7 January 2015 / Revised: 16 February 2015 / Accepted: 4 March 2015 / Published: 11 March 2015
Cited by 1
Abstract: We examine the possibility of justifying the principle of maximum relative entropy (MRE) considered as an updating rule by looking at the value of learning theorem established in classical decision theory. This theorem captures an intuitive requirement for learning: learning should lead to new degrees of belief that are expected to be helpful and never harmful in making decisions. We call this requirement the value of learning. We consider the extent to which learning rules by MRE could satisfy this requirement and so could be a rational means for pursuing practical goals. First, by representing MRE updating as a conditioning model, we show that MRE satisfies the value of learning in cases where learning prompts a complete redistribution of one's degrees of belief over a partition of propositions. Second, we show that the value of learning may not be generally satisfied by MRE updates in cases of updating on a change in one's conditional degrees of belief. We explain that this is so because, contrary to what the value of learning requires, one's prior degrees of belief might not be equal to the expectation of one's posterior degrees of belief. This, in turn, points towards a more general moral: that the justification of MRE updating in terms of the value of learning may be sensitive to the context of a given learning experience. Moreover, this lends support to the idea that MRE is neither a universal nor a mechanical updating rule, but rather a rule whose application and justification may be context-sensitive.
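
In the notation we use here (the paper's own definitions may differ in detail), the MRE update of a prior p under constraints C is

    q^* = \arg\min_{q \in C} \sum_{w} q(w) \log \frac{q(w)}{p(w)},

and the expectation condition discussed in the abstract amounts to the prior being the expectation of the posteriors: P(A) = \sum_k \mu_k P_k(A), where P_k is the belief function adopted after learning outcome k and \mu_k is its prior probability.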
Open Access Article: Relational Probabilistic Conditionals and Their Instantiations under Maximum Entropy Semantics for First-Order Knowledge Bases
Entropy 2015, 17(2), 852-865; doi:10.3390/e17020852
Received: 26 December 2014 / Revised: 29 January 2015 / Accepted: 9 February 2015 / Published: 13 February 2015
Cited by 4
Abstract: For conditional probabilistic knowledge bases with conditionals based on propositional logic, the principle of maximum entropy (ME) is well-established, determining a unique model inductively completing the explicitly given knowledge. On the other hand, there is no general agreement on how to extend the ME principle to relational conditionals containing free variables. In this paper, we focus on two approaches to ME semantics that have been developed for first-order knowledge bases: aggregating semantics and a grounding semantics. Since they use different variants of conditionals, we define the logic PCI, which covers both approaches as special cases and provides a framework where the effects of both approaches can be studied in detail. While the ME models under PCI-grounding and PCI-aggregating semantics are different in general, we point out that parametric uniformity of a knowledge base ensures that both semantics coincide. Using some concrete knowledge bases, we illustrate the differences and common features of both approaches, looking in particular at the ground instances of the given conditionals.
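
Schematically, in notation common in this literature (the PCI variants are defined precisely in the paper): a probabilistic conditional (B|A)[x] is satisfied by a probability function P iff P(A \wedge B) = x \cdot P(A), and the ME model of a knowledge base R is

    P^{ME} = \arg\max_{P \models R} H(P),

the unique entropy-maximal probability function satisfying all conditionals in R.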
Open Access Article: A Foundational Approach to Generalising the Maximum Entropy Inference Process to the Multi-Agent Context
Entropy 2015, 17(2), 594-645; doi:10.3390/e17020594
Received: 1 December 2014 / Revised: 10 December 2014 / Accepted: 13 January 2015 / Published: 2 February 2015
Cited by 5
Abstract: The present paper seeks to establish a logical foundation for studying axiomatically multi-agent probabilistic reasoning over a discrete space of outcomes. We study the notion of a social inference process which generalises the concept of an inference process for a single agent which was used by Paris and Vencovská to characterise axiomatically the method of maximum entropy inference. Axioms for a social inference process are introduced and discussed, and a particular social inference process called the Social Entropy Process, or SEP, is defined which satisfies these axioms. SEP is justified heuristically by an information theoretic argument, and incorporates both the maximum entropy inference process for a single agent and the multi-agent normalised geometric mean pooling operator.
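
The normalised geometric mean pooling operator mentioned in the abstract has the following standard equal-weight form (our statement of a textbook definition): for agents with probability functions P_1, ..., P_k over states w,

    \mathrm{Pool}(P_1, \dots, P_k)(w) = \frac{\prod_{j=1}^{k} P_j(w)^{1/k}}{\sum_{w'} \prod_{j=1}^{k} P_j(w')^{1/k}}.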
Open Access Article: The Information Geometry of Bregman Divergences and Some Applications in Multi-Expert Reasoning
Entropy 2014, 16(12), 6338-6381; doi:10.3390/e16126338
Received: 19 October 2014 / Revised: 24 November 2014 / Accepted: 25 November 2014 / Published: 1 December 2014
Cited by 5
Abstract: The aim of this paper is to develop a comprehensive study of the geometry involved in combining Bregman divergences with pooling operators over closed convex sets in a discrete probabilistic space. A particular connection we develop leads to an iterative procedure, which is similar to the alternating projection procedure by Csiszár and Tusnády. Although such iterative procedures are well studied over much more general spaces than the one we consider, only a few authors have investigated combining projections with pooling operators. We aspire to achieve here a comprehensive study of such a combination. Moreover, pooling operators combining the opinions of several rational experts allow us to discuss possible applications in multi-expert reasoning.
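
For reference, the divergences in question have the standard form (our statement; the paper works over closed convex sets in a discrete probabilistic space): for a strictly convex, differentiable generator F,

    D_F(p, q) = F(p) - F(q) - \langle \nabla F(q), \, p - q \rangle,

and the choice F(p) = \sum_i p_i \log p_i recovers the Kullback-Leibler divergence, which ties this geometry back to maximum entropy methods.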

Open Access Article: What You See Is What You Get
Entropy 2014, 16(11), 6186-6194; doi:10.3390/e16116186
Received: 23 June 2014 / Revised: 30 October 2014 / Accepted: 4 November 2014 / Published: 21 November 2014
Cited by 6
Abstract: This paper corrects three widely held misunderstandings about MaxEnt when used in common sense reasoning: that it is language dependent; that it produces objective facts; and that it subsumes, and so is at least as untenable as, the paradox-ridden Principle of Insufficient Reason.

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
entropy@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18