Open Access Article
Entropy 2014, 16(8), 4353-4374; doi:10.3390/e16084353

Learning Functions and Approximate Bayesian Computation Design: ABCD

1 Department of Applied Statistics, Johannes Kepler University, 4040 Linz, Austria
2 Department of Statistics, London School of Economics, Houghton Street, London WC2A 2AE, UK
* Author to whom correspondence should be addressed.
Received: 25 April 2014 / Revised: 18 July 2014 / Accepted: 28 July 2014 / Published: 4 August 2014
(This article belongs to the Special Issue Entropy in Experimental Design, Sensor Placement, Inquiry and Search)

Abstract

A general approach to Bayesian learning revisits some classical results on which functionals of a prior distribution are expected to increase, in a preposterior sense, after observation. The results are applied to information functionals of the Shannon type and to a class of functionals based on expected distance. A close connection is drawn between the latter and the metric embedding theory due to Schoenberg and others; for the Shannon type, there is a connection to majorization theory for distributions. A computational method is described for solving the generalized optimal experimental design problems that arise from this learning framework, based on a version of the well-known approximate Bayesian computation (ABC) method for carrying out Bayesian analysis via Monte Carlo simulation. Some simple examples are given.
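The ABC method mentioned in the abstract replaces exact likelihood evaluation with simulation: draw parameters from the prior, simulate data, and accept draws whose simulated data fall close to the observed data. The following is a minimal sketch of plain rejection ABC on a toy normal-mean problem, not the paper's ABCD design algorithm; all function names, the toy model, and the tolerance `eps` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed, prior_sampler, simulator, distance,
                  n_draws=10000, eps=0.1):
    """Rejection ABC: keep prior draws whose simulated summary
    lies within eps of the observed summary under `distance`."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()           # draw parameter from the prior
        sim = simulator(theta)            # simulate data given theta
        if distance(sim, observed) < eps: # accept if close to observed
            accepted.append(theta)
    return np.array(accepted)

# Toy example (assumed, not from the paper): infer a normal mean
# from the sample average of 20 observations with unit variance.
observed = 1.3
post = abc_rejection(
    observed,
    prior_sampler=lambda: rng.normal(0.0, 2.0),
    simulator=lambda th: rng.normal(th, 1.0, size=20).mean(),
    distance=lambda a, b: abs(a - b),
)
print(f"accepted draws: {len(post)}, posterior mean ~ {post.mean():.2f}")
```

The accepted draws form an approximate posterior sample; shrinking `eps` sharpens the approximation at the cost of a lower acceptance rate, which is the basic trade-off any ABC-based design computation has to manage.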
Keywords: learning; Shannon information; majorization; optimum experimental design; approximate Bayesian computation
This is an open access article distributed under the Creative Commons Attribution License (CC BY 3.0).

Share & Cite This Article

MDPI and ACS Style

Hainy, M.; Müller, W.G.; Wynn, H.P. Learning Functions and Approximate Bayesian Computation Design: ABCD. Entropy 2014, 16, 4353-4374.


Entropy EISSN 1099-4300, published by MDPI AG, Basel, Switzerland