Open Access Article
Entropy 2015, 17(8), 5752-5770; doi:10.3390/e17085752

Consistency of Learning Bayesian Network Structures with Continuous Variables: An Information Theoretic Approach

Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka-shi 560-0043, Japan
Academic Editor: Carlo Cafaro
Received: 30 April 2015 / Revised: 30 April 2015 / Accepted: 5 August 2015 / Published: 10 August 2015
(This article belongs to the Special Issue Dynamical Equations and Causal Structures from Observations)

Abstract

We consider the problem of learning a Bayesian network structure given n examples and a prior probability, based on maximizing the posterior probability. We propose an algorithm that runs in O(n log n) time and that addresses continuous and discrete variables without assuming any class of distribution. We prove that the decision is strongly consistent, i.e., correct with probability one as n → ∞. To date, consistency has only been obtained for discrete variables for this class of problem, and many authors have attempted to prove consistency when continuous variables are present. Furthermore, we prove that the “log n” term that appears in the penalty term of the description length can be replaced by 2(1+ε) log log n while retaining strong consistency, where ε > 0 is arbitrary, which implies that the Hannan–Quinn proposition holds.
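The abstract contrasts two penalty terms in the description length: the classic (k/2) log n penalty and a lighter Hannan–Quinn-style (1+ε) k log log n penalty. The sketch below illustrates this two-part (fit + penalty) scoring for a single node given a candidate parent set over discrete data. It is a minimal illustration of MDL-style structure scoring, not the paper's actual algorithm; the function names and the data layout (a list of dicts) are our own assumptions.

```python
import math
from collections import Counter

def empirical_entropy(samples):
    """Empirical entropy (in nats) of a discrete sample sequence."""
    n = len(samples)
    counts = Counter(samples)
    return -sum(c / n * math.log(c / n) for c in counts.values())

def description_length(child, parents, data, penalty="log_n", eps=0.1):
    """Two-part code length (fit + penalty) for one node given a parent set.

    NOTE: illustrative sketch only, not the paper's algorithm.
    data: list of dicts mapping variable name -> discrete value.
    penalty="log_n"    -> classic (k/2) log n term;
    penalty="loglog_n" -> Hannan-Quinn-style (1+eps) k log log n term,
                          which the paper shows still gives strong consistency.
    """
    n = len(data)
    # Group child values by parent configuration; the fit term is the sum of
    # conditional empirical entropies weighted by group size.
    groups = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        groups.setdefault(key, []).append(row[child])
    fit = sum(len(g) * empirical_entropy(g) for g in groups.values())
    # k: number of free parameters (child arity - 1 per parent configuration).
    child_arity = len({row[child] for row in data})
    k = (child_arity - 1) * max(len(groups), 1)
    if penalty == "log_n":
        pen = 0.5 * k * math.log(n)
    else:
        pen = (1 + eps) * k * math.log(math.log(n))
    return fit + pen
```

For example, if y simply copies x, scoring y with parent set {x} yields a fit term of zero, so the description length with the true parent beats the empty parent set despite the larger penalty; a structure learner would pick parent sets (per node) that minimize this total.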
Keywords: posterior probability; consistency; minimum description length; universality; discrete and continuous variables; Bayesian network
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Suzuki, J. Consistency of Learning Bayesian Network Structures with Continuous Variables: An Information Theoretic Approach. Entropy 2015, 17, 5752-5770.


Entropy EISSN 1099-4300, Published by MDPI AG, Basel, Switzerland