Open Access Article

Approximate Learning of High Dimensional Bayesian Network Structures via Pruning of Candidate Parent Sets

1. Bayesian Artificial Intelligence Research Lab, School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
2. The Alan Turing Institute, British Library, 96 Euston Road, London NW1 2DB, UK
* Authors to whom correspondence should be addressed.
Entropy 2020, 22(10), 1142; https://doi.org/10.3390/e22101142
Received: 10 September 2020 / Revised: 2 October 2020 / Accepted: 7 October 2020 / Published: 10 October 2020
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)
Score-based algorithms that learn Bayesian Network (BN) structures provide solutions ranging over different levels of approximate learning up to exact learning. Approximate solutions exist because exact learning is generally not feasible for networks of moderate or higher complexity. In general, approximate solutions trade accuracy for speed, where the aim is to minimise the loss in accuracy and maximise the gain in speed. While some approximate algorithms are optimised to handle thousands of variables, they may still be unable to learn such high-dimensional structures. Some of the most efficient score-based algorithms cast the structure learning problem as a combinatorial optimisation over candidate parent sets. This paper explores a strategy for pruning the size of candidate parent sets, which could be incorporated into existing score-based algorithms as an additional pruning phase aimed at high-dimensionality problems. The results illustrate how different levels of pruning affect learning speed relative to the loss in accuracy in terms of model fitting, and show that aggressive pruning may be required to produce approximate solutions for high-complexity problems.
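The core idea the abstract describes can be sketched in a few lines: each node has a set of candidate parent sets, each with a local score, and pruning discards all but the best-scoring candidates before the combinatorial optimisation runs. The sketch below is illustrative only; the function names (`candidate_parent_sets`, `prune_by_score`), the toy scores, and the top-k pruning rule are assumptions for demonstration, not the paper's actual pruning strategy.

```python
from itertools import combinations

def candidate_parent_sets(node, variables, max_size):
    """Enumerate all candidate parent sets for `node` up to `max_size` parents.

    The number of candidates grows combinatorially with `max_size`, which is
    why pruning becomes necessary in high-dimensional problems.
    """
    others = [v for v in variables if v != node]
    sets = []
    for k in range(max_size + 1):
        sets.extend(combinations(others, k))
    return sets

def prune_by_score(scored_sets, keep_top_k):
    """Keep only the `keep_top_k` highest-scoring candidate parent sets.

    `scored_sets` maps each candidate parent set (a tuple of variable names)
    to its local score, where higher means a better fit.
    """
    ranked = sorted(scored_sets.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:keep_top_k])

# Toy local scores for one node (values are illustrative, not from the paper).
scores = {(): -120.0, ('B',): -100.5, ('C',): -110.2, ('B', 'C'): -98.7}
pruned = prune_by_score(scores, keep_top_k=2)
print(sorted(pruned))  # the two best candidate parent sets survive
```

More aggressive pruning (smaller `keep_top_k`) shrinks the search space and speeds up learning, at the risk of discarding the parent set a globally optimal structure would have used; this is the speed/accuracy trade-off the paper evaluates.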
Keywords: structure learning; probabilistic graphical models; pruning
MDPI and ACS Style

Guo, Z.; Constantinou, A.C. Approximate Learning of High Dimensional Bayesian Network Structures via Pruning of Candidate Parent Sets. Entropy 2020, 22, 1142.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
