Open Access Article

Discriminative Structure Learning of Bayesian Network Classifiers from Training Dataset and Testing Instance

1
Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
2
College of Computer Science and Technology, Jilin University, Changchun 130012, China
3
Faculty of Science, Engineering & Built Environment, Deakin University, Burwood, VIC 3125, Australia
*
Author to whom correspondence should be addressed.
Entropy 2019, 21(5), 489; https://doi.org/10.3390/e21050489
Received: 12 February 2019 / Revised: 29 April 2019 / Accepted: 6 May 2019 / Published: 13 May 2019
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
Abstract

Over recent decades, the rapid growth of data has made ever more urgent the quest for highly scalable Bayesian network classifiers with better classification performance and expressivity (that is, the capacity to describe dependence relationships between attributes in different situations). To reduce the search space of possible attribute orders, the k-dependence Bayesian classifier (KDB) simply applies mutual information to sort attributes. This sorting strategy is very efficient, but it neglects the conditional dependencies between attributes and is sub-optimal. In this paper, we propose a novel sorting strategy and extend KDB from a single restricted network to unrestricted ensemble networks, i.e., the unrestricted k-dependence Bayesian classifier (UKDB), in terms of Markov blanket analysis and target learning. Target learning is a framework that takes each unlabeled testing instance P as a target and builds a specific Bayesian network classifier BNC_P to complement the classifier BNC_T learned from the training data T. UKDB introduces UKDB_P and UKDB_T, respectively, to flexibly describe the change in dependence relationships across different testing instances and the robust dependence relationships implicit in the training data. Both use UKDB as the base classifier and apply the same learning strategy while modeling different parts of the data space; thus, they are complementary in nature. Extensive experimental results on the Wisconsin breast cancer database (as a case study) and 10 other datasets, involving classifiers of different structural complexity, such as Naive Bayes (0-dependence), Tree-augmented Naive Bayes (1-dependence), and KDB (arbitrary k-dependence), demonstrate the effectiveness and robustness of the proposed approach.
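The abstract notes that KDB ranks attributes by their mutual information with the class before building its structure. A minimal sketch of that ranking step is shown below; the toy dataset, attribute names, and `mutual_information` helper are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch of KDB's attribute-sorting step: rank discrete
# attributes by mutual information I(X; C) with the class variable C.
# The dataset and names below are hypothetical, for demonstration only.
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) in nats for two equal-length sequences of discrete values."""
    n = len(xs)
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), rewritten with raw counts
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

# Toy data: a1 is identical to the class c, a2 is only weakly related.
a1 = [0, 0, 1, 1, 0, 1]
a2 = [0, 1, 0, 1, 1, 0]
c  = [0, 0, 1, 1, 0, 1]

attributes = {"a1": a1, "a2": a2}
order = sorted(attributes,
               key=lambda name: mutual_information(attributes[name], c),
               reverse=True)
print(order)  # a1 carries more information about c, so it is ranked first
```

As the abstract points out, this ranking looks only at each attribute's individual relationship with the class, ignoring conditional dependencies between attributes, which is the limitation UKDB's sorting strategy is designed to address.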
Keywords: Bayesian network classifiers; Markov blanket; target learning
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Wang, L.; Liu, Y.; Mammadov, M.; Sun, M.; Qi, S. Discriminative Structure Learning of Bayesian Network Classifiers from Training Dataset and Testing Instance. Entropy 2019, 21, 489.
Entropy, EISSN 1099-4300, published by MDPI AG, Basel, Switzerland.