Open Access Article

Universal Target Learning: An Efficient and Effective Technique for Semi-Naive Bayesian Learning

S. Gao 1,2, H. Lou 3, L. Wang 2,4, Y. Liu 2,4 and T. Fan 5,*
1 College of Software, Jilin University, Changchun 130012, China
2 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
3 Department of Software and Big Data, Changzhou College of Information Technology, Changzhou 213164, China
4 College of Computer Science and Technology, Jilin University, Changchun 130012, China
5 College of Instrumentation and Electrical Engineering, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Entropy 2019, 21(8), 729; https://doi.org/10.3390/e21080729
Received: 15 June 2019 / Revised: 15 July 2019 / Accepted: 22 July 2019 / Published: 25 July 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

To mitigate the negative effect of classification bias caused by overfitting, semi-naive Bayesian techniques seek to mine the implicit dependency relationships in unlabeled testing instances. By redefining some criteria from information theory, Target Learning (TL) builds, for each unlabeled testing instance P, a Bayesian network classifier BNC_P that is independent of and complementary to the classifier BNC_T learned from the training data T. In this paper, we extend TL to Universal Target Learning (UTL), which identifies redundant correlations between attribute values and maximizes the number of bits encoded in the Bayesian network in terms of log-likelihood. We take the k-dependence Bayesian classifier as an example to investigate the effect of UTL on BNC_P and BNC_T. Extensive experimental results on 40 UCI datasets show that UTL helps BNC improve its generalization performance.
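For context on the model family named in the abstract: the k-dependence Bayesian classifier (kDB) generalizes naive Bayes by allowing each attribute up to k attribute parents in addition to the class node. The sketch below shows only the k = 0 case (plain discrete naive Bayes with Laplace smoothing) as an illustrative baseline; it is not the authors' UTL procedure, and the class name and toy data are hypothetical.

```python
from collections import Counter, defaultdict
import math

class DiscreteNaiveBayes:
    """Naive Bayes over discrete attributes with Laplace smoothing.

    This is the k = 0 member of the k-dependence family: every
    attribute's only parent is the class node. kDB (k >= 1) would
    additionally allow each attribute up to k attribute parents.
    """

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.n = len(y)
        self.class_counts = Counter(y)
        # counts[(attr_index, value, class)] -> frequency in training data
        self.counts = defaultdict(int)
        self.values = [set() for _ in range(len(X[0]))]
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                self.counts[(j, v, yi)] += 1
                self.values[j].add(v)
        return self

    def log_posterior(self, x, c):
        # log P(c) + sum_j log P(x_j | c), each term Laplace-smoothed
        lp = math.log((self.class_counts[c] + 1) / (self.n + len(self.classes)))
        for j, v in enumerate(x):
            num = self.counts[(j, v, c)] + 1
            den = self.class_counts[c] + len(self.values[j])
            lp += math.log(num / den)
        return lp

    def predict(self, x):
        return max(self.classes, key=lambda c: self.log_posterior(x, c))

# Tiny hypothetical training set, purely for illustration
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
y = ["no", "no", "yes", "yes"]
nb = DiscreteNaiveBayes().fit(X, y)
print(nb.predict(("rain", "mild")))  # -> yes
```

A semi-naive method such as kDB would replace the conditional-independence factorization in `log_posterior` with terms like log P(x_j | c, parents(x_j)), which is where criteria from information theory (e.g., conditional mutual information) come into play for choosing parents.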
Keywords: information theory; universal target learning; Bayesian network classifier
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Gao, S.; Lou, H.; Wang, L.; Liu, Y.; Fan, T. Universal Target Learning: An Efficient and Effective Technique for Semi-Naive Bayesian Learning. Entropy 2019, 21, 729.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Entropy EISSN 1099-4300, Published by MDPI AG, Basel, Switzerland