Open Access Article

Discriminatory Target Learning: Mining Significant Dependence Relationships from Labeled and Unlabeled Data

1 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
2 Faculty of Science, Engineering & Built Environment, Deakin University Geelong, Burwood, VIC 3125, Australia
3 Changzhou College of Information Technology, Changzhou 213164, China
4 College of Computer Science and Technology, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Entropy 2019, 21(5), 537; https://doi.org/10.3390/e21050537
Received: 27 April 2019 / Revised: 20 May 2019 / Accepted: 24 May 2019 / Published: 26 May 2019
(This article belongs to the Special Issue Information Theoretic Measures and Their Applications)
Machine learning techniques have shown superior predictive power, among which Bayesian network classifiers (BNCs) remain of great interest due to their capacity to represent complex dependence relationships. Most traditional BNCs build only one model to fit the training instances, analyzing independence between attributes using conditional mutual information. However, the conditional dependence relationships may differ across class labels and attribute values rather than remaining invariant, which may result in classification bias. To address this issue, we propose a novel framework, called discriminatory target learning, which can be regarded as a tradeoff between the probabilistic model learned from an unlabeled instance at the uncertain end and the model learned from labeled training data at the certain end. The final model can discriminately represent the dependence relationships hidden in an unlabeled instance with respect to its different possible class labels. Taking the k-dependence Bayesian classifier as an example, experimental comparison on 42 publicly available datasets indicated that the final model achieves classification performance competitive with state-of-the-art learners such as random forest and averaged one-dependence estimators.
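The abstract refers to conditional mutual information as the criterion traditional BNCs (for example, the k-dependence Bayesian classifier) use to rank dependence relationships between attributes given the class. The sketch below shows one plain way to estimate that quantity from discrete labeled data; it is illustrative only, is not taken from the paper, and the function name is an assumption.

```python
import numpy as np
from collections import Counter

def conditional_mutual_information(xi, xj, c):
    """Estimate I(Xi; Xj | C) from three equal-length sequences of
    discrete values: two attributes xi, xj and the class labels c."""
    n = len(c)
    n_ijc = Counter(zip(xi, xj, c))   # joint counts over (Xi, Xj, C)
    n_ic = Counter(zip(xi, c))        # counts over (Xi, C)
    n_jc = Counter(zip(xj, c))        # counts over (Xj, C)
    n_c = Counter(c)                  # class counts

    cmi = 0.0
    for (a, b, y), count in n_ijc.items():
        # P(a,b|y) / (P(a|y) P(b|y)) reduces to count * n_y / (n_ay * n_by)
        cmi += (count / n) * np.log2(count * n_c[y] / (n_ic[(a, y)] * n_jc[(b, y)]))
    return cmi

# Toy usage: dependence between two attributes given the class
x1 = [0, 0, 1, 1, 0, 1]
x2 = [0, 1, 1, 1, 0, 0]
y  = ['a', 'a', 'a', 'b', 'b', 'b']
print(conditional_mutual_information(x1, x2, y))
```

A single model built from such global, label-averaged scores is exactly what the proposed discriminatory target learning framework refines for each unlabeled instance and candidate class label.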
Keywords: Bayesian network; discriminatory target learning; unlabeled instance
Duan, Z.-Y.; Wang, L.-M.; Mammadov, M.; Lou, H.; Sun, M.-H. Discriminatory Target Learning: Mining Significant Dependence Relationships from Labeled and Unlabeled Data. Entropy 2019, 21, 537.

