Article

Structure Extension of Tree-Augmented Naive Bayes

Yuguang Long, Limin Wang and Minghui Sun
1 College of Software, Jilin University, Changchun 130012, China
2 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
3 College of Computer Science and Technology, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Entropy 2019, 21(8), 721; https://doi.org/10.3390/e21080721
Submission received: 20 May 2019 / Revised: 16 July 2019 / Accepted: 23 July 2019 / Published: 25 July 2019
(This article belongs to the Special Issue Information Theoretic Measures and Their Applications)

Abstract:
Due to the simplicity and competitive classification performance of naive Bayes (NB), researchers have proposed many approaches to improve NB by weakening its attribute independence assumption. Through a theoretical analysis of Kullback–Leibler divergence, we show that the difference between NB and its variations lies in the different orders of conditional mutual information represented by the augmenting edges in the tree-shaped network structure. In this paper, we propose to relax the independence assumption by further generalizing tree-augmented naive Bayes (TAN) from a 1-dependence Bayesian network classifier (BNC) to arbitrary k-dependence. Sub-models of TAN that are built to respectively represent specific conditional dependence relationships may “best match” the conditional probability distribution over the training data. Extensive experimental results reveal that the proposed algorithm achieves a bias-variance trade-off and substantially better generalization performance than state-of-the-art classifiers such as logistic regression.

1. Introduction

Supervised classification is an important task in data mining and pattern recognition [1]. It requires building a classifier that can map an unlabeled instance to a class label. Traditional approaches to classification problems include decision trees, logistic regression, etc. More recently, Bayesian network classifiers (BNCs) have attracted increasing attention from researchers owing to their explicit, graphical, and interpretable representation and their competitive performance against state-of-the-art classifiers.
Among numerous BNCs, naive Bayes (NB) is an extremely simple and remarkably effective approach to classification [2]. It infers the conditional probability by assuming that the attributes are independent given the class label [3]. It follows logically that relaxing NB’s independence assumption is a feasible and effective approach to building more powerful BNCs [4,5]. Researchers have proposed to extend NB from a 0-dependence BNC to 1-dependence BNCs [6,7] (e.g., tree-augmented naive Bayes, or TAN), and then to arbitrary k-dependence BNCs [8,9] (e.g., the k-dependence Bayesian classifier, or KDB). These BNCs learn from training data and allow additional edges between attributes that capture the dependence relationships among them. These restricted BNCs also retain another assumption behind NB, i.e., every attribute is dependent on the class variable and thus the class is the root of the network.
Given a random instance $\mathbf{x} = (x_1, \ldots, x_n)$, where $x_i \in \Omega_{X_i}$, classification is done by applying the Bayes rule to predict the class label $y^*$ that corresponds to the highest posterior probability of the class variable, i.e., $y^* = \arg\max P(y \mid \mathbf{x})$, where $y \in \Omega_Y$. By using the Bayes theorem, for a restricted BNC we have
$$y^* = \arg\max_{y} P(y \mid \mathbf{x}) = \arg\max_{y} \frac{P(y, \mathbf{x})}{P(\mathbf{x})} = \arg\max_{y} P(y, \mathbf{x}) = \arg\max_{y} P(\mathbf{x} \mid y) P(y) \qquad (1)$$
The objective of restricted BNC learning is to induce a network (or a set of networks) that may “best match” the conditional probability distribution $P(\mathbf{x} \mid y)$ given different class labels over the training data and explicitly represent statements about conditional independence. Information theory, proposed by Shannon, has established the mathematical basis for the rapid development of Bayesian networks. Mutual information (MI) $I(X_i;Y)$ is the most commonly used criterion to rank attributes for attribute sorting or filtering [10,11], and conditional mutual information (CMI) $I(X_i;X_j \mid Y)$ is used to measure the conditional dependence between an attribute pair $X_i$ and $X_j$ for identifying possible dependencies.
Among numerous proposals to improve the accuracy of NB by weakening its attribute independence assumption, TAN demonstrates remarkable classification performance while maintaining the computational simplicity and robustness that characterize NB. However, it can only model 1-dependence relationships among attributes. The optimization of BNCs is implemented in practice by using heuristic search techniques to find the best candidate over the space of possible networks. The search process relies on a scoring function that evaluates each candidate network with respect to the training data and then searches for the best network according to this function. The likelihood function, e.g., the Kullback–Leibler divergence, plays a fundamental role in Bayesian statistics [12,13]. The likelihood principle states that all information relevant for inference is contained in the likelihood function for the observed data given the assumed statistical model. We prove from the viewpoint of Kullback–Leibler divergence that the difference between NB and its variations lies in the different orders of CMIs represented by the augmenting edges in the tree-shaped network structure. These CMIs may vary greatly for different class labels. Thus, in this paper we propose to generalize TAN from a 1-dependence BNC to an arbitrary k-dependence one. Different sub-models of TAN are introduced to respectively represent the specific conditional dependence relationships associated with each class label y. The Bayes rule is applied to select the maximum of the joint probability distribution $P(y, \mathbf{x})$ for classification. Extensive experimental results reveal that the proposed algorithm, called Extensive TAN (ETAN), achieves competitive generalization performance and outperforms several state-of-the-art BNCs such as KDB while retaining low computational complexity.

2. Prior Work

A BNC is a graphical representation of the joint probability distribution $P(y, \mathbf{x})$. It comprises two components. The first is a directed acyclic graph $G = (\mathbf{U}, V)$, where $\mathbf{U} = \mathbf{X} \cup \{Y\}$; $\mathbf{X} = \{X_1, \ldots, X_n\}$ and $Y$ respectively represent the attributes and the class variable, and $V$ represents the set of arcs or direct dependencies. The second is a set of parameters, which are usually conditional probability distributions for each variable in $\mathbf{U}$. Given a training data set $\mathcal{D}$, the goal of learning a BNC is to find the Bayesian network $B$ that best represents $P(\mathbf{u})$ or $P(y, \mathbf{x})$ and predicts the class label for an unlabeled instance by selecting $\arg\max_{y} P(y, \mathbf{x})$. According to the chain rule of joint probability, $P(y, \mathbf{x})$ is calculated by
$$P(\mathbf{u}) = P(y, \mathbf{x}) = P(y) P(x_1 \mid y) P(x_2 \mid x_1, y) \cdots P(x_n \mid x_1, \ldots, x_{n-1}, y) \qquad (2)$$
For discrete probability distributions $P(\mathbf{u})$ and $Q(\mathbf{u})$, the Kullback–Leibler divergence (also called relative entropy) is a measure of the distance between these two probability distributions and is defined to be [14]
$$KL(P \| Q) = \sum_{\mathbf{u}} P(\mathbf{u}) \log \frac{P(\mathbf{u})}{Q(\mathbf{u})} = H_Q(\mathbf{U}) - H_P(\mathbf{U}) \qquad (3)$$
It is the expectation of the logarithmic difference between $P(\mathbf{u})$ and $Q(\mathbf{u})$, where the expectation is taken with respect to $P(\mathbf{u})$. In other words, it is also the difference between $H_Q(\mathbf{U})$ and $H_P(\mathbf{U})$. Suppose that $B$ is a Bayesian network over $\mathbf{U}$ and $P_B(\mathbf{u})$ is the joint probability encoded in $B$; the Kullback–Leibler divergence between the expected $P(\mathbf{u})$ in Equation (2) and $P_B(\mathbf{u})$ is
$$KL(P \| B) = \sum_{\mathbf{u}} P(\mathbf{u}) \log \frac{P(\mathbf{u})}{P_B(\mathbf{u})} = H_B(\mathbf{U}) - H_P(\mathbf{U}) \qquad (4)$$
where $H_B(\mathbf{U}) = -\sum_{\mathbf{u}} P(\mathbf{u}) \log P_B(\mathbf{u})$. The entropy $H_P(\mathbf{U})$ is the minimum average number of bits needed to encode the possible combinations of attribute values of $\mathbf{U}$. Thus, $KL(P \| B)$ can measure the difference between the information quantity carried by $\mathcal{D}$ and that encoded in $B$.
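To make the role of Equations (3) and (4) concrete, the following minimal Python sketch (with illustrative function names and the natural logarithm, both our own choices) checks numerically that the Kullback–Leibler divergence between two discrete distributions equals the cross-entropy term minus the entropy term:

```python
import numpy as np

def cross_entropy(p, q):
    """H_Q(U) = -sum_u P(u) log Q(u): expected code length of P when encoded with Q."""
    return -np.sum(p * np.log(q))

def entropy(p):
    """H_P(U) = -sum_u P(u) log P(u)."""
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    """KL(P||Q) = sum_u P(u) log(P(u)/Q(u)), as in Equation (3)."""
    return np.sum(p * np.log(p / q))

# Toy check on a four-state distribution: the two forms of Equation (3) coincide.
p = np.array([0.4, 0.3, 0.2, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])
assert np.isclose(kl_divergence(p, q), cross_entropy(p, q) - entropy(p))
```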
NB, the simplest BNC, involves no dependencies between attributes in its network structure, according to the conditional independence assumption [15]. Figure 1a shows an example of its network structure. Hence, for NB, $P_{NB}(\mathbf{u})$ is calculated by
$$P_{NB}(\mathbf{u}) = P(y) P(x_1 \mid y) P(x_2 \mid y) \cdots P(x_n \mid y) \qquad (5)$$
Thus, $H_{NB}(\mathbf{U})$ can be calculated by
$$\begin{aligned} H_{NB}(\mathbf{U}) &= -\sum_{y, x_1, \ldots, x_n} P(y, x_1, \ldots, x_n) \log \big[ P(y) P(x_1 \mid y) P(x_2 \mid y) \cdots P(x_n \mid y) \big] \\ &= -\sum_{y, x_1, \ldots, x_n} P(y, x_1, \ldots, x_n) \log P(y) - \sum_{y, x_1, \ldots, x_n} P(y, x_1, \ldots, x_n) \sum_{i=1}^{n} \log P(x_i \mid y) \\ &= -\sum_{y} P(y) \log P(y) - \sum_{i=1}^{n} \sum_{y, x_i} P(y, x_i) \log P(x_i \mid y) \\ &= H(Y) + \sum_{i=1}^{n} H(X_i \mid Y) \end{aligned} \qquad (6)$$
The remarkable classification performance of NB has stimulated efforts to improve it further [16]. However, in many learning tasks the dependency relationships between attributes violate the independence assumption. Many methods [17,18,19,20], such as TAN, attempt to improve the classification performance of NB by relaxing this assumption.
TAN constructs its tree structure by finding a maximum weighted spanning tree [21] (see Figure 1b). The structure is determined by extending the Chow–Liu tree [22], using CMI to measure the weight of the arcs. The CMI between $X_i$ and $X_j$ given the class $Y$, $I(X_i;X_j \mid Y)$, is defined as follows [23]:
$$I(X_i;X_j \mid Y) = \sum_{x_i} \sum_{x_j} \sum_{y} P(x_i, x_j, y) \log \frac{P(x_i, x_j \mid y)}{P(x_i \mid y) P(x_j \mid y)} \qquad (7)$$
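As a concrete illustration of Equation (7), the sketch below estimates the CMI of two discrete attribute columns from empirical relative frequencies. The function name, the count-based estimator and the base-2 logarithm are our own illustrative choices, not necessarily the authors' implementation:

```python
from collections import Counter
from math import log2

def conditional_mutual_information(xi, xj, y):
    """Empirical estimate of Equation (7): I(Xi; Xj | Y) from relative frequencies.
    xi, xj and y are equal-length sequences of discrete values."""
    n = len(y)
    n_ijy = Counter(zip(xi, xj, y))   # joint counts of (x_i, x_j, y)
    n_iy = Counter(zip(xi, y))        # counts of (x_i, y)
    n_jy = Counter(zip(xj, y))        # counts of (x_j, y)
    n_y = Counter(y)                  # counts of y
    cmi = 0.0
    for (a, b, c), count in n_ijy.items():
        # P(xi, xj | y) / (P(xi | y) P(xj | y)), written with raw counts
        ratio = (count * n_y[c]) / (n_iy[(a, c)] * n_jy[(b, c)])
        cmi += (count / n) * log2(ratio)
    return cmi
```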
For each attribute $X_i \in \mathbf{X}$, its parent set is $\pi_i = \{X_j \in \mathbf{X} \mid X_j \rightarrow X_i \in V\}$. The learning procedure of TAN is shown in Algorithm 1.
Algorithm 1: The Tree-Augmented Naive Bayes (TAN).
To illustrate the learning process of TAN, we take the dataset Balance-Scale as an example. Balance-Scale comes from the University of California Irvine (UCI) machine learning repository and has 625 instances, 4 attributes, and 3 class labels. As a 1-dependence BNC, TAN requires that each attribute has at most one parent attribute. In the first step we need to find the most significant dependence between attributes. As shown in Figure 2a, $I(X_2;X_4 \mid Y)$ corresponds to the maximum of $I(X_i;X_j \mid Y)$ over all attribute pairs, so the arc between $X_2$ and $X_4$ is added to the topology of TAN. In the second step, we need to find the next most significant dependence relationship between attributes. As shown in Figure 2b, $I(X_1;X_4 \mid Y)$ corresponds to the maximum of $I(X_i;\Pi_i \mid Y)$ where $\Pi_i \in \{X_2, X_4\}$ and $X_i \notin \{X_2, X_4\}$, so the arc between $X_1$ and $X_4$ is added to the topology. The next iteration begins. As shown in Figure 2c, $I(X_2;X_3 \mid Y)$ corresponds to the maximum of $I(X_i;\Pi_i \mid Y)$ where $\Pi_i \in \{X_1, X_2, X_4\}$ and $X_i \notin \{X_1, X_2, X_4\}$, so the arc between $X_2$ and $X_3$ is added to the topology. Finally, there exists at least one dependence relationship between every attribute $X_i$ and the other attributes, and the learning procedure of TAN stops.
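The greedy spanning-tree growth described above can be sketched as follows, assuming a precomputed symmetric matrix of pairwise CMI values. This is an illustrative Prim-style reading of the procedure, not the authors' exact code; arcs are directed from the attribute already in the tree to the newly added attribute, and the direction of the first arc is left arbitrary:

```python
import numpy as np

def tan_tree(cmi):
    """Grow the maximum weighted spanning tree of TAN from a symmetric matrix
    cmi[i, j] = I(Xi; Xj | Y), following the greedy steps illustrated above.
    Returns a list of directed arcs (parent, child)."""
    n = cmi.shape[0]
    masked = np.where(~np.eye(n, dtype=bool), cmi, -np.inf)   # ignore the diagonal
    i, j = np.unravel_index(np.argmax(masked), masked.shape)  # strongest dependence
    in_tree, arcs = {i, j}, [(i, j)]
    while len(in_tree) < n:
        best, best_arc = -np.inf, None
        for parent in in_tree:                 # candidate parent already in the tree
            for child in range(n):             # candidate child not yet in the tree
                if child not in in_tree and cmi[parent, child] > best:
                    best, best_arc = cmi[parent, child], (parent, child)
        arcs.append(best_arc)
        in_tree.add(best_arc[1])
    return arcs
```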
According to the structure of TAN, $P_{TAN}(\mathbf{u})$ is calculated by
$$P_{TAN}(\mathbf{u}) = P(y) P(x_1 \mid y) P(x_2 \mid x_1, y) \cdots P(x_n \mid \pi_n, y) = P(y) P(x_1 \mid y) \prod_{i=2}^{n} P(x_i \mid \pi_i, y) \qquad (8)$$
where $\pi_i$ ($i > 1$) represents the parent attribute of $X_i$ ($i > 1$). Correspondingly, $H_{TAN}(\mathbf{U})$ can be calculated by
$$\begin{aligned} H_{TAN}(\mathbf{U}) &= -\sum_{y, x_1, \ldots, x_n} P(y, x_1, \ldots, x_n) \log \big[ P(y) P(x_1 \mid y) P(x_2 \mid x_1, y) \cdots P(x_n \mid \pi_n, y) \big] \\ &= -\sum_{y, x_1, \ldots, x_n} P(y, x_1, \ldots, x_n) \log P(y) - \sum_{y, x_1, \ldots, x_n} P(y, x_1, \ldots, x_n) \log P(x_1 \mid y) - \sum_{y, x_1, \ldots, x_n} P(y, x_1, \ldots, x_n) \sum_{i=2}^{n} \log P(x_i \mid \pi_i, y) \\ &= -\sum_{y} P(y) \log P(y) - \sum_{y, x_1} P(y, x_1) \log P(x_1 \mid y) - \sum_{i=2}^{n} \sum_{y, x_i, \pi_i} P(y, x_i, \pi_i) \log P(x_i \mid \pi_i, y) \\ &= H(Y) + H(X_1 \mid Y) + \sum_{i=2}^{n} H(X_i \mid \pi_i, Y) \end{aligned} \qquad (9)$$
According to Equations (6) and (9), the difference between $H_{NB}(\mathbf{U})$ and $H_{TAN}(\mathbf{U})$ can be calculated by
$$\begin{aligned} H_{NB}(\mathbf{U}) - H_{TAN}(\mathbf{U}) &= \Big\{ H(Y) + \sum_{i=1}^{n} H(X_i \mid Y) \Big\} - \Big\{ H(Y) + H(X_1 \mid Y) + \sum_{i=2}^{n} H(X_i \mid \pi_i, Y) \Big\} \\ &= \sum_{i=2}^{n} H(X_i \mid Y) - \sum_{i=2}^{n} H(X_i \mid \pi_i, Y) = \sum_{i=2}^{n} I(X_i; \pi_i \mid Y) \end{aligned} \qquad (10)$$
Thus, the difference between $H_{NB}(\mathbf{U})$ and $H_{TAN}(\mathbf{U})$ is the summation of the 1-order CMIs that correspond to the conditional dependence relationships among attributes, since $I(X_i;\pi_i \mid Y) = H(X_i \mid Y) - H(X_i \mid \pi_i, Y)$. Equation (10) clarifies why TAN applies CMI to fully describe the 1-dependence relationships in the maximum weighted spanning tree. As TAN is a successful structure augmentation of NB, many researchers suggest that identifying significant dependencies can help to achieve more precise classification accuracy [24,25]. Ziebart et al. [26] model the selective forest-augmented naive Bayes by allowing attributes to be optionally dependent on the class. Jing and Pavlovic [27] presented the boosted BNC, which greedily builds the structure from the arcs with the highest CMI values.
KDB [28] extends the network structure further by using a variable $k$ to control the attribute dependence spectrum (see Figure 1c). KDB first sorts the attributes by comparing the MI $I(X_i;Y)$. Suppose that the attribute order is $\{X_1, X_2, \ldots, X_n\}$; then $P_{KDB}(\mathbf{u})$ is calculated by
$$P_{KDB}(\mathbf{u}) = P(y) P(x_1 \mid y) P(x_2 \mid x_1, y) \cdots P(x_k \mid x_1, \ldots, x_{k-1}, y) P(x_{k+1} \mid \Pi_{k+1}, y) \cdots P(x_n \mid \Pi_n, y) = P(y) P(x_1 \mid y) \prod_{i=2}^{k} P(x_i \mid x_1, \ldots, x_{i-1}, y) \prod_{j=k+1}^{n} P(x_j \mid \Pi_j, y) \qquad (11)$$
where $\Pi_i$ is the set of $k$ parent attributes of $X_i$ when $k < i \le n$, whereas when $i \le k$, $X_i$ takes the first $i - 1$ attributes in the order as its parent attributes. Correspondingly, $H_{KDB}(\mathbf{U})$ can be calculated by
$$\begin{aligned} H_{KDB}(\mathbf{U}) &= -\sum_{y, x_1, \ldots, x_n} P(y, x_1, \ldots, x_n) \log \big[ P(y) P(x_1 \mid y) P(x_2 \mid x_1, y) \cdots P(x_n \mid \Pi_n, y) \big] \\ &= -\sum_{y, x_1, \ldots, x_n} P(y, x_1, \ldots, x_n) \log P(y) - \sum_{y, x_1, \ldots, x_n} P(y, x_1, \ldots, x_n) \log P(x_1 \mid y) \\ &\quad - \sum_{y, x_1, \ldots, x_n} \sum_{i=2}^{k} P(y, x_1, \ldots, x_n) \log P(x_i \mid x_1, \ldots, x_{i-1}, y) - \sum_{y, x_1, \ldots, x_n} \sum_{j=k+1}^{n} P(y, x_1, \ldots, x_n) \log P(x_j \mid \Pi_j, y) \\ &= -\sum_{y} P(y) \log P(y) - \sum_{y, x_1} P(y, x_1) \log P(x_1 \mid y) - \sum_{i=2}^{k} \sum_{y, x_1, \ldots, x_i} P(y, x_1, \ldots, x_i) \log P(x_i \mid x_1, \ldots, x_{i-1}, y) \\ &\quad - \sum_{j=k+1}^{n} \sum_{y, x_j, \Pi_j} P(y, x_j, \Pi_j) \log P(x_j \mid \Pi_j, y) \\ &= H(Y) + H(X_1 \mid Y) + \sum_{i=2}^{k} H(X_i \mid X_1, \ldots, X_{i-1}, Y) + \sum_{j=k+1}^{n} H(X_j \mid \Pi_j, Y) \end{aligned} \qquad (12)$$
According to Equations (6) and (12), the difference between $H_{NB}(\mathbf{U})$ and $H_{KDB}(\mathbf{U})$ can be calculated by
$$\begin{aligned} H_{NB}(\mathbf{U}) - H_{KDB}(\mathbf{U}) &= \Big\{ H(Y) + \sum_{i=1}^{n} H(X_i \mid Y) \Big\} - \Big\{ H(Y) + H(X_1 \mid Y) + \sum_{i=2}^{k} H(X_i \mid X_1, \ldots, X_{i-1}, Y) + \sum_{j=k+1}^{n} H(X_j \mid \Pi_j, Y) \Big\} \\ &= \sum_{i=2}^{k} I(X_i; X_1, \ldots, X_{i-1} \mid Y) + \sum_{j=k+1}^{n} I(X_j; \Pi_j \mid Y) \end{aligned} \qquad (13)$$
Thus, the difference between H N B ( U ) and H K D B ( U ) is actually the summation of these CMIs of different orders.
Extending the network structure with a higher attribute dependence spectrum has become a popular way to improve the classification performance of BNCs [29]. Pernkopf and Bilmes [30] establish k-graphs by ranking attributes with a greedy algorithm and selecting the k best parents by scoring each possibility with the classification accuracy. Sun and Kudo [31] propose k-dependence classifier chains with label-specific features and demonstrate the effectiveness of the method.

3. Extensive Tree-Augmented Naive Bayes

KDB allows us to construct classifiers at arbitrary points (values of $k$) along the attribute dependence spectrum. To build an ideal KDB, we need to maximize the entropy difference shown in Equation (13). The original KDB sorts attributes by comparing MI and uses a set of 1-order CMIs (e.g., $I(X_i;X_1 \mid Y), I(X_i;X_2 \mid Y), \ldots, I(X_i;X_{i-1} \mid Y)$) rather than one higher-order CMI (e.g., $I(X_i;X_1, \ldots, X_{i-1} \mid Y)$) to measure the conditional dependencies between $X_i$ and its parent attributes. To illustrate the difference between these two measures, we take the data set Census-Income, which has 299,285 instances, 41 attributes, and 2 class labels, as an example to learn a specific KDB; the corresponding distributions of $\sum_{X_j \in \Pi_i} I(X_i;X_j \mid Y)$ and $I(X_i;\Pi_i \mid Y)$ are shown in Figure 3, where the X-axis denotes the index of the attributes sorted in decreasing order of $I(X_i;\Pi_i \mid Y)$. As Figure 3 shows, the distribution of $\sum_{X_j \in \Pi_i} I(X_i;X_j \mid Y)$ differs greatly from that of $I(X_i;\Pi_i \mid Y)$; thus the former is not an appropriate approximation of the latter.
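The two quantities compared in Figure 3 can be estimated as sketched below. The sketch reuses the conditional_mutual_information helper introduced after Equation (7), and merging the parent attributes into a single compound discrete variable is one simple assumed way to obtain the higher-order term, not necessarily the authors' estimator:

```python
def joint_cmi(xi, parent_cols, y):
    """Empirical I(Xi; Pi_i | Y), where Pi_i is a set of parent attributes, obtained
    by treating the parent columns as one compound discrete variable."""
    compound = list(zip(*parent_cols))        # one tuple of parent values per instance
    return conditional_mutual_information(xi, compound, y)

def sum_of_pairwise_cmi(xi, parent_cols, y):
    """The low-order surrogate used by KDB: sum over Xj in Pi_i of I(Xi; Xj | Y)."""
    return sum(conditional_mutual_information(xi, xj, y) for xj in parent_cols)
```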
CMI, or $I(X_i;X_j \mid Y)$, is a popular measure for evaluating the dependency relationship between attributes, and the maximum weighted spanning tree learned by TAN describes the most significant dependencies in its 1-dependence structure. However, $I(X_i;X_j \mid Y)$ may fail to discriminate the dependency relationships given different class labels. The definition of CMI shown in Equation (7) can be rewritten as follows:
$$I(X_i;X_j \mid Y) = \sum_{x_i} \sum_{x_j} \sum_{y} P(x_i, x_j, y) \log \frac{P(x_i, x_j \mid y)}{P(x_i \mid y) P(x_j \mid y)} = \sum_{y} P(y) \Big\{ \sum_{x_i} \sum_{x_j} P(x_i, x_j \mid y) \log \frac{P(x_i, x_j \mid y)}{P(x_i \mid y) P(x_j \mid y)} \Big\} = \sum_{y} P(y) I(X_i;X_j \mid y) \qquad (14)$$
where $I(X_i;X_j \mid y)$ measures the informational correlation between $X_i$ and $X_j$ given the specific class label $y$ and is defined as follows:
$$I(X_i;X_j \mid y) = \sum_{x_i} \sum_{x_j} P(x_i, x_j \mid y) \log \frac{P(x_i, x_j \mid y)}{P(x_i \mid y) P(x_j \mid y)} \qquad (15)$$
From Equation (1), for restricted BNCs the most important issue is how to deeply mine the significant conditional dependencies among attributes given the class label $y$. From Equation (14), however, CMI, or $I(X_i;X_j \mid Y)$, is just an average of $I(X_i;X_j \mid y)$ weighted by the class prior $P(y)$; the latter assumes that the data set has been divided into $|Y|$ subsets, each corresponding to a specific class label. To illustrate how $I(X_i;X_j \mid y)$ varies with different class labels, we present a comparison on the data set Census-Income in Figure 4. As Figure 4 shows, the distribution of $I(X_i;X_j \mid y)$ differs greatly as $y$ changes. Correspondingly, the network structures in which the conditional dependencies are measured by $I(X_i;X_j \mid y)$ should also be different.
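The decomposition in Equations (14) and (15) can be checked numerically as sketched below. The sketch again builds on the conditional_mutual_information helper introduced after Equation (7); all function names are illustrative:

```python
from collections import Counter

def class_specific_cmi(xi, xj, y, label):
    """Empirical I(Xi; Xj | y) of Equation (15): the dependence between Xi and Xj
    measured only on the instances whose class value equals `label`."""
    rows = [t for t, c in enumerate(y) if c == label]
    sub_i = [xi[t] for t in rows]
    sub_j = [xj[t] for t in rows]
    # With a constant class column, conditional_mutual_information reduces to the
    # mutual information inside this class-specific subset of the data.
    return conditional_mutual_information(sub_i, sub_j, [label] * len(rows))

def weighted_average_of_class_cmi(xi, xj, y):
    """Equation (14): I(Xi; Xj | Y) = sum_y P(y) * I(Xi; Xj | y).
    The result should match conditional_mutual_information(xi, xj, y)."""
    n, counts = len(y), Counter(y)
    return sum(counts[c] / n * class_specific_cmi(xi, xj, y, c) for c in counts)
```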
According to Figure 3 and Figure 4, $\sum_{X_j \in \Pi_i} I(X_i;X_j \mid Y)$ may not be able to approximate $I(X_i;\Pi_i \mid Y)$, and to discriminate the conditional dependence between attributes, $I(X_i;X_j \mid y)$ rather than $I(X_i;X_j \mid Y)$ is appropriate for measuring the conditional dependencies implicated in different subspaces of the training data. Thus, motivated by the learning schemes of TAN and KDB, the structure of the proposed BNC is an extension of TAN from a 1-dependence BNC to an arbitrary k-dependence BNC, and we respectively learn $|Y|$ sub-classifiers from the $|Y|$ subspaces of the training data. The attributes are sorted in such a way that this entropy difference will be maximized, and each attribute can have at most $k$ parent attributes. Suppose that the attribute order is $\{X_1, X_2, \ldots, X_n\}$; from the chain rule of the joint probability of a BNC (see Equation (2)) we know that any possible parents of attribute $X_i$ must be selected from $\{X_1, X_2, \ldots, X_{i-1}\}$. As a k-dependence BNC, ETAN uses a heuristic search strategy, and its learning procedure is divided into two parts: when $i \le k + 1$, all attributes in $\{X_1, X_2, \ldots, X_{i-1}\}$ are selected as the parents of $X_i$; when $i > k + 1$, only $k$ attributes in $\{X_1, X_2, \ldots, X_{i-1}\}$ are selected as the parents of $X_i$. The learning procedure of one sub-model of ETAN is shown in Algorithm 2.
Algorithm 2: Sub-ETAN (y,k).
To illustrate the learning process of ETAN, we again take the dataset Balance-Scale as an example and set $y = y_1$ and $k = 2$. In the first step we need to find the most significant dependence between attributes. As shown in Figure 5a, $I(X_3;X_4 \mid y_1)$ corresponds to the maximum of $I(X_i;X_j \mid y_1)$ over all attribute pairs, so the arc between $X_3$ and $X_4$ is added to the topology of ETAN. In the second step, we need to find the next most significant dependence relationship between attributes. As shown in Figure 5b, $I(X_2;X_3, X_4 \mid y_1)$ corresponds to the maximum of $I(X_i;\Pi_i \mid y_1)$ where $\Pi_i \subseteq \{X_3, X_4\}$ and $X_i \notin \{X_3, X_4\}$, so the arcs connecting $X_2$ with $X_3$ and $X_4$ are added to the topology. The next iteration begins. As shown in Figure 5c, $I(X_1;X_3, X_4 \mid y_1)$ corresponds to the maximum of $I(X_i;\Pi_i \mid y_1)$ where $\Pi_i \subseteq \{X_2, X_3, X_4\}$ and $X_i \notin \{X_2, X_3, X_4\}$, so the arcs connecting $X_1$ with $X_3$ and $X_4$ are added to the topology. Finally, there exist at least $k$ dependence relationships between every attribute $X_i$ and the other attributes, and the learning procedure of ETAN stops.
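Since Algorithm 2 appears only as a figure in the original, the following Python sketch reconstructs one Sub-ETAN structure from the textual description and the worked example above. It builds on the class_specific_cmi helper sketched after Equation (15); the tie handling and the direction of the seed arc are our assumptions, not the authors' exact procedure:

```python
from itertools import combinations

def class_specific_joint_cmi(xi, parent_cols, y, label):
    """I(Xi; Pi_i | y): merge the parent columns into one compound variable and
    reuse the class_specific_cmi helper sketched earlier."""
    compound = list(zip(*parent_cols))
    return class_specific_cmi(xi, compound, y, label)

def sub_etan_structure(columns, y, label, k):
    """Greedy sketch of one Sub-ETAN sub-model for class `label`: seed with the
    strongest class-specific dependence, then repeatedly add the attribute whose
    best parent set (at most k attributes already in the structure) has the
    largest class-specific CMI. Returns {child index: tuple of parent indices}."""
    n = len(columns)
    seed = max(((a, b) for a in range(n) for b in range(a + 1, n)),
               key=lambda p: class_specific_cmi(columns[p[0]], columns[p[1]], y, label))
    in_struct = {seed[0], seed[1]}
    parents = {seed[1]: (seed[0],)}            # direction of the seed arc is arbitrary
    while len(in_struct) < n:
        best_score, best_child, best_pa = -1.0, None, None
        for child in set(range(n)) - in_struct:
            for pa in combinations(sorted(in_struct), min(k, len(in_struct))):
                score = class_specific_joint_cmi(columns[child], [columns[p] for p in pa], y, label)
                if score > best_score:
                    best_score, best_child, best_pa = score, child, pa
        in_struct.add(best_child)
        parents[best_child] = best_pa
    return parents
```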
For a testing instance $\mathbf{x}$, its class label $y$ will correspond to one of the $|Y|$ candidate classifiers, whose structure may lead to the maximum of the joint probability $P(y, \mathbf{x})$, i.e., $y^* = \arg\max P(y, \mathbf{x} \mid \mathrm{BNC})$. The prediction procedure of our proposed method, the extensive tree-augmented naive Bayes, is shown in Algorithm 3.
Algorithm 3: The Extensive tree-augmented naive Bayes (ETAN).
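Algorithm 3 is likewise shown only as a figure in the original, so a hedged sketch of the prediction rule $y^* = \arg\max P(y, \mathbf{x} \mid \mathrm{BNC})$ is given below, assuming each class label owns a learned parent structure and a set of conditional probability tables; the container layout is purely illustrative:

```python
import math

def etan_predict(x, class_models, priors):
    """Each class label y owns a sub-model: class_models[y] = (parents, cpt), where
    parents maps an attribute index to its parent indices and
    cpt[(i, x_i, parent_values)] = P(x_i | parents(X_i), y).
    The label whose own sub-model yields the largest joint probability P(y, x) wins."""
    best_label, best_log_joint = None, -math.inf
    for label, (parents, cpt) in class_models.items():
        log_joint = math.log(priors[label])                  # log P(y)
        for i, xi in enumerate(x):
            pa_vals = tuple(x[p] for p in parents.get(i, ()))
            log_joint += math.log(cpt[(i, xi, pa_vals)])     # log P(x_i | pa(X_i), y)
        if log_joint > best_log_joint:
            best_label, best_log_joint = label, log_joint
    return best_label
```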

4. Experimental Results

We compare the performance of our proposed method with other algorithms. All experiments are carried out on 40 data sets from the UCI machine learning repository. Table 1 shows the details of each data set used, including the number of instances, attributes, and classes. These data sets are arranged in order of the number of instances. For each data set, numeric attributes are discretized using Minimum Description Length discretization [32]. To allow the proposed algorithm to be compared with Weka’s algorithms, missing values for qualitative attributes are replaced with modes and those for quantitative attributes are replaced with means from the training data.
The following algorithms are compared in the experimental study:
  • NB, naive Bayes.
  • TAN, standard tree-augmented naive Bayes.
  • KDB, k-dependence Bayesian classifier with k = 2 .
  • LR, Logistic Regression.
  • ETAN, Extensive TAN with k = 2 .
  • AODE, averaged one-dependence estimators [33].
  • WAODE, weighted averaged one-dependence estimators [34].
In machine learning, zero-one loss is the most common function for measuring classification performance. Kohavi and Wolpert [35] presented a bias-variance decomposition of zero-one loss for analyzing supervised learning scenarios. Bias represents the systematic component of error, which measures how closely the classifier can describe the decision surfaces of a data set. Variance represents the component of error that stems from sampling, which reflects the sensitivity of the classifier to changes in the training data. These measures are estimated using 10-fold cross validation to provide an accurate evaluation of the performance of the algorithms. The experimental results for zero-one loss, bias, and variance are shown in Table A1, Table A2, and Table A3, respectively, in Appendix A. Statistically, we report win/draw/loss records when two algorithms are compared with respect to a performance measure. A win and a draw respectively indicate that the algorithm has significantly and not significantly lower error than the comparator. We assess a difference as significant if the outcome of a one-tailed binomial sign test is less than 0.05. Base probability estimates are computed with M-estimation, which leads to more accurate probabilities, where the value of M is 1 [36].
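For concreteness, the significance check described above can be sketched as follows, under the usual (assumed) convention that draws are excluded from the sign test; the record used in the example is hypothetical:

```python
from scipy.stats import binom

def sign_test_p_value(wins, losses):
    """One-tailed binomial sign test on a win/draw/loss record (draws ignored):
    the probability of at least `wins` successes in wins + losses fair coin flips."""
    n = wins + losses
    return binom.sf(wins - 1, n, 0.5)      # P(X >= wins) under Binomial(n, 0.5)

# Example: a record of 28 wins and 4 losses gives a p-value far below 0.05,
# so the difference would be assessed as significant.
print(sign_test_p_value(28, 4))
```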

4.1. ETAN vs. BNCs

In this section, we present experimental results of our proposed algorithms, ETAN. Table 2 displays the win/draw/loss records summarizing the relative zero-one loss, bias, and variance of different algorithms. Cell [ i , j ] in Table 2 contains win/draw/loss records for the BNC on row i against the BNC on column j.
As shown in Table 2, ETAN performs significantly better than NB on 28 datasets in terms of zero-one loss. In particular, ETAN shows obvious advantages when compared with TAN (19 wins). ETAN has a clear edge over KDB with win/draw/loss records of 17/14/9. Ensembles of classifiers, e.g., AODE and WAODE, bring an improvement in accuracy in the sense that small variations in the training set would lead them to produce very different models, which helps achieve higher classification accuracy compared to single-structure classifiers. ETAN is also an ensemble but has a higher attribute dependence spectrum than AODE or WAODE. It retains an advantage over AODE and WAODE in terms of zero-one loss, although a less significant one. As structure complexity increases, ETAN uses higher-order CMI to measure the high-dependence relationships, which may help to improve the classification performance.
The win/draw/loss records of bias and variance are also shown in Table 2. A bias-variance analysis of BNCs is given in the following discussion. By modeling BNCs with respect to the class labels, ETAN makes full use of the training data. For bias, ETAN performs better than NB (32/4/4). When ETAN is compared with a 1-dependence classifier, ETAN beats TAN on 17 datasets and loses on 8 datasets. As each sub-model in ETAN is a k-dependence classifier, ETAN beats AODE on 20 datasets. This indicates that BNCs with more interdependencies can perform better in terms of bias. Variance-wise, NB performs the best, because the structure of NB is fixed and insensitive to variations in the training data. For single-structure BNCs, higher-dependence BNCs (e.g., KDB) perform worse than lower-dependence BNCs (e.g., TAN). This also holds for ensemble classifiers, and ETAN performs worse than AODE and WAODE. The reason may be that further dependence discovery results in overfitting.
ETAN is a structural augmentation of TAN in which every attribute takes the class variable and at most $k$ other attributes as its parents. k-order CMI is introduced to measure the conditional dependencies among attributes, and the final structure is an extended maximum weighted spanning tree. This alleviates some of NB’s independence assumption and therefore reduces its bias at the expense of increasing its variance. As can be seen from Table 2, ETAN performs better than NB in terms of variance on 4 datasets, i.e., Anneal, Vowel, Dis, and Mushrooms. Each of these datasets has a relatively small number of instances and a relatively large number of attributes. That may lead to sparsely distributed data and imprecise estimates of the probability distribution. For lower quantities of data, the lower variance results in lower error for ETAN, while for larger quantities of data the lower bias results in lower error. ETAN may underfit sparsely distributed training data, which leads to lower variance and then higher classification accuracy. The ideal datasets on which ETAN has better variance prediction accuracy are those with a small data quantity and sparse data. For example, the dataset Anneal has only 898 instances but 38 attributes and 6 class labels.

4.2. ETAN vs. Logistic Regression (LR)

In this section, our proposed algorithm is compared with a state-of-the-art algorithm, Logistic Regression (LR). LR can be viewed as a partially parametric approach [37]; hence, a BNC can be mapped to an LR model [38]. We use LR’s implementation in Weka, an open-source machine learning workbench provided by the University of Waikato. Weka offers an improved implementation of LR, which uses a quasi-Newton method to search for optimized attribute weights and considers instance weights. The experimental results in terms of zero-one loss, bias, and variance are shown in the fifth column of Table A1, Table A2, and Table A3 in Appendix A. Table 3 shows the win/draw/loss results. Due to the computational constraints of LR, the size of the data sets has an obvious effect on its training time; hence, we have not been able to learn its classification models for the two largest data sets. That is why the counts in each cell sum to exactly 38.
As we can see from Table 3, ETAN beats LR on 26 data sets, which means ETAN achieves better classification performance than LR. ETAN delivers not only better bias performance on 22 data sets but also better variance performance on 21 data sets. In other words, ETAN is rarely beaten by LR.
To further illustrate the advantages of our algorithm, we present the comparison with respect to zero-one loss in Figure 6, where the X-axis represents the zero-one loss results of ETAN and the Y-axis represents those of LR. Most of the points in Figure 6 lie above the diagonal line, which means that our algorithm shows better classification performance in general. LR is a popular binary classifier and attempts to predict outcomes based on a set of independent attributes. Among the data sets corresponding to points below the diagonal line, most have 2 class labels, fewer than 1000 instances, and at least 8 attributes. The sparsely distributed data and binary classification may be the main reasons why LR performs better there. However, for data sets containing non-binary attributes, ETAN allows us to build (more expressive) non-linear classifiers, which is impossible for LR unless one “binarizes” all attributes, and this may artificially introduce noise.

4.3. Comparison of All Algorithms

To compare multiple algorithms over multiple data sets, the Friedman test, which ranks the algorithms for each data set, is used in the following discussion [39]. We calculate the rank of each algorithm for each data set separately (assigning average ranks in case of ties). The null hypothesis is that there is no significant difference in the average ranks. The Friedman statistic is distributed according to $\chi^2_F$ with $t - 1$ degrees of freedom. For any level of significance $\alpha$, the null hypothesis will be rejected if $\chi^2_F > \chi^2_\alpha$. The critical value of $\chi^2_\alpha$ for $\alpha = 0.05$ with $t - 1 = 6$ degrees of freedom is 12.59. The Friedman statistic for zero-one loss is 37.90. Therefore, the null hypothesis is rejected.
As the null hypothesis is rejected, we perform the Nemenyi test, which is used to analyze which pairs of algorithms are significantly different [40]. If the difference between the average ranks of two algorithms exceeds the critical difference (CD), their performance is significantly different. For these 7 algorithms and 38 data sets, the value of CD is 1.462.
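The critical difference quoted above can be reproduced with the standard Nemenyi formula $CD = q_\alpha \sqrt{t(t+1)/(6N)}$. The sketch below uses $q_{0.05} = 2.949$ for seven classifiers (taken from the standard studentized-range table) and notes how the Friedman statistic itself could be obtained with SciPy; the variable names for the per-algorithm loss columns are illustrative:

```python
import numpy as np
from scipy.stats import friedmanchisquare

def nemenyi_cd(t, n, q_alpha=2.949):
    """Critical difference for the Nemenyi test with t algorithms and n data sets;
    q_alpha = 2.949 is the standard alpha = 0.05 value for t = 7 classifiers."""
    return q_alpha * np.sqrt(t * (t + 1) / (6.0 * n))

print(nemenyi_cd(t=7, n=38))     # ~1.46, matching the CD of 1.462 quoted above

# The Friedman statistic can be obtained by passing one column of results per
# algorithm, e.g., the seven zero-one-loss columns of Table A1 restricted to the
# 38 data sets on which all algorithms could be trained:
# stat, p = friedmanchisquare(nb, tan, kdb, lr, etan, aode, waode)
```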
The comparison of all algorithms against each other with the Nemenyi test in terms of zero-one loss is shown in Figure 7. We plot the algorithms on the left line according to their average ranks; the higher the position of an algorithm, the lower its average rank and hence the better its performance. As we can see, the rank of ETAN is significantly better than that of the other algorithms. WAODE and AODE also achieve lower average ranks than KDB, TAN, and NB. This indicates that ensemble classifiers may help to improve the performance of single-structure classifiers. The advantage of ETAN over AODE and that of KDB over TAN may be attributed to the increasing attribute dependence spectrum.

5. Conclusions

Our work was primarily motivated by the observation that the structural difference between NB and its variations can be measured by different orders of CMIs in terms of Kullback–Leibler divergence, and that the conditional dependencies between attributes may vary greatly for different class labels. In this paper, we provide a novel learning algorithm, ETAN, which extends TAN to an arbitrary k-dependence BNC. The final network structure is similar to an extended version of the maximum weighted spanning tree and corresponds to the maximum of the sum of CMIs. ETAN achieves substantially better performance with respect to different evaluation functions and is highly competitive with state-of-the-art higher-dependence BNCs, e.g., KDB.

Author Contributions

All authors have contributed to the study and preparation of the article. Y.L. and L.W. conceived the idea, derived equations and wrote the paper. M.S. did the analysis and finished the programming work. All authors have read and approved the final manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61272209 and 61872164), the Agreement of Science and Technology Development Project, Jilin Province (20150101014JC), and the Fundamental Research Funds for the Central Universities.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Detailed Experimental Results

Table A1. Experimental results of zero-one loss.

Data Set             NB      TAN     KDB     LR      ETAN    AODE    WAODE
Labor                0.0289  0.0211  0.0279  0.0211  0.0274  0.0205  0.0200
Labor-Negotiations   0.0505  0.0763  0.0553  0.0422  0.0237  0.0268  0.0268
Lymphography         0.0902  0.0976  0.1041  0.1422  0.0814  0.0853  0.0951
Iris                 0.0590  0.0550  0.0656  0.0343  0.0760  0.0626  0.0624
Hungarian            0.1646  0.1454  0.1480  0.1057  0.1456  0.1597  0.1611
Heart-Disease-C      0.1297  0.1263  0.1299  0.1300  0.1300  0.1171  0.1092
Soybean-Large        0.1070  0.1275  0.1086  0.1104  0.0964  0.0812  0.0655
Ionosphere           0.1220  0.0800  0.0855  0.1117  0.0817  0.0903  0.0061
House-Votes-84       0.0899  0.0410  0.0258  0.0307  0.0406  0.0518  0.0406
Musk1                0.1847  0.1563  0.1535  0.1357  0.1527  0.1670  0.1501
Cylinder-Bands       0.2000  0.3242  0.1939  0.1863  0.1746  0.1684  0.1286
Chess                0.1413  0.1427  0.1119  0.0832  0.1192  0.1380  0.0180
Syncon               0.0516  0.0203  0.0314  0.1123  0.0339  0.0334  0.1827
Balance-Scale        0.1840  0.1843  0.1902  0.0753  0.1877  0.1905  0.0503
Soybean              0.1015  0.0504  0.0491  0.0656  0.0622  0.0690  0.0900
Credit-A             0.0912  0.1171  0.1137  0.1279  0.1266  0.0892  0.0953
Breast-Cancer-W      0.0187  0.0315  0.0449  0.0375  0.0263  0.0243  0.1941
Pima-Ind-Diabetes    0.1957  0.1946  0.1944  0.1898  0.1905  0.1935  0.2398
Vehicle              0.3330  0.2384  0.2494  0.1540  0.2482  0.2435  0.0194
Anneal               0.0354  0.0195  0.0073  0.0788  0.0186  0.0180  0.2104
Vowel                0.3301  0.1931  0.1745  0.1688  0.2135  0.2268  0.1811
Led                  0.2322  0.2247  0.2317  0.2211  0.2348  0.2327  0.3766
Car                  0.0937  0.0478  0.0387  0.0536  0.0524  0.0597  0.0633
Hypothyroid          0.0116  0.0105  0.0096  0.0181  0.0095  0.0094  0.0315
Dis                  0.0165  0.0194  0.0191  0.0195  0.0194  0.0163  0.0078
Sick                 0.0246  0.0208  0.0198  0.0277  0.0215  0.0221  0.3212
Abalone              0.4180  0.3134  0.3033  0.3613  0.3089  0.3201  0.0574
Spambase             0.0929  0.0571  0.0497  0.0560  0.0488  0.0631  0.1184
Waveform-5000        0.1762  0.1232  0.1157  0.1207  0.1122  0.1233  0.2172
Page-Blocks          0.0451  0.0306  0.0280  0.0282  0.0259  0.0259  0.0224
Optdigits            0.0685  0.0275  0.0250  0.0382  0.0182  0.0203  0.0902
Satellite            0.1746  0.0948  0.0808  0.1064  0.0834  0.0889  0.0002
Mushrooms            0.0237  0.0001  0.0001  0.0000  0.0001  0.0004  0.0561
Thyroid              0.0994  0.0572  0.0553  0.0999  0.0535  0.0658  0.0561
Letter-Recog         0.2207  0.1032  0.1387  0.1945  0.0569  0.0806  0.0892
Adult                0.1649  0.1312  0.1220  0.1394  0.1215  0.1440  0.0006
Connect-4            0.2660  0.2253  0.2022  0.2279  0.1981  0.2279  0.0158
Waveform             0.0219  0.0152  0.0210  0.0267  0.0140  0.0157  0.3068
Census-Income        0.2303  0.0544  0.0421  -       0.0450  0.0859  0.2083
Poker-Hand           0.4979  0.2865  0.1326  -       0.4040  0.4217  0.1716
Table A2. Experimental results of Bias.

Data Set             NB      TAN     KDB     LR      ETAN    AODE    WAODE
Labor                0.0289  0.0211  0.0279  0.0211  0.0274  0.0205  0.0200
Labor-Negotiations   0.0505  0.0763  0.0553  0.0422  0.0237  0.0268  0.0268
Lymphography         0.0902  0.0976  0.1041  0.1422  0.0814  0.0853  0.0951
Iris                 0.0590  0.0550  0.0656  0.0343  0.0760  0.0626  0.0624
Hungarian            0.1646  0.1454  0.1480  0.1057  0.1456  0.1597  0.1611
Heart-Disease-C      0.1297  0.1263  0.1299  0.1300  0.1300  0.1171  0.1092
Soybean-Large        0.1070  0.1275  0.1086  0.1104  0.0964  0.0812  0.0655
Ionosphere           0.1220  0.0800  0.0855  0.1117  0.0817  0.0903  0.0061
House-Votes-84       0.0899  0.0410  0.0258  0.0307  0.0406  0.0518  0.0406
Musk1                0.1847  0.1563  0.1535  0.1357  0.1527  0.1670  0.1501
Cylinder-Bands       0.2000  0.3242  0.1939  0.1863  0.1746  0.1684  0.1286
Chess                0.1413  0.1427  0.1119  0.0832  0.1192  0.1380  0.0180
Syncon               0.0516  0.0203  0.0314  0.1123  0.0339  0.0334  0.1827
Balance-Scale        0.1840  0.1843  0.1902  0.0753  0.1877  0.1905  0.0503
Soybean              0.1015  0.0504  0.0491  0.0656  0.0622  0.0690  0.0900
Credit-A             0.0912  0.1171  0.1137  0.1279  0.1266  0.0892  0.0953
Breast-Cancer-W      0.0187  0.0315  0.0449  0.0375  0.0263  0.0243  0.1941
Pima-Ind-Diabetes    0.1957  0.1946  0.1944  0.1898  0.1905  0.1935  0.2398
Vehicle              0.3330  0.2384  0.2494  0.1540  0.2482  0.2435  0.0194
Anneal               0.0354  0.0195  0.0073  0.0788  0.0186  0.0180  0.2104
Vowel                0.3301  0.1931  0.1745  0.1688  0.2135  0.2268  0.1811
Led                  0.2322  0.2247  0.2317  0.2211  0.2348  0.2327  0.3766
Car                  0.0937  0.0478  0.0387  0.0536  0.0524  0.0597  0.0633
Hypothyroid          0.0116  0.0105  0.0096  0.0181  0.0095  0.0094  0.0315
Dis                  0.0165  0.0194  0.0191  0.0195  0.0194  0.0163  0.0078
Sick                 0.0246  0.0208  0.0198  0.0277  0.0215  0.0221  0.3212
Abalone              0.4180  0.3134  0.3033  0.3613  0.3089  0.3201  0.0574
Spambase             0.0929  0.0571  0.0497  0.0560  0.0488  0.0631  0.1184
Waveform-5000        0.1762  0.1232  0.1157  0.1207  0.1122  0.1233  0.2172
Page-Blocks          0.0451  0.0306  0.0280  0.0282  0.0259  0.0259  0.0224
Optdigits            0.0685  0.0275  0.0250  0.0382  0.0182  0.0203  0.0902
Satellite            0.1746  0.0948  0.0808  0.1064  0.0834  0.0889  0.0002
Mushrooms            0.0237  0.0001  0.0001  0.0000  0.0001  0.0004  0.0561
Thyroid              0.0994  0.0572  0.0553  0.0999  0.0535  0.0658  0.0561
Letter-Recog         0.2207  0.1032  0.1387  0.1945  0.0569  0.0806  0.0892
Adult                0.1649  0.1312  0.1220  0.1394  0.1215  0.1440  0.0006
Connect-4            0.2660  0.2253  0.2022  0.2279  0.1981  0.2279  0.0158
Waveform             0.0219  0.0152  0.0210  0.0267  0.0140  0.0157  0.3068
Census-Income        0.2303  0.0544  0.0421  -       0.0450  0.0859  0.2083
Poker-Hand           0.4979  0.2865  0.1326  -       0.4040  0.4217  0.1716
Table A3. Experimental results of Variance.

Data Set             NB      TAN     KDB     LR      ETAN    AODE    WAODE
Labor                0.0395  0.0632  0.0721  0.0328  0.0779  0.0268  0.0221
Labor-Negotiations   0.0653  0.1395  0.1289  0.0655  0.0868  0.0626  0.0626
Lymphography         0.0343  0.1106  0.1408  0.1212  0.0961  0.0412  0.0478
Iris                 0.0390  0.0510  0.0364  0.0327  0.0460  0.0394  0.0396
Hungarian            0.0201  0.0556  0.0561  0.0751  0.0411  0.0270  0.0317
Heart-Disease-C      0.0248  0.0479  0.0582  0.0920  0.0591  0.0304  0.0383
Soybean-Large        0.0783  0.1127  0.0982  0.1542  0.0899  0.0747  0.0855
Ionosphere           0.0242  0.0414  0.0581  0.0946  0.0448  0.0319  0.0242
House-Votes-84       0.0066  0.0170  0.0197  0.0714  0.0083  0.0068  0.0123
Musk1                0.1108  0.1191  0.1320  0.1691  0.1157  0.1153  0.1010
Cylinder-Bands       0.0656  0.0724  0.0750  0.1437  0.0888  0.0827  0.0364
Chess                0.0401  0.0491  0.0531  0.0791  0.0578  0.0385  0.0230
Syncon               0.0204  0.0222  0.0301  0.1764  0.0246  0.0161  0.0913
Balance-Scale        0.0848  0.0941  0.0872  0.0339  0.0863  0.0854  0.0334
Soybean              0.0302  0.0593  0.0439  0.0839  0.0395  0.0288  0.0321
Credit-A             0.0249  0.0555  0.0768  0.0737  0.0673  0.0269  0.0264
Breast-Cancer-W      0.0010  0.0372  0.0504  0.0395  0.0376  0.0118  0.0700
Pima-Ind-Diabetes    0.0715  0.0663  0.0689  0.0425  0.0697  0.0729  0.1276
Vehicle              0.1120  0.1297  0.1283  0.0797  0.1330  0.1246  0.0161
Anneal               0.0168  0.0156  0.0152  0.0593  0.0139  0.0118  0.0604
Vowel                0.2542  0.2466  0.2325  0.2239  0.2310  0.2465  0.2489
Led                  0.0333  0.0536  0.0565  0.0640  0.0460  0.0372  0.1106
Car                  0.0520  0.0375  0.0434  0.0385  0.0427  0.0431  0.0509
Hypothyroid          0.0031  0.0029  0.0024  0.0062  0.0039  0.0026  0.0083
Dis                  0.0069  0.0006  0.0011  0.0038  0.0009  0.0048  0.0056
Sick                 0.0047  0.0052  0.0043  0.0084  0.0063  0.0038  0.1543
Abalone              0.0682  0.1690  0.1769  0.0746  0.1679  0.1536  0.0111
Spambase             0.0092  0.0157  0.0214  0.0243  0.0177  0.0098  0.0420
Waveform-5000        0.0259  0.0687  0.0843  0.0310  0.0693  0.0403  0.1311
Page-Blocks          0.0135  0.0144  0.0177  0.0123  0.0139  0.0111  0.0137
Optdigits            0.0153  0.0185  0.0254  0.0752  0.0162  0.0133  0.0364
Satellite            0.0139  0.0368  0.0455  0.0517  0.0395  0.0325  0.0001
Mushrooms            0.0043  0.0002  0.0002  0.0001  0.0001  0.0001  0.0239
Thyroid              0.0205  0.0252  0.0272  0.0453  0.0239  0.0202  0.0241
Letter-Recog         0.0471  0.0591  0.0113  0.0422  0.0523  0.0709  0.0417
Adult                0.0069  0.0165  0.0285  0.0108  0.0236  0.0104  0.0004
Connect-4            0.0156  0.0149  0.0309  0.0127  0.0373  0.0199  0.0023
Waveform             0.0009  0.0053  0.0037  0.0024  0.0059  0.0023  0.0632
Census-Income        0.0052  0.0100  0.0110  -       0.0144  0.0138  0.0224
Poker-Hand           0.0000  0.0424  0.0633  -       0.0440  0.0273  0.0602

References

1. Acid, S.; Campos, L.M.; Castellano, J.G. Learning Bayesian network classifiers: Searching in a space of partially directed acyclic graphs. Mach. Learn. 2005, 59, 213–235.
2. Hand, D.J.; Yu, K. Idiot’s Bayes not so stupid after all? Int. Stat. Rev. 2001, 69, 385–398.
3. Kontkanen, P.; Myllymaki, P.; Silander, T.; Tirri, H. BAYDA: Software for Bayesian classification and feature selection. In Proceedings of the 4th International Conference on Knowledge Discovery and Data Mining (KDD-1998); AAAI Press: Menlo Park, CA, USA, 1998; pp. 254–258.
4. Langley, P.; Sage, S. Induction of selective Bayesian classifiers. In Uncertainty Proceedings 1994; Morgan Kaufmann: Burlington, MA, USA, 1994; pp. 399–406.
5. Ekdahl, M.; Koski, T. Bounds for the loss in probability of correct classification under model based approximation. J. Mach. Learn. Res. 2006, 7, 2449–2480.
6. Yang, Y.; Webb, G.I.; Cerquides, J. To select or to weigh: A comparative study of linear combination schemes for superparent-one-dependence estimators. IEEE Trans. Knowl. Data Eng. 2007, 19, 1652–1665.
7. Pernkopf, F.; Wohlmayr, M. Stochastic margin-based structure learning of Bayesian network classifiers. Pattern Recognit. 2013, 46, 464–471.
8. Xiao, J.; He, C.; Jiang, X. Structure identification of Bayesian classifiers based on GMDH. Knowl. Based Syst. 2009, 22, 461–470.
9. Louzada, F.; Ara, A. Bagging k-dependence probabilistic networks: An alternative powerful fraud detection tool. Expert Syst. Appl. 2012, 39, 11583–11592.
10. Pazzani, M.; Billsus, D. Learning and revising user profiles: The identification of interesting web sites. Mach. Learn. 1997, 27, 313–331.
11. Hall, M.A. Correlation-Based Feature Selection for Machine Learning. Ph.D. Thesis, University of Waikato, Hamilton, New Zealand, 1999.
12. Jiang, L.X.; Cai, Z.H.; Wang, D.H.; Zhang, H. Improving tree augmented naive Bayes for class probability estimation. Knowl. Based Syst. 2012, 26, 239–245.
13. Grossman, D.; Domingos, P. Learning Bayesian network classifiers by maximizing conditional likelihood. In International Conference on Machine Learning; ACM: Hyères, France, 2004.
14. Ruz, G.A.; Pham, D.T. Building Bayesian network classifiers through a Bayesian complexity monitoring system. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2009, 223, 743–755.
15. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann: San Francisco, CA, USA, 1988.
16. Jingguo, D.; Jia, R.; Wencai, D. An improved evolutionary approach-based hybrid algorithm for Bayesian network structure learning in dynamic constrained search space. In Neural Computing and Applications; Springer: Berlin, Germany, 2018; pp. 1–22.
17. Jiang, L.; Zhang, L.; Li, C.; Wu, J. A correlation-based feature weighting filter for naive Bayes. IEEE Trans. Knowl. Data Eng. 2019, 31, 201–213.
18. Jiang, L.; Li, C.; Wang, S.; Zhang, L. Deep feature weighting for naive Bayes and its application to text classification. Eng. Appl. Artif. Intell. 2016, 52, 26–39.
19. Zhao, Y.; Chen, Y.; Tu, K. Learning Bayesian network structures under incremental construction curricula. Neurocomputing 2017, 258, 30–40.
20. Wu, J.; Cai, Z. A naive Bayes probability estimation model based on self-adaptive differential evolution. J. Intell. Inf. Syst. 2014, 42, 671–694.
21. Friedman, N.; Geiger, D.; Goldszmidt, M. Bayesian network classifiers. Mach. Learn. 1997, 29, 131–163.
22. Chow, C.K.; Liu, C.N. Approximating discrete probability distributions with dependence trees. IEEE Trans. Inf. Theory 1968, 14, 462–467.
23. Shannon, C.E.; Weaver, W. The mathematical theory of communication. Bell Labs Tech. J. 1950, 3, 31–32.
24. Bielza, C. Discrete Bayesian network classifiers: A survey. ACM Comput. Surv. (CSUR) 2014, 47, 1–43.
25. Petitjean, F.; Buntine, W.; Webb, G.I. Accurate parameter estimation for Bayesian network classifiers using hierarchical Dirichlet processes. Mach. Learn. 2018, 107, 1303–1331.
26. Ziebart, B.D.; Dey, A.K.; Bagnell, J.A. Learning selectively conditioned forest structures with applications to DBNs and classification. In Proceedings of the 23rd Annual Conference on Uncertainty in Artificial Intelligence; AUAI Press: Corvallis, OR, USA, 2007; pp. 458–465.
27. Jing, Y.; Pavlovic, V.; Rehg, J.M. Boosted Bayesian network classifiers. Mach. Learn. 2008, 73, 155–184.
28. Sahami, M. Learning limited dependence Bayesian classifiers. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining; AAAI: Menlo Park, CA, USA, 1996; pp. 335–338.
29. Luo, L.; Yang, J.; Zhang, B. Nonparametric Bayesian correlated group regression with applications to image classification. IEEE Trans. Neural Netw. Learn. Syst. 2018, 99, 1–15.
30. Pernkopf, F.; Bilmes, J.A. Efficient heuristics for discriminative structure learning of Bayesian network classifiers. J. Mach. Learn. Res. 2010, 11, 2323–2360.
31. Sun, L.; Kudo, M. Optimization of classifier chains via conditional likelihood maximization. Pattern Recognit. 2018, 74, 503–517.
32. Fayyad, U.M.; Irani, K.B. Multi-interval discretization of continuous valued attributes for classification learning. In Proceedings of the 5th International Joint Conference on Artificial Intelligence; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1993; pp. 1022–1029.
33. Webb, G.I.; Boughton, J.R.; Wang, Z. Not so naive Bayes: Aggregating one-dependence estimators. Mach. Learn. 2005, 58, 5–24.
34. Jiang, L.; Zhang, H. Weightily averaged one-dependence estimators. In Proceedings of the 9th Biennial Pacific Rim International Conference on Artificial Intelligence; Springer: Berlin, Germany, 2006; pp. 970–974.
35. Kohavi, R.; Wolpert, D. Bias plus variance decomposition for zero-one loss functions. In Proceedings of the Thirteenth International Conference on Machine Learning; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1996; pp. 275–283.
36. Cestnik, B. Estimating probabilities: A crucial task in machine learning. Proc. Ninth Eur. Conf. Artif. Intell. 1990, 147–149.
37. McLachlan, G. Discriminant Analysis and Statistical Pattern Recognition; John Wiley & Sons: Hoboken, NJ, USA, 2004.
38. Roos, T.; Wettig, H.; Grunwald, P. On discriminative Bayesian network classifiers and logistic regression. Mach. Learn. 2005, 59, 267–296.
39. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30.
40. Nemenyi, P. Distribution-Free Multiple Comparisons. Ph.D. Thesis, Princeton University, Princeton, NJ, USA, 1963.
Figure 1. Examples of different BNCs. (a) Naive Bayes, (b) Tree-augmented naive Bayes, (c) k-dependence Bayesian Classifier.
Figure 2. The learning procedure of TAN on Balance-Scale.
Figure 3. Distributions of $I(X_i;\Pi_i \mid Y)$ and $\sum_{X_j \in \Pi_i} I(X_i;X_j \mid Y)$ on Census-Income.
Figure 4. $I(X_i;X_j \mid y)$ with different class labels on Census-Income.
Figure 5. The learning procedure of ETAN with k = 2 on Balance-Scale.
Figure 6. Comparison between LR and ETAN in terms of zero-one loss.
Figure 7. Nemenyi test for all algorithms.
Table 1. Data sets.

No.  Data Set            Inst.   Att.  Class     No.  Data Set        Inst.     Att.  Class
1    Labor               57      16    2         21   Vowel           990       13    11
2    Labor-Negotiations  57      16    2         22   Led             1000      7     10
3    Lymphography        150     4     3         23   Car             1728      6     4
4    Iris                150     4     3         24   Hypothyroid     3163      25    2
5    Hungarian           294     13    2         25   Dis             3772      29    2
6    Heart-Disease-C     303     13    2         26   Sick            3772      29    2
7    Soybean-Large       307     35    19        27   Abalone         4177      8     3
8    Ionosphere          351     34    2         28   Spambase        4601      57    2
9    House-Votes-84      435     16    2         29   Waveform-5000   5000      40    3
10   Musk1               476     166   2         30   Page-Blocks     5473      10    5
11   Cylinder-Bands      540     39    2         31   Optdigits       5620      64    10
12   Chess               551     39    2         32   Satellite       6435      36    6
13   Syncon              600     60    6         33   Mushrooms       8124      22    2
14   Balance-Scale       625     4     3         34   Thyroid         9169      29    20
15   Soybean             683     35    19        35   Letter-Recog    20000     26    2
16   Credit-A            690     15    2         36   Adult           48842     14    2
17   Breast-Cancer-W     699     9     2         37   Connect-4       67557     42    3
18   Pima-Ind-Diabetes   768     8     2         38   Waveform        100000    21    3
19   Vehicle             846     18    4         39   Census-Income   299285    41    2
20   Anneal              898     38    6         40   Poker-Hand      1025010   10    10
Table 2. The records of win/draw/loss for BNCs and our algorithms.

Zero-one loss
  BNC     NB        TAN        KDB        AODE      WAODE
  TAN     27/5/8    -          -          -         -
  KDB     25/10/5   16/13/11   -          -         -
  AODE    29/8/3    13/15/12   13/14/13   -         -
  WAODE   28/7/5    19/14/7    18/13/9    14/19/7   -
  ETAN    30/6/4    21/11/8    19/12/9    18/13/9   15/15/10

Bias
  BNC     NB        TAN        KDB        AODE      WAODE
  TAN     28/5/7    -          -          -         -
  KDB     26/8/6    18/14/8    -          -         -
  AODE    31/7/2    14/10/16   13/6/21    -         -
  WAODE   24/2/14   19/4/17    18/4/18    18/4/18   -
  ETAN    32/4/4    18/14/8    9/19/12    20/13/7   19/3/18

Variance
  BNC     NB        TAN        KDB        AODE      WAODE
  TAN     6/2/32    -          -          -         -
  KDB     9/2/29    10/7/23    -          -         -
  AODE    10/11/19  30/3/7     29/3/8     -         -
  WAODE   12/5/22   21/3/16    21/1/18    12/4/24   -
  ETAN    4/5/31    15/8/17    24/6/10    3/6/31    18/3/19
Table 3. The records of win/draw/loss for LR and our algorithms.

                        LR
ETAN   Zero-one loss    26/4/8
       Bias             22/5/11
       Variance         21/3/14
