Article

Averaged Extended Tree Augmented Naive Classifier

EEECS, Queen's University Belfast, University Road, Belfast BT7 1NN, UK
* Author to whom correspondence should be addressed.
Entropy 2015, 17(7), 5085-5100; https://doi.org/10.3390/e17075085
Submission received: 8 June 2015 / Revised: 10 June 2015 / Accepted: 17 June 2015 / Published: 21 July 2015
(This article belongs to the Special Issue Inductive Statistical Methods)

Abstract

This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

1. Introduction

Bayesian network classifiers are based on learning the relations of (conditional) independence among variables (also called attributes) in a domain in order to predict the label (or state) of a targeted variable (also called class). They have been shown to perform well with respect to other general purpose classifiers [1,2]. Some well-known examples are the Naive Bayes and the Tree Augmented Naive Bayes (TAN) classifiers. Naive Bayes takes the very straightforward approach of assuming that all attributes are parented by the class and are conditionally independent of each other given the class. TAN, on the other hand, weakens this assumption of independence by using a tree structure wherein each attribute directly depends on the class and one other attribute. Both of these classifiers are based on Bayesian networks [3,4]. Because Naive Bayes uses a much simpler model, it generally fits data less well than TAN, and so is less likely to overfit than TAN, which forces attributes to be dependent on each other. Irrespective of their structural constraints, Naive Bayes and TAN have been shown to perform extremely well when compared to unrestricted Bayesian network classifiers [2] (that is, Bayesian networks without any constraint on their graph structure). Unrestricted Bayesian networks also have a disadvantage in terms of computational complexity: learning their structure (or essentially the Markov blanket of the class node) is an NP-hard task [5,6]. In this work we focus on Bayesian network classifiers that can be learned exactly and efficiently from data, that is, we aim at learning both the structure and the parameters of such networks with a tractable exact approach. Learning can be performed in a generative manner (aiming at the joint distribution) or in a discriminative way (with respect to the targeted class variable). Because we are interested in exact and efficient learning, we choose to perform generative learning. This decision precludes us from using discriminative learning of Bayesian network classifiers [7–10], which has been shown to perform well even though learning is conducted using approximate methods. On the other hand, our focus on exact and efficient learning allows us to conclude that the differences in the results are actually differences in the quality of the models and not due to some possibly sub-optimal solution during learning, while we acknowledge that other classifiers with greater accuracy (but approximate learning) might exist if we expanded our scope.
Very recently, the Extended TAN (ETAN) classifier [11,12] has been created to fill the gap in model complexity between Naive Bayes and TAN, in that it can detect the best model of dependencies among attributes without forcing them to be either conditionally independent or directly dependent. This approach allows ETAN to adapt to the characteristics of the data and usually produces models that fit better than both TAN and Naive Bayes while avoiding increased complexity. It is empirically shown that ETAN achieves classification accuracy comparable to the Averaged One-Dependence Estimator (AODE) classifier, one of the state-of-the-art general purpose classification methods [13]. However, ETAN is still usually outperformed by AODE.
Herein we present two simple improvements to ETAN that we can show to be effective. The first improvement builds on the structure of dependencies of ETAN by extending its search space to include even further models of (conditional) independence among the variables in the domain. This wider space of models can fit the data better than ETAN can. The second, and possibly more significant, extension is to build upon that by applying model averaging: instead of picking the most probable model of independences, we average over a set of possible models. This is a promising approach, as it goes in the direction of a full Bayesian estimation of the model. In fact, AODE, a high performing classifier, uses a similar technique. Our approach differs from AODE in that, by building upon ETAN, we have a wider range of possible models than AODE (which averages only Naive Bayes classifiers), so in theory our classifier should build better models. We will empirically demonstrate this assertion. Some classifiers in the literature are related to this work and deserve a mention. Keogh and Pazzani [14] propose an extension of Naive Bayes that goes in the same direction as AODE and AETAN; however, their approach is not globally optimal. Qiu et al. [15] build a graph structure similar to that of AODE, but use a discriminative learning approach. Other attempts to improve on Naive Bayes and TAN have been proposed [16,17], but they usually do not focus on exact and efficient learning.
This document is organized as follows. Section 2 describes the problem of classification and the most common Bayesian network classifiers. These are general purpose classifiers, in the sense that nothing but the data set is exploited (no specific domain knowledge or other information). Section 3 describes the accuracy measures that are used for comparing classifiers in this work, and then Section 4 presents the new classifier and the details of how to implement it. Section 5 describes our experimental setting and results, and finally Section 6 concludes the work and points to future developments.

2. Background

We assume that a data set $D = \{D_1, D_2, \ldots, D_m\}$ without missing values is available, where each $D_i$ is a vector observation for the $n + 1$ variables $\mathbf{X}$, composed of a target (or class) variable $C$ and $n$ covariates (or attributes) $\mathbf{Y}$. By using these data, we want to build a probabilistic model with which we can evaluate the posterior probability of $C$ given a vector $\mathbf{y}$ of observations for $\mathbf{Y}$, in order to make a prediction about the unknown class label. This problem is usually called supervised classification. That is, given a class $C \in \mathbf{X}$ and some attribute observations $\mathbf{y}$, where $\mathbf{Y} = \mathbf{X} \setminus \{C\}$, we use the classifier, previously built from $D$, to calculate the posterior probability function $p(C \mid \mathbf{y})$, which can then be used, for instance, to guess the most probable label of $C$. This probability function $p$ is obtained from the distribution that we have learned from $D$, which might have an underlying complicated set of conditional independences among the variables in $\mathbf{X}$. This set of conditional independences is encoded by a directed acyclic graph (DAG) where each variable is associated with a node (by a one-to-one correspondence, so we might refer to either nodes or variables with the same meaning) and the arcs are used to define relationships between nodes. An arc from $X_i$ to $X_j$ denotes that $X_i$ is a parent of $X_j$ (and so $X_j$ is a child of $X_i$). The DAG encodes a Markov condition: every variable is conditionally independent of its non-descendant variables given its parents. By using such a representation, we can write
$$p(C \mid \mathbf{y}) \propto p(C \mid \pi_C) \prod_{i=1}^{n} p(y_i \mid \pi_{Y_i}),$$
where $\pi_{Y_i}$ and $\pi_C$ are observations for the parents of $Y_i$ and $C$, respectively, such that they are consistent with the vector $\mathbf{y}$. Hence, the challenge is to build this model, composed of a DAG $G$ and probability functions $p(X \mid \Pi_X)$ for the variables $X \in \mathbf{X}$.
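To make this factorization concrete, the following minimal Python sketch computes $p(C \mid \mathbf{y})$ by multiplying the local conditional probabilities and normalizing over the class states. It is illustrative only: the dictionary-based CPT layout and the variable names are assumptions of this sketch, not part of the paper.

```python
# Minimal sketch: class posterior p(C | y) from a Bayesian network factorization.
# CPTs are dictionaries keyed by (node value, parent-value tuple); the structure
# is given by a parent tuple per node. Purely illustrative toy representation.

def class_posterior(class_states, parents, cpts, y):
    """parents: dict node -> tuple of parent nodes (class node named 'C');
    cpts: dict node -> {(value, parent_values): probability};
    y: dict attribute -> observed value."""
    scores = {}
    for c in class_states:
        assignment = dict(y, C=c)            # full instantiation of all variables
        score = 1.0
        for node, pa in parents.items():
            pa_vals = tuple(assignment[p] for p in pa)
            score *= cpts[node][(assignment[node], pa_vals)]
        scores[c] = score
    z = sum(scores.values())                 # normalize over the class states
    return {c: s / z for c, s in scores.items()}

# Toy example: class C and one attribute Y1 whose only parent is C.
cpts = {
    "C":  {(0, ()): 0.6, (1, ()): 0.4},
    "Y1": {(0, (0,)): 0.9, (1, (0,)): 0.1, (0, (1,)): 0.2, (1, (1,)): 0.8},
}
parents = {"C": (), "Y1": ("C",)}
print(class_posterior([0, 1], parents, cpts, {"Y1": 1}))
```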
To build such a model, we take the (training) data set $D$ and find the graph $G^*$ that maximizes its posterior probability, that is, $G^* = \operatorname{argmax}_{G \in \mathcal{G}}\, p(G \mid D)$, with $\mathcal{G}$ the set of all DAGs over the node set $\mathbf{X}$ and
$$p(G \mid D) \propto p(G) \int p(D \mid G, \theta)\, p(\theta \mid G)\, \mathrm{d}\theta,$$
where $p(\theta \mid G)$ is the prior of $\theta$ for a given graph $G$, and $\theta = \{\theta_{ijk}\}_{ijk}$ is the entire vector of model parameters such that $\theta_{ijk}$ is associated with $p(X_i = k \mid \Pi_i = j)$, where $k \in \{1, \ldots, r_i\}$, $j \in \{1, \ldots, r_{\Pi_i}\}$ and $i \in \{0, \ldots, n\}$ ($r_Z = \prod_{X \in Z} r_X$ denotes the number of states of the joint variable $Z$ and $r_X$ the number of states of a single variable $X$). The prior is assumed to be a symmetric Dirichlet with positive hyper-parameter $\alpha^*$:
$$p(\theta \mid G) = \prod_{i=1}^{n} \prod_{j=1}^{r_{\Pi_i}} \Gamma\!\left(\frac{\alpha^*}{r_{\Pi_i}}\right) \prod_{k=1}^{r_i} \frac{\theta_{ijk}^{\frac{\alpha^*}{r_{X_i} r_{\Pi_i}} - 1}}{\Gamma\!\left(\frac{\alpha^*}{r_{X_i} r_{\Pi_i}}\right)}.$$
$\alpha^*$ is usually referred to as the Equivalent Sample Size (ESS). The resulting value of $p(G \mid D)$ is also known as the Bayesian Dirichlet Equivalent Uniform (BDeu) score [18,19], where we assume parameter independence and modularity [20]. We further assume that there is no preference for any graph and set $p(G)$ as uniform, so the score can be written as $s_D(G) = p(D \mid G)$. Under these assumptions, it has been shown [19] that the graph maximizing the posterior probability is the argument of
$$\max_{G} \sum_{i=0}^{n} \sum_{j=1}^{r_{\Pi_i}} \left[ \mathrm{l}\Gamma\!\left(\frac{\alpha^*}{r_{\Pi_i}}\right) - \mathrm{l}\Gamma\!\left(\frac{\alpha^*}{r_{\Pi_i}} + N_{ij}\right) + \sum_{k=1}^{r_i} \left( \mathrm{l}\Gamma\!\left(\frac{\alpha^*}{r_{X_i} r_{\Pi_i}} + N_{ijk}\right) - \mathrm{l}\Gamma\!\left(\frac{\alpha^*}{r_{X_i} r_{\Pi_i}}\right) \right) \right],$$
where $\mathrm{l}\Gamma$ is the log-gamma function and $N_{ijk}$ indicates how many elements in $D$ contain both $X_i = k$ and $\Pi_i = j$ (with $N_{ij} = \sum_k N_{ijk}$). The values $\{N_{ijk}\}_{ijk}$ depend on the graph $G$ (more specifically, they depend on the parent set $\Pi_i$ of each $X_i$), so a more precise notation would be $N_{ijk}^{\Pi_i}$ instead of $N_{ijk}$, which we avoid for ease of exposition.
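As an illustration of how such a local term can be computed in practice, here is a small Python sketch of the BDeu local score for one node and a candidate parent set, counted directly from integer-encoded categorical data. It is a sketch under stated assumptions (the data layout, the function name `local_bdeu`, and the use of `math.lgamma` as the log-gamma are ours), not the authors' implementation.

```python
import math
from collections import defaultdict

def local_bdeu(data, i, parents, cardinalities, alpha=5.0):
    """Local BDeu score s_D(X_i, Pi_i) for column i of `data` with the given
    parent columns. Rows are integer-encoded; cardinalities[c] is the number
    of states of column c; alpha is the equivalent sample size."""
    r_i = cardinalities[i]
    q_i = 1
    for p in parents:                                  # r_{Pi_i}: number of parent configurations
        q_i *= cardinalities[p]
    counts = defaultdict(lambda: defaultdict(int))     # counts[parent cfg][k] = N_ijk
    for row in data:
        counts[tuple(row[p] for p in parents)][row[i]] += 1
    score = 0.0
    for j in range(q_i):                               # enumerate parent configurations (mixed radix)
        cfg, rem = [], j
        for p in parents:
            cfg.append(rem % cardinalities[p])
            rem //= cardinalities[p]
        cfg = tuple(cfg)
        n_ij = sum(counts[cfg].values())
        score += math.lgamma(alpha / q_i) - math.lgamma(alpha / q_i + n_ij)
        for k in range(r_i):
            n_ijk = counts[cfg][k]
            score += (math.lgamma(alpha / (r_i * q_i) + n_ijk)
                      - math.lgamma(alpha / (r_i * q_i)))
    return score

# Toy usage: 4 rows over 2 binary variables; score column 1 with column 0 as parent.
data = [[0, 0], [0, 1], [1, 1], [1, 1]]
print(local_bdeu(data, 1, [0], [2, 2]))
```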
In order to find the graph representing the best set of conditional independences over the space of all possible DAGs $\mathcal{G}$, multiple approaches have been proposed in the literature, including score properties to reduce the search space (De Campos and Ji [21]), dynamic programming (Silander and Myllymaki [22]), branch-and-bound [23], A* and path-finding methods [24], and mixed integer linear programming [25], among others. These methods do not scale with the number of variables, which is to be expected, as the problem is known to be NP-hard in general and even if we limit each node to have at most two parents [6,26]. Therefore, some simplifying assumptions must be made in order to achieve practical running times and tractable computation.
There are two main possibilities for building classifiers on top of Bayesian networks that are tractable: the Naive Bayes classifier and the Tree Augmented Naive Bayes (TAN) classifier. The Naive Bayes classifier (equivalently a log-linear model) is the simplest of the Bayesian network classifiers; it assumes that $p(Y_i \mid C, Y_j) = p(Y_i \mid C)$ for any $i \neq j$. Its graphical representation is a directed tree with the class $C$ as the sole root and all attributes as leaves with the class as their only parent; Figure 1b illustrates an example. As such, a Naive Bayes classifier has a constant structural shape irrespective of the training data. This restriction will often lead to underfitting on the part of a Naive Bayes classifier; however, it still performs quite well (when scored on zero-one accuracy; see Section 3 for a description of accuracy measures) [27,28]. Friedman [29], among others, has attempted to answer the question of why Naive Bayes does so well despite its deficiencies. Friedman also suggested that misclassification error be decomposed into bias and variance errors; in this decomposition, the bias error captures the error from erroneous assumptions made during learning, while the variance error captures the error from the sensitivity of the classifier's parameters to the training data. In these terms, the Naive Bayes classifier has low variance, as it does not have a large number of parameters to estimate, but high bias due to its strict assumption of independence.
The Tree Augmented Naive Bayes (TAN) classifier, presented in [2], was an attempt to reduce this high bias by relaxing the assumption of independence between attributes. It does this by allowing a more complex graph model in which each attribute has the class and one other attribute as its parents, except for a single attribute which has only the class as a parent. This approach can be seen as a middle ground between general Bayesian networks, which can learn without constraints, and the rigid structure of Naive Bayes, in which each attribute has only the class as a parent. As we can see in [2,30–32], TAN is shown to outperform not only general Bayesian networks, but Naive Bayes as well. Figure 1c shows a possible graph model with TAN, from which we can see that, compared with the naive model in Figure 1b, a more complex and possibly better fitting model can be learned. Under TAN we now see that two of the attributes in our example are dependent upon the third. Also of note is that if one were to remove the class (including its arcs) from the TAN classifier model, so that the graph was made up only of the attributes, then we would be left with a directed tree; this is the origin of its name.
By using the BDeu scoring function, one can devise an efficient algorithm for TAN, because BDeu is decomposable, that is, $\log p(D \mid G)$ can be written as a sum $\sum_{i=0}^{n} s_D(X_i, \Pi_i)$ of local computations involving a node and its parents in the graph $G$. BDeu is also likelihood equivalent: a score function is likelihood equivalent if, given two DAGs $G_1$ and $G_2$ over $\mathbf{X}$ such that the set of conditional independences encoded in $G_1$ equals that of $G_2$, we have $p(G_1 \mid D) = p(G_2 \mid D)$. Using these two properties, one can apply a simple minimum spanning tree algorithm to find the best possible graph for the TAN classifier [33] in time quadratic in $n$: $G^*_{\mathrm{TAN}} = \operatorname{argmax}_{G \in \mathcal{G}_{\mathrm{TAN}}}\, p(G \mid D)$, where $\mathcal{G}_{\mathrm{TAN}}$ is the set of all TAN structures with nodes $\mathbf{X}$.
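A compact sketch of this construction is given below: the edge weights are the score gains $s_D(Y_i, \{C, Y_j\}) - s_D(Y_i, \{C\})$, which are symmetric under likelihood equivalence, and a maximum-weight spanning tree over the attributes is then oriented away from an arbitrary root. The sketch reuses the illustrative `local_bdeu` helper from the previous block (an assumption of this sketch) and is not the authors' code; a real implementation would cache the local scores rather than recompute them.

```python
def learn_tan(data, cardinalities, class_col=0, alpha=5.0):
    """Return a parent dict for a TAN structure: every attribute gets the class
    plus (except for the root attribute) one other attribute as parents."""
    attrs = [i for i in range(len(cardinalities)) if i != class_col]

    # Symmetric edge weight: gain of adding Y_j as an extra parent of Y_i.
    def gain(i, j):
        return (local_bdeu(data, i, [class_col, j], cardinalities, alpha)
                - local_bdeu(data, i, [class_col], cardinalities, alpha))

    # Prim's algorithm for a maximum-weight spanning tree over the attributes.
    root = attrs[0]
    in_tree, tree_parent = {root}, {root: None}
    while len(in_tree) < len(attrs):
        best = None
        for u in in_tree:
            for v in attrs:
                if v not in in_tree and (best is None or gain(v, u) > best[0]):
                    best = (gain(v, u), u, v)
        _, u, v = best
        in_tree.add(v)
        tree_parent[v] = u                  # orient the chosen edge away from the root

    parents = {class_col: []}
    for a in attrs:
        parents[a] = [class_col] + ([tree_parent[a]] if tree_parent[a] is not None else [])
    return parents
```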
Note that TAN forces a tree to cover all the attributes, as well as ensuring that they are all children of the class. This might be undesirable if the data are not sufficient to support such a complex structure of dependencies. With that in mind, we have very recently proposed the Extended TAN (or ETAN) classifier [11], intended to allow a trade-off in model complexity between the very simple Naive and the more sophisticated TAN models. ETAN has the additional ability of learning from data in which circumstances an attribute $Y_i$ should be a child of $C$, as well as identifying whether attributes should be directly dependent on each other or not, without having to force a tree to cover them all. More formally, we can define the ETAN graph (representing its conditional independences) as a DAG such that, for each attribute $Y_i$, $|\Pi_i| \leq 1$, or $|\Pi_i| = 2$ and $\Pi_i \supseteq \{C\}$. The class $C$ still has no parents. So, we search for $G^*_{\mathrm{ETAN}} = \operatorname{argmax}_{G \in \mathcal{G}_{\mathrm{ETAN}}}\, p(G \mid D)$, where $\mathcal{G}_{\mathrm{ETAN}}$ is the set of all DAGs satisfying the aforementioned restrictions. In Figure 1a,d we can see some structures that are possible within the rules of ETAN. We point out that the ETAN graphs encompass both the Naive Bayes and TAN graphs. In this case, the implementation relies on the well-known Edmonds' algorithm [34] (also attributed to [35]) as a means of finding the ETAN graph of maximum posterior probability. While much more complicated in its implementation, ETAN can still run in time quadratic in $n$ [12].
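The structural constraint that defines ETAN graphs can be stated as a simple membership test; the short sketch below (illustrative only, using a parent dictionary as in the earlier sketches) checks whether a candidate structure is a valid ETAN graph.

```python
def is_etan_structure(parents, class_node="C"):
    """parents: dict node -> collection of parents. Valid ETAN structure: the class
    has no parents, and every attribute has at most one parent, or exactly two
    parents one of which is the class."""
    if parents.get(class_node):
        return False
    for node, pa in parents.items():
        if node == class_node:
            continue
        pa = set(pa)
        if len(pa) <= 1:
            continue
        if len(pa) == 2 and class_node in pa:
            continue
        return False
    return True

# Example: Y2 depends on C and Y1; Y1 depends only on Y3; Y3 has no parent.
print(is_etan_structure({"C": [], "Y1": ["Y3"], "Y2": ["C", "Y1"], "Y3": []}))  # True
```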

3. Accuracy Measures

Clearly we must have some way of measuring the accuracy of a classifier in order to understand which is the best for our purposes. The first measure of quality of a given classifier is the value of the posterior probability of its graph $G$ (or, equivalently under the given assumptions, the probability $p(D \mid G)$ of the training data under such a model), also called the Bayesian Dirichlet equivalent uniform (BDeu) score [18]. This is a measure of fitness and not of classification accuracy, but it is still very relevant to understanding the difference between the approaches.
When we compare the effectiveness of the classifiers on our testing data (which is not used for inferring the classifiers), two main measures can be used [4]. The zero-one accuracy is a measure of correctly predicted class labels. If we let $T = \{(\mathbf{y}_1, c_1), \ldots, (\mathbf{y}_N, c_N)\}$ be the testing instances $\mathbf{y}_i$ and their corresponding class labels $c_i$, where $\mathbf{y}_i$ is a vector observation for the attributes $\mathbf{Y}$ as defined previously, then the zero-one accuracy is such that
$$\mathrm{zo}(T) = \frac{1}{N} \sum_{i=1}^{N} \mathbb{I}\!\left( c_i = \operatorname*{argmax}_{j}\, p(C = j \mid \mathbf{y}_i) \right),$$
where $\mathbb{I}$ is the indicator function; the zero-one accuracy is thus the percentage of successfully classified labels. Another important measure is the log-loss function. Simply put, the objective of the log-loss function is to approximately compare the posterior probability generated by the classifier with the true posterior probability of the class using the Kullback–Leibler distance [36]. Note that this is an approximation, as only with the number of testing samples $N \to \infty$ would we obtain the true difference between them. We want the estimated posterior probability of the class to be as close as possible to the true one; therefore, the most accurate classifier in this case is the one with the lowest log loss:
$$\mathrm{logl}(T) = -\sum_{i=1}^{N} \log p(C = c_i \mid \mathbf{y}_i).$$
These measures are going to be used in Section 5 to evaluate the quality of the classifiers.
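For concreteness, both measures can be computed from the per-instance class posteriors as in the short Python sketch below. It is a generic illustration, not tied to any particular classifier; the small epsilon used to avoid taking the logarithm of zero is our own assumption.

```python
import math

def zero_one_accuracy(posteriors, labels):
    """posteriors: list of dicts {class label: p(C = label | y_i)}; labels: true labels."""
    hits = sum(1 for p, c in zip(posteriors, labels) if max(p, key=p.get) == c)
    return hits / len(labels)

def log_loss(posteriors, labels, eps=1e-12):
    """Negative sum of log-probabilities of the true labels (lower is better)."""
    return -sum(math.log(max(p.get(c, 0.0), eps)) for p, c in zip(posteriors, labels))

posteriors = [{"a": 0.7, "b": 0.3}, {"a": 0.2, "b": 0.8}]
print(zero_one_accuracy(posteriors, ["a", "a"]))  # 0.5
print(log_loss(posteriors, ["a", "a"]))           # -log(0.7) - log(0.2)
```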

4. Averaged ETAN

In [12] it has been shown that the ETAN classifier performs quite well; in time complexity it is not worse than TAN, and it generates models that fit the data better than both TAN and Naive Bayes. Although TAN is shown to have better zero-one accuracy in most cases, the log-loss accuracy of ETAN was shown to be significantly better than that of TAN, making ETAN a good choice in situations where an accurate model of the data is as important as an accurate classification result. It was also found that the relative performance of ETAN improves as the number of attributes in the domain increases. However, ETAN was only slightly competitive with the Averaged One-Dependence Estimator (AODE) classifier, one of the state-of-the-art general purpose classification methods [13]. The idea behind AODE is to choose an attribute and elect it to be a parent of the class, while keeping the remaining graph constraints unchanged. Because AODE is based on the Naive Bayes classifier, the result is a graph that has the class plus a super attribute linked together, and then all other attributes as children of them both. We extend this idea with the already good properties of ETAN to create the so-called Averaged ETAN (or AETAN).
First, we widen the range of possible models of ETAN by adding the possibility of a super attribute on top of the existing ETAN algorithm. The super attribute is an attribute S chosen to act as a parent of the class C. We iterate over all possible attributes one at a time (and also without any super attribute, in order to ensure that our approach generalizes ETAN). For each candidate super attribute, we find the best ETAN graph over the remaining attributes Y \ {S} such that S is a parent of C. An example is shown in Figure 2.
The graphs $G_S$ discovered for the different choices of super attribute $S$ are stored, together with their probabilities $p(G_S, D)$, into an averaged model. The class prediction for a new observation vector $\mathbf{y}$ under the averaging model is then computed by the class posterior probability function:
$$p(C \mid \mathbf{y}, D) \propto \sum_{G_S} p(G_S, D)\, p_S(C, \mathbf{y} \mid G_S, D).$$
Such averaging is expected to improve classification accuracy, as it makes the inference process closer to a full Bayesian approach (even if the averaging is performed only over a very small set of graphs). In practice, this averaging usually increases classification accuracy while reducing the impact that any single graph has on the overall result. The non-obvious part is how to learn an ETAN model with an extra super attribute $S$.
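The averaging step itself is simple; the sketch below (illustrative only, normalizing the log-scores with a soft-max for numerical stability, which is our own choice) combines the per-graph class posteriors into the final prediction.

```python
import math

def averaged_posterior(log_scores, per_model_posteriors):
    """log_scores: list of log-scores, one per stored graph G_S;
    per_model_posteriors: list of dicts {class label: probability} per graph."""
    m = max(log_scores)
    weights = [math.exp(s - m) for s in log_scores]       # proportional to p(G_S, D)
    combined = {}
    for w, post in zip(weights, per_model_posteriors):
        for c, p in post.items():
            combined[c] = combined.get(c, 0.0) + w * p    # weighted mixture of posteriors
    z = sum(combined.values())                            # renormalize over the classes
    return {c: v / z for c, v in combined.items()}

print(averaged_posterior([-10.0, -12.0],
                         [{"a": 0.6, "b": 0.4}, {"a": 0.1, "b": 0.9}]))
```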
The implementation of AETAN is naturally based on that of ETAN, but some non-trivial improvements are needed. Algorithm 1 contains the pseudo-code for AETAN. We give as input the variables $\mathbf{X}$ and a score function $s_D$, as the method works with any decomposable score function. In each iteration of the main loop, the algorithm adds the best graph found for the current super attribute to a list of all inferred graphs. Once finished, it returns the list of graphs ready to be averaged at testing time (the probability of each graph, which is proportional to its score, is also returned).
This implementation first initializes a temporary graph through the call to ArcsCreation and then makes use of Edmonds' algorithm for finding the minimum directed weighted spanning forest. The overall time complexity is $O(n^3 m)$, which is $n$ times slower than ETAN or TAN. This comes from a simple analysis of ArcsCreation, which is $O(n^2 m)$, and of Edmonds' algorithm, whose contract step is $O(n^2)$ while its expand step is $O(n)$. After the list of models is inferred, the testing procedure defined by the averaging equation above takes time $O(n^2)$ per evaluation, which is asymptotically the same time complexity as that of AODE (but much slower than TAN and ETAN).
Algorithm 1. AETAN(X, sD)
  list ← ∅
  for all S ∈ X \ {C} do
      (arcs, otherParents) ← ArcsCreation(X, S, sD)
      G ← (X, arcs)
      EdmondsContract(G)
      G* ← null
      for all root ∈ X \ {C, S} do
          in ← EdmondsExpand(root)
          G ← buildGraph(X, root, in, otherParents)
          if sD(G) > sD(G*) then
              G* ← G
      list ← (list, (G*, sD(G*)))
  return list
For each possible $S$, we first create the arcs that will be given to Edmonds' algorithm using the function ArcsCreation (presented in Algorithm 2), taking the highest scoring option for each pair of attributes while testing the worth of having the super attribute and the class as possible additional parents. The logic is straightforward, and the important characteristic is that those decisions can be taken locally, as they never yield directed cycles in the resulting graph. The variable otherParents is used to keep track of which of the four combinations of parents was used in the creation of each edge, for later use; we do not include these parents in the graph until after Edmonds' algorithm has finished (so we store them for later inclusion). With the arcs in hand, Edmonds' algorithm creates the directed minimum spanning tree. Finally, Algorithm 3 returns a graph using the result of Edmonds' algorithm and the stored parent sets (a small illustrative sketch of the arc-weight computation is given after Algorithm 3). More details on Edmonds' algorithm can be found elsewhere [34,37,38].
Algorithm 2. ArcsCreation(X, S, sD)
  for all Xi ∈ X \ {C, S} do
      otherParents[Xi] ← the parent set among {C, S}, {C}, {S}, ∅ with the highest score sD(Xi, ·)
  arcs ← ∅
  for all Xi ∈ X \ {C} do
      for all Xj ∈ X \ {C} do
          3Parents ← sD(Xi, {C, S, Xj})
          2ParentsClass ← sD(Xi, {C, Xj})
          2ParentsSA ← sD(Xi, {S, Xj})
          onlyAttrib ← sD(Xi, {Xj})
          w′ ← max(sD(Xi, ∅), sD(Xi, {C}), sD(Xi, {S}), sD(Xi, {C, S}))
          w ← max(3Parents, 2ParentsClass, 2ParentsSA, onlyAttrib) − w′
          if w > 0 then
              add Xj → Xi with weight w into arcs
              otherParents[Xj → Xi] ← the parent set among {C, S, Xj}, {C, Xj}, {S, Xj}, {Xj} achieving that maximum
          else
              otherParents[Xj → Xi] ← otherParents[Xi]
  return (arcs, otherParents)
Algorithm 3. buildGraph(X; S; in; otherParents)
  G ← (X, ∅)
  for all node ∈ X \ {C, S} do
      Πnode ← ∅
      if in[node] ≠ null then
          Πnode ← Πnode ∪ {in[node]} ∪ otherParents[in[node] → node]
      else
          Πnode ← Πnode ∪ otherParents[node]
  return G
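The key quantity in Algorithm 2 is the weight of a candidate arc Xj → Xi: the best local score Xi can obtain with Xj among its parents minus the best score it can obtain without Xj, with the class C and the super attribute S optionally added in both cases. The following minimal Python sketch shows this computation; it reuses the illustrative `local_bdeu` helper sketched in Section 2 (an assumption of this sketch) and is an illustration of the idea, not the authors' Weka implementation.

```python
def arc_weight(data, cardinalities, i, j, class_col, super_col, alpha=5.0):
    """Weight of candidate arc X_j -> X_i as in ArcsCreation: gain of the best
    parent set containing X_j over the best parent set without X_j (the class C
    and the super attribute S may be added as extra parents in both cases)."""
    def s(parents):
        return local_bdeu(data, i, parents, cardinalities, alpha)

    with_j = max(s([class_col, super_col, j]),   # {C, S, X_j}
                 s([class_col, j]),              # {C, X_j}
                 s([super_col, j]),              # {S, X_j}
                 s([j]))                         # {X_j}
    without_j = max(s([]), s([class_col]), s([super_col]), s([class_col, super_col]))
    return with_j - without_j                    # the arc is only added when this is > 0
```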
The overhead of learning AETAN (the creation of multiple ETAN models with super attributes) makes it slower than ETAN, as we will see in the experiments.

5. Experiments

We run experiments with the following Bayesian network classifiers: Naive Bayes, TAN, ETAN, AETAN (the new approach), AODE, Weighted AODE (or WAODE) [39] and Hidden Naive Bayes (or HNB) [40]. All experiments are executed within Weka (Waikato Environment for Knowledge Analysis [41,42]), for which we developed an extension with the ETAN and AETAN methods. The results shown herein were generated using 20 runs of 5-fold cross-validation, so each classifier's learning procedure has been called 100 times per data set and tested on the held-out fold also 100 times. We use the Bayesian Dirichlet equivalent uniform (BDeu) score with hyper-parameter $\alpha^* = 5$ unless otherwise specified (it is not well understood how this choice affects classification accuracy; we will explore the results for a few different values of $\alpha^*$ as suggested in the literature [43–45], but a more detailed study of $\alpha^*$ is beyond the scope of this work).
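As a rough outline of this protocol (not the actual Weka pipeline used in the paper), the following sketch illustrates the 20 × 5-fold scheme with scikit-learn's repeated stratified splitter; `learn_aetan` and `predict_posteriors` are hypothetical placeholders for the classifier's learning and prediction routines, and the posteriors are assumed to come back as an array with one row per test instance and one column per class.

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold

def evaluate(X, y, learn_aetan, predict_posteriors):
    """20 runs of 5-fold cross-validation: 100 learn/test calls per data set."""
    splitter = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
    accuracies = []
    for train_idx, test_idx in splitter.split(X, y):
        model = learn_aetan(X[train_idx], y[train_idx])          # placeholder learner
        posteriors = predict_posteriors(model, X[test_idx])      # placeholder predictor
        predictions = np.argmax(posteriors, axis=1)              # most probable class
        accuracies.append(np.mean(predictions == y[test_idx]))
    return float(np.mean(accuracies))
```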
As for the data sets, we have chosen 51 data sets from the UCI machine learning repository [46]. These data sets were selected on the basis of the number of attributes (up to 100) and the number of data samples (enough to provide a useful analysis).
In Table 1 we see some interesting statistics about the overall performance of AETAN. It defeats every other classifier in percentage of correct classifications (zero-one accuracy) and in log loss by a good margin (except for HNB, which is comparable in zero-one accuracy and root mean squared error, but inferior to AETAN in log loss). We point out that our analysis and the presented p-values do not take into account multiple test correction, but as can be seen from the values in the table, the conclusions remain the very same even if such a correction is applied.
The boxplots in Figure 3 show the log loss information in more detail. Each boxplot relates the results obtained from the 100 executions of the learning function for each data set (comparisons are always over the very same training and testing data for the different classifiers). We see very good results for AETAN versus ETAN and AODE, even though the magnitude of the improvement is not large (it is still statistically significant). AETAN is on average at least as good as, and often superior to, both ETAN and AODE in terms of log loss, with some outliers. Very similar (in fact noticeably better) results against Naive Bayes and TAN are not shown.
Similarly, for the zero-one accuracy, Figure 4 allows us to see the degree of improvement of AETAN over the other classifiers (namely AODE and ETAN; results for Naive Bayes and TAN are omitted, as they are always inferior to these). Again, AETAN is usually slightly better than the others, with some outliers; for example, on the vowel data set its classification results were much better than those of AODE. There were only two losses in which the opposing classifier was much better than AETAN, both against AODE, on the lung-cancer and primary-tumor data sets. As before, each boxplot relates the results obtained from the 100 executions of the learning function for each data set.
The advantages of AETAN over its competitors in terms of zero-one accuracy and log loss do not come for free. In Figure 5 we see that AETAN is considerably slower to learn than ETAN, which runs in a similar time to TAN and AODE. It is worth mentioning that this difference is significant only for learning the model. During testing, the time complexity of AETAN is comparable to that of AODE, so the slower learning time of AETAN is not a great concern unless the targeted application requires online learning.

6. Conclusions

In this work we propose a new general purpose classifier called the Averaged Extended Tree Augmented Naive Bayes, or simply AETAN. Its main idea is to combine the good properties of the Averaged One-Dependence Estimator (AODE) and the Extended Tree Augmented Naive Bayes (ETAN) into a single classifier that could potentially improve on both. Empirical results with numerous benchmark data sets show that AETAN indeed outperforms the others, with the drawback of a larger computational time for inferring the models. As future work, we intend to explore different score functions and the effect of the equivalent sample size on the classification results for general purpose Bayesian network classifiers. We also intend to expand AETAN towards more general Bayesian networks where exact and efficient learning is still possible.

Author Contributions

Both authors have designed the algorithms and developed their implementations, performed the experiments, analyzed the results and written the manuscript. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cheng, J.; Greiner, R. Comparing Bayesian Network Classifiers, Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence (UAI), Stockholm, Sweden, 30 July–1 August 1999; Morgan Kaufmann: San Francisco, CA, USA, 1999; pp. 101–108.
  2. Friedman, N.; Geiger, D.; Goldszmidt, M. Bayesian Network Classifiers. Mach. Learn. 1997, 29, 131–163. [Google Scholar]
  3. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann: Burlington, MA, USA, 1988. [Google Scholar]
  4. Koller, D.; Friedman, N. Probabilistic Graphical Models; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  5. Chickering, D.M. Learning Bayesian Networks is NP-Complete; Lecture Notes in Statistics; Springer: New York, NY, USA, 1996; Volume 112, pp. 121–130. [Google Scholar]
  6. Chickering, D.M.; Meek, C.; Heckerman, D. Large-Sample Learning of Bayesian Networks is NP-Hard. J. Mach. Learn. Res. 2004, 5, 1287–1330. [Google Scholar]
  7. Pernkopf, F.; Bilmes, J. Discriminative versus generative parameter and structure learning of Bayesian network classifiers, Proceedings of the International Conference on Machine Learning (ICML), Bonn, Germany, 7–11 August 2005; pp. 657–664.
  8. Pernkopf, F.; Wohlmayr, M. Stochastic margin-based structure learning of Bayesian network classifiers. Pattern Recognit. 2013, 46, 464–471. [Google Scholar]
  9. Pernkopf, F.; Bilmes, J.A. Efficient Heuristics for Discriminative Structure Learning of Bayesian Network Classifiers. J. Mach. Learn. Res. 2010, 11, 2323–2360. [Google Scholar]
  10. Grossman, D.; Domingos, P. Learning Bayesian network classifiers by maximizing conditional likelihood, Proceedings of the 21st International Conference on Machine Learning (ICML), Banff, AB, Canada, 1–4 July 2004; pp. 361–368.
  11. De Campos, C.P.; Cuccu, M.; Corani, G.; Zaffalon, M. Extended Tree Augmented Naive Classifier, Proceedings of the 7th European Workshop on Probabilistic Graphical Models (PGM), Utrecht, The Netherlands; Lecture Notes in Artificial Intelligence Volume 8754, Springer, Switzerland; 2014; pp. 176–189.
  12. De Campos, C.P.; Corani, G.; Scanagatta, M.; Cuccu, M.; Zaffalon, M. Learning Extended Tree Augmented Naive Structures. Int. J. Approx. Reason. 2015, in press. [Google Scholar]
  13. Webb, G.I.; Boughton, J.R.; Wang, Z. Not So Naive Bayes: Aggregating One-Dependence Estimators. Mach. Learn. 2005, 58, 5–24. [Google Scholar]
  14. Keogh, E.; Pazzani, M.J. Learning Augmented Bayesian Classifiers: A Comparison of Distribution-Based and Classification-Based Approaches, Proceedings of the Uncertainty’99: The Seventh International Workshop on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 4–6 January 1999.
  15. Qiu, C.; Jiang, L.; Li, C. Not always simple classification: Learning SuperParent for class probability estimation. Expert Syst. Appl. 2015, 42, 5433–5440. [Google Scholar]
  16. Jiang, L.; Cai, Z.; Wang, D. Improving tree augmented naive Bayes for class probability estimation. Knowl.-Based Syst. 2012, 26, 239–245. [Google Scholar]
  17. Jiang, L. Random one-dependence estimators. Pattern Recognit. Lett. 2011, 32, 532–539. [Google Scholar]
  18. Buntine, W. Theory refinement on Bayesian networks, Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI’92), Los Angeles, CA, USA, 13–15 July 1991; Morgan Kaufmann: San Francisco, CA, USA, 1991; pp. 52–60.
  19. Cooper, G.F.; Herskovits, E. A Bayesian method for the induction of probabilistic networks from data. Mach. Learn. 1992, 9, 309–347. [Google Scholar]
  20. Heckerman, D.; Geiger, D.; Chickering, D.M. Learning Bayesian networks: The combination of knowledge and statistical data. Mach. Learn. 1995, 20, 197–243. [Google Scholar]
  21. De Campos, C.P.; Ji, Q. Properties of Bayesian Dirichlet Scores to Learn Bayesian Network Structures, Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI), Atlanta, GA, USA, 11–15 July 2010; pp. 431–436.
  22. Silander, T.; Myllymaki, P. A simple approach for finding the globally optimal Bayesian network structure, Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI), Cambridge, MA, USA, 13–16 July 2006; pp. 445–452.
  23. De Campos, C.P.; Ji, Q. Efficient Structure Learning of Bayesian Networks Using Constraints. J. Mach. Learn. Res. 2011, 12, 663–689. [Google Scholar]
  24. Yuan, C.; Malone, B. Learning Optimal Bayesian Networks: A Shortest Path Perspective. J. Artif. Intell. Res. 2013, 48, 23–65. [Google Scholar]
  25. Bartlett, M.; Cussens, J. Advances in Bayesian Network Learning Using Integer Programming. Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence (UAI), Bellevue, WA, USA, 11–15 July 2013; pp. 182–191.
  26. Dasgupta, S. Learning polytrees, Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI), Stockholm, Sweden, 30 July–1 August 1999; Morgan Kaufmann: San Francisco, CA, USA, 1999; pp. 134–141.
  27. Domingos, P.; Pazzani, M. On the optimality of the simple Bayesian classifier under zero-one loss. Mach. Learn. 1997, 29, 103–130. [Google Scholar]
  28. Hand, D.J.; Yu, K. Idiot’s Bayes-Not So Stupid after All? Int. Stat. Rev. 2001, 69, 385–398. [Google Scholar]
  29. Friedman, J. On bias, variance, 0/1—loss, and the curse-of-dimensionality. Data Min. Knowl. Discov. 1997, 1, 55–77. [Google Scholar]
  30. Corani, G.; de Campos, C.P.; Sun, Y. A tree augmented classifier based on Extreme Imprecise Dirichlet Model. Proceedings of the International Symposium on Imprecise Probability: Theories and Applications (ISIPTA), Durham, UK, 14–18 July 2009; pp. 89–98.
  31. Corani, G.; de Campos, C.P. A tree augmented classifier based on Extreme Imprecise Dirichlet Model. Int. J. Approx. Reason. 2010, 51, 1053–1068. [Google Scholar]
  32. Madden, M.G. On the classification performance of TAN and general Bayesian networks. Knowl.-Based Syst. 2009, 22, 489–495. [Google Scholar]
  33. Chow, C.K.; Liu, C.N. Approximating discrete probability distributions with dependence trees. IEEE Trans. Inf. Theory. 1968, 14, 462–467. [Google Scholar]
  34. Edmonds, J. Optimum Branchings. J. Res. Natl. Bureau Stand. B 1967, 71B, 233–240. [Google Scholar]
  35. Chu, Y.J.; Liu, T.H. On the Shortest Arborescence of a Directed Graph. Sci. Sin. 1965, 14, 1396–1400. [Google Scholar]
  36. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar]
  37. Tarjan, R.E. Finding Optimum Branchings. Networks 1977, 7, 25–35. [Google Scholar]
  38. Camerini, P.M.; Fratta, L.; Maffioli, F. A note on finding optimum branchings. Networks 1979, 9, 309–312. [Google Scholar]
  39. Jiang, L.; Zhang, H.; Cai, Z.; Wang, D. Weighted average of one dependence estimators. J. Exp. Theor. Artif. Intell. 2012, 24, 219–230. [Google Scholar]
  40. Jiang, L.; Zhang, H.; Cai, Z. A novel Bayes model: Hidden naive Bayes. IEEE Trans. Knowl. Data Eng. 2009, 21, 1361–1371. [Google Scholar]
  41. University of Waikato. Weka 3: Data Mining Software in Java. Available online: http://www.cs.waikato.ac.nz/ml/weka/ accessed on 1 June 2015.
  42. Witten, I.H.; Frank, E. Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed.; Morgan Kaufmann: San Francisco, CA, USA, 2005. [Google Scholar]
  43. De Campos, C.P.; Benavoli, A. Inference with multinomial data: Why to weaken the prior strength, Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Barcelona, Spain, 16–22 July 2011; pp. 2107–2112.
  44. Scanagatta, M.; de Campos, C.P.; Zaffalon, M. Min-BDeu and Max-BDeu Scores for Learning Bayesian Networks, Proceedings of the 8th European Workshop on Probabilistic Graphical Models (PGM), Utrecht, The Netherlands, 17–19 September 2014; 8754, pp. 426–441.
  45. Silander, T.; Kontkanen, P.; Myllymäki, P. On Sensitivity of the MAP Bayesian Network Structure to the Equivalent Sample Size Parameter, Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), Vancouver, BC, Canada, 19–22 July 2007; pp. 360–367.
  46. Asuncion, A.; Newman, D.J. UCI Machine Learning Repository. 2007. Available online: http://www.ics.uci.edu/~mlearn/MLRepository.html accessed on 1 June 2015.
Figure 1. Some examples of the different structures possible within the classifiers we have written about here. (a) Possible with ETAN; (b) Possible with naive or ETAN; (c) Possible with TAN or ETAN; (d) Possible with ETAN.
Figure 2. An example of a new structure possible with Extended Tree Augmented Naive Bayes (ETAN).
Figure 3. These graphs show the log loss difference between AETAN and ETAN (Top) and between AETAN and the Averaged One-Dependence Estimator (AODE) (Bottom). The values are the log loss of the other classifier minus that of AETAN, so higher values mean AETAN is better.
Figure 4. A comparison of the correct classification ratio between AETAN and ETAN (Top) and between AETAN and AODE (Bottom). Higher values mean AETAN is better, as these results are the zero-one accuracy of AETAN divided by that of the other classifier.
Figure 5. Comparison of the computation time needed to learn the classifier between AETAN and ETAN, expressed as the ratio AETAN/ETAN, so higher values mean AETAN is slower.
Table 1. Averaged Extended Tree Augmented Naive Bayes (AETAN) versus other classifiers. W means AETAN wins (is superior to the other classifier), T is a tie, and L represents losses. p-values correspond to the Wilcoxon signed-rank test. RMSE is the root mean squared error as obtained by Weka [42]. Methods were used from Weka and its plugins.
| Classifier | Zero-one W/T/L | p-value | Log Loss W/T/L | p-value | RMSE W/T/L | p-value |
|---|---|---|---|---|---|---|
| Naive | 35/0/16 | 6 × 10⁻⁴ | 46/0/5 | 5 × 10⁻⁹ | 41/0/10 | 3 × 10⁻⁷ |
| ETAN | 42/0/9 | 6 × 10⁻⁷ | 49/0/2 | 1 × 10⁻⁹ | 49/0/2 | 7 × 10⁻¹⁰ |
| TAN | 32/0/19 | 0.04 | 43/0/8 | 2 × 10⁻⁷ | 38/0/13 | 9 × 10⁻⁵ |
| AODE | 35/0/16 | 0.03 | 37/0/14 | 2 × 10⁻⁴ | 38/0/13 | 6 × 10⁻⁴ |
| WAODE | 33/0/18 | 0.07 | 36/0/15 | 1 × 10⁻³ | 36/0/15 | 2 × 10⁻³ |
| HNB | 26/0/25 | 0.42 | 30/0/21 | 7 × 10⁻³ | 25/0/26 | 0.85 |
| Naive (α* = 2) | 34/0/17 | 1 × 10⁻³ | 46/0/5 | 2 × 10⁻⁹ | 41/0/10 | 3 × 10⁻⁷ |
| ETAN (α* = 2) | 40/0/11 | 5 × 10⁻⁷ | 48/0/3 | 4 × 10⁻⁹ | 48/0/3 | 2 × 10⁻⁸ |
| AETAN (α* = 2) | 38/1/12 | 7 × 10⁻⁵ | 44/0/7 | 6 × 10⁻⁷ | 42/0/9 | 4 × 10⁻⁷ |
| TAN (α* = 2) | 34/0/17 | 0.01 | 45/0/6 | 1 × 10⁻⁸ | 40/0/11 | 4 × 10⁻⁶ |
| AODE (α* = 2) | 30/0/21 | 0.20 | 38/0/13 | 1 × 10⁻⁴ | 37/0/14 | 2 × 10⁻³ |
| WAODE (α* = 2) | 30/0/21 | 0.23 | 36/0/15 | 8 × 10⁻⁴ | 35/0/16 | 4 × 10⁻³ |

Share and Cite

Meehan, A.; De Campos, C.P. Averaged Extended Tree Augmented Naive Classifier. Entropy 2015, 17, 5085-5100. https://doi.org/10.3390/e17075085