Article

How to Mine Information from Each Instance to Extract an Abbreviated and Credible Logical Rule

Limin Wang, Minghui Sun and Chunhong Cao
1 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
2 State Key Laboratory of Computer Science, Beijing 100080, China
3 College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
* Author to whom correspondence should be addressed.
Entropy 2014, 16(10), 5242-5262; https://doi.org/10.3390/e16105242
Submission received: 17 July 2014 / Revised: 28 August 2014 / Accepted: 28 September 2014 / Published: 9 October 2014

Abstract: Decision trees are particularly promising in symbolic representation and reasoning due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, their drawbacks, caused by the single-tree structure, cannot be ignored. A rigid decision path may cause the majority class to overwhelm the other classes when dealing with imbalanced data sets, and pruning removes not only superfluous nodes, but also subtrees. The proposed learning algorithm, flexible hybrid decision forest (FHDF), mines the information implicated in each instance to form logical rules on the basis of a chain rule of local mutual information, then forms different decision tree structures and, from them, a decision forest. The most credible decision path from the decision forest can be selected to make a prediction. Furthermore, functional dependencies (FDs), which are extracted from the whole data set based on association rule analysis, perform embedded attribute selection to remove nodes rather than subtrees, thus helping to achieve different levels of knowledge representation and improve model comprehension in the framework of semi-supervised learning. Naive Bayes replaces the leaf nodes at the bottom of the tree hierarchy, where the conditional independence assumption may hold. This technique reduces the potential for overfitting and overtraining and improves the prediction quality and generalization. Experimental results on UCI data sets demonstrate the efficacy of the proposed approach.

1. Introduction

The rapid development of information and web technology has made a significant amount of data readily available for knowledge discovery. Growing interest in symbolic representation and reasoning highlights knowledge discovery as a clearly identifiable and technically rich subfield of artificial intelligence. The desirable properties of tools used to investigate big data are easy-to-understand models and predictive decisions. Decision trees are particularly promising in this regard, due to their comprehensible nature that resembles the hierarchical process of human decision making. To split nodes while growing the tree, the vast majority of oblique and univariate decision-tree induction algorithms employ different impurity-based measures [1,2]. Despite advantages such as the ability to explain the decision process and low computational costs, the drawbacks caused by the single-tree structure cannot be ignored.
Commonly, to classify an instance, we just follow one path from the unique root to a leaf, examining every decision made along the way. This approach usually works well. However, in reality, classifiers frequently have to deal with imbalanced data sets, where there are many more instances of one class than of another. In such cases, a rigid decision path may cause the majority class to overwhelm the other classes, thus creating a situation where the minority class is ignored. The decision tree corresponds to a locally optimal solution over the training data, but it does not provide a globally optimal solution for the whole data set. Besides, low-quality training data may lead to the construction of overfitted or fragile classifiers. Thus, eliminating redundant attributes is generally used as a data preprocessing technique to improve the quality of the data. However, pruning removes not only superfluous nodes, but also the subtrees rooted at those nodes. If some key nodes are removed by mistake, the negative effect caused by the absent nodes may propagate and aggravate the situation. To ensure a credible and robust decision path, according to Occam's razor, only a limited number of attributes can be used for prediction. Thus, only a portion of the information implicated in the training data will be utilized.
To resemble the hierarchical process of human decision making and to make the final model much more flexible, the core idea of this paper is to build a decision forest, rather than a single tree, to model the training data. The theoretical foundation of the traditional decision tree algorithm is the chain rule of joint mutual information, which applies mutual and conditional mutual information to describe the direct and indirect relationships between predictive attributes and the class label. Correspondingly, the proposed learning algorithm, flexible hybrid decision forest (FHDF), applies the chain rule of local mutual information to form a logical rule for each instance. The rules with the same root form different decision tree structures, which in turn form a decision forest. Thus, we can choose the most credible path from the forest for classification. To reduce the potential for overfitting and overtraining, functional dependencies (FDs) [3,4] are employed as prior knowledge to remove redundant attributes rather than subtrees. Additionally, different sets of redundant attributes may be found and removed for different test instances. Because FDs are unrelated to the class label, they can be extracted from the whole data set, rather than from the training data only, based on association rule analysis [5]; thus, FHDF works in the framework of semi-supervised learning. Naive Bayes (NB) [6,7], which is applied in many real-world classification problems because of its high classification performance, replaces the leaf nodes at the bottom of the tree hierarchy, where the conditional independence assumption may hold. This technique ensures that all attributes will be utilized for prediction, thus improving the prediction quality and generalization.
The rest of this paper is organized as follows. Section 2 first proposes the theoretical basis of decision forest—the chain rule of local mutual information—then clarifies the rationality of FDs and introduces related work about NB. Section 3 describes the learning procedure of FHDF. Section 4 compares various approaches on data sets from the UCI repository. Finally, Section 5 presents possible future work.

2. Related Research Work

2.1. Information Theory

In the 1940s, Claude E. Shannon introduced information theory, the theoretical basis of modern digital communication. Although Shannon was principally concerned with the problem of electronic communications, the theory has much broader applicability. Many commonly used measures are based on the entropy of information theory and used in a variety of classification algorithms.
In the following discussion, Greek letters (α, β, γ) denote sets of attributes. Lower-case letters denote specific values taken by corresponding attributes (for instance, xi represents the event that Xi = xi).

Definition 1

Entropy is a measure of uncertainty of random variable C:
$$H(C) = -\sum_{c \in C} P(c) \log P(c)$$
where P(c) is the marginal probability distribution function of C.

Definition 2

Mutual information MI(C; X) is the reduction of entropy about variable C after observing all possible values of X:
$$MI(C;X) = H(C) - H(C|X) = \sum_{x \in X} \sum_{c \in C} P(x,c) \log \frac{P(x,c)}{P(c)P(x)}$$
where P(x, c) is the joint probability distribution function of X and C.

Definition 3

Local mutual information LMI(C; x) is defined to measure the reduction of entropy about variable C after observing X = x,
$$LMI(C;x) = \sum_{c \in C} P(x,c) \log \frac{P(x,c)}{P(c)P(x)}$$

Definition 4

Conditional mutual information CMI(C;Xi|Xj) is defined to measure the expected value of the mutual information of two random variables C and Xi given the value of a third variable Xj.
$$CMI(C;X_i|X_j) = \sum_{c \in C} \sum_{x_i \in X_i} \sum_{x_j \in X_j} P(x_i,c|x_j) \log \frac{P(x_i,c|x_j)}{P(c|x_j)P(x_i|x_j)}$$

Definition 5

Conditional local mutual information CLMI(C; xi|xj) is defined to measure the reduction of entropy about variable C after observing Xi = xi when Xj = xj always holds.
$$CLMI(C;x_i|x_j) = \sum_{c \in C} P(x_i,c|x_j) \log \frac{P(x_i,c|x_j)}{P(c|x_j)P(x_i|x_j)}$$
Obviously,
$$MI(C;X) = \sum_{x \in X} LMI(C;x), \qquad CMI(C;X_i|X_j) = \sum_{x_i \in X_i} \sum_{x_j \in X_j} CLMI(C;x_i|x_j)$$
Mutual information and conditional mutual information are commonly applied to roughly describe the direct or conditional relationships between a predictive attribute and the class label C. However, in the real world, the relationships between attributes may differ greatly as the situation changes. Local mutual information and conditional local mutual information can be used to describe these dynamic changes, thus making the final model much more flexible. For example, as Figure 1 shows, suppose that the overall relationship among attributes {X1, X2, X3} and class label C is just like a rectangle. However, for the k-th instance, {X2, X3} are independent of C, and the local relationship between X1 and C is just like an oval.
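The following minimal Python sketch (not from the paper; data rows are assumed to be tuples of attribute values with the class label in the last column, as in Table 2) shows how LMI and CLMI in Definitions 3 and 5 can be estimated from empirical frequencies.

```python
# Hedged sketch: empirical estimates of LMI(C; x) and CLMI(C; x_i | x_j).
# Assumed data layout: each row is a tuple of attribute values, class label last.
import math

def lmi(data, attr, value, class_idx=-1):
    """LMI(C; x): reduction of entropy about C after observing attr == value."""
    n = len(data)
    p_x = sum(1 for row in data if row[attr] == value) / n
    if p_x == 0.0:
        return 0.0
    result = 0.0
    for c in set(row[class_idx] for row in data):
        p_c = sum(1 for row in data if row[class_idx] == c) / n
        p_xc = sum(1 for row in data if row[attr] == value and row[class_idx] == c) / n
        if p_xc > 0.0:
            result += p_xc * math.log(p_xc / (p_c * p_x))
    return result

def clmi(data, attr_i, value_i, attr_j, value_j, class_idx=-1):
    """CLMI(C; x_i | x_j): the same quantity computed on the subset where X_j = x_j."""
    subset = [row for row in data if row[attr_j] == value_j]
    return lmi(subset, attr_i, value_i, class_idx) if subset else 0.0
```

For example, with the rows of Table 2 stored as S, lmi(S, 0, 'a1') scores the value a1 of X1, and clmi(S, 2, 'c0', 0, 'a1') scores c0 of X3 once a1 has been observed.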
According to the chain rule of joint mutual information, which is the theoretical basis of the decision tree learning algorithm, the mutual information between attributes X = {X1,X2, · · ·, Xn} and class label C is:
$$MI(C;\mathbf{X}) = MI(C;X_1) + CMI(C;X_2|X_1) + \cdots + CMI(C;X_n|X_1,\ldots,X_{n-1})$$
On the other hand, for a sample t = {x1, x2, · · ·, xn} in data set S, the chain rule of local mutual information between t and class label C can be represented as:
$$LMI(C;t) = LMI(C;x_1) + CLMI(C;x_2|x_1) + \cdots + CLMI(C;x_n|x_1,\ldots,x_{n-1})$$
On the basis of the chain rule of local mutual information described above, we can convert each instance into a single logical rule or decision path. We first choose the most appropriate attribute value from t, e.g., x1, which satisfies LMI(C; x1) = max LMI(C; xi) (1 ≤ i ≤ n). The training set is split into finer subsets with attribute value x1. Then, we choose the next attribute value, e.g., x2, which satisfies CLMI(C; x2|x1) = max CLMI(C; xi|x1) (1 ≤ i ≤ n, i ≠ 1). The training subset is further split into finer subsets with attribute values x1 and x2. This procedure continues until the class label within the subset is the same or the instances in the subset satisfy some stopping criterion. Finally, each training instance corresponds to a logical rule or decision path. If several rules have the same root, the combination of these rules builds a complete decision tree, and the structural information they share is expressed only once. Thus, the final model is composed of not one tree, but one forest. This technique reduces the potential for overfitting and overtraining and improves the prediction quality and generalization. For example, given attributes X = {X1, X2, X3, X4} and three instances {t1, t2, t3}, according to the comparison results of local mutual information and conditional local mutual information, the instances {t1, t2, t3} can be converted into three logical rules. The conversion procedure is shown in Figure 2.
Because Rule 2 and Rule 3 have the same root, they can be combined into a single tree. Finally, from instances {t1, t2, t3}, we can build two tree structures, as Figure 3 shows.
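As a companion to Figure 2, here is a hedged sketch of the instance-to-rule conversion just described, reusing the lmi() helper above; it picks attribute values greedily (LMI first, then CLMI on the progressively filtered subset) until the remaining subset is pure.

```python
def instance_to_rule(data, t):
    """Convert one training instance t (attribute values only; the class label is
    the last column of each row in data) into an ordered logical rule."""
    subset = list(data)
    remaining = list(range(len(t)))
    rule = []
    while remaining and len(set(row[-1] for row in subset)) > 1:
        # lmi() on the current subset equals CLMI given the values chosen so far
        best = max(remaining, key=lambda a: lmi(subset, a, t[a]))
        rule.append((best, t[best]))
        subset = [row for row in subset if row[best] == t[best]]
        remaining.remove(best)
    labels = set(row[-1] for row in subset)
    return rule, labels.pop() if len(labels) == 1 else None
```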

2.2. Functional Dependency Rules of Probability

Given a relation R (in a relational database), attribute Y of R is functionally dependent on attribute X of R, and X functionally determines Y (in symbols, X → Y), if each value of X in R is associated with exactly one value of Y. We demonstrated functional dependency rules of probability in [8,9] to build a linkage between FDs and probability theory; the following rules are mainly included:
  • Representation equivalence of probability: Suppose data set S consists of two attribute sets {α, β} and β can be inferred by α, i.e., the FD α → β holds, then the following joint probability distribution holds:
    P ( α ) = P ( α , β )
  • Augmentation rule of probability: If FD α → β holds and γ is a set of attributes, then the following joint probability distribution holds:
    P ( α , γ ) = P ( α , β , γ )
  • Transitivity rule of probability: If FDs α → β and β → γ hold, then the following joint probability distribution holds:
    P ( α ) = P ( α , γ )
  • Pseudo-transitivity rule of probability: If βγ → δ and α → β hold, then the joint probability distribution holds:
    P ( α , γ ) = P ( α , γ , δ )
When two attributes are strongly related, the classifier may overweight the inference from the two attributes, resulting in prediction bias. FDs help to avoid this situation, and the high-dimensional representation, or even the whole classification model, is simplified. For example, if the FD x1 → x2 exists, then for instance t = {x1, x2, · · ·, xn}, by applying the representation equivalence of probability and the augmentation rule of probability, we have:
$$P(x) = P(x_1,x_3,\ldots,x_n), \qquad P(x,c) = P(x_1,x_3,\ldots,x_n,c)$$
Then, from the definition of LMI, we will get:
$$LMI(C;x) = \sum_{c \in C} P(x,c) \log \frac{P(x,c)}{P(c)P(x)} = \sum_{c \in C} P(x_1,x_3,\ldots,x_n,c) \log \frac{P(x_1,x_3,\ldots,x_n,c)}{P(c)P(x_1,x_3,\ldots,x_n)} = LMI(C;x_1,x_3,\ldots,x_n)$$
Thus, x2 is extraneous for classifying t. Obviously, the removal of extraneous attributes based on FDs does not change the underlying conditional probability and thus leads to lower variance. Therefore, FDs have the potential to be a valuable supplement to the classifier over a considerable range of classification tasks. Discovering FDs from existing databases is an important issue that has been investigated for many years and has recently been addressed from a data mining viewpoint in a novel and much more efficient way [10,11].
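A small illustration of how FDs can be exploited in code (an assumption-laden sketch, not the authors' implementation): fd_holds checks whether a candidate FD is satisfied by the data, and drop_extraneous removes attribute indices that are determined by attributes we keep, in the spirit of the x1 → x2 example above.

```python
def fd_holds(data, lhs, rhs):
    """True if the columns in lhs functionally determine column rhs in data."""
    mapping = {}
    for row in data:
        key = tuple(row[i] for i in lhs)
        if mapping.setdefault(key, row[rhs]) != row[rhs]:
            return False
    return True

def drop_extraneous(attr_indices, fds):
    """Remove attributes determined by others; fds like [([0], 1)] encodes X1 -> X2."""
    keep = set(attr_indices)
    for lhs, rhs in fds:
        if rhs in keep and all(i in keep for i in lhs):
            keep.discard(rhs)
    return sorted(keep)
```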

2.3. Naive Bayes

Supervised classification is a basic task in artificial intelligence and knowledge discovery. The aim of supervised learning is to predict, from a training set, the class of a testing instance x = {x1, · · ·, xn}, where xi is the value of the i-th attribute. We estimate the conditional probability P(c|x) and predict the class by selecting arg maxc P(c|x), where c ∈ {c1, · · ·, ck} and k is the number of classes. From Bayes' theorem, we have:
$$P(c|x) = P(c,x)/P(x) \propto P(c,x) = P(x|c)P(c)$$
The Bayesian network (BN) classifier is becoming increasingly popular in many areas, such as decision aid, diagnosis and complex system control, because of its inference capabilities [12,13]. The structure of a BN is a directed acyclic graph, where the nodes correspond to domain attributes and the arcs between the nodes represent direct dependencies between the attributes. The absence of an arc between two nodes X1 and X2 denotes that X2 is independent of X1 given its parents.
However, the accurate estimation of P(x|c) is a complex process. Learning an optimal BN structure from existing data has been proven to be an NP-hard problem. NB avoids this problem by assuming that the attributes are independent given the class,
$$P(x|c) = \prod_{i=1}^{n} P(x_i|c).$$
Then, the following approximation is often calculated in practice rather than the exact Bayes formula above:
$$P(c|x) \propto P(c) \prod_{i=1}^{n} P(x_i|c)$$
NB has a simple structure, whereby an arc exists from the class node to every other node, but no arcs exist among the other nodes, as illustrated in Figure 4. One advantage of NB is that it avoids model selection, because selecting between alternative models can be expected to increase variance and allow a learning system to overfit the training data [14].
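For concreteness, a minimal NB leaf of the kind used later at the bottom of the FHDF hierarchy might look as follows (a sketch with add-one smoothing over the values observed per attribute and class; the class label is again assumed to sit in the last column).

```python
import math
from collections import Counter, defaultdict

class NBLeaf:
    """Naive Bayes over the attributes that remain at a leaf (a hedged sketch)."""
    def __init__(self, rows, attr_indices):
        self.attrs = list(attr_indices)
        self.class_counts = Counter(row[-1] for row in rows)
        self.n = sum(self.class_counts.values())
        self.value_counts = defaultdict(Counter)   # (attr, class) -> Counter of values
        for row in rows:
            for a in self.attrs:
                self.value_counts[(a, row[-1])][row[a]] += 1

    def predict(self, x):
        best, best_score = None, float("-inf")
        for c, nc in self.class_counts.items():
            # log P(c) * prod_i P(x_i | c), with add-one smoothing
            score = math.log((nc + 1) / (self.n + len(self.class_counts)))
            for a in self.attrs:
                counts = self.value_counts[(a, c)]
                # one extra slot in the denominator covers unseen attribute values
                score += math.log((counts[x[a]] + 1) / (nc + len(counts) + 1))
            if score > best_score:
                best, best_score = c, score
        return best
```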

3. FHDF

The learning procedure of the FHDF algorithm is described as follows:
  • Input: Training set T1, testing set T2, predictive attribute set X = {X1, · · ·, Xn} and class label C.
  • Output: Hybrid decision forest model.
  • Pre-processing phase: Mine association rules from data set T1 + T2, then convert them into FDs and compute the closure of the FDs. Find extraneous attributes based on FD analysis and remove them from the attribute set.
Training phase:
Step 1
For each instance t = {x1, · · ·, xn} in T1, generate the root node Xi that satisfies
$$X_i = \arg\max_{1 \le p \le n} LMI(C;x_p)$$
Then, verify if the corresponding subset, which satisfies Xi = xi, has the same class label.
(a)
If yes, exit and create a leaf node;
(b)
Otherwise, node Xi is created and added as one branch node.
Step 2
Calculate for each attribute value, among the attributes that have not been used so far, its CLMI and select attribute Xj, which satisfies:
$$X_j = \arg\max_{1 \le q \le n,\ q \ne i} CLMI(C;x_q|x_i)$$
Then, verify if CLMI satisfies the stopping criterion: (a) If yes, create NB as a leaf node; (b) otherwise, child node Xj is created and added to the branch. Repeat the same process for each instance t in T1 from Step 1.
Step 3
Get the order of attribute values in each instance, which will be converted into the classification rule. Use the rules with the same root to construct a hybrid decision tree.
Testing phase:
Step 4
Use the rules to assign class labels to unlabeled instances in T2.
Step 5
For any instance t that does not match any rule in the decision forest, start from the training phase, achieve the attribute order and get the sub-optimal rule for classification.
Knowledge implicated in a database can be divided into certain and uncertain parts. The certain part is commonly described by a set of rules, such as a decision tree. The uncertain part is usually described from the viewpoint of probability, such as by a BN. The working mechanism of the proposed hybrid model FHDF resembles the hierarchical process of human decision making. First, we convert each instance into rule form by applying the chain rule of local mutual information. The extracted robust knowledge structure can solve the most certain problems based on definite knowledge and experience. Second, if the criterion of information growth cannot be satisfied, this means that the remaining knowledge is scattered and the independence assumption may be satisfied to some extent; then, NB is applied to make full use of the rest of the attributes.
Before this structured learning process of knowledge accumulation and induction, FDs are utilized to remove extraneous information so as to build a simplified and robust knowledge structure. For example, suppose the sample data of data set S and the logical rules inferred from it are those shown in Table 2 and Table 1, respectively; the corresponding FHDF structure is the one shown in Figure 5a. If the FD {X1 = a1} → {X2 = b0} holds, then for Instances 3, 4 and 5, X2 is extraneous for further inference. As Figure 5b shows, X2 will be removed from the FHDF structure.
The chain rule of local mutual information above is the theoretical basis of FHDF. For each instance t in the training set, according to Occam's razor, a shorter assumption may be believable, but a longer one is highly coincidental. To make the final prediction credible, information growth must be assured when attribute values are added in order. When adding a new attribute value xi, the stopping prerequisite is as follows:
$$LMI(C;x_1,\ldots,x_i) > 1.2 \times LMI(C;x_1,\ldots,x_{i-1})$$
The information growth must be more than 20%. Otherwise, FHDF creates an NB leaf node with all of the other attributes that have not been processed to perform further inference.
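The stopping test can be written as a one-pass scan over the cumulative LMI values along the chosen attribute order (a sketch; cumulative_lmi is an assumed precomputed list, and LMI is assumed non-negative here).

```python
def cut_point(cumulative_lmi, growth=1.2):
    """cumulative_lmi[i] holds LMI(C; x_1, ..., x_{i+1}); return the length of the
    rule prefix to keep before an NB leaf takes over the remaining attributes."""
    for i in range(1, len(cumulative_lmi)):
        if cumulative_lmi[i] <= growth * cumulative_lmi[i - 1]:
            return i  # growth of 20% or less: stop here
    return len(cumulative_lmi)
```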
To sum up, FHDF extracts logical rules from the logical viewpoint by calculating LMI and CLMI, while NB predicts from the probabilistic viewpoint. Besides, FHDF presents outstanding flexibility and adaptability when there is a lack of prior knowledge. For example, given a testing instance t = {a3, b1, c0}, no rule in the decision forest can be used to assign a class label. Suppose that the attribute order is {c0, a3, b1} by calculating LMI and CLMI; as shown in Table 2, only training Instance 8 meets the first two conditions and can be used to classify t. The class label of t should therefore be C2.

4. Experiments

4.1. Bias and Variance

Kohavi and Wolpert presented a bias-variance decomposition of the expected misclassification rate [15], which is a powerful tool from sampling theory statistics for analyzing supervised learning scenarios. Suppose c and ĉ are the true class label and the one generated by a learning algorithm, respectively; the zero-one loss function is defined as:
$$\xi(c,\hat{c}) = 1 - \delta(c,\hat{c}),$$
where δ(c, ĉ) = 1 if ĉ = c and zero otherwise. The bias term measures the squared difference between the average output of the target and the algorithm. This term is defined as follows:
$$bias = \frac{1}{2} \sum_{\hat{c},c \in C} \left[ P(\hat{c}|x) - P(c|x) \right]^2,$$
where x is the combination of any attribute value. The variance term is a real valued non-negative quantity and equals zero for an algorithm that always makes the same guess regardless of the training set. The variance increases as the algorithm becomes more sensitive to changes in the training set. It is defined as follows:
$$variance = \frac{1}{2} \left[ 1 - \sum_{\hat{c} \in C} P(\hat{c}|x)^2 \right].$$
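A sketch of these two terms in code (assumed inputs: p_model maps each class to the classifier's estimate of P(ĉ|x), averaged over training sets, and p_true maps each class to P(c|x); this is not the evaluation harness used in the paper).

```python
def kw_bias(p_model, p_true):
    """0.5 * sum over classes of [P(c_hat|x) - P(c|x)]^2."""
    classes = set(p_model) | set(p_true)
    return 0.5 * sum((p_model.get(c, 0.0) - p_true.get(c, 0.0)) ** 2 for c in classes)

def kw_variance(p_model):
    """0.5 * (1 - sum over classes of P(c_hat|x)^2); zero for an algorithm that
    always makes the same guess regardless of the training set."""
    return 0.5 * (1.0 - sum(p ** 2 for p in p_model.values()))
```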
Moore and McCabe illustrated bias and variance through shooting arrows at a target [16], as described in Figure 6. The perfect model can be regarded as the bull’s eye on a target and the learned classifier as an arrow fired at the bull’s eye. Bias and variance describe what happens when an archer fires many arrows at the target. Bias means that the aim is off and the arrows land consistently off the bull’s eye in the same direction. Variance means that the arrows are scattered. Large variance means that repeated shots are widely scattered on the target. They do not give similar results, but differ widely among themselves.

4.2. Statistical Results on UCI Data Sets

In order to verify the efficiency and effectiveness of the proposed FHDF, we conduct experiments on 43 data sets from the UCI machine learning repository. Table 3 summarizes the characteristics of each data set, including the number of instances, attributes and classes. Large data sets with an instance number greater than 3000 are annotated with the symbol “*”. Missing values for qualitative attributes are replaced with modes, and those for quantitative attributes are replaced with means from the training data. For each benchmark data set, numeric attributes are discretized using MDL discretization [17]. The following techniques are compared:
  • NB, standard naive Bayes.
  • TAN [18], tree-augmented naive Bayes applying incremental learning.
  • C4.5, standard decision tree.
  • NBTree [19], decision tree with naive Bayes as the leaf node.
  • RTAN [20], tree-augmented naive Bayes ensembles.
All algorithms were coded in MATLAB 7.0 on a Pentium 2.93 GHz/1 G RAM computer.
Base probability estimates P(c), P(c, xi) and P(c, xi, xj) were smoothed using the Laplace estimate, which can be described as follows:
$$\hat{P}(c) = \frac{F(c)+1}{K+k}, \qquad \hat{P}(c,x_i) = \frac{F(c,x_i)+1}{K_i+k_i}, \qquad \hat{P}(c,x_i,x_j) = \frac{F(c,x_i,x_j)+1}{K_{ij}+k_{ij}}$$
where F(·) is the frequency with which a combination of terms appears in the training data, K is the number of training instances for which the class value is known, Ki is the number of training instances for which both the class and attribute Xi are known and Kij is the number of training instances for which all of the class and attributes Xi and Xj are known. k is the number of attribute values of class C, ki is the number of attribute value combinations of C and Xi and kij is the number of attribute value combinations of C, Xj and Xi.
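The three estimates share one template, so a single helper suffices (variable names such as F_c, K and k below are illustrative, mirroring the symbols in the equation above).

```python
def laplace(freq, known_count, num_combinations):
    """Generic (F(.) + 1) / (K + k) Laplace estimate."""
    return (freq + 1.0) / (known_count + num_combinations)

# e.g. P_hat(c)           = laplace(F_c,      K,    k)
#      P_hat(c, x_i)      = laplace(F_cxi,    K_i,  k_i)
#      P_hat(c, x_i, x_j) = laplace(F_cxixj,  K_ij, k_ij)
```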
Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). Semi-supervised learning is a class of machine learning techniques that uses both labeled and unlabeled data for training. Typically, a small amount of labeled data with a large amount of unlabeled data is used. Many machine learning researchers have found that when unlabeled data are used in conjunction with a small amount of labeled data, a considerable improvement in learning accuracy can be achieved. The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g., to transcribe an audio segment) or a physical experiment (e.g., determining the 3D structure of a protein or determining whether oil is present at a particular location). Given data set S with attribute set X, suppose FD α → β has been deduced from whole data set and α, β ⊂ X. From the representation equivalence of probability, we can obtain:
$$P(\alpha) = P(\alpha,\beta)$$
By applying the augmentation rule of probability, we will get:
$$P(\alpha,c) = P(\alpha,\beta,c) \quad \text{or} \quad P(\alpha|c) = P(\alpha,\beta|c)$$
Therefore, the FD α → β still holds in the training data regardless of the class label. Thus, we can use the whole data set to extract FDs with high confidence, and the framework of FHDF is accordingly semi-supervised. In the following discussion, we use FDs that are extracted based on association rule analysis as the domain knowledge. Such rules can be effective in uncovering unknown relationships, thereby providing results that can be the basis of forecasts and decisions, and they have proven to be useful tools for enterprises striving to improve their competitiveness and profitability. To ensure the validity of an FD, the minimum number of instances that satisfy it is set to 100. With an increasing number of attributes, more RAM is needed to store joint probability distributions. An important restriction of our algorithm is therefore that the number of attributes on the left-hand side of an FD, i.e., the number of key attributes, should be no more than two.
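One reading of this extraction step, written as a sketch (the paper gives no code; value-level rules such as {X1 = a1} → {X2 = b0} are accepted when their left-hand side has at most two attributes, is supported by at least 100 instances of the whole data set and always maps to a single right-hand value):

```python
from itertools import combinations

def mine_value_fds(data, n_attrs, min_support=100, max_lhs=2):
    """Return value-level rules as (lhs_assignment_dict, (rhs_attr, rhs_value))."""
    rules = []
    for size in range(1, max_lhs + 1):
        for lhs_attrs in combinations(range(n_attrs), size):
            groups = {}
            for row in data:
                groups.setdefault(tuple(row[i] for i in lhs_attrs), []).append(row)
            for key, rows in groups.items():
                if len(rows) < min_support:       # support over the whole data set
                    continue
                for rhs in range(n_attrs):
                    if rhs in lhs_attrs:
                        continue
                    rhs_values = set(r[rhs] for r in rows)
                    if len(rhs_values) == 1:      # confidence 1 within this group
                        rules.append((dict(zip(lhs_attrs, key)), (rhs, rhs_values.pop())))
    return rules
```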
Table 4 presents, for each data set, the zero-one loss, which is estimated by 10-fold cross-validation to give an accurate estimation of the average performance of an algorithm. The bias and variance results are shown in Tables 5 and 6, respectively, for which only the 15 large data sets are selected to ensure statistical significance. The zero-one loss, bias or variance across multiple data sets provides a gross measure of relative performance. Statistically, a win/draw/loss record (W/D/L) is calculated for each pair of competitors A and B with regard to a performance measure M. The record represents the number of data sets in which A respectively beats, loses to or ties with B on M. Small improvements in cross-validation error may be attributable to chance. Consequently, it may be beneficial to use a statistical test to assess whether an improvement is significant. A standard binomial sign test, assuming that wins and losses are equiprobable, is applied to these records. A difference is considered significant when the outcome of a two-tailed binomial sign test is less than 0.05. Tables 7–10 show the W/D/L records corresponding to zero-one loss (on all and on large data sets), bias and variance, respectively.
To clarify the main reason why the same classifier performs differently when data size changes, in the following discussion, we propose a new parameter, named clear winning percentage (CWP), which is defined as follows, to evaluate the extent to which one classifier is relatively superior to another.
$$CWP = \frac{Win - Loss}{Win + Draw + Loss}$$
If CWP > 0, then the classifier performs better. Otherwise, it performs worse. Figure 7 shows the comparison results of CWP corresponding to Tables 7 and 8.
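For reference, the CWP values quoted in the next paragraph follow directly from the W/D/L records in Tables 7 and 8:

```python
def cwp(win, draw, loss):
    """Clear winning percentage: (Win - Loss) / (Win + Draw + Loss)."""
    return (win - loss) / (win + draw + loss)

print(round(cwp(22, 5, 16), 2))   # NB vs. C4.5 on all 43 data sets -> 0.14
print(round(cwp(2, 1, 12), 2))    # NB vs. C4.5 on the 15 large data sets -> -0.67
```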
NB delivers fast and effective classification with a clear theoretical foundation. However, NB is confronted with the limitations of the conditional independence assumption. From Figure 7a, we can see that, although the conditional independence assumption rarely holds in the real world, NB exhibits competitive performance compared to C4.5: CWP = (22 − 16)/43 = 0.14 > 0. When dealing with small data sets, the instances can be regarded as sparsely distributed, and the conditional independence assumption is satisfied to some extent. For example, there are as many as 38 attributes and only 894 instances for the data set "Anneal". After calculating and comparing the sum of CMI between one attribute and all of the other attributes, most attributes have weak relationships to other attributes or are even nearly independent of them. However, when dealing with large data sets, the relationships between attributes become much more obvious, and the limitation of this assumption cannot be neglected: CWP = (2 − 12)/15 = −0.67 < 0. As Figure 7b shows, a similar result appears when TAN and C4.5 are compared. The main reason may be that logical rules, which are extracted based on information theory, take the main role in classification and can represent credible linear dependency among attributes as the data size increases. Besides, as Figure 7c shows, although TAN is restricted to at most one parent node for each predictive attribute, its structure is more reasonable than that of NB and can exhibit the relationships between attributes to some extent. From Figure 7d–f, when compared with these three single-structure models, the hybrid model, i.e., NBTree, shows superior or at least equivalent performance. To decrease the computational overhead, NB and TAN simplify the network structure under different independence assumptions, whereas the hybrid model can utilize the advantages of both logical rules and probabilistic estimation. Because each leaf of the tree structure contains only a limited number of instances, NB can evenly make use of the rest of the attributes. RTAN investigates the diversity of TAN by the K statistic. From Figure 7g–j, this bagging mechanism helps RTAN to achieve superior performance to NB and TAN. However, when compared to the decision tree learning algorithms, i.e., C4.5 and NBTree, the advantage of RTAN is not obvious. FHDF is motivated by the desire to weaken the assumption by removing correlated attributes on the basis of FD analysis. It also applies the bagging mechanism to describe the hyperplane in which each instance exists. From Figure 7k–o, FHDF demonstrates comprehensive and superior performance.
Bias can help to evaluate the extent to which the final model learned from training data fits the whole data set. From Table 9, we can see that the fit of NB is the poorest, because its structure is fixed regardless of what the true data distribution is. In contrast, TAN performs much better than C4.5 and NBTree. The main reason may be that each predictive attribute of TAN is affected by its direct parent node, which is selected by calculating CMI from the global viewpoint. As for C4.5 and NBTree, the final prediction relies greatly on the first few attributes corresponding to the main parts of the single-tree structure, i.e., the root and the branch nodes. Thus, C4.5 and NBTree easily reach a locally rather than globally optimal solution. Additionally, in view of this, the bagging mechanism can help RTAN and FHDF to make full use of the information that the training data supply, since the knowledge hierarchy implicated in the training data can be described by different submodels, and the negative effect caused by changes in the data distribution is mitigated to a certain extent. The complicated relationships among attributes are measured and depicted from the viewpoint of information theory; thus, performance robustness can be achieved.
With respect to variance, as Table 10 shows, NB performs the best among these algorithms, because its network structure is fixed and, thus, not sensitive to changes in the training set. By contrast, C4.5 performs the worst. The main reason may be that the logical rules resemble the hierarchical process of human decision making. The attributes located toward the end of a rule must correlate with those in front. If any attribute changes its location, the following attributes will be affected greatly. Thus, for different training sets, especially when a branch contains a very small number of instances, the conditional distribution estimates may differ greatly and different attribute orders may be obtained. For example, there are 58 attributes and 4601 instances for the data set "Spambase". When 10-fold cross-validation is applied, only 4601 × 0.9 ≈ 4141 instances can be used for training. Suppose each attribute contains two values and the leaf node must have at least 100 instances to ensure statistical significance: each logical rule will use at most five attributes for prediction. That means that the order of the remaining 53 attributes may be essentially random. TAN and RTAN need to calculate CMI to build a maximal spanning tree, which may cause overfitting. On the other hand, FHDF and NBTree both use NB as the leaf node, thus retaining the simplicity and direct theoretical foundation of the decision tree, while mitigating the negative effect of the probability distribution of the training set. Besides, when different training and testing sets are given, FHDF uses the same FDs in the semi-supervised learning framework to eliminate redundant attributes. These FDs are extracted from the whole data set and are entirely unrelated to the particular training set.
To further describe the working mechanism of FHDF, the required information growth and the number of key attributes were varied while learning from the training data. The comparison results of average bias and average variance are shown in Figure 8. When the required information growth increased from 10% to 30%, as can be seen from Figure 8a, the bias increased correspondingly, while the variance decreased. The main reason may be that, when instance t is converted into a classification rule, higher required information growth causes fewer attribute values to be selected. Thus, the classification rule will be too short to precisely fit instance t. However, when given different training data, the short rule may roughly hold for many instances. When the number of key attributes increased from one to three, as can be seen from Figure 8b, the bias and variance remained almost the same. With more key attributes used, according to Occam's razor, the extracted FDs may be coincidental and not credible. Furthermore, the number of FDs with one key attribute is much larger than that with two or three key attributes, so the negative effect on bias and variance can be neglected.

5. Conclusions

FHDF has demonstrated a number of advantages over previous decision tree learning algorithms. With sparsely distributed data, ensemble learning has difficulty in predicting with certainty, which results in an increased error rate. Regardless of which part of the data is used for testing, the FDs are extracted from the whole data set and thus remain the same, and the computational demands for determining the structure become lower, especially when a large number of attributes is available. However, the size of the conditional probability tables increases exponentially with the number of parents, and sufficient labeled instances are a prerequisite for precise parameter estimation. From the probabilistic analysis perspective, if the size of the data set is too small, the distributions of different attribute values become uneven: some attributes may be closely distributed, whereas others may be sparsely distributed, and an unreliable probability estimate might be obtained. Determining when NB can be used as the leaf node, so that CLMI and the conditional probability distribution can be calculated precisely, remains unsolved. A number of techniques have been developed for extending decision trees to handle numeric data; hence, extending the current work to a more general FHDF framework is necessary.

Acknowledgments

This work was supported by the National Science Foundation of China (Grant No. 61272209, 61300145) and the Postdoctoral Science Foundation of China (Grant No. 20100481053, 2013M530980).

Author Contributions

All authors have contributed to the study and preparation of the article. They have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Suresha, M.; Danti, A.; Narasimhamurthy, S.K. Decision trees to multiclass prediction for analysis of arecanut data. Comput. Syst. Sci. Eng 2014, 29, 105–114. [Google Scholar]
  2. Abellan, J.; Baker, R.M.; Crossman, R.J.; Masegosa, A.R. Classification with decision trees from a nonparametric predictive inference perspective. Comput. Stat. Data Anal 2014, 71, 789–802. [Google Scholar] [Green Version]
  3. Tao, X.P.; Wang, N.; Zhou, Sh.G.; Zhou, A.Y.; Hu, Y.F. Mining Functional Dependency Rule of Relational Database. Proceedings of the 3rd Pacific-Asia Conference on Knowledge Discovery and Data Mining, Beijing, China, 26–28 April 1999; Zhong, N., Zhou, L.Z., Eds.; Springer: Berlin/Heidelberg, Germany, 1999; pp. 520–524. [Google Scholar]
  4. Ronald, S.K.; James, J.L. Discovery of functional and approximate functional dependencies in relational databases. J. Appl. Math. Dec. Sci 2003, 7, 49–59. [Google Scholar]
  5. Agrawal, R.; Srikant, R. Fast algorithms for mining association rules in large databases. Proceedings of 20th International Conference on Very Large Data Bases, Santiago de Chile, Chile, 12–15 September 1994; pp. 487–499.
  6. Wu, J.; Cai, Z. A naive Bayes probability estimation model based on self-adaptive differential evolution. J. Intell. Inf. Syst 2014, 42, 671–694. [Google Scholar]
  7. Zheng, F.; Webb, G.I. Subsumption Resolution: An Efficient and Effective Technique for Semi-Naive Bayesian Learning. Mach. Learn 2012, 87, 1947–1988. [Google Scholar]
  8. Wang, L.M.; Yao, G.F. Extracting Logical Rules and Attribute Subset from Confidence Domain. Information-Tokyo 2012, 15, 173–180. [Google Scholar]
  9. Wang, L.M.; Yao, G.F. Learning NT Bayesian Classifier Based on Canonical Cover Analysis of Relational Database. Information-Tokyo 2012, 15, 165–172. [Google Scholar]
  10. Wang, L.M.; Yao, G.F. Bayesian Network Inference Based on Functional Dependency Mining of Relational Database. Information-Tokyo 2012, 15, 2441–2446. [Google Scholar]
  11. Beskales, G.; Ilyas, I.; Golab, L. Sampling from repairs of conditional functional dependency violations. VLDB J 2014, 23, 103–128. [Google Scholar]
  12. Nayyar, A.Z.; Webb, G.I. Fast and Effective Single Pass Bayesian Learning. Proceedings of the 17th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Gold Coast, Australia, 14–17 April 2013; Pei, J., Vincent, S.T., Cao, L.B., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 149–160. [Google Scholar]
  13. Jiang, L.; Li, C.; Wang, S. Cost-sensitive Bayesian network classifiers. Patt. Recogn. Lett 2014, 45, 211–216. [Google Scholar]
  14. Langley, P.; Sage, S. Induction of Selective Bayesian Classifiers. Proceedings of the 10th Conference on Uncertainty in Artificial Intelligence, Seattle, WA, 29–31 July 1994.
  15. Kohavi, R.; Wolpert, D. Bias Plus Variance Decomposition for Zero-one Loss Functions. Proceedings of the 13th International Conference on Machine Learning, Bari, Italy, 3–6 July 1996.
  16. Moore, D.S.; McCabe, G.P. Introduction to the Practice of Statistics, 4th ed; Michelle Julet: San Francisco, CA, USA, 2002. [Google Scholar]
  17. Fayyad, U.M.; Irani, K.B. Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning. Proceedings of the 13th International Joint Conference on Artificial Intelligence, Chambéry, France, 28 August–3 September 1993.
  18. Josep, R.A. Incremental Learning of Tree Augmented Naive Bayes Classifiers. Proceedings of the 18th National Conference on Artificial Intelligence and 14th Conference on Innovative Applications of Artificial Intelligence, Edmonton, Alberta, Canada, 28 July–1 August 2002.
  19. Farid, D.M.; Zhang, L.; Rahman, C.M. Hybrid decision tree and naive Bayes classifiers for multi-class classification tasks. Expert Syst. Appl 2014, 41, 1937–1946. [Google Scholar]
  20. Ma, Sh.; Shi, H.B. Tree-augmented naive Bayes ensembles. Proceedings of the 3rd International Conference on Machine Learning and Cybernetics, Shanghai, China, 26–29 August 2004.
Figure 1. The overall and local relationships among {X1, X2, X3} and C.
Figure 2. The conversion procedure from instances {t1, t2, t3} to logical rules.
Figure 3. The overall decision forest structure corresponding to instances {t1, t2, t3}.
Figure 4. The network structure of naive Bayes.
Figure 5. The corresponding flexible hybrid decision forest (FHDF) structure before and after applying functional dependency (FD).
Figure 6. Bias and variance in shooting arrows at a target.
Figure 7. The 0–1 loss comparison of learning algorithms on all and large data sets.
Figure 8. Comparison results of the bias and variance of FHDF given different information growth and a different number of key attributes.
Table 1. Corresponding classification rules.
No | Rules | Class Label
1 | {a0} | C1
2 | {a0} | C1
3 | {a1, b0, c0} | C2
4 | {a1, b0, c1} | C1
5 | {a1, b0, c2} | C2
6 | {a2, b1} | C0
7 | {a2, b0} | C1
8 | {b2} | C2
9 | {b2} | C2
Table 2. Sample data from data set S.
No | X1 | X2 | X3 | Class Label
1 | a0 | b0 | c2 | C1
2 | a0 | b1 | c1 | C1
3 | a1 | b0 | c0 | C2
4 | a1 | b0 | c1 | C1
5 | a1 | b0 | c2 | C2
6 | a2 | b1 | c1 | C0
7 | a2 | b0 | c1 | C1
8 | a3 | b2 | c0 | C2
9 | a4 | b2 | c1 | C2
Table 3. Data sets.
No. | Data Set | # Instance | Attribute | Class
1 | Abalone * | 4177 | 9 | 3
2 | Adult * | 48,842 | 15 | 2
3 | Anneal | 898 | 39 | 6
4 | Audio | 226 | 70 | 24
5 | Balance Scale (Wisconsin) | 625 | 5 | 3
6 | breast-cancer-w | 699 | 10 | 2
7 | Live Disorder (Bupa) | 345 | 7 | 2
8 | Car | 1728 | 8 | 4
9 | Chess | 551 | 40 | 2
10 | Cleveland | 303 | 14 | 2
11 | Connect-4 Opening * | 67,557 | 43 | 3
12 | Contraceptive Method Choice | 1473 | 10 | 3
13 | Credit Screening | 690 | 16 | 2
14 | Cylinder-bands | 540 | 40 | 2
15 | Dermatology | 366 | 35 | 6
16 | Glass Identification | 214 | 10 | 3
17 | German | 1000 | 21 | 2
18 | Haberman's Survival | 306 | 4 | 2
19 | Heart Disease | 303 | 14 | 2
20 | Hepatitis | 155 | 20 | 2
21 | Hungarian | 294 | 14 | 2
22 | Iris | 150 | 5 | 3
23 | King-rook-vs-King-pawn * | 3196 | 36 | 2
24 | Labor Negotiation | 57 | 17 | 2
25 | LED | 1,000 | 8 | 10
26 | Localization * | 164,860 | 7 | 3
27 | Lung Cancer | 32 | 57 | 3
28 | Lymphography | 148 | 19 | 4
29 | Magic * | 19,020 | 11 | 2
30 | Mushroom * | 8124 | 22 | 2
31 | Nursery * | 12,960 | 8 | 5
32 | Optdigits * | 5620 | 64 | 10
33 | Poker-hand * | 1,025,010 | 11 | 10
34 | Primary Tumor | 339 | 18 | 22
35 | Satellite * | 6435 | 37 | 6
36 | Segment * | 2310 | 20 | 7
37 | Shuttle * | 58,000 | 9 | 7
38 | Sick * | 3772 | 30 | 2
39 | Spambase * | 4601 | 58 | 2
40 | Teaching Assistant Evaluation | 151 | 6 | 3
41 | Vehicle | 846 | 19 | 4
42 | Wine Recognition | 178 | 14 | 3
43 | Zoo | 101 | 17 | 7
Table 4. Experimental results of 0–1 loss. C4.5, standard decision tree; TAN, tree-augmented naive Bayes applying incremental learning; NBTree, decision tree with naive Bayes as the leaf node; RTAN, tree-augmented naive Bayes ensembles.
Dataset | C4.5 | NB | TAN | NBTree | RTAN | FHDF
Abalone | 0.460 | 0.472 | 0.459 | 0.430 | 0.450 | 0.443
Adult | 0.139 | 0.158 | 0.138 | 0.143 | 0.144 | 0.131
Anneal | 0.458 | 0.375 | 0.375 | 0.360 | 0.173 | 0.238
Audio | 0.314 | 0.239 | 0.292 | 0.195 | 0.235 | 0.307
Balance Scale (Wisconsin) | 0.306 | 0.285 | 0.280 | 0.286 | 0.297 | 0.278
breast-cancer-w | 0.056 | 0.026 | 0.042 | 0.034 | 0.032 | 0.071
Live Disorder (Bupa) | 0.444 | 0.444 | 0.444 | 0.426 | 0.452 | 0.421
Car | 0.156 | 0.140 | 0.057 | 0.078 | 0.061 | 0.036
Chess | 0.151 | 0.113 | 0.093 | 0.096 | 0.109 | 0.095
Cleveland | 0.277 | 0.162 | 0.205 | 0.171 | 0.178 | 0.204
Connect-4 Opening | 0.206 | 0.278 | 0.235 | 0.232 | 0.233 | 0.217
Contraceptive Method Choice | 0.493 | 0.504 | 0.489 | 0.474 | 0.505 | 0.475
Credit Screening | 0.133 | 0.141 | 0.151 | 0.134 | 0.142 | 0.139
Cylinder-bands | 0.307 | 0.215 | 0.283 | 0.181 | 0.179 | 0.215
Dermatology | 0.079 | 0.019 | 0.033 | 0.016 | 0.022 | 0.062
Glass Identification | 0.257 | 0.262 | 0.220 | 0.242 | 0.210 | 0.168
German | 0.277 | 0.253 | 0.273 | 0.238 | 0.259 | 0.275
Haberman's Survival | 0.281 | 0.281 | 0.281 | 0.270 | 0.287 | 0.267
Heart Disease | 0.237 | 0.178 | 0.193 | 0.164 | 0.195 | 0.201
Hepatitis | 0.161 | 0.194 | 0.168 | 0.173 | 0.184 | 0.178
Hungarian | 0.194 | 0.160 | 0.170 | 0.160 | 0.163 | 0.171
Iris | 0.080 | 0.087 | 0.080 | 0.083 | 0.088 | 0.082
King-rook-vs-King-pawn | 0.022 | 0.121 | 0.078 | 0.081 | 0.068 | 0.040
Labor Negotiation | 0.088 | 0.035 | 0.053 | 0.050 | 0.036 | 0.033
LED | 0.309 | 0.267 | 0.266 | 0.257 | 0.271 | 0.249
Localization | 0.305 | 0.496 | 0.358 | 0.345 | 0.277 | 0.282
Lung Cancer | 0.625 | 0.438 | 0.594 | 0.480 | 0.510 | 0.534
Lymphography | 0.284 | 0.149 | 0.176 | 0.162 | 0.152 | 0.225
Magic | 0.164 | 0.224 | 0.168 | 0.168 | 0.166 | 0.156
Mushroom | 0.000 | 0.020 | 0.000 | 0.000 | 0.000 | 0.000
Nursery | 0.050 | 0.097 | 0.065 | 0.070 | 0.053 | 0.027
Optdigits | 0.230 | 0.077 | 0.041 | 0.030 | 0.024 | 0.035
Poker-hand | 0.245 | 0.499 | 0.330 | 0.462 | 0.121 | 0.186
Primary Tumor | 0.584 | 0.546 | 0.543 | 0.552 | 0.557 | 0.544
Satellite | 0.154 | 0.181 | 0.121 | 0.110 | 0.094 | 0.103
Segment | 0.092 | 0.079 | 0.039 | 0.033 | 0.034 | 0.045
Shuttle | 0.001 | 0.004 | 0.002 | 0.001 | 0.001 | 0.001
Sick | 0.020 | 0.031 | 0.026 | 0.026 | 0.025 | 0.021
Spambase | 0.083 | 0.102 | 0.067 | 0.065 | 0.058 | 0.060
Teaching Assistant Evaluation | 0.596 | 0.497 | 0.550 | 0.470 | 0.493 | 0.360
Vehicle | 0.331 | 0.392 | 0.294 | 0.278 | 0.278 | 0.280
Wine Recognition | 0.107 | 0.017 | 0.034 | 0.022 | 0.023 | 0.018
Zoo | 0.198 | 0.030 | 0.010 | 0.029 | 0.010 | 0.047
Table 5. Experimental results of bias.
Dataset | NB | TAN | C4.5 | NBTree | RTAN | FHDF
Abalone | 0.404 | 0.320 | 0.311 | 0.333 | 0.314 | 0.329
Adult | 0.140 | 0.108 | 0.100 | 0.113 | 0.113 | 0.108
Connect-4 Opening | 0.233 | 0.183 | 0.200 | 0.192 | 0.209 | 0.179
King-rook-vs-King-pawn | 0.111 | 0.067 | 0.019 | 0.070 | 0.062 | 0.039
Localization | 0.382 | 0.326 | 0.309 | 0.321 | 0.314 | 0.314
Magic | 0.199 | 0.136 | 0.139 | 0.161 | 0.154 | 0.132
Mushroom | 0.040 | 0.000 | 0.003 | 0.001 | 0.000 | 0.001
Nursery | 0.073 | 0.051 | 0.083 | 0.052 | 0.042 | 0.042
Optdigits | 0.066 | 0.031 | 0.109 | 0.030 | 0.022 | 0.029
Poker-hand | 0.327 | 0.227 | 0.308 | 0.263 | 0.270 | 0.331
Satellite | 0.166 | 0.094 | 0.101 | 0.080 | 0.075 | 0.081
Segment | 0.086 | 0.045 | 0.072 | 0.039 | 0.039 | 0.039
Shuttle | 0.007 | 0.002 | 0.003 | 0.002 | 0.002 | 0.003
Sick | 0.023 | 0.023 | 0.022 | 0.024 | 0.023 | 0.021
Spambase | 0.097 | 0.066 | 0.072 | 0.067 | 0.063 | 0.050
Table 6. Experimental results of variance.
Dataset | NB | TAN | C4.5 | NBTree | RTAN | FHDF
Abalone | 0.093 | 0.161 | 0.168 | 0.122 | 0.174 | 0.137
Adult | 0.027 | 0.066 | 0.045 | 0.059 | 0.045 | 0.044
Connect-4 Opening | 0.095 | 0.088 | 0.126 | 0.084 | 0.083 | 0.110
King-rook-vs-King-pawn | 0.025 | 0.019 | 0.010 | 0.008 | 0.022 | 0.023
Localization | 0.191 | 0.256 | 0.190 | 0.217 | 0.232 | 0.219
Magic | 0.041 | 0.079 | 0.085 | 0.065 | 0.070 | 0.061
Mushroom | 0.008 | 0.001 | 0.004 | 0.000 | 0.000 | 0.000
Nursery | 0.027 | 0.038 | 0.024 | 0.036 | 0.034 | 0.030
Optdigits | 0.025 | 0.028 | 0.172 | 0.026 | 0.021 | 0.024
Poker-hand | 0.209 | 0.231 | 0.212 | 0.263 | 0.232 | 0.214
Satellite | 0.021 | 0.041 | 0.089 | 0.042 | 0.042 | 0.049
Segment | 0.017 | 0.021 | 0.052 | 0.021 | 0.015 | 0.018
Shuttle | 0.004 | 0.001 | 0.002 | 0.002 | 0.001 | 0.002
Sick | 0.006 | 0.007 | 0.002 | 0.004 | 0.004 | 0.005
Spambase | 0.010 | 0.017 | 0.043 | 0.019 | 0.013 | 0.014
Table 7. Win/draw/loss record (W/D/L) comparison results of 0–1 loss on all data sets.
W/D/L | C4.5 | NB | TAN | NBTree | RTAN
NB | 22/5/16
TAN | 24/10/9 | 21/8/14
NBTree | 28/7/8 | 28/9/6 | 19/19/5
RTAN | 27/9/7 | 20/17/6 | 24/14/5 | 16/13/14
FHDF | 29/9/5 | 24/9/10 | 22/15/6 | 17/12/14 | 18/11/14
Table 8. W/D/L comparison results of 0–1 loss on large data sets.
W/D/L | C4.5 | NB | TAN | NBTree | RTAN
NB | 2/1/12
TAN | 4/3/8 | 14/1/0
NBTree | 7/2/6 | 15/0/0 | 5/8/2
RTAN | 7/2/6 | 11/1/3 | 8/4/3 | 6/5/4
FHDF | 10/2/3 | 15/0/0 | 12/2/1 | 11/1/3 | 8/3/4
Table 9. W/D/L comparison results of bias on large data sets.
W/D/L | NB | TAN | C4.5 | NBTree | RTAN
TAN | 14/1/0
C4.5 | 12/1/2 | 3/3/9
NBTree | 14/1/0 | 2/9/4 | 8/2/5
RTAN | 14/1/0 | 6/6/3 | 8/3/4 | 8/6/1
FHDF | 14/1/0 | 7/5/3 | 9/2/4 | 7/5/3 | 5/5/5
Table 10. W/D/L comparison results of variance on large data sets.
W/D/L | NB | TAN | C4.5 | NBTree | RTAN
TAN | 4/0/11
C4.5 | 5/2/8 | 6/1/8
NBTree | 5/1/9 | 10/2/3 | 9/0/6
RTAN | 7/0/8 | 10/2/3 | 8/2/5 | 7/3/5
FHDF | 4/3/8 | 11/0/4 | 8/2/5 | 7/2/6 | 6/2/7
