Entropy-Based Greedy Algorithm for Decision Trees Using Hypotheses

In this paper, we consider decision trees that use both conventional queries based on one attribute each and queries based on hypotheses of values of all attributes. Such decision trees are similar to those studied in exact learning, where membership and equivalence queries are allowed. We present a greedy algorithm based on entropy for the construction of the above decision trees and discuss the results of computer experiments on various data sets and randomly generated Boolean functions.


Introduction
Decision trees are well known as means for knowledge representation, as classifiers, and as algorithms to solve various problems of combinatorial optimization, computational geometry, etc. [1][2][3].
Conventional decision trees have been studied in different theories, in particular, in rough set theory initiated by Pawlak [4][5][6] and in test theory initiated by Chegis and Yablonskii [7]. These trees use simple queries based on one attribute each.
In contrast to these theories, exact learning initiated by Angluin [8,9] studied not only membership queries that correspond to attributes from rough set theory and test theory, but also so-called equivalence queries. Relations between exact learning and probably approximately correct (PAC) learning proposed by Valiant [10] were discussed in [8].
In this paper, we add the notion of a hypothesis to the model that has been considered in rough set theory as well as in test theory. This model allows us to use an analog of equivalence queries.
Let T be a decision table with n conditional attributes f_1, . . . , f_n having values from the set ω = {0, 1, 2, . . .} in which rows are pairwise different and each row is labeled with a decision from ω. For a given row of T, we should recognize the decision attached to this row. To this end, we can use decision trees based on two types of queries. We can ask about the value of an attribute f_i ∈ {f_1, . . . , f_n} on the given row. We will obtain an answer of the kind f_i = δ, where δ is the number in the intersection of the given row and the column f_i. We can also ask if the hypothesis f_1 = δ_1, . . . , f_n = δ_n is true, where δ_1, . . . , δ_n are numbers from the columns f_1, . . . , f_n, respectively. Either this hypothesis will be confirmed or we will obtain a counterexample in the form f_i = σ, where f_i ∈ {f_1, . . . , f_n} and σ is a number from the column f_i different from δ_i. The considered hypothesis is called proper if (δ_1, . . . , δ_n) is a row of the table T.
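To make the query model concrete, the following minimal sketch simulates both kinds of answers for a hidden row. The encoding is our own and hypothetical, not from the paper: a row is a tuple of attribute values, and attributes are indexed from 0.

```python
from typing import Optional, Tuple

class QueryOracle:
    """Simulates the answers for one hidden row of a decision table."""

    def __init__(self, hidden_row: Tuple[int, ...]):
        self.row = hidden_row

    def attribute_query(self, i: int) -> int:
        """Answer f_i = delta, where delta is the value of f_i on the hidden row."""
        return self.row[i]

    def hypothesis_query(self, delta: Tuple[int, ...]) -> Optional[Tuple[int, int]]:
        """Ask whether the hidden row is delta = (delta_1, ..., delta_n).

        Returns None if the hypothesis is confirmed, and otherwise a
        counterexample (i, sigma) with sigma = row[i] != delta[i]; here we
        return the first such i, although any choice is allowed.
        """
        for i, (d, r) in enumerate(zip(delta, self.row)):
            if d != r:
                return (i, r)
        return None
```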
In this paper, we consider two cost functions that characterize the time and space complexity of decision trees. As the time complexity of a decision tree, we consider its depth, which is equal to the maximum number of queries in a path from the root to a terminal node of the tree. As the space complexity of a decision tree, we consider the number of its nodes that are realizable relative to T. A node is called realizable relative to T if, for some row of T and some choice of counterexamples, the computation in the tree passes through this node.
Decision trees using hypotheses can be significantly more efficient than decision trees using only attributes. Consider, as an example, the problem of computing the conjunction x_1 ∧ · · · ∧ x_n. The minimum depth of a decision tree solving this problem using the attributes x_1, . . . , x_n is equal to n, and the minimum number of realizable nodes in such decision trees is equal to 2n + 1. However, the minimum depth of a decision tree solving this problem using proper hypotheses is equal to 1: it is enough to ask only about the hypothesis x_1 = 1, . . . , x_n = 1. If it is true, then the considered conjunction is equal to 1. Otherwise, it is equal to 0. The obtained decision tree contains n + 2 realizable nodes.
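For instance, a sketch of this depth-1 tree for the conjunction, reusing the hypothetical QueryOracle above:

```python
def evaluate_conjunction(oracle: QueryOracle, n: int) -> int:
    """Compute x_1 ∧ ... ∧ x_n with a single proper hypothesis query."""
    # Ask about the proper hypothesis x_1 = 1, ..., x_n = 1.
    answer = oracle.hypothesis_query(tuple(1 for _ in range(n)))
    # Confirmed: all variables equal 1, so the conjunction equals 1.
    # Any counterexample x_i = 0 makes the conjunction equal 0.
    return 1 if answer is None else 0
```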
In this paper, we consider the following five types of decision trees:
1. Decision trees that use only attributes.
2. Decision trees that use only hypotheses.
3. Decision trees that use both attributes and hypotheses.
4. Decision trees that use only proper hypotheses.
5. Decision trees that use both attributes and proper hypotheses.
There are different ways to construct conventional decision trees, including algorithms that can construct optimal decision trees for medium-sized decision tables [11][12][13][14][15]. In particular, in [16,17], we proposed dynamic programming algorithms for the minimization of the depth and number of realizable nodes in decision trees with hypotheses. However, the most common way is to use greedy algorithms [1,18].
In this paper, we propose a greedy algorithm based on entropy that, for a given decision table and type of decision trees, constructs a decision tree of the considered type for this table. The goal of this paper is to understand which type of decision tree should be chosen if we would like to minimize the depth and which type should be chosen if we would like to minimize the number of realizable nodes. To this end, we compare the parameters of the constructed decision trees of the five types for 10 decision tables from the UCI ML Repository [19]. We do the same for randomly generated Boolean functions with n variables, where n = 3, . . . , 6. From the obtained experimental results, it follows that we should choose decision trees of type 3 if we would like to minimize the depth and decision trees of type 1 if we would like to minimize the number of realizable nodes.
The main contributions of the paper are (i) the design of the extended entropy-based greedy algorithm that can work with five types of decision trees and (ii) the understanding of which type of decision trees should be chosen when we would like to minimize the depth or the number of realizable nodes.
The rest of the paper is organized as follows. In Sections 2 and 3, we consider the main notions and, in Section 4, the greedy algorithm for decision tree construction. Section 5 contains the results of the computer experiments and Section 6, short conclusions.

Decision Tables

A decision table T is a rectangular table with n columns labeled with conditional attributes f_1, . . . , f_n, filled with numbers from the set ω = {0, 1, 2, . . .}, in which rows are pairwise different and each row is labeled with a decision from ω. We denote F(T) = {f_1, . . . , f_n} and denote by D(T) the set of decisions attached to the rows of T. For any conditional attribute f_i ∈ F(T), we denote by E(T, f_i) the set of values of the attribute f_i in the table T.

The table T is called degenerate if it has no rows or all its rows are labeled with the same decision. A system of equations over T is an arbitrary system of the kind {f_i_1 = δ_1, . . . , f_i_m = δ_m}, where f_i_1, . . . , f_i_m ∈ F(T) and δ_1, . . . , δ_m ∈ ω; this system may be empty. Let T be a nonempty table. A subtable of T is a table obtained from T by the removal of some rows. We correspond to each equation system S over T a subtable TS of the table T.
If the system S is empty, then TS = T. Let S be nonempty. Then TS is the subtable of the table T containing those rows of T that, in the intersection with the columns f_i_1, . . . , f_i_m, have the numbers δ_1, . . . , δ_m, respectively.
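A minimal sketch of this correspondence, in a hypothetical encoding of our own: a table maps each row (a tuple of attribute values) to its decision, and an equation system maps 0-based attribute indices to required values.

```python
from typing import Dict, Tuple

Row = Tuple[int, ...]
DecisionTable = Dict[Row, int]       # row -> decision; rows are pairwise different
EquationSystem = Dict[int, int]      # attribute index i -> required value delta

def subtable(table: DecisionTable, system: EquationSystem) -> DecisionTable:
    """Return TS: the rows of T that satisfy every equation of S.

    The empty system yields the whole table, as in the definition above.
    """
    return {row: d for row, d in table.items()
            if all(row[i] == delta for i, delta in system.items())}
```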

Decision Trees
Let T be a nonempty decision table with n conditional attributes f_1, . . . , f_n. We consider decision trees with two types of queries. We can choose an attribute f_i ∈ {f_1, . . . , f_n} and ask about its value. This query has the set of answers A(f_i) = {{f_i = δ} : δ ∈ E(T, f_i)}. We can also formulate a hypothesis over T in the form H = {f_1 = δ_1, . . . , f_n = δ_n}, where δ_1 ∈ E(T, f_1), . . . , δ_n ∈ E(T, f_n), and ask about this hypothesis. This query has the set of answers A(H) = {H} ∪ {{f_i = σ} : i ∈ {1, . . . , n}, σ ∈ E(T, f_i) \ {δ_i}}. The answer H means that the hypothesis is true. Other answers are counterexamples. The hypothesis H is called proper for T if (δ_1, . . . , δ_n) is a row of the table T.
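Continuing the sketch above, the two answer sets can be enumerated as follows. Each answer is encoded as the equation system it contributes; the answer H itself is encoded as the full system f_1 = δ_1, . . . , f_n = δ_n.

```python
from typing import List

def attribute_answers(table: DecisionTable, i: int) -> List[EquationSystem]:
    """A(f_i): one answer {f_i = delta} for each delta in E(T, f_i)."""
    values = {row[i] for row in table}            # E(T, f_i)
    return [{i: delta} for delta in sorted(values)]

def hypothesis_answers(table: DecisionTable, delta: Row) -> List[EquationSystem]:
    """A(H) for H = {f_1 = delta_1, ..., f_n = delta_n}: the answer H plus
    every counterexample {f_i = sigma} with sigma in E(T, f_i) \ {delta_i}."""
    answers = [{i: delta[i] for i in range(len(delta))}]   # the answer H
    for i in range(len(delta)):
        for sigma in sorted({row[i] for row in table} - {delta[i]}):
            answers.append({i: sigma})
    return answers
```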
A decision tree over T is a marked finite directed tree with a root in which the following holds:
• Each node that is not terminal (such nodes are called working) is labeled with an attribute from the set F(T) or with a hypothesis over T.
• If a working node is labeled with an attribute f_i from F(T), then, for each answer from the set A(f_i), there is exactly one edge labeled with this answer that leaves this node, and no other edges leave this node.
• If a working node is labeled with a hypothesis H = {f_1 = δ_1, . . . , f_n = δ_n} over T, then, for each answer from the set A(H), there is exactly one edge labeled with this answer that leaves this node, and no other edges leave this node.
Let Γ be a decision tree over T and v be a node of Γ. We now define an equation system S(Γ, v) over T associated with the node v. We denote by ξ the directed path from the root of Γ to the node v. If there are no working nodes in ξ, then S(Γ, v) is the empty system. Otherwise, S(Γ, v) is the union of equation systems attached to the edges of the path ξ.
A decision tree Γ over T is called a decision tree for T if, for any terminal node v of Γ, the following is true:
• If the subtable TS(Γ, v) is empty, then the node v is labeled with the decision 0.
• If the subtable TS(Γ, v) is nonempty, then the node v is labeled with the decision attached to all rows of TS(Γ, v).
A complete path in Γ is an arbitrary directed path from the root to a terminal node in Γ. As the time complexity of a decision tree, we consider its depth, which is the maximum number of working nodes in a complete path in the tree or, which is the same, the maximum length of a complete path in the tree. We denote by h(Γ) the depth of the decision tree Γ.
As the space complexity of the decision tree Γ, we consider the number of its nodes that are realizable relative to T. A node v of Γ is called realizable relative to T if and only if the subtable TS(Γ, v) is nonempty. We denote by L(T, Γ) the number of nodes in Γ that are realizable relative to T.
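Both cost functions are easy to compute for an explicitly stored tree. The sketch below uses our own hypothetical Node encoding, reusing the types above: a working node keeps a list pairing each answer (an equation system) with the corresponding subtree, and a terminal node keeps a decision.

```python
from typing import List, Optional, Tuple

class Node:
    """A working node stores (answer, child) pairs; a terminal node stores
    a decision and has no children."""

    def __init__(self,
                 children: Optional[List[Tuple[EquationSystem, "Node"]]] = None,
                 decision: Optional[int] = None):
        self.children = children or []
        self.decision = decision

def depth(node: Node) -> int:
    """h(Γ): the maximum number of working nodes on a complete path."""
    if not node.children:
        return 0
    return 1 + max(depth(child) for _, child in node.children)

def realizable_nodes(node: Node, table: DecisionTable,
                     equations: frozenset = frozenset()) -> int:
    """L(T, Γ): the number of nodes v with nonempty subtable TS(Γ, v).

    `equations` accumulates S(Γ, v) as a set of (index, value) pairs, so
    that contradictory equations on a path correctly give an empty subtable.
    """
    if not any(all(row[i] == delta for i, delta in equations) for row in table):
        return 0   # TS(Γ, v) is empty here, hence also below this node
    return 1 + sum(realizable_nodes(child, table,
                                    equations | frozenset(answer.items()))
                   for answer, child in node.children)
```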
In this paper, we consider the following five types of decision trees:
1. Decision trees that use only attributes.
2. Decision trees that use only hypotheses.
3. Decision trees that use both attributes and hypotheses.
4. Decision trees that use only proper hypotheses.
5. Decision trees that use both attributes and proper hypotheses.

Greedy Algorithm Based on Entropy
Let T be a nonempty decision table with n conditional attributes f_1, . . . , f_n and Θ be a subtable of the table T. We define the entropy of Θ (denoted ent(Θ)) as follows. If Θ is empty, then ent(Θ) = 0. Let Θ be nonempty. For any decision t ∈ D(Θ), we denote by N_t(Θ) the number of rows in Θ labeled with the decision t, and we denote by N(Θ) the number of rows in Θ. Then

ent(Θ) = −∑_{t ∈ D(Θ)} (N_t(Θ)/N(Θ)) log_2 (N_t(Θ)/N(Θ)).

For a query X (an attribute from F(T) or a hypothesis over T) with the set of answers A(X), we define the impurity of X relative to Θ as I(X, Θ) = max{ent(ΘS) : S ∈ A(X)}. We can find by simple search among all attributes from F(T) an attribute f_i with the minimum impurity I(f_i, Θ). We can also find by simple search among all proper hypotheses over T a proper hypothesis H with the minimum impurity I(H, Θ). It is not necessary to consider all hypotheses over T to find a hypothesis with the minimum impurity. For i = 1, . . . , n, we denote by δ_i a number from E(T, f_i) such that ent(Θ{f_i = δ_i}) = max{ent(Θ{f_i = σ}) : σ ∈ E(T, f_i)}. Then, the hypothesis H = {f_1 = δ_1, . . . , f_n = δ_n} has the minimum impurity I(H, Θ) among all hypotheses over T: for each attribute, this choice of δ_i excludes the counterexample whose subtable has the maximum entropy.
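A minimal sketch of these computations, reusing the DecisionTable and subtable helpers above; a query X is represented by its list of answers A(X), and the impurity is taken as the maximum entropy over the answer subtables, as defined above.

```python
import math
from collections import Counter
from typing import List

def entropy(theta: DecisionTable) -> float:
    """ent(Θ): the entropy of the distribution of decisions in Θ."""
    if not theta:
        return 0.0
    n_rows = len(theta)                              # N(Θ)
    counts = Counter(theta.values())                 # N_t(Θ) for t in D(Θ)
    return -sum((c / n_rows) * math.log2(c / n_rows) for c in counts.values())

def impurity(answers: List[EquationSystem], theta: DecisionTable) -> float:
    """I(X, Θ) = max{ent(ΘS) : S in A(X)}."""
    return max(entropy(subtable(theta, s)) for s in answers)

def best_hypothesis(table: DecisionTable, theta: DecisionTable) -> Row:
    """A minimum-impurity hypothesis over T: for each attribute f_i, take a
    value delta_i in E(T, f_i) whose subtable Θ{f_i = delta_i} has maximum
    entropy, so the worst counterexample is excluded."""
    n = len(next(iter(table)))
    return tuple(max({row[i] for row in table},
                     key=lambda v, i=i: entropy(subtable(theta, {i: v})))
                 for i in range(n))
```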
We now describe a greedy algorithm E based on entropy that, for a given nonempty decision table T and k ∈ {1, . . . , 5}, constructs a decision tree of the type k for the table T.
The considered algorithm is similar to standard top-down induction of decision trees [20,21]. The main peculiarity of this algorithm is step 5 in which, depending on the value of k, we choose an appropriate set of queries. Every time, the algorithm chooses a query from this set with the minimum impurity.
For a given nonempty decision table T and a number k ∈ {1, . . . , 5}, the algorithm E (Algorithm 1) constructs a decision tree of the type k for the table T. We denote by h_E^(k)(T) the depth of this decision tree and by L_E^(k)(T) the number of nodes in this tree that are realizable relative to T.

Algorithm 1 E.
Input: A nonempty decision table T and a number k ∈ {1, . . . , 5}.
Output: A decision tree of the type k for the table T.
1. Construct a tree G consisting of a single node labeled with T.
2. If no node of the tree G is labeled with a table, then the algorithm ends and returns the tree G.
3. Choose a node v in G that is labeled with a subtable Θ of the table T.
4. If Θ is degenerate, then, instead of Θ, we label the node v with 0 if Θ is empty and with the decision attached to each row of Θ if Θ is nonempty.
5. If Θ is nondegenerate, then, depending on k, we choose a query X (either an attribute or a hypothesis) in the following way:
(a) If k = 1, then we find an attribute X ∈ F(T) with the minimum impurity I(X, Θ).
(b) If k = 2, then we find a hypothesis X over T with the minimum impurity I(X, Θ).
(c) If k = 3, then we find an attribute Y ∈ F(T) with the minimum impurity I(Y, Θ) and a hypothesis Z over T with the minimum impurity I(Z, Θ). Between Y and Z, we choose a query X with the minimum impurity I(X, Θ).
(d) If k = 4, then we find a proper hypothesis X over T with the minimum impurity I(X, Θ).
(e) If k = 5, then we find an attribute Y ∈ F(T) with the minimum impurity I(Y, Θ) and a proper hypothesis Z over T with the minimum impurity I(Z, Θ). Between Y and Z, we choose a query X with the minimum impurity I(X, Θ).
6. Instead of Θ, we label the node v with the query X. For each answer S ∈ A(X), we add to the tree G a node v(S) and an edge e(S) connecting v and v(S). We label the node v(S) with the subtable ΘS and the edge e(S) with the answer S. We then proceed to step 2.
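A compact recursive sketch of Algorithm 1, reusing the helpers above. The explicit worklist of steps 2 and 3 is replaced by recursion, and the progress filter marked below is our own safeguard, not part of the paper's description.

```python
from typing import List, Optional

def greedy_tree(table: DecisionTable, k: int,
                theta: Optional[DecisionTable] = None) -> Node:
    """Construct a decision tree of type k for `table` (steps 1-6)."""
    theta = table if theta is None else theta
    decisions = set(theta.values())
    if len(decisions) <= 1:                      # step 4: Θ is degenerate
        return Node(decision=decisions.pop() if decisions else 0)
    n = len(next(iter(table)))                   # step 5: collect candidate queries
    candidates: List[List[EquationSystem]] = []
    if k in (1, 3, 5):                           # attribute queries
        candidates += [attribute_answers(table, i) for i in range(n)]
    if k in (2, 3):                              # one minimum-impurity hypothesis suffices
        candidates.append(hypothesis_answers(table, best_hypothesis(table, theta)))
    if k in (4, 5):                              # proper hypotheses are the rows of T
        candidates += [hypothesis_answers(table, row) for row in table]
    # Our own safeguard: keep only queries that strictly shrink Θ on every
    # answer, so the recursion terminates.
    candidates = [a for a in candidates
                  if all(len(subtable(theta, s)) < len(theta) for s in a)]
    best = min(candidates, key=lambda a: impurity(a, theta))
    return Node(children=[(s, greedy_tree(table, k, subtable(theta, s)))
                          for s in best])        # step 6
```

Under these assumptions, depth(greedy_tree(T, k)) and realizable_nodes(greedy_tree(T, k), T) then give h_E^(k)(T) and L_E^(k)(T).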

Results of Experiments
We make experiments with 10 decision tables from the UCI ML Repository [19]. Table 1 contains information about each of these decision tables: its name, the number of rows, and the number of conditional attributes. The results of the experiments are represented in Tables 2 and 3. The first column of Table 2 contains the name of the considered decision table T; the last five columns contain the values h_E^(1)(T), . . . , h_E^(5)(T), whose average values over the 10 tables are 7.9, 6.7, 5.9, 8.1, and 6.6, respectively. The first column of Table 3 contains the name of the considered decision table T; the last five columns contain the values L_E^(1)(T), . . . , L_E^(5)(T).
For n = 3, . . . , 6, we randomly generate 100 Boolean functions with n variables. We represent each Boolean function with n variables as a decision table with n columns labeled with these variables and with 2^n rows that are all possible n-tuples of values of the variables. Each row is labeled with the decision that is the value of the function on the corresponding n-tuple.
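For instance, such decision table representations can be generated as follows, a sketch reusing the DecisionTable encoding above (the function names are our own):

```python
import itertools
import random

def random_boolean_table(n: int, rng: random.Random) -> DecisionTable:
    """A random Boolean function of n variables as a decision table:
    2^n pairwise different rows, each labeled with the function's value."""
    return {bits: rng.randint(0, 1)
            for bits in itertools.product((0, 1), repeat=n)}

# Example: the experiment's setting of 100 random functions for each n.
tables = [random_boolean_table(n, random.Random(seed))
          for n in range(3, 7) for seed in range(100)]
```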
For each function, using its decision table representation and the algorithm E, we construct a decision tree of the type k computing this function, k = 1, . . . , 5. For each Boolean function, each hypothesis over the decision table representing it is proper, since every n-tuple of values of the variables is a row of this table. Therefore, for each Boolean function, h_E^(2)(T) = h_E^(4)(T), h_E^(3)(T) = h_E^(5)(T), L_E^(2)(T) = L_E^(4)(T), and L_E^(3)(T) = L_E^(5)(T). The average values of the depth and of the number of realizable nodes of the constructed trees are represented in Tables 4 and 5, whose first column contains the number of variables n. From the obtained experimental results, it follows that, using hypotheses, we can decrease the depth of the constructed decision trees. However, at the same time, the number of realizable nodes usually grows. Depending on our goals, we should choose decision trees of type 3 if we would like to minimize the depth and decision trees of type 1 if we would like to minimize the number of realizable nodes.

Conclusions
In this paper, we studied modified decision trees that use both queries based on one attribute each and queries based on hypotheses about the values of all attributes. We proposed an entropy-based greedy algorithm for the construction of such decision trees and considered the results of computer experiments. The goal of this paper was to understand which type of decision tree should be chosen if we would like to minimize the depth and which type should be chosen if we would like to minimize the number of realizable nodes. From the obtained experimental results, it follows that we should choose decision trees of type 3 if we would like to minimize the depth and decision trees of type 1 if we would like to minimize the number of realizable nodes. The comparison of different greedy algorithms based on various uncertainty measures will be considered in future papers.

Data Availability Statement: Publicly available data sets were analyzed in this study. These data can be found at http://archive.ics.uci.edu/ml (accessed on 12 April 2017); this repository does not issue DOIs.