1. Introduction
Decision trees are well known as means for knowledge representation, as classifiers, and as algorithms to solve various problems of combinatorial optimization, computational geometry, etc. [1,2,3].
Conventional decision trees have been studied in different theories, in particular, in rough set theory initiated by Pawlak [4,5,6] and in test theory initiated by Chegis and Yablonskii [7]. These trees use simple queries based on one attribute each.
In contrast to these theories, exact learning initiated by Angluin [8,9] studied not only membership queries that correspond to attributes from rough set theory and test theory, but also so-called equivalence queries. Relations between exact learning and probably approximately correct (PAC) learning proposed by Valiant [10] were discussed in [8].
In this paper, we add the notion of a hypothesis to the model that has been considered in rough set theory as well as in test theory. This model allows us to use an analog of equivalence queries.
Let T be a decision table with n conditional attributes $f_1, \ldots, f_n$ having values from the set $\omega = \{0, 1, 2, \ldots\}$, in which rows are pairwise different and each row is labeled with a decision from $\omega$. For a given row of T, we should recognize the decision attached to this row. To this end, we can use decision trees based on two types of queries. We can ask about the value of an attribute $f_i \in \{f_1, \ldots, f_n\}$ on the given row. We will obtain an answer of the kind $f_i = \delta$, where $\delta$ is the number in the intersection of the given row and the column $f_i$. We can also ask if the hypothesis $\{f_1 = \delta_1, \ldots, f_n = \delta_n\}$ is true, where $\delta_1, \ldots, \delta_n$ are numbers from the columns $f_1, \ldots, f_n$, respectively. Either this hypothesis will be confirmed or we will obtain a counterexample in the form $f_i = \sigma$, where $f_i \in \{f_1, \ldots, f_n\}$ and $\sigma$ is a number from the column $f_i$ different from $\delta_i$. The considered hypothesis is called proper if $(\delta_1, \ldots, \delta_n)$ is a row of the table T.
In this paper, we consider two cost functions that characterize the time and space complexity of decision trees. As the time complexity of a decision tree, we consider its depth, which is equal to the maximum number of queries in a path from the root to a terminal node of the tree. As the space complexity of a decision tree, we consider the number of its nodes that are realizable relative to T. A node is called realizable relative to T if, for some row of T and some choice of counterexamples, the computation in the tree passes through this node.
Decision trees using hypotheses can be essentially more efficient than decision trees using only attributes. Let us consider an example: the problem of computation of the conjunction $x_1 \wedge \cdots \wedge x_n$. The minimum depth of a decision tree solving this problem using the attributes $x_1, \ldots, x_n$ is equal to n. The minimum number of realizable nodes in such decision trees is equal to $2n + 1$. However, the minimum depth of a decision tree solving this problem using proper hypotheses is equal to 1: it is enough to ask only about the hypothesis $\{x_1 = 1, \ldots, x_n = 1\}$. If it is true, then the considered conjunction is equal to 1. Otherwise, it is equal to 0. The obtained decision tree contains $n + 2$ realizable nodes.
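To illustrate this, the following minimal sketch (the function and variable names are ours, not from the paper) computes the conjunction with a single query about the proper hypothesis $\{x_1 = 1, \ldots, x_n = 1\}$: the answer is either the hypothesis itself or a counterexample $x_i = 0$, and in the latter case the conjunction is 0.

```python
# A minimal sketch (illustrative; not the paper's implementation) of computing
# x_1 AND ... AND x_n with one query about the proper hypothesis
# {x_1 = 1, ..., x_n = 1}.

def answer_hypothesis(row, hypothesis):
    """Return None if the hypothesis is true on the row; otherwise return a
    counterexample (i, row[i]) with row[i] different from hypothesis[i]."""
    for i, (value, expected) in enumerate(zip(row, hypothesis)):
        if value != expected:
            return (i, value)  # any such pair is a valid counterexample
    return None

def conjunction_by_one_query(row):
    hypothesis = (1,) * len(row)   # the proper hypothesis {x_1 = 1, ..., x_n = 1}
    return 1 if answer_hypothesis(row, hypothesis) is None else 0

print(conjunction_by_one_query((1, 1, 1)))  # 1
print(conjunction_by_one_query((1, 0, 1)))  # 0 (counterexample x_2 = 0)
```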
In this paper, we consider the following five types of decision trees:
1. Decision trees that use only attributes.
2. Decision trees that use only hypotheses.
3. Decision trees that use both attributes and hypotheses.
4. Decision trees that use only proper hypotheses.
5. Decision trees that use both attributes and proper hypotheses.
There are different ways to construct conventional decision trees, including algorithms that can construct optimal decision trees for medium-sized decision tables [11,12,13,14,15]. In particular, in [16,17], we proposed dynamic programming algorithms for the minimization of the depth and the number of realizable nodes in decision trees with hypotheses. However, the most common way is to use greedy algorithms [1,18].
In this paper, we propose a greedy algorithm based on entropy that, for a given decision table and type of decision trees, constructs a decision tree of the considered type for this table. The goal of this paper is to understand which type of decision tree should be chosen if we would like to minimize the depth and which type should be chosen if we would like to minimize the number of realizable nodes. To this end, we compare the parameters of the constructed decision trees of the five types for 10 decision tables from the UCI ML Repository [19]. We do the same for randomly generated Boolean functions with n variables. From the obtained experimental results, it follows that we should choose decision trees of type 3 if we would like to minimize the depth and decision trees of type 1 if we would like to minimize the number of realizable nodes.
The main contributions of the paper are (i) the design of the extended entropy-based greedy algorithm that can work with five types of decision trees and (ii) the understanding of which type of decision trees should be chosen when we would like to minimize the depth or the number of realizable nodes.
The rest of the paper is organized as follows. In Section 2 and Section 3, we consider the main notions, and in Section 4, the greedy algorithm for decision tree construction. Section 5 contains the results of the computer experiments and Section 6, short conclusions.
  2. Decision Tables
A decision table is a rectangular table T with n columns filled with numbers from the set $\omega = \{0, 1, 2, \ldots\}$ of non-negative integers. Columns of this table are labeled with the conditional attributes $f_1, \ldots, f_n$. Rows of the table are pairwise different. Each row is labeled with a number from $\omega$ that is interpreted as a decision. Rows of the table are interpreted as tuples of values of the conditional attributes.
A decision table T is called empty if it has no rows. The table T is called degenerate if it is empty or all rows of T are labeled with the same decision.
We denote $F(T) = \{f_1, \ldots, f_n\}$ and denote by $D(T)$ the set of decisions attached to the rows of T. For any conditional attribute $f_i \in F(T)$, we denote by $E(T, f_i)$ the set of values of the attribute $f_i$ in the table T.
A system of equations over T is an arbitrary equation system of the following kind:
$$S = \{f_{i_1} = \delta_1, \ldots, f_{i_m} = \delta_m\},$$
where $m \in \omega$, $f_{i_1}, \ldots, f_{i_m} \in F(T)$, and $\delta_1, \ldots, \delta_m \in \omega$ (if $m = 0$, then the considered equation system is empty).
Let T be a nonempty table. A subtable of T is a table obtained from T by the removal of some rows. We correspond to each equation system S over T a subtable $TS$ of the table T. If the system S is empty, then $TS = T$. Let S be nonempty and $S = \{f_{i_1} = \delta_1, \ldots, f_{i_m} = \delta_m\}$. Then $TS$ is the subtable of the table T containing the rows from T that in the intersection with the columns $f_{i_1}, \ldots, f_{i_m}$ have the numbers $\delta_1, \ldots, \delta_m$, respectively.
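As an illustration, here is a minimal sketch (the table contents and helper names are invented for the example) of a decision table stored as (row, decision) pairs and of the subtable $TS$ defined by an equation system S, given as a mapping from attribute indices to values.

```python
# Illustrative sketch: a decision table as (row, decision) pairs and the
# subtable T S defined by an equation system S = {f_i = value, ...},
# represented here as a dict {attribute index: value}.

T = [
    ((0, 1, 1), 1),
    ((1, 0, 1), 0),
    ((1, 1, 0), 0),
    ((1, 1, 1), 1),
]

def subtable(table, system):
    """Rows of the table satisfying every equation f_i = value in the system."""
    return [(row, d) for row, d in table
            if all(row[i] == value for i, value in system.items())]

print(subtable(T, {}))            # empty system: the whole table
print(subtable(T, {0: 1, 1: 1}))  # rows with f_1 = 1 and f_2 = 1
```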
  3. Decision Trees
Let T be a nonempty decision table with n conditional attributes $f_1, \ldots, f_n$. We consider decision trees with two types of queries. We can choose an attribute $f_i \in F(T)$ and ask about its value. This query has the set of answers $A(f_i) = \{\{f_i = \delta\} : \delta \in E(T, f_i)\}$. We can formulate a hypothesis over T in the form $H = \{f_1 = \delta_1, \ldots, f_n = \delta_n\}$, where $\delta_1 \in E(T, f_1), \ldots, \delta_n \in E(T, f_n)$, and ask about this hypothesis. This query has the set of answers $A(H) = \{H\} \cup \{\{f_i = \sigma\} : i \in \{1, \ldots, n\}, \sigma \in E(T, f_i) \setminus \{\delta_i\}\}$. The answer H means that the hypothesis is true. Other answers are counterexamples. The hypothesis H is called proper for T if $(\delta_1, \ldots, \delta_n)$ is a row of the table T.
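The two answer sets can be written down directly; the sketch below (the table and helper names are ours) lists $A(f_i)$ and $A(H)$ for a small invented table, with each answer encoded as an equation system.

```python
# Illustrative sketch: the answer sets A(f_i) and A(H) for a table given as
# (row, decision) pairs; each answer is an equation system encoded as a dict.

T = [
    ((0, 1), 1),
    ((1, 0), 0),
    ((1, 1), 1),
]

def values(table, i):
    """E(T, f_i): the values of attribute f_i occurring in the table."""
    return sorted({row[i] for row, _ in table})

def answers_attribute(table, i):
    """A(f_i): one answer {f_i = v} for each v in E(T, f_i)."""
    return [{i: v} for v in values(table, i)]

def answers_hypothesis(table, hypothesis):
    """A(H): the hypothesis H itself plus every counterexample {f_i = v}
    with v in E(T, f_i) different from the value stated in H."""
    counterexamples = [{i: v}
                       for i in range(len(hypothesis))
                       for v in values(table, i) if v != hypothesis[i]]
    return [dict(enumerate(hypothesis))] + counterexamples

print(answers_attribute(T, 0))        # [{0: 0}, {0: 1}]
print(answers_hypothesis(T, (1, 1)))  # [{0: 1, 1: 1}, {0: 0}, {1: 0}]
```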
A decision tree over T is a marked finite directed tree with the root in which the following is true:
Each terminal node is labeled with a number from the set $D(T) \cup \{0\}$.
Each node that is not terminal (such nodes are called working) is labeled with an attribute from the set $F(T)$ or with a hypothesis over T.
If a working node is labeled with an attribute $f_i$ from $F(T)$, then for each answer from the set $A(f_i)$, there is exactly one edge labeled with this answer that leaves this node, and there are no other edges that leave this node.
If a working node is labeled with a hypothesis H over T, then for each answer from the set $A(H)$, there is exactly one edge labeled with this answer that leaves this node, and there are no other edges that leave this node.
Let $\Gamma$ be a decision tree over T and let v be a node of $\Gamma$. We now define an equation system $S(v)$ over T associated with the node v. We denote by $\xi(v)$ the directed path from the root of $\Gamma$ to the node v. If there are no working nodes in $\xi(v)$, then $S(v)$ is the empty system. Otherwise, $S(v)$ is the union of the equation systems attached to the edges of the path $\xi(v)$.
A decision tree $\Gamma$ over T is called a decision tree for T if, for any node v of $\Gamma$, the following is true:
The node v is terminal if and only if the subtable $TS(v)$ is degenerate.
If v is a terminal node and the subtable $TS(v)$ is empty, then the node v is labeled with the decision 0.
If v is a terminal node and the subtable $TS(v)$ is nonempty, then the node v is labeled with the decision attached to all rows of $TS(v)$.
A complete path in $\Gamma$ is an arbitrary directed path from the root to a terminal node in $\Gamma$. As the time complexity of a decision tree, we consider its depth, which is the maximum number of working nodes in a complete path in the tree or, which is the same, the maximum length of a complete path in the tree. We denote by $h(\Gamma)$ the depth of the decision tree $\Gamma$.
As the space complexity of the decision tree $\Gamma$, we consider the number of its nodes that are realizable relative to T. A node v of $\Gamma$ is called realizable relative to T if and only if the subtable $TS(v)$ is nonempty. We denote by $L(\Gamma)$ the number of nodes in $\Gamma$ that are realizable relative to T.
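As a sketch of the two cost functions (the tree encoding and all names below are ours, chosen only for this illustration), the depth and the number of realizable nodes can be computed by a traversal that accumulates the equation system along the path from the root and checks whether the corresponding subtable is nonempty.

```python
# Illustrative sketch: h(tree) and the number of nodes realizable relative to a
# table. A working node is ("query", label, [(answer, child), ...]) where each
# answer is an equation system given as a dict; a terminal node is ("decision", d).

T = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def satisfies(row, system):
    return all(row[i] == v for i, v in system.items())

def depth(node):
    """Maximum number of working nodes on a complete path."""
    if node[0] == "decision":
        return 0
    return 1 + max(depth(child) for _, child in node[2])

def realizable_nodes(node, table, system=None):
    """Count nodes whose accumulated equation system selects a nonempty subtable."""
    system = system or {}
    if not any(satisfies(r, system) for r, _ in table):
        return 0                                   # empty subtable: not realizable
    if node[0] == "decision":
        return 1
    return 1 + sum(realizable_nodes(child, table, {**system, **answer})
                   for answer, child in node[2])

# a depth-2 tree for the conjunction of two variables: ask f_1, then f_2
tree = ("query", "f_1", [
    ({0: 0}, ("decision", 0)),
    ({0: 1}, ("query", "f_2", [
        ({1: 0}, ("decision", 0)),
        ({1: 1}, ("decision", 1)),
    ])),
])

print(depth(tree))                # 2
print(realizable_nodes(tree, T))  # 5, i.e. 2n + 1 for n = 2
```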
In this paper, we consider the following five types of decision trees:
1. Decision trees that use only attributes.
2. Decision trees that use only hypotheses.
3. Decision trees that use both attributes and hypotheses.
4. Decision trees that use only proper hypotheses.
5. Decision trees that use both attributes and proper hypotheses.
  4. Greedy Algorithm Based on Entropy
Let T be a nonempty decision table with n conditional attributes $f_1, \ldots, f_n$ and let $\Theta$ be a subtable of the table T. We define the entropy of $\Theta$ (denoted $ent(\Theta)$) as follows. If $\Theta$ is empty, then $ent(\Theta) = 0$. Let $\Theta$ be nonempty. For any decision $t \in D(\Theta)$, we denote by $N_t(\Theta)$ the number of rows in $\Theta$ labeled with the decision t and by $p_t$ the value $N_t(\Theta)/N(\Theta)$, where $N(\Theta)$ is the number of rows in $\Theta$. Then,
$$ent(\Theta) = -\sum_{t \in D(\Theta)} p_t \log_2 p_t.$$
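A minimal sketch of this computation (assuming, as in the earlier sketches, that a subtable is stored as a list of (row, decision) pairs):

```python
# Illustrative sketch: ent(Theta) for a subtable given as (row, decision) pairs.
from math import log2

def entropy(subtable):
    """-sum over decisions t of p_t * log2(p_t); by definition 0 for an empty subtable."""
    n = len(subtable)
    if n == 0:
        return 0.0
    counts = {}
    for _, decision in subtable:
        counts[decision] = counts.get(decision, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(entropy([((0, 1), 1), ((1, 0), 0), ((1, 1), 1), ((0, 0), 0)]))  # 1.0
```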
We now define the impurity of a query for a nonempty subtable $\Theta$ of the table T. For an equation system S over T, we denote by $\Theta S$ the subtable of $\Theta$ defined by S. The impurity of the query based on an attribute $f_i \in F(T)$ (impurity of query $f_i$) is equal to $I(\Theta, f_i) = \max\{ent(\Theta S) : S \in A(f_i)\}$. The impurity of the query based on a hypothesis H over T (impurity of query H) is equal to $I(\Theta, H) = \max\{ent(\Theta S) : S \in A(H)\}$.
We can find, by simple search among all attributes from $F(T)$, an attribute $f_i$ with the minimum impurity $I(\Theta, f_i)$. We can also find, by simple search among all proper hypotheses over T, a proper hypothesis H with the minimum impurity $I(\Theta, H)$. It is not necessary to consider all hypotheses over T to find a hypothesis with the minimum impurity. For $i = 1, \ldots, n$, we denote by $\delta_i$ a number from $E(T, f_i)$ such that $ent(\Theta\{f_i = \delta_i\}) = \max\{ent(\Theta\{f_i = \sigma\}) : \sigma \in E(T, f_i)\}$. Then, the hypothesis $H = \{f_1 = \delta_1, \ldots, f_n = \delta_n\}$ has the minimum impurity $I(\Theta, H)$ among all hypotheses over T.
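This shortcut can be checked directly on a small example. The sketch below (the table and all helper names are invented for the illustration) builds the hypothesis by taking, for each attribute, a value whose answer subtable has maximum entropy, and verifies by brute force over all hypotheses over T that it attains the minimum impurity.

```python
# Illustrative sketch: the greedily built hypothesis attains the minimum
# impurity (maximum entropy over the answer subtables) among all hypotheses.
from itertools import product
from math import log2

T = [((0, 1, 0), 1), ((1, 0, 1), 0), ((1, 1, 0), 1), ((0, 0, 1), 0), ((1, 1, 1), 0)]

def entropy(rows):
    n = len(rows)
    if n == 0:
        return 0.0
    counts = {}
    for _, d in rows:
        counts[d] = counts.get(d, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

def restrict(rows, system):
    return [(r, d) for r, d in rows if all(r[i] == v for i, v in system.items())]

def values(table, i):
    return sorted({r[i] for r, _ in table})

def hypothesis_impurity(table, theta, hyp):
    answers = [dict(enumerate(hyp))]                       # the answer H itself
    for i, d in enumerate(hyp):                            # all counterexamples
        answers += [{i: v} for v in values(table, i) if v != d]
    return max(entropy(restrict(theta, s)) for s in answers)

n = len(T[0][0])
greedy = tuple(max(values(T, i), key=lambda v: entropy(restrict(T, {i: v})))
               for i in range(n))
brute = min(product(*(values(T, i) for i in range(n))),
            key=lambda h: hypothesis_impurity(T, T, h))
print(hypothesis_impurity(T, T, greedy) == hypothesis_impurity(T, T, brute))  # True
```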
We now describe a greedy algorithm based on entropy (Algorithm 1) that, for a given nonempty decision table T and number $k \in \{1, \ldots, 5\}$, constructs a decision tree of type k for the table T.
The considered algorithm is similar to standard top-down induction of decision trees [20,21]. The main peculiarity of this algorithm is step 5, in which, depending on the value of k, we choose an appropriate set of queries. Each time, the algorithm chooses a query from this set with the minimum impurity.
For a given nonempty decision table T and number $k \in \{1, \ldots, 5\}$, Algorithm 1 constructs a decision tree of type k for the table T. We denote by $h^{(k)}(T)$ the depth of this decision tree and by $L^{(k)}(T)$ the number of its nodes that are realizable relative to T.
      
Algorithm 1. Entropy-based greedy algorithm for the construction of a decision tree of type k.
Input: A nonempty decision table T and a number $k \in \{1, \ldots, 5\}$.
Output: A decision tree of type k for the table T.
1. Construct a tree G consisting of a single node labeled with T.
2. If no node of the tree G is labeled with a table, then the algorithm ends and returns the tree G.
3. Choose a node v in G that is labeled with a subtable $\Theta$ of the table T.
4. If $\Theta$ is degenerate, then instead of $\Theta$, we label the node v with 0 if $\Theta$ is empty and with the decision attached to each row of $\Theta$ if $\Theta$ is nonempty. We then proceed to step 2.
5. If $\Theta$ is nondegenerate, then depending on k, we choose a query X (either an attribute or a hypothesis) in the following way:
(a) If $k = 1$, then we find an attribute X with the minimum impurity $I(\Theta, X)$.
(b) If $k = 2$, then we find a hypothesis X over T with the minimum impurity $I(\Theta, X)$.
(c) If $k = 3$, then we find an attribute Y with the minimum impurity $I(\Theta, Y)$ and a hypothesis Z over T with the minimum impurity $I(\Theta, Z)$. Between Y and Z, we choose a query X with the minimum impurity $I(\Theta, X)$.
(d) If $k = 4$, then we find a proper hypothesis X over T with the minimum impurity $I(\Theta, X)$.
(e) If $k = 5$, then we find an attribute Y with the minimum impurity $I(\Theta, Y)$ and a proper hypothesis Z over T with the minimum impurity $I(\Theta, Z)$. Between Y and Z, we choose a query X with the minimum impurity $I(\Theta, X)$.
6. Instead of $\Theta$, we label the node v with the query X. For each answer $S \in A(X)$, we add to the tree G a node v(S) and an edge e(S) connecting v and v(S). We label the node v(S) with the subtable $\Theta S$ and label the edge e(S) with the answer S. We then proceed to step 2.
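To make the construction concrete, the following sketch (our own simplified recursive reformulation, not the authors' implementation) builds a tree for type 1 and for type 3; types 2, 4, and 5 only change the set of candidate queries. To keep the sketch finite, attribute queries that do not split the current subtable are skipped.

```python
# Illustrative sketch of the entropy-based greedy construction for type 1
# (attributes only) and type 3 (attributes and hypotheses); the table encoding,
# helper names, and recursive formulation are ours.
from math import log2

def entropy(rows):
    n = len(rows)
    if n == 0:
        return 0.0
    counts = {}
    for _, d in rows:
        counts[d] = counts.get(d, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

def restrict(rows, system):
    return [(r, d) for r, d in rows if all(r[i] == v for i, v in system.items())]

def values(table, i):
    return sorted({r[i] for r, _ in table})

def attribute_answers(table, i):
    return [{i: v} for v in values(table, i)]

def hypothesis_answers(table, hyp):
    answers = [dict(enumerate(hyp))]
    for i, d in enumerate(hyp):
        answers += [{i: v} for v in values(table, i) if v != d]
    return answers

def best_hypothesis(table, theta):
    # for each attribute, take a value whose answer subtable has maximum entropy
    return tuple(max(values(table, i), key=lambda v: entropy(restrict(theta, {i: v})))
                 for i in range(len(table[0][0])))

def impurity(theta, answers):
    return max(entropy(restrict(theta, s)) for s in answers)

def build(table, theta, k):
    if not theta:
        return ("decision", 0)
    decisions = {d for _, d in theta}
    if len(decisions) == 1:                        # degenerate subtable
        return ("decision", decisions.pop())
    # candidate queries; attributes constant on theta are skipped so that the
    # sketch always makes progress
    candidates = [attribute_answers(table, i) for i in range(len(table[0][0]))
                  if len({r[i] for r, _ in theta}) > 1]
    if k == 3:
        candidates.append(hypothesis_answers(table, best_hypothesis(table, theta)))
    answers = min(candidates, key=lambda a: impurity(theta, a))
    return ("query", [(s, build(table, restrict(theta, s), k)) for s in answers])

# conjunction of two Boolean variables
T = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(build(T, T, k=1))  # depth-2 tree that asks both attributes
print(build(T, T, k=3))  # depth-1 tree that asks the hypothesis {f_1 = 1, f_2 = 1}
```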
  5. Results of Experiments
We conduct experiments with 10 decision tables from the UCI ML Repository [19]. Table 1 contains information about each of these decision tables: its name, the number of rows, and the number of conditional attributes.
The results of the experiments are presented in Table 2 and Table 3. The first column of Table 2 contains the name of the considered decision table T. The last five columns contain the values $h^{(1)}(T), \ldots, h^{(5)}(T)$ (minimum values for each decision table are in bold).
The first column of Table 3 contains the name of the considered decision table T. The last five columns contain the values $L^{(1)}(T), \ldots, L^{(5)}(T)$ (minimum values for each decision table are in bold).
For each considered value of n, we randomly generate 100 Boolean functions with n variables. We represent each Boolean function with n variables as a decision table with n columns labeled with these variables and with $2^n$ rows that are all possible n-tuples of values of the variables. Each row is labeled with the decision that is the value of the function on the corresponding n-tuple.
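A small sketch (the function name and the use of a fixed seed are ours) of this representation: every n-tuple of variable values becomes a row, labeled with a randomly chosen function value.

```python
# Illustrative sketch: representing a randomly generated Boolean function with
# n variables as a decision table whose rows are all n-tuples of variable values.
import random
from itertools import product

def random_boolean_function_table(n, seed=0):
    rng = random.Random(seed)
    return [(row, rng.randint(0, 1)) for row in product((0, 1), repeat=n)]

for row, decision in random_boolean_function_table(3):
    print(row, decision)
```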
For each function, using its decision table representation and Algorithm 1, we construct a decision tree of type k computing this function, $k = 1, \ldots, 5$. For each Boolean function, each hypothesis over the decision table representing it is proper. Therefore, for each Boolean function, $h^{(2)}(T) = h^{(4)}(T)$, $h^{(3)}(T) = h^{(5)}(T)$, $L^{(2)}(T) = L^{(4)}(T)$, and $L^{(3)}(T) = L^{(5)}(T)$.
The results of the experiments are presented in Table 4 and Table 5. The first column of Table 4 contains the number of variables n in the considered Boolean functions. The last five columns contain information about the values $h^{(1)}(T), \ldots, h^{(5)}(T)$ over the 100 generated functions.
The first column of Table 5 contains the number of variables n in the considered Boolean functions. The last five columns contain information about the values $L^{(1)}(T), \ldots, L^{(5)}(T)$ over the 100 generated functions.
From the obtained experimental results, it follows that, using hypotheses, we can decrease the depth of the constructed decision trees. However, at the same time, the number of realizable nodes usually grows. Depending on our goals, we should choose decision trees of type 3 if we would like to minimize the depth and decision trees of type 1 if we would like to minimize the number of realizable nodes.
  6. Conclusions
In this paper, we studied modified decision trees that use both queries based on one attribute each and queries based on hypotheses about the values of all attributes. We proposed an entropy-based greedy algorithm for the construction of such decision trees and considered the results of computer experiments. The goal of this paper is to understand which type of decision tree should be chosen if we would like to minimize the depth and which type should be chosen if we would like to minimize the number of realizable nodes. From the obtained experimental results, it follows that we should choose decision trees of type 3 if we would like to minimize the depth and decision trees of type 1 if we would like to minimize the number of realizable nodes. The comparison of different greedy algorithms based on various uncertainty measures will be done in future papers.