Attribute Selection Based on Constraint Gain and Depth Optimal for a Decision Tree
Abstract
Uncertainty evaluation based on statistical probabilistic information entropy is a commonly used mechanism for the heuristic construction of decision tree learning algorithms. The entropy kernel potentially links its deviation to decision tree classification performance. This paper presents a decision tree learning algorithm based on constraint gain and depth-induction optimization. First, the uncertainty distributions of information entropy for single-value and multi-value events are calculated and analyzed, revealing an enhanced property of the single-value event entropy kernel, peaks in the multi-value event entropy, and a reciprocal relationship between the peak location and the number of possible events. Second, an estimation method for information entropy is proposed in which the entropy kernel is replaced with a peak-shift sine function, and a constraint-gain decision tree (CGDT) learning algorithm is established on this basis. Finally, by combining branch-convergence and fan-out indices under the inductive depth of a decision tree, a constraint-gain and depth-induction improved decision tree (CGDIDT) learning algorithm is built. Results show the benefits of the CGDT and CGDIDT algorithms.
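To make the role of the entropy kernel concrete, the sketch below computes the gain of a discrete attribute with a pluggable kernel: the standard Shannon kernel -p·log2(p) and, purely for illustration, a sine-shaped surrogate sin(πp). This is a minimal sketch of the general idea of swapping the entropy kernel; the paper's actual peak-shift sine kernel and constraint-gain formula are not reproduced here, and the sine form, function names, and toy data are assumptions.

```python
import numpy as np

def shannon_kernel(p):
    """Standard Shannon entropy kernel -p * log2(p), defined as 0 at p = 0."""
    p = np.asarray(p, dtype=float)
    out = np.zeros_like(p)
    mask = p > 0
    out[mask] = -p[mask] * np.log2(p[mask])
    return out

def sine_kernel(p):
    """Illustrative sine-shaped surrogate kernel (assumption: sin(pi * p));
    the paper's peak-shift sine kernel is not reproduced here."""
    return np.sin(np.pi * np.asarray(p, dtype=float))

def impurity(y, kernel):
    """Impurity of a label vector: sum of the kernel over class frequencies."""
    _, counts = np.unique(y, return_counts=True)
    return kernel(counts / counts.sum()).sum()

def attribute_gain(x, y, kernel=shannon_kernel):
    """Gain of splitting labels y on the discrete attribute column x."""
    n = len(y)
    weighted = sum(
        (np.sum(x == v) / n) * impurity(y[x == v], kernel)
        for v in np.unique(x)
    )
    return impurity(y, kernel) - weighted

# Example: compare attribute gain under the two kernels on toy data.
x = np.array(["a", "a", "b", "b", "b", "c"])
y = np.array([0, 0, 1, 1, 0, 1])
print(attribute_gain(x, y, shannon_kernel))
print(attribute_gain(x, y, sine_kernel))
```

Under this kind of setup, a tree builder would select, at each node, the attribute with the largest gain under the chosen kernel; the CGDIDT variant described in the abstract additionally weighs branch-convergence and fan-out indices against the inductive depth, which is not modeled in this sketch.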
Share & Cite This Article
Sun, H.; Hu, X.; Zhang, Y. Attribute Selection Based on Constraint Gain and Depth Optimal for a Decision Tree. Entropy 2019, 21, 198.