Probabilistic Confusion Entropy for Evaluating Classifiers
Abstract
For evaluating the classification model of an information system, a proper measure is usually needed to determine whether the model is appropriate for the specific domain task. Although many performance measures have been proposed, few have been defined specifically for multi-class problems, which tend to be more complicated than two-class problems, especially with respect to class discrimination power. Confusion entropy was proposed for evaluating classifiers in the multi-class case; nevertheless, it makes no use of the probabilities with which samples are classified into different classes. In this paper, we propose to calculate confusion entropy based on a probabilistic confusion matrix. Besides inheriting the ability to measure whether a classifier can classify with high accuracy and class discrimination power, probabilistic confusion entropy also measures whether samples are assigned to their true classes, and separated from the others, with high probabilities. Analysis and experimental comparisons show the feasibility of this simple refinement and demonstrate that, in contrast with the compared measures, the proposed measure does not consistently favor or penalize particular classifiers across different datasets.
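The idea described above can be sketched in code. The snippet below builds a probabilistic confusion matrix, whose entry (i, j) accumulates the predicted probability of class j over all samples with true class i rather than hard 0/1 counts, and then computes confusion entropy over it. This is an illustrative reconstruction based on the standard confusion entropy (CEN) definition, not the paper's exact formulation; the function names and the normalization details are assumptions.

```python
import math

def probabilistic_confusion_matrix(y_true, y_prob, n_classes):
    """Entry (i, j) sums the predicted probability of class j over all
    samples whose true class is i (soft counts instead of hard counts)."""
    M = [[0.0] * n_classes for _ in range(n_classes)]
    for t, probs in zip(y_true, y_prob):
        for j, p in enumerate(probs):
            M[t][j] += p
    return M

def confusion_entropy(M):
    """Confusion entropy of a (possibly probabilistic) confusion matrix,
    following the standard CEN construction: per-class misclassification
    probabilities are normalized by the class's row-plus-column mass and
    their entropies are combined with per-class weights. 0 = no confusion."""
    n = len(M)
    if n < 2:
        return 0.0
    total = sum(sum(row) for row in M)
    base = 2 * (n - 1)  # log base used in the per-class entropy
    cen = 0.0
    for j in range(n):
        # Row mass + column mass associated with class j.
        mass_j = sum(M[j]) + sum(M[k][j] for k in range(n))
        if mass_j == 0:
            continue
        p_j = mass_j / (2 * total)  # weight of class j in the overall score
        cen_j = 0.0
        for k in range(n):
            if k == j:
                continue
            # Confusion of j with k, in both directions.
            for x in (M[j][k] / mass_j, M[k][j] / mass_j):
                if x > 0:
                    cen_j -= x * math.log(x, base)
        cen += p_j * cen_j
    return cen
```

On a purely diagonal matrix (every sample assigned to its true class with probability one) the off-diagonal terms vanish and the entropy is zero; spreading probability mass onto wrong classes raises the score, which is why lower values indicate better discrimination.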
Citation: Wang, X.-N.; Wei, J.-M.; Jin, H.; Yu, G.; Zhang, H.-W. Probabilistic Confusion Entropy for Evaluating Classifiers. Entropy 2013, 15(11), 4969-4992.