Open Access Article
Entropy 2017, 19(6), 247; doi:10.3390/e19060247

Improving the Naive Bayes Classifier via a Quick Variable Selection Method Using Maximum of Entropy

Department of Computer Science and Artificial Intelligence, University of Granada, 18071 Granada, Spain
Author to whom correspondence should be addressed.
Academic Editor: Dawn E. Holmes
Received: 24 March 2017 / Revised: 29 April 2017 / Accepted: 19 May 2017 / Published: 25 May 2017
(This article belongs to the Special Issue Maximum Entropy and Its Application II)


Variable selection methods play an important role in the field of attribute mining. The Naive Bayes (NB) classifier is a very simple and popular classification method that yields good results in a short processing time, which makes it well suited to very large datasets. Its performance, however, depends strongly on the relationships between the variables. The Info-Gain (IG) measure, which is based on Shannon entropy, can be used as a quick variable selection method: it ranks the attribute variables by the information they provide about a class variable, estimated from the dataset. Its main drawback is that it is always non-negative, so an information threshold must be set for each dataset to select the most important variables. We introduce here a new quick variable selection method that generalizes the one based on the Info-Gain measure. It uses imprecise probabilities and the maximum entropy measure to select the most informative variables without setting a threshold. This new variable selection method, combined with the Naive Bayes classifier, improves on the original method and provides a valuable tool for handling datasets with a very large number of features and a huge amount of data, where more complex methods are not computationally feasible.
Keywords: variable selection; classification; Naive Bayes; imprecise probabilities; uncertainty measures
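For context, the Info-Gain ranking that the paper generalizes can be sketched in a few lines. The sketch below computes IG(C; X) = H(C) − Σ_x P(X=x) H(C | X=x) for each attribute of a toy categorical dataset; the dataset and all names are illustrative, and the paper's imprecise-probability/maximum-entropy variant is not reproduced here.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(C) of a class-label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """IG(C; X) = H(C) - sum over x of P(X=x) * H(C | X=x)."""
    n = len(labels)
    conditional = 0.0
    for value in set(feature):
        subset = [c for x, c in zip(feature, labels) if x == value]
        conditional += (len(subset) / n) * entropy(subset)
    return entropy(labels) - conditional

# Toy categorical dataset (illustrative, not from the paper):
# attribute 0 perfectly predicts the class, attribute 1 is uninformative.
X = [["sunny", "hot"], ["sunny", "mild"], ["rain", "mild"], ["rain", "hot"]]
y = ["no", "no", "yes", "yes"]

gains = [info_gain([row[j] for row in X], y) for j in range(len(X[0]))]
print(gains)  # attribute 0 -> 1.0 bit, attribute 1 -> 0.0 bits
```

As the abstract notes, turning such a ranking into a selected subset requires choosing an information threshold per dataset, since IG is always non-negative; removing that threshold is precisely what the proposed imprecise-probability method addresses.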


This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Abellán, J.; Castellano, J.G. Improving the Naive Bayes Classifier via a Quick Variable Selection Method Using Maximum of Entropy. Entropy 2017, 19, 247.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.


Entropy EISSN 1099-4300, published by MDPI AG, Basel, Switzerland