Article

Improving the Naive Bayes Classifier via a Quick Variable Selection Method Using Maximum of Entropy

by Joaquín Abellán * and Javier G. Castellano
Department of Computer Science and Artificial Intelligence, University of Granada, 18071 Granada, Spain
* Author to whom correspondence should be addressed.
Entropy 2017, 19(6), 247; https://doi.org/10.3390/e19060247
Submission received: 24 March 2017 / Revised: 29 April 2017 / Accepted: 19 May 2017 / Published: 25 May 2017
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Abstract:
Variable selection methods play an important role in the field of attribute mining. The Naive Bayes (NB) classifier is a very simple and popular classification method that yields good results in a short processing time. Hence, it is a very appropriate classifier for very large datasets. The method has a high dependence on the relationships between the variables. The Info-Gain (IG) measure, which is based on general entropy, can be used as a quick variable selection method. This measure ranks the importance of the attribute variables on a variable under study via the information obtained from a dataset. The main drawback is that it is always non-negative and it requires setting the information threshold to select the set of most important variables for each dataset. We introduce here a new quick variable selection method that generalizes the method based on the Info-Gain measure. It uses imprecise probabilities and the maximum entropy measure to select the most informative variables without setting a threshold. This new variable selection method, combined with the Naive Bayes classifier, improves the original method and provides a valuable tool for handling datasets with a very large number of features and a huge amount of data, where more complex methods are not computationally feasible.

1. Introduction

In the data mining field, the problem of selecting variables from a dataset to improve the accuracy of a classifier has been extensively studied. The procedures used for this purpose may or may not depend on the properties of the classifier. In this paper, we focus on this problem for a simple but very efficient classifier: the Naive Bayes (NB). The simplicity of this classifier makes it very suitable for very large datasets, and being able to handle huge amounts of data is paramount in data mining. Emerging technologies and applications can generate vast amounts of data that can be used for a more exhaustive extraction of information. However, the performance of the classifier drops when the dataset contains irrelevant and redundant variables. Hence, a previous stage of variable selection can significantly improve the results.
Classification is a classical data mining problem. It may be generally defined in the following way: we have a dataset of observations, called the training set, and we wish to obtain a set of rules in order to assign to every new observation a value of the classification variable (discrete or discretized). After completing this procedure, the quality of the resulting set of rules is verified using a different set of observations called the test set. The variable under study is called the class variable and the rest of the variables in the dataset are called attribute variables or features. Classification is based on the use of several techniques that infer rules or patterns from a given dataset in order to predict new values of the class variable using a new set of values for the remaining variables. The applications of classification are well known in fields like medicine, bioinformatics, physics, pattern recognition, economics, etc., and they are used for tasks like disease diagnosis, generating meteorological forecasts, determining insurance risk profiles, and text classification, among many others.
The performance of a classification method can degrade if the dataset contains redundant or irrelevant variables. Another important problem for the accuracy of a classification method emerges when the dataset contains a large number of variables. For instance, in the fields of bioinformatics and text mining, datasets typically contain several thousands of variables (genes or words, respectively). This enormous amount of information can become unmanageable. It is therefore necessary to select a smaller set of variables in order to reduce or remove any irrelevant or redundant information present in the data, and to enable the automatic handling of these datasets. To solve this issue, several methods have been devised to obtain a significant subset of variables for classification purposes.
Variable selection methods can be generally grouped into two types: filter methods select variables without relying on a classification method, while wrapper methods rely directly on the classification method used. An important reference for these methods is the work of Hall and Holmes [1], which contains a detailed description of filter and wrapper methods that provide an excellent performance. The problem with wrapper methods is their computational cost: they require checking subsets of variables in the base classifier at many of the steps involved, which makes them hardly suitable when the number of features increases. For many features, filter methods are more appropriate. However, not all filter methods are adequate; their computational cost must be considered, as the number of features in the data could make them impractical.
The information gain measure developed by Quinlan [2], called Info-Gain (IG), is used to build decision trees via the ID3 algorithm. It can also be used as a measure to select variables: its purpose is to rank the set of attribute variables via the Info-Gain measure and select the best ones. It is a very quick variable selection model that can be applied on datasets with a huge number of features, where other procedures are not computationally efficient. This measure always gives a non-negative value, so the problem is where to set the threshold to select the most informative variables. Depending on the data used, this threshold can be notably different if we want to select the minimum set with the largest possible amount of information about the class variable.
The emergence of new mathematical models based on imprecise probabilities (Walley [3]) to represent information has also resulted in the development of new ways to quantify the uncertainty contained in those representations. These mathematical models generalize the classic probability theory (PT), where the most used measure has been Shannon’s classic entropy [4]. When the available information is expressed via a set of probability distributions, normally a closed and convex set of probability distributions called a credal set, measures different from the known entropy must be applied.
In recent years, the literature shows many attempts to define measures on imprecise probability models that perform in a way similar to entropy for precise probabilities (see Klir [5]). So far, the most successful one has been the maximum entropy measure (see Abellán et al. [6], Abellán and Masegosa [7], and Abellán and Bossé [8]), which verifies a large set of important properties. This set of properties is similar to the one verified by entropy in PT.
The Info-Gain, based on precise probabilities and entropy, can be extended with imprecise probabilities and the maximum entropy. This implies a different treatment of the information, where imprecision is taken into account (see Mantas and Abellán [9]). The equivalent criterion is called Imprecise Info-Gain (IIG) and was introduced in Abellán and Moral [10]. This criterion was first used in a procedure to build decision trees, but it can also be used as an information gain measure to select variables from data in a similar manner to the IG criterion, either directly or inside more complex procedures. Its properties are somewhat different from those of the Info-Gain, but the main difference is that the information gain can be negative for the IIG. This is an important property that removes the requirement of setting a threshold for each dataset: we can select a variable if its associated information gain via the IIG is positive. This represents a quick and more reasonable variable selection procedure.
In this paper, we present this new method of variable selection and use the well-known Naive Bayes classifier to test it on a large set of popular datasets. To show its performance, an experimental study was developed to compare the results of NB using a variable selection procedure based on the IIG with those of a similar procedure based on the IG with different thresholds. We will see that, in general, it is not possible to set a single threshold value for the IG and get good results for every dataset; the best threshold value varies from one dataset to another. In contrast, the results obtained using the IIG are generally good, improving those obtained using NB directly. Moreover, this procedure does not require setting a threshold value.
Section 2 briefly describes the background concepts of some mathematical models based on imprecise probabilities and uncertainty measures, where the maximum entropy plays an important role. Section 3 describes the Naive Bayes classifier (which is used as the base classifier). Section 4 explains the new variable selection model via the maximum entropy measure on imprecise probabilities, as well as the model used as a reference, which is based on precise probabilities and entropy. Section 5 is devoted to the experiments, and Section 6 contains the conclusions.

2. Imprecise Probabilities and Uncertainty Measures

2.1. Imprecise Probabilities

Various mathematical models can be used to represent the information available in a situation. None of these models is generally more justifiable than the others, but each is more useful than the rest in specific situations. Walley [3] compiles most of the mathematical models for representing the absence of information through imprecise probabilities. In this section, we briefly introduce the model based on imprecise probabilities that we will use: reachable sets of probability intervals.

2.1.1. Reachable Sets of Probability Intervals

As an important reference on this type of credal set, we should mention the work by Campos, Huete and Moral [11], where we can find an excellent account of the basic operations for working with probability intervals, as well as their relation with other models such as those of upper and lower probabilities, capacities of order 2 and belief functions.
The main characteristic of this model is that many interesting operations between sets of probability intervals can be carried out without leaving the model, i.e., they provide us with another set of probability intervals.
They can be described as follows: let $X$ be a variable that takes values in $\Omega_X = \{x_1, x_2, \ldots, x_{|\Omega_X|}\}$. A system of probability intervals is a family of intervals $L = \{[l_i, u_i] : i \in \{1, 2, \ldots, |\Omega_X|\}\}$ verifying that $0 \le l_i \le u_i \le 1$. The credal set associated to a set of intervals $L$ on $X$ can be defined as:
$K_L^X = \{ p \in \mathcal{P}(\Omega_X) \mid l_i \le p_i \le u_i, \ [l_i, u_i] \in L, \ \forall i \},$
expressing $p_i$ as $p(\{x_i\})$.
One condition for this set to be nonempty is that
$\sum_i l_i \le 1 \le \sum_i u_i .$
Each bound in the set $\{ l_i, u_j \mid i, j \in \{1, \ldots, |\Omega_X|\} \}$ of a reachable set of intervals is attained by at least one probability distribution of $K_L^X$ (which is why the set of intervals is called reachable), and for this the following conditions must be verified:
$\sum_{j \ne i} l_j + u_i \le 1, \qquad \sum_{j \ne i} u_j + l_i \ge 1,$
for each $i$. If this set of conditions is not verified, it is possible to obtain the reachable set of intervals via the following property:
Proposition 1.
Given a set of probability intervals $L = \{[l_i, u_i] : i \in \{1, \ldots, |\Omega_X|\}\}$, the set $L' = \{[l'_i, u'_i] : i \in \{1, \ldots, |\Omega_X|\}\}$, where
$l'_i = \max\Big\{ l_i,\ 1 - \sum_{j \ne i} u_j \Big\}, \qquad u'_i = \min\Big\{ u_i,\ 1 - \sum_{j \ne i} l_j \Big\},$
gives us the same set of probability distributions, $K_L^X = K_{L'}^X$, and this last set is a reachable set of probability intervals.
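As an illustration of Proposition 1, the following short sketch (our own code, not part of the original works cited) tightens a given set of probability intervals into its reachable version; the function name and the example intervals are ours.

```python
# Illustrative sketch of Proposition 1 (not the authors' code): tighten a set
# of probability intervals [l_i, u_i] into the equivalent reachable set.
def reachable_intervals(intervals):
    """intervals: list of (l_i, u_i) pairs with 0 <= l_i <= u_i <= 1."""
    total_l = sum(l for l, _ in intervals)
    total_u = sum(u for _, u in intervals)
    tightened = []
    for l_i, u_i in intervals:
        # l'_i = max{l_i, 1 - sum_{j != i} u_j}, u'_i = min{u_i, 1 - sum_{j != i} l_j}
        l_new = max(l_i, 1.0 - (total_u - u_i))
        u_new = min(u_i, 1.0 - (total_l - l_i))
        tightened.append((l_new, u_new))
    return tightened

# Example: [0.3, 0.9] is tightened to approximately [0.5, 0.8]; the first
# interval is already reachable and remains unchanged.
print(reachable_intervals([(0.2, 0.5), (0.3, 0.9)]))
```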

2.1.2. Imprecise Dirichlet Model

The imprecise Dirichlet model (IDM) was introduced by Walley [12] to infer the probability distribution of a categorical variable. Let us assume that $Z$ is a variable taking values on a finite set $\Omega_Z$ and that we have a sample of size $N$ of independent and identically distributed outcomes of $Z$. If we want to estimate the probabilities $\theta_z = p(z)$ with which $Z$ takes its values, a common Bayesian procedure consists in assuming a prior Dirichlet distribution for the parameter vector $(\theta_z)_{z \in \Omega_Z}$, and then taking the posterior expectation of the parameters given the sample. The Dirichlet distribution depends on the parameter $s$, a positive real value, and $\mathbf{t}$, a vector of positive real numbers $\mathbf{t} = (t_z)_{z \in \Omega_Z}$ verifying $\sum_{z \in \Omega_Z} t_z = 1$. The density takes the form:
$f\big((\theta_z)_{z \in \Omega_Z}\big) = \frac{\Gamma(s)}{\prod_{z \in \Omega_Z} \Gamma(s\, t_z)} \prod_{z \in \Omega_Z} \theta_z^{s\, t_z - 1},$
where $\Gamma$ is the gamma function.
If $n_z$ is the number of occurrences of the value $z$ in the sample, the posterior expected value of the parameter $\theta_z$ is $\frac{n_z + s\, t_z}{N + s}$, which is also the Bayesian estimate of $\theta_z$ (under quadratic loss).
The imprecise Dirichlet model only depends on the parameter $s$ and assumes all the possible values of $\mathbf{t}$. This defines a convex set of prior distributions. It represents a much weaker assumption than a precise prior model, but it is still possible to make useful inferences. In our particular case, where the IDM is applied to a single variable, we obtain a credal set for this variable $Z$ that can be represented by a system of probability intervals. For each parameter $\theta_z$, we obtain a probability interval given by the lower and upper posterior expected values of the parameter given the sample. These intervals can be easily computed and are given by $\left[\frac{n_z}{N+s}, \frac{n_z+s}{N+s}\right]$. The associated credal set on $Z$ is given by all the probability distributions $p$ on $Z$ such that $p(z) \in \left[\frac{n_z}{N+s}, \frac{n_z+s}{N+s}\right]$, $\forall z$. The intervals are coherent in the sense that, if they are computed by taking the infimum and supremum in the credal set, the same set of intervals is obtained again. The associated credal set can be obtained in the same way as in the previous subsection,
$K_{idm}^Z = \left\{ p \in \mathcal{P}(\Omega_Z) \,\middle|\, l_i \le p_i \le u_i, \ l_i = \frac{n_{z_i}}{N+s}, \ u_i = \frac{n_{z_i}+s}{N+s}, \ \forall i \right\},$
and represents a credal set from a reachable set of probability intervals.
Parameter s determines how quickly the lower and upper probabilities converge as more data become available; larger values of s produce more cautious inferences. Walley [12] does not provide a definitive recommendation, but he advocates values between s = 1 and s = 2 .
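As a hedged illustration (not the authors' implementation), the IDM intervals can be obtained directly from a table of counts; the function name and the example counts below are ours.

```python
# Minimal sketch of the IDM intervals [n_z/(N+s), (n_z+s)/(N+s)] for a
# categorical variable observed through a sample of counts.
def idm_intervals(counts, s=1.0):
    """counts: dict mapping each value z to its frequency n_z in the sample."""
    N = sum(counts.values())
    return {z: (n / (N + s), (n + s) / (N + s)) for z, n in counts.items()}

# With counts {c1: 9, c2: 4} and s = 1, the intervals are [9/14, 10/14] and [4/14, 5/14].
print(idm_intervals({"c1": 9, "c2": 4}))
```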

2.2. Uncertainty Measures on Credal Sets

The study of uncertainty measures in the Dempster–Shafer theory of evidence [13,14] has been the starting point for the development of these measures on more general theories (a study of the most important measures proposed in literature can be seen in [5]). As a reference for the definition of an uncertainty measure on credal sets, Shannon’s entropy [4] has been used due to its operation on probabilities. In any theory which is more general than the probability theory, it is essential that a measure be able to quantify the uncertainty that a credal set represents: the parts of conflict and non-specificity [5].
Klir and Smith [15] and Abellán and Moral [16] justified the use of the maximum of entropy on credal sets as a good measure of total uncertainty that verifies a set of needed properties [17]. The problem lies in separating this function into others which really do measure the parts of conflict and non-specificity, respectively, and this entails the use of a credal set to represent the information. More recently, Abellán, Klir and Moral [6] presented a separation of the maximum of entropy into functions that are capable of coherently measuring the conflict and non-specificity of a credal set K on a finite variable X, as well as algorithms for facilitating its calculation in capacities of order 2 [6,18], and this may be expressed in the following way:
$S^*(K) = S_*(K) + \big(S^* - S_*\big)(K),$
where $S^*$ represents the maximum of entropy and $S_*$ the minimum of entropy on the credal set $K$:
$S^*(K) = \max_{p \in K} \Big( -\sum_x p_x \log(p_x) \Big), \qquad S_*(K) = \min_{p \in K} \Big( -\sum_x p_x \log(p_x) \Big),$
where $S_*(K)$ coherently quantifies the conflict part of the credal set $K$ and $(S^* - S_*)(K)$ represents the non-specificity part of $K$ [6].

3. The Naive Bayes Classifier

In the area of machine learning, supervised classification learning can be considered an important tool for decision support. Classification can be defined as a machine learning technique used to predict group membership for data instances. It can be applied to decision support in medicine, character recognition, astronomy, banking and other fields. A classifier may be represented using a Bayesian network, a neural network, a decision tree, etc.
The success of the model developed by Duda and Hart [19] is mainly due to its simplicity, efficiency and effectiveness in classification problems. Before describing the classifier, we will probabilistically describe the supervised classification problem.
Let $D$ be a dataset of size $N$, with values in a set $L$ of (discrete or discretized) attribute variables $\{X_i \mid i = 1, \ldots, r\}$, where each variable has a set of possible states or cases $\Omega_{X_i} = \{x_1^i, x_2^i, \ldots, x_{|\Omega_{X_i}|}^i\}$, and a class variable $C$ whose states are $\Omega_C = \{c_1, c_2, \ldots, c_k\}$. The objective is to obtain information from the dataset in such a way that, given an observation (a set of values of all the attribute variables), it is possible to associate it with a value of the class variable.
If we represent the new sample as $\mathbf{x}$, with $\mathbf{x} = \{x_{h_1}^1, \ldots, x_{h_r}^r\}$, the Naive Bayes (Figure 1) predicts the value $c_m$ of $C$ in the following way:
$c_m = \arg\max_{c_j} p(c_j \mid \mathbf{x}),$
with $p(c_j \mid \mathbf{x})$ the probability of each $c_j$ in the subset of the training set determined by $\mathbf{x}$ (the subset of the sample that verifies $X_i = x_{h_i}^i$, $\forall i$).
Now, based on the assumption that the attribute variables are independent given the class variable, the predicted value can be expressed as
$c_m = \arg\max_{c_j} \Big( p(c_j) \prod_{i=1}^{r} p(x_{h_i}^i \mid c_j) \Big).$
The key to the success of the Naive Bayes is its simplicity: no Bayesian network structure learning algorithm is required because its structure is fixed; the parameters of the model need only be estimated from the dataset, using only bi-dimensional statistics for the class and each attribute; and, as we have seen, the classification process is very efficient.
The Naive Bayes model shows remarkable results in accuracy, taking into account its clearly unrealistic assumptions: first, each attribute is conditionally independent of the other attributes given the class and, second, all of the variables have the same influence on the class. These assumptions can also cause some problems: the influence on the class of two highly correlated attributes may be overamplified by the model, or a truly irrelevant variable may only add noise to the classification. In any case, the solution may be to remove some attributes by using variable selection. The exhaustive search in the space of all variable combinations for the Naive Bayes requires the computation of $2^r$ structures, which is often prohibitive.
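To make the prediction rule above concrete, here is a minimal, self-contained sketch of a discrete Naive Bayes. It is not the Weka implementation used later in the experiments, and the Laplace smoothing of the conditional counts is our own assumption.

```python
# A minimal discrete Naive Bayes sketch: c_m = argmax_c p(c) * prod_i p(x_i | c),
# estimated from bi-dimensional class/attribute counts (Laplace-smoothed here).
import math
from collections import Counter, defaultdict

class SimpleNaiveBayes:
    def fit(self, rows, labels):
        self.N = len(labels)
        self.class_counts = Counter(labels)
        self.counts = defaultdict(Counter)   # counts[i][(c, v)] = n_{c, X_i = v}
        self.values = defaultdict(set)       # observed states of each attribute
        for row, c in zip(rows, labels):
            for i, v in enumerate(row):
                self.counts[i][(c, v)] += 1
                self.values[i].add(v)
        return self

    def predict_one(self, row):
        best, best_score = None, float("-inf")
        for c in self.class_counts:
            score = math.log(self.class_counts[c] / self.N)
            for i, v in enumerate(row):
                num = self.counts[i][(c, v)] + 1             # Laplace smoothing
                den = self.class_counts[c] + len(self.values[i])
                score += math.log(num / den)
            if score > best_score:
                best, best_score = c, score
        return best
```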

4. Variable Selection Methods

The aim of a variable selection method is to select a subset of variables that can effectively replace the original set of attributes while reducing the unfavourable effects of irrelevant or redundant features and still provide good results or, even better, improve the model performance.
Taking into account that many data mining techniques, such as the Naive Bayes classifier, were not originally designed to work with considerable amounts of irrelevant/redundant features, variable selection is almost a requisite for working with many datasets. Furthermore, the variable selection stage will allow us to work efficiently with large datasets. Nevertheless, we must bear in mind that variable selection adds an extra level of complexity to the process and there is no guarantee that the optimal subset of variables will be selected.
The methods used to select variables depend on whether a base classifier is used as a reference or not. They are generally grouped into two classes: (i) filter methods, which select variables independently of the classification method used; and (ii) wrapper methods, which depend on the classification method subsequently used (Hall and Holmes [1]).
Wrapper methods need to check subsets of variables in the base classifier at many of their steps. This procedure makes them hardly suitable when the number of features is huge. When the amount of data increases, filter methods are more appropriate, although the use of a particular filter method depends on its computational cost: it may not be applicable due to the size of the data or the number of features.
The most popular filter methods applied to the Naive Bayes classifier use as a tool the mutual information between each attribute and the class variable [20], which is just the same procedure as that of the information gain. A filter method that evaluates subsets of features can also have a high computational cost, although it is normally lower than that of the wrapper methods when they are used to improve the Naive Bayes.

4.1. Info-Gain

This metric was introduced by Quinlan as the basis for his ID3 model [2]. This model has the following main features: it was defined to obtain decision trees with discrete variables, it does not work with missing values, it does not carry out a pruning process, and it is based on Shannon’s entropy [4]. This split criterion can therefore be defined for an attribute variable $X$ given the class variable $C$ in the following way:
$IG(X, C) = S(C) - S(C \mid X),$
where $S(C)$ is the entropy of $C$: $S(C) = -\sum_j p(c_j) \log p(c_j)$, with $p(c_j) = p(C = c_j)$ the probability of each value of the class variable, estimated from the training dataset. In the same way,
$S(C \mid X) = -\sum_t p(x_t) \sum_j p(c_j \mid x_t) \log p(c_j \mid x_t),$
where $x_t$, $t = 1, \ldots, |\Omega_X|$, is each possible state of $X$ and $c_j$, $j = 1, \ldots, k$, each possible state of $C$. Finally, we can obtain the following reduced expression for the Info-Gain criterion:
$IG(X, C) = \sum_t \sum_j p(c_j, x_t) \log \frac{p(c_j, x_t)}{p(c_j)\, p(x_t)} .$
This criterion is also known as the Mutual Information Criterion, and it is widely used for measuring the degree of dependence between an attribute variable and the class variable. It tends to select attribute variables with many states and consequently results in excessive ramification.
The IG criterion can be used as a variable selection method: it creates a ranking of informative variables. Its principal drawback is that it always gives a non-negative value of gain in information:
$IG(X, C) \ge 0, \quad \forall X.$
This result is a direct consequence of Gibbs’ inequality for two probability distributions $p = (p_i)$ and $q = (q_i)$ on a finite set $X$:
$-\sum_i p_i \log p_i \le -\sum_i p_i \log q_i .$
In the case of the IG, we can consider the probability distribution $q$ on $C \times X$ defined as $q(c_j, x_t) = p(c_j)\, p(x_t)$, where $p$ is the above probability distribution based on the frequencies in the dataset.
As that value is always non-negative, the IG criterion creates a ranking of features based on their gain in information. The problem is where to fix the threshold used as a filter to select the variables. The maximum gain in information is $\log |\Omega_C|$; hence, the function to consider as a threshold must be $\alpha \log |\Omega_C|$, with $\alpha \in [0, 1]$.
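A small sketch of the IG computation from a class/attribute contingency table follows (our own code, using natural logarithms); the example counts are the ones used later in Example 1 of Section 4.3.

```python
# Hedged sketch of the Info-Gain criterion: IG(X, C) = S(C) - S(C | X),
# computed from the joint counts n_{c_j, x_t}.
import math

def entropy(counts):
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

def info_gain(joint):
    """joint[t][j] = n_{c_j, x_t}: count of class c_j within the branch X = x_t."""
    N = sum(sum(row) for row in joint)
    class_counts = [sum(col) for col in zip(*joint)]
    conditional = sum((sum(row) / N) * entropy(row) for row in joint)
    return entropy(class_counts) - conditional

# Branches (5, 3) and (4, 1) over classes {c1: 9, c2: 4} give IG close to 0.0177.
print(info_gain([[5, 3], [4, 1]]))
```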

4.2. Imprecise Info-Gain

The Imprecise Info-Gain criterion was first used for building decision trees in Abellán and Moral’s method [10]. In a similar way to ID3, this tree is only defined for discrete variables; it cannot work with missing values; and it does not carry out a posterior pruning process. It is based on the application of uncertainty measures on convex sets of probability distributions. More specifically, probability intervals are extracted from the dataset for each case of the class variable using Walley’s imprecise Dirichlet model [12] (IDM), which represents a specific kind of convex set of probability distributions, and on these the maximum entropy is estimated.
As we explained in previous sections, the IDM depends on a parameter $s$ and estimates that (in a given dataset) the probability of each value of the class variable $c_j$ lies within the interval:
$p(c_j) \in \left[ \frac{n_{c_j}}{N+s},\ \frac{n_{c_j}+s}{N+s} \right],$
with $n_{c_j}$ the frequency of the set of values $(C = c_j)$ in the dataset.
If we label $K(C)$ and $K(C \mid (X = x_t))$ the following sets of probability distributions $q$ on $\Omega_C$:
$K(C) = \left\{ q \ \middle|\ q(c_j) \in \left[ \frac{n_{c_j}}{N+s},\ \frac{n_{c_j}+s}{N+s} \right] \right\},$
$K(C \mid (X = x_t)) = \left\{ q \ \middle|\ q(c_j) \in \left[ \frac{n_{\{c_j, x_t\}}}{N+s},\ \frac{n_{\{c_j, x_t\}}+s}{N+s} \right] \right\},$
with $n_{\{c_j, x_t\}}$ the frequency of the set of values $(C = c_j, X = x_t)$ in the dataset, we can define the Imprecise Info-Gain for each attribute variable $X$ as:
$IIG(X, C) = S^*(K(C)) - \sum_t p(x_t)\, S^*(K(C \mid (X = x_t))),$
where $S^*(\cdot)$ is the maximum entropy function of a credal set.
For the previously defined intervals and a value of $s = 1$, Abellán’s procedure [21] has a low computational cost. This value is the one recommended by Walley [12] and the one used to build decision trees in Abellán and Moral [10]. Given a credal set $K(C)$ defined as above, we must first determine the set $B = \{ c_j \mid n_{c_j} = \min_i \{ n_{c_i} \} \}$. Let $|B|$ be the cardinality of $B$. If we use $\hat{p}$ to denote the distribution where the maximum of entropy is reached, the procedure of Abellán [21] for $s = 1$ can be expressed in the following way:
$\hat{p}(c_i) = \begin{cases} \dfrac{n_{c_i}}{N+s} & \text{if } c_i \notin B, \\[4pt] \dfrac{n_{c_i} + s/|B|}{N+s} & \text{if } c_i \in B. \end{cases}$
In this procedure, the value of $s$ is split as uniformly as possible among the states of $C$ with the lowest frequency. For values $s > 1$, the algorithm to find the maximum entropy value has a higher computational cost.
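The procedure for s = 1 can be sketched as follows (our own illustrative code, not the implementation of [21]): the extra mass s is shared among the least frequent classes, and the entropy of the resulting distribution is the maximum entropy of the credal set.

```python
# Hedged sketch of the s = 1 maximum entropy procedure: share s among the
# classes with minimum frequency (the set B) and return p-hat and its entropy.
import math

def max_entropy_idm(counts, s=1.0):
    N = sum(counts)
    B = [i for i, n in enumerate(counts) if n == min(counts)]
    p_hat = [(n + (s / len(B) if i in B else 0.0)) / (N + s)
             for i, n in enumerate(counts)]
    value = -sum(p * math.log(p) for p in p_hat if p > 0)
    return p_hat, value

# Class counts (9, 4) with s = 1: p-hat = (9/14, 5/14) and maximum entropy ~ 0.6518.
print(max_entropy_idm([9, 4]))
```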
In Abellán, Klir and Moral [6], we can see that the maximum entropy function includes two kinds of uncertainty: conflict and non-specificity. The first is somewhat similar to the one in PT, well quantified by Shannon’s entropy. The second is a new type of uncertainty that does not appear in PT.
The maximum of entropy is the only measure that verifies, on credal sets, a set of important properties similar to those verified by the classic entropy in PT (see Abellán et al. [6]). The use of the maximum entropy function on this type of set is also justified by the following sentence of Jaynes [22] about the meaning of this measure:
“The fact that a certain probability distribution maximizes entropy subject to certain constraints representing our incomplete information, is the fundamental property which justifies use of that distribution for inference, it agrees with everything that is known, but carefully avoids assuming anything that is not known.”
The practical application of the IIG procedure can be easily explained without the concept of a credal set, using only the frequencies $n_{c_j}$ and the expression of the $\hat{p}$ values. Let us apply the following notation:
  • $\mathbf{n} = (n_{c_1}, \ldots, n_{c_k})$;
  • $\mathbf{n}'$ is the array $\mathbf{n}$ where the mass $s = 1$ is shared among the minimum values (as in $\hat{p}$);
  • $S(\mathbf{n}) = S(C)$ is the entropy, using a notation on non-normalized values;
  • $S(\mathbf{n}')$ is the maximum entropy value of $C$; it is a consequence of the above points;
  • $\mathbf{n}_{X = x_t}$ is the array $(n_{\{c_1, x_t\}}, \ldots, n_{\{c_k, x_t\}})$.
With the above notation, IIG can be expressed as follows:
$IIG(C, X) = S(\mathbf{n}') - \sum_{x_t} p(X = x_t)\, S\big(\mathbf{n}'_{X = x_t}\big).$
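Using the max_entropy_idm sketch given above, the IIG criterion can be illustrated as follows (again our own hedged code): a positive value keeps the feature, a negative value discards it.

```python
# Hedged sketch of IIG(C, X) = S*(K(C)) - sum_t p(x_t) S*(K(C | X = x_t)),
# built on the max_entropy_idm function sketched earlier in this section.
def imprecise_info_gain(joint, s=1.0):
    """joint[t] = counts of each class value within the branch X = x_t."""
    N = sum(sum(row) for row in joint)
    class_counts = [sum(col) for col in zip(*joint)]
    s_star_class = max_entropy_idm(class_counts, s)[1]
    conditional = sum((sum(row) / N) * max_entropy_idm(row, s)[1] for row in joint)
    return s_star_class - conditional

# X_2 of Example 1 (Section 4.3): branches (2, 2), (5, 2) and (2, 0) give a
# negative value, so the feature would be discarded.
print(imprecise_info_gain([[2, 2], [5, 2], [2, 0]]))
```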
The original procedure to build decision trees [10], where the IIG was presented, inserts at each node the feature with the highest value of IIG.

4.3. IG versus IIG

One of the most significant differences between the IG and the IIG criteria is that the IIG can be negative, a situation that never occurs with the IG criterion. This important characteristic allows the IIG criterion to discard variables that worsen the information on the class variable. This property allows us to detect directly which features are irrelevant, as they obtain negative values of information gain, which is equivalent to a loss of information.
With the IG criterion, we obtain a feature ranking, but this does not give a subset of variables, so we must decide which variables to select. To address this issue, the variables selected must have an IG value greater than or equal to a threshold. The threshold used depends on an α parameter and on the number of states of the class variable of each dataset; hence we have that
$threshold = \alpha \times \log(k),$
where $\alpha \in [0, 1]$ and $k = |\Omega_C|$.
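The two selection rules can be contrasted with a short sketch (our own; the logarithm base, left unspecified in the text, is taken here as natural): the IG keeps the features whose gain reaches the threshold α·log(k), whereas the IIG simply keeps the features with a positive gain.

```python
# Hedged sketch of the two filter rules: threshold-based selection for IG
# versus sign-based selection for IIG.
import math

def select_by_ig(ig_values, k, alpha):
    """ig_values: dict feature -> IG value; k: number of class states."""
    threshold = alpha * math.log(k)
    return [f for f, ig in ig_values.items() if ig >= threshold]

def select_by_iig(iig_values):
    """iig_values: dict feature -> IIG value; keep only positive gains."""
    return [f for f, iig in iig_values.items() if iig > 0]
```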
Example 1 shows a practical case where the difference between both criteria can be appreciated: some features can be selected by the IG criterion but not by the IIG.
Example 1.
Let $C$ be a class variable with two possible states $\{c_1, c_2\}$ and suppose that we have the frequencies $\{c_1: 9,\ c_2: 4\}$. We also consider that we only have two attribute variables $X_1, X_2$, with possible values $X_1 \in \{x_1^1, x_2^1\}$ and $X_2 \in \{x_1^2, x_2^2, x_3^2\}$. The frequencies of each combination of states are the following:
$X_1 = x_1^1$: (5 of class $c_1$, 3 of class $c_2$); $X_1 = x_2^1$: (4 of class $c_1$, 1 of class $c_2$); $X_2 = x_1^2$: (2 of class $c_1$, 2 of class $c_2$); $X_2 = x_2^2$: (5 of class $c_1$, 2 of class $c_2$); $X_2 = x_3^2$: (2 of class $c_1$, 0 of class $c_2$).
Considering the IG criterion, we always have an improvement in the gain of information. The values obtained with this criterion are the following ones (using the natural logarithm):
$IG(C, X_1) = 0.6172 - \tfrac{8}{13} \cdot 0.6615 - \tfrac{5}{13} \cdot 0.5004 = 0.0177,$
$IG(C, X_2) = 0.6172 - \tfrac{4}{13} \cdot 0.6931 - \tfrac{7}{13} \cdot 0.5983 - \tfrac{2}{13} \cdot 0 = 0.0818.$
Here, the feature $X_2$ produces the greater gain of information by the IG criterion. Now, if we set $\alpha = 0.1$, then $threshold = 0.0301$ and only $X_2$ would be selected. If we set $\alpha = 0.01$, then $threshold = 0.003$ and both features would be selected.
However, with the IIG criterion (and s = 1 ), we have the following values:
$IIG(C, X_1) = 0.6518 - \tfrac{8}{13} \cdot 0.6850 - \tfrac{5}{13} \cdot 0.6368 = -0.0002,$
$IIG(C, X_2) = 0.6518 - \tfrac{4}{13} \cdot 0.6931 - \tfrac{7}{13} \cdot 0.6615 - \tfrac{2}{13} \cdot 0.6368 = -0.0157.$
Now, none of the variables would be selected by the IIG criterion.
The different results obtained in the above example are motivated by the use of the maximum entropy measure. The best value of the α parameter for obtaining the best set of informative features with the IG criterion is an open question: in the example, different values of α produce different sets of selected variables.
To show that both criteria also perform differently in the opposite situation to the one in Example 1, we present the following example based on a real dataset.
Example 2.
We consider the anneal dataset from the University of California Irvine (UCI) repository of machine learning datasets [23], which we will also use in our experimentation in the next sections. This dataset has 38 features and its class variable has six states (k = 6). The gain in information of the last 12 features (the worst ones) for both criteria is presented in Table 1. The values are obtained using the same procedure as in Example 1.
Considering that log(6) = 1.7917, the threshold for α = 0.01 is 0.0179. Hence, the IG criterion with that value of α does not select any feature from the “exptl” feature downwards, including that feature itself. Observing the gain in information given by the IIG criterion for that feature, we see that it is selected by the IIG criterion because its value is positive (0.0118).
The above example shows a situation where a feature is selected by the IIG criterion but not by the IG criterion even with its least restrictive value of α. Hence, that feature is not selected using the rest of the α values either.

5. Experimentation

In this section, we describe the experiments carried out and show the results obtained. For our purpose, we have used a wide and varied set of 30 well-known datasets obtained from the UCI repository of machine learning datasets [23]. A brief description of these can be found in Table 2, where column “N” is the number of instances in the dataset, column “Attrib” is the number of attribute variables, column “Num” is the number of numerical variables, column “Nom” is the number of nominal variables, and column “k” is the number of cases or states of the class variable (always a nominal variable).
The Weka software (ver. 3.8) [24] was used for the experimentation. The procedure for selecting attributes with the IIG method was implemented using Weka data structures, and the IDM parameter was set to s = 1, i.e., the value used in the original methods of [18]. The reasons for using this value were: first, it is the value recommended by Walley [12]; and, second, the procedure to obtain the maximum entropy value reaches its lowest computational cost for this value [21].
As we have seen above, the IIG criterion allows us to discard those variables with a negative information gain, whereas the IG measure only provides an attribute ranking, which means that a threshold must be chosen to discard the irrelevant features. The selected threshold is determined by the number of states of the class variable and a parameter α. For our experiments, we chose four different values for that parameter: 0.01, 0.02, 0.05 and 0.1. For values lower than 0.01, the IG criterion selects all of the features for many of the datasets; for values higher than 0.1, the criterion selects a very low number of features and the performance of the Naive Bayes is then very poor.
In the experimentation, for each dataset, we applied the following procedure of 10-fold cross-validation repeated 10 times: the dataset is split into 10 subsets; each one is used as a test set, and the set obtained by joining the other nine subsets is used as the training set. Thus, we have 10 training sets and 10 test sets. This procedure is repeated 10 times after a previous random reordering, finally producing 100 training sets and 100 test sets. Using only the training set, features are selected via the IG and IIG methods, obtaining a final subset of features. The Naive Bayes classifier is built on that subset, and the same subset of features is used for the test set, on which the classifier is finally applied. These results are compared with those of the Naive Bayes classifier with no variable selection scheme, following the same validation procedure.
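A schematic of this validation protocol, in hedged Python (our own, reusing the SimpleNaiveBayes sketch of Section 3): the key point is that the subset of features is chosen on each training fold only and then reused, unchanged, on the matching test fold.

```python
# Schematic 10x10 cross-validation with feature selection inside each fold.
# select_features(train_rows, train_labels) is assumed to return the indices
# of the selected attributes (e.g., via the IG or IIG sketches above).
import random

def repeated_cv(rows, labels, select_features, repeats=10, folds=10, seed=0):
    rng, accs = random.Random(seed), []
    for _ in range(repeats):
        idx = list(range(len(rows)))
        rng.shuffle(idx)
        for f in range(folds):
            test_idx = idx[f::folds]
            test_set = set(test_idx)
            train_idx = [i for i in idx if i not in test_set]
            # feature selection is done on the training fold only
            keep = select_features([rows[i] for i in train_idx],
                                   [labels[i] for i in train_idx])
            project = lambda i: [rows[i][j] for j in keep]
            model = SimpleNaiveBayes().fit([project(i) for i in train_idx],
                                           [labels[i] for i in train_idx])
            hits = sum(model.predict_one(project(i)) == labels[i] for i in test_idx)
            accs.append(hits / len(test_idx))
    return sum(accs) / len(accs)   # average accuracy over the 100 train/test pairs
```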
In summary, we have tested the Naive Bayes classifier without variable selection and the Naive Bayes with a previous variable selection procedure, using the IIG criterion and the IG criterion (the latter with four different thresholds).
The percentages of correct classifications for each dataset and each method are presented in Table 3, together with the standard deviations, to appreciate the variation across the iterations of the methods. Table 4 presents the average number of attributes selected by each variable selection method. In addition, the average time in seconds consumed by each method, when applied on a pair of training and test sets, is shown in Table 5.
We have compared the results of the Naive Bayes classifier combined with a previous variable selection method with those of the original classifier without a previous variable selection procedure. Following the recommendation of Demšar [25], we used the well-known Wilcoxon Signed-Ranks test [26] for the pairwise comparisons. We consider this comparison more appropriate than comparing all of the methods jointly, because four of them are the same method with a varying threshold value, and the principal aim is the improvement of the Naive Bayes classifier. The tests were carried out using the Keel software (ver. 2015-03-23) [27].
The Wilcoxon Signed-Ranks test [26] is a non-parametric test that ranks the differences in performance of two classifiers over each dataset, comparing the ranks of the positive and the negative differences. It takes into account the commensurability of the differences, which the Sign test does not [25]. It is used to compare two related samples, matched samples, or repeated measurements on a single sample, in order to check whether their population mean ranks differ.
Given $A_1$ and $A_2$, two paired sets of data with a sample size of $N$:
$A_1 = \{A_{11}, \ldots, A_{1N}\},$
$A_2 = \{A_{21}, \ldots, A_{2N}\},$
the null hypothesis is that the sample means are equivalent, and the test is based on the following statistic:
$W = \sum_{i=1}^{N} \big[ \operatorname{sign}(A_{1i} - A_{2i}) \cdot Rank_i \big],$
where $N$ is the sample size without ties, $\operatorname{sign}$ is the function that returns the sign of a value, and $Rank_i$ is the ranking associated with the order established by the values of the set $\{ |A_{1i} - A_{2i}|,\ i = 1, \ldots, N \}$. $W$ follows a distribution with an expected value of 0 and a variance of $\frac{N(N+1)(2N+1)}{6}$.
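In practice, such a pairwise test can be run, for instance, with SciPy; the snippet below (our own illustration) compares only the NB and IIG accuracy columns of the first five datasets of Table 3, so the resulting values are purely illustrative.

```python
# Hedged example of the pairwise Wilcoxon Signed-Ranks test on two accuracy
# columns (here only the first five rows of Table 3, for illustration).
from scipy.stats import wilcoxon

nb_acc  = [86.59, 62.40, 72.64, 57.41, 72.70]   # Naive Bayes
iig_acc = [86.47, 67.06, 74.27, 57.45, 72.76]   # NB after IIG-based selection

stat, p_value = wilcoxon(nb_acc, iig_acc)
print(stat, p_value)   # a small p-value would indicate a significant difference
```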
In Table 6, we compare each method with the others using the Wilcoxon test.

Comments on the Results

Table 3 shows that the predictive accuracy of the Naive Bayes classifier usually improves when a variable selection step is carried out. However, it should be noted that better results are not always achieved: for IG 0.05 and IG 0.1, the results are worse. If we take a closer look, the best overall results are achieved using the IIG criterion. Hence, we can state that, in general, the Naive Bayes classifier offers better performance in predictive accuracy when the IIG criterion is used as a previous variable selection step.
Regarding accuracy, we must note that the IIG criterion does not always achieve the best performance. Depending on the dataset and the threshold used, the IG measure with different thresholds can return the best results in some cases. Therefore, the performance of the IG criterion could be improved by carrying out a search or study to determine the most appropriate threshold for every dataset. However, how to select the correct threshold value depending on the properties of each dataset is still an open problem. Moreover, a study to determine the best threshold for each problem adds more complexity to the variable selection step.
With respect to the results on the number of selected features, shown in Table 4, we should mention that the variable selection performed using the IIG criterion is less aggressive than the different options based on the IG measure that were evaluated. The fact that the IIG does not require a threshold, since it removes variables with a negative information gain, suggests that this criterion only discards those features that are truly irrelevant and add no information about the class variable. This table also shows the problem of using a threshold for the IG criterion: for the same value of the α parameter, the number of selected variables varies notably between datasets. It is clear that the greater the value of the α parameter, the lower the number of chosen variables for the same dataset. It can be seen that, for some datasets and using α = 0.01, the number of variables selected is very similar to the number obtained using the IIG criterion; for those datasets, the differences in performance are due to the different treatment of the information by the two methods, as was shown in Example 1. In other datasets, using α = 0.1, the IG measure does not select any variable.
It is important to remark that we found performance differences when comparing two procedures that select a similar average number of variables, i.e., IG 0.01 and IIG. For some datasets (e.g., audiology and german-credit), the first procedure selects a notably lower number of variables than the second one; for other datasets (e.g., solar-flare and sponge), the first one selects a notably higher number of variables than the second one. However, in all these cases, the accuracy of the IIG criterion is significantly better, and the IIG criterion is clearly more informative for variable selection.
The accuracy results in Table 3 show that the Naive Bayes is usually better when combined with the IIG criterion in a previous feature selection step. Using the IIG, the results never deteriorate significantly. We cannot say the same for the IG with its different threshold levels. For example, if we compare the accuracy results for the IG with α = 0.05 with those of the original NB, we get results of clear improvement or deterioration (IG 0.05 | NB):
horse-colic: 82.32 | 78.70,
german-credit: 68.79 | 75.16.
The difference can be close to 7%. With α = 0.01, we get similar results, such as these (IG 0.01 | NB):
hepatitis: 85.09 | 83.81,
sponge: 89.79 | 92.11.
Table 5 shows that all of the procedures have an extremely low computational cost, with average processing times below a tenth of a second. Another outcome is that the Naive Bayes is the fastest method (as expected, since it has no variable selection stage). The different versions of the IG achieve improved response times depending on the number of variables selected. The IIG approach, though very fast, is slower than the rest of the approaches studied, since working with imprecise probabilities and the maximum entropy measure has a greater computational cost than working with precise probabilities and the Shannon entropy.
The statistical differences are analyzed via the Wilcoxon test carried out between each pair of methods; the results are presented in Table 6. Only the IIG approach achieves better results than the Naive Bayes without variable selection, and these results are statistically significant for each level of significance used. None of the IG variants studied produces significantly better results than the Naive Bayes classifier without variable selection. We also note that the IG with α = 0.1 is notably worse than the rest of the proposals.

6. Conclusions

As was pointed out in previous sections, variable selection is an important step in the preprocessing stage of data mining. In this paper, we have presented a new method to select variables, based on imprecise probabilities and the maximum entropy measure, to improve a well-known classification method. We have shown, via an experimental study, that the predictive accuracy of the Naive Bayes classifier improves with the new variable selection method. Using an appropriate statistical test, we obtain that the differences are statistically significant in favour of the use of the new method.
We compared the new method with a similar classical method based on precise entropy, called Info-Gain (IG). It must be remarked that, in the same test carried out to compare the original Naive Bayes with the one using the IG for a previous variable selection, we do not obtain significant statistical differences.
An important difference of the new method with respect to the classical one used for comparison is that the IG requires using a threshold to perform variable selection. Choosing the best threshold for each dataset is a difficult task. It should be emphasized that our proposal does not require setting a threshold, since variables with a negative information gain are discarded. The new method provides a quick and improved procedure for the preliminary step of variable selection for the Naive Bayes classifier that outperforms the IG approach, while removing the requirement of setting a threshold. The results presented in this work reinforce this assertion.
The final conclusion of this work is that the combination of the new variable selection method presented here with the Naive Bayes classifier provides an extremely useful tool for datasets with a very large number of features and a huge amount of data, where using complex methods is not computationally feasible. Applying it to such datasets is one of our tasks for future work.
A more immediate task for future work is to explore the use of the IIG criterion in complex methods to select variables based on IG. We think that this could provide an interesting improvement.

Acknowledgments

This work has been supported by the Spanish “Ministerio de Economía y Competitividad” and by “Fondo Europeo de Desarrollo Regional” (FEDER) under Project TEC2015-69496-R.

Author Contributions

Joaquín Abellán is responsible for the new method and its first implementation, and he also wrote the paper. Javier G. Castellano completed the implementation and designed the experiments, and he is responsible for the results, tables and tests carried out. All authors have read and approved the final version of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hall, M.A.; Holmes, G. Benchmarking attribute selection techniques for discrete class data mining. IEEE Trans. Knowl. Data Eng. 2003, 15, 1437–1447.
  2. Quinlan, J.R. Induction of Decision Trees. Mach. Learn. 1986, 1, 81–106.
  3. Walley, P. Statistical Reasoning with Imprecise Probabilities; Taylor & Francis: London, UK, 1991.
  4. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  5. Klir, G.J. Uncertainty and Information: Foundations of Generalized Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2005.
  6. Abellán, J.; Klir, G.; Moral, S. Disaggregated total uncertainty measure for credal sets. Int. J. Gen. Syst. 2006, 35, 29–44.
  7. Abellán, J.; Masegosa, A. Requirements for total uncertainty measures in Dempster–Shafer theory of evidence. Int. J. Gen. Syst. 2008, 37, 733–747.
  8. Abellán, J.; Bossé, É. Drawbacks of Uncertainty Measures Based on the Pignistic Transformation. IEEE Trans. Syst. Man Cybern. 2016, 1–7.
  9. Mantas, C.J.; Abellán, J. Analysis and extension of decision trees based on imprecise probabilities: Application on noisy data. Expert Syst. Appl. 2014, 41, 2514–2525.
  10. Abellán, J.; Moral, S. Building classification trees using the total uncertainty criterion. Int. J. Intell. Syst. 2003, 18, 1215–1225.
  11. De Campos, L.M.; Huete, J.F.; Moral, S. Probability intervals: A tool for uncertain reasoning. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 1994, 2, 167–196.
  12. Walley, P. Inferences from multinomial data: Learning about a bag of marbles (with discussion). J. R. Stat. Soc. Ser. B Methodol. 1996, 58, 3–57.
  13. Dempster, A.P. Upper and Lower Probabilities Induced by a Multivalued Mapping. Ann. Math. Stat. 1967, 38, 325–339.
  14. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976.
  15. Klir, G.J.; Smith, R.M. On Measuring Uncertainty and Uncertainty-Based Information: Recent Developments. Ann. Math. Artif. Intell. 2001, 32, 5–33.
  16. Abellán, J.; Moral, S. An algorithm to compute the upper entropy for order-2 capacities. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2006, 14, 141–154.
  17. Klir, G.; Wierman, M. Uncertainty-Based Information: Elements of Generalized Information Theory; Springer Science & Business Media: Berlin, Germany, 1999.
  18. Abellán, J.; Moral, S. Upper entropy of credal sets. Applications to credal classification. Int. J. Approx. Reason. 2005, 39, 235–255.
  19. Duda, R.O.; Hart, P.E. Pattern Classification and Scene Analysis; John Wiley & Sons: New York, NY, USA, 1973.
  20. Pazzani, M.; Billsus, D. Learning and Revising User Profiles: The Identification of Interesting Web Sites. Mach. Learn. 1997, 27, 313–331.
  21. Abellán, J. Uncertainty measures on probability intervals from the imprecise Dirichlet model. Int. J. Gen. Syst. 2006, 35, 509–528.
  22. Jaynes, E.T. Notes on Present Status and Future Prospects. In Maximum Entropy and Bayesian Methods; Springer: Dordrecht, The Netherlands, 1990; pp. 1–13.
  23. Lichman, M. UCI Machine Learning Repository. 2013. Available online: http://archive.ics.uci.edu/ml/ (accessed on 24 May 2017).
  24. Witten, I.H.; Frank, E.; Hall, M.A.; Pal, C.J. Data Mining: Practical Machine Learning Tools and Techniques; Morgan Kaufmann: Burlington, MA, USA, 2005.
  25. Demšar, J. Statistical Comparisons of Classifiers over Multiple Data Sets. J. Mach. Learn. Res. 2006, 7, 1–30.
  26. Wilcoxon, F. Individual Comparisons by Ranking Methods. Biom. Bull. 1945, 1, 80–83.
  27. Alcalá-Fdez, J.; Sánchez, L.; García, S.; del Jesus, M.; Ventura, S.; Garrell, J.; Otero, J.; Romero, C.; Bacardit, J.; Rivas, V.; et al. KEEL: A software tool to assess evolutionary algorithms for data mining problems. Soft Comput. 2009, 13, 307–318.
Figure 1. Naive Bayes classifier.
Table 1. Gain in information for the 12 worst features of the anneal dataset using the IG and IIG criteria.
IG | Feature | IIG | Feature
0.0193 | bore | 0.0118 | exptl
0.0182 | lustre | 0.0087 | lustre
0.0156 | exptl | 0.0044 | bore
0.0039 | packing | 0 | product-type
0.0004 | bc | 0 | jurofm
0 | product-type | 0 | p
0 | jurofm | 0 | s
0 | p | 0 | m
0 | s | 0 | corr
0 | m | 0 | marvi
0 | corr | −0.0019 | bc
0 | marvi | −0.0052 | packing
Table 2. Datasets used in the experimentation.
Dataset | N | Attrib | Num | Nom | k
anneal | 898 | 38 | 6 | 32 | 6
arrhythmia | 452 | 279 | 206 | 73 | 16
audiology | 226 | 69 | 0 | 69 | 24
autos | 205 | 25 | 15 | 10 | 7
breast-cancer | 286 | 9 | 0 | 9 | 2
cmc | 1473 | 9 | 2 | 7 | 2
colic | 368 | 22 | 7 | 15 | 2
credit-german | 1000 | 20 | 7 | 13 | 2
diabetes | 768 | 8 | 8 | 0 | 2
glass-2 | 163 | 9 | 9 | 0 | 2
hepatitis | 155 | 19 | 4 | 15 | 2
hypothyroid | 3772 | 29 | 7 | 22 | 4
ionosphere | 351 | 35 | 35 | 0 | 2
japanese | 690 | 15 | 9 | 6 | 2
kr-vs-kp | 3196 | 36 | 0 | 36 | 2
madelon | 2600 | 500 | 500 | 0 | 2
mushroom | 8123 | 22 | 0 | 22 | 2
musk1 | 476 | 166 | 166 | 0 | 2
musk2 | 6598 | 166 | 166 | 0 | 2
semeion | 1593 | 256 | 256 | 0 | 10
segment | 2310 | 19 | 19 | 0 | 7
spectrometer | 531 | 131 | 130 | 1 | 48
sick | 3772 | 29 | 7 | 22 | 2
solar-flare | 323 | 12 | 0 | 12 | 2
sonar | 208 | 60 | 60 | 0 | 2
soybean | 683 | 35 | 0 | 35 | 19
sponge | 76 | 44 | 0 | 44 | 3
vote | 435 | 16 | 0 | 16 | 2
vowel | 990 | 12 | 10 | 2 | 11
zoo | 101 | 16 | 1 | 15 | 7
Table 3. Percentages of the average Accuracy and Standard Deviations of the Naive Bayes classifier considering all of the methods of variable selection on each dataset.
Dataset | NB | IG 0.01 | IG 0.02 | IG 0.05 | IG 0.1 | IIG
anneal | 86.59 ± 3.31 | 86.16 ± 3.42 | 86.20 ± 3.38 | 86.36 ± 3.07 | 90.12 ± 3.14 | 86.47 ± 3.39
arrhythmia | 62.40 ± 6.97 | 66.66 ± 5.90 | 66.49 ± 5.96 | 62.14 ± 6.12 | 54.29 ± 1.21 | 67.06 ± 5.79
audiology | 72.64 ± 6.10 | 73.60 ± 6.19 | 74.08 ± 6.99 | 66.97 ± 7.09 | 68.30 ± 7.98 | 74.27 ± 5.79
autos | 57.41 ± 10.77 | 57.45 ± 11.15 | 56.53 ± 11.40 | 57.40 ± 11.45 | 60.90 ± 11.63 | 57.45 ± 11.15
breast-cancer | 72.70 ± 7.74 | 72.79 ± 7.93 | 73.01 ± 7.99 | 71.31 ± 7.21 | 70.30 ± 1.37 | 72.76 ± 7.98
cmc | 50.48 ± 4.22 | 50.36 ± 4.24 | 50.41 ± 4.48 | 47.69 ± 3.43 | 42.70 ± 0.25 | 50.48 ± 4.22
horse-colic | 78.70 ± 6.20 | 78.78 ± 5.72 | 80.08 ± 5.69 | 82.32 ± 5.81 | 80.33 ± 5.88 | 78.65 ± 5.77
credit-german | 75.16 ± 3.48 | 74.55 ± 4.06 | 74.14 ± 3.60 | 68.79 ± 2.81 | 69.50 ± 2.04 | 75.21 ± 3.68
diabetes | 75.75 ± 5.32 | 75.09 ± 5.52 | 74.99 ± 5.51 | 75.70 ± 5.54 | 74.62 ± 4.96 | 75.09 ± 5.52
glass2 | 62.43 ± 8.86 | 64.05 ± 9.05 | 64.05 ± 9.05 | 64.05 ± 9.05 | 64.89 ± 10.13 | 64.05 ± 9.05
hepatitis | 83.81 ± 9.70 | 85.09 ± 9.48 | 84.96 ± 9.92 | 84.69 ± 9.24 | 82.15 ± 8.88 | 84.38 ± 9.88
hypothyroid | 95.30 ± 0.73 | 94.80 ± 0.81 | 94.78 ± 0.80 | 94.80 ± 0.78 | 93.97 ± 0.86 | 95.29 ± 0.76
ionosphere | 82.17 ± 6.14 | 82.42 ± 6.23 | 82.42 ± 6.23 | 82.45 ± 6.20 | 82.48 ± 6.16 | 82.42 ± 6.23
japanese | 77.86 ± 4.18 | 77.90 ± 4.23 | 77.84 ± 4.22 | 77.01 ± 4.25 | 76.83 ± 4.63 | 77.99 ± 4.29
kr-vs-kp | 87.79 ± 1.91 | 89.09 ± 2.09 | 89.40 ± 1.92 | 90.43 ± 1.51 | 80.29 ± 7.23 | 87.80 ± 1.92
madelon | 59.44 ± 2.66 | 60.58 ± 3.05 | 60.34 ± 2.97 | 52.97 ± 4.51 | 50.00 ± 0.00 | 59.54 ± 3.09
mushroom | 95.76 ± 0.73 | 95.60 ± 0.80 | 95.57 ± 0.79 | 95.85 ± 0.81 | 95.72 ± 0.78 | 95.76 ± 0.73
musk1 | 73.99 ± 8.04 | 74.62 ± 7.74 | 74.62 ± 7.74 | 73.38 ± 7.67 | 70.40 ± 6.22 | 74.62 ± 7.74
musk2 | 83.91 ± 1.62 | 83.89 ± 1.62 | 83.44 ± 1.54 | 82.47 ± 1.55 | 82.52 ± 1.66 | 83.91 ± 1.62
segment | 80.17 ± 2.12 | 79.92 ± 1.90 | 79.92 ± 1.89 | 78.97 ± 1.95 | 78.97 ± 1.95 | 80.17 ± 2.12
semeion | 86.36 ± 2.70 | 86.43 ± 2.59 | 86.33 ± 2.71 | 80.75 ± 2.79 | 41.07 ± 3.00 | 86.43 ± 2.59
sick | 92.75 ± 1.36 | 94.59 ± 1.24 | 95.02 ± 1.10 | 96.61 ± 0.80 | 93.88 ± 0.08 | 93.79 ± 1.36
solar-flare1 | 93.02 ± 4.29 | 94.81 ± 3.78 | 96.14 ± 2.70 | 97.84 ± 1.42 | 97.84 ± 1.42 | 97.56 ± 1.92
sonar | 67.71 ± 8.66 | 66.42 ± 9.47 | 66.42 ± 9.47 | 66.42 ± 9.47 | 67.22 ± 10.72 | 66.42 ± 9.47
soybean | 92.94 ± 2.92 | 92.94 ± 2.92 | 92.65 ± 2.88 | 92.68 ± 2.88 | 87.73 ± 3.33 | 92.94 ± 2.92
spectrometer | 42.06 ± 5.99 | 42.70 ± 5.80 | 42.70 ± 5.80 | 42.70 ± 5.80 | 41.92 ± 5.53 | 42.70 ± 5.80
sponge | 92.11 ± 9.16 | 89.79 ± 10.83 | 89.09 ± 10.79 | 92.75 ± 9.63 | 94.75 ± 6.45 | 93.64 ± 7.32
vote | 90.02 ± 3.91 | 90.00 ± 4.11 | 90.00 ± 4.11 | 90.02 ± 4.11 | 89.93 ± 4.13 | 90.02 ± 3.95
vowel | 66.79 ± 4.36 | 67.45 ± 4.70 | 67.42 ± 4.51 | 64.81 ± 5.11 | 58.49 ± 5.08 | 67.11 ± 4.54
zoo | 95.07 ± 5.86 | 95.07 ± 5.86 | 95.07 ± 5.86 | 95.16 ± 5.86 | 95.06 ± 5.87 | 95.07 ± 5.86
Average | 77.78 ± 5.0 | 78.12 ± 5.08 | 78.14 ± 5.07 | 77.18 ± 4.93 | 74.57 ± 4.42 | 78.30 ± 4.88
Table 4. Average number of selected attributes for the methods of variable selection on each dataset.
Dataset | IG 0.01 | IG 0.02 | IG 0.05 | IG 0.1 | IIG
anneal | 23.74 | 13.17 | 10.84 | 6.00 | 29.00
arrhythmia | 76.15 | 72.21 | 17.18 | 0.03 | 118.06
audiology | 34.70 | 20.63 | 11.22 | 8.14 | 55.26
autos | 22.50 | 20.45 | 19.05 | 11.60 | 22.51
breast-cancer | 6.13 | 4.93 | 3.50 | 0.00 | 5.46
cmc | 6.52 | 4.96 | 1.02 | 0.00 | 9.00
horse-colic | 15.73 | 12.98 | 7.91 | 2.16 | 16.76
german-credit | 9.74 | 5.48 | 1.02 | 0.12 | 14.20
diabetes | 7.06 | 6.22 | 4.11 | 1.02 | 7.06
glass2 | 4.97 | 4.97 | 4.97 | 3.62 | 4.97
hepatitis | 13.70 | 11.80 | 9.54 | 3.91 | 12.93
hypothyroid | 4.00 | 3.42 | 3.00 | 1.00 | 8.64
ionosphere | 32.42 | 32.42 | 32.35 | 32.10 | 32.42
japanese | 12.20 | 11.61 | 6.79 | 5.89 | 14.17
kr-vs-kp | 13.92 | 7.05 | 3.00 | 2.35 | 33.07
madelon | 9.25 | 7.37 | 0.33 | 0.00 | 17.60
mushroom | 20.00 | 19.00 | 13.15 | 12.69 | 21.00
musk1 | 134.43 | 134.43 | 85.54 | 16.83 | 134.43
musk2 | 165.14 | 161.60 | 144.24 | 81.82 | 166.00
segment | 16.16 | 16.00 | 15.00 | 15.00 | 18.00
semeion | 251.69 | 222.25 | 87.99 | 7.47 | 251.69
sick | 6.00 | 4.03 | 2.00 | 1.00 | 14.80
solar-flare1 | 5.92 | 3.62 | 0.00 | 0.00 | 1.02
sonar | 21.14 | 21.14 | 21.14 | 8.09 | 21.14
soybean | 34.94 | 30.78 | 27.81 | 15.64 | 35.00
spectrometer | 93.99 | 93.99 | 93.99 | 86.16 | 93.99
sponge | 34.18 | 29.90 | 11.30 | 3.78 | 4.68
vowel | 9.38 | 9.00 | 6.22 | 3.36 | 10.38
vote | 14.00 | 14.00 | 13.73 | 12.50 | 14.99
zoo | 16.00 | 15.47 | 13.60 | 12.89 | 15.71
Average | 37.19 | 33.83 | 22.38 | 11.84 | 40.13
Table 5. Average time in seconds for each method when it is applied on a pair of Training and Test sets.
NB | IG 0.01 | IG 0.02 | IG 0.05 | IG 0.1 | IIG
0.013 | 0.039 | 0.038 | 0.037 | 0.035 | 0.046
Table 6. Wilcoxon’s test on the accuracy of the algorithms. The symbol ○ indicates that the method in the column significantly improves the method in the row, and the symbol ● indicates that the method in the row significantly improves the method in the column. Cells above the diagonal correspond to a significance level of 0.90 and cells below the diagonal to a significance level of 0.95.
Algorithm | NB | IG 0.01 | IG 0.02 | IG 0.05 | IG 0.1 | IIG
NB | - | | | | |
IG 0.01 | | - | | | |
IG 0.02 | | | - | | |
IG 0.05 | | | | - | |
IG 0.1 | | | | | - |
IIG | | | | | | -

