Quadratic Mutual Information Feature Selection

We propose a novel feature selection method based on quadratic mutual information, which has its roots in the Cauchy-Schwarz divergence and Renyi entropy. The method uses direct estimation of quadratic mutual information from data samples using Gaussian kernel functions and can detect second-order non-linear relations. Its main advantages are: (i) unified analysis of discrete and continuous data, excluding any discretization; and (ii) its parameter-free design. The effectiveness of the proposed method is demonstrated through an extensive comparison with mutual information feature selection (MIFS), minimum redundancy maximum relevance (MRMR), and joint mutual information (JMI) on classification and regression problem domains. The experiments show that the proposed method performs comparably to the other methods on classification problems, except that it is considerably faster. On regression problems, it compares favourably to the others, but is slower.


Introduction
Modelling data using machine learning approaches usually involves taking some kind of learning machine (e.g., a decision tree, neural network, or support vector machine) and training a model using already known input and output data. For example, based on features collected about patients (gender, blood pressure, presence or absence of certain symptoms, etc.) and given the patients' diagnoses (the outputs), we can build a model and afterwards use it as a diagnostic tool for new patients. The input features and the output can be discrete (e.g., gender) or continuous (e.g., body temperature). If the output is discrete, we are dealing with a classification problem; if it is continuous, with a regression problem.
Many classification or regression problems involve high-dimensional input data. For example, gene expression data can easily reach into tens of thousands of features [1]. The majority of these features are either irrelevant or redundant for the given classification or regression task. A large number of features can lead to poor inference performance, possible over-fitting of the model, and increased training time [2].
To tackle these problems, feature selection algorithms try to select a smaller feature subset which is highly relevant to the output. There exists a great number of approaches to feature selection. We can divide them into three main groups: wrapper, embedded, and filter. The wrapper approach [3] uses the performance of a learning machine to evaluate the relevance of feature subsets. Wrappers usually achieve good performance, but can be computationally costly and infeasible for use on large data sets. Moreover, their performance depends on the learning machine used in the evaluation. The embedded approach integrates feature selection into the learning machine itself and performs selection implicitly during the training phase. This approach is faster [1], but still dependent on the learning machine. Filters are faster than both of the previous approaches and use a simple relevance criterion based on some measure, such as the correlation coefficient [4] or mutual information [5], to assess the goodness of a feature subset. The evaluation is independent of the learning machine and is less prone to over-fitting, but may fail to find the optimal feature subset for a given learning machine.
In addition to the relevance criterion, feature selection must also employ a search process which drives the selection. Optimally, an exhaustive search evaluates all possible feature subsets and selects the best one. This is usually computationally prohibitive, so greedy approaches like sequential search or random search [6] are used in practice.
In this work, we focus on the filter approach to feature selection and present a novel method based on quadratic mutual information. The method's criterion has its roots in the Cauchy-Schwarz divergence and quadratic Renyi entropy. Our motivation is the straightforward estimation of the Cauchy-Schwarz divergence [7] for discrete features, continuous features, or their combination, which makes the method suitable for use without any preprocessing dependent on expert knowledge about the data. Moreover, it avoids the use of parameters, which are inconvenient for non-experts in the field. It can be used as a precursor to classification and regression in order to avoid over-fitting and to improve the learning machine's performance.
The paper is organized as follows. Section 2 briefly reviews previous work on feature selection using information-theoretic measures and their generalizations. Section 3 presents the proposed measure and the search organization for the task of finding relevant features. Section 4 presents the experimental setting and the results obtained on classification and regression problems. Lastly, Section 5 gives conclusions and possible future research directions.

Related Work
Many information-theoretic feature selection methods have been proposed in the last two decades. Brown et al. [8] and Vergara et al. [2] unified most of them in the mutual information feature selection framework. There are also a few cases of using mutual information derived from Renyi [9] and Tsallis entropy [10] showing promising results. Chown and Huang [11] proposed the use of a data compression algorithm along with quadratic mutual information to perform feature selection, but their method is prone to over-fitting due to the estimation of the criterion in a high-dimensional space. Here we mention a few of the most well-known information-theoretic measures and criteria reviewed in [8], since they are all based on similar ideas.

Information-Theoretic Measures
Information-theoretic measures offer means to rank feature subsets according to the information they provide about the output. A finite set of features $X = \{X_1, \ldots, X_N\}$, which can acquire values $x_1, \ldots, x_{m_1}$ with probabilities $p_1(x_1), \ldots, p_1(x_{m_1})$, has the Shannon entropy

$$H(X) = -\sum_{i=1}^{m_1} p_1(x_i) \log p_1(x_i).$$

Similarly, we can calculate the entropy of the output $H(Y)$ given the possible values $Y = \{y_1, \ldots, y_{m_2}\}$ with probabilities $p_2(y_1), \ldots, p_2(y_{m_2})$, and the joint entropy $H(X, Y)$ given the joint probabilities $p_{12}(x_i, y_j)$.
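As an illustration, the Shannon entropy of a discrete distribution can be computed directly from its probability vector. This is a minimal sketch; the function name and the choice of base-2 logarithm (entropy in bits) are ours, not from the paper:

```python
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), skipping zero-probability values."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]  # 0 * log 0 is taken as 0 by convention
    return float(-np.sum(p * np.log2(p)))

# A fair coin carries exactly one bit of uncertainty; a certain outcome carries none.
h_coin = shannon_entropy([0.5, 0.5])
h_certain = shannon_entropy([1.0])
```
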
Another important information-theoretic measure is the Kullback-Leibler divergence (KL), which measures the discrepancy between two probability distributions $p$ and $q$:

$$D_{KL}(p \,\|\, q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)}.$$

The Kullback-Leibler divergence between the joint probability distribution $p_{12}(x, y)$ and the product distribution $p_1(x) p_2(y)$ is the mutual information (MI):

$$I(X; Y) = \sum_{i=1}^{m_1} \sum_{j=1}^{m_2} p_{12}(x_i, y_j) \log \frac{p_{12}(x_i, y_j)}{p_1(x_i)\, p_2(y_j)}.$$

We can usually estimate the probability distributions using one of the histogram-based methods. When the features are continuous, one option is to apply a discretization step beforehand (equal width binning, equal frequency binning) [12]. The manual selection of the number of bins can affect the estimation of MI and can lead to spurious results by shrouding some properties of the probability distribution. A better approach is to perform the discretization using an adaptive technique like minimum description length (MDL) [13], but this does not work for a continuous output and thus cannot be used in a regression problem.
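The definition of MI as a KL divergence translates directly into code. The sketch below (our own illustration, not the paper's implementation) takes a joint probability table and returns MI in bits:

```python
import numpy as np

def mutual_information(p_xy):
    """MI as the KL divergence between the joint p(x,y) and p(x)p(y), in bits."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal over y
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal over x
    mask = p_xy > 0                         # skip zero-probability cells
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x * p_y)[mask])))

# Independent binary variables: the joint factorizes, so MI is zero.
mi_indep = mutual_information([[0.25, 0.25], [0.25, 0.25]])
# Perfectly dependent binary variables: MI equals one bit.
mi_dep = mutual_information([[0.5, 0.0], [0.0, 0.5]])
```
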
To avoid discretization, we can compute the differential mutual information directly from continuous data using the differential Kullback-Leibler divergence

$$I(X; Y) = \iint p_{12}(x, y) \log \frac{p_{12}(x, y)}{p_1(x)\, p_2(y)} \, dx \, dy,$$

but we must estimate the probability density functions $p_1(x)$, $p_2(y)$, and $p_{12}(x, y)$ beforehand.
The non-parametric Parzen-window method [14] is the most straightforward approach to density function estimation. The estimate is obtained by spanning kernel functions around the data samples,

$$\hat{p}(x) = \frac{1}{n} \sum_{i=1}^{n} \prod_{d=1}^{D} \frac{1}{h_d} K\!\left(\frac{x_d - x_{i,d}}{h_d}\right),$$

where $D$ is the size of the feature set. The estimate depends on the choice of the kernel width $h_d$, for which there are several recipes in the literature [15]. However, the numerical computation of differential MI for a set of features is computationally quite expensive and prone to error. Another approach to differential MI estimation is the k-nearest neighbors (kNN) estimator [16] of MI, which in certain situations provides better results than the Parzen window, but is still computationally expensive and not suitable for direct use on data sets comprised of both discrete and continuous data [17]. A more recent approach is to estimate the density ratio in (4) directly. However, due to the logarithm in (4), this approach becomes computationally expensive and susceptible to outliers [18]. To alleviate this problem, the authors of [18] propose a squared-loss mutual information measure which makes the computation more robust.
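A minimal one-dimensional Parzen-window estimate with a Gaussian kernel might look as follows. This is an illustrative sketch only; the multivariate product-kernel form above reduces to this for a single feature:

```python
import numpy as np

def parzen_density(x_query, samples, h):
    """Parzen-window (KDE) estimate of a 1-D density using a Gaussian kernel of width h."""
    samples = np.asarray(samples, dtype=float)
    diffs = (x_query - samples) / h
    kernels = np.exp(-0.5 * diffs**2) / (h * np.sqrt(2 * np.pi))
    return float(kernels.mean())  # average of kernels centred on the samples

rng = np.random.default_rng(0)
data = rng.standard_normal(5000)
# The estimate at 0 should be close to the true N(0,1) density 1/sqrt(2*pi) ~ 0.3989
# (slightly biased downward by kernel smoothing).
p0 = parzen_density(0.0, data, h=0.3)
```
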
Besides the classical Shannon entropy, there exists a range of information entropy generalizations [19]. One of the more widely known is the Renyi entropy [20], which extends the original concept by introducing an additional parameter $q$:

$$H_{R_q}(X) = \frac{1}{1 - q} \log \sum_{i=1}^{m_1} p_1(x_i)^q.$$

It should be noted that Renyi entropy converges to Shannon entropy as $q$ approaches 1 in the limit. Renyi also defined the differential Renyi entropy, where the integral $\int p_1(x)^q \, dx$ substitutes the sum $\sum_{i=1}^{m_1} p_1(x_i)^q$ in (6). Usually, the estimation of differential entropy includes a probability density function (PDF) estimation from the data, followed by the integral estimation from the PDF, which is challenging in high-dimensional problems. Erdogmus et al. [21] showed that quadratic Renyi entropy ($q = 2$) can be directly estimated from the data, bypassing the explicit need to estimate the PDF. Namely, the information potential $V(X) = \int p_1(x)^2 \, dx$ can be estimated as

$$\hat{V}(X) = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} G_{h\sqrt{2}}(x_i - x_j),$$

replacing the numerical integration of the PDF with sums of Gaussian kernels over the data samples. The Renyi differential quadratic entropy estimator thus becomes

$$\hat{H}_{R_2}(X) = -\log \hat{V}(X).$$

There exist many proposals on how to compute mutual information with regard to Renyi entropy, but each lacks some of the properties that the Shannon mutual information exhibits [22]. One of the proposed measures, the Cauchy-Schwarz divergence by Principe et al. [7], is especially suitable as a substitute for the Kullback-Leibler divergence, as it enables assessment of the dependence between variables directly from the data samples:

$$D_{CS}(p \,\|\, q) = \log \frac{\int p(x)^2 \, dx \, \int q(x)^2 \, dx}{\left(\int p(x)\, q(x) \, dx\right)^2}.$$

By rearranging the above equation, we obtain

$$D_{CS}(p \,\|\, q) = 2 H_{R_2}(p; q) - H_{R_2}(p) - H_{R_2}(q),$$

where the first term $H_{R_2}(p; q)$ is the quadratic Renyi cross-entropy [7], which can be directly estimated from the data using a similar approach as in the case of $\hat{H}_{R_2}(X)$. On the basis of the Cauchy-Schwarz divergence (10), Principe et al.
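The information-potential estimator lends itself to a few lines of code. The sketch below (our illustration; variable names are ours) estimates the quadratic Renyi entropy of a one-dimensional sample by summing Gaussian kernels of width $h\sqrt{2}$ over all sample pairs:

```python
import numpy as np

def quadratic_renyi_entropy(x, h):
    """Quadratic Renyi entropy estimated directly from 1-D samples:
    V_hat = (1/n^2) * sum_ij G(x_i - x_j; sqrt(2)*h), H_R2 = -log(V_hat).
    The sqrt(2)*h width arises from the convolution of two kernels of width h."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.sqrt(2.0) * h
    diffs = x[:, None] - x[None, :]
    g = np.exp(-0.5 * (diffs / s) ** 2) / (s * np.sqrt(2 * np.pi))
    v = g.sum() / n**2  # information potential, replaces the integral of p^2
    return float(-np.log(v))

rng = np.random.default_rng(1)
x = rng.standard_normal(3000)
# For N(0,1), the true quadratic Renyi entropy is log(2*sqrt(pi)) ~ 1.2655.
ent = quadratic_renyi_entropy(x, h=0.3)
```
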
[7] proposed quadratic mutual information (QMI) as a candidate for measuring dependence:

$$I_{CS}(X; Y) = D_{CS}\big(p_{12}(x, y) \,\|\, p_1(x)\, p_2(y)\big).$$

They prove that $I_{CS}(X; Y) = 0$ if and only if $X$ and $Y$ are independent of each other, and positive otherwise, similarly to the Kullback-Leibler divergence.
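A direct sample-based estimator of $I_{CS}$ can be sketched as follows, assuming the usual plug-in form built from three information potentials (joint, marginal, and cross). This is our illustration, not the authors' code; the kernel width is fixed rather than chosen by a rule:

```python
import numpy as np

def gaussian_gram(v, h):
    """Pairwise Gaussian kernel values G(v_i - v_j) with width h."""
    d = np.asarray(v, float)[:, None] - np.asarray(v, float)[None, :]
    return np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))

def qmi_cs(x, y, h=0.5):
    """Sample estimate of I_CS = log(V_J * V_M / V_C^2): near zero for
    independent variables, positive under dependence."""
    gx, gy = gaussian_gram(x, h), gaussian_gram(y, h)
    v_j = np.mean(gx * gy)                             # joint information potential
    v_m = np.mean(gx) * np.mean(gy)                    # product of marginal potentials
    v_c = np.mean(gx.mean(axis=1) * gy.mean(axis=1))   # cross potential
    return float(np.log(v_j * v_m / v_c ** 2))

rng = np.random.default_rng(2)
a = rng.standard_normal(1000)
b = rng.standard_normal(1000)             # independent of a
c = a + 0.1 * rng.standard_normal(1000)   # strongly dependent on a
qmi_indep = qmi_cs(a, b)
qmi_dep = qmi_cs(a, c)
```
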

Information-Theoretic Feature Selection Methods
Given a set of already-selected features $X_S = \{X_1, \ldots, X_M\}$ and a set of candidate features $X_C = \{X_{M+1}, \ldots, X_N\}$, Battiti [23] proposed to compute a mutual information feature selection criterion (MIFS) for each candidate feature $X_c$ and add the feature with the maximum value to the set of already selected features:

$$S_{MIFS}(X_c) = I(X_c; Y) - \beta \sum_{X_s \in X_S} I(X_c; X_s).$$

The criterion is a heuristic which takes into account first-order relevance $I(X_c; Y)$ and first-order redundancy $I(X_c; X_s)$.
It includes the parameter $\beta$, which greatly affects performance [24]. Peng et al. [25] improved on the MIFS idea and proposed the minimum redundancy maximum relevance criterion (MRMR), which uses MIFS with an automatic setting of the parameter, $\beta = 1/|X_S|$:

$$S_{MRMR}(X_c) = I(X_c; Y) - \frac{1}{|X_S|} \sum_{X_s \in X_S} I(X_c; X_s).$$

MRMR avoids using parameters, but still considers only first-order interactions.
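A greedy MRMR loop with a plug-in MI estimator can be sketched as follows (our illustration, not the reference implementation). The toy data are constructed so that a duplicated relevant feature is skipped in favour of a complementary one:

```python
import numpy as np

def discrete_mi(x, y):
    """Plug-in MI estimate (in nats) for two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def mrmr(features, y, k):
    """Greedy MRMR: maximize relevance I(X_c; Y) minus mean redundancy with selected features."""
    candidates, selected = list(range(len(features))), []
    while len(selected) < k:
        def score(c):
            relevance = discrete_mi(features[c], y)
            redundancy = (np.mean([discrete_mi(features[c], features[s]) for s in selected])
                          if selected else 0.0)
            return relevance - redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(3)
a = rng.integers(0, 2, 500)
b = rng.integers(0, 2, 500)
y = 2 * a + b                  # output determined by a and b together
f0, f1, f2 = a, a.copy(), b    # f1 duplicates f0; f2 is complementary, not redundant
order = mrmr([f0, f1, f2], y, k=2)
```

Because the second step penalizes the duplicate's full redundancy with the first pick, the complementary feature wins the second slot.
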
Yang and Moody [26] used joint mutual information (JMI) as a criterion for feature selection:

$$S_{JMI}(X_c) = \sum_{X_s \in X_S} I(X_c, X_s; Y).$$

This criterion considers second-order interactions between features and the output, thus increasing computational costs on one hand, but on the other hand also allowing detection of features which, when taken in pairs, provide more information about the output than the sum of both features' individual contributions.
Several methods have been developed which go beyond second-order interactions [27][28][29]. The joint search for multiple features is difficult, as multidimensional probability distributions are hard to estimate, and it becomes especially problematic when the number of samples is small. However, this is a favourable approach when searching for a small number of features, as some subtle interactions can be revealed. When using filter methods as a pre-processing stage for a machine learning task, it is usually better to select more features, giving the learning machine more options to choose from and possibly allowing it to find higher-order interactions during the learning phase [30].
These methods are usually used on discrete or discretized data for classification problems. Frenay et al. [31] examined the adequacy of MI for feature selection in regression tasks and argue that in most cases it is a suitable criterion. However, whether feature selection is a precursor to a classification or a regression task, most problems arise from the difficulty of estimating the MI.

The Proposed Method
The quadratic mutual information (12) serves as the basis for our feature selection method, because it can be computed directly from the data samples and works for both discrete and continuous features. Optimally, the method would assess every possible subset of feature candidates and select the subset with the maximum QMI. However, evaluating all possible subsets of features is prohibitively time-consuming. Another problem is that the estimation of $I_{CS}$ is prone to over-fitting, especially if the number of samples is not much larger than the number of features in the subset. This is a common problem in machine learning when dealing with high-dimensional data. To cope with it, feature selection methods usually rank or select features iteratively, one by one. Even if the features are added to the relevant set one by one, it is still important to consider possible interactions between them, to prevent adding redundant features and to include those that are not informative about the output on their own but are useful when taken together with other features.
The proposed method (Algorithm 1) selects features iteratively until it reaches a stopping criterion: the number of features we want to select. At each step, the algorithm considers all possible candidates from the set of candidate features $X_C$. It checks each candidate feature $X_c$ against the features $X_s \in X_S$ already selected in the previous steps using the criterion

$$S_{QMIFS}(X_c) = \sum_{X_s \in X_S} \big[ I_{CS}(X_c, X_s; Y) - I_{CS}(X_c; X_s) \big],$$

and adds the candidate feature $X_c$ with the maximum $S_{QMIFS}$ to the set of already selected features. In the beginning, $X_S$ is empty, so the algorithm considers only the quadratic mutual information between candidates and the output. In later steps, the criterion function (16) is composed of sums of pairs of terms. The first term rewards the candidate features that are the most informative about the output when taken along with an already selected feature. The second term penalizes the features that have a strong correlation with already selected features. On one hand, this ensures the detection of features which work better in pairs: they provide more information about the output when taken together than the sum of both features' individual contributions. On the other hand, it avoids selecting redundant features, whose information about the output is already present in one of the selected features. Extending the criterion to include higher-order interactions between features is possible, but it considerably increases the computational time and is more prone to over-fitting.
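Under the assumption that criterion (16) has the pairwise form described above (joint QMI with the output rewarded, inter-feature QMI penalized), the greedy loop of Algorithm 1 can be sketched as follows. The helper names and the fixed kernel width h = 1 on standardized data are our choices; normalization constants of the Gaussian kernel cancel inside $I_{CS}$ and are omitted:

```python
import numpy as np

def gram(v, h=1.0):
    """Pairwise Gaussian kernel matrix (unnormalized; constants cancel in I_CS)."""
    d = np.asarray(v, float)[:, None] - np.asarray(v, float)[None, :]
    return np.exp(-0.5 * (d / h) ** 2)

def i_cs(gx, gy):
    """Cauchy-Schwarz QMI estimate from precomputed kernel matrices."""
    v_j = np.mean(gx * gy)                             # joint potential
    v_m = np.mean(gx) * np.mean(gy)                    # marginal potential
    v_c = np.mean(gx.mean(axis=1) * gy.mean(axis=1))   # cross potential
    return float(np.log(v_j * v_m / v_c ** 2))

def qmifs(features, y, k):
    """Greedy QMIFS sketch; the product of two 1-D kernel matrices acts as the
    kernel matrix of the joint 2-D variable (candidate, selected)."""
    std = lambda v: (v - np.mean(v)) / np.std(v)       # standardize, as the method requires
    gy = gram(std(np.asarray(y, float)))
    grams = [gram(std(np.asarray(f, float))) for f in features]
    selected, candidates = [], list(range(len(features)))
    while len(selected) < k:
        def score(c):
            if not selected:   # first step: plain QMI with the output
                return i_cs(grams[c], gy)
            # assumed form of (16): reward joint QMI with Y, penalize redundancy
            return sum(i_cs(grams[c] * grams[s], gy) - i_cs(grams[c], grams[s])
                       for s in selected)
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(10)
n = 400
x0, x1, x2 = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
x3 = x0 + 0.01 * rng.standard_normal(n)      # near-duplicate of x0 (redundant)
y = x0 + x1 + 0.1 * rng.standard_normal(n)   # output depends on x0 and x1 only
sel = qmifs([x0, x1, x2, x3], y, k=2)
```

The selection should pair one copy of the x0 signal with the complementary x1, skipping both the noise feature and the redundant duplicate.
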
There are a few considerations we must take into account before using this method to select features. Firstly, the estimation of $I_{CS}$ depends heavily on the kernel width $h$ [7]. The Silverman rule [32] is a common way to estimate it, but the width $h_d$ must be the same across all features. If this is neglected, the value of the criterion function will vary even if all candidate features are equally relevant to the output [7], and the method will fail to choose the correct ones. We take care of this problem by standardizing the data, which in turn causes the Silverman rule to produce the same $h_d$ for every feature. Secondly, the magnitude of $I_{CS}$ has no meaning [7], due to the dependence on the choice of window width. However, correct identification of the most relevant features requires only the relative differences among them. That is, given two features $X_a$, $X_b$, and the output, and knowing that feature $X_a$ is more informative about the output than $X_b$, the $S_{QMIFS}$ estimate is acceptable when $S_{QMIFS}(X_a) > S_{QMIFS}(X_b)$. The following small-scale experiment nicely presents some of the important properties of the proposed criterion.
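The standardization argument can be checked in a few lines: Silverman's rule-of-thumb width is proportional to the sample standard deviation, so standardizing every feature forces a common width. This is a sketch; the $1.06\,\sigma\, n^{-1/5}$ constant is the common Gaussian-kernel variant of the rule:

```python
import numpy as np

def silverman_h(x):
    """Silverman's rule-of-thumb kernel width for a 1-D sample: h = 1.06 * sigma * n^(-1/5)."""
    x = np.asarray(x, float)
    return 1.06 * x.std() * len(x) ** (-0.2)

rng = np.random.default_rng(4)
f1 = rng.normal(0, 1, 1000)    # unit-scale feature
f2 = rng.normal(5, 40, 1000)   # very different scale gives a very different width
h1, h2 = silverman_h(f1), silverman_h(f2)

# After standardization, both features receive the same width, as QMIFS requires.
z1 = (f1 - f1.mean()) / f1.std()
z2 = (f2 - f2.mean()) / f2.std()
hz1, hz2 = silverman_h(z1), silverman_h(z2)
```
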
We generate correlated data composed of two features $X_s$, $X_c$, and an output $Y$. All three are continuous, with 2000 samples drawn from a normal distribution with zero mean and unit variance. We assume that feature $X_s$ is already in the set of selected features and treat $X_c$ as the current candidate. Figure 1a shows how $S_{QMIFS}(X_c)$ changes while keeping the correlation corr($X_c$, $X_s$) fixed at 0.1 and corr($X_s$, $Y$) at 0.6, and varying the correlation between $X_c$ and the output $Y$ from 0 to 1. As the correlation increases, $S_{QMIFS}(X_c)$ also increases, but non-linearly. This behaviour is expected, since correlation is not directly comparable to quadratic mutual information. Figure 1b shows the opposite: how increasing the correlation between features affects the criterion value. We fix the correlations corr($X_c$, $Y$) and corr($X_s$, $Y$) at 0.6 and vary corr($X_c$, $X_s$) from 0 to 1. The result shows that $S_{QMIFS}$ penalizes redundant features: the higher the redundancy (represented here as inter-feature correlation), the lower the criterion value. These findings demonstrate that $S_{QMIFS}$ exhibits the aforementioned property of guaranteeing the correct ordering of features.
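The experimental setup can be reproduced by sampling from a trivariate normal distribution with a prescribed correlation matrix. This sketches the data-generation step only; the correlation values below pick one point of the sweep in Figure 1a:

```python
import numpy as np

def correlated_triple(r_cs, r_cy, r_sy, n=2000, seed=5):
    """Draw (X_s, X_c, Y) standard-normal samples with the given pairwise correlations."""
    cov = np.array([[1.0, r_cs, r_sy],    # variable order: X_s, X_c, Y
                    [r_cs, 1.0, r_cy],
                    [r_sy, r_cy, 1.0]])
    rng = np.random.default_rng(seed)
    sample = rng.multivariate_normal(np.zeros(3), cov, size=n)
    return sample[:, 0], sample[:, 1], sample[:, 2]

# Setting of Figure 1a: corr(X_c, X_s) = 0.1, corr(X_s, Y) = 0.6, with
# corr(X_c, Y) = 0.4 chosen here as one point of the sweep.
xs, xc, y = correlated_triple(r_cs=0.1, r_cy=0.4, r_sy=0.6)
emp = np.corrcoef(np.vstack([xs, xc, y]))
```
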
The authors of [33] present an efficient approach to speeding up the computation of $I_{CS}$ with negligible loss of precision. The basic algorithm for computing $I_{CS}$ has a time complexity of $O(n^2)$. They use a greedy incomplete Cholesky decomposition algorithm to achieve a computational complexity of $O(nd^2)$, where $d$ depends on the data. This approach is useful only when $d^2 < n$. In their work, they achieve substantial time savings on common data sets, so we adopted their approach in the computation of $I_{CS}$.
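A pivoted incomplete Cholesky factorization of a kernel Gram matrix can be sketched as follows (our illustration of the general technique, not the exact algorithm of [33]). The rank d adapts to the data through the trace tolerance, which is what makes the O(nd^2) cost data-dependent:

```python
import numpy as np

def incomplete_cholesky(kernel_col, diag, tol=1e-6, max_rank=None):
    """Greedy pivoted incomplete Cholesky of a PSD kernel matrix: K ~ L @ L.T.
    kernel_col(i) returns column i of K; diag holds the diagonal of K.
    Stops when the residual trace falls below tol, so the rank adapts to the data."""
    n = len(diag)
    max_rank = max_rank or n
    d = np.asarray(diag, float).copy()     # residual diagonal
    L = np.zeros((n, 0))
    while L.shape[1] < max_rank and d.sum() > tol:
        i = int(np.argmax(d))              # pivot on the largest residual
        col = kernel_col(i) - L @ L[i]     # residual of column i
        new = (col / np.sqrt(col[i]))[:, None]
        L = np.hstack([L, new])
        d = np.clip(d - new[:, 0] ** 2, 0.0, None)
    return L

rng = np.random.default_rng(6)
x = np.sort(rng.standard_normal(300))
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # Gaussian Gram matrix
L = incomplete_cholesky(lambda i: K[:, i], np.ones(300))
err = np.abs(K - L @ L.T).max()
```

For a smooth Gaussian kernel the eigenvalues decay rapidly, so the factorization reaches the tolerance at a rank far below n.
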

Results and Discussion
For our experiments, we use ten data sets; nine are from the UCI machine learning repository [34], and one is from a company which deals with web advertisement placement. To compare the methods over a wide variety of scenarios, we chose the data sets so that some include only discrete data, some only continuous, and some mixed. The experiments cover two problem domains: one dealing with classification and the other with regression. Table 1 briefly summarizes the data sets. For each data set, it lists the number of instances, the number of features and their types, the type of the output, and the problem domain.

Experimental Methodology
We compare our method, QMIFS, to three other common and comparable methods which use an information-theoretic approach to feature selection: MIFS with β = 1, MRMR, and JMI. These three methods all need discretization of the continuous features before use. The results are obtained using Matlab R2016a running on an Intel i7-6820HQ processor (Intel, Santa Clara, CA, USA) with 16 GB of main memory.
A classification tree from the Matlab Statistics and Machine Learning Toolbox serves as the indirect performance evaluation tool in the classification problem domain. The MDL discretization procedure from WEKA [35], which promises better results than the usual approach of equal frequency or equal width binning [12], acts as the preprocessing step where needed. We evaluate the performance of the methods using the classification accuracy (CA), the area under the curve (AUC), the Youden index (Y-index), i.e., the difference between the true positive rate (TPR) and the false positive rate (FPR) calculated at the optimal receiver operating characteristic (ROC) point, and the execution time.
In the regression problem domain, we assess the performance using the regression tree from the Matlab Statistics and Machine Learning Toolbox and measure the root-mean-square error (RMSE) along with the execution time. As the output is continuous, MDL discretization is not applicable. Instead, equal frequency binning is used, with five bins for every feature and output. Equal frequency binning usually works better than equal width binning [12], and the empirical evidence from experimenting with MDL discretization shows that the number of bins per feature is typically between three and seven.
In both problem domains, one thousand hold-out validations are performed on each data set. Each time, two thirds of randomly sampled instances act as the training set to build the model, and the rest serve as the validation set to measure the performance. For each method, we vary the number of selected features (3, 5, 7, or 10) and compare the results against the baseline performance, where all features are used to train the model.
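The validation protocol can be sketched as a repeated random split. This is our illustration only; the model-fitting and scoring callbacks below use a trivial majority-class stand-in rather than the paper's classification tree:

```python
import numpy as np

def holdout_scores(model_fit, model_score, X, y, n_rounds=1000, train_frac=2/3, seed=7):
    """Repeated hold-out validation: each round trains on a random two-thirds
    of the instances and scores on the remaining third."""
    rng = np.random.default_rng(seed)
    n = len(y)
    n_train = int(round(train_frac * n))
    scores = []
    for _ in range(n_rounds):
        perm = rng.permutation(n)
        tr, va = perm[:n_train], perm[n_train:]
        model = model_fit(X[tr], y[tr])
        scores.append(model_score(model, X[va], y[va]))
    return np.array(scores)

# Toy check with a hypothetical majority-class "model" on an 80/20 class balance.
rng = np.random.default_rng(8)
X = rng.standard_normal((300, 4))
y = (rng.random(300) < 0.8).astype(int)
fit = lambda Xt, yt: int(np.mean(yt) >= 0.5)       # predict the constant majority class
score = lambda m, Xv, yv: float(np.mean(yv == m))  # classification accuracy
acc = holdout_scores(fit, score, X, y, n_rounds=50)
```
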
To get a clearer representation of the results in both problem domains, we rank the methods according to the measures CA, AUC, Y-index, and RMSE. Each method obtains a rank from 1 (best) to 4 (worst). Ranked values get the same (average) rank if their 95% confidence intervals overlap.

Classification Performance
Table 2 shows which features are selected by each method, and Table 3 summarizes the ranking of the methods for each test scenario for the measures CA, AUC, and Y-index, together with their average ranks. The ranks imply that all three measures behave similarly, which is expected, since the data sets are well balanced with respect to the number of class values. Table 4 gives a more detailed insight into the performance of the methods for seven selected features. It includes only the maximum standard error of the performance indexes, as the standard errors across different methods are practically the same. Additionally, Figure 2 reveals how different numbers of selected features affect the performance in terms of CA.
Chess data set: The baseline performs better in this case; it seems the learning machine can handle all 36 features. CA drops by about 0.03 after reducing the number of features to seven, and all the methods show similar behaviour in prioritizing features. According to Tables 3 and 4, our method is better than the others when selecting five, seven, and ten features, with regard to all three performance indexes. The time measurements in Table 4 show that it is also at least seven times faster at selecting seven features.
Breast Cancer data set: In this case, the classification tree benefits from feature selection, even with only three features selected. Table 3 shows that all methods perform similarly, the largest discrepancy among them being at five selected features, where JMI overcomes the others in all three performance indexes. Again, QMIFS is the fastest method, with a three-to-four times lower running time.
Ionosphere data set: All methods improve the performance compared to the baseline. Our method does not perform very well in terms of CA, AUC, or Y-index, even though Table 2 shows that five of its seven selected features are the same as in the best performing method, JMI. It ranks second at three selected features, but then falls behind when selecting more of them. However, in terms of execution time, it is again three-to-four times faster.
Sonar data set: Only a few features are common to all the methods, so the performance varies substantially between them. At three selected features, JMI and QMIFS work best and offer similar performance, having the same AUC ranks, with CA and Y-index being worse for QMIFS. At five features, CA improves for all methods but is overall still worse than the baseline. Using seven features selected by MIFS, JMI, or QMIFS offers a considerable improvement over the baseline (3% better CA). The methods achieve the same ranks, since the differences between them are small, causing the confidence intervals to overlap. At ten selected features, all the methods offer an improvement over the baseline, with JMI and MRMR having 2% better CA than QMIFS and MIFS. Our method again has the lowest execution time when selecting seven features.
Wine data set: Feature selection improves performance in comparison to the baseline, even though there are only 11 features in the data set. All methods select similar features, which manifests in similar performance. This can be seen in the rankings and in Table 4. QMIFS achieves the best ranks when selecting five, seven, or ten features, and is also at least 1.5 times faster than the other three.
Table 4 reveals that CA, AUC, and Y-index behave similarly because the data sets used are well balanced in terms of class values. In all cases except the Chess data set, the classification tree benefits from the feature selection, with a 0.01-0.03 increase in CA. The differences between methods in terms of CA, AUC, and Y-index are small; the relative difference is mostly less than 1%. The execution times clearly show that our method is the fastest. Due to the similarity of the first-order methods MRMR and MIFS, their execution times are equal and smaller in comparison to the second-order method (JMI). Even though the time measurements are given only for seven selected features, the behaviour is similar in all test cases. What causes the other methods to be considerably slower is the MDL discretization performed beforehand, which produces a large amount of computational overhead.
Overall, QMIFS offers performance similar to the other methods in terms of CA, AUC, and Y-index. Its average ranks shown in Table 3 across all data sets and numbers of selected features are 2.4/2.3/2.4, placing it somewhere in the middle: better than MIFS and MRMR, but lagging behind JMI. The subtle differences in the rankings can be attributed to the fact that both QMIFS and JMI are second-order methods and can detect some more peculiar relations between features. The difference between QMIFS and JMI could be attributed to the superiority of MDL discretization over the direct estimation used in QMIFS.

Regression Performance
Table 5 shows which features are selected by each method, and Table 6 summarizes the ranking of the methods for each test scenario along with the average ranks. Table 7 and Figure 3 show the RMSE for each method and data set. Additionally, execution times are presented in Table 7.
Communities data set: Table 7 shows that the methods improve RMSE in all cases, even if we use only three features to train the model. This is expected, since the number of features in the data set is quite large (100) and difficult for the learning machine to tackle. Our method ranks last when selecting three or five features, but improves afterwards, with RMSE comparable to the other three methods (second and third best at seven and ten selected features, respectively). The selected features vary much more across methods on this data set, owing to the fact that there are many input features to begin with. Our method is slower than the other three by a factor of 1.5-3.
Parkinson Telemonitoring data set: There is only a small gain in performance from using at least seven features chosen by MRMR. The top three ranking features across all the methods are very similar, with only JMI offering 6-8% lower RMSE in comparison to the others. However, our method performs equally well as JMI for five and more features. The execution times are comparable, with the first-order methods being faster, as expected.
Wine Quality data set: In some cases, feature selection offers an improvement in the regression performance, even though the total number of features in the data set is only 11. Overall, our method and MRMR are superior to MIFS and JMI, selecting similar features and offering an improvement over the baseline. The execution times behave similarly as in the previous case.
Housing data set: The baseline performs better here for the most part, but there are only 13 features in the data set, so the learning machine does not have a difficult task in training the model. Only our method shows a small (2%) performance benefit compared to the baseline when using the top ten features, and it achieves the best overall performance among the four methods, with an average rank of 1.5. Execution times are roughly 30% higher for the second-order methods.
Web Advertisement data set: Our method improves the model's performance dramatically compared to the baseline and the other feature selection methods, which all exhibit similar behaviour. The number of input features in the data set is large enough to pose a difficult task to the learning machine, so it benefits considerably from feature selection, at least when QMIFS is used. However, our method is much slower than the other three methods, by a factor of 3-6.
In terms of average RMSE ranks, our method outperforms the other three, achieving a value of 2.1 across all test cases. JMI and MRMR are tied for second place, with average ranks of 2.4 and 2.5; MIFS lags behind, with an average rank of 3.1. These results suggest that without the possibility of using MDL to discretize the data, the other methods fall behind our approach. There are probably not many higher-order relations in the data, since JMI is comparable to MRMR in terms of overall performance. Evidently, the way in which the underlying probability densities are estimated has a higher impact on the performance than the order of the method. We believe that QMIFS better distinguishes relations in the data than the ad-hoc binning used in the other three methods.
Due to the greater variety of dependent-variable values in the regression problem domain, the incomplete Cholesky decomposition is less effective, leading to longer execution times for our method. This is especially obvious in the Communities and Web Advertisement data sets. Additionally, equal frequency binning causes much less computational overhead than MDL discretization, so MIFS, MRMR, and JMI outperform QMIFS in execution time.

Conclusions
In this paper, we propose a quadratic mutual information feature selection method (QMIFS). Our goal was to detect second-order non-linear relations between features and the output, similarly to joint mutual information. Additionally, we focused on the analysis of both discrete and continuous features and outputs, avoiding the intermediate step of estimating the underlying probability density functions using histograms or kernel density estimation. To achieve these goals, we employed the quadratic mutual information measure, as it enables direct estimation from the data samples. The measure itself does not exhibit all the properties intrinsic to the mutual information measure, and our method was therefore designed to compensate for these deficiencies.
We compare our method to three other methods based on information-theoretic measures: mutual information feature selection (MIFS), minimum redundancy maximum relevance (MRMR), and joint mutual information (JMI). The methods are compared indirectly: on the classification problem domain using models built by a classification tree learning machine, and on the regression problem domain using a regression tree learning machine. The results show that our method offers performance similar to the other three on the classification problem domain in terms of classification accuracy, area under the curve, and Youden index, but is considerably faster. When dealing with regression, it compares favourably to the others regarding root-mean-square error, but is slower.
We conclude that our method is universal, capable of feature selection in both classification and regression problem domains. QMIFS does not need an additional preprocessing step to estimate the probability density function, as is the case with the other three methods. This, and the fact that it avoids using parameters, makes it simple to use for non-experts in the field. Experiments show that straightforward estimation of QMI from data samples using quadratic Renyi entropy and Gaussian kernels does a good job of identifying the important information in the data. Additionally, it offers considerable execution time savings compared to other feature selection methods coupled with advanced discretization techniques like MDL.
Future research should go towards finding better estimators for the kernel width, which strongly affects the estimation of QMI. Other potential measures could also be investigated for compatibility with our approach. Moreover, the computational cost of QMI and other potential measures can be further reduced by using the fast Gauss transform, as proposed in [7].

Algorithm 1 :
Quadratic mutual information feature selection (QMIFS).
Data: set of candidate features X_C and output Y.
Result: set of selected feature indices S.
Standardize X_C and Y.

Figure 1 .
Figure 1. Properties of the quadratic mutual information feature selection (QMIFS) criterion: (a) relevance of feature X_c as its correlation with the output Y increases; and (b) redundancy of feature X_c as its inter-feature correlation with the already selected feature X_s increases.

Figure 2 .
Figure 2. Performance of feature selection methods in the classification problem domain, given in terms of the classification accuracy of a classification tree. Higher values mean better performance. JMI: joint mutual information; MIFS: mutual information feature selection; MRMR: minimum redundancy maximum relevance; QMIFS: quadratic MIFS.
Table 3. Classification problem domain: ranking of feature selection methods. Ranks calculated from the measures classification accuracy (CA), area under the curve (AUC), and Y-index (the difference between the true positive rate, TPR, and the false positive rate, FPR) are presented as triplets CA/AUC/Y-index.

Figure 3 .
Figure 3. Performance of feature selection methods in the regression problem domain, given in terms of the RMSE achieved by a regression tree. Lower values indicate better performance.

Table 1 .
Properties of the data sets used. All but the Web Advertisement data set are from the UCI collection.

Table 2 .
Classification problem domain: selected features.

Table 5 .
Regression problem domain: selected features.

Table 6 .
Regression problem domain: ranking of feature selection methods.

Table 7 .
Regression problem domain: values of the root-mean-square error (RMSE) measure and execution times for different numbers of selected features. The column All Features holds the RMSE obtained with all features, with the maximum standard error given in parentheses.