Article

IMMIGRATE: A Margin-Based Feature Selection Method with Interaction Terms

Ruzhang Zhao 1, Pengyu Hong 2 and Jun S. Liu 3
1 Department of Biostatistics, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD 21205, USA
2 Department of Computer Science, Brandeis University, Waltham, MA 02453, USA
3 Department of Statistics, Harvard University, Cambridge, MA 02138, USA
* Authors to whom correspondence should be addressed.
Entropy 2020, 22(3), 291; https://doi.org/10.3390/e22030291
Submission received: 31 January 2020 / Revised: 25 February 2020 / Accepted: 25 February 2020 / Published: 2 March 2020

Abstract

Traditional hypothesis-margin research focuses on obtaining large margins and on feature selection. In this work, we show that the robustness of margins is also critical and can be measured using entropy. In addition, our approach provides clear mathematical formulations and explanations that uncover feature interactions, which are often lacking in large hypothesis-margin based approaches. We design an algorithm, termed IMMIGRATE (Iterative max-min entropy margin-maximization with interaction terms), for training the weights associated with the interaction terms. IMMIGRATE simultaneously utilizes both local and global information and can be used as a base learner in Boosting. We evaluate IMMIGRATE on a wide range of tasks, in which it demonstrates exceptional robustness and achieves state-of-the-art results with high interpretability.

1. Introduction

Feature selection is one of the most fundamental problems in machine learning and pattern recognition [1]. The Relief algorithm by Kira and Rendell [2] is one of the most successful feature selection algorithms. It can be interpreted as an online learning algorithm that solves a convex optimization problem with a hypothesis-margin-based cost function. Instead of deploying exhaustive or heuristic combinatorial searches, Relief decomposes a complex, global and nonlinear classification task into a simple and local one. Following the large hypothesis-margin principle for classification, Relief calculates the weights of features, which can be used for feature selection. Considering binary classification on a set of samples $\mathcal{P}$ with two kinds of labels, the hypothesis-margin of an instance $x$ was later formally defined by Gilad-Bachrach et al. [3] as $\frac{1}{2}\left(\|x - \mathrm{NM}(x)\| - \|x - \mathrm{NH}(x)\|\right)$, where $\mathrm{NH}(x)$ denotes the "nearest hit," i.e., the nearest sample to $x$ with the same label, while $\mathrm{NM}(x)$ denotes the "nearest miss", the nearest sample to $x$ with a different label. The large hypothesis-margin principle has motivated several successful extensions of the Relief algorithm. For example, ReliefF [4] uses multiple nearest neighbors. Simba [3] recalculates the nearest neighbors every time the feature weights are updated. Yang et al. [5] consider global information to improve Simba. I-RELIEF [6] identifies the nearest hits and misses in a probabilistic manner, which forms a variation of the hypothesis-margin. LFE [7] extends Relief from feature selection to feature extraction using local information. IM4E is proposed by Bei and Hong [8] to balance margin-quantity maximization and margin-quality maximization. Both Sun and Wu [7] and Bei and Hong [8] use a variation of the hypothesis-margin proposed by Sun and Li [6].
Relief-based algorithms account for feature interactions only indirectly, by normalizing the feature weights [9]; this does not directly reflect the effects of feature associations and hence provides little insight into how features interact. For example, Relief and many of its extensions cannot tell whether a high weight of a certain feature is caused by its linear effect or by its interaction with other features [9]. Furthermore, these methods cannot directly reveal and measure the impact of the interaction terms on classification results.
To this end, we propose the Iterative Max-MIn entropy marGin-maximization with inteRAction TErms algorithm (IMMIGRATE, henceforth). IMMIGRATE directly measures the influence of feature interactions and has the following characteristics. First, when defining the hypothesis-margin, we introduce a new trainable quadratic-Manhattan measurement to capture interaction terms, which measures interaction importance directly. Second, we account for margin stability by measuring the entropy of the underlying distributions of hits and misses. Third, we derive an iterative optimization algorithm to efficiently minimize the cost function. Fourth, we design a novel classification method that utilizes the learned quadratic-Manhattan measurement to predict the class of a new instance. Fifth, we design a more powerful approach (i.e., Boosted IMMIGRATE) by using IMMIGRATE as the base learner of Boosting [10]. Sixth, to make IMMIGRATE efficient for analyzing high-dimensional datasets, we take advantage of IM4E [8] to obtain an effective initialization.
The rest of the paper is organized as follows. Section 2 explains the foundation of the Relief algorithm, and Section 3 introduces the IMMIGRATE algorithm. Section 4 summarizes and discusses our experiments on different datasets, showing that IMMIGRATE achieves state-of-the-art results and that Boosted IMMIGRATE outperforms other boosting classifiers significantly. The computation time of IMMIGRATE is comparable to that of other popular feature selection methods that consider interaction terms. Section 5 compares IMMIGRATE with related works, and Section 6 concludes the article with a short discussion.

2. Review: The Relief Algorithm

We first introduce a few notations used throughout the paper: $x_i \in \mathbb{R}^A$ is the $i$-th instance in the training set $\mathcal{P}$; $y_i$ is the class label of $x_i$; $N$ is the size of $\mathcal{P}$; $A$ is the number of features (i.e., attributes); $w$ is the feature weight vector; and $|x_i|$ is the vector obtained by applying the absolute value operation to $x_i$ element-wise. Relief [2] iteratively calculates the feature weights in $w$ (Algorithm 1). The higher a feature weight is, the more relevant the corresponding feature is. After the calculation of feature weights, a threshold is chosen to select the relevant features. Relief can be viewed as solving a convex optimization problem that minimizes the cost function in Equation (1):
$$C = \sum_{n=1}^{M} \left[ w^T |x_n - \mathrm{NH}(x_n)| - w^T |x_n - \mathrm{NM}(x_n)| \right], \quad \text{subject to: } w \geq 0, \ \|w\|_2^2 = 1, \qquad (1)$$
where $M$ ($\leq N$) is a user-defined number of randomly chosen training samples, $\mathrm{NH}(x)$ is the nearest "hit" (from the same class) of $x$, $\mathrm{NM}(x)$ is the nearest "miss" (from a different class) of $x$, and $w^T |x_n - \mathrm{NH}(x_n)|$ is a weighted Manhattan distance. Denote $u = \sum_{n=1}^{M} \left( |x_n - \mathrm{NH}(x_n)| - |x_n - \mathrm{NM}(x_n)| \right)$. Minimizing the Relief cost function (1) can be done using the Lagrange multiplier method and the Karush–Kuhn–Tucker conditions [11], which yield the closed-form solution $w = (-u)_+ / \|(-u)_+\|_2$, where $(a)_+$ truncates the negative elements of $a$ to 0. This solution to the original Relief algorithm is important for understanding the Relief-based algorithms.
Algorithm 1 The Original Relief Algorithm
N: the number of training instances.
A: the number of features (i.e., attributes).
M: the number of randomly chosen training samples to update feature weight w .
Input: a training dataset { z n = ( x n , y n ) } n = 1 , , N .
Initialization: Initialize all feature weights to 0: w = 0 .
  for i = 1 to M do
    Randomly select an instance x i and find its NH ( x i ) and NM ( x i ) .
    Update the feature weights by $w = w - (x_i - \mathrm{NH}(x_i))^2 / M + (x_i - \mathrm{NM}(x_i))^2 / M$,
    where the square operation is element-wise.
Return: w .
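To make Algorithm 1 concrete, the following is a minimal NumPy sketch of the original Relief update; the function and variable names (e.g., relief_weights) are ours and not part of the released R/MATLAB packages.

```python
import numpy as np

def relief_weights(X, y, M=None, rng=None):
    """Minimal sketch of the original Relief update (Algorithm 1)."""
    rng = np.random.default_rng(rng)
    N, A = X.shape
    M = N if M is None else M
    w = np.zeros(A)
    for _ in range(M):
        i = rng.integers(N)
        same = (y == y[i]) & (np.arange(N) != i)
        diff = y != y[i]
        # Euclidean distances from x_i to every instance
        d = np.linalg.norm(X - X[i], axis=1)
        nh = X[np.where(same)[0][np.argmin(d[same])]]   # nearest hit
        nm = X[np.where(diff)[0][np.argmin(d[diff])]]   # nearest miss
        # element-wise update: reward features that separate the classes
        w += -(X[i] - nh) ** 2 / M + (X[i] - nm) ** 2 / M
    return w
```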

3. IMMIGRATE Algorithm

Without loss of generality, we establish the IMMIGRATE algorithm in a general binary classification setting. This formulation can be easily extended to handle multi-class classification problems. Let the whole data set be $\mathcal{P} = \{z_n \mid z_n = (x_n, y_n), \ x_n \in \mathbb{R}^A, \ y_n = \pm 1\}_{n=1}^{N}$; let the hit index set of $x_n$ be $\mathcal{H}_n = \{j \mid z_j \in \mathcal{P}, \ y_j = y_n, \ j \neq n\}$, and the miss index set of $x_n$ be $\mathcal{M}_n = \{j \mid z_j \in \mathcal{P}, \ y_j \neq y_n\}$.

3.1. Hypothesis-Margin

Given a distance $d(x_i, x_j)$ between two instances $x_i$ and $x_j$, a hypothesis-margin [3] is defined as $\rho_{n,h,m} = d(x_n, x_m) - d(x_n, x_h)$, where $x_h$ and $x_m$ represent the nearest hit and nearest miss of instance $x_n$, respectively. We adopt the probabilistic hypothesis-margin defined by Sun and Li [6] as
$$\rho_n = \sum_{m \in \mathcal{M}_n} \beta_{n,m} \, d(x_n, x_m) - \sum_{h \in \mathcal{H}_n} \alpha_{n,h} \, d(x_n, x_h), \qquad (2)$$
where $\alpha_{n,h} \geq 0$, $\beta_{n,m} \geq 0$, $\sum_{h \in \mathcal{H}_n} \alpha_{n,h} = 1$, and $\sum_{m \in \mathcal{M}_n} \beta_{n,m} = 1$ for $n \in \{1, \ldots, N\}$. In the above design, the hidden random variable $\alpha_{n,h}$ represents the probability that $x_h$ is the nearest hit of instance $x_n$, while $\beta_{n,m}$ indicates the probability that $x_m$ is the nearest miss of instance $x_n$. In the rest of the paper, for conciseness, we use margin to mean hypothesis-margin.

3.2. Entropy to Measure Margin Stability

The distributions of hits and misses can be used to evaluate the stability of margins (i.e., margin quality). A more stable margin can be obtained by considering the distributions of instances with the same or different labels with respect to the target instance. A margin is deemed stable if it will not be greatly reduced by changes to only a few neighbors of the target instance. Considering an instance $x_n$, its probabilities $\{\alpha_{n,h}\}$ and $\{\beta_{n,m}\}$ represent the distributions of its hits and misses, respectively. We can use the hit entropy $E_{hit}(x_n) = -\sum_{h \in \mathcal{H}_n} \alpha_{n,h} \log \alpha_{n,h}$ and the miss entropy $E_{miss}(x_n) = -\sum_{m \in \mathcal{M}_n} \beta_{n,m} \log \beta_{n,m}$ to evaluate the stability of $x_n$'s margin. The following two scenarios help explain the intuition behind using these entropies. Scenario A: all neighbors are distributed evenly around the target instance. Scenario B: the neighbor distribution is highly uneven; an extreme example is that one instance is quite close to the target and the rest are quite far away. An easy experiment to test stability is to discard one instance from the system and check how this influences the margin. In scenario A, if the closest neighbor (no matter whether it is a hit or a miss) is discarded, the margin changes only slightly because there are many other hits/misses evenly distributed around the target. In scenario B, if the closest neighbor is a miss, its removal can increase the margin significantly. On the contrary, if the closest neighbor is a hit, removing it can decrease the margin significantly. Intuitively speaking, hits prefer scenario A and misses favor scenario B.
Since scenarios A and B correspond to high and low entropy, respectively, the margin benefits from a large hit entropy $E_{hit}$ (e.g., scenario A) and a low miss entropy $E_{miss}$ (e.g., scenario B). We can therefore set up a framework that maximizes the hit entropy and minimizes the miss entropy, which is equivalent to making the margin in Equation (2) as stable as possible. Bei and Hong [8] use the term max-min entropy principle to describe the process of maximizing the hit entropy and minimizing the miss entropy in order to maximize margin quality. Stabilizing the margin in this way is an extension of the large-margin principle.
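As a small numeric illustration of this intuition (a sketch only; the probability values are made up and the helper name entropy is ours), an even neighbor distribution (scenario A) has much higher entropy than a highly uneven one (scenario B):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + 1e-12)))

# Scenario A: five neighbors spread evenly around the target instance.
even = np.full(5, 1 / 5)
# Scenario B: one neighbor dominates (e.g., one very close, four far away).
uneven = np.array([0.92, 0.02, 0.02, 0.02, 0.02])

print(entropy(even))    # ~1.61: high entropy, preferred for hits (stable margin)
print(entropy(uneven))  # ~0.39: low entropy, preferred for misses
```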

3.3. Quadratic-Manhattan Measurement

We extend the margin in Equation (2) by using a new quadratic-Manhattan measurement defined as:
$$q(x_i, x_j) = |x_i - x_j|^T \, W \, |x_i - x_j|, \qquad (3)$$
where $W$ is a non-negative (element-wise) symmetric matrix with Frobenius norm $\|W\|_F = 1$. The quadratic-Manhattan measurement is a natural extension of the weight vector, and the distance defined in Equation (3) is a natural extension of the weighted Manhattan distance in Equation (1). Off-diagonal elements of $W$ capture feature interactions, and diagonal elements of $W$ capture main effects. To understand why the quadratic-Manhattan measurement can capture the influence of interactions, observe that the element $w_{a,b}$ ($a \neq b$) of $W$ enters Equation (3) as the coefficient of the product of the $a$-th and $b$-th elements of the vector $|x_i - x_j|$. In Relief-based algorithms, the weighted Manhattan distance in Equation (1) can be equivalently captured by the feature weight update in Algorithm 1. Similarly, $w_{a,b}$ can be updated using the product of the $a$-th and $b$-th features for a randomly chosen instance. We thus define our new margin using the quadratic-Manhattan measurement as
$$\sum_{m \in \mathcal{M}_n} \beta_{n,m} \, q(x_n, x_m) - \sum_{h \in \mathcal{H}_n} \alpha_{n,h} \, q(x_n, x_h). \qquad (4)$$
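The measurement in Equation (3) is straightforward to compute; the following minimal sketch (function name quad_manhattan is ours) shows the element-wise absolute difference sandwiched by W, with an illustrative W whose off-diagonal entry plays the role of an interaction weight:

```python
import numpy as np

def quad_manhattan(x_i, x_j, W):
    """q(x_i, x_j) = |x_i - x_j|^T W |x_i - x_j| with an element-wise absolute value."""
    d = np.abs(x_i - x_j)
    return float(d @ W @ d)

# Example: two features; the off-diagonal 0.2 weights their interaction.
W = np.array([[0.5, 0.2],
              [0.2, 0.3]])
W = W / np.linalg.norm(W)          # enforce ||W||_F = 1
print(quad_manhattan(np.array([1.0, 2.0]), np.array([0.0, 0.5]), W))
```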

3.4. IMMIGRATE

We design the following cost function, whose minimization maximizes the new margin while simultaneously optimizing the hit and miss entropies:
$$C = \sum_{n=1}^{N} \left[ \sum_{h \in \mathcal{H}_n} \alpha_{n,h} |x_n - x_h|^T W |x_n - x_h| - \sum_{m \in \mathcal{M}_n} \beta_{n,m} |x_n - x_m|^T W |x_n - x_m| \right] + \sigma \sum_{n=1}^{N} \left[ E_{miss}(z_n) - E_{hit}(z_n) \right],$$
$$\text{subject to: } W \geq 0, \ W^T = W, \ \|W\|_F^2 = 1; \ \forall n, \ \sum_{h \in \mathcal{H}_n} \alpha_{n,h} = 1, \ \sum_{m \in \mathcal{M}_n} \beta_{n,m} = 1, \ \alpha_{n,h} \geq 0, \ \beta_{n,m} \geq 0, \qquad (5)$$
where $E_{miss}(z_n) = -\sum_{m \in \mathcal{M}_n} \beta_{n,m} \log \beta_{n,m}$, $E_{hit}(z_n) = -\sum_{h \in \mathcal{H}_n} \alpha_{n,h} \log \alpha_{n,h}$, and $\sigma$ is a hyperparameter that can be tuned via internal cross-validation.
We also design the following optimization procedure, consisting of two iterative steps, to find the $W$ that minimizes the cost function. The procedure starts from a randomly initialized $W$ and stops when the change of the cost function is less than a preset limit or the iteration number reaches a preset threshold. In practice, we find that it typically takes fewer than 10 iterations to stop and obtain good results. Based on our experiments, different initializations of $W$ do not influence the results of the iterative optimization. The computation time of IMMIGRATE is comparable to that of other interaction-aware methods such as SODA [12] and hierNet [13].
As depicted by the flow chart in Figure 1, the IMMIGRATE algorithm iteratively optimizes the cost function in Equation (5). It starts with a random initialization satisfying the boundary conditions and proceeds by iterating the two steps detailed below in Algorithm 2.
Algorithm 2 The IMMIGRATE Algorithm
Input: a training dataset { z n = ( x n , y n ) } n = 1 , , N .
Initialization: Let $t = 0$; randomly initialize $W^{(0)}$ satisfying $W^{(0)} \geq 0$, $(W^{(0)})^T = W^{(0)}$, $\|W^{(0)}\|_F^2 = 1$.
 repeat
    Calculate $\{\alpha_{n,h}^{(t+1)}\}$ and $\{\beta_{n,m}^{(t+1)}\}$ with Equation (6).
    Calculate $W^{(t+1)}$ with Theorem 1, Equation (8).
     t = t + 1 .
 until the change of C in Equation (5) is small enough or the iteration indicator t reaches a preset limit.
Output: W ( t ) .
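The outer loop of Algorithm 2 can be rendered as the short sketch below. It relies on the Step 1 and Step 2 helpers (update_alpha_beta and update_W) sketched in the next two subsections, and, as an assumption of ours, it uses the change in W as a simple proxy for the change in the cost function C when deciding to stop.

```python
import numpy as np

def immigrate(X, y, sigma=1.0, max_iter=10, tol=1e-4, seed=0):
    """Outer loop of Algorithm 2 (sketch); helpers are defined in the following subsections."""
    rng = np.random.default_rng(seed)
    A = X.shape[1]
    W = np.abs(rng.standard_normal((A, A)))
    W = (W + W.T) / 2                     # symmetric, element-wise non-negative
    W /= np.linalg.norm(W)                # ||W||_F = 1
    prev_W = W
    for _ in range(max_iter):
        alpha, beta = update_alpha_beta(X, y, W, sigma)   # Step 1
        W = update_W(X, y, alpha, beta)                   # Step 2
        if np.linalg.norm(W - prev_W) < tol:              # proxy for a small change in C
            break
        prev_W = W
    return W
```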

3.4.1. Step 1: Fix W , Update { α n , h } and { β n , m }

Fixing $W$ and setting $\partial C / \partial \alpha_{n,h} = 0$ and $\partial C / \partial \beta_{n,m} = 0$, we obtain the closed-form updates of $\alpha_{n,h}$ and $\beta_{n,m}$ as
$$\alpha_{n,h} = \frac{\exp\!\left(-q(x_n, x_h)/\sigma\right)}{\sum_{h' \in \mathcal{H}_n} \exp\!\left(-q(x_n, x_{h'})/\sigma\right)}, \qquad \beta_{n,m} = \frac{\exp\!\left(-q(x_n, x_m)/\sigma\right)}{\sum_{k \in \mathcal{M}_n} \exp\!\left(-q(x_n, x_k)/\sigma\right)}. \qquad (6)$$
The Hessian matrix of $C$ with respect to the probability pair $(\alpha_{n,h}, \beta_{n,m})$ is
$$\nabla^2 C(\alpha_{n,h}, \beta_{n,m}) = \begin{pmatrix} \sigma / \alpha_{n,h} & \partial^2 C / \partial \beta_{n,m} \partial \alpha_{n,h} \\ \partial^2 C / \partial \beta_{n,m} \partial \alpha_{n,h} & -\sigma / \beta_{n,m} \end{pmatrix}. \qquad (7)$$
Since $\alpha_{n,h}, \beta_{n,m} > 0$, the determinant of the Hessian matrix is negative, so the stationary point given by Equation (6) is a saddle point in the $(\alpha_{n,h}, \beta_{n,m})$ space: the cost function $C$ attains a local minimum with respect to $\alpha_{n,h}$ and a local maximum with respect to $\beta_{n,m}$.
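A sketch of the Step 1 updates follows. It implements Equation (6) under the sign convention used in the reconstruction above (probabilities decay with the quadratic-Manhattan distance); the helper names are ours.

```python
import numpy as np

def quad_manhattan_all(X, x, W):
    """q(x, x_n) for every row x_n of X."""
    D = np.abs(X - x)                         # N x A matrix of element-wise |x_n - x|
    return np.einsum('na,ab,nb->n', D, W, D)  # row-wise quadratic form

def update_alpha_beta(X, y, W, sigma):
    """Step 1: closed-form softmax updates of the hit/miss probabilities."""
    N = X.shape[0]
    alpha, beta = [], []
    for n in range(N):
        q = quad_manhattan_all(X, X[n], W)
        hits = (y == y[n]) & (np.arange(N) != n)
        miss = y != y[n]
        a = np.exp(-q[hits] / sigma); a /= a.sum()
        b = np.exp(-q[miss] / sigma); b /= b.sum()
        alpha.append(a); beta.append(b)
    return alpha, beta
```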

3.4.2. Step 2: Fix { α n , h } and { β n , m } , Update W

Fixing $\alpha_{n,h}$ and $\beta_{n,m}$, the minimization with respect to $W$ is convex. In Equation (5), $W$ satisfies $W \geq 0$, $W^T = W$, $\|W\|_F^2 = 1$. In our iterative optimization strategy, we additionally restrict $W$ to be a distance metric (symmetric and positive semi-definite) for computation. Then, a closed-form solution for $W$ can be derived (see Equation (8)).
Theorem 1.
With $\{\alpha_{n,h}\}$ and $\{\beta_{n,m}\}$ fixed, Equation (5) gives rise to a closed-form solution for updating $W$. Let
$$\Sigma = \sum_{n=1}^{N} \left( \Sigma_{n,H} - \Sigma_{n,M} \right),$$
where $\Sigma_{n,H} = \sum_{h \in \mathcal{H}_n} \alpha_{n,h} |x_n - x_h| \, |x_n - x_h|^T$ and $\Sigma_{n,M} = \sum_{m \in \mathcal{M}_n} \beta_{n,m} |x_n - x_m| \, |x_n - x_m|^T$. Let the $\psi_i$'s and $\mu_i$'s be the eigenvectors and eigenvalues of $\Sigma$, respectively, so that $\Sigma \psi_i = \mu_i \psi_i$ with $\|\psi_i\|_2^2 = 1$. Then,
$$W = \Phi \Phi^T, \qquad (8)$$
where $\Phi = (\sqrt{\eta_1}\, \psi_1, \sqrt{\eta_2}\, \psi_2, \ldots, \sqrt{\eta_A}\, \psi_A)$ and $\eta_i = (-\mu_i)_+ \big/ \sqrt{\sum_{j=1}^{A} ((-\mu_j)_+)^2}$.
Proof. 
Since $W$ is a distance metric matrix, it is symmetric and positive semi-definite. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_A \geq 0$ be the eigenvalues of $W$; then the eigen-decomposition of $W$ is
$$W = P \Lambda P^T = P \Lambda^{1/2} \Lambda^{1/2} P^T = [\sqrt{\lambda_1}\, p_1, \ldots, \sqrt{\lambda_A}\, p_A] \, [\sqrt{\lambda_1}\, p_1, \ldots, \sqrt{\lambda_A}\, p_A]^T \triangleq \Phi \Phi^T, \qquad (9)$$
where $P$ is an orthogonal matrix and $\Phi = [\phi_1, \ldots, \phi_A] \triangleq [\sqrt{\lambda_1}\, p_1, \ldots, \sqrt{\lambda_A}\, p_A]$. Thus, $\langle \phi_i, \phi_j \rangle = 0$ for $i \neq j$. The constraint $\|W\|_F^2 = 1$ can be simplified as
$$\|W\|_F^2 = \sum_{i,j} w_{i,j}^2 = \sum_i (\phi_i^T \phi_i)^2 = 1. \qquad (10)$$
Let us rearrange the terms of Equation (5) as:
$$\sum_{h \in \mathcal{H}_n} \alpha_{n,h} |x_n - x_h|^T W |x_n - x_h| = \mathrm{tr}\!\left(W \sum_{h \in \mathcal{H}_n} \alpha_{n,h} |x_n - x_h| \, |x_n - x_h|^T\right) \triangleq \mathrm{tr}(W \Sigma_{n,H}) = \mathrm{tr}\!\left(\Sigma_{n,H} \sum_{i=1}^{A} \phi_i \phi_i^T\right) = \sum_{i=1}^{A} \phi_i^T \Sigma_{n,H} \phi_i, \qquad (11)$$
and analogously for the miss terms with $\Sigma_{n,M}$.
Then, with $\{\alpha_{n,h}\}$ and $\{\beta_{n,m}\}$ fixed (so that the entropy terms are constant), Equation (5) can be further simplified to
$$C = \sum_{i=1}^{A} \phi_i^T \Sigma \phi_i, \quad \text{subject to: } \|W\|_F^2 = \sum_i (\phi_i^T \phi_i)^2 = 1, \ \langle \phi_i, \phi_j \rangle = 0 \ (i \neq j), \qquad (12)$$
where $\Sigma = \sum_{n=1}^{N} (\Sigma_{n,H} - \Sigma_{n,M})$ with $\Sigma_{n,H} = \sum_{h \in \mathcal{H}_n} \alpha_{n,h} |x_n - x_h| \, |x_n - x_h|^T$ and $\Sigma_{n,M} = \sum_{m \in \mathcal{M}_n} \beta_{n,m} |x_n - x_m| \, |x_n - x_m|^T$. The orthogonality condition can be dropped from the optimization because the solution derived below, which is built from eigenvectors of $\Sigma$, automatically satisfies it. The Lagrangian of the optimization problem in Equation (12) is then easy to obtain:
$$L = \sum_{i=1}^{A} \phi_i^T \Sigma \phi_i + \lambda \left( \sum_{i=1}^{A} (\phi_i^T \phi_i)^2 - 1 \right). \qquad (13)$$
Differentiating $L$ with respect to $\phi_i$ yields:
$$\partial L / \partial \phi_i = 2 \Sigma \phi_i + 4 \lambda (\phi_i^T \phi_i) \phi_i = 0. \qquad (14)$$
Denote $\psi_i := \phi_i / \|\phi_i\|_2$. From Equation (14), we have
$$\Sigma \psi_i = \mu_i \psi_i, \qquad (15)$$
where $\mu_i = -2\lambda \|\phi_i\|_2^2$. Thus, $\psi_i$ and $\mu_i$ are an eigenvector and the corresponding eigenvalue of $\Sigma$, respectively.
Let $\phi_i = \sqrt{\eta_i}\, \psi_i$ with $\eta_i \geq 0$. Thus, $C = \sum_{i=1}^{A} \sqrt{\eta_i}\, \psi_i^T \Sigma \, \sqrt{\eta_i}\, \psi_i = \sum_{i=1}^{A} \eta_i \mu_i \psi_i^T \psi_i = \sum_{i=1}^{A} \eta_i \mu_i$, and $\|W\|_F^2 = \sum_i (\sqrt{\eta_i}\, \psi_i^T \sqrt{\eta_i}\, \psi_i)^2 = \sum_i \eta_i^2 = 1$. Then, Equation (12) can be simplified to
$$C = \sum_{i=1}^{A} \eta_i \mu_i, \quad \text{subject to: } \sum_{i=1}^{A} \eta_i^2 = 1, \ \eta_i \geq 0. \qquad (16)$$
Note that Equation (16) has exactly the same form as the optimization problem of the original Relief algorithm in Equation (1), so its closed-form solution is
$$\eta = (-\mu)_+ / \|(-\mu)_+\|_2, \qquad (17)$$
where $(a)_+ = [\max(a_1, 0), \max(a_2, 0), \ldots, \max(a_A, 0)]$, and $\phi_i = \sqrt{\eta_i}\, \psi_i$. It is also easy to see that the updated $W$ is a distance metric.  □
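A sketch of the Step 2 update implied by Theorem 1 follows; it is a numerical rendering of Equation (8) under the sign convention used in the reconstruction above, and the function name update_W is ours.

```python
import numpy as np

def update_W(X, y, alpha, beta):
    """Step 2: closed-form update of W from the weighted scatter matrices."""
    N, A = X.shape
    Sigma = np.zeros((A, A))
    for n in range(N):
        hits = np.where((y == y[n]) & (np.arange(N) != n))[0]
        miss = np.where(y != y[n])[0]
        Dh = np.abs(X[hits] - X[n]); Dm = np.abs(X[miss] - X[n])
        Sigma += (Dh * alpha[n][:, None]).T @ Dh      # Sigma_{n,H}
        Sigma -= (Dm * beta[n][:, None]).T @ Dm       # Sigma_{n,M}
    mu, Psi = np.linalg.eigh(Sigma)                   # eigenvalues/eigenvectors of Sigma
    eta = np.maximum(-mu, 0.0)                        # (-mu)_+
    eta /= np.linalg.norm(eta)                        # so that sum_i eta_i^2 = 1
    # (assumes Sigma has at least one negative eigenvalue)
    Phi = Psi * np.sqrt(eta)                          # column i is sqrt(eta_i) * psi_i
    return Phi @ Phi.T                                # W = Phi Phi^T, ||W||_F = 1
```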

3.4.3. Weight Pruning

Some previous Relief-based algorithms offer the option of removing weights lower than a preset threshold. IMMIGRATE offers a similar option to prune small weights by setting small elements of $W$ to 0. By default, we prune weights below a threshold to 0 and then re-normalize $W$ with respect to the Frobenius norm.
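A tiny sketch of this pruning option (the threshold value passed in is illustrative, not the default used in the experiments):

```python
import numpy as np

def prune(W, threshold):
    """Zero out small weights in W, then re-normalize to ||W||_F = 1."""
    W = np.where(np.abs(W) < threshold, 0.0, W)
    return W / np.linalg.norm(W)
```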

3.4.4. Predict New Samples

A prediction rule based on the learned weight matrix $W$ can be formulated as
$$\hat{y} = \arg\min_{c} \sum_{y_n = c} \alpha_n^c(x) \, q(x, x_n), \qquad \alpha_n^c(x) = \frac{\exp\!\left(-q(x, x_n)/\sigma\right)}{\sum_{y_k = c} \exp\!\left(-q(x, x_k)/\sigma\right)}, \qquad (18)$$
where $z = (x, y)$ is a new instance whose label $y$ is unknown, $c$ denotes a class, and $\hat{y}$ is the predicted label. This prediction method assigns a new instance to the class that maximizes its hypothesis-margin under the learned weight matrix $W$, which makes it more stable than the k-NN rule used in traditional Relief-based algorithms.
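A sketch of the prediction rule in Equation (18), again under the reconstructed sign convention; it reuses the quad-Manhattan helper idea from above and its function names are ours.

```python
import numpy as np

def predict(x, X, y, W, sigma):
    """Assign x to the class with the smallest softmax-weighted quadratic-Manhattan distance."""
    best_class, best_score = None, np.inf
    for c in np.unique(y):
        Xc = X[y == c]
        D = np.abs(Xc - x)
        q = np.einsum('na,ab,nb->n', D, W, D)         # q(x, x_n) for the class-c instances
        a = np.exp(-q / sigma); a /= a.sum()          # alpha_n^c(x)
        score = float(a @ q)                          # weighted within-class distance
        if score < best_score:
            best_class, best_score = c, score
    return best_class
```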

3.5. IMMIGRATE in Ensemble Learning

Boosting [10,14,15] has been widely used to create ensemble learners that produce state-of-the-art results in many tasks. Boosting combines a set of relatively weak base learners to create a much stronger learner. To use IMMIGRATE as the base classifier in the AdaBoost algorithm [14], we modify the cost function in Equation (5) to include sample weights and use the modified version in the boosting iterations. We name the resulting algorithm BIM, standing for Boosted IMMIGRATE (refer to Equation (19) and Algorithm 3 for details). BIM schedules the adjustment of the hyperparameter $\sigma$ over its boosting iterations: it starts with $\sigma$ equal to a predefined $\sigma_{max}$ and gradually reduces $\sigma$ by multiplying it by $(\sigma_{min}/\sigma_{max})^{1/T}$ at each iteration until it reaches $\sigma_{min}$, where $T$ is the predefined maximum number of boosting iterations.
$$C = \sum_{n=1}^{N} D(x_n) \left[ \sum_{h \in \mathcal{H}_n} \alpha_{n,h} |x_n - x_h|^T W |x_n - x_h| - \sum_{m \in \mathcal{M}_n} \beta_{n,m} |x_n - x_m|^T W |x_n - x_m| \right] + \sigma \sum_{n=1}^{N} D(x_n) \left[ E_{miss}(z_n) - E_{hit}(z_n) \right],$$
$$\text{subject to: } W \geq 0, \ W^T = W, \ \|W\|_F^2 = 1; \ \forall n, \ \sum_{h \in \mathcal{H}_n} \alpha_{n,h} = 1, \ \sum_{m \in \mathcal{M}_n} \beta_{n,m} = 1, \ \alpha_{n,h} \geq 0, \ \beta_{n,m} \geq 0, \qquad (19)$$
where $E_{miss}(z_n) = -\sum_{m \in \mathcal{M}_n} \beta_{n,m} \log \beta_{n,m}$, $E_{hit}(z_n) = -\sum_{h \in \mathcal{H}_n} \alpha_{n,h} \log \alpha_{n,h}$, $\sum_{n=1}^{N} D(x_n) = 1$, and $D(x_n) \geq 0$ for all $n$.
Algorithm 3 The BIM Algorithm
T: the number of classifiers for BIM.
Input: a training dataset { z n = ( x n , y n ) } n = 1 , , N .
Initialization: for each x n , set D 1 ( x n ) = 1 / N .
 for t: = 1 to T do
    Limit the maximum number of IMMIGRATE iterations to a preset value.
    Train weak IMMIGRATE classifier h t ( x ) using a chosen σ t and weights D t ( x ) by Equation (19).
    Compute the error rate $\epsilon_t = \sum_{i=1}^{N} D_t(x_i) \, I[y_i \neq h_t(x_i)]$.
    if $\epsilon_t \geq 1/2$ or $\epsilon_t = 0$ then
        Discard $h_t$, set $T = T - 1$, and continue.
    Set $\alpha_t = 0.5 \log[(1 - \epsilon_t)/\epsilon_t]$.
    Update $D(x_i)$: for each $x_i$,
        $D_{t+1}(x_i) = D_t(x_i) \exp(\alpha_t I[y_i \neq h_t(x_i)])$.
    Normalize $D_{t+1}(x_i)$ so that $\sum_{i=1}^{N} D_{t+1}(x_i) = 1$.
Output: $h_{final}(x) = \arg\max_{y \in \{0,1\}} \sum_{t: h_t(x) = y} \alpha_t$.
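The AdaBoost-style outer loop of Algorithm 3 can be sketched as follows. The weak learner is treated as a black box train_immigrate(X, y, D, sigma) returning a classifier h(x); that helper name, and simply skipping (rather than re-counting) discarded learners, are our assumptions.

```python
import numpy as np

def bim(X, y, train_immigrate, T=100, sigma_max=4.0, sigma_min=0.2):
    """Boosted IMMIGRATE (sketch): AdaBoost over IMMIGRATE weak learners with a sigma schedule."""
    N = X.shape[0]
    D = np.full(N, 1.0 / N)
    decay = (sigma_min / sigma_max) ** (1.0 / T)
    sigma, learners, alphas = sigma_max, [], []
    for _ in range(T):
        h = train_immigrate(X, y, D, sigma)              # weak IMMIGRATE classifier
        pred = np.array([h(x) for x in X])
        eps = float(D[pred != y].sum())                  # weighted error rate
        if eps >= 0.5 or eps == 0.0:                     # discard this weak learner
            sigma = max(sigma * decay, sigma_min)
            continue
        a = 0.5 * np.log((1 - eps) / eps)
        D *= np.exp(a * (pred != y))                     # up-weight misclassified samples
        D /= D.sum()
        learners.append(h); alphas.append(a)
        sigma = max(sigma * decay, sigma_min)

    def h_final(x):
        votes = {}
        for h, a in zip(learners, alphas):
            votes[h(x)] = votes.get(h(x), 0.0) + a       # alpha-weighted vote per class
        return max(votes, key=votes.get)
    return h_final
```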

3.6. IMMIGRATE for High-Dimensional Data Space

When applied to high-dimensional data, IMMIGRATE can incur a high computational cost because it considers the interactions between every pair of features. To reduce the computational cost, we first use IM4E [8] to learn a feature weight vector, which is used to initialize the diagonal elements of $W$ in the proposed quadratic-Manhattan measurement. We also use the learned feature weight vector to pre-screen the features, keeping only those with weights above a preset limit. In the remaining computation, we only model interactions between the chosen features. The features discarded by pre-screening can be added back empirically based on the needs of a specific application. We term this procedure IM4E-IMMIGRATE; it is effective and computationally efficient, and it can also be boosted (Boosted IM4E-IMMIGRATE) for further gains.

4. Experiments

In our experiments, all continuous features are normalized to zero mean and unit variance, and cross-validation is used to compare the performance of the various approaches. We have implemented IMMIGRATE in R and MATLAB. The R package is available at https://CRAN.R-project.org/package=Immigrate, and the MATLAB version is available at https://github.com/RuzhangZhao/Immigrate-MATLAB-. Both IMMIGRATE and BIM can be accelerated by parallel computing since their computations are matrix-based.

4.1. Synthetic Dataset

We first test the robustness of the IMMIGRATE algorithm using a synthesized dataset with two interacting features following Gaussian distributions in a binary classification setting. The simulated dataset contains 100 samples from one class governed by a Gaussian distribution with mean $(4, 2)^T$ and covariance matrix $\begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix}$, and another 100 samples from the other class governed by a Gaussian distribution with mean $(6, 0)^T$ and the same covariance matrix. In addition, we add noise following a Gaussian distribution with mean $(8, 2)^T$ and covariance matrix $\begin{pmatrix} 8 & 4 \\ 4 & 8 \end{pmatrix}$ to the first class, and noise following a Gaussian distribution with mean $(2, 4)^T$ and the same covariance matrix to the second class. Figure 2 shows a scatter plot of the synthesized dataset containing 10% samples from the noise distributions. The slope of the orange dotted line in Figure 2 is 1; this line separates the data with different labels.
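Data of this kind can be reproduced along the following lines (a sketch only; the random seed is ours, and treating the noise as replacing a fraction of the 100 samples per class, rather than being added on top, is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
cov_signal = np.array([[1.0, 0.5], [0.5, 1.0]])
cov_noise = np.array([[8.0, 4.0], [4.0, 8.0]])

n, noise_rate = 100, 0.10
n_noise = int(n * noise_rate)

# Class 1: signal around (4, 2), noise around (8, 2).
X1 = np.vstack([rng.multivariate_normal([4, 2], cov_signal, n - n_noise),
                rng.multivariate_normal([8, 2], cov_noise, n_noise)])
# Class 2: signal around (6, 0), noise around (2, 4).
X2 = np.vstack([rng.multivariate_normal([6, 0], cov_signal, n - n_noise),
                rng.multivariate_normal([2, 4], cov_noise, n_noise)])

X = np.vstack([X1, X2])
y = np.array([1] * n + [-1] * n)
```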
The noise is included to disturb the detection of the interaction term. The noise level starts at 5% and gradually increases in steps of 5% up to 50%. As a baseline, we apply logistic regression and observe that the t-test p-value of the interaction coefficient increases from $3 \times 10^{-11}$ to $7 \times 10^{-5}$ and to 0.7 as the noise level increases from 0% to 10% and to 50%, respectively. Local Feature Extraction (LFE, Sun and Wu [7]) is a Relief-based algorithm that considers interaction terms indirectly, although the interaction information is only used for feature extraction. We run IMMIGRATE and LFE on the synthesized datasets and compare the weights of the interaction term between features 1 and 2 in Figure 3, which shows that IMMIGRATE is more robust than LFE.

4.2. Real Datasets

We compare IMMIGRATE with several existing popular methods using real datasets from the UCI database (http://archive.ics.uci.edu/ml). The following algorithms are considered in the comparison: Support Vector Machine [16] with Sigmoid Kernel (SV1), Support Vector Machine with Radial Basis Function Kernel (SV2), LASSO (LAS) [17], Decision Tree (DT) [15], Naive Bayes Classifier (NBC) [18], Radial Basis Function Network (RBF) [19], 1-Nearest Neighbor (1NN) [20], 3-Nearest Neighbor (3NN), Large Margin Nearest Neighbor (LMN) [21], Relief (REL) [2], ReliefF (RFF) [4,22], Simba (SIM) [3], and Linear Discriminant Analysis (LDA) [23]. In addition, several methods designed for detecting interaction terms are included: LFE [7], Stepwise conditional likelihood variable selection for Discriminant Analysis (SOD) [12], and hierNet (HIN) [13]. We also include three of the most widely used and competitive ensemble learners: Adaptive Boosting (ADB) [14,15], Random Forest (RF) [24], and XGBoost (XGB) [25]. We use the following abbreviations when presenting the results: IM4 for IM4E, IGT for IMMIGRATE, EGT for IM4E-IMMIGRATE, and B4G for the Boosted IM4E-IMMIGRATE.
Whenever possible, we use the settings of the aforementioned methods reported in their original papers: LMNN uses a 3-NN classifier; Relief and Simba use the Euclidean distance and a 1-NN classifier; ReliefF uses the Manhattan distance and a k-NN classifier (k = 1, 3, 5, decided by internal cross-validation); in SODA, gam (= 0, 0.5, 1) is determined by internal cross-validation and logistic regression is used for prediction. The IM4E algorithm has two hyperparameters, $\lambda$ and $\sigma$. We fix $\lambda = 1$, as it makes no real contribution, and tune $\sigma$ as suggested by Bei and Hong [8]. Hence, the IMMIGRATE algorithm has only one hyperparameter, $\sigma$. When tuning $\sigma$, we gradually decrease it from $\sigma_0 = 4$ by half each time until it is no larger than 0.2. The preset limit for weight pruning is $1/A$, where $A$ is the number of features. Furthermore, the preset iteration number is 10. For each dataset, $\sigma$ and whether weight pruning is applied are determined by the best internal cross-validation results. For BIM, we use $\sigma_{max} = 4$, $\sigma_{min} = 0.2$, and the maximum number of boosting iterations $T$ is 100. The preset pre-screening threshold in IM4E-IMMIGRATE is $2/A$.
We repeat ten-fold cross-validation ten times for each algorithm on each dataset, i.e., 100 trials are carried out. When comparing two algorithms (i.e., A vs. B), we apply the paired Student's t-test to the results of the 100 trials. First, the null hypothesis is that there is no difference between the performance of A and that of B. When the p-value is larger than the significance level cutoff of 0.05, we say A "ties" B, meaning there is no significant difference between their performances. When the p-value is smaller than 0.05, we test the second null hypothesis that the performance of B is no worse than that of A. When this new p-value is smaller than 0.05, we say A "wins", meaning that A on average performs significantly better than B on this dataset, and vice versa.
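The two-step comparison described above can be sketched with SciPy (the function names come from scipy.stats; the one-sided alternative argument assumes SciPy 1.6 or later, and the wrapper name compare is ours):

```python
from scipy import stats

def compare(acc_a, acc_b, alpha=0.05):
    """Paired comparison of two algorithms over the same 100 cross-validation trials."""
    # Two-sided test: is there any difference at all?
    p_two_sided = stats.ttest_rel(acc_a, acc_b).pvalue
    if p_two_sided > alpha:
        return "tie"
    # One-sided test: does A perform better than B on average?
    p_one_sided = stats.ttest_rel(acc_a, acc_b, alternative="greater").pvalue
    return "A wins" if p_one_sided < alpha else "B wins"
```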

4.2.1. Gene Expression Datasets

Gene expression datasets typically have thousands of features. We use the following five gene expression datasets for feature selection: GLI [26], Colon (COL) [27], Myeloma (ELO) [28], Breast (BRE) [29], and Prostate (PRO) [30]. All of these datasets except COL have more than 10,000 features. Refer to Table A1 in Appendix A for details of all datasets.
We perform ten-fold cross-validation ten times, i.e., 100 trials in total. The results are summarized in Table 1. The last row, "(W,T,L)", indicates the number of times that the Boosted IM4E-IMMIGRATE (B4G) wins, ties, and loses compared with each algorithm according to the paired Student's t-test at significance level $\alpha = 0.05$. The comparison results are also summarized in Figure 4 (top subplot) for easy comparison. Although our B4G is not always the best, it outperforms the other methods in most cases. When IM4E-IMMIGRATE (EGT) is compared with the other methods, it also outperforms them in most cases.

4.2.2. UCI Datasets

We also carry out an extensive comparison using many UCI datasets [31]: BCW, CRY, CUS, ECO, GLA, HMS, IMM, ION, LYM, MON, PAR, PID, SMR, STA, URB, USE and WIN. Refer to Appendix A Table A1 for the full names of and links to these datasets. If a dataset has more than two classes, we use the two classes with the largest sample sizes. In addition, we use three large-scale datasets: CRO, ELE and WAV.
We perform ten-fold cross-validation ten times. Table 2 (for IMMIGRATE) and Table 3 (for BIM) show the average accuracies on the corresponding datasets. In Table 2 and Table 3, the last row, "(W,T,L)", indicates the number of times that IMMIGRATE (IGT) and BIM, respectively, win, tie, and lose compared with each algorithm according to the paired Student's t-test at significance level $\alpha = 0.05$. The comparison results are also summarized in Figure 4 (bottom subplot), where the first 17 items (black) show the results for IMMIGRATE and the last three items (blue) show the results for BIM.
Although IMMIGRATE and BIM are not always the best, they significantly outperform the other methods in one-to-one comparisons of the cross-validation results. Figure 4 (bottom subplot, black part) and Table 2 show that IMMIGRATE achieves state-of-the-art performance as a base classifier, while Figure 4 (bottom subplot, blue part) and Table 3 show that BIM achieves state-of-the-art performance as a boosted classifier. To visualize the feature selection results of our approaches, we plot the feature weight heat maps of four datasets (GLA, LYM, SMR and STA) in Appendix B Figure A1.

5. Related Works

Relief-based algorithms and feature selection with interaction terms have been explored extensively in recent publications. We review some of these methods here to show their connections to, and differences from, our approach. The hypothesis-margin definition in Equation (2) adopted in this work is also used in previous studies such as Bei and Hong [8]. However, Bei and Hong [8] do not consider interactions between features. Our work provides a measurable way to show the influence of each feature interaction.
Sun and Wu [7] propose the local feature extraction (LFE) method, which learns linear combinations of features for feature extraction. LFE explores the information in feature interaction terms indirectly, which is partly our aim. However, LFE does not consider global information or margin stability, which results in significant differences in the cost function and the optimization procedure.
Our quadratic-Manhattan measurement defined in Equation (3) is related to the Mahalanobis metric used in previous works on metric learning, such as Large Margin Nearest Neighbor (LMNN) [21]. Weinberger and Saul [21] use semi-definite programming to learn the distance metric in LMNN. LMNN and our approach are both based on k-nearest neighbors. A major difference is that our quadratic-Manhattan measurement requires the matrix $W$ to be element-wise non-negative and symmetric with Frobenius norm $\|W\|_F = 1$, whereas metric learning only requires its matrix to be symmetric and positive semi-definite. The non-negativity requirement on the elements of $W$ gives IMMIGRATE high interpretability, as the matrix entries directly indicate interaction importance. The quadratic-Manhattan measurement serves the classification task well and offers a direct explanation of how features, in particular feature interaction terms, contribute to the classification results.

6. Conclusions and Discussion

In this paper, we propose a new quadratic-Manhattan measurement to extend the hypothesis-margin framework, based on which a feature selection algorithm, IMMIGRATE, is developed for detecting and weighting interaction terms. We also develop its extended versions, Boosted IMMIGRATE (BIM) and IM4E-IMMIGRATE. IMMIGRATE and its variants follow the principle of maximizing a stable hypothesis-margin and are implemented via a computationally efficient iterative optimization procedure. Extensive experiments show that IMMIGRATE significantly outperforms state-of-the-art methods, and its boosted version BIM outperforms other boosting-based approaches. In conclusion, compared with other Relief-based algorithms, IMMIGRATE has the following main advantages: (1) both local and global information are considered; (2) interaction terms are used; (3) it is robust and less prone to noise; (4) it is easily boosted. The computation time of the IMMIGRATE variants is comparable to that of other methods able to detect interaction terms.
IMMIGRATE also has some limitations, and we discuss several directions for improving the algorithm accordingly. First, in Section 3.4.3, small weights are removed using a cutoff to obtain sparse solutions, which makes it hard to perform inference on the obtained weights. Penalty terms such as the $l_1$- or $l_2$-penalty are usually applied to shrink weights and select important ones; our cost function in Equation (5) could be modified to include such a penalty term to replace the weight pruning procedure of Section 3.4.3. Second, although IMMIGRATE is efficient, it is still time-consuming on large datasets. To further improve its computational efficiency for large-scale datasets, training could rely on well-selected prototypes [32], which, as a subset of the original data, are representative while noisy and redundant samples are removed. Third, IMMIGRATE only considers pairwise interactions between features. Interactions among multiple features can play important roles in real applications [33,34]. Our work provides a basis for developing new algorithms to detect multi-feature interactions; for example, one could use a tensor form to represent weights for multi-feature interactions. Fourth, although our iterative optimization procedure is efficient, it yields ad hoc solutions with no guarantee of reaching the global optimum. It remains an open challenge to develop better optimization algorithms. Finally, the selection of an appropriate $\sigma$ currently relies on internal cross-validation, which cannot uncover the underlying properties of $\sigma$. A better strategy may be developed by rigorously investigating the theoretical contributions of $\sigma$.

Author Contributions

Methodology, R.Z. and P.H.; software, R.Z.; validation, R.Z., P.H. and J.S.L.; investigation, R.Z., P.H. and J.S.L.; resources, R.Z., P.H. and J.S.L.; data curation, R.Z. and P.H.; writing—original draft preparation, R.Z.; writing—review and editing, R.Z., P.H. and J.S.L.; supervision, P.H. and J.S.L.; funding acquisition, P.H. and J.S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported partially by the National Science Foundation grants DMS-1613035, DMS-1712714, and OAC-1920147.

Acknowledgments

The authors thank Xin Xing for valuable suggestions to improve the work, and Yang Li for helpful suggestions regarding the R code.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NH          Nearest Hit
NM          Nearest Miss
IM4E        Iterative Margin-Maximization under Max-Min Entropy algorithm
IMMIGRATE   Iterative Max-MIn entropy marGin-maximization with inteRAction TErms algorithm

Appendix A. Information of the Real Datasets

Table A1. Summary of the UCI datasets and the gene expression datasets.

Data    # of Features   # of Instances   Full Name
BCW     9       116      Breast Cancer Wisconsin (Prognostic) https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Prognostic)
CRY     6       90       Cryotherapy https://archive.ics.uci.edu/ml/datasets/Cryotherapy+Dataset+
CUS     7       440      Wholesale customers https://archive.ics.uci.edu/ml/datasets/Wholesale%2Bcustomers
ECO     5       220      Ecoli https://archive.ics.uci.edu/ml/datasets/ecoli
GLA     9       146      Glass Identification https://archive.ics.uci.edu/ml/datasets/glass+identification
HMS     3       306      Haberman's Survival https://archive.ics.uci.edu/ml/datasets/Haberman%27s+Survival
IMM     7       90       Immunotherapy https://archive.ics.uci.edu/ml/datasets/Immunotherapy+Dataset
ION     32      351      Ionosphere https://archive.ics.uci.edu/ml/datasets/ionosphere
LYM     16      142      Lymphography https://archive.ics.uci.edu/ml/datasets/Lymphography
MON     6       432      MONK's Problems https://archive.ics.uci.edu/ml/datasets/MONK’s+Problems
PAR     22      194      Parkinsons https://archive.ics.uci.edu/ml/datasets/parkinsons
PID     8       768      Pima-Indians-Diabetes https://github.com/cran/mlbench/blob/master/data/PimaIndiansDiabetes.rda
SMR     60      208      Connectionist Bench (Sonar, Mines vs. Rocks) https://archive.ics.uci.edu/ml/datasets/Connectionist+Bench+%28Sonar%2C+Mines+vs.+Rocks%29
STA     12      256      Statlog (Heart) http://archive.ics.uci.edu/ml/datasets/statlog+(heart)
URB     147     238      Urban Land Cover https://archive.ics.uci.edu/ml/datasets/Urban+Land+Cover
USE     5       251      User Knowledge Modeling https://archive.ics.uci.edu/ml/datasets/User+Knowledge+Modeling#
WIN     13      130      Wine https://archive.ics.uci.edu/ml/datasets/wine
CRO *   28      9003     Crowdsourced Mapping https://archive.ics.uci.edu/ml/datasets/Crowdsourced+Mapping
ELE *   12      10,000   Electrical Grid Stability Simulated https://archive.ics.uci.edu/ml/datasets/Electrical+Grid+Stability+Simulated+Data+
WAV *   21      3304     Waveform Database Generator https://archive.ics.uci.edu/ml/datasets/Waveform+Database+Generator+(Version+1)
GLI     22,283  85       Gliomas Strongly Predicts Survival [26]
COL     2000    62       Tumor and Normal Colon Tissues [27]
ELO     12,625  173      Myeloma [28]
BRE     24,481  78       Breast Cancer [29]
PRO     12,600  136      Clinical Prostate Cancer Behavior [30]
* Large-scale datasets.

Appendix B. Heat Maps

Figure A1. Heat Maps of Feature Weights Learned by IMMIGRATE. The color bars show the values of corresponding colors in the plots.

References

  1. Fukunaga, K. Introduction to Statistical Pattern Recognition; Elsevier: Amsterdam, The Netherlands, 2013.
  2. Kira, K.; Rendell, L.A. A practical approach to feature selection. In Machine Learning Proceedings 1992; Morgan Kaufmann: Burlington, MA, USA, 1992; pp. 249–256.
  3. Gilad-Bachrach, R.; Navot, A.; Tishby, N. Margin based feature selection-theory and algorithms. In Proceedings of the 21st International Conference on Machine Learning, Banff, AB, Canada, 4–8 July 2004; p. 43.
  4. Kononenko, I. Estimating attributes: Analysis and extensions of RELIEF. In European Conference on Machine Learning; Springer: Berlin, Germany, 1994; pp. 171–182.
  5. Yang, M.; Wang, F.; Yang, P. A Novel Feature Selection Algorithm Based on Hypothesis-Margin. JCP 2008, 3, 27–34.
  6. Sun, Y.; Li, J. Iterative RELIEF for feature weighting. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 913–920.
  7. Sun, Y.; Wu, D. A relief based feature extraction algorithm. In Proceedings of the 2008 SIAM International Conference on Data Mining, Atlanta, GA, USA, 24–26 April 2008; pp. 188–195.
  8. Bei, Y.; Hong, P. Maximizing margin quality and quantity. In Proceedings of the 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), Boston, MA, USA, 17–20 September 2015; pp. 1–6.
  9. Urbanowicz, R.J.; Meeker, M.; La Cava, W.; Olson, R.S.; Moore, J.H. Relief-based feature selection: Introduction and review. J. Biomed. Inform. 2018, 85, 189–203.
  10. Schapire, R.E. The strength of weak learnability. Mach. Learn. 1990, 5, 197–227.
  11. Kuhn, H.W.; Tucker, A.W. Nonlinear programming. In Traces and Emergence of Nonlinear Programming; Springer: Berlin, Germany, 2014; pp. 247–258.
  12. Li, Y.; Liu, J.S. Robust variable and interaction selection for logistic regression and general index models. J. Am. Stat. Assoc. 2018, 114, 1–16.
  13. Bien, J.; Taylor, J.; Tibshirani, R. A lasso for hierarchical interactions. Ann. Stat. 2013, 41, 1111.
  14. Freund, Y.; Schapire, R.E. Experiments with a new boosting algorithm. Icml 1996, 96, 148–156.
  15. Freund, Y.; Mason, L. The alternating decision tree learning algorithm. Icml 1999, 99, 124–133.
  16. Soentpiet, R. Advances in Kernel Methods: Support Vector Learning; MIT Press: Cambridge, MA, USA, 1999.
  17. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. 1996, 58, 267–288.
  18. John, G.H.; Langley, P. Estimating continuous distributions in Bayesian classifiers. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1995; pp. 338–345.
  19. Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1994.
  20. Aha, D.W.; Kibler, D.; Albert, M.K. Instance-based learning algorithms. Mach. Learn. 1991, 6, 37–66.
  21. Weinberger, K.Q.; Saul, L.K. Distance metric learning for large margin nearest neighbor classification. J. Mach. Learn. Res. 2009, 10, 207–244.
  22. Robnik-Šikonja, M.; Kononenko, I. Theoretical and empirical analysis of ReliefF and RReliefF. Mach. Learn. 2003, 53, 23–69.
  23. Fisher, R.A. The use of multiple measurements in taxonomic problems. Ann. Eugen. 1936, 7, 179–188.
  24. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
  25. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
  26. Freije, W.A.; Castro-Vargas, F.E.; Fang, Z.; Horvath, S.; Cloughesy, T.; Liau, L.M.; Mischel, P.S.; Nelson, S.F. Gene expression profiling of gliomas strongly predicts survival. Cancer Res. 2004, 64, 6503–6510.
  27. Alon, U.; Barkai, N.; Notterman, D.A.; Gish, K.; Ybarra, S.; Mack, D.; Levine, A.J. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Natl. Acad. Sci. USA 1999, 96, 6745–6750.
  28. Tian, E.; Zhan, F.; Walker, R.; Rasmussen, E.; Ma, Y.; Barlogie, B.; Shaughnessy, J.D., Jr. The role of the Wnt-signaling antagonist DKK1 in the development of osteolytic lesions in multiple myeloma. N. Engl. J. Med. 2003, 349, 2483–2494.
  29. Van’t Veer, L.J.; Dai, H.; Van De Vijver, M.J.; He, Y.D.; Hart, A.A.; Mao, M.; Peterse, H.L.; Van Der Kooy, K.; Marton, M.J.; Witteveen, A.T.; et al. Gene expression profiling predicts clinical outcome of breast cancer. Nature 2002, 415, 530.
  30. Singh, D.; Febbo, P.G.; Ross, K.; Jackson, D.G.; Manola, J.; Ladd, C.; Tamayo, P.; Renshaw, A.A.; D’Amico, A.V.; Richie, J.P.; et al. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell 2002, 1, 203–209.
  31. Frank, A.; Asuncion, A. UCI Machine Learning Repository. Available online: http://archive.ics.uci.edu/ml (accessed on 1 August 2019).
  32. Garcia, S.; Derrac, J.; Cano, J.; Herrera, F. Prototype selection for nearest neighbor classification: Taxonomy and empirical study. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 417–435.
  33. Yu, S.; Giraldo, L.G.S.; Jenssen, R.; Principe, J.C. Multivariate Extension of Matrix-based Renyi’s α-order Entropy Functional. IEEE Trans. Pattern Anal. Mach. Intell. 2019.
  34. Vinh, N.X.; Zhou, S.; Chan, J.; Bailey, J. Can high-order dependencies improve mutual information based feature selection? Pattern Recognit. 2016, 53, 46–58.
Figure 1. Flow chart of IMMIGRATE. Step 0: Initialize $W$ randomly under the constraints $W \geq 0$, $W^T = W$, and $\|W\|_F^2 = 1$. Step 1: Fix $W$, update $\{\alpha_{n,h}\}$ and $\{\beta_{n,m}\}$. Step 2: Fix $\{\alpha_{n,h}\}$ and $\{\beta_{n,m}\}$, update $W$. Steps 1 and 2 are iterated to optimize the cost function, where $\Delta C$ is the change of the cost function in (5) and $\epsilon$ is a pre-set limit.
Figure 2. The synthesized dataset with 10% noise.
Figure 3. IMMIGRATE (IGT) is more robust than LFE.
Figure 4. Results of the paired t-test on gene expression datasets (top subplot) and UCI datasets (bottom subplot). The top subplot shows how well (i.e., "Win" (red bars), "Tie" (green bars), and "Lose" (blue bars)) our Boosted IM4E-IMMIGRATE performs compared with other approaches. In the bottom subplot, the results of methods labeled in black are the comparisons with our IMMIGRATE, and the results of methods (ADB, RF, and XGB) labeled in blue are the comparisons with our BIM.
Table 1. Summary of the accuracies on five high-dimensional gene expression datasets 1.

Data   SV1   SV2   LAS   DT    NBC   1NN   3NN   SOD   RF    XGB   IM4   EGT   B4G
GLI    85.1  86.0  85.2  83.8  83.0  88.7  87.7  88.7  87.6  86.3  87.5  89.1  89.9
COL    73.7  82.0  80.6  69.2  71.1  72.1  77.9  78.1  82.6  79.5  84.3  78.6  82.5
ELO    72.9  90.2  74.6  77.3  76.3  85.6  91.3  86.9  79.2  77.9  88.9  88.6  88.4
BRE    76.0  88.7  91.4  76.4  69.4  83.0  73.6  82.6  86.3  87.3  88.1  90.2  91.5
PRO    71.3  69.9  87.9  86.4  68.0  83.2  82.7  83.2  91.8  90.5  88.0  89.5  89.7
W,T,L 2  5,0,0  4,0,1  4,1,0  5,0,0  5,0,0  5,0,0  4,0,1  5,0,0  3,1,1  4,0,1  3,1,1  -,-,-  -,-,-
1 Ten-fold cross-validation is performed ten times, i.e., 100 trials are carried out for each dataset. The average accuracy is reported for each dataset in Table 1, Table 2 and Table 3. The paired Student's t-test is carried out to compare the results of the Boosted IM4E-IMMIGRATE (B4G) with those of each other algorithm. Under the significance level of α = 0.05, an algorithm is significantly better than another (i.e., the first algorithm wins) on a dataset if the p-value of the paired Student's t-test is less than α = 0.05. The same rule is applied to the results reported in Table 2 and Table 3. 2 The last row shows the number of times the Boosted IM4E-IMMIGRATE (B4G) wins/ties/loses compared with each algorithm in the table using the paired t-test.
Table 2. Summary of the accuracies on the UCI datasets.

Data   SV1   SV2   LAS   DT    NBC   RBF   1NN   3NN   LMN   REL   RFF   SIM   LFE   LDA   SOD   HIN   IM4   IGT
BCW    61.4  66.6  71.4  70.5  62.4  56.9  68.2  72.2  69.5  66.4  67.1  67.7  67.1  73.9  65.2  71.8  66.4  74.5
CRY    72.9  90.6  87.4  85.3  84.4  89.7  89.1  85.4  87.8  73.8  77.2  79.7  86.0  88.6  86.0  87.9  86.2  89.8
CUS    86.5  88.9  89.6  89.6  89.5  86.8  86.5  88.7  88.8  82.1  84.7  84.3  86.4  90.3  90.8  90.3  87.5  90.1
ECO    92.9  96.9  98.6  98.6  97.8  94.6  96.0  97.8  97.8  89.0  90.7  91.2  93.1  99.0  97.9  98.7  97.5  98.2
GLA    64.2  76.7  72.3  79.4  69.5  73.0  81.1  78.1  79.4  64.1  63.5  67.1  81.2  72.0  75.3  75.0  78.0  87.5
HMS    63.8  64.5  67.7  72.5  67.2  66.8  66.0  69.3  71.2  65.3  66.0  65.7  64.9  69.0  67.4  69.4  66.6  69.2
IMM    74.3  70.6  74.4  84.1  77.9  67.3  69.4  77.9  76.7  69.9  71.8  69.0  75.0  75.2  72.3  70.2  80.7  83.8
ION    80.5  93.5  83.6  87.4  89.4  79.9  86.7  84.1  84.5  85.8  86.2  84.2  91.0  83.3  90.3  92.6  88.3  92.9
LYM    83.6  81.5  85.2  75.2  83.6  71.1  77.2  82.8  86.6  64.9  71.0  70.4  79.6  85.2  79.3  84.8  83.3  87.2
MON    74.4  91.7  75.0  86.4  74.0  68.2  75.1  84.4  84.9  61.4  61.8  65.0  64.8  74.4  91.9  97.2  75.6  99.5
PAR    72.7  72.5  77.1  84.8  74.1  71.5  94.6  91.4  91.8  87.3  90.3  84.6  94.0  85.6  88.2  89.5  83.2  93.8
PID    65.6  73.1  74.7  74.3  71.2  70.3  70.3  73.5  74.0  64.8  68.0  67.0  67.8  74.5  75.7  74.1  72.1  74.7
SMR    73.5  83.9  73.6  72.3  70.3  67.1  86.9  84.7  86.1  69.5  78.3  81.0  84.3  73.1  70.5  83.0  76.4  86.5
STA    69.8  71.6  70.8  68.9  71.0  69.5  67.8  70.8  71.3  59.7  64.0  63.0  66.7  71.3  71.8  69.2  70.8  75.9
URB    85.2  87.9  88.1  82.6  85.8  75.3  87.2  87.5  87.9  81.9  83.2  73.0  87.9  73.0  87.9  88.3  87.4  89.9
USE    95.7  95.2  97.2  93.2  90.6  84.9  90.5  91.5  92.0  54.5  63.7  69.5  85.8  96.9  96.2  96.5  94.1  96.4
WIN    98.3  99.3  98.6  93.1  97.3  97.2  96.4  96.6  96.5  87.2  95.0  95.0  93.8  99.7  92.9  98.9  98.2  99.0
CRO *  75.4  97.5  89.9  91.0  88.8  75.4  98.4  98.5  98.6  98.5  98.7  95.1  98.6  89.1  95.2  95.5  81.9  98.2
ELE *  72.3  95.7  79.9  80.0  82.5  70.8  81.1  83.9  89.7  64.6  75.4  76.2  79.8  79.9  93.7  93.6  83.2  93.7
WAV *  90.0  91.9  92.2  86.2  91.4  84.0  86.5  88.3  88.8  77.6  80.0  83.6  84.7  91.8  92.0  92.1  91.1  92.4
W,T,L 1  20,0,0  16,2,2  15,4,1  16,3,1  19,1,0  20,0,0  17,2,1  18,2,0  16,3,1  19,1,0  19,1,0  19,1,0  18,2,0  15,4,1  13,4,3  12,7,1  19,0,1  -,-,-
* Large-scale datasets. 1 The last row (W,T,L) shows the number of times that IMMIGRATE (IGT) wins/ties/loses against the corresponding algorithm based on the paired t-test on the cross-validation results.
Table 3. Summary of the accuracies of the ensemble methods on the UCI datasets.

Data   ADB   RF    XGB   BIM
BCW    78.2  78.6  78.6  78.3
CRY    90.4  92.9  89.9  91.5
CUS    90.8  91.1  91.4  91.0
ECO    98.0  98.9  98.2  98.6
GLA    85.0  87.0  87.9  86.8
HMS    65.8  72.1  70.0  72.0
IMM    77.2  84.2  81.7  86.1
ION    92.1  93.5  92.5  93.1
LYM    84.8  87.0  87.4  88.1
MON    98.4  95.8  99.1  99.7
PAR    90.5  91.0  91.9  93.2
PID    73.5  76.0  75.1  76.2
SMR    81.4  82.8  83.3  86.6
STA    69.0  71.3  69.5  74.1
URB    87.9  88.6  88.8  91.4
USE    96.0  95.3  94.9  96.1
WIN    97.5  99.1  98.2  99.1
CRO *  97.3  97.4  98.5  98.6
ELE *  91.1  92.3  95.2  94.1
WAV *  89.5  91.2  90.8  93.3
W,T,L 1  17,3,0  11,8,1  14,4,2  -,-,-
* Large-scale datasets. 1 The last row (W,T,L) shows the number of times that Boosted IMMIGRATE (BIM) wins/ties/loses against the corresponding algorithm based on the paired t-test on the cross-validation results.
