A New Under-Sampling Method to Face Class Overlap and Imbalance

Class overlap and class imbalance are two data complexities that challenge the design of effective classifiers in Pattern Recognition and Data Mining, as they may cause a significant loss in performance. Several solutions have been proposed to face both data difficulties, but most of these approaches tackle each problem separately. In this paper, we propose a two-stage under-sampling technique that combines the DBSCAN clustering algorithm, to remove noisy samples and clean the decision boundary, with a minimum spanning tree algorithm to face the class imbalance, thus handling class overlap and imbalance simultaneously with the aim of improving the performance of classifiers. An extensive experimental study shows a significantly better behavior of the new algorithm as compared to 12 state-of-the-art under-sampling methods using three standard classification models (nearest neighbor rule, J48 decision tree, and support vector machine with a linear kernel) on both real-life and synthetic databases.


Introduction
The class imbalance problem is a challenging situation common to many real-world applications such as fraud detection [1], fault/failure diagnosis [2], face recognition [3], text classification [4], sentiment analysis [5], and credit risk prediction [6], among others. A binary data set is said to be imbalanced when one of the classes (the minority or positive class, C+) has a significantly lower number of instances than the other class (the majority or negative class, C−) [7]. The disproportion between the number of positive and negative instances leads to a bias towards the majority class that may imply an important deterioration of the classification performance on the minority class.
Many authors have asserted that the class imbalance distribution by itself does not represent a critical problem for classification; however, when it is associated with other data complexity factors, it can significantly decrease the classification performance because traditional classifiers tend to err on many positive instances [8]. García et al. [9] observed that the combination of class imbalance and highly overlapping class distributions results in a significant performance deterioration of instance-based classifiers. Class overlap refers to ambiguous regions of the feature space where the prior probabilities of both classes are approximately equal. The solutions proposed to face the class imbalance problem are commonly grouped into four categories:
1. Algorithmic-level methods. These consist in internally biasing the discrimination-based process so as to compensate for the class imbalance.
2. Data-level methods. These perform some sort of data preprocessing with the aim of reducing the imbalance ratio.
3. Cost-sensitive methods. These incorporate distinct misclassification costs into the classification process and assign higher misclassification costs to the errors on the minority class.
4. Ensemble-based techniques. These combine an ensemble algorithm with either the data-level or the cost-sensitive approach: in the former, data preprocessing is performed before training the classifier, whereas in the latter, the misclassification costs guide the ensemble algorithm.
Conclusions about the best solution for the class imbalance problem are divergent, but the data-level methods are likely the most investigated because they are not classifier-dependent, do not need any algorithmic adaptation to the data sets, and can be easily implemented for any problem [16]. Thus, focusing on the data-level approach, resampling methods adjust the data set size with the aim of balancing the class distribution, either by decreasing the number of majority class instances (under-sampling) or increasing the number of minority class instances (over-sampling). A hybrid resampling approach combines both over-sampling and under-sampling strategies. As remarked by several authors, under-sampling methods may throw away potentially useful data as a result of ignoring most of the majority class, whereas over-sampling methods can lead to overfitting on the minority class and an increase in complexity and execution time [17]. On the other hand, it is worth pointing out that resampling methods have also been used to face the imbalance problem together with class overlap and/or the presence of noisy and borderline instances in the data set [8,18–21], a common situation that has motivated the research presented in this paper.
Although most traditional resampling methods are based on the k nearest neighbor (kNN) rule [22–24], clustering algorithms have been demonstrated to be an even more powerful strategy for addressing the class imbalance problem [16,25,26] because they may reduce the risk of removing potentially useful data and, therefore, can achieve better results than the kNN-based resampling methods. On the other hand, despite the fact that over-sampling has been shown to be apparently superior to under-sampling [27,28], the under-sampling approach can be especially useful when dealing with big data applications [29] since these algorithms lead to a reduction of the data set size and also a decrease in the computational requirements of the subsequent classification process. Bearing both these issues in mind, the present paper introduces a new under-sampling technique (here called DBMIST-US) to face not only the class imbalance but also the overlapping between classes. It deals with the noisy instances in the majority class via a noise filter based on the DBSCAN clustering algorithm [30] combined with the use of a minimum spanning tree (MST) algorithm to reduce the size of the negative class. The reason to combine the DBSCAN clustering with the MST approach is that DBSCAN has been demonstrated to be a powerful tool for identifying and removing noisy instances and cleaning the overlapping between classes [31], but it does not produce a well-balanced class distribution. By viewing the data set as a weighted complete graph, the MST algorithm allows for discovering the core of the majority class, which is further used to remove the amount of redundant negative instances needed to balance both classes.
Hereafter, the remainder of this paper is organized as follows. Section 2 introduces several well-known methods to tackle the class imbalance problem. Details of the DBMIST-US algorithm here proposed are described in Section 3. The experimental setup is reported in Section 4. In Section 5, the results are analyzed and discussed. Finally, Section 6 presents the main conclusions and outlines some open lines to be addressed in the future.

Resampling Algorithms to Face Class Imbalance
In this section, a collection of both kNN-based and clustering-based algorithms for dealing with the class imbalance problem is briefly described.

Neighborhood-Based Algorithms
Most kNN-based under-sampling algorithms are inherited from the literature on prototype selection. These techniques exploit the analysis of the feature space, either by removing instances far from the decision boundary of two classes to reduce redundancy (condensing or thinning) or removing those close to the boundary for generalization (editing, cleaning, or filtering) [32]. Wilson's editing [33] and Hart's condensing [22] are the most representative examples of these families of prototype selection methods.
Wilson's editing (ENN) removes all instances that are misclassified by the kNN rule. The Hart's condensing (CNN) algorithm uses the concept of consistent subset to eliminate the majority class instances that are sufficiently far away from the decision boundary because these are considered to be irrelevant for learning. Analogously, the Tomek links (TL) [23,34] have also been employed to remove the majority class instances since, if two instances form a Tomek link, then either one of these is noise or both instances are borderline.
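To make the Tomek-link condition concrete, the following sketch finds mutual nearest neighbors with opposite class labels. It is a minimal brute-force illustration (the function and helper names are ours, not from the paper), using plain Euclidean distance:

```python
import math

def nearest(i, X):
    """Index of the nearest neighbor of X[i] (Euclidean distance)."""
    best, best_d = None, float("inf")
    for j, p in enumerate(X):
        if j == i:
            continue
        d = math.dist(X[i], p)
        if d < best_d:
            best, best_d = j, d
    return best

def tomek_links(X, y):
    """Return index pairs (i, j) forming Tomek links: two instances of
    opposite classes that are each other's nearest neighbor."""
    links = []
    for i in range(len(X)):
        j = nearest(i, X)
        if y[i] != y[j] and nearest(j, X) == i and i < j:
            links.append((i, j))
    return links
```

An under-sampler in the spirit of TL would then drop the majority-class member of each detected pair.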
The one-sided selection (OSS) technique developed by Kubat and Matwin [35] eliminates only the majority class instances that are redundant or noisy (i.e., those that are bordering the minority class). The concept of Tomek links is used to discover the border instances, whereas the Hart's condensing algorithm is applied to discover the redundant cases (i.e., those that are far enough from the decision boundary).
Laurikkala [24] proposed the neighborhood cleaning rule (NCL) algorithm, which is quite similar to the OSS technique. With NCL, the majority class instances whose class label differs from the class of at least two of their three nearest neighbors are removed by means of Wilson's editing [33]. In addition, this algorithm also discards the neighbors that belong to the majority class if a minority class instance is misclassified by its three nearest neighbors.

Clustering-Based Algorithms
Clustering-based under-sampling methods have become a well-grounded alternative to neighborhood-based algorithms for handling class imbalance because they can mitigate the problem of information loss caused by the removal of negative instances [16].
Yen and Lee introduced a cluster-based under-sampling algorithm named SBC (under-sampling based on clustering) [17], which rests upon the idea that there may exist various clusters in a data set, each cluster i having a different ratio of majority class instances to minority class instances. Thus, all instances are initially grouped into K clusters; then, a number of negative instances is randomly selected from each cluster according to its ratio of majority class instances to minority class instances. Finally, the selected negative instances from the K clusters are combined with all the instances in the minority class. A similar under-sampling method is the algorithm proposed by Longadge et al. [36], which first clusters the majority class instances into K groups using the K-means algorithm and then selects |C+| × IR_i majority class instances from each cluster i, where IR_i denotes the imbalance ratio in cluster i. Note that the aim of this method is not to obtain a perfectly balanced class distribution, but to reduce the disproportion between the sizes of the majority and minority classes.
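The per-cluster selection rule above can be sketched as follows. This is a simplified reading of the SBC idea (the function name and the pseudo-count for clusters without positives are our assumptions): a target number of majority-class instances is distributed across clusters in proportion to each cluster's majority/minority ratio.

```python
def per_cluster_quota(cluster_neg, cluster_pos, target_total):
    """Distribute target_total majority-class samples across clusters in
    proportion to each cluster's ratio of negatives to positives.
    Clusters with no positives use a pseudo-count of 1 to keep the
    ratio finite (an assumption, not specified in the text above)."""
    ratios = [n / max(p, 1) for n, p in zip(cluster_neg, cluster_pos)]
    total = sum(ratios)
    return [round(target_total * r / total) for r in ratios]
```

For example, a cluster holding 90 negatives against 10 positives would contribute nine times as many sampled negatives as a balanced 10-vs-10 cluster.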
The ClusterOSS algorithm introduced by Barella et al. [37] is an improvement of the OSS method. It starts by using the K-means algorithm to cluster the majority class instances. Then, the closest instances to the center of each cluster are used to start the application of OSS, removing borderline and noisy negative instances using the Tomek links. Unlike OSS, this technique does not start the under-sampling process at random, but defines how many and which instances should be chosen by applying the K-means clustering.
Sowah et al. proposed an under-sampling technique named CUST [38] whose main objective is to remove redundant and noisy instances along with outliers from the majority class. In a first stage, the algorithm removes noisy negative instances using a technique based on Tomek links. In a second stage, outliers and redundant negative instances are eliminated by using the K-means algorithm to cluster the remaining majority class instances.
Das et al. introduced the ClusBUS algorithm [39], which aims at discarding negative instances that lie in class overlapping regions. First, it clusters the entire data set into K clusters using the DBSCAN algorithm. Then, for those clusters that contain both positive and negative instances, the algorithm removes a number of majority class instances necessary to create a vacuum around the minority class instances. Tsai et al. presented the CBIS technique [40] based on clustering analysis and instance selection: the affinity propagation algorithm groups similar negative instances into K clusters, and then an instance selection process filters out instances in each cluster. Finally, all reduced K clusters together with the instances of minority class form a balanced data set.
Ng et al. proposed the diversified sensitivity-based under-sampling (DSUS) method [41]. It selects useful instances yielding high stochastic sensitivity with respect to the currently trained classifier. To guarantee that only instances close to the decision boundary are selected and to preserve the data distribution, negative instances are clustered and only a representative instance of each cluster participates in the selection process. Lin et al. developed two under-sampling strategies in which the K-means clustering technique is used [26]. Unlike the algorithms described previously, the number of clusters of negative instances is here defined to be equal to the size of the minority class. The first strategy uses the cluster centers to represent the majority class, whereas the second strategy employs the nearest neighbors of the centers. In the Fast-CBUS method introduced by Ofek et al. [16], the minority class is clustered into K clusters by the K-means algorithm and, for each cluster, a similar number of majority class instances close to the minority class instances are sampled.
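The second strategy of Lin et al. (representing the majority class by real instances near the cluster centers) can be sketched as below, assuming the centers have already been computed by K-means; the helper name is ours:

```python
import math

def centers_to_instances(centers, X_neg):
    """For each cluster center, keep the closest real majority-class
    instance; the kept instances form the reduced majority class
    (a sketch of the nearest-neighbor-of-centers strategy)."""
    chosen = []
    for c in centers:
        j = min(range(len(X_neg)), key=lambda i: math.dist(c, X_neg[i]))
        chosen.append(X_neg[j])
    return chosen
```

Using real instances rather than the synthetic centers keeps the reduced majority class inside the original data distribution.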
It is also worth pointing out that some hybrid algorithms combine clustering with other techniques. For instance, the clustering-based balancing (ClusterBal) method proposed by Sun et al. [42] combines clustering with classifier ensembles: it divides the majority class instances into K subsets by clustering, and then all the minority instances are added to each subset to train a focused classifier. On the other hand, the cluster-based evolutionary under-sampling (CBEUS) [43] combines the K-means clustering applied to the majority class with a genetic algorithm.

The DBMIST-US Algorithm
In this section, we describe the new DBMIST-US algorithm for handling class overlap and imbalance by under-sampling the data set. The purpose is to obtain a subset of representative instances from the majority class while avoiding the risk of removing instances that may contribute to generating knowledge. The overall process of DBMIST-US is described in Algorithm 1, where IR_max is the unique free parameter representing the maximum imbalance ratio allowed. It starts by dividing a two-class imbalanced data set of n instances into a subset C− with the majority class instances and a subset C+ containing the minority class instances. After this initial step, the algorithm consists of two stages:
1. Cleaning stage (lines 3–7). The DBSCAN clustering method [30] is applied to the set of negative instances to produce a noise-free subset.
2. Core stage (line 8). An MST is built to get a core representation of the majority class. The result is a subset of the majority class with less dispersion than the set obtained in the previous stage.

The cleaning and core stages correspond to the following fragment of Algorithm 1:

5: C− ← DBSCAN(C−, ε, minPts)
6: until ε and minPts do not change
7: DS ← C+ ∪ C−
8: C− ← MSTGraph(DS, IR_max)
9: DS ← C+ ∪ C−

DBSCAN is a clustering algorithm that assumes that clusters correspond to dense regions in the feature space. It can discover groups without any prior knowledge about the number of clusters in the data, and it performs well on noisy data sets. DBSCAN requires two input parameters: ε defines the radius of the neighborhood region (i.e., how close instances should be to each other to be considered part of the same cluster), and minPts establishes the minimum number of instances that must lie within the ε-neighborhood to form a cluster. Unlike the original proposal [44], where the input values of ε and minPts are computed by clustering the data set with the K-means algorithm, our DBSCAN formulation only uses |C−| and |C+| to compute ε and minPts, respectively. DBSCAN begins by randomly selecting a negative instance p−_i from the current set C−. If the analyzed instance p−_i does not have at least minPts neighboring instances at a distance ε, it is considered noisy and removed from the set C−. Note that the rationale behind this is to remove those instances that are close enough to the decision boundary. This process is repeated until minPts and ε do not change. After applying the DBSCAN algorithm on C−, an MST is computed by the process MSTGraph described in Algorithm 2.
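A single pass of the density-based cleaning idea can be sketched as follows. This is only an illustration of the filtering criterion, not the full DBSCAN algorithm or the paper's rule for deriving ε and minPts from |C−| and |C+|; here eps and min_pts are supplied directly:

```python
import math

def density_noise_filter(X_neg, eps, min_pts):
    """Keep a majority-class instance only if at least min_pts other
    negatives lie within distance eps of it; sparse instances are
    treated as noise and dropped (one cleaning pass, brute force)."""
    kept = []
    for i, p in enumerate(X_neg):
        neighbors = sum(1 for j, q in enumerate(X_neg)
                        if j != i and math.dist(p, q) <= eps)
        if neighbors >= min_pts:
            kept.append(p)
    return kept
```

In DBMIST-US this cleaning is iterated until ε and minPts no longer change, so the filter above would sit inside a loop.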

Core Stage
After cleaning the majority class with DBSCAN, a graph-based approach is used to compute a candidate core of the set C−. A weighted complete graph G_w = (V(G), E(G)) is built from the set of majority class instances C− given by the DBSCAN-based stage, where each instance becomes a vertex and each edge is weighted with the distance between its two endpoints. The purpose of computing an MST is to build a subgraph that connects all vertices in G_w without forming cycles (i.e., there is no path from a vertex x to a vertex y such that x = y) and whose sum of edge weights is the smallest. Figure 1 shows an illustrative example of an MST (b) computed from a weighted graph (a). To build an MST, an incidence matrix M_G (line 6) processed by means of Prim's algorithm [45] is used. The process is as follows:
1. Choose an initial vertex v at random.
2. Select the edge e = {v, u} with the lowest weight in M_G and mark v as visited. Now, the next vertex to be analyzed is u.
3. Repeat Step 2, now taking u as the initial vertex of the edge e = {u, z}, while there exists a vertex z that has not been visited yet.
4. The MST is completed by backtracking on the already marked vertices until every vertex has been marked.
By definition, an MST contains every vertex of the graph. Our proposal takes only a subset of the MST, namely the first S vertices, to build the new majority class C−, where S is determined by the maximum imbalance ratio required (lines 8–10). To guarantee a 95% confidence level in the imbalance ratio, the values Z = 1.96, σ = 0.5, and e = 0.05 were chosen according to the procedure given by Torres et al. [46].
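The core-extraction step can be sketched with a brute-force Prim traversal that stops after the first s vertices have joined the tree. This is an illustrative reading of the procedure above (the function name is ours, and s would in practice be derived from IR_max), not the paper's implementation:

```python
import math

def mst_core(X_neg, s):
    """Grow an MST over the complete Euclidean graph on X_neg with
    Prim's algorithm, returning the first s vertices in the order they
    are added to the tree (a sketch of the core-extraction step)."""
    n = len(X_neg)
    visited = [0]                      # start from an arbitrary vertex
    while len(visited) < min(s, n):
        best, best_d = None, float("inf")
        for v in visited:              # cheapest edge leaving the tree
            for u in range(n):
                if u not in visited:
                    d = math.dist(X_neg[v], X_neg[u])
                    if d < best_d:
                        best, best_d = u, d
        visited.append(best)
    return [X_neg[i] for i in visited]
```

Because Prim's algorithm always attaches the cheapest available edge, the first vertices added tend to belong to the densest region of the majority class, which is exactly the "core" the method keeps.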

Time Complexity
The time complexity of an algorithm is estimated by counting the number of operations performed [47]. Thus, the time complexity of the DBMIST-US algorithm depends on its two stages: the run-time complexity of DBSCAN is O(n²) in the worst case [48], and the time complexity of MSTGraph is also O(n²), which depends on the time complexity of the following processes:
• The computation of the incidence matrix from G_w takes n² steps.
• The computation of an MST based on Prim's algorithm takes n² steps.
Therefore, as the three procedures are independent, the complexity of DBMIST-US is O(n²) in the worst case.

Differences between DBMIST-US and Related Works
The algorithm proposed in this work differs from other related methods in several aspects. First, most developments focus on handling either class imbalance or class overlap separately, while DBMIST-US faces both intrinsic data complexities together. Although one can find a few works addressing both problems, in general, they consist of the application of different techniques in a sequential manner. For instance, Chen et al. designed a methodology based on the use of the NCL algorithm to remove the overlapping majority class instances followed by an ensemble to randomly under-sample the majority class [49]. Similarly, Koziarski and Wożniak proposed an energy-based approach to clean the neighborhoods of minority class instances combined with an over-sampling procedure [50]. Other methods have been also proposed to handle imbalanced data sets with noisy and borderline instances together, but they are mostly based on modifying some previous resampling algorithm; in this line, for instance, Sáez et al. proposed an extension of the well-known SMOTE over-sampling algorithm through the inclusion of an iterative ensemble-based noise filter [8].
Regarding the differences between DBMIST-US and the existing clustering-based methods, it is worth pointing out that most under-sampling techniques rely upon the K-means and fuzzy K-means algorithms [16,26,36–38,40,41,43,51]. However, it is well-known that K-means may not be sufficiently effective when applied to imbalanced data because it tends to generate clusters of similar sizes [52]. Another weakness of these clustering algorithms is that they assume some prior knowledge of the number of clusters. To overcome these shortcomings, our proposal employs the DBSCAN algorithm in the filtering stage. Some of the advantages of using DBSCAN are: (i) it can identify arbitrarily shaped clusters and detect noise, and (ii) the number of clusters is not a user-specified parameter. In addition, an MST-based approach is used to under-sample the majority class, which, to the best of our knowledge, is the first development using the MST to face the imbalance problem. A related approach employs the Gabriel graph and the relative neighborhood graph, which are supergraphs of the Euclidean MST, to search for neighbors in SMOTE [53].
Finally, differences between DBMIST-US and other resampling methods that use DBSCAN can also be remarked. Sanguanmak and Hanskunatai proposed the hybrid DBSM algorithm, which employs DBSCAN for under-sampling and the SMOTE technique for over-sampling [54]; here, DBSCAN is used to create clusters from the whole training set, and then 50% of the majority class instances are eliminated from each cluster. Another approach is the DBSMOTE algorithm [55], which integrates DBSCAN and SMOTE; DBSCAN is applied to discover arbitrarily shaped clusters and SMOTE generates more synthetic positive instances around the core of the minority class rather than at the border. The main difference between our proposal and both these methods is that, unlike DBMIST-US, these are for over-sampling. On the other hand, the DBMUTE technique combines DBSCAN with the under-sampling MUTE algorithm [56] to discover and remove majority class instances from the overlapping region [57], but this is not intended to identify and remove noisy instances.

Experimental Set-Up
Extensive experiments were conducted to evaluate the usefulness of the DBMIST-US method when compared to a pool of state-of-the-art under-sampling techniques. In summary, the research questions investigated in this experimental study were:
Q1. What is the classification performance of DBMIST-US in comparison to several state-of-the-art under-sampling algorithms?
Q2. How robust is DBMIST-US across different classification models?
Q3. What is the impact of each under-sampling algorithm on the imbalance ratio?

Data Sets
We conducted the experiments on 20 real-life data sets (Table 1) and 12 synthetic data sets taken from the KEEL Data Set Repository (https://sci2s.ugr.es/keel/imbalanced.php#subA). All data sets have two classes and various levels of imbalance (the imbalance ratio ranges from 9.35 to 82).
The experiments on the synthetic data were carried out on three databases with different shapes of the minority class (subclus, clover, and paw) whose instances are randomly and uniformly distributed in a two-dimensional feature space. In all cases, the positive instances are uniformly surrounded by the majority class. Each database comprises 800 instances with an imbalance ratio of 7 and different levels of random noise (0%, 30%, 50%, and 70%) on both classes: data from both the majority and minority classes were randomly mislabeled and some single feature-values were randomly changed.
In subclus, the positive instances are located inside rectangles that form small disjuncts. Clover represents a more complex, nonlinear problem, where the minority class resembles a flower with elliptic petals. In the paw databases, the minority class is decomposed into three elliptic sub-regions of varying cardinalities, where two sub-regions are located close to each other and the remaining, smaller sub-region is separated. Figure 2 illustrates two examples of these data sets: 0% and 70% of noise.

Reference Under-Sampling Algorithms and Classifiers
Besides the DBMIST-US method with a maximum imbalance ratio set to 2, we also implemented some state-of-the-art under-sampling algorithms already described in Section 2: CNN, ENN, TL, NCL, OSS, SBC, and ClusterOSS. In addition, the experiments also included a collection of methods that are popular in the literature related to class imbalance (see Table 2).

Random under-sampling (RUS): balances the data set through the random elimination of instances that belong to the over-sized class.
Evolutionary under-sampling (EUS): removes instances from the majority class under the guidance of a genetic algorithm [58].
Easy ensemble (EE): divides the data set into several subsets by random resampling with replacement; each subset is then used to train a base classifier of the ensemble with AdaBoost [59].
Balance cascade (BC): performs bagging and removes negative instances that can be classified correctly with high confidence from future selections [59].
Random under-sampling boosting (RUSBOOST): combines RUS with the AdaBoost algorithm [60].
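RUS, the simplest of these baselines, can be sketched in a few lines (the function name and the fixed seed are ours, for reproducibility of the sketch):

```python
import random

def random_under_sample(X, y, majority_label, seed=0):
    """Random under-sampling (RUS): discard randomly chosen
    majority-class instances until both classes have the same size."""
    rng = random.Random(seed)
    maj = [i for i, c in enumerate(y) if c == majority_label]
    mino = [i for i, c in enumerate(y) if c != majority_label]
    keep = sorted(set(mino) | set(rng.sample(maj, len(mino))))
    return [X[i] for i in keep], [y[i] for i in keep]
```

The result is a perfectly balanced set (imbalance ratio 1), at the cost of discarding majority-class instances blindly.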
To investigate the robustness of the experimental results and obtain fair comparisons between the under-sampling algorithms, we evaluated the performance across three popular classification models of different nature: an instance-based learner (nearest neighbor, 1NN), a decision tree (non-pruned J48) and a linear classifier (support vector machine with a linear kernel, SVM). The parameters of these classifiers throughout the experiments were the default values of the WEKA open source machine learning software [61].
The evaluation of the under-sampling algorithms was estimated by 10-fold cross-validation in order to avoid biased results [62]. Each original data set was randomly divided into 10 stratified parts: for each fold, nine blocks were used as a training set, and the remaining block was used for testing each of the three classifiers trained on the data produced by the different under-sampling strategies. Finally, the performance measures of each method were averaged over the 10 trials.
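The stratified splitting used in this protocol can be sketched as follows. This is a simplified round-robin version (the function name is ours); the key point, implied by the protocol above, is that the under-sampler must be applied only to the training part of each fold, never to the test block:

```python
def stratified_kfold(y, k=10):
    """Yield (train_idx, test_idx) pairs where each fold approximately
    preserves the class proportions of y (round-robin assignment of
    the indices of each class across the k folds)."""
    by_class = {}
    for i, c in enumerate(y):
        by_class.setdefault(c, []).append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for pos, i in enumerate(idxs):
            folds[pos % k].append(i)
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test
```

Resampling before splitting would leak information between training and test sets and inflate the measured performance.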

Evaluation Metrics
For a two-class problem, Table 3 shows a 2 × 2 confusion matrix where each entry contains the number of correct/incorrect classifications [63]. Thus, the true-positive (TP) and true-negative (TN) values represent the number of instances from each class that were correctly classified, whereas the false-positive (FP) and false-negative (FN) values indicate the number of negative and positive instances misclassified, respectively. The assessment of classification performance is often carried out using more powerful measures that can be derived from straightforward indices such as the true-positive rate (TPR) and the true-negative rate (TNR). The most popular measure is the overall accuracy, but it is not appropriate when the prior class probabilities are very different because it does not consider misclassification costs and is strongly biased towards the majority class [64]. Thus, the performance of classifiers in the context of class imbalance has commonly been evaluated using other metrics, such as the geometric mean, the F-measure, and the area under the ROC curve. For the present experimental study, the geometric mean was chosen since it represents a good trade-off between the accuracy rates measured on each class:

G-mean = √(TPR × TNR), where TPR = TP/(TP + FN) and TNR = TN/(TN + FP).
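The geometric mean follows directly from the confusion-matrix entries in Table 3; a minimal helper:

```python
import math

def geometric_mean(tp, fn, tn, fp):
    """G-mean = sqrt(TPR * TNR): the geometric mean of the
    per-class accuracies, which is 0 whenever one class is
    entirely misclassified."""
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return math.sqrt(tpr * tnr)
```

Note that a classifier predicting only the majority class gets TPR = 0 and hence G-mean = 0, which is exactly the behavior discussed for SVM in Section 5.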

Results
In this section, we report the experimental results obtained by DBMIST-US and several state-of-the-art under-sampling algorithms, aiming to answer the three research questions stated in Section 4.

Classification Performance Comparison with State-of-the-Art Methods
The aim of the first block of this experimental study was to compare the performance of DBMIST-US with that of the reference methods to assess its competitiveness (Q1), and also to analyze its robustness across classification models (Q2). Tables A1–A3 in Appendix A report the geometric mean (averaged across the 10 runs) of each under-sampling algorithm using the three classifiers on all the data sets. The classification results on the original (non-preprocessed) data sets were also included as a baseline for performance assessment and usefulness testing. Figure 3 displays the average Friedman rank of the under-sampling algorithms for each classifier. From these graphs, one can observe that DBMIST-US achieved the lowest average ranks regardless of the classifier used, which means that this method performed the best on both the real-life and the synthetic data sets. In addition, as expected, under-sampling the data leads to an improvement in the classification performance, except when using the CNN algorithm; in general, CNN showed the highest average Friedman ranks (i.e., the worst performance as measured by the geometric mean), especially with the 1NN and J48 classifiers.
It is also interesting to draw attention to the behavior of CNN, NCL, TL, ENN, OSS, RUSBOOST, and SBC with the SVM classification model, since the geometric mean obtained by these algorithms was 0 for many of the experimental data sets, as can be seen in Table A3. Taking into account that the geometric mean on the respective non-preprocessed data sets was also 0, this behavior indicates that these under-sampling algorithms could not correctly handle the class overlap, the class imbalance, or both. To gain some insight into this situation, Figure 4 shows the TPR (red triangles) and TNR (black squares) obtained by using SVM on each under-sampled synthetic data set.
From these spider plots, one can observe that the poor performance of most of those methods comes from a true-positive rate equal to 0, which means that the SVM model misclassified all instances in the minority class.
To shed some light onto why the true-positive rate given by those under-sampling algorithms was 0, Figure 5 plots the original paw-50 data set and the under-sampled sets when using either a neighborhood-based method (TL) or a clustering-based method (SBC). One can observe that these algorithms could not solve the class imbalance problem and, consequently, classification by SVM remained biased towards the majority class. On the other hand, the plots show that there still exists some overlapping between classes, indicating that not all noisy negative instances could be removed from the data sets.
A particularly interesting problem emerges with the ClusterOSS method, where the performance on the minority class improved significantly, but at the cost of a severe degradation of the true-negative rate. In Figure 4, the spider plot for the paw-50 data set brings up an especially dramatic case since the TNR was equal to 0. For a deeper understanding of why such a loss of performance in the true-negative rate arose, the scatter plots in Figure 6 display the under-sampled sets when using ClusterOSS and DBMIST-US. As can be seen, both methods eliminated the problem of class overlap by cleaning the decision boundary completely. However, the removal of many negative instances by the ClusterOSS algorithm interchanged the role of the classes (i.e., the majority class became the minority one) and placed the negative instances further from each other than from the positive instances, whereas DBMIST-US behaved appropriately by producing a perfectly balanced data set and keeping a sufficient number of useful negative instances in well-defined clusters. For further confirmation of this behavior of DBMIST-US, the scatter plots in Figure 7 depict the three synthetic databases with 70% of noise in their original class distribution and also after being under-sampled by the DBMIST-US algorithm. Although all of these correspond to problems with high overlapping between classes, the under-sampling method was able to properly eliminate the noisy and borderline instances in the majority class and clean the decision boundary. In addition, DBMIST-US considerably reduced the disproportion between positive and negative instances.
Figure 7. Scatter plots of the synthetic data sets (70% of noise) for the non-preprocessed sets and the sets yielded by DBMIST-US (points in gray correspond to the instances that were removed).

Statistical Significance Analysis
The Wilcoxon's paired signed-rank test was applied to find out statistically significant differences between each pair of under-sampling algorithms for a significance level of α = 0.05. In Figure 8, we present the results of a wins-ties-losses analysis, showing the number of data sets on which DBMIST-US obtained statistically significantly better, equal, or worse performance than the reference algorithms on a pairwise basis, for the three classifiers considered in this experimental study.
As can be seen, DBMIST-US significantly outperformed the state-of-the-art under-sampling methods on a majority of databases, independently of the classifier used. This result suggests that the DBMIST-US algorithm is robust across classification models of a different nature. Figure 9 summarizes Wilcoxon's test to show whether there exists any significant difference between DBMIST-US and the reference methods for the three classifiers. In this case, the bars represent how many times an algorithm was significantly better than, equal to, or worse than the remaining techniques. The most important finding is that DBMIST-US was significantly the best under-sampling algorithm for all classifiers, which highlights the robustness of this method once again.
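The pairwise comparison described above can be sketched with SciPy's implementation of Wilcoxon's signed-rank test. The scores below are illustrative placeholders, not the paper's results; in the actual study, each pair of values would be the per-data-set performance of two under-sampling methods under the same classifier.

```python
# Illustrative sketch (hypothetical scores, not the paper's data): pair the
# per-data-set scores of two under-sampling methods and apply Wilcoxon's
# paired signed-rank test at a significance level of 0.05.
from scipy.stats import wilcoxon

scores_a = [0.91, 0.88, 0.93, 0.85, 0.90, 0.87, 0.92, 0.89]  # method A
scores_b = [0.86, 0.84, 0.90, 0.83, 0.85, 0.86, 0.88, 0.84]  # method B

stat, p_value = wilcoxon(scores_a, scores_b)
if p_value < 0.05:
    print("significant difference between the paired methods")
```

Counting, for each method, on how many such pairwise tests it wins, ties, or loses yields exactly the wins-ties-losses bars of Figures 8 and 9.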

Evaluation of the Impact on the Imbalance Ratio
This block of the experimental study aimed to investigate the impact of each under-sampling algorithm on the imbalance ratio (Q3), that is, to check the ability of each method to provide well-balanced data sets. Table 4 reports the imbalance ratio of each under-sampled data set. Initially, the imbalance ratio of the 20 real-life data sets ranged between 9.35 and 82, whereas the imbalance ratio of the synthetic databases was 7. After under-sampling the original sets, RUS, EUS, EE, and BC gave perfectly balanced data sets with an imbalance ratio equal to 1; conversely, most data sets resampled by NCL, TL, ENN, OSS, and SBC remained heavily class-imbalanced, with an imbalance ratio very close to the original one. Regarding the DBMIST-US algorithm, imbalance ratios equal or close to 1 indicate that the resulting data sets were well balanced, demonstrating the effectiveness of the method proposed here. Finally, the ClusterOSS method produced many data sets with an imbalance ratio less than 1, which means that the huge amount of negative instances removed made the majority class smaller than the minority class, as already pointed out in Section 5.1.

Another way of evaluating the impact of each under-sampling algorithm on the imbalance ratio is to look at the percentage of average reduction of the imbalance ratio given in the last row of Table 4. The results reveal that the top-4 methods were RUS, EUS, EE, and BC with an average reduction of 91.55%, which correspond to the algorithms that produced perfectly balanced data sets (imbalance ratio equal to 1). However, DBMIST-US, with an average reduction of 91.29%, was very close to the top-4 methods. On the other hand, the average reductions achieved with NCL, TL, ENN, OSS, and SBC were extremely low, which suggests that these algorithms are not the most appropriate for under-sampling.
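The two quantities compared in Table 4 are straightforward to compute. The sketch below (hypothetical instance counts, not taken from the paper's data sets) shows how the imbalance ratio and its percentage reduction are obtained for a single data set before and after under-sampling.

```python
def imbalance_ratio(n_negative, n_positive):
    """Imbalance ratio: majority-class size over minority-class size (1 = balanced)."""
    return n_negative / n_positive

def reduction_pct(ir_before, ir_after):
    """Percentage reduction of the imbalance ratio achieved by under-sampling."""
    return 100.0 * (ir_before - ir_after) / ir_before

# Hypothetical data set: 820 negative vs. 10 positive instances (IR = 82),
# the upper end of the range reported for the real-life data sets. A method
# that balances it perfectly (IR = 1) achieves a 98.78% reduction.
before = imbalance_ratio(820, 10)  # 82.0
after = imbalance_ratio(10, 10)    # 1.0
print(round(reduction_pct(before, after), 2))  # 98.78
```

Averaging this percentage over all data sets gives the last row of Table 4; an imbalance ratio below 1, as produced by ClusterOSS, signals that the majority class has become smaller than the minority class.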

Conclusions
In this work, we have proposed a new under-sampling method called DBMIST-US to face class overlap and class imbalance through the combination of the DBSCAN clustering algorithm with the generation of an MST. The goal of the technique proposed here is to handle both data complexities simultaneously using a two-stage algorithm. In the first stage, the decision boundary is cleaned by removing negative instances identified as noise, whereas the second stage is in charge of balancing the classes by selecting representative instances from the majority class according to a maximum imbalance ratio predefined by the user.
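The two-stage idea summarized above can be sketched as follows. This is a minimal illustration of the DBSCAN-plus-MST scheme, not the authors' exact DBMIST-US implementation: the parameter names (eps, min_samples, max_ir) and the rule for selecting instances from the shortest MST edges are assumptions made for the sake of the example.

```python
# Sketch of the two-stage scheme: stage 1 discards majority instances that
# DBSCAN flags as noise (label -1); stage 2 builds a minimum spanning tree
# over the surviving majority instances and keeps those touched by the
# shortest edges (dense, representative regions) until a target size given
# by a maximum imbalance ratio is reached.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import DBSCAN

def two_stage_undersample(X_maj, n_min, eps=0.5, min_samples=5, max_ir=1.0):
    # Stage 1: remove majority instances labeled as noise by DBSCAN.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X_maj)
    X_clean = X_maj[labels != -1]
    # Stage 2: rank the remaining instances by the MST edges connecting them.
    target = int(round(max_ir * n_min))
    if len(X_clean) <= target:
        return X_clean
    mst = minimum_spanning_tree(squareform(pdist(X_clean))).tocoo()
    order = np.argsort(mst.data)  # edges from shortest to longest
    keep = []
    for k in order:
        for idx in (mst.row[k], mst.col[k]):
            if idx not in keep:
                keep.append(idx)
        if len(keep) >= target:
            break
    return X_clean[np.array(keep[:target])]
```

Under these assumptions, calling the function with the majority-class matrix and the minority-class size yields an under-sampled majority class of at most `max_ir * n_min` instances drawn from dense regions, with DBSCAN-detected noise already removed.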
As already introduced in Section 3.3, several differences between DBMIST-US and some recent related under-sampling algorithms can be highlighted: (i) DBMIST-US faces class overlap and class imbalance simultaneously, whereas other methods are designed to tackle these data complexities separately or by applying a series of techniques in a sequential manner, (ii) the baseline clustering-based under-sampling approaches are based on the classical K-means algorithm, whereas our proposal combines DBSCAN with MST, and (iii) the algorithms that rest upon the use of DBSCAN are primarily developed for over-sampling the minority class instead of under-sampling the majority class.
We carried out a set of experiments to validate the performance of the DBMIST-US method against 12 state-of-the-art under-sampling algorithms using three supervised classifiers (1NN, J48, and SVM), tested on both real-life and synthetic data sets with an imbalance ratio ranging from 7 to 82. The results have shown that the method proposed here performs well on noisy and borderline problems and better than the remaining algorithms when applied to heavily imbalanced data sets, especially with the SVM classification model. Another research question properly answered in the experimental study refers to robustness, showing that DBMIST-US is highly robust to the use of classifiers of a different nature. It is also worth noting the percentage of average reduction of the imbalance ratio achieved by DBMIST-US, which was comparable to the reduction produced by the best reference algorithms.
The promising results of the DBMIST-US algorithm encourage us to examine some directions for future research. The most immediate issue relates to the generalization of DBMIST-US for handling the multi-class imbalance problem, which emerges as a more challenging situation in many practical applications. A second subject that deserves further research is the multi-label classification of imbalanced data, where the classes are not mutually exclusive, which represents a much more complex classification task because an instance may belong to more than one class. Finally, another avenue for investigation is to analyze how the DBMIST-US algorithm can be applied to large-scale data and data streams.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

Appendix A. Classification Results
This appendix summarizes the classification performance of each under-sampling algorithm using 1NN, J48, and SVM. Values in boldface highlight the best result for each database.

Table A3. Geometric mean results obtained with SVM.