Evolutionary Mahalanobis Distance-Based Oversampling for Multi-Class Imbalanced Data Classification

Sensing data are often imbalanced across classes, and oversampling the minority class is an effective remedy. In this paper, an oversampling method called evolutionary Mahalanobis distance oversampling (EMDO) is proposed for multi-class imbalanced data classification. EMDO utilizes a set of ellipsoids to approximate the decision regions of the minority class. Multi-objective particle swarm optimization (MOPSO) is integrated with the Gustafson–Kessel algorithm in EMDO to learn the size, center, and orientation of every ellipsoid. Synthetic minority samples are generated based on the Mahalanobis distance within every ellipsoid, and the number of synthetic samples generated in each ellipsoid is determined by the density of minority samples it contains. The results of the computer simulations conducted herein indicate that EMDO outperforms most widely used oversampling schemes.


Introduction
With advancements in sensor technology and the Internet of Things (IoT), vast quantities of sensing data have been collected and analyzed for different applications. Cost-effective sensors are widely used in everyday life to collect various types of data for further online or offline analyses and applications. The classification of real-world sensing data is a highly important research topic in the fields of data mining and machine learning. However, the data sets collected using sensors or other sensing techniques usually have a skewed class distribution because the number of data points varies greatly between classes. Such data are called imbalanced data. The data utilized in applications such as anomaly detection in high-speed trains [1][2][3], fault diagnosis of motors [4][5][6], fault detection and diagnosis in manufacturing processes [7][8][9], and medical diagnosis [10][11][12] are usually imbalanced. In imbalanced data sets, at least one class has significantly more data points than the other classes. Learning on imbalanced data results in poor performance, and this problem has thus attracted considerable research attention in recent years, mainly because the performance of many conventional learning algorithms is degraded by the skewed class distribution of imbalanced data sets [13].
Balanced class distributions [14] or equal weighting of classification errors across classes [13] is generally assumed in most conventional machine learning algorithms. However, an imbalanced data set may comprise, for instance, 95% majority class samples and 5% minority class samples. With equal weighting of the classification errors, traditional classification approaches tend to overlook several or most of the minority class samples in the attempt to minimize the overall classification error. Consequently, although the overall classification error rate is low, the classification error rate for the minority class is high. Minority class samples are important in certain applications, such as medical diagnosis, anomaly detection, and fault detection and diagnosis: majority class samples usually represent normal conditions, and minority class samples represent abnormal conditions, which can be key in such applications. Learning approaches for imbalanced data are designed to increase learning accuracy on minority classes without trading off learning accuracy on majority classes.
The learning approaches for imbalanced data can generally be categorized into three types: cost-sensitive learning, data-level learning, and ensemble learning, which are comprehensively reviewed in [15,16]. Cost-sensitive learning assigns higher misclassification costs to minority class samples than to majority class samples. Studies have proposed various learning approaches that adjust the misclassification cost using kernel functions; these approaches involve the radial basis function [17], matrix-based kernel regression [18], support vector machine [19,20], and deep learning [21,22].
Data-level learning approaches essentially rebalance the skewed data distribution of different classes by removing several majority class samples or by adding new minority class samples, and they can generally be divided into undersampling and oversampling learning approaches. The main advantage of data-level learning approaches is that they are independent of classifiers. They can be considered a type of data preprocessing approach. Therefore, data-level learning approaches can be easily integrated with other imbalanced learning approaches. Undersampling involves removing the majority class samples to ensure that the learning results are not overly biased toward the majority class [23][24][25][26]. Undersampling reduces both the number of samples in and the computational cost of machine learning. However, it tends to reduce the model's capability to recognize majority classes. By contrast, oversampling involves increasing the number of minority class samples by resampling them or by generating synthetic samples. However, resampling by simply replicating the minority class samples does not improve learning of the decision region of minority classes. The synthetic minority oversampling technique (SMOTE) proposed in [27] selects several samples from a minority class. Searching in the vicinity of the selected samples, it identifies other samples of the minority class to generate new synthetic samples linearly between the two points. SMOTE is the most widely used oversampling technique because of its computational inexpensiveness. However, SMOTE is prone to overgeneralization because it synthesizes new samples through the random selection of minority class samples. Various adaptive sampling methods based on SMOTE have been proposed to overcome its limitation. The adaptive synthetic sampling approach for imbalanced learning algorithm (ADASYN) [28], SMOTEBoost [29], and Borderline SMOTE [30] are effective modified versions of SMOTE. 
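As a concrete illustration of the interpolation step described above, the following is a minimal sketch of SMOTE-style sample generation (the function name and nearest-neighbor bookkeeping are ours, not from the original SMOTE paper):

```python
import numpy as np

def smote_sample(X_min, k=5, n_new=100, rng=None):
    """Minimal SMOTE sketch: interpolate between a minority sample
    and one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]      # k nearest minority neighbors
    out = []
    for _ in range(n_new):
        i = rng.integers(n)                  # random minority sample
        j = nbrs[i, rng.integers(k)]         # random neighbor of it
        gap = rng.random()                   # linear interpolation factor
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because every synthetic point is a convex combination of two minority samples, the generated points never leave the convex hull of the minority class, which is exactly the overgeneralization risk the text mentions.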
In contrast to SMOTE, other algorithms, such as those in [31][32][33], have been proposed; these algorithms generate synthetic samples by learning the structure underlying the minority samples.
Ensemble learning for imbalanced data integrates traditional machine learning approaches, such as boosting [34], bagging [35], and stacking [36], with other cost-sensitive or data-resampling imbalanced learning approaches. In [37], SMOTE was integrated with AdaBoost [38] to increase the number of minority samples and to assign higher weights to misclassified minority samples. A similar integration of AdaBoost with a novel synthetic sampling method was proposed in [39]. The performance of boosting, bagging, and other hybrid techniques applied to imbalanced data has been compared in [40] and [41].
In the methods proposed in [31][32][33], synthetic samples are generated based not on individual minority samples, as in SMOTE [27], but on the underlying structure of the minority samples. Recently, a similar oversampling approach called Mahalanobis distance-based oversampling (MDO) was proposed in [42]. MDO generates synthetic samples based on the structure of the principal component space of the minority samples. Each synthetic sample generated by MDO has the same Mahalanobis distance from the class mean as the minority sample it is derived from. Because the class mean of the synthetic samples generated by MDO is the same as that of the minority class samples, the covariance structure of the minority class samples is preserved. In [43], a scheme called adaptive Mahalanobis distance oversampling (AMDO) was proposed. AMDO integrates generalized singular value decomposition with MDO to solve the oversampling problem encountered in mixed-type imbalanced data sets. Either MDO or AMDO can be utilized as a direct learning approach for solving multi-class imbalanced problems.
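To make the Mahalanobis-distance idea concrete, here is a hedged sketch (our own simplification, not the exact MDO algorithm) of generating a synthetic point at the same Mahalanobis distance from the class mean as a chosen minority sample, which leaves the covariance structure of the class intact:

```python
import numpy as np

def mdo_like_sample(X_min, idx=None, rng=None):
    """Generate one synthetic point at the same Mahalanobis distance
    from the minority-class mean as the chosen minority sample.
    `idx` selects the reference sample (random if None)."""
    rng = np.random.default_rng(rng)
    mu = X_min.mean(axis=0)
    cov = np.cov(X_min, rowvar=False)
    L = np.linalg.cholesky(cov)                  # cov = L @ L.T
    idx = rng.integers(len(X_min)) if idx is None else idx
    w = np.linalg.solve(L, X_min[idx] - mu)      # whitened coordinates
    r = np.linalg.norm(w)                        # Mahalanobis radius
    u = rng.normal(size=len(mu))
    u *= r / np.linalg.norm(u)                   # random direction, same radius
    return mu + L @ u                            # map back to feature space
```

Any point produced this way lies on the same Mahalanobis shell (the same ellipsoidal surface around the class mean) as the reference sample.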
The oversampling results obtained from MDO or AMDO are equivalent to those obtained by placing the minority class samples and generated synthetic samples into the principal component space. The minority class samples and the synthetic samples can be considered to be included in an ellipsoid centered at the class mean, whose orientation depends on the covariance structure of the minority class samples. The synthetic samples do not change the covariance structure of the minority class because all of them are generated within the ellipsoid. However, both MDO and AMDO use only one ellipsoid to include the minority class samples and synthetic samples. If the decision regions of the minority class are separated, the decision region approximated using only one ellipsoid may overlap with the decision regions of other classes. This is especially true for imbalanced multi-class data, where samples from different classes may fall inside the single ellipsoid fitted to the target minority class samples. The synthetic samples generated by MDO or AMDO are randomly placed in the single ellipsoid subject only to having the same Mahalanobis distance as the associated minority sample. Consequently, the generated synthetic samples tend to be placed within clusters of samples belonging to other classes, which reduces the effectiveness of oversampling. Moreover, certain decision regions (e.g., those that are ring- or belt-shaped) are difficult to approximate with only one ellipsoid.
A novel approach called evolutionary Mahalanobis distance oversampling (EMDO) is proposed in this paper to overcome the limitations of MDO and AMDO. EMDO utilizes multiple ellipsoids to learn the distribution and orientation of minority class samples in parallel. Gustafson and Kessel proposed a clustering algorithm called the Gustafson-Kessel algorithm (GKA) [44], which is similar to the widely used fuzzy c-means clustering approach [45] but with Mahalanobis norms. The advantage of the GKA over fuzzy c-means is that it utilizes the Mahalanobis norm instead of the Euclidean norm to learn the underlying sample distribution. However, the GKA assumes a fixed volume before learning the center and orientation of every ellipsoid. The GKA is thus an effective clustering approach for learning the centers and orientations of data clusters, but it is unsuitable for learning the decision regions of data because of its assumption of a fixed ellipsoid size. The GKA was modified in [46,47] to adaptively learn ellipsoid sizes for pattern recognition problems by using the genetic algorithm with a single objective function. In the proposed EMDO, the GKA is integrated with multi-objective particle swarm optimization (MOPSO) [48,49] so that the centers, orientations, and sizes of multiple ellipsoids, along with the overall misclassification error, are learned in parallel. The misclassification error is defined as the total number of misclassified samples with respect to a union of multiple ellipsoids. Therefore, EMDO can learn a set of ellipsoids that approximates connected or disconnected complex decision regions with reasonable accuracy. Because multiple ellipsoids are learned in parallel in EMDO, an effective approach is designed to adaptively determine the number of synthetic samples to be generated in every ellipsoid. Similar ideas of designing suitable algorithms to search for model parameters in specific applications appear in [50][51][52].
The technical novelty and main contribution of this paper are summarized as follows.

1) An effective novel oversampling approach called EMDO is proposed for multi-class imbalanced data problems. Unlike the MDO and AMDO approaches, which use only one ellipsoid, EMDO learns multiple ellipsoids in parallel to approximate the decision region of the target minority class samples.
2) MOPSO is utilized along with the GKA in EMDO to optimize the parameters, including the centers, orientations, and sizes, of the multiple ellipsoids approximating the target class decision regions with reasonable accuracy.
3) Synthetic minority samples are generated based on the Mahalanobis distance within every ellipsoid learned by EMDO. A novel adaptive approach is proposed to determine the number of synthetic minority samples to be generated based on the density of minority samples in every ellipsoid.
4) EMDO was evaluated and found to perform better than other widely used oversampling schemes.
The remainder of this paper is organized as follows. Section 2 presents the problem formulation of oversampling for imbalanced data and introduces the GKA, showing that it is suitable for solving the formulated problem. Section 3 introduces the multi-objective optimization scheme designed in EMDO, which uses MOPSO. Section 4 details the method for determining the number of ellipsoids required to approximate the decision regions of every class. Section 5 describes the generation of synthetic samples within the learned ellipsoids. Section 6 presents the performance evaluation of EMDO against other widely used oversampling schemes. Finally, Section 7 concludes the study.

Problem Statement and GKA
Given a data set S = {(x_i, y_i) | x_i ∈ R^d, y_i ∈ {1, ..., p}, i = 1, ..., N}, every ith sample x_i ∈ S belongs to some class y_i among p classes. Let S_j ⊂ S be the set containing the samples belonging to class j, j = 1, ..., p. Denote N_j ≡ |S_j| as the number of samples in S_j, and let N_min = min_j N_j and N_max = max_j N_j. The data set S is considered imbalanced if the ratio N_min/N_max is less than the reciprocal of a preset imbalance ratio, IR. The value of IR is determined based on the size of S and on the characteristics of the classification problem. Typically, IR ≥ 1.5. S_j is called a minority set if (|S_j|/N_max) < 1/IR, j = 1, ..., p. An oversampling technique is applied in this study to overcome the skewed distribution of samples in S. If S_j is a minority set, synthetic samples belonging to the same jth class are generated in S_j to form an enlarged set S'_j such that |S'_j| = N_max/IR. Denote

N'_j = N_max/IR − N_j   (1)

as the total number of extra synthetic samples generated to balance the minority set S_j. Note that there can be more than one minority set in a multi-class problem. An oversampling technique is proposed herein to improve classification accuracy on an imbalanced data set. To generate an adequate number of synthetic samples and place them in the minority sets, the decision regions of every minority set in the d-dimensional feature space must be located. Multiple ellipsoids are utilized in this study to approximate the decision regions of minority sets. EMDO is proposed to learn these ellipsoids and generate synthetic samples in them for oversampling.
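The balancing rule above can be sketched in a few lines; this is our illustrative reading of the rule (the identifier names are ours):

```python
import numpy as np
from collections import Counter

def oversampling_budget(y, IR=1.5):
    """Sketch of the paper's balancing rule: class j is treated as a
    minority set when N_j / N_max < 1/IR, and is enlarged to
    N_max / IR, so the synthetic-sample budget is N'_j = N_max/IR - N_j."""
    counts = Counter(y)
    n_max = max(counts.values())
    target = int(np.ceil(n_max / IR))        # enlarged minority-set size
    return {j: target - n_j for j, n_j in counts.items() if n_j < target}
```

For a 90/10 two-class split with IR = 1.5, the target minority size is ceil(90/1.5) = 60, so 50 synthetic samples are requested for the minority class.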
Assume that α_j ellipsoids approximate the decision region of the jth class samples. Denote the center of every nth ellipsoid as v^j_n ∈ R^d. The distance between every kth sample x_k and the ellipsoid center v^j_n is defined in the Mahalanobis form as follows:

λ^j_nk = (x_k − v^j_n)^T M^j_n (x_k − v^j_n),   (2)

where M^j_n ∈ R^(d×d) is a norm-inducing matrix. The ellipsoid Φ^j_n is defined as follows by using the Mahalanobis distance defined in (2):

Φ^j_n(x_k) = λ^j_nk.   (3)

The sample x_k is inside or on the ellipsoid if Φ^j_n(x_k) ≤ 1, but it is outside the ellipsoid if Φ^j_n(x_k) > 1. Let the decision region of the jth class samples in the feature space be denoted as Ω_j; Ω_j is approximated by the union of α_j ellipsoids, that is,

Ω_j ≈ ∪_{n=1...α_j} Φ^j_n.   (4)

The GKA is used for learning the α_j ellipsoids in parallel, given that the size of each ellipsoid is assigned. Denote the size of the ellipsoid Φ^j_n in (3) as ξ^j_n, n = 1, ..., α_j. The determinant of the norm-inducing matrix M^j_n is inversely proportional to ξ^j_n. Therefore,

det(M^j_n) ∝ 1/ξ^j_n.   (5)

The GKA learns the norm-inducing matrices M^j_n and the ellipsoid centers v^j_n by iteratively calculating an auxiliary fuzzy partition matrix U^j ∈ R^(α_j×N_j) using all N_j samples belonging to the jth class. The element μ^j_nk ∈ U^j represents the membership value of the kth sample x_k in the nth ellipsoid Φ^j_n. The membership values sum to 1 for every x_k, that is,

∑_{n=1...α_j} μ^j_nk = 1, k = 1, ..., N_j.   (6)

The GKA is a fast iterative learning algorithm that efficiently updates the membership values in the fuzzy partition matrix U^j while learning both the norm-inducing matrix M^j_n and the center v^j_n of every nth ellipsoid. Note that the GKA learns all α_j ellipsoids in parallel. Denote the matrices containing the ellipsoid centers and the norm-inducing matrices as V^j = [v^j_1, ..., v^j_{α_j}] and M^j = {M^j_1, ..., M^j_{α_j}}, respectively. All elements in the triple (U^j, V^j, M^j) are learned by iteratively minimizing the distance in (2) weighted with the membership values in U^j, subject to the constraints in (5) and (6). Let ω^j_n, n = 1, ..., α_j, and γ^j_k, k = 1, ..., N_j, be the Lagrange multipliers of the constraints in (5) and (6), respectively. The triple (U^j, V^j, M^j) is iteratively learned by minimizing the weighted objective

J(U^j, V^j, M^j) = ∑_{n=1...α_j} ∑_{k=1...N_j} (μ^j_nk)^b λ^j_nk,   (7)

where b > 1 is an adjustable weighting index. The optimization described in (7) is realized by differentiating the Lagrangian of (7) with respect to μ^j_nk, v^j_n, ω^j_n, and γ^j_k and by equating the results to 0. In the standard GKA form, the parameters are obtained as follows:

v^j_n = ∑_k (μ^j_nk)^b x_k / ∑_k (μ^j_nk)^b,   (8)

F^j_n = ∑_k (μ^j_nk)^b (x_k − v^j_n)(x_k − v^j_n)^T / ∑_k (μ^j_nk)^b,   (9)

M^j_n = [det(F^j_n)/ξ^j_n]^(1/d) (F^j_n)^(−1),   (10)

μ^j_nk = 1 / ∑_{m=1...α_j} (λ^j_nk/λ^j_mk)^(1/(b−1)).   (11)

The iteration in the GKA is stopped when no significant improvement is made in the fuzzy partition matrix U^j. Let (U^j)^(m) be the fuzzy partition matrix learned in the mth iteration; the norm of the difference between (U^j)^(m) and (U^j)^(m+1) can be defined as

δ_j = ‖(U^j)^(m+1) − (U^j)^(m)‖.   (12)

The GKA iteratively learns U^j, V^j, and M^j until δ_j < ε_j, where ε_j is a small constant. The flowchart of the GKA is illustrated in Figure 1.
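A compact sketch of the GKA iteration described above, assuming the standard Gustafson-Kessel update equations (variable names are ours; `sizes` plays the role of the fixed determinant constraint, i.e., the preset ellipsoid volume):

```python
import numpy as np

def gka(X, c, sizes=None, b=2.0, eps=1e-4, max_iter=100, rng=0):
    """Gustafson-Kessel clustering with preset ellipsoid volumes.
    Returns the fuzzy partition matrix U (c x N) and centers V (c x d)."""
    rng = np.random.default_rng(rng)
    N, d = X.shape
    sizes = np.ones(c) if sizes is None else np.asarray(sizes, float)
    U = rng.random((c, N))
    U /= U.sum(axis=0)                     # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** b
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)        # centers
        D = np.empty((c, N))
        for n in range(c):
            diff = X - V[n]
            F = (Um[n] * diff.T) @ diff / Um[n].sum()       # fuzzy covariance
            # norm-inducing matrix with det(M) fixed by the preset volume
            M = (sizes[n] * np.linalg.det(F)) ** (1.0 / d) * np.linalg.inv(F)
            D[n] = np.einsum('ij,jk,ik->i', diff, M, diff)  # Mahalanobis dist.
        D = np.maximum(D, 1e-12)
        U_new = D ** (-1.0 / (b - 1))
        U_new /= U_new.sum(axis=0)         # standard fuzzy membership update
        if np.abs(U_new - U).max() < eps:  # stop when U barely changes
            U = U_new
            break
        U = U_new
    return U, V
```

On well-separated clusters the recovered centers converge to the cluster means, which is the behavior Figure 1 summarizes.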

Multi-Objective Optimization in EMDO
As depicted in Figure 1 and described in Section 2, the GKA optimizes the centers and norm-inducing matrices of multiple ellipsoids in parallel with a preset size for every ellipsoid. If the ellipsoid size is set inappropriately, the ellipsoids learned by the GKA cannot accurately include all minority class samples. According to (4), each jth-class decision region Ω_j is approximated by the union of α_j ellipsoids Φ^j_n of size ξ^j_n. The distance between the kth sample x_k and Ω_j, denoted L(x_k, Ω_j), can be defined as the minimum distance between x_k and the center of each ellipsoid, as follows:

L(x_k, Ω_j) = min_{n=1...α_j} λ^j_nk,   (13)

where λ^j_nk is defined in (2). The sample x_k is included in Ω_j if L(x_k, Ω_j) ≤ 1 and the corresponding class label is y_k = j. Denote a binary function H(·) as follows:

H(z) = 1 if z ≤ 1, and H(z) = 0 otherwise.   (14)

The total number of jth-class samples included in the set Φ^j can be calculated as

N^j_included(Φ^j) = ∑_{k: y_k=j} H(L(x_k, Ω_j)).   (15)

It is possible that several samples that do not belong to the jth class are included in the set of ellipsoids Φ^j. The total number of samples not belonging to the jth class but included in Φ^j can be calculated as

N^{\j}_included(Φ^j) = ∑_{k: y_k≠j} H(L(x_k, Ω_j)),   (16)

where N^{\j}_included(·) ≤ (N − N_j).
Referring to (5), the total size of the ellipsoids contained in Φ^j can be calculated as

F_1(Ξ^j) = ∑_{n=1...α_j} ξ^j_n.   (17)

The proposed EMDO not only minimizes the total ellipsoid size but also simultaneously maximizes the number of jth-class samples included in the set of ellipsoids Φ^j and minimizes the number of samples included in Φ^j but not belonging to the jth class. The misclassification error can be defined as the sum of the number of jth-class samples excluded from Φ^j, calculated as N_j − N^j_included(Φ^j | Ξ^j), and the number of samples that are included in Φ^j but do not belong to the jth class, calculated as N^{\j}_included(Φ^j | Ξ^j). Therefore, the misclassification error can be defined as

F_2(Φ^j | Ξ^j) = [N_j − N^j_included(Φ^j | Ξ^j)] + N^{\j}_included(Φ^j | Ξ^j).   (18)

The misclassification error can be utilized as an objective function to optimize the ellipsoid sizes. The set of ellipsoid sizes Ξ^j = [ξ^j_1, ξ^j_2, ..., ξ^j_{α_j}] can be optimized using a multi-objective optimization scheme that minimizes both (17) and (18).
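Under this formulation, the two objectives can be evaluated directly; the following is a sketch with our own function and variable names:

```python
import numpy as np

def objectives(X, y, j, centers, norms, sizes):
    """Evaluate F1 (total ellipsoid size) and F2 (misclassification
    error) for class j, given ellipsoid centers, norm-inducing
    matrices, and sizes."""
    # minimum Mahalanobis distance of every sample over all ellipsoids
    L = np.array([np.einsum('ij,jk,ik->i', X - v, M, X - v)
                  for v, M in zip(centers, norms)]).min(axis=0)
    inside = L <= 1.0                              # included in the union
    n_j_missed = int(np.sum((y == j) & ~inside))   # class-j samples excluded
    n_other_in = int(np.sum((y != j) & inside))    # other-class samples caught
    return float(np.sum(sizes)), n_j_missed + n_other_in
```

With a single unit circle as the only ellipsoid, a class-j sample far outside and a non-class-j sample inside each contribute one unit to F2.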
MOPSO is utilized to perform this multi-objective optimization by searching for the best set of ellipsoid sizes Ξ j by minimizing the objective functions F 1 (·) and F 2 (·). Assume that G particles are utilized in the MOPSO. Denote Ξ j (k, g) as the gth particle in the kth iteration and Ξ j (g) as the non-dominated solution subject to the following multi-objective optimization. Note that the multi-objective optimization searches for the non-dominated solution of every particle.
Ξ^j(k, g) is a non-dominated solution of (19) if it is not dominated by any other particle, that is, if no particle i exists such that both F_1(Ξ^j(k, i)) ≤ F_1(Ξ^j(k, g)) and F_2(Φ^j | Ξ^j(k, i)) ≤ F_2(Φ^j | Ξ^j(k, g)) hold, with at least one of the inequalities strict. Only if Ξ^j(k, g) is a non-dominated solution of (19) is it included in the repository S^j_rep, which is the set of all non-dominated solutions of (19). The best solution achieved by the gth particle, denoted Ξ^j_p_best(g), is updated using the non-dominated solution generated by the gth particle. Whenever a new solution enters S^j_rep, a filtering process is applied: a member Ξ^j_m is no longer a non-dominated solution and is excluded from S^j_rep if A(Ξ^j_m) > 0. An adaptive grid algorithm [53] is applied to S^j_rep after the filtering process is completed, as given in (23) and (24), to place all the non-dominated solutions into several grids. The global best particle Ξ^j_g_best is randomly selected from among the grids in S^j_rep by using the roulette wheel selection scheme. After both Ξ^j_p_best(g) and Ξ^j_g_best are determined, each particle is updated with the standard PSO rules:

W^j(k+1, g) = W^j(k, g) + c_1 γ_1 [Ξ^j_p_best(g) − Ξ^j(k, g)] + c_2 γ_2 [Ξ^j_g_best − Ξ^j(k, g)],   (25)

Ξ^j(k+1, g) = Ξ^j(k, g) + W^j(k+1, g),   (26)

where W^j(k, g) is the velocity of the gth particle, c_1 and c_2 are preset constants, and γ_1, γ_2 ∈ [0, 1] are randomly generated real numbers. If no new non-dominated solution can be included in S^j_rep after the filtering process for a preset number of K_thr iterations, MOPSO is considered saturated, and the iterative learning of particles given by (19)-(26) is terminated. The optimal solution (Ξ^j)* can then be selected based on the average density of the non-dominated solutions in (27) because the ellipsoids associated with the optimal solution tend to have a small size but include a large number of samples. However, the average density in (27) cannot be directly utilized as the optimization index for selecting the optimal solution. It is modified using the ratio of the number of jth-class samples included in the set Φ^j multiplied with the ratio of the number of samples not belonging to the jth class but included in Φ^j.
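The repository maintenance rests on standard Pareto dominance over the pair (F1, F2); a minimal sketch of the non-dominated filter (our own helper, not the paper's exact pseudocode):

```python
def dominates(q, p):
    """q dominates p when q is no worse in both objectives and
    strictly better in at least one (both objectives minimized)."""
    return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])

def pareto_front(points):
    """Keep only the non-dominated (F1, F2) pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Requiring strict improvement in at least one objective keeps duplicated solutions from eliminating each other, which matters when particles converge to the same sizes.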
Let the evaluation index for the mth non-dominated solution Ξ^j_m be σ_j(Ξ^j_m), defined based on (27) as given in (28), where N^{\j}(Φ^j | Ξ^j_m) denotes the number of samples not belonging to the jth class but included in Φ^j. The optimal solution (Ξ^j)* is the set of ellipsoid sizes that maximizes the index σ_j(·), that is,

(Ξ^j)* = argmax_m σ_j(Ξ^j_m).   (29)

After the optimal ellipsoid sizes are determined by MOPSO according to (19) and (29), the GKA is applied with the optimal ellipsoid sizes (Ξ^j)* to calculate the remaining optimal ellipsoid parameters, namely the norm-inducing matrices (M^j)* and the ellipsoid centers (V^j)*. Note that the orientations of all the ellipsoids approximating the jth class decision region are determined by the norm-inducing matrices (M^j)*. The proposed MOPSO integrated with the GKA is illustrated in Figure 2.

Determining Number of Ellipsoids
The ellipsoid parameters, namely the sizes, centers, and orientations, are optimized using MOPSO integrated with the GKA, as described in Sections 2 and 3. These parameters are calculated under the condition that the total number of ellipsoids α_j used to approximate the jth class decision region is assigned in advance, j = 1, ..., p. If α_j is too small, the jth class samples are included in an insufficient number of ellipsoids, resulting in several ellipsoids having large sizes; samples that do not belong to the jth class may then be included in these large ellipsoids. A binary-class data set with 1000 samples in each class is illustrated in Figure 3; the samples in class 1 and class 2 are randomly generated within the range [0, 3] on the X and Y axes, respectively. The problem caused by a small α_j is illustrated in Figure 3a. Conversely, the jth class samples might be spread over too many ellipsoids if α_j is too large. Several ellipsoids then overlap with each other, and the ellipsoids are learned inefficiently, as illustrated in Figure 3b. If a suitable α_j is assigned, as illustrated in Figure 3c, the learning result is a set of ellipsoids of the appropriate size, center, and orientation. To determine the suitable number of ellipsoids, the ellipsoid parameters are optimized using MOPSO integrated with the GKA by setting α_j from 1 to an appropriate number q. Denote (Ξ^j)*_{α_j=i} as the optimal ellipsoid sizes calculated using MOPSO, according to (29), with α_j set to i ellipsoids; the index σ_j(·) in (28) is utilized to evaluate the effectiveness and efficiency of different numbers of ellipsoids α_j.
The suitable number of ellipsoids α_j can be determined as the value corresponding to the corner of the curve of σ_j((Ξ^j)*_{α_j=i}) with respect to α_j. Figure 4 shows a typical such curve; according to this figure, α_j = 6 is a suitable choice because the corner of the curve appears at α_j = 6.
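One simple way to locate the corner of such a curve automatically (a heuristic of ours, not prescribed by the text) is to take the point farthest from the chord joining the curve's endpoints:

```python
import numpy as np

def curve_corner(alphas, scores):
    """Return the alpha at the corner of the score-vs-alpha curve:
    the point with maximum perpendicular distance to the chord
    joining the first and last points (axes normalized to [0, 1])."""
    a = np.asarray(alphas, float)
    s = np.asarray(scores, float)
    a = (a - a.min()) / (a.max() - a.min())     # scale-free comparison
    s = (s - s.min()) / (s.max() - s.min())
    p0 = np.array([a[0], s[0]])
    chord = np.array([a[-1], s[-1]]) - p0
    chord /= np.linalg.norm(chord)
    pts = np.stack([a, s], axis=1) - p0
    # perpendicular distance of each point to the chord
    dist = np.abs(pts[:, 0] * chord[1] - pts[:, 1] * chord[0])
    return alphas[int(np.argmax(dist))]
```

On a curve that rises steeply and then flattens around α_j = 6, as in Figure 4, the farthest-from-chord point coincides with the visual corner.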

Generating Synthetic Samples
Any set of jth class samples with (|S_j|/N_max) < 1/IR, j = 1, ..., p, is considered a minority set. With reference to (1), N'_j synthetic samples are to be generated and added to the minority set. Recall that the minority set of the jth class samples is approximated by α_j ellipsoids. The N'_j synthetic samples must be proportionally distributed among the α_j ellipsoids based on the density of each ellipsoid. The density d(Φ^j_n) of ellipsoid Φ^j_n is defined as the number of jth class samples included in Φ^j_n relative to its size ξ^j_n. The weight β^j_n of the ellipsoid Φ^j_n for sharing the generated synthetic samples is defined as the reciprocal of the density d(Φ^j_n), so that sparser ellipsoids receive more synthetic samples. Denote the number of synthetic samples added to Φ^j_n as N^j_n, which is determined in proportion to the weight β^j_n. The scheme for generating synthetic samples for every ellipsoid Φ^j_n is designed to resolve the oversampling problem in the following two scenarios.
(A) More than 90% of the samples in Φ^j_n belong to the jth class. In this case, the synthetic samples are generated randomly inside Φ^j_n. Note that a sample x_k is considered to be included in the ellipsoid if its Mahalanobis distance to the center satisfies λ^j_nk ≤ 1. Let ψ^j_ni and z^j_ni, i = 1, ..., d, be the eigenvalues and eigenvectors of M^j_n, and let b_ni be the projection of the vector (x_k − v^j_n) onto the eigenvector z^j_ni. To ensure random generation of the synthetic samples inside Φ^j_n, b_ni is set to be a random number within the range [−1/√(ψ^j_ni), 1/√(ψ^j_ni)] because each eigenvector intersects the ellipsoid boundary at 1/√(ψ^j_ni) and −1/√(ψ^j_ni). However, this random generation does not guarantee that a generated synthetic sample x̂^j_k is always inside the ellipsoid Φ^j_n. For every randomly generated x̂^j_k, the Mahalanobis distance λ̂^j_nk is calculated according to (2); the generated x̂^j_k is retained only if λ̂^j_nk ≤ 1 and is otherwise rescaled toward the center by a random factor κ_1, where κ_1 ∈ (0, 1/λ̂^j_nk).
(B) A large share of the samples in Φ^j_n do not belong to the jth class. The random placement of synthetic samples in Φ^j_n, as in the previous case, cannot effectively improve the classification accuracy. Borderline SMOTE [30] is therefore modified to generate synthetic samples in this case. The samples located at the borderline between the clusters belonging and not belonging to the jth class must first be identified. For every jth class sample x^j_k ∈ Φ^j_n, let S^j_k be the set of its m nearest neighbors. A sample whose m nearest neighbors all belong to the jth class is not on the borderline; conversely, a borderline sample has at least one of its m nearest neighbors not belonging to the jth class. After the borderline samples are identified, the synthetic samples are generated through random interpolation between a borderline sample x^j_k and any other sample x^j_l ∈ S^j_k; that is, the synthetic sample is generated as follows:

x̂ = x^j_k + κ_2 (x^j_l − x^j_k),   (42)

where κ_2 is a random number and κ_2 ∈ [0, 1].
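For case (A), the eigen-decomposition-based generation with the accept test can be sketched as follows (a rejection-sampling reading of the procedure; resampling rejected candidates instead of rescaling them is our simplification):

```python
import numpy as np

def sample_in_ellipsoid(v, M, rng=None):
    """Draw a random point inside the ellipsoid (x-v)^T M (x-v) <= 1.
    Coordinates are drawn along the eigenvectors of M, bounded by the
    axis intercepts +-1/sqrt(psi_i), then checked against the ellipsoid."""
    rng = np.random.default_rng(rng)
    psi, Z = np.linalg.eigh(M)            # eigenvalues psi_i, eigenvectors z_i
    d = len(v)
    while True:
        b = rng.uniform(-1.0, 1.0, size=d) / np.sqrt(psi)
        x = v + Z @ b                     # candidate synthetic sample
        if (x - v) @ M @ (x - v) <= 1.0:  # accept only if inside
            return x
```

Sampling in the bounding box of the ellipsoid and rejecting outside points keeps the accepted samples inside Φ^j_n by construction.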
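For case (B), the borderline identification plus interpolation can be sketched as follows (our own minimal implementation of the modified Borderline-SMOTE step; names are illustrative):

```python
import numpy as np

def borderline_synthetic(X, y, j, m=5, n_new=50, rng=None):
    """Find class-j samples whose m nearest neighbors contain at least
    one non-class-j sample (borderline samples), then interpolate
    between a borderline sample and a random one of its neighbors."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :m]         # m nearest neighbors
    idx_j = np.flatnonzero(y == j)
    borderline = [k for k in idx_j if np.any(y[nbrs[k]] != j)]
    out = []
    for _ in range(n_new):
        k = borderline[rng.integers(len(borderline))]
        l = nbrs[k, rng.integers(m)]            # neighbor to interpolate with
        kappa2 = rng.random()                   # interpolation factor in [0, 1]
        out.append(X[k] + kappa2 * (X[l] - X[k]))
    return np.array(out)
```

Because the synthetic points are anchored at borderline samples, they concentrate near the class boundary rather than being spread over the whole ellipsoid.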

Simulation
The proposed EMDO was evaluated against other multi-class imbalanced data learning algorithms on different numerical data sets. The classifier C4.5 is commonly utilized to verify oversampling results; for instance, the oversampling approaches in [30,32,39] and [40][41][42][43] all used C4.5 as the classifier. This is mainly because the classification results of C4.5 do not change as long as the parameter settings and data sets are fixed; no randomness exists in its results for the same parameter setting and data set. Note that the number of ellipsoids utilized for the minority class is determined using the scheme proposed in Section 4 and is listed in the rightmost columns of Tables 1 and 6. The five nearest neighbors are considered for synthetic sample generation in case (B) of Section 5. The simulations in this study were conducted using five-fold cross-validation with 10 independent runs. Every data set was tested using different oversampling schemes compared against the proposed EMDO. The minority class with the minimum size was selected to validate the effectiveness and efficiency of the proposed EMDO.
Several evaluation metrics were used to evaluate the effectiveness and efficiency of the proposed EMDO. The classification accuracy for the jth class is defined as follows:

P_j = TP_j / (TP_j + FP_j),   (43)

where TP_j is the number of true-positive classified samples, that is, the samples correctly classified as belonging to the jth class, and FP_j is the number of false-positive classified samples, that is, the samples incorrectly classified as belonging to the jth class. The metric P_avg is defined as the average classification accuracy over all p classes, that is,

P_avg = (1/p) ∑_{j=1...p} P_j.   (44)

The metric P_min refers to the classification accuracy defined in (43) for the minority class with the minimum size. To measure the capability of EMDO to separate any pair of classes, the area under the curve (AUC) [54,55] is widely used [56,57]. Denote A_{m,n} as the AUC between class m and class n. The metric AUCm is defined as follows for measuring the capability of EMDO to separate the minority class with the minimum size from the other classes:

AUCm = (1/(p−1)) ∑_{m≠n′} A_{m,n′},   (45)

where n′ denotes the minority class with the minimum size. In addition to AUCm, the average of the AUC over all pairs of classes for a multi-class problem, denoted MAUC, is defined as

MAUC = (2/(p(p−1))) ∑_{m<n} A_{m,n}.   (46)

To evaluate the imbalance condition of every data set, the maximum imbalance ratio IR_max is defined as follows:

IR_max = max_j N_j / min_j N_j.   (47)

Example 1: The data sets used in the simulation are the same as those used in [43] for comparing the performance of EMDO against AMDO and other learning algorithms. The data sets used in [43] were mainly from data repositories such as the Knowledge Extraction based on Evolutionary Learning (KEEL) repository [58] and the UCI (University of California, Irvine) Machine Learning Repository [59]. Table 1 describes these data sets. Performance comparisons based on different indices are presented in Tables 2-5, in which SSMOTE refers to Static-SMOTE [60], GCS refers to RESCALE [61], ABNC refers to AdaBoost.NC [62], and OSMOTE refers to OVOSMOTE [63]. The MDO in [42] and the MDO+ and AMDO in [43] are also compared with the proposed EMDO in Tables 2-5. The Baseline algorithm is the classifier C4.5 without any oversampling technique. The best result in each table is in bold face.
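The per-class accuracy and imbalance-ratio metrics can be computed directly from predictions; the following is a sketch (an AUC routine from a standard library would supply the A_{m,n} terms, which are omitted here):

```python
import numpy as np
from collections import Counter

def p_metrics(y_true, y_pred, classes):
    """Per-class accuracy P_j = TP_j / (TP_j + FP_j) and the average
    P_avg over all classes, as defined in the text."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    P = {}
    for j in classes:
        tp = np.sum((y_pred == j) & (y_true == j))   # true positives
        fp = np.sum((y_pred == j) & (y_true != j))   # false positives
        P[j] = tp / (tp + fp) if tp + fp else 0.0
    return P, sum(P.values()) / len(P)

def ir_max(y):
    """Maximum imbalance ratio: largest class size over smallest."""
    c = Counter(y)
    return max(c.values()) / min(c.values())
```

For instance, on labels [0, 0, 1, 1] with predictions [0, 1, 1, 1], class 0 scores 1.0 and class 1 scores 2/3.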

To compare the performance of EMDO with that of the other schemes, the average rank of every scheme was calculated. All oversampling schemes were tested on each of the data sets listed in Table 1, and the algorithms were ranked on each metric: the best-performing algorithm is ranked first, the second-best is ranked second, and so on. The average rank of every algorithm was then calculated. Schemes with the same metric values share ranks; for instance, if two schemes tie for second place, they share the second and third ranks and are both ranked 2.5. The means and standard deviations of P_min and P_avg are listed in Tables 2 and 3, respectively, for every oversampling scheme, including the proposed EMDO, applied to the different data sets. According to Tables 2 and 3, EMDO outperforms all of the other schemes on every data set and has the lowest average rank. The results in Tables 2 and 3 imply that oversampling with EMDO significantly improves the classification accuracy for the smallest minority class, and that the synthetic samples generated for the minority class improve the overall average classification accuracy.
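The shared-rank averaging described above can be made precise as follows (our sketch; higher metric values are taken as better):

```python
def average_ranks(scores_by_scheme):
    """Rank schemes by score (highest first); schemes with tied scores
    share the average of the ranks they occupy, as described in the text."""
    ranks = {}
    ordered = sorted(scores_by_scheme.items(), key=lambda kv: -kv[1])
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and ordered[j][1] == ordered[i][1]:
            j += 1                       # ties occupy positions i..j-1
        shared = (i + 1 + j) / 2         # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[ordered[k][0]] = shared
        i = j
    return ranks
```

Two schemes tied for second place thus both receive rank 2.5, matching the example in the text.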
The means and standard deviations of the AUCm defined in (45) and the MAUC defined in (46) are compared for all schemes in Tables 4 and 5, respectively. As indicated in Table 4, EMDO outperforms the other schemes in separating the smallest minority class from the other classes on every listed data set. Moreover, according to Table 5, the capability of EMDO to separate all pairs of classes in the multi-class problem is superior to that of the other schemes.
Example 2: The performance of EMDO is evaluated on sensing data in this example. Two data sets, Statlog (Shuttle) from UCI and Mafalda [64] from GitHub, are utilized. Statlog (Shuttle) contains recorded sensor data from NASA's space shuttle, whereas Mafalda contains recorded sensor data from different brands of cars. The characteristics of these two data sets are shown in Table 6, which indicates that both data sets are imbalanced, with maximum imbalance ratios IR_max of 5684.7 and 5.94, respectively. The four indices P_min, P_avg, AUCm, and MAUC are calculated and compared in Table 7 for both data sets with the classifier C4.5. The classification results are greatly improved by the EMDO oversampling scheme compared with the results without it; EMDO improves the classification results for both imbalanced data sets according to all four evaluation indices listed in Table 7.

Conclusions
EMDO was demonstrated to outperform competing oversampling approaches in simulations. EMDO performed well because it approximates the decision region of the target minority class with reasonable accuracy by using a set of ellipsoids. In problems involving multi-class imbalanced data, EMDO performs especially well when the decision region of the minority class is separated in the feature space. EMDO learns the sizes, centers, and orientations of the ellipsoids that approximate the minority class decision region from the underlying distribution of minority class samples. The IoT is a key emerging technology, and imbalanced data will become an increasingly common problem as the number of IoT sensors increases; the proposed EMDO is suitable for solving such multi-class imbalanced data classification problems. Future work includes applying EMDO to the imbalanced data encountered in real-world IoT sensing applications. Moreover, although EMDO is a data-level learning approach, it can easily be integrated with cost-sensitive methods to increase the effectiveness and efficiency of learning, and studies on such integrations are another direction for future research.