Article

Kernel Based Data-Adaptive Support Vector Machines for Multi-Class Classification

1 School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai 200433, China
2 Department of Statistical and Actuarial Sciences, University of Western Ontario, London, ON N6A 3K7, Canada
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2021, 9(9), 936; https://doi.org/10.3390/math9090936
Submission received: 25 February 2021 / Revised: 9 April 2021 / Accepted: 10 April 2021 / Published: 23 April 2021
(This article belongs to the Special Issue Statistical Data Modeling and Machine Learning with Applications)

Abstract
Imbalanced data exist in many classification problems, and the classification of imbalanced data poses remarkable challenges in machine learning. Among different classifiers, the support vector machine (SVM) and its variants are popular thanks to their flexibility and interpretability. However, the performance of SVMs deteriorates when the data are imbalanced, a typical data structure in multi-category classification problems. In this paper, we employ a data-adaptive SVM with scaled kernel functions to classify instances from a multi-class population. We propose a multi-class data-dependent kernel function for the SVM that accounts for class imbalance and the spatial association among instances, so that the classification accuracy is enhanced. Simulation studies demonstrate the superb performance of the proposed method, and a real multi-class prostate cancer image dataset is employed as an illustration. Not only does the proposed method outperform the competitor methods in terms of commonly used accuracy measures such as the F-score and G-means, but it also successfully detects more than 60% of instances from the rare class in the real data, while the competitors can only detect less than 20% of the rare class instances. The proposed method will benefit other scientific research fields, such as multiple region boundary detection.

1. Introduction

One of the typical problems in data mining and machine learning is to classify new instances on the basis of observed ones. A common classification problem is to separate two classes using a decision rule estimated from training data; however, multi-class situations are increasingly seen in various scientific areas, including disease diagnosis in medical research [1], artificial intelligence [2], users’ preferences in recommendation systems [3], and risk evaluation in the social sciences [4]. Accordingly, techniques are either derived from binary classifiers or proposed specifically for multi-category classification problems. One of the most powerful classifiers is the support vector machine (SVM) [5], which shows superior performance in many real applications [6] and is known for its excellent behavior in both small and large samples, its robustness to outliers, and its ease of interpretation.
The most popular framework for dealing with multi-category classification problems is to decompose them into a series of binary classifications to which regular binary classifiers can be directly applied. Examples of such methods include the well-known one-versus-one [7] and one-versus-all [5] techniques. In particular, for a k-category classification problem under the SVM framework, the least squares SVM (LS-SVM) [8] was extended to the multi-class case [9]. To overcome the drawback of the original LS-SVM that the decision function is constructed from most of the training samples, referred to as the non-sparseness problem, Xia and Li [10] developed a new multi-class LS-SVM algorithm whose solution can be sparse in the weight coefficients of the support vectors. Fung and Mangasarian [11] followed the idea of the proximal SVM (PSVM) in [12] to extend the PSVM to the multi-class case. For each decomposed sub-classification problem, the solution is similar to the binary case: new samples are classified by allocating them to the closer of the two parallel planes. This PSVM method is closely aligned with the one-versus-all approach. Zhang et al. [13] extended the PSVM method to include an adaptive kernel function, which magnifies the resolution near each boundary based on weighting factors obtained from a Chi-square distribution. However, its adaptively scaled kernel depends on a squared distance, which may not be reliable [1], and the decay rate for each class is constant. Crammer and Singer [2] proposed a direct multi-class SVM formulation, but their method suffers from a heavy computational burden. Following this idea, He et al. [14] presented a simplified multi-class SVM that reduces the size of the resulting dual optimization by introducing a relaxed classification error bound, which speeds up the training process without sacrificing classification accuracy.
However, an imbalance issue often arises in real applications such as cancer research, especially when dealing with multi-category classification. That is, some minority classes may contain very few instances in the training sample, whether the problem has two categories by nature or the one-versus-all strategy is used in multi-class cases. Learning from imbalanced data is remarkably challenging in the field of data mining with big data [6]. Many fields have seen the importance of, and need for, accurate classifiers for imbalanced data [15], including the detection of rare but serious diseases such as cancers in medical science, fraud detection in accounting [16], and risk evaluation in economics [4]. Many commonly used binary classifiers show only limited predictive power for the minority class when severe imbalance exists [17]. Indeed, the issue corresponds to an unequal distribution of the sample data across classes, where a majority of instances belong to a specific class while the rest belong to the others. Chawla et al. [18] and Tang et al. [19] discussed this issue and found that the SVM for multiple classes with imbalanced data is prone to generating a classifier with a strong estimation bias towards the majority class and thus gives rather poor performance. Wang and Shen [20] proposed a method that avoids the difficulties of the one-versus-all strategy by dealing with the multiple classes jointly. Consequently, an accurate classifier is always desired when a specific class is extremely small compared to the other classes in the training data, as in the one-versus-all treatment of multi-class classification.
To overcome the effect of imbalance on classification, Liu and He [1] proposed a new method to enhance the performance of the SVM for imbalanced data by adaptively scaling the kernel function obtained from a standard SVM, so that the separation between the two categories can be effectively enlarged. The method also takes into account the locations of the support vectors in the feature space, which makes it appealing when the responses come from multiple classes. In this paper, we propose a new data-adaptive SVM technique for multi-class problems. A new data-adaptive kernel function is proposed for the multi-class SVM such that the decay rate of the scaling magnitude is more robust and can vary with the density of the samples in the neighborhood. Not only does the method take the imbalance of a multi-class response into consideration, but it also involves the spatial association of local data instances. By using this adaptive kernel function, the constructed classifier shows excellent predictive power, especially for imbalanced data, at a competitive computational cost. Numerical investigations demonstrate the superior performance of the proposed method, and a real image dataset is employed as an illustration.
The remainder of the paper is organized as follows. Section 2 introduces the proposed methodology for multi-category classification with class imbalance taken into account. Numerical investigation is presented in Section 3 to demonstrate the superb prediction accuracy of the proposed method compared with its competitors. Concluding remarks and discussion are described in the final section.

2. Methodology

2.1. SVM Framework and Notation

As a general method for classification proposed by Vapnik [5], the support vector machine essentially uses a kernel function to map the original input data space into a high-dimensional feature space so that the instances from the two classes are as far apart as possible and, preferably, separable by a linear boundary in the feature space.
To start with, we consider the binary case. Given a sample $\{\mathbf{x}_i, y_i\}$, $i = 1, \dots, n$, where $\mathbf{x}_i$ is a vector of predictors in the input space $I = \mathbb{R}^p$ and $y_i$ represents the class label taking a value in $\{+1, -1\}$, a nonlinear support vector machine maps the input data $\mathbf{x} = \{\mathbf{x}_1, \dots, \mathbf{x}_n\}$ into a high-dimensional feature space $F = \mathbb{R}^l$ using a nonlinear mapping function $s: \mathbb{R}^p \rightarrow \mathbb{R}^l$, and finds a linear boundary in the feature space $F$ by maximizing the smallest distance of the instances to this boundary. Mathematically, the idea is equivalent to solving
$$\min_{\mathbf{w}, b}\; \frac{1}{2}\mathbf{w}^T\mathbf{w} + C\sum_{i=1}^{n}\xi_i \quad \text{subject to } y_i\left(\mathbf{w}^T s(\mathbf{x}_i) + b\right) \ge 1 - \xi_i,\; \xi_i \ge 0,\; i = 1, \dots, n, \qquad (1)$$
where $C$ is the so-called soft margin parameter that determines the trade-off between the width of the margin and the classification error, and $\boldsymbol{\xi} = (\xi_1, \dots, \xi_n)^T$ is a non-negative slack variable vector that controls misclassification. The dual problem of (1) is to solve
$$\max_{\boldsymbol{\alpha}}\; \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j K(\mathbf{x}_i, \mathbf{x}_j), \quad \text{subject to } \sum_{i=1}^{n}\alpha_i y_i = 0,\; 0 \le \alpha_i \le C,\; i = 1, 2, \dots, n,$$
where the $\alpha_i$'s are the dual variables and the scalar function $K(\cdot,\cdot)$ is called a kernel function, defined as $K(\mathbf{x}_i, \mathbf{x}_j) = \langle s(\mathbf{x}_i), s(\mathbf{x}_j) \rangle$ with $\langle \cdot, \cdot \rangle$ being the inner product operator. Denote by $SV$ the index set of the support vectors, $\{j \mid \alpha_j > 0,\; j = 1, 2, \dots, n\}$. With the observations $\mathbf{x}_i$, $i \in SV$, the kernel form of the SVM boundary can be written as
$$\sum_{i \in SV} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}) + b = 0.$$
Consequently, the label of an instance $\mathbf{x}$ is assigned by $\mathrm{sign}(D(\mathbf{x}))$, with
$$D(\mathbf{x}) = \sum_{i \in SV} \hat{\alpha}_i y_i K(\mathbf{x}_i, \mathbf{x}) + \hat{b},$$
where $\hat{a}$ represents the estimated value of $a$. Theoretically, the bias term $b_j$ is identical for all support vectors in $SV$ [21]. In practice, the bias term $\hat{b}$ is determined as the average of the estimates $\hat{b}_j$ over all the support vectors, where $\hat{b}_j$ is obtained by using the $j$-th support vector $\mathbf{x}_j$:
$$\hat{b}_j = y_j - \sum_{i \in SV} \hat{\alpha}_i y_i K(\mathbf{x}_i, \mathbf{x}_j).$$
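As a concrete illustration of the decision function $D(\mathbf{x})$ and the averaged bias $\hat{b}$ above, the following minimal sketch (plain NumPy; the function and variable names are hypothetical, not from the paper) evaluates the kernel decision function from given dual coefficients and averages the per-support-vector bias estimates.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    """Gaussian RBF kernel matrix K[i, j] = exp(-||X[i] - Z[j]||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def estimate_bias(X_sv, y_sv, alpha_sv, sigma=1.0):
    """Average the per-support-vector estimates b_j = y_j - sum_i alpha_i y_i K(x_i, x_j)."""
    K = rbf_kernel(X_sv, X_sv, sigma)
    b_j = y_sv - K.T @ (alpha_sv * y_sv)
    return b_j.mean()

def decision_function(X_new, X_sv, y_sv, alpha_sv, b, sigma=1.0):
    """D(x) = sum_i alpha_i y_i K(x_i, x) + b, evaluated for each row of X_new."""
    K = rbf_kernel(X_sv, X_new, sigma)      # shape (n_sv, n_new)
    return K.T @ (alpha_sv * y_sv) + b

# labels would then be assigned by np.sign(decision_function(...))
```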
A $k$-category classification problem with the class label $y_i$ taking values in $\{1, \dots, k\}$ can generally be decomposed into a sequence of binary classification problems using the one-versus-all strategy. Specifically, the $m$-th binary classification, $m = 1, \dots, k$, is set up for a training sample $\{\mathbf{x}_i, y_i^{(m)}\}$, where $y_i^{(m)} = I(y_i = m) - I(y_i \neq m)$ and $I(\cdot)$ is the indicator function. Hence, by applying the SVM procedure for binary classification, $k$ classifiers can be constructed with $k$ kernels $K_1, \dots, K_k$, and the $m$-th kernel form of the SVM boundary between the $m$-th class and the remaining $(k-1)$ classes can be written as
$$D_m(\mathbf{x}) = \sum_{i \in SV_m} \alpha_i^{(m)} y_i^{(m)} K_m(\mathbf{x}_i, \mathbf{x}) + b^{(m)}.$$
With the estimated decision functions from all $k$ binary classifications, the final class label of an instance is assigned by a majority voting procedure.
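A minimal sketch of the one-versus-all decomposition described above (hypothetical names; it reuses `decision_function` from the previous sketch): each class label is recoded as $y_i^{(m)} = I(y_i = m) - I(y_i \neq m)$, one binary SVM is trained per class, and the final label is resolved among the $k$ classifiers; taking the class with the largest decision value is a common way to implement the vote.

```python
import numpy as np

def ova_labels(y, m):
    """Recode labels for the m-th one-versus-all problem: +1 for class m, -1 otherwise."""
    return np.where(y == m, 1.0, -1.0)

def ova_predict(decision_values):
    """decision_values: array of shape (k, n_new) holding D_m(x) for each class m.
    Assign each instance to the class with the largest decision value."""
    return np.argmax(decision_values, axis=0) + 1   # classes labeled 1, ..., k
```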
Quite a few typical kernels are available for the SVM procedure. One type is the radial kernel $K(\mathbf{x}, \mathbf{x}') = f(\|\mathbf{x} - \mathbf{x}'\|^2)$, such as the Gaussian radial basis function kernel,
$$K(\mathbf{x}, \mathbf{x}') = \exp\left(-\|\mathbf{x} - \mathbf{x}'\|^2 / (2\sigma^2)\right).$$
Another type of kernel takes the form of an inner product, $K(\mathbf{x}, \mathbf{x}') = f(\langle \mathbf{x}, \mathbf{x}' \rangle)$, such as the polynomial kernel with degree $d$,
$$K(\mathbf{x}, \mathbf{x}') = (1 + \langle \mathbf{x}, \mathbf{x}' \rangle)^d.$$

2.2. Conformal Transformation and Adaptive Kernel Machine

From a geometrical point of view, when the feature space $F$ is a Euclidean space, a Riemannian metric is induced in the input space $I$. For instance, a small change $d\mathbf{x}$ in the input space is mapped to $ds(\mathbf{x})$ in the feature space,
$$ds(\mathbf{x}) = \nabla s \cdot d\mathbf{x},$$
where
$$\nabla s = \frac{\partial s(\mathbf{x})}{\partial \mathbf{x}} = \begin{pmatrix} \dfrac{\partial s_1(\mathbf{x})}{\partial x_1} & \cdots & \dfrac{\partial s_1(\mathbf{x})}{\partial x_p} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial s_l(\mathbf{x})}{\partial x_1} & \cdots & \dfrac{\partial s_l(\mathbf{x})}{\partial x_p} \end{pmatrix}. \qquad (8)$$
Thus, the squared length of $ds(\mathbf{x})$ can be written in the quadratic form
$$\|ds(\mathbf{x})\|^2 = (ds(\mathbf{x}))^T ds(\mathbf{x}) = \sum_{i}\sum_{j} s_{ij}(\mathbf{x})\, dx_i\, dx_j,$$
where
$$s_{ij}(\mathbf{x}) = \left(\frac{\partial s(\mathbf{x})}{\partial x_i}\right)^T \cdot \frac{\partial s(\mathbf{x})}{\partial x_j}. \qquad (10)$$
Lemma 1
([1]). Suppose $K(\mathbf{p}, \mathbf{q})$ is a kernel function and $s(\cdot)$ is the corresponding mapping in the support vector machine. Then
$$s_{ij}(\mathbf{p}) = \frac{\partial^2}{\partial p_i \partial q_j} K(\mathbf{p}, \mathbf{q})\Big|_{\mathbf{q} = \mathbf{p}}.$$
Detailed proof is given in Appendix A.
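As a quick check of Lemma 1 (a worked example added here for illustration, not part of the original derivation), consider the Gaussian RBF kernel $K(\mathbf{p}, \mathbf{q}) = \exp\left(-\|\mathbf{p} - \mathbf{q}\|^2/(2\sigma^2)\right)$. Differentiating twice and evaluating at $\mathbf{q} = \mathbf{p}$ gives

$$\frac{\partial^2 K(\mathbf{p},\mathbf{q})}{\partial p_i \partial q_j} = \left[\frac{\delta_{ij}}{\sigma^2} - \frac{(p_i - q_i)(p_j - q_j)}{\sigma^4}\right] K(\mathbf{p},\mathbf{q}), \qquad s_{ij}(\mathbf{p}) = \frac{\partial^2 K(\mathbf{p},\mathbf{q})}{\partial p_i \partial q_j}\Big|_{\mathbf{q}=\mathbf{p}} = \frac{\delta_{ij}}{\sigma^2},$$

so the induced metric of the plain Gaussian kernel is flat and isotropic; it is the conformal transformation discussed next that introduces data-dependent magnification.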
Although the parameters of a kernel function can manipulate the geometric characteristics of the feature space $F$ to some degree, a conformal transformation of the original kernel function can further improve its adaptability. A conformal transformation is a mapping that projects the original input space to a new feature space while locally preserving the angles between vectors [1]. Define
$$\tilde{s}(\mathbf{x}) = c(\mathbf{x})\, s(\mathbf{x})$$
and
$$\tilde{K}(\mathbf{x}, \mathbf{x}') = \langle \tilde{s}(\mathbf{x}), \tilde{s}(\mathbf{x}') \rangle = c(\mathbf{x})\, c(\mathbf{x}') \langle s(\mathbf{x}), s(\mathbf{x}') \rangle = c(\mathbf{x})\, c(\mathbf{x}')\, K(\mathbf{x}, \mathbf{x}'),$$
then $\tilde{K}(\mathbf{x}, \mathbf{x}')$ corresponds to the mapping $\tilde{s}$, which may increase the separation for a properly chosen positive scalar function $c(\mathbf{x})$ that takes larger values at the support vectors identified using the kernel $K(\mathbf{x}, \mathbf{x}')$. Furthermore, $\tilde{K}$ can easily be shown to satisfy the Mercer positivity condition, a sufficient condition for being a kernel function. Specifically, we employ the $L_1$-norm adaptive radial basis function (RBF) kernel proposed in [1]:
$$c(\mathbf{x}) = e^{-|D(\mathbf{x})|\, d_M(\mathbf{x})},$$
where
$$d_M(\mathbf{x}) = \underset{i \in \{\, \|s(\mathbf{x}_i) - s(\mathbf{x})\|^2 < M,\; y_i \neq y \,\}}{\mathrm{AVG}} \left(\|s(\mathbf{x}_i) - s(\mathbf{x})\|^2\right),$$
where $M$ can be regarded as the distance between the nearest and the farthest support vectors under the original mapping $s(\mathbf{x})$. In this way, the average on the right-hand side includes all the support vectors that differ from the currently considered instance and lie in the neighborhood of $s(\mathbf{x})$ within the radius $M$. This takes into account the spatial distribution of the support vectors in the feature space $F$, and hence partially reflects the spatial association of the instances in the training set. This approach turns out to be robust and efficient [1].
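The following sketch (hypothetical helper names; it assumes the product form $e^{-|D(\mathbf{x})|\, d_M(\mathbf{x})}$ reconstructed above, and reuses `rbf_kernel` and `decision_function` from the earlier snippet) computes the neighborhood average $d_M(\mathbf{x})$ through the kernel identity $\|s(\mathbf{x}_i) - s(\mathbf{x})\|^2 = K(\mathbf{x}_i,\mathbf{x}_i) + K(\mathbf{x},\mathbf{x}) - 2K(\mathbf{x}_i,\mathbf{x})$ and then evaluates $c(\mathbf{x})$.

```python
import numpy as np

def feature_space_sq_dist(X_sv, X, sigma=1.0):
    """||s(x_i) - s(x)||^2 = K(x_i, x_i) + K(x, x) - 2 K(x_i, x); for the RBF kernel
    both diagonal terms equal 1."""
    K_cross = rbf_kernel(X_sv, X, sigma)              # shape (n_sv, n_new)
    return 2.0 - 2.0 * K_cross

def d_M(X_sv, y_sv, X, y_pred, M, sigma=1.0):
    """Average squared feature-space distance to support vectors of a different class
    falling inside the radius-M neighborhood of each point in X."""
    dist2 = feature_space_sq_dist(X_sv, X, sigma)     # (n_sv, n_new)
    out = np.zeros(X.shape[0])
    for j in range(X.shape[0]):
        mask = (dist2[:, j] < M) & (y_sv != y_pred[j])
        out[j] = dist2[mask, j].mean() if mask.any() else dist2[:, j].mean()
    return out

def scaling_factor(D_values, dM_values):
    """c(x) = exp(-|D(x)| * d_M(x)) -- the product form assumed in this sketch."""
    return np.exp(-np.abs(D_values) * dM_values)
```

Falling back to the overall average when no support vector of a different class lies in the neighborhood is our own pragmatic choice, not specified in the paper.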

2.3. Adaptive Kernel Machine for Multi-Class Cases

To apply the adaptive kernel machine to a multi-class classification problem, we first apply the basic SVM to all $k$ classes of a training sample using the one-versus-all strategy, and obtain $k$ initial decision boundaries as well as the predicted labels of all instances. We then split the training sample into $k$ datasets, denoted by $S_1, S_2, \dots, S_k$, according to the class labels $\hat{y}_i$ from the initial round of SVM. This step is essential for finding the approximate locations of the support vectors and the initial boundaries. Analogous to the conformal transformation in the binary case, the adaptive data-dependent kernel transformation function is defined as
$$c(\mathbf{x}) = \begin{cases} \exp\left(-p_1(\mathbf{x})\, |D_1(\mathbf{x})|\right), & \text{if } \mathbf{x} \in S_1 \\ \exp\left(-p_2(\mathbf{x})\, |D_2(\mathbf{x})|\right), & \text{if } \mathbf{x} \in S_2 \\ \qquad \vdots & \\ \exp\left(-p_k(\mathbf{x})\, |D_k(\mathbf{x})|\right), & \text{if } \mathbf{x} \in S_k, \end{cases} \qquad (16)$$
where $p_m(\mathbf{x})$, $m = 1, \dots, k$, are functions of the data that are determined to control the decay rates and hence further affect the performance of the classifier.

2.4. Specification of the Functions $p_m(\mathbf{x})$

In imbalanced data classification, the determination of appropriate weights for each category is important so that the problem can be transformed back to an approximately balanced one. Generally, there are two requirements for the choice of weights. One is that the data in a majority class should be allocated a smaller weight than those in a minority class, so that the classes contribute to the decision function in a more balanced way. The other is the natural restriction that the weights sum to 1. Essentially, for imbalanced data, the weights can be based on the reciprocals of the class sizes in the training sample. Let $n_m$ denote the training sample size of the $m$-th class, $m = 1, \dots, k$. Then the weights are defined as
$$w_m = \frac{1/n_m^2}{\sum_{i=1}^{k} 1/n_i^2}. \qquad (17)$$
In this way, the $w_m$'s reflect how sparsely each category is represented. Note that an $L_2$-norm type weighting is adopted when building $w_m$ in (17). Although an $L_p$-norm ($p > 0$) weighting, such as the $L_1$-norm, can be applied in general, we found in real applications that the $L_2$-norm gives the best empirical performance.
As the $w_m$'s do not involve the information in $\mathbf{x}$, we further borrow the construction of $c(\mathbf{x})$ from the binary case to incorporate information from $\mathbf{x}$. Define
$$d_m(\mathbf{x}) = \underset{j \in SV_m}{\mathrm{AVG}} \left(\|s_m(\mathbf{x}_j) - s_m(\mathbf{x})\|^2\right) = \underset{j \in SV_m}{\mathrm{AVG}} \left(K_m(\mathbf{x}_j, \mathbf{x}_j) + K_m(\mathbf{x}, \mathbf{x}) - 2K_m(\mathbf{x}_j, \mathbf{x})\right), \qquad (18)$$
where $SV_m$ is the set of support vectors from the initial binary SVM for the $m$-th class, $K_m(\cdot,\cdot)$ is the kernel function adopted in the $m$-th binary SVM, and $s_m(\cdot)$ is its corresponding mapping function. In practice, we adopt a common kernel function for all $K_m(\cdot,\cdot)$, $m = 1, \dots, k$, such as the popular Gaussian kernel, to simplify the calculation. Consequently, we define
$$p_m(\mathbf{x}) = w_m \cdot d_m(\mathbf{x}), \qquad (19)$$
so that the influence from the size of the class is taken into account.
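A minimal sketch of (17)–(19) follows (our illustration with hypothetical names; `feature_space_sq_dist` is reused from the binary sketch), assuming the class-wise support vector sets have already been obtained from the first-round one-versus-all SVMs.

```python
import numpy as np

def class_weights(class_sizes):
    """w_m = (1 / n_m^2) / sum_i (1 / n_i^2), as in Equation (17)."""
    inv_sq = 1.0 / np.asarray(class_sizes, dtype=float) ** 2
    return inv_sq / inv_sq.sum()

def p_m_weighted(X_sv_m, X, w_m, sigma=1.0):
    """p_m(x) = w_m * d_m(x), where d_m(x) is the average squared feature-space
    distance from x to the support vectors of the m-th one-versus-all problem."""
    dist2 = feature_space_sq_dist(X_sv_m, X, sigma)   # (n_sv_m, n_new)
    d_m = dist2.mean(axis=0)
    return w_m * d_m
```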
Another potential choice of $p_m(\mathbf{x})$ could be
$$p_m(\mathbf{x}) = \underset{i \in \{\, \|s_m(\mathbf{x}_i) - s_m(\mathbf{x})\|^2 < Q_m,\; y_i \neq y \,\}}{\mathrm{AVG}} \left(\|s_m(\mathbf{x}_i) - s_m(\mathbf{x})\|^2\right), \qquad (20)$$
where the tuning parameter $Q_m$ can be regarded as the distance between the nearest and the farthest support vectors in $SV_m$ from $s_m(\mathbf{x})$ within the same class. When $k$ is small or moderate, this setting can be meaningful. However, when $k$ is large, the computational cost increases since more tuning parameters need to be determined. To avoid this problem, we propose to use a universal control parameter $Q$ while taking the weights $w_m$ into account. The final version of $p_m(\mathbf{x})$ is constructed as
$$p_m(\mathbf{x}) = \underset{i \in \{\, \|s_m(\mathbf{x}_i) - s_m(\mathbf{x})\|^2 < Q \cdot w_m,\; y_i \neq y \,\}}{\mathrm{AVG}} \left(\|s_m(\mathbf{x}_i) - s_m(\mathbf{x})\|^2\right). \qquad (21)$$
In this way, the classification is more robust to extreme spatial configurations that may push the classification boundaries towards the majority classes, while the weights balance the training set so that the classification performance is enhanced.
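A sketch of the neighborhood-restricted version (21), under the same assumptions and hypothetical names as the previous snippets (`feature_space_sq_dist` reused): the threshold $Q \cdot w_m$ shrinks the neighborhood for majority classes and, relatively, widens it for minority classes.

```python
import numpy as np

def p_m_neighborhood(X_sv_m, y_sv_m, X, y_pred, w_m, Q, sigma=1.0):
    """p_m(x): average squared feature-space distance to support vectors of a
    different class lying within the class-weighted radius Q * w_m (Equation (21))."""
    dist2 = feature_space_sq_dist(X_sv_m, X, sigma)   # (n_sv_m, n_new)
    radius = Q * w_m
    p = np.zeros(X.shape[0])
    for j in range(X.shape[0]):
        mask = (dist2[:, j] < radius) & (y_sv_m != y_pred[j])
        p[j] = dist2[mask, j].mean() if mask.any() else dist2[:, j].mean()
    return p
```

As in the binary sketch, falling back to the overall average when the neighborhood is empty is our own pragmatic choice.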
Some other techniques appear in the literature, although they show drawbacks in various imbalanced-data situations. For example, Wu and Amari [22] made improvements by introducing different tuning parameters for different classes so that the local density of support vectors can be accommodated; besides the heavy computational cost this brings, its performance in high-dimensional cases is uncertain. Williams et al. [23] also extended their binary kernel-scaling SVM technique to the multi-class case; however, its distance tuning parameter, corresponding to the value of $Q \cdot w_m$ in our case, is fixed throughout the whole region. This inflexible setting cannot reflect local information, especially when the density of support vectors is high. Moreover, using the $L_2$-norm of $D(\mathbf{x})$ may lead to unstable classification performance in high-dimensional cases due to a faster decay towards the constant $e^{-k}$ compared with our proposed method.

2.5. Data-Adaptive SVM Algorithm for Multi-Class Case

With $c(\mathbf{x})$ constructed in (16), we conformally transform the $k$ kernels trained in the initial round of the multi-class SVM, $K_1, \dots, K_k$, into
$$\tilde{K}_m(\mathbf{x}_i, \mathbf{x}_j) = c(\mathbf{x}_i)\, c(\mathbf{x}_j)\, K_m(\mathbf{x}_i, \mathbf{x}_j), \qquad (22)$$
where $m = 1, \dots, k$, and $c(\cdot)$ is defined in (16) with $p_m(\mathbf{x})$ as in (21). $K_m(\cdot,\cdot)$ is usually set to the Gaussian kernel function in the first round of SVM. Using the form in (19) instead gives similar empirical performance. Based on the updated kernels, the second round of SVM is then conducted and the predicted labels for all instances are obtained. It can be seen that
  • The magnification will be almost constant along the separating surface $D(\mathbf{x}) = 0$ for each boundary;
  • The magnification will be largest where the contours are closest locally. (See more details in the Appendices.)
Thus, as long as the parameters $C$ and $\sigma$ in the kernel machine (and the controlling parameter $Q$, if the form of $p_m(\mathbf{x})$ in (21) is adopted) are tuned adaptively with the data, the classifiers can be trained and the instances' labels predicted.
To conclude this section, the algorithm for the whole multi-class classification procedure is described as follows. In the first stage, a regular SVM classifier is trained with an ordinary Gaussian radial basis kernel function, and the support vectors are found so that the separating boundaries can be approximately determined using the one-versus-all technique. Based on the spatial information of the support vectors, the conformal transformations are constructed and the original kernel functions are updated. A new round of SVM optimization problems is then solved with the updated kernel functions so that the boundary for each one-versus-all problem can be found. Consequently, the predicted labels for the instances are obtained. The whole procedure is summarized in Algorithm 1.
Algorithm 1. Multi-class data adaptive kernel scaling support vector machine (SVM).
 Input: $y_i, \mathbf{x}_i$, $i = 1, \dots, n$; a Gaussian kernel function $K(\cdot, \cdot)$
1: A regular SVM classifier is trained with an ordinary Gaussian radial basis kernel function;
2: Based on the spatial information of these support vectors, the conformal transformation is constructed, and the original kernel function is updated;
3: A new round of SVM optimization problems is conducted with the updated kernel function, and the boundaries for different classes are found;
4: The predicted class labels for instances are determined by majority voting.
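A compact end-to-end sketch of Algorithm 1 follows (our illustration, not the authors' code; it relies on scikit-learn's `SVC` with a precomputed Gram matrix and on the hypothetical helpers `rbf_kernel`, `class_weights`, and `p_m_neighborhood` from the earlier snippets).

```python
import numpy as np
from sklearn.svm import SVC

def fit_adaptive_multiclass_svm(X, y, C=1.0, sigma=1.0, Q=1.0):
    """Two-stage data-adaptive one-versus-all SVM (sketch of Algorithm 1)."""
    classes = np.unique(y)
    w = class_weights([np.sum(y == m) for m in classes])
    K = rbf_kernel(X, X, sigma)                        # first-stage Gram matrix

    # Stage 1: ordinary one-versus-all SVMs on the Gaussian kernel.
    stage1 = []
    for m in classes:
        clf = SVC(C=C, kernel="precomputed")
        clf.fit(K, np.where(y == m, 1, -1))
        stage1.append(clf)
    D1 = np.vstack([clf.decision_function(K) for clf in stage1])
    y_hat = classes[np.argmax(D1, axis=0)]             # initial predicted labels

    # Conformal scaling factor c(x_i) for every training point, following Equation (16).
    c = np.ones(len(y))
    for idx, m in enumerate(classes):
        in_m = (y_hat == m)
        if not in_m.any():
            continue
        sv = stage1[idx].support_                      # indices of stage-1 support vectors
        p = p_m_neighborhood(X[sv], y[sv], X[in_m], y_hat[in_m], w[idx], Q, sigma)
        c[in_m] = np.exp(-p * np.abs(D1[idx, in_m]))

    # Stage 2: one-versus-all SVMs on the conformally transformed kernel, Equation (22).
    K_tilde = np.outer(c, c) * K
    stage2 = [SVC(C=C, kernel="precomputed").fit(K_tilde, np.where(y == m, 1, -1))
              for m in classes]
    return classes, stage1, stage2, c
```

Prediction on new instances would additionally require evaluating $c(\cdot)$ at the new points before forming the cross Gram matrix against the training data; that step is omitted to keep the sketch short.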

3. Numerical Investigation

In this section, we conduct intensive numerical experiments to evaluate the performance of the proposed classification procedure and compare it with existing competitors. The study is divided into two parts, one for simulated data and the other for a real image dataset. We compare the proposed method with four existing methods: the traditional SVM and the methods of Wu and Amari [22], Williams et al. [23], and Maratea et al. [24].
We assess the performance of the classifiers using various quantitative measures. One of them is the overall accuracy, defined by
$$P_{overall} = \frac{TP + TN}{TP + FP + TN + FN} = \frac{TP + TN}{n},$$
where $TP$, $FN$, $FP$ and $TN$ represent the numbers of true positive, false negative, false positive and true negative instances in the test sample, respectively. However, for imbalanced data, the overall accuracy rate may not be sufficient [24]. We therefore adopt two other measures of classifier performance for imbalanced data, namely the F-score and the G-mean [25]. Specifically, the F-score is defined as
$$F\text{-}score = \frac{2 \times P_{pre} \times P_{sen}}{P_{pre} + P_{sen}},$$
and G-mean as
$$G\text{-}mean = \sqrt{P_{sen} \times P_{spe}},$$
where $P_{pre}$, $P_{sen}$ and $P_{spe}$ are the precision, the sensitivity and the specificity, respectively. They are obtained by
$$P_{pre} = \frac{TP}{TP + FP}, \qquad P_{sen} = \frac{TP}{TP + FN}, \qquad P_{spe} = \frac{TN}{TN + FP}.$$
Note that the F-score is the harmonic mean of the precision and the sensitivity, while the G-mean is the geometric mean of the sensitivity and the specificity, giving a fairer comparison between the positive and negative classes regardless of class size. To further evaluate the numerical performance of the multi-category classification, we employ the multi-class ROC and AUC measures [25].
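For reference, these measures can be computed from a binary confusion matrix as in the short sketch below (our illustration; the names are hypothetical).

```python
import numpy as np

def imbalance_metrics(y_true, y_pred):
    """Overall accuracy, F-score and G-mean for a binary problem with labels +1 / -1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == -1) & (y_pred == -1))
    fp = np.sum((y_true == -1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == -1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(y_true)
    f_score = (2 * precision * sensitivity / (precision + sensitivity)
               if precision + sensitivity else 0.0)
    g_mean = np.sqrt(sensitivity * specificity)
    return {"accuracy": accuracy, "F": f_score, "G": g_mean}
```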

3.1. Simulation Study

First, we conduct simulation studies to evaluate the performance of the proposed method and compare it with competitors from the literature. Three scenarios are considered, covering the balanced, moderately imbalanced, and extremely imbalanced cases, respectively. The Gaussian RBF kernel is employed in the first round of classification unless otherwise mentioned.
For convenience, the input space is two-dimensional, and all training data are generated from three classes of bivariate Gaussian distributions with mean vectors $(2, 2)$, $(4, 3)$, $(3, 2)$ and a common covariance matrix $\gamma \cdot \Sigma$, where $\gamma$ is a nuisance parameter that controls the overlapping proportion of the classes. A moderate correlation coefficient $\rho = 0.3$ is used for all pairs of variables, and the variance of each variable is 1.
The overall training sample size is set to 600 and is split into three classes with different weights in the three scenarios. The class sizes are $(200, 200, 200)$ in Scenario 1, $(100, 200, 300)$ in Scenario 2, and $(20, 100, 480)$ in Scenario 3. In each scenario, different combinations of the tuning parameters are considered. The cost parameter $C$ is chosen from the set $\{0.1, 0.2, 0.5, 1, 5, 8, 40, 100, 500\}$, and $\sigma$ takes values in $\{0.01, 0.05, 0.1, 0.5, 1, 5, 10, 100\}$. As $Q$ is the threshold controlling the size of the local neighborhood, it is chosen by a grid search over $\{0.1, 0.2, \dots, 1\}$ times the maximal Euclidean distance between all pairs of data points in the sample. All classifiers are tuned properly with respect to the corresponding measures.
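A minimal sketch of such a tuning loop is given below (our illustration only; `fit_adaptive_multiclass_svm` is the hypothetical routine from the earlier snippet, and `predict_adaptive_multiclass_svm` stands in for a prediction routine not shown here), selecting $(C, \sigma, Q)$ by cross-validated misclassification rate.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import KFold

def tune_parameters(X, y, C_grid, sigma_grid, Q_fracs, n_folds=5):
    """Pick (C, sigma, Q) minimizing the cross-validated misclassification rate."""
    max_dist = np.max(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    best, best_err = None, np.inf
    for C, sigma, q in product(C_grid, sigma_grid, Q_fracs):
        errs = []
        for tr, te in KFold(n_splits=n_folds, shuffle=True, random_state=0).split(X):
            model = fit_adaptive_multiclass_svm(X[tr], y[tr], C=C, sigma=sigma,
                                                Q=q * max_dist)
            # placeholder prediction routine (not shown in the earlier sketch)
            y_pred = predict_adaptive_multiclass_svm(model, X[tr], X[te], sigma=sigma)
            errs.append(np.mean(y_pred != y[te]))
        if np.mean(errs) < best_err:
            best, best_err = (C, sigma, q * max_dist), np.mean(errs)
    return best, best_err
```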
The classification procedure is as follows. First, we train the classifiers with the traditional SVM using the one-versus-all strategy, and the support vectors are identified approximately. The kernel functions for all the methods are then updated adaptively by conformal transformation with the respective scalar functions $c(\mathbf{x})$, using $p_m$ as defined in (21) for the proposed method. A second round of SVM is then conducted, and the estimated class labels for the observations in the test sample are obtained and compared with the true labels. Five-fold cross-validation is employed to obtain the misclassification rate for each simulated dataset, and the whole process is repeated 1000 times. With the accuracy measures defined above, the performance of all the classifiers is shown in Table 1, Table 2 and Table 3, and Figure 1, Figure 2 and Figure 3. Similar results are observed for the proposed method with the alternative definition of $p_m$.
It is seen that all the methods considered have improved performance compared to the ordinary SVM in almost all scenarios with different combinations of the parameters $C$ and $\sigma$. In general, the proposed method outperforms all the other classifiers considered, especially for the imbalanced data. When $\sigma$ gets larger with fixed $C$, the misclassification rate tends to decrease for all the methods compared. When $\sigma$ is relatively small, the proposed method performs better than the methods of Wu and Amari and Williams et al., while when $\sigma$ is relatively large, all the methods behave nearly the same. This is because when $\sigma$ is large, the feasible solution set is large, and all of the methods tend to find the optimal solution. Correspondingly, when $C$ increases, the budget for misclassification gets bigger, meaning more tolerance is permitted so that the classes can be separated. In the balanced scenario, we found that $p_m$ is roughly constant, approximately the reciprocal of $|D|_{max}$. This makes sense because in the balanced-data case the density of the support vectors is roughly uniform, and hence the average feature-space distances are roughly the same for each data point.
For the imbalanced scenarios, the performance of all the methods is, unsurprisingly, somewhat worse than in the balanced case, due to the non-uniformly distributed support vectors. The behavior of the misclassification rate with respect to $C$ and $\sigma$ is similar to that in the balanced case. The proposed method performs the best among all the methods.

3.2. A Real Prostate Cancer MRI Dataset

In this section, we apply the proposed method to a prostate cancer MR image dataset. The study aims to find statistical methods to classify cancer and non-cancer areas, or grades of cancer, from imaging data collected with imaging equipment. In this dataset, nine common classes are labeled and listed as follows, indicating different levels of severity of cancer.
  • Atrophy: Atrophic tissue, as the name suggests (non-cancer);
  • EPE: Prostatic intraepithelial neoplasia (non-cancer);
  • PIN: Prostatic intraepithelial neoplasia (non-cancer);
  • G3: Tumour focus that is all Gleason 3 (cancer);
  • G4: Tumour focus that is all Gleason 4 (cancer);
  • G3+4: Tumour focus all predominately G3 with intermingled G4 (cancer);
  • G4+3: Tumour focus all predominately G4 with intermingled G3 (cancer);
  • G4+5: Tumour focus all predominately G4 with intermingled G5 (cancer);
  • OtherProstate: Prostate tissue that does not fall into the other categories (non-cancer).
Note that the labels are given at the voxel level. That is, a specific patient is very likely to have voxels (indicating different positions in the prostate tissue) of different classes. A patient who has G3+4-type cancer in some areas is likely to also have G3-type as well as OtherProstate-type voxels in other areas. Our objective is to predict the class labels at the voxel level. There are several labels associated with G5; however, the whole dataset contains only one patient with a very tiny area of G5 and the associated type of cancer. Therefore, G5 is extremely imbalanced.
In the first phase of the study, 21 patients are involved and more than 400 images are collected. The predictors for each voxel are three intensity measures from the MRIs, denoted as the T2W intensity, the ADC intensity and the C-Grade intensity. Other measures such as DCE and DWI are only available for some of the patients and hence are not included in the training process.
To adopt the proposed data-adaptive scaling in this multi-class case, two stages of SVMs are required. In the first stage, a standard SVM with the selected kernel is conducted so that the support vectors of the original dataset can be found. Based on the identified support vectors, the kernel functions are updated. Then, a second-stage SVM is conducted with the updated kernel, and the resulting estimated boundary is used as the classification rule. For choosing appropriate tuning parameters for each method, 7-fold cross-validation is conducted 500 times at the patient level.
To assess the performance, we compare our proposed method with both traditional and data-adaptive multi-category classification methods. Among the traditional methods, one-versus-one (1vs1) and one-versus-all (1vsA) from the indirect family, Crammer and Singer's (CS) direct method, and He et al.'s simplified SVM (simSVM) are included, while for the data-adaptive methods, Wu and Amari's and Williams et al.'s adaptive scaling methods are included. As criteria of classification performance, the misclassification rate, the percentage of support vectors in the whole dataset, and the F-score and G-means, along with their margins, are reported.
Table 4 presents the assessment measures for all the methods considered. The proposed method performs best, or nearly best, among all the compared methods. Notably, the proposed method has the smallest margins in all performance measures, which results from the robust decay of the magnification effect of the proposed data-adaptive kernel. In terms of accuracy, the proposed method has a misclassification rate similar to the indirect methods, which is significantly smaller than those of the remaining methods. The F-score and G-means are the largest for the proposed method, much larger than for the other data-adaptive kernel methods. The percentage of support vectors used for constructing the classifiers is the smallest for the proposed method.
It is worth pointing out that among the wrongly predicted labels, G4+3 is the dominant class; in other words, the misclassification mostly happens for G4+3-type cancer. This is because this type of cancer is very rare in the training sample, accounting for only 1–2% of all the labels. Such extreme imbalance makes this class very difficult to detect with high accuracy. The proposed method can detect around 60% of this type, while the other data-adaptive methods (Wu and Amari's and Williams et al.'s) can only find less than 20%. All the other methods cannot detect this class at all. Also, only our method detects the G5 class from the single patient who has it, while all competitor methods fail.

4. Concluding Remarks

In this paper, we developed a new data-dependent SVM construction technique for the multi-category classification problem. Based on the data-adaptive kernel SVM for the binary case, we proposed a new way to construct the data-dependent kernel for the multi-class setting, especially when the data are imbalanced. The data-dependent kernel functions have a more robust decay rate that can vary with the density of instances in the neighborhood. Thus, the kernel can be adapted optimally for a specific dataset. Numerical results from both synthetic and real datasets have shown the excellent performance of the proposed method. Not only does the proposed method outperform the competitors in terms of commonly used accuracy measures such as the F-score and G-means, but it also successfully detects more than 60% of instances from the rare class in the real data, while the competitors can only detect less than 20%. A possible future work is to select relevant predictors for the multi-class kernel functions and to consider the spatial association between different images. It is worth noting that the misclassification rate is affected by the distances between the mean vectors: for instance, misclassification will essentially not occur if the centers of the three Gaussian distributions are sufficiently far apart when the covariance matrix is the identity. The proposed method may be useful in other scientific research fields, such as detecting the boundaries of multiple regions of interest.

Author Contributions

Conceptualization, J.S.; Formal analysis, X.L.; Funding acquisition, W.H.; Investigation, W.H.; Methodology, X.L.; Supervision, W.H.; Validation, X.L.; Writing—original draft, J.S. and X.L.; Writing—review and editing, J.S., X.L. and W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Liu’s research was partially supported by the Fundamental Research Funds for the Central Universities. He’s research was partially supported by the Natural Science and Engineering Research Council of Canada (NSERC). The authors thank the CIHR Team at Image-Guided Prostate Cancer Management at the University of Western Ontario. The authors also thank the reviewer team for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemmas and Theorems

Appendix A.1. Proof of Lemma 1

Proof. 
By the definition of a reproducing kernel function $K(\mathbf{x}, \mathbf{z})$ with its eigenvalues $\lambda_k$ and the corresponding scalar eigenfunctions $g_k(\mathbf{x})$, we have
$$\int K(\mathbf{x}, \mathbf{z})\, g_k(\mathbf{z})\, d\mathbf{z} = \lambda_k\, g_k(\mathbf{x}),$$
where $k = 1, 2, \dots, l$. Then the kernel can be represented as
$$K(\mathbf{x}, \mathbf{z}) = \sum_{k} \lambda_k\, g_k(\mathbf{x})\, g_k(\mathbf{z}).$$
By rescaling the functions $g_k(\cdot)$ as $s_k(\mathbf{x}) = \sqrt{\lambda_k}\, g_k(\mathbf{x})$, the kernel function can further be presented as
$$K(\mathbf{x}, \mathbf{z}) = \sum_{k} s_k(\mathbf{x})\, s_k(\mathbf{z}) = [s(\mathbf{x})]^T \cdot [s(\mathbf{z})],$$
where $[s(\mathbf{x})]^T = \left(s_1(\mathbf{x}), s_2(\mathbf{x}), \dots, s_l(\mathbf{x})\right)$ and $[\cdot]^T$ is the transpose operator. Thus, if we further define
$$\nabla s = \frac{\partial s(\mathbf{x})}{\partial \mathbf{x}} = \begin{pmatrix} \dfrac{\partial s_1(\mathbf{x})}{\partial x_1} & \cdots & \dfrac{\partial s_1(\mathbf{x})}{\partial x_p} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial s_l(\mathbf{x})}{\partial x_1} & \cdots & \dfrac{\partial s_l(\mathbf{x})}{\partial x_p} \end{pmatrix}$$
and
$$s_{ij}(\mathbf{x}) = \left(\frac{\partial s(\mathbf{x})}{\partial x_i}\right)^T \cdot \frac{\partial s(\mathbf{x})}{\partial x_j} = \left(\frac{\partial s_1(\mathbf{x})}{\partial x_i}, \dots, \frac{\partial s_l(\mathbf{x})}{\partial x_i}\right) \cdot \left(\frac{\partial s_1(\mathbf{x})}{\partial x_j}, \dots, \frac{\partial s_l(\mathbf{x})}{\partial x_j}\right)^T,$$
as in (8) and (10), it follows that
$$\frac{\partial}{\partial x_i}\frac{\partial}{\partial z_j} K(\mathbf{x}, \mathbf{z})\Big|_{\mathbf{z} = \mathbf{x}} = \left(\frac{\partial s(\mathbf{x})}{\partial x_i}\right)^T \cdot \frac{\partial s(\mathbf{z})}{\partial z_j}\Big|_{\mathbf{z} = \mathbf{x}} = \left(\frac{\partial s(\mathbf{x})}{\partial x_i}\right)^T \cdot \frac{\partial s(\mathbf{x})}{\partial x_j} = s_{ij}(\mathbf{x}).$$
The lemma shows how a mapping s is associated with the corresponding kernel function K.

References

  1. Liu, X.; He, W. Adaptive kernel scaling support vector machine with application to a prostate cancer image study. J. Appl. Stat. 2021, 1–20.
  2. Crammer, K.; Singer, Y. On the algorithmic implementation of multiclass kernel-based vector machines. J. Mach. Learn. Res. 2001, 2, 265–292.
  3. Maratea, A.; Petrosino, A. Asymmetric kernel scaling for imbalanced data classification. Fuzzy Log. Appl. 2011, 196–203.
  4. Zhang, Z.; Gao, G.; Shi, Y. Credit risk evaluation using multi-criteria optimization classifier with kernel, fuzzification and penalty factors. Eur. J. Oper. Res. 2014, 237, 335–348.
  5. Vapnik, V.N. Statistical Learning Theory; Wiley: New York, NY, USA, 1998; Volume 1.
  6. Menardi, G.; Torelli, N. Training and assessing classification rules with imbalanced data. Data Min. Knowl. Discov. 2014, 28, 92–122.
  7. Kreßel, U.H.G. Pairwise classification and support vector machines. In Advances in Kernel Methods; MIT Press: Cambridge, MA, USA, 1999; pp. 255–268.
  8. Suykens, J.A.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett. 1999, 9, 293–300.
  9. Suykens, J.A.; Vandewalle, J. Multiclass least squares support vector machines. In Proceedings of the International Joint Conference on Neural Networks, IJCNN'99, Washington, DC, USA, 10–16 July 1999; Volume 2, pp. 900–903.
  10. Xia, X.L.C.; Li, K. A sparse multi-class least-squares support vector machine. In Proceedings of the IEEE International Symposium on Industrial Electronics, Cambridge, UK, 30 June–2 July 2008; pp. 1230–1235.
  11. Fung, G.M.; Mangasarian, O.L. Multicategory proximal support vector machine classifiers. Mach. Learn. 2005, 59, 77–97.
  12. Fung, G.M.; Mangasarian, O. Proximal support vector machine classifiers. Mach. Learn. 2002, 1, 21.
  13. Zhang, Y.; Fu, P.; Liu, W.; Chen, G. Imbalanced data classification based on scaling kernel-based support vector machine. Neural Comput. Appl. 2014, 25, 927–935.
  14. He, X.; Wang, Z.; Jin, C.; Zheng, Y.; Xue, X. A simplified multi-class support vector machine with reduced dual optimization. Pattern Recognit. Lett. 2012, 33, 71–82.
  15. Mazurowski, M.A.; Habas, P.A.; Zurada, J.M.; Lo, J.Y.; Baker, J.A.; Tourassi, G.D. Training neural network classifiers for medical decision making: The effects of imbalanced datasets on classification performance. Neural Netw. 2008, 21, 427–436.
  16. Chawla, N.; Japkowicz, N.; Kolcz, A. Special Issue on Learning from Imbalanced Datasets, SIGKDD Explorations; ACM SIGKDD: New York, NY, USA, 2004; Volume 6, pp. 1–6.
  17. Daskalaki, S.; Kopanas, I.; Avouris, N. Evaluation of classifiers for an uneven class distribution problem. Appl. Artif. Intell. 2006, 20, 381–417.
  18. Chawla, N.V.; Japkowicz, N.; Kotcz, A. Editorial: Special issue on learning from imbalanced data sets. ACM SIGKDD Explor. Newsl. 2004, 6, 1–6.
  19. Tang, Y.; Zhang, Y.Q.; Chawla, N.V.; Krasser, S. SVMs modeling for highly imbalanced classification. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2009, 39, 281–288.
  20. Wang, L.; Shen, X. On L1-norm multiclass support vector machines. J. Am. Stat. Assoc. 2007, 102, 583–594.
  21. Friedman, J.; Hastie, T.; Tibshirani, R. The Elements of Statistical Learning; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2001; Volume 1.
  22. Wu, S.; Amari, S.I. Conformal transformation of kernel functions: A data-dependent way to improve support vector machine classifiers. Neural Process. Lett. 2002, 15, 59–67.
  23. Williams, P.; Li, S.; Feng, J.; Wu, S. Scaling the kernel function to improve performance of the support vector machine. In Advances in Neural Networks—ISNN 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 831–836.
  24. Maratea, A.; Petrosino, A.; Manzo, M. Adjusted F-measure and kernel scaling for imbalanced data learning. Inf. Sci. 2014, 257, 331–341.
  25. Fawcett, T. ROC graphs: Notes and practical considerations for researchers. Mach. Learn. 2004, 31, 1–38.
Figure 1. Performance of the competitor methods for Scenario 1.
Figure 2. Performance of the competitor methods for Scenario 2.
Figure 3. Performance of the competitor methods for Scenario 3.
Table 1. F-score (F), G-mean (G) and AUC (A) measures for all five classification methods in Scenario 1, with $n_1 = 200$, $n_2 = 200$ and $n_3 = 200$. Max margin is 0.02.

C    | σ   | SVM (F/G/A)    | Wu [22] (F/G/A) | William [23] (F/G/A) | Maratea [24] (F/G/A) | Our Method (F/G/A)
8    | 0.1 | 0.39/0.38/0.52 | 0.43/0.43/0.54  | 0.59/0.59/0.61       | 0.71/0.70/0.75       | 0.78/0.79/0.80
8    | 0.5 | 0.43/0.42/0.55 | 0.47/0.46/0.56  | 0.66/0.66/0.69       | 0.75/0.75/0.78       | 0.81/0.81/0.83
8    | 5.0 | 0.47/0.46/0.56 | 0.53/0.52/0.59  | 0.68/0.68/0.72       | 0.78/0.77/0.81       | 0.84/0.83/0.85
40   | 0.1 | 0.45/0.45/0.55 | 0.44/0.43/0.54  | 0.61/0.61/0.66       | 0.73/0.72/0.75       | 0.81/0.81/0.85
40   | 0.5 | 0.53/0.52/0.57 | 0.51/0.50/0.55  | 0.67/0.67/0.71       | 0.75/0.75/0.78       | 0.84/0.83/0.88
40   | 5.0 | 0.56/0.55/0.59 | 0.62/0.62/0.67  | 0.71/0.72/0.78       | 0.78/0.78/0.81       | 0.86/0.86/0.88
100  | 0.1 | 0.52/0.51/0.57 | 0.61/0.59/0.62  | 0.64/0.63/0.68       | 0.78/0.79/0.81       | 0.84/0.85/0.88
100  | 0.5 | 0.60/0.58/0.62 | 0.67/0.65/0.69  | 0.77/0.67/0.76       | 0.79/0.80/0.83       | 0.86/0.86/0.90
100  | 5.0 | 0.69/0.66/0.71 | 0.71/0.70/0.73  | 0.79/0.72/0.80       | 0.81/0.82/0.84       | 0.88/0.88/0.92
Table 2. F-score (F), G-mean (G) and AUC (A) measures for all five classification methods in Scenario 2, with $n_1 = 100$, $n_2 = 200$ and $n_3 = 300$. Max margin is 0.04.

C    | σ   | SVM (F/G/A)    | Wu [22] (F/G/A) | William [23] (F/G/A) | Maratea [24] (F/G/A) | Our Method (F/G/A)
8    | 0.1 | 0.29/0.30/0.51 | 0.40/0.40/0.52  | 0.49/0.49/0.55       | 0.62/0.60/0.63       | 0.77/0.76/0.80
8    | 0.5 | 0.32/0.32/0.51 | 0.42/0.42/0.55  | 0.58/0.57/0.63       | 0.66/0.65/0.71       | 0.78/0.78/0.82
8    | 5.0 | 0.36/0.36/0.52 | 0.48/0.49/0.55  | 0.61/0.60/0.67       | 0.71/0.72/0.77       | 0.80/0.80/0.84
40   | 0.1 | 0.40/0.40/0.54 | 0.54/0.53/0.57  | 0.57/0.58/0.62       | 0.65/0.66/0.71       | 0.80/0.80/0.83
40   | 0.5 | 0.50/0.42/0.52 | 0.59/0.58/0.64  | 0.65/0.64/0.69       | 0.68/0.67/0.73       | 0.81/0.80/0.85
40   | 5.0 | 0.56/0.56/0.61 | 0.62/0.60/0.64  | 0.68/0.66/0.72       | 0.72/0.73/0.77       | 0.82/0.82/0.88
100  | 0.1 | 0.42/0.39/0.54 | 0.57/0.55/0.61  | 0.65/0.64/0.69       | 0.66/0.68/0.75       | 0.84/0.83/0.88
100  | 0.5 | 0.52/0.50/0.59 | 0.63/0.65/0.71  | 0.70/0.69/0.75       | 0.71/0.72/0.77       | 0.85/0.84/0.89
100  | 5.0 | 0.59/0.61/0.66 | 0.68/0.70/0.74  | 0.75/0.76/0.79       | 0.75/0.74/0.81       | 0.86/0.87/0.91
Table 3. F-score (F), G-mean (G) and AUC (A) measures for all five classification methods in Scenario 3, with $n_1 = 20$, $n_2 = 100$ and $n_3 = 480$. Max margin is 0.05.

C    | σ   | SVM (F/G/A)    | Wu [22] (F/G/A) | William [23] (F/G/A) | Maratea [24] (F/G/A) | Our Method (F/G/A)
8    | 0.1 | 0.25/0.23/0.50 | 0.34/0.32/0.51  | 0.45/0.46/0.53       | 0.58/0.57/0.61       | 0.75/0.74/0.79
8    | 0.5 | 0.28/0.27/0.51 | 0.37/0.38/0.53  | 0.54/0.52/0.60       | 0.62/0.64/0.69       | 0.77/0.77/0.82
8    | 5.0 | 0.32/0.29/0.51 | 0.46/0.44/0.53  | 0.57/0.58/0.62       | 0.68/0.66/0.72       | 0.79/0.79/0.84
40   | 0.1 | 0.35/0.34/0.51 | 0.51/0.49/0.54  | 0.53/0.54/0.59       | 0.60/0.59/0.64       | 0.79/0.78/0.84
40   | 0.5 | 0.47/0.45/0.54 | 0.55/0.54/0.60  | 0.59/0.58/0.63       | 0.64/0.64/0.71       | 0.80/0.80/0.85
40   | 5.0 | 0.51/0.52/0.58 | 0.57/0.55/0.62  | 0.63/0.62/0.68       | 0.68/0.69/0.75       | 0.81/0.81/0.84
100  | 0.1 | 0.38/0.37/0.51 | 0.54/0.52/0.58  | 0.61/0.60/0.64       | 0.62/0.63/0.70       | 0.82/0.82/0.85
100  | 0.5 | 0.45/0.45/0.55 | 0.59/0.57/0.64  | 0.65/0.64/0.72       | 0.67/0.67/0.74       | 0.84/0.84/0.87
100  | 5.0 | 0.55/0.55/0.60 | 0.62/0.63/0.68  | 0.71/0.71/0.76       | 0.71/0.70/0.75       | 0.85/0.86/0.91
Table 4. Outcomes of multi-class prediction on the Prostate Cancer Program.

Methods   | Error (%)     | SV (%)        | F-Score      | G-Means
Proposed  | 8.06 ± 0.58   | 17.46 ± 0.57  | 0.84 ± 0.05  | 0.81 ± 0.04
Amari     | 11.88 ± 1.12  | 21.33 ± 1.27  | 0.70 ± 0.09  | 0.66 ± 0.10
William   | 10.21 ± 0.97  | 18.93 ± 1.65  | 0.74 ± 0.12  | 0.71 ± 0.08
CS        | 9.20 ± 1.22   | 17.57 ± 1.12  | 0.77 ± 0.06  | 0.73 ± 0.06
simSVM    | 9.33 ± 1.20   | 18.29 ± 1.07  | 0.78 ± 0.10  | 0.74 ± 0.09
1vs1      | 8.20 ± 1.26   | 25.41 ± 2.87  | 0.81 ± 0.06  | 0.77 ± 0.07
1vsA      | 8.25 ± 1.57   | 24.16 ± 2.62  | 0.82 ± 0.06  | 0.76 ± 0.06