A Multiclass Nonparallel Parametric-Margin Support Vector Machine
Abstract
1. Introduction
- The proposed MNP-KSVC encodes the K-class learning task into a series of “one-versus-one-versus-rest” subproblems, each using all the training instances. The outputs of the subproblems are encoded with the ternary labels {−1, 0, +1}, which helps to deal with imbalanced cases (a sketch of this encoding follows this list).
- For each subproblem, MNP-KSVC aims to find a pair of nonparallel parametric-margin hyperplanes that separate the two selected classes from each other and from the remaining classes. Unlike TPMSVM, each parametric-margin hyperplane is closer to its own class and at least a certain distance away from the other class, while the remaining instances are mapped into an insensitive region.
- To implement the empirical risks, MNP-KSVC adopts a hybrid classification-and-regression loss: the hinge loss penalizes the errors of the two focused classes, while the ε-insensitive loss penalizes the errors of the remaining classes.
- Extensive numerical experiments are performed on several multiclass UCI benchmark datasets, and the results are compared with four baselines (Multi-SVM, MBSVM, MTPMSVM, and Twin-KSVC). The comparisons indicate the effectiveness and feasibility of the proposed MNP-KSVC for multiclass classification.
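To make the encoding concrete, here is a minimal Python sketch of how one “one-versus-one-versus-rest” subproblem can be built; the function name and the toy data are ours, not from the paper.

```python
import numpy as np

def ternary_subproblem(y, pos_class, neg_class):
    """Relabel a multiclass target for one "one-versus-one-versus-rest"
    subproblem: +1 for pos_class, -1 for neg_class, 0 for every other class.
    Unlike plain "one-versus-one", all training instances are kept."""
    t = np.zeros(len(y), dtype=int)
    t[y == pos_class] = 1
    t[y == neg_class] = -1
    return t

# Example: classes {0, 1, 2} yield the pairs (0,1), (0,2), (1,2).
y = np.array([0, 1, 2, 0, 2])
print(ternary_subproblem(y, pos_class=0, neg_class=1))  # [ 1 -1  0  1  0]
```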
2. Preliminaries
2.1. Notations
2.2. TPMSVM
3. The Proposed MNP-KSVC
3.1. Model Formulation
- The first term is the squared L2-norm of the hyperplane parameters (the weight vector and the bias). Minimizing it regulates the model complexity of MNP-KSVC and helps avoid over-fitting. Furthermore, this regularization term makes the QPPs strictly convex, guaranteeing a unique solution.
- The second term is the sum of the projection values of the −1 labeled instances on the +1 labeled parametric-margin hyperplane. Minimizing this term pushes these instances as far as possible from that hyperplane.
- The third term, together with the first constraint, requires the projection values of the +1 labeled instances on the hyperplane to be no less than 1. When the constraint is violated, a slack vector measures the resulting error.
- The last term, together with the second constraint, requires the projection values of the remaining 0 labeled instances on the hyperplane to be no more than a threshold determined by the margin parameter ε; a slack variable measures the corresponding error. Optimizing this term keeps the 0 labeled instances at a distance of at least ε from the +1 labeled instances, so ε controls the margin between the “+” and “0” labeled instances. (A reconstruction of the resulting subproblem is sketched after this list.)
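Since the displayed formulation did not survive extraction, the following LaTeX block is only our plausible reconstruction of the “+1” subproblem from the four terms above, assuming Twin-KSVC-style constraints; the penalty parameters c1, c2, c3, the index sets I+, I−, I0, and the exact threshold 1 − ε are our assumptions, not the paper’s equations (10) and (11).

```latex
\begin{aligned}
\min_{w_{+},\,b_{+},\,\xi,\,\eta}\quad
  & \frac{1}{2}\lVert w_{+}\rVert^{2}
    + c_{1}\sum_{i \in \mathcal{I}_{-}} \bigl(w_{+}^{\top}x_{i} + b_{+}\bigr)
    + c_{2}\sum_{i \in \mathcal{I}_{+}} \xi_{i}
    + c_{3}\sum_{j \in \mathcal{I}_{0}} \eta_{j} \\
\text{s.t.}\quad
  & w_{+}^{\top}x_{i} + b_{+} \ge 1 - \xi_{i},\qquad \xi_{i} \ge 0,\quad i \in \mathcal{I}_{+}, \\
  & w_{+}^{\top}x_{j} + b_{+} \le 1 - \varepsilon + \eta_{j},\quad \eta_{j} \ge 0,\quad j \in \mathcal{I}_{0}.
\end{aligned}
```

The “−1” subproblem is symmetric, with the roles of the two selected classes exchanged.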
3.2. Model Optimization
3.3. Decision Rule
Algorithm 1: The procedure of MNP-KSVC.
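The algorithm box itself was not recoverable here, so the following Python sketch outlines a training-and-prediction procedure consistent with the text: solve one ternary subproblem per class pair and label unseen instances by voting. `train_subclassifier` and `predict_ternary` are hypothetical stand-ins for solving the pair’s QPPs and evaluating its ternary decision rule.

```python
from itertools import combinations
import numpy as np

def fit_mnp_ksvc(X, y, classes, train_subclassifier):
    """Train one ternary (+1/-1/0) subclassifier for each of the
    K(K-1)/2 class pairs, keeping all training instances each time."""
    models = {}
    for i, j in combinations(classes, 2):
        t = np.where(y == i, 1, np.where(y == j, -1, 0))
        models[(i, j)] = train_subclassifier(X, t)
    return models

def predict_mnp_ksvc(x, models, classes, predict_ternary):
    """Each pair model votes for class i (+1), class j (-1), or abstains (0);
    the instance receives the label with the most votes."""
    votes = dict.fromkeys(classes, 0)
    for (i, j), model in models.items():
        out = predict_ternary(model, x)  # ternary output in {-1, 0, +1}
        if out == 1:
            votes[i] += 1
        elif out == -1:
            votes[j] += 1
    return max(votes, key=votes.get)
```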
4. Model Extension to the Nonlinear Case
- In contrast to some existing nonparallel SVMs, we do not need an extra kernel-generated surface technique, since only inner products appear in the dual problems (14) and (15) of the linear MNP-KSVC. These dual formulations enable MNP-KSVC to behave consistently in the linear and nonlinear cases: substituting appropriate kernel functions for the inner products in the Hessian matrices of dual problems (14) and (15) directly yields the nonlinear model.
- For an unseen instance x, the decision functions of the pairwise nonlinear MNP-KSVC classifiers are constructed analogously, with kernel evaluations in place of inner products (see the sketch after this list).
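As a minimal illustration of this substitution, the Python sketch below computes an RBF Gram matrix that could stand in for the inner-product blocks of the linear duals; the kernel-width name `sigma` is our choice and need not match the paper’s notation.

```python
import numpy as np

def rbf_gram(A, B, sigma=1.0):
    """RBF kernel matrix K[i, j] = exp(-||A_i - B_j||^2 / sigma^2),
    replacing the inner products A @ B.T used in the linear duals."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-sq_dists / sigma**2)

X = np.random.default_rng(0).normal(size=(5, 3))
K = rbf_gram(X, X, sigma=2.0)  # using the linear kernel X @ X.T instead recovers the linear model
```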
5. Numerical Experiments
5.1. Experimental Setting
- Multi-SVM [38]: The idea is similar to the “one-versus-all” SVM [3]. However, it generates K binary SVM classifiers by solving a single large dual QPP. That is, the k-th classifier is trained with the k-th class instances encoded with positive labels and the remaining class instances with negative labels. The label of an unseen instance is then assigned by the “voting” scheme. The penalty parameter for each classifier in Multi-SVM is c.
- MBSVM [33]: It is the multiclass extension of the binary TWSVM based on the “one-versus-all” strategy. MBSVM aims to find K nonparallel hyperplanes by solving K QPPs. Specifically, the k-th class instances are kept as far as possible from the k-th hyperplane, while the remaining instances are proximal to it. An unseen instance is assigned the label of whichever of the K hyperplanes it lies farthest from. The penalty parameter for each classifier in MBSVM is c.
- MTPMSVM: Inspired by MBSVM [33], we use the “one-versus-all” strategy to implement the multiclass version of TPMSVM [15] as a baseline. In contrast to MBSVM, it aims to find K parametric-margin hyperplanes, such that each hyperplane is closer to its corresponding class instances and as far as possible from the remaining class instances. The penalty parameters for each classifier in MTPMSVM are those of TPMSVM.
- Twin-KSVC [35]: It is another novel multiclass extension of TWSVM. Twin-KSVC evaluates all the training instances in a “one-versus-one-versus-rest” structure with the ternary output {−1, 0, +1}. It aims to find a pair of nonparallel hyperplanes for each pair of classes selected from the K classes, while the remaining class instances are mapped into a region between these two nonparallel hyperplanes. Each classifier in Twin-KSVC shares the same set of penalty parameters.
- Similar to [35,38], we use multiclass accuracy to measure each classifier, defined as the ratio of correctly classified testing instances to the total number of testing instances.
- To reduce the complexity of parameter selection for the multiclass classifiers, we use the same parameter setting for each learning subproblem. Specifically, we set the same penalty parameter c for all classifiers in Multi-SVM and MBSVM, and likewise share the penalty parameters across the classifiers in MTPMSVM, Twin-KSVC, and MNP-KSVC. For the nonlinear case, the RBF kernel K(x_i, x_j) = exp(−‖x_i − x_j‖²/σ²) is considered, where σ is the kernel parameter.
- It is usually unknown beforehand which parameters are optimal for the classifiers at hand. Thus, we employ the 10-fold cross-validation technique [3] for parameter selection. In detail, each dataset is randomly partitioned into 10 subsets of similar size and distribution. The union of 9 subsets is then used as the training set, while the remaining one is used as the testing set. Furthermore, we apply the grid-based approach [3] to obtain the optimal parameters of each classifier: the penalty parameters and the kernel parameter are selected from a predefined grid, while the margin parameter ε is chosen from its own grid. Once selected, the optimal parameters are used to learn the final decision function (a sketch of this protocol follows).
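The selection protocol just described could look like the following Python sketch; the grids are placeholders (the paper’s exact ranges did not survive extraction), and `evaluate_accuracy` is a hypothetical stand-in for training one classifier and scoring it on the held-out fold.

```python
from itertools import product
import numpy as np
from sklearn.model_selection import KFold

def grid_search_10fold(X, y, c_grid, eps_grid, sigma_grid, evaluate_accuracy):
    """Return the (c, eps, sigma) triple maximizing mean 10-fold CV accuracy."""
    best_params, best_acc = None, -np.inf
    kfold = KFold(n_splits=10, shuffle=True, random_state=0)
    for c, eps, sigma in product(c_grid, eps_grid, sigma_grid):
        accs = [evaluate_accuracy(X[tr], y[tr], X[te], y[te], c, eps, sigma)
                for tr, te in kfold.split(X)]
        mean_acc = float(np.mean(accs))
        if mean_acc > best_acc:
            best_params, best_acc = (c, eps, sigma), mean_acc
    return best_params, best_acc
```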
5.2. Result Comparison and Discussion
- MNP-KSVC yields better accuracy than the other classifiers on almost all datasets, which confirms the efficacy of the proposed MNP-KSVC on multiclass learning tasks.
- The nonparallel-hyperplane models (MBSVM, MTPMSVM, Twin-KSVC, and MNP-KSVC) outperform the traditional SVM model. The reason is that SVM only utilizes parallel hyperplanes to learn the decision function, which limits its capability to capture the underlying multiclass distributions.
- MNP-KSVC has better generalization ability than MBSVM and Twin-KSVC in most cases. For instance, MNP-KSVC obtains a higher accuracy (84.59%) than MBSVM (80.82%) and Twin-KSVC (80.12%) on the Ecoli dataset in the linear case, and similar results can be observed on the other datasets. Since MBSVM and Twin-KSVC implement only empirical risk minimization, they are prone to overfitting. In contrast, MNP-KSVC includes regularization terms that regulate the model complexity to avoid overfitting.
- MTPMSVM is another multiclass extension, based on the “one-versus-all” strategy. With the help of the “hybrid classification and regression” learning paradigm, our MNP-KSVC can learn more multiclass discriminative information.
- Furthermore, we count the numbers of wins and losses (W/L) of MNP-KSVC against each compared classifier over all datasets for both the linear and nonlinear cases, listed at the bottom of Table 2 and Table 3. The results indicate that MNP-KSVC achieves the best overall results in terms of both W/L and average accuracy.
6. Discussion and Future Work
- For the K-class learning task, our MNP-KSVC first transforms the complicated multiclass problem into K(K−1)/2 subproblems via a “one-versus-one-versus-rest” strategy. Each subproblem focuses on separating the two selected classes from each other and from the rest of the classes. That is, we utilize +1 and −1 to represent the labels of the two selected classes and 0 to label the rest. Unlike the “one-versus-all” strategy used in Multi-SVM, MBSVM, and MTPMSVM, this encoding strategy can alleviate the imbalance issues that sometimes occur in multiclass learning [32,35].
- For each subproblem, our MNP-KSVC aims to learn a pair of nonparallel parametric-margin hyperplanes (36) under the ternary encoding {−1, 0, +1}. These parametric-margin hyperplanes are each closer to their corresponding class and at least a certain distance from the other class, while restricting the rest of the instances to an insensitive region. A hybrid classification-and-regression loss, joined with regularization, is then utilized to formulate the optimization problems (10) and (11) of MNP-KSVC.
- Moreover, a nonlinear extension is presented to deal with nonlinear multiclass learning tasks. In contrast to MBSVM [33] and Twin-KSVC [35], the linear and nonlinear models of MNP-KSVC are consistent: applying the linear kernel to the nonlinear problems (44) and (45) recovers the original linear problems (14) and (15).
- Extensive experiments on various datasets demonstrate the effectiveness of the proposed MNP-KSVC compared with Multi-SVM, MBSVM, MTPMSVM, and Twin-KSVC.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
QPP | Quadratic Programming Problem
KKT | Karush–Kuhn–Tucker
SVM | Support Vector Machine
TWSVM | Twin Support Vector Machine
Multi-SVM | Multiclass Support Vector Machine
MBSVM | Multiple Birth Support Vector Machine
MTPMSVM | Multiple Twin Parametric-Margin Support Vector Machine
Twin-KSVC | Multiclass Twin Support Vector Classifier
MNP-KSVC | Multiclass Nonparallel Parametric-Margin Support Vector Classifier
References
- Han, J.; Kamber, M.; Pei, J. Data Mining: Concepts and Techniques; Morgan Kaufmann: San Francisco, CA, USA, 2012.
- Vapnik, V. Statistical Learning Theory; Wiley: New York, NY, USA, 1998.
- Deng, N.; Tian, Y.; Zhang, C. Support Vector Machines: Theory, Algorithms and Extensions; CRC Press: Philadelphia, PA, USA, 2013.
- Sitaula, C.; Aryal, S.; Xiang, Y.; Basnet, A.; Lu, X. Content and context features for scene image representation. Knowl.-Based Syst. 2021, 232, 107470.
- Ma, S.; Cheng, B.; Shang, Z.; Liu, G. Scattering transform and LSPTSVM based fault diagnosis of rotating machinery. Mech. Syst. Signal Process. 2018, 104, 155–170.
- Liu, T.; Yan, D.; Wang, R.; Yan, N.; Chen, G. Identification of Fake Stereo Audio Using SVM and CNN. Information 2021, 12, 263.
- You, S.D. Classification of Relaxation and Concentration Mental States with EEG. Information 2021, 12, 187.
- Kang, J.; Han, X.; Song, J.; Niu, Z.; Li, X. The identification of children with autism spectrum disorder by SVM approach on EEG and eye-tracking data. Comput. Biol. Med. 2020, 120, 103722.
- Lazcano, R.; Salvador, R.; Marrero-Martin, M.; Leporati, F.; Juarez, E.; Callico, G.M.; Sanz, C.; Madronal, D.; Florimbi, G.; Sancho, J.; et al. Parallel Implementations Assessment of a Spatial-Spectral Classifier for Hyperspectral Clinical Applications. IEEE Access 2019, 7, 152316–152333.
- Fabelo, H.; Ortega, S.; Szolna, A.; Bulters, D.; Pineiro, J.F.; Kabwama, S.; J-O’Shanahan, A.; Bulstrode, H.; Bisshopp, S.; Kiran, B.R.; et al. In-Vivo Hyperspectral Human Brain Image Database for Brain Cancer Detection. IEEE Access 2019, 7, 39098–39116.
- Roy, S.D.; Debbarma, S. A novel OC-SVM based ensemble learning framework for attack detection in AGC loop of power systems. Electr. Power Syst. Res. 2022, 202, 107625.
- Mangasarian, O.L.; Wild, E.W. Multisurface proximal support vector machine classification via generalized eigenvalues. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 69–74.
- Jayadeva; Khemchandani, R.; Chandra, S. Twin Support Vector Machines for Pattern Classification. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 905–910.
- Ding, S.; Hua, X. An overview on nonparallel hyperplane support vector machine algorithms. Neural Comput. Appl. 2014, 25, 975–982.
- Peng, X. TPMSVM: A novel twin parametric-margin support vector machine for pattern recognition. Pattern Recogn. 2011, 44, 2678–2692.
- Arun Kumar, M.; Gopal, M. Least squares twin support vector machines for pattern classification. Expert Syst. Appl. 2009, 36, 7535–7543.
- Shao, Y.; Zhang, C.; Wang, X.; Deng, N. Improvements on Twin Support Vector Machines. IEEE Trans. Neural Netw. 2011, 22, 962–968.
- Chen, W.J.; Shao, Y.H.; Li, C.N.; Liu, M.Z.; Wang, Z.; Deng, N.Y. ν-projection twin support vector machine for pattern classification. Neurocomputing 2020, 376, 10–24.
- Tian, Y.; Qi, Z.; Ju, X.; Shi, Y.; Liu, X. Nonparallel Support Vector Machines for Pattern Classification. IEEE Trans. Cybern. 2014, 44, 1067–1079.
- Tian, Y.; Ping, Y. Large-scale linear nonparallel support vector machine solver. Neural Netw. 2014, 50, 166–174.
- Chen, W.; Shao, Y.; Li, C.; Wang, Y.; Liu, M.; Wang, Z. NPrSVM: Nonparallel sparse projection support vector machine with efficient algorithm. Appl. Soft Comput. 2020, 90, 106142.
- Chen, W.; Shao, Y.; Li, C.; Deng, N. MLTSVM: A novel twin support vector machine to multi-label learning. Pattern Recogn. 2016, 52, 61–74.
- Bai, L.; Shao, Y.H.; Wang, Z.; Chen, W.J.; Deng, N.Y. Multiple Flat Projections for Cross-Manifold Clustering. IEEE Trans. Cybern. 2021, 1–15. Available online: https://ieeexplore.ieee.org/document/9343292 (accessed on 20 October 2021).
- Hou, Q.; Liu, L.; Zhen, L.; Jing, L. A novel projection nonparallel support vector machine for pattern classification. Eng. Appl. Artif. Intell. 2018, 75, 64–75.
- Liu, L.; Chu, M.; Gong, R.; Zhang, L. An Improved Nonparallel Support Vector Machine. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 5129–5143.
- Chen, W.; Shao, Y.; Deng, N.; Feng, Z. Laplacian least squares twin support vector machine for semi-supervised classification. Neurocomputing 2014, 145, 465–476.
- Shao, Y.; Chen, W.; Zhang, J.; Wang, Z.; Deng, N. An efficient weighted Lagrangian twin support vector machine for imbalanced data classification. Pattern Recogn. 2014, 47, 3158–3167.
- Chen, W.; Shao, Y.; Ning, H. Laplacian smooth twin support vector machine for semi-supervised classification. Int. J. Mach. Learn. Cybern. 2014, 5, 459–468.
- Gao, Z.; Fang, S.C.; Gao, X.; Luo, J.; Medhin, N. A novel kernel-free least squares twin support vector machine for fast and accurate multi-class classification. Knowl.-Based Syst. 2021, 226, 107123.
- Ding, S.; Zhao, X.; Zhang, J.; Zhang, X.; Xue, Y. A review on multi-class TWSVM. Artif. Intell. Rev. 2017, 52, 775–801.
- Qiang, W.; Zhang, J.; Zhen, L.; Jing, L. Robust weighted linear loss twin multi-class support vector regression for large-scale classification. Signal Process. 2020, 170, 107449.
- de Lima, M.D.; Costa, N.L.; Barbosa, R. Improvements on least squares twin multi-class classification support vector machine. Neurocomputing 2018, 313, 196–205.
- Yang, Z.; Shao, Y.; Zhang, X. Multiple birth support vector machine for multi-class classification. Neural Comput. Appl. 2013, 22, 153–161.
- Angulo, C.; Parra, X.; Català, A. K-SVCR. A support vector machine for multi-class classification. Neurocomputing 2003, 55, 57–77.
- Xu, Y.; Guo, R.; Wang, L. A Twin Multi-Class Classification Support Vector Machine. Cogn. Comput. 2013, 5, 580–588.
- Nasiri, J.A.; Moghadam Charkari, N.; Jalili, S. Least squares twin multi-class classification support vector machine. Pattern Recogn. 2015, 48, 984–992.
- Mangasarian, O.L. Nonlinear Programming; SIAM Press: Philadelphia, PA, USA, 1993.
- Tomar, D.; Agarwal, S. A comparison on multi-class classification methods based on least squares twin support vector machine. Knowl.-Based Syst. 2015, 81, 131–147.
- Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 200, 675–701.
- Yu, Z.; Wang, Z.; You, J.; Zhang, J.; Liu, J.; Wong, H.S.; Han, G. A New Kind of Nonparametric Test for Statistical Comparison of Multiple Classifiers Over Multiple Datasets. IEEE Trans. Cybern. 2017, 47, 4418–4431.
- Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inform. Sci. 2013, 222, 175–184.
- Chen, W.; Shao, Y.; Xu, D.; Fu, Y. Manifold proximal support vector machine for semi-supervised classification. Appl. Intell. 2014, 40, 623–638.
- Li, Y.; Sun, H.; Yan, W.; Cui, Q. R-CTSVM+: Robust capped L1-norm twin support vector machine with privileged information. Inform. Sci. 2021, 574, 12–32.
Datasets | #Instances | #Training | #Testing | #Attributes | #Class |
---|---|---|---|---|---|
Balance | 625 | 438 | 187 | 4 | 3 |
Ecoli | 327 | 229 | 98 | 7 | 5 |
Iris | 150 | 105 | 45 | 4 | 3 |
Glass | 214 | 150 | 64 | 13 | 6 |
Wine | 178 | 125 | 53 | 13 | 3 |
Thyroid | 215 | 150 | 65 | 5 | 3 |
Dermatology | 358 | 251 | 107 | 34 | 6 |
Shuttle | 2175 | 1522 | 653 | 9 | 5 |
Contraceptive | 1473 | 1031 | 442 | 9 | 3 |
Pen Based | 1100 | 770 | 330 | 16 | 10 |
Datasets | Multi-SVM | MBSVM | MTPMSVM | Twin-KSVC | MNP-KSVC |
---|---|---|---|---|---|
Balance | 79.91 ± 4.86 | 86.14 ± 4.63 | 87.43 ± 3.56 | 86.64 ± 1.92 | 88.33 ± 2.03 |
Ecoli | 72.32 ± 7.03 | 73.62 ± 4.95 | 73.77 ± 4.33 | 74.62 ± 3.86 | 75.91 ± 2.52 |
Iris | 93.24 ± 2.66 | 92.96 ± 2.24 | 93.21 ± 2.53 | 92.69 ± 3.24 | 94.13 ± 2.49 |
Glass | 69.18 ± 9.85 | 72.89 ± 7.28 | 68.68 ± 6.79 | 71.65 ± 5.96 | 71.38 ± 5.38 |
Wine | 93.24 ± 3.02 | 95.28 ± 1.46 | 98.19 ± 1.61 | 97.23 ± 1.12 | 97.14 ± 1.27 |
Thyroid | 90.24 ± 2.53 | 93.74 ± 1.58 | 92.92 ± 1.43 | 96.97 ± 1.08 | 97.52 ± 1.54 |
Dermatology | 81.82 ± 3.79 | 86.67 ± 1.69 | 84.46 ± 3.29 | 90.37 ± 2.32 | 89.06 ± 3.16 |
Shuttle | 71.58 ± 4.78 | 84.04 ± 2.92 | 77.17 ± 3.3 | 78.76 ± 4.81 | 83.16 ± 1.92 |
Contraceptive | 38.53 ± 3.76 | 43.95 ± 2.51 | 44.65 ± 2.93 | 42.25 ± 2.45 | 44.22 ± 2.03 |
Pen Based | 79.59 ± 3.75 | 85.94 ± 1.26 | 81.94 ± 2.04 | 83.21 ± 2.77 | 86.78 ± 1.37 |
Ave. Acc | 76.97 | 81.52 | 80.24 | 81.43 | 82.76 |
W/L | 10/0 | 8/2 | 8/2 | 9/1 | / |
Ave. rank | 4.6 | 2.9 | 3.1 | 2.67 | 1.7 |
Datasets | Multi-SVM | MBSVM | MTPMSVM | Twin-KSVC | MNP-KSVC |
---|---|---|---|---|---|
Balance | 79.94 ± 5.57 | 87.13 ± 4.57 | 89.92 ± 3.27 | 90.17 ± 4.08 | 91.41 ± 3.42 |
Ecoli | 79.81 ± 5.19 | 80.82 ± 4.25 | 84.74 ± 3.49 | 80.12 ± 4.49 | 84.59 ± 3.83 |
Iris | 90.26 ± 2.65 | 96.89 ± 1.83 | 97.39 ± 2.14 | 94.36 ± 2.03 | 98.04 ± 1.47 |
Glass | 58.66 ± 4.76 | 52.64 ± 4.08 | 62.73 ± 3.95 | 56.01 ± 4.26 | 64.12 ± 2.98 |
Wine | 94.36 ± 2.12 | 94.31 ± 1.87 | 98.38 ± 2.62 | 97.26 ± 1.61 | 98.04 ± 1.45 |
Thyroid | 91.82 ± 1.75 | 93.27 ± 1.95 | 95.34 ± 0.89 | 94.14 ± 1.05 | 95.63 ± 0.84 |
Pen Based | 86.53 ± 3.93 | 89.51 ± 3.29 | 85.06 ± 3.68 | 86.12 ± 2.47 | 88.78 ± 2.68 |
Dermatology | 84.43 ± 4.29 | 83.82 ± 3.86 | 84.51 ± 2.64 | 83.33 ± 3.29 | 85.26 ± 3.12 |
Shuttle | 74.36 ± 3.39 | 83.74 ± 2.73 | 87.06 ± 2.89 | 86.91 ± 2.38 | 89.37 ± 1.73 |
Contraceptive | 42.09 ± 4.95 | 44.28 ± 4.02 | 45.93 ± 3.85 | 47.51 ± 3.57 | 47.47 ± 3.68 |
Ave. Acc | 78.22 | 80.64 | 83.11 | 81.59 | 84.27 |
W/L | 10/0 | 9/1 | 8/2 | 9/1 | / |
Ave. rank | 4.3 | 3.69 | 2.3 | 3.3 | 1.4 |
Case | Statistic | p-Value | Hypothesis
---|---|---|---
Linear | 6.5143 | 2.9503 × 10^−4 | reject
Nonlinear | 10.2004 | 1.2267 × 10^−5 | reject
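For context, the statistics above can be reproduced in spirit with `scipy.stats.friedmanchisquare`; the sketch below uses the first five rows of the first accuracy table. Note that SciPy returns the chi-square form of the Friedman statistic, whereas the paper may report the F-distributed variant, so the values will not match exactly.

```python
from scipy.stats import friedmanchisquare

# Accuracies of each classifier across datasets (Balance, Ecoli, Iris,
# Glass, Wine), taken from the first accuracy table above.
multi_svm = [79.91, 72.32, 93.24, 69.18, 93.24]
mbsvm     = [86.14, 73.62, 92.96, 72.89, 95.28]
mtpmsvm   = [87.43, 73.77, 93.21, 68.68, 98.19]
twin_ksvc = [86.64, 74.62, 92.69, 71.65, 97.23]
mnp_ksvc  = [88.33, 75.91, 94.13, 71.38, 97.14]

stat, p = friedmanchisquare(multi_svm, mbsvm, mtpmsvm, twin_ksvc, mnp_ksvc)
print(stat, p)  # reject the equal-performance null hypothesis when p < 0.05
```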
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).