Breakdown Point of Robust Support Vector Machines
Abstract: The support vector machine (SVM) is one of the most successful learning methods for solving classification problems. Despite its popularity, the SVM has a serious drawback: it is sensitive to outliers in the training samples. The penalty on misclassification is defined by a convex loss called the hinge loss, and the unboundedness of this convex loss causes the sensitivity to outliers. To deal with outliers, robust SVMs have been proposed that replace the convex loss with a non-convex bounded loss called the ramp loss. In this paper, we study the breakdown point of robust SVMs. The breakdown point is a robustness measure defined as the largest amount of contamination under which the estimated classifier still gives information about the non-contaminated data. The main contribution of this paper is an exact evaluation of the breakdown point of robust SVMs. For learning parameters such as the regularization parameter, we derive a simple formula that guarantees the robustness of the classifier. When the learning parameters are determined by a grid search using cross-validation, this formula serves to reduce the number of candidate search points. Furthermore, the theoretical findings are confirmed in numerical experiments, which show that the statistical properties of robust SVMs are well explained by the theoretical analysis of the breakdown point.
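The contrast between the two losses can be illustrated with a minimal sketch. Below, `hinge_loss` is the standard convex hinge loss and `ramp_loss` is a common bounded truncation of it; the truncation parameter `s` and both function names are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def hinge_loss(margin):
    """Convex hinge loss max(0, 1 - margin); unbounded as margin -> -inf,
    so a single gross outlier can dominate the training objective."""
    return np.maximum(0.0, 1.0 - margin)

def ramp_loss(margin, s=-1.0):
    """Illustrative ramp loss: the hinge loss truncated at 1 - s (here 2.0),
    so the contribution of any single point, however extreme, is bounded."""
    return np.minimum(1.0 - s, hinge_loss(margin))

# A gross outlier (margin = -10) versus ordinary points.
margins = np.array([-10.0, -1.0, 0.0, 1.0, 2.0])
print(hinge_loss(margins))  # the outlier contributes 11.0 to the objective
print(ramp_loss(margins))   # the same outlier is capped at 2.0
```

The boundedness of the ramp loss is what makes a nonzero breakdown point possible: no single contaminated sample can inflate the objective without limit.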
Cite This Article
Kanamori, T.; Fujiwara, S.; Takeda, A. Breakdown Point of Robust Support Vector Machines. Entropy 2017, 19, 83.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.