Open Access Article

A Soft-Voting Ensemble Based Co-Training Scheme Using Static Selection for Binary Classification Problems

Educational Software Development Laboratory (ESDLab), Department of Mathematics, University of Patras, 26504 Patras, Greece
* Author to whom correspondence should be addressed.
Algorithms 2020, 13(1), 26; https://doi.org/10.3390/a13010026
Received: 1 November 2019 / Revised: 23 December 2019 / Accepted: 13 January 2020 / Published: 16 January 2020
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
In recent years, semi-supervised learning has emerged as a forward-looking subfield of machine learning with important applications in a variety of scientific fields. It encompasses a growing body of efficient methods and algorithms that seek to exploit a small pool of labeled examples together with a large pool of unlabeled ones as effectively as possible. Co-training is a representative semi-supervised classification algorithm, originally based on the assumption that each example can be described by two distinct feature sets, usually referred to as views. Since this assumption is rarely met in real-world problems, several variants of the co-training algorithm have been proposed to handle the absence of a natural two-view feature split. In this context, a Static Selection Ensemble-based co-training scheme operating under a random feature split strategy is proposed for binary classification problems, where the base learner is a soft-Voting ensemble composed of two participants. Ensemble methods are commonly used to boost the predictive performance of learning models by combining a set of different classifiers, while the Static Ensemble Selection approach seeks the most suitable ensemble structure, chosen from a pool of candidate classifiers according to a specific criterion. The efficacy of the proposed scheme is verified through experiments on a wide range of benchmark datasets, as statistically confirmed by the Friedman Aligned Ranks non-parametric test on classification accuracy, F1-score, and Area Under Curve metrics.
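To make the scheme described in the abstract concrete, the following is an illustrative sketch (not the authors' implementation) of a co-training loop for binary classification that uses a random two-view feature split and a soft-voting ensemble of two base learners as the classifier for each view. All dataset sizes, base learners, round counts, and confidence thresholds here are hypothetical choices for demonstration; scikit-learn's `VotingClassifier` with `voting="soft"` stands in for the soft-Voting ensemble.

```python
# Hedged sketch of soft-voting co-training with a random feature split.
# Assumptions (not from the paper): synthetic data, LogisticRegression and
# DecisionTreeClassifier as the two ensemble participants, 5 rounds, and
# 5 pseudo-labeled examples added per view per round.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

# Hold out a test set, then keep only a small labeled pool for training.
X_train, X_test, y_train, y_test = X[:500], X[500:], y[:500].copy(), y[500:]
labeled = rng.choice(500, size=50, replace=False)
unlabeled = np.setdiff1d(np.arange(500), labeled)

# Random feature split into two "views" (no natural split is assumed).
perm = rng.permutation(X.shape[1])
view1, view2 = perm[:10], perm[10:]

def make_ensemble():
    # Soft-voting ensemble of two participants, mirroring the scheme's
    # base learner; the specific classifiers are illustrative.
    return VotingClassifier(
        [("lr", LogisticRegression(max_iter=1000)),
         ("dt", DecisionTreeClassifier(max_depth=5, random_state=0))],
        voting="soft")

L, U = list(labeled), list(unlabeled)
for _ in range(5):  # a few co-training rounds
    clf1 = make_ensemble().fit(X_train[L][:, view1], y_train[L])
    clf2 = make_ensemble().fit(X_train[L][:, view2], y_train[L])
    if not U:
        break
    # Each view teaches the other: the most confident unlabeled examples
    # are pseudo-labeled and moved into the labeled pool.
    for clf, view in ((clf1, view1), (clf2, view2)):
        proba = clf.predict_proba(X_train[U][:, view])
        top = np.argsort(proba.max(axis=1))[-5:]
        for i in sorted(top, reverse=True):
            idx = U[i]
            y_train[idx] = proba[i].argmax()  # pseudo-label
            L.append(idx)
            del U[i]

# Final prediction: average the two views' class probabilities (soft vote).
final = (clf1.predict_proba(X_test[:, view1])
         + clf2.predict_proba(X_test[:, view2])) / 2
print(round(accuracy_score(y_test, final.argmax(axis=1)), 3))
```

In a real semi-supervised setting the labels of the unlabeled pool would be unknown; here they are simply overwritten with pseudo-labels for brevity. Averaging the two views' predicted probabilities at the end is one common way to combine the per-view ensembles.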
Keywords: binary classification; co-training; ensemble methods; feature views; dynamic ensemble selection; Soft-Voting

Karlos, S.; Kostopoulos, G.; Kotsiantis, S. A Soft-Voting Ensemble Based Co-Training Scheme Using Static Selection for Binary Classification Problems. Algorithms 2020, 13, 26.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
