Article

Combination of Active Learning and Semi-Supervised Learning under a Self-Training Scheme

1 Wired Communications Lab, Department of Electrical and Computer Engineering, University of Patras, 26504 Achaia, Greece
2 Educational Software Development Lab, Department of Mathematics, University of Patras, 26504 Achaia, Greece
* Author to whom correspondence should be addressed.
Entropy 2019, 21(10), 988; https://doi.org/10.3390/e21100988
Submission received: 15 August 2019 / Revised: 27 September 2019 / Accepted: 9 October 2019 / Published: 10 October 2019
(This article belongs to the Special Issue Theory and Applications of Information Theoretic Machine Learning)

Abstract: One of the major aspects affecting the performance of classification algorithms is the amount of labeled data available during the training phase. It is widely accepted that labeling vast amounts of data is both expensive and time-consuming, since it requires human expertise. In a wide variety of scientific fields, unlabeled examples are easy to collect but hard to exploit in a way that enriches the information contained in a given dataset. In this context, a variety of learning methods have been studied in the literature, aiming to efficiently utilize the vast amounts of unlabeled data during the learning process. The most common approaches tackle problems of this kind by individually applying active learning or semi-supervised learning methods. In this work, a combination of active learning and semi-supervised learning methods is proposed under a common self-training scheme, in order to efficiently utilize the available unlabeled data. The effective and robust metrics of the entropy and of the distribution of prediction probabilities over the unlabeled set are used to select the most suitable unlabeled examples for the augmentation of the initial labeled set. The superiority of the proposed scheme is validated by comparing it against the base approaches of supervised, semi-supervised, and active learning on a wide range of fifty-five benchmark datasets.

1. Introduction

The most common approach established in machine learning (ML) is supervised learning (SL). Under SL schemes, classifiers are trained using purely labeled data. For a given problem complexity, the performance of such schemes is directly proportional to the amount and quality of the labeled data used in the training phase. In a large variety of scientific domains, such as object detection [1], speech recognition [2], web page categorization [3], and computer-aided medical diagnosis [4,5,6], vast pools of unlabeled data are often available. However, in most cases labeling data is costly and time-consuming, as human effort and expertise are required to annotate the available data. Many research works [7] focus on techniques that exploit the available unlabeled data, especially in favor of classification problems. The most common learning methods incorporating such techniques are active learning (AL) and semi-supervised learning (SSL) [8]. Both AL and SSL share an iterative learning nature, making them a natural fit for constructing more complex combined learning schemes.
The primary goal of this paper is to put forward a new AL and SSL combination algorithm in order to efficiently exploit the plethora of unlabeled data found in most ML datasets and to provide an improved classification framework. The general flow of the AL and SSL frameworks is presented in Figure 1. Both methods utilize an initial pool of labeled and unlabeled examples with the goal of efficiently augmenting the available knowledge. AL and SSL frameworks, in most cases, operate under an iterative logic, aiming to predict the labels of the most appropriate unlabeled examples. The former annotates the unlabeled instances by interactively querying a human expert based on a variety of querying strategies, while the latter attempts to automatically produce the labels of unlabeled examples by exploiting previously learned knowledge and a wide range of selection criteria for unlabeled instances. After the successful augmentation of the initial labeled set, a final model is constructed in both cases for application to the unknown test cases.
As both methods share many key characteristics, a natural next step is to combine the two learning approaches. The main contribution of the proposed algorithm is the employment of a self-training scheme for the combination of AL and SSL, utilizing the fast and effective metrics of the entropy and the distribution of the prediction probabilities of the available unlabeled data. The plethora of experiments carried out also plays a major role in the validation of the proposed algorithm. The proposed method is examined through a number of different individual base learners, and the ensemble learning technique is also explored, as aggregated models tend to produce more accurate predictions and are commonly used in today’s applications [9,10].
Real-world scenarios where AL and SSL combination methods can be applied include natural language processing (NLP) problems, for which many labeled examples are required to effectively train a model and vast amounts of unlabeled data can be mined. Common applications in the NLP field are part-of-speech tagging, named entity recognition, sentiment analysis [11], fraud detection, and spam filtering. In particular, a number of AL [12], SSL, and combined [13] approaches have been proposed in the spam filtering domain. Figure 2 shows, for the Spambase [14] benchmark dataset, the accuracy improvement of the proposed scheme as the algorithm’s iterations progress. As the base learner, the support vector machines (SVMs) [15] classifier was embedded. For comparison, in the same figure, the corresponding SSL part of the algorithm was fed with the same amount of unlabeled data to obtain the purely semi-supervised accuracy.
The rest of this research work is organized as follows: In Section 2, related work on similar classification methods is discussed. In Section 3, the proposed method is presented along with the exact algorithm implemented. The efficacy of the combination scheme is evaluated in Section 4, where extensive experimentation results can be found; in the same section, the average accuracies of the classifiers applied within the combination scheme are also briefly compared. In Section 5, a modification of the scheme is explored. The research conclusions are presented in Section 6, together with a number of areas to be explored as future work. Finally, a software implementation of the wrapper algorithm is available through the link found in Appendix A.

2. Related Work

AL can be considered one of the most promising approaches for improving the performance of a prediction model in real-world scenarios where large amounts of data exist but their labeling is costly or infeasible [7]. AL assumes that human experts are available to provide ground-truth labels for the unlabeled instances. Therefore, the philosophy of AL is to minimize the number of queries, with the explicit goal of focusing the labeling effort on the most profitable or informative instances; in other words, to minimize the training cost of the model [16]. Finally, these manually annotated samples are merged with the training dataset to obtain the highest classification accuracy. Several query strategies [7,17] have been proposed to measure the informativeness or the representativeness of the data. Informativeness-based strategies measure the contribution of an unlabeled instance to the uncertainty reduction of a statistical model, while representativeness-based strategies measure the instance's contribution to representing the underlying structure of the input patterns. The most commonly used query strategies are certainty-based sampling, query-by-committee, and expected error reduction. In the first type of strategy, a single model is trained and the human expert (annotator) is queried to label the least confident unlabeled instances according to the pre-trained model. The query-by-committee strategy involves more than one active learner (classification model) being trained for the classification task; the unlabeled instances about which these models disagree the most are selected for human annotation. The third strategy is a decision-theoretic approach aiming to estimate the potential reduction of the model's generalization error. In other words, a model is trained and used to estimate the expected future error on the unlabeled samples; then, the instances with the minimal expected future error (risk) are selected and delivered for manual labeling. The effectiveness of AL and various query strategies has been shown in typical classification tasks, such as text classification [18], speech recognition [19], speech emotion classification [20], and audio retrieval [21], to name a few.
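To make the query-by-committee idea concrete, the following minimal Python sketch (an illustration only, not code from the study; scikit-learn and NumPy are assumed) trains a small committee on bootstrap replicates of the labeled set and ranks the unlabeled instances by the vote entropy of the committee's predictions:

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def query_by_committee(X_lab, y_lab, X_unlab, n_members=5, n_queries=10, seed=0):
    rng = np.random.RandomState(seed)
    classes = np.unique(y_lab)
    votes = np.zeros((len(X_unlab), len(classes)))
    for m in range(n_members):
        # Each committee member is trained on a bootstrap replicate of the labeled set.
        Xb, yb = resample(X_lab, y_lab, random_state=rng)
        member = DecisionTreeClassifier(random_state=m).fit(Xb, yb)
        preds = member.predict(X_unlab)
        for c_idx, c in enumerate(classes):
            votes[:, c_idx] += (preds == c)
    # Vote entropy measures the disagreement among committee members per unlabeled instance.
    vote_p = votes / n_members
    with np.errstate(divide="ignore", invalid="ignore"):
        vote_entropy = -np.nansum(vote_p * np.log2(vote_p), axis=1)
    # Return the indices of the most disagreed-upon instances (to be sent to the human expert).
    return np.argsort(vote_entropy)[::-1][:n_queries]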
In contrast to AL, SSL aims to automatically exploit unlabeled data in addition to labeled data to improve learning performance, without human intervention. In SSL, two basic assumptions about the data distribution are considered. The first assumes that the data are inherently clustered, meaning that instances belonging to the same cluster have the same label. The other assumes that the data lie on a manifold, meaning that nearby samples have similar predictions. The idea behind both is that similar data points should have similar outputs and that the unlabeled instances can expose similarities between these data points. Many different SSL methods have been designed in machine learning, mainly including transductive support vector machines [22], graph-based methods [23,24,25], co-training [26], and self-training [1]. In the self-training scheme, the classification model is used to predict the labels of a portion of the unlabeled instances and, consequently, the most confident ones are added to the initial training dataset repeatedly until convergence. Rather than relying on a single model, co-training [26] employs an ensemble method: for each model, separate feature sets (or views) of the same labeled data are used for training, and then, as in self-training, the most confident predictions of each classifier on the unlabeled data are used to iteratively construct additional labeled training data. The co-training paradigm relies on three assumptions about the views, i.e., sufficiency, compatibility, and conditional independence [26]. On the other hand, graph-based methods treat all the samples (both labeled and unlabeled) as connected vertices (nodes) in a graph and aim to weight the node-to-node pairwise edges by the similarities between the corresponding sample pairs. Finally, minimum energy optimization is used to propagate the labeling from the labeled to the unlabeled nodes.
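For reference, a basic version of the self-training scheme described above is available off the shelf in scikit-learn; the short sketch below is a generic illustration of that scheme (not the method proposed in this paper), where the placeholder label -1 marks the unlabeled instances:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(42)

# Hide 70% of the labels to simulate a small labeled set; -1 marks an unlabeled instance.
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.7] = -1

# The wrapper iteratively adds the most confident predictions (probability >= threshold)
# to the labeled set and refits the base learner until no more instances qualify.
self_training = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.75)
self_training.fit(X, y_partial)
print("accuracy on all instances:", (self_training.predict(X) == y).mean())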
Although AL has reduced the human labeling burden without sacrificing the model's performance [7], it is still inefficient in some situations, e.g., when the acquisition of a large amount of human annotations is impractical or not feasible at all. Thus, SSL comes in handy by minimizing the unlabeled data that will be fed to the human annotator. Specifically, human experts are required to label only those instances with the lowest certainty (as determined by the AL algorithm), while the remaining instances are automatically labeled by a machine annotator (by the SSL algorithm). Indeed, several studies have been proposed that combine AL and SSL under the same methodology. One of the first attempts [27] was in the text classification field, where expectation maximization was employed along with pool-based active learning. Later, Muslea et al. proposed the combination of co-testing and co-training, showing improved classification accuracy in web page and picture classification. During co-training, two classifiers are trained separately on two different views, and only the contention points, i.e., the unlabeled instances on which the classifiers disagree the most, were selected for human annotation. Finally, expectation maximization co-training (co-EM) was employed to automatically label instances that showed a low disagreement between the two classifiers. Other studies exploited certainty-based AL with self-training, aiming at manual labeling with minimum human cost in spoken language understanding [28], natural language processing [29], sound classification [30], disease classification [31], and cell segmentation [32]. In another study [33], the authors addressed the problem of imbalanced training data in object detection. First, a simple object detection model was trained using a small portion of perfect samples instead of the entire training dataset, while the imperfect samples were partitioned into several batches. Then, a batch-mode combination of AL and SSL was employed by integrating the uncertainty and diversity criteria from the concept of AL and the confidence criterion from that of SSL.

3. Proposed Method

The proposed method constitutes a combination of the AL and SSL approaches, in order to leverage the advantages of both techniques. A mixed self-training method is employed: in an active round, the entropy of the unlabeled instances is used to identify the most confusing instances, while in a semi-supervised round the internal learner's distribution of probabilities over all possible labels for each instance is exploited as a sorting mechanism for the selection of the most confident examples.
Algorithm 1: Combination Scheme
1:  LOAD the dataset D and construct the labeled set L and the unlabeled set U
2:  INITIALIZE the classifier CLS
3:  CALCULATE the labeled ratio R = size(L)/size(L+U)
4:  DEFINE the maximum number of iterations MaxIter
5:  DEFINE the maximum percentage T of unlabeled examples to be added in each iteration, with respect to R
6:  SET maxUnlabPerIter = T * R * size(D)
7:
8:  SET i = 0
9:  WHILE i < MaxIter AND size(Ui) > 0:  /* where U0 = U */
10:   Train(CLS) on the current labeled set Li  /* where L0 = L */
11:   IF i modulo 2 == 0:
12:     Classify(Ui) using CLS and construct matrix Mpr containing the corresponding prediction probabilities along with the predicted labels
13:     SORT Mpr in descending order according to the prediction probabilities
14:     STORE the top maxUnlabPerIter instances of Mpr in a matrix Mfinal
15:       /* now containing the most confident instances along with their predictions */
16:   ELSE:
17:     Calculate the distribution_of_probabilities(Ui) and return a matrix DistUi
18:     Calculate the entropy(DistUi) for each element and return a matrix EntrUi
19:     SORT EntrUi in descending order according to the entropies
20:     Label the top maxUnlabPerIter instances using human expertise
21:     STORE the top maxUnlabPerIter instances along with their labels in a matrix Mfinal
22:       /* now containing the most confusing instances along with their true labels */
23:   END_IF
24:   Augment(Li) by adding the Mfinal instances
25:   Clean(Ui) by removing the Mfinal instances
26:   SET i = i + 1
27: END_WHILE
28:
29: Train(CLS) using Laugmented (≡ L of the last iteration)
30: LOAD the unknown test cases as Testset
31: Classify(Testset) using CLS to produce the final predictions
Uncertainty-based metrics are widely deployed in the AL field, as the literature suggests [34], mainly due to their computational efficiency and their effectiveness in selecting the most confusing instances. On the other hand, in the SSL field, research works exist [8,35] proving the effectiveness of probabilistic iterative schemes. As the nature of these two types of metrics is similar, they can form a robust combination for the construction of schemes such as the proposed one. Moreover, it is also known [36] that the SSL self-training technique further helps to overcome the lack-of-exploration problem that occurs during entropy-based AL training, which can cause the algorithm to get stuck at suboptimal solutions, continuously selecting instances that do not improve the current classifier.
The proposed algorithm can be characterized as a simple yet very effective wrapper algorithm that can utilize a wide range of learners, assuming that they can produce probability distributions for their predictions. A detailed presentation of the algorithm follows in the next paragraphs.
Let D denote the initial training set, consisting of a labeled set of examples L and an unlabeled set of examples U, thus defining a labeled ratio R, as in the following equation:
Labeled Ratio R = size(L) / size(L + U)    (1)
where the size(X) function returns the size of a set of instances.
Initially, a base learner (CLS) is selected and trained on L. Afterwards, a self-training scheme is employed with the aim of augmenting L using the available unlabeled examples of D. The number of unlabeled examples utilized in each iteration is conservatively selected, taking into account the size of the initial labeled set through a control parameter T that sets the percentage of unlabeled examples relative to the size of the initial labeled set. The maximum number of unlabeled instances selected in each iteration is calculated as follows:
maxUnlabPerIter = T * R * size(D)    (2)
In each iteration i, one of the two learning approaches is employed in alternation. The self-training loop terminates after a maximum number of iterations MaxIter or when the pool of unlabeled examples is exhausted.
Starting with the SSL round, the CLS is applied on the current unlabeled set Ui and a matrix of predictions Mpr is constructed along with the prediction probability for each unlabeled instance, resulting in a matrix of dimensions size(Ui) × (l + 2), where l is the number of features and the two additional columns hold the predicted label and the corresponding prediction probability. The SSL round uses machine labeling in order to counterbalance the expensive human effort and examination process required to label the data. Mpr is sorted in descending order of prediction probability, the top maxUnlabPerIter instances are kept, and the remaining elements are discarded. The maxUnlabPerIter instances, along with their predicted labels, are stored in Mfinal.
Following the method flow, an AL round is deployed in every other iteration. In this round, the algorithm constructs a matrix EntrUi containing the entropy estimation of each unlabeled instance. The base learner is applied on Ui and the distributions of probabilities are exported in a matrix DistUi of dimensions size(Ui) × num_classes(D), where the num_classes(X) function returns the number of classes of a dataset. Having produced DistUi, each element (j) of the entropy estimation matrix is computed using the following formula:
Entropy_j = −Σ_{k=1}^{num_classes(D)} p_k · log2(p_k)    (3)
where p_k denotes the probability of class k for instance j, already contained in DistUi.
Subsequently, EntrUi is sorted in descending order, so that the most confusing examples, i.e., those with the highest entropy values, are placed at the top of the matrix. The top maxUnlabPerIter instances are kept in EntrUi and the rest are discarded. Human expertise is then utilized to label these maxUnlabPerIter instances, and a matrix Mfinal containing the human-labeled instances is constructed, of size maxUnlabPerIter × (l + 1), where l is the number of features and the additional column holds the class label.
During each iteration, the Mfinal instances are added to the current labeled set Li and removed from the current unlabeled set Ui. The CLS is re-trained at the start of each self-training iteration so that it can be utilized again. When the termination criteria are met, the algorithm exits the self-training loop, having constructed the augmented labeled set Laugmented (≡ L of the last iteration). As a final step, the CLS is trained on the augmented labeled set in order to be applied on the unknown test cases. The exact implementation of the combination scheme is presented in Algorithm 1.
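To make the steps above concrete, the following Python sketch re-implements Algorithm 1 in a simplified form using scikit-learn-style estimators; it is an illustration only, since the authors' actual implementation is the Java/WEKA package of Appendix A. The hidden true labels of the unlabeled pool, y_unlab_oracle, stand in for the human expert:

import numpy as np

def combination_scheme(cls, X_lab, y_lab, X_unlab, y_unlab_oracle, T=0.10, max_iter=10):
    """Alternate SSL (even iterations) and AL (odd iterations) self-training rounds.

    All inputs are NumPy arrays; y_unlab_oracle holds the hidden true labels and
    stands in for the human expert of the AL rounds.
    """
    R = len(X_lab) / (len(X_lab) + len(X_unlab))            # Equation (1)
    n_total = len(X_lab) + len(X_unlab)
    max_per_iter = max(1, int(T * R * n_total))              # Equation (2)

    X_l, y_l = X_lab.copy(), y_lab.copy()
    X_u, y_u = X_unlab.copy(), y_unlab_oracle.copy()

    for i in range(max_iter):
        if len(X_u) == 0:
            break
        cls.fit(X_l, y_l)
        proba = cls.predict_proba(X_u)
        if i % 2 == 0:
            # SSL round: machine-label the most confident unlabeled instances.
            confidence = proba.max(axis=1)
            top = np.argsort(confidence)[::-1][:max_per_iter]
            new_labels = cls.classes_[proba[top].argmax(axis=1)]
        else:
            # AL round: query the expert for the most entropic (confusing) instances, Equation (3).
            with np.errstate(divide="ignore", invalid="ignore"):
                entropy = -np.nansum(proba * np.log2(proba), axis=1)
            top = np.argsort(entropy)[::-1][:max_per_iter]
            new_labels = y_u[top]                             # oracle answer
        # Augment the labeled set and remove the selected instances from the unlabeled pool.
        X_l = np.vstack([X_l, X_u[top]])
        y_l = np.concatenate([y_l, new_labels])
        keep = np.setdiff1d(np.arange(len(X_u)), top)
        X_u, y_u = X_u[keep], y_u[keep]

    cls.fit(X_l, y_l)                                         # final model on the augmented labeled set
    return cls

For example, cls could be a random forest or the soft-voting ensemble discussed in Section 4, and the labeled/unlabeled split could be produced as described in the experimentation section.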

4. Experimentation and Results

In order to examine the efficacy of the proposed scheme, an exhaustive experimentation procedure was followed. At first, fifty-five (55) benchmark datasets related to a wide range of classification problems were extracted from the UCI repository [14]. To further enhance the variance and complexity of the classification process, all datasets were partitioned and examined according to the resampling procedure of k-fold cross-validation [37]. Following this method, each dataset is shuffled and then divided into k unique data groups; by holding out one of the groups as a test set and utilizing the rest as a training set, k new datasets are generated. The k parameter was set equal to ten, as is common in the literature.
The main aim of the experimentation process was to demonstrate the superiority of the combination scheme against the competing methods of supervised, semi-supervised, and active learning, always using the same amounts of labeled and unlabeled data under the same base learner model. In more detail, the supervised method is trained only on the initial labeled set, while the semi-supervised rival method also utilizes the initial unlabeled set in the same manner as the proposed combination scheme. Moreover, as the baseline AL opponent, the random sampling [7] process is implemented in a similar way to the rest of the combination self-training procedure, also utilizing the initial unlabeled set.
For this purpose, all training subsets were further divided into two sets, an initial labeled set and an initial unlabeled set, using four different labeled ratios R. As the initial datasets contained one hundred percent of the instance labels, in order to simulate the human expert labeling process, all the original labels of the constructed unlabeled sets were stored separately so that they could be retrieved whenever the algorithm needed to query the human expert. Thus, each original dataset was augmented into forty derived datasets. In detail, the R values were set to 10%, 20%, 30%, and 40%. As regards the proposed algorithm's parameters, the control parameter T was set equal to 10%, while the MaxIter parameter was empirically set to 10, which imposes a limit of at most 40% of the original dataset size (Equation (2) multiplied by MaxIter) on the number of unlabeled instances selected for the augmentation of the initial labeled set in the case of R = 40%.
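A rough sketch of this data preparation is given below; the exact shuffling seeds and stratification choices of the original experiments are not reported, so they are assumed here for illustration:

import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def derived_datasets(X, y, ratios=(0.10, 0.20, 0.30, 0.40), seed=0):
    """Yield (X_lab, y_lab, X_unlab, y_hidden, X_test, y_test) splits: 10 folds x 4 labeled ratios."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(X, y):
        X_train, y_train = X[train_idx], y[train_idx]
        for R in ratios:
            # Keep a fraction R of the training fold labeled; the rest become the unlabeled pool.
            X_lab, X_unlab, y_lab, y_hidden = train_test_split(
                X_train, y_train, train_size=R, stratify=y_train, random_state=seed)
            # y_hidden is stored separately and only revealed when the human expert is queried.
            yield X_lab, y_lab, X_unlab, y_hidden, X[test_idx], y[test_idx]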
As a comparison measure, the average classification accuracy over each R was used. In order to draw general conclusions for the efficacy of the combination scheme, a wide range of classification models and meta-techniques were employed, incorporated in each one of the four learning methods. A brief description for each one of the base learners is presented:
BagDT: In this model, the bootstrap aggregating (bagging) [38] meta-algorithm was applied along with the use of the C4.5 decision trees [39] classifier. The bagging technique is often adopted to reduce the variance and overfitting of a base learner and enhance its accuracy stability. The basic idea behind this technique is the generation of multiple training sets by uniformly sampling the original dataset.
5NN: The k-nearest neighbors [40] classifier belongs to the family of lazy learning algorithms. By examining the k closest instances in a defined feature space, it classifies a given test instance by plurality voting on the labels of the k instances.
Logistic: The logistic regression, also commonly referenced as the logit model, is a statistical model that utilizes the logistic function in order to model binary dependent variables, thus fitting very well with categorical targets. In problems where the target variable has more than two values, multinomial logistic regression is applied [41].
LMT: The logistic model tree [42] classification model combines logistic regression with decision trees. The main idea behind the classifier is the use of linear regression models as leaves of a classification tree.
LogitBoost: This classifier is a boosting model proposed by Friedman et al. [43]. It is based on the idea that the adaptive boosting [44] method can be thought of as a generalized additive model, and thus the cost function of logistic regression can be applied.
RF: One of the most robust ML learners is the random forests [45] model, which is capable of tackling both regression and classification problems. Its operation is based on the construction of multiple decision trees using random subsamples of the original feature space. The aggregation of the results is achieved via majority voting. Due to its inner architecture, it is known to efficiently handle the overfitting phenomenon.
RotF: The rotation forest model constitutes an ensemble [46] classifier proposed by Rodriguez and Kuncheva [47]. Following the flow of this algorithm, the initial feature space is divided in random subspaces. The default feature extraction algorithm applied to create the subspaces is the principal component analysis (PCA) [48], aiming to increase the diversity amongst the base learners.
XGBoost: The extreme gradient boosted trees [49] algorithm is a powerful implementation of gradient boosted decision trees. Under this boosting [50] scheme, a number of trees are built sequentially, each aiming to reduce the errors produced by the previous tree; thus, each tree is fitted on the gradient of the loss of the previous step. The final decision is produced from the weighted voting of the trees. XGBoost is a very scalable algorithm that has been shown to perform very well on large or sparse datasets, utilizing parallel and distributed execution methods.
Voting (RF, RotF, XGBoost): As a last effort to further explore the potential of more complex classification models in the combination scheme, an ensemble classifier was put forward that majority-votes the results of three of the most robust models: RF, RotF, and XGBoost. As regards the extraction of probabilities, the average of the probabilities exported by the three classifiers was considered the best option.
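A minimal sketch of such a soft-voting ensemble is shown below; since rotation forest and XGBoost are not part of scikit-learn, ExtraTreesClassifier and GradientBoostingClassifier are used here purely as stand-ins, while voting="soft" averages the exported class probabilities as described above:

from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, VotingClassifier)

# Soft voting averages the predicted class probabilities of the three members,
# which is also what the combination scheme consumes for its entropy/confidence criteria.
voting_cls = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=100, random_state=0)),    # stand-in for RotF
        ("gb", GradientBoostingClassifier(random_state=0)),                # stand-in for XGBoost
    ],
    voting="soft",
)
# Usage: voting_cls.fit(X_lab, y_lab); voting_cls.predict_proba(X_unlab)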
The experimental results in terms of classification accuracy for each base learner are organized in Table 1, Table 2, Table 3, Table 4 and Table 5 and supplementary material Tables S1–S4, categorized according to the four label ratios (10%, 20%, 30%, 40%) for each learning method. The bold values in the tables indicate the highest accuracy for the corresponding dataset and the subject labeled ratio.
The superiority of the proposed combination scheme regarding the classification accuracy is prominent. The following important observations are derived from the accuracy tables:
  • The proposed combination method outperforms the other learning methods, in terms of average accuracy, for all four labeled ratios and for all nine base learners employed. This is also illustrated in Figure 3, where the comparisons are visually assembled and a progressive picture of the performance of the two dominant methods is presented as R increases. The SL method is also included as a baseline performance metric.
  • It can also be observed from the accuracy tables that the proposed method steadily produces significantly more wins on the individual datasets throughout all the experiments carried out.
Following the accuracy examination, the Friedman aligned ranks test [51] was conducted. In Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13 and Table 14, the results of the statistical tests for each of the nine base learners, divided into the four labeled ratios used, are presented. These lead to the following observations:
  • The non-parametric tests assess the null hypothesis that the means of the results of two or more of the compared methods are the same, by calculating the related p-value. This hypothesis can be rejected for all nine algorithms and for all labeled ratios, as all calculated p-values are significantly lower than the significance level of α = 0.10.
  • Moreover, the Friedman rankings confirm that, for all nine base learners and regardless of the labeled ratio, the proposed combination scheme ranks first ahead of all other learning methods, in agreement with the accuracy results.
Since the Friedman test null hypothesis was rejected, Holm's [52] post-hoc statistical test was also applied with an alpha value of 0.10. The aim of Holm's test is to detect the specific differences between the combination scheme and the other learning methods; thus, the null hypothesis under evaluation is that the means of the results of the proposed method and of each other method are equal (compared in pairs). The post-hoc results are also presented in the corresponding ranking test tables for each of the base learners. By observing the adjusted p-values of Holm's tests, it is concluded that:
  • The proposed combination method performs significantly better in 105 of the total 108 compared method variations for the nine base learners over the four labeled ratios.
  • For the AL method, the null hypothesis of equal means cannot be rejected for the Logistic, LMT, and LogitBoost classifiers for one labeled ratio each (30%, 20%, and 40%, respectively). However, the adjusted p-values exceed the alpha of 0.10 only by small margins.
Summarizing the test results, both the Friedman aligned ranks tests and Holm's one-vs-all comparison tests verify the superior performance of the proposed method over a wide range of scenarios and algorithm comparisons.
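As a rough illustration of this testing pipeline (SciPy provides only the standard Friedman test rather than the aligned-ranks variant, and pairwise Wilcoxon tests are used here as a stand-in for the one-vs-all comparisons, so this is an approximation of the reported procedure, not a reproduction of it):

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

def compare_methods(acc, alpha=0.10):
    """acc: (n_datasets x 4) NumPy array; columns = supervised, SSL, AL, combination."""
    stat, p = friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2], acc[:, 3])
    print(f"Friedman test: chi2 = {stat:.3f}, p = {p:.4f}")
    # Pairwise post-hoc: combination (column 3) vs. each competitor, Holm-adjusted.
    raw_p = [wilcoxon(acc[:, 3], acc[:, j]).pvalue for j in range(3)]
    reject, adj_p, _, _ = multipletests(raw_p, alpha=alpha, method="holm")
    for j, (r, ap) in enumerate(zip(reject, adj_p)):
        print(f"combination vs method {j}: adjusted p = {ap:.4f}, significant = {r}")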
To better observe the individual results regarding the combination scheme and the role of the base learners incorporated, the average accuracies are plotted in Figure 4. The outcome was as expected: the ensemble voting (RF, RotF, XGBoost) classifier outperforms the remaining models for all labeled ratios. A first explanation of this outcome, and a promising starting point for a more rigorous analysis of the performance boost, is the improved prediction probabilities derived from averaging the probabilities of the three classifiers, on which the combination scheme relies: both the most confusing unlabeled instances, through the entropy calculation, and the most confident unlabeled instances, through the prediction probabilities themselves, are detected from this distribution of prediction probabilities. Similar behavior also seems to emerge in other relevant ensemble wrapper algorithms [53].

5. Modification

Pointing towards the improvement of the proposed method, it is evident from the statistical analysis and ranking results that even a slight increase in the performance of the SSL part could have a significant impact on the overall efficiency of the combination scheme.
In this direction, careful observation of the execution of the proposed algorithm revealed a weakness of the SSL prediction probabilities, which in many cases lead to the selection of the wrong instances to be labeled. In order to augment the probabilistic information available to the proposed method, as regards the SSL part, a lazy classifier (kNN) was integrated into the instance selection process. Such a development, on the one hand, provides the proposed method with a second view of the labels of the unlabeled set and, on the other hand, does not significantly increase the computational overhead, as this family of classifiers does not need training. As a second measure to strengthen the SSL instance selection criteria, the empirical approach of setting a lower limit on the minimum accepted probability for an unlabeled instance was adopted, using the formula:
probaThreshold = (num_classes(D) + 1) / (2 · num_classes(D))    (4)
where the use of the num_classes(X) function removes the dependence on the dataset characteristics; for instance, the threshold evaluates to 0.75 for a binary problem and to approximately 0.67 for a three-class problem. A more compact representation of the SSL part modifications is given in Algorithm 2, while the abstract flow chart of the improved combination framework is presented in Figure 5.
Algorithm 2: SSL modification
10: [Execute Algorithm 1 steps (until Alg. 1 line 10)]
11:   IF i modulo 2 == 0:
12:     SET probaThreshold = [num_classes(D) + 1] / [2 * num_classes(D)]
13:     SET the number of nearest neighbors numNeib
14:     INITIALIZE the NN classifier on Li using numNeib
15:
16:     Classify(Ui) using CLS and construct matrix Mpr containing the corresponding prediction probabilities along with the predicted labels
17:     FOR_EACH instance of Mpr:
18:       IF cls_predicted_class(instance) != nn_predicted_class(instance) OR cls_probability(instance) < probaThreshold:
19:         DISCARD instance from Mpr
20:       END_IF
21:     END_FOR_EACH
22:     SORT Mpr in descending order according to the prediction probabilities
23:     STORE the top maxUnlabPerIter instances of Mpr in a matrix Mfinal
24:       /* now containing the most confident instances along with their predictions */
25: [Continue Algorithm 1 steps (from Alg. 1 line 16)]
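A sketch of the modified SSL round, again in scikit-learn terms rather than the authors' WEKA code, could look as follows; it replaces only the even-iteration branch of the earlier combination sketch:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def ssl_selection_modified(cls, X_lab, y_lab, X_unlab, max_per_iter, num_neib=5):
    """SSL round with the kNN agreement filter and the probability threshold of Equation (4)."""
    n_classes = len(np.unique(y_lab))
    proba_threshold = (n_classes + 1) / (2 * n_classes)        # Equation (4)

    # Lazy second view of the labels; no real training cost beyond storing the labeled set.
    nn = KNeighborsClassifier(n_neighbors=num_neib).fit(X_lab, y_lab)

    proba = cls.predict_proba(X_unlab)
    cls_pred = cls.classes_[proba.argmax(axis=1)]
    confidence = proba.max(axis=1)

    # Keep only instances where both views agree and the confidence clears the threshold.
    mask = (cls_pred == nn.predict(X_unlab)) & (confidence >= proba_threshold)
    candidates = np.where(mask)[0]

    # Of the surviving candidates, select the most confident ones for machine labeling.
    top = candidates[np.argsort(confidence[candidates])[::-1][:max_per_iter]]
    return top, cls_pred[top]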
The improved combination scheme was further tested against some of the most robust AL frameworks found in the literature. In detail, the query strategies of least confidence (LC), margin sampling (MS), and entropy sampling (ES) were selected for comparison with the modified proposed scheme. The major aspects of these strategies [7] follow below.
LC: The objective of this strategy is to identify the least confident unlabeled instances by examining the probability of the most probable label for each unlabeled instance. The strategy then selects the instances whose most probable label has the lowest probability and presents them to the human expert to be labeled, in order to augment the initial labeled set.
MS: As an improvement over the LC strategy, MS attempts to overcome the disadvantage of considering only the most probable label by calculating the difference between the most probable and the second most probable label for each unlabeled instance. Afterwards, these calculated differences are sorted and the instances with the lowest differences are selected to be labeled.
ES: This strategy, part of which is also integrated into the AL counterpart of the proposed scheme, computes the entropy measure (as in Equation (3)) for each unlabeled instance using the distribution of prediction probabilities. The most entropic instances are then presented to the human expert in order to enlarge the original labeled set.
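For completeness, the three query strategies reduce to a few lines of generic code operating on a learner's predicted probabilities (higher scores indicate better query candidates):

import numpy as np

def query_scores(proba, strategy="entropy"):
    """proba: (n_unlabeled x n_classes) prediction probabilities of the current model."""
    if strategy == "least_confidence":
        # LC: 1 minus the probability of the most probable label.
        return 1.0 - proba.max(axis=1)
    if strategy == "margin":
        # MS: negative gap between the two most probable labels (small gap = high score).
        part = np.sort(proba, axis=1)
        return -(part[:, -1] - part[:, -2])
    # ES: Shannon entropy of the predicted distribution, as in Equation (3).
    with np.errstate(divide="ignore", invalid="ignore"):
        return -np.nansum(proba * np.log2(proba), axis=1)

# The instances with the highest scores are presented to the human expert, e.g.:
# query_idx = np.argsort(query_scores(model.predict_proba(X_unlab), "margin"))[::-1][:batch_size]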
In Figure 6, ten experiments display the performance comparison, in terms of classification accuracy, of the three AL methods against the modified combination scheme. The experiments are categorized by the five base learner models that were integrated into the methods. In each experiment, a different benchmark dataset was deployed using the four different labeled ratios R of 10%, 20%, 30%, and 40%.
The experimental results confirm the efficiency of the modified combination scheme against the AL methods. It can be seen from the figure that the proposed technique, in all ten cases, performs equally well as or better than its rivals in terms of accuracy. Moreover, the figure suggests that the three AL methods produce closely related accuracy results, as in four of the ten test cases their performance was almost identical. This outcome can be explained by examining the metrics utilized in these strategies, which are all derived from the prediction probabilities of the base learners.
Closing this section, in the conducted experiments on real-world benchmark datasets, the proposed combination scheme was compared with the SL, SSL, and AL methods, and the experiments show that the proposed method outperforms the compared methods. In the future, it would be important to conduct more insightful theoretical analyses of the effectiveness of the proposed approach and to explore other appropriate selection criteria for filtering the informative unlabeled instances, in order to generalize the results with more confidence.

6. Conclusions

In this research work, a new wrapper algorithm was proposed, combining the AL and SSL methods with the aim of efficiently utilizing the available unlabeled data. A plethora of experiments was conducted to evaluate the efficacy of the proposed algorithm on a wide range of benchmark datasets against other learning methods, using a variety of classifiers as base models. In addition, four different labeled ratios were investigated. The proposed algorithm prevails over the other learning methods, as statistically confirmed by the Friedman aligned ranks non-parametric tests and Holm's post-hoc tests. To further promote the use of the proposed algorithm, a software package was developed; more details about this package can be obtained from the link found in Appendix A.
Regarding the performance boost that was experimentally observed while applying the proposed combination scheme on the numerous datasets, there is strong evidence that the vigorous AL method can efficiently improve its performance by utilizing SSL schemes such as the self-training technique. Even in cases where the individual SSL method was not performing dexterously, when integrated into the proposed AL and SSL wrapper the performance of the overall scheme was significantly improved compared to the plain AL method. Moreover, when the majority of the instances used in a learning scheme are automatically labeled, the performance may be unsatisfactory, and in some cases it may even be worse than the SL baseline accuracy. For this reason, a fundamental requirement arises: that of defining a sufficient threshold of human expert intervention in the labeling process in order to successfully combine AL and SSL methods. Such a fine-tuning process is highly application-specific and challenging to automate. Furthermore, it can be noticed from the results that, on datasets with very small initial labeled sets, the proposed scheme can be particularly beneficial, as the initially learned decision boundaries of such datasets can be inaccurate and unlabeled instances near these boundaries could be falsely classified; this is an implication that the AL part of the proposed scheme can efficiently tackle.
For future work, a number of areas have been identified that are worth exploring, as they seem promising for improving the classification abilities of the proposed algorithm. A first major research area, expected to have a high impact on the combination scheme's performance in terms of accuracy and execution time, is the investigation of instance selection strategies different from those currently employed. In the AL part of the proposed algorithm, two common alternatives are the least confidence [54] and the margin sampling [55] algorithms, which utilize the unlabeled data under a different scope. Moreover, more complex query scenarios than the plain pool-based sampling used, such as query synthesis [56], could also be beneficial. As regards the semi-supervised part, simple techniques like the integration of weights annotating the instances assessed as informative by the SSL part of the algorithm could further improve the overall accuracy of the combination scheme, as suggested in [35,57].
Another interesting research area would be that of the extreme outlier detection algorithms. The incorporation of such algorithms in the proposed algorithm would have an immediate impact on the quality of the selected candidate unlabeled instances that are used to augment the labeled set in each self-training iteration, thus resulting in more robust inner models. A few of the very well-known techniques that could be directly implemented in the combination scheme are the local outlier factor [58] for detecting anomalous values based on neighboring data or the isolation forest [59], which is a tree-based outlier detector.
Other research areas that could bring further improvements to the proposed algorithm include preprocessing algorithms, for instance, PCA for dimensionality reduction and the production of more informative features, or other feature selection techniques such as univariate feature selection [60]. As regards the integrated base learners, the introduction of online learners like the Hoeffding adaptive tree [61] and Pegasos [62], or of deep learning architectures based on deep neural networks [63] and deep ensembles [64], could make the proposed algorithm suitable for tackling streaming and big data problems.
Finally, by combining schemes from the fields of active regression learning [65,66] and semi-supervised regression [53] along with the proposed classification algorithm, a general combination scheme could be put forward that would be able to handle numeric and categorical targets.

Supplementary Materials

The following are available online at https://www.mdpi.com/1099-4300/21/10/988/s1, Table S1: Classification accuracies of 5 nearest neighbors (5NN) on four different ratios, Table S2: Classification accuracies of logistic regression (logistic) on four different ratios, Table S3: Classification accuracies of logistic model trees (LMT) on four different ratios, Table S4: Classification accuracies of LogitBoost on four different ratios.

Author Contributions

All authors have contributed equally to the final manuscript.

Funding

This research is implemented through the Operational Program Human Resources Development, Education and Lifelong Learning and is co-financed by the European Union (European Social Fund) and Greek national funds.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

The proposed combination algorithm was implemented as a separate Java package for the WEKA [67] software tool. The decision to develop the combination scheme as part of the WEKA tool was made since it is one of the most well-known tools in the machine-learning community and includes a large number of base learner models. Moreover, it can be easily deployed without requiring programming experience from the end user. The package can be downloaded using the following link: http://ml.upatras.gr/combine-classification/.

References

  1. Rosenberg, C.; Hebert, M.; Schneiderman, H. Semi-supervised self-training of object detection models. In Proceedings of the Seventh IEEE Workshop on Applications of Computer Vision (WACV 2005), Breckenridge, CO, USA, 5–7 January 2005.
  2. Karlos, S.; Fazakis, N.; Karanikola, K.; Kotsiantis, S.; Sgarbas, K. Speech Recognition Combining MFCCs and Image Features. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2016; Volume 9811 LNCS, pp. 651–658. ISBN 9783319439570.
  3. Tsukada, M.; Washio, T.; Motoda, H. Automatic Web-Page Classification by Using Machine Learning Methods. In Web Intelligence: Research and Development; Springer: Berlin/Heidelberg, Germany, 2001; pp. 303–313.
  4. Fiscon, G.; Weitschek, E.; Cella, E.; Lo Presti, A.; Giovanetti, M.; Babakir-Mina, M.; Ciotti, M.; Ciccozzi, M.; Pierangeli, A.; Bertolazzi, P.; et al. MISSEL: A method to identify a large number of small species-specific genomic subsequences and its application to viruses classification. BioData Min. 2016, 9, 38.
  5. Previtali, F.; Bertolazzi, P.; Felici, G.; Weitschek, E. A novel method and software for automatically classifying Alzheimer’s disease patients by magnetic resonance imaging analysis. Comput. Methods Programs Biomed. 2017, 143, 89–95.
  6. Celli, F.; Cumbo, F.; Weitschek, E. Classification of Large DNA Methylation Datasets for Identifying Cancer Drivers. Big Data Res. 2018, 13, 21–28.
  7. Settles, B. Active Learning Literature Survey; University of Wisconsin-Madison: Madison, WI, USA, 2009; pp. 1–43.
  8. Triguero, I.; García, S.; Herrera, F. Self-labeled techniques for semi-supervised learning: Taxonomy, software and empirical study. Knowl. Inf. Syst. 2015, 42, 245–284.
  9. Mousavi, R.; Eftekhari, M.; Rahdari, F. Omni-Ensemble Learning (OEL): Utilizing Over-Bagging, Static and Dynamic Ensemble Selection Approaches for Software Defect Prediction. Int. J. Artif. Intell. Tools 2018, 27, 1850024.
  10. Bologna, G.; Hayashi, Y. A Comparison Study on Rule Extraction from Neural Network Ensembles, Boosted Shallow Trees, and SVMs. Appl. Comput. Intell. Soft Comput. 2018, 2018, 1–20.
  11. Hajmohammadi, M.S.; Ibrahim, R.; Selamat, A.; Fujita, H. Combination of active learning and self-training for cross-lingual sentiment classification with density analysis of unlabelled samples. Inf. Sci. 2015, 317, 67–77.
  12. Ahsan, M.N.I.; Nahian, T.; Kafi, A.A.; Hossain, M.I.; Shah, F.M. Review spam detection using active learning. In Proceedings of the IEEE 2016 7th IEEE Annual Information Technology, Electronics and Mobile Communication Conference, Vancouver, BC, Canada, 13–15 October 2016.
  13. Xu, J.; Fumera, G.; Roli, F.; Zhou, Z. Training spamassassin with active semi-supervised learning. In Proceedings of the 6th Conference on Email and Anti-Spam (CEAS’09), Mountain View, CA, USA, 16–17 July 2009.
  14. Dua, D.; Graff, C. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/ml/citation_policy.html (accessed on 9 October 2019).
  15. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
  16. Sourati, J.; Akcakaya, M.; Dy, J.; Leen, T.; Erdogmus, D. Classification Active Learning Based on Mutual Information. Entropy 2016, 18, 51.
  17. Huang, S.J.; Jin, R.; Zhou, Z.H. Active Learning by Querying Informative and Representative Examples. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1936–1949.
  18. Lewis, D.D.; Gale, W.A. A Sequential Algorithm for Training Text Classifiers. In Proceedings of the ACM SIGIR Forum, Dublin, Ireland, 3–6 July 1994.
  19. Riccardi, G.; Hakkani-Tür, D. Active learning: Theory and applications to automatic speech recognition. IEEE Trans. Speech Audio Process. 2005, 13, 504–511.
  20. Zhang, Z.; Schuller, B. Active Learning by Sparse Instance Tracking and Classifier Confidence in Acoustic Emotion Recognition. In Proceedings of the Interspeech 2012, Portland, OR, USA, 9–13 September 2012.
  21. Roma, G.; Janer, J.; Herrera, P. Active learning of custom sound taxonomies in unstructured audio data. In Proceedings of the 2nd ACM International Conference on Multimedia Retrieval, Hong Kong, China, 5–8 June 2012.
  22. Chen, Y.; Wang, G.; Dong, S. Learning with progressive transductive support vector machine. Pattern Recognit. Lett. 2003, 24, 1845–1855.
  23. Johnson, R.; Zhang, T. Graph-based semi-supervised learning and spectral kernel design. IEEE Trans. Inf. Theory 2008, 54, 275–288.
  24. Anis, A.; El Gamal, A.; Avestimehr, A.S.; Ortega, A. A Sampling Theory Perspective of Graph-Based Semi-Supervised Learning. IEEE Trans. Inf. Theory 2019, 65, 2322–2342.
  25. Culp, M.; Michailidis, G. Graph-based semisupervised learning. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 174–179.
  26. Blum, A.; Mitchell, T. Combining Labeled and Unlabeled Data with Co-Training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, Madison, WI, USA, 24–26 July 1998.
  27. McCallum, A.K.; Nigam, K. Employing EM and pool-based active learning for text classification. In Proceedings of the Fifteenth International Conference on Machine Learning, Madison, WI, USA, 24–27 July 1998; pp. 359–367.
  28. Tur, G.; Hakkani-Tür, D.; Schapire, R.E. Combining active and semi-supervised learning for spoken language understanding. Speech Commun. 2005, 45, 171–186.
  29. Tomanek, K.; Hahn, U. Semi-supervised active learning for sequence labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2, Singapore, 2–7 August 2009.
  30. Han, W.; Coutinho, E.; Ruan, H.; Li, H.; Schuller, B.; Yu, X.; Zhu, X. Semi-supervised active learning for sound classification in hybrid learning environments. PLoS ONE 2016, 11, e0162075.
  31. Chai, H.; Liang, Y.; Wang, S.; Shen, H.-W. A novel logistic regression model combining semi-supervised learning and active learning for disease classification. Sci. Rep. 2018, 8, 13009.
  32. Su, H.; Yin, Z.; Huh, S.; Kanade, T.; Zhu, J. Interactive Cell Segmentation Based on Active and Semi-Supervised Learning. IEEE Trans. Med. Imaging 2016, 35, 762–777.
  33. Rhee, P.K.; Erdenee, E.; Kyun, S.D.; Ahmed, M.U.; Jin, S. Active and semi-supervised learning for object detection with imperfect data. Cogn. Syst. Res. 2017, 45, 109–123.
  34. Yang, Y.; Loog, M. Active learning using uncertainty information. In Proceedings of the International Conference on Pattern Recognition, Cancun, Mexico, 4–8 December 2016.
  35. Fazakis, N.; Karlos, S.; Kotsiantis, S.; Sgarbas, K. Self-trained Rotation Forest for semi-supervised learning. J. Intell. Fuzzy Syst. 2017, 32, 711–722.
  36. Yang, Y.; Loog, M. A benchmark and comparison of active learning for logistic regression. Pattern Recognit. 2018, 83, 401–415.
  37. Stone, M. Cross-validation: A review. Ser. Stat. 1978, 9, 127–139.
  38. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140.
  39. Salzberg, S.L. C4.5: Programs for Machine Learning by J. Ross Quinlan. Morgan Kaufmann Publishers, Inc., 1993. Mach. Learn. 1994, 16, 235–240.
  40. Aha, D.W.; Kibler, D.; Albert, M.K. Instance-Based Learning Algorithms. Mach. Learn. 1991, 6, 37–66.
  41. Le Cessie, S.; Van Houwelingen, J.C. Ridge Estimators in Logistic Regression. Appl. Stat. 1992, 41, 191–201.
  42. Landwehr, N.; Hall, M.; Frank, E. Logistic model trees. Mach. Learn. 2005, 59, 161–205.
  43. Friedman, J.; Hastie, T.; Tibshirani, R. Additive logistic regression: A statistical view of boosting. Ann. Stat. 2000, 28, 337–407.
  44. Schapire, R.E. A Short Introduction to Boosting. J. Jpn. Soc. Artif. Intell. 1999, 14, 771–780.
  45. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
  46. Opitz, D.; Maclin, R. Popular Ensemble Methods: An Empirical Study. J. Artif. Intell. Res. 1999, 11, 169–198.
  47. Rodriguez, J.J.; Kuncheva, L.I.; Alonso, C.J. Rotation forest: A new classifier ensemble method. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1619–1630.
  48. Han, J.; Kamber, M.; Pei, J. Data Mining: Concepts and Techniques; Elsevier: Amsterdam, The Netherlands, 2011.
  49. Chen, T.; Guestrin, C. XGBoost: Reliable Large-scale Tree Boosting System. Available online: http://learningsys.org/papers/LearningSys_2015_paper_32.pdf (accessed on 9 October 2019).
  50. Ferreira, A.J.; Figueiredo, M.A.T. Boosting algorithms: A review of methods, theory, and applications. In Ensemble Machine Learning: Methods and Applications; Springer: Boston, MA, USA, 2012.
  51. Friedman, M. The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance. J. Am. Stat. Assoc. 1937, 32, 69–73.
  52. Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 1979, 6, 65–70.
  53. Fazakis, N.; Karlos, S.; Kotsiantis, S.; Sgarbas, K. A multi-scheme semi-supervised regression approach. Pattern Recognit. Lett. 2019, 125, 758–765.
  54. Culotta, A.; McCallum, A. Reducing labeling effort for structured prediction tasks. In Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, PA, USA, 9–13 July 2005.
  55. Scheffer, T.; Decomain, C.; Wrobel, S. Active hidden markov models for information extraction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2001.
  56. Wang, L.; Hu, X.; Yuan, B.; Lu, J. Active learning via query synthesis and nearest neighbour search. Neurocomputing 2015, 147, 426–434.
  57. Huu, Q.N.; Viet, D.C.; Thuy, Q.D.T.; Quoc, T.N.; Van, C.P. Graph-based semisupervised and manifold learning for image retrieval with SVM-based relevant feedback. J. Intell. Fuzzy Syst. 2019, 37, 711–722.
  58. Wang, W.; Lu, P. An efficient switching median filter based on local outlier factor. IEEE Signal Process. Lett. 2011, 18, 551–554.
  59. Liu, F.T.; Ting, K.M.; Zhou, Z.-H. Isolation forest. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, Pisa, Italy, 15–19 December 2008; pp. 413–422.
  60. Tang, J.; Alelyani, S.; Liu, H. Feature selection for classification: A review. In Data Classification: Algorithms and Applications; CRC Press: Boca Raton, FL, USA, 2014.
  61. Hulten, G.; Spencer, L.; Domingos, P. Mining time-changing data streams. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining KDD ’01, San Francisco, CA, USA, 26–29 August 2001.
  62. Shalev-Shwartz, S.; Singer, Y.; Srebro, N.; Cotter, A. Pegasos: Primal estimated sub-gradient solver for SVM. Math. Program. 2011, 127, 3–30.
  63. Liu, W.; Wang, Z.; Liu, X.; Zeng, N.; Liu, Y.; Alsaadi, F.E. A survey of deep neural network architectures and their applications. Neurocomputing 2017, 234, 11–26.
  64. Amini, M.; Rezaeenour, J.; Hadavandi, E. A Neural Network Ensemble Classifier for Effective Intrusion Detection Using Fuzzy Clustering and Radial Basis Function Networks. Int. J. Artif. Intell. Tools 2016, 25, 1550033.
  65. Elreedy, D.; Atiya, A.F.; Shaheen, S.I. A Novel Active Learning Regression Framework for Balancing the Exploration-Exploitation Trade-Off. Entropy 2019, 21, 651.
  66. Fazakis, N.; Kostopoulos, G.; Karlos, S.; Kotsiantis, S.; Sgarbas, K. An Active Learning Ensemble Method for Regression Tasks. Intell. Data Anal. 2020, 24.
  67. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software. ACM SIGKDD Explor. Newsl. 2009, 11, 10.
Figure 1. The general frameworks of active learning and semi-supervised learning along with their shared elements.
Figure 2. Progression of accuracies in relation to the number of iterations executed for the proposed combination scheme and its semi-supervised counterpart, utilizing support vector machines (SVMs) as base learner, applied on the Spambase dataset using two different labeled ratios.
Figure 3. Performance comparison of the proposed combination scheme, in terms of average accuracies over fifty-five datasets and four labeled ratios, against the corresponding methods of active learning (AL) and supervised learning (SL) for the nine base classifiers.
Figure 4. Average accuracies for the proposed combination scheme regarding different base learners and labeled ratios.
Figure 5. Graphical abstract of the proposed combination framework after the introduction of the semi-supervised learning (SSL) improvements.
Figure 6. Progression of accuracies for the modified combination scheme against the AL strategies of least confidence (LC), margin sampling (MS), and entropy sampling (ES) for five different base learners on ten different benchmark datasets.
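For reference, the three uncertainty-sampling scores named in the Figure 6 caption can be computed directly from a classifier's class-probability estimates. The sketch below shows standard formulations of least confidence, margin sampling, and entropy sampling; the function names and the toy probability matrix are illustrative only.

```python
import numpy as np

def least_confidence(proba):
    # Higher score = less confident top prediction; query the largest values.
    return 1.0 - proba.max(axis=1)

def margin_sampling(proba):
    # Margin between the two most probable classes; query the smallest values.
    part = np.sort(proba, axis=1)
    return part[:, -1] - part[:, -2]

def entropy_sampling(proba):
    # Shannon entropy of the predicted distribution; query the largest values.
    return -np.sum(proba * np.log(proba + 1e-12), axis=1)

# Toy example: two unlabeled instances with three-class probability estimates.
proba = np.array([[0.10, 0.80, 0.10],
                  [0.34, 0.33, 0.33]])
print(least_confidence(proba), margin_sampling(proba), entropy_sampling(proba))
```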
Table 1. Classification accuracies of bagging-decision trees (BagDT) on four different ratios.
Dataset | for each labeled ratio R (10%, 20%, 30%, 40%), four accuracy columns in order: Supervised, Semi-Supervised, Active Random, Combination
anneal88.86688.52493.21194.66095.21095.98896.77099.10796.65996.54797.55198.66397.77397.21598.32698.885
arrhythmia58.42557.98162.61463.28565.06363.31468.80769.69170.14070.58969.48370.81270.34869.49873.24273.464
audiology52.25350.47456.16656.64061.08761.48265.59368.63665.13864.68473.00474.30871.66070.79175.17880.079
autos45.38144.28650.16748.28652.11956.16761.00055.61960.97658.14363.40566.38165.40559.04869.73872.690
balance-scale73.10375.83274.56075.99676.48775.54080.47681.60881.12978.57181.94181.76483.37278.09383.51883.372
breast-cancer69.92670.62870.60369.54469.96370.61670.60370.96170.96170.61670.59171.67571.29370.61670.94874.803
bridges-version145.72745.72745.72745.72753.27357.90957.27356.90960.00060.72760.00062.45561.00061.09164.90962.909
bridges-version243.00043.00043.00043.00048.45551.09159.81853.90960.00064.00063.09162.81862.09159.18263.63662.000
clevalend78.21576.55975.84975.55973.24774.57079.20478.19479.51676.83980.46281.48483.15178.51680.49581.806
cmc48.06648.32949.35649.69150.30251.45554.30751.26050.57652.33951.52753.15453.22054.10052.67453.428
column_2C76.45275.80675.80680.64580.32380.32380.32384.51681.61380.96882.25882.90382.25882.25882.90383.226
column_3C79.03279.03277.41977.74279.35578.71079.67783.22680.32380.64578.06583.87179.35579.03282.25882.903
credit-rating84.78383.76885.21785.36284.63884.78385.21785.50785.65285.50785.65286.66785.07285.94286.08786.232
cylinder-bands58.33357.03757.22259.44459.63060.00059.25960.55658.88958.70458.14859.07460.55657.96358.51959.259
dermatology71.08970.57882.77087.44784.97783.34891.03695.93191.00691.02993.48395.11393.46189.64792.65896.742
ecoli67.28268.17376.47176.19479.14476.18580.93681.23080.93680.96381.82783.32480.34879.16283.32483.316
flags50.05348.55348.52649.92151.10551.07952.02652.57955.65852.18451.13256.76356.23752.60555.28952.711
german_credit70.50069.20069.30070.80070.90069.20069.50071.40069.50070.50074.00074.00073.60071.80072.50073.000
glass50.49847.68451.99153.81064.52459.84857.46865.95260.30360.77967.72767.33869.11364.37267.22969.113
haberman72.53871.59171.55970.88272.86073.20470.88273.86071.87172.53872.20473.19471.87171.53871.83971.237
heart-statlog72.96371.48174.44475.18575.55677.03778.14880.74177.77876.66780.37084.07479.63077.03782.22282.963
hepatitis80.08381.95879.37581.91778.79280.08380.00081.87582.54282.54279.95879.41782.54280.00078.00079.875
horse-colic78.55178.54482.60584.21984.77583.13882.86885.30883.40182.59084.21985.30083.94985.03085.57185.841
hungarian-heart81.00078.63278.31082.39181.02380.70178.27678.96678.62178.64478.97783.05776.89776.88581.65579.310
hypothyroid98.35798.19998.80899.60299.07298.88799.04699.60299.09999.09999.41799.54999.28599.31199.44399.576
ionosphere75.80277.80286.64384.66788.04088.03288.33392.31786.61986.35791.18392.31790.61189.74690.61192.032
iris72.66777.33384.66786.66788.00089.33390.66790.66793.33393.33391.33392.00093.33393.33393.33392.667
kr-vs-kp95.02595.24496.34098.49997.02797.31098.09199.24997.99898.21698.74899.34398.99898.81199.21899.343
labor66.33366.33366.33366.33365.66770.33370.00066.33370.00073.66777.00087.66777.00079.00078.66779.000
letter77.95577.23581.32084.49083.77582.47086.57089.84086.57085.77089.16592.17588.33586.37590.28092.620
lymphography65.47666.14368.95267.61977.57172.38171.57171.66772.90570.85775.57179.04874.28676.23877.61979.667
mushroom99.16399.32399.729100.00099.87799.88999.914100.00099.95199.90299.975100.00099.96399.97599.975100.000
optdigits89.02186.17490.78392.82992.18990.03693.04395.97992.93690.58793.71995.71294.28890.85494.55595.498
page-blocks95.41495.46895.98097.05896.23695.92696.54797.11396.62096.58397.36997.24197.16897.00497.15097.168
pendigits93.12292.84994.79696.51695.51594.86096.06197.86296.17996.00696.87198.33596.88096.24397.36298.071
pima_diabetes71.88073.05575.13574.47975.91173.97074.74773.44575.27075.26173.84176.30776.30976.04474.49875.930
postoperative65.55665.55665.55665.55667.77871.11164.44466.66762.22267.77866.66767.77864.44467.77865.55667.778
primary-tumor29.47430.33935.95434.79535.65134.75038.61036.24838.61034.19837.74537.46940.40135.67741.57838.930
segment90.73690.77991.47294.32993.42093.11794.37297.22993.85394.50295.45597.35994.58994.89296.01797.186
sick97.64097.58797.93298.64898.03898.11898.11798.75498.11898.14498.22398.72898.27798.14498.59598.621
solar-flare67.91464.80768.43968.16469.79068.94170.08170.32270.11570.53871.53271.26070.33671.53671.44871.905
sonar59.52460.97664.88160.54866.83364.42968.71467.88169.21470.59574.04871.11970.71469.21476.47675.024
soybean64.24161.62475.98070.11979.21175.83584.63386.09184.48682.56487.55190.63586.52084.31891.35593.116
spambase89.93790.28590.54592.82791.32891.78492.34994.30592.13292.82792.80594.63192.74192.06793.21994.610
spect61.31661.49461.86463.36768.50665.34466.09966.90266.71771.97873.39080.34279.18674.89577.02576.084
sponge86.25086.25086.25086.25091.07192.50092.50092.50092.50092.50092.50092.50093.75092.50092.50093.750
tae32.37534.37540.41738.37538.33336.95838.33341.00042.95841.70840.33352.25044.37542.33353.58354.875
tic-tac-toe70.25470.98574.22176.62078.18974.32680.89583.09182.46776.40484.86486.95184.44480.68489.45791.864
vehicle64.18962.28967.15067.61566.91667.13470.44569.61870.22170.09173.17571.16173.06070.45472.10872.696
vote94.04994.95294.27695.17495.41295.41894.04996.32195.41295.64096.32795.87295.64095.64596.78196.327
vowel48.28349.19252.42454.04060.80860.70767.98070.20270.70765.75874.54577.17273.93968.48580.30383.737
waveform78.02075.60079.18079.70080.30077.54079.16081.34079.16078.38081.20081.26080.70078.16080.88081.740
wine74.31478.20380.42584.93588.75885.42591.60193.26890.45889.37993.30196.07892.15791.60192.71294.379
wisconsin-breast91.98692.41892.41695.42294.55993.70494.99496.42295.56594.27795.56595.99494.99695.27795.13796.137
zoo57.45557.45557.45557.45575.27375.27378.27385.18281.27382.27386.27388.27384.27386.27388.27393.182
Average71.27071.15873.61174.38376.21675.84777.63178.81778.12577.86379.61481.34879.91378.62381.06281.867
Table 2. Classification accuracies of random forests (RF) on four different ratios.
Dataset | for each labeled ratio R (10%, 20%, 30%, 40%), four accuracy columns in order: Supervised, Semi-Supervised, Active Random, Combination
anneal83.74083.96585.41487.97687.41887.19490.97992.20390.86990.08792.98593.76292.87592.42793.87194.871
arrhythmia56.63856.63360.40660.62359.72058.84160.39664.59959.28558.18463.27564.82162.37759.51763.93765.488
audiology42.49036.73953.51845.65256.70054.46663.79459.30861.97663.24172.13468.61768.12367.76773.87473.379
autos48.83349.71452.11951.23856.92957.45265.38166.28668.28669.26270.66774.59568.73869.73877.97679.476
balance-scale78.06777.58879.02780.46179.51978.24179.84180.16182.08479.52482.08781.93582.08180.80681.91081.129
breast-cancer69.24970.28367.13172.04467.18068.55964.31068.27666.77368.15367.19266.44167.10669.21264.75468.596
bridges-version144.63644.63644.63644.63644.63638.00046.54544.72746.63644.54552.54545.54551.54545.54552.27355.000
bridges-version241.81841.81841.81841.81845.54537.00049.27346.45548.36440.00053.18245.54553.27347.54553.18252.273
clevalend76.84975.17279.50578.81780.16179.49583.43081.10881.08679.77482.80681.77482.45282.10883.77481.806
cmc48.88148.40650.10248.40149.96451.32251.79850.23951.52551.73050.91452.54451.11652.00050.90852.200
column_2C77.09776.77477.41979.35580.00079.67780.96881.29081.93581.93583.54883.87183.87181.61382.58185.484
column_3C77.74279.03279.67782.25882.90382.25884.51682.90382.58183.22681.29084.83981.93582.25881.29084.194
credit-rating82.31982.46483.47885.21783.76884.20384.49385.79783.76884.20384.92885.36284.63884.34884.78386.087
cylinder-bands59.25958.51963.33364.25965.18559.44467.77865.74167.59360.18570.18570.00067.22260.00071.29670.185
dermatology78.70979.79786.87792.59890.96189.59594.80595.61693.71693.72495.09096.45695.63894.26493.98696.179
ecoli74.09172.90677.67478.57480.33977.96884.18984.77784.18980.65184.22585.69583.30784.21685.41086.292
flags46.44741.78944.39548.42152.76338.71152.71151.05354.73745.92150.13252.18454.28947.00057.78952.132
german_credit72.90072.20073.30072.70074.00073.60073.90075.60073.90073.70075.30075.30074.90071.60074.50075.400
glass55.21653.33357.98755.21666.38566.84068.70168.24768.72369.15671.94875.71472.38167.70674.76274.286
haberman62.04365.65666.34464.25868.59169.90367.26968.58166.97867.63466.35569.23767.02268.98966.30166.677
heart-statlog74.07475.18574.81577.03778.51978.51978.14881.11179.25978.51981.48183.33380.74178.88982.22281.852
hepatitis80.66780.70880.12583.79278.16779.45881.25083.12582.00082.54278.66784.45881.91781.87578.66785.125
horse-colic79.06979.59579.85782.59084.22783.66483.94984.76783.68682.05784.21985.57883.94185.85684.76786.119
hungarian-heart79.27679.26479.98982.00083.04683.04681.00082.01181.34581.34580.28781.00081.64481.65581.29980.966
hypothyroid95.73395.52096.66099.25897.88097.42998.56999.44498.33098.03998.96699.39198.99398.59698.91499.364
ionosphere81.80280.38186.92988.60390.60389.74690.88992.88991.46090.60392.88993.46091.74691.74693.17594.032
iris83.33383.33388.00090.00093.33392.00096.00094.66796.00095.33396.00094.66795.33395.33394.66795.333
kr-vs-kp95.77694.90096.12098.62396.77896.62197.49799.28097.55997.46598.24899.40698.15498.21698.93799.312
labor65.00065.00065.00065.00073.66769.66782.00080.33382.00077.00084.00082.66784.00073.33382.66782.333
letter84.79584.21087.94090.91589.84589.62592.21595.30092.21591.87094.05096.29093.41592.95595.01096.435
lymphography68.23867.57173.61975.00074.19076.28677.61976.23874.33376.33381.66780.33379.04877.61985.14385.095
mushroom99.74199.76699.914100.00099.96399.963100.000100.00099.98899.98899.975100.000100.00099.988100.000100.000
optdigits94.57394.43195.76597.47396.40696.05097.01198.32797.13596.76297.43898.32797.43896.88697.86598.363
page-blocks95.87095.79796.21897.35196.49296.45596.67597.49796.74896.76697.00497.55296.98596.91297.47997.533
pendigits97.14396.98097.60798.46398.00897.84498.31799.23698.37298.35398.77299.24598.67298.43598.91799.172
pima_diabetes74.35472.40474.23174.61275.65674.61475.92176.30477.09276.83275.93074.87975.53175.13874.62476.048
postoperative68.88968.88968.88968.88962.22266.66762.22267.77863.33364.44464.44465.55660.00065.55663.33363.333
primary-tumor33.60134.76837.42433.59237.40638.59243.03940.38343.03940.34842.73643.91341.55140.64243.62743.039
segment93.33393.46394.37296.53795.28195.23895.84497.96595.62895.67196.97098.00996.53796.36497.40398.225
sick96.52796.39496.65998.59597.45597.45597.77398.54297.66797.53497.95898.46297.90597.82698.14498.436
solar-flare66.76965.99069.09869.37570.51270.05970.13170.46470.07370.03769.75871.55070.04169.68170.51372.590
sonar62.02464.02467.33369.64369.23869.73875.92979.33374.97675.97676.92981.23879.78675.95282.23882.690
soybean67.19964.42976.28175.25179.06677.00685.21187.99086.81684.47489.74492.82288.28688.57291.36093.110
spambase92.87192.56693.08894.58893.52393.30594.19795.54494.30594.11094.67595.67594.52394.32794.93695.588
spect69.27069.82568.52269.27072.39572.13371.28478.83775.58271.94878.15979.19380.19677.76579.89577.381
sponge92.50092.50092.50092.50092.50092.50092.50092.50092.50092.50092.50092.50093.75093.75092.50093.750
tae36.37537.75041.75041.66739.70837.04241.66743.54240.37539.08349.04249.62546.33345.04256.33357.667
tic-tac-toe76.51274.94278.39683.29681.73580.17184.34391.54484.24081.11088.51595.51089.76482.15691.54996.658
vehicle65.13264.31170.33969.15171.76170.10470.22773.16770.34070.69774.11573.52771.99271.16974.59173.161
vote94.50394.27694.95896.32795.19694.50394.94796.33295.41295.18096.78696.33296.55996.32797.01496.781
vowel34.44433.73744.84842.72756.56655.65771.31371.11171.01069.89984.54588.78880.80880.00091.61696.162
waveform83.66082.94084.34083.92084.30084.14084.14084.62084.14083.86084.44084.84085.10083.96085.22084.840
wine86.04686.60185.98094.41294.37996.60195.00098.30196.11196.66797.22298.33398.33397.22297.22298.301
wisconsin-breast94.41494.98695.13097.28096.13595.41896.42096.85196.70696.56596.28096.56595.99496.28096.28096.422
zoo75.27375.27375.27375.27379.27377.27379.27388.18285.27379.27387.27393.18288.27384.27387.27392.182
Average73.01572.73075.13076.13777.23876.31679.04780.11879.27478.25580.95481.82680.69479.43681.90182.701
Table 3. Classification accuracies of rotation forest (RotF) on four different ratios.
Dataset | for each labeled ratio R (10%, 20%, 30%, 40%), four accuracy columns in order: Supervised, Semi-Supervised, Active Random, Combination
anneal85.18586.07586.85991.08088.20691.54193.21194.20693.43293.54395.10496.21594.87894.98896.10596.880
arrhythmia59.27164.37761.51266.35767.93767.70567.26670.13069.26668.15069.05371.24268.15972.35372.13571.469
audiology51.75951.75957.58958.37965.92963.77567.31270.81065.94967.29272.09574.78370.29670.77176.48279.644
autos47.33345.40554.14352.26253.59559.40566.33360.85766.40568.21469.69069.76268.73872.64375.59572.643
balance-scale83.66983.51385.26184.47084.95685.75086.89286.87987.83787.04687.67388.96688.49089.60189.91391.027
breast-cancer64.33566.09669.22467.18069.28670.32069.96370.34573.04272.70973.06773.44869.22469.59469.54472.007
bridges-version150.45550.45550.45550.45556.09153.18258.09151.09159.27365.63660.18263.81858.18264.72767.72760.091
bridges-version248.45548.45548.45548.45554.27351.27362.90957.81854.72758.27358.18261.81857.09158.36464.90966.818
clevalend74.17277.52776.89279.46281.81779.86082.16181.15181.15181.80682.12982.19483.46282.47382.81784.441
cmc49.76449.96849.49249.89949.49252.47251.04851.66151.38853.76454.78052.06452.67754.03552.88253.899
column_2C80.64580.64578.71082.25879.03280.32383.54883.22681.61380.64581.61382.90382.25882.58183.22684.516
column_3C80.00074.83980.00082.25879.03280.64581.61386.45280.32380.00081.61386.77483.87184.83984.19483.871
credit-rating83.47883.18884.49385.07285.07285.07284.63885.50785.50785.79786.52286.23285.07285.65287.10186.957
cylinder-bands60.00061.66763.51963.88964.44463.51969.81568.33370.00069.25969.81575.00070.55668.14876.85275.185
dermatology85.00884.45292.34295.90195.63194.27295.36897.29094.82794.26496.18698.10196.74296.73497.83097.553
ecoli75.30373.52975.27683.30780.63379.75083.91386.00783.91383.35185.07186.29283.60184.19885.10787.460
flags50.13252.15850.18451.57950.65851.60555.28953.26358.31654.78955.76357.73756.31653.76357.26360.026
german_credit68.50071.50070.90072.30073.20073.50073.20073.30073.20074.10074.00073.70072.90073.90074.10076.000
glass51.40754.61057.51156.03963.52863.52863.50670.56361.71063.20367.79269.67564.00464.97870.51970.498
haberman69.25869.93571.86071.52772.50573.16171.55974.84972.86072.51674.51673.53873.84973.83970.25872.860
heart-statlog76.29672.59379.63077.40780.00078.14879.63080.37079.25980.00082.22280.74179.25978.88982.22280.741
hepatitis82.54279.83380.62577.91782.62580.70882.50082.58382.50084.50080.58382.54281.16782.50080.66787.042
horse-colic77.15577.14778.54479.32482.62883.43182.58383.39382.85383.40880.43583.96483.40884.22783.97985.045
hungarian-heart81.31082.33382.66782.63283.34583.36879.63280.98982.03480.02381.64482.34581.31082.35682.01181.333
hypothyroid96.87397.27097.61599.49698.72898.64898.72899.60398.80798.30498.78199.36499.07398.78199.12699.417
ionosphere80.09582.08789.46890.35789.19090.88190.88994.32591.46892.03291.48494.60391.46092.02492.89794.032
iris85.33385.33394.00088.00092.66793.33394.00094.66796.66795.33394.66796.00097.33395.33396.00096.000
kr-vs-kp94.93294.64796.84098.49997.09096.93398.15499.15697.46797.24598.71799.03197.24897.93598.59399.343
labor72.66772.66772.66772.66773.66766.33382.33380.66782.33377.00078.33385.66778.33378.66784.33390.000
letter81.56581.56584.68088.22087.23087.19089.82593.27089.82589.41092.01594.57591.63590.89093.40095.260
lymphography69.61971.71471.71473.04874.90578.38174.23879.00071.61978.33380.33377.14382.38177.61981.00083.143
mushroom99.64399.64399.889100.00099.91499.91499.926100.00099.95199.92699.951100.00099.95199.93899.963100.000
optdigits92.74092.54494.64495.81994.91195.07195.46397.18995.92595.46396.49597.74096.33596.13997.15397.473
page-blocks95.98095.43296.07297.29696.52896.25496.76697.46096.58396.71197.04097.60696.93097.05897.20597.552
pendigits96.88996.93497.62698.69997.77197.92698.53599.04598.59098.39998.82699.11898.75498.67298.91799.118
pima_diabetes72.01172.92774.21974.35274.22276.30077.21876.17476.17474.61475.00576.30475.13874.61075.14075.781
postoperative64.44464.44464.44464.44470.00066.66764.44466.66770.00070.00066.66768.88964.44467.77866.66770.000
primary-tumor32.70130.94536.88138.01237.41538.33341.85442.70941.85440.96339.52842.44243.68142.47843.35142.166
segment93.46393.85394.54595.41195.41194.45995.88797.96596.19096.10497.18697.96596.49496.45097.14398.139
sick97.72097.19097.90598.62198.09198.03898.14498.86098.22497.98598.33098.91398.38398.25098.70199.046
solar-flare66.65768.46870.01771.15270.76670.82170.54171.11570.72470.36070.81772.45470.32871.67471.55173.115
sonar62.42964.95267.28664.42967.40567.28670.14375.42973.54873.57179.78673.07176.92979.28681.33379.310
soybean72.59474.20581.83183.28286.08785.78989.15692.09388.57488.56892.82293.84990.77491.21393.12994.286
spambase91.63291.82891.95893.74092.80693.30693.41495.08893.30693.89294.08895.30593.78493.69794.76295.196
spect67.21967.60368.16668.13772.34972.02573.35182.38874.59473.51577.09474.40274.82779.22477.17177.828
sponge92.50092.50092.50092.50092.50092.50092.50092.50092.50092.50092.50092.50091.07192.50093.75092.500
tae33.70839.04244.37543.04241.75037.70846.33341.00044.33348.33350.33356.33347.58350.25056.87556.875
tic-tac-toe74.63774.53676.72480.48480.58681.10983.81389.14884.76683.40989.03894.67789.87888.31593.94096.453
vehicle69.97966.08571.53571.62573.88971.40675.89276.71473.64172.58074.71177.07675.41672.46475.30576.134
vote91.72391.04793.10894.93795.64594.27194.96395.18094.24495.19095.63496.32796.32794.95296.55996.559
vowel45.15245.65756.66757.98067.37462.22274.64678.38477.37473.63684.74788.88984.14182.12191.31395.253
waveform81.32081.82082.48082.20082.18083.14082.70083.34082.70083.82083.16082.94083.32084.08083.18083.880
wine87.68086.53687.15796.07890.42592.15795.55694.96793.33393.85697.19096.63494.41296.07896.07896.634
wisconsin-breast95.41895.27795.99497.13796.27796.42296.56596.99496.84996.70896.70896.85196.56396.56596.70897.280
zoo70.45570.45570.45570.45581.27378.27381.36487.27383.27383.27388.27393.09188.27389.27389.27392.182
Average73.91374.20576.35677.26478.41878.17180.16981.26380.30680.42481.63682.97581.21381.64583.16383.963
Table 4. Classification accuracies of XGBoost on four different ratios.
Dataset | for each labeled ratio R (10%, 20%, 30%, 40%), four accuracy columns in order: Supervised, Semi-Supervised, Active Random, Combination
anneal92.75992.64895.10096.99496.32396.10196.88198.88596.99396.88197.43798.88497.43897.10597.77298.884
arrhythmia55.53156.42063.09261.94763.28564.83165.49868.15565.49868.37268.17470.37267.49367.05370.35772.150
audiology48.26145.21752.66850.11957.13458.02462.90561.10763.36065.53468.10369.03266.36467.66872.96476.976
autos46.33344.85751.14350.73858.95255.54865.35765.31064.47664.42970.64377.57172.69072.16777.95278.976
balance-scale76.79275.20277.74779.81380.63280.95282.39681.12682.56882.40183.68983.03983.03983.83584.94483.041
breast-cancer65.78864.70466.08467.48867.16766.81066.39267.85765.00067.47566.39268.14066.04768.84265.38271.613
bridges-version142.81842.81842.81842.81844.81847.63654.45549.54560.90957.18254.36459.00056.36454.27357.00060.909
bridges-version243.54543.54543.54543.54542.72743.72755.27354.09160.18257.27356.18258.18257.18256.27358.00067.909
clevalend71.17272.51675.84973.79679.54880.86080.78580.50579.48480.47384.43080.47382.79681.14082.44181.785
cmc49.22049.48850.77650.57651.25352.95153.35854.31253.56354.98754.44255.05253.96654.84854.30454.852
column_2C76.12976.45276.12980.64581.61380.96880.32383.54879.67780.96881.61381.93585.80680.00083.22683.226
column_3C79.03277.41976.12978.06581.29081.29080.64581.61380.64580.32379.35582.58180.32382.90380.96884.839
credit-rating82.17482.02983.47883.47883.18884.05883.91385.65284.20384.92884.63886.81284.78385.07286.52286.522
cylinder-bands65.92663.33370.37069.44471.66771.66771.85274.63075.74174.81577.96378.14875.00074.44479.25980.000
dermatology83.88985.00089.87292.34293.18392.10294.82096.74994.82094.00295.37596.19494.82794.55095.37597.020
ecoli68.14667.27370.86576.20375.00075.00081.83682.12181.83680.33981.82784.80480.32181.22182.72784.528
flags49.13248.63248.57951.05348.63250.71152.23754.71153.23751.63253.63249.00053.65850.57959.73756.289
german_credit68.90068.80070.10070.70072.50071.50073.50073.80073.50071.90072.80073.90073.20073.80074.10075.100
glass46.81846.36454.26455.13062.12160.71464.97863.11764.54565.95266.34267.42467.35967.33869.52472.381
haberman66.59167.26969.29071.90369.24769.57067.64568.91468.29067.60267.33370.28070.92569.93566.65667.312
heart-statlog71.85272.96374.44474.07475.92674.07475.55675.18575.92676.29675.92680.37074.81574.44480.37078.519
hepatitis74.04273.41776.00077.33374.79275.33379.29277.33380.54280.58377.25080.00077.95878.00077.20881.167
horse-colic75.78875.55680.16579.09279.61780.72181.80982.87579.61080.69181.23984.76780.97680.69881.24685.586
hungarian-heart80.98979.64479.31081.97780.66781.69079.96678.56378.93178.93176.86278.60979.94379.23079.94380.264
hypothyroid98.30498.30498.43699.47098.78198.75499.04699.68299.04699.04699.31199.68299.28499.20599.52399.655
ionosphere80.65980.65984.63584.93786.64386.36588.60390.33387.18385.19088.61192.03291.17588.31790.89792.032
iris84.66782.00086.66791.33392.00091.33394.66793.33394.66793.33394.66795.33394.66794.66796.00096.000
kr-vs-kp96.33996.15197.05998.56197.68597.43498.27999.43798.31098.21698.87399.50098.81198.78099.06299.343
labor65.00065.00065.00065.00064.66772.00073.66759.33373.66778.66779.00078.66779.00082.66780.33382.000
letter82.32582.09584.90587.82086.93086.43089.33092.05089.33088.79590.91093.23590.58589.99091.76093.425
lymphography70.23869.57174.28666.95276.33375.00076.28677.00077.76276.33378.33382.52479.66777.66783.09585.190
mushroom99.72999.71799.877100.00099.91499.91499.914100.00099.91499.96399.963100.00099.951100.00099.963100.000
optdigits91.29990.69493.34595.07194.53794.43195.19697.34995.21495.10796.06897.33196.05095.51696.37097.331
page-blocks95.83495.88996.19997.29696.38296.43796.60297.38796.58396.58397.18697.22396.85796.98597.24197.332
pendigits94.64294.21490.89797.84496.76196.54397.39898.85497.38097.41697.97198.89097.72697.68998.30898.836
pima_diabetes72.13870.97672.14374.09674.48275.78474.22975.51174.62673.95973.71774.09874.62174.48972.93175.665
postoperative60.00060.00060.00060.00058.88960.00060.00067.77858.88958.88961.11161.11157.77857.77857.77855.556
primary-tumor34.77737.73640.67739.51938.59240.96343.61940.69543.61944.79543.32446.31042.75443.04844.82243.057
segment92.85791.94893.68095.84494.71994.15695.71498.05295.67195.58496.62398.39896.36496.19097.61998.312
sick97.61497.64097.79998.86098.17198.25098.35699.04698.35698.33098.35699.04698.30398.25098.78199.072
solar-flare68.91168.78670.73970.23869.48069.08970.39270.95670.40870.34570.12272.73070.51372.11872.12772.945
sonar58.14359.07163.83362.97668.71465.78671.61969.61975.40571.11977.33374.95274.92974.85777.38180.667
soybean72.59872.15781.24979.77685.20084.32789.00990.47389.45087.83590.19093.84790.04090.33292.97193.252
spambase91.48090.93692.48093.82793.28492.61093.28494.87093.58893.30694.39295.15394.37193.87094.65394.957
spect62.93663.12963.47865.16666.86369.60167.95970.15770.46769.46473.85373.72270.02772.09476.07076.400
sponge92.50092.50092.50092.50092.50092.50091.07192.50093.75093.75093.75092.32193.75092.32193.75093.571
tae34.41731.75037.66737.08334.95839.00039.08348.29241.04240.33349.66742.29246.33346.33357.62553.000
tic-tac-toe78.19578.92281.42787.06788.20786.85191.13295.82791.75791.23795.09597.70595.61694.99196.34698.224
vehicle61.94163.48968.33168.92267.26567.03469.63472.22470.33970.57173.64673.63771.40870.45574.93375.312
vote95.18595.18595.41296.10595.64094.95895.87795.64095.18595.64596.09495.64095.40795.41295.64595.640
vowel49.29350.70757.27356.66763.33362.32372.22274.84871.81870.60678.28382.52577.27375.45583.93988.384
waveform81.78080.48082.16083.16082.34082.22083.16083.78083.16083.44083.62084.38084.28083.96084.30084.580
wine83.20383.75885.98087.64792.12492.09293.85694.96793.26893.82494.41294.41294.41293.23594.93596.634
wisconsin-breast93.56194.27594.13596.42094.41894.13395.13795.99495.13794.84995.28095.85195.42294.70894.99495.851
zoo60.54560.54560.54560.54579.27380.27383.27384.09187.27387.27390.27393.09190.27389.27389.27396.000
Average72.41372.17974.55775.45476.73476.97178.89679.63279.37879.23280.47481.64080.38080.11081.84483.056
Table 5. Classification accuracies of voting (RF, RotF, XGBoost) on four different ratios.
Dataset | for each labeled ratio R (10%, 20%, 30%, 40%), four accuracy columns in order: Supervised, Semi-Supervised, Active Random, Combination
anneal90.86489.63492.86994.64894.20794.09696.76898.65896.32396.66097.55299.10797.55297.66398.21898.886
arrhythmia60.83658.86563.73465.72967.26666.16966.60468.36267.71567.27170.82171.70569.47867.26172.57573.681
audiology51.75950.00058.41957.03659.84262.45167.27366.38366.81868.12372.96473.45868.57771.16676.46278.241
autos49.26245.90551.66755.61959.90559.35767.81065.78668.35767.81074.54878.09573.14373.14378.45279.857
balance-scale80.94779.35581.42383.18783.03483.51085.28984.00284.80884.64785.12384.81386.07886.40087.35587.353
breast-cancer66.81068.21466.79869.27367.51267.89468.16572.75969.18769.55767.47571.34268.49868.86767.11873.054
bridges-version146.54546.54546.54546.54553.27349.45556.27357.18262.90959.90960.18265.72760.18265.72765.72767.545
bridges-version247.54547.54547.54547.54547.54551.09159.81854.45559.18261.90962.09165.90961.09164.81867.54563.909
clevalend72.49574.82878.17279.81782.18379.87181.43084.14081.44181.78583.77482.79683.77483.44184.45282.462
cmc50.37650.64551.18751.73251.73153.29252.88352.75352.47555.66153.96754.23852.67955.93353.42253.972
column_2C78.38778.06578.71080.00080.64581.29083.22685.48482.25882.25883.22683.54885.16183.22686.12985.484
column_3C79.67778.71079.03282.25881.61381.29083.87186.12983.54882.58182.25883.87182.58183.22682.25883.871
credit-rating83.76884.49384.49385.79785.07283.76885.36286.23285.65285.50785.94286.23286.52286.23286.81287.536
cylinder-bands67.40766.85270.55670.00072.22270.74175.55673.33375.92670.92678.14878.51976.48172.96381.11182.037
dermatology86.08986.62293.43195.61695.90195.62396.45697.02096.45697.00597.56897.55396.74297.27596.73497.568
ecoli74.70673.77976.48880.94579.13578.54783.02184.82283.02181.81885.70485.97184.20783.03085.72287.469
flags53.21147.57953.26351.05355.92152.18455.31656.23759.39556.28956.28957.78955.23756.78960.81658.921
german_credit70.90070.70071.60073.90074.30074.20074.80075.50074.80075.40074.90074.80075.70073.90075.50076.300
glass54.71955.17360.39061.71066.36465.90967.79269.22166.84068.24771.97074.76270.51970.02275.17376.212
haberman70.59170.25867.62469.92570.23770.55969.28073.15170.58168.94671.57071.23771.26970.91470.89270.570
heart-statlog72.96374.07476.29677.77878.88976.66777.77880.74177.03777.77879.63082.22279.63079.25982.96381.111
hepatitis79.25081.83376.70881.87578.79280.66780.00083.79281.87581.87579.87582.50077.95881.83380.50082.542
horse-colic77.99578.79981.80282.88381.79483.95683.69484.50583.13883.94982.86886.39684.76784.22784.23486.404
hungarian-heart80.65580.62180.97783.39182.69083.34580.63281.67882.00082.35678.89779.96680.63280.96682.33380.644
hypothyroid98.01397.82798.46399.70998.94098.78199.09999.70999.04699.12599.28499.73599.23199.25899.49799.655
ionosphere81.51680.94488.06391.20690.34189.77890.31793.17590.32591.17592.60392.60391.46092.88992.88993.746
iris82.00082.66791.33392.66792.66792.66794.66795.33394.66794.66795.33394.66794.66794.66795.33395.333
kr-vs-kp96.40296.30897.12198.68697.74797.40398.27999.46898.43598.31098.96799.37498.84298.77999.12499.437
labor68.00068.00068.00068.00075.33373.66778.66774.66778.66778.66780.33380.66780.33379.00082.66784.667
letter85.62585.41088.46091.85090.35589.95092.36095.31592.36091.95093.95596.25593.45093.22594.87596.335
lymphography71.00071.71475.66769.71476.90580.33377.66777.52476.38176.28681.00081.71480.38180.33383.76283.667
mushroom99.72999.72999.914100.00099.91499.92699.926100.00099.95199.92699.975100.00099.96399.96399.975100.000
optdigits94.21793.96895.53497.38496.24696.15796.92298.23897.06496.77997.45698.27497.17197.04697.56298.025
page-blocks95.98096.07196.34697.64396.60296.62096.78497.57096.85796.83997.33397.42497.07797.18697.47997.607
pendigits97.03497.05297.49898.71798.00897.99998.54499.32798.40898.58198.79099.30098.69998.58198.99999.263
pima_diabetes72.92574.23874.23176.17675.13776.04976.31675.65877.09375.53175.54075.91175.39874.87974.62675.660
postoperative67.77867.77867.77867.77864.44465.55664.44468.88963.33365.55663.33365.55662.22264.44463.33364.444
primary-tumor35.36536.25737.72736.53339.49240.69546.58643.92246.58645.08943.03945.98944.84042.76345.98944.831
segment93.85393.85394.93596.66795.49895.84496.10498.48596.27796.32097.18698.48596.84096.88397.83598.528
sick97.82697.72097.98598.59598.19798.30398.35699.07298.35698.27698.40999.01998.38398.40998.80799.019
solar-flare69.17168.08170.97571.10170.27570.45870.64372.02670.47770.52470.39472.74170.80570.93170.97172.838
sonar62.45261.45266.78666.35770.19068.28674.02475.50073.52474.45279.26276.90578.78676.88178.85781.214
soybean75.81674.93882.71185.04985.93186.22890.03092.53090.47189.59592.23494.29091.21192.23894.15094.578
spambase92.34992.08893.24194.37093.65493.54594.04595.34994.17594.08894.63195.54494.47994.28494.97995.327
spect63.98564.91165.65266.59273.42171.90269.77972.20373.33772.45777.84274.45575.13576.76381.06877.279
sponge92.50092.50092.50092.50092.50092.50092.50092.50092.50092.50092.50092.50092.50092.50093.75092.500
tae35.70837.75043.70841.70838.95837.00041.66743.58342.33341.08347.66751.70851.00049.00056.91756.333
tic-tac-toe78.50278.71280.90786.63487.47786.43590.91996.03690.61188.73294.78198.33095.19892.17797.59898.641
vehicle66.20366.32671.17569.27371.05270.92072.94475.66172.82569.86475.19073.52972.46971.16476.36176.480
vote94.95894.27195.18096.55495.18595.41896.09996.32795.87296.09496.55496.09995.86796.32196.09995.872
vowel49.69749.79860.70759.09168.48565.05177.98080.10177.37475.96086.16291.21284.94983.83892.22295.758
waveform83.58083.32083.98084.62084.18084.10084.64084.88084.64084.42085.40084.96085.14085.10085.38085.220
wine86.56984.86985.45896.66793.79195.45896.11198.33396.11194.96796.63497.74596.66796.63496.63497.745
wisconsin-breast94.84994.56395.42296.70895.70895.27595.99496.28095.84995.70895.99496.28096.13795.56596.28096.280
zoo75.27375.27375.27375.27383.27382.27382.27391.18287.27386.27388.27395.09188.27388.27389.27394.273
Average74.66674.50076.77278.03878.90978.73780.61481.83980.96280.69282.24483.43581.92881.96883.74284.294
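Table 5 refers to a soft-voting ensemble over RF, RotF, and XGBoost. As a rough illustration of such an ensemble, and not this work's exact configuration, the scikit-learn sketch below averages the class probabilities of three tree ensembles; since rotation forest and XGBoost are not part of scikit-learn, ExtraTrees and GradientBoosting are used here as stand-in members.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
vote = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("et", ExtraTreesClassifier(n_estimators=100, random_state=0)),  # stand-in for RotF
                ("gb", GradientBoostingClassifier(random_state=0))],              # stand-in for XGBoost
    voting="soft")  # average the members' class probabilities
print(cross_val_score(vote, X, y, cv=5).mean())
```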
Table 6. Friedman aligned ranking test and Holm's post hoc test regarding BagDT (α = 0.10).
Labeled Ratio (R) | Classifier (BagDT) | Friedman p-Value (Statistic) | Friedman Ranking | Holm's Post Hoc Test p-Value | Null Hypothesis
10% | Combination | 0.00000 (76.02618) | 55.83636 | -- | --
10% | Active Random | | 81.33636 | 0.03566 | rejected
10% | Supervised | | 148.77273 | 0.00000 | rejected
10% | Semi-supervised | | 156.05455 | 0.00000 | rejected
20% | Combination | 0.00000 (59.02259) | 57.93636 | -- | --
20% | Active Random | | 91.92727 | 0.00510 | rejected
20% | Supervised | | 141.07273 | 0.00000 | rejected
20% | Semi-supervised | | 151.06364 | 0.00000 | rejected
30% | Combination | 0.00000 (78.32732) | 48.57273 | -- | --
30% | Active Random | | 91.35455 | 0.00042 | rejected
30% | Supervised | | 148.44545 | 0.00000 | rejected
30% | Semi-supervised | | 153.62727 | 0.00000 | rejected
40% | Combination | 0.00000 (65.36953) | 61.73636 | -- | --
40% | Active Random | | 85.82727 | 0.04717 | rejected
40% | Supervised | | 129.42727 | 0.00000 | rejected
40% | Semi-supervised | | 165.00909 | 0.00000 | rejected
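The Holm's post hoc decisions reported in Tables 6-14 follow the standard step-down procedure: the m pairwise p-values against the control method (the combination scheme, which holds the best Friedman ranking) are ordered, the smallest is compared with α/m, the next with α/(m-1), and so on, stopping at the first non-rejection. A minimal sketch, with an illustrative function name and the 10% BagDT block's p-values as input:

```python
import numpy as np

def holm_test(p_values, alpha=0.10):
    """Holm's step-down correction over m pairwise p-values (illustrative)."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)                 # most significant hypothesis first
    m = len(p)
    rejected = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):  # step-down threshold alpha/m, alpha/(m-1), ...
            rejected[idx] = True
        else:
            break                         # retain all remaining null hypotheses
    return rejected

# Example: the three comparisons of the 10% BagDT block are all rejected.
print(holm_test([0.03566, 0.00000, 0.00000], alpha=0.10))
```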
Table 7. Friedman aligned ranking test and Holm's post hoc test regarding RF (α = 0.10).
Labeled Ratio (R) | Classifier (Random Forests) | Friedman p-Value (Statistic) | Friedman Ranking | Holm's Post Hoc Test p-Value | Null Hypothesis
10% | Combination | 0.00000 (90.74521) | 50.23636 | -- | --
10% | Active Random | | 80.05455 | 0.01403 | rejected
10% | Supervised | | 153.30000 | 0.00000 | rejected
10% | Semi-supervised | | 158.40909 | 0.00000 | rejected
20% | Combination | 0.00000 (76.85983) | 52.23636 | -- | --
20% | Active Random | | 88.34545 | 0.00293 | rejected
20% | Supervised | | 142.39091 | 0.00000 | rejected
20% | Semi-supervised | | 159.02727 | 0.00000 | rejected
30% | Combination | 0.00000 (76.55845) | 55.79091 | -- | --
30% | Active Random | | 83.12727 | 0.02432 | rejected
30% | Supervised | | 140.62727 | 0.00000 | rejected
30% | Semi-supervised | | 162.45455 | 0.00000 | rejected
40% | Combination | 0.00000 (59.66724) | 61.71818 | -- | --
40% | Active Random | | 90.20000 | 0.01895 | rejected
40% | Supervised | | 129.20000 | 0.00000 | rejected
40% | Semi-supervised | | 160.88182 | 0.00000 | rejected
Table 8. Friedman aligned ranking test and Holm's post hoc test regarding RotF (α = 0.10).
Labeled Ratio (R) | Classifier (Rotation Forest) | Friedman p-Value (Statistic) | Friedman Ranking | Holm's Post Hoc Test p-Value | Null Hypothesis
10% | Combination | 0.00000 (86.92304) | 53.12727 | -- | --
10% | Active Random | | 78.83636 | 0.03417 | rejected
10% | Semi-supervised | | 149.38182 | 0.00000 | rejected
10% | Supervised | | 160.65455 | 0.00000 | rejected
20% | Combination | 0.00000 (68.42331) | 53.53636 | -- | --
20% | Active Random | | 91.30909 | 0.00186 | rejected
20% | Supervised | | 148.11818 | 0.00000 | rejected
20% | Semi-supervised | | 149.03636 | 0.00000 | rejected
30% | Combination | 0.00000 (61.06200) | 54.55455 | -- | --
30% | Active Random | | 95.74545 | 0.00069 | rejected
30% | Semi-supervised | | 145.80909 | 0.00000 | rejected
30% | Supervised | | 145.89091 | 0.00000 | rejected
40% | Combination | 0.00000 (71.23507) | 56.67273 | -- | --
40% | Active Random | | 84.93636 | 0.01989 | rejected
40% | Semi-supervised | | 141.01818 | 0.00000 | rejected
40% | Supervised | | 159.37273 | 0.00000 | rejected
Table 9. Friedman aligned ranking test and Holm's post hoc test regarding extreme gradient boosted trees (XGBoost) (α = 0.10).
Labeled Ratio (R) | Classifier (XGBoost) | Friedman p-Value (Statistic) | Friedman Ranking | Holm's Post Hoc Test p-Value | Null Hypothesis
10% | Combination | 0.00000 (100.32586) | 48.66364 | -- | --
10% | Active Random | | 75.80000 | 0.02538 | rejected
10% | Supervised | | 153.99091 | 0.00000 | rejected
10% | Semi-supervised | | 163.54545 | 0.00000 | rejected
20% | Combination | 0.00000 (79.07341) | 53.10000 | -- | --
20% | Active Random | | 83.52727 | 0.01218 | rejected
20% | Semi-supervised | | 149.70000 | 0.00000 | rejected
20% | Supervised | | 155.67273 | 0.00000 | rejected
30% | Combination | 0.00000 (64.21611) | 53.01818 | -- | --
30% | Active Random | | 95.96364 | 0.00040 | rejected
30% | Supervised | | 144.96364 | 0.00000 | rejected
30% | Semi-supervised | | 148.05455 | 0.00000 | rejected
40% | Combination | 0.00000 (73.61879) | 52.05455 | -- | --
40% | Active Random | | 89.32727 | 0.00214 | rejected
40% | Supervised | | 147.56364 | 0.00000 | rejected
40% | Semi-supervised | | 153.05455 | 0.00000 | rejected
Table 10. Friedman aligned ranking test and Holm's post hoc test regarding voting (RF, RotF, XGBoost) (α = 0.10).
Labeled Ratio (R) | Classifier (Voting (RF, RotF, XGBoost)) | Friedman p-Value (Statistic) | Friedman Ranking | Holm's Post Hoc Test p-Value | Null Hypothesis
10% | Combination | 0.00000 (94.26061) | 47.87273 | -- | --
10% | Active Random | | 80.97273 | 0.00639 | rejected
10% | Supervised | | 155.69091 | 0.00000 | rejected
10% | Semi-supervised | | 157.46364 | 0.00000 | rejected
20% | Combination | 0.00000 (84.82332) | 47.57273 | -- | --
20% | Active Random | | 87.92727 | 0.00089 | rejected
20% | Supervised | | 151.62727 | 0.00000 | rejected
20% | Semi-supervised | | 154.87273 | 0.00000 | rejected
30% | Combination | 0.00000 (71.00226) | 53.01818 | -- | --
30% | Active Random | | 89.65455 | 0.00254 | rejected
30% | Supervised | | 145.72727 | 0.00000 | rejected
30% | Semi-supervised | | 153.60000 | 0.00000 | rejected
40% | Combination | 0.00000 (76.77322) | 58.08182 | -- | --
40% | Active Random | | 78.40909 | 0.09400 | rejected
40% | Semi-supervised | | 150.10909 | 0.00000 | rejected
40% | Supervised | | 155.40000 | 0.00000 | rejected
Table 11. Friedman aligned ranking test and Holm's post hoc test regarding k-nearest neighbors (5NN) (α = 0.10).
Labeled Ratio (R) | Classifier (5NN) | Friedman p-Value (Statistic) | Friedman Ranking | Holm's Post Hoc Test p-Value | Null Hypothesis
10% | Combination | 0.00000 (71.86930) | 58.98182 | -- | --
10% | Active Random | | 81.24545 | 0.06662 | rejected
10% | Supervised | | 141.71818 | 0.00000 | rejected
10% | Semi-supervised | | 160.05455 | 0.00000 | rejected
20% | Combination | 0.00000 (73.09232) | 56.09091 | -- | --
20% | Active Random | | 84.64545 | 0.01865 | rejected
20% | Supervised | | 140.45455 | 0.00000 | rejected
20% | Semi-supervised | | 160.80909 | 0.00000 | rejected
30% | Combination | 0.00000 (62.25286) | 57.79091 | -- | --
30% | Active Random | | 89.60000 | 0.00878 | rejected
30% | Supervised | | 143.74545 | 0.00000 | rejected
30% | Semi-supervised | | 150.86364 | 0.00000 | rejected
40% | Combination | 0.00000 (74.33081) | 57.08182 | -- | --
40% | Active Random | | 81.72727 | 0.04231 | rejected
40% | Supervised | | 144.50909 | 0.00000 | rejected
40% | Semi-supervised | | 158.68182 | 0.00000 | rejected
Table 12. Friedman aligned ranking test and Holm's post hoc test regarding logistic (α = 0.10).
Labeled Ratio (R) | Classifier (Logistic) | Friedman p-Value (Statistic) | Friedman Ranking | Holm's Post Hoc Test p-Value | Null Hypothesis
10% | Combination | 0.00000 (49.05320) | 64.44545 | -- | --
10% | Active Random | | 90.40000 | 0.03249 | rejected
10% | Semi-supervised | | 142.15455 | 0.00000 | rejected
10% | Supervised | | 145.00000 | 0.00000 | rejected
20% | Combination | 0.00000 (59.73571) | 58.75455 | -- | --
20% | Active Random | | 90.32727 | 0.00929 | rejected
20% | Semi-supervised | | 143.43636 | 0.00000 | rejected
20% | Supervised | | 149.48182 | 0.00000 | rejected
30% | Combination | 0.00000 (40.03025) | 71.25455 | -- | --
30% | Active Random | | 89.02727 | 0.14314 | accepted
30% | Supervised | | 139.65455 | 0.00000 | rejected
30% | Semi-supervised | | 142.06364 | 0.00000 | rejected
40% | Combination | 0.00000 (65.16112) | 61.48182 | -- | --
40% | Active Random | | 81.70909 | 0.09563 | rejected
40% | Supervised | | 146.31818 | 0.00000 | rejected
40% | Semi-supervised | | 152.49091 | 0.00000 | rejected
Table 13. Friedman aligned ranking test and Holm's post hoc test regarding logistic model tree (LMT) (α = 0.10).
Labeled Ratio (R) | Classifier (LMT) | Friedman p-Value (Statistic) | Friedman Ranking | Holm's Post Hoc Test p-Value | Null Hypothesis
10% | Combination | 0.00000 (74.72391) | 55.78182 | -- | --
10% | Active Random | | 82.53636 | 0.02751 | rejected
10% | Supervised | | 150.05455 | 0.00000 | rejected
10% | Semi-supervised | | 153.62727 | 0.00000 | rejected
20% | Combination | 0.00000 (76.73213) | 59.54545 | -- | --
20% | Active Random | | 76.80909 | 0.15495 | accepted
20% | Semi-supervised | | 148.55455 | 0.00000 | rejected
20% | Supervised | | 157.09091 | 0.00000 | rejected
30% | Combination | 0.00000 (50.01495) | 56.80000 | -- | --
30% | Active Random | | 103.50909 | 0.00012 | rejected
30% | Semi-supervised | | 139.15455 | 0.00000 | rejected
30% | Supervised | | 142.53636 | 0.00000 | rejected
40% | Combination | 0.00000 (79.76665) | 56.98182 | -- | --
40% | Active Random | | 77.71818 | 0.08757 | rejected
40% | Semi-supervised | | 147.13636 | 0.00000 | rejected
40% | Supervised | | 160.16364 | 0.00000 | rejected
Table 14. Friedman aligned ranking test and Holm's post hoc test regarding LogitBoost (α = 0.10).
Labeled Ratio (R) | Classifier (LogitBoost) | Friedman p-Value (Statistic) | Friedman Ranking | Holm's Post Hoc Test p-Value | Null Hypothesis
10% | Combination | 0.00000 (75.28847) | 52.08182 | -- | --
10% | Active Random | | 87.89091 | 0.00318 | rejected
10% | Semi-supervised | | 149.69091 | 0.00000 | rejected
10% | Supervised | | 152.33636 | 0.00000 | rejected
20% | Combination | 0.00000 (68.38871) | 60.38182 | -- | --
20% | Active Random | | 80.80909 | 0.09239 | rejected
20% | Supervised | | 147.73636 | 0.00000 | rejected
20% | Semi-supervised | | 153.07273 | 0.00000 | rejected
30% | Combination | 0.00000 (60.68458) | 53.25455 | -- | --
30% | Active Random | | 99.91818 | 0.00012 | rejected
30% | Supervised | | 136.14545 | 0.00000 | rejected
30% | Semi-supervised | | 152.68182 | 0.00000 | rejected
40% | Combination | 0.00000 (46.30018) | 70.32727 | -- | --
40% | Active Random | | 86.01818 | 0.19611 | accepted
40% | Supervised | | 138.06364 | 0.00000 | rejected
40% | Semi-supervised | | 147.59091 | 0.00000 | rejected
