Prototype Selection for Multilabel Instance-Based Learning
Abstract
1. Introduction
- We propose four variations of CNN and IB2 that are suitable for multilabel classification problems.
- One of the variations uses a novel adaptation of Levenshtein distance for multilabel instances.
- We conduct an experimental study using nine multilabel datasets and complement it with corresponding statistical tests of significance. The study reveals that the proposed variations offer significant reduction rates without compromising the classification accuracy.
2. Related Work
3. The Single-Label Condensed Nearest Neighbor Rule
Algorithm 1 CNN. Input: training set TS. Output: condensing set CS.
- Variability in Results: Running the CNN algorithm multiple times on the same TS may produce different condensing sets, because the initial instance is selected at random and the order in which the TS instances are examined may vary (Algorithm 1).
- Memory Requirements: CNN is a memory-based algorithm, meaning that all instances need to reside in main memory during its execution.
- Computational Cost: The CNN algorithm requires multiple passes over the training set (see the sketch after this list).
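To make the procedure concrete, here is a minimal single-label CNN sketch in Python (illustrative code assuming NumPy arrays and Euclidean 1-NN; not the authors' implementation):

```python
import numpy as np

def cnn_condense(X, y, seed=0):
    """Single-label CNN sketch: X is an (n, d) feature array, y an (n,) label
    array; returns the indices of the condensing set CS."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    cs = [order[0]]                          # seed CS with a randomly chosen instance
    changed = True
    while changed:                           # repeat passes until nothing is added
        changed = False
        for i in order:
            if i in cs:
                continue
            # classify instance i with the 1-NN rule over the current CS
            nn = cs[int(np.argmin(np.linalg.norm(X[cs] - X[i], axis=1)))]
            if y[nn] != y[i]:                # misclassified: move i into CS
                cs.append(i)
                changed = True
    return np.array(cs)
```

The randomly chosen seed instance and the scan order are exactly the sources of the variability noted above, and the `while` loop is the source of the multiple passes.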
4. The IB2 Algorithm
Algorithm 2 IB2. Input: training set TS. Output: condensing set CS.
- Since IB2 avoids multiple passes over the training data, it is considerably faster than CNN (a one-pass sketch follows this list).
- The condensing set obtained from IB2 is generally smaller than that of CNN, leading to faster classification and reduced storage requirements.
- IB2 supports incremental learning, where new instances can be added to the condensing set without requiring complete retraining of the algorithm.
- The decision boundary revision step allows IB2 to adapt to new instances and adjust the reduced training set accordingly.
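A corresponding one-pass IB2 sketch, under the same assumptions as the CNN sketch above (illustrative, not the authors' code):

```python
import numpy as np

def ib2_condense(X, y):
    """Single-label IB2 sketch: a single pass over TS; an instance enters CS
    only if the CS built so far misclassifies it."""
    cs = [0]                                 # initialise CS with the first instance
    for i in range(1, len(X)):
        nn = cs[int(np.argmin(np.linalg.norm(X[cs] - X[i], axis=1)))]
        if y[nn] != y[i]:                    # misclassified by the current CS
            cs.append(i)                     # incremental update, no retraining
    return np.array(cs)
```

Because each new instance is tested only against the CS built so far, arriving instances can be absorbed incrementally, which is what makes IB2 suitable for incremental learning.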
5. The Proposed Algorithms
- Multilabel CNN Hamming Distance and Multilabel IB2 Hamming Distance (MLCNN-H and MLIB2-H);
- Multilabel CNN Jaccard Distance and Multilabel IB2 Jaccard Distance (MLCNN-J and MLIB2-J);
- Multilabel CNN Levenshtein Distance and Multilabel IB2 Levenshtein Distance (MLCNN-L and MLIB2-L); and
- Multilabel CNN Binary Relevance and Multilabel IB2 Binary Relevance (MLCNN-BR and MLIB2-BR).
5.1. MLCNN-H and MLIB2-H
- H(110001, 110001) = 0/6 = 0
- H(110001, 001110) = 6/6 = 1
- H(110001, 000010) = 4/6 ≈ 0.67
- H(110001, 100010) = 3/6 = 0.5
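A minimal sketch that reproduces the Hamming distances above (the function name is illustrative):

```python
import numpy as np

def hamming(a, b):
    """Normalized Hamming distance between equal-length binary label vectors:
    the fraction of label positions at which they disagree."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.mean(a != b))

print(hamming([1, 1, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1]))  # 0.0
print(hamming([1, 1, 0, 0, 0, 1], [0, 0, 1, 1, 1, 0]))  # 1.0
print(hamming([1, 1, 0, 0, 0, 1], [0, 0, 0, 0, 1, 0]))  # 0.666... (4/6)
print(hamming([1, 1, 0, 0, 0, 1], [1, 0, 0, 0, 1, 0]))  # 0.5
```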
5.2. MLCNN-J and MLIB2-J
- J(110001, 110001) = 0/3 = 0
- J(110001, 001110) = 6/6 = 1
- J(110001, 000010) = 4/4 = 1
- J(110001, 100010) = 3/4 = 0.75
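The Jaccard distance treats each label vector as the set of its active labels; a minimal sketch matching the examples above:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard distance between binary label vectors, viewed as label sets:
    (|union| - |intersection|) / |union|."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = int(np.sum(a | b))
    if union == 0:            # both label sets empty; define the distance as 0
        return 0.0
    return 1.0 - int(np.sum(a & b)) / union

print(jaccard([1, 1, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1]))  # 0.0  (0/3)
print(jaccard([1, 1, 0, 0, 0, 1], [0, 0, 1, 1, 1, 0]))  # 1.0  (6/6)
print(jaccard([1, 1, 0, 0, 0, 1], [0, 0, 0, 0, 1, 0]))  # 1.0  (4/4)
print(jaccard([1, 1, 0, 0, 0, 1], [1, 0, 0, 0, 1, 0]))  # 0.75 (3/4)
```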
5.3. MLCNN-L and MLIB2-L
- Non-Negativity: The Levenshtein distance is always non-negative.
- Symmetry: The distance between “S” and “T” is the same as the distance between “T” and “S”.
- Identity: The distance between a string and itself is always zero.
- Triangle Inequality: For any three strings “S”, “T” and “W”, the distance from “S” to “W” is no greater than the sum of the distances from “S” to “T” and from “T” to “W”.
- Substructure Optimality: The optimal solution for the overall Levenshtein distance can be obtained by combining optimal solutions to the subproblems (i.e., the prefix substrings) of “S” and “T” [33].
- LEV(110001, 110001) = LEV(ABF, ABF) = 0
- LEV(110001, 001110) = LEV(ABF, CDE) = 3
- LEV(110001, 000010) = LEV(ABF, E) = 3
- LEV(110001, 100010) = LEV(ABF, AE) = 2
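A sketch of the label-vector-to-string mapping and the standard dynamic-programming Levenshtein distance (helper names are illustrative):

```python
def labels_to_string(bits):
    """Encode a binary label vector as a string: active position 0 -> 'A',
    position 1 -> 'B', and so on (e.g. 110001 -> 'ABF')."""
    return "".join(chr(ord("A") + i) for i, b in enumerate(bits) if b)

def levenshtein(s, t):
    """Classic dynamic-programming Levenshtein distance between strings,
    exploiting the substructure-optimality property listed above."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                    # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                    # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

print(levenshtein(labels_to_string([1, 1, 0, 0, 0, 1]),
                  labels_to_string([1, 0, 0, 0, 1, 0])))  # LEV('ABF', 'AE') = 2
```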
5.4. MLCNN-BR and MLIB2-BR
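The Binary Relevance variants decompose the multilabel problem into one binary problem per label, condense each binary problem with the single-label algorithm, and merge the resulting per-label condensing sets (the example tables near the end of this document illustrate two per-label sets and their merged set). A minimal sketch, reusing the illustrative `cnn_condense` from Section 3 and assuming a binary label matrix `Y` with one column per label:

```python
import numpy as np

def mlcnn_br(X, Y, condense=cnn_condense):
    """Binary Relevance condensing sketch: run a single-label reducer on each
    label column of Y and keep the union of the selected instance indices."""
    selected = set()
    for j in range(Y.shape[1]):              # one binary problem per label
        selected.update(int(i) for i in condense(X, Y[:, j]))
    idx = np.array(sorted(selected))
    return X[idx], Y[idx]                    # merged CS with full label vectors
```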
6. Experimental Study
6.1. Experimental Setup
6.2. Experimental Results
6.3. Statistical Comparisons
6.3.1. Wilcoxon Signed Rank Test Results
6.3.2. Friedman Test Results
- MLCNN-L is the most accurate approach; MLCNN-J ( > 0.5), MLCNN-J ( > 0.75) and MLIB2-L are the runners-up.
- MLIB2-J ( > 0.75) and MLIB2-H achieve the highest reduction rates; MLCNN-J ( > 0.75) and MLIB2-J ( > 0.5) are the runners-up.
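Both tests are available in SciPy, so comparisons of this kind can be reproduced along the following lines (a sketch with made-up scores, one value per dataset and method; the numbers are not the paper's):

```python
from scipy.stats import friedmanchisquare, wilcoxon

# one list per method, one entry per dataset (illustrative values only)
scores = {
    "MLCNN-L": [0.81, 0.76, 0.67, 0.88, 0.74, 0.95, 0.63, 0.79, 0.97],
    "MLIB2-L": [0.81, 0.75, 0.67, 0.86, 0.74, 0.95, 0.62, 0.78, 0.97],
    "MLCNN-H": [0.81, 0.74, 0.66, 0.88, 0.73, 0.94, 0.64, 0.78, 0.96],
}

# pairwise Wilcoxon signed-rank test on per-dataset paired scores
stat, p = wilcoxon(scores["MLCNN-L"], scores["MLCNN-H"])
print(f"Wilcoxon p = {p:.3f}")

# Friedman test ranks all methods jointly across the datasets
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman p = {p:.3f}")
```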
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
DRT | Data Reduction Technique
PS | Prototype Selection
CNN | Condensed Nearest Neighbor
IB2 | Instance-Based Learning 2
CS | Condensing set
TS | Training set
MLCNN-H | Multilabel Condensed Nearest Neighbor with Hamming Distance
MLCNN-J | Multilabel Condensed Nearest Neighbor with Jaccard Distance
MLCNN-L | Multilabel Condensed Nearest Neighbor with Levenshtein Distance
MLCNN-BR | Multilabel Condensed Nearest Neighbor with Binary Relevance
MLIB2-H | Multilabel Instance-Based Learning 2 with Hamming Distance
MLIB2-J | Multilabel Instance-Based Learning 2 with Jaccard Distance
MLIB2-L | Multilabel Instance-Based Learning 2 with Levenshtein Distance
MLIB2-BR | Multilabel Instance-Based Learning 2 with Binary Relevance
References
- Tsoumakas, G.; Katakis, I. Multi-label classification: An overview. Int. J. Data Warehous. Min. 2007, 3, 1–13.
- Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27.
- Liu, H.; Motoda, H. Feature Selection for Knowledge Discovery and Data Mining; Kluwer Academic Publishers: New York, NY, USA, 1998.
- Garcia, S.; Derrac, J.; Cano, J.; Herrera, F. Prototype Selection for Nearest Neighbor Classification: Taxonomy and Empirical Study. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 417–435.
- Triguero, I.; Derrac, J.; Garcia, S.; Herrera, F. A Taxonomy and Experimental Study on Prototype Generation for Nearest Neighbor Classification. IEEE Trans. Syst. Man Cybern. Part C 2012, 42, 86–100.
- Spyromitros, E.; Tsoumakas, G.; Vlahavas, I. An Empirical Study of Lazy Multilabel Classification Algorithms. In Artificial Intelligence: Theories, Models and Applications; Darzentas, J., Vouros, G.A., Vosinakis, S., Arnellos, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 401–406.
- Hart, P.E. The condensed nearest neighbor rule. IEEE Trans. Inf. Theory 1968, 14, 515–516.
- Aha, D.W.; Kibler, D.; Albert, M.K. Instance-based learning algorithms. Mach. Learn. 1991, 6, 37–66.
- Filippakis, P.; Ougiaroglou, S.; Evangelidis, G. Condensed Nearest Neighbour Rules for Multi-Label Datasets. In Proceedings of the International Database Engineered Applications Symposium Conference, Heraklion, Greece, 5–7 May 2023; pp. 43–50.
- Levenshtein, V.I. Binary codes capable of correcting deletions, insertions, and reversals. Sov. Phys. Dokl. 1966, 10, 707–710.
- Tsoumakas, G.; Spyromitros-Xioufis, E.; Vilcek, J.; Vlahavas, I. Mulan: A Java Library for Multi-Label Learning. J. Mach. Learn. Res. 2011, 12, 2411–2414.
- Read, J.; Reutemann, P.; Pfahringer, B.; Holmes, G. MEKA: A Multi-label/Multi-target Extension to WEKA. J. Mach. Learn. Res. 2016, 17, 1–5.
- Charte, F.; Rivera, A.J.; del Jesus, M.J.; Herrera, F. MLeNN: A First Approach to Heuristic Multilabel Undersampling. In Intelligent Data Engineering and Automated Learning–IDEAL 2014; Springer: New York, NY, USA, 2014; pp. 1–9.
- Wilson, D.L. Asymptotic Properties of Nearest Neighbor Rules Using Edited Data. IEEE Trans. Syst. Man Cybern. 1972, SMC-2, 408–421.
- Kanj, S.; Abdallah, F.; Denœux, T.; Tout, K. Editing training data for multi-label classification with the k-nearest neighbor rule. Pattern Anal. Appl. 2015, 19, 145–161.
- Arnaiz-González, Á.; Díez-Pastor, J.F.; Rodríguez, J.J.; García-Osorio, C. Local sets for multi-label instance selection. Appl. Soft Comput. 2018, 68, 651–666.
- Leyva, E.; González, A.; Pérez, R. Three new instance selection methods based on local sets: A comparative study with several approaches from a bi-objective perspective. Pattern Recognit. 2015, 48, 1523–1537.
- Li, H.; Fang, M.; Li, H.; Wang, P. Prototype selection for multi-label data based on label correlation. Neural Comput. Appl. 2023.
- Chou, C.H.; Kuo, B.H.; Chang, F. The Generalized Condensed Nearest Neighbor Rule as A Data Reduction Method. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR'06), Hong Kong, China, 20–24 August 2006; Volume 2, pp. 556–559.
- Suyal, H.; Singh, A. Improving Multi-Label Classification in Prototype Selection Scenario. In Computational Intelligence and Healthcare Informatics; Wiley: Hoboken, NJ, USA, 2021; pp. 103–119.
- Zhang, M.L.; Zhou, Z.H. ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognit. 2007, 40, 2038–2048.
- Arnaiz-González, Á.; Díez-Pastor, J.F.; Rodríguez, J.J.; García-Osorio, C. Study of data transformation techniques for adapting single-label prototype selection algorithms to multi-label learning. Expert Syst. Appl. 2018, 109, 114–130.
- Calvo-Zaragoza, J.; Valero-Mas, J.J.; Rico-Juan, J.R. Improving kNN multi-label classification in Prototype Selection scenarios using class proposals. Pattern Recognit. 2015, 48, 1608–1622.
- González, M.; Cano, J.R.; García, S. ProLSFEO-LDL: Prototype Selection and Label-Specific Feature Evolutionary Optimization for Label Distribution Learning. Appl. Sci. 2020, 10, 3089.
- Geng, X. Label Distribution Learning. IEEE Trans. Knowl. Data Eng. 2016, 28, 1734–1748.
- Ougiaroglou, S.; Filippakis, P.; Evangelidis, G. Prototype Generation for Multi-label Nearest Neighbours Classification. In Hybrid Artificial Intelligent Systems; Sanjurjo González, H., Pastor López, I., García Bringas, P., Quintián, H., Corchado, E., Eds.; Springer: Cham, Switzerland, 2021; pp. 172–183.
- Ougiaroglou, S.; Filippakis, P.; Fotiadou, G.; Evangelidis, G. Data reduction via multi-label prototype generation. Neurocomputing 2023, 526, 1–8.
- Sánchez, J. High training set size reduction by space partitioning and prototype abstraction. Pattern Recognit. 2004, 37, 1561–1564.
- Valero-Mas, J.J.; Gallego, A.J.; Alonso-Jiménez, P.; Serra, X. Multilabel Prototype Generation for data reduction in K-Nearest Neighbour classification. Pattern Recognit. 2023, 135, 109190.
- Chen, C.; Jóźwik, A. A sample set condensation algorithm for the class sensitive artificial neural network. Pattern Recognit. Lett. 1996, 17, 819–823.
- Sun, L.; Ji, S.; Ye, J. Hypergraph Spectral Learning for Multi-Label Classification. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, NV, USA, 24–27 August 2008; pp. 668–676.
- Byerly, A.; Kalganova, T. Class Density and Dataset Quality in High-Dimensional, Unstructured Data. arXiv 2022.
- Zhang, S.; Hu, Y.; Bian, G. Research on string similarity algorithm based on Levenshtein Distance. In Proceedings of the 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 25–26 March 2017; pp. 2247–2251.
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
- Sechidis, K.; Tsoumakas, G.; Vlahavas, I. On the Stratification of Multi-label Data. In Machine Learning and Knowledge Discovery in Databases; Gunopulos, D., Hofmann, T., Malerba, D., Vazirgiannis, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 145–158.
- Czarnowski, I.; Jędrzejowicz, P. An Approach to Data Reduction for Learning from Big Datasets: Integrating Stacking, Rotation, and Agent Population Learning Techniques. Complexity 2018, 2018, 7404627.
- Gallego, A.J.; Calvo-Zaragoza, J.; Valero-Mas, J.J.; Rico-Juan, J.R. Clustering-Based k-Nearest Neighbor Classification for Large-Scale Data with Neural Codes Representation. Pattern Recognit. 2018, 74, 531–543.
- Ougiaroglou, S.; Evangelidis, G. RHC: Non-Parametric Cluster-Based Data Reduction for Efficient k-NN Classification. Pattern Anal. Appl. 2016, 19, 93–109.
- Escalante, H.J.; Graff, M.; Morales-Reyes, A. PGGP: Prototype Generation via Genetic Programming. Appl. Soft Comput. 2016, 40, 569–580.
- Escalante, H.J.; Marin-Castro, M.; Morales-Reyes, A.; Graff, M.; Rosales-Pérez, A.; Montes-y-Gómez, M.; Reyes, C.A.; Gonzalez, J.A. MOPG: A Multi-Objective Evolutionary Algorithm for Prototype Generation. Pattern Anal. Appl. 2017, 20, 33–47.
- Calvo-Zaragoza, J.; Valero-Mas, J.J.; Rico-Juan, J.R. Prototype Generation on Structural Data Using Dissimilarity Space Representation. Neural Comput. Appl. 2017, 28, 2415–2424.
- Sheskin, D. Handbook of Parametric and Nonparametric Statistical Procedures; Chapman & Hall/CRC: Boca Raton, FL, USA, 2011.
Instances | First Label |
---|---|
(1, 1) | 1 |
(1, 8) | 1 |
(2, 7) | 0 |
(3, 8) | 1 |
(4, 5) | 1 |
(7, 1) | 0 |
(8, 4) | 1 |
(9, 1) | 1 |
Instances | Second Label |
---|---|
(1, 1) | 0 |
(2, 7) | 0 |
(3, 8) | 1 |
(5, 6) | 1 |
(7, 5) | 0 |
(8, 4) | 1 |
(9, 9) | 1 |
Instances | First Label | Second Label |
---|---|---|
(1, 1) | 1 | 0 |
(1, 8) | 1 | 0 |
(3, 8) | 1 | 1 |
(4, 5) | 1 | 0 |
(5, 6) | 0 | 1 |
(8, 4) | 1 | 1 |
(9, 1) | 1 | 0 |
(9, 9) | 0 | 1 |
Datasets | Domain | Size | Attributes | Labels | Cardinality | Density |
---|---|---|---|---|---|---|
CAL500 (CAL) | Music | 502 | 68 | 174 | 26.044 | 0.150 |
Emotions (EMT) | Music | 593 | 72 | 6 | 1.869 | 0.311 |
Water quality (WQ) | Chemistry | 1060 | 16 | 14 | 5.073 | 0.362 |
Scene (SC) | Image | 2407 | 294 | 6 | 1.074 | 0.179 |
Yeast (YS) | Biology | 2417 | 103 | 14 | 4.237 | 0.303 |
Birds (BRD) | Sounds | 645 | 260 | 19 | 1.014 | 0.053 |
CHD49 (CHD) | Medicine | 555 | 49 | 6 | 2.580 | 0.430 |
Image (IMG) | Image | 2000 | 294 | 5 | 1.236 | 0.247 |
Mediamill (MDM) | Video | 43,907 | 120 | 101 | 4.376 | 0.043 |
HL: Hamming loss; RR: reduction rate (%).

Dataset | Metric | BRkNN | MLCNN-H | MLCNN-J ( > 0.5) | MLCNN-J ( > 0.75) | MLCNN-BR | MLCNN-L | MLIB2-H | MLIB2-J ( > 0.5) | MLIB2-J ( > 0.75) | MLIB2-BR | MLIB2-L
---|---|---|---|---|---|---|---|---|---|---|---|---
CAL | HL | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 | 0.18 | 0.19
CAL | RR | - | 8.28 | 0.75 | 19.45 | 0.0 | 0.10 | 14.26 | 1.20 | 32.92 | 0.0 | 0.35
EMT | HL | 0.24 | 0.26 | 0.26 | 0.25 | 0.29 | 0.24 | 0.26 | 0.26 | 0.26 | 0.30 | 0.24
EMT | RR | - | 39.88 | 40.26 | 60.62 | 26.64 | 14.67 | 51.05 | 51.64 | 69.06 | 39.97 | 21.67
WQ | HL | 0.33 | 0.34 | 0.33 | 0.34 | 0.36 | 0.33 | 0.35 | 0.33 | 0.34 | 0.37 | 0.33
WQ | RR | - | 45.71 | 18.99 | 54.65 | 5.33 | 12.74 | 58.16 | 26.68 | 66.20 | 10.85 | 17.73
SC | HL | 0.11 | 0.12 | 0.12 | 0.12 | 0.13 | 0.12 | 0.14 | 0.14 | 0.14 | 0.17 | 0.14
SC | RR | - | 51.93 | 51.93 | 53.03 | 50.44 | 43.05 | 70.75 | 70.75 | 74.50 | 75.69 | 58.85
YS | HL | 0.24 | 0.27 | 0.26 | 0.26 | 0.30 | 0.26 | 0.27 | 0.26 | 0.27 | 0.30 | 0.26
YS | RR | - | 48.26 | 31.27 | 54.72 | 16.61 | 33.95 | 59.09 | 39.80 | 65.07 | 24.90 | 43.61
BRD | HL | 0.05 | 0.06 | 0.05 | 0.05 | 0.08 | 0.05 | 0.06 | 0.05 | 0.05 | 0.08 | 0.05
BRD | RR | - | 55.70 | 15.50 | 22.71 | 58.18 | 34.81 | 61.59 | 18.80 | 26.28 | 62.87 | 40.62
CHD | HL | 0.35 | 0.36 | 0.36 | 0.38 | 0.40 | 0.37 | 0.37 | 0.37 | 0.38 | 0.40 | 0.38
CHD | RR | - | 42.18 | 32.20 | 58.64 | 14.74 | 30.98 | 57.20 | 44.97 | 70.82 | 24.89 | 45.33
IMG | HL | 0.20 | 0.22 | 0.22 | 0.21 | 0.23 | 0.21 | 0.25 | 0.25 | 0.25 | 0.26 | 0.22
IMG | RR | - | 42.11 | 42.11 | 44.82 | 38.43 | 26.61 | 73.23 | 73.23 | 84.67 | 63.43 | 40.47
MDM | HL | 0.031 | 0.035 | 0.032 | 0.036 | 0.042 | 0.032 | 0.037 | 0.033 | 0.039 | 0.043 | 0.033
MDM | RR | - | 48.70 | 33.19 | 54.58 | 21.76 | 31.30 | 57.08 | 39.90 | 63.43 | 30.06 | 37.61
Methods | Accuracy w/l/t | Accuracy Wilcoxon p | Reduction Rate w/l/t | Reduction Rate Wilcoxon p
---|---|---|---|---
BRkNN vs. MLCNN-H | 8/0/1 | 0.012 | - | - |
BRkNN vs. MLCNN-J ( > 0.5) | 6/0/3 | 0.027 | - | - |
BRkNN vs. MLCNN-J ( > 0.75) | 7/0/2 | 0.018 | - | - |
BRkNN vs. MLCNN-BR | 8/0/1 | 0.012 | - | - |
BRkNN vs. MLIB2-H | 8/0/1 | 0.012 | - | - |
BRkNN vs. MLIB2-J ( > 0.5) | 6/0/3 | 0.026 | - | - |
BRkNN vs. MLIB2-J ( > 0.75) | 7/0/2 | 0.018 | - | - |
BRkNN vs. MLIB2-BR | 8/1/0 | 0.011 | - | - |
BRkNN vs. MLCNN-L | 5/1/3 | 0.046 | - | - |
BRkNN vs. MLIB2-L | 5/1/3 | 0.046 | - | - |
MLCNN-H vs. MLCNN-J ( > 0.5) | 1/6/2 | 0.028 | 6/1/2 | 0.028 |
MLCNN-H vs. MLCNN-J ( > 0.75) | 3/6/0 | 0.260 | 1/8/0 | 0.110 |
MLCNN-H vs. MLCNN-BR | 1/8/0 | 0.011 | 8/1/0 | 0.015 |
MLCNN-H vs. MLIB2-H | 8/1/0 | 0.011 | 0/9/0 | 0.008 |
MLCNN-H vs. MLIB2-J ( > 0.5) | 4/5/0 | 0.859 | 5/4/0 | 0.767 |
MLCNN-H vs. MLIB2-J ( > 0.75) | 7/2/0 | 0.066 | 1/8/0 | 0.086 |
MLCNN-H vs. MLIB2-BR | 8/1/0 | 0.015 | 5/4/0 | 0.515 |
MLCNN-H vs. MLCNN-L | 1/8/0 | 0.021 | 9/0/0 | 0.008 |
MLCNN-H vs. MLIB2-L | 2/7/0 | 0.260 | 7/2/0 | 0.051 |
MLCNN-J ( > 0.5) vs. MLCNN-J ( > 0.75) | 6/3/0 | 0.214 | 0/9/0 | 0.008 |
MLCNN-J ( > 0.5) vs. MLCNN-BR | 8/1/0 | 0.011 | 8/1/0 | 0.110 |
MLCNN-J ( > 0.5) vs. MLIB2-H | 9/0/0 | 0.008 | 0/9/0 | 0.008 |
MLCNN-J ( > 0.5) vs. MLIB2-J ( > 0.5) | 8/1/0 | 0.011 | 0/9/0 | 0.008 |
MLCNN-J ( > 0.5) vs. MLIB2-J ( > 0.75) | 9/0/0 | 0.008 | 0/9/0 | 0.008 |
MLCNN-J ( > 0.5) vs. MLIB2-BR | 8/1/0 | 0.011 | 6/3/0 | 0.859 |
MLCNN-J ( > 0.5) vs. MLCNN-L | 1/8/0 | 0.051 | 7/2/0 | 0.214 |
MLCNN-J ( > 0.5) vs. MLIB2-L | 3/6/0 | 0.678 | 4/5/0 | 0.314 |
MLCNN-J ( > 0.75) vs. MLCNN-BR | 8/1/0 | 0.011 | 8/1/0 | 0.051 |
MLCNN-J ( > 0.75) vs. MLIB2-H | 7/2/0 | 0.066 | 3/6/0 | 0.214 |
MLCNN-J ( > 0.75) vs. MLIB2-J ( > 0.5) | 3/6/0 | 0.953 | 7/2/0 | 0.374 |
MLCNN-J ( > 0.75) vs. MLIB2-J ( > 0.75) | 7/2/0 | 0.021 | 0/9/0 | 0.008 |
MLCNN-J ( > 0.75) vs. MLIB2-BR | 8/1/0 | 0.015 | 6/3/0 | 0.260 |
MLCNN-J ( > 0.75) vs. MLCNN-L | 0/9/0 | 0.008 | 8/1/0 | 0.015 |
MLCNN-J ( > 0.75) vs. MLIB2-L | 2/7/0 | 0.214 | 7/2/0 | 0.086 |
MLCNN-BR vs. MLIB2-H | 3/6/0 | 0.139 | 0/9/0 | 0.008 |
MLCNN-BR vs. MLIB2-J ( > 0.5) | 3/6/0 | 0.066 | 1/8/0 | 0.110 |
MLCNN-BR vs. MLIB2-J ( > 0.75) | 3/6/0 | 0.173 | 1/8/0 | 0.015 |
MLCNN-BR vs. MLIB2-BR | 8/1/0 | 0.038 | 0/8/1 | 0.012 |
MLCNN-BR vs. MLCNN-L | 1/8/0 | 0.011 | 4/5/0 | 0.953 |
MLCNN-BR vs. MLIB2-L | 2/7/0 | 0.021 | 2/7/0 | 0.139 |
MLIB2-H vs. MLIB2-J ( > 0.5) | 1/6/2 | 0.056 | 6/1/2 | 0.028 |
MLIB2-H vs. MLIB2-J ( > 0.75) | 5/4/0 | 1.000 | 1/8/0 | 0.110 |
MLIB2-H vs. MLIB2-BR | 8/1/0 | 0.015 | 7/2/0 | 0.021 |
MLIB2-H vs. MLCNN-L | 0/9/0 | 0.008 | 9/0/0 | 0.008 |
MLIB2-H vs. MLIB2-L | 1/8/0 | 0.028 | 9/0/0 | 0.008 |
MLIB2-J ( > 0.5) vs. MLIB2-J ( > 0.75) | 7/2/0 | 0.051 | 0/9/0 | 0.008 |
MLIB2-J ( > 0.5) vs. MLIB2-BR | 8/1/0 | 0.011 | 7/2/0 | 0.173 |
MLIB2-J ( > 0.5) vs. MLCNN-L | 1/8/0 | 0.011 | 8/1/0 | 0.051 |
MLIB2-J ( > 0.5) vs. MLIB2-L | 1/7/1 | 0.093 | 6/3/0 | 0.214 |
MLIB2-J ( > 0.75) vs. MLIB2-BR | 8/1/0 | 0.015 | 7/2/0 | 0.066 |
MLIB2-J ( > 0.75) vs. MLCNN-L | 0/9/0 | 0.008 | 8/1/0 | 0.011 |
MLIB2-J ( > 0.75) vs. MLIB2-L | 0/9/0 | 0.008 | 8/1/0 | 0.011 |
MLIB2-BR vs. MLCNN-L | 1/8/0 | 0.011 | 4/5/0 | 0.374 |
MLIB2-BR vs. MLIB2-L | 1/8/0 | 0.011 | 4/5/0 | 0.678 |
MLCNN-L vs. MLIB2-L | 4/1/4 | 0.078 | 0/9/0 | 0.008 |
Algorithm | Mean Rank (ACC) | Mean Rank (RR)
---|---|---
MLCNN-H | 6.00 | 5.78 |
MLCNN-J ( > 0.5) | 4.22 | 4.00 |
MLCNN-J ( > 0.75) | 5.17 | 7.22 |
MLCNN-BR | 9.00 | 2.28 |
MLCNN-L | 3.67 | 2.67 |
MLIB2-H | 7.72 | 8.33 |
MLIB2-J ( > 0.5) | 5.72 | 6.11 |
MLIB2-J ( > 0.75) | 7.44 | 9.22 |
MLIB2-BR | 9.72 | 4.61 |
MLIB2-L | 5.06 | 4.78 |
BRkNN | 2.28 | - |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).