Multi-Label Feature Selection Method Based on Maximum Label Complexity Ratio
Abstract
1. Introduction
- (1) We systematically investigate how dynamic changes in label complexity influence feature relevance assessment. To quantify this effect, we introduce a dynamic label complexity ratio derived from label information entropy and mutual information.
- (2) A novel multi-label feature selection method, MLCFS, is proposed. The method jointly accounts for the correlation and redundancy among features, the interaction information between features and labels, and the variations in label complexity.
- (3) To verify the effectiveness of MLCFS, experiments are conducted on nine publicly available multi-label datasets against eight established multi-label feature selection methods. The results show that MLCFS outperforms the comparison methods across multiple evaluation metrics, effectively reducing data dimensionality and improving classification performance.
2. Preliminaries
The Basic Concepts of Information Theory
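The quantities this section relies on are the standard information-theoretic definitions for discrete variables $X$ and $Y$:

```latex
% Shannon entropy
H(X) = -\sum_{x \in X} p(x)\,\log_2 p(x)

% Conditional entropy
H(Y \mid X) = -\sum_{x \in X}\sum_{y \in Y} p(x, y)\,\log_2 p(y \mid x)

% Mutual information
I(X; Y) = H(Y) - H(Y \mid X)
        = \sum_{x \in X}\sum_{y \in Y} p(x, y)\,\log_2 \frac{p(x, y)}{p(x)\,p(y)}
```

Mutual information is symmetric and non-negative, and equals zero exactly when $X$ and $Y$ are independent.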
3. Related Work
4. Proposed Feature Selection Method
4.1. The Dynamic Changes in the Label Information
4.2. Quantification of the Complexity of Labels
4.3. Proposed Method
```
Algorithm 1 MLCFS
Input: …; user-specified threshold K
Output: the selected feature subset S
// Step 1: Compute initial label complexity ratios for all labels
 1: …
 2: k ← 0
 3: for i = 1 to q do
 4:   …
 5: end for
 6: while k < K do
    // Step 2: First iteration (when S is empty)
 7:   if k = 0 then
 8:     …
 9:     …
10:     …
11:     …
12:     …
13:   end if
    // Step 3: Subsequent iterations
    // Stage A: Update label complexity ratios considering selected features
14:   for i = 1 to q do
15:     …
16:   end for
17:   …
    // Stage B: Comprehensive feature evaluation
18:   for … do
19:     …
20:   end for
21:   …
22:   …
23:   …
24:   …
25: end while
```
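The scoring expressions of Algorithm 1 were lost in extraction, but its control flow (initialize per-label complexity ratios, greedily add one feature per iteration, then update the ratios from the newly selected feature) can be sketched as a forward search. The name `mlcfs_sketch`, the entropy-based initialization, and the `rel - red` trade-off below are illustrative assumptions standing in for the paper's exact criterion.

```python
import numpy as np

def _entropy(x):
    # Shannon entropy of a discrete variable, in bits
    _, c = np.unique(x, return_counts=True)
    p = c / c.sum()
    return -np.sum(p * np.log2(p))

def _mi(x, y):
    # I(X;Y) = H(X) + H(Y) - H(X,Y) for discrete variables
    _, j = np.unique(np.stack([x, y], axis=1), axis=0, return_inverse=True)
    return _entropy(x) + _entropy(y) - _entropy(j)

def mlcfs_sketch(X, Y, K):
    """Greedy forward selection over discrete features X (n, d) and
    labels Y (n, q). The per-label weights stand in for the paper's
    label complexity ratios; the exact scoring terms of Algorithm 1
    are not reproduced here."""
    d, q = X.shape[1], Y.shape[1]
    selected, candidates = [], list(range(d))
    h = np.array([_entropy(Y[:, i]) for i in range(q)])
    # initial complexity ratios (placeholder: entropies scaled to [0, 1])
    ratio = h / max(h.max(), 1e-12)
    while len(selected) < K and candidates:
        scores = []
        for f in candidates:
            # relevance to each label, weighted by its complexity ratio
            rel = sum(ratio[i] * _mi(X[:, f], Y[:, i]) for i in range(q))
            # average redundancy with already-selected features
            red = sum(_mi(X[:, f], X[:, s]) for s in selected)
            scores.append(rel - red / max(len(selected), 1))
        best = candidates[int(np.argmax(scores))]
        selected.append(best)
        candidates.remove(best)
        # Stage A: update ratios to the fraction of each label's entropy
        # still unexplained by the newest selected feature
        ratio = np.array([(h[i] - _mi(Y[:, i], X[:, best])) / h[i]
                          if h[i] > 0 else 0.0 for i in range(q)])
    return selected
```

The ratio update illustrates the key dynamic: once a label is well explained by the selected subset, its weight shrinks, so later iterations favor features relevant to the labels that remain complex.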
4.4. Theoretical and Time Complexity Analysis
5. Experimental Results and Analysis
5.1. Evaluation Metrics for Multi-Label Feature Selection
- (1) Hamming Loss (HL): HL measures how often instance-label pairs are misclassified, i.e., the fraction of labels that are wrongly predicted; lower is better.
- (2) Average Precision (AP): AP measures, for each relevant label, the fraction of labels ranked above it (by predicted score, in descending order) that are also relevant, averaged over all relevant labels and samples; higher is better.
- (3) Ranking Loss (RL): RL measures the average fraction of label pairs that are ordered incorrectly, i.e., an irrelevant label ranked above a relevant one; lower is better.
- (4) Coverage Error (CE): CE measures how far, on average, one must move down the ranked label list to cover all relevant labels of a sample; lower is better.
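Under common conventions (ties counted as ranking errors; zero-based coverage depth), the four metrics can be computed directly from a binary label matrix and a score matrix:

```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    # fraction of instance-label pairs predicted incorrectly
    return float(np.mean(Y_true != Y_pred))

def ranking_loss(Y_true, scores):
    # average fraction of (relevant, irrelevant) label pairs
    # ordered incorrectly by the scores (ties count as errors)
    losses = []
    for y, s in zip(Y_true, scores):
        rel, irr = np.where(y == 1)[0], np.where(y == 0)[0]
        if len(rel) == 0 or len(irr) == 0:
            continue
        pairs = [(r, i) for r in rel for i in irr]
        bad = sum(s[r] <= s[i] for r, i in pairs)
        losses.append(bad / len(pairs))
    return float(np.mean(losses))

def coverage_error(Y_true, scores):
    # average depth (0-based rank) down the ranked label list
    # needed to cover every relevant label of each sample
    cov = []
    for y, s in zip(Y_true, scores):
        order = np.argsort(-s)              # labels sorted by score
        ranks = np.empty_like(order)
        ranks[order] = np.arange(len(s))    # rank position of each label
        cov.append(ranks[y == 1].max())
    return float(np.mean(cov))

def average_precision(Y_true, scores):
    # for each relevant label, precision at its rank position,
    # averaged over relevant labels and then over samples
    aps = []
    for y, s in zip(Y_true, scores):
        order = np.argsort(-s)
        y_sorted = y[order]
        rel_pos = np.where(y_sorted == 1)[0]
        if len(rel_pos) == 0:
            continue
        prec = [(k + 1) / (pos + 1) for k, pos in enumerate(rel_pos)]
        aps.append(np.mean(prec))
    return float(np.mean(aps))
```

Note that conventions differ slightly across libraries; for instance, scikit-learn's `coverage_error` reports the one-based depth, i.e., one more than the value returned here.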
5.2. Description of Multi-Label Benchmark Datasets and Experimental Settings
5.3. Classification Results and Analysis
5.4. Statistical Tests
6. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
| Symbols | Notations |
|---|---|
| set of samples | |
| set of features | |
| set of labels | |
| the k-th candidate feature (general notation) | |
| the i-th label | |
| features in the selected feature subset | |
| selected feature subset |
References
- Huang, R.; Wu, Z. Multi-label feature selection via manifold regularization and dependence maximization. Pattern Recognit. 2021, 120, 108149. [Google Scholar] [CrossRef]
- Wu, J.S.; Huang, S.J.; Zhou, Z.H. Genome-wide protein function prediction through multi-instance multi-label learning. IEEE/ACM Trans. Comput. Biol. Bioinform. 2014, 11, 891–902. [Google Scholar] [CrossRef]
- Spolaôr, N.; Monard, M.C.; Tsoumakas, G.; Lee, H.D. A systematic review of multi-label feature selection and a new method based on label construction. Neurocomputing 2016, 180, 3–15. [Google Scholar] [CrossRef]
- Gao, W.; Hu, L.; Zhang, P. Class-specific mutual information variation for feature selection. Pattern Recognit. 2018, 79, 328–339. [Google Scholar] [CrossRef]
- Lin, Y.; Hu, Q.; Liu, J.; Chen, J.; Duan, J. Multi-label feature selection based on neighborhood mutual information. Appl. Soft Comput. 2016, 38, 244–256. [Google Scholar] [CrossRef]
- Deng, W.; Xu, H.; Guan, Z.; Sun, Y.; Ran, X.; Ma, H.; Zhou, X.; Zhao, H. PSO-K-Means Clustering-Based NSGA-III for Delay Recovery. IEEE Trans. Consum. Electron. 2025, 71, 10084–10095. [Google Scholar] [CrossRef]
- Huang, R.; Jiang, W.; Sun, G. Manifold-based constraint Laplacian score for multi-label feature selection. Pattern Recognit. Lett. 2018, 112, 346–352. [Google Scholar] [CrossRef]
- Dai, J.; Chen, J.; Liu, Y.; Hu, H. Novel multi-label feature selection via label symmetric uncertainty correlation learning and feature redundancy evaluation. Knowl.-Based Syst. 2020, 207, 106342. [Google Scholar] [CrossRef]
- Lee, J.; Kim, D.W. Memetic feature selection algorithm for multi-label classification. Inf. Sci. 2015, 293, 80–96. [Google Scholar] [CrossRef]
- Kashef, S.; Nezamabadi-pour, H. A label-specific multi-label feature selection algorithm based on the Pareto dominance concept. Pattern Recognit. 2019, 88, 654–667. [Google Scholar]
- Pereira, R.B.; Plastino, A.; Zadrozny, B.; Merschmann, L.H. Categorizing feature selection methods for multi-label classification. Artif. Intell. Rev. 2018, 49, 57–78. [Google Scholar] [CrossRef]
- Lee, J.; Kim, D.W. Efficient multi-label feature selection using entropy-based label selection. Entropy 2016, 18, 405. [Google Scholar] [CrossRef]
- Hall, M.A. Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning. In Proceedings of the Seventeenth International Conference on Machine Learning, San Francisco, CA, USA, 29 June–2 July 2000; pp. 359–366. [Google Scholar]
- Guyon, I.; Weston, J.; Barnhill, S. Gene Selection for Cancer Classification using Support Vector Machines. Mach. Learn. 2002, 46, 389–422. [Google Scholar] [CrossRef]
- Mejia-Lavalle, M.; Sucar, E.; Arroyo, G. Feature selection with a perceptron neural net. In Proceedings of the International Workshop on Feature Selection for Data Mining, Bethesda, MD, USA, 22 April 2006; pp. 131–135. [Google Scholar]
- Yu, L.; Liu, H. Efficient Feature Selection via Analysis of Relevance and Redundancy. J. Mach. Learn. Res. 2004, 5, 1205–1224. [Google Scholar]
- Liu, H.; Setiono, R. Chi2: Feature selection and discretization of numeric attributes. In Proceedings of the 7th IEEE International Conference on Tools with Artificial Intelligence, Herndon, VA, USA, 5–8 November 1995; pp. 388–391. [Google Scholar]
- Liu, H.; Yu, L. Toward integrating feature selection algorithms for classification and clustering. IEEE Trans. Knowl. Data Eng. 2005, 17, 491–502. [Google Scholar] [CrossRef]
- Li, Y.H.; Hu, L.; Gao, W.F. Multi-label feature selection based on sparse coefficient matrix reconstruction. Chin. J. Comput. 2022, 45, 1827–1841, (In Chinese with English abstract). [Google Scholar]
- Wu, J.S.; Li, Y.L.; Huang, C. Recent Advances in Unsupervised Multi-view Feature Selection. J. Softw. 2025, 36, 886–914. [Google Scholar]
- Li, Y.H.; Hu, L.; Zhang, P. Multi-label feature selection based on dynamic graph Laplacian. J. Commun. 2020, 41, 47–59. [Google Scholar]
- Sechidis, K.; Spyromitros-Xioufis, E.; Vlahavas, I. Information theoretic multi-target feature selection via output space quantization. Entropy 2019, 21, 855. [Google Scholar] [CrossRef]
- Liu, J.; Lin, Y.; Ding, W.; Zhang, H.; Du, J. Fuzzy mutual information-based multilabel feature selection with label dependency and streaming labels. IEEE Trans. Fuzzy Syst. 2023, 31, 77–91. [Google Scholar] [CrossRef]
- Zhang, L.; Wang, C. Multi-label feature selection algorithm based on joint mutual information of max-relevance and min-redundancy. J. Commun. 2018, 39, 111–122. [Google Scholar]
- Wang, G.Y.; Yu, H.; Yang, D.C. Decision table reduction based on conditional information entropy. Chin. J. Comput. 2002, 25, 759–766, (In Chinese with English abstract). [Google Scholar]
- Liu, J.; Li, Y.; Weng, W. Feature selection for multi-label learning with streaming label. Neurocomputing 2020, 387, 268–278. [Google Scholar] [CrossRef]
- Sun, L.; Wang, L.; Ding, W.; Qian, Y.; Xu, J. Feature Selection Using Fuzzy Neighborhood Entropy-Based Uncertainty Measures for Fuzzy Neighborhood Multigranulation Rough Sets. IEEE Trans. Fuzzy Syst. 2021, 29, 19–33. [Google Scholar] [CrossRef]
- Boutell, M.R.; Luo, J.; Shen, X. Learning multi-label scene classification. Pattern Recognit. 2004, 37, 1757–1771. [Google Scholar] [CrossRef]
- Trohidis, K.; Tsoumakas, G.; Kalliris, G. Multi-label classification of music by emotion. EURASIP J. Audio Speech Music Process. 2011, 2011, 4. [Google Scholar] [CrossRef]
- Read, J. A pruned problem transformation method for multi-label classification. In Proceedings of the 2008 New Zealand Computer Science Research Student Conference, Christchurch, New Zealand, 14–18 April 2008; pp. 143–150. [Google Scholar]
- Yin, T.; Chen, H.; Wan, J.; Zhang, P.; Horng, S.J.; Li, T. Exploiting feature multi-correlations for multilabel feature selection in robust multi-neighborhood fuzzy β covering space. Inf. Fusion 2024, 104, 102150. [Google Scholar] [CrossRef]
- Zhang, Y.; Huo, W.; Tang, J. Multi-label feature selection via latent representation learning and dynamic graph constraints. Pattern Recognit. 2024, 151, 110411. [Google Scholar] [CrossRef]
- Jian, L.; Li, J.; Shu, K.; Liu, H. Multi-label informed feature selection. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), New York, NY, USA, 9–15 July 2016; pp. 1627–1633. [Google Scholar]
- Fan, Y.; Liu, J.; Tang, J. Learning correlation information for multi-label feature selection. Pattern Recognit. 2024, 145, 109899. [Google Scholar] [CrossRef]
- Dai, J.; Liu, Q.; Chen, W. Multi-label feature selection based on fuzzy mutual information and orthogonal regression. IEEE Trans. Fuzzy Syst. 2024, 32, 5136–5148. [Google Scholar] [CrossRef]
- Yin, T.; Chen, H.; Yuan, Z. LEFMIFS: Label enhancement and fuzzy mutual information for robust multilabel feature selection. Eng. Appl. Artif. Intell. 2024, 133, 108108. [Google Scholar] [CrossRef]
- Sun, Z.; Zhang, J.; Dai, L.; Li, C.; Zhou, C.; Xin, J.; Li, S. Mutual information based multi-label feature selection via constrained convex optimization. Neurocomputing 2019, 329, 447–456. [Google Scholar] [CrossRef]
- Gonzalez-Lopez, J.; Ventura, S.; Cano, A. Distributed multi-label feature selection using individual mutual information measures. Knowl.-Based Syst. 2020, 188, 105052. [Google Scholar] [CrossRef]
- Lee, J.; Kim, D.W. Mutual information-based multi-label feature selection using interaction information. Expert Syst. Appl. 2015, 42, 2013–2025. [Google Scholar] [CrossRef]
- Lee, J.; Kim, D.W. Feature selection for multi-label classification using multivariate mutual information. Pattern Recognit. Lett. 2013, 34, 349–357. [Google Scholar] [CrossRef]
- Lee, J.; Kim, D.W. SCLS: Multi-label feature selection based on scalable criterion for large label set. Pattern Recognit. 2017, 66, 342–352. [Google Scholar] [CrossRef]
- Zhang, P.; Liu, G.; Gao, W. Distinguishing two types of labels for multi-label feature selection. Pattern Recognit. 2019, 95, 72–82. [Google Scholar] [CrossRef]
- Pan, M.; Sun, Z.; Wang, C.; Cao, G. A multi-label feature selection method based on an approximation of interaction information. Intell. Data Anal. 2022, 26, 823–840. [Google Scholar] [CrossRef]
- Lee, J.; Kim, D.W. Fast multi-label feature selection based on information-theoretic feature ranking. Pattern Recognit. 2015, 48, 2761–2771. [Google Scholar] [CrossRef]
- Zhang, P.; Liu, G.; Song, J. MFSJMI: Multi-label feature selection considering join mutual information and interaction weight. Pattern Recognit. 2023, 138, 109378. [Google Scholar] [CrossRef]
- Guo, D.; Zhang, J.; Yang, B.; Lin, Y. Multi-modal intelligent situation awareness in real-time air traffic control: Control intent understanding and flight trajectory prediction. Chin. J. Aeronaut. 2025, 38, 103376. [Google Scholar] [CrossRef]
- Zhao, J.; Yang, C.; Gao, W.; Park, J.H. ADP-based optimal control of linear singularly perturbed systems with uncertain dynamics: A two-stage value iteration method. IEEE Trans. Circuits Syst. II Express Briefs 2023, 70, 4399–4403. [Google Scholar] [CrossRef]
- Tsoumakas, G.; Spyromitros-Xioufis, E.; Vilcek, J. Mulan: A Java library for multi-label learning. J. Mach. Learn. Res. 2011, 12, 2411–2414. [Google Scholar]
- Cai, Z.; Zhu, W. Multi-label feature selection via feature manifold learning and sparsity regularization. Int. J. Mach. Learn. Cybern. 2018, 9, 1321–1334. [Google Scholar] [CrossRef]
- Rodrigues, D.; Pereira, L.; Nakamura, R. A wrapper approach for feature selection based on bat algorithm and optimum-path forest. Expert Syst. Appl. 2014, 41, 2250–2258. [Google Scholar] [CrossRef]
- Zhang, J.; Luo, Z.; Li, C. Manifold regularized discriminative feature selection for multi-label learning. Pattern Recognit. 2019, 95, 136–150. [Google Scholar] [CrossRef]
- Zhang, L.; Wang, Z. Multi-label Feature Selection Algorithm Based on Maximum Correlation and Minimum Redundancy Joint Mutual Information. J. Commun. 2018, 39, 111–122. [Google Scholar]
- Friedman, M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92. [Google Scholar] [CrossRef]
- Dunn, O.J. Multiple comparisons among means. J. Am. Stat. Assoc. 1961, 56, 52–64. [Google Scholar] [CrossRef]

| Methods | ||
|---|---|---|
| D2F | ||
| PMU | ||
| SCLS | ||
| FIMF | / | |
| LRFS | ||
| IDA | ||
| MFSJMI |
| Methods | Time Complexities |
|---|---|
| MLCFS | |
| D2F | |
| PMU | |
| SCLS | |
| LRFS | |
| FIMF | |
| IDA | |
| MFSJMI | |
| MIFS |
| Datasets | Instances | Train | Test | Features | Labels | Label Cardinality | Label Density | Domain |
|---|---|---|---|---|---|---|---|---|
| scene | 2407 | 1211 | 1196 | 294 | 6 | 1.074 | 0.179 | Image |
| yeast | 2417 | 1500 | 917 | 103 | 14 | 4.237 | 0.303 | Biology |
| computers | 5000 | 2000 | 3000 | 681 | 33 | 1.508 | 0.046 | Yahoo |
| health | 5000 | 2000 | 3000 | 612 | 32 | 1.662 | 0.052 | Text |
| reference | 5000 | 2000 | 3000 | 636 | 27 | 1.169 | 0.035 | Yahoo |
| social | 5000 | 2000 | 3000 | 1047 | 39 | 1.282 | 0.033 | Text |
| medical | 978 | 333 | 645 | 1449 | 45 | 1.245 | 0.028 | Text |
| entertain | 5000 | 2000 | 3000 | 640 | 21 | 1.420 | 0.068 | Text |
| society | 5000 | 2000 | 3000 | 636 | 27 | 1.692 | 0.063 | Text |
| Datasets | MLCFS | MIFS | D2F | PMU | SCLS | LRFS | FIMF | IDA | MFSJMI |
|---|---|---|---|---|---|---|---|---|---|
| scene | 0.1413 ± 0.0206 | 0.1704 ± 0.0097 | 0.1492 ± 0.0064 | 0.1473 ± 0.0066 | 0.1734 ± 0.003 | 0.1419 ± 0.0099 | 0.1663 ± 0.0063 | 0.1458 ± 0.0102 | 0.1411 ± 0.019 |
| yeast | 0.2257 ± 0.0126 | 0.2302 ± 0.0041 | 0.2278 ± 0.0029 | 0.2279 ± 0.0037 | 0.2332 ± 0.0044 | 0.2263 ± 0.0035 | 0.2319 ± 0.0042 | 0.2303 ± 0.0026 | 0.2305 ± 0.0028 |
| Computers | 0.0407 ± 0.0008 | 0.0449 ± 0.0002 | 0.044 ± 0.0005 | 0.0441 ± 0.0005 | 0.0434 ± 0.0005 | 0.0429 ± 0.0007 | 0.0433 ± 0.0006 | 0.0426 ± 0.0012 | 0.0432 ± 0.0007 |
| Health | 0.0441 ± 0.0026 | 0.0502 ± 0.001 | 0.0483 ± 0.0005 | 0.0493 ± 0.0006 | 0.0485 ± 0.0011 | 0.0452 ± 0.0011 | 0.0442 ± 0.0013 | 0.0471 ± 0.0009 | 0.0473 ± 0.0015 |
| Reference | 0.0305 ± 0.0014 | 0.0313 ± 0.0012 | 0.0322 ± 0.0012 | 0.0336 ± 0.001 | 0.0329 ± 0.0002 | 0.0312 ± 0.0007 | 0.0321 ± 0.0009 | 0.0315 ± 0.0006 | 0.0314 ± 0.0009 |
| Social | 0.0266 ± 0.0021 | 0.0317 ± 0.0013 | 0.0303 ± 0.0005 | 0.0309 ± 0.0003 | 0.0287 ± 0.0007 | 0.0274 ± 0.0007 | 0.0282 ± 0.0006 | 0.0266 ± 0.0012 | 0.0281 ± 0.0009 |
| medical | 0.0171 ± 0.0008 | 0.0165 ± 0.0021 | 0.0196 ± 0.001 | 0.0197 ± 0.0011 | 0.0233 ± 0.0002 | 0.0175 ± 0.001 | 0.0174 ± 0.001 | 0.0218 ± 0.0001 | 0.0177 ± 0.0015 |
| Entertain | 0.0637 ± 0.0017 | 0.0658 ± 0.0008 | 0.0657 ± 0.0013 | 0.0671 ± 0.0011 | 0.0659 ± 0.0014 | 0.0631 ± 0.0014 | 0.0654 ± 0.0011 | 0.0615 ± 0.0012 | 0.0641 ± 0.0011 |
| Society | 0.0587 ± 0.0007 | 0.0596 ± 0.0009 | 0.0587 ± 0.0004 | 0.0597 ± 0.0009 | 0.0594 ± 0.0003 | 0.058 ± 0.0006 | 0.0586 ± 0.0007 | 0.0582 ± 0.0005 | 0.0589 ± 0.001 |
| average | 0.0722 | 0.0778 | 0.0751 | 0.0755 | 0.0788 | 0.0726 | 0.0764 | 0.074 | 0.0739 |
| Datasets | MLCFS | MIFS | D2F | PMU | SCLS | LRFS | FIMF | IDA | MFSJMI |
|---|---|---|---|---|---|---|---|---|---|
| scene | 1.9849 ± 0.2984 | 2.9801 ± 0.434 | 2.3015 ± 0.2357 | 2.3129 ± 0.2443 | 2.7828 ± 0.1086 | 2.2297 ± 0.2913 | 2.6974 ± 0.4179 | 2.3396 ± 0.3783 | 2.3442 ± 0.4845 |
| yeast | 7.9796 ± 0.2589 | 9.0812 ± 0.506 | 8.7833 ± 0.2726 | 8.9352 ± 0.3673 | 9.0711 ± 0.3446 | 8.9035 ± 0.3516 | 8.9928 ± 0.3234 | 9.2325 ± 0.3927 | 8.9751 ± 0.332 |
| Computers | 6.3673 ± 0.1149 | 7.5371 ± 0.5373 | 7.2455 ± 0.2641 | 7.1926 ± 0.2168 | 7.1822 ± 0.2216 | 7.1585 ± 0.2369 | 6.9352 ± 0.196 | 7.0722 ± 0.2511 | 7.1399 ± 0.2279 |
| Health | 4.8104 ± 0.3185 | 6.228 ± 0.376 | 5.7394 ± 0.1555 | 5.7229 ± 0.1426 | 5.8251 ± 0.1802 | 5.7664 ± 0.1544 | 4.7298 ± 0.1364 | 5.8252 ± 0.1703 | 5.8838 ± 0.1783 |
| Reference | 5.0328 ± 0.1062 | 5.949 ± 0.3156 | 5.6561 ± 0.1973 | 5.6117 ± 0.1147 | 5.6353 ± 0.1623 | 5.7063 ± 0.3418 | 5.6452 ± 0.3112 | 5.9714 ± 0.332 | 5.6892 ± 0.2235 |
| Social | 5.4449 ± 0.1685 | 6.955 ± 0.4051 | 6.1474 ± 0.191 | 6.2101 ± 0.1976 | 6.0108 ± 0.3113 | 5.8175 ± 0.3354 | 5.9043 ± 0.2704 | 6.0501 ± 0.3354 | 6.028 ± 0.302 |
| medical | 5.1658 ± 0.4137 | 6.1604 ± 0.4141 | 6.3598 ± 0.4012 | 6.4201 ± 0.4025 | 8.3118 ± 0.1098 | 5.8078 ± 0.2927 | 5.7868 ± 0.2699 | 7.1824 ± 0.0631 | 5.8668 ± 0.6055 |
| Entertain | 5.1016 ± 0.1243 | 5.9338 ± 0.5407 | 5.7088 ± 0.2277 | 5.6683 ± 0.2167 | 5.7602 ± 0.1751 | 5.5795 ± 0.2664 | 5.6386 ± 0.2137 | 5.7098 ± 0.2397 | 5.6899 ± 0.2259 |
| Society | 7.8775 ± 0.2345 | 8.6349 ± 0.4791 | 8.4876 ± 0.2665 | 8.4146 ± 0.2669 | 8.5074 ± 0.2349 | 8.3791 ± 0.3111 | 8.3525 ± 0.3782 | 8.3738 ± 0.3093 | 8.4163 ± 0.3102 |
| average | 5.5294 | 6.6066 | 6.2699 | 6.2765 | 6.5652 | 6.1498 | 6.0758 | 6.4174 | 6.2259 |
| Datasets | MLCFS | MIFS | D2F | PMU | SCLS | LRFS | FIMF | IDA | MFSJMI |
|---|---|---|---|---|---|---|---|---|---|
| scene | 0.1763 ± 0.0596 | 0.3751 ± 0.0865 | 0.2395 ± 0.0478 | 0.2415 ± 0.0493 | 0.3366 ± 0.0216 | 0.2249 ± 0.059 | 0.318 ± 0.0842 | 0.2467 ± 0.0759 | 0.248 ± 0.0968 |
| yeast | 0.2053 ± 0.0168 | 0.2703 ± 0.0302 | 0.2454 ± 0.0096 | 0.2548 ± 0.0183 | 0.2653 ± 0.0146 | 0.2564 ± 0.0149 | 0.2586 ± 0.0158 | 0.2678 ± 0.0204 | 0.2596 ± 0.0222 |
| Computers | 0.1186 ± 0.0023 | 0.1515 ± 0.0146 | 0.1389 ± 0.006 | 0.1367 ± 0.0045 | 0.1398 ± 0.0061 | 0.1364 ± 0.0057 | 0.1307 ± 0.005 | 0.1364 ± 0.0067 | 0.1367 ± 0.0058 |
| Health | 0.0729 ± 0.0083 | 0.1148 ± 0.0106 | 0.0979 ± 0.0043 | 0.0983 ± 0.0042 | 0.1013 ± 0.0051 | 0.0979 ± 0.0041 | 0.2005 ± 0.0062 | 0.1001 ± 0.0044 | 0.1028 ± 0.0049 |
| Reference | 0.1069 ± 0.0032 | 0.0313 ± 0.0012 | 0.1254 ± 0.0065 | 0.124 ± 0.0039 | 0.1251 ± 0.005 | 0.1278 ± 0.0107 | 0.1256 ± 0.0097 | 0.1355 ± 0.0105 | 0.127 ± 0.0069 |
| Social | 0.0886 ± 0.0039 | 0.0317 ± 0.0013 | 0.1043 ± 0.0041 | 0.1058 ± 0.0044 | 0.1025 ± 0.0075 | 0.0976 ± 0.0064 | 0.0983 ± 0.0061 | 0.1022 ± 0.0079 | 0.1011 ± 0.0068 |
| medical | 0.0726 ± 0.008 | 0.0897 ± 0.0093 | 0.0951 ± 0.0092 | 0.0963 ± 0.0091 | 0.1398 ± 0.0024 | 0.0833 ± 0.0064 | 0.0829 ± 0.006 | 0.1139 ± 0.0012 | 0.0848 ± 0.0131 |
| Entertain | 0.1598 ± 0.006 | 0.2004 ± 0.0265 | 0.1885 ± 0.011 | 0.186 ± 0.0101 | 0.1896 ± 0.0082 | 0.1826 ± 0.0128 | 0.1855 ± 0.0102 | 0.1888 ± 0.0113 | 0.188 ± 0.0108 |
| Society | 0.1845 ± 0.006 | 0.0596 ± 0.0009 | 0.2068 ± 0.0081 | 0.203 ± 0.0085 | 0.2075 ± 0.0081 | 0.2034 ± 0.0105 | 0.2032 ± 0.0134 | 0.203 ± 0.0102 | 0.2054 ± 0.0112 |
| average | 0.1317 | 0.1472 | 0.1602 | 0.1607 | 0.1786 | 0.1567 | 0.1782 | 0.1485 | 0.1647 |
| Datasets | MLCFS | MIFS | D2F | PMU | SCLS | LRFS | FIMF | IDA | MFSJMI |
|---|---|---|---|---|---|---|---|---|---|
| scene | 0.7331 ± 0.07 | 0.4978 ± 0.0654 | 0.6169 ± 0.0503 | 0.6197 ± 0.0522 | 0.5129 ± 0.0171 | 0.6362 ± 0.0658 | 0.5443 ± 0.0803 | 0.6165 ± 0.0825 | 0.6258 ± 0.0873 |
| yeast | 0.7199 ± 0.0198 | 0.6441 ± 0.0408 | 0.6791 ± 0.0134 | 0.6728 ± 0.0199 | 0.653 ± 0.0197 | 0.6648 ± 0.019 | 0.6683 ± 0.0193 | 0.6524 ± 0.0253 | 0.6615 ± 0.0267 |
| Computers | 0.6015 ± 0.0068 | 0.514 ± 0.0364 | 0.5407 ± 0.013 | 0.5402 ± 0.0158 | 0.5256 ± 0.0178 | 0.5416 ± 0.0164 | 0.5494 ± 0.0163 | 0.5481 ± 0.0155 | 0.5399 ± 0.017 |
| Health | 0.6734 ± 0.0272 | 0.5407 ± 0.0308 | 0.5617 ± 0.0201 | 0.5583 ± 0.0138 | 0.5566 ± 0.0163 | 0.5594 ± 0.0203 | 0.6692 ± 0.0122 | 0.5578 ± 0.023 | 0.5539 ± 0.0226 |
| Reference | 0.5824 ± 0.0099 | 0.5089 ± 0.0273 | 0.5204 ± 0.0186 | 0.505 ± 0.028 | 0.5111 ± 0.0176 | 0.5182 ± 0.0188 | 0.5233 ± 0.0205 | 0.5062 ± 0.0195 | 0.5209 ± 0.017 |
| Social | 0.6428 ± 0.024 | 0.5183 ± 0.0254 | 0.5671 ± 0.0121 | 0.5628 ± 0.0134 | 0.5423 ± 0.0237 | 0.5731 ± 0.0173 | 0.5674 ± 0.0245 | 0.571 ± 0.0221 | 0.5671 ± 0.0246 |
| medical | 0.7295 ± 0.0368 | 0.6599 ± 0.0576 | 0.6056 ± 0.032 | 0.591 ± 0.026 | 0.4482 ± 0.0067 | 0.6532 ± 0.0268 | 0.6515 ± 0.0248 | 0.5055 ± 0.0021 | 0.6493 ± 0.0465 |
| Entertain | 0.5279 ± 0.0246 | 0.4182 ± 0.0304 | 0.4319 ± 0.0179 | 0.4473 ± 0.0128 | 0.4361 ± 0.0094 | 0.4382 ± 0.018 | 0.4388 ± 0.0158 | 0.4229 ± 0.0152 | 0.4298 ± 0.0176 |
| Society | 0.5256 ± 0.0071 | 0.4441 ± 0.0327 | 0.4865 ± 0.0092 | 0.4911 ± 0.0139 | 0.4684 ± 0.0149 | 0.4792 ± 0.0158 | 0.474 ± 0.0199 | 0.571 ± 0.0221 | 0.481 ± 0.0213 |
| average | 0.6373 | 0.5273 | 0.5567 | 0.5542 | 0.5171 | 0.5627 | 0.5651 | 0.5502 | 0.5588 |
| Methods | Hamming Loss | Coverage Error | Ranking Loss | Average Precision |
|---|---|---|---|---|
| MLCFS | 1.78 | 1.11 | 1.33 | 1.11 |
| MIFS | 6.56 | 8.33 | 5.78 | 8 |
| D2F | 5.78 | 5.33 | 5.22 | 4.44 |
| PMU | 7.44 | 4.89 | 4.67 | 5.11 |
| SCLS | 7.67 | 6.56 | 7.33 | 7.33 |
| LRFS | 2.56 | 3.67 | 3.67 | 3.89 |
| FIMF | 4.89 | 3.33 | 4.78 | 3.89 |
| IDA | 3.67 | 6.33 | 6 | 5.67 |
| MFSJMI | 4.44 | 5.44 | 5.78 | 5.44 |
| Evaluation Metrics | χ²_F | F_F | Critical Value |
|---|---|---|---|
| Hamming Loss | 38.9293 | 9.4172 | 2.102 |
| Coverage Error | 42.2351 | 11.3516 | 2.102 |
| Ranking Loss | 22.4270 | 3.6192 | 2.102 |
| Average Precision | 38.1521 | 9.0173 | 2.102 |
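The Friedman statistics above can be reproduced directly from the average ranks. For example, using the Hamming Loss ranks with N = 9 datasets and k = 9 methods:

```python
import numpy as np

# Average ranks of the nine methods over the nine datasets
# (Hamming Loss column of the rank table above)
ranks = np.array([1.78, 6.56, 5.78, 7.44, 7.67, 2.56, 4.89, 3.67, 4.44])
N, k = 9, 9  # number of datasets, number of methods

# Friedman chi-square statistic and its F-distributed
# refinement (Iman-Davenport correction)
chi2_f = 12 * N / (k * (k + 1)) * (np.sum(ranks ** 2) - k * (k + 1) ** 2 / 4)
f_f = (N - 1) * chi2_f / (N * (k - 1) - chi2_f)

print(round(chi2_f, 4), round(f_f, 4))  # → 38.9293 9.4172
```

The same computation on the Coverage Error, Ranking Loss, and Average Precision ranks yields the remaining rows; each F_F value is then compared against the reported critical value of the corresponding F distribution at α = 0.05.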
Share and Cite
Cao, Y.; Zhang, P.; Wang, L. Multi-Label Feature Selection Method Based on Maximum Label Complexity Ratio. Electronics 2026, 15, 525. https://doi.org/10.3390/electronics15030525