Article

Feature Ranking on Small Samples: A Bayes-Based Approach

School of Translational Information Technologies, ITMO University, 197101 St. Petersburg, Russia
*
Author to whom correspondence should be addressed.
Entropy 2025, 27(8), 773; https://doi.org/10.3390/e27080773
Submission received: 14 May 2025 / Revised: 4 July 2025 / Accepted: 11 July 2025 / Published: 22 July 2025
(This article belongs to the Section Multidisciplinary Applications)

Abstract

The feature ranking problem arises from the need to better understand the importance, or relevance, of the available descriptive features for predicting target attributes. Among published works, the vast majority address feature selection and extraction rather than feature ranking. In this paper, we propose a novel method based on the Bayesian approach that not only provides a methodically justified way of ranking features on small datasets, but also allows the results obtained by various ranking algorithms to be benchmarked in a methodical way. The proposed method is also model-free, since no restrictions are imposed on the model. We carry out an experimental comparison of the proposed method with the classical frequency method, using two synthetic datasets and two public medical datasets. We show that the proposed ranking method achieves a high level of self-consistency (stability) already at a sample size of 50, a substantial improvement over classical logistic regression and SHAP ranking. All the experiments confirm our theoretical conclusions: as the sample grows, mutual consistency shows an increasing trend, and our method yields results that are at least comparable to, and often better than, those of other methods in terms of self-consistency and monotonicity. The proposed method can be applied to a wide class of problems of ranking influence factors on small samples, including industrial tasks, forensics, psychology, etc.
Keywords: feature ranking; small samples; Bayesian approach; ranking algorithm benchmarking
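
To make the self-consistency (stability) evaluation mentioned in the abstract concrete, the following is a minimal, illustrative Python sketch: it repeatedly ranks features on bootstrap resamples of one small dataset and reports the average pairwise Kendall tau between the resulting rankings. The synthetic dataset, the bootstrap protocol, and the logistic-regression coefficient ranker are assumptions chosen for illustration only; this is a frequency-style baseline, not the Bayesian ranking method proposed in the article.

# Illustrative sketch of measuring ranking self-consistency on a small sample.
# The dataset, sample size, and logistic-regression ranker are assumptions,
# not the paper's Bayesian method.
import numpy as np
from itertools import combinations
from scipy.stats import kendalltau
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Small-sample regime: 50 samples, 8 features (illustrative choice)
X, y = make_classification(n_samples=50, n_features=8, n_informative=4,
                           random_state=0)

def rank_features(X, y):
    # Rank each feature by the absolute value of its standardized
    # logistic-regression coefficient (0 = most important).
    Xs = StandardScaler().fit_transform(X)
    model = LogisticRegression(max_iter=1000).fit(Xs, y)
    importance = np.abs(model.coef_[0])
    return np.argsort(np.argsort(-importance))  # rank vector per feature

# Rankings obtained on bootstrap resamples of the same small dataset
rankings = []
for _ in range(30):
    idx = rng.choice(len(y), size=len(y), replace=True)
    if len(np.unique(y[idx])) < 2:  # skip degenerate resamples
        continue
    rankings.append(rank_features(X[idx], y[idx]))

# Self-consistency: mean Kendall tau over all pairs of rankings
taus = []
for a, b in combinations(rankings, 2):
    tau, _ = kendalltau(a, b)
    taus.append(tau)
print(f"mean pairwise Kendall tau (self-consistency): {np.mean(taus):.3f}")

A higher mean tau indicates that the ranker produces more stable feature orderings under resampling of the same small dataset; the article's comparison of its Bayesian method against logistic regression and SHAP rests on this kind of stability criterion, though the exact metrics and protocol are those described in the paper itself.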

Share and Cite

MDPI and ACS Style

Vatian, A.; Gusarova, N.; Tomilov, I. Feature Ranking on Small Samples: A Bayes-Based Approach. Entropy 2025, 27, 773. https://doi.org/10.3390/e27080773

AMA Style

Vatian A, Gusarova N, Tomilov I. Feature Ranking on Small Samples: A Bayes-Based Approach. Entropy. 2025; 27(8):773. https://doi.org/10.3390/e27080773

Chicago/Turabian Style

Vatian, Aleksandra, Natalia Gusarova, and Ivan Tomilov. 2025. "Feature Ranking on Small Samples: A Bayes-Based Approach" Entropy 27, no. 8: 773. https://doi.org/10.3390/e27080773

APA Style

Vatian, A., Gusarova, N., & Tomilov, I. (2025). Feature Ranking on Small Samples: A Bayes-Based Approach. Entropy, 27(8), 773. https://doi.org/10.3390/e27080773

