Article

High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms

Moshe Sipper
Department of Computer Science, Ben-Gurion University, Beer-Sheva 8410501, Israel
Academic Editors: Luca Mariot, Luca Manzoni and Stefano Mariani
Algorithms 2022, 15(9), 315; https://doi.org/10.3390/a15090315
Received: 27 July 2022 / Revised: 22 August 2022 / Accepted: 30 August 2022 / Published: 2 September 2022
(This article belongs to the Special Issue Algorithms for Natural Computing Models)
Hyperparameters in machine learning (ML) have received a fair amount of attention, and hyperparameter tuning has come to be regarded as an important step in the ML pipeline. However, just how useful is said tuning? While smaller-scale experiments have been previously conducted, herein we carry out a large-scale investigation, specifically one involving 26 ML algorithms, 250 datasets (regression and both binary and multinomial classification), 6 score metrics, and 28,857,600 algorithm runs. Analyzing the results, we conclude that for many ML algorithms, we should not expect considerable gains from hyperparameter tuning on average; however, there may be some datasets for which default hyperparameters perform poorly, especially for some algorithms. By defining a single hp_score value, which combines an algorithm's accumulated statistics, we are able to rank the 26 ML algorithms from those expected to gain the most from hyperparameter tuning to those expected to gain the least. We believe such a study shall serve ML practitioners at large.
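To make the comparison at the heart of the study concrete, the sketch below contrasts one algorithm run with its default hyperparameters against the same algorithm tuned by randomized search, on a single dataset. This is a minimal illustration using scikit-learn; the dataset, search space, and tuning budget are assumptions chosen for demonstration and do not reproduce the paper's actual experimental setup (26 algorithms, 250 datasets, 6 score metrics) or its tuning machinery.

```python
# Minimal sketch: default hyperparameters vs. randomized-search tuning.
# The search space and budget below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: the algorithm with default hyperparameters.
default_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Tuned: randomized search over an assumed hyperparameter space.
param_space = {
    "n_estimators": [50, 100, 200, 400],
    "max_depth": [None, 4, 8, 16],
    "min_samples_leaf": [1, 2, 4],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_space,
    n_iter=20,
    cv=5,
    random_state=0,
).fit(X_train, y_train)

# The gain (if any) of tuning over defaults is the quantity of interest.
print("default test accuracy:", default_model.score(X_test, y_test))
print("tuned   test accuracy:", search.score(X_test, y_test))
```

Repeating this kind of comparison across many algorithms, datasets, and metrics, and aggregating the resulting statistics, is what allows a per-algorithm ranking such as the paper's hp_score.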
Keywords: machine learning; hyperparameters
MDPI and ACS Style

Sipper, M. High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms. Algorithms 2022, 15, 315. https://doi.org/10.3390/a15090315

AMA Style

Sipper M. High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms. Algorithms. 2022; 15(9):315. https://doi.org/10.3390/a15090315

Chicago/Turabian Style

Sipper, Moshe. 2022. "High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms" Algorithms 15, no. 9: 315. https://doi.org/10.3390/a15090315
