A Comparative Analysis of Different Machine Learning Algorithms Developed with Hyperparameter Optimization in the Prediction of Student Academic Success
Abstract
Featured Application
1. Introduction
2. Machine Learning Methods
2.1. Distance and Kernel-Based Methods
2.2. Tree-Based and Ensemble Learning Models
3. Data Set and Preprocessing Steps
4. Model Training and Evaluation Methods
4.1. Evaluation Metrics
4.2. Hyperparameter Optimization with GA
4.3. Determination of Optimum Hyperparameters of Models with Grid Search
5. Discussion and Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Statistic | Absences | Parental Support | Weekly Study Time | Tutoring | Extracurriculars | GPA
---|---|---|---|---|---|---
Count | 2392 | 2392 | 2392 | 2392 | 2392 | 2392
Mean | 14.541 | 2.122 | 9.772 | 0.301 | 0.383 | 1.906
Std | 8.467 | 1.123 | 5.653 | 0.459 | 0.486 | 0.915
Min | 0.0 | 0.0 | 0.001 | 0.0 | 0.0 | 0.0
25% | 7.0 | 1.0 | 5.043 | 0.0 | 0.0 | 1.175
50% | 15.0 | 2.0 | 9.706 | 0.0 | 0.0 | 1.894
75% | 22.0 | 3.0 | 14.408 | 1.0 | 1.0 | 2.622
Max | 29.0 | 4.0 | 19.978 | 1.0 | 1.0 | 4.0
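The descriptive statistics above can be reproduced, and the data prepared for modelling, with a short script. The sketch below is a minimal illustration only: the local file name ("Student_performance_data.csv"), the exact column names, and the 80/20 train/test split are assumptions, not details taken from the paper.

```python
# Minimal preprocessing sketch; file name and column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("Student_performance_data.csv")

features = ["Absences", "ParentalSupport", "StudyTimeWeekly",
            "Tutoring", "Extracurricular"]
target = "GPA"

# Reproduce the descriptive statistics shown in the table above.
print(df[features + [target]].describe())

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42)

# Distance- and kernel-based models (K-NNs, SVR) are scale-sensitive, so the
# features are standardized; the tree ensembles can consume the raw values.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```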
Metrics | Formula
---|---
Determination coefficient (R²) | $R^2 = 1 - \dfrac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$
Mean absolute error (MAE) | $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i\rvert$
Mean squared error (MSE) | $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$
Root mean squared error (RMSE) | $\mathrm{RMSE} = \sqrt{\mathrm{MSE}}$
Mean squared logarithmic error (MSLE) | $\mathrm{MSLE} = \frac{1}{n}\sum_{i=1}^{n}\bigl(\ln(1+y_i) - \ln(1+\hat{y}_i)\bigr)^2$
Maximum error (MaxEr) | $\mathrm{MaxEr} = \max_i \lvert y_i - \hat{y}_i\rvert$
Explained variance (ExVar) | $\mathrm{ExVar} = 1 - \dfrac{\mathrm{Var}(y - \hat{y})}{\mathrm{Var}(y)}$
Median absolute error (MedAE) | $\mathrm{MedAE} = \operatorname{median}\bigl(\lvert y_i - \hat{y}_i\rvert\bigr)$
Mean absolute relative error (MARE) | $\mathrm{MARE} = \frac{1}{n}\sum_{i=1}^{n}\left\lvert \dfrac{y_i - \hat{y}_i}{y_i}\right\rvert$
Mean square relative error (MSRE) | $\mathrm{MSRE} = \frac{1}{n}\sum_{i=1}^{n}\left(\dfrac{y_i - \hat{y}_i}{y_i}\right)^2$
Root mean square relative error (RMSRE) | $\mathrm{RMSRE} = \sqrt{\mathrm{MSRE}}$
Relative root mean square error (RRMSE) | $\mathrm{RRMSE} = \dfrac{\mathrm{RMSRE}}{\bar{y}}$
Mean bias error (MBE) | $\mathrm{MBE} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)$
Maximum absolute relative error (eMAX) | $e_{\mathrm{MAX}} = \max_i \left\lvert \dfrac{y_i - \hat{y}_i}{y_i}\right\rvert$
Standard deviation (SD) | $\mathrm{SD} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(e_i - \bar{e})^2}$
t-statistic (t-stat) | $t = \dfrac{\mathrm{MBE}\,\sqrt{n}}{\mathrm{SD}}$
Uncertainty at 95% (U95) | $U_{95} = \dfrac{1.96\,\mathrm{SD}}{\sqrt{n}}$

Here $y_i$ and $\hat{y}_i$ denote the observed and predicted GPA values, $\bar{y}$ is the mean of the observed values, $e_i = y_i - \hat{y}_i$ is the prediction error with mean $\bar{e}$, and $n$ is the number of test samples.
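The sketch below computes these metrics, combining scikit-learn's built-in scores with hand-coded versions of the relative and statistical measures following the formulas in the table. Since the GPA column's minimum is 0.0, the relative metrics are computed only over observations with $y_i \neq 0$, where they are defined; the very large MSRE and eMAX values in the results tables reflect near-zero targets inflating relative errors.

```python
# Evaluation metrics per the table above; relative errors skip y_i = 0.
import numpy as np
from sklearn.metrics import (
    explained_variance_score, max_error, mean_absolute_error,
    mean_squared_error, mean_squared_log_error, median_absolute_error, r2_score,
)

def evaluate(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = y_true.size
    err = y_true - y_pred                        # e_i = y_i - y_hat_i
    nz = y_true != 0                             # relative errors need y_i != 0
    rel = err[nz] / y_true[nz]
    mse = mean_squared_error(y_true, y_pred)
    rmse = np.sqrt(mse)
    sd = err.std(ddof=1)                         # sample std of the errors
    mbe = np.mean(y_pred - y_true)               # mean bias error
    msre = np.mean(rel ** 2)
    return {
        "R2": r2_score(y_true, y_pred),
        "MSE": mse,
        "RMSE": rmse,
        "MAE": mean_absolute_error(y_true, y_pred),
        "MSLE": mean_squared_log_error(y_true, y_pred),
        "MedAE": median_absolute_error(y_true, y_pred),
        "MaxEr": max_error(y_true, y_pred),
        "ExVar": explained_variance_score(y_true, y_pred),
        "MARE": np.mean(np.abs(rel)),
        "MSRE": msre,
        "RMSRE": np.sqrt(msre),
        "RRMSE": np.sqrt(msre) / y_true.mean(),  # RMSRE scaled by mean target
        "MBE": mbe,
        "eMAX": np.max(np.abs(rel)),
        "SD": sd,
        "t-stat": mbe * np.sqrt(n) / sd,         # signed t-test on the bias
        "U95": 1.96 * sd / np.sqrt(n),           # 95% confidence half-width
    }
```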
Metric | DT | RF | K-NNs | AdaBoost | GBM | ETR | BR | SVR | HGBR | XGBoost
---|---|---|---|---|---|---|---|---|---|---
R² | 0.8691 | 0.9287 | 0.899 | 0.9174 | 0.9384 | 0.9265 | 0.9184 | 0.95 | 0.9408 | 0.9305
MSE | 0.1083 | 0.059 | 0.0835 | 0.0683 | 0.051 | 0.0608 | 0.0674 | 0.0413 | 0.049 | 0.0575
RMSE | 0.329 | 0.2428 | 0.289 | 0.2613 | 0.2258 | 0.2466 | 0.2597 | 0.2032 | 0.2213 | 0.2398
MAE | 0.26 | 0.188 | 0.2268 | 0.2097 | 0.1793 | 0.1948 | 0.203 | 0.1597 | 0.1754 | 0.1891
MSLE | 0.0185 | 0.0105 | 0.0146 | 0.0116 | 0.0092 | 0.0105 | 0.0118 | 0.0074 | 0.009 | 0.0101
MedAE | 0.2156 | 0.1491 | 0.1806 | 0.1834 | 0.1567 | 0.1595 | 0.1578 | 0.1347 | 0.1472 | 0.161
MaxEr | 1.2878 | 0.8993 | 0.9993 | 0.8398 | 0.9777 | 0.8172 | 0.979 | 0.9745 | 0.9737 | 0.8617
ExVar | 0.8691 | 0.9287 | 0.8993 | 0.9174 | 0.9384 | 0.9265 | 0.9184 | 0.9501 | 0.9408 | 0.9305
MARE | 0.3923 | 0.3789 | 0.4607 | 0.4274 | 0.3497 | 0.3579 | 0.3617 | 0.2866 | 0.3542 | 0.372
MSRE | 6.3221 | 8.119 | 16.0801 | 11.9587 | 6.9601 | 6.5338 | 5.9819 | 4.0173 | 7.5782 | 8.4804
RMSRE | 2.5144 | 2.8494 | 4.01 | 3.4581 | 2.6382 | 2.5561 | 2.4458 | 2.0043 | 2.7528 | 2.9121
RRMSE | 1.3052 | 1.4791 | 2.0815 | 1.7951 | 1.3694 | 1.3269 | 1.2696 | 1.0404 | 1.429 | 1.5116
MBE | 0 | −0.0021 | −0.0154 | −0.0009 | −0.0011 | −0.0002 | −0.0022 | 0.0069 | −0.0017 | 0.0018
eMAX | 48.1298 | 45.2616 | 81.5312 | 58.4733 | 43.281 | 42.6166 | 40.3193 | 30.0223 | 41.4007 | 44.147
SD | 0.329 | 0.2428 | 0.2886 | 0.2613 | 0.2258 | 0.2466 | 0.2597 | 0.2031 | 0.2213 | 0.2398
t-stat | −0.0013 | −0.1935 | −1.1649 | −0.0759 | −0.1024 | −0.0189 | −0.1821 | 0.7483 | −0.1669 | 0.1657
U95 | 0.0295 | 0.0217 | 0.0258 | 0.0234 | 0.0202 | 0.0221 | 0.0233 | 0.0182 | 0.0198 | 0.0215
ML Models | Parameters and Their Ranges
---|---
DT | criterion: squared_error, friedman_mse, absolute_error, poisson; max_depth: 10 values in increments of 1 in the range [1, 10]; max_leaf_nodes: 10 values in the range [2, 500]; min_samples_leaf: 10 values in increments of 1 in the range [1, 10]
RF | n_estimators: 10 values in increments of 10 in the range [100, 1000]; criterion: squared_error, friedman_mse, absolute_error, poisson; max_depth: 10 values in increments of 5 in the range [5, 50]; min_samples_leaf: 10 values in increments of 1 in the range [1, 10]
K-NNs | n_neighbors: 10 odd numbers in the interval [3, 21]; weights: uniform, distance; metric: minkowski, euclidean, manhattan, chebyshev; leaf_size: 10 values in increments of 10 in the range [10, 100]
AdaBoost | n_estimators: 10 values in increments of 10 in the range [100, 2000]; learning_rate: 10 values in the range [0.01, 1.0]; loss: linear, square, exponential
GBM | n_estimators: 10 values in increments of 10 in the range [100, 1000]; learning_rate: 10 values in the range [0.01, 2.0]; max_depth: 10 values in the range [3, 40]; loss: squared_error, absolute_error, huber, quantile
ETR | n_estimators: 10 values in increments of 10 in the range [100, 1000]; min_samples_leaf: 10 values in the range [1, 18]; min_samples_split: 10 values in increments of 10 in the range [2, 20]; criterion: squared_error, absolute_error, friedman_mse, poisson
BR | n_estimators: 10 values in increments of 10 in the range [100, 1000]; max_samples: 10 values in the range [0.1, 1.0]; max_features: 10 values in the range [0.1, 1.0]; bootstrap: True, False; bootstrap_features: True, False
SVR | kernel: linear, poly, rbf, sigmoid; max_iter: 10 values in increments of 10 in the range [100, 1000]; epsilon: 10 values in the range [0.01, 0.8]; degree: 10 values in increments of 1 in the range [1, 10]
HGBR | learning_rate: 10 values in the range [0.01, 2.0]; loss: squared_error, absolute_error, poisson, quantile; max_depth: 10 values in the range [3, 40]; min_samples_leaf: 10 values in the range [1, 18]
XGBoost | n_estimators: 10 values in increments of 10 in the range [100, 1000]; learning_rate: 10 values in the range [0.01, 2.0]; max_depth: 10 values in the range [3, 40]; gamma: 0, 0.1, 0.3, 0.5
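Section 4.2 tunes these hyperparameters with a genetic algorithm. The sketch below is a minimal, generic GA over the random forest's ranges from the table, with truncation selection, one-point crossover, random mutation, and 5-fold cross-validated R² as the fitness function. Population size, generation count, and operator rates are illustrative assumptions, not the paper's settings.

```python
# Minimal GA sketch over the RF search space; GA settings are assumptions.
import random
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

SEARCH_SPACE = {
    "n_estimators": list(range(100, 1001, 100)),
    "criterion": ["squared_error", "friedman_mse", "absolute_error", "poisson"],
    "max_depth": list(range(5, 51, 5)),
    "min_samples_leaf": list(range(1, 11)),
}
KEYS = list(SEARCH_SPACE)

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(params, X, y):
    # Mean 5-fold cross-validated R2; no caching, so this is the costly step.
    model = RandomForestRegressor(random_state=42, **params)
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

def crossover(a, b):
    # One-point crossover over the ordered parameter keys.
    cut = random.randrange(1, len(KEYS))
    return {k: (a if i < cut else b)[k] for i, k in enumerate(KEYS)}

def mutate(ind, rate=0.2):
    # Resample each gene from its range with probability `rate`.
    for k in KEYS:
        if random.random() < rate:
            ind[k] = random.choice(SEARCH_SPACE[k])
    return ind

def genetic_search(X, y, pop_size=20, generations=10):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: fitness(p, X, y), reverse=True)
        elite = scored[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, X, y))
```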
Models | Hyperparameters and Associated Values |
---|---|
DT | criterion = absolute_error, max_depth = 10, max_leaf_nodes = 362, min_samples_leaf = 7, random_state = 42 |
RF | n_estimators = 600, criterion = squared_error, max_depth = 95, min_samples_leaf = 2, random_state = 42 |
K-NNs | n_neighbors = 13, weights = uniform, metric = manhattan, leaf_size = 90 |
AdaBoost | n_estimators = 601, learning_rate = 0.9269, loss = square, random_state = 42 |
GBM | n_estimators = 400, learning_rate = 0.05, max_depth = 4, objective = regression_l1, random_state = 42 |
ETR | n_estimators = 700, min_samples_leaf = 5, min_samples_split = 20, criterion = squared_error, random_state = 42, n_jobs = −1 |
BR | n_estimators = 100, max_samples = 0.9, max_features = 1, bootstrap = True, bootstrap_features = False, random_state = 42 |
SVR | kernel = poly, C = 10, epsilon = 0.01, degree = 3 |
HGBR | learning_rate = 0.01, loss = absolute_error, max_depth = 7, min_samples_leaf = 10, random_state = 42 |
XGBoost | n_estimators = 1000, learning_rate = 0.05, max_depth = 3, gamma = 0.1, random_state = 42, n_jobs = −1 |
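As an example of applying the tuned values, the sketch below rebuilds the GA-tuned SVR (the strongest model in this experiment) and scores it on the held-out test set with the evaluate() helper from the earlier sketch; it assumes the scaled splits defined in the preprocessing sketch.

```python
# GA-tuned SVR from the table above, scored with the earlier evaluate() helper.
from sklearn.svm import SVR

svr = SVR(kernel="poly", C=10, epsilon=0.01, degree=3)
svr.fit(X_train_scaled, y_train)
print(evaluate(y_test, svr.predict(X_test_scaled)))
```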
Metric | DT | RF | K-NNs | AdaBoost | GBM | ETR | BR | SVR | HGBR | XGBoost
---|---|---|---|---|---|---|---|---|---|---
R² | 0.896301 | 0.929128 | 0.913097 | 0.928929 | 0.944055 | 0.924588 | 0.108244 | 0.935501 | 0.748037 | 0.944935
MSE | 0.085752 | 0.058607 | 0.071863 | 0.058771 | 0.046263 | 0.062361 | 0.737422 | 0.053336 | 0.208357 | 0.045535
RMSE | 0.292834 | 0.242088 | 0.268072 | 0.242427 | 0.215088 | 0.249722 | 0.858733 | 0.230946 | 0.456461 | 0.213389
MAE | 0.232702 | 0.187643 | 0.212346 | 0.193578 | 0.16805 | 0.195663 | 0.720991 | 0.182679 | 0.35861 | 0.166036
MSLE | 0.01452 | 0.010516 | 0.012664 | 0.010215 | 0.008234 | 0.010839 | 0.109325 | 0.009929 | 0.037209 | 0.007811
MedAE | 0.193143 | 0.150683 | 0.183506 | 0.168019 | 0.134978 | 0.156779 | 0.632939 | 0.150914 | 0.287834 | 0.140217
MaxEr | 0.969728 | 0.886882 | 1.000751 | 0.800483 | 0.867643 | 0.864754 | 2.03398 | 0.823983 | 1.326151 | 0.873608
ExVar | 0.89632 | 0.929138 | 0.913848 | 0.92909 | 0.944072 | 0.924598 | 0.108613 | 0.935531 | 0.748241 | 0.944945
MARE | 0.401476 | 0.385017 | 0.430512 | 0.399119 | 0.31794 | 0.395242 | 1.620936 | 0.35992 | 0.959208 | 0.326581
MSRE | 7.311951 | 8.852349 | 11.71123 | 10.6093 | 5.538571 | 9.09187 | 148.025 | 9.051076 | 62.08498 | 7.247817
RMSRE | 2.704062 | 2.97529 | 3.422168 | 3.257192 | 2.353417 | 3.015273 | 12.16655 | 3.008501 | 7.879402 | 2.692177
RRMSE | 1.40364 | 1.54443 | 1.776398 | 1.690762 | 1.221625 | 1.565185 | 6.315483 | 1.56167 | 4.090085 | 1.397471
MBE | −0.00401 | −0.00289 | −0.02491 | −0.01153 | 0.003782 | 0.002897 | −0.01745 | 0.004974 | −0.01301 | 0.00278
eMAX | 42.44329 | 50.88418 | 63.27024 | 52.46861 | 37.14585 | 46.29575 | 180.5393 | 60.64751 | 118.0525 | 48.97678
SD | 0.292807 | 0.242071 | 0.266912 | 0.242152 | 0.215055 | 0.249705 | 0.858555 | 0.230892 | 0.456276 | 0.213371
t-stat | −0.29941 | −0.26152 | −2.04235 | −1.04245 | 0.384899 | 0.253934 | −0.44494 | 0.471491 | −0.62402 | 0.285113
U95 | 0.026222 | 0.021679 | 0.023903 | 0.021686 | 0.019259 | 0.022362 | 0.076888 | 0.020677 | 0.040862 | 0.019108
ML Models | Hyperparameters and Values |
---|---|
DT | criterion = poisson, max_depth = 12, max_leaf_nodes = 152, min_samples_leaf = 8, random_state = 42 |
RF | n_estimators = 300, criterion = poisson, max_depth = 20, min_samples_leaf = 2, random_state = 42 |
K-NNs | n_neighbors = 15, weights = distance, metric = manhattan, leaf_size = 10 |
AdaBoost | n_estimators = 1500, learning_rate = 1.0, loss = square, random_state = 42 |
GBM | n_estimators = 300, learning_rate = 0.05, max_depth = 3, objective = huber, random_state = 42 |
ETR | n_estimators = 600, min_samples_leaf = 2, min_samples_split = 6, criterion = friedman_mse, random_state = 42, n_jobs = −1 |
BR | n_estimators = 300, max_samples = 0.6, max_features = 1, bootstrap = True, bootstrap_features = False, random_state = 42 |
SVR | kernel = rbf, C = 10, epsilon = 0.01, degree = 3 |
HGBR | learning_rate = 0.1, loss = squared_error, max_depth = 3, min_samples_leaf = 10, random_state = 42 |
XGBoost | n_estimators = 300, learning_rate = 0.05, max_depth = 3, c = 10, random_state = 42, n_jobs = −1 |
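Section 4.3 determines these values by grid search. The sketch below shows the corresponding GridSearchCV call for the random forest over the ranges from the earlier search-space table; cv=5 and scoring="r2" are assumptions about the setup, and it reuses the splits from the preprocessing sketch. The full grid holds 4000 candidates, so the exhaustive search is computationally expensive.

```python
# Grid-search sketch for RF; cv and scoring are assumptions, not paper settings.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": list(range(100, 1001, 100)),
    "criterion": ["squared_error", "friedman_mse", "absolute_error", "poisson"],
    "max_depth": list(range(5, 51, 5)),
    "min_samples_leaf": list(range(1, 11)),
}

search = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid, cv=5, scoring="r2", n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```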
Metric | DT | RF | K-NNs | AdaBoost | GBM | ETR | BR | SVR | HGBR | XGBoost
---|---|---|---|---|---|---|---|---|---|---
R² | 0.89064 | 0.92980 | 0.913353 | 0.928917 | 0.94696 | 0.929619 | 0.143169 | 0.951915 | 0.945142 | 0.947417
MSE | 0.09043 | 0.05804 | 0.071652 | 0.058781 | 0.04386 | 0.058201 | 0.708542 | 0.039763 | 0.045364 | 0.043482
RMSE | 0.30072 | 0.24092 | 0.267678 | 0.242448 | 0.20942 | 0.241248 | 0.841749 | 0.199406 | 0.212988 | 0.208524
MAE | 0.23894 | 0.18702 | 0.211018 | 0.192703 | 0.16455 | 0.188392 | 0.706746 | 0.159441 | 0.167299 | 0.163836
MSLE | 0.01554 | 0.01037 | 0.012686 | 0.010225 | 0.00767 | 0.010056 | 0.105468 | 0.007036 | 0.008068 | 0.007635
MedAE | 0.20684 | 0.15611 | 0.184793 | 0.165127 | 0.13061 | 0.151219 | 0.617039 | 0.131227 | 0.135331 | 0.129909
MaxEr | 0.93889 | 0.85434 | 1.027994 | 0.777161 | 0.86834 | 0.842619 | 1.997979 | 0.850702 | 0.872232 | 0.84852
ExVar | 0.89066 | 0.92981 | 0.913982 | 0.929092 | 0.94699 | 0.92962 | 0.143717 | 0.951932 | 0.945184 | 0.947445
MARE | 0.47907 | 0.37021 | 0.418841 | 0.399577 | 0.31120 | 0.353887 | 1.587274 | 0.2846 | 0.329849 | 0.314439
MSRE | 17.7219 | 7.69777 | 10.62791 | 10.83885 | 5.85244 | 6.593987 | 142.0118 | 4.111488 | 6.763004 | 6.198054
RMSRE | 4.20974 | 2.77449 | 3.260048 | 3.29224 | 2.41918 | 2.567876 | 11.91687 | 2.027681 | 2.600578 | 2.489589
RRMSE | 2.18522 | 1.4402 | 1.692244 | 1.708955 | 1.25576 | 1.332948 | 6.185877 | 1.05254 | 1.349923 | 1.29231
MBE | −0.0038 | −0.0022 | −0.02281 | −0.01203 | 0.0043 | −0.00106 | −0.02129 | 0.003662 | 0.005871 | 0.00478
eMAX | 84.07117 | 45.6306 | 61.077 | 55.02174 | 39.9395 | 42.80196 | 176.9299 | 33.52411 | 41.41628 | 44.59608
SD | 0.300694 | 0.24091 | 0.266705 | 0.242149 | 0.20938 | 0.241246 | 0.84148 | 0.199372 | 0.212907 | 0.208469
t-stat | −0.27779 | −0.2007 | −1.8714 | −1.08761 | 0.45053 | −0.0964 | −0.55385 | 0.402016 | 0.603497 | 0.501833
U95 | 0.026929 | 0.02157 | 0.023885 | 0.021686 | 0.01875 | 0.021605 | 0.075358 | 0.017855 | 0.019067 | 0.018669