Predictive Modeling of COVID-19 Readmissions: Insights from Machine Learning and Deep Learning Approaches
Abstract
1. Introduction
2. Literature Review
3. Methodology
3.1. Overview
3.2. Dataset
3.3. Data Pre-Processing
3.4. Data Balancing
3.5. Machine Learning and Deep Learning Methods for Tabular Data Classification
3.6. Implementation Details
4. Results and Discussion
4.1. Results
4.2. Challenges of the Study
4.3. Study Scopes and Limitations
4.4. Future Directions
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Coronavirus Cases. Available online: https://www.worldometers.info/coronavirus/ (accessed on 11 October 2023).
- Mathieu, E. Coronavirus Pandemic (COVID-19). Available online: https://ourworldindata.org/covid-hospitalizations (accessed on 5 March 2020).
- Hassan. Malaysia Faces New COVID-19 Wave as More Get Hospitalised. The Straits Times, 3 May 2023. Available online: https://www.straitstimes.com/asia/se-asia/malaysia-faces-new-covid-19-wave-as-more-get-hospitalised (accessed on 25 October 2023).
- Huang, J.; Zheng, L.; Li, Z.; Hao, S.; Ye, F.; Chen, J.; Yao, X.; Ling, X.B. Recurrence of SARS-CoV-2 PCR positivity in COVID-19 patients: A single center experience and potential implications. MedRxiv 2020, 2020, 20089573. [Google Scholar] [CrossRef]
- Raftarai, A.; Mahounaki, R.R.; Harouni, M.; Karimi, M.; Olghoran, S.K. Predictive models of hospital readmission rate using the improved AdaBoost in COVID-19. In Intelligent Computing Applications for COVID-19; CRC Press: Boca Raton, FL, USA, 2021; pp. 67–86. [Google Scholar]
- Rodriguez, V.A.; Bhave, S.; Chen, R.; Pang, C.; Hripcsak, G.; Sengupta, S.; Elhadad, N.; Green, R.; Adelman, J.; Metitiri, K.S.; et al. Development and validation of prediction models for mechanical ventilation, renal replacement therapy, and readmission in COVID-19 patients. J. Am. Med. Inform. Assoc. 2021, 28, 1480–1488. [Google Scholar] [CrossRef] [PubMed]
- Davazdahemami, B.; Zolbanin, H.M.; Delen, D. An explanatory machine learning framework for studying pandemics: The case of COVID-19 emergency department readmissions. Decis. Support Syst. 2022, 161, 113730. [Google Scholar] [CrossRef] [PubMed]
- Afrash, M.R.; Kazemi-Arpanahi, H.; Shanbehzadeh, M.; Nopour, R.; Mirbagheri, E. Predicting hospital readmission risk in patients with COVID-19: A machine learning approach. Inform. Med. Unlocked 2022, 30, 100908. [Google Scholar] [CrossRef] [PubMed]
- Shanbehzadeh, M.; Yazdani, A.; Shafiee, M.; Kazemi-Arpanahi, H. Predictive modeling for COVID-19 readmission risk using machine learning algorithms. BMC Med. Inform. Decis. Mak. 2022, 22, 139. [Google Scholar] [CrossRef] [PubMed]
- Han, H.; Wang, W.Y.; Mao, B.H. Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In Proceedings of the Advances in Intelligent Computing: International Conference on Intelligent Computing, ICIC, Hefei, China, 23–26 August 2005; Volume 3644, pp. 878–887. [Google Scholar] [CrossRef]
- He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the International Joint Conference on Neural Networks, Hong Kong, China, 1–8 June 2008; pp. 1322–1328. [Google Scholar] [CrossRef]
- Fix, E. Discriminatory Analysis: Nonparametric Discrimination, Consistency Properties; USAF School of Aviation Medicine: Randolph Field, TX, USA, 1951. Available online: https://books.google.com/books?id=s85PAQAAMAAJ (accessed on 3 May 2024). [Google Scholar]
- Schölkopf, B. SVMs-A practical consequence of learning theory. IEEE Intell. Syst. Their Appl. 1998, 13, 18–21. [Google Scholar] [CrossRef]
- Breiman, L. Classification and Regression Trees; Routledge/CRC Press: Boca Raton, FL, USA, 2017; pp. 1–358. [Google Scholar] [CrossRef]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar] [CrossRef]
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. LightGBM: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst. 2017, 30. Available online: https://proceedings.neurips.cc/paper/2017/hash/6449f44a102fde848669bdd9eb6b76fa-Abstract.html (accessed on 3 May 2024). [Google Scholar]
- Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. CatBoost: Unbiased boosting with categorical features. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, QC, Canada, 3–8 December 2018; Volume 31. [Google Scholar]
- Broelemann, K.; Kasneci, G. A gradient-based split criterion for highly accurate and transparent model trees. In Proceedings of the IJCAI International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; pp. 2030–2037. [Google Scholar] [CrossRef]
- McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
- Guo, H.; Tang, R.; Ye, Y.; Li, Z.; He, X. DeepFM: A factorization-machine based neural network for CTR prediction. In Proceedings of the IJCAI International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; pp. 1725–1731. [Google Scholar] [CrossRef]
- Shavitt, I.; Segal, E. Regularization learning networks: Deep learning for tabular datasets. Adv. Neural Inf. Process. Syst. 2018, 31. Available online: https://proceedings.neurips.cc/paper/2018/hash/500e75a036dc2d7d2fec5da1b71d36cc-Abstract.html (accessed on 3 May 2024). [Google Scholar]
- Arik, S.; Pfister, T. TabNet: Attentive interpretable tabular learning. Proc. AAAI Conf. Artif. Intell. 2021, 35, 6679–6687. Available online: https://ojs.aaai.org/index.php/AAAI/article/view/16826 (accessed on 3 May 2024). [CrossRef]
- Yoon, J.; Zhang, Y.; Jordon, J.; van der Schaar, M. VIME: Extending the success of self- and semi-supervised learning to tabular domain. Adv. Neural Inf. Process. Syst. 2020, 33, 11033–11043. [Google Scholar]
- Huang, X.; Khetan, A.; Cvitkovic, M.; Karnin, Z. TabTransformer: Tabular data modeling using contextual embeddings. arXiv 2020, arXiv:2012.06678. [Google Scholar]
- Katzir, L.; Elidan, G.; El-Yaniv, R. Net-DNF: Effective deep modeling of tabular data. In Proceedings of the International Conference on Learning Representations, 2021. Available online: https://openreview.net/forum?id=73WTGs96kho (accessed on 3 May 2021). [Google Scholar]
- Yamada, Y.; Lindenbaum, O.; Negahban, S.; Kluger, Y. Feature selection using stochastic gates. Int. Conf. Mach. Learn. 2020, 119, 10648–10659. Available online: https://proceedings.mlr.press/v119/yamada20a.html (accessed on 3 May 2024).
- Agarwal, R.; Melnick, L.; Frosst, N.; Zhang, X.; Lengerich, B.; Caruana, R.; Hinton, G.E. Neural additive models: Interpretable machine learning with neural nets. Adv. Neural Inf. Process. Syst. 2021, 34, 4699–4711. Available online: https://proceedings.neurips.cc/paper/2021/hash/251bd0442dfcc53b5a761e050f8022b8-Abstract.html (accessed on 3 May 2024).
- Somepalli, G.; Goldblum, M.; Schwarzschild, A.; Bruss, C.B.; Goldstein, T. Saint: Improved neural networks for tabular data via row attention and contrastive pre-training. arXiv 2021, arXiv:2106.01342. [Google Scholar]
- Borisov, V.; Meier, J.; Heuvel, J.V.D.; Jalali, H.; Kasneci, G. A robust unsupervised ensemble of feature-based explanations using restricted Boltzmann machines. arXiv 2021, arXiv:2111.07379. [Google Scholar]
- Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2623–2631. [Google Scholar]
- Gorishniy, Y.; Rubachev, I.; Khrulkov, V.; Babenko, A. Revisiting Deep Learning Models for Tabular Data. Adv. Neural Inf. Process. Syst. 2021, 23, 18932–18943. [Google Scholar]
Feature | Description | Type |
---|---|---|
Age | The age of the patient during admission | Discrete |
Sex | The sex of the patient | Categorical |
BMI | The body mass index (BMI) of the patient | Continuous |
LOS of previous admission | The length of stay of previous admission | Continuous |
Systolic blood pressure (mmHg) | The systolic blood pressure of the patient | Continuous |
Diastolic blood pressure (mmHg) | The diastolic blood pressure of the patient | Continuous |
Heart rate (per min) | The heart rate of the patient | Continuous |
Body temperature | The body temperature of the patient | Continuous |
Respiration rate (per min) | The respiration rate of the patient | Continuous |
SPO2 (%) | The oxygen saturation of the patient | Continuous |
Fever | The presence of fever in the patient | Categorical |
Cough | The presence of cough in the patient | Categorical |
SOB | The presence of shortness of breath (SOB) in the patient | Categorical |
Lethargy | The presence of lethargy in the patient | Categorical |
Sore throat | The presence of a sore throat in the patient | Categorical |
HTN | The presence of hypertension (HTN) in the patient | Categorical |
DM | The presence of diabetes (DM) in the patient | Categorical |
Dyslipidemia | The presence of dyslipidemia symptoms in the patient | Categorical |
HPT | The presence of hyperparathyroidism (HPT) in the patient | Categorical |
IHD | The presence of ischemic heart disease (IHD) in the patient | Categorical
Readmitted after COVID-19 (Y/N) | The indication of patient readmission due to COVID-19 | Categorical (target variable) |
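The 20 predictors above mix continuous vitals with binary symptom and comorbidity flags, so pre-processing has to scale the former and encode the latter. A minimal scikit-learn sketch of that split follows; the column names and file path are illustrative placeholders, not the dataset's actual headers.

```python
# Minimal pre-processing sketch for the feature mix above.
# Column names and the file name are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("readmission.csv")  # hypothetical path

continuous = ["age", "bmi", "prev_los", "sbp", "dbp", "heart_rate",
              "body_temp", "resp_rate", "spo2"]
categorical = ["sex", "fever", "cough", "sob", "lethargy", "sore_throat",
               "htn", "dm", "dyslipidemia", "hpt", "ihd"]

pre = ColumnTransformer([
    ("num", StandardScaler(), continuous),                         # scale vitals
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),  # encode flags
])

X = pre.fit_transform(df.drop(columns=["readmitted"]))
y = df["readmitted"].map({"Y": 1, "N": 0})
```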
Method | Class 0 | Class 1 | Total |
---|---|---|---|
ROS | 1441 | 1441 | 2882 |
BSMOTE | 1441 | 1441 | 2882 |
ADASYN | 1441 | 1383 | 2824 |
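These counts can be reproduced with imbalanced-learn's standard oversamplers; the sketch below assumes library defaults, since the exact sampler settings are not restated here. ADASYN generates synthetic minority samples adaptively according to local class density, which is why its minority count (1383) does not land exactly on the majority count of 1441.

```python
# Sketch of the three oversampling strategies with imbalanced-learn defaults.
from collections import Counter
from imblearn.over_sampling import ADASYN, BorderlineSMOTE, RandomOverSampler

samplers = {
    "ROS": RandomOverSampler(random_state=42),
    "BSMOTE": BorderlineSMOTE(random_state=42),
    "ADASYN": ADASYN(random_state=42),
}

for name, sampler in samplers.items():
    X_res, y_res = sampler.fit_resample(X, y)  # X, y from the pre-processing step
    print(name, Counter(y_res))                # per-class counts after balancing
```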
Method | Description |
---|---|
Linear Model | The Linear Model assumes a linear relationship between the dependent variable and one or more independent variables. |
KNN [12] | The K-Nearest Neighbors (KNN) is a non-parametric algorithm that classifies observations based on the majority vote of their nearest neighbors in the feature space. |
SVM [13] | The Support Vector Machine (SVM) aims to find an optimal hyperplane in high-dimensional feature space to separate different classes. |
Decision Tree [14] | The Decision Tree is a hierarchical model that partitions the feature space using different feature values to make predictions, with internal nodes representing features and leaf nodes representing class labels. |
Random Forest [15] | The Random Forest is an ensemble learning method that improves classification accuracy and robustness by combining the predictions of multiple Decision Trees. |
XGBoost [16] | The Extreme Gradient Boosting (XGBoost) is a gradient boosting algorithm that builds an ensemble of weak prediction models, typically decision trees, sequentially, with each new tree correcting the errors of the current ensemble under a regularized objective to improve overall accuracy. |
LightGBM [17] | The LightGBM is a scalable gradient-boosting framework that employs tree-based learning with a leaf-wise growth strategy and histogram-based feature binning, achieving faster training times and high accuracy in large-scale tabular data classification tasks. |
CatBoost [18] | The CatBoost is a gradient-boosting framework that handles categorical features without manual pre-processing, employing a blend of ordered boosting, random permutations, and gradient-based optimization techniques to provide accurate predictions in classification tasks. |
Model Tree [19] | The Model Tree is a hybrid approach that combines Decision Trees with linear regression models, utilizing Decision Trees to segment the feature space and applying linear regression models in each leaf node for interpretable predictions in classification tasks. |
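For orientation, the sketch below fits several of these baselines on a held-out split and reports the two metrics used throughout the results; library defaults are assumed (the study's tuned settings appear in the hyperparameter table at the end of this section), and the split ratio is an assumption.

```python
# Sketch: fitting several classical baselines and reporting accuracy and AUC.
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

X_tr, X_te, y_tr, y_te = train_test_split(
    X_res, y_res, test_size=0.2, stratify=y_res, random_state=42)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=15),  # tuned value from the table
    "SVM": SVC(probability=True),                 # probability=True enables AUC
    "Random Forest": RandomForestClassifier(),
    "XGBoost": XGBClassifier(),
    "LightGBM": LGBMClassifier(),
    "CatBoost": CatBoostClassifier(verbose=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name}: acc={accuracy_score(y_te, model.predict(X_te)):.4f}, "
          f"auc={roc_auc_score(y_te, proba):.4f}")
```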
Method | Description |
---|---|
MLP [20] | The Multilayer Perceptron (MLP) is an artificial neural network with interconnected layers of neurons commonly employed for classification tasks, leveraging non-linear activation functions and backpropagation to learn intricate relationships between features and target variables. |
DeepFM [21] | The DeepFM is a hybrid model that integrates a factorization machine with a deep neural network, handling both dense and sparse tabular features and jointly learning low-order and high-order feature interactions for accurate predictions in classification tasks. |
RLN [22] | The Regularization Learning Network (RLN) learns per-weight regularization coefficients to improve generalization and prevent overfitting, balancing model complexity against training accuracy for robust, reliable predictions in classification tasks. |
TabNet [23] | The TabNet employs a combination of sequential and attention-based processing to learn hierarchical and interpretable representations of the input features, enabling effective feature selection and accurate classification predictions. |
VIME [24] | The Value Imputation and Mask Estimation (VIME) framework extends self- and semi-supervised learning to the tabular domain, using value imputation and mask-vector estimation as pretext tasks so that representations learned from unlabeled rows improve downstream classification. |
TabTransformer [25] | The TabTransformer utilizes transformer-based architectures with self-attention mechanisms to capture feature dependencies and interactions, facilitating feature encoding and precise predictions in classification tasks. |
Net-DNF [26] | The Networks of Disjunctive Normal Form (Net-DNF) is a model architecture that combines neural networks with logical operations, representing decision rules in a disjunctive normal form and using neural networks to learn rule weights, resulting in effective feature representation and accurate predictions in classification tasks. |
STG [27] | The Stochastic Gates (STG) method applies a Gaussian-based continuous relaxation of Bernoulli gates to each input feature, modeling the probability that a feature is informative and performing embedded feature selection that improves classification accuracy. |
NAM [28] | The Neural Additive Model (NAM) combines neural networks with additive modeling: each feature is processed by its own subnetwork and the outputs are summed, decomposing the prediction into interpretable per-feature components that reveal the relationship between each feature and the target variable. |
SAINT [29] | The Self-Attention and Intersample Attention Transformer (SAINT) applies self-attention across the features of each row and intersample attention across rows, combined with contrastive pre-training, to model tabular data and produce accurate classification predictions. |
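As a concrete reference point for the simplest of these baselines, a minimal PyTorch MLP for this binary task might look like the sketch below. The layer sizes reuse the tuned hidden_dim and n_layers reported in the hyperparameter table later in this section; n_features=20 assumes the raw predictor count before encoding, and the architecture itself is a sketch, not the study's exact implementation.

```python
# Minimal PyTorch MLP for binary tabular classification (illustrative sketch).
# hidden_dim=91 and n_layers=5 follow the tuned MLP values reported below.
import torch
import torch.nn as nn

class TabularMLP(nn.Module):
    def __init__(self, n_features: int, hidden_dim: int = 91, n_layers: int = 5):
        super().__init__()
        layers, dim = [], n_features
        for _ in range(n_layers):
            layers += [nn.Linear(dim, hidden_dim), nn.ReLU()]
            dim = hidden_dim
        layers.append(nn.Linear(dim, 1))  # single logit: readmitted yes/no
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

model = TabularMLP(n_features=20)
logits = model(torch.randn(32, 20))  # one batch of 32 rows, 20 features
```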
Training Hyperparameters | Value |
---|---|
Batch Size | 32 |
Early Stopping Rounds | 5 |
Epochs | 100 |
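A training loop consistent with these settings caps training at 100 epochs, draws batches of 32, and stops once the validation loss fails to improve for 5 consecutive epochs. The sketch below assumes train_loader and val_loader are PyTorch DataLoaders built with batch_size=32, and reuses the MLP above together with its tuned learning rate from the tuning table.

```python
# Training-loop sketch matching the table: batch size 32 (via the DataLoaders),
# up to 100 epochs, early stopping after 5 rounds without improvement.
import copy
import torch

opt = torch.optim.Adam(model.parameters(), lr=7.57e-4)  # tuned MLP learning rate
loss_fn = torch.nn.BCEWithLogitsLoss()

best_loss, patience, best_state = float("inf"), 0, None
for epoch in range(100):                        # Epochs = 100
    model.train()
    for xb, yb in train_loader:                 # batch_size = 32
        opt.zero_grad()
        loss_fn(model(xb), yb.float()).backward()
        opt.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(xb), yb.float()).item()
                       for xb, yb in val_loader)
    if val_loss < best_loss:
        best_loss, patience = val_loss, 0
        best_state = copy.deepcopy(model.state_dict())
    else:
        patience += 1
        if patience >= 5:                       # Early Stopping Rounds = 5
            break

model.load_state_dict(best_state)               # restore best checkpoint
```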
Method | Accuracy | AUC |
---|---|---|
Linear Model | 0.6280 ± 0.0086 | 0.6592 ± 0.0097 |
KNN | 0.7120 ± 0.0190 | 0.7957 ± 0.0113 |
SVM | 0.9233 ± 0.0067 | 0.9206 ± 0.0122 |
Decision Tree | 0.8903 ± 0.0146 | 0.9224 ± 0.0123 |
Random Forest | 0.9791 ± 0.0048 | 0.9981 ± 0.0008 |
XGBoost | 0.9670 ± 0.0033 | 0.9972 ± 0.0009 |
CatBoost | 0.9882 ± 0.0020 | 1.0000 ± 0.0000 |
LightGBM | 0.9792 ± 0.0084 | 1.0000 ± 0.0000 |
Model Tree | 0.6999 ± 0.0150 | 0.7489 ± 0.0068 |
MLP | 0.8986 ± 0.0232 | 0.9523 ± 0.0068 |
TabNet | 0.8498 ± 0.0197 | 0.9226 ± 0.0245 |
VIME | 0.6974 ± 0.0372 | 0.7722 ± 0.0448 |
TabTransformer | 0.7571 ± 0.0114 | 0.8472 ± 0.0081 |
RLN | 0.5877 ± 0.0719 | 0.5979 ± 0.0801 |
DNFNet | 0.7443 ± 0.0279 | 0.8248 ± 0.0221 |
STG | 0.5000 ± 0.0005 | 0.6540 ± 0.0111 |
NAM | 0.6006 ± 0.0132 | 0.6562 ± 0.0153 |
DeepFM | 0.8306 ± 0.0320 | 0.9205 ± 0.0289 |
SAINT | 0.9219 ± 0.0225 | 0.9647 ± 0.0090 |
Method | Accuracy | AUC |
---|---|---|
Linear Model | 0.6777 ± 0.0068 | 0.7344 ± 0.0133 |
KNN | 0.7120 ± 0.0190 | 0.7957 ± 0.0113 |
SVM | 0.8664 ± 0.0087 | 0.8995 ± 0.0079 |
Decision Tree | 0.8682 ± 0.0103 | 0.8882 ± 0.0126 |
Random Forest | 0.9282 ± 0.0100 | 0.9750 ± 0.0055 |
XGBoost | 0.9507 ± 0.0058 | 0.9795 ± 0.0026 |
CatBoost | 0.9584 ± 0.0081 | 0.9870 ± 0.0051 |
LightGBM | 0.9563 ± 0.0098 | 0.9832 ± 0.0059 |
Model Tree | 0.7616 ± 0.0173 | 0.8498 ± 0.0144 |
MLP | 0.8414 ± 0.0142 | 0.9001 ± 0.0063 |
TabNet | 0.8504 ± 0.0449 | 0.9124 ± 0.0350 |
VIME | 0.6921 ± 0.0252 | 0.8088 ± 0.0206 |
TabTransformer | 0.7460 ± 0.0148 | 0.8385 ± 0.0116 |
RLN | 0.6589 ± 0.0117 | 0.7045 ± 0.0221 |
DNFNet | 0.7713 ± 0.0133 | 0.8537 ± 0.0226 |
STG | 0.6433 ± 0.0339 | 0.7078 ± 0.0179 |
NAM | 0.5920 ± 0.0451 | 0.7086 ± 0.0167 |
DeepFM | 0.8151 ± 0.0146 | 0.8841 ± 0.0101 |
SAINT | 0.8321 ± 0.0505 | 0.9014 ± 0.0327 |
Method | Accuracy | AUC |
---|---|---|
Linear Model | 0.6006 ± 0.0221 | 0.6587 ± 0.0303 |
KNN | 0.7355 ± 0.0203 | 0.8329 ± 0.0157 |
SVM | 0.8435 ± 0.0110 | 0.8803 ± 0.0093 |
Decision Tree | 0.8421 ± 0.0098 | 0.8866 ± 0.0140 |
Random Forest | 0.9228 ± 0.0078 | 0.9731 ± 0.0060 |
XGBoost | 0.9380 ± 0.0091 | 0.9799 ± 0.0033 |
CatBoost | 0.9596 ± 0.0070 | 0.9909 ± 0.0028 |
LightGBM | 0.9448 ± 0.0158 | 0.9836 ± 0.0045 |
Model Tree | 0.7199 ± 0.0190 | 0.7959 ± 0.0223 |
MLP | 0.7649 ± 0.0273 | 0.8372 ± 0.0150 |
TabNet | 0.7733 ± 0.0766 | 0.8427 ± 0.0842 |
VIME | 0.5999 ± 0.0098 | 0.7361 ± 0.0399 |
TabTransformer | 0.6742 ± 0.0121 | 0.7466 ± 0.0128 |
RLN | 0.5102 ± 0.0007 | 0.5583 ± 0.0483 |
DNFNet | 0.7203 ± 0.0320 | 0.8074 ± 0.0380 |
STG | 0.5783 ± 0.0399 | 0.6345 ± 0.0335 |
NAM | 0.5637 ± 0.0341 | 0.6431 ± 0.0325 |
DeepFM | 0.7514 ± 0.0267 | 0.8278 ± 0.0249 |
SAINT | 0.7514 ± 0.0267 | 0.8278 ± 0.0249 |
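All entries above are reported as mean ± standard deviation. A sketch of producing scores in that format with stratified cross-validation follows; the fold count of 5 is an assumption, and CatBoost is used here since it tops the tables.

```python
# Sketch: mean ± std accuracy and AUC over stratified folds, matching the
# reporting format of the results tables (5 folds is an assumption).
from catboost import CatBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_validate(CatBoostClassifier(verbose=0), X_res, y_res,
                        cv=cv, scoring=["accuracy", "roc_auc"])

for metric in ("accuracy", "roc_auc"):
    vals = scores[f"test_{metric}"]
    print(f"{metric}: {vals.mean():.4f} ± {vals.std():.4f}")
```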
Method | Search Space (Before Tuning) | Tuned Value |
---|---|---|
Linear Model | Not Available | Not Available |
KNN | “n_neighbors”: [3, 5, 7, …, 41] | “n_neighbors”: 15 |
SVM | “C”: [1 × 10−10, 1 × 1010] (log scale) | “C”: 7950111594.29391 |
Decision Tree | “max_depth”: [2, 12] (log scale) | “max_depth”: 11 |
Random Forest | “max_depth”: [2, 12] (log scale), “n_estimators”: [5, 100] (log scale) | “max_depth”: 11, “n_estimators”: 17 |
XGBoost | “max_depth”: [2, 12] (log scale), “alpha”: [1 × 10−8, 1.0] (log scale), “lambda”: [1 × 10−8, 1.0] (log scale), “eta”: [0.01, 0.3] (log scale) | “alpha”: 0.0007382548136758594, “eta”: 0.057017983970348476, “lambda”: 0.006548409895095237, “max_depth”: 7 |
CatBoost | “learning_rate”: [0.01, 0.3] (log scale), “max_depth”: [2, 12] (log scale), “l2_leaf_reg”: [0.5, 30] (log scale) | “learning_rate”: 0.20084869470553585, “max_depth”: 10, “l2_leaf_reg”: 0.8702333344772514 |
LightGBM | “num_leaves”: [2, 4096] (log scale), “lambda_l1”: [1 × 10−8, 10.0] (log scale), “lambda_l2”: [1 × 10−8, 10.0] (log scale), “learning_rate”: [0.01, 0.3] (log scale) | “lambda_l1”: 7.799729980544415 × 10−6, “lambda_l2”: 4.589017170283277 × 10−5, “learning_rate”: 0.20370799209870197, “num_leaves”: 864 |
Model Tree | “criterion”: [‘gradient’, ‘gradient-renorm-z’], “max_depth”: [1, 3] | “criterion”: “gradient-renorm-z”, “max_depth”: 2 |
MLP | “hidden_dim”: [10, 100], “n_layers”: [2, 5], “learning_rate”: [0.0005, 0.001] | “hidden_dim”: 91, “n_layers”: 5, “learning_rate”: 0.0007566601124786297 |
TabNet | “n_d”: [8, 64], “n_steps”: [3, 10], “gamma”: [1.0, 2.0], “cat_emb_dim”: [1, 3], “n_independent”: [1, 5], “n_shared”: [1, 5], “momentum”: [0.001, 0.4] (log scale), “mask_type”: [“sparsemax”, “entmax”] | “n_d”: 22, “n_steps”: 3, “gamma”: 1.7895426531686847, “cat_emb_dim”: 3, “n_independent”: 1, “n_shared”: 4, “momentum”: 0.34790974943728636, “mask_type”: “entmax” |
VIME | “p_m”: [0.1, 0.9], “alpha”: [0.1, 10], “K”: [2, 3, 5, 10, 15, 20], “beta”: [0.1, 10] | “p_m”: 0.2820583537585633, “K”: 10, “alpha”: 4.553114184088457, “beta”: 5.145804248060295 |
TabTransformer | “dim”: [32, 64, 128, 256], “depth”: [1, 2, 3, 6, 12], “heads”: [2, 4, 8], “weight_decay”: [−6, −1], “learning_rate”: [−6, −3], “dropout”: [0, 0.1, 0.2, 0.3, 0.4, 0.5] | “dim”: 64, “depth”: 6, “heads”: 8, “weight_decay”: −3, “learning_rate”: −3, “dropout”: 0.2 |
RLN | “layers”: [2, 8], “theta”: [−12, −8], “log_lr”: [5, 7], “norm”: [1, 2] | “layers”: 7, “theta”: −11, “log_lr”: 5, “norm”: 2 |
DNFNet | “n_formulas”: [64, 128, 256, 512, 1024], “elastic_net_beta”: [1.6, 1.3, 1.0, 0.7, 0.4, 0.1] | “n_formulas”: 128, “elastic_net_beta”: 1.3 |
STG | “learning_rate”: [1 × 10−4, 1 × 10−1] (log scale), “lam”: [1 × 10−3, 10] (log scale), “hidden_dims”: [[500, 50, 10], [60, 20], [500, 500, 10], [500, 400, 20]] | “learning_rate”: 0.02444969191570802, “lam”: 0.040527281585294395, “hidden_dims”: [60, 20] |
NAM | ‘lr’: [0.001, 0.1] (log scale), ‘output_regularization’: [0.001, 0.1] (log scale), ‘dropout’: [0, 0.9], ‘feature_dropout’: [0, 0.2] | “lr”: 1.2 × 10−3, “output_regularization”: 2.95 × 10−3, “dropout”: 1.8 × 10−2, “feature_dropout”: 0.168 |
DeepFM | ‘dnn_dropout’: [0, 0.9] | “dnn_dropout”: 0.3640626656168372 |
SAINT | “dim”: [32, 64, 128, 256], “depth”: [1, 2, 3, 6, 12], “heads”: [2, 4, 8], “dropout”: [0, 0.1, 0.2, 0.3, 0.4, 0.5] | “dim”: 32, “depth”: 1, “heads”: 2, “dropout”: 0.6 |
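These search spaces were explored with Optuna [31]. The sketch below wires the XGBoost row of the table into an Optuna objective; the 100-trial budget and 5-fold CV scoring are assumptions, and XGBoost's scikit-learn wrapper exposes “alpha”, “lambda”, and “eta” as reg_alpha, reg_lambda, and learning_rate.

```python
# Sketch of an Optuna search for XGBoost over the ranges listed above.
import optuna
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def objective(trial):
    model = XGBClassifier(
        max_depth=trial.suggest_int("max_depth", 2, 12, log=True),
        reg_alpha=trial.suggest_float("alpha", 1e-8, 1.0, log=True),    # "alpha"
        reg_lambda=trial.suggest_float("lambda", 1e-8, 1.0, log=True),  # "lambda"
        learning_rate=trial.suggest_float("eta", 0.01, 0.3, log=True),  # "eta"
    )
    return cross_val_score(model, X_res, y_res, cv=5, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)  # trial budget is an assumption
print(study.best_params)
```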