Comparative Evaluation of Machine Learning Models Using Structured and Unstructured Clinical Data for Predicting Unplanned General Medicine Readmissions in a Tertiary Hospital in Australia
Abstract
1. Introduction
1.1. Related Work
1.1.1. Structured Data and Traditional ML Models
1.1.2. Deep Learning on Structured EMR Variables
1.1.3. Unstructured Clinical Text and Early Natural Language Processing (NLP) Models
1.1.4. Multimodal Integration
1.2. Study Motivation and Contributions
- A structured-only model using curated EMR variables;
- A text-only model based on fine-tuned Bio-ClinicalBERT embeddings of clinical notes;
- Several multimodal architectures integrating structured data with text embeddings, including feedforward, CNN, and LSTM networks.
2. Materials and Methods
2.1. Study Design and Data Source
2.2. Data Variables, Preprocessing and Feature Representation
- Structured dataset: Demographic, clinical, utilisation, and laboratory variables.
- Unstructured dataset: A 768-dimensional [CLS] representation per admission, obtained from Bio-ClinicalBERT via end-to-end fine-tuning on the clinical notes.
- Combined dataset: Concatenation of structured variables and 768-dimensional embeddings to form a unified feature representation.
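The fusion step above can be sketched as a simple feature-dimension concatenation; the batch size and random tensors below are illustrative stand-ins, not study data:

```python
import torch

# Hypothetical batch of 4 admissions: 11 structured EMR features each,
# plus one 768-dimensional Bio-ClinicalBERT [CLS] embedding per admission.
structured = torch.randn(4, 11)
cls_embedding = torch.randn(4, 768)

# Unified feature representation: concatenation along the feature
# dimension, giving a 779-dimensional input vector per admission.
combined = torch.cat([structured, cls_embedding], dim=1)
```

This combined matrix is what the downstream multimodal classifiers consume as input.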
2.3. Text Representation Using Bio-ClinicalBERT
2.4. Model Development
- Structured-only model: A feedforward neural network trained exclusively on structured EMR variables, including demographic, clinical, laboratory, and healthcare utilisation features. Structured features are numerical or categorical variables derived from the database, not free-text notes.
- Unstructured text-only model: An end-to-end Bio-ClinicalBERT model fine-tuned on concatenated clinical narratives (admission notes, progress notes, allied health documentation, discharge summaries) to generate a 768-dimensional [CLS] embedding per admission, which is used for binary readmission prediction.
- Combined multimodal model: Integrates the two complementary data types—structured EMR features and unstructured text embeddings—by concatenating the 768-dimensional [CLS] embedding with structured features as input to a feedforward neural network. Here, “multimodal” refers to the combination of numeric/categorical structured features and textual embeddings, not multiple sensory modalities.
Additional Multimodal Architectures
- Combined multimodal (CNN) model: The concatenated structured variables and 768-dimensional [CLS] embedding were passed through one-dimensional convolutional layers with ReLU activation, followed by max-pooling and fully connected layers prior to sigmoid output.
- Combined multimodal (LSTM) model: The concatenated representation was reshaped and processed through an LSTM network, with the final hidden state used for binary classification via a sigmoid output layer.
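The LSTM variant above can be sketched as follows. The reshaping scheme (splitting the 779-dimensional concatenated vector into a pseudo-sequence of 19 steps of 41 features) and the hidden size are illustrative assumptions; the paper does not specify these details:

```python
import torch
import torch.nn as nn

class LSTMHead(nn.Module):
    """Reshapes the concatenated 779-d vector (11 structured + 768 text)
    into a short pseudo-sequence and classifies from the final hidden state.
    Sequence length and hidden size are illustrative guesses."""
    def __init__(self, feat_dim=779, seq_len=19, hidden=64):
        super().__init__()
        assert feat_dim % seq_len == 0      # 779 = 19 * 41
        self.seq_len = seq_len
        self.step = feat_dim // seq_len
        self.lstm = nn.LSTM(self.step, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):
        # (batch, 779) -> (batch, 19, 41) pseudo-sequence
        x = x.view(x.size(0), self.seq_len, self.step)
        _, (h_n, _) = self.lstm(x)          # final hidden state
        return torch.sigmoid(self.fc(h_n[-1]))

model = LSTMHead()
probs = model(torch.randn(3, 779))          # hypothetical batch of 3
```

The CNN variant follows the same pattern, with `nn.Conv1d` plus max-pooling layers in place of the LSTM.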
- Input layer: 11 structured features;
- Hidden layers: Two fully connected layers with 256 and 64 neurons, respectively, each using ReLU activation;
- Output layer: A single neuron with sigmoid activation for binary classification.
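The architecture listed above can be sketched directly in PyTorch. The class weight of 3.0 is an illustrative assumption reflecting the roughly 3:1 class imbalance in the cohort; the exact weighting used in the study is not reproduced here:

```python
import torch
import torch.nn as nn

# Structured-only network as specified: 11 inputs, hidden layers of
# 256 and 64 neurons with ReLU, and a single sigmoid output neuron
# (here kept as a logit, with sigmoid folded into the loss).
model = nn.Sequential(
    nn.Linear(11, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Class-weighted binary cross-entropy; pos_weight ~3 is a stand-in
# for the non-readmitted:readmitted ratio in the cohort.
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([3.0]))

x = torch.randn(8, 11)                      # hypothetical batch
y = torch.randint(0, 2, (8, 1)).float()     # hypothetical labels
logits = model(x)
loss = loss_fn(logits, y)
```

At inference time, `torch.sigmoid(logits)` yields the predicted readmission probability.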
2.5. Classical Machine Learning Baselines, Model Interpretation, and Statistical Analysis
- The structured model was evaluated using the DeepExplainer [42] on a subset of 500 background samples from the training set.
- SHAP summary (beeswarm) plots were generated to visualise feature contributions, and mean absolute SHAP values were used to rank the importance of each variable.
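The ranking step described above can be sketched with a stand-in SHAP matrix. The feature names and values below are hypothetical; in the study this matrix would come from `shap.DeepExplainer` applied to the structured model with the 500 background samples:

```python
import numpy as np

# Stand-in for the (n_samples, n_features) SHAP value matrix that
# DeepExplainer would return for the 11 structured variables.
rng = np.random.default_rng(0)
shap_values = rng.normal(size=(500, 11))
feature_names = [f"feature_{i}" for i in range(11)]  # hypothetical names

# Importance = mean absolute SHAP value per feature.
mean_abs = np.abs(shap_values).mean(axis=0)
ranking = sorted(zip(feature_names, mean_abs), key=lambda t: -t[1])

# Relative importance as a percentage of total attribution.
relative = 100 * mean_abs / mean_abs.sum()
```

The sorted `ranking` corresponds to the ordering reported in the SHAP importance table below, and `relative` to its percentage column.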
3. Results
3.1. Model Training and Evaluation
3.1.1. Classical Machine Learning Models
- Random Forest: ROC-AUC 0.61, accuracy 0.75, precision 0.50, recall 0.15, F1-score 0.23.
- Gradient Boosting: ROC-AUC 0.61, accuracy 0.73, precision 0.40, recall 0.17, F1-score 0.24.
- Extra Trees: ROC-AUC 0.61, accuracy 0.74, precision 0.41, recall 0.16, F1-score 0.23.
- HistGradient Boosting: ROC-AUC 0.62, accuracy 0.73, precision 0.40, recall 0.16, F1-score 0.23.
- Model definitions:
- Structured EMR: Demographic, clinical, laboratory, and healthcare utilisation variables.
- Classical machine learning baselines: Logistic regression and XGBoost trained using structured EMR variables only.
- DL–Structured: Feedforward neural network trained exclusively on structured EMR variables.
- DL–Text (Bio-ClinicalBERT): End-to-end fine-tuned Bio-ClinicalBERT model trained on concatenated free-text clinical notes.
- DL–Multimodal: Deep learning model integrating structured EMR variables with Bio-ClinicalBERT text embeddings.
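As an illustration of the classical baseline setup, the following sketch trains a class-weighted logistic regression on synthetic stand-in data; the real 11 EMR variables and study hyperparameters are not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 11 structured EMR variables, with an
# imbalanced binary outcome driven by the first feature.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 11))
y = (X[:, 0] + rng.normal(size=1000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights the minority (readmitted) class,
# analogous to the class weighting used in the deep learning models.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

The XGBoost baseline follows the same fit/score pattern with `xgboost.XGBClassifier` in place of `LogisticRegression`.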
3.1.2. Deep Learning Models
3.2. Feature Importance Analysis (SHAP)
4. Discussion
Strengths and Limitations
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| AUC | Area Under the Receiver Operating Characteristic Curve |
| BERT | Bidirectional Encoder Representations from Transformers |
| CNN | Convolutional Neural Network |
| CRP | C-reactive Protein |
| ED | Emergency Department |
| EMR | Electronic Medical Record |
| F1-score | Harmonic mean of precision and recall |
| GPT | Generative Pretrained Transformer |
| HFRS | Hospital Frailty Risk Score |
| IQR | Interquartile Range |
| IRSD | Index of Relative Socio-economic Disadvantage |
| LSTM | Long Short-Term Memory |
| MICE | Multivariate Imputation by Chained Equations |
| NLP | Natural Language Processing |
| PyTorch | Python-based Deep Learning Framework |
| ReLU | Rectified Linear Unit |
| RNN | Recurrent Neural Network |
| ROC | Receiver Operating Characteristic |
| ROC-AUC | Area Under the Receiver Operating Characteristic Curve |
| SD | Standard Deviation |
| SHAP | Shapley Additive exPlanations |
Appendix A
References
- Allaudeen, N.; Vidyarthi, A.; Maselli, J.; Auerbach, A. Redefining readmission risk factors for general medicine patients. J. Hosp. Med. 2011, 6, 54–60. [Google Scholar] [CrossRef]
- James, J.; Tan, S.; Stretton, B.; Kovoor, J.G.; Gupta, A.K.; Gluck, S.; Gilbert, T.; Sharma, Y.; Bacchi, S. Why do we evaluate 30-day readmissions in general medicine? A historical perspective and contemporary data. Intern. Med. J. 2023, 53, 1070–1075. [Google Scholar] [CrossRef]
- Naylor, M.D.; Brooten, D.; Campbell, R.; Jacobsen, B.S.; Mezey, M.D.; Pauly, M.V.; Schwartz, J.S. Comprehensive discharge planning and home follow-up of hospitalized elders: A randomized clinical trial. JAMA 1999, 281, 613–620. [Google Scholar] [CrossRef] [PubMed]
- Tsai, T.C.; Orav, E.J.; Jha, A.K. Care fragmentation in the postdischarge period: Surgical readmissions, distance of travel, and postoperative mortality. JAMA Surg. 2015, 150, 59–64. [Google Scholar] [CrossRef] [PubMed]
- Zhou, H.; Della, P.R.; Roberts, P.; Goh, L.; Dhaliwal, S.S. Utility of models to predict 28-day or 30-day unplanned hospital readmissions: An updated systematic review. BMJ Open 2016, 6, e011060. [Google Scholar] [CrossRef] [PubMed]
- Goldstein, B.A.; Navar, A.M.; Pencina, M.J.; Ioannidis, J.P. Opportunities and challenges in developing risk prediction models with electronic health records data: A systematic review. J. Am. Med. Inform. Assoc. 2017, 24, 198–208. [Google Scholar] [CrossRef] [PubMed]
- Xiao, C.; Ma, T.; Dieng, A.B.; Blei, D.M.; Wang, F. Readmission prediction via deep contextual embedding of clinical concepts. PLoS ONE 2018, 13, e0195024. [Google Scholar] [CrossRef]
- Farhan, W.; Wang, Z.; Huang, Y.; Wang, S.; Wang, F.; Jiang, X. A Predictive Model for Medical Events Based on Contextual Embedding of Temporal Sequences. JMIR Med. Inform. 2016, 4, e39. [Google Scholar] [CrossRef]
- Lybarger, K.; Dobbins, N.J.; Long, R.; Singh, A.; Wedgeworth, P.; Uzuner, Ö.; Yetisgen, M. Leveraging natural language processing to augment structured social determinants of health data in the electronic health record. J. Am. Med. Inform. Assoc. 2023, 30, 1389–1397. [Google Scholar] [CrossRef]
- Shickel, B.; Tighe, P.J.; Bihorac, A.; Rashidi, P. Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis. IEEE J. Biomed. Health Inform. 2018, 22, 1589–1604. [Google Scholar] [CrossRef]
- Morgan, D.J.; Bame, B.; Zimand, P.; Dooley, P.; Thom, K.A.; Harris, A.D.; Bentzen, S.; Ettinger, W.; Garrett-Ray, S.D.; Tracy, J.K.; et al. Assessment of Machine Learning vs Standard Prediction Rules for Predicting Hospital Readmissions. JAMA Netw. Open 2019, 2, e190348. [Google Scholar] [CrossRef]
- Sharma, Y.; Thompson, C.; Mangoni, A.A.; Shahi, R.; Horwood, C.; Woodman, R. Performance of Machine Learning Models in Predicting 30-Day General Medicine Readmissions Compared to Traditional Approaches in Australian Hospital Setting. Healthcare 2025, 13, 1223. [Google Scholar] [CrossRef]
- Hasan, O.; Meltzer, D.O.; Shaykevich, S.A.; Bell, C.M.; Kaboli, P.J.; Auerbach, A.D.; Wetterneck, T.B.; Arora, V.M.; Zhang, J.; Schnipper, J.L. Hospital readmission in general medicine patients: A prediction model. J. Gen. Intern. Med. 2010, 25, 211–219. [Google Scholar] [CrossRef]
- Rajkomar, A.; Oren, E.; Chen, K.; Dai, A.M.; Hajaj, N.; Hardt, M.; Liu, P.J.; Liu, X.; Marcus, J.; Sun, M.; et al. Scalable and accurate deep learning with electronic health records. Npj Digit. Med. 2018, 1, 18. [Google Scholar] [CrossRef]
- Futoma, J.; Morris, J.; Lucas, J. A comparison of models for predicting early hospital readmissions. J. Biomed. Inform. 2015, 56, 229–238. [Google Scholar] [CrossRef]
- Ashfaq, A.; Sant’Anna, A.; Lingman, M.; Nowaczyk, S. Readmission prediction using deep learning on electronic health records. J. Biomed. Inform. 2019, 97, 103256. [Google Scholar] [CrossRef] [PubMed]
- Lu, H.; Ehwerhemuepha, L.; Rakovski, C. A comparative study on deep learning models for text classification of unstructured medical notes with various levels of class imbalance. BMC Med. Res. Methodol. 2022, 22, 181. [Google Scholar] [CrossRef] [PubMed]
- Huang, K.; Altosaar, J.; Ranganath, R. ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission. arXiv 2019, arXiv:1904.05342. [Google Scholar] [CrossRef]
- Alsentzer, E.; Murphy, J.; Boag, W.; Weng, W.H.; Jindi, D.; Naumann, T.; McDermott, M. Publicly Available Clinical BERT Embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop; Association for Computational Linguistics: Minneapolis, MN, USA, 2019; pp. 72–78. Available online: https://aclanthology.org/W19-1909/ (accessed on 19 February 2026).
- Seinen, T.M.; Fridgeirsson, E.A.; Ioannou, S.; Jeannetot, D.; John, L.H.; Kors, J.A.; Markus, A.F.; Pera, V.; Rekkas, A.; Williams, R.D.; et al. Use of unstructured text in prognostic clinical prediction models: A systematic review. J. Am. Med. Inform. Assoc. 2022, 29, 1292–1302. [Google Scholar] [CrossRef]
- Brown, J.R.; Ricket, I.M.; Reeves, R.M.; Shah, R.U.; Goodrich, C.A.; Gobbel, G.; Stabler, M.E.; Perkins, A.M.; Minter, F.; Cox, K.C.; et al. Information Extraction From Electronic Health Records to Predict Readmission Following Acute Myocardial Infarction: Does Natural Language Processing Using Clinical Notes Improve Prediction of Readmission? J. Am. Heart Assoc. 2022, 11, e024198. [Google Scholar] [CrossRef]
- Mahajan, S.M.; Ghani, R. Combining Structured and Unstructured Data for Predicting Risk of Readmission for Heart Failure Patients. In MEDINFO 2019: Health and Wellbeing e-Networks for All; Studies in Health Technology and Informatics; IOS Press: Amsterdam, The Netherlands, 2019; Volume 264, pp. 238–242. [Google Scholar] [CrossRef]
- Pham, M.K.; Mai, T.T.; Crane, M.; Ebiele, M.; Brennan, R.; Ward, M.E.; Geary, U.; McDonald, N.; Bezbradica, M. Forecasting Patient Early Readmission from Irish Hospital Discharge Records Using Conventional Machine Learning Models. Diagnostics 2024, 14, 2405. [Google Scholar] [CrossRef]
- Zhang, D.; Yin, C.; Zeng, J.; Yuan, X.; Zhang, P. Combining structured and unstructured data for predictive models: A deep learning approach. BMC Med. Inform. Decis. Mak. 2020, 20, 280. [Google Scholar] [CrossRef]
- Pandey, S.R.; Tile, J.D.; Oghaz, M.M.D. Predicting 30-day hospital readmissions using ClinicalT5 with structured and unstructured electronic health records. PLoS ONE 2025, 20, e0328848. [Google Scholar] [CrossRef]
- Cui, H.; Fang, X.; Xu, R.; Kan, X.; Ho, J.C.; Yang, C. Multimodal Fusion of EHR in Structures and Semantics: Integrating Clinical Records and Notes with Hypergraph and LLM. In MEDINFO 2025—Healthcare Smart × Medicine Deep; Studies in Health Technology and Informatics; IOS Press: Amsterdam, The Netherlands, 2025; Volume 329, pp. 753–757. [Google Scholar] [CrossRef]
- Mahmoudi, E.; Kamdar, N.; Kim, N.; Gonzales, G.; Singh, K.; Waljee, A.K. Use of electronic medical records in development and validation of risk prediction models of hospital readmission: Systematic review. BMJ 2020, 369, m958. [Google Scholar] [CrossRef] [PubMed]
- Huang, Y.; Talwar, A.; Chatterjee, S.; Aparasu, R.R. Application of machine learning in predicting hospital readmissions: A scoping review of the literature. BMC Med. Res. Methodol. 2021, 21, 96. [Google Scholar] [CrossRef] [PubMed]
- Ru, B.; Tan, X.; Liu, Y.; Kannapur, K.; Ramanan, D.; Kessler, G.; Lautsch, D.; Fonarow, G. Comparison of Machine Learning Algorithms for Predicting Hospital Readmissions and Worsening Heart Failure Events in Patients With Heart Failure With Reduced Ejection Fraction: Modeling Study. JMIR Form. Res. 2023, 7, e41775. [Google Scholar] [CrossRef]
- Si, Y.; Du, J.; Li, Z.; Jiang, X.; Miller, T.; Wang, F.; Zheng, W.J.; Roberts, K. Deep representation learning of patient data from Electronic Health Records (EHR): A systematic review. J. Biomed. Inform. 2021, 115, 103671. [Google Scholar] [CrossRef]
- Shin, S.; Austin, P.C.; Ross, H.J.; Abdel-Qadir, H.; Freitas, C.; Tomlinson, G.; Chicco, D.; Mahendiran, M.; Lawler, P.R.; Billia, F.; et al. Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC Heart Fail. 2021, 8, 106–115. [Google Scholar] [CrossRef]
- Sarijaloo, F.; Park, J.; Zhong, X.; Wokhlu, A. Predicting 90 day acute heart failure readmission and death using machine learning-supported decision analysis. Clin. Cardiol. 2021, 44, 230–237. [Google Scholar] [CrossRef]
- Sharma, Y.; Horwood, C.; Hakendorf, P.; Shahi, R.; Thompson, C. External Validation of the Hospital Frailty-Risk Score in Predicting Clinical Outcomes in Older Heart-Failure Patients in Australia. J. Clin. Med. 2022, 11, 2193. [Google Scholar] [CrossRef]
- Hu, J.; Gonsahn, M.D.; Nerenz, D.R. Socioeconomic status and readmissions: Evidence from an urban teaching hospital. Health Aff. 2014, 33, 778–785. [Google Scholar] [CrossRef]
- Mudge, A.M.; Kasper, K.; Clair, A.; Redfern, H.; Bell, J.J.; Barras, M.A.; Dip, G.; Pachana, N.A. Recurrent readmissions in medical patients: A prospective study. J. Hosp. Med. 2011, 6, 61–67. [Google Scholar]
- Sharma, Y.; Miller, M.; Kaambwa, B.; Shahi, R.; Hakendorf, P.; Horwood, C.; Thompson, C. Factors influencing early and late readmissions in Australian hospitalised patients and investigating role of admission nutrition status as a predictor of hospital readmissions: A cohort study. BMJ Open 2018, 8, e022246. [Google Scholar] [CrossRef]
- Brand, C.; Sundararajan, V.; Jones, C.; Hutchinson, A.; Campbell, D. Readmission patterns in patients with chronic obstructive pulmonary disease, chronic heart failure and diabetes mellitus: An administrative dataset analysis. Intern. Med. J. 2005, 35, 296–299. [Google Scholar] [CrossRef]
- Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. arXiv 2020, arXiv:2005.14165. [Google Scholar] [CrossRef]
- Thompson, D.C.; Mofidi, R. Natural Language Processing framework for identifying abdominal aortic aneurysm repairs using unstructured electronic health records. Sci. Rep. 2025, 15, 26388. [Google Scholar] [CrossRef]
- Novac, O.C.; Chirodea, M.C.; Novac, C.M.; Bizon, N.; Oproescu, M.; Stan, O.P.; Gordan, C.E. Analysis of the Application Efficiency of TensorFlow and PyTorch in Convolutional Neural Network. Sensors 2022, 22, 8872. [Google Scholar] [CrossRef]
- Rumpf, S.; Zufall, N.; Rumpf, F.; Gschwendtner, A. A Performance Comparison of Different YOLOv7 Networks for High-Accuracy Cell Classification in Bronchoalveolar Lavage Fluid Utilising the Adam Optimiser and Label Smoothing. J. Imaging Inform. Med. 2025, 38, 2367–2380. [Google Scholar] [CrossRef]
- Chiang, Y.-Y.; Chen, C.-L.; Chen, Y.-H. Deep learning evaluation of glaucoma detection using fundus photographs in highly myopic populations. Biomedicines 2024, 12, 1394. [Google Scholar] [CrossRef]
| Model | Input Features | Hidden Layers | Output | Activation | Notes |
|---|---|---|---|---|---|
| Structured-only | 11 structured EMR variables | 2 fully connected layers: 256, 64 neurons | 1 | Sigmoid | ReLU activation in hidden layers; class-weighted binary cross-entropy loss |
| Text-only (Bio-ClinicalBERT) | Clinical notes (Bio-ClinicalBERT embeddings) | None (transformer-based) | 1 | Sigmoid | End-to-end fine-tuned Bio-ClinicalBERT with task-specific classification head; max token length = 256 |
| Combined multimodal (Feedforward) | Structured variables + 768-d Bio-ClinicalBERT [CLS] embedding | 2 fully connected layers: 256, 64 neurons | 1 | Sigmoid | Concatenated structured + text embeddings processed same as structured-only model |
| Combined multimodal (CNN) | Structured variables + 768-d Bio-ClinicalBERT [CLS] embedding | 1D convolutional layer(s) + max-pooling + fully connected layer(s) | 1 | Sigmoid | Convolution applied to concatenated feature vector before classification |
| Combined multimodal (LSTM) | Structured variables + 768-d Bio-ClinicalBERT [CLS] embedding | LSTM layer + fully connected layer | 1 | Sigmoid | Final LSTM hidden state used for binary classification |
| Variable | Total (n = 4135) | No Readmission (n = 3129) | 30-Day Readmission (n = 1006) | p-Value |
|---|---|---|---|---|
| Age in years, median (IQR) | 74.0 (59.0–83.0) | 73.0 (59.0–83.0) | 74.0 (60.0–83.0) | 0.416 |
| Male sex n (%) | 1918 (46.4) | 1452 (46.4) | 466 (46.3) | 0.850 |
| Charlson index, median (IQR) | 1.0 (1.0–2.0) | 1.0 (0.0–2.0) | 1.0 (0.0–3.0) | <0.001 |
| HFRS, median (IQR) | 4.9 (2.6–7.3) | 4.9 (2.6–7.1) | 5.2 (2.6–8.1) | <0.001 |
| IRSD, mean (SD) | 997.3 (59.1) | 998.9 (58.1) | 992.4 (62.1) | 0.002 |
| ED visits in previous 6 months, median (IQR) | 1.0 (0.0–2.0) | 1.0 (0.0–2.0) | 1.5 (0.0–4.0) | <0.001 |
| Hospital admissions in last 1 year, median (IQR) | 0.0 (0.0–2.0) | 0.0 (0.0–1.0) | 1.0 (0.0–3.0) | <0.001 |
| Smokers n (%) | 297 (7.2) | 196 (6.3) | 101 (10.0) | <0.001 |
| Alcohol abuse n (%) | 451 (10.9) | 275 (8.8) | 176 (17.5) | <0.001 |
| CRP, median (IQR) | 18.2 (3.8–63.1) | 18.6 (3.7–64.4) | 16.4 (4.0–60.6) | 0.190 |
| LOS, median (IQR) | 3.3 (1.8–6.5) | 3.2 (1.8–6.1) | 3.9 (2.0–7.5) | <0.001 |
| Model Category | Model | Data Modality | AUC-ROC | Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|---|---|---|
| Classical ML baselines | Logistic regression | Structured EMR only | 0.64 | 0.58 | 0.32 | 0.66 | 0.43 |
| | XGBoost | Structured EMR only | 0.60 | 0.62 | 0.32 | 0.51 | 0.40 |
| Additional ML models | Random Forest | Structured EMR only | 0.61 | 0.75 | 0.50 | 0.15 | 0.23 |
| | Gradient Boosting | Structured EMR only | 0.61 | 0.73 | 0.40 | 0.17 | 0.24 |
| | Extra Trees | Structured EMR only | 0.61 | 0.74 | 0.41 | 0.16 | 0.23 |
| | HistGradient Boosting | Structured EMR only | 0.62 | 0.73 | 0.40 | 0.16 | 0.23 |
| Deep learning models | DL-Structured | Structured EMR only | 0.62 | 0.74 | 0.42 | 0.14 | 0.22 |
| | DL-Unstructured-text (Bio-ClinicalBERT) | Unstructured clinical text only | 0.52 | 0.46 | 0.25 | 0.83 | 0.39 |
| | DL-Multimodal | Structured variables + clinical text | 0.58 | 0.63 | 0.33 | 0.49 | 0.40 |
| | CNN (combined) | Structured variables + clinical text | 0.54 | 0.47 | 0.26 | 0.66 | 0.38 |
| | LSTM (combined) | Structured variables + clinical text | 0.54 | 0.49 | 0.27 | 0.63 | 0.38 |
| Feature | Mean Absolute SHAP Value | Relative Importance (%) |
|---|---|---|
| Total hospital admissions (prior year) | 0.0627 | 17.1 |
| Number of ED visits in previous 6 months | 0.0627 | 17.1 |
| Charlson index | 0.0318 | 8.7 |
| Hospital length of stay | 0.0241 | 6.6 |
| Age | 0.0164 | 4.5 |
| HFRS | 0.0163 | 4.5 |
| IRSD | 0.0105 | 2.9 |
| CRP | 0.0094 | 2.6 |
| Sex | 0.0081 | 2.2 |
| Alcohol abuse | 0.0058 | 1.6 |
| Smoking | 0.0037 | 1.0 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Sharma, Y.; Thompson, C.; Mangoni, A.A.; Horwood, C.; Woodman, R. Comparative Evaluation of Machine Learning Models Using Structured and Unstructured Clinical Data for Predicting Unplanned General Medicine Readmissions in a Tertiary Hospital in Australia. Computers 2026, 15, 138. https://doi.org/10.3390/computers15030138