An Artificial Intelligence Approach to Bloodstream Infections Prediction

This study aimed to develop an early prediction model for identifying patients with bloodstream infections. The data were collected from 2015 to 2019 at Taichung Veterans General Hospital, and a total of 1647 bloodstream infection episodes and 3552 non-bloodstream infection episodes in the intensive care unit (ICU) were included in model development and evaluation. During the data analysis, 30 clinical variables were selected, including patients' basic characteristics, vital signs, laboratory data, and clinical information. Five machine learning algorithms were applied to examine the prediction performance. The findings indicated that the area under the receiver operating characteristic curve (AUROC) of the XGBoost model was 0.825 for the validation dataset and 0.821 for the testing dataset. The random forest model presented even higher AUROC values of 0.855 and 0.851 on the validation and testing datasets, respectively. The tree-based ensemble learning models thus enabled high detection ability for patients with bloodstream infections in the ICU. Additionally, the feature importance analysis revealed that alkaline phosphatase (ALKP) and the duration of central venous catheter use were the most important predictors of bloodstream infections. We further explored the relationship between features and the risk of bloodstream infection using the Shapley Additive exPlanations (SHAP) visualization method. The results showed that a higher prothrombin time was more prominent in bloodstream infections, as was the impact of a lower platelet count and lower albumin. Our results provide additional clinical information on cut-off laboratory values to assist clinical decision-making in bloodstream infection diagnostics.


Introduction
Bloodstream infections (BSIs) are one of the leading causes of death. Patients diagnosed with BSIs have high morbidity worldwide, with an estimated overall crude mortality rate of 15-30% [1]. Early recognition and initiation of treatment are key to the successful treatment of bloodstream infections. In general, pathogens are identified through blood culture, which is a time-consuming procedure due to the multiple steps required for identification [2]. Furthermore, delays in administering effective antibiotics can increase the risk of death [3]. Attempts have been made to develop effective biomarkers to detect BSIs. However, most laboratory-based methods fail in the early diagnosis of BSI [4,5].
Medical innovations powered by artificial intelligence are increasingly developing into clinically practical solutions. Machine learning or deep learning algorithms can effectively process the growing amount of data produced in various fields of medicine. Artificial intelligence can aid in the development of infection surveillance aimed at better recognizing risk factors, improving patient risk reduction, and detecting infections in a timely manner. Previous studies have developed different prediction models for bloodstream infections using various algorithms [6][7][8][9][10][11]. These studies collected data from different cohorts, such as general wards, intensive care units (ICUs), and surgical in-patients. Furthermore, different machine learning or deep learning algorithms were employed in these studies, with varying data collection windows. These studies examined the performance of a prediction model for BSI. Some studies presented excellent model performance [6,7], while others presented poor performance in identifying BSIs due to a data imbalance [8]. The Supplementary Table S1 presents the prediction models, data cohorts, data collection windows, and model evaluations of these studies.
However, these data cohorts differed. There is scant evidence of AI and machine learning implementation in the field of BSI, and no consistent trend of effect has emerged, especially in the ICU. Additionally, few studies have explored model interpretation based on clinical features.
In this study, we aimed to develop an interpretable model to predict BSI in an Asian population. We compared different approaches to evaluate which yields the better prediction performance. Moreover, we explain the relationship between clinical features and bloodstream infections using the Shapley Additive exPlanations (SHAP) visualization method.

Definition of Bloodstream Infection
We defined BSI as the growth of a clinically important pathogen in at least one blood culture. Contaminant microorganisms were classified as negative under the Clinical and Laboratory Standards Institute guidelines. The Supplementary Table S2 presents the distribution of pathogens among the BSIs.

Data Acquisition
In this retrospective study, 4275 patients who were admitted to the Taichung Veterans General Hospital ICU were included. Between August 2015 and December 2019, 12,090 blood culture episodes were collected from these patients, comprising 1680 BSI episodes and 10,410 non-BSI episodes. We found that many patients had two blood culture sampling episodes at the same time, or the interval between two individual sampling episodes was less than 24 h. To avoid data sample noise, only one episode from each such cluster was randomly selected for the data analysis. Additionally, we removed episodes in which the proportion of missing data for the clinical characteristics was more than 40%.
Finally, a total of 1478 bloodstream infections and 3597 non-bloodstream infections from blood culture tests were analyzed in our study. Figure 1 presents the flowchart of the study population selection. Table 1 reports the main characteristics of the overall  population, and the Supplementary Table S3 reports the patient characteristics in the training, validation, and test sets. The results of the t-test and analysis of variance (ANOVA) summary table for these data indicate that there were no statistically significant main or interaction effects.
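The episode-selection rule described above (keeping one randomly chosen draw from any cluster of blood cultures taken less than 24 h apart for the same patient) can be sketched as follows. This is a minimal illustration with hypothetical patient IDs and draw times, not the study's actual pipeline.

```python
import random
from datetime import datetime, timedelta

def deduplicate_episodes(episodes, window_hours=24, seed=0):
    """Keep one randomly chosen episode from any cluster of blood-culture
    draws taken less than `window_hours` apart for the same patient."""
    rng = random.Random(seed)
    kept = []
    # group episodes by patient, then sort each patient's draws chronologically
    by_patient = {}
    for patient_id, drawn_at in episodes:
        by_patient.setdefault(patient_id, []).append(drawn_at)
    for patient_id, draws in by_patient.items():
        draws.sort()
        cluster = [draws[0]]
        for t in draws[1:]:
            if t - cluster[-1] < timedelta(hours=window_hours):
                cluster.append(t)          # same cluster: < 24 h apart
            else:
                kept.append((patient_id, rng.choice(cluster)))
                cluster = [t]
        kept.append((patient_id, rng.choice(cluster)))
    return kept

episodes = [
    ("A", datetime(2019, 1, 1, 8)),   # two draws 2 h apart -> one kept
    ("A", datetime(2019, 1, 1, 10)),
    ("A", datetime(2019, 1, 3, 9)),   # > 24 h later -> separate episode
    ("B", datetime(2019, 1, 2, 7)),
]
print(len(deduplicate_episodes(episodes)))  # 3
```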

Data Outcome and Prediction Window
The main objective of this study was to frame the early prediction of BSIs as a binary classification task. The prediction target, i.e., the primary outcome of this study, was a bloodstream infection within a patient's ICU stay. Figure 2 presents the BSI prediction task: 72 h of data are used to forecast one set of blood culture tests 24 h later, i.e., a 24-h prediction window following a 72-h feature window. All vital sign data were collected between 96 and 24 h before the blood culture was drawn. The laboratory data were collected from one week to 24 h prior to the blood culture test because these data were measured infrequently. We used the means of the vital signs and laboratory tests as the feature values. Moreover, previous studies have indicated that the use of central venous catheters (CVC) increases the risk of BSIs. Therefore, the present study calculated the duration of CVC use from ICU admission to 24 h prior to the blood test as a predictive feature. We also included the period from ICU admission to 24 h prior to the blood culture measurement, in days, as a feature.
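The feature-window aggregation can be illustrated with a small sketch: vital-sign records falling in the 96-24 h window before the culture draw are averaged, while records inside the 24-h prediction window are ignored. The `window_mean` helper and the sample heart-rate records are hypothetical.

```python
from datetime import datetime, timedelta
from statistics import mean

def window_mean(records, culture_time, start_h, end_h):
    """Mean of (timestamp, value) records falling between
    culture_time - start_h and culture_time - end_h."""
    lo = culture_time - timedelta(hours=start_h)
    hi = culture_time - timedelta(hours=end_h)
    values = [v for t, v in records if lo <= t <= hi]
    return mean(values) if values else None

culture = datetime(2019, 5, 10, 12)
heart_rate = [
    (culture - timedelta(hours=90), 88),   # inside the 96-24 h feature window
    (culture - timedelta(hours=30), 92),   # inside
    (culture - timedelta(hours=5), 120),   # too recent: inside prediction window
]
print(window_mean(heart_rate, culture, start_h=96, end_h=24))  # 90
```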


Clinical Features Selection
This study is a retrospective analysis of clinical data. The feature selection was based on the diagnostic criteria for sepsis and the risk factors for BSIs in the ICU [12,13]. Considering the available data in our electronic health record and the opinions of our domain experts, we collected thirty-two clinical variables as predictors of bloodstream infections, including patients' basic characteristics, vital signs, laboratory data, and clinical information.
Vital signs were recorded every two hours in the ICU, including body temperature, respiratory rate, pulse rate, oximetry, systolic blood pressure (SBP), diastolic blood pressure (DBP), and Glasgow Coma Scale (GCS). Seventeen laboratory features were measured based on the patients' condition. Moreover, the usage times of central venous catheters (CVC), mechanical ventilation via endotracheal tube (ENDO), and Foley catheters were also included. Finally, we included the length of the patients' ICU stay prior to the blood culture test. The feature characteristics are presented in Table 2. The difference between BSIs and non-BSIs was measured using a Student's t-test for continuous variables. Additionally, a logistic regression analysis for crude and adjusted odds ratios is reported in Supplementary Table S4.
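As an illustration of the group comparison described above, the following sketch computes a two-sample Student's t statistic with pooled variance; the prothrombin-time values are invented for the example and are not study data.

```python
from math import sqrt
from statistics import mean, variance

def student_t(a, b):
    """Student's two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# hypothetical prothrombin times (s) for BSI vs. non-BSI episodes
bsi = [13.1, 14.0, 12.8, 13.5]
non_bsi = [11.9, 12.2, 12.0, 12.4]
print(round(student_t(bsi, non_bsi), 2))  # positive t: BSI group has longer PT
```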

Figure 3 presents an overview of the procedures used to establish the predictive model, including data pre-processing, model training, and model evaluation. Data pre-processing is an important step in data analytics, encompassing outlier removal, missing value imputation, and data transformation. Some variables had to be transformed because of differing units, such as the white blood cell (WBC) count; for instance, the WBC count could be converted from 10^3/uL to K/uL. For outlier removal, we visualized all variables using boxplots and discussed them with clinicians to identify outliers, especially in the vital sign records. We took clinically plausible values, determined by clinical expertise, as the inclusion criteria for vital signs. The Supplementary Table S5 presents the plausible vital sign values. Vital sign values that did not fall within the specified range were treated as outliers and excluded. Regarding missing values, we found that some lab tests, such as C-reactive protein (CRP) and glucose, were missing over 40% of their values (see Supplementary Table S6).
These lab tests were not included in the final data analysis. We imputed the remaining missing data using the mean of the non-missing values in each column. Finally, a total of thirty features were used as predictors of BSIs.
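The pre-processing steps described above (plausible-range filtering of vitals, exclusion of features with more than 40% missing values, and mean imputation) can be sketched for a single feature column; the `preprocess` helper and temperature values are hypothetical.

```python
def preprocess(column, plausible=None, max_missing=0.4):
    """Clean one feature column as the text describes: treat out-of-range
    vitals as missing (outliers), drop the column when more than 40% is
    missing, then impute remaining gaps with the column mean.
    None marks a missing value."""
    if plausible is not None:
        lo, hi = plausible
        column = [v if v is not None and lo <= v <= hi else None for v in column]
    missing = sum(v is None for v in column) / len(column)
    if missing > max_missing:
        return None                      # feature excluded (e.g. CRP, glucose)
    observed = [v for v in column if v is not None]
    col_mean = sum(observed) / len(observed)
    return [v if v is not None else col_mean for v in column]

# hypothetical body temperatures; 45.0 falls outside a plausible 30-43 range
temps = [36.5, None, 45.0, 37.5, 38.0]
print(preprocess(temps, plausible=(30.0, 43.0)))
```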

Data Analysis
The present study used conventional statistical approaches to analyze the data cohorts. For continuous variables, the difference between positive and negative results was examined using an independent t-test. For model training, five machine learning algorithms were used in this study: logistic regression (LR), support vector machine (SVM), multi-layer perceptron (MLP), random forest (RF), and eXtreme Gradient Boosting (XGBoost). The logistic regression model was chosen as the representative linear model [16]. The SVM was chosen as the representative non-probabilistic binary linear classifier [17]. The MLP was based on an artificial neural network [18]. The RF and XGBoost models were chosen as representative tree-based ensemble learning methods [19,20].
For model development, the dataset was divided into a training set (60%), validation set (20%), and testing set (20%) after completing the data pre-processing. The validation dataset was used to evaluate the model fit on the training dataset while tuning the hyperparameters and preparing the data. The testing dataset was used to provide an unbiased evaluation of the final model fit on the training dataset [14]. To examine the performance of the models, each model was evaluated by sensitivity, specificity, and the area under the receiver operating characteristic (AUROC) curve. An AUROC of 0.7 to 0.8 is considered acceptable, 0.8 to 0.9 is considered excellent, and greater than 0.9 is considered outstanding [15].
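A sketch of the 60/20/20 split and model evaluation, using scikit-learn and synthetic data as stand-ins for the study's 30-feature table (only two of the five models are shown, and XGBoost is omitted to keep the example self-contained):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# synthetic, imbalanced stand-in for the 30-feature BSI table
X, y = make_classification(n_samples=2000, n_features=30,
                           weights=[0.7, 0.3], random_state=42)

# 60/20/20 train/validation/test split, as in the text
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=42)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("RF", RandomForestClassifier(random_state=42))]:
    model.fit(X_train, y_train)
    val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: validation AUROC={val_auc:.3f}, test AUROC={test_auc:.3f}")
```

In practice the validation AUROC would guide hyperparameter tuning before the test set is touched once for the final, unbiased estimate.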
The main objective of this study was to develop an early prediction model of BSIs that could correctly identify positive BSIs. The predictive output of a machine learning model is a probability, which must be converted into a target class; different threshold settings therefore identify different numbers of BSI cases. The present study compared different thresholds to examine the model prediction performance and attempted to find a trade-off between the classification of BSI and non-BSI.
Additionally, we explored the features' importance in the proposed prediction model. The ensembles of decision tree methods, such as RF and XGBoost, can provide estimates of feature importance from a trained predictive model based on Gini importance. Some studies have explored interpretable machine learning by using SHAP, a game theoretic approach, to explain the output of any machine learning model [21]. Furthermore, the SHAP value plot could present the positive and negative relationships of the predictors with the target variable. In this paper, the SHAP method was used to explore the importance of clinical features and their relationship to BSI events for the XGBoost model.

Evaluation of Different Models
The present study compared five different algorithms to evaluate which performed best on our dataset. Table 3 and Figure 4 present the BSI prediction performance. The predictive result of each machine learning algorithm represents a risk probability of BSI. The default threshold is 0.5: if the predicted probability exceeds 0.5, the model determines that the patient has a BSI. Comparing sensitivity and specificity, XGBoost showed the highest sensitivity on the validation and testing datasets (0.724 and 0.706, respectively). Lower sensitivities were found for the SVM (0.578 and 0.566 on the validation and testing datasets, respectively), RF (0.565 and 0.577, respectively), and MLP (0.494 and 0.406, respectively) models.
In terms of specificity, the RF model performed with the highest specificity, over 0.9 on both the validation and testing datasets (0.927 and 0.940, respectively). The LR model had the lowest specificity, scoring 0.660 on the validation dataset and 0.644 on the testing dataset.
The prediction performance was further assessed based on the AUROC for the validation and testing data. According to the AUROC results, the RF model achieved the highest AUROC on the validation and testing datasets (0.855 and 0.851, respectively). The XGBoost algorithm also achieved relatively high AUROCs of 0.825 and 0.821 on the validation and testing datasets, respectively. In contrast, the LR and MLP models had the lowest AUROC values on the test dataset (0.685 and 0.667, respectively).
According to these results, the XGBoost and RF machine learning methods present the better BSI prediction performance. Their AUROC results are over 0.8, which means the models are considered excellent for predicting BSI. The Brier score measures the model fit; the lower the Brier score, the better the performance of the model. The XGBoost and RF models yielded acceptable Brier scores, whereas the LR model performed the worst in predicting BSI.
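For reference, the Brier score mentioned above is simply the mean squared difference between the predicted probability and the observed outcome; a minimal sketch with invented predictions:

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probability and outcome;
    lower is better (0 is perfect, 0.25 matches an uninformative 0.5 guess)."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

print(brier_score([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.2]))  # ≈ 0.025
```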

Evaluation of Different Cut-Off Thresholds
With machine learning techniques, the predicted results are represented as a probability. This probability ranges from zero to one and represents how likely the input is to belong to the target class, meaning the value can be converted into a class. For binary classification, the default cut-off threshold is 0.5: if a model's predicted probability is greater than 0.5, it predicts a BSI. However, the default cut-off threshold may not yield the best model prediction. When the threshold is changed, the resulting sensitivity and specificity also change. This allowed us to explore the trade-off between sensitivity and specificity.
The present study compared different cut-off thresholds to evaluate the model's performance in identifying BSIs. Because the purpose of the present study was to correctly identify patients with BSIs, the focus was on maximizing the proportion of positives that were correctly identified. The present study analyzed different cut-off thresholds to determine the trade-off threshold for the BSI prediction model. According to the results of the model evaluation, the RF and XGBoost models had the best prediction performance for BSI, so these two models were further examined at different cut-off thresholds. Figure 5 shows the performance statistics for BSI event prediction in the testing dataset. Here, the x-axis represents the predicted probability of a BSI event, and the y-axis is the number of patients. We found that most of the patients with BSI events presented a relatively high predictive probability, while most of the patients without BSI events presented a relatively low predictive probability. Changing the threshold to 0.41 yielded a better sensitivity (80.8%) with acceptable specificity (67.0%) for the XGBoost model (Table 4). On the other hand, setting the cut-off threshold to 0.4 yielded a higher sensitivity (85.8%) with acceptable specificity (69.9%) for the RF model. Moreover, a cut-off threshold of 0.3 presented a similar result (Table 4). According to these results, a trade-off threshold can be chosen based on sensitivity and specificity. This is particularly desirable if experts want to identify patients with BSI events correctly with both high sensitivity and acceptable specificity. The results of the other algorithms are presented in the Supplementary Table S7 and Figure S1.
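The threshold sweep can be sketched as follows: lowering the cut-off raises sensitivity at the cost of specificity. The probabilities and labels below are invented for illustration.

```python
def confusion_rates(y_true, y_prob, threshold):
    """Sensitivity and specificity when probabilities above `threshold`
    are labelled as BSI."""
    pred = [p > threshold for p in y_prob]
    tp = sum(p and y for p, y in zip(pred, y_true))
    tn = sum(not p and not y for p, y in zip(pred, y_true))
    sens = tp / sum(y_true)
    spec = tn / (len(y_true) - sum(y_true))
    return sens, spec

y_true = [1, 1, 1, 0, 0, 0, 0]                       # 3 BSI, 4 non-BSI
y_prob = [0.9, 0.6, 0.45, 0.55, 0.42, 0.2, 0.1]      # model probabilities
for thr in (0.5, 0.4):
    sens, spec = confusion_rates(y_true, y_prob, thr)
    print(f"threshold={thr}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

With these toy numbers, moving the threshold from 0.5 to 0.4 lifts sensitivity from 0.67 to 1.00 while specificity falls from 0.75 to 0.50, mirroring the trade-off discussed in the text.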

Clinical Features Importance and Visualization
Regarding the interpretability of the machine learning model, SHAP values were used to visualize and explain how the features affect BSI events within the XGBoost model. SHAP values explain the results of a machine learning model using a game-theoretic approach. The method provides an overview of the important features and visualizes the value of each feature for every data point (sample) [21]. Figure 6a presents the feature importance, i.e., the strongest predictors affecting BSIs. Among the top 20 important features, there are 2 patient characteristics, 4 vital sign features, 12 laboratory features, 2 types of catheters (CVC and Foley), and the ICU stay up to 24 h prior to the performance of a blood culture test. The results revealed that alkaline phosphatase (ALKP) and the usage time of the central venous catheter (TOTAL_CVC) were associated with a higher risk of BSI events. Moreover, prothrombin time (PT) and platelet count (PLT) were the third and fourth most important features. Additionally, the Apache II score and age appeared to be important features in predicting bloodstream infection. Figure 6b summarizes the SHAP value plot by combining feature importance with feature effects. The y-axis is defined by the feature and the x-axis by the Shapley value. The plot describes each feature's overall influence on the model prediction. Each point represents an individual case, with colors ranging from blue (low feature value) to red (high feature value). Data points further to the right represent feature values that contribute to a higher risk of BSI for a given case; data points to the left represent values that contribute to a lower risk. The vertical line in the middle represents no change in risk. We found that cases with higher ALKP values had a higher risk of BSI.
Furthermore, some points with lower ALKP values also had a higher risk of bloodstream infection. In terms of the total usage time of a central venous catheter (TOTAL_CVC), the results reveal that the longer the usage time, the higher the risk of BSI. PT revealed similar results. In contrast, data points with a lower PLT had a high risk of BSI. Additionally, higher Apache II scores were correlated with an increased risk of BSI.
Furthermore, the lab tests were used as continuous variables in our prediction model. The present study examined the marginal effect of the laboratory tests on the predicted outcome of the machine learning model using a SHAP dependence plot [22]. Figure 7 shows the dependence plots for PT, PLT, and albumin (ALB). The results showed that prothrombin time values over approximately 12.3 s were associated with a higher risk of BSI. Patients had a high risk of BSI when their PLT value was below approximately 120 K/uL. Consistent with the trend for PLT, ALB levels below approximately 2.73 g/dL increased the risk of BSI. According to the dependence plots in Figure 7, we identified cut-off values of laboratory features that provide additional clinical information for clinicians predicting BSI.
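For intuition on the game-theoretic idea behind SHAP, the following sketch computes exact Shapley values by enumerating feature orderings for a tiny hypothetical risk function; real SHAP implementations approximate this efficiently for full models, and the feature names and scores below are invented for illustration.

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley values: average each feature's marginal contribution
    to `value` over every ordering in which it can join the coalition."""
    names = list(features)
    contrib = {name: 0.0 for name in names}
    orderings = list(permutations(names))
    for order in orderings:
        coalition = {}
        for name in order:
            before = value(coalition)
            coalition[name] = features[name]
            contrib[name] += value(coalition) - before
    return {n: c / len(orderings) for n, c in contrib.items()}

# hypothetical additive risk score with an interaction term
def risk(coal):
    score = 0.0
    if coal.get("high_alkp"):
        score += 0.3
    if coal.get("long_cvc"):
        score += 0.2
    if coal.get("high_alkp") and coal.get("long_cvc"):
        score += 0.1          # interaction: both risk factors together
    return score

phi = shapley_values({"high_alkp": True, "long_cvc": True}, risk)
print(phi)  # high_alkp ≈ 0.35, long_cvc ≈ 0.25 (interaction split evenly)
```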

Discussion
In the present study, we used multiple machine learning algorithm approaches to develop an early prediction model for bloodstream infections. The prediction model achieved good performance in the validation dataset and testing dataset by using RF and XGBoost algorithms (AUROCs ranging from 0.821 to 0.855). The results demonstrated a good model fit for the tree-based ensemble methods. Compared to previous studies, the logistic regression model showed a range of AUROC values between 0.6 and 0.83 [23]. Lee et al. developed an early detection of bacteremia model using an artificial neural network approach. The AUROC results achieved 0.727 (95% CI, 0.713-0.727) and had a higher sensitivity (0.810) [6]. Ebrahim Mahmoud et al. also developed a prediction model for BSI among hospitalized patients [9]. However, these population studies were not conducted on critically ill patients. Roimi et al. developed an early diagnosis of BSI using machine learning for ICU patients. The study presented excellent AUROCs in two medical centers (0.89 ± 0.01 and 0.92 ± 0.02) [7].
We further identified the cut-off threshold for a trade-off between sensitivity and specificity. Some studies have compared different cut-off thresholds to examine model performance. In BSI prediction, the evaluation of model performance focuses on correctly detecting patients with BSIs [8]. Our results showed that the trend of sensitivity and specificity across different cut-off thresholds was consistent, which is important for future research.
In terms of feature importance, the results showed that the ALKP laboratory test is the most important feature in predicting BSIs. This is consistent with a previous study, in which Lee et al. identified ALKP as one of the most influential features for BSI. We found that some patients with low ALKP were also associated with a high risk of BSI. There may be a sub-group of patients in whom low ALKP is actually a very important predictor of BSI, even though previous studies indicated that high ALKP is positively associated with BSI. Further studies could consider a sensitivity analysis that excludes this sub-group of patients and explores the relationship between other clinical characteristics and BSI in this group. We also identified the total duration of CVC use as an important risk factor, which has been observed in many studies [6,24]. Hence, our model effectively identified the risk factors associated with the development of BSIs. We also found that some laboratory features, such as PT, PLT, and ALB, are important in the development of BSIs, as confirmed by other studies [6,9,11]. According to the analysis of the dependence plots of the laboratory features, we could observe the cut-off values of the features associated with a higher risk of BSI. These results provide helpful laboratory test information for identifying the risk of BSI. We also found that the length of the ICU stay before the blood culture test plays an important role in the development of BSIs. A retrospective cohort study using data from 113,893 admissions revealed an association between the length of a hospital stay and an increased risk of BSIs [25]. Other studies have also indicated that the hospital-to-blood-culture period has a positive effect on BSIs [6,26]. Overall, most of the studies present similar results, even though they sampled different populations, such as those of the United States, Israel, Saudi Arabia, and South Korea.
We discovered some consistent features regarding the important risks of BSIs in these studies.
However, the present study has some limitations. First, the data were collected from a single medical center, and external validation is required, even though an independent validation process was implemented in our study. Second, the model was developed based on 72 h of data and a 24 h prediction window, and patients who stayed in the ICU for less than 96 h were excluded. Third, the time-to-event features (CVC, Foley, and ENDO) were not evaluated using an alternative binary classification in our model; we did not compare the different types of features to find the best predictors. Lastly, the data were slightly imbalanced, which means the precision of the model training was relatively low because of the lower number of BSI events.

Conclusions
The present study developed a machine learning model for the early identification of patients with a high risk of BSIs in the ICU. The performance of the prediction was found to be compatible with previous studies. We explored how different cut-off thresholds affected the prediction performance. Moreover, we used the SHAP method to explain the results of the prediction model.
In general, our data highlight the importance of prediction models powered by artificial intelligence. Further studies are needed to validate this model through conventional clinical trials.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jcm10132901/s1, Table S1: The relevant studies in predicting BSIs, Table S2: The distribution of pathogens among the BSIs, Table S3: Patient demographics of the study population in each dataset, Table S4: Logistic regression analysis for crude and adjusted odds ratios, Table S5: Vital sign values assumed to be plausible, Table S6: The missing values of features, Table S7: The LR, SVM, and MLP model performances at different cut-off thresholds, Figure S1.