Machine Learning Algorithm Predicts Mortality Risk in Intensive Care Unit for Patients with Traumatic Brain Injury

Background: Numerous mortality prediction tools are currently available to assist in the management of patients with moderate to severe traumatic brain injury (TBI). However, an algorithm that applies multiple machine learning methods to diverse combinations of features to identify the most suitable model for predicting outcomes of brain injury patients in the intensive care unit (ICU) has not yet been well established. Method: Between January 2016 and December 2021, we retrospectively collected data from the electronic medical records of Chi Mei Medical Center, comprising 2260 TBI patients admitted to the ICU. A total of 42 features were incorporated into the analysis using four different machine learning models, which were then segmented into various feature combinations. Predictive performance was assessed using the area under the receiver operating characteristic (ROC) curve (AUC) and validated using the DeLong test. Result: The AUC for each model under different feature combinations ranged from 0.877 (logistic regression with 14 features) to 0.921 (random forest with 22 features). The DeLong test indicated that the predictive performance of the machine learning models was better than that of traditional tools such as the APACHE II and SOFA scores. Conclusion: Our machine learning training demonstrated that the predictive accuracy of LightGBM is better than that of the APACHE II and SOFA scores. The required features are readily available on the first day of a patient's admission to the ICU. By integrating this model into the clinical platform, we can offer clinicians an immediate prognosis for the patient, thereby establishing a bridge for educating and communicating with family members.


Introduction
Traumatic brain injury is a global issue that not only impacts patients' health but also imposes a significant burden on social, economic, and medical resources [1]. The age-adjusted mortality rate is 11.7 per 100,000 in Europe and 17.0 per 100,000 in the US [2,3]. In contrast to Western countries, where TBI is often associated with war, Asia experiences TBI mainly due to falls and road traffic injuries [4]. As low- to middle-income countries undergo industrial transformation leading to increased mechanization and urbanization, the incidence of brain injuries is gradually rising. However, the slow growth of medical resources in these countries results in more severe disabilities than in developed nations [4].
Diagnostics 2023, 13, 3016
Survivors of TBI typically face neurological deficits and disabilities. Those with severe TBI receive treatment in the intensive care unit (ICU). Various efforts have been made to predict the prognosis of TBI patients, exploring factors such as the Glasgow Coma Scale (GCS), age, pupillary reactivity, injury severity, and clinical condition (e.g., hypoxia, respiratory distress, and hypotension) in numerous studies. The evaluation of brain injury extent and classification using CT scans is also closely linked to mortality [5][6][7][8].
A previous retrospective study found variability in the use of a single predictive model across populations [9]. Although the IMPACT and CRASH studies are widely known, they may not be applicable to each individual patient [10]. The SOFA (Sequential Organ Failure Assessment), introduced in 1996, is designed to describe the progression of complications in critically ill patients; an elevated SOFA score is associated with a higher likelihood of mortality [11,12]. APACHE II relies on 12 physiological variables measured within the first 24 h of ICU admission to predict ICU patient outcomes [13]. However, the use of APACHE II and SOFA has shown only marginal improvement in prognostic performance [14]. Therefore, we need to seek more accurate predictive models for prognosis and mortality in ICU settings.
Machine learning (ML) approaches require more input and output data for analysis, but they excel at handling complex interrelationships. Compared with classical linear regression statistics, machine learning processes data directly, resulting in more accurate predictions [15]. However, the "black-box" nature of AI, characterized by its lack of explainability, remains the main reason for its low clinical adoption. To improve the explainability of AI models, Explainable Artificial Intelligence (XAI) techniques have been introduced; SHAP (SHapley Additive exPlanations) is the most widely used XAI technique for explaining which clinical features are important for predicting various diseases or patient prognoses. It is therefore very important to use XAI to better interpret how each feature contributes to the associated outcome in an AI prediction model [16].
Courville E et al. reported a systematic review and meta-analysis (2013-2020) demonstrating that much of this literature discusses in-hospital mortality and poor prognosis but lacks a more specific focus on the ICU population and the predictive power of AI in TBI patients [17]. In the last three years, there have been several reports on the prognosis and mortality risk of brain injury using ML techniques. However, some of these studies did not select feature combinations based on clinical importance, lacked comparisons with traditional tools, or were not conducted in an ICU setting. Therefore, further investigation is needed to clarify this point [18][19][20][21].
Our goal is to use machine learning algorithms to analyze the vast amount of ICU data to predict mortality risk after TBI in a way that is more tailored to patients in our country. Additionally, it is essential to compare these ML models with the existing APACHE II and SOFA scores. We also use the SHAP technique to explain which clinical features are important for predicting patient outcomes.

Ethics
This research received ethical approval (revision: 11106-013) from the institutional review board at Chi Mei Medical Center in Tainan, Taiwan. The authors conducted the study in accordance with appropriate guidelines and regulations. Since the study was retrospective in nature, the Ethics Committee waived the requirement for informed consent.

Flow Chart and AI Device of Current Study
Our study followed the guidelines specified in the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) standard. Figure 1 illustrates the flowchart detailing the ML training process and its integration into the hospital system. The ML model was trained using a total of 42 selected features identified based on their statistically significant differences (p-value < 0.05) between the mortality and non-mortality groups.
Statistical analysis involved t-tests for numerical variables and Chi-square tests for categorical variables. Additionally, Spearman correlation analysis was conducted to evaluate the strength of the correlation between each feature and the outcome. Recognizing the imbalanced outcome classes, particularly in mortality cases, we employed the synthetic minority oversampling technique (SMOTE) [22]. This oversampling technique was applied to balance the number of positive outcome cases (mortality) with the negative cases (survival) during the final model training with each machine learning algorithm. To assess model performance, 70% of the data formed the training set, while the remaining 30% formed the test set via random splitting. As a result, four models were developed to predict mortality risk.
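As an illustration of the oversampling step, the core idea of SMOTE can be sketched in a few lines of numpy: each synthetic minority sample is an interpolation between a real minority sample and one of its k nearest minority neighbours. This is a minimal sketch of the idea, not the implementation used in the study:

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize minority samples by interpolating
    between a minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]      # k nearest per sample
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)                        # pick a minority sample
        j = rng.choice(neighbours[i])              # ...and one of its neighbours
        lam = rng.random()                         # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

# toy usage: 3 minority samples in 2-D, synthesize 4 more
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
X_new = smote_oversample(X_min, n_new=4, k=2, rng=0)
```

In practice a library implementation (e.g., imbalanced-learn's SMOTE) would be used; the sketch only shows why synthetic mortality cases stay within the local neighbourhoods of real ones.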
Figure 2 illustrates the utilization of the hospital backend system to collect data from various assessment modules, including the ICU evaluation module, vital signs module, health status module, and medical history module. These modules provide input to the central computer for integrated processing, and the data are then fed into the ML training model for simulation.

Patient Selection
From January 2016 to December 2021, patients aged 20 years and older who were diagnosed with TBI and admitted to the ICU were retrospectively collected from the electronic medical records of Chi Mei Medical Center. The inclusion criteria were neurosurgical patients admitted to the ICU with the following diagnostic codes: ICD-9: 800*-804*, 850*-854*, 959.0, 959.01, and 959.8-959.9; ICD-10: S00*-T07*. Patients with missing or ambiguous values were excluded.

Feature Selection and Model Building
Under the consensus of several neurosurgeons and intensive care physicians, we identified parameters that met the following criteria: (1) representation of the clinical status of traumatic brain injury patients, (2) objective assessability, and (3) generalizability. Subsequently, we employed univariate filter methods for feature selection, considering both continuous and categorical variables, with a significance level of 0.05 or lower used for selection. Additionally, Spearman's correlation coefficient and expert opinions were considered during the finalization of the feature selection process. The study utilized 42 features, as listed in Table 1. We employed four machine learning algorithms, Logistic Regression [23], Random Forest [24], LightGBM [25], and XGBoost [26], to construct predictive models for mortality in the ICU. To reduce concerns of overfitting that might arise from a small dataset, we utilized cross-validation to build the models. A patient who underwent any of the five surgical procedures listed in Table 1 was considered to have undergone surgery.
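The univariate filter described above can be sketched as follows: a t-test for each numerical feature and a chi-square test for each categorical feature, keeping those with p < 0.05. The function and variable names are illustrative, not taken from the study's code:

```python
import numpy as np
from scipy import stats

def univariate_filter(X_num, X_cat, y, alpha=0.05):
    """Keep a numerical feature if a t-test between outcome groups gives
    p < alpha, and a categorical feature if a chi-square test of its
    contingency table against the outcome does."""
    keep_num, keep_cat = [], []
    for j in range(X_num.shape[1]):
        a, b = X_num[y == 1, j], X_num[y == 0, j]
        if stats.ttest_ind(a, b, equal_var=False).pvalue < alpha:
            keep_num.append(j)
    for j in range(X_cat.shape[1]):
        # outcome-by-category count table for the chi-square test
        table = np.array([[np.sum((y == c) & (X_cat[:, j] == v))
                           for v in np.unique(X_cat[:, j])] for c in (0, 1)])
        if stats.chi2_contingency(table)[1] < alpha:
            keep_cat.append(j)
    return keep_num, keep_cat

# synthetic demo: numeric feature 0 is informative, feature 1 is noise
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 100)
X_num = np.column_stack([rng.normal(3 * y, 1.0), rng.normal(0, 1.0, 200)])
X_cat = rng.integers(0, 2, size=(200, 1))   # uninformative binary feature
keep_num, keep_cat = univariate_filter(X_num, X_cat, y)
```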

Model Performance Measurement
In this study, we evaluated the performance of the machine learning models using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve (AUC).
Specificity is an important metric that assesses the ability of a test or diagnostic method to correctly identify negative results (non-patients), while sensitivity evaluates the ability to correctly identify positive outcomes (patients). These metrics influence each other and should be considered together in research [27].
Accuracy measures the correctness of predictions made by a classification model or testing method, i.e., the proportion of correct predictions among all predictions made. However, on imbalanced datasets, accuracy can be misleading and mask poor prediction performance for minority classes [28].
The AUC, the area under the ROC curve, summarizes the trade-off between sensitivity (true positive rate) and 1 - specificity (false positive rate) at different thresholds, serving as an effective summary of a model's ROC performance [29,30].
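The four reported metrics can be computed directly from a confusion matrix, with the AUC obtained from its rank (Mann-Whitney) formulation. A self-contained sketch, illustrative rather than the study's code:

```python
import numpy as np

def binary_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, sensitivity, and specificity at a threshold, plus AUC via the
    rank formulation: P(score of a random positive > score of a random negative)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    auc = greater + 0.5 * ties            # ties count half, as in the ROC area
    return accuracy, sensitivity, specificity, auc

# toy scores for 3 survivors (label 0) and 3 deaths (label 1)
y_true = [0, 0, 0, 1, 1, 1]
y_score = [0.1, 0.4, 0.6, 0.4, 0.8, 0.9]
acc, sen, spe, auc = binary_metrics(y_true, y_score)
```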
To assess the superiority of each machine learning model compared to traditional tools, we specifically used the DeLong test [31].
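For reference, the paired DeLong test [31] compares the AUCs of two scores evaluated on the same patients by estimating the covariance of their rank-based structural components. A minimal sketch of the standard algorithm follows (illustrative; in practice a vetted statistical implementation would be used):

```python
import numpy as np
from scipy import stats

def delong_test(y, s1, s2):
    """Paired DeLong test of two scores s1, s2 against the same labels y.
    Returns (auc1, auc2, two-sided p-value)."""
    y = np.asarray(y)

    def components(s):
        pos, neg = s[y == 1], s[y == 0]
        # cmp_[i, j] = 1 if pos_i > neg_j, 0.5 if tied, else 0
        cmp_ = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
        return cmp_.mean(axis=1), cmp_.mean(axis=0), cmp_.mean()

    v10_1, v01_1, auc1 = components(np.asarray(s1, float))
    v10_2, v01_2, auc2 = components(np.asarray(s2, float))
    m, n = len(v10_1), len(v01_1)           # numbers of positives / negatives
    # DeLong covariance of the two AUC estimates from the structural components
    s10 = np.cov(np.vstack([v10_1, v10_2]))
    s01 = np.cov(np.vstack([v01_1, v01_2]))
    cov = s10 / m + s01 / n
    var_diff = cov[0, 0] + cov[1, 1] - 2 * cov[0, 1]
    z = (auc1 - auc2) / np.sqrt(var_diff)
    return auc1, auc2, 2 * stats.norm.sf(abs(z))

# synthetic demo: a near-perfect score versus an uninformative one
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
s1 = y + rng.uniform(-0.4, 0.4, 100)        # perfectly separating score
s2 = rng.uniform(0, 1, 100)                 # random score
auc1, auc2, p = delong_test(y, s1, s2)
```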

Characteristics and Clinical Presentations of Individuals with Traumatic Brain Injury
A total of 2260 patients were retrospectively included from the electronic medical records system of Chi Mei Medical Center. Among them, there were 1447 males (64.03%) and 813 females (35.97%). The average age was approximately 63.89 ± 17.74 (mean ± SD) years. The characteristics of the patients are listed in Table 1, comprising 42 features, including vital signs, coma scale, pupillary reflex, intubation status, external ventricular drainage, and comorbidities. Among these, 29 features showed a significant difference in relation to mortality (p-value < 0.05).

The Correlation between Factors and Mortality (Spearman Correlation Coefficient)
To accurately quantify the impact of each factor on prediction within the ML model, we conducted an analysis using the Spearman correlation coefficient. Among the factors, 22 had coefficients greater than 0.1 (italic) and showed a significant correlation with mortality, indicating their substantial influence on prediction. Moreover, among these features, 14 had coefficients greater than 0.2 (bold) and demonstrated a significant correlation with mortality (Table 2). The top five variables exhibiting high correlation coefficients include pupil_reflex + (R), pupil_reflex + (L), vasopressors, GCS_M, and GCS_E. Notably, while SOFA and APACHE II were employed to compare predictive performances with the AI model, they were not utilized as features in the AI model itself.
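This screening step amounts to ranking features by the absolute Spearman correlation with mortality and keeping those above the 0.1 and 0.2 thresholds. A small sketch with synthetic placeholder data (the names and values are not the study's):

```python
import numpy as np
from scipy import stats

# synthetic cohort: outcome 0 = survival, 1 = mortality
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
features = {
    "feature_a": y + rng.normal(0, 0.8, 300),   # correlated with the outcome
    "feature_b": rng.normal(0, 1.0, 300),       # pure noise
}

# absolute Spearman correlation of each feature with the outcome
rho = {name: stats.spearmanr(x, y).correlation for name, x in features.items()}
over_01 = [n for n, r in rho.items() if abs(r) > 0.1]   # "italic" group
over_02 = [n for n, r in rho.items() if abs(r) > 0.2]   # "bold" group
```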

Predictive Models with Different Features Combinations
Table 3 presents the predictive outcomes obtained from various feature combinations and machine learning training. The initial 42 features were categorized based on their significant association with mortality and their Spearman correlation coefficient, resulting in three groups: 29 features significantly associated with mortality, 22 features with a Spearman correlation coefficient greater than 0.1, and 14 features with a Spearman correlation coefficient greater than 0.2. It should be noted that the original 15-feature model included four GCS-related features: GCS_E, GCS_V, GCS_M, and GCS. Since GCS is the sum of GCS_E, GCS_V, and GCS_M, we excluded the GCS feature and built the 14-feature model; the results show that the impact on model quality is not significant. Each feature combination was assessed across four different machine learning models, and the performance of each model was evaluated using the AUC of the ROC curve to determine the best predictive model. Regardless of the feature combination, the best-performing machine learning model achieved an AUC greater than 0.9.
Among the 42 features, the LightGBM model performed best with an AUC of 0.916. In the combination of 29 features, the Random Forest model achieved the highest AUC of 0.918. For the 22-feature combination, the Random Forest model again outperformed the others with an AUC of 0.921. Lastly, in the combination of 14 features, the LightGBM model had the highest AUC of 0.914 (Figure 3a-d).

Comparing the Best-Performing Model with Traditional ICU Assessment Tools in Different Feature Combinations
In the DeLong test, no significant differences (p > 0.05) were observed in any of the feature combinations when compared to the combination of 42 features with the LightGBM model. For the sake of clinical convenience, we believe that a combination of 14 features is easier to execute. When compared to the APACHE II and SOFA scores, the p-values obtained were 0.0180 and 0.0156, respectively, indicating significant differences (Table 4). Feature importance was used to rank the attributes that contribute most to the accuracy of the final prediction models [32]. To better interpret how each feature contributes to the associated outcome, we applied SHAP (SHapley Additive exPlanations) [33].
We ranked the significance of all variables in the LightGBM model to better understand the role of each (Figure 4). In Figure 4a, the color of the SHAP plot represents the size of the original feature values, with red indicating high values and blue indicating low ones. The SHAP value signifies the degree of a feature's impact on the outcome (a positive SHAP value indicates a positive effect), and a wider spread of SHAP values suggests a more extensive influence on the outcome. As depicted, patients using vasopressors (represented by red dots) have an increased risk of death (positive SHAP value), whereas the impact of GCS_M and GCS_V is the opposite.
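The additive attribution behind SHAP can be made concrete with a brute-force computation of exact Shapley values for a tiny hypothetical risk model; SHAP approximates this quantity efficiently for tree ensembles such as LightGBM. The model, its coefficients, and the baseline below are invented for illustration only:

```python
from itertools import combinations
import math

def shapley_values(f, x, baseline):
    """Exact Shapley values: for each feature i, average the marginal change
    f(S with i) - f(S) over all coalitions S, filling absent features from a
    baseline ('expected') input. Shapley weights make the attributions sum to
    f(x) - f(baseline)."""
    d = len(x)

    def value(S):
        z = [x[i] if i in S else baseline[i] for i in range(d)]
        return f(z)

    phi = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = math.factorial(k) * math.factorial(d - k - 1) / math.factorial(d)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# hypothetical linear risk: vasopressor use raises risk, higher motor score lowers it
risk = lambda z: 0.2 + 0.5 * z[0] - 0.05 * z[1]     # z = [vasopressors, GCS_M]
phi = shapley_values(risk, x=[1, 2], baseline=[0, 6])
```

For a linear model the Shapley value of feature i reduces to coefficient times (x_i - baseline_i), and the attributions always sum to the gap between the patient's prediction and the baseline prediction.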


Integration and Application of AI with Clinical Systems
After a series of analyses, we concluded that the LightGBM model with a combination of 14 features was the most lightweight. We therefore integrated it into the hospital system to assist clinical doctors and nurses in treatment and to facilitate communication with patients' families. The "Original" column shows the patient's current data; at present, it displays data from the time of admission to the ICU. The "Adjust" column allows the observer to adjust the value of each feature to understand its effect on the risk of mortality, as a reference for treatment (Figure 5).
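The "Adjust" column's what-if behaviour amounts to re-scoring the patient after changing a single feature. A minimal sketch, with a hypothetical two-feature logistic model standing in for the deployed 14-feature LightGBM (names and coefficients are invented for illustration):

```python
import math

def what_if(model, features, name, new_value):
    """Recompute the predicted mortality risk after changing one feature,
    so the observer can compare it with the original prediction."""
    adjusted = {**features, name: new_value}
    return model(features), model(adjusted)

# hypothetical logistic risk over two of the features
model = lambda f: 1 / (1 + math.exp(-(2.0 * f["vasopressors"] - 0.4 * f["GCS_M"])))

# what if the patient's motor score improved from 4 to 6?
before, after = what_if(model, {"vasopressors": 1, "GCS_M": 4}, "GCS_M", 6)
```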



Discussion
This is the first study to demonstrate the mortality risk of TBI in the ICU using a machine learning model and compare it to present prediction models. The novelty of the current study is as follows. The simplified model using 14 features with the LightGBM algorithm proved to be the most practical for mortality prediction, achieving an AUC of 0.914. The study made significant achievements in several aspects: (a) specialized ICU parameters improved the credibility of prediction results; (b) different feature combinations were chosen based on clinical importance and the significance of their correlation with mortality; (c) a comparison was made between ML techniques and commonly used ICU prognostic and mortality assessment tools, such as the APACHE II and SOFA scores; and (d) the observer can adjust the values of each feature to understand its effect on the risk of mortality as a reference for treatment.
This study employed artificial intelligence (AI) for data analysis, offering numerous advantages. ML can handle complex interactions in vast datasets, leading to more accurate outcome predictions. However, ML models require a larger number of input-output pairings for training, and interpretability may be sacrificed compared to standard statistics [18]. In this study, we utilized AI to identify suitable models and clinically examine the mortality of patients with brain injury admitted to the ICU.
The data from 2260 patients, including electronic medical records, clinical physiological values, and laboratory tests, were collected and analyzed. Initially, 42 features were included, but not all of them showed a correlation with mortality. Therefore, we performed a direct analysis of the features against mortality, comparing their significance, and found that 29 parameters exhibited a significant difference in relation to mortality, as Table 1 shows. Further analysis considered Spearman's correlation coefficient values, which led us to identify 14 features with which LightGBM still possessed a high AUC, making it the most practical prediction model. The mortality risk provided by the AI can assist clinicians in making informed medical decisions.
At our hospital, we primarily use the APACHE II and SOFA assessment tools to assist with clinical decision-making and to communicate effectively with patients and their families about their condition in the ICU. Despite the existence of more precise and updated versions such as APACHE III and IV, APACHE II continues to be the predominant severity grading and mortality risk system in use [34]. The SOFA score is also widely used by critical-care physicians due to its ability to provide rapid and accurate mortality predictions [35]. To compare the AI models with the APACHE II and SOFA scores, we employed the DeLong test. The results revealed that the ML models generally outperformed the traditional tools, suggesting the potential clinical utility of AI in this study. For ease of clinical practice and completeness of data acquisition, we chose the 14-feature LightGBM predictive model for clinical use.
Figure 4 shows that the use of vasopressors predominated and significantly influenced the mortality risk in the LightGBM model. Maintaining the stability of mean arterial pressure and cerebral perfusion pressure (CPP) has always been crucial in brain injury care, and the judicious use of vasopressors helps balance intracranial pressure and maintain a constant CPP [36]. For intubated patients, motor evaluation was relatively more important due to the inability to assess verbal function; the focus was primarily on the unaffected side's functionality to determine the patient's prognosis [37]. A GCS score below 8 indicates severe brain injury, often requiring intubation to protect the airway. According to the study by Hsu SD et al., not only GCS but also systolic blood pressure (SBP) is an important prognostic factor: in the emergency department, if a patient has a GCS < 6 or an SBP < 84 mmHg, immediate life-saving measures need to be taken [19,38]. Monitoring blood pressure and tracking changes in the GCS can therefore be beneficial for predicting prognosis. However, Hsu SD's study [19] utilized features from the emergency department, whereas we utilized features from the ICU, where patients have already received treatment; consequently, mortality risk prediction based on ICU features tends to be more accurate at that stage.
Table 5 presents the literature comparison we conducted. In comparison to other literature, our study examines the impact of different feature combinations on mortality risk prediction and suggests that the predictive capability of the machine learning model outperforms traditional tools (APACHE II and SOFA scores). In addition, the model is currently being applied in our ICU, and we believe it can serve as an alternative choice for routine assessment. Generally, IMPACT and CRASH are commonly used prognostic tools for predicting outcomes and mortality in clinical TBI cases [39,40]. In Han J et al.'s report, these two traditional tools were found to have AUCs of 0.86 and 0.87, significantly lower than our ML approach [41]. Wu X et al. compared XGBoost, a machine learning algorithm, with traditional prediction tools such as IMPACT and CRASH; the results demonstrated that XGBoost outperformed the traditional tools in terms of predictive accuracy [21]. In Table 5, our AUC is greater than that of Wu's model, indicating that our model is more suitable for clinical use.
Moreover, the AI predictive tool we propose is intended as a clinical aid, not a replacement for a doctor's judgment. Before implementing policies based on AI predictions, it is essential to conduct comprehensive evaluations in terms of ethics, society, and policy; for example, patients' data privacy and rights must be protected, and patients must not be treated unfairly because of AI predictions.
Despite the robust ML algorithms demonstrating promising predictive performance, this study still has some limitations. First, it is a retrospective study, and prospective research is needed to validate the experimental results. Second, the diagnosis of brain injuries relies on Taiwan's National Health Insurance coding regulations, which may include a small number of miscoded diagnosis codes; however, the impact of these miscodings on the overall results is relatively minor. Third, imaging parameters such as midline shift and the presence or absence of brain ventricles have not been quantitatively incorporated into our ML model. Fourth, the potential confounding effects of the numerous features utilized require further exploration. Fifth, additional confounding variables such as smoking, alcohol intake, shifts in treatment guidelines, and emerging medical practices could not be comprehensively assessed due to the constraints of the retrospective database. Last, the current ML training is limited to a single center's practices and laboratories, and due to differences in treatment guidelines, generalization of the ML model from a single center to other regions is not yet possible. However, we provide the logical framework for ML, and the iterative process validates the effectiveness and value of such predictive models; further research can build on this foundation to improve these findings.

Conclusions
Our research primarily focuses on training AI using ICU data and utilizing various feature combinations to identify suitable ML models. In the end, we obtained a 14-feature combination (features with a significant correlation to mortality and a Spearman coefficient > 0.2), with which LightGBM performed exceptionally well. Not only does it demonstrate mortality prediction capabilities on par with models using more features, but it also outperforms traditional tools. These research findings can be applied in critical clinical settings to assist physicians in assessing patients' conditions and providing more data-driven explanations during communication with family members. In the future, we advocate for more studies that incorporate additional variables to enhance model performance. The application of AI predictions in other healthcare settings, such as emergency care and long-term care, also warrants deeper exploration.

Figure 1. Workflow diagram for data collection and machine learning model training.

Figure 2. Utilization of the hospital backend system to collect data from various assessment modules, including the ICU evaluation module, vital signs module, health status module, and medical history module. These modules provide input to the central computer for integrated processing, and the data are then fed into the ML training model for simulation.


Figure 3. Receiver operating characteristic (ROC) curves and area under the curve (AUC) for mortality prediction in the training course. (a) Using 42 features to train the ML model; (b) using 29 features that were significant for mortality; (c) using 22 features that were significant with a Spearman correlation coefficient > 0.1; and (d) using 14 features that were significant with a Spearman correlation coefficient > 0.2. Logistic regression (LR) (orange), random forest (black), LightGBM (green), and XGBoost (pink).

Figure 4b displays the ranking of features' influence on the outcome based on the absolute values of the SHAP values. The figure shows that the top five influential feature variables are vasopressors, GCS_M, GCS_V, pupil_reflex + (R), and Muscle_RLE.

Figure 5. Interface presentation of AI in practical application within the Chi Mei Hospital healthcare system.


Table 1. Characteristics and significance of traumatic brain injury patients.
Note. A t-test was used for numerical variables and the Chi-square test was used for categorical variables. Surgical procedures are as follows: decompressive craniectomy, acute epidural hematoma removal, acute subdural hematoma removal, acute intracerebral hematoma removal, and intracranial pressure monitor placement.

Table 2. The Spearman correlation coefficient for each factor.
Note. Italicized text: absolute value greater than 0.1; bold text: absolute value greater than 0.2.

Table 3. Model performance with different feature combinations.
Note. AUC = area under the receiver operating characteristic curve. Algorithms in bold indicate the model with the highest AUC.

Table 4. The DeLong test of ML models with different feature combinations and conventional tools (APACHE II and SOFA scores).


Table 5. A comparative analysis of mortality among patients with brain injury in studies from the past five years, as reviewed in our study.