Abstract
Introduction: The prompt prehospital identification of intracerebral haemorrhage (ICH) may allow very early delivery of treatments to limit bleeding. Current prehospital stroke assessment tools have limited accuracy for the detection of ICH, as they were designed to recognise all strokes, not ICH specifically. This systematic review aims to evaluate the performance of prehospital models in distinguishing ICH from other causes of suspected stroke. Methods: We adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Following a predefined strategy, we searched three electronic databases via Ovid (MEDLINE, EMBASE, and CENTRAL) in July 2023 for studies published in English, without date restrictions. Subsequently, data extraction was performed, and methodological quality was assessed using the Prediction Model Risk of Bias Assessment Tool. Results: After eliminating duplicates, 6194 records were screened by title and abstract. After a full-text review of 137 studies, 9 prediction studies were included. Five of the described prediction models were designed to differentiate between stroke subtypes, three distinguished between ICH and ischaemic stroke, and one model was developed specifically to identify ICH. All studies were assessed as having a high risk of bias, particularly in the analysis domain. The performance of the models varied, with the area under the receiver operating characteristic curve ranging from 0.73 to 0.91. The models commonly included the following as predictors of ICH: impaired consciousness, headache, speech or language impairment, high systolic blood pressure, nausea or vomiting, and weakness or paralysis of limbs. Conclusions: Prediction models may support the prehospital diagnosis of ICH, but existing models have methodological limitations, making them unreliable for informing practice.
Future studies should aim to address these identified limitations and include a broader range of suspected strokes to develop a practical model for identifying ICH. Combining prediction models with point-of-care tests might further improve the detection accuracy of ICH.
1. Introduction
Strokes are a leading cause of morbidity and mortality worldwide, posing a significant healthcare challenge. Among the different stroke subtypes, intracerebral haemorrhage (ICH) accounts for only 10–15% of all strokes, but it is the most devastating form, with outcomes that have shown no significant improvement over time [1,2]. Timely and accurate identification of ICH is needed to facilitate prompt and appropriate management, which may be associated with better patient outcomes [3].
Given that most strokes occur out of hospital, the prehospital phase plays a critical role in the stroke patient care pathway. While advances in screening tools have enabled the early detection of strokes in the prehospital stage, these efforts primarily focus on identifying ischaemic stroke (IS) or all strokes [4]. Previous studies have shown that these tools have limited accuracy in identifying patients with ICH [5,6]. This limitation arises from the tools originally being designed to predict any stroke, rather than specific stroke subtypes (ICH or IS).
In recent years, significant advancements have been made in diagnosing and treating patients with ICH in the prehospital setting through the use of mobile stroke units [3]. Nevertheless, due to the high associated costs, it is unlikely that this technology will become widely available for stroke care in the near future, highlighting the need for cost-effective alternatives to enhance prehospital stroke care.
It is clinically important to distinguish ICH from other suspected stroke cases in the prehospital setting, as certain time-sensitive interventions, such as lowering blood pressure and reversing coagulopathy, are critical for reducing the risk of haematoma expansion, a major cause of poor outcomes [3,4]. This is supported by recent findings from the INTERACT4 trial [7], which demonstrated that intensive blood pressure lowering, administered to suspected stroke patients in the ambulance, significantly reduced death and disability at 90 days among those subsequently diagnosed with ICH. However, the same intervention proved detrimental in patients with IS [7], underscoring the need for diagnostic certainty in the prehospital setting. Furthermore, patients with ICH may benefit from being transported to the nearest appropriate stroke centre to minimise treatment delays [8]. Longer transport times to a thrombectomy-capable centre—the preferred destination for patients with large vessel occlusion (LVO)—have been associated with poorer outcomes in patients later diagnosed with ICH [9]. Consequently, prehospital differentiation of ICH, particularly from IS and LVO, becomes a priority to determine the optimal management strategies for suspected stroke cases in the field. This systematic review therefore aims to evaluate and compare the performance of available prehospital stroke prediction models in distinguishing ICH from other causes of suspected stroke.
2. Methods
2.1. Protocol and Registration
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed to conduct and report this systematic review [10]. A completed PRISMA checklist is provided in Supplementary File S1. Following scoping searches, a protocol was developed for this review and registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 9 August 2023 (registration number: CRD42023452526).
2.2. Eligibility Criteria
The inclusion and exclusion criteria are outlined in Table 1.
Table 1.
Inclusion and exclusion criteria.
2.3. Search Strategy
Using a prespecified search strategy (Supplementary File S2), a systematic literature search of the MEDLINE, EMBASE, and CENTRAL databases was conducted via Ovid from inception to July 2023. The search was limited to human studies and, for resource reasons, was restricted to studies published in English. After completing the database searches, a search of the grey literature was conducted using Google Scholar. Additionally, the reference lists of included studies were reviewed to identify further studies of interest.
2.4. Study Selection
The identified articles were imported into EndNote and Rayyan Qatar Computing Research Institute (QCRI) software [11], enabling duplicates to be eliminated, as well as facilitating screening and collaboration. Suitable studies were selected by two independent reviewers (MA and IA) in two stages. First, titles and abstracts were screened, and then full texts were carefully evaluated. MA screened all titles, abstracts, and full-text articles against the eligibility criteria. IA screened a random sample of 20% of the articles at both the title and abstract stage and the full-text stage. Disagreements during these screening stages were resolved through discussion between the reviewers to reach a consensus or by involving a third reviewer (DAJ or APJ) when necessary.
2.5. Data Extraction
The data were extracted from the eligible studies using standardised, prepiloted forms. MA extracted the data, and IA double-checked it for accuracy, with any differences being discussed and resolved. The following information was extracted, in accordance with the review objectives:
- First author, year, country;
- Study design;
- Study population demographics and baseline characteristics;
- Data collection setting;
- Details of the stroke prediction model and its variables;
- The reference standard used to determine the final diagnosis;
- Reported or calculated diagnostic accuracy metrics, including sensitivity, specificity, positive and negative predictive values, and area under the receiver operating characteristic curve (AUC) with their 95% confidence intervals (CI).
As this review only considered published data, the original study authors were not contacted for additional or missing information.
2.6. Risk of Bias and Applicability Assessment
Two reviewers used the Prediction Model Risk of Bias Assessment Tool (PROBAST) to evaluate the quality of the included studies [12]. PROBAST has been designed specifically to assess the risk of bias and applicability concerns in prediction modelling studies. To evaluate the risk of bias, 20 signalling questions across four domains were addressed, namely participants, predictors, outcome, and analysis. The applicability assessment consisted of several questions across three domains: participants, predictors, and outcome.
The overall assessment of the risk of bias and concerns regarding applicability was classified as ‘low’ if all domains received a ‘low’ rating, or ‘high’ if at least one domain was rated as ‘high’; ‘unclear’ was used where any domain was judged as such. The quality of the studies included in this review was summarised narratively and is presented in tabulated and graphical formats.
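The PROBAST aggregation rule described above can be sketched as a small helper function (an illustrative sketch only; the domain names are examples and the helper is not part of the PROBAST tooling):

```python
def overall_probast_rating(domains):
    """Aggregate per-domain PROBAST ratings ('low', 'high', 'unclear') into an
    overall judgement: any 'high' domain -> 'high'; all domains 'low' -> 'low';
    otherwise (some 'unclear', none 'high') -> 'unclear'."""
    ratings = set(domains.values())
    if "high" in ratings:
        return "high"
    if ratings == {"low"}:
        return "low"
    return "unclear"

# e.g. a study rated 'low' in three domains but 'high' in analysis:
print(overall_probast_rating({
    "participants": "low", "predictors": "low",
    "outcome": "low", "analysis": "high",
}))  # prints "high"
```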
2.7. Data Synthesis
Due to the anticipated heterogeneity of populations, comparators, and outcome variables across the studies, a narrative synthesis approach was employed to synthesise the data. Additionally, the findings were summarised in tables and figures, and when appropriate, a comparison was made between the available models to identify similarities and differences, as well as assess their diagnostic accuracy performance. Unreported diagnostic metrics were calculated with the MedCalc statistical software, using the data provided in each study. For these metrics, 95% CIs were derived using the Clopper–Pearson method [13].
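As a sketch of this calculation step: the unreported metrics follow from the standard 2×2 table, and an exact Clopper–Pearson interval can be obtained from the binomial tail by bisection. This is an illustrative pure-Python sketch, not the MedCalc implementation actually used in the review:

```python
from math import comb

def binom_tail_ge(x, n, p):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided (1 - alpha) CI for a binomial proportion x/n."""
    def solve(pred):
        lo, hi = 0.0, 1.0
        for _ in range(60):          # bisect until the bracket is negligible
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if x == 0 else solve(lambda p: binom_tail_ge(x, n, p) < alpha / 2)
    upper = 1.0 if x == n else solve(lambda p: binom_tail_ge(x + 1, n, p) < 1 - alpha / 2)
    return lower, upper

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table, each with a 95% CI."""
    return {
        "sensitivity": (tp / (tp + fn), clopper_pearson(tp, tp + fn)),
        "specificity": (tn / (tn + fp), clopper_pearson(tn, tn + fp)),
        "ppv":         (tp / (tp + fp), clopper_pearson(tp, tp + fp)),
        "npv":         (tn / (tn + fn), clopper_pearson(tn, tn + fn)),
    }

# e.g. 5 events in 10 patients -> exact 95% CI of roughly (0.19, 0.81)
lo, hi = clopper_pearson(5, 10)
```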
2.8. Patient and Public Involvement
No patients or members of the public were involved in this study.
3. Results
3.1. Study Selection
A total of 8112 records were identified through the database searches. After eliminating duplicate entries, 6194 unique records remained to be screened by title and abstract (Figure 1). Following the review of titles and abstracts, 137 articles were identified as being potentially relevant and were retrieved for a thorough evaluation of their full texts. After this evaluation, 128 articles were excluded for various reasons, including differences in patient populations, study types, settings, and reported outcomes, as well as one duplicate publication (illustrated in Figure 1). As a result, nine articles met our eligibility criteria [14,15,16,17,18,19,20,21,22].
Figure 1.
Flow chart of study selection process.
3.2. Characteristics of the Included Studies
The characteristics of the included studies are outlined in Table 2. The majority of prediction models were developed in Asia (n = 6), followed by Europe (n = 2) and North America (n = 1). Additionally, these models were constructed based on either prospective (n = 4) or retrospective cohort data (n = 2), or a combination of both (n = 3). Most of the included models were designed to differentiate between stroke subtypes [17,18,19,21,22]; three of the models were developed to distinguish between ICH and IS [14,15,16]. Only one model focused specifically on distinguishing patients with ICH from other stroke subtypes and non-stroke diagnoses [20]. All of the included studies targeted patients in prehospital settings, and the majority used only prehospital information to develop their models (n = 6).
Table 2.
Characteristics of included studies.
3.3. Risk of Bias and Applicability Assessment
Table 3 summarises the quality assessment of the included studies. The majority of models (n = 6) demonstrated a low risk of bias in the domains of participants, predictors, and outcome (Figure 2A). However, two models exhibited a high risk of bias in the participants domain because their exclusion criteria resulted in only a selected group of patients with ICH and IS being studied [14,15]. In the predictors domain, three models had an unclear risk of bias, either because it was unclear whether predictors were assessed without knowledge of the outcome [14,15] or because of a lack of information about the predictors [21]. Additionally, a high or unclear risk of bias was observed in two models in the outcome assessment domain, primarily due to the absence of a clear or standardised definition of ICH [14,15]. Furthermore, all studies had a high risk of analysis bias arising from various factors, including the limited sample size of participants with ICH in the model validation cohorts [16,20,21], selection of predictors based on univariable significance [15,16,17,19], inadequate exclusion of participants from the analysis [17,19,20,22], and the use of inappropriate performance measures [14,15,16,17,18,19]. By applying PROBAST, all models were considered to be at a high overall risk of bias, as shown in Figure 2A.
Table 3.
Risk of bias and applicability assessment using the PROBAST tool.
Figure 2.
Summary of risk of bias (A) and applicability (B) assessment.
When the nine models were assessed for applicability concerns, seven of the models were determined to have an overall low concern after evaluating their applicability to the review questions (Table 3 and Figure 2B). Two models were rated as having unclear or high concerns on multiple domains and were judged to be of high applicability concern, either due to the definition of the outcome [14] or the intended outcome from the model [15].
3.4. Model Development and Final Predictor Variables
Among the nine studies included in the review, seven used multivariable analysis to derive their prediction rules, of which five presented scoring systems [14,15,17,19,20], one used conditional probability [16], and one combined the best performance predictors [18]. Additionally, two studies [21,22] used different machine learning algorithms, such as logistic regression, random forest, and eXtreme Gradient Boosting (XGBoost) to develop the prediction models.
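As an illustration of the simplest of these approaches, the sketch below fits a logistic regression by batch gradient descent to simulated binary clinical features. The predictors, prevalences, and coefficients are invented for illustration and are not taken from any included study:

```python
import math
import random

random.seed(42)

# Four hypothetical binary predictors (illustrative only): impaired
# consciousness, headache, SBP > 180 mmHg, vomiting.
TRUE_W = [-2.0, 1.5, 0.8, 1.2, 0.9]  # intercept + per-predictor log-odds

def simulate(n):
    """Draw n patients with binary features and a Bernoulli 'ICH' outcome."""
    rows = []
    for _ in range(n):
        x = [1.0 if random.random() < p else 0.0 for p in (0.3, 0.4, 0.35, 0.25)]
        z = TRUE_W[0] + sum(w * xi for w, xi in zip(TRUE_W[1:], x))
        y = 1.0 if random.random() < 1 / (1 + math.exp(-z)) else 0.0
        rows.append((x, y))
    return rows

def fit_logistic(rows, lr=0.5, epochs=600):
    """Batch gradient descent on the log-loss; returns [bias, w1..w4]."""
    w = [0.0] * 5
    for _ in range(epochs):
        g = [0.0] * 5
        for x, y in rows:
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            err = 1 / (1 + math.exp(-z)) - y   # predicted probability minus label
            g[0] += err
            for i, xi in enumerate(x):
                g[i + 1] += err * xi
        w = [wi - lr * gi / len(rows) for wi, gi in zip(w, g)]
    return w

# The recovered weights should approximate TRUE_W in sign and rough magnitude.
w = fit_logistic(simulate(1000))
```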
To determine the probability of ICH and other diagnoses, the included studies combined between 2 [18] and 52 predictors [21] in their final models. These studies collected a wide range of prehospital and in-hospital data, as illustrated in Supplementary File S3, Table S1.
The predictors most frequently included in the final models were impaired consciousness (n = 8), headache (n = 6), speech or language deficit (n = 6), high systolic blood pressure (n = 6), nausea or vomiting (n = 5), and limb weakness or paralysis (n = 5). Other predictors are summarised in Supplementary File S3, Figure S1.
Despite the overlap between predictors, specific associations between predictors and ICH were not reported in some studies [19,21,22]. Additionally, variations existed in how they were defined and assessed, making it difficult to directly compare their importance or weight across the various models. For instance, the level of consciousness and neurological deficits were assessed using the National Institutes of Health Stroke Scale [15,20], while different dichotomisations of blood pressure values were observed in the models [15,17,20].
3.5. Model Performance and Validation
Regarding performance metrics, the majority of the studies (n = 6) reported model discrimination using the AUC values [14,17,19,20,21,22], while other predictive characteristics were presented in four studies [18,20,21,22] and calculable in four of the nine studies [14,15,16,17] (as shown in Table 4). Furthermore, only two studies [19,22] demonstrated model calibration graphically with calibration plots, without further specification of the intercepts and slopes.
Table 4.
Diagnostic performance of prehospital models for identifying patients with ICH.
Three prediction models were developed to distinguish between ICH and IS [14,15,16]. Woisetschläger et al. [14] devised an out-of-hospital score, ranging from –3 to +3, where a positive score was predictive of ICH. The model achieved an AUC of 0.90 (95% CI, 0.86–0.94), with sensitivity ranging from 11% (95% CI, 6.0–18.1%) to 32% (95% CI, 23.9–41.4%) and specificity from 96% (95% CI, 90.6–99.0%) to 100% (95% CI, 96.6–100.0%) for cut-off scores between +1 and +3. Yamashita et al. [15] focused on distinguishing IS from ICH within 6 h of stroke onset. At a cut-off score of 0, their model had a sensitivity of 41% (95% CI, 31.3–51.3%) and a specificity of 91% (95% CI, 85.0–95.6%) for detecting ICH. Jin et al. [16] established a Bayes discriminant model for differentiating between ICH and IS in alert or comatose patients. In the training set, the predictive model showed moderate sensitivity (58%; 95% CI, 51.9–64.8%) and high specificity (79%; 95% CI, 74.8–83.2%) for diagnosing ICH in non-comatose patients, whereas in comatose patients with ICH, the sensitivity was high (94%; 95% CI, 90.2–96.0%), but specificity was low (42%; 95% CI, 34.5–49.8%).
Six prediction models were proposed to discriminate subtypes of stroke and other causes of stroke symptoms [17,18,19,20,21,22]. Uchida et al. [17] designed a prediction score to distinguish between different stroke subtypes simultaneously, including ICH, LVO, subarachnoid haemorrhage (SAH), and other types of strokes. This score consisted of 21 variables, which were then simplified into 7 variables by backward elimination [19]. In their derivation cohorts, the AUC for ICH ranged between 0.79 and 0.84 [17,19]. Similarly, Chiquete et al. [18] constructed different prediction rules to classify stroke subtypes based on clinical features recognised by bystanders who witnessed the onset of stroke. The best-performing model in this study achieved a sensitivity of 66% (95% CI, 57.0–74.6%) and a specificity of 52% (95% CI, 45.5–57.5%) in diagnosing patients with ICH [18].
Geisler et al. [20] developed the only prehospital prediction tool specifically for patients with ICH. According to the study, the likelihood of an ICH increased when the threshold score was ≥1.5. For the derivation cohort, the threshold scores exhibited low to moderate sensitivity ranging from 13% (95% CI, 3.5–29.0%) to 50% (95% CI, 31.9–68.1%), but high specificity ranging from 80% (95% CI, 75.9–84.1%) to 100% (95% CI, 98.6–100.0%), with an AUC of 0.75.
Uchida et al. [22] also employed three machine learning algorithms (logistic regression, random forest, and XGBoost) to develop a prehospital stroke scale. These algorithms demonstrated comparable predictive accuracy in distinguishing patients with ICH, with sensitivity ranging from 42% to 43%, specificity of 45%, and AUC ranging from 0.78 to 0.79 in the training cohort. In contrast, the study conducted by Hayashi et al. [21] demonstrated superior predictive accuracy for ICH by using XGBoost to develop their prehospital model, achieving an AUC of 0.91 (95% CI, 0.89–0.93), with sensitivity and specificity values of 68% and 91%, respectively.
In terms of validation, six studies validated their prediction models, either by randomly dividing the study population into training and test sets [16,21] or by using an independent test cohort [17,19,20,22].
The model developed by Jin et al. [16] showed better accuracy in the validation cohort of non-comatose patients with ICH when compared to the derivation cohort, achieving a sensitivity of 66% (95% CI, 50.1–79.5%) and specificity of 87% (95% CI, 76.2–94.3%). However, its performance was comparatively lower in the comatose group, with a sensitivity of 91% (95% CI, 80.1–97.0%) and specificity of 39% (95% CI, 21.5–59.4%).
Geisler et al. [20] validated their model by comparing ICH and IS patients only. The sensitivity ranged from 3% (95% CI, 0.1–15.8%) to 52% (95% CI, 33.5–69.2%) and specificity from 87% (95% CI, 82.1–90.8%) to 100% (95% CI, 98.6–100.0%), with an AUC of 0.81.
The scoring systems devised by Uchida et al. were validated using the same outcome definitions as in their developed models [17,19]. The first model’s sensitivity and specificity for predicting ICH ranged from 2% (95% CI, 0.3–4.7%) to 33% (95% CI, 26.0–40.1%) and 46% (95% CI, 42.7–49.6%) to 98% (95% CI, 97.2–99.1%), respectively [17]. The AUC values for both models fell between 0.73 and 0.77 [17,19].
The validation of the machine learning models employed by Uchida et al. [22] demonstrated better specificity (92–94%) with comparable sensitivity (40–43%), resulting in AUC values of 0.81 to 0.82. Similarly, Hayashi et al. [21] found that XGBoost had the highest performance among the classifiers, albeit with a slight decrease in sensitivity (62%), specificity (90%), and AUC value (0.87; 95% CI, 0.82–0.91) compared to the derivation set.
4. Discussion and Recommendations
This systematic review has identified nine prediction models for distinguishing ICH from other causes of suspected stroke in the prehospital setting. All the studies were deemed to have a high risk of bias. Furthermore, there was a considerable degree of heterogeneity in the study populations, designs, predictor variables, performance measures, and intended outcomes of the models. Consequently, we are unable to recommend the use of any of the studied models in prehospital stroke care. Additionally, the external validity and generalisability of the models are unclear.
In the past, several attempts have been made to develop predictive scores for classifying stroke subtypes [23]. However, their diagnostic accuracy was poor, and they might not be applicable to patients in the prehospital setting [23,24]. Additionally, those scores were designed to distinguish between two main subtypes of stroke—ischaemic and haemorrhagic stroke—without considering other stroke-mimicking conditions; hence, they may not adequately capture the complexity of real-world care. Notably, only three of the included studies addressed the appropriate selection of patients for developing their prediction models, considering both stroke subtypes and other causes of suspected stroke [20,21,22].
The selection of an optimal prediction model is critical to support the recent advancements in ICH management [4,7]. Choosing models with high sensitivity will ensure the correct identification and triaging of patients with ICH to an appropriate level of care, yet it may lead to an increase in false positives. On the other hand, prioritising high specificity can prevent unnecessary treatment for non-ICH cases, but it might also result in a greater number of false negatives. Ideally, the ICH prediction model should balance high sensitivity and specificity. Nevertheless, it is important to recognise that the sensitivity and specificity trade off against one another. Notably, the included studies used different arbitrary cut-off points, which could influence the classification performances.
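The effect of the cut-off choice can be made concrete with a small sketch: sweeping a threshold over a risk score traces out the sensitivity/specificity trade-off, while the AUC summarises discrimination across all cut-offs via the Mann–Whitney (concordance) statistic. The scores below are invented for illustration and are not drawn from any included study:

```python
# Hypothetical risk scores (higher = more ICH-like) for ICH and non-ICH cases.
ich     = [3, 2, 2, 1, 1, 0, 3, 2]
non_ich = [0, 1, 0, 2, 1, 0, 0, 1, 2, 0]

def sens_spec(threshold):
    """Classify score >= threshold as 'ICH'; return (sensitivity, specificity)."""
    tp = sum(s >= threshold for s in ich)
    tn = sum(s < threshold for s in non_ich)
    return tp / len(ich), tn / len(non_ich)

# Raising the cut-off trades sensitivity for specificity.
for t in (1, 2, 3):
    se, sp = sens_spec(t)
    print(f"cut-off >= {t}: sensitivity {se:.2f}, specificity {sp:.2f}")

def auc(pos, neg):
    """AUC as the probability that a random ICH case outscores a random
    non-ICH case (Mann-Whitney U / concordance); ties count as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(f"AUC = {auc(ich, non_ich):.2f}")
```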
Among all the models studied, only one [21] achieved a good balance between sensitivity and specificity for detecting ICH using the XGBoost classifier, suggesting the superiority of machine learning models over traditional predictive models (Table 4). However, this model used 52 prediction features, limiting its practicality in the prehospital environment. In prehospital settings, it may be advisable to use prediction models with fewer variables, considering the diverse range of conditions that exist in practice other than stroke [22]. This approach was considered in most of the included studies, where a limited set of predictors was selected for their final models [14,15,16,18,19,20].
Most of the studies in this review collected common predictors of stroke. Among these, decreased consciousness level, high systolic blood pressure, headache, neurological deficits at presentation, and nausea or vomiting were more commonly included in the final models and should be considered as potential predictors for ICH in future prediction models. However, there were variations in the inclusion of other predictors (Table S1 and Figure S1), likely due to differences in the considered variables and study populations.
Through our search of the literature, we identified two other prehospital prediction models that were developed to identify the different subtypes of strokes [25,26]. However, these were not included in our review as they combined ICH and SAH into one category, which was defined as haemorrhagic stroke. ICH and SAH are separate disease entities with different pathophysiology, risk factors, clinical presentations, and management approaches [16,27]. Hence, it may be more beneficial for future models to consider the distinct characteristics of ICH and SAH, as their combination could potentially impact accurate diagnosis and appropriate treatment.
The findings of this systematic review highlight the need for further prehospital research to enhance the identification of ICH. Future studies can use our findings as a guide to develop predictive models that incorporate readily available prehospital data. To ensure their real-world applicability, it is important for future studies to consider a diverse cohort of suspected stroke patients, adhere to recent sample size criteria, and avoid data splitting for model development and validation [28]. Moreover, it would be more beneficial to validate and assess the accuracy, reliability, and limitations of the developed models by using large independent cohorts. Finally, future prediction studies should follow the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) guidelines to ensure rigorous reporting [29].
Future prehospital studies might also consider using simple diagnostic aids in combination with predictive models to improve the accuracy of ICH detection [30,31]. These could include point-of-care testing technologies that measure brain-specific biomarkers, such as glial fibrillary acidic protein, which has shown promising results in distinguishing ICH from other conditions [32,33].
Limitations
The current review has some limitations worth mentioning. First, our review was restricted to published studies in the English language, thus potentially introducing language and publication bias. Additionally, due to the high number of retrieved records, the second reviewer screened only 20% of the identified studies, which might have led to the exclusion of some potentially relevant research during the selection process. Lastly, the heterogeneity and methodological limitations of the included studies precluded the combination of data in a meta-analysis, therefore hindering our ability to provide recommendations for prehospital practice.
5. Conclusions
Existing prehospital stroke prediction models for distinguishing ICH and other causes of suspected stroke were heterogeneous and of poor methodological quality, precluding their recommendation for use in prehospital stroke care. With support from the findings of this review, future studies should address the identified limitations to develop and validate a practical prediction model for patients with ICH in the prehospital stage. In conjunction with predictive models, the use of point-of-care testing tools might further enhance the accuracy of ICH prediction and should be considered in future research.
Supplementary Materials
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/healthcare13080876/s1, Supplementary File S1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. Supplementary File S2. Search strategies for the systematic review. Supplementary File S3. Table S1. Predictors considered in each of the nine prediction modelling studies. Figure S1. Frequency of identified predictors in the final prediction models.
Author Contributions
All the authors of this review fulfil the criteria of authorship. M.A. is the primary author and guarantor of this research, under the supervision of A.P.-J. and D.J. M.A. conceptualised the idea and designed the search strategy, incorporating suggestions from A.P.-J. and D.J. M.A. and I.A. screened the studies, performed data extraction, and assessed the risk of bias. M.A. conducted data analyses and drafted the manuscript with support from A.P.-J. and D.J. All authors have read and agreed to the published version of the manuscript.
Funding
This review is part of M.A.’s PhD project, which is funded by King Saud University, Riyadh, Saudi Arabia, through the Saudi Arabian Cultural Bureau in the United Kingdom. The research was also co-funded by the National Institute for Health Research (NIHR) Manchester Biomedical Research Centre (NIHR203308). The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
All data relevant to this review are included in the manuscript or uploaded as Supplementary Information.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Paroutoglou, K.; Parry-Jones, A.R. Hyperacute management of intracerebral haemorrhage. Clin. Med. 2018, 18 (Suppl. 2), s9–s12.
- van Asch, C.J.; Luitse, M.J.; Rinkel, G.J.; van der Tweel, I.; Algra, A.; Klijn, C.J. Incidence, case fatality, and functional outcome of intracerebral haemorrhage over time, according to age, sex, and ethnic origin: A systematic review and meta-analysis. Lancet Neurol. 2010, 9, 167–176.
- Bowry, R.; Parker, S.A.; Bratina, P.; Singh, N.; Yamal, J.M.; Rajan, S.S.; Jacob, A.P.; Phan, K.; Czap, A.; Grotta, J.C. Hemorrhage Enlargement Is More Frequent in the First 2 Hours: A Prehospital Mobile Stroke Unit Study. Stroke 2022, 53, 2352–2360.
- Greenberg, S.M.; Ziai, W.C.; Cordonnier, C.; Dowlatshahi, D.; Francis, B.; Goldstein, J.N.; Hemphill, J.C., 3rd; Johnson, R.; Keigher, K.M.; Mack, W.J.; et al. 2022 Guideline for the Management of Patients with Spontaneous Intracerebral Hemorrhage: A Guideline From the American Heart Association/American Stroke Association. Stroke 2022, 53, e282–e361.
- Kleindorfer, D.O.; Miller, R.; Moomaw, C.J.; Alwell, K.; Broderick, J.P.; Khoury, J.; Woo, D.; Flaherty, M.L.; Zakaria, T.; Kissela, B.M. Designing a message for public education regarding stroke: Does FAST capture enough stroke? Stroke 2007, 38, 2864–2868.
- Oostema, J.A.; Chassee, T.; Baer, W.; Edberg, A.; Reeves, M.J. Accuracy and Implications of Hemorrhagic Stroke Recognition by Emergency Medical Services. Prehosp. Emerg. Care 2021, 25, 796–801.
- Li, G.; Lin, Y.; Yang, J.; Anderson, C.S.; Chen, C.; Liu, F.; Billot, L.; Li, Q.; Chen, X.; Liu, X.; et al. Intensive Ambulance-Delivered Blood-Pressure Reduction in Hyperacute Stroke. N. Engl. J. Med. 2024, 390, 1862–1872.
- Richards, C.T.; Oostema, J.A.; Chapman, S.N.; Mamer, L.E.; Brandler, E.S.; Alexandrov, A.W.; Czap, A.L.; Martinez-Gutierrez, J.C.; Martin-Gill, C.; Panchal, A.R.; et al. Prehospital Stroke Care Part 2: On-Scene Evaluation and Management by Emergency Medical Services Practitioners. Stroke 2023, 54, 1416–1425.
- Ramos-Pachón, A.; Rodríguez-Luna, D.; Martí-Fàbregas, J.; Millán, M.; Bustamante, A.; Martínez-Sánchez, M.; Serena, J.; Terceño, M.; Vera-Cáceres, C.; Camps-Renom, P.; et al. Effect of Bypassing the Closest Stroke Center in Patients with Intracerebral Hemorrhage: A Secondary Analysis of the RACECAT Randomized Clinical Trial. JAMA Neurol. 2023, 80, 1028–1036.
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71.
- Ouzzani, M.; Hammady, H.; Fedorowicz, Z.; Elmagarmid, A. Rayyan-a web and mobile app for systematic reviews. Syst. Rev. 2016, 5, 210.
- Moons, K.G.M.; Wolff, R.F.; Riley, R.D.; Whiting, P.F.; Westwood, M.; Collins, G.S.; Reitsma, J.B.; Kleijnen, J.; Mallett, S. PROBAST: A Tool to Assess Risk of Bias and Applicability of Prediction Model Studies: Explanation and Elaboration. Ann. Intern. Med. 2019, 170, W1–W33.
- MedCalc Software Ltd. Diagnostic Test Evaluation Calculator. Version 22.013. Available online: https://www.medcalc.org/calc/diagnostic_test.php (accessed on 5 October 2023).
- Woisetschläger, C.; Kittler, H.; Oschatz, E.; Bur, A.; Lang, W.; Waldenhofer, U.; Laggner, A.N.; Hirschl, M.M. Out-of-hospital diagnosis of cerebral infarction versus intracranial hemorrhage. Intensive Care Med. 2000, 26, 1561–1565.
- Yamashita, S.; Kimura, K.; Iguchi, Y.; Shibazaki, K.; Watanabe, M.; Iwanaga, T. Kurashiki Prehospital Stroke Subtyping Score (KP3S) as a means of distinguishing ischemic from hemorrhagic stroke in emergency medical services. Eur. Neurol. 2011, 65, 233–238.
- Jin, H.Q.; Wang, J.C.; Sun, Y.A.; Lyu, P.; Cui, W.; Liu, Y.Y.; Zhen, Z.G.; Huang, Y.N. Prehospital Identification of Stroke Subtypes in Chinese Rural Areas. Chin. Med. J. 2016, 129, 1041–1046.
- Uchida, K.; Yoshimura, S.; Hiyama, N.; Oki, Y.; Matsumoto, T.; Tokuda, R.; Yamaura, I.; Saito, S.; Takeuchi, M.; Shigeta, K.; et al. Clinical Prediction Rules to Classify Types of Stroke at Prehospital Stage. Stroke 2018, 49, 1820–1827.
- Chiquete, E.; Jiménez-Ruiz, A.; García-Grimshaw, M.; Domínguez-Moreno, R.; Rodríguez-Perea, E.; Trejo-Romero, P.; Ruiz-Ruiz, E.; Sandoval-Rodríguez, V.; Gómez-Piña, J.J.; Ramírez-García, G.; et al. Prediction of acute neurovascular syndromes with prehospital clinical features witnessed by bystanders. Neurol. Sci. 2021, 42, 3217–3224.
- Uchida, K.; Yoshimura, S.; Sakakibara, F.; Kinjo, N.; Araki, H.; Saito, S.; Morimoto, T. Simplified Prehospital Prediction Rule to Estimate the Likelihood of 4 Types of Stroke: The 7-Item Japan Urgent Stroke Triage (JUST-7) Score. Prehosp. Emerg. Care 2021, 25, 465–474.
- Geisler, F.; Wesirow, M.; Ebinger, M.; Kunz, A.; Rozanski, M.; Waldschmidt, C.; Weber, J.E.; Wendt, M.; Winter, B.; Audebert, H.J. Probability assessment of intracerebral hemorrhage in prehospital emergency patients. Neurol. Res. Pract. 2021, 3, 1.
- Hayashi, Y.; Shimada, T.; Hattori, N.; Shimazui, T.; Yoshida, Y.; Miura, R.E.; Yamao, Y.; Abe, R.; Kobayashi, E.; Iwadate, Y.; et al. A prehospital diagnostic algorithm for strokes using machine learning: A prospective observational study. Sci. Rep. 2021, 11, 20519. [Google Scholar] [CrossRef]
- Uchida, K.; Kouno, J.; Yoshimura, S.; Kinjo, N.; Sakakibara, F.; Araki, H.; Morimoto, T. Development of Machine Learning Models to Predict Probabilities and Types of Stroke at Prehospital Stage: The Japan Urgent Stroke Triage Score Using Machine Learning (JUST-ML). Transl. Stroke Res. 2022, 13, 370–381. [Google Scholar] [CrossRef] [PubMed]
- Runchey, S.; McGee, S. Does this patient have a hemorrhagic stroke? Clinical findings distinguishing hemorrhagic stroke from ischemic stroke. JAMA 2010, 303, 2280–2286. [Google Scholar] [CrossRef] [PubMed]
- Mwita, C.C.; Kajia, D.; Gwer, S.; Etyang, A.; Newton, C.R. Accuracy of clinical stroke scores for distinguishing stroke subtypes in resource poor settings: A systematic review of diagnostic test accuracy. J. Neurosci. Rural Pract. 2014, 5, 330–339. [Google Scholar] [CrossRef]
- Lee, S.E.; Choi, M.H.; Kang, H.J.; Lee, S.J.; Lee, J.S.; Lee, Y.; Hong, J.M. Stepwise stroke recognition through clinical information, vital signs, and initial labs (CIVIL): Electronic health record-based observational cohort study. PLoS ONE 2020, 15, e0231113. [Google Scholar] [CrossRef]
- Ye, S.; Pan, H.; Li, W.; Wang, J.; Zhang, H. Development and validation of a clinical nomogram for differentiating hemorrhagic and ischemic stroke prehospital. BMC Neurol. 2023, 23, 95. [Google Scholar] [CrossRef]
- Kim, H.C.; Nam, C.M.; Jee, S.H.; Suh, I. Comparison of blood pressure-associated risk of intracerebral hemorrhage and subarachnoid hemorrhage: Korea Medical Insurance Corporation study. Hypertension 2005, 46, 393–397. [Google Scholar] [CrossRef] [PubMed]
- Riley, R.D.; Ensor, J.; Snell, K.I.E.; Harrell, F.E., Jr.; Martin, G.P.; Reitsma, J.B.; Moons, K.G.M.; Collins, G.; van Smeden, M. Calculating the sample size required for developing a clinical prediction model. BMJ 2020, 368, m441. [Google Scholar] [CrossRef]
- Collins, G.S.; Moons, K.G.M.; Dhiman, P.; Riley, R.D.; Beam, A.L.; Van Calster, B.; Ghassemi, M.; Liu, X.; Reitsma, J.B.; van Smeden, M.; et al. TRIPOD+AI statement: Updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ 2024, 385, e078378. [Google Scholar] [CrossRef]
- Almubayyidh, M.; Alghamdi, I.; Parry-Jones, A.R.; Jenkins, D. Prehospital identification of intracerebral haemorrhage: A scoping review of early clinical features and portable devices. BMJ Open 2024, 14, e079316. [Google Scholar] [CrossRef]
- Wolcott, Z.C.; English, S.W. Artificial intelligence to enhance prehospital stroke diagnosis and triage: A perspective. Front. Neurol. 2024, 15, 1389056. [Google Scholar] [CrossRef]
- Kumar, A.; Misra, S.; Yadav, A.K.; Sagar, R.; Verma, B.; Grover, A.; Prasad, K. Role of glial fibrillary acidic protein as a biomarker in differentiating intracerebral haemorrhage from ischaemic stroke and stroke mimics: A meta-analysis. Biomarkers 2020, 25, 1–8. [Google Scholar] [CrossRef] [PubMed]
- Kalra, L.-P.; Zylyftari, S.; Blums, K.; Barthelmes, S.; Humm, S.; Baum, H.; Meckel, S.; Luger, S.; Foerch, C. Abstract 57: Rapid Prehospital Diagnosis of Intracerebral Hemorrhage by Measuring GFAP on a Point-of-Care Platform. Stroke 2024, 55 (Suppl. 1), A57. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).