Artificial Intelligence: A Shifting Paradigm in Cardio-Cerebrovascular Medicine

The future of healthcare is an organic blend of technology, innovation, and human connection. As artificial intelligence (AI) gradually becomes a go-to technology in healthcare for improving efficiency and outcomes, we must understand its limitations. Our goal is not only to provide faster and more efficient care, but also to deliver integrated solutions that ensure care is fair and not biased against any subpopulation. In this context, the field of cardio-cerebrovascular diseases, which encompasses a wide range of conditions from heart failure to stroke, has made advances in providing assistive tools to care providers. This article provides a thematic review of recent developments across AI applications in cardio-cerebrovascular diseases to identify gaps and potential areas of improvement. If well designed, these technological engines have the potential to improve healthcare access and equity while reducing overall costs, diagnostic errors, and disparities in a system that affects both patients and providers and strives for efficiency.


Introduction
Artificial intelligence (AI) focuses on how computers learn from large and complex datasets by mimicking the human thought process. AI has the potential to accelerate precision medicine by helping practitioners calculate risk, guide treatment, predict outcomes, and close care gaps using scalable computational resources and advanced algorithms applied to a growing body of data and knowledge. AI can be specifically designed to improve clinical care and increase efficiency in drug discovery [1]. Carefully designed and implemented electronic health record (EHR)-embedded AI tools and applications can save valuable time and assist practitioners with critical decision-making at the point of care. AI can also potentially reduce health disparities and address implicit bias. Machine learning (ML), an application of AI, provides systems with the ability to learn from data and experience [2].
Cardio-cerebrovascular diseases, a leading cause of mortality and disability in the United States and worldwide [3,4], have been targeted by big data science and AI applications. Furthermore, with the growing prevalence of vascular risk factors, mortality and complication rates are expected to rise [5]. Many large studies in cardiovascular medicine use AI to provide a promising set of assistive tools to cardiologists and to push the boundaries of translational science. Cardiovascular and cerebrovascular diseases share many predictors and pathophysiological processes, among other features [6][7][8]. However, big data and advanced prediction modeling have not been studied to the same extent in the cardiovascular and cerebrovascular fields. Our intent in this work was to review recent AI-enabled applications developed for cardiovascular and cerebrovascular conditions across the different stages of care management (Figure 1).

Methods
We conducted a comprehensive literature search to extract original contributions in the various areas of AI application in cardio-cerebrovascular diseases published between 2017 and 2020. We defined cardiovascular diseases as ischemic heart disease, heart failure, myocardial infarction, and hypertrophic diseases, excluding arrhythmias, infiltrative cardiomyopathies, and genomics. Cerebrovascular diseases were defined as stroke (hemorrhagic/ischemic), thrombosis, and cerebral aneurysmal disorders, excluding genomics. The detailed search criteria are outlined in Figure 2. We examined 256 articles in the field of cardiovascular medicine and included 44 studies in this review. Similarly, we reviewed 235 studies in cerebrovascular diseases and included 29. We assessed the reporting quality of the studies against the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) statement when deciding on inclusion [9]. We further divided the studies by clinical application: pre-diagnostic, diagnostic/imaging, and post-diagnostic. Other developing areas of AI research, such as AI in clinical trials and subtyping, AI-powered clinical decision support systems, and the application of AI to reducing health disparity and implicit bias, are also briefly discussed.

Results
A total of 73 cardio-cerebrovascular studies were identified and included in this review: 29 cerebrovascular and 44 cardiovascular (Tables 1 and 2), with the majority of the cerebrovascular study designs being single-center and retrospective. The reviewed studies were divided into the following categories: risk stratification modeling (11 cardiovascular, 5 cerebrovascular), diagnostic studies (4 cardiovascular, 5 cerebrovascular), outcome prediction and prognosis (18 cardiovascular, 6 cerebrovascular), treatment strategies (3 cardiovascular, 2 cerebrovascular), and diagnostic imaging studies (8 cardiovascular, 10 cerebrovascular). Tables 1 and 2 provide a detailed description of the included studies under these categories. The text that follows further subcategorizes the studies to better dissect the various fields of application of AI; the pertinent subsections are also noted in the tables to improve readability.

(a) Risk Estimation
Risk assessment tools are becoming more salient in the era of precision medicine. EHR and administrative databases, in conjunction with advanced applications of AI, have been the driving force behind primary prevention strategies for cardiovascular and related conditions (Table 1). Noteworthy applications of ML for risk estimation include improved prediction of cardiovascular risk in patients with no prior risk factors [10], models of long-term risk of MI and cardiac death in asymptomatic patients [11], and the identification of cardiovascular disease risk factors in patients with no initial indications [12,13]. Researchers have also combined biomarkers such as hemoglobin A1c (HbA1c) and thyroid-stimulating hormone with machine learning (support vector machines, SVM) to identify participants who later developed coronary heart disease [14]. Another study utilized AI-enabled imaging tools to predict major cardiovascular events in asymptomatic patients [15]. Predicting survival via ML using echocardiography and CT angiography (CTA) has also been attempted, with promising results [16,17]. Four large-scale studies, mainly from Asian countries, have focused on estimating the risk of cerebrovascular disease (Table 2) [18][19][20][21]; these studies sought to estimate the risk of stroke in patients with atrial fibrillation. Cerebrovascular studies on risk stratification are mostly retrospective and suffer from limited diversity and smaller sample sizes compared to cardiovascular studies. For instance, some cardiovascular studies have leveraged existing clinical trials (the MESA cohort [22] and the EISNER trial [23]) with rich extended longitudinal follow-up (up to ten years); cerebrovascular studies, on the other hand, have a relatively narrower timeline (up to two years).
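As a rough illustration of the SVM-based risk estimation described above, the sketch below trains a support vector machine on entirely synthetic biomarker data. The feature names (HbA1c, TSH, age), distributions, and outcome mechanism are invented for illustration and are not taken from the cited studies.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Synthetic biomarker features (hypothetical): HbA1c (%), TSH (mIU/L), age (years)
X = np.column_stack([
    rng.normal(5.6, 0.8, n),   # HbA1c
    rng.normal(2.0, 1.0, n),   # TSH
    rng.normal(55, 10, n),     # age
])
# Synthetic label: probability of later coronary heart disease rises with HbA1c and age
logit = 0.9 * (X[:, 0] - 5.6) + 0.05 * (X[:, 2] - 55) - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# Standardize features, then fit an RBF-kernel SVM with probability estimates
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

In a real study, the held-out discrimination would of course be estimated on prospectively collected participants rather than synthetic draws.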

(b) Clustering and Patient Profiling Before Event
Researchers have used ML to group cardiovascular patients based on coronary artery disease (CAD) severity [24], ischemia scoring [25], obstructive disease [26], and coronary stenosis [27]. ML has also been used to discriminate between healthy individuals and patients with impaired functional reserve due to heart failure with preserved ejection fraction (HFpEF) [28]. With regard to cerebrovascular disease, investigators have implemented ML to improve aneurysm detection with time-of-flight MR angiography [29]. Patient clustering has numerous potential benefits for the patients and the health system. Besides cardiovascular and cerebrovascular diseases, patient profiling has been valuable in other complex diseases [30][31][32][33][34].
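The kind of severity-based patient grouping described above can be sketched with K-means clustering on synthetic patient profiles. The features (ejection fraction, stenosis severity, an ischemia score), the three latent groups, and all values are hypothetical, not drawn from the cited studies.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Synthetic patient profiles drawn from three latent severity groups
groups = [
    (np.array([60.0, 10.0, 1.0]), 200),   # near-normal
    (np.array([45.0, 50.0, 5.0]), 150),   # moderate obstructive disease
    (np.array([30.0, 80.0, 9.0]), 100),   # severe disease
]
X = np.vstack([center + rng.normal(0, [4.0, 8.0, 1.0], size=(n, 3))
               for center, n in groups])

# Standardize so no single feature dominates the Euclidean distance
Xs = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(Xs)
print("cluster sizes:", np.bincount(km.labels_))
print(f"silhouette: {silhouette_score(Xs, km.labels_):.2f}")
```

In practice the number of clusters is itself a modeling choice, typically selected with internal metrics such as the silhouette score printed here.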

(c) Care Gap Identification and Personalized Prevention
Identification of care gaps in medical management is an important potential field for ML with high clinical value. This field is not fully developed in either the cardiovascular or the cerebrovascular domain and is a promising avenue for advanced applications of AI aimed at improving quality of care and resource optimization.
In the period covered by this article, only four studies were identified that concentrate on minimizing the healthcare gap. On the cardiovascular front, ML has been used to develop a risk calculator to aid with the initiation of statin therapy for CAD, which can potentially minimize future cardiovascular events in affected patients [13]. By reclassifying CTA results, ML has succeeded in better predicting existing ischemia and distinguishing it from subclinical coronary stenosis [27]. The one cerebrovascular study using ML to close the care gap focused on better detection of cerebral aneurysms in MR angiography image data [29]. Karlsson et al. assessed an ML-powered clinical decision support system (CDSS) for stroke prevention in a randomized clinical trial of patients with atrial fibrillation (AF); the study showed that the CDSS can increase guideline adherence for anticoagulation therapy among these patients [35].
Personalized prevention is another area with potential clinical value. Thus far, ML has only been utilized to predict obstructive coronary disease on myocardial perfusion imaging as a directive for preventive action at the individual level [26]. Quality of recovery in both MI and stroke patients depends on the time from symptom onset to intervention [36][37][38]. AI can aid in shortening this time window and improving treatment outcomes. However, there are technological barriers, including access to real-time patient data for model prediction, that make this space complex to implement. For instance, in a study by Potter and colleagues, computational algorithms were used to develop an AI-aided system that more promptly identifies and refers STEMI patients for cardiac catheterization during the EMS encounter [39]. Using this method for "physician-less" cardiac catheterization lab activation was safe and effective in reducing treatment delay, with sustainable results over time. Investment in this emerging application of AI can therefore help save lives while reducing system-wide costs and the physician burnout that stems from managing patients at high risk of disability and death.

Application of Computational Algorithms in Diagnosis and Acute Phase Treatment
(b) Acute Diagnosis
ML can be an essential tool to guide physicians in the acute diagnosis of cardio- and cerebrovascular disease. Most ECG recording devices now possess computational abilities to calculate measurements and "read" ECGs in real time with variable accuracy [40]. With recent advances in computational algorithms, ML has been used to develop advanced diagnostic systems that can make predictions and direct the pre-hospital diagnosis of acute coronary syndrome [39,41].
Timely diagnosis of ischemic and hemorrhagic stroke, while challenging for physicians, is invaluable for the patient. ML has been explored by researchers for stroke screening [42], detection of stroke and large vessel occlusion using CTA imaging [43,44], detection and subtyping of hemorrhagic stroke on CT scans [45][46][47][48], and to predict post-stroke mortality [49,50]. Researchers have also used ML to aid in the acute diagnosis of TIAs and differentiate them from their mimics [51].
(c) Acute Imaging
The use of machine learning, especially deep learning, in the field of imaging has grown exponentially in recent years, leading to improved predictive and diagnostic ability. For cardiovascular disease, ML has been used to aid in the diagnosis and classification of acute and subacute coronary stenosis. Researchers have used ECG data to identify patients with chest discomfort who need urgent revascularization [41]. Other investigators have developed algorithms to make similar diagnoses and classifications from myocardial perfusion imaging [26], CT angiography [52], and clinical and laboratory data [53] in emergency settings.
The two main imaging modalities for the detection of stroke are CT and MRI. In the past four years, many studies in stroke patients have used ML to detect, quantify, and subtype ICH on non-contrast CT [46][47][48][54] and MRI [55] in the acute phase. Researchers have also used support vector machine (SVM) algorithms to predict hematoma expansion in patients with spontaneous ICH [56]. In ischemic stroke, ML has shown promise in detecting large vessel occlusion on CTA [44] and in predicting and quantifying the ischemic core [43,57]. In a different study, Fhager and colleagues implemented binary classification on a broadband microwave imaging technique that can potentially detect ICH outside of dedicated stroke centers [45].
Although the application of machine learning to acute imaging has progressed significantly in both fields, ML has been used more extensively for quantifying brain biomarkers than for markers from cardiovascular imaging. Nonetheless, the field is transitioning to prospective trials and effective implementation at the bedside in multiple settings.

(d) Triaging and Acute Treatment
While diagnosis in the cardio- and cerebrovascular fields is one of the first steps after hospital admission, risk stratification during triage can help optimize available resources and tailor care management. However, the need for rapid response also requires tools that interact in real time with the output of the imaging device and with EHR data; implementing such tools can therefore be complex and often requires coordination at different levels. For instance, the risk of in-hospital cardiac arrest has been predicted using a decision tree [58], while other ML algorithms have been used for risk stratification of chest pain patients using coronary CTA data [52]. Once externally validated and implemented to act in real time in clinical settings, these tools could help reduce time to treatment and save lives.
The use of technology to improve triaging during the acute phase has been more productive in recent years in the cerebrovascular field. ML has been used to recognize and differentiate ischemic stroke using clinical data [42] and to predict the 90-day mRS score to aid with thrombectomy decisions [59]. MRI data have been used for classification of ischemic stroke onset time [60] and for segmentation and phenotyping of acute ischemic lesions [55]. Researchers have also used ML to estimate ICH volume on CT images [47]. The use of ML in triaging stroke patients has expanded further, and authors have discussed the scope and limitations of an ML-based decision support system framework to aid physicians in urgent settings.
In a real-world environment, initial patient notes can complement pre-event information, if available, to identify patients at risk of stroke and alert the physician to take guideline-compliant steps to improve the outcome [61]. However, processing clinical notes requires advanced natural language processing (NLP) carefully tailored for clinical applications. NLP has mostly been applied to reports (such as radiology reports) with promising results [62]; applied to clinical notes, it can have clinical utility in improving the identification of patients at risk of major vascular events [61]. Finding clusters of stroke patients can be helpful from the medical perspective, as it may lead to the discovery of new patterns and more effective ways to manage a specific condition and its complications. Garg et al. [63] developed an automated stroke subtype classification using radiology and progress reports and showed agreement with the manual TOAST (Trial of Org 10172 in Acute Stroke Treatment) [64] classification; the challenge of the study remains its validation in an external cohort. Other studies are attempting to create a CDSS to help physicians classify stroke subtypes based on limited clinical data. Keerthana [65] used fuzzy C-means clustering for the segmentation of stroke lesions on MRI images; the study lacked technical details, including the number of cases used in model development and testing. Subtyping in the field of cardiovascular medicine is relatively new, with clinical applications that remain relatively sparse [28,[66][67][68][69][70]. Shah et al. predicted the survival of patients with HFpEF using an unsupervised learning model and demonstrated the benefits of deep phenotyping in these patients [71]. The researchers applied unsupervised learning across 46 variables to identify intrinsic structure within the HFpEF population and identified three distinct groups.
The study needs to be replicated in external HFpEF cohorts to demonstrate generalizability. Zhao et al. applied a constrained non-negative tensor factorization approach to classifying patients with cardiovascular disease based on their longitudinal EHR data [72]. The latter study is unique in that it encompasses data from up to ten years before patients developed heart disease, capturing emerging phenotypes among 12,380 cardiovascular disease patients. In another study, Ahmad et al. [73] analyzed data from 1619 participants in the HF-ACTION trial (Heart Failure: A Controlled Trial Investigating Outcomes of Exercise Training) to identify subtypes of chronic heart failure. The study design excluded patients with incomplete data, thus limiting the true value of the prediction models for clinical applications. Nonetheless, four subtypes were identified, and each subtype responded distinctly to exercise therapy. In another study, Schulam et al. [74] used Limestone, a non-negative tensor factorization algorithm, to identify multiple candidate phenotypes of heart failure; their clinical evaluation showed Limestone's potential to produce phenotypes that identify disease subtypes with clinical utility. Panahiazar et al. [75] used K-means and hierarchical clustering to group heart failure patients by their response to medication.
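In the same spirit as the non-negative factorization approaches cited above, the sketch below applies non-negative matrix (rather than tensor) factorization to a synthetic patient-by-feature count matrix; all dimensions, data, and the phenotype structure are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
# Synthetic non-negative patient-by-feature matrix (hypothetical EHR counts):
# rows are patients, columns are, e.g., diagnosis or medication code counts.
n_patients, n_features, n_phenotypes = 300, 20, 3
W_true = rng.gamma(2.0, 1.0, size=(n_patients, n_phenotypes))
H_true = rng.gamma(2.0, 1.0, size=(n_phenotypes, n_features))
V = W_true @ H_true + rng.gamma(1.0, 0.1, size=(n_patients, n_features))

# Each row of H is a candidate phenotype (a weighted bundle of features);
# each row of W gives a patient's loading on the phenotypes.
nmf = NMF(n_components=n_phenotypes, init="nndsvda", max_iter=500, random_state=2)
W = nmf.fit_transform(V)
H = nmf.components_

assign = W.argmax(axis=1)  # dominant phenotype per patient
print("patients per dominant phenotype:", np.bincount(assign, minlength=n_phenotypes))
```

The non-negativity constraint is what makes the recovered factors interpretable as additive phenotype "bundles", which is the key appeal of these methods for EHR data.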

Application of AI in Post-Diagnosis Outcome Prediction and Secondary Prevention
Assessing a new patient's similarity to each identified cluster could help determine an appropriate medication plan. The major limitation in these studies remains selection bias, given that patients with a poor data footprint are often excluded from modeling. Overall, however, these examples demonstrate the potential of ML-enabled, patient-similarity-based methods as assistive tools.

(b) Outcome Prediction
Prediction of outcomes after diagnosis was the most extensively investigated application of ML among the categories included in this literature review. The outcomes of interest included, but were not limited to, disease severity, survival, mortality, length of hospitalization, rehospitalization, and recurrence. In patients with confirmed coronary artery disease (CAD), clinical and laboratory data have been used in addition to CTA [17,76] and angiograms [77] to predict cardiovascular events or death, with promising results. In one study by Johnson and colleagues, ML algorithms proved superior to CAD reporting and data system (CAD-RADS) scoring in predicting future cardiovascular events and mortality in patients with positive CTA results [17]. In another study, a random forest-based model better identified patients at risk of 30-day congestive heart failure rehospitalization and 180-day cardiovascular mortality following percutaneous coronary intervention than conventional methods [78]. Other studies have explored the application of ML in patients admitted for acute coronary syndrome to predict in-hospital mortality [79], 30-day mortality [80], and long-term survival [81][82][83][84]. Duane et al. proposed a deep learning model using static and dynamic features in 2930 patients with acute coronary syndrome to predict future major adverse events [85]. A major study from Sweden used 39 survival predictor variables in 51,943 patients to develop various ML models that could accurately predict two-year survival after a first MI event [82]. Meanwhile, Pieszo et al. used laboratory values in MI patients to predict long-term mortality, while Kwon and colleagues combined laboratory data with patient demographics to make similar predictions [83,84].
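A minimal sketch of the random-forest-versus-conventional-model comparison described above, on synthetic post-PCI data. The features (age, ejection fraction, creatinine, a prior-HF flag) and the label mechanism are entirely invented; the point is only the workflow of fitting both models and comparing held-out discrimination.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
# Hypothetical post-PCI features: age, ejection fraction, creatinine, prior-HF flag
X = np.column_stack([
    rng.normal(65, 10, n),
    rng.normal(50, 10, n),
    rng.lognormal(0.0, 0.4, n),
    rng.integers(0, 2, n).astype(float),
])
# Synthetic 30-day rehospitalization label with a non-linear interaction,
# the kind of structure tree ensembles can pick up more easily than linear models
logit = -2.0 + 0.04 * (X[:, 0] - 65) + 1.2 * X[:, 3] * (X[:, 1] < 40)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
rf = RandomForestClassifier(n_estimators=200, random_state=3).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc_rf = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
print(f"random forest AUC: {auc_rf:.2f}, logistic AUC: {auc_lr:.2f}")
```

On real cohorts the comparison would additionally use calibration and reclassification metrics, as several of the cited studies do, rather than AUC alone.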
Heart failure is yet another area where ML has shown promising results in the prediction of outcomes [86,87]. In the study published by Kwon et al., machine learning algorithms were able to predict in-hospital and long-term mortality following acute heart failure more effectively than conventional scoring systems [88]. Survival in patients with pulmonary hypertension has also been predicted using ML [89]. Distinguishing between short-term vs. long-term mortality is equally beneficial for the patients and healthcare system as it can help with resource optimization as well as more personalized care [50].
Ischemic and hemorrhagic stroke have been the main focus of cerebrovascular studies with regard to secondary prevention and the prediction of functional outcome and mortality. Researchers have used deep learning on acute ischemic stroke imaging features to predict lesion volume [90]. Two teams of scientists have used ML algorithms to predict three-month functional outcomes following ischemic stroke [91,92]. ML has also been utilized to predict 90-day readmission [93] and one-year recurrence in patients with ischemic stroke [94]. In patients undergoing endovascular treatment for ischemic stroke, ML algorithms did not improve outcome prediction compared to logistic regression [95]. In hemorrhagic intracranial events, ML has successfully predicted hematoma expansion [56] and delayed ischemia [96].
There has thus been an increasing number of successful applications of AI in predicting outcomes in cardiovascular and cerebrovascular diseases, raising the question of when these improvements can be evaluated for clinical utility and generalizability so they reach patients' bedsides. In this context, functional outcome in stroke patients is primarily measured by the modified Rankin Scale (mRS) score [97], while the New York Heart Association (NYHA) classification is used to categorize heart failure patients [98]. Using these scores as features can be important for training machine learning models. However, the main limiting factors remain the lack of proper reporting of functional classes and the level of missingness in these measurements across healthcare systems. Incorporating functional outcomes in a structured form in EHR data, to enable easier integration of these measures into machine learning models, is an important first step. Better, more consistent, and standardized reporting of functional class scores will ultimately lead to better model predictions.
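One possible way to turn a functional score such as the mRS into a structured, model-ready feature with an explicit missingness indicator is sketched below. The records, the free-text formats, and the parsing rule are hypothetical, intended only to illustrate the structuring step the paragraph above argues for.

```python
import pandas as pd

# Hypothetical extract of functional scores pulled from an EHR; mRS is an
# ordinal 0-6 scale and is often missing or recorded as free text in practice.
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "mrs_raw": ["2", None, "mRS 4", "6"],
})

def parse_mrs(value):
    """Parse an mRS entry to an int in 0..6, or None if unusable."""
    if value is None:
        return None
    digits = [int(ch) for ch in str(value) if ch.isdigit()]
    if digits and 0 <= digits[-1] <= 6:
        return digits[-1]
    return None

records["mrs"] = records["mrs_raw"].map(parse_mrs).astype("Int64")  # nullable int
records["mrs_missing"] = records["mrs"].isna().astype(int)  # explicit indicator
print(records[["patient_id", "mrs", "mrs_missing"]])
```

Keeping missingness as its own feature, rather than silently imputing, lets a downstream model learn whether non-reporting itself carries signal, which matters when missingness is not at random.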

(a) Personalized Treatment
Studies on the use of ML to assist with rehabilitation have been limited. ML has helped investigators classify heart failure patients based on clinical presentation and improve treatment response by directing personalized therapies [99]. In the only cerebrovascular study, researchers used ML to predict activities of daily living in post-stroke patients to better optimize clinical care [100]. Personalized treatment for tertiary prevention is an area with great potential for the application of AI. Rehabilitation in both cardio- and cerebrovascular patients places a major financial burden on healthcare systems [101,102]. Innovative use of ML in this field can lead to improved resource optimization and a more personalized patient experience [103].

(b) Outcome Prediction
Outcome prediction using ML during rehabilitation in cardiovascular studies has mainly focused on cardiac resynchronization therapy outcomes in patients with heart failure. Researchers have used ML to predict patient response to cardiac resynchronization [104], outcome [105], and mortality [106]. ML has also been used to distinguish different heart failure phenotypes [86] and predict survival with the aid of echocardiography data [16]. In the only cerebrovascular study that we were able to identify, researchers used ML to predict activities of daily living in post-stroke patients to better optimize clinical care [100]. This field has great potential for future studies and trials to improve the recovery and quality of life of patients.

Table excerpt: selected study details (Ref., Year-Category; Study Details; Sample Size; Algorithms).

Location: United Kingdom
Aim: Predicting long-term mortality after ACS using laboratory values
Variables: Hematological indices and inflammation markers
Strengths: Large sample size
Limitations: Imputation for the ML was performed using the mean of all observations; this is typically not ideal, since missingness in EHR data tends to be not-at-random
Findings: The model achieved a c-statistic of 0.89 for in-hospital mortality and 0.77 for six-month mortality. Red cell distribution width (HR 1.23) and neutrophil-to-lymphocyte ratio (HR 1.08) were independently associated with all-cause mortality in multivariable Cox regression

Location: Multi-national
Aim: Using ML to phenotypically classify a heterogeneous HF cohort and aid in optimizing the rate of responders to specific therapies
Variables: 50 variables, including clinical parameters, biomarker values, and measures of left and right ventricular structure and function
Strengths: Data from the MADIT-CRT trial [114]; randomized cohort
Limitations: Possibility of selection bias; results confined to a selected population of HF patients enrolled in a clinical trial with strict inclusion/exclusion criteria
Findings: Four phenogroups were identified, differing significantly in primary outcome occurrence. Two phenogroups included a higher proportion of known clinical characteristics predictive of CRT response and were associated with a substantially better treatment effect of CRT-D on the primary outcome (HR = 0.35 and HR = 0.36) than observed in the other groups
Notable facts: By integrating clinical parameters and full heart-cycle imaging data, unsupervised ML can provide a clinically meaningful classification of a phenotypically heterogeneous HF cohort and might aid in optimizing the rate of responders to specific therapies
Sample size: 1106
Algorithms: Multiple kernel learning, K-means clustering

Ref.: [25], 2018-1b, 3a
Location: Multi-national
Aim: Predicting lesion-specific ischemia by invasive FFR using an integrated ML ischemia risk score built from quantitative plaque measures on CCTA
Variables: Quantitative CCTA data: stenosis, non-calcified plaque (NCP), low-density NCP (LD-NCP), calcified and total plaque volumes, contrast density difference (maximum difference in luminal attenuation per unit area), and plaque length
Strengths: Multi-center data from the NXT trial [116]
Limitations: Small sample size; plaque findings were not confirmed by invasive intravascular ultrasound
Findings: Information gain for predicting ischemia was highest for contrast density difference
Sample size: 254
Algorithms: LogitBoost

Ref.: [15], 2020-1a
Location: Multi-national
Aim: Evaluate the prognostic value of fully automated DL-based EAT volume and attenuation quantified from non-contrast cardiac CT
Variables: Non-contrast cardiac CT scan data, inflammatory biomarkers
Strengths: Data from the EISNER trial [23]
Limitations: Long-term follow-up not obtained
Findings: Increased EAT volume and decreased EAT attenuation were independently associated with MACE. CAD risk score, CAC, and EAT volume were associated with increased risk of MACE (hazard ratios: 1.03, 1.25, and 1.35). EAT attenuation was inversely associated with MACE (hazard ratio: 0.83; Harrell C-statistic: 0.76). MACE risk progressively increased with EAT volume ≥ 113 cm³ and CAC ≥ 100 AU, and was highest in subjects with both. EAT volume correlated with inflammatory biomarkers; EAT attenuation was inversely related to inflammatory biomarkers
Sample size: 2068
Algorithms: DL

Ref.: [117], 2018-1a
Location: Multi-national
Aim: Investigating whether an ML score, using only plaque stenosis and composition information from the 16 coronary segments, has better predictive accuracy compared to traditional CCTA-based risk scores
Variables: 16-segment coronary stenosis (0%, 1-24%, 25-49%, 50-69%, 70-99%, and 100%) and composition (calcified, mixed, and non-calcified plaque) derived from CCTA
Strengths: Data from the CONFIRM registry [110]
Findings: The ML-based approach showed a better AUC for event discrimination (0.771) vs. other scores (ranging from 0.685 to 0.701). Improved risk stratification resulted from down-classification of risk among patients who did not experience events (non-events)

Ref.: [27], 2017-1b, 1c
Location: United States
Aim: Evaluating the incremental benefit of ML-powered resting myocardial CTP over coronary CT stenosis for predicting ischemia
Variables: CCTA and FFR data
Strengths: Data from the DeFACTO study [119]
Limitations: Small sample size
Findings: Accuracy, sensitivity, specificity, PPV, and NPV of resting CTP were 68.3%, 52.7%, 84.6%, 78.2%, and 63.0%, respectively, for predicting ischemia. Addition of resting CTP improved discrimination (AUC = 0.75) and reclassification (net reclassification improvement: 0.52) of ischemia compared to CT stenosis alone (AUC = 0.68)
Notable facts: The addition of resting CTP analysis acquired from ML techniques may improve the predictive utility of significant ischemia over coronary stenosis

Clinical Trials in the AI-Era
Patient selection is a crucial process in clinical trials, and research has shown that predictive modeling for patient selection can increase trial success rates [121].
Developing a drug takes about ten years and more than two billion dollars, and yet only a fraction of drugs are approved by the Food and Drug Administration (FDA) [122]. The application of in silico clinical trials to suggest better patient selection criteria [123,124] can increase the efficiency and speed of drug development. For instance, AI can increase the efficacy of screening drug candidates through (a) analysis of calculated properties, (b) prediction models for therapeutic drug targets, and (c) identification of safety liabilities, all of which reduce the number of required in vivo or in vitro assays [125]. These efforts are also driven by innovative start-up companies seeking to reduce the cost and improve the success rate of trials.

AI at Physicians' Fingertips-Implication and Future Directions
Once validated and proven effective and safe, AI solutions have to be integrated into clinical workflows and demonstrated to improve outcomes. Only then will we have made the leap to providing evidence-based care in real time on the promises of big data and AI. However, taking the advances in AI to the bedside is not trivial. First, novel AI solutions must be rigorously assessed. FDA approval pathways for AI applications are laying the foundation for a regulatory evolution that allows faster integration of AI-enabled technologies into healthcare, and many clinical trials are designed to evaluate the impact of technological advances (such as new imaging devices [126]), much like drug trials. Second, carefully designed CDSS need to be developed and implemented in the EHR to bring AI-powered tools to physicians' fingertips. To support these goals, the American Medical Informatics Association (AMIA) published a roadmap [127] in 2007 for taking action on CDSS, defining three main pillars: (a) high adoption and effective use, (b) best knowledge available when needed, and (c) continuous improvement of knowledge and CDSS methods. In general, physicians have relatively positive attitudes toward the idea of CDSS [128,129], even though many challenges limit their effective use and adoption across health care systems, including low specificity [130,131], workflow interruptions [132][133][134], confusing interfaces [135,136], low confidence [137], awareness of the information [138], requirements for manual data entry [134,139], interference with physician autonomy [128,140], and lack of relevance [134]. Poorly designed and implemented CDSS can cause "alert fatigue" [128,[141][142][143]. The four principles for the design of CDSS interfaces (the four A's: All in one, At a glance, At hand, and Attention) [144] should also be followed.
Based on the unified theory of acceptance and use of technology [145], user expectations must be taken into consideration for a technology to be accepted. Several studies have likewise highlighted the importance of considering end-user needs and expectations early in the development process [139,143,146]. It is therefore imperative to involve CDSS end-users in design and implementation. It is also essential to consult EHR engineers and information technologists to understand the possibilities, limitations, and hardware/software requirements needed to use CDSS functionalities effectively. Careful planning requires mapping current workflows to understand how clinical phases and tasks are completed and how they may be affected by the addition of CDSS; in some instances, CDSS may need to be customized to suit different processes. Many physicians remain hesitant to accept CDSS, leading to suboptimal implementation [143]. Finally, despite federal investment to promote the adoption of health information technology, gaps remain in the use of CDSS among health systems [147], and we believe that lack of physician acceptance may be one of the main reasons. It is thus imperative for researchers across the translational spectrum to be involved in this AI revolution so that, together, we can reach the promise of precision health in a scalable and fair manner.

Health Disparity and Implicit Bias
Although recent scrutiny of AI-based software has raised concern about unintended effects of AI on social bias and inequity [148], there are also opportunities to leverage technology to reduce health disparities, care gaps [149,150], and unwanted variations [151], as well as to improve access. There are many examples of technology improving access to specialty care, especially in rural areas. However, AI-based studies have to be carefully designed, with explicit frameworks and balanced representation of participants, to mitigate some of these undesirable biases. For instance, deep transfer learning has proven effective in reducing healthcare disparities driven by data inequality [152]. The reader is referred to the work by Cirillo et al. [153] for a more detailed overview and recommendations on how to improve the global health and disease landscape and decrease inequalities through the use of technology.
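A minimal sketch of the transfer-learning idea referenced above [152]: pretrain a simple model on a large, well-represented cohort, then fine-tune it on a small cohort from an under-represented group whose feature-outcome relationship differs slightly. All data below are synthetic and illustrative; real applications use deep networks and clinical features, not this toy logistic regression.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def train(X, y, w=None, lr=0.1, epochs=200):
    """Logistic regression via stochastic gradient descent. Passing in
    pretrained weights `w` with a few epochs performs fine-tuning."""
    if w is None:
        w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - predict(w, xi)
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
    return w

# Large "source" cohort: outcome occurs when risk factors x1 + x2 exceed 1.
# Each row is [bias, x1, x2].
source_X = [[1.0, i / 10, j / 10] for i in range(11) for j in range(11)]
source_y = [1 if x[1] + x[2] > 1.0 else 0 for x in source_X]

# Small under-represented "target" cohort with a similar relationship.
target_X = [[1.0, 0.2, 0.3], [1.0, 0.3, 0.2],
            [1.0, 0.9, 0.8], [1.0, 0.8, 0.9]]
target_y = [0, 0, 1, 1]

w_pre = train(source_X, source_y)                      # pretrain on source
w_ft = train(target_X, target_y, w=w_pre, epochs=20)   # fine-tune on target
```

The fine-tuned model inherits what the source cohort taught it, so the small target cohort does not have to support learning from scratch; this is precisely how data inequality between groups can be partially compensated.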
There are also other challenges and opportunities when integrating AI tools into the clinical workflow; namely, technological, operational, and ethical challenges [61]. These issues are tightly intertwined with implicit bias and health disparity. Larger centers with access to robust infrastructure and a wide range of patient representation are better positioned to address implicit biases and meet these challenges, leading to better integration of AI-assistive tools into the clinical workflow. However, because it is impossible, in practical terms, to find solutions that guarantee the highest efficacy, efficiency, equity, and patient safety all at once, it is important and necessary to define acceptable thresholds by working meticulously with regulatory institutions to guide the development of AI tools and ensure best practices and compliance.

Conclusions
To summarize, we have seen that AI is omnipresent in both the cardiovascular and cerebrovascular fields, targeting different stages of patient management (Figure 2). In the cardiovascular field, however, studies have been larger, with more prospective and multi-center designs, whereas cerebrovascular studies have been mostly retrospective, single-center, and limited in patient representation and scale. By enhancing collaborative efforts, future cerebrovascular studies can extend follow-up periods to better understand long-term patient outcomes. Both the cardiovascular and cerebrovascular fields can also benefit from collaborative efforts to increase data diversity and patient representation and to integrate different data modalities, e.g., imaging biomarkers and genetic information.
Currently, the limitations of AI-based models are mostly centered on insufficient patient representation, unbalanced cohorts, biases introduced by cohort definitions or variable selection, and the exclusion of certain groups of patients. Machine learning models pick up biases from their training datasets; therefore, to reach new heights, it is of fundamental importance to increase patient representation and data density and to improve data for downstream modeling [154,155]. Finally, in terms of methodology, both fields are taking advantage of advances in machine learning frameworks and tools. Ultimately, the future of healthcare is an organic blend of technology, innovation, and human connection. It is not enough to provide faster, better care; we must leverage technology to ensure that care is fair and not biased toward any group or sub-population. We must understand our limitations and use technology to deliver an integrated solution that does not tether physicians to the screen and keyboard, and that gives physicians the tools they need to be better at what they do. Overall, there are a few areas in which AI can be of great value in both cardiovascular and cerebrovascular diseases: (1) disease diagnosis and patient monitoring, especially in high-impact fields; (2) detection of incidental findings for preventive care by scanning images and reports; (3) risk stratification for primary or secondary prevention; and (4) resource and workflow optimization by leveraging administrative data.

Funding: The study received no specific funding.

Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.

Conflicts of Interest: The authors declare no conflict of interest.