Cardiovascular Diseases in the Digital Health Era: A Translational Approach from the Lab to the Clinic

Translational science has emerged as the nexus between the scientific and clinical fields, enabling researchers to demonstrate that evidence-based research can bridge the gaps between the basic and clinical levels. This type of research has played a major role in the field of cardiovascular diseases, where the main objective has been to identify potential treatments at preclinical stages and transfer them into clinical practice. This transfer has been enhanced by the introduction of digital health solutions into both basic research and clinical scenarios. This review aimed to identify and summarize the most important translational advances of recent years in the cardiovascular field, together with the challenges that still remain in basic research, clinical scenarios, and regulatory agencies.


Introduction
Digital Health has disrupted the current healthcare landscape by establishing technology as one of the most useful and rapidly developing tools of the last decade [1]. These technological advances, based on computing platforms, connectivity, software, and sensors for healthcare-related uses, provide a more holistic view of patient health. Through new ways of accessing their data, patients are gaining more control over their health.
In this sense, modern medicine is constantly evolving, incorporating more technologies into analysis, diagnosis, and treatment decisions. This incorporation involves the direct collaboration of clinicians, engineers, and computational experts to improve data access, reduce costs, and increase overall efficacy. This synergy will ultimately increase the quality and personalization of medical care [1].
In recent years, the rise of Artificial Intelligence (AI) has helped to introduce prediction algorithms as assessment tools to assist clinicians in their diagnostic decisions. The inherent capabilities of AI allow researchers to collect and interpret data relationships in digitalized clinical records that can reveal information hidden from the clinician, with an invaluable impact in the fields of oncology [2], neurology [3], and cardiology [4], among others. These capabilities include the automation of tasks such as image processing [1], image segmentation [2], or prognosis prediction [3], and translate into more efficient processes with reduced time and costs.

Moreover, cardiovascular diseases (CVDs) are the leading cause of death globally, taking an estimated 17.9 million lives each year, corresponding to 32% of all global deaths [5,6]. CVDs are a group of disorders of the heart and blood vessels and include coronary artery disease, cerebrovascular disease, heart failure, valvular heart disease, and other conditions. More than four out of five CVD deaths are due to heart attacks and strokes, and one third of these deaths occur prematurely in people under 70 years of age [7].
Cardiology has been one of the medical fields where digital health applications are playing a crucial role, not only through the use of wearable technologies but also in relation to clinical applications. Among others, this field has benefited from the use of wireless ECG recordings, implantable loop recorders, cardiac implantable electronic devices with Bluetooth capability, and virtual- or mixed-reality tools in operating rooms [1].
In this review, we present the most important translational platforms at different levels that have shown major discoveries in the last decade.

The Present Breach among Basic Biomedical Research and Clinical Applications
Translational research aims to transfer scientific knowledge developed in early research stages into clinical practice across the system. The average time to complete such a transition is 17 years [8], suggesting that efforts need to be made to offset the high cost of medical research by improving policy interventions and translation. As a result, intermediate steps have been defined, as described in Figure 1. Based on this, three different types of translational research have been identified: (1) the development of treatments and interventions, (2) the evaluation of the efficacy and effectiveness of these treatments and interventions, and (3) the dissemination and implementation of research for system-wide change [9]. These three types of translational research have been previously described in the literature and highly depend on the stage at which research is being developed [10]. For example, the first one (T1) focuses on translating basic research findings from preclinical studies, animal research, and basic health services research into bedside applications, where controlled observational studies and phase I and II clinical trials occur.
The second block (T2) includes the translation from bedside to practice-based research, which mainly focuses on phase III and IV trials, observational studies, and survey research. This block is devoted to guideline development, meta-analyses, and systematic reviews, with the main objective of translating the information to patients, regulations, and practice.
Finally, the last block (T3) includes the translation from practice-based research to clinical practice across the system, including dissemination and implementation research.
In parallel with this novel classification of the intermediate steps, current legislation has tried to adapt to each of the requirements for protecting patients' well-being while, at the same time, introducing agility into the process [11].

Translational Research as a Highly Complex Structured Matrix
Current trends in preclinical trials include enormous efforts to redesign and evaluate new early-phase clinical trial designs [12]. This effort is focused on identifying biomarkers or endpoints that make it possible to determine the full potential and the possible secondary effects of these novel approaches [12,13].
Translational research offers a key vision for the development of new drugs and medical devices, although its inclusion in traditional workflows can be both challenging and complex, as it involves patients, researchers, and medical staff in various ways [14].
These approaches are also demanding at the infrastructural level and usually require cutting-edge research, sophisticated machines, complex imaging techniques, and biochemistry laboratories near hospitals and clinics, which are not always available or feasible.
Additionally, even when the infrastructure is ensured, the quality of the data and the capacity to perform and characterize tests remain challenging and can affect the transfer process from the laboratory to the clinic.
Finally, a clearly organized structure in which communication is ensured within a multidisciplinary research team is essential for the correct translation of the information. This will ensure good communication between basic scientists and clinicians, avoid duplication of efforts, and facilitate the sharing of key information to identify innovative biomarkers that can be translated into clinical practice.
In this regard, expert consensus indicates that several efforts need to be made, including [14]:
• To establish better preclinical models that allow researchers to rationally select target compounds and to better understand their mechanisms of action.
• To evaluate and incorporate clear endpoints at preclinical stages that allow for an optimal evaluation of target-based new drugs.
• To define current monitoring techniques that help to develop the tools, probes, and biological and imaging assays suitable for in vitro assessment in preclinical models.
• To conduct, in a rapid, coordinated manner, highly specialized, complex, early clinical trials with rigorous standards to deliver complex, detailed data for licensing purposes.
• To ensure a high-quality laboratory infrastructure and expertise with the capacity to provide biological readouts on clinical material in a timely manner.

Current Accomplishments in Cardiovascular Health
As previously described, CVDs have benefited greatly from translational approaches, which have already been discussed by several organizations and groups such as the Transnational Alliance for Regenerative Therapies in Cardiovascular Syndromes (TACTICS) [15] or the European Society of Cardiology (ESC) groups, including Digital Health applications for data acquisition and analysis [16]. Some of the most important applications are summarized in this section.

Translational Bioinformatics
Translational bioinformatics (TBI) is a well-established field within health informatics that has developed multiple branches of application, such as molecular bioinformatics, biostatistics, statistical genetics, and clinical informatics [17]. The main objective of this approach is to apply informatics to improve the acquisition and analysis of biomedical data, with an emphasis on omics (genomics, metagenomics, epigenomics, transcriptomics, proteomics, metabolomics, phenomics, exposomics, and microbiomics), thereby generating knowledge and medical tools that can be used by both scientists and clinicians for several purposes. Its endpoint is the improvement of human health by using computer-based information systems, including data mining techniques, to identify patterns or biomarkers that can be used for prediction. Bioinformatics allows us to better understand the molecular basis of cardiovascular diseases and to identify the genes, molecules, and molecular pathways involved. This is useful not only for identifying potential targets and testing new therapies, but also for predicting patient risk, outcome, and the most suitable treatments. All this clinical knowledge is later translated into new application workflows that identify patient clusters, interpreting biological information for treatment selection and health outcome prediction [4].
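To make this pattern-discovery step concrete, the sketch below trains a penalized logistic regression on a synthetic expression matrix and ranks candidate biomarkers by effect size. It is a minimal illustration of the kind of workflow described above, not a pipeline from any cited study; all data, dimensions, and thresholds are invented for the example.

```python
# Minimal sketch: ranking candidate expression biomarkers for CVD risk
# with a penalized linear model. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 500
X = rng.normal(size=(n_patients, n_genes))           # expression matrix (patients x genes)
true_effect = np.zeros(n_genes)
true_effect[:5] = 1.5                                 # 5 genes truly associated with outcome
y = (X @ true_effect + rng.normal(size=n_patients) > 0).astype(int)  # CVD outcome label

model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")

model.fit(X, y)
coefs = model.named_steps["logisticregression"].coef_.ravel()
top = np.argsort(np.abs(coefs))[::-1][:5]
print("top candidate biomarker indices:", top)        # genes with largest coefficients
```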
This revolution, especially at the genomic level, has been made possible by the development of next-generation sequencing (NGS) methods that apply different approaches to achieve high-throughput sequencing. These techniques include DNA-seq techniques, such as long-read and short-read sequencing methods, Chromatin Interaction Analysis by Paired-End Tag Sequencing (ChIA-PET), Chromatin Conformation Capture with Sequencing (Hi-C), Assay for Transposase-Accessible Chromatin with High-throughput Sequencing (ATAC-seq) [5], Chromatin Immunoprecipitation Sequencing (ChIP-seq), gene arrays, and RNA-seq techniques [6]. At the proteomic level, the use of two-dimensional polyacrylamide gel electrophoresis (2DGE), mass spectrometry, and protein arrays allows the massive exploration of protein differences associated with pathological conditions.
The impact of this approach in the CVD field is extensive, as most heart diseases have a certain genetic component and have benefited greatly from the democratization of data and the growth of public databases [18]. In this field, academic, governmental, and industrial initiatives have developed ways to share information at both national and international levels. One of the most important recent initiatives is the partnership between the American Heart Association (AHA) Institute for Precision Cardiovascular Medicine and Amazon Web Services, which provides a variety of grant funding opportunities for testing and refining AI and machine learning algorithms using healthcare system data, with the aim of promoting precision medicine [19].
From the academic perspective, several institutions, including the National Center for Biotechnology Information (NCBI), have contributed to the development of portals, analytics platforms, databases, and centralized repositories [7] focusing on cardiovascular diseases. These include the Knowledge Portal Framework focused on cardiovascular disease, within which HeartBioPortal [8] and the Cerebrovascular Disease Knowledge Portal [9] provide useful gene expression data. Among analytic platforms for the development of precision medicine, those of the American Heart Association [10] and DataSTAGE [7] stand out. Examples of databases and central repositories include, at the genetic level, the Heart Gene Database (HGDB) [11] and the Gene Expression Omnibus (GEO) [12]; at the proteomic level, the COPaKB [13], HeartBD2K [14], and ProteomeXchange [15], among others [16]; and, at the metabolomic level, MetabolomeXchange [17]. Others, such as CardioGenBase [20], In-Cardiome [21], the Cardio/Vascular Disease Database [18], and dbGaP [19], combine gene, functional, drug, and multi-omic studies. In this trend, initiatives such as the IMPaCT platform, driven by the Instituto de Salud Carlos III, aim to combine predictive medicine, data science, and genomic medicine as a transversal approach to developing precision medicine within the Spanish National Healthcare System.
In addition to academic efforts, government initiatives have also made national data available through large genomic sequencing programs. Among the most relevant are the NIH's All of Us Research Program, which includes many cardiovascular disease phenotypes, demographic information, and physical measurements, as well as whole-genome sequencing data [22]; the 100K Genomes Project in the UK [11]; and the 100K Wellness Pioneer Project in China.
Several companies have also contributed to scaling up the use of bioinformatic tools by commercializing tests and products in a standardized format. One example is Illumina, a biotechnology company that offers NGS platforms together with downstream analysis tools. Many other biotech startups and non-profit initiatives share this goal, and several have effectively integrated the workflows of Illumina and Qiagen [7].
Other projects have also contributed to this field by developing and nourishing population-wide multi-omics initiatives, such as the NHLBI Trans-Omics for Precision Medicine (TOPMed) program, which integrates whole-genome sequencing (WGS), metabolic profiles, proteomics, and RNA expression patterns, among others, with molecular, imaging, and clinical data for the study of atherosclerosis [18]. Simpler approaches have used bioinformatics analysis to identify genes associated with atherosclerosis [20] and for coronary heart disease prediction [23][24][25], as well as for myocardial infarction [26,27], dilated cardiomyopathy [28], high blood pressure [29][30][31], cardiovascular risk [32,33], and cardiomyopathy in general [34]. Important efforts have also revealed the role of the transcriptome [35][36][37], the epigenome [38,39], and the metabolome [40] in these cardiovascular diseases. Recent breakthroughs in sequencing, combined with better bioinformatics tools, have enabled researchers to analyze the composition of the microbiome and how these microbes are involved in CVD. In this context, recent initiatives are evaluating changes in the metagenome conditioned by diet and their impact on atherosclerosis [22].
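Many of the gene-identification studies cited above begin with a differential expression screen between cases and controls. The following is a minimal sketch of that step, using a per-gene Welch's t-test with Benjamini-Hochberg correction on synthetic data; production pipelines (e.g., limma or DESeq2) model counts, batch effects, and covariates far more carefully.

```python
# Minimal differential-expression sketch: Welch's t-test per gene between
# cases and controls, with Benjamini-Hochberg FDR correction.
# Expression values are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_genes = 1000
cases = rng.normal(0.0, 1.0, size=(n_genes, 30))      # 30 patient samples
controls = rng.normal(0.0, 1.0, size=(n_genes, 30))   # 30 control samples
cases[:20] += 1.0                                     # 20 genes shifted in cases

t, p = stats.ttest_ind(cases, controls, axis=1, equal_var=False)

# Benjamini-Hochberg: adjusted p = p * n / rank, made monotone from the top
order = np.argsort(p)
ranks = np.empty(n_genes)
ranks[order] = np.arange(1, n_genes + 1)
q_sorted = np.minimum.accumulate((p * n_genes / ranks)[order][::-1])[::-1]
q = np.empty(n_genes)
q[order] = np.clip(q_sorted, 0, 1)

print("genes with FDR < 0.05:", np.flatnonzero(q < 0.05)[:10])
```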
In summary, there is clear potential for transforming risk prediction, CVD diagnosis, treatment personalization, and the selection of intervention and dose. However, the integration of technology into the clinical care workflow is uneven among institutions [11]. Other limitations in this field include the ethical and legal issues that arise from the massive production and use of personal patient data, and the rapid evolution of the field, whose adaptation to clinical practice usually lags behind.

Computational Models for Personalized Medicine
In silico trials are based on computer simulations that contain specific information from the patient, enabling the personalization of the models. The term in silico indicates any use of computers in clinical trials, even if limited to the management of clinical information in a database.
This type of computation is currently being tested in the development or regulatory evaluation of medicinal products [23][24][25][26], devices, and interventions, and in the characterization and modeling of different diseases [27][28][29][30]. Although this approach presents major limitations that will be commented on later [31], combining the information extracted from the simulations with clinical information can increase the understanding of biological mechanisms [32,33] (Figure 2). These types of trials are currently being validated at the in vitro and in vivo levels, as they are expected to have major benefits over current animal trials.
In silico trials soften these biases by using accurate computer models of a specific treatment and its development, including patient characteristics to broaden the testing scenario to different patient groups and more information. In this sense, the idea of in silico trials is to create a virtual twin in the computer on which all possible treatments can be tested, enabling observation, through a computer simulation, of how well the candidate biomedical product performs and whether it produces the intended effect without inducing adverse effects.
Regarding CVDs, the methodology used to obtain the data can vary, from macro-anatomical 3D models of a patient obtained from computed tomography or magnetic resonance [41], where electro-mechanical [42] and hemodynamic models can be implemented to mimic the movement and conduction systems of the heart, to cell-based differential equations emulating every known ionic channel that may affect or modify the functioning of cardiac cells [43,44]. In this line, artificial intelligence brings new tools based on neural networks to predict clinical and anatomical features, e.g., the heart shape based on patients' MRI and clinical data (height, weight, sex, and heart rate, among others), or the implementation of variational autoencoders on data from patients with low ejection fraction to generate an understandable representation [45] of how the AI performs at its core. This is one known drawback of AI: how it operates and the decisions it makes are often hidden or lack direct interpretation, owing to the high complexity and huge dimensionality of the transformations performed on real and synthetic data.
Therefore, in silico clinical trials could help to apply the 3R principles (i.e., reduce, refine, and partially replace real clinical trials) by: (1) reducing trial size or studying, at the clinical level, specific groups identified as risk groups at the in silico level; (2) adding more detailed information obtained from this type of trial to better understand interactions with different groups and long-term effects that clinical trials cannot provide; (3) replacing the preclinical phase while preserving the clinical trial for legal requirements; (4) improving unsuccessful treatments or products by providing extra information, which increases innovation, decreases economic costs, and exponentially increases the understanding of biological processes; and (5) avoiding the use of animal models by directly including clinical data and personalized information from patients. This significantly decreases the overall costs associated with the development of treatments and has proven more effective at predicting the behavior of a drug or treatment in large-scale trials and at identifying secondary effects, therefore better screening the treatments that progress to phase III clinical trials.
As previously mentioned, the validation of these types of experiments depends heavily on experimental data from both in vitro and in vivo protocols. This information is used to nourish and calibrate the experimental equations that shape in silico models.
At the CVD level, these studies appear at different scales, including cellular studies for the pharmacological testing of new compounds [46], the evaluation of drug effects at the tissue level in combination with AI [47], and whole-organ simulations for the evaluation of different treatment strategies [3,48].
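As a deliberately simplified illustration of the cell-based differential equations mentioned above, the following sketch integrates the FitzHugh-Nagumo model with SciPy. It is a two-variable caricature of cardiac excitability, not one of the full ionic models used in actual in silico trials (those track dozens of state variables, one per channel or pump), but the workflow (parameterize, integrate, and inspect the resulting dynamics) is the same. All parameter values here are illustrative.

```python
# Minimal sketch of a cell-level "in silico" simulation: the FitzHugh-Nagumo
# model, a two-variable caricature of cardiac excitability.
import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, y, a=0.7, b=0.8, eps=0.08, I_stim=0.5):
    v, w = y                        # v: membrane potential proxy, w: recovery
    dv = v - v**3 / 3 - w + I_stim  # fast excitation variable
    dw = eps * (v + a - b * w)      # slow recovery variable
    return [dv, dw]

sol = solve_ivp(fhn, (0, 200), [-1.0, 1.0], max_step=0.5)
# "Personalization" would mean fitting a, b, eps, and I_stim to patient data.
print("simulated points:", sol.t.size, "| final state:", sol.y[:, -1])
```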

In Vitro Research and Translational In Vitro Diagnostics
In vitro research has always represented the first step in drug discovery at the preclinical level. Although these assays are essential for the development of new molecules, 95% of early-phase candidates are eliminated at later stages [34]. The main causes of elimination include deficient properties of the product (45%), lack of efficacy (28%), in vivo toxicity (11%), adverse effects (10%), and commercial reasons (6%).
In the CVD field, the major advance registered in the last decade has been the development and use of in vitro models based on induced pluripotent stem cells (iPSCs), which can later be differentiated into multiple cardiac cell types such as cardiomyocytes, cardiac fibroblasts, smooth muscle cells, and endothelial cells [35].
Due to the immature profile of these cells, and in the scope of a translational scenario, multiple strategies have been developed to mimic the properties of native tissue including extracellular matrix hydrogels [36,37], differentiation in 3D structures [38], prolonged culture times [39], hormone addition [40], substrate stiffness [41], biophysical stimulation [42], and in vivo maturation [43].
In vitro diagnostic tests are medical devices consisting of a reagent, calibrator, control material, kit of instruments and materials, apparatus, equipment, or system, used alone or in combination with others [34]. They are intended by the manufacturer to be used in vitro to study samples from the human body, including blood and tissues. These models have already been explored at the cardiovascular level, usually in combination with digital health tools that enable electronic data acquisition and further analysis using machine learning [44]. These approaches are usually intended to characterize a physiological or pathological condition [45], identify a possible congenital anomaly [46], determine safety and compatibility with potential medical device recipients [47,48], and monitor therapeutic measurements [49].
Although multiple applications have been described in this field, several drawbacks currently limit their generalized use; these are highly conditioned by both the type of sample and the development of the protocols, and include: (1) inappropriate patient sample or signal acquisition that leads to an inability to analyze the data; (2) difficulties with, or deterioration of, the sample during collection, handling, treatment, storage, or transport, especially for biological samples; and (3) the inability to afford large-scale in vitro testing or highly efficient computational systems that can analyze large amounts of data.
In addition, in vitro diagnostic companies are key players in this scenario, taking an active role in collaborating with laboratory professionals and in adapting and disseminating evidence-based recommendations on bio-specimen collection into research settings from preclinical to phase III studies.

Animal Models as a Translational Model for Research
Animal models represent the intermediate transfer point between in vitro cultures and clinical trials. These models are essential for the translation of drug findings from bench to bedside, and their critical evaluation regarding predictive validity is of major importance [50]. For this reason, current trends encourage researchers not only to analyze results from the lab to the clinic, but also to evaluate efficacy and efficiency in both directions, identifying clinical bedside findings that were not predicted by animal testing [51].
Furthermore, proper design, execution, and reporting of animal studies are essential to evaluate preclinical data and to ensure both reproducibility and translation to the clinic [52,53].
Finally, regulatory agencies play a key role in preclinical testing in animal models, as these models are considered an unquestionable source of data on the performance of a drug or product.
At the CVD level, animal models can be categorized into two groups: small mammalian models of heart disease and large animal models. The most common small animal models include mouse, rat, and rabbit models, with various applications such as myocardial infarction [54][55][56], cryoinjury models [57], hypertensive animals [58], and cardiac electrophysiology models [59]. Large animal models include dogs [60], pigs [61], and goats [62] for a number of different preclinical applications such as drug-induced arrhythmia studies, heart failure, or myocardial ischemia [62].
These models present some disadvantages, such as the limited translation of biological products into the clinical scenario and differences between the preclinical models and the target population of patients [63,64]. First, there are significant differences in the cardiac regenerative capacities of rodents and humans, so results obtained in preclinical models may not necessarily translate to humans, especially if the products used as therapeutic products are of murine origin [15,65]. Secondly, animals included in preclinical studies are typically young and healthy, whereas the target patient population is generally older and presents multiple comorbidities, which further limits the extrapolation of preclinical results.

Signal Acquisition and Processing Automation Using Artificial Intelligence
Recently, AI has had a major impact on the medical sciences [68] by automating tasks and predicting outcomes with unprecedented performance in real-time applications [69][70][71]. These advances are occurring at a fast pace in research laboratories implementing algorithms that need to learn, or be trained, to achieve high accuracy. Training these algorithms requires high-quality data with a sufficient number of samples for the algorithm to learn to predict. Usually, the more complex the task, the more data the algorithm will need to perform accurately. This trend has been described previously [49], for example, by comparing biomarker prediction and automatic segmentation: biomarker prediction, usually implemented by regression analysis, requires significantly fewer samples than more complex tasks such as automatic segmentation, implemented by neural networks. As the amount of data needed for training tends to grow steeply with task complexity, current approaches consider the use of synthetic data from in silico simulations, or data produced in the lab, to increase the number of training samples. Another approach relies on transfer learning, a process by which an algorithm pretrained in previous experiments is used to initialize a new one, significantly reducing the number of samples needed for the final task [50]. This approach has already been implemented in the cardiovascular field, showing great performance even with complex algorithms such as neural networks [51,52].
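A minimal sketch of the transfer-learning idea follows: a small 1-D convolutional network stands in for a model pretrained on a large ECG corpus, its feature extractor is frozen, and only a new classification head is fine-tuned on the local task. The architecture, tensor shapes, and weight file are hypothetical placeholders, not a published cardiovascular model.

```python
# Minimal transfer-learning sketch: freeze a (hypothetically) pretrained
# backbone and fine-tune only a new task-specific head on local ECG data.
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(            # 1-D conv feature extractor
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten())
        self.head = nn.Linear(16 * 8, n_classes)  # task-specific classifier

    def forward(self, x):
        return self.head(self.features(x))

model = ECGNet(n_classes=5)
# model.load_state_dict(torch.load("pretrained_ecg.pt"))  # hypothetical weights
for p in model.features.parameters():
    p.requires_grad = False                       # freeze pretrained backbone
model.head = nn.Linear(16 * 8, 2)                 # new head for the local task

opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
x = torch.randn(4, 1, 1000)                       # 4 ECG strips, 1000 samples each
loss = nn.functional.cross_entropy(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward()
opt.step()                                        # one fine-tuning step on the head
print("fine-tuning step done, loss:", float(loss))
```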
As with other new technologies that have been translated from initial research to widespread clinical practice, it is important to recognize that there will be novel challenges for the clinical deployment of AI tools. Understanding the nature of these challenges, potential mitigation strategies, and a well-conceived research roadmap that ensures that advances in AI algorithm development are efficiently translated to clinical practice are of paramount importance [72]. Much of the work in AI is being done at single institutions, with single-center data for training, testing, and validation of the algorithms, lacking the heterogeneity of global data and the effect of population-based factors such as ethnicity, sex, or diet, among others. A recent review of studies evaluating the performance of AI algorithms for the diagnostic analysis of medical images found that only 6% of the 516 reviewed studies performed external validation [73], and, so far, there is limited research demonstrating the generalizability of these algorithms to widespread clinical practice.
In the CVD field, AI has played a major role in recent years [74] by enabling remote data collection [75], the analysis of large populations to identify profiles or groups that respond better to a given treatment [76,77], arrhythmia classification [78,79], and the identification of potential biomarkers for prognostication [80].
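As a small example of the signal-processing layer underneath such arrhythmia classifiers, the sketch below detects R-peaks on a synthetic ECG-like trace and derives basic RR-interval features. The sampling rate, thresholds, and the trace itself are assumptions for illustration; real pipelines add filtering and artifact rejection before feature extraction.

```python
# Minimal sketch: R-peak detection on a synthetic ECG-like trace and
# RR-interval features, a common first step before arrhythmia classification.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
fs = 250                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
# Synthetic stand-in for an ECG: narrow periodic spikes at ~72 bpm plus noise
ecg = np.where((t % (60 / 72)) < 0.02, 1.0, 0.0) + 0.05 * rng.standard_normal(t.size)

peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))  # refractory gap
rr = np.diff(peaks) / fs                   # RR intervals in seconds
print(f"beats: {peaks.size}, mean HR: {60 / rr.mean():.0f} bpm, "
      f"RR std (a simple variability feature): {rr.std():.3f} s")
```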

Economic Issues and Legal Regulations
Among the most important advantages of combining CVD translational approaches and digital health is the reduction in average research and development (R&D) costs for new medicines, where clinical trials account for nearly 50% of the investment [68].
Regarding legal regulations, data privacy, data quality, the interpretability of IT systems, and intellectual property (IP) rights are at the center of the debate. In addition, software that qualifies as a medical device must follow the provisions relating to medical devices, which vary depending on the type and application of the device. EU Regulation 2017/745 is fully applicable, whereas Regulation 2017/746 will remain in a transitional situation until 26 May 2022. The European Commission has issued guidelines on the classification of medical devices (MEDDEV Guidelines) and, in particular, on the qualification and classification of standalone software used in healthcare. Digital solutions to be adopted by national health services are examined to ensure that the required security standards for public administration are met.
AI in healthcare is mainly regulated by the EU Medical Devices Regulation 2017/745 (MDR) and the in vitro Diagnostic Medical Devices Regulation 2017/746 (IVDR), in combination with the GDPR. Medical devices are often either developed using AI or have an AI component. The GDPR applies because the application of AI implies the collection or processing of data, specifically health data, which is considered special category data subject to strict privacy and data protection obligations.
Moreover, the Ethics Guidelines for Trustworthy AI, published by the European Commission in 2019 [81], highlighted that AI applications should not only comply with the law but must also adhere to ethical principles and ensure that their implementation avoids unintended harm.
Despite all the efforts made to adapt rapidly to a constantly changing scenario, some key areas of enforcement for digital health still need to be addressed: (1) regulatory authorities' actions against digital health and healthcare IT products that meet the definition of medical devices but have not obtained the CE mark; and (2) the European Data Protection Agency's actions in the event of breaches of data protection legislation and data security.

Current Trends and Future Perspectives
Although translational research has experienced massive advances in combination with digital health tools, some improvements should be addressed in the upcoming years, as summarized in Figure 3. In particular, standards should be developed for data curation, distribution, sharing, and management to ensure a proper translation from preclinical to clinical scenarios and reproducibility of the results [82]. The most important factors behind low reproducibility include a lack of access to raw data, misidentified or cross-contaminated samples, the inability to properly manage complex datasets, low-quality research practices and experimental design, and a competitive culture that rewards novel findings and undervalues negative results [83].
Action to overcome these challenges has already begun, including different potential improvements in different scenarios. Investigators, institutions, and journals are now demanding the application of good scientific methods and data accessibility from the early stages of the research workflow [83].
In addition, policy must face and overcome its own "valleys of death", which are mainly present in the T1 and T2 translational phases. Improvements to regulation in translational science include new legislation and regulations, guidance for professionals, standards and evidence-based guidelines, and commercialization and innovation strategies [84]. Finally, at the clinical level, it is necessary to clarify and redesign the concept of evidence-based healthcare to facilitate understanding, analysis, improvement, and/or replacement of the process as it is currently conceived, purported, and practiced [85]. Among the most important translational science priorities are the predictive efficacy of preclinical trials, new therapeutic modalities to reach currently inaccessible diseases or pathologies, new methodologies to increase efficiency in preclinical development, the identification of new biomarkers for predicting human clinical response, and clinical trial redesigns to facilitate rapid incorporation into clinical practice [13].

Conclusions
Digital Health has greatly disrupted the cardiovascular panorama by providing tools that help to identify biomarkers that can be transferred from preclinical stages into clinical scenarios. In this sense, several advances have been made in bioinformatics, in vitro research, and preclinical animal models to identify and standardize potential biomarkers that can contribute to the successful translation of results from these scenarios to clinical practice. This translation has been greatly enhanced by the development of new analytical tools, including AI algorithms, that process and extract patterns in data from preclinical scenarios to clinical practice. However, substantial efforts are still required to continue this transition and to validate the proposed standardization protocols. This evolution is expected to continue in the upcoming years, leading to the development of new personalized treatments.

Conflicts of Interest:
Fernández-Avilés and Atienza have equity from Corify Care SL. Atienza and Arenal served on the advisory board of Medtronic. None of the other authors has any conflict of interest, financial or otherwise, to disclose.