Article

Rise of Clinical Studies in the Field of Machine Learning: A Review of Data Registered in ClinicalTrials.gov

by Claus Zippel and Sabine Bohnet-Joschko *

Chair of Management and Innovation in Health Care, Faculty of Management, Economics and Society, Witten/Herdecke University, 58448 Witten, Germany

* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2021, 18(10), 5072; https://doi.org/10.3390/ijerph18105072
Submission received: 7 April 2021 / Revised: 6 May 2021 / Accepted: 7 May 2021 / Published: 11 May 2021
(This article belongs to the Special Issue Information Technology's Role in Global Healthcare Systems)

Abstract

Although advances in machine-learning healthcare applications promise great potential for innovative medical care, few data are available on the translational status of these new technologies. We aimed to provide a comprehensive characterization of the development and status quo of clinical studies in the field of machine learning. For this purpose, we performed a registry-based analysis of machine-learning-related studies that were published and first available in the ClinicalTrials.gov database until 2020, using the database’s study classification. In total, n = 358 eligible studies could be included in the analysis. Of these, 82% were initiated by academic institutions/university (hospitals) and 18% by industry sponsors. A total of 96% were national and 4% international. About half of the studies (47%) had at least one recruiting location in a country in North America, followed by Europe (37%) and Asia (15%). Most of the studies reported were initiated in the medical field of imaging (12%), followed by cardiology, psychiatry, anesthesia/intensive care medicine (all 11%) and neurology (10%). Although the majority of the clinical studies were still initiated in an academic research context, the first industry-financed projects on machine-learning-based algorithms are becoming visible. The number of clinical studies with machine-learning-related applications and the variety of medical challenges addressed serve to indicate their increasing importance in future clinical care. Finally, they also set a time frame for the adjustment of medical device-related regulation and governance.

1. Introduction

1.1. Background

It typically takes more than a decade for medical innovations to move from research and development through market approval into daily clinical routine [1,2,3]. In this translation phase, a multitude of challenges has to be overcome before a device can successfully be brought to market, from patient recruitment, data consolidation and fragmented infrastructures to regulatory hurdles and (start-up) financing of research costs [4,5]. The literature, however, contains hardly any data on the specific translation process of medical–digital applications, which are increasingly being developed and promise great benefits for health prevention, diagnostics, and therapy [6,7,8,9,10].

1.2. Research Motivation and Objective

Against this background, our aim was to explore the development and current translation status of medical–digital applications in the field of machine learning (ML), a sub-area of artificial intelligence in which computer algorithms and statistical models are trained on large datasets to independently detect patterns and make predictions in a self-learning manner [11,12,13,14,15]. We focused on ML because a wide range of ML-based approaches and innovative developments for health care is already reported in the literature, from image diagnostics and processing [16,17,18,19,20] and personalized medicine and genomics [21,22,23] to clinical data analysis for decision support and training in surgery, therapy planning or patient management [24,25,26,27,28].
In view of the research question, we decided to analyze study register data as they offer a glance into the research pipeline of universities, university clinics and research institutions as well as pharmaceutical, medical device, and biotech companies, and thus, provide first insights into the clinical translation process of ML-related applications and software. This registry-based approach also allows us to cluster and identify fields with increased research and investment that might be of clinical significance in the next decade. Considering legislative delays, our results may support health decision- and policymakers struggling with challenges in the regulation and governance of ML-applications [29,30,31].

2. Materials and Methods

2.1. Data Acquisition and Processing

For our study, we used datasets from ClinicalTrials.gov, one of the most comprehensive databases of clinical studies worldwide, with over 360,000 planned, ongoing and completed clinical studies published at the time of access [32,33,34,35]. The register is freely accessible at https://clinicaltrials.gov [36]. For each study, (i) a given set of study characteristics is compulsory, and (ii) study-specific details are requested in free text fields, such as the title or an individual short description. The ClinicalTrials.gov database and this methodological approach have frequently been used in other research to characterize study populations and trends in clinical care and research [37,38], for example in the areas of medical imaging [39,40,41], rare diseases [42] or oncology [43,44].
In view of the research question, the advanced search function was used to filter for register data records in which “Machine Learning” (a MeSH term introduced in 2016 [12]) had been entered in the report form and which were published by the end of 2020 (search term: “Machine Learning” | First posted on or before 31 December 2020). The dataset was retrieved on 7 January 2021 and exported in CSV file format [36]. In a second step, the authors scanned the dataset and included all study entries that clearly focused on the use or testing of ML-based algorithms, approaches or applications in a clinical setting. Entries on clinical trials that, according to the reporting party, were “withdrawn” or “terminated”, or that clearly did not primarily focus on the use of ML-related approaches or applications in clinical care, were excluded. To filter and subgroup the studies in detail, the authors scanned the free text information of the study entries. Figure 1 shows the selection process for the study dataset considered in the register data analysis in the form of a flowchart.
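The screening step described above can be sketched in a few lines of pandas. This is a minimal illustration, not the authors' actual workflow (they used Excel); the column names and rows below are invented for the example, loosely mirroring a ClinicalTrials.gov CSV export.

```python
import io
import pandas as pd

# Hypothetical excerpt of a ClinicalTrials.gov CSV export; column names
# and rows are illustrative, not real register data.
csv_export = io.StringIO(
    "NCT Number,Status,Title\n"
    "NCT00000001,Recruiting,ML-based sepsis prediction\n"
    "NCT00000002,Withdrawn,Deep learning for retinal imaging\n"
    "NCT00000003,Completed,Machine learning triage support\n"
    "NCT00000004,Terminated,ML-assisted ECG analysis\n"
)

studies = pd.read_csv(csv_export)

# Screening step: drop entries reported as "Withdrawn" or "Terminated",
# mirroring the exclusion criteria described in the text.
eligible = studies[~studies["Status"].isin(["Withdrawn", "Terminated"])]

print(len(eligible))  # number of study entries retained
```

The subsequent manual review of free-text fields (to confirm a clinical ML focus) cannot, of course, be automated this way and was performed by the authors by hand.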

2.2. Data Evaluation and Analysis

In order to provide an overview of the development and status quo of ML-related software approaches and applications in the clinical setting, the study entries were sorted in ascending order according to the date on which the study record was first available on ClinicalTrials.gov. Furthermore, common standardized study parameters, such as study type, recruitment status, age group or funding source, were evaluated [45]. To achieve a more in-depth characterization of the dataset, the authors scanned, evaluated and subcategorized the study entries according to further parameters, such as recruiting country, academic/industry sponsor or the medical specialty/field initiating the clinical study. Further free text information, such as intervention arms, inclusion criteria or end points of the trials, was not part of the study.
In view of the explorative nature of the study objective, we evaluated the registry dataset descriptively. One-dimensional frequency distributions (absolute, relative) were determined for the analyzed study characteristics. The development of the published studies per year over time was shown graphically using a bar chart, and the description of all other parameters was summarized in tables. The quantitative acquisition, processing and statistical evaluation of the dataset was carried out, using Microsoft Excel® software for Microsoft Windows®.
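The one-dimensional frequency distributions described above amount to simple absolute and relative counts per study characteristic. As a sketch (the authors used Microsoft Excel; the column name and values here are invented for illustration), the same evaluation in pandas might look like this:

```python
import pandas as pd

# Illustrative study records; "sponsor_type" stands in for one of the
# analyzed characteristics (the real dataset has n = 358 entries).
studies = pd.DataFrame({
    "sponsor_type": ["academic", "academic", "industry", "academic", "industry"],
})

# Absolute and relative one-dimensional frequency distributions.
absolute = studies["sponsor_type"].value_counts()
relative = studies["sponsor_type"].value_counts(normalize=True).round(2)

freq_table = pd.DataFrame({"n": absolute, "share": relative})
print(freq_table)
```

Each analyzed characteristic (study type, recruitment status, age group, funding source, etc.) yields one such table.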

3. Results

3.1. Registration of ML-Related Studies over Time

For our study, n = 358 study entries in the field of ML were included (see Figure 1). Sorted by year of first publication in the ClinicalTrials.gov register, a continuous rise in ML-related study entries could be seen since 2015, with a particularly significant increase between 2019 and 2020, from n = 89 to n = 149 posted studies (see Figure 2).

3.2. Medical Field of Application

The registered studies focused on a broad spectrum of different topics from a wide range of medical specialties. The majority of the posted studies in the field of machine learning was initiated by experts from the field of imaging (diagnostic radiology, nuclear medicine, radiation oncology; 12%), followed by cardiology, psychiatry, anesthesia/intensive care medicine (all 11%), neurology (10%), medical oncology (8%) and infectious disease medicine (6%) (see Figure 3). The latter mainly included studies that were published in 2020 on COVID-19-related issues.

3.3. Patient Recruitment and Study Organization

Of the listed clinical studies, 55% were open and 45% closed for patient enrollment. A total of 27% of the studies had already completed the recruitment phase. The vast majority of studies (98%) did not yet have any results. A total of 80% of the studies in the dataset were single-center and 13% multi-center studies; 7% could not be classified because of missing information (see, for this and the following, Table 1). Of the studies, 96% were national and 4% international. By far the most studies had at least one recruiting location in the U.S.A. (40%), followed by China (9%), the United Kingdom (8%), Canada (6%), France (5%), and Switzerland and Germany (each 4%). Across all study entries, and with a view to the major global regulatory regions, most of the published studies recruited patients in a country in North America (47%), followed by Europe (37%) and Asia (15%; other 6%).
In 82% of the studies, a university (hospital) and/or research institution was named as the organization or person responsible for the study (the so-called “lead sponsor”), and in 18%, an industrial company. The majority of trials (88%) were (co-)funded by individuals, universities or organizations themselves, 24% were (co-)funded by industry and 5% had public (government) sponsorship.

3.4. Study Type and Design

Of the n = 358 clinical studies categorized, around two thirds (64%) were reported as observational and around one third (36%) as interventional studies (see, for this and the following, Table 2). Among the observational studies, most were designed as prospective cohort studies. Most interventional studies were open-label/non-masked and single-armed. Over 90% of the studies planned to enroll (elderly) patients of both genders.

4. Discussion and Conclusions

Recent improvements and innovative approaches in the field of artificial intelligence promise high potential for the diagnosis and treatment of patients [46,47,48,49]. The sub-area of ML, in which self-learning algorithms (such as convolutional neural networks, random forests or support vector machines [50,51,52]) are trained on large datasets and used to make predictions independently when exposed to new data, is advancing particularly fast [11,13,14,17,19,20]. More and more research shows that newly developed algorithms can perform specialized tasks as well as experienced health professionals can, or can increase professionals' efficiency and performance in daily care [53,54,55,56]. A crucial factor for the successful development of ML-based software and assistance systems is, besides medical and technological expertise, the testing and use of these applications in daily clinical routine [57,58]. With this in mind, our goal was to find out more about the recent development and status of the translation of ML-related software and applications into the clinical setting. The translation and market approval of ML-based algorithms represent a major challenge in terms of legislation and regulation. The register data show how dynamically this area is developing across medical disciplines; as a result, questions about governance and clinical testing will have to be answered in the near future (cf., for example, [29,30,31]). In the following sections, we summarize the main results of the registry data analysis on ML-related clinical studies, discuss them with reference to the regulatory environment and point out the methodological limitations of the study.

4.1. Studies in the Field of ML

The study data show that the number of ML-related studies in ClinicalTrials.gov has increased continuously from year to year since 2015, with a particular increase between 2019 and 2020 (see Figure 2). From a methodological point of view, it should be noted that the MeSH term “machine learning”, which was crucial for the study search in the registry database, was only introduced in 2016 by the U.S. National Library of Medicine [12]. This could have influenced the search and selection procedure (especially for the period before 2015), as this MeSH term was probably only systematically reported and checked as a quality control review criterion for clinical study registration from that point in time [59]. For the last few years, however, a visible increase in the number of published studies can be determined. This could be an indicator of the growing potential associated with the use of ML-related software/algorithms in medical care and research.
In addition, the majority of the analyzed studies in the field of ML were initiated and led by (university) hospitals or academic/research institutions (82%) and were (co-)financed from university (88%) or public/government funds (5%) (see Table 1). Among the academic institutions, most of the registered studies were reported by the Mayo Clinic (U.S.), Maastricht University Medical Center (NL), Sun Yat-Sen University (CN) and the University of California (U.S.). In this context, the authors assume that the number and proportion of academically initiated ML-related studies is likely to be underestimated here, since the sponsor or PI does not in all cases have to register an academic study in a database such as ClinicalTrials.gov. This is especially the case for studies in the preclinical development stage or if only retrospective data are used. In comparison, fewer studies were initiated (18%) or (co-)financed (24%) by an industry sponsor. The proportion of studies with an industrial study sponsor is (still) relatively low compared to other publications on ClinicalTrials.gov study data. For example, a cross-sectional analysis by Ross et al., published in 2009, showed a proportion of 40% of studies with industry sponsors [38], and a study by Bell and Smith from 2014 on over 24 thousand clinical studies on rare and non-rare conditions showed a proportion of more than 30% [42].
Among the industry sponsors were several comparatively small companies and start-ups with a focus on the development of algorithms in medicine (e.g., Dascena® and Eko Devices®). In general, it can therefore be assumed that the ML-related approaches reported were still mainly initiated and used in an academic/research context but could gradually be transferred to clinical translation and early clinical study development phases with increasing support from the industry, which sees investment potentials in this area.
Moreover, the analyzed studies were initiated from a variety of medical fields and disciplines (Figure 3). The dataset showed that the ML-related approaches in the clinical studies used different types of training data, including image data (e.g., in radiomics studies), sensor data (e.g., ECG signals), video data, text data and audio data (e.g., monitor audio signals). Furthermore, the registered studies used a wide range of types of ML algorithms, such as supervised, unsupervised or reinforcement learning. To illustrate this heterogeneity, we show selected study approaches from different medical application areas and fields, focusing on advanced clinical studies for which the recruitment phase was reported as completed and at least one scientific publication was available.
  • Blomberg et al. analyzed whether an ML-based algorithm could recognize out-of-hospital cardiac arrests from audio files of calls to the emergency medical dispatch center (NCT04219306; [60]);
  • Jaroszewski et al. evaluated an ML-driven risk assessment and intervention platform to increase the use of psychiatric crisis services (NCT03633825; [61]);
  • Mohr et al. evaluated and compared a smartphone intervention for depression and anxiety that uses ML to optimize treatment for participants (NCT02801877; [62]);
  • Nieman et al. conducted a study to investigate the diagnostic performance of ML-based, coronary computed tomography angiography-derived fractional flow reserve (NCT02805621; [63,64,65]);
  • Putcha et al. performed a study on an ML-based approach to discover signatures in cell-free DNA to potentially improve the detection of colorectal cancer (NCT03688906; [66,67]).
In summary, the results of the registry data analysis show that the registered studies in the field of ML were very heterogeneous, both from an organizational and a study design perspective. Against this background, it would make sense to carry out further (especially multivariate) sub-evaluations of the dataset for selected study groups, for example large-cohort radiomics studies. Finally, it should be noted that the imaging disciplines in particular are involved in many studies, both as the study-initiating discipline and as a clinical partner, for example for CT, MRI or PET scans. Since the register analysis considered only the study-initiating department, it can be assumed that the proportion of ML-related studies in which imaging experts are centrally involved is significantly higher than the 12% shown in Figure 3.

4.2. Regulatory Framework and Aspects

With regard to the dataset, it is essential to point out from a regulatory perspective that the posted studies in the field of ML address software that, in many cases, functions or is used directly in connection with a medical device. This is of central importance because software is considered a medical product in many regulatory areas, such as the U.S. or the European Union [68], and is therefore subject to the associated regulatory requirements, such as conformity assessment, registration, clinical evaluation or post-market surveillance [69]. In the EU, for example, software is considered a medical device according to the European Medical Device Regulation (MDR), which will come into force in May 2021, “when specifically intended by the manufacturer to be used for one or more […] medical purposes […], independent of the software’s location or the type of interconnection between the software and a device” [70]. The risk classification is based on the diagnostic and therapeutic intention of the software, from risk class I (lowest) to III (highest).
In this context, it should be pointed out that for ML-related software, primarily the general regulatory requirements for software apply, and there are hardly any laws or harmonized standards for the specific use of ML software and applications in healthcare. With this in mind, it is of great interest that the U.S. Food and Drug Administration (FDA) has published a discussion paper on the “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan”, which is continuously updated and currently makes proposals in the following areas:
  • Tailored regulatory framework for AI/ML-based SaMD;
  • Good machine-learning practice;
  • Patient-centered approach, incorporating transparency to users;
  • Regulatory science methods related to algorithm bias and robustness;
  • Real-world performance [71].
In view of the increasing number of clinical studies in the field of ML (Figure 2), it will be interesting to see how the regulatory framework worldwide will adapt to AI- and ML-related software and applications and their specific characteristics. Aspects that have not yet been clarified, such as changes in ML-related software over time due to changing datasets, should be of particular interest. In the literature, suggestions are increasingly being put forward and discussed [30,72], both on general regulatory aspects [29,73,74] and on device- or subject-specific features, e.g., in medical imaging [75,76].
In addition, it becomes clear how important it will be in the future to pool patient data for clinical studies in the field of machine learning across multiple locations. The reason for this is that access to large amounts of data will be essential for the further development of the approaches in prospective clinical studies. An example of how this could work in view of strict data protection requirements is shown by the Joint Imaging Platform for Federated Clinical Data Analytics for the application of medical algorithms across study sites in the field of medical imaging [77].

4.3. Methodological Notes

The evaluation of registry data from ClinicalTrials.gov enables a broad and detailed analysis of a multitude of systematically collected, high-quality, study-specific entries over a period of time. However, a number of limitations of this study approach need to be noted. Firstly, a method-inherent limitation is that the register dataset represents only a subset of all ML-related studies initiated around the globe, since in some cases the PI or sponsor does not have to register the study (see Section 4.1) or may choose a different registry [78,79,80]. In this context, it should also be pointed out that data and information relating specifically to research in the field of machine learning are also published in other digital archives and research platforms, such as those of the Association for Computing Machinery (ACM) or the Institute of Electrical and Electronics Engineers (IEEE). This illustrates the importance of harmonizing the fairly large number of registries and archives to prospectively create (also linguistically) more uniform data, a goal pursued by projects such as the “Research Data Alliance” or the “Open Data Institute”. Secondly, the registry search only took into account register entries in which the search term “machine learning” was explicitly specified in the study title or free text. Since the use of study-specific MeSH terms when registering studies in ClinicalTrials.gov is recommended but not mandatory, it can be assumed that studies that used other MeSH terms or were registered under taxonomically related terms were not taken into account. The actual number of published ML-related clinical studies, and thus the clinical development in this field, is therefore probably underestimated.
Thirdly, common limitations of clinical registry (meta-)data analyses apply, which can lead to inaccuracies and inconsistencies and thus impair data quality. This includes, in particular, incorrectly completed or unanswered sections of the registry form. In addition, the study text information (which varies in scope and content) can be interpreted differently, which could reduce the validity of the results [37,39,40,41,42,43,44,45]. Fourthly, the subgrouping of studies into medical specialties was not always clear, for example when experts from two or more medical specialties were involved. To address this, the medical specialty of the PI responsible for the trial and named in the study entry was used for subgrouping in case of doubt. As a result, medical specialties that are often involved in ML-related studies but tend to initiate fewer studies as the lead specialty were probably undercounted (e.g., (neuro-)pathology [81]). Fifthly, since ClinicalTrials.gov is an American registry, a disproportionately high number of the registered clinical trials can be assumed to be conducted in North America. Our results strongly support this hypothesis, as the vast majority of the included studies recruited in the U.S.A. and Canada (see Table 1). This may lead to distortions in comparison to the status and characteristics of ML-related trials in other regions, such as Europe or Asia.
In view of these limitations, the present study cannot provide a complete, detailed picture of the status quo. However, since ClinicalTrials.gov is by far the biggest and most renowned registry for clinical trials, the authors conclude that this approach allows a good first overview of the current status of the clinical development and translation of ML-based approaches and applications in health care. This could provide an impetus for decisionmakers in healthcare facilities and policy as well as for regulatory discussions.

5. Summary for Decisionmakers

  • In recent years, an increasing number of ML algorithms have been developed for the health care sector that offer tremendous potential for the improvement of medical diagnostics and treatment. With a quantitative analysis of register data, the present study aims to give an overview of the recent development and current status of clinical studies in the field of ML.
  • Based on an analysis of data from the registry platform ClinicalTrials.gov, we show that the number of registered clinical studies in the field of ML has continuously increased from year to year since 2015, with a particularly significant increase in the last two years.
  • The studies analyzed were initiated by a variety of medical specialties, addressed a wide range of medical issues and used different types of data.
  • Although academic institutions and (university) hospitals initiated most studies, more and more ML-related algorithms are finding their way into clinical translation with increasing industry funding.
  • The increase in the number of studies analyzed shows how important it is to further develop current medical device regulations, specifically in view of the ML-based software product category. The recommendations recently presented by the FDA can provide an important impetus for this.
  • Future research with trial registry data might address sub-evaluations on individual study groups.

Author Contributions

Both authors developed the idea of the paper/research method, conducted a search of the literature and wrote, reviewed, edited and formatted the draft of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The APC is paid for under the ATLAS project which is funded by the Ministry of Economic Affairs, Innovation, Digitalization and Energy of North Rhine-Westphalia (funding code: ITG-1-1).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

For the study, data from ClinicalTrials.gov were used [36]. The registry for clinical studies is available online at https://clinicaltrials.gov (accessed on 7 January 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Grant, J.; Green, L.; Mason, B. Basic research and health: A reassessment of the scientific basis for the support of biomedical science. Res. Eval. 2003, 12, 217–224. [Google Scholar] [CrossRef]
  2. Green, L.W.; Ottoson, J.M.; García, C.; Hiatt, R.A. Diffusion theory and knowledge dissemination, utilization, and integration in public health. Annu. Rev. Public Health 2009, 30, 151–174. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Morris, Z.S.; Wooding, S.; Grant, J. The answer is 17 years, what is the question: Understanding time lags in translational research. J. R. Soc. Med. 2011, 104, 510–520. [Google Scholar] [CrossRef] [PubMed]
  4. Contopoulos-Ioannidis, D.G.; Alexiou, G.A.; Gouvias, T.C.; Ioannidis, J.P. Medicine. Life cycle of translational research for medical interventions. Science 2008, 321, 1298–1299. [Google Scholar] [CrossRef]
  5. Trochim, W.; Kane, C.; Graham, M.J.; Pincus, H.A. Evaluating translational research: A process marker model. Clin. Transl. Sci. 2011, 4, 153–162. [Google Scholar] [CrossRef]
  6. Murdoch, T.B.; Detsky, A.S. The Inevitable Application of Big Data to Health Care. JAMA 2013, 309, 1351–1352. [Google Scholar] [CrossRef]
  7. Raghupathi, W.; Raghupathi, V. Big data analytics in healthcare: Promise and potential. Health Inf. Sci. Syst. 2014, 2, 3. [Google Scholar] [CrossRef]
  8. Wang, Y.; Hajli, N. Exploring the path to big data analytics success in healthcare. J. Bus. Res. 2017, 70, 287–299. [Google Scholar] [CrossRef] [Green Version]
  9. Mehta, N.; Pandit, A. Concurrence of big data analytics and healthcare: A systematic review. Int. J. Med. Inform. 2018, 114, 57–65. [Google Scholar] [CrossRef]
  10. Ngiam, K.Y.; Khor, I.W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273. [Google Scholar] [CrossRef]
  11. Deo, R.C. Machine Learning in Medicine. Circulation 2015, 132, 1920–1930. [Google Scholar] [CrossRef] [Green Version]
  12. U.S. National Library of Medicine. Machine Learning; MeSH Unique ID: D000069550. 2016. Available online: https://www.ncbi.nlm.nih.gov/mesh/2010029 (accessed on 7 January 2021).
  13. Camacho, D.M.; Collins, K.M.; Powers, R.K.; Costello, J.C.; Collins, J.J. Next-Generation Machine Learning for Biological Networks. Cell 2018, 173, 1581–1592. [Google Scholar] [CrossRef] [Green Version]
  14. Chen, P.-H.C.; Liu, Y.; Peng, L. How to develop machine learning models for healthcare. Nat. Mater. 2019, 18, 410–414. [Google Scholar] [CrossRef]
  15. Uribe, C.F.; Mathotaarachchi, S.; Gaudet, V.; Smith, K.C.; Rosa-Neto, P.; Bénard, F.; Black, S.E.; Zukotynski, K. Machine Learning in Nuclear Medicine: Part 1-Introduction. J. Nucl. Med. 2019, 60, 451–458. [Google Scholar] [CrossRef] [Green Version]
  16. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine Learning for Medical Imaging. RadioGraphics 2017, 37, 505–515. [Google Scholar] [CrossRef]
  17. Kohli, M.; Prevedello, L.M.; Filice, R.W.; Geis, J.R. Implementing Machine Learning in Radiology Practice and Research. Am. J. Roentgenol. 2017, 208, 754–760. [Google Scholar] [CrossRef]
  18. Bonekamp, D.; Kohl, S.; Wiesenfarth, M.; Schelb, P.; Radtke, J.P.; Götz, M.; Kickingereder, P.; Yaqubi, K.; Hitthaler, B.; Gählert, N.; et al. Radiomic Machine Learning for Characterization of Prostate Lesions with MRI: Comparison to ADC Values. Radiology 2018, 289, 128–137. [Google Scholar] [CrossRef] [PubMed]
  19. Thrall, J.H.; Li, X.; Li, Q.; Cruz, C.; Do, S.; Dreyer, K.; Brink, J. Artificial Intelligence and Machine Learning in Radiology: Opportunities, Challenges, Pitfalls, and Criteria for Success. J. Am. Coll. Radiol. 2018, 15, 504–508. [Google Scholar] [CrossRef]
  20. Burian, E.; Jungmann, F.; Kaissis, G.A.; Lohöfer, F.K.; Spinner, C.D.; Lahmer, T.; Treiber, M.; Dommasch, M.; Schneider, G.; Geisler, F.; et al. Intensive Care Risk Estimation in COVID-19 Pneumonia Based on Clinical and Imaging Parameters: Experiences from the Munich Cohort. J. Clin. Med. 2020, 9, 1514. [Google Scholar] [CrossRef]
  21. Kelchtermans, P.; Bittremieux, W.; De Grave, K.; Degroeve, S.; Ramon, J.; Laukens, K.; Valkenborg, D.; Barsnes, H.; Martens, L. Machine learning applications in proteomics research: How the past can boost the future. Proteomics 2014, 14, 353–366. [Google Scholar] [CrossRef]
  22. Fröhlich, H.; Balling, R.; Beerenwinkel, N.; Kohlbacher, O.; Kumar, S.; Lengauer, T.; Maathuis, M.H.; Moreau, Y.; Murphy, S.A.; Przytycka, T.M.; et al. From hype to reality: Data science enabling personalized medicine. BMC Med. 2018, 16, 150. [Google Scholar] [CrossRef] [PubMed]
  23. Wong, D.; Yip, S. Machine learning classifies cancer. Nature 2018, 555, 446–447. [Google Scholar] [CrossRef] [PubMed]
  24. Casagranda, I.; Costantino, G.; Falavigna, G.; Furlan, R.; Ippoliti, R. Artificial Neural Networks and risk stratification models in Emergency Departments: The policy maker’s perspective. Health Policy 2016, 120, 111–119. [Google Scholar] [CrossRef] [PubMed]
  25. Maier-Hein, L.; Vedula, S.S.; Speidel, S.; Navab, N.; Kikinis, R.; Park, A.; Eisenmann, M.; Feussner, H.; Forestier, G.; Giannarou, S.; et al. Surgical data science for next-generation interventions. Nat. Biomed. Eng. 2017, 1, 691–696. [Google Scholar] [CrossRef]
  26. Lundberg, S.M.; Nair, B.; Vavilala, M.S.; Horibe, M.; Eisses, M.J.; Adams, T.; Liston, D.E.; Low, D.K.; Newman, S.F.; Kim, J.; et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. Nat. Biomed. Eng. 2018, 2, 749–760. [Google Scholar] [CrossRef]
  27. Cleophas, T.J.; Zwinderman, A.H. Machine Learning in Medicine—A Complete Overview; Springer International Publishing: Cham, Switzerland, 2020. [Google Scholar]
  28. García-Ordás, M.T.; Arias, N.; Benavides, C.; García-Olalla, O.; Benítez-Andrades, J.A. Evaluation of Country Dietary Habits Using Machine Learning Techniques in Relation to Deaths from COVID-19. Healthcare 2020, 8, 371. [Google Scholar] [CrossRef]
  29. Gerke, S.; Babic, B.; Evgeniou, T.; Cohen, I.G. The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. NPJ Digit. Med. 2020, 3, 53. [Google Scholar] [CrossRef] [Green Version]
  30. Stern, A.D.; Price, W.N. Regulatory oversight, causal inference, and safe and effective health care machine learning. Biostatistics 2020, 21, 363–367. [Google Scholar] [CrossRef]
  31. Subbaswamy, A.; Saria, S. From development to deployment: Dataset shift, causality, and shift-stable models in health AI. Biostatistics 2020, 21, 345–352. [Google Scholar] [CrossRef]
  32. McCray, A.T.; Ide, N.C. Design and implementation of a national clinical trials registry. J. Am. Med. Inf. Assoc. 2000, 7, 313–323. [Google Scholar] [CrossRef] [Green Version]
  33. McCray, A.T. Better access to information about clinical trials. Ann. Intern. Med. 2000, 133, 609–614. [Google Scholar] [CrossRef]
  34. Zarin, D.A.; Tse, T.; Ide, N.C. Trial Registration at ClinicalTrials.gov between May and October 2005. N. Engl. J. Med. 2005, 353, 2779–2787. [Google Scholar] [CrossRef] [Green Version]
  35. Zarin, D.A.; Tse, T.; Williams, R.J.; Califf, R.M.; Ide, N.C. The ClinicalTrials.gov results database—Update and key issues. N. Engl. J. Med. 2011, 364, 852–860. [Google Scholar] [CrossRef] [Green Version]
  36. U.S. National Library of Medicine. ClinicalTrials.gov Advanced Search. Available online: https://clinicaltrials.gov/ct2/search/advanced (accessed on 7 January 2021).
  37. Ehrhardt, S.; Appel, L.J.; Meinert, C.L. Trends in National Institutes of Health Funding for Clinical Trials Registered in ClinicalTrials.gov. JAMA 2015, 314, 2566–2567. [Google Scholar] [CrossRef] [Green Version]
  38. Ross, J.S.; Mulvey, G.K.; Hines, E.M.; Nissen, S.E.; Krumholz, H.M. Trial publication after registration in ClinicalTrials.Gov: A cross-sectional analysis. PLoS Med. 2009, 6, e1000144. [Google Scholar] [CrossRef]
  39. Cihoric, N.; Tsikkinis, A.; Miguelez, C.G.; Strnad, V.; Soldatovic, I.; Ghadjar, P.; Jeremic, B.; Dal Pra, A.; Aebersold, D.M.; Lössl, K. Portfolio of prospective clinical trials including brachytherapy: An analysis of the ClinicalTrials.gov database. Radiat. Oncol. 2016, 11, 48. [Google Scholar] [CrossRef] [Green Version]
  40. Chen, Y.-P.; Lv, J.-W.; Liu, X.; Zhang, Y.; Guo, Y.; Lin, A.-H.; Sun, Y.; Mao, Y.-P.; Ma, J. The Landscape of Clinical Trials Evaluating the Theranostic Role of PET Imaging in Oncology: Insights from an Analysis of ClinicalTrials.gov Database. Theranostics 2017, 7, 390–399. [Google Scholar] [CrossRef]
  41. Zippel, C.; Ronski, S.C.; Bohnet-Joschko, S.; Giesel, F.L.; Kopka, K. Current Status of PSMA-Radiotracers for Prostate Cancer: Data Analysis of Prospective Trials Listed on ClinicalTrials.gov. Pharmaceuticals 2020, 13, 12. [Google Scholar] [CrossRef] [Green Version]
  42. Bell, S.A.; Tudur Smith, C. A comparison of interventional clinical trials in rare versus non-rare diseases: An analysis of ClinicalTrials.gov. Orphanet J. Rare Dis. 2014, 9, 170. [Google Scholar] [CrossRef] [Green Version]
  43. Subramanian, J.; Madadi, A.R.; Dandona, M.; Williams, K.; Morgensztern, D.; Govindan, R. Review of ongoing clinical trials in non-small cell lung cancer: A status report for 2009 from the ClinicalTrials.gov website. J. Thorac. Oncol. 2010, 5, 1116–1119. [Google Scholar] [CrossRef] [Green Version]
  44. Hirsch, B.R.; Califf, R.M.; Cheng, S.K.; Tasneem, A.; Horton, J.; Chiswell, K.; Schulman, K.A.; Dilts, D.M.; Abernethy, A.P. Characteristics of oncology clinical trials: Insights from a systematic analysis of ClinicalTrials.gov. JAMA Intern. Med. 2013, 173, 972–979. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Califf, R.M.; Zarin, D.A.; Kramer, J.M.; Sherman, R.E.; Aberle, L.H.; Tasneem, A. Characteristics of clinical trials registered in ClinicalTrials.gov, 2007–2010. JAMA 2012, 307, 1838–1847. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Benke, K.; Benke, G. Artificial Intelligence and Big Data in Public Health. Int. J. Environ. Res. Public Health 2018, 15, 2796. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
  48. Kulkarni, S.; Seneviratne, N.; Baig, M.S.; Khan, A.H.A. Artificial Intelligence in Medicine: Where Are We Now? Acad. Radiol. 2020, 27, 62–70. [Google Scholar] [CrossRef] [Green Version]
  49. Lee, D.; Yoon, S.N. Application of Artificial Intelligence-Based Technologies in the Healthcare Industry: Opportunities and Challenges. Int. J. Environ. Res. Public Health 2021, 18, 271. [Google Scholar] [CrossRef]
  50. Sidey-Gibbons, J.A.M.; Sidey-Gibbons, C.J. Machine learning in medicine: A practical introduction. BMC Med. Res. Methodol. 2019, 19, 64. [Google Scholar] [CrossRef] [Green Version]
  51. Brinker, T.J.; Hekler, A.; Utikal, J.S.; Grabe, N.; Schadendorf, D.; Klode, J.; Berking, C.; Steeb, T.; Enk, A.H.; von Kalle, C. Skin Cancer Classification Using Convolutional Neural Networks: Systematic Review. J. Med. Internet Res. 2018, 20, e11936. [Google Scholar] [CrossRef]
  52. Kawasaki, T.; Kidoh, M.; Kido, T.; Sueta, D.; Fujimoto, S.; Kumamaru, K.K.; Uetani, T.; Tanabe, Y.; Ueda, T.; Sakabe, D.; et al. Evaluation of Significant Coronary Artery Disease Based on CT Fractional Flow Reserve and Plaque Characteristics Using Random Forest Analysis in Machine Learning. Acad. Radiol. 2020, 27, 1700–1708. [Google Scholar] [CrossRef]
  53. Kickingereder, P.; Isensee, F.; Tursunova, I.; Petersen, J.; Neuberger, U.; Bonekamp, D.; Brugnara, G.; Schell, M.; Kessler, T.; Foltyn, M.; et al. Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: A multicentre, retrospective study. Lancet Oncol. 2019, 20, 728–740. [Google Scholar] [CrossRef] [Green Version]
  54. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef]
  55. Hekler, A.; Utikal, J.S.; Enk, A.H.; Solass, W.; Schmitt, M.; Klode, J.; Schadendorf, D.; Sondermann, W.; Franklin, C.; Bestvater, F.; et al. Deep learning outperformed 11 pathologists in the classification of histopathological melanoma images. Eur. J. Cancer 2019, 118, 91–96. [Google Scholar] [CrossRef] [Green Version]
  56. Campanella, G.; Hanna, M.G.; Geneslaw, L.; Miraflor, A.; Werneck Krauss Silva, V.; Busam, K.J.; Brogi, E.; Reuter, V.E.; Klimstra, D.S.; Fuchs, T.J. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat. Med. 2019, 25, 1301–1309. [Google Scholar] [CrossRef]
  57. Velasco-Garrido, M.; Zentner, A.; Busse, R. Health Systems, Health Policy and Health Technology Assessment. In Health Technology Assessment and Health Policy-Making in Europe. Current Status, Challenges and Potential; Velasco-Garrido, M., Kristensen, F.B., Nielsen, C.P., Busse, R., Eds.; WHO Regional Office for Europe: Copenhagen, Denmark, 2008. [Google Scholar]
  58. Beck, A.C.C.; Retèl, V.P.; Bhairosing, P.A.; van den Brekel, M.W.M.; van Harten, W.H. Barriers and facilitators of patient access to medical devices in Europe: A systematic literature review. Health Policy 2019, 123, 1185–1198. [Google Scholar] [CrossRef]
  59. U.S. National Library of Medicine. ClinicalTrials.gov Protocol Registration Quality Control Review Criteria. Available online: https://prsinfo.clinicaltrials.gov/ProtocolDetailedReviewItems.pdf (accessed on 1 February 2021).
  60. Blomberg, S.N.; Folke, F.; Ersbøll, A.K.; Christensen, H.C.; Torp-Pedersen, C.; Sayre, M.R.; Counts, C.R.; Lippert, F.K. Machine learning as a supportive tool to recognize cardiac arrest in emergency calls. Resuscitation 2019, 138, 322–329. [Google Scholar] [CrossRef] [Green Version]
  61. Jaroszewski, A.C.; Morris, R.R.; Nock, M.K. Randomized controlled trial of an online machine learning-driven risk assessment and intervention platform for increasing the use of crisis services. J. Consult. Clin. Psychol. 2019, 87, 370–379. [Google Scholar] [CrossRef]
  62. Mohr, D.C.; Schueller, S.M.; Tomasino, K.N.; Kaiser, S.M.; Alam, N.; Karr, C.; Vergara, J.L.; Gray, E.L.; Kwasny, M.J.; Lattie, E.G. Comparison of the Effects of Coaching and Receipt of App Recommendations on Depression, Anxiety, and Engagement in the IntelliCare Platform: Factorial Randomized Controlled Trial. J. Med. Internet Res. 2019, 21, e13609. [Google Scholar] [CrossRef]
  63. Tesche, C.; Otani, K.; De Cecco, C.N.; Coenen, A.; De Geer, J.; Kruk, M.; Kim, Y.-H.; Albrecht, M.H.; Baumann, S.; Renker, M.; et al. Influence of Coronary Calcium on Diagnostic Performance of Machine Learning CT-FFR: Results from MACHINE Registry. JACC Cardiovasc. Imaging 2020, 13, 760–770. [Google Scholar] [CrossRef]
  64. Baumann, S.; Renker, M.; Schoepf, U.J.; De Cecco, C.N.; Coenen, A.; De Geer, J.; Kruk, M.; Kim, Y.H.; Albrecht, M.H.; Duguay, T.M.; et al. Gender differences in the diagnostic performance of machine learning coronary CT angiography-derived fractional flow reserve -results from the MACHINE registry. Eur. J. Radiol. 2019, 119, 108657. [Google Scholar] [CrossRef]
  65. De Geer, J.; Coenen, A.; Kim, Y.H.; Kruk, M.; Tesche, C.; Schoepf, U.J.; Kepka, C.; Yang, D.H.; Nieman, K.; Persson, A. Effect of Tube Voltage on Diagnostic Performance of Fractional Flow Reserve Derived from Coronary CT Angiography With Machine Learning: Results From the MACHINE Registry. Am. J. Roentgenol. 2019, 213, 325–331. [Google Scholar] [CrossRef]
  66. Wan, N.; Weinberg, D.; Liu, T.-Y.; Niehaus, K.; Ariazi, E.A.; Delubac, D.; Kannan, A.; White, B.; Bailey, M.; Bertin, M.; et al. Machine learning enables detection of early-stage colorectal cancer by whole-genome sequencing of plasma cell-free DNA. BMC Cancer 2019, 19, 832. [Google Scholar] [CrossRef] [Green Version]
  67. Lin, J.; Ariazi, E.; Dzamba, M.; Hsu, T.-K.; Kothen-Hill, S.; Li, K.; Liu, T.-Y.; Mahajan, S.; Palaniappan, K.K.; Pasupathy, A.; et al. Evaluation of a sensitive blood test for the detection of colorectal advanced adenomas in a prospective cohort using a multiomics approach. J. Clin. Oncol. 2021, 39, 43. [Google Scholar] [CrossRef]
  68. Prabhakar, B.; Singh, R.K.; Yadav, K.S. Artificial intelligence (AI) impacting diagnosis of glaucoma and understanding the regulatory aspects of AI-based software as medical device. Comput. Med. Imaging Graph. 2021, 87, 101818. [Google Scholar] [CrossRef]
  69. Zippel, C.; Bohnet-Joschko, S. Post market surveillance in the german medical device sector—current state and future perspectives. Health Policy 2017, 121, 880–886. [Google Scholar] [CrossRef]
  70. European Parliament. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC. Off. J. Eur. Union 2017, 117, 1–175. [Google Scholar]
  71. FDA. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, Internet. 2021. Available online: https://www.fda.gov/media/145022/download (accessed on 19 February 2021).
  72. Bate, A.; Hobbiger, S.F. Artificial Intelligence, Real-World Automation and the Safety of Medicines. Drug Saf. 2020. [Google Scholar] [CrossRef]
  73. Broome, D.T.; Hilton, C.B.; Mehta, N. Policy Implications of Artificial Intelligence and Machine Learning in Diabetes Management. Curr. Diabetes Rep. 2020, 20, 5. [Google Scholar] [CrossRef]
  74. Cohen, I.G.; Evgeniou, T.; Gerke, S.; Minssen, T. The European artificial intelligence strategy: Implications and challenges for digital health. Lancet Digit. Health 2020, 2, e376–e379. [Google Scholar] [CrossRef]
  75. Pesapane, F.; Volonté, C.; Codari, M.; Sardanelli, F. Artificial intelligence as a medical device in radiology: Ethical and regulatory issues in Europe and the United States. Insights Imaging 2018, 9, 745–753. [Google Scholar] [CrossRef]
  76. Larson, D.B.; Harvey, H.; Rubin, D.L.; Irani, N.; Tse, J.R.; Langlotz, C.P. Regulatory Frameworks for Development and Evaluation of Artificial Intelligence-Based Diagnostic Imaging Algorithms: Summary and Recommendations. J. Am. Coll. Radiol. 2020. [Google Scholar] [CrossRef]
  77. Scherer, J.; Nolden, M.; Kleesiek, J.; Metzger, J.; Kades, K.; Schneider, V.; Bach, M.; Sedlaczek, O.; Bucher, A.M.; Vogl, T.J.; et al. Joint Imaging Platform for Federated Clinical Data Analytics. JCO Clin. Cancer Inform. 2020, 4, 1027–1038. [Google Scholar] [CrossRef]
  78. Grobler, L.; Siegfried, N.; Askie, L.; Hooft, L.; Tharyan, P.; Antes, G. National and multinational prospective trial registers. Lancet 2008, 372, 1201–1202. [Google Scholar] [CrossRef]
  79. Hasselblatt, H.; Dreier, G.; Antes, G.; Schumacher, M. The German Clinical Trials Register: Challenges and chances of implementing a bilingual registry. J. Evid. Based Med. 2009, 2, 36–40. [Google Scholar] [CrossRef] [PubMed]
  80. Ogino, D.; Takahashi, K.; Sato, H. Characteristics of clinical trial websites: Information distribution between ClinicalTrials.gov and 13 primary registries in the WHO registry network. Trials 2014, 15, 428. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  81. Maros, M.E.; Capper, D.; Jones, D.T.W.; Hovestadt, V.; von Deimling, A.; Pfister, S.M.; Benner, A.; Zucknick, M.; Sill, M. Machine learning workflows to estimate class probabilities for precision cancer diagnostics on DNA methylation microarray data. Nat. Protoc. 2020, 15, 479–512. [Google Scholar] [CrossRef]
Figure 1. Flowchart for the selection procedure of the ML-related clinical study entries considered for the quantitative registry analysis. Source: Own figure based on the evaluation of the ClinicalTrials.gov dataset [36].
Figure 2. Number of clinical studies related to ML by year of publication on ClinicalTrials.gov (n = 358). Source: Own figure based on the evaluation of the ClinicalTrials.gov dataset [36].
Figure 3. Study entries in the field of ML by study-initiating medical specialty/field (n = 358). Source: Own figure based on the evaluation of the ClinicalTrials.gov dataset [36]. * Diagnostic Radiology/Biomedical Imaging, Radiation Oncology, Nuclear Medicine.
Table 1. Recruitment and organizational parameters of the included ML-related trials from the ClinicalTrials.gov registry (n = 358).
Values given as absolute number (n) and relative share (%) *.

Overall study status *
  Patient recruitment
    Open: 198 (55%)
    Not open: 160 (45%)
  Recruitment status
    Not yet recruiting: 64 (18%)
    Recruiting: 134 (37%)
    Enrolling by invitation: 15 (4%)
    Active, not recruiting: 22 (6%)
    Suspended: 5 (1%)
    Completed: 95 (27%)
    Unknown status: 23 (6%)
Study results
  Studies with results: 6 (2%)
  Studies without results: 352 (98%)
Organization/Cooperation
  Number of study locations
    Single study location: 288 (80%)
    Multiple study locations: 46 (13%)
    Not clear: 24 (7%)
  National/International
    National: 345 (96%)
    International: 13 (4%)
  Study location/Recruiting country **
    United States of America: 144 (40%)
    China: 34 (9%)
    United Kingdom: 28 (8%)
    Canada: 23 (6%)
    France: 18 (5%)
    Switzerland: 14 (4%)
    Germany: 13 (4%)
    Israel: 12 (3%)
    Spain: 12 (3%)
    Netherlands: 11 (3%)
    All others (Republic of Korea, Italy, Belgium, etc.): 67 (19%)
  Lead sponsor
    University/Hospital: 292 (82%)
    Industry: 66 (18%)
  Funding sources **
    Industry: 86 (24%)
    All others (individuals, universities, organizations): 314 (88%)
    Government agencies: 19 (5%)
      National Institutes of Health (NIH) ***: 11 (3%)
      Other U.S. Federal Agency ***: 8 (2%)
* Percentages may not sum to 100 due to rounding; ** More than one choice possible; *** Subcategories; Source: Own table based on the evaluation of the ClinicalTrials.gov dataset [36].
Table 2. Study type and study design specific parameters of the included ML-related clinical trials from the ClinicalTrials.gov registry (n = 358).
Values given as absolute number (n) and relative share (%) *.

Population studied
  Age group **
    Included children: 74 (21%)
    Included adults: 341 (95%)
    Included older adults (age > 65 years): 320 (89%)
  Gender of participants
    Both: 333 (93%)
    Female only: 20 (6%)
    Male only: 5 (1%)
Study type and design
  Observational studies ***: 230 (64%)
    Observational model
      Cohort: 154 (43%)
      Case-Control: 26 (7%)
      Case-Only: 26 (7%)
      Other: 24 (7%)
    Time perspective
      Prospective: 140 (39%)
      Retrospective: 57 (16%)
      Cross Sectional: 17 (5%)
      Other: 16 (4%)
  Interventional studies ***: 128 (36%)
    Allocation
      Randomized: 66 (18%)
      Non-Randomized: 17 (5%)
      N/A: 45 (13%)
    Intervention model
      Single Group Assignment: 48 (13%)
      Parallel Assignment: 69 (19%)
      Other (crossover, sequential, etc.): 11 (3%)
    Masking/Blinding
      None (Open Label): 77 (22%)
      Masked: 51 (14%)
        Single (Participant or Outcomes Assessor): 19 (5%)
        Double or triple: 32 (9%)
    Primary purpose
      Diagnostic: 37 (10%)
      Treatment: 26 (7%)
      Prevention: 12 (3%)
      Supportive Care: 11 (3%)
      Other: 42 (12%)
  Intervention/treatment type **
    Behavioral: 40 (11%)
    Device: 86 (24%)
    Diagnostic Test: 77 (22%)
    Drug: 17 (5%)
    Procedure: 13 (4%)
    Other: 155 (43%)
* Percentages may not sum to 100 due to rounding; ** More than one choice possible; *** Subcategories; Source: Own table based on the evaluation of the ClinicalTrials.gov dataset [36].
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Zippel, C.; Bohnet-Joschko, S. Rise of Clinical Studies in the Field of Machine Learning: A Review of Data Registered in ClinicalTrials.gov. Int. J. Environ. Res. Public Health 2021, 18, 5072. https://doi.org/10.3390/ijerph18105072
