Article

Leveraging the Academic Artificial Intelligence Silecosystem to Advance the Community Oncology Enterprise

by
Kevin J. McDonnell
Center for Precision Medicine, Department of Medical Oncology & Therapeutics Research, City of Hope Comprehensive Cancer Center, Duarte, CA 91010, USA
J. Clin. Med. 2023, 12(14), 4830; https://doi.org/10.3390/jcm12144830
Submission received: 7 June 2023 / Revised: 5 July 2023 / Accepted: 7 July 2023 / Published: 21 July 2023

Abstract
Over the last 75 years, artificial intelligence has evolved from a theoretical concept and novel paradigm describing the role that computers might play in our society to a tool with which we daily engage. In this review, we describe AI in terms of its constituent elements, the synthesis of which we refer to as the AI Silecosystem. Herein, we provide an historical perspective of the evolution of the AI Silecosystem, conceptualized and summarized as a Kuhnian paradigm. This manuscript focuses on the role that the AI Silecosystem plays in oncology and its emerging importance in the care of the community oncology patient. We observe that this important role arises out of a unique alliance between the academic oncology enterprise and community oncology practices. We provide evidence of this alliance by illustrating the practical establishment of the AI Silecosystem at the City of Hope Comprehensive Cancer Center and its team utilization by community oncology providers.

1. Introduction

Artificial intelligence (AI) plays an ever-increasing role in our daily lives, most immediately in our use of entertainment, consumer and communication products [1,2]. Less immediately obvious to the oncology patient, AI has become an important tool to assist the clinical management of and guide therapy for cancer [3,4,5]. Within the academic oncology sphere, AI already has a significant impact. For example, AI has substantial, established roles in precision oncology [6,7,8], clinical oncology decision-making [9,10,11], digital cancer pathology [12,13,14,15,16] and radiology [17,18,19]. For community oncology practice, the role of AI remains limited but continues to emerge [20,21,22]. In this review, we seek to further expand knowledge of the role that AI plays in the community practice of oncology. We organize this manuscript into two parts. In Part I, we review the history, current state and emerging innovations relating to the computer hardware, data and software components that make AI possible. For conceptual simplicity and coherence, we refer to the synthesis of these components as the AI Silecosystem. We trace the emergence of the AI Silecosystem, its current state and future directions within the context of a Kuhnian scientific paradigm. In Part II, we provide a case example of the establishment and application of the AI Silecosystem in community oncology practice. We review the historical role and current integral position that academic medical institutions occupy in facilitating utilization of the AI Silecosystem by the community oncologist. We describe and place special emphasis on our experience at the City of Hope (COH) Comprehensive Cancer Center to advance community oncology team utilization of the AI Silecosystem.

2. The AI Silecosystem as Kuhnian Paradigm

By AI Silecosystem we mean the synthesis of data, hardware and software that undergird the operation, make available the use, and fuel the growth of AI (Figure 1). To appreciate the history, progress and future trajectory of the AI Silecosystem, we may describe it as a Kuhnian paradigm [23]. As a Kuhnian paradigm, the AI Silecosystem has disrupted and shifted the original paradigm of the computer as a finite computational machine to the novel paradigm of the computer as a versatile, multipotent thinking machine. This paradigm shift characteristically matures through three discrete, iterative stages: inception, intermission and invigoration.

3. Origins of the AI Silecosystem: A Chronicle of an Emergent Paradigm

3.1. Inception: Articulation Anticipates Actualization

McCulloch and Pitts defined the incipient notion of computer as a thinking machine, suggesting that engineers might design computers to functionally mimic the operation of the human nervous system. In this theoretic nervous system model, an individual neuronal logic element achieves its ultimate activation state through cumulative summation of weighted inputs generated from a syndicate of contiguous neuronal logic elements [24]. This proposal represented an important architectural anlage preceding physical construction of Rosenblatt’s early neural network, the Perceptron [25,26]. Rosenblatt’s Mark 1 Perceptron neural network machine demonstrated the ability to perform basic visual pattern recognition. These early insights and accomplishments gave rise to an inchoate AI Silecosystem that Alan Turing further accelerated with his proposition that machines might “think” through serial adjudication of true and false logic states [27] (Figure 2). Formal AI development acquired significant academic interest and gained further momentum in 1956 when the early pioneers, McCarthy, Minsky and Shannon, convened a summer research convention at Dartmouth College where they sought critical evaluation of the assertion that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” [28]. Historians credit McCarthy as one of the originators of the term “artificial intelligence”. Consistent with previous Kuhnian paradigms, articulation of the AI Silecosystem paradigm anticipated its practical implementation.

3.2. Intermission: Expectations Exceed Experience

Initial efforts to create and implement the AI Silecosystem experienced setbacks. Between 1970 and 1990, a series of pivotal, adverse events led to intermissions in AI Silecosystem utilization, research and development. The inability of the AI Silecosystem to deliver on its promise to perform complex, traditionally human-only tasks, such as language translation, speech recognition and advanced image analysis, efficiently and accurately muted expectations for AI-based approaches. These shortcomings prompted sponsors to withdraw financial support from several prominent AI initiatives. During this two-decade period, the Defense Advanced Research Projects Agency (DARPA) reduced funding for Carnegie Mellon’s AI speech recognition program, and the United States National Research Council ended its financing of AI language translation efforts [29]. Following the Lighthill report, the United Kingdom halted further public AI development [30], and Japan curtailed AI investment after its Fifth Generation project failed to meet its articulated goals [31]. These setbacks instigated widespread public disillusionment with AI and precipitated a series of intermissions in further AI discovery and advancement, that is, the “AI Winters”. Kuhn would recognize intermissions such as the AI Winters as expected phases in the lifecycle of a paradigm shift. Full acceptance of a paradigm often must await creation of the technology and evaluation tools to permit complete use, valid assessment and thorough validation of the novel paradigm. Kuhn notes, for example, that many years passed after Newton and Einstein first introduced their mechanics and relativity paradigms until the availability of experimental verification protocols allowed scientists to fully understand, confirm and accept their revolutionary ideas [23].
Ultimately, innovation and insight facilitate endorsement and adoption of emerging paradigms, and, specifically, in the case of the AI Silecosystem, led to thawing of the AI Winters.

3.3. Invigoration: Innovation Invites Implementation and Investment

Innovation of and transformational progress within three core elements of the AI Silecosystem, i.e., computer hardware, data acquisition and processing, and software algorithms, hastened thawing of the AI Winters. The following sections survey these key, instrumental innovations and advances.

3.3.1. Advances in Computer Hardware: The Engines That Power the AI Silecosystem

If we view the AI Silecosystem as a computational vehicle, its hardware elements function as the engines powering AI algorithmic processing. The invention of the silicon chip [32], introduction of multicore constructs [33] and development of ultrahigh capacity data storage systems [34], among other hardware innovations, enabled efficient, inexpensive performance of computationally complex, data-dense AI algorithms. The following more recent advances promise to further boost adoption and expansion of the AI Silecosystem.

Quantum Computing

Quantum computing uses the quantum bit (qubit) as its fundamental unit of information, in contrast to conventional digital computing, which employs the binary bit. Two different value states define the classic binary bit, and these value states exhibit mutual exclusivity (either 1 or 0). The qubit, however, may retain both value states simultaneously (1 and 0) in a quantum condition known as superposition. Superposition enables more rapid completion of complex, intensive computational tasks by quantum computation; digital computation cannot complete these tasks within a meaningful time frame. The computational superiority of the quantum computer, termed “quantum supremacy”, was first demonstrated by Google in 2019 using a programmable superconducting processor [35]. Quantum supremacy has the potential to amplify the power and practical utility of the AI Silecosystem. For example, computational scientists have developed and now apply AI algorithms to solve complicated combinatoric problems such as those encountered in molecular oncology drug design [36] and cancer diagnostics [37]. Processing of such AI algorithms on traditional computer platforms, however, might require exorbitant, cost- and time-prohibitive computational resources; implementation of quantum computation may allow tractable, economic solutions for combinatoric and other equally complex oncologic questions. Oncologists have successfully used quantum computing, together with AI applications, in breast cancer prediction [38], the application of radiotherapy [39] and cancer histologic assessment [40].
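To make the notion of superposition concrete, the following sketch classically simulates the two amplitudes of a single qubit in plain Python. This is an illustrative toy, not quantum hardware or a production quantum library; the Hadamard gate shown is the standard operation for placing a basis state into equal superposition.

```python
import math

# A qubit state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1;
# measurement yields 0 with probability |a|^2 and 1 with probability |b|^2.

def hadamard(state):
    """Apply the Hadamard gate, which places a basis state in superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Born rule: squared amplitude magnitudes give measurement probabilities."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)                # the classical-like |0> state
superposed = hadamard(zero)      # equal superposition of |0> and |1>
p0, p1 = probabilities(superposed)
```

Measuring the superposed state gives 0 or 1 with equal probability, the property (impossible for a classical bit) on which quantum algorithms build.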

Artificial-Intelligence-Boosted Internet of Things (AIoT)

The internet of things (IoT) describes a system of local and remote physical instruments with communication, data processing, computational, memory storage and sensor capabilities interconnected via the internet and/or a local network [41,42]. The IoT aims to leverage the full potential of modern digital resources to optimize and assist with the activities and pursuits of daily living. Domestic examples of the IoT include smart speakers, home security systems and integrated, residential thermostat devices. The IoT has the potential for broad societal utilization. Specifically, within the sphere of health care, the IoT, i.e., the internet of medical things (IoMT), has enabled new, vital medical services, for instance, distance clinical assessment and monitoring [43,44] and remote health emergency notification [45]. In addition, investigators have proposed using the IoMT to enhance breast cancer detection [46], patient-centric healthcare [47,48], and the performance of health-care-related deep learning models [49].
With the advent of AI, the next iteration of the IoT emerged: artificial-intelligence-boosted IoT (AIoT) [50]. The AIoT underpins a range of familiar IoT applications such as autonomous driving vehicles [51], industrial robots [52] and surveillance drones [53]. The AIoT has provided impetus for several AI-based initiatives, for example, the development of anticipatory manufacturing machine maintenance, automated optimization of commercial operational efficiency and machine-learning-based urban safety monitoring and traffic control. Hospitals have begun using the AIoT to maintain efficient daily facility functioning and provide centralized patient monitoring. At COH, researchers have harnessed the AIoT to ensure safe, timely and effective post-surgery recovery for the patient after return to their home [54].

Distributive Edge Computing

Shared, centralized high-performance computer centers (HPCCs) have made available to a multitude of scientists the computer resources required to perform highly complex, computationally intense analyses. An HPCC may be located at a significant physical distance from the data source; moreover, as a shared resource, analytic jobs submitted to an HPCC enter a work queue and are processed in a serial fashion. The geographical and operational architecture of the HPCC results in “in due time” job completion. A complementary data analytic approach, edge computing, redistributes data processing, computations and memory storage from HPCC hubs to smaller, local computer nodes contiguous with the data source [55]. Edge computer nodes excel at “now time” processing of smaller discrete data parcels. For certain applications, most notably IoT platforms, edge computing offers distinct advantages over centralized HPCC processing: improved efficiency, low latency and increased agility; further, for large institutions, with often immensely large HPCC computational demands, edge computing helps alleviate computational backlog and obviate compromise of network bandwidth. Currently, edge computing plays an indispensable role in healthcare, processing data originating from local clinics as well as patient wearable monitoring devices [56,57]. Researchers have begun to leverage the AI Silecosystem to catalyze new discoveries in and applications of edge computing. Recent efforts seek to bring the power, versatility and efficacy of AI to the edge in order to enhance local analytic capabilities [58,59,60]; specific initiatives seek to apply AI to edge immune-oncology and precision oncology computational efforts [61,62].

Cloud Computing

Cloud computing refers to as-needed, subscription use of off-site computer services, typically via an internet-connected network. Cloud computing allows organizations to rapidly adapt to and accommodate their changing computational needs. Cloud computing mitigates the often-substantial financial and time costs associated with starting up or rapidly expanding computing capacity. As the owners of the cloud computer services manage and maintain their product, subscribers avoid administrative and custodial cost burdens. Further, in the event of abrupt computational deceleration or change in operational goals, cloud computing eliminates organizational depreciation costs associated with dormant or obsolete equipment and software. Even stably established and well-resourced HPCCs may utilize cloud computing services to buffer acute fluxes in computer needs. Cloud computing currently plays a pivotal role in supporting the healthcare industry, including provision of the off-site storage of patient electronic medical records, the warehousing of large genomic data sets, the enablement of robust telehealth capabilities and the hosting of patient access portals [63]. Cloud computing utilizes the AI Silecosystem to automate complex healthcare data management protocols and enhance workflows associated with the processing and analysis of patient data [64]. Cloud AI platforms make the tremendous power of AI protocols more immediately available to oncologists and their patients [65]. AI-augmented cloud computing helps to advance tumor board operations, cancer therapeutics, patient management, diagnostics and oncology services [66].

Neuromorphic Computing

Neuromorphic computing adapts the physical architecture and functionality of the human central nervous system to enhance computer design and operation [67,68,69,70]. The artificial neuron constitutes the fundamental functional unit of neuromorphic computing. The construction and implementation of the artificial neuron and neuromorphic computers rely on interdisciplinary collaboration among neurobiologists, electrical engineers, computer scientists and computational specialists. Neuromorphic computing provided the basis for the invention and utilization of neuromorphic sensors such as artificial retinas and cochleae. Neuromorphic computing research inspired specialized subdisciplines, for example, neuromemristive initiatives that utilize electromagnetic memristors to create CNS-computer interfaces [71]. Neuromorphic computing plays an increasingly important role in healthcare applications such as patient safety monitoring [72], neuro-rehabilitation [73] and interactive health care robotics [74]. Recently, computer researchers have incorporated neuromorphic computing approaches into AI platforms to boost their effectiveness and efficiency [75,76,77]. Cancer scientists and oncologists have implemented AI-based neuromorphic computing to enrich their research [78,79,80] and improve clinical patient care [78,81].

Analog Neural Networks

As with neuromorphic computing, analog neural networks seek to mimic more closely the biochemical and neurophysiological functioning of the biological nervous system. Because biologic neuronal inputs comprise parallel, converged signals originating from a multitude of neighboring neurons, the inputs do not occur within discrete time episodes, nor do the strengths of the signals take discrete quantitative values. Therefore, a nervous system model with analog, continuous, rather than digital, input values more closely approximates actual nervous system functioning. Analog neural networks require less energy and less computational time compared with digital networks [82,83,84,85]. Analog neural networks now play central roles in the operation of numerous healthcare and medical software applications, e.g., those related to medical imaging [86], mimicking of olfactory function [87] and modeling of mastoid bone pathologic events [88]. Investigators observe that analog neural networks may be used to support AI-based platforms such as support vector machine learning [89], advanced edge computing [90] and natural language processing [91]. Cancer computational specialists have adapted analog neural networks to strengthen AI-informed oncology research, including the development of efficient cancer classification workflows [92,93], cancer histological analytic approaches [94] and oncology drug design pathways [95].

Monolithic-3D AI Systems

Electrical engineers originally designed the integrated circuit (IC) as a two-dimensional, flat semiconductor device containing a vast array of electronic elements such as transistors, capacitors and resistors. The IC has the capability to perform a wide range of data processing and computational operations. Relative to a collection of discrete circuit elements, ICs carry out operations more rapidly and use less energy. Recent advancements in IC design have led to the development of a three-dimensional (3D) IC configuration in which engineers vertically layer two-dimensional IC units [96]. This innovative design allowed construction of monolithic 3D ICs that contain within a single chip the necessary electronic components to carry out increasingly complex, advanced computational tasks [97]. Monolithic 3D ICs demonstrate improved efficiency of operation and allow for construction of ever more compact electronic instrumentation. The introduction of monolithic 3D ICs rapidly accelerated practical implementation of often very complicated AI machine learning and deep neural network algorithms in IoT devices such as personal, wearable medical devices and point-of-service health equipment [98].

The Graphics Processing Unit

The central processing unit (CPU) provides global program execution instructions for the computer; typically, the CPU performs its operational tasks in a serial fashion, one following another. CPUs normally contain a modest number of individual processing units (most often fewer than one hundred). Electrical engineers designed the CPU to complete dedicated, large-scale computer operational tasks. In comparison, the graphics processing unit (GPU) has more limited operation execution responsibilities related to specific tasks [99]. The GPU can execute functions in a parallel fashion, handling multiple tasks simultaneously; to facilitate parallel execution, the GPU may contain thousands of processing units. Although engineers originally designed the GPU to perform video and graphics functions, computer scientists realized that, vis-à-vis the CPU, the GPU performs AI-related tasks (e.g., machine learning and neural network operations) more proficiently. Oncologists have utilized GPU-based devices to augment their ability to implement radiation therapy [100] and interpret neuro-oncology MRI images [101].
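The serial-versus-parallel contrast can be sketched in Python. The sketch illustrates only the programming model, mapping one small per-element “kernel” over many data elements at once, not real GPU execution, which requires frameworks such as CUDA; the brighten kernel and the pixel values are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel, offset=50):
    """A tiny per-element kernel, e.g. brightening one image pixel."""
    return min(pixel + offset, 255)

def serial_apply(pixels):
    """CPU-style: process elements one after another."""
    return [brighten(p) for p in pixels]

def parallel_apply(pixels, workers=4):
    """GPU-style: the same kernel mapped over all elements concurrently.
    (Python threads only model the idea; they do not deliver true
    hardware parallelism here.)"""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(brighten, pixels))

image_row = [0, 100, 200, 250]
brightened = parallel_apply(image_row)
```

Both routes produce identical results; on a GPU, the mapped form runs across thousands of processing units simultaneously.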

Analog, Non-Volatile Memory Devices

Analog memory devices can store continuous data values. Volatile memory requires a continuous power source to retain data; non-volatile memory devices retain and stably store data after power discontinuation. The profound interest in implementing AI-based approaches, such as neuromorphic computing, that require durable and continuously valued data sets has intensified the need for analog, non-volatile memory devices. Recently, engineers have innovated memory storage with the introduction of analog, non-volatile ferroelectric field-effect transistor memory [102,103], resistive random access memory [104,105,106], magnetic random access memory [107,108] and phase change memory technologies [109,110,111]. Analog, non-volatile memory has been instrumental in the continuing maturation of AI-based neural networks [84,112,113], image analytic platforms [114] and bio-sensor devices [115,116].

3.3.2. Advances in Data

Data fuels the engine of the AI Silecosystem vehicle [117]; historically, several data-related innovations contributed to thawing of the AI Winters. Increasing the size of a data set characteristically elevates performance of an AI algorithm [118,119]. The advent of systematized large-scale data acquisition, concomitant with convergent informational and technical advances such as data compression [120], solid state memory [121] and random access memory [122], contributed to improved AI algorithmic functionality and abetted the awakenings of the AI Silecosystem from its early hibernations. In the following section, we examine additional data innovations that have driven forward the evolution and growth of the AI Silecosystem.

Synthetic Data

Synthetic data refer to information originating from an intentionally engineered process, in contrast to authentic data generated spontaneously from actual, real-world events. The desire for optimized AI algorithmic operability and larger data sets drove the development of synthetic data fabrication protocols.
Synthetic data production typically requires application of stringent statistical analytic procedures, precise data sampling approaches and rigorous testing methods to ensure accuracy and validity [123,124]. Synthetic data offer several key advantages over real-world data. For very large data sets, synthetic data avoid the often-tremendous financial costs associated with real-world data collection. Moreover, synthetic data, as they do not originate from actual patients, do not pose privacy risks and, additionally, eliminate the potential financial liability associated with a data breach. In addition, because of anonymity, synthetic data collections may allow their unrestricted use as open-source data repositories. The collection of real-world data may expose investigators to physical hazard. Data arising from natural disaster areas, associated with dangerous chemical or biologic agents, or originating from an unsafe physical environment (e.g., an active military combat zone or crime-challenged neighborhood) may all threaten the safety of data collection personnel. The surrogate production of synthetic data obviates such threats.
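The fit-then-sample core of one simple synthetic data approach can be sketched as follows: estimate a parametric model from real measurements, then draw new, artificial records from the fitted model. Production pipelines use far more rigorous statistical modeling and validation than this toy; the “tumor size” framing and the values are invented for illustration.

```python
import random
import statistics

def fit_model(real_values):
    """Estimate normal-distribution parameters from the real data."""
    return statistics.mean(real_values), statistics.stdev(real_values)

def generate_synthetic(real_values, n, seed=0):
    """Draw n synthetic records that mimic the real data's distribution."""
    mu, sigma = fit_model(real_values)
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]

# A small set of real measurements (illustrative) ...
real_tumor_sizes_mm = [12.1, 14.3, 11.8, 13.5, 12.9, 15.0, 13.2]
# ... expanded into a much larger synthetic cohort with no patient link.
synthetic = generate_synthetic(real_tumor_sizes_mm, n=1000)
```

Because the synthetic records derive only from summary statistics, they carry none of the original patients' identifying information.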
Within the AI Silecosystem, synthetic data have acquired increased prominence as recognition of their utility has grown. Synthetic data have driven forward innovations within the healthcare space. Synthetic data undergird many current initiatives in medical education [125,126], clinical training [127,128], epidemiology research [129,130] and disease prevention [131,132]. Cancer researchers now use synthetic data resources to bolster their work including precision medicine [133] and palliative care [134].

Facilitating Culturally Representative AI Data Sets

Experts identify cultural inequity and lack of diversity as ongoing and significant challenges in our society specifically impacting healthcare and medical outcomes [135,136,137]. As AI gains increasing currency as a tool to direct healthcare decision-making, and recognizing that patient data set composition influences AI algorithmic outcomes, consideration of the racial and ethnic composition of patient data sets has become important in order to ensure equity of healthcare outcomes, specifically within the sphere of cancer care [138]. Nevertheless, despite legal requirements for representative inclusion of racial and ethnic minorities in health research, disparities persist; data sets used in AI-based algorithms continue to employ non-representative patient populations, undermining the validity of algorithmic decision-making [139,140]. Novel initiatives aim to improve and maintain broad population representation within health care data sets and across AI platforms. These initiatives include the implementation of intentionally diverse data sets [141], the enactment of more effective legislative guidelines to promote equity and diversity [142] and initiation of proactive community programs to promote health research participation [143].

Optimizing Data Deposition and Engineering

In order to optimize functioning of the Silecosystem and performance of downstream applications, computer engineers and scientists require tractable access to high-quality, large-volume data [144,145]. For example, machine learning algorithms for drug discovery [146], diagnostic prediction [147] and oncology medical imaging [148] demonstrate significant improvement with enhancement of data quantity and quality. The construction of national federated data repositories seeks to establish direct, streamlined public access to large data warehouses [149,150,151,152,153]. Data engineering aims to modify and format data to facilitate AI model building and the completion of analytic tasks [154,155]. Recent data engineering efforts have sought to automate data quality improvement protocols such as eliminating bias in and assessing the integrity of large data sets [156,157,158].
Together, the careful generation of synthetic data, increased attention to equitable data representation and the facilitation of high-quality data access have promoted the saliency and amplified the currency of the AI Silecosystem. In the section that follows, we chronicle the role of software algorithms in mitigating past AI Winters and their continuing role in solidifying collective adoption of the AI Silecosystem.

3.3.3. Advances in Software Algorithms: Piloting the AI Silecosystem

If hardware functions as engine, and data serve as fuel, then the software algorithm operates as pilot to direct the AI Silecosystem. As a pilot, the software algorithm directs the operational flow, direction and output of the AI Silecosystem. The AI computer scientist may choose among a variety of software algorithms; most frequently, the scientist utilizes machine learning or neural network algorithms [159,160].
Machine learning algorithms employ either supervised or unsupervised protocols [161]. With supervised protocols, input data have assigned labels that link with an output result; using these labels, the algorithm “learns” the rule that governs the relationship between the input and output data. With unsupervised protocols, the data lack labels, and the algorithm must devise its own associative rules to understand patterns in the data. Among a range of practical applications, supervised machine learning has been used to predict customer behavior [162,163], differentiate cells of different histologies [164,165] and recognize faces [166,167]. With unsupervised machine learning, the algorithm seeks to cluster entities based upon some discoverable property of the entities, for example, grouping anonymous individuals within a large crowd based upon biometric or acquired physical variables [168,169].
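The two protocols can be contrasted in a toy Python sketch: a supervised nearest-neighbor classifier that uses provided labels, and an unsupervised two-means clustering that must discover groups on its own. The data, labels and “tumor marker” framing are invented for illustration.

```python
def supervised_predict(labeled_data, query):
    """Supervised: labels are given; classify a query by its nearest
    labeled example (1-nearest-neighbor)."""
    return min(labeled_data, key=lambda pair: abs(pair[0] - query))[1]

def unsupervised_cluster(values, iterations=10):
    """Unsupervised: no labels; discover two groups by simple 2-means."""
    c1, c2 = min(values), max(values)          # initial cluster centers
    for _ in range(iterations):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

# Supervised: marker level -> label learned from labeled examples
examples = [(1.0, "benign"), (1.2, "benign"), (8.5, "malignant"), (9.1, "malignant")]
label = supervised_predict(examples, 8.0)

# Unsupervised: the same kind of measurements, with no labels,
# still separate into two natural groups.
low, high = unsupervised_cluster([1.0, 1.2, 8.5, 9.1, 8.0])
```

The supervised routine needs the labels to answer; the unsupervised routine recovers the same grouping purely from the structure of the data.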
Neural network algorithms, a generally supervised subset of machine learning, work by mimicking the workings of the nervous system; within a neural network, an artificial neuron receives multiple inputs from neighboring neurons and then generates a resultant output based upon the combined input [170]. In turn, the neuron transmits its output signal to other neighboring neurons, culminating, ultimately, in a final, consolidated output value from the system. The neural network algorithm “learns” the necessary rules that govern the correct association between input and output values. For example, computer scientists have adapted neural networking to interpret handwriting; this task entails making the correct association between a handwritten word and the ground truth, intended word [171,172,173].
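The artificial neuron just described can be sketched in a few lines of Python: it sums weighted inputs from neighboring units and passes the total through an activation function. The weights, biases and input values below are invented; a real network would learn them from data.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))          # output in (0, 1)

# A minimal network: two first-layer neurons feed one output neuron.
x = [0.5, -1.0, 0.25]
h1 = neuron(x, [0.4, 0.3, -0.2], bias=0.1)
h2 = neuron(x, [-0.6, 0.1, 0.8], bias=0.0)
y = neuron([h1, h2], [1.5, -1.1], bias=0.2)    # consolidated output
```

Training consists of adjusting the weights and biases until the final output `y` matches the labeled ground truth across the training examples.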
Building upon the revolutionary impact of machine learning, other software inventions and algorithmic discoveries helped to rejuvenate AI and continue to transform the Silecosystem. A brief synopsis of major innovations follows.

Generative AI

Generative AI, an evolutionary offshoot of machine learning, uses rules derived from established instances of creative content to generate novel content such as original, advanced-level written documents [174], music compositions [175] and video game platforms [176], among others. Recently available generative AI applications, OpenAI’s ChatGPT [177] and Google’s Bard [178], have piqued the public’s attention as both tools demonstrate the ability to very quickly generate works that approach the imaginative and technical abilities of human creators [179,180]. ChatGPT and Bard have authored working computer code [181,182,183], achieved passing scores on professional qualifying and academic exams [184,185] and written jokes [186]. In the health care field, generative AI enables chatbot services [187], carries out natural language processing of medical records [188] and completes medical education tasks [189]. These generative AI applications currently play important roles in cancer drug discovery [190], review of cancer patient medical records [191] and digital pathology [192].

Virtual and Augmented Reality

Virtual reality relies upon AI-empowered three-dimensional viewing devices together with positional tracking to construct and allow participation in a simulated, pseudo-physical existence [193]. Augmented reality combines input originating from physical reality with information generated by a computer device to enrich the conscious experience [194,195]. Providers have utilized both virtual and augmented realities in health care, for example, to improve medical practice and basic science research, advance educational curricula [196,197,198,199,200], refine surgical skills [201,202], enhance the safety and effectiveness of medical procedures [203,204] and alleviate cancer pain and suffering [205,206,207]. Future virtual and augmented reality efforts aim to optimize routine, everyday tasks as well as medical professional-related procedures [208,209,210].

Explainable Machine Learning

Machine learning algorithms achieve their solutions through a progression of relationally dependent steps. The underlying logic governing these relations, however, may be abstruse and not readily decipherable, even by a computer scientist [211]. Disambiguating the machine learning logic yields significant benefits. For just as explaining the mechanism of a biologic process or chemical reaction may reveal secondary insights and lead to additional discovery, so also may explaining the logic of a machine learning solution lead to derivative AI computational breakthroughs [212]. Furthermore, end users of transparent, explainable machine learning algorithms have increased confidence in the predictions of and conclusions made by the algorithm [213,214]. AI computer scientists use a variety of explanatory methods to reveal and illuminate the logic underlying machine learning behavior [215,216,217,218]. For example, gradient methods quantify the effect that a change in a model input parameter has on the algorithm’s output at each step of the algorithm [219,220]. Deconvolution protocols provide information about the logical relationship between a specific output feature and an input variable [221,222]. Local interpretable model-agnostic explanations work by randomly inactivating model inputs and then observing and collectively analyzing the resulting outputs [223,224,225]. These and other explainable methods promise to enhance the intuitive utility of and confidence in machine learning as well as other AI-based methods. For example, oncologists have employed explainable machine learning to boost their ability to perform morphological and molecular breast cancer profiling [226] as well as estimate cancer hospital length of stay [227].
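The perturbation-based strategy described above can be sketched in a few lines. The following minimal Python sketch is illustrative only, not any published implementation: it treats a toy model as a black box, randomly inactivates inputs and fits a linear surrogate whose coefficients approximate each feature's local contribution, in the spirit of local interpretable model-agnostic explanations.

```python
import numpy as np

# Hypothetical black-box model: the explanation routine below queries it only
# through predict(), never inspecting its internals.
def black_box_predict(x):
    return 3.0 * x[0] + 0.1 * x[1] + 2.0 * x[2]   # a toy risk score

def perturbation_importance(predict, x, n_samples=2000, seed=0):
    """Randomly inactivate (zero out) inputs, record the model's outputs and
    fit a linear surrogate whose coefficients approximate each feature's
    local contribution (the idea behind LIME-style explanations)."""
    rng = np.random.default_rng(seed)
    d = len(x)
    masks = rng.integers(0, 2, size=(n_samples, d))        # random on/off masks
    outputs = np.array([predict(x * m) for m in masks])    # query the black box
    design = np.column_stack([masks, np.ones(n_samples)])  # masks + intercept
    coef, *_ = np.linalg.lstsq(design, outputs, rcond=None)
    return coef[:d]

importance = perturbation_importance(black_box_predict, np.ones(3))
# For this linear toy model, the surrogate recovers the true coefficients,
# ranking feature 0 highest and feature 1 lowest.
```

Because the toy model is itself linear, the surrogate fit is exact here; for a real nonlinear model, the coefficients would instead approximate local behavior around the example being explained.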

Generative Adversarial Networks

Generative adversarial networks (GANs) represent a category of generative machine learning algorithms in which two neural networks, a generator and discriminator, “compete” to achieve a maximized generative outcome, for example, production of an artificial image indistinguishable from an actual image [228,229]. Ground truth data sets train the generator to produce artificial data and also train the discriminator to distinguish between actual and artificial data [230,231]. The GAN algorithm achieves its generative objective when the generator produces artificial data, a majority of which the discriminator fails to distinguish from authentic data [230]. GANs have applications across a variety of disciplines including natural language processing [232,233,234], cybersecurity [235,236], manufacturing [237,238,239] and military defense [240,241]. Prominently, science and medicine have adapted GANs to design and analyze biological networks [242], perform medical imaging [243,244], inform precision oncology [245] and prescribe radiation medicine protocols [246,247,248].
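The generator-discriminator competition can be illustrated with a deliberately tiny, self-contained NumPy sketch (illustrative only, not a production GAN): a one-parameter generator learns to match a one-dimensional Gaussian while a logistic discriminator tries to separate real from artificial samples.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Real data: samples from N(4, 1). The generator, g(z) = mu + z, must learn
# mu so that its artificial samples become indistinguishable from real ones.
real_mean = 4.0
mu = 0.0                  # generator parameter (starts far from the truth)
w, b = 0.0, 0.0           # discriminator parameters (a logistic classifier)
lr_d, lr_g, batch = 0.05, 0.05, 64

for _ in range(4000):
    real = real_mean + rng.standard_normal(batch)   # ground truth samples
    fake = mu + rng.standard_normal(batch)          # generator output

    # Discriminator ascent step: maximize log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step: maximize log D(fake), i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + b)
    mu += lr_g * np.mean((1 - d_fake) * w)

# At convergence mu approaches real_mean and the discriminator is fooled:
# its outputs hover near 0.5 on both real and artificial samples.
```

The alternating updates mirror the adversarial objective described above: the discriminator's gradient rewards separating real from fake, while the generator's gradient rewards producing samples the discriminator scores as real.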

Neuro-Vector-Symbolic Architecture

Illustrative of the rapid transformation of the AI Silecosystem, computer scientists recently introduced a novel AI computer operational structure, the neuro-vector-symbolic architecture (NVSA) [249]. The NVSA combines two existing, highly impactful AI strategies, deep neural networks (DNNs) and vector symbolic architectures (VSAs). DNNs excel at discerning objects in images, but lack the ability to differentiate among similarly shaped objects with differentiating secondary characteristics [250,251]. VSAs have the capacity to distinguish among entities having a multitude of secondary characteristics; however, they falter with image perception [252,253]. Thus, neither DNNs nor VSAs can independently solve image-based abstract reasoning problems adequately. NVSAs incorporate the strengths of both VSAs and DNNs without their inherent weaknesses to create an innovative AI architecture capable of solving complex, perceptual problems [254]. Applied architectural synergism, such as the NVSA, provides a model for evolving the AI Silecosystem to accommodate the burgeoning computational complexity brought about by the accelerated societal adoption and use of AI. Cancer specialists have adapted these novel architectures to aid image analysis [255] and tumor classification [256].
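The vector-symbolic half of this hybrid can be made concrete. The sketch below is a minimal bipolar VSA with hypothetical role and filler names, not a reproduction of the cited architecture: attribute roles are bound to values by element-wise multiplication, bound pairs are bundled by addition, and a stored attribute is recovered by unbinding followed by dot-product similarity.

```python
import numpy as np

# Minimal bipolar vector-symbolic sketch (role/filler names are hypothetical;
# this illustrates the general VSA idea, not the cited NVSA implementation).
rng = np.random.default_rng(0)
D = 10_000                                   # hyperdimensional vector width
vec = lambda: rng.choice([-1, 1], size=D)

COLOR, SHAPE = vec(), vec()                  # attribute "role" vectors
fillers = {"red": vec(), "blue": vec(), "square": vec(), "circle": vec()}

# Encode "a red square" as one vector: bind each role to its filler by
# element-wise multiplication, then bundle the bound pairs by addition.
obj = COLOR * fillers["red"] + SHAPE * fillers["square"]

# Unbind the COLOR role (multiplication is its own inverse for +/-1 vectors)
# and identify the stored filler by dot-product similarity.
query = COLOR * obj
sims = {name: int(query @ v) for name, v in fillers.items()}
best = max(sims, key=sims.get)               # recovers "red"
```

With D = 10,000 dimensions, the unbound query is nearly orthogonal to every filler except the correct one, which is why random high-dimensional codes can represent many secondary characteristics at once.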

The Democratization of Resources/Open-Source AI Software

Open-source software refers to computer software universally available to individuals for unrestricted use, modification and distribution [257]. Open-source software, beyond its ready, economical availability, accelerates computer discovery, engenders trust in the software and organically self-improves due to iterative public editing and optimization [258]. The AI community has access to a broad menu of open-source software applications. Two frequently used AI open-source programs, TensorFlow [259] and PyTorch [260], provide platforms for the development of machine learning programs. Computer scientists frequently utilize TensorFlow to develop and train deep neural networks [261,262]. PyTorch has a variety of uses including the construction of natural language processing applications [263,264] and image processing [265,266]. Open-source AI software promotes the free exchange of ideas among users, sustains the democratization and pace of AI Silecosystem maturation, and serves as a catalyst for continuing research, invention and insight. Currently, AI computer scientists employ open-source software solutions to facilitate brain cancer research [267], perform cancer digital pathology [268] and analyze cancer genomic data [269].
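As a flavor of what these open-source platforms provide, the following minimal PyTorch sketch (layer sizes and data are illustrative stand-ins, not any particular published model) defines a small feed-forward classifier of the kind used on tabular clinical features and takes a single training step.

```python
import torch
from torch import nn

# Illustrative sketch only: a small feed-forward binary classifier of the
# kind routinely prototyped in PyTorch; sizes and data are stand-ins.
class TinyClassifier(nn.Module):
    def __init__(self, n_features=10, n_hidden=16, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on random stand-in data (32 samples, 10 features).
x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The same few abstractions (modules, autograd, optimizers) underlie far larger research models, which is part of why open availability of these frameworks has so accelerated the field.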
In Table 1 below, we provide a summary of the significant historical and ongoing hardware, data and software innovations with regard to their impact on seven key metrics of the AI Silecosystem: AI algorithmic speed, efficiency, utility, agility, accuracy, security and accessibility.

4. Tribulations of the AI Silecosystem: Impending AI Winter or Early Twilight of a Paradigm in Demise?

Interest in, adoption of and innovation associated with the AI Silecosystem have surged in no small measure due to the recent advances in the field of generative AI. With this surge, however, has come an amplification of concerns over the real and emerging risks and dangers of the AI Silecosystem [270]. Some experts see a more powerful AI Silecosystem as an existential threat to humanity [271]; the Center for AI Safety recently advised that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” [272]. Consequently, some societal leaders and countries have sought to pause or curtail continued AI development and/or use [273,274].
Regarding the use of AI within the healthcare and oncology sphere, leaders have voiced three broad concerns: loss of autonomy, malpractice and loss of compassion.
Scholars envision, perhaps in the near future, an AI singularity event wherein the intellectual capabilities of AI surpass those of humans, potentially with AI demonstrating unpredictable and uncontrollable behavior [275,276]. In this scenario, humans may unintentionally cede autonomy over their healthcare decision-making to an AI algorithm based upon actual superior medical insight [277,278,279], misperceived medical authority [280] or psychological manipulation [281].
Computer scientists and AI end users have expressed concerns over factual errors generated by AI algorithms [183,282,283,284]. AI-informed healthcare may pose real physical danger for the patient as AI algorithms may be prone to misdiagnosis [285] and incomplete or inaccurate treatment recommendations [286,287,288]. Healthcare specialists now recommend careful assessment of AI algorithms used for medical decision-making and expert review of AI-generated recommendations to avoid medical mistreatment [289,290].
Many patients do not trust AI [291,292,293]. Patients feel slighted by AI algorithms as the algorithms may, without apparent logic, deny patients health care coverage and needed services [294,295]. Patients perceive AI decisions as obdurate, unnuanced and arbitrary [296,297]. AI lacks compassion. The AI Silecosystem may be intelligent, but to many it is not wise.
These challenges, if not addressed in a timely fashion, may precipitate the next AI intermission. Alternately, and potentially of greater consequence, the recent ascendancy of generative AI may presage an incipient twilight of the paradigm of “computer as thinking machine” along with the dawning of a succeeding, replacement paradigm, “computer as rational, sentient being”.
In Part I, we reviewed the primary hardware, data and software components of AI that enable its operation and advancement, encapsulated in the idea of the AI Silecosystem. As well, we chronicled the historical phases of progress and recession of the AI Silecosystem, conceptualized as the Kuhnian paradigm. In Part II that follows, we provide an example of practical utilization of the AI Silecosystem and illustrate its value to advance community oncology practice at the COH Comprehensive Cancer Center. We begin with a short discussion of the academic origins of the AI Silecosystem, and then proceed to detail its application at COH to advance community oncology practice.

5. The Academic Origins and Catalysis of the AI Silecosystem

The AI Silecosystem can trace its origins back to a number of key societal institutions that include commercial enterprises [298,299,300,301,302,303,304], the military [304,305,306,307] and, arguably, most prominently, academic centers [308,309,310]. Given their focus on research and education as well as their often substantial financial resources, academic centers became the natural home, incubator and accelerator of the AI Silecosystem. Because of their interdisciplinary and collaborative natures, academic centers often cross-pollinate ideas among departments and anticipate, react to and advance emerging paradigms such as the AI Silecosystem. Examples of notable AI advances originating from academic centers include invention of the Perceptron at the Cornell Aeronautical Laboratory in 1957 [311], conceptualization of the idea of AI at the 1956 Dartmouth Summer Research Project on Artificial Intelligence [312], construction of the first life-like robot at Waseda University in 1970 [313], demonstration of the first autonomous driving vehicle, the Stanford Cart, in 1979 [314] and creation of ImageNet, an annotated image repository, at Princeton University [315].
The emergence of the AI Silecosystem from academic centers accelerated adoption by academic healthcare and further advanced AI discoveries within the healthcare field. AI has established a widespread presence within medicine [316,317]. For instance, radiologists have harnessed AI to assist with interpretation of medical images [16,318,319], cardiologists use AI to diagnose and monitor patients with heart disease [320,321,322], gastroenterologists leverage AI to enhance the effectiveness of their interventions [323,324,325] and pulmonologists apply AI algorithms to optimize their diagnoses [326,327,328]. The AI Silecosystem has demonstrated tremendous value in oncology. Academic AI-based protocols have impacted oncologic approaches to the early diagnosis of cancer [329,330], targeted precision therapeutic recommendations [331] and palliative interventions [332,333]. After early applications in academic oncology, subsequent initiatives aimed to extend the AI Silecosystem paradigm to community oncology practice. Next, we chronicle these various initiatives.

6. Harnessing of the Academic Oncology AI Silecosystem to Advance Community Oncology Practice: The City of Hope Experience

Although the AI Silecosystem has firm footing within academic oncology, its place within community oncology practice continues to mature. The City of Hope Cancer Center (COH) comprises a central, academic campus together with over 30 community satellite oncology practices. The central academic campus hosts COH’s AI Silecosystem. In the following section, we describe the hardware, data, and software algorithm resources of the COH Silecosystem, the availability of these resources to the community oncology practices and the efforts to advance AI-empowered oncology care within the COH oncology enterprise (Figure 3).

6.1. Hardware Resources: High-Performance Computer Cluster

To support AI computations, COH maintains a high-performance computer center (HPCC) comprising 7300 CPU cores, 80 TB of memory and 176 GPUs. All COH physicians, faculty, staff and students, including community oncology members, have privileges to access the HPCC remotely through desktop terminal applications. Round-the-clock IT experts provide technical support to assist with access to and utilization of the HPCC.

6.2. Data Resources

The COH Data Center manages and ensures reliable availability of several petabytes of deidentified clinical and genomic data for AI-related projects. To facilitate AI research and clinical projects, the Data Center relies on an institution-wide data repository, POSEIDON (Precision Oncology Software Environment Interoperable Data Ontologies Network), to house patient clinical and genomic data [334]. AI-assisted natural language processing organizes POSEIDON data according to a Common Data Model to optimize and accelerate downstream data input into AI operational workflows. To date, POSEIDON has assembled nearly one quarter million unique real-world patient data sets. COH information and health care scientists have instituted and optimized operational protocols to efficiently structure patient-generated data for AI-based applications [335].

6.3. Software Resources

COH maintains a suite of bioinformatics and AI application modules on the HPCC. Users may utilize HPCC resources and pursue AI investigations independently or collaboratively with COH expert consultants. COH established its Department of Applied Artificial Intelligence and Data Science (AAI/DS) to educate the COH community, facilitate institutional AI-based research and provide clinical decision support to aid with AI modeling. AAI/DS hosts two forums each month. One forum, a journal club, reviews published manuscripts covering current areas of AI research including image analysis, machine learning and natural language processing. The second forum focuses on machine-learning-related institutional research initiatives, software applications and computational tools.
AAI/DS efforts have resulted in the creation of multiple machine-learning-based models to predict real-world clinical events. Following bone marrow transplantation (BMT), the development of severe sepsis has an associated mortality rate exceeding 50%. One AAI/DS project utilized an ensemble approach combining multiple random forest binary classification models to develop a tool to estimate the risk of patients developing life-threatening sepsis after BMT [336]. COH clinicians have employed this model to improve clinical care, avert sepsis-associated organ damage and reduce mortality after BMT.
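While the features and tuning of the cited sepsis model are specific to that work, the general pattern of an ensemble of random forest binary classifiers can be sketched with scikit-learn on synthetic stand-in data (all variables below are hypothetical, not the published model's inputs).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data (the published model's clinical features are not
# reproduced): 8 hypothetical post-BMT variables, 2 of which carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ensemble of random forest binary classifiers combined by soft voting.
ensemble = VotingClassifier(
    estimators=[(f"rf{i}", RandomForestClassifier(n_estimators=100, random_state=i))
                for i in range(3)],
    voting="soft",
)
ensemble.fit(X_train, y_train)
risk = ensemble.predict_proba(X_test)[:, 1]   # estimated per-patient risk score
```

Soft voting averages the forests' predicted probabilities, so the ensemble's output remains a calibrated-style risk score rather than a bare class label, which suits clinical risk stratification.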
Serious complications such as cardiac events, pneumonia, hemorrhage and death often follow cytoreductive cancer surgeries. Another AAI/DS initiative employed an explainable machine learning strategy to develop a model that predicts complications following cytoreductive surgery [337]. Surgeons at COH currently employ this model to identify patients at risk for post-operative complications and to implement preventive measures to mitigate these risks. For oncologists, estimating time until end of life in terminally ill patients poses a challenge; oncologists frequently overestimate it. Such misestimation may negatively impact patient and family emotional and financial planning as well as confound medical management. Working with COH palliative care specialists, AAI/DS used a gradient-boosted trees binary classifier to create a model estimating time to end of life [338]. This model reliably outperformed oncologists in predicting 90-day mortality in terminally ill patients.
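A gradient-boosted trees binary classifier of the general kind cited can likewise be sketched with scikit-learn; the features and labels below are synthetic stand-ins, not the published model's inputs, and the AUC evaluation is illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data (not the published model's inputs): 6 hypothetical
# patient features, with the "death within 90 days" label driven by two of them.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 6))
y = (1.5 * X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=600) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Gradient-boosted trees binary classifier, evaluated by ROC AUC on held-out data.
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3, random_state=42)
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

Boosting fits each shallow tree to the residual errors of its predecessors, which often yields strong discrimination on tabular clinical data with modest tuning.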
Alongside AAI/DS, associate COH departments and institutions further underpin the AI Silecosystem. The COH Center for Informatics, comprising the Divisions of Biostatistics, Clinical Research Information Support, Research Informatics and Mathematical Oncology, provides key computational support to the COH AI Silecosystem. The Center assists with the statistical design of research projects, restructures health and research data to be compatible with computer processing and aids with the visualization and analysis of data. AI projects supported by the Center for Informatics include the use of machine learning approaches to optimize, organize and structure electronic health care records for downstream artificial-intelligence-related projects [339], development of a machine learning platform to visualize and extract computationally employable information from biomedical and clinical data records [340] and use of machine learning approaches to advance the study and clinical implementation of immuno-oncology [341].
The Translational Genomics Research Institute (TGen), a COH-affiliated center, leverages translational genomics to innovate diagnostic methods, molecular prognostic tools and targeted therapies for cancer through independent and collaborative projects [342]. Implementation of AI and machine learning algorithms has accelerated TGen-driven insights, fortifying the COH AI Silecosystem. One recent TGen-initiated scientific endeavor applied machine learning to develop a novel early cancer detection method, targeted digital sequencing (TARDIS) [343].
The cumulative energies of the AAI/DS, the Center for Informatics and TGen, as well as the efforts of independent COH investigators, have helped create a rich resource of AI expertise and maintain a robust portfolio of AI research. Examples of other initiatives at COH that illustrate the depth and breadth of the AI Silecosystem include the use of AI autosegmentation for patients pending bone marrow transplant irradiation [344,345,346], AI-assisted oncologic drug design [347], expert critical review of clinical AI models [348], AI-based platforms for the evaluation and treatment of lung [349] and breast cancers [350], machine-learning-enabled pre-surgery physical status scoring [351] and AI-assisted irradiation dose estimation [352].

6.4. COH AI Silecosystem Engagement with the Community Oncology Network

Community oncology patients and physicians at COH interface with and gain advantage from the AI Silecosystem on multiple levels. Every day, COH patients benefit directly from AI-informed institutional clinical care protocols such as the AI-informed diagnostic radiology, radiation oncology, medical oncology and palliative care initiatives described above. Moreover, community oncology patients may qualify for AI-based national clinical trials sponsored by COH. One such trial, currently available at COH, uses machine learning to inform the treatment of high-risk prostate cancer (NCT04513717) [353]. Community oncology patients also collaterally benefit from inclusion of their health care and genomic data in the electronic health record as their data help shape and make more accurate the AI models from which their AI-informed healthcare derives [354].
The COH AI Silecosystem likewise aids community oncologists. The AI Silecosystem provides access to expert AI specialists capable of providing the community oncologist with insights into the clinical serviceability and utilization of AI-based healthcare applications. Additionally, COH community oncologists may avail themselves of the many educational opportunities such as AI-related journal clubs, seminars and lectures. Further, COH community oncologists may employ the AI Silecosystem data repository and institutional AI-associated hardware and clinical platforms for their own patient care [355]. Moreover, the COH AI Silecosystem helps expand AI-based clinical trial and research opportunities for community oncology providers.

7. Conclusions

The AI Silecosystem operates, innovates and advances as a synthesis of its component hardware, data and software elements. The AI Silecosystem has transformed in accordance with a Kuhnian paradigmatic progression with periods of rapid advancements punctuated by episodes of retreat. Recent signals of possible impending AI recession or even demise notwithstanding, the AI Silecosystem currently enjoys increasing societal currency and practical adoption. The academic oncology healthcare enterprise has significantly leveraged the AI Silecosystem to rapidly advance cancer care, in particular the clinical management of the community oncology patient. The COH academic-community oncology team alliance demonstrates the practical feasibility and the tangible dividend of such leverage. In the near term, we may reasonably anticipate continued enthusiasm for the AI Silecosystem and its further utilization within community oncology practice.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Poola, I. How artificial intelligence is impacting real life every day. Int. J. Adv. Res. Dev. 2017, 2, 96–100. [Google Scholar]
  2. Lee, R.S.T. Artificial Intelligence in Daily Life; Springer: Singapore, 2020. [Google Scholar]
  3. Bhinder, B.; Gilvary, C.; Madhukar, N.S.; Elemento, O. Artificial Intelligence in Cancer Research and Precision Medicine. Cancer Discov. 2021, 11, 900–915. [Google Scholar] [CrossRef] [PubMed]
  4. Goldenberg, S.L.; Nir, G.; Salcudean, S.E. A new era: Artificial intelligence and machine learning in prostate cancer. Nat. Rev. Urol. 2019, 16, 391–403. [Google Scholar] [CrossRef] [PubMed]
  5. Cardoso, M.J.; Houssami, N.; Pozzi, G.; Séroussi, B. Artificial intelligence (AI) in breast cancer care–Leveraging multidisciplinary skills to improve care. Breast 2021, 56, 110–113. [Google Scholar] [CrossRef]
  6. Bhalla, S.; Laganà, A. Artificial intelligence for precision oncology. In Computational Methods for Precision Oncology; Springer: Berlin/Heidelberg, Germany, 2022; pp. 249–268. [Google Scholar]
  7. Dlamini, Z.; Francies, F.Z.; Hull, R.; Marima, R. Artificial intelligence (AI) and big data in cancer and precision oncology. Comput. Struct. Biotechnol. J. 2020, 18, 2300–2311. [Google Scholar] [CrossRef]
  8. Rompianesi, G.; Pegoraro, F.; Ceresa, C.D.; Montalti, R.; Troisi, R.I. Artificial intelligence for precision oncology: Beyond patient stratification. NPJ Precis. Oncol. 2019, 3, 6. [Google Scholar]
  9. Rompianesi, G.; Pegoraro, F.; Ceresa, C.D.; Montalti, R.; Troisi, R.I. Artificial intelligence in the diagnosis and management of colorectal cancer liver metastases. World J. Gastroenterol. 2022, 28, 108. [Google Scholar] [CrossRef]
  10. Christie, J.R.; Lang, P.; Zelko, L.M.; Palma, D.A.; Abdelrazek, M.; Mattonen, S.A. Artificial intelligence in lung cancer: Bridging the gap between computational power and clinical decision-making. Can. Assoc. Radiol. J. 2021, 72, 86–97. [Google Scholar] [CrossRef]
  11. Derbal, Y. Can artificial intelligence improve cancer treatments? Health Inform. J. 2022, 28, 14604582221102314. [Google Scholar] [CrossRef]
  12. Ibrahim, A.; Gamble, P.; Jaroensri, R.; Abdelsamea, M.M.; Mermel, C.H.; Chen, P.-H.C.; Rakha, E.A. Artificial intelligence in digital breast pathology: Techniques and applications. Breast 2020, 49, 267–273. [Google Scholar] [CrossRef] [Green Version]
  13. Jiang, Y.; Yang, M.; Wang, S.; Li, X.; Sun, Y. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Commun. 2020, 40, 154–166. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Viswanathan, V.S.; Toro, P.; Corredor, G.; Mukhopadhyay, S.; Madabhushi, A. The state of the art for artificial intelligence in lung digital pathology. J. Pathol. 2022, 257, 413–429. [Google Scholar] [CrossRef] [PubMed]
  15. Försch, S.; Klauschen, F.; Hufnagl, P.; Roth, W. Artificial intelligence in pathology. Dtsch. Ärzteblatt Int. 2021, 118, 199. [Google Scholar] [CrossRef]
  16. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef] [PubMed]
  17. Tran, W.T.; Sadeghi-Naini, A.; Lu, F.-I.; Gandhi, S.; Meti, N.; Brackstone, M.; Rakovitch, E.; Curpen, B. Computational radiology in breast cancer screening and diagnosis using artificial intelligence. Can. Assoc. Radiol. J. 2021, 72, 98–108. [Google Scholar] [CrossRef]
  18. Chassagnon, G.; Vakalopoulou, M.; Paragios, N.; Revel, M.-P. Artificial intelligence applications for thoracic imaging. Eur. J. Radiol. 2020, 123, 108774. [Google Scholar] [CrossRef] [Green Version]
  19. Tagliafico, A.S.; Piana, M.; Schenone, D.; Lai, R.; Massone, A.M.; Houssami, N. Overview of radiomics in breast cancer diagnosis and prognostication. Breast 2020, 49, 74–80. [Google Scholar] [CrossRef] [Green Version]
  20. Frownfelter, J.; Blau, S.; Page, R.D.; Showalter, J.; Miller, K.; Kish, J.; Valley, A.W.; Nabhan, C. Artificial intelligence (AI) to improve patient outcomes in community oncology practices. J. Clin. Oncol. 2019, 37, e18098. [Google Scholar] [CrossRef]
  21. Kappel, C.; Rushton-Marovac, M.; Leong, D.; Dent, S. Pursuing Connectivity in Cardio-Oncology Care—The Future of Telemedicine and Artificial Intelligence in Providing Equity and Access to Rural Communities. Front. Cardiovasc. Med. 2022, 9, 927769. [Google Scholar] [CrossRef]
  22. Ye, P.; Butler, B.; Vo, D.; He, B.; Turnwald, B.; Hoverman, J.R.; Indurlal, P.; Garey, J.S.; Hoang, S.N. The initial outcome of deploying a mortality prediction tool at community oncology practices. J. Clin. Oncol. 2022, 40, 1521. [Google Scholar] [CrossRef]
  23. Kuhn, T.S. The Structure of Scientific Revolutions; University of Chicago Press: Chicago, IL, USA, 1962; Volume XV, p. 172. [Google Scholar]
  24. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biol. 1990, 52, 99–115. [Google Scholar] [CrossRef]
  25. Rosenblatt, F. The Perceptron, a Perceiving and Recognizing Automaton Project Para; Cornell Aeronautical Laboratory: Buffalo, NY, USA, 1957. [Google Scholar]
  26. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef] [Green Version]
  27. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
  28. McCarthy, J.; Minsky, M.; Rochester, N.; Shannon, C.E. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Mag. 2006, 27, 12–14. [Google Scholar]
  29. Pierce, J.R.; Carroll, J.B. Language and Machines: Computers in Translation and Linguistics; National Academies Press: Washington, DC, USA, 1966. [Google Scholar]
  30. Science Research Council. Artificial Intelligence; a Paper Symposium; Science Research Council: London, UK, 1973; p. iv. 45p. [Google Scholar]
  31. ICOT. Shin-Sedai-Konpyūta-Gijutsu-Kaihatsu-Kikō, FGCS’92. Fifth Generation Computer Systems; IOS Press: Amsterdam, The Netherlands, 1992; Volume 1. [Google Scholar]
  32. Mack, C.A. Fifty years of Moore’s law. IEEE Trans. Semicond. Manuf. 2011, 24, 202–207. [Google Scholar] [CrossRef]
  33. Gepner, P.; Kowalik, M.K. Multi-core processors: New way to achieve high system performance. In Proceedings of the International Symposium on Parallel Computing in Electrical Engineering (PARELEC’06), Bialystok, Poland, 13–17 September 2006. [Google Scholar]
  34. Goda, K.; Kitsuregawa, M. The history of storage systems. Proc. IEEE 2012, 100, 1433–1440. [Google Scholar] [CrossRef]
  35. Arute, F.; Arya, K.; Babbush, R.; Bacon, D.; Bardin, J.C.; Barends, R.; Biswas, R.; Boixo, S.; Brandao, F.G.; Buell, D.A. Quantum supremacy using a programmable superconducting processor. Nature 2019, 574, 505–510. [Google Scholar] [CrossRef] [Green Version]
  36. Thomford, N.E.; Senthebane, D.A.; Rowe, A.; Munro, D.; Seele, P.; Maroyi, A.; Dzobo, K. Natural products for drug discovery in the 21st century: Innovations for novel drug discovery. Int. J. Mol. Sci. 2018, 19, 1578. [Google Scholar] [CrossRef] [Green Version]
  37. Jain, S.; Ziauddin, J.; Leonchyk, P.; Yenkanchi, S.; Geraci, J. Quantum and classical machine learning for the classification of non-small-cell lung cancer patients. SN Appl. Sci. 2020, 2, 1088. [Google Scholar] [CrossRef]
  38. Davids, J.; Lidströmer, N.; Ashrafian, H. Artificial Intelligence in Medicine Using Quantum Computing in the Future of Healthcare. In Artificial Intelligence in Medicine; Springer: Berlin/Heidelberg, Germany, 2022; pp. 423–446. [Google Scholar]
  39. Niraula, D.; Jamaluddin, J.; Matuszak, M.M.; Haken, R.K.T.; Naqa, I.E. Quantum deep reinforcement learning for clinical decision support in oncology: Application to adaptive radiotherapy. Sci. Rep. 2021, 11, 23545. [Google Scholar] [CrossRef]
  40. Majumdar, R.; Baral, B.; Bhalgamiya, B.; Roy, T.D. Histopathological Cancer Detection Using Hybrid Quantum Computing. arXiv 2023, arXiv:2302.04633. [Google Scholar]
  41. Madakam, S.; Lake, V.; Lake, V.; Lake, V. Internet of Things (IoT): A literature review. J. Comput. Commun. 2015, 3, 164. [Google Scholar] [CrossRef] [Green Version]
  42. Čolaković, A.; Hadžialić, M. Internet of Things (IoT): A review of enabling technologies, challenges, and open research issues. Comput. Netw. 2018, 144, 17–39. [Google Scholar] [CrossRef]
  43. Valsalan, P.; Baomar, T.A.B.; Baabood, A.H.O. IoT based health monitoring system. J. Crit. Rev. 2020, 7, 739–743. [Google Scholar]
  44. Yuehong, Y.; Zeng, Y.; Chen, X.; Fan, Y. The internet of things in healthcare: An overview. J. Ind. Inf. Integr. 2016, 1, 3–13. [Google Scholar]
  45. Saloni, S.; Hegde, A. WiFi-aware as a connectivity solution for IoT pairing IoT with WiFi aware technology: Enabling new proximity based services. In Proceedings of the 2016 International Conference on Internet of Things and Applications (IOTA), Pune, India, 22–24 January 2016. [Google Scholar]
  46. Aldhyani, T.H.; Khan, M.A.; Almaiah, M.A.; Alnazzawi, N.; Hwaitat, A.K.A.; Elhag, A.; Shehab, R.T.; Alshebami, A.S. A Secure internet of medical things Framework for Breast Cancer Detection in Sustainable Smart Cities. Electronics 2023, 12, 858. [Google Scholar] [CrossRef]
  47. Jabarulla, M.Y.; Lee, H.-N. A blockchain and artificial intelligence-based, patient-centric healthcare system for combating the COVID-19 pandemic: Opportunities and applications. Healthcare 2021, 9, 1019. [Google Scholar] [CrossRef]
  48. Srinivasu, P.N.; Ijaz, M.F.; Shafi, J.; Woźniak, M.; Sujatha, R. 6G driven fast computational networking framework for healthcare applications. IEEE Access 2022, 10, 94235–94248. [Google Scholar] [CrossRef]
  49. Prayitno; Shyu, C.R.; Putra, K.T.; Chen, H.C.; Tsai, Y.Y.; Hossain, K.T.; Jiang, W.; Shae, Z.Y. A systematic review of federated learning in the healthcare area: From the perspective of data properties and applications. Appl. Sci. 2021, 11, 11191. [Google Scholar] [CrossRef]
  50. Sung, T.-W.; Tsai, P.-W.; Gaber, T.; Lee, C.-Y. Artificial Intelligence of Things (AIoT) technologies and applications. Wirel. Commun. Mob. Comput. 2021, 2021, 9781271. [Google Scholar] [CrossRef]
  51. Krasniqi, X.; Hajrizi, E. Use of IoT technology to drive the automotive industry from connected to full autonomous vehicles. IFAC-Pap. 2016, 49, 269–274. [Google Scholar] [CrossRef]
  52. Jia, W.; Wang, S.; Xie, Y.; Chen, Z.; Gong, K. Disruptive technology identification of intelligent logistics robots in AIoT industry: Based on attributes and functions analysis. Syst. Res. Behav. Sci. 2022, 39, 557–568. [Google Scholar] [CrossRef]
  53. Wazid, M.; Das, A.K.; Park, Y. Blockchain-Envisioned Secure Authentication Approach in AIoT: Applications, Challenges, and Future Research. Wirel. Commun. Mob. Comput. 2021, 2021, 3866006. [Google Scholar] [CrossRef]
  54. Perez, F.; Nolde, M.; Crane, T.E.; Kebria, M.; Chan, K.; Dellinger, T.; Sun, V. Integrative review of remote patient monitoring in gynecologic and urologic surgical oncology. J. Surg. Oncol. 2023, 127, 1054–1061. [Google Scholar] [CrossRef]
  55. Chen, J.; Ran, X. Deep learning with edge computing: A review. Proc. IEEE 2019, 107, 1655–1674. [Google Scholar] [CrossRef]
  56. Uddin, M.Z. A wearable sensor-based activity prediction system to facilitate edge computing in smart healthcare system. J. Parallel Distrib. Comput. 2019, 123, 46–53. [Google Scholar] [CrossRef]
  57. Verma, P.; Fatima, S. Smart Healthcare Applications and Real-Time Analytics through Edge Computing. In Internet of Things Use Cases for the Healthcare Industry; Springer: Cham, Switzerland, 2020; pp. 241–270. [Google Scholar]
  58. Cao, B.; Zhang, L.; Li, Y.; Feng, D.; Cao, W. Intelligent offloading in multi-access edge computing: A state-of-the-art review and framework. IEEE Commun. Mag. 2019, 57, 56–62. [Google Scholar] [CrossRef]
  59. Zhou, Z.; Chen, X.; Li, E.; Zeng, L.; Luo, K.; Zhang, J. Edge intelligence: Paving the last mile of artificial intelligence with edge computing. Proc. IEEE 2019, 107, 1738–1762. [Google Scholar] [CrossRef] [Green Version]
  60. Deng, S.; Zhao, H.; Fang, W.; Yin, J.; Dustdar, S.; Zomaya, A.Y. Edge intelligence: The confluence of edge computing and artificial intelligence. IEEE Internet Things J. 2020, 7, 7457–7469. [Google Scholar] [CrossRef] [Green Version]
  61. Chowdhury, A.; Kassem, H.; Padoy, N.; Umeton, R.; Karargyris, A. A Review of Medical Federated Learning: Applications in Oncology and Cancer Research. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, proceedings of the 7th International Workshop, BrainLes 2021, Held in Conjunction with MICCAI 2021, Virtual Event, 27 September 2021; Springer: Cham, Switzerland, 2022; Part I. [Google Scholar]
  62. Rodríguez, C. AIoT for Achieving Sustainable Development Goals. In Proceedings of the 4th International Conference on Recent Trends in Advanced Computing, VIT, Chennai, India, 11–12 November 2021. [Google Scholar]
  63. Rahimi, M.; Navimipour, N.J.; Hosseinzadeh, M.; Moattar, M.H.; Darwesh, A. A systematic review on cloud computing. J. Supercomput. 2014, 68, 1321–1346. [Google Scholar]
  64. Dang, L.M.; Piran, M.J.; Han, D.; Min, K.; Moon, H. Cloud healthcare services: A comprehensive and systematic literature review. Trans. Emerg. Telecommun. Technol. 2022, 33, e4473. [Google Scholar]
  65. Raza, K.; Qazi, S.; Sahu, A.; Verma, S. Computational Intelligence in Oncology: Past, Present, and Future. In Computational Intelligence in Oncology: Applications in Diagnosis, Prognosis and Therapeutics of Cancers; Springer: Berlin/Heidelberg, Germany, 2022; pp. 3–18. [Google Scholar]
  66. Liu, X.; Luo, X.; Jiang, C.; Zhao, H. Difficulties and challenges in the development of precision medicine. Clin. Genet. 2019, 95, 569–574. [Google Scholar] [CrossRef] [PubMed]
  67. Schuman, C.D.; Kulkarni, S.R.; Parsa, M.; Mitchell, J.P.; Date, P.; Kay, B. Opportunities for neuromorphic computing algorithms and applications. Nat. Comput. Sci. 2022, 2, 10–19. [Google Scholar] [CrossRef]
  68. Mead, C. Neuromorphic electronic systems. Proc. IEEE 1990, 78, 1629–1636. [Google Scholar] [CrossRef]
  69. Indiveri, G.; Linares-Barranco, B.; Hamilton, T.J.; Schaik, A.V.; Etienne-Cummings, R.; Delbruck, T.; Liu, S.-C.; Dudek, P.; Häfliger, P.; Renaud, S. Neuromorphic silicon neuron circuits. Front. Neurosci. 2011, 5, 73. [Google Scholar] [CrossRef] [Green Version]
  70. Thakur, C.S.; Molin, J.L.; Cauwenberghs, G.; Indiveri, G.; Kumar, K.; Qiao, N.; Schemmel, J.; Wang, R.; Chicca, E.; Olson Hasler, J. Large-scale neuromorphic spiking array processors: A quest to mimic the brain. Front. Neurosci. 2018, 12, 891. [Google Scholar] [CrossRef]
  71. Bulárka, S.; Gontean, A. Brain-computer interface review. In Proceedings of the 2016 12th IEEE International Symposium on Electronics and Telecommunications (ISETC), Timisoara, Romania, 27–28 October 2016. [Google Scholar]
  72. Yu, Z.; Zahid, A.; Ansari, S.; Abbas, H.; Abdulghani, A.M.; Heidari, H.; Imran, M.A.; Abbasi, Q.H. Hardware-based hopfield neuromorphic computing for fall detection. Sensors 2020, 20, 7226. [Google Scholar] [CrossRef] [PubMed]
  73. Ceolini, E.; Frenkel, C.; Shrestha, S.B.; Taverni, G.; Khacef, L.; Payvand, M.; Donati, E. Hand-gesture recognition based on EMG and event-based camera sensor fusion: A benchmark in neuromorphic computing. Front. Neurosci. 2020, 14, 637. [Google Scholar] [CrossRef]
  74. Aitsam, M.; Davies, S.; Di Nuovo, A. Neuromorphic Computing for Interactive Robotics: A Systematic Review. IEEE Access 2022, 10, 122261–122279. [Google Scholar] [CrossRef]
  75. Yu, Z.; Abdulghani, A.M.; Zahid, A.; Heidari, H.; Imran, M.A.; Abbasi, Q.H. An overview of neuromorphic computing for artificial intelligence enabled hardware-based hopfield neural network. IEEE Access 2020, 8, 67085–67099. [Google Scholar] [CrossRef]
  76. Sun, B.; Guo, T.; Zhou, G.; Ranjan, S.; Jiao, Y.; Wei, L.; Zhou, Y.N.; Wu, Y.A. Synaptic devices based neuromorphic computing applications in artificial intelligence. Mater. Today Phys. 2021, 18, 100393. [Google Scholar] [CrossRef]
  77. Roy, K.; Jaiswal, A.; Panda, P. Towards spike-based machine intelligence with neuromorphic computing. Nature 2019, 575, 607–617. [Google Scholar] [CrossRef] [PubMed]
  78. Pierangeli, D.; Palmieri, V.; Marcucci, G.; Moriconi, C.; Perini, G.; De Spirito, M.; Papi, M.; Conti, C. Optical neural network by disordered tumor spheroids. In Proceedings of the 2019 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC), Munich, Germany, 23–27 June 2019. [Google Scholar]
  79. Rong, G.; Mendez, A.; Assi, E.B.; Zhao, B.; Sawan, M. Artificial intelligence in healthcare: Review and prediction case studies. Engineering 2020, 6, 291–301. [Google Scholar] [CrossRef]
  80. Pierangeli, D.; Palmieri, V.; Marcucci, G.; Moriconi, C.; Perini, G.; De Spirito, M.; Papi, M.; Conti, C. Optical Neural Network for Cancer Morphodynamics Sensing. In Nonlinear Optics; Optica Publishing Group: Washington, DC, USA, 2019. [Google Scholar]
  81. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
  82. DasGupta, B.; Schnitger, G. Analog versus discrete neural networks. Neural Comput. 1996, 8, 805–818. [Google Scholar] [CrossRef]
  83. Kakkar, V. Comparative study on analog and digital neural networks. Int. J. Comput. Sci. Netw. Secur. 2009, 9, 14–21. [Google Scholar]
  84. Xiao, T.P.; Bennett, C.H.; Feinberg, B.; Agarwal, S.; Marinella, M.J. Analog architectures for neural network acceleration based on non-volatile memory. Appl. Phys. Rev. 2020, 7, 031301. [Google Scholar] [CrossRef]
  85. Cramer, B.; Billaudelle, S.; Kanya, S.; Leibfried, A.; Grübl, A.; Karasenko, V.; Pehle, C.; Schreiber, K.; Stradmann, Y.; Weis, J.; et al. Surrogate gradients for analog neuromorphic computing. Proc. Natl. Acad. Sci. USA 2022, 119, e2109194119. [Google Scholar] [CrossRef]
  86. Chandrasekaran, S.T.; Jayaraj, A.; Karnam, V.E.G.; Banerjee, I.; Sanyal, A. Fully integrated analog machine learning classifier using custom activation function for low resolution image classification. IEEE Trans. Circuits Syst. I Regul. Pap. 2021, 68, 1023–1033. [Google Scholar] [CrossRef]
  87. Pan, C.-H.; Hsieh, H.-Y.; Tang, K.-T. An analog multilayer perceptron neural network for a portable electronic nose. Sensors 2012, 13, 193–207. [Google Scholar] [CrossRef]
  88. Odame, K.; Nyamukuru, M.; Shahghasemi, M.; Bi, S.; Kotz, D. Analog Gated Recurrent Unit Neural Network for Detecting Chewing Events. IEEE Trans. Biomed. Circuits Syst. 2022, 16, 1106–1115. [Google Scholar] [CrossRef]
  89. Perfetti, R.; Ricci, E. Analog neural network for support vector machine learning. IEEE Trans. Neural Netw. 2006, 17, 1085–1091. [Google Scholar] [CrossRef]
  90. Krestinskaya, O.; James, A.P.; Chua, L.O. Neuromemristive circuits for edge computing: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 4–23. [Google Scholar] [CrossRef] [Green Version]
  91. Moon, S.; Shin, K.; Jeon, D. Enhancing reliability of analog neural network processors. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2019, 27, 1455–1459. [Google Scholar] [CrossRef]
  92. Geske, G.; Stupmann, F.; Wego, A. High speed color recognition with an analog neural network chip. In Proceedings of the IEEE International Conference on Industrial Technology, Maribor, Slovenia, 10–12 December 2003. [Google Scholar]
  93. Kieffer, C.; Genot, A.J.; Rondelez, Y.; Gines, G. Molecular Computation for Molecular Classification. Adv. Biol. 2023, 7, 2200203. [Google Scholar] [CrossRef]
  94. Pattichis, C.; Schnorrenberg, F.; Schizas, C.; Pattichis, M.; Kyriacou, K. A Modular Neural Network System for the Analysis of Nuclei in Histopathological Sections. In Computational Intelligence Processing in Medical Diagnosis; Physica: Heidelberg, Germany, 2002; pp. 291–322. [Google Scholar]
  95. Morro, A.; Canals, V.; Oliver, A.; Alomar, M.L.; Galan-Prado, F.; Ballester, P.J.; Rossello, J.L. A stochastic spiking neural network for virtual screening. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 1371–1375. [Google Scholar] [CrossRef]
  96. Jiang, J.; Parto, K.; Cao, W.; Banerjee, K. Ultimate monolithic-3D integration with 2D materials: Rationale, prospects, and challenges. IEEE J. Electron Devices Soc. 2019, 7, 878–887. [Google Scholar] [CrossRef]
  97. Wong, S.; El-Gamal, A.; Griffin, P.; Nishi, Y.; Pease, F.; Plummer, J. Monolithic 3D integrated circuits. In Proceedings of the 2007 International Symposium on VLSI Technology, Systems and Applications (VLSI-TSA), Hsinchu, Taiwan, 23–25 April 2007. [Google Scholar]
  98. Torres-Mapa, M.L.; Singh, M.; Simon, O.; Mapa, J.L.; Machida, M.; Günther, A.; Roth, B.; Heinemann, D.; Terakawa, M.; Heisterkamp, A. Fabrication of a monolithic lab-on-a-chip platform with integrated hydrogel waveguides for chemical sensing. Sensors 2019, 19, 4333. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  99. Dematté, L.; Prandi, D. GPU computing for systems biology. Brief. Bioinform. 2010, 11, 323–333. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  100. Zaki, G.; Plishker, W.; Li, W.; Lee, J.; Quon, H.; Wong, J.; Shekhar, R. The utility of cloud computing in analyzing GPU-accelerated deformable image registration of CT and CBCT images in head and neck cancer radiation therapy. IEEE J. Transl. Eng. Health Med. 2016, 4, 4300311. [Google Scholar] [CrossRef]
  101. Chakrabarty, S.; Abidi, S.A.; Mousa, M.; Mokkarala, M.; Hren, I.; Yadav, D.; Kelsey, M.; LaMontagne, P.; Wood, J.; Adams, M. Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-Oncology (I3CR-WANO). JCO Clin. Cancer Inform. 2023, 7, e2200177. [Google Scholar] [CrossRef] [PubMed]
  102. Khan, A.I.; Keshavarzi, A.; Datta, S. The future of ferroelectric field-effect transistor technology. Nat. Electron. 2020, 3, 588–597. [Google Scholar] [CrossRef]
  103. Ajayan, J.; Mohankumar, P.; Nirmal, D.; Joseph, L.L.; Bhattacharya, S.; Sreejith, S.; Kollem, S.; Rebelli, S.; Tayal, S.; Mounika, B. Ferroelectric Field Effect Transistors (FeFETs): Advancements, Challenges and Exciting Prospects for Next Generation Non-Volatile Memory (NVM) Applications. Mater. Today Commun. 2023, 35, 105591. [Google Scholar] [CrossRef]
  104. Pan, F.; Gao, S.; Chen, C.; Song, C.; Zeng, F. Recent progress in resistive random access memories: Materials, switching mechanisms, and performance. Mater. Sci. Eng. R Rep. 2014, 83, 1–59. [Google Scholar] [CrossRef]
  105. Gupta, V.; Kapur, S.; Saurabh, S.; Grover, A. Resistive random access memory: A review of device challenges. IETE Tech. Rev. 2020, 37, 377–390. [Google Scholar] [CrossRef]
  106. Wu, H.; Wang, X.H.; Gao, B.; Deng, N.; Lu, Z.; Haukness, B.; Bronner, G.; Qian, H. Resistive random access memory for future information processing system. Proc. IEEE 2017, 105, 1770–1789. [Google Scholar] [CrossRef]
  107. Girard, P.; Cheng, Y.; Virazel, A.; Zhao, W.; Bishnoi, R.; Tahoori, M.B. A survey of test and reliability solutions for magnetic random access memories. Proc. IEEE 2020, 109, 149–169. [Google Scholar] [CrossRef]
  108. Sethu, K.K.V.; Ghosh, S.; Couet, S.; Swerts, J.; Sorée, B.; De Boeck, J.; Kar, G.S.; Garello, K. Optimization of Tungsten β-phase window for spin-orbit-torque magnetic random-access memory. Phys. Rev. Appl. 2021, 16, 064009. [Google Scholar] [CrossRef]
  109. Chen, A. A review of emerging non-volatile memory (NVM) technologies and applications. Solid-State Electron. 2016, 125, 25–38. [Google Scholar] [CrossRef]
  110. Si, M.; Cheng, H.-Y.; Ando, T.; Hu, G.; Ye, P.D. Overview and outlook of emerging non-volatile memories. MRS Bull. 2021, 46, 946–958. [Google Scholar] [CrossRef]
  111. Noé, P.; Vallée, C.; Hippert, F.; Fillot, F.; Raty, J.-Y. Phase-change materials for non-volatile memory devices: From technological challenges to materials science issues. Semicond. Sci. Technol. 2018, 33, 013002. [Google Scholar] [CrossRef]
  112. Ambrogio, S.; Narayanan, P.; Tsai, H.; Mackin, C.; Spoon, K.; Chen, A.; Fasoli, A.; Friz, A.; Burr, G.W. Accelerating deep neural networks with analog memory devices. In Proceedings of the 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy, 31 August–2 September 2020. [Google Scholar]
  113. Abunahla, H.; Halawani, Y.; Alazzam, A.; Mohammad, B. NeuroMem: Analog graphene-based resistive memory for artificial neural networks. Sci. Rep. 2020, 10, 9473. [Google Scholar] [CrossRef] [PubMed]
  114. Zheng, X.; Zarcone, R.V.; Levy, A.; Khwa, W.-S.; Raina, P.; Olshausen, B.A.; Wong, H.P. High-density analog image storage in an analog-valued non-volatile memory array. Neuromorphic Comput. Eng. 2022, 2, 044018. [Google Scholar] [CrossRef]
  115. Byun, S.-J.; Kim, D.-G.; Park, K.-D.; Choi, Y.-J.; Kumar, P.; Ali, I.; Kim, D.-G.; Yoo, J.-M.; Huh, H.-K.; Jung, Y.-J.; et al. A Low-Power Analog Processor-in-Memory-Based Convolutional Neural Network for Biosensor Applications. Sensors 2022, 22, 4555. [Google Scholar] [CrossRef]
  116. Tzouvadaki, I.; Gkoupidenis, P.; Vassanelli, S.; Wang, S.; Prodromakis, T. Interfacing Biology and Electronics with Memristive Materials. Adv. Mater. 2023, e2210035, early view. [Google Scholar] [CrossRef]
  117. Greener, J.G.; Kandathil, S.M.; Moffat, L.; Jones, D.T. A guide to machine learning for biologists. Nat. Rev. Mol. Cell Biol. 2022, 23, 40–55. [Google Scholar] [CrossRef]
  118. Choi, R.Y.; Coyner, A.S.; Kalpathy-Cramer, J.; Chiang, M.F.; Campbell, J.P. Introduction to Machine Learning, Neural Networks, and Deep Learning. Transl. Vis. Sci. Technol. 2020, 9, 14. [Google Scholar]
  119. Zou, J.; Huss, M.; Abid, A.; Mohammadi, P.; Torkamani, A.; Telenti, A. A primer on deep learning in genomics. Nat. Genet. 2019, 51, 12–18. [Google Scholar] [CrossRef]
  120. Sayood, K. Introduction to Data Compression; Morgan Kaufmann: Burlington, MA, USA, 2017. [Google Scholar]
  121. Dirik, C.; Jacob, B. The performance of PC solid-state disks (SSDs) as a function of bandwidth, concurrency, device architecture, and system organization. ACM SIGARCH Comput. Archit. News 2009, 37, 279–289. [Google Scholar] [CrossRef] [Green Version]
  122. Meena, J.S.; Sze, S.M.; Chand, U.; Tseng, T.-Y. Overview of emerging nonvolatile memory technologies. Nanoscale Res. Lett. 2014, 9, 526. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  123. Bolón-Canedo, V.; Sánchez-Maroño, N.; Alonso-Betanzos, A. A review of feature selection methods on synthetic data. Knowl. Inf. Syst. 2013, 34, 483–519. [Google Scholar] [CrossRef]
  124. Raghunathan, T.E. Synthetic data. Annu. Rev. Stat. Its Appl. 2021, 8, 129–140. [Google Scholar] [CrossRef]
  125. Arora, A.; Arora, A. Generative adversarial networks and synthetic patient data: Current challenges and future perspectives. Future Healthc. J. 2022, 9, 190. [Google Scholar] [CrossRef] [PubMed]
  126. Rajotte, J.-F.; Bergen, R.; Buckeridge, D.L.; El Emam, K.; Ng, R.; Strome, E. Synthetic data as an enabler for machine learning applications in medicine. Iscience 2022, 25, 105331. [Google Scholar] [CrossRef] [PubMed]
  127. Delaney, A.M.; Brophy, E.; Ward, T.E. Synthesis of realistic ECG using generative adversarial networks. arXiv 2019, arXiv:1909.09150. [Google Scholar]
  128. Ramesh, V.; Vatanparvar, K.; Nemati, E.; Nathan, V.; Rahman, M.M.; Kuang, J. CoughGAN: Generating synthetic coughs that improve respiratory disease classification. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020. [Google Scholar]
  129. Braddon, A.E.; Robinson, S.; Alati, R.; Betts, K.S. Exploring the utility of synthetic data to extract more value from sensitive health data assets: A focused example in perinatal epidemiology. Paediatr. Perinat. Epidemiol. 2022, 37, 292–300. [Google Scholar] [CrossRef]
  130. Thomas, J.A.; Foraker, R.E.; Zamstein, N.; Morrow, J.D.; Payne, P.R.; Wilcox, A.B. Demonstrating an approach for evaluating synthetic geospatial and temporal epidemiologic data utility: Results from analyzing >1.8 million SARS-CoV-2 tests in the United States National COVID Cohort Collaborative (N3C). J. Am. Med. Inform. Assoc. 2022, 29, 1350–1365. [Google Scholar] [CrossRef]
  131. Hernandez, M.; Epelde, G.; Beristain, A.; Álvarez, R.; Molina, C.; Larrea, X.; Alberdi, A.; Timoleon, M.; Bamidis, P.; Konstantinidis, E. Incorporation of synthetic data generation techniques within a controlled data processing workflow in the health and wellbeing domain. Electronics 2022, 11, 812. [Google Scholar] [CrossRef]
  132. Gonzales, A.; Guruswamy, G.; Smith, S.R. Synthetic data in health care: A narrative review. PLoS Digit. Health 2023, 2, e0000082. [Google Scholar] [CrossRef]
  133. D’Amico, S.; Dall’Olio, D.; Sala, C.; Dall’Olio, L.; Sauta, E.; Zampini, M.; Asti, G.; Lanino, L.; Maggioni, G.; Campagna, A. Synthetic data generation by artificial intelligence to accelerate research and precision medicine in hematology. JCO Clin. Cancer Inform. 2023, 7, e2300021. [Google Scholar] [CrossRef]
  134. Hahn, W.; Schütte, K.; Schultz, K.; Wolkenhauer, O.; Sedlmayr, M.; Schuler, U.; Eichler, M.; Bej, S.; Wolfien, M. Contribution of Synthetic Data Generation towards an Improved Patient Stratification in Palliative Care. J. Pers. Med. 2022, 12, 1278. [Google Scholar] [CrossRef] [PubMed]
  135. Elias, A.; Paradies, Y. The costs of institutional racism and its ethical implications for healthcare. J. Bioethical Inq. 2021, 18, 45–58. [Google Scholar] [CrossRef] [PubMed]
  136. Taylor, J. Racism, Inequality, and Health Care for African Americans. The Century Foundation, 19 December 2019. [Google Scholar]
  137. Matalon, D.R.; Zepeda-Mendoza, C.J.; Aarabi, M.; Brown, K.; Fullerton, S.M.; Kaur, S.; Quintero-Rivera, F.; Vatta, M.; ACMG Social, Ethical and Legal Issues Committee; et al. Clinical, technical, and environmental biases influencing equitable access to clinical genetics/genomics testing: A points to consider statement of the American College of Medical Genetics and Genomics (ACMG). Genet. Med. 2023, 25, 100812. [Google Scholar] [CrossRef]
  138. Dankwa-Mullan, I.; Weeraratne, D. Artificial intelligence and machine learning technologies in cancer care: Addressing disparities, bias, and data diversity. Cancer Discov. 2022, 12, 1423–1427. [Google Scholar] [CrossRef] [PubMed]
  139. Henry, B.V.; Chen, H.; Edwards, M.A.; Faber, L.; Freischlag, J.A. A new look at an old problem: Improving diversity, equity, and inclusion in scientific research. Am. Surg. 2021, 87, 1722–1726. [Google Scholar] [CrossRef]
  140. Eubanks, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor; St. Martin’s Press: New York, NY, USA, 2018. [Google Scholar]
  141. Holstein, K.; Vaughan, J.W.; Daumé, H.; Dudik, M.; Wallach, H. Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–16. [Google Scholar]
  142. Cachat-Rosset, G.; Klarsfeld, A. Diversity, Equity, and Inclusion in Artificial Intelligence: An Evaluation of Guidelines. Appl. Artif. Intell. 2023, 37, 2176618. [Google Scholar] [CrossRef]
  143. Washington, V.; Franklin, J.B.; Huang, E.S.; Mega, J.L.; Abernethy, A.P. Diversity, equity, and inclusion in clinical research: A path toward precision health for everyone. Clin. Pharmacol. Ther. 2023, 113, 575–584. [Google Scholar] [CrossRef]
  144. Al-Jarrah, O.Y.; Yoo, P.D.; Muhaidat, S.; Karagiannidis, G.K.; Taha, K. Efficient machine learning for big data: A review. Big Data Res. 2015, 2, 87–93. [Google Scholar] [CrossRef] [Green Version]
  145. Anh, T.T.; Luong, N.C.; Niyato, D.; Kim, D.I.; Wang, L.-C. Efficient training management for mobile crowd-machine learning: A deep reinforcement learning approach. IEEE Wirel. Commun. Lett. 2019, 8, 1345–1348. [Google Scholar] [CrossRef] [Green Version]
  146. Vamathevan, J.; Clark, D.; Czodrowski, P.; Dunham, I.; Ferran, E.; Lee, G.; Li, B.; Madabhushi, A.; Shah, P.; Spitzer, M. Applications of machine learning in drug discovery and development. Nat. Rev. Drug Discov. 2019, 18, 463–477. [Google Scholar] [CrossRef]
  147. Ng, K.; Steinhubl, S.R.; DeFilippi, C.; Dey, S.; Stewart, W.F. Early detection of heart failure using electronic health records: Practical implications for time before diagnosis, data diversity, data quantity, and data density. Circ. Cardiovasc. Qual. Outcomes 2016, 9, 649–658. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  148. Tseng, H.-H.; Wei, L.; Cui, S.; Luo, Y.; Haken, R.K.T.; El Naqa, I. Machine learning and imaging informatics in oncology. Oncol. 2020, 98, 344–362. [Google Scholar] [CrossRef] [PubMed]
  149. Corrie, B.D.; Marthandan, N.; Zimonja, B.; Jaglale, J.; Zhou, Y.; Barr, E.; Knoetze, N.; Breden, F.M.; Christley, S.; Scott, J.K. iReceptor: A platform for querying and analyzing antibody/B-cell and T-cell receptor repertoire data across federated repositories. Immunol. Rev. 2018, 284, 24–41. [Google Scholar] [CrossRef] [PubMed]
  150. Shakhovska, N.; Bolubash, Y.J.; Veres, O. Big data federated repository model. In Proceedings of the Experience of Designing and Application of CAD Systems in Microelectronics, Lviv, Ukraine, 24–27 February 2015. [Google Scholar]
  151. Barnes, C.; Bajracharya, B.; Cannalte, M.; Gowani, Z.; Haley, W.; Kass-Hout, T.; Hernandez, K.; Ingram, M.; Juvvala, H.P.; Kuffel, G. The Biomedical Research Hub: A federated platform for patient research data. J. Am. Med. Inform. Assoc. 2022, 29, 619–625. [Google Scholar] [CrossRef] [PubMed]
  152. Lin, D.; Crabtree, J.; Dillo, I.; Downs, R.R.; Edmunds, R.; Giaretta, D.; De Giusti, M.; L’Hours, H.; Hugo, W.; Jenkyns, R. The TRUST Principles for digital repositories. Sci. Data 2020, 7, 144. [Google Scholar] [CrossRef]
  153. Yozwiak, N.L.; Schaffner, S.F.; Sabeti, P.C. Data sharing: Make outbreak research open access. Nature 2015, 518, 477–479. [Google Scholar] [CrossRef] [Green Version]
  154. Romero, O.; Wrembel, R. Data engineering for data science: Two sides of the same coin. In Proceedings of the Big Data Analytics and Knowledge Discovery: 22nd International Conference, DaWaK 2020, Bratislava, Slovakia, 14–17 September 2020; p. 22. [Google Scholar]
  155. Tamburri, D.; van den Heuvel, W.-J. Big Data Engineering, in Data Science for Entrepreneurship: Principles and Methods for Data Engineering. In Analytics, Entrepreneurship, and the Society; Springer: Berlin/Heidelberg, Germany, 2023; pp. 25–35. [Google Scholar]
  156. Schelter, S.; Stoyanovich, J. Taming technical bias in machine learning pipelines. Bull. Tech. Comm. Data Eng. 2020, 43, 1926250. [Google Scholar]
  157. Gudivada, V.; Apon, A.; Ding, J. Data quality considerations for big data and machine learning: Going beyond data cleaning and transformations. Int. J. Adv. Softw. 2017, 10, 1–20. [Google Scholar]
  158. Chu, X.; Ilyas, I.F.; Krishnan, S.; Wang, J. Data cleaning: Overview and emerging challenges. In Proceedings of the 2016 International Conference on Management of Data, San Francisco, CA, USA, 26 June–1 July 2016. [Google Scholar]
  159. Martinez, D.; Malyska, N.; Streilein, B.; Caceres, R.; Campbell, W.; Dagli, C.; Gadepally, V.; Greenfield, K.; Hall, R.; King, A. Artificial Intelligence: Short History, Present Developments, and Future Outlook; MIT Lincoln Laboratory: Lexington, MA, USA, 2019. [Google Scholar]
  160. Libbrecht, M.W.; Noble, W.S. Machine learning applications in genetics and genomics. Nat. Rev. Genet. 2015, 16, 321–332. [Google Scholar] [CrossRef] [Green Version]
  161. Shimizu, H.; Nakayama, K.I. Artificial intelligence in oncology. Cancer Sci. 2020, 111, 1452–1460. [Google Scholar] [CrossRef] [Green Version]
  162. Khodabandehlou, S.; Zivari Rahman, M. Comparison of supervised machine learning techniques for customer churn prediction based on analysis of customer behavior. J. Syst. Inf. Technol. 2017, 19, 65–93. [Google Scholar] [CrossRef]
  163. Hambarde, K.; Silahtaroğlu, G.; Khamitkar, S.; Bhalchandra, P.; Shaikh, H.; Kulkarni, G.; Tamsekar, P.; Samale, P. Data Analytics Implemented over E-Commerce Data to Evaluate Performance of Supervised Learning Approaches in Relation to Customer Behavior. In Soft Computing for Problem Solving: SocProS 2018; Springer: Berlin/Heidelberg, Germany, 2020; Volume 1. [Google Scholar]
  164. Liu, M.; Ylanko, J.; Weekman, E.; Beckett, T.; Andrews, D.; McLaurin, J. Utilizing supervised machine learning to identify microglia and astrocytes in situ: Implications for large-scale image analysis and quantification. J. Neurosci. Methods 2019, 328, 108424. [Google Scholar] [CrossRef]
  165. Janssens, T.; Antanas, L.; Derde, S.; Vanhorebeek, I.; Van den Berghe, G.; Grandas, F.G. CHARISMA: An integrated approach to automatic H&E-stained skeletal muscle cell segmentation using supervised learning and novel robust clump splitting. Med. Image Anal. 2013, 17, 1206–1219. [Google Scholar] [PubMed] [Green Version]
  166. Wani, M.A.; Bhat, F.A.; Afzal, S.; Khan, A.I.; Wani, M.A.; Bhat, F.A.; Afzal, S.; Khan, A.I. Supervised deep learning in face recognition. In Advances in Deep Learning. Studies in Big Data; Springer: Singapore, 2020; pp. 95–110. [Google Scholar]
  167. Nagaraj, P.; Banala, R.; Prasad, A.K. Real Time Face Recognition using Effective Supervised Machine Learning Algorithms. J. Phys. Conf. Ser. 2021, 1998, 012007. [Google Scholar] [CrossRef]
  168. Han, T.; Yao, H.; Sun, X.; Zhao, S.; Zhang, Y. Unsupervised discovery of crowd activities by saliency-based clustering. Neurocomputing 2016, 171, 347–361. [Google Scholar] [CrossRef]
  169. Xu, S.; Ho, E.S.; Aslam, N.; Shum, H.P. Unsupervised abnormal behaviour detection with overhead crowd video. In Proceedings of the 2017 11th International Conference on Software, Knowledge, Information Management and Applications (SKIMA), Malabe, Sri Lanka, 6–8 December 2017. [Google Scholar]
  170. He, J.; Baxter, S.L.; Xu, J.; Xu, J.; Zhou, X.; Zhang, K. The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 2019, 25, 30–36. [Google Scholar] [CrossRef] [PubMed]
  171. Patil, A.; Rane, M. Convolutional neural networks: An overview and its applications in pattern recognition. In Information and Communication Technology for Intelligent Systems: Proceedings of ICTIS 2020; Springer: Singapore, 2020; Volume 1, pp. 21–30. [Google Scholar]
  172. Graves, A.; Liwicki, M.; Bunke, H.; Schmidhuber, J.; Fernández, S. Unconstrained on-line handwriting recognition with recurrent neural networks. Adv. Neural Inf. Process. Syst. 2007, 20, 1–8. [Google Scholar]
  173. Behnke, S. Hierarchical Neural Networks for Image Interpretation; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2766. [Google Scholar]
  174. Clark, E.; Ross, A.S.; Tan, C.; Ji, Y.; Smith, N.A. Creative writing with a machine in the loop: Case studies on slogans and stories. In Proceedings of the 23rd International Conference on Intelligent User Interfaces, Tokyo, Japan, 7–11 March 2018. [Google Scholar]
  175. Hadjeres, G.; Pachet, F.; Nielsen, F. DeepBach: A steerable model for Bach chorales generation. In Proceedings of the 34th International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017. [Google Scholar]
  176. Guzdial, M.; Liao, N.; Chen, J.; Chen, S.-Y.; Shah, S.; Shah, V.; Reno, J.; Smith, G.; Riedl, M.O. Friend, collaborator, student, manager: How design of an AI-driven game level editor affects creators. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019. [Google Scholar]
  177. Introducing ChatGPT. Available online: https://openai.com/blog/chatgpt (accessed on 10 June 2023).
  178. Bard, an Experiment by Google. Available online: https://bard.google.com/ (accessed on 10 June 2023).
  179. Teubner, T.; Flath, C.M.; Weinhardt, C.; van der Aalst, W.; Hinz, O. Welcome to the era of ChatGPT et al. The prospects of large language models. Bus. Inf. Syst. Eng. 2023, 65, 95–101. [Google Scholar] [CrossRef]
  180. Mondal, S.; Das, S.; Vrana, V.G. How to Bell the Cat? A Theoretical Review of Generative Artificial Intelligence towards Digital Disruption in All Walks of Life. Technologies 2023, 11, 44. [Google Scholar] [CrossRef]
  181. Piccolo, S.R.; Denny, P.; Luxton-Reilly, A.; Payne, S.; Ridge, P.G. Many bioinformatics programming tasks can be automated with ChatGPT. arXiv 2023, arXiv:2303.13528. [Google Scholar]
  182. Surameery, N.M.S.; Shakor, M.Y. Use chat GPT to solve programming bugs. Int. J. Inf. Technol. Comput. Eng. 2023, 3, 17–22. [Google Scholar] [CrossRef]
  183. van Dis, E.A.; Bollen, J.; Zuidema, W.; van Rooij, R.; Bockting, C.L. ChatGPT: Five priorities for research. Nature 2023, 614, 224–226. [Google Scholar] [CrossRef]
  184. Naser, M.; Ross, B.; Ogle, J.; Kodur, V.; Hawileh, R.; Abdalla, J.; Thai, H.-T. Can AI Chatbots Pass the Fundamentals of Engineering (FE) and Principles and Practice of Engineering (PE) Structural Exams? arXiv 2023, arXiv:2303.18149. [Google Scholar]
  185. Geerling, W.; Mateer, G.D.; Wooten, J.; Damodaran, N. Is ChatGPT Smarter than a Student in Principles of Economics? 2023. Available online: https://ssrn.com/abstract=4356034 (accessed on 10 June 2023).
  186. The Brilliance and Weirdness of ChatGPT. Available online: https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html (accessed on 10 June 2023).
  187. Ram, B.; Verma, P. Artificial intelligence (AI)-based chatbot study of ChatGPT, Google AI Bard and Baidu AI. World J. Adv. Eng. Technol. Sci. 2023, 8, 258–261. [Google Scholar]
  188. Cascella, M.; Montomoli, J.; Bellini, V.; Bignami, E. Evaluating the feasibility of ChatGPT in healthcare: An analysis of multiple clinical and research scenarios. J. Med. Syst. 2023, 47, 33. [Google Scholar] [CrossRef] [PubMed]
  189. Kung, T.H.; Cheatham, M.; Medenilla, A.; Sillos, C.; De Leon, L.; Elepaño, C.; Madriaga, M.; Aggabao, R.; Diaz-Candido, G.; Maningo, J. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS Digit. Health 2023, 2, e0000198. [Google Scholar] [CrossRef]
  190. Vert, J.-P. How will generative AI disrupt data science in drug discovery? Nat. Biotechnol. 2023, 41, 750–751. [Google Scholar] [CrossRef]
  191. Uprety, D.; Zhu, D.; West, H. ChatGPT—A promising generative AI tool and its implications for cancer care. Cancer 2023, 129, 2284–2289. [Google Scholar] [CrossRef] [PubMed]
  192. Sakamoto, T.; Furukawa, T.; Lami, K.; Pham, H.H.N.; Uegami, W.; Kuroda, K.; Kawai, M.; Sakanashi, H.; Cooper, L.A.D.; Bychkov, A. A narrative review of digital pathology and artificial intelligence: Focusing on lung cancer. Transl. Lung Cancer Res. 2020, 9, 2255. [Google Scholar] [CrossRef] [PubMed]
  193. Zheng, J.; Chan, K.; Gibson, I. Virtual reality. IEEE Potentials 1998, 17, 20–23. [Google Scholar] [CrossRef]
  194. Carmigniani, J.; Furht, B. Augmented reality: An overview. In Handbook of Augmented Reality; Springer: New York, NY, USA, 2011; pp. 3–46. [Google Scholar]
  195. Berryman, D.R. Augmented reality: A review. Med. Ref. Serv. Q. 2012, 31, 212–218. [Google Scholar] [CrossRef] [PubMed]
  196. Fuchsova, M.; Korenova, L. Visualisation in Basic Science and Engineering Education of Future Primary School Teachers in Human Biology Education Using Augmented Reality. Eur. J. Contemp. Educ. 2019, 8, 92–102. [Google Scholar]
  197. Paembonan, T.L.; Ikhsan, J. Supporting Students’ Basic Science Process Skills by Augmented Reality Learning Media. J. Educ. Sci. Technol. 2021, 7, 188–196. [Google Scholar]
  198. Chen, S.-Y.; Liu, S.-Y. Using augmented reality to experiment with elements in a chemistry course. Comput. Hum. Behav. 2020, 111, 106418. [Google Scholar] [CrossRef]
  199. Li, L.; Yu, F.; Shi, D.; Shi, J.; Tian, Z.; Yang, J.; Wang, X.; Jiang, Q. Application of virtual reality technology in clinical medicine. Am. J. Transl. Res. 2017, 9, 3867. [Google Scholar]
  200. Pottle, J. Virtual reality and the transformation of medical education. Future Healthc. J. 2019, 6, 181. [Google Scholar] [CrossRef] [Green Version]
  201. Ayoub, A.; Pulijala, Y. The application of virtual reality and augmented reality in Oral & Maxillofacial Surgery. BMC Oral Health 2019, 19, 238. [Google Scholar]
  202. McKnight, R.R.; Pean, C.A.; Buck, J.S.; Hwang, J.S.; Hsu, J.R.; Pierrie, S.N. Virtual reality and augmented reality—Translating surgical training into surgical technique. Curr. Rev. Musculoskelet. Med. 2020, 13, 663–674. [Google Scholar] [CrossRef]
  203. Casari, F.A.; Navab, N.; Hruby, L.A.; Kriechling, P.; Nakamura, R.; Tori, R.; de Lourdes dos Santos Nunes, F.; Queiroz, M.C.; Fürnstahl, P.; Farshad, M. Augmented reality in orthopedic surgery is emerging from proof of concept towards clinical studies: A literature review explaining the technology and current state of the art. Curr. Rev. Musculoskelet. Med. 2021, 14, 192–203. [Google Scholar] [CrossRef] [PubMed]
  204. Carl, B.; Bopp, M.; Saß, B.; Voellger, B.; Nimsky, C. Implementation of augmented reality support in spine surgery. Eur. Spine J. 2019, 28, 1697–1711. [Google Scholar] [CrossRef]
  205. Georgescu, R.; Fodor, L.A.; Dobrean, A.; Cristea, I.A. Psychological interventions using virtual reality for pain associated with medical procedures: A systematic review and meta-analysis. Psychol. Med. 2020, 50, 1795–1807. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  206. Pittara, M.; Matsangidou, M.; Stylianides, K.; Petkov, N.; Pattichis, C.S. Virtual reality for pain management in cancer: A comprehensive review. IEEE Access 2020, 8, 225475–225489. [Google Scholar] [CrossRef]
  207. Sharifpour, S.; Manshaee, G.R.; Sajjadian, I. Effects of virtual reality therapy on perceived pain intensity, anxiety, catastrophising and self-efficacy among adolescents with cancer. Couns. Psychother. Res. 2021, 21, 218–226. [Google Scholar] [CrossRef]
  208. Cipresso, P.; Giglioli, I.A.C.; Raya, M.A.; Riva, G. The past, present, and future of virtual and augmented reality research: A network and cluster analysis of the literature. Front. Psychol. 2018, 9, 2086. [Google Scholar] [CrossRef] [Green Version]
  209. Peddie, J. Augmented Reality: Where We Will All Live; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  210. Riva, G.; Baños, R.M.; Botella, C.; Mantovani, F.; Gaggioli, A. Transforming experience: The potential of augmented reality and virtual reality for enhancing personal and clinical change. Front. Psychiatry 2016, 7, 164. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  211. Garcke, J.; Roscher, R. Explainable Machine Learning. Mach. Learn. Knowl. Extr. 2023, 5, 169–170. [Google Scholar] [CrossRef]
  212. Roscher, R.; Bohn, B.; Duarte, M.F.; Garcke, J. Explainable Machine Learning for Scientific Insights and Discoveries. IEEE Access 2020, 8, 42200–42216. [Google Scholar] [CrossRef]
  213. Rasheed, K.; Qayyum, A.; Ghaly, M.; Al-Fuqaha, A.I.; Razi, A.; Qadir, J. Explainable, trustworthy, and ethical machine learning for healthcare: A survey. Comput. Biol. Med. 2021, 149, 106043. [Google Scholar] [CrossRef]
  214. Ratti, E.; Graves, M. Explainable machine learning practices: Opening another black box for reliable medical AI. AI Ethics 2022, 2, 801–814. [Google Scholar] [CrossRef]
  215. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S.B. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2021, 23, 18. [Google Scholar] [CrossRef]
  216. Wells, L.; Bednarz, T. Explainable AI and Reinforcement Learning—A Systematic Review of Current Approaches and Trends. Front. Artif. Intell. 2021, 4, 550030. [Google Scholar] [CrossRef]
  217. Nauta, M.; Trienes, J.; Pathak, S.; Nguyen, E.; Peters, M.; Schmitt, Y.; Schlötterer, J.; Keulen, M.V.; Seifert, C. From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. ACM Comput. Surv. 2022, 55, 1–42. [Google Scholar] [CrossRef]
  218. Jin, W.; Li, X.; Hamarneh, G. Evaluating explainable AI on a multi-modal medical imaging task: Can existing algorithms fulfill clinical requirements? Proc. AAAI Conf. Artif. Intell. 2022, 36, 11945–11953. [Google Scholar] [CrossRef]
  219. Dwivedi, R.; Dave, D.; Naik, H.; Singhal, S.; Omer, R.; Patel, P.; Qian, B.; Wen, Z.; Shah, T.; Morgan, G. Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Comput. Surv. 2023, 55, 194. [Google Scholar] [CrossRef]
  220. Jiménez-Luna, J.; Grisoni, F.; Schneider, G. Drug discovery with explainable artificial intelligence. Nat. Mach. Intell. 2020, 2, 573–584. [Google Scholar] [CrossRef]
  221. de Souza, L.A., Jr.; Mendel, R.; Strasser, S.; Ebigbo, A.; Probst, A.; Messmann, H.; Papa, J.P.; Palm, C. Convolutional Neural Networks for the evaluation of cancer in Barrett’s esophagus: Explainable AI to lighten up the black-box. Comput. Biol. Med. 2021, 135, 104578. [Google Scholar] [CrossRef]
  222. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014. [Google Scholar]
  223. Zafar, M.R.; Khan, N.M. DLIME: A deterministic local interpretable model-agnostic explanations approach for computer-aided diagnosis systems. arXiv 2019, arXiv:1906.10263. [Google Scholar]
  224. Palatnik de Sousa, I.; Vellasco, M.M.B.R.; da Silva, E.C. Local interpretable model-agnostic explanations for classification of lymph node metastases. Sensors 2019, 19, 2969. [Google Scholar] [CrossRef] [Green Version]
  225. Kumarakulasinghe, N.B.; Blomberg, T.; Liu, J.; Leao, A.S.; Papapetrou, P. Evaluating local interpretable model-agnostic explanations on clinical machine learning classification models. In Proceedings of the 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), Rochester, MN, USA, 28–30 July 2020. [Google Scholar]
  226. Binder, A.; Bockmayr, M.; Hägele, M.; Wienert, S.; Heim, D.; Hellweg, K.; Ishii, M.; Stenzinger, A.; Hocke, A.; Denkert, C. Morphological and molecular breast cancer profiling through explainable machine learning. Nat. Mach. Intell. 2021, 3, 355–366. [Google Scholar] [CrossRef]
  227. Alsinglawi, B.; Alshari, O.; Alorjani, M.; Mubin, O.; Alnajjar, F.; Novoa, M.; Darwish, O. An explainable machine learning framework for lung cancer hospital length of stay prediction. Sci. Rep. 2022, 12, 607. [Google Scholar] [CrossRef]
  228. Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative adversarial networks: An overview. IEEE Signal Process. Mag. 2018, 35, 53–65. [Google Scholar] [CrossRef] [Green Version]
  229. Aggarwal, A.; Mittal, M.; Battineni, G. Generative adversarial network: An overview of theory and applications. Int. J. Inf. Manag. Data Insights 2021, 1, 100004. [Google Scholar] [CrossRef]
  230. Arjovsky, M.; Bottou, L. Towards principled methods for training generative adversarial networks. arXiv 2017, arXiv:1701.04862. [Google Scholar]
  231. Gui, J.; Sun, Z.; Wen, Y.; Tao, D.; Ye, J. A review on generative adversarial networks: Algorithms, theory, and applications. IEEE Trans. Knowl. Data Eng. 2021, 35, 3313–3332. [Google Scholar] [CrossRef]
  232. Nam, S.; Kim, Y.; Kim, S.J. Text-adaptive generative adversarial networks: Manipulating images with natural language. Adv. Neural Inf. Process. Syst. 2018, 31, 1–10. [Google Scholar]
  233. Xu, J.; Ren, X.; Lin, J.; Sun, X. Diversity-promoting GAN: A cross-entropy based generative adversarial network for diversified text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018. [Google Scholar]
  234. Lai, C.-T.; Hong, Y.-T.; Chen, H.-Y.; Lu, C.-J.; Lin, S.-D. Multiple text style transfer by using word-level conditional generative adversarial network with two-phase training. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019. [Google Scholar]
  235. Yinka-Banjo, C.; Ugot, O.-A. A review of generative adversarial networks and its application in cybersecurity. Artif. Intell. Rev. 2020, 53, 1721–1736. [Google Scholar] [CrossRef]
  236. Toshpulatov, M.; Lee, W.; Lee, S. Generative adversarial networks and their application to 3D face generation: A survey. Image Vis. Comput. 2021, 108, 104119. [Google Scholar] [CrossRef]
  237. Kusiak, A. Convolutional and generative adversarial neural networks in manufacturing. Int. J. Prod. Res. 2020, 58, 1594–1604. [Google Scholar] [CrossRef]
  238. Singh, R.; Garg, R.; Patel, N.S.; Braun, M.W. Generative adversarial networks for synthetic defect generation in assembly and test manufacturing. In Proceedings of the 2020 31st Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC), Saratoga Springs, NY, USA, 24–26 August 2020. [Google Scholar]
  239. Wang, J.; Yang, Z.; Zhang, J.; Zhang, Q.; Chien, W.-T.K. AdaBalGAN: An improved generative adversarial network with imbalanced learning for wafer defective pattern recognition. IEEE Trans. Semicond. Manuf. 2019, 32, 310–319. [Google Scholar] [CrossRef]
  240. Meng, F.-j.; Li, Y.-q.; Shao, F.-m.; Yuan, G.-h.; Dai, J.-y. Visual-simulation region proposal and generative adversarial network based ground military target recognition. Def. Technol. 2022, 18, 2083–2096. [Google Scholar] [CrossRef]
  241. Thompson, S.; Teixeira-Dias, F.; Paulino, M.; Hamilton, A. Predictions on multi-class terminal ballistics datasets using conditional Generative Adversarial Networks. Neural Netw. 2022, 154, 425–440. [Google Scholar] [CrossRef] [PubMed]
  242. Achuthan, S.; Chatterjee, R.; Kotnala, S.; Mohanty, A.; Bhattacharya, S.; Salgia, R.; Kulkarni, P. Leveraging deep learning algorithms for synthetic data generation to design and analyze biological networks. J. Biosci. 2022, 47, 43. [Google Scholar] [CrossRef] [PubMed]
  243. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552. [Google Scholar] [CrossRef] [Green Version]
  244. Chen, Y.; Yang, X.-H.; Wei, Z.; Heidari, A.A.; Zheng, N.; Li, Z.; Chen, H.; Hu, H.; Zhou, Q.; Guan, Q. Generative adversarial networks in medical image augmentation: A review. Comput. Biol. Med. 2022, 144, 105382. [Google Scholar] [CrossRef]
  245. von Werra, L.; Schöngens, M.; Uzun, E.D.G.; Eickhoff, C. Generative adversarial networks in precision oncology. In Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval, Santa Clara, CA, USA, 2–5 October 2019. [Google Scholar]
  246. Zhan, B.; Xiao, J.; Cao, C.; Peng, X.; Zu, C.; Zhou, J.; Wang, Y. Multi-constraint generative adversarial network for dose prediction in radiotherapy. Med. Image Anal. 2022, 77, 102339. [Google Scholar] [CrossRef] [PubMed]
  247. Heilemann, G.; Zimmermann, L.; Matthewman, M. Investigating the Potential of Generative Adversarial Networks (GANs) for Autosegmentation in Radiation Oncology. Ph.D. Thesis, Technische Universität Wien, Vienna, Austria, 2021. [Google Scholar]
  248. Nakamura, M.; Nakao, M.; Imanishi, K.; Hirashima, H.; Tsuruta, Y. Geometric and dosimetric impact of 3D generative adversarial network-based metal artifact reduction algorithm on VMAT and IMPT for the head and neck region. Radiat. Oncol. 2021, 16, 96. [Google Scholar] [CrossRef]
  249. Hersche, M.; Zeqiri, M.; Benini, L.; Sebastian, A.; Rahimi, A. A neuro-vector-symbolic architecture for solving Raven’s progressive matrices. Nat. Mach. Intell. 2023, 5, 363–375. [Google Scholar] [CrossRef]
  250. Serre, T. Deep learning: The good, the bad, and the ugly. Annu. Rev. Vis. Sci. 2019, 5, 399–426. [Google Scholar] [CrossRef]
  251. Shrestha, A.; Mahmood, A. Review of deep learning algorithms and architectures. IEEE Access 2019, 7, 53040–53065. [Google Scholar] [CrossRef]
  252. Kleyko, D.; Rachkovskij, D.A.; Osipov, E.; Rahimi, A. A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations. ACM Comput. Surv. 2021, 55, 1–40. [Google Scholar] [CrossRef]
  253. Kleyko, D.; Rachkovskij, D.A.; Osipov, E.; Rahimi, A. A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges. ACM Comput. Surv. 2021, 55, 1–52. [Google Scholar] [CrossRef]
  254. This AI Could Likely Beat You at an IQ Test. Available online: https://research.ibm.com/blog/neuro-vector-symbolic-architecture-IQ-test (accessed on 10 June 2023).
  255. Widdows, D.; Cohen, T. Reasoning with vectors: A continuous model for fast robust inference. Log. J. IGPL 2015, 23, 141–173. [Google Scholar] [CrossRef]
  256. Abhijith, M.; Nair, D.R. Neuromorphic High Dimensional Computing Architecture for Classification Applications. In Proceedings of the 2021 IEEE International Conference on Nanoelectronics, Nanophotonics, Nanomaterials, Nanobioscience & Nanotechnology (5NANO), Kottayam, Kerala, India, 29–30 April 2021. [Google Scholar]
  257. Fortunato, L.; Galassi, M. The case for free and open source software in research and scholarship. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2021, 379, 20200079. [Google Scholar] [CrossRef]
  258. Sahay, S. Free and open source software as global public goods? What are the distortions and how do we address them? Electron. J. Inf. Syst. Dev. Ctries. 2019, 85, e12080. [Google Scholar] [CrossRef] [Green Version]
  259. Singh, P.; Manure, A.; Singh, P.; Manure, A. Introduction to Tensorflow 2.0. In Learn TensorFlow 2.0: Implement Machine Learning and Deep Learning Models with Python; Apress: New York, NY, USA, 2020; pp. 1–24. [Google Scholar]
  260. Stevens, E.; Antiga, L.; Viehmann, T. Deep Learning with PyTorch; Manning Publications: Shelter Island, NY, USA, 2020. [Google Scholar]
  261. Pang, B.; Nijkamp, E.; Wu, Y.N. Deep learning with tensorflow: A review. J. Educ. Behav. Stat. 2020, 45, 227–248. [Google Scholar] [CrossRef]
  262. Sarang, P. Artificial Neural Networks with TensorFlow 2; Apress: Berkeley, CA, USA, 2021. [Google Scholar]
  263. Rao, D.; McMahan, B. Natural Language Processing with PyTorch: Build Intelligent Language Applications Using Deep Learning; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2019. [Google Scholar]
  264. Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M. Huggingface’s transformers: State-of-the-art natural language processing. arXiv 2019, arXiv:1910.03771. [Google Scholar]
  265. Ayyadevara, V.K.; Reddy, Y. Modern Computer Vision with Pytorch: Explore Deep Learning Concepts and Implement Over 50 Real-World Image Applications; Packt Publishing Ltd.: Birmingham, UK, 2020. [Google Scholar]
  266. Anderson, B.M.; Wahid, K.A.; Brock, K.K. Simple python module for conversions between DICOM images and radiation therapy structures, masks, and prediction arrays. Pract. Radiat. Oncol. 2021, 11, 226–229. [Google Scholar] [CrossRef] [PubMed]
  267. Norton, I.; Essayed, W.I.; Zhang, F.; Pujol, S.; Yarmarkovich, A.; Golby, A.J.; Kindlmann, G.; Wassermann, D.; Estepar, R.S.J.; Rathi, Y. SlicerDMRI: Open source diffusion MRI software for brain cancer research. Cancer Res. 2017, 77, e101–e103. [Google Scholar] [CrossRef] [Green Version]
  268. Gutman, D.A.; Khalilia, M.; Lee, S.; Nalisnik, M.; Mullen, Z.; Beezley, J.; Chittajallu, D.R.; Manthey, D.; Cooper, L.A. The digital slide archive: A software platform for management, integration, and analysis of histology for cancer research. Cancer Res. 2017, 77, e75–e78. [Google Scholar] [CrossRef] [Green Version]
  269. Zhu, Y.; Qiu, P.; Ji, Y. TCGA-assembler: Open-source software for retrieving and processing TCGA data. Nat. Methods 2014, 11, 599–600. [Google Scholar] [CrossRef] [PubMed]
  270. Jo, A. The Promise and Peril of Generative AI. Nature 2023, 614, 214–216. [Google Scholar]
  271. Bucknall, B.S.; Dori-Hacohen, S. Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Oxford, UK, 19–21 May 2022. [Google Scholar]
  272. Roose, K. A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn. New York Times, 30 May 2023. [Google Scholar]
  273. Future of Life Institute. Pause Giant AI Experiments: An Open Letter. 2023. Available online: https://futureoflife.org/open-letter/pause-giant-ai-experiments (accessed on 10 June 2023).
  274. Kann, B.H.; Hosny, A.; Aerts, H. Artificial intelligence for clinical oncology. Cancer Cell 2021, 39, 916–927. [Google Scholar] [CrossRef] [PubMed]
  275. Gha, A.E.; Takov, P.; Shang, N. The Superhuman Born Out of Artificial Intelligence and Genetic Engineering: The Destruction of Human Ontological Dignity. Horiz. J. Humanit. Artif. Intell. 2023, 2, 56–65. [Google Scholar]
  276. Zhang, J.; Zhang, Z.-M. Ethics and governance of trustworthy medical artificial intelligence. BMC Med. Inform. Decis. Mak. 2023, 23, 7. [Google Scholar] [CrossRef]
  277. Langlotz, C.P. Will artificial intelligence replace radiologists? Radiol. Artif. Intell. 2019, 1, e190058. [Google Scholar] [CrossRef]
  278. Goldhahn, J.; Rampton, V.; Spinas, G.A. Could artificial intelligence make doctors obsolete? BMJ 2018, 363, k4563. [Google Scholar] [CrossRef] [Green Version]
  279. Razzaki, S.; Baker, A.; Perov, Y.; Middleton, K.; Baxter, J.; Mullarkey, D.; Sangar, D.; Taliercio, M.; Butt, M.; Majeed, A. A comparative study of artificial intelligence and human doctors for the purpose of triage and diagnosis. arXiv 2018, arXiv:1806.10698. [Google Scholar]
  280. Botha, J.; Pieterse, H. Fake news and deepfakes: A dangerous threat for 21st century information security. In Proceedings of the ICCWS 2020 15th International Conference on Cyber Warfare and Security, Norfolk, VA, USA, 12–13 March 2020. [Google Scholar]
  281. Pantserev, K.A. The Malicious Use of AI-Based Deepfake Technology as the New Threat to Psychological Security and Political Stability. In Cyber Defence in the Age of AI, Smart Societies and Augmented Humanity; Springer: Cham, Switzerland, 2020; pp. 37–55. [Google Scholar]
  282. Borji, A. A categorical archive of ChatGPT failures. arXiv 2023, arXiv:2302.03494. [Google Scholar]
  283. Brainard, J. Journals take up arms against AI-written text. Science 2023, 379, 740–741. [Google Scholar] [CrossRef]
  284. Vaishya, R.; Misra, A.; Vaish, A. ChatGPT: Is this version good for healthcare and research? Diabetes Metab. Syndr. Clin. Res. Rev. 2023, 17, 102744. [Google Scholar] [CrossRef]
  285. DeGrave, A.; Janizek, J.; Lee, S. AI for radiographic COVID-19 detection selects shortcuts over signal. Nat. Mach. Intell. 2021, 3, 610–619. [Google Scholar] [CrossRef]
  286. Khullar, D.; Casalino, L.P.; Qian, Y.; Lu, Y.; Krumholz, H.M.; Aneja, S. Perspectives of patients about artificial intelligence in health care. JAMA Netw. Open 2022, 5, e2210309. [Google Scholar] [CrossRef]
  287. Seyyed-Kalantari, L.; Zhang, H.; McDermott, M.B.; Chen, I.Y.; Ghassemi, M. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat. Med. 2021, 27, 2176–2182. [Google Scholar] [CrossRef]
  288. Kim, E.J.; Woo, H.S.; Cho, J.H.; Sym, S.J.; Baek, J.-H.; Lee, W.-S.; Kwon, K.A.; Kim, K.O.; Chung, J.-W.; Park, D.K. Early experience with Watson for oncology in Korean patients with colorectal cancer. PLoS ONE 2019, 14, e0213640. [Google Scholar] [CrossRef]
  289. Richardson, J.P.; Smith, C.; Curtis, S.; Watson, S.; Zhu, X.; Barry, B.; Sharp, R.R. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit. Med. 2021, 4, 140. [Google Scholar] [CrossRef]
  290. Lim, A.K.; Thuemmler, C. Opportunities and challenges of internet-based health interventions in the future internet. In Proceedings of the 2015 12th International Conference on Information Technology-New Generations, Las Vegas, NV, USA, 13–15 April 2015. [Google Scholar]
  291. Hatherley, J.J. Limits of trust in medical AI. J. Med. Ethics 2020, 46, 478–481. [Google Scholar] [CrossRef] [PubMed]
  292. DeCamp, M.; Tilburt, J.C. Why we cannot trust artificial intelligence in medicine. Lancet Digit. Health 2019, 1, e390. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  293. Nundy, S.; Montgomery, T.; Wachter, R.M. Promoting trust between patients and physicians in the era of artificial intelligence. JAMA 2019, 322, 497–498. [Google Scholar] [CrossRef] [PubMed]
  294. Johnson, M.; Albizri, A.; Harfouche, A. Responsible artificial intelligence in healthcare: Predicting and preventing insurance claim denials for economic and social wellbeing. Inf. Syst. Front. 2021, 1–17. [Google Scholar] [CrossRef]
  295. Lysaght, T.; Lim, H.Y.; Xafis, V.; Ngiam, K.Y. AI-assisted decision-making in healthcare: The application of an ethics framework for big data in health and research. Asian Bioeth. Rev. 2019, 11, 299–314. [Google Scholar] [CrossRef] [Green Version]
  296. Dueno, T. Racist Robots and the Lack of Legal Remedies in the Use of Artificial Intelligence in Healthcare. Conn. Ins. LJ 2020, 27, 337. [Google Scholar]
  297. Formosa, P.; Rogers, W.; Griep, Y.; Bankins, S.; Richards, D. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Comput. Hum. Behav. 2022, 133, 107296. [Google Scholar] [CrossRef]
  298. Muthukrishnan, N.; Maleki, F.; Ovens, K.; Reinhold, C.; Forghani, B.; Forghani, R. Brief history of artificial intelligence. Neuroimaging Clin. 2020, 30, 393–399. [Google Scholar] [CrossRef] [PubMed]
  299. Pan, Y. Heading toward artificial intelligence 2.0. Engineering 2016, 2, 409–413. [Google Scholar] [CrossRef]
  300. Fan, J.; Campbell, M.; Kingsbury, B. Artificial intelligence research at IBM. IBM J. Res. Dev. 2011, 55, 16:1–16:4. [Google Scholar] [CrossRef]
  301. Bory, P. Deep new: The shifting narratives of artificial intelligence from Deep Blue to AlphaGo. Convergence 2019, 25, 627–642. [Google Scholar] [CrossRef]
  302. McCorduck, P.; Minsky, M.; Selfridge, O.; Simon, H.A. History of artificial intelligence. In Proceedings of the IJCAI’77: Proceedings of the 5th International Joint Conference on Artificial Intelligence, Cambridge, MA, USA, 22–25 August 1977. [Google Scholar]
  303. Bernstein, J. Three Degrees above Zero: Bell Laboratories in the Information Age; Cambridge University Press: Cambridge, UK, 1987. [Google Scholar]
  304. Horowitz, M.C. Artificial intelligence, international competition, and the balance of power. Tex. Natl. Secur. Rev. 2018, 1, 22. [Google Scholar]
  305. Dick, S. Artificial intelligence. Harv. Data Sci. Rev. 2019, 1, 1–9. [Google Scholar]
  306. Wasilow, S.; Thorpe, J.B. Artificial intelligence, robotics, ethics, and the military: A Canadian perspective. AI Mag. 2019, 40, 37–48. [Google Scholar] [CrossRef]
  307. Bistron, M.; Piotrowski, Z. Artificial intelligence applications in military systems and their influence on sense of security of citizens. Electronics 2021, 10, 871. [Google Scholar] [CrossRef]
  308. Lohr, S. MIT Plans College for Artificial Intelligence, Backed by $1 Billion. The New York Times, 15 October 2018; 15. [Google Scholar]
  309. Kubassova, O.; Shaikh, F.; Melus, C.; Mahler, M. History, current status, and future directions of artificial intelligence. In Precision Medicine and Artificial Intelligence; Academic Press: Cambridge, MA, USA, 2021; pp. 1–38. [Google Scholar]
  310. Fulbright, R. A Brief History of Artificial Intelligence. In Democratization of Expertise; Routledge: London, UK, 2020. [Google Scholar]
  311. Minsky, M.; Papert, S.A. Perceptrons; MIT Press: Cambridge, MA, USA, 1969; pp. 318–362. [Google Scholar]
  312. McCarthy, J.; Minsky, M.; Rochester, N.; Shannon, C. Dartmouth Artificial Intelligence (AI) Conference; Dartmouth College: Hanover, NH, USA, 1956. [Google Scholar]
  313. Lim, H.-o.; Takanishi, A. Biped walking robots created at Waseda University: WL and WABIAN family. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2007, 365, 49–64. [Google Scholar] [CrossRef] [PubMed]
  314. Güzel, M.S. Autonomous vehicle navigation using vision and mapless strategies: A survey. Adv. Mech. Eng. 2013, 5, 234747. [Google Scholar] [CrossRef] [Green Version]
  315. Fei-Fei, L.; Deng, J.; Li, K. ImageNet: Constructing a large-scale image database. J. Vis. 2009, 9, 1037. [Google Scholar] [CrossRef]
  316. Chang, A. History of Artificial Intelligence in Medicine. Gastrointest. Endosc. 2020, 92, 807–812. [Google Scholar]
  317. Beam, A.L.; Drazen, J.M.; Kohane, I.S.; Leong, T.Y.; Manrai, A.K.; Rubin, E.J. Artificial Intelligence in Medicine. N. Engl. J. Med. 2023, 388, 1220–1221. [Google Scholar] [CrossRef]
  318. Kumar, V.; Gu, Y.; Basu, S.; Berglund, A.; Eschrich, S.A.; Schabath, M.B.; Forster, K.; Aerts, H.J.; Dekker, A.; Fenstermacher, D. Radiomics: The process and the challenges. Magn. Reson. Imaging 2012, 30, 1234–1248. [Google Scholar] [CrossRef] [Green Version]
  319. Tang, A.; Tam, R.; Cadrin-Chênevert, A.; Guest, W.; Chong, J.; Barfett, J.; Chepelev, L.; Cairns, R.; Mitchell, J.R.; Cicero, M.D. Canadian Association of Radiologists white paper on artificial intelligence in radiology. Can. Assoc. Radiol. J. 2018, 69, 120–135. [Google Scholar] [CrossRef] [Green Version]
  320. Johnson, K.W.; Soto, J.T.; Glicksberg, B.S.; Shameer, K.; Miotto, R.; Ali, M.; Ashley, E.; Dudley, J.T. Artificial intelligence in cardiology. J. Am. Coll. Cardiol. 2018, 71, 2668–2679. [Google Scholar] [CrossRef]
  321. Lopez-Jimenez, F.; Attia, Z.; Arruda-Olson, A.M.; Carter, R.; Chareonthaitawee, P.; Jouni, H.; Kapa, S.; Lerman, A.; Luong, C.; Medina-Inojosa, J.R. Artificial intelligence in cardiology: Present and future. Mayo Clin. Proc. 2020, 95, 1015–1039. [Google Scholar] [CrossRef]
  322. Miyazawa, A.A. Artificial intelligence: The future for cardiology. Heart 2019, 105, 1214. [Google Scholar] [CrossRef]
  323. Le Berre, C.; Sandborn, W.J.; Aridhi, S.; Devignes, M.-D.; Fournier, L.; Smail-Tabbone, M.; Danese, S.; Peyrin-Biroulet, L. Application of artificial intelligence to gastroenterology and hepatology. Gastroenterology 2020, 158, 76–94.e2. [Google Scholar] [CrossRef] [Green Version]
  324. Christou, C.D.; Tsoulfas, G. Challenges and opportunities in the application of artificial intelligence in gastroenterology and hepatology. World J. Gastroenterol. 2021, 27, 6191. [Google Scholar] [CrossRef]
  325. Kröner, P.T.; Engels, M.M.; Glicksberg, B.S.; Johnson, K.W.; Mzaik, O.; van Hooft, J.E.; Wallace, M.B.; El-Serag, H.B.; Krittanawong, C. Artificial intelligence in gastroenterology: A state-of-the-art review. World J. Gastroenterol. 2021, 27, 6794. [Google Scholar] [CrossRef] [PubMed]
  326. Kaplan, A.; Cao, H.; FitzGerald, J.M.; Iannotti, N.; Yang, E.; Kocks, J.W.; Kostikas, K.; Price, D.; Reddel, H.K.; Tsiligianni, I. Artificial intelligence/machine learning in respiratory medicine and potential role in asthma and COPD diagnosis. J. Allergy Clin. Immunol. Pract. 2021, 9, 2255–2261. [Google Scholar] [CrossRef] [PubMed]
  327. Mekov, E.; Miravitlles, M.; Petkov, R. Artificial intelligence and machine learning in respiratory medicine. Expert Rev. Respir. Med. 2020, 14, 559–564. [Google Scholar] [CrossRef]
  328. Ferrante, G.; Licari, A.; Marseglia, G.L.; La Grutta, S. Artificial intelligence as an emerging diagnostic approach in paediatric pulmonology. Respirology 2020, 25, 1029–1030. [Google Scholar] [CrossRef]
  329. Hunter, B.; Hindocha, S.; Lee, R.W. The role of artificial intelligence in early cancer diagnosis. Cancers 2022, 14, 1524. [Google Scholar] [CrossRef] [PubMed]
  330. Kenner, B.; Chari, S.T.; Kelsen, D.; Klimstra, D.S.; Pandol, S.J.; Rosenthal, M.; Rustgi, A.K.; Taylor, J.A.; Yala, A.; Abul-Husn, N. Artificial intelligence and early detection of pancreatic cancer: 2020 summative review. Pancreas 2021, 50, 251. [Google Scholar] [CrossRef]
  331. Ballester, P.J.; Carmona, J. Artificial intelligence for the next generation of precision oncology. NPJ Precis. Oncol. 2021, 5, 79. [Google Scholar] [CrossRef]
  332. Windisch, P.; Hertler, C.; Blum, D.; Zwahlen, D.; Förster, R. Leveraging advances in artificial intelligence to improve the quality and timing of palliative care. Cancers 2020, 12, 1149. [Google Scholar] [CrossRef]
  333. Periyakoil, V.S.; von Gunten, C.F. Palliative Care Is Proven. J. Palliat. Med. 2023, 26, 2–4. [Google Scholar] [CrossRef]
  334. Courdy, S.; Hulse, M.; Nadaf, S.; Mao, A.; Pozhitkov, A.; Berger, S.; Chang, J.; Achuthan, S.; Kancharla, C.; Kunz, I. The City of Hope POSEIDON enterprise-wide platform for real-world data and evidence in cancer. J. Clin. Oncol. 2021, 39, e18813. [Google Scholar] [CrossRef]
  335. Melstrom, L.G.; Rodin, A.S.; Rossi, L.A.; Fu, P., Jr.; Fong, Y.; Sun, V. Patient generated health data and electronic health record integration in oncologic surgery: A call for artificial intelligence and machine learning. J. Surg. Oncol. 2021, 123, 52–60. [Google Scholar] [CrossRef] [PubMed]
  336. Dadwal, S.; Eftekhari, Z.; Thomas, T.; Johnson, D.; Yang, D.; Mokhtari, S.; Nikolaenko, L.; Munu, J.; Nakamura, R. A Machine-Learning Sepsis Prediction Model for Patients Undergoing Hematopoietic Cell Transplantation. Blood 2018, 132, 711. [Google Scholar] [CrossRef]
  337. Deng, H.; Eftekhari, Z.; Carlin, C.; Veerapong, J.; Fournier, K.F.; Johnston, F.M.; Dineen, S.P.; Powers, B.D.; Hendrix, R.; Lambert, L.A. Development and Validation of an Explainable Machine Learning Model for Major Complications After Cytoreductive Surgery. JAMA Netw. Open 2022, 5, e2212930. [Google Scholar] [CrossRef]
  338. Zachariah, F.J.; Rossi, L.A.; Roberts, L.M.; Bosserman, L.D. Prospective comparison of medical oncologists and a machine learning model to predict 3-month mortality in patients with metastatic solid tumors. JAMA Netw. Open 2022, 5, e2214514. [Google Scholar] [CrossRef]
  339. Rossi, L.A.; Shawber, C.; Munu, J.; Zachariah, F. Evaluation of embeddings of laboratory test codes for patients at a cancer center. arXiv 2019, arXiv:1907.09600. [Google Scholar]
  340. Achuthan, S.; Chang, M.; Shah, A. SPIRIT-ML: A machine learning platform for deriving knowledge from biomedical datasets. In Proceedings of the Data Integration in the Life Sciences: 11th International Conference, DILS 2015, Los Angeles, CA, USA, 9–10 July 2015. [Google Scholar]
  341. Karolak, A.; Branciamore, S.; McCune, J.S.; Lee, P.P.; Rodin, A.S.; Rockne, R.C. Concepts and applications of information theory to immuno-oncology. Trends Cancer 2021, 7, 335–346. [Google Scholar] [CrossRef] [PubMed]
  342. Rosen, S.T. Why precision medicine continues to be the future of health care. Oncol. Times UK 2017, 39, 24. [Google Scholar] [CrossRef]
343. Budhraja, K.K.; McDonald, B.R.; Stephens, M.D.; Contente-Cuomo, T.; Markus, H.; Farooq, M.; Favaro, P.F.; Connor, S.; Byron, S.A.; Egan, J.B. Genome-wide analysis of aberrant position and sequence of plasma DNA fragment ends in patients with cancer. Sci. Transl. Med. 2023, 15, eabm6863. [Google Scholar] [CrossRef] [PubMed]
  344. Liu, A.; Germino, E.; Han, C.; Watkins, W.; Amini, A.; Wong, J.; Williams, T. Clinical Validation of Artificial Intelligence Based Auto-Segmentation of Organs-at-Risk in Total Marrow Irradiation Treatment. Int. J. Radiat. Oncol. Biol. Phys. 2021, 111, e302–e303. [Google Scholar] [CrossRef]
  345. Watkins, W.; Du, D.; Qing, K.; Ladbury, C.; Liu, J.; Liu, A. Validation of Automated Segmentation Algorithms. Int. J. Radiat. Oncol. Biol. Phys. 2021, 111, e152. [Google Scholar] [CrossRef]
  346. Watkins, W.; Liu, J.; Hui, S.; Liu, A. Clinical Efficiency Gains with Artificial-Intelligence Auto-Segmentation in the Entire Human Body. Int. J. Radiat. Oncol. Biol. Phys. 2022, 114, e558. [Google Scholar] [CrossRef]
  347. Jossart, J.; Kenjić, N.; Perry, J. Structural-Based Drug Discovery Targeting PCNA: A Novel Cancer Therapeutic. FASEB J. 2021, 35. [Google Scholar] [CrossRef]
  348. Djulbegovic, B.; Teh, J.B.; Wong, L.; Hozo, I.; Armenian, S.H. Diagnostic Predictive Model for Diagnosis of Heart Failure after Hematopoietic Cell Transplantation (HCT): Comparison of Traditional Statistical with Machine Learning Modeling. Blood 2019, 134, 5799. [Google Scholar] [CrossRef]
  349. Ladbury, C.; Amini, A.; Govindarajan, A.; Mambetsariev, I.; Raz, D.J.; Massarelli, E.; Williams, T.; Rodin, A.; Salgia, R. Integration of artificial intelligence in lung cancer: Rise of the machine. Cell Rep. Med. 2023, 4, 100933. [Google Scholar] [CrossRef] [PubMed]
  350. Kothari, R.; Jones, V.; Mena, D.; Reyes, V.B.; Shon, Y.; Smith, J.P.; Schmolze, D.; Cha, P.D.; Lai, L.; Fong, Y. Raman spectroscopy and artificial intelligence to predict the Bayesian probability of breast cancer. Sci. Rep. 2021, 11, 6482. [Google Scholar] [CrossRef]
  351. Pozhitkov, A.; Seth, N.; Kidambi, T.D.; Raytis, J.; Achuthan, S.; Lew, M.W. Machine learning algorithm to perform ASA Physical Status Classification. medRxiv 2021. [Google Scholar] [CrossRef]
  352. Han, C.; Liu, A.; Wong, J. Application of Machine Learning for Prediction of Normal Organ Dose: Feasibility Study in Treatment Planning for Total Marrow Irradiation. Int. J. Radiat. Oncol. Biol. Phys. 2020, 108, e782–e783. [Google Scholar] [CrossRef]
  353. Two Studies for Patients with High Risk Prostate Cancer Testing Less Intense Treatment for Patients with a Low Gene Risk Score and Testing a More Intense Treatment for Patients With a High Gene Risk Score, The PREDICT-RT Trial. Available online: https://clinicaltrials.gov/ct2/show/NCT04513717 (accessed on 10 June 2023).
  354. Ngiam, K.Y.; Khor, W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273. [Google Scholar] [CrossRef]
  355. Bosserman, L.D.; Cianfrocca, M.; Yuh, B.; Yeon, C.; Chen, H.; Sentovich, S.; Polverini, A.; Zachariah, F.; Deaville, D.; Lee, A.B. Integrating academic and community cancer care and research through multidisciplinary oncology pathways for value-based care: A review and the City of Hope experience. J. Clin. Med. 2021, 10, 188. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The AI Silecosystem comprises hardware, data and software components. The integrated components of the AI Silecosystem facilitate the development, utilization and evolution of AI.
Figure 2. The AI Silecosystem paradigm conceives of the computer as a thinking machine. As proposed by Turing [27] and McCarthy et al. [28], the computer may function as a bona fide thinking machine, rather than a mere computational machine. In accordance with a Kuhnian paradigm, the AI Silecosystem undergoes a series of stages: Inception, Intermission and Invigoration. Characteristically, the paradigm experiences a series of iterative Intermission and Invigoration cycles as new expectations develop and innovations occur.
Figure 3. Satellite COH community oncology clinics may access the institutional AI Silecosystem through hub-and-spoke service operations. Community oncology practices may utilize data analytic, AI expert and HPCC resources via centralized network services provided to the COH community.
Table 1. Significant Past and Ongoing Advances in Hardware, Data and Software Driving Evolution of the AI Ecosystem and Their Value Impact.
AI Silecosystem Metrics (table columns): AI Algorithmic Speed; Efficiency/Cost; Utility; Agility; Accuracy/Validity/Reliability; Security/Safety; Accessibility.

Component: Hardware. Innovations: Quantum Computing; AI Internet of Things; Distributive Edge Computing; Cloud Computing; Neuromorphic Computing; Analog Neural Networks; Monolithic 3D AI Systems; Graphics Processing Unit; Analog Non-Volatile Memory.

Component: Data. Innovations: Synthetic Data; Culturally Representative Data Sets; Data Optimization.

Component: Software. Innovations: Generative AI; Virtual and Augmented Reality; Explainable Machine Learning; Generative Adversarial Networks; Neuro-Vector-Symbolic Architecture; Open-Source AI Software.

Legend: Greater Impact / Lesser Impact (the impact of each innovation on each metric is indicated by cell shading in the original table).