Review

Artificial Intelligence in CT and MR Imaging for Oncological Applications

1 Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
2 Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
3 Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
4 Head and Neck Service, Department of Surgery, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
5 GE Healthcare, Menlo Park, CA 94025, USA
6 GE Healthcare, New York City, NY 10032, USA
* Author to whom correspondence should be addressed.
Cancers 2023, 15(9), 2573; https://doi.org/10.3390/cancers15092573
Submission received: 31 January 2023 / Revised: 13 April 2023 / Accepted: 17 April 2023 / Published: 30 April 2023
(This article belongs to the Collection Artificial Intelligence in Oncology)

Simple Summary

The two most common cross-sectional imaging modalities, computed tomography (CT) and magnetic resonance imaging (MRI), have shown enormous utility in clinical oncology. The emergence of artificial intelligence (AI)-based tools in medical imaging has been motivated by the desire for greater efficiency and efficacy in clinical care. Although the growing number of new AI tools for narrow-specific tasks in imaging is highly encouraging, the key challenges to their implementation have yet to be adequately addressed by the worldwide imaging community. In this review, we discuss a few challenges in using AI tools and offer some potential solutions with examples from lung CT and MRI of the abdomen, pelvis, and head and neck (HN) region. As we advance, AI tools may significantly enhance clinician workflows and clinical decision-making.

Abstract

Cancer care increasingly relies on imaging for patient management. The two most common cross-sectional imaging modalities in oncology are computed tomography (CT) and magnetic resonance imaging (MRI), which provide high-resolution anatomic and physiological imaging. Here, we summarize recent applications of rapidly advancing artificial intelligence (AI) in CT and MRI oncological imaging and discuss, with examples, the benefits and challenges of the resultant opportunities. Major challenges remain, such as how best to integrate AI developments into clinical radiology practice and how to rigorously assess the accuracy and reliability of quantitative CT and MR imaging data for clinical utility and research integrity in oncology. Such challenges necessitate an evaluation of the robustness of imaging biomarkers to be included in AI developments, a culture of data sharing, and the cooperation of knowledgeable academics with vendor scientists and companies operating in radiology and oncology fields. Herein, we illustrate a few of these challenges and their solutions using novel methods for synthesizing different contrast modality images, auto-segmentation, and image reconstruction, with examples from lung CT as well as abdomen, pelvis, and head and neck MRI. The imaging community must embrace the need for quantitative CT and MRI metrics beyond lesion size measurement. AI methods for the extraction and longitudinal tracking of imaging metrics from registered lesions and understanding the tumor environment will be invaluable for interpreting disease status and treatment efficacy. This is an exciting time to work together to move the imaging field forward with narrow-specific AI tasks. New AI developments using CT and MRI datasets will be used to improve the personalized management of cancer patients.

Graphical Abstract

1. Introduction

The most common high-resolution cross-sectional anatomic imaging modalities, computed tomography (CT) and magnetic resonance imaging (MRI), excel at providing details regarding lesion location, size, morphology, and structural changes to adjacent tissues [1]. There is abundant literature on qualitative and quantitative CT and MRI focusing on oncological applications [2,3]. Such images capture features, e.g., tumor density, enhancement pattern, margin irregularity, and relation to neighboring structures, which are then used for tumor detection, initial cancer staging, assessment of treatment response, and clinical follow-up [4]. For example, in routine clinical trials, radiologists provide lesion size measurements using Response Evaluation Criteria in Solid Tumors (RECIST) guidelines for medical oncologists and radiation oncologists to assess treatment response [5]. Such size measurements are labor-intensive and can be replaced by new auto-segmentation tools that help to calculate tumor volume in a more accurate, reproducible, and time-efficient manner [6,7]. The primary driver behind the emergence of artificial intelligence (AI) in medical imaging has been the desire for greater efficacy and efficiency in clinical care [8,9]. The topics of data sampling and deep learning (DL) strategies, including levels of learning supervision (transfer learning, multi-task learning, domain adaptation, and federated and continuous learning systems), are well covered in previously published reviews [10,11]. The importance of proper data collection and standardization methods, the appropriate choice of the reference standard in relation to the task at hand, the identification of suitable training approaches, the correct selection of performance metrics, the requirements of an efficient user interface, clinical workflows, and timely quality assurance of AI tools cannot be emphasized enough [12,13]. The imaging community must address these challenges together and identify target areas that can benefit from AI opportunities. Present challenges include testing the accuracy and reliability of quantitative CT and MRI data before its inclusion in the AI pipeline as well as how best to integrate AI developments into clinical practice [14,15].
Here, we will illustrate a few challenges and solutions of these efforts using novel methods for synthesizing different contrast modality images, auto-segmentation and image reconstruction with examples from lung CT as well as abdomen, pelvis, and head and neck MRI. Discussion of AI developments in other imaging modalities, including X-ray, mammography, ultrasonography, and positron emission tomography (PET), is beyond the scope of this review.

1.1. Highlights

AI applications in CT and MRI oncological imaging may be leveraged for protocol development, imaging acquisition, reconstruction, interpretation, and clinical care.
Herein are highlighted the key points of the review:
- Deep learning methods can be used to synthesize different contrast modality images for many purposes, including training networks for multi-modality segmentation, image harmonization, and missing modality synthesis.
- AI-based auto-segmentation for discerning abdominal organs is presented here. Deep learning methods can leverage modalities with more information (e.g., higher soft-tissue contrast from MRI or large expert-segmented labeled datasets such as CT) to improve tumor segmentation performance in a different modality without requiring paired image sets.
- Deep learning reconstruction algorithms are illustrated with examples for both CT and MRI. Such approaches improve image quality, which aids in better tumor detection, segmentation, and monitoring of response.
- Large quantities of data are required for AI development, and this has created opportunities for collaboration, open team science, and knowledge sharing.

1.2. AI in CT and MRI for Oncological Imaging

AI tools represent a potential leap forward in oncological imaging, including harnessing machine learning and DL to improve tumor characterization, identify imaging biomarkers for histopathological, metabolic, and functional status, and tailor treatment plans [16]. AI methods have shown the potential to stratify patients based on risk factors as well as provide automated measurements of tumor volume via tumor segmentation [10,15,17]. Many studies have been published on machine learning tools for computer-aided or AI-assisted clinical tasks [8,9,11,18]. However, most of these tools are not yet ready for clinical deployment. It is of paramount importance that any AI-driven clinical tool undergo proper training and rigorous validation of its generalizability and robustness before being adopted into patient clinical care [15,19,20,21].
Highly accurate tumor segmentation would allow for reliable and reproducible longitudinal tracking of tumor size and volume across time points. Automated segmentation can be easily integrated into clinical oncological imaging workflows, overcoming the time limitations of manual size comparisons [22,23]. Although RECIST remains the standard methodology for clinical trials, it is difficult to implement in daily clinical practice [5]. Furthermore, rapid progress in computational power and new AI techniques can allow for the processing of larger data sets to reveal new imaging biomarkers that are surrogates for tumor subtypes and disease status [24]. AI models can now be constructed incorporating the full spectrum of clinical, genomic, and histopathologic data in tumor classification [25], tumor subtyping with non-invasive quantitative imaging data, and tumor histopathology. Lastly, genomics data can revolutionize cancer management by guiding treatment selection and determining prognosis [26,27].
AI efforts in CT and MRI are already well underway and have demonstrated remarkable progress in various image analysis tasks [8,9,10,11]. DL techniques have shown promise in CT screening for lung cancer and colonic polyps [28], MRI screening for prostate cancer [29], discriminating glioblastoma from brain metastasis with conventional MR images [30], breast cancer risk assessment with MR images [31,32], and segmentation of CT and MR images of head and neck (HN) cancer for MR-guided radiotherapy [33,34,35]. AI models trained on large datasets can extract high-dimensional representations, which show an increase in specificity compared with lower-dimensional machine learning methods often used in computer-aided detection software for lung cancer screening [36]. The advent of precision medicine in oncology aims to tailor individual treatment plans based partly on tumor genomics and histopathology [37]. Typically, these data are obtained through invasive procedures. However, the ability to capture such data non-invasively can augment precision medicine with radiomics and therefore change clinical management. In neuro-oncology, in particular, research efforts aim to predict the presence of IDH1 mutations, 1p/19q co-deletion, and EGFR, as well as VEGF and p53 status, by identifying precise imaging biomarkers via machine learning and DL techniques [38]. Tumor subtyping may further aid the determination of cancer prognosis [39]. Attempts have been made using AI tools to predict survival outcomes in glioblastoma multiforme based on baseline brain MRI [40] as well as to predict response to chemoembolization in hepatocellular carcinoma based on baseline liver MRI [41]. A comprehensive understanding of the invasive histopathological and molecular approaches that provide insight into intratumor heterogeneity, and of the role of advanced MR imaging in characterizing microstructure, cellularity, physiology, perfusion, and metabolism, is still lacking [42,43]. Thus, developing informed, cutting-edge, robust AI tools using imaging datasets is necessary to quantify imaging biomarkers and improve patient diagnostics and outcomes.

2. Specific-Narrow Tasks Developed Using AI for Radiological Workflow

Figure 1 illustrates the many opportunities for specific-narrow tasks developed using AI in radiological workflow, which range from imaging protocol development and data acquisition to the interpretation of images for clinical care. AI can be helpful in patient protocol systems, starting with selecting proper imaging tests depending on the organ under study, exam scheduling, protocoling, and retrieving available prior images for comparison. All major imaging vendors incorporate AI, which shows great promise for patient positioning [44], image acquisition, and reconstruction pipelines by reducing scan time, suppressing artifacts, and improving overall image quality via optimization of the signal-to-noise ratio (SNR) [45,46]. AI-based image reconstruction methods can also help minimize the radiation dose from CT images by improving image quality [23,35]. AI tools developed for specific, narrow tasks, such as case assignment, lesion detection, and segmentation of regions of interest, are critical for oncological imaging. Reconstruction of images using DL algorithms has shown remarkable improvements in image contrast and SNR for CT [19,47,48] and MRI [49,50,51]. As mentioned above, manually segmenting longitudinal tumor volume is laborious, time-consuming, and difficult to perform accurately. Previously developed auto-segmentation methods were sensitive to changes in scanning parameters, resolution, and image quality, which limited their clinical value [52]. AI-based algorithms have been successful at tumor segmentation and have shown better accuracy and robustness to imaging acquisition differences [49,50,51]. In parallel, new AI tools have been developed for the quantification of image features from both radiomics and lesion classification [16,53,54]. AI models could help integrate multi-modality imaging data and molecular markers as available [25]. AI methods are also amenable to developing predictive and prognostic models for clinical decision-making and/or clinical trials [55]. With these developments, AI is poised to be the main driver for innovative CT and MR imaging, and it can play an important role in clinical oncology.
This is an exciting time for imaging professionals, in which radiologists and scientists will remain essential for producing the highest quality imaging data and its interpretation for clinical care. Herein, we illustrate the challenges of CT and MR image analysis using AI tools as well as offer some potential solutions originating from our experience using examples from lung CT and MRI of the abdomen, pelvis, and HN region.

3. Major Challenges with Solutions for Radiological Image Analysis

The major challenges in radiological image analysis are described pointwise with solutions in this section. Accordingly, we have summarized a selection of original and review articles with references, the narrow-specific AI tasks, title, objectives, advantages, recommendations, and limitations (if applicable) in Table 1. The select articles from 2018 to 2022 cover AI applications and their use in (i) medical imaging, (ii) image reconstruction and registration, (iii) lesion segmentation, detection, and characterization, and (iv) clinical applications in oncology. It was beyond the scope of this work to include the full list of articles published in this area.

3.1. Variability in Imaging Acquisition Poses Challenges for Large-Scale Radiomics Analysis Studies

Radiomics, or the non-invasive extraction of quantitative information from images, is well developed in oncology, with several groups demonstrating its utility for both cancer diagnosis and treatment response prediction of multiple solid cancers [62]. However, these successes have yet to be translated into routine clinical use due to the variability in MRI [63] and CT [64] images stemming from varying image acquisition protocols and multi-vendor scanners that affect radiomics features. Hence, cross-site image harmonization remains an urgent, unmet need to enable robust multi-institutional and clinical use of radiomics biomarkers.
Commonly used image harmonization methods, such as ComBat, use the statistical properties of the data distribution to reduce the variability of radiomic features, removing so-called “batch effects” by shifting distributions [65] under the often unrealistic assumption of a unimodal feature distribution. The multi-modality of feature distributions can be addressed with Gaussian mixture-based ComBat normalizations [66], but such methods still require pre-determined groupings and a fixed set of features.
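As a point of reference, the following is a minimal sketch of the location/scale idea behind ComBat-style harmonization, assuming radiomic features are arranged as a NumPy array with one row per case and a batch (scanner or site) label per case; it omits the empirical Bayes estimation used by the full ComBat method and, like the basic method, assumes a unimodal feature distribution per batch.

```python
import numpy as np

def harmonize_features(features, batch_labels):
    """Simplified ComBat-style harmonization of radiomic features.

    For each batch (e.g., scanner or site), remove its additive shift and
    multiplicative scale so that all batches share the pooled mean and
    standard deviation of each feature.

    features: (n_cases, n_features) array of radiomic features.
    batch_labels: (n_cases,) array identifying each case's acquisition batch.
    """
    features = np.asarray(features, dtype=float)
    batch_labels = np.asarray(batch_labels)
    pooled_mean = features.mean(axis=0)
    pooled_std = features.std(axis=0) + 1e-8
    harmonized = np.empty_like(features)
    for batch in np.unique(batch_labels):
        idx = batch_labels == batch
        batch_mean = features[idx].mean(axis=0)
        batch_std = features[idx].std(axis=0) + 1e-8
        # Standardize within the batch, then map back to the pooled distribution.
        harmonized[idx] = (features[idx] - batch_mean) / batch_std * pooled_std + pooled_mean
    return harmonized
```

This per-batch shift-and-scale is exactly where the unimodality assumption enters; the Gaussian mixture ComBat variants replace the single batch mean and variance with mixture components.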
Recent developments in domain adaptation using generative adversarial networks (GANs) have successfully applied image harmonization to CT and MRI images [67,68,69,70]. However, such methods have limited success due to their reliance on global image similarity losses, which can lead to the introduction of unexpected artifacts and hallucinated features as well as the potential loss of diversity in the textural content. Disentangling DL methods, which separate domain-invariant content (such as tumor shape and anatomic context) from domain-specific style, are more robust to domain differences and best mitigate mode collapse issues [12,71]. However, such methods also require the training of multiple one-to-one modality mapping networks, which increases the need for computational and memory capacity to accommodate a variety of scanner and imaging protocols.
Other prior works have used GANs for image synthesis for a variety of purposes [72,73,74,75,76,77]. Examples include generating missing PET images from CT using bi-directional contrastive GANs constructed to maximize the information shared between two networks that generate CT-to-PET and PET-to-CT images, respectively; synthesizing liver contrast to improve tumor detection and segmentation by combining a GAN with a self-attention convolutional network and a region-based discriminator [77]; multi-contrast MRI generation from CT with the so-called MedGAN for medical imaging applications [73]; and ensuring realistic texture preservation with texture-preservation losses implemented into GAN network training [72]. Whereas the aforementioned methods focused on preserving textural characteristics and inverse consistency to ensure synthesis fidelity, other works used attention formulations to focus the network on regions or structures of interest. One technique, SAGAN [74], uses region masks to provide additional constraints. Another technique, PSIGAN, combines derived structure information using a jointly trained segmentation and image synthesis network to learn to segment MRI images without labeled MRI datasets [78]. Recently, a new CVT-GAN method combined a convolutional framework with vision transformers to extract global and local self-attention for high-quality standard-dose PET (SPET) reconstruction from low-dose PET (LPET) images [76].
In prior work, a disentangled deep network approach was developed that employs a single universal content encoder with a single variational autoencoder to extract both image content and style for domain adaptation [75]. Using our approach, a style code is extracted from the images and converted into latent style codes that can then be used to modulate image generation. A key difference between our variational autoencoder approach and other prior methods is that our method learns a one-to-many modality translation using a lightweight scaling module that extracts the style code for the different modalities as a scaling function, which is then injected into a single decoder to generate the different modality images. Therefore, our approach uses a smaller memory footprint architecture, consisting of a single domain-invariant content encoder, a lightweight style coder network, and a single decoder network, whereas other methods require multiple one-to-one modality synthesis networks for every considered modality [72,73,74,75,76,77].
Extensive details of our method have been published in several outlets [59,75,79,80]. Briefly, our method includes a domain-invariant content encoder network composed of a sequence of convolutional layers and a single style coding network that extracts the latent style code for the different modalities. The style coding network is constructed using a variational autoencoder, which uses a latent Gaussian prior to span the styles of the various modalities and is constructed using 5 convolutional pooling layers, followed by a global pooling and fully connected layer. The style code is transformed into a latent style scale by a latent scale layer that is then used to modulate the features computed by the decoder network to synthesize images corresponding to different modalities. This network is jointly optimized using adversarial losses using a patchGAN discriminator, content reconstruction losses, image translation losses, and latent code regression losses as detailed in prior work [75]. In addition, a multi-tasked training strategy is used in which a two-dimensional (2D) Unet architecture is employed to learn to generate multi-organ segmentation from the synthesized image sets. The networks are optimized using the Adam method with a batch size of 1 and a learning rate of 2 × 10−4, with early stopping used to prevent overtraining [81].
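The following PyTorch sketch illustrates the latent-scale idea described above: a style code produced by the style encoder is mapped to per-channel scales that modulate a single shared decoder, so one content representation can be decoded into different modalities. The layer sizes, names, and toy inputs are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class LatentScaleDecoder(nn.Module):
    """Single decoder whose features are modulated by a latent style scale.

    Sketch of one-to-many synthesis: one domain-invariant content
    representation is decoded into different modalities by injecting a
    modality-specific style code as a per-channel scaling of the features.
    """

    def __init__(self, content_channels=256, style_dim=8):
        super().__init__()
        # Lightweight style-to-scale mapping (the "latent scale" layer).
        self.to_scale = nn.Sequential(
            nn.Linear(style_dim, content_channels), nn.Sigmoid()
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(content_channels, 128, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
            nn.Tanh(),  # synthesized image intensities in [-1, 1]
        )

    def forward(self, content, style_code):
        # content: (B, C, H, W) content features; style_code: (B, style_dim)
        scale = self.to_scale(style_code).unsqueeze(-1).unsqueeze(-1)
        return self.decode(content * scale)

# One content representation, two style codes -> two synthesized modalities.
content = torch.randn(1, 256, 64, 64)
decoder = LatentScaleDecoder()
t1_like = decoder(content, torch.randn(1, 8))
t2_like = decoder(content, torch.randn(1, 8))
```

The same decoder weights are reused for every modality; only the low-dimensional style code changes, which is what keeps the memory footprint small compared with maintaining one decoder per modality pair.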
The results of synthesizing T1-weighted (T1w) and T2-weighted (T2w) MRI from CT datasets, using data available in the open-source Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge dataset, are shown in Figure 2. Using a published method described by Jiang and Veeraraghavan [75], the model was trained using 20 unlabeled MRIs and an entirely different set of 30 patients with expertly segmented CT images containing multiple organ segmentations. Testing was performed on another group consisting of 10 patients who had undergone MRI exams. Both sequences were acquired on a 1.5 Tesla scanner. As shown, our approach produced a realistic synthesis of such images, indicating potential use in image harmonization.
Synthesis realism was measured by computing the similarity between the features computed within the individual organs on synthesized images and those same organs in real images. Our method produced low distances of 5.05 and 14.00 for T1w and T2w MRI, respectively. In comparison, these distances were 73.90 and 101.37 for T1w and T2w MRI using CycleGAN, which learns multiple one-to-one modality translations, and 73.39 and 77.49 using another state-of-the-art one-to-one modality translation method called StarGAN [82].
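One common way to score such feature-distribution similarity is a Fréchet-style distance between Gaussian fits of the two feature sets; the sketch below shows this computation, assuming the per-organ features have already been extracted (e.g., from a pretrained network) for the real and synthesized images. Whether this exact formulation matches the reported distances is an assumption made here for illustration.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_synth):
    """Fréchet distance between two sets of feature vectors.

    feats_*: (n_samples, n_features) arrays of features extracted from the
    same organ region on real and synthesized images. Lower is more similar.
    """
    mu_r, mu_s = feats_real.mean(axis=0), feats_synth.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_s = np.cov(feats_synth, rowvar=False)
    cov_sqrt, _ = linalg.sqrtm(cov_r @ cov_s, disp=False)
    if np.iscomplexobj(cov_sqrt):
        cov_sqrt = cov_sqrt.real  # drop numerical imaginary residue
    diff = mu_r - mu_s
    return float(diff @ diff + np.trace(cov_r + cov_s - 2.0 * cov_sqrt))
```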

3.2. Volumetric Segmentation of Tumor Volumes and Longitudinal Tracking of Tumor Volume Response

Currently, radiographic response assessment during treatment and at follow-up is primarily applied using unidimensional RECIST metrics [5], which have many limitations and cannot quantify the underlying phenotypic heterogeneity within tumors. For practical use, automated and consistent pipelines for quantifying longitudinal tumor response dynamics are needed. Reliable segmentation is also necessary to overcome the practical limitations of radiomics analysis methods, which require volumetric tumor segmentation.
Recent works have shown the possibility of obtaining a more accurate tumor prognosis by utilizing longitudinal tumor response image features extracted from radiomics analysis [54,83,84,85,86]. Multi-tasked AI methods that combine segmentation and classification of serial images have shown an improved ability to predict treatment response for rectal cancers [87]. In this context, AI-enabled longitudinal image analysis is needed to both segment and characterize tumor changes at the voxel level. Containerized and operating system-independent segmentation tools, such as DeepNeuro [88] and DeepInfer [89], provide well-known AI models for specific disease sites, primarily brain and prostate cancers. Community-supported resources, such as MONAI [90], have increased the ability to extract, transform, and load data for tailored DL model development, thereby lowering the barrier to DL tool assessment for the general research community.
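As a simple illustration of the volumetric tracking that reliable auto-segmentation enables, the sketch below computes tumor volume from a binary mask and voxel spacing at each time point and reports the relative change, assuming the masks correspond to the same registered lesion; the masks and spacing here are synthetic placeholders.

```python
import numpy as np

def tumor_volume_ml(mask, spacing_mm):
    """Tumor volume in millilitres from a binary mask and voxel spacing (mm)."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0

def volume_change(mask_baseline, mask_followup, spacing_mm):
    """Percent volume change between two time points for a registered lesion."""
    v0 = tumor_volume_ml(mask_baseline, spacing_mm)
    v1 = tumor_volume_ml(mask_followup, spacing_mm)
    return 100.0 * (v1 - v0) / v0

# Example: two synthetic masks with 1 x 1 x 3 mm voxels.
baseline = np.zeros((64, 64, 32), dtype=np.uint8); baseline[20:40, 20:40, 10:20] = 1
followup = np.zeros_like(baseline); followup[22:38, 22:38, 11:19] = 1
print(volume_change(baseline, followup, spacing_mm=(1.0, 1.0, 3.0)))
```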
These successes have spurred growth in offering commercial tools for normal tissue segmentations for several disease sites. However, successes in normal tissue segmentation and a few cancers, such as brain gliomas, have yet to be translated to tumors in other disease sites and imaging modalities, such as contrast-enhanced and non-contrast CTs and cone-beam CTs that are routinely used in radiotherapy. New DL methods that learn the underlying spatial anatomic context, including those that use vision transformers and self-attention methods [91,92], have improved the ability of DL to segment challenging tumors. Another related recent innovation is the development of distillation learning and cross-modality learning [45,93,94], in which information from different modalities, such as CT or MRI, is used to inform and improve the extraction of relevant features that better signal the contrast between tumor and background. In addition to improving segmentation in imaging modalities with low soft-tissue contrast, such as CT and cone-beam CT, using the information learned from higher contrast modalities (e.g., MRI) can also benefit learning in new modalities for disease sites (such as MRI for the lung) in which expertly segmented datasets are limited [95].
Figure 3 shows example segmentations produced by a cross-modality educed distillation learning method (CMEDL) [79], which combines learning from unpaired or unrelated sets of T2w turbo spin echo MRI and CT as well as cone beam computed tomography (CBCT) images for the segmentation of lung tumors. Segmentation on T2w MRI produced via unpaired distillation learning, in which many more CT datasets (n = 300) than MRI datasets (n = 80) were available, demonstrates the additional use case of unpaired distillation learning for data augmentation. The results shown in Figure 3A–C are produced by three different models that were trained using the CMEDL approach. Extensive details of the CMEDL method are in the prior published methods for CT lung tumor segmentation [96], MRI lung tumor segmentation [79], and CBCT-based lung tumor segmentation [59]. Concisely, the CMEDL architecture makes use of two parallel segmentation subnetworks: a so-called teacher network (using MRI [Figure 3A,B], CT [Figure 3C]) and a student network (using CT [Figure 3A], CBCT [Figure 3B], and T2w MRI [Figure 3C]). Any segmentation architecture can be used, as shown using the popular Unet as well as a dense network called a multiple resolution residual network [97]. The teacher network forces the student network to extract features that better signal the contrast between foreground and background by applying feature distillation losses that match the high-level features computed from corresponding synthesized teacher modality (e.g., MRI) and student modality (e.g., CT) images.
The network itself is trained with unpaired images, in which corresponding sets of multiple-modality scans are not required for training. To accomplish training with unpaired modalities, a cross-modality synthesis network created using a GAN is applied. The GAN consists of a generator, created using a 3DUnet that computes dense pixel regression with a tanh activation, and a PatchGAN discriminator network that distinguishes the synthesized from the real images during training. The details of the number of images used in training, training losses, training epochs, etc., are in published methods [79]. The teacher network is initialized with example real images and corresponding segmentations to learn to extract the appropriate set of relevant features. The same network is then jointly optimized with the student network to further refine the extracted features using synthesized images produced from the images input to the student network via the GAN-based image-to-image translation network. The teacher and student networks are jointly optimized during training to make use of multi-task optimization. The GAN network for synthesizing the cross-modality images is cooperatively optimized such that its parameters are updated only in iterations when the segmentation networks' parameters are frozen, and vice versa, to ensure stable training convergence.
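A condensed, self-contained sketch of the two training ideas described above follows: a feature distillation loss that matches high-level student features to teacher features computed on synthesized cross-modality images, and cooperative optimization in which the synthesis network and the segmentation networks are updated in alternating iterations. The tiny networks and tensors are placeholders, not the published CMEDL implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Placeholder segmentation network returning (high-level features, logits)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 2, 1)   # 2 classes: background / tumor

    def forward(self, x):
        f = self.features(x)
        return f, self.head(f)

generator = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Tanh())  # toy CT -> pseudo-MRI
teacher, student = TinySegNet(), TinySegNet()
seg_optim = torch.optim.Adam(list(teacher.parameters()) + list(student.parameters()), lr=2e-4)
gan_optim = torch.optim.Adam(generator.parameters(), lr=2e-4)

ct = torch.randn(2, 1, 64, 64)            # toy CT batch
mask = torch.randint(0, 2, (2, 64, 64))   # toy tumor labels

for step in range(4):
    if step % 2 == 0:
        # Segmentation step: update teacher and student; synthesis network is not updated.
        with torch.no_grad():
            pseudo_mri = generator(ct)
        t_feats, t_logits = teacher(pseudo_mri)
        s_feats, s_logits = student(ct)
        loss = (F.cross_entropy(s_logits, mask) + F.cross_entropy(t_logits, mask)
                + F.mse_loss(s_feats, t_feats.detach()))   # feature distillation term
        seg_optim.zero_grad(); loss.backward(); seg_optim.step()
    else:
        # Synthesis step: update the generator; segmentation weights are not updated.
        pseudo_mri = generator(ct)
        _, t_logits = teacher(pseudo_mri)
        loss = F.cross_entropy(t_logits, mask)   # adversarial and cycle terms omitted
        gan_optim.zero_grad(); loss.backward(); gan_optim.step()
```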
The results of segmenting the tumor on CT images using a Unet network on a sample test case and optimized via the CMEDL approach with CTs (n = 377) and MRIs (n = 82) from external and internal institution datasets, respectively, are shown in Figure 3A. The results of segmenting an external institution CBCT image using a Unet network optimized with the CMEDL approach with unpaired CBCTs (n = 216) and 82 MRIs from different sets of patients are shown in Figure 3B. Figure 3C shows a sample test-set MRI segmentation produced by training a Unet using the CMEDL approach. Separate models were constructed for the three results and optimized with different datasets. All networks were optimized with the Adam optimizer, with an initial learning rate of 2 × 10−4 and a batch size of 2, and early stopping was used to prevent overfitting. As shown in Figure 3, the algorithm-generated segmentations closely approximate the expert delineations for the representative test cases.
Although the aforementioned method focuses on the segmentation of the gross tumor volume (GTV), it is also important to consider the tumor margin needed for effective treatment when using the AI-defined tumors for treatment planning and delivery [59,78,79,97,98]. For instance, in the context of thermal ablation, prior work by Singh et al. [99] showed that incorporating blood perfusion information from dynamic contrast MRI using commercial software tools could better define the margins of breast tumors for thermal ablation. In the context of radiation therapy, the segmented GTV is often expanded to produce a clinical target volume (CTV) that incorporates the microscopic spread, using treatment planning software to generate an automatic expansion with fixed criteria for different disease sites while aiming to limit radiation exposure to the adjacent healthy tissues. However, this approach does not always account for microscopic disease, and hence, it is resolved by a clinician's manual delineation, which leads to inter-rater variability [100]. Cardenas et al. [101] addressed this issue of clinical variability by using a stacked autoencoder deep network formulation to automatically learn the CTV definition for head and neck cancers while accounting for adjacent healthy tissues, both for lymph nodes and the high-risk CTV. A different prior work by Xie et al. [102] addressed the issue of lung cancer CTV definition by accounting for respiration and the GTV contained within the CTV by constructing a customized loss function within a 3DUnet approach.
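For illustration, a minimal sketch of the fixed-margin expansion mentioned above is given below, dilating a binary GTV mask by an isotropic margin in millimetres using a Euclidean distance transform that respects anisotropic voxel spacing. This is a simplification; clinical CTV definitions also account for anatomical barriers and site-specific rules, which the sketch does not.

```python
import numpy as np
from scipy import ndimage

def expand_gtv(gtv_mask, margin_mm, spacing_mm):
    """Expand a binary GTV mask by a fixed margin (mm) to a simple CTV.

    Uses a Euclidean distance transform so the expansion respects anisotropic
    voxel spacing. Does not account for anatomical barriers to spread.
    """
    # Distance from every background voxel to the nearest GTV voxel, in mm.
    distance = ndimage.distance_transform_edt(~gtv_mask.astype(bool),
                                              sampling=spacing_mm)
    return distance <= margin_mm

# Toy GTV on a grid with 1 x 1 x 3 mm voxels, expanded by a 5 mm margin.
gtv = np.zeros((64, 64, 32), dtype=bool); gtv[28:36, 28:36, 14:18] = True
ctv = expand_gtv(gtv, margin_mm=5.0, spacing_mm=(1.0, 1.0, 3.0))
```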

3.3. Optimization of Dose and Image Quality Improvement in CT Scans

CT is an essential component of modern healthcare [103,104]. With technical improvements, such as iterative reconstruction (IR) [105], dual-energy CT [106], ultra-high resolution CT [107], and the latest innovation of photon counting CT [106], the spectrum of potential clinical applications has dramatically increased [103]. Nevertheless, there is still much to be done to reduce radiation exposure while suppressing noise and preserving or improving spatial and contrast resolution [103,105,108,109]. Although current model-based IR algorithms and their variants compensate for the increased noise caused by reduced radiation doses, the shifted image texture with IR relative to conventional filtered back projection is subjectively inferior and less preferred by radiologists [108,109,110].
To address this challenge and democratize the technology, researchers have looked to AI- or DL-based image reconstruction solutions to improve imaging capabilities while reducing radiation doses [111]. DL-based CT reconstruction (DLR) has emerged as a promising alternative to conventional CT reconstruction methods [109,112]. Several literature reports demonstrate DLR to be superior to IR at noise suppression and artifact reduction [113,114,115]. Therefore, radiologists subjectively prefer DLR for several diagnostic tasks [113,116]. One commercially available DL-based solution, TrueFidelity (General Electric Healthcare [GEHC], Madison, WI, USA), trains a deep convolutional neural network (CNN) to map low-dose CT images to a higher quality and high-dose version of the same data [109,115]. TrueFidelity differentiates and suppresses noise while reconstructing CT images with characteristics resembling the higher-quality scans from the training set [109]. A recent clinical investigation reported improvements in radiologists' subjective image quality scores as well as gains in contrast-to-noise ratio and noise reduction while reducing the radiation dose by more than 50% for the detection of liver lesions >0.5 cm from portal venous abdominal CT exams [117]. DLR methods are expected to be the future of CT image reconstruction [58,118]. With improved algorithms, computational power, and more data, DL-based image reconstruction will continue outperforming model-based IR and its variants at generating low-noise images without sacrificing image quality across diagnostic tasks for human viewers [58,118].
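The vendor implementations are proprietary, so the following is only a generic sketch of the supervised idea described in the text: training a CNN to map low-dose CT slices to their higher-dose counterparts. The toy residual network and the random tensors standing in for paired reconstructions are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Toy residual CNN that maps a low-dose CT slice to a restored slice."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # predict the noise residual

model = DenoisingCNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-ins for paired low-dose / routine-dose reconstructions of the same slices.
low_dose = torch.randn(4, 1, 128, 128)
high_dose = torch.randn(4, 1, 128, 128)

for _ in range(10):                      # toy training loop
    optim.zero_grad()
    loss = loss_fn(model(low_dose), high_dose)
    loss.backward()
    optim.step()
```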

3.4. Optimization of Image Quality in MRI Scans

Conventional MR data acquisition methods provide excellent soft-tissue contrasts in images and are routinely used for oncological diagnostic workups. SNR and spatial resolution constraints, motion artifacts, and longer scan times can, at times, be limiting factors in MRI, depending on the organ of interest [17,49]. For cancer patients unable to stay in the MRI scanner for a half hour or longer, there is an urgent need for rapid and robust MR imaging acquisition that improves patient comfort and throughput. For example, GEHC, a major MRI vendor, recently introduced a novel DL-based MR reconstruction method, AIR™ Recon DL, which improves MR image reconstruction for anatomical T1w and T2w imaging by increasing SNR and sharpness while reducing scan time.
The AIR™ Recon DL reconstruction process converts raw k-space data into high-quality images as its output [49,119]. This approach generates images with reduced ringing artifacts and noise, which may lead to increased diagnostic accuracy compared with conventional methods. The AIR™ Recon DL pipeline does not require resolution-degrading filters, which are commonly embedded in the traditional reconstruction pipeline. Instead, it utilizes a deep CNN that works on raw, complex-valued imaging data to produce a clear output image. The CNN has been specifically designed to allow for a user-controlled reduction in image noise, reduction of truncation artifacts, and enhancement of edge sharpness. There is also a window of opportunity for AI to both improve image quality and quantify imaging biomarkers derived from quantitative techniques, such as diffusion-weighted (DW)-MRI, which measures the random Brownian motion of water molecules in tissue [120].
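The AIR™ Recon DL pipeline itself is proprietary; as a point of reference, the sketch below shows the conventional step it augments, reconstructing a magnitude image from single-coil Cartesian k-space with an inverse FFT. A learned denoiser would operate on the complex-valued data before the magnitude operation, rather than applying resolution-degrading filters in k-space. The phantom and function names are illustrative.

```python
import numpy as np

def kspace_to_image(kspace):
    """Reconstruct a magnitude image from single-coil Cartesian k-space.

    kspace: 2D complex array. A DL reconstruction would act on the
    complex-valued data (`image_complex` below) before the magnitude step.
    """
    image_complex = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(image_complex)

# Toy example: simulate k-space of a square phantom, then reconstruct it.
phantom = np.zeros((256, 256)); phantom[96:160, 96:160] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
reconstructed = kspace_to_image(kspace)
```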
Recent literature has shown promise for DW-MRI powered with DL Recon in diagnostic applications for brain tumors [121], liver cancer [122], and prostate cancer [123], reporting higher SNR and image quality. We are working with GEHC scientists to apply this technology to different body organs at our center, with the aim of using DL Recon to improve diagnostic image quality and the robustness of imaging biomarkers. This new DW-MRI protocol will allow for modification of the MRI acquisition parameters, including b-values and the number of excitations. Figure 4 demonstrates preliminary experience with this method, with images acquired from patients with papillary thyroid cancer and lymphoma. Whole-body DW-MRI was performed on the lymphoma patient to detect the spread of disease to other vital organs.

3.5. Bias in AI Models

Although the growing number of new AI models for narrow-specific tasks in CT and MRI is highly encouraging, the effort to tackle key challenges to implementation by the worldwide imaging community has yet to be addressed. AI-based system pipelines consist of data sampling and DL strategies, including various levels of learning supervision, before drawing conclusions from the learned model [9,10,11,15,16]. Therefore, uncertainty and bias are important considerations when working with AI tools [12]. Uncertainty is the degree of variability in the model's predictions, whereas bias is a systematic error in the model. Inherent uncertainties and biases are associated with each step of the pipeline, arising from data collection, noise in the data, and the modeling approaches used with AI tools [124]. Reproducibility assessments address measurement uncertainty, which typically arises from multiple sources. It is critical that the results of AI systems are both reproducible and reliable to enable the development of personalized cancer care strategies [21,91,125].
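A minimal sketch of one common way to expose the prediction variability mentioned above is Monte Carlo dropout, in which dropout is kept active at inference and the spread of repeated predictions serves as an uncertainty estimate; the toy classifier and feature vector below are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(           # placeholder classifier with dropout
    nn.Linear(64, 32), nn.ReLU(), nn.Dropout(p=0.3), nn.Linear(32, 2)
)

def mc_dropout_predict(model, x, n_samples=50):
    """Mean prediction and per-class standard deviation via MC dropout."""
    model.train()                # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(1, 64)           # placeholder imaging-feature vector
mean_prob, uncertainty = mc_dropout_predict(model, x)
```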
The AI tools developed so far have shown pivotal results in providing better accuracy for prognosis, diagnosis, and assessment of treatment response using tumor characteristics obtained from radiologic images. However, these studies do not explicitly account for bias in their AI training sets [12]. Bias in AI studies remains a major challenge that must be addressed by proper data collection practices. Suboptimal data collection can introduce bias and lead to a misleading perception of model performance, especially in subpopulations that may not be appropriately represented in a study's dataset. The data collection process must be described in detail to demonstrate scientific rigor, which requires transparent inclusion and exclusion criteria as well as the target cancer patient demographics. Unequal demographics of cancer patients and disparate access to the healthcare system due to economic inequalities impede the study of certain cancers in underrepresented populations [114]. Variability in the manifestation of cancers across subgroups can act as a confounder. Populations in low-income countries are often understudied due to a lack of research funding and limited access to large, high-quality datasets. Moreover, pediatric patients and young children are not smaller versions of adults and should not be studied as such. Their organ size, shape, and appearance on CT or MRI exams differ considerably from those of adult patients. An AI system that appears functional for adult patients should not be assumed to work for pediatric patients. While attempting to account for potential biases, investigators may unintentionally limit their data search to their own centers or collaborating groups. One solution to this limiting factor would be to rebalance datasets by including more representative data from underrepresented communities before training AI models.
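A minimal sketch of the kind of subgroup audit implied above is shown below: computing a performance metric separately for each demographic or scanner subgroup so that underperformance hidden by an aggregate score becomes visible and can guide rebalancing. The column names are hypothetical and would need to match the local dataset.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auc(df, label_col="label", score_col="model_score",
                 group_col="subgroup"):
    """Per-subgroup AUC to reveal performance gaps hidden by the overall score.

    df: test-set predictions, one row per case; column names are hypothetical.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        if sub[label_col].nunique() < 2:
            continue                     # AUC undefined with a single class
        rows.append({group_col: group,
                     "n": len(sub),
                     "auc": roc_auc_score(sub[label_col], sub[score_col])})
    return pd.DataFrame(rows).sort_values("auc")
```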
Another potential solution is to train AI systems using raw or unprocessed CT scan data. Most CT scans that train current AI systems are processed for the human visual system. As a result, the steps to generate a human-interpretable image may lead to a loss of potentially relevant information because raw data is downsampled and compressed [126]. Moreover, each vendor implements proprietary solutions to enhance the quality of their scans so that they are more appealing than their competition. These processing steps inject unique patterns unrelated to the target signal that the AI systems could spuriously use to correlate with class labels. The issues stemming from post-processed data training could be overcome by developing end-to-end AI systems with raw CT data.

4. Discussion

Cross-sectional CT and MRI are an integral part of the diagnostic workup. Applications of novel narrow-specific AI tasks in these imaging techniques have shown promise for data acquisition, image segmentation and registration, and assessment of tumor responses to therapy in brain tumors [30], breast [32], head and neck [33,35], liver, lung, and abdominal cancers [29,61,127]. For example, DL methods have proven to be effective and clinically applicable tools for the segmentation of the head and neck anatomy for radiotherapy [34]. Despite exciting advancements in the AI field, challenges to the translation of these AI-based tools into radiology practice still exist. In reviewing these challenges and potential solutions, we recommend certain strategies for the CT and MRI fields in the era of AI, including collaboration between radiologists, treating physicians, and imaging scientists. The awareness of the general accuracy of the AI model and the degree of confidence in each prediction are needed and should be well documented. Oncology professionals must communicate their imaging needs for patient management to radiologists, thus motivating research and obtaining funding to perform the necessary pilot studies. Radiology must embrace the need for quantitative CT and MRI metrics beyond lesion size measurements. Our recommendations for the application of AI in CT and MRI may apply to additional imaging modalities, such as X-ray, mammography, ultrasonography, and PET. The extraction of imaging metrics using AI should be an integral part of radiology and/or oncology workflows without impeding productivity and may be incorporated into fully automated workflow systems in the future. The longitudinal tracking and extraction of imaging metrics from registered lesions and the tumor environment using AI methods will be both efficient and productive tools for interpreting clinical follow-up. Finally, analysis of big imaging data with the representation of cancer patients from all types of demographics as well as additional sources of data, such as genomics from clinical trial analysis, is expected to create a data-driven taxonomy of cancer, which will then serve to optimize treatment decisions and improve cancer prognosis. This is the best time to work together to move the imaging field forward with narrow-specific AI tasks.

5. Future Directions

One of the goals of AI tool development is to introduce automated methods ethically and safely into radiology practices. Since the inception of AI, experts have predicted the potential of highly tailored AI technologies for clinical oncological applications. The benefits of AI in cancer care go beyond the optimization of established treatment strategies, but we must ensure rigorous multi-disciplinary testing of these AI models before their adoption into clinical radiology workflows. In addition, regulatory oversight is necessary to address quality control issues and avoid algorithmic biases.

6. Conclusions

In this review, a few challenges and opportunities for AI application to oncological imaging were summarized using novel methods for synthesizing different contrast modality images, auto-segmentation, and image reconstruction with examples from lung CT and abdomen, pelvis, and head and neck MRI.
The major highlights of this review centered on the application of AI methods to the following narrow-specific tasks: (i) synthesizing different contrast modality images for a variety of purposes, including training networks for multi-modality segmentation, image harmonization, and missing modality synthesis; (ii) auto-segmentation for discerning abdominal organs; (iii) improving CT and MR image quality, which aids in better tumor detection, segmentation, and monitoring of response; and (iv) creating opportunities for collaboration, open team science, and knowledge sharing through the large quantities of data required for AI development.
In the era of precision medicine, there is a growing interest in improving clinical decision-making as well as in sharing knowledge and working together. AI tools are being developed for narrow-specific tasks for oncological imaging needs and may contribute significantly to enhancing clinician workflows and clinical decision-making.

Author Contributions

Conceptualization, A.S.-D.; writing—original draft preparation, R.P., A.D.S., A.S.K., U.M. and H.V.; writing—review and editing, A.S.-D., O.A., R.K.G.D., V.H., N.L., R.J.W., S.B. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NIH U01 CA211205 (A.S.D.) and NIH/NCI Cancer Center, grant number P30 CA008748 (MSK).

Acknowledgments

We thank Cecile Berberat and James Keller for editing the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Histed, S.N.; Lindenberg, M.L.; Mena, E.; Turkbey, B.; Choyke, P.L.; Kurdziel, K.A. Review of functional/anatomical imaging in oncology. Nucl. Med. Commun. 2012, 33, 349–361. [Google Scholar] [CrossRef]
  2. Beaton, L.; Bandula, S.; Gaze, M.N.; Sharma, R.A. How rapid advances in imaging are defining the future of precision radiation oncology. Br. J. Cancer 2019, 120, 779–790. [Google Scholar] [CrossRef]
  3. Meyer, H.J.; Purz, S.; Sabri, O.; Surov, A. Relationships between histogram analysis of ADC values and complex 18F-FDG-PET parameters in head and neck squamous cell carcinoma. PLoS ONE 2018, 13, e0202897. [Google Scholar] [CrossRef] [PubMed]
  4. Kim, H.S.; Lee, K.S.; Ohno, Y.; Van Beek, E.J.; Biederer, J. PET/CT versus MRI for diagnosis, staging, and follow-up of lung cancer. J. Magn. Reason. Imaging 2015, 42, 247–260. [Google Scholar] [CrossRef] [PubMed]
  5. Schwartz, L.H.; Litière, S.; de Vries, E.; Ford, R.; Gwyther, S.; Mandrekar, S.; Shankar, L.; Bogaerts, J.; Chen, A.; Dancey, J.; et al. RECIST 1.1-Update and clarification: From the RECIST committee. Eur. J. Cancer 2016, 62, 132–137. [Google Scholar] [CrossRef]
  6. Tacher, V.; Lin, M.; Chao, M.; Gjesteby, L.; Bhagat, N.; Mahammedi, A.; Ardon, R.; Mory, B.; Geschwind, J.F. Semiautomatic volumetric tumor segmentation for hepatocellular carcinoma: Comparison between C-arm cone beam computed tomography and MRI. Acad. Radiol. 2013, 20, 446–452. [Google Scholar] [CrossRef] [PubMed]
  7. Primakov, S.P.; Ibrahim, A.; van Timmeren, J.E.; Wu, G.; Keek, S.A.; Beuque, M.; Granzier, R.W.Y.; Lavrova, E.; Scrivener, M.; Sanduleanu, S.; et al. Automated detection and segmentation of non-small cell lung cancer computed tomography images. Nat. Commun. 2022, 13, 3423. [Google Scholar] [CrossRef]
  8. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef]
  9. Thrall, J.H.; Li, X.; Li, Q.; Cruz, C.; Do, S.; Dreyer, K.; Brink, J. Artificial intelligence and machine learning in radiology: Opportunities, challenges, pitfalls, and criteria for success. J. Am. Coll. Radiol. 2018, 15, 504–508. [Google Scholar] [CrossRef]
  10. Bi, W.L.; Hosny, A.; Schabath, M.B.; Giger, M.L.; Birkbak, N.J.; Mehrtash, A.; Allison, T.; Arnaout, O.; Abbosh, C.; Dunn, I.F.; et al. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J. Clin. 2019, 69, 127–157. [Google Scholar] [CrossRef]
  11. Dercle, L.; McGale, J.; Sun, S.; Marabelle, A.; Yeh, R.; Deutsch, E.; Mokrane, F.Z.; Farwell, M.; Ammari, S.; Schoder, H.; et al. Artificial intelligence and radiomics: Fundamentals, applications, and challenges in immunotherapy. J. Immunother. Cancer 2022, 10, e005292. [Google Scholar] [CrossRef]
  12. Abdar, M.; Pourpanah, F.; Hussain, S.; Rezazadegan, D.; Liu, L.; Ghavamzadeh, M.; Fieguth, P.; Cao, X.; Khosravi, A.; Acharya, U.R. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Inf. Fusion 2021, 76, 243–297. [Google Scholar] [CrossRef]
  13. Diaz, O.; Kushibar, K.; Osuala, R.; Linardos, A.; Garrucho, L.; Igual, L.; Radeva, P.; Prior, F.; Gkontra, P.; Lekadir, K. Data preparation for artificial intelligence in medical imaging: A comprehensive guide to open-access platforms and tools. Phys. Med. 2021, 83, 25–37. [Google Scholar] [CrossRef]
  14. Kelly, C.J.; Karthikesalingam, A.; Suleyman, M.; Corrado, G.; King, D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019, 17, 195. [Google Scholar] [CrossRef] [PubMed]
  15. Koh, D.-M.; Papanikolaou, N.; Bick, U.; Illing, R.; Kahn, C.E.; Kalpathi-Cramer, J.; Matos, C.; Martí-Bonmatí, L.; Miles, A.; Mun, S.K.; et al. Artificial intelligence and machine learning in cancer imaging. Commun. Med. 2022, 2, 133. [Google Scholar] [CrossRef]
  16. Abdel Razek, A.A.K.; Alksas, A.; Shehata, M.; AbdelKhalek, A.; Abdel Baky, K.; El-Baz, A.; Helmy, E. Clinical applications of artificial intelligence and radiomics in neuro-oncology imaging. Insights Imaging 2021, 12, 152. [Google Scholar] [CrossRef]
  17. Lin, M.; Wynne, J.F.; Zhou, B.; Wang, T.; Lei, Y.; Curran, W.J.; Liu, T.; Yang, X. Artificial intelligence in tumor subregion analysis based on medical imaging: A review. J. Appl. Clin. Med. Phys. 2021, 22, 10–26. [Google Scholar] [CrossRef] [PubMed]
  18. Petry, M.; Lansky, C.; Chodakiewitz, Y.; Maya, M.; Pressman, B. Decreased Hospital Length of Stay for ICH and PE after Adoption of an Artificial Intelligence-Augmented Radiological Worklist Triage System. Radiol. Res. Pract. 2022, 2022, 2141839. [Google Scholar] [CrossRef] [PubMed]
  19. Recht, M.P.; Dewey, M.; Dreyer, K.; Langlotz, C.; Niessen, W.; Prainsack, B.; Smith, J.J. Integrating artificial intelligence into the clinical practice of radiology: Challenges and recommendations. Eur. Radiol. 2020, 30, 3576–3584. [Google Scholar] [CrossRef] [PubMed]
  20. Huang, S.; Yang, J.; Fong, S.; Zhao, Q. Artificial intelligence in cancer diagnosis and prognosis: Opportunities and challenges. Cancer Lett. 2020, 471, 61–71. [Google Scholar] [CrossRef]
  21. Mahmood, U.; Shrestha, R.; Bates, D.D.B.; Mannelli, L.; Corrias, G.; Erdi, Y.E.; Kanan, C. Detecting Spurious Correlations with Sanity Tests for Artificial Intelligence Guided Radiology Systems. Front. Digit. Health 2021, 3, 671015. [Google Scholar] [CrossRef] [PubMed]
  22. Jiang, J.; Hu, Y.-C.; Tyagi, N.; Zhang, P.; Rimner, A.; Mageras, G.S.; Deasy, J.O.; Veeraraghavan, H. Tumor-Aware, Adversarial Domain Adaptation from CT to MRI for Lung Cancer Segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2018, Cham, Switzerland, 26 September 2018; pp. 777–785. [Google Scholar]
  23. Wang, T.; Lei, Y.; Tian, Z.; Dong, X.; Liu, Y.; Jiang, X.; Curran, W.J.; Liu, T.; Shu, H.-K.; Yang, X. Deep learning-based image quality improvement for low-dose computed tomography simulation in radiation therapy. J. Med. Imaging 2019, 6, 043504. [Google Scholar] [CrossRef] [PubMed]
  24. Davatzikos, C.; Barnholtz-Sloan, J.S.; Bakas, S.; Colen, R.; Mahajan, A.; Quintero, C.B.; Capellades Font, J.; Puig, J.; Jain, R.; Sloan, A.E.; et al. AI-based prognostic imaging biomarkers for precision neuro-oncology: The ReSPOND consortium. Neuro. Oncol. 2020, 22, 886–888. [Google Scholar] [CrossRef] [PubMed]
  25. Lipkova, J.; Chen, R.J.; Chen, B.; Lu, M.Y.; Barbieri, M.; Shao, D.; Vaidya, A.J.; Chen, C.; Zhuang, L.; Williamson, D.F.K.; et al. Artificial intelligence for multimodal data integration in oncology. Cancer cell 2022, 40, 1095–1110. [Google Scholar] [CrossRef] [PubMed]
  26. Berger, M.F.; Mardis, E.R. The emerging clinical relevance of genomics in cancer medicine. Nat. Rev. Clin. Oncol. 2018, 15, 353–365. [Google Scholar] [CrossRef] [PubMed]
  27. Dlamini, Z.; Francies, F.Z.; Hull, R.; Marima, R. Artificial intelligence (AI) and big data in cancer and precision oncology. Comput. Struct Biotechnol. J. 2020, 18, 2300–2311. [Google Scholar] [CrossRef]
  28. Jacobs, C.; Setio, A.A.A.; Scholten, E.T.; Gerke, P.K.; Bhattacharya, H.; FA, M.H.; Brink, M.; Ranschaert, E.; de Jong, P.A.; Silva, M.; et al. Deep Learning for Lung Cancer Detection on Screening CT Scans: Results of a Large-Scale Public Competition and an Observer Study with 11 Radiologists. Radiol. Artif. Intell. 2021, 3, e210027. [Google Scholar] [CrossRef]
  29. Turkbey, B.; Haider, M.A. Artificial Intelligence for Automated Cancer Detection on Prostate MRI: Opportunities and Ongoing Challenges, From the AJR Special Series on AI Applications. Am. J. Roentgenol. 2021, 219, 188–194. [Google Scholar] [CrossRef]
  30. Shin, I.; Kim, H.; Ahn, S.S.; Sohn, B.; Bae, S.; Park, J.E.; Kim, H.S.; Lee, S.-K. Development and Validation of a Deep Learning–Based Model to Distinguish Glioblastoma from Solitary Brain Metastasis Using Conventional MR Images. Am. J. NeuroRadiol. 2021, 42, 838–844. [Google Scholar] [CrossRef]
  31. Portnoi, T.; Yala, A.; Schuster, T.; Barzilay, R.; Dontchos, B.; Lamb, L.; Lehman, C. Deep Learning Model to Assess Cancer Risk on the Basis of a Breast MR Image Alone. AJR Am. J. Roentgenol. 2019, 213, 227–233. [Google Scholar] [CrossRef]
  32. Bahl, M. Harnessing the Power of Deep Learning to Assess Breast Cancer Risk. Radiology 2020, 294, 273–274. [Google Scholar] [CrossRef] [PubMed]
  33. Diamant, A.; Chatterjee, A.; Vallières, M.; Shenouda, G.; Seuntjens, J. Deep learning in head & neck cancer outcome prediction. Sci. Rep. 2019, 9, 2764. [Google Scholar] [PubMed]
  34. Nikolov, S.; Blackwell, S.; Zverovitch, A.; Mendes, R.; Livne, M.; De Fauw, J.; Patel, Y.; Meyer, C.; Askham, H.; Romera-Paredes, B. Clinically applicable segmentation of head and neck anatomy for radiotherapy: Deep learning algorithm development and validation study. J. Med. Internet Res. 2021, 23, e26151. [Google Scholar] [CrossRef] [PubMed]
  35. Kawahara, D.; Tsuneda, M.; Ozawa, S.; Okamoto, H.; Nakamura, M.; Nishio, T.; Nagata, Y. Deep learning-based auto segmentation using generative adversarial network on magnetic resonance images obtained for head and neck cancer patients. J. Appl. Clin. Med. Phys. 2022, 23, e13579. [Google Scholar] [CrossRef]
  36. Silva, M.; Schaefer-Prokop, C.M.; Jacobs, C.; Capretti, G.; Ciompi, F.; van Ginneken, B.; Pastorino, U.; Sverzellati, N. Detection of Subsolid Nodules in Lung Cancer Screening: Complementary Sensitivity of Visual Reading and Computer-Aided Diagnosis. Investig. Radiol. 2018, 53, 441–449. [Google Scholar] [CrossRef]
  37. Matchett, K.B.; Lynam-Lennon, N.; Watson, R.W.; Brown, J.A.L. Advances in Precision Medicine: Tailoring Individualized Therapies. Cancers 2017, 9, 146. [Google Scholar] [CrossRef]
  38. Cheung, H.; Rubin, D. Challenges and opportunities for artificial intelligence in oncological imaging. Clin. Radiol. 2021, 76, 728–736. [Google Scholar] [CrossRef]
  39. Fitzmaurice, C.; Abate, D.; Abbasi, N.; Abbastabar, H.; Abd-Allah, F.; Abdel-Rahman, O.; Abdelalim, A.; Abdoli, A.; Abdollahpour, I.; Abdulle, A.S.M.; et al. Global, Regional, and National Cancer Incidence, Mortality, Years of Life Lost, Years Lived With Disability, and Disability-Adjusted Life-Years for 29 Cancer Groups, 1990 to 2017 A Systematic Analysis for the Global Burden of Disease Study. JAMA Oncol. 2019, 5, 1749–1768. [Google Scholar] [CrossRef]
  40. Chakrabarty, S.; Sotiras, A.; Milchenko, M.; LaMontagne, P.; Hileman, M.; Marcus, D. MRI-based Identification and Classification of Major Intracranial Tumor Types by Using a 3D Convolutional Neural Network: A Retrospective Multi-institutional Analysis. Radiol. Artif. Intell. 2021, 3, e200301. [Google Scholar] [CrossRef]
  41. Kawka, M.; Dawidziuk, A.; Jiao, L.R.; Gall, T.M.H. Artificial intelligence in the detection, characterisation and prediction of hepatocellular carcinoma: A narrative review. Transl. Gastroenterol. Hepatol. 2022, 7, 41. [Google Scholar] [CrossRef]
  42. Ramón, Y.C.S.; Sesé, M.; Capdevila, C.; Aasen, T.; De Mattos-Arruda, L.; Diaz-Cano, S.J.; Hernández-Losa, J.; Castellví, J. Clinical implications of intratumor heterogeneity: Challenges and opportunities. J. Mol Med. 2020, 98, 161–177. [Google Scholar] [CrossRef] [PubMed]
  43. Tong, E.; McCullagh, K.L.; Iv, M. Advanced Imaging of Brain Metastases: From Augmenting Visualization and Improving Diagnosis to Evaluating Treatment. Adv. Neuroimaging Brain Metastases 2021, 11, 270. [Google Scholar] [CrossRef] [PubMed]
  44. Gang, Y.; Chen, X.; Li, H.; Wang, H.; Li, J.; Guo, Y.; Zeng, J.; Hu, Q.; Hu, J.; Xu, H. A comparison between manual and artificial intelligence-based automatic positioning in CT imaging for COVID-19 patients. Eur. Radiol. 2021, 31, 6049–6058. [Google Scholar] [CrossRef] [PubMed]
  45. Lin, D.J.; Johnson, P.M.; Knoll, F.; Lui, Y.W. Artificial Intelligence for MR Image Reconstruction: An Overview for Clinicians. J. Magn. Reson Imaging 2021, 53, 1015–1028. [Google Scholar] [CrossRef] [PubMed]
  46. Yaqub, M.; Jinchao, F.; Arshid, K.; Ahmed, S.; Zhang, W.; Nawaz, M.Z.; Mahmood, T. Deep learning-based image reconstruction for different medical imaging modalities. Comput. Math. Methods Med. 2022, 2022, 8750648. [Google Scholar] [CrossRef] [PubMed]
  47. Akagi, M.; Nakamura, Y.; Higaki, T.; Narita, K.; Honda, Y.; Zhou, J.; Yu, Z.; Akino, N.; Awai, K. Deep learning reconstruction improves image quality of abdominal ultra-high-resolution CT. Eur. Radiol. 2019, 29, 6163–6171. [Google Scholar] [CrossRef]
  48. Wei, L.; El Naqa, I. Artificial Intelligence for Response Evaluation With PET/CT. Semin. Nucl. Med. 2021, 51, 157–169. [Google Scholar] [CrossRef]
  49. Lebel, R.M. Performance characterization of a novel deep learning-based MR image reconstruction pipeline. arXiv 2020, arXiv:2008.06559. [Google Scholar] [CrossRef]
  50. Gassenmaier, S.; Afat, S.; Nickel, D.; Mostapha, M.; Herrmann, J.; Othman, A.E. Deep learning–accelerated T2-weighted imaging of the prostate: Reduction of acquisition time and improvement of image quality. Eur. J. Radiol. 2021, 137, 109600. [Google Scholar] [CrossRef]
  51. Iseke, S.; Zeevi, T.; Kucukkaya, A.S.; Raju, R.; Gross, M.; Haider, S.P.; Petukhova-Greenstein, A.; Kuhn, T.N.; Lin, M.; Nowak, M.; et al. Machine Learning Models for Prediction of Posttreatment Recurrence in Early-Stage Hepatocellular Carcinoma Using Pretreatment Clinical and MRI Features: A Proof-of-Concept Study. AJR Am. J. Roentgenol. 2023, 220, 245–255. [Google Scholar] [CrossRef]
  52. Kaus, M.R.; Warfield, S.K.; Nabavi, A.; Black, P.M.; Jolesz, F.A.; Kikinis, R. Automated segmentation of MR images of brain tumors. Radiology 2001, 218, 586–591. [Google Scholar] [CrossRef]
  53. Arimura, H.; Soufi, M.; Kamezawa, H.; Ninomiya, K.; Yamada, M. Radiomics with artificial intelligence for precision medicine in radiation therapy. J. Radiat. Res. 2019, 60, 150–157. [Google Scholar] [CrossRef] [PubMed]
  54. Rogers, W.; Thulasi Seetha, S.; Refaee, T.A.G.; Lieverse, R.I.Y.; Granzier, R.W.Y.; Ibrahim, A.; Keek, S.A.; Sanduleanu, S.; Primakov, S.P.; Beuque, M.P.L.; et al. Radiomics: From qualitative to quantitative imaging. Br. J. Radiol. 2020, 93, 20190948. [Google Scholar] [CrossRef]
  55. Torrente, M.; Sousa, P.A.; Hernández, R.; Blanco, M.; Calvo, V.; Collazo, A.; Guerreiro, G.R.; Núñez, B.; Pimentao, J.; Sánchez, J.C.; et al. An Artificial Intelligence-Based Tool for Data Analysis and Prognosis in Cancer Patients: Results from the Clarify Study. Cancers 2022, 14, 4041. [Google Scholar] [CrossRef] [PubMed]
  56. Razek, A.A.K.A.; Khaled, R.; Helmy, E.; Naglah, A.; AbdelKhalek, A.; El-Baz, A. Artificial intelligence and deep learning of head and neck cancer. Magn. Reson. Imaging Clin. 2022, 30, 81–94. [Google Scholar] [CrossRef] [PubMed]
  57. McCollough, C.; Leng, S. Use of artificial intelligence in computed tomography dose optimisation. Ann. ICRP 2020, 49, 113–125. [Google Scholar] [CrossRef] [PubMed]
  58. McLeavy, C.; Chunara, M.; Gravell, R.; Rauf, A.; Cushnie, A.; Talbot, C.S.; Hawkins, R. The future of CT: Deep learning reconstruction. Clin. Radiol. 2021, 76, 407–415. [Google Scholar] [CrossRef]
  59. Jiang, J.; Hu, Y.C.; Tyagi, N.; Zhang, P.; Rimner, A.; Deasy, J.O.; Veeraraghavan, H. Cross-modality (CT-MRI) prior augmented deep learning for robust lung tumor segmentation from small MR datasets. Med. Phys. 2019, 46, 4392–4404. [Google Scholar] [CrossRef]
  60. Venkadesh, K.V.; Setio, A.A.; Schreuder, A.; Scholten, E.T.; Chung, K.W.; Wille, M.M.; Saghir, Z.; van Ginneken, B.; Prokop, M.; Jacobs, C. Deep learning for malignancy risk estimation of pulmonary nodules detected at low-dose screening CT. Radiology 2021, 300, 438–447. [Google Scholar] [CrossRef]
  61. Liu, K.-L.; Wu, T.; Chen, P.-T.; Tsai, Y.M.; Roth, H.; Wu, M.-S.; Liao, W.-C.; Wang, W. Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: A retrospective study with cross-racial external validation. Lancet Digit. Health 2020, 2, e303–e313. [Google Scholar] [CrossRef]
  62. Aerts, H.J.W.L.; Velazquez, E.R.; Leijenaar, R.T.H.; Parmar, C.; Grossmann, P.; Carvalho, S.; Bussink, J.; Monshouwer, R.; Haibe-Kains, B.; Rietveld, D.; et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat. Commun. 2014, 5, 4006. [Google Scholar] [CrossRef]
  63. Hoebel, K.V.; Patel, J.B.; Beers, A.L.; Chang, K.; Singh, P.; Brown, J.M.; Pinho, M.C.; Batchelor, T.T.; Gerstner, E.R.; Rosen, B.R.; et al. Radiomics Repeatability Pitfalls in a Scan-Rescan MRI Study of Glioblastoma. Radiol. Artif. Intell. 2021, 3, e190199. [Google Scholar] [CrossRef] [PubMed]
  64. Mali, S.A.; Ibrahim, A.; Woodruff, H.C.; Andrearczyk, V.; Müller, H.; Primakov, S.; Salahuddin, Z.; Chatterjee, A.; Lambin, P. Making Radiomics More Reproducible across Scanner and Imaging Protocol Variations: A Review of Harmonization Methods. J. Pers. Med. 2021, 11, 842. [Google Scholar] [CrossRef] [PubMed]
  65. Shaham, U.; Stanton, K.P.; Zhao, J.; Li, H.; Raddassi, K.; Montgomery, R.; Kluger, Y. Removal of batch effects using distribution-matching residual networks. Bioinformatics 2017, 33, 2539–2546. [Google Scholar] [CrossRef]
  66. Horng, H.; Singh, A.; Yousefi, B.; Cohen, E.A.; Haghighi, B.; Katz, S.; Noël, P.B.; Shinohara, R.T.; Kontos, D. Generalized ComBat harmonization methods for radiomic features with multi-modal distributions and multiple batch effects. Sci. Rep. 2022, 12, 4493. [Google Scholar] [CrossRef] [PubMed]
  67. Andrearczyk, V.; Depeursinge, A.; Müller, H. Neural network training for cross-protocol radiomic feature standardization in computed tomography. J. Med. Imaging 2019, 6, 024008. [Google Scholar] [CrossRef] [PubMed]
  68. Liu, M.; Maiti, P.; Thomopoulos, S.; Zhu, A.; Chai, Y.; Kim, H.; Jahanshad, N. Style transfer using generative adversarial networks for multi-site mri harmonization. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 18–22 September 2021; pp. 313–322. [Google Scholar]
  69. Li, Y.; Han, G.; Wu, X.; Li, Z.H.; Zhao, K.; Zhang, Z.; Liu, Z.; Liang, C. Normalization of multicenter CT radiomics by a generative adversarial network method. Phys. Med. Biol. 2021, 66, 055030. [Google Scholar] [CrossRef] [PubMed]
  70. Bashyam, V.M.; Doshi, J.; Erus, G.; Srinivasan, D.; Abdulkadir, A.; Singh, A.; Habes, M.; Fan, Y.; Masters, C.L.; Maruff, P. Deep Generative Medical Image Harmonization for Improving Cross-Site Generalization in Deep Learning Predictors. J. Magn. Reson. Imaging 2022, 55, 908–916. [Google Scholar] [CrossRef]
  71. Alexander, A.; Jiang, A.; Ferreira, C.; Zurkiya, D. An intelligent future for medical imaging: A market outlook on artificial intelligence for medical imaging. J. Am. Coll. Radiol. 2020, 17, 165–170. [Google Scholar] [CrossRef]
  72. Armanious, K.; Jiang, C.; Fischer, M.; Küstner, T.; Hepp, T.; Nikolaou, K.; Gatidis, S.; Yang, B. MedGAN: Medical image translation using GANs. Comput. Med. Imaging Graph. 2020, 79, 101684. [Google Scholar] [CrossRef]
  73. Chen, J.; Wei, J.; Li, R. TarGAN: Target-aware generative adversarial networks for multi-modality medical image translation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Part VI 24. pp. 24–33. [Google Scholar]
  74. Emami, H.; Dong, M.; Nejad-Davarani, S.P.; Glide-Hurst, C.K. SA-GAN: Structure-aware GAN for organ-preserving synthetic CT generation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Part VI 24. pp. 471–481. [Google Scholar]
  75. Jiang, J.; Veeraraghavan, H. Unified cross-modality feature disentangler for unsupervised multi-domain MRI abdomen organs segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 347–358. [Google Scholar]
  76. Zeng, P.; Zhou, L.; Zu, C.; Zeng, X.; Jiao, Z.; Wu, X.; Zhou, J.; Shen, D.; Wang, Y. 3D Convolutional Vision Transformer-GAN for PET Reconstruction. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2022: 25th International Conference, Singapore, 17 September 2022; pp. 516–526. [Google Scholar]
  77. Zhao, J.; Li, D.; Kassam, Z.; Howey, J.; Chong, J.; Chen, B.; Li, S. Tripartite-GAN: Synthesizing liver contrast-enhanced MRI to improve tumor detection. Med. Image Anal. 2020, 63, 101667. [Google Scholar] [CrossRef] [PubMed]
  78. Jiang, J.; Hu, Y.-C.; Tyagi, N.; Rimner, A.; Lee, N.; Deasy, J.O.; Berry, S.; Veeraraghavan, H. PSIGAN: Joint probabilistic segmentation and image distribution matching for unpaired cross-modality adaptation-based MRI segmentation. IEEE Trans. Med. Imaging 2020, 39, 4071–4084. [Google Scholar] [CrossRef] [PubMed]
  79. Jiang, J.; Rimner, A.; Deasy, J.O.; Veeraraghavan, H. Unpaired cross-modality educed distillation (CMEDL) for medical image segmentation. IEEE Trans. Med. Imaging 2021, 41, 1057–1068. [Google Scholar] [CrossRef]
  80. Jiang, J.; Tyagi, N.; Tringale, K.; Crane, C.; Veeraraghavan, H. Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT). arXiv 2022, arXiv:2205.10342. [Google Scholar]
  81. Fei, Y.; Zu, C.; Jiao, Z.; Wu, X.; Zhou, J.; Shen, D.; Wang, Y. Classification-Aided High-Quality PET Image Synthesis via Bidirectional Contrastive GAN with Shared Information Maximization. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2022: 25th International Conference, Singapore, 18–22 September 2022; Part VI. pp. 527–537. [Google Scholar]
  82. Wu, P.-W.; Lin, Y.-J.; Chang, C.-H.; Chang, E.Y.; Liao, S.-W. RelGAN: Multi-domain image-to-image translation via relative attributes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–28 October 2019; pp. 5914–5922. [Google Scholar]
  83. Fave, X.; Zhang, L.; Yang, J.; Mackin, D.; Balter, P.; Gomez, D.; Followill, D.; Jones, A.K.; Stingo, F.; Liao, Z. Delta-radiomics features for the prediction of patient outcomes in non–small cell lung cancer. Sci. Rep. 2017, 7, 588. [Google Scholar] [CrossRef]
  84. Shi, L.; Rong, Y.; Daly, M.; Dyer, B.; Benedict, S.; Qiu, J.; Yamamoto, T. Cone-beam computed tomography-based delta-radiomics for early response assessment in radiotherapy for locally advanced lung cancer. Phys. Med. Biol. 2020, 65, 015009. [Google Scholar] [CrossRef] [PubMed]
  85. Sutton, E.J.; Onishi, N.; Fehr, D.A.; Dashevsky, B.Z.; Sadinski, M.; Pinker, K.; Martinez, D.F.; Brogi, E.; Braunstein, L.; Razavi, P. A machine learning model that classifies breast cancer pathologic complete response on MRI post-neoadjuvant chemotherapy. Breast Cancer Res. 2020, 22, 57. [Google Scholar] [CrossRef]
  86. Nardone, V.; Reginelli, A.; Grassi, R.; Boldrini, L.; Vacca, G.; D’Ippolito, E.; Annunziata, S.; Farchione, A.; Belfiore, M.P.; Desideri, I. Delta radiomics: A systematic review. La Radiol. Med. 2021, 126, 1571–1583. [Google Scholar] [CrossRef]
  87. Jin, C.; Yu, H.; Ke, J.; Ding, P.; Yi, Y.; Jiang, X.; Duan, X.; Tang, J.; Chang, D.T.; Wu, X. Predicting treatment response from longitudinal images using multi-task deep learning. Nat. Commun. 2021, 12, 1851. [Google Scholar] [CrossRef]
  88. Beers, A.; Brown, J.; Chang, K.; Hoebel, K.; Patel, J.; Ly, K.I.; Tolaney, S.M.; Brastianos, P.; Rosen, B.; Gerstner, E.R. DeepNeuro: An open-source deep learning toolbox for neuroimaging. Neuroinformatics 2021, 19, 127–140. [Google Scholar] [CrossRef]
  89. Mehrtash, A.; Pesteie, M.; Hetherington, J.; Behringer, P.A.; Kapur, T.; Wells, W.M., III; Rohling, R.; Fedorov, A.; Abolmaesumi, P. DeepInfer: Open-source deep learning deployment toolkit for image-guided therapy. In Proceedings of the Medical Imaging 2017: Image-Guided Procedures, Robotic Interventions, and Modeling, Orlando, FL, USA, 3 March 2017; pp. 410–416. [Google Scholar]
  90. Ma, N.; Li, W.; Brown, R.; Wang, Y.; Gorman, B.; Behrooz; Johnson, H.; Yang, I.; Kerfoot, E.; Li, Y.; et al. Project-MONAI/MONAI: 0.5.0. 2021. Available online: https://zenodo.org/record/4679866#.ZD9KOXZByUk (accessed on 30 January 2023).
  91. Haibe-Kains, B.; Adam, G.A.; Hosny, A.; Khodakarami, F.; Shraddha, T.; Kusko, R.; Sansone, S.-A.; Tong, W.; Wolfinger, R.D.; Mason, C.E.; et al. Transparency and reproducibility in artificial intelligence. Nature 2020, 586, E14–E16. [Google Scholar] [CrossRef] [PubMed]
  92. Cai, L.; Wu, M.; Chen, L.; Bai, W.; Yang, M.; Lyu, S.; Zhao, Q. Using Guided Self-Attention with Local Information for Polyp Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; pp. 629–638. [Google Scholar]
  93. Zheng, Y. Cross-modality medical image detection and segmentation by transfer learning of shape priors. In Proceedings of the 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), Brooklyn, NY, USA, 16–19 April 2015; pp. 424–427. [Google Scholar]
  94. Li, K.; Yu, L.; Wang, S.; Heng, P.-A. Towards cross-modality medical image segmentation with online mutual knowledge distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 775–783. [Google Scholar]
  95. Li, K.; Wang, S.; Yu, L.; Heng, P.-A. Dual-teacher: Integrating intra-domain and inter-domain teachers for annotation-efficient cardiac segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 418–427. [Google Scholar]
  96. Gan, W.; Wang, H.; Gu, H.; Duan, Y.; Shao, Y.; Chen, H.; Feng, A.; Huang, Y.; Fu, X.; Ying, Y. Automatic segmentation of lung tumors on CT images based on a 2D & 3D hybrid convolutional neural network. Br. J. Radiol. 2021, 94, 20210038. [Google Scholar] [CrossRef]
  97. Jiang, J.; Hu, Y.C.; Liu, C.J.; Halpenny, D.; Hellmann, M.D.; Deasy, J.O.; Mageras, G.; Veeraraghavan, H. Multiple Resolution Residually Connected Feature Streams for Automatic Lung Tumor Segmentation from CT Images. IEEE Trans. Med. Imaging 2019, 38, 134–144. [Google Scholar] [CrossRef]
  98. Just, N. Improving tumour heterogeneity MRI assessment with histograms. Br. J. Cancer 2014, 111, 2205–2213. [Google Scholar] [CrossRef] [PubMed]
  99. Singh, M.; Singh, T.; Soni, S. Pre-operative Assessment of Ablation Margins for Variable Blood Perfusion Metrics in a Magnetic Resonance Imaging Based Complex Breast Tumour Anatomy: Simulation Paradigms in Thermal Therapies. Comput. Methods Programs Biomed. 2021, 198, 105781. [Google Scholar] [CrossRef] [PubMed]
  100. Cardenas, C.E.; Beadle, B.M.; Garden, A.S.; Skinner, H.D.; Yang, J.; Rhee, D.J.; McCarroll, R.E.; Netherton, T.J.; Gay, S.S.; Zhang, L. Generating high-quality lymph node clinical target volumes for head and neck cancer radiation therapy using a fully automated deep learning-based approach. Int. J. Radiat. Oncol. Biol. Phys. 2021, 109, 801–812. [Google Scholar] [CrossRef]
  101. Cardenas, C.E.; McCarroll, R.E.; Court, L.E.; Elgohari, B.A.; Elhalawani, H.; Fuller, C.D.; Kamal, M.J.; Meheissen, M.A.; Mohamed, A.S.; Rao, A. Deep learning algorithm for auto-delineation of high-risk oropharyngeal clinical target volumes with built-in dice similarity coefficient parameter optimization function. Int. J. Radiat. Oncol. Biol. Phys. 2018, 101, 468–478. [Google Scholar] [CrossRef] [PubMed]
  102. Xie, Y.; Kang, K.; Wang, Y.; Khandekar, M.J.; Willers, H.; Keane, F.K.; Bortfeld, T.R. Automated clinical target volume delineation using deep 3D neural networks in radiation therapy of Non-small Cell Lung Cancer. Phys. Imaging Radiat. Oncol. 2021, 19, 131–137. [Google Scholar] [CrossRef]
  103. McCollough, C.H. Computed tomography technology—And dose—In the 21st century. Health Phys. 2019, 116, 157–162. [Google Scholar] [CrossRef]
  104. Arndt, C.; Güttler, F.; Heinrich, A.; Bürckenmeyer, F.; Diamantis, I.; Teichgräber, U. Deep Learning CT Image Reconstruction in Clinical Practice. Rofo 2021, 193, 252–261. [Google Scholar] [CrossRef]
  105. Willemink, M.J.; Noël, P.B. The evolution of image reconstruction for CT—From filtered back projection to artificial intelligence. Eur. Radiol. 2019, 29, 2185–2195. [Google Scholar] [CrossRef]
  106. Hsieh, S.S.; Leng, S.; Rajendran, K.; Tao, S.; McCollough, C.H. Photon counting CT: Clinical applications and future developments. IEEE Trans. Radiat. Plasma Med. Sci. 2020, 5, 441–452. [Google Scholar] [CrossRef] [PubMed]
  107. Kwan, A.C.; Pourmorteza, A.; Stutman, D.; Bluemke, D.A.; Lima, J.A. Next-generation hardware advances in CT: Cardiac applications. Radiology 2021, 298, 3–17. [Google Scholar] [CrossRef] [PubMed]
  108. Mileto, A.; Guimaraes, L.S.; McCollough, C.H.; Fletcher, J.G.; Yu, L. State of the art in abdominal CT: The limits of iterative reconstruction algorithms. Radiology 2019, 293, 491–503. [Google Scholar] [CrossRef] [PubMed]
  109. Hsieh, J.; Liu, E.; Nett, B.; Tang, J.; Thibault, J.-B.; Sahney, S. A New Era of Image Reconstruction: TrueFidelity™; White Paper; GE Healthcare: Chicago, IL, USA, 2019. [Google Scholar]
  110. Laurent, G.; Villani, N.; Hossu, G.; Rauch, A.; Noël, A.; Blum, A.; Gondim Teixeira, P.A. Full model-based iterative reconstruction (MBIR) in abdominal CT increases objective image quality, but decreases subjective acceptance. Eur. Radiol. 2019, 29, 4016–4025. [Google Scholar] [CrossRef]
  111. Chen, M.M.; Terzic, A.; Becker, A.S.; Johnson, J.M.; Wu, C.C.; Wintermark, M.; Wald, C.; Wu, J. Artificial intelligence in oncologic imaging. Eur. J. Radiol. Open 2022, 9, 100441. [Google Scholar] [CrossRef]
  112. Boedeker, K. AiCE Deep Learning Reconstruction: Bringing the Power of Ultra-High Resolution CT to Routine Imaging; Canon Medical Systems Corporation: Ohtawara, Japan, 2019. [Google Scholar]
  113. Jensen, C.T.; Liu, X.; Tamm, E.P.; Chandler, A.G.; Sun, J.; Morani, A.C.; Javadi, S.; Wagner-Bartak, N.A. Image quality assessment of abdominal CT by use of new deep learning image reconstruction: Initial experience. Am. J. Roentgenol. 2020, 215, 50–57. [Google Scholar] [CrossRef]
  114. Ricci Lara, M.A.; Echeveste, R.; Ferrante, E. Addressing fairness in artificial intelligence for medical imaging. Nat. Commun. 2022, 13, 4581. [Google Scholar] [CrossRef]
  115. Szczykutowicz, T.P.; Toia, G.V.; Dhanantwari, A.; Nett, B. A Review of Deep Learning CT Reconstruction: Concepts, Limitations, and Promise in Clinical Practice. Curr. Radiol. Rep. 2022, 10, 101–115. [Google Scholar] [CrossRef]
  116. Noda, Y.; Kawai, N.; Nagata, S.; Nakamura, F.; Mori, T.; Miyoshi, T.; Suzuki, R.; Kitahara, F.; Kato, H.; Hyodo, F. Deep learning image reconstruction algorithm for pancreatic protocol dual-energy computed tomography: Image quality and quantification of iodine concentration. Eur. Radiol. 2022, 32, 384–394. [Google Scholar] [CrossRef]
  117. Jensen, C.T.; Gupta, S.; Saleh, M.M.; Liu, X.; Wong, V.K.; Salem, U.; Qiao, W.; Samei, E.; Wagner-Bartak, N.A. Reduced-dose deep learning reconstruction for abdominal CT of liver metastases. Radiology 2022, 303, 90–98. [Google Scholar] [CrossRef] [PubMed]
  118. Chartrand, G.; Cheng, P.M.; Vorontsov, E.; Drozdzal, M.; Turcotte, S.; Pal, C.J.; Kadoury, S.; Tang, A. Deep Learning: A Primer for Radiologists. Radiographics 2017, 37, 2113–2131. [Google Scholar] [CrossRef] [PubMed]
  119. Argentieri, E.; Zochowski, K.; Potter, H.; Shin, J.; Lebel, R.; Sneag, D. Performance of a Deep Learning-Based MR Reconstruction Algorithm for the Evaluation of Peripheral Nerves. In Proceedings of the RSNA, Chicago, IL, USA, 1–6 December 2019. [Google Scholar]
  120. Le Bihan, D. Molecular diffusion, tissue microdynamics and microstructure. NMR Biomed. 1995, 8, 375–386. [Google Scholar] [CrossRef] [PubMed]
  121. Zhang, H.; Wang, C.; Chen, W.; Wang, F.; Yang, Z.; Xu, S.; Wang, H. Deep learning based multiplexed sensitivity-encoding (DL-MUSE) for high-resolution multi-shot DWI. NeuroImage 2021, 244, 118632. [Google Scholar] [CrossRef]
  122. Gadjimuradov, F.; Benkert, T.; Nickel, M.D.; Führes, T.; Saake, M.; Maier, A. Deep learning–guided weighted averaging for signal dropout compensation in DWI of the liver. Magn. Reson. Med. 2022, 88, 2679–2693. [Google Scholar] [CrossRef]
  123. Ueda, T.; Ohno, Y.; Yamamoto, K.; Murayama, K.; Ikedo, M.; Yui, M.; Hanamatsu, S.; Tanaka, Y.; Obama, Y.; Ikeda, H. Deep Learning Reconstruction of Diffusion-weighted MRI Improves Image Quality for Prostatic Imaging. Radiology 2022, 303, 373–381. [Google Scholar] [CrossRef]
  124. Langlotz, C.P.; Allen, B.; Erickson, B.J.; Kalpathy-Cramer, J.; Bigelow, K.; Cook, T.S.; Flanders, A.E.; Lungren, M.P.; Mendelson, D.S.; Rudie, J.D. A roadmap for foundational research on artificial intelligence in medical imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology 2019, 291, 781. [Google Scholar] [CrossRef]
  125. Schork, N.J. Artificial intelligence and personalized medicine. In Precision Medicine in Cancer Therapy; Springer: Berlin/Heidelberg, Germany, 2019; pp. 265–283. [Google Scholar]
  126. Chung, C.; Kalpathy-Cramer, J.; Knopp, M.V.; Jaffray, D.A. In the era of deep learning, why reconstruct an image at all? J. Am. Coll. Radiol. 2021, 18, 170–173. [Google Scholar] [CrossRef]
  127. Li, G.; Zhang, X.; Song, X.; Duan, L.; Wang, G.; Xiao, Q.; Li, J.; Liang, L.; Bai, L.; Bai, S. Machine learning for predicting accuracy of lung and liver tumor motion tracking using radiomic features. Quant. Imaging Med. Surg. 2023, 13, 1605–1618. [Google Scholar] [CrossRef]
Figure 1. Artificial Intelligence in the clinical radiology workflow with examples from lung computed tomography.
Figure 2. (A,B) Synthesis of T2-weighted (T2w) and T1-weighted (T1w) magnetic resonance imaging (MRI) images from computed tomography image volume available in the open-source combined healthy abdominal organ segmentation (CHAOS) challenge dataset.
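To make the type of cross-modality synthesis shown in Figure 2 concrete, the minimal sketch below illustrates an unpaired, CycleGAN-style CT-to-MR translation step in PyTorch. It is illustrative only: the generator architecture, tensor sizes, and training step are assumptions for demonstration and do not reproduce the model used to create the figure; adversarial discriminators are omitted for brevity.

```python
# Minimal sketch of unpaired CT-to-MR image translation (Figure 2).
# Illustrative only: architecture, shapes, and training step are assumptions.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Maps a single-channel CT slice to a synthetic MR slice (T1w or T2w)."""
    def __init__(self, ch=32, n_blocks=4):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 7, padding=3), nn.ReLU(inplace=True)]
        layers += [ResBlock(ch) for _ in range(n_blocks)]
        layers += [nn.Conv2d(ch, 1, 7, padding=3), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

G_ct2mr, G_mr2ct = Generator(), Generator()
opt = torch.optim.Adam(list(G_ct2mr.parameters()) + list(G_mr2ct.parameters()), lr=2e-4)

ct = torch.randn(2, 1, 128, 128)   # stand-in for unpaired CT slices
mr = torch.randn(2, 1, 128, 128)   # stand-in for unpaired MR slices

# Cycle-consistency loss only (adversarial terms omitted for brevity).
opt.zero_grad()
fake_mr = G_ct2mr(ct)
fake_ct = G_mr2ct(mr)
cycle_loss = nn.functional.l1_loss(G_mr2ct(fake_mr), ct) + \
             nn.functional.l1_loss(G_ct2mr(fake_ct), mr)
cycle_loss.backward()
opt.step()
```

Cycle-consistent or paired adversarial objectives of this general form underlie many of the image-translation methods cited above [72,73,74,78].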
Figure 3. Segmentations produced by cross-modality distillation learning applied to representative cases consisting of (A) computed tomography (CT) image; (B) cone-beam CT image; and (C) T2-weighted magnetic resonance imaging image. Algorithm segmentations are shown in red, and the expert delineations are in yellow.
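A simplified way to view the cross-modality distillation behind the segmentations in Figure 3 is as a student network on the target modality (e.g., MRI or cone-beam CT) trained against both expert labels and the softened predictions of a teacher trained on CT. The sketch below, with assumed shapes, loss weighting, and a single binary foreground class, is a generic distillation loss rather than the published CMEDL formulation [79].

```python
# Minimal sketch of cross-modality knowledge distillation for segmentation.
# Illustrative assumptions throughout; not the published method.
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    # Supervised Dice term computed on expert labels.
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1.0 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def distillation_loss(student_logits, teacher_logits, target, T=2.0, alpha=0.5):
    # Combine the supervised Dice loss with a soft-target term that matches the
    # student's temperature-softened probabilities to the teacher's.
    dice = soft_dice_loss(student_logits, target)
    p_teacher = torch.sigmoid(teacher_logits / T)
    p_student = torch.sigmoid(student_logits / T)
    soft = F.binary_cross_entropy(p_student, p_teacher)
    return alpha * dice + (1 - alpha) * soft

# Toy example: random tensors stand in for network outputs and expert labels.
student = torch.randn(1, 1, 64, 64, requires_grad=True)   # student (MRI) logits
teacher = torch.randn(1, 1, 64, 64)                        # teacher (CT) logits
label = (torch.rand(1, 1, 64, 64) > 0.5).float()           # expert delineation
loss = distillation_loss(student, teacher, label)
loss.backward()
```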
Figure 4. (A) Schematic of the diffusion-weighted magnetic resonance imaging (DW-MRI) acquisition scheme with deep learning-based reconstruction. (B) DW-MR image (b = 0 s/mm2) acquired from a 39-year-old female patient with papillary thyroid cancer; the blue arrow points to the thyroid gland. (C,D) Whole-body DW-MR images (b = 0 s/mm2) acquired from a 61-year-old male patient with lymphoma, showing representative diffusion images of the abdomen and pelvis, including the liver (light orange arrow), pancreas (dark orange arrow), and prostate (purple arrow).
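Although Figure 4 displays b = 0 s/mm2 images, the quantitative value of DW-MRI comes from fitting the signal decay across b-values; under the mono-exponential model S(b) = S0·exp(−b·ADC), the apparent diffusion coefficient (ADC) can be estimated from as few as two b-values. The worked example below uses illustrative b-values and toy signal intensities, not patient data.

```python
# Two-point ADC estimation under the mono-exponential diffusion model.
# The b-values and signal arrays are illustrative assumptions.
import numpy as np

def adc_two_point(s_b0, s_b1, b0=0.0, b1=800.0):
    """ADC in mm^2/s from signals at two b-values (b in s/mm^2)."""
    s_b0 = np.clip(np.asarray(s_b0, dtype=float), 1e-6, None)
    s_b1 = np.clip(np.asarray(s_b1, dtype=float), 1e-6, None)
    return np.log(s_b0 / s_b1) / (b1 - b0)

# Toy 2x2 "images": greater signal attenuation implies less restricted diffusion.
s0 = np.array([[1000.0, 1000.0], [800.0, 900.0]])
s800 = np.array([[400.0, 250.0], [320.0, 300.0]])
print(adc_two_point(s0, s800))   # approx. 1.1e-3 to 1.7e-3 mm^2/s, a typical soft-tissue range
```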
Table 1. Summary of Select Artificial Intelligence Literature on CT and MRI for Oncology.
Each entry lists: Study — Narrow-Specific Task; Design: Title; Objective; Advantages/Recommendations; Limitations.

Hosny, A. et al. [8] — Medical Imaging (MI)
Design: Review: Artificial Intelligence (AI) in radiology.
Objective: To establish a general understanding of AI methods, particularly those pertaining to image-based tasks, and to demonstrate how these methods are advancing the field, with a general focus on applications in oncology.
Advantages/Recommendations: There is a need to understand that AI is unlike human intelligence in many ways; excelling in one task does not necessarily imply excellence in others. The roles of radiologists will expand as they gain access to better tools. Data to train AI on a massive scale will enable robust AI that generalizes across different patient demographics, geographic regions, diseases, and standards of care.
Limitations: Not applicable (NA).

Koh, D.M. et al. [15] — MI
Design: Review: Artificial intelligence and machine learning in cancer imaging.
Objective: To foster interdisciplinary communication, because many technological solutions are developed in isolation and may struggle to achieve routine clinical use; it is therefore important to work together, including with commercial partners (as appropriate), to drive innovation and development.
Advantages/Recommendations: There is a need for systematic evaluation of new software, which often undergoes only limited testing prior to release.
Limitations: NA.

Razek, A.A.K.A. et al. [56] — MI
Design: Review: Artificial intelligence and deep learning of head and neck cancer.
Objective: To summarize the clinical applications of AI in head and neck cancer, including differentiation, grading, staging, prognosis, genetic profiling, and monitoring after treatment.
Advantages/Recommendations: AI studies are required to establish a robust methodology coupling genetic and radiologic profiles, to be validated for clinical use.
Limitations: NA.

McCollough, C.H. et al. [57] — MI
Design: Review: Use of artificial intelligence in computed tomography dose optimisation.
Objective: To illustrate the promise of AI in the processes involved in a CT examination, from setting up the patient on the scanner table to the reconstruction of final images.
Advantages/Recommendations: AI could become part of CT imaging in the future, but both manufacturers and users must proceed cautiously because it is not yet clear how these AI algorithms can be evaluated in the clinical setting.
Limitations: NA.

Lin, D.J. et al. [45] — Image Reconstruction and Registration (IRR)
Design: Review: Artificial Intelligence for MR Image Reconstruction: An Overview for Clinicians.
Objective: To cover how deep learning algorithms transform raw k-space data into image data and to examine accelerated imaging and artifact suppression.
Advantages/Recommendations: Future research needs continued sharing of image and raw k-space datasets to expand access and allow model comparisons, definition of the most clinically relevant loss functions and/or quality metrics by which to judge model performance, examination of perturbations in model performance related to acquisition parameters, and validation of high-performing models in new scenarios to determine generalizability.
Limitations: NA.

McLeavy, C.M. et al. [58] — IRR
Design: Review: The future of CT: deep learning reconstruction.
Objective: To emphasize the advantages of deep learning reconstruction (DLR) over other reconstruction methods regarding dose reduction, image quality, and tailoring protocols to specific clinical situations.
Advantages/Recommendations: DLR is the future of CT technology and should be considered when procuring new CT scanners.
Limitations: NA.

Jiang, J. et al. [59] — Lesion Segmentation, Detection, and Characterization (LSDC)
Design: Original research: Cross-modality (CT-MRI) prior augmented deep learning for robust lung tumor segmentation from small MR datasets.
Objective: To develop a cross-modality (MR-CT) deep learning segmentation approach that augments training data using pseudo-MR images produced by transforming expert-segmented CT images.
Advantages/Recommendations: The model is learned as a deep generative adversarial network and transforms expert-segmented CT into pseudo-MR images with expert segmentations.
Limitations: A minor limitation is the number of test datasets, particularly for longitudinal analysis, owing to the lack of additional patient recruitment.

Venkadesh, K.V. et al. [60] — LSDC
Design: Original research: Deep Learning for Malignancy Risk Estimation of Pulmonary Nodules Detected at Low-Dose Screening CT.
Objective: To develop and validate a deep learning (DL) algorithm for malignancy risk estimation of pulmonary nodules detected at screening CT (an illustrative sketch of this kind of narrow classification task follows the table).
Advantages/Recommendations: The DL algorithm has the potential to provide reliable and reproducible malignancy risk scores from low-dose screening CT, leading to better management of lung cancer.
Limitations: A minor limitation is that the group did not assess how the algorithm would affect radiologists' assessments.

Bi, W.L. et al. [10] — Clinical Applications in Oncology (CAO)
Design: Review: Artificial intelligence in cancer imaging: Clinical challenges and applications.
Objective: To highlight AI applied to medical imaging of lung, brain, breast, and prostate cancer and to illustrate how clinical problems are being addressed using imaging/radiomic feature types.
Advantages/Recommendations: AI applications in oncological imaging need to be rigorously validated for reproducibility and generalizability.
Limitations: NA.

Huang, S. et al. [20] — CAO
Design: Review: Artificial intelligence in cancer diagnosis and prognosis: Opportunities and challenges.
Objective: To highlight how AI assists in cancer diagnosis and prognosis, specifically its accuracy, which can exceed that of general statistical applications in oncology.
Advantages/Recommendations: The use of AI-based applications in clinical cancer research represents a paradigm shift in cancer treatment, which may lead to improved patient survival through enhanced prediction rates.
Limitations: NA.

Diamant, A. et al. [33] — CAO
Design: Original research: Deep learning in head & neck cancer outcome prediction.
Objective: To apply a convolutional neural network (CNN) to predict treatment outcomes of patients with head and neck cancer using pretreatment CT images.
Advantages/Recommendations: The work identifies traditional radiomic features derived from CT images that can be visualized and used to perform accurate outcome prediction in head and neck cancers; future work could further investigate the difference between the two representations.
Limitations: No major limitation is mentioned by the authors; however, they note that the framework considers only the central slice, and results could be further improved by incorporating the entire tumor.

Liu, K.L. et al. [61] — CAO
Design: Original research: Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation.
Objective: To investigate whether CNNs can distinguish individuals with and without pancreatic cancer on CT, compared with radiologist interpretation.
Advantages/Recommendations: CNNs can accurately distinguish pancreatic cancer on CT, with acceptable generalizability to images of patients of various races and ethnicities, and can supplement radiologist interpretation.
Limitations: A minor limitation is the modest sample size.
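As a concrete illustration of the "narrow-specific task" pattern summarized in Table 1, the sketch below outlines a small 3D convolutional network that maps a CT patch around a pulmonary nodule to a malignancy risk score, in the spirit of Venkadesh et al. [60]. The architecture, patch size, and inputs are assumptions for demonstration and do not reproduce the published algorithm or its training data.

```python
# Minimal sketch of a narrow-task classifier: CT patch -> malignancy risk score.
# Illustrative assumptions only; not the published algorithm.
import torch
import torch.nn as nn

class NoduleRiskNet(nn.Module):
    """Small 3D CNN mapping a CT patch around a nodule to a risk score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))   # malignancy risk score

model = NoduleRiskNet()
patch = torch.randn(4, 1, 32, 32, 32)        # stand-in for 32-voxel CT patches
risk = model(patch)
print(risk.shape)                            # torch.Size([4, 1])
```

In clinical use, such a score would supplement, not replace, radiologist assessment, which is consistent with the limitations noted in the corresponding table entries.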
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
