Review

Radiomic and Volumetric Measurements as Clinical Trial Endpoints—A Comprehensive Review

by Ionut-Gabriel Funingana 1,2,3,4,*,†, Pubudu Piyatissa 5,†, Marika Reinius 1,2,3,4, Cathal McCague 1,3,6, Bristi Basu 1,2,3 and Evis Sala 1,3,6,*

1 Cambridge University Hospitals NHS Foundation Trust, Cambridge CB2 0QQ, UK
2 Department of Oncology, University of Cambridge, Cambridge CB2 0XZ, UK
3 Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK
4 Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge CB2 0RE, UK
5 School of Clinical Medicine, University of Cambridge, Cambridge CB2 0SP, UK
6 Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work and are co-first authors.
Cancers 2022, 14(20), 5076; https://doi.org/10.3390/cancers14205076
Submission received: 14 September 2022 / Revised: 12 October 2022 / Accepted: 14 October 2022 / Published: 17 October 2022
(This article belongs to the Special Issue Medical Imaging and Machine Learning)

Simple Summary

The extraction of quantitative data from standard-of-care imaging modalities offers opportunities to improve the relevance and salience of imaging biomarkers used in drug development. This review aims to identify the challenges and opportunities for discovering new imaging-based biomarkers based on radiomic and volumetric assessment in four single-site solid tumor types: breast cancer, rectal cancer, lung cancer and glioblastoma. Developing harmonized approaches in three essential areas (segmentation, validation and data sharing) may expedite regulatory approval and adoption of novel cancer imaging biomarkers.

Abstract

Clinical trials for oncology drug development have long relied on surrogate outcome biomarkers that assess changes in tumor burden to accelerate drug registration (i.e., Response Evaluation Criteria in Solid Tumors version 1.1 (RECIST v1.1)). Drug-induced reduction in tumor size is an imperfect surrogate marker for drug activity, and yet a radiologically determined objective response rate is a widely used endpoint for Phase 2 trials. With the addition of therapies targeting complex biological systems such as the immune system and DNA damage repair pathways, incorporation of integrative response and outcome biomarkers may add more predictive value. We performed a review of the relevant literature in four representative tumor types (breast cancer, rectal cancer, lung cancer and glioblastoma) to assess the preparedness of volumetric and radiomic metrics as clinical trial endpoints. We identified three key areas—segmentation, validation and data sharing strategies—where concerted efforts are required to enable progress of volumetric- and radiomics-based clinical trial endpoints for wider clinical implementation.

Graphical Abstract

1. Introduction

Clinical trials rely on pre-defined endpoints to evaluate the efficacy of a given medical product. Instead of directly measuring the clinical outcome, the process can be accelerated by using alternative markers that identify therapeutic response early during treatment [1]. These ‘surrogate endpoints’, as defined by the FDA under section 507(e)(9), comprise laboratory measurements, radiographic images, physical signs or other measures that are not themselves a direct measurement of clinical benefit, but which are at least reasonably likely to predict clinical benefit from a drug or biological product [2].
All FDA-approved imaging-based surrogate endpoints for treatment response assessment in solid tumors—namely objective response rate (ORR), progression-free survival (PFS), metastasis-free survival, disease-free survival (DFS) and event-free survival (EFS)—currently employ the Response Evaluation Criteria in Solid Tumors version 1.1 (RECIST v1.1) [2]. RECIST v1.1 evaluates the change in tumor burden by means of a series of standardized rules based on the unidimensional size of the lesions [3]. These criteria are mechanism-agnostic, and relationships between changes in tumor size and survival benefit vary according to tumor type [2].
Other widely employed surrogate endpoints include histopathological changes following drug exposure, such as pathological complete response (pCR) for patients with breast cancer [2]. Integration of more complex multimodal data, including quantitative features extracted from standard-of-care imaging, may improve the clinical relevance of surrogate endpoints during evaluation of novel therapies targeting complex biological systems (such as the immune system). There are broadly two different approaches to extracting quantitative features from imaging data: hand-crafted feature generation, for example, using one of many open-source radiomic tools (MIRP, S-IBEX, RaCaT, SERA, PyRadiomics and RadiomiCRO), and deep learning approaches, where features are generated as part of the learning process [4,5].
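As an illustration of the first, hand-crafted approach, the sketch below shows how radiomic features might be computed with PyRadiomics, one of the open-source tools listed above; the file names, resampling spacing and bin width are illustrative assumptions rather than settings taken from any study reviewed here.

```python
# Minimal sketch of hand-crafted radiomic feature extraction with PyRadiomics.
# File names and pre-processing settings below are illustrative assumptions.
from radiomics import featureextractor  # pip install pyradiomics

settings = {
    "resampledPixelSpacing": [1.0, 1.0, 1.0],  # resample to isotropic voxels (mm)
    "binWidth": 25,                            # fixed bin width for grey-level discretisation
    "normalize": True,
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("shape")       # volumetric/shape descriptors
extractor.enableFeatureClassByName("firstorder")  # intensity statistics
extractor.enableFeatureClassByName("glcm")        # grey-level co-occurrence texture

# The image and its segmentation mask (hypothetical file names) must be aligned.
result = extractor.execute("patient001_image.nii.gz", "patient001_mask.nii.gz")
features = {k: v for k, v in result.items() if not k.startswith("diagnostics_")}
print(f"{len(features)} hand-crafted features extracted")
```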
Prior to the implementation of RECIST v1.1, volumetric-based imaging biomarkers were considered as an alternative to assessment by unidimensional measurement, but the lack of standardization and evidence to support the transition led the RECIST working group to abandon the idea [3]. However, volumetric response measurement offers some advantages compared to RECIST v1.1, including reduction of inter-reader variability [6] and increased likelihood of correctly identifying pseudo-progression [7]. Several prospective studies suggest that volumetric measurements performed better than planar RECIST v1.1 assessment [8,9]. In contrast to volumetric assessment, it has been suggested that radiomic features may add more biological value to the standard dimensional measurements [4].
A Cancer Research UK—European Organisation for Research and Treatment of Cancer (CRUK-EORTC) consensus statement has summarized the roadmap for clinical translation of imaging biomarkers and highlighted the need for candidate biomarkers to close two ‘translational gaps’: validation as robust medical research tools, and integration into routine patient care [10]. Within this roadmap, imaging biomarkers may play a role in the discovery phase (e.g., non-clinical studies) and in early technical, biological and clinical validation. Significant efforts have been made internationally to harmonize all stages of a radiomics pipeline, from data extraction and curation to operating procedures for standardization and reproducibility, interpretability, generalizability and regulatory approval. These have resulted in initiatives such as the Radiomic Quality Score (RQS), Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) and the Image Biomarker Standardization Initiative (IBSI) [11,12,13]. The steps required to address the second translational gap, bridging to routine patient care, involve incorporation into prospective multi-site clinical trials with further qualification and technical validation [10].
In order to assess the status of volumetric and radiomics-based biomarkers as potential clinical trial endpoints in oncological studies, this review synthesizes the literature on computational modelling of imaging features for predicting treatment response and outcome. To capture a representative summary of the current status of this expanding field, we focus on four key tumor types with a significant body of published data, in order to provide a detailed overview of methodological considerations and reporting of model performance. This informs our discussion of the key areas that require improvement in order to close the ‘translational gaps’ and prepare volumetric and radiomics-based biomarkers for prime time.

2. Article Search Strategy and Study Selection

A literature search was performed on 1 July 2021 to identify primary research papers that used radiomic or volumetric approaches to develop predictive or prognostic models. Articles published between 1 January 2015 and 30 June 2021 were included (Figure 1). The inclusion and exclusion criteria for studies entered into the review related to the population considered (patients with a solid tumor diagnosis), the type of intervention (volumetric and/or radiomic analyses) and the comparators and outcomes measured against (RECIST response or pathologic response). To identify published computational/machine learning models in the oncological imaging literature, we queried the PubMed database using the following groups of search terms: cancer/tumor/tumour/oncology/oncological/malignancy; radiomic/radiogenomic/volumetric/computational/machine learning/deep learning/framework; predict/prediction; response/pathological; and survival/outcome/PFS (see Supplementary Section S1 for the PubMed query box expression).

Data Extraction and Relevance Rating

The abstracts of the identified papers were screened against a list of inclusion and exclusion criteria (see Supplementary Table S1) using the open-source web-based platform https://www.rayyan.ai/ (accessed on 1 July 2021) [14]. Abstract screening was initially performed by one author (IGF). The papers were assigned to three groups of relevance: included, excluded and maybe (the last group mainly for papers whose abstracts were of uncertain relevance due to unclear study design).
Four reviewers assessed the articles identified by the abstract screening process according to eight domains (see Supplementary Materials for full details). The domains covered were patient characteristics; study design and dataset; imaging details; non-radiological data types; algorithm details; model performance; generalizability/reproducibility; and regulatory approval (e.g., FDA approval) (see Supplementary Section S2).

3. Results

3.1. Overview of Included Studies

The relevance rating of the published models was heterogeneous, with only 27.3% of the total number of papers passing the abstract screening stage; 124 of the 168 papers from this first stage were eligible for the final review (Figure 1).
The total number of published papers increased each year (Supplementary Figure S1). The highest proportion of eligible papers was identified for four single-site solid tumor types: breast cancer, rectal cancer, lung cancer and glioblastoma (Figure 2); the review therefore focused on the use of radiomic and volumetric endpoints in these four tumor cohorts.

3.1.1. Breast Cancer (BC)

After the initial abstract screening, 43 breast cancer papers were included in the final analysis. The evaluation of the full manuscripts led to the exclusion of nine publications for the following reasons: radiomic analysis of distant metastatic sites (n = 2), small sample size (n = 2), non-predictive or prognostic endpoints (n = 3), in silico model development (n = 1) and non-imaging modality (n = 1). The remaining articles (n = 34) were included in the final analysis [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48].

3.1.2. Rectal Cancer (RC)

After the initial abstract screening, 34 papers were included for further analysis; 19 were excluded for the following reasons: relating to non-rectal cancers (n = 1), insufficient information provided on how region of interest (ROI) segmentation was performed (n = 2), mixed or unspecified histopathology (n = 10), insufficient detail on the acquisition parameters (n = 4), no radiomic or volumetric analysis performed (n = 1) and non-predictive association study (n = 1). The remaining articles (n = 15) related to locally advanced rectal cancer, all histologically confirmed as adenocarcinoma [49,50,51,52,53,54,55,56,57,58,59,60,61,62,63].

3.1.3. Lung Cancer (LC)

The initial abstract screen identified 49 lung cancer papers as relevant for manual review. Of these, five were excluded due to: non-imaging modality (n = 1), non-predictive or prognostic endpoints (n = 2), radiomic analysis of tumors metastatic to the lung (n = 1), treatment strategy not recorded (n = 1). This yielded a total of 44 papers [64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107].

3.1.4. Glioblastoma Multiforme (GBM)

A total of 42 articles on glioblastoma multiforme were identified by the abstract screening strategy described; 11 were excluded due to: investigation of cerebral blood volume rather than tumor volume (n = 3); neither radiomic nor volumetric analyses performed (n = 3); limited imaging or other methodological details (n = 2); mixed or unspecified histopathology (n = 2); detection of progressive disease on serial imaging rather than prediction of future response (n = 1). The remaining articles (n = 31) were included in the final analysis [108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138].

3.2. Study Design

The subgroup analyses revealed that the majority of the studies analyzed retrospective datasets: BC (76%), RC (80%), LC (89%) and GBM (80%), with the remainder including prospectively collected clinical trial data. A limited proportion of imaging data was collected as part of a multi-institution collaboration for the included tumor types: BC (23.5%), RC (40%), LC (25.0%) and GBM (22.5%) (see Table 1).

3.2.1. Breast Cancer

The majority of the papers (n = 26) were retrospective studies; the remainder were prospective, including analyses of imaging data collected during two prospective clinical trials: five papers analyzed imaging data from the ACRIN 6657/ISPY-1 (NCT00033397) trial [18,24,25,32,36] and one paper from the ASAINT (NCT02599974) study [30]. Over half (5/8) of the prospective studies comprised multi-institution collaborations. Of the 34 analyses, 31 utilized imaging data from neoadjuvantly treated patients, two utilized imaging data from primary surgery cohorts and one was of a second-line treatment population.

3.2.2. Rectal Cancer

All studies related to first-line treatment. The majority were retrospective (n = 12), and the remainder were prospective in design, including NCT01171300 [53] and FOWARC (NCT01211210) [54]. Six of the papers were multi-institution studies, with the remainder involving a single institution.

3.2.3. Lung Cancer

Of the 44 papers reviewed, only six were associated with one or more clinical trials (NCT00533949, NCT02136355, NCT00087438, NCT00181545, NCT00181506, NCT00572325, NCT00573040, NCT01166204, NCT01084785 and NCT01936571) [78,79,82,95,103]. The majority of the studies (39/44) were retrospective, and 17 of the 44 studies related to first-line treatment.

3.2.4. Glioblastoma Multiforme

Study designs were mostly retrospective (n = 25); however, three studies with modest sample sizes (range 15–54) were conducted prospectively (including GLIAA-Pilot/DRKS00000633 and NCT02329795) [135,138]. A further two studies involved retrospective analyses of larger prospectively collected clinical trial datasets (NCT01089868 and DIRECTOR/NCT00941460) [112,113], and one single-institution retrospective study included a prospective test-retest cohort [131]. Most analyses pertained to the first-line setting (n = 26), and fewer studies investigated recurrence images (n = 5).

3.3. Type of Imaging Modality and Strategy of Segmentation

The volumetric and/or radiomic features were extracted using a semi-automated or automated pipeline for 10/34 BC papers (29.4%), 2/15 RC papers (13.3%), 17/44 LC papers (38.6%) and 18/31 GBM papers (58.0%). Most of the studies assessed the performance of handcrafted features using classical statistical methods or machine learning models, with only eight studies employing deep learning approaches.

3.3.1. Breast Cancer

The majority of papers (25/34) evaluated MRI-based data. Four studies were of FDG-PET/CT scans, three of ultrasound scans and two of CT scans. Automated or semi-automated data extraction strategies were applied in 10 studies; in the other 24 studies, a manual data curation step was included upstream of imaging feature extraction. On average, two assessors per study were used to segment/assess the raw imaging data manually (median = 2, mean = 1.9, range 1–9), with more assessors required for the multi-institution prospective studies; for example, the ACRIN 6657/ISPY-1 (NCT00033397) study [18,24,25,32,36] used nine assessors in total (one per investigation site).

3.3.2. Rectal Cancer

MRI is the preferred imaging modality for the local staging of rectal cancer [139], and this was reflected in the included papers, with most studies reporting on MRI alone (n = 10) or in combination with other imaging modalities (MRI and CT, n = 1; MRI and PET-CT, n = 2; PET-MRI and CT, n = 1). A single study used CT as its sole imaging modality [57]. All of the included studies clearly outlined the image acquisition parameters, and some (n = 4) included detail on the pre-processing steps performed prior to image analysis. The method of ROI segmentation in the 15 included studies varied: 13 included manual segmentations alone, one study contained a mix of manual and semi-automated segmentations, and one study contained only semi-automated segmentations. The number of assessors per study varied from one to four; assessors were radiologists, radio-oncologists or researchers experienced in segmentation.

3.3.3. Lung Cancer

The majority of papers (29/44) evaluated CT images, with 15/44 evaluating PET-CT images and 1/44 evaluating cone-beam CT images. The image acquisition method, including scanner model, slice thickness and use of contrast, was reported fully in 26 papers. Segmentation was reported as automated, semi-automated or manual by 4, 18 and 22 studies, respectively. Of the studies using semi-automated or manual segmentation, the number of assessors ranged between 1 and 3 (median = 1), with 20 papers not reporting the number of assessors. Tumor margin and peritumoral voxels are salient features for predicting treatment failure [79], yet none of the included papers carried out robustness studies to assess inter-reader variability.

3.3.4. Glioblastoma Multiforme

MRI is the standard radiological modality in GBM management [140] and was the sole imaging type in 29 of 31 studies. Other modalities investigated were PET-MRI (n = 1) and FET-PET (n = 1). In terms of image analysis pipelines, the segmentation approach was reported as automated, semi-automated or manual by 7, 11 and 9 studies, respectively, and was not specified in 4 manuscripts. Of the 20 studies using semi-automated or manual segmentation, the number of assessors ranged between 1 and 3 (median = 2); assessors were radiologists, oncologists or other expert researchers. A total of 26 studies used pre-treatment scans alone, whilst five used pre- and post-treatment images.

3.4. Type of Algorithms and Primary Endpoints

The most commonly used methods were Cox regression (n = 40), logistic regression (n = 20), random forest (n = 15) and support vector machine (SVM) (n = 16). Other machine learning models were implemented for the predictive or prognostic analysis of imaging features in 10 papers (see Table 2).
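To make the two dominant model families concrete, the sketch below fits a Cox proportional-hazards model to a survival endpoint and a random forest classifier to a binary response endpoint on a synthetic radiomic feature table; the feature names, data and the lifelines/scikit-learn dependencies are illustrative assumptions and do not reproduce any specific study in Table 2.

```python
# Illustrative sketch of the most common model families reported in Table 2,
# applied to a synthetic radiomic feature table (not any specific study's data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame(rng.normal(size=(n, 3)),
                  columns=["shape_volume", "glcm_entropy", "firstorder_mean"])
df["age"] = rng.integers(35, 80, n)        # example clinical covariate
df["pfs_months"] = rng.exponential(12, n)  # synthetic progression-free survival times
df["progressed"] = rng.integers(0, 2, n)   # event indicator (1 = progression observed)
df["pCR"] = rng.integers(0, 2, n)          # synthetic binary response label

# 1) Cox proportional-hazards regression for a survival endpoint (e.g., PFS).
cph = CoxPHFitter()
cph.fit(df.drop(columns=["pCR"]), duration_col="pfs_months", event_col="progressed")
print(cph.summary[["coef", "exp(coef)", "p"]])  # hazard ratio per feature

# 2) Random forest classification for a binary response endpoint (e.g., pCR).
X = df[["shape_volume", "glcm_entropy", "firstorder_mean", "age"]]
y = df["pCR"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Hold-out AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```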

3.4.1. Breast Cancer

The algorithms implemented for radiomic or volumetric analyses were heterogeneous, with 12 different types deployed: 3D mathematical model (n = 1), extended Tofts–Kety (ETK) model (n = 1), convolutional neural network (CNN) (n = 3), Cox regression (n = 6), histogram analysis (n = 1), Jacobian map (n = 1), linear discriminant analysis (n = 1), logistic regression (n = 6), other machine learning models (n = 1), nomogram (n = 5), random forest (n = 2) and SVM (n = 6). Additional computational studies were carried out for four of the six SVM analyses, two of the six logistic regression analyses and one of the five nomogram analyses; these encompassed Fisher’s linear discriminant (FLD), k-nearest neighbour (KNN), stochastic gradient descent (SGD), decision tree adaptive boosting (AdaBoost) and extreme gradient boosting (XGBoost) methods.
pCR was the primary endpoint for most studies, either alone (n = 22) or in combination with survival endpoints (n = 5) or ORR by RECIST v1.1 (n = 1). The six remaining computational models tested either the prognostic value of the analyses against survival endpoints (disease-free survival, n = 4; overall survival, n = 1) or the predictive value against a non-standardized imaging-based endpoint measuring the Kendall correlation coefficient (KCC) between model-predicted tumor response and the observed values at the time of the final scan (n = 1).

3.4.2. Rectal Cancer

A variety of approaches were applied to the analysis of the collected data. Overall, 11 studies used a single model (logistic regression, n = 6; linear regression, n = 2; partial least squares (PLS) regression, n = 1; LASSO Cox regression, n = 1; SVM, n = 1). Two studies applied multiple models to the analysis: the first (using logistic regression, SVM and gradient boosting machine (GBM)) tested its models in parallel [54]; the second used its first model (an SVM) to create a radiomic signature, which was then ensembled with a second multivariate logistic regression model to create a radiomic model, and the performances of both the radiomic signature and the radiomic model were tested with an independent validation cohort [62]. Two studies used a deep learning-based approach.
The majority (n = 14) of the studies used pathological complete response, or disease response by tumor regression grade (TRG), as an endpoint, and a single study used disease-free survival (DFS) [58].

3.4.3. Lung Cancer

The algorithms implemented for the analysis of collected data were varied. Most applied a single model: 21 studies utilized logistic regression (n = 1), linear regression (n = 1), Cox regression (n = 18) or recursive partitioning (n = 1) approaches, and seven studies utilized machine learning approaches, including support vector machine (n = 2), random forest (n = 3) and neural network (n = 2). Of the papers assessing multiple models (n = 16), approaches included logistic, linear and Cox regression, unsupervised clustering, random forest, neural network, support vector machine and nomograms.
The primary endpoints assessed were OS alone (n = 16) or OS in combination with PFS (n = 6) or DFS (n = 3). The remaining papers assessed DFS (n = 5), PFS (n = 10), cause-specific survival (n = 1) or other measurements of tumor response (n = 3) [90,103,104], including one paper measuring pCR. While some papers utilized RECIST to assess tumor response to treatment (n = 8), Yang et al. [103] were the only group to compare the predictive ability of their imaging biomarker against that of RECIST or PERCIST. They found that their recursive partitioning analysis model achieved better pCR prediction (concordance: 0.92; p = 0.03) than RECIST (concordance: 0.54) or PERCIST (concordance: 0.58).

3.4.4. Glioblastoma Multiforme

Most (n = 29) studies applied a single model: 17 performed logistic/linear/Cox regression, one used apparent diffusion coefficient histogram analysis, and 11 used machine learning approaches (random forest: n = 6, support vector machine: n = 5). Of two studies that used multiple machine learning algorithms, one reported that XGBoost achieved superior performance to a random forest model [110]. The other demonstrated that linear regression-based genomic and radiogenomic prediction models outperformed counterparts that used RF, SVM, artificial neural network and gradient boosting methods [134].
Predicted endpoints were predominantly OS alone (n = 18), or both PFS and OS (n = 8). Other studies predicted PFS alone (n = 1); PFS and site of recurrence (n = 1); time to progression, OS and site of recurrence (n = 1) [135]; and site of recurrence (n = 2) [124,138].

3.5. Integration of Radiomics with Genomics and Multi-Omics

Imaging features were integrated with validated prognostic and/or predictive markers (e.g., clinical, histological or molecular data) in 20/34 BC models (58.8%), 9/15 RC models (60.0%), 15/44 LC models (34.0%) and 19/31 GBM models (61.2%) in the articles included in this review. However, exploratory genomics or transcriptomics (including longitudinal ctDNA and RNA sequencing) or other molecular datapoints were integrated in a minority of the analyses (BC 1/34, RC 1/15, LC 5/44 and GBM 2/31).

3.5.1. Breast Cancer

Molecular properties (ER status by IHC and HER2 status by ISH/IHC) were reported for 26/34 (76%) of the manuscripts. Three papers included pre-specified molecular inclusion criteria: two included only HER2+ breast cancer populations and one only triple-negative breast cancer (TNBC). Clinical and molecular factors with established prognostic value were integrated with radiomic or volumetric features for the majority (20/34 studies, 58.8%) of the models. Two radiomics studies integrated other molecular data points (RNA sequencing of a TCIA cohort or lncRNA sequencing data) [27,32].

3.5.2. Rectal Cancer

A total of 10 studies used models which included clinical data, and only one of the 15 studies included molecular data, in the form of circulating tumor cells (CTCs) and extracellular vesicles (EVs) such as cancer microparticles (MPs) [63].

3.5.3. Lung Cancer

In all, 95% (n = 42) of papers recorded clinical details such as Eastern Cooperative Oncology Group (ECOG) performance status and histology, and 12 papers incorporated these data into multiomic models. Additionally, nine papers used deltaomic biomarkers, quantifying a change in imaging signature over time, and two of the 12 papers combined both multiomic and deltaomic data [91,107]. The molecular properties of lung tumors (EGFR pathogenic variants, ROS or ALK gene rearrangements, PD-L1 expression) were studied (n = 8) and used as inclusion criteria (n = 5) or assessed for predictive value (n = 4).

3.5.4. Glioblastoma Multiforme

Fewer than half (n = 12) of the publications reviewed included only imaging features in their prognostic models. These included an SVM model in a multi-institution radiomics study (n = 80), which achieved PFS prediction with AUC values between 0.82 and 0.88 [116], and an RF model from a smaller (n = 40) radiomics study based on TCGA data, which reported similar AUC values for PFS (0.8537) and OS (0.8554) [129].
Nineteen studies integrated imaging features with other data types: clinical and molecular data (n = 10), clinical only (n = 8), molecular only (n = 1). Age was the most commonly utilized clinical data type and was specified as being a selected feature in 12 studies.

3.6. Validation and Data Sharing Strategy

One crucial step in closing the first translational gap is to deploy a suitable validation strategy that uses an independent dataset. Where only limited data are available, resampling approaches can be applied to validate models.
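The sketch below illustrates, on synthetic data, the two resampling schemes most often reported in the studies discussed below (k-fold cross-validation and bootstrapping); it is a generic example rather than a reproduction of any reviewed study's pipeline.

```python
# Generic sketch of internal validation by resampling: k-fold cross-validation
# and a bootstrap estimate of performance, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.utils import resample

X, y = make_classification(n_samples=150, n_features=20, n_informative=5, random_state=0)
model = LogisticRegression(max_iter=1000)

# 1) Stratified 5-fold cross-validation of the AUC.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc_folds = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"5-fold AUC: {auc_folds.mean():.2f} +/- {auc_folds.std():.2f}")

# 2) Bootstrap resampling of the apparent AUC to attach an uncertainty interval.
model.fit(X, y)
boot_aucs = []
for seed in range(200):
    idx = resample(np.arange(len(y)), random_state=seed)
    if len(np.unique(y[idx])) < 2:  # skip degenerate resamples with a single class
        continue
    boot_aucs.append(roc_auc_score(y[idx], model.predict_proba(X[idx])[:, 1]))
print("Bootstrap AUC 95% CI:", np.percentile(boot_aucs, [2.5, 97.5]))
```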

3.6.1. Breast Cancer

Eleven studies did not deploy any validation strategy. For the remainder, the radiomic features were validated in an internal test set; the most commonly used technique was k-fold cross-validation (15/34), followed by the leave-one-out cross-validation approach (2/34, 5.8%). Other approaches included Cox regression to select the features correlated with PFS (2/34), the grid search method or calibration of the receiver operating characteristic (ROC) curve using the Hosmer–Lemeshow test. External validation was performed in only a small proportion of the studies (3/34), with one study examining inter- and intra-observer assessment variability [48]. The code used for analysis was made available for only one study [22].

3.6.2. Rectal Cancer

Out of the 15 papers, 13 included a clear internal validation method (these included 10-, 5-, 4- and 3-fold cross-validation, leave-one-out cross-validation and bootstrapping methods). Of these, only four also tested their models on an external dataset. AUC was the most commonly quoted metric of performance (14 papers). None of the studies reviewed contained models or tools that had received regulatory approval for clinical use.
None of the 15 included studies made the code for their models available.

3.6.3. Lung Cancer

Overall, only 20% (n = 9) of papers performed external validation, with the remaining papers either being internally validated (n = 18) or not undergoing any testing of validity (n = 17). Internal validation approaches included validation using a portion of the dataset with k-fold cross-validation, leave-one-out cross-validation or bootstrapping, or validation in an independent internal sample. One study [77] separated its sample population by image matrix size, such that one matrix size was used for the test dataset and the other for the validation cohort, to test validity across matrix sizes. The rarity of external validation, combined with the rarity of prospective studies (n = 5) or studies incorporating multi-center data (n = 11), is reflected in the fact that none of the models assessed has been approved by regulatory bodies for use in a clinical context.

3.6.4. Glioblastoma Multiforme

Twelve manuscripts included single-institution datasets with fewer cases (range 15–181, mean 94.5), while four studies utilized larger datasets from two to six separate institutions (range 80–837, mean 288). Seven manuscripts focused entirely on publicly available data (TCGA/TCIA/BraTS17/BraTS18/Ivy-GAP); a further five papers combined single-institution data with open-source datasets, and three combined multi-institution data with open-source datasets.
Overall, 26 out of 31 studies outlined a clear internal validation method (typically 10-, 5- or 3-fold cross-validation, leave-one-out cross-validation or bootstrapping), and 10 studies externally validated their findings in an independent cohort. Hazard ratios were the most commonly reported model performance metric (16 papers), followed by the area under the ROC curve (AUC, n = 12), Harrell’s C/concordance index (n = 10), accuracy (n = 7), sensitivity/specificity/positive or negative predictive value (n = 7) and the integrated Brier score/mean prediction error rate (n = 4).

4. Discussion

FDA/EMA guidance documentation details the rigorous steps required on the route to regulatory biomarker qualification of artificial intelligence and machine learning (AI/ML) endpoints (e.g., as an FDA-qualified medical device development tool (MDDT)) [141,142]. Our review offers a snapshot of the existing literature on radiomic/volumetric-based predictive or prognostic models, from which three areas stand out as key priorities in bringing volumetric-based and radiomics-based clinical trial endpoints closer to prime-time clinical implementation, namely: segmentation, validation and data sharing strategies. We have evaluated the tumor types with the highest number of publications (i.e., breast cancer, rectal cancer, lung cancer and glioblastoma) to illustrate the common challenges in implementing volumetric and/or radiomic endpoints in clinical trial design. Although different biological processes characterize these diverse tumor types, the challenges for implementing novel imaging biomarkers are likely to be similar.

4.1. Strategy of Segmentation

Classical radiomics requires segmentation of a region of interest from which radiomic features are computed [143]. Manual segmentation by expert readers is time consuming and has the potential to introduce reader-dependent bias [11,110,143]. It is possible to assess the inter-reader variability of the resulting features and exclude those that are variable [48,143]. However, this may affect the predictive ability of the resulting tool, as tumor margin and peritumoral areas have been found to be salient features for predicting tumor response [79].
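One widely used way to quantify such inter-reader variability is to compute an intraclass correlation coefficient (ICC) per feature across segmentations produced by different readers and retain only the stable features; the sketch below illustrates this using the pingouin package, with the ICC variant and the 0.75 threshold as illustrative assumptions rather than choices taken from the cited studies.

```python
# Sketch of inter-reader robustness filtering of radiomic features using a
# per-feature intraclass correlation coefficient (ICC). The ICC variant (ICC2)
# and the 0.75 cut-off are illustrative assumptions.
import pandas as pd
import pingouin as pg  # pip install pingouin


def robust_features(long_df: pd.DataFrame, icc_threshold: float = 0.75) -> list:
    """Return feature names whose inter-reader ICC exceeds the threshold.

    long_df has one row per lesion x reader x feature, with columns:
    'lesion', 'reader', 'feature', 'value'.
    """
    keep = []
    for feature, sub in long_df.groupby("feature"):
        icc = pg.intraclass_corr(data=sub, targets="lesion",
                                 raters="reader", ratings="value")
        # ICC2: two-way random effects, absolute agreement, single rater.
        icc2 = icc.loc[icc["Type"] == "ICC2", "ICC"].iloc[0]
        if icc2 >= icc_threshold:
            keep.append(feature)
    return keep
```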
Large sample sizes are required, especially for training deep learning models; however, the associated increase in workload with manual segmentation creates a key bottleneck. The ideal strategy for segmentation in the context of large datasets would be reliable, reproducible and high throughput. Automated or semi-automated tools for segmentation are being developed and utilized by researchers to relieve this bottleneck [144]. The increased sample size accessed through these approaches allows researchers to test and develop more data-hungry predictive tools such as convolutional neural networks [110,145].
Using automated tools to segment large quantities of data is a tempting solution to the bottleneck; however, AI segmentation is vulnerable to artefacts and noise [146]. It is important to keep this in mind when developing data sharing infrastructure. Archives should be curated to identify images that may be affected by noise or artefacts so that teams developing the tools can apply appropriate pre-processing steps or exclude data to eliminate the risk of bias [147]. On the other hand, the increased availability of publicly available datasets is aiding the development of these tools [144].
It is now possible to create a fully automated radiomics pipeline, resulting in a multiparametric signature with superior performance compared to fixed-parameter radiomics signatures and conventional prognostic factors [120]. The model created by Li et al. [120] utilized automated harmonization techniques such as resampling, automated segmentation and radiomic analysis. Unfortunately, it was limited by its small sample size and retrospective nature [120], further highlighting the need for accessible data warehouses.
Furthermore, segmentation of tumor sub-volumes may also be of importance to model performance [109,112,129], as exemplified by manual segmentation of GBM subregions, with each subregion yielding a different AUC (tumor only: 52.99%, edema only: 61.77%, necrosis only: 63.85%, all three subregions: 66.99%) [129].

4.2. Validation Strategy

Validation of a predictive model is a useful indicator of its potential effectiveness in the target population, and this is reflected by its inclusion in the RQS [11]. The TRIPOD statement outlines the different types of validation methods available to researchers developing prognostic tools [148]. Both TRIPOD and the RQS rank external validation as the most effective form of validation [11,148], as the data is more independent [11]. Unfortunately, external validation is rare in the literature while internal validation is more common [149]. This lack of validation of predictive models is a barrier to their implementation in clinical practice [143].
External validation can be achieved using data obtained directly from other institutions [68]. This allows initial validation but can result in a small validation dataset that requires further study to characterize [68]. Alternatively, some groups opt to create test and/or validation datasets using data from an image biobank such as The Cancer Imaging Archive (TCIA), which contains data from multiple centers [99].
External validation does not necessarily need to occur with the same imaging methodology; for example, some authors validated a previously published CT radiomic signature using a cone-beam CT (CBCT) dataset [66]. Although this required modification of the signature to remove radiomic features that varied between CT and CBCT, it illustrates how external validation can be used to broaden the applicability of a predictive tool.
An additional strategy to achieve external validation is to produce open-source code, along with a transparent methodology to allow independent verification of results by other research groups [11]. Requesting independent validation of an algorithm in an external dataset would be one option to externally validate a tool without the challenges of maintaining or using an image biobank.
Finally, in the absence of an external dataset, some studies have utilized temporal validation, where the validation dataset is from the same institution but a different timepoint [92]. Some studies have used temporal validation as a prospective test of AI tools [131]. Other studies have changed the image acquisition protocols to create a heterogeneous dataset to validate the tool against [112]. Similarly, Park et al. separated their sample population by image matrix size, such that one matrix size was used for the test dataset and the other for the validation cohort, to test validity across matrix sizes [77].
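A generic external or temporal validation workflow, under the hypothetical assumption of two cohort files and a binary response label, might look like the sketch below: all model fitting (including feature scaling) is confined to the development cohort, and the frozen model is scored once on the held-out cohort.

```python
# Generic sketch of external/temporal validation: fit on the development cohort
# only, then score the frozen model once on an external (or later) cohort.
# The CSV file names and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FEATURES = ["shape_volume", "glcm_entropy", "firstorder_mean"]

dev = pd.read_csv("development_cohort.csv")  # internal institution / earlier period
ext = pd.read_csv("external_cohort.csv")     # other institution(s) or later period

# Scaling is learned on the development cohort only, so no information from the
# validation cohort leaks into the model.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(dev[FEATURES], dev["response"])

ext_auc = roc_auc_score(ext["response"], model.predict_proba(ext[FEATURES])[:, 1])
print(f"External validation AUC: {ext_auc:.2f}")
```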

4.3. Data Sharing

Open data models are necessary to improve discovery and reproducibility. For example, the RECIST measurements were validated using the European Organisation for Research and Treatment of Cancer (EORTC) data warehouse of case report forms (CRFs) from 50 clinical trials, with more than 6500 patients treated with chemotherapy and more than 23,000 patients treated with targeted agents [150]. However, creating a data warehouse of case report forms containing longitudinal measurements instead of primary imaging data makes it difficult to assess inter- and intra-observer variability and to allow for external validation studies [150].
Sharing of imaging and clinical data can contribute to the development of AI tools by enabling the use of larger sample sizes, external validation of results and incorporation of more variables into predictive tools [11]. However, there are some challenges to implementing and using such biobanks, including confidentiality, data ownership and curation of the data [145]. Radiomic signatures are sensitive to changes in technical elements such as voxel size, use of contrast and other technical variations [120], so it is important that these data are also included in biobanks to allow data harmonization or exclusion [99].
Other strategies to acquire data include using clinical trial data as development [53,112] or validation [95] datasets. Utilizing clinical trial data has the added benefit of enabling prospective testing of the tool. Papers that utilize clinical trial data either gather data from trials conducted within their institution [82] or acquire data from data warehouses such as TCIA [95]. Unfortunately, prospective studies utilizing clinical trial data not sourced from data warehouses rarely mention any data sharing methodologies, with some papers offering access to the original data through writing to an author [112]. The reluctance to make data openly and publicly available could be due to factors such as consent and data security; for example, Lou et al. specifically request that the data are used only for the purposes outlined to the data provider and mandate that redistribution of the data is prohibited [79].
TCIA is a data warehouse that uses encryption along with a semi-automated de-identification procedure, which includes manual review of each image prior to publication to check for missed patient details in the file, such as pixel-embedded patient information [151]. Semi-automated and visual quality control procedures are also in place to ensure images are visible and uncorrupted [151]. The requirement for manual checks means that TCIA must employ staff to grow and maintain the archive, as well as to assist researchers in uploading data to, or using data from, the archive [151]. As TCIA aims to be a diverse archive, the varying modalities, acquisition protocols and levels of prior de-identification mean that curation of data uploaded to TCIA is a labor-intensive process. This limits the amount of data that can be processed, uploaded and made publicly available at one time.
Further challenges to data sharing and curation arise from radiomic approaches such as deltaomic analyses, which require images from multiple timepoints. Furthermore, multiomic tools require not just imaging data but also associated clinical or genetic data. Despite these challenges, projects such as The Cancer Genome Atlas (TCGA) endeavor to create publicly available ‘next generation’ data warehouses with clinical, radiological, genomic and other data types for various tumor types [145].
Another relevant approach is the large public–private partnership Vol-PACT (Advanced metrics and modelling with Volumetric Computed Tomography for Precision Analysis of Clinical Trial results) [152]. The Vol-PACT effort aims to develop volumetric CT metrics for precision analysis of clinical trial results in measurable solid tumors (non-small-cell lung cancer, colorectal cancer, renal cell cancer and melanoma) [152]. Its initial benchmarking of volumetric and mathematical modelling is against RECIST v1.1, immune-related RECIST and case report forms (CRFs), with the goal of establishing a repository of semi-automatically calculated tumor burden [152]. To date, results of the Vol-PACT approach have been published for melanoma trials [153] and metastatic colorectal cancer trials [154]. A random forest algorithm using imaging features from 575 patients with advanced melanoma outperformed the standard RECIST v1.1 response method [153]. Deep learning scores derived from imaging data collected during the metastatic colorectal cancer VELOUR trial (NCT00561470) performed better than size-based criteria (RECIST v1.1 and early tumor shrinkage (ETS)) [154].
Various organizations have attempted to provide guidance for researchers using patient data to develop AI tools. Notably, the Joint European and North American Multisociety Statement on the ethics of AI highlights the need for these tools to minimize harm, ensure any potential harm is equally distributed amongst stakeholders and curtail bias [155]. Despite this, there is some asymmetry in the racial representation of imaging archives; for example, TCGA overrepresents Caucasian populations despite the fact that some tumor types behave differently in other ethnicities [147]. The multisociety statement acknowledges the need to develop policy for ethical practice for researchers developing AI [155]. Such policies have since been created; for example, the Analytic Imaging Diagnostics Arena (AIDA) in Sweden has created a grassroots policy informed by various stakeholders to guide AI researchers, taking into account legal frameworks such as GDPR [156]. This has led to the creation of the AIDA data hub, a national-level data warehouse, which can be accessed by researchers upon request [156]. Both AIDA and the multisociety statement acknowledge that new ethical challenges mean that best practice for data sharing is evolving, so it is important to reassess regulations, policy and advice to keep up with this change [155,156].
Increased availability of data may also allow research groups to utilize approaches that demand larger sample sizes, such as convolutional neural networks [145], or to increase the number of radiomic markers included in a model [11]; without an appropriate sample size, these approaches may be vulnerable to issues such as overfitting [143]. In parallel with improving the AI approaches, it is essential to create the framework for wider and free data sharing, as recommended by the CRUK data sharing guidelines [157].

5. Conclusions

Current validated imaging biomarkers based on RECIST v1.1 were developed using a large collection of CRFs. The extraction of quantitative features from standard-of-care imaging for prognostic or predictive modelling is an active area of research that is rapidly generating potential imaging-based biomarkers of treatment response and outcome in oncological research. To date, however, radiomic or volumetric biomarkers have been assessed in relatively small study cohorts and have shown varying degrees of promise. Ultimately, these approaches need to be tested within prospective, large multicenter studies before being ready for wide-scale adoption in clinical practice. Future use-cases will likely require (a) higher-throughput systems for automated segmentation, (b) a higher standard for publishing externally validated models, (c) improved imaging data sharing capabilities to allow more prospective studies with volumetric/radiomic endpoints and (d) approaches for integration of radiology-based data with multiomic data. A robust and effective ethical and legal framework for data sharing will be crucial to support the discovery and validation of novel cancer imaging biomarkers towards the aim of regulatory biomarker qualification and clinical adoption.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers14205076/s1, Section S1: Search strategy; Section S2: Quality assessment method; Table S1: Study selection: inclusion/exclusion criteria; Figure S1: Timeline of reviewed publications.

Author Contributions

Conceptualization, I.-G.F., M.R., C.M., B.B. and E.S.; methodology, I.-G.F., M.R., C.M., B.B. and E.S.; formal analysis, I.-G.F., P.P., M.R. and C.M.; writing—original draft preparation, I.-G.F., P.P., M.R., C.M., B.B. and E.S.; writing—review and editing, I.-G.F., P.P., M.R., C.M., B.B. and E.S. All authors have read and agreed to the published version of the manuscript.

Funding

We acknowledge funding and support from Cancer Research UK (grant number A22905), the Cancer Research UK Cambridge Centre (A25177), the Mark Foundation for Cancer Research, Cancer Research UK Cambridge Centre [C9685/A25177], the Wellcome Trust Innovator Award [RG98755], the CRUK National Cancer Imaging Translational Accelerator (NCITA) [C42780/A27066], and the National Institute of Health Research (NIHR) Cambridge Biomedical Research Centre (BRC-1215-20014), the National Cancer Research Network, the Cancer Research UK Experimental Cancer Medicine Centres, Hutchison Whampoa Limited and Joseph Mitchell Trust Fund. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics. FDA. Available online: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-trial-endpoints-approval-cancer-drugs-and-biologics (accessed on 12 October 2022).
  2. Table of Surrogate Endpoints That Were the Basis of Drug Approval or Licensure. FDA. Available online: https://www.fda.gov/drugs/development-resources/table-surrogate-endpoints-were-basis-drug-approval-or-licensure (accessed on 25 April 2022).
  3. Eisenhauer, E.A.; Therasse, P.; Bogaerts, J.; Schwartz, L.H.; Sargent, D.; Ford, R.; Dancey, J.; Arbuck, S.; Gwyther, S.; Mooney, M.; et al. New Response Evaluation Criteria in Solid Tumours: Revised RECIST Guideline (Version 1.1). Eur. J. Cancer 2009, 45, 228–247. [Google Scholar] [CrossRef]
  4. Bera, K.; Braman, N.; Gupta, A.; Velcheti, V.; Madabhushi, A. Predicting Cancer Outcomes with Radiomics and Artificial Intelligence in Radiology. Nat. Rev. Clin. Oncol. 2022, 19, 132–146. [Google Scholar] [CrossRef] [PubMed]
  5. Bettinelli, A.; Marturano, F.; Avanzo, M.; Loi, E.; Menghi, E.; Mezzenga, E.; Pirrone, G.; Sarnelli, A.; Strigari, L.; Strolin, S.; et al. A Novel Benchmarking Approach to Assess the Agreement among Radiomic Tools. Radiology 2022, 303. [Google Scholar] [CrossRef]
  6. Zimmermann, M.; Kuhl, C.; Engelke, H.; Bettermann, G.; Keil, S. Volumetric Measurements of Target Lesions: Does It Improve Inter-Reader Variability for Oncological Response Assessment According to RECIST 1.1 Guidelines Compared to Standard Unidimensional Measurements? Pol. J. Radiol. 2021, 86, e594–e600. [Google Scholar] [CrossRef]
  7. Nishino, M. Tumor Response Assessment for Precision Cancer Therapy: Response Evaluation Criteria in Solid Tumors and Beyond. Am. Soc. Clin. Oncol. Educ. Book 2018, 38, 1019–1029. [Google Scholar] [CrossRef]
  8. Hylton, N.M.; Blume, J.D.; Bernreuter, W.K.; Pisano, E.D.; Rosen, M.A.; Morris, E.A.; Weatherall, P.T.; Lehman, C.D.; Newstead, G.M.; Polin, S.; et al. Locally Advanced Breast Cancer: MR Imaging for Prediction of Response to Neoadjuvant Chemotherapy--Results from ACRIN 6657/I-SPY TRIAL. Radiology 2012, 263, 663–672. [Google Scholar] [CrossRef] [Green Version]
  9. Xiao, J.; Tan, Y.; Li, W.; Gong, J.; Zhou, Z.; Huang, Y.; Zheng, J.; Deng, Y.; Wang, L.; Peng, J.; et al. Tumor Volume Reduction Rate Is Superior to RECIST for Predicting the Pathological Response of Rectal Cancer Treated with Neoadjuvant Chemoradiation: Results from a Prospective Study. Oncol. Lett. 2015, 9, 2680–2686. [Google Scholar] [CrossRef] [Green Version]
  10. O’Connor, J.P.B.; Aboagye, E.O.; Adams, J.E.; Aerts, H.J.W.L.; Barrington, S.F.; Beer, A.J.; Boellaard, R.; Bohndiek, S.E.; Brady, M.; Brown, G.; et al. Imaging Biomarker Roadmap for Cancer Studies. Nat. Rev. Clin. Oncol. 2016, 14, 169–186. [Google Scholar] [CrossRef]
  11. Lambin, P.; Leijenaar, R.T.H.; Deist, T.M.; Peerlings, J.; De Jong, E.E.C.; Van Timmeren, J.; Sanduleanu, S.; Larue, R.T.H.M.; Even, A.J.G.; Jochems, A.; et al. Radiomics: The Bridge between Medical Imaging and Personalized Medicine. Nat. Rev. Clin. Oncol. 2017, 14, 749–762. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Moons, K.G.M.; Altman, D.G.; Reitsma, J.B.; Ioannidis, J.P.A.; Macaskill, P.; Steyerberg, E.W.; Vickers, A.J.; Ransohoff, D.F.; Collins, G.S. Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD): Explanation and Elaboration. Ann. Intern. Med. 2015, 162, W1–W73. [Google Scholar] [CrossRef]
  13. Zwanenburg, A.; Vallières, M.; Abdalah, M.A.; Aerts, H.J.W.L.; Andrearczyk, V.; Apte, A.; Ashrafinia, S.; Bakas, S.; Beukinga, R.J.; Boellaard, R.; et al. The Image Biomarker Standardization Initiative: Standardized Quantitative Radiomics for High-Throughput Image-Based Phenotyping. Radiology 2020, 295, 328–338. [Google Scholar] [CrossRef] [Green Version]
  14. Rayyan—Intelligent Systematic Review. Available online: https://www.rayyan.ai/ (accessed on 25 April 2022).
  15. Zhuang, X.; Chen, C.; Liu, Z.; Zhang, L.; Zhou, X.; Cheng, M.; Ji, F.; Zhu, T.; Lei, C.; Zhang, J.; et al. Multiparametric MRI-Based Radiomics Analysis for the Prediction of Breast Tumor Regression Patterns after Neoadjuvant Chemotherapy. Transl. Oncol. 2020, 13, 100831. [Google Scholar] [CrossRef]
  16. Zhou, J.; Lu, J.; Gao, C.; Zeng, J.; Zhou, C.; Lai, X.; Cai, W.; Xu, M. Predicting the Response to Neoadjuvant Chemotherapy for Breast Cancer: Wavelet Transforming Radiomics in MRI. BMC Cancer 2020, 20, 100. [Google Scholar] [CrossRef] [Green Version]
  17. Li, X.; Abramson, R.G.; Arlinghaus, L.R.; Kang, H.; Chakravarthy, A.B.; Abramson, V.G.; Farley, J.; Mayer, I.A.; Kelley, M.C.; Meszoely, I.M.; et al. Multiparametric Magnetic Resonance Imaging for Predicting Pathological Response after the First Cycle of Neoadjuvant Chemotherapy in Breast Cancer. Investig. Radiol. 2015, 50, 195–204. [Google Scholar] [CrossRef] [Green Version]
  18. Li, W.; Newitt, D.C.; Wilmes, L.J.; Jones, E.F.; Arasu, V.; Gibbs, J.; La Yun, B.; Li, E.; Partridge, S.C.; Kornak, J.; et al. Additive Value of Diffusion-Weighted MRI in the I-SPY 2 TRIAL. J. Magn. Reson. Imaging 2019, 50, 1742–1753. [Google Scholar] [CrossRef]
  19. Li, P.; Wang, X.; Xu, C.; Liu, C.; Zheng, C.; Fulham, M.J.; Feng, D.; Wang, L.; Song, S.; Huang, G. (18)F-FDG PET/CT Radiomic Predictors of Pathologic Complete Response (PCR) to Neoadjuvant Chemotherapy in Breast Cancer Patients. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 1116–1126. [Google Scholar] [CrossRef] [PubMed]
  20. Kim, Y.; Kim, S.H.; Lee, H.W.; Song, B.J.; Kang, B.J.; Lee, A.; Nam, Y. Intravoxel Incoherent Motion Diffusion-Weighted MRI for Predicting Response to Neoadjuvant Chemotherapy in Breast Cancer. Magn. Reson. Imaging 2018, 48, 27–33. [Google Scholar] [CrossRef]
  21. Kim, S.; Kim, M.J.; Kim, E.-K.; Yoon, J.H.; Park, V.Y. MRI Radiomic Features: Association with Disease-Free Survival in Patients with Triple-Negative Breast Cancer. Sci. Rep. 2020, 10, 3750. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Jiang, M.; Li, C.-L.; Luo, X.-M.; Chuan, Z.-R.; Lv, W.-Z.; Li, X.; Cui, X.-W.; Dietrich, C.F. Ultrasound-Based Deep Learning Radiomics in the Assessment of Pathological Complete Response to Neoadjuvant Chemotherapy in Locally Advanced Breast Cancer. Eur. J. Cancer 2021, 147, 95–105. [Google Scholar] [CrossRef]
  23. Jarrett, A.M.; Hormuth, D.A., 2nd; Wu, C.; Kazerouni, A.S.; Ekrut, D.A.; Virostko, J.; Sorace, A.G.; DiCarlo, J.C.; Kowalski, J.; Patt, D.; et al. Evaluating Patient-Specific Neoadjuvant Regimens for Breast Cancer via a Mathematical Model Constrained by Quantitative Magnetic Resonance Imaging Data. Neoplasia 2020, 22, 820–830. [Google Scholar] [CrossRef]
  24. Jahani, N.; Cohen, E.; Hsieh, M.K.; Weinstein, S.P.; Pantalone, L.; Hylton, N.; Newitt, D.; Davatzikos, C.; Kontos, D. Prediction of Treatment Response to Neoadjuvant Chemotherapy for Breast Cancer via Early Changes in Tumor Heterogeneity Captured by DCE-MRI Registration. Sci. Rep. 2019, 9. [Google Scholar] [CrossRef] [Green Version]
  25. Hylton, N.M.; Gatsonis, C.A.; Rosen, M.A.; Lehman, C.D.; Newitt, D.C.; Partridge, S.C.; Bernreuter, W.K.; Pisano, E.D.; Morris, E.A.; Weatherall, P.T.; et al. Neoadjuvant Chemotherapy for Breast Cancer: Functional Tumor Volume by MR Imaging Predicts Recurrence-Free Survival-Results from the ACRIN 6657/CALGB 150007 I-SPY 1 TRIAL. Radiology 2016, 279, 44–55. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Huang, X.; Mai, J.; Huang, Y.; He, L.; Chen, X.; Wu, X.; Li, Y.; Yang, X.; Dong, M.; Huang, J.; et al. Radiomic Nomogram for Pretreatment Prediction of Pathologic Complete Response to Neoadjuvant Therapy in Breast Cancer: Predictive Value of Staging Contrast-Enhanced CT. Clin. Breast Cancer 2021, 21, e388–e401. [Google Scholar] [CrossRef]
  27. Yamamoto, S.; Han, W.; Kim, Y.; Du, L.; Jamshidi, N.; Huang, D.; Kim, J.H.H.; Kuo, M.D.D. Breast Cancer: Radiogenomic Biomarker Reveals Associations among Dynamic Contrast-Enhanced MR Imaging, Long Noncoding RNA, and Metastasis. Radiology 2015, 275, 384–392. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Ha, R.; Chin, C.; Karcich, J.; Liu, M.Z.; Chang, P.; Mutasa, S.; Pascual Van Sant, E.; Wynn, R.T.; Connolly, E.; Jambawalikar, S. Prior to Initiation of Chemotherapy, Can We Predict Breast Tumor Response? Deep Learning Convolutional Neural Networks Approach Using a Breast MRI Tumor Dataset. J. Digit. Imaging 2019, 32, 693–701. [Google Scholar] [CrossRef]
  29. Ha, R.; Chang, P.; Karcich, J.; Mutasa, S.; Van Sant, E.P.; Connolly, E.; Chin, C.; Taback, B.; Liu, M.Z.; Jambawalikar, S. Predicting Post Neoadjuvant Axillary Response Using a Novel Convolutional Neural Network Algorithm. Ann. Surg. Oncol. 2018, 25, 3037–3043. [Google Scholar] [CrossRef] [PubMed]
  30. Groheux, D.; Martineau, A.; Teixeira, L.; Espié, M.; de Cremoux, P.; Bertheau, P.; Merlet, P.; Lemarignier, C. (18)FDG-PET/CT for Predicting the Outcome in ER+/HER2- Breast Cancer Patients: Comparison of Clinicopathological Parameters and PET Image-Derived Indices Including Tumor Texture Analysis. Breast Cancer Res. 2017, 19, 3. [Google Scholar] [CrossRef] [Green Version]
  31. Fan, M.; Chen, H.; You, C.; Liu, L.; Gu, Y.; Peng, W.; Gao, X.; Li, L. Radiomics of Tumor Heterogeneity in Longitudinal Dynamic Contrast-Enhanced Magnetic Resonance Imaging for Predicting Response to Neoadjuvant Chemotherapy in Breast Cancer. Front. Mol. Biosci. 2021, 8, 622219. [Google Scholar] [CrossRef]
  32. Drukker, K.; Li, H.; Antropova, N.; Edwards, A.; Papaioannou, J.; Giger, M.L. Most-Enhancing Tumor Volume by MRI Radiomics Predicts Recurrence-Free Survival “Early on” in Neoadjuvant Treatment of Breast Cancer. Cancer Imaging 2018, 18. [Google Scholar] [CrossRef] [Green Version]
  33. Dogan, B.E.; Yuan, Q.; Bassett, R.; Guvenc, I.; Jackson, E.F.; Cristofanilli, M.; Whitman, G.J. Comparing the Performances of Magnetic Resonance Imaging Size vs Pharmacokinetic Parameters to Predict Response to Neoadjuvant Chemotherapy and Survival in Patients With Breast Cancer. Curr. Probl. Diagn. Radiol. 2019, 48, 235–240. [Google Scholar] [CrossRef] [PubMed]
34. Dasgupta, A.; Brade, S.; Sannachi, L.; Quiaoit, K.; Fatima, K.; DiCenzo, D.; Osapoetra, L.O.; Saifuddin, M.; Trudeau, M.; Gandhi, S.; et al. Quantitative Ultrasound Radiomics Using Texture Derivatives in Prediction of Treatment Response to Neo-Adjuvant Chemotherapy for Locally Advanced Breast Cancer. Oncotarget 2020, 11, 3782–3792. [Google Scholar] [CrossRef] [PubMed]
  35. Choi, J.H.; Kim, H.A.; Kim, W.; Lim, I.; Lee, I.; Byun, B.H.; Noh, W.C.; Seong, M.; Lee, S.; Kim, B.I.; et al. Early Prediction of Neoadjuvant Chemotherapy Response for Advanced Breast Cancer Using PET/MRI Image Deep Learning. Sci. Rep. 2020, 10, 21149. [Google Scholar] [CrossRef]
36. Cattell, R.F.; Kang, J.J.; Ren, T.; Huang, P.B.; Muttreja, A.; Dacosta, S.; Li, H.; Baer, L.; Clouston, S.; Palermo, R.; et al. MRI Volume Changes of Axillary Lymph Nodes as Predictor of Pathologic Complete Responses to Neoadjuvant Chemotherapy in Breast Cancer. Clin. Breast Cancer 2020, 20, 68–79.e1. [Google Scholar] [CrossRef] [PubMed]
  37. Cain, E.H.; Saha, A.; Harowicz, M.R.; Marks, J.R.; Marcom, P.K.; Mazurowski, M.A. Multivariate Machine Learning Models for Prediction of Pathologic Response to Neoadjuvant Therapy in Breast Cancer Using MRI Features: A Study Using an Independent Validation Set. Breast Cancer Res. Treat. 2019, 173, 455–463. [Google Scholar] [CrossRef]
  38. Xiong, Q.; Zhou, X.; Liu, Z.; Lei, C.; Yang, C.; Yang, M.; Zhang, L.; Zhu, T.; Zhuang, X.; Liang, C.; et al. Multiparametric MRI-Based Radiomics Analysis for Prediction of Breast Cancers Insensitive to Neoadjuvant Chemotherapy. Clin. Transl. Oncol. 2020, 22, 50–59. [Google Scholar] [CrossRef] [PubMed]
  39. Braman, N.M.; Etesami, M.; Prasanna, P.; Dubchuk, C.; Gilmore, H.; Tiwari, P.; Plecha, D.; Madabhushi, A. Intratumoral and Peritumoral Radiomics for the Pretreatment Prediction of Pathological Complete Response to Neoadjuvant Chemotherapy Based on Breast DCE-MRI. Breast Cancer Res. 2017, 19, 57. [Google Scholar] [CrossRef] [Green Version]
  40. Bitencourt, A.G.V.; Gibbs, P.; Rossi Saccarelli, C.; Daimiel, I.; Lo Gullo, R.; Fox, M.J.; Thakur, S.; Pinker, K.; Morris, E.A.; Morrow, M.; et al. MRI-Based Machine Learning Radiomics Can Predict HER2 Expression Level and Pathologic Response after Neoadjuvant Therapy in HER2 Overexpressing Breast Cancer. EBioMedicine 2020, 61, 103042. [Google Scholar] [CrossRef]
  41. Bian, T.; Wu, Z.; Lin, Q.; Wang, H.; Ge, Y.; Duan, S.; Fu, G.; Cui, C.; Su, X. Radiomic Signatures Derived from Multiparametric MRI for the Pretreatment Prediction of Response to Neoadjuvant Chemotherapy in Breast Cancer. Br. J. Radiol. 2020, 93, 20200287. [Google Scholar] [CrossRef] [PubMed]
  42. Altoe, M.L.; Kalinsky, K.; Marone, A.; Kim, H.K.; Guo, H.; Hibshoosh, H.; Tejada, M.; Crew, K.D.; Accordino, M.K.; Trivedi, M.S.; et al. Changes in Diffuse Optical Tomography Images During Early Stages of Neoadjuvant Chemotherapy Correlate with Tumor Response in Different Breast Cancer Subtypes. Clin. Cancer Res 2021, 27, 1949–1957. [Google Scholar] [CrossRef] [PubMed]
  43. Tahmassebi, A.; Wengert, G.J.; Helbich, T.H.; Bago-Horvath, Z.; Alaei, S.; Bartsch, R.; Dubsky, P.; Baltzer, P.; Clauser, P.; Kapetas, P.; et al. Impact of Machine Learning With Multiparametric Magnetic Resonance Imaging of the Breast for Early Prediction of Response to Neoadjuvant Chemotherapy and Survival Outcomes in Breast Cancer Patients. Investig. Radiol. 2019, 54, 110–117. [Google Scholar] [CrossRef] [PubMed]
  44. Taghipour, M.; Wray, R.; Sheikhbahaei, S.; Wright, J.L.; Subramaniam, R.M. FDG Avidity and Tumor Burden: Survival Outcomes for Patients With Recurrent Breast Cancer. AJR. Am. J. Roentgenol. 2016, 206, 846–855. [Google Scholar] [CrossRef]
  45. Shia, W.-C.; Huang, Y.-L.; Wu, H.-K.; Chen, D.-R. Using Flow Characteristics in Three-Dimensional Power Doppler Ultrasound Imaging to Predict Complete Responses in Patients Undergoing Neoadjuvant Chemotherapy. J. Ultrasound Med. 2017, 36, 887–900. [Google Scholar] [CrossRef] [Green Version]
  46. O’Flynn, E.A.M.; Collins, D.; D’Arcy, J.; Schmidt, M.; de Souza, N.M. Multi-Parametric MRI in the Early Prediction of Response to Neo-Adjuvant Chemotherapy in Breast Cancer: Value of Non-Modelled Parameters. Eur. J. Radiol. 2016, 85, 837–842. [Google Scholar] [CrossRef] [Green Version]
  47. Lo, W.-C.; Li, W.; Jones, E.F.; Newitt, D.C.; Kornak, J.; Wilmes, L.J.; Esserman, L.J.; Hylton, N.M. Effect of Imaging Parameter Thresholds on MRI Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer Subtypes. PLoS ONE 2016, 11, e0142047. [Google Scholar] [CrossRef] [Green Version]
  48. Liu, Z.; Li, Z.; Qu, J.; Zhang, R.; Zhou, X.; Li, L.; Sun, K.; Tang, Z.; Jiang, H.; Li, H.; et al. Radiomics of Multiparametric MRI for Pretreatment Prediction of Pathologic Complete Response to Neoadjuvant Chemotherapy in Breast Cancer: A Multicenter Study. Clin. Cancer Res. 2019, 25, 3538–3547. [Google Scholar] [CrossRef] [Green Version]
  49. van Griethuysen, J.J.M.; Lambregts, D.M.J.; Trebeschi, S.; Lahaye, M.J.; Bakers, F.C.H.; Vliegen, R.F.A.; Beets, G.L.; Aerts, H.J.W.L.; Beets-Tan, R.G.H. Radiomics Performs Comparable to Morphologic Assessment by Expert Radiologists for Prediction of Response to Neoadjuvant Chemoradiotherapy on Baseline Staging MRI in Rectal Cancer. Abdom. Radiol. 2020, 45, 632–643. [Google Scholar] [CrossRef]
  50. Schurink, N.W.; Min, L.A.; Berbee, M.; van Elmpt, W.; van Griethuysen, J.J.M.; Bakers, F.C.H.; Roberti, S.; van Kranen, S.R.; Lahaye, M.J.; Maas, M.; et al. Value of Combined Multiparametric MRI and FDG-PET/CT to Identify Well-Responding Rectal Cancer Patients before the Start of Neoadjuvant Chemoradiation. Eur. Radiol. 2020, 30, 2945–2954. [Google Scholar] [CrossRef]
  51. Liang, C.-Y.; Chen, M.-D.; Zhao, X.-X.; Yan, C.-G.; Mei, Y.-J.; Xu, Y.-K. Multiple Mathematical Models of Diffusion-Weighted Magnetic Resonance Imaging Combined with Prognostic Factors for Assessing the Response to Neoadjuvant Chemotherapy and Radiation Therapy in Locally Advanced Rectal Cancer. Eur. J. Radiol. 2019, 110, 249–255. [Google Scholar] [CrossRef]
  52. Delli Pizzi, A.; Chiarelli, A.M.; Chiacchiaretta, P.; d’Annibale, M.; Croce, P.; Rosa, C.; Mastrodicasa, D.; Trebeschi, S.; Lambregts, D.M.J.; Caposiena, D.; et al. MRI-Based Clinical-Radiomics Model Predicts Tumor Response before Treatment in Locally Advanced Rectal Cancer. Sci. Rep. 2021, 11, 5379. [Google Scholar] [CrossRef]
  53. Bulens, P.; Couwenberg, A.; Haustermans, K.; Debucquoy, A.; Vandecaveye, V.; Philippens, M.; Zhou, M.; Gevaert, O.; Intven, M. Development and Validation of an MRI-Based Model to Predict Response to Chemoradiotherapy for Rectal Cancer. Radiother. Oncol 2018, 126, 437–442. [Google Scholar] [CrossRef]
  54. Zhuang, Z.; Liu, Z.; Li, J.; Wang, X.; Xie, P.; Xiong, F.; Hu, J.; Meng, X.; Huang, M.; Deng, Y.; et al. Radiomic Signature of the FOWARC Trial Predicts Pathological Response to Neoadjuvant Treatment in Rectal Cancer. J. Transl. Med. 2021, 19, 256. [Google Scholar] [CrossRef]
  55. Shaish, H.; Aukerman, A.; Vanguri, R.; Spinelli, A.; Armenta, P.; Jambawalikar, S.; Makkar, J.; Bentley-Hibbert, S.; Del Portillo, A.; Kiran, R.; et al. Radiomics of MRI for Pretreatment Prediction of Pathologic Complete Response, Tumor Regression Grade, and Neoadjuvant Rectal Score in Patients with Locally Advanced Rectal Cancer Undergoing Neoadjuvant Chemoradiation: An International Multicenter Study. Eur. Radiol. 2020, 30, 6263–6273. [Google Scholar] [CrossRef]
  56. Liu, Y.; Zhang, F.-J.; Zhao, X.-X.; Yang, Y.; Liang, C.-Y.; Feng, L.-L.; Wan, X.-B.; Ding, Y.; Zhang, Y.-W. Development of a Joint Prediction Model Based on Both the Radiomics and Clinical Factors for Predicting the Tumor Response to Neoadjuvant Chemoradiotherapy in Patients with Locally Advanced Rectal Cancer. Cancer Manag. Res. 2021, 13, 3235–3246. [Google Scholar] [CrossRef]
  57. Bibault, J.; Giraud, P.; Housset, M.; Durdux, C.; Taieb, J.; Berger, A.; Coriat, R.; Chaussade, S.; Dousset, B.; Nordlinger, B.; et al. Deep Learning and Radiomics Predict Complete Response after Neo-Adjuvant Chemoradiation for Locally Advanced Rectal Cancer. Sci. Rep. 2018, 8, 12611. [Google Scholar] [CrossRef]
  58. Meng, Y.; Zhang, Y.; Dong, D.; Li, C.; Liang, X.; Zhang, C.; Wan, L.; Zhao, X.; Xu, K.; Zhou, C.; et al. Novel Radiomic Signature as a Prognostic Biomarker for Locally Advanced Rectal Cancer. J. Magn. Reson. Imaging 2018, 48, 605–614. [Google Scholar] [CrossRef]
  59. Schurink, N.W.; van Kranen, S.R.; Berbee, M.; van Elmpt, W.; Bakers, F.C.H.; Roberti, S.; van Griethuysen, J.J.M.; Min, L.A.; Lahaye, M.J.; Maas, M.; et al. Studying Local Tumour Heterogeneity on MRI and FDG-PET/CT to Predict Response to Neoadjuvant Chemoradiotherapy in Rectal Cancer. Eur. Radiol. 2021, 31, 7031–7038. [Google Scholar] [CrossRef]
  60. Shi, L.; Zhang, Y.; Nie, K.; Sun, X.; Niu, T.; Yue, N.; Kwong, T.; Chang, P.; Chow, D.; Chen, J.-H.; et al. Machine Learning for Prediction of Chemoradiation Therapy Response in Rectal Cancer Using Pre-Treatment and Mid-Radiation Multi-Parametric MRI. Magn. Reson. Imaging 2019, 61, 33–40. [Google Scholar] [CrossRef]
  61. Wan, L.; Zhang, C.; Zhao, Q.; Meng, Y.; Zou, S.; Yang, Y.; Liu, Y.; Jiang, J.; Ye, F.; Ouyang, H.; et al. Developing a Prediction Model Based on MRI for Pathological Complete Response after Neoadjuvant Chemoradiotherapy in Locally Advanced Rectal Cancer. Abdom. Radiol. 2019, 44, 2978–2987. [Google Scholar] [CrossRef]
  62. Liu, Z.; Zhang, X.-Y.; Shi, Y.-J.; Wang, L.; Zhu, H.-T.; Tang, Z.; Wang, S.; Li, X.-T.; Tian, J.; Sun, Y.-S. Radiomics Analysis for Evaluation of Pathological Complete Response to Neoadjuvant Chemoradiotherapy in Locally Advanced Rectal Cancer. Clin. Cancer Res. 2017, 23, 7253–7262. [Google Scholar] [CrossRef] [Green Version]
  63. Kassam, Z.; Burgers, K.; Walsh, J.C.; Lee, T.Y.; Leong, H.S.; Fisher, B. A Prospective Feasibility Study Evaluating the Role of Multimodality Imaging and Liquid Biopsy for Response Assessment in Locally Advanced Rectal Carcinoma. Abdom. Radiol. 2019, 44, 3641–3651. [Google Scholar] [CrossRef]
  64. Zheng, Y.; Huang, Y.; Bi, G.; Chen, Z.; Lu, T.; Xu, S.; Zhan, C.; Wang, Q. Enlarged Mediastinal Lymph Nodes in Computed Tomography Are a Valuable Prognostic Factor in Non-Small Cell Lung Cancer Patients with Pathologically Negative Lymph Nodes. Cancer Manag. Res. 2020, 12, 10875–10886. [Google Scholar] [CrossRef]
  65. Zhang, N.; Liang, R.; Gensheimer, M.F.; Guo, M.; Zhu, H.; Yu, J.; Diehn, M.; Loo, B.W.J.; Li, R.; Wu, J. Early Response Evaluation Using Primary Tumor and Nodal Imaging Features to Predict Progression-Free Survival of Locally Advanced Non-Small Cell Lung Cancer. Theranostics 2020, 10, 11707–11718. [Google Scholar] [CrossRef]
66. van Timmeren, J.E.; Leijenaar, R.T.H.; van Elmpt, W.; Reymen, B.; Oberije, C.; Monshouwer, R.; Bussink, J.; Brink, C.; Hansen, O.; Lambin, P. Survival Prediction of Non-Small Cell Lung Cancer Patients Using Radiomics Analyses of Cone-Beam CT Images. Radiother. Oncol. 2017, 123, 363–369. [Google Scholar] [CrossRef] [Green Version]
  67. Tunali, I.; Gray, J.E.; Qi, J.; Abdalah, M.; Jeong, D.K.; Guvenis, A.; Gillies, R.J.; Schabath, M.B. Novel Clinical and Radiomic Predictors of Rapid Disease Progression Phenotypes among Lung Cancer Patients Treated with Immunotherapy: An Early Report. Lung Cancer 2019, 129, 75–79. [Google Scholar] [CrossRef]
68. Trebeschi, S.; Drago, S.G.; Birkbak, N.J.; Kurilova, I.; Călin, A.M.; Delli Pizzi, A.; Lalezari, F.; Lambregts, D.M.J.; Rohaan, M.W.; Parmar, C.; et al. Predicting Response to Cancer Immunotherapy Using Noninvasive Radiomic Biomarkers. Ann. Oncol. 2019, 30, 998–1004. [Google Scholar] [CrossRef] [Green Version]
  69. Sun, W.; Jiang, M.; Dang, J.; Chang, P.; Yin, F.-F. Effect of Machine Learning Methods on Predicting NSCLC Overall Survival Time Based on Radiomics Analysis. Radiat. Oncol. 2018, 13, 197. [Google Scholar] [CrossRef] [Green Version]
  70. Steiger, S.; Arvanitakis, M.; Sick, B.; Weder, W.; Hillinger, S.; Burger, I.A. Analysis of Prognostic Values of Various PET Metrics in Preoperative (18)F-FDG PET for Early-Stage Bronchial Carcinoma for Progression-Free and Overall Survival: Significantly Increased Glycolysis Is a Predictive Factor. J. Nucl. Med. 2017, 58, 1925–1930. [Google Scholar] [CrossRef] [Green Version]
  71. Soufi, M.; Arimura, H.; Nagami, N. Identification of Optimal Mother Wavelets in Survival Prediction of Lung Cancer Patients Using Wavelet Decomposition-Based Radiomic Features. Med. Phys. 2018, 45, 5116–5128. [Google Scholar] [CrossRef]
  72. Sharma, A.; Mohan, A.; Bhalla, A.S.; Sharma, M.C.; Vishnubhatla, S.; Das, C.J.; Pandey, A.K.; Sekhar Bal, C.; Patel, C.D.; Sharma, P.; et al. Role of Various Metabolic Parameters Derived From Baseline 18F-FDG PET/CT as Prognostic Markers in Non-Small Cell Lung Cancer Patients Undergoing Platinum-Based Chemotherapy. Clin. Nucl. Med. 2018, 43, e8–e17. [Google Scholar] [CrossRef]
  73. Seban, R.-D.; Mezquita, L.; Berenbaum, A.; Dercle, L.; Botticella, A.; Le Pechoux, C.; Caramella, C.; Deutsch, E.; Grimaldi, S.; Adam, J.; et al. Baseline Metabolic Tumor Burden on FDG PET/CT Scans Predicts Outcome in Advanced NSCLC Patients Treated with Immune Checkpoint Inhibitors. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 1147–1157. [Google Scholar] [CrossRef]
  74. Seban, R.-D.; Assie, J.-B.; Giroux-Leprieur, E.; Massiani, M.-A.; Soussan, M.; Bonardel, G.; Chouaid, C.; Playe, M.; Goldfarb, L.; Duchemann, B.; et al. FDG-PET Biomarkers Associated with Long-Term Benefit from First-Line Immunotherapy in Patients with Advanced Non-Small Cell Lung Cancer. Ann. Nucl. Med. 2020, 34, 968–974. [Google Scholar] [CrossRef]
  75. Pellegrino, S.; Fonti, R.; Mazziotti, E.; Piccin, L.; Mozzillo, E.; Damiano, V.; Matano, E.; De Placido, S.; Del Vecchio, S. Total Metabolic Tumor Volume by 18F-FDG PET/CT for the Prediction of Outcome in Patients with Non-Small Cell Lung Cancer. Ann. Nucl. Med. 2019, 33, 937–944. [Google Scholar] [CrossRef]
  76. Yu, W.; Tang, C.; Hobbs, B.P.; Li, X.; Koay, E.J.; Wistuba, I.I.; Sepesi, B.; Behrens, C.; Rodriguez Canales, J.; Parra Cuentas, E.R.; et al. Development and Validation of a Predictive Radiomics Model for Clinical Outcomes in Stage I Non-Small Cell Lung Cancer. Int. J. Radiat. Oncol. Biol. Phys. 2018, 102, 1090–1097. [Google Scholar] [CrossRef]
77. Park, S.; Ha, S.; Lee, S.-H.; Paeng, J.C.; Keam, B.; Kim, T.M.; Kim, D.-W.; Heo, D.S. Intratumoral Heterogeneity Characterized by Pretreatment PET in Non-Small Cell Lung Cancer Patients Predicts Progression-Free Survival on EGFR Tyrosine Kinase Inhibitor. PLoS ONE 2018, 13, e0189766. [Google Scholar] [CrossRef]
78. Oberije, C.; De Ruysscher, D.; Houben, R.; van de Heuvel, M.; Uyterlinde, W.; Deasy, J.O.; Belderbos, J.; Dingemans, A.-M.C.; Rimner, A.; Din, S.; et al. A Validated Prediction Model for Overall Survival From Stage III Non-Small Cell Lung Cancer: Toward Survival Prediction for Individual Patients. Int. J. Radiat. Oncol. Biol. Phys. 2015, 92, 935–944. [Google Scholar] [CrossRef] [Green Version]
79. Lou, B.; Doken, S.; Zhuang, T.; Wingerter, D.; Gidwani, M.; Mistry, N.; Ladic, L.; Kamen, A.; Abazeed, M.E. An Image-Based Deep Learning Framework for Individualizing Radiotherapy Dose. Lancet Digit. Health 2019, 1, e136–e147. [Google Scholar] [CrossRef] [Green Version]
  80. Li, H.; Galperin-Aizenberg, M.; Pryma, D.; Simone, C.B., 2nd; Fan, Y. Unsupervised Machine Learning of Radiomic Features for Predicting Treatment Response and Overall Survival of Early Stage Non-Small Cell Lung Cancer Patients Treated with Stereotactic Body Radiation Therapy. Radiother. Oncol. 2018, 129, 218–226. [Google Scholar] [CrossRef]
  81. Li, H.; Zhang, R.; Wang, S.; Fang, M.; Zhu, Y.; Hu, Z.; Dong, D.; Shi, J.; Tian, J. CT-Based Radiomic Signature as a Prognostic Factor in Stage IV ALK-Positive Non-Small-Cell Lung Cancer Treated With TKI Crizotinib: A Proof-of-Concept Study. Front. Oncol. 2020, 10, 57. [Google Scholar] [CrossRef] [Green Version]
82. Lee, J.H.; Lee, H.Y.; Ahn, M.-J.; Park, K.; Ahn, J.S.; Sun, J.-M.; Lee, K.S. Volume-Based Growth Tumor Kinetics as a Prognostic Biomarker for Patients with EGFR Mutant Lung Adenocarcinoma Undergoing EGFR Tyrosine Kinase Inhibitor Therapy: A Case Control Study. Cancer Imaging 2016, 16, 5. [Google Scholar] [CrossRef] [Green Version]
  83. Kirienko, M.; Cozzi, L.; Antunovic, L.; Lozza, L.; Fogliata, A.; Voulaz, E.; Rossi, A.; Chiti, A.; Sollini, M. Prediction of Disease-Free Survival by the PET/CT Radiomic Signature in Non-Small Cell Lung Cancer Patients Undergoing Surgery. Eur. J. Nucl. Med. Mol. Imaging 2018, 45, 207–217. [Google Scholar] [CrossRef]
  84. Kim, D.-H.; Jung, J.-H.; Son, S.H.; Kim, C.-Y.; Hong, C.M.; Oh, J.-R.; Jeong, S.Y.; Lee, S.-W.; Lee, J.; Ahn, B.-C. Prognostic Significance of Intratumoral Metabolic Heterogeneity on 18F-FDG PET/CT in Pathological N0 Non-Small Cell Lung Cancer. Clin. Nucl. Med. 2015, 40, 708–714. [Google Scholar] [CrossRef] [PubMed]
  85. Khorrami, M.; Prasanna, P.; Gupta, A.; Patil, P.; Velu, P.D.; Thawani, R.; Corredor, G.; Alilou, M.; Bera, K.; Fu, P.; et al. Changes in CT Radiomic Features Associated with Lymphocyte Distribution Predict Overall Survival and Response to Immunotherapy in Non-Small Cell Lung Cancer. Cancer Immunol. Res. 2020, 8, 108–119. [Google Scholar] [CrossRef] [PubMed]
  86. Khorrami, M.; Khunger, M.; Zagouras, A.; Patil, P.; Thawani, R.; Bera, K.; Rajiah, P.; Fu, P.; Velcheti, V.; Madabhushi, A. Combination of Peri- and Intratumoral Radiomic Features on Baseline CT Scans Predicts Response to Chemotherapy in Lung Adenocarcinoma. Radiol. Artif. Intell. 2019, 1, 180012. [Google Scholar] [CrossRef] [PubMed]
  87. Yousefi, B.; LaRiviere, M.J.; Cohen, E.A.; Buckingham, T.H.; Yee, S.S.; Black, T.A.; Chien, A.L.; Noël, P.; Hwang, W.-T.; Katz, S.I.; et al. Combining Radiomic Phenotypes of Non-Small Cell Lung Cancer with Liquid Biopsy Data May Improve Prediction of Response to EGFR Inhibitors. Sci. Rep. 2021, 11, 9984. [Google Scholar] [CrossRef]
  88. Khorrami, M.; Jain, P.; Bera, K.; Alilou, M.; Thawani, R.; Patil, P.; Ahmad, U.; Murthy, S.; Stephans, K.; Fu, P.; et al. Predicting Pathologic Response to Neoadjuvant Chemoradiation in Resectable Stage III Non-Small Cell Lung Cancer Patients Using Computed Tomography Radiomic Features. Lung Cancer 2019, 135, 1–9. [Google Scholar] [CrossRef]
  89. Kamiya, S.; Iwano, S.; Umakoshi, H.; Ito, R.; Shimamoto, H.; Nakamura, S.; Naganawa, S. Computer-Aided Volumetry of Part-Solid Lung Cancers by Using CT: Solid Component Size Predicts Prognosis. Radiology 2018, 287, 1030–1040. [Google Scholar] [CrossRef] [Green Version]
  90. Kakino, R.; Nakamura, M.; Mitsuyoshi, T.; Shintani, T.; Kokubo, M.; Negoro, Y.; Fushiki, M.; Ogura, M.; Itasaka, S.; Yamauchi, C.; et al. Application and Limitation of Radiomics Approach to Prognostic Prediction for Lung Stereotactic Body Radiotherapy Using Breath-Hold CT Images with Random Survival Forest: A Multi-Institutional Study. Med. Phys. 2020, 47, 4634–4643. [Google Scholar] [CrossRef]
91. Jiao, Z.; Li, H.; Xiao, Y.; Aggarwal, C.; Galperin-Aizenberg, M.; Pryma, D.; Simone, C.B., 2nd; Feigenberg, S.J.; Kao, G.D.; Fan, Y. Integration of Risk Survival Measures Estimated From Pre- and Posttreatment Computed Tomography Scans Improves Stratification of Patients With Early-Stage Non-Small Cell Lung Cancer Treated With Stereotactic Body Radiation Therapy. Int. J. Radiat. Oncol. Biol. Phys. 2021, 109, 1647–1656. [Google Scholar] [CrossRef]
92. Hyun, S.H.; Ahn, H.K.; Ahn, M.-J.; Ahn, Y.C.; Kim, J.; Shim, Y.M.; Choi, J.Y. Volume-Based Assessment With 18F-FDG PET/CT Improves Outcome Prediction for Patients With Stage IIIA-N2 Non-Small Cell Lung Cancer. AJR Am. J. Roentgenol. 2015, 205, 623–628. [Google Scholar] [CrossRef]
  93. Du, Q.; Baine, M.; Bavitz, K.; McAllister, J.; Liang, X.; Yu, H.; Ryckman, J.; Yu, L.; Jiang, H.; Zhou, S.; et al. Radiomic Feature Stability across 4D Respiratory Phases and Its Impact on Lung Tumor Prognosis Prediction. PLoS ONE 2019, 14, e0216480. [Google Scholar] [CrossRef]
  94. Domachevsky, L.; Groshar, D.; Galili, R.; Saute, M.; Bernstine, H. Survival Prognostic Value of Morphological and Metabolic Variables in Patients with Stage I and II Non-Small Cell Lung Cancer. Eur. Radiol. 2015, 25, 3361–3367. [Google Scholar] [CrossRef] [PubMed]
95. Cui, S.; Ten Haken, R.K.; El Naqa, I. Integrating Multiomics Information in Deep Learning Architectures for Joint Actuarial Outcome Prediction in Non-Small Cell Lung Cancer Patients After Radiation Therapy. Int. J. Radiat. Oncol. Biol. Phys. 2021, 110, 893–904. [Google Scholar] [CrossRef] [PubMed]
  96. Choe, J.; Lee, S.M.; Do, K.-H.; Lee, J.B.; Lee, S.M.; Lee, J.-G.; Seo, J.B. Prognostic Value of Radiomic Analysis of Iodine Overlay Maps from Dual-Energy Computed Tomography in Patients with Resectable Lung Cancer. Eur. Radiol. 2019, 29, 915–923. [Google Scholar] [CrossRef]
  97. Buizza, G.; Toma-Dasu, I.; Lazzeroni, M.; Paganelli, C.; Riboldi, M.; Chang, Y.; Smedby, Ö.; Wang, C. Early Tumor Response Prediction for Lung Cancer Patients Using Novel Longitudinal Pattern Features from Sequential PET/CT Image Scans. Phys. Med. 2018, 54, 21–29. [Google Scholar] [CrossRef]
  98. Yossi, S.; Krhili, S.; Muratet, J.-P.; Septans, A.-L.; Campion, L.; Denis, F. Early Assessment of Metabolic Response by 18F-FDG PET during Concomitant Radiochemotherapy of Non-Small Cell Lung Carcinoma Is Associated with Survival: A Retrospective Single-Center Study. Clin. Nucl. Med. 2015, 40, e215–e221. [Google Scholar] [CrossRef] [PubMed]
  99. Blanc-Durand, P.; Campedel, L.; Mule, S.; Jegou, S.; Luciani, A.; Pigneur, F.; Itti, E. Prognostic Value of Anthropometric Measures Extracted from Whole-Body CT Using Deep Learning in Patients with Non-Small-Cell Lung Cancer. Eur. Radiol. 2020, 30, 3528–3537. [Google Scholar] [CrossRef] [PubMed]
100. Bak, S.H.; Park, H.; Sohn, I.; Lee, S.H.; Ahn, M.-J.; Lee, H.Y. Prognostic Impact of Longitudinal Monitoring of Radiomic Features in Patients with Advanced Non-Small Cell Lung Cancer. Sci. Rep. 2019, 9, 8730. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  101. Astaraki, M.; Wang, C.; Buizza, G.; Toma-Dasu, I.; Lazzeroni, M.; Smedby, Ö. Early Survival Prediction in Non-Small Cell Lung Cancer from PET/CT Images Using an Intra-Tumor Partitioning Method. Phys. Med. 2019, 60, 58–65. [Google Scholar] [CrossRef]
  102. Ahn, H.K.; Lee, H.; Kim, S.G.; Hyun, S.H. Pre-Treatment (18)F-FDG PET-Based Radiomics Predict Survival in Resected Non-Small Cell Lung Cancer. Clin. Radiol. 2019, 74, 467–473. [Google Scholar] [CrossRef]
  103. Yang, D.M.; Palma, D.A.; Kwan, K.; Louie, A.V.; Malthaner, R.; Fortin, D.; Rodrigues, G.B.; Yaremko, B.P.; Laba, J.; Gaede, S.; et al. Predicting Pathological Complete Response (PCR) after Stereotactic Ablative Radiation Therapy (SABR) of Lung Cancer Using Quantitative Dynamic [(18)F]FDG PET and CT Perfusion: A Prospective Exploratory Clinical Study. Radiat. Oncol. 2021, 16, 11. [Google Scholar] [CrossRef] [PubMed]
  104. Yan, M.; Wang, W. Radiomic Analysis of CT Predicts Tumor Response in Human Lung Cancer with Radiotherapy. J. Digit. Imaging 2020, 33, 1401–1403. [Google Scholar] [CrossRef]
  105. Wu, J.; Lian, C.; Ruan, S.; Mazur, T.R.; Mutic, S.; Anastasio, M.A.; Grigsby, P.W.; Vera, P.; Li, H. Treatment Outcome Prediction for Cancer Patients Based on Radiomics and Belief Function Theory. IEEE Trans. Radiat. Plasma Med. Sci. 2019, 3, 216–224. [Google Scholar] [CrossRef] [PubMed]
  106. Wang, X.-Y.; Zhao, Y.-F.; Liu, Y.; Yang, Y.-K.; Wu, N. Prognostic Value of Metabolic Variables of [18F]FDG PET/CT in Surgically Resected Stage I Lung Adenocarcinoma. Medicine 2017, 96, e7941. [Google Scholar] [CrossRef] [PubMed]
  107. Wang, L.; Dong, T.; Xin, B.; Xu, C.; Guo, M.; Zhang, H.; Feng, D.; Wang, X.; Yu, J. Integrative Nomogram of CT Imaging, Clinical, and Hematological Features for Survival Prediction of Patients with Locally Advanced Non-Small Cell Lung Cancer. Eur. Radiol. 2019, 29, 2958–2967. [Google Scholar] [CrossRef]
  108. Kickingereder, P.; Burth, S.; Wick, A.; Götz, M.; Eidel, O.; Schlemmer, H.P.; Maier-Hein, K.H.; Wick, W.; Bendszus, M.; Radbruch, A.; et al. Radiomic Profiling of Glioblastoma: Identifying an Imaging Predictor of Patient Survival with Improved Performance over Established Clinical and Radiologic Risk Models. Radiology 2016, 280, 880–889. [Google Scholar] [CrossRef] [Green Version]
  109. Chaddad, A.; Desrosiers, C.; Hassan, L.; Tanougast, C. A Quantitative Study of Shape Descriptors from Glioblastoma Multiforme Phenotypes for Predicting Survival Outcome. Br. J. Radiol. 2016, 89, 20160575. [Google Scholar] [CrossRef] [PubMed] [Green Version]
110. Shboul, Z.A.; Alam, M.; Vidyaratne, L.; Pei, L.; Elbakary, M.I.; Iftekharuddin, K.M. Feature-Guided Deep Radiomics for Glioblastoma Patient Survival Prediction. Front. Neurosci. 2019, 13, 966. [Google Scholar] [CrossRef]
  111. Chaddad, A.; Daniel, P.; Sabri, S.; Desrosiers, C.; Abdulkarim, B. Integration of Radiomic and Multi-Omic Analyses Predicts Survival of Newly Diagnosed IDH1 Wild-Type Glioblastoma. Cancers 2019, 11, 1148. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  112. Vils, A.; Bogowicz, M.; Tanadini-Lang, S.; Vuong, D.; Saltybaeva, N.; Kraft, J.; Wirsching, H.G.; Gramatzki, D.; Wick, W.; Rushing, E.; et al. Radiomic Analysis to Predict Outcome in Recurrent Glioblastoma Based on Multi-Center MR Imaging From the Prospective DIRECTOR Trial. Front. Oncol. 2021, 11. [Google Scholar] [CrossRef] [PubMed]
  113. Ingrisch, M.; Schneider, M.J.; Nörenberg, D.; De Figueiredo, G.N.; Maier-Hein, K.; Suchorska, B.; Schüller, U.; Albert, N.; Brückmann, H.; Reiser, M.; et al. Radiomic Analysis Reveals Prognostic Information in T1-Weighted Baseline Magnetic Resonance Imaging in Patients With Glioblastoma. Investig. Radiol. 2017, 52, 360–366. [Google Scholar] [CrossRef]
  114. Sanghani, P.; Ang, B.T.; King, N.K.K.; Ren, H. Regression Based Overall Survival Prediction of Glioblastoma Multiforme Patients Using a Single Discovery Cohort of Multi-Institutional Multi-Channel MR Images. Med. Biol. Eng. Comput. 2019, 57, 1683–1691. [Google Scholar] [CrossRef] [PubMed]
  115. Kickingereder, P.; Götz, M.; Muschelli, J.; Wick, A.; Neuberger, U.; Shinohara, R.T.; Sill, M.; Nowosielski, M.; Schlemmer, H.-P.; Radbruch, A.; et al. Large-Scale Radiomic Profiling of Recurrent Glioblastoma Identifies an Imaging Predictor for Stratifying Anti-Angiogenic Treatment Response. Clin. Cancer Res. 2016, 22, 5765–5771. [Google Scholar] [CrossRef] [PubMed] [Green Version]
116. Fathi Kazerooni, A.; Akbari, H.; Shukla, G.; Badve, C.; Rudie, J.D.; Sako, C.; Rathore, S.; Bakas, S.; Pati, S.; Singh, A.; et al. Cancer Imaging Phenomics via CaPTk: Multi-Institutional Prediction of Progression-Free Survival and Pattern of Recurrence in Glioblastoma. JCO Clin. Cancer Inform. 2020, 4, 234–244. [Google Scholar] [CrossRef] [PubMed]
  117. Bakas, S.; Shukla, G.; Akbari, H.; Erus, G.; Sotiras, A.; Rathore, S.; Sako, C.; Ha, S.M.; Rozycki, M.; Singh, A.; et al. Integrative Radiomic Analysis for Pre-Surgical Prognostic Stratification of Glioblastoma Patients: From Advanced to Basic MRI Protocols. Proc. SPIE Int. Soc. Opt. Eng. 2020, 11315, 112. [Google Scholar] [CrossRef]
  118. Ferguson, S.D.; Hodges, T.R.; Majd, N.K.; Alfaro-Munoz, K.; Al-Holou, W.N.; Suki, D.; de Groot, J.F.; Fuller, G.N.; Xue, L.; Li, M.; et al. A Validated Integrated Clinical and Molecular Glioblastoma Long-Term Survival-Predictive Nomogram. Neuro-Oncol. Adv. 2021, 3, vdaa146. [Google Scholar] [CrossRef] [PubMed]
  119. Pérez-Beteta, J.; Molina-García, D.; Martínez-González, A.; Henares-Molina, A.; Amo-Salas, M.; Luque, B.; Arregui, E.; Calvo, M.; Borrás, J.M.; Martino, J.; et al. Morphological MRI-Based Features Provide Pretreatment Survival Prediction in Glioblastoma. Eur. Radiol. 2019, 29, 1968–1977. [Google Scholar] [CrossRef]
  120. Li, Q.; Bai, H.; Chen, Y.; Sun, Q.; Liu, L.; Zhou, S.; Wang, G.; Liang, C.; Li, Z.C. A Fully-Automatic Multiparametric Radiomics Model: Towards Reproducible and Prognostic Imaging Signature for Prediction of Overall Survival in Glioblastoma Multiforme. Sci. Rep. 2017, 7, 14331. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  121. Kim, N.; Chang, J.S.; Wee, C.W.; Kim, I.A.; Chang, J.H.; Lee, H.S.; Kim, S.H.; Kang, S.-G.; Kim, E.H.; Yoon, H.I.; et al. Validation and Optimization of a Web-Based Nomogram for Predicting Survival of Patients with Newly Diagnosed Glioblastoma. Strahlenther. Und Onkol. 2020, 196, 58–69. [Google Scholar] [CrossRef]
  122. Patel, K.S.; Everson, R.G.; Yao, J.; Raymond, C.; Goldman, J.; Schlossman, J.; Tsung, J.; Tan, C.; Pope, W.B.; Ji, M.S.; et al. Diffusion Magnetic Resonance Imaging Phenotypes Predict Overall Survival Benefit From Bevacizumab or Surgery in Recurrent Glioblastoma With Large Tumor Burden. Neurosurgery 2020, 87, 931–938. [Google Scholar] [CrossRef] [PubMed]
  123. Chaddad, A.; Daniel, P.; Desrosiers, C.; Toews, M.; Abdulkarim, B. Novel Radiomic Features Based on Joint Intensity Matrices for Predicting Glioblastoma Patient Survival Time. IEEE J. Biomed. Heal. Inform. 2019, 23, 795–804. [Google Scholar] [CrossRef] [PubMed]
  124. Rathore, S.; Akbari, H.; Doshi, J.; Shukla, G.; Rozycki, M.; Bilello, M.; Lustig, R.; Davatzikos, C. Radiomic Signature of Infiltration in Peritumoral Edema Predicts Subsequent Recurrence in Glioblastoma: Implications for Personalized Radiotherapy Planning. J. Med. Imaging 2018, 5, 21219. [Google Scholar] [CrossRef] [PubMed]
  125. Chakhoyan, A.; Woodworth, D.C.; Harris, R.J.; Lai, A.; Nghiemphu, P.L.; Liau, L.M.; Pope, W.B.; Cloughesy, T.F.; Ellingson, B.M. Mono-Exponential, Diffusion Kurtosis and Stretched Exponential Diffusion MR Imaging Response to Chemoradiation in Newly Diagnosed Glioblastoma. J. Neurooncol. 2018, 139, 651–659. [Google Scholar] [CrossRef] [PubMed]
  126. Pérez-Beteta, J.; Martínez-González, A.; Molina, D.; Amo-Salas, M.; Luque, B.; Arregui, E.; Calvo, M.; Borrás, J.M.; López, C.; Claramonte, M.; et al. Glioblastoma: Does the Pre-Treatment Geometry Matter? A Postcontrast T1 MRI-Based Study. Eur. Radiol. 2017, 27, 1096–1104. [Google Scholar] [CrossRef] [PubMed]
  127. Sanghani, P.; Ang, B.T.; King, N.K.K.; Ren, H. Overall Survival Prediction in Glioblastoma Multiforme Patients from Volumetric, Shape and Texture Features Using Machine Learning. Surg. Oncol. 2018, 27, 709–714. [Google Scholar] [CrossRef] [PubMed]
  128. Chang, K.; Zhang, B.; Guo, X.; Zong, M.; Rahman, R.; Sanchez, D.; Winder, N.; Reardon, D.A.; Zhao, B.; Wen, P.Y.; et al. Multimodal Imaging Patterns Predict Survival in Recurrent Glioblastoma Patients Treated with Bevacizumab. Neuro. Oncol. 2016, 18, 1680–1687. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  129. Chaddad, A.; Sabri, S.; Niazi, T.; Abdulkarim, B. Prediction of Survival with Multi-Scale Radiomic Analysis in Glioblastoma Patients. Med. Biol. Eng. Comput. 2018, 56, 2287–2300. [Google Scholar] [CrossRef]
  130. Tan, Y.; Mu, W.; Wang, X.-C.; Yang, G.-Q.; Gillies, R.J.; Zhang, H. Improving Survival Prediction of High-Grade Glioma via Machine Learning Techniques Based on MRI Radiomic, Genetic and Clinical Risk Factors. Eur. J. Radiol. 2019, 120, 108609. [Google Scholar] [CrossRef]
  131. Kickingereder, P.; Neuberger, U.; Bonekamp, D.; Piechotta, P.L.; Götz, M.; Wick, A.; Sill, M.; Kratz, A.; Shinohara, R.T.; Jones, D.T.W.; et al. Radiomic Subtyping Improves Disease Stratification beyond Key Molecular, Clinical, and Standard Imaging Characteristics in Patients with Glioblastoma. Neuro. Oncol. 2018, 20, 848–857. [Google Scholar] [CrossRef] [PubMed]
  132. Pérez-Beteta, J.; Molina-García, D.; Ortiz-Alhambra, J.A.; Fernández-Romero, A.; Luque, B.; Arregui, E.; Calvo, M.; Borrás, J.M.; Meléndez, B.; Rodríguez de Lope, Á.; et al. Tumor Surface Regularity at MR Imaging Predicts Survival and Response to Surgery in Patients with Glioblastoma. Radiology 2018, 288, 218–225. [Google Scholar] [CrossRef] [PubMed]
133. Choi, Y.; Ahn, K.-J.; Nam, Y.; Jang, J.; Shin, N.-Y.; Choi, H.S.; Jung, S.-L.; Kim, B.-S. Analysis of Heterogeneity of Peritumoral T2 Hyperintensity in Patients with Pretreatment Glioblastoma: Prognostic Value of MRI-Based Radiomics. Eur. J. Radiol. 2019, 120, 108642. [Google Scholar] [CrossRef] [PubMed]
  134. Wijethilake, N.; Islam, M.; Ren, H. Radiogenomics Model for Overall Survival Prediction of Glioblastoma. Med. Biol. Eng. Comput. 2020, 58, 1767–1777. [Google Scholar] [CrossRef]
  135. Carles, M.; Popp, I.; Starke, M.M.M.; Mix, M.; Urbach, H.; Schimek-Jasch, T.; Eckert, F.; Niyazi, M.; Baltas, D.; Grosu, A.L.L. FET-PET Radiomics in Recurrent Glioblastoma: Prognostic Value for Outcome after Re-Irradiation? Radiat. Oncol. 2021, 16, 46. [Google Scholar] [CrossRef] [PubMed]
  136. Beig, N.; Bera, K.; Prasanna, P.; Antunes, J.; Correa, R.; Singh, S.; Saeed Bamashmos, A.; Ismail, M.; Braman, N.; Verma, R.; et al. Radiogenomic-Based Survival Risk Stratification of Tumor Habitat on Gd-T1w MRI Is Associated with Biological Processes in Glioblastoma. Clin. Cancer Res. 2020, 26, 1866–1876. [Google Scholar] [CrossRef] [PubMed] [Green Version]
137. Choi, Y.; Nam, Y.; Jang, J.; Shin, N.-Y.; Lee, Y.S.; Ahn, K.-J.; Kim, B.-S.; Park, J.-S.; Jeon, S.-S.; Hong, Y.G. Radiomics May Increase the Prognostic Value for Survival in Glioblastoma Patients When Combined with Conventional Clinical and Genetic Prognostic Models. Eur. Radiol. 2021, 31, 2084–2093. [Google Scholar] [CrossRef]
  138. Lundemann, M.; Munck Af Rosenschöld, P.; Muhic, A.; Larsen, V.A.; Poulsen, H.S.; Engelholm, S.A.; Andersen, F.L.; Kjær, A.; Larsson, H.B.W.; Law, I.; et al. Feasibility of Multi-Parametric PET and MRI for Prediction of Tumour Recurrence in Patients with Glioblastoma. Eur. J. Nucl. Med. Mol. Imaging 2019, 46, 603–613. [Google Scholar] [CrossRef]
  139. Horvat, N.; Rocha, C.C.T.; Oliveira, B.C.; Petkovska, I.; Gollub, M.J. MRI of Rectal Cancer: Tumor Staging, Imaging Techniques, and Management. Radiographics 2019, 39, 367–387. [Google Scholar] [CrossRef]
  140. Weller, M.; van den Bent, M.; Preusser, M.; Le Rhun, E.; Tonn, J.C.; Minniti, G.; Bendszus, M.; Balana, C.; Chinot, O.; Dirven, L.; et al. EANO Guidelines on the Diagnosis and Treatment of Diffuse Gliomas of Adulthood. Nat. Rev. Clin. Oncol. 2021, 18, 170–186. [Google Scholar] [CrossRef] [PubMed]
  141. Medical Device Development Tools (MDDT). FDA. Available online: https://www.fda.gov/medical-devices/science-and-research-medical-devices/medical-device-development-tools-mddt (accessed on 26 April 2022).
  142. Medical Devices. European Medicines Agency. Available online: https://www.ema.europa.eu/en/human-regulatory/overview/medical-devices (accessed on 26 April 2022).
  143. van Timmeren, J.E.; Cester, D.; Tanadini-Lang, S.; Alkadhi, H.; Baessler, B. Radiomics in Medical Imaging—“How-to” Guide and Critical Reflection. Insights Imaging 2020, 11, 1–16. [Google Scholar] [CrossRef] [PubMed]
144. Işın, A.; Direkoğlu, C.; Şah, M. Review of MRI-Based Brain Tumor Image Segmentation Using Deep Learning Methods. Procedia Comput. Sci. 2016, 102, 317–324. [Google Scholar] [CrossRef] [Green Version]
  145. Gatta, R.; Depeursinge, A.; Ratib, O.; Michielin, O.; Leimgruber, A. Integrating Radiomics into Holomics for Personalised Oncology: From Algorithms to Bedside. Eur. Radiol. Exp. 2020, 4. [Google Scholar] [CrossRef]
  146. Corrias, G.; Micheletti, G.; Barberini, L.; Suri, J.S.; Saba, L. Texture Analysis Imaging “What a Clinical Radiologist Needs to Know”. Eur. J. Radiol. 2022, 146. [Google Scholar] [CrossRef]
  147. Bhinder, B.; Gilvary, C.; Madhukar, N.S.; Elemento, O. Artificial Intelligence in Cancer Research and Precision Medicine. Cancer Discov. 2021, 11, 900. [Google Scholar] [CrossRef]
  148. Collins, G.S.; Reitsma, J.B.; Altman, D.G.; Moons, K.G.M. Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD): The TRIPOD Statement. BMC Med. 2015, 13, 1–10. [Google Scholar] [CrossRef] [Green Version]
  149. Papanikolaou, N.; Matos, C.; Koh, D.M. How to Develop a Meaningful Radiomic Signature for Clinical Use in Oncologic Patients. Cancer Imaging 2020, 20, 1–10. [Google Scholar] [CrossRef]
  150. Litière, S.; Isaac, G.; De Vries, E.G.E.; Bogaerts, J.; Chen, A.; Dancey, J.; Ford, R.; Gwyther, S.; Hoekstra, O.; Huang, E.; et al. RECIST 1.1 for Response Evaluation Apply Not Only to Chemotherapy-Treated Patients But Also to Targeted Cancer Agents: A Pooled Database Analysis. J. Clin. Oncol. 2019, 37, 1102–1110. [Google Scholar] [CrossRef]
  151. Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M.; et al. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository. J. Digit. Imaging 2013, 26, 1045–1057. [Google Scholar] [CrossRef] [Green Version]
  152. Dercle, L.; Connors, D.E.; Tang, Y.; Adam, S.J.; Gönen, M.; Hilden, P.; Karovic, S.; Maitland, M.; Moskowitz, C.S.; Kelloff, G.; et al. Vol-PACT: A Foundation for the NIH Public-Private Partnership That Supports Sharing of Clinical Trial Data for the Development of Improved Imaging Biomarkers in Oncology. JCO Clin. Cancer Inform. 2018, 2, 1–12. [Google Scholar] [CrossRef]
  153. Dercle, L.; Zhao, B.; Gönen, M.; Moskowitz, C.S.; Firas, A.; Beylergil, V.; Connors, D.E.; Yang, H.; Lu, L.; Fojo, T.; et al. Early Readout on Overall Survival of Patients With Melanoma Treated With Immunotherapy Using a Novel Imaging Analysis. JAMA Oncol. 2022, 8. [Google Scholar] [CrossRef]
  154. Lu, L.; Dercle, L.; Zhao, B.; Schwartz, L.H. Deep Learning for the Prediction of Early On-Treatment Response in Metastatic Colorectal Cancer from Serial Medical Imaging. Nat. Commun. 2021, 12. [Google Scholar] [CrossRef]
  155. Raymond Geis, J.; Brady, A.P.; Wu, C.C.; Spencer, J.; Ranschaert, E.; Jaremko, J.L.; Langer, S.G.; Kitts, A.B.; Birch, J.; Shields, W.F.; et al. Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement. Radiology 2019, 293, 436–440. [Google Scholar] [CrossRef]
  156. Hedlund, J.; Eklund, A.; Lundström, C. Key Insights in the AIDA Community Policy on Sharing of Clinical Imaging Data for Research in Sweden. Sci. Data 2020, 7, 1–6. [Google Scholar] [CrossRef] [PubMed]
  157. Data Sharing Guidelines. Cancer Research UK. Available online: https://www.cancerresearchuk.org/funding-for-researchers/applying-for-funding/policies-that-affect-your-grant/submission-of-a-data-sharing-and-preservation-strategy/data-sharing-guidelines (accessed on 6 September 2022).
Figure 1. Flow diagram of the study selection process.
Figure 2. Outcome of the relevance assessment by year of publication. The bubble chart summarizes the relationships between tumor type, year of publication, number of publications and the outcome of the relevance assessment at abstract screening. The size of each dot represents the number of publications, and its color represents the outcome of the decision after assessing the relevance of the publications' abstracts. The scatter pie chart summarizes the proportion of papers in each relevance category.
Table 1. Summary of imaging modality and study design (* = USS; ** = MRI and CT, n = 1; MRI and PET-CT, n = 2; PET-MRI and CT, n = 1; *** = cone-beam CT; **** = PET-MRI (n = 1) and FET-PET (n = 1)).
Tumor Type | Total Number of Papers | Proportion with Elements of Prospective Study Design | Proportion with Multicenter Data | MRI | PET/CT | CT | Other
Breast cancer | 34 | 8 | 8 | 25 | 4 | 2 | 3 *
Rectal cancer | 15 | 3 | 6 | 10 | 0 | 1 | 4 **
Lung cancer | 44 | 5 | 11 | 0 | 15 | 29 | 1 ***
Glioblastoma | 31 | 6 | 7 | 29 | 0 | 0 | 2 ****
Table 2. Summary of primary endpoints. Classification of surrogate endpoints is based on definitions from FDA’s surrogate endpoint table [2].
Tumor Type / Primary Endpoint | Non-FDA Surrogate Endpoints | Pathology-Based Surrogate Endpoints (+/− RECIST Endpoints) | RECIST-Based Surrogate Endpoints | Survival Endpoints (+/− RECIST Endpoints) | Total
Breast cancer | 1 | 28 | 4 | 1 | 34
Disease-free survival [21,27,32,47] | 0 | 0 | 4 | 0 | 4
Overall survival [44] | 0 | 0 | 0 | 1 | 1
Pathologic complete response [15,16,17,18,19,20,22,26,28,29,31,33,35,36,37,38,39,40,41,45,46,48] | 0 | 22 | 0 | 0 | 22
Pathologic complete response + Disease-free survival [24,25,30,34,42,43] | 0 | 5 | 0 | 0 | 5
Pathologic complete response + Durable objective overall response rate [34] | 0 | 1 | 0 | 0 | 1
Predictive therapy response [23] | 1 | 0 | 0 | 0 | 1
Rectal cancer | 14 | 0 | 1 | 0 | 15
Disease-free survival [58] | 0 | 0 | 1 | 0 | 1
Pathologic complete response [49,50,51,52,53,54,55,56,57,59,60,61,62,63] | 14 | 0 | 0 | 0 | 14
Lung cancer | 1 | 0 | 17 | 26 | 44
Disease-free survival [83,84,89,102,105] | 0 | 0 | 5 | 0 | 5
Durable objective overall response rate [90,104] | 0 | 0 | 2 | 0 | 2
Overall survival [64,66,69,71,72,76,78,82,85,91,93,94,97,98,100,101,107] | 0 | 0 | 0 | 17 | 17
Overall survival + Progression-free survival [70,73,75,80,86,87] | 0 | 0 | 0 | 6 | 6
Overall survival + Disease-free survival [88,92,96] | 0 | 0 | 0 | 3 | 3
Pathologic complete response [103] | 1 | 0 | 0 | 0 | 1
Progression-free survival [65,67,68,77,79,81,99,106] | 0 | 0 | 8 | 0 | 8
Progression-free survival + Long-term benefit [74] | 0 | 0 | 1 | 0 | 1
Progression-free survival + Radiation pneumonitis [95] | 0 | 0 | 1 | 0 | 1
Glioblastoma | 3 | 0 | 2 | 26 | 31
Associations with biological processes [136] | 1 | 0 | 0 | 0 | 1
Overall survival [109,110,111,113,114,117,118,119,120,121,122,123,125,127,130,132,133,134] | 0 | 0 | 0 | 18 | 18
Overall survival + Progression-free survival [108,115,126,128,129,136,137] | 0 | 0 | 0 | 8 | 8
Progression-free survival [112] | 0 | 0 | 1 | 0 | 1
Progression-free survival + pMGMT status [116] | 0 | 0 | 1 | 0 | 1
Recurrence site [124,138] | 2 | 0 | 0 | 0 | 2
Grand Total | 19 | 28 | 24 | 53 | 124