Topic Editors

Dr. Hamid Khayyam
Department of Mechanical and Automotive Engineering, School of Engineering, RMIT University, Melbourne, VIC 3083, Australia
Dr. Ali Madani
Machine Learning, Cyclica Inc., Toronto, Canada
Dr. Rahele Kafieh
Department of Engineering, Durham University, Durham, UK
Prof. Dr. Ali Hekmatnia
Radiology Department, School of Medicine, Isfahan University of Medical Sciences, Isfahan 81746-73461, Iran

Artificial Intelligence in Cancer Diagnosis and Therapy

Abstract submission deadline: closed (20 December 2022)
Manuscript submission deadline: closed (20 December 2022)
Viewed by 117370

A printed edition is available here.

Topic Information

Dear Colleagues,

Cancer is the second leading cause of death worldwide. According to the World Health Organisation (WHO), around 10 million people died from cancer globally in 2020. Early detection of cancer is of utmost importance for effective treatment and for preventing the spread of cancer cells to other parts of the body (metastasis). Artificial Intelligence (AI) has been revolutionizing discovery, diagnosis, and treatment design. It can aid not only in cancer detection but also in the design of cancer therapies, in the identification of new therapeutic targets, accelerating drug discovery, and in improving cancer surveillance through the analysis of patient and cancer statistics. AI-guided cancer care could also be effective in clinical screening and management, with better health outcomes. Machine Learning (ML) algorithms built on the biological and computer sciences can significantly help scientists uncover the biological processes behind cancer initiation, growth, and metastasis. They can also be used by physicians and surgeons for the effective diagnosis and treatment of different types of cancer, and by the biotechnology and pharmaceutical industries to carry out more efficient drug discovery.

Dr. Hamid Khayyam
Dr. Ali Madani
Dr. Rahele Kafieh
Prof. Dr. Ali Hekmatnia
Topic Editors

Keywords

  •  artificial intelligence
  •  machine learning
  •  bioinformatics
  •  modeling complex biological systems
  •  computational cancer biology
  •  computational drug discovery
  •  radiology
  •  radiation therapy (oncology)
  •  cancer diagnosis and cancer therapy

Participating Journals

Journal (abbreviation)          Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
AI (ai)                         -               -           2020            20.8 days                 CHF 1600
Cancers (cancers)               5.2             7.4         2009            17.9 days                 CHF 2900
Current Oncology (curroncol)    2.6             2.6         1994            18 days                   CHF 2200
Diagnostics (diagnostics)       3.6             3.6         2011            20.7 days                 CHF 2600
Onco (onco)                     -               -           2021            18.3 days                 CHF 1000

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (40 papers)

21 pages, 3976 KiB  
Article
Prognostication in Advanced Cancer by Combining Actigraphy-Derived Rest-Activity and Sleep Parameters with Routine Clinical Data: An Exploratory Machine Learning Study
by Shuchita Dhwiren Patel, Andrew Davies, Emma Laing, Huihai Wu, Jeewaka Mendis and Derk-Jan Dijk
Cancers 2023, 15(2), 503; https://doi.org/10.3390/cancers15020503 - 13 Jan 2023
Cited by 4 | Viewed by 1854
Abstract
Survival prediction is integral to oncology and palliative care, yet robust prognostic models remain elusive. We assessed the feasibility of combining actigraphy, sleep diary data, and routine clinical parameters to prognosticate. Fifty adult outpatients with advanced cancer and estimated prognosis of <1 year were recruited. Patients were required to wear an Actiwatch® (wrist actigraph) for 8 days and to complete a sleep diary. Univariate and regularised multivariate regression methods were used to identify predictors from 66 variables and construct predictive models of survival. A total of 49 patients completed the study, and 34 patients died within 1 year. Forty-two patients had disrupted rest-activity rhythms (dichotomy index (I < O) ≤ 97.5%), but I < O did not have prognostic value in univariate analyses. The Lasso-regularised algorithm was optimal and able to differentiate participants with shorter/longer survival (log rank p < 0.0001). Predictors associated with increased survival time were: time of awakening, sleep efficiency, subjective sleep quality, clinician’s estimate of survival and global health status score, and haemoglobin. A shorter survival time was associated with self-reported sleep disturbance, neutrophil count, serum urea, creatinine, and C-reactive protein. Applying machine learning to actigraphy and sleep data combined with routine clinical data is a promising approach for the development of prognostic tools. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
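As a concrete illustration of the regularised variable selection described above, the following minimal Python sketch uses scikit-learn's LassoCV to retain a sparse subset of predictors from a 66-variable matrix. Everything below (patient counts, variable values, coefficients) is synthetic and stands in for the study's actigraphy, sleep-diary, and clinical data; it is a sketch of the general technique, not the authors' pipeline.

```python
# Hedged sketch: LASSO-regularised selection of survival predictors from 66
# candidate variables. All data are simulated placeholders.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_features = 49, 66
X = rng.normal(size=(n_patients, n_features))   # actigraphy, sleep-diary and clinical variables
# Simulated outcome: survival time driven by two of the 66 variables plus noise.
survival_days = 200 + 30 * X[:, 0] - 25 * X[:, 1] + rng.normal(scale=20, size=n_patients)

X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(X_std, survival_days)

selected = np.flatnonzero(lasso.coef_)           # variables kept by the L1 penalty
print(f"LASSO retained {selected.size} of {n_features} candidate predictors:", selected)
```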

13 pages, 4155 KiB  
Article
Regulation of Epithelial–Mesenchymal Transition Pathway and Artificial Intelligence-Based Modeling for Pathway Activity Prediction
by Shihori Tanabe, Sabina Quader, Ryuichi Ono, Horacio Cabral, Kazuhiko Aoyagi, Akihiko Hirose, Edward J. Perkins, Hiroshi Yokozaki and Hiroki Sasaki
Onco 2023, 3(1), 13-25; https://doi.org/10.3390/onco3010002 - 06 Jan 2023
Cited by 1 | Viewed by 2170
Abstract
Because activity of the epithelial–mesenchymal transition (EMT) is involved in anti-cancer drug resistance, cancer malignancy, and shares some characteristics with cancer stem cells (CSCs), we used artificial intelligence (AI) modeling to identify the cancer-related activity of the EMT-related pathway in datasets of gene expression. We generated images of gene expression overlayed onto molecular pathways with Ingenuity Pathway Analysis (IPA). A dataset of 50 activated and 50 inactivated pathway images of EMT regulation in the development pathway was then modeled by the DataRobot Automated Machine Learning platform. The most accurate models were based on the Elastic-Net Classifier algorithm. The model was validated with 10 additional activated and 10 additional inactivated pathway images. The generated models had false-positive and false-negative results. These images had significant features of opposite labels, and the original data were related to Parkinson’s disease. This approach reliably identified cancer phenotypes and treatments where EMT regulation in the development pathway was activated or inactivated thereby identifying conditions where therapeutics might be applied or developed. As there are a wide variety of cancer phenotypes and CSC targets that provide novel insights into the mechanism of CSCs’ drug resistance and cancer metastasis, our approach holds promise for modeling and simulating cellular phenotype transition, as well as predicting molecular-induced responses. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
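The Elastic-Net classifier named above can be approximated outside the DataRobot platform with an elastic-net penalised logistic regression. The sketch below is a hedged stand-in: the flattened "pathway images" are random arrays, and the image size, penalty mixing ratio, and train/test split are arbitrary choices for illustration.

```python
# Hedged sketch: elastic-net penalised logistic classifier separating
# "activated" from "inactivated" pathway images (synthetic placeholders).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_images, n_pixels = 100, 16 * 16
X = rng.normal(size=(n_images, n_pixels))        # flattened pathway-overlay images (simulated)
y = np.repeat([1, 0], 50)                        # 50 activated, 50 inactivated labels

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=6, stratify=y)
clf = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5,
                         C=1.0, max_iter=5000).fit(Xtr, ytr)
print("held-out accuracy:", accuracy_score(yte, clf.predict(Xte)))
```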

14 pages, 2629 KiB  
Article
The Development of an Intelligent Agent to Detect and Non-Invasively Characterize Lung Lesions on CT Scans: Ready for the “Real World”?
by Martina Sollini, Margarita Kirienko, Noemi Gozzi, Alessandro Bruno, Chiara Torrisi, Luca Balzarini, Emanuele Voulaz, Marco Alloisio and Arturo Chiti
Cancers 2023, 15(2), 357; https://doi.org/10.3390/cancers15020357 - 05 Jan 2023
Viewed by 1194
Abstract
(1) Background: Once lung lesions are identified on CT scans, they must be characterized by assessing the risk of malignancy. Despite the promising performance of computer-aided systems, some limitations related to the study design and technical issues undermine these tools’ efficiency; an “intelligent agent” to detect and non-invasively characterize lung lesions on CT scans is proposed. (2) Methods: Two main modules tackled the detection of lung nodules on CT scans and the diagnosis of each nodule into benign and malignant categories. Computer-aided detection (CADe) and computer aided-diagnosis (CADx) modules relied on deep learning techniques such as Retina U-Net and the convolutional neural network; (3) Results: Tests were conducted on one publicly available dataset and two local datasets featuring CT scans acquired with different devices to reveal deep learning performances in “real-world” clinical scenarios. The CADe module reached an accuracy rate of 78%, while the CADx’s accuracy, specificity, and sensitivity stand at 80%, 73%, and 85.7%, respectively; (4) Conclusions: Two different deep learning techniques have been adapted for CADe and CADx purposes in both publicly available and private CT scan datasets. Experiments have shown adequate performance in both detection and diagnosis tasks. Nevertheless, some drawbacks still characterize the supervised learning paradigm employed in networks such as CNN and Retina U-Net in real-world clinical scenarios, with CT scans from different devices with different sensors’ fingerprints and spatial resolution. Continuous reassessment of CADe and CADx’s performance is needed during their implementation in clinical practice. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
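The accuracy, specificity, and sensitivity figures quoted for the CADx module come from a standard 2x2 confusion matrix. The short sketch below shows that computation with scikit-learn; the label vectors are invented for illustration and are not the study's CT data.

```python
# Hedged sketch: accuracy, sensitivity and specificity from a 2x2 confusion
# matrix for a benign/malignant classifier. Labels are invented.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])   # 1 = malignant
y_pred = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)          # recall on malignant lesions
specificity = tn / (tn + fp)          # recall on benign lesions
print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```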

15 pages, 3338 KiB  
Article
Predicting Tumor Perineural Invasion Status in High-Grade Prostate Cancer Based on a Clinical–Radiomics Model Incorporating T2-Weighted and Diffusion-Weighted Magnetic Resonance Images
by Wei Zhang, Weiting Zhang, Xiang Li, Xiaoming Cao, Guoqiang Yang and Hui Zhang
Cancers 2023, 15(1), 86; https://doi.org/10.3390/cancers15010086 - 23 Dec 2022
Cited by 2 | Viewed by 1957
Abstract
Purpose: To explore the role of bi-parametric MRI radiomics features in identifying perineural invasion (PNI) in high-grade prostate cancer (PCa) and to further develop a combined nomogram with clinical information. Methods: 183 high-grade PCa patients were included in this retrospective study. Tumor regions of interest (ROIs) were manually delineated on T2WI and DWI images. Radiomics features were extracted from the segmented lesion areas. Univariate logistic regression analysis and the least absolute shrinkage and selection operator (LASSO) method were used for feature selection. A clinical model, a radiomics model, and a combined model were developed to predict PNI positivity. Predictive performance was estimated using receiver operating characteristic (ROC) curves, calibration curves, and decision curves. Results: The differential diagnostic efficiency of the clinical model had no statistical difference compared with the radiomics model (area under the curve (AUC) values were 0.766 and 0.823 in the train and test groups, respectively). The radiomics model showed better discrimination in both the train cohort and test cohort (train AUC: 0.879 and test AUC: 0.908) than each subcategory image (T2WI train AUC: 0.813 and test AUC: 0.827; DWI train AUC: 0.749 and test AUC: 0.734). The discrimination efficiency improved when combining the radiomics and clinical models (train AUC: 0.906 and test AUC: 0.947). Conclusion: The model including radiomics signatures and clinical factors can accurately predict PNI positivity in high-grade PCa patients. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
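The "combined model" idea, a radiomics signature plus clinical covariates feeding one logistic model whose AUC is compared against the signature alone, can be sketched as follows. The data, coefficients, and covariate names are simulated assumptions, not the study's variables or results.

```python
# Hedged sketch: comparing a radiomics-only model with a combined
# radiomics + clinical logistic model by test-set AUC. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 183
rad_score = rng.normal(size=n)                    # LASSO-derived radiomics signature (simulated)
clinical = rng.normal(size=(n, 2))                # two clinical covariates (illustrative)
logit = 1.2 * rad_score + 0.6 * clinical[:, 0]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # simulated PNI-positive label

X = np.column_stack([rad_score, clinical])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)

rad_only = LogisticRegression().fit(Xtr[:, [0]], ytr)
combined = LogisticRegression().fit(Xtr, ytr)
print("radiomics-only AUC:", round(roc_auc_score(yte, rad_only.predict_proba(Xte[:, [0]])[:, 1]), 3))
print("combined AUC:", round(roc_auc_score(yte, combined.predict_proba(Xte)[:, 1]), 3))
```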

14 pages, 2409 KiB  
Article
Using Whole Slide Gray Value Map to Predict HER2 Expression and FISH Status in Breast Cancer
by Qian Yao, Wei Hou, Kaiyuan Wu, Yanhua Bai, Mengping Long, Xinting Diao, Ling Jia, Dongfeng Niu and Xiang Li
Cancers 2022, 14(24), 6233; https://doi.org/10.3390/cancers14246233 - 17 Dec 2022
Cited by 1 | Viewed by 2043
Abstract
Accurate detection of HER2 expression through immunohistochemistry (IHC) is of great clinical significance in the treatment of breast cancer. However, manual interpretation of HER2 is challenging, due to the interobserver variability among pathologists. We sought to explore a deep learning method to predict HER2 expression level and gene status based on a Whole Slide Image (WSI) of the HER2 IHC section. When applied to 228 invasive breast carcinoma of no special type (IBC-NST) DAB-stained slides, our GrayMap+ convolutional neural network (CNN) model accurately classified HER2 IHC level with mean accuracy 0.952 ± 0.029 and predicted HER2 FISH status with mean accuracy 0.921 ± 0.029. Our result also demonstrated strong consistency in HER2 expression score between our system and experienced pathologists (intraclass correlation coefficient (ICC) = 0.903, Cohen’s κ = 0.875). The discordant cases were found to be largely caused by high intra-tumor staining heterogeneity in the HER2 IHC group and low copy number in the HER2 FISH group. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

17 pages, 6292 KiB  
Article
Development and Validation of an Ultrasound-Based Radiomics Nomogram for Identifying HER2 Status in Patients with Breast Carcinoma
by Yinghong Guo, Jiangfeng Wu, Yunlai Wang and Yun Jin
Diagnostics 2022, 12(12), 3130; https://doi.org/10.3390/diagnostics12123130 - 12 Dec 2022
Cited by 3 | Viewed by 1529
Abstract
(1) Objective: To evaluate the performance of ultrasound-based radiomics in the preoperative prediction of human epidermal growth factor receptor 2-positive (HER2+) and HER2− breast carcinoma. (2) Methods: Ultrasound images from 309 patients (86 HER2+ cases and 223 HER2− cases) were retrospectively analyzed, of which 216 patients belonged to the training set and 93 patients assigned to the time-independent validation set. The region of interest of the tumors was delineated, and the radiomics features were extracted. Radiomics features underwent dimensionality reduction analyses using the intra-class correlation coefficient (ICC), Mann–Whitney U test, and the least absolute shrinkage and selection operator (LASSO) algorithm. The radiomics score (Rad-score) for each patient was calculated through a linear combination of the nonzero coefficient features. The support vector machine (SVM), K nearest neighbors (KNN), logistic regression (LR), decision tree (DT), random forest (RF), naive Bayes (NB) and XGBoost (XGB) machine learning classifiers were trained to establish prediction models based on the Rad-score. A clinical model based on significant clinical features was also established. In addition, the logistic regression method was used to integrate Rad-score and clinical features to generate the nomogram model. The leave-one-out cross validation (LOOCV) method was used to validate the reliability and stability of the model. (3) Results: Among the seven classifier models, the LR achieved the best performance in the validation set, with an area under the receiver operating characteristic curve (AUC) of 0.786, and was obtained as the Rad-score model, while the RF performed the worst. Tumor size showed a statistical difference between the HER2+ and HER2− groups (p = 0.028). The nomogram model had a slightly higher AUC than the Rad-score model (AUC, 0.788 vs. 0.786), but no statistical difference (Delong test, p = 0.919). The LOOCV method yielded a high median AUC of 0.790 in the validation set. (4) Conclusion: The Rad-score model performs best among the seven classifiers. The nomogram model based on Rad-score and tumor size has slightly better predictive performance than the Rad-score model, and it has the potential to be utilized as a routine modality for preoperatively determining HER2 status in BC patients non-invasively. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
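To make the classifier comparison concrete, the sketch below benchmarks six of the seven models named above with cross-validated AUC on a synthetic, class-imbalanced feature matrix (XGBoost is omitted so the example stays within scikit-learn). The sample size, feature count, and imbalance are assumptions that only roughly mirror the cohort.

```python
# Hedged sketch: cross-validated AUC for several off-the-shelf classifiers on
# a synthetic radiomics-style dataset (not the study's ultrasound features).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=309, n_features=30, n_informative=8,
                           weights=[0.72, 0.28], random_state=2)   # HER2- vs HER2+ style imbalance

models = {
    "SVM": SVC(probability=True),
    "KNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=2),
    "RF": RandomForestClassifier(random_state=2),
    "NB": GaussianNB(),
}
for name, clf in models.items():
    auc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```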

17 pages, 1712 KiB  
Article
Prediction of Postoperative Pathologic Risk Factors in Cervical Cancer Patients Treated with Radical Hysterectomy by Machine Learning
by Zhengjie Ou, Wei Mao, Lihua Tan, Yanli Yang, Shuanghuan Liu, Yanan Zhang, Bin Li and Dan Zhao
Curr. Oncol. 2022, 29(12), 9613-9629; https://doi.org/10.3390/curroncol29120755 - 06 Dec 2022
Cited by 6 | Viewed by 1662
Abstract
Pretherapeutic serological parameters play a predictive role in pathologic risk factors (PRF), which correlate with treatment and prognosis in cervical cancer (CC). However, methods for the pre-operative prediction of PRF are limited, and the clinical utility of machine learning methods in CC remains unknown. Overall, 1260 early-stage CC patients treated with radical hysterectomy (RH) were randomly split into training and test cohorts. Six machine learning classifiers, including Gradient Boosting Machine, Support Vector Machine with Gaussian kernel, Random Forest, Conditional Random Forest, Naive Bayes, and Elastic Net, were used to derive diagnostic information from nine clinical factors and 75 parameters readily available from pretreatment peripheral blood tests. The best results were obtained by RF in deep stromal infiltration prediction, with an accuracy of 70.8% and AUC of 0.767. The highest accuracy and AUC for predicting lymphatic metastasis with Cforest were 64.3% and 0.620, respectively. The highest accuracy of prediction for lymphovascular space invasion with EN was 59.7% and the AUC was 0.628. Blood markers, including D-dimer and uric acid, were associated with PRF. Machine learning methods can provide critical diagnostic prediction of PRF in CC before surgical intervention. The use of predictive algorithms may facilitate individualized treatment options through diagnostic stratification. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

22 pages, 1513 KiB  
Article
DECT-CLUST: Dual-Energy CT Image Clustering and Application to Head and Neck Squamous Cell Carcinoma Segmentation
by Faicel Chamroukhi, Segolene Brivet, Peter Savadjiev, Mark Coates and Reza Forghani
Diagnostics 2022, 12(12), 3072; https://doi.org/10.3390/diagnostics12123072 - 06 Dec 2022
Cited by 2 | Viewed by 1631
Abstract
Dual-energy computed tomography (DECT) is an advanced CT computed tomography scanning technique enabling material characterization not possible with conventional CT scans. It allows the reconstruction of energy decay curves at each 3D image voxel, representing varied image attenuation at different effective scanning energy levels. In this paper, we develop novel unsupervised learning techniques based on mixture models and functional data analysis models to the clustering of DECT images. We design functional mixture models that integrate spatial image context in mixture weights, with mixture component densities being constructed upon the DECT energy decay curves as functional observations. We develop dedicated expectation–maximization algorithms for the maximum likelihood estimation of the model parameters. To our knowledge, this is the first article to develop statistical functional data analysis and model-based clustering techniques to take advantage of the full spectral information provided by DECT. We evaluate the application of DECT to head and neck squamous cell carcinoma. Current image-based evaluation of these tumors in clinical practice is largely qualitative, based on a visual assessment of tumor anatomic extent and basic one- or two-dimensional tumor size measurements. We evaluate our methods on 91 head and neck cancer DECT scans and compare our unsupervised clustering results to tumor contours traced manually by radiologists, as well as to several baseline algorithms. Given the inter-rater variability even among experts at delineating head and neck tumors, and given the potential importance of tissue reactions surrounding the tumor itself, our proposed methodology has the potential to add value in downstream machine learning applications for clinical outcome prediction based on DECT data in head and neck cancer. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
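The spatially weighted functional mixture models proposed in the paper go beyond a few lines, but the core idea, model-based clustering of per-voxel energy-decay curves fitted by EM, can be sketched with a plain Gaussian mixture in scikit-learn. The curves, energy levels, and two-tissue setup below are simulated assumptions, and the spatial weighting of the original method is not reproduced.

```python
# Hedged sketch: EM-fitted Gaussian mixture clustering of simulated per-voxel
# DECT energy-decay curves (a simplified stand-in for the proposed models).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
energies = np.linspace(40, 140, 21)                       # virtual keV levels (illustrative)

def decay_curve(scale, rate):
    """Toy attenuation curve decreasing with energy."""
    return scale * np.exp(-rate * (energies - 40) / 100.0)

tissue_a = decay_curve(1.0, 0.8) + rng.normal(scale=0.02, size=(500, 21))
tissue_b = decay_curve(0.7, 0.3) + rng.normal(scale=0.02, size=(500, 21))
curves = np.vstack([tissue_a, tissue_b])                  # one row per voxel

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=4).fit(curves)
labels = gmm.predict(curves)
print("voxels per cluster:", np.bincount(labels))
```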

14 pages, 2103 KiB  
Article
Development and Validation of Novel Deep-Learning Models Using Multiple Data Types for Lung Cancer Survival
by Jason C. Hsu, Phung-Anh Nguyen, Phan Thanh Phuc, Tsai-Chih Lo, Min-Huei Hsu, Min-Shu Hsieh, Nguyen Quoc Khanh Le, Chi-Tsun Cheng, Tzu-Hao Chang and Cheng-Yu Chen
Cancers 2022, 14(22), 5562; https://doi.org/10.3390/cancers14225562 - 12 Nov 2022
Cited by 5 | Viewed by 2477
Abstract
A well-established lung-cancer-survival-prediction model that relies on multiple data types, multiple novel machine-learning algorithms, and external testing is absent in the literature. This study aims to address this gap and determine the critical factors of lung cancer survival. We selected non-small-cell lung cancer patients from a retrospective dataset of the Taipei Medical University Clinical Research Database and Taiwan Cancer Registry between January 2008 and December 2018. All patients were monitored from the index date of cancer diagnosis until the event of death. Variables, including demographics, comorbidities, medications, laboratories, and patient gene tests, were used. Nine machine-learning algorithms with various modes were used. The performance of the algorithms was measured by the area under the receiver operating characteristic curve (AUC). In total, 3714 patients were included. The best performance of the artificial neural network (ANN) model was achieved when integrating all variables with the AUC, accuracy, precision, recall, and F1-score of 0.89, 0.82, 0.91, 0.75, and 0.65, respectively. The most important features were cancer stage, cancer size, age of diagnosis, smoking, drinking status, EGFR gene, and body mass index. Overall, the ANN model improved predictive performance when integrating different data types. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

17 pages, 3962 KiB  
Article
Deep Learning for Automated Elective Lymph Node Level Segmentation for Head and Neck Cancer Radiotherapy
by Victor I. J. Strijbis, Max Dahele, Oliver J. Gurney-Champion, Gerrit J. Blom, Marije R. Vergeer, Berend J. Slotman and Wilko F. A. R. Verbakel
Cancers 2022, 14(22), 5501; https://doi.org/10.3390/cancers14225501 - 09 Nov 2022
Cited by 9 | Viewed by 2175
Abstract
Depending on the clinical situation, different combinations of lymph node (LN) levels define the elective LN target volume in head-and-neck cancer (HNC) radiotherapy. The accurate auto-contouring of individual LN levels could reduce the burden and variability of manual segmentation and be used regardless of the primary tumor location. We evaluated three deep learning approaches for the segmenting individual LN levels I–V, which were manually contoured on CT scans from 70 HNC patients. The networks were trained and evaluated using five-fold cross-validation and ensemble learning for 60 patients with (1) 3D patch-based UNets, (2) multi-view (MV) voxel classification networks and (3) sequential UNet+MV. The performances were evaluated using Dice similarity coefficients (DSC) for automated and manual segmentations for individual levels, and the planning target volumes were extrapolated from the combined levels I–V and II–IV, both for the cross-validation and for an independent test set of 10 patients. The median DSC were 0.80, 0.66 and 0.82 for UNet, MV and UNet+MV, respectively. Overall, UNet+MV significantly (p < 0.0001) outperformed other arrangements and yielded DSC = 0.87, 0.85, 0.86, 0.82, 0.77, 0.77 for the combined and individual level I–V structures, respectively. Both PTVs were also significantly (p < 0.0001) more accurate with UNet+MV, with DSC = 0.91 and 0.90, respectively. The accurate segmentation of individual LN levels I–V can be achieved using an ensemble of UNets. UNet+MV can further refine this result. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
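The Dice similarity coefficient (DSC) used throughout this evaluation is simple to compute for binary masks. The sketch below implements it with NumPy on two toy spherical "contours" rather than real lymph node level segmentations.

```python
# Hedged sketch: Dice similarity coefficient for two 3D binary masks.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2 * |A intersect B| / (|A| + |B|) for boolean masks of equal shape."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

# Toy example: two overlapping spheres standing in for auto/manual contours.
grid = np.indices((64, 64, 64))
auto_mask = ((grid - 30) ** 2).sum(axis=0) < 10 ** 2
manual_mask = ((grid - 33) ** 2).sum(axis=0) < 10 ** 2
print(f"DSC = {dice(auto_mask, manual_mask):.3f}")
```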

21 pages, 1264 KiB  
Review
Artificial Intelligence-Driven Diagnosis of Pancreatic Cancer
by Bahrudeen Shahul Hameed and Uma Maheswari Krishnan
Cancers 2022, 14(21), 5382; https://doi.org/10.3390/cancers14215382 - 31 Oct 2022
Cited by 9 | Viewed by 4827
Abstract
Pancreatic cancer is among the most challenging forms of cancer to treat, owing to its late diagnosis and an aggressive nature that reduces the survival rate drastically. Pancreatic cancer diagnosis has been primarily based on imaging, but current state-of-the-art imaging offers poor prognostic value, thus limiting clinicians’ treatment options. Cancer diagnosis has been enhanced through the integration of artificial intelligence with imaging modalities to support better clinical decisions. In this review, we examine how AI models can improve the diagnosis of pancreatic cancer using different imaging modalities, along with a discussion of emerging trends in AI-driven diagnosis based on cytopathology and serological markers. Ethical concerns regarding the use of these tools are also discussed. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

46 pages, 38427 KiB  
Article
Artificial Intelligence Predicted Overall Survival and Classified Mature B-Cell Neoplasms Based on Immuno-Oncology and Immune Checkpoint Panels
by Joaquim Carreras, Giovanna Roncador and Rifat Hamoudi
Cancers 2022, 14(21), 5318; https://doi.org/10.3390/cancers14215318 - 28 Oct 2022
Cited by 11 | Viewed by 4176
Abstract
Artificial intelligence (AI) can identify actionable oncology biomarkers. This research integrates our previous analyses of non-Hodgkin lymphoma. We used gene expression and immunohistochemical data, focusing on the immune checkpoint, and added a new analysis of macrophages, including 3D rendering. The AI comprised machine learning (C5, Bayesian network, C&R, CHAID, discriminant analysis, KNN, logistic regression, LSVM, Quest, random forest, random trees, SVM, tree-AS, and XGBoost linear and tree) and artificial neural networks (multilayer perceptron and radial basis function). The series included chronic lymphocytic leukemia, mantle cell lymphoma, follicular lymphoma, Burkitt, diffuse large B-cell lymphoma, marginal zone lymphoma, and multiple myeloma, as well as acute myeloid leukemia and pan-cancer series. AI classified lymphoma subtypes and predicted overall survival accurately. Oncogenes and tumor suppressor genes were highlighted (MYC, BCL2, and TP53), along with immune microenvironment markers of tumor-associated macrophages (M2-like TAMs), T-cells and regulatory T lymphocytes (Tregs) (CD68, CD163, MARCO, CSF1R, CSF1, PD-L1/CD274, SIRPA, CD85A/LILRB3, CD47, IL10, TNFRSF14/HVEM, TNFAIP8, IKAROS, STAT3, NFKB, MAPK, PD-1/PDCD1, BTLA, and FOXP3), apoptosis (BCL2, CASP3, CASP8, PARP, and pathway-related MDM2, E2F1, CDK6, MYB, and LMO2), and metabolism (ENO3, GGA3). In conclusion, AI with immuno-oncology markers is a powerful predictive tool. Additionally, a review of recent literature was made. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

13 pages, 1530 KiB  
Article
Deep Learning-Based Classification of Uterine Cervical and Endometrial Cancer Subtypes from Whole-Slide Histopathology Images
by JaeYen Song, Soyoung Im, Sung Hak Lee and Hyun-Jong Jang
Diagnostics 2022, 12(11), 2623; https://doi.org/10.3390/diagnostics12112623 - 28 Oct 2022
Cited by 7 | Viewed by 2726
Abstract
Uterine cervical and endometrial cancers have different subtypes with different clinical outcomes. Therefore, cancer subtyping is essential for proper treatment decisions. Furthermore, an endometrial and endocervical origin for an adenocarcinoma should also be distinguished. Although the discrimination can be helped with various immunohistochemical markers, there is no definitive marker. Therefore, we tested the feasibility of deep learning (DL)-based classification for the subtypes of cervical and endometrial cancers and the site of origin of adenocarcinomas from whole slide images (WSIs) of tissue slides. WSIs were split into 360 × 360-pixel image patches at 20× magnification for classification. Then, the average of patch classification results was used for the final classification. The area under the receiver operating characteristic curves (AUROCs) for the cervical and endometrial cancer classifiers were 0.977 and 0.944, respectively. The classifier for the origin of an adenocarcinoma yielded an AUROC of 0.939. These results clearly demonstrated the feasibility of DL-based classifiers for the discrimination of cancers from the cervix and uterus. We expect that the performance of the classifiers will be much enhanced with an accumulation of WSI data. Then, the information from the classifiers can be integrated with other data for more precise discrimination of cervical and endometrial cancers. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
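The slide-level decision rule described above, averaging patch-level classification results across a whole-slide image, is sketched below with simulated patch probabilities standing in for the CNN outputs; the patch counts and score distributions are arbitrary assumptions.

```python
# Hedged sketch: slide-level scores as the mean of simulated patch-level
# probabilities, summarised with a slide-level AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
slide_labels = np.repeat([0, 1], 20)                  # 20 negative and 20 positive slides

slide_scores = []
for label in slide_labels:
    n_patches = rng.integers(50, 200)                 # patches per slide varies with tissue area
    centre = 0.65 if label == 1 else 0.35             # patch classifier mildly favours the true class
    patch_probs = np.clip(rng.normal(centre, 0.2, size=n_patches), 0, 1)
    slide_scores.append(patch_probs.mean())           # average of patch classification results

print("slide-level AUROC:", round(roc_auc_score(slide_labels, slide_scores), 3))
```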

15 pages, 11987 KiB  
Article
Learn to Estimate Genetic Mutation and Microsatellite Instability with Histopathology H&E Slides in Colon Carcinoma
by Yimin Guo, Ting Lyu, Shuguang Liu, Wei Zhang, Youjian Zhou, Chao Zeng and Guangming Wu
Cancers 2022, 14(17), 4144; https://doi.org/10.3390/cancers14174144 - 27 Aug 2022
Viewed by 1607
Abstract
Colorectal cancer is one of the most common malignancies and the third leading cause of cancer-related mortality worldwide. Identifying KRAS, NRAS, and BRAF mutations and estimating MSI status is closely related to the individualized therapeutic judgment and oncologic prognosis of CRC patients. In this study, we introduce a cascaded network framework with an average voting ensemble strategy to sequentially identify the tumor regions and predict gene mutations & MSI status from whole-slide H&E images. Experiments on a colorectal cancer dataset indicate that the proposed method can achieve higher fidelity in both gene mutation prediction and MSI status estimation. In the testing set, our method achieves 0.792, 0.886, 0.897, and 0.764 AUCs for KRAS, NRAS, BRAF, and MSI, respectively. The results suggest that the deep convolutional networks have the potential to provide diagnostic insight and clinical guidance directly from pathological H&E slides. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

12 pages, 2863 KiB  
Article
Clinically Applicable Pathological Diagnosis System for Cell Clumps in Endometrial Cancer Screening via Deep Convolutional Neural Networks
by Qing Li, Ruijie Wang, Zhonglin Xie, Lanbo Zhao, Yiran Wang, Chao Sun, Lu Han, Yu Liu, Huilian Hou, Chen Liu, Guanjun Zhang, Guizhi Shi, Dexing Zhong and Qiling Li
Cancers 2022, 14(17), 4109; https://doi.org/10.3390/cancers14174109 - 25 Aug 2022
Cited by 3 | Viewed by 2300
Abstract
Objectives: The soaring demand for endometrial cancer screening has exposed a huge shortage of cytopathologists worldwide. To address this problem, our study set out to establish an artificial intelligence system that automatically recognizes and diagnoses pathological images of endometrial cell clumps (ECCs). Methods: We used Li Brush to acquire endometrial cells from patients. Liquid-based cytology technology was used to provide slides. The slides were scanned and divided into malignant and benign groups. We proposed two (a U-net segmentation and a DenseNet classification) networks to identify images. Another four classification networks were used for comparison tests. Results: A total of 113 (42 malignant and 71 benign) endometrial samples were collected, and a dataset containing 15,913 images was constructed. A total of 39,000 ECCs patches were obtained by the segmentation network. Then, 26,880 and 11,520 patches were used for training and testing, respectively. On the premise that the training set reached 100%, the testing set gained 93.5% accuracy, 92.2% specificity, and 92.0% sensitivity. The remaining 600 malignant patches were used for verification. Conclusions: An artificial intelligence system was successfully built to classify malignant and benign ECCs. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

17 pages, 9983 KiB  
Article
Integrated Analysis of Tumor Mutation Burden and Immune Infiltrates in Hepatocellular Carcinoma
by Yulan Zhao, Ting Huang and Pintong Huang
Diagnostics 2022, 12(8), 1918; https://doi.org/10.3390/diagnostics12081918 - 08 Aug 2022
Cited by 4 | Viewed by 2597
Abstract
Tumor mutation burdens (TMBs) act as an indicator of immunotherapeutic responsiveness in various tumors. However, the relationship between TMBs and immune cell infiltrates in hepatocellular carcinoma (HCC) is still obscure. The present study aimed to explore the potential diagnostic markers of TMBs for HCC and analyze the role of immune cell infiltration in this pathology. We used OA datasets from The Cancer Genome Atlas database. First, the “maftools” package was used to screen the highest mutation frequency in all samples. R software was used to identify differentially expressed genes (DEGs) according to mutation frequency and perform functional correlation analysis. Then, the gene ontology (GO) enrichment analysis was performed with “clusterProfiler”, “enrichplot”, and “ggplot2” packages. Finally, the correlations between diagnostic markers and infiltrating immune cells were analyzed, and CIBERSORT was used to evaluate the infiltration of immune cells in HCC tissues. As a result, we identified a total of 359 DEGs in this study. These DEGs may affect HCC prognosis by regulating fatty acid metabolism, hypoxia, and the P53 pathway. The top 15 genes were selected as the hub genes through PPI network analysis. SRSF1, SNRPA1, and SRSF3 showed strong similarities in biological effects, NCBP2 was demonstrated as a diagnostic marker of HCC, and high NCBP2 expression was significantly correlated with poor over survival (OS) in HCC. In addition, NCBP2 expression was correlated with the infiltration of B cells (r = 0.364, p = 3.30 × 10−12), CD8+ T cells (r = 0.295, p = 2.71 × 10−8), CD4+ T cells, (r = 0.484, p = 1.37 × 10−21), macrophages (r = 0.551, p = 1.97 × 10−28), neutrophils (r = 0.457, p = 3.26 × 10−19), and dendritic cells (r = 0.453, p = 1.97 × 10−18). Immune cell infiltration analysis revealed that the degree of central memory T-cell (Tcm) infiltration may be correlated with the HCC process. In conclusion, NCBP2 can be used as diagnostic markers of HCC, and immune cell infiltration plays an important role in the occurrence and progression of HCC. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
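The infiltration correlations reported above (r and p values) are plain bivariate statistics. The sketch below reproduces the form of that computation with scipy.stats.pearsonr on simulated values standing in for NCBP2 expression and a CIBERSORT-style immune-cell fraction; the sample size and effect size are assumptions.

```python
# Hedged sketch: Pearson correlation between a marker's expression and an
# immune-cell infiltration fraction (simulated data, not TCGA values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 371                                              # illustrative cohort size
expression = rng.normal(size=n)                      # normalised marker expression
infiltration = 0.5 * expression + rng.normal(size=n) # e.g., macrophage fraction proxy

r, p = stats.pearsonr(expression, infiltration)
print(f"r = {r:.3f}, p = {p:.2e}")
```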

11 pages, 1290 KiB  
Article
ViSTA: A Novel Network Improving Lung Adenocarcinoma Invasiveness Prediction from Follow-Up CT Series
by Wei Zhao, Yingli Sun, Kaiming Kuang, Jiancheng Yang, Ge Li, Bingbing Ni, Yingjia Jiang, Bo Jiang, Jun Liu and Ming Li
Cancers 2022, 14(15), 3675; https://doi.org/10.3390/cancers14153675 - 28 Jul 2022
Cited by 2 | Viewed by 1887
Abstract
To investigate the value of the deep learning method in predicting the invasiveness of early lung adenocarcinoma based on irregularly sampled follow-up computed tomography (CT) scans. In total, 351 nodules were enrolled in the study. A new deep learning network based on temporal attention, named Visual Simple Temporal Attention (ViSTA), was proposed to process irregularly sampled follow-up CT scans. We conducted substantial experiments to investigate the supplemental value in predicting the invasiveness using serial CTs. A test set composed of 69 lung nodules was reviewed by three radiologists. The performance of the model and radiologists were compared and analyzed. We also performed a visual investigation to explore the inherent growth pattern of the early adenocarcinomas. Among counterpart models, ViSTA showed the best performance (AUC: 86.4% vs. 60.6%, 75.9%, 66.9%, 73.9%, 76.5%, 78.3%). ViSTA also outperformed the model based on Volume Doubling Time (AUC: 60.6%). ViSTA scored higher than two junior radiologists (accuracy of 81.2% vs. 75.4% and 71.0%) and came close to the senior radiologist (85.5%). Our proposed model using irregularly sampled follow-up CT scans achieved promising accuracy in evaluating the invasiveness of the early stage lung adenocarcinoma. Its performance is comparable with senior experts and better than junior experts and traditional deep learning models. With further validation, it can potentially be applied in clinical practice. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

15 pages, 1791 KiB  
Article
Comparative Multicentric Evaluation of Inter-Observer Variability in Manual and Automatic Segmentation of Neuroblastic Tumors in Magnetic Resonance Images
by Diana Veiga-Canuto, Leonor Cerdà-Alberich, Cinta Sangüesa Nebot, Blanca Martínez de las Heras, Ulrike Pötschger, Michela Gabelloni, José Miguel Carot Sierra, Sabine Taschner-Mandl, Vanessa Düster, Adela Cañete, Ruth Ladenstein, Emanuele Neri and Luis Martí-Bonmatí
Cancers 2022, 14(15), 3648; https://doi.org/10.3390/cancers14153648 - 27 Jul 2022
Cited by 13 | Viewed by 2889
Abstract
Tumor segmentation is one of the key steps in imaging processing. The goals of this study were to assess the inter-observer variability in manual segmentation of neuroblastic tumors and to analyze whether the state-of-the-art deep learning architecture nnU-Net can provide a robust solution to detect and segment tumors on MR images. A retrospective multicenter study of 132 patients with neuroblastic tumors was performed. Dice Similarity Coefficient (DSC) and Area Under the Receiver Operating Characteristic Curve (AUC ROC) were used to compare segmentation sets. Two more metrics were elaborated to understand the direction of the errors: the modified version of False Positive (FPRm) and False Negative (FNR) rates. Two radiologists manually segmented 46 tumors and a comparative study was performed. nnU-Net was trained-tuned with 106 cases divided into five balanced folds to perform cross-validation. The five resulting models were used as an ensemble solution to measure training (n = 106) and validation (n = 26) performance, independently. The time needed by the model to automatically segment 20 cases was compared to the time required for manual segmentation. The median DSC for manual segmentation sets was 0.969 (±0.032 IQR). The median DSC for the automatic tool was 0.965 (±0.018 IQR). The automatic segmentation model achieved a better performance regarding the FPRm. MR images segmentation variability is similar between radiologists and nnU-Net. Time leverage when using the automatic model with posterior visual validation and manual adjustment corresponds to 92.8%. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

16 pages, 2235 KiB  
Article
Novel Harmonization Method for Multi-Centric Radiomic Studies in Non-Small Cell Lung Cancer
by Marco Bertolini, Valeria Trojani, Andrea Botti, Noemi Cucurachi, Marco Galaverni, Salvatore Cozzi, Paolo Borghetti, Salvatore La Mattina, Edoardo Pastorello, Michele Avanzo, Alberto Revelant, Matteo Sepulcri, Chiara Paronetto, Stefano Ursino, Giulia Malfatti, Niccolò Giaj-Levra, Lorenzo Falcinelli, Cinzia Iotti, Mauro Iori and Patrizia Ciammella
Curr. Oncol. 2022, 29(8), 5179-5194; https://doi.org/10.3390/curroncol29080410 - 22 Jul 2022
Cited by 7 | Viewed by 3185
Abstract
The purpose of this multi-centric work was to investigate the relationship between radiomic features extracted from pre-treatment computed tomography (CT), positron emission tomography (PET) imaging, and clinical outcomes for stereotactic body radiation therapy (SBRT) in early-stage non-small cell lung cancer (NSCLC). One-hundred and seventeen patients who received SBRT for early-stage NSCLC were retrospectively identified from seven Italian centers. The tumor was identified on pre-treatment free-breathing CT and PET images, from which we extracted 3004 quantitative radiomic features. The primary outcome was 24-month progression-free-survival (PFS) based on cancer recurrence (local/non-local) following SBRT. A harmonization technique was proposed for CT features considering lesion and contralateral healthy lung tissues using the LASSO algorithm as a feature selector. Models with harmonized CT features (B models) demonstrated better performances compared to the ones using only original CT features (C models). A linear support vector machine (SVM) with harmonized CT and PET features (A1 model) showed an area under the curve (AUC) of 0.77 (0.63–0.85) for predicting the primary outcome in an external validation cohort. The addition of clinical features did not enhance the model performance. This study provided the basis for validating our novel CT data harmonization strategy, involving delta radiomics. The harmonized radiomic models demonstrated the capability to properly predict patient prognosis. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

11 pages, 2282 KiB  
Review
Efficacy of Artificial Intelligence-Assisted Discrimination of Oral Cancerous Lesions from Normal Mucosa Based on the Oral Mucosal Image: A Systematic Review and Meta-Analysis
by Ji-Sun Kim, Byung Guk Kim and Se Hwan Hwang
Cancers 2022, 14(14), 3499; https://doi.org/10.3390/cancers14143499 - 19 Jul 2022
Cited by 10 | Viewed by 2315
Abstract
The accuracy of artificial intelligence (AI)-assisted discrimination of oral cancerous lesions from normal mucosa based on mucosal images was evaluated. Two authors independently reviewed the database until June 2022. Oral mucosal disorder, as recorded by photographic images, autofluorescence, and optical coherence tomography (OCT), was compared with the reference results by histology findings. True-positive, true-negative, false-positive, and false-negative data were extracted. Seven studies were included for discriminating oral cancerous lesions from normal mucosa. The diagnostic odds ratio (DOR) of AI-assisted screening was 121.66 (95% confidence interval [CI], 29.60; 500.05). Twelve studies were included for discriminating all oral precancerous lesions from normal mucosa. The DOR of screening was 63.02 (95% CI, 40.32; 98.49). Subgroup analysis showed that OCT was more diagnostically accurate (324.33 vs. 66.81 and 27.63) and more negatively predictive (0.94 vs. 0.93 and 0.84) than photographic images and autofluorescence on the screening for all oral precancerous lesions from normal mucosa. Automated detection of oral cancerous lesions by AI would be a rapid, non-invasive diagnostic tool that could provide immediate results on the diagnostic work-up of oral cancer. This method has the potential to be used as a clinical tool for the early diagnosis of pathological lesions. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
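The diagnostic odds ratio (DOR) pooled in this meta-analysis is computed from the 2x2 counts (TP/FP/FN/TN) extracted from each study. Below is a small sketch of the DOR and its 95% confidence interval for a single table; the counts are made up for illustration and no pooling across studies is performed.

```python
# Hedged sketch: diagnostic odds ratio with a 95% CI from one 2x2 table.
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    dor = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)    # standard error of ln(DOR)
    lo, hi = (math.exp(math.log(dor) + z * se_log) for z in (-1.96, 1.96))
    return dor, lo, hi

dor, lo, hi = diagnostic_odds_ratio(tp=90, fp=8, fn=10, tn=85)
print(f"DOR = {dor:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```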

24 pages, 2793 KiB  
Review
Diagnostic Strategies for Breast Cancer Detection: From Image Generation to Classification Strategies Using Artificial Intelligence Algorithms
by Jesus A. Basurto-Hurtado, Irving A. Cruz-Albarran, Manuel Toledano-Ayala, Mario Alberto Ibarra-Manzano, Luis A. Morales-Hernandez and Carlos A. Perez-Ramirez
Cancers 2022, 14(14), 3442; https://doi.org/10.3390/cancers14143442 - 15 Jul 2022
Cited by 14 | Viewed by 3788
Abstract
Breast cancer is one of the main causes of death for women worldwide, accounting for 16% of the malignant lesions diagnosed globally. It is therefore of paramount importance to diagnose these lesions at the earliest possible stage in order to maximize the chances of survival. While several works present selected topics in this area, none offers a complete panorama, that is, from image generation to its interpretation. This work presents a comprehensive state-of-the-art review of the image generation and processing techniques used to detect breast cancer, in which potential candidates for image generation and processing are presented and discussed. Novel methodologies should consider the adroit integration of artificial intelligence concepts and categorical data to generate modern alternatives with the accuracy, precision, and reliability required to mitigate misclassifications. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

11 pages, 2260 KiB  
Article
Prediction of Nodal Metastasis in Lung Cancer Using Deep Learning of Endobronchial Ultrasound Images
by Yuki Ito, Takahiro Nakajima, Terunaga Inage, Takeshi Otsuka, Yuki Sata, Kazuhisa Tanaka, Yuichi Sakairi, Hidemi Suzuki and Ichiro Yoshino
Cancers 2022, 14(14), 3334; https://doi.org/10.3390/cancers14143334 - 08 Jul 2022
Cited by 1 | Viewed by 2259
Abstract
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a valid modality for nodal lung cancer staging. The sonographic features of EBUS help determine suspicious lymph nodes (LNs). To facilitate the use of this method, machine-learning-based computer-aided diagnosis (CAD) of medical imaging has been introduced into clinical practice. This study investigated the feasibility of CAD for the prediction of nodal metastasis in lung cancer using endobronchial ultrasound images. Image data of patients who underwent EBUS-TBNA were collected from video clips. Xception was used as the convolutional neural network to predict the nodal metastasis of lung cancer. The prediction accuracy of nodal metastasis through deep learning (DL) was evaluated using both five-fold cross-validation and hold-out methods. Eighty percent of the collected images were used in five-fold cross-validation, and all the images were used for the hold-out method. Ninety-one patients (166 LNs) were enrolled in this study. A total of 5255 and 6444 images extracted from the video clips were analyzed using the five-fold cross-validation and hold-out methods, respectively. The prediction of LN metastasis by CAD using EBUS images showed high diagnostic accuracy with high specificity. CAD during EBUS-TBNA may help improve the diagnostic efficiency and reduce the invasiveness of the procedure. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
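A hedged Keras sketch of an Xception-based binary classifier in the spirit of the approach above: the input resolution, frozen ImageNet base, and single-sigmoid head are assumptions made for illustration, and no EBUS image data or training details from the study are included.

```python
# Hedged sketch: transfer learning with Xception for a binary
# (metastatic vs. benign lymph node) image classifier.
import tensorflow as tf

base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                                   # freeze ImageNet features initially

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # P(nodal metastasis)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown here
```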

11 pages, 3587 KiB  
Article
Machine Learning Based on MRI DWI Radiomics Features for Prognostic Prediction in Nasopharyngeal Carcinoma
by Qiyi Hu, Guojie Wang, Xiaoyi Song, Jingjing Wan, Man Li, Fan Zhang, Qingling Chen, Xiaoling Cao, Shaolin Li and Ying Wang
Cancers 2022, 14(13), 3201; https://doi.org/10.3390/cancers14133201 - 30 Jun 2022
Cited by 5 | Viewed by 1991
Abstract
Purpose: This study aimed to explore the predictive efficacy of radiomics analyses based on readout-segmented echo-planar diffusion-weighted imaging (RESOLVE-DWI) for prognosis evaluation in nasopharyngeal carcinoma in order to provide further information for clinical decision making and intervention. Methods: A total of 154 patients with untreated NPC confirmed by pathological examination were enrolled, and the pretreatment magnetic resonance image (MRI)—including diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) maps, T2-weighted imaging (T2WI), and contrast-enhanced T1-weighted imaging (CE-T1WI)—was collected. The Random Forest (RF) algorithm selected radiomics features and established the machine-learning models. Five models, namely model 1 (DWI + ADC), model 2 (T2WI + CE-T1WI), model 3 (DWI + ADC + T2WI), model 4 (DWI + ADC + CE-T1WI), and model 5 (DWI + ADC + T2WI + CE-T1WI), were constructed. The average area under the curve (AUC) of the validation set was determined in order to compare the predictive efficacy for prognosis evaluation. Results: After adjusting the parameters, the RF machine learning models based on extracted imaging features from different sequence combinations were obtained. The invalidation sets of model 1 (DWI + ADC) yielded the highest average AUC of 0.80 (95% CI: 0.79–0.81). The average AUCs of the model 2, 3, 4, and 5 invalidation sets were 0.72 (95% CI: 0.71–0.74), 0.66 (95% CI: 0.64–0.68), 0.74 (95% CI: 0.73–0.75), and 0.75 (95% CI: 0.74–0.76), respectively. Conclusion: A radiomics model derived from the MRI DWI of patients with nasopharyngeal carcinoma was generated in order to evaluate the risk of recurrence and metastasis. The model based on MRI DWI can provide an alternative approach for survival estimation, and can reveal more information for clinical decision-making and intervention. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
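A minimal sketch of how such sequence-combination models could be compared, assuming radiomics features have already been extracted per sequence into a table whose column prefixes ("dwi_", "adc_", "t2_", "cet1_") are illustrative; the estimator settings and five-fold AUC scoring are placeholders rather than the authors' exact pipeline.

```python
# Minimal sketch: compare RF models built from different MRI sequence combinations.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def mean_validation_auc(features: pd.DataFrame, outcome: np.ndarray, prefixes) -> float:
    # Keep only the radiomics columns belonging to the requested sequences.
    cols = [c for c in features.columns if c.startswith(tuple(prefixes))]
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    return cross_val_score(clf, features[cols], outcome, cv=5, scoring="roc_auc").mean()

# Example (hypothetical data): reproduce the five combinations listed above.
# combos = {"model1": ("dwi_", "adc_"),
#           "model2": ("t2_", "cet1_"),
#           "model5": ("dwi_", "adc_", "t2_", "cet1_")}
# aucs = {name: mean_validation_auc(X, y, p) for name, p in combos.items()}
```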
15 pages, 388 KiB  
Review
Virtual Reality Rehabilitation Systems for Cancer Survivors: A Narrative Review of the Literature
by Antonio Melillo, Andrea Chirico, Giuseppe De Pietro, Luigi Gallo, Giuseppe Caggianese, Daniela Barone, Michelino De Laurentiis and Antonio Giordano
Cancers 2022, 14(13), 3163; https://doi.org/10.3390/cancers14133163 - 28 Jun 2022
Cited by 7 | Viewed by 4250
Abstract
Rehabilitation plays a crucial role in cancer care, as the functioning of cancer survivors is frequently compromised by impairments that can result from the disease itself but also from the long-term sequelae of the treatment. Nevertheless, the current literature shows that only a minority of patients receive physical and/or cognitive rehabilitation. This lack of rehabilitative care is a consequence of many factors, one of which includes the transportation issues linked to disability that limit the patient’s access to rehabilitation facilities. The recent COVID-19 pandemic has further shown the benefits of improving telemedicine and home-based rehabilitative interventions to facilitate the delivery of rehabilitation programs when attendance at healthcare facilities is an obstacle. In recent years, researchers have been investigating the benefits of the application of virtual reality to rehabilitation. Virtual reality has been shown to improve adherence and training intensity through gamification, to allow the replication of real-life scenarios, and to stimulate patients in a multimodal manner. In the present work, we offer an overview of the literature on virtual reality-implemented cancer rehabilitation. The existence of wide margins for technological development allows us to expect further improvements, but more randomized controlled trials are needed to confirm the hypothesis that virtual reality rehabilitation (VRR) may improve adherence rates and facilitate telerehabilitation. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
15 pages, 1433 KiB  
Review
Machine Learning Tools for Image-Based Glioma Grading and the Quality of Their Reporting: Challenges and Opportunities
by Sara Merkaj, Ryan C. Bahar, Tal Zeevi, MingDe Lin, Ichiro Ikuta, Khaled Bousabarah, Gabriel I. Cassinelli Petersen, Lawrence Staib, Seyedmehdi Payabvash, John T. Mongan, Soonmee Cha and Mariam S. Aboian
Cancers 2022, 14(11), 2623; https://doi.org/10.3390/cancers14112623 - 25 May 2022
Cited by 7 | Viewed by 3662
Abstract
Technological innovation has enabled the development of machine learning (ML) tools that aim to improve the practice of radiologists. In the last decade, ML applications to neuro-oncology have expanded significantly, with the pre-operative prediction of glioma grade using medical imaging as a specific area of interest. We introduce the subject of ML models for glioma grade prediction by remarking upon the models reported in the literature as well as by describing their characteristic developmental workflow and widely used classifier algorithms. The challenges facing these models—including data sources, external validation, and glioma grade classification methods—are highlighted. We also discuss the quality of how these models are reported, explore the present and future of reporting guidelines and risk of bias tools, and provide suggestions for the reporting of prospective works. Finally, this review offers insights into next steps that the field of ML glioma grade prediction can take to facilitate clinical implementation. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
15 pages, 820 KiB  
Article
Deep Learning Using CT Images to Grade Clear Cell Renal Cell Carcinoma: Development and Validation of a Prediction Model
by Lifeng Xu, Chun Yang, Feng Zhang, Xuan Cheng, Yi Wei, Shixiao Fan, Minghui Liu, Xiaopeng He, Jiali Deng, Tianshu Xie, Xiaomin Wang, Ming Liu and Bin Song
Cancers 2022, 14(11), 2574; https://doi.org/10.3390/cancers14112574 - 24 May 2022
Cited by 13 | Viewed by 2194
Abstract
This retrospective study aimed to develop and validate deep-learning-based models for grading clear cell renal cell carcinoma (ccRCC) patients. A cohort of 706 patients (n = 706) with pathologically verified ccRCC was used in this study. A temporal split was applied to verify our models: the first 83.9% of the cases (years 2010–2017) for development and the last 16.1% (years 2018–2019) for validation (development cohort: n = 592; validation cohort: n = 114). Here, we demonstrated a deep learning (DL) framework initialized by a self-supervised pre-training method and developed with the addition of a mixed loss strategy and sample reweighting to identify patients with high-grade ccRCC. Four types of DL networks were developed separately and further combined with different weights for better prediction. The single DL model achieved up to an area under the curve (AUC) of 0.864 in the validation cohort, while the ensembled model yielded the best predictive performance with an AUC of 0.882. These findings confirm that our DL approach performs favorably or comparably to biopsy-based grade assessment of ccRCC while being non-invasive and labor-saving. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
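Combining several networks' predicted probabilities with fixed weights and scoring the temporally held-out cohort reduces to a weighted average plus an AUC calculation, as in the sketch below; the probability arrays and weights are placeholders, and the self-supervised pre-training, mixed loss, and sample reweighting described above are not reproduced.

```python
# Minimal sketch: weighted ensemble of per-patient high-grade probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

def weighted_ensemble(prob_list, weights) -> np.ndarray:
    # prob_list: list of (N,) arrays, one per trained network; weights sum to 1.
    probs = np.stack(prob_list, axis=0)
    return np.average(probs, axis=0, weights=np.asarray(weights, dtype=float))

# Example with four hypothetical networks on a temporally held-out cohort:
# p_ens = weighted_ensemble([p1, p2, p3, p4], [0.4, 0.3, 0.2, 0.1])
# print("ensemble AUC:", roc_auc_score(y_val, p_ens))
```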
13 pages, 19087 KiB  
Article
Synaptophysin, CD117, and GATA3 as a Diagnostic Immunohistochemical Panel for Small Cell Neuroendocrine Carcinoma of the Urinary Tract
by Gi Hwan Kim, Yong Mee Cho, So-Woon Kim, Ja-Min Park, Sun Young Yoon, Gowun Jeong, Dong-Myung Shin, Hyein Ju and Se Un Jeong
Cancers 2022, 14(10), 2495; https://doi.org/10.3390/cancers14102495 - 19 May 2022
Viewed by 2402
Abstract
Although the diagnosis of small cell neuroendocrine carcinoma (SCNEC) is based on its characteristic histology, immunohistochemistry (IHC) is commonly employed to confirm neuroendocrine differentiation (NED). The challenge here is that SCNEC may yield negative results for traditional neuroendocrine markers. To establish an IHC panel for NED, 17 neuronal, basal, and luminal markers were examined on a tissue microarray construct generated from 47 cases of 34 patients with SCNEC as a discovery cohort. A decision tree algorithm was employed to analyze the extent and intensity of immunoreactivity and to develop a diagnostic model. An external cohort of eight cases and transmission electron microscopy (TEM) were used to validate the model. Among the 17 markers, the decision tree diagnostic model selected 3 markers that classified NED with 98.4% accuracy. The extent of synaptophysin (>5%) was selected as the initial parameter, the extent of CD117 (>20%) as the second, and then the intensity of GATA3 (≤1.5, negative or weak immunoreactivity) as the third for NED. The importance of each variable was 0.758, 0.213, and 0.029, respectively. The model was validated by TEM and the external cohort. The decision tree model using synaptophysin, CD117, and GATA3 may help confirm NED in traditional marker-negative SCNEC. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
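A shallow decision tree of the kind described above can be fitted and inspected with scikit-learn; the feature names and data layout below are assumptions, and the learned splits would only approximate the reported cut-offs (synaptophysin extent >5%, CD117 extent >20%, GATA3 intensity ≤1.5).

```python
# Minimal sketch: three-level decision tree on IHC extent/intensity features.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

def fit_ihc_tree(features: pd.DataFrame, ned_labels) -> DecisionTreeClassifier:
    # features: columns such as "syn_extent", "cd117_extent", "gata3_intensity".
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(features, ned_labels)
    # Print the learned splits to compare them with the reported thresholds.
    print(export_text(tree, feature_names=list(features.columns)))
    return tree
```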
7 pages, 519 KiB  
Communication
Method for the Intraoperative Detection of IDH Mutation in Gliomas with Differential Mobility Spectrometry
by Ilkka Haapala, Anton Rauhameri, Antti Roine, Meri Mäkelä, Anton Kontunen, Markus Karjalainen, Aki Laakso, Päivi Koroknay-Pál, Kristiina Nordfors, Hannu Haapasalo, Niku Oksala, Antti Vehkaoja and Joonas Haapasalo
Curr. Oncol. 2022, 29(5), 3252-3258; https://doi.org/10.3390/curroncol29050265 - 04 May 2022
Cited by 2 | Viewed by 2001
Abstract
Isocitrate dehydrogenase (IDH) mutation status is an important factor for surgical decision-making: patients with IDH-mutated tumors are more likely to have a good long-term prognosis, and thus favor aggressive resection with more survival benefit to gain. Patients with IDH wild-type tumors have generally poorer prognosis and, therefore, conservative resection to avoid neurological deficit is favored. Current histopathological analysis with frozen sections is unable to identify IDH mutation status intraoperatively, and more advanced methods are therefore needed. We examined a novel method suitable for intraoperative IDH mutation identification that is based on the differential mobility spectrometry (DMS) analysis of the tumor. We prospectively obtained tumor samples from 22 patients, including 11 IDH-mutated and 11 IDH wild-type tumors. The tumors were cut into 88 smaller specimens that were analyzed with DMS. With a linear discriminant analysis (LDA) algorithm, the DMS was able to classify tumor samples with 86% classification accuracy, 86% sensitivity, and 85% specificity. Our results show that DMS is able to differentiate IDH-mutated and IDH wild-type tumors with good accuracy in a setting suitable for intraoperative use, which makes it a promising novel solution for neurosurgical practice. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
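The LDA classification step could be sketched as follows, assuming each row of X is one DMS spectrum and y the IDH status (1 = mutated, 0 = wild type); for simplicity the sketch uses plain k-fold prediction, whereas in practice specimens from the same tumour should be kept in the same fold (e.g., grouped cross-validation).

```python
# Minimal sketch: cross-validated LDA with accuracy, sensitivity, and specificity.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

def lda_idh_metrics(X, y, cv: int = 5):
    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=cv)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # IDH-mutated correctly identified
    specificity = tn / (tn + fp)   # IDH wild-type correctly identified
    return accuracy, sensitivity, specificity
```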
12 pages, 1835 KiB  
Article
Development of an Image Analysis-Based Prognosis Score Using Google’s Teachable Machine in Melanoma
by Stephan Forchhammer, Amar Abu-Ghazaleh, Gisela Metzler, Claus Garbe and Thomas Eigentler
Cancers 2022, 14(9), 2243; https://doi.org/10.3390/cancers14092243 - 29 Apr 2022
Cited by 8 | Viewed by 2663
Abstract
Background: The increasing number of melanoma patients makes it necessary to establish new strategies for prognosis assessment to ensure follow-up care. Deep-learning-based image analysis of primary melanoma could be a future component of risk stratification. Objectives: To develop a risk score for overall survival based on image analysis through artificial intelligence (AI) and validate it in a test cohort. Methods: Hematoxylin and eosin (H&E)-stained sections of 831 melanomas diagnosed from 2012 to 2015 were photographed and used to perform deep-learning-based group classification. For this purpose, the freely available software of Google’s Teachable Machine was used. Five hundred patient sections were used as the training cohort, and 331 sections served as the test cohort. Results: Using Google’s Teachable Machine, a prognosis score for overall survival could be developed that achieved a statistically significant prognosis estimate with an AUC of 0.694 in a ROC analysis based solely on image sections of approximately 250 × 250 µm. The prognosis group “low-risk” (n = 230) showed an overall survival rate of 93%, whereas the prognosis group “high-risk” (n = 101) showed an overall survival rate of 77.2%. Conclusions: The study supports the possibility of using deep-learning-based classification systems for risk stratification in melanoma. The AI assessment used in this study provides a significant risk estimate in melanoma, but it does not considerably improve the existing risk classification based on the TNM classification. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
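Scoring held-out image tiles with an exported Teachable Machine classifier and summarising the result as an ROC analysis could look roughly like the sketch below; the file name, preprocessing, class-column index, and the 0.5 risk threshold are assumptions, not details taken from the study.

```python
# Minimal sketch: ROC analysis of an exported Teachable Machine image classifier.
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

def load_teachable_machine(path: str = "keras_model.h5") -> tf.keras.Model:
    # Assumes the classifier was exported via Teachable Machine's Keras option.
    return tf.keras.models.load_model(path, compile=False)

def risk_scores(model: tf.keras.Model, tiles: np.ndarray) -> np.ndarray:
    # tiles: (N, H, W, 3) array preprocessed as during training; column 1 is
    # assumed to be the "high-risk" class (depends on the export's label order).
    return model.predict(tiles, verbose=0)[:, 1]

# model = load_teachable_machine()
# scores = risk_scores(model, test_tiles)
# print("AUC:", roc_auc_score(event_within_followup, scores))
# high_risk = scores >= 0.5   # illustrative cut-off for the two prognosis groups
```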
12 pages, 1618 KiB  
Article
Predictive Value of 18F-FDG PET/CT Using Machine Learning for Pathological Response to Neoadjuvant Concurrent Chemoradiotherapy in Patients with Stage III Non-Small Cell Lung Cancer
by Jang Yoo, Jaeho Lee, Miju Cheon, Sang-Keun Woo, Myung-Ju Ahn, Hong Ryull Pyo, Yong Soo Choi, Joung Ho Han and Joon Young Choi
Cancers 2022, 14(8), 1987; https://doi.org/10.3390/cancers14081987 - 14 Apr 2022
Cited by 8 | Viewed by 2242
Abstract
We investigated predictions from 18F-FDG PET/CT using machine learning (ML) to assess the neoadjuvant concurrent chemoradiotherapy (CCRT) response of patients with stage III non-small cell lung cancer (NSCLC) and compared them with predictions from conventional PET parameters and from physicians. A retrospective study of 430 patients was conducted. They underwent 18F-FDG PET/CT before initial treatment and after neoadjuvant CCRT followed by curative surgery. We analyzed texture features from segmented tumors and reviewed the pathologic response. The ML model employed a random forest and was used to classify the binary outcome of the pathological complete response (pCR). The predictive accuracy of the ML model for the pCR was 93.4%. The accuracy of predicting pCR using the conventional PET parameters was up to 70.9%, and the accuracy of the physicians’ assessment was 80.5%. The accuracy of the prediction from the ML model was significantly higher than those derived from conventional PET parameters and provided by physicians (p < 0.05). The ML model is useful for predicting pCR after neoadjuvant CCRT, as it showed a higher predictive accuracy than that achieved with conventional PET parameters or by physicians. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
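Texture-feature extraction from a segmented PET tumour followed by a random forest classifier could be sketched with pyradiomics and scikit-learn as below; the paths, enabled feature classes, and cross-validation settings are illustrative, and the image and mask must be in a SimpleITK-readable format (e.g., NIfTI or NRRD).

```python
# Minimal sketch: texture features per tumour, then random forest pCR classification.
import pandas as pd
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("glcm")   # one texture class for illustration

def tumour_features(image_path: str, mask_path: str) -> dict:
    # Returns feature name -> value for one segmented tumour.
    result = extractor.execute(image_path, mask_path)
    return {k: float(v) for k, v in result.items() if not k.startswith("diagnostics")}

# rows = [tumour_features(img, msk) for img, msk in zip(pet_paths, mask_paths)]
# X = pd.DataFrame(rows)
# rf = RandomForestClassifier(n_estimators=500, random_state=0)
# print(cross_val_score(rf, X, pcr_labels, cv=5, scoring="accuracy").mean())
```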
12 pages, 3577 KiB  
Article
Gut Microbial Shifts Indicate Melanoma Presence and Bacterial Interactions in a Murine Model
by Marco Rossi, Salvatore M. Aspromonte, Frederick J. Kohlhapp, Jenna H. Newman, Alex Lemenze, Russell J. Pepe, Samuel M. DeFina, Nora L. Herzog, Robert Donnelly, Timothy M. Kuzel, Jochen Reiser, Jose A. Guevara-Patino and Andrew Zloza
Diagnostics 2022, 12(4), 958; https://doi.org/10.3390/diagnostics12040958 - 12 Apr 2022
Viewed by 2952
Abstract
Through a multitude of studies, the gut microbiota has been recognized as a significant influencer of both homeostasis and pathophysiology. Certain microbial taxa can even affect treatments such as cancer immunotherapies, including the immune checkpoint blockade. These taxa can impact such processes both individually as well as collectively through mechanisms from quorum sensing to metabolite production. Due to this overarching presence of the gut microbiota in many physiological processes distal to the GI tract, we hypothesized that mice bearing tumors at extraintestinal sites would display a distinct intestinal microbial signature from non-tumor-bearing mice, and that such a signature would involve taxa that collectively shift with tumor presence. Microbial OTUs were determined from 16S rRNA genes isolated from the fecal samples of C57BL/6 mice challenged with either B16-F10 melanoma cells or PBS control and analyzed using QIIME. Relative proportions of bacteria were determined for each mouse and, using machine-learning approaches, significantly altered taxa and co-occurrence patterns between tumor- and non-tumor-bearing mice were found. Mice with a tumor had elevated proportions of Ruminococcaceae, Peptococcaceae.g_rc4.4, and Christensenellaceae, as well as significant information gains and ReliefF weights for Bacteroidales.f__S24.7, Ruminococcaceae, Clostridiales, and Erysipelotrichaceae. Bacteroidales.f__S24.7, Ruminococcaceae, and Clostridiales were also implicated through shifting co-occurrences and PCA values. Using these seven taxa as a melanoma signature, a neural network reached an 80% tumor detection accuracy in a 10-fold stratified random sampling validation. These results indicated gut microbial proportions as a biosensor for tumor detection, and that shifting co-occurrences could be used to reveal relevant taxa. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
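The 10-fold stratified validation of a small neural network on the seven-taxon signature might be sketched as follows, assuming X holds per-mouse relative abundances of those taxa and y the tumour status; the network size and scaling step are illustrative.

```python
# Minimal sketch: 10-fold stratified accuracy of a small neural network.
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def tumour_detection_accuracy(X, y) -> float:
    # X: (n_mice, 7) relative abundances of the signature taxa; y: 1 = tumour-bearing.
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
```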
26 pages, 9705 KiB  
Article
System for the Recognizing of Pigmented Skin Lesions with Fusion and Analysis of Heterogeneous Data Based on a Multimodal Neural Network
by Pavel Alekseevich Lyakhov, Ulyana Alekseevna Lyakhova and Nikolay Nikolaevich Nagornov
Cancers 2022, 14(7), 1819; https://doi.org/10.3390/cancers14071819 - 03 Apr 2022
Cited by 7 | Viewed by 3568
Abstract
Today, skin cancer is one of the most common malignant neoplasms in the human body. Diagnosis of pigmented lesions is challenging even for experienced dermatologists due to the wide range of morphological manifestations. Artificial intelligence technologies are capable of equaling and even surpassing the capabilities of a dermatologist in terms of efficiency. The main problem in implementing intelligent analysis systems is their low accuracy. One possible way to increase accuracy is to add preliminary processing of the visual data and to use heterogeneous data. The article proposes a multimodal neural network system for identifying pigmented skin lesions that includes a preliminary stage for identifying and removing hair from dermatoscopic images. The novelty of the proposed system lies in the joint use of the stage of preliminary cleaning of hair structures and a multimodal neural network system for the analysis of heterogeneous data. The accuracy of the proposed system in recognizing pigmented skin lesions across 10 diagnostically significant categories was 83.6%. The use of the proposed system by dermatologists as an auxiliary diagnostic method will minimize the impact of the human factor, assist in making medical decisions, and expand the possibilities of early detection of skin cancer. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
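A multimodal network that fuses a dermatoscopic-image branch with a small dense branch for patient metadata can be sketched in Keras as below; the backbone, layer sizes, metadata fields, and ten-class output are assumptions, and the hair-removal preprocessing stage described above is not reproduced here.

```python
# Minimal sketch: image + metadata fusion for ten-category lesion classification.
import tensorflow as tf

def build_multimodal_model(num_classes: int = 10, meta_dim: int = 3) -> tf.keras.Model:
    img_in = tf.keras.Input(shape=(224, 224, 3), name="image")
    meta_in = tf.keras.Input(shape=(meta_dim,), name="metadata")  # e.g., age, sex, site

    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, pooling="avg", weights="imagenet",
        input_shape=(224, 224, 3))
    img_feat = backbone(img_in)
    meta_feat = tf.keras.layers.Dense(32, activation="relu")(meta_in)

    fused = tf.keras.layers.Concatenate()([img_feat, meta_feat])
    fused = tf.keras.layers.Dense(128, activation="relu")(fused)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(fused)

    model = tf.keras.Model([img_in, meta_in], out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```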
11 pages, 11532 KiB  
Article
Cost-Effectiveness of Artificial Intelligence Support in Computed Tomography-Based Lung Cancer Screening
by Sebastian Ziegelmayer, Markus Graf, Marcus Makowski, Joshua Gawlitza and Felix Gassert
Cancers 2022, 14(7), 1729; https://doi.org/10.3390/cancers14071729 - 29 Mar 2022
Cited by 14 | Viewed by 3711
Abstract
Background: Lung cancer screening is already implemented in the USA and strongly recommended by European Radiological and Thoracic societies as well. Upon implementation, the total number of thoracic computed tomographies (CT) is likely to rise significantly. As shown in previous studies, modern artificial intelligence-based algorithms are on par with or even exceed radiologists’ performance in lung nodule detection and classification. Therefore, the aim of this study was to evaluate the cost-effectiveness of an AI-based system in the context of baseline lung cancer screening. Methods: In this retrospective study, a decision model based on Markov simulation was developed to estimate the quality-adjusted life-years (QALYs) and lifetime costs of the diagnostic modalities. Literature research was performed to determine model input parameters. Model uncertainty and possible costs of the AI system were assessed using deterministic and probabilistic sensitivity analysis. Results: In the base case scenario, CT + AI resulted in a negative incremental cost-effectiveness ratio (ICER) as compared to CT only, showing lower costs and higher effectiveness. Threshold analysis showed that the ICER remained negative up to a threshold of USD 68 for the AI support. The willingness-to-pay of USD 100,000 was crossed at a value of USD 1240. Deterministic and probabilistic sensitivity analysis showed model robustness for varying input parameters. Conclusion: Based on our results, the use of an AI-based system in the initial low-dose CT scan of lung cancer screening is a feasible diagnostic strategy from a cost-effectiveness perspective. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
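Once the Markov model has produced lifetime costs and QALYs per strategy, the ICER and the threshold price for the AI support reduce to simple arithmetic, as in the sketch below; the numbers in the usage comments are placeholders, not the study's model outputs.

```python
# Minimal sketch: ICER and maximum AI price under a willingness-to-pay threshold.
def icer(cost_new: float, qaly_new: float, cost_old: float, qaly_old: float) -> float:
    # ICER = (C_new - C_old) / (E_new - E_old); negative when the new strategy
    # is both cheaper and more effective (dominant).
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def max_ai_price(cost_new, qaly_new, cost_old, qaly_old, willingness_to_pay) -> float:
    # Largest per-examination AI cost at which the ICER stays at or below the
    # willingness-to-pay threshold.
    return willingness_to_pay * (qaly_new - qaly_old) - (cost_new - cost_old)

# Illustrative values only:
# print(icer(10_000, 8.15, 10_200, 8.10))                  # negative -> CT + AI dominant
# print(max_ai_price(10_000, 8.15, 10_200, 8.10, 100_000))
```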
13 pages, 740 KiB  
Article
Discovery of Pre-Treatment FDG PET/CT-Derived Radiomics-Based Models for Predicting Outcome in Diffuse Large B-Cell Lymphoma
by Russell Frood, Matthew Clark, Cathy Burton, Charalampos Tsoumpas, Alejandro F. Frangi, Fergus Gleeson, Chirag Patel and Andrew F. Scarsbrook
Cancers 2022, 14(7), 1711; https://doi.org/10.3390/cancers14071711 - 28 Mar 2022
Cited by 10 | Viewed by 2581
Abstract
Background: Approximately 30% of patients with diffuse large B-cell lymphoma (DLBCL) will have recurrence. The aim of this study was to develop a radiomics-based model derived from baseline PET/CT to predict 2-year event-free survival (2-EFS). Methods: Patients with DLBCL treated with R-CHOP chemotherapy undergoing pre-treatment PET/CT between January 2008 and January 2018 were included. The dataset was split into training and internal unseen test sets (ratio 80:20). A logistic regression model using metabolic tumour volume (MTV) and six different machine learning classifiers built from clinical and radiomic features derived from the baseline PET/CT were trained and tuned using four-fold cross-validation. The model with the highest mean validation receiver operator characteristic (ROC) curve area under the curve (AUC) was tested on the unseen test set. Results: 229 DLBCL patients met the inclusion criteria with 62 (27%) having 2-EFS events. The training cohort had 183 patients with 46 patients in the unseen test cohort. The model with the highest mean validation AUC combined clinical and radiomic features in a ridge regression model with a mean validation AUC of 0.75 ± 0.06 and a test AUC of 0.73. Conclusions: Radiomics-based models demonstrate promise in predicting outcomes in DLBCL patients. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
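An 80:20 split with four-fold cross-validated tuning of an L2-penalised (ridge-type) logistic model on combined clinical and radiomic features, followed by a single unseen-test AUC, could be sketched as below; the feature matrix, scaling step, and C grid are assumptions rather than the authors' configuration.

```python
# Minimal sketch: 80:20 split, 4-fold tuned ridge-penalised logistic model, test AUC.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_and_test(X, y, random_state: int = 0):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=random_state)
    pipe = make_pipeline(StandardScaler(),
                         LogisticRegression(penalty="l2", max_iter=5000))
    grid = GridSearchCV(pipe, {"logisticregression__C": [0.01, 0.1, 1, 10]},
                        cv=4, scoring="roc_auc")
    grid.fit(X_tr, y_tr)
    test_auc = roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1])
    return grid.best_score_, test_auc   # mean validation AUC, unseen-test AUC
```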
22 pages, 17053 KiB  
Review
Current Value of Biparametric Prostate MRI with Machine-Learning or Deep-Learning in the Detection, Grading, and Characterization of Prostate Cancer: A Systematic Review
by Henrik J. Michaely, Giacomo Aringhieri, Dania Cioni and Emanuele Neri
Diagnostics 2022, 12(4), 799; https://doi.org/10.3390/diagnostics12040799 - 24 Mar 2022
Cited by 16 | Viewed by 3111
Abstract
Prostate cancer detection with magnetic resonance imaging is based on a standardized MRI protocol according to the PI-RADS guidelines, including morphologic imaging, diffusion-weighted imaging, and perfusion. To facilitate data acquisition and analysis, the contrast-enhanced perfusion is often omitted, resulting in a biparametric prostate MRI protocol. The intention of this review is to analyze the current value of biparametric prostate MRI in combination with methods of machine-learning and deep learning in the detection, grading, and characterization of prostate cancer; where available, a direct comparison with human radiologist performance was made. PubMed was systematically queried and 29 appropriate studies were identified and retrieved. The data show that detection of clinically significant prostate cancer and differentiation of prostate cancer from non-cancerous tissue using machine-learning and deep learning is feasible with promising results. Some machine-learning and deep-learning techniques currently seem to be as good as human radiologists in classifying single lesions according to the PI-RADS score. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
19 pages, 713 KiB  
Review
Advancements in Oncology with Artificial Intelligence—A Review Article
by Nikitha Vobugari, Vikranth Raja, Udhav Sethi, Kejal Gandhi, Kishore Raja and Salim R. Surani
Cancers 2022, 14(5), 1349; https://doi.org/10.3390/cancers14051349 - 06 Mar 2022
Cited by 20 | Viewed by 6539
Abstract
Well-trained machine learning (ML) and artificial intelligence (AI) systems can provide clinicians with therapeutic assistance, potentially increasing efficiency and improving efficacy. ML has demonstrated high accuracy in oncology-related diagnostic imaging, including screening mammography interpretation, colon polyp detection, and glioma classification and grading. By utilizing ML techniques, the manual steps of detecting and segmenting lesions are greatly reduced. ML-based tumor imaging analysis is independent of the experience level of evaluating physicians, and the results are expected to be more standardized and accurate. One of the biggest challenges is its generalizability worldwide. The current detection and screening methods for colon polyps and breast cancer have a vast amount of data, so they are ideal areas for studying the global standardization of artificial intelligence. Central nervous system cancers are rare and have poor prognoses based on current management standards. ML offers the prospect of unraveling undiscovered features from routinely acquired neuroimaging for improving treatment planning, prognostication, monitoring, and response assessment of CNS tumors such as gliomas. Studying AI in such rare cancer types may improve standard management methods by augmenting personalized/precision medicine. This review aims to provide clinicians and medical researchers with a basic understanding of how ML works and its role in oncology, especially in breast cancer, colorectal cancer, and primary and metastatic brain cancer. Understanding AI basics, current achievements, and future challenges is crucial to advancing the use of AI in oncology. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
14 pages, 2029 KiB  
Article
Thermal Ablation of Liver Tumors Guided by Augmented Reality: An Initial Clinical Experience
by Marco Solbiati, Tiziana Ierace, Riccardo Muglia, Vittorio Pedicini, Roberto Iezzi, Katia M. Passera, Alessandro C. Rotilio, S. Nahum Goldberg and Luigi A. Solbiati
Cancers 2022, 14(5), 1312; https://doi.org/10.3390/cancers14051312 - 03 Mar 2022
Cited by 18 | Viewed by 2529
Abstract
Background: Over the last two decades, augmented reality (AR) has been used as a visualization tool in many medical fields in order to increase precision, limit the radiation dose, and decrease the variability among operators. Here, we report the first in vivo study of a novel AR system for the guidance of percutaneous interventional oncology procedures. Methods: Eight patients with 15 liver tumors (0.7–3.0 cm, mean 1.56 ± 0.55 cm) underwent percutaneous thermal ablations using AR guidance (i.e., the Endosight system). Prior to the intervention, the patients were evaluated with US and CT. The targeted nodules were segmented and three-dimensionally (3D) reconstructed from CT images, and the probe trajectory to the target was defined. The procedures were guided solely by AR, with the position of the probe tip subsequently confirmed by conventional imaging. The primary endpoints were the targeting accuracy, the system setup time, and targeting time (i.e., from the target visualization to the correct needle insertion). The technical success was also evaluated and validated by co-registration software. Upon completion, the operators were assessed for cybersickness or other symptoms related to the use of AR. Results: Rapid system setup and procedural targeting times were noted (setup: mean 14.3 min, range 12.0–17.2 min; targeting: mean 4.3 min, range 3.2–5.7 min). The high targeting accuracy (mean 3.4 mm, range 2.6–4.2 mm) was accompanied by technical success in all 15 lesions (i.e., the complete ablation of the tumor and 13/15 lesions with a >90% 5-mm periablational margin). No intra/periprocedural complications or operator cybersickness were observed. Conclusions: AR guidance is highly accurate, and allows for the confident performance of percutaneous thermal ablations. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
14 pages, 14482 KiB  
Article
3D Convolutional Neural Network-Based Denoising of Low-Count Whole-Body 18F-Fluorodeoxyglucose and 89Zr-Rituximab PET Scans
by Bart M. de Vries, Sandeep S. V. Golla, Gerben J. C. Zwezerijnen, Otto S. Hoekstra, Yvonne W. S. Jauw, Marc C. Huisman, Guus A. M. S. van Dongen, Willemien C. Menke-van der Houven van Oordt, Josée J. M. Zijlstra-Baalbergen, Liesbet Mesotten, Ronald Boellaard and Maqsood Yaqub
Diagnostics 2022, 12(3), 596; https://doi.org/10.3390/diagnostics12030596 - 25 Feb 2022
Cited by 1 | Viewed by 2057
Abstract
Acquisition time and injected activity of 18F-fluorodeoxyglucose (18F-FDG) PET should ideally be reduced. However, this decreases the signal-to-noise ratio (SNR), which impairs the diagnostic value of these PET scans. In addition, 89Zr-antibody PET is known to have a low SNR. To improve the diagnostic value of these scans, a Convolutional Neural Network (CNN) denoising method is proposed. The aim of this study was therefore to develop CNNs to increase SNR for low-count 18F-FDG and 89Zr-antibody PET. Super-low-count, low-count and full-count 18F-FDG PET scans from 60 primary lung cancer patients and full-count 89Zr-rituximab PET scans from five patients with non-Hodgkin lymphoma were acquired. CNNs were built to capture the features and to denoise the PET scans. Additionally, Gaussian smoothing (GS) and Bilateral filtering (BF) were evaluated. The performance of the denoising approaches was assessed based on the tumour recovery coefficient (TRC), coefficient of variance (COV; level of noise), and a qualitative assessment by two nuclear medicine physicians. The CNNs had a higher TRC and a comparable or lower COV than GS and BF, and were also the method preferred by the two observers for both 18F-FDG and 89Zr-rituximab PET. The CNNs improved the SNR of low-count 18F-FDG and 89Zr-rituximab PET, with clinical performance comparable to or better than that of the full-count PET. Additionally, the CNNs showed better performance than GS and BF. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
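A small residual 3D convolutional denoiser trained to map low-count to full-count PET patches might look like the sketch below; the depth, filter counts, patch size, and MSE loss are illustrative and do not reproduce the published architecture or the Gaussian-smoothing and bilateral-filtering baselines.

```python
# Minimal sketch: residual 3D CNN that denoises low-count PET patches.
import tensorflow as tf

def build_denoiser(patch_shape=(32, 32, 32, 1)) -> tf.keras.Model:
    x_in = tf.keras.Input(shape=patch_shape)
    x = x_in
    for _ in range(4):
        x = tf.keras.layers.Conv3D(32, 3, padding="same", activation="relu")(x)
    noise = tf.keras.layers.Conv3D(1, 3, padding="same")(x)
    # Residual formulation: predict the noise and subtract it from the input.
    out = tf.keras.layers.Subtract()([x_in, noise])
    model = tf.keras.Model(x_in, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# model = build_denoiser()
# model.fit(low_count_patches, full_count_patches, epochs=50, batch_size=8)
```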
11 pages, 1676 KiB  
Article
Evaluation of Computer-Aided Detection (CAD) in Screening Automated Breast Ultrasound Based on Characteristics of CAD Marks and False-Positive Marks
by Jeongmin Lee, Bong Joo Kang, Sung Hun Kim and Ga Eun Park
Diagnostics 2022, 12(3), 583; https://doi.org/10.3390/diagnostics12030583 - 24 Feb 2022
Cited by 6 | Viewed by 1989
Abstract
The present study evaluated the effectiveness of a computer-aided detection (CAD) system in screening automated breast ultrasound (ABUS) and analyzed the characteristics of CAD marks and the causes of false-positive marks. A total of 846 women who underwent ABUS for screening from January 2017 to December 2017 were included. Commercial CAD was used in all ABUS examinations, and its diagnostic performance and efficacy in shortening the reading time (RT) were evaluated. In addition, we analyzed the characteristics of CAD marks and the causes of false-positive marks. A total of 1032 CAD marks were displayed on a per-patient basis and 534 CAD marks on a per-lesion basis. Five cases of breast cancer were diagnosed. The sensitivity, specificity, PPV, and NPV of CAD were 60.0%, 59.0%, 0.9%, and 99.6% for the 846 patients. In the case of a negative study, decision making was faster and easier. Among the 530 false-positive marks, 459 were clearly identified as pseudo-lesions; the most common cause was marginal shadowing, followed by Cooper’s ligament shadowing, peri-areolar shadowing, rib, and skin lesions. Even though CAD does not improve the performance of ABUS and a large number of false-positive marks were detected, the addition of CAD reduces RT, especially in the case of negative screening ultrasound. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
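The per-patient screening metrics quoted above follow directly from a 2x2 table of cancer status versus CAD positivity; a minimal sketch with assumed boolean arrays is given below.

```python
# Minimal sketch: per-patient sensitivity, specificity, PPV, and NPV of CAD marks.
import numpy as np

def screening_metrics(has_cancer: np.ndarray, cad_positive: np.ndarray) -> dict:
    # Both inputs are boolean arrays of length n_patients.
    tp = np.sum(has_cancer & cad_positive)
    fn = np.sum(has_cancer & ~cad_positive)
    fp = np.sum(~has_cancer & cad_positive)
    tn = np.sum(~has_cancer & ~cad_positive)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}
```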
12 pages, 1077 KiB  
Article
Machine Learning Model to Stratify the Risk of Lymph Node Metastasis for Early Gastric Cancer: A Single-Center Cohort Study
by Ji-Eun Na, Yeong-Chan Lee, Tae-Jun Kim, Hyuk Lee, Hong-Hee Won, Yang-Won Min, Byung-Hoon Min, Jun-Haeng Lee, Poong-Lyul Rhee and Jae J. Kim
Cancers 2022, 14(5), 1121; https://doi.org/10.3390/cancers14051121 - 22 Feb 2022
Cited by 3 | Viewed by 2362
Abstract
Stratification of the risk of lymph node metastasis (LNM) in patients with non-curative resection after endoscopic resection (ER) for early gastric cancer (EGC) is crucial in determining additional treatment strategies and preventing unnecessary surgery. Hence, we developed a machine learning (ML) model and validated its performance for the stratification of LNM risk in patients with EGC. We enrolled patients who underwent primary surgery or additional surgery after ER for EGC between May 2005 and March 2021. Additionally, patients who underwent ER alone for EGC between May 2005 and March 2016 and were followed up for at least 5 years were included. The ML model was built based on a development set (70%) using logistic regression, random forest (RF), and support vector machine (SVM) analyses and assessed in a validation set (30%). In the validation set, LNM was found in 337 of 4428 patients (7.6%). Among the total patients, the area under the receiver operating characteristic (AUROC) for predicting LNM risk was 0.86 in the logistic regression, 0.85 in RF, and 0.86 in SVM analyses; in patients with initial ER, AUROC for predicting LNM risk was 0.90 in the logistic regression, 0.88 in RF, and 0.89 in SVM analyses. The ML model could stratify the LNM risk into very low (<1%), low (<3%), intermediate (<7%), and high (≥7%) risk categories, which was comparable with actual LNM rates. We demonstrate that the ML model can be used to identify LNM risk. However, this tool requires further validation in EGC patients with non-curative resection after ER for actual application. Full article
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
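Mapping a fitted model's predicted LNM probabilities onto the four reported risk bands is a simple binning step, sketched below; the probabilities are assumed to come from a classifier such as the logistic regression, RF, or SVM models described in the abstract.

```python
# Minimal sketch: bin predicted LNM probabilities into the reported risk categories.
import numpy as np

def stratify_lnm_risk(probabilities: np.ndarray) -> np.ndarray:
    # <1% very low, <3% low, <7% intermediate, >=7% high.
    bands = np.array(["very low", "low", "intermediate", "high"])
    return bands[np.digitize(probabilities, bins=[0.01, 0.03, 0.07])]

# Example: stratify_lnm_risk(np.array([0.004, 0.02, 0.05, 0.12]))
# -> array(['very low', 'low', 'intermediate', 'high'], dtype='<U12')
```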