Search Results (36)

Search Parameters:
Keywords = the cancer imaging archive (TCIA)

8 pages, 4426 KB  
Proceeding Paper
Application of Image Analysis Technology in Detecting and Diagnosing Liver Tumors
by Van-Khang Nguyen, Chiung-An Chen, Cheng-Yu Hsu and Bo-Yi Li
Eng. Proc. 2025, 92(1), 9; https://doi.org/10.3390/engproc2025092009 - 16 Apr 2025
Viewed by 2106
Abstract
We applied image processing technology to detect and diagnose liver tumors in patients. The Cancer Imaging Archive (TCIA) was used as it contains images of patients diagnosed with liver tumors by medical experts. These images were analyzed to detect and segment liver tumors using advanced segmentation techniques. Following segmentation, the images were converted into binary images for the automatic detection of the liver's shape. The tumors within the liver were then localized and measured. By employing these image segmentation techniques, we accurately determined the size of the tumors. The application of medical image processing techniques significantly aids medical experts in identifying liver tumors more efficiently.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
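
A minimal sketch of the pipeline the abstract describes: threshold-based segmentation, conversion to a binary image, and connected-component measurement of lesion size. The Otsu threshold and the pixel-spacing value are illustrative assumptions; the paper's exact segmentation technique is not restated here.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def measure_regions(ct_slice: np.ndarray, pixel_spacing_mm: float = 0.7):
    """Binarize a CT slice and report the area and centroid of each region."""
    binary = ct_slice > threshold_otsu(ct_slice)       # convert to a binary image
    regions = regionprops(label(binary))               # connected-component analysis
    # Area in mm^2 = pixel count * spacing^2 (assumes isotropic in-plane spacing).
    return [(r.label, r.area * pixel_spacing_mm ** 2, r.centroid) for r in regions]
```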

28 pages, 4033 KB  
Article
Advancing Prostate Cancer Diagnostics: A ConvNeXt Approach to Multi-Class Classification in Underrepresented Populations
by Declan Ikechukwu Emegano, Mubarak Taiwo Mustapha, Ilker Ozsahin, Dilber Uzun Ozsahin and Berna Uzun
Bioengineering 2025, 12(4), 369; https://doi.org/10.3390/bioengineering12040369 - 1 Apr 2025
Cited by 5 | Viewed by 1445
Abstract
Prostate cancer is a leading cause of cancer-related morbidity and mortality worldwide, with diagnostic challenges magnified in underrepresented regions like sub-Saharan Africa. This study introduces a novel application of ConvNeXt, an advanced convolutional neural network architecture, for multi-class classification of prostate histopathological images into normal, benign, and malignant categories. The dataset, sourced from a tertiary healthcare institution in Nigeria, represents a typically underserved African population, addressing critical disparities in global diagnostic research. We also used the ProstateX dataset (2017) from The Cancer Imaging Archive (TCIA) to validate our results. A comprehensive pipeline was developed, leveraging advanced data augmentation, Grad-CAM for interpretability, and an ablation study to enhance model optimization and robustness. The ConvNeXt model achieved an accuracy of 98%, surpassing the performance of traditional CNNs (ResNet50, 93%; EfficientNet, 94%; DenseNet, 92%) and transformer-based models (ViT, 88%; CaiT, 86%; Swin Transformer, 95%; RegNet, 94%). On the ProstateX dataset, the ConvNeXt model achieved a validation accuracy of 87.2%, recall of 85.7%, F1 score of 86.4%, and AUC of 0.92. Its hybrid architecture combines the strengths of CNNs and transformers, enabling superior feature extraction. Grad-CAM visualizations further enhance explainability, bridging the gap between computational predictions and clinical trust. Ablation studies demonstrated the contributions of data augmentation, optimizer selection, and learning rate tuning to model performance, highlighting its robustness and adaptability for deployment in low-resource settings. This study advances equitable health care by addressing the lack of regional representation in diagnostic datasets and employing a clinically aligned three-class classification approach. Combining high performance, interpretability, and scalability, this work establishes a foundation for future research on diverse and underrepresented populations, fostering global inclusivity in cancer diagnostics.
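
A hedged sketch of adapting ConvNeXt to the three-class task (normal / benign / malignant) described in the abstract, using the torchvision implementation. The convnext_tiny variant and ImageNet weights are assumptions; the paper does not state which variant or initialization it used.

```python
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights

model = convnext_tiny(weights=ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
in_features = model.classifier[2].in_features      # final linear layer of the head
model.classifier[2] = nn.Linear(in_features, 3)    # normal / benign / malignant

logits = model(torch.randn(4, 3, 224, 224))        # (4, 3) class scores
```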

16 pages, 1864 KB  
Article
Overall Staging Prediction for Non-Small Cell Lung Cancer (NSCLC): A Local Pilot Study with Artificial Neural Network Approach
by Eva Y. W. Cheung, Virginia H. Y. Kwong, Kaby C. F. Ng, Matthias K. Y. Lui, Vincent T. W. Li, Ryan S. T. Lee, William K. P. Ham and Ellie S. M. Chu
Cancers 2025, 17(3), 523; https://doi.org/10.3390/cancers17030523 - 4 Feb 2025
Cited by 2 | Viewed by 2592
Abstract
Background: Non-small cell lung cancer (NSCLC) has been the most common cancer globally in the recent decade. CT is the most common imaging modality for the initial diagnosis of NSCLC. The gold standard for definitive diagnosis is the histological evaluation of a biopsy or surgical sample, which usually requires a long processing time for the confirmation of diagnosis. This study aims to develop artificial intelligence models to predict overall staging based on patient demographics and radiomics retrieved from the initial CT images, so as to prioritize later-stage patients for histology evaluation and thereby facilitate cancer diagnosis. Method: Two cohorts of NSCLC patient datasets were utilized for this study. The NSCLC-radiomics dataset from The Cancer Imaging Archive (TCIA) was divided into 70% for the training group and 30% for the internal testing group. Another cohort from a local hospital was collected as an external testing group. Patient demographics and 107 radiomic features were retrieved from the gross tumor volume delineated by clinical oncologists on CT images. Artificial neural networks were used to build models for NSCLC overall staging (stage I, II, or III) prediction. Four traditional classifiers were also adopted to build models for comparison. Result: The proposed feed-forward neural network (FFNN) model performed well in predicting overall staging, with overall accuracies of 88.84%, 76.67%, and 74.52% in validation, internal cohort testing, and external cohort testing, respectively. Sensitivity and specificity were balanced across all stages, as were the precision and F1 scores for each stage. Conclusion: The FFNN demonstrated good performance in overall staging prediction for NSCLC patients. It has the benefit of predicting multiple overall stages in a single model. The software required and the proposed model are simple, and the model can be operated on a general-purpose computer in the radiology department. The application will eventually be used as a prediction tool to prioritize biopsy or surgery samples for histological analysis and molecular investigation, thus shortening the time for diagnosis by pathologists and supporting the triage of patients for further testing.
(This article belongs to the Section Cancer Causes, Screening and Diagnosis)
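
A minimal sketch of a feed-forward network for three-class overall staging from patient demographics plus 107 radiomic features, in the spirit of the abstract. scikit-learn's MLPClassifier, the hidden-layer sizes, and the synthetic placeholder data are assumptions, not the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 110))    # demographics + 107 radiomic features (synthetic placeholder)
y = rng.integers(0, 3, size=300)   # overall stage I/II/III encoded as 0/1/2 (placeholder)

ffnn = make_pipeline(
    StandardScaler(),              # radiomic features vary widely in scale
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
ffnn.fit(X, y)
```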

17 pages, 4735 KB  
Article
Automated Audit and Self-Correction Algorithm for Seg-Hallucination Using MeshCNN-Based On-Demand Generative AI
by Sihwan Kim, Changmin Park, Gwanghyeon Jeon, Seohee Kim and Jong Hyo Kim
Bioengineering 2025, 12(1), 81; https://doi.org/10.3390/bioengineering12010081 - 16 Jan 2025
Cited by 1 | Viewed by 2406
Abstract
Recent advancements in deep learning have significantly improved medical image segmentation. However, the generalization performance and potential risks of data-driven models remain insufficiently validated. Specifically, unrealistic segmentation predictions deviating from actual anatomical structures, known as Seg-Hallucinations, often occur in deep learning-based models. Seg-Hallucinations can result in erroneous quantitative analyses and distort critical imaging biomarker information, yet effective audits or corrections to address these issues are rare. Therefore, we propose an automated Seg-Hallucination surveillance and correction (ASHSC) algorithm utilizing only 3D organ mask information derived from CT images, without reliance on the ground truth. Two publicly available datasets were used in developing the ASHSC algorithm: 280 CT scans from the TotalSegmentator dataset for training and 274 CT scans from The Cancer Imaging Archive (TCIA) dataset for performance evaluation. The ASHSC algorithm utilizes a two-stage on-demand strategy with mesh-based convolutional neural networks and generative artificial intelligence. The segmentation quality level (SQ-level)-based surveillance stage was evaluated using the area under the receiver operating curve (AUROC), sensitivity, specificity, and positive predictive value (PPV). The on-demand correction performance of the algorithm was assessed using similarity metrics: volumetric Dice score, volume error percentage, average surface distance, and Hausdorff distance. On the test dataset, the surveillance stage achieved an average AUROC of 0.94 ± 0.01, sensitivity of 0.82 ± 0.03, specificity of 0.90 ± 0.01, and PPV of 0.92 ± 0.01. After on-demand refinement in the correction stage, all four similarity metrics improved compared with using the AI segmentation model alone. This study not only enhances the efficiency and reliability of handling Seg-Hallucinations but also eliminates the reliance on ground truth. The ASHSC algorithm offers intuitive 3D guidance for uncertainty regions while maintaining manageable computational complexity. The SQ-level-based on-demand correction strategy adaptively minimizes uncertainties inherent in deep-learning-based organ masks and advances automated auditing and correction methodologies.
(This article belongs to the Section Biomedical Engineering and Biomaterials)
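
A sketch of three of the mask-similarity metrics named in the abstract (volumetric Dice score, volume error percentage, Hausdorff distance) computed between a predicted and a reference 3D organ mask. Average surface distance is omitted for brevity, and voxel coordinates are used without physical spacing, an assumed simplification.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def mask_metrics(pred: np.ndarray, ref: np.ndarray):
    pred, ref = pred.astype(bool), ref.astype(bool)
    dice = 2 * (pred & ref).sum() / (pred.sum() + ref.sum())
    vol_err_pct = 100 * abs(float(pred.sum()) - float(ref.sum())) / float(ref.sum())
    p, r = np.argwhere(pred), np.argwhere(ref)         # voxel coordinates of each mask
    hd = max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])
    return dice, vol_err_pct, hd
```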

17 pages, 4660 KB  
Article
Robust Real-Time Cancer Tracking via Dual-Panel X-Ray Images for Precision Radiotherapy
by Jing Wang, Jingjing Dai, Na Li, Chulong Zhang, Jiankai Zhang, Zuledesi Silayi, Haodi Wu, Yaoqing Xie, Xiaokun Liang and Huailing Zhang
Bioengineering 2024, 11(11), 1051; https://doi.org/10.3390/bioengineering11111051 - 22 Oct 2024
Cited by 2 | Viewed by 3315
Abstract
Respiratory-induced tumor motion presents a critical challenge in lung cancer radiotherapy, potentially impacting treatment precision and efficacy. This study introduces an innovative, deep learning-based approach for real-time, markerless lung tumor tracking utilizing orthogonal X-ray projection images. It incorporates three key components: (1) a sophisticated data augmentation technique combining a hybrid deformable model with 3D thin-plate spline transformation, (2) a state-of-the-art Transformer-based segmentation network for precise tumor boundary delineation, and (3) a CNN regression network for accurate 3D tumor position estimation. We rigorously evaluated this approach using both patient data from The Cancer Imaging Archive and dynamic thorax phantom data, assessing performance across various noise levels and comparing it with current leading algorithms. For TCIA patient data, the average DSC and HD95 values were 0.9789 and 1.8423 mm, respectively, with an average centroid localization deviation of 0.5441 mm. On CIRS phantoms, DSCs were 0.9671 (large tumor) and 0.9438 (small tumor) with corresponding HD95 values of 1.8178 mm and 1.9679 mm. The 3D centroid localization accuracy was consistently below 0.33 mm. The processing time averaged 90 ms/frame. Even under high noise conditions (S2 = 25), errors for all data remained within 1 mm with tracking success rates mostly at 100%. In conclusion, the proposed markerless tracking method demonstrates superior accuracy, noise robustness, and real-time performance for lung tumor localization during radiotherapy. Its potential to enhance treatment precision, especially for small tumors, represents a significant step toward improving radiotherapy efficacy and personalizing cancer treatment.
(This article belongs to the Section Biosignal Processing)
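
A minimal sketch of the 3D centroid-localization error the abstract reports: the deviation, in millimetres, between the centroids of a predicted and a reference tumor mask. The voxel-spacing values are placeholders.

```python
import numpy as np
from scipy.ndimage import center_of_mass

def centroid_deviation_mm(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """pred, ref: binary 3D tumor masks; spacing: voxel size in mm per axis."""
    c_pred = np.array(center_of_mass(pred.astype(float)))
    c_ref = np.array(center_of_mass(ref.astype(float)))
    return float(np.linalg.norm((c_pred - c_ref) * np.asarray(spacing)))
```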

11 pages, 978 KB  
Article
Estimating Progression-Free Survival in Patients with Primary High-Grade Glioma Using Machine Learning
by Agnieszka Kwiatkowska-Miernik, Piotr Gustaw Wasilewski, Bartosz Mruk, Katarzyna Sklinda, Maciej Bujko and Jerzy Walecki
J. Clin. Med. 2024, 13(20), 6172; https://doi.org/10.3390/jcm13206172 - 16 Oct 2024
Cited by 6 | Viewed by 2727
Abstract
Background/Objectives: High-grade gliomas are the most common primary malignant brain tumors in adults. These neoplasms remain predominantly incurable due to the genetic diversity within each tumor, leading to varied responses to specific drug therapies. With the advent of new targeted and immune therapies, which have demonstrated promising outcomes in clinical trials, there is a growing need for image-based techniques to enable early prediction of treatment response. This study aimed to evaluate the potential of radiomics and artificial intelligence implementation in predicting progression-free survival (PFS) in patients with highest-grade glioma (CNS WHO 4) undergoing a standard treatment plan. Methods: In this retrospective study, prediction models were developed in a cohort of 51 patients with pathologically confirmed highest-grade glioma (CNS WHO 4) from the authors' institution and the repository of The Cancer Imaging Archive (TCIA). Only patients with confirmed recurrence after complete tumor resection with adjuvant radiotherapy and chemotherapy with temozolomide were included. For each patient, 109 radiomic features of the tumor were obtained from a preoperative magnetic resonance imaging (MRI) examination. Four clinical features were added manually: sex, weight, age at the time of diagnosis, and the lobe of the brain where the tumor was located. The data label was the time to recurrence, which was determined based on follow-up MRI scans. Artificial intelligence algorithms were built to predict PFS in the training set (n = 75%) and then validated in the test set (n = 25%). The performance of each model in both the training and test datasets was assessed using mean absolute percentage error (MAPE). Results: In the test set, the random forest model showed the highest predictive performance with 1-MAPE = 92.27% and a C-index of 0.9544. The decision tree, gradient booster, and artificial neural network models showed slightly lower effectiveness with 1-MAPE of 88.31%, 80.21%, and 91.29%, respectively. Conclusions: Four of the six models built gave satisfactory results. These results show that artificial intelligence models combined with radiomic features could be useful for predicting the progression-free survival of high-grade glioma patients. This could be beneficial for risk stratification of patients, enhancing the potential for personalized treatment plans and improving overall survival. Further investigation is necessary with an expanded sample size and external multicenter validation.
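
A hedged sketch of the best-performing setup the abstract reports: a random forest predicting time to recurrence from 109 radiomic plus 4 clinical features, scored with 1-MAPE on a 75/25 split. Hyperparameters and the synthetic placeholder data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(51, 113))     # 109 radiomic + 4 clinical features (synthetic placeholder)
y = rng.uniform(2, 24, size=51)    # time to recurrence in months (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("1-MAPE:", 1 - mean_absolute_percentage_error(y_te, rf.predict(X_te)))
```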

15 pages, 1533 KB  
Article
MRI T2w Radiomics-Based Machine Learning Models in Imaging Simulated Biopsy Add Diagnostic Value to PI-RADS in Predicting Prostate Cancer: A Retrospective Diagnostic Study
by Jia-Cheng Liu, Xiao-Hao Ruan, Tsun-Tsun Chun, Chi Yao, Da Huang, Hoi-Lung Wong, Chun-Ting Lai, Chiu-Fung Tsang, Sze-Ho Ho, Tsui-Lin Ng, Dan-Feng Xu and Rong Na
Cancers 2024, 16(17), 2944; https://doi.org/10.3390/cancers16172944 - 23 Aug 2024
Cited by 6 | Viewed by 1752
Abstract
Background: Currently, prostate cancer (PCa) prebiopsy medical image diagnosis mainly relies on mpMRI and PI-RADS scores. However, PI-RADS has its limitations, such as inter- and intra-radiologist variability and the potential for imperceptible features. The primary objective of this study is to evaluate the effectiveness of a machine learning model based on radiomics analysis of MRI T2-weighted (T2w) images for predicting PCa in prebiopsy cases. Method: A retrospective analysis was conducted using 820 lesions (363 cases, 457 controls) from The Cancer Imaging Archive (TCIA) database for model development and validation. An additional 83 lesions (30 cases, 53 controls) from Hong Kong Queen Mary Hospital were used for independent external validation. The MRI T2w images were preprocessed, and radiomic features were extracted. Feature selection was performed using Cross Validation Least Angle Regression (CV-LARS). Using three different machine learning algorithms, a total of 18 prediction models and 3 shape control models were developed. The performance of the models, including the area under the curve (AUC) and diagnostic values such as sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), was compared to the PI-RADS scoring system for both internal and external validation. Results: All the models showed significant differences compared to the shape control models (all p < 0.001, except the SVM PI-RADS+2 features model, p = 0.004, and the SVM PI-RADS+3 features model, p = 0.002). In internal validation, the best model, based on the LR algorithm, incorporated 3 radiomic features (AUC = 0.838, sensitivity = 76.85%, specificity = 77.36%). In external validation, the LR (3 features) model outperformed PI-RADS in predictive value with AUC 0.870 vs. 0.658, sensitivity 56.67% vs. 46.67%, specificity 92.45% vs. 84.91%, PPV 80.95% vs. 63.64%, and NPV 79.03% vs. 73.77%. Conclusions: The machine learning model based on radiomics analysis of MRI T2w images, along with simulated biopsy, provides additional diagnostic value to the PI-RADS scoring system in predicting PCa.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
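
A sketch of the feature-selection-plus-classifier design the abstract describes, with scikit-learn's LarsCV standing in for the paper's CV-LARS step and the top three features feeding a logistic regression, mirroring the best 3-feature LR model. Data shapes and the top-3 selection rule are assumptions.

```python
import numpy as np
from sklearn.linear_model import LarsCV, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(820, 100))             # T2w radiomic features per lesion (placeholder)
y = rng.integers(0, 2, size=820)            # 1 = prostate cancer lesion (placeholder)

lars = LarsCV(cv=5).fit(X, y)               # cross-validated least angle regression
top3 = np.argsort(np.abs(lars.coef_))[-3:]  # keep the 3 largest-coefficient features
clf = LogisticRegression(max_iter=1000).fit(X[:, top3], y)
```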

12 pages, 5257 KB  
Article
The Radiogenomic Landscape of Clear Cell Renal Cell Carcinoma: Insights into Lipid Metabolism through Evaluation of ADFP Expression
by Federico Greco, Andrea Panunzio, Caterina Bernetti, Alessandro Tafuri, Bruno Beomonte Zobel and Carlo Augusto Mallio
Diagnostics 2024, 14(15), 1667; https://doi.org/10.3390/diagnostics14151667 - 1 Aug 2024
Cited by 9 | Viewed by 1909
Abstract
This study aims to explore the relationship between radiological imaging and genomic characteristics in clear cell renal cell carcinoma (ccRCC), focusing on the expression of adipose differentiation-related protein (ADFP) detected through computed tomography (CT). The goal is to establish a radiogenomic lipid profile and understand its association with tumor characteristics. Data from The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA) were utilized to correlate imaging features with ADFP expression in ccRCC. CT scans assessed various tumor features, including size, composition, margin, necrosis, and growth pattern, alongside measurements of tumoral Hounsfield units (HU) and abdominal adipose tissue compartments. Statistical analyses compared demographics, clinical–pathological features, adipose tissue quantification, and tumoral HU between groups. Among 197 patients, 22.8% exhibited ADFP expression, which was significantly associated with hydronephrosis. Low-grade ccRCC patients expressing ADFP had higher quantities of visceral and subcutaneous adipose tissue and lower tumoral HU values compared to their high-grade counterparts. Similar trends were observed in low-grade ccRCC patients without ADFP expression. ADFP expression in ccRCC correlates with specific imaging features such as hydronephrosis and altered adipose tissue distribution. Low-grade ccRCC patients with ADFP expression display a distinct lipid metabolic profile, emphasizing the relationship between radiological features, genomic expression, and tumor metabolism. These findings suggest potential for personalized diagnostic and therapeutic strategies targeting tumor lipid metabolism.
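
A minimal sketch of the kind of between-group comparison the abstract reports, testing whether a quantitative imaging measure (here visceral adipose tissue) differs between ADFP-expressing and non-expressing patients. The Mann-Whitney U test and the synthetic values are assumptions; the paper's exact statistical methods are not restated here.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
vat_adfp_pos = rng.normal(160, 40, size=45)    # visceral adipose tissue, ADFP+ (synthetic)
vat_adfp_neg = rng.normal(140, 40, size=152)   # visceral adipose tissue, ADFP- (synthetic)

stat, p = mannwhitneyu(vat_adfp_pos, vat_adfp_neg, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```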

20 pages, 5692 KB  
Article
HLFSRNN-MIL: A Hybrid Multi-Instance Learning Model for 3D CT Image Classification
by Huilong Chen and Xiaoxia Zhang
Appl. Sci. 2024, 14(14), 6186; https://doi.org/10.3390/app14146186 - 16 Jul 2024
Cited by 2 | Viewed by 3047
Abstract
At present, many diseases are diagnosed using computed tomography (CT) imaging, which affects the health of millions of people. In confronting disease, early detection through deep learning on 3D CT images is very important for patients. This paper proposes a hybrid multi-instance learning model (HLFSRNN-MIL), which combines high-low frequency feature fusion (HLFFF) with a sequential recurrent neural network (SRNN) for CT image classification tasks. The hybrid model uses ResNet-50 as the deep feature extractor. The main strength of HLFSRNN-MIL lies in its ability to exploit the complementary advantages of the HLFFF and SRNN methods: the HLFFF extracts more targeted feature information and avoids excessive gradient fluctuation during training, while the SRNN processes the time-related sequences before classification. The HLFSRNN-MIL model was evaluated on two public CT datasets: The Cancer Imaging Archive (TCIA) dataset on lung cancer and the China Consortium of Chest CT Image Investigation (CC-CCII) dataset on pneumonia. The experimental results show that the model exhibits strong performance and accuracy. On the TCIA dataset, HLFSRNN-MIL with a Residual Network (ResNet) feature extractor achieves an accuracy (ACC) of 0.992 and an area under the curve (AUC) of 0.997. On the CC-CCII dataset, HLFSRNN-MIL achieves an ACC of 0.994 and an AUC of 0.997. Compared with existing methods, HLFSRNN-MIL has clear advantages in all respects. These results demonstrate that HLFSRNN-MIL can effectively address disease classification in 3D CT images.
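
A hedged sketch of the hybrid idea in the abstract: ResNet-50 as the per-slice deep feature extractor, with a recurrent network aggregating the slice sequence of a 3D CT volume before classification. The high-low frequency feature fusion (HLFFF) step is omitted, and the GRU with its layer sizes is an illustrative stand-in for the paper's SRNN.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class SliceSequenceClassifier(nn.Module):
    """ResNet-50 features per CT slice, aggregated along the slice axis by a GRU."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
        self.backbone.fc = nn.Identity()               # expose 2048-d slice features
        self.rnn = nn.GRU(2048, 256, batch_first=True)
        self.head = nn.Linear(256, n_classes)

    def forward(self, slices: torch.Tensor):           # (batch, n_slices, 3, H, W)
        b, s = slices.shape[:2]
        feats = self.backbone(slices.flatten(0, 1)).view(b, s, -1)
        _, h = self.rnn(feats)                         # final hidden state: (1, batch, 256)
        return self.head(h.squeeze(0))

logits = SliceSequenceClassifier()(torch.randn(2, 8, 3, 224, 224))
```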

15 pages, 1082 KB  
Article
Deep Attention Fusion Hashing (DAFH) Model for Medical Image Retrieval
by Gangao Wu, Enhui Jin, Yanling Sun, Bixia Tang and Wenming Zhao
Bioengineering 2024, 11(7), 673; https://doi.org/10.3390/bioengineering11070673 - 2 Jul 2024
Cited by 4 | Viewed by 2705
Abstract
In medical image retrieval, accurately retrieving relevant images significantly impacts clinical decision making and diagnostics. Traditional image-retrieval systems primarily rely on single-dimensional image data, while current deep-hashing methods are capable of learning complex feature representations. However, retrieval accuracy and efficiency are hindered by diverse modalities and limited sample sizes. Objective: To address this, we propose a novel deep learning-based hashing model, the Deep Attention Fusion Hashing (DAFH) model, which integrates advanced attention mechanisms with medical imaging data. Methods: The DAFH model enhances retrieval performance by integrating multi-modality medical imaging data and employing attention mechanisms to optimize the feature extraction process. Utilizing multimodal medical image data from The Cancer Imaging Archive (TCIA), this study constructed and trained a deep hashing network that achieves high-precision classification of various cancer types. Results: At hash code lengths of 16, 32, and 48 bits, the model attained Mean Average Precision (MAP@10) values of 0.711, 0.754, and 0.762, respectively, highlighting the potential and advantage of the DAFH model in medical image retrieval. Conclusions: The DAFH model demonstrates significant improvements in the efficiency and accuracy of medical image retrieval, proving to be a valuable tool in clinical settings.
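
A minimal sketch of a deep-hashing head of the kind the abstract evaluates: a learned projection to k real values, relaxed with tanh during training and binarized to a k-bit code for Hamming-distance retrieval. The 32-bit length matches one reported setting; the attention-fusion backbone of DAFH is not reproduced here.

```python
import torch
import torch.nn as nn

class HashHead(nn.Module):
    def __init__(self, in_features: int = 2048, n_bits: int = 32):
        super().__init__()
        self.proj = nn.Linear(in_features, n_bits)

    def forward(self, features):
        return torch.tanh(self.proj(features))    # relaxed codes in (-1, 1) for training

    @torch.no_grad()
    def binary_code(self, features):
        return torch.sign(self.proj(features))    # {-1, +1} code; rank by Hamming distance
```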

13 pages, 4147 KB  
Article
Preoperative Prediction of Perineural Invasion and Prognosis in Gastric Cancer Based on Machine Learning through a Radiomics–Clinicopathological Nomogram
by Heng Jia, Ruzhi Li, Yawei Liu, Tian Zhan, Yuan Li and Jianping Zhang
Cancers 2024, 16(3), 614; https://doi.org/10.3390/cancers16030614 - 31 Jan 2024
Cited by 19 | Viewed by 3426
Abstract
Purpose: The aim of this study was to construct and validate a nomogram for preoperatively predicting perineural invasion (PNI) in gastric cancer based on machine learning, and to investigate the impact of PNI on the overall survival (OS) of gastric cancer patients. Methods: Data were collected from 162 gastric cancer patients and analyzed retrospectively, and radiomics features were extracted from contrast-enhanced computed tomography (CECT) scans. A group of 42 patients from The Cancer Imaging Archive (TCIA) were selected as the validation set. Univariable and multivariable analyses were used to analyze the risk factors for PNI. The t-test, Max-Relevance and Min-Redundancy (mRMR) and the least absolute shrinkage and selection operator (LASSO) were used to select radiomics features. Radscores were calculated, and logistic regression was applied to construct predictive models. A nomogram was developed by combining clinicopathological risk factors and the radscore. The area under the curve (AUC) values of receiver operating characteristic (ROC) curves, calibration curves and clinical decision curves were employed to evaluate the performance of the models. Kaplan–Meier analysis was used to study the impact of PNI on OS. Results: The univariable and multivariable analyses showed that the T stage, N stage and radscore were independent risk factors for PNI (p < 0.05). A nomogram based on the T stage, N stage and radscore was developed. The AUC of the combined model yielded 0.851 in the training set, 0.842 in the testing set and 0.813 in the validation set. The Kaplan–Meier analysis showed a statistically significant difference in OS between the PNI group and the non-PNI group (p < 0.05). Conclusions: A machine learning-based radiomics–clinicopathological model could effectively predict PNI in gastric cancer preoperatively through a non-invasive approach, and gastric cancer patients with PNI had relatively poor prognoses.
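
A sketch of the radscore construction the abstract describes: LASSO selects CECT radiomic features, the fitted linear combination becomes each patient's radscore, and a logistic model then combines the radscore with T stage and N stage, as in the nomogram. All data are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression

rng = np.random.default_rng(0)
X_rad = rng.normal(size=(162, 200))     # CECT radiomic features (synthetic placeholder)
t_stage = rng.integers(1, 5, size=162)
n_stage = rng.integers(0, 4, size=162)
pni = rng.integers(0, 2, size=162)      # perineural invasion label (placeholder)

lasso = LassoCV(cv=5).fit(X_rad, pni)
radscore = X_rad @ lasso.coef_ + lasso.intercept_    # linear radiomic signature
X_comb = np.column_stack([t_stage, n_stage, radscore])
combined = LogisticRegression(max_iter=1000).fit(X_comb, pni)
```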

18 pages, 9265 KB  
Article
AI Deployment on GBM Diagnosis: A Novel Approach to Analyze Histopathological Images Using Image Feature-Based Analysis
by Eva Y. W. Cheung, Ricky W. K. Wu, Albert S. M. Li and Ellie S. M. Chu
Cancers 2023, 15(20), 5063; https://doi.org/10.3390/cancers15205063 - 19 Oct 2023
Cited by 15 | Viewed by 4169
Abstract
Background: Glioblastoma (GBM) is one of the most common malignant primary brain tumors, accounting for 60–70% of all gliomas. Conventional diagnosis and the decision on a post-operation treatment plan for glioblastoma are mainly based on feature-based qualitative analysis of hematoxylin and eosin-stained (H&E) histopathological slides by both an experienced medical technologist and a pathologist. The recent development of digital whole slide scanners makes AI-based histopathological image analysis feasible and helps to diagnose cancer by accurately counting cell types and/or quantitative analysis. However, the technology available for digital slide image analysis is still very limited. This study aimed to build an image feature-based computer model using histopathology whole slide images to differentiate patients with glioblastoma (GBM) from healthy controls (HC). Method: Two independent cohorts of patients were used. The first cohort was composed of 262 GBM patients of The Cancer Genome Atlas Glioblastoma Multiforme Collection (TCGA-GBM) dataset from The Cancer Imaging Archive (TCIA) database. The second cohort was composed of 60 GBM patients collected from a local hospital, together with a group of 60 participants with no known brain disease. All the H&E slides were collected. Thirty-three image features (22 GLCM and 11 GLRLM) were retrieved from the tumor volume delineated by a medical technologist on the H&E slides. Five machine-learning algorithms, including decision tree (DT), extreme boost (EB), support vector machine (SVM), random forest (RF), and linear model (LM), were used to build five models using the image features extracted from the first cohort of patients. The models were then deployed on the second cohort (local patients) as model testing, to identify and verify key image features for GBM diagnosis. Results: All five machine learning algorithms demonstrated excellent performance in GBM diagnosis and achieved an overall accuracy of 100% in the training and validation stage. A total of 12 GLCM and 3 GLRLM image features were identified, and they showed a significant difference between the normal and GBM images. However, only the SVM model maintained its excellent performance when deployed on the independent local cohort, with an accuracy of 93.5%, sensitivity of 86.95%, and specificity of 99.73%. Conclusion: In this study, we identified 12 GLCM and 3 GLRLM image features which can aid GBM diagnosis. Among the five models built, the SVM model demonstrated excellent accuracy with very good sensitivity and specificity. It could potentially be used for GBM diagnosis and future clinical application.
(This article belongs to the Section Cancer Causes, Screening and Diagnosis)
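
A minimal sketch of GLCM texture-feature extraction of the kind the abstract applies to delineated H&E slide regions. scikit-image provides GLCM; the GLRLM features would need a separate package (e.g., pyradiomics). Distances, angles, and grey levels are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch: np.ndarray) -> dict:
    """patch: 2D uint8 greyscale region delineated from an H&E slide."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation", "dissimilarity")
    return {p: graycoprops(glcm, p).mean() for p in props}

features = glcm_features(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
```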

14 pages, 2538 KB  
Article
MRI-Based Deep Learning Method for Classification of IDH Mutation Status
by Chandan Ganesh Bangalore Yogananda, Benjamin C. Wagner, Nghi C. D. Truong, James M. Holcomb, Divya D. Reddy, Niloufar Saadat, Kimmo J. Hatanpaa, Toral R. Patel, Baowei Fei, Matthew D. Lee, Rajan Jain, Richard J. Bruce, Marco C. Pinho, Ananth J. Madhuranthakam and Joseph A. Maldjian
Bioengineering 2023, 10(9), 1045; https://doi.org/10.3390/bioengineering10091045 - 5 Sep 2023
Cited by 17 | Viewed by 4365
Abstract
Isocitrate dehydrogenase (IDH) mutation status has emerged as an important prognostic marker in gliomas. This study sought to develop deep learning networks for non-invasive IDH classification using T2w MR images while comparing their performance to a multi-contrast network. Methods: Multi-contrast brain tumor MRI and genomic data were obtained from The Cancer Imaging Archive (TCIA) and The Erasmus Glioma Database (EGD). Two separate 2D networks were developed using nnU-Net: a T2w-image-only network (T2-net) and a multi-contrast network (MC-net). Each network was separately trained using TCIA (227 subjects) or TCIA + EGD data (683 subjects combined). The networks were trained to classify IDH mutation status and implement single-label tumor segmentation simultaneously. The trained networks were tested on over 1100 held-out cases, including 360 cases from UT Southwestern Medical Center, 136 cases from New York University, 175 cases from the University of Wisconsin–Madison, 456 cases from EGD (for the TCIA-trained network), and 495 cases from the University of California, San Francisco public database. Receiver operating characteristic (ROC) curves were drawn and AUC values calculated to determine classifier performance. Results: T2-net trained on the TCIA and TCIA + EGD datasets achieved an overall accuracy of 85.4% and 87.6% with AUCs of 0.86 and 0.89, respectively. MC-net trained on the TCIA and TCIA + EGD datasets achieved an overall accuracy of 91.0% and 92.8% with AUCs of 0.94 and 0.96, respectively. We developed reliable, high-performing deep learning algorithms for IDH classification using both a T2w-image-only and a multi-contrast approach. The networks were tested on more than 1100 subjects from diverse databases, making this the largest study on image-based IDH classification to date.
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
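
A minimal sketch of the evaluation step the abstract names: drawing a ROC curve and computing the AUC from predicted IDH-mutation probabilities on a held-out cohort. Labels and scores are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=360)     # IDH status of a held-out cohort (placeholder)
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=360), 0, 1)

fpr, tpr, _ = roc_curve(y_true, y_score)  # points of the ROC curve
print("AUC:", roc_auc_score(y_true, y_score))
```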

12 pages, 1951 KB  
Article
Radiomics-Clinical AI Model with Probability Weighted Strategy for Prognosis Prediction in Non-Small Cell Lung Cancer
by Fuk-Hay Tang, Yee-Wai Fong, Shing-Hei Yung, Chi-Kan Wong, Chak-Lap Tu and Ming-To Chan
Biomedicines 2023, 11(8), 2093; https://doi.org/10.3390/biomedicines11082093 - 25 Jul 2023
Cited by 11 | Viewed by 3693
Abstract
In this study, we propose a radiomics-clinical probability-weighted model for the prediction of prognosis for non-small cell lung cancer (NSCLC). The model combines radiomics features extracted from radiotherapy (RT) planning images with clinical factors such as age, gender, histology, and tumor stage. CT images with radiotherapy structures of 422 NSCLC patients were retrieved from The Cancer Imaging Archive (TCIA). Radiomic features were extracted from gross tumor volumes (GTVs). Five machine learning algorithms, namely decision trees (DT), random forests (RF), extreme boost (EB), support vector machine (SVM), and generalized linear model (GLM), were optimized by a voted ensemble machine learning (VEML) model. A probability-weighted approach is used to incorporate the uncertainty associated with both radiomic and clinical features and to generate a probabilistic risk score for each patient. The performance of the model is evaluated using receiver operating characteristic (ROC) analysis. The radiomic model, clinical factor model, and combined radiomic-clinical probability-weighted model demonstrated good performance in predicting NSCLC survival, with AUCs of 0.941, 0.856, and 0.949, respectively. The combined model achieved significantly better performance than the radiomic model in 1-year survival prediction (chi-square test, p < 0.05). The proposed model has the potential to improve NSCLC prognosis prediction and facilitate personalized treatment decisions.
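
A hedged sketch of the voted-ensemble idea in the abstract: five base learners combined by soft (probability-averaged) voting. scikit-learn's VotingClassifier is a stand-in for the paper's VEML model, gradient boosting approximates "extreme boost", and logistic regression approximates the GLM; hyperparameters are illustrative.

```python
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

veml = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=5)),
        ("rf", RandomForestClassifier(n_estimators=300)),
        ("eb", GradientBoostingClassifier()),   # "extreme boost" stand-in
        ("svm", SVC(probability=True)),         # probability=True enables soft voting
        ("glm", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",                              # average the predicted probabilities
)
# veml.fit(X_train, y_1yr_survival); veml.predict_proba(X_test)
```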

22 pages, 1615 KB  
Article
VGG16 Feature Extractor with Extreme Gradient Boost Classifier for Pancreas Cancer Prediction
by Wilson Bakasa and Serestina Viriri
J. Imaging 2023, 9(7), 138; https://doi.org/10.3390/jimaging9070138 - 7 Jul 2023
Cited by 38 | Viewed by 4591
Abstract
The prognosis of patients with pancreatic ductal adenocarcinoma (PDAC) is greatly improved by an early and accurate diagnosis. Several studies have created automated methods to forecast PDAC development utilising various medical imaging modalities. These studies give a general overview of the classification, segmentation, or grading of many cancer types, including pancreatic cancer, utilising conventional machine learning techniques and hand-engineered features. This study uses cutting-edge deep learning techniques to identify PDAC utilising computerised tomography (CT) medical imaging modalities. This work proposes the hybrid model VGG16–XGBoost (a VGG16 backbone feature extractor and an Extreme Gradient Boosting classifier) for PDAC images. The experiments show that the proposed hybrid model performs better, obtaining an accuracy of 0.97 and a weighted F1 score of 0.97 for the dataset under study. The experimental validation of the VGG16–XGBoost model uses The Cancer Imaging Archive (TCIA) public access dataset, which contains pancreas CT images. The results of this study can be extremely helpful for PDAC diagnosis from CT pancreas images, categorising them into the five tumour (T) class labels of the tumour–node–metastasis (TNM) staging system: T0, T1, T2, T3, and T4.
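
A sketch of the hybrid design the abstract proposes: frozen VGG16 convolutional layers as the feature extractor with an XGBoost classifier on top for the five T-stage labels. torchvision is an assumed implementation choice, and the input tensors stand in for preprocessed pancreas CT slices.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights
from xgboost import XGBClassifier

backbone = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
backbone.classifier = nn.Identity()        # keep the 25088-d convolutional features
backbone.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor): # batch: (n, 3, 224, 224) CT slices
    return backbone(batch).numpy()

clf = XGBClassifier(objective="multi:softprob")
# clf.fit(extract_features(train_imgs), t_stage_labels)  # labels T0..T4 encoded as 0..4
```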
