Article

A Contrast-Enhanced CT-Based Deep Learning System for Preoperative Prediction of Colorectal Cancer Staging and RAS Mutation

1 Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, No. 121, Jiangjiayuan Road, Nanjing 210011, China
2 Department of Radiology, The Second Affiliated Hospital of Nanjing Medical University, Nanjing 210011, China
3 Key Laboratory of Modern Toxicology, Ministry of Education, School of Public Health, Nanjing Medical University, Nanjing 211166, China
* Author to whom correspondence should be addressed.
Cancers 2023, 15(18), 4497; https://doi.org/10.3390/cancers15184497
Submission received: 23 August 2023 / Revised: 4 September 2023 / Accepted: 8 September 2023 / Published: 10 September 2023
(This article belongs to the Section Cancer Informatics and Big Data)

Simple Summary

This study explored the role of CT-based deep learning in detecting colorectal cancer tumor location and preoperatively predicting the stage and RAS gene mutation status of colorectal cancer patients. The deep learning models we built achieved excellent performance. The detection network based on Yolov7 detected and preoperatively staged colorectal cancer with a mean average precision of 0.98 in the validation cohort. The vision-transformer-based prediction network accurately predicted preoperative RAS mutation status in colorectal cancer patients, achieving an area under the receiver operating characteristic curve (AUC) of 0.9591 and 0.9554 in the test cohort and the validation cohort, respectively. This study also explored the clinical application of deep learning models: based on the proposed detection network and prediction network, we built a deep learning system for clinicians without deep learning expertise.

Abstract

Purpose: This study aimed to build a deep learning system using contrast-enhanced computed tomography (CT) portal-phase images to predict colorectal cancer patients’ preoperative staging and RAS gene mutation status. Methods: The contrast-enhanced CT image dataset comprises the CT portal-phase images of a retrospective cohort of 231 colorectal cancer patients. The deep learning system was developed via transfer learning for colorectal cancer detection, staging, and RAS gene mutation status prediction, using pre-trained Yolov7, vision transformer (VIT), swin transformer (SWT), EfficientNetV2, and ConvNeXt models. A total of 4620 contrast-enhanced CT images with annotated tumor bounding boxes made up the tumor identification and staging dataset, and a total of 19,700 contrast-enhanced CT images made up the RAS gene mutation status prediction dataset. Results: In the validation cohort, the Yolov7-based detection model detected and staged tumors with a mean average precision at an intersection over union of 0.5 (mAP_0.5) of 0.98. The area under the receiver operating characteristic curve (AUC) of the VIT-based prediction model for predicting RAS gene mutation status was 0.9591 in the test set and 0.9554 in the validation set. Both the detection network and the prediction network of the deep learning system demonstrated strong performance in interpreting contrast-enhanced CT images. Conclusion: In this study, a deep learning system was built on contrast-enhanced CT portal-phase images to preoperatively predict the stage and RAS mutation status of colorectal cancer patients. This system will help clinicians choose the best treatment option to improve colorectal cancer patients’ chances of survival and quality of life.

1. Introduction

Colorectal cancer is the most prevalent gastrointestinal tract cancer worldwide and the second-leading cause of tumor-related mortality [1]. In the past few decades, significant improvements have been made in the care of people with colorectal cancer [2]. In recent years, advances in treatment strategies have played a significant role in raising survival rates [3,4]. However, the overall survival of patients with advanced colorectal cancer remains poor. The choice of therapy options for people with colorectal cancer depends on TNM staging. Medical imaging is frequently used to assess patient staging preoperatively; nevertheless, its accuracy in determining TNM staging remains poor [5].
Several studies have reported the use of contrast-enhanced CT to predict patient outcomes through the precise identification and stratification of patients carrying specific mutated genes [6,7]. Tumor genetic profiling is a powerful tool for personalizing therapy through the creation of customized treatments. Clinical guidelines consistently recommend that all patients with suspected or confirmed metastatic colorectal cancer be genotyped for tumor tissue RAS mutations, as these mutations predict resistance to the anti-epidermal growth factor receptor (EGFR) monoclonal antibodies cetuximab and panitumumab [8,9]. Therefore, identification of RAS mutation status before or during treatment is essential for predicting treatment outcomes and determining individualized treatment strategies for colorectal cancer patients. In clinical practice, biopsies or postoperative specimens are the most often utilized sources for genetic testing. These procedures are invasive, and because of the intra-tumoral heterogeneity of colorectal cancer, local tumor sampling and biopsy might not be representative [10].
In oncology, the accurate identification of imaging biomarkers is critical to enable clinicians to individualize their treatment choices [11]. According to previous studies, medical imaging can capture tumor biology at the genetic and cellular levels [12]. Contrast-enhanced CT is a common imaging test for patients with colorectal cancer being evaluated before surgery and is widely utilized in clinical settings [13]. Deep learning has recently attracted close attention in oncology research due to its ability to extract more information from input data [14,15,16,17]. The predictive performance of deep learning models under specific conditions has been demonstrated to be no worse than that of experienced clinicians [18,19]. This provided the basis for our study.
Thus, the objective of this investigation was to create a deep learning system for preoperative staging and RAS mutation status prediction in patients with colorectal cancer, which, to the best of our knowledge, has not been reported in any published studies.

2. Materials and Methods

Approval for this study was received from the Ethics Committee of the Second Affiliated Hospital of Nanjing Medical University (NO. 2023-KY-141-01). Informed consent was obtained from the patients or their family members. All private patient information was removed.

2.1. Patients

A total of 231 colorectal cancer patients enrolled between January 2017 and June 2022 took part in this study. Patient staging and RAS mutation status were derived from postoperative pathological results. The inclusion criteria were: contrast-enhanced CT examination within one week before colorectal resection; colorectal cancer confirmed by postoperative pathology; RAS gene mutation status determined definitively after colorectal resection; and no chemotherapy or radiotherapy before the operation. The exclusion criteria were: poor gastric distension or artefacts in the CT images; preoperative radiotherapy or chemotherapy; colorectal cancer lesions too small to identify reliably; and indeterminate RAS gene mutation status. The Supplementary Materials provide detailed information on the testing methods used to detect RAS gene mutations.

2.2. CT Image Acquisition

CT examinations were performed on a Siemens Definition Flash dual-source CT scanner (Somatom Definition, Siemens Healthcare, Forchheim, Germany). All patients were instructed to fast for more than 8 h and received an intravenous injection of 20 mg of anisodamine to reduce gastric motility. Additionally, all patients were asked to drink 1000 mL of warm water to distend the stomach before the examination and to hold their breath during the examination. After the non-enhanced abdominal CT scan, the patients were intravenously injected with 1.5 mL/kg of iodinated contrast medium (ioversol injection, 320 mg I/mL, Jiangsu Hengrui Pharmaceuticals Co., Ltd., Lianyungang, China) at a flow rate of 3.0 mL/s via an automatic pump syringe. After the start of the contrast injection, once the contrast agent attenuation reached 100 HU, arterial-phase images were acquired after a further 20 s, venous-phase images 35 s after the arterial phase, and delayed-phase images 90 s after the venous phase. The CT scan parameters were as follows: tube voltage, 120 kV; tube current, 150–300 mA; field of view, 30–50 cm; matrix, 512 × 512; rotation time, 0.5 s; pitch, 1.0. Images were reconstructed with a section thickness of 2 mm.

2.3. CT Image Collection

Previous studies demonstrated that features extracted from contrast-enhanced CT portal-phase images yield superior colorectal cancer prediction accuracy [20,21]. Thus, we collected the contrast-enhanced CT portal-phase images of all patients and resampled them. The Supplementary Materials include a comprehensive description of how the CT scans were acquired. All of the patients’ contrast-enhanced CT portal-phase images were examined by two radiologists with more than eight years of combined experience in medical imaging. They checked the quality of the patients’ contrast-enhanced CT images and selected the five images from each patient with the largest tumor area. The two physicians were blinded to the patients’ pathology and reviewed the images independently. If their opinions diverged, the final decision was made by a chief physician with 15 years of experience in medical imaging.

2.4. Dataset Construction

To create the dataset, we chose from each patient’s CT images the five axial slices with the greatest tumor area, as screened by the radiologists. Specifically, the section with the largest tumor cross-section was taken as the center, with two sections above and two sections below, for a total of five sections (Figure 1).
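For illustration, here is a minimal sketch of this slice-selection rule (the function and variable names are ours, not from the study's code):

```python
# Minimal sketch of the five-slice rule: center on the axial slice with the
# largest tumor cross-section and take two neighbors on each side.
# tumor_areas is an assumed input: the per-slice tumor area along the axial
# axis; a volume of at least five slices is assumed.
def pick_slices(tumor_areas: list) -> list:
    center = max(range(len(tumor_areas)), key=tumor_areas.__getitem__)
    start = min(max(center - 2, 0), len(tumor_areas) - 5)  # clamp near volume edges
    return list(range(start, start + 5))
```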
We collected the patients’ pathologic staging from their postoperative pathology reports. For automated tumor site recognition and staging prediction, we retrospectively gathered 685 stage III images and 470 stage II images. We expanded the original dataset via data augmentation, which reduced the likelihood of the model overfitting on the dataset [22]. After augmentation, the dataset included a total of 4620 images.
For gene mutation status prediction in colorectal cancer patients, we retrospectively collected 525 CT images with gene mutations and 460 CT images without gene mutations. We expanded this dataset by applying 19 transformations to the original images using image augmentation techniques.
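As a rough illustration of this offline augmentation step (the study's 19 transformations are not enumerated in the text, so the operations below are common stand-ins rather than the actual ones):

```python
from pathlib import Path
from PIL import Image
import torchvision.transforms as T

# Write the original slice plus one transformed copy per operation.
AUGMENTATIONS = [
    T.RandomHorizontalFlip(p=1.0),
    T.RandomVerticalFlip(p=1.0),
    T.RandomRotation(degrees=15),
    T.ColorJitter(brightness=0.2, contrast=0.2),
]

def augment_folder(src_dir: str, dst_dir: str) -> None:
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(src_dir).glob("*.png")):
        img = Image.open(img_path).convert("RGB")
        img.save(dst / img_path.name)  # keep the original
        for i, aug in enumerate(AUGMENTATIONS):
            aug(img).save(dst / f"{img_path.stem}_aug{i}.png")
```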
All of the images were first normalized. These images were then randomly split into three groups, a training cohort, a test cohort, and a validation cohort, in a 7:2:1 ratio. The training cohort was used to train the model, the test cohort was used for fine-tuning, and the validation cohort was used to evaluate the model’s effectiveness.
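A minimal sketch of this split, assuming it is performed at the image level as described (the seed and helper name are illustrative):

```python
import random

# Shuffle image paths and split them 7:2:1 into training, test, and
# validation cohorts. A fixed seed keeps the split reproducible.
def split_dataset(image_paths, seed=0):
    paths = sorted(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(0.7 * len(paths))
    n_test = int(0.2 * len(paths))
    return (paths[:n_train],                   # training cohort
            paths[n_train:n_train + n_test],   # test cohort
            paths[n_train + n_test:])          # validation cohort
```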

2.5. Model Construction

Model construction consists of two parts. The first identifies the tumor and determines the colorectal cancer stage from the contrast-enhanced CT portal venous-phase images (detection model). The second predicts the tumor’s genetic status (prediction model).
We created the detection model using Yolov7, which was pre-trained on the COCO dataset [23]. Data augmentation methods such as HSV jittering, translation, flipping, scaling, mix-up, and mosaic were used in constructing the detection model. We set the learning rate to 0.01 and the number of epochs to 200.
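For context, YOLO-family detectors such as Yolov7 read annotations in the YOLO label format, one normalized box per line; the helper below (the class-id mapping for stage II/III is our assumption) converts a pixel-space tumor box into that format:

```python
# Convert a pixel-space bounding box to a YOLO label line:
# "class x_center y_center width height", all coordinates normalized to [0, 1].
def to_yolo_label(cls_id, x_min, y_min, x_max, y_max, img_w, img_h):
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# e.g., an assumed stage III tumor (class 1) on a 512 x 512 portal-phase slice:
print(to_yolo_label(1, 200, 240, 280, 310, 512, 512))
```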
While creating the prediction model, we preprocessed the images of the training cohort and the test cohort differently [24]. Random cropping and random transformation were selected as the augmentation techniques for the training cohort, and center cropping and resizing were selected for the test and validation cohorts. EfficientNetV2 and ConvNeXt achieve high accuracy compared with other models while utilizing fewer computing resources [25,26]. The transformer architecture has strong non-local feature extraction capabilities, and VIT and SWT perform well on classification tasks [27,28,29]. All of these models were pre-trained on the ImageNet dataset [30,31]. We set the learning rate of the convolutional neural networks to 0.01, the learning rate of the transformers to 0.001, and the number of epochs to 200.
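A minimal sketch of this preprocessing split and a pre-trained VIT classifier, using torchvision and timm as stand-ins (the crop sizes and the model name are assumptions, not stated in the paper):

```python
import timm
import torch
import torchvision.transforms as T

# Training cohort: random cropping and random transformation.
train_tf = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
# Test and validation cohorts: center cropping and resizing only.
eval_tf = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
])

# ImageNet-pre-trained VIT with a two-class head (RAS mutant vs. wild type).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)  # transformer lr from the text
```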

2.6. Model Evaluation

The YOLOv7 loss function comprises three components: class loss, location loss, and objectness loss. The loss was computed on each layer of the feature maps. The precision–recall (P-R) curves, mAP, confusion matrix, and F1 score curve were utilized to further assess the detection model’s performance.
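For reference, the standard definitions of these metrics are:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \qquad
\mathrm{mAP} = \frac{1}{K} \sum_{k=1}^{K} \int_0^1 p_k(r)\, dr
```

where K is the number of classes and p_k(r) is the precision of class k at recall r; mAP_0.5 counts a detection as correct when its intersection over union (IoU) with the ground-truth box is at least 0.5.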
For the prediction model, the accuracy and loss values were used to evaluate the classification performance of each neural network, from which the best network was selected. Receiver operating characteristic (ROC) curves and P-R curves were employed to assess the network’s performance further. Gradient-weighted class activation mapping (Grad-CAM) was used to visualize which image regions drive the output of the network’s layers [32].
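A hedged sketch of ViT Grad-CAM using the third-party pytorch-grad-cam package (the model name and the dummy inputs are ours; the target layer mirrors the choice reported in Section 3.3):

```python
import numpy as np
import timm
import torch
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2).eval()

def reshape_transform(tensor, height=14, width=14):
    # Drop the class token and fold the ViT token sequence back into a 2D map.
    t = tensor[:, 1:, :].reshape(tensor.size(0), height, width, tensor.size(2))
    return t.permute(0, 3, 1, 2)

# Target: the first LayerNorm in the last encoder block (as in Section 3.3).
cam = GradCAM(model=model, target_layers=[model.blocks[-1].norm1],
              reshape_transform=reshape_transform)

x = torch.randn(1, 3, 224, 224)                # stand-in for a preprocessed CT slice
heatmap = cam(input_tensor=x)[0]               # H x W saliency map in [0, 1]
rgb = np.float32(np.random.rand(224, 224, 3))  # stand-in for the original slice
overlay = show_cam_on_image(rgb, heatmap, use_rgb=True)
```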

3. Results

3.1. Patients

A total of 231 patients were included in the study. Table 1 provides a summary of the clinical characteristics of the patients included in the detection model dataset. Additionally, Table 2 presents the clinical characteristics of the patients included in the prediction model dataset.

3.2. Detection Model Performance

The Yolov7 model reached its optimal parameters after 180 training epochs. In the test cohort, the detection model had a precision of 0.96, a recall of 0.95, and an mAP_0.5 of 0.97 (Figure 2). Figure 3A,B depict the confusion matrices for the test and validation cohorts. The results showed that the detection model classified both stages excellently in both cohorts. Furthermore, we used the mAP and F1 scores to evaluate the model’s detection performance. The mAP_0.5 values in the test and validation cohorts were 0.981 and 0.970, respectively (Figure 3C,D). The model’s F1 scores in the test and validation cohorts were 0.95 and 0.96, respectively (Figure 3E,F). These results demonstrate that the model performed well in detection.

3.3. Prediction Model Performance

Based on the accuracy and loss during training, all neural networks reached their optimal parameters after 170 training epochs (Figure 4). The results show that VIT had the best classification performance on our dataset (Figure 4). Therefore, we chose VIT to construct the prediction model. The confusion matrices revealed that the prediction model performed well in both the test and validation cohorts (Figure 5A,B). Additionally, the ROC and P-R curves show that the prediction model has excellent classification performance in the validation cohort (Figure 5C,D). The AUCs for the test and validation cohorts were 0.9591 and 0.9554, respectively (DeLong test, p = 0.449). Representative Grad-CAM images for VIT are shown in Figure 6. We selected the first LayerNorm layer in the last encoder block of VIT as the Grad-CAM target layer to generate these representative images. The heatmaps generated from VIT highlight important regions in the CT image: the area surrounding the tumor and its central position are of great value for evaluating tumor gene status. This suggests that VIT has the ability to detect tumor heterogeneity.

3.4. Deep Learning System

The deep learning system is made up of two parts: the detection model detects tumors and predicts staging, and the prediction model predicts RAS gene mutation status.
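A schematic sketch of how the two models could be chained at inference time (the slice-to-patient aggregation strategy here is our assumption, not specified in the text):

```python
# Hypothetical two-stage inference: the detection model localizes and stages
# the tumor on each portal-phase slice; the prediction model estimates the
# probability of RAS mutation, averaged over slices for a patient-level call.
def analyze_patient(portal_slices, detection_model, prediction_model):
    lesions, ras_probs = [], []
    for ct_slice in portal_slices:
        lesions.extend(detection_model(ct_slice))     # boxes with stage II/III labels
        ras_probs.append(prediction_model(ct_slice))  # P(RAS mutant) for this slice
    return {
        "lesions": lesions,
        "ras_mutation_probability": sum(ras_probs) / len(ras_probs),
    }
```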

4. Discussion

In this work, we developed and validated a contrast-enhanced computed tomography-based deep learning system for the preoperative prediction of staging and RAS gene mutation status in colorectal cancer patients. The deep learning system successfully differentiated colorectal cancer patients by staging and RAS gene mutation status, allowing tailored preoperative staging and RAS gene mutation status evaluation.
Accurate staging evaluation and RAS gene mutation status identification are critical to colorectal cancer patients’ therapy options and prognosis assessment [33]. Medical imaging is a typical tool for determining preoperative staging, although its accuracy is not optimal [34]. Endoscopic biopsy is a typical preoperative approach for determining gene mutation status; however, it can result in major complications such as infection, bleeding, and perforation [35]. Several studies have sought to evaluate RAS gene mutation status using positron emission tomography (PET/CT) [36]; although some results have been achieved, PET/CT is not a standard preoperative test for colorectal cancer patients. Contrast-enhanced CT is more commonly used for tumor detection and treatment [37]. In CT imaging, an X-ray beam scans a body section of a certain thickness; the X-rays transmitted through the section are received by a detector, converted into visible light, transformed into electrical signals by a photoelectric converter and then into digital signals by an analog-to-digital converter, and input into a computer for processing. The resulting images therefore contain a great deal of information. It has been shown that deep learning features extracted from a tumor region can offer a relevant quantitative representation of the extent of lymph node metastasis in patients [38]. Deep learning can extract more information from the input data for mapping the input or observing the relationship between features and the output, without depending on a prior understanding of the data’s features [14]. Thus, deep learning can mine information from medical CT images that is difficult for humans to notice, thereby offering additional diagnostic value. To the best of our knowledge, this is the first study to employ contrast-enhanced CT imaging with deep learning to predict staging and RAS gene mutation status before surgery in colorectal cancer.
The bulk of deep learning research in colorectal cancer has focused on the categorization of endoscopic or pathological images, diagnosis, and prognosis analysis [39,40]. Many studies have attempted to use deep learning on pathological images of colorectal cancer patients to predict lymph node status [41]. Some researchers have also investigated the direct use of endoscopic images to diagnose the depth of submucosal infiltration in colorectal cancer using deep learning [42]. After a systematic literature search, we found two deep learning studies using magnetic resonance imaging (MRI) for the predictive identification of the T-stage in rectal cancer patients [43,44]. Although MRI has advantages such as high sensitivity and specificity, it is not a routine preoperative test for colorectal cancer patients due to its high price. Moreover, we found no studies that used CT images to predict the stage of colorectal cancer patients preoperatively. Clinical standards require practitioners to identify TNM staging before beginning any therapy [45]. TNM staging is routinely used for risk stratification and therapeutic decision making, and CT is a routine imaging test used for preoperative staging of gastric cancer [46]. The use of abdominal contrast-enhanced CT has greatly improved the accuracy of gastric cancer staging, with preoperative T-stage and N-stage accuracies of 70% and 75%, respectively [47,48]. However, the final interpretation of CT images still depends on the clinical experience and personal opinion of radiologists, and their staging assessment is to some extent a subjective evaluation that lacks objectivity [49]. The F1 score of the deep learning system constructed in this study reached 0.95 in the test cohort and 0.96 in the validation cohort, offering a novel approach for assisting radiologists in screening and reducing their workload. Huang et al. demonstrated that combining numerous indicators into a single model aids in individualizing patient care and outperforms using a single marker [21]. We strongly incline toward the view that focusing on the T-stage or N-stage alone may not allow a thorough assessment of a patient’s state, which may alter clinicians’ diagnosis, treatment, and prognosis appraisal. Using contrast-enhanced CT portal-phase images, the detection model we constructed was able to predict the staging of colorectal cancer patients, which should help clinicians make diagnosis and treatment decisions.
With the introduction of cetuximab and panitumumab, two anti-epidermal growth factor receptor (EGFR)-targeted antibodies, the treatment of advanced colorectal cancer has entered the era of customized therapy [50]. However, tissues obtained via endoscopic biopsy can be inaccurate, and approximately one-quarter of patients diagnosed by endoscopic biopsy prove to have more advanced disease after surgical resection [51]. The gene expression profiles of endoscopic biopsy specimens may also be influenced by sampling errors [52]. Liquid biopsy has evolved in recent years as an alternative approach for identifying genetic status [53]. However, the expensive apparatus required, the long analysis time, and the low detectability and specificity have all hampered the translation of this innovative technology from the laboratory to clinical use [54]. Compared with endoscopy and tissue biopsy, contrast-enhanced CT is a relatively low-risk, non-invasive, routine preoperative scan [55]. In this study, a prediction model was built and evaluated on a test set and a validation set, on both of which it achieved good performance. Our findings revealed that contrast-enhanced CT, as a typical preoperative scan used in colorectal cancer patients, captures intrinsic receptor expression features and can thus represent RAS gene mutation status. Several studies have shown links between CT features and genes in lung tumors [56]. Predictive models are not yet sufficient to replace pathology biopsies for a variety of reasons, including clinician bias and poor deep learning interpretability. However, deep learning combined with contrast-enhanced CT examinations has many advantages over pathology biopsies. First, CT examinations are readily available, relatively inexpensive, and noninvasive. In addition, almost all patients with colorectal cancer undergo contrast-enhanced CT before treatment and are commonly imaged multiple times during treatment, but not all patients undergo genomic sequencing. Second, colorectal cancer is highly heterogeneous and progressive at the physiological and genomic levels; genomic heterogeneity across different locations of primary and metastatic tumors is a significant contributor to treatment failure and the development of therapeutic resistance. Third, when genomic analysis is performed, tumor biopsy samples are obtained from a single location in a single pass, which is prone to sampling errors, whereas predictive models target images of the entire tumor rather than a localized site [57,58]. Therefore, precise treatment of colorectal cancer requires spatiotemporal analysis of tumor RAS gene mutation status. The findings of this study highlight the intrinsic advantage of contrast-enhanced CT in colorectal cancer for detecting RAS gene mutation status, which is useful because it makes it easier for doctors to establish the mutational status of genes. Grad-CAM can depict the deep learning model’s output, and further research should be undertaken based on this finding. Contrast-enhanced CT and deep learning have the ability to quantify intra- and inter-tumor heterogeneity and enable more accurate colorectal cancer therapy.
Radiomics is a new topic that has attracted considerable interest in cancer clinical research [59]. Li et al. created a clinical-radiomics nomogram based on radiomic features for predicting lymph node metastasis in colorectal cancer patients, with a best AUC of 0.76 [60]. The combination of functional CT parameters and radiomic features is helpful for the diagnosis and T-staging of colorectal cancer [61]. Xue et al. used radiomic features to construct a model to predict KRAS mutation status in colorectal cancer patients, with an AUC of 0.75 [62].
The prediction model in this study achieved a much higher AUC than these radiomics-based counterparts. This result can be credited to the use of deep learning techniques. Yun et al. found that combining deep learning features with radiomic features affects their model’s classification performance [63]. According to Chalkidou et al., radiomic properties are subject to human bias [64]. Moreover, radiomics has long had repeatability issues [65]. The usefulness of classical radiomics has been called into doubt since the introduction of deep learning [66,67]. Deep learning enables important properties to be learnt automatically, without prior definition by researchers, and these abstract representations improve learning by boosting generality and accuracy while minimizing possible bias [68]. We tend to favor this view: owing to the constraints of human-defined radiomics, the distinctions between different tissue types might not be adequately accounted for when examining radiomic features.
More importantly, this study examines the clinical application of deep learning models. Previous deep learning results, while good, have only been evaluated on internal or external test cohorts and have not been applied to clinical practice, which runs contrary to the trend of personalized medicine [69,70,71]. Schmidt et al. contend that medical research should be directed toward clinical applications [72]. Thus, a deep learning system for clinicians based on the best model is useful, and our model demonstrates excellent predictive performance. When clinicians upload contrast-enhanced CT images obtained from colorectal cancer patients, the proposed deep learning system displays summary results for patient staging and RAS gene mutation status prediction without requiring specialist annotation. Despite the obstacles in translating medical research findings into therapeutic technologies, as Cabitza et al. pointed out, we feel it is a worthy undertaking [73,74]. With its rapid learning and data-processing capabilities, deep learning will change how we respond to colorectal cancer and become a vital tool for physicians.
This study has a number of limitations. First, the colorectal cancer patients in this study were recruited from a single center, and the deep learning system may perform poorly on contrast-enhanced CT scans from other institutions. In future research, we will make a deliberate effort to address inter-hospital variance through multi-center studies and to further improve the deep learning system. Second, only contrast-enhanced CT portal-phase images were employed for prediction in this investigation; the use of arterial-phase and delayed-phase contrast-enhanced CT imaging in colorectal cancer needs further research. Finally, the deep learning system in this investigation was built on a two-dimensional model. We will investigate the clinical use of 3D models with contrast-enhanced CT.

5. Conclusions

In conclusion, the proposed deep learning system can predict the preoperative staging and RAS gene mutation status of colorectal cancer patients using only contrast-enhanced CT images. The system will assist physicians in evaluating the staging and RAS mutation status of colorectal cancer patients prior to surgery and in selecting the best treatment strategy, thus decreasing the physical and financial burdens on patients.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/cancers15184497/s1. RAS mutation analysis.

Author Contributions

N.L. designed the study. J.Z. (Jianguo Zhu) and Y.L. collected and organized the clinical data. N.L. and X.G. completed the modeling and data analysis and wrote the manuscript. J.Z. (Jianping Zhang) was responsible for the submission of the final version of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 81874058, awarded to Jianping Zhang.

Institutional Review Board Statement

This study was approved by the Ethics Committee of the Second Affiliated Hospital of Nanjing Medical University (NO. 2023-KY-141-01). All private patient information was removed.

Informed Consent Statement

Informed consent was obtained from the patients or their family members.

Data Availability Statement

The data supporting this study are available from the corresponding author upon request.

Acknowledgments

The authors thank all colleagues who contributed to this work.

Conflicts of Interest

The authors declare that this study was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef] [PubMed]
  2. Cremolini, C.; Loupakis, F.; Antoniotti, C.; Lupi, C.; Sensi, E.; Lonardi, S.; Mezi, S.; Tomasello, G.; Ronzoni, M.; Zaniboni, A.; et al. FOLFOXIRI plus bevacizumab versus FOLFIRI plus bevacizumab as first-line treatment of patients with metastatic colorectal cancer: Updated overall survival and molecular subgroup analyses of the open-label, phase 3 TRIBE study. Lancet Oncol. 2015, 16, 1306–1315. [Google Scholar] [CrossRef] [PubMed]
  3. Strickler, J.H.; Wu, C.; Bekaii-Saab, T. Targeting BRAF in metastatic colorectal cancer: Maximizing molecular approaches. Cancer Treat. Rev. 2017, 60, 109–119. [Google Scholar] [CrossRef] [PubMed]
  4. Sundar, R.; Hong, D.S.; Kopetz, S.; Yap, T.A. Targeting BRAF-Mutant Colorectal Cancer: Progress in Combination Strategies. Cancer Discov. 2017, 7, 558–560. [Google Scholar] [CrossRef] [PubMed]
  5. Bibault, J.E.; Giraud, P.; Housset, M.; Durdux, C.; Taieb, J.; Berger, A.; Coriat, R.; Chaussade, S.; Dousset, B.; Nordlinger, B.; et al. Deep Learning and Radiomics predict complete response after neo-adjuvant chemoradiation for locally advanced rectal cancer. Sci. Rep. 2018, 8, 12611. [Google Scholar] [CrossRef]
  6. Camidge, D.R.; Doebele, R.C.; Kerr, K.M. Comparing and contrasting predictive biomarkers for immunotherapy and targeted therapy of NSCLC. Nature reviews. Clin. Oncol. 2019, 16, 341–355. [Google Scholar] [CrossRef]
  7. Li, Q.; Guan, X.; Chen, S.; Yi, Z.; Lan, B.; Xing, P.; Fan, Y.; Wang, J.; Luo, Y.; Yuan, P.; et al. Safety, Efficacy, and Biomarker Analysis of Pyrotinib in Combination with Capecitabine in HER2-Positive Metastatic Breast Cancer Patients: A Phase I Clinical Trial. Clin. Cancer Res. 2019, 25, 5212–5220. [Google Scholar] [CrossRef]
  8. Barras, D.; Missiaglia, E.; Wirapati, P.; Sieber, O.M.; Jorissen, R.N.; Love, C.; Molloy, P.L.; Jones, I.T.; McLaughlin, S.; Gibbs, P.; et al. BRAF V600E Mutant Colorectal Cancer Subtypes Based on Gene Expression. Clin. Cancer Res. 2017, 23, 104–115. [Google Scholar] [CrossRef]
  9. Peeters, M.; Oliner, K.S.; Price, T.J.; Cervantes, A.; Sobrero, A.F.; Ducreux, M.; Hotko, Y.; André, T.; Chan, E.; Lordick, F.; et al. Analysis of KRAS/NRAS Mutations in a Phase III Study of Panitumumab with FOLFIRI Compared with FOLFIRI Alone as Second-line Treatment for Metastatic Colorectal Cancer. Clin. Cancer Res. 2015, 21, 5469–5479. [Google Scholar] [CrossRef]
  10. Jia, L.L.; Zhao, J.X.; Zhao, L.P.; Tian, J.H.; Huang, G. Current status and quality of radiomic studies for predicting KRAS mutations in colorectal cancer patients: A systematic review and meta-analysis. Eur. J. Radiol. 2023, 158, 110640. [Google Scholar] [CrossRef]
  11. European Society of Radiology (ESR). White paper on imaging biomarkers. Insights Imaging 2010, 1, 42–45. [Google Scholar] [CrossRef]
  12. Aerts, H.J.; Velazquez, E.R.; Leijenaar, R.T.; Parmar, C.; Grossmann, P.; Carvalho, S.; Bussink, J.; Monshouwer, R.; Haibe-Kains, B.; Rietveld, D.; et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat. Commun. 2014, 5, 4006. [Google Scholar] [CrossRef]
  13. Amin, M.B.; Greene, F.L.; Edge, S.B.; Compton, C.C.; Gershenwald, J.E.; Brookland, R.K.; Meyer, L.; Gress, D.M.; Byrd, D.R.; Winchester, D.P. The Eighth Edition AJCC Cancer Staging Manual: Continuing to build a bridge from a population-based to a more “personalized” approach to cancer staging. CA Cancer J. Clin. 2017, 67, 93–99. [Google Scholar] [CrossRef]
  14. Huang, S.; Yang, J.; Fong, S.; Zhao, Q. Artificial intelligence in cancer diagnosis and prognosis: Opportunities and challenges. Cancer Lett. 2020, 471, 61–71. [Google Scholar] [CrossRef] [PubMed]
  15. Deo, R.C. Machine Learning in Medicine. Circulation 2015, 132, 1920–1930. [Google Scholar] [CrossRef] [PubMed]
  16. Wong, D.; Yip, S. Machine learning classifies cancer. Nature 2018, 555, 446–447. [Google Scholar] [CrossRef]
  17. Cellina, M.; Cè, M.; Irmici, G.; Ascenti, V.; Khenkina, N.; Toto-Brocchi, M.; Martinenghi, C.; Papa, S.; Carrafiello, G. Artificial Intelligence in Lung Cancer Imaging: Unfolding the Future. Diagnostics 2022, 12, 2644. [Google Scholar] [CrossRef]
  18. Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.S.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F.; et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell 2018, 172, 1122–1131.e9. [Google Scholar] [CrossRef] [PubMed]
  19. Jiang, K.; Jiang, X.; Pan, J.; Wen, Y.; Huang, Y.; Weng, S.; Lan, S.; Nie, K.; Zheng, Z.; Ji, S.; et al. Current Evidence and Future Perspective of Accuracy of Artificial Intelligence Application for Early Gastric Cancer Diagnosis with Endoscopy: A Systematic and Meta-Analysis. Front. Med. 2021, 8, 629080. [Google Scholar] [CrossRef]
  20. Kim, K.; Kim, S.; Han, K.; Bae, H.; Shin, J.; Lim, J.S. Diagnostic Performance of Deep Learning-Based Lesion Detection Algorithm in CT for Detecting Hepatic Metastasis from Colorectal Cancer. Korean J. Radiol. 2021, 22, 912–921. [Google Scholar] [CrossRef]
  21. Huang, Y.Q.; Liang, C.H.; He, L.; Tian, J.; Liang, C.S.; Chen, X.; Ma, Z.L.; Liu, Z.Y. Development and Validation of a Radiomics Nomogram for Preoperative Prediction of Lymph Node Metastasis in Colorectal Cancer. J. Clin. Oncol. 2016, 34, 2157–2164. [Google Scholar] [CrossRef] [PubMed]
  22. Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; Yang, Y. Random Erasing Data Augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13001–13008. [Google Scholar] [CrossRef]
  23. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
  24. Drozdzal, M.; Chartrand, G.; Vorontsov, E.; Shakeri, M.; Di Jorio, L.; Tang, A.; Romero, A.; Bengio, Y.; Pal, C.; Kadoury, S. Learning normalized inputs for iterative estimation in medical image segmentation. Med. Image Anal. 2018, 44, 1–13. [Google Scholar] [CrossRef]
  25. Tan, M.; Le, Q.V. Efficientnet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946. [Google Scholar]
  26. Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A convnet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11976–11986. [Google Scholar]
  27. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations, Virtual, 3–7 May 2021. [Google Scholar]
  28. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. arXiv 2021, arXiv:2103.14030. [Google Scholar]
  29. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 6000–6010. [Google Scholar]
  30. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  31. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
  32. Selvaraju, R.R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. Int. J. Comput. Vis. 2016, 128, 336–359. [Google Scholar] [CrossRef]
  33. National Comprehensive Cancer Network (NCCN) Guidelines. Available online: http://www.nccn.org/ (accessed on 11 January 2022).
  34. Nasseri, Y.; Langenfeld, S.J. Imaging for Colorectal Cancer. Surg. Clin. N. Am. 2017, 97, 503–513. [Google Scholar] [CrossRef]
  35. Levy, I.; Gralnek, I.M. Complications of diagnostic colonoscopy, upper endoscopy, and enteroscopy. Best Pract. Res. Clin. Gastroenterol. 2016, 30, 705–718. [Google Scholar] [CrossRef]
  36. He, P.; Zou, Y.; Qiu, J.; Yang, T.; Peng, L.; Zhang, X. Pretreatment (18)F-FDG PET/CT Imaging Predicts the KRAS/NRAS/BRAF Gene Mutational Status in Colorectal Cancer. J. Oncol. 2021, 2021, 6687291. [Google Scholar] [CrossRef]
  37. Obaro, A.E.; Plumb, A.A.; Fanshawe, T.R.; Torres, U.S.; Baldwin-Cleland, R.; Taylor, S.A.; Halligan, S.; Burling, D.N. Post-imaging colorectal cancer or interval cancer rates after CT colonography: A systematic review and meta-analysis. Lancet Gastroenterol. Hepatol. 2018, 3, 326–336. [Google Scholar] [CrossRef] [PubMed]
  38. Dong, D.; Fang, M.-J.; Tang, L.; Shan, X.-H.; Gao, J.-B.; Giganti, F.; Wang, R.-P.; Chen, X.; Wang, X.-X.; Palumbo, D.; et al. Deep Learning Radiomic Nomogram Can Predict the Number of Lymph Node Metastasis in Locally Advanced Gastric Cancer: An International Multicenter Study. Ann. Oncol. 2020, 31, 912–920. [Google Scholar] [CrossRef] [PubMed]
  39. Pacal, I.; Karaboga, D.; Basturk, A.; Akay, B.; Nalbantoglu, U. A comprehensive review of deep learning in colon cancer. Comput. Biol. Med. 2020, 126, 104003. [Google Scholar] [CrossRef] [PubMed]
  40. Liang, F.; Wang, S.; Zhang, K.; Liu, T.J.; Li, J.N. Development of artificial intelligence technology in diagnosis, treatment, and prognosis of colorectal cancer. World J. Gastrointest. Oncol. 2022, 14, 124–152. [Google Scholar] [CrossRef] [PubMed]
  41. Bedrikovetski, S.; Dudi-Venkata, N.N.; Kroon, H.M.; Seow, W.; Vather, R.; Carneiro, G.; Moore, J.W.; Sammour, T. Artificial intelligence for pre-operative lymph node staging in colorectal cancer: A systematic review and meta-analysis. BMC Cancer 2021, 21, 1058. [Google Scholar] [CrossRef]
  42. Minami, S.; Saso, K.; Miyoshi, N.; Fujino, S.; Kato, S.; Sekido, Y.; Hata, T.; Ogino, T.; Takahashi, H.; Uemura, M.; et al. Diagnosis of Depth of Submucosal Invasion in Colorectal Cancer with AI Using Deep Learning. Cancers 2022, 14, 5361. [Google Scholar] [CrossRef]
  43. Wu, Q.Y.; Liu, S.L.; Sun, P.; Li, Y.; Liu, G.W.; Liu, S.S.; Hu, J.L.; Niu, T.Y.; Lu, Y. Establishment and clinical application value of an automatic diagnosis platform for rectal cancer T-staging based on a deep neural network. Chin. Med. J. 2021, 134, 821–828. [Google Scholar] [CrossRef]
  44. Hou, M.; Zhou, L.; Sun, J. Deep-learning-based 3D super-resolution MRI radiomics model: Superior predictive performance in preoperative T-staging of rectal cancer. Eur. Radiol. 2023, 33, 1–10. [Google Scholar] [CrossRef]
  45. AK, A.A.; Garvin, J.H.; Redd, A.; Carter, M.E.; Sweeny, C.; Meystre, S.M. Automated Extraction and Classification of Cancer Stage Mentions from Unstructured Text Fields in a Central Cancer Registry. AMIA Jt. Summits Transl. Sci. Proc. 2018, 2017, 16–25. [Google Scholar]
  46. Lu, Y.; Yu, Q.; Gao, Y.; Zhou, Y.; Liu, G.; Dong, Q.; Ma, J.; Ding, L.; Yao, H.; Zhang, Z.; et al. Identification of Metastatic Lymph Nodes in MR Imaging with Faster Region-Based Convolutional Neural Networks. Cancer Res. 2018, 78, 5135–5143. [Google Scholar] [CrossRef]
  47. Kubota, K.; Suzuki, A.; Shiozaki, H.; Wada, T.; Kyosaka, T.; Kishida, A. Accuracy of Multidetector-Row Computed Tomography in the Preoperative Diagnosis of Lymph Node Metastasis in Patients with Gastric Cancer. Gastrointest. Tumors 2017, 3, 163–170. [Google Scholar] [CrossRef]
  48. Joo, I.; Lee, J.M.; Kim, J.H.; Shin, C.-I.; Han, J.K.; Choi, B.I. Prospective Comparison of 3T MRI with Diffusion-Weighted Imaging and MDCT for the Preoperative TNM Staging of Gastric Cancer. J. Magn. Reson. Imaging 2015, 41, 814–821. [Google Scholar] [CrossRef]
  49. Zheng, L.; Zhang, X.; Hu, J.; Gao, Y.; Zhang, X.; Zhang, M.; Li, S.; Zhou, X.; Niu, T.; Lu, Y.; et al. Establishment and Applicability of a Diagnostic System for Advanced Gastric Cancer T Staging Based on a Faster Region-Based Convolutional Neural Network. Front. Oncol. 2020, 10, 1238. [Google Scholar] [CrossRef]
  50. Tang, Y.L.; Li, D.D.; Duan, J.Y.; Sheng, L.M.; Wang, X. Resistance to targeted therapy in metastatic colorectal cancer: Current status and new developments. World J. Gastroenterol. 2023, 29, 926–948. [Google Scholar] [CrossRef] [PubMed]
  51. Zou, L.; Jiang, Q.; Guo, T.; Wu, X.; Wang, Q.; Feng, Y.; Zhang, S.; Fang, W.; Zhou, W.; Yang, A. Endoscopic characteristics in predicting prognosis of biopsy-diagnosed gastric low-grade intraepithelial neoplasia. Chin. Med. J. 2022, 135, 26–35. [Google Scholar] [CrossRef] [PubMed]
  52. Wang, N.; Wang, X.; Li, W.; Ye, H.; Bai, H.; Wu, J.; Chen, M. Contrast-Enhanced CT Parameters of Gastric Adenocarcinoma: Can Radiomic Features Be Surrogate Biomarkers for HER2 over-Expression Status? Cancer Manag. Res. 2020, 12, 1211–1219. [Google Scholar] [CrossRef] [PubMed]
  53. Kalligosfyri, P.M.; Nikou, S.; Karteri, S.; Kalofonos, H.P.; Bravou, V.; Kalogianni, D.P. Rapid Multiplex Strip Test for the Detection of Circulating Tumor DNA Mutations for Liquid Biopsy Applications. Biosensors 2022, 12, 97. [Google Scholar] [CrossRef] [PubMed]
  54. Wang, J.; Wuethrich, A.; Sina, A.A.; Lane, R.E.; Lin, L.L.; Wang, Y.; Cebon, J.; Behren, A.; Trau, M. Tracking extracellular vesicle phenotypic changes enables treatment monitoring in melanoma. Sci. Adv. 2020, 6, eaax3223. [Google Scholar] [CrossRef]
  55. Chang, X.; Guo, X.; Li, X.; Han, X.; Li, X.; Liu, X.; Ren, J. Potential Value of Radiomics in the Identification of Stage T3 and T4a Esophagogastric Junction Adenocarcinoma Based on Contrast-Enhanced CT Images. Front. Oncol. 2021, 11, 627947. [Google Scholar] [CrossRef] [PubMed]
  56. Liu, Y.; Kim, J.; Balagurunathan, Y.; Li, Q.; Garcia, A.L.; Stringfield, O.; Ye, Z.; Gillies, R.J. Radiomic Features Are Associated with EGFR Mutation Status in Lung Adenocarcinomas. Clin. Lung Cancer 2016, 17, 441–448.e6. [Google Scholar] [CrossRef]
  57. Russo, M.; Crisafulli, G.; Sogari, A.; Reilly, N.M.; Arena, S.; Lamba, S.; Bartolini, A.; Amodio, V.; Magrì, A.; Novara, L.; et al. Adaptive mutability of colorectal cancers in response to targeted therapies. Science 2019, 366, 1473–1480. [Google Scholar] [CrossRef]
  58. Russo, M.; Siravegna, G.; Blaszkowsky, L.S.; Corti, G.; Crisafulli, G.; Ahronian, L.G.; Mussolin, B.; Kwak, E.L.; Buscarino, M.; Lazzari, L.; et al. Tumor Heterogeneity and Lesion-Specific Response to Targeted Therapy in Colorectal Cancer. Cancer Discov. 2016, 6, 147–153. [Google Scholar] [CrossRef] [PubMed]
  59. Liu, S.; Liu, S.; Ji, C.; Zheng, H.; Pan, X.; Zhang, Y.; Guan, W.; Chen, L.; Guan, Y.; Li, W.; et al. Application of CT texture analysis in predicting histopathological characteristics of gastric cancers. Eur. Radiol. 2017, 27, 4951–4959. [Google Scholar] [CrossRef]
  60. Li, M.; Zhang, J.; Dan, Y.; Yao, Y.; Dai, W.; Cai, G.; Yang, G.; Tong, T. A clinical-radiomics nomogram for the preoperative prediction of lymph node metastasis in colorectal cancer. J. Transl. Med. 2020, 18, 46. [Google Scholar] [CrossRef] [PubMed]
  61. Dou, Y.; Liu, Y.; Kong, X.; Yang, S. T staging with functional and radiomics parameters of computed tomography in colorectal cancer patients. Medicine 2022, 101, e29244. [Google Scholar] [CrossRef] [PubMed]
  62. Xue, T.; Peng, H.; Chen, Q.; Li, M.; Duan, S.; Feng, F. Preoperative prediction of KRAS mutation status in colorectal cancer using a CT-based radiomics nomogram. Br. J. Radiol. 2022, 95, 20211014. [Google Scholar] [CrossRef]
  63. Yun, J.; Park, J.E.; Lee, H.; Ham, S.; Kim, N.; Kim, H.S. Radiomic features and multilayer perceptron network classifier: A robust MRI classification strategy for distinguishing glioblastoma from primary central nervous system lymphoma. Sci. Rep. 2019, 9, 5746. [Google Scholar] [CrossRef]
  64. Chalkidou, A.; O’Doherty, M.J.; Marsden, P.K. False Discovery Rates in PET and CT Studies with Texture Features: A Systematic Review. PLoS ONE 2015, 10, e0124165. [Google Scholar] [CrossRef]
  65. Traverso, A.; Wee, L.; Dekker, A.; Gillies, R. Repeatability and Reproducibility of Radiomic Features: A Systematic Review. Int. J. Radiat. Oncol. Biol. Phys. 2018, 102, 1143–1158. [Google Scholar] [CrossRef]
  66. Xu, Y.; Hosny, A.; Zeleznik, R.; Parmar, C.; Coroller, T.; Franco, I.; Mak, R.H.; Aerts, H. Deep Learning Predicts Lung Cancer Treatment Response from Serial Medical Imaging. Clin. Cancer Res. 2019, 25, 3266–3275. [Google Scholar] [CrossRef]
  67. Hosny, A.; Parmar, C.; Coroller, T.P.; Grossmann, P.; Zeleznik, R.; Kumar, A.; Bussink, J.; Gillies, R.J.; Mak, R.H.; Aerts, H. Deep learning for lung cancer prognostication: A retrospective multi-cohort radiomics study. PLoS Med. 2018, 15, e1002711. [Google Scholar] [CrossRef]
  68. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  69. Li, L.; Chen, Y.; Shen, Z.; Zhang, X.; Sang, J.; Ding, Y.; Yang, X.; Li, J.; Chen, M.; Jin, C.; et al. Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging. Gastric Cancer 2020, 23, 126–132. [Google Scholar] [CrossRef] [PubMed]
  70. Ueyama, H.; Kato, Y.; Akazawa, Y.; Yatagai, N.; Komori, H.; Takeda, T.; Matsumoto, K.; Ueda, K.; Matsumoto, K.; Hojo, M.; et al. Application of artificial intelligence using a convolutional neural network for diagnosis of early gastric cancer based on magnifying endoscopy with narrow-band imaging. J. Gastroenterol. Hepatol. 2021, 36, 482–489. [Google Scholar] [CrossRef]
  71. Balachandran, V.P.; Gonen, M.; Smith, J.J.; DeMatteo, R.P. Nomograms in oncology: More than meets the eye. Lancet Oncol. 2015, 16, e173–e180. [Google Scholar] [CrossRef] [PubMed]
  72. Schmidt, D.R.; Patel, R.; Kirsch, D.G.; Lewis, C.A.; Vander Heiden, M.G.; Locasale, J.W. Metabolomics in cancer research and emerging applications in clinical oncology. CA Cancer J. Clin. 2021, 71, 333–358. [Google Scholar] [CrossRef] [PubMed]
  73. de Boer, L.L.; Spliethoff, J.W.; Sterenborg, H.; Ruers, T.J.M. Review: In vivo optical spectral tissue sensing-how to go from research to routine clinical application? Lasers Med. Sci. 2017, 32, 711–719. [Google Scholar] [CrossRef]
  74. Cabitza, F.; Rasoini, R.; Gensini, G.F. Unintended Consequences of Machine Learning in Medicine. JAMA 2017, 318, 517–518. [Google Scholar] [CrossRef]
Figure 1. Examples of images selected for this study.
Figure 2. Variation in each metric using the Yolov7 over 200 epochs. mAP_0.5: mean average precision (IoU = 0.5). The Yolov7 attained the best-optimized parameters after 180 learning epochs, with a precision of 0.96 and a recall of 0.95 in the test cohort.
Figure 3. Performance of the detection model in the test and validation cohorts. (A,B) The confusion matrices in the test cohort (A) and the validation cohort (B). (C,D) The P-R curves. The mAP_0.5 values in the test cohort (C) and the validation cohort (D) are 0.981 and 0.970, respectively. (E,F) The F1 curves. The model’s F1 scores in the test (E) and validation (F) cohorts are 0.95 and 0.96, respectively.
Figure 4. Variation in each metric over 200 epochs using different neural networks. After 170 learning epochs, all neural networks attained the best-optimized parameters based on the training loss and accuracy value. In the test cohort, the VIT model outperformed the CNNs in classification.
Figure 5. Evaluation of the prediction model’s performance. (A,B) The confusion matrices in the test (A) and validation (B) cohorts. (C) The ROC curves. The AUC values of the test cohort and the validation cohort are 0.9591 and 0.9554, respectively. (D) The P-R curves. These results show that the prediction model has good classification performance.
Figure 6. Contrast-enhanced CT portal-phase images and the corresponding feature heat maps produced by the prediction model. Color bars show the prominence of the features.
Table 1. Characteristics of patients included in the detection model (n = 231).

Clinical Characteristics           Value
Age, mean ± SD                     63.97 ± 11.086
Gender, No.
  Male                             170
  Female                           61
Laboratory tests, median (IQR)
  Albumin                          40.60 (37.30, 42.90)
  Neutrophil                       4.49 (3.23, 6.52)
  Lymphocyte                       1.29 (0.94, 1.71)
CEA level, No.
  Normal                           179
  Abnormal                         52
CA125 level, No.
  Normal                           203
  Abnormal                         28
CA199 level, No.
  Normal                           186
  Abnormal                         45
Table 2. Characteristics of patients included in the prediction model (n = 197).

Clinical Characteristics           Value
Age, mean ± SD                     63.79 ± 11.143
Gender, No.
  Male                             148
  Female                           49
Laboratory tests, median (IQR)
  Albumin                          40.50 (37.85, 43.65)
  Neutrophil                       4.11 (3.15, 5.86)
  Lymphocyte                       1.34 (1.05, 1.74)
CEA level, No.
  Normal                           145
  Abnormal                         52
CA125 level, No.
  Normal                           165
  Abnormal                         32
CA199 level, No.
  Normal                           158
  Abnormal                         39
