Article

Noninvasive Classification of Glioma Subtypes Using Multiparametric MRI to Improve Deep Learning

1. Department of Magnetic Resonance, Lanzhou University Second Hospital, Lanzhou 730030, China
2. Second Clinical School, Lanzhou University, Lanzhou 730000, China
3. School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, China
4. Department of Clinical Science, Philips Healthcare, Xi’an 710000, China
5. Department of Pathology, Lanzhou University Second Hospital, Lanzhou 730030, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Diagnostics 2022, 12(12), 3063; https://doi.org/10.3390/diagnostics12123063
Submission received: 31 October 2022 / Revised: 26 November 2022 / Accepted: 3 December 2022 / Published: 6 December 2022
(This article belongs to the Section Medical Imaging and Theranostics)

Abstract

Background: Deep learning (DL) methods can noninvasively predict glioma subtypes; however, there is no set paradigm for the selection of network structures and input data, including the image combination method, image processing strategy, type of numeric data, and others. Purpose: To compare different combinations of DL frameworks (ResNet, ConvNext, and vision transformer (VIT)), image preprocessing strategies, magnetic resonance imaging (MRI) sequences, and numeric data for increasing the accuracy of DL models in differentiating glioma subtypes prior to surgery. Methods: Our dataset consisted of 211 patients with newly diagnosed gliomas who underwent preoperative MRI with standard and diffusion-weighted imaging methods. Different data combinations were used as input for the three DL classifiers. Results: The accuracies of the image preprocessing strategies, including skull stripping, segment addition, and individual treatment of slices, were 5%, 10%, and 12.5% higher, respectively, than those of the alternative strategies. The accuracy increased by 7.5% and 10% following the addition of ADC maps and numeric data, respectively. ResNet34 exhibited the best performance, which was 5% and 17.5% higher than that of ConvNext tiny and VIT-base, respectively. Conclusions: The findings demonstrated that the addition of quantitative numeric data, ADC images, and effective image preprocessing strategies improved model accuracy for datasets of similar size. The performance of ResNet was superior for small or medium datasets.

1. Introduction

Molecular biomarkers of gliomas provide important information regarding the diagnosis and prognosis of gliomas [1,2]. According to the new WHO Classification of Tumors of the Central Nervous System (WHO CNS5, Geneva, Switzerland, 2021), diffuse gliomas are categorized into three subgroups based on mutations in isocitrate dehydrogenase (IDH), 1p19q-codeletion status, and mutations in telomerase reverse transcriptase (TERT) (astrocytic, IDH-mutant; oligodendroglial, IDH-mutant, and 1p/19q-codeleted; glioblastoma (GBM), IDH-wildtype) [3]. Previous studies have demonstrated that compared to IDH-wildtype gliomas, IDH-mutant gliomas are less aggressive and have a better response to treatment with temozolomide [4]. Similar to the IDH mutants, the 1p/19q-codeleted subtype responds well to certain combinations of chemotherapy and is associated with a better prognosis [5]. Therefore, understanding the molecular type of gliomas is essential for providing prognosis information and selecting the most appropriate treatment regimen.
Genetic information regarding gliomas is currently obtained by analysis of biopsied or surgically resected tumor tissues by neuropathologists. However, analysis of the same sample by different experts is likely to result in inconsistencies even when strict grading strategy standards are followed, owing to the heterogeneity of the tumor itself and the extraction of atypical samples [1]. Additionally, molecular genetic testing approaches are usually expensive and time-consuming, and all medical institutions are not capable of providing professional testing services. Noninvasive imaging techniques that provide supplementary information for treatment decisions can aid doctors in explaining the prognosis for some patients who cannot undergo surgery.
Magnetic resonance imaging (MRI) allows the noninvasive evaluation of patients by capturing the genetic information of the local region and by revealing the heterogeneity of the global area. Numerous studies in recent years have attempted to use MRI to perform virtual biopsies, and deep learning (DL) has proved to be a successful approach in this regard.
DL is a subset of machine learning, and is essentially a neural network with multiple layers. It has four main applications in medical image analysis tasks: classification, segmentation, detection, and registration [6,7]. The network structures used for classification tasks primarily include convolutional neural networks (CNNs) and non-convolutional vision transformers (VITs). Studies on the molecular typing of gliomas have employed residual convolutional neural networks (ResNet) to achieve superior results, and the accuracy on the overall test set is reported to be greater than 85% [8,9,10]. The unique residual block of ResNet deepens the network without degrading performance [8]. These studies additionally demonstrated that the establishment of a three-class model aids in increasing model accuracy by lowering the accumulation of errors inherent in multiple two-step strategies. The studies further confirmed that the shortcomings of traditional MRI may be compensated by ADC imaging, which can provide insights into tumor infiltration in a narrow range [9]. However, CNNs cannot represent lower-level features outside their effective receptive fields, because higher-level feature maps only describe the characteristics within those fields. The attention mechanism of VITs has therefore been employed for solving this problem. VITs divide an image into a series of non-overlapping small blocks, which are analyzed as elements, similar to words [11]. It has been demonstrated that the performance of VITs is superior in certain X-ray, computed tomography (CT), and MRI classification tasks [12,13,14]. It has been recently demonstrated that the performance of the ConvNext network, a CNN inspired by the Swin Transformer parameter settings, is comparable to that of the VIT model on the ImageNet-22K dataset [15].
ConvNext has also performed well in predicting breast tumors from ultrasound (US) imaging data and in detecting COVID-19 from lung CT imaging data [16].
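The residual block mentioned above, which lets ResNet grow deeper without degradation, can be illustrated with a minimal numpy sketch. The two matrix multiplications stand in for the convolutions of the real ResNet34; weights and dimensions are purely illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Minimal residual block: out = ReLU(F(x) + x), where F is two linear
    transforms with a ReLU in between (convolutions in the real ResNet).
    The identity shortcut lets gradients bypass F, so very deep networks
    can be trained without degrading performance."""
    fx = w2 @ relu(w1 @ x)
    return relu(fx + x)

# With zero weights, F(x) = 0 and the block reduces to the identity on
# nonnegative inputs, which is exactly the property the shortcut guarantees.
x = np.ones(4)
out_zero = residual_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
```

The design choice is that a block can always fall back to the identity mapping, so adding more blocks never has to hurt accuracy in principle.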
Although comprehensive predictive models have been described in previous studies, several issues with DL applications still need to be resolved. At present, there is no established data-slicing method on the input side, and the accuracy of the models is affected by the permutation and combination of different MRI sequences. Additionally, the incorporation of numeric data, including clinical information and known hallmark features of each subtype, such as patient age and gender, tumor position, and T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) mismatches, can improve the predictive ability of the models. The architecture of the neural network also plays an important role in determining the classification effect.
In this study, we compared different DL frameworks, MRI sequences, and slice preprocessing strategies for improving the accuracy of our model, and aimed to explore the effect of different inputs on the accuracy of our model in predicting glioma subtypes. Numeric mixed data extracted from clinical information and known hallmark features were also added as extra feature inputs for predicting glioma subtypes.

2. Materials and Methods

2.1. Patients

This retrospective study was approved by the local institutional review board and informed consent was waived. A total of 451 patients with brain tumors were recruited from Lanzhou University Second Hospital, China, between January 2019 and April 2022. Patients with the following inclusion criteria were selected for the study: (1) patients ≥ 18 years old; (2) patients with pathologically confirmed gliomas; and (3) patients who did not undergo prior treatment. The exclusion criteria were as follows: (1) patients with indeterminable molecular subgroups (n = 132); (2) patients whose preoperative MRI acquisitions did not include T1-weighted post-contrast (T1c), T2-weighted (T2), T2-FLAIR, or ADC imaging (n = 108). A flowchart of the strategy used for patient inclusion/exclusion in this study is provided in Figure 1. A total of 211 patients were finally included in the study.

2.2. Pathological Analysis

The operative tissue samples were processed using standard clinical techniques. The IDH1 mutation status was assessed by immunohistochemistry analysis using the H09 clone (Dianova, Hamburg, Germany) generated against the R132H mutant of IDH1. Fluorescent in situ hybridization testing was performed for assessing 1p/19q codeletion. The TERT promoter mutation status was assigned by sequencing. All the cases were evaluated and re-classified according to the WHO CNS5 criteria by two experienced pathologists who were blinded to the findings of imaging. The final decision was made by a third pathologist in case of any discordance in the results.

2.3. MRI Protocols

MRI was performed using a 3T scanner (Ingenia and Ingenia CX; Philips Healthcare, Best, The Netherlands) equipped with 80/200 mT/m gradients and a 16-channel dS Head coil. The imaging parameters of the MRI sequence protocols are summarized in Table S1. The imaging protocol included T2WI and T2-weighted FLAIR, along with 3D T1-weighted IR-SPGR imaging, which was performed before and after the injection of a gadolinium-based contrast agent. A 2D trace-weighted single-shot echo planar imaging sequence (TR/TE: 2668/88 ms, slice thickness: 6 mm, matrix: 144 × 192, number of acquisitions: 5, acquisition time: 32 s, flip angle: 90°) was used for diffusion-weighted imaging. The ADC map was automatically computed with 2 gradient values (b = 0 and b = 1000 s/mm²) by the processing software provided with the MRI scanner.
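The two-b-value ADC computation performed by the scanner software can be sketched as follows. This assumes the standard mono-exponential decay model; the function name and clamping constant are illustrative, not the vendor's implementation.

```python
import numpy as np

def compute_adc(s_b0, s_b1000, b=1000.0, eps=1e-6):
    """Voxel-wise ADC (mm^2/s) from two diffusion-weighted images.

    Assumes mono-exponential signal decay S_b = S_0 * exp(-b * ADC),
    so ADC = ln(S_0 / S_b) / b, with b in s/mm^2. Signals are clamped
    to avoid division by zero in background voxels.
    """
    s_b0 = np.asarray(s_b0, dtype=float)
    s_b1000 = np.asarray(s_b1000, dtype=float)
    ratio = np.clip(s_b0 / np.maximum(s_b1000, eps), eps, None)
    return np.log(ratio) / b

# A voxel whose signal drops by a factor of e at b = 1000 s/mm^2
# has ADC = 1/b = 1.0e-3 mm^2/s.
adc = compute_adc(100.0, 100.0 * np.exp(-1.0))
```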

2.4. Dataset

The three glioma subtypes were randomly divided at a ratio of 7:1.5:1.5 into the training, validation, and test sets, respectively. The proportions of the three tumor types in the training, validation, and test sets were checked to match those of the full patient cohort. The final training, test, and validation sets comprised 139, 32, and 40 patients, respectively (Table 1).
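A stratified split of this kind can be sketched as follows. The ratios follow the paper's 7:1.5:1.5 scheme; the function, seed, and rounding policy are illustrative assumptions, not the authors' code.

```python
import random
from collections import defaultdict

def stratified_split(labels, ratios=(0.70, 0.15, 0.15), seed=42):
    """Split patient indices into train/val/test while preserving the
    per-class proportions of the full cohort.

    `labels` holds one subtype label per patient. Each class is shuffled
    and sliced independently, so every set mirrors the overall class mix.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    train, val, test = [], [], []
    for lab, idxs in by_class.items():
        rng.shuffle(idxs)
        n = len(idxs)
        n_train = round(n * ratios[0])
        n_val = round(n * ratios[1])
        train += idxs[:n_train]
        val += idxs[n_train:n_train + n_val]
        test += idxs[n_train + n_val:]
    return train, val, test
```

Splitting at the patient level (rather than the slice level) prevents slices from one patient leaking between training and test sets.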

2.5. Image Postprocessing

All the images were registered to the T1 image volumes using the BRAINSFit or Elastix tool of 3D Slicer with B-spline warping, and resampled to an identical 1 mm isotropic spatial resolution (Figure 2).
The signal intensities were subsequently normalized across all the images, as described hereafter. Firstly, the images in the skull-stripping method were obtained by manually delineating the ROI. Secondly, pixel intensities were clipped at the 99.9th percentile to suppress extreme values. Thirdly, the mean pixel intensity was subtracted, and the resulting values were divided by the standard deviation. Fourthly, the images were scaled to values between 0 and 1 by subtracting the minimum and dividing by the range between the minimum and maximum pixel intensities.
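The four normalization steps above can be sketched in numpy as follows. The step ordering follows the text; the epsilon guards and the masking convention are implementation assumptions.

```python
import numpy as np

def normalize_slice(img, mask=None, clip_pct=99.9):
    """Normalize one MRI slice:
    1. optionally keep only the manually delineated ROI (skull stripping),
    2. clip intensities at the 99.9th percentile,
    3. z-score (subtract mean, divide by standard deviation),
    4. rescale linearly to [0, 1].
    """
    img = np.asarray(img, dtype=float)
    if mask is not None:
        img = img * mask                      # zero out non-brain pixels
    ceiling = np.percentile(img, clip_pct)
    img = np.minimum(img, ceiling)            # clip extreme intensities
    img = (img - img.mean()) / (img.std() + 1e-8)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

normed = normalize_slice(np.random.default_rng(0).normal(100.0, 20.0, (64, 64)))
```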
The image slice with the largest tumor area in the axial direction was selected using masks of the 3D segmented lesion volumes of T2WI and T1c. Additional slices, spaced 5 mm apart, were added from the two layers adjacent to the slice with the largest tumor area in the axial direction. The images were not cropped, as this yielded better accuracy.
Different combinations of the 3 image modalities obtained from each of the three groups described in Figure 2 were subsequently used to produce a multi-contrast RGB image for each slice, where each image contrast was preserved as a red, green, or blue color channel, similar to the method described by Julia et al. [17]. The multi-contrast RGB image thus constructed enabled transfer learning from a pre-trained network with exceptionally high accuracy in classifying a large-scale dataset of millions of color images in ImageNet.
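Packing three co-registered contrasts into the channels of one RGB image can be sketched as follows. The particular channel assignment (T1c, FLAIR, ADC) is illustrative; only consistency across the dataset matters.

```python
import numpy as np

def to_rgb(contrast_r, contrast_g, contrast_b):
    """Stack three co-registered, [0, 1]-normalized MRI contrasts into a
    single 8-bit RGB image, so that a network pre-trained on ImageNet's
    three-channel color images can be reused via transfer learning."""
    rgb = np.stack([contrast_r, contrast_g, contrast_b], axis=-1)
    return (rgb * 255).astype(np.uint8)

# Illustrative stand-ins for normalized T1c, FLAIR, and ADC slices.
t1c, flair, adc_map = (np.random.default_rng(i).random((224, 224)) for i in range(3))
rgb_image = to_rgb(t1c, flair, adc_map)
```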

2.6. Numeric Data

The numeric data included the age (at the time of surgery) and gender of the patients, tumor position, T2-FLAIR mismatches, gadolinium enhancement, and tumor margins.
The position of a tumor was based on the coordinates of its center, and was divided into three groups: the first group included tumors in the frontal, parietal, or occipital lobes; the second group included tumors in the temporal and insular lobes; and the third group included tumors in other regions. Gadolinium enhancement was classified as non-enhancement, patchy enhancing, or rim enhancing.
T2-FLAIR mismatch signs were indicated by the presence or absence of complete/near-complete hyperintense signals on T2WI and relatively hypointense signals on FLAIR except for a hyperintense peripheral rim.
The tumor margin was selected as an indicator of a strong predominance of tumor character; sharp or blurred tumor interfaces with the brain were observed on both T1- and T2-weighted sequences. Tumor margin circumscription was judged as a summary marker from all pulse sequences, and was considered sharp if >50 percent of the tumor circumference was geographically marginated, as if a pencil line could be traced around the tumor. Circumscription was deemed absent if less than 50 percent of the tumor circumference was circumscribed [9,18].
The aforementioned numeric data were analyzed by two neuroradiologist readers who were blinded to the histopathologic diagnosis, molecular classification, and patient outcome. Inter-reader agreement was determined following the collection of independent data, and discordant results were resolved through consensus.
The categorical variables, including the gender of the patients, tumor position, gadolinium enhancement, T2-FLAIR mismatch signs, and tumor margins, were one-hot encoded.
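One-hot encoding of the categorical variables above can be sketched as follows. The category lists mirror those described in the text; the exact label strings are illustrative.

```python
def one_hot(value, categories):
    """One-hot encode a single categorical value against a fixed
    category list, returning a 0/1 vector with exactly one 1."""
    vec = [0] * len(categories)
    vec[categories.index(value)] = 1
    return vec

# Category lists mirroring the text (label strings are assumptions).
ENHANCEMENT = ["non-enhancement", "patchy", "rim"]
POSITION = ["frontal/parietal/occipital", "temporal/insular", "other"]

# Encoded variables are concatenated into one numeric feature vector.
features = one_hot("rim", ENHANCEMENT) + one_hot("temporal/insular", POSITION)
```

Continuous variables such as age are appended to this vector as-is (typically after scaling).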

2.7. Model Details

2.7.1. Slice Preprocessing Strategies

Three groups were set up for comparing the image processing strategies and determining the optimum approach that can be applied for improving model accuracy. Group A and group B independently investigated whether skull stripping and lesion segments, which contain location information but lack image information, would improve prediction accuracy. Group C investigated two different slice-combining paradigms, of which the first paradigm involved the individual treatment of each slice during training and combining slice predictions later on, as described by Chang et al. [19]. The second strategy involved the pooling of slices for a single prediction per patient, as described by Bien et al. (Figure 3) [20].
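The two slice-combining paradigms can be contrasted in a short numpy sketch. Function names and shapes are illustrative; the real models operate on CNN feature maps rather than precomputed vectors.

```python
import numpy as np

def predict_slicewise(slice_probs):
    """Paradigm 1: each slice is classified independently during training
    (losses are backpropagated slice by slice); slice-level class
    probabilities are averaged afterwards into one patient prediction."""
    return int(np.mean(slice_probs, axis=0).argmax())

def predict_pooled(slice_features, head_weights):
    """Paradigm 2: all slices of a patient form one batch; their feature
    vectors are average-pooled into a single vector that passes through
    the classification head once, so a single loss value is
    backpropagated per patient."""
    pooled = np.mean(slice_features, axis=0)   # (n_features,)
    logits = pooled @ head_weights             # head_weights: (n_features, 3)
    return int(logits.argmax())

# Three slices, three subtype classes.
slice_probs = np.array([[0.6, 0.3, 0.1],
                        [0.2, 0.7, 0.1],
                        [0.5, 0.4, 0.1]])
patient_class = predict_slicewise(slice_probs)
```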

2.7.2. Structures

As depicted in Figure 4, 3-class models of DL were developed for subtyping diffuse gliomas based on ResNet34, ConvNext tiny, and VIT-base.
ResNet34, ConvNext tiny, and VIT-base were pre-trained on the ILSVRC2012 dataset (ImageNet-1K), and two transfer learning strategies were used for training, as described hereafter [21]. In the first strategy, a fully connected layer was added to the original network, and only image data were used for training; the second strategy involved retraining the fully connected layers. The first transfer learning strategy was therefore applied to models that used only image inputs, while the second strategy was applied to models that used both image and numeric data inputs. The different combinations of MRI sequences and slice preprocessing strategies were compared using ResNet34 and the first transfer learning strategy. The performance of the different networks was compared using the same combination of MRI sequences and the second transfer learning strategy. The network with the best performance integrated the numeric data to generate the final classification result. The flowchart of the entire process is depicted in Figure 5.
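The fused classification head for the image-plus-numeric models can be sketched in numpy. This is an assumed architecture consistent with the description, not the authors' exact implementation: backbone features (e.g., the 512-dimensional ResNet34 penultimate layer) are concatenated with the encoded numeric data, and a retrained fully connected layer maps them to the three subtype logits.

```python
import numpy as np

def fused_forward(image_features, numeric_features, w_fc, b_fc):
    """Forward pass of a fused head: concatenate frozen-backbone image
    features with numeric features, apply one fully connected layer, and
    return softmax probabilities over the three glioma subtypes."""
    x = np.concatenate([image_features, numeric_features])
    logits = w_fc @ x + b_fc
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
img_feat = rng.random(512)                    # e.g. ResNet34 penultimate layer
num_feat = rng.random(10)                     # age + one-hot clinical variables
w = rng.normal(0.0, 0.01, (3, 522))           # 3 subtypes x (512 + 10) inputs
probs = fused_forward(img_feat, num_feat, w, np.zeros(3))
```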

2.7.3. Model Explanation

GradCAM, a heatmap-based feature attribution method, was used to explain the model [22]. In contrast to CAM, GradCAM does not require modification of the network structure and has been validated in the literature on DL for assigning feature importance to different areas of images [17,23]. By extracting features in areas corresponding to human interpretation, this method rapidly confirmed whether the models constructed herein were behaving as expected.
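GradCAM weights each channel of the final feature maps by the spatially averaged gradient of the class score; for a network that ends in global average pooling followed by a single linear layer, those weights reduce to the linear layer's weights (the CAM special case). A minimal numpy sketch under that assumption, with illustrative shapes:

```python
import numpy as np

def cam_heatmap(feature_maps, class_weights):
    """Class-activation heatmap: ReLU of the weighted sum of the final
    convolutional feature maps, rescaled to [0, 1].

    feature_maps:  (K, H, W) activations of the last convolutional layer
    class_weights: (K,) channel weights for the predicted class
    """
    heatmap = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    heatmap = np.maximum(heatmap, 0.0)                           # ReLU
    if heatmap.max() > 0:
        heatmap /= heatmap.max()                                 # scale to [0, 1]
    return heatmap

fmaps = np.random.default_rng(0).random((8, 7, 7))
hm = cam_heatmap(fmaps, np.ones(8))
```

The heatmap has the spatial resolution of the final feature maps (e.g., 7 × 7) and is upsampled onto the input image for display, which is why its localization is coarse.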

2.7.4. Model Evaluation

The performance of the models was assessed by evaluating the accuracy using the test sets. The accuracy was determined using the following formula:
Total accuracy = (Total correct predictions / Total number of predictions) × 100%
Class accuracy = (Class correct predictions / Class number of predictions) × 100%
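The two formulas above translate directly into code; this sketch takes parallel lists of true and predicted labels.

```python
def total_accuracy(y_true, y_pred):
    """Overall accuracy across all predictions, as a percentage."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return 100.0 * correct / len(y_true)

def class_accuracy(y_true, y_pred, cls):
    """Accuracy restricted to one subtype (per-class recall), as a
    percentage: correct predictions among patients whose true label is cls."""
    pairs = [(t, p) for t, p in zip(y_true, y_pred) if t == cls]
    return 100.0 * sum(t == p for t, p in pairs) / len(pairs)
```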
We further employed receiver operating characteristic (ROC) curve analysis to assess the diagnostic performance of the different networks (Supplementary Figure S1).
A heatmap-based feature attribution method, GradCAM [22], was used for model explanation. L1 regularization was employed in the final feature layer for improving visualization. The training and loss curves of the model combining ResNet34 and the numeric data were additionally determined (Supplementary Figure S2).

3. Results

3.1. Patient Characteristics

The clinical characteristics of the patients are provided in Table 2. The patients were categorized into the astrocytoma (n = 54), oligodendroglioma (n = 67), and glioblastoma (n = 90) groups. The patients had a mean age of 48.1 ± 11.8 years, and the mean age of the patients in the astrocytoma group (40.3 ± 11.5 years) was lower than that of the other subtypes.

3.2. Model Comparison

3.2.1. Addition of ADC to Models

We evaluated whether the addition of ADC would improve the utilization of the simple common sequences as inputs to the ResNet34 model. The model with the best performance had an overall patient accuracy of 60.0% on the test set with T1c and FLAIR images as inputs, and the individual class accuracies for the oligodendroglioma, astrocytoma, and glioblastoma subtypes were 78.6%, 50.0%, and 62.5%, respectively. However, the overall test accuracy of the model with the best performance increased by 7.5% when ADC maps were combined with T1c and FLAIR images in the input. The combination of T1, T2, and T1c resulted in the same overall test accuracy as the combination of T1, T2, and ADC. Details of the class accuracies for each test cohort and the confusion matrices are provided in Table 3.

3.2.2. Different Slice Preprocessing Strategies

We investigated the effect of different slice preprocessing strategies on model performance. The overall patient accuracy of the skull stripping group was 67.5% with the test set, and the individual class accuracies for the oligodendroglioma, astrocytoma, and glioblastoma subtypes were 85.7%, 40.0%, and 62.5%, respectively. The overall patient accuracy of the skull-stripping group was 5% higher than that of the non-stripped group (Table 4). Furthermore, the overall test accuracy increased by 10.0% following a segment addition to the input. However, slice pooling resulted in an overall patient accuracy of 42.5% with the test set, which was 12.5% lower than that of the individual slice treatment approach (Table 4).

3.2.3. Benefits of Using Numeric Data

The use of both image and numeric data inputs in the best 3-class model based on ResNet34 resulted in overall patient accuracies of 95.7%, 75.0%, and 70.0% for the training, validation, and test sets, respectively, while the individual test class accuracies were 85.7%, 30%, and 81.3% for the astrocytoma, oligodendroglioma, and glioblastoma subtypes, respectively. The overall test accuracy of this strategy was 2.5% higher than that of the best 3-class model using image data only. The 3-class model based on ResNet34 that integrated both image and numeric data was therefore selected as the best-performing model (Figure 6).

3.2.4. Comparison of Different Networks

When using ResNet34, the first transfer learning strategy had an overall accuracy of 67.5% and outperformed the second (retraining) strategy, which had an overall accuracy of 60.0%. Furthermore, ResNet34 outperformed ConvNext tiny and VIT-base in terms of both overall accuracy and the prediction of glioma subtypes (Table 5).

3.2.5. Visualization and GradCAM

Figure 7 depicts representative GradCAM images of the correct and incorrect predictions of the best 3-class model using ADC, T1c, and T2-FLAIR images as inputs. The red circles indicate the lesions. In the GradCAM images, colors nearer to red and blue indicate regions with higher and lower weights, respectively, in the network. The network focused on the lesions in the majority of correctly predicted tumors, while the GradCAM heatmaps aided in identifying whether the network was looking at the wrong region of the images of misclassified patients.

4. Discussion

In this study, we thoroughly explored and compared different combinations of DL frameworks, MRI sequences, slice preprocessing strategies, and numeric data for simultaneously differentiating astrocytomas, oligodendrogliomas, and glioblastomas prior to surgery. Model accuracy differed based on the slice preprocessing approach used. We primarily compared the effects of three different slice processing strategies, namely, skull stripping, segment addition, and slice-combining paradigms. Although skull stripping has been employed in the majority of previous studies, few studies have verified that this strategy achieves superior results. Our results demonstrated that the overall patient accuracy of skull stripping was 5% higher than that of the non-stripped group. This could be attributed to the fact that, without stripping, the algorithm may learn features derived from the skull rather than from an area of the brain. Similar to the method described by Matsui et al. [24], we included a segment in addition to an imaging sequence; however, our segments only contained positional information. The addition of segments increased the accuracy by 10%, likely by increasing the network's attention to positional information within the brain. Two different slice-combining paradigms were compared, of which the first paradigm included the individual treatment of each slice during training, with slice predictions combined later on, and the second strategy involved pooling slices for a single prediction per patient. The first strategy treats each slice independently; therefore, a single batch frequently contains slices from multiple patients. The gradients are backpropagated slice by slice, and a final patient-level prediction is obtained only after the completion of training. In contrast, the second method uses all slices from a single patient in the same batch; therefore, the number of slices represents the effective batch size.
Average or maximum pooling is performed in the final layer to condense all the slices into a single feature vector that generates a single prediction per patient. A single value is therefore backpropagated through the network for each patient after the calculation of losses. It has been reported that updating network weights based on individual slices improves the training/validation loss curves and increases overall patient-level accuracy [17]. The 3-class model approach was selected owing to the prior superiority of its performance compared to that of the 2-tiered strategy, which first predicts glioblastomas and subsequently distinguishes astrocytomas from oligodendrogliomas. In this instance, the accumulation of errors across a repeated 2-group classification is worse than the added complexity of a single 3-group classification [17,25].
One unanticipated finding of this study was that the ResNet34 architecture achieved higher accuracy than ConvNext tiny and VIT-base. The lower accuracy of ConvNext tiny could be attributed to the fact that the benefits of the depth of ConvNext tiny and its depthwise convolutions are often not observed until training data of far greater magnitude are used. Previous studies have demonstrated that the classification accuracy of VIT-base is equivalent to that of ResNet only when the images are scaled to 384 × 384, and the convolutional neural network outperforms VIT-base when the images are 224 × 224 [26]. The number of parameters and FLOPs of ResNet34 is substantially lower than those of VIT-base and ConvNext tiny, which allows significant savings in computational resources. Additionally, Khan et al. [27] presented a model that extracted features using two separate CNN networks at the same time, and the model outperformed a single CNN network. However, as mixed CNN network structures require retraining and cannot use the pre-trained weights of ImageNet-1K, no comparison was performed in this study.
ADC performed best when combined with T1c and T2-FLAIR, and the overall test accuracy increased when the generic sequences were included. ADC reflects the complex diffusion patterns within a voxel of biological tissue. Cellular proliferation increases membrane density and enlarges the membrane surface; the volume of slow diffusion phases (SDP) in each voxel consequently increases almost linearly with the number of cells per voxel, resulting in a lower ADC following cellular proliferation [28,29]. Recent studies have suggested that ADC can aid in determining the IDH mutation status; however, ADC did not provide an advantage in subtype prediction over T1c alone in the present study. The findings indicated that T1c is indispensable for predicting glioma subtypes.
The results of the present study implied that the majority of DL models could precisely discriminate glioblastomas from oligodendrogliomas in the multiclass setting. The earlier result could be explained by the fact that the IDH family of enzymes, which exist in glioblastomas, are part of the Krebs cycle and therefore present in the cytoplasm. Intermediates of the Krebs cycle are used in anabolic reactions that lead to the biosynthesis of various substances, including nucleotides, phospholipids, amino acids, and choline, provide building blocks for cellular proliferation, and act as precursors for membrane biosynthesis, all of which support tumor growth and metastasis [30,31,32,33,34,35]. As IDH-wildtype tumors have more vasculogenesis than IDH-mutant tumors, it follows that IDH-wildtype tumors have higher microvascular density [36,37]. These alterations can be easily observed in anatomical and functional sequences; however, the accuracy of glioblastoma prediction in this study was slightly lower than that reported in previous studies [17,38]. According to CNS5, glioblastomas may be designated as IDH-wildtype CNS WHO grade 4 even in cases that are apparently of histologically lower grade and where images may lack the obvious features of high-grade glioblastoma [3]. Furthermore, oligodendroglial tumors are highly cellular lesions with densely packed, relatively small cells in the central region. Diffuse infiltrative growth is frequently associated with the formation of prominent secondary structures in the peripheral regions with low cellular density, including the clustering of tumor cells around pre-existing neuron perikarya (satellitosis), under the pial surface (subpial aggregation), and surrounding small cortical vessels (perivascular aggregates) [39]. 
Both ADC and T1 can accurately reflect the aforementioned characteristics of oligodendrogliomas; however, the power of DL models in discriminating astrocytomas is still limited even after the addition of numeric data. This is attributed to the fact that the majority of astrocytomas have no fixed features. Although low-grade astrocytomas have a specific T2-FLAIR mismatch, the sensitivity of predictive models is low. Low grades are, therefore, frequently mislabeled as oligodendrogliomas, while high grades are mislabeled as glioblastomas. Certain gliomas have characteristics of both astrocytes and oligodendrocytes. The cells of the two subtypes may remain diffusely mixed or separated, although the latter form is rare. The biological diversity of these tumors is difficult to capture by precise microscopic criteria, and histopathological samples are not always completely representative [39,40]. Additionally, the field strength parameters of heterogeneous scanning can produce images of varying quality, and may play a crucial role in reducing the sensitivity.
The highest overall accuracy for subtype prediction using imaging data alone was 67.5%. In contrast, an overall accuracy of 70.0% was achieved when both imaging and numeric data were incorporated. However, the addition of numeric data increased the accuracy by only 2.5%. Owing to the addition of clinical information, the second transfer learning strategy had to be used to retrain the fully connected layer, whereas the best model using image data alone used the first transfer learning strategy. When the second transfer learning strategy was used and clinical information was included, the accuracy improved by 10%. In effect, the addition of numeric data bridged the deficit introduced by this form of transfer learning. Owing to the limitation of the transfer learning channels, only three different image sequences could contribute to the network during training, which was insufficient for accomplishing the goal of multimodal assessment. However, model accuracy would remain very poor, despite the use of more imaging and numeric data, if transfer learning were not used. The numeric data thus help bring the remaining information into the model and implement multimodal diagnosis when transfer learning is used in the networks. In this study, the numeric data included clinical information and the known hallmark features of each subgroup. Previous studies have demonstrated that glioma subtypes occur in fairly well-defined age groups [41]. The preferred tumor location also differs significantly across glioma subtypes [42,43], and tumor margins in 1p/19q-codeleted tumors are frequently indistinct [9]. Another significant finding is the T2-FLAIR mismatch sign, which is 100% specific for the diagnosis of IDH-mutant, 1p/19q non-codeleted gliomas (astrocytomas) [44]. These numeric characteristics distinguish the glioma subtypes from one another.
Although the efficacy of feature extraction in DL largely depends on the scale of the dataset, additional numeric data with proven substantial differences across subtypes may further increase the accuracy when the dataset sizes are the same.
It is generally accepted that a black box is not very interpretable for widely used DL networks, such as CNN. GradCAM heatmaps depict regions of the network that are prioritized for a given classification. The results of GradCAM analysis in this study demonstrated that DL models can learn from signals in tumor regions, and that it is possible to learn generalizable imaging features. The GradCAM heatmaps depicted in Figure 4 provide evidence that the network focused on the lesions in the majority of correctly predicted tumors. Although GradCAM heatmaps provide insights into the region that the model is looking at, they do not attribute feature importance and have limited spatial resolution based on the size of the final output layer of the selected model [17].
The present study has some limitations, as described hereafter. First, the number of patients in this study was quite low for DL, and more patients should be included for adequate training in future studies. Second, according to the new WHO CNS5 scheme, gliomas with IDH-wildtype status and grade 2–3 histopathological features should have one of the following characteristics in order to be diagnosed as glioblastomas: high levels of epidermal growth factor receptor (EGFR), whole chromosome 7 gain combined with whole chromosome 10 loss (+7/−10), or mutations in the TERT promoter. However, only TERT promoter mutations were considered in the present study.

5. Conclusions

In conclusion, we explored the effect of different inputs on the accuracy of models predicting glioma subtypes. The results demonstrated that certain slice preprocessing strategies, including skull stripping, segment addition, and the individual treatment of slices, could achieve superior results. Additionally, the inclusion of ADC improved the overall accuracy of our models, which emphasizes the need for adding functional sequences that closely reflect the underlying tumor biology and can be used in future multisite investigations of glioma subtypes. The inclusion of extra quantitative numeric data with a validated substantial difference between subtypes also increased the accuracy when the datasets were of the same size. We believe that the inclusion of more clinical symptoms or radiomics features would provide more information to our model and improve classification outcomes. More functional sequences and numerical data need to be included in follow-up studies with larger datasets to validate our findings and provide insights into training CNN models for the classification of gliomas.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics12123063/s1, Figure S1: Structure of the network; Figure S2: Training and loss curves of the final selected model; Table S1: Imaging parameters of MRI acquisition protocols.

Author Contributions

Data curation, R.W., L.M., T.G., T.W., P.W., and P.Z.; Formal analysis, Y.L.; Methodology, D.X. and W.H.; Visualization, D.X.; Writing—original draft, D.X. and X.R.; Writing—review and editing, K.A. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study has received funding from the Gansu Province Clinical Research Center for Functional and Molecular Imaging, Gansu Provincial Science and Technology Plan Project (grant number: 21JR7RA438), Gansu Province Health Industry Research Program, China (grant number: GSWSKY2020-68), and the Second Hospital of Lanzhou University-Cuiying Science and Technology Innovation Fund Project (grant number: CY2021-BJ-A05).

Institutional Review Board Statement

All the subjects provided informed consent for inclusion before participation in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Lanzhou University Second Hospital (approval number: 2021A-543, dated 1 September 2021).

Informed Consent Statement

Informed consent was obtained from all the subjects involved in the study.

Data Availability Statement

The data in this article are not publicly available in order to protect patient privacy. If necessary, the corresponding author can be contacted via email at xiongdh0801@163.com.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. van den Bent, M.J. Interobserver variation of the histopathological diagnosis in clinical trials on glioma: A clinician’s perspective. Acta Neuropathol. 2010, 120, 297–304. [Google Scholar] [CrossRef] [PubMed]
  2. Cancer Genome Atlas Research Network; Brat, D.J.; Verhaak, R.G.; Aldape, K.D.; Yung, W.K.; Salama, S.R.; Cooper, L.A.; Rheinbay, E.; Miller, C.R.; Vitucci, M.; et al. Comprehensive, Integrative Genomic Analysis of Diffuse Lower-Grade Gliomas. N. Engl. J. Med. 2015, 372, 2481–2498. [Google Scholar] [CrossRef]
  3. Louis, D.N.; Perry, A.; Wesseling, P.; Brat, D.J.; Cree, I.A.; Figarella-Branger, D.; Hawkins, C.; Ng, H.K.; Pfister, S.M.; Reifenberger, G.; et al. The 2021 WHO Classification of Tumors of the Central Nervous System: A summary. Neuro-Oncology 2021, 23, 1231–1251. [Google Scholar] [CrossRef]
  4. Eckel-Passow, J.E.; Lachance, D.H.; Molinaro, A.M.; Walsh, K.M.; Decker, P.A.; Sicotte, H.; Pekmezci, M.; Rice, T.; Kosel, M.L.; Smirnov, I.V.; et al. Glioma Groups Based on 1p/19q, IDH, and TERT Promoter Mutations in Tumors. N. Engl. J. Med. 2015, 372, 2499–2508. [Google Scholar] [CrossRef] [PubMed]
  5. Weller, M.; van den Bent, M.; Tonn, J.C.; Stupp, R.; Preusser, M.; Cohen-Jonathan-Moyal, E.; Henriksson, R.; Le Rhun, E.; Balana, C.; Chinot, O.; et al. European Association for Neuro-Oncology (EANO) guideline on the diagnosis and treatment of adult astrocytic and oligodendroglial gliomas. Lancet Oncol. 2017, 18, e315–e329. [Google Scholar] [CrossRef] [PubMed]
  6. Chen, X.; Wang, X.; Zhang, K.; Fung, K.M.; Thai, T.C.; Moore, K.; Mannel, R.S.; Liu, H.; Zheng, B.; Qiu, Y. Recent advances and clinical applications of deep learning in medical image analysis. Med. Image Anal. 2022, 79, 102444. [Google Scholar] [CrossRef]
  7. Kadry, S.; Nam, Y.; Rauf, H.T.; Rajinikanth, V.; Lawal, I.A. Automated detection of brain abnormality using deep-learning-scheme: A study. In Proceedings of the 2021 Seventh International Conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, 25–27 March 2021; pp. 1–5. [Google Scholar]
  8. Kaiming, H.; Xiangyu, Z.; Shaoqing, R.; Jian, S. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  9. Kim, J.W.; Park, C.-K.; Park, S.-H.; Kim, Y.H.; Han, J.H.; Kim, C.-Y.; Sohn, C.-H.; Chang, K.-H.; Jung, H.-W. Relationship between radiological characteristics and combined 1p and 19q deletion in World Health Organization grade III oligodendroglial tumours. J. Neurol. Neurosurg. Psychiatry 2011, 82, 224–227. [Google Scholar] [CrossRef]
  10. Decuyper, M.; Bonte, S.; Deblaere, K.; Van Holen, R. Automated MRI based pipeline for segmentation and prediction of grade, IDH mutation and 1p19q co-deletion in glioma. Comput Med. Imaging Graph. 2021, 88, 101831. [Google Scholar] [CrossRef]
  11. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  12. Niu, C.; Wang, G. Unsupervised contrastive learning based transformer for lung nodule detection. Phys. Med. Biol. 2022, 67, 204001. [Google Scholar] [CrossRef]
  13. Wu, Y.; Qi, S.; Sun, Y.; Xia, S.; Yao, Y.; Qian, W. A vision transformer for emphysema classification using CT images. Phys. Med. Biol. 2021, 66, 245016. [Google Scholar] [CrossRef] [PubMed]
  14. Park, S.; Kim, G.; Oh, Y.; Seo, J.B.; Lee, S.M.; Kim, J.H.; Moon, S.; Lim, J.K.; Park, C.M.; Ye, J.C. Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation. Nat. Commun. 2022, 13, 3848. [Google Scholar] [CrossRef]
  15. Liu, Z.; Mao, H.; Wu, C.-Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11976–11986. [Google Scholar]
  16. Tian, G.; Wang, Z.; Wang, C.; Chen, J.; Liu, G.; Xu, H.; Lu, Y.; Han, Z.; Zhao, Y.; Li, Z.; et al. A deep ensemble learning-based automated detection of COVID-19 using lung CT images and Vision Transformer and ConvNeXt. Front. Microbiol 2022, 13, 1024104. [Google Scholar] [CrossRef] [PubMed]
  17. Cluceru, J.; Interian, Y.; Phillips, J.J.; Molinaro, A.M.; Luks, T.L.; Alcaide-Leon, P.; Olson, M.P.; Nair, D.; LaFontaine, M.; Shai, A.; et al. Improving the noninvasive classification of glioma genetic subtype with deep learning and diffusion-weighted imaging. Neuro Oncol. 2022, 24, 639–652. [Google Scholar] [CrossRef]
  18. Johnson, D.R.; Diehn, F.E.; Giannini, C.; Jenkins, R.B.; Jenkins, S.M.; Parney, I.F.; Kaufmann, T.J. Genetically Defined Oligodendroglioma Is Characterized by Indistinct Tumor Borders at MRI. Am. J. Neuroradiol. 2017, 38, 678–684. [Google Scholar] [CrossRef] [PubMed]
  19. Chang, K.; Bai, H.X.; Zhou, H.; Su, C.; Bi, W.L.; Agbodza, E.; Kavouridis, V.K.; Senders, J.T.; Boaro, A.; Beers, A.; et al. Residual Convolutional Neural Network for the Determination of IDH Status in Low- and High-Grade Gliomas from MR Imaging. Clin. Cancer Res. 2018, 24, 1073–1081. [Google Scholar] [CrossRef] [PubMed]
  20. Bien, N.; Rajpurkar, P.; Ball, R.L.; Irvin, J.; Park, A.; Jones, E.; Bereket, M.; Patel, B.N.; Yeom, K.W.; Shpanskaya, K.; et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet. PLoS Med. 2018, 15, e1002699. [Google Scholar] [CrossRef]
  21. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE conference on computer vision and pattern recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  22. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
  23. Yebasse, M.; Shimelis, B.; Warku, H.; Ko, J.; Cheoi, K.J. Coffee Disease Visualization and Classification. Plants (Basel) 2021, 10, 1257. [Google Scholar] [CrossRef]
  24. Matsui, Y.; Maruyama, T.; Nitta, M.; Saito, T.; Tsuzuki, S.; Tamura, M.; Kusuda, K.; Fukuya, Y.; Asano, H.; Kawamata, T.; et al. Prediction of lower-grade glioma molecular subtypes using deep learning. J. Neurooncol. 2020, 146, 321–327. [Google Scholar] [CrossRef]
  25. Wu, S.; Meng, J.; Yu, Q.; Li, P.; Fu, S. Radiomics-based machine learning methods for isocitrate dehydrogenase genotype prediction of diffuse gliomas. J. Cancer Res. Clin. Oncol. 2019, 145, 543–550. [Google Scholar] [CrossRef]
  26. Tummala, S.; Kadry, S.; Bukhari, S.A.C.; Rauf, H.T. Classification of Brain Tumor from Magnetic Resonance Imaging Using Vision Transformers Ensembling. Curr. Oncol. 2022, 29, 7498–7511. [Google Scholar] [CrossRef] [PubMed]
  27. Khan, M.A.; Ashraf, I.; Alhaisoni, M.; Damaševičius, R.; Scherer, R.; Rehman, A.; Bukhari, S.A.C. Multimodal brain tumor classification using deep learning and robust feature selection: A machine learning application for radiologists. Diagnostics 2020, 10, 565. [Google Scholar] [CrossRef] [PubMed]
  28. Le Bihan, D. The ‘wet mind’: Water and functional neuroimaging. Phys. Med. Biol. 2007, 52, R57–R90. [Google Scholar] [CrossRef] [PubMed]
  29. Le Bihan, D.; Johansen-Berg, H. Diffusion MRI at 25: Exploring brain tissue structure and function. Neuroimage 2012, 61, 324–341. [Google Scholar] [CrossRef]
  30. Hanahan, D.; Weinberg, R.A. Hallmarks of cancer: The next generation. Cell 2011, 144, 646–674. [Google Scholar] [CrossRef]
  31. Lunt, S.Y.; Vander Heiden, M.G. Aerobic glycolysis: Meeting the metabolic requirements of cell proliferation. Annu. Rev. Cell Dev. Biol. 2011, 27, 441–464. [Google Scholar] [CrossRef] [PubMed]
  32. Cantor, J.R.; Sabatini, D.M. Cancer cell metabolism: One hallmark, many faces. Cancer Discov. 2012, 2, 881–898. [Google Scholar] [CrossRef]
  33. Hensley, C.T.; Wasti, A.T.; DeBerardinis, R.J. Glutamine and cancer: Cell biology, physiology, and clinical opportunities. J. Clin. Investig. 2013, 123, 3678–3684. [Google Scholar] [CrossRef]
  34. Dang, C.V. Glutaminolysis: Supplying carbon or nitrogen or both for cancer cells? Cell Cycle 2010, 9, 3884–3886. [Google Scholar] [CrossRef]
  35. Glunde, K.; Bhujwalla, Z.M.; Ronen, S.M. Choline metabolism in malignant transformation. Nat. Rev. Cancer 2011, 11, 835–848. [Google Scholar] [CrossRef]
  36. Kickingereder, P.; Sahm, F.; Radbruch, A.; Wick, W.; Heiland, S.; Deimling, A.; Bendszus, M.; Wiestler, B. IDH mutation status is associated with a distinct hypoxia/angiogenesis transcriptome signature which is non-invasively predictable with rCBV imaging in human glioma. Sci. Rep. 2015, 5, 16238. [Google Scholar] [CrossRef] [PubMed]
  37. Tan, W.; Xiong, J.; Huang, W.; Wu, J.; Zhan, S.; Geng, D. Noninvasively detecting Isocitrate dehydrogenase 1 gene status in astrocytoma by dynamic susceptibility contrast MRI. J. Magn. Reson. Imaging 2017, 45, 492–499. [Google Scholar] [CrossRef] [PubMed]
  38. Li, Y.; Wei, D.; Liu, X.; Fan, X.; Wang, K.; Li, S.; Zhang, Z.; Ma, K.; Qian, T.; Jiang, T.; et al. Molecular subtyping of diffuse gliomas using magnetic resonance imaging: Comparison and correlation between radiomics and deep learning. Eur Radiol 2022, 32, 747–758. [Google Scholar] [CrossRef]
  39. Wesseling, P.; van den Bent, M.; Perry, A. Oligodendroglioma: Pathology, molecular mechanisms and markers. Acta Neuropathol. 2015, 129, 809–827. [Google Scholar] [CrossRef] [PubMed]
  40. Giannini, C.; Burger, P.C.; Berkey, B.A.; Cairncross, J.G.; Jenkins, R.B.; Mehta, M.; Curran, W.J.; Aldape, K. Anaplastic oligodendroglial tumors: Refining the correlation among histopathology, 1p 19q deletion and clinical outcome in Intergroup Radiation Therapy Oncology Group Trial 9402. Brain Pathol. 2008, 18, 360–369. [Google Scholar] [CrossRef] [PubMed]
  41. Bai, J.; Varghese, J.; Jain, R. Adult Glioma WHO Classification Update, Genomics, and Imaging: What the Radiologists Need to Know. Top. Magn Reson Imaging 2020, 29, 71–82. [Google Scholar] [CrossRef]
  42. Carrillo, J.A.; Lai, A.; Nghiemphu, P.L.; Kim, H.J.; Phillips, H.S.; Kharbanda, S.; Moftakhar, P.; Lalaezari, S.; Yong, W.; Ellingson, B.M.; et al. Relationship between tumor enhancement, edema, IDH1 mutational status, MGMT promoter methylation, and survival in glioblastoma. AJNR Am. J. Neuroradiol 2012, 33, 1349–1355. [Google Scholar] [CrossRef]
  43. Qi, S.; Yu, L.; Li, H.; Ou, Y.; Qiu, X.; Ding, Y.; Han, H.; Zhang, X. Isocitrate dehydrogenase mutation is associated with tumor location and magnetic resonance imaging characteristics in astrocytic neoplasms. Oncol. Lett. 2014, 7, 1895–1902. [Google Scholar] [CrossRef]
  44. Patel, S.H.; Poisson, L.M.; Brat, D.J.; Zhou, Y.; Cooper, L.; Snuderl, M.; Thomas, C.; Franceschi, A.M.; Griffith, B.; Flanders, A.E.; et al. T2-FLAIR Mismatch, an Imaging Biomarker for IDH and 1p/19q Status in Lower-grade Gliomas: A TCGA/TCIA Project. Clin. Cancer Res. 2017, 23, 6078–6085. [Google Scholar] [CrossRef]
Figure 1. Flowchart depicting patient inclusion and exclusion criteria.
Figure 2. Schematic depicting the image processing strategy. (A) The largest tumor area and two adjacent layers 5 mm apart in the axial direction were selected using segmented contrast-enhancing (CEL) or T2 lesions (T2L). (B) Three of the four sequences of interest (T2-FLAIR, T1c, T2, and ADC) were placed in the red (R), green (G), and blue (B) channels of a color image that was used as the input to the network.
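The channel-stacking step in Figure 2B can be sketched as follows. This is an illustrative plain-Python version with synthetic 2×2 slices and simple min–max scaling to 8-bit, not the authors' actual preprocessing pipeline:

```python
# Illustrative sketch of Figure 2B: three coregistered MRI slices are
# min-max normalized to 0-255 and placed in the R, G, and B channels of
# one color image. Slice values below are synthetic.

def normalize_to_uint8(slice2d):
    """Min-max scale a 2D slice to the 0-255 range of an 8-bit channel."""
    flat = [v for row in slice2d for v in row]
    lo, hi = min(flat), max(flat)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [[int(round((v - lo) * scale)) for v in row] for row in slice2d]

def stack_rgb(r_slice, g_slice, b_slice):
    """Return image[i][j] = (R, G, B) from three normalized slices."""
    channels = [normalize_to_uint8(s) for s in (r_slice, g_slice, b_slice)]
    h, w = len(r_slice), len(r_slice[0])
    return [[(channels[0][i][j], channels[1][i][j], channels[2][i][j])
             for j in range(w)] for i in range(h)]

# e.g. T2-FLAIR -> R, T1c -> G, ADC -> B (synthetic 2x2 slices)
flair = [[0.0, 1.0], [2.0, 4.0]]
t1c   = [[10.0, 10.0], [20.0, 30.0]]
adc   = [[0.5, 0.5], [0.5, 0.5]]   # a constant slice maps to all zeros
rgb = stack_rgb(flair, t1c, adc)
```

Packing three sequences into the RGB channels is what allows a network pretrained on natural color images to be reused, and it is also the source of the three-sequence limitation discussed in Section 4.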
Figure 3. Comparison of different image preprocessing strategies. (A) Skull stripping versus non-stripped strategies. (B) Segment addition versus replacement by a zeroed matrix of the same shape. (C) Slice pooling for a single prediction per patient versus individual treatment of each slice.
Figure 4. Structure of the different networks. (A) The structure of ResNet34; the number under each block indicates how many times the block is repeated in that group. (B) The structure of ConvNext; the block structure, shown on the right, consists of two pointwise (PW) convolutions and one depthwise (DW) convolution. (C) The structure of ViT, which comprises patch + position embedding and a Transformer encoder.
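The defining feature of the ResNet blocks in Figure 4A is the identity shortcut: each block learns only a residual correction that is added back to its input. A toy numeric sketch with hypothetical values (not the actual network):

```python
# Toy illustration of a ResNet-style residual connection: the block's
# output is ReLU(f(x) + x), so the block only has to learn a correction
# (residual) to the identity mapping. Values are synthetic.

def relu(v):
    return [max(x, 0.0) for x in v]

def residual_block(x, transform):
    """out = ReLU(transform(x) + x) -- identity shortcut around transform."""
    fx = transform(x)
    return relu([a + b for a, b in zip(fx, x)])

x = [1.0, -2.0, 3.0]

# A "do-nothing" transform (all zeros) leaves ReLU(x), which is why very
# deep residual stacks remain easy to optimize: layers can default to
# (near-)identity mappings.
zero_transform = lambda v: [0.0] * len(v)
out_identity = residual_block(x, zero_transform)

# A nonzero transform adds a learned correction on top of the input.
half_transform = lambda v: [0.5 * a for a in v]
out_corrected = residual_block(x, half_transform)
```

This shortcut structure is why ResNet34 trains stably even on the small datasets considered here.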
Figure 5. Flowchart of the entire process. All images were first normalized. Second, the ADC, FLAIR, and T1c images were used to create RGB inputs. Third, the second strategy of transfer learning was used to compare the three frameworks and select the best one. The best model was then combined with the numerical data to obtain the final prediction. ResNet34 with the first strategy of transfer learning was used to compare the different MRI sequences and slice preprocessing strategies.
Figure 6. Patient accuracy and class accuracies of the final models. The 3-class model based on ResNet34 that integrated both image and numeric data was selected as the best-performing model.
Figure 7. Visualization of imaging features and GradCAM analysis of the final 3-class model for predicting oligodendrogliomas, astrocytomas, and glioblastomas. Red circles indicate the lesions. Colors closer to red indicate regions with higher weights, and colors closer to blue indicate regions with lower weights.
Table 1. Summary of dataset.

                       All Dataset   Training   Validation   Testing
All gliomas            211           139        32           40
Oligodendrogliomas     67            43         10           14
Astrocytoma            54            36         8            10
Glioblastoma           90            60         14           16
Table 2. Numeric data of the patients included in the study.

Parameter                            All Gliomas    Oligodendroglioma   Astrocytoma   Glioblastoma
Number of patients                   211            67                  54            90
Median age (years)                   48.1 ± 11.8    46.9 ± 10.0         40.3 ± 11.5   53.6 ± 10.4
Gender
  Male                               105            32                  24            50
  Female                             106            35                  30            51
Enhancement category
  Nonenhancing                       76             37                  37            2
  Patchy enhancing                   47             28                  13            6
  Rim enhancing                      88             3                   3             82
Tumor location category
  Frontal, parietal, or occipital    139            57                  36            46
  Temporal and insular               50             7                   10            33
  Others                             22             3                   8             11
T2-FLAIR mismatch
  Present                            14             1                   13            0
  Absent                             197            66                  41            90
Tumor margin
  Present                            27             4                   21            2
  Absent                             184            63                  33            88
Table 3. Summary of classification results for different sequence combinations on test sets.

Combination / Subtype      n (Total)   n (Correct)   Accuracy   Confusion matrix (predicted Oligodendroglioma / Astrocytoma / Glioblastoma)
T1, T2, T1c                40          20            50.0%
  Oligodendroglioma        14          7             50.0%      7 / 0 / 7
  Astrocytoma              10          3             30.0%      0 / 3 / 7
  Glioblastoma             16          10            62.5%      3 / 3 / 10
T1, T2, ADC                40          20            50.0%
  Oligodendroglioma        14          4             28.6%      4 / 0 / 10
  Astrocytoma              10          0             0%         4 / 0 / 6
  Glioblastoma             16          16            100.0%     0 / 0 / 16
FLAIR, T1c, Zero *         40          24            60.0%
  Oligodendroglioma        14          11            78.6%      11 / 2 / 1
  Astrocytoma              10          5             50.0%      5 / 3 / 2
  Glioblastoma             16          10            62.5%      5 / 1 / 10
FLAIR, T1c, ADC            40          27            67.5%
  Oligodendroglioma        14          12            85.7%      12 / 0 / 2
  Astrocytoma              10          4             40.0%      5 / 4 / 1
  Glioblastoma             16          11            68.8%      5 / 0 / 11
Note: ResNet34 and the second strategy of transfer learning were used to generate all results. Zero *: zeroed matrix of same shape.
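The overall and per-class accuracies reported in Table 3 follow directly from each confusion matrix. As a quick check, the sketch below recomputes them for the FLAIR, T1c, ADC combination:

```python
# Recompute overall and per-class accuracy from a confusion matrix whose
# rows are true classes and columns are predicted classes.

def accuracies(cm):
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))       # diagonal hits
    per_class = [cm[i][i] / sum(cm[i]) for i in range(len(cm))]
    return correct / total, per_class

# Rows/columns: oligodendroglioma, astrocytoma, glioblastoma
# (the FLAIR, T1c, ADC block from Table 3)
cm = [[12, 0, 2],
      [5, 4, 1],
      [5, 0, 11]]
overall, per_class = accuracies(cm)
# overall = 27/40 = 67.5%; per-class = 12/14 (85.7%), 4/10 (40.0%), 11/16 (68.8%)
```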
Table 4. Summary of classification results for different slice preprocessing strategies on test sets.

Strategy / Subtype                            n (Total)   n (Correct)   Accuracy   Confusion matrix (predicted Oligodendroglioma / Astrocytoma / Glioblastoma)
Skull stripping (FLAIR, ADC, T1c)             40          27            67.5%
  Oligodendroglioma                           14          12            85.7%      12 / 0 / 2
  Astrocytoma                                 10          4             40.0%      5 / 4 / 1
  Glioblastoma                                16          11            68.8%      5 / 0 / 11
Not-cropped (FLAIR, ADC, T1c)                 40          24            60.0%
  Oligodendroglioma                           14          11            78.6%      9 / 2 / 3
  Astrocytoma                                 10          2             20.0%      4 / 3 / 3
  Glioblastoma                                16          11            68.8%      3 / 0 / 13
Segment addition (T1c, T1c, se)               40          24            60.0%
  Oligodendroglioma                           14          11            78.6%      11 / 1 / 2
  Astrocytoma                                 10          2             20.0%      6 / 2 / 2
  Glioblastoma                                16          11            68.8%      5 / 0 / 11
Image only (T1c, T1c, Zero *)                 40          20            50.0%
  Oligodendroglioma                           14          4             28.6%      4 / 6 / 4
  Astrocytoma                                 10          2             20.0%      3 / 2 / 5
  Glioblastoma                                16          14            87.5%      0 / 2 / 14
Slice pooling (ADC(n − 1), n, (n + 1))        40          17            42.5%
  Oligodendroglioma                           14          6             42.9%      6 / 3 / 5
  Astrocytoma                                 10          4             40.0%      3 / 4 / 3
  Glioblastoma                                16          7             43.8%      8 / 1 / 7
Individual slice treatment (ADC, ADC, ADC)    40          22            55.0%
  Oligodendroglioma                           14          8             57.1%      8 / 2 / 4
  Astrocytoma                                 10          1             10.0%      6 / 1 / 3
  Glioblastoma                                16          13            81.3%      3 / 0 / 13
Note: ResNet34 and the second strategy of transfer learning were used to generate all results. Zero *: zeroed matrix of same shape.
Table 5. Summary of classification results for different strategies of transfer learning on test sets.

Model (input sequences) / Subtype                          n (Total)   n (Correct)   Accuracy   Confusion matrix (predicted Oligodendroglioma / Astrocytoma / Glioblastoma)
ResNet34 with Transfer method A (ADC, FLAIR, T1c)          40          27            67.5%
  Oligodendroglioma                                        14          12            85.7%      12 / 0 / 2
  Astrocytoma                                              10          4             40.0%      5 / 4 / 1
  Glioblastoma                                             16          11            68.8%      5 / 0 / 11
ResNet34 with Transfer method B (ADC, FLAIR, T1c)          40          24            60.0%
  Oligodendroglioma                                        14          10            71.4%      10 / 1 / 3
  Astrocytoma                                              10          2             20.0%      7 / 2 / 1
  Glioblastoma                                             16          12            75.0%      4 / 0 / 12
ConvNext tiny with Transfer method B (ADC, FLAIR, T1c)     40          22            55.0%
  Oligodendroglioma                                        14          8             57.1%      8 / 2 / 4
  Astrocytoma                                              10          2             20.0%      6 / 2 / 2
  Glioblastoma                                             16          12            75.0%      4 / 0 / 12
VIT-base with Transfer method B (ADC, FLAIR, T1c)          40          17            42.5%
  Oligodendroglioma                                        14          12            85.7%      12 / 2 / 0
  Astrocytoma                                              10          2             20.0%      8 / 2 / 0
  Glioblastoma                                             16          0             0.0%       0 / 0 / 0
ResNet34 not pretrained (All images)                       40          20            50.0%
  Oligodendroglioma                                        14          8             57.1%      8 / 0 / 6
  Astrocytoma                                              10          3             30.0%      4 / 3 / 2
  Glioblastoma                                             16          9             56.3%      6 / 3 / 9
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
