Article

Recognizing Pediatric Tuberous Sclerosis Complex Based on Multi-Contrast MRI and Deep Weighted Fusion Network

1 Research Centre for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Department of Neurology, Shenzhen Children’s Hospital, Shenzhen 518000, China
4 Department of Radiology, Shenzhen Children’s Hospital, Shenzhen 518000, China
5 Department of Emergency, Shenzhen Children’s Hospital, Shenzhen 518000, China
6 Research Department, Hong Kong Sanatorium & Hospital, Hong Kong 999077, China
7 Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Bioengineering 2023, 10(7), 870; https://doi.org/10.3390/bioengineering10070870
Submission received: 29 May 2023 / Revised: 24 June 2023 / Accepted: 12 July 2023 / Published: 22 July 2023
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)

Abstract

Multi-contrast magnetic resonance imaging (MRI) is widely applied to identify children with tuberous sclerosis complex (TSC) in the clinic. In this work, a deep convolutional neural network with multi-contrast MRI is proposed to diagnose pediatric TSC. First, by combining T2W and FLAIR images, a new synthetic modality named FLAIR3 was created to enhance the contrast between TSC lesions and normal brain tissues. A deep weighted fusion network (DWF-net) using a late fusion strategy is then proposed to diagnose TSC in children. In the experiments, a total of 680 children were enrolled, including 331 healthy children and 349 children with TSC. The experimental results indicate that FLAIR3 successfully enhances the visibility of TSC lesions and improves the classification performance. Additionally, the proposed DWF-net delivers superior classification performance compared to previous methods, achieving an AUC of 0.998 and an accuracy of 0.985. The proposed method has the potential to be a reliable computer-aided diagnostic tool for assisting radiologists in diagnosing children with TSC.

1. Introduction

Tuberous sclerosis complex (TSC) is a rare neurodevelopmental disorder caused by mutations in the TSC1 and TSC2 genes [1,2]. It is characterized by facial angiofibromas, epilepsy, intellectual disability, and hamartomas in multiple organs including the heart, kidneys, brain, and lungs [3,4,5]. The majority of pediatric TSC patients experience their initial seizure in the first year of life [6,7,8], which has a severe impact on their lives [9,10]. Therefore, it is urgent and valuable to develop valid and robust classification models for children with TSC in the clinic.
Neurological symptoms are prevalent in nearly all children with TSC, and multi-contrast magnetic resonance imaging (MRI) is frequently employed for clinical diagnosis [11]. To date, T2-weighted imaging (T2W) and fluid-attenuated inversion recovery (FLAIR) have been commonly utilized in pediatric TSC diagnosis, allowing for the identification of lesions and providing high lesion-to-brain contrast. However, the cerebrospinal fluid (CSF) signal is strong in T2W, which severely interferes with the visualization of periventricular TSC lesions. FLAIR imaging suppresses CSF and shows the lesion–brain contrast clearly, but this suppression also reduces the signal-to-noise ratio [12]. Currently, no single MRI sequence can produce all the required tissue contrasts in a single image, owing to the trade-offs involved in choosing MRI pulse sequence parameters [13]. Recent studies have demonstrated that a synthesized contrast blending T2W and FLAIR can augment the contrast of multiple sclerosis (MS) lesions, leading to improved diagnostic efficacy [12,13]. However, to the best of our knowledge, there have been no studies applying a synthesized contrast combining T2W and FLAIR to the diagnosis of pediatric TSC.
Meanwhile, deep learning has been studied as an advanced artificial intelligence technology that can automatically learn from medical image data and extract a large number of features [14]. Deep learning models with multi-contrast MRI have previously been used successfully for automatic stroke detection [15] and brain tissue classification [16]. Convolutional neural networks (CNNs) have also been applied to assist tuber segmentation in TSC patients [17]. Sanchez et al. [18] used two MRI contrasts, T2W and FLAIR, for TSC tuber detection and achieved an area under the receiver operating characteristic curve (AUC) of 0.99. However, their approach employed a 2D network and relied solely on handpicked MRI slices with evident tubers as network input. This method failed to account for the spatial attributes of MRI and neglected the fact that not all TSC patients exhibit visible lesions. Additionally, their dataset was limited to merely 114 TSC patients and 114 controls. In contrast, recent research suggests that 3D CNNs excel at capturing the spatial characteristics of MRI and effectively exploit the interplay between voxels; consequently, they have been reported to yield superior results in predicting chronological age [19].
To further improve the performance of identifying children with TSC in the clinic, a novel deep learning method, named the deep weighted fusion network (DWF-net), is proposed to effectively diagnose pediatric TSC lesions with multi-contrast MRI. The proposed method uses a synthetic contrast, named FLAIR3, derived from the combination of T2W and FLAIR, to maximize the lesion–brain contrast of pediatric TSC lesions. Moreover, it adopts a weighted late fusion strategy with 3D CNNs to combine the multi-contrast MRI and automatically diagnose pediatric TSC. The experimental dataset contains a total of 680 children, including 331 healthy children and 349 children with TSC. The experiments show that the new synthetic FLAIR3 contrast and the weighted 3D CNN strategy effectively improve both the contrast saliency of pediatric TSC lesions and the classification performance.
The proposed deep learning method is effective in distinguishing children with TSC from healthy children and currently achieves the best performance. It has great potential to help clinicians diagnose TSC and provides an effective research tool for pediatricians.

2. Methods

2.1. Optimal Combination of T2W and FLAIR

Cortical and subcortical tubers are the most common lesions in children with TSC, and increased lesion prominence is crucial for clinicians diagnosing pediatric TSC [20]. The T2W signal is related to water content, and most lesions have stronger T2W signals than the surrounding normal tissue, appearing bright. Therefore, the location and size of pediatric TSC lesions can be seen on the T2W sequence. However, the lesion boundary is relatively vague in T2W, making it difficult to clearly delineate the lesion outline. Moreover, there is strong cerebrospinal fluid (CSF) signal interference in T2W. FLAIR, also known as water-suppression imaging, suppresses (darkens) the CSF hyperintensity of T2W, thereby making lesions adjacent to CSF clear (brightened). Compared with the T2W sequence, the FLAIR sequence better depicts the surroundings of the lesion and clearly shows the lesion area. FLAIR is essentially a T2W scan that selectively suppresses CSF with an inversion pulse; however, this CSF suppression comes at the expense of a reduced signal-to-noise ratio [12]. FLAIR2 and FLAIR3 have been proposed to combine T2W and FLAIR to improve lesion visualization in MS [12,13]. Inspired by [12,13], we propose to optimize the combination of T2W and FLAIR as a new modality, named FLAIR3, for pediatric TSC as follows [13]:
$$\mathrm{FLAIR3} = \mathrm{FLAIR}^{\alpha} \times \mathrm{T2W}^{\beta}, \quad \text{s.t. } \alpha + \beta = 3$$
where the optimized α is 1.55 and β is 1.45 based on the signal equations of FLAIR and T2W [13], which can optimally balance the lesion contrast between FLAIR and T2W.
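For concreteness, the synthesis step can be expressed in a few lines of NumPy. The following is a minimal sketch under the assumption that the two volumes are co-registered, skull-stripped, and non-negative; the function name and argument handling are ours, not taken from the authors' code:

```python
import numpy as np

def compute_flair3(flair: np.ndarray, t2w: np.ndarray,
                   alpha: float = 1.55, beta: float = 1.45) -> np.ndarray:
    """Voxel-wise FLAIR3 synthesis: FLAIR^alpha * T2W^beta with alpha + beta = 3.

    Both volumes are assumed to be co-registered, skull-stripped, and
    non-negative; the exponents default to the optimized values from [13].
    """
    assert abs(alpha + beta - 3.0) < 1e-6, "exponents must sum to 3"
    return np.power(flair, alpha) * np.power(t2w, beta)
```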

2.2. Late Fusion Strategies

Recent studies [21] have shown that a late fusion model can capture the data distribution effectively and achieve the best classification performance. Inspired by [22,23], a weighted late fusion strategy was used to combine multi-contrast MRI for the classification task in pediatric TSC patients. First, T2W, FLAIR, and FLAIR3 were each fed into a feature extractor. We propose a deep weighted fusion network (DWF-net) that takes the scores of the T2W, FLAIR, and FLAIR3 models as input and outputs the final classification with a simple and efficient weighted-average integration, as follows:
$$S_{\mathrm{DWF}} = W_1 \times S_{\mathrm{T2W}} + W_2 \times S_{\mathrm{FLAIR}} + W_3 \times S_{\mathrm{FLAIR3}}, \quad \text{s.t. } \sum_{i=1}^{3} W_i = 1$$
where ST2W, SFLAIR, and SFLAIR3 represent the classification scores of T2W, FLAIR, and FLAIR3 models, respectively. SDWF denotes the final output prediction scores of the proposed DWF-net. W1, W2, and W3 are the weights of the prediction scores of the three multi-contrast MRIs.
To explore the optimal fusion of the multi-contrast MRI and to maximize the AUC of the proposed DWF-net, experiments were performed with W1 ranging from 0 to 1 and W2 from 0 to 1 − W1, both with a step of 0.1; W3 is then 1 − W1 − W2. The weight-searching procedure is shown in Algorithm 1.
Algorithm 1 The weight-searching algorithm for fusion
Input: The prediction scores ST2W, SFLAIR, and SFLAIR3 of the three input modalities and the corresponding ground truth y on the testing set.
Output: The weights (W1, W2, and W3) with the best AUC on the testing set.
1: Initialize AUCbest ← 0
2: for i := 0 to 10 do
3:   for j := 0 to 10 − i do
4:     k ← 10 − i − j
5:     Stemp ← (i × ST2W + j × SFLAIR + k × SFLAIR3) × 0.1
6:     AUCtemp ← ComputeAUC(Stemp, y)
7:     if AUCtemp > AUCbest then
8:       AUCbest ← AUCtemp
9:       W1 ← i × 0.1
10:      W2 ← j × 0.1
11:      W3 ← k × 0.1
12:     end if
13:   end for
14: end for
15: return W1, W2, and W3
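A direct translation of Algorithm 1 into Python might look as follows. This is a sketch that assumes the scores are stored as NumPy arrays and uses scikit-learn's roc_auc_score in place of the paper's unspecified AUC routine:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def search_fusion_weights(s_t2w, s_flair, s_flair3, y):
    """Exhaustive grid search over W1, W2, W3 (step 0.1, summing to 1)
    that maximizes the AUC of the fused score on the testing set."""
    best_auc, best_w = 0.0, (0.0, 0.0, 1.0)
    for i in range(11):              # W1 = i * 0.1
        for j in range(11 - i):      # W2 = j * 0.1
            k = 10 - i - j           # W3 = 1 - W1 - W2
            s_temp = (i * s_t2w + j * s_flair + k * s_flair3) * 0.1
            auc = roc_auc_score(y, s_temp)
            if auc > best_auc:
                best_auc = auc
                best_w = (i * 0.1, j * 0.1, k * 0.1)
    return best_w, best_auc
```

With 66 weight triples to evaluate, the search is exhaustive over the 0.1-step simplex yet still cheap, since each evaluation is a single AUC computation over the held-out scores.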

2.3. Network Architectures

The proposed DWF-net method for pediatric TSC patients was implemented using two different 3D CNN architectures, which are described below.
ResNet was proposed in 2015 and has been widely applied in detection, segmentation, recognition, and other fields [24]. It has also demonstrated stable and excellent classification performance among various 3D CNN variants [24]. Therefore, the first 3D CNN model we consider is 3D-ResNet, which uses shortcut connections so that each layer learns a residual function with reference to its input. The residual function is easier to optimize, allows the network to be made much deeper, and readily yields higher accuracy from the increased depth.
For the second 3D CNN model, we utilized the 3D-EfficientNet architecture [25] as our feature extractor. This classification network is known for improving accuracy while reducing training time and the number of network parameters. EfficientNet was designed using a neural architecture search and employs the mobile inverted bottleneck convolution (MBConv) module as its core structure; this module, similar to depth-wise separable convolution, significantly reduces the number of parameters. In addition, the attention idea of the squeeze-and-excitation network (SENet) [26] is incorporated in EfficientNet. The SENet attention mechanism allows the model to focus on the most informative channel features while suppressing unimportant ones, thereby improving model performance.
As shown in Figure 1a, for the pediatric TSC identification task with a single MRI modality, 3D-ResNet34 and 3D-EfficientNet were used as feature extractors. When DWF-net was used, two or three modalities were applied as inputs, as shown in Figure 1b. Table 1 lists the 10 models trained in this study, each with a distinct architecture and input.
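To illustrate the shortcut-connection idea behind 3D-ResNet, a basic 3D residual block in PyTorch could be written as below. This is a generic sketch, not the exact layer configuration of the 3D-ResNet34 used in this work:

```python
import torch
import torch.nn as nn

class BasicBlock3D(nn.Module):
    """A 3D ResNet basic block: two 3x3x3 convolutions plus a shortcut
    connection, so the block learns a residual function of its input."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Projection shortcut when the shape changes, identity otherwise.
        self.shortcut = (
            nn.Sequential(
                nn.Conv3d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm3d(out_ch))
            if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))  # residual addition
```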

3. Materials and Experiments

3.1. Dataset

In this study, all pediatric volunteers were from Shenzhen Children’s Hospital. The study was approved by the Ethics Committee of Shenzhen Children’s Hospital (No. 2019005). Written informed consent was obtained from all pediatric volunteers and/or their parents. In total, 349 children with TSC and 331 healthy children (HC) were included. Inclusion criteria for pediatric TSC patients were (1) aged 0–20 years, (2) no other neurological disorders, (3) a clinical diagnosis of TSC, and (4) complete and clear T2W and FLAIR images. Inclusion criteria for healthy children were (1) aged 0–20 years, (2) no neurological disorders, (3) clinically normal or non-specific findings during routine clinical care, and (4) complete and clear T2W and FLAIR images. Figure 2 shows the exclusion and inclusion criteria of our study.
The data were randomly split into training, validation, and test sets at a 7:1:2 ratio. To ensure that every subset had the same class proportions, stratified random sampling was employed. The training, validation, and testing datasets had no overlapping patients.
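A subject-level stratified 7:1:2 split of this kind can be reproduced with scikit-learn. The IDs, labels, and random seed below are illustrative placeholders, not the study's actual partition:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical subject IDs and labels (0 = HC, 1 = TSC); in practice these
# come from the hospital records. Splitting is done per subject, never per
# slice, so no patient appears in more than one set.
subjects = np.arange(680)
labels = np.array([1] * 349 + [0] * 331)

# 7:1:2 split: hold out 20% for testing, then 1/8 of the remainder for
# validation (0.8 * 0.125 = 0.1 of the whole). `stratify` keeps the TSC/HC
# proportion identical in every subset.
train_val, test, y_train_val, y_test = train_test_split(
    subjects, labels, test_size=0.2, stratify=labels, random_state=0)
train, val, y_train, y_val = train_test_split(
    train_val, y_train_val, test_size=0.125, stratify=y_train_val, random_state=0)
```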

3.2. Data Processing

Firstly, the FMRIB Linear Image Registration Tool (FLIRT) of FSL (http://fsl.fmrib.ox.ac.uk (accessed on 1 January 2021)) was used to register T2W into the FLAIR space, with mutual information as the cost function. In neuroimaging studies, lesions are usually located in the brain tissue, and the skull is an irrelevant region; when brain MRI images are used for classification research, the brain tissue in the region of interest is typically the input. HD-BET is a deep learning algorithm for brain extraction [27] that removes irrelevant structures such as the neck and eyeballs. Therefore, in the second step, HD-BET was used to strip the skull from the MRI. Subsequently, all 3D MRI volumes were resized to 128 × 128 × 128, and the image intensity was normalized to the range of 0 to 1 using the min–max normalization formula:
$$x_{\mathrm{Normalized}} = \frac{x - \mathrm{Min}(x)}{\mathrm{Max}(x) - \mathrm{Min}(x)}$$
where Max(x) and Min(x) represent the highest and lowest values of the brain-extracted MRI images, respectively, and xNormalized refers to the normalized MRI images. Finally, T2W and FLAIR were combined and transformed into FLAIR3. The flowchart illustrating the data preprocessing can be found in Figure 3.
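The resizing and intensity-normalization steps can be sketched as follows, assuming registration (FSL FLIRT) and skull stripping (HD-BET) have already been run as separate command-line steps; the function name, trilinear interpolation order, and epsilon guard are our assumptions:

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import zoom

def load_and_normalize(path: str, target=(128, 128, 128)) -> np.ndarray:
    """Resize a brain-extracted MRI volume to 128x128x128 and min-max
    normalize its intensities to [0, 1]."""
    img = nib.load(path).get_fdata().astype(np.float32)
    factors = [t / s for t, s in zip(target, img.shape)]
    img = zoom(img, factors, order=1)        # trilinear resampling
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)     # guard against flat images
```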

3.3. Baseline and Effectiveness of Skull Stripping

In this study, we compared 10 different proposed 3D CNN models with a 2D-InceptionV3 model [18] (baseline model) to evaluate the effectiveness of the proposed deep learning methods. The 2D-InceptionV3 model was exclusively trained on our FLAIR data, with the maximum transverse slice of the FLAIR chosen as the input. Furthermore, we conducted a series of experiments on FLAIR images and T2W images with and without skull-stripping preprocessing to assess the effectiveness of the skull-stripping methodology.

3.4. Comparison of Normalization Methods

Normalization methods often have a significant impact on the performance of deep learning models. Min–max normalization and Z-score normalization are the most commonly used in medical image normalization. While the min–max approach is appropriate for most kinds of data and effortlessly preserves the original data distribution structure, it is not ideal for handling sparse data and is prone to being affected by outliers. The Z-score method uses the mean and standard deviation of the original data to normalize it, as the following formula illustrates:
$$x_{\mathrm{Normalized}} = \frac{x - \mathrm{Mean}(x)}{\mathrm{std}(x)}$$
After Z-score normalization, Mean(x) = 0 and std(x) = 1; that is, the processed data have zero mean and unit standard deviation and conform to the standard normal distribution. The Z-score method is suitable for most types of data, but it is a centering method that changes the distribution structure of the original data, and it is also unsuitable for sparse data. To explore the effectiveness of the normalization operation, we conducted three sets of experiments on both T2W and FLAIR images with the same network: without normalization, with Z-score normalization, and with min–max normalization.
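Written out side by side, the two normalization variants compared in the experiments are as follows; the epsilon guards are our addition to avoid division by zero on constant volumes:

```python
import numpy as np

def zscore_norm(x: np.ndarray) -> np.ndarray:
    """Z-score: zero mean, unit standard deviation. Centers the data,
    changing the range of the original intensity distribution."""
    return (x - x.mean()) / (x.std() + 1e-8)

def minmax_norm(x: np.ndarray) -> np.ndarray:
    """Min-max: rescales to [0, 1] while preserving the shape of the
    original intensity distribution."""
    return (x - x.min()) / (x.max() - x.min() + 1e-8)
```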

3.5. Model Training and Evaluation

For our experiments, we used the same partitioning for the training set, validation set, and test set across all models. Each model was trained using a learning rate of 0.0001, SGD optimization, a batch size of 4, and 50 epochs, with the binary cross-entropy loss function. To implement the training, validation, and testing process, we used Python version 3.8.10 and PyTorch version 1.9.0 environments.
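A minimal PyTorch sketch of this training configuration is shown below. The toy model and random tensors are placeholders for the 3D CNNs and 128³ MRI volumes, and BCEWithLogitsLoss is one common realization of the binary cross-entropy loss:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and data; in the real pipeline the model is a 3D CNN
# and each sample is a 128x128x128 MRI volume.
model = nn.Sequential(nn.Flatten(), nn.Linear(8, 1))
data = TensorDataset(torch.randn(16, 8), torch.randint(0, 2, (16, 1)).float())
loader = DataLoader(data, batch_size=4, shuffle=True)   # batch size 4

optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)  # SGD, lr = 0.0001
criterion = nn.BCEWithLogitsLoss()  # binary cross-entropy on raw logits

for epoch in range(50):  # 50 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```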
For each cohort, we calculated the area under the curve (AUC) of the receiver operating characteristic (ROC), accuracy (ACC), sensitivity (SEN), and specificity (SPE) to evaluate the classification performance of all models. These metrics rely on the true positive (TP), which counts the total number of correct positive classifications, and the true negative (TN), which represents the total number of accurate negative classifications. The false positive (FP) accounts for the total number of positive classifications that are incorrect, while the false negative (FN) represents the total number of negative classifications that are incorrect. We obtained the ACC, SEN, and SPE through the following formulas:
Accuracy (ACC): the proportion of all samples that are correctly classified:
$$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}$$
Sensitivity (SEN): the proportion of positive samples that are correctly classified:
$$\mathrm{SEN} = \frac{TP}{TP + FN}$$
Specificity (SPE): the proportion of negative samples that are correctly classified:
$$\mathrm{SPE} = \frac{TN}{TN + FP}$$
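Given per-subject scores and labels, all four metrics can be computed in a few lines with scikit-learn; the 0.5 threshold below is an assumption for converting continuous scores into hard predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def classification_metrics(y_true, y_score, threshold: float = 0.5) -> dict:
    """Compute AUC, ACC, SEN, and SPE from scores and binary labels."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "AUC": roc_auc_score(y_true, y_score),
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "SEN": tp / (tp + fn),
        "SPE": tn / (tn + fp),
    }
```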

3.6. Statistical Analysis

For this research, categorical variables were presented as frequencies and percentages, while continuous variables were expressed as the mean ± standard deviation. Continuous variables were analyzed using the F-test, and categorical variables underwent a chi-square analysis. Statistical significance was defined as p < 0.05. All statistical analyses were performed using the scikit-learn and SciPy (scipy.stats) libraries in Python 3.8.10.
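The group comparisons reported in Table 2 can be reproduced along these lines with scipy.stats; the simulated age vectors below are placeholders for the real clinical records:

```python
import numpy as np
from scipy import stats

# Hypothetical age vectors (months) for the two groups, drawn to match the
# means and standard deviations reported in Table 2.
rng = np.random.default_rng(0)
age_tsc = rng.normal(45.5, 46.6, 349)
age_hc = rng.normal(73.3, 49.2, 331)

# Continuous variable (age): one-way F-test between the two groups.
f_stat, p_age = stats.f_oneway(age_tsc, age_hc)

# Categorical variable (sex): chi-square test on the 2x2 contingency
# table of male/female counts per group.
table = np.array([[188, 349 - 188], [183, 331 - 183]])
chi2, p_sex, dof, expected = stats.chi2_contingency(table)

print(f"age p = {p_age:.3g}, sex p = {p_sex:.3g}")  # significant if < 0.05
```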

4. Results

4.1. Clinical Characteristics of Patients

The primary clinical features of all 680 child subjects are listed in Table 2. Among the 349 TSC patients, 188 (53.9%) were male, averaging 45.5 months in age. Among the 331 HC, 183 (55.3%) were male, averaging 73.3 months in age. There was a significant difference in average age between the HC group and the TSC group (p < 0.001), while there was no significant difference in sex (p = 0.711).

4.2. Visualization Results of FLAIR3

Figure 4 shows FLAIR, T2W, and FLAIR3 images of a child with TSC and a healthy child. In the three images of the TSC child, the contrast between the lesions and brain tissue on FLAIR is not clear enough, and there is severe cerebrospinal fluid interference on T2W, whereas the contrast and clarity of the lesions on the newly generated FLAIR3 image are significantly improved (TSC lesion indicated by the red arrow). In addition, FLAIR3 suppresses cerebrospinal fluid and can clearly localize the TSC lesion.

4.3. Performance of the Models

The performance of DWF-net varies with the weights W1, W2, and W3, as shown in Figure 5. The feature extractor in Figure 5a is 3D-EfficientNet, whose best AUC is 0.989 (W1 = 0.0, W2 = 0.3, W3 = 0.7). Among the models evaluated, Res_DWF_net (W1 = 0.2, W2 = 0.3, W3 = 0.5), which employs 3D-ResNet as the feature extractor with the late fusion strategy depicted in Figure 5b, achieves the highest performance, with an accuracy of 0.985 and an AUC of 0.998, outperforming all other models.
The results for all compared models on the testing dataset are presented in Table 3. With 3D-EfficientNet, Eff_FLAIR3 achieves an AUC of 0.987, higher than the 0.974 of Eff_FLAIR_T2W. With 3D-ResNet as the feature extraction network, Res_FLAIR3 achieves an AUC of 0.997, higher than the 0.994 of Res_FLAIR_T2W.
When using the same single-modality MRI as input, 3D-ResNet outperforms 3D-EfficientNet. Additionally, the AUC of the FLAIR3-only model exceeds those of the T2W-only and FLAIR-only models. The baseline network (InceptionV3) achieves an AUC of 0.952, and all of our 3D networks exceed this baseline performance.
ROC curves for all models of the testing cohort are shown in Figure 6a–c, and Figure 6d shows the classification performance for all models of the testing cohort.

4.4. Results of Skull Stripping

The classification performance of FLAIR and T2W images, with and without skull stripping, is presented in Table 4. The table shows that when the network structure and input modality remain constant, omitting the skull-stripping preprocessing causes the classification performance of both 3D-ResNet and 3D-EfficientNet to decline.

4.5. Comparison of Normalization Methods

Table 5 and Figure 7 depict the classification performance of three normalization settings on FLAIR and T2W images: without normalization, Z-score normalization, and min–max normalization. In Figure 7, the horizontal axis represents the normalization technique, while the vertical axis represents the corresponding performance. Where the input modality and network structure remain constant, the without-normalization setting yields the poorest AUC performance, and min–max normalization outperforms Z-score normalization.

5. Discussion

The main objective of the proposed approach is to identify children with TSC at an early stage using a 3D CNN model in conjunction with multi-contrast MRI in an automated manner. The approach incorporates FLAIR3 as a novel modality for diagnosing pediatric TSC lesions, optimizing the combination of T2W and FLAIR to enhance the lesion–brain contrast in the clinic. The findings indicate that FLAIR3 enhances the prominence of TSC lesions while also improving classification accuracy and providing a more intuitive understanding of our deep learning model. The proposed method uses two networks as feature extractors: 3D-EfficientNet, a parameter-efficient deep convolutional neural network, and 3D-ResNet, a classical residual network. Previously, the FLAIR3 modality was only used in MS [13]; the proposed method generalizes it to pediatric TSC and demonstrates that FLAIR3 better visualizes TSC lesions. Furthermore, a multi-modal fusion network for multi-contrast MRI data was proposed, which feeds FLAIR3 as a new modality into the proposed DWF-net, finally achieving state-of-the-art classification performance in identifying children with TSC. Moreover, the method requires no PET or EEG input, only structural MRI that can be easily and widely collected at any hospital, which maximizes the potential applicability of the proposed approach in clinical practice. In summary, the proposed method is innovative in the following aspects: (1) the use of a weighted fusion algorithm to maximize the fusion of multi-contrast MRI and optimize the weights to improve performance; (2) the first use of FLAIR3 images to localize and visualize lesions in the clinical diagnosis of TSC; and (3) the utilization of FLAIR3 as a complementary imaging input to maximize the information extracted from structural MRI.
In comparison to the 2D CNN model InceptionV3 discussed in [18], the proposed 3D CNN models exhibit enhanced classification performance. Previous studies are consistent with our conclusion that 3D networks perform better than 2D networks [19,28]. We believe that this improvement is mainly due to the full use of the spatial features of MRI voxels, from which more information can be extracted. In this study, the proposed late fusion method improves the classification performance compared to a single modality with a 3D CNN, implying that combining multiple MRI contrasts can exploit the complementary visual information between sequences. This result is consistent with a recent study by Peng et al. [29], which demonstrated that combining models from diverse modalities with complementary information leads to superior performance; the success of the ensemble strategy is attributed not only to the number of large models but also to the independent information gathered from different modalities. Additionally, recent research has revealed that the late fusion method outperforms the early fusion technique [30,31], and Jonsson et al. used a majority voting strategy to form the final predictions and achieved performance gains with multimodal inputs [22]. Our experimental results indicate that, when utilizing the same MRI modality as network input, all models with 3D-ResNet feature extractors outperform the corresponding 3D-EfficientNet models. One possible explanation is that 3D-ResNet has more network parameters than 3D-EfficientNet and a more complex network structure, so it can extract more high-level image feature information.
Surprisingly, our experiments demonstrate the effectiveness of FLAIR3 in pediatric TSC diagnosis: the AUC of the FLAIR3-only model outperforms the T2W-only and FLAIR-only models when using the same network. We found that with 3D-EfficientNet the Eff_FLAIR3 model achieves a better AUC than the Eff_FLAIR_T2W model, and that the Res_FLAIR3 model outperforms the Res_FLAIR_T2W model with the 3D-ResNet feature extraction network. This could imply that FLAIR3 provides more information. Moreover, when the late fusion strategy is used, the weight W3 of FLAIR3 is the largest. A reasonable explanation is that FLAIR3 enhances the lesion-to-brain contrast, and the TSC lesion is clearer in FLAIR3 than in T2W and FLAIR, so FLAIR3 can offer more low-dimensional visual lesion information to deep learning during the feature extraction stage. Such low-dimensional visual information may be very helpful to our deep learning algorithms and could increase their interpretability [32].
Moreover, skull stripping plays a crucial role in computational neuro-imaging by being a vital preprocessing step that has a direct impact on subsequent analyses [33,34,35]. In this study, we found that both the 3D-ResNet and 3D-EfficientNet models perform better when utilizing MRI with skull stripping applied as the input. This may be due to the fact that the pixel value of the skull is significantly higher than that of the brain tissue [30,36], which allows for more information to be extracted during the feature selection phase. However, it is important to note that such information may be irrelevant for our deep learning methods and may even reduce their performance [37].
Furthermore, image normalization is critical for developing powerful deep learning methods [38,39]. In this study, the experiments compared no normalization, min–max normalization, and Z-score normalization. The results showed that the AUC performance without normalization is the worst, and that min–max normalization outperforms Z-score normalization when the input modality and network structure are the same. Therefore, we suggest that in future similar studies, min–max normalization be used as the primary choice for normalizing MRI images.
In addition, many experts consider that tubers are stable in size and appearance after birth and that their proportion relative to the whole brain does not change obviously with age [40]. Clinically, the myelination process has three stages: before 7–8 months of age, from 7–8 months to 2 years of age, and after 2 years of age. The MRI appearance of TSC after 2 years of age should therefore be stable, and myelination after 2 years of age should have little effect on our MRI images [41]. However, these are statistical results, and individual TSC patients may differ; in the clinic, MRI should be scanned several times under the age of 2 years to reflect dynamic changes in epileptic lesions. Here, we did not exclude children under 2 years of age, in order to stay close to real clinical situations. The proposed deep learning method can be promoted in the clinic and only requires collecting a patient's FLAIR and T2W images. It is simple and effective and can be used as a computer-aided tool to help doctors diagnose TSC. In the future, further situations of TSC patients should be evaluated.

6. Conclusions

In summary, a novel deep learning method based on a weighted late fusion model was proposed to effectively diagnose pediatric TSC with multi-contrast MRI and the synthetic FLAIR3 contrast. The collected dataset comprises a total of 680 children, including 331 healthy children and 349 children with TSC. The testing results show that the proposed approach attains a state-of-the-art AUC of 0.998 and an accuracy of 0.985. As such, this method can serve as a robust foundation for future studies of pediatric TSC.

7. Patents

The work reported in this manuscript has resulted in a patent.

Author Contributions

Data curation, C.Z., X.Z., R.L., Y.Z. (Yihang Zhou) and Z.H.; Formal analysis, J.L., J.Y., Z.-C.L., Y.Z. (Yihang Zhou) and D.L.; Investigation, Z.H., H.W. and D.L.; Methodology, D.J., J.L., D.L., Z.H. and H.W.; Resources, D.J. and R.L.; Software, J.Y., Y.Z. (Yanjie Zhu) and H.W.; Validation, D.J., C.Z., X.Z. and H.W.; Writing—original draft, D.J. and Z.-C.L.; Writing—review and editing, Z.H., H.W. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study received support from various sources, including the Sanming Project of Medicine in Shenzhen (SZSM201812005), the Guangdong High-level Hospital Construction Fund (Shenzhen Children’s Hospital, ynkt2021-zz11), the Pearl River Talent Recruitment Program of Guangdong Province (2019QN01Y986), the Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province (2020B1212060051), the National Natural Science Foundation of China (62271474, 6161871373, 81729003, and 81901736), the Strategic Priority Research Program of Chinese Academy of Sciences (XDB25000000 and XDC07040000), the Science and Technology Plan Program of Guangzhou (202007030002), the Key Field R&D Program of Guangdong Province (2018B030335001), and the Shenzhen Science and Technology Program (JCYJ20210324115810030, KQTD20180413181834876, and JCYJ20220530160005012).

Institutional Review Board Statement

This study was approved by the Ethics Committee of Shenzhen Children’s Hospital (No.2019005).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study. Written informed consent was obtained from the patient(s) to publish this paper.

Data Availability Statement

All data are from Shenzhen Children’s Hospital and are unavailable due to privacy and ethical restrictions.

Acknowledgments

We thank the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences for providing experimental equipment.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Chu-Shore, C.J.; Major, P.; Camposano, S.; Muzykewicz, D.; Thiele, E.A. The natural history of epilepsy in tuberous sclerosis complex. Epilepsia 2009, 51, 1236–1241. [Google Scholar] [CrossRef] [PubMed]
  2. Liu, A.J.; Lusk, J.B.; Ervin, J.; Burke, J.; O’Brien, R.; Wang, S.H.J. Tuberous sclerosis complex is a novel, amyloid-independent tauopathy associated with elevated phosphorylated 3R/4R tau aggregation. Acta Neuropathol. Commun. 2022, 10, 27. [Google Scholar] [CrossRef] [PubMed]
  3. Henske, E.P.; Jóźwiak, S.; Kingswood, J.C.; Sampson, J.R.; Thiele, E.A. Tuberous sclerosis complex. Nat. Rev. Dis. Primers 2016, 2, 16035. [Google Scholar] [CrossRef] [PubMed]
  4. Sato, A.; Tominaga, K.; Iwatani, Y.; Kato, Y.; Wataya-Kaneda, M.; Makita, K.; Nemoto, K.; Taniike, M.; Kagitani-Shimono, K. Abnormal White Matter Microstructure in the Limbic System Is Associated with Tuberous Sclerosis Complex-Associated Neuropsychiatric Disorders. Front. Neurol. 2022, 13, 782479. [Google Scholar] [CrossRef] [PubMed]
  5. Liu, L.; Yu, C.; Yan, G. Identification of a novel heterozygous TSC2 splicing variant in a patient with Tuberous sclerosis complex: A case report. Medicine 2022, 101, e28666. [Google Scholar] [CrossRef]
  6. Miszewska, D.; Sugalska, M.; Jóźwiak, S. Risk Factors Associated with Refractory Epilepsy in Patients with Tuberous Sclerosis Complex: A Systematic Review. J. Clin. Med. 2021, 10, 5495. [Google Scholar] [CrossRef]
  7. Okanishi, T.; Akiyama, T.; Tanaka, S.-I.; Mayo, E.; Mitsutake, A.; Boelman, C.; Go, C.; Snead, O.C.; Drake, J.; Rutka, J.; et al. Interictal high frequency oscillations correlating with seizure outcome in patients with widespread epileptic networks in tuberous sclerosis complex. Epilepsia 2014, 55, 1602–1610. [Google Scholar] [CrossRef]
  8. Zhang, K.; Hu, W.-H.; Zhang, C.; Meng, F.-G.; Chen, N.; Zhang, J.-G. Predictors of seizure freedom after surgical management of tuberous sclerosis complex: A systematic review and meta-analysis. Epilepsy Res. 2013, 105, 377–383. [Google Scholar] [CrossRef]
  9. Yang, J.; Zhao, C.; Su, S.; Liang, D.; Hu, Z.; Wang, H.; Liao, J. Machine Learning in Epilepsy Drug Treatment Outcome Prediction Using Multi-modality Data in Children with Tuberous Sclerosis Complex. In Proceedings of the 2020 6th International Conference on Big Data and Information Analytics (BigDIA), Shenzhen, China, 4–6 December 2020; IEEE: Manhattan, NY, USA, 2020. [Google Scholar] [CrossRef]
  10. De Ridder, J.; Verhelle, B.; Vervisch, J.; Lemmens, K.; Kotulska, K.; Moavero, R.; Curatolo, P.; Weschke, B.; Riney, K.; Feucht, M.; et al. Early epileptiform EEG activity in infants with tuberous sclerosis complex predicts epilepsy and neurodevelopmental outcomes. Epilepsia 2021, 62, 1208–1219. [Google Scholar] [CrossRef]
  11. Russo, C.; Nastro, A.; Cicala, D.; De Liso, M.; Covelli, E.M.; Cinalli, G. Neuroimaging in tuberous sclerosis complex. Childs Nerv. Syst. 2020, 36, 2497–2509. [Google Scholar] [CrossRef]
  12. Wiggermann, V.; Hernandez-Torres, E.; Traboulsee, A.; Li DK, B.; Rauscher, A. FLAIR2: A Combination of FLAIR and T2 for Improved MS Lesion Detection. Am. J. Neuroradiol. 2016, 37, 259–265. [Google Scholar] [CrossRef]
  13. Gabr, R.E.; Hasan, K.M.; Haque, M.E.; Nelson, F.M.; Wolinsky, J.S.; Narayana, P.A. Optimal combination of FLAIR and T2-weighted MRI for improved lesion contrast in multiple sclerosis. J. Magn. Reson. Imaging 2016, 44, 1293–1300. [Google Scholar] [CrossRef] [PubMed]
  14. Lyu, Q.; Shan, H.; Steber, C.; Helis, C.; Whitlow, C.; Chan, M.; Wang, G. Multi-Contrast Super-Resolution MRI Through a Progressive Network. IEEE Trans. Med. Imaging 2020, 39, 2738–2749. [Google Scholar] [CrossRef] [PubMed]
  15. Cetinoglu, Y.K.; Koska, I.O.; Uluc, M.E.; Gelal, M.F. Detection and vascular territorial classification of stroke on diffusion-weighted MRI by deep learning. Eur. J. Radiol. 2021, 145, 110050. [Google Scholar] [CrossRef] [PubMed]
  16. Srikrishna, M.; Pereira, J.B.; Heckemann, R.A.; Volpe, G.; van Westen, D.; Zettergren, A.; Kern, S.; Wahlund, L.-O.; Westman, E.; Skoog, I.; et al. Deep learning from MRI-derived labels enables automatic brain tissue classification on human brain CT. Neuroimage 2021, 244, 118606. [Google Scholar] [CrossRef]
  17. Park, D.K.; Kim, W.; Thornburg, O.S.; McBrian, D.K.; McKhann, G.M.; Feldstein, N.A.; Maddocks, A.B.; Gonzalez, E.; Shen, M.Y.; Akman, C.; et al. Convolutional neural network-aided tuber segmentation in tuberous sclerosis complex patients correlates with electroencephalogram. Epilepsia 2022, 63, 1530–1541. [Google Scholar] [CrossRef]
  18. Sánchez Fernández, I.; Yang, E.; Calvachi, P.; Amengual-Gual, M.; Wu, J.Y.; Krueger, D.; Northrup, H.; Bebin, M.E.; Sahin, M.; Yu, K.-H.; et al. Deep learning in rare disease. Detection of tubers in tuberous sclerosis complex. PLoS ONE 2020, 15, e0232376. [Google Scholar] [CrossRef]
  19. Cole, J.H.; Poudel, R.P.; Tsagkrasoulis, D.; Caan, M.W.; Steves, C.; Spector, T.D.; Montana, G. Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker. Neuroimage 2017, 163, 115–124. [Google Scholar] [CrossRef]
  20. Moffa, A.P.; Grilli, G.; Perfetto, F.; Specchiulli, L.P.; Vinci, R.; Macarini, L.; Zizzo, L. Neuroimaging features of tuberous sclerosis complex and Chiari type I malformation: A rare association. J. Pediatr. Neurosci. 2018, 13, 224–228. [Google Scholar] [CrossRef]
  21. Liang, G.; Xing, X.; Liu, L.; Zhang, Y.; Ying, Q.; Lin, A.L.; Jacobs, N. Alzheimer’s Disease Classification Using 2D Convolutional Neural Networks. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 1–5 November 2021; IEEE: Manhattan, NY, USA, 2021; pp. 3008–3012. [Google Scholar]
  22. Jonsson, B.A.; Bjornsdottir, G.; Thorgeirsson, T.E.; Ellingsen, L.M.; Walters, G.B.; Gudbjartsson, D.F.; Stefansson, H.; Ulfarsson, M.O. Brain age prediction using deep learning uncovers associated sequence variants. Nat. Commun. 2019, 10, 1–10. [Google Scholar] [CrossRef]
  23. Eweje, F.R.; Bao, B.; Wu, J.; Dalal, D.; Liao, W.H.; He, Y.; Luo, Y.; Lu, S.; Zhang, P.; Peng, X.; et al. Deep Learning for Classification of Bone Lesions on Routine MRI. EBioMedicine 2021, 68, 103402. [Google Scholar] [CrossRef]
  24. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  25. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  26. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  27. Isensee, F.; Schell, M.; Pflueger, I.; Brugnara, G.; Bonekamp, D.; Neuberger, U.; Wick, A.; Schlemmer, H.-P.; Heiland, S.; Wick, W.; et al. Automated brain extraction of multisequence MRI using artificial neural networks. Hum. Brain Mapp. 2019, 40, 4952–4964. [Google Scholar] [CrossRef] [PubMed]
  28. Kim, H.; Lee, Y.; Kim, Y.H.; Lim, Y.M.; Lee, J.S.; Woo, J.; Jang, S.K.; Oh, Y.J.; Kim, H.W.; Lee, E.J.; et al. Deep Learning-Based Method to Differentiate Neuromyelitis Optica Spectrum Disorder from Multiple Sclerosis. Front. Neurol. 2020, 11, 599042. [Google Scholar] [CrossRef] [PubMed]
  29. Peng, H.; Gong, W.; Beckmann, C.F.; Vedaldi, A.; Smith, S.M. Accurate brain age prediction with lightweight deep neural networks. Med. Image Anal. 2020, 68, 101871. [Google Scholar] [CrossRef] [PubMed]
  30. De Luna, A.; Marcia, R.F. Data-Limited Deep Learning Methods for Mild Cognitive Impairment Classification in Alzheimer’s Disease Patients. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 1–5 November 2021; pp. 2641–2646. [Google Scholar]
  31. Jian, J.; Li, Y.A.; Xia, W.; He, Z.; Zhang, R.; Li, H.; Zhao, X.; Zhao, S.; Zhang, J.; Cai, S.; et al. MRI-Based Multiple Instance Convolutional Neural Network for Increased Accuracy in the Differentiation of Borderline and Malignant Epithelial Ovarian Tumors. J. Magn. Reson. Imaging 2021, 56, 173–181. [Google Scholar] [CrossRef]
  32. Banerjee, S.; Dong, M.; Lee, M.-H.; O’Hara, N.; Juhasz, C.; Asano, E.; Jeong, J.-W. Deep Relational Reasoning for the Prediction of Language Impairment and Postoperative Seizure Outcome Using Preoperative DWI Connectome Data of Children with Focal Epilepsy. IEEE Trans. Med. Imaging 2020, 40, 793–804. [Google Scholar] [CrossRef]
  33. Thakur, S.P.; Doshi, J.; Pati, S.; Ha, S.M.; Sako, C.; Talbar, S.; Kulkarni, U.; Davatzikos, C.; Erus, G.; Bakas, S. Skull-Stripping of Glioblastoma MRI Scans Using 3D Deep Learning. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; Springer: Cham, Switzerland, 2020; Volume 11992, pp. 57–68. [Google Scholar]
  34. Fischmeister, F.P.; Höllinger, I.; Klinger, N.; Geissler, A.; Wurnig, M.C.; Matt, E.; Rath, J.; Robinson, S.D.; Trattnig, S.; Beisteiner, R. The benefits of skull stripping in the normalization of clinical fMRI data. NeuroImage Clin. 2013, 3, 369–380. [Google Scholar] [CrossRef]
  35. Kleesiek, J.; Urban, G.; Hubert, A.; Schwarz, D.; Maier-Hein, K.; Bendszus, M.; Biller, A. Deep MRI brain extraction: A 3D convolutional neural network for skull stripping. Neuroimage 2016, 129, 460–469. [Google Scholar] [CrossRef]
  36. Fatima, A.; Shahid, A.R.; Raza, B.; Madni, T.M.; Janjua, U.I. State-of-the-Art Traditional to the Machine- and Deep-Learning-Based Skull Stripping Techniques, Models, and Algorithms. J. Digit. Imaging 2020, 33, 1443–1464. [Google Scholar] [CrossRef]
  37. Jiang, D.; Hu, Z.; Zhao, C.; Zhao, X.; Yang, J.; Zhu, Y.; Liang, D.; Wang, H. Identification of Children’s Tuberous Sclerosis Complex with Multiple-contrast MRI and 3D Convolutional Network. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, UK, 11–15 July 2022; pp. 2924–2927. [Google Scholar]
  38. Zheng, Y.; Jiang, Z.; Zhang, H.; Xie, F.; Hu, D.; Sun, S.; Shi, J.; Xue, C. Stain Standardization Capsule for Application-Driven Histopathological Image Normalization. IEEE J. Biomed. Health Inform. 2021, 25, 337–347. [Google Scholar] [CrossRef]
  39. Isaksson, L.J.; Raimondi, S.; Botta, F.; Pepa, M.; Gugliandolo, S.G.; De Angelis, S.P.; Marvaso, G.; Petralia, G.; DE Cobelli, O.; Gandini, S.; et al. Effects of MRI image normalization techniques in prostate cancer radiomics. Phys. Medica 2020, 71, 7–13. [Google Scholar] [CrossRef] [PubMed]
  40. Curatolo, P. Intractable epilepsy in tuberous sclerosis: Is the tuber removal not enough? Dev. Med. Child Neurol. 2010, 52, 987. [Google Scholar] [CrossRef]
  41. Davis, P.E.; Filip-Dhima, R.; Sideridis, G.; Peters, J.M.; Au, K.S.; Northrup, H.; Bebin, E.M.; Wu, J.Y.; Krueger, D.; Sahin, M.; et al. Presentation and Diagnosis of Tuberous Sclerosis Complex in Infants. Pediatrics 2017, 140, e20164040. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Overall network structure: (a) single-modality model pipeline; (b) schematic of the proposed DWF-net pipeline. The two dotted lines represent the optimal combination of T2W and FLAIR to generate FLAIR3.
Figure 2. Study exclusion and inclusion criteria of the pediatric dataset.
Figure 3. Flowchart of the data preprocessing.
Figure 4. Representative MRI from a TSC child and a healthy child, including T2W, FLAIR, and the proposed FLAIR3 (the red arrow highlights the TSC lesion).
Figure 5. The performance of DWF-net with different weights. The feature extractor in (a) is 3D-EfficientNet and in (b) is 3D-ResNet. The horizontal axis represents the weights W1, W2, and W3, and the vertical axis represents the AUC.
Figure 6. (a–c) ROC curves for all models of the testing cohort. (d) Classification performance for all models of the testing cohort. The horizontal axis shows the model name, while the vertical axis shows the performance in terms of AUC, ACC, SEN, and SPE.
Figure 7. The classification performance without normalization, with Z-score normalization, and with min–max normalization on FLAIR and T2W images. (a) 3D-EfficientNet with FLAIR input; (b) 3D-ResNet with FLAIR input; (c) 3D-EfficientNet with T2W input; (d) 3D-ResNet with T2W input.
Table 1. Detailed information on the ten network structures.

Model Name    | Input Modality       | Method
Eff_FLAIR     | FLAIR only           | 3D-EfficientNet
Eff_T2W       | T2W only             | 3D-EfficientNet
Eff_FLAIR3    | FLAIR3 only          | 3D-EfficientNet
Eff_FLAIR_T2W | FLAIR + T2W          | DWF_net
Eff_DWF_net   | FLAIR + T2W + FLAIR3 | DWF_net
Res_FLAIR     | FLAIR only           | 3D-ResNet34
Res_T2W       | T2W only             | 3D-ResNet34
Res_FLAIR3    | FLAIR3 only          | 3D-ResNet34
Res_FLAIR_T2W | FLAIR + T2W          | DWF_net
Res_DWF_net   | FLAIR + T2W + FLAIR3 | DWF_net
Table 2. The main clinical characteristics of all 680 child subjects.

                                   | TSC         | HC          | p-Value
Number                             | 349         | 331         | -
Male, number (%)                   | 188 (53.9%) | 183 (55.3%) | 0.711
Age at imaging, mean ± SD (months) | 45.5 ± 46.6 | 73.3 ± 49.2 | <0.001
Table 3. Detailed performance of different models on the pediatric testing dataset.

Input Modality                                      | Model Name       | AUC   | ACC   | SEN   | SPE
FLAIR + T2W                                         | InceptionV3 [18] | 0.933 | 0.851 | 0.812 | 0.893
FLAIR only                                          | Eff_FLAIR        | 0.974 | 0.911 | 0.869 | 0.954
T2W only                                            | Eff_T2W          | 0.971 | 0.919 | 0.869 | 0.970
FLAIR3                                              | Eff_FLAIR3       | 0.987 | 0.926 | 0.884 | 0.970
FLAIR + T2W                                         | Eff_FLAIR_T2W    | 0.974 | 0.933 | 0.928 | 0.939
FLAIR + T2W + FLAIR3 (W1 = 0.0, W2 = 0.3, W3 = 0.7) | Eff_DWF_net      | 0.989 | 0.963 | 0.942 | 0.985
FLAIR only                                          | Res_FLAIR        | 0.994 | 0.970 | 0.986 | 0.955
T2W only                                            | Res_T2W          | 0.983 | 0.956 | 0.913 | 0.999
FLAIR3                                              | Res_FLAIR3       | 0.997 | 0.978 | 0.957 | 0.999
FLAIR + T2W                                         | Res_FLAIR_T2W    | 0.994 | 0.970 | 0.942 | 0.999
FLAIR + T2W + FLAIR3 (W1 = 0.2, W2 = 0.3, W3 = 0.5) | Res_DWF_net      | 0.998 | 0.985 | 0.971 | 0.999
Table 4. The results with/without skull stripping in T2W and FLAIR.

Modality   | Model Name      | Preprocessing           | AUC   | ACC   | SEN   | SPE
FLAIR only | 3D-EfficientNet | Without skull stripping | 0.898 | 0.829 | 0.754 | 0.909
FLAIR only | 3D-EfficientNet | Skull stripping         | 0.974 | 0.911 | 0.869 | 0.954
FLAIR only | 3D-ResNet       | Without skull stripping | 0.959 | 0.881 | 0.855 | 0.909
FLAIR only | 3D-ResNet       | Skull stripping         | 0.994 | 0.970 | 0.986 | 0.955
T2W only   | 3D-EfficientNet | Without skull stripping | 0.968 | 0.916 | 0.881 | 0.951
T2W only   | 3D-EfficientNet | Skull stripping         | 0.971 | 0.919 | 0.869 | 0.970
T2W only   | 3D-ResNet       | Without skull stripping | 0.914 | 0.829 | 0.797 | 0.863
T2W only   | 3D-ResNet       | Skull stripping         | 0.983 | 0.956 | 0.913 | 0.999
Table 5. The classification performance of different normalization methods in FLAIR images and T2W images.

Modality   | Model Name      | Preprocessing         | AUC   | ACC   | SEN   | SPE
FLAIR only | 3D-EfficientNet | Without normalization | 0.951 | 0.899 | 0.863 | 0.936
FLAIR only | 3D-EfficientNet | Z-score               | 0.965 | 0.867 | 0.754 | 0.984
FLAIR only | 3D-EfficientNet | Min–max               | 0.974 | 0.911 | 0.869 | 0.954
FLAIR only | 3D-ResNet       | Without normalization | 0.985 | 0.933 | 0.971 | 0.893
FLAIR only | 3D-ResNet       | Z-score               | 0.914 | 0.867 | 0.797 | 0.933
FLAIR only | 3D-ResNet       | Min–max               | 0.994 | 0.970 | 0.986 | 0.955
T2W only   | 3D-EfficientNet | Without normalization | 0.950 | 0.911 | 0.884 | 0.939
T2W only   | 3D-EfficientNet | Z-score               | 0.967 | 0.933 | 0.898 | 0.969
T2W only   | 3D-EfficientNet | Min–max               | 0.971 | 0.919 | 0.869 | 0.970
T2W only   | 3D-ResNet       | Without normalization | 0.974 | 0.918 | 0.927 | 0.909
T2W only   | 3D-ResNet       | Z-score               | 0.982 | 0.918 | 0.884 | 0.954
T2W only   | 3D-ResNet       | Min–max               | 0.983 | 0.956 | 0.913 | 0.999
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
