Medicina
  • Article
  • Open Access

6 January 2023

A Novel Generative Adversarial Network-Based Approach for Automated Brain Tumour Segmentation

1 School of Computer Science, University of Petroleum and Energy Studies (UPES), Dehradun 248007, India
2 Persistent Systems, India 411057, India
3 School of Business and Management, CHRIST University, Bangalore 560074, India
* Authors to whom correspondence should be addressed.
This article belongs to the Section Oncology

Abstract

Background: Medical image segmentation is more complicated and demanding than ordinary image segmentation because of the density and complexity of medical images. Brain tumours are among the most common causes of high mortality. Objectives: Extraction of tumorous cells is particularly difficult because of the differences between tumorous and non-tumorous cells. In ordinary convolutional neural networks, local background information is restricted; as a result, previous deep learning algorithms in medical imaging have struggled to detect anomalies across diverse cells. Methods: As a solution to this challenge, a deep convolutional generative adversarial network for tumour segmentation from brain Magnetic Resonance Imaging (MRI) images is proposed. The proposed model consists of two networks, a generator and a discriminator, and addresses tumour localisation, noise-related issues, and class imbalance. Results: The Dice Score Coefficient (DSC), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM) are, on average, 0.894, 62.084 dB, and 0.88912, respectively. The model’s accuracy has improved to 97 percent, and its loss has reduced to 0.012. Conclusions: Experiments reveal that the proposed approach can successfully segment tumorous and benign tissues. As a result, a novel brain tumour segmentation approach has been created.

1. Introduction

A malignant tumour is an extremely harmful health risk that can be fatal. To reduce the community’s fatality rate, early detection, diagnosis, and treatment are essential. Thanks to advancements in medical imaging technology, cancerous cells can now be found using a range of imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT) scans, positron emission tomography (PET), and X-rays. High resolution, a high signal-to-noise ratio, and the ability to image soft tissues are all advantages that MRI has over other imaging modalities [1]. Compared to MRI images, CT scans offer poorer contrast in soft tissues. For these reasons, MRI is the method most frequently used for segmenting and diagnosing brain tumours. Compared to CT images, MRI scans show a noticeable contrast between tumorous and non-tumorous cells. For the purpose of analysing anomalies in brain images, brain MRI is broken down into different parts, such as white matter, cerebrospinal fluid, grey matter, and various lesions/tumours. Brain MRI imaging uses the modalities of spin-lattice relaxation (T1-weighted), spin-spin relaxation (T2-weighted), and fluid-attenuated inversion recovery (FLAIR). Each tissue has a unique signal because of the variations in these modalities [2]. Due to the high contrast, tumours can be easily separated from normal tissue in MRI scans. Radiologists can use brain MR scans to detect various lesions and cancers, which helps with medication recommendations. Segmenting medical images can be challenging because of the reliability issues with many sensory modalities. The manual segmentation of tumour cells is a labour-intensive and time-consuming technique. In addition, certain artefacts, such as motion artefacts, have an impact on image segmentation. The partial volume effect (PVE) is caused when healthy tissues overlap in terms of intensity, and PVE-like characteristics can be seen in tumorous tissues [3].
Medical images are affected by noise from accessories and auxiliary devices. For the purpose of making a diagnosis, segmentation is crucial for extracting information from an image, and effective, precise separation of the tumorous zone is essential for brain MRI segmentation. Brain tumours are therefore delineated using automated segmentation methods. In a real-time situation, automatic segmentation helps radiologists make more rapid and accurate diagnoses of cancers. Deep belief networks, restricted Boltzmann machines, stacked auto-encoder networks, and deep convolutional neural networks are examples of automated segmentation methods. Convolutional neural networks, the most widely used segmentation method in biomedical image processing, allow for more accurate segmentation and identification in brain MRI. Existing work suggests that algorithm efficiency in the classification and segmentation of tumorous and non-tumorous cells should be enhanced. Despite their effectiveness, deep convolutional neural networks have several limitations in terms of what they can do, and existing computer-assisted diagnosis methods are unreliable when the trained model is inaccurate. According to the literature assessment, there are a number of issues with current technology.
Classification and segmentation tasks are not aligned with one another, and models for segmentation and classification must be kept distinct. Earlier models could not distinguish lesions, cancers, and healthy cells with any accuracy, and tiny lesions are frequently segmented as tumorous cells. Earlier models also suffer from the class imbalance between tumorous and non-tumorous cells, and they are rigid when it comes to adjusting layer sizes for different input sizes across different datasets.
Brain tumours can be either benign or malignant. Unlike malignant tumours, which are cancerous, benign tumours can be treated because they are not cancerous. If a cure is not found, especially for malignant tumours, the patient may die. Because of this, early tumour prediction and detection can help to lower the death rate. With automated artificial intelligence approaches, cancers can be found at a very early stage in any image modality. For this purpose, real-time segmentation of brain tumours from scans of different modalities is required. GANs have proven to attain high efficiency for brain tumour segmentation in real-time scenarios.
Convolutional neural networks (CNNs) and generative adversarial networks (GANs), two deep learning approaches, are mostly used for automated brain tumour segmentation. In contrast to CNNs, GANs are hybrid deep learning models that can make decisions based on a variety of inputs. Unlike CNNs, which need huge labelled datasets for their training, GANs use unsupervised learning and do not need such large datasets. Providing fewer labelled datasets to GANs during training can shorten training time while simultaneously improving the accuracy and efficiency of the network. The advantages of GANs over CNNs are discussed in this work, along with an overview of several GAN-based designs.
Generative adversarial networks are used to enhance the accuracy of currently used computer-assisted technologies. Through unsupervised learning, generative adversarial networks effectively capture the underlying data distribution from a collection of supplied samples [4]. This task becomes more difficult when working with high-dimensional data such as images and text. To achieve this, we employ generative adversarial networks [5], which offer a mapping from the latent space to the high-dimensional data. GANs’ capacity to extract information from all types of image data has led to more promising findings in the segmentation of MRI data, such as adversarial domain adaptation between CT and MRI images for segmentation [6].
Analysing MRI data can be facilitated by computer-aided diagnostics (CAD). Interest in creating CAD methods based on deep learning and artificial intelligence has significantly increased recently. Deep learning techniques, however, require training with large amounts of medical imaging data. Generative adversarial networks (GANs) are capable of creating fresh data samples and accurately simulating the distribution of the actual data. GANs are a specific kind of deep learning model that combines generator and discriminator neural networks: the generator creates fresh examples, while the discriminator aims to classify images as real or artificial. This adversarial training significantly improves the overall training of the model. In addition to being used for applications such as super-resolution, segmentation, and diagnosis, GAN-based methods have also been employed to generate synthetic data in the field of medical imaging.
The major contributions of this research work are:
  • In order to significantly improve tumour localisation and tumour segmentation, a real-time generative adversarial network is proposed. The generator produces the segmented tumour output, which is compared with the ground-truth tumour mask in the discriminator.
  • The model has attained a comparatively high accuracy in segmenting high-resolution images of brain tumour.
  • The exact tumour areas have been clearly marked by the model.
The structure of this paper is as follows: Section 2 covers the history of automated brain tumour segmentation techniques and the application of GAN networks. Additionally, it discusses related research examining the efficacy, dice score coefficient, and other metrics of GAN-based tumour segmentation techniques. The proposed technique and the associated algorithms are described in Section 3. In Section 4, values obtained from various performance metrics are discussed together with the quantitative and qualitative outcomes of the proposed automated brain tumour segmentation technique. Section 5 presents the conclusion and future work for RTGAN.

3. Research Methodology

Medical image segmentation may now be performed precisely in real time because of recent developments in generative adversarial networks. Due to their quick and effective learning capabilities, GANs have become more and more popular.
Deep convolutional GANs based on transfer learning are effective at semantic segmentation of brain tumours. For semantic segmentation of medical images, GANs are appealing due to their learning process and lower heuristic cost [30]. The Vox2Vox model [16] served as inspiration for the proposed concept.
As part of the proposed RTGAN, GANs are trained to segment brain MRI images and are then used as feature extractors for supervised tasks through the discriminator and generator network segments. The DCGAN comprises both a generator network and a discriminator network. The brain MRI is fed to the generator, which produces a segmented tumour image; this output, together with the actual (ground-truth) segmented tumour image, is processed by the discriminator, which then forecasts the labels for the generated output and the true output. The generator’s precise configuration is as follows (a code sketch is given after the list):
  • One 3D image with 4 different modalities: T1, T1gd, T2, and FLAIR.
  • Four 3D convolutional down-sampling blocks with kernel size 4, same padding, stride 2, and LeakyReLU activation function. The initial filter count is 64, which is doubled after every convolutional block.
  • Four 3D convolutional residual blocks with kernel size, padding, and activation function the same as above, and stride 1.
  • Three 3D deconvolutional up-sampling blocks with kernel size = 4, stride = 2, and ReLU activation function.
  • One 3D deconvolutional layer with four filters (background, edema (ED), core tumour (NET), and active tumour (ET), labelled 0, 1, 2, and 3, respectively).
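To make the listed configuration concrete, the following is a minimal Keras sketch of such a generator. It follows the block descriptions above; the filter counts of the up-sampling blocks, the dropout rate, and the stride of the final transposed convolution are assumptions inferred from the algorithm section, not a reproduction of the authors’ code.

import tensorflow as tf
from tensorflow.keras import layers, Model

def down_block(x, filters):
    # 3D convolutional down-sampling block: kernel 4, stride 2, same padding, LeakyReLU
    x = layers.Conv3D(filters, kernel_size=4, strides=2, padding="same")(x)
    return layers.LeakyReLU(0.2)(x)

def residual_block(x, filters):
    # 3D residual block: kernel 4, stride 1, same padding, LeakyReLU, dropout 0.2
    y = layers.Conv3D(filters, kernel_size=4, strides=1, padding="same")(x)
    y = layers.LeakyReLU(0.2)(y)
    y = layers.Dropout(0.2)(y)
    return layers.Add()([x, y])

def up_block(x, filters):
    # 3D deconvolutional up-sampling block: kernel 4, stride 2, ReLU
    x = layers.Conv3DTranspose(filters, kernel_size=4, strides=2, padding="same")(x)
    return layers.ReLU()(x)

def build_generator(input_shape=(128, 128, 128, 4)):
    inp = layers.Input(shape=input_shape)           # one 3D image with T1, T1gd, T2, FLAIR channels
    x = inp
    for f in (64, 128, 256, 512):                   # four down-sampling blocks, filters doubled each time
        x = down_block(x, f)
    for _ in range(4):                              # four residual blocks
        x = residual_block(x, 512)
    for f in (512, 256, 128):                       # three up-sampling blocks (assumed filter counts)
        x = up_block(x, f)
    # final 3D deconvolutional layer: 4 channels (background, ED, NET, ET) with softmax
    out = layers.Conv3DTranspose(4, kernel_size=4, strides=2, padding="same",
                                 activation="softmax")(x)
    return Model(inp, out, name="generator")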
The detailed configuration of the discriminator network is as follows (a sketch is given after the list):
  • The 3D image generated by the generator network and the segmentation ground truth.
  • Four 3D convolutional down-sampling blocks with the same configuration as in the generator.
  • One 3D convolutional layer with kernel size = 4, filters = 1, stride = 1, and same padding.
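A corresponding sketch of the discriminator, reusing the down_block helper and imports from the generator sketch, is given below. Conditioning the discriminator on the input MRI, concatenated with either the real or the generated segmentation, is an assumption based on the D(x, y) notation used in the losses later; the filter counts again mirror the generator.

def build_discriminator(mri_shape=(128, 128, 128, 4), seg_shape=(128, 128, 128, 4)):
    mri = layers.Input(shape=mri_shape)    # input MRI x
    seg = layers.Input(shape=seg_shape)    # ground-truth or generated segmentation
    x = layers.Concatenate()([mri, seg])
    for f in (64, 128, 256, 512):          # four down-sampling blocks, same configuration as the generator
        x = down_block(x, f)
    # one 3D convolutional layer: kernel 4, 1 filter, stride 1, same padding, sigmoid output
    out = layers.Conv3D(1, kernel_size=4, strides=1, padding="same",
                        activation="sigmoid")(x)
    return Model([mri, seg], out, name="discriminator")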

4. Algorithm

1. Start.
2. Reshape the image I: 512 × 512 × 512 → 128 × 128 × 128.
3. Generator:
  • I is fed to four 3D convolutional down-sampling blocks → I′ (16 × 16 × 16 × 256).
  • I′ is fed to the residual blocks with dropout = 0.2 → I′′ (8 × 8 × 8 × 512).
  • I′′ is fed to three up-sampling blocks, generating I1 (64 × 64 × 64 × 128).
  • I1 is fed to the 3D deconvolutional layer with a softmax function, generating the segmented image I2 (128 × 128 × 128 × 4).
4. Discriminator:
  • I2 + the ground-truth segmented image → four 3D convolutional down-sampling blocks → I2′ (8 × 8 × 8 × 512).
  • I2′ → one 3D convolutional layer → I3′.
  • I3′ → sigmoid activation function → final segmented output I3 (8 × 8 × 8 × 1).
5. End.
Dataset and Preprocessing: The BraTS dataset, a 3D brain MRI dataset covering 98 patients and including whole tumours, core tumours, and active tumours, was employed for training. Each patient’s dataset folder includes 73 T1, T2, and FLAIR MRI scans [31,32,33]. The first step in preparing medical images is noise removal [34]. During preprocessing, each MRI image is subjected to intensity normalisation, and patch augmentation is used to reduce the model’s memory usage.
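As an illustration of this preprocessing step, the sketch below applies per-volume intensity normalisation and extracts a random sub-volume shared between the image and its mask. The z-score normalisation over non-zero (brain) voxels and the random cropping are assumptions; the paper does not state the exact scheme used.

import numpy as np

def normalise_intensity(volume):
    # z-score normalisation computed over non-background (non-zero) voxels only
    brain = volume[volume > 0]
    return (volume - brain.mean()) / (brain.std() + 1e-8)

def random_patch(volume, mask, size=128, rng=None):
    # random size^3 sub-volume, cropped identically from the image and its mask
    rng = rng or np.random.default_rng()
    starts = [rng.integers(0, max(d - size, 0) + 1) for d in volume.shape[:3]]
    sl = tuple(slice(s, s + size) for s in starts)
    return volume[sl], mask[sl]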

5. Experiments and Discussions

Losses: Both the generator loss G_L and the discriminator loss D_L of the proposed GAN-based model are evaluated. The generator loss is the sum of the L1 error between the discriminator output for the generated segmentation, D(x, ŷ), and a tensor of ones, and the generalised dice loss GDL between the ground truth y and the generator output ŷ, weighted by a scalar coefficient α ≥ 0:
G_L = L1(D(x, ŷ), 1) + α · GDL(y, ŷ)
The discriminator loss is the sum of the L1 error between the discriminator output for the original image with the ground truth, D(x, y), and a tensor of ones, and the L1 error between the discriminator output for the original image with the segmented prediction from the generator, D(x, ŷ), and a tensor of zeros:
D_L = L1(D(x, y), 1) + L1(D(x, ŷ), 0)
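A minimal TensorFlow sketch of the two losses is given below. The exact form of the multi-class dice term (used here in place of the full generalised dice loss) and the value of the weighting coefficient α are assumptions; the equations above only fix the overall structure.

import tensorflow as tf

l1 = tf.keras.losses.MeanAbsoluteError()          # L1 distance used in both losses

def dice_loss(y_true, y_pred, eps=1e-6):
    # soft multi-class dice loss over the four output channels
    inter = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
    union = tf.reduce_sum(y_true + y_pred, axis=[1, 2, 3])
    return 1.0 - tf.reduce_mean((2.0 * inter + eps) / (union + eps))

def generator_loss(d_fake, y_true, y_pred, alpha=5.0):
    # G_L = L1(D(x, y_hat), 1) + alpha * GDL(y, y_hat)
    return l1(tf.ones_like(d_fake), d_fake) + alpha * dice_loss(y_true, y_pred)

def discriminator_loss(d_real, d_fake):
    # D_L = L1(D(x, y), 1) + L1(D(x, y_hat), 0)
    return l1(tf.ones_like(d_real), d_real) + l1(tf.zeros_like(d_fake), d_fake)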
Training Details: TensorFlow 2.1, the Keras library, and Python 3.7 are used to train the proposed RTGAN model. The model is trained and validated on sub-volumes of size 128 × 128 × 128 from 98 patients over 100 epochs with batch size 4 on a machine with all necessary libraries installed.
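Putting the pieces together, one adversarial training step could look like the sketch below, assuming the generator, discriminator, and loss functions from the earlier sketches. The Adam learning rate is an assumption; the paper only specifies the sub-volume size, batch size, and number of epochs.

generator = build_generator()
discriminator = build_discriminator()
g_opt = tf.keras.optimizers.Adam(2e-4)
d_opt = tf.keras.optimizers.Adam(2e-4)

@tf.function
def train_step(x, y):                              # x: MRI sub-volume batch, y: ground-truth masks
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        y_hat = generator(x, training=True)
        d_real = discriminator([x, y], training=True)
        d_fake = discriminator([x, y_hat], training=True)
        g_l = generator_loss(d_fake, y, y_hat)
        d_l = discriminator_loss(d_real, d_fake)
    g_opt.apply_gradients(zip(g_tape.gradient(g_l, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_l, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    return g_l, d_l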
An overview of the discriminator GAN for several layer types, including Conv2D, leaky ReLU, flatten, dropout, and dense, is shown in Table 2, which also shows the variation in output parameters for each layer type.
Table 2. Literature Survey of GAN-based Brain tumour Segmentation Models.
The description of each layer of the discriminator GAN is elaborated in Figure 2. Figure 3 represents the architecture of the discriminator GAN, showing the various layers added to the discriminator model of the GAN, which differentiates the segmented output from the ground-truth output.
Figure 2. Discriminator summary for tumour segmentation.
Figure 3. Architecture of discriminator GAN.
Figure 4 depicts a summary of the generator GAN for several layer types, such as Dense, leaky ReLU, Reshape, Conv2DTranspose, and Conv2D, as well as the output shape and parameter variation for each layer type.
Figure 4. Generator GAN for several layers.
Figure 5 represents the architecture of the generator GAN, showing the various layers added to the generator model of the GAN, which take the tumour regions from the real datasets and pass them through all the layers of the generator to segment the tumour regions in depth. The description of each layer is elaborated in Figure 4.
Figure 5. Architecture of generator GAN.
Figure 6 depicts a combined summary for the GAN model, with a total parameter count of 7,815,876. The number of trainable parameters in this GAN model is 7,686,915, and the number of non-trainable parameters is 128,961.
Figure 6. Combined summary for the GAN model.
Figure 7 shows the parameter performance of the GAN model, representing the generator and discriminator loss at every scan of each epoch.
Figure 7. Parameter performance.

6. Results and Observations

The model has been trained for 100 epochs, and the quality of the segmented images for some epochs is shown in Figure 8. The first part of each image is the input fed to the model, the second part is the maximum segmented region, and the last part is the final segmented output from the RTGAN model.
Figure 8. Segmented Output.
The model loss is depicted in Figure 9. As seen in the graph of loss vs. epochs, the loss diminishes as the number of epochs grows. Testing is carried out across 100 epochs, reducing the loss to 0.012.
Figure 9. Model Loss.
The accuracy of the model, 97%, is displayed in Figure 10. The accuracy was calculated in the discriminator by comparing the segmented output from the generator against the ground truth. As the accuracy graphs in the figure show, accuracy rises as the number of epochs rises; when the model is put through 100 epochs of testing, the test accuracy rises accordingly.
Figure 10. Accuracy.
The quantitative findings from the GAN algorithm are presented in this section. As listed in Table 3, Image 1 has a Dice Score Coefficient (DSC) of 0.87, a Peak Signal-to-Noise Ratio (PSNR) of 57.30 dB, and a Structural Similarity Index (SSIM) of 0.9021. Image 2 has a DSC of 0.88, a PSNR of 69.01 dB, and an SSIM of 0.90110. Image 3 has a DSC of 0.93, a PSNR of 59.32 dB, and an SSIM of 0.8251. Image 4 has a DSC of 0.93, a PSNR of 61.21 dB, and an SSIM of 0.8761. Image 5 has a DSC of 0.80, a PSNR of 61.65 dB, and an SSIM of 0.9121. Image 6 has a DSC of 0.94, a PSNR of 60.23 dB, and an SSIM of 0.9561. Image 7 has a DSC of 0.90, a PSNR of 62.16 dB, and an SSIM of 0.9231.
Table 3. Results using RT-GAN algorithm.
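For reference, the sketch below shows one way the three reported metrics could be computed for a predicted segmentation against its ground truth, using scikit-image’s implementations of PSNR and SSIM; it is illustrative only and not the authors’ evaluation code.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def dice_score(pred, truth, eps=1e-8):
    # binary Dice Score Coefficient between two {0, 1} masks
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

def evaluate(pred, truth):
    pred_f, truth_f = pred.astype(float), truth.astype(float)
    rng = max(truth_f.max() - truth_f.min(), 1.0)   # data range for PSNR/SSIM
    return {
        "DSC": dice_score(pred > 0, truth > 0),
        "PSNR": peak_signal_noise_ratio(truth_f, pred_f, data_range=rng),
        "SSIM": structural_similarity(truth_f, pred_f, data_range=rng),
    }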
The last results figure compares the dice score coefficient of the proposed model with previous models. Figure 11 illustrates this comparative analysis of the proposed work with previous models.
Figure 11. Comparative Analysis of Proposed Work with Previous Models.

7. Conclusions and Future Scope

According to earlier research, GANs are superior to CNNs at segmenting medical images. In this study, a GAN-based model is proposed to segment brain tumours in real time. GANs are preferable to CNNs given their better effectiveness, fast processing speeds, ability to diagnose problems in real time, and reduced dependence on labelled datasets. GANs are hybrid deep learning models, not discriminative deep learning models such as CNNs. Unlike CNNs, GANs do not need large labelled datasets for training because they incorporate unsupervised learning. When only a few labelled datasets are used in training, the training time for GANs decreases while the network’s accuracy and effectiveness increase.
This work proposes RTGAN, which consists of a generator and a discriminator. The generator network includes Dense, leaky ReLU, Reshape, Conv2DTranspose, and Conv2D layers, with the output shape and parameter variation reported for each layer type. The discriminator comprises Conv2D, leaky ReLU, flatten, dropout, and dense layers, with the output parameter variation likewise reported for each layer type.
The effectiveness of the segmented images was assessed after the model was trained for 100 epochs. The segmentation quality of the images is good, and PSNR, SSIM, and DSC are used for quantitative analysis. RTGAN has demonstrated its ability to generate high-quality segmentation results for evaluation criteria such as the structural similarity index, dice score coefficient, and peak signal-to-noise ratio. DSC, PSNR, and SSIM are, on average, 0.894, 62.084 dB, and 0.88912, respectively. The model’s accuracy has improved to 97 percent, and its loss has reduced to 0.012. The proposed model is well suited to real-time applications due to its high precision and dice score coefficient. A drawback of this work is that it needs to be tested on a variety of additional image modalities, including those for low-grade gliomas, glioblastomas, and astrocytomas, in order to confirm the final test accuracy. The work could also be compared with other models on further accuracy-related metrics, such as precision and F1 score.

Author Contributions

R.S. wrote the paper and conceptualised the whole concept with the support of T.C.; T.C. and R.T. performed testing and validation of the concept; A.S. analysed the performance of the model in depth; P.C. and D.S. reviewed the paper in depth; R.S. completed the whole task under the supervision of T.C., P.C., and D.S.; R.S., T.C., and A.S. jointly designed the diagrams and ran the model; and T.C. and A.S. jointly managed the review responses. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Datasets are publicly available, and citations related to datasets are mentioned within the paper.

Acknowledgments

On behalf of all authors, we thank UPES, Dehradun, India for providing us the opportunity to complete this work and the environment in which to run it.

Conflicts of Interest

There are no conflicts of interest for any of the authors mentioned in the paper.

References

  1. Bahadure, N.B.; Ray, A.K.; Thethi, H.P. Image analysis for MRI based brain tumor detection and feature extraction using biologically inspired BWT and SVM. Int. J. Biomed. Imaging 2017, 2017, 9749108. [Google Scholar] [CrossRef] [PubMed]
  2. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.M.; Larochelle, H. Brain tumor segmentation with deep neural networks. Med. Image Anal. 2017, 35, 18–31. [Google Scholar] [CrossRef] [PubMed]
  3. Agravat, R.R.; Raval, M.S. Deep Learning for Automated Brain Tumor Segmentation in Mri Images. In Soft Computing Based Medical Image Analysis; Academic Press: Cambridge, MA, USA, 2018; pp. 183–201. [Google Scholar]
  4. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  5. Ahmetoğlu, A.; Alpaydın, E. Hierarchical Mixtures of Generators for Adversarial Learning. In 2020 25th International Conference on Pattern Recognition (ICPR); IEEE: Piscataway, NJ, USA, 2021; pp. 316–323. [Google Scholar]
  6. Jiang, J.; Hu, Y.C.; Tyagi, N.; Zhang, P.; Rimner, A.; Mageras, G.S.; Deasy, J.O.; Veeraraghavan, H. Tumor-Aware, Adversarial Domain Adaptation from ct to Mri for Lung Cancer Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2018; pp. 777–785. [Google Scholar]
  7. Mok, T.C.; Chung, A. Learning Data Augmentation for Brain Tumor Segmentation with Coarse-to-Fine Generative Adversarial Networks. In International MICCAI Brainlesion Workshop; Springer: Cham, Switzerland, 2018; pp. 70–80. [Google Scholar]
  8. Rezaei, M.; Yang, H.; Meinel, C. Voxel-GAN: Adversarial Framework for Learning Imbalanced Brain Tumor Segmentation. In International MICCAI Brainlesion Workshop; Springer: Cham, Switzerland, 2018; pp. 321–333. [Google Scholar]
  9. Yang, W.; Zhao, J.; Qiang, Y.; Yang, X.; Dong, Y.; Du, Q.; Shi, G.; Zia, M.B. Dscgans: Integrate Domain Knowledge in Training Dual-Path Semi-Supervised Conditional Generative Adversarial Networks and s3vm for Ultrasonography Thyroid Nodules Classification. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 558–566. [Google Scholar]
  10. Chen, C.; Dou, Q.; Chen, H.; Heng, P.A. Semantic-Aware Generative Adversarial Nets for Unsupervised Domain Adaptation in Chest X-ray Segmentation. In International Workshop on Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2018; pp. 143–151. [Google Scholar]
  11. Yu, F.; Zhao, J.; Gong, Y.; Wang, Z.; Li, Y.; Yang, F.; Dong, B.; Li, Q.; Zhang, L. Annotation-free cardiac vessel segmentation via knowledge transfer from retinal images. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 714–722. [Google Scholar]
  12. Luo, B.; Shen, J.; Cheng, S.; Wang, Y.; Pantic, M. Shape Constrained Network for Eye Segmentation in the Wild. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 2–5 March 2020; pp. 1952–1960. [Google Scholar]
  13. Wegmayr, V.; Hörold, M.; Buhmann, J.M. Generative Aging of Brain MRI for Early Prediction of MCI-AD Conversion. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 08–11 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1042–1046. [Google Scholar]
  14. Nema, S.; Dudhane, A.; Murala, S.; Naidu, S. RescueNet: An unpaired GAN for brain tumor segmentation. Biomed. Signal Process. Control. 2020, 55, 101641. [Google Scholar] [CrossRef]
  15. Sun, Y.; Zhou, C.; Fu, Y.; Xue, X. Parasitic GAN for semi-supervised brain tumor segmentation. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1535–1539. [Google Scholar]
  16. Cirillo, M.D.; Abramian, D.; Eklund, A. Vox2Vox: 3D-GAN for Brain Tumour Segmentation. In International MICCAI Brainlesion Workshop; Springer: Cham, Switzerland, 2020; pp. 274–284. [Google Scholar]
  17. Cui, S.; Wei, M.; Liu, C.; Jiang, J. GAN-segNet: A deep generative adversarial segmentation network for brain tumor semantic segmentation. Int. J. Imaging Syst. Technol. 2021, 32, 857–868. [Google Scholar] [CrossRef]
  18. Gan, X.; Wang, L.; Chen, Q.; Ge, Y.; Duan, S. GAU-Net: U-Net Based on Global Attention Mechanism for brain tumor segmentation. J. Phys. Conf. Ser. 2021, 1861, 012041. [Google Scholar] [CrossRef]
  19. Cheng, G.; Ji, H.; He, L. Correcting and reweighting false label masks in brain tumor segmentation. Med. Phys. 2021, 48, 169–177. [Google Scholar] [CrossRef] [PubMed]
  20. Xi, N. Semi-supervised Attentive Mutual-info Generative Adversarial Network for Brain Tumor Segmentation. In Proceedings of the 2019 International Conference on Image and Vision Computing New Zealand (IVCNZ), Dunedin, New Zealand, 2–4 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–7. [Google Scholar]
  21. Tokuoka, Y.; Suzuki, S.; Sugawara, Y. An Inductive Transfer Learning Approach Using Cycle-Consistent Adversarial Domain Adaptation with Application to Brain Tumor Segmentation. In Proceedings of the 2019 6th International Conference on Biomedical and Bioinformatics Engineering, Shanghai, China, 2–4 November 2019; pp. 44–48. [Google Scholar]
  22. Chen, H.; Qin, Z.; Ding, Y.; Lan, T. Brain Tumor Segmentation with Generative Adversarial Nets. In Proceedings of the 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 25–28 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 301–305. [Google Scholar]
  23. Das, J.; Patel, R.; Pankajakshan, V. Brain Tumor Segmentation Using Discriminator Loss. In Proceedings of the 2019 National Conference on Communications (NCC), Bangalore, India, 20–23 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
  24. Wu, X.; Bi, L.; Fulham, M.; Feng, D.D.; Zhou, L.; Kim, J. Unsupervised brain tumor segmentation using a symmetric-driven adversarial network. Neurocomputing 2021, 455, 242–254. [Google Scholar] [CrossRef]
  25. Hamghalam, M.; Lei, B.; Wang, T. Brain Tumor Synthetic Segmentation in 3D Multimodal MRI Scans. In International MICCAI Brainlesion Workshop; Springer: Cham, Switzerland, 2019; pp. 153–162. [Google Scholar]
  26. Alex, V.; KP, M.S.; Chennamsetty, S.S.; Krishnamurthi, G. Generative Adversarial Networks for Brain Lesion Detection. In Medical Imaging 2017: Image Processing; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; Volume 10133, p. 101330G. [Google Scholar]
  27. Pang, S.; Du, A.; Orgun, M.A.; Yu, Z.; Wang, Y.; Wang, Y.; Liu, G. CTumorGAN: A unified framework for automatic computed tomography tumor segmentation. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 2248–2268. [Google Scholar] [CrossRef] [PubMed]
  28. Li, Y.; Chen, Y.; Shi, Y. Brain tumor segmentation using 3D generative adversarial networks. Int. J. Pattern Recognit. Artif. Intell. 2021, 35, 2157002. [Google Scholar] [CrossRef]
  29. Mukherkjee, D.; Saha, P.; Kaplun, D.; Sinitca, A.; Sarkar, R. Brain tumor image generation using an aggregation of GAN models with style transfer. Sci. Rep. 2022, 12, 9141. [Google Scholar] [CrossRef] [PubMed]
  30. Nyúl, L.G.; Udupa, J.K.; Zhang, X. New variants of a method of MRI scale standardization. IEEE Trans. Med. Imaging 2000, 19, 143–150. [Google Scholar] [CrossRef] [PubMed]
  31. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024. [Google Scholar] [CrossRef] [PubMed]
  32. Bakas, S.; Akbari, H.; Sotiras, A.; Bilello, M.; Rozycki, M.; Kirby, J.S.; Freymann, J.B.; Farahani, K.; Davatzikos, C. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Nat. Sci. Data 2017, 4, 170117. [Google Scholar] [CrossRef] [PubMed]
  33. Bakas, S.; Reyes, M.; Jakab, A.; Bauer, S.; Rempfler, M.; Crimi, A.; Shinohara, R.T.; Berger, C.; Ha, S.M.; Rozycki, M.; et al. Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge. arXiv 2018, arXiv:1811.02629. [Google Scholar]
  34. Asante-Mensah, M.G.; Cichocki, A. Medical Image de-Noising Using Deep Networks. In 2018 IEEE International Conference on Data Mining Workshops (ICDMW); IEEE: Piscataway, NJ, USA, 2018; pp. 315–319. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
